WO2019144046A1 - Distributed high performance computing using distributed average consensus - Google Patents

Distributed high performance computing using distributed average consensus

Info

Publication number
WO2019144046A1
WO2019144046A1 (PCT/US2019/014351)
Authority
WO
WIPO (PCT)
Prior art keywords
distributed computing
computing device
consensus
sampled
matrix
Application number
PCT/US2019/014351
Other languages
French (fr)
Inventor
Todd Allen CHAPMAN
Ivan James RAVLICH
Christopher Taylor HANSEN
Daniel MAREN
Original Assignee
Hyperdyne, Inc.
Application filed by Hyperdyne, Inc. filed Critical Hyperdyne, Inc.
Publication of WO2019144046A1 publication Critical patent/WO2019144046A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/52Program synchronisation; Mutual exclusion, e.g. by means of semaphores

Definitions

  • Distributed computing can be used to break a large computation into sub-components, assign distributed computing devices components of the computation, and combine the results from the distributed computing devices to generate the result of the computation.
  • Existing methods for distributed computing use various techniques to obtain a result from a distributed computing task, e.g., selecting a coordinator to evaluate the sub-component results, or determining a majority result.
  • Typical distributed computing operations are designed to be fault-tolerant, which allows convergence even if a computing device was not able to perform its assigned portion of the computation. However, such operations also allow a computing device that claims to contribute to the computation, but did not contribute, to converge with the other computing devices.
  • One use for distributed computing devices relates to improving artificial intelligence (AI) models.
  • Distributed computers connected to a network can implement an AI model and also collect data that is used to update and improve the AI model.
  • a “gather and scatter” method is used to generate and propagate updates to the AI models determined from the collected data.
  • distributed computers collect data and transmit the data to a central server.
  • the central server updates the AI model and transmits the updated AI model to the distributed computers.
  • the central server must be reliable, and each distributed computer must have a reliable connection to the server to provide data to and receive model updates from the central server. This gather and scatter method requires a large amount of computing to be performed at the central server, and does not take advantage of the computing resources of the distributed computers.
  • a centralized system obtains user preference data and generates recommendations based on the obtained data.
  • the centralized system may collect data describing users’ prior purchases and product ratings, or data tracking user behavior, such as clickstream data.
  • the centralized system uses this collected data to provide personalized recommendations to users.
  • such centralized systems may often also exploit users’ personal data for other purposes, such as targeting content towards them or selling the data to third parties.
  • Many users would prefer to receive personalized recommendations without having a centralized system collect, store, or distribute their personal data.
  • Latent semantic indexing (LSI) is a mathematical tool for indexing and retrieving content from a large number of unstructured text-based documents, such as web pages. LSI is used for various applications, such as search engines and document comparison.
  • a central server indexes a set of searchable content and allows other users to search this content through the central server.
  • a search engine uses a web crawler to retrieve publicly-accessible websites or other documents and stores information describing the documents’ content.
  • the search engine provides a search interface to which a user can submit queries, and upon receiving a search query, the search engine compares the query to the stored information and provides relevant results.
  • the search engine system obtains and analyzes both the documents being searched and the search queries.
  • current search engines require information providers to make their documents publicly available, or at least available to the search engine, to allow others to search the documents.
  • centralized search engines can collect data on their users’ behaviors. Many information providers and users would prefer a search implementation that does not involve a centralized system collecting and storing their data.
  • the result of the DAC algorithm indicates whether each computing device has contributed to the calculation of the average.
  • the DAC procedure is able to confirm that each computing device in the distributed environment has contributed to the calculation.
  • the DAC procedure confirms that each computing device has participated using the same connections that are used to obtain the consensus result; thus, no additional routing protocols or overlay topologies are needed to confirm participation.
  • several exemplary applications for DAC are described herein. Distributed implementations for calculating a dot product, calculating a matrix-vector product, performing a least squares calculation, and performing decentralized Bayesian parameter learning are described.
  • a method for distributed AI learning is also described.
  • a method for generating a subspace for recommendations, and exemplary uses of the recommendation model are also described.
  • a method for generating a subspace for latent semantic indexing, and exemplary uses of the latent semantic index are also described.
  • One application involves cooperatively generating a recommendation model and generating personalized recommendations based on the model without exposing personal user data.
  • cooperating distributed computing devices use a cooperative subspace approach that combines the DAC algorithm with the theory of random sampling.
  • Each distributed computing device randomly samples local user preference data in a cooperative subspace that approximates the user preference data reflecting the users of all cooperating distributed computing devices.
  • the sampled preference data is shared among the cooperating distributed computing devices.
  • the distributed computing devices use the DAC algorithm to cooperatively create a recommendation model based on the sampled preference data of many users.
  • Each distributed computing device individually applies the recommendation model to the distributed computing device’s local preference data to generate personalized recommendations for the user of the distributed computing device.
  • the cooperative subspace approach allows the DAC algorithm to be performed efficiently, and the random sampling obscures the underlying user preference data so that users’ data privacy is maintained.
  • a set of cooperating distributed computing devices use a cooperative subspace approach that combines the DAC algorithm with the theory of random sampling.
  • Each cooperating distributed computing device stores one or more documents, and the documents distributed across the set of cooperating distributed computing devices are jointly referred to as a corpus of documents.
  • the documents in the corpus may be documents that their respective users plan to make available for searching by other distributed computing devices, e.g., documents that can be searched by some or all of the cooperating distributed computing devices and/or other devices.
  • the cooperating distributed computing devices jointly generate a latent semantic index based on the corpus of documents, without the contents of any individual document being exposed to other distributed computing devices.
  • each distributed computing device individually analyzes its locally-stored documents, and randomly samples the results of this analysis to generate a matrix that approximates and obscures the content of the local documents in each distributed computing device.
  • the distributed computing devices share their matrices and perform the DAC algorithm described above to generate a matrix reflecting the corpus of documents stored by all cooperating distributed computing devices.
  • Each distributed computing device then extracts a low-dimension latent semantic index (LSI) subspace from the matrix based on the DAC result.
  • The LSI subspace reflects the analysis of all of the documents in the corpus, but is much smaller than a matrix concatenating the raw analysis results of the local documents in each distributed computing device.
  • the cooperative subspace approach allows the LSI subspace to be calculated efficiently, and the random sampling obscures the underlying documents so that privacy is maintained.
  • a first distributed computing device of a plurality of distributed computing devices receives over a network a data partition of a plurality of data partitions for a computing task. Each of the plurality of distributed computing devices is assigned a respective data partition of the plurality of data partitions. The first distributed computing device generates a first partial result of a plurality of partial results generated by the plurality of distributed computing devices. The first distributed computing device iteratively executes a distributed average consensus (DAC) process.
  • the DAC process includes, for each iteration of the process, transmitting the first partial result of the first distributed computing device to a second distributed computing device of the plurality of distributed computing devices, receiving a second partial result generated by the second distributed computing device from the second distributed computing device, and updating the first partial result of the first distributed computing device by computing an average of the first partial result and the second partial result.
  • the first distributed computing device determines to stop executing the DAC process.
  • the first distributed computing device generates a final result of the computing task based on the consensus value.
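  • A minimal single-process simulation of this pairwise-averaging loop is sketched below in Python. The names (dac_pairwise_average, values) are illustrative, not from the patent; the real system exchanges partial results over network connections and uses a convergence indicator rather than inspecting all values directly.

```python
import random

def dac_pairwise_average(values, tolerance=1e-9, max_rounds=10000):
    """Simulate the DAC process: in each round, devices pair up at random,
    exchange partial results, and replace them with the pairwise average."""
    values = list(values)
    n = len(values)
    for _ in range(max_rounds):
        order = random.sample(range(n), n)        # random pairing for this round
        for a, b in zip(order[0::2], order[1::2]):
            avg = (values[a] + values[b]) / 2.0   # both peers keep the average
            values[a] = values[b] = avg
        if max(values) - min(values) < tolerance: # all devices agree: consensus
            break
    return values

# Three devices holding partial results 1.0, 4.0, and 7.0 all converge to 4.0.
print(dac_pairwise_average([1.0, 4.0, 7.0]))
```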
  • an intermediary computing device receives over a network a request for a computing task from a requesting computing device.
  • the request includes a set of requirements for the computing task.
  • the intermediary computing device transmits at least a portion of the set of requirements to a plurality of distributed computing devices over the network.
  • the intermediary computing device receives over the network commitments from a plurality of distributed computing devices to perform the computing task.
  • Each of the plurality of distributed computing devices meets the portion of the set of requirements.
  • the intermediary computing device transmits, to each of the plurality of distributed computing devices, a respective data partition of a plurality of data partitions for the computing task.
  • the plurality of distributed computing devices are configured to iteratively execute a distributed average consensus (DAC) process to calculate a consensus value for the computing task.
  • the intermediary computing device returns a result of the computing task to the requesting computing device.
  • a distributed computing device generates a gradient descent matrix based on data received by the distributed computing device and a model stored on the distributed computing device.
  • the distributed computing device calculates a sampled gradient descent matrix based on the gradient descent matrix and a random matrix.
  • the distributed computing device iteratively executes a process to determine a consensus gradient descent matrix in conjunction with a plurality of additional distributed computing devices connected by a network.
  • the consensus gradient descent matrix is based on the sampled gradient descent matrix and a plurality of additional sampled gradient descent matrices calculated by the plurality of additional distributed computing devices.
  • the distributed computing device updates the model stored on the distributed computing device based on the consensus gradient descent matrix.
  • a distributed computing device stores user preference data representing preferences of a user with respect to a portion of a set of items.
  • the distributed computing device calculates sampled user preference data by randomly sampling the user preference data.
  • the distributed computing device iteratively executes, in conjunction with a plurality of additional distributed computing devices connected to the distributed computing device by a network, a process to determine a consensus result for the sampled user preference data.
  • the consensus result is based on the sampled user preference data calculated by the distributed computing device and additional sampled user preference data calculated by the plurality of additional distributed computing devices.
  • the additional sampled user preference data is based on preferences of a plurality of additional users.
  • the distributed computing device determines a recommendation model based on the consensus result for the sampled user preference data.
  • the recommendation model reflects the preferences of the user and the plurality of additional users.
  • the distributed computing device identifies an item of the set of items to provide to the user as a recommendation based on the recommendation model, and provides the recommendation of the item to the user.
  • a distributed computing device calculates word counts for each of a set of documents.
  • the word counts for each of the set of documents are represented as a plurality of values, each value representing a number of times a corresponding word appears in one of the set of documents.
  • the distributed computing device calculates sampled word counts by randomly sampling the word counts.
  • the distributed computing device in conjunction with a plurality of additional distributed computing devices connected to the distributed computing device by a network, iteratively executes a process to determine a consensus result for the sampled word counts.
  • the consensus result is based on the sampled word counts calculated by the distributed computing device and additional sampled word counts calculated by the plurality of additional distributed computing devices, the additional sampled word counts based on additional sets of documents.
  • the distributed computing device determines a latent semantic index (LSI) subspace based on the consensus result for the sampled word counts.
  • the LSI subspace reflects contents of the set of documents and the additional sets of documents.
  • the distributed computing device projects a document into the LSI subspace to determine the latent semantic content of the document.
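  • A rough sketch of one way these steps could look in code is shown below, assuming Gaussian random sampling and a truncated SVD to extract the subspace; the helper dac_average stands in for the iterative network DAC exchange, and all names and dimensions are illustrative rather than taken from the patent.

```python
import numpy as np

def sample_word_counts(word_counts, k, rng):
    """Randomly sample a (vocabulary x documents) word-count matrix by
    multiplying with a random matrix, obscuring the raw local counts."""
    omega = rng.standard_normal((word_counts.shape[1], k))
    return word_counts @ omega                      # vocabulary x k

def dac_average(sampled):
    """Stand-in for the DAC exchange: the real devices reach this mean by
    iterative pairwise averaging over the network."""
    return sum(sampled) / len(sampled)

def lsi_subspace(consensus, n_devices, rank):
    """Extract a low-dimension LSI basis from the consensus result."""
    combined = consensus * n_devices                # undo the averaging
    u, _, _ = np.linalg.svd(combined, full_matrices=False)
    return u[:, :rank]                              # vocabulary x rank

rng = np.random.default_rng(0)
vocab, k, rank = 1000, 32, 8
local_counts = [rng.poisson(0.05, size=(vocab, 200)) for _ in range(3)]
sampled = [sample_word_counts(w, k, rng) for w in local_counts]
basis = lsi_subspace(dac_average(sampled), len(sampled), rank)
print(basis.shape)                                  # (1000, 8)
```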
  • a search device calculates a word count vector for one of a document or a set of keywords. Each element of the word count vector has a value representing instances of a different word in the document or the set of keywords.
  • the search device projects projecting the word count vector into a latent semantic index (LSI) subspace to generate a subspace search vector characterizing the document in the LSI subspace.
  • the LSI subspace is generated cooperatively by a plurality of distributed computing devices connected by a network based on a corpus of documents, the LSI subspace reflecting contents of the corpus of documents.
  • the search device transmits the subspace search vector to a target device as a search request.
  • the search device receives from the target device, in response to the search request, data describing a target document that matches the search request.
  • the target device determines that the target document matches the search request by comparing the subspace search vector to a target vector characterizing the target document in the LSI subspace.
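  • A short sketch of the projection-and-compare step is shown below. The patent text does not name a specific similarity measure, so cosine similarity is assumed here; the placeholder basis, vectors, and names are illustrative.

```python
import numpy as np

def project(word_count_vector, basis):
    """Project a word-count vector into the LSI subspace (basis: vocab x rank)."""
    return basis.T @ word_count_vector

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

rng = np.random.default_rng(1)
basis, _ = np.linalg.qr(rng.standard_normal((1000, 8)))   # placeholder LSI basis

query_counts = rng.poisson(0.02, 1000)        # search device: keyword word counts
target_counts = rng.poisson(0.05, 1000)       # target device: a stored document

search_vector = project(query_counts, basis)  # transmitted as the search request
target_vector = project(target_counts, basis) # computed locally by the target
print("match score:", cosine_similarity(search_vector, target_vector))
```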
  • FIG. 1 is a flow diagram showing contract formation in an environment for distributed computing, according to one embodiment.
  • FIG. 2 is a flow diagram showing publishing of distributed computing device information in the environment for distributed computing, according to one embodiment.
  • FIG. 3 is a block diagram showing peer-to-peer connections between distributed computing devices, according to one embodiment.
  • FIG. 4A is a diagram showing a first arrangement of peer connections among a group of distributed computing devices at a first time, according to one embodiment.
  • FIG. 4B is a diagram showing a second arrangement of peer-to-peer connections among the group of distributed computing devices at a second time, according to one embodiment.
  • FIG. 5A is a graphical illustration of an initialized distributed average consensus convergence indicator, according to one embodiment.
  • FIG. 5B is a graphical illustration of a first peer-to-peer update in a distributed average consensus convergence indicator, according to one embodiment.
  • FIG. 6 illustrates an example of using distributed computing devices to perform a distributed dot product calculation, according to one embodiment.
  • FIG. 7 illustrates an example of using distributed computing devices to perform a distributed matrix-vector product calculation, according to one embodiment.
  • FIG. 8 illustrates an example of using distributed computing devices to perform a distributed least squares calculation, according to one embodiment.
  • FIG. 9 illustrates an example of using distributed computing devices to perform decentralized Bayesian parameter learning, according to one embodiment.
  • FIG. 10 is a flow diagram illustrating a prior art procedure for training an artificial intelligence (AI) model.
  • FIG. 11 is a flow diagram illustrating a procedure for training an artificial intelligence (AI) model using distributed average consensus, according to one embodiment.
  • FIG. 12 is a flowchart showing a method for determining a consensus result within a cooperative subspace, according to one embodiment.
  • FIG. 13 is a flow diagram illustrating a distributed environment for generating a personalized recommendation model using distributed average consensus, according to one embodiment.
  • FIG. 14 is a flowchart showing a method for generating a personalized recommendation model using distributed average consensus, according to one embodiment.
  • FIG. 15 is a flow diagram illustrating a distributed environment for generating a low- dimension subspace for latent semantic indexing, according to one embodiment.
  • FIG. 16 is a flowchart showing a method for generating a low-dimension subspace for latent semantic indexing using distributed average consensus, according to one embodiment.
  • FIG. 17 is a flowchart showing a method for searching for documents in the distributed environment based on the latent semantic index, according to one embodiment.
  • “130” in the text refers to reference numerals “130a” and/or “130b” and/or “130c” in the figures.
  • the DAC algorithm can be implemented in a two-sided market that includes requesting computing devices seeking computing power and distributed computing devices that provide computing power.
  • the requesting computing devices, or users of the requesting computing devices, want to run a computing task on the distributed computing devices.
  • the requesting computing devices may be used by scientists, statisticians, engineers, financial analysts, etc.
  • the requesting computing device can transmit requests for computing tasks to one or more intermediary computing devices.
  • a smart contract is an agreement made between multiple computing devices (e.g., a set of distributed computing devices, or a requesting computing device and a set of distributed computing devices) to commit computing resources to a computing task.
  • a smart contract specifies a set of technical requirements for completing the computing task, and may specify compensation for completing the computing task or a portion of the computing task.
  • the smart contract may include a list of distributed computing devices that have agreed to the smart contract.
  • smart contracts are published to a blockchain.
  • the requesting computing devices, intermediary computing devices, and distributed computing devices are computing devices capable of transmitting and receiving data via a network.
  • Any of the computing devices described herein may be a conventional computer system, such as a desktop computer or a laptop computer.
  • a computing device may be any device having computer functionality, such as a mobile computing device, server, tablet, smartphone, smart appliance, personal digital assistant (PDA), etc.
  • the computing devices are configured to communicate via a network, which may comprise any combination of local area and/or wide area networks, using both wired and/or wireless communication systems.
  • the network uses standard communications technologies and/or protocols.
  • the network includes communication links using technologies such as Ethernet, 802.11, worldwide interoperability for microwave access (WiMAX), 3G, 4G, code division multiple access (CDMA), digital subscriber line (DSL), etc.
  • networking protocols used for communicating via the network include multiprotocol label switching (MPLS), transmission control protocol/Internet protocol (TCP/IP), hypertext transport protocol (HTTP), simple mail transfer protocol (SMTP), and file transfer protocol (FTP).
  • FIG. 1 illustrates contract formation in an exemplary environment 100 for distributed computing.
  • a requesting computing device 110 communicates over a network 160 with a smart contract scheduler 120, which is an intermediary computing device that coordinates computing resources for performing distributed computing tasks.
  • the environment 100 also includes a set of distributed computing devices 130 that can connect to each other and to the smart contract scheduler 120 over a network 170.
  • the networks 160 and 170 may be the same network, e.g., the Internet, or they may be different networks.
  • FIG. 1 shows four distributed computing devices 130a, 130b, 130c, and 130d, but it should be understood that the environment 100 can include many more distributed computing devices, e.g., millions of distributed computing devices 130.
  • the environment 100 can include additional requesting computing devices 110 and smart contract schedulers 120. While the requesting computing device 110, smart contract scheduler 120, and distributed computing devices 130 are shown as separate computing devices, in other embodiments, some of the components in the environment 100 may be combined as a single physical computing device.
  • the requesting computing device 110 may include a smart contract scheduling component.
  • the requesting computing device 110 and/or smart contract scheduler 120 are also distributed computing devices 130 with computing resources for performing requested calculations.
  • the requesting computing device 110 transmits a set of job requirements 140 to the smart contract scheduler 120 over the network 160.
  • the job requirements 140 may include, for example, minimum technical requirements for performing the task or a portion of the task, such as memory, disk space, number of processors, or network bandwidth.
  • the job requirements 140 also include an amount and/or type of compensation offered by the requesting computing device 110 for the task or a portion of the task.
  • the smart contract scheduler 120 generates a smart contract 150 for the requesting computing device 110 based on the job requirements 140 and transmits the smart contract 150 to the distributed computing devices 130 over the network 170.
  • the smart contract scheduler 120 may broadcast the smart contract 150 to all participating distributed computing devices 130, or transmit the smart contract 150 to some subset of the distributed computing devices 130.
  • the smart contract scheduler 120 may maintain a list of distributed computing devices 130 and their technical specifications, and identify a subset of the distributed computing devices 130 that meet one or more technical requirements provided in the job requirements 140.
  • the smart contract scheduler 120 may determine, based on prior smart contracts, distributed computing devices 130 that are currently engaged with tasks for other smart contracts, and identify a subset of the distributed computing devices 130 that may be available for the smart contract 150.
  • Each distributed computing device 130 that receives the smart contract 150 from the smart contract scheduler 120 can independently determine whether the technical requirements and compensation are suitable. At least some portion of distributed computing devices 130 agree to the smart contract 150 and transmit their acceptance of the contract to the smart contract scheduler 120 over the network 170. In the example shown in FIG. 1, distributed computing devices 130a, 130b, and 130c agree to the smart contract 150, and distributed computing device 130d has not agreed to the smart contract.
  • the distributed computing devices 130a-130c that agree to the smart contract 150 may each publish a signed copy of the smart contract 150 to a blockchain in which the distributed computing devices 130 and the smart contract scheduler 120 participate. Contracts published to the blockchain can be received by all participants, including the smart contract scheduler 120 and, in some embodiments, the requesting computing device 110.
  • the smart contract 150 specifies a requisite number of distributed computing devices 130 for performing the computing task. Once the requisite number of distributed computing devices publish their acceptance of the smart contract 150 to the blockchain, the distributed computing devices that have committed to the contract complete the computing task.
  • the distributed computing devices receive code provided by the requesting computing device 110 with instructions for completing the computing task.
  • the requesting computing device 110 may transmit the code directly to the distributed computing devices 130a-130c over the network 170, or the requesting computing device 110 may provide the code to the distributed computing devices 130a-130c via the smart contract scheduler 120.
  • the code includes checkpoints, which are used to indicate suitable restart locations for long-running calculations.
  • the code may fail before completion of a task, but after a distributed computing device 130 has performed a substantial amount of work.
  • the distributed computing device 130 is compensated for the work it has done up to that checkpoint.
  • the distributed computing devices 130 cooperate for computing tasks that benefit the distributed computing devices 130 themselves, rather than for the benefit of a particular requesting computing device 110.
  • the distributed computing devices 130 may perform a DAC procedure for cooperative learning, such as decentralized Bayesian parameter learning or neural network training, described in further detail below.
  • a distributed computing device 130 may not receive compensation from a requesting computing device, but instead receives the benefit of data and cooperation from the other distributed computing devices 130.
  • the distributed computing devices 130 may sign a smart contract 150 with each other, rather than with a requesting computing device 110 outside of the group of distributed computing devices 130.
  • the distributed computing devices 130 may cooperate on computing tasks without a smart contract 150.
  • the distributed computing devices 130 may receive code for performing the calculations from a coordinating computing device, which may be one of the distributed computing devices 130 or another computing device.
  • the distributed computing devices 130 provide connection information to the other distributed computing devices 130 so that they are able to communicate their results to each other over the network 170.
  • the smart contract 150 may be implemented by a blockchain accessed by each of the distributed computing devices 130 and on which each distributed computing device 130 publishes connection information.
  • FIG. 2 is a flow diagram showing publishing of distributed computing device information in the environment for distributed computing shown in FIG. 1.
  • the distributed computing devices 130a, 130b, and 130c that have signed the smart contract 150 each publish their respective connection information 210a, 210b, and 210c to a smart contract blockchain 200 over the network 170.
  • Information published to the smart contract blockchain 200 is received by each of the distributed computing devices 130a-130c over the network 170.
  • the connection information 210 can be, for example, the IP address of the distributed computing device 130 and the port on which the distributed computing device 130 wishes to receive communications from the other distributed computing devices.
  • the distributed computing devices 130 each compile a peer list 220 based on the information published to the smart contract blockchain 200.
  • the peer list 220 includes the connection information 210 for some or all of the distributed computing devices 130 that signed the smart contract 150.
  • the peer list 220 allows each distributed computing device 130 to communicate with at least a portion of the other distributed computing devices over the network 170.
  • Each distributed computing device 130 stores a local copy of the peer list 220. If the peer list 220 includes a portion of the distributed computing devices 130 that signed the smart contract 150, the peer lists 220 stored on different distributed computing devices 130 are different, e.g., each distributed computing device 130 may store a unique peer list containing some portion of the distributed computing devices 130 that signed the smart contract 150.
  • FIG. 3 illustrates peer-to-peer connections formed between distributed computing devices according to the peer list 220.
  • the distributed computing devices 130 connect to each other (e.g., over the network 170 shown in FIGs. 1 and 2) to share results.
  • each distributed computing device 130 initializes a server thread 310 to listen to the port that it posted to the smart contract blockchain 200, i.e., the port it provided in the connection information 210.
  • Each distributed computing device 130 also initializes a client thread 320 capable of connecting to another distributed computing device 130.
  • the client thread 320a of distributed computing device 130a has formed a connection 340 to the server thread 310b of distributed computing device 130b using the connection information 210b provided by distributed computing device 130b.
  • the client thread 320b of distributed computing device 130b has formed a connection 350 to the server thread 310c of distributed computing device 130c using the connection information 210c provided by distributed computing device 130c.
  • Distributed computing devices 130a and 130b can share computing results over the connection 340, and distributed computing devices 130b and 130c can share computing results over the connection 350.
  • the distributed computing devices 130 undertake a sequence of forming connections, sharing results, computing an average, and determining whether consensus is reached. If consensus has not been reached, the distributed computing devices 130 form a new set of connections, share current results (i.e., the most recently computed averages), compute a new average, and again determine whether consensus is reached. This process continues iteratively until consensus is reached.
  • a mathematical discussion of the DAC algorithm is described in greater detail below.
  • FIG. 4A illustrates a first arrangement 400 of peer connections formed among a group of seven distributed computing devices at a first time, according to one embodiment.
  • FIG. 4A includes a set of seven distributed computing devices 130a-130g that have connected to form three sets of pairs.
  • distributed computing device 130a is connected to distributed computing device 130c over connection 410.
  • the distributed computing devices 130, or some portion of the distributed computing devices 130, may each select a random computing device from the peer list 220 and attempt to form a peer-to-peer connection.
  • distributed computing device 130g has not formed a connection to any other distributed computing device in this iteration.
  • a single distributed computing device 130 may be connected to two other distributed computing devices, e.g., both the client thread and the server thread are connected to a respective computing device.
  • FIG. 4B illustrates a second arrangement 450 of peer-to-peer connections among the group of distributed computing devices 130a-130g at a second time, according to one embodiment.
  • the distributed computing devices 130a-130g have formed the connections in a different configuration from the connections 400 shown in FIG. 4A. For example, distributed computing device 130a is now connected to distributed computing device 130b over connection 460. The distributed computing devices 130a-130g continue to form new sets of connections and exchange data until they determine that distributed average consensus is reached.
  • process replication is used to ensure that the loss of a distributed computing device 130 does not compromise the results of an entire computation task.
  • Process replication provides a safeguard against the inherently unreliable nature of dynamic networks, and offers a mechanism for distributed computing devices 130 to check that peer computing devices 130 are indeed contributing to the calculation in which they are participating.
  • distributed computing devices 130 can be arranged into groups that are assigned the same data.
  • each computing device in the group of distributed computing devices can ensure that no other computing device in the group has cheated by hashing its current result (which should be the same across all computing devices in the group) with a piece of public information (such as a process ID assigned to the computing device), and sharing this with the group of computing devices.
  • One or more computing devices in the group can check the current results received from other computing devices in the group to confirm that the other computing devices are participating and have obtained the same result.
  • the distributed average consensus (DAC) algorithm is used in conjunction with a calculation in which a number of agents (e.g., N distributed computing devices 130), referred to as N process agents, must agree on their average value.
  • the continuous time model for the local agent state governed by the DAC algorithm is given by the feedback model dx_i(t)/dt = Σ_{j ∈ N_i} (x_j(t) - x_i(t)), which in matrix form is dx(t)/dt = -L x(t), where L is the graph Laplacian of the connection topology; under this model each x_i(t) converges to the average of the initial states.
  • the rate at which x_i(t) converges to the consensus average for this protocol is proportional to the smallest nonzero eigenvalue of the system Laplacian matrix L. Furthermore, the equilibrium state can be attained under dynamic, directional topologies with time delays.
  • This notion of consensus is suitable for a distributed protocol since each process requires communication only with a set of neighboring processors, and there is no need for a fusion center or centralized node with global information. It is in this sense that consensus can be exploited in the distributed computing environment 100 to achieve a variety of useful tools for distributed computing, such as multi-agent estimation and control. Distributed consensus is particularly advantageous for performing reductions on distributed data because it bypasses the need for sophisticated routing protocols and overlay topologies for complicated distributed networks.
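  • A small numeric illustration of the discrete-time counterpart of this feedback model, x(k+1) = x(k) - eps*L*x(k), is shown below; the path graph, step size, and initial states are illustrative choices, not part of the patent.

```python
import numpy as np

# Adjacency matrix of a path graph over four agents; L = D - A is its Laplacian.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A

x = np.array([1.0, 4.0, 7.0, 12.0])   # initial local agent states x_i(0)
eps = 0.3                              # stable: eps < 2 / largest eigenvalue of L
for _ in range(200):
    x = x - eps * (L @ x)              # each agent moves toward its neighbors
print(x)                               # every entry approaches the average, 6.0
```

  • In this simulation the spread of the states shrinks at a rate set by the smallest nonzero eigenvalue of L, consistent with the convergence statement above.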
  • the distributed computing devices 130 compute a convergence indicator after each set of connections (e.g., after forming the set of connections shown in FIG. 4A or 4B).
  • the convergence indicator can be represented geometrically, e.g., as a circle, sphere, or hypersphere, or, more generally, an n-sphere.
  • An n-sphere is a generalization of a sphere to a space of arbitrary dimensions; for example, a circle is a 1-sphere, and an ordinary sphere is a 2-sphere.
  • the distributed computing devices 130 can be assigned initial portions of the geometric structure, each having a center of mass.
  • each distributed computing device 130 exchanges with at least one neighboring distributed computing device two pieces of data: the distributed computing device’s current x_i(t), and the distributed computing device’s current mass and position in the convergence indicator.
  • Each distributed computing device 130 averages its x_i(t) with the x_j(t) received from its neighbor to calculate an updated x_i; similarly, each distributed computing device 130 combines its center of mass with its neighbor’s to determine a new center of mass.
  • the DAC algorithm terminates, and the last x_i can be used to calculate the final result of the computation task.
  • a given distance from the center of mass of the geometric structure can be defined as a convergence threshold for determining when the process has converged. If the convergence process does not reach the center of mass of the geometric structure, this indicates that at least one distributed computing device 130 did not participate in the calculation.
  • FIG. 5A is a graphical illustration of an initialized distributed average consensus convergence indicator, according to one embodiment.
  • the convergence indicator is a circle having a global center of mass (c.m.) 510.
  • Each distributed computing device 130 that signed the smart contract 150 is assigned a random, non-overlapping portion of an arc on a circle, e.g., a unit circle.
  • the smart contract scheduler 120, the requesting computing device 110, or one of the distributed computing devices 130 may determine and assign arcs to the participating distributed computing devices 130.
  • a first portion of the arc between 0° and θ1° is assigned to distributed computing device 1 520a.
  • Three additional portions of the circle are assigned to three additional distributed computing devices 520b-520d.
  • the distributed computing devices 520 are embodiments of the distributed computing devices 130 described above.
  • the arcs are not of equal size; for example, the arc assigned to distributed computing device 1 520a is smaller than the arc assigned to distributed computing device 2 520b.
  • Each distributed computing device 520 computes the center of mass (c.m.) 530 of its unique arc, including both the mass and location of the center of mass.
  • the differing masses are represented in FIG. 5A as different sizes of the centers of mass 530; for example, the circle around c.m. 1 530a is smaller than the circle around c.m. 2 530b, because the portion assigned to distributed computing device 1 520a is smaller than the portion assigned to distributed computing device 2 520b and therefore has a smaller mass.
  • After each successful connection (e.g., after the distributed computing devices 520 form the first set of peer connections shown in FIG. 4A or the second set of peer connections shown in FIG. 4B), each distributed computing device updates the location of its c.m. relative to the c.m. of the distributed computing device to which it connected and exchanged data.
  • FIG. 5B is a graphical illustration of a first peer-to-peer update in the distributed average consensus convergence indicator shown in FIG. 5A.
  • distributed computing device 1 520a has connected to distributed computing device 4 520d
  • distributed computing device 2 520b has connected to distributed computing device 3 520c.
  • Each set of connecting distributed computing devices exchange their respective centers of mass and calculate a joint center of mass.
  • distributed computing devices 1 and 4 calculate the joint c.m. 1 540a based on the locations and masses of c.m. 1 530a and c.m. 4 530d. As shown, joint c.m. 1 540a is partway between c.m. 1 530a and c.m. 4 530d, but closer to c.m. 4 530d due to its larger mass.
  • the distributed computing devices 520 continue forming different sets of connections. This iterative procedure of connecting, exchanging, and updating continues until the distributed computing devices 520 reach a center of mass that is within a specified distance of the global center of mass 510, at which point the distributed computing devices 520 terminate the consensus operation.
  • the specified distance from the global center of mass 510 for stopping the iterative procedure may be a specified error tolerance value, e.g., 0.0001 or 1x10^-10. If the distributed computing devices 520 do not reach the global center of mass 510, this indicates that at least one distributed computing device did not participate in the consensus mechanism.
  • the center of mass determined by the DAC procedure is pulled away from that distributed computing device’s portion of the arc, because that distributed computing device, represented by its assigned mass, did not contribute to the DAC procedure.
  • the distributed computing devices 520 may perform the iterative procedure a particular number of times before stopping even if convergence is not reached. The number of iterations to attempt convergence may be based on the number of distributed computing devices participating in the DAC process. Alternatively, the distributed computing devices may perform the iterative procedure until the center of mass becomes stationary, e.g., stationary within a specified threshold.
  • a higher dimensional shape is used as the convergence indicator, such as a sphere or a hypersphere.
  • each distributed computing device is assigned a higher-dimensional portion of the shape; for example, if the convergence indicator is a sphere, each distributed computing device is assigned a respective section of the sphere.
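  • A simplified sketch of the circle-based convergence indicator follows, assuming unit-circle arcs and pairwise merging of (mass, center-of-mass) pairs; the arc assignment, merge schedule, and names are illustrative, and the real procedure interleaves these merges with the x_i exchanges described above.

```python
import math
import random

def arc_center_of_mass(theta0, theta1):
    """Mass and center of mass of a unit-circle arc between two angles (radians)."""
    mass = theta1 - theta0
    mid = (theta0 + theta1) / 2.0
    r = math.sin(mass / 2.0) / (mass / 2.0)   # the c.m. lies inside the circle
    return mass, (r * math.cos(mid), r * math.sin(mid))

def merge(cm_a, cm_b):
    """Combine two (mass, (x, y)) centers of mass, as two connected peers do."""
    (ma, (xa, ya)), (mb, (xb, yb)) = cm_a, cm_b
    m = ma + mb
    return m, ((ma * xa + mb * xb) / m, (ma * ya + mb * yb) / m)

# Assign random, non-overlapping arcs of a unit circle to four devices.
cuts = sorted(random.uniform(0.0, 2.0 * math.pi) for _ in range(3))
bounds = [0.0] + cuts + [2.0 * math.pi]
cms = [arc_center_of_mass(a, b) for a, b in zip(bounds, bounds[1:])]

# Pairwise merging: if every device participates, the combined center of mass
# approaches the circle's center (0, 0); a missing arc would pull it away.
while len(cms) > 1:
    merged = [merge(cms[i], cms[i + 1]) for i in range(0, len(cms) - 1, 2)]
    if len(cms) % 2:
        merged.append(cms[-1])
    cms = merged
print(cms[0])   # total mass 2*pi, position near (0.0, 0.0)
```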
  • the DAC algorithm can be used to perform a dot product calculation.
  • the dot product is one of the most important primitive algebraic manipulations for parallel computing applications. Without a method for computing distributed dot products, critical parallel numerical methods (such as conjugate gradients, Newton-Krylov, or GMRES) for simulations and machine learning are not possible.
  • the DAC algorithm described above can be used to perform a dot product of two vectors x and y, represented as x^T y, in a distributed manner by assigning distributed computing devices 130 to perform respective local dot products on local sub-vectors, and then having the distributed computing devices 130 perform consensus on the resulting local scalar values.
  • FIG. 6 illustrates an example 600 of using three distributed computing devices to perform a distributed dot product calculation, according to one embodiment.
  • a first vector x 610 is partitioned into three sub-vectors, x_1^T, x_2^T, and x_3^T.
  • a second vector y 620 is also partitioned into three sub-vectors, y_1, y_2, and y_3.
  • a first distributed computing device 130a receives the first vector portions x_1^T and y_1 and calculates the dot product x_1^T y_1.
  • Second and third distributed computing devices 130b and 130c calculate the dot products x_2^T y_2 and x_3^T y_3, respectively.
  • the distributed computing devices 130a-130c exchange the dot products via connections 630 and calculate averages, as described above, until consensus is reached. After consensus, the average dot product is multiplied by the number of participating distributed computing devices 130 (in this example, 3) to determine x^T y.
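  • A minimal sketch of this distributed dot product is shown below; the dac_average helper stands in for the iterative DAC exchange, and the vectors and three-way split are illustrative.

```python
import numpy as np

def dac_average(scalars):
    """Stand-in for the DAC exchange: each device ends up holding the mean
    of all local dot products once the pairwise averaging converges."""
    return sum(scalars) / len(scalars)

x = np.arange(1.0, 10.0)                 # global vector x, split across 3 devices
y = np.arange(10.0, 19.0)                # global vector y, split the same way
x_parts, y_parts = np.split(x, 3), np.split(y, 3)

local_dots = [float(xi @ yi) for xi, yi in zip(x_parts, y_parts)]   # x_i^T y_i
result = dac_average(local_dots) * len(local_dots)   # scale by the device count
print(result, float(x @ y))              # both values equal the global x^T y
```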
  • the DAC algorithm can be performed on scalar quantities, as shown in the dot product example, and on vector quantities.
  • the DAC algorithm is used to perform a distributed matrix-vector product calculation.
  • Distributed matrix-vector products are essential for most iterative numerical schemes, such as fixed point iteration or successive approximation.
  • To calculate a matrix-vector product, a matrix is partitioned column-wise, and each distributed computing device 130 receives one or more columns of the global matrix.
  • a local matrix-vector product is calculated at each distributed computing device 130, and average consensus is performed on the resulting local vectors. The consensus result is then multiplied by the number of distributed computing devices 130 in the computation.
  • FIG. 7 illustrates an example 700 of using three distributed computing devices to perform a distributed matrix-vector product calculation, according to one embodiment.
  • a first matrix A 710 is partitioned column-wise into three sub-matrices, A_1, A_2, and A_3.
  • a vector y 720 is partitioned into three sub-vectors, y_1, y_2, and y_3.
  • the first distributed computing device 130a receives the first matrix portion A_1 and the first vector portion y_1 and calculates the matrix-vector product A_1 y_1.
  • the second and third distributed computing devices 130b and 130c calculate the matrix-vector products A_2 y_2 and A_3 y_3, respectively.
  • the distributed computing devices 130a-130c exchange the matrix-vector products via connections 730 and calculate averages, as described above, until consensus is reached. After consensus, the average matrix-vector product is multiplied by the number of participating distributed computing devices 130.
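  • The same pattern for the column-wise matrix-vector product is sketched below; again the dac_average helper replaces the network DAC step and the dimensions are illustrative.

```python
import numpy as np

def dac_average(vectors):
    """Stand-in for the DAC exchange on vector quantities."""
    return sum(vectors) / len(vectors)

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 9))          # global matrix, partitioned column-wise
y = rng.standard_normal(9)               # global vector, partitioned to match

A_parts = np.split(A, 3, axis=1)         # A_1, A_2, A_3
y_parts = np.split(y, 3)                 # y_1, y_2, y_3

local_products = [Ai @ yi for Ai, yi in zip(A_parts, y_parts)]   # A_i y_i
result = dac_average(local_products) * len(local_products)
print(np.allclose(result, A @ y))        # True: consensus recovers A y
```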
  • the DAC algorithm is used to calculate a distributed least squares regression.
  • Least squares is one of the most important regressions used by scientists and engineers. It is one of the main numerical ingredients in software designed for maximum likelihood estimation, image reconstruction, neural network training, and other applications.
  • the problem of finding the least-squares solution to an overdetermined system of equations can be defined as finding the vector x that minimizes ||Ax - b||, where:
  • A is a sensing matrix
  • x is the least-squares solution vector
  • b is a target vector.
  • the sensing matrix A is distributed row-wise, and the least-squares solution x is solved for locally on each distributed computing device 130.
  • each distributed computing device 130 in the network owns a few rows (e.g., measurements) of the sensing matrix A and the target vector b.
  • the least squares solution for the system can be recovered from the local least-squares solutions using the DAC algorithm.
  • the portions of the sensing matrix and target vector owned by a given distributed computing device i are represented as A_i and b_i, respectively.
  • Each distributed computing device i calculates the products A_i^T A_i and A_i^T b_i and stores these products in its local memory.
  • DAC is then performed on these quantities, which are both small compared to the total number of observations in A.
  • the results of the DAC process are the averages of A_i^T A_i and A_i^T b_i, which are present at every distributed computing device at the end of the DAC process. These quantities are multiplied by the number n of processes in the computation, so that every distributed computing device has copies of A^T A and A^T b that can be used to locally obtain the least squares fit to the global data set.
  • FIG. 8 illustrates an example 800 of using three distributed computing devices to perform a distributed least squares calculation, according to one embodiment.
  • the transpose of the sensing matrix A^T 810 is partitioned column-wise into three sub-matrices, A_1^T, A_2^T, and A_3^T.
  • the sensing matrix A 820 is partitioned row-wise into three sub-matrices, A_1, A_2, and A_3.
  • Each distributed computing device 130a-130c calculates a respective matrix-matrix product A_1^T A_1, A_2^T A_2, and A_3^T A_3.
  • each distributed computing device 130a-130c has a respective portion of the target vector b 830 and calculates a respective matrix-vector product A_1^T b_1, A_2^T b_2, and A_3^T b_3, similar to the calculation shown in FIG. 7.
  • the distributed computing devices 130a-130c exchange the matrix-matrix products and matrix-vector products via connections 840 and calculate averages of these products, as described above, until consensus is reached. After consensus, the average matrix-matrix product and average matrix-vector product are multiplied by the number of participating distributed computing devices 130, and the results are used to calculate the least squares solution x.
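  • A sketch of this distributed least-squares recipe is shown below, with each device holding a block of rows of A and b; dac_average again replaces the iterative DAC exchange, and the sizes are illustrative.

```python
import numpy as np

def dac_average(quantities):
    """Stand-in for the DAC exchange on the small local products."""
    return sum(quantities) / len(quantities)

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 4))         # sensing matrix, partitioned row-wise
b = rng.standard_normal(30)              # target vector, partitioned to match

A_parts, b_parts = np.split(A, 3), np.split(b, 3)
AtA_local = [Ai.T @ Ai for Ai in A_parts]                        # A_i^T A_i
Atb_local = [Ai.T @ bi for Ai, bi in zip(A_parts, b_parts)]      # A_i^T b_i

n = len(A_parts)
AtA = dac_average(AtA_local) * n         # equals A^T A after scaling
Atb = dac_average(Atb_local) * n         # equals A^T b after scaling
x = np.linalg.solve(AtA, Atb)            # local least-squares fit to global data
print(np.allclose(x, np.linalg.lstsq(A, b, rcond=None)[0]))      # True
```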
  • the DAC algorithm can be applied to decentralized Bayesian parameter learning.
  • Many industrial applications benefit from having a data-driven statistical model of a given process based on prior knowledge.
  • Economic time series, seismology data, and speech recognition are just a few big data applications that leverage recursive Bayesian estimation for refining statistical representations.
  • DAC can be used to facilitate recursive Bayesian estimation on distributed data sets.
  • p(x) in the text refers to the posterior distribution p(x | y_1:n).
  • Each distributed computing device i ∈ {1, ..., n} makes an observation y_i that is related to the quantity of interest through a predefined statistical model m_i(y_i, x).
  • Leveraging DAC to compute the global average of the distributed measurement functions allows each distributed computing device to consistently update its local posterior estimate without direct knowledge or explicit communication with the rest of the global data set.
  • FIG. 9 illustrates an example 900 of using three distributed computing devices to perform decentralized Bayesian parameter learning, according to one embodiment.
  • each distributed computing device 130 receives or calculates the prior distribution p_0(x) 910.
  • each distributed computing device 130 makes or receives a respective observation or set of observations y_i; for example, distributed computing device 130a receives the observation y_1 920.
  • each distributed computing device 130a-130c calculates the quantity ln(m_i(y_i, x)); for example, distributed computing device 130a calculates ln(m_1(y_1, x)) 930.
  • the distributed computing devices 130a-130c exchange the calculated quantities via connections 940 and calculate averages, as described above, until consensus is reached. After consensus, the distributed computing devices 130 use the average of the quantity ln(m_i(y_i, x)) to calculate the posterior estimate, p(x) 950, according to equation 9.
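  • A sketch of this decentralized update on a discretized parameter grid is shown below. The Gaussian prior and measurement models, the grid, and the dac_average helper are illustrative assumptions, and the final combination interprets the update as posterior proportional to prior times exp(n times the average log-likelihood).

```python
import numpy as np

def dac_average(arrays):
    """Stand-in for the DAC exchange of the local log-likelihood grids."""
    return sum(arrays) / len(arrays)

x_grid = np.linspace(-5.0, 5.0, 401)             # discretized quantity of interest x
prior = np.exp(-0.5 * x_grid ** 2)               # p_0(x), unnormalized standard normal
prior /= prior.sum()

observations = [1.8, 2.2, 2.5]                   # y_i held by three devices
log_likes = [-0.5 * (y - x_grid) ** 2 for y in observations]     # ln m_i(y_i, x)

n = len(observations)
posterior = prior * np.exp(n * dac_average(log_likes))           # p(x | y_1:n)
posterior /= posterior.sum()
print(x_grid[np.argmax(posterior)])              # MAP estimate near 1.6
```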
  • FIG. 10 shows an exemplary prior art system 1000 performing the gather and scatter method for training an AI model.
  • a number N of computing devices 1010, referred to as computing device 1010a through computing device 1010N, are connected to a server 1020.
  • Each computing device 1010 includes an AI module 1015.
  • Each AI module 1015 can include, among other things, an AI model (such as a neural network) for making one or more predictions based on input data, e.g., data 1025 collected or received by the computing device 1010.
  • each AI module 1015 is also configured to generate a gradient descent vector 1030 based on the received data; the gradient descent vectors 1030a-1030N are used to train the AI model.
  • Each gradient descent vector 1030 calculated by each AI module 1015 is transmitted by each computing device 1010 to the server 1020; for example, computing device 1010a transmits gradient descent vector 1030a to the server 1020.
  • Based on all of the received gradient descent vectors 1030a-1030N, the server 1020 optimizes and updates the AI model, and based on the updated AI model, the server 1020 transmits an update to the AI model 1035 to each of the computing devices 1010a-1010N.
  • the gather and scatter method requires a central server 1020 to manage the process of updating the AI model.
  • the server 1020 must be reliable, and each computing device 1010 must have a reliable connection to the server 1020 to receive updates to the AI model.
  • the processing performed by the server 1020 on the gradient vectors 1030a-1030N to generate the update 1035 can require a large amount of computing and storage resources, especially if the number of computing devices N is large and/or the gradient vectors 1030 are large. Further, the gather and scatter method does not take advantage of the computing resources available on the computing devices 1010a-1010N themselves.
  • FIG. 11 illustrates a system 1100 for training an artificial intelligence (AI) model using distributed average consensus, according to one embodiment.
  • FIG. 11 includes a number N of distributed computing devices 1110, referred to as distributed computing device 1110a through distributed computing device 1110N.
  • the distributed computing devices 1110 may be embodiments of the distributed computing devices 130 described above.
  • Each distributed computing device 1110 receives respective data 1125.
  • distributed computing device 1110a receives data 1125a
  • distributed computing device 1110b receives data 1125b, and so on.
  • the respective data 1125 received by two different distributed computing devices may be different; for example, data 1125a may be different from data 1125b.
  • the data 1125 may be structured as sets of training pairs including one or more data inputs paired with one or more labels.
  • the data 1125 may be generated internally by the distributed computing device 1110, received from one or more sensors within or connected to the distributed computing device 1110, received from one or more users, received from one or more other distributed computing devices, or received from some other source or combination of sources.
  • Each distributed computing device 1110 includes an AI module 1115.
  • the AI module 1115 includes an AI model for processing one or more input signals and making predictions based on the processed input signals.
  • the AI model may be a neural network or other type of machine learning model.
  • each AI module 1115 is configured to train the AI model based on the data 1125 received by the set of distributed computing devices 1110.
  • the AI modules 1115 of different distributed computing devices 1110 may be functionally similar or identical.
  • the AI module 1115 generates data for optimizing the AI model based on its respective received data 1125, compresses the generated data, and exchanges the compressed data with the compressed data generated by other AI modules 1115 of other distributed computing devices 1110.
  • the AI modules 1115 execute a convergence algorithm, such as the distributed average consensus (DAC) algorithm described above, on the exchanged compressed data to obtain a consensus result for optimizing the AI model.
  • Each respective AI module 1115 updates its local AI model based on the consensus result.
  • each AI module 1115 is configured to compute a gradient descent vector for each training pair (e.g., one or more data inputs paired with one or more labels) in the respective data 1125 received by the distributed computing device 1110 based on a locally-stored AI model.
• the AI module 1115a of distributed computing device 1110a calculates a gradient descent vector for each training pair included in the data 1125a.
  • the AI module 1115 is further configured to concatenate the gradient descent vectors to form a gradient descent matrix, and sample the gradient descent matrix to generate a sampled gradient matrix 1130, which is shared with the other distributed computing devices in a peer-to-peer fashion.
• distributed computing device 1110b shares its sampled gradient matrix 1130b with both distributed computing device 1110a and distributed computing device 1110N, and receives the sampled gradient matrices 1130a and 1130N from distributed computing devices 1110a and 1110N, respectively.
  • the distributed computing devices 1110 form various sets of connections, as described with respect to FIG. 4, and exchange sampled gradient matrices 1130 until the distributed computing devices 1110 reach consensus according to the DAC algorithm, as described above.
  • each distributed computing device 1110 has a local copy of a consensus gradient matrix.
• the length and number of gradient descent vectors produced by an AI module 1115 can be large. While a single gradient descent vector or matrix (e.g., a gradient vector 1030 described with respect to FIG. 10, or a set of gradient descent vectors generated by one distributed computing device 1110) can be generated and stored on a single distributed computing device 1110, if the number of distributed computing devices N is large, a single distributed computing device 1110 may not be able to store all of the gradient descent vectors generated by the N distributed computing devices, or even the gradient descent vectors generated by a portion of the N distributed computing devices. In addition, transferring a large number of large vectors between the distributed computing devices 1110a-1110N consumes a large amount of communication bandwidth. To reduce the size of data transfers and the computational resources required for each distributed computing device 1110, the AI module 1115 samples each matrix of gradient descent vectors.
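As a rough illustration of this sampling step, the sketch below compresses a local gradient matrix by multiplying it with a random Gaussian matrix, so that each device only needs to exchange an N x q sketch rather than all of its full-length gradient vectors. The dimensions, function names, and use of NumPy are illustrative assumptions, not part of the described system.

```python
import numpy as np

def sample_gradient_matrix(grad_vectors, q, rng=None):
    """Compress a local gradient matrix with a random Gaussian test matrix.

    grad_vectors: list of k_i gradient vectors, each of length N (one per training pair).
    q: sketch width (chosen according to the desired accuracy tolerance).
    Returns the N x q sampled matrix Y_i = A_i @ W_i, which is what gets shared with peers.
    """
    rng = np.random.default_rng() if rng is None else rng
    A_i = np.column_stack(grad_vectors)           # N x k_i local gradient matrix
    W_i = rng.standard_normal((A_i.shape[1], q))  # k_i x q Gaussian ensemble
    return A_i @ W_i                              # N x q sketch

# Hypothetical sizes: 10,000 weights, 32 training pairs, sketch width 8.
grads = [np.random.randn(10_000) for _ in range(32)]
Y_i = sample_gradient_matrix(grads, q=8)
print(Y_i.shape)  # (10000, 8) -- far less data to exchange than 32 full gradient vectors
```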
• the distributed computing devices 1110a-1110N run a convergence algorithm on the exchanged data (e.g., the exchanged sampled gradient matrices) to determine whether a distributed average consensus (DAC) on the exchanged data has been obtained by all of the distributed computing devices 1110a-1110N.
• the distributed computing devices 1110a-1110N may perform distributed average consensus on sampled gradient descent matrices to obtain a global matrix of the same size as the sampled gradient descent matrices.
• when each distributed computing device 1110 has received some or all of the other sampled gradient matrices 1130, and a distributed average consensus has been achieved, each AI module 1115 generates its own update 1135 to the AI model.
• the update 1135 may be an optimization of the weights of the AI model stored in the AI module 1115 based on the sampled gradient matrices 1130a-1130N, including the locally generated sampled gradient matrix and the matrices received from peer distributed computing devices.
• the DAC process ensures that each distributed computing device 1110 has contributed to the coordinated learning effort undertaken by the distributed computing devices 1110a-1110N.
  • the coordinated learning process runs without the need for a central server.
• because the distributed computing devices 1110a-1110N exchange sampled gradient matrices 1130a-1130N, rather than the underlying data 1125a-1125N, the privacy of the distributed computing devices 1110 and their users is maintained.
• when distributed computing device 1110a receives the sampled gradient matrix 1130b from another distributed computing device 1110b, the distributed computing device 1110a cannot determine any personal information about the data 1125b collected by the distributed computing device 1110b from the received sampled gradient matrix 1130b.
• the training of a neural network consists of specifying an optimization objective function, $T: \mathbb{R}^{N_{in}} \rightarrow \mathbb{R}_+$, that is a function of both the network weights, $w \in \mathbb{R}^{N_w}$ (i.e., the network topology), and the available training data, $\{x_i \in \mathbb{R}^{N_{in}}, y_i\}_{i=1}^{N_x}$, where $x$ represents the primal data, $y$ represents the associated labels, and $N_x$ is the number of training examples.
• the goal of neural network training is to produce a predictive neural network by manipulating the weights $w$ such that the expected value of the objective function $T$ is minimized. This goal can be expressed as follows: $w^* = \arg\min_w \mathbb{E}_{(x,y)}\left[T(x, y; w)\right]$
  • the method of gradient descent can be used to tune the weights of a neural network.
• Gradient descent involves the evaluation of the partial derivative of the objective function with respect to the vector of weights. This quantity is known as the gradient vector, and can be expressed as follows: $g_i = \frac{\partial T(x_i, y_i; w)}{\partial w}$
• a gradient vector can be computed for each training pair $(x_i, y_i)$ in the training set.
  • the AI module 1115 computes a gradient vector for each training pair in the data 1125 received at each distributed computing device 1110.
  • a cooperative subspace approach that combines the DAC process with the theory of random sampling can be used.
  • a cooperative subspace is used to sample the gradient vectors (e.g., to form sampled gradient vectors 1130) so that the DAC process can be performed more efficiently.
• the cooperative subspace approach computes, in a fully distributed fashion, a representative subspace, $U \in \mathbb{R}^{N \times q}$, that approximates the range of $A$ such that $\|A - UU^T A\| \leq \epsilon$, where $\epsilon$ is a user-specified tolerance on the accuracy of the approximation between 0 and 1.
• FIG. 12 is a flowchart showing a method 1200 for determining a consensus result within a cooperative subspace at a particular distributed computing device, e.g., one of the distributed computing devices 1110.
• the distributed computing device 1110 generates 1210 a Gaussian ensemble matrix $W_i \in \mathbb{R}^{k_i \times q}$.
• the Gaussian ensemble matrix is a matrix of random values used to sample a local data matrix $A_i$.
• the local data matrix $A_i$ is the matrix of gradient descent vectors computed by the AI module 1115 of a given distributed computing device 1110 based on the data 1125 received by the distributed computing device 1110.
• Each distributed computing device 1110 generates its random matrix $W_i$ independently. In other embodiments, other types of random matrices are used.
• the product $Y_i = A_i W_i$ is an approximation of the data in the local data matrix $A_i$ and compresses the local data. While the full data matrix $A_{global}$ that includes the data from each distributed computing device 1110 may be too large to be stored on and manipulated by a single distributed computing device 1110, the sampled data matrix $Y_i$ is sufficiently small to be stored on and manipulated by a single distributed computing device 1110.
• the distributed computing device 1110, in cooperation with the other distributed computing devices in the system, performs 1230 the DAC process on the sampled data matrices $Y_i$.
  • the DAC process is performed according to the procedure described above.
• a convergence indicator, such as the convergence indicators described with respect to FIGs. 5A and 5B, may be used to determine when to terminate the DAC process.
• the DAC process produces a normalized global matrix-matrix product $Y_{global}$ on each node, which can be represented as follows: $Y_{global} = \frac{1}{N_{nodes}} \sum_{i=1}^{N_{nodes}} Y_i = \frac{1}{N_{nodes}} \sum_{i=1}^{N_{nodes}} A_i W_i$
• a distributed computing device 1110 exchanges its sampled data matrix $Y_i$ with another distributed computing device 1110.
• distributed computing device 1110a transmits the sampled gradient matrix 1130a to the distributed computing device 1110b, and receives sampled gradient matrix 1130b from distributed computing device 1110b.
• the distributed computing device 1110 calculates an average of its sampled data matrix $Y_i$ and the sampled data matrix received from the other distributed computing device.
• the distributed computing device 1110 calculates an average of its sampled gradient matrix 1130a and the received sampled gradient matrix 1130b. This results in a consensus gradient descent matrix, which is a matrix of the same size as the sampled data matrix $Y_i$.
  • distributed computing devices 1110 exchange and average their current consensus gradient descent matrices.
  • the consensus gradient descent matrices are repeatedly exchanged and averaged until a consensus result for the consensus gradient descent matrix is reached across the distributed computing devices 1110.
• the consensus result, which is the matrix $Y_{global}$, is obtained when the consensus gradient descent matrices are substantially the same across all the distributed computing devices 1110, e.g., within a specified margin of error.
• the convergence indicator described with respect to FIGs. 5A and 5B may be used to determine when $Y_{global}$ has been obtained, and to determine whether all distributed computing devices 1110 participated in determining the consensus result.
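The exchange-and-average loop can be pictured with the minimal single-process simulation below. It is only a sketch: the real system uses the peer-to-peer connection schedules and convergence indicators of FIGs. 4A-5B, whereas here a randomly chosen pair of simulated devices simply averages their matrices until all copies agree within a tolerance, at which point every copy approximates the global average.

```python
import numpy as np

def simulate_dac(local_matrices, tol=1e-6, max_iters=10_000, seed=0):
    """Gossip-style distributed average consensus, simulated in one process.

    local_matrices: one sampled matrix Y_i per device (all the same shape).
    Returns the per-device matrices after consensus; each approximates mean(Y_i).
    """
    rng = np.random.default_rng(seed)
    states = [m.astype(float).copy() for m in local_matrices]
    n = len(states)
    for _ in range(max_iters):
        i, j = rng.choice(n, size=2, replace=False)
        avg = (states[i] + states[j]) / 2.0       # pairwise exchange and average
        states[i], states[j] = avg, avg.copy()
        spread = max(np.max(np.abs(s - states[0])) for s in states)
        if spread < tol:                          # all copies agree within tolerance
            break
    return states

devices = [np.random.randn(100, 8) for _ in range(5)]   # 5 hypothetical devices
consensus = simulate_dac(devices)
print(np.allclose(consensus[0], np.mean(devices, axis=0), atol=1e-4))  # True
```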
• each distributed computing device in the network computes the local gradients $\partial T(x, y; w)/\partial w$ associated with its local data set, producing one gradient vector per training pair.
• this gradient vector data is used to form the local data matrix $A_i$ in the cooperative subspace approach described above.
• the gradient vectors are compressed into a suitably low-dimensional subspace according to steps 1210 and 1220, the sampled, global gradient descent vectors are obtained according to the DAC process (step 1230), and gradient descent is performed in the global subspace locally on each agent (step 1240).
  • the AI module 1115 updates its AI model (e.g., by updating the model weights) based on the representative subspace U, which reflects the data 1125 gathered by all of the distributed computing devices 1110.
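The exact update rule is not spelled out here, but one plausible reading of performing gradient descent in the global subspace is sketched below: each device recovers an orthonormal basis U from the consensus result and takes an ordinary descent step using its gradient projected onto that subspace. The QR factorization, learning rate, and names are assumptions for illustration.

```python
import numpy as np

def subspace_gradient_step(w, local_grad, Y_global, lr=0.01):
    """Take one gradient descent step restricted to the shared subspace.

    w:          current weight vector (length N).
    local_grad: this device's gradient of the objective at w (length N).
    Y_global:   N x q consensus matrix produced by the DAC process.
    """
    # Orthonormal basis U for the range of Y_global (one common way to extract it).
    U, _ = np.linalg.qr(Y_global)
    grad_in_subspace = U @ (U.T @ local_grad)   # project the gradient onto span(U)
    return w - lr * grad_in_subspace

# Hypothetical usage with random data.
N, q = 1_000, 8
w = np.zeros(N)
g = np.random.randn(N)
Yg = np.random.randn(N, q)
w_new = subspace_gradient_step(w, g, Yg)
```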
  • a centralized system obtains personal user data, such as preferences, purchases, ratings, tracked activities (e.g., clickstreams), or other explicit or implicit preference information.
  • the centralized system trains a recommendation model based on the collected user data, and provides recommendations to a given user based on data collected about that user.
  • the centralized system controls both the model and its users’ data.
  • Centralized systems that provide recommendations may also exploit users’ data for other purposes, such as targeting content, or selling the data to third parties, that users may not approve of or may not have knowingly consented to. Many users would prefer to receive personalized recommendations without a central system collecting, storing, or distributing data about them.
  • cooperating distributed computing devices use a cooperative subspace approach that combines the DAC algorithm described above with the theory of random sampling.
  • Each distributed computing device randomly samples local user preference data in a cooperative subspace.
• the cooperative subspace approximates the user preference data, reflecting the users of all cooperating distributed computing devices.
  • the sampled preference data is shared among the cooperating distributed computing devices.
  • the distributed computing devices use the DAC algorithm to cooperatively create a recommendation model based on the sampled preference data of many users.
  • Each distributed computing device individually applies the recommendation model to the distributed computing device’s local preference data to generate personalized recommendations for the user of the distributed computing device.
  • the cooperative subspace approach allows the DAC algorithm to be performed efficiently, and the random sampling obscures the underlying user preference data so that users’ data privacy is maintained.
  • the recommendation model can be generated for and applied to any finite set of items.
• the cooperative subspace approach can be used to generate recommendations for a set of movies, for example.
  • the set of items are represented as a vector, with each vector element corresponding to one item in the set.
• the first vector element corresponds to “The A-Team”
• the second vector element corresponds to “A.I. Artificial Intelligence,” etc.
• the set of items may be represented as a matrix, e.g., with each row corresponding to an item in the set, and each column representing a preference feature of the item. For example, for a matrix representing restaurant ratings, one column in the matrix represents ratings for overall quality, another column represents ratings for food, another column represents ratings for service, and another column represents ratings for decor.
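A small sketch of how such a preference vector might be assembled is shown below; the movie titles, rating values, and neutral default of 0 are illustrative assumptions consistent with the example above.

```python
import numpy as np

# Fixed, agreed-upon ordering of the item set shared by all devices.
MOVIES = ["The A-Team", "A.I. Artificial Intelligence", "Airplane!", "Akira"]

def preference_vector(user_ratings, items=MOVIES, neutral=0.0):
    """Build a local preference vector: one element per item, neutral if unrated."""
    a_i = np.full(len(items), neutral)
    for title, rating in user_ratings.items():
        a_i[items.index(title)] = rating
    return a_i

a_i = preference_vector({"The A-Team": 0.8, "Akira": -0.2})
print(a_i)  # [ 0.8  0.   0.  -0.2]
```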
  • FIG. 13 illustrates a distributed environment 1300 for generating a personalized recommendation model using distributed average consensus, according to one embodiment.
• FIG. 13 includes a number N of distributed computing devices 1310, referred to as distributed computing device 1310a through distributed computing device 1310N.
  • the distributed computing devices 1310 may be embodiments of the distributed computing devices 130 described above.
  • Each distributed computing device 1310 is associated with a user.
  • Each distributed computing device 1310 includes user preference data 1315 and a recommendation module 1320.
  • the user preference data 1315 is data that reflects user preferences about some or all items in a set of items.
  • the distributed computing device 1310 receives as user input ratings (e.g., ratings from -1 to 1, or ratings from 1 to 10) for a set of movies.
• the distributed computing device 1310 stores the user ratings as vector elements corresponding to the movies to which the ratings apply. For example, if the user provides a rating of 0.8 for “The A-Team,” the first element of the movie vector is 0.8. Movies that the user has not rated may be assigned a neutral rating, e.g., 0.
  • the distributed computing device 1310 normalizes user-supplied ratings, e.g., so that each rating is between 0 and 1.
  • the distributed computing device 1310 learns the user preference data 1315 implicitly. For example, the distributed computing device 1310 may assign a relatively high rating (e.g., 1) to each movie that the user watches through to its end, and a relatively low rating (e.g., 0) to each movie that the user starts but does not finish.
• the recommendation module 1320 uses the user preference data 1315 to train a recommendation model, which the recommendation module 1320 uses to generate personalized recommendations for the user of the distributed computing device 1310.
• the recommendation module 1320 is configured to work cooperatively with the recommendation modules 1320 of other distributed computing devices 1310 to develop the recommendation model. To train the recommendation model, the recommendation module 1320 of each distributed computing device 1310 samples the user preference data 1315 stored locally on the respective distributed computing device 1310 to generate sampled preference data 1330. For example, recommendation module 1320a of distributed computing device 1310a samples the user preference data 1315a to generate the sampled preference data 1330a.
  • the sampled preference data 1330 is a mathematical function of the user preference data 1315 that involves random sampling, such as multiplying the user preference data 1315 by a random matrix.
  • the sampled preference data 1330 is shared with the other distributed computing devices in a peer-to-peer fashion.
• distributed computing device 1310b shares its sampled preference data 1330b with both distributed computing device 1310a and distributed computing device 1310N, and receives the sampled preference data 1330a and 1330N from distributed computing devices 1310a and 1310N, respectively.
  • the distributed computing devices 1310 form various sets of connections, as described with respect to FIGs. 4A and 4B, and exchange and average the sampled preference data until the distributed computing devices 1310 reach a consensus result according to the DAC algorithm, as described with respect to FIGs. 4A-5B.
• while the sampled preference data 1330 of one of the distributed computing devices 1310 is shared with the other distributed computing devices 1310, the raw user preference data 1315 does not leave any one of the distributed computing devices 1310. Randomly sampling the user preference data 1315 to generate the sampled preference data 1330 that is shared with other distributed computing devices 1310 obscures the underlying user preference data 1315, so that user privacy is maintained. For example, when distributed computing device 1310a receives the sampled preference data 1330b from another distributed computing device 1310b, the distributed computing device 1310a cannot recover the raw, underlying user preference data 1315b from the sampled preference data 1330b.
• the distributed computing devices 1310a-1310N run a consensus algorithm, such as the distributed average consensus (DAC) algorithm described above, on the exchanged sampled preference data 1330 to obtain a consensus result for the sampled preference data.
• the distributed computing devices 1310a-1310N may also use a convergence indicator, such as the convergence indicator described above with respect to FIGs. 5A and 5B, to determine when a consensus result for the sampled preference data has been reached by all of the distributed computing devices 1310a-1310N.
• the devices 1310a-1310N perform the DAC process on the matrices of sampled preference data 1330 to obtain a global matrix of the same size as the matrices of sampled preference data 1330.
  • each recommendation module 1320 generates its own recommendation model from the consensus result.
  • the recommendation model may vary slightly between distributed computing devices 1310, e.g., within the margin of error tolerance permitted for consensus.
  • the recommendation module 1320 then applies the recommendation model to the local user preference data 1315 to generate recommendations for the user of the distributed computing device 1310.
  • While a single sampled user preference matrix can be stored on a single distributed computing device 1310, if the number of distributed computing devices N is large, a single distributed computing device 1310 may not be able to store all of the sampled preference data generated by the N devices, or even a portion of the N devices.
  • the distributed computing devices 1310 exchange and manipulate matrices of the size of the matrix of sampled preference data 1330 to generate a global matrix of the same size as the matrix of sampled preference data 1330. At no point during the DAC process does a distributed computing device 1310 store close to the amount of preference data or sampled preference data generated by all N devices.
• $a_i \in \mathbb{R}^N$ represents a vector of user preference data that is local to node $i$, and $A_{global} \in \mathbb{R}^{N \times N_{nodes}}$ represents the global data set of all user preference data 1315.
• the cooperative subspace approach computes, in a fully distributed fashion, a representative subspace, $U \in \mathbb{R}^{N \times q}$, which approximates the range of $A_{global}$ such that $\|A_{global} - UU^T A_{global}\| \leq \epsilon$
  • user preference data can be arranged in a matrix rather than a vector.
• FIG. 14 is a flowchart showing a method for generating a personalized recommendation using distributed average consensus at a particular distributed computing device, e.g., one of the distributed computing devices 1310.
• the distributed computing device 1310 collects 1410 local user preference data $a_i$; e.g., the distributed computing device 1310a collects user preference data 1315a.
  • the distributed computing device 1310 may receive explicit user preference data as user inputs, e.g., ratings or reviews, or the distributed computing device 1310 may determine user preference data 1315 based on monitoring user activity.
• the recommendation module 1320 samples 1420 the local user preference data $a_i$. For example, the recommendation module 1320 generates a random vector $W_i \in \mathbb{R}^{1 \times q}$ and multiplies the local data $a_i$ by the random vector $W_i$ to form the outer product $Y_i = a_i W_i \in \mathbb{R}^{N \times q}$.
• the random vector $W_i$ is a vector of random values, e.g., a Gaussian ensemble. Each distributed computing device 1310 generates the random vector $W_i$ independently.
• the matrix $Y_i$ is an example of the sampled preference data 1330, and $Y_i$ approximates the data in the local data vector $a_i$ (i.e., $Y_i$ approximates the user preference data 1315).
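A minimal sketch of this sampling step is shown below (sizes and names assumed); note that only the N x q outer product Y_i would be exchanged, never the raw preference vector itself.

```python
import numpy as np

def sample_preferences(a_i, q, rng=None):
    """Sample a local preference vector with a random Gaussian row vector.

    a_i: length-N user preference vector (kept private to the device).
    Returns Y_i = a_i W_i, an N x q matrix that obscures the raw ratings.
    """
    rng = np.random.default_rng() if rng is None else rng
    W_i = rng.standard_normal((1, q))        # 1 x q Gaussian ensemble
    return np.outer(a_i, W_i)                # N x q sampled preference data

a_i = np.random.rand(500)                    # hypothetical ratings for 500 items
Y_i = sample_preferences(a_i, q=16)
print(Y_i.shape)                             # (500, 16)
```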
• the recommendation module 1320 of the distributed computing device 1310, in cooperation with the other distributed computing devices, performs 1430 the DAC algorithm on the sampled preference data matrices $Y_i$ to obtain a global DAC result $Y_{global}$, which is the global matrix representing a consensus result for the matrices of sampled preference data 1330.
• $Y_{global}$ can be represented as follows: $Y_{global} = \frac{1}{N_{nodes}} \sum_{i=1}^{N_{nodes}} Y_i = \frac{1}{N_{nodes}} \sum_{i=1}^{N_{nodes}} a_i W_i$
• a distributed computing device 1310 exchanges its sampled preference data matrix $Y_i$ with another distributed computing device 1310.
• distributed computing device 1310a transmits the sampled preference data 1330a to the distributed computing device 1310b, and receives sampled preference data 1330b from distributed computing device 1310b.
• the recommendation module 1320 calculates an average of its sampled preference data matrix $Y_i$ and the sampled preference data matrix received from the other distributed computing device.
• the recommendation module 1320a calculates an average of its sampled preference data 1330a and the received sampled preference data 1330b.
• this results in consensus sampled user preference data, which is a matrix of the same size as the sampled preference data matrix $Y_i$.
  • distributed computing devices 1310 exchange and average their current consensus sampled user preference data.
  • the consensus sampled user preference data is repeatedly exchanged and averaged until a consensus result across the distributed computing devices 1310 is reached.
• the consensus result, which is the matrix $Y_{global}$, is obtained when the consensus sampled user preference data is substantially the same across all the distributed computing devices 1310, e.g., within a specified margin of error.
• the convergence indicator described with respect to FIGs. 5A and 5B may be used to determine when the consensus result $Y_{global}$ has been reached, and to determine whether all distributed computing devices 1310 participated in determining the consensus result.
  • the representative subspace U is a recommendation model that the recommendation module 1320 can apply to an individual user’s user preference data 1315 to determine recommendations for the user.
• Each element of the recommendation vector corresponds to an element in the user preference vector $a_i$. For example, if the user preference vector $a_i$ indicates user preferences for each of a set of movies, the recommendation vector indicates potential user interest in each of the same set of movies.
• the value for a given element of the recommendation vector represents a predicted preference of the user for the item represented by the element.
• the recommendation module 1320 extracts and provides 1460 recommendations based on the recommendation vector to the user of the distributed computing device 1310. For example, the recommendation module 1320 identifies the items in the set corresponding to the elements in the recommendation vector having the highest values, and provides these items to the user as recommendations. In the movie example, the recommendation module 1320 may identify ten movies with the highest values in the recommendation vector and for which the user has not provided preference data in the user preference data 1315, and return these movies as recommendations.
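One way these last steps could look in code is sketched below; treating U U^T a_i as the recommendation vector and selecting the highest-scoring unrated items is an illustrative reading of the description, with assumed names and sizes.

```python
import numpy as np

def recommend(U, a_i, item_names, k=10):
    """Score items by passing the user's preference vector through the shared model.

    U:          N x q recommendation model (shared representative subspace).
    a_i:        this user's length-N preference vector (0 = unrated).
    item_names: names of the N items, in the same order as a_i.
    Returns the k highest-scoring items the user has not already rated.
    """
    scores = U @ (U.T @ a_i)                 # recommendation vector, one score per item
    unrated = np.flatnonzero(a_i == 0)       # only suggest items without existing ratings
    top = unrated[np.argsort(scores[unrated])[::-1][:k]]
    return [(item_names[i], float(scores[i])) for i in top]

# Hypothetical usage with random data.
U, _ = np.linalg.qr(np.random.randn(100, 8))
a = np.zeros(100)
a[:5] = [0.8, 0.5, -0.2, 0.9, 0.1]
print(recommend(U, a, [f"item{i}" for i in range(100)], k=3))
```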
  • a centralized system analyzes documents to determine their latent semantic content.
  • the centralized system stores data describing these documents, receives searches from users, compares the search information to the stored data, and provides relevant documents to users.
  • the centralized system must have access to the documents themselves in order to analyze them.
  • the centralized system collects and analyzes a significant amount of data.
  • the centralized system may also track users’ searches to learn about individual users. Content providers would prefer to provide access to documents without having the documents scraped and analyzed by a search service, and users would prefer to search without a central system collecting and storing data about their behavior.
  • embodiments herein use a cooperative subspace approach that combines the DAC algorithm described above with the theory of random sampling.
• Each cooperating distributed computing device stores one or more documents, and the documents distributed across the set of cooperating distributed computing devices are jointly referred to as a corpus of documents.
  • the documents in the corpus may be documents that their respective users plan to make available for searching by other distributed computing devices, e.g., documents that can be searched by some or all of the cooperating distributed computing devices and/or other devices.
  • the cooperating distributed computing devices jointly generate a latent semantic index based on the corpus of documents, without the contents of any individual document being exposed to other distributed computing devices.
  • each distributed computing device individually analyzes its locally-stored documents, and randomly samples the results of this analysis to generate a matrix that approximates and obscures the content of the local documents.
• the distributed computing devices share their matrices and perform the DAC algorithm described above to generate a matrix reflecting the corpus of documents stored by all cooperating distributed computing devices.
  • Each distributed computing device then extracts a low-dimension latent semantic index (LSI) subspace from the matrix based on the DAC result.
  • This LSI subspace reflects the analysis of all of the documents in the corpus, but is much smaller than a matrix concatenating the raw analysis results of the local documents.
  • the cooperative subspace approach allows the subspace to be calculated efficiently, and the random sampling obscures the underlying documents so that privacy is maintained.
  • the LSI subspace generated through this approach can be used for various applications.
  • one distributed computing device can search for documents on other distributed computing devices using the LSI subspace.
  • the searching distributed computing device receives a search request that may include, for example, one or more keywords (i.e., a keyword search) or one or more documents (e.g., for a search for similar documents).
  • the searching device represents the received search request in the subspace and transmits the representation of the search request to the cooperating distributed computing devices, or some other set of searchable devices.
  • Each distributed computing device being searched compares the received representation of the search request to representations of the distributed computing device’s local documents in the same subspace.
• a corpus can be constructed, and a search performed, on any type of text-based document. For example, a subspace constructed from a corpus of resumes can be used to conduct a hiring search, or a subspace constructed from a corpus of dating profiles can be used to implement a dating service.
  • FIG. 15 illustrates a distributed environment 1500 for generating a low-dimension subspace for latent semantic indexing, according to one embodiment.
• the environment 1500 includes a number N of distributed computing devices 1510, referred to as distributed computing device 1510a through distributed computing device 1510N.
  • the distributed computing devices 1510 may be embodiments of the distributed computing devices 130 described above.
  • Each distributed computing device 1510 includes a set of documents 1515 and a latent semantic indexing (LSI) module 1520.
  • the documents 1515 are any text-based or text-containing documents on or accessible to the distributed computing device 1510.
  • the documents 1515 are locally stored on the distributed computing device 1510.
  • the documents 1515 are documents that are accessible to the distributed computing device 1510, but not permanently stored on the distributed computing device 1510.
  • the documents 1515 may be documents that the distributed computing device 1510 accesses from an external hard drive, from a networked server with dedicated storage for the distributed computing device 1510, from cloud-based storage, etc.
  • the documents 1515 may be any file format, e.g., text files, PDFs, LaTeX, HTML, etc.
  • the documents 1515 form a general corpus of documents, such as a corpus of websites, a corpus of text-based documents, or a corpus including any files the distributed computing devices 1510 are willing to share with other distributed computing devices.
  • the documents 1515 form a specialized corpus of documents that users wish to share, such as resumes, dating profiles, social media profiles, research papers, works of fiction, computer code, recipes, reference materials, etc.
  • the LSI module 1520 uses the documents 1515 to generate, in conjunction with the other distributed computing devices, a low-dimension subspace in which the documents 1515 can be represented and compared.
  • the LSI module 1520 includes a calculation module 1525 that operates on the documents 1515. Using the documents 1515, the calculation module 1525 generates word counts 1530 and sampled word counts 1535. Using the sampled word counts 1535 and working in conjunction with the other distributed computing devices, the calculation module 1525 generates the LSI subspace 1540.
  • the calculation module 1525 analyzes the documents 1515 to calculate the word counts 1530 for each document.
  • each document is first represented as a vector in which each vector element represents a distinct word.
• the value for each element in the word count vector is the number of times the corresponding word appears in the document. For example, if in a given document, the word “patent” appears five times and the word “trademark” appears three times, the element in the vector corresponding to “patent” is assigned a value of five, and the element corresponding to “trademark” is assigned a value of three.
  • the elements in the word count vector are mathematically related to the actual word counts, e.g., the values in the word count vector are normalized or otherwise proportional to the actual word counts of the document.
• the words represented by the vector elements can be, e.g., all words in a given dictionary, a set of words that excludes stop words, or a set of words that groups words with the same word stem (e.g., one element may group “patent,” “patents,” and “patenting”).
• the calculation module 1525 calculates a word count vector for each document. In some embodiments, if the distributed computing device 1510 includes multiple documents 1515, the calculation module 1525 calculates a separate word count vector for each of the documents 1515.
  • the distributed computing device 1510 may combine multiple documents into a single vector (e.g., two related documents), or separate a single document into multiple word count vectors (e.g., a long document, or a document that has subsections).
  • the calculation module 1525 concatenates the word count vectors for the documents 1515 to form a word count matrix.
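A simplified sketch of this word counting step is shown below; the tokenization, the tiny shared vocabulary, and the function names are assumptions, and a real implementation would use an agreed-upon dictionary, stop-word handling, and stemming as described above.

```python
import re
import numpy as np

def word_count_matrix(documents, vocabulary):
    """Build the local word count matrix A_i: one column per document.

    documents:  list of raw text strings local to this device.
    vocabulary: shared, ordered list of N words every device agrees on.
    """
    index = {word: row for row, word in enumerate(vocabulary)}
    A_i = np.zeros((len(vocabulary), len(documents)))
    for col, text in enumerate(documents):
        for token in re.findall(r"[a-z]+", text.lower()):
            if token in index:
                A_i[index[token], col] += 1
    return A_i

vocab = ["patent", "trademark", "copyright"]
docs = ["Patent patent patent trademark", "Copyright notice"]
print(word_count_matrix(docs, vocab))
# [[3. 0.]
#  [1. 0.]
#  [0. 1.]]
```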
  • the calculation module 1525 samples the word counts 1530 to calculate the sampled word counts 1535.
  • the sampled word counts 1535 are a mathematical function of the word counts 1530 that involves random sampling, such as multiplying the matrix of word counts 1530 by a random matrix.
• the sampled word counts 1535 are shared with the other distributed computing devices in a peer-to-peer fashion. For example, distributed computing device 1510b shares its sampled word counts 1535b with both distributed computing device 1510a and distributed computing device 1510N, and receives the sampled word counts 1535a and 1535N from distributed computing devices 1510a and 1510N, respectively.
  • the distributed computing devices 1510 form various sets of connections, as described with respect to FIGs. 4A and 4B, and exchange and average the sampled word counts until the distributed computing devices 1510 reach a consensus result according to the DAC algorithm, as described with respect to FIGs. 4A-5B.
• the word counts 1530 do not leave any one of the distributed computing devices 1510. Representing the documents 1515 as word counts 1530 and then sampling the word counts 1530 to generate the sampled word counts 1535 that are shared among the distributed computing devices 1510 obscures the underlying documents 1515, so that privacy of the documents is maintained. For example, when distributed computing device 1510a receives the sampled word counts 1535b from another distributed computing device 1510b, the distributed computing device 1510a cannot recover the documents 1515b, or even the word counts 1530b, from the sampled word counts 1535b. This is advantageous for applications where users want other users to be able to find their documents, but do not wish to provide full public access to their documents.
• the distributed computing devices 1510a-1510N run a consensus algorithm, such as the distributed average consensus (DAC) algorithm described above, on the exchanged sampled word counts 1535 to obtain a consensus result for the sampled word counts 1535.
• the distributed computing devices 1510a-1510N may also use a convergence indicator, such as the convergence indicator described above with respect to FIGs. 5A and 5B, to determine when a consensus result for the sampled word counts 1535 has been reached by all of the distributed computing devices 1510a-1510N.
• the distributed computing devices 1510a-1510N perform the DAC process on matrices of the sampled word counts 1535 to obtain a global matrix of the same size as the matrices of the sampled word counts 1535.
  • each calculation module 1525 independently calculates an LSI subspace 1540 from the consensus result. While FIG. 15 indicates that all distributed computing devices 1510 have the same LSI subspace 1540, the calculated LSI subspaces may vary slightly between distributed computing devices 1510, e.g., within a margin of error tolerance permitted for consensus.
  • the distributed computing devices 1510 can then apply the LSI subspace 1540 to analyze their own documents 1515 and to search for documents on other distributed computing devices.
  • the word counts 1530 are typically sparse but very large matrices, particularly when a distributed computing device 1510 contains a large number of documents 1515. While a matrix of sampled word counts for a single distributed computing device’s documents can be stored on and manipulated by a single distributed computing device 1510, if the number of distributed computing devices N is large, a single distributed computing device may not be able to store all of the sampled word count data generated by the N distributed computing devices, or even a portion of the N distributed computing devices.
  • the distributed computing devices 1510 exchange and manipulate matrices of the size of the matrix of sampled word counts 1535 to generate a global matrix of the same size as the matrix of sampled word counts 1535. At no point during the DAC process does a distributed computing device 1510 store close to the amount of word count data or sampled word count data generated by all N devices.
• $A_i \in \mathbb{R}^{N \times k_i}$ represents a matrix of word counts 1530 of the $k_i$ documents local to node $i$, and the cooperative subspace approach computes a representative subspace $U \in \mathbb{R}^{N \times q}$ that approximates the range of the global word count matrix $A_{global}$ such that $\|A_{global} - UU^T A_{global}\| \leq \epsilon$.
• $\epsilon$ is a user-specified tolerance on the accuracy of the approximation between 0 and 1.
  • FIG. 16 is a flowchart showing a method for generating a low-dimension subspace for latent semantic indexing using distributed average consensus at a particular node i, e.g., one of the distributed computing devices 1510.
• the LSI module 1520 generates 1610 a local word count matrix $A_i$ for a set of local documents.
• the calculation module 1525a calculates the word counts 1530a for a set of documents 1515a accessible to the distributed computing device 1510a.
• the LSI module 1520 samples 1620 the local word count data $A_i$.
• the calculation module 1525 generates a random matrix $W_i \in \mathbb{R}^{k_i \times q}$ and multiplies the random matrix $W_i$ by the local word count matrix $A_i$.
• the random matrix $W_i$ is a matrix of random values, e.g., a Gaussian ensemble matrix.
  • Each distributed computing device 1510 generates the random matrix independently.
• the calculation module 1525 multiplies its local word count matrix $A_i$ by the random matrix $W_i$ to generate the product $Y_i = A_i W_i \in \mathbb{R}^{N \times q}$.
• the matrix $Y_i$ is an example of the sampled word counts 1535, and $Y_i$ approximates the data in the local word count matrix $A_i$ (i.e., $Y_i$ approximates the word counts 1530).
• the LSI module 1520 of the distributed computing device 1510, in cooperation with the other distributed computing devices, performs 1630 the DAC algorithm on the sampled word count matrices $Y_i$ to obtain a global DAC result matrix $Y_{global}$, which is the global matrix representing a consensus result for the matrices of sampled word counts 1535.
• $Y_{global}$ can be represented as follows: $Y_{global} = \frac{1}{N_{nodes}} \sum_{i=1}^{N_{nodes}} Y_i = \frac{1}{N_{nodes}} \sum_{i=1}^{N_{nodes}} A_i W_i$
• a distributed computing device 1510 exchanges its sampled word count matrix $Y_i$ with another distributed computing device 1510.
• distributed computing device 1510a transmits the sampled word counts 1535a to the distributed computing device 1510b, and receives sampled word counts 1535b from distributed computing device 1510b.
• the LSI module 1520 calculates an average of its sampled word count matrix $Y_i$ and the sampled word count matrix received from the other distributed computing device.
• the calculation module 1525a of the LSI module 1520a calculates an average of its matrix of sampled word counts 1535a and the received matrix of sampled word counts 1535b. This results in a consensus sampled word count matrix, which is a matrix of the same size as the sampled word count matrix $Y_i$.
  • distributed computing devices 1510 exchange and average their current consensus sampled word count matrices.
  • the consensus sampled word count matrices are repeatedly exchanged and averaged until a consensus result across the distributed computing devices 1510 is reached.
• the consensus result, which is the matrix $Y_{global}$, is obtained when the consensus sampled word counts are substantially the same across all the distributed computing devices 1510, e.g., within a specified margin of error.
• the convergence indicator described with respect to FIGs. 5A and 5B may be used to determine when the consensus result $Y_{global}$ has been reached, and to determine whether all distributed computing devices 1510 participated in determining the consensus result.
• the sampled word count matrices $Y_i$, and therefore the consensus sampled word count matrices and the global consensus result $Y_{global}$, are sufficiently small to be stored on and manipulated by a single distributed computing device 1510.
• the LSI module 1520 extracts 1640 a low-dimension LSI subspace matrix $U$ from the DAC result $Y_{global}$ that spans the range of $Y_{global}$.
• the distributed computing device 1510 (and each other cooperating distributed computing device) holds a copy of the representative subspace, $U \in \mathbb{R}^{N \times q}$, which approximately spans the range of the global word count data matrix $A_{global}$.
• the LSI subspace matrix $U$ is a low-dimension subspace 1540 (e.g., has a low dimension relative to $A_{global}$) that the LSI module 1520 can use for various applications.
• the LSI module 1520 can project a document into the LSI subspace 1540 to determine the latent semantic content of a document, or the LSI module 1520 can compare the latent semantic content of multiple documents by projecting the documents into the same LSI subspace.
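The extraction method is not fixed here; a common way to obtain an orthonormal basis that spans the range of Y_global is a thin QR (or SVD) factorization, sketched below with assumed names and sizes.

```python
import numpy as np

def extract_lsi_subspace(Y_global):
    """Return U, an orthonormal N x q basis spanning the range of Y_global."""
    U, _ = np.linalg.qr(Y_global)
    return U

def project_document(U, word_count_vector):
    """Represent a document in the low-dimension LSI subspace (length-q vector)."""
    return U.T @ word_count_vector

# Hypothetical sizes: 50,000-word vocabulary, sketch width 64.
Y_global = np.random.randn(50_000, 64)
U = extract_lsi_subspace(Y_global)
doc_vec = np.random.rand(50_000)
print(project_document(U, doc_vec).shape)    # (64,)
```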
  • FIG. 17 is a flowchart showing a method for searching for documents in the distributed environment based on the LSI subspace, according to one embodiment.
  • a requesting device e.g., one of the distributed computing devices 1510, receives a search request and generates 1710 a vector s of word counts for the search.
  • the search request may be, for example, a set of keywords, or one or more documents.
• a searching user (e.g., a hiring manager) may provide or select (e.g., from the documents 1515) one or more resumes of current, successful employees or other candidates to search for similar candidates.
• a calculation module 1525 of the requesting device generates the word count vector $s$ in a similar manner to generating the word counts $A_i$.
• the requesting device calculates 1720 a subspace search vector $\hat{s}$ by projecting the word count vector $s$ into the LSI subspace (e.g., $\hat{s} = U^T s$).
• the subspace search vector characterizes the search request in the LSI subspace, and is a lower-dimension vector than the word count vector $s$ (i.e., $\hat{s} \in \mathbb{R}^q$, $s \in \mathbb{R}^N$, $q \ll N$).
  • the subspace search vector characterizes the skills and attributes being sought by a hiring manager in the LSI subspace.
• the requesting device transmits 1730 the subspace search vector $\hat{s}$ to a set of searchable devices for document searching.
  • the searchable devices are a set of devices that accept search requests from requesting devices, and that have a copy of the LSI subspace matrix U.
  • the searchable devices include the same distributed computing devices 1510 that cooperated to generate the LSI subspace matrix U, or a subset of these distributed computing devices.
  • the searchable devices include devices that did not cooperate to generate U, but obtained U from another device.
  • the searchable devices each compare 1740 the received subspace search vector to subspace vectors in the same LSI subspace used to characterize the searchable devices’ local documents (e.g., documents 1515).
  • the subspace vectors characterizing searchable devices’ local documents for searching are referred to as target vectors.
  • Each searchable device calculates the target vectors in the same manner as the subspace search vector was calculated in 1720.
  • the searchable devices may calculate and store the target vectors for their local documents prior to receiving the request, e.g., after obtaining the LSI subspace matrix U at 1640 in FIG. 16, or after receiving the LSI subspace matrix from another device.
  • a searchable device may calculate a dot product of the search vector and the target vector, a Euclidean distance between the search vector and the target vector, or some other measure of distance between the two vectors.
• the searchable devices return 1750 any local documents, or data describing local documents, that were determined to be relevant to the requesting device’s search. For example, if a searchable device calculates a Euclidean distance to compare the search vector to the target vector of each local document, the searchable device may provide data describing any documents with target vectors that have a Euclidean distance to the search vector below a threshold value. Alternatively, a searchable device may return data describing a set of documents with the closest match (e.g., the ten closest matching documents), or data describing all documents and their match values. The match value indicates the measure of distance between the search vector and the target vector.
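A sketch of the comparison and ranking a searchable device might perform is shown below; using Euclidean distance and returning a fixed number of closest matches is just one of the options mentioned above, and the names are assumptions.

```python
import numpy as np

def search_local_documents(s_hat, target_vectors, doc_ids, top_k=10):
    """Rank this device's documents against a received subspace search vector.

    s_hat:          length-q subspace search vector from the requesting device.
    target_vectors: q x M matrix, one column per local document, in the same subspace.
    doc_ids:        identifiers for the M local documents.
    Returns (doc_id, distance) pairs for the top_k closest documents.
    """
    distances = np.linalg.norm(target_vectors - s_hat[:, None], axis=0)
    order = np.argsort(distances)[:top_k]
    return [(doc_ids[i], float(distances[i])) for i in order]
```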
  • the returned data may include a document identifier, the match value (e.g., the Euclidean distance or the dot product), and some information describing the document, such as a title, author, date of creation or publication, etc.
• the information returned may depend on the context; for example, for a resume search, the searchable device may return a candidate overview (e.g., current position, desired position, location) that is machine-generated or supplied by the candidate. Based on the returned results, the searching device may request one or more full documents from one or more searchable devices.
  • one or more searchable devices store target vectors describing documents stored on one or more other devices.
• in this case, the searchable device (e.g., a web server) does not access the full documents, but instead only receives, from the documents’ owners, the target vectors that characterize the documents in the subspace.
  • the searchable device can return information for retrieving matching documents from the devices that store the matching documents.
  • the LSI subspace matrix U can be used for other applications besides document searching.
• when a document’s word count vector is projected into the LSI subspace and back, the values in the resulting vector $a$, each of which corresponds to a particular word in the set of $N$ words (e.g., the set of words in a particular dictionary), indicate the relevance of each word to the document.
• the words that have high values (e.g., the five or ten words corresponding to the highest values in the vector $a$) may be identified as the most relevant words for the document.
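A short sketch of this use is shown below; reconstructing the document through the subspace as a = U U^T x and keeping the top few words are illustrative assumptions.

```python
import numpy as np

def top_words(U, word_count_vector, vocabulary, n=5):
    """Rank vocabulary words by their relevance to one document via the LSI subspace."""
    a = U @ (U.T @ word_count_vector)        # project into the subspace and back
    best = np.argsort(a)[::-1][:n]           # indices of the n highest-valued words
    return [vocabulary[i] for i in best]
```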
  • a software module is implemented with a computer program product comprising a computer-readable medium containing computer program code, which can be executed by a computer processor for performing any or all of the steps, operations, or processes described.
  • Embodiments may also relate to an apparatus for performing the operations herein.
  • This apparatus may be specially constructed for the required purposes, and/or it may comprise a general-purpose computing device selectively activated or reconfigured by a computer program stored in the computer.
  • a computer program may be stored in a non-transitory, tangible computer readable storage medium, or any type of media suitable for storing electronic instructions, which may be coupled to a computer system bus.
  • any computing systems referred to in the specification may include a single processor or may be architectures employing multiple processor designs for increased computing capability.
  • Embodiments may also relate to a product that is produced by a computing process described herein.
  • a product may comprise information resulting from a computing process, where the information is stored on a non-transitory, tangible computer readable storage medium and may include any embodiment of a computer program product or other data combination described herein.

Abstract

Each of a plurality of distributed computing devices receives a respective data partition of a plurality of data partitions for a computing task. A first distributed computing device generates a first partial result of a plurality of partial results generated by the plurality of distributed computing devices. The first computing device iteratively executes a distributed average consensus (DAC) process. At each iteration, the first computing device transmits the first partial result to a second computing device, receives a second partial result generated by the second computing device, and updates the first partial result by computing an average of the first and second partial results. In response to determining that respective partial results of the plurality of distributed computing devices have reached a consensus value, the first computing device stops executing the DAC process, and generates a final result of the computing task based on the consensus value.

Description

DISTRIBUTED HIGH PERFORMANCE COMPUTING USING
DISTRIBUTED AVERAGE CONSENSUS
CROSS REFERENCE TO RELATED APPLICATIONS
[0001] This application claims the benefit of U.S. Provisional Application No. 62/619,715, filed January 19, 2018; U.S. Provisional Application No. 62/619,719, filed January 19, 2018; U.S. Provisional Application No. 62/662,059, filed April 24, 2018; U.S. Provisional Application No. 62/700,153, filed July 18, 2018; U.S. Provisional Application No. 62/727,355, filed September 5, 2018; and U.S. Provisional Application No. 62/727,357, filed September 5, 2018, each of which is incorporated by reference in its entirety.
BACKGROUND
CONVERGENCE IN DISTRIBUTED COMPUTING
[0002] Distributed computing can be used to break a large computation into sub-components, assign distributed computing devices components of the computation, and combine the results from the distributed computing devices to generate the result of the computation. Existing methods for distributed computing use various techniques to obtain a result from a distributed computing task, e.g., selecting a coordinator to evaluate the sub-component results, or determining a majority result. Typical distributed computing operations are designed to be fault-tolerant, which allows convergence even if a computing device was not able to perform its assigned portion of the computation. However, such operations also allow a computing device that claims to contribute to the computation, but did not contribute, to converge with the other computing devices. Thus, in a typical distributed computing operation, the convergence result will not indicate if any computing devices did not participate in calculating the result. This is problematic in situations where computing devices receive compensation for their work, because a computing device may be able to receive compensation without performing any work.
UPDATING AI MODELS
[0003] One use for distributed computing devices relates to improving artificial intelligence (AI) models. Distributed computers connected to a network can implement an AI model and also collect data that is used to update and improve the AI model. In current systems for improving AI models using data collected by the distributed computers, a“gather and scatter” method is used to generate and propagate updates to the AI models determined from the collected data. In the gather and scatter method, distributed computers collect data and transmit the data to a central server. The central server updates the AI model and transmits the updated AI model to the distributed computers. The central server must be reliable, and each distributed computer must have a reliable connection to the server to provide data to and receive model updates from the central server. This gather and scatter method requires a large amount of computing to be performed at the central server, and does not take advantage of the computing resources of the distributed computers.
PERSONALIZED RECOMMENDATIONS
[0004] In conventional systems for generating personalized recommendations, a centralized system obtains user preference data and generates recommendations based on the obtained data. For example, the centralized system may collect data describing users’ prior purchases and product ratings, or data tracking user behavior, such as clickstream data. The centralized system uses this collected data to provide personalized recommendations to users. However, such centralized systems may often also exploit users’ personal data for other purposes, such as targeting content towards them or selling the data to third parties. Many users would prefer to receive personalized recommendations without having a centralized system collect, store, or distribute their personal data.
LATENT SEMANTIC INDEXING
[0005] Latent semantic indexing (LSI) is a mathematical tool for indexing text that is used for indexing and retrieving content from a large number of unstructured text-based documents, such as web pages. LSI is used for various applications, such as search engines and document comparison. In current search engines, a central server indexes a set of searchable content and allows other users to search this content through the central server. For example, a search engine uses a web crawler to retrieve publicly-accessible websites or other documents and stores information describing the documents’ content. The search engine provides a search interface to which a user can submit queries, and upon receiving a search query, the search engine compares the query to the stored information and provides relevant results.
[0006] In current search implementations, the search engine system obtains and analyzes both the documents being searched and the search queries. Thus, current search engines require information providers to make their documents publicly available, or at least available to the search engine, to allow others to search the documents. In addition, centralized search engines can collect data on their users’ behaviors. Many information providers and users would prefer a search implementation that does not involve a centralized system collecting and storing their data.
SUMMARY
[0007] Systems and methods for performing computations in a distributed environment are described herein. To perform a computation in the distributed environment, different portions of the computation are assigned to different computing devices, and the results of the portions are combined to determine the computation result. The computation is portioned in such a way that the computing devices can exchange their portioned results in a peer-to-peer fashion, and perform a consensus algorithm that both (1) obtains the final computation result and (2) confirms that all of the contributing devices have performed their assigned portion of the computation. In particular, the computing devices perform a distributed average consensus (DAC) algorithm in which the computing devices repeatedly form connections, exchange data, and calculate an average of the exchanged data, which is used as the data to exchange in a subsequent step. When this procedure leads to a consensus (e.g., the averages across all computing devices settle around a consensus average value), the result of the DAC algorithm indicates whether each computing device has contributed to the calculation of the average. Thus, the DAC procedure is able to confirm that each computing device in the distributed environment has contributed to the calculation. The DAC procedure confirms that each computing device has participated using the same connections that are used to obtain the consensus result; thus, no additional routing protocols or overlay topologies are needed to confirm participation.
[0008] In addition to the DAC environment and algorithm, several exemplary applications for DAC are described herein. Distributed implementations for calculating a dot product, calculating a matrix-vector product, performing a least squares calculation, and performing decentralized Bayesian parameter learning are described. A method for distributed AI learning is also described. A method for generating a subspace for recommendations, and exemplary uses of the recommendation model, are also described. A method for generating a subspace for latent semantic indexing, and exemplary uses of the latent semantic index, are also described.
[0009] One application involves cooperatively generating a recommendation model and generating personalized recommendations based on the model without exposing personal user data. To generate the recommendation model, cooperating distributed computing devices use a cooperative subspace approach that combines the DAC algorithm with the theory of random sampling. Each distributed computing device randomly samples local user preference data in a cooperative subspace that approximates the user preference data reflecting the users of all cooperating distributed computing devices. The sampled preference data is shared among the cooperating distributed computing devices. The distributed computing devices use the DAC algorithm to cooperatively create a recommendation model based on the sampled preference data of many users. Each distributed computing device individually applies the recommendation model to the distributed computing device’s local preference data to generate personalized recommendations for the user of the distributed computing device. The cooperative subspace approach allows the DAC algorithm to be performed efficiently, and the random sampling obscures the underlying user preference data so that users’ data privacy is maintained.
[0010] In another application, to generate a latent semantic index and enable searching in a distributed manner, a set of cooperating distributed computing devices use a cooperative subspace approach that combines the DAC algorithm with the theory of random sampling. Each cooperating distributed computing device stores one or more documents, and the documents distributed across the set of cooperating distributed computing devices are jointly referred to as a corpus of documents. The documents in the corpus may be documents that their respective users plan to make available for searching by other distributed computing devices, e.g., documents that can be searched by some or all of the cooperating distributed computing devices and/or other devices.
[0011] The cooperating distributed computing devices jointly generate a latent semantic index based on the corpus of documents, without the contents of any individual document being exposed to other distributed computing devices. First, each distributed computing device individually analyzes its locally-stored documents, and randomly samples the results of this analysis to generate a matrix that approximates and obscures the content of the local documents in each distributed computing device. The distributed computing devices share their matrices and perform the DAC algorithm described above to generate a matrix reflecting the corpus of documents stored by all cooperating distributed computing devices. Each distributed computing device then extracts a low-dimension latent semantic index (LSI) subspace from the matrix based on the DAC result. This LSI subspace reflects the analysis of all of the documents in the corpus, but is much smaller than a matrix concatenating the raw analysis results of the local documents in each distributed computing device. The cooperative subspace approach allows the LSI subspace to be calculated efficiently, and the random sampling obscures the underlying documents so that privacy is maintained.
[0012] In one embodiment, a first distributed computing device of a plurality of distributed computing devices receives over a network a data partition of a plurality of data partitions for a computing task. Each of the plurality of distributed computing devices is assigned a respective data partition of the plurality of data partitions. The first distributed computing device generates a first partial result of a plurality of partial results generated by the plurality of distributed computing devices. The first distributed computing device iteratively executes a distributed average consensus (DAC) process. The DAC process includes, for each iteration of the process, transmitting the first partial result of the first distributed computing device to a second distributed computing device of the plurality of distributed computing devices, receiving a second partial result generated by the second distributed computing device from the second distributed computing device, and updating the first partial result of the first distributed computing device by computing an average of the first partial result and the second partial result. In response to determining that respective partial results of the plurality of distributed computing devices have reached a consensus value, the first distributed computing device determines to stop executing the DAC process. The first distributed computing device generates a final result of the computing task based on the consensus value. [0013] In one embodiment, an intermediary computing device receives over a network a request for a computing task from a requesting computing device. The request includes a set of requirements for the computing task. The intermediary computing device transmits at least a portion of the set of requirements to a plurality of distributed computing devices over the network. The intermediary computing device receives over the network commitments from a plurality of distributed computing devices to perform the computing task. Each of the plurality of distributed computing devices meets the portion of the set of requirements. The intermediary computing device transmits, to each of the plurality of distributed computing devices, a respective data partition of a plurality of data partitions for the computing task. The plurality of distributed computing devices are configured to iteratively execute a distributed average consensus (DAC) process to calculate a consensus value for the computing task. The
intermediary computing device returns a result of the computing task to the requesting computing device.
[0014] In one embodiment, a method for cooperative learning is described. A distributed computing device generates a gradient descent matrix based on data received by the distributed computing device and a model stored on the distributed computing device. The distributed computing device calculates a sampled gradient descent matrix based on the gradient descent matrix and a random matrix. The distributed computing device iteratively executes a process to determine a consensus gradient descent matrix in conjunction with a plurality of additional distributed computing devices connected by a network. The consensus gradient descent matrix is based on the sampled gradient descent matrix and a plurality of additional sampled gradient descent matrices calculated by the plurality of additional distributed computing devices. The distributed computing device updates the model stored on the distributed computing device based on the consensus gradient descent matrix.
[0015] In one embodiment, a method for generating personalized recommendations is described. A distributed computing device stores user preference data representing preferences of a user with respect to a portion of a set of items. The distributed computing device calculates sampled user preference data by randomly sampling the user preference data. The distributed computing device iteratively executes, in conjunction with a plurality of additional distributed computing devices connected to the distributed computing device by a network, a process to determine a consensus result for the sampled user preference data. The consensus result is based on the sampled user preference data calculated by the distributed computing device and additional sampled user preference data calculated by the plurality of additional distributed computing devices. The additional sampled user preference data is based on preferences of a plurality of additional users. The distributed computing device determines a recommendation model based on the consensus result for the sampled user preference data. The recommendation model reflects the preferences of the user and the plurality of additional users. The distributed computing device identifies an item of the set of items to provide to the user as a
recommendation based on the recommendation model, and provides the recommendation of the item to the user.
[0016] In one embodiment, a method for generating a latent semantic index is described. A distributed computing device calculates word counts for each of a set of documents. The word counts for each of the set of documents are represented as a plurality of values, each value representing a number of times a corresponding word appears in one of the set of documents.
The distributed computing device calculates sampled word counts by randomly sampling the word counts. The distributed computing device, in conjunction with a plurality of additional distributed computing devices connected to the distributed computing device by a network, iteratively executes a process to determine a consensus result for the sampled word counts. The consensus result is based on the sampled word counts calculated by the distributed computing device and additional sampled word counts calculated by the plurality of additional distributed computing devices, the additional sampled word counts based on additional sets of documents. The distributed computing device determines a latent semantic index (LSI) subspace based on the consensus result for the sampled word counts. The LSI subspace reflects contents of the set of documents and the additional sets of documents. The distributed computing device projects a document into the LSI subspace to determine the latent semantic content of the document.
[0017] In one embodiment, a method for performing a search is described. A search device calculates a word count vector for one of a document or a set of keywords. Each element of the word count vector has a value representing instances of a different word in the document or the set of keywords. The search device projects the word count vector into a latent semantic index (LSI) subspace to generate a subspace search vector characterizing the document in the LSI subspace. The LSI subspace is generated cooperatively by a plurality of distributed computing devices connected by a network based on a corpus of documents, the LSI subspace reflecting contents of the corpus of documents. The search device transmits the subspace search vector to a target device as a search request. The search device receives from the target device, in response to the search request, data describing a target document that matches the search request. The target device determines that the target document matches the search request by comparing the subspace search vector to a target vector characterizing the target document in the LSI subspace.
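For illustration only, the following sketch shows the search flow just described with a stand-in LSI subspace: a word count vector for a set of keywords and a word count vector for a target document are projected into the subspace and compared. The randomly generated basis, the vocabulary size, and the cosine-similarity comparison are assumptions of this sketch, not requirements of the embodiments described herein.

```python
# Sketch of projecting word count vectors into an LSI subspace and comparing them.
# The basis U below is a random stand-in; in the described system the subspace is
# produced cooperatively by the distributed computing devices.
import numpy as np

rng = np.random.default_rng(0)
vocab_size, subspace_dim = 1000, 20
U = np.linalg.qr(rng.standard_normal((vocab_size, subspace_dim)))[0]  # stand-in LSI basis

def project(word_counts, basis):
    """Project a word count vector into the LSI subspace."""
    return basis.T @ word_counts

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

query_counts = np.zeros(vocab_size)
query_counts[[3, 17, 256]] = [2, 1, 4]            # keyword counts for the search request
target_counts = np.zeros(vocab_size)
target_counts[[3, 17, 256, 730]] = [5, 2, 9, 1]   # word counts of a stored target document

search_vector = project(query_counts, U)
target_vector = project(target_counts, U)
print(cosine(search_vector, target_vector))  # a high similarity indicates a likely match
```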
BRIEF DESCRIPTION OF THE DRAWINGS
[0018] FIG. 1 is a flow diagram showing contract formation in an environment for distributed computing, according to one embodiment.
[0019] FIG. 2 is a flow diagram showing publishing of distributed computing device information in the environment for distributed computing, according to one embodiment.
[0020] FIG. 3 is a block diagram showing peer-to-peer connections between distributed computing devices, according to one embodiment.
[0021] FIG. 4A is a diagram showing a first arrangement of peer connections among a group of distributed computing devices at a first time, according to one embodiment.
[0022] FIG. 4B is a diagram showing a second arrangement of peer-to-peer connections among the group of distributed computing devices at a second time, according to one embodiment.
[0023] FIG. 5A is a graphical illustration of an initialized distributed average consensus convergence indicator, according to one embodiment.
[0024] FIG. 5B is a graphical illustration of a first peer-to-peer update in a distributed average consensus convergence indicator, according to one embodiment.
[0025] FIG. 6 illustrates an example of using distributed computing devices to perform a distributed dot product calculation, according to one embodiment.
[0026] FIG. 7 illustrates an example of using distributed computing devices to perform a distributed matrix-vector product calculation, according to one embodiment.
[0027] FIG. 8 illustrates an example of using distributed computing devices to perform a distributed least squares calculation, according to one embodiment. [0028] FIG. 9 illustrates an example of using distributed computing devices to perform decentralized Bayesian parameter learning, according to one embodiment.
[0029] FIG. 10 is a flow diagram illustrating a prior art procedure for training an artificial intelligence (AI) model.
[0030] FIG. 11 is a flow diagram illustrating a procedure for training an artificial intelligence (AI) model using distributed average consensus, according to one embodiment.
[0031] FIG. 12 is a flowchart showing a method for determining a consensus result within a cooperative subspace, according to one embodiment.
[0032] FIG. 13 is a flow diagram illustrating a distributed environment for generating a personalized recommendation model using distributed average consensus, according to one embodiment.
[0033] FIG. 14 is a flowchart showing a method for generating a personalized
recommendation using a cooperative subspace algorithm, according to one embodiment.
[0034] FIG. 15 is a flow diagram illustrating a distributed environment for generating a low- dimension subspace for latent semantic indexing, according to one embodiment.
[0035] FIG. 16 is a flowchart showing a method for generating a low-dimension subspace for latent semantic indexing using distributed average consensus, according to one embodiment.
[0036] FIG. 17 is a flowchart showing a method for searching for documents in the distributed environment based on the latent semantic index, according to one embodiment.
DETAILED DESCRIPTION
[0037] The Figures (FIGs.) and the following description describe certain embodiments by way of illustration only. One skilled in the art will readily recognize from the following description that alternative embodiments of the structures and methods illustrated herein may be employed without departing from the principles described herein. Reference will now be made in detail to several embodiments, examples of which are illustrated in the accompanying figures.
[0038] It is noted that wherever practicable similar or like reference numbers may be used in the figures and may indicate similar or like functionality. A letter after a reference numeral, such as “130a,” indicates that the text refers specifically to the element having that particular reference numeral. A reference numeral in the text without a following letter, such as “130,” refers to any or all of the elements in the figures bearing that reference numeral. For example, “130” in the text refers to reference numerals “130a” and/or “130b” and/or “130c” in the figures.
DISTRIBUTED AVERAGE CONSENSUS (DAC) ENVIRONMENT
[0039] The DAC algorithm can be implemented in a two-sided market that includes requesting computing devices seeking computing power and distributed computing devices that provide computing power. The requesting computing devices, or users of the requesting computing devices, want to run a computing task on the distributed computing devices. The requesting computing devices may be used by scientists, statisticians, engineers, financial analysts, etc. The requesting computing device can transmit requests to one or more
intermediary computing devices, which coordinate the fulfillment of the request with a set of distributed computing devices. The requesting computing devices request compute time on the distributed computing devices, and may provide compensation to the distributed computing devices in exchange for compute time. The arrangement between a requesting computing device and a set of distributed computing devices can be represented by a smart contract. A smart contract is an agreement made between multiple computing devices (e.g., a set of distributed computing devices, or a requesting computing device and a set of distributed computing devices) to commit computing resources to a computing task. A smart contract specifies a set of technical requirements for completing the computing task, and may specify compensation for completing the computing task or a portion of the computing task. The smart contract may include a list of distributed computing devices that have agreed to the smart contract. In some embodiments, smart contracts are published to a blockchain.
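For illustration only, one possible in-memory representation of such a smart contract is sketched below; the field names, values, and signing logic are hypothetical examples rather than a prescribed format.

```python
# Hypothetical sketch of a smart contract record and its signing step.
smart_contract = {
    "task_id": "task-0001",
    "requirements": {                 # technical requirements from the job requirements
        "min_memory_gb": 8,
        "min_disk_gb": 100,
        "min_processors": 4,
        "min_bandwidth_mbps": 50,
    },
    "compensation_per_checkpoint": 0.05,  # compensation for portions of the task
    "required_device_count": 3,           # requisite number of distributed computing devices
    "signatories": [],                    # devices that have agreed to the contract
}

def sign(contract, device_id):
    """A device appends itself to the contract's signatories; returns True once enough have signed."""
    contract["signatories"].append(device_id)
    return len(contract["signatories"]) >= contract["required_device_count"]

ready = False
for device in ["130a", "130b", "130c"]:
    ready = sign(smart_contract, device)
print("requisite number reached:", ready)
```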
[0040] The requesting computing devices, intermediary computing devices, and distributed computing devices are computing devices capable of transmitting and receiving data via a network. Any of the computing devices described herein may be a conventional computer system, such as a desktop computer or a laptop computer. Alternatively, a computing device may be any device having computer functionality, such as a mobile computing device, server, tablet, smartphone, smart appliance, personal digital assistant (PDA), etc. The computing devices are configured to communicate via a network, which may comprise any combination of local area and/or wide area networks, using wired and/or wireless communication systems. In one embodiment, the network uses standard communications technologies and/or protocols. For example, the network includes communication links using technologies such as Ethernet, 802.11, worldwide interoperability for microwave access (WiMAX), 3G, 4G, code division multiple access (CDMA), digital subscriber line (DSL), etc. Examples of networking protocols used for communicating via the network include multiprotocol label switching (MPLS), transmission control protocol/Internet protocol (TCP/IP), hypertext transport protocol (HTTP), simple mail transfer protocol (SMTP), and file transfer protocol (FTP).
[0041] FIG. 1 illustrates contract formation in an exemplary environment 100 for distributed computing. In the example shown in FIG. 1, a requesting computing device 110 communicates over a network 160 with a smart contract scheduler 120, which is an intermediary computing device that coordinates computing resources for performing distributed computing tasks. The environment 100 also includes a set of distributed computing devices 130 that can connect to each other and to the smart contract scheduler 120 over a network 170. The networks 160 and 170 may be the same network, e.g., the Internet, or they may be different networks. FIG. 1 shows four distributed computing devices 130a, 130b, 130c, and 130d, but it should be understood that the environment 100 can include many more distributed computing devices, e.g., millions of distributed computing devices 130. Similarly, the environment 100 can include additional requesting computing devices 110 and smart contract schedulers 120. While the requesting computing device 110, smart contract scheduler 120, and distributed computing devices 130 are shown as separate computing devices, in other embodiments, some of the components in the environment 100 may be combined as a single physical computing device.
For example, the requesting computing device 110 may include a smart contract scheduling component. As another example, the requesting computing device 110 and/or smart contract scheduler 120 are also distributed computing devices 130 with computing resources for performing requested calculations.
[0042] To request computation of a given computing task, the requesting computing device 110 transmits a set of job requirements 140 to the smart contract scheduler 120 over the network 160. The job requirements 140 may include, for example, minimum technical requirements for performing the task or a portion of the task, such as memory, disk space, number of processors, or network bandwidth. The job requirements 140 also include an amount and/or type of compensation offered by the requesting computing device 110 for the task or a portion of the task. [0043] The smart contract scheduler 120 generates a smart contract 150 for the requesting computing device 110 based on the job requirements 140 and transmits the smart contract 150 to the distributed computing devices 130 over the network 170. The smart contract scheduler 120 may broadcast the smart contract 150 to all participating distributed computing devices 130, or transmit the smart contract 150 to some subset of the distributed computing devices 130. For example, the smart contract scheduler 120 may maintain a list of distributed computing devices 130 and their technical specifications, and identify a subset of the distributed computing devices 130 that meet one or more technical requirements provided in the job requirements 140. As another example, the smart contract scheduler 120 may determine, based on prior smart contracts, distributed computing devices 130 that are currently engaged with tasks for other smart contracts, and identify a subset of the distributed computing devices 130 that may be available for the smart contract 150.
[0044] Each distributed computing device 130 that receives the smart contract 150 from the smart contract scheduler 120 can independently determine whether the technical requirements and compensation are suitable. At least some portion of distributed computing devices 130 agree to the smart contract 150 and transmit their acceptance of the contract to the smart contract scheduler 120 over the network 170. In the example shown in FIG. 1, distributed computing devices 130a, 130b, and 130c agree to the smart contract 150, and distributed computing device 130d has not agreed to the smart contract. The distributed computing devices 130a-130c that agree to the smart contract 150 may each publish a signed copy of the smart contract 150 to a blockchain in which the distributed computing devices 130 and the smart contract scheduler 120 participate. Contracts published to the blockchain can be received by all participants, including the smart contract scheduler 120 and, in some embodiments, the requesting computing device 110.
[0045] While three distributed computing devices 130a-130c are shown as signing the smart contract 150 in FIG. 1, it should be understood that additional distributed computing devices 130 (e.g., tens of computing devices, thousands of computing devices, etc.) can sign a single smart contract and participate in the computing task. In some embodiments, the smart contract 150 specifies a requisite number of distributed computing devices 130 for performing the computing task. Once the requisite number of distributed computing devices publish their acceptance of the smart contract 150 to the blockchain, the distributed computing devices that have committed to the contract complete the computing task.
[0046] Once the distributed computing devices 130 have agreed to cooperate on the task, the distributed computing devices receive code provided by the requesting computing device 110 with instructions for completing the computing task. The requesting computing device 110 may transmit the code directly to the distributed computing devices 130a-130c over the network 170, or the requesting computing device 110 may provide the code to the distributed computing devices 130a-130c via the smart contract scheduler 120. In some embodiments, the code includes checkpoints, which are used to indicate suitable restart locations for long-running calculations.
In a long calculation, the code may fail before completion of a task, but after a distributed computing device 130 has performed a substantial amount of work. When a distributed computing device 130 successfully reaches a specified checkpoint, the distributed computing device 130 is compensated for the work it has done up to that checkpoint.
[0047] In some embodiments, the distributed computing devices 130 cooperate for computing tasks that benefit the distributed computing devices 130 themselves, rather than for the benefit of a particular requesting computing device 110. For example, the distributed computing devices 130 may perform a DAC procedure for cooperative learning, such as decentralized Bayesian parameter learning or neural network training, described in further detail below. In such embodiments, a distributed computing device 130 may not receive compensation from a requesting computing device, but instead receives the benefit of data and cooperation from the other distributed computing devices 130. The distributed computing devices 130 may sign a smart contract 150 with each other, rather than with a requesting computing device 110 outside of the group of distributed computing devices 130. Alternatively, the distributed computing devices 130 may cooperate on computing tasks without a smart contract 150. The distributed computing devices 130 may receive code for performing the calculations from a coordinating computing device, which may be one of the distributed computing devices 130 or another computing device.
[0048] The distributed computing devices 130 provide connection information to the other distributed computing devices 130 so that they are able to communicate their results to each other over the network 170. For example, the smart contract 150 may be implemented by a blockchain accessed by each of the distributed computing devices 130 and on which each distributed computing device 130 publishes connection information.
[0049] FIG. 2 is a flow diagram showing publishing of distributed computing device information in the environment for distributed computing shown in FIG. 1. The distributed computing devices 130a, 130b, and 130c that have signed the smart contract 150 each publish their respective connection information 210a, 210b, and 210c to a smart contract blockchain 200 over the network 170. Information published to the smart contract blockchain 200 is received by each of the distributed computing devices 130a-130c over the network 170. The connection information 210 can be, for example, the IP address of the distributed computing device 130 and the port on which the distributed computing device 130 wishes to receive communications from the other distributed computing devices. The distributed computing devices 130 each compile a peer list 220 based on the information published to the smart contract blockchain 200. The peer list 220 includes the connection information 210 for some or all of the distributed computing devices 130 that signed the smart contract 150. The peer list 220 allows each distributed computing device 130 to communicate with at least a portion of the other distributed computing devices over the network 170. Each distributed computing device 130 stores a local copy of the peer list 220. If the peer list 220 includes a portion of the distributed computing devices 130 that signed the smart contract 150, the peer lists 220 stored on different distributed computing devices 130 are different, e.g., each distributed computing device 130 may store a unique peer list containing some portion of the distributed computing devices 130 that signed the smart contract 150.
[0050] FIG. 3 illustrates peer-to-peer connections formed between distributed computing devices according to the peer list 220. After each distributed computing device 130 has performed its portion of the computation, the distributed computing devices 130 connect to each other (e.g., over the network 170 shown in FIGs. 1 and 2) to share results. To form the connections, each distributed computing device 130 initializes a server thread 310 to listen to the port that it posted to the smart contract blockchain 200, i.e., the port it provided in the connection information 210. Each distributed computing device 130 also initializes a client thread 320 capable of connecting to another distributed computing device 130. In the example shown in FIG. 3, the client thread 320a of distributed computing device 130a has formed a connection 340 to the server thread 310b of distributed computing device 130b using the connection information 210b provided by distributed computing device 130b. In addition, the client thread 320b of distributed computing device 130b has formed a connection 350 to the server thread 310c of distributed computing device 130c using the connection information 210c provided by distributed computing device 130c. Distributed computing devices 130a and 130b can share computing results over the connection 340, and distributed computing devices 130b and 130c can share computing results over the connection 350.
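For illustration only, the following sketch shows the server-thread/client-thread pattern just described using loopback sockets; the address, port, and payload are illustrative stand-ins for the published connection information 210 and the exchanged results.

```python
# Sketch of one peer connection: a listening server thread on the published
# port receives the partial result sent by another device's client thread.
import socket
import threading

received = []
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("127.0.0.1", 5001))   # illustrative loopback address and published port
srv.listen(1)

def server_thread():
    """Accept one connection and record the received result."""
    conn, _ = srv.accept()
    received.append(conn.recv(1024).decode())
    conn.close()

t = threading.Thread(target=server_thread)
t.start()

# The other device's client thread connects to the published port and sends its result.
cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(("127.0.0.1", 5001))
cli.sendall(b"partial result of 130a")
cli.close()

t.join()
srv.close()
print(received)   # ['partial result of 130a']
```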
[0051] While three distributed computing devices 130 that signed the smart contract 150 are illustrated in FIGs. 1-3, in many cases, more distributed computing devices are involved in a computing task. According to the DAC protocol, the distributed computing devices 130 undertake a sequence of forming connections, sharing results, computing an average, and determining whether consensus is reached. If consensus has not been reached, the distributed computing devices 130 form a new set of connections, share current results (i.e., the most recently computed averages), compute a new average, and again determine whether consensus is reached. This process continues iteratively until consensus is reached. A mathematical discussion of the DAC algorithm is described in greater detail below.
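For illustration only, the following sketch simulates this iterative sequence for a handful of devices, assuming random pairwise connections and a simple spread-based stopping test in place of the geometric convergence indicator described below.

```python
# Simulation of the DAC sequence: form random peer pairs, exchange current
# values, average, and repeat until the values agree within a tolerance.
import random

def dac_round(values):
    """One round: randomly pair devices and replace each pair's values with their average."""
    ids = list(values.keys())
    random.shuffle(ids)
    for a, b in zip(ids[0::2], ids[1::2]):
        avg = (values[a] + values[b]) / 2.0
        values[a] = values[b] = avg

def run_dac(initial, tol=1e-9, max_rounds=10000):
    values = dict(initial)
    for _ in range(max_rounds):
        dac_round(values)
        if max(values.values()) - min(values.values()) < tol:
            break
    return values

initial = {"130a": 2.0, "130b": 4.0, "130c": 9.0, "130d": 5.0}
print(run_dac(initial))  # every device ends near the average of the initial values, 5.0
```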
[0052] FIG. 4A illustrates a first arrangement 400 of peer connections formed among a group of seven distributed computing devices at a first time, according to one embodiment. FIG. 4A includes a set of seven distributed computing devices 130a-130g that have connected to form three sets of pairs. For example, distributed computing device 130a is connected to distributed computing device 130c over connection 410. The distributed computing devices 130, or some portion of the distributed computing devices 130, may each select a random computing device from the peer list 220 and attempt to form a peer-to-peer connection. In the example shown in FIG. 4A, distributed computing device 130g has not formed a connection to any other distributed computing device in this iteration. In some embodiments, a single distributed computing device 130 may be connected to two other distributed computing devices, e.g., both the client thread and the server thread are connected to a respective computing device.
[0053] FIG. 4B illustrates a second arrangement 450 of peer-to-peer connections among the group of distributed computing devices 130a-130g at a second time, according to one
embodiment. The distributed computing devices 130a-130g have formed the connections in a different configuration from the connections 400 shown in FIG. 4A. For example, distributed computing device 130a is now connected to distributed computing device 130b over connection 460. The distributed computing devices 130a-130g continue to form new sets of connections and exchange data until they determine that distributed average consensus is reached.
[0054] In some embodiments, process replication is used to ensure that the loss of a distributed computing device 130 does not compromise the results of an entire computation task. Process replication provides a safeguard against the inherently unreliable nature of dynamic networks, and offers a mechanism for distributed computing devices 130 to check that peer computing devices 130 are indeed contributing to the calculation in which they are participating. In such embodiments, distributed computing devices 130 can be arranged into groups that are assigned the same data. When a group of distributed computing devices 130 assigned the same data reaches a checkpoint, each computing device in the group of distributed computing devices can ensure that no other computing device in the group has cheated by hashing its current result (which should be the same across all computing devices in the group) with a piece of public information (such as a process ID assigned to the computing device), and sharing this with the group of computing devices. One or more computing devices in the group can check the current results received from other computing devices in the group to confirm that the other computing devices are participating and have obtained the same result.
MATHEMATICAL THEORY OF DISTRIBUTED AVERAGE CONSENSUS (DAC)
[0055] The distributed average consensus (DAC) algorithm is used in conjunction with a calculation in which a number of agents (e.g., N distributed computing devices 130), referred to as $N_{process}$ agents, must agree on their average value. The continuous time model for the local agent state governed by the DAC algorithm is given by the feedback model:
$\dot{x}_i(t) = u_i(t), \quad x_i \in \mathbb{R}^n, \quad i \in \{1, \ldots, N_{process}\}$   (1)

where $x_i(t)$ is the numerical state of process $i$ at time $t$, $\dot{x}_i(t)$ is the time derivative of the state, and $u_i(t)$ represents a particular consensus feedback protocol.
[0056] For illustrative purposes, a Nearest Neighbor protocol is used as the consensus feedback protocol:
Figure imgf000019_0001
is the neighbor set of process i.
[0057] The global system can be written as the following dynamical system of equations:

$\dot{x}(t) = -L x(t), \quad x \in \mathbb{R}^{n N_{process}}$   (3)

where $L$ is the graph Laplacian matrix.
[0058] In the case of a connected network, the unique and universally convergent equilibrium state of this system is as follows:

$x(\infty) = \frac{1}{N_{process}} \mathbf{1}\mathbf{1}^T x(0)$   (4)

where $\mathbf{1} \in \mathbb{R}^{N_{process}}$ is a vector of all ones. This result means that the agents in the network (e.g., the distributed computing devices 130) not only come to an agreement on a value, but a particular unique value: the average of the initial conditions of the agents on the network.
[0059] The rate at which $x_i(t)$ converges to $x_i(\infty)$ for this protocol is proportional to the smallest nonzero eigenvalue of the system Laplacian matrix $L$. Furthermore, the equilibrium state can be attained under dynamic, directional topologies with time delays. This notion of consensus is suitable for a distributed protocol since each process requires communication only with a set of neighboring processors, and there is no need for a fusion center or centralized node with global information. It is in this sense that consensus can be exploited in the distributed computing environment 100 to achieve a variety of useful tools for distributed computing, such as multi-agent estimation and control. Distributed consensus is particularly advantageous for performing reductions on distributed data because it bypasses the need for sophisticated routing protocols and overlay topologies for complicated distributed networks.
[0060] In order for each distributed computing device 130 to gauge its proximity to the global average and, based on the proximity, determine when to terminate the DAC algorithm, the distributed computing devices 130 compute a convergence indicator after each set of connections (e.g., after forming the set of connections shown in FIG. 4A or 4B). The convergence indicator can be represented geometrically, e.g., as a circle, sphere, or hypersphere, or, more generally, an n-sphere. An n-sphere is a generalization of a sphere to a space of arbitrary dimensions; for example, a circle is a 1-sphere, and an ordinary sphere is a 2-sphere. The distributed computing devices 130 can be assigned initial portions of the geometric structure, each having a center of mass. During each iteration of the DAC algorithm, each distributed computing device 130 exchanges with at least one neighboring distributed computing device two pieces of data: the distributed computing device's current $x_i(t)$, and the distributed computing device's current mass and position in the convergence indicator. Each distributed computing device 130 averages its $x_i(t)$ with the $x_j(t)$ received from its neighbor to calculate $x_i(t+1)$; similarly, each distributed computing device 130 combines its center of mass with its neighbor's to determine a new center of mass. When the exchanges lead to the convergence indicator becoming sufficiently close to the global center of mass of the geometric structure, the DAC algorithm terminates, and the last $x_i$ can be used to calculate the final result of the computation task. A given distance from the center of mass of the geometric structure can be defined as a convergence threshold for determining when the process has converged. If the convergence process does not reach the center of mass of the geometric structure, this indicates that at least one distributed computing device 130 did not participate in the calculation.
[0061] An exemplary convergence scheme based on a unit circle is shown in FIGs. 5A and 5B. FIG. 5A is a graphical illustration of an initialized distributed average consensus convergence indicator, according to one embodiment. In this example, the convergence indicator is a circle having a global center of mass (c.m.) 510. Each distributed computing device 130 that signed the smart contract 150 is assigned a random, non-overlapping portion of an arc on a circle, e.g., a unit circle. For example, the smart contract scheduler 120, the requesting computing device 110, or one of the distributed computing devices 130 may determine and assign arcs to the participating distributed computing devices 130. In the example shown in FIG. 5A, a first portion of the arc between 0° and θ1° is assigned to a distributed computing device 1 520a. Three additional portions of the circle are assigned to three additional distributed computing devices 520b-520d. The distributed computing devices 520 are embodiments of the distributed computing devices 130 described above. As shown in FIG. 5A, the arcs are not of equal size; for example, the arc assigned to distributed computing device 1 520a is smaller than the arc assigned to distributed computing device 2 520b. Each distributed computing device 520 computes the center of mass (c.m.) 530 of its unique arc, including both the mass and location of the center of mass. The differing masses are represented in FIG. 5A as different sizes of the centers of mass 530; for example, the circle around c.m. 1 530a is smaller than the circle around c.m. 2 530b, because the portion assigned to distributed computing device 1 520a is smaller than the portion assigned to distributed computing device 2 520b and therefore has a smaller mass.
[0062] After each successful connection (e.g., after the distributed computing devices 520 form the first set of peer connections shown in FIG. 4A or the second set of peer connections shown in FIG. 4B), each distributed computing device updates the location of its c.m. relative to the c.m. of the distributed computing device to which it connected and exchanged data. FIG. 5B is a graphical illustration of a first peer-to-peer update in the distributed average consensus convergence indicator shown in FIG. 5A. In this example, distributed computing device 1 520a has connected to distributed computing device 4 520d, and distributed computing device 2 520b has connected to distributed computing device 3 520c. Each set of connecting distributed computing devices exchange their respective centers of mass and calculate a joint center of mass. For example, distributed computing devices 1 and 4 calculate the joint c.m. 1 540a based on the locations and masses of c.m. 1 530a and c.m. 4 530d. As shown, joint c.m. 1 540a is partway between c.m. 1 530a and c.m. 4 530d, but closer to c.m. 4 530d due to its larger mass.
[0063] As described with respect to FIGs. 4A and 4B, the distributed computing devices 520 continue forming different sets of connections. This iterative procedure of connecting, exchanging, and updating continues until the distributed computing devices 520 reach a center of mass that is within a specified distance of the global center of mass 510, at which point the distributed computing devices 520 terminate the consensus operation. The specified distance from the global center of mass 510 for stopping the iterative procedure may be a specified error tolerance value, e.g., 0.0001, or 1×10^-10. If the distributed computing devices 520 do not reach the global center of mass 510, this indicates that at least one distributed computing device did not participate in the consensus mechanism. For example, if one distributed computing device did not participate in consensus, the center of mass determined by the DAC procedure is pulled away from that distributed computing device's portion of the arc, because that distributed computing device, represented by its assigned mass, did not contribute to the DAC procedure. The distributed computing devices 520 may perform the iterative procedure a particular number of times before stopping even if convergence is not reached. The number of iterations to attempt convergence may be based on the number of distributed computing devices participating in the DAC process. Alternatively, the distributed computing devices may perform the iterative procedure until the center of mass becomes stationary, e.g., stationary within a specified threshold.
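For illustration only, the following sketch simulates the arc-based convergence indicator for four devices. Each device tracks its arc's mass (arc length) and first moment (mass times center of mass); pairwise exchanges average both quantities, so every device's moment-to-mass ratio approaches the global center of mass of the circle (the origin). The exact bookkeeping of mass during an exchange is an assumption of this sketch.

```python
# Sketch of the arc-based convergence indicator on the unit circle.
import math
import random

def arc_mass_and_com(theta_start, theta_end):
    """Mass (arc length) and center of mass of a uniform arc of the unit circle."""
    span = theta_end - theta_start
    mid = (theta_start + theta_end) / 2.0
    r = math.sin(span / 2.0) / (span / 2.0)  # distance of the arc's c.m. from the origin
    return span, (r * math.cos(mid), r * math.sin(mid))

# Assign four non-overlapping arcs of unequal size, as in FIG. 5A.
bounds = [0.0, 1.0, 3.0, 4.5, 2.0 * math.pi]
state = {}
for i in range(4):
    mass, (cx, cy) = arc_mass_and_com(bounds[i], bounds[i + 1])
    state[i] = [mass, mass * cx, mass * cy]  # mass and first moment

for _ in range(100):
    ids = list(state.keys())
    random.shuffle(ids)
    for a, b in zip(ids[0::2], ids[1::2]):
        avg = [(p + q) / 2.0 for p, q in zip(state[a], state[b])]
        state[a] = list(avg)
        state[b] = list(avg)
    # Each device's current estimate of the center of mass of the whole circle.
    estimates = [(mx / m, my / m) for m, mx, my in state.values()]
    if all(math.hypot(ex, ey) < 1e-9 for ex, ey in estimates):
        break

print(estimates)  # all estimates approach (0, 0), the global center of mass 510
```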
[0064] If multiple distributed computing devices do not participate in consensus, it may be difficult to identify the non-participating computing devices from a circular structure. Therefore, in some embodiments, a higher dimensional shape is used as the convergence indicator, such as a sphere or a hypersphere. In such embodiments, each distributed computing device is assigned a higher-dimensional portion of the shape; for example, if the convergence indicator is a sphere, each distributed computing device is assigned a respective section of the sphere. Using a higher number of dimensions for a higher number of distributed computing devices involved in a computation task (e.g., N dimensions for N distributed computing devices) can ensure that the non-participating distributed computing devices are identified.
EXAMPLE APPLICATIONS OF DISTRIBUTED AVERAGE CONSENSUS (DAC)
[0065] The DAC algorithm can be used to perform a dot product calculation. The dot product is one of the most important primitive algebraic manipulations for parallel computing applications. Without a method for computing distributed dot products, critical parallel numerical methods (such as conjugate gradients, Newton-Krylov, or GMRES) for simulations and machine learning are not possible. The DAC algorithm, described above, can be used to perform a dot product of two vectors x and y, represented as $x^T y$, in a distributed manner by assigning distributed computing devices 130 to perform respective local dot products on local sub-vectors, and then having the distributed computing devices 130 perform consensus on the resulting local scalar values. After consensus is reached, the result of the consensus on the scalar values is multiplied by the number of processes in the computation. The relationship between the dot product $x^T y$ of two vectors of length $n$ and the average of the local scalar calculations $x_i^T y_i$ is as follows:
$x^T y = \sum_{i=1}^{N_{process}} x_i^T y_i = N_{process} \left( \frac{1}{N_{process}} \sum_{i=1}^{N_{process}} x_i^T y_i \right)$   (5)
[0066] FIG. 6 illustrates an example 600 of using three distributed computing devices to perform a distributed dot product calculation, according to one embodiment. In FIG. 6, a first vector x 610 is partitioned into three sub-vectors, $x_1^T$, $x_2^T$, and $x_3^T$. A second vector y 620 is also partitioned into three sub-vectors, $y_1$, $y_2$, and $y_3$. A first distributed computing device 130a receives the first vector portions $x_1^T$ and $y_1$ and calculates the dot product $x_1^T y_1$. Second and third distributed computing devices 130b and 130c calculate dot products $x_2^T y_2$ and $x_3^T y_3$, respectively. The distributed computing devices 130a-130c exchange the dot products via connections 630 and calculate averages, as described above, until consensus is reached. After consensus, the average dot product is multiplied by the number of participating distributed computing devices 130 (in this example, 3) to determine $x^T y$.
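For illustration only, the following sketch mirrors the partitioning of FIG. 6 with small example vectors: each device computes its local dot product, the local scalars are driven to their average by pairwise exchanges, and the consensus average is multiplied by the number of devices. The values and pairing schedule are illustrative.

```python
# Sketch of the distributed dot product via pairwise averaging consensus.
import random

x_parts = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]   # x partitioned across devices 130a-130c
y_parts = [[0.5, 1.5], [2.5, 3.5], [4.5, 5.5]]   # y partitioned the same way

# Step 1: each device computes its local dot product x_i^T y_i.
local = [sum(a * b for a, b in zip(xp, yp)) for xp, yp in zip(x_parts, y_parts)]

# Step 2: iterative pairwise averaging (the DAC process) on the local scalars.
values = list(local)
for _ in range(200):
    i, j = random.sample(range(len(values)), 2)
    values[i] = values[j] = (values[i] + values[j]) / 2.0

# Step 3: multiply the consensus average by the number of devices.
approx = values[0] * len(values)
exact = sum(a * b for xp, yp in zip(x_parts, y_parts) for a, b in zip(xp, yp))
print(approx, exact)  # both equal x^T y, up to the consensus tolerance
```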
[0067] The DAC algorithm can be performed on scalar quantities, as shown in the dot product example, and on vector quantities. In a second example, the DAC algorithm is used to perform a distributed matrix-vector product calculation. Distributed matrix-vector products are essential for most iterative numerical schemes, such as fixed point iteration or successive approximation. To calculate a matrix-vector product, a matrix is partitioned column-wise, and each distributed computing device 130 receives one or more columns of the global matrix. A local matrix-vector product is calculated at each distributed computing device 130, and average consensus is performed on the resulting local vectors. The consensus result is then multiplied by the number of distributed computing devices 130 in the computation.
[0068] FIG. 7 illustrates an example 700 of using three distributed computing devices to perform a distributed matrix-vector product calculation, according to one embodiment. In FIG.
7, a first matrix A 710 is partitioned column-wise into three sub-matrices, $A_1$, $A_2$, and $A_3$. A vector y 720 is partitioned into three sub-vectors, $y_1$, $y_2$, and $y_3$. The first distributed computing device 130a receives the first matrix portion $A_1$ and the first vector portion $y_1$ and calculates the matrix-vector product $A_1 y_1$. The second and third distributed computing devices 130b and 130c calculate the matrix-vector products $A_2 y_2$ and $A_3 y_3$, respectively. The distributed computing devices 130a-130c exchange the matrix-vector products via connections 730 and calculate averages, as described above, until consensus is reached. After consensus, the average matrix-vector product is multiplied by the number of participating distributed computing devices 130.
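For illustration only, the following sketch mirrors FIG. 7 using NumPy (an assumption of the sketch, not a requirement of the embodiments): A is partitioned column-wise, each device computes its local product, and the average of the local vectors times the number of devices recovers Ay. The DAC exchange is replaced by a direct average for brevity.

```python
# Sketch of the distributed matrix-vector product with a column-wise partition.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 6))
y = rng.standard_normal(6)

# Column-wise partition across three devices (two columns each).
A_parts = [A[:, 0:2], A[:, 2:4], A[:, 4:6]]
y_parts = [y[0:2], y[2:4], y[4:6]]

# Each device computes its local matrix-vector product A_i y_i.
local = [Ai @ yi for Ai, yi in zip(A_parts, y_parts)]

# The DAC process drives every device to the average of the local vectors;
# the average is computed directly here for brevity.
consensus_average = sum(local) / len(local)

result = consensus_average * len(local)   # multiply by the number of devices
print(np.allclose(result, A @ y))          # True
```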
[0069] As another example, the DAC algorithm is used to calculate a distributed least squares regression. Least squares is one of the most important regressions used by scientists and engineers. It is one of the main numerical ingredients in software designed for maximum likelihood estimation, image reconstruction, neural network training, and other applications. The problem of finding the least-squares solution to an overdetermined system of equations can be defined as follows:
$Ax = b, \quad A \in \mathbb{R}^{(n N_{process}) \times M}$   (6)
[0070] In the above equations, A is a sensing matrix, x is the least-squares solution vector, and b is a target vector. The solution to this problem is given by the pseudo inverse, as follows:
$x = (A^T A)^{-1} A^T b$   (7)
[0071] In some embodiments of parallel computing applications, the sensing matrix, A, is distributed row-wise and the least-squares solution, x, is solved for locally on each
computational node since the local least-squares solutions, or components of the least-squares solutions (e.g., local components for $A^T b$ and $A^T A$), are small in comparison to the total number of measurements. This means that each distributed computing device 130 in the network owns a few rows (e.g., measurements) of the sensing matrix $A$ and the target vector $b$. The least squares solution for the system can be recovered from the local least-squares solutions using the DAC algorithm. The portions of the sensing matrix and target vector owned by a given distributed computing device $i$ are represented as $A_i$ and $b_i$, respectively. Each distributed computing device $i$ calculates the products $A_i^T b_i$ and $A_i^T A_i$ and stores these products in its local memory. DAC is then performed on these quantities, which both are small compared to the total number of observations in $A$. The results of the DAC process are the averages $\frac{1}{N_{process}} \sum_i A_i^T b_i$ and $\frac{1}{N_{process}} \sum_i A_i^T A_i$, which are present at every distributed computing device at the end of the DAC process. These quantities are multiplied by the number of processes in the computation, so that every distributed computing device has copies of $A^T b$ and $A^T A$ that can be used to locally obtain the least squares fit to the global data set.
[0072] FIG. 8 illustrates an example 800 of using three distributed computing devices to perform a distributed least squares calculation, according to one embodiment. In FIG. 8, the transpose of the sensing matrix $A^T$ 810 is partitioned column-wise into three sub-matrices, $A_1^T$, $A_2^T$, and $A_3^T$. The sensing matrix $A$ 820 is partitioned row-wise into three sub-matrices, $A_1$, $A_2$, and $A_3$. Each distributed computing device 130a-130c calculates a respective matrix-matrix product $A_1^T A_1$, $A_2^T A_2$, and $A_3^T A_3$. In addition, each distributed computing device 130a-130c has a respective portion of the target vector b 830 and calculates a respective matrix-vector product $A_1^T b_1$, $A_2^T b_2$, and $A_3^T b_3$, similar to the calculation shown in FIG. 7. The distributed computing devices 130a-130c exchange the matrix-matrix products and matrix-vector products via connections 840 and calculate averages of these products, as described above, until consensus is reached. After consensus, the average matrix-matrix product and average matrix-vector product are multiplied by the number of participating distributed computing devices 130, and the results are used to calculate the least squares solution x.
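For illustration only, the following sketch mirrors FIG. 8 using NumPy: each device forms its local $A_i^T A_i$ and $A_i^T b_i$, the averages of those quantities stand in for the DAC result, and each device scales by the number of devices and solves locally. The sizes and the direct averaging are assumptions of the sketch.

```python
# Sketch of the distributed least squares fit with a row-wise partition of A and b.
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((9, 3))   # overdetermined sensing matrix
b = rng.standard_normal(9)        # target vector

# Row-wise partition across three devices (three measurements each).
A_parts = [A[0:3], A[3:6], A[6:9]]
b_parts = [b[0:3], b[3:6], b[6:9]]

# Each device computes its local products A_i^T A_i and A_i^T b_i.
AtA_local = [Ai.T @ Ai for Ai in A_parts]
Atb_local = [Ai.T @ bi for Ai, bi in zip(A_parts, b_parts)]

# The DAC process yields the averages of these quantities at every device;
# computed directly here for brevity.
AtA_avg = sum(AtA_local) / len(A_parts)
Atb_avg = sum(Atb_local) / len(A_parts)

# Multiply by the number of devices to recover A^T A and A^T b, then solve locally.
x_local = np.linalg.solve(AtA_avg * len(A_parts), Atb_avg * len(A_parts))
x_exact, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.allclose(x_local, x_exact))  # True
```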
[0073] As another example, the DAC algorithm can be applied to decentralized Bayesian parameter learning. Many industrial applications benefit from having a data-driven statistical model of a given process based on prior knowledge. Economic time series, seismology data, and speech recognition are just a few big data applications that leverage recursive Bayesian estimation for refining statistical representations. DAC can be used to facilitate recursive Bayesian estimation on distributed data sets.
[0074] In an exemplary decentralized Bayesian parameter learning process, each distributed computing device attempts to estimate a quantity, $x$, via a probability distribution, $\pi(x) = p(x \mid y_{1:n})$. Each distributed computing device $i \in \{1, \ldots, n\}$ makes an observation, $y_i$, that is related to the quantity of interest through a predefined statistical model $m_i(y_i, x)$. Under mild conditions, the Bayesian estimate of $x$ is proportional to:
$\pi(x) \propto \pi_0(x) \prod_{i=1}^{n} m_i(y_i, x)$   (8)

where $\pi_0(x)$ is the prior distribution based on past knowledge. The posterior estimate, $\pi(x)$, conditional on the distributed measurements can be computed using the DAC approach by rewriting the product term in equation 8 in the form of an average quantity:

$\prod_{i=1}^{n} m_i(y_i, x) = \exp\left( n \cdot \frac{1}{n} \sum_{i=1}^{n} \ln m_i(y_i, x) \right)$   (9)
[0075] Leveraging DAC to compute the global average of the distributed measurement functions allows each distributed computing device to consistently update its local posterior estimate without direct knowledge or explicit communication with the rest of the global data set.
[0076] FIG. 9 illustrates an example 900 of using three distributed computing devices to perform decentralized Bayesian parameter learning, according to one embodiment. In FIG. 9, each distributed computing device 130 receives or calculates the prior distribution $\pi_0(x)$ 910. In addition, each distributed computing device 130 makes or receives a respective observation or set of observations $y_i$; for example, distributed computing device 130a receives the observation $y_1$ 920. Based on the prior distribution $\pi_0(x)$ and observation $y_i$, each distributed computing device 130a-130c calculates the quantity $\ln(m_i(y_i, x))$; for example, distributed computing device 130a calculates $\ln(m_1(y_1, x))$ 930. The distributed computing devices 130a-130c exchange the calculated quantities via connections 940 and calculate averages, as described above, until consensus is reached. After consensus, the distributed computing devices 130 use the average of the quantity $\ln(m_i(y_i, x))$ to calculate the posterior estimate, $\pi(x)$ 950, according to equation 9.
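For illustration only, the following sketch applies equation 9 to a toy model in which each device observes the unknown quantity with Gaussian noise, so $m_i(y_i, x)$ is a Gaussian likelihood evaluated on a shared grid. The grid, the prior, the noise level, and the direct averaging in place of the DAC exchange are assumptions of the sketch.

```python
# Sketch of decentralized Bayesian parameter learning on a shared grid over x.
import numpy as np

rng = np.random.default_rng(2)
x_grid = np.linspace(-5.0, 5.0, 1001)
prior = np.exp(-0.5 * x_grid**2)          # pi_0(x): standard normal prior (unnormalized)

true_x, sigma = 1.5, 1.0
observations = true_x + sigma * rng.standard_normal(3)   # one observation per device

# Each device computes ln m_i(y_i, x) on the shared grid.
log_likelihoods = [-0.5 * ((y - x_grid) / sigma) ** 2 for y in observations]

# The DAC process gives every device the average of ln m_i(y_i, x);
# computed directly here for brevity.
avg_log_likelihood = sum(log_likelihoods) / len(log_likelihoods)

# Equation 9: the product of the m_i equals exp(n times the average of ln m_i),
# so each device recovers the posterior locally.
n = len(log_likelihoods)
posterior = prior * np.exp(n * avg_log_likelihood)
posterior /= posterior.sum()

print(x_grid[np.argmax(posterior)])   # posterior mode, between the prior mean and the data
```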
[0077] While the four example calculations shown in FIGs. 6-9 are each illustrated in distributed environments with three computing devices, it should be understood that the calculations can be performed using larger sets of distributed computing devices. In addition, the DAC method can be used for other types of calculations that involve calculating an average, e.g., any type of calculation for which a final result can be obtained from an average.
USING DISTRIBUTED AVERAGE CONSENSUS (DAC) TO TRAIN AN ARTIFICIAL INTELLIGENCE MODEL
[0078] In prior systems for improving artificial intelligence (AI) models using data collected in a distributed manner, a “gather and scatter” method was used to generate and propagate updates to the AI models based on collected data. FIG. 10 shows an exemplary prior art system 1000 performing the gather and scatter method for training an AI model. As shown in FIG. 10, a number N of computing devices 1010, referred to as computing device 1010a through computing device 1010N, are connected to a server 1020. Each computing device 1010 includes an AI module 1015. Each AI module 1015 can include, among other things, an AI model (such as a neural network) for making one or more predictions based on input data, e.g., data 1025 collected or received by the computing device 1010. In this example, each AI module 1015 is also configured to generate a gradient descent vector 1030 based on the received data; the gradient descent vectors 1030a-1030N are used to train the AI model. Each gradient descent vector 1030 calculated by each AI module 1015 is transmitted by each computing device 1010 to the server 1020; for example, computing device 1010a transmits gradient descent vector 1030a to the server 1020. Based on all of the received gradient descent vectors 1030a-1030N, the server 1020 optimizes and updates the AI model, and based on the updated AI model, the server 1020 transmits an update to the AI module 1035 to each of the computing devices 1010a-1010N. [0079] The gather and scatter method requires a central server 1020 to manage the process of updating the AI model. The server 1020 must be reliable, and each computing device 1010 must have a reliable connection to the server 1020 to receive updates to the AI model. The processing performed by the server 1020 on the gradient vectors 1030a-1030N to generate the update 1035 can require a large amount of computing and storage resources, especially if the number of computing devices N is large and/or the gradient vectors 1030 are large. Further, the gather and scatter method does not take advantage of the computing resources available on the computing devices 1010a-1010N themselves.
[0080] FIG. 11 illustrates a system 1100 for training an artificial intelligence (AI) model using distributed average consensus, according to one embodiment. FIG. 11 includes a number N of distributed computing devices 1110, referred to as distributed computing device 1110a through distributed computing device 1110N. The distributed computing devices 1110 may be embodiments of the distributed computing devices 130 described above. Each distributed computing device 1110 receives respective data 1125. For example, distributed computing device 1110a receives data 1125a, distributed computing device 1110b receives data 1125b, and so on. The respective data 1125 received by two different distributed computing devices may be different; for example, data 1125a may be different from data 1125b. The data 1125 may be structured as sets of training pairs including one or more data inputs paired with one or more labels. The data 1125 may be generated internally by the distributed computing device 1110, received from one or more sensors within or connected to the distributed computing device 1110, received from one or more users, received from one or more other distributed computing devices, or received from some other source or combination of sources.
[0081] Each distributed computing device 1110 includes an AI module 1115. The AI module 1115 includes an AI model for processing one or more input signals and making predictions based on the processed input signals. For example, the AI model may be a neural network or other type of machine learning model. In addition, each AI module 1115 is configured to train the AI model based on the data 1125 received by the set of distributed computing devices 1110. The AI modules 1115 of different distributed computing devices 1110 may be functionally similar or identical. In general, the AI module 1115 generates data for optimizing the AI model based on its respective received data 1125, compresses the generated data, and exchanges the compressed data with the compressed data generated by other AI modules 1115 of other distributed computing devices 1110. The AI modules 1115 execute a convergence algorithm, such as the distributed average consensus (DAC) algorithm described above, on the exchanged compressed data to obtain a consensus result for optimizing the AI model. Each respective AI module 1115 updates its local AI model based on the consensus result.
[0082] In some embodiments, to generate the data used to optimize the AI model, each AI module 1115 is configured to compute a gradient descent vector for each training pair (e.g., one or more data inputs paired with one or more labels) in the respective data 1125 received by the distributed computing device 1110 based on a locally-stored AI model. For example, the AI module 1115a of distributed computing device 1110a calculates a gradient descent vector for each training pair included in the data 1125a. The AI module 1115 is further configured to concatenate the gradient descent vectors to form a gradient descent matrix, and sample the gradient descent matrix to generate a sampled gradient matrix 1130, which is shared with the other distributed computing devices in a peer-to-peer fashion. For example, distributed computing device 1110b shares its sampled gradient matrix 1130b with both distributed computing device 1110a and distributed computing device 1110N, and receives the sampled gradient matrices 1130a and 1130N from distributed computing devices 1110a and 1110N, respectively. The distributed computing devices 1110 form various sets of connections, as described with respect to FIG. 4, and exchange sampled gradient matrices 1130 until the distributed computing devices 1110 reach consensus according to the DAC algorithm, as described above. In particular, after performing the DAC process, each distributed computing device 1110 has a local copy of a consensus gradient matrix.
[0083] The length and number of gradient descent vectors produced by an AI module 1115 can be large. While a single gradient descent vector or matrix (e.g., a gradient vector 1030 described with respect to FIG. 10, or a set of gradient descent vectors generated by one distributed computing device 1110) can be generated and stored on a single distributed computing device 1110, if the number of distributed computing devices N is large, a single distributed computing device 1110 may not be able to store all of the gradient descent vectors generated by the N distributed computing devices, or even the gradient descent vectors generated by a portion of the N distributed computing devices. In addition, transferring a large number of large vectors between the distributed computing devices 1110a-1110N uses a lot of communication bandwidth. To reduce the size of data transfers and the computational resources required for each distributed computing device 1110, the AI module 1115 samples each matrix of gradient descent vectors.
[0084] In addition, the distributed computing devices 1110a-1110N run a convergence algorithm on the exchanged data (e.g., the exchanged sampled gradient matrices) to determine whether a distributed average consensus (DAC) on the exchanged data has been obtained by all of the distributed computing devices 1110a-1110N. For example, the distributed computing devices 1110a-1110N may perform distributed average consensus on sampled gradient descent matrices to obtain a global matrix of the same size as the sampled gradient descent matrices. When each distributed computing device 1110 has received some or all of the other sampled gradient matrices 1130, and a distributed average consensus has been achieved, each AI module 1115 generates its own update to the AI model 1135. The update 1135 may be an optimization of the weights of the AI model stored in the AI module 1115 based on the sampled gradient matrices 1130a-1130N, including the locally generated sampled gradient matrix and the matrices received from peer distributed computing devices.
[0085] As described above, the DAC process ensures that each distributed computing device 1110 has contributed to the coordinated learning effort undertaken by the distributed computing devices 1110a-1110N. The coordinated learning process runs without the need for a central server. In addition, because the distributed computing devices 1110a-1110N exchange sampled gradient matrices 1130a-1130N, rather than the underlying data 1125a-1125N, the privacy of the distributed computing devices 1110 and their users is maintained. For example, when distributed computing device 1110a receives the sampled gradient matrix 1130b from another distributed computing device 1110b, the distributed computing device 1110a cannot determine any personal information about the data 1125b collected by the distributed computing device 1110b from the received sampled gradient matrix 1130b.
[0086] In an example, the training of a neural network consists of specifying an optimization objective function, $T: \mathbb{R}^{N_{in}} \rightarrow \mathbb{R}_{+}$, that is a function of both the network weights, $w \in \mathbb{R}^{N_w}$ (i.e., the network topology), and the available training data, $\{x_i \in \mathbb{R}^{N_{in}}, y_i\}_{i=1}^{N_x}$, where $x$ represents the primal data, $y$ represents the associated labels, and $N_x$ is the number of training examples. The goal of neural network training is to produce a predictive neural network by manipulating the weights $w$ such that the expected value of the objective function $T$ is minimized. This goal can be expressed as follows:

$$\underset{w \in \mathbb{R}^{N_w}}{\text{minimize}} \quad \mathbb{E}[T(x, y; w)] \qquad (10)$$
[0087] The method of gradient descent can be used to tune the weights of a neural network. Gradient descent involves the evaluation of the partial derivative of the objective function with respect to the vector of weights. This quantity is known as the gradient vector, and can be expressed as follows:
$$\frac{\partial T(x, y; w)}{\partial w} \in \mathbb{R}^{N_w} \qquad (11)$$
[0088] A gradient vector can be computed for each training pair $(x_i, y_i)$ in the training set. As described above, the AI module 1115 computes a gradient vector for each training pair in the data 1125 received at each distributed computing device 1110.
[0089] To approximate the data set used for optimization, a cooperative subspace approach that combines the DAC process with the theory of random sampling can be used. A cooperative subspace is used to sample the gradient vectors (e.g., to form sampled gradient matrices 1130) so that the DAC process can be performed more efficiently. As an example, $A_i \in \mathbb{R}^{N \times k_i}$ represents the matrix of data that is local to a given distributed computing device 1110, referred to as node $i$, for $i = 1, \ldots, N_{nodes}$, and $A_{global} = [A_1, A_2, \ldots, A_{N_{nodes}}] \in \mathbb{R}^{N \times (k_i N_{nodes})}$ represents the global data set (i.e., the data 1125 received by the set of distributed computing devices 1110). The cooperative subspace approach computes, in a fully distributed fashion, a representative subspace $U \in \mathbb{R}^{N \times q}$ that approximates the range of $A_{global}$ such that $\|A_{global} - UU^T A_{global}\| \leq \epsilon \|A_{global}\|$, where $\epsilon$ is a user-specified tolerance on the accuracy of the approximation between 0 and 1.
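For illustration only, the approximation criterion above can be checked numerically. The sketch below assumes NumPy and invented dimensions; A, U, and eps are stand-ins for the global data matrix, the representative subspace, and the tolerance described in this paragraph:

```python
import numpy as np

def within_tolerance(A, U, eps):
    """Check the cooperative-subspace criterion ||A - U U^T A|| <= eps * ||A||."""
    residual = A - U @ (U.T @ A)          # component of A outside the subspace
    return np.linalg.norm(residual) <= eps * np.linalg.norm(A)

# Illustrative check: an exact basis for the range of A satisfies the criterion.
rng = np.random.default_rng(0)
A = rng.standard_normal((500, 40))        # hypothetical global data matrix
U, _ = np.linalg.qr(A)                    # orthonormal basis spanning range(A)
print(within_tolerance(A, U, eps=0.1))    # True
```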
[0090] FIG. 12 is a flowchart showing a method 1200 for determining a consensus result within a cooperative subspace at a particular distributed computing device, e.g., one of the distributed computing devices 1110. The distributed computing device 1110 generates 1210 a Gaussian ensemble matrix $\Omega_i \in \mathbb{R}^{k_i \times q}$. The Gaussian ensemble matrix is a matrix of random values used to sample a local data matrix $A_i$. For example, the local data matrix $A_i$ is the matrix of gradient descent vectors computed by the AI module 1115 of a given distributed computing device 1110 based on the data 1125 received by the distributed computing device 1110. Each distributed computing device 1110 generates its random matrix $\Omega_i$ independently. In other embodiments, other types of random matrices are used.

[0091] The distributed computing device 1110 multiplies 1220 its local data matrix $A_i$ of data local to the distributed computing device 1110 and its Gaussian ensemble matrix $\Omega_i$ to generate the matrix-matrix product $Y_i = A_i \Omega_i \in \mathbb{R}^{N \times q}$. The product $Y_i$ is an approximation of the data in the local data matrix $A_i$ and compresses the local data. While the full data matrix $A_{global}$ that includes the data from each distributed computing device 1110 may be too large to be stored on and manipulated by a single distributed computing device 1110, the sampled data matrix $Y_i$ is sufficiently small to be stored on and manipulated by a single distributed computing device 1110.
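A minimal sketch of steps 1210 and 1220, assuming the local gradient matrix fits in a NumPy array (the sizes, seed, and variable names are invented for illustration):

```python
import numpy as np

def sample_local_matrix(A_i, q, rng):
    """Steps 1210-1220: draw a Gaussian ensemble and form Y_i = A_i @ Omega_i."""
    k_i = A_i.shape[1]                      # number of local gradient descent vectors
    Omega_i = rng.standard_normal((k_i, q)) # Gaussian ensemble matrix (step 1210)
    return A_i @ Omega_i                    # sampled data matrix Y_i in R^{N x q} (step 1220)

rng = np.random.default_rng(seed=1)         # each device would seed independently
N, k_i, q = 10_000, 64, 16                  # hypothetical sizes: N-dim gradients, k_i pairs, rank q
A_i = rng.standard_normal((N, k_i))         # stand-in for the local gradient descent matrix
Y_i = sample_local_matrix(A_i, q, rng)
print(Y_i.shape)                            # (10000, 16): much smaller than the global data
```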
[0092] The distributed computing device 1110, in cooperation with the other distributed computing devices in the system, performs 1230 the DAC process on the sampled data matrices $Y_i$. The DAC process is performed according to the procedure described above. A convergence indicator, such as the convergence indicators described with respect to FIGs. 5A and 5B, may be used to determine when to terminate the DAC process. The DAC process produces a normalized global matrix-matrix product $Y_{global}$ on each node, which can be represented as follows:
$$Y_{global} = \frac{1}{N_{nodes}} \sum_{i=1}^{N_{nodes}} Y_i = \frac{1}{N_{nodes}} \sum_{i=1}^{N_{nodes}} A_i \Omega_i \in \mathbb{R}^{N \times q}$$
[0093] During a first iteration of the DAC process, a distributed computing device 1110 exchanges its sampled data matrix $Y_i$ with another distributed computing device 1110. For example, distributed computing device 1110a transmits the sampled gradient matrix 1130a to the distributed computing device 1110b, and receives sampled gradient matrix 1130b from distributed computing device 1110b. The distributed computing device 1110 calculates an average of its sampled data matrix $Y_i$ and the sampled data matrix received from the other distributed computing device. For example, the distributed computing device 1110 calculates an average of its sampled gradient matrix 1130a and the received sampled gradient matrix 1130b. This results in a consensus gradient descent matrix, which is a matrix of the same size as the sampled data matrix $Y_i$. In subsequent iterations, distributed computing devices 1110 exchange and average their current consensus gradient descent matrices. The consensus gradient descent matrices are repeatedly exchanged and averaged until a consensus result for the consensus gradient descent matrix is reached across the distributed computing devices 1110. The consensus result, which is the matrix $Y_{global}$, is obtained when the consensus gradient descent matrices are substantially the same across all the distributed computing devices 1110, e.g., within a specified margin of error. The convergence indicator described with respect to FIGs. 5A and 5B may be used to determine when $Y_{global}$ has been obtained, and to determine whether all distributed computing devices 1110 participated in determining the consensus result.
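One way to picture the exchange-and-average behavior described here is the toy simulation below. It assumes a randomly chosen pair of devices per iteration and a simple spread-based stopping test instead of the convergence indicators of FIGs. 5A and 5B, so it only illustrates the averaging dynamics, not the full DAC protocol:

```python
import numpy as np

def simulate_dac(Y_list, num_rounds=200, tol=1e-9):
    """Toy gossip averaging: repeatedly average random pairs of local matrices."""
    rng = np.random.default_rng(2)
    Y = [y.copy() for y in Y_list]
    n = len(Y)
    for _ in range(num_rounds):
        i, j = rng.choice(n, size=2, replace=False)   # one pairwise connection
        avg = (Y[i] + Y[j]) / 2.0                     # exchange and average
        Y[i], Y[j] = avg, avg.copy()
        spread = max(np.linalg.norm(y - Y[0]) for y in Y)
        if spread < tol:                              # all local copies agree
            break
    return Y[0]                                       # consensus estimate of the average

Y_list = [np.random.default_rng(k).standard_normal((8, 4)) for k in range(5)]
Y_consensus = simulate_dac(Y_list)
print(np.allclose(Y_consensus, np.mean(Y_list, axis=0), atol=1e-6))  # True
```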
[0094] After calculating $Y_{global}$, the distributed computing device 1110 extracts 1240 the orthogonal subspace that spans the range of $Y_{global}$ via a local unitary decomposition, i.e., $Y_{global} = UR$. Following the decomposition, the distributed computing device 1110 (and each other distributed computing device in the system) holds a copy of the representative subspace, $U \in \mathbb{R}^{N \times q}$, that approximately spans the range of the global data matrix $A_{global}$.
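The local unitary decomposition can be realized, for example, with a reduced QR factorization. A minimal NumPy sketch under that assumption (the matrix sizes are invented):

```python
import numpy as np

def extract_subspace(Y_global):
    """Step 1240: factor Y_global = U R and keep the orthonormal factor U."""
    U, R = np.linalg.qr(Y_global, mode="reduced")  # U in R^{N x q}, R in R^{q x q}
    return U

Y_global = np.random.default_rng(3).standard_normal((10_000, 16))
U = extract_subspace(Y_global)
print(U.shape, np.allclose(U.T @ U, np.eye(U.shape[1])))  # (10000, 16) True
```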
[0095] In the context of training an AI model, each distributed computing device in the network computes the local gradients associated with its local data set, producing the gradient vectors $\frac{\partial T(x, y; w)}{\partial w}\Big|_i$. This gradient vector data is used to form the local data matrix $A_i$ in the cooperative subspace algorithm 1200. The gradient vectors are compressed into a suitably low dimensional subspace according to steps 1210 and 1220, the sampled, global gradient descent vectors are obtained according to the DAC process (step 1230), and gradient descent is performed in the global subspace locally on each agent (step 1240). The AI module 1115 updates its AI model (e.g., by updating the model weights) based on the representative subspace U, which reflects the data 1125 gathered by all of the distributed computing devices 1110.
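The final step, gradient descent performed in the global subspace, admits several concrete realizations. The sketch below shows one plausible reading, projecting the averaged local gradient onto the shared subspace U before a standard weight update; the learning rate, the averaging over training pairs, and the variable names are assumptions for illustration:

```python
import numpy as np

def subspace_gradient_step(w, local_gradients, U, lr=0.01):
    """Average the local gradient vectors, project the average onto the shared
    subspace U, and take one gradient descent step on the weights w."""
    g = local_gradients.mean(axis=1)     # average gradient over the local training pairs
    g_sub = U @ (U.T @ g)                # component of the gradient in the global subspace
    return w - lr * g_sub                # standard gradient descent update

rng = np.random.default_rng(4)
N_w, k_i, q = 10_000, 64, 16
w = rng.standard_normal(N_w)             # current model weights
A_i = rng.standard_normal((N_w, k_i))    # local gradient descent matrix (one column per pair)
U, _ = np.linalg.qr(rng.standard_normal((N_w, q)))  # stand-in for the shared subspace
w_next = subspace_gradient_step(w, A_i, U)
print(w_next.shape)                      # (10000,)
```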
[0096] While algorithms described herein are applied to optimizing a neural network, it should be understood that the algorithms can be applied to any type of machine learning. For example, other optimization techniques for improving machine learned models may be used, such as simulated annealing, nonlinear conjugate gradient, limited-memory BFGS, etc. In addition, other types of machine learning models can be used, such as capsule networks, Bayesian networks, genetic algorithms, etc.
USING DISTRIBUTED AVERAGE CONSENSUS (DAC) TO GENERATE PERSONALIZED RECOMMENDATIONS
[0097] In prior systems for providing personalized recommendations, a centralized system obtains personal user data, such as preferences, purchases, ratings, tracked activities (e.g., clickstreams), or other explicit or implicit preference information. The centralized system trains a recommendation model based on the collected user data, and provides recommendations to a given user based on data collected about that user. The centralized system controls both the model and its users' data. Centralized systems that provide recommendations may also exploit users' data for purposes that users may not approve of or may not have knowingly consented to, such as targeting content or selling the data to third parties. Many users would prefer to receive personalized recommendations without a central system collecting, storing, or distributing data about them.
[0098] To generate a recommendation model and personalized recommendations based on the model without exposing personal data, cooperating distributed computing devices according to embodiments herein use a cooperative subspace approach that combines the DAC algorithm described above with the theory of random sampling. Each distributed computing device randomly samples local user preference data in a cooperative subspace. The cooperative subspace approximates the user preference data, reflecting the users of all cooperating distributed computing devices. The sampled preference data is shared among the cooperating distributed computing devices. In particular, the distributed computing devices use the DAC algorithm to cooperatively create a recommendation model based on the sampled preference data of many users. Each distributed computing device individually applies the recommendation model to the distributed computing device's local preference data to generate personalized recommendations for the user of the distributed computing device. The cooperative subspace approach allows the DAC algorithm to be performed efficiently, and the random sampling obscures the underlying user preference data so that users' data privacy is maintained.
[0099] The recommendation model can be generated for and applied to any finite set of items. For example, the cooperative subspace approach can be used to generate recommendations for a list of movies available through a particular service, all movies listed on the INTERNET MOVIE DATABASE (IMDB), all songs in a particular song catalog, a list of products for sale on a website, a set of restaurants in a given metro area, etc. The set of items is represented as a vector, with each vector element corresponding to one item in the set. For example, in a vector representing a set of movies, the first vector element corresponds to "The A-Team," the second vector element corresponds to "A.I. Artificial Intelligence," etc. In other embodiments, the set of items may be represented as a matrix, e.g., with each row corresponding to an item in the set, and each column representing a preference feature of the item. For example, for a matrix representing restaurant ratings, one column in the matrix represents ratings for overall quality, another column represents ratings for food, another column represents ratings for service, and another column represents ratings for decor.
[0100] FIG. 13 illustrates a distributed environment 1300 for generating a personalized recommendation model using distributed average consensus, according to one embodiment.
FIG. 13 includes a number N of distributed computing devices 1310, referred to as distributed computing device 1310a through distributed computing device 1310N. The distributed computing devices 1310 may be embodiments of the distributed computing devices 130 described above. Each distributed computing device 1310 is associated with a user. Each distributed computing device 1310 includes user preference data 1315 and a recommendation module 1320.
[0101] The user preference data 1315 is data that reflects user preferences about some or all items in a set of items. For example, the distributed computing device 1310 receives as user input ratings (e.g., ratings from -1 to 1, or ratings from 1 to 10) for a set of movies. The distributed computing device 1310 stores the user ratings as vector elements corresponding to the movies to which the ratings apply. For example, if the user provides a rating of 0.8 for "The A-Team," the first element of the movie vector is 0.8. Movies that the user has not rated may be assigned a neutral rating, e.g., 0. In some embodiments, the distributed computing device 1310 normalizes user-supplied ratings, e.g., so that each rating is between 0 and 1. In other examples, the distributed computing device 1310 learns the user preference data 1315 implicitly. For example, the distributed computing device 1310 may assign a relatively high rating (e.g., 1) to each movie that the user watches through to its end, and a relatively low rating (e.g., 0) to each movie that the user starts but does not finish.
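As a concrete illustration of one way such a preference vector could be assembled, the sketch below places explicit ratings on a -1 to 1 scale at the indices of a shared item list and leaves unrated items at the neutral rating 0; the movie catalog and ratings are invented for the example:

```python
import numpy as np

# Hypothetical item catalog shared by all cooperating devices (order matters).
MOVIES = ["The A-Team", "A.I. Artificial Intelligence", "Airplane!", "Alien"]

def build_preference_vector(ratings):
    """Place each rating (here on a -1 to 1 scale) at its item's index;
    unrated items keep a neutral rating of 0."""
    prefs = np.zeros(len(MOVIES))
    for title, rating in ratings.items():
        prefs[MOVIES.index(title)] = rating
    return prefs

A_i = build_preference_vector({"The A-Team": 0.8, "Alien": -0.2})
print(A_i)   # [ 0.8  0.   0.  -0.2]
```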
[0102] The recommendation module 1320 uses the user preference data 1315 to train a recommendation model, which the recommendation module 1320 uses to generate recommendations for the user. The recommendation module 1320 is configured to work cooperatively with the recommendation modules 1320 of other distributed computing devices 1310 to develop the recommendation model. To train the recommendation model, the recommendation module 1320 of each distributed computing device 1310 samples the user preference data 1315 stored locally on the respective distributed computing device 1310 to generate sampled preference data 1330. For example, recommendation module 1320a of distributed computing device 1310a samples the user preference data 1315a to generate the sampled preference data 1330a. The sampled preference data 1330 is a mathematical function of the user preference data 1315 that involves random sampling, such as multiplying the user preference data 1315 by a random matrix. The sampled preference data 1330 is shared with the other distributed computing devices in a peer-to-peer fashion. For example, distributed computing device 1310b shares its sampled preference data 1330b with both distributed computing device 1310a and distributed computing device 1310N, and receives the sampled preference data 1330a and 1330N from distributed computing devices 1310a and 1310N, respectively. The distributed computing devices 1310 form various sets of connections, as described with respect to FIGs. 4A and 4B, and exchange and average the sampled preference data until the distributed computing devices 1310 reach a consensus result according to the DAC algorithm, as described with respect to FIGs. 4A-5B.
[0103] While the sampled preference data 1330 of one of the distributed computing devices 1310 is shared with the other distributed computing devices 1310, the raw user preference data 1315 does not leave any one of the distributed computing devices 1310. Randomly sampling the user preference data 1315 to generate the sampled preference data 1330 that is shared with other distributed computing devices 1310 obscures the underlying user preference data 1315, so that user privacy is maintained. For example, when distributed computing device 1310a receives the sampled preference data 1330b from another distributed computing device 1310b, the distributed computing device 1310a cannot recover the raw, underlying user preference data 1315b from the sampled preference data 1330b.
[0104] The distributed computing devices 1310a-1310N run a consensus algorithm, such as the distributed average consensus (DAC) algorithm described above, on the exchanged sampled preference data 1330 to obtain a consensus result for the sampled preference data. The distributed computing devices 1310a-1310N may also use a convergence indicator, such as the convergence indicator described above with respect to FIGs. 5A and 5B, to determine when a consensus result for the sampled preference data has been reached by all of the distributed computing devices 1310a-1310N. For example, the devices 1310a-1310N perform the DAC process on the matrices of sampled preference data 1330 to obtain a global matrix of the same size as the matrices of sampled preference data 1330. When the convergence indicator indicates that a distributed average consensus for the sampled preference data matrices has been achieved (i.e., that the exchanged and averaged sampled preference data matrices have converged), each recommendation module 1320 generates its own recommendation model from the consensus result. The recommendation model may vary slightly between distributed computing devices 1310, e.g., within the margin of error tolerance permitted for consensus. The recommendation module 1320 then applies the recommendation model to the local user preference data 1315 to generate recommendations for the user of the distributed computing device 1310.
[0105] As described above, using the DAC algorithm in conjunction with the convergence indicator to generate the recommendation model ensures that each distributed computing device 1310 has contributed to the coordinated recommendation modeling effort undertaken by the distributed computing devices 1310a-1310N. Unlike prior recommendation modeling processes, processes for generating the recommendation model according to embodiments herein run without the need for a central server. In addition, sampling the user preference data 1315 and performing the DAC algorithm reduces the computational resources required for each device 1310. The amount of user preference data 1315, or sampled preference data 1330, generated by all distributed computing devices 1310 can be large. While a single sampled user preference matrix can be stored on a single distributed computing device 1310, if the number of distributed computing devices N is large, a single distributed computing device 1310 may not be able to store all of the sampled preference data generated by the N devices, or even the sampled preference data generated by a portion of the N devices. In performing the DAC process, the distributed computing devices 1310 exchange and manipulate matrices of the size of the matrix of sampled preference data 1330 to generate a global matrix of the same size as the matrix of sampled preference data 1330. At no point during the DAC process does a distributed computing device 1310 store close to the amount of preference data or sampled preference data generated by all N devices.
[0106] As an example, $A_i \in \mathbb{R}^{N}$ represents a vector of user preference data that is local to node $i$, for $i = 1, \ldots, N_{nodes}$, and $A_{global} = [A_1, A_2, \ldots, A_{N_{nodes}}] \in \mathbb{R}^{N \times N_{nodes}}$ represents the global data set of all user preference data 1315. The cooperative subspace approach computes, in a fully distributed fashion, a representative subspace, $U \in \mathbb{R}^{N \times q}$, which approximates the range of $A_{global}$ such that $\|A_{global} - UU^T A_{global}\| \leq \epsilon \|A_{global}\|$, where $\epsilon$ is a user-specified tolerance on the accuracy of the approximation between 0 and 1. As noted above, in other embodiments, user preference data can be arranged in a matrix rather than a vector.
[0107] FIG. 14 is a flowchart showing a method for generating a personalized recommendation using a cooperative subspace algorithm at a particular node $i$, e.g., one of the distributed computing devices 1310. The distributed computing device 1310 collects 1410 local user preference data $A_i$, e.g., the distributed computing device 1310a collects user preference data 1315a. As described above, the distributed computing device 1310 may receive explicit user preference data as user inputs, e.g., ratings or reviews, or the distributed computing device 1310 may determine user preference data 1315 based on monitoring user activity.
[0108] The recommendation module 1320 samples 1420 the local user preference data $A_i$. For example, the recommendation module 1320 generates a random vector $\Omega_i \in \mathbb{R}^{1 \times q}$ and multiplies the random vector $\Omega_i$ by the local data $A_i$. The random vector $\Omega_i$ is a vector of random values, e.g., a Gaussian ensemble. Each distributed computing device 1310 generates the random vector $\Omega_i$ independently. The recommendation module 1320 multiplies its local data vector $A_i$ and the random vector $\Omega_i$ to generate the outer product $Y_i = A_i \Omega_i \in \mathbb{R}^{N \times q}$. The matrix $Y_i$ is an example of the sampled preference data 1330, and $Y_i$ approximates the data in the local data vector $A_i$ (i.e., $Y_i$ approximates the user preference data 1315).
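A minimal sketch of this sampling step, assuming NumPy and invented catalog and sketch sizes:

```python
import numpy as np

def sample_preference_vector(A_i, q, rng):
    """Step 1420: Y_i = outer(A_i, Omega_i), an N x q sampled preference data matrix."""
    Omega_i = rng.standard_normal(q)      # random vector, drawn independently per device
    return np.outer(A_i, Omega_i)         # obscures A_i while retaining its directional information

rng = np.random.default_rng(5)
N, q = 2_000, 8                           # hypothetical catalog size and sketch width
A_i = rng.uniform(-1, 1, size=N)          # stand-in for the local user preference vector
Y_i = sample_preference_vector(A_i, q, rng)
print(Y_i.shape)                          # (2000, 8)
```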
[0109] The recommendation module 1320 of the distributed computing device 1310, in cooperation with the other distributed computing devices, performs 1430 the DAC algorithm on the sampled preference data matrices $Y_i$ to obtain a global DAC result $Y_{global}$, which is the global matrix representing a consensus result for the matrices of sampled preference data 1330. $Y_{global}$ can be represented as follows:

$$Y_{global} = \frac{1}{N_{nodes}} \sum_{i=1}^{N_{nodes}} Y_i = \frac{1}{N_{nodes}} \sum_{i=1}^{N_{nodes}} A_i \Omega_i \in \mathbb{R}^{N \times q}$$
[0110] During a first iteration of the DAC process, a distributed computing device 1310 exchanges its sampled preference data matrix $Y_i$ with another distributed computing device 1310. For example, distributed computing device 1310a transmits the sampled preference data 1330a to the distributed computing device 1310b, and receives sampled preference data 1330b from distributed computing device 1310b. The recommendation module 1320 calculates an average of its sampled preference data matrix $Y_i$ and the sampled preference data matrix received from the other distributed computing device. For example, the recommendation module 1320a calculates an average of its sampled preference data 1330a and the received sampled preference data 1330b. This results in consensus sampled user preference data, which is a matrix of the same size as the sampled preference data matrix $Y_i$. In subsequent iterations, distributed computing devices 1310 exchange and average their current consensus sampled user preference data. The consensus sampled user preference data is repeatedly exchanged and averaged until a consensus result across the distributed computing devices 1310 is reached. The consensus result, which is the matrix $Y_{global}$, is obtained when the consensus sampled user preference data is substantially the same across all the distributed computing devices 1310, e.g., within a specified margin of error. The convergence indicator described with respect to FIGs. 5A and 5B may be used to determine when the consensus result $Y_{global}$ has been reached, and to determine whether all distributed computing devices 1310 participated in determining the consensus result.
[0111] While the full global data set of all user preference data $A_{global}$, including the preference data from each distributed computing device 1310, may be too large to be stored on and manipulated by a single distributed computing device 1310, the sampled preference data matrices $Y_i$, and therefore the consensus sampled preference data matrices and the global consensus result $Y_{global}$, are sufficiently small to be stored on and manipulated by a single distributed computing device 1310.
[0112] After calculating the DAC result $Y_{global}$, the recommendation module 1320 extracts 1440 a subspace matrix $U$ that spans the range of $Y_{global}$. For example, the recommendation module 1320 performs a local unitary decomposition, i.e., $Y_{global} = UR$, to obtain $U$, or performs another form of orthogonal decomposition. Following the decomposition, the distributed computing device 1310 (and each other cooperating distributed computing device) holds a copy of the representative subspace, $U \in \mathbb{R}^{N \times q}$, which approximately spans the range of the global preference data matrix $A_{global}$. The representative subspace $U$ is a recommendation model that the recommendation module 1320 can apply to an individual user's user preference data 1315 to determine recommendations for the user.
[0113] The recommendation module 1320 projects 1450 the local user preference data $A_i$ onto the subspace $U$ to obtain a recommendation vector $\hat{A}_i = UU^T A_i$. Each element of the recommendation vector $\hat{A}_i$ corresponds to an element in the user preference vector $A_i$. For example, if the user preference vector $A_i$ indicates user preferences for each of a set of movies, the recommendation vector $\hat{A}_i$ indicates potential user interest in each of the same set of movies. The value for a given element of the recommendation vector $\hat{A}_i$ represents a predicted preference of the user for the item represented by the element. As an example, if the local preference data $A_i$ represents a set of movies, the value of the first element of the recommendation vector $\hat{A}_i$ corresponds to the user's predicted preference for or interest in the movie "The A-Team."

[0114] The recommendation module 1320 extracts and provides 1460 recommendations based on the recommendation vector $\hat{A}_i$ to the user of the distributed computing device 1310. For example, the recommendation module 1320 identifies the items in the set corresponding to the elements in the recommendation vector $\hat{A}_i$ having the highest values, and provides these items to the user as recommendations. In the movie example, the recommendation module 1320 may identify ten movies with the highest values in the recommendation vector $\hat{A}_i$ and for which the user has not provided preference data in the user preference data 1315, and return these movies as recommendations.
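For illustration, steps 1450 and 1460 might be combined as in the sketch below, which projects the local preference vector onto a subspace U and ranks the unrated items by their projected scores; the sizes, the stand-in subspace, and the rule for identifying unrated items (a rating of exactly 0) are assumptions:

```python
import numpy as np

def recommend(A_i, U, top_k=10):
    """Steps 1450-1460: project local preferences onto the consensus subspace U
    and return the indices of the highest-scoring items the user has not rated."""
    A_hat = U @ (U.T @ A_i)                       # recommendation vector in R^N
    unrated = np.flatnonzero(A_i == 0)            # items with no local preference data
    ranked = unrated[np.argsort(A_hat[unrated])[::-1]]
    return ranked[:top_k], A_hat

rng = np.random.default_rng(6)
N, q = 2_000, 8
A_i = np.where(rng.random(N) < 0.05, rng.uniform(-1, 1, N), 0.0)   # sparse local ratings
U, _ = np.linalg.qr(rng.standard_normal((N, q)))                   # stand-in for the model U
top_items, A_hat = recommend(A_i, U)
print(top_items)        # indices of the recommended items
```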
USING DISTRIBUTED AVERAGE CONSENSUS (DAC) FOR LATENT SEMANTIC INDEXING
[0115] As described above, in prior implementations for text-based searching using latent semantic indexing (LSI), a centralized system analyzes documents to determine their latent semantic content. The centralized system stores data describing these documents, receives searches from users, compares the search information to the stored data, and provides relevant documents to users. The centralized system must have access to the documents themselves in order to analyze them. Thus, the centralized system collects and analyzes a significant amount of data. The centralized system may also track users’ searches to learn about individual users. Content providers would prefer to provide access to documents without having the documents scraped and analyzed by a search service, and users would prefer to search without a central system collecting and storing data about their behavior.
[0116] As disclosed herein, to generate a latent semantic index and enable searching in a distributed manner, a set of cooperating distributed computing devices according to
embodiments herein use a cooperative subspace approach that combines the DAC algorithm described above with the theory of random sampling. Each cooperating distributed computing device stores one or more documents, and the documents distributed across the set of
cooperating distributed computing devices are jointly referred to as a corpus of documents. The documents in the corpus may be documents that their respective users plan to make available for searching by other distributed computing devices, e.g., documents that can be searched by some or all of the cooperating distributed computing devices and/or other devices.
[0117] The cooperating distributed computing devices jointly generate a latent semantic index based on the corpus of documents, without the contents of any individual document being exposed to other distributed computing devices. First, each distributed computing device individually analyzes its locally-stored documents, and randomly samples the results of this analysis to generate a matrix that approximates and obscures the content of the local documents. The distributed computing devices share their matrices and perform the DAC algorithm described above to generate a matrix reflecting the corpus of documents stored by all of the cooperating distributed computing devices. Each distributed computing device then extracts a low-dimension latent semantic index (LSI) subspace from the matrix based on the DAC result. This LSI subspace reflects the analysis of all of the documents in the corpus, but is much smaller than a matrix concatenating the raw analysis results of the local documents. The cooperative subspace approach allows the subspace to be calculated efficiently, and the random sampling obscures the underlying documents so that privacy is maintained.
[0118] The LSI subspace generated through this approach can be used for various applications. For example, one distributed computing device can search for documents on other distributed computing devices using the LSI subspace. The searching distributed computing device receives a search request that may include, for example, one or more keywords (i.e., a keyword search) or one or more documents (e.g., for a search for similar documents). The searching device represents the received search request in the subspace and transmits the representation of the search request to the cooperating distributed computing devices, or some other set of searchable devices. Each distributed computing device being searched compares the received representation of the search request to representations of the distributed computing device's local documents in the same subspace. If a distributed computing device being searched finds a document similar to the search request, the distributed computing device returns the document, or information about the document, to the searching device. A corpus can be constructed, and a search performed, on any type of text-based document. For example, a subspace constructed from a corpus of resumes can be used to conduct a hiring search, or a subspace constructed from a corpus of dating profiles can be used to implement a dating service.
[0119] FIG. 15 illustrates a distributed environment 1500 for generating a low-dimension subspace for latent semantic indexing, according to one embodiment. The environment 1500 includes a number N of distributed computing devices 1510, referred to as distributed computing device 1510a through distributed computing device 1510N. The distributed computing devices 1510 may be embodiments of the distributed computing devices 130 described above. Each distributed computing device 1510 includes a set of documents 1515 and a latent semantic indexing (LSI) module 1520.
[0120] The documents 1515 are any text-based or text-containing documents on or accessible to the distributed computing device 1510. In some embodiments, the documents 1515 are locally stored on the distributed computing device 1510. In other embodiments, the documents 1515 are documents that are accessible to the distributed computing device 1510, but not permanently stored on the distributed computing device 1510. For example, the documents 1515 may be documents that the distributed computing device 1510 accesses from an external hard drive, from a networked server with dedicated storage for the distributed computing device 1510, from cloud-based storage, etc. The documents 1515 may be any file format, e.g., text files, PDFs, LaTeX, HTML, etc. In some embodiments, the documents 1515 form a general corpus of documents, such as a corpus of websites, a corpus of text-based documents, or a corpus including any files the distributed computing devices 1510 are willing to share with other distributed computing devices. In other embodiments, the documents 1515 form a specialized corpus of documents that users wish to share, such as resumes, dating profiles, social media profiles, research papers, works of fiction, computer code, recipes, reference materials, etc.
[0121] The LSI module 1520 uses the documents 1515 to generate, in conjunction with the other distributed computing devices, a low-dimension subspace in which the documents 1515 can be represented and compared. The LSI module 1520 includes a calculation module 1525 that operates on the documents 1515. Using the documents 1515, the calculation module 1525 generates word counts 1530 and sampled word counts 1535. Using the sampled word counts 1535 and working in conjunction with the other distributed computing devices, the calculation module 1525 generates the LSI subspace 1540.
[0122] First, the calculation module 1525 analyzes the documents 1515 to calculate the word counts 1530 for each document. To generate a latent semantic index, each document is first represented as a vector in which each vector element represents a distinct word. The value for each element in the word count vector is the number of times the corresponding word appears in the document. For example, if in a given document, the word "patent" appears five times and the word "trademark" appears three times, the element in the vector corresponding to "patent" is assigned a value of five, and the element corresponding to "trademark" is assigned a value of three. In other embodiments, the elements in the word count vector are mathematically related to the actual word counts, e.g., the values in the word count vector are normalized or otherwise proportional to the actual word counts of the document. The words represented by the vector elements can be, e.g., all words in a given dictionary, a set of words that excludes stop words, or a set of words that groups words with the same word stem (e.g., one element may group "patent," "patents," and "patenting").
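As an illustration of this construction, the sketch below counts occurrences of a small, invented shared vocabulary in one document; a real deployment would use a full dictionary agreed on by the cooperating devices:

```python
import re
from collections import Counter

import numpy as np

# Hypothetical shared vocabulary; in practice all devices would agree on one list.
VOCAB = ["patent", "trademark", "copyright", "license"]
WORD_INDEX = {w: i for i, w in enumerate(VOCAB)}

def word_count_vector(text):
    """Build the word count vector for one document over the shared vocabulary."""
    counts = Counter(re.findall(r"[a-z]+", text.lower()))
    vec = np.zeros(len(VOCAB))
    for word, count in counts.items():
        if word in WORD_INDEX:                 # words outside the vocabulary are ignored
            vec[WORD_INDEX[word]] = count
    return vec

doc = "A patent differs from a trademark; the patent covers inventions."
print(word_count_vector(doc))                  # [2. 1. 0. 0.]
```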
[0123] If the distributed computing device 1510 includes multiple documents 1515, the calculation module 1525 calculates a word count vector for each document. In some
embodiments, the distributed computing device 1510 may combine multiple documents into a single vector (e.g., two related documents), or separate a single document into multiple word count vectors (e.g., a long document, or a document that has subsections). The calculation module 1525 concatenates the word count vectors for the documents 1515 to form a word count matrix.
[0124] The calculation module 1525 samples the word counts 1530 to calculate the sampled word counts 1535. The sampled word counts 1535 are a mathematical function of the word counts 1530 that involves random sampling, such as multiplying the matrix of word counts 1530 by a random matrix. The sampled word counts 1535 are shared with the other distributed computing devices in a peer-to-peer fashion. For example, distributed computing device 1510b shares its sampled word counts 1535b with both distributed computing device 1510a and distributed computing device 1510N, and receives the sampled word counts 1535a and 1535N from distributed computing devices 1510a and 1510N, respectively. The distributed computing devices 1510 form various sets of connections, as described with respect to FIGs. 4A and 4B, and exchange and average the sampled word counts until the distributed computing devices 1510 reach a consensus result according to the DAC algorithm, as described with respect to FIGs. 4A-5B.
[0125] While the sampled word counts 1535 of one of the distributed computing devices 1510 are shared with the other distributed computing devices 1510, the word counts 1530 do not leave any one of the distributed computing devices 1510. Representing the documents 1515 as word counts 1530 and then sampling the word counts 1530 to generate the sampled word counts 1535 that are shared among the distributed computing devices 1510 obscures the underlying documents 1515, so that privacy of the documents is maintained. For example, when distributed computing device 1510a receives the sampled word counts 1535b from another distributed computing device 1510b, the distributed computing device 1510a cannot recover the documents 1515b, or even the word counts 1530b, from the sampled word counts 1535b. This is advantageous for applications where users want other users to be able to find their documents, but do not wish to provide full public access to their documents.
[0126] The distributed computing devices 1510a-1510N run a consensus algorithm, such as the distributed average consensus (DAC) algorithm described above, on the exchanged sampled word counts 1535 to obtain a consensus result for the sampled word counts 1535. The distributed computing devices 1510a-1510N may also use a convergence indicator, such as the convergence indicator described above with respect to FIGs. 5A and 5B, to determine when a consensus result for the sampled word counts 1535 has been reached by all of the distributed computing devices 1510a-1510N. For example, the distributed computing devices 1510a-1510N perform the DAC process on matrices of the sampled word counts 1535 to obtain a global matrix of the same size as the matrices of the sampled word counts 1535. When the convergence indicator indicates that a distributed average consensus for the sampled word count matrices has been achieved (i.e., that the exchanged and averaged word count matrices have converged), each calculation module 1525 independently calculates an LSI subspace 1540 from the consensus result. While FIG. 15 indicates that all distributed computing devices 1510 have the same LSI subspace 1540, the calculated LSI subspaces may vary slightly between distributed computing devices 1510, e.g., within a margin of error tolerance permitted for consensus. The distributed computing devices 1510 can then apply the LSI subspace 1540 to analyze their own documents 1515 and to search for documents on other distributed computing devices.
[0127] As described above, using the DAC algorithm in conjunction with the convergence indicator to generate the LSI subspace ensures that each distributed computing device 1510 has contributed to the coordinated subspace construction effort undertaken by the distributed computing devices 1510a-1510N. Unlike prior latent semantic indexing methods, processes for generating a latent semantic index according to embodiments herein run without the need for a central server. In addition, using sampled word counts 1535, rather than raw documents or full word counts 1530, and performing the DAC algorithm reduces the computational resources required for each distributed computing device 1510. The amount of data in the documents 1515, and even in the word counts 1530, generated by all distributed computing devices 1510 can be large. For example, the word counts 1530 are typically sparse but very large matrices, particularly when a distributed computing device 1510 contains a large number of documents 1515. While a matrix of sampled word counts for a single distributed computing device's documents can be stored on and manipulated by a single distributed computing device 1510, if the number of distributed computing devices N is large, a single distributed computing device may not be able to store all of the sampled word count data generated by the N distributed computing devices, or even the sampled word count data generated by a portion of the N distributed computing devices. In performing the DAC process, the distributed computing devices 1510 exchange and manipulate matrices of the size of the matrix of sampled word counts 1535 to generate a global matrix of the same size as the matrix of sampled word counts 1535. At no point during the DAC process does a distributed computing device 1510 store close to the amount of word count data or sampled word count data generated by all N devices.
[0128] As an example, $A_i \in \mathbb{R}^{N \times k_i}$ represents a matrix of word counts 1530 of the $k_i$ documents local to node $i$, for $i = 1, \ldots, N_{nodes}$, and $A_{global} = [A_1, A_2, \ldots, A_{N_{nodes}}]$ represents the global data set of all word counts 1530. $N$ is the length of the word count vectors. The cooperative subspace approach computes, in a fully distributed fashion, a representative LSI subspace, $U \in \mathbb{R}^{N \times q}$, which approximates the range of $A_{global}$ such that $\|A_{global} - UU^T A_{global}\| \leq \epsilon \|A_{global}\|$, where $\epsilon$ is a user-specified tolerance on the accuracy of the approximation between 0 and 1.
[0129] FIG. 16 is a flowchart showing a method for generating a low-dimension subspace for latent semantic indexing using distributed average consensus at a particular node $i$, e.g., one of the distributed computing devices 1510. The LSI module 1520 generates 1610 a local word count matrix $A_i$ for a set of local documents. As an example, as described above, the calculation module 1525a calculates the word counts 1530a for a set of documents 1515a accessible to the distributed computing device 1510a.
[0130] The LSI module 1520 samples 1620 the local word count data $A_i$. For example, the calculation module 1525 generates a random matrix $\Omega_i \in \mathbb{R}^{k_i \times q}$ and multiplies the random matrix $\Omega_i$ by the local word count matrix $A_i$. The random matrix $\Omega_i$ is a matrix of random values, e.g., a Gaussian ensemble matrix. Each distributed computing device 1510 generates the random matrix independently. The calculation module 1525 multiplies its local word count matrix $A_i$ and the random matrix $\Omega_i$ to generate the matrix-matrix product $Y_i = A_i \Omega_i \in \mathbb{R}^{N \times q}$. The matrix $Y_i$ is an example of the sampled word counts 1535, and approximates the data in the local word count matrix $A_i$ (i.e., $Y_i$ approximates the word counts 1530).

[0131] The LSI module 1520 of the distributed computing device 1510, in cooperation with the other distributed computing devices, performs 1630 the DAC algorithm on the sampled word count matrices $Y_i$ to obtain a global DAC result matrix $Y_{global}$, which is the global matrix representing a consensus result for the matrices of sampled word counts 1535. $Y_{global}$ can be represented as follows:
$$Y_{global} = \frac{1}{N_{nodes}} \sum_{i=1}^{N_{nodes}} Y_i = \frac{1}{N_{nodes}} \sum_{i=1}^{N_{nodes}} A_i \Omega_i \in \mathbb{R}^{N \times q}$$
[0132] During a first iteration of the DAC process, a distributed computing device 1510 exchanges its sampled word count matrix $Y_i$ with another distributed computing device 1510. For example, distributed computing device 1510a transmits the sampled word counts 1535a to the distributed computing device 1510b, and receives sampled word counts 1535b from distributed computing device 1510b. The LSI module 1520 calculates an average of its sampled word count matrix $Y_i$ and the sampled word count matrix received from the other distributed computing device. For example, the calculation module 1525a of the LSI module 1520a calculates an average of its matrix of sampled word counts 1535a and the received matrix of sampled word counts 1535b. This results in a consensus sampled word count matrix, which is a matrix of the same size as the sampled word count matrix $Y_i$. In subsequent iterations, distributed computing devices 1510 exchange and average their current consensus sampled word count matrices. The consensus sampled word count matrices are repeatedly exchanged and averaged until a consensus result across the distributed computing devices 1510 is reached. The consensus result, which is the matrix $Y_{global}$, is obtained when the consensus sampled word counts are substantially the same across all the distributed computing devices 1510, e.g., within a specified margin of error. The convergence indicator described with respect to FIGs. 5A and 5B may be used to determine when the consensus result $Y_{global}$ has been reached, and to determine whether all distributed computing devices 1510 participated in determining the consensus result.
[0133] While a full word count matrix $A_{global}$ including the word counts of all documents in the corpus may be too large to be stored on and manipulated by a single distributed computing device 1510, the sampled word count matrices $Y_i$, and therefore the consensus sampled word count matrices and the global consensus result $Y_{global}$, are sufficiently small to be stored on and manipulated by a single distributed computing device 1510.

[0134] After calculating the DAC result $Y_{global}$, the LSI module 1520 extracts 1640 a low-dimension LSI subspace matrix $U$ from the DAC result $Y_{global}$ that spans the range of $Y_{global}$. For example, the calculation module 1525 performs a local unitary decomposition, i.e., $Y_{global} = UR$, to obtain $U$, or performs another form of orthogonal decomposition. Following the decomposition, the distributed computing device 1510 (and each other cooperating distributed computing device) holds a copy of the representative subspace, $U \in \mathbb{R}^{N \times q}$, which approximately spans the range of the global word count data matrix $A_{global}$. The LSI subspace matrix $U$ is a low-dimension subspace 1540 (e.g., has a low dimension relative to $A_{global}$) that the LSI module 1520 can use for various applications. For example, the LSI module 1520 can project a document into the LSI subspace 1540 to determine the latent semantic content of the document, or the LSI module 1520 can compare the latent semantic content of multiple documents by projecting the documents into the same LSI subspace.
[0135] FIG. 17 is a flowchart showing a method for searching for documents in the distributed environment based on the LSI subspace, according to one embodiment. A requesting device, e.g., one of the distributed computing devices 1510, receives a search request and generates 1710 a vector $s$ of word counts for the search. The search request may be, for example, a set of keywords, or one or more documents. For example, to perform a search of job candidates by searching users' resumes, a searching user (e.g., a hiring manager) may input a set of skills and attributes, e.g., "Python", "PhD", "volunteer", etc., into an interface of the requesting device. Alternatively, the searching user may provide or select (e.g., from the documents 1515) one or more resumes of current, successful employees or other candidates to search for similar candidates. A calculation module 1525 of the requesting device generates the word count vector $s$ in a similar manner to generating the word counts $A_i$.
[0136] The requesting device then calculates 1720 a subspace search vector $\hat{s}$ by projecting the word count vector $s$ into the LSI subspace. For example, the calculation module 1525 generates the subspace search vector $\hat{s}$ by multiplying the word count vector $s$ by the transpose of the LSI subspace matrix $U$, i.e., $\hat{s} = U^T s$. The subspace search vector characterizes the search request in the LSI subspace, and is a lower-dimension vector than the word count vector $s$ (i.e., $\hat{s} \in \mathbb{R}^q$, $s \in \mathbb{R}^N$, $q < N$). For the resume search example, the subspace search vector characterizes the skills and attributes being sought by a hiring manager in the LSI subspace.

[0137] The requesting device transmits 1730 the subspace search vector $\hat{s}$ to a set of searchable devices for document searching. The searchable devices are a set of devices that accept search requests from requesting devices, and that have a copy of the LSI subspace matrix $U$. In some embodiments, the searchable devices include the same distributed computing devices 1510 that cooperated to generate the LSI subspace matrix $U$, or a subset of these distributed computing devices. In some embodiments, the searchable devices include devices that did not cooperate to generate $U$, but obtained $U$ from another device.
[0138] The searchable devices each compare 1740 the received subspace search vector to subspace vectors in the same LSI subspace used to characterize the searchable devices’ local documents (e.g., documents 1515). The subspace vectors characterizing searchable devices’ local documents for searching are referred to as target vectors. Each searchable device calculates the target vectors in the same manner as the subspace search vector was calculated in 1720. The searchable devices may calculate and store the target vectors for their local documents prior to receiving the request, e.g., after obtaining the LSI subspace matrix U at 1640 in FIG. 16, or after receiving the LSI subspace matrix from another device. To compare the search vector to a target vector describing a searched document, a searchable device (e.g., the calculation module 1525 of the searchable device) may calculate a dot product of the search vector and the target vector, a Euclidean distance between the search vector and the target vector, or some other measure of distance between the two vectors.
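For illustration, the comparison in step 1740 might look like the sketch below, which uses the Euclidean-distance option named above together with an invented distance threshold; the stand-in subspace and the synthetic word counts are assumptions:

```python
import numpy as np

def to_subspace(word_count_vec, U):
    """Project a word count vector into the LSI subspace (as in step 1720)."""
    return U.T @ word_count_vec                     # length-q representation

def match_documents(s_hat, target_vectors, max_distance):
    """Step 1740: compare the search vector to each target vector by Euclidean distance
    and return (index, distance) for documents that fall within the threshold."""
    hits = []
    for idx, t_hat in enumerate(target_vectors):
        dist = np.linalg.norm(s_hat - t_hat)
        if dist <= max_distance:
            hits.append((idx, dist))
    return sorted(hits, key=lambda pair: pair[1])   # closest matches first

rng = np.random.default_rng(7)
N, q = 5_000, 32
U, _ = np.linalg.qr(rng.standard_normal((N, q)))    # stand-in for the shared LSI subspace
s_hat = to_subspace(rng.poisson(1.0, N).astype(float), U)                  # search request
targets = [to_subspace(rng.poisson(1.0, N).astype(float), U) for _ in range(20)]
print(match_documents(s_hat, targets, max_distance=50.0)[:3])              # three closest documents
```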
[0139] The searchable devices return 1750 any local documents, or data describing local documents, that were determined to be relevant to the requesting device's search. For example, if a searchable device calculates a Euclidean distance to compare the search vector to the target vector of each local document, the searchable device may provide data describing any documents with target vectors that have a Euclidean distance to the search vector below a threshold value. Alternatively, a searchable device may return data describing a set of documents with the closest match (e.g., the ten closest matching documents), or data describing all documents and their match values. The match value indicates the measure of distance between the search vector and the target vector. The returned data may include a document identifier, the match value (e.g., the Euclidean distance or the dot product), and some information describing the document, such as a title, author, date of creation or publication, etc. The information returned may depend on the context; for example, for a resume search, the searchable device may return a candidate overview (e.g., current position, desired position, location) that is machine-generated or supplied by the candidate. Based on the returned results, the searching device may request one or more full documents from one or more searchable devices.
[0140] In some embodiments, one or more searchable devices store target vectors describing documents stored on one or more other devices. In this case, a searchable device (e.g., a web server) compares the search vector to each target vector stored by the searchable device, on behalf of the other devices storing the documents. Unlike prior search engines, the searchable device does not access the full documents, but instead only receives the target vectors that characterize the documents in the subspace from the documents’ owners. In response to a search request, the searchable device can return information for retrieving matching documents from the devices that store the matching documents.
[0141] The LSI subspace matrix $U$ can be used for other applications besides document searching. As another example, to determine a set of relevant words (e.g., keywords) for a given document with word count vector $a$, the calculation module 1525 of a distributed computing device 1510 projects the word count vector $a$ into the LSI subspace 1540 by calculating the product $\hat{a} = UU^T a$, $\hat{a} \in \mathbb{R}^N$. The values in the resulting vector $\hat{a}$, each of which corresponds to a particular word in the set of $N$ words (e.g., the set of words in a particular dictionary), indicate the relevance of each word to the document. The words that have high values (e.g., the five or ten words corresponding to the highest values in the vector $\hat{a}$) can be selected as keywords to describe the document.
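A minimal sketch of this keyword-extraction use of U, with an invented placeholder vocabulary and a stand-in subspace:

```python
import numpy as np

def extract_keywords(a, U, vocab, num_keywords=5):
    """Project the word count vector a into the LSI subspace and return the words
    corresponding to the largest values of a_hat = U U^T a."""
    a_hat = U @ (U.T @ a)
    top = np.argsort(a_hat)[::-1][:num_keywords]
    return [vocab[i] for i in top]

rng = np.random.default_rng(8)
vocab = [f"word{i}" for i in range(1_000)]          # placeholder dictionary of N words
U, _ = np.linalg.qr(rng.standard_normal((len(vocab), 16)))   # stand-in LSI subspace
a = rng.poisson(0.2, len(vocab)).astype(float)      # word counts for one document
print(extract_keywords(a, U, vocab))                # e.g., ['word412', 'word87', ...]
```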
CONCLUSION
[0142] The foregoing description of the embodiments has been presented for the purpose of illustration; it is not intended to be exhaustive or to limit the patent rights to the precise forms disclosed. Persons skilled in the relevant art can appreciate that many modifications and variations are possible in light of the above disclosure.
[0143] Some portions of this description describe the embodiments in terms of algorithms and symbolic representations of operations on information. These algorithmic descriptions and representations are commonly used by those skilled in the data processing arts to convey the substance of their work effectively to others skilled in the art. These operations, while described functionally, computationally, or logically, are understood to be implemented by computer programs or equivalent electrical circuits, microcode, or the like. Furthermore, it has also proven convenient at times, to refer to these arrangements of operations as modules, without loss of generality. The described operations and their associated modules may be embodied in software, firmware, hardware, or any combinations thereof.
[0144] Any of the steps, operations, or processes described herein may be performed or implemented with one or more hardware or software modules, alone or in combination with other devices. In one embodiment, a software module is implemented with a computer program product comprising a computer-readable medium containing computer program code, which can be executed by a computer processor for performing any or all of the steps, operations, or processes described.
[0145] Embodiments may also relate to an apparatus for performing the operations herein. This apparatus may be specially constructed for the required purposes, and/or it may comprise a general-purpose computing device selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a non-transitory, tangible computer readable storage medium, or any type of media suitable for storing electronic instructions, which may be coupled to a computer system bus. Furthermore, any computing systems referred to in the specification may include a single processor or may be architectures employing multiple processor designs for increased computing capability.
[0146] Embodiments may also relate to a product that is produced by a computing process described herein. Such a product may comprise information resulting from a computing process, where the information is stored on a non-transitory, tangible computer readable storage medium and may include any embodiment of a computer program product or other data combination described herein.
[0147] Finally, the language used in the specification has been principally selected for readability and instructional purposes, and it may not have been selected to delineate or circumscribe the patent rights. It is therefore intended that the scope of the patent rights be limited not by this detailed description, but rather by any claims that issue on an application based hereon. Accordingly, the disclosure of the embodiments is intended to be illustrative, but not limiting, of the scope of the patent rights, which is set forth in the following claims.

Claims

What is claimed is:
1. A method for distributed computing comprising:
receiving over a network, at a first distributed computing device of a plurality of distributed computing devices, a data partition of a plurality of data partitions for a computing task, wherein each of the plurality of distributed computing devices is assigned a respective data partition of the plurality of data partitions;
generating, by the first distributed computing device, a first partial result of a plurality of partial results generated by the plurality of distributed computing devices; at the first distributed computing device, iteratively executing a distributed average consensus (DAC) process comprising, for each iteration of the process:
transmitting the first partial result of the first distributed computing device to a second distributed computing device of the plurality of distributed computing devices,
receiving a second partial result generated by the second distributed
computing device from the second distributed computing device, and
updating the first partial result of the first distributed computing device by computing an average of the first partial result and the second partial result,
in response to determining that respective partial results of the plurality of distributed computing devices have reached a consensus value, determining to stop executing the DAC process; and
generating, by the first distributed computing device, a final result of the computing task based on the consensus value.
2. The method of claim 1, wherein a requesting computing device provides the computing task and a set of requirements for the computing task to an intermediary computing device, the method further comprising: receiving over the network, at the first distributed computing device from the
intermediary computing device, a smart contract generated by the
intermediary computing device, the smart contract comprising the set of requirements;
determining, by the first distributed computing device, that the first distributed
computing device meets the set of requirements; and
transmitting over the network, from the first distributed computing device, a commitment to perform the computing task to the intermediary computing device.
3. The method of claim 2, wherein the first distributed computing device receives instructions for performing the computing task from the intermediary computing device over the network, and the first distributed computing device executes the DAC process according to the received instructions.
4. The method of claim 1, the method further comprising:
publishing, by the first distributed computing device to a blockchain, connection information for communicating with the first distributed computing device; and
compiling, by the first distributed computing device, a peer list comprising connection information published to the blockchain for at least a portion of the plurality of distributed computing devices;
wherein executing a DAC process for each iteration of the process further comprises forming a connection with the second distributed computing device over the network according to connection information for the second distributed computing device from the peer list.
5. The method of claim 1, wherein the second distributed computing device during a first iteration of the DAC process and the second distributed computing device during a second iteration of the DAC process are different distributed computing devices of the plurality of distributed computing devices.
6. The method of claim 1, wherein iteratively executing the DAC process further comprises, for each iteration of the process:
transmitting over the network a first convergence indicator of the first distributed computing device to the second distributed computing device;
receiving over the network a second convergence indicator of the second distributed computing device from the second distributed computing device; and
updating the first convergence indicator of the first distributed computing device by determining a center of mass of the first convergence indicator and the second convergence indicator.
7. The method of claim 6, wherein determining that respective partial results of the plurality of distributed computing devices have reached a consensus value comprises determining that the first convergence indicator of the first distributed computing device is within a threshold distance of a global center of mass of the convergence indicator.
8. The method of claim 7, wherein the convergence indicator comprises an n-sphere, the method further comprising receiving data specifying a portion of the n-sphere having a center of mass comprising a weight and a position, wherein each distributed computing device of the plurality of distributed computing devices is assigned a respective non-overlapping portion of the n-sphere.
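A minimal sketch of the convergence-indicator bookkeeping of claims 6-8, assuming each device starts from the weight and position of the center of mass of its assigned n-sphere portion. The rule that each device retains half of the combined mass after a pairwise update, and the use of the sphere's geometric center as the global center of mass, are assumptions made for illustration.

```python
import numpy as np


def merge_indicators(weight_a, position_a, weight_b, position_b):
    """Pairwise update of claim 6: center of mass of two weighted indicators."""
    position_a = np.asarray(position_a, dtype=float)
    position_b = np.asarray(position_b, dtype=float)
    merged_position = (weight_a * position_a + weight_b * position_b) / (weight_a + weight_b)
    merged_weight = 0.5 * (weight_a + weight_b)   # assumption: the combined mass is split between the pair
    return merged_weight, merged_position


def has_converged(position, global_center_of_mass, threshold=1e-6):
    """Claim 7 test: indicator within a threshold distance of the global center of mass."""
    return np.linalg.norm(np.asarray(position) - np.asarray(global_center_of_mass)) < threshold
```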
9. The method of claim 1, wherein the computing task comprises at least one of a distributed dot product, a distributed matrix-vector product, a distributed least squares calculation, or decentralized Bayesian parameter learning.
10. A method for distributed computing comprising:
receiving over a network, at an intermediary computing device, a request for completing a computing task from a requesting computing device, the request comprising a set of requirements for the computing task;
transmitting, by the intermediary computing device, at least a portion of the set of requirements to a plurality of distributed computing devices over the network;
receiving commitments from the plurality of distributed computing devices over the network to perform the computing task, each of the plurality of distributed computing devices meeting the portion of the set of requirements;
transmitting, to each of the plurality of distributed computing devices, a respective data partition of a plurality of data partitions for the computing task, wherein the plurality of distributed computing devices are configured to iteratively execute a distributed average consensus (DAC) process to calculate a consensus value for the computing task; and
returning a result of the computing task to the requesting computing device.
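An illustrative sketch of the intermediary flow of claim 10: broadcast requirements, collect commitments, distribute data partitions, and return the consensus-derived result. The `commit`, `send_partition`, and `await_consensus` calls are hypothetical device-side helpers; the claim does not prescribe a particular API.

```python
def coordinate_task(request, candidate_devices, partition_fn):
    """Illustrative intermediary flow for claim 10; device methods are hypothetical."""
    requirements = request["requirements"]
    committed = [d for d in candidate_devices if d.commit(requirements)]   # collect commitments
    partitions = partition_fn(request["task_data"], len(committed))        # one partition per device
    for device, partition in zip(committed, partitions):
        device.send_partition(partition)        # devices then run the DAC process among themselves
    return committed[0].await_consensus()       # any device can report the consensus value
```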
11. The method of claim 10, further comprising transmitting over the network instructions to a first distributed computing device of the plurality of distributed computing devices to iteratively execute the DAC process by, for each iteration of the DAC process:
transmitting over the network a first partial result for the computing task to at least one other of the plurality of distributed computing devices;
receiving over the network a second partial result for the computing task from the at least one other distributed computing device; and
updating the first partial result for the computing task by computing an average of the first partial result and the second partial result.
12. The method of claim 11, further comprising transmitting instructions over the network to the first distributed computing device of the plurality of distributed computing devices to determine that respective partial results of the plurality of distributed computing devices have reached a consensus value, and, in response to the determination, stop execution of the DAC process.
13. The method of claim 10, further comprising:
assigning, to each of the plurality of distributed computing devices, a portion of a convergence indicator, each portion of the convergence indicator having a center of mass, and the convergence indicator having a global center of mass;
transmitting over the network, to each of the plurality of distributed computing devices, data specifying the portion of the convergence indicator assigned to the distributed computing device; and
receiving over the network, from at least one of the plurality of distributed computing devices, confirmation that the convergence indicator is within a threshold distance of a global center of mass of the convergence indicator, wherein the plurality of the distributed computing devices iteratively update the respective convergence indicators during execution of the DAC process.
14. The method of claim 13, wherein the convergence indicator comprises an n-sphere, each distributed computing device of the plurality of distributed computing devices is assigned a respective non-overlapping portion of the n-sphere, and the data specifying each portion of the convergence indicator comprises a respective weight and a respective position of the center of mass of the portion of the convergence indicator.
15. The method of claim 10, wherein at least the portion of the set of requirements transmitted to the plurality of distributed computing devices and the commitments from the plurality of distributed computing devices to perform the computing task are recorded in a smart contract.
16. A non-transitory computer readable storage medium configured to store program code, the program code comprising instructions that, when executed by one or more processors, cause the one or more processors to:
receive, over a network, a request for completing a computing task from a requesting computing device, the request comprising a set of requirements for the computing task;
transmit at least a portion of the set of requirements to a plurality of distributed computing devices over the network;
receive commitments from the plurality of distributed computing devices over the network to perform the computing task, each of the plurality of distributed computing devices meeting the portion of the set of requirements;
transmit, to each of the plurality of distributed computing devices, a respective data partition of a plurality of data partitions for the computing task, wherein the plurality of distributed computing devices are configured to iteratively execute a distributed average consensus (DAC) process to calculate a consensus value for the computing task; and
return a result of the computing task to the requesting computing device.
17. The non-transitory computer readable storage medium of claim 16, further comprising instructions to transmit over the network DAC instructions to a first distributed computing device of the plurality of distributed computing devices, the DAC instructions comprising instructions to iteratively execute the DAC process, for each iteration of the DAC process, by:
transmitting over the network a first partial result for the computing task to at least one other of the plurality of distributed computing devices;
receiving over the network a second partial result for the computing task from the at least one other distributed computing device; and
updating the first partial result for the computing task by computing an average of the first partial result and the second partial result.
18. The non-transitory computer readable storage medium of claim 17, wherein the DAC instructions further comprise instructions to determine that respective partial results of the plurality of distributed computing devices have reached a consensus value, and, in response to the determination, stop execution of the DAC process.
19. The non-transitory computer readable storage medium of claim 16, further comprising instructions to:
assign, to each of the plurality of distributed computing devices, a portion of a convergence indicator, each portion having a center of mass, and the convergence indicator having a global center of mass;
transmit over the network, to each of the plurality of distributed computing devices, data specifying the portion of the convergence indicator assigned to the distributed computing device; and
receive over the network, from at least one of the plurality of distributed computing devices, confirmation that the convergence indicator is within a threshold distance of a global center of mass of the convergence indicator, wherein the plurality of the distributed computing devices iteratively update the respective convergence indicators during execution of the DAC process.
20. The non-transitory computer readable storage medium of claim 19, wherein the convergence indicator comprises an n-sphere, each distributed computing device of the plurality of distributed computing devices is assigned a respective non-overlapping portion of the n-sphere, and the data specifying each portion of the convergence indicator comprises a respective weight and a respective position of the center of mass of the portion of the convergence indicator.
21. A method for cooperative learning comprising:
generating, at a distributed computing device, a gradient descent matrix based on data received by the distributed computing device and a model stored on the distributed computing device;
calculating, by the distributed computing device, a sampled gradient descent matrix based on the gradient descent matrix and a random matrix;
iteratively executing, by the distributed computing device, a process to determine a consensus gradient descent matrix in conjunction with a plurality of additional distributed computing devices connected to the distributed computing device by a network, the consensus gradient descent matrix based on the sampled gradient descent matrix calculated by the distributed computing device and a plurality of additional sampled gradient descent matrices calculated by the plurality of additional distributed computing devices; and
updating, by the distributed computing device, the model stored on the distributed computing device based on the consensus gradient descent matrix.
22. The method of claim 21, wherein iteratively executing the process to determine the consensus gradient descent matrix comprises, for a first iteration of the process:
transmitting, over the network, the sampled gradient descent matrix of the distributed computing device to a second distributed computing device of the plurality of additional distributed computing devices;
receiving, over the network, an additional sampled gradient descent matrix generated by the second distributed computing device from the second distributed computing device; and
calculating the consensus gradient descent matrix by computing an average of the sampled gradient descent matrix and the additional sampled gradient descent matrix.
23. The method of claim 22, wherein iteratively executing the process to determine the consensus gradient descent matrix comprises, for a second iteration of the process:
transmitting, over the network, the consensus gradient descent matrix of the distributed computing device to a third distributed computing device of the plurality of additional distributed computing devices;
receiving, over the network, an additional consensus gradient descent matrix generated by the third distributed computing device from the third distributed computing device; and
updating the consensus gradient descent matrix by computing an average of the consensus gradient descent matrix and the additional consensus gradient descent matrix.
24. The method of claim 21, wherein iteratively executing the process to determine a consensus gradient descent matrix comprises, for each iteration of the process:
transmitting, over the network, a first convergence indicator of the distributed computing device to a second distributed computing device of the plurality of additional distributed computing devices;
receiving, over the network, a second convergence indicator of the second distributed computing device from the second distributed computing device;
updating the first convergence indicator of the distributed computing device by determining a center of mass of the first convergence indicator and the second convergence indicator; and
determining whether the consensus gradient descent matrix has been obtained based on the updated first convergence indicator.
25. The method of claim 24, wherein determining whether the consensus gradient descent matrix has been obtained based on the updated first convergence indicator comprises determining that the first convergence indicator of the distributed computing device is within a threshold distance of a global center of mass of the first convergence indicator.
26. The method of claim 21, wherein generating the gradient descent matrix based on data received by the distributed computing device and a model stored on the distributed computing device comprises:
receiving, at the distributed computing device, a plurality of pairs of training data, each pair comprising a data input and a label;
for each pair of the plurality of pairs of training data, computing a gradient vector of a plurality of gradient vectors by evaluating a partial derivative of an objective function of the model based on the pair; and
concatenating the plurality of gradient vectors to generate the gradient descent matrix.
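A sketch of the gradient-descent-matrix construction of claim 26, assuming a caller-supplied `grad_fn` that returns the gradient of the model's objective for a single (input, label) pair as a flat vector; the column-wise orientation is an assumption.

```python
import numpy as np


def gradient_descent_matrix(model_params, training_pairs, grad_fn):
    """Claim 26: one gradient vector per (input, label) pair, concatenated column-wise."""
    columns = [grad_fn(model_params, x, y) for x, y in training_pairs]
    return np.column_stack(columns)    # shape: (num_parameters, num_pairs)
```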
27. The method of claim 21, wherein the sampled gradient descent matrix represents the gradient descent matrix in a cooperative subspace common to the distributed computing device and the plurality of additional distributed computing devices.
28. The method of claim 21, wherein calculating the sampled gradient descent matrix based on the gradient descent matrix and a random matrix comprises:
generating, as the random matrix, a Gaussian ensemble matrix; and
multiplying the gradient descent matrix and the Gaussian ensemble matrix to generate the sampled gradient descent matrix.
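A sketch of the random sampling of claim 28 using a Gaussian ensemble matrix. The sketch size `sketch_size` and the 1/sqrt(k) scaling are conventional choices in randomized sketching and are not dictated by the claim.

```python
import numpy as np


def sample_gradient_matrix(grad_matrix, sketch_size, seed=None):
    """Claim 28: compress the gradient descent matrix with a Gaussian ensemble matrix."""
    rng = np.random.default_rng(seed)
    num_pairs = grad_matrix.shape[1]
    gaussian_ensemble = rng.standard_normal((num_pairs, sketch_size)) / np.sqrt(sketch_size)
    return grad_matrix @ gaussian_ensemble    # sampled gradient descent matrix, (num_parameters, k)
```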
29. The method of claim 21, wherein updating the model stored on the distributed computing device based on the consensus gradient descent matrix comprises:
extracting an orthogonal subspace of the consensus gradient descent matrix spanning the range of a global gradient descent matrix; and
updating weights of the model based on the extracted orthogonal subspace.
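One plausible reading of claim 29, shown as a sketch: a QR factorization supplies an orthogonal basis for the consensus gradient descent matrix, the local gradients are projected onto that basis, and the projected gradients drive an ordinary descent step. The projection, averaging, and learning-rate details are assumptions rather than claim language.

```python
import numpy as np


def update_model_weights(weights, consensus_matrix, grad_matrix, learning_rate=0.01):
    """Update weights from the orthogonal subspace of the consensus gradient descent matrix."""
    basis, _ = np.linalg.qr(consensus_matrix)      # orthogonal basis spanning the consensus range
    projected = basis @ (basis.T @ grad_matrix)    # local gradients restricted to that subspace
    step = projected.mean(axis=1)                  # average over the local training pairs
    return weights - learning_rate * step
```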
30. The method of claim 21, wherein the model is a machine learning artificial intelligence (AI) model configured to make a prediction based on one or more input signals received by the distributed computing device.
31. A non-transitory computer readable storage medium configured to store program code, the program code comprising instructions that, when executed by one or more processors, cause the one or more processors to:
generate a gradient descent matrix based on data received by a distributed computing device and a model stored on the distributed computing device;
calculate a sampled gradient descent matrix based on the gradient descent matrix and a random matrix;
iteratively execute a process to determine a consensus gradient descent matrix in conjunction with a plurality of additional distributed computing devices connected by a network to the distributed computing device, the consensus gradient descent matrix based on the sampled gradient descent matrix calculated by the distributed computing device and a plurality of additional sampled gradient descent matrices calculated by the plurality of additional distributed computing devices; and
update the model stored on the distributed computing device based on the consensus gradient descent matrix.
32. The non-transitory computer readable storage medium of claim 31, wherein the instructions to iteratively execute the process to determine the consensus gradient descent matrix comprise instructions to, for a first iteration of the process:
transmit, over the network, the sampled gradient descent matrix of the distributed computing device to a second distributed computing device of the plurality of additional distributed computing devices;
receive, over the network, an additional sampled gradient descent matrix generated by the second distributed computing device from the second distributed computing device; and
calculate the consensus gradient descent matrix by computing an average of the sampled gradient descent matrix and the additional sampled gradient descent matrix.
33. The non-transitory computer readable storage medium of claim 32, wherein the instructions to iteratively execute the process to determine the consensus gradient descent matrix further comprise instructions to, for a second iteration of the process:
transmit, over the network, the consensus gradient descent matrix of the distributed computing device to a third distributed computing device of the plurality of additional distributed computing devices;
receive, over the network, an additional consensus gradient descent matrix generated by the third distributed computing device from the third distributed computing device; and
update the consensus gradient descent matrix by computing an average of the consensus gradient descent matrix and the additional consensus gradient descent matrix.
34. The non-transitory computer readable storage medium of claim 31, wherein the instructions to iteratively execute the process to determine the consensus gradient descent matrix comprise instructions to, for each iteration of the process:
transmit, over the network, a first convergence indicator of the distributed computing device to a second distributed computing device of the plurality of additional distributed computing devices;
receive, over the network, a second convergence indicator of the second distributed computing device from the second distributed computing device;
update the first convergence indicator of the distributed computing device by determining a center of mass of the first convergence indicator and the second convergence indicator; and
determine whether the consensus gradient descent matrix has been obtained based on the updated first convergence indicator.
35. The non-transitory computer readable storage medium of claim 34, wherein the instructions to determine whether the consensus gradient descent matrix has been obtained based on the updated first convergence indicator comprise instructions to determine that the first convergence indicator of the distributed computing device is within a threshold distance of a global center of mass of the first convergence indicator.
36. The non-transitory computer readable storage medium of claim 31, wherein the instructions to generate the gradient descent matrix based on data received by the distributed computing device and a model stored on the distributed computing device comprise instructions to:
receive a plurality of pairs of training data, each pair comprising a data input and a label;
for each pair of the plurality of pairs of training data, compute a gradient vector of a plurality of gradient vectors by evaluating a partial derivative of an objective function of the model based on the pair; and
concatenate the plurality of gradient vectors to generate the gradient descent matrix.
37. The non-transitory computer readable storage medium of claim 31, wherein the sampled gradient descent matrix represents the gradient descent matrix in a cooperative subspace common to the distributed computing device and the plurality of additional distributed computing devices.
38. The non-transitory computer readable storage medium of claim 31, wherein the instructions to calculate the sampled gradient descent matrix based on the gradient descent matrix and a random matrix comprise instructions to:
generate, as the random matrix, a Gaussian ensemble matrix; and
multiply the gradient descent matrix and the Gaussian ensemble matrix to generate the sampled gradient descent matrix.
39. The non-transitory computer readable storage medium of claim 31, wherein the instructions to update the model stored on the distributed computing device based on the consensus gradient descent matrix comprise instructions to:
extract an orthogonal subspace of the consensus gradient descent matrix spanning the range of a global gradient descent matrix; and
update weights of the model based on the extracted orthogonal subspace.
40. The non-transitory computer readable storage medium of claim 31, wherein the model is a machine learning artificial intelligence (AI) model configured to make a prediction based on one or more input signals received by the distributed computing device.
41. A computer-implemented method for generating personalized recommendations comprising:
storing, at a distributed computing device, user preference data representing preferences of a user with respect to a portion of a set of items;
calculating, by the distributed computing device, sampled user preference data by randomly sampling the user preference data;
iteratively executing, by the distributed computing device, in conjunction with a plurality of additional distributed computing devices connected to the distributed computing device by a network, a process to determine a consensus result for the sampled user preference data, the consensus result based on the sampled user preference data calculated by the distributed computing device and additional sampled user preference data calculated by the plurality of additional distributed computing devices, the additional sampled user preference data based on preferences of a plurality of additional users;
determining, by the distributed computing device, a recommendation model based on the consensus result for the sampled user preference data, the recommendation model reflecting the preferences of the user and the plurality of additional users;
identifying, by the distributed computing device, an item of the set of items to provide to the user as a recommendation based on the recommendation model; and
providing, by the distributed computing device, the recommendation of the item to the user.
42. The method of claim 41, wherein the user preference data is a user preference vector, each element in the user preference vector corresponds to an item of the set of items, and wherein calculating the sampled user preference data by randomly sampling the user preference data comprises calculating a sampled user preference matrix by multiplying a random matrix and the user preference vector.
43. The method of claim 42, wherein the consensus result is a global consensus matrix of a same dimensionality as the sampled user preference matrix, and wherein determining a recommendation model based on the consensus result for the sampled user preference data comprises extracting the recommendation model from the global consensus matrix using orthogonal decomposition.
44. The method of claim 43, wherein identifying an item of the set of items to provide to the user as a recommendation based on the recommendation model comprises:
projecting the user preference vector onto the extracted recommendation model to obtain a personalized recommendation vector; and
identifying the item of the set of items to provide to the user as the recommendation based on a value of an element in the personalized recommendation vector corresponding to the item.
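A compact sketch tying claims 41-44 together under assumed shapes: each device forms a rank-one Gaussian sketch of its preference vector, the sketches are combined by a DAC-style averaging routine (`dac_average` is a hypothetical callable such as the loop shown after claim 1), and an orthogonal basis of the consensus matrix acts as the recommendation model. Already-rated items are not filtered out, and the exact dimensions of the random matrix are assumptions.

```python
import numpy as np


def recommend(pref_vector, dac_average, sketch_size=32, seed=None):
    """Sketch of claims 41-44: sample, reach consensus, decompose, project, recommend."""
    rng = np.random.default_rng(seed)
    pref_vector = np.asarray(pref_vector, dtype=float)
    omega = rng.standard_normal(sketch_size)
    sampled = np.outer(pref_vector, omega)            # sampled user preference matrix (claim 42)
    consensus = dac_average(sampled)                  # global consensus matrix via DAC (claim 43)
    model, _ = np.linalg.qr(consensus)                # orthogonal decomposition -> recommendation model
    personalized = model @ (model.T @ pref_vector)    # personalized recommendation vector (claim 44)
    return int(np.argmax(personalized))               # index of the item to recommend
```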
45. The method of claim 41, wherein iteratively executing the process to determine the consensus result comprises, for a first iteration of the process:
transmitting, over the network, the sampled user preference data of the distributed computing device to a second distributed computing device of the plurality of additional distributed computing devices;
receiving, over the network, second sampled user preference data generated by the second distributed computing device from the second distributed computing device; and
calculating consensus sampled user preference data by computing an average of the sampled user preference data and the second sampled user preference data.
46. The method of claim 45, wherein iteratively executing the process to determine the consensus result comprises, for a second iteration of the process:
transmitting, over the network, the consensus sampled user preference data of the distributed computing device to a third distributed computing device of the plurality of additional distributed computing devices;
receiving, over the network, additional consensus sampled user preference data generated by the third distributed computing device from the third distributed computing device; and
updating the consensus sampled user preference data by computing an average of the consensus sampled user preference data and the additional consensus sampled user preference data.
47. The method of claim 46, wherein, after a plurality of iterations, the consensus sampled user preference data calculated by the distributed computing device substantially converges with consensus sampled user preference data calculated by each of remaining ones of the plurality of additional computing devices, and the consensus sampled user preference data calculated by the distributed computing device is the consensus result.
48. The method of claim 45, wherein randomly sampling the user preference data obscures the user preference data, such that the second distributed computing device cannot recover the user preference data from the sampled user preference data.
49. The method of claim 41, wherein iteratively executing the process to determine the consensus result comprises, for each iteration of the process:
transmitting, over the network, a first convergence indicator of the distributed computing device to a second distributed computing device of the plurality of additional distributed computing devices;
receiving, over the network, a second convergence indicator of the second distributed computing device from the second distributed computing device;
updating the first convergence indicator of the distributed computing device by determining a center of mass of the first convergence indicator and the second convergence indicator; and
determining whether the consensus result has been obtained based on the updated first convergence indicator.
50. The method of claim 49, wherein determining whether the consensus result has been obtained based on the updated first convergence indicator comprises determining that the first convergence indicator of the distributed computing device is within a threshold distance of a global center of mass of the first convergence indicator.
51. A non-transitory computer readable storage medium configured to store program code, the program code comprising instructions that, when executed by one or more processors, cause the one or more processors to:
store user preference data representing preferences of a user with respect to a portion of a set of items;
calculate sampled user preference data by randomly sampling the user preference data;
iteratively execute, in conjunction with a plurality of additional distributed computing devices connected to the distributed computing device by a network, a process to determine a consensus result for the sampled user preference data, the consensus result based on the sampled user preference data and additional sampled user preference data calculated by the plurality of additional distributed computing devices, the additional sampled user preference data based on preferences of a plurality of additional users;
determine a recommendation model based on the consensus result for the sampled user preference data, the recommendation model reflecting the preferences of the user and the plurality of additional users;
identify an item of the set of items to provide to the user as a recommendation based on the recommendation model; and
provide the recommendation of the item to the user.
52. The non-transitory computer readable storage medium of claim 51, wherein the user preference data is a user preference vector, each element in the user preference vector corresponds to an item of the set of items, and wherein the instructions to calculate the sampled user preference data by randomly sampling the user preference data comprise instructions to calculate a sampled user preference matrix by multiplying a random matrix and the user preference vector.
53. The non-transitory computer readable storage medium of claim 52, wherein the consensus result is a global consensus matrix of a same dimensionality as the sampled user preference matrix, and wherein the instructions to determine a recommendation model based on the consensus result for the sampled user preference data comprise instructions to extract the recommendation model from the global consensus matrix using orthogonal decomposition.
54. The non-transitory computer readable storage medium of claim 53, wherein the instructions to identify an item of the set of items to provide to the user as a recommendation based on the recommendation model comprise instructions to:
project the user preference vector onto the extracted recommendation model to obtain a personalized recommendation vector; and
identify the item of the set of items to provide to the user as the recommendation based on a value of an element in the personalized recommendation vector corresponding to the item.
55. The non-transitory computer readable storage medium of claim 51, wherein the instructions to iteratively execute the process to determine the consensus result comprise instructions to, for a first iteration of the process:
transmit, over the network, the sampled user preference data to a second distributed computing device of the plurality of additional distributed computing devices;
receive, over the network, second sampled user preference data generated by the second distributed computing device from the second distributed computing device; and
calculate consensus sampled user preference data by computing an average of the sampled user preference data and the second sampled user preference data.
56. The non-transitory computer readable storage medium of claim 55, wherein the instructions to iteratively execute the process to determine the consensus result comprise instructions to, for a second iteration of the process:
transmit, over the network, the consensus sampled user preference data to a third distributed computing device of the plurality of additional distributed computing devices;
receive, over the network, additional consensus sampled user preference data generated by the third distributed computing device from the third distributed computing device; and
update the consensus sampled user preference data by computing an average of the consensus sampled user preference data and the additional consensus sampled user preference data.
57. The non-transitory computer readable storage medium of claim 56, wherein, after a plurality of iterations, the consensus sampled user preference data substantially converges with additional consensus sampled user preference data calculated by each of remaining ones of the plurality of additional computing devices, and the consensus sampled user preference data is the consensus result.
58. The non-transitory computer readable storage medium of claim 55, wherein randomly sampling the user preference data obscures the user preference data, such that the second distributed computing device cannot recover the user preference data from the sampled user preference data.
59. The non-transitory computer readable storage medium of claim 51, wherein the instructions to iteratively execute the process to determine the consensus result comprise instructions to, for each iteration of the process:
transmit, over the network, a first convergence indicator to a second distributed computing device of the plurality of additional distributed computing devices;
receive, over the network, a second convergence indicator of the second distributed computing device from the second distributed computing device;
update the first convergence indicator by determining a center of mass of the first convergence indicator and the second convergence indicator; and
determine whether the consensus result has been obtained based on the updated first convergence indicator.
60. The non-transitory computer readable storage medium of claim 59, wherein the instructions to determine whether the consensus result has been obtained based on the updated first convergence indicator comprise instructions to determine that the first convergence indicator is within a threshold distance of a global center of mass of the first convergence indicator.
61. A computer-implemented method for generating a latent semantic index comprising:
calculating, by a distributed computing device, word counts for each of a set of documents, wherein the word counts for each of the set of documents are represented as a plurality of values, each value representing a number of times a corresponding word appears in one of the set of documents;
calculating, by the distributed computing device, sampled word counts by randomly sampling the word counts;
iteratively executing, by the distributed computing device, in conjunction with a plurality of additional distributed computing devices connected to the distributed computing device by a network, a process to determine a consensus result for the sampled word counts, the consensus result based on the sampled word counts calculated by the distributed computing device and additional sampled word counts calculated by the plurality of additional distributed computing devices, the additional sampled word counts based on additional sets of documents;
determining, by the distributed computing device, a latent semantic index (LSI) subspace based on the consensus result for the sampled word counts, the LSI subspace reflecting contents of the set of documents and the additional sets of documents; and
projecting, by the distributed computing device, a document into the LSI subspace to determine the latent semantic content of the document.
62. The method of claim 61, wherein the plurality of values representing the word counts for each document in the set of documents are arranged as a word count vector; the word counts for the set of documents are arranged as a word count matrix; and wherein calculating the sampled word counts by randomly sampling the word counts comprises calculating a sampled word count matrix by multiplying a random matrix and the word count matrix.
63. The method of claim 62, wherein the consensus result is a global consensus matrix of a same dimensionality as the sampled word count matrix, and wherein determining the LSI subspace based on the consensus result for the sampled word counts comprises extracting an LSI subspace matrix from the global consensus matrix using orthogonal decomposition.
64. The method of claim 63, wherein projecting a document into the LSI subspace to determine the latent semantic content of the document comprises multiplying a search word count vector of the document by a transpose of the LSI subspace matrix to generate a subspace search vector characterizing the document in the LSI subspace, the method further comprising:
transmitting the subspace search vector to a second distributed computing device as a search request; and
receiving, from the second distributed computing device, data describing a target document that matches the search request, wherein the second distributed computing device determines the target document matches the search request by comparing the subspace search vector to a target vector characterizing the target document in the LSI subspace.
65. The method of claim 63, wherein projecting a document into the LSI subspace to determine the latent semantic content of the document comprises:
multiplying a document word count vector of the document by a transpose of the LSI subspace matrix and the LSI subspace matrix to generate a resulting vector, each element in the resulting vector having a value corresponding to a different word; and
extracting, as keywords to describe the document, a set of words corresponding to elements in the resulting vector having high values.
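A sketch of the two projections recited in claims 64 and 65, assuming the LSI subspace matrix has one row per vocabulary word and one column per retained dimension; `vocabulary`, a hypothetical index-to-word list, is introduced only for illustration.

```python
import numpy as np


def subspace_search_vector(word_count_vector, lsi_matrix):
    """Claim 64: characterize a document in the LSI subspace (lsi_matrix is words x dimensions)."""
    return lsi_matrix.T @ np.asarray(word_count_vector, dtype=float)


def extract_keywords(word_count_vector, lsi_matrix, vocabulary, num_keywords=5):
    """Claim 65: project into the LSI subspace and back, then keep the highest-valued words."""
    reconstructed = lsi_matrix @ (lsi_matrix.T @ np.asarray(word_count_vector, dtype=float))
    top_indices = np.argsort(reconstructed)[::-1][:num_keywords]
    return [vocabulary[i] for i in top_indices]
```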
66. The method of claim 61, wherein iteratively executing the process to determine the consensus result comprises, for a first iteration of the process:
transmitting, over the network, the sampled word counts of the distributed computing device to a second distributed computing device of the plurality of additional distributed computing devices;
receiving, over the network, second sampled word counts generated by the second distributed computing device from the second distributed computing device; and
calculating consensus sampled word counts by computing an average of the sampled word counts and the second sampled word counts.
67. The method of claim 66, wherein iteratively executing the process to determine the consensus result comprises, for a second iteration of the process:
transmitting, over the network, the consensus sampled word counts of the distributed computing device to a third distributed computing device of the plurality of additional distributed computing devices;
receiving, over the network, additional consensus sampled word counts generated by the third distributed computing device from the third distributed computing device; and
updating the consensus sampled word counts by computing an average of the consensus sampled word counts and the additional consensus sampled word counts.
68. The method of claim 67, wherein, after a plurality of iterations, the consensus sampled word counts calculated by the distributed computing device substantially converge with consensus sampled word counts calculated by each of remaining ones of the plurality of additional computing devices, and the consensus sampled word counts calculated by the distributed computing device are the consensus result.
69. A non-transitory computer readable storage medium configured to store program code, the program code comprising instructions that, when executed by one or more processors, cause the one or more processors to:
calculate word counts for each of a set of documents of a distributed computing device, wherein the word counts for each of the set of documents are represented as a plurality of values, each value representing a number of times a corresponding word appears in one of the set of documents;
calculate sampled word counts by randomly sampling the word counts;
iteratively execute, in conjunction with a plurality of additional distributed computing devices connected to the distributed computing device by a network, a process to determine a consensus result for the sampled word counts, the consensus result based on the sampled word counts calculated by the distributed computing device and additional sampled word counts calculated by the plurality of additional distributed computing devices, the additional sampled word counts based on additional sets of documents;
determine a latent semantic index (LSI) subspace based on the consensus result for the sampled word counts, the LSI subspace reflecting contents of the set of documents and the additional sets of documents; and
project a document into the LSI subspace to determine the latent semantic content of the document.
70. The non-transitory computer readable storage medium of claim 69, wherein the plurality of values representing the word counts for each document in the set of documents are arranged as a word count vector; the word counts for the set of documents are arranged as a word count matrix; and wherein the instructions to calculate the sampled word counts by randomly sampling the word counts comprise instructions to calculate a sampled word count matrix by multiplying a random matrix and the word count matrix.
71. The non-transitory computer readable storage medium of claim 70, wherein the consensus result is a global consensus matrix of a same dimensionality as the sampled word count matrix, and wherein the instructions to determine the LSI subspace based on the consensus result for the sampled word counts comprise instructions to extract an LSI subspace matrix from the global consensus matrix using orthogonal decomposition.
72. The non-transitory computer readable storage medium of claim 71, wherein the instructions to project a document into the LSI subspace to determine the latent semantic content of the document comprise instructions to multiply a search word count vector of the document by a transpose of the LSI subspace matrix to generate a subspace search vector characterizing the document in the LSI subspace, and the instructions further comprise instructions to:
transmit the subspace search vector to a second distributed computing device as a search request; and
receive, from the second distributed computing device, data describing a target document that matches the search request, wherein the second distributed computing device determines the target document matches the search request by comparing the subspace search vector to a target vector characterizing the target document in the LSI subspace.
73. The non-transitory computer readable storage medium of claim 72, wherein the instructions to project a document into the LSI subspace to determine the latent semantic content of the document comprise instructions to:
multiply a document word count vector of the document by a transpose of the LSI subspace matrix and the LSI subspace matrix to generate a resulting vector, each element in the resulting vector having a value corresponding to a different word; and
extract, as keywords to describe the document, a set of words corresponding to elements in the resulting vector having high values.
74. The non-transitory computer readable storage medium of claim 69, wherein the instructions to iteratively execute the process to determine the consensus result comprise instructions to, for a first iteration of the process:
transmit, over the network, the sampled word counts of the distributed computing device to a second distributed computing device of the plurality of additional distributed computing devices;
receive, over the network, second sampled word counts generated by the second distributed computing device from the second distributed computing device; and
calculate consensus sampled word counts by computing an average of the sampled word counts and the second sampled word counts.
75. The non-transitory computer readable storage medium of claim 74, wherein the instructions to iteratively execute the process to determine the consensus result comprise instructions to, for a second iteration of the process:
transmit, over the network, the consensus sampled word counts of the distributed computing device to a third distributed computing device of the plurality of additional distributed computing devices;
receive, over the network, additional consensus sampled word counts generated by the third distributed computing device from the third distributed computing device; and
update the consensus sampled word counts by computing an average of the consensus sampled word counts and the additional consensus sampled word counts.
76. The non-transitory computer readable storage medium of claim 75, wherein, after a plurality of iterations, the consensus sampled word counts calculated by the distributed computing device substantially converge with consensus sampled word counts calculated by each of remaining ones of the plurality of additional computing devices, and the consensus sampled word counts calculated by the distributed computing device are the consensus result.
77. A computer-implemented method for performing a search comprising:
calculating, by a search device, a word count vector for one of a document or a set of keywords, wherein each element of the word count vector has a value representing instances of a different word in the document or the set of keywords;
projecting, by the search device, the word count vector into a latent semantic index (LSI) subspace to generate a subspace search vector characterizing the document in the LSI subspace, the LSI subspace generated cooperatively by a plurality of distributed computing devices connected by a network based on a corpus of documents, the LSI subspace reflecting contents of the corpus of documents;
transmitting, by the search device, the subspace search vector to a target device as a search request; and
receiving, from the target device in response to the search request, data describing a target document that matches the search request, wherein the target device determines the target document matches the search request by comparing the subspace search vector to a target vector characterizing the target document in the LSI subspace.
78. The method of claim 77, wherein projecting the word count vector into the LSI subspace comprises multiplying the word count vector by a transpose of an LSI subspace matrix describing the LSI subspace.
79. The method of claim 77, wherein the data describing the target document that matches the search request comprises a match value indicating a measure of distance between the subspace search vector and the target vector, the match value calculated using one of a dot product of the subspace search vector and the target vector and a Euclidean distance between the subspace search vector and the target vector.
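A sketch of the match value of claim 79, computed either as a dot product (a larger value indicates a closer match) or as a Euclidean distance (a smaller value indicates a closer match); the `metric` switch is illustrative rather than claimed.

```python
import numpy as np


def match_value(search_vector, target_vector, metric="dot"):
    """Claim 79: score a target document against a subspace search vector."""
    s = np.asarray(search_vector, dtype=float)
    t = np.asarray(target_vector, dtype=float)
    if metric == "dot":
        return float(s @ t)                  # larger value means a closer match
    return float(np.linalg.norm(s - t))      # Euclidean distance: smaller means closer
```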
80. The method of claim 77, wherein the search device is one of the plurality of distributed computing devices, the search device comprising a portion of the corpus of documents, the method further comprising:
iteratively executing, by the search device in conjunction with additional ones of the plurality of distributed computing devices, a process to determine a consensus result based on sampled word counts of the corpus of documents generated by the search device and the additional ones of the plurality of distributed computing devices; and
determining the LSI subspace based on the consensus result for the sampled word counts.
PCT/US2019/014351 2018-01-19 2019-01-18 Distributed high performance computing using distributed average consensus WO2019144046A1 (en)

Applications Claiming Priority (12)

Application Number Priority Date Filing Date Title
US201862619719P 2018-01-19 2018-01-19
US201862619715P 2018-01-19 2018-01-19
US62/619,715 2018-01-19
US62/619,719 2018-01-19
US201862662059P 2018-04-24 2018-04-24
US62/662,059 2018-04-24
US201862700153P 2018-07-18 2018-07-18
US62/700,153 2018-07-18
US201862727355P 2018-09-05 2018-09-05
US201862727357P 2018-09-05 2018-09-05
US62/727,357 2018-09-05
US62/727,355 2018-09-05

Publications (1)

Publication Number Publication Date
WO2019144046A1 true WO2019144046A1 (en) 2019-07-25

Family

ID=67301199

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2019/014351 WO2019144046A1 (en) 2018-01-19 2019-01-18 Distributed high performance computing using distributed average consensus

Country Status (1)

Country Link
WO (1) WO2019144046A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160248631A1 (en) * 2007-04-23 2016-08-25 David D. Duchesneau Computing infrastructure
US20160328253A1 (en) * 2015-05-05 2016-11-10 Kyndi, Inc. Quanton representation for emulating quantum-like computation on classical processors
US20170103468A1 (en) * 2015-10-13 2017-04-13 TransActive Grid Inc. Use of Blockchain Based Distributed Consensus Control
US20170132630A1 (en) * 2015-11-11 2017-05-11 Bank Of America Corporation Block chain alias for person-to-person payments
US20170173262A1 (en) * 2017-03-01 2017-06-22 François Paul VELTZ Medical systems, devices and methods

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114580578A (en) * 2022-05-06 2022-06-03 鹏城实验室 Method and device for training distributed random optimization model with constraints and terminal
CN114580578B (en) * 2022-05-06 2022-08-23 鹏城实验室 Method and device for training distributed random optimization model with constraints and terminal
CN116610756A (en) * 2023-07-17 2023-08-18 山东浪潮数据库技术有限公司 Distributed database self-adaptive copy selection method and device
CN116610756B (en) * 2023-07-17 2024-03-08 山东浪潮数据库技术有限公司 Distributed database self-adaptive copy selection method and device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19741307

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19741307

Country of ref document: EP

Kind code of ref document: A1