WO2019095858A1 - Random walk and cluster-based random walk method, apparatus and device - Google Patents

Random walk and cluster-based random walk method, apparatus and device

Info

Publication number
WO2019095858A1
Authority
WO
WIPO (PCT)
Prior art keywords
node
dimensional array
identifier
nodes
random sequence
Prior art date
Application number
PCT/CN2018/107308
Other languages
English (en)
French (fr)
Inventor
曹绍升
杨新星
周俊
Original Assignee
阿里巴巴集团控股有限公司
Priority date
Filing date
Publication date
Application filed by 阿里巴巴集团控股有限公司 filed Critical 阿里巴巴集团控股有限公司
Priority to SG11202000460UA priority Critical patent/SG11202000460UA/en
Priority to EP18878726.1A priority patent/EP3640813B1/en
Publication of WO2019095858A1 publication Critical patent/WO2019095858A1/zh
Priority to US16/805,079 priority patent/US11074246B2/en

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00: Network arrangements or protocols for supporting network services or applications
    • H04L67/01: Protocols
    • H04L67/10: Protocols in which an application is distributed across nodes in the network
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20: Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/23: Updating
    • G06F16/2379: Updates performed during online database operations; commit processing
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20: Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/22: Indexing; Data structures therefor; Storage structures
    • G06F16/2228: Indexing structures
    • G06F16/2237: Vectors, bitmaps or matrices
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20: Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/22: Indexing; Data structures therefor; Storage structures
    • G06F16/2228: Indexing structures
    • G06F16/2264: Multidimensional index structures
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90: Details of database functions independent of the retrieved data types
    • G06F16/901: Indexing; Data structures therefor; Storage structures
    • G06F16/9024: Graphs; Linked lists

Definitions

  • The present specification relates to the field of computer software technology, and in particular to random walk and cluster-based random walk methods, apparatuses, and devices.
  • For example, for account-fraud identification in a social risk-control business: each user acts as a node, and if there is a transfer relationship between two users, there is an edge between the corresponding two nodes; the edge may be undirected, or its direction may be defined according to the transfer direction. By analogy, graph data including multiple nodes and multiple edges can be obtained, and graph computation is performed on the graph data to realize risk control.
  • The random walk algorithm is a basic and important part of graph computation, providing support for more complex upper-layer algorithms.
  • In the prior art, a random walk algorithm is generally implemented as follows: a node of the graph data is randomly read from the database, then a neighboring node of that node is randomly read from the database, and so on, thereby realizing a random walk in the graph data.
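  • For contrast only (this is not the scheme of this specification), the following minimal Python sketch mimics that prior-art walk; the in-memory GRAPH_DB dictionary and the read_random_node / read_random_neighbor helpers are hypothetical stand-ins for the database reads, and every walk step corresponds to one database access.

```python
import random

# Hypothetical in-memory stand-in for the database that stores the graph data
# (node identifier -> identifiers of its neighboring nodes).
GRAPH_DB = {0: [1], 1: [0, 2], 2: [1, 3, 4], 3: [2, 4], 4: [2, 3]}

def read_random_node():
    # In the prior-art scheme this would be a database read.
    return random.choice(list(GRAPH_DB))

def read_random_neighbor(node_id):
    # Each subsequent step goes back to the database again.
    return random.choice(GRAPH_DB[node_id])

def naive_walk(steps=8):
    node = read_random_node()
    walk = [node]
    for _ in range(steps - 1):
        node = read_random_neighbor(node)
        walk.append(node)
    return walk

print(naive_walk())  # one database access per step of the walk
```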
  • The embodiments of the present specification provide random walk and cluster-based random walk methods, apparatuses, and devices to solve the following technical problem: a more efficient random walk scheme that can be applied to large-scale graph data is needed.
  • A cluster-based random walk method provided by an embodiment of the present specification includes: the cluster acquires the information of each node included in the graph data; a two-dimensional array is generated according to the information of each node, each row of the two-dimensional array including the identifiers of the neighboring nodes of one node; and a random sequence is generated according to the two-dimensional array, the random sequence reflecting a random walk in the graph data.
  • A random walk method provided by an embodiment of the present specification includes: acquiring a two-dimensional array generated according to the information of each node included in the graph data, each row of the two-dimensional array including the identifiers of the neighboring nodes of one node; and generating a random sequence according to the two-dimensional array, the random sequence reflecting a random walk in the graph data.
  • A cluster-based random walk apparatus provided by an embodiment of the present specification belongs to the cluster and includes: an obtaining module, which obtains the information of each node included in the graph data; a first generating module, which generates a two-dimensional array according to the information of each node, each row of the two-dimensional array including the identifiers of the neighboring nodes of one node; and a second generating module, which generates a random sequence according to the two-dimensional array, the random sequence reflecting a random walk in the graph data.
  • A random walk apparatus provided by an embodiment of the present specification includes: an obtaining module, which acquires a two-dimensional array generated according to the information of each node included in the graph data, each row of the two-dimensional array including the identifiers of the neighboring nodes of one node; and a generating module, which generates a random sequence according to the two-dimensional array, the random sequence reflecting a random walk in the graph data.
  • A cluster-based random walk device provided by an embodiment of the present specification belongs to the cluster and includes: at least one processor; and a memory communicatively connected to the at least one processor; wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to: acquire the information of each node included in the graph data; generate a two-dimensional array according to the information of each node, each row of the two-dimensional array including the identifiers of the neighboring nodes of one node; and generate a random sequence according to the two-dimensional array, the random sequence reflecting a random walk in the graph data.
  • A random walk device provided by an embodiment of the present specification includes: at least one processor; and a memory communicatively connected to the at least one processor; wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to: acquire a two-dimensional array generated according to the information of each node included in the graph data, each row of the two-dimensional array including the identifiers of the neighboring nodes of one node; and generate a random sequence according to the two-dimensional array, the random sequence reflecting a random walk in the graph data.
  • The above at least one technical solution adopted by the embodiments of the present specification can achieve the following beneficial effects: access to the database in which the original graph data is saved is reduced, the two-dimensional array does not need to depend on the database after being generated, and the neighboring nodes of a node can be quickly indexed through the two-dimensional array; the scheme is applicable to large-scale graph data and is highly efficient, and when the scheme is implemented based on a cluster, efficiency can be further improved.
  • FIG. 1 is a schematic diagram of an overall architecture involved in an implementation scenario of the present specification;
  • FIG. 2 is a schematic flowchart of a cluster-based random walk method according to an embodiment of the present specification;
  • FIG. 3 is a schematic diagram of a cluster-based two-dimensional array generation process in an actual application scenario according to an embodiment of the present specification;
  • FIG. 4 is a schematic diagram of a cluster-based random sequence generation process in an actual application scenario according to an embodiment of the present specification;
  • FIG. 5 is a schematic flowchart of a random walk method according to an embodiment of the present specification;
  • FIG. 6 is a schematic structural diagram of a cluster-based random walk device corresponding to FIG. 2 according to an embodiment of the present specification;
  • FIG. 7 is a schematic structural diagram of a random walk device corresponding to FIG. 5 according to an embodiment of the present specification.
  • Embodiments of the present specification provide a random walk, cluster-based random walk method, apparatus, and device.
  • the solution of this specification applies to both clusters and stand-alone machines.
  • Processing of large-scale graph data is more efficient on a cluster because tasks (such as data read tasks and data synchronization tasks) can be split, and the multiple machines in the cluster then execute their assigned parts of the tasks in parallel.
  • the following embodiments are mainly described based on a cluster scenario.
  • There may be one or more clusters involved in the scheme; for example, two clusters are involved in FIG. 1.
  • FIG. 1 is a schematic diagram of an overall architecture involved in an implementation scenario of the present specification.
  • the overall architecture there are mainly three parts: server cluster, worker cluster, and database.
  • the database saves the graph data for the cluster to read, and the server cluster cooperates with the working machine cluster to realize the random walk in the graph data according to the data read from the database.
  • The architecture in FIG. 1 is exemplary and not the only possibility. For example, the solution may involve a single cluster that includes at least one dispatcher and a plurality of working machines; as another example, the solution may involve a working machine cluster and one server; and so on. The machines involved in the solution cooperate with each other to implement random walks in the graph data.
  • FIG. 2 is a schematic flowchart diagram of a cluster-based random walk method according to an embodiment of the present disclosure.
  • the steps in Figure 2 are performed by at least one machine (or a program on the machine) in the cluster, and the execution entities of the different steps may be different.
  • the process in Figure 2 includes the following steps:
  • S202 The cluster acquires information about each node included in the graph data.
  • In the embodiments of the present specification, the information of a node may include: the identifier of the node, the identifiers of the node's neighboring nodes (taken as the example hereinafter), or information other than identifiers that can indicate the neighboring nodes, and the like.
  • the information of each node may be obtained once or in multiple times.
  • Generally, the original graph data is saved in a database.
  • the information of each node needs to be read by accessing the database.
  • To avoid burdening the database with repeated reads, the multiple machines in the cluster can each read the information of a non-overlapping part of the nodes; further, the machines can read the database in parallel to acquire the node information quickly.
  • each work machine in the working machine cluster can read and process information of a part of the nodes from the database in parallel, and then synchronize the processed data to the server cluster.
  • Alternatively, each working machine can directly synchronize the read node information to the server cluster, which then processes it further.
  • the processing includes at least generating a two-dimensional array.
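  • As a minimal sketch of this partitioned read (illustration under assumptions: the database is mocked by an in-memory GRAPH_DB dictionary and the worker machines are imitated by threads rather than separate machines), each worker reads a non-overlapping slice of the node identifiers, and merging the results plays the role of synchronization to the server side.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in for the database table keyed by node identifier
# (node identifier -> identifiers of its neighboring nodes).
GRAPH_DB = {0: [1], 1: [0, 2], 2: [1, 3, 4], 3: [2, 4], 4: [2, 3]}

def worker_read(node_ids):
    # Each worker reads only its own, non-overlapping subset of nodes.
    return {nid: GRAPH_DB[nid] for nid in node_ids}

def parallel_read(all_node_ids, num_workers=3):
    # Split the node identifiers into non-overlapping slices, one per worker.
    slices = [all_node_ids[i::num_workers] for i in range(num_workers)]
    with ThreadPoolExecutor(max_workers=num_workers) as pool:
        parts = list(pool.map(worker_read, slices))
    merged = {}                      # the "synchronized" view of all reads
    for part in parts:
        merged.update(part)
    return merged

print(parallel_read(list(GRAPH_DB)))  # {0: [1], 1: [0, 2], 2: [1, 3, 4], ...}
```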
  • S204 Generate a two-dimensional array according to the information of each node, where each row of the two-dimensional array includes an identifier of a neighboring node of the node.
  • a two-dimensional array can be regarded as a matrix, and each row is a one-dimensional array.
  • Each row may correspond to one node; the row includes at least the identifiers of the neighboring nodes of its corresponding node, and the identifier of each neighboring node can be one one-dimensional array element of the row.
  • To facilitate indexing, the identifier of the corresponding node itself may also be a one-dimensional array element of the row.
  • For example, the identifier of the corresponding node is the 0th one-dimensional array element of the row, and the subsequent one-dimensional array elements are, in turn, the identifiers of the neighboring nodes of that node.
  • Alternatively, the identifier of the node may not be included in the row itself but only be associated with the row; through that association, the row can be indexed by the identifier of the node.
  • According to the two-dimensional array and the identifier of any node, the identifier of any neighboring node of that node can be quickly indexed, which facilitates efficient random walks in the graph data.
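  • A minimal sketch of this layout, assuming the variant in which the node's own identifier is stored as element 0 of its row (the graph follows the five-node example used later in this specification):

```python
# Each row: [node_id, neighbor_0, neighbor_1, ...] for one node.
two_dim_array = [
    [0, 1],
    [1, 0, 2],
    [2, 1, 3, 4],
    [3, 2, 4],
    [4, 2, 3],
]

# Index from a node identifier to its row, then to any neighbor by position.
row_of = {row[0]: row for row in two_dim_array}

def neighbor(node_id, k):
    """Return the identifier of the k-th neighboring node of node_id (k counted from 0)."""
    return row_of[node_id][1 + k]   # element 0 is the node's own identifier

print(neighbor(2, 1))  # 3: the neighbor of node 2 at position 1
```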
  • To facilitate indexing, the identifier of each node is preferably a number. For example, the order of the nodes is defined by the magnitude of their identifiers, counting from 0: the identifier of the first node in the order is 0, the identifier of the second node is 1, and so on. The following embodiments are described based on the definitions in this example.
  • Of course, if a node's original identifier is not a number, the original identifier may be mapped to a number based on a one-to-one mapping rule, and the mapped number is then used as the node's identifier for generating the two-dimensional array.
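  • A minimal sketch of such a one-to-one mapping (the original string identifiers here are invented purely for illustration):

```python
original_ids = ["user_a", "user_b", "user_c"]

# One-to-one mapping rule: original identifier <-> consecutive integer identifier.
to_number = {orig: i for i, orig in enumerate(original_ids)}
to_original = {i: orig for orig, i in to_number.items()}

print(to_number["user_b"])   # 1, used as the node identifier in the two-dimensional array
print(to_original[1])        # "user_b", recovered when results are reported
```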
  • S206: Generate, according to the two-dimensional array, a random sequence, where the random sequence reflects a random walk in the graph data.
  • In the embodiments of the present specification, the random sequence is a sequence consisting of the identifiers of multiple nodes; the order of the identifiers in the random sequence is the random walk order, and the maximum length of the random sequence is generally determined by a predetermined number of random walk steps.
  • After the two-dimensional array is obtained, step S206 can be performed multiple times independently, thereby obtaining a plurality of mutually independent random sequences; for example, each working machine generates one or more random sequences according to the two-dimensional array.
  • Through the method of FIG. 2, access to the database in which the original graph data is saved is reduced; the two-dimensional array does not need to depend on the database after being generated, and the neighboring nodes of a node can be quickly indexed through the two-dimensional array. The scheme is applicable to large-scale graph data and is highly efficient, and since the method is implemented based on a cluster, efficiency can be further improved.
  • the embodiment of the present specification further provides some specific implementations of the method, and an extended solution.
  • the following uses the architecture in FIG. 1 as an example for description.
  • the cluster may include a server cluster and a working machine cluster.
  • the cluster obtains information about each node included in the graph data, which may include:
  • The working machine cluster reads, from the database, the identifiers of the neighboring nodes of each node included in the graph data, wherein each working machine reads the identifiers of the neighboring nodes of a part of the nodes. It should be noted that if the identifiers of the nodes themselves are unknown to the working machine cluster, the working machine cluster can first read the identifiers of the nodes and then read the identifiers of the neighboring nodes of each node according to the node's identifier (used as a primary key in the database).
  • For example, assume there are 5 nodes whose identifiers are 0 to 4, and the working machine cluster includes working machine 0, working machine 1, and working machine 2.
  • Each working machine reads the identifiers of the neighboring nodes of a part of the nodes from the database; for example, working machine 0 reads the identifiers of the neighboring nodes of node 1 (0 and 2) and of node 2 (1, 3, and 4); working machine 1 reads the identifier of the neighboring node of node 0 (which is 1); and working machine 2 reads the identifiers of the neighboring nodes of node 3 (2 and 4) and of node 4 (2 and 3).
  • Each working machine may generate a non-full two-dimensional array according to the identifiers of the neighboring nodes it has read and the identifiers of their corresponding nodes.
  • Further, the working machine cluster can synchronize these non-full two-dimensional arrays to the server cluster.
  • The server cluster can thereby obtain the full two-dimensional array constituted by these non-full two-dimensional arrays.
  • Specifically, the server cluster may explicitly integrate these non-full two-dimensional arrays (for example, by splitting and merging two-dimensional arrays) to obtain the full two-dimensional array; or it may perform no dedicated integration and, after the working machine cluster finishes synchronizing, simply regard all of the synchronized data as a whole, namely the full two-dimensional array.
  • Each server in the server cluster can store the full two-dimensional array, or only a part of it.
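  • A minimal sketch of how the server side could assemble the full two-dimensional array from the non-full arrays synchronized by the workers (the sharding of the result across several servers is omitted; the rows follow the five-node example above, and treating the union of all synchronized rows as a whole corresponds to the second option described above):

```python
# Non-full two-dimensional arrays generated by working machines 0 to 2,
# each row being [node_id, neighbor identifiers...].
worker_arrays = [
    [[1, 0, 2], [2, 1, 3, 4]],   # working machine 0
    [[0, 1]],                    # working machine 1
    [[3, 2, 4], [4, 2, 3]],      # working machine 2
]

# "Synchronizing" to the server cluster: the union of all rows is
# regarded as the full two-dimensional array, without further integration.
full_array = [row for part in worker_arrays for row in part]
print(full_array)
```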
  • The two-dimensional array described in step S204 may be the full two-dimensional array, the non-full two-dimensional array, or a two-dimensional array obtained by further processing (for example, reordering) the full two-dimensional array; the following embodiments are mainly described by taking the third case as an example. It should be noted that if a non-full two-dimensional array is used, the subsequent random walk is correspondingly performed among the nodes involved in that non-full array (only a part of the nodes in the graph data), and any working machine can generate random sequences from the non-full array it generated itself, without necessarily relying on the above synchronization and the server cluster.
  • The server cluster can further synchronize the full two-dimensional array to each working machine, so that each working machine can generate random sequences according to the full two-dimensional array.
  • As mentioned for the third case above, each working machine can further process the full two-dimensional array before using it to generate random sequences.
  • For example, the rows of the full two-dimensional array may be sorted according to the order of the node identifiers, and the random sequence is generated according to the sorted two-dimensional array. For instance, the row containing node 0 and the identifiers of its neighboring nodes is placed in the first row, the row containing node 1 and the identifiers of its neighboring nodes in the second row, and so on. Further, the identifier of node 0 may be removed from the first row so that only the identifiers of its neighboring nodes are retained, and an association between node 0 and the processed first row is established, so that the identifier of any neighboring node in the first row can subsequently be indexed according to the identifier of node 0; by analogy, each processed row may retain only the identifiers of neighboring nodes.
  • In the processed rows, the numbers of one-dimensional array elements are equal, and that number is generally not less than the neighbor count of the node having the most neighboring nodes. Rows that are not filled up by neighbor identifiers can be padded at the end with empty elements (that is, "null" elements).
  • In addition, if only a few individual nodes have very many neighbors while the other nodes have far fewer, the number of one-dimensional array elements per processed row may instead be defined with reference to those other nodes, and for such an individual node only a part of its neighbors may be taken as the elements of its corresponding row, so as not to waste a large amount of memory unnecessarily.
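  • A minimal sketch of the sorting, culling, and padding described above, using Python None for the "null" element (rows as in the five-node example):

```python
full_array = [[1, 0, 2], [2, 1, 3, 4], [0, 1], [3, 2, 4], [4, 2, 3]]

# Sort the rows by node identifier, then keep only the neighbor identifiers.
rows = sorted(full_array, key=lambda row: row[0])
neighbors_only = [row[1:] for row in rows]

# Pad every row with "null" elements up to the largest neighbor count,
# so that all one-dimensional arrays have the same number of elements;
# row i is then associated with node i by its position.
width = max(len(r) for r in neighbors_only)
adjacency = [r + [None] * (width - len(r)) for r in neighbors_only]

print(adjacency)
# [[1, None, None], [0, 2, None], [1, 3, 4], [2, 4, None], [2, 3, None]]
```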
  • Based on the above description, FIG. 3 shows a schematic diagram of a cluster-based two-dimensional array generation process in an actual application scenario provided by an embodiment of the present specification.
  • In FIG. 3, the data table in the database uses the node identifier as the primary key and records the identifiers of each node's neighboring nodes: the neighboring node of node 0 is node 1; the neighboring nodes of node 1 are nodes 0 and 2; the neighboring nodes of node 2 are nodes 1, 3, and 4; the neighboring nodes of node 3 are nodes 2 and 4; and the neighboring nodes of node 4 are nodes 2 and 3.
  • As described above, working machines 0 to 2 preferably read the identifiers of the neighboring nodes of their respective parts of the nodes from the database in parallel.
  • Each working machine generates a corresponding non-full two-dimensional array according to the identifiers it has read: the two-dimensional array generated by working machine 0 contains two rows, the one generated by working machine 1 contains one row, and the one generated by working machine 2 contains two rows. Each row of a non-full two-dimensional array includes both the identifier of a node and the identifiers of each of that node's neighboring nodes.
  • The working machine cluster synchronizes the generated non-full two-dimensional arrays to the server cluster; it can be seen that the server cluster has thereby obtained the full two-dimensional array, stored in parts across servers 0 to 2.
  • The server cluster then synchronizes the full two-dimensional array to each working machine. Each working machine can then independently sort the full two-dimensional array and cull the node identifiers, obtaining an ordered two-dimensional array that contains only the identifiers of neighboring nodes and is used for generating random sequences.
  • In the embodiments of the present specification, for step S206, generating a random sequence according to the two-dimensional array may specifically include:
  • The working machine randomly determines one identifier among the identifiers of the nodes as the identifier of the target node; indexes the corresponding row in the two-dimensional array according to the identifier of the target node, the corresponding row including the identifier of the target node and the identifiers of the neighboring nodes of the target node; determines the number of neighboring-node identifiers included in the corresponding row; randomly determines a non-negative integer smaller than that number, and acquires, from the corresponding row, the identifier of the neighboring node whose position equals that non-negative integer; and performs iterative calculation by taking that neighboring node as the new target node, thereby generating a random sequence composed of the identifiers of the target nodes obtained in sequence.
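  • A minimal sketch of this walk loop, assuming the processed adjacency array from the sketch above (row i holds only the null-padded neighbor identifiers of node i) and the walk-step count of 8 used in the later example:

```python
import random

adjacency = [[1, None, None], [0, 2, None], [1, 3, 4], [2, 4, None], [2, 3, None]]

def random_sequence(adjacency, walk_steps=8):
    n = len(adjacency)
    target = random.randint(0, n - 1)           # randomly determined start identifier
    sequence = [target]
    while len(sequence) < walk_steps:
        row = adjacency[target]                  # the row indexed by the target node
        count = sum(e is not None for e in row)  # number of neighbor identifiers in the row
        j = random.randint(0, count - 1)         # non-negative integer smaller than that number
        target = row[j]                          # the j-th neighboring node becomes the new target
        sequence.append(target)
    return sequence

print(random_sequence(adjacency))  # e.g. [2, 3, 4, 3, 2, 4, 2, 3]
```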
  • FIG. 4 is a schematic diagram of a cluster-based random sequence generation process in an actual application scenario according to an embodiment of the present disclosure.
  • Assume the graph data includes a total of N nodes, the identifier of the m-th node is m, 0 ≤ m ≤ N-1, the target node is the i-th node, and the corresponding row is the i-th row of the two-dimensional array.
  • The corresponding row is a one-dimensional array, the identifier of the n-th neighboring node of the target node is the n-th element of that one-dimensional array, n is counted from 0, and the non-negative integer is denoted as j.
  • In this example, N = 5. According to the full two-dimensional array synchronized by the server cluster, the two-dimensional array obtained by the working machine after processing (called the adjacent-node array) correspondingly contains 5 rows, which in turn correspond to nodes 0 to 4. Each row is a one-dimensional array that includes the identifiers of the neighboring nodes of its corresponding node, with the insufficient part filled with "null" elements.
  • Assume the identifier of the target node is 2 and j = 1. The target node is then node 2, the i-th row is [1, 3, 4], and the identifier of the 1st neighboring node of the target node (counting from 0) is the 1st element of that array, namely 3.
  • Thus, a random walk from node 2 to node 3 is realized; node 3 is then taken as the target node for the next iteration, and the random walk continues.
  • In this way, the identifiers of the nodes passed through in sequence constitute a random sequence.
  • In FIG. 4, the number of random walk steps is preset to 8 and the batch size is preset to 5; this is represented by a matrix.
  • The number of random walk steps is, for example, the number of columns of the matrix, and the batch size is the number of rows of the matrix; each row of the matrix can store one random sequence.
  • The number of random walk steps defines the maximum length of a random sequence; whenever a random sequence reaches this maximum length, generation of the next random sequence can begin without depending on it.
  • The batch size defines the maximum number of random sequences each working machine generates before writing the already-generated sequences to the database.
  • When this maximum is reached, the working machine can write the multiple random sequences it has generated but not yet written (represented as the corresponding matrix) to the database. For example, when working machine 2 has generated unwritten random sequences reaching the maximum number of 5, the corresponding matrix can be written into the database.
  • Taking the first random sequence (3, 4, 3, 2, 4, 2, 3, 2) generated by working machine 0 in FIG. 4 as an example, this random sequence represents a random walk that passes through the following nodes in sequence: node 3, node 4, node 3, node 2, node 4, node 2, node 3, node 2.
  • a threshold may also be preset to limit the maximum total number of random sequences generated by the entire working machine cluster. When the set threshold is reached, each working machine can stop generating a random sequence.
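  • A minimal sketch of the batching and stop condition described above, reusing the random_sequence function from the previous sketch; the write_matrix_to_db stub is hypothetical, the walk-step count of 8 and batch size of 5 follow the example, and the cluster-wide stop threshold is approximated here by a per-worker total.

```python
def write_matrix_to_db(matrix):
    # Hypothetical stand-in for writing a batch of random sequences
    # (one sequence per matrix row) to the database.
    print(f"writing {len(matrix)} sequences")

def generate_sequences(adjacency, total_sequences=12, walk_steps=8, batch_size=5):
    matrix = []                                    # sequences buffered before a write
    for _ in range(total_sequences):               # stop once the preset total is reached
        matrix.append(random_sequence(adjacency, walk_steps))
        if len(matrix) == batch_size:              # batch size reached: flush to the database
            write_matrix_to_db(matrix)
            matrix = []
    if matrix:                                     # write any remaining sequences
        write_matrix_to_db(matrix)
```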
  • some working machines in the working machine cluster may be abnormal, resulting in the loss of the two-dimensional array used to generate the random sequence.
  • For example, if a working machine stores the two-dimensional array only in memory, the data in memory will be lost after the machine goes down.
  • In this case, when such working machines recover, the full two-dimensional array can be re-acquired from the server cluster and processed again for generating random sequences; this situation is illustrated by working machine 2 in FIG. 4.
  • the embodiment of the present specification further provides a schematic flowchart of a random walk method, as shown in FIG. 5.
  • the execution body of the process in FIG. 5 may be a single computing device or multiple computing devices, and the process includes the following steps:
  • S502 Acquire a two-dimensional array generated according to information of each node included in the graph data, where each row of the two-dimensional array respectively includes an identifier of a neighboring node of the node.
  • In step S502, the present application does not limit who specifically generates the two-dimensional array. Generally, as long as the graph data has not changed, a two-dimensional array already generated from that graph data can be reused.
  • S504 Generate, according to the two-dimensional array, a random sequence, where the random sequence reflects a random walk in the graph data.
  • the embodiment of the present specification further provides corresponding devices of the above methods, as shown in FIG. 6 and FIG. 7.
  • FIG. 6 is a schematic structural diagram of a cluster-based random walk device corresponding to FIG. 2 according to an embodiment of the present disclosure.
  • the device belongs to the cluster, and includes:
  • the obtaining module 601 is configured to obtain information about each node included in the graph data
  • the first generation module 602 is configured to generate a two-dimensional array according to the information of each node, where each row of the two-dimensional array includes an identifier of a neighboring node of the node;
  • the second generation module 603 generates a random sequence according to the two-dimensional array, and the random sequence reflects a random walk in the graph data.
  • the cluster includes a server cluster and a working machine cluster
  • the acquiring module 601 acquires information about each node included in the graph data, and specifically includes:
  • the working machine cluster reads the identifiers of the neighboring nodes of each node included in the graph data from the database, wherein each worker machine reads the identifiers of the neighboring nodes of a part of the nodes.
  • the first generating module 602 generates a two-dimensional array according to the information of each node, and specifically includes:
  • each working machine generates a non-full two-dimensional array according to the identifiers of the neighboring nodes it has read and the identifiers of their corresponding nodes;
  • the working machine cluster synchronizes each non-full two-dimensional array to the server cluster; and
  • the server cluster obtains the full two-dimensional array according to the non-full two-dimensional arrays.
  • Before the second generating module 603 generates the random sequence according to the two-dimensional array, the server cluster synchronizes the full two-dimensional array to each working machine, so that each working machine generates random sequences according to the full two-dimensional array.
  • the second generating module 603 generates a random sequence according to the two-dimensional array, and specifically includes:
  • the working machine sorts the rows of the full two-dimensional array according to the order of the node identifiers;
  • a random sequence is generated based on the sorted two-dimensional array.
  • the second generating module 603 generates a random sequence according to the two-dimensional array, and specifically includes:
  • the working machine randomly determines an identifier in the identifier of each node as an identifier of the target node;
  • Optionally, the total number of nodes is N, the identifier of the m-th node is m, 0 ≤ m ≤ N-1, the target node is the i-th node, and the corresponding row is the i-th row of the two-dimensional array.
  • Optionally, the corresponding row is a one-dimensional array, the identifier of the n-th neighboring node of the target node is the n-th element of that one-dimensional array, and n is counted from 0;
  • the non-negative integer is denoted as j, and the working machine acquiring the identifier of the j-th neighboring node included in the corresponding row specifically includes:
  • the working machine obtains the identifier of the j-th neighboring node of the target node by reading the j-th element of the one-dimensional array.
  • Optionally, the total number of elements of the one-dimensional array is equal to the neighbor count of the node that has the most neighboring nodes among the nodes.
  • Optionally, the working machine generating a random sequence composed of the identifiers of the target nodes obtained in sequence specifically includes: when the total number of target nodes obtained in sequence reaches a preset number of random walk steps, the working machine generates a random sequence composed of the identifiers of those target nodes.
  • the second generating module 603 generates a random sequence, and specifically includes:
  • Each of the working machines generates a random sequence until the total number of generated random sequences reaches a set threshold.
  • Optionally, if the two-dimensional array already held locally by a working machine is lost, the working machine re-acquires it from the server cluster.
  • FIG. 7 is a schematic structural diagram of a random walk device corresponding to FIG. 5 according to an embodiment of the present disclosure, the device includes:
  • the obtaining module 701 is configured to obtain a two-dimensional array generated according to information about each node included in the graph data, where each row of the two-dimensional array includes an identifier of a neighboring node of the node;
  • a generating module 702 is configured to generate a random sequence according to the two-dimensional array, the random sequence reflecting a random walk in the graph data.
  • An embodiment of the present specification further provides a cluster-based random walk device corresponding to FIG. 2. The device belongs to the cluster and includes: at least one processor; and a memory communicatively connected to the at least one processor; wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to: acquire the information of each node included in the graph data; generate a two-dimensional array according to the information of each node, each row of the two-dimensional array including the identifiers of the neighboring nodes of one node; and generate a random sequence according to the two-dimensional array, the random sequence reflecting a random walk in the graph data.
  • An embodiment of the present specification further provides a random walk device corresponding to FIG. 5, including: at least one processor; and a memory communicatively connected to the at least one processor; wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to: acquire a two-dimensional array generated according to the information of each node included in the graph data, each row of the two-dimensional array including the identifiers of the neighboring nodes of one node; and generate a random sequence according to the two-dimensional array, the random sequence reflecting a random walk in the graph data.
  • An embodiment of the present specification further provides a non-volatile computer storage medium corresponding to FIG. 2, which stores computer-executable instructions configured to: acquire the information of each node included in the graph data; generate a two-dimensional array according to the information of each node, each row of the two-dimensional array including the identifiers of the neighboring nodes of one node; and generate a random sequence according to the two-dimensional array, the random sequence reflecting a random walk in the graph data.
  • An embodiment of the present specification further provides a non-volatile computer storage medium corresponding to FIG. 5, which stores computer-executable instructions configured to: acquire a two-dimensional array generated according to the information of each node included in the graph data, each row of the two-dimensional array including the identifiers of the neighboring nodes of one node; and generate a random sequence according to the two-dimensional array, the random sequence reflecting a random walk in the graph data.
  • The apparatuses, devices, and non-volatile computer storage media provided by the embodiments of the present specification correspond to the methods; therefore, they also have beneficial technical effects similar to those of the corresponding methods. Since the beneficial technical effects of the methods have been described in detail above, the beneficial technical effects of the corresponding apparatuses, devices, and non-volatile computer storage media are not repeated here.
  • PLD: Programmable Logic Device
  • FPGA: Field Programmable Gate Array
  • HDL: Hardware Description Language
  • The controller can be implemented in any suitable manner; for example, the controller can take the form of a microprocessor or processor together with a computer-readable medium storing computer-readable program code (e.g., software or firmware) executable by the (micro)processor, logic gates, switches, an application-specific integrated circuit (ASIC), a programmable logic controller, or an embedded microcontroller.
  • Examples of controllers include, but are not limited to, the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20, and Silicon Labs C8051F320; a memory controller can also be implemented as part of the memory's control logic.
  • Those skilled in the art also know that, in addition to implementing the controller purely as computer-readable program code, the method steps can be logically programmed so that the controller implements the same functions in the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like.
  • Such a controller can therefore be considered a hardware component, and the means for implementing various functions included therein can also be considered as a structure within the hardware component.
  • Or even, the means for implementing various functions can be regarded as both software modules implementing the method and structures within a hardware component.
  • the system, device, module or unit illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product having a certain function.
  • a typical implementation device is a computer.
  • the computer can be, for example, a personal computer, a laptop computer, a cellular phone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or A combination of any of these devices.
  • embodiments of the specification can be provided as a method, system, or computer program product.
  • embodiments of the present specification can take the form of an entirely hardware embodiment, an entirely software embodiment, or a combination of software and hardware.
  • embodiments of the present specification can take the form of a computer program product embodied on one or more computer usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) including computer usable program code.
  • the computer program instructions can also be stored in a computer readable memory that can direct a computer or other programmable data processing device to operate in a particular manner, such that the instructions stored in the computer readable memory produce an article of manufacture comprising the instruction device.
  • The instruction device implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
  • These computer program instructions can also be loaded onto a computer or other programmable data processing device such that a series of operational steps are performed on a computer or other programmable device to produce computer-implemented processing for execution on a computer or other programmable device.
  • The instructions executed on the computer or other programmable device thus provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
  • a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
  • the memory may include non-persistent memory, random access memory (RAM), and/or non-volatile memory in a computer readable medium, such as read only memory (ROM) or flash memory.
  • Memory is an example of a computer readable medium.
  • Computer-readable media include persistent and non-persistent, removable and non-removable media.
  • Information storage can be implemented by any method or technology.
  • the information can be computer readable instructions, data structures, modules of programs, or other data.
  • Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic tape cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission media that can be used to store information accessible by a computing device.
  • As defined herein, computer-readable media do not include transitory computer-readable media (transitory media) such as modulated data signals and carrier waves.
  • program modules include routines, programs, objects, components, data structures, and the like that perform particular tasks or implement particular abstract data types.
  • the present specification can also be practiced in distributed computing environments where tasks are performed by remote processing devices that are connected through a communication network.
  • program modules can be located in both local and remote computer storage media including storage devices.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The embodiments of the present specification disclose a random walk method, a cluster-based random walk method, and corresponding apparatuses and devices. The scheme includes: acquiring the information of each node included in graph data; generating, according to the information of each node, a two-dimensional array that reflects the correspondence between nodes and their neighboring nodes; and generating a random sequence according to the two-dimensional array, thereby realizing a random walk in the graph data. The scheme is applicable both to clusters and to standalone machines.

Description

随机游走、基于集群的随机游走方法、装置以及设备 技术领域
本说明书涉及计算机软件技术领域,尤其涉及随机游走、基于集群的随机游走方法、装置以及设备。
背景技术
随着计算机和互联网技术的迅速发展,很多业务都可以在网上进行,图计算是处理社交方面的网上业务的一种常用手段。
例如,对于社交风控业务中的账户欺诈识别:每个用户分别作为一个节点,若两个用户之间存在转账关系,则对应的两个节点之间存在一条边,边可以是无向的,也可以是根据转账方向定义了方向的;以此类推,可以得到包含多个节点和多条边的图数据,进而基于图数据进行图计算以实现风控。
随机游走算法是图计算中比较基础和重要的一环,其为上层复杂算法提供支持。在现有技术中,一般采用这样的随机游走算法:在数据库中随机读取图数据包含的一个节点,再继续在该数据库中随机读取该节点的一个相邻节点,以此类推,实现在图数据中的随机游走。
基于现有技术,需要能够应用于大规模图数据的更为高效的随机游走方案。
发明内容
本说明书实施例提供随机游走、基于集群的随机游走方法、装置以及设备,用以解决如下技术问题:需要能够应用于大规模图数据的更为高效的随机游走方案。
为解决上述技术问题,本说明书实施例是这样实现的:
本说明书实施例提供的一种基于集群的随机游走方法,包括:
所述集群获取图数据包含的各节点的信息;
根据所述各节点的信息,生成二维数组,所述二维数组的每行分别包括一个所述节点的相邻节点的标识;
根据所述二维数组,生成随机序列,所述随机序列反映在所述图数据中的随机游走。
本说明书实施例提供的一种随机游走方法,包括:
获取根据图数据包含的各节点的信息生成的二维数组,所述二维数组的每行分别包括一个所述节点的相邻节点的标识;
根据所述二维数组,生成随机序列,所述随机序列反映在所述图数据中的随机游走。
本说明书实施例提供的一种基于集群的随机游走装置,所述装置属于所述集群,包括:
获取模块,获取图数据包含的各节点的信息;
第一生成模块,根据所述各节点的信息,生成二维数组,所述二维数组的每行分别包括一个所述节点的相邻节点的标识;
第二生成模块,根据所述二维数组,生成随机序列,所述随机序列反映在所述图数据中的随机游走。
本说明书实施例提供的一种随机游走装置,包括:
获取模块,获取根据图数据包含的各节点的信息生成的二维数组,所述二维数组的每行分别包括一个所述节点的相邻节点的标识;
生成模块,根据所述二维数组,生成随机序列,所述随机序列反映在所述图数据中的随机游走。
本说明书实施例提供的一种基于集群的随机游走设备,所述设备属于所述集群,包括:
至少一个处理器;以及,
与所述至少一个处理器通信连接的存储器;其中,
所述存储器存储有可被所述至少一个处理器执行的指令,所述指令被 所述至少一个处理器执行,以使所述至少一个处理器能够:
获取图数据包含的各节点的信息;
根据所述各节点的信息,生成二维数组,所述二维数组的每行分别包括一个所述节点的相邻节点的标识;
根据所述二维数组,生成随机序列,所述随机序列反映在所述图数据中的随机游走。
本说明书实施例提供的一种随机游走设备,包括:
至少一个处理器;以及,
与所述至少一个处理器通信连接的存储器;其中,
所述存储器存储有可被所述至少一个处理器执行的指令,所述指令被所述至少一个处理器执行,以使所述至少一个处理器能够:
获取根据图数据包含的各节点的信息生成的二维数组,所述二维数组的每行分别包括一个所述节点的相邻节点的标识;
根据所述二维数组,生成随机序列,所述随机序列反映在所述图数据中的随机游走。
本说明书实施例采用的上述至少一个技术方案能够达到以下有益效果:有利于减少对原始保存图数据的数据库的访问,二维数组在生成后无需依赖该数据库,通过二维数组能够快速索引节点的相邻节点,该方案能够适用于大规模图数据且效率较高,在基于集群实施该方案的情况下,还能够进一步地提高效率。
附图说明
为了更清楚地说明本说明书实施例或现有技术中的技术方案,下面将对实施例或现有技术描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本说明书中记载的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动性的前提下,还可以根据这些附图获得其他的附图。
图1为本说明书的方案在一种实际应用场景下涉及的一种整体架构示意图;
图2为本说明书实施例提供的一种基于集群的随机游走方法的流程示意图;
图3为本说明书实施例提供的一种实际应用场景下,基于集群的二维数组生成流程示意图;
图4为本说明书实施例提供的一种实际应用场景下,基于集群的随机序列生成流程示意图;
图5为本说明书实施例提供的一种随机游走方法的流程示意图;
图6为本说明书实施例提供的对应于图2的一种基于集群的随机游走装置的结构示意图;
图7为本说明书实施例提供的对应于图5的一种随机游走装置的结构示意图。
具体实施方式
本说明书实施例提供随机游走、基于集群的随机游走方法、装置以及设备。
为了使本技术领域的人员更好地理解本说明书中的技术方案,下面将结合本说明书实施例中的附图,对本说明书实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例仅仅是本申请一部分实施例,而不是全部的实施例。基于本说明书实施例,本领域普通技术人员在没有作出创造性劳动前提下所获得的所有其他实施例,都应当属于本申请保护的范围。
本说明书的方案既适用于集群,也适用于单机。在集群下对于大规模图数据的处理效率更高,原因在于:可以拆分任务(比如,数据读取任务、数据同步任务等),进而由集群中的多个机器并行执行被分配给自己的一部分任务。以下各实施例主要基于集群场景进行说明。
方案涉及的集群可以有一个或者多个,以图1为例,涉及了两个集群。
图1为本说明书的方案在一种实际应用场景下涉及的一种整体架构示意图。该整体架构中,主要涉及三部分:服务器集群、工作机集群、数据库。数据库保存有图数据,供集群读取,服务器集群与工作机集群相互配合,根据从数据库读取的数据,实现在图数据中的随机游走。
图1中的架构是示例性的,并非唯一。比如,方案可以涉及一个集群,该集群中包含至少一个调度机和多个工作机;再比如,方案也可以涉及一个工作机集群和一个服务器;等等;方案涉及的机器相互配合,实现在图数据中的随机游走。
下面对本说明书的方案进行详细说明。
图2为本说明书实施例提供的一种基于集群的随机游走方法的流程示意图。图2中各步骤由集群中的至少一个机器(或者机器上的程序)执行,不同步骤的执行主体可以不同。
图2中的流程包括以下步骤:
S202:所述集群获取图数据包含的各节点的信息。
在本说明书实施例中,节点的信息可以包括:节点的标识、节点的相邻节点的标识(以下以此为例)、或者标识以外的能够指示节点的相邻节点的信息等。各节点的信息可以是一次性获取的,也可以是分多次获取的。
一般地,原始的图数据保存于数据库中,在这种情况下,需要通过访问数据库,读取得到各节点的信息。为了避免重复读取数据增加数据库的负担,集群中的多个机器可以分别读取不重复的一部分节点的信息,进一步地,多个机器可以并行读取数据库,以快速获取节点的信息。
例如,可以由工作机集群中的各工作机并行地、分别从数据库读取一部分节点的信息并进行处理,再将处理后得到的数据同步至服务器集群。或者,各工作机也可以将读取的节点的信息,直接同步至服务器集群,由服务器集群进一步地处理。所述处理至少包括生成二维数组。
S204:根据所述各节点的信息,生成二维数组,所述二维数组的每行分别包括一个所述节点的相邻节点的标识。
在本说明书实施例中,二维数组可以视为矩阵,其每行分别为一个一维数组。
每行可以分别对应一个节点,该行至少包括其对应的节点的相邻节点的标识,每个相邻节点的标识即可以是该行的一个一维数组元素。为了便于索引,该对应的节点的标识自身也可以是该行的一个一维数组元素,比如,该对应的节点的标识为该行的第0个一维数组元素,之后的一维数组元素依次为该节点的各相邻节点的标识。或者,该节点的标识自身可以不包含在该行内,而只是与该行具有关联关系,通过该关联关系能够用该节点的标识索引到该行。
根据二维数组以及任意节点的标识,能够快速索引到该节点的任意相邻节点的标识,从而,有利于高效地在图数据中随机游走。
为了便于索引,各节点的标识优选地为数字。比如,用各节点的标识大小定义各节点的顺序,从0开始计数,顺序最先的节点的标识为0、顺序第二的节点的标识为1,依次类推。以下各实施例基于该例中的定义进行说明。
当然,若节点原本的标识并非数字,也可以基于一一映射的规则,将所述原本的标识映射为数字后,作为节点的标识用于生成二维数组。
S208:根据所述二维数组,生成随机序列,所述随机序列反映在所述图数据中的随机游走。
在本说明书实施例中,随机序列为多个节点的标识构成的序列,各标识在该随机序列中的顺序即为随机游走顺序,随机序列的最大长度一般由预定的随机游走步数决定。
在得到二维数组后,可以相互独立地多次执行步骤S206,进而得到多个相互独立的随机序列。比如,各工作机分别根据二维数组,生成一个或者多个随机序列。
通过图2的方法,有利于减少对原始保存图数据的数据库的访问,二维数组在生成后无需依赖该数据库,通过二维数组能够快速索引节点的相 邻节点,该方案能够适用于大规模图数据且效率较高,由于基于集群实施该方法,因此,还能够进一步地提高效率。
基于图2的方法,本说明书实施例还提供了该方法的一些具体实施方案,以及扩展方案,下面以图1中的架构为例,进行说明。
在本说明书实施例中,如前所述,所述集群可以包括服务器集群和工作机集群,对于步骤S202,所述集群获取图数据包含的各节点的信息,具体可以包括:
所述工作机集群从数据库读取图数据包含的各节点的相邻节点的标识,其中,每个工作机读取一部分节点的相邻节点的标识。需要说明的是,若对于工作机集群,节点的标识本身也是未知的,则工作机集群可以读取节点的标识,并根据节点的标识(在数据库中作为主键),读取节点的相邻节点的标识。
例如,假定有标识分别为0~4的5个节点。工作机集群包括工作机0、工作机1、工作机2,每个工作机分别从数据库读取一部分节点的相邻节点的标识;比如,工作机0读取节点1的相邻节点的标识(分别为0、2),以及节点2的标识(分别为1、3、4);工作机1读取节点0的相邻节点的标识(为1);工作机2读取节点3的相邻节点的标识(分别为2、4),以及节点4的标识(分别为2、3)。
在本说明书实施例中,各工作机可以根据自己读取标识的相邻节点及其对应节点的标识,生成非全量的二维数组。
进一步地,工作机集群可以将这些非全量的二维数组同步给服务器集群。从而,服务器集群能够得到由这些非全量的二维数组构成的全量的二维数组,具体地:服务器集群可以通过专门整合(比如,拆分二维数组、合并二维数组等)这些非全量的二维数组,得到全量的二维数组;也可以不专门整合而只是在工作机集群同步完毕后,将同步得到的全部数据视为一个整体,即全量的二维数组。服务器集群中的各服务机可以分别保存全量的二维数组,也可以只保存全量的二维数组的一部分。
步骤S204所述的二维数组可以是所述全量的二维数组,也可以是所述非全量的二维数组,也可以是对所述全量的二维数组进一步地处理(比如,重排序等)后得到的二维数组,下面各实施例主要以第三种情况为例进行说明。需要说明的是,若是非全量的二维数组,则后续的随机游走相应地在该非全量的二维数组涉及的节点(这些节点只是图数据中的一部分节点)中进行,任意工作机可以根据自己生成的非全量的二维数组,生成随机序列,而未必要依赖于上述同步和服务器集群。
对上述同步后的动作继续说明。服务器集群可以进一步地将全量的二维数组再向各工作机分别同步,以便各工作机能够根据全量的二维数组,生成随机序列。上述的第三种情况已经提到,各工作机可以对全量的二维数组进一步处理后,再用于生产随机序列。
例如,可以根据节点标识顺序,对所述全量的二维数组中的各行进行排序;根据排序后的二维数组,生成随机序列。比如,将节点0及其相邻节点的标识所在的行排在第一行,将节点1及其相邻节点的标识所在的行排在第二行,以此类推;进一步地,还可以将第一行中节点0的标识剔除,只保留其相邻节点的标识,并建立节点0与处理后的第一行之间的关联关系,以便于后续根据节点0的标识索引第一行中的任意相邻节点的标识,以此类推,处理后的每行中可以只保留相邻节点的标识。
在本说明书实施例中,在处理后各行中,一维数组元素数量相等,该元素数量一般不小于各节点中邻居节点最多的节点的邻居节点数量。对于用邻居节点的标识填不满的行,可以用空元素(也即“null”元素)在行的尾部进行填充。另外,若只有个别节点的邻居节点数量很多,而其他节点的邻居数量相比而言少很多,而也可以参照所述其他节点定义处理后各行中一维数组元素数量,而对于该个别节点的邻居节点,可以只取一部分邻居节点作为该个别节点对应行的元素,以免无谓地浪费大量内存。
根据上面的说明,本说明书实施例提供的一种实际应用场景下,基于集群的二维数组生成流程示意图,如图3所示。
在图3中,数据库中的数据表以节点的标识作为主键,记录了各节点的相邻节点的标识,其中,节点0的相邻节点为节点1,节点1的相邻节点为节点0、节点2,节点2的相邻节点为节点1、节点3、节点4,节点3的相邻节点为节点2、节点4、节点4的相邻节点为节点2、节点3。工作机0~2如前所述,优选地可以并行分别从数据库读取一部分节点的相邻节点的标识。
每个工作机根据自己读取的标识,对应地生成非全量的二维数组。工作机0生成的二维数组中包含两行,工作机1生成的二维数组中包含一个行,工作机2生成的二维数组中包含两行。在非全量的二维数组的每行中,既包括节点的标识,也包括该节点的各相邻节点的标识。
工作机集群将生成的各非全量的二维数组都同步至服务器集群,可以看到服务器集群得到了全量的二维数组并分部分保存在服务器0~2中。
服务器集群将全量的二维数组分别同步给各工作机。则各工作机可以分别独立地对全量的二维数组进行排序和节点剔除处理,得到有序的只包含相邻节点的标识的二维数组,用于生成随机序列。
在本说明书实施例中,对于步骤S206,所述根据所述二维数组,生成随机序列,具体可以包括:
所述工作机在所述各节点的标识中,随机确定一个标识,作为目标节点的标识;根据所述目标节点的标识,在所述二维数组中索引得到对应的行,所述对应的行包括所述目标节点的标识,以及所述目标节点的相邻节点的标识;确定所述对应的行包括的相邻节点的标识的数量;随机确定一个小于所述数量的非负整数,并获取所述对应的行包括的第所述非负整数个相邻节点的标识;通过将该第所述非负整数个相邻节点重新作为目标节点进行迭代计算,生成由依次得到的各目标节点的标识构成的随机序列。
进一步地沿用图3的例子,结合图4说明。图4为本说明书实施例提供的一种实际应用场景下,基于集群的随机序列生成流程示意图。
假定图数据共包含N个节点,第m个所述节点的标识为m,0≤m≤N-1, 所述目标节点为第i个节点,所述对应的行为所述二维数组的第i行。所述对应的行为一维数组,所述目标节点的第n个相邻节点的标识为该一维数组的第n个元素,n从0开始计数,所述非负整数记作j。
在图5中,N=5,工作机根据服务器集群所同步的全量的二维数组,处理后得到的二维数组(称为相邻节点数组)中相应地包含5行,依次对应于节点0~4,每行分别为一个一维数组,一维数组包括其对应的节点的各相邻节点的标识,不足部分用“null”元素填充。
工作机随机生成一个属于[0,N-1=4]的整数,即工作机在各节点的标识中,随机确定的目标节点的标识;根据目标节点的标识i,在相邻节点数组索引到第i行(为一维数组);确定该第i行包含的非“null”元素的元素数量;随机确定一个小于该元素数量的非负整数j;通过读取该第i行的第j个元素,得到目标节点的第j个相邻节点的标识。
假定目标节点的标识为2,j=1。则目标节点为节点2,该第i行为[1,3,4],获取的目标节点的第1个相邻节点的标识为该数组的第1个元素,即3。从而,实现从节点2随机游走到节点3,进而将节点3作为目标节点迭代计算,继续随机游走,如此,依次经过的多个节点的标识构成随机序列。
在图4中,预先设定随机游走步数为8,批数为5。用矩阵进行表示,随机游走步数比如为该矩阵的列数,批数为该矩阵的行数,该矩阵的每一行可以存储一个随机序列。
随机游走步数定义了一个随机序列的最大长度,每当随机序列达到该最大长度时,可以不依赖该随机序列而开始生成下一个随机序列。
批数定义了每个工作机在向数据库写入已生成前,生成随机序列的最大个数,到达该最大个数时,工作机可以将自己已生成未写入的多个随机序列(表示为对应的矩阵)写入数据库。比如,图5中工作机2当前已生成未写入的随机序列已经到达最大个数5,则可以将对应的矩阵写入数据库。
以图4中工作机0生成的第一个随机序列(3,4,3,2,4,2,3,2) 为例,该随机序列即表示依次经过下列节点的随机游走过程:节点3、节点4、节点3、节点2、节点4、节点2、节点3、节点2。
进一步地,还可以预先设定阈值,用于限定整个工作机集群生成的随机序列的最大总数量。当到达该设定阈值时,各工作机可以停止生成随机序列。
另外,在实际应用中,工作机集群中的某些工作机可能会出现异常,导致之前用于生成随机序列的二维数组丢失。比如,若工作机将该二维数组只存储在内存中,则宕机后内存中的数据会丢失。在这种情况下,当这些工作机恢复正常时,可以从服务器集群重新获取全量的二维数组并进行处理后用于生成随机序列。图4中通过工作机2示出了这种情况。
上面主要是基于集群场景,对本说明书的方案进行说明的,本说明书的方案也可以脱离集群场景。比如,基于同样的思路,本说明书实施例还提供了一种随机游走方法的流程示意图,如图5所示。
图5中的流程的执行主体可以是单一的计算设备,也可以是多个计算设备,该流程包括以下步骤:
S502:获取根据图数据包含的各节点的信息生成的二维数组,所述二维数组的每行分别包括一个所述节点的相邻节点的标识。
在步骤S502中,二维数组具体由谁生成,本申请并不做限定。一般地,只要图数据未发生变化,根据该图数据已生成的二维数组可以一直复用。
S504:根据所述二维数组,生成随机序列,所述随机序列反映在所述图数据中的随机游走。
基于同样的思路,本说明书实施例还提供了上面各方法的对应装置,如图6、图7所示。
图6为本说明书实施例提供的对应于图2的一种基于集群的随机游走装置的结构示意图,该装置属于所述集群,包括:
获取模块601,获取图数据包含的各节点的信息;
第一生成模块602,根据所述各节点的信息,生成二维数组,所述二维 数组的每行分别包括一个所述节点的相邻节点的标识;
第二生成模块603,根据所述二维数组,生成随机序列,所述随机序列反映在所述图数据中的随机游走。
可选地,所述集群包括服务器集群和工作机集群;
所述获取模块601获取图数据包含的各节点的信息,具体包括:
所述工作机集群从数据库读取图数据包含的各节点的相邻节点的标识,其中,每个工作机读取一部分节点的相邻节点的标识。
可选地,所述第一生成模块602根据所述各节点的信息,生成二维数组,具体包括:
各所述工作机分别根据自己读取标识的相邻节点及其对应节点的标识,生成非全量的二维数组;
所述工作机集群将各所述非全量的二维数组向所述服务器集群同步;
所述服务器集群根据各所述非全量的二维数组,得到全量的二维数组。
可选地,所述第二生成模块603根据所述二维数组,生成随机序列前,所述服务器集群将所述全量的二维数组向各所述工作机同步,以便各所述工作机根据所述全量的二维数组,生成随机序列。
可选地,所述第二生成模块603根据所述二维数组,生成随机序列,具体包括:
所述工作机根据节点标识顺序,对所述全量的二维数组中的各行进行排序;
根据排序后的二维数组,生成随机序列。
可选地,所述第二生成模块603根据所述二维数组,生成随机序列,具体包括:
所述工作机在所述各节点的标识中,随机确定一个标识,作为目标节点的标识;
根据所述目标节点的标识,在所述二维数组中索引得到对应的行,所述对应的行包括所述目标节点的标识,以及所述目标节点的相邻节点的标 识;
确定所述对应的行包括的相邻节点的标识的数量;
随机确定一个小于所述数量的非负整数,并获取所述对应的行包括的第所述非负整数个相邻节点的标识;
通过将该第所述非负整数个相邻节点重新作为目标节点进行迭代计算,生成由依次得到的各目标节点的标识构成的随机序列。
可选地,所述节点总数量为N,第m个所述节点的标识为m,0≤m≤N-1,所述目标节点为第i个节点,所述对应的行为所述二维数组的第i行。
可选地,所述对应的行为一维数组,所述目标节点的第n个相邻节点的标识为该一维数组的第n个元素,n从0开始计数;
所述非负整数记作j,所述工作机获取所述对应的行包括的第所述非负整数个相邻节点的标识,具体包括:
所述工作机通过读取该一维数组的第j个元素,得到所述目标节点的第j个相邻节点的标识。
可选地,所述一维数组的元素总数量等于所述各节点中邻居节点最多的节点的邻居节点数量。
可选地,所述工作机生成由依次得到的各目标节点的标识构成的随机序列,具体包括:
所述工作机当依次得到的各目标节点总数量达到预设的随机游走步数时,生成由所述依次得到的各目标节点的标识构成的随机序列。
可选地,所述第二生成模块603生成随机序列,具体包括:
各所述工作机分别生成随机序列,直至生成的随机序列总数量达到设定阈值。
可选地,所述工作机若本地已有的所述二维数组丢失,则重新从所述服务器集群获取。
图7为本说明书实施例提供的对应于图5的一种随机游走装置的结构 示意图,该装置包括:
获取模块701,获取根据图数据包含的各节点的信息生成的二维数组,所述二维数组的每行分别包括一个所述节点的相邻节点的标识;
生成模块702,根据所述二维数组,生成随机序列,所述随机序列反映在所述图数据中的随机游走。
基于同样的思路,本说明书实施例还提供了对应于图2的一种基于集群的随机游走设备,该设备属于所述集群,包括:
至少一个处理器;以及,
与所述至少一个处理器通信连接的存储器;其中,
所述存储器存储有可被所述至少一个处理器执行的指令,所述指令被所述至少一个处理器执行,以使所述至少一个处理器能够:
获取图数据包含的各节点的信息;
根据所述各节点的信息,生成二维数组,所述二维数组的每行分别包括一个所述节点的相邻节点的标识;
根据所述二维数组,生成随机序列,所述随机序列反映在所述图数据中的随机游走。
基于同样的思路,本说明书实施例还提供了对应于图5的一种随机游走设备,包括:
至少一个处理器;以及,
与所述至少一个处理器通信连接的存储器;其中,
所述存储器存储有可被所述至少一个处理器执行的指令,所述指令被所述至少一个处理器执行,以使所述至少一个处理器能够:
获取根据图数据包含的各节点的信息生成的二维数组,所述二维数组的每行分别包括一个所述节点的相邻节点的标识;
根据所述二维数组,生成随机序列,所述随机序列反映在所述图数据中的随机游走。
基于同样的思路,本说明书实施例还提供了对应于图2的一种非易失 性计算机存储介质,存储有计算机可执行指令,所述计算机可执行指令设置为:
获取图数据包含的各节点的信息;
根据所述各节点的信息,生成二维数组,所述二维数组的每行分别包括一个所述节点的相邻节点的标识;
根据所述二维数组,生成随机序列,所述随机序列反映在所述图数据中的随机游走。
基于同样的思路,本说明书实施例还提供了对应于图5的一种非易失性计算机存储介质,存储有计算机可执行指令,所述计算机可执行指令设置为:
获取根据图数据包含的各节点的信息生成的二维数组,所述二维数组的每行分别包括一个所述节点的相邻节点的标识;
根据所述二维数组,生成随机序列,所述随机序列反映在所述图数据中的随机游走。
上述对本说明书特定实施例进行了描述。其它实施例在所附权利要求书的范围内。在一些情况下,在权利要求书中记载的动作或步骤可以按照不同于实施例中的顺序来执行并且仍然可以实现期望的结果。另外,在附图中描绘的过程不一定要求示出的特定顺序或者连续顺序才能实现期望的结果。在某些实施方式中,多任务处理和并行处理也是可以的或者可能是有利的。
本说明书中的各个实施例均采用递进的方式描述,各个实施例之间相同相似的部分互相参见即可,每个实施例重点说明的都是与其他实施例的不同之处。尤其,对于装置、设备、非易失性计算机存储介质实施例而言,由于其基本相似于方法实施例,所以描述的比较简单,相关之处参见方法实施例的部分说明即可。
本说明书实施例提供的装置、设备、非易失性计算机存储介质与方法是对应的,因此,装置、设备、非易失性计算机存储介质也具有与对应方 法类似的有益技术效果,由于上面已经对方法的有益技术效果进行了详细说明,因此,这里不再赘述对应装置、设备、非易失性计算机存储介质的有益技术效果。
在20世纪90年代,对于一个技术的改进可以很明显地区分是硬件上的改进(例如,对二极管、晶体管、开关等电路结构的改进)还是软件上的改进(对于方法流程的改进)。然而,随着技术的发展,当今的很多方法流程的改进已经可以视为硬件电路结构的直接改进。设计人员几乎都通过将改进的方法流程编程到硬件电路中来得到相应的硬件电路结构。因此,不能说一个方法流程的改进就不能用硬件实体模块来实现。例如,可编程逻辑器件(Programmable Logic Device,PLD)(例如现场可编程门阵列(Field Programmable Gate Array,FPGA))就是这样一种集成电路,其逻辑功能由用户对器件编程来确定。由设计人员自行编程来把一个数字系统“集成”在一片PLD上,而不需要请芯片制造厂商来设计和制作专用的集成电路芯片。而且,如今,取代手工地制作集成电路芯片,这种编程也多半改用“逻辑编译器(logic compiler)”软件来实现,它与程序开发撰写时所用的软件编译器相类似,而要编译之前的原始代码也得用特定的编程语言来撰写,此称之为硬件描述语言(Hardware Description Language,HDL),而HDL也并非仅有一种,而是有许多种,如ABEL(Advanced Boolean Expression Language)、AHDL(Altera Hardware Description Language)、Confluence、CUPL(Cornell University Programming Language)、HDCal、JHDL(Java Hardware Description Language)、Lava、Lola、MyHDL、PALASM、RHDL(Ruby Hardware Description Language)等,目前最普遍使用的是VHDL(Very-High-Speed Integrated Circuit Hardware Description Language)与Verilog。本领域技术人员也应该清楚,只需要将方法流程用上述几种硬件描述语言稍作逻辑编程并编程到集成电路中,就可以很容易得到实现该逻辑方法流程的硬件电路。
控制器可以按任何适当的方式实现,例如,控制器可以采取例如微处 理器或处理器以及存储可由该(微)处理器执行的计算机可读程序代码(例如软件或固件)的计算机可读介质、逻辑门、开关、专用集成电路(Application Specific Integrated Circuit,ASIC)、可编程逻辑控制器和嵌入微控制器的形式,控制器的例子包括但不限于以下微控制器:ARC 625D、Atmel AT91SAM、Microchip PIC18F26K20以及Silicone Labs C8051F320,存储器控制器还可以被实现为存储器的控制逻辑的一部分。本领域技术人员也知道,除了以纯计算机可读程序代码方式实现控制器以外,完全可以通过将方法步骤进行逻辑编程来使得控制器以逻辑门、开关、专用集成电路、可编程逻辑控制器和嵌入微控制器等的形式来实现相同功能。因此这种控制器可以被认为是一种硬件部件,而对其内包括的用于实现各种功能的装置也可以视为硬件部件内的结构。或者甚至,可以将用于实现各种功能的装置视为既可以是实现方法的软件模块又可以是硬件部件内的结构。
上述实施例阐明的系统、装置、模块或单元,具体可以由计算机芯片或实体实现,或者由具有某种功能的产品来实现。一种典型的实现设备为计算机。具体的,计算机例如可以为个人计算机、膝上型计算机、蜂窝电话、相机电话、智能电话、个人数字助理、媒体播放器、导航设备、电子邮件设备、游戏控制台、平板计算机、可穿戴设备或者这些设备中的任何设备的组合。
为了描述的方便,描述以上装置时以功能分为各种单元分别描述。当然,在实施本说明书时可以把各单元的功能在同一个或多个软件和/或硬件中实现。
本领域内的技术人员应明白,本说明书实施例可提供为方法、系统、或计算机程序产品。因此,本说明书实施例可采用完全硬件实施例、完全软件实施例、或结合软件和硬件方面的实施例的形式。而且,本说明书实施例可采用在一个或多个其中包含有计算机可用程序代码的计算机可用存储介质(包括但不限于磁盘存储器、CD-ROM、光学存储器等)上实施的计算机程序产品的形式。
本说明书是参照根据本说明书实施例的方法、设备(系统)、和计算机程序产品的流程图和/或方框图来描述的。应理解可由计算机程序指令实现流程图和/或方框图中的每一流程和/或方框、以及流程图和/或方框图中的流程和/或方框的结合。可提供这些计算机程序指令到通用计算机、专用计算机、嵌入式处理机或其他可编程数据处理设备的处理器以产生一个机器,使得通过计算机或其他可编程数据处理设备的处理器执行的指令产生用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的装置。
这些计算机程序指令也可存储在能引导计算机或其他可编程数据处理设备以特定方式工作的计算机可读存储器中,使得存储在该计算机可读存储器中的指令产生包括指令装置的制造品,该指令装置实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能。
这些计算机程序指令也可装载到计算机或其他可编程数据处理设备上,使得在计算机或其他可编程设备上执行一系列操作步骤以产生计算机实现的处理,从而在计算机或其他可编程设备上执行的指令提供用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的步骤。
在一个典型的配置中,计算设备包括一个或多个处理器(CPU)、输入/输出接口、网络接口和内存。
内存可能包括计算机可读介质中的非永久性存储器,随机存取存储器(RAM)和/或非易失性内存等形式,如只读存储器(ROM)或闪存(flash RAM)。内存是计算机可读介质的示例。
计算机可读介质包括永久性和非永久性、可移动和非可移动媒体可以由任何方法或技术来实现信息存储。信息可以是计算机可读指令、数据结构、程序的模块或其他数据。计算机的存储介质的例子包括,但不限于相变内存(PRAM)、静态随机存取存储器(SRAM)、动态随机存取存储器(DRAM)、其他类型的随机存取存储器(RAM)、只读存储器(ROM)、电可擦 除可编程只读存储器(EEPROM)、快闪记忆体或其他内存技术、只读光盘只读存储器(CD-ROM)、数字多功能光盘(DVD)或其他光学存储、磁盒式磁带,磁带磁磁盘存储或其他磁性存储设备或任何其他非传输介质,可用于存储可以被计算设备访问的信息。按照本文中的界定,计算机可读介质不包括暂存电脑可读媒体(transitory media),如调制的数据信号和载波。
还需要说明的是,术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、商品或者设备不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、商品或者设备所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括所述要素的过程、方法、商品或者设备中还存在另外的相同要素。
本说明书可以在由计算机执行的计算机可执行指令的一般上下文中描述,例如程序模块。一般地,程序模块包括执行特定任务或实现特定抽象数据类型的例程、程序、对象、组件、数据结构等等。也可以在分布式计算环境中实践本说明书,在这些分布式计算环境中,由通过通信网络而被连接的远程处理设备来执行任务。在分布式计算环境中,程序模块可以位于包括存储设备在内的本地和远程计算机存储介质中。
本说明书中的各个实施例均采用递进的方式描述,各个实施例之间相同相似的部分互相参见即可,每个实施例重点说明的都是与其他实施例的不同之处。尤其,对于系统实施例而言,由于其基本相似于方法实施例,所以描述的比较简单,相关之处参见方法实施例的部分说明即可。
以上所述仅为本说明书实施例而已,并不用于限制本申请。对于本领域技术人员来说,本申请可以有各种更改和变化。凡在本申请的精神和原理之内所作的任何修改、等同替换、改进等,均应包含在本申请的权利要求范围之内。

Claims (28)

  1. 一种基于集群的随机游走方法,包括:
    所述集群获取图数据包含的各节点的信息;
    根据所述各节点的信息,生成二维数组,所述二维数组的每行分别包括一个所述节点的相邻节点的标识;
    根据所述二维数组,生成随机序列,所述随机序列反映在所述图数据中的随机游走。
  2. 如权利要求1所述的方法,所述集群包括服务器集群和工作机集群;
    所述集群获取图数据包含的各节点的信息,具体包括:
    所述工作机集群从数据库读取图数据包含的各节点的相邻节点的标识,其中,每个工作机读取一部分节点的相邻节点的标识。
  3. 如权利要求2所述的方法,所述根据所述各节点的信息,生成二维数组,具体包括:
    各所述工作机分别根据自己读取标识的相邻节点及其对应节点的标识,生成非全量的二维数组;
    所述工作机集群将各所述非全量的二维数组向所述服务器集群同步;
    所述服务器集群根据各所述非全量的二维数组,得到全量的二维数组。
  4. 如权利要求3所述的方法,所述根据所述二维数组,生成随机序列前,所述方法还包括:
    所述服务器集群将所述全量的二维数组向各所述工作机同步,以便各所述工作机根据所述全量的二维数组,生成随机序列。
  5. 如权利要求4所述的方法,所述根据所述二维数组,生成随机序列,具体包括:
    根据节点标识顺序,对所述全量的二维数组中的各行进行排序;
    根据排序后的二维数组,生成随机序列。
  6. 如权利要求2所述的方法,所述根据所述二维数组,生成随机序列, 具体包括:
    所述工作机在所述各节点的标识中,随机确定一个标识,作为目标节点的标识;
    根据所述目标节点的标识,在所述二维数组中索引得到对应的行,所述对应的行包括所述目标节点的标识,以及所述目标节点的相邻节点的标识;
    确定所述对应的行包括的相邻节点的标识的数量;
    随机确定一个小于所述数量的非负整数,并获取所述对应的行包括的第所述非负整数个相邻节点的标识;
    通过将该第所述非负整数个相邻节点重新作为目标节点进行迭代计算,生成由依次得到的各目标节点的标识构成的随机序列。
  7. 如权利要求6所述的方法,所述节点总数量为N,第m个所述节点的标识为m,0≤m≤N-1,所述目标节点为第i个节点,所述对应的行为所述二维数组的第i行。
  8. 如权利要求6所述的方法,所述对应的行为一维数组,所述目标节点的第n个相邻节点的标识为该一维数组的第n个元素,n从0开始计数;
    所述非负整数记作j,所述获取所述对应的行包括的第所述非负整数个相邻节点的标识,具体包括:
    通过读取该一维数组的第j个元素,得到所述目标节点的第j个相邻节点的标识。
  9. 如权利要求8所述的方法,所述一维数组的元素总数量等于所述各节点中邻居节点最多的节点的邻居节点数量。
  10. 如权利要求6所述的方法,所述生成由依次得到的各目标节点的标识构成的随机序列,具体包括:
    当依次得到的各目标节点总数量达到预设的随机游走步数时,生成由所述依次得到的各目标节点的标识构成的随机序列。
  11. 如权利要求2所述的方法,所述生成随机序列,具体包括:
    各所述工作机分别生成随机序列,直至生成的随机序列总数量达到设定阈值。
  12. 如权利要求4所述的方法,所述方法还包括:
    所述工作机若本地已有的所述二维数组丢失,则重新从所述服务器集群获取。
  13. 一种随机游走方法,包括:
    获取根据图数据包含的各节点的信息生成的二维数组,所述二维数组的每行分别包括一个所述节点的相邻节点的标识;
    根据所述二维数组,生成随机序列,所述随机序列反映在所述图数据中的随机游走。
  14. 一种基于集群的随机游走装置,所述装置属于所述集群,包括:
    获取模块,获取图数据包含的各节点的信息;
    第一生成模块,根据所述各节点的信息,生成二维数组,所述二维数组的每行分别包括一个所述节点的相邻节点的标识;
    第二生成模块,根据所述二维数组,生成随机序列,所述随机序列反映在所述图数据中的随机游走。
  15. 如权利要求14所述的装置,所述集群包括服务器集群和工作机集群;
    所述获取模块获取图数据包含的各节点的信息,具体包括:
    所述工作机集群从数据库读取图数据包含的各节点的相邻节点的标识,其中,每个工作机读取一部分节点的相邻节点的标识。
  16. 如权利要求15所述的装置,所述第一生成模块根据所述各节点的信息,生成二维数组,具体包括:
    各所述工作机分别根据自己读取标识的相邻节点及其对应节点的标识,生成非全量的二维数组;
    所述工作机集群将各所述非全量的二维数组向所述服务器集群同步;
    所述服务器集群根据各所述非全量的二维数组,得到全量的二维数组。
  17. 如权利要求16所述的装置,所述第二生成模块根据所述二维数组,生成随机序列前,所述服务器集群将所述全量的二维数组向各所述工作机同步,以便各所述工作机根据所述全量的二维数组,生成随机序列。
  18. 如权利要求17所述的装置,所述第二生成模块根据所述二维数组,生成随机序列,具体包括:
    所述工作机根据节点标识顺序,对所述全量的二维数组中的各行进行排序;
    根据排序后的二维数组,生成随机序列。
  19. 如权利要求15所述的装置,所述第二生成模块根据所述二维数组,生成随机序列,具体包括:
    所述工作机在所述各节点的标识中,随机确定一个标识,作为目标节点的标识;
    根据所述目标节点的标识,在所述二维数组中索引得到对应的行,所述对应的行包括所述目标节点的标识,以及所述目标节点的相邻节点的标识;
    确定所述对应的行包括的相邻节点的标识的数量;
    随机确定一个小于所述数量的非负整数,并获取所述对应的行包括的第所述非负整数个相邻节点的标识;
    通过将该第所述非负整数个相邻节点重新作为目标节点进行迭代计算,生成由依次得到的各目标节点的标识构成的随机序列。
  20. 如权利要求19所述的装置,所述节点总数量为N,第m个所述节点的标识为m,0≤m≤N-1,所述目标节点为第i个节点,所述对应的行为所述二维数组的第i行。
  21. 如权利要求19所述的装置,所述对应的行为一维数组,所述目标节点的第n个相邻节点的标识为该一维数组的第n个元素,n从0开始计数;
    所述非负整数记作j,所述工作机获取所述对应的行包括的第所述非负整数个相邻节点的标识,具体包括:
    所述工作机通过读取该一维数组的第j个元素,得到所述目标节点的第j个相邻节点的标识。
  22. 如权利要求21所述的装置,所述一维数组的元素总数量等于所述各节点中邻居节点最多的节点的邻居节点数量。
  23. 如权利要求19所述的装置,所述工作机生成由依次得到的各目标节点的标识构成的随机序列,具体包括:
    所述工作机当依次得到的各目标节点总数量达到预设的随机游走步数时,生成由所述依次得到的各目标节点的标识构成的随机序列。
  24. 如权利要求15所述的装置,所述第二生成模块生成随机序列,具体包括:
    各所述工作机分别生成随机序列,直至生成的随机序列总数量达到设定阈值。
  25. 如权利要求17所述的装置,所述工作机若本地已有的所述二维数组丢失,则重新从所述服务器集群获取。
  26. 一种随机游走装置,包括:
    获取模块,获取根据图数据包含的各节点的信息生成的二维数组,所述二维数组的每行分别包括一个所述节点的相邻节点的标识;
    生成模块,根据所述二维数组,生成随机序列,所述随机序列反映在所述图数据中的随机游走。
  27. 一种基于集群的随机游走设备,所述设备属于所述集群,包括:
    至少一个处理器;以及,
    与所述至少一个处理器通信连接的存储器;其中,
    所述存储器存储有可被所述至少一个处理器执行的指令,所述指令被所述至少一个处理器执行,以使所述至少一个处理器能够:
    获取图数据包含的各节点的信息;
    根据所述各节点的信息,生成二维数组,所述二维数组的每行分别包括一个所述节点的相邻节点的标识;
    根据所述二维数组,生成随机序列,所述随机序列反映在所述图数据中的随机游走。
  28. 一种随机游走设备,包括:
    至少一个处理器;以及,
    与所述至少一个处理器通信连接的存储器;其中,
    所述存储器存储有可被所述至少一个处理器执行的指令,所述指令被所述至少一个处理器执行,以使所述至少一个处理器能够:
    获取根据图数据包含的各节点的信息生成的二维数组,所述二维数组的每行分别包括一个所述节点的相邻节点的标识;
    根据所述二维数组,生成随机序列,所述随机序列反映在所述图数据中的随机游走。
PCT/CN2018/107308 2017-11-17 2018-09-25 Random walk and cluster-based random walk method, apparatus and device WO2019095858A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
SG11202000460UA SG11202000460UA (en) 2017-11-17 2018-09-25 Random walk method, apparatus and device, and cluster-based random walk method, apparatus, and device
EP18878726.1A EP3640813B1 (en) 2017-11-17 2018-09-25 Cluster-based random walk method and apparatus
US16/805,079 US11074246B2 (en) 2017-11-17 2020-02-28 Cluster-based random walk processing

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201711144728.1A 2017-11-17 2017-11-17 Random walk and cluster-based random walk method, apparatus and device
CN201711144728.1 2017-11-17

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/805,079 Continuation US11074246B2 (en) 2017-11-17 2020-02-28 Cluster-based random walk processing

Publications (1)

Publication Number Publication Date
WO2019095858A1 (zh)

Family

ID=62157250

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/107308 WO2019095858A1 (zh) 2017-11-17 2018-09-25 随机游走、基于集群的随机游走方法、装置以及设备

Country Status (6)

Country Link
US (1) US11074246B2 (zh)
EP (1) EP3640813B1 (zh)
CN (1) CN108073687B (zh)
SG (1) SG11202000460UA (zh)
TW (1) TWI709049B (zh)
WO (1) WO2019095858A1 (zh)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108073687B (zh) 2017-11-17 2020-09-08 Alibaba Group Holding Limited Random walk and cluster-based random walk method, apparatus and device
US11334567B2 (en) * 2019-08-16 2022-05-17 Oracle International Corporation Efficient SQL-based graph random walk
CN112100489B (zh) * 2020-08-27 2022-07-15 Beijing Baidu Netcom Science and Technology Co., Ltd. Object recommendation method, apparatus and computer storage medium

Family Cites Families (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8081658B2 (en) * 2006-04-24 2011-12-20 Interdigital Technology Corporation Method and signaling procedure for transmission opportunity usage in a wireless mesh network
US7672919B2 (en) * 2006-08-02 2010-03-02 Unisys Corporation Determination of graph connectivity metrics using bit-vectors
US20090043797A1 (en) * 2007-07-27 2009-02-12 Sparkip, Inc. System And Methods For Clustering Large Database of Documents
US7877385B2 (en) * 2007-09-21 2011-01-25 Microsoft Corporation Information retrieval using query-document pair information
US9092483B2 (en) * 2010-10-19 2015-07-28 Microsoft Technology Licensing, Llc User query reformulation using random walks
US20130231862A1 (en) * 2011-06-03 2013-09-05 Microsoft Corporation Customizable route planning
CN103309818B (zh) * 2012-03-09 2015-07-29 Tencent Technology (Shenzhen) Company Limited Method and apparatus for storing data
EP3101392B1 (en) * 2013-03-15 2021-12-15 Apple Inc. Mapping application with turn-by-turn navigation mode for output to vehicle display
IN2013MU02217A (zh) * 2013-07-01 2015-06-12 Tata Consultancy Services Ltd
EP2963564A1 (en) * 2014-07-04 2016-01-06 Gottfried Wilhelm Leibniz Universität Hannover Method for determining the relevance of a tag
US9916187B2 (en) * 2014-10-27 2018-03-13 Oracle International Corporation Graph database system that dynamically compiles and executes custom graph analytic programs written in high-level, imperative programming language
US9852231B1 (en) * 2014-11-03 2017-12-26 Google Llc Scalable graph propagation for knowledge expansion
US9798818B2 (en) * 2015-09-22 2017-10-24 International Business Machines Corporation Analyzing concepts over time
US10025867B2 (en) * 2015-09-29 2018-07-17 Facebook, Inc. Cache efficiency by social graph data ordering
US20170161619A1 (en) * 2015-12-08 2017-06-08 International Business Machines Corporation Concept-Based Navigation
CN106127301B (zh) * 2016-01-16 2019-01-11 Shanghai University Random neural network hardware implementation apparatus
JP6757913B2 (ja) * 2016-02-26 2020-09-23 National Institute of Information and Communications Technology Image clustering system, image clustering method, image clustering program, and community structure detection system
CN107179940B (zh) * 2016-03-10 2020-06-19 Alibaba Group Holding Limited Task execution method and apparatus
US10089761B2 (en) * 2016-04-29 2018-10-02 Hewlett Packard Enterprise Development Lp Graph processing using a shared memory
CN106874080B (zh) * 2016-07-07 2020-05-12 Alibaba Group Holding Limited Data computation method and system based on a distributed server cluster
US20190065612A1 (en) * 2017-08-24 2019-02-28 Microsoft Technology Licensing, Llc Accuracy of job retrieval using a universal concept graph
US20190066054A1 (en) * 2017-08-24 2019-02-28 Linkedln Corporation Accuracy of member profile retrieval using a universal concept graph
CN110019975B (zh) * 2017-10-10 2020-10-16 Advanced New Technologies Co., Ltd. Random walk and cluster-based random walk method, apparatus and device
US20190114362A1 (en) * 2017-10-12 2019-04-18 Facebook, Inc. Searching Online Social Networks Using Entity-based Embeddings
US20190114373A1 (en) * 2017-10-13 2019-04-18 Facebook, Inc. Scalable candidate selection for recommendations

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7778945B2 (en) * 2007-06-26 2010-08-17 Microsoft Corporation Training random walks over absorbing graphs
CN105741175A (zh) * 2016-01-27 2016-07-06 University of Electronic Science and Technology of China Method for associating accounts in an online social network
CN106530097A (zh) * 2016-10-11 2017-03-22 Engineering University of the Chinese People's Armed Police Force Method for discovering key propagation nodes in a directed social network based on a random walk mechanism
CN107145977A (zh) * 2017-04-28 2017-09-08 University of Electronic Science and Technology of China Method for inferring structured attributes of online social network users
CN108021610A (zh) * 2017-11-02 2018-05-11 Alibaba Group Holding Limited Random walk and distributed-system-based random walk method, apparatus and device
CN108073687A (zh) * 2017-11-17 2018-05-25 Alibaba Group Holding Limited Random walk and cluster-based random walk method, apparatus and device

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111090783A (zh) * 2019-12-18 2020-05-01 Beijing Baidu Netcom Science and Technology Co., Ltd. Recommendation method, apparatus and system, graph-embedding walk method, and electronic device
CN111090783B (zh) 2019-12-18 2023-10-03 Beijing Baidu Netcom Science and Technology Co., Ltd. Recommendation method, apparatus and system, graph-embedding walk method, and electronic device
CN112347260A (zh) * 2020-11-24 2021-02-09 Shenzhen Huantai Technology Co., Ltd. Data processing method, apparatus and electronic device

Also Published As

Publication number Publication date
US20200201844A1 (en) 2020-06-25
EP3640813A4 (en) 2020-07-22
TWI709049B (zh) 2020-11-01
EP3640813A1 (en) 2020-04-22
TW201923631A (zh) 2019-06-16
EP3640813B1 (en) 2022-01-26
CN108073687A (zh) 2018-05-25
US11074246B2 (en) 2021-07-27
CN108073687B (zh) 2020-09-08
SG11202000460UA (en) 2020-02-27

Similar Documents

Publication Publication Date Title
WO2019095858A1 (zh) Random walk and cluster-based random walk method, apparatus and device
WO2018177235A1 (zh) Blockchain consensus method and apparatus
WO2018121319A1 (zh) Block data verification method and apparatus
WO2018177245A1 (zh) Blockchain-based data processing method and device
WO2019085614A1 (zh) Random walk and distributed-system-based random walk method, apparatus and device
WO2018177250A1 (zh) Blockchain-based data processing method and device
WO2019020094A1 (zh) Indicator anomaly detection method and apparatus, and electronic device
WO2019080615A1 (zh) Cluster-based word vector processing method, apparatus and device
TWI694342B (zh) Data caching method, apparatus and system
WO2019128527A1 (zh) Social content risk identification method, apparatus and device
CN113837635B (zh) Risk detection processing method, apparatus and device
US10776334B2 (en) Random walking and cluster-based random walking method, apparatus and device
CN103559247A (zh) Data service processing method and apparatus
WO2019072040A1 (zh) Random walk and cluster-based random walk method, apparatus and device
WO2019095836A1 (zh) Cluster-based word vector processing method, apparatus and device
CN110889424B (zh) Vector index building method and apparatus, and vector retrieval method and apparatus
CN110083602B (zh) Hive-table-based data storage and data processing method and apparatus
CN111125157B (zh) Query data processing method, apparatus, storage medium and processor
CN111008198A (zh) Service data acquisition method, apparatus, storage medium and electronic device
CN112463785B (zh) Data quality monitoring method, apparatus, electronic device and storage medium
Wang et al. The method of cloudizing storing unstructured LiDAR point cloud data by MongoDB
CN108121719B (zh) Method and apparatus for implementing data extract-transform-load (ETL)
CN104239576A (zh) Method and apparatus for finding all rows matching a column value in an HBase table

Legal Events

Date Code Title Description
ENP Entry into the national phase

Ref document number: 2018878726

Country of ref document: EP

Effective date: 20200116

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18878726

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE