WO2021042844A1 - Large-scale data clustering method, apparatus, computer device, and computer-readable storage medium - Google Patents

Large-scale data clustering method, apparatus, computer device, and computer-readable storage medium

Info

Publication number
WO2021042844A1
WO2021042844A1 (PCT/CN2020/098957)
Authority
WO
WIPO (PCT)
Prior art keywords
data
sample set
cluster centers
data sample
value
Prior art date
Application number
PCT/CN2020/098957
Other languages
English (en)
French (fr)
Inventor
陈善彪
尹浩
Original Assignee
平安科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司 (Ping An Technology (Shenzhen) Co., Ltd.)
Publication of WO2021042844A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/23Clustering techniques
    • G06F18/232Non-hierarchical techniques
    • G06F18/2321Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F18/23213Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering

Definitions

  • This application relates to the field of artificial intelligence technology, and in particular to a method, apparatus, computer device, and computer-readable storage medium for intelligently clustering large-scale data based on big-data input.
  • Clustering is a typical data classification method whose core task is to discover similar categories in large-scale data sets and partition the samples into multiple non-overlapping subsets.
  • The K-means clustering algorithm is one of the most widely used partition-based clustering methods.
  • It represents each class by the centroid of its samples and performs clustering by iteratively and dynamically adjusting the class centers.
  • The inventor realizes that the K-means algorithm depends heavily on the initial k centers: a poor choice of initial centers easily leads to a locally optimal solution, increases the number of iterations, and reduces execution efficiency. In addition, the K-means clustering process must compute the Euclidean distance between each data point and each class center, and computing that Euclidean distance requires computing the dot product of the data point and the class center. When massive data participate in clustering, a very large number of dot products must be computed, which is time-consuming and inefficient. Traditional clustering algorithms therefore cannot provide a good solution for large-scale data in terms of either system resources or real-time efficiency.
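  • Illustration (not part of the patent text): the dot-product cost described above can be made concrete with a small vectorized distance computation. The sketch below, in Python with NumPy (library choice and all names are our own assumptions), uses the expansion ||x − μ||² = ||x||² − 2·x·μ + ||μ||², in which the n×K matrix of dot products is exactly the bottleneck the background identifies.

```python
import numpy as np

def squared_distances(X, centers):
    """Squared Euclidean distances between n points and K centers.

    Uses ||x - mu||^2 = ||x||^2 - 2 x.mu + ||mu||^2; the dominant cost
    is the (n, K) matrix of dot products X @ centers.T, which is the
    term the background section identifies as expensive at scale.
    """
    x_sq = np.sum(X ** 2, axis=1)[:, None]        # (n, 1) squared norms
    c_sq = np.sum(centers ** 2, axis=1)[None, :]  # (1, K) squared norms
    cross = X @ centers.T                         # (n, K) dot products
    return np.maximum(x_sq - 2.0 * cross + c_sq, 0.0)  # clip tiny negatives
```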
  • In view of this, the embodiments of the present application provide a large-scale data clustering method, apparatus, computer device, and computer-readable storage medium.
  • Step A: A cluster-center-number calculation layer receives a data sample set input by a user, calculates the average silhouette coefficient for each candidate number of cluster centers K according to the data sample set, selects the K value with the largest average silhouette coefficient, randomly determines K cluster centers, and inputs the data sample set, the K value, and the K cluster centers into a cluster-center storage layer;
  • Step B: The cluster-center storage layer stores the K cluster centers and the data sample set in a database in a row-first (row-major) storage format;
  • Step C: A cluster-center update layer sequentially reads the K cluster centers and the data sample set from the database, calculates the loss value of the K cluster centers and the data sample set according to a squared-error minimization algorithm, and determines the magnitude relationship between the loss value and a preset threshold;
  • Step D: When the loss value is greater than the preset threshold, the distances between the data sample set and the K cluster centers are calculated, the K cluster centers are re-determined according to those distances, and the method returns to Step B; when the loss value is less than the threshold, the K cluster centers are output to complete the clustering result.
  • An embodiment of the present application also provides a computer device that includes a memory and a processor. The memory stores a large-scale data clustering program that can run on the processor, and when the large-scale data clustering program is executed by the processor, the following steps are implemented:
  • Step A: A cluster-center-number calculation layer receives a data sample set input by a user, calculates the average silhouette coefficient for each candidate number of cluster centers K according to the data sample set, selects the K value with the largest average silhouette coefficient, randomly determines K cluster centers, and inputs the data sample set, the K value, and the K cluster centers into a cluster-center storage layer;
  • Step B: The cluster-center storage layer stores the K cluster centers and the data sample set in a database in a row-first storage format;
  • Step C: A cluster-center update layer sequentially reads the K cluster centers and the data sample set from the database, calculates the loss value of the K cluster centers and the data sample set according to a squared-error minimization algorithm, and determines the magnitude relationship between the loss value and a preset threshold;
  • Step D: When the loss value is greater than the preset threshold, the distances between the data sample set and the K cluster centers are calculated, the K cluster centers are re-determined according to those distances, and the method returns to Step B; when the loss value is less than the threshold, the K cluster centers are output to complete the clustering result.
  • An embodiment of the present application also provides a large-scale data clustering device, wherein the device includes:
  • a data receiving module, configured to receive the data sample set input by the user, calculate the average silhouette coefficient for each candidate number of cluster centers K according to the data sample set, select the K value with the largest average silhouette coefficient, and input the data sample set and the K value into a data storage module, which randomly determines K cluster centers according to the K value;
  • a data storage module, configured to store the K cluster centers and the data sample set in a database in a row-first storage format;
  • a clustering training module, configured to sequentially read the K cluster centers and the data sample set from the database, calculate the loss value of the K cluster centers and the data sample set according to the squared-error minimization algorithm and the distances between the data sample set and the K cluster centers, and determine the magnitude relationship between the loss value and a preset threshold;
  • a clustering result output module, configured to calculate the distances between the data sample set and the K cluster centers when the loss value is greater than the preset threshold, re-determine the K cluster centers, and return to the data storage module; and, when the loss value is less than the threshold, to output the K cluster centers to complete the clustering result.
  • An embodiment of the present application also provides a computer-readable storage medium that stores a large-scale data clustering program, and the large-scale data clustering program can be executed by one or more processors to implement the following steps:
  • Step A: A cluster-center-number calculation layer receives a data sample set input by a user, calculates the average silhouette coefficient for each candidate number of cluster centers K according to the data sample set, selects the K value with the largest average silhouette coefficient, randomly determines K cluster centers, and inputs the data sample set, the K value, and the K cluster centers into a cluster-center storage layer;
  • Step B: The cluster-center storage layer stores the K cluster centers and the data sample set in a database in a row-first storage format;
  • Step C: A cluster-center update layer sequentially reads the K cluster centers and the data sample set from the database, calculates the loss value of the K cluster centers and the data sample set according to a squared-error minimization algorithm, and determines the magnitude relationship between the loss value and a preset threshold;
  • Step D: When the loss value is greater than the preset threshold, the distances between the data sample set and the K cluster centers are calculated, the K cluster centers are re-determined according to those distances, and the method returns to Step B; when the loss value is less than the threshold, the K cluster centers are output to complete the clustering result.
  • FIG. 1 is a schematic flowchart of a large-scale data clustering method provided by an embodiment of this application;
  • FIG. 2 is a schematic diagram of the internal structure of a computer device provided by an embodiment of the application.
  • FIG. 3 is a schematic diagram of modules of a large-scale data clustering device provided by an embodiment of the application.
  • This application provides a large-scale data clustering method.
  • Referring to FIG. 1, it is a schematic flowchart of a large-scale data clustering method provided by an embodiment of this application.
  • the method can be executed by a device, and the device can be implemented by software and/or hardware.
  • the large-scale data clustering method includes:
  • S1: The K-value calculation layer receives the data sample set input by the user, calculates the average silhouette coefficient according to the data sample set, selects the K value with the largest average silhouette coefficient, randomly determines K cluster centers, and inputs the data sample set, the K value, and the K cluster centers into the cluster-center calculation layer.
  • In a preferred embodiment of the present application, the data sample set includes the coordinate positions of its samples. For example, if the data sample set lies in a two-dimensional coordinate plane, a coordinate position can be represented by (x, y); if the data sample set lies in three-dimensional coordinates, a coordinate position can be represented by (x, y, z).
  • In a preferred embodiment of the present application, the K value is initialized and successively replaced by values within the range [K_min, K_max] determined from the data sample set, and K cluster centers are randomly determined according to the K value. For each datum x_i in the data sample set, the cohesion a(x_i) between x_i and all other data belonging to the same cluster center is calculated; at the same time, all data of the other cluster centers are traversed, the separation between x_i and the data of each other cluster center is calculated, and the values are sorted to obtain the minimum separation b(x_i);
  • The average silhouette coefficient s(x_i) is then calculated from the cohesion a(x_i) and the minimum separation b(x_i):

    $$s(x_i) = \frac{b(x_i) - a(x_i)}{\max\{a(x_i),\, b(x_i)\}}$$

    The values of the average silhouette coefficient s(x_i) are sorted, and the K value corresponding to the largest value of s(x_i) is selected (a sketch of this selection follows).
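  • For illustration only (the patent specifies no code; every function and variable name below is our own assumption), the K-selection procedure just described — compute a(x_i), b(x_i), and s(x_i) for each candidate K in [K_min, K_max], then keep the K with the largest average silhouette — can be sketched in Python as follows. The sketch assumes at least two clusters and non-empty clusters.

```python
import numpy as np

def average_silhouette(X, labels):
    """Mean silhouette s(x_i) = (b - a) / max(a, b) over all points."""
    dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)  # (n, n)
    scores = []
    for i in range(len(X)):
        same = labels == labels[i]
        same[i] = False
        # Cohesion a(x_i): mean distance to the other points of its cluster.
        a = dists[i, same].mean() if same.any() else 0.0
        # Minimum separation b(x_i): smallest mean distance to another cluster.
        b = min(dists[i, labels == c].mean()
                for c in np.unique(labels) if c != labels[i])
        scores.append((b - a) / max(a, b))
    return float(np.mean(scores))

def select_k(X, k_min, k_max, seed=0):
    """Try each K in [k_min, k_max]; keep the K with the largest score."""
    rng = np.random.default_rng(seed)
    best_k, best_score = k_min, -np.inf
    for k in range(k_min, k_max + 1):
        centers = X[rng.choice(len(X), size=k, replace=False)]  # random centers
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=-1)
        labels = np.argmin(d, axis=1)
        score = average_silhouette(X, labels)
        if score > best_score:
            best_k, best_score = k, score
    return best_k
```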
  • S2: The cluster-center storage layer stores the K cluster centers and the data sample set in a database in a row-first storage format.
  • In a preferred embodiment of the present application, the row-first storage format transposes the data of the K cluster centers and the data sample set and defines the storage rules by row, so the entire data store is automatically indexed.
  • Because the transposed data are stored row by row, when the subsequent model training layer reads the K cluster centers and the data sample set, only a few fields need to be indexed to retrieve the data, which shortens the indexing process (a sketch follows).
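  • The patent does not name a concrete database or schema; purely as an illustration of the row-first idea, the sketch below transposes the arrays into row-major (C-contiguous) form and writes them to SQLite one row per record, so a later read touches only the table and row fields. SQLite, the table layout, and all names are our own assumptions.

```python
import sqlite3
import numpy as np

def store_row_first(db_path, centers, samples):
    """Store transposed, row-major copies of the arrays, one row per record.

    The rowid of each record serves as the index, so reading back one
    row of the transposed data needs only a couple of indexed fields.
    """
    con = sqlite3.connect(db_path)
    for table, array in (("centers", centers), ("samples", samples)):
        rows = np.ascontiguousarray(array.T)  # transpose, row-major layout
        con.execute(f"CREATE TABLE IF NOT EXISTS {table} (dim INTEGER, data BLOB)")
        con.executemany(
            f"INSERT INTO {table} VALUES (?, ?)",
            [(d, row.tobytes()) for d, row in enumerate(rows)],
        )
    con.commit()
    con.close()
```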
  • S3: The cluster-center update layer sequentially reads the K cluster centers and the data sample set from the database, calculates the loss value of the K cluster centers and the n data samples according to the squared-error minimization algorithm, and determines the magnitude relationship between the loss value and the preset threshold.
  • In a preferred embodiment of the present application, the centroid vectors of the K cluster centers and the coordinate positions of the n data samples are input into the squared-error minimization algorithm, which computes the loss value E:

    $$E = \sum_{i=1}^{K} \sum_{x_t \in c_i} \lVert x_t - \mu_i \rVert^2$$

  • where x_t is a datum among the n data samples, x_t must belong to one of the K cluster-center sample sets c_i, and the threshold is generally set to 0.01 (a sketch of this computation follows).
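  • As a concrete (non-authoritative) reading of the loss above, the sketch below computes E for a given assignment of points to centers; all names are our own.

```python
import numpy as np

def loss_value(X, labels, centers):
    """E = sum over clusters c_i of ||x_t - mu_i||^2 for every x_t in c_i."""
    diffs = X - centers[labels]     # each point minus its own centroid
    return float(np.sum(diffs ** 2))
```

    In practice the raw loss of a large sample set rarely drops below a fixed constant such as 0.01, so an implementation may instead compare the change in E between iterations against the threshold; the patent text itself compares the loss directly.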
  • In a preferred embodiment of the present application, when the loss value is greater than the threshold, the distance between each datum x_t in the n data sample sets and the centroid vector μ_i of each of the K cluster centers is calculated according to a distance formula:

    $$d_{ti} = \lVert x_t - \mu_i \rVert^2$$

  • where d_ti represents the distance between the datum x_t of the t-th data sample set and the centroid vector μ_i of the i-th cluster center.
  • The preferred embodiment of the present application selects the sample set c_i corresponding to the centroid vector with the smallest distance d_ti and adds the datum x_t of the data sample set to that sample set c_i, until μ_1 through μ_K have all been computed and the K cluster centers have been re-determined (see the sketch below).
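  • The reassignment-and-update step can be sketched as follows (again our own illustration): each x_t joins the sample set c_i of the nearest centroid by d_ti, and each μ_i is then recomputed as the mean of its cluster, the standard K-means centroid update consistent with the centroid-vector definition in claim 3.

```python
import numpy as np

def reassign_and_update(X, centers):
    """One reassignment pass: nearest-centroid assignment, then new centroids."""
    d = np.sum((X[:, None, :] - centers[None, :, :]) ** 2, axis=-1)  # d_ti
    labels = np.argmin(d, axis=1)          # smallest d_ti picks c_i
    new_centers = np.array([
        X[labels == i].mean(axis=0) if np.any(labels == i) else centers[i]
        for i in range(len(centers))       # mu_1 .. mu_K recomputed
    ])
    return new_centers, labels
```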
  • This application also provides a computer device.
  • Referring to FIG. 2, it is a schematic diagram of the internal structure of a computer device provided by an embodiment of this application.
  • In this embodiment, the computer device 1 may be a PC (personal computer), a terminal device such as a smartphone, tablet computer, or portable computer, or a server.
  • the computer device 1 at least includes a memory 11, a processor 12, a communication bus 13, and a network interface 14.
  • the memory 11 includes at least one type of readable storage medium, and the readable storage medium includes flash memory, hard disk, multimedia card, card-type memory (for example, SD or DX memory, etc.), magnetic memory, magnetic disk, optical disk, and the like.
  • the memory 11 may be an internal storage unit of the computer device 1 in some embodiments, such as a hard disk of the computer device 1.
  • The memory 11 may also be an external storage device of the computer device 1, for example a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash Card equipped on the computer device 1.
  • the memory 11 may also include both an internal storage unit of the computer device 1 and an external storage device.
  • the memory 11 can be used not only to store application software and various data installed in the computer device 1, such as the code of the large-scale data clustering program 01, etc., but also to temporarily store data that has been output or will be output.
  • The processor 12 may, in some embodiments, be a central processing unit (CPU), controller, microcontroller, microprocessor, or other data processing chip, used to run program code stored in the memory 11 or to process data, for example to execute the large-scale data clustering program 01.
  • the communication bus 13 is used to realize the connection and communication between these components.
  • the network interface 14 may optionally include a standard wired interface and a wireless interface (such as a WI-FI interface), and is usually used to establish a communication connection between the computer device 1 and other electronic devices.
  • the computer device 1 may also include a user interface.
  • the user interface may include a display (Display) and an input unit such as a keyboard (Keyboard).
  • the optional user interface may also include a standard wired interface and a wireless interface.
  • the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode, organic light-emitting diode) touch device, etc.
  • the display can also be appropriately called a display screen or a display unit, which is used to display the information processed in the computer device 1 and to display a visualized user interface.
  • FIG. 2 only shows the computer device 1 with components 11-14 and the large-scale data clustering program 01. Those skilled in the art will understand that the structure shown in FIG. 2 does not limit the computer device 1, which may include fewer or more components than shown, combine certain components, or arrange the components differently.
  • In the embodiment of the computer device 1 shown in FIG. 2, a large-scale data clustering program 01 is stored in the memory 11; when the processor 12 executes the large-scale data clustering program 01 stored in the memory 11, the following steps are implemented:
  • Step 1: The K-value calculation layer receives the data sample set input by the user, calculates the average silhouette coefficient according to the data sample set, selects the K value with the largest average silhouette coefficient, randomly determines K cluster centers, and inputs the data sample set, the K value, and the K cluster centers into the cluster-center calculation layer.
  • In a preferred embodiment of the present application, the data sample set includes the coordinate positions of its samples. For example, if the data sample set lies in a two-dimensional coordinate plane, a coordinate position can be represented by (x, y); if the data sample set lies in three-dimensional coordinates, a coordinate position can be represented by (x, y, z).
  • In a preferred embodiment of the present application, the K value is initialized and successively replaced by values within the range [K_min, K_max] determined from the data sample set, and K cluster centers are randomly determined according to the K value. For each datum x_i in the data sample set, the cohesion a(x_i) between x_i and all other data belonging to the same cluster center is calculated; at the same time, all data of the other cluster centers are traversed, the separation between x_i and the data of each other cluster center is calculated, and the values are sorted to obtain the minimum separation b(x_i);
  • The average silhouette coefficient s(x_i) is then calculated from the cohesion a(x_i) and the minimum separation b(x_i):

    $$s(x_i) = \frac{b(x_i) - a(x_i)}{\max\{a(x_i),\, b(x_i)\}}$$

    The values of the average silhouette coefficient s(x_i) are sorted, and the K value corresponding to the largest value of s(x_i) is selected.
  • Step 2: The cluster-center storage layer stores the K cluster centers and the data sample set in a database in a row-first storage format.
  • In a preferred embodiment of the present application, the row-first storage format transposes the data of the K cluster centers and the data sample set and defines the storage rules by row, so the entire data store is automatically indexed.
  • Because the transposed data are stored row by row, when the subsequent model training layer reads the K cluster centers and the data sample set, only a few fields need to be indexed to retrieve the data, which shortens the indexing process.
  • Step 3: The cluster-center update layer sequentially reads the K cluster centers and the data sample set from the database, calculates the loss value of the K cluster centers and the n data samples according to the squared-error minimization algorithm, and determines the magnitude relationship between the loss value and the preset threshold.
  • In a preferred embodiment of the present application, the centroid vectors of the K cluster centers and the coordinate positions of the n data samples are input into the squared-error minimization algorithm, which computes the loss value E:

    $$E = \sum_{i=1}^{K} \sum_{x_t \in c_i} \lVert x_t - \mu_i \rVert^2$$

  • where x_t is a datum among the n data samples, x_t must belong to one of the K cluster-center sample sets c_i, and the threshold is generally set to 0.01.
  • Step 4: When the loss value is greater than the preset threshold, calculate the distances between the data sample set and the K cluster centers, re-determine the K cluster centers according to those distances, and return to Step 2.
  • In a preferred embodiment of the present application, when the loss value is greater than the threshold, the distance between each datum x_t in the n data sample sets and the centroid vector μ_i of each of the K cluster centers is calculated according to a distance formula:

    $$d_{ti} = \lVert x_t - \mu_i \rVert^2$$

  • where d_ti represents the distance between the datum x_t of the t-th data sample set and the centroid vector μ_i of the i-th cluster center.
  • The preferred embodiment of the present application selects the sample set c_i corresponding to the centroid vector with the smallest distance d_ti and adds the datum x_t of the data sample set to that sample set c_i, until μ_1 through μ_K have all been computed and the K cluster centers have been re-determined.
  • Step 5: When the loss value is less than the threshold, output the K cluster centers to complete the clustering result. A consolidated sketch of the full loop follows.
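  • Pulling Steps 1-5 into one loop, a minimal end-to-end sketch looks as follows. This is our own consolidation: the layer/module boundaries and the database round-trip of Step 2 are collapsed into an in-memory loop, and convergence is tested on the change in E rather than E itself (see the note after the loss sketch above).

```python
import numpy as np

def cluster(X, K, threshold=0.01, max_iter=100, seed=0):
    """Steps 1-5 in one loop: random initial centers, loss E, reassignment."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=K, replace=False)]  # Step 1 (K given)
    prev_loss = np.inf
    labels = np.zeros(len(X), dtype=int)
    for _ in range(max_iter):
        d = np.sum((X[:, None, :] - centers[None, :, :]) ** 2, axis=-1)
        labels = np.argmin(d, axis=1)                       # d_ti assignment
        loss = float(np.sum((X - centers[labels]) ** 2))    # Step 3: E
        if prev_loss - loss < threshold:                    # Step 5 exit test
            break
        prev_loss = loss
        centers = np.array([                                # Step 4 update
            X[labels == i].mean(axis=0) if np.any(labels == i) else centers[i]
            for i in range(K)
        ])
    return centers, labels
```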
  • The large-scale data clustering device includes a data receiving module 10, a data storage module 20, a clustering training module 30, and a clustering result output module 40. Exemplarily:
  • The data receiving module 10 is configured to receive the data sample set input by the user, calculate the average silhouette coefficient for each candidate number of cluster centers K according to the data sample set, select the K value with the largest average silhouette coefficient, and input the data sample set and the K value into the data storage module 20, which randomly determines K cluster centers according to the K value.
  • the data storage module 20 is configured to store the K cluster centers and the data sample set in a database in a row-first storage format.
  • The clustering training module 30 is configured to sequentially read the K cluster centers and the data sample set from the database, calculate the loss value of the K cluster centers and the data sample set according to the squared-error minimization algorithm and the distances between the data sample set and the K cluster centers, and determine the magnitude relationship between the loss value and a preset threshold.
  • The clustering result output module 40 is configured to: when the loss value is greater than the preset threshold, calculate the distances between the data sample set and the K cluster centers, re-determine the K cluster centers, and return to the data storage module 20; and, when the loss value is less than the threshold, output the K cluster centers to complete the clustering result.
  • In addition, the embodiment of the present application also proposes a computer-readable storage medium, which may be non-volatile or volatile. The computer-readable storage medium stores a large-scale data clustering program, and the large-scale data clustering program can be executed by one or more processors to implement the following operations:
  • Receive the data sample set input by the user, calculate the average silhouette coefficient for each candidate number of cluster centers K according to the data sample set, select the K value with the largest average silhouette coefficient, and input the data sample set and the K value into a data storage module, which randomly determines K cluster centers according to the K value.
  • the K cluster centers and the data sample set are stored in a database in a row-first storage format.
  • Sequentially read the K cluster centers and the data sample set from the database, calculate the loss value of the K cluster centers and the data sample set according to the squared-error minimization algorithm, and determine the magnitude relationship between the loss value and a preset threshold.
  • When the loss value is greater than the preset threshold, calculate the distances between the data sample set and the K cluster centers, re-determine the K cluster centers according to those distances, and return to the data storage module; when the loss value is less than the threshold, output the K cluster centers to complete the clustering result.
  • The specific implementation of the computer-readable storage medium of the present application is substantially the same as the foregoing embodiments of the large-scale data clustering method and is not repeated here.

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

An artificial intelligence technique is disclosed: a large-scale data clustering method, apparatus, computer device, and computer-readable storage medium that can achieve accurate large-scale data clustering. The method comprises: receiving a data sample set input by a user, calculating the average silhouette coefficient for each candidate number of cluster centers K according to the data sample set, selecting the K value with the largest average silhouette coefficient, and randomly determining K cluster centers (S1); storing the K cluster centers and the data sample set in a database in a row-first storage format (S2); calculating the loss value of the K cluster centers and the data sample set according to a squared-error minimization algorithm, and determining the magnitude relationship between the loss value and a preset threshold (S3); and, when the loss value is less than the threshold, outputting the K cluster centers to complete the clustering result (S5).

Description

Large-scale data clustering method, apparatus, computer device, and computer-readable storage medium
This application claims priority to the Chinese patent application filed with the China Patent Office on September 6, 2019, with application number 201910846891.5 and the invention title "Large-scale data clustering method, apparatus, and computer-readable storage medium", the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the field of artificial intelligence technology, and in particular to a method, apparatus, computer device, and computer-readable storage medium for intelligently clustering large-scale data based on big-data input.
Background
Clustering is a typical data classification method whose core task is to discover similar categories in large-scale data sets and partition the samples into multiple non-overlapping subsets. The K-means clustering algorithm is one of the most widely used partition-based clustering methods: it represents each class by the centroid of its samples and performs clustering by iteratively and dynamically adjusting the class centers. However, the inventor realizes that the K-means algorithm depends heavily on the initial k centers; a poor choice of initial centers easily leads to a locally optimal solution, increases the number of iterations, and reduces execution efficiency. In addition, the K-means clustering process must compute the Euclidean distance between each data point and each class center, and computing that Euclidean distance requires computing the dot product of the data point and the class center. When massive data participate in clustering, a very large number of dot products must be computed, which is time-consuming and inefficient. Traditional clustering algorithms therefore cannot provide a good solution for large-scale data in terms of either system resources or real-time efficiency.
Summary
The embodiments of the present application provide a large-scale data clustering method, apparatus, computer device, and computer-readable storage medium.
A large-scale data clustering method provided by an embodiment of the present application includes:
Step A: a cluster-center-number calculation layer receives a data sample set input by a user, calculates the average silhouette coefficient for each candidate number of cluster centers K according to the data sample set, selects the K value with the largest average silhouette coefficient, randomly determines K cluster centers, and inputs the data sample set, the K value, and the K cluster centers into a cluster-center storage layer;
Step B: the cluster-center storage layer stores the K cluster centers and the data sample set in a database in a row-first storage format;
Step C: a cluster-center update layer sequentially reads the K cluster centers and the data sample set from the database, calculates the loss value of the K cluster centers and the data sample set according to a squared-error minimization algorithm, and determines the magnitude relationship between the loss value and a preset threshold;
Step D: when the loss value is greater than the preset threshold, the distances between the data sample set and the K cluster centers are calculated, the K cluster centers are re-determined according to those distances, and the method returns to Step B; when the loss value is less than the threshold, the K cluster centers are output to complete the clustering result.
An embodiment of the present application also provides a computer device, which includes a memory and a processor. The memory stores a large-scale data clustering program that can run on the processor, and when the large-scale data clustering program is executed by the processor, the following steps are implemented:
Step A: a cluster-center-number calculation layer receives a data sample set input by a user, calculates the average silhouette coefficient for each candidate number of cluster centers K according to the data sample set, selects the K value with the largest average silhouette coefficient, randomly determines K cluster centers, and inputs the data sample set, the K value, and the K cluster centers into a cluster-center storage layer;
Step B: the cluster-center storage layer stores the K cluster centers and the data sample set in a database in a row-first storage format;
Step C: a cluster-center update layer sequentially reads the K cluster centers and the data sample set from the database, calculates the loss value of the K cluster centers and the data sample set according to a squared-error minimization algorithm, and determines the magnitude relationship between the loss value and a preset threshold;
Step D: when the loss value is greater than the preset threshold, the distances between the data sample set and the K cluster centers are calculated, the K cluster centers are re-determined according to those distances, and the method returns to Step B; when the loss value is less than the threshold, the K cluster centers are output to complete the clustering result.
An embodiment of the present application also provides a large-scale data clustering apparatus, wherein the apparatus includes:
a data receiving module, configured to receive a data sample set input by a user, calculate the average silhouette coefficient for each candidate number of cluster centers K according to the data sample set, select the K value with the largest average silhouette coefficient, and input the data sample set and the K value into a data storage module, which randomly determines K cluster centers according to the K value;
a data storage module, configured to store the K cluster centers and the data sample set in a database in a row-first storage format;
a clustering training module, configured to sequentially read the K cluster centers and the data sample set from the database, calculate the loss value of the K cluster centers and the data sample set according to the squared-error minimization algorithm and the distances between the data sample set and the K cluster centers, and determine the magnitude relationship between the loss value and a preset threshold; and
a clustering result output module, configured to calculate the distances between the data sample set and the K cluster centers when the loss value is greater than the preset threshold, re-determine the K cluster centers, and return to the data storage module, and to output the K cluster centers and complete the clustering result when the loss value is less than the threshold.
An embodiment of the present application also provides a computer-readable storage medium storing a large-scale data clustering program, and the large-scale data clustering program can be executed by one or more processors to implement the following steps:
Step A: a cluster-center-number calculation layer receives a data sample set input by a user, calculates the average silhouette coefficient for each candidate number of cluster centers K according to the data sample set, selects the K value with the largest average silhouette coefficient, randomly determines K cluster centers, and inputs the data sample set, the K value, and the K cluster centers into a cluster-center storage layer;
Step B: the cluster-center storage layer stores the K cluster centers and the data sample set in a database in a row-first storage format;
Step C: a cluster-center update layer sequentially reads the K cluster centers and the data sample set from the database, calculates the loss value of the K cluster centers and the data sample set according to a squared-error minimization algorithm, and determines the magnitude relationship between the loss value and a preset threshold;
Step D: when the loss value is greater than the preset threshold, the distances between the data sample set and the K cluster centers are calculated, the K cluster centers are re-determined according to those distances, and the method returns to Step B; when the loss value is less than the threshold, the K cluster centers are output to complete the clustering result.
Brief Description of the Drawings
FIG. 1 is a schematic flowchart of a large-scale data clustering method provided by an embodiment of this application;
FIG. 2 is a schematic diagram of the internal structure of a computer device provided by an embodiment of this application;
FIG. 3 is a schematic diagram of the modules of a large-scale data clustering apparatus provided by an embodiment of this application.
The functional features and advantages of this application will be further described with reference to the accompanying drawings in conjunction with the embodiments.
Detailed Description
It should be understood that the specific embodiments described here are only intended to explain this application and are not intended to limit it.
This application provides a large-scale data clustering method. Referring to FIG. 1, it is a schematic flowchart of the large-scale data clustering method provided by an embodiment of this application. The method can be executed by an apparatus, and the apparatus can be implemented by software and/or hardware.
In this embodiment, the large-scale data clustering method includes:
S1: The K-value calculation layer receives the data sample set input by the user, calculates the average silhouette coefficient according to the data sample set, selects the K value with the largest average silhouette coefficient, randomly determines K cluster centers, and inputs the data sample set, the K value, and the K cluster centers into the cluster-center calculation layer.
In a preferred embodiment of this application, the data sample set includes the coordinate positions of its samples. For example, if the data sample set lies in a two-dimensional coordinate plane, a coordinate position can be represented by (x, y); if the data sample set lies in three-dimensional coordinates, a coordinate position can be represented by (x, y, z).
In a preferred embodiment of this application, the K value is initialized and successively replaced by values within the range [K_min, K_max] determined from the data sample set, and K cluster centers are randomly determined according to the K value. For each datum x_i in the data sample set, the cohesion a(x_i) between x_i and all other data belonging to the same cluster center is calculated; at the same time, all data of the other cluster centers are traversed, the separation between x_i and the data of each other cluster center is calculated, and the values are sorted to obtain the minimum separation b(x_i).
In a preferred embodiment of this application, the average silhouette coefficient s(x_i) is calculated from the cohesion a(x_i) and the minimum separation b(x_i):

$$s(x_i) = \frac{b(x_i) - a(x_i)}{\max\{a(x_i),\, b(x_i)\}}$$

The values of the average silhouette coefficient s(x_i) are sorted, and the K value corresponding to the largest value of s(x_i) is selected.
S2: The cluster-center storage layer stores the K cluster centers and the data sample set in a database in a row-first storage format.
In a preferred embodiment of this application, the row-first storage format transposes the data of the K cluster centers and the data sample set and defines the storage rules by row, so the entire data store is automatically indexed. Because the transposed data are stored row by row, when the subsequent model training layer reads the K cluster centers and the data sample set, only a few fields need to be indexed to retrieve the data, which shortens the indexing process.
S3: The cluster-center update layer sequentially reads the K cluster centers and the data sample set from the database, calculates the loss value of the K cluster centers and the n data samples according to the squared-error minimization algorithm, and determines the magnitude relationship between the loss value and the preset threshold.
In a preferred embodiment of this application, the centroid vectors of the K cluster centers and the coordinate positions of the n data samples are input into the squared-error minimization algorithm, which computes the loss value E.
The squared-error minimization algorithm of the preferred embodiment of this application is:

$$E = \sum_{i=1}^{K} \sum_{x_t \in c_i} \lVert x_t - \mu_i \rVert^2$$

where x_t is a datum among the n data samples and must belong to one of the K cluster-center sample sets; the threshold is generally set to 0.01.
S4: When the loss value is greater than the preset threshold, the distances between the data sample set and the K cluster centers are calculated, the K cluster centers are re-determined according to those distances, and the method returns to S2.
In a preferred embodiment of this application, when the loss value is greater than the threshold, the distance between each datum x_t in the n data samples and the centroid vector μ_i of each of the K cluster centers is calculated according to the distance formula:

$$d_{ti} = \lVert x_t - \mu_i \rVert^2$$

where d_ti represents the distance between the datum x_t of the t-th data sample and the centroid vector μ_i of the i-th cluster center.
The preferred embodiment of this application selects the sample set c_i corresponding to the centroid vector with the smallest distance d_ti and adds the datum x_t to that sample set c_i, until μ_1 through μ_K have all been computed and the K cluster centers have been re-determined.
S5: When the loss value is less than the threshold, the K cluster centers are output to complete the clustering result.
This application also provides a computer device. Referring to FIG. 2, it is a schematic diagram of the internal structure of the computer device provided by an embodiment of this application.
In this embodiment, the computer device 1 may be a PC (personal computer), a terminal device such as a smartphone, tablet computer, or portable computer, or a server. The computer device 1 at least includes a memory 11, a processor 12, a communication bus 13, and a network interface 14.
The memory 11 includes at least one type of readable storage medium, including flash memory, hard disk, multimedia card, card-type memory (for example, SD or DX memory), magnetic memory, magnetic disk, optical disc, and the like. In some embodiments, the memory 11 may be an internal storage unit of the computer device 1, for example its hard disk. In other embodiments, the memory 11 may also be an external storage device of the computer device 1, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash Card equipped on the computer device 1. Further, the memory 11 may include both an internal storage unit and an external storage device of the computer device 1. The memory 11 can be used not only to store application software installed on the computer device 1 and various kinds of data, such as the code of the large-scale data clustering program 01, but also to temporarily store data that has been output or will be output.
In some embodiments, the processor 12 may be a central processing unit (CPU), controller, microcontroller, microprocessor, or other data processing chip, used to run program code stored in the memory 11 or to process data, for example to execute the large-scale data clustering program 01.
The communication bus 13 is used to realize connection and communication among these components.
The network interface 14 may optionally include a standard wired interface and a wireless interface (such as a WI-FI interface), and is usually used to establish a communication connection between the computer device 1 and other electronic devices.
Optionally, the computer device 1 may also include a user interface, which may include a display (Display) and an input unit such as a keyboard (Keyboard); the optional user interface may also include a standard wired interface and a wireless interface. Optionally, in some embodiments, the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch device, or the like. The display may also be appropriately called a display screen or display unit, and is used to display the information processed in the computer device 1 and to display a visualized user interface.
FIG. 2 only shows the computer device 1 with components 11-14 and the large-scale data clustering program 01. Those skilled in the art will understand that the structure shown in FIG. 2 does not limit the computer device 1, which may include fewer or more components than shown, combine certain components, or arrange the components differently.
In the embodiment of the computer device 1 shown in FIG. 2, a large-scale data clustering program 01 is stored in the memory 11; when the processor 12 executes the large-scale data clustering program 01 stored in the memory 11, the following steps are implemented:
Step 1: The K-value calculation layer receives the data sample set input by the user, calculates the average silhouette coefficient according to the data sample set, selects the K value with the largest average silhouette coefficient, randomly determines K cluster centers, and inputs the data sample set, the K value, and the K cluster centers into the cluster-center calculation layer.
In a preferred embodiment of this application, the data sample set includes the coordinate positions of its samples. For example, if the data sample set lies in a two-dimensional coordinate plane, a coordinate position can be represented by (x, y); if the data sample set lies in three-dimensional coordinates, a coordinate position can be represented by (x, y, z).
In a preferred embodiment of this application, the K value is initialized and successively replaced by values within the range [K_min, K_max] determined from the data sample set, and K cluster centers are randomly determined according to the K value. For each datum x_i in the data sample set, the cohesion a(x_i) between x_i and all other data belonging to the same cluster center is calculated; at the same time, all data of the other cluster centers are traversed, the separation between x_i and the data of each other cluster center is calculated, and the values are sorted to obtain the minimum separation b(x_i).
In a preferred embodiment of this application, the average silhouette coefficient s(x_i) is calculated from the cohesion a(x_i) and the minimum separation b(x_i):

$$s(x_i) = \frac{b(x_i) - a(x_i)}{\max\{a(x_i),\, b(x_i)\}}$$

The values of the average silhouette coefficient s(x_i) are sorted, and the K value corresponding to the largest value of s(x_i) is selected.
Step 2: The cluster-center storage layer stores the K cluster centers and the data sample set in a database in a row-first storage format.
In a preferred embodiment of this application, the row-first storage format transposes the data of the K cluster centers and the data sample set and defines the storage rules by row, so the entire data store is automatically indexed. Because the transposed data are stored row by row, when the subsequent model training layer reads the K cluster centers and the data sample set, only a few fields need to be indexed to retrieve the data, which shortens the indexing process.
Step 3: The cluster-center update layer sequentially reads the K cluster centers and the data sample set from the database, calculates the loss value of the K cluster centers and the n data samples according to the squared-error minimization algorithm, and determines the magnitude relationship between the loss value and the preset threshold.
In a preferred embodiment of this application, the centroid vectors of the K cluster centers and the coordinate positions of the n data samples are input into the squared-error minimization algorithm, which computes the loss value E.
The squared-error minimization algorithm of the preferred embodiment of this application is:

$$E = \sum_{i=1}^{K} \sum_{x_t \in c_i} \lVert x_t - \mu_i \rVert^2$$

where x_t is a datum among the n data samples and must belong to one of the K cluster-center sample sets; the threshold is generally set to 0.01.
Step 4: When the loss value is greater than the preset threshold, the distances between the data sample set and the K cluster centers are calculated, the K cluster centers are re-determined according to those distances, and the method returns to Step 2.
In a preferred embodiment of this application, when the loss value is greater than the threshold, the distance between each datum x_t in the n data samples and the centroid vector μ_i of each of the K cluster centers is calculated according to the distance formula:

$$d_{ti} = \lVert x_t - \mu_i \rVert^2$$

where d_ti represents the distance between the datum x_t of the t-th data sample and the centroid vector μ_i of the i-th cluster center.
The preferred embodiment of this application selects the sample set c_i corresponding to the centroid vector with the smallest distance d_ti and adds the datum x_t to that sample set c_i, until μ_1 through μ_K have all been computed and the K cluster centers have been re-determined.
Step 5: When the loss value is less than the threshold, the K cluster centers are output to complete the clustering result.
For example, referring to FIG. 3, which is a schematic diagram of the modules of an embodiment of the large-scale data clustering apparatus of this application, in this embodiment the large-scale data clustering apparatus includes a data receiving module 10, a data storage module 20, a clustering training module 30, and a clustering result output module 40. Exemplarily:
The data receiving module 10 is configured to receive the data sample set input by the user, calculate the average silhouette coefficient for each candidate number of cluster centers K according to the data sample set, select the K value with the largest average silhouette coefficient, and input the data sample set and the K value into the data storage module 20, which randomly determines K cluster centers according to the K value.
The data storage module 20 is configured to store the K cluster centers and the data sample set in a database in a row-first storage format.
The clustering training module 30 is configured to sequentially read the K cluster centers and the data sample set from the database, calculate the loss value of the K cluster centers and the data sample set according to the squared-error minimization algorithm and the distances between the data sample set and the K cluster centers, and determine the magnitude relationship between the loss value and a preset threshold.
The clustering result output module 40 is configured to: when the loss value is greater than the preset threshold, calculate the distances between the data sample set and the K cluster centers, re-determine the K cluster centers, and return to the data storage module 20; and, when the loss value is less than the threshold, output the K cluster centers to complete the clustering result.
The functions or operation steps implemented when the data receiving module 10, the data storage module 20, the clustering training module 30, the clustering result output module 40, and the other modules are executed are substantially the same as those of the embodiments of the large-scale data clustering method described above, and are not repeated here.
In addition, an embodiment of the present application also proposes a computer-readable storage medium, which may be non-volatile or volatile. The computer-readable storage medium stores a large-scale data clustering program, and the large-scale data clustering program can be executed by one or more processors to implement the following operations:
Receive the data sample set input by the user, calculate the average silhouette coefficient for each candidate number of cluster centers K according to the data sample set, select the K value with the largest average silhouette coefficient, and input the data sample set and the K value into a data storage module, which randomly determines K cluster centers according to the K value.
Store the K cluster centers and the data sample set in a database in a row-first storage format.
Sequentially read the K cluster centers and the data sample set from the database, calculate the loss value of the K cluster centers and the data sample set according to the squared-error minimization algorithm, and determine the magnitude relationship between the loss value and a preset threshold.
When the loss value is greater than the preset threshold, calculate the distances between the data sample set and the K cluster centers, re-determine the K cluster centers according to those distances, and return to the data storage module; when the loss value is less than the threshold, output the K cluster centers to complete the clustering result. The specific implementation of the computer-readable storage medium of this application is substantially the same as the embodiments of the large-scale data clustering method described above and is not repeated here.
It should be noted that the serial numbers of the above embodiments of this application are for description only and do not represent the relative merits of the embodiments. Moreover, the terms "comprise", "include", or any other variant thereof herein are intended to cover a non-exclusive inclusion, so that a process, apparatus, article, or method that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, apparatus, article, or method. Without further limitation, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, apparatus, article, or method that includes that element.
Through the description of the above embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by means of software plus a necessary general-purpose hardware platform, and of course also by hardware, but in many cases the former is the better implementation. Based on this understanding, the technical solution of this application, in essence or in the part contributing to the prior art, can be embodied in the form of a software product. The computer software product is stored in a storage medium as described above (such as ROM/RAM, magnetic disk, or optical disc) and includes several instructions to cause a terminal device (which may be a mobile phone, computer, server, or network device) to execute the methods described in the embodiments of this application.
The above are only preferred embodiments of this application and do not therefore limit the scope of its patent. Any equivalent structural or process transformation made using the contents of the specification and drawings of this application, whether applied directly or indirectly in other related technical fields, is likewise included within the scope of patent protection of this application.

Claims (20)

  1. A large-scale data clustering method, wherein the method comprises:
    Step A: a cluster-center-number calculation layer receives a data sample set input by a user, calculates the average silhouette coefficient for each candidate number of cluster centers K according to the data sample set, selects the K value with the largest average silhouette coefficient, randomly determines K cluster centers, and inputs the data sample set, the K value, and the K cluster centers into a cluster-center storage layer;
    Step B: the cluster-center storage layer stores the K cluster centers and the data sample set in a database in a row-first storage format;
    Step C: a cluster-center update layer sequentially reads the K cluster centers and the data sample set from the database, calculates the loss value of the K cluster centers and the data sample set according to a squared-error minimization algorithm, and determines the magnitude relationship between the loss value and a preset threshold;
    Step D: when the loss value is greater than the preset threshold, the distances between the data sample set and the K cluster centers are calculated, the K cluster centers are re-determined according to those distances, and the method returns to Step B; when the loss value is less than the threshold, the K cluster centers are output to complete the clustering result.
  2. The large-scale data clustering method according to claim 1, wherein calculating the average silhouette coefficient for the number of cluster centers K according to the data sample set and selecting the K value with the largest average silhouette coefficient comprises:
    initializing the K value, successively replacing it with values in the range [K_min, K_max] determined from the data sample set, and randomly determining K cluster centers according to the K value;
    for each datum x_i in the data sample set, calculating the cohesion a(x_i) between x_i and all other data belonging to the same cluster center as x_i;
    traversing all data of the other cluster centers, calculating the separation between x_i and all data of each other cluster center, and sorting to obtain the minimum separation b(x_i);
    calculating the average silhouette coefficient s(x_i) from the cohesion a(x_i) and the minimum separation b(x_i):

    $$s(x_i) = \frac{b(x_i) - a(x_i)}{\max\{a(x_i),\, b(x_i)\}}$$

    and sorting the values of the average silhouette coefficient s(x_i) and selecting the K value corresponding to the largest value of s(x_i).
  3. The large-scale data clustering method according to claim 2, wherein randomly determining the K cluster centers comprises: randomly determining the sample sets {c_1, c_2, c_3, .., c_i, .., c_K} of the K cluster centers and the centroid vectors {μ_1, μ_2, μ_3, .., μ_i, .., μ_K} of the K cluster centers, wherein the centroid vector μ_i is determined as:

    $$\mu_i = \frac{1}{\lvert c_i \rvert} \sum_{x_t \in c_i} x_t$$

    where x_t is a datum in the n data sample sets.
  4. The large-scale data clustering method according to claim 3, wherein calculating the loss value of the K cluster centers and the data sample set according to the squared-error minimization algorithm comprises:
    inputting the centroid vectors of the K cluster centers and the coordinate positions of the n data sample sets into the squared-error minimization algorithm, and computing the loss value E with the squared-error minimization algorithm:

    $$E = \sum_{i=1}^{K} \sum_{x_t \in c_i} \lVert x_t - \mu_i \rVert^2$$

    where x_t is a datum in the n data sample sets and x_t belongs to one of the K cluster-center sample sets.
  5. The large-scale data clustering method according to claim 4, wherein calculating the distances between the data sample set and the K cluster centers and re-determining the K cluster centers according to those distances comprises:
    calculating, according to a preset distance formula, the distance d_ti between each datum x_t in the n data sample sets and the centroid vector μ_i of each of the K cluster centers, wherein the distance formula is:

    $$d_{ti} = \lVert x_t - \mu_i \rVert^2$$

    where d_ti represents the distance between the datum x_t of the t-th data sample set and the centroid vector μ_i of the i-th cluster center; and
    selecting the sample set c_i corresponding to the centroid vector with the smallest distance d_ti and adding the datum x_t of the t-th data sample set to that sample set c_i, until μ_1 through μ_K have all been computed and the K cluster centers have been re-determined.
  6. The large-scale data clustering method according to claim 1, wherein the data sample set includes the coordinate positions of the data sample set.
  7. The large-scale data clustering method according to claim 1, wherein the row-first storage format transposes the data of the K cluster centers and the data sample set and defines the storage rules by row.
  8. A computer device, wherein the computer device includes a memory and a processor, the memory stores a large-scale data clustering program that can run on the processor, and when the large-scale data clustering program is executed by the processor, the following steps are implemented:
    Step A: a cluster-center-number calculation layer receives a data sample set input by a user, calculates the average silhouette coefficient for each candidate number of cluster centers K according to the data sample set, selects the K value with the largest average silhouette coefficient, randomly determines K cluster centers, and inputs the data sample set, the K value, and the K cluster centers into a cluster-center storage layer;
    Step B: the cluster-center storage layer stores the K cluster centers and the data sample set in a database in a row-first storage format;
    Step C: a cluster-center update layer sequentially reads the K cluster centers and the data sample set from the database, calculates the loss value of the K cluster centers and the data sample set according to a squared-error minimization algorithm, and determines the magnitude relationship between the loss value and a preset threshold;
    Step D: when the loss value is greater than the preset threshold, the distances between the data sample set and the K cluster centers are calculated, the K cluster centers are re-determined according to those distances, and the method returns to Step B; when the loss value is less than the threshold, the K cluster centers are output to complete the clustering result.
  9. The computer device according to claim 8, wherein calculating the average silhouette coefficient for the number of cluster centers K according to the data sample set and selecting the K value with the largest average silhouette coefficient comprises:
    initializing the K value, successively replacing it with values in the range [K_min, K_max] determined from the data sample set, and randomly determining K cluster centers according to the K value;
    for each datum x_i in the data sample set, calculating the cohesion a(x_i) between x_i and all other data belonging to the same cluster center as x_i;
    traversing all data of the other cluster centers, calculating the separation between x_i and all data of each other cluster center, and sorting to obtain the minimum separation b(x_i);
    calculating the average silhouette coefficient s(x_i) from the cohesion a(x_i) and the minimum separation b(x_i):

    $$s(x_i) = \frac{b(x_i) - a(x_i)}{\max\{a(x_i),\, b(x_i)\}}$$

    and sorting the values of the average silhouette coefficient s(x_i) and selecting the K value corresponding to the largest value of s(x_i).
  10. The computer device according to claim 9, wherein randomly determining the K cluster centers comprises: randomly determining the sample sets {c_1, c_2, c_3, .., c_i, .., c_K} of the K cluster centers and the centroid vectors {μ_1, μ_2, μ_3, .., μ_i, .., μ_K} of the K cluster centers, wherein the centroid vector μ_i is determined as:

    $$\mu_i = \frac{1}{\lvert c_i \rvert} \sum_{x_t \in c_i} x_t$$

    where x_t is a datum in the n data sample sets.
  11. The computer device according to claim 10, wherein calculating the loss value of the K cluster centers and the data sample set according to the squared-error minimization algorithm comprises:
    inputting the centroid vectors of the K cluster centers and the coordinate positions of the n data sample sets into the squared-error minimization algorithm, and computing the loss value E with the squared-error minimization algorithm:

    $$E = \sum_{i=1}^{K} \sum_{x_t \in c_i} \lVert x_t - \mu_i \rVert^2$$

    where x_t is a datum in the n data sample sets and x_t belongs to one of the K cluster-center sample sets.
  12. The computer device according to claim 11, wherein calculating the distances between the data sample set and the K cluster centers and re-determining the K cluster centers according to those distances comprises:
    calculating, according to a preset distance formula, the distance d_ti between each datum x_t in the n data sample sets and the centroid vector μ_i of each of the K cluster centers, wherein the distance formula is:

    $$d_{ti} = \lVert x_t - \mu_i \rVert^2$$

    where d_ti represents the distance between the datum x_t of the t-th data sample set and the centroid vector μ_i of the i-th cluster center; and
    selecting the sample set c_i corresponding to the centroid vector with the smallest distance d_ti and adding the datum x_t of the t-th data sample set to that sample set c_i, until μ_1 through μ_K have all been computed and the K cluster centers have been re-determined.
  13. The computer device according to claim 8, wherein the row-first storage format transposes the data of the K cluster centers and the data sample set and defines the storage rules by row.
  14. A large-scale data clustering apparatus, wherein the apparatus comprises:
    a data receiving module, configured to receive a data sample set input by a user, calculate the average silhouette coefficient for each candidate number of cluster centers K according to the data sample set, select the K value with the largest average silhouette coefficient, and input the data sample set and the K value into a data storage module, which randomly determines K cluster centers according to the K value;
    a data storage module, configured to store the K cluster centers and the data sample set in a database in a row-first storage format;
    a clustering training module, configured to sequentially read the K cluster centers and the data sample set from the database, calculate the loss value of the K cluster centers and the data sample set according to a squared-error minimization algorithm and the distances between the data sample set and the K cluster centers, and determine the magnitude relationship between the loss value and a preset threshold; and
    a clustering result output module, configured to calculate the distances between the data sample set and the K cluster centers when the loss value is greater than the preset threshold, re-determine the K cluster centers, and return to the data storage module, and to output the K cluster centers and complete the clustering result when the loss value is less than the threshold.
  15. A computer-readable storage medium, wherein the computer-readable storage medium stores a large-scale data clustering program, and the large-scale data clustering program can be executed by one or more processors to implement the following steps:
    Step A: a cluster-center-number calculation layer receives a data sample set input by a user, calculates the average silhouette coefficient for each candidate number of cluster centers K according to the data sample set, selects the K value with the largest average silhouette coefficient, randomly determines K cluster centers, and inputs the data sample set, the K value, and the K cluster centers into a cluster-center storage layer;
    Step B: the cluster-center storage layer stores the K cluster centers and the data sample set in a database in a row-first storage format;
    Step C: a cluster-center update layer sequentially reads the K cluster centers and the data sample set from the database, calculates the loss value of the K cluster centers and the data sample set according to a squared-error minimization algorithm, and determines the magnitude relationship between the loss value and a preset threshold;
    Step D: when the loss value is greater than the preset threshold, the distances between the data sample set and the K cluster centers are calculated, the K cluster centers are re-determined according to those distances, and the method returns to Step B; when the loss value is less than the threshold, the K cluster centers are output to complete the clustering result.
  16. The computer-readable storage medium according to claim 15, wherein calculating the average silhouette coefficient for the number of cluster centers K according to the data sample set and selecting the K value with the largest average silhouette coefficient comprises:
    initializing the K value, successively replacing it with values in the range [K_min, K_max] determined from the data sample set, and randomly determining K cluster centers according to the K value;
    for each datum x_i in the data sample set, calculating the cohesion a(x_i) between x_i and all other data belonging to the same cluster center as x_i;
    traversing all data of the other cluster centers, calculating the separation between x_i and all data of each other cluster center, and sorting to obtain the minimum separation b(x_i);
    calculating the average silhouette coefficient s(x_i) from the cohesion a(x_i) and the minimum separation b(x_i):

    $$s(x_i) = \frac{b(x_i) - a(x_i)}{\max\{a(x_i),\, b(x_i)\}}$$

    and sorting the values of the average silhouette coefficient s(x_i) and selecting the K value corresponding to the largest value of s(x_i).
  17. The computer-readable storage medium according to claim 16, wherein randomly determining the K cluster centers comprises: randomly determining the sample sets {c_1, c_2, c_3, .., c_i, .., c_K} of the K cluster centers and the centroid vectors {μ_1, μ_2, μ_3, .., μ_i, .., μ_K} of the K cluster centers, wherein the centroid vector μ_i is determined as:

    $$\mu_i = \frac{1}{\lvert c_i \rvert} \sum_{x_t \in c_i} x_t$$

    where x_t is a datum in the n data sample sets.
  18. The computer-readable storage medium according to claim 17, wherein calculating the loss value of the K cluster centers and the data sample set according to the squared-error minimization algorithm comprises:
    inputting the centroid vectors of the K cluster centers and the coordinate positions of the n data sample sets into the squared-error minimization algorithm, and computing the loss value E with the squared-error minimization algorithm:

    $$E = \sum_{i=1}^{K} \sum_{x_t \in c_i} \lVert x_t - \mu_i \rVert^2$$

    where x_t is a datum in the n data sample sets and x_t belongs to one of the K cluster-center sample sets.
  19. The computer-readable storage medium according to claim 18, wherein calculating the distances between the data sample set and the K cluster centers and re-determining the K cluster centers according to those distances comprises:
    calculating, according to a preset distance formula, the distance d_ti between each datum x_t in the n data sample sets and the centroid vector μ_i of each of the K cluster centers, wherein the distance formula is:

    $$d_{ti} = \lVert x_t - \mu_i \rVert^2$$

    where d_ti represents the distance between the datum x_t of the t-th data sample set and the centroid vector μ_i of the i-th cluster center; and
    selecting the sample set c_i corresponding to the centroid vector with the smallest distance d_ti and adding the datum x_t of the t-th data sample set to that sample set c_i, until μ_1 through μ_K have all been computed and the K cluster centers have been re-determined.
  20. The computer-readable storage medium according to claim 15, wherein the row-first storage format transposes the data of the K cluster centers and the data sample set and defines the storage rules by row.
PCT/CN2020/098957 2019-09-06 2020-06-29 Large-scale data clustering method and apparatus, computer device, and computer-readable storage medium WO2021042844A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910846891.5A CN110705602A (zh) Large-scale data clustering method, apparatus and computer-readable storage medium
CN201910846891.5 2019-09-06

Publications (1)

Publication Number Publication Date
WO2021042844A1 true WO2021042844A1 (zh) 2021-03-11

Family

ID=69195127

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/098957 WO2021042844A1 (zh) 2019-09-06 2020-06-29 大规模数据聚类方法、装置、计算机设备及计算机可读存储介质

Country Status (2)

Country Link
CN (1) CN110705602A (zh)
WO (1) WO2021042844A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20230316099A1 (en) * 2022-03-29 2023-10-05 Microsoft Technology Licensing, Llc System and method for identifying and resolving performance issues of automated components

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110705602A (zh) 2019-09-06 2020-01-17 平安科技(深圳)有限公司 Large-scale data clustering method, apparatus and computer-readable storage medium
CN112148859A (zh) 2020-09-27 2020-12-29 深圳壹账通智能科技有限公司 Question-answering knowledge base management method and apparatus, terminal device, and storage medium
CN114386502A (zh) 2022-01-07 2022-04-22 北京点众科技股份有限公司 Method, device, and storage medium for cluster analysis of quick-app users
CN115130581B (zh) 2022-04-02 2023-06-23 北京百度网讯科技有限公司 Sample generation method, training method, data processing method, and electronic device
CN114896393B (zh) 2022-04-15 2023-06-27 中国电子科技集团公司第十研究所 A data-driven incremental text clustering method

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105224467B (zh) 2014-05-30 2018-05-29 华为技术有限公司 Method and device for global memory access
CN104809161B (zh) 2015-04-01 2018-08-21 中国科学院信息工程研究所 Method and system for compressing and querying a sparse matrix
KR101953479B1 (ko) 2017-11-09 2019-05-23 강원대학교산학협력단 Group search optimization data clustering method and system applying relative distance ratios
CN109472300A (zh) 2018-10-24 2019-03-15 南京邮电大学 Centroid and centroid-count initialization method for the K-means clustering algorithm
CN109885685A (zh) 2019-02-01 2019-06-14 珠海世纪鼎利科技股份有限公司 Intelligence data processing method, apparatus, device, and storage medium
CN110188320A (zh) 2019-04-23 2019-08-30 山东大学 Parallel optimization method and system for second-order blind source separation based on a multi-core platform

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015088780A1 (en) 2013-12-10 2015-06-18 University Of Southern California Noise-enhanced clustering and competitive learning
CN107451622A (zh) 2017-08-18 2017-12-08 长安大学 Tunnel operating-state classification method based on big-data cluster analysis
CN108364026A (zh) 2018-02-24 2018-08-03 南京邮电大学 Cluster-center updating method and apparatus, and K-means cluster analysis method and apparatus
CN110109975A (zh) 2019-05-14 2019-08-09 重庆紫光华山智安科技有限公司 Data clustering method and apparatus
CN110705602A (zh) 2019-09-06 2020-01-17 平安科技(深圳)有限公司 Large-scale data clustering method, apparatus and computer-readable storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Jiang Li, Xue Shanliang: "A K-means Algorithm Based on Optimizing the Initial Clustering Center and Determining the K Value", Computer and Digital Engineering (Jisuanji yu Shuzi Gongcheng), vol. 46, no. 339, 1 January 2018, pp. 21-25, ISSN 1672-9722.

Also Published As

Publication number Publication date
CN110705602A (zh) 2020-01-17

Similar Documents

Publication Publication Date Title
WO2021042844A1 (zh) Large-scale data clustering method and apparatus, computer device, and computer-readable storage medium
JP6710483B2 (ja) Character recognition method, apparatus, server, and storage medium for damage claim documents
WO2022126971A1 (zh) Density-based text clustering method, apparatus, device, and storage medium
WO2021068610A1 (zh) Resource recommendation method and apparatus, electronic device, and storage medium
JP5615932B2 (ja) Search method and system
WO2019242144A1 (zh) Electronic device, preference tendency prediction method, and computer-readable storage medium
WO2019205373A9 (zh) Similar-user search apparatus and method, and computer-readable storage medium
WO2019080411A1 (zh) Electronic device, face image cluster search method, and computer-readable storage medium
WO2022042123A1 (zh) Image recognition model generation method and apparatus, computer device, and storage medium
WO2019137185A1 (zh) Picture screening method and apparatus, storage medium, and computer device
WO2019205375A1 (zh) Livestock identification method, apparatus, and storage medium
WO2021169116A1 (zh) Intelligent missing-data imputation method, apparatus, device, and storage medium
US8768100B2 (en) Optimal gradient pursuit for image alignment
JP2013519152A (ja) Method and system for text classification
CN110378480B (zh) Model training method and apparatus, and computer-readable storage medium
CN109685092B (zh) Big-data-based clustering method, device, storage medium, and apparatus
CN109918498B (zh) Question warehousing method and apparatus
WO2020248365A1 (zh) Method and apparatus for intelligently allocating training memory for a model, and computer-readable storage medium
CN111460234A (zh) Graph query method and apparatus, electronic device, and computer-readable storage medium
WO2019119635A1 (zh) Seed user expansion method, electronic device, and computer-readable storage medium
CN110866042A (zh) Intelligent table query method and apparatus, and computer-readable storage medium
WO2021027149A1 (zh) Profile-similarity-based information retrieval and recommendation method, apparatus, and storage medium
CN112668482A (zh) Face recognition training method and apparatus, computer device, and storage medium
CN110633733B (zh) Intelligent image matching method and apparatus, and computer-readable storage medium
US20220284990A1 (en) Method and system for predicting affinity between drug and target

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20860192

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20860192

Country of ref document: EP

Kind code of ref document: A1