CN110555110A - text clustering method combining K-means and evidence accumulation - Google Patents
text clustering method combining K-means and evidence accumulation
- Publication number
- CN110555110A (application CN201910857061.2A)
- Authority
- CN
- China
- Prior art keywords
- matrix
- cluster
- similarity
- value
- clustering
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/30—Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
- G06F16/35—Clustering; Classification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/23—Clustering techniques
- G06F18/232—Non-hierarchical techniques
- G06F18/2321—Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
- G06F18/23213—Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Probability & Statistics with Applications (AREA)
- Databases & Information Systems (AREA)
- Artificial Intelligence (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
A text clustering method combining K-means and evidence accumulation relates to text clustering technology. It aims to solve the poor consistency, accuracy and stability of traditional text clustering methods. The method comprises the following steps: step 1.1, initializing a co-association matrix; step 1.2, voting; step 1.3, normalizing the co-association matrix; step 2.1, initializing a similarity matrix; step 2.2, updating the similarity matrix; step 2.3, building a minimum spanning tree and pruning. The evidence accumulation strategy considers multiple different clustering results simultaneously, can smooth the differences between the clustering results produced by running the k-means algorithm many times, and can greatly improve the reliability, consistency and accuracy of the results.
Description
Technical Field
The invention relates to a text clustering technology.
Background
With the rapid development of online-social-network applications and of sensing and storage technologies, a large amount of high-dimensional data with complex and diverse features has emerged in cyberspace, of which text data accounts for a large proportion. The classification and clustering management of the large-scale, high-dimensional, diversified text data emerging in cyberspace has become an urgent topic in text mining.
Conventional text clustering methods typically perform a single clustering pass, which tends to produce random, inaccurate results. Even on the same text data set, different clustering algorithms, or the same clustering algorithm with different parameters, affect the accuracy of the final clustering segmentation, and small changes to an algorithm's parameters may lead to large deviations in the clustering result. A single clustering run on text data rarely yields stable, accurate and consistent results, and it is difficult to eliminate the influence of random factors, such as the differences between clustering algorithms and between the internal parameters of the same algorithm, on the accuracy of the clustering result.
In the past, research on text clustering has mainly focused on single clustering; compared with fusion (ensemble) clustering, the results obtained by a single clustering run are less convincing in terms of accuracy and consistency. Take the K-means algorithm as an example. K-means is closely related to Lloyd's algorithm, which aims to find a uniform partition over a subset of Euclidean space such that each point of the subset is assigned to the Voronoi cell of its closest center point (cell center). The K-means algorithm replaces the Voronoi diagram of Lloyd's algorithm with a discrete set, assigning each point of a finite set to the centroid closest to it. The K-means algorithm is suitable for text clustering: the observed objects are abstracted as data instances in a finite-dimensional Euclidean space. K-means focuses on minimizing a loss function defined as the sum of the squared distances between each data point and the centroid closest to it. Formally, the loss function is defined as

J(C) = \sum_{q=1}^{k} \sum_{x_i \in c_q} \lVert x_i - \mu_q \rVert^2

where C is the set of clusters, k is the number of clusters in C, c_q is a cluster, x_i is a data point in c_q, \mu_q is the centroid of c_q, and \lVert \cdot \rVert^2 denotes the squared Euclidean distance.
The K-means algorithm consists of three steps. (1) Centroid initialization: select k points from the data set as initial centroids, each centroid representing a cluster. (2) Cluster formation: assign each point in the data set to the cluster represented by its closest centroid. (3) Centroid update: take the mean of the points in each cluster as the new centroid and return to step (2). The K-means algorithm terminates when the maximum number of iterations is reached or the difference in the loss function between two successive iterations is smaller than a tolerable value. The flow of the K-means algorithm is shown in FIG. 1, and its iterative process is shown in FIG. 2.
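The three steps above translate directly into code. The following is a minimal NumPy sketch of plain K-means for illustration only; the function and parameter names are our own, not from the patent, and a production implementation would use a library such as scikit-learn.

```python
import numpy as np

def kmeans(X, k, max_iter=100, tol=1e-6, seed=0):
    """Minimal K-means sketch: (1) init centroids, (2) assign, (3) update."""
    rng = np.random.default_rng(seed)
    # (1) Centroid initialization: k random points from the data set.
    centroids = X[rng.choice(len(X), size=k, replace=False)].copy()
    prev_loss = np.inf
    for _ in range(max_iter):
        # (2) Cluster formation: assign each point to its closest centroid.
        d2 = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        # Loss: sum of squared distances between points and their centroids.
        loss = d2[np.arange(len(X)), labels].sum()
        # (3) Centroid update: the mean of the points in each cluster.
        for q in range(k):
            if np.any(labels == q):
                centroids[q] = X[labels == q].mean(axis=0)
        # Terminate when the loss change is smaller than a tolerable value.
        if prev_loss - loss < tol:
            break
        prev_loss = loss
    return labels, centroids, loss
```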
The K-means algorithm adapts well to complex types of data. It divides a given data set into a predefined number of clusters. K-means starts from a set of user-specified centroids and then iterates between two steps, assignment and update. However, K-means is a local-improvement heuristic that is sensitive to its parameter values. The output of the K-means algorithm depends to a large extent on the setting of the initial centroids, and small changes in the centroid initialization can have a significant effect on the output of the entire algorithm.
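As a quick illustration of this sensitivity (reusing the `kmeans` sketch above; the data set below is synthetic and purely illustrative), two runs that differ only in their random initialization can converge to different local optima:

```python
import numpy as np

# Three synthetic 2-D blobs; the data are illustrative only.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(loc=c, scale=0.5, size=(50, 2))
               for c in ([0, 0], [5, 5], [0, 5])])

labels_a, _, loss_a = kmeans(X, k=3, seed=0)
labels_b, _, loss_b = kmeans(X, k=3, seed=1)
# labels_a and labels_b (and loss_a, loss_b) may differ:
# K-means only converges to a local optimum of the loss function.
```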
Aiming at the traditional text clustering problem, how to obtain consistent, accurate and stable clustering results still remains a problem to be solved urgently at present.
Disclosure of Invention
The invention aims to solve the problems of poor consistency, accuracy and stability of traditional text clustering methods and provides a text clustering method combining K-means with evidence accumulation.
The invention relates to a text clustering method combining K-means with evidence accumulation, which comprises the following steps:
Step 1, establishing a co-association matrix, specifically comprising the following steps:
Step 1.1, initializing the co-association matrix: initialize the values of all elements of a co-association matrix A to 0, where A is an n × n matrix, n is the number of data points in the data set, and element A_ij of matrix A represents the similarity between the i-th data point x_i and the j-th data point x_j;
Step 1.2, voting: select different values of K, where K is the number of clusters, and run the K-means algorithm multiple times; then select the range over which the loss function has the largest descending gradient as the optimal value range of the K-means parameter K, sample the parameter K uniformly within this optimal range, and run the K-means algorithm multiple times with each sampled value; for any two data points x_i and x_j, during each clustering with the K-means algorithm, if x_i and x_j are assigned to the same cluster, the value of the corresponding element A_ij of the co-association matrix A is increased by 1, otherwise the value of A_ij is unchanged;
Step 1.3, normalizing the co-association matrix: divide each element of the co-association matrix A by M to normalize A, where M is the number of times the K-means algorithm is run in step 1.2;
Step 2, running a hierarchical clustering algorithm on the co-association matrix, specifically comprising the following steps:
Step 2.1, initializing the similarity matrix: define a similarity matrix and initialize it to the normalized co-association matrix A obtained in step 1.3; regard each element of the data set as a cluster, and let the value of each element of the similarity matrix represent the similarity between the two corresponding data points;
Step 2.2, updating the similarity matrix: the update comprises 3 steps: step a, merge the two most similar clusters into one cluster according to the single-link criterion; step b, set the similarity values of elements within the same cluster to 0 and update the similarity matrix; step c, recalculate the similarity between the newly generated cluster and the remaining clusters; repeat steps a to c until all clusters are merged into one cluster;
Step 2.3, constructing a minimum spanning tree and pruning: in step 2.2, each time a new cluster is generated by merging, an undirected edge connecting two data points from different clusters is generated; after step 2.2 ends, a tree structure T formed by all the undirected edges is obtained, and the edges in T whose weight is smaller than a preset value t are cut off, thereby completing the text clustering method.
The K-means-based EAC (evidence accumulation clustering) algorithm compensates for the shortcomings of K-means: the EAC strategy considers multiple different clustering results simultaneously. The EAC strategy smooths the differences between the clustering results produced by running the k-means algorithm many times and can greatly improve the reliability, consistency and accuracy of the results.
Drawings
FIG. 1 is a flow chart of the K-means algorithm in the background art;
FIG. 2 is a diagram illustrating the iterative process of the K-means algorithm in the background art, in which (a) is the result of the 1st iteration, (b) the result of the 2nd iteration, (c) the result of the 5th iteration, and (d) the result of the 20th iteration;
FIG. 3 is a flow chart of creating the co-association matrix;
FIG. 4 is a flow chart of running the hierarchical clustering algorithm on the co-association matrix;
FIG. 5 is a flow chart of the K-means-based EAC algorithm.
Detailed Description
The first embodiment is as follows: the text clustering method combining K-means and evidence accumulation according to the present embodiment includes the following steps:
Step 1, establishing a co-association matrix, specifically comprising the following steps:
Step 1.1, initializing the co-association matrix: initialize the values of all elements of a co-association matrix A to 0, where A is an n × n matrix, n is the number of data points in the data set, and element A_ij of matrix A represents the similarity between the i-th data point x_i and the j-th data point x_j;
Step 1.2, voting: select different values of K, where K is the number of clusters, and run the K-means algorithm multiple times; then select the range over which the loss function has the largest descending gradient as the optimal value range of the K-means parameter K, sample the parameter K uniformly within this optimal range, and run the K-means algorithm multiple times with each sampled value; for any two data points x_i and x_j, during each clustering with the K-means algorithm, if x_i and x_j are assigned to the same cluster, the value of the corresponding element A_ij of the co-association matrix A is increased by 1, otherwise the value of A_ij is unchanged;
Step 1.3, normalizing the co-association matrix: divide each element of the co-association matrix A by M to normalize A, where M is the number of times the K-means algorithm is run in step 1.2;
Step 2, running a hierarchical clustering algorithm on the co-association matrix, specifically comprising the following steps:
Step 2.1, initializing the similarity matrix: define a similarity matrix and initialize it to the normalized co-association matrix A obtained in step 1.3; regard each element of the data set as a cluster, and let the value of each element of the similarity matrix represent the similarity between the two corresponding data points;
Step 2.2, updating the similarity matrix: the update comprises 3 steps: step a, merge the two most similar clusters into one cluster according to the single-link criterion; step b, set the similarity values of elements within the same cluster to 0 and update the similarity matrix; step c, recalculate the similarity between the newly generated cluster and the remaining clusters; repeat steps a to c until all clusters are merged into one cluster;
Step 2.3, constructing a minimum spanning tree and pruning: in step 2.2, each time a new cluster is generated by merging, an undirected edge connecting two data points from different clusters is generated; after step 2.2 ends, a tree structure T formed by all the undirected edges is obtained, and the edges in T whose weight is smaller than a preset value t are cut off, thereby completing the text clustering method.
K-means-based EAC is a clustering fusion algorithm built on a voting mechanism. The main idea of EAC is that data points that are naturally similar are more likely than others to be assigned to the same cluster across multiple clustering runs. The K-means-based EAC algorithm comprehensively considers the segmentation results of multiple K-means clusterings.
The K-means-based EAC algorithm runs in two phases: first, create the co-association matrix; second, run a hierarchical clustering algorithm on the co-association matrix.
In step 1, the data points in the data set are mapped from the original feature space to a completely new feature space, in which the features are no longer the Euclidean distances between data points but the similarities between them. Let the co-association matrix be denoted A, where A is an n × n matrix and n is the number of data points in the data set. Element A_ij of matrix A represents the similarity between the i-th data point x_i and the j-th data point x_j: the greater the value of A_ij, the more similar and the closer together x_i and x_j are. As shown in FIG. 3, the creation of the co-association matrix is divided into three steps:
Step 1.1, initializing the co-association matrix: the values of all elements in the co-association matrix A are initialized to 0.
Step 1.2, voting: select different values of K (the number of clusters) and run the K-means algorithm multiple times, then select the range over which the loss function has the largest descending gradient as the optimal value range of the K-means parameter K. On this basis, sample the parameter K uniformly within the optimal range and run the K-means algorithm multiple times. For any two data points x_i and x_j, during each clustering with the K-means algorithm, if they are assigned to the same cluster, the corresponding element A_ij of the co-association matrix A is increased by 1; otherwise the value of A_ij is unchanged.
Step 1.3, normalizing the co-association matrix: divide each element of the co-association matrix A by M to normalize A, where M is the number of times the K-means algorithm is run in step 1.2.
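Steps 1.1 to 1.3 can be sketched as follows. This is a sketch under our own reading of the patent: `select_k_range` implements one plausible interpretation of "the range with the largest descending gradient of the loss function" (the steepest drop of the loss curve over candidate K values, with an assumed range width), `kmeans` is the sketch from the background section, and all names are illustrative.

```python
import numpy as np

def select_k_range(X, k_candidates, width=4, seed=0):
    """Step 1.2 (first half): pick the K range where the loss drops fastest."""
    losses = [kmeans(X, k, seed=seed)[2] for k in k_candidates]
    grads = np.diff(losses)               # negative where the loss decreases
    i = int(np.argmin(grads))             # steepest-descent segment
    k_min = k_candidates[i]
    k_max = min(k_candidates[-1], k_min + width)  # assumed range width
    return k_min, k_max

def build_coassociation(X, k_min, k_max, M, seed=0):
    """Steps 1.1-1.3: vote over M K-means runs, then normalize by M."""
    rng = np.random.default_rng(seed)
    n = len(X)
    A = np.zeros((n, n))                  # step 1.1: all A_ij = 0
    for m in range(M):
        # Step 1.2 (second half): K sampled uniformly in the optimal range.
        k = int(rng.integers(k_min, k_max + 1))
        labels = kmeans(X, k, seed=seed + m)[0]
        # Voting: A_ij += 1 whenever x_i and x_j land in the same cluster.
        A += labels[:, None] == labels[None, :]
    return A / M                          # step 1.3: normalization by M
```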
In step 2, a hierarchical clustering algorithm is applied to the co-association matrix A to obtain its hierarchical structure under a threshold t. Here, the single-link criterion is used as the linkage criterion of the hierarchical clustering. The single-link criterion is defined as follows: for any two clusters c_i and c_j, their similarity is the maximum A_pq over all pairs with x_p ∈ c_i and x_q ∈ c_j, that is, over all pairs in which data point x_p lies in cluster c_i and data point x_q lies in cluster c_j. A_pq is the element of the co-association matrix A corresponding to data points x_p and x_q, i.e., the similarity between x_p and x_q. As shown in FIG. 4, the detailed process of step 2 is divided into 3 steps:
Step 2.1, initializing the similarity matrix: define a similarity matrix and initialize it to the co-association matrix A obtained in step 1; regard each element of the data set as a cluster, and let the value of each element of the similarity matrix represent the similarity between the corresponding data points.
Step 2.2, updating the similarity matrix: the update comprises 3 steps. (a) According to the single-link criterion, merge the two most similar clusters into one cluster. (b) Set the similarity values of elements within the same cluster to 0 and update the similarity matrix. (c) Recalculate the similarity between the newly generated cluster and the remaining clusters and return to step (a). Repeat until all clusters are merged into one cluster.
Step 2.3, constructing a minimum spanning tree and pruning: in step 2.2, each time a new cluster is generated by merging, an edge connecting two data points from different clusters is generated, and the weight of the edge is defined as the similarity between the two clusters. For example, if in step 2.2 the similarity between cluster c_i and cluster c_j is A_pq, where x_p ∈ c_i and x_q ∈ c_j and A_pq is the similarity between data points x_p and x_q, and clusters c_i and c_j are merged into a new cluster, then an edge connecting data points x_p and x_q with weight A_pq is created. Therefore, when step 2.2 ends, a tree structure T is obtained, and the edges in T whose weight is smaller than t are cut off.
Regard G as the complete undirected graph formed by connecting the data points in the data set, with the weight of each edge set to the negation of the similarity between the two connected data points, i.e., the negated value of the corresponding element of the co-association matrix A. Following the idea of Prim's algorithm, the tree T is in fact the minimum spanning tree of the undirected complete graph G.
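Because single-link agglomeration on a similarity matrix is equivalent to building a minimum spanning tree on the negated-similarity graph and cutting its weak edges, steps 2.1 to 2.3 can be sketched with SciPy's graph routines. This is a sketch of that equivalence, not the patent's literal merge loop; `eac_partition` is an illustrative name.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import minimum_spanning_tree, connected_components

def eac_partition(A, t):
    """Step 2: MST on the graph with weights -A, then prune and label."""
    W = -A.copy()
    np.fill_diagonal(W, 0.0)              # no self-edges
    # Minimum spanning tree of the undirected graph G (Prim/Kruskal style).
    T = minimum_spanning_tree(csr_matrix(W)).toarray()
    # Pruning: cut the edges whose similarity (= negated weight) is below t.
    T[-T < t] = 0.0
    # Each connected component that survives pruning is one cluster.
    _, labels = connected_components(csr_matrix(T), directed=False)
    return labels
```

Note that SciPy treats zero entries as absent edges, so pairs of points that were never assigned to the same cluster simply contribute no edge, which matches the intent of the method.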
The overall flow of the text clustering method according to this embodiment is shown in fig. 5.
The pseudo code of the K-means-based EAC algorithm is as follows:
Input: X: data set, x_i ∈ R^d;
k: the number of initial centroids;
M: the number of times the K-means algorithm is run;
t: weight threshold (for pruning);
Output: the segmentation result of the data points in the data set;
Process: 1. Initialize the co-association matrix A: set the values of the elements of A to 0;
2. Run the following operations M times:
2.1 Use the k-means++ algorithm (k-means++ is an algorithm for selecting the K-means centroids, proposed by D. Arthur et al. in 2007) to select k centroids in the data set X;
2.2 Under the above conditions, run the K-means algorithm with the constraints of the Elkan triangle inequality (an inequality proposed by Charles Elkan in 2003 for accelerating the K-means algorithm);
2.3 Voting mechanism: for any pair of data points x_i and x_j, if they are assigned to the same cluster,
A_ij = A_ij + 1;
3. Normalize the co-association matrix: A_ij = A_ij / M;
4. Run a hierarchical clustering algorithm on the co-association matrix A to obtain a consistent, stable and accurate clustering segmentation of the data set X. The linkage criterion used by the hierarchical clustering is the single-link criterion:
4.1 Construct an undirected complete graph: construct an undirected complete graph G with the negated co-association matrix −A as its adjacency matrix;
4.2 Find the minimum spanning tree: compute the minimum spanning tree T on the undirected complete graph G;
4.3 Pruning: cut off the edges of the minimum spanning tree T whose similarity is smaller than t (i.e., whose weight in G is greater than −t);
4.4 Each connected subtree obtained after pruning is one cluster.
End.
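For illustration, the pseudocode above maps onto scikit-learn almost line for line: `KMeans` supports the k-means++ initialization (step 2.1) and the Elkan triangle-inequality acceleration (step 2.2). The TF-IDF vectorization and all parameter values below are our assumptions, since the patent does not fix a text feature extractor; `eac_partition` is the sketch from the previous section.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

def eac_kmeans_text(docs, k_min, k_max, M, t, seed=0):
    """K-means-based EAC over a list of text documents (sketch)."""
    # Text vectorization (assumed; the patent does not specify a scheme).
    X = TfidfVectorizer().fit_transform(docs).toarray()
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    A = np.zeros((n, n))                               # step 1
    for m in range(M):                                 # step 2, M runs
        k = int(rng.integers(k_min, k_max + 1))
        km = KMeans(n_clusters=k, init="k-means++",    # 2.1: k-means++
                    algorithm="elkan",                 # 2.2: Elkan bounds
                    n_init=1, random_state=seed + m).fit(X)
        # 2.3: voting over pairs assigned to the same cluster.
        A += km.labels_[:, None] == km.labels_[None, :]
    A /= M                                             # step 3
    return eac_partition(A, t)                         # step 4: MST + prune

# Usage sketch: labels = eac_kmeans_text(corpus, k_min=5, k_max=20, M=50, t=0.5)
```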
Claims (1)
1. A text clustering method combining K-means with evidence accumulation, characterized by comprising the following steps:
Step 1, establishing a co-association matrix, specifically comprising the following steps:
Step 1.1, initializing the co-association matrix: initialize the values of all elements of a co-association matrix A to 0, where A is an n × n matrix, n is the number of data points in the data set, and element A_ij of matrix A represents the similarity between the i-th data point x_i and the j-th data point x_j;
Step 1.2, voting: select different values of K, where K is the number of clusters, and run the K-means algorithm multiple times; then select the range over which the loss function has the largest descending gradient as the optimal value range of the K-means parameter K, sample the parameter K uniformly within this optimal range, and run the K-means algorithm multiple times with each sampled value; for any two data points x_i and x_j, during each clustering with the K-means algorithm, if x_i and x_j are assigned to the same cluster, the value of the corresponding element A_ij of the co-association matrix A is increased by 1, otherwise the value of A_ij is unchanged;
Step 1.3, normalizing the co-association matrix: divide each element of the co-association matrix A by M to normalize A, where M is the number of times the K-means algorithm is run in step 1.2;
Step 2, running a hierarchical clustering algorithm on the co-association matrix, specifically comprising the following steps:
Step 2.1, initializing the similarity matrix: define a similarity matrix and initialize it to the normalized co-association matrix A obtained in step 1.3; regard each element of the data set as a cluster, and let the value of each element of the similarity matrix represent the similarity between the two corresponding data points;
Step 2.2, updating the similarity matrix: the update comprises 3 steps: step a, merge the two most similar clusters into one cluster according to the single-link criterion; step b, set the similarity values of elements within the same cluster to 0 and update the similarity matrix; step c, recalculate the similarity between the newly generated cluster and the remaining clusters; repeat steps a to c until all clusters are merged into one cluster;
Step 2.3, constructing a minimum spanning tree and pruning: in step 2.2, each time a new cluster is generated by merging, an undirected edge connecting two data points from different clusters is generated; after step 2.2 ends, a tree structure T formed by all the undirected edges is obtained, and the edges in T whose weight is smaller than a preset value t are cut off, thereby completing the text clustering method.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910857061.2A CN110555110A (en) | 2019-09-10 | 2019-09-10 | text clustering method combining K-means and evidence accumulation |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910857061.2A CN110555110A (en) | 2019-09-10 | 2019-09-10 | text clustering method combining K-means and evidence accumulation |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110555110A (en) | 2019-12-10
Family
ID=68739949
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910857061.2A Pending CN110555110A (en) | 2019-09-10 | 2019-09-10 | text clustering method combining K-means and evidence accumulation |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110555110A (en) |
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109002858A (en) * | 2018-07-23 | 2018-12-14 | Hefei University of Technology | A clustering ensemble method based on evidential reasoning for user behavior analysis |
Non-Patent Citations (1)
Title |
---|
HONGLI ZHANG et al., "Marrying K-means with Evidence Accumulation in Clustering Analysis", IEEE * |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111985837A (en) * | 2020-08-31 | 2020-11-24 | Ping An Medical and Healthcare Management Co., Ltd. | Risk analysis method, device and equipment based on hierarchical clustering, and storage medium |
CN113946967A (en) * | 2021-10-21 | 2022-01-18 | Beijing Institute of Technology | Coverage control method for improving Lloyd algorithm balance |
CN114118296A (en) * | 2021-12-08 | 2022-03-01 | Kunming University of Science and Technology | Rock mass structural plane dominant occurrence grouping method based on clustering integration |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Prajwala | A comparative study on decision tree and random forest using R tool | |
Grover | A study of various fuzzy clustering algorithms | |
Shen et al. | An ensemble model for day-ahead electricity demand time series forecasting | |
CN103106279B (en) | Clustering method a kind of while based on nodal community and structural relationship similarity | |
CN110555110A (en) | text clustering method combining K-means and evidence accumulation | |
CN103678671B (en) | A kind of dynamic community detection method in social networks | |
CN104809244B (en) | Data digging method and device under a kind of big data environment | |
CN104346481A (en) | Community detection method based on dynamic synchronous model | |
Bandyopadhyay | Multiobjective simulated annealing for fuzzy clustering with stability and validity | |
CN110598061A (en) | Multi-element graph fused heterogeneous information network embedding method | |
CN114386466B (en) | Parallel hybrid clustering method for candidate signal mining in pulsar search | |
CN112948345A (en) | Big data clustering method based on cloud computing platform | |
CN108595631B (en) | Three-dimensional CAD model double-layer retrieval method based on graph theory | |
CN115293919A (en) | Graph neural network prediction method and system oriented to social network distribution generalization | |
CN112215655A (en) | Client portrait label management method and system | |
Chaturvedi et al. | An improvement in K-mean clustering algorithm using better time and accuracy | |
CN115114484A (en) | Abnormal event detection method and device, computer equipment and storage medium | |
Gajawada et al. | Optimal clustering method based on genetic algorithm | |
CN111401412B (en) | Distributed soft clustering method based on average consensus algorithm in Internet of things environment | |
CN111126467B (en) | Remote sensing image space spectrum clustering method based on multi-target sine and cosine algorithm | |
CN108198084A (en) | A kind of complex network is overlapped community discovery method | |
CN112639761A (en) | Method and device for establishing index for data | |
US20220358360A1 (en) | Classifying elements and predicting properties in an infrastructure model through prototype networks and weakly supervised learning | |
Morvan et al. | Graph sketching-based massive data clustering | |
Burdescu et al. | A Spatial Segmentation Method. |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20191210 |