CN113688229B - Text recommendation method, system, storage medium and equipment - Google Patents


Info

Publication number
CN113688229B
CN113688229B (application CN202111016193.6A)
Authority
CN
China
Prior art keywords
text
texts
keywords
recommended
clustering
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111016193.6A
Other languages
Chinese (zh)
Other versions
CN113688229A (en)
Inventor
周劲
郭颖颖
韩士元
王琳
杜韬
纪科
张坤
赵亚欧
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Jinan
Original Assignee
University of Jinan
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Jinan filed Critical University of Jinan
Priority to CN202111016193.6A
Publication of CN113688229A
Application granted
Publication of CN113688229B
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS; G06 COMPUTING, CALCULATING OR COUNTING; G06F ELECTRIC DIGITAL DATA PROCESSING; G06F16/00 Information retrieval, database structures and file system structures therefor
    • G06F16/335 Filtering based on additional data, e.g. user or group profiles
    • G06F16/355 Class or cluster creation or modification
    • G06F16/9535 Search customisation based on user profiles and personalisation
    • G06F16/958 Organisation or management of web site content, e.g. publishing, maintaining pages or automatic linking

Abstract

The invention belongs to the field of text recommendation and provides a text recommendation method, system, storage medium and device. The method obtains the keywords of the texts to be recommended; clusters all the texts to be recommended based on their keywords and on texts with known attributes; and recommends the texts in order of the distance between the keywords of each text to be recommended and the keywords of the texts with known attributes. In the clustering process, the affinity information between the keywords of all the texts to be recommended and the texts with known attributes is taken into account: the obtained affinities are combined with the attribute weights to construct an attribute-weight lasso regularization term based on dimension affinities, while maximum entropy regularization is used to achieve an optimized distribution of the attribute weights.

Description

Text recommendation method, system, storage medium and equipment
Technical Field
The invention belongs to the field of text recommendation, and particularly relates to a text recommendation method, a system, a storage medium and equipment.
Background
The statements in this section merely provide background information related to the present disclosure and may not necessarily constitute prior art.
With the advent of the big-data age, keyword search on websites has become more complex, and the dimensionality of the data keeps growing. In the face of such complex and varied data environments, traditional clustering algorithms are no longer applicable. To handle high-dimensional data, many advanced subspace clustering algorithms have been widely studied.
The inventors found that, facing the sparsity and complexity of high-dimensional data features, current subspace clustering algorithms still struggle to effectively mine the potential nonlinear information among data features, i.e., an information representation based on data feature affinities, and therefore cannot achieve a good clustering effect, which reduces the accuracy of recommending texts to be retrieved.
Disclosure of Invention
In order to solve the technical problems in the background art, the invention provides a text recommendation method, system, storage medium and device, which can improve the accuracy of recommending the texts to be retrieved.
In order to achieve the above purpose, the present invention adopts the following technical scheme:
a first aspect of the present invention provides a text recommendation method, including:
Acquiring keywords of a text to be recommended;
clustering all the texts to be recommended based on the keywords of the texts to be recommended and the texts with known attributes;
Sequentially recommending the texts according to the distances between the keywords of all the texts to be recommended and the keywords with known text attributes;
In the process of clustering all the texts to be recommended, the affinity information between the keywords of all the texts to be recommended and the texts with known attributes is taken into account; the obtained affinities are combined with the attribute weights to construct an attribute-weight lasso regularization term based on dimension affinities, while maximum entropy regularization is used to achieve an optimized distribution of the attribute weights.
Further, the affinity information is mined using a nonlinear kernel function, wherein D_i represents the i-th keyword, D_j represents the j-th keyword, and σ represents a parameter of the kernel function used to compute the similarity of the two keywords.
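The patent text does not reproduce the kernel formula itself. As an illustration only, the sketch below assumes each keyword is represented as a numeric vector and uses a Gaussian (RBF) kernel, a common nonlinear kernel with a bandwidth parameter σ; it also zeroes the diagonal, consistent with the later statement that the diagonal elements of the affinity matrix are 0. Both the vector representation and the Gaussian form are assumptions, not taken from the patent.

```python
import numpy as np

def affinity_matrix(D, sigma=1.0):
    """Pairwise keyword affinities via a Gaussian (RBF) kernel (assumed form).

    D     : (M, d) array, one embedding vector per keyword (assumed input)
    sigma : kernel bandwidth parameter
    """
    sq = np.sum(D ** 2, axis=1)
    # squared Euclidean distances between every pair of keyword vectors
    dist2 = sq[:, None] + sq[None, :] - 2.0 * (D @ D.T)
    S = np.exp(-dist2 / (2.0 * sigma ** 2))
    np.fill_diagonal(S, 0.0)  # self-affinity excluded, per the patent
    return S
```

The result is a symmetric M×M matrix with entries in [0, 1] and a zero diagonal.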
Further, the keyword of each text is a feature of each piece of data.
Further, the attribute weights are iteratively optimized by using an alternate direction multiplier method, and finally all texts to be recommended are clustered.
Further, the elements on the diagonal of the matrix of affinities of all keywords are 0.
A second aspect of the present invention provides a text recommendation system, comprising:
the keyword acquisition module is used for acquiring keywords of the text to be recommended;
the text clustering module is used for clustering all the texts to be recommended based on the keywords of the texts to be recommended and the texts with known attributes;
the text recommending module is used for recommending texts in sequence according to the distances between the keywords of all texts to be recommended and the keywords with known text attributes;
In the process of clustering all the texts to be recommended in the text clustering module, the affinity information between the keywords of all the texts to be recommended and the texts with known attributes is taken into account; the obtained affinities are combined with the attribute weights to construct an attribute-weight lasso regularization term based on dimension affinities, while maximum entropy regularization is used to achieve an optimized distribution of the attribute weights.
A third aspect of the present invention provides a computer readable storage medium having stored thereon a computer program which when executed by a processor performs steps in a text recommendation method as described above.
A fourth aspect of the invention provides a computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the steps in the text recommendation method as described above when the program is executed.
Compared with the prior art, the invention has the beneficial effects that:
The method addresses the great challenge that complex high-dimensional data pose to cluster analysis: all texts to be recommended are clustered based on their keywords and on texts with known attributes, and the texts are recommended in order of the distance between the keywords of each text to be recommended and the keywords of the texts with known attributes. In the clustering process, the affinity information between the keywords of all the texts to be recommended and the texts with known attributes is taken into account; the obtained affinities are combined with the attribute weights to construct an attribute-weight lasso regularization term based on dimension affinities, while maximum entropy regularization is used to achieve an optimized distribution of the attribute weights. A better clustering effect is thus obtained, and the accuracy of recommending the texts to be retrieved is improved.
Additional aspects of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the invention.
FIG. 1 is an exemplary diagram of a high-dimensional dataset of an embodiment of the present invention;
FIG. 2 is a model architecture of subspace fuzzy clustering based on feature affinities according to an embodiment of the invention;
FIG. 3 is a flow chart of feature affinity-based subspace fuzzy clustering in accordance with an embodiment of the present invention;
FIG. 4 is a flow chart of a text recommendation method according to an embodiment of the present invention;
Fig. 5 is a schematic structural diagram of a text recommendation system according to an embodiment of the present invention.
Detailed Description
The invention will be further described with reference to the drawings and examples.
It should be noted that the following detailed description is illustrative and is intended to provide further explanation of the invention. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of exemplary embodiments according to the present invention. As used herein, the singular is also intended to include the plural unless the context clearly indicates otherwise, and furthermore, it is to be understood that the terms "comprises" and/or "comprising" when used in this specification are taken to specify the presence of stated features, steps, operations, devices, components, and/or combinations thereof.
Example 1
Referring to fig. 4, the present embodiment provides a text recommendation method, which specifically includes the following steps:
S101: acquiring keywords of a text to be recommended;
S102: clustering all the texts to be recommended based on the keywords of the texts to be recommended and the texts with known attributes;
In the process of clustering all the texts to be recommended, the affinity information between the keywords of all the texts to be recommended and the texts with known attributes is taken into account; the obtained affinities are combined with the attribute weights to construct an attribute-weight lasso regularization term based on dimension affinities, while maximum entropy regularization is used to achieve an optimized distribution of the attribute weights.
S103: and recommending the texts in turn according to the distances between the keywords of all the texts to be recommended and the keywords with known text attributes.
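A minimal sketch of step S103, assuming for illustration that each text's keywords are summarized as a single numeric vector and that Euclidean distance is used (the patent fixes neither the vectorization nor the distance):

```python
import numpy as np

def rank_candidates(candidate_vecs, known_vec):
    """Order candidate texts by the Euclidean distance between their keyword
    vectors and the keyword vector of a text with known attributes; the
    closest candidate is recommended first."""
    dists = np.linalg.norm(candidate_vecs - known_vec[None, :], axis=1)
    return np.argsort(dists)
```

The returned index array gives the recommendation order over the candidate texts.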
This embodiment provides a subspace fuzzy clustering method based on feature affinities to improve the clustering accuracy of complex high-dimensional data; the model structure is shown in fig. 2. Taking the affinity information among data features into account, the obtained affinities are combined with the attribute weights to construct an attribute-weight lasso regularization term based on feature affinities, while maximum entropy regularization is used to achieve an optimized distribution of the attribute weights. The method is then iteratively optimized using the alternating direction method of multipliers. Finally, the clustering result of the data is output, the attribute-weight distribution and feature-affinity relations of the data are analyzed, and the related effect graphs are drawn; the method flow chart is shown in fig. 3.
The following is a detailed description of the implementation steps of the present embodiment:
a) The text information X to be clustered is read in; it contains N texts in total and is divided into K classes of texts with different content, as shown in fig. 1, such as politics, literature and sports. The text information is analyzed by a computer: each text is one piece of data, the keywords obtained from each text are the features of that piece of data, and the number M of keywords in each text is the number of features of each data sample. The basic information of the texts is acquired, including the number N of texts and the number M of keywords obtained from each text;
b) The feature affinity matrix S is initialized, i.e., the affinity matrix between the keywords of all samples is solved; each element of the matrix represents the degree of similarity between a pair of keywords, computed with a nonlinear kernel function, wherein D_i represents the i-th keyword in the acquired texts to be recommended, D_j represents the j-th keyword, and σ represents a parameter of the kernel function used to compute the similarity of the two keywords. According to the read data information, the affinity information between the data features is explored through the nonlinear kernel;
c) A grid search over the initial parameters is adopted. The objective function embodies the core idea of the proposed text recommendation method: through iterative optimization of the algorithm, it groups texts of similar type into one class, so that texts of the same type can be recommended to the user more intelligently. The fuzzy factor α in the objective function is set to [1.1, 1.5, 2.0], the two regularization-term parameters γ and μ are set to [0.001, 0.01, 0.05, 0.1, 0.5, 1, 2, 5, 10] and [0.001, 0.01, 0.05, 0.1, 0.5, 1, 3, 5, 7, 9, 10] respectively, and the step size ρ of the optimization algorithm is set to [0.1, 0.5, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10];
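The grid search in step c) enumerates every combination of the listed candidate values; a sketch using the patent's own value lists:

```python
from itertools import product

# candidate values taken from step c)
alphas = [1.1, 1.5, 2.0]                                   # fuzzy factor alpha
gammas = [0.001, 0.01, 0.05, 0.1, 0.5, 1, 2, 5, 10]        # entropy-term parameter gamma
mus    = [0.001, 0.01, 0.05, 0.1, 0.5, 1, 3, 5, 7, 9, 10]  # regularization-term parameter mu
rhos   = [0.1, 0.5, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]         # optimization step size rho

# every (alpha, gamma, mu, rho) combination to be tried
grid = list(product(alphas, gammas, mus, rhos))
```

Each tuple in `grid` parameterizes one run of the clustering algorithm; the patent does not specify the selection criterion among runs.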
d) An iteration counter ite is initialized and set to 0, the maximum iteration number MaxIte is set to 100, and the iteration convergence threshold ε is set to 1e-6. K cluster centers are initialized, i.e., K texts are initialized from all texts as class-center texts, each containing M keywords, i.e., M features; the weights W of the keywords in each class-center text are set, finally yielding the weight matrix W of the class-center texts;
e) The iteration counter is incremented by 1, i.e., ite=ite+1;
f) Calculating the membership U nk of the nth text belonging to the kth class center text by using an iterative formula of the membership to obtain a membership matrix U of the text;
g) Updating the class center text C k of the kth class by using an iterative formula of the class center text to obtain a new class center text C;
h) Updating the weight W km of the m-th keyword in the kth class center text by using an iterative formula of the weight of the class center text to obtain a new class center text weight matrix W;
i) In order to better optimize the weight of the class center text, observing the influence of the characteristic affinity on the class center text weight, namely the influence of the keyword affinity matrix on the class center text weight, introducing an auxiliary variable Q, and updating the auxiliary variable Q by using an iterative formula of the auxiliary variable;
j) Updating the dual variable D by using an iterative formula of the dual variable;
k) Calculating the value J^(ite) of the objective function obtained at the ite-th iteration;
l) Judging whether the iteration loop terminates by comparing the value J^(ite) of the objective function at the ite-th iteration with the value J^(ite-1) at the (ite-1)-th iteration: if |J^(ite) − J^(ite-1)| < ε or ite > MaxIte, the iteration loop terminates and the clustering result is output; otherwise, steps e) to l) are repeated until the termination condition of the iteration loop is met.
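Steps e) to l) form a standard alternating-update loop with a convergence test on the objective. A generic sketch, in which the patent's concrete update formulas are stood in for by a single `step` callable (an assumption for illustration):

```python
def optimize(state, step, objective, max_ite=100, eps=1e-6):
    """Repeat the combined update until the objective changes by less than
    eps or the iteration cap max_ite is reached (steps e)-l))."""
    J_prev = objective(state)
    for ite in range(1, max_ite + 1):
        state = step(state)        # steps f)-j): update U, C, W, Q, D in turn
        J = objective(state)       # step k): evaluate the objective
        if abs(J - J_prev) < eps:  # step l): convergence test
            break
        J_prev = J
    return state, J
```

For example, with a contraction `step = lambda x: 0.5 * x` and `objective = abs`, the loop halts once successive objective values differ by less than `eps`.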
The newly proposed subspace fuzzy clustering objective function takes into account the potential nonlinear information among high-dimensional data features, while an entropy-weighting technique automatically optimizes the attribute weights, further improving the clustering accuracy for complex high-dimensional data. The formulas involved in the implementation steps of the design scheme of the invention are described as follows:
(1) The data feature affinity in step b) is computed with a nonlinear kernel method: the affinity value S_ij between the i-th and j-th features is first calculated by the kernel formula, and the affinity matrix S of all data features is then obtained.
(2) The membership degree U in step f) is determined by the iterative formula u_nk = 1 / Σ_{h=1..K} [ Σ_{m=1..M} w_km (x_nm − c_km)² / Σ_{m=1..M} w_hm (x_nm − c_hm)² ]^(1/(α−1)), wherein x_nm represents the value of the m-th feature of the n-th data sample, c_km represents the value of the m-th feature of the k-th cluster center, w_km represents the value of the attribute weight of the m-th feature of the k-th cluster center, w_hm represents the value of the attribute weight of the m-th feature of the h-th cluster center, K represents the number of cluster centers, M represents the number of features, and α is the fuzzy factor.
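Assuming the standard attribute-weighted fuzzy-c-means membership form u_nk = 1 / Σ_h (d_nk / d_nh)^(1/(α−1)) with d_nk = Σ_m w_km (x_nm − c_km)² (an assumption consistent with the symbols defined for step f), but the patent's formula image is not reproduced in this text), the update can be sketched as:

```python
import numpy as np

def update_membership(X, C, W, alpha):
    """Weighted fuzzy-c-means-style membership update (assumed standard form).

    X : (N, M) data, C : (K, M) cluster centers, W : (K, M) attribute weights.
    Returns U of shape (N, K); each row sums to 1.
    """
    # weighted squared distances d_nk, shape (N, K)
    d = np.einsum('km,nkm->nk', W, (X[:, None, :] - C[None, :, :]) ** 2)
    d = np.maximum(d, 1e-12)  # guard against division by zero
    # ratio[n, k, h] = (d_nk / d_nh)^(1/(alpha-1)); sum over h, then invert
    ratio = (d[:, :, None] / d[:, None, :]) ** (1.0 / (alpha - 1.0))
    return 1.0 / ratio.sum(axis=2)
```

Because u_nk reduces to d_nk^(−1/(α−1)) normalized over clusters, each row of U is a proper membership distribution.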
(3) The cluster center C in step g) is calculated by the formula c_km = Σ_{n=1..N} u_nk^α x_nm / Σ_{n=1..N} u_nk^α.
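The corresponding center update, assuming the standard fuzzy-c-means form c_km = Σ_n u_nk^α x_nm / Σ_n u_nk^α (again an assumption consistent with the symbols in step g); the patent's formula image is not reproduced in this text):

```python
import numpy as np

def update_centers(X, U, alpha):
    """Fuzzy-c-means-style center update: each center is the membership-
    weighted mean of the data, with memberships raised to the fuzzifier."""
    Ua = U ** alpha                              # (N, K)
    return (Ua.T @ X) / Ua.sum(axis=0)[:, None]  # (K, M)
```

With hard (one-hot) memberships this reduces to the per-class mean, as in ordinary k-means.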
(4) The attribute weight W in step h) is calculated by its iterative update formula, wherein γ is the entropy-term parameter and ρ is the optimization step size; D_kmj is the j-th element of the dual variable D corresponding to the m-th feature of the k-th cluster center, and Q_kmj is the j-th element of the auxiliary variable Q corresponding to the m-th feature of the k-th cluster center.
(5) The auxiliary variable Q in step i) is calculated by its iterative update formula, wherein μ is the regularization-term parameter and ρ is the optimization step size.
(6) The dual variable D in step j) is calculated by its iterative update formula.
(7) The objective function J^(ite) in step k) is calculated by its formula, where S_mj is the element in the m-th row and j-th column of the affinity matrix S.
Example two
As shown in fig. 5, the present embodiment provides a text recommendation system based on subspace fuzzy clustering of feature affinities, which specifically includes the following modules:
a keyword obtaining module 201, configured to obtain keywords of a text to be recommended;
A text clustering module 202, configured to cluster all the texts to be recommended based on the keywords of the texts to be recommended and the texts with known attributes;
The text recommending module 203 is configured to sequentially recommend texts according to distances between keywords of all texts to be recommended and keywords with known text attributes;
In the process of clustering all candidate texts in the text clustering module, the affinity information between keywords of all texts to be recommended and texts with known attributes is considered, the obtained affinities are combined with weights of the attributes to construct attribute weight lasso regular terms based on the dimension affinities, and meanwhile, maximum entropy regularization is utilized to realize optimal distribution of the attribute weights.
Here, it should be noted that, each module in the text recommendation system based on feature affinity fuzzy clustering in the present embodiment corresponds to each step in the text recommendation method in the first embodiment one to one, and the specific implementation process is the same, and will not be described here again.
Example III
The present embodiment provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the text recommendation method as described in the above embodiment.
It will be appreciated by those skilled in the art that embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, magnetic disk storage, optical storage, and the like) having computer-usable program code embodied therein.
The storage medium may be a magnetic disk, an optical disc, a Read-Only Memory (ROM), a Random Access Memory (RAM), or the like.
Example IV
The present embodiment provides a computer device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps in the text recommendation method according to the above embodiment when the processor executes the program.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only of the preferred embodiments of the present invention and is not intended to limit the present invention, but various modifications and variations can be made to the present invention by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (8)

1. A text recommendation method, comprising:
Acquiring keywords of a text to be recommended;
clustering all the texts to be recommended based on the keywords of the texts to be recommended and the texts with known attributes;
Sequentially recommending the texts according to the distances between the keywords of all the texts to be recommended and the keywords with known text attributes;
The method further comprises the steps of:
a) Reading in the text information X to be clustered, wherein the text information X contains N texts in total, is divided into K classes of texts with different content, and the number of keywords in each text is M;
b) Initializing the feature affinity matrix S, i.e., solving the feature affinity matrix between the keywords of all samples;
c) Setting the fuzzy factor α, the entropy-term parameter γ and the regularization-term parameter μ in the objective function by a grid search over the initial parameters, and setting the step size ρ of the optimization algorithm;
d) Initializing the iteration counter ite and setting it to 0, setting the maximum iteration number MaxIte to 100, setting the iteration convergence threshold ε to 1e-6, and initializing K cluster centers;
e) Incrementing the iteration counter by 1, i.e., ite = ite + 1;
f) Calculating, by the iterative formula of the membership degree, the membership u_nk of the n-th text to the k-th class-center text, obtaining the membership matrix U of the texts;
g) Updating the class-center text C_k of the k-th class by the iterative formula of the class-center text, obtaining the new class-center text C;
h) Updating the weight w_km of the m-th keyword in the k-th class-center text by the iterative formula of the class-center-text weights, obtaining the new class-center-text weight matrix W;
i) Updating the auxiliary variable Q by the iterative formula of the auxiliary variable;
j) Updating the dual variable D by the iterative formula of the dual variable;
k) Calculating the value J^(ite) of the objective function obtained at the ite-th iteration;
l) Judging whether the iteration loop terminates by comparing the value J^(ite) of the objective function at the ite-th iteration with the value J^(ite-1) at the (ite-1)-th iteration: if |J^(ite) − J^(ite-1)| < ε or ite > MaxIte, terminating the iteration loop and outputting the clustering result; otherwise, repeating steps e) to l) until the termination condition of the iteration loop is met;
in the process of clustering all the texts to be recommended, considering the affinity information between the keywords of all the texts to be recommended and the texts with known attributes, combining the obtained affinities with the attribute weights to construct an attribute-weight lasso regularization term based on dimension affinities, and simultaneously utilizing maximum entropy regularization to realize an optimized distribution of the attribute weights;
the affinity information is mined by a nonlinear kernel function, wherein D_i represents the i-th keyword, D_j represents the j-th keyword, and σ represents a parameter of the kernel function used to compute the similarity of the two keywords;
wherein the objective function J^(ite) is calculated by its formula, in which u_nk is the membership degree; x_nm represents the value of the m-th feature of the n-th data sample; c_km represents the value of the m-th feature of the k-th cluster center; w_km represents the value of the attribute weight of the m-th feature of the k-th cluster center; N represents the number of data samples; K represents the number of cluster centers; M represents the number of features; α is the fuzzy factor; γ represents the entropy-term parameter; μ represents the regularization-term parameter; D_kmj is the j-th element of the dual variable D corresponding to the m-th feature of the k-th cluster center; Q_kjm is an element of the auxiliary variable Q corresponding to the j-th feature of the k-th cluster center; S_mj is the element in the m-th row and j-th column of the affinity matrix S.
2. The text recommendation method of claim 1, wherein the keyword of each text is a feature of each piece of data.
3. The text recommendation method of claim 1, wherein the attribute weights are iteratively optimized using an alternate direction multiplier method to finally cluster all text to be recommended.
4. The text recommendation method of claim 1, wherein elements on a diagonal of a matrix formed by affinities of all keywords are 0.
5. A text recommendation system, comprising:
the keyword acquisition module is used for acquiring keywords of the text to be recommended;
the text clustering module is used for clustering all the texts to be recommended based on the keywords of the texts to be recommended and the texts with known attributes;
the text recommending module is used for recommending texts in sequence according to the distances between the keywords of all texts to be recommended and the keywords with known text attributes;
the method executed by the system specifically comprises the following steps:
a) Reading in text information to be clustered The text information/>Co-inclusion/>Text, which is further divided intoText of different contents, and the number of keywords in each text is/>
B) Initializing a characteristic affinity matrix S, namely solving the characteristic affinity matrix among keywords of all samples;
c) Setting a fuzzy factor in an objective function by adopting a method of rasterizing initial parameters Entropy term parameter/>And canonical term parameters/>Setting the step length/>, of an optimization algorithm
D) Initializing an iteration counterSetting it to 0, setting the maximum iteration number/>Setting the iteration convergence threshold/>, to 100For/>Initialization/>A cluster center;
e) The iteration counter is incremented by 1, i.e
F) Calculation of the first by means of an iterative formula of membershipThe individual text belongs to the/>Membership degree/>, of individual class center textObtaining the membership matrix/>, of the text
G) Updating the first using an iterative formula for class-centric textClass center text of individual class/>Obtaining a new class center text C;
h) Updating the first of the iterative formulas using the weights of the class center text The/>, in the personal class center textWeights of individual keywordsObtaining a new text weight matrix/>, related to the class center
I) Updating auxiliary variables using iterative formulas for auxiliary variables
J) Updating a dual variable using an iterative formula for the dual variable
K) Calculate the firstValues of objective function obtained by iteration/>
L) judging whether the iteration loop is terminated, comparing the firstValue of the secondary objective function/>And/>Value of the secondary objective function/>If the difference between them satisfies/>Or/>Ending the iteration loop, outputting a clustering result, and if not, repeatedly executing the steps e) to l) until the iteration loop ending condition is met;
In the process of clustering all candidate texts in the text clustering module, considering affinity information between all texts to be recommended and keywords of known attribute texts, combining the obtained affinities with weights of attributes to construct attribute weight lasso regular terms based on dimension affinities, and regularizing by using maximum entropy to realize optimal distribution of attribute weights;
The affinity information is mined with a nonlinear kernel function K(v_m, v_j), wherein v_m denotes the m-th keyword, v_j denotes the j-th keyword, and σ is a parameter of the kernel function used to compute the similarity of the two keywords;
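The claim names only "a nonlinear kernel function" with a parameter σ; the exact kernel is not reproduced in this text. A Gaussian (RBF) kernel is a common choice for such keyword-similarity matrices, so the sketch below uses it as an illustrative assumption (the function name and toy vectors are hypothetical):

```python
import numpy as np

def keyword_affinity(V, sigma=1.0):
    """Pairwise affinity matrix S between keyword vectors.

    Illustrative assumption: a Gaussian (RBF) kernel
        s_mj = exp(-||v_m - v_j||^2 / (2 * sigma**2)),
    standing in for the unspecified nonlinear kernel of the claim.
    """
    sq = ((V[:, None, :] - V[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-sq / (2.0 * sigma ** 2))

# Toy keyword embeddings: the first two keywords are near-synonyms,
# the third is unrelated.
V = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]])
S = keyword_affinity(V, sigma=0.5)
print(np.round(S, 3))
```

The resulting S is symmetric with a unit diagonal, and similar keywords receive affinities close to 1, which is the property the dimension-affinity lasso term relies on.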
wherein the objective function J is defined over the following quantities: u_nk is the membership; x_nm is the value of the m-th feature of the n-th data sample; c_km is the value of the m-th feature of the k-th cluster center; w_km is the attribute weight of the m-th feature of the k-th cluster center; N is the number of data samples; K is the number of cluster centers; M is the number of features; α is the fuzzy factor; γ is the entropy term parameter; λ is the regularization term parameter; q_kjm is the m-th element of the auxiliary variable vector corresponding to the j-th feature of the k-th cluster center; and s_mj is the element in the m-th row and j-th column of the affinity matrix S.
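The formula itself is rendered as an image in the source and is not reproduced here. One plausible form, assembled purely from the terms named in the glossary above (fuzzified weighted distances, a maximum-entropy term on the attribute weights, and an affinity-based lasso term) and therefore a hypothetical reconstruction rather than the patented formula, is:

```latex
% Hypothetical reconstruction; the patent's formula image is unavailable.
\begin{aligned}
J ={}& \sum_{n=1}^{N}\sum_{k=1}^{K} u_{nk}^{\alpha}
       \sum_{m=1}^{M} w_{km}\,(x_{nm}-c_{km})^{2}
     + \gamma \sum_{k=1}^{K}\sum_{m=1}^{M} w_{km}\ln w_{km}
     + \lambda \sum_{k=1}^{K} \lVert S\,w_{k} \rVert_{1}, \\
     & \text{s.t.}\quad \sum_{k=1}^{K} u_{nk}=1, \qquad
       \sum_{m=1}^{M} w_{km}=1,
\end{aligned}
```

where, consistent with steps i) and j), the non-smooth lasso term would be handled by ADMM, introducing auxiliary variables q_k = S w_k and corresponding dual variables.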
6. The text recommendation system of claim 5, wherein the keywords of each text serve as the features of each data sample.
7. A computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the steps of the text recommendation method of any one of claims 1-4.
8. A computer device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the program, implements the steps of the text recommendation method of any one of claims 1-4.
CN202111016193.6A 2021-08-31 2021-08-31 Text recommendation method, system, storage medium and equipment Active CN113688229B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111016193.6A CN113688229B (en) 2021-08-31 2021-08-31 Text recommendation method, system, storage medium and equipment

Publications (2)

Publication Number Publication Date
CN113688229A CN113688229A (en) 2021-11-23
CN113688229B true CN113688229B (en) 2024-04-23

Family

ID=78584533

Country Status (1)

Country Link
CN (1) CN113688229B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101408886A (en) * 2007-10-05 2009-04-15 富士通株式会社 Selecting tags for a document by analyzing paragraphs of the document
CN101692223A (en) * 2007-10-05 2010-04-07 富士通株式会社 Refining a search space inresponse to user input
CN109997124A (en) * 2016-10-24 2019-07-09 谷歌有限责任公司 System and method for measuring the semantic dependency of keyword
CN112749344A (en) * 2021-02-04 2021-05-04 北京百度网讯科技有限公司 Information recommendation method and device, electronic equipment, storage medium and program product

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8392249B2 (en) * 2003-12-31 2013-03-05 Google Inc. Suggesting and/or providing targeting criteria for advertisements
US8612293B2 (en) * 2010-10-19 2013-12-17 Citizennet Inc. Generation of advertising targeting information based upon affinity information obtained from an online social network
CN103729360A (en) * 2012-10-12 2014-04-16 腾讯科技(深圳)有限公司 Interest label recommendation method and system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Application of an improved WAF algorithm to query expansion based on semantic analysis; Zou Yang; China Master's Theses Full-text Database, Information Science and Technology; 2014-01-15 (No. 01); pp. I138-2306 *
Research hotspots and implications of user models in recommender systems: a knowledge-graph analysis of core literature from the past decade; Zhou Qinying, et al.; Information Science (No. 09); pp. 168-175 *

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant