CN106294295A - Article similarity recognition method based on word frequency - Google Patents
- Publication number
- CN106294295A (application CN201610653494.2A)
- Authority
- CN
- China
- Prior art keywords
- webpage
- entry
- similarity
- word frequency
- vector
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/10—Text processing
- G06F40/194—Calculation of difference between files
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/90—Details of database functions independent of the retrieved data types
- G06F16/95—Retrieval from the web
- G06F16/951—Indexing; Web crawling techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/20—Natural language analysis
- G06F40/205—Parsing
- G06F40/216—Parsing using statistical methods
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Computational Linguistics (AREA)
- General Health & Medical Sciences (AREA)
- Health & Medical Sciences (AREA)
- Artificial Intelligence (AREA)
- Databases & Information Systems (AREA)
- Data Mining & Analysis (AREA)
- Probability & Statistics with Applications (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
The invention provides an article similarity recognition method based on word frequency. The method includes: reducing the dimensionality of web page feature vectors and mapping them to obtain a similarity expressed as a hash value; computing the difference degree of each entry from its word frequency across web pages; deriving web page weights from the difference degrees; and recommending similar web pages using the product of the candidate recommendation similarity and the web page weights. For large-scale datasets, the method checks for similar data quickly and efficiently, rapidly mines valuable information, and improves the user experience of search engines.
Description
Technical field
The present invention relates to natural language processing, and in particular to an article similarity recognition method based on word frequency.
Background
With the rapid development of Internet technology and related industries, data is growing at an unprecedented rate. Big data brings not only momentum but also challenges. Finding valuable resources in massive Internet data and recommending similar content based on user searches is a vital task of big-data text processing. For near-duplicate detection of web pages, the space and time complexity of the algorithm must be reduced as much as possible to meet user demands. Existing recommendation methods based on text similarity have the following shortcomings: when the data scale is huge, generating and computing web page feature values takes a long time; in specialized domains, they rely too heavily on a base corpus for computing term weights; and their discrimination of short-text similarity is low.
Summary of the invention
To solve the above problems of the prior art, the present invention proposes an article similarity recognition method based on word frequency, comprising:
reducing the dimensionality of the feature vectors of web pages X and Y and mapping them to obtain a similarity sim(X, Y) expressed as a hash value;
computing the difference degree WD(w) of each entry w from its word frequency in web pages;
deriving the web page weights of X and Y from the difference degrees WD(w);
recommending similar web pages using the product of the candidate recommendation similarity sim(X, Y) and the web page weights of X and Y.
Preferably, reducing the dimensionality of the feature vectors of web pages X and Y and mapping them to obtain the similarity sim(X, Y) expressed as a hash value further includes:
obtaining and computing whole-sentence feature values sentence by sentence in a web page, then computing similarity with the edit distance; mapping a multidimensional feature vector to a reduced vector space and producing an x-dimensional feature value (x > 1) from the reduced vector, each dimension taking the value 1 or -1; weighting each feature item in the x-dimensional space; finally mapping each dimension's weight in the x-dimensional vector to 0 or 1 according to a predefined rule, and concatenating these binary digits to obtain the x-bit hash value of the web page vector.
Preferably, the difference degree WD(w) of entry w, computed from the word frequency of each entry w in web pages, is expressed as:
where P is the set of all web pages crawled during collection, T is the set of all entries, and FP(p, w) denotes the word frequency of entry w in web page p;
and the web page weights of X and Y are obtained from the difference degrees as follows:
Preferably, recommending similar web pages using the product of the candidate recommendation similarity sim(X, Y) and the web page weights of X and Y is expressed as:
from the web page similarity sim(X, Y), compute the candidate recommendation similarity with web page weights, sim(X, Y) × IM(X) × IM(Y), and keep and recommend the web pages whose final similarity exceeds a predetermined threshold Φ and whose visit count exceeds a threshold α.
Compared with the prior art, the present invention has the following advantages:
for large-scale datasets, it checks for similar data quickly and efficiently, rapidly mines valuable information, and improves the user experience of search engines.
Brief description of the drawings
Fig. 1 is a flowchart of the article similarity recognition method based on word frequency according to an embodiment of the present invention.
Detailed description
A detailed description of one or more embodiments of the present invention is given below together with the accompanying drawings illustrating its principles. The present invention is described in conjunction with these embodiments, but is not limited to any embodiment; its scope is limited only by the claims, and it covers many alternatives, modifications, and equivalents. Many specific details are set forth in the following description to provide a thorough understanding of the present invention. These details are provided for exemplary purposes, and the present invention may also be practiced according to the claims without some or all of them.
One aspect of the present invention provides an article similarity recognition method based on word frequency. Fig. 1 is a flowchart of the method according to an embodiment of the present invention.
For near-duplicate detection of web pages, the method cyclically reads the entries in the user's search text, takes the predefined set of class clusters, the texts in each cluster, and the word frequency of each entry in each cluster as the initial state, and segments and indexes the search text. Then, in the texts of each class cluster of the training set, it counts the feature words whose frequency exceeds a threshold, computes the feature value of each entry in each cluster, and stores it in the web page feature set, completing the extraction of text features. After the feature value of a web page is obtained, it is sorted as a keyword and indexed; the whole-sentence feature values of the web page to be analyzed are used to search the index of the existing web page library and retrieve candidate pages; finally, similarity between the page to be analyzed and the candidate pages is computed, and the result determines whether the analyzed page is recommended to the user.
The method first defines a feature extraction strategy based on the crawled web page data source, covering the page structure, position information, extraction workflow, rule feedback, output results, and so on. It then preprocesses the pages, determines the content of the obtained pages, and discards entry attributes irrelevant to the extracted information. According to the extraction strategy, the required data items are obtained and saved in XML documents; feature vectors are extracted from the XML documents and clustered, and the clustered documents are stored by class cluster in the corresponding database.
The feature extraction process further includes the following. A set of class clusters {c1, c2, …, cm} is predefined; each cluster cj contains texts (dj1, dj2, …, djn), and each text dj contains entries (t1, t2, …, tk). MM is the threshold word frequency with which entry tk appears in cluster cj, and NM is the number of feature words to be chosen.
(1) Segment the text collection, build an index over it, and initialize the feature set S to empty;
(2) cyclically read the entries in the index file;
(3) compute DF(tk, ci), the number of texts in each class cluster of the training set in which the word frequency of entry tk is no less than MM;
(4) compute the characteristic frequency FF and the average word frequency AN of tk relative to each class cluster, where tfik is the word frequency of feature t in text dik;
(5) compute the feature weight MI(tk, ci) of tk in each class cluster:
MI(tk, ci) = FF × AN × log(Pm(tk, ci) / (P(ci) · Pm(tk)))
where Pm(tk, ci) = DF(tk, ci) / DF(tk), P(ci) = n / N, Pm(tk) = DF(tk) / N,
DF(tk) is the number of texts in the whole training set in which feature tk appears, and N is the total number of texts in the training set;
(6) select the feature word with the largest MI value and add it to set S as the first feature word; then select the next feature word by the principle of minimum interdependence with the entries already in S;
(7) repeat step 6 until the number of feature words reaches the threshold NM.
Optionally, for web pages with an abstract, feature extraction uses the following method, which has higher accuracy:
(1) filter out information at the head and tail of the web page text that is irrelevant to feature extraction, obtaining the denoised text;
(2) obtain the Chinese word segmentation results of the abstract and the body text separately;
(3) classify the parts of speech of the segmentation results of the abstract and the body text; after classification, perform predicate extraction and notional-word recognition on the part-of-speech results of the body text and the abstract;
(4) according to a preset merging rule set, merge the part-of-speech classification result and the notional-word recognition result of the body text after predicate extraction to obtain the merged result of the original text; likewise merge the part-of-speech classification result and the notional-word recognition result of the abstract to obtain the merged result of the abstract;
(5) perform unit merging on the merged results of the body text and the abstract to obtain the information-unit merging results of both;
(6) cluster the unit merging result of the body text according to a characterization rule set to obtain the feature extraction result of the clustered text; the characterization rule set consists of a weight allocation strategy, statement segmentation rules for the unit merging result, atomic-sentence segmentation rules, voice extraction rules, and tone recognition rules.
The clustering process further includes:
(6.1) reduce the dimensionality of the input web page text to obtain, for each feature word, a pair of the word and its frequency, denoted <word, value>;
(6.2) sort the pairs in lexicographic order and build an index over the sorted order;
(6.3) establish the correspondence between the index and the feature words, converting each feature word's pair <word, value> into the pair of its index and its word frequency, denoted as the vector <index, value>;
(6.4) define the iteration counter t and the maximum number of iterations tmax, and initialize t = 0; in round t, take n index vectors from the index vector set, denoted N(t) = {N1(t), N2(t), …, Nn(t)}, where Ni(t) = <indexi(t), valuei(t)> is the i-th index vector of round t; compute the regularized similarity of the i-th and j-th index vectors of round t, Nsim(i, j) = Nj(t) · Ni(t);
(6.5) denote the weights of the n index vectors of round t as WEN(t) = {WEN1(t), WEN2(t), …, WENn(t)}, where WENi(t) is the weight of Ni(t), initialized to 1; compute the similarity distance matrix of Ni(t) and Nj(t):
S(t)(i, j) = (1 + WENi(t) / WENj(t)) / Nsim(i, j)
(6.6) pass S(t)(i, j) to the Affinity Propagation algorithm to cluster the n index vectors N(t) of round t, obtaining the mt preliminary cluster centers of round t, denoted C(t) = {C1(t), C2(t), …, Cmt(t)}; increase t by 1 and test whether t = tmax; if so, perform step 6.8; otherwise take the n index vectors N(t) = {N1(t), N2(t), …, Nn(t)} of round t from the index vector set;
(6.7) append the mt-1 cluster centers C(t-1) of round t-1 to the n index vectors N(t) of round t, obtaining n + mt-1 index vectors; assign the updated n + mt-1 index vectors N(t)' to the index vectors N(t) of round t and return to step 6.5, thereby obtaining the mt final cluster centers C(t) of round t;
(6.8) obtain the cluster centers of every round, completing the clustering.
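Steps 6.4 and 6.5 can be sketched as follows, treating each <index, value> vector as a sparse dictionary from index to frequency. The handling of Nsim = 0 is an assumption, since the description does not define the distance when two vectors share no index.

```python
def nsim(vi: dict, vj: dict) -> float:
    """Regularized similarity Nsim(i, j) as the dot product of two
    sparse <index, value> vectors (step 6.4)."""
    if len(vj) < len(vi):          # iterate over the smaller vector
        vi, vj = vj, vi
    return sum(v * vj.get(k, 0.0) for k, v in vi.items())

def similarity_distance(vi: dict, vj: dict, wi: float = 1.0, wj: float = 1.0) -> float:
    """S(i, j) = (1 + WENi / WENj) / Nsim(i, j) from step 6.5.
    Returns infinity when the vectors share no index (an assumed convention)."""
    s = nsim(vi, vj)
    if s == 0.0:
        return float("inf")
    return (1.0 + wi / wj) / s
```

This matrix would then be handed to an Affinity Propagation implementation, as the description prescribes in step 6.6.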
After the feature values are obtained, the similarity computation of the present invention on one hand obtains and computes whole-sentence feature values sentence by sentence and then computes similarity with the edit distance. A multidimensional feature vector is mapped to a reduced vector space, and an x-dimensional feature value (x > 1) is produced from the reduced vector, each dimension taking the value 1 or -1; each feature item is weighted in the x-dimensional space; finally each dimension's weight in the x-dimensional vector is mapped to 0 or 1 according to a predefined rule, and these binary digits are concatenated to obtain the x-bit hash value of the web page vector. Similarity detection then proceeds as follows:
Step 1: initialize an x-dimensional vector v to 0 and an x-bit binary number fbin to 0.
Step 2: for a sentence si in the whole-sentence set SP, use the SHA1 hash algorithm to obtain an x-bit hash value.
Step 3: define the function g(hj(si)), where hj(si) is the binary value of the j-th bit of si's hash; let vj denote the j-th dimension of vector v; for j from 1 to x, compute the weight
vj = vj + W(si) × g(hj(si))
where W(si) is the weight of sentence si.
Step 4: if unprocessed sentences remain in set SP, jump to step 2 and iterate; otherwise go to step 5.
Step 5: let fbinj denote the j-th bit of fbin; for j from 1 to x, if vj > 0 then fbinj = 1, and if vj ≤ 0 then fbinj = 0.
Step 6: take the resulting binary sequence fbin as the feature value of the current whole sentence. For given web pages X and Y, combine the feature values of their whole sentences into the whole-sentence feature sets SX and SY; let |SX| and |SY| denote the number of elements in each set, and |SX ∩ SY| the number of approximate sentences shared by the two sets; compute the similarity of X and Y:
sim(X, Y) = |SX ∩ SY| / (|SX| + |SY| - |SX ∩ SY|)
Two whole sentences a and b are judged to be approximate sentences if the similarity of their respective feature values exceeds a predefined threshold η.
Step 7: if sim(X, Y) > λ (a preset similarity threshold), web pages X and Y are determined to be similar; otherwise they are dissimilar.
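Steps 1 through 6 follow a simhash-style fingerprinting scheme and can be sketched as below. The definition of g(hj(si)) is not reproduced in the text, so the standard simhash choice (+1 for bit 1, -1 for bit 0) is assumed, as is x = 64; the final set similarity is the Jaccard formula given in step 6.

```python
import hashlib

def sentence_hash(s: str, x: int = 64) -> int:
    """x-bit SHA1 hash of a sentence (step 2)."""
    digest = hashlib.sha1(s.encode("utf-8")).digest()
    return int.from_bytes(digest, "big") & ((1 << x) - 1)

def simhash(sentences, weights=None, x: int = 64) -> int:
    """Weighted x-bit fingerprint of a sentence set (steps 1-6).
    g(h_j(s)) is assumed to map bit 1 to +1 and bit 0 to -1."""
    weights = weights or {}
    v = [0.0] * x
    for s in sentences:
        h = sentence_hash(s, x)
        w = weights.get(s, 1.0)               # W(s), default weight 1
        for j in range(x):
            v[j] += w if (h >> j) & 1 else -w
    fbin = 0
    for j in range(x):                         # step 5: sign of each dimension
        if v[j] > 0:
            fbin |= 1 << j
    return fbin

def page_similarity(sx: set, sy: set) -> float:
    """sim(X, Y) = |SX ∩ SY| / (|SX| + |SY| - |SX ∩ SY|)."""
    inter = len(sx & sy)
    return inter / (len(sx) + len(sy) - inter) if (sx or sy) else 1.0
```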
In the search engine's page recommendation process, the present invention uses different methods for pages with different visit counts.
For pages whose visit count exceeds the predetermined threshold α, recommendations for the user are completed as follows:
1.1 For each user u in the search user set U', find the similar users u' of u, where users who have browsed the same web pages are similar users. For each entry t browsed by a similar user u', assign a weight according to the entry's rank; for each entry, compute the total weight:
Wgh(ti) = θ × Fr(ti) + ζ × Se(ti)
where Fr(ti) is the number of times all users used the entry to browse pages, Se(ti) is the browsing order of the entry, and θ, ζ are regulation coefficients satisfying θ + ζ = 1.
1.2 Sort entries by total weight in descending order and merge synonymous entries; finally, recommend to user u the web pages corresponding to the predetermined number of entries with the largest weights.
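Steps 1.1-1.2 can be sketched as follows. The choice θ = 0.6 is illustrative (the text only requires θ + ζ = 1), and synonym merging is elided.

```python
def term_weight(fr: float, se: float, theta: float = 0.6) -> float:
    """Wgh(t) = theta * Fr(t) + zeta * Se(t), with theta + zeta = 1 (step 1.1)."""
    zeta = 1.0 - theta
    return theta * fr + zeta * se

def top_terms(stats: dict, k: int, theta: float = 0.6) -> list:
    """stats maps each entry to (Fr, Se); return the k entries with the
    largest total weight, in descending order (step 1.2, synonyms elided)."""
    ranked = sorted(stats, key=lambda t: term_weight(*stats[t], theta), reverse=True)
    return ranked[:k]
```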
For pages whose visit count is below the predetermined threshold α, search for the pages with the highest similarity to the current page and the most visits, and recommend to the user the entries with larger total weights in the computed pages. The concrete steps are:
2.1 Evaluate the difference degree of entry w as follows,
where P is the set of all web pages crawled during collection, T is the set of all entries, and FP(p, w) is the word frequency of entry w in web page p.
2.2 Pages containing entries of higher difference degree receive higher web page weights; compute the web page weights accordingly.
Then, from the aforementioned web page similarity sim(X, Y), compute the candidate recommendation similarity with web page weights, sim(X, Y) × IM(X) × IM(Y), and keep and recommend the pages whose final similarity exceeds the predetermined threshold Φ and whose visit count exceeds the threshold α.
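The final filtering step can be sketched as below. The candidate tuple layout and the function name are illustrative, since the description gives only the score formula sim(X, Y) × IM(X) × IM(Y) and the two thresholds Φ and α.

```python
def recommend(candidates, phi: float, alpha: int) -> list:
    """Keep pages whose score sim(X, Y) * IM(X) * IM(Y) exceeds phi and
    whose visit count exceeds alpha, ranked by score (per step 2.2).

    candidates: iterable of (page, sim, im_x, im_y, visits) tuples
                (an assumed layout for illustration).
    """
    kept = []
    for page, sim, im_x, im_y, visits in candidates:
        score = sim * im_x * im_y
        if score > phi and visits > alpha:
            kept.append((page, score))
    return sorted(kept, key=lambda p: p[1], reverse=True)
```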
Further optionally, the above web page weights may use an entry semantic-similarity quadtree, whose result is then combined with the original similarity sim(X, Y) by weighted summation. The quadtree contains leaf nodes and non-leaf nodes: in each leaf node, all entries whose similarity exceeds the threshold Φ are arranged in descending order and saved sequentially, while entry count information is saved in the non-leaf nodes. When computing the semantic similarity between feature word vectors, if features wik and wjl of some dimension of vectors vi and vj satisfy condition 1 or 2 below, the similarity result of vi and vj is weighted.
Condition 1: if wjl belongs to the descending entry queue of some leaf node in the quadtree and wik does not, determine the position of wik in that descending queue according to its similarity with the other entries of the queue containing wjl.
Condition 2: if neither wik nor wjl belongs to the descending entry queue of any leaf node in the quadtree, and the similarities of wik and wjl to both the most similar and the least similar feature word in the descending entry queue of some leaf node are less than the threshold Φ, create a new branch and insert wik and wjl into the feature word queue of that branch's leaf node.
After the quadtree is built, starting from each entry of vi, find the most similar entry in vj and record the similarity between the entries; repeat this search for the other entries of vi until every entry of vi has found its most similar counterpart in vj. Sum the resulting similarities and divide by the number of entries in vi to obtain the similarity sim(vi, vj); then compute the mean of sim(vi, vj) and sim(vj, vi) as the semantic similarity of vectors vi and vj. Finally, weight this semantic similarity to obtain the weighted semantic similarity.
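The term-matching average just described can be sketched as follows. Here term_sim stands in for the quadtree-backed entry similarity, which is not reproduced; any pairwise term-similarity function fits.

```python
def directed_sim(vi, vj, term_sim) -> float:
    """For each entry of vi, take its best match in vj, then average
    over |vi| (the sim(vi, vj) of the description)."""
    if not vi or not vj:
        return 0.0
    total = sum(max(term_sim(a, b) for b in vj) for a in vi)
    return total / len(vi)

def semantic_similarity(vi, vj, term_sim) -> float:
    """Mean of the two directed similarities sim(vi, vj) and sim(vj, vi)."""
    return 0.5 * (directed_sim(vi, vj, term_sim) + directed_sim(vj, vi, term_sim))
```

For example, with an exact-match term similarity, vectors {"a", "b"} and {"a"} score 0.5 in one direction and 1.0 in the other.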
In summary, the present invention proposes an article similarity recognition method based on word frequency that, for large-scale datasets, checks for similar data quickly and efficiently, rapidly mines valuable information, and improves the user experience of search engines.
Obviously, those skilled in the art should understand that each module or step of the present invention may be realized with a general-purpose computing system; they may be concentrated in a single computing system or distributed over a network formed by multiple computing systems; optionally, they may be realized with program code executable by a computing system, and thus may be stored in a storage system and executed by a computing system. The present invention is therefore not restricted to any specific combination of hardware and software.
It should be understood that the above embodiments of the present invention serve only to exemplify or explain its principles and do not limit it. Any modification, equivalent replacement, improvement, and so on made without departing from the spirit and scope of the present invention shall be included within its protection scope. Furthermore, the appended claims are intended to cover all changes and modifications falling within the scope and boundary of the claims, or equivalents of such scope and boundary.
Claims (4)
1. An article similarity recognition method based on word frequency, characterized by comprising:
reducing the dimensionality of the feature vectors of web pages X and Y and mapping them to obtain a similarity sim(X, Y) expressed as a hash value;
computing the difference degree WD(w) of each entry w from its word frequency in web pages;
deriving the web page weights of X and Y from the difference degrees WD(w);
recommending similar web pages using the product of the candidate recommendation similarity sim(X, Y) and the web page weights of X and Y.
2. The method according to claim 1, characterized in that reducing the dimensionality of the feature vectors of web pages X and Y and mapping them to obtain the similarity sim(X, Y) expressed as a hash value further includes:
obtaining and computing whole-sentence feature values sentence by sentence in a web page, then computing similarity with the edit distance; mapping a multidimensional feature vector to a reduced vector space and producing an x-dimensional feature value (x > 1) from the reduced vector, each dimension taking the value 1 or -1; weighting each feature item in the x-dimensional space; finally mapping each dimension's weight in the x-dimensional vector to 0 or 1 according to a predefined rule, and concatenating these binary digits to obtain the x-bit hash value of the web page vector.
3. The method according to claim 2, characterized in that the difference degree WD(w) of entry w, computed from the word frequency of each entry w in web pages, is expressed as:
where P is the set of all web pages crawled during collection, T is the set of all entries, and FP(p, w) is the word frequency of entry w in web page p;
and the web page weights of X and Y are obtained from the difference degrees as follows:
4. The method according to claim 3, characterized in that recommending similar web pages using the product of the candidate recommendation similarity sim(X, Y) and the web page weights of X and Y is expressed as:
from the web page similarity sim(X, Y), computing the candidate recommendation similarity with web page weights, sim(X, Y) × IM(X) × IM(Y), and keeping and recommending the pages whose final similarity exceeds a predetermined threshold Φ and whose visit count exceeds a threshold α.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610653494.2A CN106294295B (en) | 2016-08-10 | 2016-08-10 | Article similarity recognition method based on word frequency |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106294295A true CN106294295A (en) | 2017-01-04 |
CN106294295B CN106294295B (en) | 2019-08-16 |
Family
ID=57668260
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2010146288A (en) * | 2008-12-18 | 2010-07-01 | Dainippon Printing Co Ltd | Method, apparatus, program, and recording medium for providing information of combination merchandise and gathering reaction information of customers, |
CN104598532A (en) * | 2014-12-29 | 2015-05-06 | 中国联合网络通信有限公司广东省分公司 | Information processing method and device |
Non-Patent Citations (2)
Title |
---|
QIN Yuping et al., "Paper plagiarism detection algorithm based on local word-frequency fingerprints", Computer Engineering * |
ZHAO Junjie et al., "A paper plagiarism determination algorithm based on paragraph word-frequency statistics", Computer Technology and Development * |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109615001A (en) * | 2018-12-05 | 2019-04-12 | 上海恺英网络科技有限公司 | A kind of method and apparatus identifying similar article |
CN109886787A (en) * | 2019-02-22 | 2019-06-14 | 清华大学 | Discrete social recommendation method and system |
CN112182230A (en) * | 2020-11-27 | 2021-01-05 | 北京健康有益科技有限公司 | Text data classification method and device based on deep learning |
CN112182230B (en) * | 2020-11-27 | 2021-03-16 | 北京健康有益科技有限公司 | Text data classification method and device based on deep learning |
Also Published As
Publication number | Publication date |
---|---|
CN106294295B (en) | 2019-08-16 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106294733B (en) | Page detection method based on text analyzing | |
US20230195773A1 (en) | Text classification method, apparatus and computer-readable storage medium | |
CN110750640B (en) | Text data classification method and device based on neural network model and storage medium | |
CN106294736A (en) | Text feature based on key word frequency | |
US20190347281A1 (en) | Apparatus and method for semantic search | |
CN108132927B (en) | Keyword extraction method for combining graph structure and node association | |
CN106156272A (en) | A kind of information retrieval method based on multi-source semantic analysis | |
US20040162827A1 (en) | Method and apparatus for fundamental operations on token sequences: computing similarity, extracting term values, and searching efficiently | |
JP2002014999A (en) | Similar document retrieval device and relative keyword extract device | |
Wang et al. | Ptr: Phrase-based topical ranking for automatic keyphrase extraction in scientific publications | |
JP2012524314A (en) | Method and apparatus for data retrieval and indexing | |
CN108647322B (en) | Method for identifying similarity of mass Web text information based on word network | |
CN112632228A (en) | Text mining-based auxiliary bid evaluation method and system | |
CN106844632A (en) | Based on the product review sensibility classification method and device that improve SVMs | |
CN108875065B (en) | Indonesia news webpage recommendation method based on content | |
US20220180317A1 (en) | Linguistic analysis of seed documents and peer groups | |
CN108090178A (en) | A kind of text data analysis method, device, server and storage medium | |
CN111241410A (en) | Industry news recommendation method and terminal | |
CN106294295A (en) | Article similarity recognition method based on word frequency | |
CN114138979B (en) | Cultural relic safety knowledge map creation method based on word expansion unsupervised text classification | |
Zaware et al. | Text summarization using tf-idf and textrank algorithm | |
Sintia et al. | Product Codefication Accuracy With Cosine Similarity And Weighted Term Frequency And Inverse Document Frequency (TF-IDF) | |
CN111563361B (en) | Text label extraction method and device and storage medium | |
Guadie et al. | Amharic text summarization for news items posted on social media | |
CN114580557A (en) | Document similarity determination method and device based on semantic analysis |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |