CN106502921A - A user-activity-based new-update queue caching method for social networks - Google Patents

A user-activity-based new-update queue caching method for social networks

Info

Publication number
CN106502921A
CN106502921A (application CN201610939422.4A)
Authority
CN
China
Prior art keywords
user
list
new dynamic
queue
caching
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201610939422.4A
Other languages
Chinese (zh)
Inventor
唐雪飞
吴昊天
肖文倩
李贞昊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China filed Critical University of Electronic Science and Technology of China
Priority to CN201610939422.4A priority Critical patent/CN106502921A/en
Publication of CN106502921A publication Critical patent/CN106502921A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0844Multiple simultaneous or quasi-simultaneous cache accessing
    • G06F12/0846Cache with multiple tag or data arrays being simultaneously accessible
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0877Cache access modes
    • G06F12/0886Variable-length word access

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Transfer Between Computers (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a user-activity-based new-update queue caching method for social networks, comprising the following steps: S1, generate a follower list; S2, find the most active user and look up that user's dedicated new-update cache list; S3, calculate the user cache length limit; S4, reduce the dedicated new-update cache list to half its original length; S5, add the new update to the head of that user's cache list; S6, judge whether the follower list is empty; S7, judge whether the total system cache size exceeds the preset total system cache capacity limit; S8, remove the least active user from the queue; S9, reduce that user's dedicated new-update cache list to half its original length; S10, judge whether the total system cache size exceeds the preset total cache capacity limit. The invention establishes an independent new-update cache queue for each user, dynamically updates the caches of other users when a user publishes an update, and automatically deletes cached data when the system cache usage is too high.

Description

A user-activity-based new-update queue caching method for social networks
Technical field
The invention belongs to the technical field of data storage, and more particularly relates to a user-activity-based new-update queue caching method for social networks.
Background technology
With the development of the Internet, applications that let users publish new updates, receive the new updates posted by the users they follow, and obtain other personalized, customized information have become more and more widespread. However, many current implementations store data by building a unified, network-wide message sequence, build a mapping network using a hash table, or use a skip list data structure to look up the related content in the database, and then build a temporary new-update queue or set up a new-update cache queue in order to obtain the updates of the users a given user follows. In a complex social network where activity levels differ greatly between users, building a new-update queue on demand requires a large amount of data filtering, which is not only very time-consuming but also makes the efficiency and hit rate of the cache hard to guarantee, giving a poor user experience. In the information lookup stage, if a hash table is used, then because a hash table is based on an array, once it is partially filled the array is hard to extend and awkward to relocate, and unavoidable collisions occur whose resolution consumes considerable memory. An implementation based on a skip list data structure likewise requires considerable storage space and has an obvious memory-waste disadvantage.
To overcome the above shortcomings, a new-update queue caching system better suited to this kind of social network needs to be designed, one that gives users a more efficient experience, reduces the server's computational burden when users access the system, improves scalability, and makes effective use of memory resources.
Summary of the invention
The object of the invention is to overcome the above deficiencies of the prior art and to provide a user-activity-based new-update queue caching method for social networks that establishes an independent new-update cache queue for each user, dynamically updates the caches of other users when a user publishes an update, automatically deletes cached data when the system cache usage is too high, and keeps the caching system operating optimally.
The object of the invention is achieved through the following technical solution: a user-activity-based new-update queue caching method for social networks, comprising the following steps (an illustrative code sketch of the complete flow is given after step S10 below):
S1, generate a new follower list: read the follow relationships between users from the database; for the user publishing the update, retrieve the identifiers of all users who follow that user, and store the retrieved user identifiers in the generated follower list;
S2, find the user with the highest activity in the follower list and take that user out of the list, then look up that user's dedicated new-update cache list in the caching system; if the dedicated new-update cache list does not exist, create one for the user and then perform step S5; otherwise perform step S3;
S3, calculate the user cache length limit based on user activity: compute the user cache length limit for the user's dedicated new-update cache list, and judge whether the current size of the new-update cache queue exceeds the user cache length limit; if it does, perform step S4, otherwise perform step S5;
S4, sort the entries in the user's dedicated new-update cache list by the timestamps carried by the updates, remove the oldest entries from the list and keep only the newest half of the data, so that the user's dedicated new-update cache list is reduced to half its original length; set an overflow flag for the user, then perform step S5;
S5, add the new update, as an event, to the head of the user's dedicated new-update cache list;
S6, judge whether the follower list is empty; if it is not empty, return to step S2; otherwise perform step S7;
S7, compute the total system cache size and judge whether it exceeds the preset total system cache capacity limit; if it does, create a new queue and place all users in the system into that queue sorted by activity in ascending order; otherwise end caching;
S8, remove the user with the lowest activity from the queue obtained in step S7, and read that user's dedicated new-update cache list from the caching system;
S9, sort the data in the dedicated new-update cache list obtained in step S8 by the timestamps carried by the updates, remove the oldest entries from the list and keep only the newest half of the data, so that the user's dedicated new-update cache list is reduced to half its original length; then set an overflow flag for the user;
S10, recompute the total system cache size; if it still exceeds the preset total cache capacity limit, return to step S8; otherwise end caching.
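For illustration only, the following minimal Python sketch restates the write path of steps S1 to S10, using in-memory dictionaries in place of the database and the caching system; the names (follow_db, activity, cache_limit_for, halve_list, publish_update) and the concrete limit values are assumptions and are not taken from the patent.

    import time

    # Illustrative in-memory stand-ins for the follow database and the cache system.
    follow_db = {}        # poster_id -> set of follower ids
    activity = {}         # user_id -> activity score (higher means more active)
    cache = {}            # user_id -> dedicated new-update list, newest entry first
    overflow_flag = set() # users whose dedicated list has been truncated

    TOTAL_CAPACITY_LIMIT = 10_000   # assumed system-wide entry limit (S7/S10)

    def cache_limit_for(user_id):
        """S3: per-user length limit; an inverse relation to activity is assumed."""
        return max(10, int(200 / (1 + activity.get(user_id, 0))))

    def halve_list(user_id):
        """S4/S9: sort by timestamp, keep only the newest half, set the overflow flag."""
        entries = sorted(cache[user_id], key=lambda e: e[0], reverse=True)
        cache[user_id] = entries[: max(1, len(entries) // 2)]
        overflow_flag.add(user_id)

    def total_cache_size():
        return sum(len(entries) for entries in cache.values())

    def publish_update(poster_id, update):
        # S1: build the follower list for the posting user.
        followers = sorted(follow_db.get(poster_id, ()),
                           key=lambda u: activity.get(u, 0), reverse=True)
        event = (time.time(), update)
        for follower in followers:                       # S2/S6: most active first
            entries = cache.setdefault(follower, [])     # S2: create list if missing
            if entries and len(entries) > cache_limit_for(follower):   # S3
                halve_list(follower)                     # S4
            cache[follower].insert(0, event)             # S5: head of the list
        # S7-S10: global capacity check; shrink the least active users' lists first.
        if total_cache_size() > TOTAL_CAPACITY_LIMIT:
            for user in sorted(cache, key=lambda u: activity.get(u, 0)):   # S8
                halve_list(user)                                           # S9
                if total_cache_size() <= TOTAL_CAPACITY_LIMIT:             # S10
                    break

In this sketch the same halving routine serves both step S4 (per-user limit exceeded) and step S9 (total capacity exceeded), mirroring the reuse of the same operation in the method.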
The beneficial effects of the invention are as follows: the invention establishes an independent new-update cache queue for each user, so that when a user publishes an update the caches of other users can be updated dynamically, and cached data is deleted automatically when the system cache usage is too high, keeping the caching system operating optimally. Designing the new-update cache queue thresholds on the basis of activity level markedly improves the effective utilization rate of the cached data. From the user's point of view, because the independent new-update cache queue and the update data are mapped one-to-one, a high hit rate of the new-update cache and a fast data retrieval speed are guaranteed when the user fetches new updates, giving a better user experience.
Description of the drawings
Fig. 1 is a flow chart of the new-update queue caching method for social networks according to the invention.
Specific embodiment
The invention addresses the scenario in a social network in which users who follow one another publish updates, and provides a new user-activity-based new-update caching system for social networks. In this system, after a user produces a new update, the users following that user are extracted to generate a follower list, and a dedicated new-update cache queue is set up for each user. User activity is computed according to a designed algorithm from factors such as how often the user checks for new updates and how often the user logs in. The threshold calculation scheme for each user's dedicated new-update cache queue is then determined from the user's activity result together with factors such as the running state of the social network; on the whole, the higher a user's activity, the smaller the threshold of that user's new-update cache queue. Because the caching system as a whole has a limited amount of memory, a total cache capacity limit is set according to the actual total memory. Each update event in a dedicated cache queue carries a timestamp.
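As an illustration of the threshold scheme described above, the following sketch computes a per-user cache length limit from check frequency and login frequency; the weights, the inverse relationship and the bounds are assumptions, since the description does not give a concrete formula.

    def compute_cache_limit(checks_per_day, logins_per_day,
                            base_limit=200, min_limit=10):
        """Illustrative per-user cache length limit.

        The description only states that activity is derived from factors such
        as how often the user checks for updates and how often the user logs in,
        and that a more active user gets a smaller queue threshold; the weights
        and the inverse form below are assumptions, not the patented formula.
        """
        activity_score = 0.7 * checks_per_day + 0.3 * logins_per_day
        return max(min_limit, int(base_limit / (1 + activity_score)))

Any monotonically decreasing function of the activity score would satisfy the stated principle that more active users receive smaller queue thresholds.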
The technical solution of the invention is further described below with reference to the accompanying drawing.
As shown in Fig. 1, a user-activity-based new-update queue caching method for social networks performs the following steps when a user publishes a new update:
S1, generate a new follower list: read the follow relationships between users from the database; for the user publishing the update, retrieve the identifiers of all users who follow that user, and store the retrieved user identifiers in the generated follower list;
S2, find the user with the highest activity in the follower list and take that user out of the list, then look up that user's dedicated new-update cache list in the caching system; if the dedicated new-update cache list does not exist, create one for the user and then perform step S5; otherwise perform step S3;
S3, calculate the user cache length limit based on user activity: compute the user cache length limit for the user's dedicated new-update cache list, and judge whether the current size of the new-update cache queue exceeds the user cache length limit; if it does, perform step S4, otherwise perform step S5;
S4, sort the entries in the user's dedicated new-update cache list by the timestamps carried by the updates, remove the oldest entries from the list and keep only the newest half of the data, so that the user's dedicated new-update cache list is reduced to half its original length; set an overflow flag for the user, then perform step S5;
S5, add the new update, as an event, to the head of the user's dedicated new-update cache list;
S6, judge whether the follower list is empty; if it is not empty, return to step S2; otherwise perform step S7;
S7, compute the total system cache size and judge whether it exceeds the preset total system cache capacity limit; if it does, create a new queue and place all users in the system into that queue sorted by activity in ascending order; otherwise end caching;
S8, remove the user with the lowest activity from the queue obtained in step S7, and read that user's dedicated new-update cache list from the caching system;
S9, sort the data in the dedicated new-update cache list obtained in step S8 by the timestamps carried by the updates, remove the oldest entries from the list and keep only the newest half of the data, so that the user's dedicated new-update cache list is reduced to half its original length; then set an overflow flag for the user;
S10, recompute the total system cache size; if it still exceeds the preset total cache capacity limit, return to step S8; otherwise end caching.
Unlike the master-slave server architecture that is currently mainstream, the invention can distribute partitioned queue elements to different cache hosts for categorized data processing, which gives the system stronger scalability and improves both the convenience of data management and the efficiency of data processing. The caching mechanism of the invention is based on user activity: the size of the dedicated new-update cache queue is controlled dynamically for users of different activity levels, following the principle that the cache queue of a highly active user is kept relatively small. By designing thresholds for different users and controlling the cache queue sizes in this way, the effective memory utilization of the caching system is markedly improved and the probability of fragmentation is effectively reduced.
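A minimal sketch of one way the dedicated queues could be partitioned across cache hosts is given below, assuming assignment by hashing the user identifier; the host names and the hash choice are illustrative and are not specified by the patent.

    import hashlib

    CACHE_HOSTS = ["cache-host-a", "cache-host-b", "cache-host-c"]  # assumed host names

    def host_for_user(user_id: str) -> str:
        """Assign a user's dedicated new-update cache queue to a cache host.

        The description only says that queue partitions can be distributed across
        different cache hosts; hashing the user identifier, as done here, is one
        plausible partitioning scheme and is not taken from the patent.
        """
        digest = hashlib.md5(user_id.encode("utf-8")).hexdigest()
        return CACHE_HOSTS[int(digest, 16) % len(CACHE_HOSTS)]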
The invention establishes a dedicated cache queue for each user, so that the events in a user's dedicated cache queue are mapped one-to-one to the related update data, and the information lookup step is eliminated. Building a mapping network with a hash table, or using a skip list data structure, both have an obvious memory-waste disadvantage, whereas the invention is fast and its data handling is simple. When a follower fetches new updates, the update information only needs to be taken from the follower's own dedicated cache queue, parsed against the database and passed on for front-end rendering, with a first-fetch cache hit rate of 100% for new updates.
The invention sets a total capacity limit on the caching system. Each time a user publishes an update and the mapping process completes, that is, after the update has been inserted one by one into the relevant dedicated new-update cache queues, a memory check is also performed on the caching system to prevent the caching system from running out of memory. When a user's dedicated new-update cache queue has had its size reduced because the queue was full, an overflow prompt is recorded against that user's identifier, so that the user can learn that some new updates have been omitted.
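The read path and the overflow prompt described above can be sketched as follows, reusing the structures from the write-path sketch after step S10; clearing the flag after a single prompt is an assumption about when the user is notified.

    def read_feed(user_id):
        """Read path for a follower fetching new updates.

        The entries come straight from the user's own dedicated list, so no
        search over a global message sequence is needed; if the overflow flag
        set in S4/S9 is present, the caller can tell the user that older
        updates were dropped. The structure names reuse the write-path sketch
        above and are illustrative only.
        """
        entries = cache.get(user_id, [])
        was_truncated = user_id in overflow_flag
        overflow_flag.discard(user_id)   # prompt the user only once
        return entries, was_truncated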
Those of ordinary skill in the art will appreciate that the embodiments described here are intended to help the reader understand the principles of the invention, and it should be understood that the scope of protection of the invention is not limited to these particular statements and embodiments. Those of ordinary skill in the art can, in accordance with the technical teachings disclosed by the invention and without departing from its essence, make various other specific variations and combinations, and such variations and combinations remain within the scope of protection of the invention.

Claims (1)

1. A user-activity-based new-update queue caching method for social networks, characterized in that it comprises the following steps:
S1, generate a new follower list: read the follow relationships between users from the database; for the user publishing the update, retrieve the identifiers of all users who follow that user, and store the retrieved user identifiers in the generated follower list;
S2, find the user with the highest activity in the follower list and take that user out of the list, then look up that user's dedicated new-update cache list in the caching system; if the dedicated new-update cache list does not exist, create one for the user and then perform step S5; otherwise perform step S3;
S3, calculate the user cache length limit based on user activity: compute the user cache length limit for the user's dedicated new-update cache list, and judge whether the current size of the new-update cache queue exceeds the user cache length limit; if it does, perform step S4, otherwise perform step S5;
S4, sort the entries in the user's dedicated new-update cache list by the timestamps carried by the updates, remove the oldest entries from the list and keep only the newest half of the data, so that the user's dedicated new-update cache list is reduced to half its original length; set an overflow flag for the user, then perform step S5;
S5, add the new update, as an event, to the head of the user's dedicated new-update cache list;
S6, judge whether the follower list is empty; if it is not empty, return to step S2; otherwise perform step S7;
S7, compute the total system cache size and judge whether it exceeds the preset total system cache capacity limit; if it does, create a new queue and place all users in the system into that queue sorted by activity in ascending order; otherwise end caching;
S8, remove the user with the lowest activity from the queue obtained in step S7, and read that user's dedicated new-update cache list from the caching system;
S9, sort the data in the dedicated new-update cache list obtained in step S8 by the timestamps carried by the updates, remove the oldest entries from the list and keep only the newest half of the data, so that the user's dedicated new-update cache list is reduced to half its original length; then set an overflow flag for the user;
S10, recompute the total system cache size; if it still exceeds the preset total cache capacity limit, return to step S8; otherwise end caching.
CN201610939422.4A 2016-10-25 2016-10-25 A user-activity-based new-update queue caching method for social networks Pending CN106502921A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610939422.4A CN106502921A (en) 2016-10-25 2016-10-25 A user-activity-based new-update queue caching method for social networks

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610939422.4A CN106502921A (en) 2016-10-25 2016-10-25 A user-activity-based new-update queue caching method for social networks

Publications (1)

Publication Number Publication Date
CN106502921A true CN106502921A (en) 2017-03-15

Family

ID=58319119

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610939422.4A Pending CN106502921A (en) 2016-10-25 2016-10-25 A user-activity-based new-update queue caching method for social networks

Country Status (1)

Country Link
CN (1) CN106502921A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108595475A (en) * 2018-03-12 2018-09-28 电子科技大学 A kind of cache node selection method in mobile community network
CN114218503A (en) * 2022-02-22 2022-03-22 飞狐信息技术(天津)有限公司 Attention relationship caching method and device, electronic equipment and computer storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102202072A (en) * 2010-03-23 2011-09-28 盛霆信息技术(上海)有限公司 Unidirectional synchronization method of internet website data
US20110302365A1 (en) * 2009-02-13 2011-12-08 Indilinx Co., Ltd. Storage system using a rapid storage device as a cache
CN102571910A (en) * 2011-11-16 2012-07-11 腾讯科技(深圳)有限公司 Method for searching nearby users in social network, and server

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110302365A1 (en) * 2009-02-13 2011-12-08 Indilinx Co., Ltd. Storage system using a rapid storage device as a cache
CN102202072A (en) * 2010-03-23 2011-09-28 盛霆信息技术(上海)有限公司 Unidirectional synchronization method of internet website data
CN102571910A (en) * 2011-11-16 2012-07-11 腾讯科技(深圳)有限公司 Method for searching nearby users in social network, and server

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
王梓 (Wang Zi): "Research on Node Influence Evaluation Algorithms in Social Networks", China Master's Theses Full-text Database *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108595475A (en) * 2018-03-12 2018-09-28 电子科技大学 A kind of cache node selection method in mobile community network
CN108595475B (en) * 2018-03-12 2022-03-04 电子科技大学 Cache node selection method in mobile social network
CN114218503A (en) * 2022-02-22 2022-03-22 飞狐信息技术(天津)有限公司 Attention relationship caching method and device, electronic equipment and computer storage medium

Similar Documents

Publication Publication Date Title
CN103856567B (en) Small file storage method based on Hadoop distributed file system
CN103186919B (en) A kind of word rendering intent and device
CN102117338B (en) Data base caching method
CN103593436B (en) file merging method and device
CN104508639B (en) Use the coherency management of coherency domains table
CN107590226A (en) A kind of map vector rendering intent based on tile
CN106909644A (en) A kind of multistage tissue and indexing means towards mass remote sensing image
CN104407879B (en) A kind of power network sequential big data loaded in parallel method
CN106919654A (en) A kind of implementation method of the High Availabitity MySQL database based on Nginx
CN105989129A (en) Real-time data statistic method and device
CN103577123A (en) Small file optimization storage method based on HDFS
CN109542907A (en) Database caches construction method, device, computer equipment and storage medium
CN106959928A (en) A kind of stream data real-time processing method and system based on multi-level buffer structure
CN104572505A (en) System and method for ensuring eventual consistency of mass data caches
CN103294799B (en) A kind of data parallel batch imports the method and system of read-only inquiry system
CN105468541B (en) A kind of buffer memory management method towards lucidification disposal intelligent terminal
CN106844607A (en) A kind of SQLite data reconstruction methods suitable for non-integer major key and idle merged block
CN103944964A (en) Distributed system and method carrying out expansion step by step through same
CN106502921A (en) A kind of new dynamic queue's caching method of social networkies based on user activity
CN111209278A (en) Apparatus and method for streaming real-time processing of on-line production data
CN110399096A (en) Metadata of distributed type file system caches the method, apparatus and equipment deleted again
CN101217449A (en) A remote call office procedure
CN103399915A (en) Optimal reading method for index file of search engine
CN102158533A (en) Distributed web service selection method based on QoS (Quality of Service)
CN101303657A (en) Method of optimization of multiprocessor real-time task execution power consumption

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20170315

RJ01 Rejection of invention patent application after publication