CN109828961A - High-concurrency caching method for article publication - Google Patents

High-concurrency caching method for article publication

Info

Publication number
CN109828961A
CN109828961A (application CN201811543876.5A); granted as CN109828961B
Authority
CN
China
Prior art keywords
article
article data
terminal user
data
high concurrency
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811543876.5A
Other languages
Chinese (zh)
Other versions
CN109828961B (en)
Inventor
陈相熔
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Seven India Mdt Infotech Ltd
Original Assignee
Shanghai Seven India Mdt Infotech Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Seven India Mdt Infotech Ltd
Priority to CN201811543876.5A
Publication of CN109828961A
Application granted
Publication of CN109828961B
Legal status: Active


Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The invention discloses a high-concurrency caching method for article publication, comprising the following steps: receiving the article data published by a terminal user; checking the validity of the received article data; if the article data is valid, writing it into the cache layer of the server for storage using high-concurrency techniques and feeding back a success result to the terminal user; and writing the article data held in the cache layer of the server into the database of the server for storage, likewise using high-concurrency techniques. The invention returns a response to the terminal user immediately after checking whether the article data is valid and places valid article data into the cache layer concurrently, so the processing time is extremely short, the average throughput is greatly improved, and the time the terminal user waits when publishing an article is greatly shortened.

Description

High-concurrency caching method for article publication
Technical field
The present invention relates to the field of computer technology, and in particular to a high-concurrency caching method for article publication.
Background technique
Referring to Fig. 1, in the prior art, when a terminal user publishes an article the server stores the article data directly in its database upon receipt, and the terminal user must wait for the server's reply throughout this process. Existing article-publication caching methods therefore suffer from the following problems:
1. The average throughput of requests per second is relatively low, so the terminal user waits too long when publishing an article, and timeouts may even occur;
2. The server takes a long time to process a single request, keeping the terminal user waiting and degrading the user experience;
3. Published articles are stored directly in the database of the server, which depends heavily on the read/write speed of the underlying disk, and disk read/write speed is inherently limited.
To this end, the applicant, through beneficial exploration and research, has found a solution to the above problems, which will be introduced in detail in the technical scheme below.
Summary of the invention
The technical problem to be solved by the present invention is to provide, in view of the deficiencies of the prior art, a high-concurrency caching method for article publication that shortens the article-publication process and improves its concurrency.
The technical problem to be solved by the invention can be realized by the following technical scheme:
A high-concurrency caching method for article publication, comprising the following steps:
receiving the article data published by a terminal user;
checking the validity of the received article data;
if the article data is checked to be valid, writing the received article data into the cache layer of the server for storage using high-concurrency techniques, and generating a success result that is fed back to the terminal user;
writing the article data held in the cache layer of the server into the database of the server for storage, likewise using high-concurrency techniques.
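The patent does not name a specific cache store or "high-concurrency technique". As a rough illustration only, the write path above could be sketched with an in-memory cache and a background flush queue; every name here (`ArticleServer`, `publish`, the validity rule) is a hypothetical stand-in, not the patented implementation:

```python
import queue
import threading


class ArticleServer:
    """Illustrative sketch of the write path: valid articles go into an
    in-memory cache layer and the user is answered immediately; a background
    worker later persists cached articles to a (simulated) database."""

    def __init__(self):
        self.cache = {}              # cache layer: article_id -> article data
        self.database = {}           # stands in for the server database
        self._pending = queue.Queue()  # cache -> database writes awaiting flush
        self._lock = threading.Lock()
        threading.Thread(target=self._flush_worker, daemon=True).start()

    def publish(self, article_id, data):
        # Receive article data and check its validity (rule is assumed).
        if not data or not data.get("title"):
            return {"ok": False, "msg": "resubmit article data meeting publication requirements"}
        # Write to the cache layer and reply to the user at once.
        with self._lock:
            self.cache[article_id] = data
        self._pending.put(article_id)
        return {"ok": True, "msg": "published"}

    def _flush_worker(self):
        # Asynchronously persist cached articles to the database.
        while True:
            article_id = self._pending.get()
            with self._lock:
                self.database[article_id] = self.cache[article_id]
            self._pending.task_done()
```

In this sketch, `publish` returns before the database write happens; a caller who needs to wait for persistence (e.g. a test) can call `server._pending.join()`.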
In a preferred embodiment of the invention, the method further includes: if the article data is checked to be invalid, generating a failure result that is fed back to the terminal user, informing the terminal user to resubmit article data that meets the publication requirements.
In a preferred embodiment of the invention, the method further comprises the following steps:
when an article read request submitted by a terminal user is received, searching the article data stored in the cache layer of the server;
if article data matching the read request is found, the cache layer of the server sends the retrieved article data directly to the requesting terminal user;
if no article data matching the read request is found, further searching the article data stored in the database of the server;
if article data matching the read request is found there, the database of the server sends the retrieved article data directly to the requesting terminal user;
if no article data matching the read request is found in either store, a result indicating that the article data does not exist is returned to the requesting terminal user.
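The read path above is essentially a cache-aside lookup: cache first, database on a miss, "not found" if both miss. A minimal sketch, assuming dictionary-backed cache and database stores (the function name `read_article` is hypothetical):

```python
def read_article(article_id, cache, database):
    """Cache-aside read: try the cache layer first, fall back to the
    database, and report absence if neither store has the article."""
    # Fast path: the cache layer.
    data = cache.get(article_id)
    if data is not None:
        return {"found": True, "source": "cache", "data": data}
    # Slow path: the database.
    data = database.get(article_id)
    if data is not None:
        return {"found": True, "source": "database", "data": data}
    # Neither store holds the article.
    return {"found": False, "source": None, "data": None}
```

Note this sketch does not populate the cache on a database hit; the patent text does not say whether the described method does so either.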
Owing to the above technical scheme, the beneficial effects of the present invention are as follows: the invention returns a response to the terminal user immediately after checking whether the article data is valid, and places valid article data into the cache layer using high-concurrency techniques, so the processing time is extremely short, the average throughput is greatly improved, and the time the terminal user waits when publishing an article is greatly shortened. In addition, the invention uses a caching mechanism in which article data already saved in the cache layer is written into the database by high-concurrency techniques, which improves the efficiency of data caching and also improves the efficiency with which terminal users read articles.
Detailed description of the invention
In order to explain the embodiments of the invention or the prior-art technical schemes more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the invention; those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flow chart of an existing article publication method.
Fig. 2 is a flow chart of article publication according to the invention.
Fig. 3 is a flow chart of a terminal user reading an article according to the invention.
Specific embodiment
In order to make the technical means, creative features, objectives, and achieved effects of the present invention easy to understand, the invention is further explained below with reference to specific illustrations.
Referring to Fig. 2, the figure shows a high-concurrency caching method for article publication, comprising the following steps:
Step S10: the terminal user publishes an article, and the server receives the article data published by the terminal user.
Step S20: the server checks the validity of the received article data.
Step S30: if the article data is invalid, a failure result is generated and fed back to the terminal user, informing the terminal user to resubmit article data that meets the publication requirements; if the article data is valid, proceed to step S40.
Step S40: the received article data is written into the cache layer of the server for storage using high-concurrency techniques, and a success result is generated and fed back to the terminal user.
Step S50: the article data held in the cache layer of the server is written into the database of the server for storage using high-concurrency techniques.
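The patent never defines what makes article data "valid" in steps S20 and S30. As a purely hypothetical illustration of that check, the rules below (non-empty title and body, a size limit) are illustrative assumptions only, and `check_article` is an invented name:

```python
def check_article(data):
    """Hypothetical validity check for steps S20-S30. The patent does not
    define the rules; the ones below are illustrative assumptions only.
    Returns (is_valid, message)."""
    if not isinstance(data, dict):
        return (False, "article data must be a structured record")
    title = data.get("title", "")
    body = data.get("body", "")
    if not title.strip():
        return (False, "resubmit: title is required")
    if not body.strip():
        return (False, "resubmit: body is required")
    if len(body) > 1_000_000:
        return (False, "resubmit: body exceeds size limit")
    return (True, "valid")
```

The message for the invalid cases mirrors step S30's feedback, telling the terminal user to resubmit article data that meets the publication requirements.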
Referring to Fig. 3, when a terminal user reads an article, the invention further comprises the following steps:
Step S61: the server receives the article read request submitted by the terminal user.
Step S62: the article data stored in the cache layer of the server is searched.
Step S63: if article data matching the read request is found, the cache layer of the server sends it directly to the requesting terminal user; if not, proceed to step S64.
Step S64: the article data stored in the database of the server is searched.
Step S65: if article data matching the read request is found, the database of the server sends it directly to the requesting terminal user; if not, proceed to step S66.
Step S66: a result indicating that the article data does not exist is returned to the requesting terminal user.
The above shows and describes the basic principles, main features, and advantages of the present invention. Those skilled in the art should understand that the invention is not limited to the above embodiments, which, together with the description, merely illustrate its principle. Various changes and improvements may be made to the invention without departing from its spirit and scope, and all such changes and improvements fall within the scope of the claimed invention, which is defined by the appended claims and their equivalents.

Claims (3)

1. A high-concurrency caching method for article publication, characterized by comprising the following steps:
receiving the article data published by a terminal user;
checking the validity of the received article data;
if the article data is checked to be valid, writing the received article data into the cache layer of the server for storage using high-concurrency techniques, and generating a success result that is fed back to the terminal user;
writing the article data held in the cache layer of the server into the database of the server for storage using high-concurrency techniques.
2. The high-concurrency caching method for article publication according to claim 1, characterized by further comprising: if the article data is checked to be invalid, generating a failure result that is fed back to the terminal user, informing the terminal user to resubmit article data that meets the publication requirements.
3. The high-concurrency caching method for article publication according to claim 1, characterized by further comprising the following steps:
when an article read request submitted by a terminal user is received, searching the article data stored in the cache layer of the server;
if article data matching the read request is found, the cache layer of the server sends the retrieved article data directly to the requesting terminal user;
if no article data matching the read request is found, further searching the article data stored in the database of the server;
if article data matching the read request is found there, the database of the server sends the retrieved article data directly to the requesting terminal user;
if no article data matching the read request is found in either store, a result indicating that the article data does not exist is returned to the requesting terminal user.
CN201811543876.5A 2018-12-17 2018-12-17 Article release high concurrency caching method Active CN109828961B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811543876.5A CN109828961B (en) 2018-12-17 2018-12-17 Article release high concurrency caching method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811543876.5A CN109828961B (en) 2018-12-17 2018-12-17 Article release high concurrency caching method

Publications (2)

Publication Number Publication Date
CN109828961A true CN109828961A (en) 2019-05-31
CN109828961B CN109828961B (en) 2023-12-08

Family

ID=66859565

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811543876.5A Active CN109828961B (en) 2018-12-17 2018-12-17 Article release high concurrency caching method

Country Status (1)

Country Link
CN (1) CN109828961B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103916437A (en) * 2013-01-05 2014-07-09 中国移动通信集团公司 File release system, device and method
CN105871959A (en) * 2015-01-22 2016-08-17 阿里巴巴集团控股有限公司 Message delivery method, system and device
CN107590210A (en) * 2017-08-25 2018-01-16 咪咕互动娱乐有限公司 A kind of data processing method, device, system and computer-readable recording medium
CN108898463A (en) * 2018-07-02 2018-11-27 山东大学 A kind of network business system for farm products and its building method, operation method of high concurrent


Also Published As

Publication number Publication date
CN109828961B (en) 2023-12-08

Similar Documents

Publication Publication Date Title
CN1280743C (en) Data-transmitting method and system
US20180285167A1 (en) Database management system providing local balancing within individual cluster node
CN103856567A (en) Small file storage method based on Hadoop distributed file system
US20220004321A1 (en) Effective transaction table with page bitmap
CN107025243A (en) A kind of querying method of resource data, inquiring client terminal and inquiry system
WO2012060889A1 (en) Systems and methods for grouped request execution
Canim et al. Buffered Bloom Filters on Solid State Storage.
CN106846024B (en) Redis-based coupon issuing method, system and computer-readable storage medium
CN105975638A (en) NoSQL-based massive small file storage structure for aviation logistics and storage method of NoSQL-based massive small file storage structure
CN104392377A (en) Cloud transaction system
CN107729504A (en) A kind of method and system for handling large data objectses
CN103744975A (en) Efficient caching server based on distributed files
CN110147345A (en) A kind of key assignments storage system and its working method based on RDMA
CN104158863A (en) Cloud storage mechanism based on transaction-level whole-course high-speed buffer
CN101788887A (en) System and method of I/O cache stream based on database in disk array
US9836513B2 (en) Page feed for efficient dataflow between distributed query engines
US9928259B2 (en) Deleted database record reuse
US11099960B2 (en) Dynamically adjusting statistics collection time in a database management system
CN103365987A (en) Clustered database system and data processing method based on shared-disk framework
CN108280123B (en) HBase column polymerization method
CN104484136B (en) A kind of method of sustainable high concurrent internal storage data
CN1155891C (en) Equity elevator scheduling calculating method used for direct access storage device
CN111427920B (en) Data acquisition method, device, system, computer equipment and storage medium
CN117349326A (en) Static data query method and system
CN109828961A (en) A kind of article publication high concurrent caching method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant