CN109828961B - Article release high concurrency caching method - Google Patents


Info

Publication number
CN109828961B
CN109828961B (application CN201811543876.5A)
Authority
CN
China
Prior art keywords
article
article data
terminal user
data
server
Prior art date
Legal status (assumed, not a legal conclusion)
Active
Application number
CN201811543876.5A
Other languages
Chinese (zh)
Other versions
CN109828961A (en)
Inventor
陈相熔
Current Assignee
Shanghai Qiyin Information Technology Co ltd
Original Assignee
Shanghai Qiyin Information Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Qiyin Information Technology Co ltd filed Critical Shanghai Qiyin Information Technology Co ltd
Priority to CN201811543876.5A priority Critical patent/CN109828961B/en
Publication of CN109828961A publication Critical patent/CN109828961A/en
Application granted granted Critical
Publication of CN109828961B publication Critical patent/CN109828961B/en

Links

Classifications

    • Y — General tagging of new technological developments; general tagging of cross-sectional technologies spanning over several sections of the IPC; technical subjects covered by former USPC cross-reference art collections [XRACs] and digests
    • Y02 — Technologies or applications for mitigation or adaptation against climate change
    • Y02D — Climate change mitigation technologies in information and communication technologies [ICT], i.e. information and communication technologies aiming at the reduction of their own energy use
    • Y02D10/00 — Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The application discloses a high-concurrency caching method for article publication, comprising the following steps: receiving article data published by an end user; checking the validity of the received article data; if the article data is found to be valid, writing it into the server's cache layer for storage using high-concurrency techniques, and returning a success result to the end user; the server's cache layer then writing the article data into the server's database for storage, also using high-concurrency techniques. Because the method returns to the end user immediately after checking whether the article data is valid, and valid article data enters the cache layer via high-concurrency techniques, processing time is extremely short, average throughput is greatly improved, and the end user's wait when publishing an article is greatly shortened.

Description

Article release high concurrency caching method
Technical Field
The application relates to the field of computer technology, and in particular to a high-concurrency caching method for article publication.
Background
Referring to fig. 1, when an end user publishes an article, the server, after receiving the article data, stores it directly in its database, and the end user must wait for the server's reply throughout this process. The existing article publication method has the following problems:
1. The average request throughput per second is low, so the end user waits too long when publishing an article, and requests may even time out;
2. The server takes a long time to process a single request, so the end user's wait is long, degrading the user experience;
3. Because published articles are stored directly in the server's database, the method depends heavily on the read/write speed of the underlying disk, and the speed of simultaneous reads and writes to the disk is limited.
Against this background, the inventor arrived, through study and research, at the technical solution described below.
Disclosure of Invention
The technical problem to be solved by the application is: in view of the deficiencies of the prior art, to provide a high-concurrency caching method for article publication that shortens the publication process and increases its concurrency.
The technical problem to be solved by the application can be addressed with the following technical scheme:
A high-concurrency caching method for article publication comprises the following steps:
receiving article data published by an end user;
checking the validity of the received article data;
if the article data is found to be valid, writing it into the server's cache layer for storage using high-concurrency techniques, and returning a success result to the end user;
the server's cache layer then writing the article data into the server's database for storage, also using high-concurrency techniques.
In a preferred embodiment of the application, the method further comprises: if the article data is found to be invalid, generating a failure result, returning it to the end user, and notifying the end user to resubmit article data that meets the publication requirements.
In a preferred embodiment of the application, the method further comprises the following steps:
upon receiving an article read request submitted by an end user, searching the article data stored in the server's cache layer;
if article data matching the request is found, the cache layer sends it directly to the requesting end user;
if no matching article data is found in the cache layer, further searching the article data stored in the server's database;
if matching article data is found there, the database sends it directly to the requesting end user;
if no matching article data is found in the database either, returning an "article not found" result to the requesting end user.
Owing to the above technical scheme, the application has the following beneficial effects: the method returns to the end user immediately after checking whether the article data is valid, and valid article data enters the cache layer via high-concurrency techniques, so processing time is extremely short, average throughput is greatly improved, and the end user's wait when publishing an article is greatly shortened. In addition, the caching mechanism writes the article data held in the cache layer into the database using high-concurrency techniques, which improves the efficiency of data caching and, at the same time, the efficiency with which end users read articles.
Drawings
To illustrate the embodiments of the application, and the technical solutions of the prior art, more clearly, the drawings needed in their description are briefly introduced below. Obviously, the drawings described below show only some embodiments of the application; a person skilled in the art could obtain other drawings from them without inventive effort.
Fig. 1 is a flowchart of a conventional article publishing method.
Fig. 2 is a flowchart of article publishing according to the present application.
Fig. 3 is a flowchart of an end user reading an article according to the present application.
Detailed Description
To make the technical means, creative features, objectives, and effects of the application easy to understand, the application is further described below with reference to the drawings.
Referring to fig. 2, a high-concurrency caching method for article publication comprises the following steps:
Step S10: the end user publishes an article, and the server receives the article data.
Step S20: the server checks the validity of the received article data.
Step S30: if the article data is invalid, a failure result is generated and returned to the end user, who is notified to resubmit article data that meets the publication requirements; if the article data is valid, the process proceeds to step S40.
Step S40: the received article data is written into the server's cache layer for storage using high-concurrency techniques, and a success result is returned to the end user.
Step S50: the server's cache layer writes the article data into the server's database for storage, also using high-concurrency techniques.
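The publish flow above (steps S10 through S50) can be sketched in Python. This is a minimal illustration under my own assumptions, not code from the patent: the cache layer is an in-memory dict, "high-concurrency techniques" are stood in for by a lock plus a background write-behind worker, and all class and field names are hypothetical. A production system would use something like Redis for the cache layer and a real database.

```python
import threading
import queue

class ArticleCacheServer:
    """Sketch of the publish flow: validate (S20/S30), write to the
    cache layer and reply at once (S40), and let a background worker
    write cached articles through to the 'database' (S50)."""

    def __init__(self):
        self.cache = {}                      # cache layer (S40)
        self.database = {}                   # persistent store (S50)
        self._lock = threading.Lock()        # guards shared state under concurrency
        self._write_queue = queue.Queue()    # cache -> database write-behind
        threading.Thread(target=self._flush_loop, daemon=True).start()

    def _is_valid(self, article):
        # S20: a stand-in validity check; real rules would be richer
        return bool(article.get("id")) and bool(article.get("body"))

    def publish(self, article):
        # S30: invalid data is rejected immediately; the user resubmits
        if not self._is_valid(article):
            return {"ok": False, "reason": "invalid article data"}
        # S40: write to the cache layer and reply to the user at once,
        # without waiting for the database write
        with self._lock:
            self.cache[article["id"]] = article
        self._write_queue.put(article["id"])
        return {"ok": True}

    def _flush_loop(self):
        # S50: the cache layer drains into the database in the background
        while True:
            article_id = self._write_queue.get()
            with self._lock:
                self.database[article_id] = self.cache[article_id]
            self._write_queue.task_done()
```

The key property, matching the patent's stated benefit, is that `publish` returns as soon as the cache write completes; the slower database write happens off the request path.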
Referring to fig. 3, when the end user reads an article, the application further comprises the following steps:
Step S61: the server receives an article read request submitted by the end user.
Step S62: the article data stored in the server's cache layer is searched.
Step S63: if article data matching the request is found, the cache layer sends it directly to the requesting end user; otherwise the process proceeds to step S64.
Step S64: the article data stored in the server's database is searched.
Step S65: if article data matching the request is found, the database sends it directly to the requesting end user; otherwise the process proceeds to step S66.
Step S66: an "article not found" result is returned to the requesting end user.
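The read flow (steps S61 through S66) reduces to a cache lookup with a database fallback. The sketch below is illustrative only; the dict arguments stand in for the patent's cache layer and database, and all names are my own.

```python
def read_article(cache, database, article_id):
    """Sketch of the read flow: try the cache layer first (S62/S63),
    fall back to the database (S64/S65), else report not found (S66)."""
    # S62/S63: the cache layer answers directly on a hit
    if article_id in cache:
        return {"found": True, "source": "cache", "article": cache[article_id]}
    # S64/S65: fall back to the database on a cache miss
    if article_id in database:
        return {"found": True, "source": "database", "article": database[article_id]}
    # S66: the article exists in neither store
    return {"found": False, "source": None, "article": None}
```

Because recently published articles were just written into the cache layer (step S40), most reads are expected to be served from the cache without touching the disk-bound database.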
The foregoing has shown and described the basic principles, main features, and advantages of the application. A person skilled in the art will understand that the application is not limited to the embodiments described above; the embodiments and the description merely illustrate its principles, and various changes and modifications may be made without departing from the spirit and scope of the application. The scope of the application is defined by the appended claims and their equivalents.

Claims (2)

1. A high-concurrency caching method for article publication, characterized by comprising the following steps:
receiving article data published by an end user;
checking the validity of the received article data;
if the article data is found to be valid, writing it into the server's cache layer for storage using high-concurrency techniques, and returning a success result to the end user;
the server's cache layer writing the article data into the server's database for storage, also using high-concurrency techniques;
the method further comprising the following steps:
upon receiving an article read request submitted by an end user, searching the article data stored in the server's cache layer;
if article data matching the request is found, the cache layer sending it directly to the requesting end user;
if no matching article data is found in the cache layer, further searching the article data stored in the server's database;
if matching article data is found there, the database sending it directly to the requesting end user;
if no matching article data is found in the database either, returning an "article not found" result to the requesting end user.
2. The high-concurrency caching method for article publication of claim 1, further comprising: if the article data is found to be invalid, generating a failure result, returning it to the end user, and notifying the end user to resubmit article data that meets the publication requirements.
CN201811543876.5A 2018-12-17 2018-12-17 Article release high concurrency caching method Active CN109828961B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811543876.5A CN109828961B (en) 2018-12-17 2018-12-17 Article release high concurrency caching method


Publications (2)

Publication Number Publication Date
CN109828961A CN109828961A (en) 2019-05-31
CN109828961B true CN109828961B (en) 2023-12-08

Family

ID=66859565

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811543876.5A Active CN109828961B (en) 2018-12-17 2018-12-17 Article release high concurrency caching method

Country Status (1)

Country Link
CN (1) CN109828961B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105871959A (en) * 2015-01-22 2016-08-17 阿里巴巴集团控股有限公司 Message delivery method, system and device
CN107590210A (en) * 2017-08-25 2018-01-16 咪咕互动娱乐有限公司 A kind of data processing method, device, system and computer-readable recording medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103916437A (en) * 2013-01-05 2014-07-09 中国移动通信集团公司 File release system, device and method
CN108898463A (en) * 2018-07-02 2018-11-27 山东大学 A kind of network business system for farm products and its building method, operation method of high concurrent

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105871959A (en) * 2015-01-22 2016-08-17 阿里巴巴集团控股有限公司 Message delivery method, system and device
CN107590210A (en) * 2017-08-25 2018-01-16 咪咕互动娱乐有限公司 A kind of data processing method, device, system and computer-readable recording medium

Also Published As

Publication number Publication date
CN109828961A (en) 2019-05-31

Similar Documents

Publication Publication Date Title
US8548945B2 (en) Database caching utilizing asynchronous log-based replication
US7165144B2 (en) Managing input/output (I/O) requests in a cache memory system
CN104580437A (en) Cloud storage client and high-efficiency data access method thereof
WO2015149628A1 (en) Dns cache information processing method, device and system
JP4794571B2 (en) System and method for efficient access to database
US20090300289A1 (en) Reducing back invalidation transactions from a snoop filter
US20220004321A1 (en) Effective transaction table with page bitmap
CN106354732B (en) A kind of off-line data version conflict solution for supporting concurrently to cooperate with
US20130013587A1 (en) Incremental computing for web search
WO2017097048A1 (en) Data searching method and apparatus
CN107888687B (en) Proxy client storage acceleration method and system based on distributed storage system
CN112181902B (en) Database storage method and device and electronic equipment
CN111414392A (en) Cache asynchronous refresh method, system and computer readable storage medium
CN104158863A (en) Cloud storage mechanism based on transaction-level whole-course high-speed buffer
US10565184B2 (en) Method and system for committing transactions in a semi-distributed manner
CN112214178B (en) Storage system, data reading method and data writing method
CN108280123B (en) HBase column polymerization method
US10642745B2 (en) Key invalidation in cache systems
US20150100730A1 (en) Freeing Memory Safely with Low Performance Overhead in a Concurrent Environment
CN109828961B (en) Article release high concurrency caching method
CN103077099A (en) Block-level snapshot system and user reading and writing method based on same
CN113010535A (en) Cache data updating method, device, equipment and storage medium
CN113157777A (en) Distributed real-time data query method, cluster, system and storage medium
CN110750566A (en) Data processing method and device, cache system and cache management platform
CN104376097A (en) Active cache method based on Windows service program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant