CN106202082B - Method and device for assembling basic data cache

Method and device for assembling basic data cache

Info

Publication number
CN106202082B
CN106202082B (application CN201510219204.9A)
Authority
CN
China
Prior art keywords
basic data
data
received
cache
preset
Prior art date
Legal status
Active
Application number
CN201510219204.9A
Other languages
Chinese (zh)
Other versions
CN106202082A (en)
Inventor
黄利祥
王维
Current Assignee
Cainiao Smart Logistics Holding Ltd
Original Assignee
Cainiao Smart Logistics Holding Ltd
Priority date
Filing date
Publication date
Application filed by Cainiao Smart Logistics Holding Ltd
Priority to CN201510219204.9A
Publication of CN106202082A
Application granted
Publication of CN106202082B

Abstract

The application discloses a method and an apparatus for assembling a basic data cache, in which a business server subscribes to a plurality of pieces of basic data from a push center server in advance and calls a preset callback function when it receives basic data pushed by the push center server together with a trigger request for calling the callback function. The method comprises the following steps: temporarily storing the received basic data in a local storage through the callback function; judging whether all the subscribed pieces of basic data have been received; and if so, performing cache assembly on the pieces of basic data temporarily stored in the local storage according to a preset assembly strategy. The method relieves the excessive instantaneous pressure on the database server, prevents problems such as assembly tasks hanging or the database server being forcibly accessed because of dependency relationships among the basic data of the business server, and has stronger adaptability and practicability.

Description

Method and device for assembling basic data cache
Technical Field
The present application relates to the field of database technologies, and in particular, to a method and an apparatus for assembling a basic data cache.
Background
Database technology is widely used in many systems. For example, the various kinds of data related to a website can be stored in a database in an organized manner and managed and used through database management software, so that various data services can be provided for the website. With the continuous development of internet applications, the scale of internet data is expanding rapidly, and how to store and use large-scale data more efficiently has become an important subject of ongoing research. In particular, for sites with a relatively concentrated data volume, such as e-commerce websites, the data size is quite large, and whether the large-scale data in the database can be organized and accessed effectively becomes an important factor affecting the business processing efficiency of the site. When the business data of a site is relatively small, the data can be read directly from the database as needed. However, when the data size of a site reaches a certain order of magnitude, continuing to read the database directly through a read interface may impose excessive system overhead: disk I/O, network throughput, network card usage, and the database connection pool all come under higher pressure, data access becomes less efficient, and the execution of upper-layer applications is affected to different degrees, for example with serious timeouts or even system crashes.
To prevent the database system from becoming the bottleneck of the website system as far as possible, the efficiency of database use can be improved at the software level by improving how data is managed and accessed. For example, different methods can be applied to different data according to how frequently the data is used or updated. Basic data that is frequently used but rarely changed can be loaded into a cache with higher access efficiency, so that when the business-layer system needs the basic data it only has to read it from the cache instead of accessing the database directly, which improves data access efficiency and reduces overhead such as disk reads.
In early basic data caching schemes, data was loaded directly from the database server when the cache of a business-layer system was initialized. As the number of business-layer systems grows, however, many business systems often load or update their data caches at the same time and therefore access the database server simultaneously, so that the instantaneous pressure on the database server becomes too high. To reduce this instantaneous pressure as much as possible, each business-layer system can only choose to load its data cache when business traffic is low, which greatly limits the use of cached basic data.
Disclosure of Invention
The application provides a method and an apparatus for assembling a basic data cache, which solve the problem of excessive instantaneous pressure on a database server, prevent problems such as assembly tasks hanging or the database server being forcibly accessed because of dependency relationships among the basic data of a business server, and have stronger adaptability and practicability.
The application provides the following scheme:
A method for assembling a basic data cache is provided, in which a service server subscribes to a plurality of pieces of basic data from a push center server in advance. The method comprises:
when the service server receives basic data pushed by the push center server together with a trigger request for calling a preset callback function, calling the callback function so as to execute the following steps through the callback function:
temporarily storing the received basic data in a local storage;
judging whether all the subscribed pieces of basic data have been received;
if so, performing cache assembly on the pieces of basic data temporarily stored in the local storage according to a preset assembly strategy.
An apparatus for assembling a basic data cache is provided, in which a service server subscribes to a plurality of pieces of basic data from a push center server in advance, the apparatus comprising:
a function calling unit, used for calling a preset callback function when basic data pushed by the push center server and a trigger request for calling the callback function are received, wherein the callback function comprises the following modules:
a data storage module, used for temporarily storing the received basic data in a local storage;
an integrity judging module, used for judging whether all the subscribed pieces of basic data have been received;
and a cache assembling module, used for performing cache assembly on the pieces of basic data temporarily stored in the local storage according to a preset assembly strategy if the judgment result of the integrity judging module is positive.
According to the specific embodiments provided herein, the present application discloses the following technical effects:
according to the method and the device, when the business server receives the basic data pushed by the pushing center server and the triggering request of the preset callback function, the preset callback function is called, the received basic data are temporarily stored in the local storage through the callback function, and under the condition that all the subscribed basic data are judged to be received, caching and assembling are carried out according to all the temporarily stored basic data in the local storage and the preset assembling strategy. The method solves the problem that the instantaneous pressure of the database server is overlarge due to the fact that a plurality of service servers access the database server by loading or updating the data cache at the same time, and meanwhile, the problems that the service servers cannot receive the depended basic data timely to cause hanging of an assembly task or forced access to the database server and the like due to the fact that the basic data exist among the service servers are solved, so that the method for assembling the basic data cache has stronger adaptability and practicability.
Of course, a product practicing the present application need not achieve all of the above-described advantages at the same time.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the embodiments are briefly described below. The drawings in the following description are obviously only some embodiments of the present application, and those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a diagram illustrating the transmission of basic data according to an embodiment of the present application;
FIG. 2 is a flow chart of a method for assembling a base data cache according to an embodiment of the present application;
FIG. 3 is a flow chart of another method for assembling a base data cache according to an embodiment of the present application;
FIG. 4 is a flow chart of a further method for assembling a base data cache according to an embodiment of the present application;
FIG. 5 is a schematic diagram of an apparatus for assembling a basic data cache according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application are described below clearly and completely with reference to the drawings of those embodiments. The described embodiments are obviously only some, not all, of the embodiments of the present application. All other embodiments that a person of ordinary skill in the art can derive from the embodiments given here without creative effort fall within the scope of protection of the present application.
As internet services keep growing, the amount of data to be processed has become very large, and sites with larger data volumes often use a dedicated server or server cluster to provide database services. For caching basic data, in the prior art a service server obtains data directly from the database server whenever a service initialization or a cache refresh request occurs. When the number of service servers or the amount of required basic data is large, this approach exposes defects such as excessive instantaneous access pressure on the database server and delays in transmitting and loading the basic data, and the caching of basic data ultimately becomes a bottleneck of the system. To meet the efficiency requirements of large-scale cache data processing, the embodiments of the present application provide a method for assembling a basic data cache that relies on a layer of push center servers erected between the database server and the service servers. FIG. 1 is a schematic diagram illustrating the transmission of basic data according to an embodiment of the present application. The database server 110, the push center server 120, and the service server 130 may each be implemented by a single server or by a server cluster; the server or server cluster may also be implemented based on virtualization technology.
When a basic data caching requirement arises, for example when the cache data of the service servers is updated according to a preset period, the push center server first obtains the required cache data from the database server. The cache data required by each service server may include a plurality of basic data files, each basic data file corresponding to one piece of basic data. The push center server only needs to access the database server once to download the basic data required by all the service servers, and then pushes the required data to each service server according to its requirements. In this way, the task of pushing basic data files is delegated to the push center server, which is dedicated to pushing files, and is separated from the other database management tasks, which effectively relieves the data management pressure on the database server and improves the efficiency of pushing the basic data.
In one implementation, the push center server may push the required basic data to each service server in real time according to the request of each service server. In implementing the method, however, the inventors found that the types of basic data content a service server needs in order to assemble its cache are usually fixed, and the times at which the service server assembles its cache are usually also fixed, for example when the service system starts or when the cache content is refreshed periodically for management purposes; the cache assembly task of the service server can therefore be triggered periodically. Thus, in another implementation, when pushing the required basic data to each service server, the push center server pushes basic data of fixed content types at fixed times according to an "agreement" with the service server; for example, the service server may subscribe to a plurality of pieces of basic data from the push center server in advance. Correspondingly, the service server can passively receive the required basic data files. This implementation, in which the push center server actively pushes and the service server passively receives, meets practical application requirements while ensuring that the push center server does not need to process data requests from the service servers, which avoids the risk of blockage caused by a large number of concurrent requests and allows the push center server to schedule its push tasks autonomously and more flexibly. Besides solving the main problem of the existing basic data caching technology, namely the instantaneous pressure caused by a large number of service servers accessing the database directly, and improving the flexibility of the basic data caching technology, the method provided by the embodiments of the present application also solves some other problems in implementing basic data caching; see the detailed explanation in the embodiments below.
A specific implementation of the method for assembling a basic data cache provided in the embodiments of the present application is described in detail below, taking as an example the implementation in which the push center server actively pushes and the service server passively receives. Referring to FIG. 2, which is a flowchart of a method for assembling a basic data cache according to an embodiment of the present application, when the service server receives basic data pushed by the push center server together with a trigger request for calling a preset callback function, the service server calls the callback function, so that the following steps are performed by the callback function:
s210: temporarily storing the received basic data in a local storage;
in the implementation manner provided by the embodiment of the present application, the service server subscribes multiple pieces of basic data from the push center server in advance, and different service servers can subscribe the same or different pieces of basic data in the push center server according to application requirements thereof. And after the push service is initiated, the push center server pushes basic data to the service server according to the subscription of the service server. The preset callback function can be triggered by the push center server, for example, the preset callback function can be triggered along with the push of each piece of basic data and executed at the service server side. The pushing center server can simultaneously send a request for calling the service server to execute the preset callback function when pushing the basic data to the service server, the service server can call the local preset callback function when receiving the request for pushing the basic data to the service server and calling the preset callback function, and the received basic data is firstly temporarily stored in the local storage through the callback function. As mentioned above, the basic data may be carried by files, each of which corresponds to a copy of the basic data. The service server can store and temporarily store each received basic data file in a local cache through a preset callback function. Specifically, when the temporary storage is performed, the temporary storage may be stored in a hard disk of the service server, for example, in a temporary data storage area of the hard disk, or a space may also be applied in the memory, and the received basic data is stored in the applied memory space, so that the fast writing and subsequent reading can be completed conveniently.
S220: judging whether all the subscribed basic data are received;
after the received basic data is temporarily stored in the local storage through the preset callback function, whether all the subscribed basic data are received or not can be judged, so that the service server can perform cache assembly according to the received basic data. As mentioned above, the method solves the problem of overlarge access pressure of the database server by a mode of pushing the central server to proxy basic data distribution. In the process of assembling the basic data cache, one implementation is to sequentially assemble the basic data cache according to the order of the received cache data, that is, to receive one cache data, and then assemble the basic data cache once, and in the case that there is no dependency relationship among the received multiple cache data, this implementation has higher assembly efficiency, but brings a new problem in the case that there is a dependency relationship among the received multiple cache data, which is described in detail below.
In practical applications, when there is no dependency relationship among the received pieces of cache data, they can be assembled sequentially in the order received; for example, when basic data A has been completely received, it can be assembled into cache A and loaded, and when another piece of basic data B has been completely received, it can be assembled into cache B and loaded. However, when a dependency relationship exists, for example when assembling cache B from basic data B requires basic data A, it may happen that basic data A has not yet been received for various reasons: because basic data A is large and its transmission takes longer, or because the task of pushing basic data A to the current service server has not yet been started by some server in the push center server cluster. At that point the service server can only suspend the assembly task for cache B, or issue a request to forcibly access the data on the database server. In general, however, a service server acts as a provider of external services, so suspending cache assembly is unacceptable and the length of the suspension is hard to control; after cache assembly is suspended, the service server therefore usually chooses to access the database server forcibly, which increases the pressure on the database server. To avoid this situation, in the method provided in the embodiment of the present application, before performing cache assembly on the received basic data, it is determined whether all the subscribed pieces of basic data have been received, and cache assembly is performed only when they have all been received. This avoids the forced access to the database server that would otherwise occur when a depended-upon piece of basic data cannot be received in time.
As shown in FIG. 3, which is a flowchart of a method for assembling a basic data cache in another implementation provided in the embodiment of the present application, determining whether all the subscribed pieces of basic data have been received may include step S310: each time a piece of basic data is received, a preset counter is incremented by one. This counting operation may be triggered or performed by the callback function described previously. It may further include step S2201: judging whether the value of the counter is equal to the number of subscribed pieces of basic data, and if so, determining that all the subscribed basic data have been received. When receiving basic data, the service server may occasionally receive duplicates, for example because of repeated subscription, repeated sending, or retransmission caused by a network error. To avoid counting such duplicates, before the counter is incremented it may be determined whether the received basic data has already been received, specifically by using identifiers such as the file name, file size, or a file characteristic value.
As shown in FIG. 4, which is a flowchart of a method for assembling a basic data cache according to another implementation provided in the embodiment of the present application, the method may include step S410: acquiring and recording the identifier of the received basic data; for example, the file name, file size, or a file characteristic value of the basic data file may be used as the identifier, such as a hash value calculated over the received basic data file. The operation of incrementing the preset counter each time basic data is received may then include step S3101: each time a piece of basic data is received, judging from its identifier whether it has already been received; if so, discarding it, and otherwise triggering the counter to increment by one. In addition, after the cache assembly is completed, the preset counter can be cleared so that it can be reused to count the cache data the next time a cache assembly task is performed.
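As one possible way of obtaining such a file characteristic value, the hypothetical Java helper below computes a SHA-256 digest of a received basic data file and uses its hex encoding as the identifier; the patent itself only requires some identifier such as a file name, file size, or characteristic value, so this is an illustrative choice rather than the prescribed one.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.HexFormat;

// Hypothetical helper: derive a basic data file's identifier from a content hash.
public final class BasicDataId {
    private BasicDataId() {}

    public static String of(Path basicDataFile) throws IOException, NoSuchAlgorithmException {
        MessageDigest digest = MessageDigest.getInstance("SHA-256");
        byte[] hash = digest.digest(Files.readAllBytes(basicDataFile));
        // Hex-encode the hash; the file name or size could also be combined in if needed.
        return HexFormat.of().formatHex(hash);
    }
}
```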
S230: if so, performing cache assembly on the pieces of basic data temporarily stored in the local storage according to a preset assembly strategy.
After all the subscribed pieces of basic data have been received, cache assembly can be performed on the basic data temporarily stored in the local storage according to a preset assembly strategy. The preset assembly strategy can be specified according to the actual application; for example, all the received basic data may be assembled into the cache at one time, or assembled in batches according to the actual needs of the service system.
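The assembly strategy itself is application-specific. As a minimal sketch of the "assemble everything at once" strategy mentioned above (the class name and the in-memory map representation are assumptions, not the patent's), the code below loads every temporarily stored basic data file into an in-memory cache in a single pass.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.stream.Stream;

// Hypothetical one-shot assembly strategy: load every temporarily stored
// basic data file into an in-memory cache keyed by file name.
public class OneShotCacheAssembler {
    public Map<String, byte[]> assemble(Path tempDir) throws IOException {
        Map<String, byte[]> cache = new HashMap<>();
        try (Stream<Path> files = Files.list(tempDir)) {
            List<Path> basicDataFiles = files.filter(Files::isRegularFile).toList();
            for (Path file : basicDataFiles) {
                cache.put(file.getFileName().toString(), Files.readAllBytes(file));
            }
        }
        return cache; // The assembled cache can then be loaded for the service system.
    }
}
```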
In practical applications, each time the push center server pushes a piece of basic data to the service server, it triggers one execution of the preset callback function on the service server, so the service server often starts multiple threads, each of which calls the preset callback function. When pushes are concurrent, multiple threads may operate on shared data, so the consistency of the shared data needs to be protected. In the embodiment of the present application, the shared data whose consistency needs protection includes at least two kinds: one is the value of the aforementioned preset counter, and the other is the data structure used to store the identifiers of the basic data (for example, a linked list of identifiers). To maintain consistency, synchronized methods can be used in the callback function. For example, when the callback function is written in the Java language, the method that accesses the counter value and the method that accesses the identifier data structure can be declared with the synchronized keyword, so that when multiple threads access the shared data concurrently, only one thread's synchronized method can access the shared data at a time, maintaining consistency under multi-threaded access.
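The sketch below illustrates this synchronized-method pattern in Java; it is a hypothetical example rather than the patent's implementation, and it places the preset counter and the identifier structure behind a single synchronized method so that concurrent callback threads cannot interleave on the shared data.

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical receive-side bookkeeping guarded by synchronized methods.
public class SubscriptionTracker {
    private final int subscribedCount;                        // number of subscribed pieces of basic data
    private final Set<String> receivedIds = new HashSet<>();  // identifiers of received basic data
    private int receivedCount = 0;                            // the preset counter

    public SubscriptionTracker(int subscribedCount) {
        this.subscribedCount = subscribedCount;
    }

    // Called by the callback for each pushed piece of basic data.
    // Returns true once all subscribed pieces have been received.
    public synchronized boolean onBasicDataReceived(String identifier) {
        if (!receivedIds.add(identifier)) {
            return false; // duplicate push: discard, do not increment the counter
        }
        receivedCount++;
        return receivedCount == subscribedCount;
    }

    // Clear the counter and identifiers after cache assembly, for the next round.
    public synchronized void reset() {
        receivedIds.clear();
        receivedCount = 0;
    }
}
```

Using a single synchronized entry point keeps the counter and the identifier set consistent with each other, which matches the requirement that only one thread's synchronized method accesses the shared data at a time.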
In addition, a timing thread can be started in the service server through the preset callback function to time the basic data receiving and cache assembly process, and to return timeout information once the receiving and assembly process exceeds a preset duration, so that an administrator can promptly investigate the cause of the timeout. After the cache assembly is completed according to the preset assembly strategy, the basic data temporarily stored in the local storage can be deleted to release the occupied storage space so that other applications can use it. After the cache has been assembled from the basic data, the assembled cache can be loaded for read and write access by the other applications of the service system.
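As one way such a timing thread could be realized (a sketch assuming a ScheduledExecutorService is acceptable; all names are illustrative and not from the patent), the watchdog below reports timeout information if the receive-and-assemble round has not been marked complete within the preset duration.

```java
import java.time.Duration;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical timeout watchdog for the receive-and-assemble process.
public class AssemblyTimeoutWatchdog {
    private final ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();
    private final AtomicBoolean completed = new AtomicBoolean(false);

    // Start timing; if not completed within the preset duration, report a timeout.
    public void start(Duration timeout, Runnable onTimeout) {
        timer.schedule(() -> {
            if (!completed.get()) {
                onTimeout.run(); // e.g. return timeout information to the administrator
            }
        }, timeout.toMillis(), TimeUnit.MILLISECONDS);
    }

    // Call when cache assembly finishes (and the temporary files have been deleted).
    public void markCompleted() {
        completed.set(true);
        timer.shutdown();
    }
}
```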
The method for assembling a basic data cache provided by the embodiment of the present application has been described in detail above. With this method, when the service server receives basic data pushed by the push center server together with a trigger request for calling the preset callback function, the preset callback function is called, the received basic data is temporarily stored in the local storage through the callback function, and cache assembly is performed on all the temporarily stored basic data according to the preset assembly strategy only once it is determined that all the subscribed pieces of basic data have been received. This solves the problem of excessive instantaneous pressure on the database server caused by a plurality of service servers loading or updating their data caches, and thus accessing the database server, at the same time. Meanwhile, the push center server does not need to design a more complex push scheme around the dependency relationships among the basic data, and problems such as assembly tasks hanging or the database server being forcibly accessed because depended-upon basic data cannot be received in time are avoided, so that the method for assembling the basic data cache has strong adaptability and practicability.
Corresponding to the method for assembling a basic data cache provided in the embodiment of the present application, an apparatus for assembling a basic data cache is also provided. As shown in FIG. 5, which is a schematic diagram of the apparatus for assembling a basic data cache provided in the embodiment of the present application, a service server subscribes to a plurality of pieces of basic data from a push center server in advance, and the apparatus includes:
the function calling unit 510 is configured to call a preset callback function when basic data pushed by the push center server and a trigger request for calling the callback function are received. The callback function includes:
a data storage module 520, configured to temporarily store the received basic data in a local storage;
an integrity judging module 530, configured to judge whether all the subscribed pieces of basic data have been received;
and the cache assembling module 540 is configured to, if the judgment result of the integrity judging module is positive, perform cache assembly on the pieces of basic data temporarily stored in the local storage according to a preset assembly strategy.
In addition, the callback function may further include:
the data counting module is used for adding one to a preset counter each time a piece of the basic data is received;
in this implementation, the integrity determination module 530 may include:
and the integrity judgment submodule is used for judging whether the value of the counter is equal to the number of the subscribed basic data, and if so, determining that all the subscribed basic data are received.
In another implementation, the callback function may further include:
the identification acquisition module is used for acquiring and recording the identification of the received basic data;
at this time, the data counting module may include:
and the data counting submodule is used for judging, each time a piece of basic data is received, whether the basic data has already been received according to its identification, discarding the basic data if so, and otherwise triggering the counter to add one.
In addition, the callback function may further include:
and the counter zero clearing module is used for clearing the preset counter after the cache assembly is finished.
To facilitate timeout management, the apparatus for assembling a basic data cache may further include:
and the timeout processing unit is used for starting a timing thread, timing the receiving of the basic data and the cache assembly process, and returning timeout information once the receiving and assembly process exceeds a preset time length.
In order to maintain consistency of shared data under multiple threads, a synchronization method can be used in a callback function, and the device can further comprise:
and the data consistency maintaining unit is used for accessing the shared data through the synchronization method when a plurality of threads concurrently access the shared data, so as to maintain the consistency of the shared data under multi-threaded access.
After the cache is assembled, the received basic data may be cleaned, and the apparatus for assembling the basic data cache may further include:
and the basic data cleaning unit is used for deleting the basic data temporarily stored in the local storage after the cache assembly is completed according to the preset assembly strategy.
The apparatus for assembling a basic data cache provided by the embodiment of the present application has been described in detail above. With this apparatus, when the service server receives basic data pushed by the push center server together with a trigger request for calling the preset callback function, the preset callback function is called, the received basic data is temporarily stored in the local storage through the callback function, and cache assembly is performed on all the temporarily stored basic data according to the preset assembly strategy only once it is determined that all the subscribed pieces of basic data have been received. This solves the problem of excessive instantaneous pressure on the database server caused by a plurality of service servers loading or updating their data caches at the same time, and also avoids problems such as assembly tasks hanging or the database server being forcibly accessed because of dependency relationships among the basic data, so that the apparatus for assembling the basic data cache has stronger adaptability and practicability.
From the above description of the embodiments, it is clear to those skilled in the art that the present application can be implemented by software plus a necessary general-purpose hardware platform. Based on this understanding, the technical solutions of the present application may essentially be embodied in the form of a software product, which may be stored in a storage medium such as a ROM/RAM, a magnetic disk, or an optical disk and includes several instructions for enabling a computer device (which may be a personal computer, a server, a network device, or the like) to execute the method described in the embodiments, or in some parts of the embodiments, of the present application.
The embodiments in this specification are described in a progressive manner; the same and similar parts of the embodiments can be referred to one another, and each embodiment focuses on its differences from the others. In particular, the system and apparatus embodiments are substantially similar to the method embodiments and are therefore described relatively simply; for relevant points, refer to the descriptions in the method embodiments. The systems and apparatuses described above are only illustrative: the units described as separate parts may or may not be physically separate, and the parts shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the embodiment. A person of ordinary skill in the art can understand and implement this without creative effort.
The method and the apparatus for assembling a basic data cache provided by the present application have been introduced in detail above. Specific examples have been used herein to explain the principles and implementations of the application, and the descriptions of the embodiments are only intended to help in understanding the method and its core idea. Meanwhile, a person skilled in the art may, following the ideas of the present application, make changes to the specific implementations and the application scope. In view of the above, the contents of this specification should not be construed as limiting the present application.

Claims (14)

1. A method for assembling a basic data cache, characterized in that a service server subscribes to a plurality of pieces of basic data from a push center server in advance, the method comprising the following steps:
when the service server receives basic data pushed by the push center server together with a trigger request for calling a preset callback function, calling the callback function so as to execute the following steps through the callback function:
temporarily storing the received basic data in a local storage;
judging whether all the subscribed basic data are received;
if so, performing cache assembly on the pieces of basic data temporarily stored in the local storage according to a preset assembly strategy;
and acquiring basic data of the service server from a database server.
2. The method of claim 1, further comprising:
adding one to a preset counter when each piece of the basic data is received;
the determining whether all the subscribed basic data have been received includes:
and judging whether the value of the counter is equal to the number of the subscribed basic data, and if so, determining that all the subscribed basic data are received.
3. The method of claim 2, further comprising:
acquiring and recording the identification of the received basic data;
and the adding one to a preset counter when each piece of the basic data is received comprises:
each time a piece of basic data is received, judging from its identification whether the basic data has already been received; if so, discarding the basic data, and otherwise triggering the counter to add one.
4. The method of claim 3, further comprising:
and after the cache assembly is finished, clearing the preset counter.
5. The method of claim 1, further comprising:
starting a timing thread, timing the receiving of the basic data and the cache assembly process, and returning timeout information once the receiving and assembly process exceeds a preset time length.
6. The method of any of claims 1-5, wherein a synchronization method is used in the callback function, the method further comprising:
when a plurality of threads are called to access shared data concurrently, the shared data is accessed through the synchronization method so as to maintain the consistency of the shared data under multi-thread access.
7. The method of any one of claims 1-5, further comprising:
and after the cache assembly is completed according to a preset assembly strategy, deleting the basic data temporarily stored in the local storage.
8. An apparatus for assembling basic data cache, wherein a service server subscribes multiple pieces of basic data from a push center server in advance, the apparatus comprising:
the function calling unit is used for calling a preset callback function when basic data pushed by the push center server and a trigger request for calling the callback function are received, and the callback function comprises the following modules:
the data storage module is used for temporarily storing the received basic data in a local storage;
the integrity judging module is used for judging whether all the subscribed basic data are received;
the cache assembly module is used for performing cache assembly on the pieces of basic data temporarily stored in the local storage according to a preset assembly strategy if the judgment result of the integrity judgment module is positive;
and acquiring basic data of the service server from a database server.
9. The apparatus of claim 8, wherein the callback function further comprises:
the data counting module is used for adding one to a preset counter each time a piece of the basic data is received;
the integrity judgment module comprises:
and the integrity judgment submodule is used for judging whether the value of the counter is equal to the number of the subscribed basic data, and if so, determining that all the subscribed basic data are received.
10. The apparatus of claim 9, wherein the callback function further comprises:
the identification acquisition module is used for acquiring and recording the identification of the received basic data;
the data counting module comprises:
and the data counting submodule is used for judging, each time a piece of basic data is received, whether the basic data has already been received according to its identification, discarding the basic data if so, and otherwise triggering the counter to add one.
11. The apparatus of claim 10, wherein the callback function further comprises:
and the counter zero clearing module is used for clearing the preset counter after the cache is assembled.
12. The apparatus of claim 8, further comprising:
and the timeout processing unit is used for starting a timing thread, timing the receiving of the basic data and the cache assembly process, and returning timeout information once the receiving and assembly process exceeds a preset time length.
13. The apparatus of any of claims 8-12, wherein a synchronization method is used in the callback function, the apparatus further comprising:
and the data consistency maintaining unit is used for accessing the shared data through the synchronization method when a plurality of threads concurrently access the shared data, so as to maintain the consistency of the shared data under multi-threaded access.
14. The apparatus of any one of claims 8-12, further comprising:
and the basic data cleaning unit is used for deleting the basic data temporarily stored in the local storage after the cache assembly is completed according to a preset assembly strategy.
CN201510219204.9A 2015-04-30 2015-04-30 Method and device for assembling basic data cache Active CN106202082B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510219204.9A CN106202082B (en) 2015-04-30 2015-04-30 Method and device for assembling basic data cache

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510219204.9A CN106202082B (en) 2015-04-30 2015-04-30 Method and device for assembling basic data cache

Publications (2)

Publication Number Publication Date
CN106202082A CN106202082A (en) 2016-12-07
CN106202082B true CN106202082B (en) 2020-01-14

Family

ID=57458573

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510219204.9A Active CN106202082B (en) 2015-04-30 2015-04-30 Method and device for assembling basic data cache

Country Status (1)

Country Link
CN (1) CN106202082B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108694194A (en) * 2017-04-10 2018-10-23 北京京东尚科信息技术有限公司 A kind of method and apparatus of construction data object
CN107562469B (en) * 2017-08-08 2021-02-02 武汉斗鱼网络科技有限公司 Title bar data display method and system
CN108600320A (en) * 2018-03-23 2018-09-28 阿里巴巴集团控股有限公司 A kind of data cache method, apparatus and system
CN110399393B (en) * 2018-04-16 2020-06-30 北京三快在线科技有限公司 Data processing method, device, medium and electronic equipment
CN109753501A (en) * 2018-12-27 2019-05-14 广州市玄武无线科技股份有限公司 A kind of data display method of off-line state, device, equipment and storage medium
CN111008157B (en) * 2019-11-29 2022-02-18 北京浪潮数据技术有限公司 Storage system write cache data issuing method and related components

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2006117638A1 (en) * 2005-05-02 2006-11-09 Nokia Corporation System and method for adaptive remote file caching
CN101272354A (en) * 2007-03-20 2008-09-24 重庆优腾信息技术有限公司 File transfer method, device and system
CN103108002A (en) * 2011-11-10 2013-05-15 阿里巴巴集团控股有限公司 Method, system and device for data pushing
CN102523120A (en) * 2011-12-20 2012-06-27 许继集团有限公司 IED (intelligent electronic device) network pressure control method for intelligent substation process layer and IED network pressure control device for same
CN102968578A (en) * 2012-10-30 2013-03-13 山东中创软件商用中间件股份有限公司 Injection prevention method and system
CN104104698A (en) * 2013-04-01 2014-10-15 深圳维盟科技有限公司 Web data cache processing method, device and system
CN104317737A (en) * 2014-10-10 2015-01-28 浪潮集团有限公司 Method for realizing consistency of caches at synchronization points based on program without hardware support

Also Published As

Publication number Publication date
CN106202082A (en) 2016-12-07

Similar Documents

Publication Publication Date Title
CN106202082B (en) Method and device for assembling basic data cache
CN108052675B (en) Log management method, system and computer readable storage medium
CN110062924B (en) Capacity reservation for virtualized graphics processing
US10572285B2 (en) Method and apparatus for elastically scaling virtual machine cluster
CN109947668B (en) Method and device for storing data
CN106790629A (en) Data synchronization unit and its realize the method for data syn-chronization, client access system
CN113010818A (en) Access current limiting method and device, electronic equipment and storage medium
CN110932912A (en) Method for realizing unified management of configuration files under micro-service architecture
WO2022057231A1 (en) Method and apparatus for accessing server, device, and storage medium
CN107026879B (en) Data caching method and background application system
US20150112934A1 (en) Parallel scanners for log based replication
CN104657435A (en) Storage management method for application data and network management system
CN112764948A (en) Data transmission method, data transmission device, computer device, and storage medium
CN111125057B (en) Method and device for processing service request and computer system
CN113282580A (en) Method, storage medium and server for executing timed task
CN108153794B (en) Page cache data refreshing method, device and system
CN112948498A (en) Method and device for generating global identification of distributed system
CN111865687A (en) Service data updating method and equipment
CN116521363B (en) Code packaging method, computer equipment and storage medium
CN116701020A (en) Message delay processing method, device, equipment, medium and program product
CN115174158B (en) Cloud product configuration checking method based on multi-cloud management platform
CN111177109A (en) Method and device for deleting overdue key
CN115658171A (en) Method and system for solving dynamic refreshing of java distributed application configuration in lightweight mode
CN105718291B (en) Multilevel cache acceleration method for mixed desktop application
CN114374657A (en) Data processing method and device

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20180419

Address after: Fourth floor, Capital Building, P.O. Box 847, Grand Cayman, Cayman Islands

Applicant after: CAINIAO SMART LOGISTICS HOLDING Ltd.

Address before: Fourth floor, Capital Building, P.O. Box 847, Grand Cayman, Cayman Islands

Applicant before: ALIBABA GROUP HOLDING Ltd.

TA01 Transfer of patent application right
GR01 Patent grant
GR01 Patent grant