CN110427525B - Data access method and device - Google Patents

Data access method and device

Info

Publication number
CN110427525B
CN110427525B (granted from application CN201910729095.3A)
Authority
CN
China
Prior art keywords
data
storage
queue
stored
storage queue
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910729095.3A
Other languages
Chinese (zh)
Other versions
CN110427525A (en)
Inventor
Gao Lichuang (高立闯)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing ByteDance Network Technology Co Ltd
Original Assignee
Beijing ByteDance Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing ByteDance Network Technology Co Ltd filed Critical Beijing ByteDance Network Technology Co Ltd
Priority to CN201910729095.3A
Publication of CN110427525A
Application granted
Publication of CN110427525B
Legal status: Active
Anticipated expiration

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/90 - Details of database functions independent of the retrieved data types
    • G06F 16/901 - Indexing; Data structures therefor; Storage structures

Landscapes

  • Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The embodiments of the disclosure provide a data access method and a data access device. A server determines the position for storing data by using a queue head pointer, thereby storing the data, and determines the position for releasing data by using a queue tail pointer, thereby releasing the data. Moreover, the key-value of each storage queue does not need to be maintained, so the number of storage queues can grow without limit and the storage requirements of large-scale data can be met.

Description

Data access method and device
Technical Field
The embodiment of the disclosure relates to the technical field of databases, in particular to a data access method and device.
Background
Remote dictionary server (Redis) is one of the most widely used memory unstructured databases at present, and Redis is an open-source, network-supported, memory-based key-value database. The data types supported by Redis to be stored comprise a string (string), a queue (list), a set (set), an ordered set (zset), a hash type (hash), and the like.
Based on the Redis list technique, a server splits its memory into a plurality of queues (lists), each queue corresponds to one key, and the server must record the keys of all queues. When the number of queues is large, the number of keys that need to be recorded and maintained increases accordingly.
However, the number of keys one server can record and maintain is limited, which limits the number of queues the cache can be split into and makes it difficult to meet the storage requirements of large-scale data.
Disclosure of Invention
The embodiments of the disclosure provide a data access method and device in which a queue head pointer indicates the position for storing data and a queue tail pointer indicates the position for taking out data, so that the number of storage queues is not limited and the storage requirements of large-scale data are met.
In a first aspect, an embodiment of the present disclosure provides a data access method, including:
receiving a data storage instruction, wherein the data storage instruction is used for indicating a plurality of data to be stored;
sequentially storing the data to be stored at the head of a first storage queue pointed by a queue head pointer;
if the available storage space of the first storage queue is smaller than the size of the data to be stored, adding a second storage queue, and storing the remaining data to be stored to the second storage queue;
and pointing the queue head pointer to the second storage queue.
In a second aspect, an embodiment of the present disclosure provides a data access apparatus, including:
the device comprises a receiving module, a storage module and a processing module, wherein the receiving module is used for receiving a data storage instruction, and the data storage instruction is used for indicating a plurality of data to be stored;
the processing module is used for sequentially storing the data to be stored at the head of the first storage queue pointed by the queue head pointer; if the available storage space of the first storage queue is smaller than the size of the data to be stored, adding a second storage queue, and storing the remaining data to be stored to the second storage queue;
and the updating module is used for pointing the queue head pointer to the second storage queue.
In a third aspect, the embodiments of the present disclosure provide a data access apparatus, including a processor, a memory, and a computer program stored on the memory and executable on the processor, where the processor executes the program to implement the method according to the first aspect or the various possible implementations of the first aspect.
In a fourth aspect, embodiments of the present disclosure provide a storage medium having stored therein instructions that, when executed on a server, cause the server to perform a method as set forth in the first aspect or the various possible implementations of the first aspect.
In a fifth aspect, embodiments of the present disclosure provide a computer program product, which when run on a server, causes the server to perform the method as described above for the first aspect or the various possible implementations of the first aspect.
According to the data access method and device provided by the embodiments of the disclosure, the storage process is as follows: after the server receives a data storage instruction indicating a plurality of data to be stored, it determines a first storage queue according to the queue head pointer and sequentially stores the data to be stored in the first storage queue; if the available space of the first storage queue cannot meet the space required by the data to be stored, the remaining data is stored in a second storage queue and the queue head pointer is pointed to the second storage queue. The release process is as follows: after the server receives a data release instruction indicating the data amount of the data to be released, it determines a third storage queue according to the queue tail pointer and sequentially releases data from the tail of the third storage queue; if the data in the third storage queue is insufficient, data is released sequentially from the tail of the storage queue after the third storage queue until enough data has been released, and the queue tail pointer is pointed to the storage queue holding the data that follows the last released data. That is, the server determines the position for storing data by using the queue head pointer, realizing data storage, and determines the position for releasing data by using the queue tail pointer, realizing data release. Moreover, the key-value of each storage queue does not need to be maintained, so the number of storage queues can grow without limit and the storage requirements of large-scale data can be met.
Drawings
In order to more clearly illustrate the embodiments of the present disclosure or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present disclosure, and for those skilled in the art, other drawings can be obtained according to the drawings without inventive exercise.
FIG. 1 is a schematic diagram of an operating environment of a data access method according to an embodiment of the present disclosure;
FIG. 2 is a flow chart of a data access method provided by an embodiment of the present disclosure;
FIG. 3 is a flow chart of another data access method provided by the embodiments of the present disclosure;
fig. 4 is a schematic structural diagram of a data access device according to an embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of another data access device according to an embodiment of the disclosure.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present disclosure more clear, the technical solutions of the embodiments of the present disclosure will be described clearly and completely with reference to the drawings in the embodiments of the present disclosure, and it is obvious that the described embodiments are some, but not all embodiments of the present disclosure. All other embodiments, which can be derived by a person skilled in the art from the embodiments disclosed herein without making any creative effort, shall fall within the protection scope of the present disclosure.
Redis is a key-value based storage system that is being applied more and more widely thanks to its high read-write performance. The data types Redis can store include strings (string), queues (list), sets (set), ordered sets (zset), hashes (hash), and the like. In the technique based on Redis lists, distributed storage is adopted: one server corresponds to a plurality of memories, the different memories of the server are regarded as different queues, and each queue corresponds to one key-value pair. When the memory space occupied by the value corresponding to a key is large, the key is called a big key; common big keys are divided into big keys of the character-string type and big keys of non-character-string types. Big keys bring several drawbacks. Drawback 1: non-uniform memory usage; for example, in a Redis cluster, big keys cause the memory usage of nodes to become unbalanced. Drawback 2: timeout blocking; because Redis is single-threaded, operating on a big key is time-consuming, so the probability of blocking until timeout is high. Drawback 3: network congestion; fetching a big key generates network traffic every time, and if one big key is 1 megabyte (MB) and it is accessed 1000 times per second, 1000 MB of traffic is generated per second, which may crash an ordinary server with a gigabit network card (128 MB/s in bytes). Moreover, such servers are generally deployed with multiple instances on a single machine, so one big key may also affect the other instances.
Because of the drawbacks of big keys, in the prior art the memory space is divided into more queues, and the different queues correspond to different key-value pairs. Compared with big keys, the keys of these queues are called small keys. The server then needs to maintain the keys in a dedicated record space; the value corresponding to a key can be determined from the key, thereby realizing the storage and release of data. It can be appreciated that when the number of queues is large, the number of keys that need to be recorded and maintained increases accordingly. However, the space dedicated to recording and maintaining the small keys is limited, and if the number of queues the memory space is split into exceeds the number of queues the dedicated space can record and maintain, the dedicated space must be reallocated. For example, if the length of each queue is 1000 and the dedicated space can record the small keys of at most 1000 queues, then when the number of queues exceeds 1000, the dedicated space needs to be enlarged. Therefore, in this queue-splitting method, since the number of keys that can be recorded and maintained in the dedicated space is limited, the number of queues the cache is split into is also limited.
However, in some large-scale data access scenarios, the number of queues required is very large. For example, if the upstream capability is strong and the downstream capability is weak, a buffer pool is needed, and the number of queues in the buffer pool is extremely large. For instance, a news-pushing APP (application) is installed on the mobile phones of many users; after the APP pushes a piece of news, if many users comment on it, the comment data sent by every user arrives at the server and is stored in the buffer pool, requiring the buffer pool to hold a very large number of queues. If the buffer pool holds only one or two thousand queues, the requirement cannot be met, and it is difficult to adapt to the storage requirements of large-scale data.
In view of this, the embodiments of the present disclosure provide a data access method, where a head pointer indicates a location for storing data, and a tail pointer indicates a location for retrieving data, so that the number of storage queues is not limited, and thus, the storage requirement of large-scale data is met.
First, concepts related to the embodiments of the present disclosure will be described in detail.
Redis: an open-source (BSD-licensed), in-memory data structure server that can be used as a database, cache, and message queue broker, and that supports data types such as strings, hash tables, lists, sets, ordered sets, and bitmaps. It has built-in replication, Lua scripting, LRU eviction, transactions, and different levels of on-disk persistence, while providing high availability through Redis Sentinel and automatic partitioning through Redis Cluster.
A queue may be empty; when empty it contains neither a head element nor a tail element, and when the head element and the tail element are the same, the queue contains only one element. The head element is the first element at the left/front end of the queue, and the tail element is the first element at the right/rear end of the queue. For example, if a queue contains three elements x, y, and z from left to right, then x is the head element and z is the tail element.
The common operation mode of the queue is as follows:
enqueue (push): put an element in at the head of the queue;
dequeue (pop): remove the element at the tail of the queue;
empty: determine whether the queue is empty; if the head and the tail of the queue coincide, the queue is empty, that is, there is no element in the queue;
full: the queue is full when the amount of data it holds reaches its length limit.
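As an illustration, the following is a minimal sketch of these four operations against a single Redis list using the redis-py client. The mapping of enqueue/dequeue onto LPUSH/RPOP, the key name order0, and the length limit are illustrative assumptions, not details fixed by this disclosure.

```python
import redis

r = redis.Redis(host="localhost", port=6379)

QUEUE_KEY = "order0"  # hypothetical unique identifier of one queue
MAX_LEN = 1000        # hypothetical per-queue length limit

def enqueue(element):
    """Put an element in at the head of the queue (push)."""
    r.lpush(QUEUE_KEY, element)

def dequeue():
    """Remove and return the element at the tail of the queue (pop)."""
    return r.rpop(QUEUE_KEY)  # returns None when the queue is empty

def is_empty():
    """The queue is empty when it contains no elements."""
    return r.llen(QUEUE_KEY) == 0

def is_full():
    """The queue is full when the stored amount reaches the length limit."""
    return r.llen(QUEUE_KEY) >= MAX_LEN
```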
Fig. 1 is a schematic diagram of an operating environment of a data access method according to an embodiment of the present disclosure. Referring to fig. 1, the operating environment includes an electronic device 10 and a server 20, and a network connection is established between the electronic device 10 and the server 20, where the network connection may be a wired connection, a wireless communication link, a fiber optic cable, or the like. A user may use the electronic device 10 to interact with the server 20 to receive or send messages or the like. Various APPs, such as a shopping APP, a web browsing APP, a search APP, an instant messaging tool, social platform software, and the like, may be installed on the electronic device 10. The electronic device 10 includes, but is not limited to, a smart phone, a tablet computer, an e-book reader, a laptop portable computer, a desktop computer, and the like.
The server 20 may be a server capable of providing various services, for example a background management server that supports a shopping website browsed by a user of the electronic device 10; it may query received product information and feed the query result back to the electronic device 10. In the embodiments of the present disclosure, a Redis database is deployed on the server 20, which may therefore also be referred to as a Redis server. The data access method described in the embodiments of the present disclosure is generally executed by the server 20, and accordingly the data access apparatus is generally disposed in the server 20.
It should be noted that the number of the electronic devices 10 and the servers 20 shown in fig. 1 is only illustrative, and in actual implementation, any number of electronic devices and servers may be provided according to implementation requirements.
Next, the data access method according to the embodiments of the present disclosure will be described in detail based on fig. 1. For example, please refer to fig. 2.
Fig. 2 is a flowchart of a data access method according to an embodiment of the disclosure. The embodiment of the present disclosure describes the data access method in detail from the perspective of data storage, and the embodiment of the present disclosure includes:
101. Receiving a data storage instruction, wherein the data storage instruction is used for indicating a plurality of data to be stored.
Generally, access to a server provided with a Redis database mainly includes data storage and data release.
For data storage, when the electronic equipment needs to write data into the server, the data to be stored is carried in a data storage instruction and is sent to the server; accordingly, the server receives the data storage instruction.
102. Sequentially storing the data to be stored at the head of the first storage queue pointed to by the queue head pointer.
After receiving the data storage instruction, the server determines the first storage queue according to the queue head pointer and sequentially stores the data to be stored starting from the head of the first storage queue. The first storage queue is not the first storage queue established in the buffer pool, but the storage queue into which data can currently be stored; it may be an empty queue or a queue that already stores data but still has some available space. For example, if 10 storage queues have been established in the buffer pool, the first 9 are full, and the 10th is not, then the queue head pointer points to the 10th storage queue, which is therefore the first storage queue; the 10th storage queue may be an empty queue or a queue with part of its storage space free.
103. If the available storage space of the first storage queue is smaller than the size of the data to be stored, adding a second storage queue and storing the remaining data to be stored in the second storage queue.
In general, the amount of data to be stored carried by a data storage instruction can be very large. For example, assume the length of a storage queue is 1000, i.e., the storage queue can store 1000 data. If the data storage instruction carries 1001 data to be stored and the first storage queue has room for only 200 of them, the 200 data are sequentially put in at the head of the first storage queue, and the remaining 801 data are then sequentially stored at the head of the second storage queue. In this process, the data to be stored may be stored directly at the head of each storage queue, or stored into the storage queue after processing such as encapsulation.
104. Pointing the queue head pointer to the second storage queue.
In step 103, the first storage queue is full once the data to be stored has been stored, so in order to quickly find a storage queue capable of storing data when the next data storage instruction is received, the queue head pointer needs to be updated to point to the second storage queue.
It should be noted that, if the second storage queue is full of data, the head-of-line pointer is pointed to the next storage queue of the second storage queue.
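As an illustration, the following is a sketch of the flow of steps 101 to 104 under stated assumptions: each storage queue is a Redis list named order{n}, the queue head pointer is an integer counter under a hypothetical key head_ptr, and every queue has the same fixed length. This is an illustrative sketch, not the implementation fixed by this disclosure.

```python
import redis

r = redis.Redis()
QUEUE_LEN = 1000  # assumed length of every storage queue

def store(items):
    head = int(r.get("head_ptr") or 0)           # step 102: locate the first storage queue
    for item in items:
        if r.llen(f"order{head}") >= QUEUE_LEN:  # step 103: the queue is full, so the
            head += 1                            # rest goes into a new (second) queue
            r.set("head_ptr", head)              # step 104: update the queue head pointer
        r.lpush(f"order{head}", item)            # store the data at the head of the queue
```

Because the loop advances the pointer whenever the current queue fills, the note under step 104 is covered as well: if the second storage queue also fills up, the pointer simply moves on to the next queue.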
In the embodiment of the disclosure, after receiving a data storage instruction indicating a plurality of data to be stored, the server determines the first storage queue according to the queue head pointer and sequentially stores the data to be stored in it; if the available space of the first storage queue cannot meet the space required by the data to be stored, the remaining data is stored in the second storage queue and the queue head pointer is pointed to the second storage queue. In this process, the position for storing data is determined by the queue head pointer, realizing data storage, and the key-value of each storage queue does not need to be maintained, so the number of storage queues can grow without limit and the storage requirements of large-scale data can be met.
Generally, access to a server provided with a Redis database mainly includes data storage and data release. The above embodiment mainly described the data storage process; the data release process is explained in detail below.
For example, please refer to fig. 3. Fig. 3 is a flowchart of another data access method provided by an embodiment of the disclosure. This embodiment describes the data access method in detail from the perspective of data release, and includes:
201. Receiving a data release instruction, wherein the data release instruction is used for indicating the data amount of the data to be released.
For data release, when the electronic device needs to read data out of the server, it sends the server a data release instruction indicating the data amount of the data to be released; accordingly, the server receives the data release instruction.
202. Sequentially releasing data from the first queue element at the tail of the third storage queue pointed to by the queue tail pointer.
After receiving the data release instruction, the server determines the third storage queue according to the queue tail pointer and sequentially takes data out from the tail of the third storage queue. The third storage queue is not the third storage queue established in the buffer pool, but the storage queue from which data can currently be released; it may be a full queue or a queue from which some data has already been fetched but in which data remains. For example, if 10 storage queues have been established in the buffer pool, all data in the 1st to 4th storage queues has been released, and part of the data in the 5th storage queue has been released, then the queue tail pointer points to the 5th storage queue.
203. If the quantity of data stored in the third storage queue is smaller than the indicated data amount, continuing to release data from the first queue element at the tail of the storage queue after the third storage queue until the quantity of released data reaches the data amount.
Generally, the data amount indicated by a data release instruction may be relatively large, exceeding the amount of data stored in the third storage queue. For example, assume the length of the third storage queue is 1000 and the queue is full, i.e., 1000 data are stored in it. If the data amount indicated by the data release instruction is 1500, data is fetched sequentially from the tail of the third storage queue; after its 1000 data have been fetched, release continues from the first queue element at the tail of the next storage queue until the remaining 500 data have been released.
204. Pointing the queue tail pointer to the storage queue where the data following the last released data is located.
In step 203, after the data is released, all data in the third storage queue has been fetched and the third storage queue is exhausted. To quickly find the storage queue from which data can be released when the next data release instruction is received, the queue tail pointer needs to be updated to point to the storage queue holding the data that follows the last released data. Continuing the example in step 203, suppose the storage queue after the third storage queue is the fourth storage queue, and the one after that is the fifth. If the length of the fourth storage queue is 500, then after the 500 data are fetched from it, all of its data has been fetched and the last released data was in the fourth storage queue; the queue tail pointer is therefore pointed to the fifth storage queue. If instead the length of the fourth storage queue is 1000, then after 500 data are fetched there is still unfetched data in the fourth storage queue, and the queue tail pointer is pointed to the fourth storage queue.
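As an illustration, the following is a simplified sketch of steps 201 to 204 under the same assumptions as the storage sketch above: queues are Redis lists named order{n} and the queue tail pointer is an integer counter under a hypothetical key tail_ptr. In this simplification the pointer advances when a queue is found empty, which approximates step 204.

```python
import redis

r = redis.Redis()

def release(amount):
    tail = int(r.get("tail_ptr") or 0)  # step 202: locate the third storage queue
    head = int(r.get("head_ptr") or 0)
    released = []
    while len(released) < amount:
        item = r.rpop(f"order{tail}")   # take data from the tail of the queue
        if item is not None:
            released.append(item)
        elif tail < head:               # step 203: this queue is exhausted, so
            tail += 1                   # continue from the next storage queue
            r.set("tail_ptr", tail)     # step 204: update the queue tail pointer
        else:
            break                       # the buffer pool holds no more data
    return released
```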
In the embodiment of the disclosure, after receiving a data release instruction indicating the data amount of the data to be released, the server determines the third storage queue according to the queue tail pointer and sequentially releases data from its tail; if the data in the third storage queue is insufficient, data continues to be released sequentially from the tail of the storage queue after the third storage queue until enough data has been released, and the queue tail pointer is pointed to the storage queue holding the data that follows the last released data. In this process, the position for taking data out is determined by the queue tail pointer, realizing data release, and the key-value of each storage queue does not need to be maintained, so the number of storage queues can grow without limit and the storage requirements of large-scale data can be met.
It should be noted that although the embodiments of fig. 2 and fig. 3 describe data storage and data release separately, the two are not independent; in practice they may be performed simultaneously or sequentially, and the embodiments of the present disclosure place no limitation on this.
In the above examples, the queues in the buffer pool may be pre-established; initially every queue is empty, and the queue head pointer and queue tail pointer both point to the first (empty) queue. For example, the cache space is divided into a plurality of queues, and each queue is given a unique identifier, which may be a small key, an identity (ID), an order, or the like. The unique identifiers may be assigned to the queues in increasing order; taking order as an example, the identifiers are order0, order1, order2, order3, … toward positive infinity. Alternatively, the identifiers may be assigned in decreasing order, e.g., order0, order-1, order-2, order-3, … toward negative infinity. As yet another alternative, a unique identifier, such as order a, may be generated for each queue by a unique-identifier generating means.
Alternatively, the queues in the buffer pool may be established during data storage. In that case, the buffer pool has a default initial storage queue, the initial storage queue is empty, and the queue head pointer and queue tail pointer both point to it. Queues are then established one by one as data is stored (see the sketch after this paragraph). For example, if the length of a storage queue is 1000 and 5002 data are to be stored the first time data is stored, then order0, order1, order2, order3, and order4 are established; the data is stored starting from the head of order0 until all 5002 data are stored, and the queue head pointer points to order4. During data release, data is released starting from the tail of order0.
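As an illustration, the following is a minimal sketch of this initialization, assuming the integer counter keys head_ptr and tail_ptr and the order{n} naming used in the earlier sketches. A Redis list springs into existence on its first LPUSH, so the initial empty queue itself needs no explicit creation.

```python
import redis

r = redis.Redis()

def init_buffer_pool():
    # Both pointers start at the initial storage queue order0, which is empty;
    # storage then creates order1, order2, ... on demand as queues fill up.
    r.set("head_ptr", 0)
    r.set("tail_ptr", 0)
```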
In the above embodiments, the lengths of the first storage queue and the second storage queue may be the same or different; that is, the queue lengths in the buffer pool may be uniform or not. For example, with even splitting, every queue has the same length, i.e., every queue has an equal storage space; this makes storage more uniform, avoids excessive load on the memory holding a longer queue, and still leaves each queue with capacity expansion capability. As another example, the queue length may be set in a customized manner according to the service, the storage space, and so on. As yet another example, the queue length may be set randomly.
In the above embodiments, the server may correspond to a plurality of memories, with data stored across the memories in a distributed manner. Generally, each big key corresponds to one memory. In the data storage process, a big key is determined, the first storage queue is determined according to the queue head pointer, and the data to be stored is stored starting from the head of the first storage queue; during storage, if the available storage space of the first storage queue is insufficient, the remaining data to be stored is stored in the next storage queue and the queue head pointer is updated. Similarly, in the data release process, a big key is determined, the third storage queue is determined according to the queue tail pointer, and data is released starting from the tail of the third storage queue until the amount of released data meets the requirement. Table 1 shows a representation of the queue head pointer and queue tail pointer provided by an embodiment of the present disclosure.
TABLE 1
big key        queue head pointer        queue tail pointer
XXX            9999                      0101
In Table 1, XXX is a big key. The queue head pointer points to the queue with unique identifier 9999, where 9999 can also be understood as the small key of that queue; the queue tail pointer points to the queue with unique identifier 0101, where 0101 can likewise be understood as the small key of that queue.
From Table 1 it can be seen that in the process of data storage or data release, only the following need to be recorded: the big keys, the queue head pointer, and the queue tail pointer; different big keys correspond to different services, or, in distributed storage, to different memories of the server. Compared with recording and maintaining the key-value of every queue, i.e., maintaining the small key-values, the embodiments of the disclosure only need to maintain the queue head pointer and the queue tail pointer, reducing the number of keys to record and maintain to the greatest extent. Furthermore, in the embodiments of the present disclosure there is no limitation on the number of queues, so the storage requirements of large-scale data can be met.
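As an illustration, the following is a minimal sketch of the bookkeeping Table 1 implies: for each big key, only a queue head pointer and a queue tail pointer are recorded, here as the two fields of a Redis hash. The meta: prefix and the field names are illustrative assumptions; the example values mirror the small keys 9999 and 0101 from Table 1.

```python
import redis

r = redis.Redis()

def record_pointers(big_key, head, tail):
    # Per big key, only two small keys are kept: the head and tail pointers.
    r.hset(f"meta:{big_key}", mapping={"head": head, "tail": tail})

def lookup_pointers(big_key):
    meta = r.hgetall(f"meta:{big_key}")
    return meta.get(b"head"), meta.get(b"tail")

# e.g. record_pointers("XXX", "9999", "0101"): however many queues lie
# between 0101 and 9999, no per-queue key-value needs to be maintained.
```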
In the data storage process, there may well be a scenario in which a plurality of objects store data to the server at the same time; for example, a plurality of users simultaneously store data to the server through their electronic devices, and the big keys corresponding to their data to be stored are the same, i.e., the data is written into the same memory of the server. To ensure that the data of all of these objects is stored in the server, in the embodiments of the present disclosure the queue may be a queue with capacity expansion capability. Capacity expansion capability means that the queue has a default length but also a real length, and the real length is greater than the default length. How a plurality of objects store data at the same time is described in detail below.
In a possible implementation manner, the data storage instruction includes a first data storage instruction and a second data storage instruction, and sequentially storing the data to be stored at the head of the first storage queue pointed to by the queue head pointer includes: storing the data to be stored indicated by the first data storage instruction in the first storage queue; if the available storage space of the first storage queue is equal to the size of the data to be stored indicated by the first data storage instruction, judging whether the first storage queue has capacity expansion capability;
and if the first storage queue has capacity expansion capability, storing the data to be stored indicated by the second data storage instruction in the first storage queue.
Illustratively, two objects store data to the server at the same time: one object sends a first data storage instruction and the other sends a second data storage instruction. The server stores the data to be stored indicated by the first data storage instruction in the first storage queue, and if the available storage space of the first storage queue is equal to the size of that data, it judges whether the first storage queue has capacity expansion capability. For example, if the default length of the first storage queue is 1000 but it can actually store 1600 data, the server does not update the queue head pointer and continues to store the data to be stored indicated by the second data storage instruction at the head of the first storage queue until the length of the first storage queue equals 1600. If the first storage queue has no capacity expansion capability, the queue head pointer is updated to point to the second storage queue, and the server stores the data to be stored indicated by the second data storage instruction in the second storage queue.
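As an illustration, the following is a sketch of this capacity-expansion check using the numbers from the example (default length 1000, real length 1600). The key names are illustrative assumptions carried over from the earlier sketches, and a production version would need an atomic check-and-push (for example a Lua script) rather than this read-then-write sequence.

```python
import redis

r = redis.Redis()

DEFAULT_LEN = 1000  # default queue length
REAL_LEN = 1600     # real length: default length plus expansion headroom

def store_with_expansion(queue_key, items):
    """Keep filling the same queue past its default length while headroom lasts."""
    for i, item in enumerate(items):
        if r.llen(queue_key) >= REAL_LEN:
            return items[i:]  # expansion capacity exhausted: the caller points
                              # the queue head pointer to the next queue
        r.lpush(queue_key, item)
    return []                 # everything fit within the expanded queue
```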
In this embodiment, the capacity expansion capability of the queue prevents a storage conflict from causing the conflicting data to fail to be stored, or the storage process from having to be re-executed, both of which would lower storage efficiency; the storage success rate is thereby improved.
It should be noted that although in the above embodiments the queue head pointer is used for storing data and the queue tail pointer for releasing data, the embodiments of the present disclosure are not limited thereto; in other possible implementations, the queue tail pointer may be used for storing and the queue head pointer for releasing.
The following are embodiments of the disclosed apparatus that may be used to perform embodiments of the disclosed methods. For details not disclosed in the embodiments of the apparatus of the present disclosure, refer to the embodiments of the method of the present disclosure.
Fig. 4 is a schematic structural diagram of a data access device according to an embodiment of the disclosure. The data access device according to the present embodiment may be a server, or may be a chip applied to a server. The data access means may be adapted to perform the functions of the server in the above embodiments. As shown in fig. 4, the data access apparatus 100 may include:
a receiving module 11, configured to receive a data storage instruction, where the data storage instruction is used to indicate a plurality of data to be stored;
the processing module 12 is configured to sequentially store the data to be stored at the head of the first storage queue pointed by the head-of-queue pointer; if the available storage space of the first storage queue is smaller than the size of the data to be stored, adding a second storage queue, and storing the remaining data to be stored to the second storage queue;
and the updating module 13 is used for pointing the queue head pointer to the second storage queue.
In a feasible design, the data storage instruction includes a first data storage instruction and a second data storage instruction, and the processing module 12 is configured to, when the head of a first storage queue pointed by a head pointer sequentially stores the data to be stored, store the data to be stored indicated by the first data storage instruction to the first storage queue, if an available storage space of the first storage queue is equal to the size of the data to be stored indicated by the first data storage instruction, determine whether the first storage queue has capacity expansion capability, and if the first storage queue has capacity expansion capability, store the data to be stored indicated by the second data storage instruction to the first storage queue.
In a possible design, the processing module 12 is further configured to store the data to be stored, which is indicated by the second data storage instruction, in the second storage queue if the first storage queue does not have capacity expansion capability.
In a possible design, the receiving module 11 is further configured to receive a data release instruction, where the data release instruction is used to indicate a data amount of data to be released;
the processing module 12 is further configured to sequentially release data from a first queue element at the tail of a third storage queue pointed by a queue tail pointer, and if the amount of data stored in the third storage queue is smaller than the data amount, continue to release data from the first queue element at the tail of a next storage queue of the third storage queue until the amount of released data reaches the data amount;
and the updating module 13 is configured to point the queue tail pointer to a storage queue in which the next data of the released last data is located.
In a possible design, the processing module 12 is further configured to establish the head pointer and the tail pointer at the beginning, where the head pointer and the tail pointer both point to an initial storage queue, and the initial storage queue is empty.
In one possible design, the lengths of the first and second store queues are the same or different.
Fig. 5 is a schematic structural diagram of another data access device according to an embodiment of the disclosure. As shown in fig. 5, the data access apparatus 200 includes:
at least one processor 21 and memory 22;
the memory 22 stores computer-executable instructions;
the at least one processor 21 executes computer-executable instructions stored by the memory 22 to cause the at least one processor 21 to perform the data access methods described above.
Optionally, the data access device 200 further comprises a communication component 23. The processor 21, the memory 22, and the communication unit 23 may be connected by a bus 24.
The embodiment of the present disclosure also provides a storage medium, in which computer execution instructions are stored, and when the computer execution instructions are executed by a processor, the computer execution instructions are used to implement the data access method described above.
The embodiment of the disclosure also provides a computer program product, which when running on a server, causes the server to execute the data access method.
In the above embodiments, it should be understood that the described apparatus and method may be implemented in other ways. For example, the above-described device embodiments are merely illustrative, and for example, the division of the modules is only one logical division, and other divisions may be realized in practice, for example, a plurality of modules may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or modules, and may be in an electrical, mechanical or other form.
The modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
In addition, functional modules in the embodiments of the present disclosure may be integrated into one processing unit, or each module may exist alone physically, or two or more modules are integrated into one unit. The unit formed by the modules can be realized in a hardware form, and can also be realized in a form of hardware and a software functional unit.
The integrated module implemented in the form of a software functional module may be stored in a computer-readable storage medium. The software functional module is stored in a storage medium and includes several instructions to cause a server (which may be a personal computer, a server, or a network device, etc.) or a processor (english: processor) to execute some steps of the method according to the embodiments of the present disclosure.
It should be understood that the processor may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), etc. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of the method disclosed in connection with the present disclosure may be embodied directly in a hardware processor for execution, or executed by a combination of hardware and software modules in the processor.
The memory may comprise a high-speed RAM memory and may further comprise a non-volatile memory (NVM), such as at least one disk memory; it may also be a USB disk, a removable hard disk, a read-only memory, a magnetic disk, an optical disk, etc.
The bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended ISA (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, the buses in the figures of the present disclosure are not limited to only one bus or one type of bus.
The storage medium may be implemented by any type or combination of volatile or non-volatile memory devices, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks. A storage media may be any available media that can be accessed by a general purpose or special purpose computer.
An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. Of course, the storage medium may also be integral to the processor. The processor and the storage medium may reside in an application-specific integrated circuit (ASIC). Of course, the processor and the storage medium may also reside as discrete components in a terminal or server.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. The procedures or functions according to the embodiments of the present application are all or partially generated when the computer program instructions are loaded and executed on a computer. The computer may be a general purpose computer, a special purpose computer, a network of computers, or other programmable device. The computer instructions may be stored in a computer readable storage medium or transmitted from one computer readable storage medium to another, for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wire (e.g., coaxial cable, fiber optic, Digital Subscriber Line (DSL)) or wirelessly (e.g., infrared, wireless, microwave, etc.). The computer-readable storage medium can be any available medium that can be accessed by a computer or a data storage device, such as a server, a data center, etc., that incorporates one or more of the available media. The usable medium may be a magnetic medium (e.g., floppy Disk, hard Disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., Solid State Disk (SSD)), among others.
The term "plurality" herein means two or more. The term "and/or" herein is merely an association describing an associated object, meaning that three relationships may exist, e.g., a and/or B, may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the character "/" herein generally indicates that the former and latter related objects are in an "or" relationship; in the formula, the character "/" indicates that the preceding and following related objects are in a relationship of "division".
Those of ordinary skill in the art will understand that: all or a portion of the steps of implementing the above-described method embodiments may be performed by hardware associated with program instructions. The program may be stored in a computer-readable storage medium. When executed, the program performs steps comprising the method embodiments described above; and the aforementioned storage medium includes: various media that can store program codes, such as ROM, RAM, magnetic or optical disks.
Finally, it should be noted that: the above embodiments are only used for illustrating the technical solutions of the present disclosure, and not for limiting the same; while the present disclosure has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present disclosure.

Claims (8)

1. A method for accessing data, comprising:
receiving a data storage instruction, wherein the data storage instruction is used for indicating a plurality of data to be stored;
sequentially storing the data to be stored at the head of a first storage queue pointed by a queue head pointer;
if the available storage space of the first storage queue is smaller than the size of the data to be stored, adding a second storage queue, and storing the remaining data to be stored to the second storage queue;
pointing the queue head pointer to the second storage queue;
the data storage instruction comprises a first data storage instruction and a second data storage instruction, and the data to be stored are sequentially stored at the head of a first storage queue pointed by a queue head pointer, and the method comprises the following steps:
storing the data to be stored indicated by the first data storage instruction to the first storage queue;
if the available storage space of the first storage queue is equal to the size of the data to be stored indicated by the first data storage instruction, judging whether the first storage queue has capacity expansion capability;
and if the first storage queue has capacity expansion capability, storing the data to be stored indicated by the second data storage instruction to the first storage queue.
2. The method of claim 1, further comprising:
and if the first storage queue does not have capacity expansion capability, storing the data to be stored indicated by the second data storage instruction to the second storage queue.
3. The method of claim 1 or 2, further comprising:
receiving a data release instruction, wherein the data release instruction is used for indicating the data volume of data to be released;
sequentially releasing data from a first queue element at the tail of a third storage queue pointed by a queue tail pointer;
if the quantity of the data stored in the third storage queue is smaller than the data quantity, continuing to release the data from the first queue element at the tail part of the next storage queue of the third storage queue until the quantity of the released data reaches the data quantity;
and pointing the queue tail pointer to a storage queue where the next data of the released last data is located.
4. The method according to claim 1 or 2, further comprising:
and initially, establishing the head pointer and the tail pointer, wherein the head pointer and the tail pointer both point to an initial storage queue, and the initial storage queue is empty.
5. The method according to claim 1 or 2,
the lengths of the first storage queue and the second storage queue are the same or different.
6. A data access device, comprising:
the device comprises a receiving module, a storage module and a processing module, wherein the receiving module is used for receiving a data storage instruction, and the data storage instruction is used for indicating a plurality of data to be stored;
the processing module is used for sequentially storing the data to be stored at the head of the first storage queue pointed by the queue head pointer; if the available storage space of the first storage queue is smaller than the size of the data to be stored, adding a second storage queue, and storing the remaining data to be stored to the second storage queue;
the updating module is used for pointing the queue head pointer to the second storage queue;
the data storage instruction comprises a first data storage instruction and a second data storage instruction, the processing module is used for storing the data to be stored indicated by the first data storage instruction to a first storage queue when the head of the first storage queue pointed by a queue head pointer stores the data to be stored in sequence, if the available storage space of the first storage queue is equal to the size of the data to be stored indicated by the first data storage instruction, whether the first storage queue has capacity expansion capability is judged, and if the first storage queue has capacity expansion capability, the data to be stored indicated by the second data storage instruction is stored to the first storage queue.
7. A server comprising a processor, a memory and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the method according to any of the preceding claims 1-5 when executing the program.
8. A storage medium having stored therein instructions which, when run on a server, cause the server to perform the method of any one of claims 1-5.
CN201910729095.3A 2019-08-08 2019-08-08 Data access method and device Active CN110427525B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910729095.3A CN110427525B (en) 2019-08-08 2019-08-08 Data access method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910729095.3A CN110427525B (en) 2019-08-08 2019-08-08 Data access method and device

Publications (2)

Publication Number Publication Date
CN110427525A CN110427525A (en) 2019-11-08
CN110427525B true CN110427525B (en) 2022-02-25

Family

ID=68415036

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910729095.3A Active CN110427525B (en) 2019-08-08 2019-08-08 Data access method and device

Country Status (1)

Country Link
CN (1) CN110427525B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7089380B1 (en) * 2003-05-07 2006-08-08 Avago Technologies General Ip (Singapore) Pte. Ltd. Method and system to compute a status for a circular queue within a memory device
CN103634379A (en) * 2013-11-13 2014-03-12 华为技术有限公司 Management method for distributed storage space and distributed storage system
CN109542615A (en) * 2018-10-18 2019-03-29 深圳市景阳科技股份有限公司 A kind of implementation method, device and the terminal device of variable node generic queue
CN109656515A (en) * 2018-11-16 2019-04-19 深圳证券交易所 Operating method, device and the storage medium of queue message

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103312624B (en) * 2012-03-09 2016-03-09 腾讯科技(深圳)有限公司 A kind of Message Queuing Services system and method
US10250519B2 (en) * 2014-05-21 2019-04-02 Oracle International Corporation System and method for supporting a distributed data structure in a distributed data grid

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7089380B1 (en) * 2003-05-07 2006-08-08 Avago Technologies General Ip (Singapore) Pte. Ltd. Method and system to compute a status for a circular queue within a memory device
CN103634379A (en) * 2013-11-13 2014-03-12 华为技术有限公司 Management method for distributed storage space and distributed storage system
CN109542615A (en) * 2018-10-18 2019-03-29 深圳市景阳科技股份有限公司 A kind of implementation method, device and the terminal device of variable node generic queue
CN109656515A (en) * 2018-11-16 2019-04-19 深圳证券交易所 Operating method, device and the storage medium of queue message

Also Published As

Publication number Publication date
CN110427525A (en) 2019-11-08

Similar Documents

Publication Publication Date Title
WO2021180025A1 (en) Message processing method and apparatus, electronic device and medium
CN110995776B (en) Block distribution method and device of block chain, computer equipment and storage medium
US11237761B2 (en) Management of multiple physical function nonvolatile memory devices
CN107092628B (en) Time series data processing method and device
CN109697019B (en) Data writing method and system based on FAT file system
CN115617255A (en) Management method and management device for cache files
CN110427394B (en) Data operation method and device
CN115348222A (en) Message distribution method, device, server and storage medium
CN111866156B (en) Fusing processing method and device
CN111078697B (en) Data storage method and device, storage medium and electronic equipment
CN110427525B (en) Data access method and device
CN111666045A (en) Data processing method, system, equipment and storage medium based on Git system
CN110020290B (en) Webpage resource caching method and device, storage medium and electronic device
CN114116656B (en) Data processing method and related device
US11023275B2 (en) Technologies for queue management by a host fabric interface
WO2022028165A1 (en) Cache management method, terminal, and storage medium
CN101819589B (en) Method and device for controlling file to be input into/output from cache
CN113672248A (en) Patch acquisition method, device, server and storage medium
CN107689996B (en) Data transmission method and device and terminal equipment
CN108509478B (en) Splitting and calling method of rule engine file, electronic device and storage medium
CN112817923B (en) Application program data processing method and device
CN111142727A (en) Application icon management method, mobile terminal and readable storage medium
CN114237509B (en) Data access method and device
CN112286973B (en) Data message storage method and device, computer equipment and storage medium
CN114327281B (en) TCG software and hardware acceleration method and device for SSD, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant