CN118260272A - Data unit access and update method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN118260272A
Authority
CN
China
Prior art keywords
target
lock
cache pool
data unit
target object
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202410683437.3A
Other languages
Chinese (zh)
Other versions
CN118260272B (en)
Inventor
李佳欣
郭鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin Nankai University General Data Technologies Co ltd
Original Assignee
Tianjin Nankai University General Data Technologies Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin Nankai University General Data Technologies Co ltd filed Critical Tianjin Nankai University General Data Technologies Co ltd
Priority to CN202410683437.3A priority Critical patent/CN118260272B/en
Publication of CN118260272A publication Critical patent/CN118260272A/en
Application granted granted Critical
Publication of CN118260272B publication Critical patent/CN118260272B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval of structured data, e.g. relational data
    • G06F16/21 Design, administration or maintenance of databases
    • G06F16/217 Database tuning
    • G06F16/23 Updating
    • G06F16/24 Querying
    • G06F16/245 Query processing
    • G06F16/2455 Query execution
    • G06F16/24552 Database cache management

Landscapes

  • Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The application belongs to the technical field of electric digital data processing and provides a data unit access and update method and apparatus, an electronic device, and a storage medium. The method comprises the following steps: receiving a request sent by a thread to access a target data unit in a database, and acquiring a target lock corresponding to the target data unit according to the request; if the target lock is acquired and the target pointer corresponding to the target lock is not null, sending the target object corresponding to the target data unit in the cache pool pointed to by the target pointer to the thread based on the target lock, and releasing the target lock; determining, according to a preset update condition, whether to update the position of the target object in the cache pool queue; and, if an update is determined, moving the position of the target object in the cache pool queue a preset number of positions toward the head of the queue. By setting per-unit target locks and target pointers, the application reduces contention for the global lock under multi-threaded concurrency; by setting the preset update condition, it reduces the update frequency of the target object, further reducing global-lock contention and improving database performance.

Description

Data unit access and update method and device, electronic equipment and storage medium
Technical Field
The application relates to the technical field of electric digital data processing, and in particular to a data unit access and update method and apparatus, an electronic device, and a storage medium.
Background
The minimum unit of access to a database is the data unit; a database comprises a plurality of data units, and each data unit stores a plurality of items of data. For example, each data unit in the GBase 8a database stores 65536 data items. The database resides in a storage device, and to reduce the number of accesses to the storage device, a cache pool is typically used to hold the data of data units read from the storage device. When a worker thread needs to access a data unit, it first looks up the object (the data) corresponding to that data unit in the cache pool, and then updates the position of the found object in the cache pool queue.
Both the querying step and the updating step read or modify the cache pool's internal structure, so a global lock is needed to ensure that only one worker thread accesses the current data unit at a time. When multiple worker threads need data whose physical row numbers are discrete, different data units may be accessed frequently. For example, if a database comprises 1000 data units and the current task is to fetch one row from each data unit, 1000 data units need to be accessed concurrently, causing serious contention for the global lock.
Disclosure of Invention
In view of the above, the embodiments of the present application provide a data unit access and update method, apparatus, electronic device, and storage medium, so as to solve the technical problem in the related art that concurrent worker threads cause serious global-lock contention and slow access speeds, thereby degrading database performance.
In a first aspect, an embodiment of the present application provides a method for accessing and updating a data unit, including:
Receiving a request sent by a thread for accessing a target data unit in a database, and acquiring a target lock corresponding to the target data unit according to the request;
If the target lock is acquired and the target pointer corresponding to the target lock is not empty, sending the target object corresponding to the target data unit in the cache pool pointed to by the target pointer to the thread based on the target lock, and releasing the target lock; a lock is used, once acquired, to lock the processing of the corresponding data unit's object in the cache pool, and a pointer is used to point to the object corresponding to that data unit in the cache pool;
Determining whether to update the position of the target object in a cache pool queue according to a preset updating condition; the cache pool comprises objects corresponding to a plurality of data units in the database, and the objects are arranged in a queue form; the preset updating condition is determined according to the last updating time of the target object or the reference count corresponding to the target object;
If an update is determined, moving the position of the target object in the cache pool queue a preset number of positions toward the head of the queue.
In a possible implementation manner of the first aspect, the method further includes:
if the target lock is acquired and a target pointer corresponding to the target lock is empty, constructing a target object corresponding to the target data unit in the cache pool based on the target data unit in the storage device where the database is located;
And based on the target lock, the target pointer points to the target object in the cache pool, the target object in the cache pool pointed to by the target pointer is sent to the thread, and the target lock is released.
In a possible implementation manner of the first aspect, the method further includes:
If the target lock is not acquired, determining the times of acquiring the target lock by each thread in a preset time period, and judging whether the times are greater than a preset times threshold;
and if the times are greater than the preset times threshold, acquiring the target lock again after a preset time interval until the target lock is acquired.
In a possible implementation manner of the first aspect, the determining whether to update the location of the target object in the cache pool queue according to a preset update condition includes:
judging whether the time interval between the current time and the last updating time of the target object is larger than a preset time threshold value or not;
And if the time interval is larger than the preset time threshold, determining to update the position of the target object in the cache pool queue.
In a possible implementation manner of the first aspect, the determining whether to update the location of the target object in the cache pool queue according to a preset update condition includes:
Updating the reference count corresponding to the target object based on the target lock, and judging whether the updated reference count of the target object is larger than a preset count threshold;
And if the updated reference count is greater than the preset count threshold, determining to update the position of the target object in the cache pool queue.
In a possible implementation manner of the first aspect, after the constructing, in the cache pool, a target object corresponding to the target data unit, the method further includes:
Acquiring a global lock corresponding to the cache pool;
If the global lock is acquired, judging whether the number of objects in the cache pool is greater than a preset number threshold; when it is, evicting, based on the global lock, the first object with a reference count of 0 encountered while traversing the cache pool from the tail of the queue toward the head, and then releasing the global lock.
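The eviction step described above can be sketched in a few lines of Python. This is an illustrative model only: the patent specifies the policy (evict the first unreferenced object found from tail toward head, under the global lock, when the pool exceeds its size threshold) but not a data layout, so the `OrderedDict` mapping and the function name `evict_one` are assumptions.

```python
import threading
from collections import OrderedDict

def evict_one(pool, max_size, global_lock):
    """Evict, under the global lock, the first object with reference count 0
    found while walking from the queue tail toward the head, but only when
    the pool holds more objects than max_size. `pool` maps unit id ->
    (object, refcount); iteration order runs tail -> head. Hypothetical
    structure -- the patent describes the policy, not this layout."""
    with global_lock:
        if len(pool) <= max_size:
            return None                      # pool within bounds: nothing to do
        for unit_id, (obj, refcount) in list(pool.items()):
            if refcount == 0:                # first unreferenced object from the tail
                del pool[unit_id]
                return unit_id
        return None                          # every cached object is still pinned
```

Note that if every object is still referenced, nothing is evicted; the sketch returns the evicted unit id so a caller could clear the corresponding per-unit pointer.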
In a possible implementation manner of the first aspect, the moving the position of the target object in the cache pool queue a preset number of positions toward the head of the queue includes:
Acquiring a global lock corresponding to the cache pool;
And if the global lock is acquired, updating the target object to the head of a queue in the cache pool based on the global lock, and releasing the global lock.
In a second aspect, an embodiment of the present application provides a data unit accessing and updating apparatus, including:
the receiving module is used for receiving a request sent by a thread for accessing a target data unit in a database, and acquiring a target lock corresponding to the target data unit according to the request;
the access module is used for sending, when the target lock is acquired and the target pointer corresponding to the target lock is not empty, the target object corresponding to the target data unit in the cache pool pointed to by the target pointer to the thread based on the target lock, and releasing the target lock; a lock is used, once acquired, to lock the processing of the corresponding data unit's object in the cache pool, and a pointer is used to point to the object corresponding to that data unit in the cache pool;
The judging module is used for determining whether to update the position of the target object in the cache pool queue according to a preset updating condition; the cache pool comprises objects corresponding to a plurality of data units in the database, and the objects are arranged in a queue form; the preset updating condition is determined according to the last updating time of the target object or the reference count corresponding to the target object;
and the updating module is used for moving, when an update is determined, the position of the target object in the cache pool queue a preset number of positions toward the head of the queue.
In a third aspect, an embodiment of the present application provides an electronic device, including a memory and a processor, where the memory stores a computer program executable on the processor, and the processor implements the method for accessing and updating data units according to any one of the first aspects when executing the computer program.
In a fourth aspect, an embodiment of the present application provides a computer readable storage medium storing a computer program, which when executed by a processor implements a data unit accessing and updating method according to any of the first aspects.
In a fifth aspect, an embodiment of the application provides a computer program product for, when run on an electronic device, causing the electronic device to perform the data unit accessing and updating method according to any of the first aspects above.
It will be appreciated that the advantages of the second to fifth aspects may be found in the relevant description of the first aspect, and are not described here again.
According to the data unit access and update method, apparatus, electronic device, and storage medium provided by the application, locks and pointers are set in one-to-one correspondence with data units. A lock, once acquired, locks the processing of the corresponding data unit's object in the cache pool, and a pointer points to the object corresponding to that data unit in the cache pool. When a target data unit in the database needs to be accessed, the target lock corresponding to it can be acquired; when the target lock is acquired and the target pointer corresponding to it is not empty, the target object corresponding to the target data unit is sent to the thread directly based on the target lock and the target pointer. The global lock therefore does not need to be used to query whether the cache pool contains the target object corresponding to the target data unit.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the embodiments or the description of the prior art will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart illustrating a method for accessing and updating data units according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a data unit accessing and updating apparatus according to an embodiment of the present application;
Fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The present application will be more clearly described with reference to the following examples. The following examples will assist those skilled in the art in further understanding the function of the present application, but are not intended to limit the application in any way. It should be noted that variations and modifications could be made by those skilled in the art without departing from the inventive concept. These are all within the scope of the present application.
It should be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in the present specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes such combinations.
In the description of the present specification and the appended claims, the terms "first," "second," "third," and the like are used merely to distinguish between descriptions and are not to be construed as indicating or implying relative importance.
Reference in the specification to "one embodiment" or "some embodiments" or the like means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," and the like in the specification are not necessarily all referring to the same embodiment, but mean "one or more but not all embodiments" unless expressly specified otherwise. The terms "comprising," "including," "having," and variations thereof mean "including but not limited to," unless expressly specified otherwise.
Furthermore, references to "a plurality of" in embodiments of the present application should be interpreted as two or more.
When a worker thread needs to access a data unit, it first looks up the object corresponding to that data unit in the cache pool and then updates the position of the found object in the cache pool queue. Both the querying and updating steps need to use the global lock corresponding to the cache pool. When multiple worker threads run concurrently, different data units may be accessed frequently, causing serious global-lock contention and slow access, thereby degrading database performance.
Taking a communication system as an example, the system may include a network device with a communication function and a plurality of terminal devices, where the network device communicates with each terminal device. The network device comprises a storage device, the storage device holds a database, the database comprises a plurality of data units, and each data unit can store the data related to one terminal device. To reduce the number of accesses to the storage device, a cache pool is used to hold the data of data units read from the storage device; that is, the cache pool caches the terminal-device data stored in those data units. When a terminal device accesses a data unit on the network device, a global lock has to be used to ensure that only one worker thread accesses the current data unit; under concurrent worker threads, different data units may be accessed frequently, causing serious global-lock contention.
Based on the above problems, the inventors found that a lock and a pointer can be set in one-to-one correspondence with each data unit. The lock, once acquired, locks the processing of the corresponding data unit's object in the cache pool, and the pointer points to the object corresponding to that data unit in the cache pool. When a target data unit in the database needs to be accessed, the target lock corresponding to the target data unit is acquired; when the target lock is acquired and the target pointer corresponding to it is not empty (indicating that the cache pool contains the target object corresponding to the target data unit), the target object is sent to the thread directly based on the target lock and the target pointer. The global lock therefore does not need to be used to query whether the cache pool contains the target object, which reduces global-lock contention under multi-threaded concurrency. In addition, a preset update condition is set, and the position of the target object in the cache pool queue is updated only when that condition is met, which reduces the update frequency of the target object, further reduces global-lock contention, and improves database performance.
Fig. 1 is a flowchart of a method for accessing and updating a data unit according to an embodiment of the present application. As shown in fig. 1, the method in the embodiment of the present application may include:
Step 101, receiving a request sent by a thread for accessing a target data unit in a database, and acquiring a target lock corresponding to the target data unit according to the request.
From the foregoing, it can be seen that the database is stored in the storage device and includes a plurality of data units, which are units of actually stored data. To reduce the number of accesses to the storage device, a cache pool is typically used to store data in data units read from the storage device, i.e., the cache pool includes objects corresponding to multiple data units in the database.
In this embodiment, the data units are in one-to-one correspondence with locks, and a lock, once acquired, locks the processing of the corresponding data unit's object in the cache pool, ensuring that multiple threads cannot access the same object at the same time. This embodiment acquires the target lock corresponding to the target data unit indicated by the request.
Step 102, if the target lock is obtained and the target pointer corresponding to the target lock is not null, sending the target object corresponding to the target data unit in the cache pool pointed by the target pointer to the thread based on the target lock, and releasing the target lock.
The locks are in one-to-one correspondence with pointers, and a pointer points to the object corresponding to a data unit in the cache pool; by reading the pointer, this embodiment can directly access the object it points to. A pointer pointing to an object may be understood as the pointer indicating how the corresponding object is found.
Optionally, in this embodiment, after the target lock is obtained, if the target pointer corresponding to the target lock is not null, it is indicated that the cache pool includes a target object corresponding to the target data unit, that is, the target pointer points to the target object corresponding to the target data unit in the cache pool, so that the target object may be sent to the corresponding thread through the target pointer. In this process, to ensure that multiple threads cannot access the target object at the same time, the process of sending the target object to the corresponding thread through the target pointer may be locked based on the target lock.
Thus, whereas the prior art must use the cache pool's global lock to query whether the cache pool includes the target object corresponding to the target data unit, this embodiment can make that determination with the target lock and the target pointer alone. Because the target lock and target pointer correspond only to the target object, global-lock conflicts are reduced under multi-threaded concurrency.
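A minimal Python sketch of this per-unit fast path follows. The class and function names (`UnitSlot`, `access_unit`) are illustrative assumptions, since the patent describes the mechanism but gives no implementation; the sketch also covers the empty-pointer case, in which the object is first constructed from storage.

```python
import threading

class UnitSlot:
    """Per-data-unit state: one target lock and one target pointer.
    Names are illustrative; the patent does not give an implementation."""
    def __init__(self):
        self.lock = threading.Lock()   # the per-unit target lock
        self.obj = None                # the target pointer; None means "not cached"

def access_unit(slot, load_from_storage):
    """Steps 101-102 in miniature: take only the per-unit lock, return the
    cached object when the pointer is non-null, and otherwise construct it
    from storage first. The global cache-pool lock is never touched here."""
    with slot.lock:                    # acquire the target lock
        if slot.obj is None:           # pointer empty: build the object in the pool
            slot.obj = load_from_storage()
        return slot.obj                # target lock released on leaving the block
```

Because each `UnitSlot` guards only its own object, two threads accessing different data units never block each other on this path.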
In a possible implementation manner, when the target lock is acquired but the target pointer corresponding to the target lock is empty, this embodiment may construct, in the cache pool, the target object corresponding to the target data unit based on the target data unit in the storage device where the database resides; then, based on the target lock, point the target pointer to the target object in the cache pool, send the target object pointed to by the target pointer to the thread, and release the target lock.
For example, after the target lock is obtained, if the target pointer corresponding to the target lock is empty, it is indicated that the cache pool does not include the target object corresponding to the target data unit, and at this time, the target data unit in the database in the storage device needs to be loaded into the cache pool, that is, the target object corresponding to the target data unit is constructed in the cache pool. And then, the target pointer is pointed to a target object in the cache pool, and the target object is sent to a corresponding thread through the target pointer. In this process, to ensure that multiple threads cannot access the target object at the same time, the process of pointing the target pointer to the target object in the cache pool and sending the target object to the corresponding thread through the target pointer may be locked based on the target lock.
In a possible implementation manner, when the target lock is not acquired, the embodiment may further determine the number of times that each thread acquires the target lock in a preset time period, determine whether the number of times is greater than a preset number of times threshold, and acquire the target lock again after a preset time interval if the number of times is greater than the preset number of times threshold until the target lock is acquired.
Optionally, in this embodiment, if the target lock cannot be acquired, another thread is accessing the target object corresponding to the target data unit in the cache pool. To avoid endlessly repeating the acquisition attempt, when the number of times threads have acquired the target lock within the preset time period is greater than the preset count threshold, that is, when the target lock is in frequent use, the acquisition is not retried immediately; instead, it is retried only after the preset time interval, and this continues until the target lock is acquired. Conversely, when that number is less than or equal to the preset count threshold, that is, when the target lock is not in frequent use, the acquisition can be retried immediately until the target lock is acquired.
Step 103, determining whether to update the position of the target object in the cache pool queue according to a preset update condition.
The cache pool comprises objects corresponding to a plurality of data units in a database, and the objects are arranged in a queue form; the preset updating condition is determined according to the last updating time of the target object or the reference count corresponding to the target object.
As can be seen from the foregoing, in the related art, after the target object in the cache pool is determined, it is updated to the head of the queue in the cache pool, for example by calling the LirsUpdate function. Because this process changes the storage structure of the cache pool, the global lock is required, which in turn causes serious global-lock contention under multi-threaded concurrency.
In an exemplary embodiment, after determining the target object in the cache pool, whether to update the position of the target object in the cache pool queue may be determined according to a preset update condition, and only when the preset update condition is satisfied, the position of the target object in the cache pool queue may be updated, otherwise, the position is not updated. In this way, the update frequency of the target object can be reduced, and thus the contention of the global lock is reduced during the multi-thread concurrency. The preset update condition may be determined according to a last update time of the target object or a reference count corresponding to the target object, for example, the preset update condition may be whether a time interval between a current time and a last update time of the target object is greater than a preset time threshold, or the like.
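The decision of step 103 reduces to a small predicate, sketched below with illustrative threshold values (the patent names the two criteria, last update time and reference count, but fixes no numbers):

```python
import time

def should_update_position(last_update_time, refcount,
                           time_threshold=1.0, count_threshold=8, now=None):
    """The preset update condition of step 103, with assumed thresholds:
    move the object toward the queue head only if it was last moved more
    than time_threshold seconds ago, or if its reference count has grown
    past count_threshold."""
    now = time.monotonic() if now is None else now
    return (now - last_update_time > time_threshold
            or refcount > count_threshold)
```

Only when this predicate is true does the thread go on to take the global lock and reorder the queue; otherwise the access completes without touching the global lock at all.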
And 104, if the update is determined, moving the position of the target object in the cache pool queue to a preset position in the direction of the head of the queue.
In a possible implementation manner, the preset position may be determined according to the actual situation, for example according to the position of the queue head in the cache pool. When moving the target object to the queue head, this embodiment may acquire the global lock corresponding to the cache pool; once the global lock is acquired, the target object is updated to the head of the queue in the cache pool based on the global lock, and the global lock is then released.
In this embodiment, after it is determined, according to the preset update condition, to update the position of the target object in the cache pool queue, the global lock is first acquired; once the global lock is acquired, the target object is updated to the head of the queue in the cache pool, or to a preset position in the queue. In this process, to ensure that multiple threads cannot access the target object at the same time, the update of the target object's position in the cache pool queue may be locked based on the global lock.
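The locked move-to-head step can be modelled as below. The patent mentions a LirsUpdate function for this; the stand-in here only illustrates the locking shape, with the queue modelled as an `OrderedDict` whose end is treated as the most-recently-used head (an assumption, not the patent's structure).

```python
import threading
from collections import OrderedDict

def move_to_head(pool, unit_id, global_lock):
    """Step 104 in miniature: under the global lock, move the object for
    unit_id to the queue head (here the end of an OrderedDict, treated as
    the most-recently-used side)."""
    with global_lock:
        pool.move_to_end(unit_id)    # queue head = most recently used end
```

Because this is the only step that reorders the shared queue, gating it behind the preset update condition is exactly what lowers global-lock traffic.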
Optionally, if the global lock is not acquired, the global lock is currently occupied. The acquisition may simply be retried until the global lock is obtained; alternatively, the number of times threads have acquired the global lock within a preset time period is determined, and when that number is greater than a preset threshold, the global lock is re-acquired only after a preset time interval, until it is obtained.
According to the data unit access and update method provided by the embodiments of the application, locks and pointers are set in one-to-one correspondence with data units. A lock, once acquired, locks the processing of the corresponding data unit's object in the cache pool, and a pointer points to the object corresponding to that data unit in the cache pool. When a target data unit in the database needs to be accessed, the target lock corresponding to it can be acquired; when the target lock is acquired and the target pointer corresponding to it is not empty, the target object corresponding to the target data unit is sent to the thread directly based on the target lock and the target pointer. This avoids using the global lock to query whether the cache pool contains the target object, reducing global-lock contention under multi-threaded concurrency. In addition, a preset update condition is set, and the position of the target object in the cache pool queue is updated only when the condition is met, which reduces the update frequency of the target object, further reduces global-lock contention, improves access speed, and thus improves database performance.
In some embodiments, the preset update condition may be that the time interval between the current time and the last update time of the target object is greater than a preset time threshold. In this case, when determining whether to update the position of the target object in the cache pool queue, it is judged whether this time interval exceeds the preset time threshold; if it does, it is determined that the position of the target object in the cache pool queue is to be updated.
If the time interval between the current time and the last update time of the target object is greater than the preset time threshold, the target object has not been updated recently, and its position in the cache pool queue may be updated. Otherwise, the target object has been updated recently, and its position in the cache pool queue is not updated again, which reduces the update frequency of the target object and thus reduces contention for the global lock.
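The time-based condition above amounts to a single comparison, sketched here for illustration (the function name and the `now` parameter are hypothetical; the latter is included only to make the check deterministic):

```python
import time


def should_update_position(last_update_time, time_threshold_s, now=None):
    """Time-based preset update condition (illustrative): the object's
    queue position is updated only if more than a preset threshold of
    time has passed since its last update."""
    now = time.monotonic() if now is None else now
    return (now - last_update_time) > time_threshold_s
```

For example, with a 5-second threshold, an object last updated 10 seconds ago qualifies for a position update, while one updated 2 seconds ago does not.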
In some embodiments, the preset update condition may be that the updated reference count of the target object is greater than a preset count threshold. In this case, when determining whether to update the position of the target object in the cache pool queue, the reference count corresponding to the target object is first updated under the protection of the target lock; it is then judged whether the updated reference count exceeds the preset count threshold, and if so, it is determined that the position of the target object in the cache pool queue is to be updated.
Illustratively, after determining the target object in the cache pool, the present embodiment updates the reference count corresponding to the target object, for example, calls the Lock function to increment the reference count corresponding to the target object by 1. In this process, to ensure that multiple threads cannot access the target object at the same time, the process of updating the reference count corresponding to the target object may be locked based on the target lock.
Optionally, if the updated reference count of the target object is greater than the preset count threshold, the target object is accessed frequently, and its position in the cache pool queue may be updated. Otherwise, the target object is accessed infrequently, and its position in the cache pool queue is not updated, which reduces the update frequency of the target object and thus reduces contention for the global lock.
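The count-based condition above, including the increment performed under the target lock, may be sketched as follows. The class and method names are hypothetical; the increment inside the lock stands in for what the description calls the Lock function incrementing the reference count by 1:

```python
import threading


class TargetObject:
    """Illustrative target object whose reference count is bumped under
    its own target lock; the queue position is updated only once the
    count passes a preset threshold."""

    def __init__(self):
        self.lock = threading.Lock()  # the target lock for this data unit
        self.ref_count = 0

    def touch_and_check(self, count_threshold):
        with self.lock:  # protects the reference count update from other threads
            self.ref_count += 1  # e.g. a Lock function incrementing by 1
            return self.ref_count > count_threshold
```

With a threshold of 2, the first two accesses leave the queue position alone and the third triggers an update.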
In some embodiments, after the target object corresponding to the target data unit has been constructed in the cache pool, the global lock corresponding to the cache pool may further be acquired. When the global lock is acquired, it is judged whether the number of objects in the cache pool exceeds a preset number threshold; if it does, the first object whose reference count is 0, scanning from the tail of the queue towards the head, is evicted under the global lock, and the global lock is then released.
The number of objects corresponding to data units that the cache pool can hold is limited; that is, the number of objects in the cache pool typically must not exceed a maximum number threshold. Therefore, after the target object corresponding to the target data unit is constructed in the cache pool, when the number of objects in the cache pool exceeds the preset number threshold, the first object with a reference count of 0, scanning from the tail of the queue towards the head, is evicted, i.e. removed from the cache pool. This ensures that the number of objects in the cache pool does not exceed the preset number threshold and preserves the performance of the database. The preset number threshold may be determined from the maximum number threshold, for example by setting the two equal.
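The eviction step above may be sketched as follows, purely for illustration. The function name is hypothetical, and the sketch assumes it is called while the global lock is already held, with an ordered dictionary standing in for the cache pool queue:

```python
from collections import OrderedDict


def evict_one(queue, ref_counts, max_objects):
    """Illustrative eviction under the global lock: if the pool exceeds
    the preset number threshold, remove the first object with a reference
    count of 0, scanning from the queue tail towards the head."""
    if len(queue) <= max_objects:
        return None  # pool within limit, nothing to evict
    for unit_id in reversed(queue):          # tail -> head order
        if ref_counts.get(unit_id, 0) == 0:  # unreferenced, safe to evict
            del queue[unit_id]
            return unit_id
    return None  # every cached object is still referenced
```

For a queue head-to-tail of units 1, 2, 3 with reference counts 0, 1, 2 and a threshold of 2, the tail-first scan skips units 3 and 2 (still referenced) and evicts unit 1.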
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present application.
Fig. 2 is a schematic structural diagram of a data unit accessing and updating apparatus according to an embodiment of the present application. As shown in fig. 2, the data unit accessing and updating apparatus provided in this embodiment may include: a receiving module 201, an accessing module 202, a judging module 203 and an updating module 204.
The receiving module 201 is configured to receive a request sent by a thread to access a target data unit in a database, and obtain a target lock corresponding to the target data unit according to the request.
An access module 202, configured to, when the target lock is obtained and a target pointer corresponding to the target lock is not empty, send, based on the target lock, a target object corresponding to the target data unit in a cache pool pointed to by the target pointer to the thread, and release the target lock; the data units are in one-to-one correspondence with the locks, the locks are in one-to-one correspondence with the pointers, the locks are used for locking the processing process of the object of the corresponding data unit in the cache pool after the locks are acquired, and the pointers are used for pointing to the object corresponding to the data unit in the cache pool.
A judging module 203, configured to determine whether to update the position of the target object in the cache pool queue according to a preset update condition; the cache pool comprises objects corresponding to a plurality of data units in the database, and the objects are arranged in a queue form; and the preset updating condition is determined according to the last updating time of the target object or the reference count corresponding to the target object.
And the updating module 204 is configured to move the position of the target object in the cache pool queue towards the head of the queue by a preset position when the updating is determined.
Optionally, the access module 202 is further configured to:
if the target lock is acquired and a target pointer corresponding to the target lock is empty, constructing a target object corresponding to the target data unit in the cache pool based on the target data unit in the storage device where the database is located;
And based on the target lock, the target pointer points to the target object in the cache pool, the target object in the cache pool pointed to by the target pointer is sent to the thread, and the target lock is released.
Optionally, the access module 202 is further configured to:
If the target lock is not acquired, determining the times of acquiring the target lock by each thread in a preset time period, and judging whether the times are greater than a preset times threshold;
and if the times are greater than the preset times threshold, acquiring the target lock again after a preset time interval until the target lock is acquired.
Optionally, the judging module 203 is specifically configured to:
judging whether the time interval between the current time and the last updating time of the target object is larger than a preset time threshold value or not;
And if the time interval is larger than the preset time threshold, determining to update the position of the target object in the cache pool queue.
Optionally, the judging module 203 is specifically configured to:
Updating the reference count corresponding to the target object based on the target lock, and judging whether the updated reference count of the target object is larger than a preset count threshold;
And if the updated reference count is greater than the preset count threshold, determining to update the position of the target object in the cache pool queue.
Optionally, the access module 202 is further configured to:
Acquiring a global lock corresponding to the cache pool;
If the global lock is obtained, judging whether the number of the objects in the cache pool is larger than a preset number threshold, and when the number of the objects in the cache pool is larger than the preset number threshold, eliminating the first object with the reference count of 0 based on the global lock along the direction from the tail of the queue to the head of the queue in the cache pool, and releasing the global lock.
Optionally, the updating module 204 is specifically configured to:
Acquiring a global lock corresponding to the cache pool;
And if the global lock is acquired, updating the target object to the head of a queue in the cache pool based on the global lock, and releasing the global lock.
It should be noted that, because the content of information interaction and execution process between the above devices/units is based on the same concept as the method embodiment of the present application, specific functions and technical effects thereof may be referred to in the method embodiment section, and will not be described herein.
Fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present application. As shown in fig. 3, the electronic device 300 of this embodiment includes: a processor 310 and a memory 320, the memory 320 storing a computer program 321 executable on the processor 310. When the processor 310 executes the computer program 321, the steps of any of the method embodiments described above are implemented, such as steps 101 to 104 shown in fig. 1. Alternatively, when the processor 310 executes the computer program 321, the functions of the modules/units in the above device embodiments are performed, such as the functions of the modules 201 to 204 shown in fig. 2.
By way of example, the computer program 321 may be partitioned into one or more modules/units that are stored in the memory 320 and executed by the processor 310 to complete the present application. The one or more modules/units may be a series of computer program instruction segments capable of performing the specified functions, which instruction segments are used to describe the execution of the computer program 321 in the electronic device 300.
It will be appreciated by those skilled in the art that fig. 3 is merely an example of an electronic device and does not limit it; the electronic device may include more or fewer components than shown, combine certain components, or use different components, such as input-output devices, network access devices, buses, etc.
The processor 310 may be a central processing unit (Central Processing Unit, CPU), or another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 320 may be an internal storage unit of the electronic device, such as a hard disk or a memory of the electronic device, or an external storage device of the electronic device, such as a plug-in hard disk, a smart media card (Smart Media Card, SMC), a secure digital (Secure Digital, SD) card, a flash card (Flash Card), or the like. The memory 320 may also include both an internal storage unit and an external storage device of the electronic device. The memory 320 is used to store the computer program and other programs and data required by the electronic device, and may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the division of the functional units and modules described above is illustrated. In practical applications, the functions may be allocated to different functional units and modules as needed; that is, the internal structure of the apparatus may be divided into different functional units or modules to perform all or part of the functions described above. The functional units and modules in the embodiments may be integrated in one processing unit, each unit may exist alone physically, or two or more units may be integrated in one unit; the integrated units may be implemented in the form of hardware or in the form of software functional units. In addition, the specific names of the functional units and modules are only for distinguishing them from each other and do not limit the protection scope of the present application. For the specific working process of the units and modules in the above system, reference may be made to the corresponding process in the foregoing method embodiments, which is not repeated here.
In the foregoing embodiments, each embodiment is described with its own emphasis; for parts not described or detailed in one embodiment, reference may be made to the related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus/electronic device and method may be implemented in other manners. For example, the apparatus/electronic device embodiments described above are merely illustrative, e.g., the division of the modules or units is merely a logical function division, and there may be additional divisions in actual implementation, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection via interfaces, devices or units, which may be in electrical, mechanical or other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated modules/units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on this understanding, the present invention may implement all or part of the flow of the methods of the above embodiments by instructing the relevant hardware through a computer program, which may be stored in a computer readable storage medium; when executed by a processor, the computer program implements the steps of each of the method embodiments described above. The computer program comprises computer program code, which may be in source code form, object code form, an executable file, some intermediate form, or the like. The computer readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so forth.
The above embodiments are only for illustrating the technical solution of the present invention, and not for limiting the same; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention, and are intended to be included in the scope of the present invention.

Claims (10)

1. A method for accessing and updating data units, comprising:
Receiving a request sent by a thread for accessing a target data unit in a database, and acquiring a target lock corresponding to the target data unit according to the request;
If the target lock is obtained and a target pointer corresponding to the target lock is not empty, sending a target object corresponding to the target data unit in a cache pool pointed by the target pointer to the thread based on the target lock, and releasing the target lock; the lock is used for locking the processing process of the object of the corresponding data unit in the cache pool after the lock is acquired, and the pointer is used for pointing to the object corresponding to the data unit in the cache pool;
Determining whether to update the position of the target object in a cache pool queue according to a preset updating condition; the cache pool comprises objects corresponding to a plurality of data units in the database, and the objects are arranged in a queue form; the preset updating condition is determined according to the last updating time of the target object or the reference count corresponding to the target object;
If the update is determined, the position of the target object in the cache pool queue is moved to a preset position in the direction of the head of the queue.
2. The data unit access and update method of claim 1, further comprising:
if the target lock is acquired and a target pointer corresponding to the target lock is empty, constructing a target object corresponding to the target data unit in the cache pool based on the target data unit in the storage device where the database is located;
And based on the target lock, the target pointer points to the target object in the cache pool, the target object in the cache pool pointed to by the target pointer is sent to the thread, and the target lock is released.
3. The data unit access and update method of claim 1, further comprising:
If the target lock is not acquired, determining the times of acquiring the target lock by each thread in a preset time period, and judging whether the times are greater than a preset times threshold;
and if the times are greater than the preset times threshold, acquiring the target lock again after a preset time interval until the target lock is acquired.
4. A method of accessing and updating a data unit according to any one of claims 1 to 3, wherein determining whether to update the location of the target object in the cache pool queue according to a preset update condition comprises:
judging whether the time interval between the current time and the last updating time of the target object is larger than a preset time threshold value or not;
And if the time interval is larger than the preset time threshold, determining to update the position of the target object in the cache pool queue.
5. A method of accessing and updating a data unit according to any one of claims 1 to 3, wherein determining whether to update the location of the target object in the cache pool queue according to a preset update condition comprises:
Updating the reference count corresponding to the target object based on the target lock, and judging whether the updated reference count of the target object is larger than a preset count threshold;
And if the updated reference count is greater than the preset count threshold, determining to update the position of the target object in the cache pool queue.
6. The method for accessing and updating data units according to claim 2, further comprising, after said constructing a target object corresponding to said target data unit in said cache pool:
Acquiring a global lock corresponding to the cache pool;
If the global lock is obtained, judging whether the number of the objects in the cache pool is larger than a preset number threshold, and when the number of the objects in the cache pool is larger than the preset number threshold, eliminating the first object with the reference count of 0 based on the global lock along the direction from the tail of the queue to the head of the queue in the cache pool, and releasing the global lock.
7. A method for accessing and updating a data unit according to any one of claims 1 to 3, wherein the moving the position of the target object in the buffer pool queue toward the head of the queue by a preset position includes:
Acquiring a global lock corresponding to the cache pool;
And if the global lock is acquired, updating the target object to the head of a queue in the cache pool based on the global lock, and releasing the global lock.
8. A data unit access and update apparatus, comprising:
the receiving module is used for receiving a request sent by a thread for accessing a target data unit in a database, and acquiring a target lock corresponding to the target data unit according to the request;
the access module is used for sending a target object corresponding to the target data unit in the cache pool pointed by the target pointer to the thread based on the target lock when the target lock is acquired and the target pointer corresponding to the target lock is not empty, and releasing the target lock; the lock is used for locking the processing process of the object of the corresponding data unit in the cache pool after the lock is acquired, and the pointer is used for pointing to the object corresponding to the data unit in the cache pool;
The judging module is used for determining whether to update the position of the target object in the cache pool queue according to a preset updating condition; the cache pool comprises objects corresponding to a plurality of data units in the database, and the objects are arranged in a queue form; the preset updating condition is determined according to the last updating time of the target object or the reference count corresponding to the target object;
and the updating module is used for moving the position of the target object in the buffer pool queue to a preset position towards the head direction when the updating is determined.
9. An electronic device comprising a memory and a processor, the memory having stored therein a computer program executable on the processor, wherein the processor implements the data unit access and update method of any of claims 1 to 7 when the computer program is executed by the processor.
10. A computer readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the data unit access and update method according to any one of claims 1 to 7.
CN202410683437.3A 2024-05-30 2024-05-30 Data unit access and update method and device, electronic equipment and storage medium Active CN118260272B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410683437.3A CN118260272B (en) 2024-05-30 2024-05-30 Data unit access and update method and device, electronic equipment and storage medium


Publications (2)

Publication Number Publication Date
CN118260272A true CN118260272A (en) 2024-06-28
CN118260272B CN118260272B (en) 2024-09-20

Family

ID=91605753

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410683437.3A Active CN118260272B (en) 2024-05-30 2024-05-30 Data unit access and update method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN118260272B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070174513A1 (en) * 2006-01-23 2007-07-26 Arm Limited Buffering data during a data transfer
US20140236913A1 (en) * 2013-02-20 2014-08-21 Nec Laboratories America, Inc. Accelerating Distributed Transactions on Key-Value Stores Through Dynamic Lock Localization
CN105302840A (en) * 2014-07-31 2016-02-03 阿里巴巴集团控股有限公司 Cache management method and device
CN105447092A (en) * 2015-11-09 2016-03-30 联动优势科技有限公司 Caching method and apparatus
US20190332532A1 (en) * 2016-12-28 2019-10-31 New H3C Technologies Co., Ltd. Processing message
CN111857597A (en) * 2020-07-24 2020-10-30 浪潮电子信息产业股份有限公司 Hot spot data caching method, system and related device
CN113672166A (en) * 2021-07-08 2021-11-19 锐捷网络股份有限公司 Data processing method and device, electronic equipment and storage medium
CN113791916A (en) * 2021-11-17 2021-12-14 支付宝(杭州)信息技术有限公司 Object updating and reading method and device


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
ZHUOLONG YU et al.: "NetLock: Fast, Centralized Lock Management Using Programmable Switches", 30 July 2020 (2020-07-30), pages 126, XP058892259, DOI: 10.1145/3387514.3405857 *
ZHU Mingqing: "Design and Optimization of LSM-Tree Storage Cache for Database Workloads", China Master's Theses Full-text Database, 15 November 2022 (2022-11-15) *
TAO Rundong: "Cloud Storage Load Balancing Technology Based on Replication Factor", Information Technology, no. 01, 25 January 2015 (2015-01-25) *

Also Published As

Publication number Publication date
CN118260272B (en) 2024-09-20

Similar Documents

Publication Publication Date Title
CN111090663B (en) Transaction concurrency control method, device, terminal equipment and medium
CN110442463B (en) Data transmission method and device in TEE system
CN110399235B (en) Multithreading data transmission method and device in TEE system
CN110442462B (en) Multithreading data transmission method and device in TEE system
CA2706737C (en) A multi-reader, multi-writer lock-free ring buffer
US9213586B2 (en) Computer-implemented systems for resource level locking without resource level locks
US8615635B2 (en) Database management methodology
US6601120B1 (en) System, method and computer program product for implementing scalable multi-reader/single-writer locks
CN110427274B (en) Data transmission method and device in TEE system
US9851920B2 (en) System and method for removing hash table entries
US8543743B2 (en) Lock free queue
CN112163013A (en) Data processing method and device, terminal equipment and storage medium
CN114490251A (en) Log processing system, log processing method and terminal equipment
CN113609128B (en) Method, device, terminal equipment and storage medium for generating database entity class
US20090222494A1 (en) Optimistic object relocation
CN118260272B (en) Data unit access and update method and device, electronic equipment and storage medium
US8719274B1 (en) Method, system, and apparatus for providing generic database services within an extensible firmware interface environment
EP2804102B1 (en) Parallel atomic increment
CN111950879A (en) Queuing arbitration method and device for processes and terminal equipment
US7171537B1 (en) Non-blocking growable arrays
CN116483745A (en) Data transmission method, device, power module and storage medium
CN111666339B (en) Multithreading data synchronization method
CN111143351B (en) IMSI data management method and equipment
CN108874560B (en) Method and communication device for communication
US10635726B2 (en) Data processing circuit and data processing method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant