CN112650449B - Method and system for releasing cache space, electronic device and storage medium - Google Patents


Publication number
CN112650449B
CN112650449B · Application CN202011540102.4A
Authority
CN
China
Prior art keywords
storage unit
processed
occupied
data
task
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011540102.4A
Other languages
Chinese (zh)
Other versions
CN112650449A (en)
Inventor
张梦 (Zhang Meng)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Spreadtrum Semiconductor Nanjing Co Ltd
Original Assignee
Spreadtrum Semiconductor Nanjing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Spreadtrum Semiconductor Nanjing Co Ltd filed Critical Spreadtrum Semiconductor Nanjing Co Ltd
Priority to CN202011540102.4A priority Critical patent/CN112650449B/en
Publication of CN112650449A publication Critical patent/CN112650449A/en
Priority to PCT/CN2021/136650 priority patent/WO2022135160A1/en
Application granted granted Critical
Publication of CN112650449B publication Critical patent/CN112650449B/en

Classifications

    • All classifications fall under G — PHYSICS; G06 — COMPUTING, CALCULATING OR COUNTING; G06F — ELECTRIC DIGITAL DATA PROCESSING:
    • G06F 3/0619 — Improving the reliability of storage systems in relation to data integrity, e.g. data losses, bit errors
    • G06F 12/0871 — Allocation or management of cache space
    • G06F 3/061 — Improving I/O performance
    • G06F 3/0644 — Management of space entities, e.g. partitions, extents, pools
    • G06F 3/0647 — Migration mechanisms
    • G06F 3/0688 — Non-volatile semiconductor memory arrays

Abstract

The invention discloses a method and system for releasing cache space, an electronic device, and a storage medium, where the cache space comprises a plurality of storage units. The method comprises: selecting a current occupied storage unit according to the number of to-be-processed tasks associated with each occupied storage unit; selecting a current processing task; and releasing the current occupied storage unit when it is no longer associated with any to-be-processed task. The invention dynamically adjusts the space taken up by stored data in the cache and releases occupied storage units in time, thereby improving the utilization efficiency of the cache space and overcoming the defect that occupied cache space cannot be released promptly, which would otherwise cause newly arrived data to be lost.

Description

Method and system for releasing cache space, electronic device and storage medium
Technical Field
The present invention relates to the field of wireless communications, and in particular, to a method and a system for releasing a cache space, an electronic device, and a storage medium.
Background
During communication, a data manager in the chip continuously receives data and stores it into a cache. Because the same data may be associated with multiple processing tasks, the data must be read repeatedly after it is cached so that each distinct task can be performed; consequently, cached data cannot be released immediately after a single read.
Normally, the cache space in a chip is small and fixed in size and cannot be dynamically adjusted. If old cache contents are released inefficiently, the cache gradually fills as content accumulates, and newly received data may fail to be stored or may be lost.
For example, in a neighbor-cell measurement scenario, the timing of each cell differs and cell time-domain data are randomly distributed, possibly overlapping partially or completely. After sampled data is stored in the cache during measurement, it must therefore be read many times so the same data can be processed repeatedly. The usual approach manages cell data in the cache with a ring data manager and a FIFO (first in, first out) mechanism, which has at least the following problems: the ring data manager and FIFO mechanism cannot prioritize read/write access to the cache and can only process each cell's data in order of arrival, so newly arrived data is lost when it cannot obtain cache space because existing cached data has not been released. In extreme scenarios where cells are unevenly distributed, the hardware's measurement processing capacity in densely populated areas does not match the data arrival rate; a larger storage space is then needed to buffer unprocessed data, lowering storage-space utilization and increasing chip area. As a result, the chip can no longer receive data, or data that is received is lost, and processing tasks cannot be executed.
Disclosure of Invention
The technical problem solved by the present invention is that, in the prior art, data in the cache is difficult to release, so newly received data cannot be stored and may be lost. To overcome this defect, the invention provides a method, a system, an electronic device, and a storage medium for releasing cache space that improve the efficiency of releasing data stored in the cache.
The invention solves the technical problems through the following technical scheme:
the invention provides a method for releasing a cache space, wherein the cache space comprises a plurality of storage units, and the method for releasing comprises the following steps:
selecting a target occupied storage unit according to the number of the to-be-processed tasks associated with each occupied storage unit;
selecting a target processing task in the target occupied storage unit;
and releasing the target occupied storage unit when it is no longer associated with any to-be-processed task.
Preferably, after the step of selecting the target processing task from the target occupied storage unit, the method further includes:
and after the target processing task is processed, determining whether the occupied storage unit associated with the target processing task is still associated with any to-be-processed task; if not, executing the step of releasing the target occupied storage unit, and if so, returning to the step of selecting a target occupied storage unit according to the number of to-be-processed tasks associated with each occupied storage unit.
Preferably, the step of selecting a target occupied storage unit according to the number of to-be-processed tasks associated with each occupied storage unit includes:
calculating the number of to-be-processed tasks associated with each occupied storage unit;
selecting, as the target occupied storage unit, one of the occupied storage units whose number of associated to-be-processed tasks is less than a task processing threshold; or sorting the occupied storage units by their number of associated to-be-processed tasks in ascending order and selecting one of the top-ranked occupied storage units as the target occupied storage unit; or selecting the occupied storage unit with the fewest associated to-be-processed tasks as the target occupied storage unit.
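The three alternative selection strategies above can be sketched in Python. This is an illustrative sketch only: the names (`pending`, `select_minimum`, etc.) and data shapes are assumptions, not from the patent.

```python
import random

def select_by_threshold(pending, threshold):
    """Strategy 1: any occupied unit with fewer pending tasks than `threshold`."""
    candidates = [u for u, n in pending.items() if n < threshold]
    return random.choice(candidates) if candidates else None

def select_top_k(pending, k):
    """Strategy 2: one of the k occupied units with the fewest pending tasks."""
    ranked = sorted(pending, key=pending.get)
    return random.choice(ranked[:k]) if ranked else None

def select_minimum(pending):
    """Strategy 3 (preferred in embodiment 1): the unit with the fewest tasks."""
    return min(pending, key=pending.get) if pending else None
```

Here `pending` maps an occupied-unit identifier to its count of associated to-be-processed tasks; strategy 3 avoids the random choice and is the cheapest to evaluate.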
Preferably, the step of selecting a target processing task in the target occupied storage unit includes:
and taking the first to-be-processed task associated with the target occupied storage unit as the target processing task.
Preferably, the to-be-processed tasks associated with each occupied storage unit are numbered in sequence, and the step of calculating the number of the to-be-processed tasks associated with each occupied storage unit includes:
for each occupied storage unit, acquiring a first number of a first currently associated to-be-processed task and a second number of a last currently associated to-be-processed task;
and calculating the number of to-be-processed tasks associated with the occupied storage unit according to the second number and the first number.
Preferably, the step of selecting a target occupied storage unit according to the number of to-be-processed tasks associated with each occupied storage unit further comprises:
receiving data to be stored, and storing the data to be stored into at least one free storage unit according to the size of the data to be stored; when the data to be stored is stored in a plurality of free storage units, the free storage units are connected in series through a linked list, and the data to be stored is associated with at least one to-be-processed task.
Preferably, when the data required by the target processing task is stored in a plurality of occupied storage units, the addresses of those occupied storage units are obtained through the corresponding linked list.
Preferably, the step of selecting a target occupied storage unit according to the number of to-be-processed tasks associated with each occupied storage unit further comprises:
and dividing the cache space into a plurality of storage units with the same size.
Preferably, the step of storing the data to be stored into at least one of the free storage units according to the size of the data to be stored includes:
determining whether the storage space contained in the current free storage unit meets the storage requirement of the data to be stored: if so, storing the data to be stored into the corresponding free storage unit; if not, either waiting for an occupied storage unit to be released until the storage space contained in the current free storage unit meets the storage requirement, or storing one part of the data to be stored in the current free storage unit and, when a new free storage unit becomes available, storing the other part in the new free storage unit.
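The store-wait-or-split decision above can be expressed as a small planner. A hedged sketch; the return values and parameter names are illustrative, not defined by the patent.

```python
def store(data_size, free_capacity, can_split):
    """Return a storage plan: 'store', 'wait', or ('split', now, later)."""
    if data_size <= free_capacity:
        return "store"             # current free unit is large enough
    if not can_split:
        return "wait"              # retry after an occupied unit is released
    now = free_capacity            # part that fits in the current free unit
    later = data_size - now        # remainder goes to a new free unit later
    return ("split", now, later)
```

In the patent's terms, the "wait" branch corresponds to re-checking the free space after occupied units are released, and the "split" branch corresponds to storing one part now and the other part once a new free storage unit exists.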
Preferably, the storage unit includes storage-unit state information, to-be-processed-task count information, linked-list address information, and storage-unit identification information, where the state information indicates the storage state of the storage unit, and the linked-list address information indicates the address of the storage unit connected in series with the current storage unit.
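The four metadata fields listed above could be modeled as follows. The field names and types are assumptions for illustration; the patent does not prescribe a concrete layout.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class StorageUnit:
    unit_id: int                      # storage-unit identification information
    state: str = "free"               # state information: "free" or "occupied"
    pending_tasks: int = 0            # count of associated to-be-processed tasks
    next_unit: Optional[int] = None   # linked-list address of the chained unit

    def occupy(self):
        self.state = "occupied"

    def release(self):
        # An occupied unit switches back to free when its data is released.
        self.state = "free"
        self.pending_tasks = 0
        self.next_unit = None
```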
The invention also provides a releasing system of the cache space, wherein the cache space comprises a plurality of storage units, and the releasing system comprises: the system comprises a storage unit selection module, a processing task selection module and a space release module;
the storage unit selection module is used for selecting a target occupied storage unit according to the number of the to-be-processed tasks associated with each occupied storage unit;
the processing task selection module is configured to select a target processing task from the target occupied storage unit;
the space release module is configured to release the target occupied storage unit when it is no longer associated with any to-be-processed task.
Preferably, the release system further includes a task determination module; the processing task selection module is further configured to invoke the task determination module after selecting a target processing task, and the task determination module is configured to determine, after the target processing task is processed, whether the occupied storage unit associated with the target processing task is still associated with any to-be-processed task, invoking the space release module if not and the storage unit selection module if so.
Preferably, the memory cell selection module includes: the task computing unit and the storage selection unit;
the task computing unit is used for computing the number of the tasks to be processed related to each occupied storage unit;
the storage selection unit is configured to select, as the target occupied storage unit, one of the occupied storage units whose number of associated to-be-processed tasks is less than a task processing threshold; or to sort the occupied storage units by their number of associated to-be-processed tasks in ascending order and select one of the top-ranked occupied storage units as the target occupied storage unit; or to select the occupied storage unit with the fewest associated to-be-processed tasks as the target occupied storage unit.
Preferably, the processing task selection module is configured to take a first to-be-processed task associated with the target occupied storage unit as a target processing task.
Preferably, the to-be-processed tasks associated with each occupied storage unit are numbered in sequence, and the task calculating unit is configured to, for each occupied storage unit, obtain a first number of a currently associated first to-be-processed task and a second number of a currently associated last to-be-processed task, and calculate the number of the to-be-processed tasks associated with the occupied storage unit according to the second number and the first number.
Preferably, the release system further includes a data receiving module configured to receive data to be stored and store it into at least one free storage unit according to its size; when the data to be stored is stored in a plurality of free storage units, the free storage units are connected in series through a linked list, and the data to be stored is associated with at least one to-be-processed task.
Preferably, the space releasing module is configured to, when data that needs to be used by the target processing task is stored in a plurality of occupied storage units, obtain addresses of the occupied storage units through corresponding linked lists.
Preferably, the release system further comprises: and the storage unit dividing module is used for dividing the cache space into a plurality of storage units with the same size.
Preferably, the data receiving module is configured to determine whether the storage space contained in the current free storage unit meets the storage requirement of the data to be stored: if so, to store the data to be stored into the corresponding free storage unit; if not, either to wait for an occupied storage unit to be released until the storage space contained in the current free storage unit meets the storage requirement, or to store one part of the data to be stored in the current free storage unit and, when a new free storage unit becomes available, store the other part in the new free storage unit.
Preferably, the storage unit includes storage-unit state information, to-be-processed-task count information, linked-list address information, and storage-unit identification information, where the state information indicates the storage state of the storage unit, and the linked-list address information indicates the address of the storage unit connected in series with the current storage unit.
The invention also provides an electronic device comprising a memory, a processor, and a computer program stored in the memory and runnable on the processor, where the processor implements the above method for releasing cache space when executing the computer program.
The present invention also provides a computer-readable storage medium on which a computer program is stored, which, when executed by a processor, implements the steps of the above method for releasing cache space.
The beneficial effects of the invention are as follows: a target occupied storage unit is selected according to the number of to-be-processed tasks associated with each currently occupied storage unit, and a target processing task is then selected from it; once all to-be-processed tasks associated with that occupied storage unit have been processed, the unit can be released, i.e., converted into a free storage unit for storing subsequent data.
Drawings
Fig. 1 is a flowchart of a method for releasing a cache space in embodiment 1 of the present invention.
Fig. 2 is a partial flowchart of a method for releasing a cache space in embodiment 1 of the present invention.
Fig. 3 is a schematic diagram of a numbering manner of a task to be processed in a specific scenario in embodiment 1 of the present invention.
Fig. 4 is a schematic diagram of linked list concatenation in a specific scenario in embodiment 1 of the present invention.
Fig. 5 is a schematic block diagram of a system for releasing cache space in embodiment 2 of the present invention.
Fig. 6 is a schematic block diagram of an electronic device in embodiment 3 of the present invention.
Detailed Description
The invention is further illustrated by the following examples, which are not intended to limit the invention thereto.
Example 1
This embodiment provides a method for releasing cache space, where the cache space includes a plurality of storage units, each of which can be independently applied for and released. A storage unit is in one of two states: occupied, meaning data is stored in it, or free, meaning no data is stored in it. A free storage unit switches to occupied when data is stored in it, and an occupied storage unit switches to free when its data is released.
As shown in fig. 1, the release method in the present embodiment includes:
and 11, selecting a target occupied storage unit according to the number of the to-be-processed tasks associated with each occupied storage unit.
Specifically, in step 11, the number of to-be-processed tasks associated with each occupied storage unit may be calculated first, and then the target occupied storage unit may be selected according to the calculated number.
In a first implementation, any occupied storage unit whose number of associated to-be-processed tasks is less than a task processing threshold may be selected as the target occupied storage unit. For example, with the threshold set to 3, all occupied storage units associated with fewer than 3 to-be-processed tasks are candidates, and one of them is selected at random as the target occupied storage unit.
In a second implementation, the occupied storage units may be sorted by their number of associated to-be-processed tasks in ascending order, and one of the top-ranked units selected as the target occupied storage unit. For example, with the preset count set to 4, the 4 occupied storage units with the fewest associated to-be-processed tasks are candidates, and one of them is selected at random as the target occupied storage unit.
In a third implementation, the occupied storage unit with the fewest associated to-be-processed tasks is selected as the target occupied storage unit.
In this embodiment, the third implementation is preferred because it selects a target occupied storage unit most efficiently.
Step 12: selecting a target processing task in the target occupied storage unit.
Specifically, the target processing task may be selected randomly from the target occupied storage unit, or in a certain order; this embodiment uses the latter to describe step 12. Here, the first to-be-processed task associated with the target occupied storage unit is taken as the target processing task. It should be understood that in other embodiments the last, or an intermediate, to-be-processed task associated with the target occupied storage unit may instead be taken as the target processing task.
Step 13: after the target processing task is processed, determining whether the occupied storage unit associated with it is still associated with any to-be-processed task; if not, executing step 14, and if so, returning to step 11.
Step 14: releasing the target occupied storage unit, and returning to step 11.
It should be understood that, before step 11, the buffer space may be first divided into a plurality of storage units according to actual requirements, so as to be used for subsequent dynamic allocation and release of the buffer space.
In this embodiment, a target occupied storage unit is selected according to the number of to-be-processed tasks associated with a current occupied storage unit, and a target processing task associated with the target occupied storage unit is further selected, and after all the to-be-processed tasks associated with the target occupied storage unit are processed, the occupied storage unit can be released, that is, the occupied storage unit is switched to a free storage unit for subsequent data storage.
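Steps 11 to 14 can be condensed into a short Python sketch. For simplicity this sketch assumes each task is associated with a single storage unit (the patent also covers tasks whose data spans several units); the data structures are illustrative assumptions.

```python
def release_loop(units):
    """`units`: dict mapping unit id -> list of pending task names.
    Processes tasks until the cache is empty; returns the order in
    which units were released."""
    released = []
    while units:
        # Step 11: pick the occupied unit with the fewest pending tasks.
        target = min(units, key=lambda u: len(units[u]))
        # Steps 12-13: process this unit's first pending task.
        units[target].pop(0)
        # Step 13: does the unit still have associated tasks?
        if not units[target]:
            # Step 14: release the unit (it becomes free again).
            del units[target]
            released.append(target)
    return released
```

With units A (two tasks) and B (one task), B is drained and released first, then A, mirroring the fewest-tasks-first policy described above.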
As shown in fig. 2, the method for releasing the cache space in this embodiment may further include:
and step 21, receiving data to be stored.
In step 21, when reception of the data to be stored begins, a free storage unit is first applied for. After step 21, the method may further include determining whether the storage space contained in the current free storage unit meets the storage requirement of the data to be stored. If so, step 22 is executed. If not, one implementation waits for an occupied storage unit to be released: the determination is re-executed at intervals of a first time threshold until the storage space contained in the current free storage unit meets the storage requirement. In another implementation, one part of the data to be stored is stored in the current free storage unit and, when a new free storage unit becomes available, the other part is stored in the new free storage unit.
In this embodiment, different data sources do not occupy the same storage unit, which simplifies managing data from different sources; it should be understood that in other embodiments multiple data sources may share a storage unit, as the actual situation requires.
Step 22: storing the data to be stored into at least one free storage unit according to the size of the data to be stored.
It should be understood that steps 21 and 22 may run concurrently with steps 11 to 14, so the cache space can be released while data to be processed is still being received; data processing can therefore proceed continuously and without interruption, further improving processing efficiency.
Free storage units may be allocated to the received data to be stored at random, as long as the total storage space of the allocated free storage units meets the storage requirement of the data to be stored.
It should be understood that each piece of data to be stored is associated with at least one to-be-processed task. While step 22 is executed, or after the free-storage-unit application in step 21 succeeds, the to-be-processed tasks associated with each occupied storage unit may be numbered in sequence. For example, in one scenario there are currently three occupied storage units: the first is associated with two to-be-processed tasks, the second with one, and the third with three. The tasks in the first occupied storage unit can then be numbered 001 and 002, the task in the second 003, and the tasks in the third 004, 005, and 006.
In this embodiment, in step 11, the number of associated to-be-processed tasks may be calculated from the task numbers: for each occupied storage unit, the first number of the currently associated first to-be-processed task and the second number of the currently associated last to-be-processed task are obtained, and the number of to-be-processed tasks associated with the occupied storage unit is calculated from the second number and the first number. Fig. 3 shows the numbering scheme in a specific scenario, which includes the storage units HEAD (the first storage unit, i.e., the head of the chain), …, front storage unit 2, front storage unit 1, front storage unit 0, the current storage unit, rear storage unit 0, rear storage unit 1, rear storage unit 2, and so on. Reference numerals 101, 102, 103, and 104 denote different to-be-processed tasks. As the figure shows, the data required by a to-be-processed task may be stored across several storage units: the data for task 101 is stored in front storage unit 2 and the current storage unit, with front storage unit 2 as its data-reading start unit; the data for task 102 is stored in front storage unit 1 and rear storage unit 0, with front storage unit 1 as its start unit; the data for task 103 is stored in front storage unit 0 and rear storage unit 1, with front storage unit 0 as its start unit; and the data for task 104 is stored in the current storage unit and rear storage unit 2, with the current storage unit as its start unit. After the current storage unit is applied for, the number of the last to-be-processed task associated with it (104) minus the number of the first (101) gives the number of to-be-processed tasks associated with the current storage unit.
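The subtraction described above can be written directly. Note that the patent's text takes the count to be the plain difference of the two numbers (104 − 101); with inclusive consecutive numbering an implementation might add 1, but the sketch below follows the text as written.

```python
def pending_task_count(first_number, second_number):
    """Count of to-be-processed tasks associated with a storage unit,
    computed from the numbers of its first and last associated tasks
    (per the patent's formula: last minus first)."""
    return second_number - first_number
```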
It should be understood that when the storage space occupied by the data to be stored is small, only one idle storage unit may need to be applied for, whereas when it is large, a plurality of idle storage units need to be applied for; in that case the storage units may be connected in series through a linked list. Fig. 4 shows a schematic diagram of linked-list concatenation in a specific scenario, where each storage unit in the linked list may be selected at random from the idle storage units, with no ordering requirement. For example, in Fig. 3 the data required by to-be-processed task 101 is stored in front storage unit 2 and the current storage unit, so the addresses of front storage unit 2 and the current storage unit are stored in the linked list; when task 101 is processed, after the data in front storage unit 2 has been read, the current storage unit is found through the address in the linked list and the corresponding data in the current storage unit is read next.
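The concatenation scheme above can be illustrated as follows. This is a hedged sketch under assumed names (`free_units`, `store`, `read` are not from the patent): units are taken from the idle pool in arbitrary order, the list of their addresses forms the chain, and a reader follows the chain from the first address to the last.

```python
# Minimal sketch of chaining storage units with a linked list of unit
# addresses: the first entry is the start address of the stored data,
# the last entry is its end address.

free_units = [0, 1, 2, 3, 4, 5]   # addresses of idle storage units
unit_data = {}                    # unit address -> stored chunk

def store(chunks):
    """Store one chunk per unit; return the chain of unit addresses."""
    chain = []
    for chunk in chunks:
        addr = free_units.pop(0)  # any idle unit may be chosen
        unit_data[addr] = chunk
        chain.append(addr)
    return chain

def read(chain):
    """Read the data by following the chain from start to end address."""
    return b"".join(unit_data[addr] for addr in chain)

chain = store([b"part-a", b"part-b"])  # data spanning two units
data = read(chain)
```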
Depending on the parameters of the received data, information for a plurality of linked lists may be maintained; the linked lists are independent of each other.
For example, after the idle storage units for the data to be stored have been successfully applied for, their addresses may be recorded in the linked-list information in order. Since the data required by a to-be-processed task may be stored in a plurality of occupied storage units, the corresponding data can be found through the linked list: the first address stored in the linked list is the start-address information of the stored data (storage unit 0 in Fig. 4), and the last address stored in the linked list is the end-address information of the data. When the to-be-processed task is processed, the start address of the required data is found in the linked list and processing begins there, continuing until the end-address information is reached, at which point all the required data have been processed.
Specifically, depending on the requirement of the current to-be-processed task, data occupying one or more storage units may need to be read continuously. The first storage unit to be read is the occupied storage unit at the start of the data associated with the current task; it can be obtained through the start-address information in the linked list. After the first occupied storage unit has been read, the occupied storage units to be read next are found from the address information chained in the linked list. After the data of each storage unit has been read and the current to-be-processed task has been processed, the pending-task count of each of those occupied storage units is updated; when the count of a given occupied storage unit reaches 0, that storage unit is released and its state information is updated to the idle state. That is, it is switched back to an idle storage unit so that its storage space can be applied for and used by subsequent data to be stored.
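The release flow just described can be sketched as follows. Names and data layout are assumptions for illustration: after a task finishes, the pending-task count of every occupied unit it read from is decremented, and any unit whose count reaches zero is switched back to the idle state.

```python
# Hedged sketch of the release flow: decrement each unit's pending-task
# count after the current task is processed; release units that hit 0.

units = {
    0: {"state": "occupied", "pending": 2},  # still needed by another task
    1: {"state": "occupied", "pending": 1},  # only the current task uses it
}

def finish_task(chain):
    """Update counts for every unit the finished task read; free at 0."""
    freed = []
    for addr in chain:
        units[addr]["pending"] -= 1
        if units[addr]["pending"] == 0:
            units[addr]["state"] = "idle"  # released: back to the free pool
            freed.append(addr)
    return freed

freed = finish_task([0, 1])  # a task whose data spanned units 0 and 1
```

Only unit 1 is released here, since unit 0 still has a pending task associated with it.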
In this embodiment, each storage unit maintains corresponding storage-unit state information, storage-unit identification information, pending-task count information and linked-list information. The storage-unit state information indicates the storage state of the unit, such as the idle state or the occupied state: the initial state of each storage unit is idle (an idle storage unit), it becomes occupied after a successful application (an occupied storage unit), and it returns to the idle state after being occupied and then released. The linked-list information comprises linked-list address information representing the addresses of the storage units connected in series with the current storage unit, including start-address information, end-address information and the intermediate address information in between. The storage-unit identification information represents the identifier of the storage unit, to facilitate management of the storage units.
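The per-unit bookkeeping named above can be modelled as a small record. The field names are assumptions (the patent names the information but not its layout); a single `next_addr` field stands in for the chained-address information.

```python
# Assumed shape of the per-unit metadata described above: state,
# identifier, pending-task count, and linked-list address information.
from dataclasses import dataclass
from typing import Optional

@dataclass
class StorageUnit:
    unit_id: int                     # storage-unit identification information
    state: str = "idle"              # "idle" or "occupied"
    pending_tasks: int = 0           # number of associated to-be-processed tasks
    next_addr: Optional[int] = None  # address of the unit chained after this one

u = StorageUnit(unit_id=7)   # initial state of every unit is idle
u.state = "occupied"         # becomes occupied after a successful application
```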
It should be understood that the method for releasing the buffer space in this embodiment may be applied to various scenarios, such as a scenario of cell measurement, a scenario of data demodulation, a scenario of parameter estimation, and the like, which is not limited in this embodiment.
Example 2
This embodiment provides a system for releasing a cache space, where the cache space includes a plurality of storage units, and as shown in fig. 5, the system for releasing includes: a storage unit selection module 31, a processing task selection module 32 and a space release module 33.
The storage unit selecting module 31 is configured to select a target occupied storage unit according to the number of to-be-processed tasks associated with each occupied storage unit. Specifically, the storage unit selection module 31 is configured to calculate the number of to-be-processed tasks associated with each occupied storage unit, and then select a target occupied storage unit according to the calculated number.
In a first specific embodiment, the storage unit selection module 31 is configured to select, as the target occupied storage unit, one of the occupied storage units whose number of associated to-be-processed tasks is less than a task processing threshold. For example, the task processing threshold may be set to 3: all occupied storage units with fewer than 3 associated to-be-processed tasks are candidate storage units, and one of them is selected at random as the target occupied storage unit.
In a second specific embodiment, the storage unit selection module 31 is configured to sort the occupied storage units by the number of associated to-be-processed tasks in ascending order and select one of the first several as the target occupied storage unit. For example, the preset number may be set to 4: the occupied storage units ranked in the top 4 are candidate storage units, and one of them is selected at random as the target occupied storage unit.
In a third specific embodiment, the storage unit selecting module 31 is configured to select an occupied storage unit with the smallest number of associated to-be-processed tasks as the target occupied storage unit.
In this embodiment, in order to improve the efficiency of selecting the target occupied storage unit, the third embodiment is preferred.
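The three selection strategies above can be sketched together. Function names and the tie-breaking choice ("random" selection is shown deterministically as the first candidate) are assumptions for illustration; `occupied` maps a unit address to its number of associated pending tasks.

```python
# Sketches of the three embodiments of the storage unit selection module.

def select_by_threshold(occupied, threshold=3):
    """First embodiment: any unit with fewer pending tasks than the threshold."""
    candidates = [addr for addr, n in occupied.items() if n < threshold]
    return candidates[0] if candidates else None  # stand-in for a random pick

def select_top_k(occupied, k=4):
    """Second embodiment: sort ascending by count, pick among the first k."""
    ranked = sorted(occupied, key=occupied.get)[:k]
    return ranked[0] if ranked else None          # stand-in for a random pick

def select_minimum(occupied):
    """Third (preferred) embodiment: the unit with the fewest pending tasks."""
    return min(occupied, key=occupied.get) if occupied else None

occupied = {10: 5, 11: 1, 12: 3}
target = select_minimum(occupied)  # -> 11
```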
The processing task selection module 32 is configured to select a target processing task in the target occupied storage unit. Specifically, the module may select the target processing task at random from the target occupied storage unit, or according to a certain order; this embodiment is described using the latter approach. In this embodiment, the processing task selection module 32 takes the first to-be-processed task associated with the target occupied storage unit as the target processing task. It should be understood that, in other embodiments, the module may instead take the last to-be-processed task associated with the target occupied storage unit, or an intermediate one, as the target processing task.
The space releasing module 33 is configured to release the target occupied storage unit and call the storage unit selecting module 31 when the target occupied storage unit is not associated with the task to be processed.
The release system may further include a task determination module 34. The processing task selection module 32 is further configured to invoke the task determination module 34 after the target processing task is selected, and the task determination module 34 is configured to determine, after the target processing task has been processed, whether the occupied storage unit associated with the target processing task is still associated with any to-be-processed task; if not, the space release module 33 is invoked, and if so, the storage unit selection module 31 is invoked.
It should be understood that the release system may further comprise a storage unit dividing module 35, configured to divide the cache space into a plurality of storage units for subsequent dynamic allocation and release of the cache space. In this embodiment, the storage unit dividing module 35 preferably divides the cache space into a plurality of storage units of the same size, so as to further improve the efficiency and stability of the subsequent dynamic allocation and release.
In this embodiment, the target occupied storage unit is selected according to the number of to-be-processed tasks associated with each currently occupied storage unit; the processing task selection module 32 then selects the target processing task associated with the target occupied storage unit, and the space release module 33 releases the occupied storage unit after all of its associated to-be-processed tasks have been processed, that is, the occupied storage unit is switched back to an idle storage unit for subsequent data storage.
The system for releasing the cache space in this embodiment may further include a data receiving module 36, configured to receive data to be stored and store it into at least one idle storage unit according to its size. When the data to be stored is stored in a plurality of idle storage units, the idle storage units are connected in series by a linked list, and the data to be stored is associated with at least one to-be-processed task.
The data receiving module 36 may be configured to first apply for an idle storage unit when it starts receiving the data to be stored, and to determine whether the storage space contained in the current idle storage units meets the storage requirement of the data to be stored. If so, the data to be stored is stored into at least one idle storage unit according to its size. If not, in one specific implementation, the module waits for occupied storage units to be released until the storage space contained in the current idle storage units meets the storage requirement of the data to be stored; in this case, the determination may be repeated at intervals of a first time threshold until the requirement is met. In another specific implementation, one part of the data to be stored is stored in the current idle storage units, and when a new idle storage unit becomes available, the other part is stored in the new idle storage unit.
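The admission check and the partial-store fallback just described can be sketched as follows. The unit size and all names are assumptions for illustration; the "wait and re-check" branch is reduced to the check itself.

```python
# Sketch of the data receiving module's admission logic (assumed names).
UNIT_SIZE = 4  # bytes per storage unit, an assumption for illustration

def can_store(free_unit_count, data_len):
    """Does the currently free space meet the storage requirement?"""
    return free_unit_count * UNIT_SIZE >= data_len

def split_for_partial_store(free_unit_count, data):
    """Fallback: store what fits now, keep the remainder for later units."""
    fit = free_unit_count * UNIT_SIZE
    return data[:fit], data[fit:]

# Two free units hold 8 bytes of payload; one free unit does not,
# so the payload is split and the tail waits for a released unit.
ok = can_store(2, 8)
now, later = split_for_partial_store(1, b"abcdefgh")
```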
In this embodiment, a plurality of data sources may not occupy the same storage unit, so as to facilitate management of data from different data sources; it should be understood that in other embodiments a plurality of data sources may occupy the same storage unit, the choice being made according to the actual situation.
In this embodiment, the data receiving module 36, the storage unit selection module 31, the processing task selection module 32, the task determination module 34 and the space release module 33 may be invoked simultaneously, so that the cache space can be released while the data to be processed is being received. The data processing process can therefore proceed continuously without interruption, further improving data processing efficiency.
The storage unit selection module 31 may randomly allocate idle storage units to the received data to be stored, provided that the total storage space of the allocated idle storage units meets the storage requirement of the data to be stored.
It should be understood that each piece of data to be stored is associated with at least one to-be-processed task. The storage unit selection module 31 may be invoked while the data receiving module 36 is invoked, and the storage unit selection module is configured, for each occupied storage unit, to obtain the first number of the currently associated first to-be-processed task and the second number of the currently associated last to-be-processed task, and to calculate the number of to-be-processed tasks associated with that occupied storage unit from the second number and the first number. For example, in one specific scenario there are currently 3 occupied storage units: the first is associated with two to-be-processed tasks, the second with one, and the third with three. The tasks in the first occupied storage unit may then be numbered 001 and 002, the task in the second occupied storage unit numbered 003, and the tasks in the third occupied storage unit numbered 004, 005 and 006.
Fig. 3 shows a schematic diagram of the numbering manner of to-be-processed tasks in a specific scenario: after the current storage unit has been applied for, the number (101) of the first to-be-processed task associated with the current storage unit is subtracted from the number (104) of the last associated to-be-processed task, and the result is the number of to-be-processed tasks associated with the current storage unit.
It should be understood that when the storage space occupied by the data to be stored is small, the data receiving module 36 may only need to apply for one idle storage unit, whereas when it is large, the data receiving module 36 may need to apply for a plurality of idle storage units; in that case the storage units may be connected in series through a linked list. Fig. 4 shows a schematic diagram of linked-list concatenation in a specific scenario, where each storage unit in the linked list may be selected at random from the idle storage units, with no ordering requirement. Depending on the parameters of the received data, information for a plurality of linked lists may be maintained; the linked lists are independent of each other.
For example, after the idle storage units for the data to be stored have been successfully applied for, their addresses may be recorded in the linked-list information in order. Since the data required by a to-be-processed task may be stored in a plurality of occupied storage units, the corresponding data can be found through the linked list: the first address stored in the linked list is the start-address information of the stored data (storage unit 0 in Fig. 4), and the last address stored in the linked list is the end-address information of the data. When the to-be-processed task is processed, the start address of the required data is found in the linked list and processing begins there, continuing until the end-address information is reached, at which point all the required data have been processed.
Specifically, depending on the requirement of the current to-be-processed task, data occupying one or more storage units may need to be read continuously. The first storage unit to be read is the occupied storage unit at the start of the data associated with the current task, which can be obtained through the start-address information in the linked list; after the first occupied storage unit has been read, the occupied storage units to be read next are found from the address information chained in the linked list. After the data of each storage unit has been read and the current to-be-processed task has been processed, the pending-task count of each of those occupied storage units is updated. The space release module 33 is used to release an occupied storage unit when its pending-task count reaches 0, and the state information of that storage unit is updated to the idle state; that is, it is switched back to an idle storage unit so that its storage space can be applied for and used by subsequent data to be stored.
In this embodiment, each storage unit maintains corresponding storage-unit state information, storage-unit identification information, pending-task count information and linked-list information. The storage-unit state information indicates the storage state of the unit, such as the idle state or the occupied state: the initial state of each storage unit is idle (an idle storage unit), it becomes occupied after a successful application (an occupied storage unit), and it returns to the idle state after being occupied and then released. The linked-list information comprises linked-list address information representing the addresses of the storage units connected in series with the current storage unit, including start-address information, end-address information and the intermediate address information in between; the storage-unit identification information represents the identifier of the storage unit, to facilitate management of the storage units.
It should be understood that the system for releasing the buffer space in this embodiment may be applied in various scenarios, such as a scenario of cell measurement, a scenario of data demodulation, a scenario of parameter estimation, and the like, which is not limited in this embodiment.
Example 3
An embodiment of the present invention further provides an electronic device, which may be represented in a form of a computing device (for example, may be a server device), and includes a memory, a processor, and a computer program that is stored in the memory and is executable on the processor, where when the processor executes the computer program, the method for releasing a cache space in embodiment 1 of the present invention may be implemented.
Fig. 6 shows a schematic diagram of a hardware structure of the embodiment, and as shown in fig. 6, the electronic device 9 specifically includes:
at least one processor 91, at least one memory 92, and a bus 93 for connecting the various system components (including the processor 91 and the memory 92), wherein:
the bus 93 includes a data bus, an address bus, and a control bus.
Memory 92 includes volatile memory, such as Random Access Memory (RAM) 921 and/or cache memory 922, and can further include Read Only Memory (ROM) 923.
Memory 92 also includes a program/utility 925 having a set (at least one) of program modules 924, such program modules 924 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment.
The processor 91 executes various functional applications and data processing, such as a method of releasing a cache space in embodiment 1 of the present invention, by running the computer program stored in the memory 92.
The electronic device 9 may further communicate with one or more external devices 94, such as a keyboard, pointing device, etc. Such communication may be through an input/output (I/O) interface 95. Also, the electronic device 9 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network such as the Internet) via the network adapter 96. The network adapter 96 communicates with the other modules of the electronic device 9 via the bus 93. It should be understood that although not shown in the figures, other hardware and/or software modules may be used in conjunction with the electronic device 9, including but not limited to: microcode, device drivers, redundant processors, external disk drive arrays, RAID (disk array) systems, tape drives, and data backup storage systems, to name a few.
It should be noted that although several units/modules or sub-units/modules of the electronic device are mentioned in the above detailed description, such a division is merely exemplary and not mandatory. Indeed, according to embodiments of the application, the features and functions of two or more units/modules described above may be embodied in one unit/module; conversely, the features and functions of one unit/module described above may be further divided into a plurality of units/modules to be embodied.
Example 4
An embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of the method for releasing a cache space in embodiment 1 of the present invention.
More specific examples that may be employed by the readable storage medium include, but are not limited to: a portable disk, a hard disk, random access memory, read only memory, erasable programmable read only memory, optical storage device, magnetic storage device, or any suitable combination of the foregoing.
In a possible implementation manner, the present invention can also be implemented in the form of a program product, which includes program codes, and when the program product runs on a terminal device, the program codes are used for making the terminal device execute steps of implementing the method for releasing the cache space in embodiment 1 of the present invention.
The program code for carrying out the invention may be written in any combination of one or more programming languages, and may execute entirely on the user device, partly on the user device as a stand-alone software package, partly on the user device and partly on a remote device, or entirely on the remote device.
While specific embodiments of the invention have been described above, it will be appreciated by those skilled in the art that this is by way of example only, and that the scope of the invention is defined by the appended claims. Various changes and modifications to these embodiments may be made by those skilled in the art without departing from the spirit and scope of the invention, and these changes and modifications are within the scope of the invention.

Claims (18)

1. A method for releasing a cache space, wherein the cache space includes a plurality of storage units, the method comprising:
selecting a target occupied storage unit according to the number of the tasks to be processed associated with each occupied storage unit;
selecting a target processing task in the target occupation storage unit;
when the target occupation storage unit is not associated with the task to be processed, releasing the target occupation storage unit;
after the step of selecting the target processing task from the target occupied storage unit, the method further comprises the following steps:
after the target processing task is processed, judging whether an occupied storage unit associated with the target processing task is associated with a task to be processed, if not, executing the step of releasing the target occupied storage unit, and if so, returning to the step of selecting the target occupied storage unit according to the number of the tasks to be processed associated with each occupied storage unit;
the step of selecting a target occupied storage unit according to the number of to-be-processed tasks associated with each occupied storage unit comprises:
calculating the number of tasks to be processed related to each occupied storage unit;
selecting one of the occupied storage units with the number of the associated tasks to be processed less than the task processing threshold as a target occupied storage unit; or, sorting the number of the associated tasks to be processed in ascending order, and selecting one of the plurality of occupied storage units sorted at the front as a target occupied storage unit; or selecting the occupied storage unit with the least number of associated to-be-processed tasks as the target occupied storage unit.
2. The method of releasing a cache space of claim 1,
the step of selecting a target processing task in the target occupied storage unit comprises:
and taking the first task to be processed associated with the target occupation storage unit as a target processing task.
3. The method for releasing cache space according to claim 1, wherein the tasks to be processed associated with each occupied storage unit are numbered in sequence, and the step of calculating the number of the tasks to be processed associated with each occupied storage unit comprises:
for each occupied storage unit, acquiring a first number of a currently associated first task to be processed and a second number of a currently associated last task to be processed;
and calculating the number of the tasks to be processed related to the occupied storage unit according to the second number and the first number.
4. The method of freeing cache space of claim 1, further comprising:
receiving data to be stored, storing the data to be stored into at least one idle storage unit according to the size of the data to be stored, and when the data to be stored is stored in a plurality of idle storage units, connecting the idle storage units in series through a linked list, wherein the data to be stored is associated with at least one task to be processed.
5. The method for releasing cache space according to claim 4, wherein when data required to be used by the target processing task is stored in a plurality of occupied storage units, the addresses of the occupied storage units are obtained through corresponding linked lists.
6. The method for releasing cache space according to claim 1, wherein the step of selecting the target occupied storage unit according to the number of the to-be-processed tasks associated with each occupied storage unit further comprises:
and dividing the cache space into a plurality of storage units with the same size.
7. The method for releasing the buffer space according to claim 4, wherein the step of storing the data to be stored into at least one of the free storage units according to the size of the data to be stored comprises:
judging whether the storage space contained in the current free storage unit meets the storage requirement of the data to be stored: if yes, storing the data to be stored into a corresponding idle storage unit; if not, waiting for the release of the occupied storage unit until the storage space contained in the current idle storage unit meets the storage requirement of the data to be stored, or storing a part of the data to be stored in the current idle storage unit, and storing the other part of the data to be stored in the new idle storage unit when the new idle storage unit exists.
8. The method for releasing the cache space according to any one of claims 1 to 7, wherein the storage unit includes storage unit state information, number information of the tasks to be processed, link list address information, and storage unit identification information, the storage unit state information is used for indicating the storage state of the storage unit, and the link list address information is used for indicating the address of the storage unit connected in series with the current storage unit.
9. A system for releasing a cache space, wherein the cache space includes a plurality of storage units, the system comprising: the system comprises a storage unit selection module, a processing task selection module and a space release module;
the storage unit selection module is used for selecting a target occupied storage unit according to the number of the tasks to be processed related to each occupied storage unit;
the processing task selection module is used for selecting a target processing task from the target occupation storage unit;
the space release module is used for releasing the target occupation storage unit when the target occupation storage unit is not associated with a task to be processed;
the release system also comprises a task judgment module, the processing task selection module is also used for calling the task judgment module after a target processing task is selected, the task judgment module is used for judging whether an occupied storage unit associated with the target processing task is also associated with the task to be processed or not after the target processing task is processed, if not, the space release module is called, and if so, the storage unit selection module is called;
the memory cell selection module includes: the task computing unit and the storage selection unit;
the task computing unit is used for computing the number of the tasks to be processed related to each occupied storage unit;
the storage selection unit is used for selecting one of the occupied storage units as a target occupied storage unit, wherein the number of the associated tasks to be processed is less than a task processing threshold; or the storage selection unit is used for sequencing the number of the associated tasks to be processed from small to large and selecting one of a plurality of occupation storage units which are sequenced at the front as a target occupation storage unit; or, the storage selection unit is configured to select an occupied storage unit with the smallest number of associated to-be-processed tasks as a target occupied storage unit.
10. The system for releasing a cache space of claim 9,
and the processing task selection module is used for taking the first to-be-processed task associated with the target occupation storage unit as a target processing task.
11. The system for releasing cache space according to claim 9, wherein the to-be-processed tasks associated with each occupied storage unit are numbered in sequence, and the task calculating unit is configured to, for each occupied storage unit, obtain a first number of a currently associated first to-be-processed task and a second number of a currently associated last to-be-processed task, and calculate the number of to-be-processed tasks associated with the occupied storage unit according to the second number and the first number.
12. The system for releasing cache space according to claim 9, wherein the system for releasing cache space further comprises a data receiving module, configured to receive data to be stored, and store the data to be stored into at least one idle storage unit according to a size of the data to be stored, when the data to be stored is stored in a plurality of idle storage units, the idle storage units are connected in series by a linked list, and the data to be stored is associated with at least one task to be processed.
13. The system for releasing cache space according to claim 12, wherein the space releasing module is configured to, when data required to be used by the target processing task is stored in a plurality of occupied storage units, obtain addresses of the occupied storage units through corresponding linked lists.
14. The system for releasing cache space according to claim 9, further comprising a storage unit dividing module configured to divide the cache space into a plurality of storage units of the same size.
15. The system for releasing cache space according to claim 12, wherein the data receiving module is configured to determine whether the storage space contained in the currently idle storage units meets the storage requirement of the data to be stored: if so, store the data to be stored into the corresponding idle storage units; if not, either wait for occupied storage units to be released until the storage space contained in the currently idle storage units meets the storage requirement, or store part of the data to be stored in the currently idle storage units and store the remaining part in new idle storage units as they become available.
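The decision logic of claim 15 amounts to a three-way branch on available capacity. A minimal sketch, with the return strings as purely illustrative labels for the three claimed outcomes:

```python
def receive_decision(free_capacity, data_size):
    """Decide how the data receiving module proceeds (claim 15):
    store everything, store what fits and wait for the rest, or wait."""
    if free_capacity >= data_size:
        return "store"                      # idle space meets the requirement
    if free_capacity > 0:
        return "store_partial_then_wait"    # split across current and future idle units
    return "wait_for_release"               # no idle space at all
```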
16. The system for releasing cache space according to any one of claims 9 to 15, wherein each storage unit includes storage unit state information, to-be-processed task count information, linked list address information, and storage unit identification information; the storage unit state information indicates the storage state of the storage unit, and the linked list address information indicates the address of the storage unit connected in series with the current storage unit.
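The four per-unit fields enumerated in claim 16 map naturally onto a small record type. The field names and types below are illustrative assumptions; the patent names the fields but not their representation.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class StorageUnit:
    """Per-unit metadata fields enumerated in claim 16 (names illustrative)."""
    state: str                 # storage unit state information, e.g. "free" or "occupied"
    pending_task_count: int    # number of associated to-be-processed tasks
    next_addr: Optional[int]   # linked list address of the unit chained after this one
    unit_id: int               # storage unit identification information
```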
17. An electronic device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the computer program, implements the method for releasing cache space according to any one of claims 1 to 8.
18. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, carries out the steps of the method for releasing cache space according to any one of claims 1 to 8.
CN202011540102.4A 2020-12-23 2020-12-23 Method and system for releasing cache space, electronic device and storage medium Active CN112650449B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202011540102.4A CN112650449B (en) 2020-12-23 2020-12-23 Method and system for releasing cache space, electronic device and storage medium
PCT/CN2021/136650 WO2022135160A1 (en) 2020-12-23 2021-12-09 Releasing method and releasing system for buffer space, and electronic device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011540102.4A CN112650449B (en) 2020-12-23 2020-12-23 Method and system for releasing cache space, electronic device and storage medium

Publications (2)

Publication Number Publication Date
CN112650449A CN112650449A (en) 2021-04-13
CN112650449B true CN112650449B (en) 2022-12-27

Family

ID=75359543

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011540102.4A Active CN112650449B (en) 2020-12-23 2020-12-23 Method and system for releasing cache space, electronic device and storage medium

Country Status (2)

Country Link
CN (1) CN112650449B (en)
WO (1) WO2022135160A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112650449B (en) * 2020-12-23 2022-12-27 展讯半导体(南京)有限公司 Method and system for releasing cache space, electronic device and storage medium
CN112995704B (en) * 2021-04-25 2021-08-06 武汉中科通达高新技术股份有限公司 Cache management method and device, electronic equipment and storage medium

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH05143497A (en) * 1991-11-18 1993-06-11 Nec Corp Method and device for buffer management
US6105108A (en) * 1997-10-24 2000-08-15 Compaq Computer Corporation Method and apparatus for releasing victim data buffers of computer systems by comparing a probe counter with a service counter
CN1184777C (en) * 2002-04-17 2005-01-12 华为技术有限公司 Method for managing and allocating buffer storage during Ethernet interchange chip transmission of data
EP1834231A1 (en) * 2004-12-10 2007-09-19 Koninklijke Philips Electronics N.V. Data processing system and method for cache replacement
CN101753580B (en) * 2010-01-08 2012-07-25 烽火通信科技股份有限公司 Packet processing chip and data storage and forwarding method thereof
US9367348B2 (en) * 2013-08-15 2016-06-14 Globalfoundries Inc. Protecting the footprint of memory transactions from victimization
CN105159777B (en) * 2015-08-03 2018-07-27 中科创达软件股份有限公司 The method for recovering internal storage and device of process
CN107665146B (en) * 2016-07-29 2020-07-07 华为技术有限公司 Memory management device and method
CN106681829B (en) * 2016-12-09 2020-07-24 北京康吉森技术有限公司 Memory management method and system
CN110032438B (en) * 2019-04-24 2021-11-26 北京高途云集教育科技有限公司 Delayed task execution method and device and electronic equipment
CN111538694B (en) * 2020-07-09 2020-11-10 常州楠菲微电子有限公司 Data caching method for network interface to support multiple links and retransmission
CN112650449B (en) * 2020-12-23 2022-12-27 展讯半导体(南京)有限公司 Method and system for releasing cache space, electronic device and storage medium

Also Published As

Publication number Publication date
WO2022135160A1 (en) 2022-06-30
CN112650449A (en) 2021-04-13

Similar Documents

Publication Publication Date Title
CN110096336B (en) Data monitoring method, device, equipment and medium
CN113641457B (en) Container creation method, device, apparatus, medium, and program product
CN112650449B (en) Method and system for releasing cache space, electronic device and storage medium
CN112799606B (en) Scheduling method and device of IO (input/output) request
US20060107261A1 (en) Providing Optimal Number of Threads to Applications Performing Multi-tasking Using Threads
CN107515781B (en) Deterministic task scheduling and load balancing system based on multiple processors
CN107977275B (en) Task processing method based on message queue and related equipment
CN111143331A (en) Data migration method and device and computer storage medium
EP3945420A1 (en) Method and apparatus for data processing, server and storage medium
CN116302453B (en) Task scheduling method and device for quantum electronic hybrid platform
CN115951845B (en) Disk management method, device, equipment and storage medium
US11194619B2 (en) Information processing system and non-transitory computer readable medium storing program for multitenant service
CN110764705B (en) Data reading and writing method, device, equipment and storage medium
CN114157717B (en) System and method for dynamic current limiting of micro-service
CN115658295A (en) Resource scheduling method and device, electronic equipment and storage medium
CN114416357A (en) Method and device for creating container group, electronic equipment and medium
CN111913812A (en) Data processing method, device, equipment and storage medium
CN115599838B (en) Data processing method, device, equipment and storage medium based on artificial intelligence
CN115174483B (en) Time window based current limiting method, device, server and storage medium
CN113076178B (en) Message storage method, device and equipment
CN115981808A (en) Scheduling method, scheduling device, computer equipment and storage medium
CN117873694A (en) Heap space allocation method, heap space allocation device, electronic equipment and storage medium
CN116126466A (en) Resource scheduling method and device based on Kubernetes, electronic equipment and medium
CN117793197A (en) Method and device for distributing basic resources of voice task, electronic equipment and storage medium
CN117971663A (en) Case distribution method and computing equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant