CN116303125B - Request scheduling method, cache, device, computer equipment and storage medium - Google Patents


Info

Publication number
CN116303125B
CN116303125B
Authority
CN
China
Prior art keywords
request
target
scheduled
transaction
history
Prior art date
Legal status
Active
Application number
CN202310547976.XA
Other languages
Chinese (zh)
Other versions
CN116303125A (en)
Inventor
潘滨
虞美兰
吕晖
路文斌
马瑞
Current Assignee
Taichu Wuxi Electronic Technology Co ltd
Original Assignee
Taichu Wuxi Electronic Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Taichu Wuxi Electronic Technology Co ltd filed Critical Taichu Wuxi Electronic Technology Co ltd
Priority to CN202310547976.XA
Publication of CN116303125A
Application granted
Publication of CN116303125B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02 Addressing or allocation; Relocation
    • G06F12/08 Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802 Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0806 Multiuser, multiprocessor or multiprocessing cache systems
    • G06F12/0815 Cache consistency protocols
    • G06F12/0831 Cache consistency protocols using a bus scheme, e.g. with bus monitoring or watching means
    • G06F13/00 Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F13/14 Handling requests for interconnection or transfer
    • G06F13/16 Handling requests for interconnection or transfer for access to memory bus
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The application relates to the technical field of buses and discloses a request scheduling method, a cache, an apparatus, computer equipment and a storage medium. The method is applied to the cache and comprises: receiving a request to be scheduled and extracting a target transaction ID and a target mapping address from it, the target mapping address being the address to which the request maps in the cache; determining, according to the target transaction ID and the target mapping address, whether a target transaction ID history request and a target mapping address history request exist; executing the request to be scheduled when neither history request exists; and, when no target mapping address history request exists but a target transaction ID history request does, executing the request to be scheduled only if that history request is in a specified state. Both the transaction dependency and the mapping-address dependency of the request to be scheduled are considered, so cache performance is improved while ordering among requests with the same transaction ID is preserved.

Description

Request scheduling method, cache, device, computer equipment and storage medium
Technical Field
The present application relates to the field of bus technologies, and in particular, to a request scheduling method, a cache, a device, a computer device, and a storage medium.
Background
With the development of bus protocols and cache technologies, caches based on different bus protocols are becoming common in a range of application scenarios. For example, a cache based on the AXI (Advanced eXtensible Interface) protocol is often used where low information latency is required, because AXI transmits read requests, write requests, read data, write data and write responses on separate channels and exploits channel parallelism to reduce information transfer latency.
In the prior art, a cache in a bus system is mostly configured through an added management device, so that the relevant information channels of the system bus are routed into the cache, and requests are answered by the cache on a hit. However, for a cache based on a bus protocol that preserves ordering among requests with the same transaction identifier (ID), such as the AXI protocol, when a currently hit request is to be executed while an earlier, not-yet-executed miss request with the same transaction ID exists, the hit request must wait until that miss request completes, and other requests processed in parallel with the hit request must wait as well. This blocks the cache and reduces its performance.
Therefore, how to improve the performance of the cache while preserving ordering among requests with the same transaction ID has become an urgent problem to be solved.
Disclosure of Invention
In view of this, the embodiments of the present application provide a request scheduling method, a cache, a device, a computer apparatus, and a storage medium, so as to solve the problem of improving the performance of the cache while preserving ordering among requests with the same transaction ID.
In a first aspect, an embodiment of the present application provides a request scheduling method, where the method includes:
receiving a request to be scheduled, and extracting a target transaction ID and a target mapping address from the request to be scheduled; the target mapping address is the address mapped into the cache;
determining whether a target transaction ID history request and a target mapping address history request exist according to the target transaction ID and the target mapping address;
when the target transaction ID history request and the target mapping address history request do not exist, executing a request to be scheduled;
when the target mapping address history request does not exist and the target transaction ID history request exists, executing the request to be scheduled if the target transaction ID history request is in a specified state.
Optionally, the specified state includes: the target transaction ID history request is in an execution state and has the same hit result and allocation attribute as the request to be scheduled; or, the target transaction ID history request has completed execution.
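As an illustration only (not the patent's actual implementation), the specified-state test above can be sketched in Python; the record type and its field names are hypothetical:

```python
from dataclasses import dataclass

# Hypothetical record of an in-flight (history) request.
@dataclass
class HistoryRequest:
    executing: bool  # the history request is currently in an execution state
    done: bool       # the history request has completed execution
    hit: bool        # hit result of the history request
    alloc: bool      # allocation attribute of the history request

def in_specified_state(history: HistoryRequest,
                       pending_hit: bool, pending_alloc: bool) -> bool:
    """A same-transaction-ID history request is in the specified state when it
    has completed, or when it is executing with the same hit result and
    allocation attribute as the pending request."""
    if history.done:
        return True
    return (history.executing
            and history.hit == pending_hit
            and history.alloc == pending_alloc)
```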
Optionally, the cache includes a tag storage unit; after receiving the request to be scheduled, the method further comprises:
extracting address information in a request to be scheduled;
determining a hit result of the request to be scheduled according to whether address information exists in the tag storage unit;
and when the hit result of the to-be-scheduled request is a miss, storing the address information into a tag storage unit.
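The hit determination and the early tag update on a miss can be sketched as follows; modelling the tag storage unit as a plain set of addresses is a simplification introduced here, not the patent's structure:

```python
def probe_tag(tag_store: set, address: int) -> bool:
    """Return the hit result for a request's address information.

    On a miss the address is written into the tag store immediately, before
    the data has actually been loaded, so that later requests to the same
    address are treated as (pseudo-)hits and no duplicate main-memory access
    is generated.
    """
    if address in tag_store:
        return True          # hit: address information already present
    tag_store.add(address)   # miss: update the tag store right away
    return False
```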
Optionally, the cache includes at least one recording unit, where the recording unit is configured to store the received request to be scheduled; determining whether there is a target transaction ID history request and a target map address history request based on the target transaction ID and the target map address, including:
determining whether a history request of the target transaction ID exists according to whether the history request of which the transaction ID is the target transaction ID exists in the recording unit;
and determining whether a history request of the target mapping address exists according to whether the history request of which the mapping address is the target mapping address exists in the recording unit.
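A minimal sketch of the two lookups over the recording units, assuming records are dicts with hypothetical keys "txn_id" and "mapped_addr":

```python
def find_history(records, txn_id, mapped_addr):
    """Scan all recording units for an earlier request with the same
    transaction ID and for one with the same mapped address."""
    has_id_history = any(r["txn_id"] == txn_id for r in records)
    has_addr_history = any(r["mapped_addr"] == mapped_addr for r in records)
    return has_id_history, has_addr_history
```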
Optionally, after determining whether there is a target transaction ID history request and a target map address history request according to the target transaction ID and the target map address, the method further includes:
when both the target transaction ID history request and the target mapping address history request exist, monitoring whether the target transaction ID history request and the target mapping address history request have been executed;
when the target transaction ID history request has been executed, determining that the target transaction ID history request no longer exists;
when the target mapping address history request has been executed, determining that the target mapping address history request no longer exists.
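The monitoring step above can be sketched as a re-check after a history request retires; the dict keys "txn_id" and "mapped_addr" are hypothetical:

```python
def still_blocked(records, txn_id, mapped_addr):
    """A waiting request remains blocked only while some recording unit still
    holds an earlier request sharing its transaction ID or mapped address."""
    return any(r["txn_id"] == txn_id or r["mapped_addr"] == mapped_addr
               for r in records)

def retire(records, index):
    """Drop a completed history request; the next still_blocked() check then
    no longer sees it, so the corresponding dependency is cleared."""
    del records[index]
```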
In a second aspect, an embodiment of the present application provides a cache, where the cache includes a transaction scheduling module, where the transaction scheduling module is configured to:
receiving a request to be scheduled, and extracting a target transaction ID and a target mapping address from the request to be scheduled; the target mapping address is the address mapped into the cache;
determining whether a target transaction ID history request and a target mapping address history request exist according to the target transaction ID and the target mapping address;
when the target transaction ID history request and the target mapping address history request do not exist, executing a request to be scheduled;
when the target mapping address history request does not exist and the target transaction ID history request exists, executing the request to be scheduled if the target transaction ID history request is in a specified state.
Optionally, the cache further includes a tag management module and a tag storage unit, the tag management module being electrically connected with the tag storage unit and connected with the transaction scheduling module through an interface;
the tag management module is used for:
receiving a request to be scheduled, and extracting address information in the request to be scheduled;
determining a hit result of the request to be scheduled according to whether address information exists in the tag storage unit;
when the hit result of the request to be scheduled is a miss, storing the address information into a tag storage unit;
and sending the to-be-scheduled request and the hit result of the to-be-scheduled request to the transaction scheduling module through an interface.
In a third aspect, an embodiment of the present application provides a request scheduling apparatus, where the apparatus includes:
the receiving module is used for receiving the request to be scheduled and extracting a target transaction ID and a target mapping address from the request to be scheduled; the target mapping address is the address mapped into the cache;
the determining module is used for determining whether a target transaction ID historical request and a target mapping address historical request exist according to the target transaction ID and the target mapping address;
the execution module is used for executing the request to be scheduled when the target transaction ID historical request and the target mapping address historical request do not exist;
and the execution module is also used for executing the request to be scheduled if the target transaction ID historical request is in a specified state when the target mapping address historical request does not exist and the target transaction ID historical request exists.
In a fourth aspect, an embodiment of the present application provides a computer apparatus, including: a memory and a processor in communication connection, where the memory stores computer instructions, and the processor executes the computer instructions to perform the request scheduling method of the first aspect or any of its corresponding implementations.
In a fifth aspect, an embodiment of the present application provides a computer-readable storage medium storing computer instructions configured to cause a computer to perform the request scheduling method of the first aspect or any of its corresponding implementations.
The technical scheme provided by the application can comprise the following beneficial effects:
When a request to be scheduled is received, the target transaction ID and the target mapping address are extracted from it, and it is determined whether a target transaction ID history request and a target mapping address history request exist, so that both the transaction dependency and the address dependency of the request are considered. When neither history request exists, the request to be scheduled is executed directly; the execution order of requests with different transaction IDs may thus be relaxed, reducing the possibility of blocking the cache and improving its performance. When no target mapping address history request exists but a target transaction ID history request does, the request to be scheduled is executed if the target transaction ID history request is in a specified state. That is, before the current request is executed, it is first checked whether a history request with the same mapping address remains, so cache coherence is guaranteed by respecting the address dependency of the current request; and whether to execute the current request is decided by whether the earlier same-transaction-ID history request is in the specified state, so the transaction dependency is respected, ordering among requests with the same transaction ID is preserved, and the performance of the cache is further improved.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings that are needed in the description of the embodiments or the prior art will be briefly described, and it is obvious that the drawings in the description below are some embodiments of the present application, and other drawings can be obtained according to the drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic diagram illustrating a cache architecture according to some embodiments of the application;
FIG. 2 is a schematic diagram of the configuration of the transaction scheduling module and tag management module in the cache referred to in FIG. 1;
FIG. 3 is a flow chart illustrating a method of request scheduling according to some embodiments of the application;
FIG. 4 is a flow chart illustrating another request scheduling method according to some embodiments of the application;
FIG. 5a is a schematic diagram of a flow of execution of a request to be scheduled according to an application scenario of the present application;
FIG. 5b is a schematic diagram of a request to be scheduled execution flow according to another application scenario of the present application;
FIG. 6 is a block diagram showing a request scheduler according to an embodiment of the present application;
Fig. 7 is a schematic diagram of a hardware structure of a computer device according to an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present application more apparent, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments of the present application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
It should be understood that the "indication" mentioned in the embodiments of the present application may be a direct indication, an indirect indication, or an indication having an association relationship. For example, a indicates B, which may mean that a indicates B directly, e.g., B may be obtained by a; it may also indicate that a indicates B indirectly, e.g. a indicates C, B may be obtained by C; it may also be indicated that there is an association between a and B.
In the description of the embodiments of the present application, the term "corresponding" may indicate that there is a direct correspondence or an indirect correspondence between the two, or may indicate that there is an association between the two, or may indicate a relationship between the two and the indicated, configured, etc.
In the embodiment of the present application, the "predefining" may be implemented by pre-storing corresponding codes, tables or other manners that may be used to indicate relevant information in devices (including, for example, terminal devices and network devices), and the present application is not limited to the specific implementation manner thereof.
For the convenience of understanding the technical solution of the present application, a description will be first made of terms involved in the present application.
Cache: a layer in the memory hierarchy and an important component of high-performance computing systems. It is generally slower than the memory above it and faster than the memory below it. It holds a copy of part of the data in the next-level memory, so that when the upper level requests that data, the cache can supply it directly without accessing the next-level memory, saving access time and improving system performance. A cache can be divided into three main parts: a tag storage unit, a data storage unit and a management unit. The tag storage unit records the address corresponding to each cache line and the state information of the cache line; the data storage unit records the memory data copy of the address corresponding to the cache line; and the management unit, as control logic, implements access to and content update of the tag storage unit and the data storage unit.
AXI (Advanced eXtensible Interface) protocol: a high-performance, high-bandwidth, low-latency on-chip bus protocol. It transmits read requests, write requests, read data, write data and write responses on separate channels, with information in each channel flowing in one direction only, so channel parallelism reduces transfer latency. It uses a bus ID, i.e. the transaction ID, to indicate the attribution of each access, supports issuing multiple read/write operations before the returned data arrives (outstanding transactions), and allows accesses with different transaction IDs to complete out of order, greatly improving data throughput by executing multiple bus operations in parallel.
The currently mainstream management unit for non-AXI protocols is the MSHR (Miss Status Handling Register). After a request misses, other requests continue to be processed, and later requests to the same missed address are merged with it, avoiding unnecessary repeated accesses. However, for the AXI protocol, which preserves ordering among requests with the same transaction ID, a hit request still has to wait for an earlier miss request with the same transaction ID to complete before it can execute, so the out-of-order characteristic of the AXI protocol cannot be exploited; the cache is blocked and its performance reduced.
The technical scheme of the application receives the request to be scheduled, already converted to the address level, from the preceding module, and determines whether to execute it according to whether a history request with the same transaction ID and a history request with the same mapping address exist; when a same-transaction-ID history request exists but no same-mapping-address history request does, whether to execute the request is determined by whether the same-transaction-ID history request is in a specified state. Both the address dependency and the transaction dependency of the current request to be scheduled are considered, waiting for earlier history requests is avoided as far as possible, and the out-of-order and outstanding characteristics of the AXI protocol are satisfied. For other caches based on bus protocols that preserve same-transaction-ID ordering, multiple requests can likewise be executed in parallel while that ordering is guaranteed, reducing the possibility of cache blocking and thereby improving cache performance.
Fig. 1 is a schematic diagram of a cache according to an embodiment of the present application. The cache comprises a transaction scheduling module 110 and a tag management module 120, where the tag management module 120 is connected with the transaction scheduling module 110 through an interface and is electrically connected to the tag storage unit.
In the cache shown in fig. 1, the tag management module 120 is configured to receive an address-level request to be scheduled sent by a preceding module and extract its address information; it then determines the hit result of the request according to whether the address information exists in the tag storage unit, and sends the request and the hit result to the transaction scheduling module 110 through the interface.
In particular, when the hit result of the request to be scheduled is a miss, its address information is stored into the tag storage unit. Compared with the update timing of an existing tag storage unit, in the embodiment of the application the tag management module 120 does not wait for the related data to finish loading after a miss, but updates the tag storage unit immediately. When a later request with the same address information arrives, that address already exists in the tag storage unit, so the later request is treated as a hit (although the related data may not have finished loading) and no further access request for the same main-memory address is generated; access requests are thereby merged and cache performance further improved. It should be noted that, before the related data finishes loading, other requests with the same address information are in a pseudo-hit state and can only be executed after the data loading completes; this waiting is implemented in the transaction scheduling module.
The transaction scheduling module 110 is configured to receive the request to be scheduled and its hit result from the tag management module 120, extract the target transaction ID and the target mapping address from the request, and determine whether a target transaction ID history request and a target mapping address history request exist; when neither exists, the request to be scheduled is executed; when no target mapping address history request exists but a target transaction ID history request does, the request to be scheduled is executed if the target transaction ID history request is in a specified state. Both the address dependency and the transaction dependency of the request are considered: executing the request only when no target mapping address history request exists guarantees cache coherence, and checking whether the target transaction ID history request is in the specified state preserves ordering among requests with the same transaction ID. On the basis of guaranteeing same-transaction-ID ordering, the probability of blocking is reduced and cache performance improved while cache coherence is maintained. It should be noted that the target mapping address is the address mapped into the cache, for example the address at which the request to be scheduled maps into the data storage unit of the cache.
Optionally, the transaction scheduling module 110 may include at least one buffer unit 111, where the buffer unit 111 includes at least one recording unit 101, and the recording unit 101 is configured to store the to-be-scheduled request received by the transaction scheduling module 110. After receiving the request to be scheduled sent by the tag management module 120, the transaction scheduling module first stores the request to be scheduled in the available recording unit 101, and further extracts the target transaction ID and the target mapping address in the request to be scheduled. All recording units 101 are queried for a request for the same target transaction ID or a request for the same target map address to determine whether there is a target transaction ID history request and a target map address history request.
It should be noted that, in the embodiment of the present application, the specific number of interfaces and interface definitions between the tag management module 120 and the transaction scheduling module 110 are not limited, and may be 12 interfaces such as tagi2disp_vld, tagi2tagi_rdy, tagi2disp_id, etc. shown in fig. 2, or may be other interfaces set by those skilled in the art according to different application scenarios. The specific number of the recording units 101 and the buffer units 111 is not particularly limited, and those skilled in the art can set the recording units according to different application scenarios.
Preferably, the specific structures of the tag management module 120 and the transaction scheduling module 110 in the cache are shown in fig. 2; the two modules are connected through an interface, and the arrows between them indicate the data flow direction. 128 recording units 101, numbered idx0 to idx127, are arranged in the buffer unit 111. Each recording unit 101 comprises a plurality of storage units for storing the corresponding parts of a request to be scheduled; the specific number of storage units and the specific data of the request stored in each are not limited in the embodiment of the application and can be set by those skilled in the art according to different application scenarios. Preferably, as shown in fig. 2, a recording unit 101 may include 11 storage units, respectively storing a request's read-write bit, hit flag, allocation attribute, transaction ID (i.e. the ID in fig. 2), random access memory (Random Access Memory, RAM) address, physical address, last flag, RAM-address dependency-chain last flag, ID dependency-chain last flag, RAM-dependent transaction number, and ID-dependent transaction number.
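The 11 per-entry fields just listed can be mirrored, for illustration only, as a Python dataclass; the field names here are invented for readability and are not the patent's signal names:

```python
from dataclasses import dataclass

@dataclass
class RecordingUnit:
    """Illustrative mirror of the 11 storage units per recording entry."""
    rw: bool              # read-write bit
    hit: bool             # hit flag
    alloc: bool           # allocation attribute
    txn_id: int           # transaction ID
    ram_addr: int         # RAM (data store) address
    phys_addr: int        # physical address
    last: bool            # last flag
    ram_chain_last: bool  # RAM-address dependency-chain last flag
    id_chain_last: bool   # ID dependency-chain last flag
    ram_dep: int          # RAM-dependent transaction number
    id_dep: int           # ID-dependent transaction number
```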
Optionally, the transaction scheduling module 110 may also receive a request to be scheduled sent by a module other than the tag management module 120, for example, a main memory response sent by the main memory. The subsequent processing steps are consistent with the steps after the transaction scheduling module 110 receives the request to be scheduled sent by the tag management module 120, and will not be described herein.
Fig. 3 is a flow chart of a request scheduling method according to an embodiment of the present application. The method is performed by a cache, which may be the cache shown in fig. 1. As shown in fig. 3, the request scheduling method applied to the cache may include the following steps:
step 301, a request to be scheduled is received, and a target transaction ID and a target mapping address are extracted from the request to be scheduled.
The target mapping address is the address mapped into the cache, and the transaction ID indicates the attribution of the request to be scheduled. The cache receives the request to be scheduled, already converted to the address level, from the preceding module, and extracts its target transaction ID and target mapping address.
Step 302, determining whether there is a target transaction ID history request and a target mapping address history request according to the target transaction ID and the target mapping address.
The cache determines whether a target transaction ID history request and a target mapping address history request exist according to whether the set of history requests contains a history request whose transaction ID is the target transaction ID and one whose mapping address is the target mapping address, i.e. whether the request to be scheduled has a same-transaction-ID history request and/or a same-mapping-address history request. It can be understood that the target transaction ID history request is a history request with the same transaction ID as the request to be scheduled, and the target mapping address history request is a history request with the same mapping address.
In step 303, when there is no target transaction ID history request and no target mapping address history request, the request to be scheduled is executed.
When the cache determines that no target transaction ID history request and no target mapping address history request exist, this indicates that there is no unexecuted same-transaction-ID or same-mapping-address history request ahead of the request to be scheduled, so the request can be scheduled and the cache can execute it. In this way, requests to be scheduled that have no unexecuted same-transaction-ID or same-mapping-address history requests can be executed directly and in parallel, the execution order of requests with different transaction IDs can be interleaved, the possibility of blocking the cache is reduced, and cache performance is improved.
Step 304, when there is no target mapping address history request and there is a target transaction ID history request, executing the request to be scheduled if the target transaction ID history request is in a specified state.
When the cache determines that no target mapping address history request exists but a target transaction ID history request exists, it executes the request to be scheduled only when the target transaction ID history request is in a specified state. The specified state is a state indicating that executing the request to be scheduled does not affect the ordering of requests with the same transaction ID. When the cache is the cache shown in fig. 1, the cache may perform steps 301 to 304 through the transaction scheduling module 110.
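The decision flow of steps 301 to 304 can be sketched as follows. This is a minimal illustrative model, not the patent's hardware implementation; the field names (`txn_id`, `addr`, `state`) and the concrete set of state tags are assumptions of this sketch:

```python
def can_execute(pending, history):
    """Decide whether `pending` may be issued (steps 303-304).

    pending: {'txn_id': ..., 'addr': ...} -- the target transaction ID and
    target mapping address extracted from the request to be scheduled.
    history: earlier requests received before `pending` that have not yet
    completed, each {'txn_id': ..., 'addr': ..., 'state': ...}.
    """
    same_id = [h for h in history if h['txn_id'] == pending['txn_id']]
    same_addr = [h for h in history if h['addr'] == pending['addr']]
    if same_addr:
        # A same-mapping-address history request exists: must wait,
        # to preserve cache coherence.
        return False
    if not same_id:
        # No transaction or address dependency at all: issue now (step 303).
        return True
    # Step 304: a same-transaction-ID history request exists; issue only
    # if it is in the "specified state".  Here the specified state is
    # simplified to: finished, or executing with the same hit result and
    # allocation attribute (folded into a single state tag).
    return same_id[-1]['state'] in ('done', 'executing_same_attrs')
```

Note that the address dependency is checked first: a same-mapping-address history request always blocks the new request, while a same-transaction-ID history request blocks it only when that history request is not in the specified state.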
In summary, when a request to be scheduled is received, the target transaction ID and the target mapping address are extracted from it in order to determine whether a target transaction ID history request and a target mapping address history request exist, thereby taking both the transaction dependency and the address dependency of the request into account. When neither a target transaction ID history request nor a target mapping address history request exists, the request to be scheduled is executed directly; the execution order of requests with different transaction IDs can thus be interleaved, which reduces the possibility of blocking the cache and improves cache performance. When no target mapping address history request exists but a target transaction ID history request does, the request to be scheduled is executed only if the target transaction ID history request is in the specified state. This means that before the current request is executed, it is checked whether the request still has a same-mapping-address history request, so its address dependency is taken into account and cache coherence is guaranteed; and whether to execute the current request is determined by whether the preceding same-transaction-ID history request is in the specified state, so its transaction dependency is taken into account, the ordering of requests with the same transaction ID is preserved, and cache performance is further improved.
Fig. 4 is a flow chart illustrating a request scheduling method according to an embodiment of the present application. The method is performed by a cache, which may be the cache shown in fig. 1. As shown in fig. 4, the request scheduling method applied to the cache may include the following steps:
step 401, a request to be scheduled is received, and a target transaction ID and a target mapping address are extracted from the request to be scheduled.
For details, refer to step 301 in the embodiment shown in fig. 3; it is not repeated here.
Optionally, the cache includes a tag storage unit; after receiving the request to be scheduled, the request scheduling method further comprises the following steps: extracting address information in a request to be scheduled; determining a hit result of the request to be scheduled according to whether address information exists in the tag storage unit; and when the hit result of the to-be-scheduled request is a miss, storing the address information into a tag storage unit.
After the cache receives the request to be scheduled, sent by the preceding module after conversion to the address level, it can extract the address information from the address-level request. It then queries whether a record of this address information exists in the tag storage unit. When the address information exists in the tag storage unit, the hit result of the request to be scheduled is determined to be a hit; when it does not, the hit result is determined to be a miss. When the hit result of the request to be scheduled is a miss, the address information of the request is stored into the tag storage unit, so that subsequently received address-level requests to the same address hit and no further access request to the same main memory address is generated, thereby merging access requests.
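As an illustration of this tag-lookup and request-merging behaviour, the tag storage unit can be modelled as a simple set of recorded addresses. The class and method names below are assumptions made for the sketch, not the patent's structure:

```python
class TagStore:
    """Toy model of the tag storage unit for hit/miss determination."""

    def __init__(self):
        self._tags = set()

    def lookup(self, addr_info):
        """Return True (hit) if addr_info is already recorded.

        On a miss, the address information is stored, so that later
        requests to the same address hit and no duplicate main-memory
        access request is generated (request merging).
        """
        if addr_info in self._tags:
            return True
        self._tags.add(addr_info)
        return False
```

The first access to a given address misses and records the address; every later access to that address hits.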
Optionally, when the cache is the cache shown in fig. 1, the cache receives, through the tag management module 120, a request to be scheduled, which is sent by a previous module and is converted into an address level, and extracts address information in the request to be scheduled; determining a hit result of the request to be scheduled according to whether address information exists in the tag storage unit; when the hit result of the request to be scheduled is a miss, storing the address information into a tag storage unit; the request to be scheduled and the hit result of the request to be scheduled are sent to the transaction scheduling module 110 through the interface, so that the cache performs steps 401 to 407 through the transaction scheduling module.
Step 402, determining whether there is a target transaction ID history request and a target mapping address history request according to the target transaction ID and the target mapping address.
For details, refer to step 302 in the embodiment shown in fig. 3; it is not repeated here.
Optionally, in order to improve the accuracy and efficiency of determining the same-transaction-ID history request and the same-mapping-address history request, step 402 may include the following steps:
and determining whether a history request of the target transaction ID exists according to whether the history request of which the transaction ID is the target transaction ID exists in the recording unit.
The cache comprises at least one recording unit, and the recording units are used for storing the received requests to be scheduled; the recording units thus collectively constitute the history request set. The cache searches all recording units for a history request whose transaction ID matches the target transaction ID of the current request to be scheduled. If such a history request exists, the current request to be scheduled has a transaction ID dependency, and the target transaction ID history request is determined to exist; otherwise, no target transaction ID history request exists.
And determining whether a history request of the target mapping address exists according to whether the history request of which the mapping address is the target mapping address exists in the recording unit.
The cache searches all recording units for a history request whose mapping address matches the target mapping address of the current request to be scheduled. If such a history request exists, the current request to be scheduled has an address dependency, and the target mapping address history request is determined to exist; otherwise, no target mapping address history request exists. Because the recording units store every received request to be scheduled, the history requests are already gathered in the recording units, and no additional mechanism for monitoring history requests needs to be provided. Whether same-transaction-ID and same-mapping-address history requests exist is determined directly by checking all recording units for a history request whose transaction ID is the target transaction ID and one whose mapping address is the target mapping address, which improves the efficiency and accuracy of the determination.
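The lookup across recording units described above can be sketched as a linear scan; the `RecordUnit` fields below are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class RecordUnit:
    idx: int        # number of this recording unit
    txn_id: int     # transaction ID of the stored request
    addr: int       # mapped (RAM) address of the stored request

def find_dependencies(units, txn_id, addr):
    """Scan all occupied recording units for a same-transaction-ID and a
    same-mapping-address history request; return their recording-unit
    numbers (or None when absent), taking the most recently stored match.
    """
    id_dep = addr_dep = None
    for u in units:
        if u.txn_id == txn_id:
            id_dep = u.idx
        if u.addr == addr:
            addr_dep = u.idx
    return id_dep, addr_dep
```

A hardware implementation would do this comparison in parallel across all units; the sequential loop here only illustrates the two membership checks.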
Optionally, after determining whether there is a target transaction ID history request and a target map address history request according to the target transaction ID and the target map address, the method further includes: and determining whether to update the transaction identifier and the mapping address identifier of the request to be scheduled to the transaction identifier and the mapping address identifier of the target transaction ID history request according to whether the target transaction ID history request and the target mapping address history request exist.
The request to be scheduled includes a transaction identifier and a mapping address identifier. When a target transaction ID history request and/or a target mapping address history request exists, the cache updates the transaction identifier and/or mapping address identifier of the request to be scheduled to that of the corresponding history request; when no target transaction ID history request and/or target mapping address history request exists, the cache allocates a new transaction identifier and/or mapping address identifier to the request to be scheduled in the recording unit. This associates the request to be scheduled with its same-transaction-ID and same-mapping-address history requests while ensuring that each recording unit stores only one request, which facilitates subsequently locating the request and saves the storage capacity of the recording units. The transaction identifier and the mapping address identifier may be unique numerical values or characters. When the cache is the cache shown in fig. 2, in the recording unit 101 the ID-dependent transaction sequence number indicates the transaction identifier of the request stored in that recording unit, and the RAM-address-dependent transaction sequence number indicates its mapping address identifier.
Preferably, the transaction identifier and the mapping address identifier may be represented by the numbers of the recording units, and determining whether to update the identifiers of the request to be scheduled according to whether the target transaction ID history request and the target mapping address history request exist includes:
when a target transaction ID history request and/or a target mapping address history request exists, assigning the transaction identifier of the request to be scheduled the number of the recording unit where the target transaction ID history request is located, and/or assigning the mapping address identifier of the request to be scheduled the number of the recording unit where the target mapping address history request is located; when no target transaction ID history request and/or target mapping address history request exists, assigning the transaction identifier and/or mapping address identifier of the request to be scheduled the number of the recording unit where the request itself is located. Using recording-unit numbers as the transaction identifier and mapping address identifier associates the request to be scheduled with its same-transaction-ID and same-mapping-address history requests without generating identifiers through any other algorithm, and saves the storage capacity of the recording units.
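The assignment rule in the preceding paragraph can be condensed into a small helper; the function and argument names are assumptions made for this sketch:

```python
def assign_sequence_numbers(own_idx, id_dep_idx, addr_dep_idx):
    """Assign the ID-dependent and RAM-address-dependent transaction
    sequence numbers of a newly stored request.

    own_idx: number of the recording unit storing this request.
    id_dep_idx / addr_dep_idx: recording-unit number of the
    same-transaction-ID / same-mapping-address history request,
    or None when no such dependency exists.
    """
    id_seq = id_dep_idx if id_dep_idx is not None else own_idx
    addr_seq = addr_dep_idx if addr_dep_idx is not None else own_idx
    return id_seq, addr_seq
```

A sequence number equal to the request's own recording-unit number therefore means "no outstanding dependency of that kind", which is exactly the schedulability criterion used later.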
In an application scenario, as shown in fig. 5a, taking the ID-dependent transaction sequence number in a request to be scheduled as the transaction identifier and the RAM-address-dependent transaction sequence number as the mapping address identifier as an example: when neither a target transaction ID history request nor a target mapping address history request exists, and the current request to be scheduled is stored in the recording unit numbered idx1, then the request's ID-dependent transaction sequence number = 1 and RAM-address-dependent transaction sequence number = 1, and the request may be executed at this time. As shown in fig. 5b, when there is a target transaction ID history request stored in the recording unit numbered idx3 and a target mapping address history request stored in the recording unit numbered idx0, the ID-dependent transaction sequence number of the request to be scheduled = 3 and its RAM-address-dependent transaction sequence number = 0.
In step 403, when the target transaction ID history request and the target mapping address history request exist, whether the target transaction ID history request and the target mapping address history request are executed is monitored.
When the cache determines that the target transaction ID history request and the target map address history request exist, the cache continuously listens to the data stream in the module for executing the request to determine whether the target transaction ID history request and the target map address history request are executed. When the cache is the cache in fig. 1, the module for executing the request may be the transaction scheduling module 110, and the cache may continuously monitor the data flow of the transaction scheduling module through the recording unit 101 to determine whether the target transaction ID history request and the target mapping address history request are executed.
Step 404, when the execution of the target transaction ID history request is completed, it is determined that the target transaction ID history request does not exist.
When the cache determines that the target transaction ID history request has finished executing, this indicates that the current request to be scheduled has no unexecuted same-transaction-ID history request, so it determines that no target transaction ID history request exists.
Step 405, when the execution of the target mapping address history request is completed, determining that there is no target mapping address history request.
When the cache determines that the target mapping address history request has finished executing, this indicates that the current request to be scheduled has no unexecuted same-mapping-address history request, so it determines that no target mapping address history request exists.
Optionally, when the target transaction ID history request and/or the target mapping address history request are executed, updating the transaction identifier and/or the mapping address identifier of the request to be scheduled.
Preferably, when the target transaction ID history request and/or the target mapping address history request are executed, updating the transaction identifier and/or the mapping address identifier of the request to be scheduled includes:
when the target transaction ID historical request and/or the target mapping address historical request are/is executed, the transaction identification of the request to be scheduled is assigned as a numerical value in the number of the recording unit where the request to be scheduled is located, and/or the mapping address identification of the request to be scheduled is assigned as a numerical value in the number of the recording unit where the request to be scheduled is located.
In an application scenario, continuing to refer to fig. 5b, at a moment after the to-be-scheduled request is stored in the recording unit with the number idx1, the target transaction ID history request stored in the recording unit with the number idx3 is executed, and at this moment, the value of the ID dependent transaction sequence number of the to-be-scheduled request is changed from 3 to 1. Then, the target map address history request stored in the recording unit with the number idx0 is also executed, and at this time, the value of the RAM address dependent transaction sequence number of the request to be scheduled is changed from 0 to 1. At this time, the request to be scheduled may be executed.
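The sequence-number updates of steps 403 to 405 and fig. 5b can be sketched as a completion handler; the dictionary layout is an assumption of this sketch:

```python
def on_history_completed(entry, completed_idx):
    """Clear a dependency when the history request in recording unit
    `completed_idx` finishes executing.

    entry: {'own_idx': ..., 'id_seq': ..., 'addr_seq': ...} for a
    waiting request.  A sequence number equal to own_idx means the
    corresponding dependency has been cleared.
    """
    if entry['id_seq'] == completed_idx:
        entry['id_seq'] = entry['own_idx']
    if entry['addr_seq'] == completed_idx:
        entry['addr_seq'] = entry['own_idx']
    return entry
```

Replaying the fig. 5b scenario: the request sits in unit idx1 with ID-dependent sequence number 3 and RAM-address-dependent sequence number 0; completing the history request in idx3 changes the ID-dependent number from 3 to 1, then completing the one in idx0 changes the RAM-address-dependent number from 0 to 1, after which the request may be executed.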
In step 406, when there is no target transaction ID history request and no target map address history request, a request to be scheduled is executed.
For details, refer to step 303 in the embodiment shown in fig. 3; it is not repeated here.
In step 407, when there is no target mapping address history request and there is a target transaction ID history request, if the target transaction ID history request is in a specified state, the request to be scheduled is executed.
For details, refer to step 304 in the embodiment shown in fig. 3; it is not repeated here.
Optionally, the specified state includes: the target transaction ID history request is in an executing state and has the same hit result and allocation attribute as the request to be scheduled; or, the target transaction ID history request has finished executing. If the target transaction ID history request is not in the specified state, it is continuously monitored until it enters the specified state.
Preferably, when the transaction identifier and the mapping address identifier of the request to be scheduled meet the scheduling condition, executing the request to be scheduled; the scheduling conditions include:
condition 1: the mapping address of the request to be scheduled is identified as the number of the recording unit where the request to be scheduled is located;
Condition 2: the request to be scheduled also satisfies one of the following four conditions:
(1) The transaction identifier of the request to be scheduled is the number of the recording unit where the request to be scheduled is located;
(2) The hit result and the allocation attribute of the request to be scheduled and the target transaction ID historical request are the same, and the target transaction ID historical request is being executed;
(3) The hit result and the allocation attribute of the request to be scheduled and the target transaction ID historical request are the same, and the target transaction ID historical request is already executed;
(4) The target transaction ID history request has been executed and the execution result has been returned to the transaction scheduling module.
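Putting condition 1 and the four sub-conditions together, the schedulability test can be sketched as follows; the field names (`hit`, `alloc`) and the state tags are illustrative assumptions, not the patent's signal names:

```python
def is_schedulable(req, id_dep):
    """Check whether a registered request may be issued.

    req: {'own_idx', 'id_seq', 'addr_seq', 'hit', 'alloc'} -- the
    request's recording-unit number, its two dependency sequence
    numbers, its hit result, and its allocation attribute.
    id_dep: the same-transaction-ID history request as
    {'hit', 'alloc', 'state'}, or None when req has no ID dependency.
    """
    # Condition 1: no outstanding same-mapping-address dependency.
    if req['addr_seq'] != req['own_idx']:
        return False
    # Sub-condition (1): no outstanding same-transaction-ID dependency.
    if req['id_seq'] == req['own_idx']:
        return True
    same_attrs = (id_dep is not None
                  and id_dep['hit'] == req['hit']
                  and id_dep['alloc'] == req['alloc'])
    # Sub-conditions (2)/(3): same hit result and allocation attribute,
    # and the ID dependency is executing or already executed.
    if same_attrs and id_dep['state'] in ('executing', 'done'):
        return True
    # Sub-condition (4): the ID dependency finished and its execution
    # result has been returned to the transaction scheduling module.
    return id_dep is not None and id_dep['state'] == 'returned'
```

The address dependency is thus an absolute gate, while the transaction ID dependency can be relaxed when the two requests would take the same path through the cache (same hit result and allocation attribute).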
In an application scenario, referring to fig. 1 and fig. 2, the tag management module 120 receives a to-be-scheduled request converted into an address level from a preceding module, queries whether a record of address information of the to-be-scheduled request exists in a tag storage unit, and sends a query result (hit or not, i.e. hit result) and the to-be-scheduled request of the address level to the transaction scheduling module 110 through an interface, and updates contents in the tag storage unit. The transaction scheduling module 110 receives the query result and the request to be scheduled sent by the tag management module 120, schedules each request to be scheduled according to the ID-dependent transaction sequence number, the RAM address-dependent transaction sequence number, hit or not and other information of the request to be scheduled, and sends the request to other functional units. The request to be scheduled may be any one of the data RAM request, the retire request, and the main memory request in fig. 2.
The specific process by which the transaction scheduling module 110 schedules each request to be scheduled is as follows. First, the received request to be scheduled is stored in an available recording unit 101, and two pieces of information are extracted from it: the target transaction ID corresponding to the request, and the address of the data storage unit to which the request is mapped, i.e. the target mapping address. Then, according to these two pieces of information, all recording units are queried for a same-transaction-ID history request and a same-mapping-address history request. If a same-transaction-ID history request exists, the current request to be scheduled has an ID dependency on that earlier history request (the ordering of requests with the same ID must be preserved), and the ID-dependent transaction sequence number of the request to be scheduled is assigned the number of the recording unit where the same-transaction-ID history request is located; if no same-transaction-ID history request exists, the request to be scheduled is transaction-ID independent, and its ID-dependent transaction sequence number is assigned the number of the recording unit in which the request itself is stored.
If a same-mapping-address history request exists, the current request to be scheduled has a RAM address dependency on that earlier history request, and the RAM-address-dependent transaction sequence number of the request to be scheduled is assigned the number of the recording unit where the same-mapping-address history request is located; if no same-mapping-address history request exists, the request to be scheduled is RAM-address independent, and its RAM-address-dependent transaction sequence number is assigned the number of the recording unit in which the request itself is stored. The recording unit continuously monitors the data flow of the transaction scheduling module. When the history request on which the request to be scheduled depends due to the RAM address completes, the transaction scheduling module 110, through the recording unit, assigns the RAM-address-dependent transaction sequence number of the request to be scheduled to the number of the recording unit where the request itself is located; when the history request on which the request to be scheduled depends due to the transaction ID completes, the recording unit assigns the ID-dependent transaction sequence number of the request to be scheduled to the number of the recording unit where the request itself is located.
After the registration of the request to be scheduled is completed, the transaction scheduling module 110 searches the buffer unit 111 for the request to be scheduled that can be executed, and the following two conditions are simultaneously satisfied for the determination of the request to be scheduled that can be executed:
1. The value of the RAM-address-dependent transaction sequence number of the current request to be scheduled equals the number of the recording unit in which the request is stored.
2. One of the following four conditions is satisfied:
a) The value of the ID-dependent transaction sequence number of the current request to be scheduled equals the number of the recording unit in which the request is stored;
b) The current request to be scheduled has the same hit flag and allocation attribute as its same-transaction-ID history request, and that history request is being scheduled, where the hit flag is the hit result;
c) The request to be scheduled has the same hit flag and allocation attribute as its same-transaction-ID history request, and that history request has already been scheduled;
d) The same-transaction-ID history request has been scheduled and its execution result has been returned to the transaction scheduling module 110.
Through this request scheduling scheme, both the dependencies produced by same-transaction-ID ordering and the dependencies produced by same mapping addresses are accurately recorded, and schedulable requests are scheduled as early as possible, which improves the performance of the cache.
In summary, after determining whether a target transaction ID history request and a target mapping address history request exist according to the target transaction ID and the target mapping address, the method further monitors whether those history requests have been executed. When the target transaction ID history request and/or the target mapping address history request completes, it is determined that the request to be scheduled no longer has a same-transaction-ID and/or same-mapping-address history request. The request to be scheduled is then executed when neither history request exists; and when no target mapping address history request exists but a target transaction ID history request does, the request to be scheduled is executed if that history request is in the specified state. As soon as the same-transaction-ID and/or same-mapping-address history requests finish executing, the current request to be scheduled can be executed immediately, so that requests to be scheduled are executed as early as possible and cache performance is improved; at the same time, the ordering of requests with the same transaction ID is preserved even though the execution order of requests with different transaction IDs is interleaved. Moreover, executing a request to be scheduled strictly requires that it have no same-mapping-address history request, which guarantees that any same-mapping-address history request has completed before the request is executed, thereby ensuring cache coherence.
Fig. 6 is a block diagram of a request scheduling device according to an embodiment of the present application, where the request scheduling device is used to implement the foregoing embodiments and preferred implementations, and the description is omitted herein. The request scheduling device comprises:
a receiving module 601, configured to receive a request to be scheduled, and extract a target transaction ID and a target mapping address from the request to be scheduled; the target mapping address is the address mapped into the cache;
a determining module 602, configured to determine whether there is a target transaction ID history request and a target mapping address history request according to the target transaction ID and the target mapping address;
an execution module 603, configured to execute a request to be scheduled when there is no target transaction ID history request and no target mapping address history request;
the execution module 603 is further configured to execute the request to be scheduled if the target transaction ID history request is in the specified state when the target mapping address history request does not exist and the target transaction ID history request exists.
In some optional embodiments, the request scheduling apparatus further includes:
the extraction module is used for extracting the address information in the request to be scheduled;
the determining module is used for determining a hit result of the request to be scheduled according to whether address information exists in the tag storage unit;
And the storage module is used for storing the address information into the tag storage unit when the hit result of the request to be scheduled is a miss.
In some alternative embodiments, the determining module includes:
a determining unit, configured to determine whether a history request of the target transaction ID exists according to whether a history request of the transaction ID as the target transaction ID exists in the recording unit;
and the determining unit is further used for determining whether the history request of the target mapping address exists according to whether the history request of which the mapping address is the target mapping address exists in the recording unit.
In some optional embodiments, the request scheduling apparatus further includes:
the monitoring module is used for monitoring whether the target transaction ID historical request and the target mapping address historical request are executed or not when the target transaction ID historical request and the target mapping address historical request exist;
the determining module is further used for determining that the target transaction ID historical request does not exist when the target transaction ID historical request is executed;
and the determining module is also used for determining that the target mapping address history request does not exist when the target mapping address history request is executed.
The request scheduling device in this embodiment is presented in the form of functional units, where a unit may be, for example, an ASIC circuit, a processor and memory executing one or more software or firmware programs, and/or another device that can provide the above functionality.
Further functional descriptions of the above respective modules and units are the same as those of the above corresponding embodiments, and are not repeated here.
The embodiment of the application also provides a computer device which is provided with the request scheduling device shown in the figure 6.
Referring to fig. 7, fig. 7 is a schematic structural diagram of a computer device according to an alternative embodiment of the present application. As shown in fig. 7, the computer device includes: one or more processors 10, a memory 20, and interfaces for connecting the various components, including high-speed interfaces and low-speed interfaces. The various components are communicatively coupled to each other using different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions executed within the computer device, including instructions stored in or on the memory to display graphical information of a GUI on an external input/output device, such as a display device coupled to the interface. In some alternative embodiments, multiple processors and/or multiple buses may be used, if desired, along with multiple memories. Also, multiple computer devices may be connected, each providing a portion of the necessary operations (e.g., as a server array, a set of blade servers, or a multiprocessor system). One processor 10 is illustrated in fig. 7.
The processor 10 may be a central processor, a network processor, or a combination thereof. The processor 10 may further include a hardware chip, among others. The hardware chip may be an application specific integrated circuit, a programmable logic device, or a combination thereof. The programmable logic device may be a complex programmable logic device, a field programmable gate array, a general-purpose array logic, or any combination thereof.
The memory 20 stores instructions executable by the at least one processor 10 to cause the at least one processor 10 to perform the methods of the above embodiments.
The memory 20 may include a storage program area and a storage data area; the storage program area may store an operating system and at least one application program required for a function, and the storage data area may store data created through the use of the computer device, and the like. In addition, the memory 20 may include high-speed random access memory and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid-state storage device. In some alternative embodiments, the memory 20 may optionally include memory located remotely from the processor 10, connected to the computer device via a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The memory 20 may include volatile memory, such as random access memory; it may also include non-volatile memory, such as flash memory, a hard disk, or a solid-state disk; the memory 20 may also comprise a combination of the above types of memory.
The computer device also includes a communication interface 30 for the computer device to communicate with other devices or communication networks.
Embodiments of the present application also provide a computer-readable storage medium. The methods according to the above embodiments may be implemented in hardware or firmware, or as computer code that is recorded on a storage medium, or as computer code originally stored on a remote or non-transitory machine-readable storage medium, downloaded over a network, and stored on a local storage medium, so that the methods described herein can be carried out by software stored on a storage medium using a general-purpose computer, a special-purpose processor, or programmable or dedicated hardware. The storage medium may be a magnetic disk, an optical disc, a read-only memory, a random access memory, a flash memory, a hard disk, a solid-state disk, or the like; further, the storage medium may also comprise a combination of the above types of memory. It will be appreciated that the computer, processor, microprocessor controller, or programmable hardware includes a storage element that can store or receive software or computer code which, when accessed and executed by the computer, processor, or hardware, implements the methods illustrated in the above embodiments.
Although embodiments of the present application have been described in connection with the accompanying drawings, various modifications and variations may be made by those skilled in the art without departing from the spirit and scope of the application, and such modifications and variations fall within the scope of the application as defined by the appended claims.

Claims (9)

1. A method for scheduling requests, the method being applied to a cache, the method comprising:
receiving a request to be scheduled, and extracting a target transaction ID and a target mapping address from the request to be scheduled; the target mapping address is an address mapped into a cache;
determining whether a target transaction ID historical request and a target mapping address historical request exist according to the target transaction ID and the target mapping address;
when the target transaction ID historical request and the target mapping address historical request do not exist, executing the request to be scheduled;
when the target mapping address history request does not exist and the target transaction ID history request exists, executing a request to be scheduled if the target transaction ID history request is in a specified state;
the appointed state comprises that the target transaction ID historical request is in an execution state, and the hit result and the distribution attribute of the target transaction ID historical request and the request to be scheduled are the same; alternatively, execution of the target transaction ID history request is completed.
2. The method of claim 1, wherein the cache comprises a tag storage unit; after receiving the to-be-scheduled request, the method further includes:
extracting address information in the request to be scheduled;
determining a hit result of the request to be scheduled according to whether the address information exists in the tag storage unit;
and when the hit result of the request to be scheduled is a miss, storing the address information into the tag storage unit.
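The tag lookup of claim 2 can be sketched as follows; the flat `set` is a hypothetical stand-in for the tag storage unit, which in a real cache would be organized by sets and ways.

```python
def lookup_and_fill(tag_store: set, addr_info: int) -> bool:
    """Determine the hit result of a request from the tag storage unit
    and, on a miss, store the address information into the unit."""
    hit = addr_info in tag_store
    if not hit:
        tag_store.add(addr_info)  # miss: record the address for later requests
    return hit
```

With this bookkeeping, a second request carrying the same address information observes a hit.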
3. The method according to claim 1, wherein the cache comprises at least one recording unit for storing received requests to be scheduled; the determining whether there is a target transaction ID history request and a target mapping address history request according to the target transaction ID and the target mapping address includes:
determining whether a history request of the target transaction ID exists according to whether the history request of which the transaction ID is the target transaction ID exists in the recording unit;
and determining whether a history request of the target mapping address exists according to whether the history request of the mapping address which is the target mapping address exists in the recording unit.
4. A method according to claim 3, wherein after determining whether there is a target transaction ID history request and a target map address history request based on the target transaction ID and the target map address, the method further comprises:
when a target transaction ID historical request and a target mapping address historical request exist, monitoring whether the target transaction ID historical request and the target mapping address historical request are executed completely or not;
when the target transaction ID historical request is executed, determining that the target transaction ID historical request does not exist;
and when the target mapping address history request is executed, determining that the target mapping address history request does not exist.
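Claims 3 and 4 together describe a recording unit that is queried by transaction ID and mapping address and pruned when a request completes. A hypothetical sketch (class and method names are illustrative, not from the patent):

```python
from collections import namedtuple

# Minimal stand-in for a request carrying the two keys the unit tracks.
Req = namedtuple("Req", ["transaction_id", "mapped_addr"])

class RecordUnit:
    """Stores received requests to be scheduled (claim 3) and forgets
    them once their execution completes (claim 4)."""

    def __init__(self):
        self.entries = []

    def record(self, req):
        self.entries.append(req)

    def has_transaction_id(self, tid) -> bool:
        # A target transaction ID history request exists iff some recorded
        # request carries that transaction ID.
        return any(r.transaction_id == tid for r in self.entries)

    def has_mapped_addr(self, addr) -> bool:
        return any(r.mapped_addr == addr for r in self.entries)

    def on_completed(self, req):
        # Once executed, the request no longer counts as a history request.
        self.entries.remove(req)
```

After `on_completed`, both queries report that the corresponding history request no longer exists, which is what allows a blocked request to proceed.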
5. A cache, the cache comprising a transaction scheduling module configured to:
receiving a request to be scheduled, and extracting a target transaction ID and a target mapping address from the request to be scheduled; the target mapping address is an address mapped into the cache;
determining whether a target transaction ID historical request and a target mapping address historical request exist according to the target transaction ID and the target mapping address;
when the target transaction ID historical request and the target mapping address historical request do not exist, executing the request to be scheduled;
when the target mapping address history request does not exist and the target transaction ID history request exists, executing a request to be scheduled if the target transaction ID history request is in a specified state;
the specified state comprises: the target transaction ID history request is in an execution state and has the same hit result and allocation attribute as the request to be scheduled; or, execution of the target transaction ID history request is completed.
6. The cache of claim 5, further comprising a tag management module and a tag storage unit, the tag management module being electrically connected to the tag storage unit; the label management module is connected with the transaction scheduling module through an interface;
the label management module is used for:
receiving a request to be scheduled, and extracting address information in the request to be scheduled;
determining a hit result of the request to be scheduled according to whether the address information exists in the tag storage unit;
when the hit result of the request to be scheduled is a miss, storing the address information into the tag storage unit;
and sending the hit result of the request to be scheduled, together with the request to be scheduled, to the transaction scheduling module through the interface.
7. A request scheduling apparatus, the apparatus comprising:
the receiving module is used for receiving a request to be scheduled and extracting a target transaction ID and a target mapping address from the request to be scheduled; the target mapping address is an address mapped into a cache;
the determining module is used for determining whether a target transaction ID historical request and a target mapping address historical request exist according to the target transaction ID and the target mapping address;
an execution module, configured to execute the request to be scheduled when the target transaction ID history request and the target mapping address history request do not exist;
the execution module is further used for executing a request to be scheduled if the target transaction ID history request is in a specified state when the target mapping address history request does not exist and the target transaction ID history request exists;
the specified state comprises: the target transaction ID history request is in an execution state and has the same hit result and allocation attribute as the request to be scheduled; or, execution of the target transaction ID history request is completed.
8. A computer device, comprising:
a memory and a processor communicatively coupled to each other, the memory having stored therein computer instructions and data, the processor executing the method of scheduling requests of any one of claims 1 to 4 by executing the computer instructions and data.
9. A computer readable storage medium having stored thereon computer instructions and data for causing a computer to perform the request scheduling method of any one of claims 1 to 4.
CN202310547976.XA 2023-05-16 2023-05-16 Request scheduling method, cache, device, computer equipment and storage medium Active CN116303125B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310547976.XA CN116303125B (en) 2023-05-16 2023-05-16 Request scheduling method, cache, device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN116303125A CN116303125A (en) 2023-06-23
CN116303125B true CN116303125B (en) 2023-09-29

Family

ID=86803438

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310547976.XA Active CN116303125B (en) 2023-05-16 2023-05-16 Request scheduling method, cache, device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116303125B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107710169A (en) * 2016-02-19 2018-02-16 华为技术有限公司 The access method and device of a kind of flash memory device
CN111159062A (en) * 2019-12-20 2020-05-15 海光信息技术有限公司 Cache data scheduling method and device, CPU chip and server
WO2022037565A1 (en) * 2020-08-21 2022-02-24 中兴通讯股份有限公司 Access method and system for memory, memory access management module, energy efficiency ratio controller and computer-readable storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10725937B2 (en) * 2018-07-30 2020-07-28 International Business Machines Corporation Synchronized access to shared memory by extending protection for a store target address of a store-conditional request

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Xia Jun; Xu Weixia; Pang Zhengbin; Zhang Jun; Chang Junsheng. A last-write access prediction method for reducing remote cache access latency. Journal of National University of Defense Technology, 2015, (01), full text. *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant