CN109933543A - Data locking method and device for a Cache, and computer device - Google Patents

Data locking method and device for a Cache, and computer device

Info

Publication number
CN109933543A
CN109933543A (application CN201910180214.4A)
Authority
CN
China
Prior art keywords
cache
data
row
request address
memory
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910180214.4A
Other languages
Chinese (zh)
Other versions
CN109933543B (en)
Inventor
Liu Zequan (刘泽权)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhuhai Jieli Technology Co Ltd
Original Assignee
Zhuhai Jieli Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhuhai Jieli Technology Co Ltd
Priority to CN201910180214.4A
Publication of CN109933543A
Application granted
Publication of CN109933543B
Legal status: Active
Anticipated expiration

Landscapes

  • Memory System Of A Hierarchy Structure (AREA)

Abstract

This application relates to a data locking method and device for a Cache, a computer device, and a storage medium. The method includes: receiving a data lock instruction from the CPU, the data lock instruction carrying the request address of the data to be locked; querying the Cache memory, according to the request address of the data to be locked, for the Cache line hit by the request address; and setting the line lock bit in the status information field of the hit Cache line to the locked state. This method makes full use of the storage space of every Cache line in every way, effectively improving the storage efficiency of the Cache memory.

Description

Data locking method and device for a Cache, and computer device
Technical field
This application relates to the technical field of cache memories, and in particular to a data locking method and device for a Cache, a computer device, and a storage medium.
Background technique
A Cache is a cache memory located between the central processing unit (CPU) and main memory, characterized by small capacity and high speed. Because the CPU is designed to run much faster than memory, the CPU must wait for a certain time when it accesses data directly from memory. The Cache stores data that the CPU has just used or uses frequently (which may include, for example, code or other types of data). If the CPU needs to reuse part of this data, it can fetch it directly from the Cache, avoiding the CPU wait incurred by re-reading the data from memory and thereby improving the CPU's operating efficiency.
Because the capacity of a Cache memory is limited, it fills up quickly during use. Once it is full, the Cache controller frequently evicts old data from the Cache memory to make room for new data. These replacement operations usually occur randomly: at any given moment, a piece of data may or may not be in the Cache. When the data is in the Cache, its load time is short; when it is not, the CPU must fetch it from memory again, and the load time is long. As a result, the execution time of a program differs from run to run.
In practical applications, taking code stored in the Cache as an example, it is sometimes necessary to execute code with very strict timing requirements, yet the replacement of code stored in the Cache makes the execution time nondeterministic, which severely affects the operation of such code. To avoid, as far as possible, the nondeterminism introduced by replacing and reloading this particular code, and to avoid the adverse effects of Cache misses, Cache locking techniques can solve the above problem. Cache locking means that this special portion of code is locked into the Cache for a specific period: the locked code always remains in the Cache and is never replaced by other content from main memory. Locked code and data thus give the system faster response; because they are always present in the Cache, their execution time is deterministic.
However, existing Cache data locking methods generally lock the data of one or more Cache ways (Cache Way) by way locking: all Cache lines (Cache Line) in a locked Cache way cannot be replaced until the way is unlocked. This locking scheme lets system code lock and unlock each Cache way individually, and the locking and unlocking steps are simple. In practice, however, a locked Cache way often holds locked data in only a small portion of its space, while the remaining large portion is wasted, resulting in low utilization of the Cache memory's storage space.
Summary of the invention
On this basis, in view of the above technical problems, it is necessary to provide a data locking method and device for a Cache, a computer device, and a storage medium that can improve the storage space utilization of a Cache memory.
A data locking method for a Cache, the method comprising:
receiving a data lock instruction from the CPU, the data lock instruction carrying the request address of the data to be locked; querying the Cache memory, according to the request address of the data to be locked, for the Cache line hit by the request address; and setting the line lock bit in the status information field of the hit Cache line to the locked state.
In one embodiment, the data locking method further includes: if the query finds no Cache line in the Cache memory hit by the request address, sending a data read request corresponding to the data lock instruction to main memory, and receiving the data corresponding to the data read request returned by main memory; writing the data into the Cache memory, and setting the line lock bit in the status information field of the Cache line into which the data was written to the locked state.
In one embodiment, the data locking method further includes: obtaining the data to be written into the Cache memory and the request address corresponding to the data; reading the line lock bit in the status information field of each Cache line corresponding to the request address; excluding the Cache lines whose line lock bit is set to the locked state, and selecting one Cache line from the remaining Cache lines whose line lock bit is set to the unlocked state; and writing the data returned by main memory into the selected Cache line, replacing its contents.
In one embodiment, excluding the Cache lines whose line lock bit is set to the locked state and selecting one Cache line from the remaining Cache lines whose line lock bit is set to the unlocked state includes: excluding the Cache lines whose line lock bit is set to the locked state to obtain the multi-way set-associative structure formed by the remaining Cache lines whose line lock bit is set to the unlocked state; obtaining a replacement policy for the remaining Cache lines according to that multi-way set-associative structure; and selecting one Cache line from the remaining Cache lines according to the replacement policy.
In one embodiment, the data locking method further includes: receiving a data unlock instruction from the CPU, the data unlock instruction carrying the request address of the data to be unlocked; querying the Cache memory, according to the request address of the data to be unlocked, for the Cache line hit by the request address; and setting the line lock bit in the status information field of the hit Cache line to the unlocked state.
In one embodiment, the data locking method further includes: receiving a data read request from the CPU, the data read request carrying the request address of the data to be read; querying the Cache memory, according to the request address of the data to be read, for the Cache line hit by the request address; and, if the query finds that no Cache line is hit by the request address, sending the data read request to main memory, receiving the data corresponding to the data read request returned by main memory, writing the data into the Cache memory, and returning the data to the CPU.
In one embodiment, the data locking method further includes, after querying the Cache memory for the Cache line hit by the request address of the data to be read: if the query finds that any Cache line is hit by the request address, reading the data stored in that Cache line and returning the read data to the CPU.
In one embodiment, querying the Cache memory for the Cache line hit by the request address includes: obtaining, in each Cache way of the Cache memory, the Cache line matching the set index of the request address; comparing the tag of the request address with the Cache tag in each corresponding Cache line; and, if the Cache tag in any Cache line matches the tag of the request address and the valid bit of that Cache line is in the valid state, determining that Cache line to be the Cache line hit by the request address.
In one embodiment, when the line lock bit in the status information field of a Cache line is set to the locked state, the data stored in that Cache line can be neither replaced nor removed.
In one embodiment, when the line lock bit in the status information field of a Cache line is set to the unlocked state, the data stored in that Cache line can be replaced or removed.
A data locking device for a Cache, the device comprising:
a lock instruction receiving module, configured to receive a data lock instruction from the CPU, the data lock instruction carrying the request address of the data to be locked;
a Cache line query module, configured to query the Cache memory, according to the request address of the data to be locked, for the Cache line hit by the request address; and
a first data locking module, configured to set the line lock bit in the status information field of the hit Cache line to the locked state.
A computer device, including a memory and a processor, the memory storing a computer program, wherein the processor, when executing the computer program, performs the following steps:
receiving a data lock instruction from the CPU, the data lock instruction carrying the request address of the data to be locked; querying the Cache memory, according to the request address of the data to be locked, for the Cache line hit by the request address; and setting the line lock bit in the status information field of the hit Cache line to the locked state.
A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, performs the following steps:
receiving a data lock instruction from the CPU, the data lock instruction carrying the request address of the data to be locked; querying the Cache memory, according to the request address of the data to be locked, for the Cache line hit by the request address; and setting the line lock bit in the status information field of the hit Cache line to the locked state.
In the above data locking method, device, computer device, and storage medium, a line lock bit is added to the status information field of each Cache line; setting the line lock bit to the locked state locks the data stored in a single Cache line, thereby locking each Cache line's data independently. When the data in a certain Cache line of a certain way needs to be locked, the other Cache lines of the same way need not be locked at the same time, so the storage space of every Cache line of every Way can be fully used, effectively improving the storage efficiency of the Cache memory.
Detailed description of the invention
Fig. 1 is an application environment diagram of the data locking method in one embodiment;
Fig. 2 is a schematic structural diagram of a Cache in one embodiment;
Fig. 3 is a flow diagram of the data locking method in one embodiment;
Fig. 4 is a schematic structural diagram of a Cache in one embodiment;
Fig. 5 is a flow diagram of the data locking method in another embodiment;
Fig. 6 is a flow diagram of the data writing step in one embodiment;
Fig. 7 is a schematic diagram of a four-way PLRU replacement selection policy in one embodiment;
Fig. 8 is a schematic diagram of the four-way PLRU replacement selection policy optimized by this application in one embodiment;
Fig. 9 is a schematic diagram of the three-way PLRU replacement selection policy obtained after locking one way in one embodiment;
Fig. 10 is a schematic diagram of the two-way PLRU replacement selection policy obtained after locking two ways in one embodiment;
Fig. 11 is a structural block diagram of the data locking device in one embodiment.
Specific embodiment
To make the objects, technical solutions, and advantages of this application clearer, the application is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here only explain the application and are not intended to limit it.
The data locking method provided by this application can be applied in the application environment shown in Fig. 1, in which the Cache controller 121 is connected to the central processing unit (CPU) 110, the Cache memory 120, and the main memory 130. The Cache controller 121, CPU 110, Cache memory 120, and main memory 130 may be included in a terminal, which may be, but is not limited to, a personal computer, a laptop, a smartphone, a tablet, or the like.
In the embodiments of this application, the data access structure between the Cache memory 120 and the main memory 130 is an N-way set-associative (N Way Set-Associative) structure: the main memory 130 contains multiple data blocks, and any one of these data blocks may appear in any of the N corresponding Cache lines of the N ways in the Cache memory 120. N may be 2, 4, 8, 12, or another value.
Taking a 4-way set-associative structure as an example, as shown in Fig. 2, the Cache is divided into s sets, each containing 4 Cache lines, also called four ways (4 Ways). Each data block in main memory can reside only in a particular one of the s sets, but in any Cache line within that set; for example, the data block with Index=0 in main memory can be stored in the corresponding Cache line of any of the four ways Way0-Way3 of Set0. Conversely, viewing any single way of the four-way set-associative structure in isolation, each Cache line of that way may hold any of the main-memory data blocks that map to its position in that way.
In one embodiment, as shown in Fig. 3, a data locking method for a Cache is provided. The data locking method of the embodiments of this application may include stages such as data locking, data reading, data writing, and data unlocking. Taking its application to the terminal in Fig. 1 as an example, the data locking stage may include the following steps:
S310: receive a data lock instruction from the CPU, the data lock instruction carrying the request address of the data to be locked.
In this step, the Cache controller receives the data lock instruction from the CPU, the instruction carrying the request address of the data to be locked.
S320: according to the request address of the data to be locked, query whether a Cache line hit by the request address exists in the Cache memory; if such a Cache line exists, execute S330.
In this step, the Cache controller queries the Cache memory, according to the request address of the data to be locked, for a Cache line hit by the request address, where a hit (Hit) means the data corresponding to the request address is found, that is, the requested data is stored in the Cache memory.
S330: set the line lock bit in the status information field of the hit Cache line to the locked state.
In this step, when S320 finds that a Cache line hit by the request address exists in the Cache memory, the line lock bit in the status information field of that Cache line is set to the locked state. When the line lock bit in the status information field of a Cache line is set to the locked state, the data stored in that Cache line can be neither replaced nor removed, which guarantees that the locked data stays in the Cache memory and can be read whenever needed.
In the embodiments of this application, locking and unlocking of the data in a Cache line is realized by adding a line lock bit to the status information field of the Cache line. Fig. 4 shows a common Dcache structure diagram; the directory storage field and status information field shown in the figure may be collectively called the tag (Tag). The Tag is used to identify whether the current Cache line is valid and whether it has been modified, and to compare whether the address requested by the CPU hits the Cache line.
Setting the line lock bit to the locked state can be realized by assigning a value to the line lock bit. As an example, one line lock bit can be added to the Tag to indicate whether the current Cache line is locked, as shown in Table 1 below: an L bit of 1 means the current line is locked; an L bit of 0 means the current line is unlocked and allowed to be replaced.
Table 1 Tag information
Directory storage field   Line lock bit   Valid bit   Dirty bit
Cache tag                 L               V           D
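As an illustrative sketch (not the patent's implementation), the Tag fields of Table 1 — the Cache tag, line lock bit L, valid bit V, and dirty bit D — together with the lock and unlock operations on a single Cache line can be modeled as follows; all class and method names here are assumptions for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class CacheLine:
    """One Cache line: the Tag fields of Table 1 plus the data field."""
    tag: int = 0        # Cache tag (directory storage field)
    L: int = 0          # line lock bit: 1 = locked, 0 = unlocked
    V: int = 0          # valid bit: 1 = line holds valid data
    D: int = 0          # dirty (modified) bit
    data: list = field(default_factory=list)

    def lock(self):
        """Set the line lock bit to the locked state (step S330)."""
        self.L = 1

    def unlock(self):
        """Set the line lock bit to the unlocked state."""
        self.L = 0

    def replaceable(self):
        """A line may be replaced only when its lock bit is 0."""
        return self.L == 0
```

Because the lock bit lives in each line's own Tag, locking one line leaves every other line of the same way replaceable, which is the core difference from way locking.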
In the above data locking method, a line lock bit is added to the status information field of each Cache line; setting the line lock bit to the locked state locks the data stored in a single Cache line, realizing independent locking of each Cache line's data. When the data in a certain Cache line of a certain way needs to be locked, the other Cache lines of the same way need not be locked at the same time; the storage space of every Cache line of every Way can be fully used, effectively improving the storage efficiency of the Cache memory.
With the above data locking method, there is no need to clear the data already stored in the Cache before loading the data to be locked: once the data is found in the Cache memory, writing the line lock bit completes the locking, so the locking procedure is simple and efficient. Unlocking uses roughly the same steps, setting the line lock bit in the Tag to the unlocked state. Data can therefore be locked and unlocked as needed at any time while the program is running, solving the poor loading efficiency and poor flexibility of traditional Cache data locking methods.
If no hit is found in the Cache memory, the required data must be requested from main memory. In one embodiment, as shown in Fig. 5, after S320 queries the Cache memory, according to the request address of the data to be locked, for whether a Cache line hit by the request address exists, the method further includes:
If the query finds that no Cache line hit by the request address exists in the Cache memory, execute S340-S350. Here, finding no Cache line hit by the request address, i.e. a miss (Miss), means the requested data is not stored in the Cache memory.
S340: send a data read request corresponding to the data lock instruction to main memory, and receive the data corresponding to the data read request returned by main memory.
S350: write the data returned by main memory into the Cache memory, and set the line lock bit in the status information field of the Cache line into which the data was written to the locked state.
S350 can be realized in at least two ways. In one embodiment, S350 includes: writing the data returned by main memory into the Cache memory; receiving a data lock instruction from the CPU, the data lock instruction carrying the request address of the data returned by main memory; querying the Cache memory, according to that request address, for the Cache line hit by the request address; and setting the line lock bit in the status information field of the hit Cache line to the locked state. In this embodiment, no separate locking path is provided for the data returned by main memory: the line lock bit is not set when the data is written into the Cache line; instead, after the data is returned to the CPU, the CPU issues a data lock instruction and the returned data is locked in the same way as ordinary data, which is relatively simple to implement.
In another embodiment, S350 includes: writing the data returned by main memory into the Cache memory, and setting the line lock bit in the status information field of the Cache line to the locked state at the same time as the valid bit of that Cache line is set valid and the corresponding Cache tag is written. In this embodiment, the line lock bit is marked while memory returns the data, so the returned data can be locked with higher efficiency.
In one embodiment, the data reading stage of the data locking method of this application may include the following steps: receive a data read request from the CPU, the data read request carrying the request address of the data to be read; query the Cache memory, according to the request address of the data to be read, for the Cache line hit by the request address, and judge whether the request address hits a Cache line in the Cache memory; if no Cache line is hit by the request address, send the data read request to main memory, receive the data corresponding to the data read request returned by main memory, write the data into the Cache memory, and return the data to the CPU.
In one embodiment, after judging whether the request address hits a Cache line in the Cache memory, the method further includes: if the query finds that any Cache line is hit by the request address, reading the data stored in that Cache line and returning the read data to the CPU.
The above two embodiments provide, under the data locking method of this application, corresponding methods for reading the data stored in the Cache memory. When the request address of the data to be read misses, the data received in S440 can be stored into the Cache memory to speed up the next read. If, at that moment, the four Cache lines at the corresponding position of the four ways of the Cache memory all already hold data, a replacement operation must be performed on one of the Cache lines.
In one embodiment, in each of the embodiments of this application, querying the Cache memory for the Cache line hit by the request address includes: obtaining, in each Cache way of the Cache memory, the Cache line matching the set index of the request address; comparing the tag of the request address with the Cache tag in each corresponding Cache line; and, if the Cache tag in any Cache line matches the tag of the request address and the valid bit of that Cache line is in the valid state, determining that Cache line to be the Cache line hit by the request address.
For example, in a four-way set-associative Cache system, every time the CPU issues a request address, the Tag information of the 4 Cache lines corresponding to the request address in the 4 Ways is read. When the Cache tag of any one of the 4 Tags matches the tag of the request address and the valid bit V of that Tag is 1, the currently requested address hits the Cache (Hit), and the requested data is returned directly from the Cache.
Specifically, as shown in Fig. 4, the Cache controller can compare the request address with the Tag information in the Cache lines through the following steps. Use the set index in the request address (whose value identifies the line within each way; for example, a set index of 4 selects the 4th Cache line of each way) to look up one Cache line per way, and obtain the directory storage field, status information field, and data field of the 4 Cache lines. For each Cache line: if V==0, the information in this Cache line is invalid; if V==1, compare the Cache tag with the tag of the request address, and if they do not match, the current Cache line does not cache the content of the currently requested address. If V==1 and the Cache tag equals the tag of the request address, the match succeeds (a hit); next, use the data index in the request address to obtain the needed content from the data field of the Cache memory: if the data index is N, fetch the Nth word.
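The lookup steps above — select one candidate line per way by the set index, compare tags, check the valid bit, then fetch the addressed word — can be sketched as follows. The set count, line size, and all function names are illustrative assumptions, not the patent's hardware:

```python
def make_cache(ways=4, num_sets=8, words_per_line=4):
    """An empty 4-way set-associative cache: all lines invalid (V=0)."""
    return [[{'tag': 0, 'L': 0, 'V': 0, 'data': [0] * words_per_line}
             for _ in range(num_sets)] for _ in range(ways)]

def split_address(addr, num_sets=8, words_per_line=4):
    """Split a word address into (tag, set index, word index)."""
    word = addr % words_per_line
    index = (addr // words_per_line) % num_sets
    tag = addr // (words_per_line * num_sets)
    return tag, index, word

def lookup(cache, addr, num_sets=8, words_per_line=4):
    """Return (hit line, word value) on a hit, else (None, None)."""
    tag, index, word = split_address(addr, num_sets, words_per_line)
    for way in cache:
        line = way[index]                           # one candidate per way
        if line['V'] == 1 and line['tag'] == tag:   # valid bit + tag match
            return line, line['data'][word]         # hit: fetch the Nth word
    return None, None                               # miss in all 4 ways
```

Note that the lock bit L plays no role in the hit check itself; it only gates replacement, so locked data remains readable through the same path.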
When the request address misses all 4 Cache lines (Miss), the Cache controller must request the required data from main memory. After the required data is obtained from main memory, if the 4 Ways already hold valid cached data at that position, one of the 4 ways must be selected for replacement to vacate cache space for the current request.
In one embodiment, as shown in Fig. 6, the data writing stage of the data locking method of this application may include the following steps:
S610 obtains data and the corresponding request address of data in Cache memory to be written;
S620 reads the row locking bit in the status information section in each Cache row corresponding with request address;
S630 excludes the Cache row that row locking bit is set as lock state, is set as unlocked in remaining row locking bit In the Cache row of state, a Cache row is selected;
The data replacement that main memory returns is written in the Cache row of selection S640.
In one embodiment, excluding the Cache lines whose line lock bit is set to the locked state and selecting one Cache line from the remaining Cache lines whose line lock bit is set to the unlocked state includes: excluding the Cache lines whose line lock bit is set to the locked state to obtain the multi-way set-associative structure formed by the remaining Cache lines whose line lock bit is set to the unlocked state; obtaining a replacement policy for the remaining Cache lines according to that multi-way set-associative structure; and selecting one Cache line from the remaining Cache lines according to the replacement policy.
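The selection in S620-S630 — read the lock bits, exclude locked lines, then apply a replacement policy to the rest — can be sketched as below. For brevity this sketch uses a simple LRU-style preference order as the stand-in policy; the real policy may be any of those listed further on (random, FIFO, LFU, LRU, PLRU), and every name here is an assumption:

```python
def select_victim(set_lines, preference_order):
    """Pick the way of the set whose Cache line will be replaced.

    set_lines: the 4 lines of one set, as dicts with key 'L'
               (line lock bit: 1 = locked).
    preference_order: way numbers ordered by the replacement policy,
               most-preferred victim first (e.g. LRU order).
    Returns the way number to replace, or None if all lines are
    locked (in that case no line is allocated).
    """
    unlocked = [w for w in range(len(set_lines)) if set_lines[w]['L'] == 0]
    if not unlocked:
        return None                     # all lines locked: do not allocate
    for w in preference_order:          # apply the policy to unlocked lines only
        if w in unlocked:
            return w
    return unlocked[0]
```

The key property is that a locked line never appears in `unlocked`, so it is never chosen regardless of what the underlying policy prefers.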
In one embodiment, when locking a Cache line, it is only necessary to force the replacement priority of the Cache line to be locked to the lowest; this guarantees that the locked Cache line is never replaced. This approach suits any replacement policy.
In one embodiment, the replacement policy may include: random replacement (Random Replacement), first-in first-out (First In First Out, FIFO), least frequently used (Least Frequently Used, LFU), least recently used (Least Recently Used, LRU), or pseudo least recently used (Pseudo Least Recently Used, PLRU), among others.
The data writing method of the above embodiments excludes the locked lines and generates the corresponding replacement policy from the remaining unlocked lines, so that a suitable Cache line is selected and its contents replaced with the new data. Replacement can be selected per individual Cache line, giving higher data replacement and write efficiency and improving storage space utilization.
As an example, under the random replacement policy: since the technical solution of this application uses a line-locking method, for random replacement in a four-way set-associative Cache, on a Cache Miss the locking information of the 4 addressed Cache Lines can be obtained through their Tag information. When the lock bit L of a Cache Line is 1, that Cache Line is locked and is not allowed to be replaced. When all 4 Cache Lines are locked, the Cache only reads back the addressed content and does not allocate a cache line. Accordingly, in the data writing stage of this embodiment under the random replacement policy, the line lock bits L0, L1, L2, and L3 of the 4 Cache Lines corresponding to the request address in the 4 Ways are read; the replacement selection under their different values is shown in Table 2 below.
Table 2 Replacement selection under the random replacement policy
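The rule underlying Table 2 — randomly pick among the ways whose lock bit Li is 0, and allocate nothing when L0-L3 are all 1 — can be sketched as follows (function name and interface are assumptions):

```python
import random

def random_victim(lock_bits, rng=random):
    """Random replacement masked by the line lock bits L0-L3.

    lock_bits: [L0, L1, L2, L3], where 1 = locked.
    Returns a way number whose lock bit is 0, chosen at random,
    or None when every way is locked (read back only, no allocation).
    """
    unlocked = [w for w, l in enumerate(lock_bits) if l == 0]
    if not unlocked:
        return None
    return rng.choice(unlocked)
```

When only one way is unlocked the "random" choice degenerates to that single way, matching the direct-mapped behavior described for heavily locked sets.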
As an example, under the pseudo least recently used policy, Fig. 7 shows a four-way set-associative PLRU replacement policy. In the figure, B0-B2 are the replacement selection bits, and way0-way3 are the Ways whose Cache Line is chosen for replacement. The relationship between the two is shown in Table 3; B0-B2 combine into 4 replacement choices. When B0==0 and B1==0, the Cache Line of way0 is selected for replacement; when B0==0 and B1==1, the Cache Line of way2; when B0==1 and B2==0, the Cache Line of way1; and when B0==1 and B2==1, the Cache Line of way3.
Table 3 PLRU replacement selection
B0   B1   B2   Replaced way
0    0    x    way0
0    1    x    way2
1    x    0    way1
1    x    1    way3
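The B0-B2 decode of Table 3 can be sketched as follows; the bit-to-way mapping follows the prose above and should be treated as an assumption about Fig. 7:

```python
def plru_victim(b0, b1, b2):
    """Decode the PLRU bits B0-B2 into the way to replace (Table 3)."""
    if b0 == 0:
        return 0 if b1 == 0 else 2   # B0==0: B1 chooses way0 / way2
    return 1 if b2 == 0 else 3       # B0==1: B2 chooses way1 / way3
```

Only two of the three bits are consulted for any given selection: B1 is irrelevant when B0==1, and B2 is irrelevant when B0==0, which is the usual binary-tree structure of PLRU.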
After this application's optimization of the above scheme, on a Cache Miss the locking information of the 4 addressed Cache Lines can be obtained through the Tag information and combined with the above PLRU replacement policy of the Cache Lines to obtain the PLRU replacement policy shown in Fig. 8. A lock-bit switch is added on each Way's selection path to control whether that way can be chosen for replacement; if it is locked, the Cache Line of the corresponding way is not allowed to be replaced.
For example, if, after obtaining the locking information of the 4 Cache Lines through the Tag information, the Cache Line of way3 is found to be locked, the PLRU replacement policy closes the selection path of way3 during replacement, and the PLRU replacement policy of the current Cache Lines degenerates into a 3-Way PLRU replacement policy, for example as shown in Fig. 9, but it is not limited to the PLRU replacement policy shown in Fig. 9.
For another example, if, after obtaining the locking information of the 4 Cache Lines through the Tag information, the Cache Lines of way2 and way3 are found to be locked, the PLRU replacement policy closes the selection paths of way2 and way3 during replacement, and the PLRU replacement policy of the current Cache Lines degenerates into a 2-Way PLRU replacement policy, for example as shown in Fig. 10, but it is not limited to the PLRU replacement policy shown in Fig. 10.
In conclusion 4 Cache Line in the Cache that four tunnel groups are connected, on the same position of 4 Way In, when 2 Cache Line of same position are locked, remaining other 2 Cache Line become on this position The structure that one 2Way group is connected;Similarly in 4 Cache Line on the same position of 4 Way, when same position When 1 Cache Line is locked, remaining other 3 Cache Line become what a 3Way group was connected on this position Structure;And the Cache Line in other positions is still the structure that 4Way group is connected.It is may be implemented in this way to Cache Line Quick selection replacement, and no matter lock how many place's codes, no matter how discrete lock code is, can be by the storage of each Way Space fully uses, and effectively improves Cache storage space utilization.
In one embodiment, the data unlocking stage of the Cache data locking method of the present application may include the following steps: receiving a data unlock instruction from the CPU, the data unlock instruction carrying the request address of the data to be unlocked; querying the Cache memory for the Cache row hit by the request address according to the request address of the data to be unlocked; and setting the row lock bit in the status information field of the hit Cache row to the unlocked state. When the row lock bit in the status information field of a Cache row is set to the unlocked state, the data stored in that Cache row can be replaced or removed.
In this embodiment, when data locked in the Cache is to be unlocked, the Cache row hit by the request address of the data to be unlocked is looked up and the row lock bit in its status information field is set to the unlocked state, enabling quick release of the locked data in the Cache.
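A minimal sketch of this unlocking step, using a hypothetical per-line dict in place of the patent's hardware fields:

```python
def unlock_line(cache_set, req_tag):
    """Clear the row lock bit of the line whose tag matches the request
    address; cache_set holds one dict per way. Returns True on a hit."""
    for line in cache_set:
        if line['valid'] and line['tag'] == req_tag:
            line['lock'] = False  # the line may now be replaced or removed
            return True
    return False
```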
It should be understood that although the steps in the flowcharts of Fig. 3, Fig. 5 and Fig. 6 are shown in the order indicated by the arrows, they are not necessarily executed in that order. Unless explicitly stated herein, there is no strict ordering constraint on these steps, and they may be executed in other orders. Moreover, at least some of the steps in Figs. 3, 5 and 6 may include multiple sub-steps or stages, which need not be completed at the same moment but may be executed at different times; their execution order also need not be sequential, and they may be performed in turn or alternately with other steps, or with at least part of the sub-steps or stages of other steps.
In one embodiment, as shown in Fig. 11, a Cache data locking device 1100 is provided, comprising:
a lock instruction receiving module 1110 for receiving a data lock instruction from the CPU, the data lock instruction carrying the request address of the data to be locked;
a Cache row query module 1120 for querying the Cache memory for the Cache row hit by the request address according to the request address of the data to be locked;
a first data locking module 1130 for setting the row lock bit in the status information field of the queried Cache row hit by the request address to the locked state.
In one embodiment, the Cache data locking device further includes: a data request module for, if no Cache row hit by the request address is found in the Cache memory, sending a data read request corresponding to the data lock instruction to the main memory and receiving the data corresponding to the data read request returned by the main memory; and a second data locking module for writing the data into the Cache memory and setting the row lock bit in the status information field of the Cache row into which the data is written to the locked state.
In one embodiment, the Cache data locking device further includes a data replacement module for obtaining data to be written into the Cache memory and the request address corresponding to the data; reading the row lock bit in the status information field of each Cache row corresponding to the request address; excluding the Cache rows whose row lock bit is set to the locked state and selecting one Cache row from the remaining Cache rows whose row lock bit is set to the unlocked state; and writing the data returned by the main memory into the selected Cache row as a replacement.
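The data replacement module's selection step might look like the following sketch (data structures and names are hypothetical): locked lines are filtered out before any replacement policy is applied, so locked data can never be the victim.

```python
def replace_line(cache_set, new_tag, new_data):
    """Write data returned by main memory into one unlocked line of the
    set; returns the victim line, or None if every line is locked."""
    candidates = [line for line in cache_set if not line['lock']]
    if not candidates:
        return None  # all row lock bits set: nothing may be evicted
    victim = candidates[0]  # stand-in for the real policy (e.g. PLRU)
    victim.update(valid=True, tag=new_tag, data=new_data, lock=False)
    return victim
```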
In one embodiment, the data replacement module is further configured to exclude the Cache rows whose row lock bit is set to the locked state and obtain the multi-way set-associative structure formed by the remaining Cache rows whose row lock bit is set to the unlocked state; obtain a replacement policy for the remaining Cache rows according to the multi-way set-associative structure; and select one Cache row from the remaining Cache rows according to the replacement policy.
In one embodiment, the Cache data locking device further includes a data unlocking module for receiving a data unlock instruction from the CPU, the data unlock instruction carrying the request address of the data to be unlocked; querying the Cache memory for the Cache row hit by the request address according to the request address of the data to be unlocked; and setting the row lock bit in the status information field of the Cache row hit by the request address to the unlocked state.
In one embodiment, the Cache data locking device further includes a data reading module for receiving a data read request from the CPU, the data read request carrying the request address of the data to be read; querying the Cache memory for the Cache row hit by the request address according to the request address of the data to be read; and, if no Cache row is hit by the request address, sending the data read request to the main memory, receiving the data corresponding to the data read request returned by the main memory, writing the data into the Cache memory, and returning the data to the CPU.
In one embodiment, the data reading module is also configured to, if any Cache row is hit by the request address, read out the data stored in that Cache row and return the read data to the CPU.
In one embodiment, the Cache row query module is configured to obtain, in each Cache way of the Cache memory, the Cache row matching the group index of the request address; compare the tag of the request address with the Cache tag in each corresponding Cache row; and, if the Cache tag in any Cache row matches the tag in the request address and the valid bit in that Cache row is in a valid state, determine that the Cache row is the Cache row hit by the request address.
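The query module's hit check (group index selects one line per way, then the stored tag is compared with the request tag under the valid bit) can be sketched like this; the class and field names are illustrative, not the patent's:

```python
class CacheLine:
    def __init__(self):
        self.valid = False   # valid bit
        self.tag = None      # Cache tag
        self.lock = False    # row lock bit in the status information field
        self.data = None

def lookup(ways, group_index, req_tag):
    """ways: one list of CacheLine per Cache way. Returns the hitting
    line (valid and tag-matching) or None on a miss."""
    for way in ways:
        line = way[group_index]
        if line.valid and line.tag == req_tag:
            return line
    return None
```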
In one embodiment, when the row lock bit in the status information field of a Cache row is set to the locked state, the data stored in that Cache row can be neither replaced nor removed.
In one embodiment, when the row lock bit in the status information field of a Cache row is set to the unlocked state, the data stored in that Cache row can be replaced or removed.
For specific limitations of the Cache data locking device, reference may be made to the limitations of the Cache data locking method above, which are not repeated here. Each module in the above Cache data locking device may be implemented wholly or partly by software, hardware, or a combination thereof. Each module may be embedded in hardware form in, or independent of, a processor in the computer device, or may be stored in software form in the memory of the computer device so that the processor can invoke the operations corresponding to each module.
In one embodiment, a computer device is provided, including a memory and a processor, the memory storing a computer program. When executing the computer program, the processor performs the following steps:
receiving a data lock instruction from the CPU, the data lock instruction carrying the request address of data to be locked; querying the Cache memory for the Cache row hit by the request address according to the request address of the data to be locked; and setting the row lock bit in the status information field of the queried Cache row hit by the request address to the locked state.
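The three steps the processor performs (receive the lock instruction, query the hit row, set its row lock bit) can be sketched as below, again with a hypothetical dict per Cache line rather than the patent's hardware fields:

```python
def lock_data(cache_set, req_tag):
    """Handle a data lock instruction for one set: on a hit, set the row
    lock bit in the status information field. Returns True on a hit; on
    a miss the data would first be fetched from main memory, written
    into the Cache, and then locked."""
    for line in cache_set:
        if line['valid'] and line['tag'] == req_tag:
            line['lock'] = True  # locked data cannot be replaced or removed
            return True
    return False
```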
In one embodiment, when the processor executes the computer program, the steps of the Cache data locking method of any one of the above embodiments are also implemented.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored. When the computer program is executed by a processor, the following steps are performed:
receiving a data lock instruction from the CPU, the data lock instruction carrying the request address of data to be locked; querying the Cache memory for the Cache row hit by the request address according to the request address of the data to be locked; and setting the row lock bit in the status information field of the queried Cache row hit by the request address to the locked state.
In one embodiment, when the computer program is executed by the processor, the steps of the Cache data locking method of any one of the above embodiments are also implemented.
Those of ordinary skill in the art will appreciate that all or part of the processes in the above-described methods can be implemented by instructing relevant hardware through a computer program, which may be stored in a non-volatile computer-readable storage medium and which, when executed, may include the processes of the embodiments of the above methods. Any reference to memory, storage, a database or other media used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM) or flash memory. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM) and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as a combination of these technical features contains no contradiction, it should be considered within the scope of this specification.
The above embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of the patent. It should be pointed out that, for those of ordinary skill in the art, various modifications and improvements can be made without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of the present patent application shall be subject to the appended claims.

Claims (11)

1. A data locking method for a Cache, the method comprising:
receiving a data lock instruction from a CPU, the data lock instruction carrying a request address of data to be locked;
querying a Cache memory for the Cache row hit by the request address according to the request address of the data to be locked;
setting a row lock bit in a status information field of the queried Cache row hit by the request address to a locked state.
2. The method according to claim 1, further comprising:
if no Cache row hit by the request address is found in the Cache memory, sending a data read request corresponding to the data lock instruction to a main memory, and receiving data corresponding to the data read request returned by the main memory;
writing the data into the Cache memory, and setting the row lock bit in the status information field of the Cache row into which the data is written to the locked state.
3. The method according to claim 1, further comprising:
obtaining data to be written into the Cache memory and a request address corresponding to the data;
reading the row lock bit in the status information field of each Cache row corresponding to the request address;
excluding the Cache rows whose row lock bit is set to the locked state, and selecting one Cache row from the remaining Cache rows whose row lock bit is set to an unlocked state;
writing the data returned by the main memory into the selected Cache row as a replacement.
4. The method according to claim 3, wherein the excluding the Cache rows whose row lock bit is set to the locked state and selecting one Cache row from the remaining Cache rows whose row lock bit is set to the unlocked state comprises:
excluding the Cache rows whose row lock bit is set to the locked state, and obtaining a multi-way set-associative structure formed by the remaining Cache rows whose row lock bit is set to the unlocked state; obtaining a replacement policy for the remaining Cache rows according to the multi-way set-associative structure;
selecting one Cache row from the remaining Cache rows according to the replacement policy.
5. The method according to claim 1, further comprising:
receiving a data unlock instruction from the CPU, the data unlock instruction carrying a request address of data to be unlocked;
querying the Cache memory for the Cache row hit by the request address according to the request address of the data to be unlocked;
setting the row lock bit in the status information field of the Cache row hit by the request address to an unlocked state.
6. The method according to claim 1, further comprising:
receiving a data read request from the CPU, the data read request carrying a request address of data to be read;
querying the Cache memory for the Cache row hit by the request address according to the request address of the data to be read;
if any Cache row is hit by the request address, reading out the data stored in that Cache row and returning the read data to the CPU;
if no Cache row is hit by the request address, sending the data read request to the main memory, receiving the data corresponding to the data read request returned by the main memory, writing the data into the Cache memory, and returning the data to the CPU.
7. The method according to claim 1, wherein the querying the Cache memory for the Cache row hit by the request address comprises:
obtaining, in each Cache way of the Cache memory, the Cache row matching the group index of the request address;
comparing the tag of the request address with the Cache tag in each corresponding Cache row;
if the Cache tag in any Cache row matches the tag in the request address and the valid bit in that Cache row is in a valid state, determining that the Cache row is the Cache row hit by the request address.
8. The method according to claim 1, wherein when the row lock bit in the status information field of a Cache row is set to the locked state, the data stored in that Cache row can be neither replaced nor removed; and when the row lock bit in the status information field of a Cache row is set to the unlocked state, the data stored in that Cache row can be replaced or removed.
9. A data locking device for a Cache, the device comprising:
a lock instruction receiving module for receiving a data lock instruction from a CPU, the data lock instruction carrying a request address of data to be locked;
a Cache row query module for querying a Cache memory for the Cache row hit by the request address according to the request address of the data to be locked;
a first data locking module for setting the row lock bit in the status information field of the queried Cache row hit by the request address to a locked state.
10. A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor implements the steps of the method according to any one of claims 1 to 8 when executing the computer program.
11. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 8.
CN201910180214.4A 2019-03-11 2019-03-11 Data locking method and device of Cache and computer equipment Active CN109933543B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910180214.4A CN109933543B (en) 2019-03-11 2019-03-11 Data locking method and device of Cache and computer equipment


Publications (2)

Publication Number Publication Date
CN109933543A true CN109933543A (en) 2019-06-25
CN109933543B CN109933543B (en) 2022-03-18

Family

ID=66986640

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910180214.4A Active CN109933543B (en) 2019-03-11 2019-03-11 Data locking method and device of Cache and computer equipment

Country Status (1)

Country Link
CN (1) CN109933543B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110941449A (en) * 2019-11-15 2020-03-31 新华三半导体技术有限公司 Cache block processing method and device and processor chip
CN112380013A (en) * 2020-11-16 2021-02-19 海光信息技术股份有限公司 Cache preloading method and device, processor chip and server
CN114860785A (en) * 2022-07-08 2022-08-05 深圳云豹智能有限公司 Cache data processing system, method, computer device and storage medium
CN116737609A (en) * 2022-03-04 2023-09-12 格兰菲智能科技有限公司 Method and device for selecting replacement cache line

Citations (7)

Publication number Priority date Publication date Assignee Title
US6842829B1 (en) * 2001-12-06 2005-01-11 Lsi Logic Corporation Method and apparatus to manage independent memory systems as a shared volume
CN101297270A (en) * 2005-08-23 2008-10-29 先进微装置公司 Method for proactive synchronization within a computer system
CN101398786A (en) * 2008-09-28 2009-04-01 东南大学 Method for implementing controllable cache facing embedded application software
CN101772759A (en) * 2007-08-02 2010-07-07 飞思卡尔半导体公司 Cache locking device and method thereof
CN102325010A (en) * 2011-09-13 2012-01-18 浪潮(北京)电子信息产业有限公司 Processing device and method for avoiding sticky data packets
CN104067242A (en) * 2012-01-23 2014-09-24 国际商业机器公司 Combined cache inject and lock operation
CN106201915A (en) * 2014-09-17 2016-12-07 三星电子株式会社 Cache memory system and operational approach thereof


Non-Patent Citations (1)

Title
唐朔飞 (Tang Shuofei): 《计算机组成原理》 (Principles of Computer Organization), 31 July 2000 *

Cited By (6)

Publication number Priority date Publication date Assignee Title
CN110941449A (en) * 2019-11-15 2020-03-31 新华三半导体技术有限公司 Cache block processing method and device and processor chip
CN112380013A (en) * 2020-11-16 2021-02-19 海光信息技术股份有限公司 Cache preloading method and device, processor chip and server
CN112380013B (en) * 2020-11-16 2022-07-29 海光信息技术股份有限公司 Cache preloading method and device, processor chip and server
CN116737609A (en) * 2022-03-04 2023-09-12 格兰菲智能科技有限公司 Method and device for selecting replacement cache line
CN114860785A (en) * 2022-07-08 2022-08-05 深圳云豹智能有限公司 Cache data processing system, method, computer device and storage medium
CN114860785B (en) * 2022-07-08 2022-09-06 深圳云豹智能有限公司 Cache data processing system, method, computer device and storage medium

Also Published As

Publication number Publication date
CN109933543B (en) 2022-03-18

Similar Documents

Publication Publication Date Title
CN109933543A (en) Data locking method, device and the computer equipment of Cache
US10241919B2 (en) Data caching method and computer system
US6901483B2 (en) Prioritizing and locking removed and subsequently reloaded cache lines
US20100332726A1 (en) Structure and method for managing writing operation on mlc flash memory
US9436615B2 (en) Optimistic data read
CN110347613B (en) Method for realizing RAID in multi-tenant solid-state disk, controller and multi-tenant solid-state disk
US20110302359A1 (en) Method for managing flash memories having mixed memory types
CN109582214A (en) Data access method and computer system
CN109240945A (en) A kind of data processing method and processor
CN107818052A (en) Memory pool access method and device
TW201935223A (en) Memory system and method for controlling nonvolatile memory
CN111400306B (en) RDMA (remote direct memory Access) -and non-volatile memory-based radix tree access system
CN115328820B (en) Access method of multi-level cache system, data storage method and device
CN107463509A (en) Buffer memory management method, cache controller and computer system
EP3198447A1 (en) Smart flash cache logger
CN112749198A (en) Multi-level data caching method and device based on version number
US20040158570A1 (en) Methods for intra-partition parallelism for inserts
CN116701246B (en) Method, device, equipment and storage medium for improving cache bandwidth
CN116540950B (en) Memory device and control method for writing data thereof
CN106155919B (en) A kind of control method and control system of 3D flash memory
US9699263B1 (en) Automatic read and write acceleration of data accessed by virtual machines
CN110727610B (en) Cache memory, storage system, and method for evicting cache memory
CN115168248B (en) Cache memory supporting SIMT architecture and corresponding processor
US8381023B2 (en) Memory system and computer system
CN106598730B (en) Design method of online recoverable object distributor based on nonvolatile memory

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 519000 No. 333, Kexing Road, Xiangzhou District, Zhuhai City, Guangdong Province

Applicant after: ZHUHAI JIELI TECHNOLOGY Co.,Ltd.

Address before: Floor 1-107, building 904, ShiJiHua Road, Zhuhai City, Guangdong Province

Applicant before: ZHUHAI JIELI TECHNOLOGY Co.,Ltd.

GR01 Patent grant