CN105677580A - Method and device for accessing cache - Google Patents



Publication number
CN105677580A
Authority
CN
China
Prior art keywords
data
address space
target
cache
exclusive
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201511024173.8A
Other languages
Chinese (zh)
Other versions
CN105677580B (en)
Inventor
杨彬 (Yang Bin)
任超 (Ren Chao)
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Hangzhou Huawei Digital Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou Huawei Digital Technologies Co Ltd
Priority to CN201511024173.8A
Publication of CN105677580A
Application granted
Publication of CN105677580B
Legal status: Active
Anticipated expiration


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0806Multiuser, multiprocessor or multiprocessing cache systems

Abstract

The invention relates to a method for accessing a cache. The method comprises: receiving a data access request from an application program, where the application program runs on a target core of a CPU, the CPU comprises a plurality of cores, the target core is one of the plurality of cores, and the data access request carries the memory address of target data to be accessed; querying, according to the data access request, whether the target data is present in a cache shared by the plurality of cores, where the cache comprises an exclusive address space corresponding to the target core, the exclusive address space is used to store data that the target core reads from memory, and data read by cores other than the target core cannot replace data that the target core has stored in the exclusive address space; when the target data is stored in the cache, reading the target data from the cache; and when the target data is not stored in the cache, reading the target data from memory according to the memory address and storing the target data in the exclusive address space. The method reduces usage contention when a multi-core CPU accesses the cache.

Description

Method and apparatus for accessing a cache
Technical field
The present invention relates to the field of computing, and in particular to a method and apparatus for accessing a cache in a computer system.
Background technology
In modern computer systems, the central processing unit (CPU) is connected to memory by a bus, so CPU accesses to memory are limited by bus speed and form a performance bottleneck. A CPU therefore uses a cache to fetch data more efficiently. The cache holds data that the CPU uses frequently. When the CPU needs to access data in memory, it first queries the cache to determine whether the data is present and still valid; if so, it reads the data from the cache. Otherwise, the CPU reads the data from memory and writes it into the cache. Because program execution exhibits locality of reference — the same data is likely to be read and written repeatedly within a short period — a cache greatly speeds up the processor's access to data.
Current multi-core computer systems have level-1, level-2 and level-3 caches. The level-3 cache is shared by all cores on a physical CPU, so usage contention can arise when multiple cores access the level-3 cache simultaneously. For example, in a 64-core system, each core competes with the other 63 cores when accessing data stored in the level-3 cache. When an application program runs on a particular core, the CPU places the data it accesses into the level-3 cache. If those data are not accessed frequently on average across the 64 cores, they may be replaced by data accessed by other cores. When the program accesses the data again, the CPU must read them from memory once more, so the same data are read repeatedly. As the number of CPU cores grows, the probability of cache conflict or contention between cores grows with it.
Summary of the invention
The present invention provides a method and apparatus for accessing a cache, so as to reduce usage contention when a multi-core CPU accesses the cache.
According to a first aspect, the invention provides a method for accessing a cache, the method comprising: receiving a data access request from an application program, where the application program runs on a target core of a central processing unit (CPU), the CPU comprises a plurality of cores, the target core is one of the plurality of cores, and the data access request carries the memory address of target data to be accessed; querying, according to the data access request, whether the target data is present in a cache shared by the plurality of cores, where the cache comprises an exclusive address space corresponding to the target core, the exclusive address space is used to store data that the target core reads from memory, and data read by cores other than the target core cannot replace data that the target core has stored in the exclusive address space; when the target data is stored in the cache, reading the target data from the cache; and when the target data is not stored in the cache, reading the target data from memory according to the memory address and storing the target data in the exclusive address space.
In the present invention, when the target core of a CPU processes a data access request from an application program, the cache shared by the cores contains an exclusive address space that stores data the target core reads from memory, and data read by cores other than the target core cannot replace data the target core has stored in that space. The target core thus has exclusive use of its address space in the cache, which reduces usage contention when a multi-core CPU accesses the cache.
With reference to the first aspect, in a first possible implementation of the first aspect, the method further comprises: receiving indication information from the application program, where the indication information instructs that the exclusive address space be set up in the cache and carries the address and size of the exclusive address space within the cache; and setting up the exclusive address space in the cache according to the indication information.
With reference to the first possible implementation of the first aspect, in a second possible implementation of the first aspect, the plurality of cores correspond one-to-one with a plurality of registers, and setting up the exclusive address space according to the indication information comprises: setting, in the register corresponding to the target core and according to the indication information, the cache address and cache length of the exclusive address space.
With reference to the first aspect or the first or second possible implementation of the first aspect, in a third possible implementation of the first aspect, the exclusive address space comprises a resident address space, the resident address space is used to store first data among the target data, and the first data placed in the resident address space cannot be replaced by any data.
By setting up a resident address space within the exclusive address space, and specifying that the first data located in the resident address space cannot be replaced by any data, the first data are locked in the cache, which improves the cache hit rate when the first data are read.
With reference to the first aspect or any one of the first to third possible implementations of the first aspect, in a fourth possible implementation of the first aspect, the cache is a level-3 cache.
According to a second aspect, the invention provides an apparatus for accessing a cache, comprising: a receiving module, configured to receive a data access request from an application program, where the application program runs on a target core of a central processing unit (CPU), the CPU comprises a plurality of cores, the target core is one of the plurality of cores, and the data access request carries the memory address of target data to be accessed; a query module, configured to query, according to the data access request, whether the target data is present in a cache shared by the plurality of cores, where the cache comprises an exclusive address space corresponding to the target core, the exclusive address space is used to store data that the target core reads from memory, and data read by cores other than the target core cannot replace data that the target core has stored in the exclusive address space; and an execution module, configured to read the target data from the cache when the target data is stored in the cache, and further configured to read the target data from memory according to the memory address when the target data is not stored in the cache, and to store the target data in the exclusive address space.
With reference to the second aspect, in a first possible implementation of the second aspect, the receiving module is further configured to receive indication information from the application program, where the indication information instructs that the exclusive address space be set up in the cache and carries the address and size of the exclusive address space within the cache; and the execution module is further configured to set up the exclusive address space in the cache according to the indication information.
With reference to the first possible implementation of the second aspect, in a second possible implementation of the second aspect, the plurality of cores correspond one-to-one with a plurality of registers, and the execution module is specifically configured to set, in the register corresponding to the target core and according to the indication information, the cache address and cache length of the exclusive address space.
With reference to the second aspect or the first or second possible implementation of the second aspect, in a third possible implementation of the second aspect, the exclusive address space comprises a resident address space, the resident address space is used to store first data among the target data, and the first data placed in the resident address space cannot be replaced by any data.
With reference to the second aspect or any one of the first to third possible implementations of the second aspect, in a fourth possible implementation of the second aspect, the cache is a level-3 cache.
According to a third aspect, the invention provides a method for accessing a cache, comprising: receiving a data access request from an application program, where the application program runs on a target core of a central processing unit (CPU), the CPU comprises a plurality of cores, the target core is one of the plurality of cores, and the data access request carries the memory address of target data to be accessed; querying, according to the data access request, whether the target data is present in a cache shared by the plurality of cores, where the cache comprises a resident address space corresponding to the target core, the resident address space is used to store data that the target core reads from memory, and the target data placed in the resident address space cannot be replaced by any data; when the target data is stored in the cache, reading the target data from the cache; and when the target data is not stored in the cache, reading the target data from memory according to the memory address and storing the target data in the cache.
In the present invention, when the target core of a CPU processes a data access request from the application program, the cache shared by the cores contains a resident address space that stores data the target core reads from memory, and target data stored in the resident address space cannot be replaced by any data. The target core thus has exclusive use of the resident address space in the cache, which reduces usage contention when a multi-core CPU accesses the cache.
With reference to the third aspect, in a first possible implementation of the third aspect, the method comprises: receiving indication information from the application program, where the indication information instructs that the resident address space be set up in the cache and carries the address and size of the resident address space within the cache; and setting up the resident address space in the cache according to the indication information.
According to a fourth aspect, the invention provides an apparatus for accessing a cache, the apparatus comprising a processor and a memory, where the memory stores code, and the processor reads the code stored in the memory in order to perform the method provided in the first aspect.
Brief description of the drawings
To describe the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings required in the embodiments are briefly introduced below. Evidently, the drawings described below show merely some embodiments of the present invention, and a person of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of a method for accessing a cache according to an embodiment of the present invention.
Fig. 2 is an architecture diagram of a multi-core computer system according to another embodiment of the present invention.
Fig. 3 is an architecture diagram of a method for accessing a cache according to yet another embodiment of the present invention.
Fig. 4 is a schematic diagram of an apparatus for accessing a cache according to another embodiment of the present invention.
Fig. 5 is a schematic diagram of an apparatus for accessing a cache according to another embodiment of the present invention.
Detailed description of the invention
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Evidently, the described embodiments are some rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
It should be understood that the method for accessing a cache in the embodiments of the present invention may be applied to a multi-core computer system, which may be a general-purpose multi-core computer system. The CPU in the multi-core computer system may include a plurality of cores, the cores may communicate through a system bus or a crossbar switch, and the multi-core computer system may include a cache shared by the cores of the CPU.
Fig. 1 shows a schematic diagram of a method 100 for accessing a cache according to an embodiment of the present invention. As shown in Fig. 1, the method 100 includes:
S110: receiving a data access request from an application program, where the application program runs on a target core of a central processing unit (CPU), the CPU comprises a plurality of cores, the target core is one of the plurality of cores, and the data access request carries the memory address of target data to be accessed;
S120: querying, according to the data access request, whether the target data is present in a cache shared by the plurality of cores, where the cache comprises an exclusive address space corresponding to the target core, the exclusive address space is used to store data that the target core reads from memory, and data read by cores other than the target core cannot replace data that the target core has stored in the exclusive address space;
S130: when the target data is stored in the cache, reading the target data from the cache;
S140: when the target data is not stored in the cache, reading the target data from memory according to the memory address, and storing the target data in the exclusive address space.
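Steps S110 to S140 can be sketched as a small software model. The cache structure, line count, stub memory and function names below are illustrative assumptions for exposition, not the patented hardware; the point is the replacement rule in S140 — a miss fetched by the target core may not evict a line held in another core's exclusive address space.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define CACHE_LINES 8

/* One line of the shared cache; owner_core marks lines held in a core's
 * exclusive address space (-1 means the line is ordinary shared data).
 * This is an illustrative model, not the patented hardware. */
typedef struct {
    bool     valid;
    uint64_t addr;       /* memory address tagged on this line */
    int      owner_core; /* core whose exclusive space holds the line */
    uint64_t data;
} cache_line_t;

static cache_line_t cache[CACHE_LINES];

/* Stub standing in for a real memory read: returns addr * 2. */
static uint64_t memory_read(uint64_t addr) { return addr * 2; }

/* S120-S140: query the shared cache; on a miss, read memory and install
 * the data into the requesting core's exclusive address space. */
uint64_t access_cache(int target_core, uint64_t addr)
{
    for (size_t i = 0; i < CACHE_LINES; i++)          /* S120: query */
        if (cache[i].valid && cache[i].addr == addr)
            return cache[i].data;                     /* S130: hit */

    uint64_t data = memory_read(addr);                /* S140: miss */
    for (size_t i = 0; i < CACHE_LINES; i++) {
        /* May only fill an invalid line, a shared line, or a line the
         * requesting core already owns — never another core's exclusive
         * data (the patent's no-replacement guarantee). */
        if (!cache[i].valid || cache[i].owner_core == -1 ||
            cache[i].owner_core == target_core) {
            cache[i] = (cache_line_t){ true, addr, target_core, data };
            break;
        }
    }
    return data;
}
```

In this sketch a second access to the same address hits in the cache, and a different core's miss is installed in a free line rather than over the first core's exclusive data.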
In the embodiments of the present invention, when the target core of a CPU processes a data access request from an application program, the cache shared by the cores contains an exclusive address space that stores data the target core reads from memory, and data read by cores other than the target core cannot replace data the target core has stored in that space. The target core thus has exclusive use of its address space in the cache, which reduces usage contention when a multi-core CPU accesses the cache.
It should be understood that, in the embodiments of the present invention, because an exclusive address space corresponding to the target core is set up in the cache shared by the cores, target data read by the target core will not be replaced by data read by cores other than the target core, which improves the cache hit rate.
It should be understood that the memory address of the target data in the embodiments of the present invention may be a virtual address or a physical address of the target data; the present invention does not limit this.
It should be understood that the cache shared by the cores in the embodiments of the present invention may be a level-3 cache (L3 cache) or another cache that can be shared by a plurality of cores; the present invention is not limited in this respect. For example, all cores on the same physical CPU may share a level-3 cache, while each core uses its own level-1 or level-2 cache.
Optionally, the address and size of the exclusive address space may be set by a software system, for example by the operating system, or the exclusive address space may be set by an application program. For example, indication information sent by the software system (the operating system or an application program) may be received, where the indication information indicates the address and size of the exclusive address space, and the exclusive address space may be set up in the cache according to the indication information.
Optionally, data read by cores other than the target core may also be stored in the exclusive address space, but such data cannot replace data read by the target core; it may only occupy space in the exclusive address space that is not taken up by data read by the target core.
Optionally, in an embodiment, the method 100 for accessing a cache further comprises: receiving indication information from the application program, where the indication information instructs that the exclusive address space be set up in the cache and carries the address and size of the exclusive address space within the cache; and setting up the exclusive address space in the cache according to the indication information.
It should be understood that, in the embodiments of the present invention, setting up the exclusive address space in the cache according to the indication information may be done by setting the address and size of the exclusive address space in the register corresponding to the exclusive address space.
Optionally, in an embodiment, the plurality of cores correspond one-to-one with a plurality of registers, and in the method 100, setting up the exclusive address space according to the indication information comprises: setting, in the register corresponding to the target core and according to the indication information, the cache address and cache length of the exclusive address space.
For ease of understanding, Fig. 2 shows the internal architecture of a multi-core computer system according to an embodiment of the present invention. As shown in Fig. 2, the CPU may contain a plurality of cores, and the cores may share one cache. For example, the shared cache may be located on the hardware chip on which the CPU resides. The cores of the CPU may correspond one-to-one with a plurality of registers, and each register may store instructions for its corresponding core. A core of the CPU may set the information describing its exclusive address space through its register.
For example, the configuration of the exclusive address space within the shared cache may be set in a register. The register may include a cache address field for the exclusive address space, a cache length field for the exclusive address space, and an enable flag bit for the exclusive address space. The cache address field is used to set the address of the exclusive address space, the cache length field is used to set its length, and the enable flag bit is used to mark the exclusive address space as valid or invalid; the enable flag bit may, for example, be named the exclusive bit. For example, the cache shared by the cores may be a level-3 cache. When a program starts, it may be bound to core 0 of the CPU ("core CPU0" for short), and the exclusive address space in the level-3 cache may be configured in the register corresponding to core CPU0: the exclusive bit is set to 1, indicating that the exclusive address space is currently valid; the cache address field is set to 0x10000 and the cache length field to 0x10000, indicating that the exclusive address range of core CPU0 is the cache space from 0x10000 to 0x1ffff. While the enable flag bit of this exclusive address space is valid, data stored in the level-3 cache space between 0x10000 and 0x1ffff cannot be replaced by data read from memory by cores other than the target core.
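The register configuration just described can be sketched in C. The struct layout and field names are assumptions for illustration (the patent does not fix a concrete encoding), but the 0x10000/0x10000 example values match the text above.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical per-core configuration register for the exclusive address
 * space; field names are illustrative, not taken from any real CPU. */
typedef struct {
    uint32_t cache_addr; /* cache address field: start of the region   */
    uint32_t cache_len;  /* cache length field: size of the region     */
    bool     exclusive;  /* enable flag bit: 1 = region currently valid */
} excl_region_reg_t;

/* Does a cache address fall inside the core's active exclusive region? */
bool in_exclusive_region(const excl_region_reg_t *reg, uint32_t addr)
{
    return reg->exclusive &&
           addr >= reg->cache_addr &&
           addr <  reg->cache_addr + reg->cache_len;
}
```

With addr = 0x10000 and len = 0x10000 the protected range is exactly 0x10000 to 0x1ffff; clearing the exclusive bit makes the check fail for every address, mirroring the "valid/invalid" semantics of the enable flag.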
For ease of understanding, Fig. 3 shows an architecture diagram of the method for accessing a cache according to an embodiment of the present invention. As shown in Fig. 3, a software system (for example, the operating system or an application program) may configure the exclusive address space in the register through a cache configuration interface, where the cache configuration interface is the interface through which the software system accesses the register.
Optionally, after the enable flag bit of the exclusive address space has been set as valid through the register, the cores other than the target core may be notified of the address of the exclusive address space, indicating that the address range of the exclusive address space is currently held exclusively by the target core. When cached data are updated, the target core may preferentially use the exclusive address space. For example, the address of the exclusive address space may be broadcast to the other cores through a bus snooping protocol.
Optionally, in an embodiment, in the method 100 for accessing a cache, the exclusive address space comprises a resident address space, the resident address space is used to store first data among the target data, and data placed in the resident address space cannot be replaced by any data.
In the embodiments of the present invention, by setting up a resident address space within the exclusive address space and specifying that the first data located in the resident address space cannot be replaced by any data, the first data are locked in the cache, which improves the cache hit rate when the first data are read.
It should be understood that the first data in the embodiments of the present invention may be part or all of the target data. For example, the first data may be critical data among the target data; the critical data may be stored in the resident address space, and while the resident address space is valid, the critical data will not be replaced by any data during the running of the application program. In other words, the critical data are locked and will not be swapped out.
Optionally, the first data may be determined by the application program or by the operating system. For example, first information sent by the application program or the operating system may be received, where the first information indicates the first data and the address and size of the resident address space in the cache; the resident address space may be set up according to the first information, and used to store the first data.
It should be understood that the resident address space may also hold data read by cores other than the target core, as well as target data other than the first data; however, no data may replace the first data in the resident address space. In other words, once the first data have been stored in the resident address space, they will not be replaced.
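The exclusive and resident (non-replaceable) rules described above can be combined into a single replacement-eligibility check. The following C predicate is an illustrative model of those rules, not a hardware implementation; the struct and names are assumptions.

```c
#include <stdbool.h>

/* Simplified per-line state combining the exclusive and resident
 * semantics: owner_core is the core whose exclusive space holds the
 * line (-1 = ordinary shared line); resident marks a locked line. */
typedef struct {
    int  owner_core;
    bool resident;
} line_state_t;

/* May data fetched by requesting_core replace this line? */
bool may_replace(const line_state_t *line, int requesting_core)
{
    if (line->resident)
        return false;                  /* resident data: never replaced  */
    if (line->owner_core == -1)
        return true;                   /* shared line: anyone may evict  */
    return line->owner_core == requesting_core; /* only the owner may    */
}
```

Note that a resident line cannot be replaced even by its own target core, whereas an exclusive (non-resident) line can still be recycled by the core that owns it.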
It should be understood that the resident address space in the embodiments of the present invention may be set up before the first data are accessed or at the time the first data are accessed; the present invention does not limit this.
For example, an address field, a length field and an enable flag bit for the resident address space may be set in the register. By setting these fields, the address and size of the resident address space, and whether it is valid, can be configured. For example, the enable flag bit of the resident address space may be named the non-replaceable (noswap) bit. When the resident address space is configured in the register corresponding to the target core, the cache address and cache length of the resident address space may be set, and the noswap bit may be set as valid (for example, set to 1 for valid and cleared to 0 for invalid).
Optionally, when the application program no longer uses the first data, the resident address space may be cancelled so that its space can be used by other data read by the target core. For example, after it is determined that the first data are no longer in use, the enable flag bit of the resident address space storing the first data may be set as invalid in the register corresponding to the target core; after invalidation, the data stored in the resident address space may be replaced by other data read by the target core.
In the embodiments of the present invention, when the application program no longer needs the resident address space to store the first data, cancelling the resident address space returns the space it occupies to use by data read by the target core, which improves the utilisation of the exclusive address space and of the cache.
It should be understood that, in the embodiments of the present invention, one resident address space or a plurality of resident address spaces may be set up.
Optionally, in an embodiment, the method 100 for accessing a cache further comprises: determining that the target core no longer needs the exclusive address space; and cancelling the exclusive address space so that cores other than the target core can use it.
In the embodiments of the present invention, by cancelling an exclusive address space that the target core no longer uses, the space occupied by the exclusive address space is returned to being shared by the cores, which improves the utilisation of the cache.
For example, in the embodiments of the present invention, the target core no longer needs the exclusive address space when, for instance, the application program using the exclusive address space has finished running and the target core no longer needs the cache shared by the cores.
For example, the exclusive address space may be cancelled by setting, in the register corresponding to the target core, the cache address and length of the exclusive address space and marking its enable flag bit as invalid. For example, while the application program uses the exclusive address space, the address of the exclusive address space is set to 0x10000, its length is set to 0x10000, and its enable flag bit is set as valid, indicating that the cache space in the address range 0x10000 to 0x1ffff is held exclusively by the target core. When the target core no longer needs the exclusive address space (for example, when the application program running on the target core finishes), the register may be set with address 0x10000 and length 0x10000 and the enable flag bit marked invalid, indicating that the cache space in the address range 0x10000 to 0x1ffff is no longer held exclusively by the target core and reverts to the shared state; in other words, the cache update policy reverts to the default policy. Thus, when the exclusive address space is no longer needed, it is cancelled and the space it occupied is returned to being shared by the cores, which improves the utilisation of the cache.
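The enable-and-cancel sequence just described can be sketched as follows. As before, the register image and function names are illustrative assumptions; the key behaviour is that cancellation only clears the enable flag bit, which is what returns the range to the default shared policy.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical per-core register image for the exclusive address space;
 * field names are illustrative, not taken from any real CPU. */
typedef struct {
    uint32_t cache_addr; /* start of the exclusive region in the cache */
    uint32_t cache_len;  /* length of the exclusive region             */
    bool     enabled;    /* enable flag bit: valid / invalid           */
} excl_reg_t;

/* Configure the region when the program starts (the 0x10000 example). */
void enable_exclusive(excl_reg_t *reg, uint32_t addr, uint32_t len)
{
    reg->cache_addr = addr;
    reg->cache_len  = len;
    reg->enabled    = true;
}

/* Cancel the region: clearing the enable flag reverts the range to the
 * default shared update policy; the address/length fields may remain. */
void cancel_exclusive(excl_reg_t *reg)
{
    reg->enabled = false;
}
```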
A method 100 of accessing a cache has been described above with reference to Fig. 1 to Fig. 3. A method 200 of accessing a cache is described in detail below. The method 200 includes:
S210: receiving a data access request of an application program, where the application program runs on a target core of a central processing unit (CPU), the CPU is a CPU including multiple cores, the target core is one of the multiple cores, and the data access request includes the memory address of target data to be accessed;
S220: querying, according to the data access request, whether the target data exists in the cache shared by the multiple cores, where the cache includes a resident address space corresponding to the target core, the resident address space is used to store data that the target core reads from memory, and the target data located in the resident address space cannot be replaced by any data;
S230: when the target data is stored in the cache, reading the target data from the cache;
S240: when the target data is not stored in the cache, reading the target data from the memory according to the memory address, and storing the target data in the cache.
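Steps S210 to S240 amount to a standard lookup-then-fill flow. As an illustration only (the patent gives no code; the toy direct-mapped cache, its size, and the function names below are assumptions), the flow might look like:

```c
#include <assert.h>
#include <stdint.h>

#define CACHE_LINES 8  /* arbitrary toy size */

/* Toy direct-mapped cache standing in for the shared cache. */
typedef struct {
    int      valid[CACHE_LINES];
    uint32_t tag[CACHE_LINES];
    uint32_t data[CACHE_LINES];
} toy_cache;

/* S220-S240: query the shared cache; on a hit, read from the cache;
 * on a miss, read from (simulated) memory and fill the cache. */
static uint32_t cache_access(toy_cache *c, uint32_t mem_addr,
                             const uint32_t *memory, int *hit)
{
    uint32_t idx = mem_addr % CACHE_LINES;
    if (c->valid[idx] && c->tag[idx] == mem_addr) {    /* S230: hit  */
        *hit = 1;
        return c->data[idx];
    }
    *hit = 0;                                          /* S240: miss */
    c->valid[idx] = 1;
    c->tag[idx]   = mem_addr;
    c->data[idx]  = memory[mem_addr];                  /* fill from memory */
    return c->data[idx];
}
```

A first access to an address misses and fills the cache; a repeated access to the same address then hits and is served from the cache.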
In the present invention, when the target core of the CPU processes the data access request of the application program, a resident address space exists in the cache shared by the multiple cores. The resident address space is used to store data that the target core reads from memory, and the target data stored in the resident address space cannot be replaced by any data, thereby supporting the target core's exclusive use of the resident address space in the cache and reducing contention when the multiple cores of the CPU access the cache.
Optionally, as an embodiment, the method 200 includes: receiving indication information of the application program, where the indication information is used to indicate that the resident address space is to be set in the cache, and the indication information includes the address and size of the resident address space in the cache; and setting the resident address space in the cache according to the indication information.
The method of accessing a cache has been described above with reference to Fig. 1 to Fig. 3; the device for accessing a cache is described in detail below with reference to Fig. 4 and Fig. 5.
Fig. 4 shows a schematic diagram of a device 400 for accessing a cache according to an embodiment of the present invention. It should be understood that the following and other operations and/or functions of the modules in the device 400 of this embodiment are respectively intended to realize the corresponding flows of the methods in Fig. 1 to Fig. 3; for brevity, they are not repeated here. As shown in Fig. 4, the device 400 includes:
a receiving module 410, configured to receive a data access request of an application program, where the application program runs on a target core of a central processing unit (CPU), the CPU is a CPU including multiple cores, the target core is one of the multiple cores, and the data access request includes the memory address of target data to be accessed;
a query module 420, configured to query, according to the data access request, whether the target data exists in the cache shared by the multiple cores, where the cache includes an exclusive address space corresponding to the target core, the exclusive address space is used to store data that the target core reads from memory, and data read by cores other than the target core among the multiple cores cannot replace the data that the target core stores in the exclusive address space;
an execution module 430, configured to read the target data from the cache when the target data is stored in the cache;
where the execution module 430 is further configured to: when the target data is not stored in the cache, read the target data from the memory according to the memory address, and store the target data in the exclusive address space.
Optionally, as an embodiment, the receiving module 410 is further configured to receive indication information of the application program, where the indication information is used to indicate that the exclusive address space is to be set in the cache, and the indication information includes the address and size of the exclusive address space in the cache; and the execution module 430 is further configured to set the exclusive address space in the cache according to the indication information.
Optionally, as an embodiment, the multiple cores correspond one-to-one with multiple registers, and the execution module 430 is specifically configured to set, according to the indication information and by means of the register corresponding to the target core, the cache address and cache length corresponding to the exclusive address space.
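The one-to-one correspondence between cores and registers, and the configuration of a core's register from the indication information (cache address plus size), can be sketched as follows; the array layout, core count, and function name are hypothetical, not taken from the patent.

```c
#include <assert.h>
#include <stdint.h>

#define NUM_CORES 4  /* arbitrary core count for illustration */

/* One control register per core (one-to-one correspondence). */
typedef struct {
    uint32_t addr;    /* cache address of the exclusive region */
    uint32_t len;     /* cache length of the exclusive region */
    uint32_t enable;  /* 1 = exclusive region active for this core */
} excl_region_reg;

static excl_region_reg regs[NUM_CORES];

/* Write the indication information (address + size) carried by the
 * application into the register corresponding to the target core. */
static void configure_from_indication(int target_core,
                                      uint32_t cache_addr, uint32_t size)
{
    excl_region_reg *r = &regs[target_core];
    r->addr = cache_addr;
    r->len = size;
    r->enable = 1;
}
```

Because each core has its own register, configuring one core's exclusive region leaves the other cores' registers untouched.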
Optionally, as an embodiment, the exclusive address space includes a resident address space, the resident address space is used to store first data among the target data, and the first data located in the resident address space cannot be replaced by any data.
Optionally, as an embodiment, the cache is a level-3 (L3) cache.
In this embodiment of the present invention, when the target core of the CPU processes the data access request of the application program, an exclusive address space exists in the cache shared by the multiple cores. The exclusive address space is used to store data that the target core reads from memory, and data read by cores other than the target core cannot replace the data that the target core stores in the exclusive address space, thereby supporting the target core's exclusive use of the exclusive address space in the cache and reducing contention when the multiple cores of the CPU access the cache.
Fig. 5 shows a schematic diagram of a device for accessing a cache according to an embodiment of the present invention. As shown in Fig. 5, the device 500 includes a processor 510, a memory 520, and a bus system 530, where the processor 510 and the memory 520 are connected through the bus system 530, the memory 520 is used to store instructions, and the processor 510 is used to execute the instructions stored in the memory 520.
The processor 510 is configured to: receive a data access request of an application program, where the application program runs on a target core of a central processing unit (CPU), the CPU is a CPU including multiple cores, the target core is one of the multiple cores, and the data access request includes the memory address of target data to be accessed; query, according to the data access request, whether the target data exists in the cache shared by the multiple cores, where the cache includes an exclusive address space corresponding to the target core, the exclusive address space is used to store data that the target core reads from memory, and data read by cores other than the target core cannot replace the data that the target core stores in the exclusive address space; when the target data is stored in the cache, read the target data from the cache; and when the target data is not stored in the cache, read the target data from the memory according to the memory address, and store the target data in the exclusive address space.
In this embodiment of the present invention, when the target core of the CPU processes the data access request of the application program, an exclusive address space exists in the cache shared by the multiple cores. The exclusive address space is used to store data that the target core reads from memory, and data read by cores other than the target core cannot replace the data that the target core stores in the exclusive address space, thereby supporting the target core's exclusive use of the exclusive address space in the cache and reducing contention when the multiple cores of the CPU access the cache.
Optionally, as an embodiment, the processor 510 is further configured to receive indication information of the application program, where the indication information is used to indicate that the exclusive address space is to be set in the cache, and the indication information includes the address and size of the exclusive address space in the cache; and to set the exclusive address space in the cache according to the indication information.
Optionally, as an embodiment, the multiple cores correspond one-to-one with multiple registers, and the processor 510 is specifically configured to set, according to the indication information and by means of the register corresponding to the target core, the cache address and cache length corresponding to the exclusive address space.
Optionally, as an embodiment, the exclusive address space includes a resident address space, the resident address space is used to store first data among the target data, and the first data located in the resident address space cannot be replaced by any data.
Optionally, as an embodiment, the cache is a level-3 (L3) cache.
In addition, the terms "system" and "network" are often used interchangeably herein. The term "and/or" herein merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may indicate three cases: A alone, both A and B, and B alone. In addition, the character "/" herein generally indicates an "or" relationship between the associated objects.
It should be understood that, in the embodiments of the present invention, "B corresponding to A" indicates that B is associated with A, and that B may be determined according to A. It should also be understood that determining B according to A does not mean that B is determined only according to A; B may also be determined according to A and/or other information.
A person of ordinary skill in the art may be aware that the units and algorithm steps of the examples described with reference to the embodiments disclosed herein can be implemented by electronic hardware, computer software, or a combination of the two. To clearly illustrate the interchangeability of hardware and software, the composition and steps of each example have been described above generally in terms of function. Whether these functions are performed by hardware or software depends on the particular application and the design constraints of the technical solution. A person skilled in the art may use different methods to implement the described functions for each particular application, but such implementation should not be considered beyond the scope of the present invention.
A person skilled in the art can clearly understand that, for convenience and brevity of description, for the specific working processes of the systems, devices, and units described above, reference may be made to the corresponding processes in the foregoing method embodiments; details are not repeated here.
In the several embodiments provided in this application, it should be understood that the disclosed system, device, and method may be implemented in other ways. For example, the device embodiments described above are merely schematic. For example, the division into units is merely a division by logical function, and there may be other division manners in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network elements. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments of the present invention.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on such understanding, the technical solutions of the present invention essentially, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods of the embodiments of the present invention. The foregoing storage medium includes various media that can store program code, such as a USB flash drive, a portable hard disk, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), a magnetic disk, or an optical disc.
To keep the application documents concise and clear, the technical features and descriptions in one embodiment above may be understood to apply to other embodiments and are not repeated one by one.
The foregoing is merely specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any person familiar with the art can readily conceive of various equivalent modifications or replacements within the technical scope disclosed by the present invention, and these modifications or replacements shall all fall within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. A method of accessing a cache, characterized by comprising:
receiving a data access request of an application program, wherein the application program runs on a target core of a central processing unit (CPU), the CPU is a CPU comprising multiple cores, the target core is one of the multiple cores, and the data access request comprises the memory address of target data to be accessed;
querying, according to the data access request, whether the target data exists in the cache shared by the multiple cores, wherein the cache comprises an exclusive address space corresponding to the target core, the exclusive address space is used to store data that the target core reads from memory, and data read by cores other than the target core among the multiple cores cannot replace the data that the target core stores in the exclusive address space;
when the target data is stored in the cache, reading the target data from the cache;
when the target data is not stored in the cache, reading the target data from the memory according to the memory address, and storing the target data in the exclusive address space.
2. The method of claim 1, characterized in that the method further comprises:
receiving indication information of the application program, wherein the indication information is used to indicate that the exclusive address space is to be set in the cache, and the indication information comprises the address and size of the exclusive address space in the cache;
setting the exclusive address space in the cache according to the indication information.
3. The method of claim 2, characterized in that the multiple cores correspond one-to-one with multiple registers, and
the setting the exclusive address space according to the indication information comprises:
setting, according to the indication information and by means of the register corresponding to the target core, the cache address and cache length corresponding to the exclusive address space.
4. The method of any one of claims 1 to 3, characterized in that the exclusive address space comprises a resident address space, the resident address space is used to store first data among the target data, and the first data located in the resident address space cannot be replaced by any data.
5. The method of any one of claims 1 to 4, characterized in that the cache is a level-3 (L3) cache.
6. A device for accessing a cache, characterized by comprising:
a receiving module, configured to receive a data access request of an application program, wherein the application program runs on a target core of a central processing unit (CPU), the CPU is a CPU comprising multiple cores, the target core is one of the multiple cores, and the data access request comprises the memory address of target data to be accessed;
a query module, configured to query, according to the data access request, whether the target data exists in the cache shared by the multiple cores, wherein the cache comprises an exclusive address space corresponding to the target core, the exclusive address space is used to store data that the target core reads from memory, and data read by cores other than the target core among the multiple cores cannot replace the data that the target core stores in the exclusive address space;
an execution module, configured to read the target data from the cache when the target data is stored in the cache;
wherein the execution module is further configured to: when the target data is not stored in the cache, read the target data from the memory according to the memory address, and store the target data in the exclusive address space.
7. The device of claim 6, characterized in that the receiving module is further configured to receive indication information of the application program, wherein the indication information is used to indicate that the exclusive address space is to be set in the cache, and the indication information comprises the address and size of the exclusive address space in the cache; and the execution module is further configured to set the exclusive address space in the cache according to the indication information.
8. The device of claim 7, characterized in that the multiple cores correspond one-to-one with multiple registers, and the execution module is specifically configured to set, according to the indication information and by means of the register corresponding to the target core, the cache address and cache length corresponding to the exclusive address space.
9. The device of any one of claims 6 to 8, characterized in that the exclusive address space comprises a resident address space, the resident address space is used to store first data among the target data, and the first data located in the resident address space cannot be replaced by any data.
10. The device of any one of claims 6 to 9, characterized in that the cache is a level-3 (L3) cache.
CN201511024173.8A 2015-12-30 2015-12-30 The method and apparatus of access cache Active CN105677580B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201511024173.8A CN105677580B (en) 2015-12-30 2015-12-30 The method and apparatus of access cache

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201511024173.8A CN105677580B (en) 2015-12-30 2015-12-30 The method and apparatus of access cache

Publications (2)

Publication Number Publication Date
CN105677580A true CN105677580A (en) 2016-06-15
CN105677580B CN105677580B (en) 2019-04-12

Family

ID=56189852

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201511024173.8A Active CN105677580B (en) 2015-12-30 2015-12-30 The method and apparatus of access cache

Country Status (1)

Country Link
CN (1) CN105677580B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009199384A (en) * 2008-02-22 2009-09-03 Nec Corp Data processing apparatus
CN101673244A (en) * 2008-09-09 2010-03-17 上海华虹Nec电子有限公司 Memorizer control method for multi-core or cluster systems
CN101739299A (en) * 2009-12-18 2010-06-16 北京工业大学 Method for dynamically and fairly partitioning shared cache based on chip multiprocessor
CN102483840A (en) * 2009-08-21 2012-05-30 英派尔科技开发有限公司 Allocating processor cores with cache memory associativity

Cited By (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108268384A (en) * 2016-12-30 2018-07-10 华为技术有限公司 Read the method and device of data
CN109597776A (en) * 2017-09-30 2019-04-09 杭州华为数字技术有限公司 A kind of data manipulation method, Memory Controller Hub and multicomputer system
CN109783403A (en) * 2017-11-10 2019-05-21 深圳超级数据链技术有限公司 Read the method, apparatus and data processor of data
CN108614782A (en) * 2018-04-28 2018-10-02 张家口浩扬科技有限公司 A kind of cache access method for data processing system
US11586544B2 (en) 2018-07-27 2023-02-21 Huawei Technologies Co., Ltd. Data prefetching method and terminal device
CN110765034A (en) * 2018-07-27 2020-02-07 华为技术有限公司 Data prefetching method and terminal equipment
CN109617832A (en) * 2019-01-31 2019-04-12 新华三技术有限公司合肥分公司 Message caching method and device
CN109617832B (en) * 2019-01-31 2022-07-08 新华三技术有限公司合肥分公司 Message caching method and device
CN110096455A (en) * 2019-04-26 2019-08-06 海光信息技术有限公司 The exclusive initial method and relevant apparatus of spatial cache
CN112241320B (en) * 2019-07-17 2023-11-10 华为技术有限公司 Resource allocation method, storage device and storage system
WO2021008197A1 (en) * 2019-07-17 2021-01-21 华为技术有限公司 Resource allocation method, storage device, and storage system
US11861196B2 (en) 2019-07-17 2024-01-02 Huawei Technologies Co., Ltd. Resource allocation method, storage device, and storage system
CN112241320A (en) * 2019-07-17 2021-01-19 华为技术有限公司 Resource allocation method, storage device and storage system
CN112559433A (en) * 2019-09-25 2021-03-26 阿里巴巴集团控股有限公司 Multi-core interconnection bus, inter-core communication method and multi-core processor
CN112559433B (en) * 2019-09-25 2024-01-02 阿里巴巴集团控股有限公司 Multi-core interconnection bus, inter-core communication method and multi-core processor
CN111159062B (en) * 2019-12-20 2023-07-07 海光信息技术股份有限公司 Cache data scheduling method and device, CPU chip and server
CN111159062A (en) * 2019-12-20 2020-05-15 海光信息技术有限公司 Cache data scheduling method and device, CPU chip and server
CN111679728B (en) * 2019-12-31 2021-12-24 泰斗微电子科技有限公司 Data reading method and device
CN111679728A (en) * 2019-12-31 2020-09-18 泰斗微电子科技有限公司 Data reading method and device
CN112307067A (en) * 2020-11-06 2021-02-02 支付宝(杭州)信息技术有限公司 Data processing method and device
CN112307067B (en) * 2020-11-06 2024-04-19 支付宝(杭州)信息技术有限公司 Data processing method and device
CN112527205A (en) * 2020-12-16 2021-03-19 江苏国科微电子有限公司 Data security protection method, device, equipment and medium
CN115114192A (en) * 2021-03-23 2022-09-27 北京灵汐科技有限公司 Memory interface, functional core, many-core system and memory data access method
CN114036084A (en) * 2021-11-17 2022-02-11 海光信息技术股份有限公司 Data access method, shared cache, chip system and electronic equipment
CN114721726B (en) * 2022-06-10 2022-08-12 成都登临科技有限公司 Method for multi-thread group to obtain instructions in parallel, processor and electronic equipment
CN114721726A (en) * 2022-06-10 2022-07-08 成都登临科技有限公司 Method for obtaining instructions in parallel by multithread group, processor and electronic equipment
CN115328820A (en) * 2022-09-28 2022-11-11 北京微核芯科技有限公司 Access method of multi-level cache system, data storage method and device
CN115328820B (en) * 2022-09-28 2022-12-20 北京微核芯科技有限公司 Access method of multi-level cache system, data storage method and device
CN115827504A (en) * 2023-01-31 2023-03-21 南京砺算科技有限公司 Data access method for multi-core graphic processor, graphic processor and medium

Also Published As

Publication number Publication date
CN105677580B (en) 2019-04-12

Similar Documents

Publication Publication Date Title
CN105677580A (en) Method and device for accessing cache
US9785571B2 (en) Methods and systems for memory de-duplication
US10169232B2 (en) Associative and atomic write-back caching system and method for storage subsystem
US7165144B2 (en) Managing input/output (I/O) requests in a cache memory system
EP2478441B1 (en) Read and write aware cache
KR100978156B1 (en) Method, apparatus, system and computer readable recording medium for line swapping scheme to reduce back invalidations in a snoop filter
US8762651B2 (en) Maintaining cache coherence in a multi-node, symmetric multiprocessing computer
US8423736B2 (en) Maintaining cache coherence in a multi-node, symmetric multiprocessing computer
CN105095116A (en) Cache replacing method, cache controller and processor
CN109977129A (en) Multi-stage data caching method and equipment
US20150143045A1 (en) Cache control apparatus and method
KR102575913B1 (en) Asymmetric set combined cache
KR101893966B1 (en) Memory management method and device, and memory controller
US9003130B2 (en) Multi-core processing device with invalidation cache tags and methods
CN109478164B (en) System and method for storing cache location information for cache entry transfer
CN108733584B (en) Method and apparatus for optimizing data caching
CN105095104A (en) Method and device for data caching processing
CN106164874B (en) Method and device for accessing data visitor directory in multi-core system
CN104252423A (en) Consistency processing method and device based on multi-core processor
CN109478163B (en) System and method for identifying a pending memory access request at a cache entry
CN112148639A (en) High-efficiency small-capacity cache memory replacement method and system
US11249914B2 (en) System and methods of an efficient cache algorithm in a hierarchical storage system
CN110658999B (en) Information updating method, device, equipment and computer readable storage medium
WO2015047284A1 (en) Flexible storage block for a solid state drive (ssd)-based file system
CN105659216B (en) The CACHE DIRECTORY processing method and contents controller of multi-core processor system

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20200417

Address after: 518129 Bantian HUAWEI headquarters office building, Longgang District, Guangdong, Shenzhen

Patentee after: HUAWEI TECHNOLOGIES Co.,Ltd.

Address before: 301, A building, room 3, building 301, foreshore Road, No. 310052, Binjiang District, Zhejiang, Hangzhou

Patentee before: Huawei Technologies Co.,Ltd.

TR01 Transfer of patent right