CN109213641B - Cache consistency detection system and method - Google Patents


Info

Publication number
CN109213641B
Authority
CN
China
Prior art keywords
cache line
detected
cache
level cache
backfill
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710516703.3A
Other languages
Chinese (zh)
Other versions
CN109213641A (en)
Inventor
王正算
荆刚
黄小康
余红斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Spreadtrum Communications Shanghai Co Ltd
Original Assignee
Spreadtrum Communications Shanghai Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Spreadtrum Communications Shanghai Co Ltd filed Critical Spreadtrum Communications Shanghai Co Ltd
Priority to CN201710516703.3A priority Critical patent/CN109213641B/en
Publication of CN109213641A publication Critical patent/CN109213641A/en
Application granted granted Critical
Publication of CN109213641B publication Critical patent/CN109213641B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 Error detection; Error correction; Monitoring
    • G06F 11/22 Detection or location of defective computer hardware by testing during standby operation or during idle time, e.g. start-up testing
    • G06F 11/2205 Detection or location of defective computer hardware by testing during standby operation or during idle time, e.g. start-up testing using arrangements specific to the hardware being tested
    • G06F 11/2236 Detection or location of defective computer hardware by testing during standby operation or during idle time, e.g. start-up testing using arrangements specific to the hardware being tested to test CPU or processors
    • G06F 11/2242 Detection or location of defective computer hardware by testing during standby operation or during idle time, e.g. start-up testing using arrangements specific to the hardware being tested to test CPU or processors in multi-processor systems, e.g. one processor becoming the test master
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 Error detection; Error correction; Monitoring
    • G06F 11/22 Detection or location of defective computer hardware by testing during standby operation or during idle time, e.g. start-up testing
    • G06F 11/2247 Verification or detection of system hardware configuration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 Error detection; Error correction; Monitoring
    • G06F 11/30 Monitoring
    • G06F 11/3065 Monitoring arrangements determined by the means or processing involved in reporting the monitored data
    • G06F 11/3072 Monitoring arrangements determined by the means or processing involved in reporting the monitored data where the reporting involves data filtering, e.g. pattern matching, time or event triggered, adaptive or policy-based reporting
    • G06F 11/3075 Monitoring arrangements determined by the means or processing involved in reporting the monitored data where the reporting involves data filtering, e.g. pattern matching, time or event triggered, adaptive or policy-based reporting the data filtering being achieved in order to maintain consistency among the monitored data, e.g. ensuring that the monitored data belong to the same timeframe, to the same system or component
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02 Addressing or allocation; Relocation
    • G06F 12/08 Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F 12/0802 Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F 12/0806 Multiuser, multiprocessor or multiprocessing cache systems
    • G06F 12/0811 Multiuser, multiprocessor or multiprocessing cache systems with multilevel cache hierarchies
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02 Addressing or allocation; Relocation
    • G06F 12/08 Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F 12/0802 Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F 12/0893 Caches characterised by their organisation or structure
    • G06F 12/0897 Caches characterised by their organisation or structure with two or more cache hierarchy levels
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2201/00 Indexing scheme relating to error detection, to error correction, and to monitoring
    • G06F 2201/82 Solving problems relating to consistency

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Quality & Reliability (AREA)
  • Computer Hardware Design (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The invention provides a cache consistency detection system and a cache consistency detection method. The system comprises a plurality of CPU core primary cache mirrors, a primary cache line detection module corresponding to each CPU core primary cache mirror, a secondary cache line detection module corresponding to the secondary cache mirror, a backfill request monitor, a replacement request monitor, a monitoring request monitor and a comparator. The invention can dynamically cover, in real time, the detection of all application scenarios of a real system and improve the efficiency of cache consistency detection.

Description

Cache consistency detection system and method
Technical Field
The invention relates to the technical field of CPU design, in particular to a cache consistency detection system and method.
Background
With the rapid development of semiconductor manufacturing processes, the clock frequency of single-core processors has gradually approached its limit; to further improve processor performance, multiple processor cores are integrated into one chip to form a multi-core processor. The cache sits between the processor and the memory and bridges the speed gap between them, so that the CPU in a multi-core processor effectively enjoys the speed of the cache together with the capacity of main memory.
In a multi-core processor, the gap between the processor's and the memory's access speed to the same memory unit becomes pronounced, so multi-core designs relieve this tension with a multi-level cache storage hierarchy. Cache coherency is the mechanism employed to bridge the access-speed gap between processor and memory, to keep shared data accesses consistent, and to provide a shared-memory programming interface. It not only directly determines the correctness of a multi-core processor system but also strongly influences the system's scale and performance, and is key to building a multi-core shared-memory system. On most high-performance processors today, almost all memory accesses go through a cache.
With the development of multi-core, multi-processor and many-core technologies, processors keep getting faster and memory hierarchies keep getting more complex, so data inconsistency may arise between adjacent levels of the memory hierarchy or within the same level.
At present, cache consistency is usually checked by protocol-level verification in software simulation: a constraint model is written by hand, constrained pseudo-random tests are run against it, specific targets are verified, and the correctness and defects of the protocol are fed back.
In the process of implementing the invention, the inventor finds that at least the following technical problems exist in the prior art:
Because the manually written parallel consistency verification programs used in system-level verification cover only a limited set of access patterns, the consistency of the processor caches cannot be fully verified.
Disclosure of Invention
The cache consistency detection system and method provided by the invention can dynamically cover, in real time, all application scenarios of a real system and improve the efficiency of cache consistency detection.
In a first aspect, the present invention provides a cache consistency detection system, including a plurality of CPU core primary cache mirrors, a primary cache line detection module corresponding to each CPU core primary cache mirror, a secondary cache line detection module corresponding to the secondary cache mirror, a backfill request monitor, a replacement request monitor, a snoop request monitor, and a comparator; wherein:
the CPU core primary cache mirror image is used for detecting the write enable, write address and write data signals of the CPU core primary cache in the design to be verified, and when the data at an address in the CPU core primary cache is updated, the data in the CPU core primary cache mirror image is automatically and synchronously updated to the latest value;
the second-level cache mirror image is used for detecting the write enable, write address and write data signals of the second-level cache in the design to be verified, and when the data at an address in the second-level cache is updated, the data in the second-level cache mirror image is automatically and synchronously updated to the latest value;
the backfill request monitor is used for monitoring backfill requests which start and do not end in all the CPU cores and determining whether the cache line to be detected is in a backfill applying state;
the replacement request monitor is used for monitoring the replacement requests which are started and not ended in all the CPU cores and determining whether the cache line to be detected is in a replacement applying state;
the monitoring request monitor is used for monitoring the monitoring requests which are started and not finished in all the CPU cores and determining whether the cache line to be detected is in a monitoring application state;
the first-level cache line detection module is used for detecting the state of a cache line to be detected in the first-level cache mirror image of the CPU core and sending a detection result to the comparator;
the second-level cache line detection module is used for detecting the state of the cache line to be detected in the second-level cache mirror image when the cache line to be detected is not in a backfill applying state, a replacement applying state and a monitoring applying state, and sending a detection result to the comparator;
the comparator is used for comparing the detection results of the cache line to be detected sent by the first-level cache line detection module and the second-level cache line detection module.
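To make the cooperation among the components just listed concrete, the following is a minimal, simplified Python sketch of how such a checking environment could be wired together; all class and method names here (CacheMirror, RequestMonitor, LineDetector, Comparator, and so on) are illustrative assumptions rather than part of the disclosed design.

class CacheMirror:
    """Shadow copy of a cache, kept in sync by sniffing write enable/address/data."""
    def __init__(self):
        self.lines = {}                     # address -> latest data

class RequestMonitor:
    """Tracks requests (backfill, replacement or snoop) that have started but not ended."""
    def __init__(self):
        self.pending = set()                # addresses with an outstanding request

    def is_applying(self, addr):
        return addr in self.pending

class LineDetector:
    """Detects the state of a cache line to be checked inside one mirror."""
    def __init__(self, mirror):
        self.mirror = mirror

    def detect(self, addr):
        return self.mirror.lines.get(addr)

class Comparator:
    """Compares the L1 and L2 detection results for the same cache line."""
    def compare(self, addr, l1_result, l2_result):
        if l1_result != l2_result:
            print(f"mismatch at {addr:#x}: L1={l1_result} L2={l2_result}")

# One L1 mirror and detector per CPU core, one shared L2 mirror and detector,
# plus the three request monitors and the comparator.
NUM_CORES = 2
l1_mirrors = [CacheMirror() for _ in range(NUM_CORES)]
l1_detectors = [LineDetector(m) for m in l1_mirrors]
l2_mirror = CacheMirror()
l2_detector = LineDetector(l2_mirror)
backfill_monitor, replace_monitor, snoop_monitor = (RequestMonitor() for _ in range(3))
comparator = Comparator()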
Optionally, the comparator is further configured to output an error report to the simulation file when the comparison result shows that an error occurs.
Optionally, the backfill request monitor is further configured to record a start time and an end time of the backfill request;
and the secondary cache line detection module is also used for skipping the detection of the cache line to be detected in a time period corresponding to the backfill request when the cache line to be detected is in the backfill applying state.
Optionally, the replacement request monitor is further configured to record a start time and an end time of the replacement request;
and the secondary cache line detection module is further configured to skip detection of the cache line to be detected in a time period corresponding to the replacement request when the cache line to be detected is in a replacement application state.
Optionally, the monitoring request monitor is further configured to record a start time and an end time of the monitoring request;
and the second-level cache line detection module is further used for skipping the detection of the cache line to be detected in a time period corresponding to the monitoring request when the cache line to be detected is in the monitoring application state.
In a second aspect, the present invention provides a cache coherence detection method, including:
the CPU core primary cache mirror image detects the write enable, write address and write data signals of the CPU core primary cache in the design to be verified, and when the data at an address in the CPU core primary cache is updated, the data in the CPU core primary cache mirror image is automatically and synchronously updated;
the second-level cache mirror image detects the write enable, write address and write data signals of the second-level cache in the design to be verified, and when the data at an address in the second-level cache is updated, the data in the second-level cache mirror image is automatically and synchronously updated;
the backfill request monitor monitors backfill requests which are started and not finished in all CPU cores, and determines whether a cache line to be detected is in a backfill applying state;
the replacement request monitor monitors the replacement requests which are started and not finished in all the CPU cores, and determines whether the cache line to be detected is in a replacement applying state or not;
monitoring the monitoring requests which are started and not finished in all the CPU cores by a monitoring request monitor, and determining whether the cache line to be detected is in a monitoring application state;
the first-level cache line detection module detects the state of a cache line to be detected in the first-level cache mirror image of the CPU core and sends a detection result to the comparator;
when the cache line to be detected is not in a backfill applying state, a replacement applying state and a monitoring applying state, a second-level cache line detection module detects the state of the cache line to be detected in the second-level cache mirror image and sends a detection result to the comparator;
and the comparator compares the detection results of the cache line to be detected sent by the first-level cache line detection module and the second-level cache line detection module.
Optionally, the method further comprises:
when the comparison result of the comparator shows that an error occurs, the comparator outputs an error report to the simulation file.
Optionally, the method further comprises:
the backfill request monitor records the start time and the end time of the backfill request;
and when the cache line to be detected is in the backfill applying state, the secondary cache line detection module skips detection on the cache line to be detected in a time period corresponding to the backfill request.
Optionally, the method further comprises:
the replacement request monitor records the start time and the end time of the replacement request;
and when the cache line to be detected is in the replacement applying state, the secondary cache line detection module skips the detection of the cache line to be detected in the time period corresponding to the replacement request.
Optionally, the method further comprises:
the monitoring request monitor records the starting time and the ending time of the monitoring request;
and when the cache line to be detected is in the monitoring application state, the secondary cache line detection module skips the detection of the cache line to be detected in the time period corresponding to the monitoring request.
The cache consistency detection system and method provided by the embodiments of the invention enable system-level verification of cache consistency in a multi-core processor, dynamically cover all application scenarios of a real system in real time, and compensate for the low efficiency and insufficient verification coverage of traditional simulation-based verification methods; at the same time, the cause of an error is analyzed automatically, helping designers locate the problem quickly, so the verification cycle of a cache-coherent multi-core processor chip design can be shortened and the first-pass tape-out success rate of the chip can be effectively ensured.
Drawings
FIG. 1 is a schematic diagram of a typical shared memory system with multi-core processors;
fig. 2 is a schematic structural diagram of a cache coherency detection system according to an embodiment of the present invention;
FIG. 3 is a data structure diagram of a first level cache line, a second level cache line, a first level cache mirror, and a second level cache mirror;
fig. 4 is a schematic structural diagram of a cache coherence detection method according to another embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In the shared storage structure of a multi-core processor, each CPU core has an independent cache, so data at the same address may have multiple copies in the caches of different CPU cores. When one CPU core updates the data at that address, the other CPU cores must also be able to read the latest data. As shown in Fig. 1, a typical multi-core shared storage system contains multiple cores: each CPU core has its own first-level cache (L1 Cache) and the CPU cores share a second-level cache (L2 Cache). Cache consistency detection checks the consistency between the caches in such a multi-core shared storage system.
An embodiment of the present invention provides a cache coherence detection system, as shown in fig. 2, the system includes:
the system comprises a plurality of CPU core primary cache images, a primary cache line detection module corresponding to each CPU core primary cache image, a secondary cache line detection module corresponding to the secondary cache image, a backfill request monitor, a replacement request monitor, a monitoring request monitor and a comparator;
the CPU core primary cache mirror image is used for detecting the write enable signal, write address signal and write data signal of the CPU core primary cache in the design to be verified, and, on finding that the data at an address in the CPU core primary cache has been updated, automatically and synchronously updating its own copy to the latest data;
the second-level cache mirror image is used for detecting the write enable, write address and write data signals of the second-level cache in the design to be verified, and, on finding that the data at an address in the second-level cache has been updated, automatically and synchronously updating its own copy to the latest data;
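A possible way to model this mirror behavior is sketched below: on every clock the mirror samples the monitored cache's write enable, write address and write data signals and updates its own copy whenever a write is seen. The class and signal names are assumptions for illustration, not the disclosed implementation.

class CacheMirror:
    """Shadow copy that follows the writes of a cache in the design under verification."""
    def __init__(self):
        self.lines = {}                          # address -> latest data seen

    def on_clock(self, write_enable, write_address, write_data):
        # When a write to the monitored cache is observed, synchronize the same
        # address in the mirror so the mirror always holds the latest data.
        if write_enable:
            self.lines[write_address] = write_data

# Example: a write of 0xDEAD to address 0x40 is reflected in the mirror.
mirror = CacheMirror()
mirror.on_clock(write_enable=True, write_address=0x40, write_data=0xDEAD)
assert mirror.lines[0x40] == 0xDEAD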
Fig. 3 shows the data structures of a first-level cache line (L1 Cache Line), a second-level cache line (L2 Cache Line), the first-level cache mirror image and the second-level cache mirror image, respectively.
To better understand the data structure of the cache line, table 1 and table 2 detail the first level cache status bit and the second level cache status bit in fig. 3, respectively.
Here, Dirty corresponds to D in Fig. 3, Exclusive to E, Valid to V, and L1_Exclusive to L1_E; CP (Core Present) indicates that the cache line is present in a core.
TABLE 1
(Table 1, describing the first-level cache status bits, is provided as an image in the original publication.)
TABLE 2
(Table 2, describing the second-level cache status bits, is provided as an image in the original publication.)
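As an illustration of how the status bits listed above could be grouped per cache line, the following is a hypothetical Python model; the field names follow Fig. 3, while the class itself and the per-core CP encoding are assumptions.

from dataclasses import dataclass

@dataclass
class L2CacheLineState:
    """Second-level cache line status bits as named in Fig. 3 (illustrative model only)."""
    valid: bool         # V    - the line holds valid data
    dirty: bool         # D    - the line was modified relative to memory
    exclusive: bool     # E    - the line is held exclusively
    l1_exclusive: bool  # L1_E - the line is held exclusively by a first-level cache
    core_present: int   # CP   - bit mask of the cores whose L1 currently holds the line

# Example: a valid, clean line held exclusively by core 0's first-level cache.
line = L2CacheLineState(valid=True, dirty=False, exclusive=True,
                        l1_exclusive=True, core_present=0b01)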
A backfill request means that after a miss occurs in the first-level cache, the CPU core requests the corresponding data from the second-level cache or from memory and applies to backfill it into the first-level cache so that the next cache lookup will hit. While a cache line backfill request is in progress, the state of the second-level cache line and the state of the first-level cache line may be briefly inconsistent; this is allowed by the design, so the condition must be ignored when performing cache consistency detection. For that purpose, all cache lines that are applying for backfill must be known.
The backfill request monitor is used for monitoring the backfill requests that have started and not yet ended in all the CPU cores, recording the cache address, start time and end time of each backfill request, and determining that a cache line is not in the backfill applying state before its state is detected. If the cache line is still in the backfill applying state, the secondary cache line detection module skips its detection during that time period.
A replacement request refers to the process by which an old cache line is evicted from the cache when a cache line replacement occurs. While a cache line is applying to be replaced, the second-level cache line state and the first-level cache line state may differ.
The replacement request monitor is used for tracking all the replacement requests that have started and not yet ended in the CPU cores and recording the cache address, start time and end time of each replacement request. It is determined that a cache line is not in the replacement applying state before its state is detected. If the cache line is still in the replacement applying state, the secondary cache line detection module skips its detection during that time period.
A monitoring (snoop) request means that when one CPU core needs to monitor the first-level cache of another CPU core, the request is first sent to the second-level cache. The second-level cache then changes the state of its own cache line according to the monitoring request information before the request has been forwarded to the other CPU core, so the states of the second-level cache and the first-level cache differ during this period.
The monitoring request monitor is used for tracking all the monitoring requests that have started and not yet ended in the CPU cores and recording the cache address, start time and end time of each monitoring request. It is determined that a cache line is not in the monitoring applying state before its state is detected. If the cache line is still in the monitoring applying state, neither the first-level cache line detection module nor the second-level cache line detection module detects the state of that cache line in the first-level and second-level caches.
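The backfill, replacement and monitoring request monitors all follow the same pattern: record the cache address and start time when a request begins, record the end time when it completes, and answer whether a given line still has such a request outstanding so that its detection can be skipped. Below is a minimal sketch of that shared behavior; the interface is an assumption, not the disclosed implementation.

class PendingRequestMonitor:
    """Tracks requests of one kind (backfill, replacement or snoop) that have started
    but not yet ended (illustrative sketch)."""
    def __init__(self, kind):
        self.kind = kind
        self.pending = {}     # cache address -> start time
        self.finished = []    # (address, start time, end time) of completed requests

    def on_start(self, addr, time):
        self.pending[addr] = time

    def on_end(self, addr, time):
        start = self.pending.pop(addr, None)
        if start is not None:
            self.finished.append((addr, start, time))

    def is_applying(self, addr):
        """True while the line at addr still has an outstanding request of this kind."""
        return addr in self.pending

# Example: while a backfill of line 0x80 is outstanding, its detection is skipped.
backfill_monitor = PendingRequestMonitor("backfill")
backfill_monitor.on_start(0x80, time=100)
assert backfill_monitor.is_applying(0x80)
backfill_monitor.on_end(0x80, time=150)
assert not backfill_monitor.is_applying(0x80)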
The first-level cache line detection module is used for detecting the state of a cache line to be detected in the first-level cache mirror image of the CPU core and sending a detection result to the comparator;
the second-level cache line detection module is used for detecting the state of the cache line to be detected in the second-level cache mirror image when the cache line to be detected is not in a backfill applying state, a replacement applying state and a monitoring applying state, and sending a detection result to the comparator;
The comparator is used for comparing the detection results of the first-level cache line detection module and the second-level cache line detection module; when an error occurs, an error report is printed into the simulation file, including the time, the state information of the first-level cache line and the state information of the second-level cache line; the specific fields are shown in Table 3.
TABLE 3
(Table 3, describing the error report contents, is provided as an image in the original publication.)
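One way the comparator's reporting step could look is sketched below: when the detection results for the same line disagree, it appends a report with the simulation time, the cache address and both state records to a log file, roughly following the fields of Table 3. The function name, report format and file name are assumptions.

def compare_and_report(sim_time, addr, l1_state, l2_state, log_path="simulation.log"):
    """Compare L1/L2 detection results for one cache line and log an error on mismatch."""
    if l1_state == l2_state:
        return True
    report = (f"ERROR @ {sim_time}: cache line {addr:#010x} inconsistent\n"
              f"  first-level cache line state : {l1_state}\n"
              f"  second-level cache line state: {l2_state}\n")
    with open(log_path, "a") as f:       # printed into the simulation file
        f.write(report)
    return False

# Example: a line seen as dirty in L1 but clean in L2 triggers a report.
compare_and_report(12345, 0x1000, {"valid": True, "dirty": True},
                   {"valid": True, "dirty": False})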
The cache consistency detection system provided by the embodiment of the invention enables system-level verification of cache consistency in a multi-core processor, dynamically covers all application scenarios of a real system in real time, and compensates for the low efficiency and insufficient verification coverage of traditional simulation-based verification methods. When an error occurs, the designer is told the erroneous cache address, the error time, the erroneous data and the data that should have been present, and the cause of the error is analyzed automatically to help the designer locate the problem quickly, so the verification cycle of a cache-coherent multi-core processor chip design can be shortened and the first-pass tape-out success rate of the chip can be effectively ensured.
An embodiment of the present invention further provides a cache coherence detection method, as shown in fig. 4, where the method includes:
step S101, a CPU core sends a reading request for a first-level cache line address, a second-level cache line detection module calculates an index of a second-level cache line to be read according to the first-level cache line address, and the second-level cache line is read from a second-level cache mirror image, so that a cache address, data and state information are obtained;
step S102, according to the address of the cache line, a backfill request/replacement request/monitoring request monitor detects the cache line;
step S103, if the backfill request/replacement request/monitoring request monitor detects that the cache line is in a backfill, replacement or monitoring state, the secondary cache line detection module ignores the detection of the cache line and starts to detect the next cache line; if not, executing step S104;
step S104, a secondary cache line detection module detects the internal state of a cache line in a secondary cache, the specific information is shown in tables 1 and 2, and the detection result is sent to a comparator;
step S105, a first-level cache line detection module reads the cache line from the first-level cache mirror image, detects whether the first-level cache has the cache line, and starts to detect the next cache line if the first-level cache does not have the cache line; if so, executing step S106;
step S106, a first-level cache line detection module detects the state of the cache line in a first-level cache and sends a detection result to a comparator;
step S107, a first-level cache line detection module detects whether data in a first-level cache is changed, and if not, a next cache line is detected; if yes, executing step S108;
step S108, the comparator compares the states of the same cache line sent by the first-level cache line detection module and the second-level cache line detection module, and the specific comparison information is shown in table 4. If an error occurs, an error report is printed in the simulation file, and the specific information is shown in table 3.
The first-level cache line detection modules of the CPU core 0 and the CPU core 1 dynamically detect the states of all the first-level cache lines in real time in each simulation clock cycle.
The working steps of the first-level cache line detection module are the same as those of the second-level cache line detection module, but the detection contents are different, and the specific detection contents are as shown in the following table 5.
TABLE 4
(Table 4, describing the comparison information, is provided as an image in the original publication.)
TABLE 5
(Table 5, describing the detection contents of the first-level and second-level cache line detection modules, is provided as an image in the original publication.)
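Taken together, steps S101 to S108 amount to a per-line check loop of roughly the following shape. This is a hedged reconstruction in Python: the data shapes, helper names and the callable report are illustrative assumptions, not the disclosed implementation.

def check_cache_line(addr, l1_mirror, l2_mirror, pending_addrs, report, sim_time):
    """Sketch of the S101-S108 per-line flow (all helper names/shapes are assumptions)."""
    # S101: read the line to be checked from the second-level cache mirror
    #       (the real checker first derives the L2 index from the L1 line address).
    l2_line = l2_mirror.get(addr)
    if l2_line is None:
        return
    # S102/S103: skip the line if a backfill, replacement or snoop request is in flight.
    if addr in pending_addrs:
        return
    # S104: the L2-internal state of the line (the fields of Tables 1 and 2).
    l2_state = l2_line["state"]
    # S105: if the line is not present in the L1 mirror, move on to the next line.
    l1_line = l1_mirror.get(addr)
    if l1_line is None:
        return
    # S106/S107: only compare once the L1 data for this line has actually changed.
    if not l1_line.get("changed", False):
        return
    # S108: compare the L1 and L2 states; on mismatch, report it (cf. Tables 3 and 4).
    if l1_line["state"] != l2_state:
        report(sim_time, addr, l1_line["state"], l2_state)

# Example: an L1 line whose data changed but whose state disagrees with L2 is reported.
l1 = {0x40: {"state": "dirty", "changed": True}}
l2 = {0x40: {"state": "clean"}}
check_cache_line(0x40, l1, l2, pending_addrs=set(),
                 report=lambda t, a, s1, s2: print(f"{t}: {a:#x} L1={s1} L2={s2}"))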
The cache consistency detection method provided by the embodiment of the invention enables system-level verification of cache consistency in a multi-core processor, dynamically covers all application scenarios of a real system in real time, and compensates for the low efficiency and insufficient verification coverage of traditional simulation-based verification methods. When an error occurs, the designer is told the erroneous cache address, the error time, the erroneous data and the data that should have been present, and the cause of the error is analyzed automatically to help the designer locate the problem quickly, so the verification cycle of a cache-coherent multi-core processor chip design can be shortened and the first-pass tape-out success rate of the chip can be effectively ensured.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), or the like.
The above description is only for the specific embodiment of the present invention, but the scope of the present invention is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present invention are included in the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. A cache consistency detection system is characterized by comprising a plurality of CPU core primary cache images, a primary cache line detection module corresponding to each CPU core primary cache image, a secondary cache line detection module corresponding to the secondary cache image, a backfill request monitor, a replacement request monitor, a monitoring request monitor and a comparator; wherein:
the CPU core first-level cache mirror image is used for detecting write enable, write address and write data signals of the CPU core first-level cache in the design to be verified, and when data of the address in the CPU core first-level cache is updated, the data in the CPU core first-level cache mirror image is automatically and synchronously updated;
the second-level cache mirror image is used for detecting write enable, write address and write data signals of a second-level cache in a design to be verified, and when data of addresses in the second-level cache are updated, the data in the second-level cache mirror image are automatically and synchronously updated;
the backfill request monitor is used for monitoring backfill requests which start and do not end in all the CPU cores and determining whether the cache line to be detected is in a backfill applying state;
the replacement request monitor is used for monitoring the replacement requests which are started and not ended in all the CPU cores and determining whether the cache line to be detected is in a replacement applying state;
the monitoring request monitor is used for monitoring the monitoring requests which are started and not finished in all the CPU cores and determining whether the cache line to be detected is in a monitoring application state;
the first-level cache line detection module is used for detecting the state of a cache line to be detected in the first-level cache mirror image of the CPU core and sending a detection result to the comparator;
the second-level cache line detection module is used for detecting the state of the cache line to be detected in the second-level cache mirror image when the cache line to be detected is not in a backfill applying state, a replacement applying state and a monitoring applying state, and sending a detection result to the comparator;
the comparator is used for comparing the detection results of the cache line to be detected sent by the first-level cache line detection module and the second-level cache line detection module.
2. The system of claim 1, wherein the comparator is further configured to output an error report to the simulation file when the comparison result indicates that an error has occurred.
3. The system of claim 1, wherein the backfill request monitor is further configured to record a start time and an end time of the backfill request;
and the secondary cache line detection module is also used for skipping the detection of the cache line to be detected in a time period corresponding to the backfill request when the cache line to be detected is in the backfill applying state.
4. The system of claim 1, wherein the replacement request monitor is further configured to record a start time and an end time of the replacement request;
and the secondary cache line detection module is further configured to skip detection of the cache line to be detected in a time period corresponding to the replacement request when the cache line to be detected is in a replacement application state.
5. The system of claim 1, wherein the monitoring request monitor is further configured to record a start time and an end time of the monitoring request;
and the second-level cache line detection module is further used for skipping the detection of the cache line to be detected in a time period corresponding to the monitoring request when the cache line to be detected is in the monitoring application state.
6. A cache coherence detection method, comprising:
the method comprises the steps that a first-level cache mirror image of a CPU core detects write enable, write address and write data signals of a first-level cache of the CPU core in a design to be verified, and when data of addresses in the first-level cache of the CPU core are updated, the data in the first-level cache mirror image of the CPU core are automatically updated synchronously;
detecting write enable, write address and write data signal of a second-level cache in a design to be verified by a second-level cache mirror image, and automatically updating data in the second-level cache mirror image synchronously when the data of the address in the second-level cache is updated;
the backfill request monitor monitors backfill requests which are started and not finished in all CPU cores, and determines whether a cache line to be detected is in a backfill applying state;
the replacement request monitor monitors the replacement requests which are started and not finished in all the CPU cores, and determines whether the cache line to be detected is in a replacement applying state or not;
monitoring the monitoring requests which are started and not finished in all the CPU cores by a monitoring request monitor, and determining whether the cache line to be detected is in a monitoring application state;
the first-level cache line detection module detects the state of a cache line to be detected in the first-level cache mirror image of the CPU core and sends a detection result to the comparator;
when the cache line to be detected is not in a backfill applying state, a replacement applying state and a monitoring applying state, a second-level cache line detection module detects the state of the cache line to be detected in the second-level cache mirror image and sends a detection result to the comparator;
and the comparator compares the detection results of the cache line to be detected sent by the first-level cache line detection module and the second-level cache line detection module.
7. The method of claim 6, further comprising:
and when the comparison result of the comparator shows that an error occurs, the comparator outputs an error report to the simulation file.
8. The method of claim 6, further comprising:
the backfill request monitor records the start time and the end time of the backfill request;
and when the cache line to be detected is in the backfill applying state, the secondary cache line detection module skips detection on the cache line to be detected in a time period corresponding to the backfill request.
9. The method of claim 6, further comprising:
the replacement request monitor records the start time and the end time of the replacement request;
and when the cache line to be detected is in the replacement applying state, the secondary cache line detection module skips the detection of the cache line to be detected in the time period corresponding to the replacement request.
10. The method of claim 6, further comprising:
the monitoring request monitor records the starting time and the ending time of the monitoring request;
and when the cache line to be detected is in the monitoring application state, the secondary cache line detection module skips the detection of the cache line to be detected in the time period corresponding to the monitoring request.
CN201710516703.3A 2017-06-29 2017-06-29 Cache consistency detection system and method Active CN109213641B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710516703.3A CN109213641B (en) 2017-06-29 2017-06-29 Cache consistency detection system and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710516703.3A CN109213641B (en) 2017-06-29 2017-06-29 Cache consistency detection system and method

Publications (2)

Publication Number Publication Date
CN109213641A CN109213641A (en) 2019-01-15
CN109213641B true CN109213641B (en) 2021-10-26

Family

ID=64976742

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710516703.3A Active CN109213641B (en) 2017-06-29 2017-06-29 Cache consistency detection system and method

Country Status (1)

Country Link
CN (1) CN109213641B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110232030B (en) 2019-06-12 2021-08-10 上海兆芯集成电路有限公司 Multi-chip system and cache processing method
CN111782320B (en) * 2020-06-23 2023-03-24 上海赛昉科技有限公司 GUI interface method for debugging cache consistency C case and electronic equipment
CN111739577B (en) * 2020-07-20 2020-11-20 成都智明达电子股份有限公司 DSP-based efficient DDR test method
CN112732591B (en) * 2021-01-15 2023-04-07 杭州中科先进技术研究院有限公司 Edge computing framework for cache deep learning
CN114168200B (en) * 2022-02-14 2022-04-22 北京微核芯科技有限公司 System and method for verifying memory access consistency of multi-core processor
CN116627331B (en) * 2023-01-05 2024-04-02 摩尔线程智能科技(北京)有限责任公司 Cache verification device, method and system

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1968206A (en) * 2006-09-20 2007-05-23 杭州华为三康技术有限公司 Method and apparatus for global statistics in multi-core system
CN101958834A (en) * 2010-09-27 2011-01-26 清华大学 On-chip network system supporting cache coherence and data request method
CN102662885A (en) * 2012-04-01 2012-09-12 天津国芯科技有限公司 Device and method for maintaining second-level cache coherency of symmetrical multi-core processor
US9009372B2 (en) * 2012-08-16 2015-04-14 Fujitsu Limited Processor and control method for processor
CN105740164A (en) * 2014-12-10 2016-07-06 阿里巴巴集团控股有限公司 Multi-core processor supporting cache consistency, reading and writing methods and apparatuses as well as device


Also Published As

Publication number Publication date
CN109213641A (en) 2019-01-15

Similar Documents

Publication Publication Date Title
CN109213641B (en) Cache consistency detection system and method
KR102398912B1 (en) Method and processor for processing data
JP5526626B2 (en) Arithmetic processing device and address conversion method
JP4297968B2 (en) Coherency maintenance device and coherency maintenance method
JP2003162447A (en) Error recovery
US20110047411A1 (en) Handling of errors in a data processing apparatus having a cache storage and a replicated address storage
CN109684237B (en) Data access method and device based on multi-core processor
US8352646B2 (en) Direct access to cache memory
CN112231243B (en) Data processing method, processor and electronic equipment
US7461212B2 (en) Non-inclusive cache system with simple control operation
CN116737459A (en) Implementation method of three-level cache mechanism of tight coupling consistency bus
US6918011B2 (en) Cache memory for invalidating data or writing back data to a main memory
US10866892B2 (en) Establishing dependency in a resource retry queue
CN115202738A (en) Verification method and system of multi-core system under write-through strategy
JP5452148B2 (en) Memory control system
US8028128B2 (en) Method for increasing cache directory associativity classes in a system with a register space memory
CN113704026A (en) Distributed financial memory database security synchronization method, device and medium
US7519778B2 (en) System and method for cache coherence
US20020188810A1 (en) Cache memory control apparaus and processor
US11836085B2 (en) Cache line coherence state upgrade
CN112612726B (en) Data storage method and device based on cache consistency, processing chip and server
CN116561020B (en) Request processing method, device and storage medium under mixed cache granularity
US11269773B2 (en) Exclusivity in circuitry having a home node providing coherency control
WO2020010540A1 (en) Atomic operation execution method and apparatus
CN115470178A (en) Domain object-based domain snapshot rollback method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant