CN115826875B - Cache data invalidation verification method, device and system

Info

Publication number: CN115826875B
Authority: CN (China)
Prior art keywords: data, cache, request, read, downstream
Legal status: Active
Application number: CN202310010874.4A
Other languages: Chinese (zh)
Other versions: CN115826875A
Inventor: name not published at the inventor's request
Current Assignee: Moore Threads Technology Co Ltd
Original Assignee: Moore Threads Technology Co Ltd
Application filed by Moore Threads Technology Co Ltd
Priority to CN202310010874.4A
Publication of CN115826875A
Application granted
Publication of CN115826875B

Classifications

    • Y: General tagging of new technological developments; general tagging of cross-sectional technologies spanning over several sections of the IPC; technical subjects covered by former USPC cross-reference art collections [XRACs] and digests
    • Y02: Technologies or applications for mitigation or adaptation against climate change
    • Y02D: Climate change mitigation technologies in information and communication technologies [ICT], i.e. information and communication technologies aiming at the reduction of their own energy use
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The application provides a cache data invalidation verification method, device and system. The method includes the following steps: acquiring a data invalidation request input into a cache to be tested; updating, based on the data invalidation request, the cache data corresponding to the target cache data in a cache model and the downstream data corresponding to the target cache data in a downstream behavior level model, where the downstream behavior level model is used for returning, upon receiving a data request sent by the cache based on a read request, the downstream data corresponding to the read request to the cache so that the cache obtains read-back data based on the downstream data; and acquiring the read-back data returned by the cache in response to the read request for acquiring the target cache data, acquiring the cache data corresponding to the read-back data from the cache model as expected data, and checking the read-back data against the expected data to determine whether the data invalidation request succeeded.

Description

Cache data invalidation verification method, device and system
Technical Field
The present disclosure relates to the field of cache verification technologies, and in particular, to a method, an apparatus, and a system for verifying cache data invalidation.
Background
Whether a cache can correctly execute write requests to store data and return the expected data for reads at the given addresses determines whether the processor can store and read data correctly, and is therefore key to ensuring normal operation.
In some scenarios, the cache also needs to support certain functional characteristics (for example, a cache data invalidation function), and how to verify these functional characteristics is a problem to be solved.
Disclosure of Invention
An object of the present application is to provide a method for verifying invalidation of cached data, which realizes invalidation verification of cached data. It is another object of the present application to provide a cached data invalidation verification device. It is yet another object of the present application to provide a cached data invalidation verification system. It is yet another object of the present application to provide a computer device. It is yet another object of the present application to provide a readable medium. It is a further object of the present application to provide a computer program product.
To achieve the above object, in one aspect, the present application discloses a method for verifying invalidation of cached data, including:
acquiring a data invalidation request input into a to-be-tested cache, wherein the data invalidation request is used for invalidating target cache data in the to-be-tested cache;
Updating cache data corresponding to the target cache data in a cache model and downstream data corresponding to the target cache data in a downstream behavior level model based on the data invalidation request, wherein the downstream behavior level model is used for returning the downstream data corresponding to the read request to the cache when receiving a data request sent by the cache based on the read request so that the cache obtains readback data based on the downstream data;
and acquiring the read-back data returned by the cache in response to the read request for acquiring the target cache data, acquiring the cache data corresponding to the read-back data from the cache model as expected data, and checking the read-back data according to the expected data to determine whether the data invalidation request is successful.
Preferably, the obtaining the data invalidation request input to the to-be-tested cache specifically includes:
performing parameterization extraction on the verification request input into the cache to be tested based on the cached parameter configuration to obtain verification parameters;
obtaining a universal verification request according to the verification parameters and a preset data structure;
and determining whether the verification request is the data invalidation request according to the generalized verification request.
Preferably, the cache model stores cache data and corresponding data storage addresses in the cache, the downstream behavior level model stores downstream data and corresponding data storage addresses, and the cache data in the cache is at least part of the downstream data in the downstream behavior level model;
the updating the cache data corresponding to the target cache data in the cache model based on the data invalidation request and the downstream data corresponding to the target cache data in the downstream behavior level model specifically include:
determining a target data storage address of target cache data of data invalidation according to the data invalidation request;
and modifying the cache data and the downstream data corresponding to the target data storage address in the cache model and the downstream behavior level model into default values.
Preferably, modifying the cache data and the downstream data corresponding to the target data storage address in the cache model and the downstream behavior level model to default values specifically includes:
and deleting the cache data and the downstream data corresponding to the target data storage address in the cache model and the downstream behavior level model.
Preferably, before acquiring the cache data corresponding to the read-back data from the cache model as expected data, the method further comprises:
performing parameterization extraction on the verification request input into the cache to be tested based on the cached parameter configuration to obtain verification parameters;
obtaining a universal verification request according to the verification parameters and a preset data structure;
and determining whether the verification request is a read request according to the generalized verification request, and if so, acquiring cache data corresponding to the read request from the cache model as expected data corresponding to the read-back data.
Preferably, the obtaining the cached data corresponding to the read-back data from the cache model as the expected data specifically includes:
storing the read request input into the cache to a request queue;
acquiring a read request corresponding to the read-back data from the request queue according to the read-back data returned by the cache based on the read request;
and acquiring corresponding cache data from the cache model according to the read request as expected data.
Preferably, the cache model stores cache data in the cache and corresponding data storage addresses;
The obtaining the corresponding expected data from the cache model according to the read request specifically includes:
determining a data storage address to be read according to the read request;
obtaining corresponding cache data from the cache model according to the data storage address to be read;
and taking the cached data as the expected data corresponding to the read request.
Preferably, before verifying the read-back data according to the expected data, the method further comprises:
storing the expected data to an expected data queue;
and acquiring expected data corresponding to the read-back data from the expected data queue so as to verify the read-back data according to the expected data.
Preferably, the verifying the read-back data according to the expected data to determine whether the data invalidation request is successful specifically includes:
checking whether the expected data and the read-back data are the same;
if the data are the same, the data invalidation is successful;
if not, the data invalidation fails.
The application also discloses a cache data invalidation verification device, which comprises a cache model, a downstream behavior level model and a control module, wherein the downstream behavior level model is used for returning downstream data corresponding to a read request to the cache when receiving the data request sent by the cache based on the read request so as to enable the cache to obtain readback data based on the downstream data;
The control module includes:
the device comprises a request acquisition unit, a data processing unit and a data processing unit, wherein the request acquisition unit is used for acquiring a data invalidation request input into a to-be-tested cache, and the data invalidation request is used for invalidating target cache data in the to-be-tested cache;
the model updating unit is used for updating cache data corresponding to the target cache data in the cache model and downstream data corresponding to the target cache data in the downstream behavior level model based on the data invalidation request;
and the invalidation checking unit is used for acquiring the readback data returned by the cache in response to the read request for acquiring the target cache data, acquiring the cache data corresponding to the readback data from the cache model as expected data, and checking the readback data according to the expected data to determine whether the data invalidation request is successful in data invalidation.
The application also discloses a cache data invalidation verification system which comprises a cache and the cache data invalidation verification device.
The application also discloses a computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, said processor implementing the method as described above when executing said program.
The present application also discloses a computer-readable medium, having stored thereon a computer program,
the program, when executed by a processor, implements the method as described above.
The method for verifying the invalidation of the cache data acquires a data invalidation request input into the cache to be tested, wherein the data invalidation request is used for invalidating target cache data in the cache to be tested, and a cache model and a downstream behavior level model are updated based on the data invalidation request. When the downstream behavior level model receives a data request sent by the cache based on a read request, downstream data corresponding to the read request can be returned to the cache so that the cache obtains readback data based on the downstream data, the readback data returned by the cache in response to the read request for obtaining the target cache data is obtained, the cache data corresponding to the readback data is obtained from the cache model as expected data, and the readback data is checked according to the expected data to determine whether the data invalidation request is successful in data invalidation. Therefore, after the data invalidation request input to the to-be-tested cache is obtained, the cache model and the downstream behavior level model are updated according to the data invalidation request, so that the cache data and the downstream data corresponding to the target cache data to be invalidated by the data invalidation request in the cache model and the downstream behavior level model are identical. Therefore, when the cache receives a read request for obtaining target cache data which is invalidated based on the data invalidation request, if the data invalidation request of the cache data is successful, that is, the target cache data corresponding to the data invalidation request is invalidated, the cache needs to obtain the downstream data corresponding to the invalidated target cache data from the downstream behavior level model again, and at the moment, the downstream data is identical to the cache data corresponding to the target cache data in the cache model. Therefore, the cache data corresponding to the target cache data in the cache model can be used as expected data, the read-back data returned by the cache based on the read request is checked to determine whether the invalidation request is successful in data invalidation, and verification of the data invalidation function of the cache is realized.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the related art, the drawings that are required to be used in the embodiments or the related technical descriptions will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and other drawings may be obtained according to the drawings without inventive effort for a person having ordinary skill in the art.
FIG. 1 is a block diagram illustrating a particular embodiment of a graphics processor cache in the related art;
FIG. 2 is a flowchart of an embodiment of the cache data invalidation verification method of the present application;
FIG. 3 is a flowchart of an embodiment S100 of a method for validating invalidation of cache data according to the present application;
FIG. 4 is a flowchart of an embodiment S200 of a method for validating invalidation of cache data according to the present application;
FIG. 5 is a flowchart of an embodiment S400 of a method for validating invalidation of cache data according to the present application;
FIG. 6 is a flowchart of a method for verifying invalidation of cache data according to an embodiment S300 of the present application;
FIG. 7 is a flowchart of an embodiment S330 of the method for verifying invalidation of cache data in the present application;
FIG. 8 is a flowchart of an embodiment S500 of a method for validating invalidation of cache data according to the present application;
FIG. 9 is a flowchart illustrating a method for verifying invalidation of cache data according to an embodiment S300 of the present application;
FIG. 10 is a block diagram of an embodiment of a cache data invalidation verification device of the present application;
FIG. 11 is a block diagram illustrating a parameterized module included in a specific embodiment of a cache data invalidation validation apparatus of the present application;
FIG. 12 is a schematic structural diagram of a computer device suitable for implementing embodiments of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are only some, but not all, of the embodiments of the present application. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are within the scope of the present disclosure.
The buffer memory (cache), as a temporary data storage module for data interaction between the processor and external memory, plays a crucial role in improving the data throughput of the processor, and is therefore used extensively inside processors. Illustratively, graphics processors (GPUs) have high-bandwidth, highly parallel characteristics, and the cache plays a critical role in improving the data throughput of the graphics processor. For ease of understanding, the cache of a graphics processor is described below as an example; it should be understood that the present disclosure does not limit the application scenarios of the cache data invalidation verification method, apparatus, and system.
The present disclosure addresses a verification requirement for cache functional characteristics. The functional characteristics include invalidating, at different granularities, cache data present in the cache, such that subsequent read requests for the invalidated cache data will re-read the data from the module downstream of the cache. In the related art, there is no verification scheme for cache data invalidation.
In the related art, as shown in fig. 1, the cache of a graphics processor includes a cache area for storing data from downstream of the cache, the cache area including storage areas such as cache line 0, cache line 1, cache line 2, ..., cache line n, and input interfaces for receiving external requests, including input interface 1, input interface 2, ..., input interface n. The cache of the graphics processor can cache part of the data of its corresponding downstream module. After receiving an externally input data request, if the request processes or obtains cache data already stored in the cache, the cache can process the data or return the cache data directly. If the request needs to process or obtain downstream data stored by the downstream module, the cache of the graphics processor further forwards a data request to the downstream module according to the input data request, thereby processing or obtaining the downstream data stored by the downstream module.
In a graphics processor, there may be dozens of cache types. For example, a cache may be provided with n ports for receiving external requests, including input port 1, input port 2, ..., input port n. According to the number of input ports receiving data requests, caches can be divided into single-port input and multi-port input. Single-port input means that only one port sends data read/write requests to the cache; multi-port input means that multiple ports send data read/write requests to the cache at the same time. According to the type of input request, caches can be classified into read-only caches and read-write caches. A read-only cache receives only read data requests; a read-write cache receives both read and write requests. Read-write caches are further classified into sequential execution and out-of-order execution according to the order in which requests are executed. Sequential execution means that the cache executes requests in the order in which they are received, i.e., if the cache receives requests A -> B in sequence, the order of internal execution is A -> B. Out-of-order execution means that the cache does not have to execute requests in the order they are received, i.e., if the cache receives requests A -> B in sequence, the order of internal execution may be A -> B or B -> A.
The cache of the graphics processor in embodiments of the present disclosure may also need to support certain functional characteristics, and the correctness of their execution also needs to be verified. These functional characteristics may include the data invalidation request. In general, for the cache data in the cache, a data invalidation request may be input to the cache to invalidate, at different granularities, the cache data present in the cache, so that a subsequent read request for the invalidated cache data will re-read the downstream data from the module downstream of the cache as the read-back data of that read request.
In addition, in the related art, during verification of graphics processor caches, separate test environments are built for the verification of different cache types, and separate effort has to be invested for each. Verifying the different cache types therefore requires a large amount of manpower and involves a large amount of repeated verification work, causing huge manpower and time expenditure; as a result, the labor cost of cache verification is high, the verification time is long, and the efficiency is low.
According to one aspect of the present application, a cache data invalidation verification method is disclosed. As shown in fig. 2, in this embodiment, the method includes:
S100: and acquiring a data invalidation request input into the to-be-tested cache, wherein the data invalidation request is used for invalidating target cache data in the to-be-tested cache.
It will be appreciated that, in general, each cache line storing cache data in the cache has a valid flag bit, where 1 indicates that the cache data of the cache line is valid, and 0 indicates that the cache data of the cache line is invalid, and the cache may change the valid flag bit of some or all of the cache lines in the cache from 1 to 0 according to the received data invalidation request. Of course, in practical applications, those skilled in the art may determine how to invalidate data based on the data invalidation request according to actual requirements, which is not limited in this application.
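By way of illustration only, the following minimal sketch shows how per-cache-line valid flag bits might be cleared in response to a data invalidation request, as described above. The class and method names are hypothetical and are not part of the claimed method.

```python
# Illustrative sketch only: per-cache-line valid flag bits (hypothetical names).
class CacheLine:
    def __init__(self, data=0):
        self.data = data
        self.valid = 1  # 1: cache line data is valid, 0: invalid

class SimpleCache:
    def __init__(self, num_lines):
        self.lines = [CacheLine() for _ in range(num_lines)]

    def invalidate(self, line_indices=None):
        # A data invalidation request clears the valid bit of some or all
        # cache lines, depending on the requested granularity.
        targets = range(len(self.lines)) if line_indices is None else line_indices
        for i in targets:
            self.lines[i].valid = 0

cache = SimpleCache(num_lines=4)
cache.invalidate([1, 2])                      # invalidate at cache-line granularity
print([line.valid for line in cache.lines])   # [1, 0, 0, 1]
```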
S200: and updating the cache data corresponding to the target cache data in a cache model and the downstream data corresponding to the target cache data in a downstream behavior level model based on the data invalidation request, wherein the downstream behavior level model is used for returning the downstream data corresponding to the read request to the cache when receiving the data request sent by the cache based on the read request so as to enable the cache to obtain readback data based on the downstream data.
S300: and acquiring the read-back data returned by the cache in response to the read request for acquiring the target cache data, acquiring the cache data corresponding to the read-back data from the cache model as expected data, and checking the read-back data according to the expected data to determine whether the data invalidation request is successful.
The method for verifying the invalidation of the cache data acquires a data invalidation request input into the cache to be tested, wherein the data invalidation request is used for invalidating target cache data in the cache to be tested, and a cache model and a downstream behavior level model are updated based on the data invalidation request. When the downstream behavior level model receives a data request sent by the cache based on a read request, downstream data corresponding to the read request can be returned to the cache so that the cache obtains readback data based on the downstream data, the readback data returned by the cache in response to the read request for obtaining the target cache data is obtained, the cache data corresponding to the readback data is obtained from the cache model as expected data, and the readback data is checked according to the expected data to determine whether the data invalidation request is successful in data invalidation.
Therefore, after the data invalidation request input to the to-be-tested cache is obtained, the cache model and the downstream behavior level model are updated according to the data invalidation request, so that the cache data and the downstream data corresponding to the target cache data to be invalidated by the data invalidation request in the cache model and the downstream behavior level model are identical. Therefore, when the cache receives a read request for obtaining target cache data which is invalidated based on the data invalidation request, if the data invalidation request of the cache data is successful, that is, the target cache data corresponding to the data invalidation request is invalidated, the cache needs to obtain the downstream data corresponding to the invalidated target cache data from the downstream behavior level model again, and at the moment, the downstream data is identical to the cache data corresponding to the target cache data in the cache model. Therefore, the cache data corresponding to the target cache data in the cache model can be used as expected data, the read-back data returned by the cache based on the read request is checked to determine whether the invalidation request is successful in data invalidation, and verification of the data invalidation function of the cache is realized.
It should be noted that, the downstream behavior level model in the present application may simulate the function of the downstream module of the cache, that is, the downstream behavior level model stores downstream data and a data storage address of the corresponding downstream data, and may return the corresponding downstream data to the cache based on a data request sent by the cache.
In addition, the cache model in the application stores cache data and corresponding data storage addresses, wherein the cache data of the cache model can include cache data in a cache and also can include downstream data stored in a downstream behavior level model. The cache model at least includes cache data corresponding to the target cache data, where the cache data corresponding to the target cache data may be target cache data before the cache performs the data invalidation request, and may be data corresponding to the target cache data but different from the target cache data in the downstream behavior level model after the cache performs the data invalidation request. The cache model can be used for searching the corresponding cache data from the cache model according to the data storage address of the cache data.
In one or more optional embodiments, a write request for writing data into the cache may be obtained, and the write request input into the cache is parsed to obtain the data storage address to be written and the cache data to be written; that is, the cache data in the cache and the corresponding data storage addresses are obtained by intercepting and parsing the write requests input into the cache. In other embodiments, the cache model may also be updated by sending requests directly to the cache to obtain cache data and the corresponding data storage addresses from the cache. In practical applications, the cache data in the cache and the corresponding data storage addresses may also be stored into the cache model by manual input or other means, as long as the read-back data returned by the cache can be checked and the functional verification of the cache can be realized; this application is not limited in this respect.
In this application, the cache responds to a read request by determining whether the read request hits. If it hits, the cache takes the cache data corresponding to the read request as the read-back data; if it misses, the cache sends a data request to the downstream behavior level model based on the read request and determines the read-back data from the downstream data returned by the downstream behavior level model.
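For illustration only, the following sketch models this hit/miss path together with a downstream behavior level model. All class and method names are hypothetical, and the cache stub merely stands in for the cache to be tested; it is not the claimed implementation.

```python
# Illustrative sketch only: downstream behavior-level model and the cache's
# hit/miss read path (all names are hypothetical).
class DownstreamBehaviorModel:
    def __init__(self, memory):
        self.memory = dict(memory)          # data storage address -> downstream data

    def handle_data_request(self, address):
        # Return the downstream data corresponding to the cache's data request.
        return self.memory[address]

class CacheUnderTestStub:
    """Stand-in for the cache to be tested, only to show the read path."""
    def __init__(self, downstream):
        self.lines = {}                     # address -> (data, valid)
        self.downstream = downstream

    def read(self, address):
        data, valid = self.lines.get(address, (None, 0))
        if valid:                           # hit: cached data becomes read-back data
            return data
        data = self.downstream.handle_data_request(address)  # miss: go downstream
        self.lines[address] = (data, 1)
        return data

downstream = DownstreamBehaviorModel({0x100: 0xA})
cache = CacheUnderTestStub(downstream)
assert cache.read(0x100) == 0xA             # miss: fetched from the downstream model
assert cache.read(0x100) == 0xA             # hit: returned from the cache line
```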
It should be noted that, in the present application, the cache data obtained from the cache model as the expected data corresponding to the read-back data may be located according to the read request determined from the read-back data, or according to the data storage address corresponding to that read request; the specific method and timing for obtaining the cache data corresponding to the read-back data are not limited in this disclosure.
In a preferred embodiment, as shown in fig. 3, the step of S100 of obtaining the data invalidation request input into the cache to be tested specifically includes:
s110: and carrying out parameterization extraction on the verification request input into the cache to be tested based on the parameter configuration of the cache to obtain verification parameters.
S120: and obtaining a universal verification request according to the verification parameters and a preset data structure.
S130: and determining whether the verification request is the data invalidation request according to the generalized verification request.
Specifically, it can be understood that in the related art, for cache verification of different cache types and functional characteristics, different test environments need to be established for separate tests. In this preferred embodiment, a parameter configuration of the cache can be formed in advance according to the cache to be verified, and verification requests of different cache types can be parameterized and extracted through the parameter configuration of the cache to obtain parameters with the same attributes in the verification requests of different cache types, so that the verification requests can be conveniently identified in terms of attributes such as request type.
The parameter configuration of the cache can be obtained according to the basic parameters of the cache to be verified. Verification parameters corresponding to these basic parameters are extracted from the verification request input into the cache according to the basic parameters in the parameter configuration, realizing the parameterized extraction process. Furthermore, the verification parameters obtained by parameterized extraction are set into the corresponding fields of a preset data structure, so as to convert different types of verification requests into a universal verification request. In this way, after verification requests of different cache types input into the cache are converted into the universal verification request, the request type of the converted universal verification request can be identified and automatic cache verification can be performed, so that verification requests such as data invalidation requests and read requests input into the cache can be automatically identified, making the method suitable for automatic verification in various cache verification scenarios.
Alternatively, the verification request input into the cache may be obtained by an interceptor or by placing a listener in the cache. Of course, in other embodiments, the verification request input into the cache may be obtained in other manners, which is not limited in this application.
In a preferred embodiment, the cache model stores cache data and corresponding data storage addresses in the cache, the downstream behavior level model stores downstream data and corresponding data storage addresses, and the cache data in the cache is at least part of the downstream data in the downstream behavior level model. As shown in fig. 4, the updating, in the cache model, the cache data corresponding to the target cache data and the downstream data corresponding to the target cache data in the downstream behavioral level model based on the data invalidation request in S200 specifically includes:
s210: and determining a target data storage address of target cache data of data invalidation according to the data invalidation request.
S220: and modifying the cache data and the downstream data corresponding to the target data storage address in the cache model and the downstream behavior level model into default values.
Illustratively, the default value may be any value other than the current target cache data in the cache, for example, may be a set initial value, or may be a value that is random and other than the current target cache data in the cache, which is not limiting in this disclosure.
In particular, it will be appreciated that the cache data in the cache is stored in data storage areas of the cache, each data storage area having a unique data storage address. Therefore, the cache data in the cache and the corresponding data storage address can be stored in the cache model so as to backup the cache data in the cache for cache verification.
The downstream data in the corresponding downstream module is stored in the data storage areas of the downstream module, and each data storage area has a unique data storage address. The downstream behavior level model can simulate the function of the module downstream of the cache: it stores the downstream data of the downstream module and the corresponding data storage addresses, and, when a data request sent by the cache based on a read request is received, determines among all data storage addresses the data storage address of the downstream data to be read according to the data request and returns the downstream data corresponding to that data storage address to the cache, so that the cache sends this downstream data as the read-back data of the read request.
In order to verify the data invalidation processing result of the data invalidation request, a target data storage address of target cache data to be invalidated can be determined according to the data invalidation request input into the cache, and both the cache data and the downstream data in the cache model and the downstream behavior level model are modified to default values according to the target data storage address obtained by analysis. If the invalidation processing of the cache on the cache data is successful, the cache needs to re-acquire the downstream data corresponding to the target cache data from the downstream behavior level model in real time when receiving a read request for acquiring the target cache data which is successful in invalidation processing, namely the cache acquires a default value from the downstream behavior level model, and sends the default value as readback data. At this time, the cache data corresponding to the target data storage address in the cache model is also a default value, and the default value is the expected data corresponding to the read request. Thus, read-back data may be checked against the expected data to determine if the data invalidation request was successfully processed. If the read-back data and the expected data are the same, namely the read-back data and the expected data are both default values, the read-back data returned by the cache based on the read request of the read target cache data are the downstream data acquired from the downstream behavior level model again, the data invalidation processing is successful, and the operation state of the cache is normal.
For example, in one specific example, the downstream behavior level model stores downstream data a, the cache stores cache data a that is identical to the downstream data a, and the cache receives a data invalidation request of the cache data a and performs invalidation processing on the cache data a. And simultaneously, respectively modifying the cache data A and the downstream data A in the cache model and the downstream behavior level model into default values B.
When the cache receives a read request for reading the cache data A, if the data invalidation process is successful, the cache does not return the cache data A any more, and the downstream data corresponding to the cache data A needs to be re-acquired from the downstream behavior level model, namely a default value B corresponding to the cache data A is acquired from the downstream behavior level model, and the default value B is returned as readback data. At this time, the default value B corresponding to the cached data a, that is, the expected data corresponding to the read request, is stored in the cache model. Thus, it is possible to determine whether the data invalidation processing of the cache data a is successful or not according to the default value B in the cache model. If the read-back data is the default value B, the read-back data is consistent with the expected data, the cache acquires the downstream data from the downstream behavior level model in real time and returns the downstream data, and invalidation of the cache data A is successful. If the read-back data is inconsistent with the expected data, the cache does not acquire the default value B corresponding to the cache data A from the downstream behavioral level model again, and invalidation processing of the cache data A is unsuccessful.
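For illustration only, the following sketch mirrors steps S210-S220 and the cache data A / default value B example above. Representing the cache model and the downstream behavior level model as simple address-to-data dictionaries is an assumption made purely for illustration.

```python
# Illustrative sketch only: update both models to a default value for the
# invalidated target address (dictionary-based models are an assumption).
DEFAULT_VALUE = 0xB   # any value different from the current target cache data

cache_model = {0x100: 0xA}        # data storage address -> cache data
downstream_model = {0x100: 0xA}   # data storage address -> downstream data

def apply_data_invalidation(invalidate_request):
    # S210: determine the target data storage address from the invalidation request.
    target_address = invalidate_request["address"]
    # S220: modify both models at that address to the default value.
    cache_model[target_address] = DEFAULT_VALUE
    downstream_model[target_address] = DEFAULT_VALUE

apply_data_invalidation({"address": 0x100})
# If the cache later misses on 0x100 and re-reads from the downstream model,
# the read-back data (0xB) matches the expected data held in the cache model.
assert cache_model[0x100] == downstream_model[0x100] == DEFAULT_VALUE
```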
In a preferred embodiment, the modifying, in S220, the cache data and the downstream data corresponding to the target data storage address in the cache model and the downstream behavior level model to default values specifically includes:
and deleting the cache data and the downstream data corresponding to the target data storage address in the cache model and the downstream behavior level model.
Specifically, the purpose of modifying the cache data and the downstream data corresponding to the target cache data of the data invalidation request into default values can be achieved by directly deleting the cache data corresponding to the target data storage address in the cache model and the downstream data corresponding to the target data storage address in the downstream behavior level model, for example, after the deleting operation is executed, the cache data and the downstream data corresponding to the target data storage address in the cache model and the downstream behavior level model are directly updated into initial values, so that the data updating of the cache model and the downstream behavior level model is simpler and more convenient.
In a preferred embodiment, as shown in fig. 5, the method further includes S400, before obtaining, from the cache model, cache data corresponding to the readback data as expected data:
S410: and carrying out parameterization extraction on the verification request input into the cache to be tested based on the parameter configuration of the cache to obtain verification parameters.
S420: and obtaining a universal verification request according to the verification parameters and a preset data structure.
S430: and determining whether the verification request is a read request according to the generalized verification request, and if so, acquiring cache data corresponding to the read request from the cache model as expected data corresponding to the read-back data.
Specifically, it can be understood that in the related art, for cache verification of different cache types and functional characteristics, different test environments need to be established for separate tests. In this preferred embodiment, a parameter configuration of the cache can be formed in advance according to the cache to be verified, and verification requests of different cache types can be parameterized and extracted through the parameter configuration of the cache to obtain parameters with the same attributes in the verification requests of different cache types, so that the verification requests can be conveniently identified in terms of attributes such as request type.
The parameter configuration of the cache can be obtained according to the basic parameters of the cache to be verified. In an alternative embodiment, the parameter configuration of the cache may include basic parameters of the cache, such as input interface type, number of input interfaces, input request address bit width, input request data bit width, cache line data bit width, and cache line initialization data. Verification parameters corresponding to these basic parameters are extracted from the verification request input into the cache according to the basic parameters in the parameter configuration, realizing the parameterized extraction process.
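For illustration only, a parameter configuration along the lines just listed might be captured as follows. The field names, types and example values are hypothetical and are not taken from the application.

```python
# Illustrative sketch only: a cache parameter configuration for parameterized
# extraction (field names and values are hypothetical).
from dataclasses import dataclass

@dataclass
class CacheParamConfig:
    input_interface_type: str     # e.g. "read_only" or "read_write"
    num_input_interfaces: int     # number of input ports
    request_addr_width: int       # input request address bit width
    request_data_width: int       # input request data bit width
    cache_line_data_width: int    # cache line data bit width
    cache_line_init_value: int    # cache line initialization data

cfg = CacheParamConfig("read_write", 2, 32, 64, 512, 0)
```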
Furthermore, verification parameters can be obtained according to parameterization extraction, and the extracted verification parameters are set in corresponding preset data structures according to preset data structures, so that the aim of converting different types of verification requests into universal verification requests is fulfilled. Therefore, after the verification requests of different cache types of the input cache are converted into the universal verification request, the request types of the converted universal verification request can be identified, and automatic cache verification is performed, so that the verification requests such as the data invalidation request and the read request of the input cache can be automatically identified, and the method is suitable for automatic verification of various cache verification scenes. Furthermore, by setting the data structure, the method can also realize the identification of whether the cache data which needs to be read by the read request is the cache data which is invalidated by the data invalidation request, namely, the method can automatically identify whether the read request is the read request corresponding to the data invalidation request, so that the corresponding verification process is adopted for the read-back data returned by the cache, and the invalidation verification of the cache data is realized. Therefore, the verification method of the preferred embodiment can also be used for the verification process of different cache types such as read-only cache verification, read-write cache verification and other functional characteristics of the cache, and different test environments do not need to be built for different cache types, so that the labor cost of the cache verification is greatly reduced, the verification time of the cache verification is shortened, and the verification efficiency is improved.
The preset data structure can be set according to the verification parameters obtained in the parameterized extraction process. In an alternative embodiment, the unified data structure may include fields such as a request type (trans_type), a request address (address), read/write data (data), a data mask (mask), a request tag (id), and a user definition (user_define). The request type marks whether the request is a read or a write request; the request address marks the address to be read or written; the read/write data field stores the write data or the read-back data; the data mask identifies whether the data corresponding to each mask bit is to be written into the cache. For example, the data to be written by a write request may not be written in its entirety, and the mask marks which data actually needs to be written into the cache, e.g., when a 64-bit data word is written into the cache, an 8-bit mask determines the data actually written by the write request, for use in subsequent consistency verification. The request tag means that each request has a unique tag, which marks the correspondence between read-back data and requests. The user definition allows the user to extend the data request structure during the verification of different cache types. For example, the user may define a valid bit of the write request at the user-defined location: if it is 1, the write request is invalid, will not be written into the cache, and will be discarded directly; if it is 0, the write request is valid and will be written into the cache normally. Of course, in practical applications, a person skilled in the art may set the user definition according to actual requirements, so as to adapt to various cache data verification scenarios, which is not limited in this application.
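For illustration only, the unified data structure described above might be sketched as follows. The field names follow the text, while the types, defaults and the example request are assumptions made purely for illustration.

```python
# Illustrative sketch only: a generic verification request data structure
# (types, defaults and example values are assumptions).
from dataclasses import dataclass, field

@dataclass
class GenericRequest:
    trans_type: str            # request type: "read", "write", "invalidate", ...
    address: int               # data storage address to read or write
    data: int = 0              # write data or read-back data
    mask: int = 0xFF           # per-bit mask marking which data is actually written
    id: int = 0                # request tag pairing read-back data with its request
    user_define: dict = field(default_factory=dict)  # user-extensible fields

# Per the example above, a user-defined valid bit of 0 marks the write as valid.
req = GenericRequest(trans_type="write", address=0x40, data=0x1234,
                     user_define={"valid": 0})
```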
In alternative embodiments, the verification request input into the cache may be obtained by an interceptor or by placing a listener in the cache. Of course, in other embodiments, the verification request input into the cache may be obtained in other manners, which is not limited in this application.
It should be noted that, in addition to determining whether the verification request is a read request based on the above-mentioned universal verification request, a read request input to the cache to be tested may also be directly obtained, where the read request is a read request for obtaining the target cache data. The present disclosure is not limited in the manner in which read requests are obtained.
In a preferred embodiment, as shown in fig. 6, the step S300 of obtaining the expected data corresponding to the read-back data from the cache model specifically includes:
s310: and storing the read request input into the cache into a request queue.
S320: and acquiring a read request corresponding to the read-back data from the request queue according to the read-back data returned by the cache based on the read request.
S330: and acquiring corresponding cache data from the cache model according to the read request as expected data.
The read request input into the cache may be obtained directly or based on the universal verification request.
In particular, it will be appreciated that in the preferred embodiment, a request queue may be pre-configured, which may be used to store read requests. The retrieved read request may be stored to a request queue. When the read-back data returned by the cache based on the read request is obtained from the cache, the read request corresponding to the read-back data can be obtained from the request queue, the cache data corresponding to the target cache data read by the read request is obtained from the cache model according to the read request as expected data, and the read-back data is verified according to the expected data.
In one or more embodiments, received read requests may be stored in the request queue in sequence, and the returned read-back data is likewise returned by the cache in sequence; read requests are then taken from the request queue in order as read-back data arrives, each being the read request corresponding to the current read-back data. In other embodiments, a request tag may be set in the read request, and received read requests may be stored in the request queue out of order; the returned read-back data also carries the request tag, so the read request corresponding to that tag can be obtained from the request queue according to the request tag in the read-back data.
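For illustration only, the following sketch pairs read-back data with the read request that produced it, both in order (S310-S320) and by request tag for out-of-order return. Representing a request as a dictionary with an "id" field is an assumption made purely for illustration.

```python
# Illustrative sketch only: request queue matching, in order or by request tag.
from collections import deque

request_queue = deque()

def on_read_request(req):
    request_queue.append(req)          # S310: store read requests entering the cache

def on_read_back_in_order(read_back_data):
    req = request_queue.popleft()      # in-order caches return data in request order
    return req, read_back_data

def on_read_back_by_tag(read_back_id, read_back_data):
    # Out-of-order caches carry the request tag in the read-back data,
    # so the matching pending request is looked up by its id.
    for i, req in enumerate(request_queue):
        if req["id"] == read_back_id:
            del request_queue[i]
            return req, read_back_data
    raise LookupError("no pending read request with id %d" % read_back_id)

on_read_request({"id": 7, "address": 0x100})
req, data = on_read_back_by_tag(7, 0xB)    # matches the pending request with id 7
```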
In a preferred embodiment, the cache model stores cache data and corresponding data storage addresses in the cache. As shown in fig. 7, the step S330 of obtaining the corresponding expected data from the cache model according to the read request specifically includes:
s331: and determining a data storage address to be read according to the read request.
S332: and acquiring corresponding cache data from the cache model according to the data storage address to be read.
S333: and taking the cached data as the expected data corresponding to the read request.
Specifically, the cache model stores cache data in the cache and a corresponding data storage address, and the cache data in the cache model needs to at least include cache data that is subjected to invalidation processing based on the data invalidation request. In an alternative embodiment, when the cache receives a write request, the write request input to the cache may be obtained, and the cache model may be updated according to the write request. Specifically, the input cached verification request can be subjected to parameterization extraction, and the universal verification request can be obtained through conversion according to a preset data structure, so that the universal verification request can be identified to determine the request type of the verification request. If the request type is a write request, determining a data storage address written into the cache and corresponding written cache data according to the write request, and storing the data storage address and the corresponding cache data into a cache model.
Thus, the write operations of verification requests to the cache, and all write requests going downstream from the cache, are recorded in the cache model. When the verification request is a read request, the data storage address of the read request can be determined; since the cache model stores the data storage addresses and cache data of previous write operations, the corresponding data storage address is matched according to the data storage address of the read request, and the cache data corresponding to that address can then be obtained from the cache model as the expected data.
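For illustration only, the following sketch records write requests in a cache model and looks up expected data for a read request by its data storage address (steps S331-S333). The dictionary-based cache model and the request representation are assumptions made purely for illustration.

```python
# Illustrative sketch only: maintain the cache model from write requests and
# look up expected data for a read request by address (names are hypothetical).
cache_model = {}  # data storage address -> cache data

def on_write_request(write_req):
    # Record the write's data storage address and data in the cache model.
    cache_model[write_req["address"]] = write_req["data"]

def expected_data_for(read_req):
    # S331-S333: the data storage address to be read selects the expected data.
    return cache_model[read_req["address"]]

on_write_request({"address": 0x80, "data": 0xDEAD})
assert expected_data_for({"address": 0x80}) == 0xDEAD
```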
In a preferred embodiment, as shown in fig. 8, the method further comprises S500, prior to verifying the read-back data from the expected data:
s510: storing the expected data to the expected data queue.
S520: and acquiring expected data corresponding to the read-back data from the expected data queue so as to verify the read-back data according to the expected data.
In particular, it will be appreciated that in this preferred embodiment an expected data queue may be preset for storing expected data. Thus, in one or more embodiments, when the cache returns read-back data in the order in which it received the verification requests, the expected data may be obtained from the cache model in that same order and stored into the expected data queue in sequence. When the read-back data is checked, the expected data is taken from the expected data queue in the order of the read-back data, i.e., the expected data corresponding to each piece of read-back data, and the obtained read-back data can be checked against it to determine whether the data invalidation succeeded. Setting up the expected data queue speeds up obtaining the expected data, improves the efficiency of checking the read-back data, and thus further improves cache verification efficiency.
In other embodiments, if a request tag is set in the verification request, the expected data acquired according to the verification request may be stored in the expected data queue in correspondence with that request tag. The read-back data returned by the cache then also carries the request tag, and the expected data corresponding to the tag can be obtained from the expected data queue according to the request tag in the read-back data for consistency checking.
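For illustration only, the following sketch keeps an expected data queue keyed by request tag (S510-S520), so that read-back data returned out of order can still locate its expected data. The structure and function names are assumptions made purely for illustration.

```python
# Illustrative sketch only: expected data queue keyed by request tag.
expected_data_queue = {}

def push_expected(request_id, expected_data):
    expected_data_queue[request_id] = expected_data   # S510: store expected data

def pop_expected(read_back_id):
    return expected_data_queue.pop(read_back_id)      # S520: fetch by request tag

push_expected(7, 0xB)
assert pop_expected(7) == 0xB
```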
In a preferred embodiment, as shown in fig. 9, the step S300 of verifying the read-back data according to the expected data to determine whether the data invalidation request is successful specifically includes:
s340: checking whether the expected data and the read-back data are identical.
S350: if the data are the same, the data invalidation is successful;
s360: if not, the data invalidation fails.
Specifically, it can be understood that, after the cache data in the cache model and the downstream data in the downstream behavior level model corresponding to the cache data to be invalidated have been modified to default values, a correctly operating cache will perform invalidation processing on its cache data based on the data invalidation request. The invalidation processing may be, but is not limited to, modifying the data to an invalid value or setting an invalid flag for the cache data.
If the cache successfully invalidates the cache data based on the data invalidation request, the read-back data it returns for a read request that obtains the invalidated cache data is the default value obtained from the downstream behavior level model, and the cache data correspondingly stored in the cache model is also that default value. Therefore, if the read-back data is inconsistent with the expected data, the cache did not obtain the downstream data from the downstream behavior level model in real time as the read-back data, and the data invalidation processing was unsuccessful. If the read-back data is consistent with the expected data, the cache returned not the old cache data but the downstream data obtained in real time from the downstream behavior level model, and the data invalidation processing was successful.
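For illustration only, the following sketch performs the comparison of steps S340-S360, reusing the cache data A / default value B example above. The function name and values are assumptions made purely for illustration.

```python
# Illustrative sketch only: compare read-back data with expected data from the
# cache model to decide whether the data invalidation succeeded (S340-S360).
def check_invalidation(expected_data, read_back_data):
    if read_back_data == expected_data:
        # The cache re-fetched the default value from the downstream model:
        # the data invalidation succeeded.
        return True
    # The cache still returned stale cache data: the invalidation failed.
    return False

assert check_invalidation(0xB, 0xB) is True    # default value read back -> success
assert check_invalidation(0xB, 0xA) is False   # stale data read back -> failure
```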
Based on the same principle, the embodiment also discloses a cache data invalidation verification device. As shown in fig. 10, in this embodiment, the apparatus includes a cache model 12, a downstream behavior level model 13, and a control module 11, where the downstream behavior level model 13 is configured to, when receiving a data request sent by the cache based on a read request, return downstream data corresponding to the read request to the cache so that the cache obtains readback data based on the downstream data.
The control module 11 includes a request acquisition unit 111, a model update unit 112, and an invalidation verification unit 113.
The request acquiring unit 111 is configured to acquire a data invalidation request input to the cache to be tested.
The model updating unit 112 is configured to update the cache model and the downstream behavior level model based on the data invalidation request.
The invalidation checking unit 113 is configured to obtain readback data returned by the cache in response to the read request for obtaining the target cache data, obtain expected data corresponding to the readback data from the cache model, and check the readback data according to the expected data to determine whether the data invalidation request is successful.
In a preferred embodiment, the request obtaining unit 111 is specifically configured to perform parameterized extraction on the verification request input into the to-be-tested cache based on the cached parameter configuration to obtain a verification parameter; obtaining a universal verification request according to the verification parameters and a preset data structure; and determining whether the verification request is the data invalidation request according to the generalized verification request.
In a preferred embodiment, the cache model stores cache data and corresponding data storage addresses in the cache, the downstream behavior level model stores downstream data and corresponding data storage addresses, and the cache data in the cache is at least part of the downstream data in the downstream behavior level model.
The model updating unit 112 is specifically configured to determine a target data storage address for data invalidation according to the data invalidation request; and modifying the cache data and the downstream data corresponding to the target data storage address in the cache model and the downstream behavior level model into default values.
In a preferred embodiment, the model updating unit 112 is specifically configured to delete the cache data and the downstream data corresponding to the target data storage address in the cache model and the downstream behavior level model, so as to modify the cache data and the downstream data corresponding to the target data storage address in the cache model and the downstream behavior level model to default values.
In a preferred embodiment, as shown in fig. 11, the apparatus further comprises a parameterization module 10. The parameterization module 10 is configured to, before the expected data corresponding to the read-back data is acquired from the cache model, perform parameterized extraction on a verification request input into the cache to be tested, based on the parameter configuration of the cache, to obtain verification parameters; obtain a generalized verification request according to the verification parameters and a preset data structure; and determine, according to the generalized verification request, whether the verification request is a read request, and if so, acquire the expected data corresponding to the read-back data from the cache model.
In a preferred embodiment, the invalidation checking unit 113 is specifically configured to store the read request corresponding to the data invalidation request into a request queue; acquire, according to the read-back data returned by the cache based on the read request, the read request corresponding to that read-back data from the request queue; and acquire the corresponding expected data from the cache model according to the read request, so as to obtain the expected data corresponding to the read-back data from the cache model.
In a preferred embodiment, the cache model stores the cache data in the cache and the corresponding data storage addresses. The invalidation checking unit 113 is specifically configured to determine a data storage address to be read according to the read request; obtain the corresponding cache data from the cache model according to the data storage address to be read; and take the cache data as the expected data corresponding to the read request, so as to acquire the corresponding expected data from the cache model according to the read request.
In a preferred embodiment, the invalidation checking unit 113 is further configured to store the expected data into an expected data queue, and acquire the expected data corresponding to the read-back data from the expected data queue, so as to verify the read-back data according to the expected data.
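Taken together, the request queue and the expected data queue can be sketched in Python as follows; the FIFO matching of read requests to read-back data and all names are illustrative assumptions rather than the unit's actual implementation:

```python
from collections import deque

DEFAULT_VALUE = 0x0                   # assumed default for invalidated entries
cache_model: dict = {}                # address -> cache data mirrored in the cache model
request_queue: deque = deque()        # read requests in flight (FIFO order assumed)
expected_data_queue: deque = deque()  # expected data awaiting comparison

def on_read_request(addr: int) -> None:
    """Record each read request that enters the cache under test."""
    request_queue.append(addr)

def on_read_back(read_back_data: int) -> bool:
    """When read-back data is returned, pop the matching read request, look up
    the cache model at its address, queue that value as expected data, and check."""
    addr = request_queue.popleft()
    expected_data_queue.append(cache_model.get(addr, DEFAULT_VALUE))
    expected = expected_data_queue.popleft()
    return read_back_data == expected
```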
In a preferred embodiment, the invalidation checking unit 113 is further configured to check whether the expected data and the read-back data are the same; if they are the same, the data invalidation succeeded; if they are not the same, the data invalidation failed, thereby verifying the read-back data according to the expected data to determine whether the data invalidation request succeeded in invalidating the data.
Since the principle by which the device solves the problem is similar to that of the above method, the implementation of the device may refer to the implementation of the method and is not described again here.
Based on the same principle, this embodiment also discloses a cache verification system. The cache verification system includes a cache and the cache data invalidation verification device described in this embodiment.
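As a usage illustration only, the system-level flow may be sketched as follows; dut_cache and its methods are hypothetical placeholders for the interface of the cache under test, not an interface defined by this application:

```python
# Hypothetical end-to-end flow over a simplified, blocking cache interface.
def verify_invalidation(dut_cache, cache_model: dict, downstream_model: dict,
                        addr: int, default: int = 0x0) -> bool:
    dut_cache.invalidate(addr)                 # 1. data invalidation request
    cache_model.pop(addr, None)                # 2. update the cache model and
    downstream_model.pop(addr, None)           #    the downstream behavior level model
    read_back = dut_cache.read(addr)           # 3. read request for the target cache data
    expected = cache_model.get(addr, default)  # 4. expected data (the default value)
    return read_back == expected               # 5. True means the invalidation succeeded
```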
Since the principle of solving the problem of the system is similar to that of the above method, the implementation of the system can be referred to the implementation of the method, and will not be repeated here.
The systems, apparatus, modules or units described in the above embodiments may be implemented in particular by a computer chip or entity or by a computer program product having certain functions. A typical implementation device is a computer device, which may be, for example, a personal computer, a laptop computer, a cellular telephone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
In a typical example, the computer apparatus includes a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor executes the program to implement a method performed by a client as described above, or where the processor executes the program to implement a method performed by a server as described above.
Referring now to FIG. 12, there is illustrated a schematic diagram of a computer device 600 suitable for use in implementing embodiments of the present application.
As shown in fig. 12, the computer device 600 includes a Central Processing Unit (CPU) 601, which can perform various appropriate actions and processing according to a program stored in a Read Only Memory (ROM) 602 or a program loaded from a storage section 608 into a Random Access Memory (RAM) 603. The RAM 603 also stores various programs and data required for the operation of the system 600. The CPU 601, the ROM 602, and the RAM 603 are connected to one another through a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
The following components are connected to the I/O interface 605: an input section 606 including a keyboard, a mouse, and the like; an output section 607 including a cathode ray tube (CRT), a liquid crystal display (LCD), and the like, as well as a speaker and the like; a storage section 608 including a hard disk and the like; and a communication section 609 including a network interface card such as a LAN card, a modem, or the like. The communication section 609 performs communication processing via a network such as the Internet. A drive 610 is also connected to the I/O interface 605 as needed. A removable medium 611, such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like, is mounted on the drive 610 as needed, so that a computer program read therefrom is installed into the storage section 608 as needed.
In particular, according to embodiments of the present application, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present application include a computer program product comprising a computer program tangibly embodied on a machine-readable medium, the computer program comprising program code for performing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 609, and/or installed from the removable medium 611.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium, which can be used to store information that can be accessed by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media (transitory media), such as modulated data signals and carrier waves.
For convenience of description, the above device is described as being divided into various units by function. Of course, when the present application is implemented, the functions of the units may be implemented in one or more pieces of software and/or hardware.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The application may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The application may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
In this specification, the embodiments are described in a progressive manner; identical or similar parts of the embodiments may be referred to one another, and each embodiment focuses on its differences from the other embodiments. In particular, the system embodiments are described relatively briefly because they are substantially similar to the method embodiments; for relevant details, reference may be made to the corresponding description of the method embodiments.
The foregoing is merely exemplary of the present application and is not intended to limit the present application. Various modifications and changes may be made to the present application by those skilled in the art. Any modifications, equivalent substitutions, improvements, etc. which are within the spirit and principles of the present application are intended to be included within the scope of the claims of the present application.

Claims (13)

1. A cache data invalidation verification method, comprising:
acquiring a data invalidation request input into a to-be-tested cache, wherein the data invalidation request is used for invalidating target cache data in the to-be-tested cache;
updating cache data corresponding to the target cache data in a cache model and downstream data corresponding to the target cache data in a downstream behavior level model based on the data invalidation request, wherein the downstream behavior level model is used for returning the downstream data corresponding to the read request to the cache when receiving a data request sent by the cache based on the read request so that the cache obtains readback data based on the downstream data;
and acquiring the read-back data returned by the cache in response to the read request for acquiring the target cache data, acquiring the cache data corresponding to the read-back data from the cache model as expected data, and checking the read-back data according to the expected data to determine whether the data invalidation request is successful.
2. The cache data invalidation verification method according to claim 1, wherein the acquiring the data invalidation request input into the cache to be tested comprises:
performing parameterized extraction on the verification request input into the cache to be tested, based on the parameter configuration of the cache, to obtain verification parameters;
obtaining a generalized verification request according to the verification parameters and a preset data structure;
and determining whether the verification request is the data invalidation request according to the generalized verification request.
3. The method of claim 1, wherein the cache model stores cache data and corresponding data storage addresses in the cache, the downstream behavior level model stores downstream data and corresponding data storage addresses, and the cache data in the cache is at least part of the downstream data in the downstream behavior level model;
the updating the cache data corresponding to the target cache data in the cache model and the downstream data corresponding to the target cache data in the downstream behavior level model based on the data invalidation request includes:
determining, according to the data invalidation request, a target data storage address of the target cache data to be invalidated;
and modifying the cache data and the downstream data corresponding to the target data storage address in the cache model and the downstream behavior level model into default values.
4. The method of claim 3, wherein modifying the cache data and the downstream data corresponding to the target data storage address in the cache model and the downstream behavior-level model to default values comprises:
and deleting the cache data and the downstream data corresponding to the target data storage address in the cache model and the downstream behavior level model.
5. The cache data invalidation verification method according to claim 1, further comprising, before acquiring the cache data corresponding to the read-back data from the cache model as expected data:
performing parameterized extraction on the verification request input into the cache to be tested, based on the parameter configuration of the cache, to obtain verification parameters;
obtaining a generalized verification request according to the verification parameters and a preset data structure;
and determining whether the verification request is a read request according to the generalized verification request, and if so, acquiring cache data corresponding to the read request from the cache model as expected data corresponding to the read-back data.
6. The method of claim 1, wherein the obtaining, from the cache model, the cache data corresponding to the readback data as expected data comprises:
storing the read request input into the cache into a request queue;
acquiring a read request corresponding to the read-back data from the request queue according to the read-back data returned by the cache based on the read request;
and acquiring corresponding cache data from the cache model according to the read request as expected data.
7. The method of claim 6, wherein the cache model stores cache data in the cache and corresponding data storage addresses;
the obtaining the corresponding expected data from the cache model according to the read request includes:
determining a data storage address to be read according to the read request;
obtaining corresponding cache data from the cache model according to the data storage address to be read;
and taking the cached data as the expected data corresponding to the read request.
8. The cache data invalidation verification method according to claim 1, further comprising, before verifying the read-back data according to the expected data:
storing the expected data into an expected data queue;
and acquiring expected data corresponding to the read-back data from the expected data queue so as to verify the read-back data according to the expected data.
9. The cache data invalidation verification method according to any one of claims 1 to 8, wherein the verifying the read-back data according to the expected data to determine whether the data invalidation request is successful comprises:
checking whether the expected data and the read-back data are the same;
if the data are the same, the data invalidation is successful;
if not, the data invalidation fails.
10. A cache data invalidation verification device, characterized by comprising a cache model, a downstream behavior level model and a control module, wherein the downstream behavior level model is configured to return, upon receiving a data request sent by the cache based on a read request, downstream data corresponding to the read request to the cache, so that the cache obtains read-back data based on the downstream data;
the control module includes:
a request acquisition unit, configured to acquire a data invalidation request input into a cache to be tested, wherein the data invalidation request is used for invalidating target cache data in the cache to be tested;
a model updating unit, configured to update, based on the data invalidation request, cache data corresponding to the target cache data in the cache model and downstream data corresponding to the target cache data in the downstream behavior level model; and
an invalidation checking unit, configured to acquire the read-back data returned by the cache in response to the read request for acquiring the target cache data, acquire the cache data corresponding to the read-back data from the cache model as expected data, and verify the read-back data according to the expected data to determine whether the data invalidation request succeeded in invalidating the data.
11. A cache data invalidation verification system, comprising a cache and the cache data invalidation verification device according to claim 10.
12. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that,
the processor implementing the method according to any of claims 1-9 when executing the program.
13. A computer readable medium having a computer program stored thereon, characterized in that,
the program, when executed by a processor, implements the method of any of claims 1-9.
CN202310010874.4A 2023-01-05 2023-01-05 Cache data invalidation verification method, device and system Active CN115826875B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310010874.4A CN115826875B (en) 2023-01-05 2023-01-05 Cache data invalidation verification method, device and system


Publications (2)

Publication Number Publication Date
CN115826875A CN115826875A (en) 2023-03-21
CN115826875B true CN115826875B (en) 2023-04-28

Family

ID=85520112

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310010874.4A Active CN115826875B (en) 2023-01-05 2023-01-05 Cache data invalidation verification method, device and system

Country Status (1)

Country Link
CN (1) CN115826875B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108897615A (en) * 2018-05-31 2018-11-27 康键信息技术(深圳)有限公司 Second kills request processing method, application server cluster and storage medium
CN112307088A (en) * 2020-11-03 2021-02-02 平安普惠企业管理有限公司 Method, device and equipment for inquiring state of process node and storage medium
CN115061972A (en) * 2022-07-05 2022-09-16 摩尔线程智能科技(北京)有限责任公司 Processor, data read-write method, device and storage medium
CN115544042A (en) * 2022-11-02 2022-12-30 昆仑芯(北京)科技有限公司 Cached information updating method and device, equipment and medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11210102B2 (en) * 2019-11-26 2021-12-28 Arm Limited Speculative buffer for speculative memory accesses with entries tagged with execution context identifiers


Also Published As

Publication number Publication date
CN115826875A (en) 2023-03-21

Similar Documents

Publication Publication Date Title
CN111414389B (en) Data processing method and device, electronic equipment and storage medium
US20050131671A1 (en) Subscriber identification module (SIM) emulator
CN115130402B (en) Cache verification method, system, electronic equipment and readable storage medium
CN105516230B (en) A kind of data processing method and device
CN107092535B (en) Method and apparatus for data storage of test interface
US11663288B2 (en) Just-in-time front end template generation using logical document object models
CN111352836A (en) Pressure testing method and related device
CN109710185A (en) Data processing method and device
CN112368682A (en) Using cache for content verification and error remediation
CN116627331B (en) Cache verification device, method and system
CN111459948A (en) Data block deleting method based on centralized block chain type account book
CN111506580A (en) Transaction storage method based on centralized block chain type account book
CN113312008B (en) Processing method, system, equipment and medium for file read-write service
CN106156291A (en) The caching method of static resource and system thereof based on Localstroage
CN115826875B (en) Cache data invalidation verification method, device and system
CN113297084A (en) Test method, test system, electronic equipment and storage medium
CN112800063A (en) Automatic label passing method and device based on data structure
CN115756998B (en) Cache data re-fetching mark verification method, device and system
CN115934338A (en) Inter-process communication method and device
CN113238940B (en) Interface test result comparison method, device, equipment and storage medium
CN115061948A (en) Method and system for verifying non-aligned access in multi-core system
US8793663B2 (en) Smart cache for a server test environment in an application development tool
CN113852610A (en) Message processing method and device, computer equipment and storage medium
CN111143644B (en) Identification method and device of Internet of things equipment
CN113064895A (en) Incremental updating method, device and system for map

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant