CN115269199A - Data processing method and device, electronic equipment and computer readable storage medium

Data processing method and device, electronic equipment and computer readable storage medium

Info

Publication number
CN115269199A
Authority
CN
China
Prior art keywords
data
information
target
request
cache
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210964026.2A
Other languages
Chinese (zh)
Inventor
韩新辉
姚永斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Eswin Computing Technology Co Ltd
Original Assignee
Beijing Eswin Computing Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Eswin Computing Technology Co Ltd filed Critical Beijing Eswin Computing Technology Co Ltd
Priority to CN202210964026.2A
Publication of CN115269199A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005 Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F 9/5027 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02 Addressing or allocation; Relocation
    • G06F 12/08 Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F 12/0802 Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F 12/0893 Caches characterised by their organisation or structure
    • G06F 12/0897 Caches characterised by their organisation or structure with two or more cache hierarchy levels
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/30 Arrangements for executing machine instructions, e.g. instruction decode
    • G06F 9/30098 Register arrangements

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The embodiments of the present application provide a data processing method and device, an electronic device, and a computer-readable storage medium, and relate to the field of computer technology. In response to a data request, it is determined whether the target data requested by the data request exists in a data cache of the processor; when the target data exists in the data cache and satisfies a preset state update condition, the data state information of the target data is updated. The data state information is stored in a pre-extended register file. Thus, on the one hand, because the register file has multiple read ports and multiple write ports, writing the updated data state information requires no arbitration with other requests that read data; on the other hand, the register file is physically stable, so the stored data is unlikely to be corrupted, and no corresponding check information needs to be generated when the data state information is written. As a result, the timing for updating the data state information is shorter and CPU performance is improved.

Description

Data processing method and device, electronic equipment and computer readable storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a data processing method, an apparatus, an electronic device, and a computer-readable storage medium.
Background
In the field of computer technology, the cache of a computer device is generally divided into three levels: a first-level cache L1, a second-level cache L2, and a third-level cache L3. In terms of speed, L1 is the fastest, L2 is the second fastest, and L3 is the slowest; in terms of capacity, L1 is the smallest, L2 is larger, and L3 is the largest. Together, the three levels of cache serve as a high-speed data buffer between the CPU and main memory.
The second-level cache receives multiple data requests for reading/writing data. For each data request, the last processing operation in executing the request is to update the data state of the requested data, that is, to update the coherency state of the cache line. The data state indicates whether the requested data was modified during execution of the data request. For example, if the requested data was modified during execution of the data request, its data state is dirty; if it was not modified, its data state is clean.
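As a purely illustrative aid (not part of the described background), a minimal C++ sketch of the clean/dirty distinction might look as follows; the type and function names are assumptions of this sketch.

    #include <cstdint>

    // Hypothetical per-line data state for illustration: clean means the cached
    // copy matches main memory, dirty means it was modified while cached.
    enum class DataState : uint8_t { Clean, Dirty };

    struct CacheLineMeta {
        uint64_t  address;  // address of the cached block
        DataState state;    // updated as the last step of handling a data request
    };

    // Record whether the just-executed data request modified the requested data.
    // A line that is already dirty stays dirty even if this request did not modify it.
    inline void updateDataState(CacheLineMeta& line, bool modifiedByRequest) {
        if (modifiedByRequest) {
            line.state = DataState::Dirty;
        }
    }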
In the prior art, data states are usually stored in a tag random access memory (tag-RAM), and the tag-RAM usually has only one port through which both read operations and write operations are performed. As a result, the timing for updating the data state is long, which affects CPU performance.
Disclosure of Invention
The present application aims to solve at least one of the above technical drawbacks, in particular the drawback that the long timing for updating the data state in the second-level cache of a computer device affects CPU performance.
According to an aspect of the present application, there is provided a data processing method including:
in response to a data request, determining whether target data requested by the data request exists in a data cache of a processor;
when the target data exists in the data cache and the target data meets a preset state updating condition, updating data state information of the target data;
wherein the data state information is stored in a pre-extended register file of the processor; the updated target state information includes first indication information, where the first indication information indicates whether the target data is modified in an execution process of the data request.
Optionally, the determining whether target data requested by the data request exists in a data cache of the processor includes:
acquiring a first data address carried in the data request and acquiring data address information of data in the data cache;
and determining that the target data requested by the data request exists in the data cache under the condition that a second data address consistent with the first data address exists in the data address information.
Optionally, the data address information and the data state information are stored separately and independently;
wherein the data address information is stored in a random access memory and the data state information is stored in the register file.
Optionally, the method further includes:
acquiring verification information corresponding to each data address in the data address information;
verifying the data address and the corresponding verification information;
and under the condition that a preset data relation is met between the data address and the verification information, executing the processing step of determining that target data requested by the data request exists in the data cache under the condition that a second data address consistent with the first data address exists in the data address information.
Optionally, the status update condition includes at least one of:
the target data is not present in a core of the processor;
the target data does not exist on a data bus connected with the processor;
there is no conflicting request to operate on the target data, the conflicting request being a request other than the data request.
Optionally, the register file comprises at least two read/write ports.
According to another aspect of the present application, there is provided a data processing apparatus comprising:
the information determining module is used for responding to a data request and determining whether target data requested by the data request exists in a data cache of the processor;
the information updating module is used for updating the data state information of the target data under the condition that the target data exists in the data cache and meets a preset state updating condition;
wherein the data state information is stored in a pre-extended register file of the processor; the updated target state information includes first indication information, where the first indication information indicates whether the target data is modified in an execution process of the data request.
According to another aspect of the present application, there is provided a processor including:
an information determination unit, configured to determine, in response to a data request, whether target data requested by the data request exists in a data cache of a processor;
the information updating unit is used for updating the data state information of the target data when the target data exists in the data cache and meets a preset state updating condition;
a pre-expanded register file for storing the data state information;
wherein the updated target state information includes first indication information indicating whether the target data is modified in an execution process of the data request.
According to another aspect of the present application, there is provided an electronic device including:
one or more processors;
a memory;
one or more applications, wherein the one or more applications are stored in the memory and configured to be executed by the one or more processors, the one or more programs configured to: perform the data processing method of any one of the first aspects of the present application.
For example, in a third aspect of the present application, there is provided a computing device comprising a processor, a memory, a communication interface, and a communication bus, wherein the processor, the memory, and the communication interface communicate with one another through the communication bus;
the memory is used for storing at least one executable instruction, and the executable instruction enables the processor to execute the corresponding operation of the data processing method as shown in the first aspect of the application.
According to yet another aspect of the present application, there is provided a computer-readable storage medium having a computer program stored thereon, which, when executed by a processor, implements the data processing method of any of the first aspects of the present application.
For example, in a fourth aspect of the embodiments of the present application, a computer-readable storage medium is provided, on which a computer program is stored, and the computer program, when executed by a processor, implements the data processing method shown in the first aspect of the present application.
According to an aspect of the application, a computer program product or computer program is provided, comprising computer instructions, the computer instructions being stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device performs the method provided in the various alternative implementations of the first aspect described above.
The technical solutions provided by the present application bring the following beneficial effects:
In the embodiments of the present application, the data state information is stored in a pre-extended register file. When the target data exists in the data cache and satisfies the preset state update condition, the updated data state information of the target data is written into the register file. Thus, on the one hand, because the register file has multiple read ports and multiple write ports, writing the updated data state information (i.e., updating the data state information) requires no arbitration with other requests that read data; on the other hand, the register file is physically stable, so the data it stores is unlikely to be corrupted, and no corresponding check information needs to be generated when the data state information is written. Therefore, the time spent on arbitration and on generating check information is saved, the timing for updating the data state information is shorter, the time for which the cache line containing the target data is locked during the update is reduced, other requests can access that cache line sooner, and CPU performance is improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings used in the description of the embodiments of the present application will be briefly described below.
Fig. 1 is a schematic flow chart of a data processing method according to an embodiment of the present application;
fig. 2 is a second schematic flowchart of a data processing method according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of a data processing apparatus according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of a processor according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of a data processing electronic device according to an embodiment of the present application.
Detailed Description
Embodiments of the present application are described below in conjunction with the drawings in the present application. It should be understood that the embodiments set forth below in connection with the drawings are exemplary descriptions for explaining technical solutions of the embodiments of the present application, and do not limit the technical solutions of the embodiments of the present application.
As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, information, data, steps, operations, elements, and/or components, but do not preclude the presence or addition of other features, information, data, steps, operations, elements, components, and/or groups thereof. It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element or intervening elements may be present. Further, "connected" or "coupled" as used herein may include wirelessly connected or wirelessly coupled. The term "and/or" as used herein indicates at least one of the items defined by the term, e.g., "A and/or B" may be implemented as "A", or as "B", or as "A and B".
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
The terms referred to in this application will first be introduced and explained:
L2 cache: the second-level cache, which is integrated on the motherboard or in the CPU.
cache: the cache memory, located between the CPU and main memory DRAM (Dynamic Random Access Memory); it is small in scale, fast, and generally built from SRAM (Static Random Access Memory). It is a small-capacity, high-speed memory between the CPU and main memory. Because the CPU is far faster than main memory, the CPU would have to wait for some time when accessing data directly from memory. The cache stores a portion of the data that the CPU has just used or uses repeatedly; if the CPU needs that data again, it can fetch it directly from the cache, which avoids repeated memory accesses, reduces CPU waiting time, and improves system efficiency.
cache line: data in the cache is read in blocks. When the CPU accesses a piece of data, it is assumed that nearby data will be accessed soon afterwards, so when a block region is accessed for the first time, that data is read into the cache together with the data of the nearby region (64 bytes in total); this block of data is called a cache line.
In the second-level cache, multiple data requests for reading/writing data are typically received. For each data request, the last processing operation in executing the request is to update the data state of the target data corresponding to the request, that is, to update the coherency state of the cache line. The data state is represented by status bits in the random access memory (RAM) that stores the data state. The status bits typically include a Valid bit (V), a Used bit (U), and a Dirty bit (D): the V bit indicates whether the corresponding tag is a valid tag; the U bit indicates whether the corresponding tag is the least recently used tag among the S ways; and the D bit indicates whether the data of the current tag has been modified but not yet written back to main memory.
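For illustration only, the following C++ sketch models the V/U/D status bits described above as a small structure; the field names and one-bit widths are assumptions of this sketch and are not defined by the present application.

    #include <cstdint>

    // Illustrative layout of the per-tag status bits (V/U/D) described above.
    // Field names and widths are assumptions, not a definition from this application.
    struct TagStatus {
        uint8_t valid : 1;  // V: the corresponding tag is a valid tag
        uint8_t used  : 1;  // U: the tag is the least recently used among the S ways
        uint8_t dirty : 1;  // D: the tag's data was modified and not yet written back
    };

    // After a request that modified the cached data, the entry is marked dirty.
    inline void markDirty(TagStatus& s) {
        s.dirty = 1;
    }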
In the prior art, the data state and the data address of the target data are usually stored in a tag random access memory (tag-RAM), which usually has only one port through which both read and write operations are performed. Therefore, when the data state is updated, that is, when the data state is written into the tag-RAM through that port, the update operation must be executed in sequence with multiple other read/write requests (i.e., it must be arbitrated against those requests to determine the execution order), because all reads and writes share the single port. This makes the timing for updating the data state long and affects CPU performance.
In view of the above technical problem, the present application provides a data processing method. As shown in fig. 1, after a data request is received, in stages tArb through t2 it may be determined whether the target data requested by the data request exists in the second-level cache (an example of the "data cache" in the present application); when the target data exists in the second-level cache and the data state information of the target data satisfies a preset state update condition, the data state information of the target data is updated in a pre-extended register file in stage t3.
That is, in the present application, the data state information of the data in the second-level cache is stored in a pre-extended register file. Because the register file has multiple ports, the updated data state information can be written through a dedicated write port when the data state information is updated; no arbitration is needed for sharing a single port with other data requests that read data, so the timing for updating the data state is short.
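The difference between the two storage structures can be sketched functionally as follows; this is a software analogy under assumed names and sizes (the mutex merely stands in for port arbitration), not a description of the hardware in this application.

    #include <array>
    #include <cstddef>
    #include <cstdint>
    #include <mutex>

    struct TagEntry { uint64_t tag; bool valid; };

    // A single-ported tag-RAM: every read and write must go through one shared
    // port, so concurrent accesses have to be serialized (arbitrated). The mutex
    // below is only a software stand-in for that arbitration.
    class SinglePortTagRam {
    public:
        TagEntry read(std::size_t idx)              { std::lock_guard<std::mutex> g(port_); return mem_[idx]; }
        void     write(std::size_t idx, TagEntry e) { std::lock_guard<std::mutex> g(port_); mem_[idx] = e; }
    private:
        std::mutex port_;                           // models the single shared read/write port
        std::array<TagEntry, 1024> mem_{};
    };

    // A multi-ported state register file: reads and the state-update write use
    // dedicated ports, so writing updated state information does not contend
    // with requests that read data.
    class MultiPortStateRegFile {
    public:
        uint8_t read(std::size_t idx) const       { return state_[idx]; }
        void    write(std::size_t idx, uint8_t s) { state_[idx] = s; }
    private:
        std::array<uint8_t, 1024> state_{};
    };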
The technical solutions of the embodiments of the present application and the technical effects they produce are described below through several exemplary embodiments. It should be noted that the following embodiments may be referenced, drawn upon, or combined with each other, and descriptions of the same terms, similar features, similar implementation steps, and the like are not repeated across different embodiments.
Referring to fig. 2, an embodiment of the present application provides a data processing method. Optionally, the method is applied to a processor, where the processor may be a CPU or a Graphics Processing Unit (GPU). Specifically, the method may include the following steps:
s201: in response to a data request, it is determined whether target data requested by the data request is present in a data cache of a processor.
Optionally, the embodiment of the present application may be applied to the technical field of computers, and may be specifically applied to an application scenario in which a coherency state of a cache line is updated in a data cache, and updating the coherency state of the cache line is to update data state information of data in the cache line.
In a practical scenario, the data request may be any request for performing a data operation on target data in the data cache, where the data operation includes, for example, reading, writing, modifying, or invalidating the target data. The data cache is, for example, the second-level cache (L2 cache) or the third-level cache (L3 cache).
After the data request is received, it is necessary to determine whether the target data requested by the data request exists in the data cache, that is, to determine whether the data request hits (hit) in the data cache. If the target data is present in the data cache, the result is a hit; if the target data is not present in the data cache, the result is a miss.
Whether the data request hits in the data cache may be determined based on the data address carried in the data request.
Specifically, after the data request is received, the data address carried in the data request may be obtained, where the data address is the address of the target data requested by the data request; to distinguish it from other data addresses involved in the present application, this address is subsequently referred to as a first data address.
In addition, data address information of data in the data cache needs to be acquired, in this embodiment, the data address information is information of a cache line in which the data is located in the data cache, for example, the information of the cache line may be an identifier of the cache line.
The data address information is stored in a Random Access Memory (RAM). For example, the Random Access Memory RAM may be a Static Random-Access Memory (SRAM). SRAM is fast and is typically used as a temporary data storage medium for an operating system or other program in operation.
The random access memory typically has a single read/write port through which data can be read from or written to the memory. Since read and write operations are both performed through this one port, when there are multiple read and write operations for the random access memory, an arbitration process is generally required to determine whether the port is idle and the order in which the multiple read and write operations are performed.
Optionally, since the data address information is a part of the data tag information (the data tag information includes the data address information and the data state information), in this embodiment of the present application, the random access memory storing the data address information may be a tag random access memory (tag-RAM).
Further, after the first data address and the data address information are acquired, it is determined whether a second data address identical to the first data address exists in the data address information. If such a second data address exists, the target data requested by the data request exists in the data cache, that is, the data request hits (hit) in the data cache; if no such second data address exists, the target data requested by the data request does not exist in the data cache, that is, the data request misses (miss) in the data cache.
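A hedged sketch of this hit/miss check is given below in C++; the structure and function names are assumptions of the sketch rather than terms of the present application.

    #include <cstddef>
    #include <cstdint>
    #include <optional>
    #include <vector>

    // Compare the first data address carried by the request against the stored
    // data address information (the candidate "second data addresses").
    struct WayInfo {
        bool     valid;
        uint64_t storedAddress;  // a second data address kept in the tag-RAM
    };

    // Returns the matching entry's position on a hit, or std::nullopt on a miss.
    std::optional<std::size_t> lookup(const std::vector<WayInfo>& entries,
                                      uint64_t firstDataAddress) {
        for (std::size_t i = 0; i < entries.size(); ++i) {
            if (entries[i].valid && entries[i].storedAddress == firstDataAddress) {
                return i;     // hit: a consistent second data address exists
            }
        }
        return std::nullopt;  // miss: the target data is not in the data cache
    }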
S202: when the target data exists in the data cache and meets a preset state update condition, the data state information of the target data is updated.
Wherein the data state information is stored in a pre-extended register file of the processor; the updated target state information includes first indication information, where the first indication information indicates whether the target data is modified in an execution process of the data request.
Specifically, when the target data is present in the data cache, the data request may be executed. Executing the data request means performing a data operation on the target data in the data cache, where the data operation includes reading, writing, modifying, or invalidating the target data, and the like.
The last processing action in executing the data request is to update the data state information of the target data, that is, to update the coherency state of the cache line. Specifically, the data state information indicates whether the target data was modified during execution of the data request. For example, if the target data was modified during execution of the data request, its data state is dirty; if it was not modified, its data state is clean.
In this embodiment of the present application, under the condition that the target data meets a preset state updating condition, the data state information of the target data may be updated in a register file that is expanded in advance.
The register file is an array of registers in the CPU. It is usually implemented with fast static random access memory (SRAM), has dedicated read ports and write ports, and can be accessed concurrently through multiple ports to reach different registers. In a particular implementation scenario, the register file has multiple read ports and multiple write ports; for example, in some embodiments the register file has 9 read ports and 4 write ports, and so on.
Further, the preset state update condition may include that the target data does not exist in a CPU core, that the target data does not exist on a data bus connected to the CPU, that no other data request (i.e., no conflicting request) is operating on the target data, and the like.
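A simple predicate over these conditions might look like the following sketch; the boolean inputs are assumed to be supplied by the core, the bus interface, and the request tracking logic, none of which are modeled here.

    // Illustrative check of the preset state update condition described above.
    // The three flags are assumed inputs; this sketch does not model where they come from.
    struct UpdateConditions {
        bool presentInCore;       // the target data exists in a CPU core
        bool presentOnBus;        // the target data exists on the connected data bus
        bool hasConflictRequest;  // another request is operating on the target data
    };

    inline bool mayUpdateState(const UpdateConditions& c) {
        return !c.presentInCore && !c.presentOnBus && !c.hasConflictRequest;
    }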
To sum up, in the embodiments of the present application, the data state information is stored in the pre-extended register file. When the target data exists in the data cache and satisfies the preset state update condition, the updated data state information of the target data is written into the register file. Thus, on the one hand, because the register file has multiple read ports and multiple write ports, writing the updated data state information (i.e., updating the data state information) requires no arbitration with other requests that read data; on the other hand, the register file is physically stable, so the data it stores is unlikely to be corrupted, and no corresponding check information needs to be generated when the data state information is written. Therefore, the time spent on arbitration and on generating check information is saved, the timing for updating the data state information is shorter, the time for which the cache line containing the target data is locked during the update is reduced, other requests can access that cache line sooner, and CPU performance is improved.
In an embodiment of the present application, the determining whether target data requested by the data request exists in a data cache of a processor includes:
acquiring a first data address carried in the data request and acquiring data address information of data in the data cache;
and determining that the target data requested by the data request exists in the data cache under the condition that a second data address consistent with the first data address exists in the data address information.
Specifically, after receiving the data request, a first data address carried in the data request may be obtained, where the first data address is an address of target data requested by the data request.
In addition, data address information of the data in the data cache needs to be acquired, where the data address information is information of a cache line in which the data is located in the data cache, and for example, the information of the cache line may be an identifier of the cache line.
Further, in the above-mentioned case, after the first data address and the data address information are acquired, it is determined whether a second data address identical to the first data address exists in the data address information. If such a second data address exists, the target data requested by the data request exists in the data cache, that is, the data request hits (hit) in the data cache; if no such second data address exists, the target data requested by the data request does not exist in the data cache, that is, the data request misses (miss) in the data cache.
In one embodiment of the present application, the method further comprises:
acquiring verification information corresponding to each data address in the data address information;
verifying the data address and the corresponding verification information;
and under the condition that the data address and the verification information meet a preset data relation, executing the processing step of determining that target data requested by the data request exists in the data cache under the condition that a second data address consistent with the first data address exists in the data address information.
In the embodiment of the present application, the data address information is stored in the random access memory. The random access memory is sensitive to environmental influences such as static electricity and radiation; for example, static electricity may disturb the charge of a capacitor in the random access memory and cause data loss. Therefore, after the data address information is obtained from the random access memory, its accuracy needs to be verified.
Specifically, when data address information is obtained, check information, such as an Error Correction Code (ECC), of each data address needs to be obtained from the random access memory.
Then, each data address is verified against its corresponding check information, that is, it is verified whether a preset data relationship (for example, a preset functional relationship) is satisfied between the data address and the corresponding check information. If the preset data relationship is satisfied between the data address and the check information, the accuracy of the data address information is confirmed.
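As a deliberately simplified stand-in for this check, the sketch below verifies a data address against a single parity bit; a real error correcting code (e.g. SEC-DED) uses several check bits and can also correct errors, so this is an assumption-based illustration, not the scheme of the present application.

    #include <cstdint>

    // Compute even parity over the bits of an address (simplified check information).
    inline uint8_t parityOf(uint64_t addr) {
        uint8_t p = 0;
        while (addr != 0) {
            p ^= static_cast<uint8_t>(addr & 1u);
            addr >>= 1;
        }
        return p;
    }

    // True when the data address and its stored check information satisfy the
    // expected relation; otherwise the address information is treated as unreliable.
    inline bool addressChecksOut(uint64_t dataAddress, uint8_t storedCheckBit) {
        return parityOf(dataAddress) == storedCheckBit;
    }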
The following describes the overall implementation flow of the embodiment of the present application with reference to fig. 1, taking a second-level cache as an example of the data cache:
after receiving the data request, the first data address carried by the data request may be obtained, and data address information and check information (ECC) of the data in the secondary cache may be obtained from a random access memory (tag-RAM), that is, the processing at the ta rb stage and the t1 stage shown in fig. 1. In practical scenarios, at the above-mentioned tAb stage and t1 stage, data address information and check information (ECC) are read, and at the same time, data state information can also be read from a register file (tag state reg).
Then, each data address in the data address information is verified against its corresponding check information to ensure its accuracy, and it is determined whether a second data address consistent with the first data address exists in the data address information, that is, whether the data request hits or misses in the second-level cache. This is the processing shown in fig. 1 at stage t2.
When it is determined that the data request hits, the data request may be executed. In the last stage of executing the data request, if the target data satisfies the preset state update condition, the data state information of the target data is updated in the register file. This is the processing shown in fig. 1 at stage t3.
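Putting the stages together, the sketch below walks one request through the tArb/t1, t2, and t3 processing described above; it reuses the hypothetical helpers from the earlier sketches (WayInfo, lookup, addressChecksOut, UpdateConditions, mayUpdateState, MultiPortStateRegFile) and is a functional outline under those assumptions, not the pipeline of the present application.

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // Functional outline of the flow in fig. 1 for one request. All helper types
    // and functions come from the earlier illustrative sketches in this description.
    void handleRequest(uint64_t firstDataAddress,
                       const std::vector<WayInfo>& addressInfo,   // from the tag-RAM
                       const std::vector<uint8_t>& checkInfo,     // ECC stand-in, from the tag-RAM
                       MultiPortStateRegFile& stateRegFile,       // pre-extended register file
                       const UpdateConditions& cond,
                       bool requestModifiedData) {
        // tArb / t1: the data address information and check information are read
        // from the tag-RAM while the data state information is read from the
        // register file (both reads are represented by the arguments above).

        // t2: verify each data address against its check information, then decide
        // hit or miss by comparing against the request's first data address.
        for (std::size_t i = 0; i < addressInfo.size(); ++i) {
            if (!addressChecksOut(addressInfo[i].storedAddress, checkInfo[i])) {
                return;  // address information unreliable; error handling not shown
            }
        }
        const auto hit = lookup(addressInfo, firstDataAddress);
        if (!hit) {
            return;      // miss: the request is handled elsewhere (not shown)
        }

        // t3: execute the request; as the last step, update the data state
        // information in the register file if the update condition holds.
        if (mayUpdateState(cond) && requestModifiedData) {
            uint8_t s = stateRegFile.read(*hit);
            s |= 0x1u;                    // set an assumed dirty bit
            stateRegFile.write(*hit, s);  // dedicated write port, no arbitration needed
        }
    }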
In the embodiments of the present application, the data state information is stored in a pre-extended register file. When the target data exists in the data cache and satisfies the preset state update condition, the updated data state information of the target data is written into the register file. Thus, on the one hand, because the register file has multiple read ports and multiple write ports, writing the updated data state information (i.e., updating the data state information) requires no arbitration with other requests that read data; on the other hand, the register file is physically stable, so the data it stores is unlikely to be corrupted, and no corresponding check information needs to be generated when the data state information is written. Therefore, the time spent on arbitration and on generating check information is saved, the timing for updating the data state information is shorter, the time for which the cache line containing the target data is locked during the update is reduced, other requests can access that cache line sooner, and CPU performance is improved.
An embodiment of the present application provides a data processing apparatus, and as shown in fig. 3, the data processing apparatus 30 may include: an information determining module 301, an information updating module 302, wherein,
an information determining module 301, configured to determine, in response to a data request, whether target data requested by the data request exists in a data cache of a processor;
an information updating module 302, configured to update data state information of the target data when the target data exists in the data cache and the target data meets a preset state updating condition;
wherein the data state information is stored in a pre-extended register file of the processor; the updated target state information includes first indication information, where the first indication information indicates whether the target data is modified in an execution process of the data request.
In one embodiment of the present application, the information determination module is configured to:
acquiring a first data address carried in the data request and acquiring data address information of data in the data cache;
and determining that the target data requested by the data request exists in the data cache under the condition that a second data address consistent with the first data address exists in the data address information.
In one embodiment of the present application, the data address information and the data status information are separately stored;
wherein the data address information is stored in a random access memory and the data state information is stored in the register file.
In an embodiment of the present application, the apparatus further includes a verification module, configured to obtain verification information corresponding to each data address in the data address information;
verifying the data address and the corresponding verification information;
and under the condition that the data address and the verification information meet a preset data relation, executing the processing step of determining that target data requested by the data request exists in the data cache under the condition that a second data address consistent with the first data address exists in the data address information.
In one embodiment of the present application, the status update condition comprises at least one of:
the target data is not present in a core of the processor;
the target data does not exist on a data bus connected with the processor;
there is no conflicting request to operate on the target data, the conflicting request being a request other than the data request.
In one embodiment of the application, the register file comprises at least two read/write ports.
An embodiment of the present application provides a processor, and as shown in fig. 4, the processor 40 may include: an information determination unit 401, an information update unit 402, a pre-extended register file 403, wherein,
an information determination unit 401, configured to determine, in response to a data request, whether target data requested by the data request exists in a data cache of a processor;
an information updating unit 402, configured to update data state information of the target data when the target data exists in the data cache and the target data meets a preset state updating condition;
a pre-expanded register file 403 for storing the data state information;
wherein the updated target state information includes first indication information indicating whether the target data is modified in an execution process of the data request.
The apparatus in the embodiments of the present application may execute the method provided in the embodiments of the present application, and its implementation principle is similar. The actions performed by the modules of the apparatus correspond to the steps of the method in the embodiments of the present application; for a detailed functional description of the modules, reference may be made to the description of the corresponding method above, and details are not repeated here.
In the embodiments of the present application, the data state information is stored in a pre-extended register file. When the target data exists in the data cache and satisfies the preset state update condition, the updated data state information of the target data is written into the register file. Thus, on the one hand, because the register file has multiple read ports and multiple write ports, writing the updated data state information (i.e., updating the data state information) requires no arbitration with other requests that read data; on the other hand, the register file is physically stable, so the data it stores is unlikely to be corrupted, and no corresponding check information needs to be generated when the data state information is written. Therefore, the time spent on arbitration and on generating check information is saved, the timing for updating the data state information is shorter, the time for which the cache line containing the target data is locked during the update is reduced, other requests can access that cache line sooner, and CPU performance is improved.
An embodiment of the present application provides an electronic device, including: a memory and a processor; and at least one program stored in the memory and executed by the processor, which, when executed by the processor, implements the following: in the embodiments of the present application, the data state information is stored in a pre-extended register file. When the target data exists in the data cache and satisfies the preset state update condition, the updated data state information of the target data is written into the register file. Thus, on the one hand, because the register file has multiple read ports and multiple write ports, writing the updated data state information (i.e., updating the data state information) requires no arbitration with other requests that read data; on the other hand, the register file is physically stable, so the data it stores is unlikely to be corrupted, and no corresponding check information needs to be generated when the data state information is written. Therefore, the time spent on arbitration and on generating check information is saved, the timing for updating the data state information is shorter, the time for which the cache line containing the target data is locked during the update is reduced, other requests can access that cache line sooner, and CPU performance is improved.
In an alternative embodiment, an electronic device is provided, as shown in fig. 5, the electronic device 4000 shown in fig. 5 comprising: a processor 4001 and a memory 4003. Processor 4001 is coupled to memory 4003, such as via bus 4002. Optionally, the electronic device 4000 may further include a transceiver 4004, and the transceiver 4004 may be used for data interaction between the electronic device and other electronic devices, such as transmission of data and/or reception of data. In addition, the transceiver 4004 is not limited to one in practical applications, and the structure of the electronic device 4000 is not limited to the embodiment of the present application.
The Processor 4001 may be a CPU (Central Processing Unit), a general-purpose Processor, a DSP (Digital Signal Processor), an ASIC (Application Specific Integrated Circuit), an FPGA (Field Programmable Gate Array) or other Programmable logic device, a transistor logic device, a hardware component, or any combination thereof. Which may implement or perform the various illustrative logical blocks, modules, and circuits described in connection with the disclosure. The processor 4001 may also be a combination that performs a computational function, including, for example, a combination of one or more microprocessors, a combination of a DSP and a microprocessor, or the like.
Bus 4002 may include a path that carries information between the aforementioned components. The bus 4002 may be a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like. The bus 4002 may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one thick line is shown in FIG. 5, but this is not intended to represent only one bus or type of bus.
The Memory 4003 may be a ROM (Read Only Memory) or other types of static storage devices that can store static information and instructions, a RAM (Random Access Memory) or other types of dynamic storage devices that can store information and instructions, an EEPROM (Electrically Erasable Programmable Read Only Memory), a CD-ROM (Compact Disc Read Only Memory) or other optical Disc storage, optical Disc storage (including Compact Disc, laser Disc, optical Disc, digital versatile Disc, blu-ray Disc, etc.), a magnetic Disc storage medium or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited to these.
The memory 4003 is used for storing application program codes (computer programs) for executing the present scheme, and execution is controlled by the processor 4001. Processor 4001 is configured to execute application code stored in memory 4003 to implement what is shown in the foregoing method embodiments.
The electronic device includes, but is not limited to: a mobile phone, a notebook computer, a multimedia player, a desktop computer, and the like.
The present application provides a computer-readable storage medium, on which a computer program is stored, which, when running on a computer, enables the computer to execute the corresponding content in the foregoing method embodiments.
In the embodiments of the present application, the data state information is stored in a pre-extended register file. When the target data exists in the data cache and satisfies the preset state update condition, the updated data state information of the target data is written into the register file. Thus, on the one hand, because the register file has multiple read ports and multiple write ports, writing the updated data state information (i.e., updating the data state information) requires no arbitration with other requests that read data; on the other hand, the register file is physically stable, so the data it stores is unlikely to be corrupted, and no corresponding check information needs to be generated when the data state information is written. Therefore, the time spent on arbitration and on generating check information is saved, the timing for updating the data state information is shorter, the time for which the cache line containing the target data is locked during the update is reduced, other requests can access that cache line sooner, and CPU performance is improved.
The terms "first," "second," "third," "fourth," "1," "2," and the like in the description and in the claims of the present application and in the above-described drawings (if any) are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It should be understood that the data so used are interchangeable under appropriate circumstances such that the embodiments of the application described herein are capable of operation in other sequences than illustrated or otherwise described herein.
It should be understood that, although each operation step is indicated by an arrow in the flowchart of the embodiment of the present application, the implementation order of the steps is not limited to the order indicated by the arrow. In some implementation scenarios of the embodiments of the present application, the implementation steps in the flowcharts may be performed in other sequences as desired, unless explicitly stated otherwise herein. In addition, depending on the actual implementation scenario, some or all of the steps in each flowchart may include multiple sub-steps or stages. Some or all of these sub-steps or stages may be performed at the same time, or each of these sub-steps or stages may be performed at different times. In a scenario where execution times are different, the execution sequence of the sub-steps or stages may be flexibly configured according to requirements, which is not limited in the embodiment of the present application.
The foregoing is only an optional implementation manner of a part of implementation scenarios in this application, and it should be noted that, for those skilled in the art, other similar implementation means based on the technical idea of this application are also within the protection scope of the embodiments of this application without departing from the technical idea of this application.

Claims (10)

1. A data processing method, comprising:
in response to a data request, determining whether target data requested by the data request exists in a data cache of a processor;
when the target data exists in the data cache and the target data meets a preset state updating condition, updating data state information of the target data;
wherein the data state information is stored in a pre-extended register file of the processor; the updated target state information includes first indication information, where the first indication information indicates whether the target data is modified in an execution process of the data request.
2. The data processing method of claim 1, wherein the determining whether the target data requested by the data request is present in a data cache of the processor comprises:
acquiring a first data address carried in the data request and acquiring data address information of data in the data cache;
and determining that the target data requested by the data request exists in the data cache under the condition that a second data address consistent with the first data address exists in the data address information.
3. The data processing method of claim 2,
the data address information and the data state information are respectively and independently stored;
wherein the data address information is stored in a random access memory and the data state information is stored in the register file.
4. The data processing method of claim 2, wherein the method further comprises:
acquiring verification information corresponding to each data address in the data address information;
verifying the data address and the corresponding verification information;
and under the condition that the data address and the verification information meet a preset data relation, executing the processing step of determining that target data requested by the data request exists in the data cache under the condition that a second data address consistent with the first data address exists in the data address information.
5. The data processing method of claim 1, wherein the status update condition comprises at least one of:
the target data is not present in a core of the processor;
the target data does not exist on a data bus connected with the processor;
there is no conflicting request to operate on the target data, the conflicting request being a request other than the data request.
6. The data processing method of claim 1, wherein the register file comprises at least two read/write ports.
7. A data processing apparatus, characterized by comprising:
the information determining module is used for responding to a data request and determining whether target data requested by the data request exists in a data cache of the processor;
the information updating module is used for updating the data state information of the target data under the condition that the target data exists in the data cache and meets a preset state updating condition;
wherein the data state information is stored in a pre-extended register file of the processor; the updated target state information includes first indication information, where the first indication information indicates whether the target data is modified in an execution process of the data request.
8. A processor, comprising:
an information determination unit, configured to determine, in response to a data request, whether target data requested by the data request exists in a data cache of a processor;
the information updating unit is used for updating the data state information of the target data under the condition that the target data exists in the data cache and meets a preset state updating condition;
a pre-expanded register file for storing the data state information;
wherein the updated target state information includes first indication information indicating whether the target data is modified in an execution process of the data request.
9. An electronic device, characterized in that the electronic device comprises:
one or more processors;
a memory;
one or more applications, wherein the one or more applications are stored in the memory and configured to be executed by the one or more processors, the one or more programs configured to: perform the data processing method according to any of claims 1 to 6.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the data processing method of any one of claims 1 to 6.
CN202210964026.2A 2022-08-11 2022-08-11 Data processing method and device, electronic equipment and computer readable storage medium Pending CN115269199A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210964026.2A CN115269199A (en) 2022-08-11 2022-08-11 Data processing method and device, electronic equipment and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210964026.2A CN115269199A (en) 2022-08-11 2022-08-11 Data processing method and device, electronic equipment and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN115269199A true CN115269199A (en) 2022-11-01

Family

ID=83750708

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210964026.2A Pending CN115269199A (en) 2022-08-11 2022-08-11 Data processing method and device, electronic equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN115269199A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117312192A (en) * 2023-11-29 2023-12-29 成都北中网芯科技有限公司 Cache storage system and access processing method
CN117312192B (en) * 2023-11-29 2024-03-29 成都北中网芯科技有限公司 Cache storage system and access processing method

Similar Documents

Publication Publication Date Title
US11237728B2 (en) Method for accessing extended memory, device, and system
US8140828B2 (en) Handling transaction buffer overflow in multiprocessor by re-executing after waiting for peer processors to complete pending transactions and bypassing the buffer
CN114580344B (en) Test excitation generation method, verification system and related equipment
US11093388B2 (en) Method, apparatus, device and storage medium for accessing static random access memory
KR20110025188A (en) Utilization of a store buffer for error recovery on a store allocation cache miss
US20200159461A1 (en) Data accessing method, device, and storage medium
CN116909943B (en) Cache access method and device, storage medium and electronic equipment
US9274860B2 (en) Multi-processor device and inter-process communication method thereof
CN116049034A (en) Verification method and device for cache consistency of multi-core processor system
US11093245B2 (en) Computer system and memory access technology
CN115269454A (en) Data access method, electronic device and storage medium
CN115269199A (en) Data processing method and device, electronic equipment and computer readable storage medium
CN111857600B (en) Data reading and writing method and device
CN115858417B (en) Cache data processing method, device, equipment and storage medium
CN111381881A (en) AHB (advanced high-performance bus) interface-based low-power-consumption instruction caching method and device
CN111352757A (en) Apparatus, system, and method for detecting uninitialized memory reads
US10019390B2 (en) Using memory cache for a race free interrupt scheme without the use of “read clear” registers
US8364915B2 (en) Method, apparatus and system for generating access information from an LRU tracking list
CN114896179B (en) Memory page copying method and device, computing equipment and readable storage medium
US11947455B2 (en) Suppressing cache line modification
CN115357526A (en) Data processing method and device, electronic equipment and computer readable storage medium
CN115658324B (en) Process scheduling method, computing device and storage medium
US20240184454A1 (en) Storage device and operating method of the same
CN114741034A (en) Data processing method and device, electronic equipment and computer readable storage medium
CN114996024A (en) Memory bandwidth monitoring method, server and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination