CN117093510B - Efficient cache indexing method common to big-endian and little-endian formats - Google Patents


Info

Publication number
CN117093510B
Authority
CN
China
Prior art keywords
cache line
line data
byte
indexing
index
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310620237.9A
Other languages
Chinese (zh)
Other versions
CN117093510A (en)
Inventor
温家辉
赵夏
张光达
王会权
何益百
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
National Defense Technology Innovation Institute PLA Academy of Military Science
Original Assignee
National Defense Technology Innovation Institute PLA Academy of Military Science
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by National Defense Technology Innovation Institute PLA Academy of Military Science filed Critical National Defense Technology Innovation Institute PLA Academy of Military Science
Priority to CN202310620237.9A
Publication of CN117093510A
Application granted
Publication of CN117093510B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02 Addressing or allocation; Relocation
    • G06F 12/06 Addressing a physical block of locations, e.g. base addressing, module addressing, memory dedication
    • G06F 12/0646 Configuration or reconfiguration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02 Addressing or allocation; Relocation
    • G06F 12/0223 User address space allocation, e.g. contiguous or non-contiguous base addressing
    • G06F 12/023 Free address space management
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The invention discloses an efficient cache indexing method common to big-endian and little-endian formats, which comprises the following steps: obtaining cache line data, a little-endian valid signal, an index address and an index granularity; uniformly encoding the cache line data according to the little-endian valid signal; uniformly encoding the index address according to the little-endian valid signal and the index granularity; performing 16-byte indexing on the uniformly encoded cache line data according to the upper 2 bits of the uniformly encoded index address; performing 4-byte indexing on the 16-byte-indexed cache line data according to the middle 2 bits of the uniformly encoded index address; performing 1-byte indexing on the 4-byte-indexed cache line data according to the lower 2 bits of the uniformly encoded index address; and obtaining the index data from the 1-byte-indexed cache line data and the index granularity. The method efficiently indexes data in both big-endian and little-endian formats and reduces the design area and power consumption of the indexing circuit.

Description

Efficient cache indexing method common to big-endian and little-endian formats
Technical Field
The invention relates to the technical field of data management, and in particular to an efficient cache indexing method common to big-endian and little-endian formats.
Background
In a typical chip architecture, data is transferred between functional modules in units of cache lines. Common cache line sizes include 32 bytes, 64 bytes and 128 bytes. The general flow for an instruction to load data into a general-purpose register is: compute the data address according to the instruction's addressing mode; fetch the cache line from the cache according to the data address; and index into the cache line with the low-order bits of the address to obtain the index data. When an instruction loads data into a general-purpose register, the granularity of the loaded data is usually smaller than one cache line; common load granularities are 1 byte, 2 bytes, 4 bytes, 8 bytes and so on. Indexing the cache line with the low-order bits of the address is therefore the key step in loading the data.
The current indexing method uses the low-order bits of the address directly as an offset into the cache line. This direct indexing effectively establishes a one-to-one fully associative relationship between addresses and data. As cache lines grow larger, direct indexing imposes a substantial cost in chip area and power consumption and constrains the design and function of the chip.
Disclosure of Invention
To solve some or all of the technical problems in the prior art, the invention provides an efficient cache indexing method common to big-endian and little-endian formats.
The technical scheme of the invention is as follows:
An efficient cache indexing method common to big-endian and little-endian formats comprises the following steps:
obtaining cache line data, a little-endian valid signal, an index address and an index granularity;
uniformly encoding the cache line data according to the little-endian valid signal so that the cache line data is stored in order from high byte to low byte;
uniformly encoding the index address according to the little-endian valid signal and the index granularity so that the index address points to the low byte of the required index data;
performing 16-byte indexing on the uniformly encoded cache line data according to the upper 2 bits of the uniformly encoded index address;
performing 4-byte indexing on the 16-byte-indexed cache line data according to the middle 2 bits of the uniformly encoded index address;
performing 1-byte indexing on the 4-byte-indexed cache line data according to the lower 2 bits of the uniformly encoded index address;
and obtaining the index data from the 1-byte-indexed cache line data and the index granularity.
In some possible implementations, uniformly encoding the cache line data according to the little-endian valid signal comprises:
if the little-endian valid signal is 0, reversing the byte order of the cache line data and outputting the reversed data as the uniformly encoded cache line data;
if the little-endian valid signal is 1, outputting the current cache line data as the uniformly encoded cache line data.
In some possible implementations, suppose the obtained cache line data is k bytes in size and denoted data(k-1:0), and the uniformly encoded cache line data is denoted data1(k-1:0); then:
if the little-endian valid signal is 0, data1(k-1:0) = data(0:k-1);
if the little-endian valid signal is 1, data1(k-1:0) = data(k-1:0);
where data(0:k-1) denotes the byte-reversed cache line data.
In some possible implementations, uniformly encoding the index address according to the little-endian valid signal and the index granularity comprises:
if the little-endian valid signal is 0, transforming the index address according to the index granularity and outputting the transformed index address as the uniformly encoded index address;
if the little-endian valid signal is 1, outputting the current index address as the uniformly encoded index address.
In some possible implementations, suppose the obtained index address is addr and the obtained index granularity is opsize; the index address is transformed using the following formula:
!(addr + opsize) + 1
where ! denotes the bitwise NOT (negation) operation.
In some possible implementations, performing 16-byte indexing on the uniformly encoded cache line data according to the upper 2 bits of the uniformly encoded index address comprises:
if the upper 2 bits of the uniformly encoded index address are 0, outputting the uniformly encoded cache line data as the 16-byte-indexed cache line data;
if the upper 2 bits are 1, cyclically shifting the uniformly encoded cache line data right by 16 bytes and outputting the shifted data as the 16-byte-indexed cache line data;
if the upper 2 bits are 2, cyclically shifting the uniformly encoded cache line data right by 32 bytes and outputting the shifted data as the 16-byte-indexed cache line data;
if the upper 2 bits are 3, cyclically shifting the uniformly encoded cache line data right by 48 bytes and outputting the shifted data as the 16-byte-indexed cache line data.
In some possible implementations, performing 4-byte indexing on the 16-byte-indexed cache line data according to the middle 2 bits of the uniformly encoded index address comprises:
if the middle 2 bits of the uniformly encoded index address are 0, outputting the 16-byte-indexed cache line data as the 4-byte-indexed cache line data;
if the middle 2 bits are 1, cyclically shifting the 16-byte-indexed cache line data right by 4 bytes and outputting the shifted data as the 4-byte-indexed cache line data;
if the middle 2 bits are 2, cyclically shifting the 16-byte-indexed cache line data right by 8 bytes and outputting the shifted data as the 4-byte-indexed cache line data;
if the middle 2 bits are 3, cyclically shifting the 16-byte-indexed cache line data right by 12 bytes and outputting the shifted data as the 4-byte-indexed cache line data.
In some possible implementations, performing 1-byte indexing on the 4-byte-indexed cache line data according to the lower 2 bits of the uniformly encoded index address comprises:
if the lower 2 bits of the uniformly encoded index address are 0, outputting the 4-byte-indexed cache line data as the 1-byte-indexed cache line data;
if the lower 2 bits are 1, cyclically shifting the 4-byte-indexed cache line data right by 1 byte and outputting the shifted data as the 1-byte-indexed cache line data;
if the lower 2 bits are 2, cyclically shifting the 4-byte-indexed cache line data right by 2 bytes and outputting the shifted data as the 1-byte-indexed cache line data;
if the lower 2 bits are 3, cyclically shifting the 4-byte-indexed cache line data right by 3 bytes and outputting the shifted data as the 1-byte-indexed cache line data.
In some possible implementations, obtaining the index data from the 1-byte-indexed cache line data and the index granularity comprises:
applying a mask determined by the index granularity to the 1-byte-indexed cache line data to obtain the index data.
The technical scheme of the invention has the following main advantages:
The cache line data and the index address are uniformly encoded, and the uniformly encoded cache line data is then indexed in three stages (16-byte, 4-byte and 1-byte) according to the uniformly encoded index address, so that the lowest byte of the required index data is shifted to the lowest byte of the cache line data. This achieves efficient indexing of data in both big-endian and little-endian formats, can be implemented with simple logic-gate operations, and reduces the design area and power consumption of the indexing circuit, thereby lowering chip area and power cost.
Drawings
The accompanying drawings, which are included to provide a further understanding of embodiments of the invention and constitute a part of this specification, illustrate embodiments of the invention and, together with the description, serve to explain the invention without limiting it.
In the drawings:
FIG. 1 is a flow chart of the efficient cache indexing method common to big-endian and little-endian formats according to an embodiment of the invention;
FIG. 2 is a schematic diagram of the indexing process corresponding to Embodiment 1 of the invention.
Detailed Description
To make the objects, technical solutions and advantages of the invention clearer, the technical solutions of the invention are described below clearly and completely with reference to specific embodiments and the corresponding drawings. The described embodiments are only some, not all, of the embodiments of the invention. All other embodiments obtained by those skilled in the art based on these embodiments without inventive effort fall within the scope of the invention.
The technical scheme provided by the embodiments of the invention is described in detail below with reference to the accompanying drawings.
Referring to FIG. 1, an embodiment of the invention provides an efficient cache indexing method common to big-endian and little-endian formats, which comprises the following steps S1 to S7:
Step S1: obtain the cache line data, the little-endian valid signal, the index address and the index granularity.
In an embodiment of the invention, the cache line data, the little-endian valid signal, the index address and the index granularity are obtained according to the actual data transfer and load situation.
Step S2: uniformly encode the cache line data according to the little-endian valid signal so that the cache line data is stored in order from high byte to low byte.
Since in little-endian mode the low-order byte of a datum is stored at the low address while in big-endian mode the high-order byte is stored at the low address, the cache line data must be uniformly encoded so that it is always stored in order from high byte to low byte; this ensures that the efficient cache indexing method applies to both little-endian and big-endian modes.
In an embodiment of the invention, uniformly encoding the cache line data according to the little-endian valid signal further comprises:
if the little-endian valid signal is 0, reversing the byte order of the cache line data and outputting the reversed data as the uniformly encoded cache line data;
if the little-endian valid signal is 1, outputting the current cache line data as the uniformly encoded cache line data.
Specifically, suppose the obtained cache line data is k bytes in size and denoted data(k-1:0), and the uniformly encoded cache line data is denoted data1(k-1:0); then:
if the little-endian valid signal is 0, data1(k-1:0) = data(0:k-1);
if the little-endian valid signal is 1, data1(k-1:0) = data(k-1:0);
where data(0:k-1) denotes the byte-reversed cache line data.
In this way the cache line data is uniformly encoded, and the uniformly encoded cache line data is stored in order from high byte to low byte.
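As an illustration only (this C sketch is not part of the patent; the function name, the fixed 64-byte line size and the byte-array representation are assumptions), the unified encoding of the cache line data can be expressed as a byte reversal controlled by the little-endian valid signal:

```c
#include <stdint.h>
#include <string.h>

#define LINE_BYTES 64  /* assumed cache line size for this sketch */

/* Unified encoding of the cache line data: when the little-endian valid
 * signal le is 0 (big-endian data), the byte order is reversed so that the
 * result is always stored from high byte to low byte, i.e.
 * data1(k-1:0) = data(0:k-1); when le is 1 the data passes through
 * unchanged, i.e. data1(k-1:0) = data(k-1:0). */
static void unify_line(const uint8_t data[LINE_BYTES], int le,
                       uint8_t data1[LINE_BYTES]) {
    if (le) {
        memcpy(data1, data, LINE_BYTES);
    } else {
        for (int i = 0; i < LINE_BYTES; i++)
            data1[i] = data[LINE_BYTES - 1 - i];
    }
}
```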
Step S3: uniformly encode the index address according to the little-endian valid signal and the index granularity so that the index address points to the low byte of the required index data.
In an embodiment of the invention, uniformly encoding the index address according to the little-endian valid signal and the index granularity further comprises:
if the little-endian valid signal is 0, transforming the index address according to the index granularity and outputting the transformed index address as the uniformly encoded index address;
if the little-endian valid signal is 1, outputting the current index address as the uniformly encoded index address.
In an embodiment of the invention, suppose the obtained index address is addr and the obtained index granularity is opsize; the index address is transformed using the following formula:
!(addr + opsize) + 1
where ! denotes the bitwise NOT (negation) operation.
Further, let the uniformly encoded index address be addr1; then:
if the little-endian valid signal is 0, addr1 = !(addr + opsize) + 1;
if the little-endian valid signal is 1, addr1 = addr.
In this way the index address is uniformly encoded, and the uniformly encoded index address points to the low byte of the required index data.
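Continuing the same illustrative C sketch (the helper name and the 6-bit address mask for a 64-byte line are assumptions, and the sketch reuses the definitions above), the unified encoding of the index address can be written as:

```c
/* Unified encoding of the index address for a 64-byte line (6-bit address).
 * When le is 0 the address is transformed with !(addr + opsize) + 1; the
 * patent's "!" is the bitwise NOT, written "~" in C. The result then points
 * to the low byte of the required index data. When le is 1 the address is
 * used as-is. Here opsize is the index granularity in bytes. */
static uint8_t unify_addr(uint8_t addr, uint8_t opsize, int le) {
    if (le)
        return addr & 0x3F;
    return (uint8_t)((~(addr + opsize) + 1) & 0x3F);
}
```

Since ~(x) + 1 is the two's-complement negation, the transformed address is simply (64 - addr - opsize) mod 64, i.e. the position of the requested data's low byte within the byte-reversed line.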
Step S4: perform 16-byte indexing on the uniformly encoded cache line data according to the upper 2 bits of the uniformly encoded index address.
In an embodiment of the invention, 16-byte indexing is performed on the uniformly encoded cache line data according to the upper 2 bits of the uniformly encoded index address, so as to shift the lowest byte of the required index data into the lowest 16 bytes of the cache line data.
In an embodiment of the invention, performing 16-byte indexing on the uniformly encoded cache line data according to the upper 2 bits of the uniformly encoded index address further comprises:
if the upper 2 bits of the uniformly encoded index address are 0, outputting the uniformly encoded cache line data as the 16-byte-indexed cache line data;
if the upper 2 bits are 1, cyclically shifting the uniformly encoded cache line data right by 16 bytes and outputting the shifted data as the 16-byte-indexed cache line data;
if the upper 2 bits are 2, cyclically shifting the uniformly encoded cache line data right by 32 bytes and outputting the shifted data as the 16-byte-indexed cache line data;
if the upper 2 bits are 3, cyclically shifting the uniformly encoded cache line data right by 48 bytes and outputting the shifted data as the 16-byte-indexed cache line data.
In this way the uniformly encoded cache line data is 16-byte indexed, and the lowest byte of the required index data is shifted into the lowest 16 bytes of the cache line data.
Step S5: perform 4-byte indexing on the 16-byte-indexed cache line data according to the middle 2 bits of the uniformly encoded index address.
In an embodiment of the invention, 4-byte indexing is performed on the 16-byte-indexed cache line data according to the middle 2 bits of the uniformly encoded index address, so as to shift the lowest byte of the required index data into the lowest 4 bytes of the cache line data.
In an embodiment of the invention, performing 4-byte indexing on the 16-byte-indexed cache line data according to the middle 2 bits of the uniformly encoded index address further comprises:
if the middle 2 bits of the uniformly encoded index address are 0, outputting the 16-byte-indexed cache line data as the 4-byte-indexed cache line data;
if the middle 2 bits are 1, cyclically shifting the 16-byte-indexed cache line data right by 4 bytes and outputting the shifted data as the 4-byte-indexed cache line data;
if the middle 2 bits are 2, cyclically shifting the 16-byte-indexed cache line data right by 8 bytes and outputting the shifted data as the 4-byte-indexed cache line data;
if the middle 2 bits are 3, cyclically shifting the 16-byte-indexed cache line data right by 12 bytes and outputting the shifted data as the 4-byte-indexed cache line data.
In this way the 16-byte-indexed cache line data is 4-byte indexed, and the lowest byte of the required index data is shifted into the lowest 4 bytes of the cache line data.
Step S6: perform 1-byte indexing on the 4-byte-indexed cache line data according to the lower 2 bits of the uniformly encoded index address.
In an embodiment of the invention, 1-byte indexing is performed on the 4-byte-indexed cache line data according to the lower 2 bits of the uniformly encoded index address, so as to shift the lowest byte of the required index data to the lowest byte of the cache line data.
In an embodiment of the invention, performing 1-byte indexing on the 4-byte-indexed cache line data according to the lower 2 bits of the uniformly encoded index address further comprises:
if the lower 2 bits of the uniformly encoded index address are 0, outputting the 4-byte-indexed cache line data as the 1-byte-indexed cache line data;
if the lower 2 bits are 1, cyclically shifting the 4-byte-indexed cache line data right by 1 byte and outputting the shifted data as the 1-byte-indexed cache line data;
if the lower 2 bits are 2, cyclically shifting the 4-byte-indexed cache line data right by 2 bytes and outputting the shifted data as the 1-byte-indexed cache line data;
if the lower 2 bits are 3, cyclically shifting the 4-byte-indexed cache line data right by 3 bytes and outputting the shifted data as the 1-byte-indexed cache line data.
In this way the 4-byte-indexed cache line data is 1-byte indexed, and the lowest byte of the required index data is shifted to the lowest byte of the cache line data.
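All three indexing stages use the same primitive, a cyclic right shift of the line by a whole number of bytes. A combined sketch follows (illustrative only; it reuses LINE_BYTES and the headers from the earlier fragment, and the helper names are assumptions):

```c
/* Cyclic right shift (rotation) of the line by n bytes: byte i of the
 * result is byte (i + n) mod 64 of the input. */
static void rotate_right(uint8_t line[LINE_BYTES], unsigned n) {
    uint8_t tmp[LINE_BYTES];
    for (unsigned i = 0; i < LINE_BYTES; i++)
        tmp[i] = line[(i + n) % LINE_BYTES];
    memcpy(line, tmp, LINE_BYTES);
}

/* Three-stage indexing of steps S4 to S6: the upper 2 bits of addr1 select
 * a 16-byte group, the middle 2 bits a 4-byte group and the lower 2 bits a
 * single byte. The rotations add up to addr1 bytes in total, so the
 * required low byte ends up at byte 0 of the line. */
static void index_line(uint8_t line[LINE_BYTES], uint8_t addr1) {
    rotate_right(line, ((addr1 >> 4) & 0x3u) * 16u);  /* 16-byte indexing */
    rotate_right(line, ((addr1 >> 2) & 0x3u) * 4u);   /* 4-byte indexing  */
    rotate_right(line, (addr1 & 0x3u) * 1u);          /* 1-byte indexing  */
}
```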
Step S7: obtain the index data from the 1-byte-indexed cache line data and the index granularity.
In an embodiment of the invention, after the processing of steps S2 to S6, the index data occupies the low-order bytes of the 1-byte-indexed cache line data, and the number of bytes it occupies equals the index granularity.
Specifically, a mask determined by the index granularity is applied to the 1-byte-indexed cache line data to obtain the index data.
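A matching sketch of the mask step (illustrative only; it assumes the granularity is passed directly as a byte count and reuses the definitions from the earlier fragments):

```c
/* Mask operation of step S7: after the three rotations the required data
 * occupies the low-order bytes of the line, so keeping the first
 * `granularity` bytes and zeroing the rest yields the index data. */
static void mask_result(const uint8_t line[LINE_BYTES], unsigned granularity,
                        uint8_t out[LINE_BYTES]) {
    for (unsigned i = 0; i < LINE_BYTES; i++)
        out[i] = (i < granularity) ? line[i] : 0;
}
```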
According to the efficient cache indexing method common to big-endian and little-endian formats provided by the embodiments of the invention, the cache line data and the index address are uniformly encoded, and the uniformly encoded cache line data is indexed in three stages (16-byte, 4-byte and 1-byte) according to the uniformly encoded index address, so that the lowest byte of the required index data is shifted to the lowest byte of the cache line data. This achieves efficient indexing of data in both big-endian and little-endian formats, can be implemented with simple logic-gate operations, and reduces the design area and power consumption of the indexing circuit, thereby lowering chip area and power cost.
The efficient cache indexing method common to big-endian and little-endian formats according to an embodiment of the invention is described below with reference to a specific embodiment:
Embodiment 1
In this embodiment, the cache line size is 64 bytes, the 64-byte cache line data is data(63:0), the index address is addr[5:0], the index granularity is opsize[5:0], and the little-endian valid signal is le. The 6-bit opsize takes values in [000001, 000010, 000100, 001000, 010000, 100000], corresponding to data granularities of 1 byte, 2 bytes, 4 bytes, 8 bytes, 16 bytes and 32 bytes respectively.
Referring to FIG. 2, when performing a cache line index, the cache line data is first uniformly encoded.
Specifically, if le = 0, data1(63:0) = data(0:63); if le = 1, data1(63:0) = data(63:0); data1(63:0) denotes the uniformly encoded cache line data.
Next, the index address is uniformly encoded.
Specifically, if le = 0, addr1[5:0] = !(addr[5:0] + opsize) + 1; if le = 1, addr1[5:0] = addr[5:0]; addr1[5:0] denotes the uniformly encoded index address.
Next, the uniformly encoded cache line data is 16-byte indexed.
Specifically, if the upper 2 bits of the uniformly encoded index address addr1[5:4] = 00, the uniformly encoded cache line data is not shifted; if addr1[5:4] = 01, it is cyclically shifted right by 16 bytes; if addr1[5:4] = 10, it is cyclically shifted right by 32 bytes; if addr1[5:4] = 11, it is cyclically shifted right by 48 bytes.
Next, the 16-byte-indexed cache line data is 4-byte indexed.
Specifically, if the middle 2 bits of the uniformly encoded index address addr1[3:2] = 00, the 16-byte-indexed cache line data is not shifted; if addr1[3:2] = 01, it is cyclically shifted right by 4 bytes; if addr1[3:2] = 10, it is cyclically shifted right by 8 bytes; if addr1[3:2] = 11, it is cyclically shifted right by 12 bytes.
Next, the 4-byte-indexed cache line data is 1-byte indexed.
Specifically, if the lower 2 bits of the uniformly encoded index address addr1[1:0] = 00, the 4-byte-indexed cache line data is not shifted; if addr1[1:0] = 01, it is cyclically shifted right by 1 byte; if addr1[1:0] = 10, it is cyclically shifted right by 2 bytes; if addr1[1:0] = 11, it is cyclically shifted right by 3 bytes.
After the three-stage indexing, the index data occupies the low-order bytes of the 1-byte-indexed cache line data, and the number of bytes it occupies equals the index granularity.
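As a concrete check of this embodiment (the input values are illustrative and not taken from the patent), consider a 4-byte big-endian load at address 8: the transformed address is addr1 = (~(8 + 4) + 1) & 0x3F = 0x34, so the line is rotated right by 48 + 4 + 0 = 52 bytes and the four requested bytes land, low byte first, at the bottom of the line. The following driver ties together the sketches from the previous steps:

```c
#include <stdio.h>

int main(void) {
    uint8_t data[LINE_BYTES], line[LINE_BYTES], out[LINE_BYTES];
    for (int i = 0; i < LINE_BYTES; i++)
        data[i] = (uint8_t)i;               /* fill the line with its byte index */

    int le = 0;                             /* big-endian data                   */
    uint8_t addr = 8, opsize = 4;           /* 4-byte load at address 8          */

    unify_line(data, le, line);             /* step S2                           */
    uint8_t addr1 = unify_addr(addr, opsize, le);  /* step S3                    */
    index_line(line, addr1);                /* steps S4 to S6                    */
    mask_result(line, opsize, out);         /* step S7                           */

    /* Expect "11 10 9 8": the bytes at addresses 8..11, low byte first. */
    printf("%d %d %d %d\n", out[0], out[1], out[2], out[3]);
    return 0;
}
```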
It should be noted that in this document, relational terms such as "first" and "second" and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. In this context, "front", "rear", "left", "right", "upper" and "lower" are referred to with respect to the placement state shown in the drawings.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present invention, and are not limiting thereof; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (7)

1. An efficient cache indexing method common to big-endian and little-endian formats, characterized by comprising the following steps:
obtaining cache line data, a little-endian valid signal, an index address and an index granularity;
uniformly encoding the cache line data according to the little-endian valid signal so that the cache line data is stored in order from high byte to low byte;
uniformly encoding the index address according to the little-endian valid signal and the index granularity so that the index address points to the low byte of the required index data, comprising: if the little-endian valid signal is 0, transforming the index address according to the index granularity and outputting the transformed index address as the uniformly encoded index address; if the little-endian valid signal is 1, outputting the current index address as the uniformly encoded index address;
performing 16-byte indexing on the uniformly encoded cache line data according to the upper 2 bits of the uniformly encoded index address;
performing 4-byte indexing on the 16-byte-indexed cache line data according to the middle 2 bits of the uniformly encoded index address;
performing 1-byte indexing on the 4-byte-indexed cache line data according to the lower 2 bits of the uniformly encoded index address;
and obtaining the index data from the 1-byte-indexed cache line data and the index granularity,
wherein, letting the obtained index address be addr and the obtained index granularity be opsize, the index address is transformed using the formula !(addr + opsize) + 1, where ! denotes the bitwise NOT (negation) operation.
2. The efficient cache indexing method common to big-endian and little-endian formats according to claim 1, wherein uniformly encoding the cache line data according to the little-endian valid signal comprises:
if the little-endian valid signal is 0, reversing the byte order of the cache line data and outputting the reversed data as the uniformly encoded cache line data;
if the little-endian valid signal is 1, outputting the current cache line data as the uniformly encoded cache line data.
3. The efficient cache indexing method common to big-endian and little-endian formats according to claim 2, wherein: the obtained cache line data is k bytes in size and denoted data(k-1:0), and the uniformly encoded cache line data is denoted data1(k-1:0);
if the little-endian valid signal is 0, data1(k-1:0) = data(0:k-1);
if the little-endian valid signal is 1, data1(k-1:0) = data(k-1:0);
where data(0:k-1) denotes the byte-reversed cache line data.
4. The efficient cache indexing method common to big-endian and little-endian formats according to claim 1, wherein performing 16-byte indexing on the uniformly encoded cache line data according to the upper 2 bits of the uniformly encoded index address comprises:
if the upper 2 bits of the uniformly encoded index address are 0, outputting the uniformly encoded cache line data as the 16-byte-indexed cache line data;
if the upper 2 bits are 1, cyclically shifting the uniformly encoded cache line data right by 16 bytes and outputting the shifted data as the 16-byte-indexed cache line data;
if the upper 2 bits are 2, cyclically shifting the uniformly encoded cache line data right by 32 bytes and outputting the shifted data as the 16-byte-indexed cache line data;
if the upper 2 bits are 3, cyclically shifting the uniformly encoded cache line data right by 48 bytes and outputting the shifted data as the 16-byte-indexed cache line data.
5. The efficient cache indexing method common to big-endian and little-endian formats according to claim 1, wherein performing 4-byte indexing on the 16-byte-indexed cache line data according to the middle 2 bits of the uniformly encoded index address comprises:
if the middle 2 bits of the uniformly encoded index address are 0, outputting the 16-byte-indexed cache line data as the 4-byte-indexed cache line data;
if the middle 2 bits are 1, cyclically shifting the 16-byte-indexed cache line data right by 4 bytes and outputting the shifted data as the 4-byte-indexed cache line data;
if the middle 2 bits are 2, cyclically shifting the 16-byte-indexed cache line data right by 8 bytes and outputting the shifted data as the 4-byte-indexed cache line data;
if the middle 2 bits are 3, cyclically shifting the 16-byte-indexed cache line data right by 12 bytes and outputting the shifted data as the 4-byte-indexed cache line data.
6. The efficient cache indexing method common to big-endian and little-endian formats according to claim 1, wherein performing 1-byte indexing on the 4-byte-indexed cache line data according to the lower 2 bits of the uniformly encoded index address comprises:
if the lower 2 bits of the uniformly encoded index address are 0, outputting the 4-byte-indexed cache line data as the 1-byte-indexed cache line data;
if the lower 2 bits are 1, cyclically shifting the 4-byte-indexed cache line data right by 1 byte and outputting the shifted data as the 1-byte-indexed cache line data;
if the lower 2 bits are 2, cyclically shifting the 4-byte-indexed cache line data right by 2 bytes and outputting the shifted data as the 1-byte-indexed cache line data;
if the lower 2 bits are 3, cyclically shifting the 4-byte-indexed cache line data right by 3 bytes and outputting the shifted data as the 1-byte-indexed cache line data.
7. The efficient cache indexing method common to big-endian and little-endian formats according to claim 1, wherein obtaining the index data from the 1-byte-indexed cache line data and the index granularity comprises:
applying a mask determined by the index granularity to the 1-byte-indexed cache line data to obtain the index data.
CN202310620237.9A 2023-05-30 2023-05-30 Efficient cache indexing method common to big-endian and little-endian formats Active CN117093510B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310620237.9A CN117093510B (en) Efficient cache indexing method common to big-endian and little-endian formats

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310620237.9A CN117093510B (en) Efficient cache indexing method common to big-endian and little-endian formats

Publications (2)

Publication Number Publication Date
CN117093510A CN117093510A (en) 2023-11-21
CN117093510B true CN117093510B (en) 2024-04-09

Family

ID=88775935

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310620237.9A Active CN117093510B (en) 2023-05-30 2023-05-30 Efficient cache indexing method common to big-endian and little-endian formats

Country Status (1)

Country Link
CN (1) CN117093510B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101911013A (en) * 2008-01-11 2010-12-08 国际商业机器公司 Extract cache attribute facility and instruction therefore
CN104011660A (en) * 2011-12-22 2014-08-27 英特尔公司 Processor-based apparatus and method for processing bit streams
CN104170259A (en) * 2012-03-15 2014-11-26 国际商业机器公司 Finding the length of a set of character data having a termination character
CN105247472A (en) * 2013-06-28 2016-01-13 英特尔公司 Processors, methods, systems, and instructions to transcode variable length code points of unicode characters
CN105573962A (en) * 2013-03-15 2016-05-11 甲骨文国际公司 Efficient hardware instructions for single instruction multiple data processors
CN108628638A (en) * 2017-03-16 2018-10-09 华为技术有限公司 Data processing method and device
CN114356647A (en) * 2022-03-18 2022-04-15 天津德科智控股份有限公司 EPS system data coding and storing method
CN115658693A (en) * 2022-11-03 2023-01-31 扬州莱斯信息技术有限公司 Efficient optimized storage method suitable for massive raster tile data
CN115765754A (en) * 2022-11-30 2023-03-07 阿里云计算有限公司 Data coding method and coded data comparison method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10509771B2 (en) * 2017-10-30 2019-12-17 AtomBeam Technologies Inc. System and method for data storage, transfer, synchronization, and security using recursive encoding


Also Published As

Publication number Publication date
CN117093510A (en) 2023-11-21

Similar Documents

Publication Publication Date Title
CN109739555B (en) Chip comprising multiply-accumulate module, terminal and control method
CN100477532C (en) Huffman coding method and equipment
US7538696B2 (en) System and method for Huffman decoding within a compression engine
EP3674883B1 (en) Multiplication circuit, system on chip, and electronic device
CN105634499B (en) Data conversion method based on new short floating point type data
CN112953550A (en) Data compression method, electronic device and storage medium
WO2020125527A1 (en) Data compression method, apparatus, computer device and storage medium
CN111859033B (en) IP library query method and device and IP library compression method and device
CN113613289B (en) Bluetooth data transmission method, system and communication equipment
CN102567254B (en) The method that adopts dma controller to carry out data normalization processing
CN101489128A (en) JPEG2000 pipeline arithmetic encoding method and circuit
US6442729B1 (en) Convolution code generator and digital signal processor which includes the same
CN117093510B (en) Efficient cache indexing method common to big-endian and little-endian formats
US6871274B2 (en) Instruction code conversion apparatus creating an instruction code including a second code converted from a first code
US20230342419A1 (en) Matrix calculation apparatus, method, system, circuit, and device, and chip
CN110365346B (en) Arithmetic entropy coding method and system
CN111464189A (en) Fibonacci binary decoding device and method
WO2021143634A1 (en) Arithmetic coder, method for implementing arithmetic coding, and image coding method
CN111162792A (en) Compression method and device for power load data
CN202602827U (en) Variable-length decoding device based on universal format code table
CN209895329U (en) Multiplier and method for generating a digital signal
CN103428502A (en) Decoding method and decoding system
CN114070470A (en) Encoding and decoding method and device
CN102545910B (en) A kind of jpeg huffman decoding circuit and coding/decoding method thereof
CN116095184B (en) Structured network data transmission coding method, device and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant