WO2023060833A1 - Data interaction method, electronic device, and storage medium - Google Patents

Data interaction method, electronic device, and storage medium

Info

Publication number
WO2023060833A1
Authority
WO
WIPO (PCT)
Prior art keywords
address
interaction
target
data
cache
Prior art date
Application number
PCT/CN2022/080752
Other languages
English (en)
French (fr)
Inventor
雷洪
甄德根
吴桐庆
孔德辉
徐科
Original Assignee
深圳市中兴微电子技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市中兴微电子技术有限公司
Publication of WO2023060833A1 publication Critical patent/WO2023060833A1/zh

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 - Arrangements for program control, e.g. control units
    • G06F 9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/30 - Arrangements for executing machine instructions, e.g. instruction decode
    • G06F 9/34 - Addressing or accessing the instruction operand or the result; Formation of operand address; Addressing modes
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 - Energy efficient computing, e.g. low power processors, power management or thermal management

Definitions

  • the present application relates to but is not limited to the field of data processing, and in particular relates to a data interaction method, electronic equipment, and storage media.
  • at present, data interaction mainly adopts a bulk store-and-fetch approach, that is, all the data are read from the cache according to the instruction and, after data processing is completed, the resulting new data are written into the corresponding cache.
  • Embodiments of the present application provide a data interaction method, electronic equipment, and a storage medium.
  • the embodiment of the present application provides a data interaction method, including: acquiring an interaction instruction, and determining a target cache according to the interaction instruction, wherein the interaction instruction carries preprocessing parameters; and determining a target address in the target cache according to the preprocessing parameters, and performing data interaction with respect to the target address.
  • the embodiment of the present application provides an electronic device, including: a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the data interaction method described in the first aspect when executing the computer program.
  • an embodiment of the present application provides a computer-readable storage medium storing computer-executable instructions, and the computer-executable instructions are used to execute the data interaction method as described in the first aspect.
  • FIG. 1 is a flowchart of a data interaction method provided by an embodiment of the present application
  • FIG. 2 is a flow chart of determining a target address provided by another embodiment of the present application.
  • FIG. 3 is a flowchart of obtaining an interactive address list provided by another embodiment of the present application.
  • FIG. 4 is a flow chart of generating a mask provided by another embodiment of the present application.
  • Fig. 5 is a flow chart of data interaction provided by another embodiment of the present application.
  • FIG. 6 is an example diagram of an interactive address list provided by another embodiment of the present application.
  • Fig. 7 is a flow chart of judging the progress of interaction provided by another embodiment of the present application.
  • FIG. 8 is a flow chart for avoiding address conflicts provided by another embodiment of the present application.
  • FIG. 9 is a flow chart of read and write operations provided by another embodiment of the present application.
  • Fig. 10 is a device diagram of an electronic device provided by another embodiment of the present application.
  • the present application provides a data interaction method, an electronic device, and a storage medium.
  • the data interaction method includes: acquiring an interaction instruction, and determining a target cache according to the interaction instruction, wherein the interaction instruction carries preprocessing parameters;
  • determining a target address in the target cache according to the preprocessing parameters; and performing data interaction with respect to the target address.
  • the target addresses corresponding to the valid data can be calculated according to the preprocessing parameters, so the preprocessing operation is omitted and the number of data interactions is reduced; only the valid data are interacted with, according to the target addresses, which realizes the continuity of reading and writing valid data between caches, improves the efficiency of data interaction, and thus increases data throughput.
  • a common chip usually includes a cache, a control module, and a data first-in first-out buffer (DATA_FIFO).
  • the cache includes a buffer (Buffer) and double data rate synchronous dynamic random access memory (DDR), where the Buffer can be composed of 8 random access memories (RAM) of the same bit width, and the granularity of the DDR is a multiple of that of the Buffer.
  • the control module includes a read control module, a write control module and an instruction parsing module, where the instruction parsing module is used to parse instructions and also to control the execution of the read control module and the write control module.
  • the read control module can select the data to be read from the cache by controlling the read enable, address and mask, and store them in the DATA_FIFO.
  • the write control module can write the data in the DATA_FIFO into the cache by controlling the write enable, address and mask.
  • FIG. 1 is a data interaction method provided by an embodiment of the present application, including but not limited to step S110 and step S120.
  • Step S110 acquiring an interaction instruction, and determining a target cache according to the interaction instruction, wherein the interaction instruction carries a preprocessing parameter.
  • interaction instructions can be stored in a fixed-depth instruction queue through a handshake mechanism, such as the common valid/ready (Valid/Ready) handshake mechanism; those skilled in the art can select an appropriate handshake mechanism according to actual needs.
  • the interaction instruction can be any type of instruction, such as an AI instruction, as long as it involves interaction with the cache.
  • using an instruction queue to store interaction instructions can guarantee the order and accuracy of data transmission; for example, it can ensure that data read from the cache serve as the source data for writing.
  • the interaction may be reading data, writing data, or reading data from one cache and writing them into another cache; this embodiment does not limit the interaction mode.
  • interaction instructions usually carry preprocessing parameters for rearranging and zero-filling the data in the cache. For example, if the bit width of the cache is 8, data at 8 consecutive addresses are read from the cache each time; to avoid reading invalid data, the positions of the data in the cache need to be adjusted according to the preprocessing parameters so that all of those 8 addresses hold valid data. As another example, if several bits of data are lost during data transmission, the addresses of the lost data can be zero-filled to ensure that the data are read successfully.
  • in step S120, the target address in the target cache is determined according to the preprocessing parameters, and data interaction is performed with respect to the target address.
  • this embodiment uses the preprocessing parameters as the data basis and performs a calculation according to them, which is sufficient to determine the target addresses of the valid data; therefore, this embodiment obtains the target addresses by calculation and interacts directly with the data at the target addresses, saving the first interaction in some cases, which can effectively improve the efficiency of data interaction and thus increase data throughput.
  • step S120 of the embodiment shown in FIG. 1 also includes but is not limited to the following steps:
  • Step S210, when the first number is equal to 1, determine the first address of the target cache as the first target address, and determine the remaining target addresses from the data in the target cache on the principle that the interval between two adjacent target addresses satisfies the second number;
  • Step S220, when the first number is greater than 1, obtain at least two target address groups according to the first number, wherein the target addresses in a target address group are consecutive in the interaction order and their number satisfies the first number, and the interval between the last target address of a target address group and the first target address of the next target address group satisfies the second number.
  • the first number can represent the number of pieces of base data that are used consecutively, and the second number can represent the number of pieces of base data that are skipped after the base data have been used consecutively; for convenience of description, the first number is denoted RPT and the second number GAP.
  • for example, with RPT=3 and GAP=3, the first target address group includes address 0, address 1 and address 2; these three target addresses are consecutive in the interaction order and their number is 3; after the 3 pieces of base data are used, 3 addresses are skipped, i.e. address 3, address 4 and address 5; the second target address group includes address 6, address 7 and address 8; the first target address of the second group is address 6, the last target address of the first group is address 2, and the number of addresses between them is 3, which satisfies the second number.
  • before step S120 of the embodiment shown in FIG. 1 is executed, the method also includes but is not limited to the following steps:
  • Step S310 determining the unit interaction amount of the target cache, where the unit interaction amount represents the data amount of each data interaction;
  • Step S320 determine the interaction start address according to the unit interaction amount, wherein the interaction start address is the first address corresponding to each data interaction, and the interaction start address is for the target cache;
  • step S330 an interaction address list is obtained according to the interaction start address.
  • for a Buffer or DDR, the amount of data read or written each time is limited. Taking reading data as an example, with a Buffer bit width of 8 and a unit interaction amount of 2, and using the target address as the start address, data at 8 addresses can be read each time according to the bit width, but only the data at two of those addresses can be output; therefore, to improve the efficiency of data interaction, the first address of each data interaction can be determined according to the unit interaction amount. For example, if the target addresses of the valid data are address 0, address 1 and address 2 and the unit interaction amount is 2, the data at addresses 0 to 7 can be read with address 0 as the interaction start address, but only the data at address 0 and address 1 are output; therefore the first interaction start address in the interaction address list can be set to address 0 and the second interaction start address to address 2, ensuring that each data interaction targets the next piece of valid data to be read.
  • since the cache contains invalid data, a target address can be used as the interaction start address so that valid data are obtained in every interaction; for example, when RPT=1 and GAP=6, the first target address is address 0, the second target address is address 7 and the third target address is address 14, and the unit interaction amount is 2. With address 0 as the interaction start address, the resulting address sequence is address 0 to address 7; since address 0 is a target address, the first interaction start address is address 0, and since address 7 is used in the first data interaction and the next target address is address 14, the second interaction start address is address 14, which gives the interaction address list of Example 1 shown in FIG. 6.
  • after step S330 of the embodiment shown in FIG. 3 is executed, the method also includes but is not limited to the following steps:
  • Step S410 determining the address sequence for data interaction according to the interaction start address
  • Step S420 determining effective addresses in the address sequence according to the unit interaction amount, wherein the effective addresses belong to the target address, and the number of effective addresses corresponding to each address sequence is less than or equal to the unit interaction amount;
  • Step S430, generating a mask associated with the interaction start address, wherein the valid bits of the mask correspond to the positions of the valid addresses in the address sequence.
  • the address sequence for data interaction based on the interaction start address can be determined according to the bit width of the cache.
  • the bit width is 8, and the address sequence includes 8 addresses.
  • Those skilled in the art can also adjust the number of addresses in the address sequence according to the actual situation, which is not limited here.
  • the target addresses can be determined according to the first number RPT and the second number GAP, and the unit interaction amount of the cache limits the amount of data output each time; therefore, even if an address in the address sequence is a target address, it is not necessarily a valid address in a given data interaction.
  • the mask is a selection signal composed of 0s and 1s that can be ANDed with the data read at each address.
  • when data read from the cache are ANDed with 0, the result is 0; when they are ANDed with 1, the result is the data themselves. Therefore, in order to determine the valid addresses from the address sequence and ensure that the data interaction targets valid data, the mask can be determined according to the positions of the valid addresses in the address sequence.
  • in each example, the address sequence obtained with address 0 as the interaction address is address 0 to address 7.
  • in Example 3 (RPT=3, GAP=3), the first target address group includes address 0, address 1 and address 2, 3 addresses (address 3, address 4 and address 5) are skipped, and the next target address group includes address 6, address 7 and address 8.
  • in Example 4 (RPT=5, GAP=8), the first target address group includes address 0 to address 4, 8 addresses (address 5 to address 12) are skipped, and the next target address group includes address 13 to address 17.
  • in Example 5 (RPT=2, GAP=2), the first target address group includes address 0 and address 1, 2 addresses (address 2 and address 3) are skipped, and the next target address group includes address 4 and address 5.
  • step S120 of the embodiment shown in FIG. 1 also includes but is not limited to the following steps:
  • Step S510 obtaining the interaction start address from the interaction address list
  • Step S520 acquiring the mask associated with the interaction start address
  • Step S530 determining a valid target address from the address sequence corresponding to the interaction start address according to the mask
  • Step S540 perform data interaction with the target cache for the effective target address.
  • before the data interaction starts, the interaction address list and the mask associated with each interaction start address can be obtained; after the data interaction starts, the address sequence corresponding to each interaction start address is determined and the valid target addresses are then determined according to the mask, so that data interaction with the target cache is performed directly on the valid target addresses; the mask together with the interaction address list replaces the data rearrangement and zero-filling operations, effectively reducing the number of data interactions.
  • step S120 of the embodiment shown in FIG. 1 also includes but is not limited to the following steps:
  • Step S710 determining the interaction length according to the interaction instruction
  • Step S720, when the amount of data interacted with respect to the target address reaches the interaction length, it is determined that the data interaction is completed.
  • the interaction length is used to characterize the length of the data interaction. For example, if it is determined from the interaction instruction that data at N target addresses need to be read, then after the data interaction for the N target addresses is completed, the data interaction for this interaction instruction is determined to be complete. By using a count based on the interaction length as the signal that execution of the interaction instruction has finished, execution of the next instruction can be started in time, improving the efficiency of data interaction.
  • after step S120 of the embodiment shown in FIG. 1 has been executed, the following steps are also included but not limited to:
  • step S810 it is determined that there is no ongoing data exchange at the target address.
  • data interactions between the chip and the cache may be executed in parallel by multiple instructions; therefore, before performing data interaction with respect to a target address, in order to prevent multiple instructions from executing on it simultaneously, it can be determined that there is no ongoing data interaction at the target address. For example, when instruction A is executing a read operation on the target address and instruction B resolves to a write operation on the same target address, instruction B's write operation on the target address is executed only after instruction A's read operation on the target address has completed; it is sufficient that address conflicts can be avoided.
  • step S120 of the embodiment shown in FIG. 1 also includes but is not limited to the following steps:
  • Step S910 read the target data from the target cache, and save the target data to the source queue;
  • Step S920 when the target data exists in the source queue, write the target data into the target cache according to the target address.
  • in the data read operation, the target data are stored in the target cache; after the target data have been read using the mask together with the interaction address list as described in the above embodiments, they can be stored in the source queue. When the source queue is detected to be non-empty, this is used as a trigger signal for the data write operation corresponding to the interaction instruction, and the target data stored in the source queue are written into the cache in queue order, thereby ensuring the continuity of read and write operations.
  • an embodiment of the present application also provides an electronic device.
  • the electronic device 1000 includes: a memory 1010 , a processor 1020 , and a computer program stored in the memory 1010 and operable on the processor 1020 .
  • the processor 1020 and the memory 1010 may be connected through a bus or in other ways.
  • the non-transitory software programs and instructions required to implement the data interaction method of the above embodiments are stored in the memory 1010, and when executed by the processor 1020, they perform the data interaction method applied to the electronic device in the above embodiments, for example, performing the method steps S110 to S120 in FIG. 1, steps S210 to S220 in FIG. 2, steps S310 to S330 in FIG. 3, steps S410 to S430 in FIG. 4, steps S510 to S540 in FIG. 5, steps S710 to S720 in FIG. 7, step S810 in FIG. 8, and steps S910 to S920 in FIG. 9 described above.
  • the device embodiments described above are only illustrative, and the units described as separate components may or may not be physically separated, that is, they may be located in one place, or may be distributed to multiple network units. Part or all of the modules can be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • an embodiment of the present application also provides a computer-readable storage medium storing computer-executable instructions which, when executed by a processor or controller, for example by a processor in the above electronic device embodiment, cause the processor to execute the data interaction method applied to the electronic device in the above embodiments, for example, to perform the method steps S110 to S120 in FIG. 1, steps S210 to S220 in FIG. 2, steps S310 to S330 in FIG. 3, steps S410 to S430 in FIG. 4, steps S510 to S540 in FIG. 5, steps S710 to S720 in FIG. 7, step S810 in FIG. 8, and steps S910 to S920 in FIG. 9 described above.
  • the term computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for the storage of information such as computer-readable instructions, data structures, program modules, or other data.
  • computer storage media include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store the desired information and that can be accessed by a computer.
  • communication media typically embody computer-readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism, and may include any information delivery media.
  • the embodiment of the present application includes: acquiring an interaction instruction, and determining a target cache according to the interaction instruction, wherein the interaction instruction carries preprocessing parameters; and determining a target address in the target cache according to the preprocessing parameters, and performing data interaction with respect to the target address.
  • the target addresses corresponding to the valid data can be calculated according to the preprocessing parameters, so the preprocessing operation is omitted and the number of data interactions is reduced; the interaction can be performed directly on the valid data according to the target addresses, which realizes the continuity of reading and writing valid data between caches, improves the efficiency of data interaction, and thus increases data throughput.

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

A data interaction method, an electronic device, and a storage medium. The data interaction method comprises: acquiring an interaction instruction, and determining a target cache according to the interaction instruction, wherein the interaction instruction carries preprocessing parameters (S110); and determining a target address in the target cache according to the preprocessing parameters, and performing data interaction with respect to the target address (S120).

Description

Data interaction method, electronic device, and storage medium
Cross-reference to related applications
This application is filed on the basis of, and claims priority to, Chinese patent application No. 202111186796.0 filed on October 12, 2021, the entire contents of which are incorporated herein by reference.
Technical field
The present application relates to, but is not limited to, the field of data processing, and in particular to a data interaction method, an electronic device, and a storage medium.
Background
With the development of artificial intelligence (AI) technology, ever higher data processing capability is required of chips. To make full use of a chip's computing power, the efficiency of the chip's data interaction needs to be improved so as to achieve greater throughput. At present, in instruction-based data processing, data interaction mainly adopts a bulk store-and-fetch approach, that is, all the data are read from the cache according to the instruction and, after data processing is completed, the resulting new data are written into the corresponding cache.
However, not all of the data in the cache are valid. In order to avoid introducing invalid data during the interaction, preprocessing parameters need to be configured in the interaction instruction, and before the data interaction the data in the cache are preprocessed according to the preprocessing parameters, for example rearranged or zero-filled, to ensure that the acquired data are valid; however, the preprocessing operation requires one additional data interaction, which affects the efficiency of data interaction.
Summary
The following is a summary of the subject matter described in detail herein. This summary is not intended to limit the scope of protection of the claims.
Embodiments of the present application provide a data interaction method, an electronic device, and a storage medium.
In a first aspect, an embodiment of the present application provides a data interaction method, including: acquiring an interaction instruction, and determining a target cache according to the interaction instruction, wherein the interaction instruction carries preprocessing parameters; and determining a target address in the target cache according to the preprocessing parameters, and performing data interaction with respect to the target address.
In a second aspect, an embodiment of the present application provides an electronic device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the data interaction method according to the first aspect when executing the computer program.
In a third aspect, an embodiment of the present application provides a computer-readable storage medium storing computer-executable instructions, wherein the computer-executable instructions are used to execute the data interaction method according to the first aspect.
Other features and advantages of the present application will be set forth in the following description and will in part become apparent from the description or be understood by practicing the present application. The objectives and other advantages of the present application can be realized and obtained by means of the structures particularly pointed out in the description, the claims and the accompanying drawings.
Brief description of the drawings
The accompanying drawings are provided for a further understanding of the technical solution of the present application and constitute a part of the description; together with the embodiments of the present application, they serve to explain the technical solution of the present application and do not constitute a limitation on it.
FIG. 1 is a flowchart of a data interaction method provided by an embodiment of the present application;
FIG. 2 is a flowchart of determining a target address provided by another embodiment of the present application;
FIG. 3 is a flowchart of obtaining an interaction address list provided by another embodiment of the present application;
FIG. 4 is a flowchart of generating a mask provided by another embodiment of the present application;
FIG. 5 is a flowchart of data interaction provided by another embodiment of the present application;
FIG. 6 is an example diagram of an interaction address list provided by another embodiment of the present application;
FIG. 7 is a flowchart of judging the progress of interaction provided by another embodiment of the present application;
FIG. 8 is a flowchart of avoiding address conflicts provided by another embodiment of the present application;
FIG. 9 is a flowchart of read and write operations provided by another embodiment of the present application;
FIG. 10 is a device diagram of an electronic device provided by another embodiment of the present application.
Detailed description
In order to make the objectives, technical solutions and advantages of the present application clearer, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only intended to explain the present application and are not intended to limit it.
It should be noted that, although functional modules are divided in the device schematic diagrams and a logical order is shown in the flowcharts, in some cases the steps shown or described may be performed with a module division different from that in the device or in an order different from that in the flowcharts. The terms "first", "second" and the like in the description, the claims and the above drawings are used to distinguish similar objects and are not necessarily used to describe a particular order or sequence.
The present application provides a data interaction method, an electronic device, and a storage medium. The data interaction method includes: acquiring an interaction instruction, and determining a target cache according to the interaction instruction, wherein the interaction instruction carries preprocessing parameters; determining a target address in the target cache according to the preprocessing parameters; and performing data interaction with respect to the target address. According to the technical solution of this embodiment, the target addresses corresponding to the valid data can be calculated from the preprocessing parameters, so the preprocessing operation is omitted and the number of data interactions is reduced; moreover, only the valid data are interacted with, according to the target addresses, which realizes continuity in reading and writing valid data between caches, improves the efficiency of data interaction, and thus increases data throughput.
It is worth noting that a common chip usually includes the following structure: a cache, a control module, and a data first-in first-out buffer (DATA_FIFO). The cache includes a buffer (Buffer) and double data rate synchronous dynamic random access memory (DDR), where the Buffer may be composed of 8 random access memories (RAM) of the same bit width, and the granularity of the DDR is a multiple of that of the Buffer. The control module includes a read control module, a write control module and an instruction parsing module, where the instruction parsing module is used to parse instructions and to control the execution of the read control module and the write control module. With the data interaction method of this embodiment, the read control module can select the data to be read from the cache by controlling the read enable, address and mask, and store them in the DATA_FIFO, and the write control module can write the data in the DATA_FIFO into the cache by controlling the write enable, address and mask.
It should be noted that the above structure is merely an example; those skilled in the art are familiar with how to configure the corresponding functional modules in a chip, and this embodiment does not involve any improvement to the hardware. Based on the above structure, the embodiments of the present application are further described below with reference to the accompanying drawings.
As shown in FIG. 1, FIG. 1 illustrates a data interaction method provided by an embodiment of the present application, which includes, but is not limited to, step S110 and step S120.
Step S110: acquire an interaction instruction, and determine a target cache according to the interaction instruction, wherein the interaction instruction carries preprocessing parameters.
It should be noted that the interaction instruction may be stored into an instruction queue of fixed depth through a handshake mechanism, such as the common valid/ready (Valid/Ready) handshake mechanism; those skilled in the art can select a suitable handshake mechanism according to actual needs, which is not limited here. It can be understood that the interaction instruction may be an instruction of any type, such as an AI instruction, as long as it involves interaction with the cache. Using an instruction queue to store interaction instructions can guarantee the order and accuracy of data transmission, for example, it can ensure that data read from the cache serve as the source data for writing. It is worth noting that the interaction may be reading data, writing data, or reading data from one cache and writing them into another cache; this embodiment does not limit the interaction mode.
It should be noted that, since not all of the data in the cache are valid, the interaction instruction usually carries preprocessing parameters for rearranging and zero-filling the data in the cache. For example, if the bit width of the cache is 8, data at 8 consecutive addresses are read from the cache each time; to avoid reading invalid data, the positions of the data in the cache need to be adjusted according to the preprocessing parameters so that all of those 8 addresses hold valid data. As another example, if several bits of data are lost during data transmission, the addresses of the lost data can be zero-filled to ensure that the data are read successfully.
Step S120: determine a target address in the target cache according to the preprocessing parameters, and perform data interaction with respect to the target address.
It should be noted that, in some cases, taking reading data as an example, a first interaction is needed to read the data from the cache, the rearranging and zero-filling operations are performed according to the preprocessing parameters, the processed valid data are written into the cache, and then a second interaction is performed according to the instruction to read the valid data from the cache, which produces at least two data interactions. In this embodiment, the preprocessing parameters are used as the data basis and a calculation is performed according to them, which is sufficient to determine the target addresses of the valid data. Therefore, this embodiment obtains the target addresses by calculation and interacts directly with the data at the target addresses, saving the first interaction in some cases, which can effectively improve the efficiency of data interaction and thus increase data throughput.
In addition, in an embodiment, the preprocessing parameters include a first number and a second number. Referring to FIG. 2, step S120 of the embodiment shown in FIG. 1 further includes, but is not limited to, the following steps:
Step S210: when the first number is equal to 1, determine the first address of the target cache as the first target address, and determine the remaining target addresses from the data in the target cache on the principle that the interval between two adjacent target addresses satisfies the second number;
or,
Step S220: when the first number is greater than 1, obtain at least two target address groups according to the first number, wherein the target addresses in a target address group are consecutive in the interaction order and their number satisfies the first number, and the interval between the last target address of a target address group and the first target address of the next target address group satisfies the second number.
It should be noted that, since the preprocessing parameters are used to rearrange and zero-fill the base data of the target cache, the first number can represent the number of pieces of base data that are used consecutively, and the second number can represent the number of pieces of base data that are skipped after the base data have been used consecutively. For convenience of description, the first number is hereinafter denoted RPT and the second number GAP. For example, in step S210, when RPT=1, taking GAP=6 as an example, the first target address is address 0 and the interval between the second target address and address 0 is 6, i.e. address 7 is the second target address, address 14 is the third target address, and so on. As another example, in step S220, when RPT is greater than 1, at least two pieces of base data need to be used consecutively. Taking RPT=3 and GAP=3 as an example, the first target address group includes address 0, address 1 and address 2; these three target addresses are consecutive in the interaction order and their number is 3; after the 3 pieces of base data are used, 3 addresses are skipped, i.e. address 3, address 4 and address 5; the second target address group includes address 6, address 7 and address 8; the first target address of the second group is address 6, the last target address of the first group is address 2, and the number of addresses between them is 3, which satisfies the second number. By the method described in the above examples, the target addresses in the target cache can be calculated from the preprocessing parameters without performing any data interaction, providing the address basis for reducing the number of data interactions.
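The RPT/GAP rule above can be expressed compactly in software. The following Python sketch is purely illustrative (the patent describes a hardware control module; the function name and parameters below are not taken from it) and enumerates target addresses for the two worked examples:

    def target_addresses(rpt, gap, count):
        """Yield `count` target addresses: RPT consecutive addresses are used,
        then GAP addresses are skipped, and the pattern repeats."""
        addr, emitted = 0, 0
        while emitted < count:
            for _ in range(rpt):          # RPT consecutive target addresses
                if emitted == count:
                    break
                yield addr
                addr += 1
                emitted += 1
            addr += gap                   # skip GAP addresses before the next group

    print(list(target_addresses(1, 6, 3)))  # RPT=1, GAP=6 -> [0, 7, 14]
    print(list(target_addresses(3, 3, 6)))  # RPT=3, GAP=3 -> [0, 1, 2, 6, 7, 8]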
In addition, in an embodiment, referring to FIG. 3, before step S120 of the embodiment shown in FIG. 1 is executed, the method further includes, but is not limited to, the following steps:
Step S310: determine a unit interaction amount of the target cache, the unit interaction amount representing the amount of data in each data interaction;
Step S320: determine interaction start addresses according to the unit interaction amount, wherein an interaction start address is the first address corresponding to each data interaction, and the interaction start addresses are for the target cache;
Step S330: obtain an interaction address list according to the interaction start addresses.
It should be noted that, for a Buffer or DDR, the amount of data read or written each time is limited. Taking reading data as an example, with a Buffer bit width of 8 and a unit interaction amount of 2, and using the target address as the start address, data at 8 addresses can be read each time according to the bit width, but only the data at two of those addresses can be output. Therefore, to improve the efficiency of data interaction, the first address of each data interaction can be determined according to the unit interaction amount. For example, if the target addresses of the valid data are address 0, address 1 and address 2 and the unit interaction amount is 2, then with address 0 as the interaction start address the data at addresses 0 to 7 can be read, but only the data at address 0 and address 1 are output; therefore the first interaction start address in the interaction address list can be set to address 0 and the second interaction start address to address 2, ensuring that each data interaction targets the next piece of valid data to be read.
It is worth noting that, since the cache contains invalid data, a target address can be used as the interaction start address so that valid data are obtained in every interaction. For example, in the embodiment shown in FIG. 2, when RPT=1 and GAP=6, the first target address is address 0, the second target address is address 7 and the third target address is address 14, and the unit interaction amount is 2. With address 0 as the interaction start address, the resulting address sequence is address 0 to address 7; since address 0 is a target address, the first interaction start address is address 0, and since address 7 is used in the first data interaction and the next target address is address 14, the second interaction start address is address 14, which gives the interaction address list of Example 1 shown in FIG. 6.
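A minimal software sketch of how the interaction address list could be built from the target addresses, the unit interaction amount and the cache bit width is given below. It is an assumption-laden illustration (the names interaction_start_addresses, unit and width are not from the patent), and only the Example 1 result is claimed here:

    def interaction_start_addresses(targets, unit, width=8):
        """Each interaction starts at the next target address not yet consumed;
        one interaction covers a window of `width` addresses and outputs at most
        `unit` target addresses from that window."""
        starts, i = [], 0
        while i < len(targets):
            start = targets[i]
            starts.append(start)
            window_end = start + width - 1
            consumed = 0
            while i < len(targets) and targets[i] <= window_end and consumed < unit:
                i += 1
                consumed += 1
        return starts

    # Example 1 of FIG. 6 (RPT=1, GAP=6, unit interaction amount 2):
    print(interaction_start_addresses([0, 7, 14, 21], unit=2))  # [0, 14]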
In addition, in an embodiment, referring to FIG. 4, after step S330 of the embodiment shown in FIG. 3 is executed, the method further includes, but is not limited to, the following steps:
Step S410: determine the address sequence over which data interaction is performed according to an interaction start address;
Step S420: determine the valid addresses in the address sequence according to the unit interaction amount, wherein the valid addresses belong to the target addresses, and the number of valid addresses corresponding to each address sequence is less than or equal to the unit interaction amount;
Step S430: generate a mask associated with the interaction start address, wherein the valid bits of the mask correspond to the positions of the valid addresses in the address sequence.
It should be noted that the address sequence over which data interaction is performed from an interaction start address can be determined according to the bit width of the cache; for example, in the above example the bit width is 8, so the address sequence includes 8 addresses. Those skilled in the art can also adjust the number of addresses in the address sequence according to the actual situation, which is not limited here.
It should be noted that, based on the description of the embodiment shown in FIG. 2, the target addresses can be determined according to the first number RPT and the second number GAP, and the unit interaction amount of the cache limits the amount of data output each time; therefore, even if an address in the address sequence is a target address, it is not necessarily a valid address in a given data interaction. For example, in Example 3 shown in FIG. 6, RPT=3 and GAP=3, the first 3 target addresses are address 0, address 1 and address 2, and the unit interaction amount is 2; in the first interaction, although all 3 of those addresses are target addresses, only the data at address 0 and address 1 can be output, so address 2 is not a valid address of that interaction. As another example, in Example 4 shown in FIG. 6, RPT=5 and GAP=8, and the interaction start address of the third data interaction is address 4; since GAP=8, the next target address is address 13, and the address sequence obtained from address 4 is address 4 to address 11, in which only address 4 is a target address, fewer than the unit interaction amount, so the valid address of that data interaction is address 4.
It is worth noting that the mask is a selection signal composed of 0s and 1s that can be ANDed with the data read at each bit address: when the data read from the cache are ANDed with 0, the result is 0, and when they are ANDed with 1, the result is the data themselves. Therefore, in order to determine the valid addresses from the address sequence and to ensure that the data interaction targets valid data, the mask can be determined according to the positions of the valid addresses in the address sequence.
To better illustrate how the mask is calculated, five specific examples are given below with reference to FIG. 6, taking a unit interaction amount of 2 and a bit width of 8:
Example 1:
In this example, RPT=1 and GAP=6. Combined with the description of the embodiment of FIG. 3, the first 8 entries of the interaction address list that can be obtained are as shown in Example 1 of FIG. 6. The address sequence obtained with address 0 as the interaction address is address 0 to address 7. From RPT=1 and GAP=6, after address 0 is determined to be a target address, 6 addresses are skipped and the next target address obtained is address 7; the number of target addresses is 2, which does not exceed the unit interaction amount, so both address 0 and address 7 are valid addresses. The corresponding mask is MASK=8'H81, where 8' indicates a bit width of 8 and H81 is a hexadecimal value whose binary form is 10000001; the bits set to 1 correspond to address 0 and address 7. The masks of the remaining entries can be obtained in the same way and are likewise MASK=8'H81.
Example 2:
In this example, RPT=1 and GAP=9. Combined with the description of the embodiment of FIG. 3, the first 8 entries of the interaction address list that can be obtained are as shown in Example 2 of FIG. 6. The address sequence obtained with address 0 as the interaction address is address 0 to address 7. From RPT=1 and GAP=9, after address 0 is determined to be a target address, 9 addresses are skipped and the next target address obtained is address 10; the number of target addresses is 2, which does not exceed the unit interaction amount, so only address 0 is a valid address of the first data interaction. The corresponding mask is MASK=8'H1, where 8' indicates a bit width of 8 and H1 is a hexadecimal value whose binary form is 10000000 (written with the bit for the first address of the sequence on the left); the bit set to 1 corresponds to address 0. In this example, since address 8 and address 9 are not target addresses, the second interaction start address is the next target address, i.e. address 10, and its mask can likewise be determined, with reference to the above description, to be MASK=8'H1; the calculation is not repeated here.
Example 3:
In this example, RPT=3 and GAP=3. Combined with the description of the embodiment of FIG. 3, the first 8 entries of the interaction address list that can be obtained are as shown in Example 3 of FIG. 6. The address sequence obtained with address 0 as the interaction address is address 0 to address 7. From RPT=3 and GAP=3, the first target address group includes address 0, address 1 and address 2; after an interval of 3 addresses (address 3, address 4 and address 5), the next target address group obtained includes address 6, address 7 and address 8. Since the unit interaction amount is 2, the valid addresses of the first data interaction are address 0 and address 1, whose corresponding mask in binary is 11000000, i.e. MASK=8'H3. The interaction start address of the second data interaction is address 2, its address sequence is address 2 to address 9, and its valid addresses are address 2 and address 6, whose corresponding mask in binary is 10001000, i.e. MASK=8'H11. This yields the two masks shown in Example 3 of FIG. 6, where white denotes MASK=8'H3 and black denotes MASK=8'H11; the masks of the subsequent entries are calculated in the same way and are not repeated here.
Example 4:
In this example, RPT=5 and GAP=8. Combined with the description of the embodiment of FIG. 3, the first 8 entries of the interaction address list that can be obtained are as shown in Example 4 of FIG. 6. The address sequence obtained with address 0 as the interaction address is address 0 to address 7. From RPT=5 and GAP=8, the first target address group includes address 0 to address 4; after an interval of 8 addresses (address 5 to address 12), the next target address group obtained includes address 13 to address 17. Since the unit interaction amount is 2, the valid addresses of the first data interaction are address 0 and address 1, whose corresponding mask in binary is 11000000, i.e. MASK=8'H3. The interaction start address of the second data interaction is address 2, its address sequence is address 2 to address 9, and its valid addresses are address 2 and address 3, whose corresponding mask in binary is also 11000000, i.e. MASK=8'H3; therefore the masks corresponding to the first 2 entries of the interaction address list are the same. The interaction start address of the third data interaction is address 4, its address sequence is address 4 to address 11, and its valid address is address 4, whose corresponding mask in binary is 10000000, i.e. MASK=8'H1. This yields the two masks shown in Example 4 of FIG. 6, where white denotes MASK=8'H3 and black denotes MASK=8'H1; the masks of the subsequent entries are calculated in the same way and are not repeated here.
Example 5:
In this example, RPT=2 and GAP=2. Combined with the description of the embodiment of FIG. 3, the first 8 entries of the interaction address list that can be obtained are as shown in Example 5 of FIG. 6. The address sequence obtained with address 0 as the interaction address is address 0 to address 7. From RPT=2 and GAP=2, the first target address group includes address 0 and address 1; after an interval of 2 addresses (address 2 and address 3), the next target address group obtained includes address 4 and address 5. Since the unit interaction amount is 2, the valid addresses of the first data interaction are address 0 and address 1, whose corresponding mask in binary is 11000000, i.e. MASK=8'H3. The interaction start address of the second data interaction is address 2, its address sequence is address 2 to address 9, and its valid addresses are address 2 and address 3, whose corresponding mask in binary is also 11000000, i.e. MASK=8'H3; the masks of the subsequent entries are calculated in the same way and are not repeated here.
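Under the bit convention implied by the examples (bit i of the mask corresponds to the i-th address of the address sequence, the first address being the least significant bit), the mask values quoted above can be reproduced with a short Python sketch. The function name and the explicit target-address sets are illustrative assumptions, not part of the patent:

    def mask_for(start, targets, unit, width=8):
        """Set bit i when the i-th address of the sequence [start, start+width)
        is a target address, selecting at most `unit` valid addresses."""
        mask, picked = 0, 0
        for i in range(width):
            if (start + i) in targets and picked < unit:
                mask |= 1 << i
                picked += 1
        return mask

    ex1 = {0, 7, 14, 21}                  # RPT=1, GAP=6
    ex3 = {0, 1, 2, 6, 7, 8, 12, 13, 14}  # RPT=3, GAP=3
    print(hex(mask_for(0, ex1, unit=2)))  # 0x81 -> MASK=8'H81 (Example 1)
    print(hex(mask_for(0, ex3, unit=2)))  # 0x3  -> MASK=8'H3  (Example 3, first interaction)
    print(hex(mask_for(2, ex3, unit=2)))  # 0x11 -> MASK=8'H11 (Example 3, second interaction)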
In addition, in an embodiment, referring to FIG. 5, step S120 of the embodiment shown in FIG. 1 further includes, but is not limited to, the following steps:
Step S510: obtain an interaction start address from the interaction address list;
Step S520: obtain the mask associated with the interaction start address;
Step S530: determine valid target addresses from the address sequence corresponding to the interaction start address according to the mask;
Step S540: perform data interaction with the target cache with respect to the valid target addresses.
It should be noted that, based on the description of the embodiment shown in FIG. 4, the interaction address list and the mask corresponding to each interaction start address can be obtained before the data interaction starts; therefore, after the data interaction starts, the address sequence corresponding to each interaction start address is determined and the valid target addresses are then determined according to the mask, so that data interaction with the target cache is performed directly on the valid target addresses; the mask together with the interaction address list replaces the data rearrangement and zero-filling operations, effectively reducing the number of data interactions.
It is worth noting that, after the masks have been calculated according to the embodiment shown in FIG. 4, the mask corresponding to each interaction start address can be recorded in hexadecimal in the form shown in FIG. 6, or a mask identifier can be added to each interaction start address and the mask corresponding to each identifier recorded separately; for example, in Example 3 of FIG. 6, white denotes MASK=8'H3 and black denotes MASK=8'H11. Other forms of identification may also be used, which is not limited here.
In addition, in an embodiment, referring to FIG. 7, step S120 of the embodiment shown in FIG. 1 further includes, but is not limited to, the following steps:
Step S710: determine an interaction length according to the interaction instruction;
Step S720: when the amount of data interacted with respect to the target addresses reaches the interaction length, determine that the data interaction is complete.
It should be noted that the interaction length is used to characterize the length of the data interaction. For example, if it is determined from the interaction instruction that data at N target addresses need to be read, then after the data interaction for the N target addresses is completed, the data interaction for this interaction instruction is determined to be complete. By using a count based on the interaction length as the signal that execution of the interaction instruction has finished, execution of the next instruction can be started in time, improving the efficiency of data interaction.
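As a simple illustration of the completion check based on the interaction length, the class below is a hypothetical software model, not taken from the patent:

    class InteractionCounter:
        """Counts interacted target addresses; reaching `length` (N above) marks
        the interaction instruction as complete."""
        def __init__(self, length):
            self.length = length
            self.done = 0

        def record(self, n_valid):
            """Call after each data interaction with the number of valid addresses handled."""
            self.done += n_valid
            return self.done >= self.length  # True: finished, the next instruction may start

    counter = InteractionCounter(length=4)
    print(counter.record(2))  # False
    print(counter.record(2))  # True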
In addition, in an embodiment, referring to FIG. 8, after step S120 of the embodiment shown in FIG. 1 has been executed, the method further includes, but is not limited to, the following steps:
Step S810: determine that there is no ongoing data interaction at the target address.
It should be noted that data interactions between the chip and the cache may be executed in parallel by multiple instructions. Therefore, before performing data interaction with respect to a target address, in order to prevent multiple instructions from executing on it simultaneously, it can be determined that there is no ongoing data interaction at the target address. For example, when instruction A is executing a read operation on the target address and instruction B resolves to a write operation on the same target address, instruction B's write operation on the target address is executed only after instruction A's read operation on the target address has completed; it is sufficient that address conflicts can be avoided.
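One possible software analogue of this conflict check is sketched below, under the assumption of a per-address in-flight counter; the AddressGuard class is hypothetical and not part of the patent, which leaves the concrete mechanism open:

    import threading
    from collections import defaultdict

    class AddressGuard:
        """Blocks an operation on an address until no other instruction is
        interacting with that address."""
        def __init__(self):
            self._busy = defaultdict(int)       # address -> in-flight operations
            self._cond = threading.Condition()

        def acquire(self, addr):
            with self._cond:
                while self._busy[addr] > 0:     # e.g. instruction B waits for instruction A
                    self._cond.wait()
                self._busy[addr] += 1

        def release(self, addr):
            with self._cond:
                self._busy[addr] -= 1
                self._cond.notify_all()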
In addition, in an embodiment, the data interaction includes data reading and data writing. Referring to FIG. 9, step S120 of the embodiment shown in FIG. 1 further includes, but is not limited to, the following steps:
Step S910: read target data from the target cache according to the target address, and save the target data to a source queue;
or,
Step S920: when target data exist in the source queue, write the target data into the target cache according to the target address.
It should be noted that, in the data read operation, the target data are stored in the target cache; after the target data have been read using the mask together with the interaction address list as described in the above embodiments, they can be stored in the source queue. When the source queue is detected to be non-empty, this is used as a trigger signal for the data write operation corresponding to the interaction instruction, and the target data stored in the source queue are written into the cache in queue order, thereby ensuring the continuity of the read and write operations.
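The read-then-write flow through the source queue can be sketched as follows. This is an illustrative software model only (the caches are modeled as dictionaries and the helper names are assumptions), using the Example 1 mask to select the valid words:

    from queue import Queue

    def read_into_fifo(cache, starts, masks, fifo, width=8):
        """Read the words selected by each (start, mask) pair and push them,
        in order, into the FIFO (a stand-in for DATA_FIFO / the source queue)."""
        for start, mask in zip(starts, masks):
            for i in range(width):
                if mask & (1 << i):             # bit i selects the i-th address of the sequence
                    fifo.put((start + i, cache[start + i]))

    def write_from_fifo(fifo, dest_cache):
        """Triggered when the FIFO is non-empty: drain it in queue order."""
        while not fifo.empty():
            addr, word = fifo.get()
            dest_cache[addr] = word             # the same addresses are reused here for simplicity

    src = {a: a * 10 for a in range(24)}        # toy source cache
    dst = {}
    fifo = Queue()
    read_into_fifo(src, starts=[0, 14], masks=[0x81, 0x81], fifo=fifo)
    write_from_fifo(fifo, dst)
    print(dst)                                  # {0: 0, 7: 70, 14: 140, 21: 210}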
In addition, referring to FIG. 10, an embodiment of the present application further provides an electronic device. The electronic device 1000 includes a memory 1010, a processor 1020, and a computer program stored in the memory 1010 and executable on the processor 1020.
The processor 1020 and the memory 1010 may be connected by a bus or in other ways.
The non-transitory software programs and instructions required to implement the data interaction method of the above embodiments are stored in the memory 1010 and, when executed by the processor 1020, perform the data interaction method applied to the electronic device in the above embodiments, for example, performing the method steps S110 to S120 in FIG. 1, steps S210 to S220 in FIG. 2, steps S310 to S330 in FIG. 3, steps S410 to S430 in FIG. 4, steps S510 to S540 in FIG. 5, steps S710 to S720 in FIG. 7, step S810 in FIG. 8, and steps S910 to S920 in FIG. 9 described above.
The device embodiments described above are merely illustrative; the units described as separate components may or may not be physically separate, i.e. they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
Furthermore, an embodiment of the present application provides a computer-readable storage medium storing computer-executable instructions which, when executed by a processor or controller, for example by a processor in the above electronic device embodiment, cause the processor to perform the data interaction method applied to the electronic device in the above embodiments, for example, to perform the method steps S110 to S120 in FIG. 1, steps S210 to S220 in FIG. 2, steps S310 to S330 in FIG. 3, steps S410 to S430 in FIG. 4, steps S510 to S540 in FIG. 5, steps S710 to S720 in FIG. 7, step S810 in FIG. 8, and steps S910 to S920 in FIG. 9 described above. Those of ordinary skill in the art will appreciate that all or some of the steps and systems of the methods disclosed above may be implemented as software, firmware, hardware and appropriate combinations thereof. Some or all of the physical components may be implemented as software executed by a processor, such as a central processing unit, a digital signal processor or a microprocessor, or as hardware, or as an integrated circuit such as an application-specific integrated circuit. Such software may be distributed on computer-readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media). As is well known to those of ordinary skill in the art, the term computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for the storage of information such as computer-readable instructions, data structures, program modules or other data. Computer storage media include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store the desired information and that can be accessed by a computer. Furthermore, as is well known to those of ordinary skill in the art, communication media typically embody computer-readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism, and may include any information delivery media.
The embodiments of the present application include: acquiring an interaction instruction, and determining a target cache according to the interaction instruction, wherein the interaction instruction carries preprocessing parameters; and determining a target address in the target cache according to the preprocessing parameters, and performing data interaction with respect to the target address. According to the technical solution of these embodiments, the target addresses corresponding to the valid data can be calculated from the preprocessing parameters, so the preprocessing operation is omitted and the number of data interactions is reduced; moreover, the interaction can be performed directly on the valid data according to the target addresses, which realizes continuity in reading and writing valid data between caches, improves the efficiency of data interaction, and thus increases data throughput.
The above is a specific description of several implementations of the present application, but the present application is not limited to the above embodiments. Those skilled in the art can make various equivalent modifications or substitutions without departing from the spirit of the present application, and such equivalent modifications or substitutions fall within the scope defined by the claims of the present application.

Claims (10)

  1. A data interaction method, comprising:
    acquiring an interaction instruction, and determining a target cache according to the interaction instruction, wherein the interaction instruction carries preprocessing parameters; and
    determining a target address in the target cache according to the preprocessing parameters, and performing data interaction with respect to the target address.
  2. The method according to claim 1, wherein the preprocessing parameters comprise a first number and a second number, and the determining a target address in the target cache according to the preprocessing parameters comprises:
    when the first number is equal to 1, determining the first address of the target cache as the first target address, and determining the remaining target addresses from the data in the target cache on the principle that the interval between two adjacent target addresses satisfies the second number;
    or,
    when the first number is greater than 1, obtaining at least two target address groups according to the first number, wherein the target addresses in a target address group are consecutive in the interaction order and their number satisfies the first number, and the interval between the last target address of a target address group and the first target address of the next target address group satisfies the second number.
  3. The method according to claim 1 or 2, wherein before the performing data interaction with respect to the target address, the method further comprises:
    determining a unit interaction amount of the target cache, the unit interaction amount representing the amount of data in each data interaction;
    determining interaction start addresses according to the unit interaction amount, wherein an interaction start address is the first address corresponding to each data interaction, and the interaction start addresses are for the target cache; and
    obtaining an interaction address list according to the interaction start addresses.
  4. The method according to claim 3, wherein after the obtaining an interaction address list according to the interaction start addresses, the method further comprises:
    determining an address sequence over which data interaction is performed according to an interaction start address;
    determining valid addresses in the address sequence according to the unit interaction amount, wherein the valid addresses belong to the target addresses, and the number of valid addresses corresponding to each address sequence is less than or equal to the unit interaction amount; and
    generating a mask associated with the interaction start address, wherein the valid bits of the mask correspond to the positions of the valid addresses in the address sequence.
  5. The method according to claim 4, wherein the performing data interaction with respect to the target address comprises:
    obtaining the interaction start address from the interaction address list;
    obtaining the mask associated with the interaction start address;
    determining valid target addresses from the address sequence corresponding to the interaction start address according to the mask; and
    performing data interaction with the target cache with respect to the valid target addresses.
  6. The method according to any one of claims 1 to 5, wherein the performing data interaction with respect to the target address comprises:
    determining an interaction length according to the interaction instruction; and
    when the amount of data interacted with respect to the target address reaches the interaction length, determining that the data interaction is complete.
  7. The method according to claim 1, wherein after the determining a target address in the target cache according to the preprocessing parameters, the method further comprises:
    determining that there is no ongoing data interaction at the target address.
  8. The method according to claim 7, wherein the data interaction comprises data reading and data writing, and the performing data interaction with respect to the target address comprises:
    reading target data from the target cache according to the target address, and saving the target data to a source queue;
    or,
    when target data exist in the source queue, writing the target data into the target cache according to the target address.
  9. An electronic device, comprising: a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the data interaction method according to any one of claims 1 to 8 when executing the computer program.
  10. A computer-readable storage medium storing computer-executable instructions, wherein the computer-executable instructions are used to execute the data interaction method according to any one of claims 1 to 8.
PCT/CN2022/080752 2021-10-12 2022-03-14 Data interaction method, electronic device, and storage medium WO2023060833A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111186796.0A CN115964084A (zh) 2021-10-12 2021-10-12 Data interaction method, electronic device, and storage medium
CN202111186796.0 2021-10-12

Publications (1)

Publication Number Publication Date
WO2023060833A1 true WO2023060833A1 (zh) 2023-04-20

Family

ID=85899898

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/080752 WO2023060833A1 (zh) Data interaction method, electronic device, and storage medium

Country Status (2)

Country Link
CN (1) CN115964084A (zh)
WO (1) WO2023060833A1 (zh)

Citations (5)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2009000702A1 (en) * 2007-06-28 2008-12-31 International Business Machines Corporation Method and apparatus for accessing a cache
CN108292232A (zh) * 2015-12-21 2018-07-17 英特尔公司 用于加载索引和分散操作的指令和逻辑
CN109254930A (zh) * 2017-07-13 2019-01-22 华为技术有限公司 数据访问方法及装置
CN110018811A (zh) * 2019-04-15 2019-07-16 北京智芯微电子科技有限公司 Cache数据处理方法以及Cache
CN113900966A (zh) * 2021-11-16 2022-01-07 北京微核芯科技有限公司 一种基于Cache的访存方法及装置

Also Published As

Publication number Publication date
CN115964084A (zh) 2023-04-14

Similar Documents

Publication Publication Date Title
US10346018B2 (en) Method, apparatus and storage medium for processing HTML5 canvas application
US7233335B2 (en) System and method for reserving and managing memory spaces in a memory resource
US20180039424A1 (en) Method for accessing extended memory, device, and system
US11294675B2 (en) Writing prefetched data into intra-core caches of cores identified by prefetching instructions
US20180052685A1 (en) Processor and method for executing instructions on processor
CN105808219B (zh) 一种内存空间分配方法及装置
US9846626B2 (en) Method and apparatus for computer memory management by monitoring frequency of process access
US20230199143A1 (en) Video storage method and apparatus, and soc system and medium
CN110223216B (zh) 一种基于并行plb的数据处理方法、装置及计算机存储介质
CN104503703A (zh) 缓存的处理方法和装置
CN112506823B (zh) 一种fpga数据读写方法、装置、设备及可读存储介质
WO2018032698A1 (zh) 一种翻页方法、装置及书写终端
CN112579595A (zh) 数据处理方法、装置、电子设备及可读存储介质
US20160110286A1 (en) Data writing method and memory system
US10817183B2 (en) Information processing apparatus and information processing system
WO2023060833A1 (zh) 数据交互方法、电子设备、存储介质
US10102116B2 (en) Multi-level page data structure
US10380029B2 (en) Method and apparatus for managing memory
CN116225314A (zh) 数据写入方法、装置、计算机设备和存储介质
US10482027B2 (en) Cache management method and apparatus
US11977894B2 (en) Method and system for distributing instructions in reconfigurable processor and storage medium
US10162525B2 (en) Translating access requests for a multi-level page data structure
CN108694187A (zh) 实时流数据的存储方法及装置
US20220014705A1 (en) Data processing method and related product
CN116185497B (zh) 命令解析方法、装置、计算机设备和存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22879784

Country of ref document: EP

Kind code of ref document: A1