WO2020227878A1 - Method for scheduling data in a memory, data scheduling device, and system

Method for scheduling data in a memory, data scheduling device, and system

Info

Publication number
WO2020227878A1
WO2020227878A1 (PCT/CN2019/086567)
Authority
WO
WIPO (PCT)
Prior art keywords
data
target data
chip memory
chip
scheduling device
Prior art date
Application number
PCT/CN2019/086567
Other languages
English (en)
French (fr)
Inventor
翟记业
姚国才
鲁婷
王少华
吴红梅
Original Assignee
华为技术有限公司
Priority date
Filing date
Publication date
Application filed by 华为技术有限公司
Priority to PCT/CN2019/086567
Priority to CN201980009406.7A (CN112292660B)
Publication of WO2020227878A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00: Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02: Addressing or allocation; Relocation
    • G06F 12/08: Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems

Definitions

  • This application relates to the field of data processing technology, and in particular to a method for scheduling data in a memory, a data scheduling device, and a system.
  • With the development of terminal technology, more and more business scenarios require the terminal baseband chip to provide a low-latency characteristic, that is, the delay generated when the terminal processes services through the chip is low. For this reason, current technical solutions for terminal baseband chip design usually use the on-chip memory on the chip to read and write data, reducing the delay generated by the chip when reading and writing data in order to reduce the latency of the terminal's service processing. However, in some practical application scenarios the terminal also needs to meet communication specifications with a higher bit rate; for example, for some terminals using the 5th generation mobile network (5G) technology, the bit rate may be required to reach 10 Gbps. This makes it necessary to equip the chip with a large amount of on-chip memory when designing it, which leads to high cost and power consumption of the designed chip.
  • 5G: 5th generation mobile network
  • Based on this, in the existing technical solution, in order to reduce the on-chip memory that the chip needs to be equipped with, the program codes and/or business data of various services are stored in advance in dynamic random access memory (DRAM) outside the chip.
  • DRAM: dynamic random access memory
  • Then, for the service currently being processed, the memory management unit (MMU) and the cache (Cache) controller transfer the corresponding data stored in the DRAM into the on-chip memory, and call the data originally in the on-chip memory out to the DRAM; when the next segment of the current service's program needs to be executed or the next service needs to be processed, the corresponding data is transferred into and out of the on-chip memory based on a similar process.
  • However, using the MMU and Cache controller to control the transfer of data into and out of the on-chip memory may cause unpredictable delay jitter, which may make the terminal unable to process some services with relatively strict delay requirements.
  • The embodiments of the present application provide a method for scheduling data in a memory, a data scheduling device, and a system, so as to reduce the delay caused by transferring data into the on-chip memory, so that the terminal is able to process services with more stringent delay requirements.
  • an embodiment of the present application provides a method for scheduling data in a memory.
  • The method includes: a data scheduling device obtains, in response to a call-in instruction for first target data, the length of the first target data, the physical storage address of the first target data in an off-chip memory, and the physical running address of the first target data in an on-chip memory;
  • the data scheduling device extracts the first target data from the off-chip memory according to the physical storage address and the length of the first target data;
  • the data scheduling device transfers, according to the physical running address, second target data obtained based on the first target data into the on-chip memory,
  • and the second target data is processed by an on-chip processor; the on-chip processor, the on-chip memory, and the data scheduling device are integrated on the same chip.
  • In this implementation, the data scheduling device locates, based on the physical storage address of the data and the length of the data, all of the data in the off-chip memory that needs to be transferred into the on-chip memory for processing by the on-chip processor, and transfers all of that data into the on-chip memory at one time. The data scheduling device therefore does not need to perform the data transfer process frequently, which reduces the time consumed by transferring data into the on-chip memory and reduces the delay caused by transferring data into the on-chip memory.
  • Moreover, the data scheduling device is usually hardware. Compared with an implementation that uses software to control the transfer of data into the on-chip memory, a hardware data scheduling device can transfer the data in the off-chip memory into the on-chip memory for processing more quickly, which can reduce the delay caused by the data transfer.
  • The physical storage address is the start address or end address of the first target data in the off-chip memory,
  • and the physical running address is the start address or end address of the first target data in the on-chip memory.
  • The data scheduling device can locate the start position or end position of the first target data based on its start or end address in the off-chip memory and, combined with the data length of the first target data, determine all of the first target data in the off-chip memory; similarly, the data scheduling device can locate, based on the start or end address of the first target data in the on-chip memory, the start position or end position at which the first target data is stored in the on-chip memory, and thereby determine the storage locations of all of the first target data in the on-chip memory.
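  • The following is a minimal sketch of this addressing scheme, assuming the physical addresses are start addresses and that the type and function names (transfer_params_t, call_in) are illustrative rather than taken from the application:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Illustrative parameters of one call-in operation: a start address in the
 * off-chip memory, a start address in the on-chip memory, and the data length. */
typedef struct {
    uintptr_t physical_storage_address; /* start address in off-chip memory */
    uintptr_t physical_running_address; /* start address in on-chip memory  */
    size_t    length;                   /* length of the first target data  */
} transfer_params_t;

/* The start address plus the length identify the whole span, so the data can be
 * copied into the on-chip memory in a single pass instead of page by page. */
static void call_in(const transfer_params_t *p)
{
    const uint8_t *src = (const uint8_t *)p->physical_storage_address;
    uint8_t       *dst = (uint8_t *)p->physical_running_address;
    memcpy(dst, src, p->length);
}
```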
  • In a possible implementation, the first target data is compressed data; and the transferring, by the data scheduling device according to the physical running address, of the second target data obtained based on the first target data into the on-chip memory includes: the data scheduling device decompresses the first target data to obtain the second target data; and the data scheduling device transfers the second target data into the on-chip memory according to the physical running address.
  • In this implementation, the data that needs to be transferred into the on-chip memory is stored in the off-chip memory in compressed form. It can be understood that the amount of data after compression is usually smaller than the amount of the uncompressed data.
  • Storing the data in the off-chip memory in a compressed format therefore allows the off-chip memory to store more data, that is, it reduces the memory space of the off-chip memory required for data storage.
  • Correspondingly, when transferring data that is compressed and stored in the off-chip memory into the on-chip memory, the data scheduling device can decompress the data in the compressed format and transfer the decompressed data into the on-chip memory, so that the on-chip processor can process the data in the on-chip memory.
  • The decompressing of the first target data by the data scheduling device includes: the data scheduling device determines that the first target data has a compression identifier, and decompresses the first target data in response to the compression identifier.
  • The compression identifier can be used to mark data stored in a compressed format in the off-chip memory. In this way, each time the data scheduling device transfers data from the off-chip memory into the on-chip memory, it can query whether the data has a corresponding compression identifier; if so, the data is in a compressed format and can be decompressed and then transferred into the on-chip memory.
  • The method further includes: the data scheduling device acquires third target data from the on-chip memory, and calls out fourth target data obtained based on the third target data to the off-chip memory.
  • The data scheduling device can call data that already exists in the on-chip memory, in particular data that has already been processed by the on-chip processor, out of the on-chip memory.
  • In this way, the memory space previously occupied by the called-out data in the on-chip memory can accommodate the data newly transferred in by the data scheduling device. Moreover, if the terminal needs to process the same service again, it may need to do so based on the program code and business data obtained the last time the service was processed. Therefore, the data scheduling device can call the data in the on-chip memory out to the off-chip memory for storage, so that when the terminal processes the service again, the data scheduling device can transfer the program code and business data obtained during the previous processing of the service into the on-chip memory for processing by the on-chip processor.
  • In a possible implementation, the third target data is uncompressed data; and the calling out of the fourth target data obtained based on the third target data to the off-chip memory includes: the data scheduling device compresses the third target data to obtain the fourth target data; and the data scheduling device calls out the fourth target data to the off-chip memory.
  • Because data in a compressed format usually has a smaller data amount than the same data when uncompressed, compressing the data before the data scheduling device calls it out of the on-chip memory to the off-chip memory for storage reduces the storage space that the data occupies in the off-chip memory.
  • In a possible implementation, the third target data is data obtained by the on-chip processor processing the second target data,
  • and the compressing of the third target data by the data scheduling device includes: the data scheduling device compresses the third target data based on the compression identifier of the first target data.
  • If the data that the data scheduling device calls out of the on-chip memory is data that was previously transferred into the on-chip memory and has been processed by the on-chip processor, then, when that data is stored into the off-chip memory, if the data was stored in the off-chip memory in a compressed format before being transferred into the on-chip memory, the data scheduling device can likewise store the data in the off-chip memory in a compressed format,
  • so that the manner in which the data is stored in the off-chip memory before and after processing remains consistent.
  • The method further includes: the data scheduling device or the on-chip processor records the length of the fourth target data.
  • Because the data processed by the on-chip processor has changed compared with the data before processing, the amount of compressed data obtained when the processed data is compressed and stored differs from the amount of the compressed data previously stored in the off-chip memory.
  • Therefore, when the data scheduling device stores the compressed data in the off-chip memory, the data scheduling device or the on-chip processor can record the length of the compressed data, so that when the data scheduling device subsequently needs to extract the compressed data from the off-chip memory, it can locate the compressed data in the off-chip memory based on the length of the compressed data and the corresponding physical storage address.
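  • A minimal sketch of this call-out path, assuming compress() is a stand-in for whatever compression engine the chip provides and that the remaining names are illustrative:

```c
#include <stddef.h>
#include <stdint.h>

/* Assumed compression engine; returns the length of the compressed output. */
size_t compress(const uint8_t *in, size_t in_len, uint8_t *out);

/* Compress the processed (third target) data into the fourth target data, store
 * it at its physical storage address in the off-chip memory, and record the new
 * length so the compressed data can be located again later. */
static void call_out_compressed(const uint8_t *third_target, size_t third_len,
                                uintptr_t physical_storage_address,
                                size_t *recorded_length)
{
    uint8_t *dst = (uint8_t *)physical_storage_address;
    *recorded_length = compress(third_target, third_len, dst);
}
```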
  • the off-chip memory is specifically a volatile memory or a non-volatile memory.
  • The on-chip memory is specifically a volatile memory.
  • For services with strict delay requirements, the program code and business data corresponding to the service can be stored in a volatile memory such as a DDR memory, so that the time the data scheduling device needs to read the data from the volatile memory is relatively short; for other services whose delay requirements are not high, the program code and corresponding business data can be stored in a non-volatile memory such as a flash memory.
  • In this way, the terminal can process more services based on the program codes and business data stored in the memories.
  • In a second aspect, an embodiment of the present application also provides a data scheduling device, including: an acquisition module, configured to acquire, in response to a call-in instruction for first target data, the length of the first target data,
  • the physical storage address of the first target data in an off-chip memory, and the physical running address of the first target data in an on-chip memory; an extraction module, configured to extract the first target data from the off-chip memory according to the physical storage address and the length of the first target data;
  • and a call-in module, configured to transfer second target data obtained based on the first target data into the on-chip memory according to the physical running address,
  • where the second target data is processed by an on-chip processor; the on-chip processor, the on-chip memory, and the data scheduling device are integrated on the same chip.
  • The physical storage address is the start address or end address of the first target data in the off-chip memory,
  • and the physical running address is the start address or end address of the first target data in the on-chip memory.
  • the first target data is compressed data;
  • The call-in module includes: a decompression unit, configured to decompress the first target data to obtain the second target data; and a transfer unit, configured to transfer the second target data into the on-chip memory according to the physical running address.
  • The decompression unit includes: a determining subunit, configured to determine that the first target data has a compression identifier; and a decompression subunit, configured to decompress the first target data in response to the compression identifier.
  • The data scheduling device further includes: a third target data acquisition module, configured to obtain third target data from the on-chip memory; and a call-out module, configured to call out fourth target data obtained based on the third target data to the off-chip memory.
  • the third target data is uncompressed data;
  • The call-out module includes: a compression unit, configured to compress the third target data to obtain the fourth target data; and a call-out unit, configured to call out the fourth target data to the off-chip memory.
  • the third target data is data obtained by processing the second target data by the on-chip processor
  • the compression unit is specifically configured to compress the third target data based on the compression identifier of the first target data.
  • The data scheduling device further includes: a recording module, configured to record the length of the fourth target data.
  • the off-chip memory is specifically a volatile memory or a non-volatile memory.
  • The on-chip memory is specifically a volatile memory.
  • The data scheduling device provided by the second aspect corresponds to the method for scheduling data in a memory provided by the first aspect; for the various possible implementations of the data scheduling device provided by the second aspect, reference may be made to the various possible implementations of the method for scheduling data in a memory provided by the first aspect.
  • In a third aspect, an embodiment of the present application also provides a data scheduling system. The system includes an on-chip processor, an on-chip memory, and a data scheduling device, and the on-chip processor, the on-chip memory, and the data scheduling device are integrated on the same chip; the on-chip processor is configured to process the data in the on-chip memory; the on-chip memory is configured to cache data; and the data scheduling device is configured to: in response to a call-in instruction for first target data, obtain the length of the first target data, the physical storage address of the first target data in an off-chip memory, and the physical running address of the first target data in the on-chip memory; extract the first target data from the off-chip memory according to the physical storage address and the length of the first target data; and, according to the physical running address, transfer second target data obtained based on the first target data into the on-chip memory, where the second target data is processed by the on-chip processor.
  • The physical storage address is the start address or end address of the first target data in the off-chip memory,
  • and the physical running address is the start address or end address of the first target data in the on-chip memory.
  • In a possible implementation, the first target data is compressed data; the data scheduling device is specifically configured to decompress the first target data to obtain the second target data, and load the second target data into the on-chip memory according to the physical running address.
  • The data scheduling device is specifically configured to determine that the first target data has a compression identifier, and decompress the first target data in response to the compression identifier.
  • the system further includes an off-chip memory, and the off-chip memory is configured to store the first target data.
  • the off-chip memory can be integrated with the on-chip processor, the on-chip memory, and the data scheduling device in a system.
  • The data scheduling device is further configured to obtain third target data from the on-chip memory, and call out fourth target data obtained based on the third target data to the off-chip memory; the off-chip memory is also configured to store the fourth target data.
  • In a possible implementation, the third target data is uncompressed data; the data scheduling device is specifically configured to compress the third target data to obtain the fourth target data, and call out the fourth target data to the off-chip memory.
  • the third target data is data obtained by processing the second target data by the on-chip processor;
  • the data scheduling device is specifically configured to compress the third target data based on the compression identifier of the first target data.
  • The data scheduling device or the on-chip processor is further configured to record the length of the fourth target data.
  • the off-chip memory is specifically a volatile memory or a non-volatile memory.
  • The on-chip memory is specifically a volatile memory.
  • The data scheduling system provided by the third aspect corresponds to the method for scheduling data in a memory provided by the first aspect; for the various possible implementations of the data scheduling system provided by the third aspect, reference may be made to the various possible implementations of the method for scheduling data in a memory provided by the first aspect.
  • In the foregoing implementations, the data scheduling device may obtain, based on the received call-in instruction, the length of the first target data that needs to be transferred into the on-chip memory, the physical storage address of the first target data in the off-chip memory, and its physical running address in the on-chip memory; then extract the first target data from the off-chip memory based on the physical storage address and the length of the first target data; and transfer the second target data obtained based on the first target data at one time to the location corresponding to the physical running address in the on-chip memory, where the second target data is processed by the on-chip processor, and the on-chip processor, the on-chip memory, and the data scheduling device are integrated on the same chip.
  • the data scheduling device loads data into the on-chip memory, it determines all the data that needs to be transferred into the on-chip memory based on the physical storage address of the data and the length of the data, and loads all the data into the on-chip memory at once. There is no need to perform the data transfer process frequently, thereby reducing the time-consuming time of transferring data into the on-chip memory and reducing the delay caused by transferring data into the on-chip memory.
  • In addition, a hardware data scheduling device is used to complete the data scheduling. Compared with the existing technical solution that uses software to control the transfer of data in the off-chip memory into the on-chip memory, the time required for data scheduling is shorter, which can also reduce the delay caused by data scheduling.
  • Figure 1 is a schematic diagram of an exemplary data scheduling system in an embodiment of the application
  • FIG. 2 is a schematic diagram of another exemplary data scheduling system in an embodiment of the application.
  • FIG. 3 is a schematic flowchart of a method for scheduling data in a memory in an embodiment of the application
  • FIG. 4 is a schematic diagram of signaling interaction of a method for scheduling data in a memory in an embodiment of the application
  • Figure 5 shows an example of a scene including a chip and off-chip memory
  • Figure 6 shows the specific structure of an exemplary descriptor
  • FIG. 7 is a schematic structural diagram of a data scheduling device in an embodiment of the application.
  • In the existing technical solution, in order that the terminal baseband chip can be designed to meet the requirements of low latency and a high bit rate while also reducing the on-chip memory configured on the chip, the required program code and business data are paged and stored in the DRAM.
  • When the terminal processes a service, the MMU and Cache controller configured for the terminal baseband chip transfer the program code and business data on one page of the DRAM into the on-chip memory and, at the same time, call out the data on the corresponding page of the on-chip memory to the DRAM.
  • After the processor finishes processing the data on that page, the MMU and Cache controller transfer the data on the next page into the on-chip memory and call out the data on the corresponding page.
  • In practical applications, the maximum delay allowed for processing some program code and business data may be small (that is, the delay requirement is relatively strict).
  • If the MMU and Cache controller transfer the data on one page into and out of the on-chip memory each time, have it processed by the processor, and then transfer the next page of data into and out of the on-chip memory for processing, the MMU and Cache controller will frequently execute the data transfer process in units of pages while this program code and business data are executed.
  • This can make the actual delay generated when the processor executes this program code and business data longer than the maximum delay allowed for processing the data, that is, delay jitter occurs, so that this data transfer method cannot meet the requirements of practical applications.
  • For example, suppose the processor is to process a piece of program code and business data whose maximum allowed delay is 60 microseconds, the data is stored in the DRAM on 10 pages, it takes the MMU and Cache controller 2 microseconds to transfer the data on one page into or out of the on-chip memory, and processing the data transferred into the on-chip memory takes 30 microseconds. Since the processor, after executing the data of one page in the on-chip memory, must wait for the MMU and Cache controller to transfer the next page of data into and out of the on-chip memory before it can process the newly transferred data, the actual time the processor needs to process this program code and business data is 70 microseconds, that is, the sum of the time spent transferring data in (2 microseconds × 10), the time spent calling data out (2 microseconds × 10), and the time spent processing the data (30 microseconds).
  • This sum of the three durations is greater than the maximum delay allowed for the processor to process this piece of data, so the delay generated by the processor in processing this piece of data cannot meet the requirements of some low-latency scenarios in practical applications.
  • For this reason, the embodiment of the present application provides a method for scheduling data in the memory, which reduces the delay caused by transferring data into the on-chip memory by transferring the data into the on-chip memory at one time.
  • In the method, the data scheduling device may obtain, based on the received call-in instruction, the length of the first target data that needs to be transferred into the on-chip memory, the physical storage address of the first target data in the off-chip memory, and its physical running address in the on-chip memory; extract the first target data from the off-chip memory; and transfer the second target data obtained based on the first target data at one time to the location corresponding to the physical running address in the on-chip memory,
  • where the second target data is processed by an on-chip processor, and the on-chip processor, the on-chip memory, and the data scheduling device are integrated on the same chip. It can be seen that when the data scheduling device transfers data into the on-chip memory, it determines, based on the physical storage address of the data and the length of the data, all of the data that needs to be transferred into the on-chip memory, and transfers all of that data into the on-chip memory at one time.
  • In addition, a hardware data scheduling device is used to complete the data scheduling. Compared with the technical solution of using software to control the transfer of data in the off-chip memory into the on-chip memory, the time taken by data scheduling is shorter, which can also reduce the delay caused by data scheduling.
  • Taking as an example the same program code and business data whose maximum allowed processing delay is 60 microseconds (with the amount of data unchanged): processing the data transferred into the on-chip memory still takes 30 microseconds, and even if the time required to call the data out of the on-chip memory remains 20 microseconds, the data scheduling device may take only 8 microseconds to transfer the program code and business data into the on-chip memory at one time. The actual time the terminal needs to process the service is then 58 microseconds (that is, 30 microseconds + 20 microseconds + 8 microseconds), which does not exceed the maximum delay allowed for the processor to process the data, thereby meeting the low-latency requirement of the service.
  • Furthermore, if the data is also called out of the on-chip memory by the data scheduling device, the time required to call the data out of the on-chip memory can be reduced to 8 microseconds, which can further reduce the actual time the processor needs to process this piece of data, specifically to 46 microseconds (that is, 30 microseconds + 8 microseconds + 8 microseconds).
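  • For reference, a short illustrative program comparing the three cases above (the figures are the example values given in the text; the names are not from the application):

```c
#include <stdio.h>

int main(void)
{
    const int budget_us     = 60;  /* maximum allowed delay                      */
    const int pages         = 10;
    const int per_page_us   = 2;   /* per-page call-in or call-out via MMU/Cache */
    const int processing_us = 30;
    const int one_shot_us   = 8;   /* one-time transfer by the scheduling device */

    int paged        = pages * per_page_us * 2 + processing_us;           /* 70 us */
    int one_shot_in  = one_shot_us + pages * per_page_us + processing_us; /* 58 us */
    int one_shot_all = one_shot_us * 2 + processing_us;                   /* 46 us */

    printf("paged transfer: %d us (budget %d us)\n", paged, budget_us);
    printf("one-shot call-in only: %d us\n", one_shot_in);
    printf("one-shot call-in and call-out: %d us\n", one_shot_all);
    return 0;
}
```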
  • the method for scheduling data in a memory can be applied to the exemplary data scheduling system shown in FIG. 1, and the data scheduling system can be located on the chip 101.
  • the data scheduling device 1013 can transfer the data in the off-chip memory 102 to the on-chip memory 1012.
  • the chip 101 integrates an on-chip processor 1011, an on-chip memory 1012, and a data scheduling device 1013.
  • the on-chip processor 1011 can send a data transfer instruction to the data scheduling device 1013 to control the data scheduling device 1013 to perform corresponding data transfer.
  • Based on the call-in instruction, the data scheduling device 1013 obtains the length of the first target data that needs to be transferred into the on-chip memory 1012, the physical storage address of the first target data in the off-chip memory 102, and the physical running address of the first target data in the on-chip memory 1012 after it is transferred into the on-chip memory 1012. The data scheduling device 1013 can then extract the first target data from the off-chip memory 102 according to the acquired physical storage address and the length of the first target data, and then transfer the second target data obtained based on the first target data into the on-chip memory 1012 according to the acquired physical running address, so that the on-chip processor 1011 processes the second target data in the on-chip memory 1012. There may or may not be a connection between the on-chip processor 1011 and the data scheduling device 1013.
  • the foregoing data scheduling system is only an example of the system provided in the embodiment of the present application, and the embodiment of the present application is not limited to this scenario.
  • the off-chip memory 102 shown in FIG. 1 can also be regarded as a part of the data scheduling system for storing data transferred into the on-chip memory 1012 and transferring data from the on-chip memory 1012.
  • the on-chip processor 1011 and the on-chip memory 1012 can be independent of each other, and the data scheduling device 1013 can control the operation behavior of the on-chip processor 1011 to access data in the on-chip memory 1012;
  • the on-chip memory 1012 can also be directly integrated into the on-chip processor 1011, as shown in FIG. 2.
  • the embodiments of the present application can be applied to any applicable data scheduling system, and are not limited to the above exemplary data scheduling system.
  • FIG. 3 shows a schematic flowchart of a method for scheduling data in a memory in an embodiment of the present application.
  • The method for scheduling data in the memory shown in FIG. 3 can be applied to the data scheduling system shown in FIG. 1 or FIG. 2, where the on-chip processor, the on-chip memory, and the data scheduling device shown in FIG. 3 are integrated on the same chip. The method may specifically include: S301: The on-chip processor or the on-chip memory generates a call-in instruction for the first target data, and sends the call-in instruction to the data scheduling device.
  • the on-chip processor or on-chip memory may trigger the data scheduling device to transfer data into the on-chip memory.
  • Specifically, the on-chip processor or the on-chip memory can generate a call-in instruction for the first target data, where the call-in instruction is used to instruct the data scheduling device to call the first target data into the corresponding location in the on-chip memory; the on-chip processor can then send the generated call-in instruction to the data scheduling device.
  • In an example, the on-chip processor can trigger the data scheduling device to complete the data scheduling according to a time sequence. It can be understood that when the terminal processes a certain service, if the on-chip processor determines that the current program code and business data in the on-chip memory are about to be processed completely, it can trigger the data scheduling device in advance, according to the time sequence, to transfer the next segment of program code and business data into the on-chip memory. In this way, when the previous segment of program code and business data in the on-chip memory has been processed, the next segment of program code and business data has just been, or has already been, transferred into the on-chip memory, so the terminal does not need to spend time waiting for data to be transferred from the off-chip memory into the on-chip memory, which can reduce the delay of the terminal in processing the service.
  • In another example, when the controller in the on-chip memory fails to access the program code and business data (that is, the program code and business data that need to be accessed have not yet been transferred into the on-chip memory), the controller in the on-chip memory can trigger the data scheduling device, by generating a call-in instruction, to transfer the program code and business data to be accessed into the on-chip memory.
  • the process of transferring data by the data scheduling device may be triggered by the controller in the on-chip memory, without the need for the on-chip processor to trigger the data scheduling process of the data scheduling device.
  • In response to the call-in instruction, the data scheduling device obtains the length of the first target data, the physical storage address of the first target data in the off-chip memory, and the physical running address of the first target data in the on-chip memory.
  • In an example, the on-chip processor or the on-chip memory may pre-store the length of the first target data, the physical storage address of the first target data in the off-chip memory, and the physical running address of the first target data in the on-chip memory (that is, the specific location of the first target data in the on-chip memory after the data transfer is completed).
  • When the on-chip processor or the on-chip memory needs to trigger the data scheduling device to perform data transfer, the generated call-in instruction can carry the length of the first target data, the physical storage address, and the physical running address.
  • In this way, after receiving the call-in instruction, the data scheduling device can parse the call-in instruction to obtain the length, the physical storage address, and the physical running address of the first target data.
  • In another example, the data scheduling device itself may pre-store the length of the first target data, the physical storage address of the first target data in the off-chip memory, and the physical running address of the first target data in the on-chip memory.
  • When the data scheduling device receives the call-in instruction for the first target data, it can, in response to the call-in instruction, obtain the length, the physical storage address, and the physical running address corresponding to the first target data that it has stored.
  • the data scheduling device extracts the first target data in the off-chip memory according to the physical storage address of the first target data and the length of the first target data.
  • the data scheduling device can locate the first target data in the off-chip memory after learning the length of the first target data and the physical storage address in the off-chip memory. Since the located first target data is data that needs to be transferred into the on-chip memory, the data scheduling device can extract the first target data from the off-chip memory for subsequent data transfer.
  • In an example, the physical storage address of the first target data in the off-chip memory may specifically be the start address at which the first target data is stored in the off-chip memory, so that the data scheduling device can determine,
  • based on the start address and the length of the first target data, which data in the off-chip memory constitutes the first target data.
  • the physical storage address may also be the end address or other addresses when the first target data is stored in the off-chip memory.
  • The data scheduling device transfers the second target data obtained based on the first target data into the on-chip memory according to the physical running address of the first target data. After extracting the first target data, the data scheduling device can determine, according to the physical running address of the first target data, the specific location to which the data is to be transferred in the on-chip memory, and can then transfer the second target data obtained based on the first target data into the on-chip memory.
  • In an example, the physical running address of the second target data in the on-chip memory may specifically be the start address at which the second target data is cached in the on-chip memory, so that the data scheduling device,
  • based on that start address and the length of the first target data, can determine all of the cache locations of the second target data in the on-chip memory and transfer the second target data to the determined locations for caching.
  • Of course, the physical running address may also be the end address or another address at which the second target data is stored in the on-chip memory.
  • In an example, the second target data may be the first target data itself;
  • in this case, the data scheduling device may directly use the physical running address of the first target data
  • to transfer the first target data (that is, the second target data) to the corresponding location in the on-chip memory.
  • In another example, the first target data is compressed data,
  • and the second target data may be data obtained by decompressing the first target data.
  • Specifically, after extracting the first target data, the data scheduling device may decompress the first target data to obtain the second target data, and then load the second target data into the on-chip memory according to the physical running address of the first target data in the on-chip memory, that is, according to the physical running address of the second target data in the on-chip memory. It can be understood that compressing the data and storing it in the off-chip memory can reduce the memory space of the off-chip memory occupied by the data, so that the off-chip memory can store more data.
  • In specific implementation, a compression identifier can be used to indicate that the first target data is compressed data. Specifically, after extracting the first target data, the data scheduling device can determine whether the first target data has a compression identifier. If so, it indicates that the first target data is compressed data, and the data scheduling device can decompress the first target data in response to the compression identifier to obtain the second target data; if not, it indicates that the first target data has not been compressed, and the second target data is the first target data itself.
  • a compression identifier may be added to the first target data, so that the data scheduling device can determine that the first target data is compressed data based on the added compression identifier .
  • Of course, the foregoing decompression process can also be omitted; that is, when the first target data is uncompressed data, the uncompressed data can be directly transferred from the off-chip memory into the on-chip memory.
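  • A compact sketch of the call-in flow described above, reusing the illustrative transfer_params_t type from the earlier sketch; FLAG_COMPRESSED and decompress() are assumed stand-ins for the compression identifier and the chip's decompression engine:

```c
#define FLAG_COMPRESSED 0x1u   /* assumed encoding of the compression identifier */

/* Assumed decompression engine; returns the length of the decompressed output. */
size_t decompress(const uint8_t *in, size_t in_len, uint8_t *out);

/* One-time call-in of the first target data located by address and length. */
static void handle_call_in(const transfer_params_t *p, uint32_t flags)
{
    const uint8_t *first_target = (const uint8_t *)p->physical_storage_address;
    uint8_t       *on_chip      = (uint8_t *)p->physical_running_address;

    if (flags & FLAG_COMPRESSED) {
        /* Stored compressed off-chip: decompress to obtain the second target data. */
        decompress(first_target, p->length, on_chip);
    } else {
        /* Uncompressed: the second target data is the first target data itself. */
        memcpy(on_chip, first_target, p->length);
    }
    /* The on-chip processor then processes the second target data in place. */
}
```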
  • The on-chip memory mentioned in this embodiment may specifically include static random access memory (SRAM), tightly coupled memory (TCM), buffers, or other types of volatile memory.
  • The off-chip memory mentioned in this embodiment may specifically include a volatile memory such as DRAM or double data rate synchronous dynamic random access memory (DDR) memory.
  • Alternatively, the off-chip memory can also be a non-volatile memory such as a flash memory. It is worth noting that, since the program code and business data that the terminal needs to execute to process services can also be stored in the flash memory, with the data scheduling device completing the scheduling of that data, in practical applications the program code and corresponding business data of services whose delay requirements are not high
  • can be stored in a non-volatile memory such as flash memory, so that the terminal can process more services based on the program code and business data stored in the non-volatile memory.
  • the data scheduling device in this embodiment can extract data from multiple off-chip memories and transfer the extracted data to the same on-chip memory.
  • In this way, the program codes in multiple off-chip memories can share the same on-chip memory without having to configure a corresponding on-chip memory for each off-chip memory, thereby reducing the amount of on-chip memory required on the chip and saving the cost of the chip.
  • multiple off-chip memories can store program codes and business data corresponding to multiple different services.
  • In this way, the data scheduling device can simultaneously transfer the data corresponding to different services from multiple off-chip memories into the on-chip memory, and the on-chip processor accesses the program codes and business data corresponding to the different services in the on-chip memory, so that the terminal can process different services at the same time.
  • program codes and service data corresponding to multiple different services may also be stored in an off-chip memory at the same time.
  • some services may have higher requirements on the delay of the terminal.
  • For such services, some or all of the data corresponding to the service can be fixed (locked) in the on-chip memory in advance.
  • In this way, each time the terminal executes the service, the on-chip processor can directly read the corresponding data from the fixed storage area in the on-chip memory for execution and processing, instead of having the data scheduled by the data scheduling device; the time required for the data scheduling device to transfer data in and out can therefore be saved, which further reduces the delay required for the terminal to process the service.
  • the data locked in the on-chip memory can be stored in the off-chip memory in advance, and the data scheduling device transfers the data from the off-chip memory to the on-chip memory and locks it in the on-chip memory.
  • Specifically, a lock identifier can be used to mark that the data is locked in the on-chip memory. In this way, when the data scheduling device receives a call-in instruction for the data, if it determines from the lock identifier of the data that the data has already been locked in the on-chip memory in advance, the on-chip processor can be directly notified to read the data in the fixed storage area of the on-chip memory, without performing data scheduling.
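  • A minimal sketch of that check, assuming FLAG_LOCKED is an illustrative encoding of the lock identifier alongside the FLAG_COMPRESSED value used earlier:

```c
#define FLAG_LOCKED 0x2u   /* assumed encoding of the lock identifier */

/* Locked data is already resident in the fixed storage area of the on-chip
 * memory, so no call-in is performed; the on-chip processor reads it directly. */
static int needs_scheduling(uint32_t flags)
{
    return (flags & FLAG_LOCKED) == 0;
}
```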
  • In this embodiment, the data scheduling device can not only transfer the data in the off-chip memory into the on-chip memory, but also call the data originally in the on-chip memory out of it. It can be understood that the memory space of the on-chip memory is usually smaller than that of the off-chip memory, and if the previous segment of program code and business data is still stored in the on-chip memory, the on-chip memory may not have enough memory space to accommodate
  • the first target data currently being transferred in by the data scheduling device. Based on this, before transferring the first target data into the on-chip memory, the data scheduling device can first call data in the on-chip memory out in advance, so that the on-chip memory has enough memory space to accommodate the first target data. In addition, since the data called out of the on-chip memory is processed data, when the terminal processes the service again it may need to do so based on the data obtained the last time the service was processed; therefore, the data scheduling device can call the data in the on-chip memory out to the off-chip memory for storage. In specific implementation, the data scheduling device can extract the third target data from the on-chip memory, and call the fourth target data obtained based on the third target data out to the off-chip memory.
  • In an example, the fourth target data may be the third target data itself;
  • in this case, after extracting the third target data, the data scheduling device may directly call the third target data (that is, the fourth target data) out to the off-chip memory for storage.
  • the fourth target data may be compressed data, and the fourth target data may be data obtained by compressing the third target data.
  • In this case, the data scheduling device may compress the third target data after calling it out, to obtain the fourth target data, and store the fourth target data in the compressed format in the off-chip memory. It can be understood that since the data amount of the fourth target data in the compressed format is smaller than the data amount of the uncompressed third target data, the memory space occupied by storing the fourth target data is smaller than the memory space occupied by storing the third target data.
  • Since the third target data is the data obtained by the on-chip processor processing the second target data, the amounts of data of the second target data and the third target data are usually approximately the same. However, because the third target data obtained by the processor processing the second target data has changed compared with the second target data, the data amount of the fourth target data obtained by compressing the third target data may not be the same as the data amount of the first target data.
  • Therefore, when the fourth target data is stored in compressed form, the length of the fourth target data and its physical storage address in the off-chip memory can be recorded, so that the fourth target data in the off-chip memory can subsequently be located based on the length of the fourth target data and the corresponding physical storage address. The recording can be performed by the data scheduling device or by the on-chip processor.
  • In specific implementation, when compressing the third target data, the data scheduling device can check the first target data corresponding to the third target data; if it is determined that the first target data has a compression identifier, the compression process for the third target data is executed. It can be understood that if the first target data is stored in the off-chip memory in a compressed format, then the third target data obtained after the program code and business data contained in the first target data are executed by the on-chip processor may also be stored in the off-chip memory in a compressed format. In other words, if the extracted first target data was stored in the off-chip memory in a compressed format, the third target data may also be stored in the off-chip memory in a compressed format.
  • It should be noted that the call-out and call-in in this embodiment are both relative to the on-chip memory, that is, data is called out of the on-chip memory, and data is called into the on-chip memory for storage.
  • In this embodiment, the data scheduling device can obtain, based on the received call-in instruction, the length of the first target data that needs to be transferred into the on-chip memory, the physical storage address of the first target data in the off-chip memory, and its physical running address in the on-chip memory; then extract the first target data from the off-chip memory according to the physical storage address and the length of the first target data; and transfer the second target data obtained based on the first target data into the location corresponding to the physical running address in the on-chip memory, where the second target data is processed by the on-chip processor, and the on-chip processor, the on-chip memory, and the data scheduling device are integrated on the same chip.
  • the data scheduling device loads data into the on-chip memory, it determines all the data that needs to be transferred into the on-chip memory based on the physical storage address of the data and the length of the data, and loads all the data into the on-chip memory at once. There is no need to perform the data transfer process frequently, thereby reducing the time-consuming time of transferring data into the on-chip memory and reducing the delay caused by transferring data into the on-chip memory.
  • In addition, a hardware data scheduling device is used to complete the data scheduling. Compared with the existing technical solution that uses software to control the transfer of data in the off-chip memory into the on-chip memory, the time required for data scheduling is shorter, which can also reduce the delay caused by data scheduling.
  • Figure 4 shows a schematic diagram of signaling interaction of a method for scheduling data in a memory in an embodiment of the present application.
  • Figure 5 shows an example of a specific scenario in which a digital signal processor (DSP), an SRAM (on-chip memory), and a data scheduling device are integrated and connected on a chip, and the chip is connected to a DDR memory (off-chip memory).
  • The method includes: S401: The DSP processor generates a call-in instruction for the first target data, and sends the call-in instruction to the data scheduling device.
  • S402: The data scheduling device searches for the descriptor corresponding to the first target data, and determines the length SIZE of the first target data, the physical storage address PSA of the first target data in the DDR memory, and the physical running address PRA of the first target data in the SRAM.
  • In this embodiment, the call-in instruction may include the virtual running address corresponding to the first target data, and the data scheduling device can parse the virtual running address from the call-in instruction and search for the descriptor corresponding to that virtual running address.
  • the descriptor may include virtual running address (VRA), physical running address (PRA), physical storage address (PSA), data length SIZE and data status flag FLAG.
  • The data status identifier FLAG may include at least the aforementioned compression identifier and/or lock identifier (in this embodiment, the case where the compression identifier is included is taken as an example).
  • Specifically, the data scheduling device can search for a descriptor containing the same virtual running address according to the virtual running address of the first target data contained in the call-in instruction; after finding the descriptor, the length SIZE of the first target data, the physical storage address PSA of the first target data in the DDR memory, and the physical running address PRA of the first target data in the SRAM can be determined from it.
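  • A plausible C layout for such a descriptor and its lookup by virtual running address (field widths and the linear search are illustrative assumptions, not details from the application):

```c
#include <stdint.h>

typedef struct {
    uint64_t vra;   /* virtual running address (VRA) used to look up the entry */
    uint64_t pra;   /* physical running address (PRA) in the SRAM              */
    uint64_t psa;   /* physical storage address (PSA) in the DDR memory        */
    uint32_t size;  /* data length SIZE                                        */
    uint32_t flag;  /* data status FLAG: compression and/or lock identifiers   */
} descriptor_t;

/* Find the descriptor whose VRA matches the one carried in the call-in instruction. */
static const descriptor_t *find_descriptor(const descriptor_t *table, int count,
                                            uint64_t vra)
{
    for (int i = 0; i < count; i++) {
        if (table[i].vra == vra)
            return &table[i];
    }
    return 0;   /* no matching descriptor */
}
```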
  • S403: The data scheduling device calls the data currently at the PRA in the SRAM out to the DDR memory according to the PRA.
  • In this embodiment, the data scheduling device can obtain the length of the data at the PRA in the SRAM and, based on the PRA and that length, call the data at the PRA (that is, a data segment in the SRAM) out to the DDR memory for storage.
  • step S403 in this embodiment may not be executed, and step S404 is directly executed.
  • S404 The data scheduling device extracts the first target data in the DDR memory according to the PSA and the SIZE.
  • S405 The data scheduling device determines that the FLAG in the descriptor includes a compression identifier, and decompresses the first target data corresponding to the compression identifier to obtain second target data.
  • S406 The data scheduling device transfers the second target data to the corresponding position in the SRAM according to the PRA.
  • S407 The on-chip processor accesses and processes the second target data in the SRAM to obtain third target data, where the third target data is located in the SRAM.
  • S408 The data scheduling device calls the third target data in the SRAM out of the SRAM.
  • In this embodiment, after the on-chip processor finishes processing the second target data, the data scheduling device can actively call the third target data out of the SRAM, so that when the terminal performs other services, the data scheduling device does not need to perform the data call-out first but only the data call-in process, which can reduce the delay of the terminal in processing services.
  • the on-chip processor can also instruct the data scheduling device to call out the third target data after processing the second target data, so that the data scheduling device can continue to transfer other data into the SRAM.
  • S409: The data scheduling device determines, according to the descriptor, that the first target data has a compression identifier, and compresses the third target data to obtain fourth target data. In this embodiment, if the first target data extracted from the DDR memory has a compression identifier, indicating that the first target data is compressed and stored in the DDR memory, the third target data may likewise be compressed when it is stored into the DDR memory.
  • S410 The data scheduling device calls the fourth target data to the PSA position in the DDR memory according to the PSA in the descriptor.
  • At the same time, the data scheduling device may also update the data length SIZE in the descriptor, so that the updated data length SIZE is the data length of the fourth target data.
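  • A sketch of this call-out side (steps S408 to S410), reusing the illustrative descriptor_t, FLAG_COMPRESSED, and compress() assumptions from the earlier sketches:

```c
/* Compress the processed (third target) data when the original data carried a
 * compression identifier, write it back to the PSA in the DDR memory, and update
 * the SIZE field of the descriptor to the length of the fourth target data. */
static void handle_call_out(descriptor_t *d, const uint8_t *third_target,
                            size_t third_len)
{
    uint8_t *ddr = (uint8_t *)(uintptr_t)d->psa;

    if (d->flag & FLAG_COMPRESSED) {
        d->size = (uint32_t)compress(third_target, third_len, ddr);
    } else {
        memcpy(ddr, third_target, third_len);
        d->size = (uint32_t)third_len;
    }
}
```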
  • the SRAM is located outside the DSP processor.
  • In this scenario, the data scheduling device can control the process by which the DSP processor accesses the SRAM. In this way, when the DSP processor does not have permission to access the data in the SRAM, the DSP processor can be prohibited from accessing the SRAM, so that the security of data access can be improved.
  • the SRAM can also be located inside the DSP processor to improve the efficiency of the DSP processor accessing data in the SRAM.
  • FIG. 7 shows a data scheduling device in an embodiment of the present application.
  • The data scheduling device 700 includes: an obtaining module 701, configured to obtain, in response to a call-in instruction for first target data, the length of the first target data, the physical storage address of the first target data in an off-chip memory, and the physical running address of the first target data in an on-chip memory; an extraction module 702, configured to extract the first target data from the off-chip memory according to the physical storage address and the length of the first target data; and a call-in module 703, configured to transfer second target data obtained based on the first target data into the on-chip memory according to the physical running address, where the second target data is processed by an on-chip processor; the on-chip processor, the on-chip memory, and the data scheduling device 700 are integrated on the same chip.
  • The physical storage address is the start address or end address of the first target data in the off-chip memory,
  • and the physical running address is the start address or end address of the first target data in the on-chip memory.
  • the first target data is compressed data;
  • The call-in module 703 includes: a decompression unit, configured to decompress the first target data to obtain the second target data; and a transfer unit, configured to transfer the second target data into the on-chip memory according to the physical running address.
  • the decompression unit includes: a determining subunit, configured to determine that the first target data has a compression identifier; and a decompression subunit, configured to respond to the compression identifier to the first target The data is decompressed.
  • The data scheduling device 700 further includes: a third target data obtaining module, configured to obtain third target data from the on-chip memory; and a call-out module, configured to call out fourth target data obtained based on the third target data to the off-chip memory.
  • the third target data is uncompressed data;
  • The call-out module includes: a compression unit, configured to compress the third target data to obtain the fourth target data; and a call-out unit, configured to call out the fourth target data to the off-chip memory.
  • the third target data is data obtained by the on-chip processor processing the second target data
  • The compression unit is specifically configured to compress the third target data based on the compression identifier of the first target data.
  • the data scheduling device 700 further includes: a recording module configured to record the length of the fourth target data.
  • the off-chip memory is specifically a volatile memory or a non-volatile memory
  • the on-chip memory is specifically a volatile memory
  • each module/unit/subunit in the data scheduling device 700 may specifically include hardware circuits that implement corresponding functions.
  • The hardware circuit may include at least one of a digital circuit, an analog circuit, a programmable circuit, an algorithm circuit, or a hybrid circuit.
  • Multiple means two or more.
  • “And/or” describes the association relationship of the associated objects, indicating that there can be three relationships, for example, A and/or B, which can mean: A alone exists, both A and B exist, and B exists alone, where A, B can be singular or plural.
  • the character “/” generally indicates that the associated objects are in an “or” relationship.
  • "The following at least one item (a)” or similar expressions refers to any combination of these items, including any combination of a single item (a) or plural items (a).
  • At least one item (a) of a, b, or c can mean: a, b, c, ab, ac, bc, or abc, where a, b, and c can be single or multiple .
  • In addition, words such as "first" and "second" are used to distinguish between items that are the same or similar and have substantially the same functions and effects. Those skilled in the art can understand that words such as "first" and "second" do not limit the quantity or the order of execution, and do not require the items to necessarily be different.

Abstract

Embodiments of this application disclose a method for scheduling data in a memory, a data scheduling device, and a system. The data scheduling device can extract first target data from an off-chip memory according to the length of the first target data that needs to be imported into an on-chip memory and the physical storage address of the first target data in the off-chip memory, and import second target data obtained based on the first target data into the on-chip memory in one pass, where the second target data is processed by an on-chip processor, and the on-chip processor, the on-chip memory, and the data scheduling device are integrated on the same chip. It can be seen that the data scheduling device imports all the data into the on-chip memory at once, without frequently performing the import process, thereby shortening the time consumed by importing data into the on-chip memory and reducing the latency caused by importing data into the on-chip memory.

Description

一种调度存储器中数据的方法、数据调度设备及系统 技术领域
本申请涉及数据处理技术领域,特别是涉及一种调度存储器中数据的方法、数据调度设备及系统。
背景技术
随着终端技术的发展,越来越多的业务场景中要求终端基带芯片能够提供低时延特性,也即,终端在通过该芯片处理业务时所产生的时延较低。为此,目前在对终端基带芯片进行设计的技术方案中,通常是使用该芯片上的片上内存来读写数据,通过降低该芯片读写数据时所产生的时延来降低终端处理业务的时延。但是,实际应用的一些应用场景中,终端还需要满足较高比特率的通信规格,比如,对于一些应用第五代移动通信网络(5th generation mobile network,5G)技术的终端,其比特率可能要求达到10Gbps,这就使得在设计芯片时需要在该芯片上配备大量的片上内存,从而导致所设计芯片的成本、功耗过高。
基于此,现有的技术方案中,为了减少芯片上所需配备的片上内存,预先将多种业务的程序代码和/或业务数据存放于芯片外的动态随机存取存储器(dynamic random access memory,DRAM)中,然后,针对于当前所处理的业务,由内存管理单元(memory management unit,MMU)和高速缓存(Cache)控制器将DRAM中存储的相应数据调入至片上内存,并将片上内存上原有的数据调出至DRAM;当需要执行当前业务的下一段程序或者需要处理下一业务时,再基于上述类似过程在片上内存完成相应数据的调入调出。但是,利用MMU和Cache控制器控制数据在片上内存进行调入调出,可能会产生难以预料的时延抖动,从而可能导致终端无法处理一些时延要求比较高的业务。
发明内容
本申请实施例提供了一种调度存储器中数据的方法、数据调度设备以及系统,以降低数据调入片上内存所产生的时延,从而使得终端具有处理较高时延要求业务的能力。
第一方面,本申请实施例提供了一种调度存储器中数据的方法,所述方法包括:数据调度设备响应于针对第一目标数据的调入指令,获取所述第一目标数据的长度、所述第一目标数据在片外存储器中的物理存储地址、所述第一目标数据在片上内存中的物理运行地址;所述数据调度设备根据所述物理存储地址以及所述第一目标数据的长度,提取所述片外存储器中的所述第一目标数据;所述数据调度设备根据所述物理运行地址,将基于所述第一目标数据得到的第二目标数据调入至所述片上内存,所述第二目标数据被片上处理器处理;其中,所述片上处理器、所述片上内存以及所述数据调度设备集成于同一芯片上。在该实施方式中,数据调度设备是基于该数据的物理存储地址以及数据的长度,定位出片外存储器中需要调入至片上内存中由片上处理器进行处理的所有数据,并将该数据一次性全部调入片上内存中,从而无需数据调度设备频繁的执行数据调入过程,从而减少数据调 入至片上内存的耗时时长,降低数据调入片上内存所产生的时延。而且,该数据调度设备通常为硬件,这相比于利用软件来控制数据进行调入至片上内存的实施方式而言,采用硬件的数据调度设备进行数据调度能够更快速的将片外存储器中的数据调入至片上内存中进行处理,从而可以降低数据调入所产生的时延。
结合第一方面,在第一方面的第一种可能的实施方式中,所述物理存储地址为所述第一目标数据在所述片外存储器中的首地址或尾地址,所述物理运行地址为所述第一目标数据在所述片上内存中的首地址或尾地址。在该实施方式中,数据调度设备可以基于第一目标数据在片外存储器中的首地址或者尾地址,定位第一目标数据的起始位置或者终止位置,进而结合该第一目标数据的数据长度,确定出第一目标数据在片外存储器中的所有数据;类似的,数据调度设备可以基于第一目标数据在片上内存的首地址或者尾地址,定位第一目标数据在片上内存中存储时的起始位置或者终止位置,进而确定出该第一目标数据的所有数据在片上内存的存储位置。
结合第一方面,在第一方面的第二种可能的实施方式中,所述第一目标数据为被压缩的数据;所述数据调度设备根据所述物理运行地址,将基于所述第一目标数据得到的第二目标数据调入至所述片上内存,包括:所述数据调度设备对所述第一目标数据进行解压,得到所述第二目标数据;所述数据调度设备根据所述物理运行地址将所述第二目标数据调入至所述片上内存。在该实施方式中,需要调入至片上内存中的数据是压缩存储于片外存储器中,可以理解,压缩后的数据所具有的数据量通常小于未压缩的数据所具有数据量,因此,将数据以压缩格式存储于片外存储器中可以使得片外存储器存储更多的数据,也即为减少了数据存储时所需占用的片外存储器的内存空间。相应的,数据调度设备在将压缩存储于片外存储器中的数据调入至片上内存时,为便于片上处理器对片上内存中的数据进行处理,数据调度设备可以对该压缩格式的数据进行解压,并将解压得到的数据调入至片上内存。
结合第一方面的第二种实施方式,在第一方面的第三种可能的实施方式中,所述数据调度设备对所述第一目标数据段进行解压,包括:所述数据调度设备确定所述第一目标数据具有压缩标识,并响应于所述压缩标识对所述第一目标数据进行解压。在该实施方式中,可以利用压缩标识对片外存储器中以压缩格式存储的数据进行标识,这样,数据调度设备每次在将片外存储器中的数据调入至片上内存时,可以查询该数据是否具有对应的压缩标识,若有,则表明该数据为压缩格式的数据,则可以将该数据进行解压后调入至片上内存。
结合第一方面至第一方面的第三种实施方式中的任一种实施方式,在第一方面的第四种可能的实施方式中,所述方法还包括:所述数据调度设备从所述片上内存中获取第三目标数据,并将基于所述第三目标数据得到的第四目标数据调出至所述片外存储器中。在该实施方式中,数据调度设备可以将片上内存中已存在的数据,尤其是片上处理器已经处理过的数据,调出片上内存,这样,调出的数据在片上内存中所占用的内存空间可以去容纳数据调度设备新调入的数据;而且,若终端需要再次处理相同业务,可能需要基于上一次处理该业务时所得到的程序代码以及业务数据来进行处理,因此,数据调度设备可以片上内存中的数据调出至片外存储器中进行存储,这样,当终端再次处理该业务时,属于调度 设备可以将上一次处理该业务所达到的程序代码以及业务数据调入至片上内存以供片上处理器进行处理。
结合第一方面的第四种实施方式中,在第一方面的第五种可能的实施方式中,所述第三目标数据是未被压缩的数据;所述将基于所述第三目标数据得到的第四目标数据调出至所述片外存储器中,包括:所述数据调度设备对所述第三目标数据进行压缩,得到所述第四目标数据;所述数据调度设备将所述第四目标数据调出至所述片外存储器中。在该实施方式中,由于压缩格式的数据所具有的数据量通常小于该数据未压缩时所具有的数据量,因此,数据调度设备在将片上内存中的数据调出至片外存储器中进行存储时,将该数据进行压缩,并将压缩格式的数据存储于片外存储器中,可以减少该数据在片外存储器中存储时所占用的存储内存。
结合第一方面的第五种实施方式中,在第一方面的第六种可能的实施方式中,所述第三目标数据为所述片上处理器处理所述第二目标数据而得到的数据,所述数据调度设备对所述第三目标数据进行压缩,包括:所述数据调度设备基于所述第一目标数据的压缩标识,对所述第三目标数据进行压缩。在该实施方式中,若数据调度设备调出片上内存的数据为之前调入至片上内存中并且被片上处理器所处理而得到的数据,则在将该数据存储至片外存储器时,如果该数据调入片上内存之前是以压缩格式存储于片外存储器,则数据调度设备可以同样以压缩格式将该数据存储于片外存储器中,这样可以使得被处理前以及被处理后的数据在片外存储器中的存储方式保持一致。
结合第一方面的第五种实施方式或者第六种实施方式,在第一方面的第七种可能的实施方式中,所述方法还包括:所述数据调度设备或所述片上处理器记录所述第四目标数据的长度。在该实施方式中,由于数据被片上处理器处理后,处理后的数据相较于处理前的数据发生变化,这使得在对被片上处理器处理后的数据进行压缩存储时,该压缩数据的数据量与之前片外存储器中存储的压缩数据的数据量不同,因此,在数据调度设备将压缩数据存储于片外存储器中时,数据调度设备或者片上处理器可以记录该压缩数据的长度,以便于当数据调度设备后续需要从片外存储器中提取该压缩数据时,可以基于该压缩数据的长度以及相应的物理存储地址,在片外存储器中定位出该压缩数据。
结合第一方面至第一方面的第七种实施方式中的任一种实施方式,在第一方面的第八种可能的实施方式中,所述片外存储器具体为易失性存储器或非易失性存储器,所述片上内存具体为易失性存储器。在该实施方式中,对于一些时延要求较高的业务,可以将该业务对应的程序代码以及业务数据存储于诸如DDR存储器等易失性存储器中,以使得数据调度设备在将易失性存储器中的数据调入片上内存时所需的耗时较短;而对于另一些时延要求不高的业务,可以将业务对应的程序代码以及相应的业务数据存储于诸如flash存储器等非易失性存储器中,这样终端可以基于该存储器中存储的程序代码以及业务数据处理更多的业务。
第二方面,本申请实施例还提供了一种数据调度设备,包括:获取模块,用于响应于针对第一目标数据的调入指令,获取所述第一目标数据的长度、所述第一目标数据在片外存储器中的物理存储地址、所述第一目标数据在片上内存中的物理运行地址;提取模块, 用于根据所述物理存储地址以及所述第一目标数据的长度,提取所述片外存储器中的所述第一目标数据;调入模块,用于根据所述物理运行地址,将基于所述第一目标数据得到的第二目标数据调入至所述片上内存,所述第二目标数据被片上处理器处理;其中,所述片上处理器、所述片上内存以及所述数据调度设备集成于同一芯片上。
结合第二方面,在第二方面的第一种可能的实施方式中,所述物理存储地址为所述第一目标数据在所述片外存储器中的首地址或尾地址,所述物理运行地址为所述第一目标数据在所述片上内存中的首地址或尾地址。
结合第二方面,在第二方面的第二种可能的实施方式中,所述第一目标数据为被压缩的数据;所述调入模块,包括:解压单元,用于对所述第一目标数据进行解压,得到所述第二目标数据;调入单元,用于根据所述物理运行地址将所述第二目标数据调入至所述片上内存。
结合第二方面的第二种实施方式,在第二方面的第三种可能的实施方式中,所述解压单元,包括:确定子单元,用于确定所述第一目标数据具有压缩标识;解压子单元,用于响应于所述压缩标识对所述第一目标数据进行解压。
结合第二方面至第二方面的第三种实施方式中的任一种实施方式,在第二方面的第四种可能的实施方式中,所述数据调度设备还包括:第三目标数据获取模块,用于从所述片上内存中获取第三目标数据;调出模块,用于将基于所述第三目标数据得到的第四目标数据调出至所述片外存储器中。
结合第二方面的第四种实施方式中,在第二方面的第五种可能的实施方式中,所述第三目标数据是未被压缩的数据;所述调出模块,包括:压缩单元,用于对所述第三目标数据进行压缩,得到所述第四目标数据;调出单元,用于将所述第四目标数据调出至所述片外存储器中。
结合第二方面的第五种实施方式中,在第二方面的第六种可能的实施方式中,所述第三目标数据为所述片上处理器处理所述第二目标数据而得到的数据,所述压缩单元,具体用于基于所述第一目标数据的压缩标识,对所述第三目标数据进行压缩。
结合第二方面的第五种实施方式或者第六种实施方式,在第二方面的第六种可能的实施方式中,所述数据调度设备还包括:记录模块,用于记录所述第四目标数据的长度。
结合第二方面至第二方面的第七种实施方式中的任一种实施方式,在第二方面的第八种可能的实施方式中,所述片外存储器具体为易失性存储器或者非易失性存储器,所述片上内存具体为易失性存储器。
由于第二方面提供的数据调度设备,对应于第一方面提供的调度存储器中数据的方法,故第二方面提供的数据调度设备的各种可能的实施方式,可以参照第一方面提供的调度存储器中数据的方法的各种可能的实施方式。
第三方面,本申请实施例还提供了一种数据调度系统,其特征在于,所述系统包括:片上处理器、片上内存以及数据调度设备,所述片上处理器、所述片上内存以及所述数据调度设备集成于同一芯片上;所述片上处理器用于处理所述片上内存中的数据;所述片上内存用于缓存数据;所述数据调度设备用于响应于针对第一目标数据的调入指令,获取所 述第一目标数据的长度、所述第一目标数据在片外存储器中的物理存储地址、所述第一目标数据在片上内存中的物理运行地址;根据所述物理存储地址以及所述第一目标数据的长度,提取所述片外存储器中的所述第一目标数据;根据所述物理运行地址,将基于所述第一目标数据得到的第二目标数据调入至所述片上内存,所述第二目标数据被片上处理器处理。
结合第三方面,在第三方面的第一种可能的实施方式中,所述物理存储地址为所述第一目标数据在所述片外存储器中的首地址或尾地址,所述物理运行地址为所述第一目标数据在所述片上内存中的首地址或尾地址。
结合第三方面或第三方面的第一种实施方式,在第三方面的第二种可能的实施方式中,所述第一目标数据为被压缩的数据;所述数据调度设备,具体用于对所述第一目标数据进行解压,得到所述第二目标数据;根据所述物理运行地址将所述第二目标数据调入至所述片上内存。
结合第三方面的第二种实施方式,在第三方面的第三种可能的实施方式中,所述数据调度设备,具体用于确定所述第一目标数据具有压缩标识,并响应于所述压缩标识对所述第一目标数据进行解压。
结合第三方面,在第三方面的第四种可能的实施方式中,所述系统还包括片外存储器,所述片外存储器,用于存储所述第一目标数据。在该实施方式中,片外存储器可以与片上处理器、片上内存以及数据调度设备集成在一个系统中。
结合第三方面的第四种实施方式中任一种实施方式,在第三方面的第五种可能的实施方式中,所述数据调度设备,还用于从所述片上内存中获取第三目标数据,并将基于所述第三目标数据得到的第四目标数据调出至所述片外存储器中;所述片外存储器,还用于存储所述第四目标数据。
结合第三方面的第五种实施方式,在第三方面的第六种可能的实施方式中,所述第三目标数据是未被压缩的数据;所述数据调度设备,具体用于对所述第三目标数据进行压缩,得到所述第四目标数据;将所述第四目标数据调出至所述片外存储器中。
结合第三方面的第六种实施方式,在第三方面的第七种可能的实施方式中,所述第三目标数据为所述片上处理器处理所述第二目标数据而得到的数据;所述数据调度设备,具体用于基于所述第一目标数据的压缩标识,对所述第三目标数据进行压缩。
结合第三方面的第六种实施方式或者第七种实施方式,在第三方面的第八种可能的实施方式中,所述数据调度设备或所述片上处理器,还用于记录所述第四目标数据的长度。
结合第三方面至第三方面的第八种实施方式中的任一种实施方式,在第三方面的第九种可能的实施方式中,所述片外存储器具体为易失性存储器或非易失性存储器,所述片上内存具体为易失性存储器。
由于第三方面提供的数据调度系统,对应于第一方面提供的调度存储器中数据的方法,故第三方面提供的数据调度系统的各种可能的实施方式,可以参照第一方面提供的调度存储器中数据的方法的各种可能的实施方式。
在本申请实施例的上述实现方式中,数据调度设备可以基于接收到的调入指令,获取需要调入片上内存的第一目标数据的长度、该第一目标数据在片外存储器中的物理存储地址以及其在片上内存中的物理运行地址,然后根据该物理存储地址以及第一目标数据的长度提取出片外存储器中的第一目标数据,并将基于该第一目标数据所得到的第二目标数据一次性调入至片上内存中该物理运行地址所对应的位置,其中,第二目标数据被片上处理器处理,该片上处理器、片上内存以及数据调度设备集成于同一芯片上。可见,数据调度设备在将数据调入片上内存时,是基于数据的物理存储地址以及数据的长度确定出需要调入片上内存的所有数据,并一次性的将所有数据全部调入片上内存,而无需频繁的执行数据调入过程,从而减少数据调入至片上内存的耗时时长,降低数据调入片上内存所产生的时延。而且,本申请实施例中是采用硬件的数据调度设备完成数据调度,这相比于现有的利用软件来控制片外存储器中的数据调入片上内存的技术方案而言,数据调度所需的耗时更短,从而也可以降低数据调度所产生的时延。
Brief Description of the Drawings
To describe the technical solutions in the embodiments of this application more clearly, the following briefly introduces the accompanying drawings required for describing the embodiments. Apparently, the accompanying drawings in the following description show merely some embodiments recorded in this application, and a person of ordinary skill in the art may still derive other drawings from these accompanying drawings.
FIG. 1 is a schematic diagram of an exemplary data scheduling system according to an embodiment of this application;
FIG. 2 is a schematic diagram of another exemplary data scheduling system according to an embodiment of this application;
FIG. 3 is a schematic flowchart of a method for scheduling data in a memory according to an embodiment of this application;
FIG. 4 is a schematic diagram of signaling interaction of a method for scheduling data in a memory according to an embodiment of this application;
FIG. 5 shows an example of a scenario including a chip and an off-chip memory;
FIG. 6 shows a specific structure of an exemplary descriptor;
FIG. 7 is a schematic structural diagram of a data scheduling device according to an embodiment of this application.
具体实施方式
实际应用的一些场景中,为使得所设计的终端基带芯片,在满足低时延、高比特率要求的同时,还能减少配置于芯片上的片上内存,通常的做法是将终端处理该业务所需的程序代码以及业务数据分页存储于DRAM中,当终端需要处理该业务时,配置于终端基带芯片上的MMU以及Cache控制器,将DRAM中一页上的程序代码以及业务数据等数据调入至片上内存,同时,将片上内存中相应页上的数据调出至DRAM中。在芯片上的处理器在执行完该页的程序代码以及业务数据后,再由MMU以及Cache控制器将下一页上的数据调入片上内存以及调出相应页上的数据。但是,对于其中一段由多个页存储的程序代码以及业务数据,若该段程序代码以及业务数据要求被处理的过程中可能要求时延较小(也即对时延的要求较高),此时,由于MMU和Cache控制器每次都是在将一页上的数据调入调出片上内存并由处理器处理完成后,再将下一页数据的调入调出片上内存以便交由处理器处理,这会导致在处理器执行这段程序代码以及业务数据的过程中,MMU和Cache控制 器需要以页为单位频繁的执行数据的调入调出过程,从而导致处理器执行这段程序代码以及业务数据时实际产生的时延较长,超出处理该数据所允许的最大时延,也即出现了时延抖动的情况,进而导致这种数据的调入调出方式无法满足实际应用的一些低时延场景要求。
举例来说,假设处理器在处理一段程序代码以及业务数据所允许的最大时延为60微秒,并且这些数据分10页存储于DRAM中,而MMU和Cache控制器每次将1页上的数据调入或者调出片上内存所需耗时2微秒,同时,调入至片上内存中的这段数据被处理所需的耗时为30微秒。由于处理器在执行完片上内存中一页的数据后需要等待MMU和Cache控制器将下一页数据调入调出片上内存后才能处理新调入的数据,这就使得处理器处理这段程序代码以及业务数据实际所需的时间为70微秒,即为调入数据耗时(2微秒×10)、调出数据耗时(2微秒×10)以及数据处理耗时(30微秒)三者耗时之和,大于处理器处理该段数据所允许的最大时延,从而导致处理器处理该段数据所产生的时延无法满足实际应用的一些低时延场景要求。
基于此,本申请实施例提供了一种调度存储器中数据的方法,通过将数据一次性调入片上内存来降低数据调入片上内存所产生的时延。具体的,数据调度设备可以基于接收到的调入指令,获取需要调入片上内存的第一目标数据的长度、该第一目标数据在片外存储器中的物理存储地址以及其在片上内存中的物理运行地址,然后根据该物理存储地址以及第一目标数据的长度提取出片外存储器中的第一目标数据,并将基于该第一目标数据所得到的第二目标数据一次性调入至片上内存中该物理运行地址所对应的位置,其中,该第二目标数据被片上处理器处理,该片上处理器、片上内存以及数据调度设备集成于同一芯片上。可见,数据调度设备在将数据调入片上内存时,是基于数据的物理存储地址以及数据的长度确定出需要调入片上内存的所有数据,并一次性的将所有数据全部调入片上内存,而无需频繁的执行数据调入过程,从而减少数据调入至片上内存的耗时时长,降低数据调入片上内存所产生的时延。而且,本申请实施例中是采用硬件的数据调度设备完成数据调度,这相比于利用软件来控制片外存储器中的数据调入片上内存的技术方案而言,数据调度所需的耗时更短,从而也可以降低数据调度所产生的时延。
仍以处理器处理最大时延为60微秒的程序代码以及业务数据(数据量不变)为例,并且调入至片上内存中的数据被处理所需的仍然为30微秒,即使数据被调出片上内存所需的耗时不变,依旧为20微秒,但是数据调度设备通过将程序代码以及业务数据一次性调入片上内存时所需耗时可能仅为8微秒,从而使得终端该处理业务时实际所需的时间为58微秒(即30微秒+20微秒+8微秒),不超过处理器处理该段数据所允许的最大时延,从而满足该业务的低时延要求。进一步的,若数据调度设备也是一次性的将片上内存中原有的数据调出片上内存,则数据被调出片上内存所需的耗时可以减少至8微秒,从而可以进一步减少处理器处理该段数据实际所需的时间,具体为减少至46微秒(即30微秒+8微秒+8微秒)。
作为一种示例,本申请实施例提供的调度存储器中数据的方法可以应用于如图1所示的示例性数据调度系统中,并且该数据调度系统可以位于芯片101上。在该数据调度系统中,数据调度设备1013可以将片外存储器102中的数据调入至片上内存1012中。具体的,芯片101上集成有片上处理器1011、片上内存1012以及数据调度设备1013,片上处理器 1011可以向数据调度设备1013发送数据调入指令,以控制数据调度设备1013执行相应数据的调入操作;数据调度设备1013基于该调入指令获取所需调入至片上内存1012的第一目标数据的长度、第一目标数据在片外存储器102中的物理存储地址以及其调入片上内存1012后在片上内存1012中的物理运行地址,然后,数据调度设备1013可以根据所获取的物理存储地址以及第一目标数据的长度,从片外存储器102中提取出该第一目标数据,并根据所获取的物理运行地址将基于第一目标数据得到的第二目标数据调入至片上内存1012中,以使得片上处理器1011对片上内存1012中的第二目标数据进行处理。其中,片上处理器1011与数据调度设备1013之间可以存在连接,也可以不存在连接。
可以理解的是,上述数据调度系统仅是本申请实施例提供的一个系统示例,本申请实施例并不限于此场景。譬如,在其它可能的数据调度系统中,图1中所示的片外存储器102也可以视为数据调度系统中的一部分,用于存储调入片上内存1012中的数据以及从片上内存1012中调出的数据等;在另一些其它可能的数据调度系统中,片上处理器1011与片上内存1012可以相互独立,数据调度设备1013可以管控片上处理器1011访问片上内存1012中数据的操作行为;在又一些其它可能的数据调度系统中,片上内存1012也可以直接集成至片上处理器1011中,如图2所示。总之,本申请实施例可以应用于任何可适用的数据调度系统中,而不局限于上述示例性的数据调度系统。
为使本申请的上述目的、特征和优点能够更加明显易懂,下面将结合附图对本申请实施例中的各种非限定性实施方式进行示例性说明。显然,所描述的实施例是本申请一部分实施例,而不是全部的实施例。基于本申请中的实施例,本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其它实施例,都属于本申请保护的范围。
参阅图3,图3示出了本申请实施例中一种调度存储器中数据的方法的流程示意图。其中,图3所示的调度存储器中数据的方法可以应用于上述图1或者图2等所示的数据调度系统中,图3中所示的片上处理器、片上内存以及数据调度设备被集成于同一芯片上,该方法具体可以包括:S301:片上处理器或片上内存生成针对于第一目标数据的调入指令,并将该调入指令发送给数据调度设备。本实施例中,可以是由片上处理器或者片上内存触发数据调度设备将数据调入片上内存。具体实现时,片上处理器或者片上内存(具体可以是片上内存中的控制器)可以生成针对于第一目标数据的调入指令,该调入指令用于指示数据调度设备将第一目标数据调入片上内存中的相应位置,然后,片上处理器可以将生成的调入指令发送给数据调度设备。
作为一种示例,片上处理器可以根据时序来触发数据调度设备完成数据调度。可以理解,当终端处理某个业务时,若片上处理器确定片上内存中的当前程序代码以及业务数据即将被处理完毕,则可以根据时序提前触发数据调度设备将下一段程序代码以及业务数据调入片上内存中,这样,当片上内存中的上一段程序代码以及业务数据被处理完时,下一段程序代码以及业务数据正好或者已经调入至片上内存,则终端无需再耗时等待数据从片外存储器调入至片上内存,从而可以降低终端处于该业务的时延。
在另一种示例中,片上内存中的控制器在访问程序代码以及业务数据失效(也即所需访问的程序代码以及业务数据尚未调入至片上内存)时,该片上内存中的控制器可以通过生成调入指令触发数据调入设备将所需访问的程序代码以及业务数据调入片上内存中。在该示例中,数据调度设备进行数据调入的过程可以是由片上内存中的控制器触发的,而无需片上处理器对数据调度设备的数据调度过程进行触发指示,因此,在进一步可能的示例中,片上处理器与数据调度设备之间可以无需建立通信连接;当然,在其它可能的实施方式中,片上处理器与数据调度设备之间也可以建立通信连接以用于其它信息的通信。
S302:数据调度设备响应该调入指令,获取第一目标数据的长度、该第一目标数据在片外存储器中的物理存储地址以及该第一目标数据在片上内存中的物理运行地址。
在一种示例性的具体实施方式中,片上处理器或者片上内存可以预先存储第一目标数据的长度、该第一目标数据在片外存储器中的物理存储地址以及该第一目标数据在片上内存中的物理运行地址(也即为完成数据调入后第一目标数据在片上内存中的具体位置),当片上处理器或者片上内存需要触发数据调度设备进行数据调入时,可以在生成的调入指令中可以携带第一目标数据的长度、物理存储地址以及物理运行地址。这样,数据调度设备在接收到该调入指令后,可以对该调入指令进行解析,从而解析得到第一目标数据的长度、物理存储地址以及物理运行地址。
而在另一种示例性的具体实施方式中,也可以是数据调度设备预先存储了第一目标数据的长度、该第一目标数据在片外存储器中的物理存储地址以及该第一目标数据在片上内存中的物理运行地址。当数据调度设备接收到针对于第一目标数据的调入指令后,可以响应该调入指令,获取自身存储的该第一目标数据所对应的长度、物理存储地址以及物理运行地址等。
S303:数据调度设备根据第一目标数据的物理存储地址以及该第一目标数据的长度,提取片外存储器中的第一目标数据。本实施例中,数据调度设备在获知第一目标数据的长度以及在片外存储器中的物理存储地址后,可以在片外存储器中定位出该第一目标数据。由于所定位出的第一目标数据是需要调入片上内存中的数据,因此,数据调度设备可以从片外存储器中提取该第一目标数据,以便后续的数据调入。
值得注意的是,第一目标数据在片外存储器中的物理存储地址,具体可以是该第一目标数据在片外存储器中进行存储时的首地址,从而数据调度设备基于该首地址以及第一目标数据的长度可以确定出片外存储器中的哪些数据为第一目标数据。当然,在其它可能的实施方式中,该物理存储地址也可以是该第一目标数据在片外存储器中进行存储时的尾地址或者其它地址等。
S304:数据调度设备根据第一目标数据的物理运行地址,将基于第一目标数据得到的第二目标数据调入至片上内存。数据调度设备在提取出第一目标数据后,根据该第一目标数据的物理运行地址,可以确定出数据调入至片上内存的具体位置,进而可以将基于第一目标数据得到的第二目标数据调入至片上内存。值得注意的是,第二目标数据在片上内存中的物理运行地址,具体可以是该第二目标数据在片上内存中进行缓存时的首地址,从而数据调度设备基于该首地址以及第一目标数据的长度可以确定出第二目标数据在片上内存 中的所有缓存位置,以便于数据调度设备将该第二目标数据调入至所确定的位置中进行缓存。当然,在其它可能的实施方式中,该物理存储地址也可以是该第二目标数据在片上内存上进行存储时的尾地址或者其它地址等。
其中,作为一种示例性的具体实施方式,第二目标数据可以就是第一目标数据,则数据调度设备在提取出第一目标数据后,可以根据第一目标数据的物理运行地址,直接将该第一目标数据(也即第二目标数据)调入片上内存的相应位置。
当然,在另一种示例性的具体实施方式中,第一目标数据为被压缩的数据,则目标数据可以是由第一目标数据经解压后所得到的数据。具体的,由于第一目标数据在片外存储器中进行存储时,是以压缩的格式进行存储,因此,数据调度设备在提取出第一目标数据后,可以对该第一目标数据进行解压处理,得到该第二目标数据,进而根据第一目标数据在片上内存的物理运行地址,也即为根据第二目标数据在片上内存的物理运行地址,将该第二目标数据调入至片上内存。可以理解,将数据进行压缩后再存储于片外存储器中,可以减少该数据占用片外存储器的内存空间,从而使得片外存储器可以存储更多的数据。
进一步的,可以利用压缩标识来标识出第一目标数据为被压缩的数据。具体的,数据调度设备在提取出第一目标数据后,可以确定出该第一目标数据是否具有压缩标识,若是,则表明该第一目标数据为被压缩的数据,则数据调度设备可以响应于该压缩标识对该第一目标数据进行解压,得到第二目标数据,若不是,则表明第一目标数据未被压缩处理,第二目标数据也即为该第一目标数据。
相应的,在将数据进行压缩而得到第一目标数据时,可以为该第一目标数据添加压缩标识,以便于数据调度设备能够基于该添加的压缩标识确定该第一目标数据为被压缩的数据。可以理解,上述压缩过程也可以被省略,即当第一目标数据为未被压缩的数据时,可以直接将该未压缩数据从片外存储器调入片上内存。
实际应用的一些场景中,本实施例所提及的片上内存具体可以包括静态随机存取存储器(static random access memory,SRAM)、紧耦合内存(tightly coupled memory,TCM)、缓存器或其他类型的易失性存储器,本实施例所提及的片外存储器具体可以包括DRAM、双数据速率同步动态随机存取存储器(double data rate synchronous dynamic random access memory,DDR)存储器等易失性存储器,或者,片外存储器具体也可以是诸如flash存储器等非易失性存储器。值得注意的是,由于终端处理业务所需执行的程序代码以及业务数据也可以存储于flash存储器中,并由数据调度设备完成该数据的调度,因此,实际应用中,对于一些时延要求不高的业务,可以将业务对应的程序代码以及相应的业务数据存储于诸如flash存储器等非易失性存储器中,从而使得终端可以基于该非易失性存储器存储器中存储的程序代码以及业务数据处理更多的业务。
而且,本实施例中数据调度设备可以从多个片外存储器中提取数据,并将所提取的数据调入至同一个片上内存。这样,多个片外存储器中的程序代码可以共享同一个片上内存,而无需为每个片外存储器单独配置一个与之对应的片上内存,从而可以减少芯片上所需的片上内存的数量,进而可以节约该芯片的成本。进一步的,多个片外存储器可以存储多个不同业务所对应的程序代码以及业务数据,这样,数据调度设备可以同时从多个片外存储 器中调入不同业务所对应的数据,并由片上处理器访问片上内存中不同业务所对应的程序代码以及业务数据,从而可以使得终端可以同时处理不同业务。当然,在另一些实施方式中,在一个片外存储器中也可以是同时存储多个不同业务所对应的程序代码以及业务数据。
实际应用的一些场景中,部分业务对于终端的时延要求可能较高,则为了进一步减少终端实际处理该业务所需耗时,可以预先在片上内存中固定存储该业务所对应的部分或全部数据,这样,终端在处理该业务时,片上处理器可以直接从片上内存中的固定存储区域内读取相应的数据进行执行以及处理,而无需在终端每次执行该业务时由数据调度设备进行调度,从而可以节省数据调度设备进行数据的调入以及调出所需的耗时,进而进一步降低了终端处理该业务所需的时延。当然,在一些示例中,锁定在片上内存中的数据预先可以存储于片外存储器中,而由数据调度设备将该数据从片外存储器中调入至片上内存,并在片上内存中锁定。基于此,在进一步的实施方式中,可以利用锁定标识来标识出该数据在片上内存中处于锁定状态,这样,当数据调度设备接收到针对于该数据的调入指令后,若基于该数据的锁定标识确定该数据已经预先锁定于片上内存,则无需进行数据调度而可以直接通知片上处理器在片上内存的固定存储区域内进行数据的读取。
实际应用中,数据调度设备除了可以将片外存储器中的数据调入片上内存之外,还可以将片上内存中的原有数据进行调出。可以理解,片上内存的内存空间通常小于片外存储器的内存空间,并且若当前片上内存中已经存储有上一段程序代码以及业务数据,则此时片上内存不具有足够的内存空间再去容纳数据调度设备当前调入的第一目标数据,基于此,在数据调度设备将第一目标数据调入片上内存之前,数据调度设备还可以提前将片上内存中的数据进行调出,以使得片上内存具有足够大的内存空间去容纳第一目标数据;另外,由于从片上内存中调出的数据为经过处理后的数据,因此,当终端再次处理该业务时,可能需要基于上一次处理该业务时所得到的程序代码以及业务数据来进行处理,因此,数据调度设备可以将片上内存中的数据调出至片外存储器中进行存储。具体实现时,数据调度设备可以从片上内存中提取出第三目标数据,并将基于第三目标数据而得到的第四目标数据调出至片外存储器中。
其中,在一些示例性的实施方式中,第四目标数据可以就是第三目标数据,则数据调度设备在提取出第三目标数据后,可以直接将该第三目标数据(也即为第四目标数据)调出至片外存储器中进行存储。
而在另一些示例性的实施方式中,第四目标数据可以是压缩后的数据,则第四目标数据可以是对第三目标数据进行压缩后所得到的数据。具体的,为了减少数据在被存储至片外存储器时所占用的内存空间,数据调度设备可以在调出第三目标数据后,对该第三目标数据进行压缩,得到第四目标数据,并将压缩格式的第四目标数据存储于片外存储器中。可以理解,由于压缩格式的第四目标数据的数据量小于未压缩的第三目标数据的数据量,因此,存储第四目标数据所占用的内存空间小于存储第三目标数据所占用的内存空间。
值得注意的是,若第三目标数据是片上处理器处理第二目标数据而得到的数据,则第二目标数据与第三目标数据的数据量通常大致相同,但是当对第三目标数据进行压缩存储时,由于处理器处理第二目标数据所得到的第三目标数据相较于第二目标数据发生变化, 这使得对第三目标数据进行压缩所得到的第四目标数据的数据量与第一目标数据的数据量可能并不相同,因此,在存储处于压缩状态的第四目标数据时,可以记录该第四目标数据的长度以及在片外存储器中的物理存储地址,这样,当数据调度设备后续需要从片外存储器中提取该第四目标数据时,可以基于该第四目标数据的长度以及相应的物理存储地址,定位出位于片外存储器中的第四目标数据。其中,可以是由数据调度设备进行记录,也可以是由片上处理器进行记录。
在进一步的实施方式中,若第三目标数据是片上处理器处理第二目标数据而得到的数据,则数据调度设备在对第三目标数据进行压缩时,可以检测与该第三目标数据所对应的第一目标数据是否具有压缩标识,若确定第一目标数据具有压缩标识,则执行对第三目标数据的压缩过程。可以理解,第一目标数据在片外存储器中是以压缩格式进行存储,则第一目标数据所包含的程序代码以及业务数据在被片上处理器执行后,所得到的第三目标数据也可以是以压缩格式存储于片外存储器中。也就是说,所提取的第一目标数据若是以压缩格式存储于片外存储器的,则也可以是以压缩格式将第三目标数据存储于片外存储器中。
需要注意的是,本实施例中所述的“调出”、“调入”均是针对于片上内存而言,即数据从片上内存进行调出,以及数据调入至片上内存中进行存储。
本实施例中,数据调度设备可以基于接收到的调入指令,获取需要调入片上内存的第一目标数据的长度、该第一目标数据在片外存储器中的物理存储地址以及其在片上内存中的物理运行地址,然后根据该物理存储地址以及第一目标数据的长度提取出片外存储器中的第一目标数据,并将基于该第一目标数据所得到的第二目标数据一次性调入至片上内存中该物理运行地址所对应的位置,其中,该第二目标数据被片上处理器进行处理,该片上处理器、片上内存以及数据调度设备集成于同一芯片上。可见,数据调度设备在将数据调入片上内存时,是基于数据的物理存储地址以及数据的长度确定出需要调入片上内存的所有数据,并一次性的将所有数据全部调入片上内存,而无需频繁的执行数据调入过程,从而减少数据调入至片上内存的耗时时长,降低数据调入片上内存所产生的时延。而且,本申请实施例中是采用硬件的数据调度设备完成数据调度,这相比于现有的利用软件来控制片外存储器中的数据调入片上内存的技术方案而言,数据调度所需的耗时更短,从而也可以降低数据调度所产生的时延。
To facilitate understanding of the technical solutions in the embodiments of this application, the following describes the technical solutions of this application with reference to a specific scenario example. Referring to FIG. 4 and FIG. 5, FIG. 4 is a schematic diagram of signaling interaction of a method for scheduling data in a memory according to an embodiment of this application, and FIG. 5 shows a specific scenario example. In this scenario example, a digital signal processor (DSP), an SRAM (the on-chip memory), and the data scheduling device are integrated on a chip, and the chip is connected to a DDR memory (the off-chip memory). The method includes the following steps. S401: The DSP processor generates an import instruction for first target data and sends the import instruction to the data scheduling device.
S402: In response to the received import instruction, the data scheduling device looks up the descriptor corresponding to the first target data, and determines the length SIZE of the first target data, the physical storage address PSA of the first target data in the DDR memory, the physical running address PRA of the first target data in the SRAM, and a data status flag FLAG. In this embodiment, the import instruction may carry the virtual running address corresponding to the first target data, and the data scheduling device may resolve, from the import instruction, the descriptor corresponding to that virtual running address. In a specific implementation, as shown in FIG. 6, the descriptor may include a virtual running address (VRA), a physical running address (PRA), a physical storage address (PSA), a data length SIZE, and a data status flag FLAG, where the data status flag FLAG may include at least the foregoing compression identifier and/or lock identifier (in this embodiment, the compression identifier is used as an example). According to the virtual running address of the first target data carried in the import instruction, the data scheduling device may search for the descriptor containing a matching virtual running address; after the descriptor is found, the length SIZE of the first target data, the physical storage address PSA of the first target data in the DDR memory, and the physical running address PRA of the first target data in the SRAM can be determined from the descriptor.
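A rough C rendering of the descriptor described in S402 and FIG. 6, with the lookup keyed by the virtual running address carried in the import instruction, might look as follows. The field widths, the flag bit positions, and the names (sched_desc_t, desc_lookup_by_vra) are assumptions made for this sketch only; the actual descriptor layout is defined by the chip.

```c
#include <stdint.h>
#include <stddef.h>

/* Assumed flag bits inside FLAG; the description names a compression
 * identifier and a lock identifier, the bit positions are invented here. */
#define DESC_FLAG_COMPRESSED (1u << 0)
#define DESC_FLAG_LOCKED     (1u << 1)

typedef struct {
    uint64_t vra;   /* virtual running address  (lookup key)            */
    uint64_t pra;   /* physical running address in on-chip SRAM         */
    uint64_t psa;   /* physical storage address in off-chip DDR         */
    uint32_t size;  /* length of the stored (possibly compressed) data  */
    uint32_t flag;  /* data status flags: compression / lock identifier */
} sched_desc_t;

/* S402: find the descriptor whose VRA matches the one carried in the
 * import instruction. A linear scan keeps the sketch simple; a real
 * implementation might use a hash table, or a CAM in hardware. */
static sched_desc_t *desc_lookup_by_vra(sched_desc_t *table, size_t n,
                                        uint64_t vra)
{
    for (size_t i = 0; i < n; i++) {
        if (table[i].vra == vra)
            return &table[i];
    }
    return NULL; /* miss: no descriptor registered for this VRA */
}
```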
S403: According to the PSA, the data scheduling device calls the data at the PSA location in the SRAM out to the DDR memory. In this embodiment, the data scheduling device may obtain the length of the data at that location in the SRAM and, based on that address and the length of the data there, call the data at that location (that is, a data segment in the SRAM) out to the DDR memory for storage. Certainly, in practical applications, if there is no data at that location, step S403 in this embodiment may be skipped and step S404 may be performed directly.
S404: The data scheduling device extracts the first target data from the DDR memory according to the PSA and SIZE. S405: The data scheduling device determines that the FLAG in the descriptor includes the compression identifier, and decompresses the first target data in response to the compression identifier to obtain second target data. S406: The data scheduling device imports the second target data into the corresponding location in the SRAM according to the PRA. S407: The on-chip processor accesses and processes the second target data in the SRAM to obtain third target data, where the third target data is located in the SRAM. S408: The data scheduling device calls the third target data out of the SRAM.
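Steps S404 to S406 — extract SIZE bytes at the PSA, decompress when the FLAG carries the compression identifier, and place the result at the PRA — could be sketched as follows. The sketch reuses the hypothetical sched_desc_t and DESC_FLAG_COMPRESSED from the descriptor sketch above; ddr_read, sram_write, and codec_decompress are assumed platform hooks with invented signatures, not real APIs.

```c
#include <stdint.h>
#include <stddef.h>
/* sched_desc_t and DESC_FLAG_COMPRESSED as defined in the descriptor sketch. */

extern int ddr_read(uint64_t psa, uint8_t *dst, size_t len);          /* S404 */
extern int sram_write(uint64_t pra, const uint8_t *src, size_t len);  /* S406 */
extern int codec_decompress(const uint8_t *src, size_t src_len,
                            uint8_t *dst, size_t dst_cap, size_t *out_len);

/* One-shot import of the whole first target data described by `d`. */
static int sched_import_one_shot(const sched_desc_t *d,
                                 uint8_t *stage, size_t stage_cap,
                                 uint8_t *unpacked, size_t unpacked_cap)
{
    if (d->size > stage_cap)
        return -1;
    if (ddr_read(d->psa, stage, d->size) != 0)           /* S404 */
        return -1;

    if (d->flag & DESC_FLAG_COMPRESSED) {                /* S405 */
        size_t out_len = 0;
        if (codec_decompress(stage, d->size, unpacked, unpacked_cap,
                             &out_len) != 0)
            return -1;
        return sram_write(d->pra, unpacked, out_len);    /* S406 */
    }
    /* Uncompressed: the second target data equals the first target data. */
    return sram_write(d->pra, stage, d->size);           /* S406 */
}
```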
In this embodiment, after the on-chip processor finishes processing the second target data to obtain the third target data, the data scheduling device may proactively call the third target data out of the SRAM. In this way, when the terminal executes another service, the data scheduling device no longer needs to perform the call-out and can perform only the import process, thereby reducing the latency of service processing on the terminal. Certainly, the on-chip processor may instead instruct the data scheduling device to call out the third target data after finishing processing the second target data, so that the data scheduling device can continue to import other data into the SRAM.
S409: The data scheduling device determines, according to the descriptor, that the first target data has the compression identifier, and compresses the third target data to obtain fourth target data. In this embodiment, if the first target data extracted from the DDR memory has the compression identifier, indicating that the first target data was stored in the DDR memory in compressed form, the third target data may likewise be stored in the DDR memory in compressed form. S410: The data scheduling device calls the fourth target data out to the PSA location in the DDR memory according to the PSA in the descriptor.
It should be noted that the lengths of the first target data and the fourth target data may differ. Therefore, after storing the fourth target data in the DDR memory, the data scheduling device may also update the data length SIZE in the descriptor, so that the updated SIZE is the data length of the fourth target data.
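The call-out path of S408 to S410, together with the SIZE update just described, could be sketched in the same style. Again, sched_desc_t comes from the descriptor sketch, and sram_read, ddr_write, and codec_compress are assumed hooks; compressing only when the original data carried the compression identifier mirrors S409.

```c
#include <stdint.h>
#include <stddef.h>
/* sched_desc_t and DESC_FLAG_COMPRESSED as defined in the descriptor sketch. */

extern int sram_read(uint64_t pra, uint8_t *dst, size_t len);         /* S408 */
extern int ddr_write(uint64_t psa, const uint8_t *src, size_t len);   /* S410 */
extern int codec_compress(const uint8_t *src, size_t src_len,
                          uint8_t *dst, size_t dst_cap, size_t *out_len);

/* Call the processed (third target) data of length `proc_len` out of the
 * SRAM, optionally recompress it, write it back at the descriptor's PSA,
 * and record the new length in SIZE. */
static int sched_callout_one_shot(sched_desc_t *d, size_t proc_len,
                                  uint8_t *stage, size_t stage_cap,
                                  uint8_t *packed, size_t packed_cap)
{
    if (proc_len > stage_cap)
        return -1;
    if (sram_read(d->pra, stage, proc_len) != 0)               /* S408 */
        return -1;

    const uint8_t *out = stage;
    size_t out_len = proc_len;
    if (d->flag & DESC_FLAG_COMPRESSED) {                      /* S409 */
        if (codec_compress(stage, proc_len, packed, packed_cap,
                           &out_len) != 0)
            return -1;
        out = packed;
    }
    if (ddr_write(d->psa, out, out_len) != 0)                  /* S410 */
        return -1;
    d->size = (uint32_t)out_len;  /* update SIZE to the new data length */
    return 0;
}
```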
It is worth noting that, in this embodiment, the SRAM is located outside the DSP processor. In some possible implementations, the data scheduling device may therefore control the process by which the DSP processor accesses the SRAM, so that when the DSP processor does not have permission to access the data in the SRAM, its access to the SRAM can be denied, which improves the security of data access. Certainly, in other scenarios, the SRAM may also be located inside the DSP processor to improve the efficiency with which the DSP processor accesses the data in the SRAM.
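The access control mentioned above can be reduced to a check of roughly this shape before a DSP access to the SRAM is allowed to proceed. The permission model (a per-descriptor permission bit tested together with an address-range check) is purely an assumption for illustration; how permissions are actually expressed is not specified here.

```c
#include <stdint.h>
#include <stdbool.h>
/* sched_desc_t as defined in the descriptor sketch above. */

/* Hypothetical permission bit: whether the DSP may touch this region. */
#define DESC_FLAG_DSP_ALLOWED (1u << 2)

/* Return true if an access of `len` bytes at `addr` by the DSP may proceed;
 * `size` is treated loosely as the length of the running region at PRA. */
static bool sched_check_dsp_access(const sched_desc_t *d,
                                   uint64_t addr, uint32_t len)
{
    bool in_range = addr >= d->pra &&
                    addr + len <= d->pra + d->size;
    return in_range && (d->flag & DESC_FLAG_DSP_ALLOWED);
}
```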
此外,本申请实施例还提供了一种调度存储器中数据的设备。参阅图7,图7示出了本申请实施例中一种数据调度设备,该数据调度设备700包括:获取模块701,用于响应于针对第一目标数据的调入指令,获取所述第一目标数据的长度、所述第一目标数据在片外存储器中的物理存储地址、所述第一目标数据在片上内存中的物理运行地址;提取模块702,用于根据所述物理存储地址以及所述第一目标数据的长度,提取所述片外存储器中的所述第一目标数据;调入模块703,用于根据所述物理运行地址,将基于所述第一目标数据得到的第二目标数据调入至所述片上内存,所述第二目标数据被片上处理器处理;其中,所述片上处理器、所述片上内存以及所述数据调度设备700集成于同一芯片上。
在一些可能的实施方式中,所述物理存储地址为所述第一目标数据在所述片外存储器中的首地址或尾地址,所述物理运行地址为所述第一目标数据在所述片上内存中的首地址或尾地址。
在一些可能的实施方式中,所述第一目标数据为被压缩的数据;所述调入模块703,包括:解压单元,用于对所述第一目标数据进行解压,得到所述第二目标数据;调入单元,用于根据所述物理运行地址将所述第二目标数据调入至所述片上内存。
在一些可能的实施方式中,所述解压单元,包括:确定子单元,用于确定所述第一目标数据具有压缩标识;解压子单元,用于响应于所述压缩标识对所述第一目标数据进行解压。
在一些可能的实施方式中,所述数据设备700还包括:第三目标数据获取模块,用于从所述片上内存中获取第三目标数据;调出模块,用于将基于所述第三目标数据得到的第四目标数据调出至所述片外存储器中。
在一些可能的实施方式中,所述第三目标数据是未被压缩的数据;所述调出模块,包括:压缩单元,用于对所述第三目标数据进行压缩,得到所述第四目标数据;调出单元,用于将所述第四目标数据调出至所述片外存储器中。
在一些可能的实施方式中,所述第三目标数据为所述片上处理器处理所述第二目标数据而得到的数据,所述压缩单元,具体用于基于所述第一目标数据的压缩标识,对所述第三目标数据进行压缩。
在一些可能的实施方式中,所述数据调度设备700还包括:记录模块,用于记录所述第四目标数据的长度。
在一些可能的实施方式中,所述片外存储器具体为易失性存储器或者为非易失性存储器,所述片上内存具体为易失性存储器。
需要说明的是,上述数据调度700设备中各模块/单元/子单元之间的信息交互、执行过程等内容,由于与本申请实施例中方法实施例基于同一构思,其带来的技术效果与本申请实施例中方法实施例相同,具体内容可参见本申请实施例前述所示的方法实施例中的叙述,此处不再赘述。例如,该数据调度设备700中各模块/单元/子单元具体可以包括实现相应功 能的硬件电路。该硬件电路包括可以包括数字电路、模拟电路、可编译电路、算法电路或混合电路中的至少一项。
需要说明的是,本申请中“的(英文:of)”,相应的“(英文corresponding,relevant)”和“对应的(英文:corresponding)”有时可以混用,应当指出的是,在不强调其区别时,其所要表达的含义是一致的。
需要说明的是,本申请实施例中,“示例性的”或者“例如”等词用于表示作例子、例证或说明。本申请实施例中被描述为“示例性的”或者“例如”的任何实施例或设计方案不应被解释为比其他实施例或设计方案更优选或更具优势。确切而言,使用“示例性的”或者“例如”等词旨在以具体方式呈现相关概念。
“多个”是指两个或两个以上。“和/或”,描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B的情况,其中A,B可以是单数或者复数。字符“/”一般表示前后关联对象是一种“或”的关系。“以下至少一项(个)”或其类似表达,是指的这些项中的任意组合,包括单项(个)或复数项(个)的任意组合。例如,a,b,或c中的至少一项(个),可以表示:a,b,c,a-b,a-c,b-c,或a-b-c,其中a,b,c可以是单个,也可以是多个。另外,为了便于清楚描述本申请实施例的技术方案,在本申请的实施例中,采用了“第一”、“第二”等字样对功能和作用基本相同的相同项或相似项进行区分。本领域技术人员可以理解“第一”、“第二”等字样并不对数量和执行次序进行限定,并且“第一”、“第二”等字样也并不限定一定不同。
本说明书中的各个实施例均采用递进的方式描述,各个实施例之间相同相似的部分互相参见即可,每个实施例重点说明的都是与其他实施例的不同之处。尤其,对于装置和系统实施例而言,由于其基本相似于方法实施例,所以描述得比较简单,相关之处参见方法实施例的部分说明即可。以上所描述的装置实施例仅仅是示意性的,其中作为分离部件说明的模块可以是或者也可以不是物理上分开的。可以根据实际的需要选择其中的部分或者全部模块来实现本实施例方案的目的。本领域普通技术人员在不付出创造性劳动的情况下,即可以理解并实施。
以上所述仅是本申请示例性的实施方式,并非用于限定本申请的保护范围。

Claims (20)

  1. 一种调度存储器中数据的方法,其特征在于,所述方法包括:
    数据调度设备响应于针对第一目标数据的调入指令,获取所述第一目标数据的长度、所述第一目标数据在片外存储器中的物理存储地址、所述第一目标数据在片上内存中的物理运行地址;
    所述数据调度设备根据所述物理存储地址以及所述第一目标数据的长度,提取所述片外存储器中的所述第一目标数据;
    所述数据调度设备根据所述物理运行地址,将基于所述第一目标数据得到的第二目标数据调入至所述片上内存,所述第二目标数据被片上处理器处理;
    其中,所述片上处理器、所述片上内存以及所述数据调度设备集成于同一芯片上。
  2. 根据权利要求1所述的方法,其特征在于,所述物理存储地址为所述第一目标数据在所述片外存储器中的首地址或尾地址,所述物理运行地址为所述第一目标数据在所述片上内存中的首地址或尾地址。
  3. 根据权利要求1所述的方法,其特征在于,所述第一目标数据为被压缩的数据;所述数据调度设备根据所述物理运行地址,将基于所述第一目标数据得到的第二目标数据调入至所述片上内存,包括:
    所述数据调度设备对所述第一目标数据进行解压,得到所述第二目标数据;
    所述数据调度设备根据所述物理运行地址将所述第二目标数据调入至所述片上内存。
  4. 根据权利要求3所述的方法,其特征在于,所述数据调度设备对所述第一目标数据段进行解压,包括:
    所述数据调度设备确定所述第一目标数据具有压缩标识,并响应于所述压缩标识对所述第一目标数据进行解压。
  5. 根据权利要求1至4中任一项所述的方法,其特征在于,所述方法还包括:
    所述数据调度设备从所述片上内存中获取第三目标数据,并将基于所述第三目标数据得到的第四目标数据调出至所述片外存储器中。
  6. 根据权利要求5所述的方法,其特征在于,所述第三目标数据是未被压缩的数据;所述将基于所述第三目标数据得到的第四目标数据调出至所述片外存储器中,包括:
    所述数据调度设备对所述第三目标数据进行压缩,得到所述第四目标数据;
    所述数据调度设备将所述第四目标数据调出至所述片外存储器中。
  7. 根据权利要求6所述的方法,其特征在于,所述第三目标数据为所述片上处理器处理所述第二目标数据而得到的数据,所述数据调度设备对所述第三目标数据进行压缩,包括:
    所述数据调度设备基于所述第一目标数据的压缩标识,对所述第三目标数据进行压缩。
  8. 根据权利要求6或7所述的方法,其特征在于,所述方法还包括:
    所述数据调度设备或所述片上处理器记录所述第四目标数据的长度。
  9. 根据权利要求1至8任一项所述的方法,其特征在于,所述片外存储器具体为易失性存储器或非易失性存储器,所述片上内存具体为易失性存储器。
  10. 一种数据调度设备,其特征在于,所述设备包括:
    获取模块,用于响应于针对第一目标数据的调入指令,获取所述第一目标数据的长度、所述第一目标数据在片外存储器中的物理存储地址、所述第一目标数据在片上内存中的物理运行地址;
    提取模块,用于根据所述物理存储地址以及所述第一目标数据的长度,提取所述片外存储器中的所述第一目标数据;
    调入模块,用于根据所述物理运行地址,将基于所述第一目标数据得到的第二目标数据调入至所述片上内存,所述第二目标数据被片上处理器处理;
    其中,所述片上处理器、所述片上内存以及所述数据调度设备集成于同一芯片上。
  11. 一种数据调度系统,其特征在于,所述系统包括:片上处理器、片上内存以及数据调度设备,所述片上处理器、所述片上内存以及所述数据调度设备集成于同一芯片上;
    所述数据调度设备用于响应于针对第一目标数据的调入指令,获取所述第一目标数据的长度、所述第一目标数据在片外存储器中的物理存储地址、所述第一目标数据在片上内存中的物理运行地址;根据所述物理存储地址以及所述第一目标数据的长度,提取所述片外存储器中的所述第一目标数据;根据所述物理运行地址,将基于所述第一目标数据得到的第二目标数据调入至所述片上内存;
    所述片上内存,用于存储所述第二目标数据;
    所述片上处理器,用于处理所述第二目标数据。
  12. 根据权利要求11所述的系统,其特征在于,所述物理存储地址为所述第一目标数据在所述片外存储器中的首地址或尾地址,所述物理运行地址为所述第一目标数据在所述片上内存中的首地址或尾地址。
  13. 根据权利要求11或12所述的系统,其特征在于,所述第一目标数据为被压缩的数据;
    所述数据调度设备,具体用于对所述第一目标数据进行解压,得到所述第二目标数据;根据所述物理运行地址将所述第二目标数据调入至所述片上内存。
  14. 根据权利要求13所述的系统,其特征在于,所述数据调度设备,具体用于确定所述第一目标数据具有压缩标识,并响应于所述压缩标识对所述第一目标数据进行解压。
  15. 根据权利要求11所述的系统,其特征在于,所述系统还包括片外存储器,所述片外存储器,用于存储所述第一目标数据。
  16. 根据权利要求15所述的系统,其特征在于,
    所述数据调度设备,还用于从所述片上内存中获取第三目标数据,并将基于所述第三目标数据得到的第四目标数据调出至所述片外存储器中;
    所述片外存储器,还用于存储所述第四目标数据。
  17. 根据权利要求16所述的系统,其特征在于,所述第三目标数据是未被压缩的数据;
    所述数据调度设备,具体用于对所述第三目标数据进行压缩,得到所述第四目标数据;将所述第四目标数据调出至所述片外存储器中。
  18. 根据权利要求17所述的系统,其特征在于,所述第三目标数据为所述片上处理器处理所述第二目标数据而得到的数据;
    所述数据调度设备,具体用于基于所述第一目标数据的压缩标识,对所述第三目标数据进行压缩。
  19. 根据权利要求17或18所述的系统,其特征在于,所述数据调度设备或所述片上处理器,还用于记录所述第四目标数据的长度。
  20. 根据权利要求11至19任一项所述的系统,其特征在于,所述片外存储器具体为易失性存储器或非易失性存储器,所述片上内存具体为易失性存储器。
PCT/CN2019/086567 2019-05-13 2019-05-13 一种调度存储器中数据的方法、数据调度设备及系统 WO2020227878A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2019/086567 WO2020227878A1 (zh) 2019-05-13 2019-05-13 一种调度存储器中数据的方法、数据调度设备及系统
CN201980009406.7A CN112292660B (zh) 2019-05-13 2019-05-13 一种调度存储器中数据的方法、数据调度设备及系统

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/086567 WO2020227878A1 (zh) 2019-05-13 2019-05-13 一种调度存储器中数据的方法、数据调度设备及系统

Publications (1)

Publication Number Publication Date
WO2020227878A1 true WO2020227878A1 (zh) 2020-11-19

Family

ID=73289790

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/086567 WO2020227878A1 (zh) 2019-05-13 2019-05-13 一种调度存储器中数据的方法、数据调度设备及系统

Country Status (2)

Country Link
CN (1) CN112292660B (zh)
WO (1) WO2020227878A1 (zh)

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0834812A1 (en) * 1996-09-30 1998-04-08 Cummins Engine Company, Inc. A method for accessing flash memory and an automotive electronic control system
CN104156226B (zh) * 2013-05-15 2019-01-15 索尼公司 混合内存设备的挂起或关机方法
CN104423887B (zh) * 2013-08-22 2019-04-12 深圳富泰宏精密工业有限公司 移动设备内存管理方法及系统
EP3361385A4 (en) * 2015-12-03 2018-11-21 Huawei Technologies Co., Ltd. Data migration method applicable to computer system, and device and computer system utilizing same

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6453365B1 (en) * 1998-02-11 2002-09-17 Globespanvirata, Inc. Direct memory access controller having decode circuit for compact instruction format
CN101382927A (zh) * 2008-09-25 2009-03-11 杭州爱威芯科技有限公司 集成在芯片内的高速串行外围接口电路
CN102403034A (zh) * 2010-09-07 2012-04-04 艾默生网络能源有限公司 Dsp控制单板设备及其远程升级方法和服务器
CN106961608A (zh) * 2017-04-07 2017-07-18 山东师范大学 高清解码器数字显示混合格式码流自适应处理系统及方法
CN109189721A (zh) * 2018-08-27 2019-01-11 中国科学院电工研究所 一种实时性数据存储方法

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210256973A1 (en) * 2020-02-13 2021-08-19 Baidu Online Network Technology (Beijing) Co., Ltd. Speech chip and electronic device
US11735179B2 (en) * 2020-02-13 2023-08-22 Baidu Online Network Technology (Beijing) Co., Ltd. Speech chip and electronic device

Also Published As

Publication number Publication date
CN112292660A (zh) 2021-01-29
CN112292660B (zh) 2022-05-31

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19928910

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19928910

Country of ref document: EP

Kind code of ref document: A1