WO2021237518A1 - Data storage method and apparatus, processor and computer storage medium - Google Patents

Data storage method and apparatus, processor and computer storage medium

Info

Publication number
WO2021237518A1
Authority
WO
WIPO (PCT)
Prior art keywords
data
feature map
compression
compressed
unit
Prior art date
Application number
PCT/CN2020/092640
Other languages
English (en)
Chinese (zh)
Inventor
李鹏
阮肇夏
王耀杰
Original Assignee
深圳市大疆创新科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市大疆创新科技有限公司 filed Critical 深圳市大疆创新科技有限公司
Priority to PCT/CN2020/092640 priority Critical patent/WO2021237518A1/fr
Priority to PCT/CN2020/099495 priority patent/WO2021237870A1/fr
Publication of WO2021237518A1 publication Critical patent/WO2021237518A1/fr

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02 Addressing or allocation; Relocation
    • G06F 12/08 Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Definitions

  • the embodiments of the present invention relate to the field of data processing, and more specifically, to a method, device, processor, and computer storage medium for data storage.
  • current data compression methods achieve only a small amount of compression; that is, even the compressed data occupies a large amount of storage space. Moreover, larger compressed data consumes larger bandwidth resources when it is read from and written to the memory.
  • the embodiments of the present invention provide a data storage method, device, processor, and computer storage medium, which can compress feature map data before storing, which reduces storage space and reduces bandwidth resources when reading and writing.
  • in a first aspect, a data storage method includes:
  • in a second aspect, a data storage device includes:
  • a receiving device configured to receive feature map data to be stored;
  • a dividing device configured to divide the feature map data into multiple data units;
  • a compression device configured to, for each data unit of the multiple data units, determine whether the data in the data unit is all zeros, and perform compression according to the result of the determination; and
  • a storage device configured to store the compressed feature map data.
  • a processor including:
  • a computer storage medium on which a computer program is stored, and when the computer program is executed by a processor, the steps of the method described in the first aspect are implemented.
  • the method for data storage in the embodiments of the present invention fully takes into account both the zeros in the feature map and the fact that adjacent feature map data have close values, and uses a difference method for compression, so that the compressed data occupies less storage space. On the one hand, this reduces the space occupied in the external memory; on the other hand, it also reduces the bandwidth resources consumed when reading and writing, and saves power.
  • Fig. 1 is a schematic diagram of data storage according to an embodiment of the present invention.
  • Fig. 2 is a schematic block diagram of a system for data compression storage according to an embodiment of the present invention.
  • Fig. 3 is a schematic diagram of various modules of a system for data compression storage according to an embodiment of the present invention.
  • Fig. 4 is a schematic diagram of a flow of compression performed by the system for data compression storage according to an embodiment of the present invention.
  • FIG. 5 is another schematic diagram of a process of performing compression by the system for data compression storage according to an embodiment of the present invention.
  • Fig. 6 is a schematic diagram of a state machine of the system for data compression storage according to an embodiment of the present invention.
  • FIG. 7 is a schematic diagram of each module of a compression path of a system for data compression storage according to an embodiment of the present invention.
  • FIG. 8 is a schematic flowchart of a data storage method according to an embodiment of the present invention.
  • FIG. 9 is a schematic diagram of a minimum access unit storage characteristic map according to an embodiment of the present invention.
  • FIG. 10 is another schematic diagram of a minimum access unit storage characteristic map according to an embodiment of the present invention.
  • FIG. 11 is a schematic diagram of calculating multiple differences for one data unit according to an embodiment of the present invention.
  • Fig. 12 is a schematic structural diagram of a scan coding module according to an embodiment of the present invention.
  • Fig. 13 is a schematic diagram of fetching a data unit from a minimum access unit according to an embodiment of the present invention.
  • FIG. 14 is a schematic structural diagram of a difference algorithm compression module according to an embodiment of the present invention.
  • FIG. 15 is a schematic diagram of several situations of compressed data of a data unit according to an embodiment of the present invention.
  • FIG. 16 is a schematic flowchart of a compression process performed by a compression path according to an embodiment of the present invention.
  • FIG. 17 is a schematic block diagram of an apparatus for data storage according to an embodiment of the present invention.
  • neural networks such as Convolutional Neural Networks (CNN).
  • a large amount of feature map data will be generated.
  • data compression technology is usually used, which can reduce the space occupied in the external memory and reduce the bandwidth consumed when reading and writing.
  • the external memory may be, for example, a Double Data Rate Synchronous Dynamic Random Access Memory, or DDR for short.
  • a convolutional neural network generally includes a large number of convolutional layers, and each convolutional layer generates a large amount of feature map data.
  • when these large amounts of feature map data are read from and written to the DDR, they consume valuable external memory bandwidth, so that other modules with high bandwidth requirements (such as a CNN or other modules) cannot access the DDR quickly, which affects computing performance.
  • an embodiment of the present invention provides a method for a data compression storage system.
  • the feature map data calculated by the convolutional neural network may be located in an on-chip memory.
  • the embodiment of the present invention aims to compress the feature map data in the on-chip memory and store the compressed data in the external memory.
  • the process can be similar to that shown in Figure 1, where the on-chip memory can be assumed to be on-chip SRAM, and the external memory can be assumed to be DDR.
  • the compression system reads the feature map data from the on-chip memory, the feature map data is compressed, and the compressed feature map data is stored in the external memory.
  • a system for data compression storage includes at least: a compression instruction generation module, a read arbitration module, at least two compression paths, and a write arbitration module, as shown in FIG. 2.
  • the number of compression paths in the embodiment of the present invention is at least two, for example, it can be 3 or more, and can be specifically configured according to the performance of the processor and the size of the feature map data to be processed. In this way, the embodiment of the present invention can flexibly configure the number of compression paths according to the output rate of the feature map data, so as to flexibly meet the performance requirements of different processing tasks and improve the compression performance.
  • the compression instruction generation module can be used to distribute the compression instructions to various compression paths.
  • the compression path can read the corresponding feature map data from the on-chip memory according to the compression instruction, and then compress the read feature map data.
  • the read arbitration module can arbitrate the read feature map commands of at least two compression paths for the feature map data in the on-chip memory.
  • the write arbitration module can arbitrate the write requests of at least two compression paths to write the compressed data into the external memory.
  • the compression instruction generation module can be expressed as the ENC_INSTR_PROC module; it can receive the compression instruction, parse it, and further distribute the parsed compression instruction to each compression path.
  • the processor may send a compression instruction to the compression instruction generation module.
  • after the compression instruction generation module receives the compression instruction, it can correspondingly distribute the compression instruction to each compression path, so that each compression path reads the feature map data from the on-chip memory and compresses it.
  • the compression instructions distributed to a certain compression path may include: the number of feature maps to be compressed by the compression path; the width and height of the feature maps; the base address of these feature maps in the on-chip memory; the inter-picture storage interval of these feature maps in the on-chip memory; the base address in the external memory at which these feature maps are stored after being compressed by the compression path; the inter-picture storage interval of these compressed feature maps in the external memory; the base address in the external memory of the compression header information generated when these feature maps are compressed by the compression path; and the header information storage interval in the external memory of that compression header information.
  • the read arbitration module can be represented as the FM_RD_ARB module. Referring to Figure 3, the system can also include a read command buffer module (which can be represented as the RD_CMD_FIFO module) and a read data path identification buffer module (which can be represented as the RDATA_ID_FIFO module); both are connected to the read arbitration module.
  • the read arbitration module can obtain the read feature map commands issued by each compression path, that is, the read feature map commands issued by each compression path can be gathered here.
  • the read feature map commands of each compression path can be cached in the respective read command cache modules.
  • the read arbitration module can arbitrate the read feature map commands in each read command cache module according to the arbitration rules to obtain an arbitration result.
  • if, for example, the first compression path wins arbitration, its read feature map command is sent to the on-chip memory first, and the path identification (ID) of the first compression path is stored in the read data path identification cache module. After the feature map data is returned from the on-chip memory, the returned data is sent to the compression path corresponding to the path identification (ID) stored in the read data path identification cache module.
  • the arbitration rules may be a priority mechanism or a fair polling mechanism configured according to the compression instruction, or may be other arbitration-related mechanisms, which are not listed here. It can be understood that if the arbitration rule is a priority mechanism, the compression processing of the compression path with a high priority is guaranteed first, ensuring the task performance of the high-priority compression path.
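As an illustration of the two arbitration rules just mentioned, the following sketch picks one winning path per cycle under either a priority mechanism or fair polling. It is an assumption-laden software model, not the patent's hardware logic, and all names are invented:

```python
def arbitrate(requests, priorities=None, rr_state=0):
    """Pick one winning compression path among pending requests.

    requests   -- list of bools, True if that path has a pending command
    priorities -- optional list, higher value wins (priority mechanism);
                  when None, fair round-robin starting after rr_state
    Returns the winning path index, or None if nothing is pending.
    """
    pending = [i for i, r in enumerate(requests) if r]
    if not pending:
        return None
    if priorities is not None:                      # priority mechanism
        return max(pending, key=lambda i: priorities[i])
    n = len(requests)                               # fair polling (round-robin)
    for off in range(1, n + 1):
        i = (rr_state + off) % n
        if requests[i]:
            return i
```

A caller would feed the winner's command to the memory and record the winner as the new `rr_state` for the next round.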
  • the write arbitration module which can be expressed as the FM_WR_ARB module, can obtain the write requests issued by each compression path, that is, the write requests issued by each compression path can be gathered here.
  • each write request can be arbitrated according to the arbitration rules, and the arbitration result can be obtained. If the result of the arbitration indicates that the second compression path wins, the write request of the second compression path is first sent to the external memory, that is, the compressed data obtained by the second compression path is stored in the external memory.
  • the arbitration rules may be a priority mechanism or a fair polling mechanism configured according to the compressed instruction; or may be other arbitration-related mechanisms, which are not listed here.
  • each compression path module can include a feature map reading module, a feature map caching module, a data compression module, a data packing module, a compression header generation module, a length alignment module, and compression Header cache module, compressed feature map cache module, compressed header write module and compressed feature map write module.
  • the feature map reading module which can be expressed as an RD-FM module, can send a feature map read command for the original feature map in the on-chip memory according to the compression instruction received from the compression instruction generation module.
  • the read feature map command may include the width and height of the original feature map to be read, the base address of the on-chip memory, and so on.
  • the feature map reading module is also used, during a bypass operation, to re-read from the on-chip memory the original feature map that was compressed this time.
  • the feature map cache module which can be expressed as the SRC_FM_FIFO module, can be used to store the original feature map read back from the on-chip memory.
  • the data compression module can be used to divide the original feature map in the feature map cache module into multiple data units, and perform differential compression for each of the multiple data units.
  • the data compression module may include: a scan coding module and a difference algorithm compression module.
  • the scan coding module can be expressed as the SCAN_DPCM module
  • the difference algorithm compression module can be expressed as the RES_ENC module.
  • the data compression module will be described in more detail below in conjunction with FIG. 7 to FIG. 17.
  • the data packing module can be expressed as a DATA_PACK module, which is used to splice the data compressed by the data compression module into complete compressed data. Specifically, the fragmented data compressed by the data compression module is spliced into complete data, for example, into data with a unit of 16 bytes.
  • the length alignment module can be expressed as the LEN_ALIGN module, which is used to fill in the length of the compressed data spliced by the data packing module to a specific length.
  • the specific length is related to the chip performance of the external memory. That is, the specific length may be preset according to the performance of the chip of the external memory.
  • the compressed length can be filled with invalid data to a certain length.
  • the length of the compressed data is padded from N×16B to ceil(N/4)×64B, where ceil represents rounding up and N is a positive integer.
  • for some external memories (for example, DDR), the embodiment of the present invention can ensure that the external memory works more efficiently by providing the length alignment module, which can improve the performance of the entire system.
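The padding rule above (N×16B rounded up to ceil(N/4)×64B) can be sketched as follows; the helper name is illustrative, not from the patent:

```python
import math

def align_length(compressed_16b_units):
    """Pad a compressed length of N x 16B up to ceil(N/4) x 64B.

    Returns (aligned length in bytes, padding bytes filled with
    invalid data).
    """
    n = compressed_16b_units
    aligned_bytes = math.ceil(n / 4) * 64
    padding = aligned_bytes - n * 16
    return aligned_bytes, padding
```

For example, a 5×16B = 80B result is padded to 128B, i.e. two 64B accesses with 48 bytes of invalid fill.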
  • the compression header generation module can be expressed as the ENC_HDR_GEN module, which can generate compression header information corresponding to the compressed data obtained by the data compression module according to the compression instruction received from the compression instruction generation module.
  • the compression header information can be generated according to the address information in the compression instruction, the feature map size information, the length of the compression result in the current clock cycle, whether it is the end of the current data unit, and so on.
  • the generated compressed header information can be used to determine whether the current data unit needs to be bypassed.
  • the generated compressed header information can be used to decompress compressed data in the future. It is understandable that the process of judging whether the bypass is needed based on the compressed header information is optional, but not necessary, that is, the compressed data and compressed header information can be stored without judging whether the bypass is needed.
  • the compressed header buffer module which can be expressed as the ENC_HDR_FIFO module, is used to buffer the compressed header information to be output generated by the compressed header generator module.
  • the compressed feature map cache module, which can be expressed as the ENC_FM_FIFO module, is used to cache the data to be output.
  • the cached data may be compressed data whose length has been padded, or it may be the original feature map read back from the on-chip memory during a bypass operation, after its length has been padded.
  • the compression header writing module can be expressed as the ENC_HDR_WR module, which performs the writing operation of the compression header information.
  • the compressed feature map write module can be represented as the ENC_FM_WR module; it performs the data write operation, specifically writing the compressed data in the compressed feature map cache module (or, during a bypass operation, the original feature map read back from the on-chip memory) to the external memory.
  • the compression header information for the compressed data and the compression header information for the original data when the bypass operation is performed may have different compression identifiers.
  • the first compression identifier represents compressed data
  • the second compression identifier represents original data.
  • the compressed feature map write module, namely the ENC_FM_WR module, records the base address of this write operation, that is, the address that will be written in the external memory.
  • the currently working module may include a data compression module and so on.
  • the feature map reading module, that is, the RD_FM module, re-sends the read feature map instruction, thereby restarting the reading of the original feature map of this compression unit.
  • the original feature map is read from the on-chip memory again, and the read original feature map can be stored in the feature map cache module.
  • after the bypass mechanism reads the original feature map, the data does not go through the data compression module and the data packing module; instead, it goes directly from the feature map cache module to the length alignment module, reusing that module for output length alignment.
  • the compressed feature map writing module overwrites the previously written compressed data with the original feature map, and outputs the original feature map to the external memory.
  • the embodiment of the present invention can ensure that the storage space occupied by the external memory is smaller by setting the bypass mechanism.
  • the feature map data obtained through the convolutional neural network in the processor can be compressed and stored in the external memory.
  • a schematic flowchart of the method may be as shown in FIG. 4 and includes:
  • S101: distribute the compression instruction to each of the at least two compression paths;
  • each compression path reads a corresponding original feature map from the on-chip memory according to the received compression instruction, and compresses the read original feature map;
  • the compression instruction generation module can receive the compression instruction, parse the compression instruction, and then distribute the compression instruction to each compression path (PATH).
  • the received compression instruction may include information describing the compression task to be performed by each compression path and the priority of each compression path. After the compression instruction is parsed, the task and the priority of each compression path can be configured according to the parsing result. After that, each compression path can perform its compression work in accordance with the received compression instruction.
  • the feature map data can be read from the on-chip memory, compressed, and then compressed information (such as compression header information including length) can be calculated.
  • if the length of the compressed data is greater than the length of the original feature map data, the original feature map data is read again.
  • length alignment is performed, and the compression result is written.
  • the written compression result includes not only the compressed data after the length is filled or the original feature map data when the bypass operation is performed, but also the compressed header information. If each compression path has completed the compression storage process, the flow of the compression instruction ends; otherwise, it waits for the unfinished compression path to continue execution.
  • the system for data compression storage of the embodiment of the present invention can realize compression instruction reception, processing, and distribution, and can monitor and feedback completion.
  • the workflow shown in FIG. 5 is clear and can realize the compression and storage of feature map data.
  • system for data compression storage in the embodiment of the present invention may have multiple different states, including but not limited to: idle state, receiving instruction state, parsing instruction state, waiting for completion state, and the like.
  • state switching can be implemented according to the state machine shown in FIG. 6.
  • the idle state can be expressed as the IDLE state.
  • the start signal of the compression command can be expressed as instr_strt.
  • the receiving instruction status can be expressed as the RCV_INSTR state.
  • the instruction ready signal can be output, and at the same time as or after the instruction ready signal is output, the state machine switches to the parse instruction state.
  • the instruction ready signal can be expressed as instr_rdy.
  • the parse instruction state can be expressed as the PROC_INSTR state.
  • in the parse instruction state, the compression instruction received in the receive instruction state is parsed, and the compression instruction is distributed to each compression path according to the parsing result.
  • the compression instructions distributed to each compression path can be expressed as instr_isu.
  • the compression instructions distributed to a certain compression path may include: the number of feature maps to be compressed by the compression path, the width of the feature maps, the height of the feature maps, the base addresses of these feature maps in the on-chip memory, and so on.
  • the command information included in the compression instruction distributed to compression path 1 may include: (1) FM_NUM, indicating the number of feature maps that need to be compressed in compression path 1; (2) FM_WIDTH, indicating the width of the feature maps to be compressed in compression path 1; (3) FM_HIGHT, indicating the height of the feature maps to be compressed in compression path 1; (4) FM_SRAM_BADDR, the base address in the on-chip memory of the feature maps to be compressed in compression path 1; (5) FM_SRAM_LEN, indicating the storage interval in the on-chip memory of the feature maps to be compressed in compression path 1; (6) FM_DDR_BADDR, indicating the base address in the external memory of the feature maps after compression by compression path 1; (7) FM_DDR_LEN, indicating the storage interval in the external memory of the feature maps compressed by compression path 1; (8) FM_HDR_BADDR, indicating the base address in the external memory of the compression header information corresponding to the feature maps compressed by compression path 1.
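As a reading aid, the per-path instruction fields enumerated above can be collected in a structure like the following. This is only a sketch: the patent names the mnemonics but not this representation, and any fields beyond (8) are omitted (FM_HIGHT is kept as spelled in the source):

```python
from dataclasses import dataclass

@dataclass
class PathInstruction:
    """Per-path compression instruction fields (illustrative model)."""
    fm_num: int         # FM_NUM: number of feature maps to compress
    fm_width: int       # FM_WIDTH: feature map width
    fm_hight: int       # FM_HIGHT: feature map height (sic)
    fm_sram_baddr: int  # FM_SRAM_BADDR: base address in on-chip memory
    fm_sram_len: int    # FM_SRAM_LEN: storage interval in on-chip memory
    fm_ddr_baddr: int   # FM_DDR_BADDR: compressed output base address in DDR
    fm_ddr_len: int     # FM_DDR_LEN: compressed output storage interval
    fm_hdr_baddr: int   # FM_HDR_BADDR: compression header base address
```

A dispatcher in the ENC_INSTR_PROC role would parse the incoming instruction into one such record per compression path.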
  • the waiting state can be expressed as the WAIT_DONE state.
  • the completion signal of each compression path can be monitored, and after the completion of all the compression paths is monitored, it can be switched to the idle state.
  • an instruction completion signal can be output to the upper-level module.
  • the instruction completion signal can be expressed as instr_done.
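The IDLE, RCV_INSTR, PROC_INSTR, WAIT_DONE cycle described above can be modeled as a small transition table. The triggering events are assumptions inferred from the named signals (instr_strt, instr_rdy, instr_isu, instr_done), not the patent's exact conditions:

```python
# State machine sketch for the compression instruction generation module.
# "all_done" stands for the monitored completion of every compression path,
# at which point instr_done would be output and the machine returns to IDLE.
TRANSITIONS = {
    ("IDLE",       "instr_strt"): "RCV_INSTR",
    ("RCV_INSTR",  "instr_rdy"):  "PROC_INSTR",
    ("PROC_INSTR", "instr_isu"):  "WAIT_DONE",
    ("WAIT_DONE",  "all_done"):   "IDLE",
}

def step(state, event):
    """Advance one state; unrecognized events leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)
```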
  • the embodiment of the present invention can ensure the normal operation of the system and ensure the safe and orderly storage of the feature map data.
  • the following describes, in conjunction with FIG. 7 to FIG. 17, how each compression path performs data compression. It can be understood that, since the process of compressing the feature map data is similar for each compression path, the following compression process may be performed by any compression path.
  • Figure 7 shows a schematic diagram of each module of a compression path.
  • the function of each module is as described above in conjunction with FIG. 3, and in FIG. 7, the dashed box shows a data compression module, which includes a scan coding module and a difference algorithm compression module.
  • the scan coding module can be expressed as the SCAN_DPCM module
  • the difference algorithm compression module can be expressed as the RES_ENC module.
  • the scan coding module (SCAN_DPCM module) is the difference calculation module of the difference (Residual, RES) compression method; it can scan (SCAN) the data to be compressed according to the compression performance and take out a certain amount of data for compression.
  • the difference algorithm compression module (RES_ENC module) is the data compression module of the difference compression method. According to the difference compression algorithm, it can compress the difference values output by the scan encoding module (SCAN_DPCM module), and at the same time output the length of the current-cycle compression result and whether it is the end of the current data unit.
  • the adjacent values of the feature map output by a convolutional layer are very close or even equal, especially when the data in the feature map is pixel data. Therefore, this characteristic can be fully utilized; that is, the differences between adjacent feature map data are considered for compression.
  • the embodiment of the present invention is not limited to the case where the data in the feature map is pixel data.
  • for feature map data other than pixel data, the method and device disclosed in the embodiments of the present invention can also be used for compression.
  • FIG. 8 is a schematic flowchart of a data storage method according to an embodiment of the present invention.
  • the method shown in Figure 8 includes:
  • S140: store the compressed feature map data.
  • the compression instruction generation module reads the compression instruction from the on-chip memory.
  • the feature map read at one time through the read feature map command may correspond to the smallest access unit of the memory.
  • the feature map data corresponding to the smallest access unit of the memory is received in S110.
  • the size of the received feature map data may be equal to or smaller than the minimum access unit of the memory.
  • the feature map data corresponding to the smallest access unit of the memory can be referred to as a compression unit.
  • the feature map data corresponding to the smallest access unit is divided into multiple data units.
  • aligning the width and the number of rows of the feature map according to the minimum access unit of the memory can facilitate the access of the feature map on the one hand, and can efficiently use the bandwidth of the read-write memory on the other hand.
  • the storage space required by a row of data of the feature map data is greater than the minimum access unit, the data located in the same minimum access unit belongs to the same row of the feature map data. If the storage space required for one row of feature map data is less than the minimum access unit, the data belonging to the same row of feature map data is located in the same minimum access unit.
  • the storage space required for one row of feature map data is determined according to the width of the feature map and the bit width of each data (for example, the pixel bit width).
  • the minimum access unit of the memory is 32Byte (referred to as 32B for short), and the data bit width of each pixel is 8bit.
  • the total storage length of each feature map is aligned according to 32B.
  • the width of the feature map is fm_w
  • each row is aligned to 64B, and each 64B stores at most one row of the feature map (one row may also require multiple 64B blocks); the remaining space can be filled with invalid data of 0;
  • the minimum access unit of the memory may also be 16 Bytes or other sizes.
  • the data in the feature map is pixel data
  • the data bit width of each pixel can also be 4bit or 16bit or other sizes, and the storage form of the feature map can be determined similarly, and the embodiment of the present invention will not list them one by one.
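For the 8-bit-pixel example above (rows aligned to 64B, with the remainder zero-filled), the layout arithmetic can be sketched as follows; the function and parameter names are illustrative, not from the patent:

```python
def row_layout(fm_w, pixel_bits=8, align_bytes=64):
    """Bytes one feature-map row occupies after alignment, plus padding.

    fm_w        -- feature map width in pixels
    pixel_bits  -- data bit width per pixel (8 in the example)
    align_bytes -- row alignment unit (64B in the example)
    Returns (aligned row size in bytes, zero-fill padding in bytes).
    """
    row_bytes = fm_w * pixel_bits // 8
    aligned = -(-row_bytes // align_bytes) * align_bytes  # ceil to align_bytes
    return aligned, aligned - row_bytes
```

So a 40-pixel-wide row of 8-bit data occupies one 64B block with 24 bytes of zero fill, while a 100-pixel row needs two 64B blocks.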
  • the feature map data to be stored corresponding to the read feature map command can be received in S110, and temporarily stored in the feature map cache module. It can be understood that what is received in S110 is the original feature map data before compression.
  • S120 and S130 in FIG. 8 may be executed by the data compression module.
  • the current compression unit can be set, such as a row of the feature map data or all of the feature map data. Subsequently, the current compression unit is divided into multiple data units. As an example, one data unit may include 8 pixels. In this way, a compression path can compress data units of 8 pixels at a time.
  • each compression path can compress a data unit of 8 pixels at a time, which improves the degree of parallelism; on the one hand this improves the efficiency and speed of compression, and on the other hand it avoids the compression path becoming the performance bottleneck of the system.
  • compression may be performed through the following process: divide the data unit into one or more groups; if the data of a first group of the multiple groups is all zeros, then the compressed data is 0; if the data in a second group of the multiple groups is not all zeros, then determine multiple differences between the data in the second group, and compress based on the multiple differences.
  • "the data of the first group is all zeros" means that every value in the first group is zero; "the data in the second group is not all zeros" means that at least one value in the second group is non-zero.
  • the multiple differences between the data in the second group refer to the differences between every two adjacent pixels.
  • determining the multiple differences between the data in the second group may include: determining the difference between the first data in the second group and the last data before the second group, and determining the difference between each data in the second group except the first data and the first data. It is understandable that if the second group includes n0 data, then n0 differences will be obtained. Also, it should be noted that the multiple differences are signed values.
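A minimal sketch of this difference scheme (the function and variable names are illustrative; the text also mentions an adjacent-pixel variant):

```python
def group_differences(group, prev_value):
    """Signed differences for a non-all-zero group: the first value is
    differenced against the last value before the group, and every
    other value against the group's first value."""
    diffs = [group[0] - prev_value]
    diffs += [p - group[0] for p in group[1:]]
    return diffs
```

E.g. `group_differences([10, 12, 11, 13], 9)` yields `[1, 2, 1, 3]`; a group of n0 values always yields n0 differences.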
  • the following may be executed by the scan coding module in the embodiment of the present invention: divide a data unit into one or more groups; determine whether the data in each group is all zeros; and if the data in a certain group is not all zeros, calculate multiple differences between the data in that non-all-zero group.
  • one data unit can be divided into two groups, that is, each group includes 4 pixels. If the 8 pixels of a data unit are represented as {p1, p2, p3, p4, p5, p6, p7, p8}, then the two groups after division are {p1, p2, p3, p4} and {p5, p6, p7, p8}. Subsequently, for the first group of data {p1, p2, p3, p4}, it is determined whether the pixel values of these four pixels are all zeros; if they are all zeros, the group can be represented by an all-zero indicator, for example, a 1-bit "0".
  • if the pixel values of these four pixels are not all zeros, that is, at least one pixel is non-zero, the group can be indicated by a non-all-zero indicator, for example, a 1-bit "1".
  • the indicator can be obtained by judging whether the two groups are all zeros, as shown in Table 1 below.
  • if the indicator is "10", the difference values D1, D2, D3, and D4 can be calculated; if the indicator is "01", the difference values D5, D6, D7, and D8 can be calculated; if the indicator is "11", the difference values D1, D2, D3, D4, D5, D6, D7, and D8 can be calculated.
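The all-zero judgment for the two groups can be sketched as follows (a hypothetical helper; the bit order, first group then second group, is an assumption consistent with the indicator values above):

```python
def group_indicator(unit):
    """2-bit indicator for the two 4-pixel groups of an 8-pixel data
    unit: '0' marks an all-zero group, '1' a non-all-zero group."""
    flags = ""
    for group in (unit[:4], unit[4:]):
        flags += "0" if all(p == 0 for p in group) else "1"
    return flags
```

An all-zero unit yields "00", so its two groups compress down to just the two indicator bits.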
  • the multiple difference values obtained are signed numbers. For example, assuming that each pixel in a data unit is an 8-bit signed number, the difference obtained is a 9-bit signed number, where the first bit of the 9-bit signed number is its sign bit: a sign bit of 0 indicates a non-negative number, and a sign bit of 1 indicates a negative number.
  • a schematic structural diagram of the scan coding module (SCAN_DPCM module) in the embodiment of the present invention may be as shown in FIG. 12.
  • the register, which can be denoted SRORAGE_MIN_UNIT, holds the smallest access unit of the temporary storage memory, which can include multiple data units.
  • the scan coding module may divide the feature map data in the minimum access unit into multiple data units, that is, all data in one data unit are located in the same minimum access unit.
  • regarding the minimum access unit, in conjunction with FIG. 9 and FIG. 10: if the storage space required for one row of feature map data is greater than the minimum access unit, the data located in the same minimum access unit belongs to the same row of the feature map data; if the storage space required for one row of feature map data is less than the minimum access unit, the data belonging to the same row of feature map data is located in the same minimum access unit.
  • SCAN_MUX can select a data unit from the register (SRORAGE_MIN_UNIT), and then divide the data unit into one or more groups for compression. Specifically, SCAN_MUX fetches one data unit at a time from the smallest access unit until the whole smallest access unit has been traversed. In order to avoid invalid compression operations, a data unit must contain at least one valid pixel; if all data contained in a data unit is invalid padding data, it is an invalid data unit, and that data unit can be skipped so that processing continues with the next data unit.
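The SCAN_MUX traversal with skipping of invalid (padding-only) data units might look like the following sketch, where padding pixels are modeled as None (a representation chosen here purely for illustration):

```python
def iter_valid_units(min_access_unit, unit_size=8):
    """Yield data units from one smallest access unit, skipping units
    that contain no valid pixel; padding inside a valid unit is
    replaced by 0 so it compresses as zeros."""
    for i in range(0, len(min_access_unit), unit_size):
        unit = min_access_unit[i:i + unit_size]
        if all(p is None for p in unit):  # invalid unit: padding only
            continue
        yield [0 if p is None else p for p in unit]
```

A smallest access unit holding one valid 8-pixel unit followed by pure padding therefore produces exactly one data unit for compression.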
  • compressing the multiple differences in S130 may include: determining the number of storage bits according to multiple non-negative numbers corresponding to the multiple differences, and compressing the multiple differences according to the sign bits of the multiple differences and the determined number of storage bits.
  • the multiple differences may include both large and small values; when the multiple differences are all small, they can be represented by fewer bits, so that the compressed data occupies less storage space.
  • the above-mentioned number of storage bits can be expressed as len, which is used to represent the minimum number of bits that need to be used after compressing multiple differences.
  • this process can be executed by the difference algorithm compression module. Specifically, the number of storage bits can be determined according to the multiple non-negative numbers corresponding to the multiple differences, and the multiple difference values can be compressed according to their sign bits and the number of bits.
  • specifically, it may include: determining a plurality of non-negative numbers corresponding one-to-one to the plurality of difference values; determining the number of bits required for storage according to the position of the highest non-zero bit among the plurality of non-negative numbers; and compressing the multiple differences according to their sign bits and the number of bits, where the storage length of each compressed difference is the number of bits.
  • the non-negative number corresponding to the difference may refer to the absolute binary value of the difference.
  • the sign bit of the first difference value indicates that the first difference value is a non-negative number
  • the non-negative number corresponding to the first difference value is the number obtained by removing the sign bit of the first difference value.
  • the sign bit of the second difference value indicates that the second difference value is a negative number
  • the non-negative number corresponding to the second difference value is obtained by removing the sign bit of the second difference value and inverting the remaining bits.
  • the position of the highest non-zero bit among the multiple non-negative numbers can be determined by performing a bitwise OR operation on the multiple non-negative numbers, and from this the number of bits required to store the multiple differences can be determined.
  • the stored compressed data may include: a non-all-zero indicator, a bit number indicator, and multiple compressed differences, where the bit number indicator represents the length of the multiple compressed differences after removing the sign bit. That is to say, each of the multiple compressed difference values is stored as a sign bit followed by a magnitude of the indicated number of bits.
  • the scan coding module obtains 8 difference values D1 to D8.
  • the following describes, with reference to FIG. 14, an example process in which the difference algorithm compression module compresses the 8 difference values D1 to D8.
  • the sign bits of the eight differences D1 to D8 can be extracted, and then the corresponding non-negative numbers can be determined according to the sign bits.
  • F1 represents the highest bit of D1, that is, the sign bit.
  • D1' represents the remaining binary number after the sign bit of D1 is removed.
  • d1' represents the non-negative number corresponding to the difference D1.
  • if D1 itself is a non-negative number (F1 is 0), then D1' is the non-negative number corresponding to D1, that is, d1' is determined to be D1';
  • if D1 itself is a negative number (F1 is 1), then ~D1' is the non-negative number corresponding to D1, that is, d1' is determined to be (~D1'), where ~ represents bitwise inversion.
  • when D1 itself is a negative number (F1 is 1), the absolute value of the negative number represented by D1 is ~D1'+1.
  • in two's complement, the range of decimal numbers representable by a 9-bit signed binary number is -256 to 255: for example, "011111111" represents the decimal number 255, and "100000000" represents the decimal number -256.
  • that is to say, when the sign bit is "1", the remaining bits after bitwise inversion, plus an offset value, give the absolute value of the corresponding negative number that needs to be represented.
  • the offset value is +1, so the inverted remaining bits plus 1 equal the absolute value of the corresponding negative number, i.e., |D| = ~D' + 1.
  • 8 non-negative numbers corresponding to the 8 differences can be obtained: d1', d2', d3', d4', d5', d6', d7', d8'.
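The sign-dependent mapping from a 9-bit signed difference D to its 8-bit non-negative number d' can be sketched as follows (assuming differences in [-256, 255], as in the example above):

```python
def to_nonneg(diff):
    """d' for a difference D: if the sign bit is 0, keep the low
    8 bits D'; if it is 1, bitwise-invert D', so that |D| = ~D' + 1."""
    if diff >= 0:
        return diff          # D' is the value itself
    return (~diff) & 0xFF    # invert the low 8 two's-complement bits
```

For instance, -1 maps to 0 and -256 maps to 255, matching the ~D'+1 rule for absolute values.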
  • d_max1 is the result of a bitwise OR operation on d1', d2', d3', and d4'; to determine how many bits are needed to represent the 4 difference values D1, D2, D3, and D4 of the first group, it is detected which bit is the highest set bit of d_max1, and this position is len1.
  • d_max2 is the result of a bitwise OR operation on d5', d6', d7', and d8'; to determine how many bits are needed to represent the 4 difference values D5, D6, D7, and D8 of the second group, it is detected which bit is the highest set bit of d_max2, and this position is len2.
  • each of the multiple difference values is a 9-bit difference value (including a sign bit of 1 bit). Therefore, the number of storage bits occupied by each difference is at most 8, so that len1 and len2 only need 3 bits.
  • a 3-bit binary number can represent [0,7], and in this example the number of bits of a compressed difference lies in [1,8]; therefore the stored value is the number of bits minus one. For example, if len1 is "010", the number of bits after the difference is compressed is 3; if len1 is "111", the number of bits after the difference is compressed is 8.
  • since each compressed difference value d1 to d8 also includes its own sign bit, the actual number of bits occupied by each compressed difference is in [2,9].
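The len determination for one group can be sketched as below (hedged: the minimum of 1 bit when all differences happen to be zero is an assumption, chosen so the result stays in the stated range [1, 8]):

```python
def storage_bits(nonneg_group):
    """Bitwise-OR the four non-negative numbers of a group and take
    the position of the highest set bit as len; the 3-bit field
    stores len - 1."""
    d_max = 0
    for v in nonneg_group:
        d_max |= v
    length = max(d_max.bit_length(), 1)  # len in [1, 8]
    return length, format(length - 1, "03b")
```

E.g. magnitudes [1, 2, 1, 3] OR together to 3, so len is 2 and the 3-bit field stores "001".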
  • the compressed data can be further obtained based on this.
  • the compressed data for a group in the data unit includes: an all-zero/non-all-zero indicator, a bit number indicator (if not all zeros), and the compressed differences (if not all zeros).
  • the compressed data may be as shown in FIG. 15, and there may be three situations.
  • for example, when ALL0_FLAG1 is 0 and ALL0_FLAG2 is 0 (both groups are all zeros), the compressed data (ENC_RESULT) consists of only ALL0_FLAG1 and ALL0_FLAG2.
  • the compression length of the compressed data can also be calculated; the compression length represents the total number of bits occupied by the compressed data, for example, the sum of the bit widths of the fields contained in the compressed data shown in FIG. 15.
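Putting the pieces together, one data unit could be compressed into a bit string whose length is the compression length. This end-to-end sketch uses adjacent-pixel differences and the flag/len/difference layout inferred above, all of which are assumptions about the exact hardware format rather than a definitive implementation:

```python
def compress_unit(unit, prev_pixel=0):
    """Compress one 8-pixel data unit: per group, a 1-bit all-zero
    flag; for non-all-zero groups also a 3-bit (len-1) field and,
    per difference, a sign bit plus len magnitude bits.
    Returns (bit_string, compression_length)."""
    bits = ""
    prev = prev_pixel
    for group in (unit[:4], unit[4:]):
        if all(p == 0 for p in group):
            bits += "0"                      # all-zero group: flag only
            prev = group[-1]
            continue
        diffs = []
        for p in group:                      # adjacent-pixel differences
            diffs.append(p - prev)
            prev = p
        nonneg = [(~d if d < 0 else d) & 0xFF for d in diffs]
        d_max = 0
        for v in nonneg:
            d_max |= v
        ln = max(d_max.bit_length(), 1)
        bits += "1" + format(ln - 1, "03b")  # non-all-zero flag + len-1
        for d, n in zip(diffs, nonneg):
            bits += ("1" if d < 0 else "0") + format(n, f"0{ln}b")
    return bits, len(bits)
```

Under these assumptions an all-zero unit compresses to just "00" (2 bits), where the uncompressed 8-bit pixels occupied 64 bits.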
  • the obtained compressed data can be cached in the compressed feature map cache module after length alignment, for the subsequent output process.
  • if the compression header generation module determines that a bypass operation is not needed, the compressed data is written to the external memory; otherwise, the original feature map is read again to replace the compressed data cached in the compressed feature map cache module, and the original feature map after replacement is written into the external memory.
  • the second data unit located after the first data unit is read immediately, and a similar compression operation is performed.
  • the first data unit and the second data unit may be two adjacent data units that are sequentially compressed in time by the compression path.
  • starting the compression process for the second data unit includes: judging whether the data of the second data unit is all zeros. That is to say, while the compressed first data unit is written into the external memory, the compression process for the second data unit is started with the step of determining whether its data is all zeros. In this way, pipeline compression processing of multiple data units can be realized, and the resource utilization rate can be improved.
  • the difference algorithm compression module compresses the differences; and while the difference algorithm compression module compresses the differences of the data in the first data unit, the scan coding module starts to determine whether the two groups in the second data unit are all zeros.
  • different modules may be performing compression processing for different data units. In this way, resource utilization can be further improved, and the efficiency of data unit compression by the compression path can be improved.
  • the first data unit and the second data unit herein may be two adjacent data units to be processed in a register (as shown in FIG. 12, the register stores data of the smallest access unit size of the temporary storage memory).
  • the data compression module in the embodiment of the present invention may include a scan coding module and a difference algorithm compression module. It is a multi-stage compression pipeline design that can reduce the combinational logic of each stage, so that the generated circuit can support a higher clock frequency and improve chip performance.
  • the scan encoding module and the difference algorithm compression module have the circuit design structures shown in FIG. 12 and FIG. 14, respectively, so that one compression path can compress 8 pixel data at a time.
  • the process of performing compression by each compression path may be as shown in FIG. 16.
  • the steps after reading the feature map data in FIG. 16 are performed by the difference compression module.
  • the feature map data is read according to the smallest access unit of the memory; that is, feature map data of the smallest-access-unit size is read at a time, and after the compression of that data is completed, the feature map data of the next smallest access unit is read, until all the feature map data indicated by the compression instruction has been read.
  • the scan coding module can read a data unit from the feature map data of the smallest-access-unit size and judge whether the two groups included in the data unit are all zeros; if there is a non-all-zero group, the original differences are calculated, where an original difference represents the difference between two adjacent pixels in the data unit, such as the above D1 to D8.
  • the difference algorithm compression module can compress the original difference values (such as the above D1 to D8) to obtain the compressed difference values (such as the above d1 to d8), and calculate the compression length.
  • a flag indicating whether the current data unit is the end of the current compression unit can also be output.
  • the next data unit can be read from the feature map data of the smallest access unit size until the compression process of all pixels of the feature map data of the smallest access unit size is completed.
  • the method for data compression and storage in the embodiment of the present invention fully takes into account the prevalence of zeros in the feature map and the characteristic that the values of adjacent pixels of the feature map data are close, and uses the difference method for compression, which makes the storage space occupied by the compressed data smaller; on the one hand this reduces the space occupation of the external memory, and on the other hand it also reduces the bandwidth resources used during reading and writing and saves power consumption.
  • the device may include: a receiving device 210, a dividing device 220, a compression device 230, and a storage device 240.
  • the receiving device 210 is configured to receive feature map data to be stored
  • the dividing device 220 is configured to divide the feature map data into multiple data units
  • the compression device 230 is configured to, for each data unit of the multiple data units, determine whether the data in the data unit is all zeros, and perform compression according to the result of the determination;
  • the storage device 240 is configured to store the compressed feature map data.
  • the compression device 230 compresses a data unit through the following process: divide the data unit into one or more groups; if the data in a first group of the multiple groups is all zeros, the compressed data is a 0; if the data of a second group in the multiple groups is not all zeros, then: determine multiple differences between the data in the second group, and compress according to the multiple differences.
  • the compression device 230 is configured to: determine the difference between the first data in the second group and the last data before the second group, and determine the difference between each of the other data in the second group and the first data.
  • the compression device 230 is configured to: determine a plurality of first non-negative numbers corresponding one-to-one to the plurality of difference values; determine the number of bits required for storage according to the position of the highest non-zero bit among the plurality of first non-negative numbers; and compress the multiple differences according to their sign bits and the number of bits, wherein the compressed length of each difference is the number of bits plus one.
  • the compression device 230 is configured to: determine the number of bits required for storage by performing a bitwise OR operation on a plurality of first non-negative numbers.
  • the compression device 230 is configured to: if the sign bit of a first difference value indicates a non-negative number, take the first non-negative number corresponding to the first difference value as the number obtained by removing the sign bit of the first difference value; and if the sign bit indicates a negative number, take the first non-negative number as the number obtained by removing the sign bit of the first difference value and inverting the remaining bits.
  • the data stored after compressing the second group includes: a non-all zero indicator, a bit number indicator, and multiple difference values after compression.
  • the bit number indicator represents the length of the compressed multiple differences after removing the sign bit.
  • the non-all zero indicator is 1.
  • the compression device 230 is further configured to: generate compression header information corresponding to the compressed data unit; wherein the storage device 240 is configured to: store the compressed data unit together with the corresponding compression header information in the external memory.
  • the compression device 230 is further configured to: determine whether a bypass operation needs to be performed according to the compression header information; if it is determined that the bypass operation needs to be performed, generate bypass compression header information corresponding to the bypass operation.
  • the storage device 240 is configured to store uncompressed feature map data and bypass compression header information in an external memory.
  • it further includes a reading device configured to: receive a compression instruction; send a feature map read command according to the compression instruction, so as to obtain feature map data corresponding to the read feature map command from the on-chip memory.
  • the read feature map command includes the width and height of the feature map data, and the base address of the on-chip memory.
  • the receiving device 210 is configured to receive feature map data consistent with the size of the minimum access unit.
  • the dividing device 220 is configured to divide the feature map data into multiple data units according to the minimum access unit of the memory, wherein all data in one data unit are located in the same minimum access unit.
  • if the storage space required for one row of the feature map data is greater than the minimum access unit, the data located in the same minimum access unit belongs to the same row of the feature map data; if the storage space required for one row of feature map data is less than the minimum access unit, the data belonging to the same row of feature map data is located in the same minimum access unit.
  • the feature map data to be stored is the output of the convolutional layer in the neural network.
  • the compression device 230 is configured to: while storing the compressed first data unit, start the compression process for the second data unit.
  • the first data unit and the second data unit are data units that are compressed sequentially in time.
  • the compression device 230 is configured to: start the compression process of the second data unit by determining whether the data of the second data unit is all zeros.
  • the device shown in FIG. 17 can be used to implement the data storage method shown in FIG. 8. In order to avoid repetition, it will not be repeated here.
  • the device shown in FIG. 17 may be any one of at least two compression paths. It is understandable that the device shown in FIG. 17 is only schematic, and it can also be implemented as the various modules shown in FIG. 7.
  • the system for data compression storage in the embodiment of the present invention can be implemented on a processor, for example, it can be a processor of various devices such as a computer, a server, a workstation, a mobile terminal, and a pan/tilt.
  • the original feature map may be received or obtained by the processor from other devices, or generated by the processor in the process of executing other operations or algorithms.
  • for example, the processor may generate the original feature map in the process of executing a convolutional neural network.
  • an embodiment of the present invention also provides a processor.
  • the processor may include an on-chip memory and the system as shown in FIG. 3.
  • the processor may include on-chip memory and the device as shown in FIG. 17.
  • the processor may include a central processing unit (CPU) or other forms of processing units with data processing capabilities and/or instruction execution capabilities, such as a field-programmable gate array (FPGA) or an Advanced RISC Machine (ARM), and the processor may include other components to perform various desired functions.
  • unless otherwise indicated, "feature map" refers to the data before compression by the system of the embodiment of the present invention; it can have two dimensions (width and height), or alternatively three dimensions (width, height, and channel).
  • the embodiment of the present invention also provides a computer storage medium on which a computer program is stored.
  • the computer program is executed by the processor, the steps of the data storage method shown above can be realized.
  • the computer storage medium is a computer-readable storage medium.
  • the computer or the processor executes the steps of the method shown in FIG. 4 or FIG. 8.
  • the computer or the processor is caused to execute the following steps: receiving the feature map data to be stored; dividing the feature map data into multiple data units; for each data unit of the plurality of data units, judging whether the data in the data unit is all zeros and compressing the data according to the judgment result; and storing the compressed feature map data.
  • the computer storage medium may include, for example, the memory card of a smart phone, the storage component of a tablet computer, the hard disk of a personal computer, a read-only memory (ROM), an erasable programmable read-only memory (EPROM), a portable compact disk read-only memory ( CD-ROM), USB memory, or any combination of the above storage media.
  • the computer-readable storage medium may be any combination of one or more computer-readable storage media.
  • an embodiment of the present invention also provides a computer program product, which contains instructions, which when executed by a computer, cause the computer to execute the steps of the data storage method shown in FIG. 4 or FIG. 8.
  • when the instructions are executed by the computer, the computer is caused to execute: receiving the feature map data to be stored; dividing the feature map data into a plurality of data units; for each data unit, judging whether the data in the data unit is all zeros and compressing according to the judgment result; and storing the compressed feature map data.
  • the method for data storage in the embodiment of the present invention fully takes into account the prevalence of zeros in the feature map and the characteristic that adjacent feature map data have close values, and uses the difference method for compression, which makes the storage space occupied by the compressed data smaller; on the one hand this reduces the space occupation of the external memory, and on the other hand it also reduces the bandwidth resources used during reading and writing and saves power consumption.
  • the above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof.
  • when implemented by software, they can be implemented in whole or in part in the form of a computer program product.
  • the computer program product includes one or more computer instructions.
  • the computer may be a general-purpose computer, a special-purpose computer, a computer network, or other programmable devices.
  • the computer instructions may be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another computer-readable storage medium.
  • the computer instructions may be transmitted from a website, computer, server, or data center.
  • the computer-readable storage medium may be any available medium that can be accessed by a computer or a data storage device such as a server or a data center integrated with one or more available media.
  • the usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, a magnetic tape), an optical medium (for example, a digital video disc (DVD)), or a semiconductor medium (for example, a solid state disk (SSD)), etc.
  • the disclosed system, device, and method can be implemented in other ways.
  • the device embodiments described above are merely illustrative. For example, the division of the units is only a logical function division, and there may be other divisions in actual implementation; for example, multiple units or components can be combined or integrated into another system, or some features can be ignored or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • the functional units in the various embodiments of the present application may be integrated into one processor, or each unit may exist alone physically, or two or more units may be integrated into one unit.


Abstract

The present invention relates to a data storage method and apparatus, a processor, and a computer storage medium. The method comprises: receiving feature map data to be stored (S110); dividing the feature map data into a plurality of data units (S120); for each data unit of the plurality of data units, determining whether the data in the data unit is all zeros, and performing compression according to a determination result (S130); and storing compressed feature map data (S140). It can be seen that when data storage is performed on feature map data by means of this method, the prevalence of zeros in a feature map, and the characteristic that the numerical values of two adjacent pixels of the feature map are very close or even equal, are fully taken into account, and the data is compressed by means of determining whether the data is all zeros, such that the amount of space occupied in an external memory can be reduced, and bandwidth resources during reading and writing can also be reduced, thereby reducing power consumption.
PCT/CN2020/092640 2020-05-27 2020-05-27 Procédé et appareil de stockage de données, processeur et support de stockage informatique WO2021237518A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2020/092640 WO2021237518A1 (fr) 2020-05-27 2020-05-27 Procédé et appareil de stockage de données, processeur et support de stockage informatique
PCT/CN2020/099495 WO2021237870A1 (fr) 2020-05-27 2020-06-30 Procédé de codage de données, procédé de décodage de données, procédé de traitement de données, codeur, décodeur, système, plateforme mobile, et support lisible par ordinateur

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/092640 WO2021237518A1 (fr) 2020-05-27 2020-05-27 Procédé et appareil de stockage de données, processeur et support de stockage informatique

Publications (1)

Publication Number Publication Date
WO2021237518A1 true WO2021237518A1 (fr) 2021-12-02

Family

ID=78745424

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/092640 WO2021237518A1 (fr) 2020-05-27 2020-05-27 Procédé et appareil de stockage de données, processeur et support de stockage informatique

Country Status (1)

Country Link
WO (1) WO2021237518A1 (fr)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101751440A (zh) * 2008-12-19 2010-06-23 AutoNavi Software Co., Ltd. Data compression/decompression method and device
US8924591B2 * 2010-06-29 2014-12-30 Huawei Technologies Co., Ltd. Method and device for data segmentation in data compression
CN104636273A (zh) * 2015-02-28 2015-05-20 University of Science and Technology of China Sparse matrix storage method on an SIMD many-core processor with multi-level cache
CN105872536A (zh) * 2016-04-25 2016-08-17 University of Electronic Science and Technology of China Image compression method based on dual coding modes
CN110191339A (zh) * 2019-05-22 2019-08-30 Shanghai Fullhan Microelectronics Co., Ltd. Bit rate estimation core unit, bit rate estimation device and bit rate estimation method



Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20937636

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20937636

Country of ref document: EP

Kind code of ref document: A1