CN106227506A - Multi-channel parallel compression/decompression system and method in a memory compression system - Google Patents

Multi-channel parallel compression/decompression system and method in a memory compression system

Info

Publication number
CN106227506A
CN106227506A (application CN201510616502.1A)
Authority
CN
China
Prior art keywords
compression
data
memory
parallel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201510616502.1A
Other languages
Chinese (zh)
Inventor
韩江
陈谋春
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fuzhou Rockchip Electronics Co Ltd
Original Assignee
Fuzhou Rockchip Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fuzhou Rockchip Electronics Co Ltd filed Critical Fuzhou Rockchip Electronics Co Ltd
Publication of CN106227506A publication Critical patent/CN106227506A/en
Pending legal-status Critical Current

Landscapes

  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The present invention provides a multi-channel parallel compression/decompression system and method in a memory compression system, comprising a plurality of parallel compressors/decompressors for concurrently compressing data to be written to memory and decompressing data read from memory. The multi-channel parallel compression/decompression system and method in the memory compression system of the present invention effectively increase the data throughput of the memory compression system and accelerate memory response efficiency.

Description

Multi-channel parallel compression/decompression system and method in a memory compression system
Technical field
The present invention relates to the technical field of memory compression and decompression, and in particular to a multi-channel parallel compression/decompression system and method in a memory compression system.
Background technology
As the software running on mobile devices such as smartphones and tablet computers grows ever more intelligent, mobile applications become increasingly complex, and the demand for memory capacity and speed rises accordingly. Although Double Data Rate SDRAM (DDR SDRAM) technology is improving steadily, its progress lags far behind that of the other modules in a mobile chip. In particular, the demands that mobile terminals place on display, graphics, image, and video processing greatly aggravate the imbalance between memory and processor. Moreover, the price of mainstream DDR falls far more slowly than that of processors, so DDR memory accounts for an ever larger share of the cost of a mobile device.
How to improve the processing speed of a memory compression system has therefore become a pressing topic.
Summary of the invention
In view of the above shortcomings of the prior art, it is an object of the present invention to provide a multi-channel parallel compression/decompression system and method in a memory compression system which, based on cache-line replacement, uses a multi-channel structure so that multiple cache lines are processed in parallel in the time domain, thereby greatly reducing the overall latency of memory access.
To achieve the above and other related objects, the present invention provides a multi-channel parallel compression system in a memory compression system, comprising a plurality of parallel compressors for concurrently compressing data to be written to memory.
In the above multi-channel parallel compression system, after the compressor that finishes first completes its compression, the compressed data of the plurality of parallel compressors are written to memory one after another in the order in which their compressions complete.
In the above multi-channel parallel compression system, a compressor performs its compression operation once it is in the working state and the data to be compressed have been read into it.
The present invention further provides a multi-channel parallel compression method in a memory compression system, in which data to be written to memory are compressed concurrently by a plurality of parallel compressors.
In the above multi-channel parallel compression method, after the compressor that finishes first completes its compression, the compressed data of the plurality of parallel compressors are written to memory one after another in the order in which their compressions complete.
In addition, the present invention provides a multi-channel parallel decompression system in a memory compression system, comprising a plurality of parallel decompressors for concurrently decompressing data read from memory once each group of data has been read.
In the above multi-channel parallel decompression system, when data are read from memory, they are read one after another according to the bus priority of the device corresponding to each cache line.
In the above multi-channel parallel decompression system, a decompressor performs its decompression operation once it is in the working state and the data to be decompressed have been read from DDR into it.
The present invention further provides a multi-channel parallel decompression method in a memory compression system, in which a plurality of parallel decompressors concurrently decompress data read from memory once each group of data has been read.
In the above multi-channel parallel decompression method, when data are read from memory, they are read one after another according to the bus priority of the device corresponding to each cache line.
In addition, the present invention provides a multi-channel parallel compression system in a memory compression system, comprising:
a cache for storing data;
a memory for storing compressed data; and
a plurality of parallel compressors/decompressors for concurrently compressing the data stored in the cache and storing the result in the memory, or for concurrently decompressing the compressed data read from the memory and storing the result in the cache.
The present invention further provides a multi-channel parallel compression system in a memory compression system, comprising:
a cache for storing data;
a memory for storing compressed data; and
a plurality of parallel compressors/decompressors connected to the cache through a multi-channel bus and to the memory through a single-channel bus, the plurality of parallel compressors/decompressors reading the data stored in the cache in parallel through the multi-channel bus, compressing the data, and storing the result in the memory through the single-channel bus, or reading compressed data from the memory through the single-channel bus, decompressing the data, and storing the result in the cache in parallel through the multi-channel bus.
As described above, the multi-channel parallel compression/decompression system and method in the memory compression system of the present invention have the following beneficial effects:
(1) the data throughput of the memory compression system is effectively increased;
(2) memory response efficiency is accelerated.
Brief description of the drawings
Fig. 1 is a structural schematic diagram of the multi-channel parallel compression/decompression system in the memory compression system of the present invention;
Fig. 2 is a schematic diagram of an embodiment of writing data to memory without a compressor;
Fig. 3 is a schematic diagram of an embodiment of writing data to memory with a single-channel compressor;
Fig. 4 is a schematic diagram of a preferred embodiment of writing data to memory with multi-channel compressors according to the present invention;
Fig. 5 is a schematic diagram of an embodiment of reading data from memory without a decompressor;
Fig. 6 is a schematic diagram of an embodiment of reading data from memory with a single-channel decompressor;
Fig. 7 is a schematic diagram of a preferred embodiment of reading data from memory with multi-channel decompressors according to the present invention.
Detailed description of the invention
The embodiments of the present invention are described below by way of specific examples, and those skilled in the art can readily understand other advantages and effects of the present invention from the content disclosed in this specification. The present invention may also be implemented or applied through other, different embodiments, and the details in this specification may likewise be modified or varied from different viewpoints and for different applications without departing from the spirit of the present invention.
It should be noted that the drawings provided with these embodiments illustrate the basic concept of the invention only schematically; they show only the components relevant to the invention rather than the component count, shape, and size of an actual implementation. In an actual implementation the form, number, and proportions of the components may vary arbitrarily, and the component layout may be considerably more complex.
Fig. 1 shows the structure of the multi-channel parallel compression/decompression system in a memory compression system. The memory compression system comprises at least one or more levels of cache (Cache) 120, one or more compression/decompression modules (e.g. modules 131, 132, 133), and a memory 140. The memory compression system may supply the required data for computation to the CPU 110 of a computer system (e.g. a notebook computer, tablet computer, or smartphone). The level-one, level-two, or level-three caches (collectively, Cache 120) may be directly connected to CPU 110 for the data that CPU 110 reads or stores. The data in Cache 120 can be converted to compressed data through one or more of the compression/decompression modules 131, 132, or 133 and stored in memory 140. The compressed data in memory 140 can likewise be decompressed by compression/decompression modules 131, 132, or 133 into uncompressed data and stored in Cache 120. In other words, what Cache 120 stores is uncompressed data, and what memory 140 stores is compressed data. In some embodiments, further compression/decompression modules (not shown in Fig. 1) store data obtained from other channels (such as a hard disk or a network) in memory 140 after compression, or send compressed data in memory 140 out through those other channels after decompression.
In some embodiments, Cache 120 is a level-three cache (L3 Cache), a cache designed to hold data that miss when the L2 cache is read. The operating principle of L3 is to use a faster storage device to retain a copy of the data read from a slower storage device (such as memory 140), so that when the data must be read or written again the operation can first complete on the fast device, improving the response speed of the system. With the introduction of L3, the memory compression technique of this embodiment can reduce the data footprint in memory 140 to roughly half of the original; in most scenarios the computer system keeps its performance unchanged, and in a few cases performance even improves.
In some embodiments there are multiple CPUs 110 that need to access memory 140 (in a mobile device these may be called Hosts or Cores), and these CPUs 110 may issue multiple address requests to memory 140 at the same time. If, in a memory compression system, all CPUs 110 access memory 140 through the L3 Cache 120 but only one group of compression/decompression modules 131 sits between Cache 120 and memory 140, then every operation associated with those address requests must be compressed or decompressed one by one by compression/decompression module 131. Since compression/decompression module 131 becomes a bottleneck, its latency adds to the cost of accessing memory 140, and the net result is a considerably increased access latency of memory 140.
In one embodiment, the memory compression system places multiple compression/decompression modules 131, 132, and 133 between Cache 120 and memory 140. A "multi-channel bus" is provided between Cache 120 and compression/decompression modules 131, 132, and 133, so that each of the modules can receive data from Cache 120, or transfer data to Cache 120, through this multi-channel bus simultaneously. At the same time, a "single-channel bus" is provided between compression/decompression modules 131, 132, and 133 and memory 140, so that at any one moment only one of modules 131, 132, or 133 can store data into memory 140, or read data from memory 140, over that single-channel bus. In some embodiments, each compression/decompression module 131, 132, or 133 can perform both compression and decompression operations.
The memory compression system of the present invention thus uses multiple channels between Cache 120 and memory 140 and includes multiple compression/decompression modules that can run in parallel, for concurrently compressing data to be written to memory 140 or decompressing compressed data read from memory 140. When several requests to store data from Cache 120 into memory 140 arrive at once, the memory compression system lets the multiple compression/decompression modules 131, 132, 133 compress the corresponding data simultaneously and writes the data to memory 140 in the order in which their compressions complete, rather than in the order in which the store requests were received. Moreover, a compression/decompression module performs its operation only when it is in the working state and its data are ready.
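The write path just described can be sketched in software. The following Python sketch is purely illustrative (the patent describes hardware modules): three compressor "channels" run as threads, a lock models the single-channel bus to memory, and blocks reach memory in whichever order their compressions finish. All names (`cache`, `memory`, `write_order`) are hypothetical.

```python
import threading
import zlib
from concurrent.futures import ThreadPoolExecutor, as_completed

# Uncompressed blocks waiting in the cache (illustrative data).
cache = {
    "write_1": b"A" * 4096,
    "write_2": b"B" * 1024,
    "write_3": b"C" * 2048,
}
memory = {}                      # models the DDR memory (compressed data)
memory_bus = threading.Lock()    # single-channel bus: one writer at a time
write_order = []                 # records the order blocks reached memory

def compress_and_write(name, data):
    compressed = zlib.compress(data)   # one compressor channel works independently
    with memory_bus:                   # serialize the store on the single bus
        memory[name] = compressed
        write_order.append(name)
    return name

# Multi-channel bus: all blocks are handed to compressors at once.
with ThreadPoolExecutor(max_workers=3) as pool:
    futures = [pool.submit(compress_and_write, n, d) for n, d in cache.items()]
    for f in as_completed(futures):    # writes follow compression-completion order
        f.result()

# Round trip: every block in memory decompresses back to the cached original.
for name, original in cache.items():
    assert zlib.decompress(memory[name]) == original
```

The key design point mirrored here is that `write_order` is determined by completion, not by request order: a fast-compressing block may reach memory before an earlier-requested slow one.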
The difference between the present invention and the prior art is illustrated below by specific embodiments.
As shown in Fig. 2, three groups of uncompressed data (write_1, write_2, and write_3) in Cache 120 need to be written to memory 140 in priority order. In other words, the memory compression system successively receives write requests issued by one or more CPUs 110, asking that data write_1, write_2, and write_3 in Cache 120 be written to memory 140. In Fig. 2, the time required to write write_1, write_2, and write_3 to memory 140 is expressed by the lengths of the three rectangles labeled write_1, write_2, and write_3.
If a mobile chip architecture has neither an L3 cache nor compression/decompression modules, concurrent write requests to memory 140 pass through a bus arbiter and access memory 140 in the priority order of the requesting initiators. After a single-channel compression/decompression module is added to such an architecture, the time to write memory 140 can drop considerably because the data are compressed, but the compression/decompression module introduces latency into the whole write path.
As shown in Fig. 3, assume three groups of uncompressed data write_1, write_2, and write_3 must be compressed and written to memory 140 one after another in priority order. The first three rows of Fig. 3 each show the time required to compress one of these groups with the compression/decompression module and then write it to memory 140. With a single-channel compression/decompression module, the one module must compress the data groups one by one in priority order. The fourth row shows the three groups being compressed and written to memory 140 in sequence: the memory compression system first compresses the write_1 data with its compression/decompression module and writes the compressed write_1 data to memory 140; it then compresses the write_2 data and writes the compressed write_2 data to memory 140; and finally it compresses the write_3 data and writes the compressed write_3 data to memory 140. The total time required for all these operations is T1. As Fig. 3 shows, T1 is the sum of the times needed to compress and write the three groups of data individually. Because a single-channel compression/decompression module is used, the compressions cannot proceed in parallel, so writing write_1, write_2, and write_3 incurs a long latency.
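The latency relationship between the single-channel case (T1) and the multi-channel case (T2 below) can be checked with a toy timing model. The compression and write times here are invented for illustration and are not taken from the patent; the write times are assumed equal to keep the model simple.

```python
# Toy timing model of Fig. 3 vs. Fig. 4 (illustrative numbers only).
# c[i] = compression time of block i, w = per-block write time over the
# single memory bus (assumed equal for all blocks).
c = [5, 2, 4]        # compression times for write_1, write_2, write_3
w = 1                # write time after compression

# Single-channel module: compress and write strictly one block after another.
T1 = sum(ci + w for ci in c)

# Multi-channel modules: all blocks compress in parallel; writes are then
# serialized on the single bus in compression-completion order.
bus_free = 0
for t in sorted(c):              # compression completion times, earliest first
    start = max(t, bus_free)     # wait for the bus if an earlier write is active
    bus_free = start + w
T2 = bus_free

assert T2 < T1                   # parallel compression shortens total latency
print(T1, T2)                    # prints: 14 6
```

With these numbers the serialized path needs 14 time units while the parallel path needs 6, matching the qualitative claim that T2 is much smaller than T1.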
In one embodiment, a mobile chip architecture contains the memory compression system with multi-channel compression/decompression modules described with reference to Fig. 1. When the multi-channel compression/decompression modules of this system compress in parallel, CPU 110 first stores the data to be compressed in Cache 120, and the memory compression system then has Cache 120 send the data concurrently over the "multi-channel bus" to the three parallel compression modules (131, 132, and 133). The three parallel compression modules 131, 132, and 133 compress the data simultaneously. After the module that finishes first completes its compression, the compressed data of the three parallel compressors are written to memory 140 one after another over the "single-channel bus", in the order in which their compressions complete.
As shown in Fig. 4, a preferred embodiment of the memory compression system of the present invention includes three compression modules Compressor1, Compressor2, and Compressor3, which can concurrently compress the data to be written to memory 140. Assume three groups of uncompressed data write_1, write_2, and write_3 must be compressed and written to memory 140 in priority order. The three rows of Fig. 4 each show the time required to compress one group with Compressor1, Compressor2, or Compressor3 and then write it to memory 140. Specifically, the three compression modules Compressor1, Compressor2, and Compressor3 compress their data at the same time. Compressor2 completes its compression first, followed by Compressor3 and then Compressor1. Therefore, when Compressor2 finishes first, the memory compression system, following the order of compression completion, first writes the compressed write_2 data to memory 140. Next, Compressor3 completes its compression and the system writes the compressed write_3 data to memory 140. Finally, after Compressor1 finishes, the compressed write_1 data are written to memory 140. As the figure shows, the time T2 required to compress and store the three groups of data with the multi-channel parallel compression of the present invention is much less than T1, greatly reducing the overall latency of the memory store.
That is, when the memory compression system performs the operation of writing data to memory 140, the order of the writes depends on the order in which the compression modules finish compressing. After each module finishes, the memory compression system writes the compressed data of the modules to memory 140 one after another in the order of completion. It should be noted that if a second compression module finishes while the first module's compressed data are still being written, then, because the path between the compression modules and memory 140 is single-channel, the memory compression system starts writing the second module's compressed data only after the first module's data have finished writing.
Correspondingly, the multi-channel parallel compression method in the memory compression system of the present invention comprises the following:
data to be written to memory are compressed concurrently by a plurality of parallel compressors; after the compressor that finishes first completes its compression, the compressed data of the parallel compressors are written to memory one after another in the order in which their compressions complete.
The multi-channel parallel decompression system in the memory compression system of the present invention includes multiple parallel decompression modules which, after compressed data have been read from memory 140, concurrently decompress the data read from memory 140.
When the memory compression system reads the data in memory 140, the reads proceed one after another according to the bus priority of the CPU 110 corresponding to Cache 120. Specifically, the total capacity of the L3 Cache 120 may be configured as 2-32 MBytes, divided into multiple cache lines of 1 KByte each. When multiple cache lines must be replaced, the memory compression system accesses memory 140 in turn according to the bus priority of the CPU 110 corresponding to each cache line. A decompression module performs its decompression operation only when it is in the working state and its data to be decompressed have been read from memory 140 into it. The multiple parallel decompression modules read compressed data in turn according to a given priority order, and each working decompression module can perform its decompression concurrently once its read of the compressed data completes.
As shown in Fig. 1, when the memory compression system performs multi-channel parallel decompression, the three parallel decompression modules (131, 132, and 133) read compressed data from memory 140 over the single-channel bus in a given priority order; once its read of the compressed data completes, each of the three parallel decompression modules 131, 132, and 133 carries out its own decompression step; and after finishing its own decompression, each module 131, 132, and 133 writes the decompressed data into Cache 120 over the multi-channel bus for CPU 110 to read.
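A minimal software sketch of this read path, under the same illustrative assumptions as before: compressed blocks are fetched one at a time in an assumed priority order (modeling the single-channel bus), while decompression of already-fetched blocks proceeds concurrently in worker threads, and the results land in a cache dictionary (modeling the multi-channel write-back). The data and the priority order are invented for illustration.

```python
import zlib
from concurrent.futures import ThreadPoolExecutor

# Illustrative data: memory holds compressed blocks; the read priority is
# assumed to come from the bus priority of the requesting CPUs.
originals = {"read_1": b"x" * 3000, "read_2": b"y" * 500, "read_3": b"z" * 1500}
memory = {k: zlib.compress(v) for k, v in originals.items()}
priority = ["read_2", "read_1", "read_3"]   # assumed arbitration order

cache = {}
with ThreadPoolExecutor(max_workers=3) as pool:
    futures = {}
    for name in priority:                   # single bus: reads issue one by one
        blob = memory[name]                 # the read for this block completes...
        futures[name] = pool.submit(zlib.decompress, blob)  # ...then it decompresses in parallel
    for name, fut in futures.items():       # multi-channel write-back to the cache
        cache[name] = fut.result()

assert cache == originals                   # every block round-trips intact
```

Note that while `read_1` is still decompressing, the loop is already fetching `read_3` from memory, which is exactly the overlap of read and decompression that Fig. 7 depicts.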
The difference between the present invention and the prior art is again illustrated below by specific embodiments.
As shown in Fig. 5, three groups of compressed data (read_1, read_2, and read_3) in memory 140 must be decompressed and loaded into Cache 120 in priority order. In other words, after successively receiving read requests from one or more CPUs 110, the memory compression system finds that the data the CPUs 110 need are not in Cache 120. The memory compression system therefore reads data read_1, read_2, and read_3 from memory 140 into Cache 120. In Fig. 5, the time required to read read_1, read_2, and read_3 from memory 140 is expressed by the lengths of the three rectangles labeled read_1, read_2, and read_3.
If a mobile chip architecture has neither an L3 cache nor compression/decompression modules, concurrent read requests to memory 140 pass through a bus arbiter and access memory 140 in the priority order of the requesting initiators. After a single-channel compression/decompression module is added, the read time can drop considerably because the data in memory 140 are compressed, but the compression/decompression module introduces latency into the whole read path.
As shown in Fig. 6, assume three groups of compressed data read_1, read_2, and read_3 must be decompressed and written to Cache 120 one after another in priority order. The first three rows of Fig. 6 each show the time required to read one of these groups from memory 140, decompress it with the compression/decompression module, and write it to Cache 120. With a single-channel compression/decompression module, the fourth row of Fig. 6 shows the three groups performing read-then-decompress operations into Cache 120 in sequence: the memory compression system first reads read_1 from memory 140 and then decompresses the read_1 data with the compression/decompression module; it then reads read_2 from memory 140 and decompresses the read_2 data; and finally it reads read_3 from memory 140 and decompresses the read_3 data. The total time required for all these operations is T3. As Fig. 6 shows, T3 is the sum of the times needed to read and decompress the three groups of data individually. Because a single-channel compression/decompression module is used, the decompressions cannot proceed in parallel, so reading read_1, read_2, and read_3 incurs a long latency.
As shown in Fig. 7, a preferred embodiment of the memory compression system of the present invention includes three decompression modules Decompressor1, Decompressor2, and Decompressor3, which can concurrently decompress the data read from memory 140. That is, the three segments of compressed data read_1, read_2, and read_3 are read over the single-channel bus in a given priority order, and as soon as each segment has been read the corresponding decompression module begins its decompression immediately. The three rows of Fig. 7 each show the memory compression system reading one of the groups from memory 140 and then decompressing it with Decompressor1, Decompressor2, or Decompressor3. After read_1 has been read, Decompressor1 decompresses it and the decompressed data are stored into Cache 120; after read_2 has been read, Decompressor2 decompresses it; and after read_3 has been read, Decompressor3 decompresses it. The three decompressors Decompressor1, Decompressor2, and Decompressor3 work concurrently and are connected to Cache 120 through the multi-channel bus, without interfering with one another. As Fig. 7 shows, the time T4 required to decompress the three groups of data with the multi-channel parallel decompression of the present invention is much less than T3, greatly reducing the overall latency of memory access.
Correspondingly, the multi-channel parallel decompression method in the memory compression system of the present invention comprises the following:
a plurality of parallel decompressors concurrently decompress the read data once each group of data has been read.
When the data in memory are read, the reads proceed one after another according to the bus priority of the device corresponding to each cache line. Specifically, the total capacity of L3 may be configured as 2-32 MBytes and divided into multiple cache lines of 1 KByte each. When multiple cache lines must be replaced, the system accesses DDR in turn according to the bus priority of the device corresponding to each cache line.
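The arbitration rule just described — replaced cache lines access memory in the bus-priority order of their owning devices — can be sketched as a simple sort over the pending lines. The device names and priority values below are assumed for illustration; the 1 KByte line size follows the text.

```python
# Sketch of the arbitration step when several cache lines must be replaced
# at once. Devices and priority values are hypothetical examples.
LINE_SIZE = 1024                      # 1 KByte cache lines, as in the text

pending = [                           # lines awaiting writeback/refill
    {"line": 0x10, "device": "gpu"},
    {"line": 0x2A, "device": "cpu0"},
    {"line": 0x31, "device": "video"},
]
bus_priority = {"cpu0": 0, "gpu": 1, "video": 2}   # lower value = higher priority

# Arbiter: drain the queue highest-priority device first; memory (DDR) is
# then accessed in exactly this order.
order = sorted(pending, key=lambda e: bus_priority[e["device"]])
assert [e["device"] for e in order] == ["cpu0", "gpu", "video"]
```

In hardware this ordering would be produced by the bus arbiter rather than a sort, but the observable effect — memory accesses sequenced by device priority — is the same.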
In the present invention, the compression/decompression rate varies in the same direction as the frequency of memory 140, and the compression/decompression time is generally greater than the time to read or write memory 140. Therefore, by adopting the multi-channel parallel compression/decompression system and method in the memory compression system of the present invention, the data throughput of the memory compression system is effectively increased and the response efficiency of the DDR memory is accelerated.
The compression and decompression algorithms used by the compressors and decompressors involved in the present invention are known algorithms. For example, the compression algorithm used may be the algorithm described in "Parallel Compression with Cooperative Dictionary Construction".
To those skilled in the art, realizing data compression and decompression with an existing compression/decompression algorithm is mature technology, and it is therefore not described further here.
It should be noted that all compressors and decompressors involved in the present invention can work concurrently, so that in accesses to memory the compression and decompression of data can be performed simultaneously.
In summary, the multi-channel parallel compression/decompression system and method in the memory compression system of the present invention effectively increase the data throughput of the memory compression system and accelerate memory response efficiency. The present invention thus effectively overcomes various shortcomings of the prior art and has high industrial utility.
The above embodiments merely illustrate the principle of the present invention and its effects, and are not intended to limit the present invention. Anyone familiar with this art may modify or vary the above embodiments without departing from the spirit and scope of the present invention. Accordingly, all equivalent modifications or variations completed by persons of ordinary skill in the art without departing from the spirit and technical ideas disclosed by the present invention shall be covered by the claims of the present invention.

Claims (12)

1. A multi-channel parallel compression system in a memory compression system, characterized in that it comprises multiple parallel compressors for compressing, in parallel, the data to be written to the memory.
2. The multi-channel parallel compression system in a memory compression system according to claim 1, characterized in that: after the compressor that first completes compression has finished, the compressed data in the multiple parallel compressors are written to the memory in the order in which their compression completes.
3. The multi-channel parallel compression system in a memory compression system according to claim 1, characterized in that: each compressor performs the compression operation after it is in the working state and the data to be compressed have been read into the compressor.
4. A multi-channel parallel compression method in a memory compression system, characterized in that the data to be written to the memory are compressed in parallel by multiple parallel compressors.
5. The multi-channel parallel compression method in a memory compression system according to claim 4, characterized in that: after the compressor that first completes compression has finished, the compressed data in the multiple parallel compressors are written to the memory in the order in which their compression completes.
6. A multi-channel parallel decompression system in a memory compression system, characterized in that it comprises multiple parallel decompressors for decompressing, in parallel, the data read from the memory after each group of data has been read from the memory.
7. The multi-channel parallel decompression system in a memory compression system according to claim 6, characterized in that: when data are read from the memory, the reads are performed in turn according to the bus priority of the device corresponding to each cache line.
8. The multi-channel parallel decompression system in a memory compression system according to claim 6, characterized in that: each decompressor performs the decompression operation after it is in the working state and the data to be decompressed have been read from the DDR into the decompressor.
9. A multi-channel parallel decompression method in a memory compression system, characterized in that, after each group of data has been read from the memory, the data read from the memory are decompressed in parallel by multiple parallel decompressors.
10. The multi-channel parallel decompression method in a memory compression system according to claim 9, characterized in that: when data are read from the memory, the reads are performed in turn according to the bus priority of the device corresponding to each cache line.
11. A multi-channel parallel compression system in a memory compression system, characterized in that it comprises:
a cache for storing data;
a memory for storing compressed data; and
multiple parallel compressor/decompressors for compressing, in parallel, the data stored in the cache before storing them in the memory, or for decompressing, in parallel, the compressed data read from the memory before storing them in the cache.
12. A multi-channel parallel compression system in a memory compression system, characterized in that it comprises:
a cache for storing data;
a memory for storing compressed data; and
multiple parallel compressor/decompressors connected to the cache by a multi-channel bus and to the memory by a single-channel bus, the multiple parallel compressor/decompressors being used to read the data stored in the cache in parallel over the multi-channel bus and, after compressing the read data, store them in the memory over the single-channel bus, or to read compressed data from the memory over the single-channel bus and, after decompressing the compressed data read from the memory, store them in the cache in parallel over the multi-channel bus.
CN201510616502.1A 2015-06-02 2015-09-24 A multi-channel parallel compression/decompression system and method in a memory compression system Pending CN106227506A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201510296194 2015-06-02
CN2015102961949 2015-06-02

Publications (1)

Publication Number Publication Date
CN106227506A true CN106227506A (en) 2016-12-14

Family

ID=57528729

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510616502.1A Pending CN106227506A (en) 2015-06-02 2015-09-24 A multi-channel parallel compression/decompression system and method in a memory compression system

Country Status (1)

Country Link
CN (1) CN106227506A (en)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6523102B1 (en) * 2000-04-14 2003-02-18 Interactive Silicon, Inc. Parallel compression/decompression system and method for implementation of in-memory compressed cache improving storage density and access speed for industry standard memory subsystems and in-line memory modules
US6879266B1 (en) * 1997-08-08 2005-04-12 Quickshift, Inc. Memory module including scalable embedded parallel data compression and decompression engines
CN102122959A (en) * 2011-03-29 2011-07-13 西安交通大学 Data compression device for improving main memory reliability of computer, and method thereof


Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107124615A (en) * 2017-05-15 2017-09-01 郑州云海信息技术有限公司 Method and device for WebP lossy compression
CN108415668A (en) * 2018-02-06 2018-08-17 珠海市杰理科技股份有限公司 Chip activation method, device, system, computer equipment and storage medium
CN109445719A (en) * 2018-11-16 2019-03-08 郑州云海信息技术有限公司 Data storage method and device
CN109445719B (en) * 2018-11-16 2022-04-22 郑州云海信息技术有限公司 Data storage method and device
CN112860323A (en) * 2019-11-27 2021-05-28 珠海格力电器股份有限公司 Method and device for loading file into memory
CN113885949A (en) * 2021-10-22 2022-01-04 瑞芯微电子股份有限公司 Quick startup method and system

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20161214