WO2007018328A1 - Method for platform-free file compression and file security in cellular phone - Google Patents

Method for platform-free file compression and file security in cellular phone

Info

Publication number
WO2007018328A1
Authority
WO
WIPO (PCT)
Prior art keywords
file
allocation table
data
block
index
Prior art date
Application number
PCT/KR2005/003047
Other languages
French (fr)
Inventor
Yangmin Seo
Eunsang Cho
Original Assignee
Gq Soft Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Gq Soft Co., Ltd. filed Critical Gq Soft Co., Ltd.
Publication of WO2007018328A1 publication Critical patent/WO2007018328A1/en

Classifications

    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03MCODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M7/00Conversion of a code where information is represented by a given sequence or number of digits to a code where the same, similar or subset of information is represented by a different sequence or number of digits
    • H03M7/30Compression; Expansion; Suppression of unnecessary data, e.g. redundancy reduction
    • H03M7/3084Compression; Expansion; Suppression of unnecessary data, e.g. redundancy reduction using adaptive string matching, e.g. the Lempel-Ziv method
    • H03M7/3088Compression; Expansion; Suppression of unnecessary data, e.g. redundancy reduction using adaptive string matching, e.g. the Lempel-Ziv method employing the use of a dictionary, e.g. LZ78
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs

Abstract

A method for compressing and securing files in a cellular phone, for use in applications such as games, ring-tones, wallpapers, etc. The method is highly versatile, such that it can be used on any cellular phone platform (Rex, WinCE, Symbian, and so on). The method employs an advanced adaptive data compression algorithm with string-matching and linked-list techniques so that it is completely adaptive, and a dictionary is constructed on the fly. No prior knowledge of the statistics of the characters in the data is needed. During decompression, the dictionary is reconstructed at the same time as the decoding occurs. The compression converges very quickly and the compression ratio approaches the theoretical limit. The processor is also insensitive to error propagation.

Description

Description
METHOD FOR PLATFORM-FREE FILE COMPRESSION AND FILE SECURITY IN CELLULAR PHONE
Technical Field
[1] The present invention relates to a method for compressing and securing files in a cellular phone. Cellular phones run on many incompatible platforms, which differ from carrier to carrier and from vendor to vendor. The present invention therefore relates to these platform specifications and also to existing compression techniques.
Background Art
[2] Data compression algorithms form a discipline concerned with designing the shortest representation of an original data string. Reversible data compression implies that a given data string, once compressed and then decompressed, must be identical to the original data string; not a single bit of information is allowed to be distorted or lost. This reversibility is identical to the concept of noiseless data compression in the context of information theory. Hence, noiseless data compression is quite different from speech compression and video compression, where distortions are allowed. For typical applications in which data is stored or transmitted, noiseless data compression is the only kind of compression that can be used.
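As a minimal illustration of this reversibility requirement (not taken from the disclosure), a noiseless codec must satisfy decompress(compress(x)) == x for every input; the Python sketch below uses the standard zlib module purely as a stand-in lossless codec.

```python
import zlib

def roundtrip_is_lossless(data: bytes) -> bool:
    """Check the noiseless-compression property: decode(encode(x)) == x."""
    compressed = zlib.compress(data)      # any reversible codec would do here
    return zlib.decompress(compressed) == data

# Every byte, including binary values, must survive the round trip unchanged.
assert roundtrip_is_lossless(b"ring-tone payload \x00\x01\x02" * 100)
```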
[3] Data file compression is typically accomplished by dividing a data file into equal length data segments called data packets. Each data packet is then compressed, using a pre-determined data compression ratio. The compressed data packets are then stored and/or transferred as a new, smaller data file.
[4] Data compression techniques are used extensively in the data communications field in order to communicate data at bit rates that can be supported by communication channels having dynamically changing but limited bandwidths.
[5] Discrete Cosine Transform (DCT) Quantisation is a widely used encoding technique for video data. It is used in image compression to reduce the length of the data words required to represent input image data prior to transmission or storage of that data. In the DCT quantisation process the image is segmented into regularly sized blocks of pixel values and typically each block comprises 8 horizontal pixels by 8 vertical pixels. In conventional data formats video data typically has three components that correspond to either the red, green and blue (RGB) components of a colour image or to a luminance component Y along with two colour difference components Cb and Cr.
[6] Image data compression systems typically use a series of trial compressions to determine the most appropriate quantisation divisor to achieve a predetermined output bit rate. Trial quantisations are carried out at, say, twenty possible quantisation divisors spread across the full available range of possible quantisation divisors. The two adjacent trial quantisation divisors that give projected output bit rates just above and just below the target bit rate are identified and a refined search is carried out between these two values. Typically the quantisation divisor selected for performing the image compression will be the one that gives the least harsh quantisation yet allows the target bit rate to be achieved.
Disclosure of Invention
Technical Solution
[7] This invention provides a data compression apparatus comprising:
[8] a source detection arrangement for detecting whether or not the input data is source data that has not undergone a previous compression/decompression cycle;
[9] a data quantity generator, responsive to the source detection arrangement, for setting a desired data output quantity for the compressed data, the desired data quantity having a first value for source input data and a second, higher, value for non-source input data;
[10] a target allocator for allocating a target data quantity to respective subsets of the input data in dependence upon the desired output data quantity, the target data quantities together providing a desired output data quantity; and
[11] a data compression arrangement for compressing each subset of the input data in accordance with its respective target data quantity.
Brief Description of the Drawings
[12] FIG. 1 depicts a block diagram of a parallel data compression system which can be utilized with the present invention.
[13] FIG. 2 depicts a generalized flow chart illustrating an improved data compression technique which can be utilized with the present invention.
[14] FIG. 3 depicts a block diagram of a parallel data decompression technique which can be utilized with the present invention.
[15] FIG. 4 depicts a general parallel data decompression flow chart.
[16] FIG. 5 depicts a multi-file operation block diagram.
Mode for the Invention
[17] Reference will now be made in detail to the preferred embodiment of the invention, an example of which is illustrated in the accompanying drawings. While the invention will be described in conjunction with the preferred embodiment, it will be understood that it is not intended to limit the invention to that embodiment. On the contrary, it is intended to cover alternatives, modifications and equivalents as may be included within the spirit and scope of the invention as defined by the appended claims.
[18] Referring now to FIG. 1, a parallel compression system 10 which could be utilized with the present invention is depicted as a block diagram. In FIG. 1, input symbols 12 are input into N-input symbol registers 14. Typically, there are N symbols being input into register 14 at the same time. Common examples are 2 or 4 bytes of data. The input symbol register 14 feeds the input symbols to several places. One input is to the dictionary address generator 16 that generates all the addresses to N dictionary tables 20. Dictionary address generator 16 receives other inputs, as will be discussed.
[19] An address is generated for each of these tables 20. The dictionary tables are indexed by levels 1, 2 up to level number 0, modulo N. So there are N of these tables and each one has its own address register 18. The dictionary address generator 16 generates all the N addresses in parallel and then reads back from all the different tables in each of the dictionary tables 20.
[20] Each of these dictionary tables 20 includes five subtables. The sub-tables include a secondary parent index table, a secondary child index table, a link information table, a primary parent index table and a primary child index table. The system first looks into the primary tables. If the system cannot find what is wanted, then it goes to the secondary tables based on a flag bit stored in the primary child table. The system 10 of FIG. 1 includes the N-way comparator 24, which takes data from all of the parent tables.
[21] Initially, the system 10 generates addresses for the primary parent and child index tables. If the parent index tables following the table indexed by the current level register 26 find a match with the last index register 32, then the N-way comparator 24 will detect that match.
[22] The system 10 reads all the parent index tables comparing with the last index register 32 and with child indices from child table 20, as will be described in FIG. 2.
[23] In FIG. 1, a finite state machine 30 controls the other components. State machine 30 is control circuitry that generates all the control signals to read and write the needed quantities at the proper times. The output register 42 outputs the index of the word the system is looking for in a dictionary 18. The output indices can be represented by a fixed number of bits if a variable-to-fixed mapping is desired. Alternatively, a variable number of bits can be used that depends on the value of the index counter 34. The N-way comparator 24 finds the best parsing of the input string and then outputs a sequence of indices to the output formatter 40. The formatter 40 packs these variable-length indices into byte boundaries, because the system typically outputs 16 or 32 bits (N=2 or 4).
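The formatter's packing of variable-length indices into fixed-width output words can be pictured with a small bit-buffer sketch; the index widths and the 16-bit word size below are illustrative assumptions, and pack_indices is an invented helper, not part of the disclosure.

```python
def pack_indices(indices, widths, word_bits=32):
    """Pack variable-length indices (MSB-first) into fixed-size output words."""
    buffer, filled, words = 0, 0, []
    for index, width in zip(indices, widths):
        buffer = (buffer << width) | (index & ((1 << width) - 1))
        filled += width
        while filled >= word_bits:
            filled -= word_bits
            words.append((buffer >> filled) & ((1 << word_bits) - 1))
    if filled:  # flush the final partial word, padded with zero bits
        words.append((buffer << (word_bits - filled)) & ((1 << word_bits) - 1))
    return words

# Three indices of 9, 9 and 10 bits packed into 16-bit output words.
print(pack_indices([300, 7, 1023], [9, 9, 10], word_bits=16))
```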
[24] The output register 42 can be of any desired size, preferably the same size as the N-input symbol registers 14. If the system bus width is 32 bits, it would be desirable to get 32 bits out from the output register 42.
[25] The index counter 34 sequentially assigns indices to new words created in the dictionary.
[26] The starting value for the index counter 34 is 255 plus the number of special index values used to send special messages to the decompressor. For example, one special index would indicate a dictionary reset has occurred; another special index indicates the end of compressing an input block without resetting the dictionary.
[27] The data structure of the dictionary can be viewed as a multi-tree structure. For example, words consisting of a single symbol will be in level 1, Modulo N table 20-1. The second symbol of words consisting of two symbols will be in level 2, Modulo N table 20-2. Similarly, the sixth symbol of words that have six characters, for example, would be in the level 2, Modulo N table (if N=4). The level is the number of characters in a dictionary word.
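As a small sketch of the level-to-table mapping just described (the helper name is invented), the k-th symbol of a dictionary word is stored in table k modulo N:

```python
def table_for_symbol(position: int, n: int) -> int:
    """1-based symbol position k in a word maps to dictionary table k mod N."""
    return position % n

# With N = 4: the 1st symbol is in table 1, the 4th in table 0 (level 0 mod N),
# and the 6th symbol of a six-character word is in table 2, as in the example.
assert [table_for_symbol(k, 4) for k in (1, 4, 6)] == [1, 0, 2]
```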
[28] The system can process all N input symbol registers 14 concurrently. By splitting the dictionary 20 into N separate dictionaries, the system can look into all N dictionaries at the same time. At the same time, the state machine 30 is implementing a data structure that sends these words into the different levels.
[29] In one embodiment, the present invention creates N addresses at the same time, so the address is not dependent on the content of the dictionary 20. It is only dependent on the previous address and input symbol.
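The text only states that each primary address depends on the previous address and the current input symbol; one hypothetical way to realise that (the mixing constant and table size are assumptions) is a simple hash:

```python
TABLE_SIZE = 1 << 12  # assumed per-table capacity

def primary_address(prev_address: int, symbol: int) -> int:
    """Derive a dictionary address from the previous address and the new symbol
    only, so all N addresses can be generated before any table is read."""
    return ((prev_address * 33) ^ symbol) % TABLE_SIZE
```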
[30] The system 10 of FIG. 1 also includes a next secondary address 36 (numbered from 1, 2, . . . , N). Registers 36 are sources of addresses into the secondary parent index table, the secondary child index table and the link information table. These addresses are input to the dictionary address generator 16. If the system has to go to the secondary address tables, it depends on the next secondary address registers 36 to find these addresses.
[31] Address generator 16 generates addresses to the primary tables as well as the secondary tables. The sources of the primary addresses are the input registers 14 and the address registers 18. The addresses to the secondary tables are generated from the next address counters 36, the primary parent tables and the link info tables.
[32] The system 10 employs three flags in the dictionary tables to direct the searching and matching process. A flag set in the primary parent table indicates an available primary location. A flag in the primary child table indicates a previous collision at that location with the address of the list of colliding items given by the corresponding content of the primary parent table. A flag set in the link info table indicates the end of the list of colliding items.
[33] In FIG. 1, system 10 also includes a current level register 26 which indicates the depth of the current dictionary word. This register is used by the address generator 16 to compute the table addresses. For example, if the system is at level number 3, then at the next match it will try to look at level number 4, modulo N. So that would be the next level and then the address generator 16 will properly assign the addresses.
[34] FIG. 2 depicts a flow chart for a general number N for the compression process where N is the number of input symbols.
[35] In FIG. 2, the system starts with an initialization state 100 and then reads N symbols in state 102. All the N symbols are entered into the address generator 16 of FIG. 1 which computes all the addresses into the different tables in state 104. All the addresses are loaded into the correct primary address port of the corresponding tables. The system reads all the dictionary locations at all of the addresses that were provided at state 106. At this point, the system has enough information to determine where the matches are. State 106 reads all the dictionary locations including the secondary address information.
[36] Next, at state 108, a match code is generated. The match code is the output of the N-way comparator of FIGS. 1 and 2. The number of bits is equal to the number of comparators in that triangle, and the system produces a code. The comparator provides a code indicating where the matches are. If the code is all zeros, it means that no match was found, which means that all of the symbols correspond to first level matches.
[37] In this case, the system processes all input symbols accordingly. If the match code is equal to 00 . . . 01, then the system found only one match and the rest were all first level symbols.
[38] If the match code is 111 . . . 1, then all input symbols extended from the current level are matched. In this case, no output needs to be generated, and the next N input symbols are read. The system updates the values of the current level 26 and the last index register 32. The address corresponding to the last match is also stored to enable the system to compute the primary addresses for the next N-input symbols.
[39] For a finite memory size, the number of words in the dictionary, henceforth referred to as the dictionary size, has to be fixed. So with every mismatch encountered, the system 10 adds a new word to the dictionary, assigning the next available index to the new word. The index counter 34 provides the next index value for each word added to the dictionary in a sequential manner. When the value of the index counter 34 reaches the preassigned dictionary size, a "dictionary full" flag is generated. When a "dictionary full" flag is set, the dictionary growing process is inhibited until a "dictionary reset" signal is generated, upon which the index counter gets reset to its original value.
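A minimal software model of the index counter and the "dictionary full" flag described here (the starting value is a parameter rather than the 255-plus-specials figure above) might look like this:

```python
class IndexCounter:
    """Sequentially assigns indices to new dictionary words; growth stops when
    the preassigned dictionary size is reached, until a reset is requested."""

    def __init__(self, first_index: int, dictionary_size: int):
        self.first_index = first_index
        self.dictionary_size = dictionary_size
        self.value = first_index
        self.full = False

    def next_index(self):
        if self.full:
            return None          # growing is inhibited while the flag is set
        index = self.value
        self.value += 1
        if self.value >= self.dictionary_size:
            self.full = True     # the "dictionary full" flag
        return index

    def reset(self):
        """The "dictionary reset" signal restores the original value."""
        self.value = self.first_index
        self.full = False
```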
[40] If the system can grow the dictionary, then it builds as many words as it outputs indices. If one mismatch is encountered, the system will build one extra word corresponding to the last word concatenated with the mismatched symbol. FIG. 2 shows a general view of how the whole structure works, reading symbols, computing addresses, getting all the information from a dictionary, resolving all the information with a match code, and generating the compressed string.
[41] The system is collecting all the information about all the different words at the same time, computing the match code which directs the system to the right branch. Each of the branches 114 in FIG. 2 implements the dictionary growing phase of the compression process. The number of words to grow is determined by the match code. The matching process also retains information regarding the location of the extension symbol (primary or secondary address . . . ).
[42] A match code is a (1/2)N(N+1)-bit quantity; a bit is 1 if the corresponding match occurred, and the bits are the output from the N-way comparator. As an example in FIG. 2, N=3. If 011001 is the bit pattern, it means that the first symbol was a match, the second symbol was not, the third symbol was not, and so on. Each of these bits indicates which symbol was a match and which words need to be extended.
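A quick check of the match-code width, purely for illustration:

```python
def match_code_bits(n: int) -> int:
    """Width of the match code for N input symbols: N(N+1)/2 bits,
    one bit per comparator in the triangular comparator array."""
    return n * (n + 1) // 2

assert match_code_bits(3) == 6    # the 6-bit "011001" example above, with N = 3
assert match_code_bits(4) == 10
```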
[43] Referring now to FIG. 3, a block diagram of a parallel decompression system 240 which can be utilized with the present invention is depicted. The parallel decompression system works in a different way from the compression system. The parallel compression system takes in multiple input symbols and tries to find multiple matches at the same time. The decompression system receives an index, and that index corresponds in its dictionary to a word of some length, so the parallel decompression will try to output a number N of these symbols from that word in parallel. Formatter 252 is a shifter that unpacks the indices from the input symbols and loads an input index into the index register 254.
[44] The input index goes to several places in order to build N different tables 272. That number N may not correspond to the same number N on compression as they are actually uncorrelated, but in practice N is dictated by the bus width of the whole system, which is typically 2 or 4. The system has N tables 272 (for level 1 mod N up to 0 mod N).
[45] The table 272 is composed of two sub-tables, except for the first one. The first one, 272-1, holds a level table as well. Once an index is received, the level table tells how long the word that corresponds to this index is, so the system knows when to stop. All the other index tables are comprised of only two sub-tables. One is for a symbol and one is for a link. The symbol table holds the current character and the link points to the next level.
[46] Address generator 270 takes the index as an input and produces an address to all levels. The symbol table 279-1 contains the first level character in that word (a string of characters). The second character will be in table 2. The third and fourth up to the Nth character will be in the corresponding tables. There are N linked lists here for each word, so each table will hold one linked list per word (linked through the link-table information). The system takes the index and outputs into all the N different tables 272. The first table 272-1 will output the first symbol, and the second table 272-2 will output the second symbol. The system knows how many symbols to get because it reads the level from the level table. At the output 288, a variable number of symbols comes out because it depends on how long the word is, so the output 288 will format these bytes into N-symbol quantities.
[47] The contents of the link tables are loaded into the address registers, so if the length of the word, which is stored in the level table, is greater than N, the symbol and link tables are read. This process continues until all the symbols in the word are read.
[48] Regarding the Special Index ROM 260, the system reserves a number of indices. Indices from 0 up to 255 are reserved for first level characters. Some number of special indices are reserved for messages from the compression system to the decompression system. One example of a special index is used when the compression system decides that the current dictionary is no longer matching the input string. For example, it monitors the compression ratio and detects deterioration in the compression ratio. The compression system sends a special index to the decompression system, resets the dictionary memory and resets the index counter to its original value. Upon receiving the special index, the decompression system resets its dictionary memory and index counter 258.
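A rough sketch of this reset decision; the special index values and the deterioration threshold are assumptions, since the text only says that such indices are reserved beyond the 0–255 character range.

```python
RESET_DICTIONARY = 256   # assumed special index values; the text only says
END_OF_BLOCK = 257       # they are reserved beyond the 0..255 character range

def maybe_request_reset(bits_in: int, bits_out: int, threshold: float = 1.0):
    """Return the reset special index when the running compression ratio
    deteriorates past an (assumed) threshold, else None."""
    ratio = bits_out / max(bits_in, 1)   # ratio > 1.0 means the output is expanding
    return RESET_DICTIONARY if ratio > threshold else None
```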
[49] Another special index is reserved for independent block operation. This special index is sent from the compression system after the end of an input block has been reached and the compression system was instructed to compress individual blocks independently. The compression system terminates the matching process and outputs the last index 32, then flushes out the remaining bits after sending the special index.
[50] Upon receiving that special index, the decompression system 240 will not use the remaining available bits; instead it will read new input bytes, since those remaining bits are padding bits the compression system used to separate the compressed blocks. This technique is very effective in achieving a high compression ratio while separating the compressed blocks.
[51] A comparator 256 of FIG. 3 compares the input index with the special indices stored in the special index ROM 260. The same comparator 256 compares the input index with the value of the index counter 258. If the input index is equal to the contents of the index counter 258, then the previous index is used in the decoding instead of the current input index, since the word corresponding to that input index is not generated yet. If the input index is larger than the index counter 258, then an error will be generated. This capability provides a limited error detection.
[52] The last index register 276 is used in the case when the index register 276 is equal to the index counter 258. The first symbol register 278 and the previous first symbol register 280 are used to build a new word in the dictionary, which is the word corresponding to the previous index concatenated with the first symbol in the current word. The previous first symbol register 280 is loaded in the symbol table at the address given by the index counter. The previous first symbol will point to the rest of the previous word and the first symbol 278 becomes the extension symbol.
[53] Suppose TH is a word whose index is X. So X will point to T and then T will point to H. The word THE will have a different index. So it will actually have to start at a different location in the dictionary.
[54] So the first symbol in the previous word, T, is stored at the symbol table at the location pointed to by the index corresponding to THE. The link table points to "H" which does not need a new memory location. The last symbol in the new word is stored at a location pointed to by H, that last symbol E is stored in the first address register.
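A toy model of this forward-linked dictionary growth (all indices, locations and helper names are hypothetical) shows how THE reuses the existing H entry of TH and stores only the extension symbol E at a new location:

```python
# Minimal forward-linked dictionary: each index points at the FIRST symbol of
# its word; symbol[loc] holds a character, link[loc] the location of the next
# one, and level[index] the word length (so decoding knows where to stop).
symbol, link, level = {}, {}, {}

def decode(index: int) -> str:
    out, loc = [], index
    for _ in range(level[index]):
        out.append(symbol[loc])
        loc = link.get(loc)
    return "".join(out)

# Word "TH" at (hypothetical) index 300: T at 300, linked to H at 301.
symbol[300], link[300] = "T", 301
symbol[301] = "H"
level[300] = 2

# Growing "THE" at index 302 reuses the existing "H" entry: the new index
# stores T again and links into 301; only the extension symbol E is new.
symbol[302], link[302] = "T", 301
link[301] = 303                      # H now links on to the new symbol E
symbol[303] = "E"
level[302] = 3

# "TH" still decodes correctly because its level stops the walk after H.
assert decode(300) == "TH" and decode(302) == "THE"
```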
[55] All the N tables 272, in particular the symbol tables, feed into the output symbols register 288.
[56] The level register 286 is loaded with the contents of the level table and keeps track of how many symbols have been output so far. The last previous address register 290 keeps the last word ending address to link the extension symbol. There is a state machine 294 that controls all the blocks of FIG. 3.
[57] There is a next address counter 284 for every table that holds the address of the next available location in the table for entering extension symbols as well as some intermediate symbols in case two symbols become children of one node.
[58] The prior art starts decompression from a leaf or an internal node of the tree, so the input index points to the last symbol in a word; the output characters are stored in a stack. Then symbols are output in reverse order. In the present invention, the input index points to the first character in a word, because every word has a first level character. The system looks up the level table first, figures out what the number of symbols in the word is, and starts decompression from the top. The prior art has to read the level first and then decide where to start pointing. The present invention can decide everything in parallel. It knows that the first character has to be in table 272-1, so it reads it and at the same time reads the level and decides whether it needs more symbols from tables 272.
[59] FIG. 4 depicts a general parallel decompression flow chart designed for some number N of parallel output symbols.
[60] In FIG. 4, the system starts from initialization state 330 which initializes all the counters. Block 332 reads enough inputs to form one index, which depends on how many bits the system is reading from the input bus that will form an index. (It constructs one index at a time.) That index will be stored in the index register 254 of FIG. 3 and compared with the special indices in block 334.
[61] If it is not a special index, the normal decompression process starts. First the system computes the addresses for the tables in block 340. The first address is equal to the content of the index register 254 or of the previous index register 276. The system then reads all the table locations (block 342), and with the level information it knows how many symbols are valid. If more symbols are needed, this means that it has not reached the end of this word. In this case, the value of the link table becomes the address to the table.
[62] The link table content will provide the information where the next symbol is until the end of the word is reached. It then tests if the index register is equal to the index counter. If it is, then there is an extra symbol that needs to be output, since in this case, the word length is one more than the last index received. This extra symbol is equal to the first symbol in the last word.
[63] The system goes to the dictionary growing routine (block 354). The system compares the index counter with the dictionary size. If they are equal, then the dictionary growing routine is skipped, and the system goes to block 274.
[64] The system needs to save the first character and the last previous register, the address register, and the last index, shown in FIG. 3.
[65] Parallelism is not required for growing a word in the dictionary, since only one word is involved. The content of the first symbol register is stored in the symbol table 272-1 at the address given by the index counter 258.
[66] There is a first symbol (block 356) in each of the tables 272. The first symbol is very critical because that's where the index points to. Every symbol from now on will point to the next one.
[67] If the word to be extended has length smaller than N, then the extending symbol is entered in the appropriate table 272 at the address given by the index counter 258, and the first symbol register is entered in table 272-1 at the same address. If the word length to be entered is between N and 2N-1, then the extension symbol is entered in the appropriate table 272 at the address given by the appropriate next address counter 284. The corresponding link table at the address given by the index counter 258 will be loaded with the corresponding next address counter 284.
[68] If the word to be extended is larger than 2N-1, then the system checks for a pure prefix (block 368), which means that it checks whether the previous word to be extended has any other children words. That is, a leaf node will be called a pure prefix. In this case, one symbol needs to be entered with the appropriate link. If the pure prefix condition is not satisfied, then all the intermediate symbols are copied to new locations, preserving the link information.
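Purely as an illustrative summary of these three length cases (the helper name and the return labels are invented; the patent describes hardware, not software), the dispatch might be sketched as:

```python
def placement_for_extension(word_length: int, n: int, pure_prefix: bool):
    """Decide where the extension symbol of a grown word goes, following the
    three length cases above; returns (table index, descriptive label)."""
    table = (word_length + 1) % n          # level of the new symbol, modulo N
    if word_length < n:
        return table, "index-counter address (first symbol also stored in table 1)"
    if word_length <= 2 * n - 1:
        return table, "next-address counter, linked from the index-counter address"
    if pure_prefix:
        return table, "single entry with the appropriate link"
    return table, "copy intermediate symbols to new locations, then append"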
[69] Another way to do this would be to do hashing when a node has two children. This is a way to avoid hashing.
[70] FIG. 5 depicts an example of how a multi-file operation or multi-tasking operation would work in conjunction with the present invention. In FIG. 5, there is a compression/decompression apparatus 510, which includes an index counter 534 and a dictionary base address 536.
[71] In a multi-file environment, the system includes multi-dictionaries 530 with one dictionary per file (A1, A2, . . . , AN).
[72] The system needs to supply the compression/decompression apparatus with the base address 536, that is, where the dictionary for the current file actually resides.
[73] Associated with every file A1, A2, . . . , AN is an index counter 1, 2, . . . , N. If the system starts compressing, say, file A1, the first block of file A1 gets compressed, and then the final index counter value after compressing one block of file A1 will be stored in index counter 1.
[74] The next block might belong to file A2. The system would go to index counter 2 and bring the content of index counter 2 into index counter 534. Multiplexer 542 permits the starting address of dictionary A2 to go into dictionary base address 536 in the compression apparatus 510. This allows the compression apparatus to access the dictionary for file A2.
[75] Starting addresses for dictionaries A1 through AN would be stored in starting address register bank 540.
[76] Starting address register bank 540 is basically N registers, each one holding the base address of the associated dictionary. If the current block belongs to file A3, the content of starting address register 3 will be passed back through multiplexer 542 and stored in dictionary base address block 536.
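A minimal bookkeeping sketch of the per-file index counters and dictionary base addresses of FIG. 5 (the class, field names, base addresses and starting index value are all assumptions for illustration):

```python
class MultiFileContext:
    """Per-file dictionary bookkeeping: one index counter and one dictionary
    base address per file, modelled after the register banks of FIG. 5."""

    def __init__(self, base_addresses):
        self.base_address = dict(base_addresses)               # file id -> base address
        self.index_counter = {f: 256 for f in base_addresses}  # assumed starting value

    def switch_to(self, file_id):
        """State the compression/decompression engine loads before the next
        block of this file: (dictionary base address, saved index counter)."""
        return self.base_address[file_id], self.index_counter[file_id]

    def save(self, file_id, final_index):
        """Store the engine's final index counter after finishing a block."""
        self.index_counter[file_id] = final_index

ctx = MultiFileContext({"A1": 0x0000, "A2": 0x4000, "A3": 0x8000})
base, counter = ctx.switch_to("A2")   # load dictionary A2 before its next block
```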
[77] Using this mechanism, the compression/decompression apparatus will be able to toggle between blocks and between compression and decompression. For example, the system can start compressing a block of file A1, write it out and then decompress another block of file A3.
[78] What the system does is set the apparatus to do decompression and give it the right starting address (the dictionary base address); the associated dictionary would then be the decompression dictionary table for file A3.
[79] The foregoing aspects of an improved data compression and decompression system are desirably incorporated into a data compression/decompression processor, according to the present invention, the details of which will now be described.
[80] In a preferred embodiment, the present invention is a single-chip VLSI data compression engine for use in data storage applications. This VLSI circuit is highly versatile such that it can be used on a host bus or housed in host adapters, so that all devices connected to it (such as magnetic disks, tape drives, optical drives, WORMs and the like) can have substantially expanded capacity and/or a higher data transfer rate.
[81] The present invention desirably employs an advanced adaptive data compression algorithm with string-matching and linked-list techniques as described above. Compared with conventional data compression algorithms (such as Huffman coding), the improved algorithm has the following advantages. It is completely adaptive. During compression, the dictionary is constructed on the fly. No a priori knowledge of the statistics of the characters in the data is needed. During decompression, the dictionary is reconstructed at the same time as the decoding occurs. It converges very fast, and the compression ratio approaches the theoretical limit. It is also insensitive to error propagation.
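The adaptive, dictionary-built-on-the-fly behaviour described here is in the LZ78/LZW family; the sketch below is a plain textbook LZW encoder/decoder, not the patented parallel hardware, and is included only to show how both sides rebuild the same dictionary with no prior statistics.

```python
def lzw_compress(data: bytes):
    dictionary = {bytes([i]): i for i in range(256)}
    word, out = b"", []
    for byte in data:
        candidate = word + bytes([byte])
        if candidate in dictionary:
            word = candidate
        else:
            out.append(dictionary[word])
            dictionary[candidate] = len(dictionary)   # grow on every mismatch
            word = bytes([byte])
    if word:
        out.append(dictionary[word])
    return out

def lzw_decompress(codes):
    dictionary = {i: bytes([i]) for i in range(256)}
    previous = dictionary[codes[0]]
    out = [previous]
    for code in codes[1:]:
        # The "index equals the index counter" case: the word is not stored yet.
        entry = dictionary.get(code, previous + previous[:1])
        out.append(entry)
        dictionary[len(dictionary)] = previous + entry[:1]   # grow in lockstep
        previous = entry
    return b"".join(out)

payload = b"TOBEORNOTTOBEORTOBEORNOT"
assert lzw_decompress(lzw_compress(payload)) == payload
```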
[82] For data security applications, the processor 604 also implements a key-dependent encryption capability. Users simply load the key words onto the chip, and the data output to the buffer will be encrypted.
Industrial Applicability
[83] The benefits of data compression are tremendous. The benefits of data compression technology to communications have been the increase in bandwidth available to users and a saving in the cost of communication devices. If one applies advanced data compression technology with a 4:1 compression ratio, one may use 2400 bps modems and unconditioned lines for such transmissions. In this example, the savings is not only the difference in modem cost between 9600 bps modems and 2400 bps modems, but also the monthly charges for the conditioned lines. As a further example, a typical branch of a bank requires four leased lines for operation (one for the ATM machine, one for news broadcast, and two for data processing). If a multiplexer has a built-in compressor with a 4-to-1 compression ratio, then only one line will be needed to communicate with the outside world, and a savings of the monthly charges of three lines can be achieved.
[84] As to back-up systems, data compression technology can greatly reduce the back-up time, increase the capacity and save on the cost of the back-up device. For example, for high-end 1/2" reel-to-reel tape back-up systems, typically four tapes are required to be mounted and dismounted from the tape drive system. It normally takes 30 minutes to back up one tape. Due to this sequential process, backing up computer data becomes a very time-consuming process.

Claims

Claims
[1] A method of accessing files of compressed data on a mass storage device comprising the steps of:
(a) allocating one or more logical data blocks to each file using a first allocation table;
(b) allocating one or more physical data blocks for storing file data using a second file allocation table; and
(c) translating between said first file allocation table and said second file allocation table in response to a request to access the storage device.
[2] The method of claim 1 wherein said translating step comprises the step of translating between said first file allocation table and said second file allocation table using a mapping table.
[3] The method of claim 2 wherein said first file allocation table comprises a plurality of entries, each entry associated with a respective logical data block.
[4] The method of claim 3 wherein said step of allocating one or more logical data blocks comprises the step of associating each file with one of said entries corresponding to a first logical data block associated with that file and creating a linked list of entries in said first allocation table indicating the logical data blocks associated with the file.
[5] The method of claim 4 wherein each entry of the first allocation table is associated with a respective entry in the mapping table which indicates a physical data block allocated by said second allocation table.
[6] A system of storing and retrieving compressed data comprising:
(a) a first file allocation table for allocating one or more logical data blocks to each of a plurality of files;
(b) a second allocation file for allocating physical data blocks for storing file data; and a mapping table for translating between said first and second file allocation tables in response to a request to access the storage device.
[7] The system of claim 6 and further comprising a processor for compressing the file data in response to a write operation to the storage device and decompressing the data responsive to a read operation from the storage device.
[8] The system of claim 7 wherein said first file allocation table comprises a plurality of entries, each entry associated with a respective logical data block.
[9] The system of claim 8 wherein said mapping table comprises a plurality of entries corresponding to respective ones of said first file allocation table entries, each entry of said mapping table indicating a physical data block allocated by said second allocation table.
[10] The system of claim 6 wherein said first allocation table maintains a linked list of logical data blocks for each file.
[11] A method of accessing files of compressed data on a mass storage device comprising the steps of:
(a) maintaining a first file allocation table which indicates one or more logical data blocks;
(b) maintaining a second file allocation table which indicates one or more physical data blocks on the mass storage device where the compressed data associated with each file is stored;
(c) mapping between the first file allocation table and the second file allocation table in response to a request to access the mass storage device.
[12] The method of claim 11 wherein each block of said second file allocation table comprises a plurality of allocation units, and said mapping step comprises the step of associating each block of said first file allocation table with one or more of said allocation units.
[13] The method of claim 11 wherein said mapping step comprises the step of associating each block of said first file allocation table with a block of said second file allocation table and an offset into said block of said second file allocation table.
[14] The method of claim 11 and further comprising the step of allocating a number of logical data blocks in excess of the number of physical data blocks to said first allocation table.
[15] The method of claim 11 wherein said mapping step comprises the step of mapping a plurality of logical data blocks into a single physical data block.
[16] The method of claim 11 wherein said mapping step comprises the step of mapping each logical data block to a physical block and an offset into the physical block.
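A minimal data-structure sketch of the two allocation tables and the mapping table recited in the claims (all names, block sizes and offsets below are assumptions for illustration, not values from the claims):

```python
# Logical FAT: per-file linked chain of logical data blocks (claim 10).
# Mapping table: logical block -> (physical block, offset) (claims 13 and 16).
# Physical FAT: which physical blocks on the device are allocated.

logical_fat = {"game.bin": [0, 1, 2]}             # logical blocks of one file
mapping = {0: (10, 0), 1: (10, 512), 2: (11, 0)}  # several logical -> one physical
physical_fat = {10: "allocated", 11: "allocated"}

def locate(filename: str, logical_index: int):
    """Translate a file's logical block into a physical block and offset."""
    logical_block = logical_fat[filename][logical_index]
    return mapping[logical_block]

assert locate("game.bin", 1) == (10, 512)
```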
PCT/KR2005/003047 2005-08-05 2005-09-14 Method for platform-free file compression and file security in cellular phone WO2007018328A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR1020050071893A KR20070016850A (en) 2005-08-05 2005-08-05 Method for platform-free file compression and file security in cellular phone
KR10-2005-0071893 2005-08-05

Publications (1)

Publication Number Publication Date
WO2007018328A1 (en)

Family

ID=37727493

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2005/003047 WO2007018328A1 (en) 2005-08-05 2005-09-14 Method for platform-free file compression and file security in cellular phone

Country Status (2)

Country Link
KR (1) KR20070016850A (en)
WO (1) WO2007018328A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0079465A2 (en) * 1981-11-13 1983-05-25 International Business Machines Corporation Method for storing and accessing a relational data base
JPS647138A (en) * 1986-10-28 1989-01-11 Toshiba Corp File access control system
JPH02171831A (en) * 1988-12-23 1990-07-03 Nec Corp Data access system for compiler
JP2005135116A (en) * 2003-10-29 2005-05-26 Nec Corp Storage device and access control method thereof

Also Published As

Publication number Publication date
KR20070016850A (en) 2007-02-08

Similar Documents

Publication Publication Date Title
US4558302A (en) High speed data compression and decompression apparatus and method
US5410671A (en) Data compression/decompression processor
US9639501B1 (en) Apparatus and methods to compress data in a network device and perform ternary content addressable memory (TCAM) processing
US5663721A (en) Method and apparatus using code values and length fields for compressing computer data
US6145012A (en) Apparatus and method for efficiently updating files in computer networks
US7650429B2 (en) Preventing aliasing of compressed keys across multiple hash tables
JP2502469B2 (en) Method and means for providing a static dictionary structure for compressing character data and decompressing compressed data
US9054729B2 (en) System and method of compression and decompression
US6747580B1 (en) Method and apparatus for encoding or decoding data in accordance with an NB/(N+1)B block code, and method for determining such a block code
US7538696B2 (en) System and method for Huffman decoding within a compression engine
CN110928483B (en) Data storage method, data acquisition method and equipment
US9306851B1 (en) Apparatus and methods to store data in a network device and perform longest prefix match (LPM) processing
WO1994022072A1 (en) Information processing using context-insensitive parsing
CN108702160B (en) Method, apparatus and system for compressing and decompressing data
RU97112940A (en) METHOD AND DEVICE FOR COMPRESSING DATA USING ASSOCIATIVE MEMORY
US20190052553A1 (en) Architectures and methods for deep packet inspection using alphabet and bitmap-based compression
EP3384406A1 (en) Combining hashes of data blocks
CN106849956B (en) Compression method, decompression method, device and data processing system
JP2010504067A (en) Method and system for storing and retrieving streaming data
US6127953A (en) Apparatus and method for compressing data containing repetitive patterns
US6819671B1 (en) Relay control circuit using hashing function algorithm
US6313767B1 (en) Decoding apparatus and method
KR20180060990A (en) Method and apparatus for vertex attribute compression and decompression in hardware
WO2007018328A1 (en) Method for platform-free file compression and file security in cellular phone
US11569841B2 (en) Data compression techniques using partitions and extraneous bit elimination

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS EPO FORM 1205A DATED 03.06.2008.

122 Ep: pct application non-entry in european phase

Ref document number: 05808805

Country of ref document: EP

Kind code of ref document: A1