CN106066843A - High-speed parallel Base64 encoding and decoding device - Google Patents

A high-speed parallel Base64 encoding and decoding device

Info

Publication number
CN106066843A
CN106066843A
Authority
CN
China
Prior art keywords
data
host
coding
bytes
caching
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201610384154.4A
Other languages
Chinese (zh)
Inventor
徐晓燕
李高超
周渊
张露晨
马秀娟
唐积强
徐小磊
毛洪亮
刘俊贤
苏沐冉
刘庆良
何万江
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
BEIJING SCISTOR TECHNOLOGY Co Ltd
National Computer Network and Information Security Management Center
Original Assignee
BEIJING SCISTOR TECHNOLOGY Co Ltd
National Computer Network and Information Security Management Center
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by BEIJING SCISTOR TECHNOLOGY Co Ltd, National Computer Network and Information Security Management Center filed Critical BEIJING SCISTOR TECHNOLOGY Co Ltd
Priority to CN201610384154.4A priority Critical patent/CN106066843A/en
Publication of CN106066843A publication Critical patent/CN106066843A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F13/00 Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F13/38 Information transfer, e.g. on bus
    • G06F13/42 Bus transfer protocol, e.g. handshake; Synchronisation
    • G06F13/4204 Bus transfer protocol, e.g. handshake; Synchronisation on a parallel bus
    • G06F13/4221 Bus transfer protocol, e.g. handshake; Synchronisation on a parallel bus being an input/output bus, e.g. ISA bus, EISA bus, PCI bus, SCSI bus
    • G06F13/423 Bus transfer protocol, e.g. handshake; Synchronisation on a parallel bus being an input/output bus, e.g. ISA bus, EISA bus, PCI bus, SCSI bus with synchronous protocol
    • H ELECTRICITY
    • H03 ELECTRONIC CIRCUITRY
    • H03M CODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M7/00 Conversion of a code where information is represented by a given sequence or number of digits to a code where the same, similar or subset of information is represented by a different sequence or number of digits
    • H03M7/14 Conversion to or from non-weighted codes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2213/00 Indexing scheme relating to interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F2213/0024 Peripheral component interconnect [PCI]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Communication Control (AREA)

Abstract

The invention discloses a high-speed parallel Base64 encoding and decoding device, characterised in that it comprises an encoding module and a decoding module for data processing. The encoding module comprises at least one data reception module, at least one encoding control module, and at least one data transmission module; the decoding module comprises at least one data reception module, at least one decoding control module, and at least one data transmission module. Based on an FPGA platform, the invention achieves 16-byte parallel processing of Base64 encoding and decoding, improves the efficiency of Base64 encoding and decoding, and effectively reduces CPU resource consumption.

Description

A high-speed parallel Base64 encoding and decoding device
Technical field
The present invention relates to the field of information security technology, and in particular to a high-speed parallel Base64 encoding and decoding device.
Background art
Base64 encoding is a method of representing binary data using 64 printable characters. It is commonly used as a coding method for storing and transmitting binary data, or as a method of transferring content restricted to printable characters, and it does not introduce a new character set. At the same time, Base64 encoding can be regarded as a simple method of encrypting text.
Base64 encoding converts every 3 bytes (3 × 8 bits) into 4 bytes (4 × 8 bits). The encoded characters comprise the letters A-Z and a-z, the digits 0-9, and the characters '+' and '/', 64 characters in total, corresponding to the encoded values 0-63.
The Base64 encoding procedure is as follows: 3 bytes of data are placed in a 24-bit buffer; when fewer than 3 bytes of data remain, the missing bits of the buffer are filled with 0. Each time, 6 bits are taken from the buffer in left-to-right order and recombined with 2 zero bits into an 8-bit byte, the 2 zero bits occupying the high positions and the 6 data bits the low positions. The four 8-bit values thus formed are output as characters according to the A-Z, a-z, 0-9, '+' and '/' table. When 2 bytes of data remain, one '=' is appended to the encoding result; when 1 byte of data remains, two '=' are appended. Decoding is the reverse of encoding: any '=' characters are removed first, and the remaining data is then decoded.
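For reference, the 3-byte-to-4-character rule described above can be sketched as a short C routine. This is only an illustration of the standard Base64 rule, not the FPGA implementation of this invention; the function name and interface are ours.

```c
#include <stddef.h>
#include <stdio.h>

static const char B64_ALPHABET[] =
    "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";

/* Encode one group of up to 3 input bytes into 4 output characters,
 * padding with '=' when fewer than 3 bytes are available. */
static void base64_encode_group(const unsigned char *in, size_t n, char out[4])
{
    unsigned int buf = 0;                       /* 24-bit buffer, missing bytes are 0 */
    for (size_t i = 0; i < 3; i++)
        buf = (buf << 8) | (i < n ? in[i] : 0);

    out[0] = B64_ALPHABET[(buf >> 18) & 0x3F];  /* leftmost 6 bits */
    out[1] = B64_ALPHABET[(buf >> 12) & 0x3F];
    out[2] = (n > 1) ? B64_ALPHABET[(buf >> 6) & 0x3F] : '=';
    out[3] = (n > 2) ? B64_ALPHABET[buf & 0x3F]        : '=';
}

int main(void)
{
    char out[5] = {0};
    base64_encode_group((const unsigned char *)"Ma", 2, out);
    printf("%s\n", out);                        /* prints "TWE=" */
    return 0;
}
```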
Base64 encoding and decoding require frequent data reads, table lookups, and splicing by bit position. In a software implementation the code is relatively simple, but because the data is processed only byte by byte, it consumes a considerable amount of CPU resources.
Summary of the invention
The purpose of the present invention is to solve the above problems by proposing a high-speed parallel Base64 encoding and decoding device that supports high-speed parallel encoding and decoding of data in 16-byte units and effectively reduces CPU usage.
The high-speed parallel Base64 encoding and decoding device of the present invention comprises an encoding module and a decoding module for data processing; the encoding module comprises:
at least one data reception module, for caching the queue information and pending data issued by the host;
at least one encoding control module, for encoding the cached data;
at least one data transmission module, for returning the encoded data to the host.
The decoding module comprises:
at least one data reception module, for caching the queue information and pending data issued by the host;
at least one decoding control module, for decoding the cached data;
at least one data transmission module, for returning the decoded data to the host.
All modules of the high-speed parallel Base64 encoding and decoding device provided by the present invention are implemented in an FPGA; the FPGA communicates with the host over a PCIe interface to receive and return data and information.
The advantages of the present invention are:
(1) Based on an FPGA platform, the present invention achieves 16-byte parallel processing of Base64 encoding and decoding, which improves the efficiency of Base64 encoding and decoding and effectively reduces CPU resource consumption;
(2) The present invention exploits the data-processing parallelism of the FPGA; implementing multi-byte parallel processing in hardware greatly improves encoding and decoding efficiency while reducing CPU resource consumption.
Brief description of the drawings
Fig. 1 is a schematic diagram of the structure of the encoding module of the present invention;
Fig. 2 is a schematic diagram of the structure of the decoding module of the present invention;
Fig. 3 is the encoding flow chart of the present invention;
Fig. 4 is the decoding flow chart of the present invention.
Detailed description of the invention
The present invention is described in further detail below with reference to the drawings and embodiments.
Fig. 1 shows the structure of the encoding module of the present invention, which consists of a data reception module, an encoding control module, and a data transmission module. The data reception module caches the queue information and data sent by the host and, when caching is complete, notifies the encoding control module to encode the data; the encoding control module reads the pending data from the cache of the data reception module, encodes it according to the Base64 encoding rules, and sends the encoded data to the data transmission module to be cached; the data transmission module returns the cached data to the host over the PCIe interface.
Fig. 2 shows the structure of the decoding module of the present invention, which consists of a data reception module, a decoding control module, and a data transmission module. The data reception module caches the queue information and data sent by the host and, when caching is complete, notifies the decoding control module to decode the data; the decoding control module reads the pending data from the cache of the data reception module, decodes it according to the Base64 decoding rules, and sends the decoded data to the data transmission module to be cached; the data transmission module returns the cached data to the host over the PCIe interface.
Fig. 3 shows the encoding flow of the present invention. The encoding flow is described step by step below with reference to Fig. 3, assuming the pending data is 23 bytes:
Step 1: The host inputs queue information, which indicates the data length of this encoding operation, the operation sequence number, and the offset address for returning data to the host.
Step 2: The host issues the pending data. Because the data bit width issued by the host is 64 bits while the encoding control module processes data at a bit width of 128 bits, the data reception module, when caching data, re-splices the 64-bit words input by the host into 128-bit words. The splicing rule is that the data input first is cached in the high 64 bits and the data input later in the low 64 bits; when the valid input data is less than 128 bits, zeros are appended after the valid data before caching. Once at least one 128-bit word has been cached, a data-ready signal is asserted.
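As a point of comparison, the splicing rule of step 2 (first word in the high half, second word in the low half, zero-fill when data runs short) can be modelled in C as follows; the struct and function names are illustrative only and not part of the patent.

```c
#include <stdint.h>
#include <stdio.h>

/* One 128-bit beat as seen by the encoding/decoding control logic. */
typedef struct { uint64_t hi, lo; } beat128_t;

/* Splice two 64-bit host words into one 128-bit beat: the word received
 * first occupies the high 64 bits, the one received second the low 64 bits.
 * If only one valid word arrived, the low half is zero-filled. */
static beat128_t splice_beat(uint64_t first, uint64_t second, int second_valid)
{
    beat128_t b;
    b.hi = first;
    b.lo = second_valid ? second : 0;   /* pad missing data with 0 */
    return b;
}

int main(void)
{
    beat128_t b = splice_beat(0x1122334455667788ULL, 0x99AABBCCDDEEFF00ULL, 1);
    printf("%016llx%016llx\n",
           (unsigned long long)b.hi, (unsigned long long)b.lo);
    return 0;
}
```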
Step 3: Determine whether there is pending data in the data cache of the current data reception module; if so, assert the data-ready signal; otherwise, continue waiting for the host to issue data.
Step 4: After the encoding control module receives the data-ready signal, it reads 16 bytes of data from the data reception module.
Step 5: Since Base64 encoding converts 24 bits into 32 bits, every 48 bytes of data are converted into 64 bytes of data. In this example, each time 16 bytes of data are read, a zero-insertion operation is performed and the result is cached; this is repeated for 3 consecutive reads. When the remaining data is less than 48 bytes, zero padding is performed. The zero-insertion operation for a pending data length of 23 bytes is described in detail here. First, 16 bytes of data are read, and starting from the most significant bit, 2 zero bits are inserted before every 6 bits of the 16-byte data; the data after zero insertion is 172 bits, and these 172 bits are cached in bits 512-340 of a register whose bit width is 512 bits. Then the remaining 7 bytes, 56 bits in total, are read; bits 56-53 are cached in bits 339-336 of the cache register, and 2 zero bits are inserted before every 6 bits of the remaining 52 bits from high to low, so that the 52 bits become 70 bits after zero insertion; these 70 bits are placed in bits 335-266 of the cache register. According to the Base64 rules, bits 265 and 264 of the cache register are filled with 0, so that the 23 bytes of data are converted into 31 bytes.
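The zero insertion of step 5 is, in effect, the widening of each 6-bit group into a byte whose two high bits are 0. A generic, byte-oriented C sketch of the same bit manipulation follows, assuming a simple MSB-first bit stream; unlike the hardware, which keeps the final partial group at its native width inside the 512-bit register, this sketch widens every group to a byte.

```c
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* Expand a bit stream into bytes: every 6 input bits (MSB first) become one
 * output byte whose two high bits are 0 -- the "insert 0" operation of step 5.
 * Returns the number of output bytes; a trailing group of fewer than 6 bits
 * is emitted in a final byte.  Names and interface are illustrative only. */
static size_t insert_zeros(const uint8_t *in, size_t nbits, uint8_t *out)
{
    size_t nout = 0;
    for (size_t pos = 0; pos < nbits; pos += 6) {
        uint8_t group = 0;
        size_t take = (nbits - pos < 6) ? nbits - pos : 6;
        for (size_t i = 0; i < take; i++) {
            size_t bit = pos + i;
            uint8_t b = (uint8_t)((in[bit / 8] >> (7 - bit % 8)) & 1); /* MSB-first */
            group = (uint8_t)((group << 1) | b);
        }
        out[nout++] = group;             /* high 2 bits are 0 for full groups */
    }
    return nout;
}

int main(void)
{
    uint8_t in[16] = {0xFF};            /* 16 bytes = 128 bits */
    uint8_t out[32];
    size_t n = insert_zeros(in, 128, out);
    printf("%zu groups, first = 0x%02X\n", n, (unsigned)out[0]);  /* 22 groups, 0x3F */
    return 0;
}
```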
Step 6: Determine whether the last data has been processed; if so, go to step 8, otherwise go to step 7.
Step 7: Determine whether 3 iterations have been completed; if so, go to step 8, otherwise go to step 4. The reason for checking for 3 iterations here is that every 48 bytes of data, once converted, form 64 bytes of data, which makes it convenient to cache the subsequent data in 16-byte widths.
Step 8: The data in the cache register is looked up byte by byte, converting the zero-inserted data into ASCII codes. Owing to the parallelism of the FPGA, 64 bytes of data can be converted into ASCII codes at once here.
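In software terms, the per-byte lookup of step 8 is simply the Base64 alphabet indexed by the 6-bit value. A minimal sketch follows; the FPGA performs 64 such lookups in parallel, whereas this routine handles one byte at a time.

```c
#include <stdint.h>
#include <stdio.h>

/* Step 8 as a table lookup: each 6-bit value 0-63 indexes the Base64
 * alphabet, yielding the ASCII code that will be returned to the host. */
static char to_ascii(uint8_t v6)
{
    static const char tbl[] =
        "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
    return tbl[v6 & 0x3F];
}

int main(void)
{
    printf("%c %c %c\n", to_ascii(0), to_ascii(26), to_ascii(63));  /* A a / */
    return 0;
}
```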
Step 9: Determine whether the last data has been processed; if so, go to step 10, otherwise go to step 12.
Step 10: Determine whether the current zero-inserted data needs padding; if so, go to step 11, otherwise go to step 12. The rule for deciding whether padding is needed is to compute the encoded data length from the pending data length indicated in the queue information. For example, for the 23 bytes of data in this example, the encoded data should be 32 bytes; the valid data after encoding the 23 bytes described in step 5 is 31 bytes, so a padding operation should be performed here.
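The expected encoded length used in step 10 follows the standard relation of 4 output bytes per started group of 3 input bytes. A one-line check of the 23-byte case, for illustration only:

```c
#include <stdio.h>

/* Encoded Base64 length for n input bytes: 4 output bytes per started
 * group of 3 input bytes, '=' padding included. */
static unsigned long encoded_len(unsigned long n)
{
    return 4UL * ((n + 2UL) / 3UL);
}

int main(void)
{
    printf("%lu\n", encoded_len(23));   /* prints 32 */
    return 0;
}
```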
Step 11: The data length after the encoding of step 5 is 31 bytes, while the data length actually required for output is 32 bytes, so an '=' is filled in here after the 31st byte of the valid encoded data, and the length field in the queue information to be returned to the host is updated to 32 bytes.
Step 12: The encoded data is sent to the cache of the data transmission module.
Step 13: Determine whether the last byte has been processed; if so, go to step 14, otherwise go to step 3.
Step 14: First send the updated queue information, indicating the returned data length, the operation sequence number, and other information.
Step 15: Send the data in the cache to the host.
Step 16: Determine whether the currently transmitted data is the last data; if so, go to step 17, otherwise go to step 15.
Step 17: Send the updated queue information again, to indicate that the data transmission is complete.
Fig. 4 shows the decoding flow of the present invention. The decoding flow is described step by step below with reference to Fig. 4, assuming the pending data is 32 bytes and includes one padding byte:
Step 1: The host inputs queue information, which indicates the data length of this decoding operation, the operation sequence number, and the offset address for returning data to the host.
Step 2: The host issues the pending data. Because the data bit width issued by the host is 64 bits while the decoding control module processes data at a bit width of 128 bits, the data reception module, when caching data, re-splices the 64-bit words input by the host into 128-bit words. The splicing rule is that the data input first is cached in the high 64 bits and the data input later in the low 64 bits; when the valid input data is less than 128 bits, zeros are appended after the valid data before caching. Once at least one 128-bit word has been cached, a data-ready signal is asserted.
Step 3: Determine whether there is pending data in the data cache of the current data reception module; if so, assert the data-ready signal; otherwise, continue waiting for the host to issue data.
Step 4: After the decoding control module receives the data-ready signal, it reads 16 bytes of data from the data reception module.
Step 5: Determine whether the currently read data is the last pending data; if so, go to step 6, otherwise go to step 8.
Step 6: Determine whether the current data contains padding values. The rule for detecting padding is to check whether the ASCII value of '=' is present in the data; if so, go to step 7, otherwise go to step 8. In this example, the valid data is 31 bytes and the 32nd byte is the padding '='.
Step 7: Remove the '=', and fill the position of the '=' and the data after it with 0. In this example, since the '=' is located at the 32nd byte, the 32nd byte is filled with 0.
Step 8: Perform a table lookup whose purpose is to convert the ASCII code values of the data into the Base64 values 0-63, for the subsequent zero-removal operation.
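In software, the lookup of step 8 is the inverse of the encoding table, mapping each ASCII character back to its 6-bit value 0-63. A sketch follows; using 0xFF to mark characters outside the alphabet is a convention of this illustration, not of the patent.

```c
#include <stdint.h>
#include <string.h>
#include <stdio.h>

/* Build the inverse table once: ASCII code -> Base64 value 0-63,
 * 0xFF for characters that are not part of the alphabet. */
static void build_decode_table(uint8_t tbl[256])
{
    static const char alphabet[] =
        "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
    memset(tbl, 0xFF, 256);
    for (int i = 0; i < 64; i++)
        tbl[(unsigned char)alphabet[i]] = (uint8_t)i;
}

int main(void)
{
    uint8_t tbl[256];
    build_decode_table(tbl);
    printf("%u %u %u\n",
           (unsigned)tbl['A'], (unsigned)tbl['a'], (unsigned)tbl['/']);  /* 0 26 63 */
    return 0;
}
```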
Step 9: Remove the inserted zeros. The removal rule is to discard the 0 in the high 2 bits of every 8 bits of the looked-up data and to re-splice the remaining 6 bits of each value in order from high to low. Since 16 bytes are processed each time, 32 bits of 0 are removed in total and the remaining 96 bits of data are re-spliced into 12 bytes. In this example, the 16 bytes processed in the first pass are all valid data, so they can be re-spliced into 12 bytes of decoded data; in the second pass only 15 bytes are valid, so they can be re-spliced into 11 bytes of decoded data.
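The zero removal of step 9 can be sketched in C as follows: every 4 looked-up values contribute 24 bits, so 16 values collapse into 12 bytes. This sketch assumes all 16 inputs are valid values 0-63, as in the first pass of the example; the function name and grouping are illustrative only.

```c
#include <stdint.h>
#include <stdio.h>

/* Step 9: drop the high 2 bits of each 6-bit value and re-splice the
 * remaining bits MSB-first.  Every 4 input values give 3 output bytes,
 * so 16 values give 12 bytes. */
static void remove_zeros_16(const uint8_t v[16], uint8_t out[12])
{
    for (int g = 0; g < 4; g++) {                 /* 4 groups of 4 values */
        uint32_t buf = ((uint32_t)(v[4*g]   & 0x3F) << 18) |
                       ((uint32_t)(v[4*g+1] & 0x3F) << 12) |
                       ((uint32_t)(v[4*g+2] & 0x3F) <<  6) |
                        (uint32_t)(v[4*g+3] & 0x3F);
        out[3*g]   = (uint8_t)(buf >> 16);
        out[3*g+1] = (uint8_t)(buf >>  8);
        out[3*g+2] = (uint8_t)(buf);
    }
}

int main(void)
{
    /* The values of "TWFu" repeated 4 times decode to "Man" repeated 4 times. */
    uint8_t vals[16] = {19, 22, 5, 46, 19, 22, 5, 46, 19, 22, 5, 46, 19, 22, 5, 46};
    uint8_t out[12];
    remove_zeros_16(vals, out);
    printf("%.12s\n", (const char *)out);   /* prints "ManManManMan" */
    return 0;
}
```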
Step 10: The decoded 12 bytes are written into a register whose bit width is 384 bits and cached. In this example, the 12 bytes obtained from the first pass are stored in bits 384-289 of the cache register; the 11 bytes obtained from the second pass are stored in bits 288-199 of the cache register, and bits 198-129 of the cache register are padded with 0 to complete an integer multiple of 16 bytes, while the valid data length in the queue information is updated to 23 bytes. The reason the bit width of the cache register is 384 bits is that every 16 bytes of data become 12 bytes, i.e. 96 bits, after zero removal; when the data is long enough, every 4 passes produce 48 bytes, i.e. 384 bits, of data, which is convenient for the subsequent storage of data in 16-byte units. When the last decoded data is less than 16 bytes, the missing part is padded with 0, and the data length information in the queue information to be returned to the host is updated to indicate the total valid data length.
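The 23-byte valid length written back into the queue information follows the usual length relation: 3 decoded bytes per 4 encoded bytes, minus one byte per '=' pad. A quick check under that assumption:

```c
#include <stdio.h>

/* Decoded length for enc_len Base64 characters containing pad_count '=' bytes. */
static unsigned long decoded_len(unsigned long enc_len, unsigned long pad_count)
{
    return enc_len / 4UL * 3UL - pad_count;
}

int main(void)
{
    printf("%lu\n", decoded_len(32, 1));   /* prints 23 */
    return 0;
}
```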
Step 11: Determine whether the last data has been processed; if so, go to step 13, otherwise go to step 12.
Step 12: Determine whether 4 decoding operations have been performed; if so, go to step 13, otherwise go to step 3.
Step 13: The valid data in the cache register is written into the transmission cache of the data transmission module. In this example the transmission cache is written twice, 16 bytes each time, 32 bytes in total, of which the valid data is 23 bytes, identified by the valid-length field in the queue information.
Step 14: Determine whether the last data has been processed; if so, go to step 15, otherwise go to step 3.
Step 15: First send the updated queue information, indicating the returned data length, the operation sequence number, and other information.
Step 16: Send the data in the cache to the host.
Step 17: Determine whether the currently transmitted data is the last data; if so, go to step 18, otherwise go to step 16.
Step 18: Send the updated queue information again, to indicate that the data transmission is complete.
In summary, the high-speed parallel Base64 encoding and decoding device provided by the present invention uses an FPGA to implement the functional modules. The zero-insertion, zero-removal, table-lookup, and caching operations on the data give full play to the parallel data processing capability of the FPGA, achieving Base64 encoding and decoding of data grouped in 16 bytes and greatly improving the data processing capability of the Base64 algorithm. This example only takes 16-byte groups as an example; on this basis, a Base64 algorithm that processes data in larger groups can also be implemented.

Claims (8)

1. A high-speed parallel Base64 encoding and decoding device, characterised in that it comprises an encoding module and a decoding module for data processing, wherein the encoding module comprises:
at least one data reception module, for caching the queue information and pending data issued by the host;
at least one encoding control module, for encoding the cached data;
at least one data transmission module, for returning the encoded data to the host;
and the decoding module comprises:
at least one data reception module, for caching the queue information and pending data issued by the host;
at least one decoding control module, for decoding the cached data;
at least one data transmission module, for returning the decoded data to the host.
2. The high-speed parallel Base64 encoding and decoding device according to claim 1, characterised in that, in the encoding module, the data reception module caches the queue information and data sent by the host and, when caching is complete, notifies the encoding control module to encode the data; the encoding control module reads the pending data from the cache of the data reception module, encodes it according to the Base64 encoding rules, and sends the encoded data to the data transmission module to be cached; the data transmission module returns the cached data to the host over the PCIe interface.
3. The high-speed parallel Base64 encoding and decoding device according to claim 1, characterised in that, in the decoding module, the data reception module caches the queue information and data sent by the host and, when caching is complete, notifies the decoding control module to decode the data; the decoding control module reads the pending data from the cache of the data reception module, decodes it according to the Base64 decoding rules, and sends the decoded data to the data transmission module to be cached; the data transmission module returns the cached data to the host over the PCIe interface.
4. The high-speed parallel Base64 encoding and decoding device according to claim 1, characterised in that the encoding module and the decoding module are both implemented in an FPGA, and the FPGA communicates with the host over a PCIe interface to receive and return data and information.
5. The high-speed parallel Base64 encoding and decoding device according to claim 1, characterised in that, in the encoding module, when data is processed in 16-byte groups, the encoding flow comprises:
Step 1: the host inputs queue information indicating the data length of this encoding operation, the operation sequence number, and the offset address for returning data to the host;
Step 2: the host issues the pending data; when caching data, the data reception module re-splices the 64-bit words input by the host into 128-bit words, the splicing rule being that the data input first is cached in the high 64 bits and the data input later in the low 64 bits; when the valid input data is less than 128 bits, zeros are appended after the valid data before caching; once at least one 128-bit word has been cached, a data-ready signal is asserted;
Step 3: determine whether there is pending data in the data cache of the current data reception module; if so, assert the data-ready signal, otherwise continue waiting for the host to issue data;
Step 4: after the encoding control module receives the data-ready signal, it reads 16 bytes of data from the data reception module;
Step 5: each time 16 bytes of data are read, a zero-insertion operation is performed and the result is cached; this is repeated for 3 consecutive reads; when the remaining data is less than 48 bytes, zero padding is performed;
Step 6: determine whether the last data has been processed; if so, go to step 8, otherwise go to step 7;
Step 7: determine whether 3 iterations have been completed; if so, go to step 8, otherwise go to step 4;
Step 8: the data in the cache register is looked up byte by byte, converting the zero-inserted data into ASCII codes;
Step 9: determine whether the last data has been processed; if so, go to step 10, otherwise go to step 12;
Step 10: determine whether the current zero-inserted data needs padding; if so, go to step 11, otherwise go to step 12; the rule for deciding whether padding is needed is to compute the encoded data length from the pending data length indicated in the queue information;
Step 11: pad the valid encoded data with '=' up to 32 bytes, and update the length field in the queue information to be returned to the host to 32 bytes;
Step 12: the encoded data is sent to the cache of the data transmission module;
Step 13: determine whether the last byte has been processed; if so, go to step 14, otherwise go to step 3;
Step 14: first send the updated queue information, indicating the returned data length, the operation sequence number, and other information;
Step 15: send the data in the cache to the host;
Step 16: determine whether the currently transmitted data is the last data; if so, go to step 17, otherwise go to step 15;
Step 17: send the updated queue information again, to indicate that the data transmission is complete.
6. The high-speed parallel Base64 encoding and decoding device according to claim 5, characterised in that, in step 5, assuming the pending data is 23 bytes, the zero-insertion operation for a pending data length of 23 bytes is specifically: first, 16 bytes of data are read, and starting from the most significant bit, 2 zero bits are inserted before every 6 bits of the 16-byte data, so that the data after zero insertion is 172 bits; these 172 bits are cached in bits 512-340 of a register whose bit width is 512 bits; then the remaining 7 bytes, 56 bits in total, are read, bits 56-53 are cached in bits 339-336 of the cache register, and 2 zero bits are inserted before every 6 bits of the remaining 52 bits from high to low, so that the 52 bits become 70 bits after zero insertion; these 70 bits are placed in bits 335-266 of the cache register; according to the Base64 rules, bits 265 and 264 of the cache register are filled with 0, so that the 23 bytes of data are converted into 31 bytes.
7. The high-speed parallel Base64 encoding and decoding device according to claim 1, characterised in that, in the decoding module, when data is processed in 16-byte groups, the decoding flow comprises:
Step 1: the host inputs queue information indicating the data length of this decoding operation, the operation sequence number, and the offset address for returning data to the host;
Step 2: the host issues the pending data; when caching data, the data reception module re-splices the 64-bit words input by the host into 128-bit words, the splicing rule being that the data input first is cached in the high 64 bits and the data input later in the low 64 bits; when the valid input data is less than 128 bits, zeros are appended after the valid data before caching; once at least one 128-bit word has been cached, a data-ready signal is asserted;
Step 3: determine whether there is pending data in the data cache of the current data reception module; if so, assert the data-ready signal, otherwise continue waiting for the host to issue data;
Step 4: after the decoding control module receives the data-ready signal, it reads 16 bytes of data from the data reception module;
Step 5: determine whether the currently read data is the last pending data; if so, go to step 6, otherwise go to step 8;
Step 6: determine whether the current data contains padding values, the rule being to check whether the ASCII value of '=' is present in the data; if so, go to step 7, otherwise go to step 8;
Step 7: remove the '=', and fill the position of the '=' and the data after it with 0;
Step 8: perform a table lookup to convert the ASCII code values of the data into the Base64 values 0-63, for the subsequent zero-removal operation;
Step 9: remove the inserted zeros, the removal rule being to discard the 0 in the high 2 bits of every 8 bits of the looked-up data and to re-splice the remaining 6 bits of each value in order from high to low; since 16 bytes are processed each time, 32 bits of 0 are removed in total and the remaining 96 bits of data are re-spliced into 12 bytes;
Step 10: the decoded 12 bytes are written into a register whose bit width is 384 bits and cached; when the last decoded data is less than 16 bytes, the missing part is padded with 0, and the data length information in the queue information to be returned to the host is updated to indicate the total valid data length;
Step 11: determine whether the last data has been processed; if so, go to step 13, otherwise go to step 12;
Step 12: determine whether 4 decoding operations have been performed; if so, go to step 13, otherwise go to step 3;
Step 13: the valid data in the cache register is written into the transmission cache of the data transmission module;
Step 14: determine whether the last data has been processed; if so, go to step 15, otherwise go to step 3;
Step 15: first send the updated queue information, indicating the returned data length, the operation sequence number, and other information;
Step 16: send the data in the cache to the host;
Step 17: determine whether the currently transmitted data is the last data; if so, go to step 18, otherwise go to step 16;
Step 18: send the updated queue information again, to indicate that the data transmission is complete.
8. The high-speed parallel Base64 encoding and decoding device according to claim 1, characterised in that, in step 10, assuming the pending data is 23 bytes, the 12 bytes obtained from the first pass are stored in bits 384-289 of the cache register; the 11 bytes obtained from the second pass are stored in bits 288-199 of the cache register, and bits 198-129 of the cache register are padded with 0 to complete an integer multiple of 16 bytes, while the valid data length in the queue information is updated to 23 bytes; the reason the bit width of the cache register is 384 bits is that every 16 bytes of data become 12 bytes, i.e. 96 bits, after zero removal, and when the data is long enough every 4 passes produce 48 bytes, i.e. 384 bits, of data; when the last decoded data is less than 16 bytes, the missing part is padded with 0, and the data length information in the queue information to be returned to the host is updated to indicate the total valid data length.
CN201610384154.4A 2016-06-02 2016-06-02 A kind of parallel coding and decoding device of high speed Base64 Pending CN106066843A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610384154.4A CN106066843A (en) 2016-06-02 2016-06-02 A kind of parallel coding and decoding device of high speed Base64

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610384154.4A CN106066843A (en) 2016-06-02 2016-06-02 A kind of parallel coding and decoding device of high speed Base64

Publications (1)

Publication Number Publication Date
CN106066843A true CN106066843A (en) 2016-11-02

Family

ID=57421065

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610384154.4A Pending CN106066843A (en) 2016-06-02 2016-06-02 A kind of parallel coding and decoding device of high speed Base64

Country Status (1)

Country Link
CN (1) CN106066843A (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1695115A (en) * 2002-08-28 2005-11-09 英特尔公司 Performing repeat string operations
CN101079879A (en) * 2006-12-19 2007-11-28 腾讯科技(深圳)有限公司 An Email transport system and method
CN103580851A (en) * 2013-11-13 2014-02-12 福建省视通光电网络有限公司 Information encryption and decryption method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
李道强: "浅谈Base64编码及简单加密中的应用" (A brief discussion of Base64 encoding and its application in simple encryption), 《福建电脑》 (Fujian Computer) *
福建电脑: "基于GPU的Base64并行算法研究" (Research on a GPU-based parallel Base64 algorithm), 《福建电脑》 (Fujian Computer) *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108289037A (en) * 2017-12-08 2018-07-17 上海悠络客电子科技股份有限公司 A kind of equipment wireless parameter configuration method based on sound wave
CN115037981A (en) * 2021-03-05 2022-09-09 奇安信科技集团股份有限公司 Data stream decoding method and device, electronic equipment and storage medium
CN115037981B (en) * 2021-03-05 2024-05-31 奇安信科技集团股份有限公司 Decoding method and device of data stream, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
US11431351B2 (en) Selection of data compression technique based on input characteristics
CN105183557B (en) A kind of hardware based configurable data compression system
CN103997346A (en) Data matching method and device based on assembly line
EP3316150A1 (en) Method and apparatus for file compaction in key-value storage system
WO2013048531A1 (en) Compression format for high bandwidth dictionary compression
CN202931289U (en) Hardware LZ 77 compression implement system
CN101449462A (en) High-speed data compression based on set associative cache mapping techniques
CN101996139A (en) Data matching method and data matching device
CN100378687C (en) A cache prefetch module and method thereof
CN103095305A (en) System and method for hardware LZ77 compression implementation
CN110334066A (en) A kind of Gzip decompression method, apparatus and system based on FPGA
CN106066843A (en) A kind of parallel coding and decoding device of high speed Base64
CN111324564B (en) Elastic caching method
CN105550979A (en) High-data-throughput texture cache hierarchy structure
CN105184185B (en) For detaching storage and the key disks of restoring data and its detaching and restoring data method
CN111835494B (en) Multi-channel network data transmission system and method
CN101510175B (en) Method for updating target data to memory and apparatus thereof
US8700859B2 (en) Transfer request block cache system and method
CN107612891B (en) Data compression encryption circuit
CN101526925A (en) Processing method of caching data and data storage system
CN100423453C (en) Arithmetic coding decoding method implemented by table look-up
CN104915153A (en) Method for double control cache synchronous design based on SCST
CN103488617A (en) Data interception method and device
BR0309059A (en) Process and communication disposition for adapting the data reason in a communication disposition
CN115913246A (en) Lossless data compression algorithm based on self-adaptive instantaneous entropy

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20161102)