CN113207004A - Remote sensing image compression algorithm hardware implementation method based on JPEG-LS interframe expansion - Google Patents

Remote sensing image compression algorithm hardware implementation method based on JPEG-LS interframe expansion

Info

Publication number
CN113207004A
Authority
CN
China
Prior art keywords: data, block, frame, predictor, row
Prior art date
Legal status
Granted
Application number
CN202110483170.XA
Other languages
Chinese (zh)
Other versions
CN113207004B (en)
Inventor
陈立群
崔裕宾
颜露新
钟胜
颜章
杨桂彬
张思宇
Current Assignee
Huazhong University of Science and Technology
Original Assignee
Huazhong University of Science and Technology
Priority date
Filing date
Publication date
Application filed by Huazhong University of Science and Technology filed Critical Huazhong University of Science and Technology
Priority to CN202110483170.XA priority Critical patent/CN113207004B/en
Publication of CN113207004A publication Critical patent/CN113207004A/en
Application granted granted Critical
Publication of CN113207004B publication Critical patent/CN113207004B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/50: using predictive coding
    • H04N 19/503: involving temporal prediction
    • H04N 19/51: Motion estimation or motion compensation
    • H04N 19/527: Global motion vector estimation
    • H04N 19/10: using adaptive coding
    • H04N 19/102: characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N 19/103: Selection of coding mode or of prediction mode
    • H04N 19/107: Selection between spatial and temporal predictive coding, e.g. picture refresh
    • H04N 19/109: Selection among a plurality of temporal predictive coding modes
    • H04N 19/134: characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N 19/164: Feedback from the receiver or from the transmission channel
    • H04N 19/166: Feedback concerning the amount of transmission errors, e.g. bit error rate [BER]
    • H04N 19/169: characterised by the coding unit, i.e. the structural or semantic portion of the video signal being the object of the adaptive coding
    • H04N 19/17: the unit being an image region, e.g. an object
    • H04N 19/176: the region being a block, e.g. a macroblock
    • H04N 19/42: characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
    • H04N 19/436: using parallelised computational arrangements

Abstract

The invention discloses a hardware implementation method for a remote sensing image compression algorithm based on JPEG-LS interframe expansion, and belongs to the technical field of image compression. The method comprises the following steps: (1) controlling the compression mode; (2) partitioning the coded-frame and reference-frame images into blocks; (3) performing motion estimation to obtain the best matching block; (4) computing multiple predictors in parallel and selecting the best predictor; (5) performing multi-branch modeling and prediction to obtain the residuals; (6) applying Golomb limited-length coding to the residuals and outputting the compressed code stream. The method supports an efficient interframe expansion structure with multi-channel data caching for block access, full-search motion estimation and parallel computation of multiple predictors, and adopts a pipeline and a sliding template window to raise the pixel throughput. Inter-frame information is introduced on top of JPEG-LS intra-frame compression, and motion-compensated inter-frame prediction removes spatial and temporal image redundancy at the same time, so the compression efficiency is high.

Description

Remote sensing image compression algorithm hardware implementation method based on JPEG-LS interframe expansion
Technical Field
The invention belongs to the technical field of image compression, and particularly relates to a remote sensing image compression algorithm hardware implementation method based on JPEG-LS interframe expansion.
Background
With the rapid development of satellite remote sensing technology in China, the data volume generated by satellite-borne imaging payloads grows day by day. This huge volume of remote sensing image data puts great pressure on the limited on-board storage and the satellite-to-ground link bandwidth, and compressing the satellite-borne remote sensing images is an effective way to relieve it. JPEG-LS is an ISO/ITU standard for lossless compression of continuous-tone images with excellent compression performance and well-controlled computational complexity. The lossless/near-lossless image compression algorithm based on JPEG-LS interframe expansion adopts motion compensation and multiple predictors, introducing the temporal dimension of the image sequence on top of its two spatial dimensions, so that the pixel correlation of sequence images can be reduced in both space and time and a higher compression ratio obtained.
Satellite remote sensing imaging is costly and its data are extremely precious, so image compression and coding must guarantee high fidelity of the important information in the regions of interest; high-fidelity compression algorithms, however, usually have low compression efficiency, which puts great pressure on the satellite-to-ground transmission bandwidth. In addition, the satellite-borne platform is short of computing and storage resources and cannot buffer camera and code-stream data for long, so the on-board system must provide strong real-time compression under limited-resource constraints. In summary, a satellite-borne remote sensing image compression system faces the technical challenges of high fidelity and strong real-time performance, and must resolve two contradictions: compression ratio versus fidelity, and real-time compression requirements versus limited satellite resources.
For the problem of high-fidelity compression, the patent 'JPEG-LS image lossless/near-lossless compression method preventing error-code diffusion' (application number CN201610165800.8, publication number CN105828070A) introduces block-wise compression on top of JPEG-LS; on the premise that the target of interest is not lost, different near-lossless parameters are applied to different regions, which improves the overall compression ratio of the image. However, this method considers only the spatial correlation of the images and cannot remove the temporal redundancy of sequence images, so the compression ratio remains low.
For the problem of strong real-time compression, the patent 'JPEG-LS regular coding hardware implementation method' (application number CN201210198818.X, publication number CN102724506A) addresses the complex parameter-update and error-value computation structure and the slow processing rate of the JPEG-LS compression algorithm. However, that method implements only the lossless compression function of the standard; it sidesteps the real-time implementation of the pixel-reconstruction feedback loop and loses the near-lossless compression capability.
Disclosure of Invention
In view of the above defects or improvement needs of the prior art, the invention provides a remote sensing image compression method based on JPEG-LS interframe expansion. It implements an efficient interframe expansion structure supporting multi-channel data caching for block access, full-search motion estimation and parallel computation of multiple predictors, and adopts a pipeline and a sliding template window, thereby solving two problems of satellite-borne remote sensing image compression systems: the low compression ratio under high-fidelity compression, and the difficulty of implementing the compression algorithm under limited-resource constraints.
In order to achieve the aim, the invention provides a remote sensing image compression algorithm hardware implementation method based on JPEG-LS interframe expansion, which comprises the following steps:
(1) caching the image data of the coded frame and the reference frame in an off-chip memory, and obtaining coding-block data and search-block data respectively according to the block row/column parameters;
(2) performing a full search based on the SAD criterion within the motion search block formed from the reference frame to obtain the best matching block of the coding block, and outputting the coding block and the matching block to the next stage;
(3) generating synchronized causal templates for the coding-block and matching-block images, computing multiple predictors in parallel, and taking the predictor with the smallest sum of absolute residuals within the block as the best predictor;
(4) performing fixed prediction with the best predictor, calculating the prediction residual in combination with the adaptive corrector, and obtaining the Golomb coding parameters from the context modeling parameters;
(5) calculating the coding parameters, completing Golomb limited-length coding, and framing and outputting the compressed code stream and the decoding side information.
Specifically, the step (1) specifically includes:
(11) four FIFOs are used to cache, respectively, the coded-frame write data and read data and the reference-frame write data and read data, providing data bit-width conversion and clock-domain isolation;
(12) a counter counts the write-data enable signals to accumulate the storage address and write the data in sequence; according to the block row/column parameters, once the number of rows of one block has been cached, the read address and offset address are calculated and coding blocks of equal size that do not overlap one another are output from the storage area;
(13) because the image is compressed block by block, the compressed-and-reconstructed image data exist in block form, and the write address and offset address must be calculated from the block row/column parameters to obtain the complete reference-frame image; with the motion search step set to P, the search-block row and column parameters are ROW+2P and COL+2P respectively, the read address and offset address are calculated, and search blocks of equal size that overlap one another are output;
(14) the fill levels of the several channels are monitored, the fill level of a write channel being determined by the input-FIFO occupancy and the free space of its storage partition, and the fill level of a read channel by the output-FIFO free space and the occupancy of its storage partition; a fixed-priority strategy arbitrates among the channel fill levels, and the bus is granted to the channel whose fill level is high so as to complete the data transfer (a behavioral sketch of this arbitration is given after this list).
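The following is a minimal behavioral sketch of the arbitration described in step (14). The channel names and the tie-breaking rule are assumptions for illustration; the patent only states that fill levels are compared and that a fixed priority decides the grant.

```python
# Behavioral sketch of the multi-channel bus arbiter in step (14).
# Channel names and the tie-breaking rule are assumed for illustration only.

def arbitrate(fill_levels, priority_order):
    """Return the channel that gets the bus for the next burst.

    fill_levels   : dict channel_name -> occupancy (words waiting to be transferred)
    priority_order: list of channel names, highest fixed priority first
    """
    # Only channels that actually have work to do compete for the bus.
    candidates = [ch for ch in priority_order if fill_levels.get(ch, 0) > 0]
    if not candidates:
        return None
    # Grant the fullest channel; ties fall back to the fixed priority order.
    return max(candidates, key=lambda ch: (fill_levels[ch], -priority_order.index(ch)))

# Example: the coded-frame write FIFO is nearly full, so it wins the bus.
levels = {"enc_write": 480, "enc_read": 120, "ref_write": 480, "ref_read": 60}
print(arbitrate(levels, ["enc_write", "ref_write", "enc_read", "ref_read"]))  # enc_write
```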
Specifically, the step (2) specifically includes:
(21) 2P FIFOs and (2P+1)×(2P+1) registers cascade-cache 2P rows of search-block data; when the (2P+1)-th datum of the (2P+1)-th row of the search block arrives, a (2P+1)×(2P+1) matching window is formed, the first datum of the first row of the coding block is read, and the coding-block datum is aligned with the window data and output to the SAD calculation module; when the (2P+2)-th datum of the (2P+1)-th row of the search block arrives, a new (2P+1)×(2P+1) matching window is formed, the second datum of the first row of the coding block is read and output in alignment, and the SAD calculation is completed after (ROW+2P)×(COL+2P) pixel clock cycles;
(22) the sign bit of each coding-block pixel and of each matching-window pixel is extended and their difference is formed with combinational logic; the sign bit of the difference is then examined: if it is negative the data are bitwise inverted and 1 is added, and if it is positive the data are kept unchanged, yielding the absolute difference; an accumulator of sufficient width accumulates the absolute values, and when all pixels of a coding block have been accumulated the (2P+1)×(2P+1) absolute-value sums are sent to the compare-and-select circuit module;
(23) the (2P+1)×(2P+1) matching results are divided into (2P+1) groups and compared with a two-stage pipeline; the first stage compares the (2P+1) values within each group to obtain the group minimum, and the second stage compares the (2P+1) group minima to obtain the final minimum; the block with the minimum SAD is the best matching block, the motion vector {m, n} of the coding block is output, and the SAD of the best matching block is output to the predictor selection module as the sum of absolute residuals of the fourth predictor;
(24) two on-chip FIFOs store the search-block data and the coding-block data respectively, the search-block data volume being (ROW+2P)×(COL+2P) and the coding-block data volume ROW×COL; after the best-matching-block result is output, the search-block data are read out, a row/column count determines whether each datum is valid, and when it is valid the corresponding coding-block datum is read out as well and both are output to the predictor selection module (a software reference model of the full search in steps (21) to (23) is sketched below).
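The sketch below is a software reference model of the result computed by steps (21) to (23): every one of the (2P+1)×(2P+1) displacements of the coding block inside the search block is scored with the SAD criterion and the minimum wins. It reproduces what the sliding-window hardware computes, not its pipelined structure; all names are illustrative.

```python
import numpy as np

def full_search_sad(coding_block, search_block, P):
    """Exhaustive SAD search over all (2P+1)x(2P+1) candidate positions."""
    ROW, COL = coding_block.shape
    assert search_block.shape == (ROW + 2 * P, COL + 2 * P)
    ref = coding_block.astype(np.int32)
    best = None
    for m in range(2 * P + 1):            # vertical offset of the candidate block
        for n in range(2 * P + 1):        # horizontal offset of the candidate block
            cand = search_block[m:m + ROW, n:n + COL].astype(np.int32)
            sad = int(np.abs(cand - ref).sum())
            if best is None or sad < best[0]:
                best = (sad, (m - P, n - P), cand)
    sad_min, motion_vector, matching_block = best
    # sad_min doubles as the residual sum of the fourth predictor (step (23)).
    return sad_min, motion_vector, matching_block
```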
Specifically, the step (3) specifically includes:
(31) rows and columns of the pixels are counted with the image block as the unit; the original coding-block pixel Ix and the reconstructed matching-block pixel Rx_i are concatenated and written into a FIFO cache; after one row has been cached the FIFO is read and the data decomposed, outputting the 3 intra-frame causal-template pixels Ia, Ib and Ic and the 4 inter-frame causal-template pixels Ra_i, Rb_i, Rc_i and Uk;
the template structure and the boundary handling are as follows:
when a coding-block pixel is in a non-first row and non-first column, Ia is the left neighbour, Ib the upper neighbour and Ic the upper-left neighbour;
when a coding-block pixel is in the first row and first column, Ia, Ib and Ic all use the first pixel;
when a coding-block pixel is in the first row but not the first column, Ib and Ic use the first pixel;
when a coding-block pixel is in a non-first row and the first column, Ia uses Ib and Ic uses the Ia of the previous row; the reference-frame pixels are handled the same way;
(32) three predicted values Px_1, Px_2 and Px_3 are computed in parallel from the intra-frame and inter-frame causal-template pixels; the first predictor is an intra-frame predictor, and the second and third predictors are inter-frame predictors;
(33) the actual value of the current pixel is subtracted from each of the three predicted values to obtain the residuals of the different predictors, their absolute values are taken, and the sum of absolute residuals within one block is accumulated; when the statistics of one block are finished, the sum of absolute residuals of the fourth predictor output by the motion estimation module is aligned with them and all are output to the predictor selection module;
(34) the 4 sums SUM1, SUM2, SUM3 and SUM4 are compared, and the predictor with the smallest sum is the best predictor;
(35) the pixel data of the coding block and the matching block are concatenated and cached in a FIFO; after the predictor selection result is output, the data in the FIFO are read out and the block data are sent, together with the predictor selection result, to the multi-branch modeling and prediction module (a selection sketch covering steps (32) to (34) follows this list).
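The sketch below illustrates the block-adaptive selection of steps (32) to (34). The intra predictor shown is the standard JPEG-LS median (MED) predictor; the two inter-frame predictor formulas are given only in Fig. 8 and are therefore passed in as callables here, and SUM4 is the SAD forwarded by the motion estimation stage. All names are illustrative.

```python
def med_predictor(Ia, Ib, Ic):
    # Standard JPEG-LS intra prediction from the left/upper/upper-left neighbours.
    if Ic >= max(Ia, Ib):
        return min(Ia, Ib)
    if Ic <= min(Ia, Ib):
        return max(Ia, Ib)
    return Ia + Ib - Ic

def select_predictor(block_templates, inter_pred_2, inter_pred_3, sad_best_match):
    """block_templates yields (Ix, Ia, Ib, Ic, Ra_i, Rb_i, Rc_i, Uk) per pixel."""
    sums = [0, 0, 0, sad_best_match]          # SUM1..SUM4 (SUM4 from motion estimation)
    for Ix, Ia, Ib, Ic, Ra_i, Rb_i, Rc_i, Uk in block_templates:
        preds = (med_predictor(Ia, Ib, Ic),
                 inter_pred_2(Ia, Ib, Ic, Ra_i, Rb_i, Rc_i, Uk),
                 inter_pred_3(Ia, Ib, Ic, Ra_i, Rb_i, Rc_i, Uk))
        for k, Px in enumerate(preds):
            sums[k] += abs(Ix - Px)           # accumulate |residual| per predictor
    best = min(range(4), key=lambda k: sums[k])
    return best + 1, sums                     # 1-based index of the best predictor
```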
Specifically, the step (4) specifically includes:
(41) rows and columns of the pixels are counted with the image sub-block as the unit; the coding-block reconstructed pixel Rx and the matching-block reconstructed pixel Rx_i are concatenated and written into a FIFO cache; after one row has been cached the FIFO is read and the data decomposed, outputting the 4 intra-frame causal-template pixels Ra, Rb, Rc and Rd and the 4 inter-frame causal-template pixels Ra_i, Rb_i, Rc_i and Uk;
the template structure and the boundary handling are as follows:
when a coding-block pixel is at a general (non-boundary) position, Ra is the left neighbour, Rb the upper neighbour, Rc the upper-left neighbour and Rd the upper-right neighbour;
when a coding-block pixel is in the first row and first column, Ra, Rb, Rc and Rd all use the first pixel;
when a coding-block pixel is in the first row but not the first column, Rb, Rc and Rd use the first pixel;
when a coding-block pixel is in a non-first row and the first column, Ra uses Rb, and Rc uses the Ra of the previous row;
when a coding-block pixel is in the last column of a non-first row, Rd uses Rb; the reference-frame pixels are handled the same way;
(42) according to the distortion-limiting property of near-lossless compression, the pixel reconstruction value Rx and the actual pixel value Ix_r satisfy Rx = Ix_r ± Near; when Near takes its maximum value of 2, the candidate values of Ra are Ix_r-2, Ix_r-1, Ix_r, Ix_r+1 and Ix_r+2;
(43) the local context of the current pixel is modeled from the coding-block causal-template pixels Ra, Rb, Rc and Rd to obtain the address index values Q1 and Q2 and the inversion signs SIGN1 and SIGN2;
(44) in the pipelined design, pixel address conflicts within the four clock cycles between reading the context parameters from the RAM and writing the updated parameters back must be recorded: the context address of the current pixel is compared one by one with those of the next three cycles, Conflict is set to 1 where they are equal and to 0 otherwise, forming the address-conflict types Conflict1[2:0] and Conflict2[2:0]; according to the flag bits Conflict1[0] and Conflict2[0], the RAM read operation is skipped when a read-write conflict occurs;
(45) Ra has five candidate values, so prediction with the best predictor yields the predicted values Px_1, Px_2, Px_3, Px_4 and Px_5; C_sel1, C_sel2, N_sel1 and N_sel2 are selected according to the address-conflict types Conflict1 and Conflict2; the selected parameter N is updated to give N_update1 and N_update2, and the N-parameter update flags N_flag1 and N_flag2 are generated; the fixed predicted values are corrected with the C parameter of the pixel's context address to give Px_correct1, Px_correct2, Px_correct3, Px_correct4 and Px_correct5; the actual pixel value Ix is subtracted from the corrected predictions, and when SIGN is -1 the residuals are inverted, giving Errval1, Errval2, Errval3, Errval4 and Errval5;
(46) when the pixel reconstruction of the previous clock cycle is complete, the correct reconstruction-value (Ra) branch is selected, giving the residual Errval, the sign SIGN and the address-conflict type Conflict; to avoid the division, the residual quantization is performed by table lookup: the ROM read address is {|Errval|, Near} and the stored data are {|Errval_q|, Remainder};
(47) the C parameter update has five cases (+2, +1, unchanged, -1, -2), and the remainder of the residual quantization is compensated accordingly: the quotient, i.e. the quantized residual magnitude |Errval_q|, is adjusted according to the relation between the remainder and the divisor, and the quantized residual Errval_q is determined together with its sign Symbol; the pixel reconstruction value is obtained from the actual pixel value Ix, the sign SIGN, the sign Symbol of the quantized residual, the near-lossless parameter Near and the compensated Remainder; the compensated quantized residual is then reduced modulo the range;
(48) according to the address-conflict type Conflict and the C parameters of the two neighbouring pixels, the update flags C_flag and C_flag_r are generated and the correct C branch is selected, giving the residual modulo value Errval_mod, the pixel reconstruction value Rx and the correct parameter correction value C_correct;
(49) the A and B parameters corresponding to the address-conflict type are selected, giving A_sel and B_sel; the A, B and C parameter updates are completed together with N_flag, and the updated A, B, C, N are written simultaneously into the two parameter RAMs; the Golomb coding parameter K is calculated from the A and N parameters before the update; the residual modulo value Errval_mod is mapped to a non-negative integer MErrval according to the near-lossless parameter Near, the coding parameter K and the B and N parameters before the update (a behavioral sketch of the quantization and reconstruction in steps (46) and (47) follows this list);
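The following is a behavioral sketch of the residual quantization, pixel reconstruction and range reduction referred to in steps (46) and (47), written according to the standard JPEG-LS near-lossless rules; the hardware replaces the division by a ROM lookup addressed with {|Errval|, Near}, whereas the sketch computes it directly. Variable names are illustrative.

```python
def quantize_and_reconstruct(Ix, Px_corrected, sign, near, maxval, rng):
    """Near-lossless residual quantization, decoder-side reconstruction and modulo reduction."""
    errval = (Ix - Px_corrected) if sign == 1 else (Px_corrected - Ix)
    # Quantize the residual so that the reconstruction error stays within +/-Near.
    if errval > 0:
        errval_q = (near + errval) // (2 * near + 1)
    else:
        errval_q = -((near - errval) // (2 * near + 1))
    # Reconstruct the pixel the decoder will see (clamped to the valid range).
    Rx = Px_corrected + sign * errval_q * (2 * near + 1)
    Rx = min(max(Rx, 0), maxval)
    # Reduce the quantized residual modulo RANGE before Golomb coding.
    errval_mod = errval_q
    if errval_mod < 0:
        errval_mod += rng
    if errval_mod >= (rng + 1) // 2:
        errval_mod -= rng
    return errval_mod, Rx

# Example with the embodiment parameters (Near = 2, 12-bit pixels, RANGE = 820):
print(quantize_and_reconstruct(Ix=1000, Px_corrected=993, sign=1,
                               near=2, maxval=4095, rng=820))   # (1, 998)
```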
specifically, the step (43) specifically includes:
(431) three local gradients are calculated from Ra, Rb, Rc and Rd: D[0] is the difference between Rd and Rb, D[1] the difference between Rb and Rc, and D[2] the difference between Rc and Ra; note that since Ra has 5 candidate values, D[2] also has 5 values, with maximum Rc - Ix_r + Near and minimum Rc - Ix_r - Near, the maximum differing from the minimum by 2*Near;
(432) the gradients D[0], D[1], D[2] are quantized according to the quantization thresholds T1, T2, T3 and Near: when D[i] <= -T3, Q[i] = -4; when -T3 < D[i] <= -T2, Q[i] = -3; when -T2 < D[i] <= -T1, Q[i] = -2; when -T1 < D[i] < -Near, Q[i] = -1; when -Near <= D[i] <= Near, Q[i] = 0; when Near < D[i] < T1, Q[i] = 1; when T1 <= D[i] < T2, Q[i] = 2; when T2 <= D[i] < T3, Q[i] = 3; when T3 <= D[i], Q[i] = 4; note that Q[2] has at most two different values, since the smallest quantization interval is 2*Near and the maximum of D[2] differs from its minimum by 2*Near;
(433) the quantized gradients (Q[0], Q[1], Q[2]) are fused into Q = 81*Q[0] + 9*Q[1] + Q[2]; if the first non-zero element of (Q[0], Q[1], Q[2]) is negative, the triple is negated to (-Q[0], -Q[1], -Q[2]) before fusion and SIGN = -1; note that since Q[2] has two possible values, two address index values Q1 and Q2 and the inversion signs SIGN1 and SIGN2 are finally obtained (a reference sketch of this context modeling follows);
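Below is a reference sketch of the context modeling in steps (431) to (433): local gradients, 9-level quantization against T1, T2, T3 and Near, and fusion into 81*Q[0] + 9*Q[1] + Q[2] with a sign flip when the first non-zero quantized gradient is negative. It computes one index for one Ra candidate; the hardware evaluates the two candidate branches Q1/Q2 in parallel. Names are illustrative.

```python
def quantize_gradient(d, t1, t2, t3, near):
    # 9-level gradient quantization of step (432).
    if d <= -t3:   return -4
    if d <= -t2:   return -3
    if d <= -t1:   return -2
    if d <  -near: return -1
    if d <=  near: return 0
    if d <   t1:   return 1
    if d <   t2:   return 2
    if d <   t3:   return 3
    return 4

def context_index(Ra, Rb, Rc, Rd, t1, t2, t3, near):
    D = (Rd - Rb, Rb - Rc, Rc - Ra)                       # step (431)
    Q = [quantize_gradient(d, t1, t2, t3, near) for d in D]
    sign = 1
    if next((q for q in Q if q != 0), 0) < 0:             # first non-zero element negative?
        Q = [-q for q in Q]
        sign = -1
    return 81 * Q[0] + 9 * Q[1] + Q[2], sign              # step (433)

# Embodiment thresholds: T1 = 18, T2 = 67, T3 = 276, Near = 2.
print(context_index(Ra=100, Rb=96, Rc=95, Rd=120, t1=18, t2=67, t3=276, near=2))  # (161, 1)
```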
in particular, step (49) comprises the following sub-steps:
(491) A_sel and B_sel are selected according to the address-conflict type Conflict and output to the context parameter update module;
(492) the parameters A, B and C are updated using A_sel, B_sel, C_correct and N_flag, the update being implemented in combinational logic; the updated parameters A_update, B_update, C_update and N_update are written simultaneously into the two parameter RAMs; the A_sel, B_sel, N_sel and residual modulo value Errval_mod not yet updated by the current pixel are sent to the K-value calculation and residual mapping module;
(493) following the algorithm, the N parameter is left-shifted and compared with the A parameter step by step until N shifted left by K bits is greater than or equal to A, at which point the value K is output;
(494) following the algorithm, the lossless and near-lossless residual mapping modes are distinguished and the modulo value Errval_mod of the signed residual is mapped to the non-negative residual mapping value MErrval; note that in the lossless case the relation between the B parameter and the N parameter must be checked (a sketch of the K computation and residual mapping follows this list);
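The sketch below covers steps (493) and (494): the Golomb parameter K is the smallest k for which N shifted left by k bits is at least A, and the reduced residual is mapped to a non-negative value; the special case 2*B <= -N applies only in the lossless mode (Near = 0), which is the B/N relation noted in step (494). This follows the standard JPEG-LS mapping and is a sketch, not the exact hardware datapath.

```python
def golomb_k(A, N):
    # Smallest k such that N << k >= A (step (493)).
    k = 0
    while (N << k) < A:
        k += 1
    return k

def map_errval(errval_mod, B, N, near, k):
    # Map the signed, range-reduced residual to a non-negative integer (step (494)).
    if near == 0 and k == 0 and 2 * B <= -N:       # lossless-only special case
        return 2 * errval_mod + 1 if errval_mod >= 0 else -2 * (errval_mod + 1)
    return 2 * errval_mod if errval_mod >= 0 else -2 * errval_mod - 1

print(golomb_k(A=13, N=1))                         # 4 for a freshly initialized context
print(map_errval(-3, B=0, N=1, near=2, k=4))       # 5
```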
specifically, step (5) includes the following substeps:
(51) the residual mapping value MErrval is right-shifted by K bits in combinational logic and passes through one stage of D flip-flops to give the quotient val_temp; val_temp is shifted left by K bits and passes through one stage of D flip-flops to give MErrval_temp; MErrval_temp is then subtracted from MErrval and the remainder n is output through one stage of D flip-flops; to keep the quotient and remainder synchronized, val_temp is delayed by two register stages before being output as the quotient val;
(52) if val is smaller than the code-length upper limit MAX = LIMIT - qbpp - 1, the code word consists of val zero bits, one 1 bit and the K-bit remainder n; otherwise it consists of (LIMIT - qbpp - 1) zero bits, one 1 bit and the qbpp-bit value MErrval (a bit-level sketch of this limited-length coding follows step (54));
(53) a 64-bit register reg64 is defined; the code-stream data of the first pixel are placed in the low bits of reg64, and when the next code-stream data arrive the existing data are shifted left and the new data are placed in the low bits; whenever the register holds 64 valid bits its content is output to a FIFO and the register is emptied, and the operation repeats;
(54) when the compression of a coding block is finished, the first pixel of the block, the motion vector, the predictor selection and the block code-stream length serve as side information and are framed together with the block code stream to form the block data; the intra/inter compression mode parameter, the inter-frame compression period parameter, the near-lossless Near value parameter and the block row/column parameters serve as whole-image side information and are framed with all block data to form the final compressed code stream, which is output;
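A bit-level sketch of the limited-length Golomb code word of step (52), as the patent text states it: a quotient below MAX = LIMIT - qbpp - 1 is coded in unary followed by the K low bits of the remainder, otherwise an escape of MAX zeros, a single 1 and qbpp bits of MErrval is emitted. (Baseline JPEG-LS writes MErrval - 1 in the escape branch; the sketch follows the wording above.)

```python
def limited_golomb_bits(merrval, k, limit, qbpp):
    """Return the code word of step (52) as a bit string."""
    max_unary = limit - qbpp - 1
    val = merrval >> k
    if val < max_unary:
        bits = "0" * val + "1"                      # unary quotient, terminating 1
        if k > 0:
            bits += format(merrval & ((1 << k) - 1), f"0{k}b")   # K-bit remainder
        return bits
    # Escape: MAX zeros, a 1, then the value itself in qbpp bits.
    return "0" * max_unary + "1" + format(merrval, f"0{qbpp}b")

# Embodiment parameters: LIMIT = 48, qbpp = 10, K = 4.
print(limited_golomb_bits(merrval=5, k=4, limit=48, qbpp=10))   # '10101' (quotient 0, remainder 0101)
```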
specifically, the step (1) further comprises the following steps:
the uplinked configuration instruction is parsed according to the protocol, the compression parameters are initialized, and the overall compression working mode is controlled; this specifically comprises the following sub-steps:
S1, according to the protocol, the intra/inter compression mode parameter, the inter-frame compression period parameter, the near-lossless Near value, the pixel bit-width parameter and the block row/column parameters are parsed.
S2, the pixel value range RANGE is calculated from the near-lossless parameter Near and the pixel bit width bpp, and from it qbpp, the Golomb coding length limit LIMIT and the gradient quantization thresholds are derived; the context parameters A, B, C, N are computed and the two parameter RAMs are initialized;
S3, the compression state of the current frame is controlled by the intra/inter compression mode parameter and the inter-frame compression period parameter; in intra-frame compression, reference-frame blocking and motion estimation are skipped, the intra-frame predictor is selected directly, and every image frame is intra-compressed; in inter-frame compression, the first frame of each period is intra-compressed and the remaining frames of the period are inter-compressed (a small scheduling sketch follows).
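A small sketch of the frame-type control in step S3, under the stated rule that the first frame of each inter-frame compression period is intra-coded and the remaining frames of the period are inter-coded; the function name and 0-based indexing are illustrative.

```python
def frame_is_intra(frame_idx, inter_mode, period):
    """frame_idx is 0-based; period is the inter-frame compression period (e.g. 16)."""
    if not inter_mode:
        return True            # intra mode: every frame is intra-compressed
    return frame_idx % period == 0   # inter mode: only the first frame of each period

# With the embodiment's period of 16, frames 1, 17, 33 (1-based) are intra-compressed.
print([i + 1 for i in range(34) if frame_is_intra(i, inter_mode=True, period=16)])  # [1, 17, 33]
```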
According to another aspect, the invention provides a hardware implementation system for a remote sensing image compression algorithm based on JPEG-LS interframe expansion, which comprises the following parts:
the first module, configured to cache the image data of the coded frame and the reference frame in an off-chip memory and to obtain coding-block data and search-block data respectively according to the block row/column parameters;
the second module, configured to perform a full search based on the SAD criterion within the motion search block formed from the reference frame, obtain the best matching block of the coding block and output the coding block and the matching block to the next stage;
the third module, configured to generate synchronized causal templates for the coding-block and matching-block images, compute multiple predictors in parallel and take the predictor with the smallest sum of absolute residuals within the block as the best predictor;
the fourth module, configured to perform fixed prediction with the best predictor, calculate the prediction residual in combination with the adaptive corrector and obtain the Golomb coding parameters from the context modeling parameters;
and the fifth module, configured to calculate the coding parameters, complete Golomb limited-length coding, and frame and output the compressed code stream and the decoding side information.
Specifically, the first module specifically includes:
the first unit, configured to use four FIFOs to cache, respectively, the coded-frame write data and read data and the reference-frame write data and read data, providing data bit-width conversion and clock-domain isolation;
the second unit, configured to count the write-data enable signals with a counter so as to accumulate the storage address and write the data in sequence; according to the block row/column parameters, once the number of rows of one block has been cached, the read address and offset address are calculated and coding blocks of equal size that do not overlap one another are output from the storage area;
the third unit, configured to calculate the write address and offset address from the block row/column parameters in order to obtain the complete reference-frame image, since the image is compressed block by block and the compressed-and-reconstructed image data exist in block form; with the motion search step set to P, the search-block row and column parameters are ROW+2P and COL+2P respectively, the read address and offset address are calculated, and search blocks of equal size that overlap one another are output;
and the fourth unit, configured to monitor the fill levels of the several channels, the fill level of a write channel being determined by the input-FIFO occupancy and the free space of its storage partition, and the fill level of a read channel by the output-FIFO free space and the occupancy of its storage partition; a fixed-priority strategy arbitrates among the channel fill levels, and the bus is granted to the channel whose fill level is high so as to complete the data transfer.
Specifically, the second module specifically includes:
the matching-template generation unit, configured to cascade-cache 2P rows of search-block data with 2P FIFOs and (2P+1)×(2P+1) registers; when the (2P+1)-th datum of the (2P+1)-th row of the search block arrives, a (2P+1)×(2P+1) matching window is formed, the first datum of the first row of the coding block is read, and the coding-block datum is aligned with the window data and output to the SAD calculation module; when the (2P+2)-th datum of the (2P+1)-th row arrives, a new (2P+1)×(2P+1) matching window is formed, the second datum of the first row of the coding block is read and output in alignment, and the SAD calculation is completed after (ROW+2P)×(COL+2P) pixel clock cycles;
the parallel computation unit, configured to extend the sign bit of each coding-block pixel and each matching-window pixel and form their difference with combinational logic; the sign bit of the difference is then examined, the data being bitwise inverted and incremented by 1 if it is negative and kept unchanged if it is positive, yielding the absolute difference; an accumulator of sufficient width accumulates the absolute values, and when all pixels of a coding block have been accumulated the (2P+1)×(2P+1) absolute-value sums are sent to the compare-and-select circuit module;
the compare-and-select unit, configured to divide the (2P+1)×(2P+1) matching results into (2P+1) groups and compare them with a two-stage pipeline; the first stage compares the (2P+1) values within each group to obtain the group minimum, and the second stage compares the (2P+1) group minima to obtain the final minimum; the block with the minimum SAD is the best matching block, the motion vector {m, n} of the coding block is output, and the SAD of the best matching block is output to the predictor selection module as the sum of absolute residuals of the fourth predictor;
and the cache-and-output unit, configured to store the search-block data and the coding-block data in two on-chip FIFOs, the search-block data volume being (ROW+2P)×(COL+2P) and the coding-block data volume ROW×COL; after the best-matching-block result is output, the search-block data are read out, a row/column count determines whether each datum is valid, and when it is valid the corresponding coding-block datum is read out as well and both are output to the predictor selection module.
Specifically, the third module specifically includes:
the synchronized causal-template unit, configured to count rows and columns of the pixels with the image block as the unit; the original coding-block pixel Ix and the reconstructed matching-block pixel Rx_i are concatenated and written into a FIFO cache, the FIFO is read and the data decomposed after one row has been cached, and the 3 intra-frame causal-template pixels Ia, Ib and Ic and the 4 inter-frame causal-template pixels Ra_i, Rb_i, Rc_i and Uk are output; when a coding-block pixel is in a non-first row and non-first column, Ia is the left neighbour, Ib the upper neighbour and Ic the upper-left neighbour;
when a coding-block pixel is in the first row and first column, Ia, Ib and Ic all use the first pixel;
when a coding-block pixel is in the first row but not the first column, Ib and Ic use the first pixel;
when a coding-block pixel is in a non-first row and the first column, Ia uses Ib and Ic uses the Ia of the previous row; the reference-frame pixels are handled the same way;
the predictor parallel computation unit, configured to compute the three predicted values Px_1, Px_2 and Px_3 in parallel from the intra-frame and inter-frame causal-template pixels, the first predictor being an intra-frame predictor and the second and third being inter-frame predictors;
the residual absolute-value summation unit, configured to subtract the actual value of the current pixel from each of the three predicted values, compute the residuals of the different predictors, take their absolute values and accumulate the sum of absolute residuals within one block; when the statistics of one block are finished, the sum of absolute residuals of the fourth predictor output by the motion estimation module is aligned with them and all are output to the predictor selection module;
the predictor selection unit, configured to compare the 4 sums SUM1, SUM2, SUM3 and SUM4, the predictor with the smallest sum being the best predictor;
and the block-pixel cache-and-output unit, configured to concatenate the pixel data of the coding block and the matching block, cache them in a FIFO, read the data out of the FIFO after the predictor selection result is output, and send the block data, together with the predictor selection result, to the multi-branch modeling and prediction module.
Specifically, the system further comprises a mode control module, configured to parse the uplinked configuration instruction according to the protocol, initialize the compression parameters and control the overall compression working mode; it specifically comprises the following sub-modules:
the first sub-module, configured to parse, according to the protocol, the intra/inter compression mode parameter, the inter-frame compression period parameter, the near-lossless Near value, the pixel bit-width parameter and the block row/column parameters;
the second sub-module, configured to calculate the pixel value range RANGE from the near-lossless parameter Near and the pixel bit width bpp, and from it qbpp, the Golomb coding length limit LIMIT and the gradient quantization thresholds; the context parameters A, B, C, N are computed and the two parameter RAMs are initialized;
and the third sub-module, configured to control the compression state of the current frame according to the intra/inter compression mode parameter and the inter-frame compression period parameter; in intra-frame compression, reference-frame blocking and motion estimation are skipped, the intra-frame predictor is selected directly and every image frame is intra-compressed; in inter-frame compression, the first frame of each period is intra-compressed and the remaining frames of the period are inter-compressed.
In general, compared with the prior art, the above technical solution conceived by the invention has the following beneficial effects:
(1) the invention provides a lossless/near-lossless remote sensing image compression hardware computing architecture based on JPEG-LS interframe expansion, designs an efficient interframe expansion structure supporting multi-channel data caching for block access, full-search motion estimation and parallel computation of multiple predictors, and adopts a pipeline and a sliding template window to raise the pixel throughput;
(2) the invention introduces inter-frame information on top of JPEG-LS intra-frame compression and uses motion-compensated inter-frame prediction to remove spatial and temporal image redundancy at the same time, so the compression efficiency is higher; remote sensing images have a large swath and different regions have different characteristics, so selecting the best predictor adaptively per block yields a higher compression ratio;
(3) by configurably bypassing motion estimation, selecting the intra-frame/inter-frame predictor and configuring the near-lossless parameter, the invention allows arbitrary switching among intra-frame/inter-frame and lossless/near-lossless compression, improving the flexibility of the compression system and making rate control feasible under limited-bandwidth constraints;
(4) the satellite-to-ground transmission link is prone to bit errors caused by space electromagnetic interference; setting a fixed inter-frame compression period and regularly inserting intra-compressed frames breaks the ground-decoding dependency chain, confines error propagation to one inter-frame compression period and improves the error resilience.
Drawings
FIG. 1 is a block diagram of a hardware implementation architecture in an embodiment of the present invention;
FIG. 2 is a diagram illustrating a detailed calculation structure of motion-compensated interframe expansion according to an embodiment of the present invention;
FIG. 3 is a diagram illustrating a detailed computing architecture of a JPEG-LS encoder according to an embodiment of the present invention;
FIG. 4 is a block diagram of reference frame and encoded frame images according to an embodiment of the present invention;
FIG. 5 is a diagram illustrating an embodiment of a fast motion estimation matching template;
FIG. 6 is a diagram illustrating a method for implementing a fast matching template according to an embodiment of the present invention;
FIG. 7 is a diagram illustrating the causal template used for predictor selection in an embodiment of the present invention;
FIG. 8 is a diagram illustrating multiple predictor formulas in an embodiment of the present invention;
FIG. 9 is a diagram illustrating causal templates of parallel forward prediction modules according to an embodiment of the present invention;
FIG. 10 is a diagram illustrating forward prediction of periodic tasks and RAM operations according to an embodiment of the present invention;
FIG. 11 is a schematic diagram of a pixel reconstruction improvement formula in an embodiment of the present invention;
FIG. 12 is a diagram illustrating the four clock cycles following the update of parameter A and parameter B in an embodiment of the present invention;
fig. 13 is a schematic diagram of length-limited coding according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. In addition, the technical features involved in the embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
FIG. 1 is a hardware implementation framework of an embodiment of the method of the present invention, FIG. 2 is a detailed computing architecture of inter-frame extension for motion compensation, and FIG. 3 is a detailed computing architecture of a JPEG-LS encoder; the implementation of the present embodiment mainly includes the following steps:
S1, compression mode control: the uplinked configuration instruction is parsed according to the protocol, the compression parameters are initialized, and the overall compression working mode is controlled.
Specifically, step S1 includes the following sub-steps:
S11, parsing the configuration instruction: according to the protocol, the intra/inter compression mode parameter (inter-frame), the inter-frame compression period parameter (16), the near-lossless Near value parameter (2), the pixel bit-width parameter (12) and the block row/column parameters (16×16) are parsed.
S12, parameter initialization: the pixel value range RANGE = 820 is calculated from the near-lossless parameter Near = 2 and the pixel bit width bpp = 12, and from it qbpp = 10 and the Golomb coding length limit LIMIT = 48 are obtained. The gradient quantization thresholds default to T1 = 18, T2 = 67 and T3 = 276. The two parameter RAMs are initialized with the computed context parameters A = 13, B = 0, C = 0 and N = 1; true dual-port RAMs are used so that both ports can be written simultaneously, reducing the initialization time (these derived values are checked in the sketch following step S13).
S13, mode control: the compression state of the current frame is controlled by the intra/inter compression mode parameter and the inter-frame compression period parameter. In inter-frame compression, frame 1 is intra-compressed, using the intra-frame predictor directly, and frames 2 to 16 are inter-compressed; frame 17 is then intra-compressed, the next 15 frames are inter-compressed, and so on.
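The values quoted in step S12 can be reproduced with the standard JPEG-LS initialization formulas, which are assumed here since the patent states only the results: RANGE = floor((MAXVAL + 2*Near)/(2*Near + 1)) + 1, qbpp = ceil(log2(RANGE)), LIMIT = 2*(bpp + max(8, bpp)) and A = max(2, floor((RANGE + 32)/64)).

```python
import math

# Worked check of the S12 parameter initialization for the embodiment settings.
bpp, near = 12, 2
maxval = (1 << bpp) - 1                      # 4095 for 12-bit pixels
rng = (maxval + 2 * near) // (2 * near + 1) + 1
qbpp = math.ceil(math.log2(rng))
limit = 2 * (bpp + max(8, bpp))
a_init = max(2, (rng + 32) // 64)
print(rng, qbpp, limit, a_init)              # 820 10 48 13, matching step S12
```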
S2, partitioning the coded-frame and reference-frame images: the image data of the coded frame and the reference frame are cached in an off-chip SDRAM (synchronous dynamic random access memory), and coding-block data and search-block data are obtained respectively according to the block row/column parameters. The block partitioning of the coded frame and the reference frame is shown in FIG. 4.
Specifically, step S2 includes the following sub-steps:
S21, FIFO caching and isolation: four FIFOs cache, respectively, the coded-frame write data and read data and the reference-frame write data and read data, providing data bit-width conversion and clock-domain isolation.
S22, coded-frame sequential write-address and block read-address calculation: a counter counts the write-data enable signals to accumulate the storage address and write the data in sequence; according to the block row/column parameters, once the number of rows of one block has been cached, the read address and offset address are calculated and coding blocks of equal size that do not overlap one another are output from the storage area.
S23, reference-frame block write-address and block read-address calculation: because the image is compressed block by block, the compressed-and-reconstructed image data exist in block form, and the write address and offset address must be calculated from the block row/column parameters to obtain the complete reference-frame image; with the motion search step set to 2, the search-block row and column parameters are ROW+4 and COL+4, the read address and offset address are calculated, and search blocks of equal size that overlap one another are output.
S24, multi-channel data management: the fill levels of the several channels are monitored, the fill level of a write channel being determined by the input-FIFO occupancy and the free space of its storage partition, and the fill level of a read channel by the output-FIFO free space and the occupancy of its storage partition. A fixed-priority strategy arbitrates among the channel fill levels, and the bus is granted to the channel whose fill level is high so as to complete the data transfer.
S3, motion estimation to obtain the best matching block: a full search based on the SAD criterion is performed within the motion search block formed from the reference frame to obtain the best matching block of the coding block, and the coding block and the matching block are output to the next stage.
Specifically, step S3 includes the following sub-steps:
S31, matching-template generation: four rows of search-block data are cascade-cached with 4 FIFOs and 25 registers; when the fifth datum of the fifth row of the search block arrives, a 5×5 matching window is formed, the first datum of the first row of the coding block is read, and the coding-block datum is aligned with the window data and output to the SAD calculation module. When the sixth datum of the fifth row arrives, a new 5×5 matching window is formed, the second datum of the first row of the coding block is read and output in alignment, and the SAD calculation is completed after (ROW+4)×(COL+4) pixel clock cycles. The motion-estimation fast matching template is shown schematically in FIG. 5, and FIG. 6 shows the implementation of the 5×5 matching window.
S32, full-search SAD parallel computation: the sign bit of each coding-block pixel and each matching-window pixel is extended, i.e. 1'b0 is appended at the most significant bit, and the difference is formed with combinational logic. The sign bit of the difference is then examined: if it is 1'b1 the data are bitwise inverted and 1 is added, and if it is 1'b0 the data are kept unchanged, yielding the absolute difference. An accumulator of sufficient width accumulates the absolute values, and when all pixels of a coding block have been accumulated the 25 absolute-value sums are sent to the compare-and-select circuit module.
S33, compare-and-select circuit: the 25 matching results are divided into 5 groups and compared with a two-stage pipeline. The first stage compares the 5 values within each group to obtain the group minimum, and the second stage compares the 5 group minima to obtain the final minimum. The block with the smallest result is the best matching block, the motion vector {m[2:0], n[2:0]} of the coding block is output, and the SAD of the best matching block is output to the predictor selection module as the sum of absolute residuals of predictor 4.
S34, best-matching-block caching and output: two on-chip FIFOs store the search-block data and the coding-block data respectively, the search-block data volume being (ROW+4)×(COL+4) and the coding-block data volume ROW×COL; after the best-matching-block result is output, the search-block data are read out, a row/column count determines whether each datum is valid, and when it is valid the corresponding coding-block datum is read out as well and both are output to the predictor selection module.
S4, parallel computation of multiple predictors and selection of the best predictor: synchronized causal templates of the coding-block and matching-block images are generated and multiple predictors are computed in parallel; the predictor with the smallest sum of absolute residuals within the block is the best predictor.
Specifically, step S4 includes the following sub-steps:
s41, generating a synchronous cause-effect template: the pixels are counted in rows and columns in units of image blocks. Splicing the original pixel Ix of the coding block and the reconstructed pixel Rx _ i of the matching block, writing the spliced pixels into an FIFO buffer, starting to read the FIFO buffer after one line of the buffer, decomposing data, and outputting 3 intra-frame cause and effect template pixels Ia, Ib and Ic and 4 inter-frame cause and effect template pixels Ra _ i, Rb _ i, Rc _ i and Uk according to a template structure and a boundary processing mode shown in FIG. 7. The template structure and the boundary processing mode are specifically as follows: when the pixels of the coding blocks are in a special column of a non-special row, Ia is a left adjacent pixel, Ib is an upper adjacent pixel, and Ic is an upper left adjacent pixel; when the pixels of the coding blocks are positioned in a first row and a first column, the first pixels are all used; when the pixels of the coding blocks are in a first row and a non-first column, the first pixels are used by Ib and Ic; and Ib is used for Ia and Ia is used for Ic when the pixels of the coding block are positioned in the non-first row and the non-first column. The reference frame pixels are the same.
S42, predictor parallel computing: four prediction values Px _1, Px _2, Px _3 are computed in parallel using intra and inter causal template pixels. The predictor calculation formula is shown in fig. 8, where predictor 1 is an intra-frame predictor, and predictors 2 and 3 are inter-frame predictors. Predictor 2 contains a division by 3, optimized for hardware computation by multiplying 5461 and then right shifting by 14 bits.
S43, summing absolute values of residual errors: according to the algorithm principle, the actual value of the current pixel and the three predicted values are respectively subtracted, residual values of different predictors are calculated, then an absolute value is taken, and the sum of absolute values of residual errors in one block is counted. When one block data statistics is finished, the sum of the absolute values of the residual errors of the predictor 4 output by the alignment motion estimation module is output to the predictor selection module.
S44, predictor selection: the four sums are compared. If SUM4 < SUM3, SUM4 < SUM2 and SUM4 < SUM1, SUM4 is the minimum; otherwise, if SUM3 < SUM2 and SUM3 < SUM1, SUM3 is the minimum; otherwise, if SUM2 < SUM1, SUM2 is the minimum; otherwise SUM1 is the minimum. The predictor with the smallest sum is the best predictor.
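A direct C rendering of this priority comparison (the 1..4 return encoding is an illustration choice, not taken from the text):

```c
/* Predictor selection of S44: priority comparison of the four residual sums.
 * Returns 1..4, the index of the predictor with the smallest sum, resolving
 * ties in favour of the lower-numbered predictor exactly as the cascade above. */
static int select_predictor(unsigned sum1, unsigned sum2, unsigned sum3, unsigned sum4)
{
    if (sum4 < sum3 && sum4 < sum2 && sum4 < sum1) return 4;
    if (sum3 < sum2 && sum3 < sum1)                return 3;
    if (sum2 < sum1)                               return 2;
    return 1;
}
```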
S45, block pixel caching and output: the pixel data of the coding block and the matching block are concatenated and buffered in a FIFO; after the predictor selection result has been output, the data in the FIFO are read out, and the block data are sent, together with the predictor selection result, to the multi-branch modeling and prediction module.
S5, multi-branch modeling and prediction to obtain the residual: fixed prediction is performed with the best predictor, the prediction residual is calculated in combination with the adaptive corrector, and the Golomb coding parameters are obtained from the context modeling parameters.
Specifically, step S5 includes the following sub-steps:
S51, generating the causal template: pixels are counted by row and column in units of image blocks. The reconstructed pixel Rx of the coding block and the reconstructed pixel Rx_i of the matching block are concatenated and written into a FIFO buffer; after one line has been buffered, the FIFO is read and the data are decomposed, outputting 4 intra-frame causal template pixels Ra, Rb, Rc, Rd and 4 inter-frame causal template pixels Ra_i, Rb_i, Rc_i, Uk. The template structure and boundary handling are as follows: when a coding-block pixel is in a non-first row and a non-first column, Ra is the left neighbor, Rb is the upper neighbor, Rc is the upper-left neighbor and Rd is the upper-right neighbor; when the pixel is in the first row and first column, the first pixel is used for all of them; when the pixel is in the first row but not the first column, Rb, Rc and Rd use the first pixel; when the pixel is in the first column of a non-first row, Ra uses Rb and Rc uses the Ra of the previous row; when the pixel is in the last column of a non-first row, Rd uses Rb. The reference-frame pixels are handled in the same way. The specific template structure and boundary handling are shown in FIG. 9.
S52, reconstruction value prediction: by the near-lossless distortion bound, the pixel reconstruction value Rx and the actual pixel value Ix_r satisfy Rx = Ix_r ± Near, i.e. |Rx - Ix_r| ≤ Near. When Near = 2, the candidate values of Ra are Ix_r-2, Ix_r-1, Ix_r, Ix_r+1 and Ix_r+2.
S53, context modeling: the local context of the current pixel is modeled from the coding-block causal template pixels Ra, Rb, Rc, Rd to obtain the address index values Q1 and Q2 and the inversion signs SIGN1 and SIGN2.
Specifically, step S53 includes the following sub-steps:
S531, gradient calculation: three local gradients are calculated from Ra, Rb, Rc and Rd: D[0] = Rd - Rb, D[1] = Rb - Rc and D[2] = Rc - Ra. Note that since Ra has 5 candidate values, D[2] also has 5 values; its maximum is Rc - Ix_r + Near, its minimum is Rc - Ix_r - Near, and the maximum differs from the minimum by 2Near.
S532, gradient quantization: the gradients D[0], D[1], D[2] are quantized with the thresholds T1 = 16, T2 = 67, T3 = 276 and Near = 2. When D[i] ≤ -T3, Q[i] = -4; when -T3 < D[i] ≤ -T2, Q[i] = -3; when -T2 < D[i] ≤ -T1, Q[i] = -2; when -T1 < D[i] < -Near, Q[i] = -1; when -Near ≤ D[i] ≤ Near, Q[i] = 0; when Near < D[i] < T1, Q[i] = 1; when T1 ≤ D[i] < T2, Q[i] = 2; when T2 ≤ D[i] < T3, Q[i] = 3; when T3 ≤ D[i], Q[i] = 4. Note that Q[2] takes at most two different values, because the minimum quantization interval is 2Near wide and the maximum of D[2] differs from its minimum by 2Near.
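The same quantizer written in C, with the stated thresholds passed as parameters (a sketch of the rule above):

```c
/* Gradient quantization of S532, e.g. t1=16, t2=67, t3=276, near=2. */
static int quantize_gradient(int d, int t1, int t2, int t3, int near)
{
    if (d <= -t3)   return -4;
    if (d <= -t2)   return -3;
    if (d <= -t1)   return -2;
    if (d <  -near) return -1;
    if (d <=  near) return  0;
    if (d <   t1)   return  1;
    if (d <   t2)   return  2;
    if (d <   t3)   return  3;
    return 4;
}
```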
S533, gradient fusion and sign marking: the quantized gradients (Q[0], Q[1], Q[2]) are fused into a single context index Q = 81·Q[0] + 9·Q[1] + Q[2]. If the first non-zero element of (Q[0], Q[1], Q[2]) is negative, the triple is first inverted to (-Q[0], -Q[1], -Q[2]) and SIGN is set to -1 before the fusion. Note that since Q[2] can take two values, two address index values Q1, Q2 and two inversion signs SIGN1, SIGN2 are finally obtained.
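And the fusion step in C (a sketch; the in-place inversion mirrors the description):

```c
/* Gradient fusion and sign marking of S533.  q[0..2] are the quantized
 * gradients (modified in place when inverted); the fused context index is
 * returned and the inversion sign is written to *sign. */
static int fuse_context(int q[3], int *sign)
{
    *sign = 1;
    /* Invert the triple if its first non-zero element is negative. */
    if (q[0] < 0 || (q[0] == 0 && (q[1] < 0 || (q[1] == 0 && q[2] < 0)))) {
        q[0] = -q[0]; q[1] = -q[1]; q[2] = -q[2];
        *sign = -1;
    }
    return 81 * q[0] + 9 * q[1] + q[2];
}
```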
S54, context address conflict control: the per-cycle tasks and RAM operations are shown in FIG. 10. In the pipelined design, pixel-address conflicts must be tracked over the four clock cycles from reading the context parameters out of the RAM to writing the updated parameters back: the context address of the current cycle is compared one by one with those of the next three cycles, the corresponding bit is set to 1 on a match and to 0 otherwise, forming the address conflict types Conflict1[2:0] and Conflict2[2:0]. According to the flag bits Conflict1[0] and Conflict2[0], the RAM read is suppressed when a read-write conflict occurs.
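A behavioural sketch of this conflict check is given below; the 16-bit address type, the array holding the next three pipeline addresses and the bit ordering of Conflict[2:0] are assumptions made for illustration.

```c
#include <stdint.h>

/* Address conflict detection of S54: the context address of the current pixel
 * is compared with the addresses of the next three pipeline stages.  Bit i of
 * the returned conflict type is set when the address matches the pixel i+1
 * cycles ahead (bit ordering assumed for illustration). */
static uint8_t conflict_type(uint16_t addr_now, const uint16_t addr_next[3])
{
    uint8_t c = 0;
    for (int i = 0; i < 3; i++)
        if (addr_next[i] == addr_now)
            c |= (uint8_t)(1u << i);
    return c;   /* bit 0 set => read-write conflict, suppress the RAM read */
}
```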
S55, prediction correction and residual calculation: Ra has five candidate values, and prediction with the best predictor yields the predicted values Px_1, Px_2, Px_3, Px_4 and Px_5. According to the address conflict types Conflict1 and Conflict2, C_sel1, C_sel2, N_sel1 and N_sel2 are selected. The selected N parameter is updated to obtain N_update1 and N_update2, and the N-parameter update flags N_flag1 and N_flag2 are generated. The fixed predicted values are corrected with the C parameter of the pixel context address to obtain Px_correct1, Px_correct2, Px_correct3, Px_correct4 and Px_correct5. The residuals between the actual value and the corrected predictions are computed; when SIGN is -1 they are negated, giving Errval1, Errval2, Errval3, Errval4 and Errval5.
S56, reconstruction value selection: once the reconstruction of the pixel in the previous clock cycle is complete, the correct Ra branch is selected to obtain the residual Errval, the sign SIGN and the address conflict type Conflict. To simplify the division, residual quantization is performed by table look-up: the ROM read address is {|Errval|, Near} and the stored data are {|Errval_q|, Remainder}.
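The contents of such a ROM can be generated offline. The sketch below assumes the standard JPEG-LS near-lossless quantization relation, i.e. quotient and remainder of (|Errval| + Near) over (2·Near + 1), and an 8-bit pixel range for the table size; neither assumption is stated explicitly in the text.

```c
#include <stdio.h>

/* Offline generation of the residual-quantization ROM of S56.  The address is
 * {|Errval|, Near} and the stored data are {|Errval_q|, Remainder}.  The
 * quotient/remainder relation below is the standard JPEG-LS near-lossless
 * quantization, assumed here since the ROM contents are not listed in the text. */
int main(void)
{
    const int near = 2;                   /* near-lossless bound                */
    const int div  = 2 * near + 1;        /* quantization step                  */

    for (int abs_err = 0; abs_err <= 255 + near; abs_err++) {
        int q = (abs_err + near) / div;   /* |Errval_q|                         */
        int r = (abs_err + near) % div;   /* Remainder used for reconstruction  */
        printf("addr=%3d  |Errval_q|=%3d  rem=%d\n", abs_err, q, r);
    }
    return 0;
}
```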
S57, residual quantization compensation, pixel reconstruction and residual modulo reduction: for the five cases of the C-parameter update (+2, +1, unchanged, -1, -2), the remainder of the residual quantization is compensated; the quotient, i.e. the residual quantization magnitude |Errval_q|, is adjusted according to the relationship between the remainder and the divisor, and the signed residual quantization value Errval_q is determined by combining its sign Symbol. The principle of the improved pixel reconstruction formula is shown in FIG. 11. The pixel reconstruction value is obtained from the actual pixel value Ix, the sign SIGN, the sign Symbol of the residual quantization value, the near-lossless bound Near and the compensated Remainder. The compensated residual quantization value is then reduced in range by a modulo operation.
S58, compensation selection: according to the address conflict type Conflict and the C-parameter update flags C_flag and C_flag_r of the two adjacent pixels, the correct C branch is selected to obtain the residual modulo value Errval_mod, the pixel reconstruction value Rx and the correct parameter correction value C_correct.
S59, context parameter update, K value calculation and residual mapping: the corresponding A and B parameters are selected according to the address conflict type, giving A_sel and B_sel. The A, B and C parameters are updated in combination with N_flag, and the updated A, B, C, N are written into the two parameter RAM banks simultaneously. The Golomb coding parameter K is calculated from the A and N parameters before the update. The residual modulo value Errval_mod is mapped to a non-negative integer MErrval according to the near-lossless bound Near, the coding parameter K, and the B and N parameters before the update.
Specifically, step S59 includes the following sub-steps:
S591, context parameter A, B selection: the behaviour of the A and B parameters over 4 adjacent cycles is shown in FIG. 12. A_sel and B_sel are selected according to the address conflict type Conflict and output to the context parameter update module.
S592, context parameter update: the A, B and C parameters are updated using A_sel, B_sel, C_correct and N_flag; the update is implemented in combinational logic. The updated parameters A_update, B_update, C_update and N_update are written into the two parameter RAM banks simultaneously. The not-yet-updated A_sel, B_sel, N_sel and the residual modulo value Errval_mod of the current pixel are sent to the K value calculation and residual mapping module.
S593, K value calculation: following the algorithm, the N parameter is shifted left one bit at a time and compared with the A parameter; when N shifted left by K bits first becomes greater than or equal to A, the value K is output.
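In software this step is the usual JPEG-LS parameter loop (the hardware unrolls it into a comparator chain); a sketch:

```c
/* Golomb parameter computation of S593: shift N left until N << k >= A.
 * N is at least 1 in JPEG-LS, so the loop terminates. */
static int golomb_k(unsigned a, unsigned n)
{
    int k = 0;
    while ((n << k) < a)
        k++;
    return k;
}
```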
S594, residual mapping: following the algorithm, the lossless and near-lossless residual mapping modes are distinguished, and the signed residual modulo value Errval_mod is mapped to a non-negative residual mapping value MErrval. Note that in the lossless case the relationship between the B and N parameters must be examined.
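A sketch of the mapping, assuming the standard JPEG-LS rule; the lossless special case (K = 0, Near = 0 and 2B ≤ -N) is the relationship between the B and N parameters referred to above.

```c
/* Residual mapping of S594: map the signed, reduced residual to a
 * non-negative integer MErrval (standard JPEG-LS mapping, assumed here). */
static unsigned map_residual(int errval, int k, int near, int b, int n)
{
    if (near == 0 && k == 0 && 2 * b <= -n) {
        /* Lossless special case: the mapping is flipped. */
        return (errval >= 0) ? (unsigned)(2 * errval + 1)
                             : (unsigned)(-2 * (errval + 1));
    }
    return (errval >= 0) ? (unsigned)(2 * errval)
                         : (unsigned)(-2 * errval - 1);
}
```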
S6, limited-length Golomb coding of the residual and output of the compressed bitstream: the coding parameters are calculated, the limited-length Golomb coding is completed, and the compressed bitstream and the decoding side information are framed and output.
Specifically, step S6 includes the following sub-steps:
S61, quotient and remainder calculation: the residual mapping value MErrval is right-shifted by K bits in combinational logic and registered through one stage of D flip-flops to obtain the quotient val_temp; val_temp is left-shifted by K bits and registered through one stage of D flip-flops to obtain MErrval_temp; MErrval_temp and MErrval are then differenced and the result is registered through one stage of D flip-flops to output the remainder n. To keep the quotient and remainder synchronized, val_temp is delayed by two register stages and output as the quotient val.
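Functionally, the three flip-flop stages compute the following split (a software equivalent with hypothetical names):

```c
/* Quotient/remainder split of S61: val = MErrval >> K, n = low K bits. */
static void split_code(unsigned merrval, int k, unsigned *val, unsigned *n)
{
    *val = merrval >> k;            /* length of the unary part        */
    *n   = merrval - (*val << k);   /* K-bit remainder (low bits)      */
}
```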
S62, limited-length coding: the limited-length coding scheme is shown in FIG. 13. If val is less than the code-length upper-limit parameter MAX, the codeword consists of val zero bits, one 1 bit and the K-bit remainder n; otherwise the codeword consists of (LIMIT - qbpp - 1) zero bits, one 1 bit and the qbpp-bit MErrval.
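A bit-level sketch of the codeword construction is shown below, taking MAX = LIMIT - qbpp - 1 as implied by the escape branch; the (bits, length) return convention is an illustration choice, and the escape payload is emitted as qbpp bits of MErrval exactly as the text states (the JPEG-LS standard writes MErrval - 1 there).

```c
#include <stdint.h>

/* Limited-length Golomb codeword of S62, returned as (bits, length).
 * When val < MAX the codeword is val zeros, a 1 and the K-bit remainder n;
 * otherwise it is (LIMIT - qbpp - 1) zeros, a 1 and qbpp bits of MErrval. */
typedef struct { uint64_t bits; int len; } codeword_t;

static codeword_t golomb_limited(unsigned val, unsigned n, unsigned merrval,
                                 int k, int limit, int qbpp)
{
    const int max_unary = limit - qbpp - 1;   /* MAX */
    codeword_t cw;

    if ((int)val < max_unary) {
        /* The val leading zeros are implicit in the high bits of the field. */
        cw.len  = (int)val + 1 + k;
        cw.bits = ((uint64_t)1 << k) | (uint64_t)n;
    } else {
        cw.len  = max_unary + 1 + qbpp;
        cw.bits = ((uint64_t)1 << qbpp) | (uint64_t)merrval;
    }
    return cw;
}
```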
S63, coding FIFO buffer: a 64-bit register reg64 is defined. The codeword of the first pixel is placed in the low bits of reg64; when the next codeword arrives, the existing bits are shifted left and the new codeword is placed in the low bits. When the register holds 64 bits, its content is output to the FIFO and the register is cleared, and the above operations repeat.
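A software model of the 64-bit packing register follows; splitting a codeword that straddles the 64-bit boundary is handled explicitly here, a detail the text leaves implicit, and the printf stands in for the FIFO write.

```c
#include <stdint.h>
#include <stdio.h>

/* Software model of the 64-bit packing register of S63.  Codewords are shifted
 * into the low bits of reg; whenever 64 bits have accumulated the word is
 * emitted (printed here) and the register is cleared. */
typedef struct { uint64_t reg; int fill; } packer_t;

static void pack_bits(packer_t *p, uint64_t bits, int len)
{
    while (len > 0) {
        int room = 64 - p->fill;
        int take = (len < room) ? len : room;
        uint64_t mask  = (take == 64) ? ~0ull : ((1ull << take) - 1);
        uint64_t chunk = (bits >> (len - take)) & mask;

        p->reg   = (take == 64) ? chunk : ((p->reg << take) | chunk);
        p->fill += take;
        len     -= take;

        if (p->fill == 64) {              /* register full: push to the FIFO */
            printf("%016llx\n", (unsigned long long)p->reg);
            p->reg  = 0;
            p->fill = 0;
        }
    }
}
```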
S64, framing output: when the compression of a coding block is finished, the first pixel of the block, the motion vector, the predictor selection and the block bitstream length are framed with the block bitstream as side information to form the block data. The intra/inter compression mode parameter, the inter-frame compression period parameter, the near-lossless Near parameter and the block row/column parameters serve as whole-image side information and are combined with all block data to form the final compressed bitstream output.
It will be appreciated by those skilled in the art that the foregoing is only a preferred embodiment of the invention and is not intended to limit the invention; various modifications, equivalent substitutions and improvements may be made without departing from the spirit and scope of the invention.

Claims (10)

1. A remote sensing image compression algorithm hardware implementation method based on JPEG-LS interframe expansion is characterized by comprising the following steps:
(1) caching image data of the coding frame and the reference frame by using an off-chip memory, and respectively obtaining coding blocks and searching block data according to different block row parameters;
(2) performing full search based on SAD criterion in a motion search block formed by a reference frame to obtain an optimal matching block of a coding block, and outputting the coding block and the matching block to the next stage;
(3) generating a synchronized causal template for the coding block and the matching block images, calculating a plurality of predictors in parallel, and taking the predictor with the smallest sum of absolute residuals within the block as the optimal predictor;
(4) performing fixed prediction with the optimal predictor, calculating the prediction residual in combination with the adaptive corrector, and obtaining the Golomb coding parameters according to the context modeling parameters;
(5) calculating coding parameters, completing the limited-length Golomb coding, and framing and outputting the compressed bitstream and the decoding side information.
2. The hardware implementation method of the JPEG-LS interframe expansion-based remote sensing image compression algorithm according to claim 1, characterized in that the step (1) specifically comprises the following steps:
(11) four FIFOs are used for respectively caching the encoded frame write-in data and the read-out data, the reference frame write-in data and the read-out data, and the functions of data bit width conversion and clock isolation are provided;
(12) defining a counter, counting the write data enabling signals, and finishing storage address accumulation and data sequence writing; according to the parameters of the rows and the columns of the blocks, when the number of the rows of the blocks is cached, calculating a read address and an offset address, and outputting encoding block data which are the same in size and are not overlapped from a storage area;
(13) because of image block compression, compressed and reconstructed image data exist in a block form, and a write address and an offset address need to be calculated according to block row and column parameters to obtain complete reference frame image data; setting a motion search step length as P, determining that ROW and column parameters of a search block are ROW +2P and COL +2P respectively, calculating a read address and an offset address, and outputting search block data which are the same in size and mutually overlapped;
(14) counting the water levels of the plurality of channels, wherein the water level of a write channel is determined by the amount of data buffered in its input FIFO and the free space of its storage partition, and the water level of a read channel is determined by the free space of its output FIFO and the amount of data buffered in its storage partition; and judging the water level of each channel with a fixed-priority strategy and granting the bus to the channel whose water level is high so as to complete the data transfer.
3. The hardware implementation method of the JPEG-LS interframe expansion-based remote sensing image compression algorithm according to claim 1, characterized in that the step (2) specifically comprises the following steps:
(21) using 2P FIFOs and (2P +1) × (2P +1) registers to cascade and cache four rows of data of the search block; when the fifth datum of the fifth row of the search block arrives, a (2P +1) × (2P +1) matching window is formed, the first datum of the first row of the coding block is read at this time, the coding block data are aligned with the window data and output to the SAD calculation module; when the (2P +2)th datum of the (2P +1)th row of the search block arrives, a new (2P +1) × (2P +1) matching window is formed, the second datum of the first row of the coding block is read, aligned and output to the SAD calculation module, and the SAD calculation is completed after (ROW +2P) × (COL +2P) pixel clock cycles;
(22) expanding sign bits of each pixel data of the coding block and each pixel data of the matching window, and performing difference by using combinational logic; then, judging the sign bit of the difference value, if the sign bit is negative, inverting the data according to the bit and adding 1, if the sign bit is positive, keeping the data unchanged, and obtaining the absolute value of the difference value; using an accumulator with enough digits to count the sum of absolute values, and sending the sum of (2P +1) × (2P +1) absolute values to a comparison selection circuit module when the counting of all pixels in a coding block is finished;
(23) dividing the (2P +1) × (2P +1) matching results into (2P +1) groups and comparing them with a two-stage pipeline; the first pipeline stage compares the (2P +1) data in each group to obtain the minimum of each group, and the second pipeline stage compares the (2P +1) group minimums to obtain the final minimum; the block with the minimum result is the best matching block, the motion vector {m, n} of the coding block is output, and the SAD of the best matching block is output to the predictor selection module as the sum of absolute residuals of the fourth predictor;
(24) and storing data of a search block and data of a coding block by using two on-chip FIFOs respectively, wherein the data volume of the search block is (ROW +2P) × (COL +2P), the data volume of the coding block is ROW × COL, after the result of the best matching block is output, reading the data of the search block, determining whether the data is valid according to ROW-column counting, and when the data is valid, correspondingly reading out the data of one coding block and outputting the data of one coding block to a predictor selection module together.
4. The hardware implementation method of the JPEG-LS interframe expansion-based remote sensing image compression algorithm according to claim 1, characterized in that the step (3) specifically comprises the following steps:
(31) counting pixels by row and column in units of image blocks; concatenating the original pixel Ix of the coding block and the reconstructed pixel Rx_i of the matching block and writing them into a FIFO buffer; after one line has been buffered, reading the FIFO and decomposing the data, and outputting 3 intra-frame causal template pixels Ia, Ib and Ic and 4 inter-frame causal template pixels Ra_i, Rb_i, Rc_i and Uk; the template structure and boundary handling being as follows:
when a coding-block pixel is in a non-first row and a non-first column, Ia is the left adjacent pixel, Ib is the upper adjacent pixel, and Ic is the upper-left adjacent pixel;
when the pixels of the coding blocks are positioned in a first row and a first column, the first pixels are all used;
when the coding-block pixel is in the first row and a non-first column, Ib and Ic use the first pixel;
when the coding-block pixel is in the first column of a non-first row, Ia uses Ib, and Ic uses the Ia of the previous row; the reference-frame pixels are handled in the same way;
(32) using intra-frame and inter-frame causal template pixels to calculate three predicted values Px _1, Px _2 and Px _3 in parallel; the first predictor is an intra-frame predictor, and the second predictor and the third predictor are inter-frame predictors;
(33) respectively subtracting the actual value of the current pixel and the three predicted values, calculating residual values of different predictors, then taking an absolute value, and counting the sum of absolute values of residual errors in one block; when one block data statistics is finished, aligning the sum of absolute values of residual errors of a fourth predictor output by the motion estimation module, and outputting the sum to the predictor selection module;
(34) comparing the 4 summation values SUM1, SUM2, SUM3 and SUM4, wherein the predictor with the smallest sum is the best predictor;
(35) splicing the pixel data of the coding block and the matching block, caching by using an FIFO, and starting to read out the data in the FIFO after the predictor selects the result to output; and sending the block data to a multi-branch modeling and predicting module along with a predictor selection result.
5. The hardware implementation method of the JPEG-LS interframe expansion-based remote sensing image compression algorithm according to claim 1, wherein the step (1) is preceded by the steps of:
analyzing the annotation instruction according to the protocol, completing the initialization of compression parameters, and simultaneously controlling the whole compression working mode; the method specifically comprises the following substeps:
S1, according to the protocol, parsing the intra/inter compression mode parameter, the inter-frame compression period parameter, the near-lossless Near value parameter, the pixel bit-width parameter and the block row/column parameters;
S2, calculating the pixel value range RANGE according to the near-lossless parameter Near and the pixel bit width bpp, and further calculating qbpp, the Golomb code-length limit LIMIT and the gradient quantization thresholds; computing the context parameters A, B, C, N and initializing the two parameter RAM banks;
s3, controlling the compression state of the current frame according to the intra-frame and inter-frame compression mode parameters and the inter-frame compression period parameters; in the intra-frame compression process, reference frame image blocking and motion estimation are skipped, an intra-frame predictor is directly selected, and all image frames are subjected to intra-frame compression; when the interframe compression is carried out, the initial frame is subjected to intraframe compression, and the rest frames in one period are subjected to interframe compression.
6. A remote sensing image compression algorithm hardware implementation system based on JPEG-LS interframe expansion, characterized by comprising:
the first module is used for caching image data of a coding frame and a reference frame by using an off-chip memory and respectively obtaining coding block data and searching block data according to different block row and column parameters;
a second module, which is used for carrying out full search based on SAD criterion in a motion search block formed by a reference frame to obtain an optimal matching block of the coding block and outputting the coding block and the matching block to the next stage;
the third module is used for generating a synchronized causal template for the coding block and the matching block images, calculating a plurality of predictors in parallel, and taking the predictor with the minimum sum of absolute residuals within the block as the best predictor;
the fourth module is used for performing fixed prediction with the optimal predictor, calculating the prediction residual in combination with the adaptive corrector, and obtaining the Golomb coding parameters according to the context modeling parameters;
and the fifth module is used for calculating coding parameters, completing the limited-length Golomb coding, and framing and outputting the compressed bitstream and the decoding side information.
7. The remote sensing image compression algorithm hardware implementation system based on JPEG-LS interframe expansion as claimed in claim 6, wherein the first module specifically comprises:
the first unit is used for buffering the coded-frame write data, the coded-frame read data, the reference-frame write data and the reference-frame read data with four FIFOs respectively, providing data bit-width conversion and clock isolation;
the second unit is used for defining a counter, counting the write data enabling signals and finishing storage address accumulation and data sequence writing; according to the parameters of the rows and the columns of the blocks, when the number of the rows of the blocks is cached, calculating a read address and an offset address, and outputting encoding block data which are the same in size and are not overlapped from a storage area;
the third unit is used for compressing and reconstructing image data in a block form due to image block compression, and calculating a write address and an offset address according to block row and column parameters to obtain complete reference frame image data; setting a motion search step length as P, determining that ROW and column parameters of a search block are ROW +2P and COL +2P respectively, calculating a read address and an offset address, and outputting search block data which are the same in size and mutually overlapped;
the fourth unit is used for counting the water levels of the plurality of channels, wherein the water level of a write channel is determined by the amount of data buffered in its input FIFO and the free space of its storage partition, and the water level of a read channel is determined by the free space of its output FIFO and the amount of data buffered in its storage partition; and for judging the water level of each channel with a fixed-priority strategy and granting the bus to the channel whose water level is high so as to complete the data transfer.
8. The remote sensing image compression algorithm hardware implementation system based on JPEG-LS interframe expansion of claim 6, wherein the second module specifically comprises:
generating a matching template unit, which is used for cascading and caching four rows of data of a search block by using 2P FIFOs, (2P +1) × (2P +1) registers, forming a (2P +1) × (2P +1) matching window when the fifth data of the fifth row of the search block arrives, reading the first data of the first row of the coding block at the moment, aligning the data of the coding block with the data of the window, and outputting the data to an SAD calculation module; when the (2P +2) th data of the (2P +1) th line of the search block arrives, a new (2P +1) x (2P +1) matching window is formed, the second data of the first line of the coding block is read at the moment, aligned and output to the SAD calculation module, and SAD calculation is completed after (ROW +2P) x (COL +2P) pixel clock cycles;
the parallel computing unit is used for extending sign bits of each pixel data of the coding block and each pixel data of the matching window and performing difference by using combinational logic; then, judging the sign bit of the difference value, if the sign bit is negative, inverting the data according to the bit and adding 1, if the sign bit is positive, keeping the data unchanged, and obtaining the absolute value of the difference value; using an accumulator with enough digits to count the sum of absolute values, and sending the sum of (2P +1) × (2P +1) absolute values to a comparison selection circuit module when the counting of all pixels in a coding block is finished;
a comparison selection unit, used for dividing the (2P +1) × (2P +1) matching results into (2P +1) groups and comparing them with a two-stage pipeline; the first pipeline stage compares the (2P +1) data in each group to obtain the minimum of each group, and the second pipeline stage compares the (2P +1) group minimums to obtain the final minimum; the block with the minimum result is the best matching block, the motion vector {m, n} of the coding block is output, and the SAD of the best matching block is output to the predictor selection module as the sum of absolute residuals of the fourth predictor;
and the cache and output unit is used for respectively storing data of a search block and data of a coding block by using two on-chip FIFOs, wherein the data volume of the search block is (ROW +2P) × (COL +2P), the data volume of the coding block is ROW × COL, the data volume of the search block is read out after the result of the best matching block is output, whether the data is valid or not is determined according to ROW and column counting, and when the data is valid, one coding block is correspondingly read out and is output to the predictor selection module together.
9. The remote sensing image compression algorithm hardware implementation system based on JPEG-LS interframe expansion of claim 6, wherein the third module specifically comprises:
the synchronized causal template unit is used for counting pixels by row and column in units of image blocks; concatenating the original pixel Ix of the coding block and the reconstructed pixel Rx_i of the matching block and writing them into a FIFO buffer; after one line has been buffered, reading the FIFO and decomposing the data, and outputting 3 intra-frame causal template pixels Ia, Ib and Ic and 4 inter-frame causal template pixels Ra_i, Rb_i, Rc_i and Uk; when a coding-block pixel is in a non-first row and a non-first column, Ia is the left adjacent pixel, Ib is the upper adjacent pixel, and Ic is the upper-left adjacent pixel;
when the pixels of the coding blocks are positioned in a first row and a first column, the first pixels are all used;
when the coding-block pixel is in the first row and a non-first column, Ib and Ic use the first pixel;
when the coding-block pixel is in the first column of a non-first row, Ia uses Ib, and Ic uses the Ia of the previous row; the reference-frame pixels are handled in the same way;
the predictor parallel computing unit is used for computing three predicted values Px _1, Px _2 and Px _3 in parallel by using intra-frame and inter-frame causal template pixels; the first predictor is an intra-frame predictor, and the second predictor and the third predictor are inter-frame predictors;
the residual absolute value summation unit is used for respectively subtracting the actual value of the current pixel and the three predicted values, calculating residual values of different predictors, then taking an absolute value, and counting the sum of residual absolute values in one block; when one block data statistics is finished, aligning the sum of absolute values of residual errors of a fourth predictor output by the motion estimation module, and outputting the sum to the predictor selection module;
a predictor selection unit, used for comparing the 4 summation values SUM1, SUM2, SUM3 and SUM4, the predictor with the smallest sum being the best predictor;
the block pixel cache and output unit is used for splicing the pixel data of the coding block and the matching block, using FIFO cache, and starting to read out the data in the FIFO after the predictor selects the result to output; and sending the block data to a multi-branch modeling and predicting module along with a predictor selection result.
10. The remote sensing image compression algorithm hardware implementation system based on JPEG-LS interframe expansion according to claim 6, characterized by further comprising a mode control module, wherein the mode control module is specifically used for parsing the annotation instruction according to the protocol, completing compression parameter initialization and controlling the overall compression working mode; the mode control module specifically comprises the following sub-modules:
the first sub-module is used for parsing the intra/inter compression mode parameter, the inter-frame compression period parameter, the near-lossless Near value parameter, the pixel bit-width parameter and the block row/column parameters according to the protocol;
the second sub-module is used for calculating the pixel value range RANGE according to the near-lossless parameter Near and the pixel bit width bpp, and further calculating qbpp, the Golomb code-length limit LIMIT and the gradient quantization thresholds; and for computing the context parameters A, B, C, N and initializing the two parameter RAM banks;
the third sub-module is used for controlling the compression state of the current frame according to the intra-frame and inter-frame compression mode parameters and the inter-frame compression period parameters; in the intra-frame compression process, reference frame image blocking and motion estimation are skipped, an intra-frame predictor is directly selected, and all image frames are subjected to intra-frame compression; when the interframe compression is carried out, the initial frame is subjected to intraframe compression, and the rest frames in one period are subjected to interframe compression.
CN202110483170.XA 2021-04-30 2021-04-30 Remote sensing image compression algorithm hardware implementation method based on JPEG-LS (joint photographic experts group-LS) inter-frame expansion Active CN113207004B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110483170.XA CN113207004B (en) 2021-04-30 2021-04-30 Remote sensing image compression algorithm hardware implementation method based on JPEG-LS (joint photographic experts group-LS) inter-frame expansion

Publications (2)

Publication Number Publication Date
CN113207004A true CN113207004A (en) 2021-08-03
CN113207004B CN113207004B (en) 2024-02-02

Family

ID=77028113

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110483170.XA Active CN113207004B (en) 2021-04-30 2021-04-30 Remote sensing image compression algorithm hardware implementation method based on JPEG-LS (joint photographic experts group-LS) inter-frame expansion

Country Status (1)

Country Link
CN (1) CN113207004B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101534373A (en) * 2009-04-24 2009-09-16 北京空间机电研究所 Remote sensing image near-lossless compression hardware realization method based on improved JPEG-LS algorithm
KR101289881B1 (en) * 2012-02-28 2013-07-24 전자부품연구원 Apparatus and method for lossless image compression
CN102970531A (en) * 2012-10-19 2013-03-13 西安电子科技大学 Method for implementing near-lossless image compression encoder hardware based on joint photographic experts group lossless and near-lossless compression of continuous-tone still image (JPEG-LS)
CN105828070A (en) * 2016-03-23 2016-08-03 华中科技大学 Anti-error code propagation JPEG-LS image lossless/near-lossless compression algorithm hardware realization method
CN109151482A (en) * 2018-10-29 2019-01-04 西安电子科技大学 Spaceborne spectrum picture spectral coverage is lossless to damage mixing compression method
CN111462133A (en) * 2020-03-31 2020-07-28 厦门亿联网络技术股份有限公司 System, method, storage medium and device for real-time video portrait segmentation

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ZHOU, Yun: "Research on the JPEG-XS Coding Standard for Visually Lossless Compression", Radio & Television Information (广播电视信息) *
ZHU, Fuquan: "Research on Lossless Compression of Hyperspectral Remote Sensing Images Based on Adaptive Filtering", Chengdu University of Technology (成都理工大学) *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113722770A (en) * 2021-08-18 2021-11-30 上海励驰半导体有限公司 End-to-end protection method and system based on hierarchical data integrity
WO2023082867A1 (en) * 2021-11-09 2023-05-19 哲库科技(上海)有限公司 Image processing method, chip, electronic device, and storage medium
CN113794849A (en) * 2021-11-12 2021-12-14 深圳比特微电子科技有限公司 Device and method for synchronizing image data and image acquisition system
CN113794849B (en) * 2021-11-12 2022-02-08 深圳比特微电子科技有限公司 Device and method for synchronizing image data and image acquisition system
CN117097905A (en) * 2023-10-11 2023-11-21 合肥工业大学 Lossless image block compression method, lossless image block compression equipment and storage medium
CN117097905B (en) * 2023-10-11 2023-12-26 合肥工业大学 Lossless image block compression method, lossless image block compression equipment and storage medium
CN117395381A (en) * 2023-12-12 2024-01-12 上海卫星互联网研究院有限公司 Compression method, device and equipment for telemetry data
CN117395381B (en) * 2023-12-12 2024-03-12 上海卫星互联网研究院有限公司 Compression method, device and equipment for telemetry data

Also Published As

Publication number Publication date
CN113207004B (en) 2024-02-02

Similar Documents

Publication Publication Date Title
CN113207004B (en) Remote sensing image compression algorithm hardware implementation method based on JPEG-LS (joint photographic experts group-LS) inter-frame expansion
KR101235132B1 (en) Efficient transformation techniques for video coding
KR100411525B1 (en) Apparatus and Method of coding image
KR100793976B1 (en) Motion estimation circuit and operating method thereof
CN102088603B (en) Entropy coder for video coder and implementation method thereof
US20220377322A1 (en) Intra/inter mode decision for predictive frame encoding
US5699128A (en) Method and system for bidirectional motion compensation for compression of motion pictures
JPH1169345A (en) Inter-frame predictive dynamic image encoding device and decoding device, inter-frame predictive dynamic image encoding method and decoding method
KR20050012806A (en) Video encoding and decoding techniques
JPH03117992A (en) Coding and transmitting apparatus of video signal with motion vector
CN101166277B (en) Method for accessing memory in apparatus for processing moving pictures
JP2000059792A (en) High efficiency encoding device of dynamic image signal
JP5195674B2 (en) Image encoding device
KR101216142B1 (en) Method and/or apparatus for implementing reduced bandwidth high performance vc1 intensity compensation
US6668087B1 (en) Filter arithmetic device
KR0178746B1 (en) Half pixel processing unit of macroblock
JPH1155668A (en) Image coder
Momcilovic et al. Development and evaluation of scalable video motion estimators on GPU
KR100708183B1 (en) Image storing device for motion prediction, and method for storing data of the same
US7269288B2 (en) Apparatus for parallel calculation of prediction bits in a spatially predicted coded block pattern and method thereof
CN115022628B (en) JPEG-LS (joint photographic experts group-LS) -based high-throughput lossless image compression method
JP2015005903A (en) Compressor, decompressor and image processing apparatus
JP6872412B2 (en) Video coding device and program
KR0129802B1 (en) Circuit for compensation of the motion by half pel in a picture compression system
JP3228943B2 (en) Encoding device and decoding device, their methods and image processing device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant