CN113207004B - Remote sensing image compression algorithm hardware implementation method based on JPEG-LS (joint photographic experts group-LS) inter-frame expansion - Google Patents


Info

Publication number: CN113207004B
Authority: CN (China)
Prior art keywords: data, block, frame, predictor, coding
Legal status: Active
Application number: CN202110483170.XA
Other languages: Chinese (zh)
Other versions: CN113207004A
Inventors: 陈立群, 崔裕宾, 颜露新, 钟胜, 颜章, 杨桂彬, 张思宇
Current Assignee: Huazhong University of Science and Technology
Original Assignee: Huazhong University of Science and Technology
Legal events: Application filed by Huazhong University of Science and Technology; priority to CN202110483170.XA; publication of CN113207004A; application granted; publication of CN113207004B; status Active.

Classifications

    • H04N19/527 Global motion vector estimation (predictive coding involving temporal prediction, motion estimation or motion compensation)
    • H04N19/107 Selection of coding mode or of prediction mode between spatial and temporal predictive coding, e.g. picture refresh
    • H04N19/109 Selection of coding mode or of prediction mode among a plurality of temporal predictive coding modes
    • H04N19/166 Feedback from the receiver or from the transmission channel concerning the amount of transmission errors, e.g. bit error rate [BER]
    • H04N19/176 Adaptive coding in which the coding unit is an image region being a block, e.g. a macroblock
    • H04N19/436 Implementation details or hardware specially adapted for video compression or decompression, using parallelised computational arrangements


Abstract

The invention discloses a hardware implementation method for a remote sensing image compression algorithm based on JPEG-LS inter-frame expansion, belonging to the technical field of image compression. The method comprises the steps of: (1) compression mode control; (2) blocking of the coded-frame and reference-frame images; (3) motion estimation to obtain the best matching block; (4) parallel computation of multiple predictors and selection of the optimal predictor; (5) multi-branch modeling and prediction to obtain residuals; (6) limited-length Golomb coding of the residuals and output of the compressed code stream. The method supports an efficient inter-frame expansion structure with multi-channel data caching for block access, full-search motion estimation and parallel multi-predictor computation, and adopts pipelining and a template sliding window to improve pixel throughput. Inter-frame information is introduced on the basis of JPEG-LS intra-frame compression, and motion-compensated inter-frame prediction eliminates both spatial and temporal image redundancy, yielding high compression efficiency.

Description

Remote sensing image compression algorithm hardware implementation method based on JPEG-LS (joint photographic experts group-LS) inter-frame expansion
Technical Field
The invention belongs to the technical field of image compression, and particularly relates to a hardware implementation method for a remote sensing image compression algorithm based on JPEG-LS inter-frame expansion.
Background
With the rapid development of satellite remote sensing technology in China, the data volume generated by satellite-borne imaging payloads is growing rapidly. The huge volume of remote sensing image data places great pressure on the limited on-board storage and the satellite-ground link, and on-board remote sensing image compression is an effective measure to address this problem. JPEG-LS is an ISO/ITU standard for lossless compression of continuous-tone images, with excellent compression performance and well-controlled computational complexity. A lossless/near-lossless image compression algorithm based on JPEG-LS inter-frame expansion adopts motion compensation and multiple predictors, introducing the temporal dimension of the image sequence on top of the two spatial dimensions; it can reduce the pixel correlation of sequential images in space and time simultaneously, thereby obtaining a higher compression ratio.
The cost of satellite remote sensing imaging is high and the data are extremely precious, so high fidelity of the important information in regions of interest must be guaranteed during compression coding; however, the compression efficiency of high-fidelity compression algorithms is usually low, which puts great pressure on the transmission bandwidth of the satellite-ground link. In addition, the computing and storage resources of the on-board platform are scarce, and camera and code-stream data cannot be stored for long periods, so the on-board system must realize a strongly real-time compression function under limited resource constraints. In summary, the satellite-borne remote sensing image compression system faces the technical requirements of high fidelity and strong real-time performance, and two contradictions must be resolved: compression ratio versus fidelity, and the resource demand of real-time compression versus the limited satellite resources.
Aiming at the difficult problem of high-fidelity compression, the patent 'JPEG-LS image lossless/near-lossless compression method for preventing error diffusion' (patent application number: CN201610165800.8, publication number: CN105828070A) introduces a partitioned compression method on the basis of JPEG-LS, applying different near-lossless parameters to different areas on the premise that the target of interest is not degraded, thereby improving the overall compression ratio of the image. However, this method only considers the spatial correlation of images and cannot remove the temporal redundancy of sequential images, so the compression ratio remains low.
Aiming at the difficult problem of strongly real-time compression, the patent 'JPEG-LS conventional coding hardware implementation method' (patent application number: CN201210198818.X, publication number: CN102724506A) addresses the complex parameter-update and error-value calculation structure and the slow processing rate of the JPEG-LS compression algorithm. However, that method only realizes the lossless compression function of the standard and sidesteps the real-time implementation of the pixel-reconstruction feedback loop, so the near-lossless compression function is lost.
Disclosure of Invention
Aiming at the defects or improvement demands of the prior art, the invention provides a remote sensing image compression method based on JPEG-LS inter-frame expansion. It realizes an efficient inter-frame expansion structure with multi-channel data caching for block access, full-search motion estimation and parallel multi-predictor computation, and adopts the ideas of pipelining and a template sliding window, thereby solving two problems of satellite-borne remote sensing image data compression systems: the low compression ratio under high-fidelity compression, and the difficulty of implementing the compression algorithm in real time under limited resource constraints.
In order to achieve the above purpose, the present invention provides a hardware implementation method for a remote sensing image compression algorithm based on JPEG-LS inter-frame expansion. The method comprises the following steps:
(1) Using an off-chip memory to buffer the image data of the coding frame and the reference frame, and obtaining coding-block data and search-block data respectively according to different block row/column parameters;
(2) Performing a full search based on the SAD criterion within the motion search block formed from the reference frame to obtain the best matching block of the coding block, and outputting the coding block and the matching block to the next stage;
(3) Generating synchronized causal templates for the coding block and the matching block and computing multiple predictors in parallel; the predictor with the smallest in-block sum of absolute residuals is the optimal predictor;
(4) Performing fixed prediction with the optimal predictor, combining the adaptive corrector to calculate the prediction residual, and obtaining the Golomb coding parameters from the context modeling parameters;
(5) Calculating the coding parameters to complete the limited-length Golomb coding, and framing and outputting the compressed code stream and the decoding side information.
Specifically, the step (1) specifically includes:
(11) Four FIFOs respectively cache the write and read data of the coding frame and the write and read data of the reference frame, providing data bit-width conversion and clock-domain isolation;
(12) A counter is defined to count the write-data enable signals, completing storage-address accumulation and sequential data writing; according to the block row/column parameters, when the number of buffered lines reaches the block row count, the read address and offset address are calculated, and coding-block data of identical size and without mutual overlap are output from the storage area;
(13) Because of image block compression, the compressed and reconstructed image data exists in block form, so the write address and offset address must be calculated according to the block row/column parameters to obtain complete reference-frame image data; with the motion search step set to P, the row and column sizes of the search block are determined as ROW+2P and COL+2P respectively; the read address and offset address are calculated, and search-block data of identical size that overlap each other are output;
(14) The water levels of the multiple channels are monitored: the write-channel water level is the input-FIFO occupancy and the free space of the storage partition, and the read-channel water level is the output-FIFO free space and the storage-partition occupancy; a fixed-priority strategy judges the water level of each channel, and the bus is granted to the channel whose water level is high, completing the data transmission.
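As an illustration of the block partitioning in step (1), the following Python sketch models the read-address logic in software: non-overlapping ROW x COL coding blocks from the coding frame, and overlapping (ROW+2P) x (COL+2P) search blocks from the reference frame, clamped at the frame borders. The function names and the border clamping are assumptions for illustration; the hardware computes read and offset addresses rather than slicing arrays.

```python
# Hypothetical software model of the step-(1) block partitioning.

def coding_blocks(frame, ROW, COL):
    """Split a frame (list of lists) into non-overlapping ROW x COL blocks."""
    H, W = len(frame), len(frame[0])
    blocks = []
    for r0 in range(0, H, ROW):
        for c0 in range(0, W, COL):
            blocks.append([row[c0:c0 + COL] for row in frame[r0:r0 + ROW]])
    return blocks

def search_block(ref, r0, c0, ROW, COL, P):
    """Search window of size (ROW+2P) x (COL+2P) around the coding block at
    (r0, c0), clamped to the reference-frame borders (clamping is an assumption)."""
    H, W = len(ref), len(ref[0])
    rows = range(max(0, r0 - P), min(H, r0 + ROW + P))
    cols = range(max(0, c0 - P), min(W, c0 + COL + P))
    return [[ref[r][c] for c in cols] for r in rows]
```

For an interior block, the search window has exactly (ROW+2P) x (COL+2P) pixels, matching the search-block size stated in step (13).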
Specifically, the step (2) specifically includes:
(21) Using 2P FIFOs and (2P+1) x (2P+1) registers, 2P rows of search-block data (four rows when P = 2) are cascaded and cached; when the (2P+1)-th datum of the (2P+1)-th search-block row arrives, a (2P+1) x (2P+1) matching window is formed, the first datum of the first coding-block row is read out, aligned with the window data and output to the SAD calculation module; when the next search-block datum arrives, a new (2P+1) x (2P+1) matching window is formed, the second datum of the first coding-block row is read out, aligned and output to the SAD calculation module; the SAD calculation completes after (ROW+2P) x (COL+2P) pixel clock cycles;
(22) Using combinational logic, the difference between each coding-block pixel and each sign-extended matching-window pixel is computed; the sign bit of the difference is examined: if negative, the data is bitwise inverted and incremented by 1, and if positive it is left unchanged, yielding the absolute difference; an accumulator of sufficient bit width accumulates the absolute values, and when all pixels of one coding block have been processed, the (2P+1) x (2P+1) sums of absolute differences are sent to the compare-and-select circuit module;
(23) The (2P+1) x (2P+1) matching results are divided into (2P+1) groups and compared with a two-stage pipeline: the first stage compares the (2P+1) values within each group to obtain each group's minimum, and the second stage compares the (2P+1) group minima to obtain the overall minimum; the result identifies the best matching block, the motion vector {m, n} of the coding block is output, and the SAD of the best matching block is output to the predictor selection module as the sum of absolute residuals of the fourth predictor;
(24) Two on-chip FIFOs store the search-block data and the coding-block data respectively; the search-block data amount is (ROW+2P) x (COL+2P) and the coding-block data amount is ROW x COL. After the best-matching-block result is output, the search-block data is read out; validity is determined according to the row/column count, and when a datum is valid the corresponding coding-block datum is read out and output to the predictor selection module.
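A minimal software model of the full-search step (2), assuming the same SAD criterion: every candidate offset (m, n) inside the search block is evaluated and the minimum selected. The hardware evaluates candidates with a sliding register window and a pipelined compare tree; this sketch reproduces only the arithmetic.

```python
# Illustrative model of step-(2) full-search motion estimation (not the RTL).

def full_search_sad(code_blk, search_blk, P):
    """Return (min SAD, motion vector) for a ROW x COL coding block inside a
    (ROW+2P) x (COL+2P) search block."""
    ROW, COL = len(code_blk), len(code_blk[0])
    best = None  # (sad, (m, n))
    for m in range(2 * P + 1):          # vertical candidate offset
        for n in range(2 * P + 1):      # horizontal candidate offset
            sad = sum(abs(code_blk[i][j] - search_blk[m + i][n + j])
                      for i in range(ROW) for j in range(COL))
            if best is None or sad < best[0]:
                best = (sad, (m - P, n - P))   # vector relative to the centre
    return best
```

The returned minimum SAD is exactly the value forwarded to the predictor selection module as the fourth predictor's residual sum.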
Specifically, the step (3) specifically includes:
(31) The pixel rows and columns are counted in units of image blocks; the original pixel Ix of the coding block and the reconstructed pixel Rx_i of the matching block are spliced and written into a FIFO buffer; after one line is buffered, FIFO readout begins, the data is decomposed, and 3 intra-frame causal template pixels Ia, Ib, Ic and 4 inter-frame causal template pixels Ra_i, Rb_i, Rc_i and Uk are output;
the template structure and the boundary processing mode are specifically as follows:
when a coding-block pixel is in a non-first row and a non-first column, Ia is the left neighbor, Ib the upper neighbor and Ic the upper-left neighbor;
when the pixel is in the first row and the first column, the first pixel is used;
when the pixel is in the first row but not the first column, Ib and Ic use the first pixel;
when the pixel is in a non-first row but the first column, Ia uses Ib, and Ic uses the Ia of the previous row; the reference-frame pixels are handled in the same way;
(32) Three predicted values Px_1, Px_2 and Px_3 are calculated in parallel using the intra-frame and inter-frame causal template pixels; the first predictor is an intra-frame predictor, and the second and third predictors are inter-frame predictors;
(33) The actual value of the current pixel is differenced with each of the three predicted values to obtain the residuals of the different predictors; the absolute values are taken, and the sum of absolute residuals within the block is accumulated; when the statistics of one block are complete, this sum is aligned with the sum of absolute residuals of the fourth predictor output by the motion estimation module and sent to the predictor selection module;
(34) The four sums SUM1, SUM2, SUM3 and SUM4 are compared, and the predictor with the smallest sum is selected as the optimal predictor;
(35) The pixel data of the coding block and the matching block are spliced and held in a FIFO buffer; once the predictor selection result is output, readout from the FIFO begins, and the block data is sent to the multi-branch modeling and prediction module together with the predictor selection result.
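The parallel predictor evaluation in step (3) can be sketched as follows. The intra predictor is the JPEG-LS median (MED) predictor; the two inter-predictor formulas below are illustrative stand-ins, since the patent does not spell them out, and the fourth predictor's residual sum is the SAD already produced by motion estimation.

```python
# Sketch of step-(3) multi-predictor computation. Predictors 2 and 3 are
# assumed forms (temporal plus gradient, motion-compensated MED difference),
# not taken verbatim from the patent.

def med(a, b, c):
    """JPEG-LS median edge-detection predictor."""
    if c >= max(a, b):
        return min(a, b)
    if c <= min(a, b):
        return max(a, b)
    return a + b - c

def predictor_residuals(Ix, Ia, Ib, Ic, Ra, Rb, Rc, Uk):
    """Per-pixel residuals of three parallel predictors.
    Uk is the co-located matching-block pixel."""
    px = [
        med(Ia, Ib, Ic),                          # predictor 1: intra MED
        Uk + (Ia - Ra),                           # predictor 2: inter (assumed)
        Uk + med(Ia, Ib, Ic) - med(Ra, Rb, Rc),   # predictor 3: inter (assumed)
    ]
    return [Ix - p for p in px]
```

Summing the absolute residuals of each branch over a block and taking the minimum of the four sums (including the motion-estimation SAD) yields the optimal predictor of step (34).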
Specifically, the step (4) specifically includes:
(41) The pixel rows and columns are counted in units of image sub-blocks; the coding-block reconstructed pixel Rx and the matching-block reconstructed pixel Rx_i are spliced and written into a FIFO buffer; after one line is buffered, FIFO readout begins, the data is decomposed, and 4 intra-frame causal template pixels Ra, Rb, Rc, Rd and 4 inter-frame causal template pixels Ra_i, Rb_i, Rc_i and Uk are output;
the template structure and the boundary processing mode are specifically as follows:
when a coding-block pixel is in a non-first row and a non-first column, Ra is the left neighbor, Rb the upper neighbor, Rc the upper-left neighbor and Rd the upper-right neighbor;
when the pixel is in the first row and the first column, the first pixel is used;
when the pixel is in the first row but not the first column, Rb, Rc and Rd use the first pixel;
when the pixel is in a non-first row but the first column, Ra uses Rb, and Rc uses the Ra of the previous row;
Rd uses Rb when the pixel is in the last column of a non-first row; the reference-frame pixels are handled in the same way;
(42) According to the distortion-limiting property of near-lossless compression, the relation between the pixel reconstruction value Rx and the actual pixel value Ix_r is Rx = Ix_r ± Near; when Near is at its maximum of 2, the possible values of Ra are Ix_r-2, Ix_r-1, Ix_r, Ix_r+1 and Ix_r+2;
(43) The local environment of the current pixel is modeled with the coding-block causal template Ra, Rb, Rc, Rd to obtain the address index values Q1 and Q2 and the inversion signs SIGN1 and SIGN2;
(44) In the pipeline design, four clock cycles elapse between reading the context parameters from RAM and writing the updated values back, so pixel-address conflicts within these four cycles must be recorded: the context address of the current cycle is compared one by one with those of the next three cycles; each match sets the corresponding Conflict bit to 1, otherwise 0, forming the address conflict types Conflict1[2:0] and Conflict2[2:0]. According to the flag bits Conflict1[0] and Conflict2[0], the RAM read operation is suppressed when a read-write conflict occurs;
(45) Since Ra can take five values, prediction with the optimal predictor yields five predicted values Px_1, Px_2, Px_3, Px_4 and Px_5; C_sel1, C_sel2, N_sel1 and N_sel2 are selected according to the address conflict types Conflict1 and Conflict2; the selected parameter N is updated to obtain N_Update1 and N_Update2, and the N-parameter update flags N_flag1 and N_flag2 are generated; the fixed predicted values are corrected with the C parameter of the pixel's context address to obtain Px_correct1, Px_correct2, Px_correct3, Px_correct4 and Px_correct5; the actual pixel value Ix is differenced with the corrected predictions, and when SIGN is -1 the residual is negated, yielding Errval1, Errval2, Errval3, Errval4 and Errval5;
(46) Once the pixel reconstruction of the previous clock cycle is complete, the correct reconstruction-value branch of Ra is selected to obtain the residual, the sign SIGN and the address conflict type Conflict; to avoid division, the residual quantization is performed by table lookup: the ROM read address is {|Errval|, Near} and the stored data is {|Errval_q|, remainder};
(47) The C update has five cases: +2, +1, -2, -1 and unchanged. The remainder of the residual quantization is compensated, and the residual quantized value Errval_q is determined from the relation between the remainder and the divisor, i.e. the magnitude |Errval_q| combined with the sign of the quantized residual. The pixel reconstruction value is obtained from the actual pixel value Ix, the sign SIGN, the sign of the quantized residual, the near-lossless value Near and the compensated remainder after residual quantization. The compensated residual quantized value is reduced in range by a modulo operation;
(48) Selecting a correct C branch according to the address Conflict type Conflict and the adjacent two pixel C parameter updating identifiers C_flag and C_flag_r to obtain a residual error modulus value Errval_mod, a pixel reconstruction value Rx and a correct parameter correction value C_correction;
(49) The corresponding A and B parameters are selected according to the address conflict type, giving A_sel and B_sel; combined with N_flag, the update of the A, B, C, N parameters is completed, and the updated A, B, C, N are written into the two sets of parameter RAMs simultaneously; the Golomb coding parameter K is calculated from the A and N parameters before update; according to the near-lossless value Near, the coding parameter K and the B and N parameters before update, the residual modulo value Errval_mod is mapped to a non-negative integer Merrval;
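The residual quantization and pixel reconstruction of steps (46)-(47) can be modeled in software. The hardware replaces the division by a ROM lookup addressed by {|Errval|, Near}; the sketch below computes the same mapping directly, following the standard near-lossless quantization rule that the patent builds on. Clamping of the reconstruction to the pixel range is omitted for brevity.

```python
# Software model of near-lossless residual quantization (division form of the
# {|Errval|, Near} ROM lookup). Variable names follow the patent.

def quantize_residual(errval, near):
    """Quantize a prediction residual with allowed per-pixel error `near`.
    Returns (quantized value, dequantized residual)."""
    step = 2 * near + 1
    if errval >= 0:
        q = (errval + near) // step
    else:
        q = -((near - errval) // step)
    return q, q * step   # dequantized residual differs from errval by <= near

def reconstruct(px, errval, near):
    """Reconstruction value used as Rx in later causal templates (unclamped)."""
    q, rec = quantize_residual(errval, near)
    return px + rec
```

The guarantee |Errval - dequantized| <= Near is exactly the Rx = Ix_r ± Near relation stated in step (42).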
Specifically, step (43) specifically includes:
(431) Three local gradients are calculated from Ra, Rb, Rc, Rd: D0 = Rd - Rb, D1 = Rb - Rc, D2 = Rc - Ra. Note that since Ra has 5 possible values, D2 also has 5 values: its maximum is Rc - Ix_r + Near, its minimum is Rc - Ix_r - Near, and the difference between maximum and minimum is 2Near;
(432) The gradients D0, D1, D2 are quantized according to the quantization thresholds T1, T2, T3 and Near: Q[i] = -4 when D[i] <= -T3; Q[i] = -3 when -T3 < D[i] <= -T2; Q[i] = -2 when -T2 < D[i] <= -T1; Q[i] = -1 when -T1 < D[i] < -Near; Q[i] = 0 when -Near <= D[i] <= Near; Q[i] = 1 when Near < D[i] < T1; Q[i] = 2 when T1 <= D[i] < T2; Q[i] = 3 when T2 <= D[i] < T3; Q[i] = 4 when T3 <= D[i]. Note that Q2 takes at most two distinct values, since the minimum quantization interval is 2Near and the maximum of D2 differs from its minimum by 2Near;
(433) The gradient quantized values (Q[0], Q[1], Q[2]) are fused as Q = 81*Q[0] + 9*Q[1] + Q[2]; if the first non-zero element of (Q[0], Q[1], Q[2]) is negative, the triple is negated to (-Q[0], -Q[1], -Q[2]) before fusion and the sign SIGN is set to -1. Since Q2 has two possible values, two address index values Q1, Q2 and inversion signs SIGN1, SIGN2 are finally obtained;
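The context modeling of steps (431)-(433) can be sketched as below: local gradients from the causal template, 9-level quantization against T1, T2, T3 and Near, sign flipping, and the fusion Q = 81*Q0 + 9*Q1 + Q2. The default thresholds in the signature are the usual JPEG-LS values for 8-bit data and are an assumption here, not taken from the patent.

```python
# Sketch of gradient quantization and context-index fusion (one Ra branch).

def quantize_gradient(d, near, t1, t2, t3):
    """Map a local gradient to one of the 9 quantization levels -4..4."""
    if d <= -t3: return -4
    if d <= -t2: return -3
    if d <= -t1: return -2
    if d < -near: return -1
    if d <= near: return 0
    if d < t1: return 1
    if d < t2: return 2
    if d < t3: return 3
    return 4

def context_index(ra, rb, rc, rd, near=0, t1=3, t2=7, t3=21):
    """Return (fused context index Q, sign) for causal template Ra, Rb, Rc, Rd."""
    q = [quantize_gradient(d, near, t1, t2, t3)
         for d in (rd - rb, rb - rc, rc - ra)]   # D0, D1, D2
    sign = 1
    for v in q:                     # flip if the first non-zero element is negative
        if v != 0:
            if v < 0:
                q = [-x for x in q]
                sign = -1
            break
    return 81 * q[0] + 9 * q[1] + q[2], sign
```

In the hardware, this function is effectively evaluated for each of the five Ra branches, producing the two candidate index values Q1, Q2 and signs SIGN1, SIGN2.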
Specifically, step (49) includes the sub-steps of:
(491) According to the address Conflict type Conflict, A_sel and B_sel are selected and output to a context parameter updating module;
(492) The A parameter, the B parameter and the C parameter are updated by using the A_sel, the B_sel, the C_correct and the N_flag, and the updating process adopts combinational logic; writing the updated parameters A_update, B_update, C_update and N_update into two groups of parameter RAMs at the same time; and sending the un-updated A_sel, B_sel, N_sel and residual error modulo value Errval_mod of the current pixel to a K value calculation and residual error mapping module;
(493) According to the algorithm principle, the N parameter is shifted left bit by bit and compared with the A parameter until N shifted left by K bits is greater than or equal to A; the value K is then output;
(494) According to the algorithm principle, the lossless and near-lossless residual mapping modes are distinguished, and the signed residual modulo value Errval_mod is mapped to a non-negative integer residual mapping value Merrval; note that in the near-lossless case the relation between the B parameter and the N parameter must be examined;
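Steps (493)-(494) follow the standard JPEG-LS rules, which can be sketched directly: K is the smallest shift with (N << K) >= A, and the residual is folded into a non-negative integer, with a special mapping in the lossless case when K = 0 and 2B <= -N.

```python
# Sketch of the Golomb parameter computation and residual mapping of
# steps (493)-(494), following the standard JPEG-LS definitions.

def golomb_k(A, N):
    """Smallest k such that (N << k) >= A."""
    k = 0
    while (N << k) < A:
        k += 1
    return k

def map_residual(errval, k, B=0, N=1, near=0):
    """Fold a signed residual into a non-negative integer Merrval."""
    if near == 0 and k == 0 and 2 * B <= -N:
        # special lossless mapping when the bias indicates negative residuals
        return 2 * errval + 1 if errval >= 0 else -2 * (errval + 1)
    return 2 * errval if errval >= 0 else -2 * errval - 1
```

The hardware implements golomb_k with the sequential left-shift comparator described in step (493).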
specifically, step (5) includes the sub-steps of:
(51) Using combinational logic, the residual mapping value Merrval is shifted right by K bits, and the quotient val_temp is obtained through one stage of D flip-flops; val_temp is then shifted left by K bits to obtain Merrval_temp through another stage of D flip-flops; Merrval_temp is subtracted from Merrval and the remainder n is output through a further stage of D flip-flops. To keep the quotient and remainder synchronized, val_temp is delayed by two register stages before being output as the quotient val;
(52) If val is less than the code-length upper limit MAX = LIMIT - qbpp - 1, the codeword consists of val 0-bits, one 1-bit and the K-bit remainder n; otherwise it consists of (LIMIT - qbpp - 1) 0-bits, one 1-bit and the qbpp-bit Merrval;
(53) A 64-bit register reg64 is defined; the codeword of the first pixel is placed in the low bits of reg64; when the next codeword arrives, the existing bits are shifted left and the new codeword is placed in the low bits; when the register holds 64 bits, its content is output to the FIFO and the register is cleared, and the operation repeats;
(54) When the compression of one coding block finishes, the first pixel of the block, the motion vector, the predictor selection and the code-stream length of the block are framed with the code stream as block side information; the intra/inter compression mode parameter, the inter-frame compression period parameter, the near-lossless Near value parameter and the block row/column parameters form the whole-image side information, which together with all the block data constitutes the final compressed code-stream output;
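The limited-length Golomb codeword construction of steps (51)-(52) can be sketched as follows. The quotient val = Merrval >> K is compared with MAX = LIMIT - qbpp - 1: in the normal case the codeword is val zeros, a 1 and the K low bits of Merrval; in the escape case it is MAX zeros, a 1 and qbpp bits of Merrval. A bit string is returned for clarity, whereas the hardware packs bits into the 64-bit register of step (53).

```python
# Sketch of the step-(52) limited-length Golomb codeword.

def golomb_limited(merrval, k, limit, qbpp):
    """Return the codeword for Merrval as a '0'/'1' string."""
    max_q = limit - qbpp - 1
    val = merrval >> k                           # quotient by right shift
    if val < max_q:
        tail = format(merrval & ((1 << k) - 1), f'0{k}b') if k else ''
        return '0' * val + '1' + tail            # unary quotient + k-bit remainder
    return '0' * max_q + '1' + format(merrval, f'0{qbpp}b')   # escape case
```

The escape codeword always has max_q + 1 + qbpp = LIMIT bits, which is what bounds the worst-case code length.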
Specifically, step (1) further includes the steps of:
parsing the uplink parameter-injection instruction according to the protocol, completing the initialization of the compression parameters, and controlling the overall compression working mode; this specifically comprises the following substeps:
S1, parsing the intra/inter compression mode parameter, the inter-frame compression period parameter, the near-lossless Near value parameter, the pixel bit-width parameter and the block row/column parameters according to the protocol.
S2, calculating the pixel value RANGE according to the near-lossless value Near and the pixel bit width bpp, and further calculating qbpp, the Golomb coding length limit LIMIT and the gradient quantization thresholds; calculating the context parameters A, B, C, N and initializing the two sets of parameter RAMs;
S3, controlling the compression state of the current frame according to the inter-frame compression mode parameter and the inter-frame compression period parameter; in intra-frame compression mode, reference-frame blocking and motion estimation are skipped, the intra-frame predictor is selected directly, and all image frames are intra-frame compressed; in inter-frame compression mode, the initial frame of each period is intra-frame compressed and the remaining frames of the period are inter-frame compressed.
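The parameter initialization of step S2 can be sketched with the standard JPEG-LS definitions (ITU-T T.87) that the patent builds on: RANGE and qbpp follow from the near-lossless value and pixel bit width, LIMIT bounds the Golomb code length, and A, B, C, N seed the context parameter RAMs. The exact formulas below are the standard's, assumed here rather than quoted from the patent.

```python
# Sketch of step-S2 compression-parameter initialization (standard JPEG-LS
# formulas, assumed as the basis of the patent's parameter setup).

def init_params(bpp, near):
    maxval = (1 << bpp) - 1
    rng = (maxval + 2 * near) // (2 * near + 1) + 1      # RANGE
    qbpp = max(1, (rng - 1).bit_length())                # ceil(log2(RANGE))
    limit = 2 * (bpp + max(8, bpp))                      # Golomb length limit
    a_init = max(2, (rng + 32) // 64)                    # A parameter seed
    return {'RANGE': rng, 'qbpp': qbpp, 'LIMIT': limit,
            'A': a_init, 'B': 0, 'C': 0, 'N': 1}
```

For 8-bit lossless data this gives RANGE = 256, qbpp = 8 and LIMIT = 32, which matches the LIMIT - qbpp - 1 escape threshold used in the Golomb coding step.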
According to another aspect of the present invention, the present invention provides a remote sensing image compression algorithm hardware implementation system based on JPEG-LS inter-frame expansion, the system comprising:
the first module is used for caching the image data of the coding frame and the reference frame by using an off-chip memory and respectively obtaining coding block and searching block data according to different block row and column parameters;
the second module is used for carrying out full search based on SAD criterion in the motion search block formed by the reference frame to obtain the best matching block of the coding block, and outputting the coding block and the matching block to the next stage;
The third module is used for generating synchronous causal templates of the coding block and the matching block and calculating a plurality of predictors in parallel, and the predictor with the smallest sum of absolute residuals within the block is selected as the optimal predictor;
a fourth module, configured to perform fixed prediction using the optimal predictor, calculate the prediction residual in combination with an adaptive corrector, and obtain the Golomb coding parameter according to the context modeling parameters;
and a fifth module for calculating the coding parameters to complete Golomb limited-length coding, and framing the compressed code stream with the decoding auxiliary information for output.
Specifically, the first module specifically includes:
the first unit is used for buffering the coding frame write data, the coding frame read data, the reference frame write data and the reference frame read data with four FIFOs respectively, providing data bit width conversion and clock isolation;
the second unit is used for defining a counter that counts the write data enable signal to accumulate the storage address and write data sequentially; according to the block row and column parameters, once a block's worth of rows has been buffered, the read address and offset address are calculated and coding block data of identical size that do not overlap each other are output from the storage area;
the third unit is used for handling block-based compression: since the compressed and reconstructed image data exists in block form, the write address and offset address are calculated according to the block row and column parameters to assemble complete reference frame image data; with the motion search step set to P, the search block row and column dimensions are determined as ROW+2P and COL+2P respectively, the read address and offset address are calculated, and search block data of identical size that overlap each other are output;
a fourth unit for monitoring the fill levels of a plurality of channels, wherein the fill level of a write channel is given by the input FIFO occupancy and the free space of its storage partition, and the fill level of a read channel is given by the output FIFO free space and the occupancy of its storage partition; a fixed-priority strategy evaluates the fill level of each channel, and the bus is granted to the channel with a high fill level to complete the data transmission.
Specifically, the second module specifically includes:
a matching template generation unit, which uses 2P FIFOs (first in first out) and (2P+1)×(2P+1) registers in cascade to buffer 2P rows of search block data; a (2P+1)×(2P+1) matching window forms when the (2P+1)-th datum of the (2P+1)-th row of the search block arrives, at which point the first datum of the first row of the coding block is read, aligned with the window data, and output to the SAD calculation module; when the (2P+2)-th datum of the search block arrives, a new (2P+1)×(2P+1) matching window forms, the second datum of the first row of the coding block is read, aligned and output to the SAD calculation module, and SAD calculation completes after (ROW+2P)×(COL+2P) pixel clock cycles;
a parallel computing unit for extending each pixel of the coding block and each pixel of the matching window with a sign bit and computing the difference with combinational logic; the sign bit of the difference is examined, and if negative the data is bitwise inverted and 1 is added, while if positive the data is left unchanged, yielding the absolute difference; an accumulator of sufficient width accumulates the absolute values, and when all pixels of one coding block have been processed, the (2P+1)×(2P+1) absolute-value sums are sent to the comparison and selection circuit module;
a comparison and selection unit for dividing the (2P+1)×(2P+1) matching results into (2P+1) groups and comparing them with a two-stage pipeline; the first pipeline stage compares the (2P+1) data in each group to obtain each group's minimum, and the second stage compares the (2P+1) group minima to obtain the final minimum; the window with the minimum SAD is the best matching block, the motion vector {m, n} of the coding block is output, and the SAD of the best matching block is output to the predictor selection module as the sum of absolute residuals of the fourth predictor;
and the cache and output unit is used for storing the search block data and coding block data in two on-chip FIFOs respectively, wherein the search block data amount is (ROW+2P)×(COL+2P) and the coding block data amount is ROW×COL; after the best matching block result is output, the search block data is read out, whether each datum is valid is determined from the row and column count, and when valid, one coding block datum is read out correspondingly and output together to the predictor selection module.
Specifically, the third module specifically includes:
the synchronous causal template unit is used for counting pixel rows and columns with the image block as the unit; the original pixel Ix of the coding block and the reconstructed pixel Rx_i of the matching block are spliced and written into a FIFO buffer, and after one line has been buffered the FIFO is read and the data decomposed, outputting 3 intra-frame causal template pixels Ia, Ib, Ic and 4 inter-frame causal template pixels Ra_i, Rb_i, Rc_i and Uk; when the coding block pixel is in a non-first row and non-first column, Ia is the left neighbor, Ib the upper neighbor and Ic the upper-left neighbor;
when the coding block pixel is in the first row and the first column, the first pixel is used;
when the coding block pixel is in the first row but not the first column, Ib and Ic use the first-row pixels;
when the coding block pixel is in the first column but not the first row, Ia uses Ib and Ic uses the Ia of the previous row; the reference frame pixels are handled in the same way;
a predictor parallel computing unit for computing three predicted values px_1, px_2, px_3 in parallel using intra and inter causal template pixels; the first predictor is an intra-frame predictor, and the second predictor and the third predictor are inter-frame predictors;
the residual absolute value summing unit is used for subtracting each of the three predicted values from the actual value of the current pixel to calculate the residual of each predictor, taking absolute values, and accumulating the sum of absolute residuals within one block; when the statistics of one block are finished, the sum of absolute residuals of the fourth predictor output by the motion estimation module is aligned with them and all are output to the predictor selection module;
the predictor selection unit is used for comparing the 4 sums SUM1, SUM2, SUM3 and SUM4, and the predictor with the minimum sum is the optimal predictor;
the block pixel buffer and output unit is used for splicing the pixel data of the coding block and the matching block, using the FIFO buffer, waiting for the output of the selection result of the predictor, and starting to read out the data in the FIFO; the block data is sent to the multi-branch modeling and prediction module along with the predictor selection result.
Specifically, the system further comprises a mode control module, which is specifically used for parsing the parameter injection instruction according to the protocol, completing the initialization of compression parameters and controlling the overall compression working mode; it specifically comprises the following submodules:
the first sub-module is used for parsing the intra/inter compression mode parameter, the inter-frame compression period parameter, the near-lossless Near value parameter, the pixel bit width parameter and the block row and column parameters according to the protocol.
The second sub-module is used for calculating the pixel value RANGE according to the near-lossless degree Near and the pixel bit width bpp, and further calculating qbpp, the Golomb coding length limit LIMIT and the gradient quantization thresholds; calculating the context parameters A, B, C, N, and initializing two sets of parameter RAMs;
a third sub-module, configured to control the current frame compression state according to the intra/inter compression mode parameter and the inter-frame compression period parameter; in intra-frame compression, reference frame image blocking and motion estimation are skipped, the intra-frame predictor is selected directly, and all image frames are intra-frame compressed; in inter-frame compression, the initial frame of each period is intra-frame compressed, and the remaining frames in the period are inter-frame compressed.
In general, the above technical solutions conceived by the present invention have the following beneficial effects compared with the prior art:
(1) The invention provides a remote sensing image lossless/near-lossless compression hardware computing architecture based on JPEG-LS inter-frame expansion, designing an efficient inter-frame expansion structure with a multi-channel data buffer supporting block access, full-search motion estimation and multi-predictor parallel computation, and adopting pipelining and sliding-window templates to improve the pixel throughput rate;
(2) The invention introduces inter-frame information on the basis of JPEG-LS intra-frame compression and adopts motion-compensated inter-frame prediction, eliminating spatial and temporal image redundancy simultaneously for higher compression efficiency; remote sensing images have a large swath and different regions have different characteristics, so selecting the optimal predictor adaptively per block yields a higher compression ratio;
(3) The invention selects the intra/inter predictor through bypass motion estimation performed in advance and configures the near-lossless parameter, realizing arbitrary switching among intra/inter and lossless/near-lossless compression methods, improving the flexibility of the compression system and providing feasibility for rate controllability under limited bandwidth constraints;
(4) The invention sets a fixed inter-frame compression period and regularly inserts intra-frame compressed frames, breaking the long decoding dependency chain, so that error diffusion can be confined within one inter-frame compression period and the error resilience is improved.
Drawings
FIG. 1 is a block diagram of a hardware implementation architecture in an embodiment of the present invention;
FIG. 2 is a diagram illustrating a detailed computing architecture of motion compensated inter-frame expansion in an embodiment of the present invention;
FIG. 3 is a detailed computation architecture diagram of a JPEG-LS encoder according to an embodiment of the present invention;
FIG. 4 is a block diagram of reference frame and encoded frame images in an embodiment of the present invention;
FIG. 5 is a schematic diagram of a fast motion estimation matching template in an embodiment of the present invention;
FIG. 6 is a schematic diagram of a fast matching template implementation method in an embodiment of the present invention;
FIG. 7 is a schematic diagram of a predictor selection template causal template in an embodiment of the present invention;
FIG. 8 is a schematic diagram of a multi-predictor formulation in accordance with an embodiment of the present invention;
FIG. 9 is a schematic diagram of a causal template of a parallel forward prediction module in an embodiment of the present invention;
FIG. 10 is a diagram illustrating forward prediction tasks and RAM operations for each cycle in an embodiment of the present invention;
FIG. 11 is a schematic diagram of a pixel reconstruction improvement formula in an embodiment of the present invention;
FIG. 12 is a schematic diagram of four cycles after updating the A parameter and the B parameter in an embodiment of the present invention;
fig. 13 is a schematic diagram of a length-limited code in an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention. In addition, the technical features of the embodiments of the present invention described below may be combined with each other as long as they do not collide with each other.
FIG. 1 shows the hardware implementation framework employed by an embodiment of the method of the present invention, FIG. 2 shows the detailed computing architecture of the motion-compensated inter-frame expansion, and FIG. 3 shows the detailed computing architecture of the JPEG-LS encoder; the implementation of this embodiment mainly includes the following steps:
S1, compression mode control: parsing the parameter injection instruction according to the protocol, completing the initialization of the compression parameters, and simultaneously controlling the overall compression working mode.
Specifically, step S1 includes the following sub-steps:
S11, parsing the parameter injection instruction: according to the protocol, the intra/inter compression mode parameter (inter), the inter-frame compression period parameter (16), the near-lossless Near value parameter (2), the pixel bit width parameter (12) and the block row and column parameters (16×16) are parsed.
S12, initializing parameters: the pixel value RANGE=820 is calculated from the near-lossless degree Near=2 and the pixel bit width bpp=12, and qbpp=10 and the Golomb coding length limit LIMIT=48 are calculated. The gradient quantization thresholds default to T1=18, T2=67, T3=276. The two sets of parameter RAMs are initialized for the context parameters A=13, B=0, C=0, N=1. True dual-port RAMs are used so that both ports write simultaneously, reducing the initialization time.
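The derived values quoted in S12 follow directly from the JPEG-LS (ITU-T T.87) parameter definitions; a small sketch (function name ours) reproduces them:

```python
import math

def init_params(bpp=12, near=2):
    """Parameter derivation of step S12, following the JPEG-LS
    (ITU-T T.87) definitions; variable names are illustrative."""
    maxval = (1 << bpp) - 1
    # RANGE = floor((MAXVAL + 2*NEAR) / (2*NEAR + 1)) + 1
    rng = (maxval + 2 * near) // (2 * near + 1) + 1
    qbpp = math.ceil(math.log2(rng))                 # bits to hold RANGE
    limit = 2 * (bpp + max(8, bpp))                  # Golomb length limit
    a_init = max(2, (rng + 32) // 64)                # initial context A
    return rng, qbpp, limit, a_init
```

For bpp=12 and Near=2 this gives exactly RANGE=820, qbpp=10, LIMIT=48 and A=13 as stated above.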
S13, mode control: the compression state of the current frame is controlled according to the intra/inter compression mode parameter and the inter-frame compression period parameter. In the inter-frame compression mode, the 1st frame is intra-frame compressed, directly using the intra-frame predictor, and the 2nd to 16th frames are inter-frame compressed. Frame 17 is then intra-frame compressed, the next 15 frames are inter-frame compressed, and so on.
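The periodic frame-type schedule of S13 can be sketched as follows; the function name and the 1-based frame indexing are illustrative assumptions:

```python
def frame_mode(frame_idx, period=16, inter_enabled=True):
    """Illustrative frame-type schedule for step S13.

    frame_idx is 1-based. In inter mode, the first frame of every
    `period` frames is intra-coded and the rest are inter-coded; in
    pure intra mode every frame is intra-coded."""
    if not inter_enabled:
        return "intra"
    return "intra" if (frame_idx - 1) % period == 0 else "inter"
```

With period 16 this reproduces the schedule above: frames 1 and 17 are intra-compressed and frames 2 to 16 are inter-compressed, which also bounds error diffusion to one period.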
S2, image blocking of the coding frame and the reference frame: and using an off-chip memory SDRAM to cache the image data of the coding frame and the reference frame, and respectively obtaining the data of the coding block and the searching block according to different block row and column parameters. A block diagram of the encoded frames and the reference frames is shown in fig. 4.
Specifically, step S2 includes the following sub-steps:
S21, FIFO buffering and isolation: four FIFOs respectively buffer the coding frame write data, the coding frame read data, the reference frame write data and the reference frame read data, and provide data bit width conversion and clock isolation.
S22, calculating the coding frame sequential write address and blocked read address: a counter is defined to count the write data enable signal, accumulating the storage address and writing data sequentially; according to the block row and column parameters, once a block's worth of rows has been buffered, the read address and offset address are calculated, and coding block data of identical size that do not overlap each other are output from the storage area.
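The blocked read-address computation of S22 can be illustrated for a row-major frame buffer; the function and parameter names are ours, and real hardware would keep the base address and the per-line offset as separate accumulators rather than evaluating the full sum:

```python
def block_read_addrs(img_cols, row, col, blk_r, blk_c):
    """Illustrative read-address generation for step S22: for coding
    block (blk_r, blk_c) of size row x col in a frame stored row-major
    with img_cols pixels per line, a base address plus a per-line
    offset yields non-overlapping block reads."""
    base = blk_r * row * img_cols + blk_c * col
    return [base + r * img_cols + c for r in range(row) for c in range(col)]
```

For an 8-pixel-wide frame and 2×2 blocks, block (0,0) reads addresses 0, 1, 8, 9 and block (1,1) reads 18, 19, 26, 27: every block covers a distinct, non-overlapping region, as required.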
S23, calculating the reference frame blocked write address and blocked read address: because compression is block-based, the compressed and reconstructed image data exists in block form, so the write address and offset address must be calculated according to the block row and column parameters to assemble complete reference frame image data; with the motion search step set to 2, the search block row and column dimensions are determined as (ROW+4) and (COL+4), the read address and offset address are calculated, and search block data of identical size that overlap each other are output.
S24, multichannel data management: the fill levels of the several channels are monitored, wherein the fill level of a write channel is given by the input FIFO occupancy and the free space of its storage partition, and the fill level of a read channel is given by the output FIFO free space and the occupancy of its storage partition. A fixed-priority strategy evaluates the fill level of each channel, and the bus is granted to the channel with a high fill level to complete the data transmission.
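The fixed-priority grant of S24 can be sketched as follows; the channel ordering, the threshold and the names are illustrative assumptions:

```python
def grant(levels, threshold=0):
    """Fixed-priority bus grant sketch for step S24.

    `levels` lists each channel's fill level in fixed priority order
    (index 0 = highest priority). The highest-priority channel whose
    level exceeds `threshold` wins the bus; None means all idle."""
    for ch, level in enumerate(levels):
        if level > threshold:
            return ch
    return None
```

Fixed priority keeps the arbiter a simple priority encoder in hardware; the cost is that a persistently busy high-priority channel can starve lower ones, which the fill-level thresholds are meant to mitigate.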
S3, motion estimation, namely obtaining an optimal matching block: and in the motion search block formed by the reference frame, performing full search based on the SAD criterion to obtain the best matching block of the coding block. And outputs the encoded block and the matching block to the next stage.
Specifically, step S3 includes the following sub-steps:
S31, generating the matching template: in the form of 4 FIFOs + 25 registers, four rows of search block data are buffered in cascade; a 5×5 matching window forms when the fifth datum of the fifth row of the search block arrives, at which point the first datum of the first row of the coding block is read, aligned with the window data, and output to the SAD calculation module. When the sixth datum of the fifth row of the search block arrives, a new 5×5 matching window forms, the second datum of the first row of the coding block is read, aligned and output to the SAD calculation module, and SAD calculation completes after (ROW+4)×(COL+4) pixel clock cycles. A schematic diagram of the motion estimation fast matching template is shown in fig. 5. A schematic diagram of the 5×5 matching window implementation is shown in fig. 6.
S32, full-search SAD parallel calculation: each pixel of the coding block and each pixel of the matching window is extended with the sign bit 1'b0 as the most significant bit, and the difference is computed with combinational logic. The sign bit of the difference is examined: if it is 1'b1, the data is bitwise inverted and 1 is added; if it is 1'b0, the data is left unchanged, yielding the absolute difference. An accumulator of sufficient width accumulates the absolute values, and when all pixels of one coding block have been processed, the 25 absolute-value sums are sent to the comparison and selection circuit module.
S33, comparison and selection circuit: the 25 matching results are divided into 5 groups and compared with a two-stage pipeline. The first pipeline stage compares the 5 data in each group to obtain each group's minimum, and the second stage compares the 5 group minima to obtain the final minimum. The minimum corresponds to the best matching block; the motion vector {m[2:0], n[2:0]} of the coding block is output, and the SAD of the best matching block is output to the predictor selection module as the sum of absolute residuals of predictor 4.
S34, buffering and outputting the best matching block: the search block data and coding block data are stored in two on-chip FIFOs respectively, with data amounts (ROW+4)×(COL+4) and ROW×COL; after the best matching block result is output, the search block data is read out, whether each datum is valid is determined from the row and column count, and when valid, one coding block datum is read out correspondingly and output together to the predictor selection module.
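Steps S31 to S33 amount to an exhaustive SAD search over the (2P+1)×(2P+1) candidate offsets; a plain software model (names ours, without the hardware pipelining or sliding window) is:

```python
def full_search(code_blk, search_blk, p=2):
    """Software model of full-search motion estimation with the SAD
    criterion (steps S31-S33). code_blk is ROW x COL, search_blk is
    (ROW+2P) x (COL+2P); returns (min SAD, m, n) with the motion
    vector expressed relative to the window centre."""
    rows, cols = len(code_blk), len(code_blk[0])
    best = None
    for m in range(2 * p + 1):        # vertical offset into search block
        for n in range(2 * p + 1):    # horizontal offset
            sad = sum(abs(code_blk[r][c] - search_blk[r + m][c + n])
                      for r in range(rows) for c in range(cols))
            if best is None or sad < best[0]:
                best = (sad, m - p, n - p)
    return best
```

The hardware evaluates all 25 candidate sums concurrently with one accumulator per offset, whereas this model visits them sequentially; the selected minimum and motion vector are identical.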
S4, multi-predictor parallel computation and optimal predictor selection: synchronous causal templates of the coding block and the matching block are generated and a plurality of predictors are computed in parallel; the predictor with the smallest sum of absolute residuals within the block is the optimal predictor.
Specifically, step S4 includes the sub-steps of:
S41, generating the synchronous causal template: pixel rows and columns are counted with the image block as the unit. The original pixel Ix of the coding block and the reconstructed pixel Rx_i of the matching block are spliced and written into the FIFO buffer; after one line has been buffered, the FIFO is read and the data decomposed, and 3 intra-frame causal template pixels Ia, Ib, Ic and 4 inter-frame causal template pixels Ra_i, Rb_i, Rc_i, Uk are output according to the template structure and boundary handling shown in fig. 7. The template structure and boundary handling are specifically as follows: (1) when the coding block pixel is in a non-first row and non-first column, Ia is the left neighbor, Ib the upper neighbor and Ic the upper-left neighbor; (2) when the pixel is in the first row and the first column, the first pixel is used; (3) when the pixel is in the first row but not the first column, Ib and Ic use the first-row pixels; (4) when the pixel is in the first column but not the first row, Ia uses Ib and Ic uses the Ia of the previous row. Reference frame pixels are handled the same way.
S42, parallel calculation of predictors: three predicted values Px_1, Px_2, Px_3 are calculated in parallel using the intra-frame and inter-frame causal template pixels. The predictor formulas are shown in fig. 8, where predictor 1 is the intra-frame predictor and predictors 2 and 3 are inter-frame predictors. Predictor 2 contains a division by 3 which, for ease of hardware computation, is optimized into a multiplication by 5461 followed by a 14-bit right shift.
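The fixed-point division by 3 mentioned in S42 can be checked in a few lines; note that 5461/2^14 slightly underestimates 1/3, so exact multiples of 3 can come out one short unless a rounding addend is applied (a caveat we add, not stated in the patent):

```python
def approx_div3(x):
    """Hardware-friendly division by 3 as described for predictor 2:
    multiply by 5461 (close to 2**14 / 3), then shift right 14 bits.
    This replaces a divider with one constant multiplier and a wire
    shift. Exact multiples of 3 may land one below x // 3 because
    5461/16384 is just under 1/3."""
    return (x * 5461) >> 14
```

The approximation error is at most one least-significant unit over the 12-bit pixel range, which the subsequent residual path absorbs.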
S43, summing absolute values of residual errors: according to the algorithm principle, the actual value of the current pixel and three predicted values are respectively subjected to difference, residual values of different predictors are calculated, then absolute values are taken, and the sum of the residual absolute values in one block is counted. When one block data statistics is finished, the sum of residual absolute values of predictors 4 output by the motion estimation module is aligned and output to the predictor selection module.
S44, selecting a predictor: the 4 sums are compared; when SUM4<SUM3, SUM4<SUM2 and SUM4<SUM1, SUM4 is the smallest; otherwise, if SUM3<SUM2 and SUM3<SUM1, SUM3 is the smallest; otherwise, if SUM2<SUM1, SUM2 is the smallest; otherwise SUM1 is the smallest. The predictor with the smallest sum is the optimal predictor.
S45, caching and outputting the segmented pixels: and splicing the pixel data of the coding block and the matching block, using the FIFO buffer, and starting to read out the data in the FIFO after the selection result of the predictor is output. The block data is sent to the multi-branch modeling and prediction module along with the predictor selection result.
S5, multi-branch modeling and prediction to obtain the residual: fixed prediction is performed with the optimal predictor, the prediction residual is calculated in combination with the adaptive corrector, and the Golomb coding parameters are obtained according to the context modeling parameters.
Specifically, step S5 includes the sub-steps of:
S51, generating the causal template: pixel rows and columns are counted with the image block as the unit. The coding block reconstructed pixel Rx and the matching block reconstructed pixel Rx_i are spliced and written into the FIFO buffer; after one line has been buffered, the FIFO is read and the data decomposed, and 4 intra-frame causal template pixels Ra, Rb, Rc, Rd and 4 inter-frame causal template pixels Ra_i, Rb_i, Rc_i, Uk are output. The template structure and boundary handling are specifically as follows: (1) when the coding block pixel is in a non-first row and non-first column, Ra is the left neighbor, Rb the upper neighbor, Rc the upper-left neighbor and Rd the upper-right neighbor; (2) when the pixel is in the first row and the first column, the first pixel is used; (3) when the pixel is in the first row but not the first column, Rb, Rc and Rd use the first-row pixels; (4) when the pixel is in the first column but not the first row, Ra uses Rb and Rc uses the Ra of the previous row; (5) when the pixel is in the last column of a non-first row, Rd uses Rb. Reference frame pixels are handled the same way. The specific template structure and boundary handling are shown in fig. 9.
S52, predicting the reconstruction value: by the bounded-distortion property of near-lossless compression, the pixel reconstruction value Rx and the actual pixel value Ix_r satisfy Rx = Ix_r ± Near. When Near is 2, the candidate values of Ra are Ix_r-2, Ix_r-1, Ix_r, Ix_r+1, Ix_r+2.
S53, context modeling: modeling the local environment where the current pixel is located by using the coding block causal template Ra, rb, rc, rd to obtain address index values Q1 and Q2 and inversion symbols SIGN1 and SIGN2.
Specifically, step S53 includes the following sub-steps:
S531, gradient calculation: three local gradients are calculated from Ra, Rb, Rc, Rd: D[0] is the difference between Rd and Rb, D[1] the difference between Rb and Rc, and D[2] the difference between Rc and Ra. Note that since Ra has 5 candidate values, D[2] likewise has 5 values; its maximum is Rc-Ix_r+Near and its minimum Rc-Ix_r-Near, differing by 2Near.
S532, gradient quantization: the gradients D[0], D[1], D[2] are quantized according to the quantization thresholds T1=18, T2=67, T3=276 and Near=2. When D[i] <= -T3, Q[i] = -4; when -T3 < D[i] <= -T2, Q[i] = -3; when -T2 < D[i] <= -T1, Q[i] = -2; when -T1 < D[i] < -Near, Q[i] = -1; when -Near <= D[i] <= Near, Q[i] = 0; when Near < D[i] < T1, Q[i] = 1; when T1 <= D[i] < T2, Q[i] = 2; when T2 <= D[i] < T3, Q[i] = 3; when T3 <= D[i], Q[i] = 4. Note that Q2 has at most two different values, since the minimum quantization interval is 2Near and the maximum of D2 differs from its minimum by 2Near.
S533, gradient fusion and sign marking: the gradient quantization values (Q[0], Q[1], Q[2]) are fused, i.e. Q = 81×Q[0] + 9×Q[1] + Q[2]. If the first non-zero element of (Q[0], Q[1], Q[2]) is negative, the triple is flipped to (-Q[0], -Q[1], -Q[2]) before fusion and the sign SIGN is set to -1. Note that since Q2 has two values, two address index values Q1, Q2 and two inversion signs SIGN1, SIGN2 are finally obtained.
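Steps S532 and S533 can be modelled directly from the thresholds of S12; the function names are ours, and the sign-symmetric form below reproduces the nine quantization regions listed above:

```python
T1, T2, T3, NEAR = 18, 67, 276, 2   # values from step S12 (bpp = 12, Near = 2)

def quantize(d):
    """Gradient quantization of step S532: map a local gradient to one
    of the nine regions -4 .. 4 using the symmetric thresholds."""
    sign = -1 if d < 0 else 1
    a = abs(d)
    if a <= NEAR:  q = 0
    elif a < T1:   q = 1
    elif a < T2:   q = 2
    elif a < T3:   q = 3
    else:          q = 4
    return sign * q

def fuse(q0, q1, q2):
    """Gradient fusion and sign flag of step S533: flip the triple if
    its first non-zero element is negative, then combine base-9."""
    sign = 1
    for q in (q0, q1, q2):
        if q != 0:
            if q < 0:
                sign = -1
                q0, q1, q2 = -q0, -q1, -q2
            break
    return 81 * q0 + 9 * q1 + q2, sign
```

The sign flip halves the number of distinct contexts, which is why the fused index Q together with SIGN suffices to address the context parameter RAM.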
S54, context address conflict control: the per-cycle tasks and RAM operations are shown in fig. 10. In the pipeline design, pixel address conflicts must be recorded over the four clock cycles between reading the context parameters from RAM and updating the RAM: the context address of the current cycle is compared one by one with those of the next three cycles, with a match setting the conflict bit to 1 and otherwise to 0, forming the address conflict types Conflict1[2:0] and Conflict2[2:0]. According to the Conflict1[0] and Conflict2[0] flag bits, the RAM read is suppressed when a read-write conflict occurs.
S55, prediction correction and residual calculation: Ra has five candidate values, and prediction with the optimal predictor yields the predicted values Px_1, Px_2, Px_3, Px_4 and Px_5. C_sel1, C_sel2, N_sel1, N_sel2 are selected according to the address conflict types Conflict1 and Conflict2 respectively. The selected parameter N is updated to obtain N_Update1 and N_Update2, and the N-parameter update flags N_flag1 and N_flag2 are generated. The fixed predicted values are corrected with the C parameter of the pixel context address to obtain Px_correct1, Px_correct2, Px_correct3, Px_correct4 and Px_correct5. The actual pixel value Ix is differenced with the corrected predictions, and the residual is flipped when SIGN is -1, yielding Errval1, Errval2, Errval3, Errval4 and Errval5.
S56, selecting the reconstruction value: once the pixel reconstruction of the previous clock cycle is complete, the correct reconstruction-value (Ra) branch is selected to obtain the residual, the sign SIGN and the address conflict type Conflict. To simplify the division, residual quantization is performed by table lookup: the ROM read address is {|Errval|, Near} and the stored data is {|Errval_q|, Remain}.
S57, residual quantization compensation, pixel reconstruction and residual modulo: the C update has five cases (+2, +1, unchanged, -1, -2); the remainder of the residual quantization is compensated accordingly, the residual quantization magnitude |Errval_q| is determined from the relation between remainder and divisor, and combined with its sign Symbol to give the residual quantization value Errval_q. The principle of the improved pixel reconstruction formula is shown in fig. 11. The pixel reconstruction value is obtained from the actual pixel value Ix, the sign SIGN, the sign Symbol of the quantized residual, the near-lossless degree Near and the compensated remainder. The compensated residual quantization value is then taken modulo to reduce its range.
S58, compensation selection: and selecting a correct C branch according to the address Conflict type Conflict and the adjacent two pixel C parameter updating identifiers C_flag and C_flag_r to obtain a residual error modulus value Errval_mod, a pixel reconstruction value Rx and a correct parameter correction value C_correction.
S59, updating context parameters, calculating the K value and mapping the residual: the corresponding A, B parameters are selected according to the address conflict type, giving A_sel and B_sel. The A, B, C parameter updates are completed in combination with N_flag, and the updated A, B, C, N are written into both parameter RAM sets simultaneously. The Golomb coding parameter K is calculated using the A and N parameters before updating. The residual modulo value Errval_mod is mapped to a non-negative integer MErrval according to the near-lossless degree Near, the coding parameter K, and the B and N parameters before updating.
Specifically, step S59 includes the following sub-steps:
S591, context parameter A and B selection: the cases of the A and B parameters over 4 adjacent cycles are shown in fig. 12. A_sel and B_sel are selected according to the address conflict type Conflict and output to the context-parameter update module.
S592, context parameter update: the A, B and C parameters are updated using A_sel, B_sel, C_correct and N_flag; the update is implemented in combinational logic. The updated parameters A_update, B_update, C_update and N_update are written into both groups of parameter RAMs simultaneously, and the pre-update A_sel, B_sel, N_sel and the residual modulo value Errval_mod of the current pixel are sent to the K-value calculation and residual mapping module.
S593, K value calculation: following the algorithm, the N parameter is shifted left one bit at a time and compared with the A parameter until N shifted left by K bits is greater than or equal to A, at which point the K value is output.
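The S593 loop is the standard JPEG-LS K computation; a minimal software equivalent:

```python
def golomb_k(a, n):
    # S593: shift N left until (N << K) >= A.  This matches the JPEG-LS
    # definition `for (k = 0; (N << k) < A; k++)`.
    k = 0
    while (n << k) < a:
        k += 1
    return k
```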
S594, residual mapping: following the algorithm, the lossless and near-lossless residual mapping modes are distinguished, and the signed residual modulo value Errval_mod is mapped to the non-negative integer residual mapping value MErrval. Note that in the lossless case the relation between the B and N parameters must be checked.
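A sketch of the S594 mapping, assuming the standard JPEG-LS rule for the lossless special case (this is the B/N check mentioned above; the function name is illustrative):

```python
def map_residual(errval, k, b, n, near):
    # S594: map the signed (quantized) residual to a non-negative integer.
    # The special branch applies only in the lossless case (Near == 0) with
    # K == 0, using the pre-update B and N context parameters.
    if near == 0 and k == 0 and 2 * b <= -n:
        return 2 * errval + 1 if errval >= 0 else -2 * (errval + 1)
    return 2 * errval if errval >= 0 else -2 * errval - 1
```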
S6, performing Golomb limited-length coding on the residual and outputting the compressed code stream: the coding parameters are calculated, the Golomb limited-length coding is completed, and the compressed code stream and decoding auxiliary information are framed and output.
Specifically, step S6 includes the sub-steps of:
S61, quotient and remainder calculation: the residual mapping value MErrval is shifted right by K bits in combinational logic, and the quotient val_temp is obtained through one stage of D flip-flops; val_temp is then shifted left by K bits, and MErrval_temp is obtained through one stage of D flip-flops; MErrval_temp is subtracted from MErrval, and the remainder n is output through one stage of D flip-flops. To keep the quotient and remainder synchronized, val_temp is registered for two beats before the quotient val is output.
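Ignoring the pipeline registers, the shift-based datapath of S61 computes the following (a software model; the function name is illustrative):

```python
def quotient_remainder(merrval, k):
    # S61: val = MErrval >> K, then n = MErrval - ((MErrval >> K) << K).
    # The three D-flip-flop stages in hardware only pipeline this result.
    val = merrval >> k
    return val, merrval - (val << k)
```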
S62, limited-length coding: the limited-length coding scheme is shown in fig. 13. If val is less than the code-length upper-limit parameter max = LIMIT - qbpp - 1, the code stream consists of val 0 bits, one 1 bit and the K-bit remainder n; otherwise it consists of LIMIT - qbpp - 1 zeros, one 1 bit and MErrval in qbpp bits.
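A software sketch of this limited-length code, returned as a bit string for clarity. The escape payload follows the patent's wording, which codes MErrval itself in qbpp bits (the ITU-T T.87 standard codes MErrval − 1 instead):

```python
def golomb_limited(val, remainder, merrval, k, limit, qbpp):
    # S62: unary prefix of `val` zeros plus a 1 and the K-bit remainder,
    # with an escape of LIMIT - qbpp - 1 zeros once `val` reaches that bound.
    limit_max = limit - qbpp - 1
    if val < limit_max:
        tail = format(remainder, "b").zfill(k) if k > 0 else ""
        return "0" * val + "1" + tail
    return "0" * limit_max + "1" + format(merrval, "b").zfill(qbpp)
```

The escape branch caps every codeword at LIMIT bits, which bounds the worst-case expansion per pixel.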
S63, coding FIFO buffering: a 64-bit register reg64 is defined, and the code-stream data of the first pixel is placed in its low bits. When the next code-stream data arrives, the existing data is shifted left and the new data is placed in the low bits. When the register holds 64 valid bits, it is output to the FIFO and cleared, and the above operation repeats.
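A minimal software model of the reg64 packing scheme (class name illustrative). The hardware emits exactly when 64 bits are reached, while this model also tolerates pushes that cross the 64-bit boundary:

```python
class Reg64Packer:
    # Model of S63: variable-length codes are shifted in at the low end;
    # each time 64 bits accumulate, one word goes to the output FIFO
    # (here, a Python list) and is removed from the register.
    def __init__(self):
        self.bits = 0    # pending bit pattern
        self.count = 0   # number of valid bits in `bits`
        self.words = []  # completed 64-bit words (the FIFO contents)

    def push(self, code, nbits):
        self.bits = (self.bits << nbits) | code
        self.count += nbits
        while self.count >= 64:
            self.count -= 64
            self.words.append((self.bits >> self.count) & ((1 << 64) - 1))
            self.bits &= (1 << self.count) - 1
```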
S64, framing output: when the compression of one coding block finishes, the first pixel of the block, the motion vector, the predictor selection and the code-stream length of the block are framed with the code stream as block side information. The intra/inter compression mode parameter, the inter-frame compression period parameter, the near-lossless Near value parameter and the block row/column parameters serve as whole-image side information and are framed with all block data into the final compressed code-stream output.
It will be readily appreciated by those skilled in the art that the foregoing is merely a preferred embodiment of the invention and is not intended to limit the invention, but any modifications, equivalents, improvements or alternatives falling within the spirit and principles of the invention are intended to be included within the scope of the invention.

Claims (4)

1. A remote sensing image compression algorithm hardware implementation method based on JPEG-LS inter-frame expansion, characterized by comprising the following steps:
(1) Using an off-chip memory to buffer image data of the coding frame and the reference frame, and respectively obtaining coding block data and searching block data according to different block row and column parameters;
(2) In a motion search block formed by a reference frame, performing full search based on an SAD criterion to obtain an optimal matching block of the coding block, and outputting the coding block and the matching block to the next stage;
(3) Generating synchronized causal templates for the coding block and the matching block, and calculating a plurality of predictors in parallel, wherein the predictor with the smallest in-block sum of absolute residuals is the optimal predictor;
(4) Performing fixed prediction using the optimal predictor, calculating the prediction residual in combination with an adaptive corrector, and obtaining Golomb coding parameters according to the context modeling parameters;
(5) Calculating coding parameters, completing Golomb limited-length coding, and framing and outputting the compressed code stream and decoding auxiliary information;
the step (1) specifically comprises the following steps:
(11) Four FIFOs are used to buffer, respectively, the write data and read data of the coding frame and the write data and read data of the reference frame, providing data bit-width conversion and clock-domain isolation;
(12) A counter is defined to count write-data enable signals, completing storage-address accumulation and sequential data writing; according to the block row/column parameters, when the number of buffered lines reaches the block row count, a read address and an offset address are calculated, and coding-block data of identical size that do not overlap each other are output from the storage area;
(13) Because the image is compressed in blocks, the compressed and reconstructed image data exists in block form, so a write address and an offset address are calculated according to the block row/column parameters to obtain complete reference-frame image data; with the motion search step set to P, the search-block row and column parameters are determined as ROW+2P and COL+2P respectively, a read address and an offset address are calculated, and search-block data of identical size that overlap each other are output;
(14) The fill levels of multiple channels are monitored, wherein the write-channel level comprises the input-FIFO occupancy and the free space of the storage partition, and the read-channel level comprises the output-FIFO free space and the storage-partition occupancy; a fixed-priority strategy evaluates each channel's level and grants the bus to whichever channel's level is high, completing the data transfer;
the step (2) specifically comprises:
(21) Using 2P FIFOs and (2P+1)×(2P+1) registers to cascade and buffer 2P lines of search-block data; when the (2P+1)th datum of the (2P+1)th row of the search block arrives, a (2P+1)×(2P+1) matching window is formed, the first datum of the first row of the coding block is read, the coding-block datum is aligned with the window data, and both are output to the SAD calculation module; when the next search-block datum arrives, a new (2P+1)×(2P+1) matching window is formed, the second datum of the first row of the coding block is read, aligned and output to the SAD calculation module; SAD calculation completes after (ROW+2P)×(COL+2P) pixel clock cycles;
(22) Sign-extending each pixel of the coding block and each pixel of the matching window and subtracting them in combinational logic; checking the sign bit of the difference, inverting the bits and adding 1 if negative, and leaving the data unchanged if positive, to obtain the absolute difference; accumulating the absolute values with an accumulator of sufficient width, and sending the (2P+1)×(2P+1) absolute-value sums to the compare-and-select circuit module when all pixels in one coding block have been processed;
(23) Dividing (2P+1) x (2P+1) matching results into (2P+1) groups, and comparing the matching results by using a two-stage pipeline; the first stage pipeline compares (2P+1) data in each group to obtain the minimum value of each group, and the second stage pipeline compares (2P+1) minimum values to obtain the final minimum value; the result is the best matching block, the motion vector { m, n } of the coding block is output, and the sum of the absolute values of the best matching block is output to the predictor selection module as the sum of the absolute values of the residuals of the fourth predictor;
(24) Using two on-chip FIFOs to respectively store search block data and coding block data, wherein the search block data amount is (ROW+2P) × (COL+2P), the coding block data amount is ROW×COL, reading out the search block data after the best matching block result is output, determining whether the data is valid according to the ROW-column count, correspondingly reading out one coding block data when the data is valid, and outputting the same to a predictor selection module;
The step (3) specifically comprises:
(31) Counting pixel rows and columns with the image block as the unit; splicing the original pixel Ix of the coding block and the reconstructed pixel Rx_i of the matching block, writing them into a FIFO buffer, reading the FIFO after one line is buffered, decomposing the data, and outputting 3 intra-frame causal-template pixels Ia, Ib, Ic and 4 inter-frame causal-template pixels Ra_i, Rb_i, Rc_i and Uk; the template structure and boundary handling are specifically as follows:
when a coding-block pixel lies in a non-first row and a non-first column, Ia is the left neighbor, Ib the upper neighbor and Ic the upper-left neighbor;
when the pixel lies in the first row and the first column, the first pixel of the block is used;
when the pixel lies in the first row but a non-first column, Ib and Ic use the first pixel;
when the pixel lies in a non-first row but the first column, Ia uses Ib, and Ic uses the Ia of the previous row; the reference-frame pixels are handled in the same way;
(32) Calculating three predicted values Px_1, px_2 and Px_3 in parallel by using intra-frame and inter-frame causal template pixels; the first predictor is an intra-frame predictor, and the second predictor and the third predictor are inter-frame predictors;
(33) Respectively differencing the actual value of the current pixel and the three predicted values, calculating residual values of different predictors, taking absolute values, and counting the sum of residual absolute values in a block; when the statistics of one block data is finished, aligning the sum of residual absolute values of a fourth predictor output by the motion estimation module and outputting the sum to the predictor selection module;
(34) Comparing the 4 summation values SUM1, SUM2, SUM3 and SUM4, wherein the predictor with the smallest sum is the optimal predictor;
(35) Splicing pixel data of the coding block and the matching block, using FIFO buffer memory, waiting for outputting the selection result of the predictor, and starting to read out data in the FIFO; the block data is sent to the multi-branch modeling and prediction module along with the predictor selection result.
2. The method for implementing the remote sensing image compression algorithm hardware based on the JPEG-LS inter-frame expansion according to claim 1, wherein said step (1) further comprises the steps of:
parsing a parameter-injection instruction according to the protocol, completing the initialization of the compression parameters, and controlling the overall compression working mode; the method specifically comprises the following substeps:
S1, parsing the intra/inter compression mode parameter, the inter-frame compression period parameter, the near-lossless Near value parameter, the pixel bit-width parameter and the block row/column parameters according to the protocol;
S2, calculating the pixel value RANGE according to the near-lossless parameter Near and the pixel bit width bpp, and further calculating qbpp, the Golomb coding length limit LIMIT and the gradient quantization thresholds; calculating the context parameters A, B, C, N, and initializing two groups of parameter RAMs;
S3, controlling the compression state of the current frame according to the intra/inter compression mode parameter and the inter-frame compression period parameter; in intra-frame compression, reference-frame image blocking and motion estimation are skipped, the intra-frame predictor is selected directly, and all image frames are intra-frame compressed; in inter-frame compression, the initial frame of each period is intra-frame compressed, and the remaining frames in the period are inter-frame compressed.
3. A remote sensing image compression algorithm hardware implementation system based on JPEG-LS inter-frame expansion, the system comprising:
the first module is used for caching the image data of the coding frame and the reference frame by using an off-chip memory and respectively obtaining coding block and searching block data according to different block row and column parameters;
the second module is used for carrying out full search based on SAD criterion in the motion search block formed by the reference frame to obtain the best matching block of the coding block, and outputting the coding block and the matching block to the next stage;
the third module is used for generating synchronized causal templates for the coding block and the matching block, calculating a plurality of predictors in parallel, wherein the predictor with the smallest in-block sum of absolute residuals is the optimal predictor;
a fourth module, configured to perform fixed prediction using the optimal predictor, calculate a prediction residual in combination with an adaptive corrector, and obtain Golomb coding parameters according to the context modeling parameters;
a fifth module for calculating coding parameters, completing Golomb limited-length coding, and framing and outputting the compressed code stream and decoding auxiliary information;
the first module specifically includes:
a first unit for buffering, using four FIFOs, the write data and read data of the coding frame and the write data and read data of the reference frame respectively, providing data bit-width conversion and clock-domain isolation;
a second unit for defining a counter, counting write-data enable signals, and completing storage-address accumulation and sequential data writing; according to the block row/column parameters, when the number of buffered lines reaches the block row count, calculating a read address and an offset address, and outputting coding-block data of identical size that do not overlap each other from the storage area;
a third unit for calculating, because the image is compressed in blocks and the compressed and reconstructed image data exists in block form, a write address and an offset address according to the block row/column parameters to obtain complete reference-frame image data; setting the motion search step to P, determining the search-block row and column parameters as ROW+2P and COL+2P respectively, calculating a read address and an offset address, and outputting search-block data of identical size that overlap each other;
a fourth unit for monitoring the fill levels of multiple channels, wherein the write-channel level comprises the input-FIFO occupancy and the free space of the storage partition, and the read-channel level comprises the output-FIFO free space and the storage-partition occupancy; a fixed-priority strategy evaluates each channel's level and grants the bus to whichever channel's level is high, completing the data transfer;
the second module specifically includes:
a matching-template generation unit for using 2P FIFOs and (2P+1)×(2P+1) registers to cascade and buffer 2P lines of search-block data; when the (2P+1)th datum of the (2P+1)th row of the search block arrives, a (2P+1)×(2P+1) matching window is formed, the first datum of the first row of the coding block is read, the coding-block datum is aligned with the window data, and both are output to the SAD calculation module; when the next search-block datum arrives, a new (2P+1)×(2P+1) matching window is formed, the second datum of the first row of the coding block is read, aligned and output to the SAD calculation module; SAD calculation completes after (ROW+2P)×(COL+2P) pixel clock cycles;
a parallel computing unit for sign-extending each pixel of the coding block and each pixel of the matching window and subtracting them in combinational logic; checking the sign bit of the difference, inverting the bits and adding 1 if negative, and leaving the data unchanged if positive, to obtain the absolute difference; accumulating the absolute values with an accumulator of sufficient width, and sending the (2P+1)×(2P+1) absolute-value sums to the compare-and-select circuit module when all pixels in one coding block have been processed;
A comparison selection unit for dividing (2P+1) x (2P+1) matching results into (2P+1) groups, and comparing by using a two-stage pipeline; the first stage pipeline compares (2P+1) data in each group to obtain the minimum value of each group, and the second stage pipeline compares (2P+1) minimum values to obtain the final minimum value; the result is the best matching block, the motion vector { m, n } of the coding block is output, and the sum of the absolute values of the best matching block is output to the predictor selection module as the sum of the absolute values of the residuals of the fourth predictor;
the cache and output unit is used for respectively storing search block data and coding block data by using two on-chip FIFOs, wherein the search block data amount is (row+2p) × (col+2p), the coding block data amount is row×col, after the best matching block result is output, the search block data is read out, whether the data is valid or not is determined according to the ROW-column count, and when the data is valid, one coding block data is correspondingly read out and is output to the predictor selection module together;
the third module specifically includes:
a synchronous causal-template unit for counting pixel rows and columns with the image block as the unit; splicing the original pixel Ix of the coding block and the reconstructed pixel Rx_i of the matching block, writing them into a FIFO buffer, reading the FIFO after one line is buffered, decomposing the data, and outputting 3 intra-frame causal-template pixels Ia, Ib, Ic and 4 inter-frame causal-template pixels Ra_i, Rb_i, Rc_i and Uk; when a coding-block pixel lies in a non-first row and a non-first column, Ia is the left neighbor, Ib the upper neighbor and Ic the upper-left neighbor;
when the pixel lies in the first row and the first column, the first pixel of the block is used;
when the pixel lies in the first row but a non-first column, Ib and Ic use the first pixel;
when the pixel lies in a non-first row but the first column, Ia uses Ib, and Ic uses the Ia of the previous row; the reference-frame pixels are handled in the same way;
a predictor parallel computing unit for computing three predicted values px_1, px_2, px_3 in parallel using intra and inter causal template pixels; the first predictor is an intra-frame predictor, and the second predictor and the third predictor are inter-frame predictors;
the residual absolute value summing unit is used for respectively differencing the actual value of the current pixel and the three predicted values, calculating residual values of different predictors, taking absolute values, and counting the sum of the residual absolute values in one block; when the statistics of one block data is finished, aligning the sum of residual absolute values of a fourth predictor output by the motion estimation module and outputting the sum to the predictor selection module;
a predictor selection unit for comparing the 4 summation values SUM1, SUM2, SUM3 and SUM4, the predictor with the smallest sum being the optimal predictor;
the block pixel buffer and output unit is used for splicing the pixel data of the coding block and the matching block, using the FIFO buffer, waiting for the output of the selection result of the predictor, and starting to read out the data in the FIFO; the block data is sent to the multi-branch modeling and prediction module along with the predictor selection result.
4. The remote sensing image compression algorithm hardware implementation system based on JPEG-LS inter-frame expansion according to claim 3, wherein the system further comprises a mode control module, specifically configured to parse a parameter-injection instruction according to the protocol, complete the initialization of the compression parameters, and control the overall compression working mode; the module specifically comprises the following submodules:
a first sub-module for parsing the intra/inter compression mode parameter, the inter-frame compression period parameter, the near-lossless Near value parameter, the pixel bit-width parameter and the block row/column parameters according to the protocol;
a second sub-module for calculating the pixel value RANGE according to the near-lossless parameter Near and the pixel bit width bpp, and further calculating qbpp, the Golomb coding length limit LIMIT and the gradient quantization thresholds; calculating the context parameters A, B, C, N, and initializing two groups of parameter RAMs;
a third sub-module for controlling the compression state of the current frame according to the intra/inter compression mode parameter and the inter-frame compression period parameter; in intra-frame compression, reference-frame image blocking and motion estimation are skipped, the intra-frame predictor is selected directly, and all image frames are intra-frame compressed; in inter-frame compression, the initial frame of each period is intra-frame compressed, and the remaining frames in the period are inter-frame compressed.
CN202110483170.XA 2021-04-30 2021-04-30 Remote sensing image compression algorithm hardware implementation method based on JPEG-LS (joint photographic experts group-LS) inter-frame expansion Active CN113207004B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110483170.XA CN113207004B (en) 2021-04-30 2021-04-30 Remote sensing image compression algorithm hardware implementation method based on JPEG-LS (joint photographic experts group-LS) inter-frame expansion

Publications (2)

Publication Number Publication Date
CN113207004A CN113207004A (en) 2021-08-03
CN113207004B true CN113207004B (en) 2024-02-02

Family

ID=77028113

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110483170.XA Active CN113207004B (en) 2021-04-30 2021-04-30 Remote sensing image compression algorithm hardware implementation method based on JPEG-LS (joint photographic experts group-LS) inter-frame expansion

Country Status (1)

Country Link
CN (1) CN113207004B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113722770B (en) * 2021-08-18 2024-06-18 上海励驰半导体有限公司 End-to-end protection method and system based on hierarchical data integrity
CN116109471A (en) * 2021-11-09 2023-05-12 哲库科技(上海)有限公司 Image processing method, chip, electronic device and storage medium
CN113794849B (en) * 2021-11-12 2022-02-08 深圳比特微电子科技有限公司 Device and method for synchronizing image data and image acquisition system
CN117097905B (en) * 2023-10-11 2023-12-26 合肥工业大学 Lossless image block compression method, lossless image block compression equipment and storage medium
CN117395381B (en) * 2023-12-12 2024-03-12 上海卫星互联网研究院有限公司 Compression method, device and equipment for telemetry data

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101534373A (en) * 2009-04-24 2009-09-16 北京空间机电研究所 Remote sensing image near-lossless compression hardware implementation method based on improved JPEG-LS algorithm
CN102970531A (en) * 2012-10-19 2013-03-13 西安电子科技大学 Method for implementing near-lossless image compression encoder hardware based on joint photographic experts group lossless and near-lossless compression of continuous-tone still image (JPEG-LS)
KR101289881B1 (en) * 2012-02-28 2013-07-24 전자부품연구원 Apparatus and method for lossless image compression
CN105828070A (en) * 2016-03-23 2016-08-03 华中科技大学 Anti-error code propagation JPEG-LS image lossless/near-lossless compression algorithm hardware realization method
CN109151482A (en) * 2018-10-29 2019-01-04 西安电子科技大学 Spaceborne spectrum picture spectral coverage is lossless to damage mixing compression method
CN111462133A (en) * 2020-03-31 2020-07-28 厦门亿联网络技术股份有限公司 System, method, storage medium and device for real-time video portrait segmentation

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
"Research on Lossless Compression of Hyperspectral Remote Sensing Images Based on Adaptive Filtering"; Zhu Fuquan; Chengdu University of Technology; full text *
"Research on the JPEG-XS Visually Lossless Compression Coding Standard"; Zhou Yun; Radio & Television Information; full text *

Similar Documents

Publication Publication Date Title
CN113207004B (en) Remote sensing image compression algorithm hardware implementation method based on JPEG-LS (joint photographic experts group-LS) inter-frame expansion
KR101235132B1 (en) Efficient transformation techniques for video coding
KR100556340B1 (en) Image Coding System
JP4444235B2 (en) Motion detection circuit and operation method thereof
EP0705038B1 (en) Method and system for bidirectional motion compensation for compressing of motion pictures
JPH1169345A (en) Inter-frame predictive dynamic image encoding device and decoding device, inter-frame predictive dynamic image encoding method and decoding method
US7627036B2 (en) Motion vector detection device and moving picture camera
CN102088603B (en) Entropy coder for video coder and implementation method thereof
US10165270B2 (en) Intra/inter mode decision for predictive frame encoding
CN107846597A (en) Data cache method and device for Video Decoder
CN102148990B (en) Device and method for predicting motion vector
JP6171627B2 (en) Image encoding device, image encoding method, image encoding program, image decoding device, image decoding method, and image decoding program
CN101783958B (en) Computation method and device of time domain direct mode motion vector in AVS (audio video standard)
JP2000059792A (en) High efficiency encoding device of dynamic image signal
CN102420989B (en) Intra-frame prediction method and device
JP5195674B2 (en) Image encoding device
US6668087B1 (en) Filter arithmetic device
KR101216142B1 (en) Method and/or apparatus for implementing reduced bandwidth high performance vc1 intensity compensation
CN102055980B (en) Intra-frame predicting circuit for video coder and realizing method thereof
JPH1155668A (en) Image coder
JP2015005903A (en) Compressor, decompressor and image processing apparatus
KR100708183B1 (en) Image storing device for motion prediction, and method for storing data of the same
US20220108480A1 (en) Bit plane decoding method and apparatus
CN115022628B (en) JPEG-LS (joint photographic experts group-LS) -based high-throughput lossless image compression method
JPWO2009031519A1 (en) Entropy encoding / decoding method and entropy encoding / decoding device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant