CN101335892A - Hybrid distributed video encoding method based on intra-frame mode decision - Google Patents

Hybrid distributed video encoding method based on intra-frame mode decision

Info

Publication number
CN101335892A
Authority
CN
China
Prior art keywords
frame
wavelet
spiht
decoding
coding
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN 200810105125
Other languages
Chinese (zh)
Other versions
CN101335892B (en)
Inventor
王安红
李志宏
张�雄
郑义
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Taiyuan University of Science and Technology
Original Assignee
Taiyuan University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Taiyuan University of Science and Technology filed Critical Taiyuan University of Science and Technology
Priority to CN 200810105125 priority Critical patent/CN101335892B/en
Publication of CN101335892A publication Critical patent/CN101335892A/en
Application granted granted Critical
Publication of CN101335892B publication Critical patent/CN101335892B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

A hybrid distributed video encoding method based on wavelet-domain intra-frame mode decision, capable of improving rate-distortion performance, is characterized by the following steps: (1) low-complexity encoding, comprising: coding the key frames with a conventional intra-frame encoder; generating the reference frame of each Wyner-Ziv frame by weighted-average interpolation; generating a residual frame by subtraction; applying the discrete wavelet transform (DWT) to the residual frame; forming wavelet blocks; making an intra-frame mode decision for each wavelet block; entropy coding the mode information; and coding each wavelet block with inter-frame SW-SPIHT or intra-frame SPIHT according to its mode; (2) high-complexity decoding, comprising: decoding the key frames with a conventional intra-frame decoding algorithm; producing the side-information frame of the Wyner-Ziv frame by motion-estimation interpolation; generating the decoder-side reference frame by weighted-average interpolation; generating the decoder-side residual frame by subtraction; applying the DWT to the residual frame; entropy decoding the mode information; performing motion estimation with LBS to generate more accurate side information; finely reconstructing the wavelet coefficients; and recovering the original pixels by the inverse discrete wavelet transform (IDWT) and addition.

Description

Hybrid distributed video coding method based on intra-frame mode decision
Technical field
The present invention belongs to the field of video coding methods, and in particular provides a hybrid distributed video coding method based on intra-frame mode decision that can improve rate-distortion performance.
Background technology
Distributed video coding (DVC) is a video coding framework with a novel design philosophy that has matured only in recent years. Compared with other mature video compression technologies (such as the MPEG series, the H.26x series, etc.), DVC offers "low-complexity encoding" and a degree of robustness, making it suitable for emerging "upload-friendly" video communication services such as mobile photography, wireless sensor networks, and network video surveillance. DVC builds on the Slepian-Wolf and Wyner-Ziv side-information source coding theory developed in information theory in the 1970s, and attempts to realize a distributed source coding pattern of "separate encoding, joint decoding" of adjacent frames: the computationally heavy motion search is moved, partly or entirely, from the encoder to the decoder, thereby achieving "low-complexity encoding". This differs fundamentally from the previous "high-complexity encoding" of the MPEG methods, which exploit the correlation of adjacent frames through motion-estimation-based predictive coding at the encoder. Although the idea of distributed source coding was proposed as early as the 1970s, practical DVC frameworks have emerged only in recent years, with the advent of high-performance channel codes such as Turbo codes and LDPC codes.
In existing DVC frameworks, key frames are coded with conventional intra-frame methods, while the remaining frames are coded in the Wyner-Ziv manner, i.e. an "intra-frame encoding, inter-frame decoding" mode, and are therefore called Wyner-Ziv frames. According to the coding mode of the Wyner-Ziv frames, current DVC can be divided into two classes: pixel-domain DVC and transform-domain DVC, such as DCT-domain DVC. Compared with pixel-domain DVC, DCT-domain DVC exploits the DCT to remove the spatial correlation of the image and thereby further improves rate-distortion performance. However, in DCT-domain DVC, selecting the optimal joint quantizer for each DCT frequency band is a complicated job. Anne Aaron proposed residual DVC, a scheme that compresses the residual frame between the current frame and its reference frame with a pixel-domain Wyner-Ziv encoder. Because residual DVC removes, to a certain extent, the temporal correlation between the current frame and the reference frame at the encoder, its rate-distortion performance is almost identical to that of DCT-domain DVC, while also avoiding the difficulty of selecting an optimal joint quantizer. Nevertheless, the rate-distortion performance of residual DVC still needs further improvement.
Summary of the invention
The purpose of the present invention is to provide a hybrid distributed video coding method based on intra-frame mode decision that can improve rate-distortion performance.
The technical scheme of the present invention is a hybrid distributed video coding method based on intra-frame mode decision, characterized by comprising the following steps:
I. Low-complexity encoding, comprising the following steps:
(1). Read the preset values l, T1, T2, T3: l is the number of image frames contained in one group of pictures (GOP); T1 and T2 are the thresholds of the temporal-correlation criterion; T3 is the threshold of the spatial-correlation criterion. Then read in l+1 image frames;
(2). Encode the key frames with the H.264/AVC intra-frame encoder, generating the H.264 intra bitstream, and send it to the decoder;
(3). Decode the key frames with the H.264/AVC intra-frame decoder;
(4). Generate the reference frame W_re of the current Wyner-Ziv frame, using the formula
W_re = α·K′_j + β·K′_{j+1}    (1)
where K′_j and K′_{j+1} are the decoded key frames immediately before and after the current Wyner-Ziv frame; α = 1 − r/l and β = 1 − α, where r is the distance (in frame numbers) between the current frame and the previous key frame K′_j, and l is the number of frames in the GOP;
(5). Generate the residual frame: the residual frame is the difference between the current Wyner-Ziv frame W to be encoded and its reference frame, i.e. D = W − W_re;
(6). Apply the discrete wavelet transform to the residual frame: perform the DWT on D = W − W_re, outputting the wavelet image C_d;
(7). Divide the wavelet image into wavelet blocks;
(8). Perform the intra-frame mode decision for each wavelet block: for each block, compute its temporal-correlation parameter E_LL and spatial-correlation parameter ρ² with formulas (2) and (3); when condition 1: E_LL ≥ T1, or condition 2: T2 ≤ E_LL < T1 and ρ² ≤ T3, is satisfied, set the block's coding-mode parameter to 1; in all other cases set it to 0.
The temporal-correlation parameter is computed from the low-frequency energy E_LL of the current wavelet block:
E_LL = Σ_{i=1}^{N_LL} (C_i^{LL})²    (2)
where C_i^{LL} is the i-th wavelet coefficient of the block's lowest-frequency subband LL^(3), and N_LL is the total number of coefficients in LL^(3);
The spatial-correlation parameter is the variance of the current wavelet block's high-frequency coefficients, as in formula (3):
ρ² = (1/N)·Σ_{i=1}^{N} |C_i|² − ((1/N)·Σ_{i=1}^{N} C_i)²    (3)
where C_i is the i-th high-frequency coefficient and N is the total number of high-frequency coefficients;
(9). Entropy coding of the mode bitstream: scan the coding modes of all wavelet blocks in left-to-right, top-to-bottom order to form the mode bitstream, entropy code it, and send it to the decoder;
(10). Encode the wavelet blocks: for each wavelet block, if its coding mode is 1, apply conventional intra-frame SPIHT coding, forming the intra-SPIHT bitstream; if its mode is 0, encode it with inter-frame-mode SW-SPIHT, forming the inter-frame SW-SPIHT bitstream; finally, send the intra-SPIHT and SW-SPIHT bitstreams to the decoder;
II. High-complexity decoding, comprising the following steps:
(1). Decode the key frames with the H.264 intra-frame decoder;
(2). Produce the side-information frame Y: generate the side-information frame Y of the interpolated Wyner-Ziv frame from the decoded key frames by bidirectional motion-compensated interpolation, using the formula
Y(x, y) = ½·[α·K′_j(x + β·dx_f, y + β·dy_f) + β·K′_{j+1}(x − α·dx_f, y − α·dy_f) + α·K′_j(x − β·dx_b, y − β·dy_b) + β·K′_{j+1}(x + α·dx_b, y + α·dy_b)]    (4)
where (x, y) is the coordinate of the interpolated-frame pixel; [dx_b, dy_b] and [dx_f, dy_f] are the backward and forward motion vectors between the decoded key frames, obtainable by half-pixel motion estimation; α and β are the same as in formula (1);
(3). Generate the reference frame: identically to the encoder, produce the reference frame W_re with formula (1);
(4). Generate the residual frame: the decoder-side residual frame is the difference between the side-information frame Y and the reference frame W_re, i.e. D_y = Y − W_re;
(5). Apply the discrete wavelet transform to the residual frame: perform the DWT on D_y = Y − W_re; its resulting coefficients C_d^y also form wavelet blocks;
(6). Entropy decoding of the mode bitstream: recover the mode bitstream losslessly;
(7). Inter-frame SW-SPIHT decoding: according to the decoded mode bitstream, if the mode value of the current wavelet block is 0, take the co-located wavelet block in C_d^y as side information and perform inter-frame SW-SPIHT decoding;
(8). Intra-frame SPIHT decoding and wavelet-domain motion estimation: if the mode value of the current wavelet block is 1, perform intra-frame SPIHT decoding, specifically: first apply intra-SPIHT decoding to obtain the recovered wavelet-block coefficients C′^H; then use LBS to perform wavelet-domain motion estimation, i.e. obtain more accurate side information C_d^{yLBS} from C′^H; the LBS motion estimation is
C_d^{yLBS} = arg min_{i ∈ reference set} | C′^H − C_i^{LBS} |    (5)
where C_i^{LBS} is the i-th reference wavelet block; the reference wavelet blocks of all frames produced by LBS form the reference set, with search range dx = [−8, +8], dy = [−8, +8];
(9). Fine reconstruction of the wavelet coefficients: through inter-frame SW-SPIHT or intra-frame intra-SPIHT decoding, the recovered SPIHT information is obtained, and formula (6) is used to refine C_d^y:
C′_d = v_max,  if C_d^y ≥ v_max
C′_d = C_d^y,  if C_d^y ∈ (v_min, v_max)    (6)
C′_d = v_min,  if C_d^y ≤ v_min
where C′_d is the final wavelet coefficient; v_max and v_min are the maximum and minimum values of C_d inferred from the recovered SPIHT bit-plane information, i.e. the extremes consistent with all recovered bit planes of C_d; m is the total number of bit planes of C_d, and n is the number of currently recovered bit planes, with m > n;
(10). Inverse discrete wavelet transform (IDWT): perform the IDWT on C′_d to recover the difference frame D′;
(11). Recover the original pixels: the sum of the reference frame W_re and the difference frame D′ is the final recovered pixel value, i.e. W′ = D′ + W_re.
Effect of the present invention: the hybrid distributed video coding (HDVC) system of the present invention simultaneously adopts the residual coding technique, the wavelet-domain DVC technique, and the wavelet-domain intra-frame mode-decision coding technique based on the SPIHT algorithm, and is therefore called HDVC. Because HDVC exploits temporal and spatial correlation simultaneously at the encoder, it can achieve better rate-distortion performance than DCT-domain DVC and residual DVC. At the same time, the SPIHT (Set Partitioning In Hierarchical Trees) algorithm brings convenience to the joint quantization of the wavelet coefficients of different frequency bands.
On the other hand, in current Wyner-Ziv frame coding, the encoder assumes that the temporal correlation between the main information and the side information is fixed across spatial locations; that is, it ignores how the correlation of main and side information differs at different positions within the same Wyner-Ziv frame. Intra-frame mode-decision coding means that, according to the differing correlation between main and side information, the main information at different spatial locations within the same frame is coded in different modes, including the intra-frame coding mode and the Slepian-Wolf inter-frame mode. In transform-domain and hybrid-domain DVC, intra-frame mode-decision coding has not yet been used. Therefore, the present application also proposes an intra-frame mode-decision coding scheme for HDVC in the wavelet domain: when the estimated temporal correlation is low, the wavelet blocks of the residual frame are coded with intra-frame SPIHT (intra-SPIHT); otherwise they are coded with inter-frame-mode SW-SPIHT (Slepian-Wolf SPIHT). Experimental results show that intra-frame mode-decision coding improves the rate-distortion performance of HDVC.
Compared with the traditional MPEG or H.26x methods, the HDVC method of the present invention has the advantage of "low-complexity encoding"; compared with some existing DVC systems, the HDVC of the present invention improves rate-distortion performance to a large extent at the cost of a slight increase in computation. The present invention is well suited to the emerging communication devices that require "low-complexity encoding", such as wireless sensor networks, mobile-phone cameras, and wireless video surveillance. Relative to existing DVC, the innovations of the present invention are embodied in the following points: 1) the SW-SPIHT algorithm is applied to the residual frame of the Wyner-Ziv frame rather than to the Wyner-Ziv frame itself; 2) the SW-SPIHT algorithm is applied to each wavelet block of the residual frame; 3) a side-information-based SPIHT reconstruction method for wavelet coefficients is given; 4) wavelet-domain intra-frame mode-decision coding is applied to each wavelet block; 5) a decision-strategy formula for wavelet-domain intra-frame mode-decision coding based on spatio-temporal analysis is given.
The present invention is described further below in conjunction with the drawings and embodiments.
Description of drawings
Fig. 1 is the hardware implementation block diagram of the system of the present invention;
Fig. 2 is the connection block diagram of the encoder of the present invention;
Fig. 3 is the connection block diagram of the decoder of the present invention;
Fig. 4 is the schematic diagram of the wavelet-block structure and scanning order of the present invention;
Fig. 5 is the program flow chart of the low-complexity encoding algorithm of the present invention;
Fig. 6 is the program flow chart of the high-complexity decoding algorithm of the present invention;
Fig. 7 is the rate-distortion comparison chart of the present invention when GOP = 8;
Fig. 8 is the per-frame PSNR curve of the present invention (GOP = 8, 166 kbps);
Fig. 9 is the comparison of recovered images of the present invention.
Embodiment
The HDVC system based on intra-frame mode decision proposed by the present invention can be realized either in software or in hardware systems such as DSPs or parallel machines with SIMD or MIMD architectures.
Fig. 1 gives the hardware implementation block diagram of the proposed HDVC. The camera may be a mobile-phone camera, a sensor-network camera, or another low-power video input terminal responsible for capturing the raw video images; the captured video is compressed frame by frame by the "low-complexity encoder"; the memory stores all frames of one GOP. The compressed bitstream passes through the transmission network, which may be wired or wireless; at the receiving end, the received bitstream is decoded frame by frame by the "high-complexity decoder" and displayed.
The encoder block diagram is shown in Fig. 2. The encoder mainly consists of an intra-frame encoder and an inter-frame encoder: the intra-frame encoder encodes the key frames; the inter-frame encoder encodes the remaining frames, i.e. the Wyner-Ziv frames; the memory stores all frames of one GOP together with the decoded key frames.
The decoder block diagram is shown in Fig. 3. The decoder mainly consists of an intra-frame decoder and an inter-frame decoder: the intra-frame decoder decodes the key frames; the inter-frame decoder decodes the remaining frames, i.e. the Wyner-Ziv frames; the memory stores the decoded key frames of one GOP.
The program flows of the encoding and decoding algorithms are shown in Figs. 5 and 6, respectively. For the key frames, the present invention adopts H.264/AVC intra-frame encoding and decoding; for the other frames, i.e. the Wyner-Ziv frames, the "low-complexity encoding" and "high-complexity decoding" algorithms described below are adopted.
1. Low-complexity encoding
The encoder algorithm of the present invention mainly consists of the following parts: first, intra-frame coding of the key frames; second, the DWT, the SPIHT algorithm, intra-frame mode decision, and channel coding applied to the Wyner-Ziv residual frame. The computationally heavy motion estimation is eliminated at the encoder, so the complexity of the encoder is close to that of conventional intra-frame video coding, preserving the "low-complexity encoding" property.
The first step: read the preset values l, T1, T2, T3: l is the number of image frames contained in one group of pictures (GOP); T1 and T2 are the thresholds of the temporal-correlation criterion; T3 is the threshold of the spatial-correlation criterion. Then read in l+1 image frames.
The second step: encode the key frames with the H.264/AVC intra-frame coding algorithm;
The third step: encode the Wyner-Ziv frames with the HDVC inter-frame coding algorithm. First, recover the key frames by decoding, i.e. recover the key frames at the encoder with an intra-frame decoder corresponding to the intra-frame coding algorithm; in our implementation the H.264/AVC intra-frame decoding algorithm is used.
The fourth step: generate the reference frame with weighted-average interpolation (WAI). In the encoding process, weighted-average interpolation first generates the reference frame W_re for the current Wyner-Ziv frame W, as in formula (1):
W_re = α·K′_j + β·K′_{j+1}    (1)
where K′_j and K′_{j+1} are the decoded key frames immediately before and after the current Wyner-Ziv frame; α = 1 − r/l and β = 1 − α, where r is the distance (in frame numbers) between the current frame and the previous key frame K′_j, and l is the number of frames in the GOP.
This reference-frame generation method differs from those in existing DVC: existing DVC uses a single key frame to produce the reference frame, whereas this interpolation method uses the two adjacent key frames and accounts for the distance of the current frame from each of them through a weighted average. The temporal correlation is thus exploited more fully, yielding better results.
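As an illustration, the weighted-average interpolation of formula (1) can be written as a short Python function (the function name and the plain-2-D-list frame representation are our own, not from the patent):

```python
def make_reference_frame(key_prev, key_next, r, l):
    """Weighted-average interpolation (WAI) reference frame, formula (1).

    key_prev, key_next: decoded key frames K'_j, K'_{j+1} (2-D lists).
    r: distance in frames from the current Wyner-Ziv frame to K'_j.
    l: number of frames in the GOP.
    """
    alpha = 1.0 - r / l      # weight of the previous key frame
    beta = 1.0 - alpha       # weight of the next key frame
    h, w = len(key_prev), len(key_prev[0])
    return [[alpha * key_prev[y][x] + beta * key_next[y][x]
             for x in range(w)] for y in range(h)]
```

A Wyner-Ziv frame one frame after K′_j in a GOP of four thus takes 3/4 of its reference from K′_j and 1/4 from K′_{j+1}.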
The fifth step: generate the residual frame: the residual frame is the difference between the current Wyner-Ziv frame and its reference frame W_re, i.e. D = W − W_re. Using the residual frame rather than the original Wyner-Ziv frame removes a certain amount of temporal correlation at the encoder, thereby improving the rate-distortion performance of the whole HDVC system.
The sixth step: apply the discrete wavelet transform (DWT) to the residual frame: perform the DWT on D = W − W_re, outputting the wavelet coefficients C_d. Applying the DWT to the residual frame rather than to the current Wyner-Ziv frame, which existing DVC methods do not do, further removes the spatial correlation of the residual frame and thereby further improves the performance of the HDVC system.
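The fifth and sixth steps can be sketched in Python; the patent does not name its wavelet filter, so a one-level 2-D Haar transform stands in here for the 3-level DWT actually used:

```python
def residual_frame(w, w_re):
    # D = W - W_re, element-wise (the fifth step)
    return [[w[y][x] - w_re[y][x] for x in range(len(w[0]))]
            for y in range(len(w))]

def haar_dwt2(img):
    """One decomposition level of a 2-D Haar DWT (a stand-in; the patent's
    wavelet filter is unspecified). Returns the subbands (LL, HL, LH, HH)."""
    h, w = len(img), len(img[0])
    # horizontal pass: pairwise averages (low band) and differences (high band)
    lo = [[(row[2*i] + row[2*i+1]) / 2 for i in range(w // 2)] for row in img]
    hi = [[(row[2*i] - row[2*i+1]) / 2 for i in range(w // 2)] for row in img]
    def col_split(m):
        a = [[(m[2*j][i] + m[2*j+1][i]) / 2 for i in range(len(m[0]))]
             for j in range(h // 2)]
        d = [[(m[2*j][i] - m[2*j+1][i]) / 2 for i in range(len(m[0]))]
             for j in range(h // 2)]
        return a, d
    LL, LH = col_split(lo)   # approximation / vertical detail of the low band
    HL, HH = col_split(hi)   # horizontal / diagonal detail
    return LL, HL, LH, HH
```

A constant residual, for example, concentrates all its energy in LL and leaves the three high bands zero, which is exactly the behavior the mode decision of the eighth step exploits.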
The seventh step: divide the wavelet image into wavelet blocks: the wavelet coefficients C_d are divided into wavelet blocks, as shown in Fig. 4. After 3-level wavelet decomposition, the image yields 16 × 16 wavelet blocks; each wavelet block contains four 2 × 2 coefficient vectors, from subbands LL^(3), HL^(3), LH^(3) and HH^(3) respectively; three 4 × 4 coefficient vectors, from HL^(2), LH^(2) and HH^(2) respectively; and three 8 × 8 coefficient vectors, from HL^(1), LH^(1) and HH^(1) respectively.
In this patent the wavelet blocks of the residual frame, rather than the whole residual frame, serve as the coding unit. The purpose is to distinguish the correlation of main and side information at different spatial locations, so that different coding modes can be adopted according to the different correlations. In addition, each wavelet block consists of several wavelet trees, so SPIHT can be applied to a wavelet block straightforwardly. The SPIHT algorithm exploits the correlation among frequency bands and adopts a joint quantization of the coefficients of the individual bands, thereby overcoming the difficulty of selecting a joint quantizer for DCT coefficients in existing DVC schemes.
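The gathering of one 16 × 16 wavelet block from the subbands can be sketched as follows. The standard nested subband layout (LL^(3) in the top-left corner, each level's HL/LH/HH beside and below it) is an assumption of this sketch; the patent fixes only the per-subband vector sizes:

```python
def wavelet_block(coeffs, bx, by):
    """Gather the 256 co-located coefficients of wavelet block (bx, by)
    from a 3-level-decomposed square coefficient image `coeffs`
    (side a multiple of 16), returned as a flat list: 2x2 vectors from
    the level-3 subbands, 4x4 from level 2, 8x8 from level 1."""
    n = len(coeffs)
    block = []
    for level, size in ((3, 2), (2, 4), (1, 8)):
        s = n >> level                          # subband side at this level
        offs = [(0, 0)] if level == 3 else []   # LL kept only at the coarsest level
        offs += [(0, s), (s, 0), (s, s)]        # HL, LH, HH subband origins
        for oy, ox in offs:
            for y in range(size):
                for x in range(size):
                    block.append(coeffs[oy + by * size + y][ox + bx * size + x])
    return block
```

The counts check out: 4·(2·2) + 3·(4·4) + 3·(8·8) = 16 + 48 + 192 = 256 coefficients, i.e. one 16 × 16 block.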
The eighth step: perform the "intra-frame mode decision" for each wavelet block.
For each wavelet block, two coding modes are available: intra-frame intra-SPIHT coding and inter-frame SW-SPIHT coding. The choice of mode depends on the estimated temporal and spatial correlation.
The temporal-correlation criterion is based on the low-frequency energy E_LL of the current wavelet block, as in formula (2):
E_LL = Σ_{i=1}^{N_LL} (C_i^{LL})²    (2)
where C_i^{LL} is the i-th wavelet coefficient of the block's lowest-frequency subband LL^(3) and N_LL is the number of coefficients in LL^(3); for example, N_LL = 4 for a 16 × 16 wavelet block.
The spatial-correlation criterion is based on the variance of the current wavelet block's high-frequency coefficients, as in formula (3):
ρ² = (1/N)·Σ_{i=1}^{N} |C_i|² − ((1/N)·Σ_{i=1}^{N} C_i)²    (3)
where C_i is the i-th high-frequency coefficient and N is the total number of high-frequency coefficients considered; for example, N = 12 in a 16 × 16 wavelet block. Intra-frame SPIHT coding is used in two cases: case 1, E_LL ≥ T1; case 2, T2 ≤ E_LL < T1 and ρ² ≤ T3. In the first case, the low-frequency coefficients of the residual wavelet block are large, indicating a large gap between the co-located pixel blocks of the current Wyner-Ziv frame and the reference frame; the gap between the main information formed by the current wavelet block and its decoder-side side information can therefore be considered large, the correlation weak, and the efficiency of inter-frame SW-SPIHT coding low, so intra-frame coding is used. In the second case, the energy of the low-frequency wavelet coefficients is relatively high while the high-frequency wavelet coefficients concentrate at small magnitudes, so intra-frame intra-SPIHT coding is again the more suitable mode. Otherwise, outside these two cases, the temporal correlation between main and side information is considered high, and coding with inter-frame-mode SW-SPIHT achieves higher compression efficiency. T1, T2 and T3 are thresholds predetermined by experiment.
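The decision rule of the eighth step can be sketched as follows (illustrative Python; passing the block as a flat list with its N_LL low-band coefficients first is an ordering we assume for convenience):

```python
def mode_decision(block_coeffs, n_ll, t1, t2, t3):
    """Mode decision per formulas (2)-(3): return 1 (intra-SPIHT) when
    E_LL >= T1, or when T2 <= E_LL < T1 and the high-band variance <= T3;
    otherwise return 0 (inter-frame SW-SPIHT)."""
    ll = block_coeffs[:n_ll]          # LL^(3) coefficients
    hi = block_coeffs[n_ll:]          # high-frequency coefficients
    e_ll = sum(c * c for c in ll)                             # formula (2)
    n = len(hi)
    var = sum(c * c for c in hi) / n - (sum(hi) / n) ** 2     # formula (3)
    if e_ll >= t1 or (t2 <= e_ll < t1 and var <= t3):
        return 1   # intra-frame mode
    return 0       # inter-frame mode
```

A block whose low band carries most of the residual energy is thus routed to intra coding, while a block with small low-band energy falls through to SW-SPIHT.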
The ninth step: entropy coding of the mode bitstream: scan the coding modes of all wavelet blocks in left-to-right, top-to-bottom order to form the mode bitstream (1 denotes intra-frame mode, 0 denotes inter-frame mode); the mode bitstream is compressed by arithmetic coding and sent to the decoder.
The tenth step: encode the wavelet blocks.
Both SW-SPIHT and intra-frame SPIHT are applied at the bit-plane level. First, according to the coding mode of each block, the wavelet blocks of the same coding mode are scanned, and the various SPIHT information streams of each of their bit planes (the wavelet-tree partition information, refinement information and sign information of SPIHT) are assembled, per bit-plane level, into distinct main information streams. Each main information stream is then compressed by the Slepian-Wolf encoder or the entropy coder.
Intra-frame SPIHT coding and inter-frame SW-SPIHT are both bit-plane-level coding algorithms. Taking inter-frame SW-SPIHT as an example, the concrete steps are: first, scan all wavelet blocks whose coding mode is SW-SPIHT in the left-to-right, top-to-bottom order shown in Fig. 5, assembling the various SPIHT information streams of each bit plane (the wavelet-tree partition information, refinement information and sign information of SPIHT) into distinct main information streams per bit-plane level; then compress each main information stream with the channel-code-based SW encoder, i.e. SW-SPIHT compression. The concrete SW-SPIHT compression steps are: encode each main information stream with a systematic channel code, generating parity bits and information bits; discard the information bits and transmit only an appropriate number of parity bits. Because the transmitted parity bits are fewer than the information bits, compression is obtained. As for the rate-control problem of SW-SPIHT, i.e. how many parity bits to transmit, this is determined by the correlation between main and side information; it can also be decided through a feedback channel: the parity bits are transmitted incrementally, the decoder performs channel decoding with the received parity bits and the side information, and if decoding fails, the decoder sends a flag to the encoder requesting additional parity bits; this "transmit-request" procedure repeats until decoding succeeds, as shown in Fig. 5. In the implementation of this patent, LDPCA with a feedback channel is used to compress the SPIHT information.
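The "transmit-request" feedback loop can be sketched abstractly as follows. The `encode_parity` and `try_decode` callbacks are hypothetical stand-ins for the LDPCA encoder and decoder, which are not reproduced here:

```python
def feedback_rate_control(encode_parity, try_decode, max_rounds=64):
    """Release parity increments until the decoder signals success.

    encode_parity(k): returns the parity bits for k transmitted increments.
    try_decode(parity): returns True if channel decoding with the side
    information succeeds on these parity bits (decoder feedback).
    Returns the number of increments actually transmitted."""
    sent = 0
    for _ in range(max_rounds):
        sent += 1
        parity = encode_parity(sent)
        if not try_decode(parity):
            continue          # decoder flags failure: send more parity
        return sent           # decoding succeeded
    raise RuntimeError("decoding failed within max_rounds")
```

The rate thus adapts automatically to the main/side correlation: well-correlated streams stop after few increments, poorly correlated ones request more.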
Intra-frame SPIHT is the traditional intra-frame SPIHT: each bit plane of SPIHT information is compressed with an entropy coding method, and the compressed bitstream is sent to the decoder.
2. High-complexity decoding
In the present invention, the decoder algorithm is realized through LDPCA iterative decoding with feedback, motion-compensated interpolation, LBS motion estimation, and related steps, and therefore has a higher decoding complexity. Specifically it comprises the following steps:
The first step: decode the key frames with the H.264/AVC intra-frame decoder.
The second step: produce the side-information frame.
At the decoder, the side-information frame Y is an accurate estimate of the Wyner-Ziv frame. In the decoding process, Y is produced from the decoded key frames by accurate bidirectional motion-compensated interpolation, i.e.
Y(x, y) = ½·[α·K′_j(x + β·dx_f, y + β·dy_f) + β·K′_{j+1}(x − α·dx_f, y − α·dy_f) + α·K′_j(x − β·dx_b, y − β·dy_b) + β·K′_{j+1}(x + α·dx_b, y + α·dy_b)]    (4)
where (x, y) is the coordinate of the interpolated-frame pixel; [dx_b, dy_b] and [dx_f, dy_f] are the backward and forward motion vectors between the decoded key frames, obtainable by half-pixel motion estimation; α and β are the same as in formula (1).
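Formula (4) can be illustrated with the following simplified Python sketch. Several simplifications are our own assumptions: a single global motion vector per direction instead of per-block vectors, integer-pixel instead of half-pixel accuracy, border clamping, and a ½ normalization of the two bidirectional estimates:

```python
def interpolate_side_info(kj, kj1, mv_f, mv_b, r, l):
    """Bidirectional motion-compensated interpolation in the spirit of
    formula (4). kj, kj1: decoded key frames K'_j, K'_{j+1} (2-D lists);
    mv_f, mv_b: one global forward/backward motion vector (dx, dy)."""
    alpha = 1.0 - r / l
    beta = 1.0 - alpha
    h, w = len(kj), len(kj[0])
    dxf, dyf = mv_f
    dxb, dyb = mv_b
    def px(frame, x, y):
        # round to integer pixels and clamp to the frame border (assumed policy)
        yy = min(max(int(round(y)), 0), h - 1)
        xx = min(max(int(round(x)), 0), w - 1)
        return frame[yy][xx]
    return [[0.5 * (alpha * px(kj, x + beta * dxf, y + beta * dyf)
                    + beta * px(kj1, x - alpha * dxf, y - alpha * dyf)
                    + alpha * px(kj, x - beta * dxb, y - beta * dyb)
                    + beta * px(kj1, x + alpha * dxb, y + alpha * dyb))
             for x in range(w)] for y in range(h)]
```

With zero motion the sketch reduces to the weighted average of formula (1), which is a useful sanity check.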
The third step: generate the reference frame.
The decoder-side reference frame W_re is produced by the same method as at the encoder, i.e. weighted-average interpolation (WAI), generating W_re according to formula (1).
The fourth step: generate the residual frame.
Corresponding to the encoder-side residual frame, the decoder-side residual frame is D_y = Y − W_re.
The fifth step: apply the discrete wavelet transform (DWT) to the residual frame.
In the decoder, the DWT likewise acts on the residual frame D_y = Y − W_re; its resulting coefficients C_d^y also form wavelet blocks.
The sixth step: decode the mode bitstream.
The mode bitstream is recovered by arithmetic decoding.
The seventh step: inter-frame SW-SPIHT decoding.
According to the coding-mode information of the wavelet blocks, the decoder recovers the SPIHT information in one of two modes: when the mode information is 1, intra-frame intra-SPIHT decoding is adopted; when the mode information is 0, inter-frame SW-SPIHT decoding is adopted. The inter-frame SW-SPIHT decoding process recovers the SPIHT information iteratively by channel decoding, using the received parity bits and the decoder-side side information. In SW-SPIHT, the side information is the wavelet block in C_d^y co-located with the currently coded wavelet block.
The eighth step: intra-frame intra-SPIHT decoding and wavelet-domain motion estimation.
According to the coding-mode information of the wavelet block, if intra-frame SPIHT decoding applies, the correlation between the main information and its side information is poor, so the SPIHT information is recovered with the intra-frame decoding algorithm. After the SPIHT information has been recovered, however, a wavelet-domain motion estimation is used to obtain more accurate side information, which takes effect in the subsequent wavelet-coefficient reconstruction.
The concrete intra-SPIHT decoding and wavelet-domain motion-estimation steps are: first apply intra-SPIHT decoding to obtain the recovered wavelet coefficients C′^H; then use LBS to perform wavelet-domain motion estimation, i.e. obtain the more accurate side information C_d^{yLBS} from C′^H; the LBS motion estimation is
C_d^{yLBS} = arg min_{i ∈ reference set} | C′^H − C_i^{LBS} |    (5)
where C_i^{LBS} is the i-th reference wavelet block; the reference wavelet blocks of all frames produced by LBS form the reference set, with search range dx = [−8, +8], dy = [−8, +8].
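The block selection of formula (5) can be illustrated as follows (a sketch: blocks are flat coefficient lists, the candidate set over dx, dy ∈ [−8, +8] is assumed to be generated elsewhere, and the sum of absolute coefficient differences is used as the block distance):

```python
def lbs_search(decoded_block, ref_blocks):
    """Return the reference wavelet block closest to the intra-decoded
    block C'^H, per formula (5). ref_blocks: candidate wavelet blocks
    from the LBS search window; distance: sum of absolute differences."""
    def dist(candidate):
        return sum(abs(a - c) for a, c in zip(decoded_block, candidate))
    return min(ref_blocks, key=dist)
```

The winner then replaces the co-located block of C_d^y as the side information C_d^{yLBS} for the fine reconstruction of the ninth step.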
Step 9: fine reconstruction of the wavelet coefficients
Through inter-frame SW-SPIHT decoding or intra-frame intra-SPIHT decoding, the SPIHT stream is recovered; this recovery is assumed to be error-free. The recovered SPIHT bitstream carries information about C_d, and C_d is strongly correlated with the side information C_d^y (or C_d^yLBS; hereafter both are written C_d^y), especially in the high bit-planes, so the recovered SPIHT stream is likewise strongly correlated with C_d^y. On this basis, the present invention proposes a method of reconstructing C_d more accurately: the recovered SPIHT stream is used to refine C_d^y according to formula (6).
C'_d = { v_max,  if C_d^y ≥ v_max
         C_d^y,  if C_d^y ∈ (v_min, v_max)
         v_min,  if C_d^y ≤ v_min }    (6)
where C'_d is the final wavelet coefficient; v_max and v_min are, respectively, the maximum and minimum values of C_d inferred from the recovered SPIHT bit-plane information, i.e. the extremes consistent with all recovered bit-planes of C_d; m is the total number of bit-planes of C_d, and the number of bit-planes recovered so far is denoted n, with m > n. This refinement improves the wavelet coefficients reconstructed by SPIHT decoding for two reasons: on the one hand, the n recovered bit-planes of C_d bound the distortion that would arise from reconstructing C_d from the bit-planes of C_d^y alone; on the other hand, the remaining m - n low bit-planes of C_d^y supplement the n recovered high bit-planes, compensating a reconstruction that would otherwise rely only on the high bit-planes of C_d.
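Formula (6) is a clamp of the side-information coefficient into the interval [v_min, v_max] implied by the recovered bit-planes. The sketch below assumes one plausible reading of v_min and v_max for a coefficient whose n most significant of m magnitude bit-planes are known; the patent defines them only as the extremes consistent with the recovered planes, and the function name and sign handling are illustrative.

```python
def refine_coeff(c_side, recovered_msb, n, m, sign=1):
    """Clamp a side-information coefficient into [v_min, v_max], formula (6).

    `recovered_msb` is the integer formed by the n most significant of the
    m magnitude bit-planes recovered by SPIHT decoding.  The bounds below
    (remaining planes all zero / all one) are an assumed reading of
    v_min / v_max.
    """
    v_min = recovered_msb << (m - n)            # unrecovered planes all 0
    v_max = v_min + (1 << (m - n)) - 1          # unrecovered planes all 1
    magnitude = min(max(abs(c_side), v_min), v_max)
    return sign * magnitude
```

For example, with n = 3 recovered planes of m = 6 and recovered bits 101, the admissible magnitude interval is [40, 47], and a side-information value of 100 is clamped to 47.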
Step 10: inverse discrete wavelet transform (IDWT)
The IDWT is applied to C'_d to recover the residual D'.
Step 11: recovery of the original pixels. Finally, the reference frame W_re is used to recover the pixel values, i.e. W' = D' + W_re.
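Steps 10 and 11 can be illustrated with a one-level inverse Haar transform standing in for the patent's unspecified IDWT filter bank; the function name and the Haar choice are assumptions.

```python
import numpy as np

def haar_idwt2(ll, lh, hl, hh):
    """One-level inverse 2-D Haar transform (step 10).

    Inverts an analysis step built from the averaging/differencing filters
    (a+b)/2 and (a-b)/2.  A stand-in for the patent's unspecified IDWT.
    """
    rows, cols = ll.shape
    lo = np.empty((rows * 2, cols))
    hi = np.empty_like(lo)
    lo[0::2], lo[1::2] = ll + lh, ll - lh      # undo row-direction analysis
    hi[0::2], hi[1::2] = hl + hh, hl - hh
    x = np.empty((rows * 2, cols * 2))
    x[:, 0::2], x[:, 1::2] = lo + hi, lo - hi  # undo column-direction analysis
    return x
```

Step 11 is then a plain addition: W' = haar_idwt2(...) + W_re.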
Fig. 5 is the flow chart of the low-complexity encoding algorithm of the present invention; the concrete implementation steps of the encoding algorithm are described below with reference to Fig. 5:
Step 1: read the video sequence from the camera, set the GOP length l, and set the thresholds T_1, T_2, T_3;
Step 2: intra-code the key frame of each GOP with H.264/AVC and output the key-frame bitstream to the transmission channel;
Step 3: decode the key-frame bitstream at the encoder side;
Step 4: generate the reference frame W_re of the Wyner-Ziv frame W by weighted-average interpolation (WAI);
Step 5: compute the residual D = W - W_re and apply the DWT to D, generating the wavelet coefficients C_d;
Step 6: group C_d into wavelet blocks as shown in Fig. 4;
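The encoder front end of steps 4 to 6 (reference frame by WAI, residual, DWT) can be sketched as below. A one-level Haar transform stands in for the patent's DWT, whose filter bank is not specified, and the function names are assumptions.

```python
import numpy as np

def wai_reference(k_prev, k_next, r, l):
    """Weighted-average interpolation of formula (1):
    W_re = a*K'_j + b*K'_{j+1}, with a = 1 - r/l and b = 1 - a,
    where r is the frame distance to the previous key frame and l the
    GOP length."""
    a = 1.0 - r / l
    return a * k_prev + (1.0 - a) * k_next

def haar_dwt2(x):
    """One-level 2-D Haar DWT of the residual (steps 5-6).

    A stand-in for the patent's multi-level DWT; one Haar level keeps the
    sketch short.  Returns the LL band and the (LH, HL, HH) detail bands.
    """
    lo = (x[:, 0::2] + x[:, 1::2]) / 2.0   # column-direction analysis
    hi = (x[:, 0::2] - x[:, 1::2]) / 2.0
    ll = (lo[0::2] + lo[1::2]) / 2.0       # row-direction analysis
    lh = (lo[0::2] - lo[1::2]) / 2.0
    hl = (hi[0::2] + hi[1::2]) / 2.0
    hh = (hi[0::2] - hi[1::2]) / 2.0
    return ll, (lh, hl, hh)
```

The subbands of the residual are then regrouped into the wavelet blocks of Fig. 4.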
Step 7: for each wavelet block, perform the intra-mode decision according to the temporal-correlation criterion and the spatial-correlation criterion of the present invention, generating the mode bitstream (1: intra mode, intra-SPIHT; 0: inter mode, SW-SPIHT);
Step 8: entropy-code the mode bitstream and send it to the decoder;
Step 9: perform bit-plane-level intra-SPIHT or SW-SPIHT coding: mode-1 blocks yield the intra-SPIHT bitstream, and mode-0 blocks yield the SW-SPIHT bitstreams, comprising the parity streams of the SPIHT significance information, the refinement information, and the sign information; all bitstreams are output to the transmission channel.
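The mode decision of step 7 follows the criteria spelled out with formulas (2) and (3) in claim step I.(8); a sketch, with an illustrative function name and caller-supplied thresholds (the patent does not fix the values of T_1, T_2, T_3):

```python
import numpy as np

def mode_decision(ll3, high, t1, t2, t3):
    """Intra/inter mode decision for one wavelet block (step 7).

    E_LL (formula (2)) is the energy of the lowest subband LL3; rho2
    (formula (3)) is the variance of the high-frequency coefficients.
    Mode 1 (intra-SPIHT) when E_LL >= T1, or when T2 <= E_LL < T1 and
    rho2 <= T3; mode 0 (inter SW-SPIHT) otherwise.
    """
    e_ll = float(np.sum(ll3 ** 2))
    rho2 = float(np.mean(high ** 2) - np.mean(high) ** 2)
    if e_ll >= t1 or (t2 <= e_ll < t1 and rho2 <= t3):
        return 1   # intra mode
    return 0       # inter mode
```

Scanning the mode bits of all blocks left-to-right, top-to-bottom yields the mode bitstream of step 8.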
Fig. 6 gives the program flow chart of the decoding algorithm; its implementation steps are described below:
Step 1: receive the bitstreams from the transmission channel and decode the received key-frame bitstream;
Step 2: generate the reference frame W_re with formula (1), and generate the side-information frame Y by interpolation with formula (4);
Step 3: generate the residual frame D_y = Y - W_re and apply the DWT to D_y, generating the wavelet coefficients C_d^y;
Step 4: entropy-decode the mode bitstream by arithmetic decoding;
Step 5: according to the decoded coding-mode information, if the mode is 0 (inter coding), take the wavelet block in C_d^y at the same position as the current wavelet block as side information and recover the SPIHT information with the LDPCA decoding algorithm, using the received parity bits;
Step 6: if the coding mode is 1 (intra coding), recover the wavelet coefficients by conventional SPIHT decoding;
Step 7: for mode-1 blocks, perform further motion estimation (ME) with LBS according to formula (5) to obtain the more accurate side information C_d^yLBS;
Step 8: reconstruct the wavelet coefficients using formula (6);
Step 9: reassemble the recovered wavelet blocks into the wavelet-coefficient image;
Step 10: recover the residual D' by the inverse discrete wavelet transform;
Step 11: recover the original pixel image by the addition W' = D' + W_re.
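The side-information interpolation of formula (4) combines forward and backward motion-compensated predictions from the decoded key frames with the weights α, β of formula (1). The sketch below computes one pixel at integer-pel accuracy (the patent uses half-pel motion estimation) and divides the four terms by 2, since the listed weights sum to two; that normalization, like the function name, is an assumption.

```python
import numpy as np

def side_info_pixel(k_prev, k_next, x, y, fwd, bwd, a, b):
    """One pixel of the side-information frame Y, after formula (4).

    fwd = (dx_f, dy_f) and bwd = (dx_b, dy_b) are the forward and backward
    motion vectors between the decoded key frames; a, b are the weights of
    formula (1).  Integer-pel access and the final division by 2 are
    assumptions beyond the patent text.
    """
    dxf, dyf = fwd
    dxb, dyb = bwd
    total = (a * k_prev[y + int(b * dyf), x + int(b * dxf)]
             + b * k_next[y - int(a * dyf), x - int(a * dxf)]
             + a * k_prev[y - int(b * dyb), x - int(b * dxb)]
             + b * k_next[y + int(a * dyb), x + int(a * dxb)])
    return total / 2.0
```

With zero motion vectors this reduces to the weighted average of the two key frames, matching formula (1).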
A preliminary test of the proposed HDVC scheme was carried out, using standard test sequences as input video: the luminance components of the 15 Hz QCIF sequences foreman and hall. A lossless transmission channel was assumed. Encoding and decoding were run on an HP Compaq nx6330 notebook (Intel(R) Core(TM)2 CPU T5500 @ 1.66 GHz, 980 MHz, 1.0 GB RAM). The software platform was the H.264/AVC JM9.0 platform, and the HDVC scheme was implemented in the C language.
An LDPCA code with 396 nodes was used for SW-SPIHT; when the source sequence was shorter than 396 bits, it was zero-padded to 396. So that all key frames and Wyner-Ziv frames yield more uniform video quality, the quantization steps of the two frame types were jointly adjusted: the key-frame quantization is determined by the QP selected in JM9.0, and the quantization precision of the Wyner-Ziv frames is determined mainly by the number of bit-planes. Fig. 7 plots five rate-distortion points of HDVC, i.e. the peak signal-to-noise ratio (PSNR) of the reconstructed video, for GOP = 8. The comparison tests include:
(1) H.264/AVC inter coding (I-B-B-B);
(2) H.264/AVC intra coding (I-I-I-I);
(3) a recent DCT-domain DVC result;
(4) residual DVC.
First, compared with the DCT-domain DVC result, the proposed algorithm gains 3 dB on the hall sequence, mainly because the present scheme exploits temporal correlation during encoding. For the high-motion foreman sequence the improvement is less pronounced, chiefly because the correlation between the weighted-average-interpolated reference frame and the current frame is lower.
Second, with the intra-mode decision added, the PSNR of HDVC improves by up to 2.1 dB, as shown in Fig. 8; the improvement is especially clear in high-motion regions.
In addition, compared with H.264/AVC intra coding, the proposed HDVC gains up to 7 dB. Meanwhile, the experiments show that the time needed to encode one Wyner-Ziv frame is roughly the same as the time the H.264/AVC scheme needs for one intra-coded frame, demonstrating the advantage of low-complexity encoding.
Finally, Fig. 9 shows images recovered by the algorithm of the present invention and by the H.264/AVC intra-coding algorithm; at the same rate (45 kbps), the present scheme clearly yields better subjective and objective quality.

Claims (1)

1. A hybrid distributed video coding method based on intra-mode decision, characterized by comprising the following steps:
I. low-complexity encoding, specifically comprising the following steps:
(1). read the preset values l, T_1, T_2, T_3, where l is the number of image frames contained in one group of pictures (GOP), T_1 and T_2 are the thresholds of the temporal-correlation criterion, and T_3 is the threshold of the spatial-correlation criterion; then read l + 1 image frames;
(2). intra-code the key frames with H.264/AVC, generating the H.264 intra bitstream, and send it to the decoder;
(3). intra-decode the key frames with H.264/AVC;
(4). generate the reference frame W_re of the current Wyner-Ziv frame using the formula
W_re = αK'_j + βK'_{j+1}    (1)
where K'_j and K'_{j+1} are the decoded key frames immediately preceding and following the current Wyner-Ziv frame; α = 1 - r/l and β = 1 - α, where r is the distance in frames between the current frame and the previous key frame K'_j, and l is the number of frames in the group of pictures (GOP);
(5). generate the residual frame: the residual frame is the difference between the current Wyner-Ziv frame W to be encoded and its reference frame, i.e. D = W - W_re;
(6). apply the discrete wavelet transform (DWT) to the residual frame D = W - W_re, outputting the wavelet image C_d;
(7). group the wavelet image into wavelet blocks;
(8). perform the intra-mode decision for each wavelet block: for each wavelet block, compute its temporal-correlation parameter E_LL and spatial-correlation parameter ρ² using formulas (2) and (3); when condition 1: E_LL ≥ T_1, or condition 2: T_2 ≤ E_LL < T_1 and ρ² ≤ T_3, is satisfied, set its coding-mode parameter to 1; in all other cases set the coding-mode parameter to 0;
Here, the temporal-correlation parameter is computed as the low-frequency energy E_LL of the current wavelet block:
E_LL = Σ_{i=1}^{N_LL} (C_i^LL)²    (2)
where C_i^LL is the i-th wavelet coefficient of the lowest-frequency subband LL_3 of the wavelet block, and N_LL is the total number of coefficients in LL_3;
The spatial-correlation parameter is computed as the variance of the high-frequency coefficients of the current wavelet block, as in formula (3):
ρ² = (1/N) Σ_{i=1}^{N} |C_i|² - ((1/N) Σ_{i=1}^{N} C_i)²    (3)
where C_i denotes the i-th high-frequency coefficient and N is the total number of high-frequency coefficients;
(9). entropy-code the mode bitstream: scan the coding modes of all wavelet blocks in left-to-right, top-to-bottom order to form the mode bitstream, entropy-code the mode bitstream, and send it to the decoder;
(10). encode the wavelet blocks: for each wavelet block, if its coding mode is 1, apply conventional intra-SPIHT coding to form the intra-SPIHT bitstream; if the mode is 0, apply inter-mode SW-SPIHT coding to form the inter-frame SW-SPIHT bitstream; finally send the intra-SPIHT and SW-SPIHT bitstreams to the decoder;
II. high-complexity decoding, specifically comprising the following steps:
(1). decode the key frames with H.264 intra decoding;
(2). produce the side-information frame Y: interpolate the side-information frame Y of the Wyner-Ziv frame from the decoded key frames by bidirectional motion-compensated interpolation, specifically using the following formula:
Y(x, y) = α·K'_j(x + β·dx_f, y + β·dy_f)
        + β·K'_{j+1}(x - α·dx_f, y - α·dy_f)
        + α·K'_j(x - β·dx_b, y - β·dy_b)
        + β·K'_{j+1}(x + α·dx_b, y + α·dy_b)    (4)
where (x, y) are the coordinates of a pixel of the interpolated frame; [dx_b, dy_b] and [dx_f, dy_f] are the backward and forward motion vectors between the decoded key frames, obtainable by the half-pel motion estimation method; α and β are the same as in formula (1);
(3). generate the reference frame: as at the encoder, produce the reference frame W_re with formula (1);
(4). generate the residual frame: the residual frame is the difference between the side-information frame Y and the reference frame W_re, i.e. D_y = Y - W_re;
(5). apply the discrete wavelet transform to the residual frame D_y = Y - W_re; the resulting coefficients C_d^y are likewise grouped into wavelet blocks;
(6). entropy-decode the mode bitstream, recovering it without distortion;
(7). inter-frame SW-SPIHT decoding: according to the decoded mode bitstream, if the mode value of the current wavelet block is 0, take the wavelet block at the same position in C_d^y as side information and perform inter-frame SW-SPIHT decoding;
(8). intra-frame intra-SPIHT decoding and wavelet-domain motion estimation: if the mode value of the current wavelet block is 1, perform intra-SPIHT decoding, specifically as follows: first, intra-SPIHT decoding yields the recovered wavelet-block coefficients C'_H; next, LBS motion estimation in the wavelet domain is performed, i.e. more accurate side information C_d^yLBS is obtained from C'_H; the LBS motion estimation is:
C_d^yLBS = min_{i ∈ reference} | C'_H - C_i^LBS |    (5)
where C_i^LBS is the i-th reference wavelet block; the reference set is formed by all wavelet blocks of all frames produced by LBS, and the search range is dx = [-8, +8], dy = [-8, +8];
(9). fine reconstruction of the wavelet coefficients: from the SPIHT information recovered by inter-frame SW-SPIHT or intra-frame intra-SPIHT decoding, refine C_d^y using formula (6):
C'_d = { v_max,  if C_d^y ≥ v_max
         C_d^y,  if C_d^y ∈ (v_min, v_max)
         v_min,  if C_d^y ≤ v_min }    (6)
where C'_d is the final wavelet coefficient; v_max and v_min are, respectively, the maximum and minimum values of C_d inferred from the recovered SPIHT bit-plane information, i.e. the extremes consistent with all recovered bit-planes of C_d; m is the total number of bit-planes of C_d, and the number of bit-planes recovered so far is denoted n, with m > n;
(10). inverse discrete wavelet transform (IDWT): apply the IDWT to C'_d to recover the residual D';
(11). recover the original pixels: the sum of the reference frame W_re and the residual D' is the finally recovered pixel value, i.e. W' = D' + W_re.
CN 200810105125 2008-04-25 2008-04-25 Hybrid distributed video encoding method based on intra-frame intra-frame mode decision Expired - Fee Related CN101335892B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 200810105125 CN101335892B (en) 2008-04-25 2008-04-25 Hybrid distributed video encoding method based on intra-frame intra-frame mode decision


Publications (2)

Publication Number Publication Date
CN101335892A true CN101335892A (en) 2008-12-31
CN101335892B CN101335892B (en) 2010-06-09

Family

ID=40198149

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 200810105125 Expired - Fee Related CN101335892B (en) 2008-04-25 2008-04-25 Hybrid distributed video encoding method based on intra-frame intra-frame mode decision

Country Status (1)

Country Link
CN (1) CN101335892B (en)

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101841720A (en) * 2010-05-25 2010-09-22 东南大学 Stereo video coding method based on modified view lifting schemes
CN101883287A (en) * 2010-07-14 2010-11-10 清华大学深圳研究生院 Method for multi-viewpoint video coding side information integration
CN102026000A (en) * 2011-01-06 2011-04-20 西安电子科技大学 Distributed video coding system with combined pixel domain-transform domain
CN102223533A (en) * 2011-04-14 2011-10-19 广东工业大学 Signal decoding and coding method and device
CN102333218A (en) * 2011-09-23 2012-01-25 清华大学深圳研究生院 DVC (Distributed Video Coding) decoding terminal side information frame generating method based on ODWT sub-band classification
CN102595132A (en) * 2012-02-17 2012-07-18 南京邮电大学 Distributed video encoding and decoding method applied to wireless sensor network
CN102630008A (en) * 2011-09-29 2012-08-08 北京京东方光电科技有限公司 Method and terminal for wireless video transmission
CN103067710A (en) * 2012-12-28 2013-04-24 辽宁师范大学 Distributed hyperspectral image coding and decoding method based on three-dimensional wavelet transform
CN103561268A (en) * 2010-12-29 2014-02-05 中国移动通信集团公司 Method and device for encoding video monitoring image
WO2015021587A1 (en) * 2013-08-12 2015-02-19 Intel Corporation Techniques for low power image compression and display
CN106327510A (en) * 2016-08-29 2017-01-11 广州华多网络科技有限公司 Image reconstruction method and device
CN106612429A (en) * 2016-01-29 2017-05-03 四川用联信息技术有限公司 Image lossless compression method based on controllable-parameter encryption and compression algorithm
CN107749993A (en) * 2017-11-02 2018-03-02 广西大学 Distributed video coding information source distortion evaluation method based on MMSE reconstruct
CN107911196A (en) * 2017-10-27 2018-04-13 中国电子科技集团公司第二十八研究所 A kind of radar track message transmitting method
CN113068037A (en) * 2021-03-17 2021-07-02 上海哔哩哔哩科技有限公司 Method, apparatus, device, and medium for sample adaptive compensation

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4045913B2 (en) * 2002-09-27 2008-02-13 三菱電機株式会社 Image coding apparatus, image coding method, and image processing apparatus
CN1322472C (en) * 2003-09-08 2007-06-20 中国人民解放军第一军医大学 Quad tree image compressing and decompressing method based on wavelet conversion prediction
US7565020B2 (en) * 2004-07-03 2009-07-21 Microsoft Corp. System and method for image coding employing a hybrid directional prediction and wavelet lifting

Cited By (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101841720B (en) * 2010-05-25 2011-08-10 东南大学 Stereo video coding method based on modified view lifting schemes
CN101841720A (en) * 2010-05-25 2010-09-22 东南大学 Stereo video coding method based on modified view lifting schemes
CN101883287A (en) * 2010-07-14 2010-11-10 清华大学深圳研究生院 Method for multi-viewpoint video coding side information integration
CN103561268A (en) * 2010-12-29 2014-02-05 中国移动通信集团公司 Method and device for encoding video monitoring image
CN102026000A (en) * 2011-01-06 2011-04-20 西安电子科技大学 Distributed video coding system with combined pixel domain-transform domain
CN102026000B (en) * 2011-01-06 2012-07-04 西安电子科技大学 Distributed video coding system with combined pixel domain-transform domain
CN102223533A (en) * 2011-04-14 2011-10-19 广东工业大学 Signal decoding and coding method and device
CN102333218A (en) * 2011-09-23 2012-01-25 清华大学深圳研究生院 DVC (Distributed Video Coding) decoding terminal side information frame generating method based on ODWT sub-band classification
CN102630008B (en) * 2011-09-29 2014-07-30 北京京东方光电科技有限公司 Method and terminal for wireless video transmission
CN102630008A (en) * 2011-09-29 2012-08-08 北京京东方光电科技有限公司 Method and terminal for wireless video transmission
CN102595132A (en) * 2012-02-17 2012-07-18 南京邮电大学 Distributed video encoding and decoding method applied to wireless sensor network
CN103067710A (en) * 2012-12-28 2013-04-24 辽宁师范大学 Distributed hyperspectral image coding and decoding method based on three-dimensional wavelet transform
CN103067710B (en) * 2012-12-28 2015-11-18 辽宁师范大学 Based on distributed hyper spectrum image coding and the coding/decoding method of 3 D wavelet transformation
WO2015021587A1 (en) * 2013-08-12 2015-02-19 Intel Corporation Techniques for low power image compression and display
CN106612429A (en) * 2016-01-29 2017-05-03 四川用联信息技术有限公司 Image lossless compression method based on controllable-parameter encryption and compression algorithm
CN106327510A (en) * 2016-08-29 2017-01-11 广州华多网络科技有限公司 Image reconstruction method and device
CN106327510B (en) * 2016-08-29 2019-08-23 广州华多网络科技有限公司 A kind of method and device of image reconstruction
CN107911196A (en) * 2017-10-27 2018-04-13 中国电子科技集团公司第二十八研究所 A kind of radar track message transmitting method
CN107911196B (en) * 2017-10-27 2020-07-14 南京莱斯电子设备有限公司 Radar track message transmission method
CN107749993A (en) * 2017-11-02 2018-03-02 广西大学 Distributed video coding information source distortion evaluation method based on MMSE reconstruct
CN107749993B (en) * 2017-11-02 2019-12-03 广西大学 Distributed video coding information source based on MMSE reconstruct is distorted evaluation method
CN113068037A (en) * 2021-03-17 2021-07-02 上海哔哩哔哩科技有限公司 Method, apparatus, device, and medium for sample adaptive compensation

Also Published As

Publication number Publication date
CN101335892B (en) 2010-06-09

Similar Documents

Publication Publication Date Title
CN101335892B (en) Hybrid distributed video encoding method based on intra-frame intra-frame mode decision
KR100597402B1 (en) Method for scalable video coding and decoding, and apparatus for the same
CN102137263B (en) Distributed video coding and decoding methods based on classification of key frames of correlation noise model (CNM)
CN101860748B (en) Side information generating system and method based on distribution type video encoding
CN1926876B (en) Method for coding and decoding an image sequence encoded with spatial and temporal scalability
CN102271256B (en) Mode decision based adaptive GOP (group of pictures) distributed video coding and decoding method
CN102630012B (en) Coding and decoding method, device and system based on multiple description videos
CN103442228B (en) Code-transferring method and transcoder thereof in from standard H.264/AVC to the fast frame of HEVC standard
CN101835044A (en) Grouping method in frequency domain distributed video coding
JP2008511185A (en) Method and system for representing correlated image sets
CN101835042A (en) Wyner-Ziv video coding system controlled on the basis of non feedback speed rate and method
CN108989802A (en) A kind of quality estimation method and system of the HEVC video flowing using inter-frame relation
CN102256133A (en) Distributed video coding and decoding method based on side information refining
CN101621690A (en) Two-description video coding method based on Wyner-Ziv principle
CN101964910B (en) Video spatial resolution conversion method based on code-rate type transcoding assistance
CN105611301B (en) Distributed video decoding method based on wavelet field residual error
CN102572428B (en) Side information estimating method oriented to distributed coding and decoding of multimedia sensor network
CN102595132A (en) Distributed video encoding and decoding method applied to wireless sensor network
CN110100437A (en) For damaging the hybrid domain cooperation loop filter of Video coding
JP2006509410A (en) Video encoding method and apparatus
AU2001293994A1 (en) Compression of motion vectors
EP1325636A2 (en) Compression of motion vectors
CN101150721B (en) Coding method with adaptable bit element plane coding mode
CN100466735C (en) Video encoding and decoding methods and video encoder and decoder
CN107749993B (en) Distributed video coding information source based on MMSE reconstruct is distorted evaluation method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C17 Cessation of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20100609

Termination date: 20140425