CN101663895A - Video coding mode selection using estimated coding costs - Google Patents


Info

Publication number
CN101663895A
CN101663895A CN200780052818A
Authority
CN
China
Prior art keywords
coding
residual data
zero
matrix
transform coefficient
Prior art date
Legal status
Granted
Application number
CN200780052818A
Other languages
Chinese (zh)
Other versions
CN101663895B (en)
Inventor
Sitaraman Ganapathy Subramania
Fang Shi
Peisong Chen
Seyfullah Halit Oguz
Scott T. Swazey
Vinod Kaushik
Current Assignee
Qualcomm Inc
Original Assignee
Qualcomm Inc
Priority date
Filing date
Publication date
Application filed by Qualcomm Inc
Publication of CN101663895A
Application granted
Publication of CN101663895B
Status: Expired - Fee Related
Anticipated expiration

Classifications

    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/19 Adaptive coding characterised by the adaptation method, adaptation tool or adaptation type, using optimisation based on Lagrange multipliers
    • H04N19/103 Selection of coding mode or of prediction mode
    • H04N19/107 Selection of coding mode or of prediction mode between spatial and temporal predictive coding, e.g. picture refresh
    • H04N19/109 Selection of coding mode or of prediction mode among a plurality of temporal predictive coding modes
    • H04N19/11 Selection of coding mode or of prediction mode among a plurality of spatial predictive coding modes
    • H04N19/147 Data rate or code amount at the encoder output according to rate distortion criteria
    • H04N19/149 Data rate or code amount at the encoder output by estimating the code amount by means of a model, e.g. mathematical model or statistical model
    • H04N19/176 Coding unit being an image region, the region being a block, e.g. a macroblock
    • H04N19/60 Using transform coding
    • H04N19/61 Using transform coding in combination with predictive coding

Abstract

This disclosure describes techniques for coding mode selection using estimated coding costs. For example, to provide high compression efficiency, an encoding device may attempt to select the coding mode that codes the data of a block of pixels with the highest efficiency. To this end, the encoding device may select a coding mode based on estimates of the coding cost of at least a portion of the possible modes. In accordance with the techniques described herein, the encoding device estimates the coding costs of the different modes without actually coding the block. In fact, in some aspects, the coding module may estimate the coding cost of a mode without quantizing the data of the block for each of the modes. In this manner, the coding cost estimation techniques of this disclosure reduce the computationally intensive operations needed to perform effective mode selection.

Description

Video coding mode selection using estimated coding costs
Technical field
The present invention relates to video coding and, more particularly, to estimating the cost of coding a video sequence.
Background
Digital video capabilities can be incorporated into a wide range of devices, including digital televisions, digital direct broadcast systems, wireless communication devices, personal digital assistants (PDAs), laptop computers, desktop computers, video game consoles, digital cameras, digital recording devices, and cellular or satellite radio telephones. Digital video devices can provide significant improvements over conventional analog video systems in the processing and transmission of video sequences.
Different video coding standards have been established for coding digital video sequences. The Moving Picture Experts Group (MPEG), for example, has developed a number of standards including MPEG-1, MPEG-2 and MPEG-4. Other examples include the International Telecommunication Union (ITU-T) H.263 standard, and the ITU-T H.264 standard and its counterpart, ISO/IEC MPEG-4, Part 10, i.e., Advanced Video Coding (AVC). These video coding standards support improved transmission efficiency of video sequences by coding the data in a compressed manner.
Many current techniques make use of block-based coding. In block-based coding, frames of a multimedia sequence are divided into discrete blocks of pixels, and the blocks of pixels are coded based on differences with other blocks, which may be located within the same frame or in a different frame. Some blocks of pixels, referred to as "macroblocks," comprise a group of sub-blocks of pixels. As an example, a 16x16 macroblock may comprise four 8x8 blocks. The sub-blocks may be coded separately. The H.264 standard, for example, permits coding of blocks using a variety of different sizes, e.g., 16x16, 16x8, 8x16, 8x8, 4x4, 8x4 and 4x8. By extension, sub-blocks of any size may be included within a macroblock, e.g., 2x16, 16x2, 2x2, 4x16 and 8x2.
Summary of the invention
In some aspects of this disclosure, a method for coding digital video data comprises: identifying one or more transform coefficients of residual data of a block of pixels that will remain non-zero upon quantization; estimating an amount of bits associated with coding the residual data based at least on the identified transform coefficients; and estimating a coding cost for coding the block of pixels based at least on the estimated amount of bits associated with coding the residual data.
In some aspects, an apparatus for coding digital video data comprises: a transform module that generates transform coefficients for residual data of a block of pixels; a bit estimation module that identifies one or more of the transform coefficients that will remain non-zero upon quantization and estimates an amount of bits associated with coding the residual data based at least on the identified transform coefficients; and a control module that estimates a coding cost for coding the block of pixels based at least on the estimated amount of bits associated with coding the residual data.
In some aspects, an apparatus for coding digital video data comprises: means for identifying one or more transform coefficients of residual data of a block of pixels that will remain non-zero upon quantization; means for estimating an amount of bits associated with coding the residual data based at least on the identified transform coefficients; and means for estimating a coding cost for coding the block of pixels based at least on the estimated amount of bits associated with coding the residual data.
In some aspects, a computer program product for coding digital video data comprises a computer-readable medium having instructions thereon. The instructions comprise: code for identifying one or more transform coefficients of residual data of a block of pixels that will remain non-zero upon quantization; code for estimating an amount of bits associated with coding the residual data based at least on the identified transform coefficients; and code for estimating a coding cost for coding the block of pixels based at least on the estimated amount of bits associated with coding the residual data.
The details of one or more examples are set forth in the accompanying drawings and the description below. Other features, objects and advantages will be apparent from the description and drawings, and from the claims.
Description of drawings
Fig. 1 is a block diagram illustrating a video coding system that employs the coding cost estimation techniques described herein.
Fig. 2 is a block diagram illustrating an exemplary coding module in further detail.
Fig. 3 is a block diagram illustrating another exemplary coding module in further detail.
Fig. 4 is a flow diagram illustrating exemplary operation of a coding module selecting a coding mode based on estimated coding costs.
Fig. 5 is a flow diagram illustrating exemplary operation of a coding module estimating the amount of bits associated with coding the residual data of a block without quantizing or encoding the residual data.
Fig. 6 is a flow diagram illustrating exemplary operation of a coding module estimating the amount of bits associated with coding the residual data of a block without encoding the residual data.
Detailed description
This disclosure describes techniques for video coding mode selection using estimated coding costs. For example, to provide high compression efficiency, an encoding device may attempt to select the coding mode that codes the data of a block of pixels with the highest efficiency. To this end, the encoding device may select a coding mode based at least on estimates of the coding cost of at least a portion of the possible modes. In accordance with the techniques described herein, the encoding device estimates the coding costs of the different modes without actually coding the block. In fact, in some aspects, the coding module may estimate the coding costs of the modes without quantizing the data of the block for each of the modes. In this manner, the coding cost estimation techniques of this disclosure reduce the computationally intensive operations needed to perform effective mode selection.
Fig. 1 is a block diagram illustrating a multimedia coding system 10 that employs coding cost estimation techniques as described herein. Coding system 10 includes an encoding device 12 and a decoding device 14 connected by a transmission channel 16. Encoding device 12 encodes one or more sequences of digital multimedia data and transmits the encoded sequences over transmission channel 16 to decoding device 14, which decodes them and may present them to a user of decoding device 14. Transmission channel 16 may comprise any wired or wireless medium, or a combination thereof.
Encoding device 12 may form part of a broadcast network component used to broadcast one or more channels of multimedia data. As an example, encoding device 12 may form part of a wireless base station, a server, or any infrastructure node used to broadcast one or more channels of encoded multimedia data to wireless devices. In this case, encoding device 12 may transmit the encoded data to a plurality of wireless devices, such as decoding device 14. For simplicity, however, only a single decoding device 14 is illustrated in Fig. 1. Alternatively, encoding device 12 may comprise a mobile telephone that locally transmits captured video for video telephony or other similar applications.
Decoding device 14 may comprise a user device that receives the encoded multimedia data transmitted by encoding device 12 and decodes the multimedia data for presentation to a user. By way of example, decoding device 14 may be implemented as part of a digital television, a wireless communication device, a gaming device, a portable digital assistant (PDA), a laptop or desktop computer, a digital music and video device (such as those sold under the "iPod" trademark), or a radio telephone (e.g., a cellular, satellite or land-based radio telephone), or another mobile wireless terminal equipped for video and/or audio streaming, video telephony, or both. Decoding device 14 may be associated with a mobile or stationary device. In a broadcast application, encoding device 12 may transmit encoded video and/or audio to decoding devices 14 associated with a plurality of users.
In some aspects, for two-way communication, multimedia coding system 10 may support video telephony or video streaming according to the Session Initiation Protocol (SIP), the International Telecommunication Union (ITU-T) H.323 standard, the ITU-T H.324 standard, or another standard. For one-way or two-way communication, encoding device 12 may generate the encoded multimedia data according to a video compression standard such as Moving Picture Experts Group (MPEG)-2, MPEG-4, ITU-T H.263, or ITU-T H.264 (which corresponds to MPEG-4, Part 10, Advanced Video Coding (AVC)). Although not shown in Fig. 1, encoding device 12 and decoding device 14 may each be integrated with an audio encoder and decoder, respectively, and include appropriate multiplexer-demultiplexer (MUX-DEMUX) modules, or other hardware, firmware or software, to handle encoding of both audio and video in a common data sequence or in separate data sequences. If applicable, MUX-DEMUX modules may conform to the ITU H.223 multiplexer protocol, or to other protocols such as the User Datagram Protocol (UDP).
In some aspects, this disclosure contemplates application to Enhanced H.264 video coding for delivering real-time multimedia services in terrestrial mobile multimedia multicast (TM3) systems using the Forward Link Only (FLO) Air Interface Specification, "Forward Link Only Air Interface Specification for Terrestrial Mobile Multimedia Multicast," published in August 2006 as Technical Standard TIA-1099 (the "FLO" standard). However, the coding cost estimation techniques described in this disclosure are not limited to any particular type of broadcast, multicast, unicast or point-to-point system.
As illustrated in Fig. 1, encoding device 12 includes a coding module 18 and a transmitter 20. Coding module 18 receives one or more input multimedia sequences (which, in the case of video coding, may comprise one or more frames of data) and selectively encodes the frames of the received multimedia sequences. Coding module 18 receives the input multimedia sequences from one or more sources (not shown in Fig. 1). In some aspects, coding module 18 may receive the input multimedia sequences from one or more video content providers, e.g., via satellite. As another example, coding module 18 may receive the multimedia sequences from an image capture device integrated within, or coupled to, encoding device 12 (not shown in Fig. 1). Alternatively, coding module 18 may receive the multimedia sequences from a memory or archive within, or coupled to, encoding device 12 (not shown in Fig. 1). The multimedia sequences may comprise live or near-live real-time video, audio, or video and audio sequences to be coded and transmitted as a broadcast or on demand, and may comprise pre-coded and stored video, audio, or video and audio sequences to be coded and transmitted as a broadcast or on demand. In some aspects, at least a portion of the multimedia sequences may be computer-generated, such as in the case of gaming.
In any case, coding module 18 encodes the frames and transmits a plurality of coded frames to decoding device 14 via transmitter 20. Coding module 18 may encode the frames of the input multimedia sequences as intra-coded frames, inter-coded frames, or a combination thereof. Frames encoded using intra-coding techniques are coded without reference to other frames, and are referred to as intra ("I") frames. Frames encoded using inter-coding techniques are coded with reference to one or more other frames. The inter-coded frames may comprise one or more predictive ("P") frames, bi-directional ("B") frames, or a combination thereof. P frames are encoded with reference to at least one temporally prior frame, while B frames are encoded with reference to at least one temporally future frame. In some cases, B frames may be encoded with reference to at least one temporally future frame and at least one temporally prior frame.
Coding module 18 may further be configured to partition a frame into a plurality of blocks and encode each of the blocks separately. As an example, coding module 18 may divide the frame into a plurality of 16x16 blocks. Some of the blocks, referred to as "macroblocks," comprise groups of sub-partitioned blocks (referred to herein as "sub-blocks"). As an example, a 16x16 macroblock may comprise four 8x8 sub-blocks or other sub-partitioned blocks. The H.264 standard, for example, permits encoding of blocks using a variety of different sizes, e.g., 16x16, 16x8, 8x16, 8x8, 4x4, 8x4 and 4x8. By extension, sub-blocks of any size may be included within a macroblock, e.g., 2x16, 16x2, 2x2, 4x16, 8x2 and so on. Thus, coding module 18 may be configured to partition a frame into a number of blocks of pixels and encode each of the blocks of pixels as an intra-coded block or an inter-coded block, each of which may simply be referred to as a block.
Coding module 18 may support a plurality of coding modes. Each of the modes may correspond to a different combination of block size and coding technique. In the case of the H.264 standard, for instance, there are inter-coding modes covering seven different block sizes plus a SKIP mode, and thirteen intra-coding modes. The inter-coding modes comprise the SKIP, 16x16, 16x8, 8x16, 8x8, 8x4, 4x8 and 4x4 modes. The thirteen intra-coding modes comprise the INTRA 4x4 mode, for which there are nine possible interpolation directions, and the INTRA 16x16 mode, for which there are four possible interpolation directions.
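As a rough, non-authoritative illustration of the candidate set described above, the following Python sketch simply enumerates the mode labels; the names and grouping are assumptions made for illustration and are not identifiers from the H.264 specification or from coding module 18.

```python
# Hypothetical labels for the H.264 candidate modes discussed above.
INTER_MODES = [
    "SKIP", "INTER_16x16", "INTER_16x8", "INTER_8x16",
    "INTER_8x8", "INTER_8x4", "INTER_4x8", "INTER_4x4",
]

# INTRA 4x4 has nine prediction directions; INTRA 16x16 has four.
INTRA_MODES = (
    [f"INTRA_4x4_dir{d}" for d in range(9)]
    + [f"INTRA_16x16_dir{d}" for d in range(4)]
)

if __name__ == "__main__":
    print(len(INTER_MODES), "inter modes,", len(INTRA_MODES), "intra modes")
```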
To provide high compression efficiency, in accordance with various aspects of this disclosure, coding module 18 attempts to select the mode that codes the data of the block with the highest efficiency. To this end, coding module 18 estimates the coding cost of at least a portion of the modes for each of the blocks. Coding module 18 estimates the coding costs in terms of rate and distortion. In accordance with the techniques described herein, coding module 18 determines the rate and distortion metrics used to estimate the coding costs of the modes without actually coding the block. In this way, coding module 18 can select one of the modes based at least on the estimated coding costs without performing the computationally complex coding of the data of the block for each of the modes. Conventional mode selection requires actually coding the data using each of the modes to determine which mode to select. The techniques therefore save time and computational resources by selecting the mode based on coding costs estimated without actually coding the data for each of the modes. In fact, in some aspects, coding module 18 may estimate the coding costs of the modes without quantizing the data of the block for each mode. In this manner, the coding cost estimation techniques of this disclosure reduce the computationally intensive operations needed to perform effective mode selection.
Encoding device 12 codes the blocks of the frame using the selected modes and transmits the coded frames of data via transmitter 20. Transmitter 20 may include appropriate modem and driver circuitry, software and/or firmware to transmit the encoded multimedia over transmission channel 16. For wireless applications, transmitter 20 includes RF circuitry to transmit wireless data carrying the encoded multimedia data.
Decoding device 14 includes a receiver 22 and a decoder module 24. Decoding device 14 receives the encoded data from encoding device 12 via receiver 22. Like transmitter 20, receiver 22 may include appropriate modem and driver circuitry, software and/or firmware to receive the encoded multimedia over transmission channel 16, and in wireless applications may include RF circuitry to receive the wireless data carrying the encoded multimedia data. Decoder module 24 decodes the coded frames of data received via receiver 22. Decoding device 14 may further present the decoded frames of data to a user via a display (not shown), which may be integrated within decoding device 14 or provided as a discrete device coupled to decoding device 14 via a wired or wireless connection.
In some examples, encoding device 12 and decoding device 14 may each include reciprocal transmit and receive circuitry so that each may serve as both a transmitting device and a receiving device for encoded multimedia and other information transmitted over transmission channel 16. In this case, both encoding device 12 and decoding device 14 may transmit and receive multimedia sequences and thus participate in two-way communication. In other words, the illustrated components of coding system 10 may be integrated as part of an encoder/decoder (CODEC).
The components within encoding device 12 and decoding device 14 are examples of those that may be used to implement the techniques described herein. Encoding device 12 and decoding device 14 may, however, include many other components, if desired. For example, encoding device 12 may include a plurality of coding modules, each of which receives one or more sequences of multimedia data and encodes the respective sequences in accordance with the techniques described herein. In this case, encoding device 12 may further include at least one multiplexer to combine the data segments for transmission. In addition, encoding device 12 and decoding device 14 may include appropriate modulation, demodulation, frequency conversion, filtering and amplifier components for transmission and reception of encoded video, as applicable, including radio frequency (RF) wireless components and antennas. For ease of illustration, however, such components are not shown in Fig. 1.
Fig. 2 is a block diagram illustrating an exemplary coding module 30 in further detail. Coding module 30 may, for example, represent coding module 18 of encoding device 12 of Fig. 1. As illustrated in Fig. 2, coding module 30 includes a control module 32 that receives input frames of multimedia data of one or more multimedia sequences from one or more sources and processes the frames of the received multimedia sequences. In particular, control module 32 analyzes the incoming frames of the multimedia sequences and determines whether to encode or skip the incoming frames based on the analysis of the frames. In some aspects, encoding device 12 may use frame skipping to conserve bandwidth across transmission channel 16 by encoding the information contained in the multimedia sequence at a reduced frame rate.
For incoming frames that are to be encoded, control module 32 may also be configured to determine whether to encode the frames as I frames, P frames or B frames. Control module 32 may determine to encode an incoming frame located at the beginning of a multimedia sequence or at a scene change of the sequence as an I frame, for use as a channel switch frame, or for use as an intra refresh frame. Otherwise, control module 32 encodes the frame as an inter-coded frame (i.e., a P frame or a B frame) to reduce the amount of bandwidth associated with coding the frame.
Control module 32 may further be configured to partition the frame into a plurality of blocks and select a coding mode, e.g., one of the H.264 coding modes described above, for each of the blocks. As will be described in detail below, coding module 30 may estimate the coding costs of at least a portion of the modes to assist in selecting the most efficient one of the coding modes. After selecting the coding mode to use for coding the block, coding module 30 generates residual data for the block. For blocks selected to be intra-coded, spatial prediction module 34 generates the residual data for the block. Spatial prediction module 34 may, for example, generate a prediction version of the block by interpolation using one or more neighboring blocks within the frame and the interpolation directionality corresponding to the selected intra-coding mode. Spatial prediction module 34 may then compute the difference between the block of the input frame and the prediction block. This difference is referred to as residual data or residual coefficients.
For blocks selected to be inter-coded, motion estimation module 36 and motion compensation module 38 generate the residual data for the block. In particular, motion estimation module 36 identifies at least one reference frame and searches the reference frame for the block that most closely matches the block in the input frame. Motion estimation module 36 computes a motion vector to represent the offset between the location of the block in the input frame and the location of the identified block in the reference frame. Motion compensation module 38 computes the difference between the block of the input frame and the identified block in the reference frame pointed to by the motion vector. This difference is the residual data for the block.
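The residual computation described in the two preceding paragraphs amounts to an element-wise difference between the input block and its prediction. A minimal sketch, assuming simple list-of-lists blocks rather than the actual interfaces of modules 34, 36 and 38:

```python
def residual_block(block, prediction):
    # Residual data: input block minus its prediction (the interpolated
    # neighboring-block prediction for intra, or the motion-compensated
    # reference block for inter).
    return [
        [block[i][j] - prediction[i][j] for j in range(len(block[0]))]
        for i in range(len(block))
    ]

# Example: a 4x4 input block and a (hypothetical) motion-compensated reference.
cur = [[120, 122, 119, 121] for _ in range(4)]
ref = [[118, 120, 121, 119] for _ in range(4)]
print(residual_block(cur, ref))
```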
Coding module 30 also includes a transform module 40, a quantization module 46 and an entropy encoder 48. Transform module 40 transforms the residual data of the block in accordance with a transform function. In some aspects, transform module 40 applies an integer transform, e.g., a 4x4 or 8x8 integer transform, or a discrete cosine transform (DCT), to the residual data to generate the transform coefficients of the residual data. Quantization module 46 quantizes the transform coefficients and provides the quantized transform coefficients to entropy encoder 48. Entropy encoder 48 encodes the quantized transform coefficients using a context-adaptive coding technique, such as context-adaptive variable length coding (CAVLC) or context-adaptive binary arithmetic coding (CABAC). As described in detail below, entropy encoder 48 codes the data of the block using the selected mode.
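For reference, the sketch below applies the 4x4 integer core transform used by H.264 (Y = C·X·Cᵀ) to a residual block; the normalization that H.264 folds into the quantization step is omitted, so the outputs are unnormalized core-transform coefficients, and the helper names are assumptions.

```python
C4 = [[1,  1,  1,  1],
      [2,  1, -1, -2],
      [1, -1, -1,  1],
      [1, -2,  2, -1]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def transpose(m):
    return [list(row) for row in zip(*m)]

def core_transform_4x4(residual):
    # Unnormalized H.264 4x4 integer core transform: Y = C * X * C^T.
    return matmul(matmul(C4, residual), transpose(C4))

print(core_transform_4x4([[2, 1, -1, 0] for _ in range(4)]))
```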
Entropy encoder 48 may also encode additional data associated with the block. For instance, in addition to the residual data, entropy encoder 48 may encode one or more motion vectors for the block, an identifier of the coding mode of the block, one or more reference frame indices, quantization parameter (QP) information, slice information for the block, and the like. Entropy encoder 48 may receive this additional block data from other modules within coding module 30. For example, motion vector information may be received from motion estimation module 36, and block mode information may be received from control module 32. In some aspects, entropy encoder 48 may code at least a portion of this additional information using fixed length coding (FLC) techniques or universal variable length coding (VLC) techniques, e.g., Exponential-Golomb coding ("Exp-Golomb"). Alternatively, entropy encoder 48 may encode a portion of the additional block data using the context-adaptive coding techniques described above, i.e., CABAC or CAVLC.
To assist control module 32 in selecting the mode to use for the block, control module 32 estimates the coding costs of at least a portion of the possible modes. In some aspects, control module 32 may estimate the cost of coding the block in each of the possible coding modes. The cost may, for example, be estimated in terms of the amount of bits associated with coding the block in a given mode and the amount of distortion produced in that mode. In the case of the H.264 standard, for instance, control module 32 may estimate the coding costs of 22 different coding modes (inter-coding and intra-coding modes) for blocks selected to be inter-coded, and may estimate the coding costs of 13 different coding modes for blocks selected to be intra-coded. In other aspects, control module 32 may initially use another mode selection technique to reduce the set of possible modes, and then utilize the techniques of this disclosure to estimate the coding costs of the modes remaining in the set. In other words, in some aspects, control module 32 may narrow down the number of mode possibilities before applying the cost estimation techniques. Advantageously, coding module 30 estimates the coding costs of the modes without actually coding the data of the block for the different modes, thereby reducing the computational overhead associated with making the coding decision. In fact, in the example illustrated in Fig. 2, coding module 30 may estimate the coding costs without quantizing the data of the block for the different modes. In this manner, the coding cost estimation techniques of this disclosure reduce the computationally intensive operations needed to compute the coding costs. In particular, the block need not be encoded using each of the various coding modes in order to select one of the modes.
As will be described in further detail herein, control module 32 estimates the coding cost of each analyzed mode according to the following equation:
J = D + λ_mode · R,    (1)
where J is the estimated coding cost, D is a distortion metric for the block, λ_mode is the Lagrange multiplier for the corresponding mode, and R is a rate metric for the block. The distortion metric (D) may comprise, for example, a sum of absolute differences (SAD), a sum of squared differences (SSD), a sum of absolute transformed differences (SATD), a sum of squared transformed differences (SSTD), or the like. The rate metric (R) may, for example, be the amount of bits associated with coding the data of the given block. As described above, different types of block data may be coded using different coding techniques. Equation (1) may therefore be rewritten in the following form:
J = D + λ_mode · (R_context + R_non_context),    (2)
where R_context represents the rate metric for the block data coded using a context-adaptive coding technique, and R_non_context represents the rate metric for the block data coded using a non-context-adaptive coding technique. In the H.264 standard, for example, the residual data may be coded using context-adaptive coding (e.g., CAVLC or CABAC), while other block data, such as motion vectors and the block mode, may be coded using FLC or universal VLC techniques (e.g., Exp-Golomb). In this case, equation (2) may be rewritten in the following form:
J = D + λ_mode · (R_residual + R_other),    (3)
where R_residual represents the rate metric for coding the residual data using the context-adaptive coding technique, e.g., the amount of bits associated with coding the residual data, and R_other represents the rate metric for coding the other block data using FLC or universal VLC techniques, e.g., the amount of bits associated with coding the other block data.
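A minimal sketch of equation (3); the function and parameter names, and the numeric values in the example call, are assumptions for illustration only.

```python
def estimated_coding_cost(distortion, rate_residual, rate_other, lambda_mode):
    # Equation (3): J = D + lambda_mode * (R_residual + R_other).
    return distortion + lambda_mode * (rate_residual + rate_other)

# Example: SATD-based distortion of 2361, an estimated 40 residual bits,
# 12 bits of other block data, and an illustrative Lagrange multiplier of 20.
print(estimated_coding_cost(2361, 40, 12, 20.0))
```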
When computing the estimated coding cost (J), coding module 30 can determine the amount of bits associated with coding the block data using FLC or universal VLC, i.e., R_other, relatively simply. Coding module 30 may, for example, use a coding table to identify the amount of bits associated with coding that block data using FLC or universal VLC. The coding table may, for example, include a plurality of codewords and the amount of bits associated with coding each of the codewords. Determining the amount of bits associated with coding the residual data (R_residual), however, presents a more difficult task because of the adaptive nature of context-adaptive coding, which varies as a function of the context of the data. To determine the exact amount of bits associated with coding the residual data (or whatever data is context-adaptively coded), coding module 30 would have to transform the residual data, quantize the transformed residual data, and encode the quantized, transformed residual data. In accordance with the techniques of this disclosure, however, bit estimation module 42 can estimate the amount of bits associated with coding the residual data using the context-adaptive coding technique without actually coding the residual data.
In the example illustrated in Fig. 2, bit estimation module 42 uses the transform coefficients of the residual data to estimate the amount of bits associated with coding the residual data. Thus, for each mode to be analyzed, coding module 30 need only compute the transform coefficients of the residual data in order to estimate the amount of bits associated with coding the residual data. Coding module 30 thereby reduces the computational resources and amount of time needed to determine the amount of bits associated with coding the residual data, because it neither quantizes the transform coefficients nor encodes quantized transform coefficients for each of the modes.
Bit estimation module 42 analyzes the transform coefficients output by transform module 40 to identify one or more of the transform coefficients that will remain non-zero after quantization. In particular, bit estimation module 42 compares each of the transform coefficients with a corresponding threshold value. In some aspects, the corresponding threshold values may be computed as a function of the QP of coding module 30. Bit estimation module 42 identifies transform coefficients that are greater than or equal to their corresponding threshold values as transform coefficients that will remain non-zero after quantization.
Bit estimation module 42 estimates the amount of bits associated with coding the residual data based at least on the transform coefficients identified as remaining non-zero after quantization. In particular, bit estimation module 42 determines the number of non-zero transform coefficients that will remain after quantization. Bit estimation module 42 also sums at least a portion of the absolute values of the transform coefficients identified as remaining after quantization. Bit estimation module 42 then estimates the rate metric of the residual data, i.e., the amount of bits associated with coding the residual data, using the following equation:
R_residual = a1·SATD + a2·NZ_est + a3,    (4)
where SATD is the sum of at least a portion of the absolute values of the transform coefficients predicted to remain non-zero after quantization, NZ_est is the estimated number of transform coefficients predicted to remain non-zero after quantization, and a1, a2 and a3 are coefficients. Coefficients a1, a2 and a3 may be computed using least-squares estimation. Although the coefficient sum in the example of equation (4) is a sum of absolute transformed differences (SATD), other difference metrics, such as SSTD, may also be used.
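A minimal sketch of the rate estimate of equation (4), given the two statistics defined above; the a1, a2, a3 defaults shown are placeholders standing in for values that would come from least-squares fitting.

```python
def estimate_residual_bits(satd, nz_est, a1=0.05, a2=2.0, a3=1.0):
    # Equation (4): R_residual = a1*SATD + a2*NZ_est + a3.
    # a1, a2, a3 are illustrative placeholders for least-squares-fitted values.
    return a1 * satd + a2 * nz_est + a3

# With the 4x4 example worked out below (SATD = 2361, NZ_est = 8):
print(estimate_residual_bits(2361, 8))
```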
An exemplary computation of R_residual for a 4x4 block is illustrated below. Similar computations may be performed for blocks of different sizes. Coding module 30 computes a matrix of the transform coefficients of the residual data. An exemplary transform coefficient matrix is illustrated below.
A = | 326  191   12   63 |
    | 675  -18  -85  371 |
    | 108  155  114   45 |
    |  15  421    5  -12 |
The number of rows of the transform coefficient matrix (A) is equal to the number of rows of pixels in the block, and the number of columns of the transform coefficient matrix is equal to the number of columns of pixels in the block. Thus, in the example above, the size of the transform coefficient matrix is 4x4, corresponding to a 4x4 block. Each element A(i, j) of the transform coefficient matrix is the transform of a corresponding residual value.
During quantization, the transform coefficients of matrix A that have smaller values tend to become zero after quantization. As such, coding module 30 compares the matrix A of residual transform coefficients with a threshold matrix to predict which of the transform coefficients of matrix A will remain non-zero after quantization. An exemplary threshold matrix is illustrated below.
C = |  93  150   93  150 |
    | 150  240  150  240 |
    |  93  150   93  150 |
    | 150  240  150  240 |
Matrix C may be computed as a function of the QP value. The size of matrix C is the same as the size of matrix A. In the case of the H.264 standard, for instance, the elements of matrix C may be computed based on the following equation:
C(i, j) = (2^QBITS{QP} − Level_Offset(i, j){QP}) / Level_Scale(i, j){QP},  for all i, j, QP,    (5)
where QBITS{QP} is a parameter that determines the scaling as a function of QP, Level_Offset(i, j){QP} is the dead-zone parameter for the element at row i and column j of the matrix and is also a function of QP, Level_Scale(i, j){QP} is the multiplication factor for the element at row i and column j of the matrix and is also a function of QP, i corresponds to the row of the matrix, j corresponds to the column of the matrix, and QP corresponds to the quantization parameter of coding module 30. In exemplary equation (5), the variables may be defined according to the operative QP in the H.264 coding standard. In other coding standards, other equations may be used to determine which of the coefficients will remain after quantization, and the variables may be defined based on the quantization method employed by the particular standard. In some aspects, coding module 30 may be configured to operate over a range of QP values. In this case, coding module 30 may pre-compute a plurality of comparison matrices, each corresponding to one of the QP values within the range of QP values. Coding module 30 selects the comparison matrix corresponding to the QP of coding module 30 for comparison with the transform coefficient matrix.
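A minimal sketch of equation (5); the Level_Offset and Level_Scale tables are QP- and standard-dependent, so they are passed in here as placeholder arguments rather than reproduced. As the text notes, one such matrix could be precomputed for each QP in the operating range.

```python
def threshold_matrix(qbits, level_offset, level_scale):
    # Equation (5): C(i,j) = (2**QBITS{QP} - Level_Offset(i,j){QP}) / Level_Scale(i,j){QP}.
    # level_offset and level_scale are QP-dependent tables supplied by the caller.
    n = len(level_offset)
    return [[(2 ** qbits - level_offset[i][j]) / level_scale[i][j]
             for j in range(n)]
            for i in range(n)]
```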
The result of the comparison between transform coefficient matrix A and threshold matrix C is a matrix of ones and zeros. In the example above, the comparison forms the matrix of ones and zeros illustrated below:
M(i, j) = (abs(A(i, j)) > C(i, j)), for all i, j, QP:

M = | 1  1  0  0 |
    | 1  0  0  1 |
    | 1  1  1  0 |
    | 0  1  0  0 |
where a one represents the position of a transform coefficient identified as likely to remain (i.e., likely to remain non-zero) after quantization, and a zero represents the position of a transform coefficient likely to become zero after quantization. As described above, a transform coefficient of matrix A is identified as likely to remain non-zero when its absolute value is greater than or equal to the corresponding threshold value of matrix C.
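A minimal sketch of the comparison that produces the ones-and-zeros matrix M, using the example matrices A and C from the text; the function name is an assumption.

```python
A = [[326, 191,  12,  63],
     [675, -18, -85, 371],
     [108, 155, 114,  45],
     [ 15, 421,   5, -12]]

C = [[ 93, 150,  93, 150],
     [150, 240, 150, 240],
     [ 93, 150,  93, 150],
     [150, 240, 150, 240]]

def survivor_mask(coeffs, thresholds):
    # M(i,j) = 1 where the coefficient magnitude meets or exceeds its threshold,
    # i.e. the coefficient is predicted to remain non-zero after quantization.
    return [[1 if abs(a) >= t else 0 for a, t in zip(arow, trow)]
            for arow, trow in zip(coeffs, thresholds)]

M = survivor_mask(A, C)
print(M)  # [[1, 1, 0, 0], [1, 0, 0, 1], [1, 1, 1, 0], [0, 1, 0, 0]]
```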
Using the resulting matrix of ones and zeros, bit estimation module 42 determines the number of transform coefficients that will remain after quantization. In other words, bit estimation module 42 determines the number of transform coefficients identified as remaining non-zero after quantization. Bit estimation module 42 may determine the number of transform coefficients identified as remaining non-zero after quantization according to the following equation:
NZ_est = Σ_{i=0}^{3} Σ_{j=0}^{3} M(i, j),    (6)
where NZ_est is the estimated number of non-zero transform coefficients, and M(i, j) is the value of matrix M at row i and column j. In the example above, NZ_est equals 8.
Bit estimation module 42 also computes a sum of at least a portion of the absolute values of the transform coefficients estimated to remain after quantization. In some aspects, bit estimation module 42 may compute the sum of at least a portion of the absolute values of the transform coefficients according to the following equation:
SATD = Σ_{i=0}^{3} Σ_{j=0}^{3} M(i, j) · abs(A(i, j)),    (7)
where SATD is the sum over all of the transform coefficients identified as remaining non-zero after quantization, M(i, j) is the value of matrix M at row i and column j, A(i, j) is the value of matrix A at row i and column j, and abs(x) is the absolute value function, which computes the absolute value of x. In the example above, SATD equals 2361. Other difference metrics, such as SSTD, may also be used for the transform coefficients.
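Continuing the sketch, equations (6) and (7) reduce to two sums over the mask and the coefficient matrix; with the example A and M above this yields NZ_est = 8 and SATD = 2361, matching the values in the text.

```python
def count_and_sum_survivors(coeffs, mask):
    # Equation (6): NZ_est = sum of M(i,j).
    # Equation (7): SATD  = sum of M(i,j) * abs(A(i,j)).
    nz_est = sum(sum(row) for row in mask)
    satd = sum(m * abs(a)
               for mrow, arow in zip(mask, coeffs)
               for m, a in zip(mrow, arow))
    return nz_est, satd
```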
Using these values, bit estimation module 42 approximates the amount of bits associated with coding the residual coefficients using equation (4) above. Control module 32 may use the estimate of R_residual to compute an estimate of the total coding cost of the mode. Coding module 30 may estimate the total coding costs of one or more other possible modes in the same manner, and then select the mode with the smallest coding cost. Coding module 30 then codes the block of the frame using the selected coding mode.
The techniques described above may be implemented in encoding device 12 individually, or two or more of such techniques, or all of such techniques, may be implemented together. The components within coding module 30 are examples of those suitable for implementing the techniques described herein. Coding module 30 may, however, include many other components, if desired, or may combine the functionality of one or more of the modules described above into fewer components. The components in coding module 30 may be implemented as one or more processors, digital signal processors, application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), discrete logic, software, hardware, firmware, or any combinations thereof. Depiction of different features as modules is intended to highlight different functional aspects of coding module 30 and does not necessarily imply that such modules must be realized by separate hardware and/or software components. Rather, functionality associated with one or more modules may be integrated within common or separate hardware or software components.
Fig. 3 is a block diagram illustrating another exemplary coding module 50. Coding module 50 of Fig. 3 is substantially similar to coding module 30 of Fig. 2, except that bit estimation module 52 of coding module 50 estimates the amount of bits associated with coding the residual data after quantization of the transform coefficients of the residual data. In particular, after quantization of the transform coefficients, bit estimation module 52 estimates the amount of bits associated with coding the residual coefficients using the following equation:
R_residual = a1·SATQD + a2·NZ_TQ + a3,    (8)
where SATQD is the sum of the absolute values of the non-zero quantized transform coefficients, NZ_TQ is the number of non-zero quantized transform coefficients, and a1, a2 and a3 are coefficients. Coefficients a1, a2 and a3 may be computed using least-squares estimation. Although coding module 50 quantizes the transform coefficients before estimating the amount of bits associated with coding the residual data, coding module 50 still estimates the coding costs of the modes without actually coding the data of the block. The computationally intensive operations are therefore still reduced.
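A minimal sketch of the post-quantization variant of equation (8); the a1, a2, a3 defaults are again placeholders for least-squares-fitted values.

```python
def estimate_residual_bits_quantized(qcoeffs, a1=0.6, a2=2.0, a3=1.0):
    # Equation (8): R_residual = a1*SATQD + a2*NZ_TQ + a3, computed from the
    # quantized transform coefficients (a1, a2, a3 are illustrative placeholders).
    nonzero = [abs(q) for row in qcoeffs for q in row if q != 0]
    satqd = sum(nonzero)   # sum of absolute values of non-zero quantized coefficients
    nz_tq = len(nonzero)   # number of non-zero quantized coefficients
    return a1 * satqd + a2 * nz_tq + a3
```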
Fig. 4 is a flow diagram illustrating exemplary operation of a coding module, such as coding module 30 of Fig. 2 and/or coding module 50 of Fig. 3, selecting a coding mode based at least on estimated coding costs. For purposes of example, however, Fig. 4 will be discussed in terms of coding module 30. Coding module 30 selects a mode for which to estimate the coding cost (60). Coding module 30 generates a distortion metric for the current block (62). For example, coding module 30 may compute the distortion metric based on a comparison between the block and at least one reference block. In the case of a block selected to be intra-coded, the reference block may be a neighboring block within the same frame. For a block selected to be inter-coded, on the other hand, the reference block may be a block from a neighboring frame. The distortion metric may be, for example, SAD, SSD, SATD, SSTD or another similar distortion metric.
In the example of Fig. 4, coding module 30 determines the amount of bits associated with coding the portion of the data of the block that is coded using non-context-adaptive coding techniques (64). As described above, this data may include one or more motion vectors for the block, an identifier of the coding mode of the block, one or more reference frame indices, QP information, slice information for the block, and the like. Coding module 30 may, for example, use a coding table to identify the amount of bits associated with coding the data using FLC, universal VLC or another non-context-adaptive coding technique.
Coding module 30 estimates and/or computes the amount of bits associated with coding the portion of the data of the block that is coded using a context-adaptive coding technique (66). In the context of the H.264 standard, for instance, coding module 30 may estimate the amount of bits associated with coding the residual data using context-adaptive coding. Coding module 30 may estimate the amount of bits associated with coding the residual data without actually performing the coding of the residual data. In some aspects, coding module 30 may estimate the amount of bits associated with coding the residual data without quantizing the residual data. For example, coding module 30 may compute the transform coefficients of the residual data and identify the transform coefficients that are likely to remain non-zero after quantization. Using these identified transform coefficients, coding module 30 estimates the amount of bits associated with coding the residual data. In other aspects, coding module 30 may quantize the transform coefficients and estimate the amount of bits associated with coding the residual data based at least on the quantized transform coefficients. In either case, coding module 30 saves time and processing resources by estimating the amount of bits needed. If sufficient computational resources are available, coding module 30 may compute, rather than estimate, the actual amount of bits needed.
Coding module 30 estimates and/or computes the total coding cost of coding the block with the selected mode (68). Coding module 30 may estimate the total cost of coding the block based on the distortion metric, the bits associated with the portion of the data of the block coded using non-context-adaptive coding, and the bits associated with the portion of the data of the block coded using context-adaptive coding. For example, coding module 30 may estimate the total cost of coding the block with the selected mode using equation (2) or (3) above.
Coding module 30 determines whether there are any other coding modes for which to estimate a coding cost (70). As described above, coding module 30 estimates the coding costs of at least a portion of the possible modes. In some aspects, coding module 30 may estimate the cost of coding the block in each of the possible coding modes. In the context of the H.264 standard, for instance, coding module 30 may estimate the coding costs of 22 different coding modes (inter-coding and intra-coding modes) for blocks selected to be inter-coded, and may estimate 13 different coding modes for blocks selected to be intra-coded. In other aspects, coding module 30 may initially use another mode selection technique to reduce the set of possible modes, and then utilize the techniques of this disclosure to estimate the coding costs of the reduced set of coding modes.
When there are more coding modes for which a coding cost is to be estimated, coding module 30 selects the next coding mode and estimates the cost of coding the data in that mode. When there are no more coding modes for which a coding cost is to be estimated, coding module 30 selects one of the modes for coding the block based at least on the estimated coding costs (72). In one example, coding module 30 may select the coding mode having the smallest estimated coding cost. Upon selecting the mode, coding module 30 may code the particular block using the selected mode (74). The process may continue for additional blocks within a given frame. As one example, the process may continue until all of the blocks within the frame have been coded using coding modes selected according to the techniques described herein. Moreover, the process may continue until the blocks of a plurality of frames have been coded using the efficiently selected modes.
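A sketch of the mode-selection loop of Fig. 4 (steps 64 through 74) is shown below. The four callables passed to the function stand in for the operations described above and are assumptions of this sketch, not elements of the disclosure; the cost expression is the same generic Lagrangian form used in the previous sketch.

    def select_mode(block, candidate_modes, lam,
                    compute_residual, distortion_metric,
                    count_header_bits, estimate_residual_bits):
        # Return the candidate coding mode with the smallest estimated cost.
        best_mode, best_cost = None, float("inf")
        for mode in candidate_modes:
            residual = compute_residual(block, mode)          # residual for this mode
            d = distortion_metric(block, mode)                # distortion metric
            r_hdr = count_header_bits(block, mode)            # non-context-adaptive bits (64)
            r_res = estimate_residual_bits(residual)          # estimated residual bits (66)
            cost = d + lam * (r_hdr + r_res)                  # total cost (68), cf. eq. (2)/(3)
            if cost < best_cost:                              # keep the cheapest mode so far
                best_mode, best_cost = mode, cost
        return best_mode                                      # the block is then coded with it (74)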
Fig. 5 is a flow chart illustrating example operation of a coding module, such as coding module 30 of Fig. 2, in estimating the number of bits associated with coding the residual coefficients of a block. After selecting one of the coding modes for which a coding cost is to be estimated, coding module 30 generates the residual data of the block for the selected mode (80). For instance, for a block selected for intra-coding, spatial prediction module 34 generates the residual data based on a comparison of the block with a prediction version of the block. Alternatively, for a block selected for inter-coding, motion estimation module 36 and motion compensation module 38 compute the residual data based on a comparison between the block and a corresponding block of a reference frame. In some aspects, the residual data may already have been computed in order to generate the distortion metric. In that case, coding module 30 may retrieve the residual data from memory.
Transform module 40 transforms the residual coefficients of the block according to a transform function to produce transform coefficients of the residual data (82). Transform module 40 may, for example, apply a 4x4 or 8x8 integer transform or a DCT to the residual data to produce the transform coefficients of the residual data. Bit estimation module 42 compares one of the transform coefficients with a corresponding threshold to determine whether the transform coefficient is greater than or equal to the threshold (84). The threshold corresponding to the transform coefficient may be computed as a function of the QP of coding module 30. If the transform coefficient is greater than or equal to the corresponding threshold, bit estimation module 42 identifies the transform coefficient as a coefficient that will remain non-zero after quantization (86). If the transform coefficient is less than the corresponding threshold, bit estimation module 42 identifies the transform coefficient as a coefficient that will become zero after quantization (88).
Bit estimation module 42 determines whether there are additional transform coefficients of the residual data of the block (90). If there are additional transform coefficients, bit estimation module 42 selects another one of the coefficients and compares it with its corresponding threshold. If there are no additional transform coefficients to analyze, bit estimation module 42 determines the number of coefficients identified as remaining non-zero after quantization (92). Bit estimation module 42 also sums the absolute values of at least a portion of the transform coefficients identified as remaining non-zero after quantization (94). Bit estimation module 42 uses the determined number of non-zero coefficients and the sum of the non-zero coefficients to estimate the number of bits associated with coding the residual data (96). For instance, bit estimation module 42 may use equation (4) above to estimate the number of bits associated with coding the residual data. In this manner, coding module 30 estimates the number of bits associated with coding the residual data of the block in the selected mode without quantizing or encoding the residual data.
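A compact sketch of the Fig. 5 estimation (steps 82 through 96) follows. Equation (4) is not reproduced in this portion of the disclosure, so the last line uses a generic linear model of the form bits approximately equal to a*N + b*S with illustrative constants a and b, and the QP-to-threshold mapping is likewise only an assumed placeholder based on an H.264-style quantization step size.

    import numpy as np

    def dct_matrix(n=4):
        # Orthonormal DCT-II matrix, used here in place of the 4x4 integer transform.
        k = np.arange(n)
        c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
        c[0, :] = np.sqrt(1.0 / n)
        return c

    def qp_to_threshold(qp):
        # Assumed mapping: coefficients below roughly the quantizer dead zone
        # (step size ~ 2**((qp - 4) / 6) in H.264) are expected to quantize to zero.
        return (2.0 / 3.0) * 2.0 ** ((qp - 4) / 6.0)

    def estimate_residual_bits_unquantized(residual, qp, a=2.0, b=0.25):
        # Fig. 5 flow: estimate residual bits without quantizing or entropy coding.
        c = dct_matrix(residual.shape[0])
        coeffs = c @ residual @ c.T                    # step 82: transform the residual
        mask = np.abs(coeffs) >= qp_to_threshold(qp)   # steps 84-88: threshold test
        n_nonzero = int(mask.sum())                    # step 92: count non-zero survivors
        abs_sum = float(np.abs(coeffs[mask]).sum())    # step 94: sum of their magnitudes
        return a * n_nonzero + b * abs_sum             # step 96: linear stand-in for eq. (4)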
Fig. 6 is a flow chart illustrating example operation of a coding module, such as coding module 50 of Fig. 3, in estimating the number of bits associated with coding the residual coefficients of a block. After selecting one of the coding modes for which a coding cost is to be estimated, coding module 50 generates the residual coefficients (100). For instance, for a block selected for intra-coding, spatial prediction module 34 computes the residual data based on a comparison of the block with a prediction version of the block. Alternatively, for a block selected for inter-coding, motion estimation module 36 and motion compensation module 38 compute the residual data based on a comparison between the block and a corresponding block of a reference frame. In some aspects, the residual coefficients may already have been computed in order to generate the distortion metric.
Transform module 40 transforms the residual coefficients of the block according to a transform function to produce transform coefficients of the residual data (102). Transform module 40 may, for example, apply a 4x4 or 8x8 integer transform or a DCT to the residual data to produce the transformed residual coefficients. Quantization module 46 quantizes the transform coefficients according to the QP of coding module 50 (104).
Bit estimation module 52 determines the number of non-zero quantized transform coefficients (106). Bit estimation module 52 also sums the absolute values of the non-zero quantized transform coefficients (108). Bit estimation module 52 uses the computed number of non-zero quantized transform coefficients and the sum of the non-zero quantized transform coefficients to estimate the number of bits associated with coding the residual data (110). For instance, bit estimation module 52 may use equation (4) above to estimate the number of bits associated with coding the residual coefficients. In this manner, the coding module estimates the number of bits associated with coding the residual data of the block in the selected mode without encoding the residual data.
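The Fig. 6 variant differs from the Fig. 5 sketch only in that the transform coefficients are actually quantized before the non-zero coefficients are counted and summed. A sketch under the same assumptions (a uniform scalar quantizer with an H.264-style step size, and a linear bit model with illustrative constants a and b standing in for equation (4)) might be:

    import numpy as np

    def estimate_residual_bits_quantized(coeffs, qp, a=2.0, b=0.25):
        # Fig. 6 flow: quantize the transform coefficients (step 104), then count
        # the non-zero quantized coefficients and sum their magnitudes (106-110).
        #   coeffs -- transform coefficients of the residual data (step 102),
        #             e.g. the output of the DCT helper in the previous sketch
        qstep = 2.0 ** ((qp - 4) / 6.0)           # assumed H.264-style step size
        q = np.rint(np.asarray(coeffs) / qstep)   # simple uniform rounding (the actual
                                                  # quantizer uses a dead-zone offset)
        nonzero = q[q != 0]                       # step 106: non-zero quantized coefficients
        return a * len(nonzero) + b * float(np.abs(nonzero).sum())   # steps 108-110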
Based on the teachings described herein, it should be appreciated that the aspects disclosed herein may be implemented independently of any other aspects, and that two or more of these aspects may be combined in various ways. The techniques described herein may be implemented in hardware, software, firmware, or any combination thereof. If implemented in hardware, the techniques may be realized using digital hardware, analog hardware, or a combination thereof. If implemented in software, the techniques may be realized at least in part by a computer program product comprising a computer-readable medium on which instructions or code are stored. The instructions or code associated with the computer-readable medium of the computer program product may be executed by a computer, for example by one or more processors, such as one or more digital signal processors (DSPs), general purpose microprocessors, ASICs, FPGAs, or other equivalent integrated or discrete logic circuitry.
By way of example and not limitation, the computer-readable medium may comprise RAM such as synchronous dynamic random access memory (SDRAM), read-only memory (ROM), non-volatile random access memory (NVRAM), electrically erasable programmable read-only memory (EEPROM), FLASH memory, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other tangible medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer.
Various aspects and examples have been described herein. However, various modifications may be made to these examples, and the principles presented herein may also be applied to other aspects. These and other aspects are within the scope of the following claims.

Claims (40)

1. A method of coding digital video data, the method comprising:
identifying one or more transform coefficients of residual data of a block of pixels that will remain non-zero upon quantization;
estimating a number of bits associated with coding the residual data based at least on the identified transform coefficients; and
estimating a coding cost for coding the block of pixels based at least on the estimated number of bits associated with coding the residual data.
2. The method of claim 1, wherein identifying the transform coefficients comprises comparing each of the transform coefficients with a corresponding one of a plurality of thresholds to identify the transform coefficients that will remain non-zero upon quantization, wherein each of the plurality of thresholds is computed as a function of a quantization parameter (QP).
3. The method of claim 2, wherein comparing each of the transform coefficients with the corresponding one of the plurality of thresholds to identify the transform coefficients that will remain non-zero upon quantization comprises identifying the transform coefficients that are greater than or equal to their corresponding thresholds as transform coefficients that will remain non-zero upon quantization.
4. The method of claim 2, further comprising:
precomputing a plurality of sets of thresholds, wherein each of the sets of thresholds corresponds to a different value of the QP; and
selecting one of the plurality of sets of thresholds based on the value of the QP used to encode the block of pixels.
5. The method of claim 1, wherein estimating the number of bits associated with coding the residual data comprises:
determining a number of the transform coefficients identified as remaining non-zero upon quantization;
summing absolute values of at least one of the transform coefficients identified as remaining non-zero upon quantization; and
estimating the number of bits associated with coding the residual data based at least on the determined number of non-zero transform coefficients and the sum of the absolute values of the at least one non-zero transform coefficient.
6. The method of claim 1, wherein estimating the number of bits associated with coding the residual data comprises estimating the number of bits required to code the residual data in each of at least two block coding modes, and estimating the coding cost comprises estimating the coding cost in each of the at least two block coding modes based at least on the estimated number of bits in the corresponding one of the block coding modes, the method further comprising selecting one of the block coding modes based at least on the estimated coding costs of the modes.
7. The method of claim 6, further comprising:
for each of the modes, estimating a total coding cost for coding the block of pixels based at least on the estimated number of bits associated with coding the residual data;
selecting the one of the plurality of modes having the smallest estimated total coding cost; and
coding the block of pixels using the selected mode.
8. The method of claim 7, wherein estimating the total coding cost comprises:
computing a distortion metric for the block of pixels;
computing a number of bits associated with coding non-residual data of the block of pixels; and
estimating the total coding cost for coding the block of pixels based at least on the distortion metric, the number of bits associated with coding the non-residual data, and the number of bits associated with coding the residual data.
9. The method of claim 1, further comprising:
selecting a coding mode based at least on the estimated number of bits associated with coding the residual data;
quantizing the transform coefficients of the residual data after selecting the coding mode;
encoding the quantized transform coefficients of the residual data; and
transmitting the encoded coefficients of the residual data.
10. The method of claim 1, further comprising:
generating a matrix of the transform coefficients, wherein the number of rows of the transform coefficient matrix equals the number of rows of pixels in the block and the number of columns of the transform coefficient matrix equals the number of columns of pixels in the block;
comparing the transform coefficient matrix with a threshold matrix, wherein the threshold matrix has the same size as the transform coefficient matrix, and wherein the comparison produces a matrix of ones and zeros, the zeros representing positions in the transform coefficient matrix that will become zero upon quantization and the ones representing positions in the transform coefficient matrix that will remain non-zero upon quantization;
summing the number of ones in the matrix of ones and zeros to compute the number of the transform coefficients identified as remaining non-zero upon quantization;
summing absolute values of at least one of the transform coefficients in the transform coefficient matrix corresponding to positions of ones in the matrix of ones and zeros; and
estimating the number of bits associated with coding the residual data based at least on the number of non-zero transform coefficients and the sum of the absolute values of the at least one non-zero transform coefficient.
11. An apparatus for coding digital video data, the apparatus comprising:
a transform module that generates transform coefficients for residual data of a block of pixels;
a bit estimation module that identifies one or more of the transform coefficients that will remain non-zero upon quantization and estimates, based at least on the identified transform coefficients, a number of bits associated with coding the residual data; and
a control module that estimates a coding cost for coding the block of pixels based at least on the estimated number of bits associated with coding the residual data.
12. The apparatus of claim 11, wherein the bit estimation module compares each of the transform coefficients with a corresponding one of a plurality of thresholds to identify the transform coefficients that will remain non-zero upon quantization, wherein each of the plurality of thresholds is computed as a function of a quantization parameter (QP).
13. The apparatus of claim 12, wherein the bit estimation module identifies the transform coefficients that are greater than or equal to their corresponding thresholds as transform coefficients that will remain non-zero upon quantization.
14. The apparatus of claim 12, wherein the bit estimation module precomputes a plurality of sets of thresholds, wherein each of the sets of thresholds corresponds to a different value of the QP, and selects one of the plurality of sets of thresholds based on the value of the QP used to encode the block of pixels.
15. The apparatus of claim 11, wherein the bit estimation module determines a number of the transform coefficients identified as remaining non-zero upon quantization, sums absolute values of at least one of the transform coefficients identified as remaining non-zero upon quantization, and estimates the number of bits associated with coding the residual data based at least on the determined number of non-zero transform coefficients and the sum of the absolute values of the at least one non-zero transform coefficient.
16. The apparatus of claim 11, wherein:
the bit estimation module estimates the number of bits associated with coding the residual data in each of at least two block coding modes, and
the control module estimates a coding cost in each of the block coding modes based at least on the estimated number of bits in the corresponding one of the at least two block coding modes, and selects one of the block coding modes based at least on the estimated coding costs of the modes.
17. The apparatus of claim 16, wherein the control module, for each of the modes, estimates a total coding cost for coding the block of pixels based at least on the estimated number of bits associated with coding the residual data, selects the one of the plurality of modes having the smallest estimated total coding cost, and codes the block of pixels using the selected mode.
18. The apparatus of claim 17, wherein the control module computes a distortion metric for the block of pixels, computes a number of bits associated with coding non-residual data of the block of pixels, and estimates the total coding cost for coding the block of pixels based at least on the distortion metric, the number of bits associated with coding the non-residual data, and the number of bits associated with coding the residual data.
19. The apparatus of claim 11, further comprising:
a control module that selects a coding mode based at least on the estimated number of bits associated with coding the residual data;
a quantization module that quantizes the transform coefficients of the residual data after the coding mode is selected;
an entropy encoding module that encodes the quantized transform coefficients of the residual data; and
a transmitter that transmits the encoded coefficients of the residual data.
20. The apparatus of claim 11, wherein:
the transform module generates a matrix of the transform coefficients, wherein the number of rows of the transform coefficient matrix equals the number of rows of pixels in the block and the number of columns of the transform coefficient matrix equals the number of columns of pixels in the block, and
the bit estimation module compares the transform coefficient matrix with a threshold matrix, wherein the threshold matrix has the same size as the transform coefficient matrix, and wherein the comparison produces a matrix of ones and zeros, the zeros representing positions in the transform coefficient matrix that will become zero upon quantization and the ones representing positions in the transform coefficient matrix that will remain non-zero upon quantization,
and further wherein the bit estimation module sums the number of ones in the matrix of ones and zeros to compute the number of the transform coefficients identified as remaining non-zero upon quantization, sums absolute values of at least one of the transform coefficients in the transform coefficient matrix corresponding to positions of ones in the matrix of ones and zeros, and estimates the number of bits associated with coding the residual data based at least on the number of non-zero transform coefficients and the sum of the absolute values of the at least one non-zero transform coefficient.
21. An apparatus for coding digital video data, the apparatus comprising:
means for identifying one or more transform coefficients of residual data of a block of pixels that will remain non-zero upon quantization;
means for estimating a number of bits associated with coding the residual data based at least on the identified transform coefficients; and
means for estimating a coding cost for coding the block of pixels based at least on the estimated number of bits associated with coding the residual data.
22. The apparatus of claim 21, wherein the means for identifying compares each of the transform coefficients with a corresponding one of a plurality of thresholds to identify the transform coefficients that will remain non-zero upon quantization, wherein each of the plurality of thresholds is computed as a function of a quantization parameter (QP).
23. The apparatus of claim 22, wherein the means for identifying identifies the transform coefficients that are greater than or equal to their corresponding thresholds as transform coefficients that will remain non-zero upon quantization.
24. The apparatus of claim 22, further comprising:
means for precomputing a plurality of sets of thresholds, wherein each of the sets of thresholds corresponds to a different value of the QP; and
means for selecting one of the plurality of sets of thresholds based on the value of the QP used to encode the block of pixels.
25. The apparatus of claim 21, wherein the means for estimating determines a number of the transform coefficients identified as remaining non-zero upon quantization, sums absolute values of at least one of the transform coefficients identified as remaining non-zero upon quantization, and estimates the number of bits associated with coding the residual data based at least on the determined number of non-zero transform coefficients and the sum of the absolute values of the at least one non-zero transform coefficient.
26. The apparatus of claim 21, wherein the means for estimating the number of bits estimates the number of bits associated with coding the residual data in each of at least two block coding modes, and the means for estimating the coding cost estimates the coding cost in each of the block coding modes based at least on the estimated number of bits in the corresponding one of the at least two block coding modes, the apparatus further comprising means for selecting one of the block coding modes based at least on the estimated number of bits of each of the modes.
27. The apparatus of claim 26, further comprising means for estimating, for each of the modes, a total coding cost for coding the block of pixels based at least on the estimated number of bits associated with coding the residual data, wherein the means for selecting selects the one of the plurality of modes having the smallest estimated total coding cost.
28. The apparatus of claim 27, wherein the means for estimating the coding cost computes a distortion metric for the block of pixels, computes a number of bits associated with coding non-residual data of the block of pixels, and estimates the total coding cost for coding the block of pixels based at least on the distortion metric, the number of bits associated with coding the non-residual data, and the number of bits associated with coding the residual data.
29. The apparatus of claim 21, further comprising:
means for selecting a coding mode based at least on the estimated number of bits associated with coding the residual data;
means for quantizing the transform coefficients of the residual data after the coding mode is selected;
means for encoding the quantized transform coefficients of the residual data; and
means for transmitting the encoded coefficients of the residual data.
30. The apparatus of claim 21, further comprising means for generating a matrix of the transform coefficients, wherein the number of rows of the transform coefficient matrix equals the number of rows of pixels in the block and the number of columns of the transform coefficient matrix equals the number of columns of pixels in the block, and wherein:
the means for identifying compares the transform coefficient matrix with a threshold matrix, wherein the threshold matrix has the same size as the transform coefficient matrix, and wherein the comparison produces a matrix of ones and zeros, the zeros representing positions in the transform coefficient matrix that will become zero upon quantization and the ones representing positions in the transform coefficient matrix that will remain non-zero upon quantization; and
the means for estimating sums the number of ones in the matrix of ones and zeros to compute the number of the transform coefficients identified as remaining non-zero upon quantization, sums absolute values of at least one of the transform coefficients in the transform coefficient matrix corresponding to positions of ones in the matrix of ones and zeros, and estimates the number of bits associated with coding the residual data based at least on the number of non-zero transform coefficients and the sum of the absolute values of the at least one non-zero transform coefficient.
31. A computer program product for coding digital video data, the computer program product comprising a computer-readable medium having instructions thereon, the instructions comprising:
code for identifying one or more transform coefficients of residual data of a block of pixels that will remain non-zero upon quantization;
code for estimating a number of bits associated with coding the residual data based at least on the identified transform coefficients; and
code for estimating a coding cost for coding the block of pixels based at least on the estimated number of bits associated with coding the residual data.
32. The computer program product of claim 31, wherein the code for identifying the transform coefficients comprises code for comparing each of the transform coefficients with a corresponding one of a plurality of thresholds to identify the transform coefficients that will remain non-zero upon quantization, wherein each of the plurality of thresholds is computed as a function of a quantization parameter (QP).
33. The computer program product of claim 32, wherein the code for comparing each of the transform coefficients with the corresponding one of the plurality of thresholds to identify the transform coefficients that will remain non-zero upon quantization comprises code for identifying the transform coefficients that are greater than or equal to their corresponding thresholds as transform coefficients that will remain non-zero upon quantization.
34. The computer program product of claim 32, further comprising:
code for precomputing a plurality of sets of thresholds, wherein each of the sets of thresholds corresponds to a different value of the QP; and
code for selecting one of the plurality of sets of thresholds based on the value of the QP used to encode the block of pixels.
35. The computer program product of claim 31, wherein the code for estimating the number of bits associated with coding the residual data comprises:
code for determining a number of the transform coefficients identified as remaining non-zero upon quantization;
code for summing absolute values of at least one of the transform coefficients identified as remaining non-zero upon quantization; and
code for estimating the number of bits associated with coding the residual data based at least on the determined number of non-zero transform coefficients and the sum of the absolute values of the at least one non-zero transform coefficient.
36. The computer program product of claim 31, wherein the code for estimating the number of bits associated with coding the residual data comprises code for estimating the number of bits associated with coding the residual data in each of at least two block coding modes, and the code for estimating the coding cost comprises code for estimating the coding cost in each of the block coding modes based at least on the estimated number of bits in the corresponding one of the at least two block coding modes, the computer program product further comprising code for selecting one of the block coding modes based at least on the estimated number of bits of each of the modes.
37. The computer program product of claim 36, further comprising:
code for estimating, for each of the modes, a total coding cost for coding the block of pixels based at least on the estimated number of bits associated with coding the residual data;
code for selecting the one of the plurality of modes having the smallest estimated total coding cost; and
code for coding the block of pixels using the selected mode.
38. The computer program product of claim 37, wherein the code for estimating the total coding cost comprises:
code for computing a distortion metric for the block of pixels;
code for computing a number of bits associated with coding non-residual data of the block of pixels; and
code for estimating the total coding cost for coding the block of pixels based at least on the distortion metric, the number of bits associated with coding the non-residual data, and the number of bits associated with coding the residual data.
39. The computer program product of claim 31, further comprising:
code for selecting a coding mode based at least on the estimated number of bits associated with coding the residual data;
code for quantizing the transform coefficients of the residual data after the coding mode is selected;
code for encoding the quantized transform coefficients of the residual data; and
code for transmitting the encoded coefficients of the residual data.
40. The computer program product of claim 31, further comprising:
code for generating a matrix of the transform coefficients, wherein the number of rows of the transform coefficient matrix equals the number of rows of pixels in the block and the number of columns of the transform coefficient matrix equals the number of columns of pixels in the block;
code for comparing the transform coefficient matrix with a threshold matrix, wherein the threshold matrix has the same size as the transform coefficient matrix, and wherein the comparison produces a matrix of ones and zeros, the zeros representing positions in the transform coefficient matrix that will become zero upon quantization and the ones representing positions in the transform coefficient matrix that will remain non-zero upon quantization;
code for summing the number of ones in the matrix of ones and zeros to compute the number of the transform coefficients identified as remaining non-zero upon quantization;
code for summing absolute values of at least one of the transform coefficients in the transform coefficient matrix corresponding to positions of ones in the matrix of ones and zeros; and
code for estimating the number of bits associated with coding the residual data based at least on the number of non-zero transform coefficients and the sum of the absolute values of the at least one non-zero transform coefficient.
CN2007800528186A 2007-05-04 2007-05-04 Video coding mode selection using estimated coding costs Expired - Fee Related CN101663895B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/US2007/068307 WO2008136828A1 (en) 2007-05-04 2007-05-04 Video coding mode selection using estimated coding costs

Publications (2)

Publication Number Publication Date
CN101663895A true CN101663895A (en) 2010-03-03
CN101663895B CN101663895B (en) 2013-05-01

Family

ID=39145223

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2007800528186A Expired - Fee Related CN101663895B (en) 2007-05-04 2007-05-04 Video coding mode selection using estimated coding costs

Country Status (5)

Country Link
EP (1) EP2156672A1 (en)
JP (1) JP2010526515A (en)
KR (2) KR101166732B1 (en)
CN (1) CN101663895B (en)
WO (1) WO2008136828A1 (en)


Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8891615B2 (en) 2008-01-08 2014-11-18 Qualcomm Incorporated Quantization based on rate-distortion modeling for CABAC coders
US9008171B2 (en) 2008-01-08 2015-04-14 Qualcomm Incorporated Two pass quantization for CABAC coders
RS62714B1 (en) * 2011-06-16 2022-01-31 Ge Video Compression Llc Entropy coding of motion vector differences
KR102126855B1 (en) * 2013-02-15 2020-06-26 한국전자통신연구원 Method and apparatus for coding mode decision
KR102229386B1 (en) * 2014-12-26 2021-03-22 한국전자통신연구원 Apparatus and methdo for encoding video
WO2020153506A1 (en) * 2019-01-21 2020-07-30 엘지전자 주식회사 Method and apparatus for processing video signal
WO2023067822A1 (en) * 2021-10-22 2023-04-27 日本電気株式会社 Video encoding device, video decoding device, video encoding method, video decoding method, and video system

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0646268A (en) * 1992-07-24 1994-02-18 Chinon Ind Inc Code quantity controller
FR2753330B1 (en) * 1996-09-06 1998-11-27 Thomson Multimedia Sa QUANTIFICATION METHOD FOR VIDEO CODING
NO318318B1 (en) * 2003-06-27 2005-02-28 Tandberg Telecom As Procedures for improved video encoding
JP2006140758A (en) * 2004-11-12 2006-06-01 Toshiba Corp Method, apparatus and program for encoding moving image
JP4146444B2 (en) * 2005-03-16 2008-09-10 株式会社東芝 Video encoding method and apparatus
CN100348051C (en) * 2005-03-31 2007-11-07 华中科技大学 An enhanced in-frame predictive mode coding method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Q. CHEN et al.: "A fast bits estimation method for rate distortion optimization in H.264/AVC", Proceedings of the Picture Coding Symposium *
Q. WANG et al.: "Low complexity RDO mode decision based on a fast coding-bits estimation model for H.264/AVC", Circuits and Systems, 2005, ISCAS 2005 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108632620A (en) * 2011-03-08 2018-10-09 维洛媒体国际有限公司 The decoding of transformation coefficient for video coding
CN108632620B (en) * 2011-03-08 2022-04-01 高通股份有限公司 Coding of transform coefficients for video coding
CN104054347A (en) * 2012-01-18 2014-09-17 高通股份有限公司 Indication of use of wavefront parallel processing in video coding

Also Published As

Publication number Publication date
KR20100005240A (en) 2010-01-14
EP2156672A1 (en) 2010-02-24
JP2010526515A (en) 2010-07-29
WO2008136828A1 (en) 2008-11-13
KR101166732B1 (en) 2012-07-19
CN101663895B (en) 2013-05-01
KR20120031529A (en) 2012-04-03

Similar Documents

Publication Publication Date Title
CN101663895B (en) Video coding mode selection using estimated coding costs
CN101946515B (en) Two pass quantization for cabac coders
CN101911702B (en) Method and device for quantization of video module coefficient for CABAC supported video coding process
CN100581232C (en) Method for coding motion in video sequence
RU2533196C2 (en) Video coding with large macroblocks
KR101387255B1 (en) Adaptive motion resolution for video coding
CN101406056B (en) Method of reducing computations in intra-prediction and mode decision processes in a digital video encoder
CN102172025B (en) Video coding with large macroblocks
CN106131576B (en) Use the video encoding/decoding method, encoding device and decoding device of quad-tree structure
CN101743751B (en) Adaptive transformation of residual blocks depending on the intra prediction mode
CN101232618B (en) Method and device for indicating quantizer parameters in a video coding system
CN1331353C (en) Method for sub-pixel valve interpolation
CN103190147B (en) For combined decoding method and the equipment of the syntactic element of video coding
US8150172B2 (en) Video coding mode selection using estimated coding costs
CN103238322A (en) Separately coding the position of a last significant coefficient of a video block in video coding
CN103202016A (en) Adaptive motion vector resolution signaling for video coding
CN101999230A (en) Offsets at sub-pixel resolution
CN101267563A (en) Adaptive variable length coding
CN101854545A (en) The method of intra-prediction and the equipment that are used for video encoder
CN103181170A (en) Adaptive scanning of transform coefficients for video coding
CN102204251A (en) Video coding using transforms bigger than 4x4 and 8x8
CN114501010B (en) Image encoding method, image decoding method and related devices
CN103238323A (en) Coding the position of a last significant coefficient within a video block based on a scanning order for the block in video coding
CN104041045A (en) Secondary boundary filtering for video coding
CN105027160A (en) Spatially adaptive video coding

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20130501

Termination date: 20150504

EXPY Termination of patent right or utility model