CN102611888A - Encoding method for screen content - Google Patents

Encoding method for screen content

Info

Publication number
CN102611888A
CN102611888A · CN102611888B
Authority
CN
China
Prior art keywords
upbound
index
lowbound
index table
base color
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2011103696694A
Other languages
Chinese (zh)
Other versions
CN102611888B (en)
Inventor
丁文鹏
尹宝才
朱维佳
施云慧
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing University of Technology
Original Assignee
Beijing University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing University of Technology filed Critical Beijing University of Technology
Priority to CN201110369669.4A priority Critical patent/CN102611888B/en
Publication of CN102611888A publication Critical patent/CN102611888A/en
Application granted granted Critical
Publication of CN102611888B publication Critical patent/CN102611888B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

The invention discloses an encoding method for screen content that achieves higher coding efficiency. In the method, an image or image block of screen content is represented by several base colors and an index map, and the index map is decomposed into a match table and a non-matched index table. Each element of the match table indicates whether the corresponding element of the index map equals its predicted index; if not, the index value is updated and stored in the non-matched index table. The predicted index is generated in one of two ways: exploiting the strong temporal correlation, the index map of the image block found by inter-frame motion estimation is used as the predicted index map; or, exploiting spatial correlation, a pixel-level predicted index is derived from the texture direction of the pixels surrounding the current pixel.

Description

Encoding method for screen content
Technical field
The invention belongs to the technical field of video coding, and in particular relates to an encoding method for screen content.
Background technology
In the prior art, methods for coding compound (mixed) images, i.e., screen content, fall roughly into the following three categories.
1. Image-coding-based methods:
These methods still encode the compound image with a conventional coding framework, but apply different quantization step sizes to different regions. Blocks containing text and graphics look very blurry after reconstruction, so the quantization step is reduced when quantizing those blocks, allocating them extra bits to make their reconstruction quality as good as possible. For a fixed bit budget, however, every extra bit allocated to text and graphics blocks is a bit taken away from the natural-image blocks, so overall these methods still do not achieve good results.
2. Layer-based (hierarchical) coding methods:
These methods compress with a mixed raster content model: an image or image block is split into a foreground layer, a background layer, and a binary mask plane. The binary mask indicates, for each pixel of the image or image block, whether it belongs to the foreground or the background, and can be compressed with the JBIG (Joint Bi-level Image Experts Group) standard or the JBIG2 (second-generation JBIG) standard. The foreground and background layers, after a data-filling algorithm smooths over the holes, can be compressed with a traditional image-coding framework. The chapter "Compressing compound documents" (The Document and Image Compression Handbook, M. Barni, Ed., New York: Marcel-Dekker, 2005) proposed this method and showed experimentally that it yields a large gain over traditional coding frameworks. It has several drawbacks, however. First, the gain depends heavily on the segmentation algorithm, i.e., on how the image is divided into foreground and background. Second, if the holes left by segmentation are not processed at all, coding quality suffers. Third, coding the text color (in the background layer) and the text shape (in the binary mask) separately also reduces efficiency. This method therefore still cannot meet the needs of applications.
3. Block-based coding methods:
These fall into two types.
The first exploits spatial-domain characteristics such as histogram statistics, gradients and the number of colors to classify the current coding block as one of several types (text, graphics, natural image, etc.), and then compresses each class with a coding framework suited to its statistical properties.
The second observes that the histograms of text and graphics in screen content are sparse, and decomposes such blocks into base colors and an index map; this is the Base Colors and Index Map (BCIM) mode shown in Fig. 1, where "1", "0", "253" and "255" denote four colors. The paper "Enable efficient compound image compression in H.264/AVC intra coding" (Proc. Int. Conf. Image Processing, Oct. 2007, vol. 2, pp. 337-340) first applied this method within the H.264 standard and obtained good gains, but its coding efficiency still leaves room for improvement.
Summary of the invention
The technical problem solved by the invention is to overcome the deficiencies of the prior art by providing an encoding method for screen content with higher coding efficiency.
The technical solution of the invention is as follows. In this screen-content encoding method, an image or image block of screen content is represented in the pixel domain by several base colors and an index map. The index map is decomposed into a match table and a non-matched index table: each element of the match table indicates whether the corresponding element of the index map equals its predicted index; if not, the index value is updated and stored in the non-matched index table. The predicted index is generated in one of two ways: exploiting strong temporal correlation, the index map of the image block found by inter-frame motion estimation is used as the predicted index map; or, exploiting spatial correlation, a pixel-level predicted index is derived from the texture direction of the pixels surrounding the current pixel.
Why coding efficiency is higher: screen content has a sparse histogram distribution, so representing a screen-content image block with several base colors and an index map is very effective. We further decompose the index map into a match table and a non-matched index table. The match table is Boolean: each element takes only the values 0 and 1, indicating whether the corresponding index in the index map equals the predicted index. When prediction is accurate, the bitstream produced by coding the match table is very unevenly distributed, with a large disparity between the proportions of 0s and 1s; coding such a stream with CABAC (Context-based Adaptive Binary Arithmetic Coding) therefore reduces the bit rate substantially. Coding the non-matched index table is a low-probability event: it occurs only when an index of the index map differs from its predicted index. In that case one index value (the prediction) has been ruled out, so to keep the values contiguous part of the range can be shifted forward, which makes the value 0 relatively more probable; this too suits CABAC and lowers the bit rate. Finally, the two methods of generating the predicted index exploit temporal and spatial correlation respectively, so prediction accuracy is assured, and the proposed method can therefore significantly improve the coding efficiency of screen content.
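The decomposition described above can be sketched in a few lines of Python (an illustrative reading of the text, not code from the patent; the function name and the exact update rule for non-matched indices are our assumptions):

```python
def decompose_index_map(indices, predicted):
    """Split an index map into a binary match table and the updated
    non-matched indices.  When an index differs from its prediction,
    the predicted value is known to be impossible, so indices above it
    are shifted down by one to keep the range contiguous -- one
    plausible reading of the "shift part of the range forward" step
    above, which biases the stored values toward 0."""
    match_table, non_matched = [], []
    for idx, pred in zip(indices, predicted):
        if idx == pred:
            match_table.append(1)
        else:
            match_table.append(0)
            non_matched.append(idx - 1 if idx > pred else idx)
    return match_table, non_matched
```

With an accurate predictor the match table is almost all 1s, which is exactly the kind of skewed binary stream CABAC compresses well.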
Description of drawings
Fig. 1 shows the existing BCIM mode;
Fig. 2 shows the flow chart of the screen-content encoding method according to the invention;
Fig. 3 illustrates the decomposition of the index map;
Fig. 4 shows how the predicted index is generated from temporal correlation;
Fig. 5 shows how the predicted index is generated from spatial correlation;
Fig. 6 shows a current pixel and the pixels at its left, upper-left, upper and upper-right positions.
Embodiment
In this screen-content encoding method, an image or image block of screen content is represented in the pixel domain by several base colors and an index map. The index map is decomposed into a match table and a non-matched index table; each element of the match table indicates whether the corresponding element of the index map equals its predicted index, and if not, the index value is updated and stored in the non-matched index table (as shown in Fig. 3). The predicted index is generated in one of two ways: exploiting strong temporal correlation, the index map of the image block found by inter-frame motion estimation is used as the predicted index map; or, exploiting spatial correlation, a pixel-level predicted index is derived from the texture direction of the pixels surrounding the current pixel.
Preferably, as shown in Fig. 2, the method comprises the following steps:
(1) divide each coding unit (CU) into prediction units (PU) in quadtree fashion;
(2) divide each prediction unit (PU) into transform units (TU) in quadtree fashion;
(3) the current TU has three components, Y, U and V; perform steps (4)-(12) for each component:
(4) cluster the pixels in the TU by dynamic programming to obtain N representative values, N being a positive integer; these N values are the N base colors, sorted in increasing order. Then map each pixel of the current TU to the nearest of the N base colors by Euclidean distance, record an index indicating which base color it maps to, and compute the distortion of each TU;
(5) generate the predicted indices;
(6) generate the match table according to whether each index equals its corresponding predicted index;
(7) for the indices that differ from their predicted indices, update the index values and store them in the non-matched index table;
(8) compute the bit rate produced by coding the base colors, the match table and the non-matched index table;
(9) compute the rate-distortion cost from the distortion of step (4) and the bit rate of step (8);
(10) traverse the values of N from 1 to 4, repeating steps (2)-(9) for each fixed N, so that the rate-distortion cost computed in step (9) is minimized;
(11) send the base-color information and the match table and non-matched index table of the minimum-cost choice to the decoder;
(12) after receiving this information, the decoder reconstructs the current TU from the base colors and the index map;
(13) reconstruct the PU and CU.
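Step (4)'s mapping of pixels to base colors and its distortion computation can be sketched as follows (illustrative Python for one scalar component; the function name is ours, and the patent supplies no pseudocode):

```python
def map_to_base_colors(pixels, base_colors):
    """Map each pixel of a TU component to its nearest base color
    (for a scalar sample, Euclidean distance reduces to absolute
    difference) and accumulate squared-error distortion, as in
    step (4)."""
    index_map, distortion = [], 0
    for p in pixels:
        i = min(range(len(base_colors)), key=lambda k: abs(p - base_colors[k]))
        index_map.append(i)
        distortion += (p - base_colors[i]) ** 2
    return index_map, distortion
```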
Preferably, the base colors are obtained by the following steps:
(a) Arrange the pixels in the current image in order; count and record the pixel values in each block and their frequencies of occurrence; line up all distinct pixel values in increasing order to form a sequence X, and their frequencies of occurrence to form a sequence Y, the elements of X and Y corresponding one to one;
(b) Suppose the pixel values in the block are to be divided into N classes, i.e., N base colors are to be obtained, N being a positive integer:
β(LowBound, UpBound, N) represents the distortion produced by assigning the elements of the interval [LowBound, UpBound] to N classes:

β(LowBound, UpBound, N) = α(LowBound, p-1) + β(p, UpBound, N-1),  N ≥ 2
β(LowBound, UpBound, 1) = α(LowBound, UpBound)

If N ≥ 2, the point p is found by

p = argmin over LowBound < p < UpBound - N + 1 of [ α(LowBound, p-1) + β(p, UpBound, N-1) ]

α(LowBound, p-1) = Σ_{i=LowBound}^{p-1} (X_i - avg(LowBound, p-1))² · Y_i

avg(LowBound, p-1) = ( Σ_{i=LowBound}^{p-1} X_i · Y_i ) / ( Σ_{i=LowBound}^{p-1} Y_i );
(c) For example, with N = 4 the recursion unrolls as
β(LowBound, UpBound, 4) = α(LowBound, p1-1) + β(p1, UpBound, 3)
β(p1, UpBound, 3) = α(p1, p2-1) + β(p2, UpBound, 2)
β(p2, UpBound, 2) = α(p2, p3-1) + β(p3, UpBound, 1)
β(p3, UpBound, 1) = α(p3, UpBound)
where pi (i = 1, 2, 3) is the optimal point found by the linear search at each level; the intervals [p3, UpBound], [p2, p3-1], [p1, p2-1] and [LowBound, p1-1] then correspond to the base colors sought, with LowBound < p1 < p2 < p3 < UpBound.
The quantization in step (2) uses the formula C(S) = sign(S) × max(0, floor(|S|/Q + 1 - Z)), where S is the input signal, floor is the floor function, Q is the quantization step, and Z is a preset adjustment parameter.
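The β recurrence above is a one-dimensional dynamic program over the sorted values X weighted by their frequencies Y. A minimal memoized sketch (our 0-based indexing and helper names; the text works with interval bounds directly):

```python
from functools import lru_cache

def dp_cluster(X, Y, N):
    """Cluster the sorted value/frequency pairs (X, Y) into N classes,
    following the beta recurrence: choose a split point p, pay the
    within-class distortion alpha for the left class, and recurse on
    the remaining interval.  Returns (total distortion, split points)."""
    def alpha(lo, hi):  # within-class weighted squared error, inclusive
        w = sum(Y[lo:hi + 1])
        if w == 0:
            return 0.0
        avg = sum(X[i] * Y[i] for i in range(lo, hi + 1)) / w
        return sum((X[i] - avg) ** 2 * Y[i] for i in range(lo, hi + 1))

    @lru_cache(maxsize=None)
    def beta(lo, hi, n):
        if n == 1:
            return alpha(lo, hi), ()
        best = (float("inf"), ())
        for p in range(lo + 1, hi - n + 3):  # leave room for n-1 classes
            rest, cuts = beta(p, hi, n - 1)
            cost = alpha(lo, p - 1) + rest
            if cost < best[0]:
                best = (cost, (p,) + cuts)
        return best

    return beta(0, len(X) - 1, N)
```

The class means of the resulting intervals give the base colors; the encoder runs this for N = 1..4 and keeps the N with the lowest rate-distortion cost.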
The two methods of generating the predicted index are detailed below.
1. Generation from temporal correlation
As shown in Fig. 4, the matching image block found by motion estimation is quantized with respect to the base colors of the current block, yielding an index map that serves as the prediction of the current block's index map.
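A sketch of this temporal predictor (illustrative; the function name is ours): the motion-matched block is simply re-quantized against the current block's base colors.

```python
def temporal_prediction(matched_block, base_colors):
    """Quantize the motion-estimated matching block with respect to
    the CURRENT block's base colors; the resulting index map is the
    prediction of the current block's index map (Fig. 4)."""
    def nearest(p):
        return min(range(len(base_colors)), key=lambda k: abs(p - base_colors[k]))
    return [nearest(p) for p in matched_block]
```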
2. Generation from spatial correlation
Using spatial correlation, the index of each pixel in the block is predicted as follows. Directionality tests are first performed toward the four surrounding pixels, measured with the 2-norm. The smaller the 2-norm, the smoother the texture and the stronger the continuity in that direction; so after testing the four surrounding pixels, the index of the pixel in the direction with the smallest 2-norm is taken as the prediction of the current index. Fig. 5 illustrates this process.
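One plausible reading of this directional test, in Python (the patent does not define exactly which differences enter the 2-norm; here we take, for each causal direction, the difference between the adjacent pixel and the next pixel out along the same direction, so for a single component the 2-norm reduces to an absolute difference):

```python
def spatial_prediction(neighbors, ring2, indices):
    """Predict the current pixel's index from the smoothest of the
    four causal directions (left, up-left, up, up-right).
    `neighbors` maps each direction to the adjacent pixel value,
    `ring2` to the next pixel out along that direction, and `indices`
    to the adjacent pixel's index."""
    def cost(d):  # 2-norm of the value change along direction d
        diff = neighbors[d] - ring2[d]
        return (diff * diff) ** 0.5
    return indices[min(neighbors, key=cost)]
```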
When coding the match table, the context is designed accordingly. Sixteen contexts are designed for it in total: the match flag of each surrounding neighbouring pixel has two possible values, matched or not matched, denoted 1 and 0 respectively. Four neighbouring pixels are used, namely the left, top, top-left and top-right positions, so the number of contexts is 2^4 = 16.
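The 16-context scheme can be written directly: the four neighbouring match flags form a 4-bit context index (a sketch; the bit order is our choice, not specified by the patent):

```python
def match_context(left, up_left, up, up_right):
    """Context index for CABAC-coding one match flag: each causal
    neighbour contributes one bit (1 = matched, 0 = not matched),
    giving 2**4 = 16 contexts."""
    return (left << 3) | (up_left << 2) | (up << 1) | up_right
```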
In the non-matched case, the corresponding element of the non-matched index table is coded; here the context must be selected adaptively according to the surrounding pixels to achieve a better coding effect.
Fig. 6 shows a current pixel and the pixels at its left, upper-left, upper and upper-right positions.
Fifteen context models are designed for this case in total. For clarity, each context is denoted by a four-tuple whose entries are, in order, the indices represented by the pixels at the left, upper-left, upper and upper-right positions. The 15 contexts are {AAAA, AAAB, AABA, ABAA, BAAA, AABB, ABAB, ABBA, AABC, BAAC, BCAA, ABAC, ABCA, BACA, ABCD}, where A, B, C and D stand for four distinct index values. For example, AAAA means the indices at all four positions are identical, while AAAB means the left, upper-left and upper index values are the same and the upper-right index value differs from the other three. The specific procedure is as follows:
First obtain the index values of the four surrounding pixels; from the frequencies and positions of these four indices, determine which of the 15 contexts applies, and code with the corresponding context model. Before entropy coding, the current index is remapped to achieve a better coding effect. The remapping works as follows: under the chosen context model, the index with the highest expected frequency is remapped to 0, and the remaining possible indices are remapped in order of decreasing frequency, forming a mapping table. Finally, the remapped value of the current index is looked up in the mapping table, and that remapped value is encoded.
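The 15 patterns are exactly the 15 ways four values can be partitioned into equality classes (the Bell number B(4) = 15). A sketch of the pattern classification and the frequency-based remapping (helper names are ours; note the canonical form below always labels the first position "A", so e.g. the patent's BAAA appears here as ABBB, but the 15 classes are the same):

```python
def neighbor_pattern(indices):
    """First-occurrence canonical letter pattern (AAAA ... ABCD) of
    the four neighbouring index values, identifying which of the 15
    equality-partition contexts applies."""
    letters, seen = [], {}
    for v in indices:
        if v not in seen:
            seen[v] = "ABCD"[len(seen)]
        letters.append(seen[v])
    return "".join(letters)

def remap_index(index, freq_order):
    """Remap an index before entropy coding: the most frequent index
    under the chosen context becomes 0, the next 1, and so on.
    `freq_order` lists index values from most to least frequent."""
    return freq_order.index(index)
```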
The above is only a preferred embodiment of the invention and does not limit the invention in any form; any simple modification, equivalent variation or adaptation of the above embodiment made according to the technical essence of the invention still falls within the protection scope of the technical solution of the invention.

Claims (3)

1. An encoding method for screen content, characterized in that: an image or image block of screen content is represented in the pixel domain by several base colors and an index map; the index map is decomposed into a match table and a non-matched index table, each element of the match table indicating whether the corresponding element of the index map equals its predicted index, and if not, the index value is updated and stored in the non-matched index table; the predicted index is generated in one of two ways: exploiting strong temporal correlation, the index map of the image block found by inter-frame motion estimation is used as the predicted index map; or, exploiting spatial correlation, a pixel-level predicted index derived from the texture direction of the pixels surrounding the current pixel is used as the predicted index.
2. The method according to claim 1, characterized by comprising the following steps:
(1) divide each coding unit (CU) into prediction units (PU) in quadtree fashion;
(2) divide each prediction unit (PU) into transform units (TU) in quadtree fashion;
(3) the current TU has three components, Y, U and V; perform steps (4)-(12) for each YUV component:
(4) cluster the pixels in the TU by dynamic programming to obtain N representative values, N being a positive integer; these N values are the N base colors, sorted in increasing order. Then map each pixel of the current TU to the nearest of the N base colors by Euclidean distance, record an index indicating which base color it maps to, and compute the distortion of each TU;
(5) generate the predicted indices;
(6) generate the match table according to whether each index equals its corresponding predicted index;
(7) for the indices that differ from their predicted indices, update the index values and store them in the non-matched index table;
(8) compute the bit rate produced by coding the base colors, the match table and the non-matched index table;
(9) compute the rate-distortion cost from the distortion of step (4) and the bit rate of step (8);
(10) traverse the values of N from 1 to 4, repeating steps (2)-(9) for each fixed N, so that the rate-distortion cost computed in step (9) is minimized;
(11) send the base-color information and the match table and non-matched index table of the minimum-cost choice to the decoder;
(12) after receiving this information, the decoder reconstructs the current TU from the base colors and the index map;
(13) reconstruct the PU and CU.
3. The method according to claim 1, characterized in that the base colors are obtained by the following steps:
(a) Arrange the pixels in the current image in order; count and record the pixel values in each block and their frequencies of occurrence; line up all distinct pixel values in increasing order to form a sequence X, and their frequencies of occurrence to form a sequence Y, the elements of X and Y corresponding one to one;
(b) Suppose the pixel values in the block are divided into N classes, i.e., N base colors are to be obtained, N being a positive integer:
β(LowBound, UpBound, N) represents the distortion produced by assigning the elements of the interval [LowBound, UpBound] to N classes:
β(LowBound, UpBound, N) = α(LowBound, p-1) + β(p, UpBound, N-1),  N ≥ 2
β(LowBound, UpBound, 1) = α(LowBound, UpBound)
If N ≥ 2, the point p is found by
p = argmin over LowBound < p < UpBound - N + 1 of [ α(LowBound, p-1) + β(p, UpBound, N-1) ]
α(LowBound, p-1) = Σ_{i=LowBound}^{p-1} (X_i - avg(LowBound, p-1))² · Y_i
avg(LowBound, p-1) = ( Σ_{i=LowBound}^{p-1} X_i · Y_i ) / ( Σ_{i=LowBound}^{p-1} Y_i );
(c) For example, with N = 4 the recursion unrolls as
β(LowBound, UpBound, 4) = α(LowBound, p1-1) + β(p1, UpBound, 3)
β(p1, UpBound, 3) = α(p1, p2-1) + β(p2, UpBound, 2)
β(p2, UpBound, 2) = α(p2, p3-1) + β(p3, UpBound, 1)
β(p3, UpBound, 1) = α(p3, UpBound)
where pi (i = 1, 2, 3) is the optimal point found by the linear search at each level; the intervals [p3, UpBound], [p2, p3-1], [p1, p2-1] and [LowBound, p1-1] then correspond to the base colors sought, with LowBound < p1 < p2 < p3 < UpBound.
CN201110369669.4A 2011-11-18 2011-11-18 Encoding method for screen content Expired - Fee Related CN102611888B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201110369669.4A CN102611888B (en) 2011-11-18 2011-11-18 Encoding method for screen content


Publications (2)

Publication Number Publication Date
CN102611888A (en) 2012-07-25
CN102611888B (en) 2014-07-23

Family

ID=46529013

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201110369669.4A Expired - Fee Related CN102611888B (en) 2011-11-18 2011-11-18 Encoding method for screen content

Country Status (1)

Country Link
CN (1) CN102611888B (en)

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103404139A (en) * 2012-02-28 2013-11-20 松下电器产业株式会社 Image processing device and image processing method
CN103402091A (en) * 2013-07-31 2013-11-20 上海通途半导体科技有限公司 Cloud desktop image classifying and encoding method
CN104410863A (en) * 2014-12-11 2015-03-11 上海兆芯集成电路有限公司 Image processor and image processing method
CN104702955A (en) * 2013-12-05 2015-06-10 财团法人工业技术研究院 Method and system for predictive coding of screen video
CN105472392A (en) * 2015-12-11 2016-04-06 钟炎培 Screen video encoding and decoding method based on progressive character block compression and encoding and decoding device
CN105791824A (en) * 2016-03-09 2016-07-20 西安电子科技大学 Screen content coding predicting mode quick selection method based on edge point density
CN105786953A (en) * 2015-01-12 2016-07-20 帕洛阿尔托研究中心公司 Order encoded manifests in a content centric network
WO2016127837A1 (en) * 2015-02-11 2016-08-18 Huawei Technologies Co., Ltd. Apparatus and method for compressing color index map
CN105900432A (en) * 2014-01-02 2016-08-24 Vid拓展公司 Two-demensional palette coding for screen content coding
CN106375771A (en) * 2016-08-31 2017-02-01 苏睿 Image characteristic matching method and device
CN106385592A (en) * 2016-08-31 2017-02-08 苏睿 Image compression method and device
CN106464875A (en) * 2014-06-20 2017-02-22 高通股份有限公司 Copy from previous rows for palette mode coding
CN106993189A (en) * 2017-04-06 2017-07-28 北京工业大学 A kind of screen content method for video coding based on Optimal scheduling
CN107103891A (en) * 2017-01-20 2017-08-29 艾体威尔电子技术(北京)有限公司 A kind of method that color/graphics is used in small internal memory embedded system
US10291827B2 (en) 2013-11-22 2019-05-14 Futurewei Technologies, Inc. Advanced screen content coding solution
US10298955B2 (en) 2012-08-23 2019-05-21 Microsoft Technology Licensing, Llc Non-transform coding
US10638143B2 (en) 2014-03-21 2020-04-28 Futurewei Technologies, Inc. Advanced screen content coding with improved color table and index map coding methods
CN112040247A (en) * 2018-09-10 2020-12-04 华为技术有限公司 Video decoding method and video decoder

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101217668A (en) * 2008-01-14 2008-07-09 浙江大学 A mixed image compression method based on block classification
CN102223541A (en) * 2011-07-14 2011-10-19 北京工业大学 Method for coding hybrid image


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Song Yubin (宋宇彬): "Quadtree-based image segmentation technology" (基于四叉树的图像分割技术), Ordnance Industry Automation (《兵工自动化》), vol. 23, no. 6, 30 June 2004 (2004-06-30), pages 63-64 *


Also Published As

Publication number Publication date
CN102611888B (en) 2014-07-23


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20140723

Termination date: 20211118