CN100527842C - Background-based motion estimation coding method - Google Patents


Publication number
CN100527842C
Authority
CN
China
Prior art keywords
frame
macro block
background
coding
frames
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN 200710063053
Other languages
Chinese (zh)
Other versions
CN101009835A (en)
Inventor
戴琼海
于志涛
尚明海
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tsinghua University
Original Assignee
Tsinghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tsinghua University filed Critical Tsinghua University
Priority to CN 200710063053 priority Critical patent/CN100527842C/en
Publication of CN101009835A publication Critical patent/CN101009835A/en
Application granted granted Critical
Publication of CN100527842C publication Critical patent/CN100527842C/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

The invention provides a background-based motion estimation coding method for digital video compression. The method comprises: initializing a background frame; reading in a frame image and determining its coding type; if the image is an I frame, coding the current frame with conventional intra-frame coding; if the image is a P frame, reading in a macro block, coding the block with conventional intra-frame coding if it is an I-type macro block, or performing both conventional reference-frame motion estimation and background-frame motion estimation if it is a P-type macro block; and learning and updating the current background frame during coding until the background frame is fully generated. The invention improves coding efficiency and is particularly suitable for video surveillance and video conferencing.

Description

Background-based motion estimation coding method
Technical field
The invention belongs to the field of digital video coding and compression, and in particular relates to motion estimation coding methods.
Background technology
Digital video refers to video information recorded in digital form. The raw data volume of digital video is very large, which causes great inconvenience for transmission and storage, so encoding compression is usually required in practical applications.
Video coding uses motion estimation to compress data. Video data is an image sequence acquired continuously at fixed time intervals; because the capture rate is high, adjacent images are strongly correlated in content. Motion estimation exploits this correlation between images to eliminate redundant information and thereby compress the data. The general procedure of a motion estimation method is as follows: first, the current frame (the image to be encoded) is divided into macro blocks of fixed size (16 pixels x 16 pixels); then, for each macro block, the reference frame (the reference image selected for coding) is searched to find the "matching macro block" with the highest content similarity, which yields the motion vector of the current macro block (the relative displacement between the current macro block and the matching macro block); next, the two macro blocks are subtracted to obtain a residual, and a DCT transform is applied to the residual matrix; finally, the motion vector and the DCT coefficients are entropy coded to obtain the compressed data. The quality of the selected reference frame directly affects the efficiency of the compression: if the similarity between the selected reference frame and the current frame is strong, a high compression ratio can be obtained; conversely, if the similarity is weak, the compression ratio is low. Since the acquisition interval between adjacent images in video data is short and their content similarity is high, conventional coding methods generally select the image immediately preceding the current image as the reference frame. The latest international video compression standard, H.264, has also introduced the multi-reference-frame technique, i.e., selecting several reference frames during coding, which effectively guarantees the similarity of the reference frame and improves the compression rate of video coding.
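The block-matching procedure described above can be sketched as follows. This is a minimal illustration in Python/NumPy, not the patent's implementation; the function name, the exhaustive full search, and the search-window size are assumptions made for clarity:

```python
import numpy as np

def motion_estimate(cur_block, ref_frame, top, left, search=8):
    """Full-search block matching: find the 16x16 block in ref_frame,
    within +/-`search` pixels of (top, left), that minimizes the sum of
    absolute differences (SAD) against cur_block.  Returns the motion
    vector and the residual matrix."""
    h, w = ref_frame.shape
    best_sad, best_mv = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + 16 > h or x + 16 > w:
                continue  # candidate block falls outside the frame
            cand = ref_frame[y:y + 16, x:x + 16]
            sad = int(np.abs(cur_block.astype(int) - cand.astype(int)).sum())
            if best_sad is None or sad < best_sad:
                best_sad, best_mv = sad, (dy, dx)
    dy, dx = best_mv
    match = ref_frame[top + dy:top + dy + 16, left + dx:left + dx + 16]
    residual = cur_block.astype(int) - match.astype(int)
    # In a real encoder, the residual is then DCT-transformed and the
    # motion vector and coefficients are entropy coded.
    return best_mv, residual
```

If the current block is an exact copy of a block inside the search window, the search recovers the displacement and the residual is all zeros; the weaker the match, the larger the residual energy left to code.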
During video capture, part of the fixed background content may be occluded by a moving foreground object; after the object leaves, the occluded background content reappears, and the reappearing area is called the "exposed region". As shown in Figure 1, (a) and (b) are the 450th and 460th frames of a video sequence. In the 450th frame, the chair in the background is occluded by the moving person; by the 460th frame, as the person leaves, the chair reappears, and this reappearing chair is the "exposed region". Exposed regions are common in many video applications, and conventional coding methods achieve low coding efficiency on them. The reason is that a conventional method selects the immediately preceding image as the reference frame; because the exposed region was occluded by a foreground object in that reference frame, no similar matching block exists there, and coding efficiency drops sharply. As shown in Figure 2, (b) is the 460th frame and (a) is the reference frame selected by the conventional method. For the bar-shaped area in (b) (the "exposed region"), no matching content exists in (a) because the area is occluded by the person, so the coding efficiency of this part is very low. One solution to this problem is backward prediction, i.e., selecting an image that comes after the current image in the sequence as the reference frame. This improves the coding efficiency of exposed regions, but the computational complexity of encoding becomes too high, and extra processing delay is introduced at the decoder.
Summary of the invention
The object of the invention is to overcome the shortcomings of the prior art by proposing a background-based motion estimation method. On the basis of conventional coding methods, the method adds a new reference frame, the background frame, to the motion estimation process, which remedies the deficiency of conventional methods when coding "exposed regions" and effectively improves coding efficiency. It is particularly suitable for application scenarios with fixed background content, such as video surveillance and video conferencing.
The background-based motion estimation coding method proposed by the invention is characterized by comprising the following steps:
1) Before coding begins, initialize the background frame: set the pixel value of every pixel in the background frame to a fixed value, and set the state of every macro block in the background frame to "undetermined";
2) Begin coding: read in a frame image and determine its coding type; if it is an I frame, code the current frame with conventional intra-frame coding and go to step 6); if it is a P frame, proceed to step 3);
3) Read in a macro block and determine its coding type; if it is an I macro block, code it with conventional intra-frame coding and go to step 5); if it is a P macro block, perform conventional reference-frame motion estimation and background-frame motion estimation on it respectively, obtaining a motion-compensated residual matrix for each;
4) Compare the energies of the two residual matrices, select the reference frame corresponding to the residual matrix with the smaller energy as the coding reference frame, and code the macro block using that reference frame;
5) Continue reading and processing the next macro block, repeating steps 3)-4) until all macro blocks in the current frame have been coded;
6) Determine whether all frames have been coded; if so, end the coding process. Otherwise, determine whether the current background frame has finished generating: if the state of every macro block of the current background frame is "determined", the background frame has finished generating; read the next frame image and repeat steps 2)-5) directly with the current background frame;
7) If any macro block in the current background frame is still in the "undetermined" state, the background frame has not finished generating; first apply the learning update to the current background frame, then read the next frame image and repeat steps 2)-5) with the updated background frame.
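Steps 2)-7) above form a frame-level control loop around the per-macro-block coding. The following sketch shows only that control flow; the coding callbacks (`frame_type`, `encode_intra`, and so on) are hypothetical stand-ins for the real encoder routines, not names from the patent:

```python
def encode_sequence(frames, frame_type, encode_intra, encode_p_frame,
                    background_done, learning_update):
    """Frame-level control flow of steps 2)-7): I frames use intra
    coding, P frames use dual-reference motion estimation, and the
    background frame keeps learning until all of its macro blocks
    reach the 'determined' state."""
    for frame in frames:
        if frame_type(frame) == "I":
            encode_intra(frame)        # step 2): conventional intra coding
        else:
            encode_p_frame(frame)      # steps 3)-5): per-macro-block dual-reference ME
        if not background_done():      # step 6): background frame fully generated?
            learning_update(frame)     # step 7): learn from the frame just coded
```

Note that the learning update runs after a frame has been coded, matching the patent's use of "the image that has just been coded" as the learning input.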
Characteristics of the method of the invention
The method of the invention improves the coding efficiency of "exposed regions". The reason is as follows. A conventional coding method chooses a frame adjacent to the current frame as the reference frame during motion estimation; when coding an "exposed region", no matching content can be found in that reference frame because the region was occluded by a foreground object, which hurts coding efficiency. The invention uses a background frame to preserve the background content: in addition to the reference frame chosen according to the conventional method, the background frame is also selected as a reference frame during coding. Thus, when coding an "exposed region", even though it is difficult to find matching content in the conventional reference frame (the reference frame selected by the conventional method), the reappearing content can easily find matching content in the background frame thanks to the constancy of the background, which improves coding efficiency.
The background frame used in the method differs from an ordinary video frame: an ordinary video frame is a captured image of a real scene, whereas the background frame is an artificially generated auxiliary coding image. The background frame represents the fixed background content; it cannot be obtained directly from the video data and must be generated through continuous learning during coding. The invention designs a learning method for the background frame, so that the encoder can complete its generation automatically during coding.
The method is simple in design and easy to implement.
Description of drawings
Fig. 1 is a schematic diagram of background occlusion and reappearance.
Fig. 2 is a schematic diagram of how a conventional motion estimation method handles an exposed region.
Fig. 3 is a flow chart of the background-based motion estimation coding method of the invention.
Fig. 4 is a flow chart of the learning update method for the background frame of the invention.
Fig. 5 is a schematic diagram of the background-frame generation process in the embodiment.
Embodiment
The background-based motion estimation coding method proposed by the invention is described in detail below with reference to the accompanying drawings and an embodiment.
As shown in Figure 3, the background-based motion estimation method of the invention builds on conventional coding methods by adding a new reference frame, the background frame, to the motion estimation process. The specific implementation steps are as follows:
1) Before coding begins, initialize the background frame: set the pixel value of every pixel in the background frame to a fixed value (any value will do; 0 is usually chosen for convenience), and set the state of every macro block in the background frame to "undetermined";
2) Begin coding: read in a frame image and determine its coding type; if it is an I frame, code the current frame with conventional intra-frame coding and go to step 6); if it is a P frame, proceed to step 3);
3) Read in a macro block and determine its coding type; if it is an I macro block, code it with conventional intra-frame coding and go to step 5); if it is a P macro block, perform conventional reference-frame motion estimation and background-frame motion estimation on it respectively, obtaining a motion-compensated residual matrix for each;
4) Compare the energies of the two residual matrices (the energy of a residual matrix is the sum of the absolute values of its entries), select the reference frame corresponding to the residual matrix with the smaller energy as the coding reference frame, and code the macro block using that reference frame;
5) Continue reading and processing the next macro block, repeating steps 3)-4) until all macro blocks in the current frame have been coded;
6) Determine whether all frames have been coded; if so, end the coding process. Otherwise, determine whether the current background frame has finished generating: if the state of every macro block of the current background frame is "determined", the background frame has finished generating; read the next frame image and repeat steps 2)-5) directly with the current background frame;
7) If any macro block in the current background frame is still in the "undetermined" state, the background frame has not finished generating; first apply the learning update to the current background frame, then read the next frame image and repeat steps 2)-5) with the updated background frame.
In step 7) above, the learning update of the background frame means updating the content and state of each macro block in the background frame, so that the background frame represents the background content of the application scenario in the coded images more accurately. As shown in Figure 4, the learning update method for the background frame comprises the following steps:
(1) Take a macro block from the current background frame and determine its state; if its state is "determined", skip to step (4);
(2) If its state is "undetermined", compute the difference value SAD (Sum of Absolute Differences: the sum of the absolute values of the differences between corresponding pixels of two macro blocks) between this macro block and the macro block at the same position in the current frame (i.e., the image that has just been coded);
(3) Compare the computed SAD value with the difference threshold SAD_MAX (a preset fixed value). If SAD ≥ SAD_MAX, update the content of this macro block in the background frame by replacing it with the macro block at the same position in the current frame, and simultaneously reset the confidence N of this background-frame macro block to 0 (N is a non-negative integer). If SAD < SAD_MAX, first increase the confidence N of this background-frame macro block by 1, then compare N with the threshold N_MAX (a preset fixed positive integer): if N = N_MAX, change the state of this background-frame macro block to "determined"; if N < N_MAX, keep its state as "undetermined";
(4) Read the next background-frame macro block and repeat steps (1)-(3) until all macro blocks of the background frame have been processed.
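Steps (1)-(4) above amount to one learning pass over the background frame. A minimal sketch in Python/NumPy follows, using the embodiment's threshold values; the function name and the array-based representation of states and confidences are assumptions, not the patent's implementation:

```python
import numpy as np

SAD_MAX = 600   # macro-block difference threshold (embodiment value)
N_MAX = 10      # confidence threshold (embodiment value)

def learning_update(background, states, confidence, current_frame):
    """One learning pass over the background frame (steps (1)-(4)).
    `background` and `current_frame` are HxW pixel arrays; `states`
    (True = 'determined') and `confidence` are per-macro-block arrays."""
    rows, cols = states.shape
    for i in range(rows):
        for j in range(cols):
            if states[i, j]:
                continue  # step (1): 'determined' blocks are skipped
            y, x = 16 * i, 16 * j
            bg_mb = background[y:y + 16, x:x + 16]
            cur_mb = current_frame[y:y + 16, x:x + 16]
            # step (2): SAD between background block and just-coded block
            sad = int(np.abs(bg_mb.astype(int) - cur_mb.astype(int)).sum())
            if sad >= SAD_MAX:
                # step (3): content changed -> replace block, reset confidence
                background[y:y + 16, x:x + 16] = cur_mb
                confidence[i, j] = 0
            else:
                # content stable -> raise confidence toward 'determined'
                confidence[i, j] += 1
                if confidence[i, j] >= N_MAX:
                    states[i, j] = True
```

A macro block thus becomes "determined" only after its content has stayed stable for N_MAX consecutive coded frames, which is exactly the behavior that keeps moving foreground objects out of the background frame.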
Embodiment
The scene chosen for this embodiment is a corner of an office, as shown in Figure 5. The video data is captured by a surveillance camera; the sequence has 1370 frames in total, the frame rate is 25 frames/second, and the image size is 176 pixels x 144 pixels. Since the image size is 176 x 144 pixels and the macro block size is 16 x 16 pixels, the background frame contains 11 x 8 macro blocks. The embodiment uses a two-dimensional array variable b[11][8] to represent the state of each macro block in the background frame: a value of 0 means the corresponding macro block's state is "undetermined", and a value of 1 means it is "determined". The confidence N of the background-frame macro blocks is stored in a two-dimensional array variable d[11][8], where each element represents the confidence of the macro block at the corresponding position. Two threshold parameters are used. The macro-block difference threshold is SAD_MAX = 600 (the larger SAD_MAX is, the more difference between two macro blocks is tolerated, and the smaller it is, the less is tolerated; a per-pixel difference of up to about 3 generally gives good results, and since a macro block is 16 x 16 pixels, SAD_MAX can be taken within 16 x 16 x 3 = 768). The confidence threshold of the background macro blocks is N_MAX = 10 (N_MAX is the number of times SAD < SAD_MAX must hold consecutively for a background-frame macro block during learning; if N_MAX is too small, foreground objects are easily mistaken for background during learning, while if it is too large, the background frame is easily disturbed by foreground objects and is hard to finalize, so N_MAX is generally set between 10 and 30).
The encoder used in this embodiment was obtained by modifying the conventional MPEG-4 encoder xvid-0.9.1.
The background-based motion estimation coding method of this embodiment comprises the following steps:
1) Before coding begins, initialize the background frame: set the pixel values of all pixels in the background frame to 0, and set the state of every macro block in the background frame to "undetermined", i.e., set all values of the two-dimensional array variable b[11][8] to 0;
2) Begin coding: read in a frame image and determine its coding type; if it is an I frame, code the current frame with conventional intra-frame coding and go to step 6); if it is a P frame, proceed to step 3);
3) Read in a macro block and determine its coding type; if it is an I macro block, code it with conventional intra-frame coding and go to step 5); if it is a P macro block, perform conventional reference-frame motion estimation and background-frame motion estimation on it respectively, obtaining the motion-compensated residuals diff_1 and diff_2;
4) Compare the energies of diff_1 and diff_2 (the energy of a residual matrix is the sum of the absolute values of its entries): if diff_1 < diff_2, select the conventional reference frame as the coding reference frame, perform motion estimation on this macro block with it, and complete the coding of the macro block; if diff_1 ≥ diff_2, select the background frame as the coding reference frame, perform motion estimation on this macro block with it, and complete the coding of the macro block;
5) Continue reading and processing the next macro block, repeating steps 3)-4) until all macro blocks in the current frame have been coded;
6) Determine whether all frames have been coded; if so, end the coding process. Otherwise, examine the state array b[11][8] of the background-frame macro blocks: if all values of b[11][8] are 1, the state of every macro block of the current background frame is "determined", so the current background frame has finished generating; read the next frame image and repeat steps 2)-5) directly with the current background frame;
7) If any value in b[11][8] is 0, some macro block in the current background frame is still "undetermined" and the background frame has not finished generating; in this case, first apply the learning update to the current background frame, then read the next frame image and repeat steps 2)-5) with the updated background frame.
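The per-macro-block reference selection of step 4) can be sketched as follows. This is a minimal illustration under the assumption that the two matching blocks (one per reference) have already been found by motion estimation; the function name is hypothetical:

```python
import numpy as np

def select_reference(cur_mb, conv_match, bg_match):
    """Step 4): code the macro block against whichever reference
    (conventional or background) yields the smaller residual energy,
    where the energy is the sum of absolute values of the residual."""
    diff_1 = cur_mb.astype(int) - conv_match.astype(int)  # conventional-reference residual
    diff_2 = cur_mb.astype(int) - bg_match.astype(int)    # background-frame residual
    e1 = int(np.abs(diff_1).sum())
    e2 = int(np.abs(diff_2).sum())
    if e1 < e2:
        return "conventional", diff_1
    # diff_1 >= diff_2: the background frame wins (ties included)
    return "background", diff_2
```

For a macro block in an "exposed region", the conventional match is poor (the content was occluded) while the background-frame match is close, so e2 is much smaller than e1 and the background frame is selected.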
In this embodiment, the learning update of the background frame in step 7) above, as shown in Figure 4, comprises the following steps:
(1) Take a macro block from the current background frame and determine its state by reading its state variable b; if the value of b is 1, the macro block's state is "determined", so skip to step (4);
(2) If the value of b is 0, the macro block's state is "undetermined"; compute the difference value SAD (Sum of Absolute Differences: the sum of the absolute values of the differences between corresponding pixels of two macro blocks) between this macro block and the macro block at the same position in the current frame (i.e., the image that has just been coded), and denote the result SAD = sad1;
(3) Compare sad1 with the difference threshold SAD_MAX (a preset parameter). If sad1 ≥ SAD_MAX, update the content of this macro block in the background frame by replacing it with the macro block at the same position in the current frame, and simultaneously reset the confidence d of this background-frame macro block to 0, i.e., d = 0. If sad1 < SAD_MAX, first increase the confidence variable d of this background-frame macro block, i.e., d = d + 1, then compare d with the threshold N_MAX (set to 10 in this embodiment): if d ≥ 10, change the state of this background-frame macro block to "determined"; if d < 10, keep its state as "undetermined";
(4) The learning of the current background-frame macro block is complete; read the next background-frame macro block and repeat steps (1)-(3) until all macro blocks of the background frame have been processed.
The coding effect of this embodiment is shown in Figure 5, which records the content of the background frame and the state of each macro block at the 5th, 11th, and 200th frames of the coding process. Figure 5(a) shows the content of the background frame, and Figure 5(b) shows the state of each macro block: macro blocks in the "determined" state are shown as the background content at the corresponding position, while macro blocks in the "undetermined" state are shown as grid blocks. As can be seen, the background-frame learning method used in the invention works very well: by the 5th frame, the content of the background frame is already very close to the background content of the scene. Because the threshold N_MAX is set to 10, the confidence of each macro block is at most 5 at the 5th frame, so the state of every macro block is still "undetermined", shown as grid blocks in (b). By the 11th frame, the confidence of most macro blocks has satisfied d ≥ N_MAX, so their state has become "determined"; a small number of macro blocks are still "undetermined", mainly because their confidence variable d was reset to 0 by a content update during coding and has not yet satisfied d ≥ N_MAX again. By the 200th frame, the state of all macro blocks has become "determined", and the generation of the background frame is complete.
The compression performance of the motion estimation coding method of the invention was compared with that of the conventional single-reference-frame and conventional double-reference-frame coding methods. The experimental data are shown in Table 1. It can be seen that, at the same code rate, the method of the invention improves PSNR by 0.3-0.9 dB over the conventional single-reference-frame coding method and by 0.1-0.7 dB over the conventional double-reference-frame coding method.
Table 1. Coding effect comparison of the method of the invention, the conventional single-reference-frame method, and the conventional double-reference-frame method

Claims (2)

1. A background-based motion estimation coding method, characterized by comprising the following steps:
1) before coding begins, initializing the background frame, wherein the initialization sets the pixel value of every pixel in the background frame to a fixed value and sets the state of every macro block in the background frame to "undetermined";
2) beginning coding: reading in a frame image and determining its coding type; if it is an I frame, coding the current frame with conventional intra-frame coding and going to step 6); if it is a P frame, proceeding to step 3);
3) reading in a macro block and determining its coding type; if it is an I macro block, coding it with conventional intra-frame coding and going to step 5); if it is a P macro block, performing conventional reference-frame motion estimation and background-frame motion estimation on it respectively, obtaining a motion-compensated residual matrix for each;
4) comparing the energies of the two residual matrices, selecting the reference frame corresponding to the residual matrix with the smaller energy as the coding reference frame, and coding the macro block using that reference frame;
5) continuing to read and process the next macro block, returning to step 3), until all macro blocks in the current frame have been coded;
6) determining whether all frames have been coded, and if so, ending the coding process; otherwise, determining whether the current background frame has finished generating, wherein a macro block that has finished generating has its state set to "determined": if the state of every macro block of the current background frame is "determined", the current background frame has finished generating, and the next frame image is read and coding continues by executing step 2) directly with the current background frame;
7) if any macro block in the current background frame is in the "undetermined" state, the current background frame has not finished generating; in that case, first applying the learning update to the current background frame, then reading the next frame image and continuing coding by executing step 2) with the updated background frame.
2. The method of claim 1, characterized in that the learning update of the background frame in step 7) comprises the following steps:
(71) taking a macro block from the current background frame and determining its state; if its state is "determined", skipping to step (74);
(72) if its state is "undetermined", computing the difference value SAD between this macro block and the macro block at the same position in the current frame, wherein the current frame is the image that has just been coded;
(73) comparing the computed SAD value with the difference threshold SAD_MAX: if SAD ≥ SAD_MAX, updating the content of this macro block in the background frame by replacing it with the macro block at the same position in the current frame, and simultaneously resetting the confidence N of this background-frame macro block to 0, N being a non-negative integer; if SAD < SAD_MAX, first increasing the confidence N of this background-frame macro block by 1, then comparing N with the threshold N_MAX: if N = N_MAX, changing the state of this background-frame macro block to "determined"; if N < N_MAX, keeping its state as "undetermined";
(74) reading the next background-frame macro block and executing step (71) until all macro blocks of the background frame have been processed.
CN 200710063053 2007-01-26 2007-01-26 Background-based motion estimation coding method Expired - Fee Related CN100527842C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 200710063053 CN100527842C (en) 2007-01-26 2007-01-26 Background-based motion estimation coding method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN 200710063053 CN100527842C (en) 2007-01-26 2007-01-26 Background-based motion estimation coding method

Publications (2)

Publication Number Publication Date
CN101009835A CN101009835A (en) 2007-08-01
CN100527842C true CN100527842C (en) 2009-08-12

Family

ID=38697913

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 200710063053 Expired - Fee Related CN100527842C (en) 2007-01-26 2007-01-26 Background-based motion estimation coding method

Country Status (1)

Country Link
CN (1) CN100527842C (en)

Families Citing this family (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101127912B (en) * 2007-09-14 2010-11-17 浙江大学 Video coding method for dynamic background frames
CN101330619B (en) * 2008-07-29 2012-03-07 北京中星微电子有限公司 Method for compressing video image and corresponding method for decoding video
CN101742296B (en) * 2008-11-14 2016-01-20 北京中星微电子有限公司 Reduce video coding-decoding method and the device of the fluctuation of bit stream data amount
CN102196253B (en) * 2010-03-11 2013-04-10 中国科学院微电子研究所 Video coding method and device based on frame type self-adaption selection
US8467412B2 (en) 2010-04-14 2013-06-18 Ericsson Television Inc. Adaptive rate shifting for delivery of video services to service groups
US9060173B2 (en) * 2011-06-30 2015-06-16 Sharp Kabushiki Kaisha Context initialization based on decoder picture buffer
US9338465B2 (en) * 2011-06-30 2016-05-10 Sharp Kabushiki Kaisha Context initialization based on decoder picture buffer
CN102447902B (en) * 2011-09-30 2014-04-16 广州柯维新数码科技有限公司 Method for selecting reference field and acquiring time-domain motion vector
US11109036B2 (en) 2013-10-14 2021-08-31 Microsoft Technology Licensing, Llc Encoder-side options for intra block copy prediction mode for video and image coding
EP3720132A1 (en) 2013-10-14 2020-10-07 Microsoft Technology Licensing LLC Features of color index map mode for video and image coding and decoding
CA2924763A1 (en) 2013-10-14 2015-04-23 Microsoft Corporation Features of intra block copy prediction mode for video and image coding and decoding
MX360926B (en) 2014-01-03 2018-11-22 Microsoft Technology Licensing Llc Block vector prediction in video and image coding/decoding.
US10390034B2 (en) 2014-01-03 2019-08-20 Microsoft Technology Licensing, Llc Innovations in block vector prediction and estimation of reconstructed sample values within an overlap area
US11284103B2 (en) 2014-01-17 2022-03-22 Microsoft Technology Licensing, Llc Intra block copy prediction with asymmetric partitions and encoder-side search patterns, search ranges and approaches to partitioning
US10542274B2 (en) 2014-02-21 2020-01-21 Microsoft Technology Licensing, Llc Dictionary encoding and decoding of screen content
AU2014385769B2 (en) 2014-03-04 2018-12-06 Microsoft Technology Licensing, Llc Block flipping and skip mode in intra block copy prediction
KR20230130178A (en) 2014-06-19 2023-09-11 마이크로소프트 테크놀로지 라이센싱, 엘엘씨 Unified intra block copy and inter prediction modes
EP3202150B1 (en) 2014-09-30 2021-07-21 Microsoft Technology Licensing, LLC Rules for intra-picture prediction modes when wavefront parallel processing is enabled
US9591325B2 (en) 2015-01-27 2017-03-07 Microsoft Technology Licensing, Llc Special case handling for merged chroma blocks in intra block copy prediction mode
EP3308540B1 (en) 2015-06-09 2020-04-15 Microsoft Technology Licensing, LLC Robust encoding/decoding of escape-coded pixels in palette mode
CN105898310B (en) * 2016-04-26 2021-07-16 广东中星电子有限公司 Video encoding method and apparatus
CN106101706B (en) * 2016-06-30 2019-11-19 华为技术有限公司 Image encoding method and device
CN108259904B (en) * 2016-12-29 2022-04-05 法法汽车(中国)有限公司 Method, encoder and electronic device for encoding image data
US10986349B2 (en) 2017-12-29 2021-04-20 Microsoft Technology Licensing, Llc Constraints on locations of reference blocks for intra block copy prediction
CN109859248B (en) * 2018-12-24 2024-03-19 上海大学 Time domain difference-based secondary background modeling method
CN110062235B (en) * 2019-04-08 2023-02-17 上海大学 Background frame generation and update method, system, device and medium
CN112822520B (en) * 2020-12-31 2023-06-16 武汉球之道科技有限公司 Server coding method for online event video
EP4322523A1 (en) * 2021-04-16 2024-02-14 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Residual coding method and device, video coding method and device, and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1738426A (en) * 2005-09-09 2006-02-22 南京大学 Video moving-object segmentation and tracking method
CN1801930A (en) * 2005-12-06 2006-07-12 南望信息产业集团有限公司 Suspicious static object detection method based on video content analysis

Also Published As

Publication number Publication date
CN101009835A (en) 2007-08-01

Similar Documents

Publication Publication Date Title
CN100527842C (en) Background-based motion estimation coding method
CN107396124B (en) Video compression method based on deep neural networks
CN111709896B (en) Method and equipment for mapping LDR video into HDR video
CN105556964B Content-adaptive bi-directional or functionally predictive multi-pass pictures for efficient next-generation video coding
Zhang et al. An efficient coding scheme for surveillance videos captured by stationary cameras
CN101771878B (en) Self-adaptively selecting global motion estimation method for panoramic video coding
CN108259916B Intra-frame best-match interpolation reconstruction method for distributed video compressed sensing
CN104065887A (en) Methods And Systems For Enhanced Dynamic Range Images And Video From Multiple Exposures
JPH0670301A (en) Apparatus for segmentation of image
CN103141092B Method and apparatus for encoding a video signal using motion-compensated example-based super-resolution for video compression
CN101404766B (en) Multi-view point video signal encoding method
CN100581265C (en) Processing method for multi-view point video
CN110852964A (en) Image bit enhancement method based on deep learning
CN108289224B Video frame prediction method and device with an automatically compensating neural network
CN111479110A (en) Fast affine motion estimation method for H.266/VVC
CN110177282A Inter-frame prediction method based on SRCNN
CN114900691B (en) Encoding method, encoder, and computer-readable storage medium
CN102316323B Fast fractal compression and decompression method for binocular stereo video
CN114202463B (en) Cloud fusion-oriented video super-resolution method and system
CN113810715B (en) Video compression reference image generation method based on cavity convolutional neural network
CN108259914B (en) Cloud image encoding method based on object library
CN112601095B (en) Method and system for creating fractional interpolation model of video brightness and chrominance
CN102263952B Object-based fast fractal compression and decompression method for binocular stereo video
CN103647969B Object-based fast fractal video compression and decompression method
CN114466199A (en) Reference frame generation method and system applicable to VVC (variable valve timing) coding standard

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C17 Cessation of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20090812

Termination date: 20140126