CN101478678A - Time-domain filtering method based on interested region motion compensation - Google Patents

Time-domain filtering method based on interested region motion compensation

Info

Publication number
CN101478678A
CN101478678A (application CN 200810236531 / CN200810236531A)
Authority
CN
China
Prior art keywords
interest
area
pixel
video
foreground
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN 200810236531
Other languages
Chinese (zh)
Other versions
CN101478678B (en)
Inventor
兰旭光
马雯
薛建儒
郑南宁
王斌
马政
毕重远
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xian Jiaotong University
Original Assignee
Xian Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian Jiaotong University filed Critical Xian Jiaotong University
Priority to CN 200810236531 priority Critical patent/CN101478678B/en
Publication of CN101478678A publication Critical patent/CN101478678A/en
Application granted granted Critical
Publication of CN101478678B publication Critical patent/CN101478678B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The present invention discloses a motion compensated temporal filtering method based on a region of interest. The method uses video segmentation to obtain the region of interest (ROI); ROI-based motion estimation to obtain separate motion vector trees for the foreground and the background; ROI-based motion compensation with boundary checking to obtain the motion trajectory of each pixel; ROI boundary transfer to obtain the boundary of each level of low-frequency frames; and ROI-based motion compensated temporal filtering to remove temporal redundancy. The original video is divided into two independent regions, each of which can be processed separately. The method thereby realizes content-based scalable video coding driven by the region of interest: at low bit rates, more bits can be allocated to the content region viewers care about, so the foreground content remains clearer and a better viewing effect is obtained.

Description

Motion compensated temporal filtering method based on a region of interest
Technical field
The invention belongs to the field of video coding and network transmission, and specifically relates to motion estimation and motion compensated temporal filtering techniques based on a region of interest.
Background technology
With the rapid development of the Internet, user expectations for video streaming services keep rising, and traditional video codec schemes struggle to meet the demand for content diversity and scalability. In an image or a video sequence, people generally care about only part of the content, yet previous video coding standards are mostly frame-based: all content within a frame is treated in the same way. As a result, at low bit rates the quality of the entire frame is degraded. Content-scalable coding based on a region of interest (ROI) has therefore become a research focus; its key enabling technique is motion compensated temporal filtering based on the region of interest, for which no satisfactory solution has existed so far.
Summary of the invention
The object of the present invention is to provide a motion compensated temporal filtering method based on a region of interest that eliminates temporal information redundancy. The image or video is encoded once at the highest resolution, and decoding from a partial bit stream is allowed according to user-specific content requirements, providing a solution for scalable encoding and decoding of an arbitrary region of interest (ROI) over heterogeneous networks.
To accomplish the above task, the technical solution adopted by the present invention comprises the following steps:
1) The video sequence is segmented and tracked to obtain the boundary of the region of interest;
2) The obtained boundary divides the original video into background and foreground, and motion estimation of the region of interest (ROI) is carried out to obtain motion vector trees for the background and the foreground;
3) The obtained motion vector trees are used in ROI motion compensation to obtain the motion trajectory of each pixel in the background and the foreground;
4) ROI boundary transfer yields the region-of-interest boundary description corresponding to each level of low-frequency frames;
5) With the obtained per-level ROI boundaries and per-pixel motion trajectories, motion compensated temporal filtering of the ROI effectively removes the temporal redundancy of the video and at the same time provides temporal scalability.
The video segmentation uses manual labeling or an image segmentation algorithm to extract the region of interest (ROI) from the first frame of the original video; a video tracking algorithm is then applied, and the video is finally divided into foreground and background.
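As a minimal, hedged sketch (the binary-mask representation and function name are assumptions, not part of the patent text), splitting one frame into foreground and background regions while keeping absolute pixel coordinates could look like this:

```python
import numpy as np

def split_frame_by_roi(frame, roi_mask):
    """Split a frame into foreground/background regions using a binary ROI mask.

    frame    : 2-D array of pixel values (height x width)
    roi_mask : 2-D boolean array of the same shape, True inside the ROI
    Pixels keep their absolute coordinates in the full frame, so the two
    regions can later be motion-estimated and filtered independently.
    """
    foreground = np.where(roi_mask, frame, 0)   # ROI pixels, zero elsewhere
    background = np.where(roi_mask, 0, frame)   # complement of the ROI
    return foreground, background
```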
The ROI motion estimation performs motion estimation separately on the foreground and background regions of a group of frames (GOP). The foreground part is treated as a new group of region images, but the position of each pixel is taken as its absolute coordinate within the whole frame; for the background part, the motion estimation operates over the whole frame. Two motion vector trees, one for the foreground and one for the background, are thus obtained.
The ROI motion compensation is performed after motion estimation: the block motion vectors are mapped to pixels, and motion compensation is then carried out for each pixel. For the foreground, since motion estimation is carried out independently on the foreground elements, every motion-compensated pixel remains within the foreground. For the background, however, the corresponding position of some pixels falls inside the foreground after motion compensation; to guarantee perfect reconstruction, these pixels are all treated as unconnected pixels.
The ROI boundary transfer solves the following problem: the boundary description of the region of interest of the original video is provided by the video segmentation result, but the low-frequency frames of the other levels are not segmented again and still need boundaries. When the filtering of each level is performed, in addition to outputting the corresponding low- and high-frequency frames, the boundary descriptions of the regions of interest corresponding to these frames are also output: a low-frequency frame takes the boundary of the corresponding original even frame, and a high-frequency frame takes the boundary of the corresponding original odd frame.
The ROI motion compensated temporal filtering performs the temporal wavelet decomposition separately for the foreground and the background along the motion trajectories of the region pixels. When processing the background, the special class of unconnected pixels is handled by setting the high-frequency frame value to zero and keeping the pixel value in the low-frequency frame, which yields the corresponding low- and high-frequency frames. When fractional-precision interpolation is performed, the ring-shaped (回-shaped) region around the ROI is divided into different parts and a boundary check is applied to each. The foreground and background low- and high-frequency frames obtained after the wavelet decomposition can be directly linearly superposed to obtain the low- and high-frequency frames of the whole frame.
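A minimal sketch of this per-pixel rule (array names and the mask representation are assumptions): an unconnected background pixel is set to zero in the high-frequency frame and keeps its own value in the low-frequency frame, after which the disjoint foreground and background region frames are linearly superposed:

```python
import numpy as np

def handle_unconnected_and_merge(fg_low, fg_high, bg_low, bg_high,
                                 bg_original, unconnected_mask):
    """Apply the unconnected-pixel rule to the background region and merge regions.

    fg_low, fg_high  : low-/high-frequency frames of the foreground region
    bg_low, bg_high  : low-/high-frequency frames of the background region
    bg_original      : original background pixel values (the even frame)
    unconnected_mask : True where a background pixel had no valid correspondence
    """
    bg_high = np.where(unconnected_mask, 0.0, bg_high)        # high-frequency: set to zero
    bg_low = np.where(unconnected_mask, bg_original, bg_low)  # low-frequency: keep the pixel value
    # The two regions are disjoint, so the frame-level result is a simple sum.
    return fg_low + bg_low, fg_high + bg_high
```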
The two regional motion vector trees obtained after motion estimation are fed into the motion vector coding module, while the superposed regional low- and high-frequency frames obtained after temporal filtering are fed into the ROI-based spatial wavelet decomposition module. The wavelet coefficients of the region of interest (ROI) are adaptively lifted. The resulting three-dimensional wavelet coefficients are encoded by an entropy coder, and embedded rate truncation is applied to the encoded bit stream under various conditions, so that the region of interest (ROI) receives a larger share of the bit rate and its content obtains a better visual effect.
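As a hedged illustration of the adaptive lifting of ROI coefficients (the gain value and function name are assumptions, not taken from the patent): coefficients inside the ROI are scaled up before embedded entropy coding so that they survive rate truncation at low bit rates, with the decoder applying the inverse scaling:

```python
import numpy as np

def boost_roi_coefficients(coeffs, roi_mask, gain=4.0):
    """Scale wavelet coefficients inside the ROI before embedded entropy coding.

    coeffs   : 2-D array of wavelet coefficients for one subband
    roi_mask : boolean mask of the ROI support in this subband
    gain     : illustrative boost factor; the decoder must apply 1/gain
    """
    return np.where(roi_mask, coeffs * gain, coeffs)
```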
The present invention successfully realizes the above ROI-based removal of temporal redundancy, completes an ROI-based scalable video coding method, and can be applied to network transmission. Users can obtain and play a video stream whose quality and content match their own needs.
The invention provides a motion compensated temporal filtering scheme that supports an arbitrary region of interest (ROI), eliminates temporal information redundancy, and accomplishes ROI-based scalable video coding and decoding. The content region the user cares about is allocated a higher bit rate and therefore a better viewing quality, which better suits the development of new video applications.
Description of drawings
Fig. 1 is the system architecture diagram of the region-of-interest (ROI) scheme of the present invention.
Fig. 2 is a schematic diagram of ROI motion estimation in the present invention;
(a) is the current frame; (b) is the reference frame.
Fig. 3 is a schematic diagram of ROI motion compensation in the present invention.
Fig. 4 is a schematic diagram of the temporal wavelet decomposition of a group of frames for the ROI in the present invention.
Fig. 5 is a schematic diagram of ROI boundary transfer in the present invention.
The content of the present invention is described in further detail below in conjunction with the accompanying drawings.
Embodiment
With reference to Figure 1, the boundary of the region of interest (ROI) is first obtained by video segmentation and the original video is divided into foreground and background. ROI-based motion estimation is used to obtain the motion trajectory of each pixel; the trajectories are then used in ROI-based motion compensated temporal filtering to obtain the low- and high-frequency frames of the foreground and background regions, eliminating the temporal information redundancy within each region. The superposed regional low- and high-frequency frames are sent to the ROI-based spatial wavelet transform, while the regional motion vectors are encoded by the motion vector coder. The wavelet coefficients of each frequency band corresponding to the ROI part are adaptively lifted. The lifted wavelet coefficients are sent to the entropy coder for embedded coding, and finally rate control is performed.
With reference to Figure 2, this technique uses the hierarchical variable size block matching (HVSBM, Hierarchical Variable Size Block Matching) method to obtain the motion trajectories of the region video frames, but it is carried out at full resolution without a pyramid decomposition. Fixed-size block matching divides the video frame into a series of macroblocks and then searches for the matching block in the reference frame:

$$(dx, dy) = \arg\min \sum_{(x, y) \in S} \left| X_{cur}[x, y] - X_{ref}[x - dx, y - dy] \right|^p$$

where $X_{cur}[x, y]$ is the pixel value in the current block and $X_{ref}[x - dx, y - dy]$ is the pixel value in the reference block; $S$ is the search range and $|\cdot|^p$ is the matching criterion: SAD when p = 1 and MSE when p = 2. The $(dx, dy)$ that minimizes the expression above is the motion vector of the block. The foreground region is treated as a single region image for motion estimation, and the background obtains its motion vectors over the whole frame minus the foreground region.
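As an illustrative, hedged sketch of this criterion (the function name, fixed block size, and full-search strategy are assumptions; the patent's HVSBM with variable block sizes is not reproduced here), a minimal block matcher could be written as:

```python
import numpy as np

def block_match(cur, ref, bx, by, block=16, search=8, p=1):
    """Full-search block matching for the block whose top-left corner is (bx, by).

    Minimizes sum(|X_cur - X_ref|^p) over the search window:
    p = 1 gives SAD, p = 2 gives MSE (up to the 1/N factor).
    Returns the motion vector (dx, dy).
    """
    h, w = cur.shape
    cur_blk = cur[by:by + block, bx:bx + block].astype(np.float64)
    best, best_mv = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            x0, y0 = bx - dx, by - dy            # reference position per X_ref[x-dx, y-dy]
            if x0 < 0 or y0 < 0 or x0 + block > w or y0 + block > h:
                continue                          # candidate block falls outside the reference frame
            ref_blk = ref[y0:y0 + block, x0:x0 + block].astype(np.float64)
            cost = np.sum(np.abs(cur_blk - ref_blk) ** p)
            if cost < best:
                best, best_mv = cost, (dx, dy)
    return best_mv
```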
With reference to Figure 3, temporal motion compensated filtering is applied to each pixel in the foreground region using the obtained motion vectors, yielding the motion trajectory of each pixel. Because the motion estimation of the foreground region is carried out independently, the pixels involved all lie within the foreground, which guarantees that the motion-compensated pixels of the region also remain within the foreground. For the background part, however, the motion vectors are obtained by subtraction, so after motion compensation the corresponding positions of some pixels fall inside the foreground. The motion compensation of the background therefore needs a check: a pixel that enters the foreground after motion compensation is treated as an unconnected pixel.
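As a hedged sketch of this check (the per-pixel motion-field layout and function name are assumptions; the displacement sign follows the matching formula above, and fractional vectors are simply truncated):

```python
import numpy as np

def flag_unconnected_background(mv_field, roi_mask_ref):
    """Flag background pixels whose motion-compensated position lands in the foreground.

    mv_field     : array of shape (H, W, 2) with per-pixel (dx, dy), already
                   expanded from the block motion vectors
    roi_mask_ref : boolean ROI mask of the reference frame (True = foreground)
    Returns a boolean mask that is True for unconnected background pixels.
    """
    h, w = roi_mask_ref.shape
    ys, xs = np.mgrid[0:h, 0:w]
    xr = np.clip(xs - mv_field[..., 0], 0, w - 1).astype(int)
    yr = np.clip(ys - mv_field[..., 1], 0, h - 1).astype(int)
    return roi_mask_ref[yr, xr]   # True where the reference pixel lies in the ROI
```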
With reference to Figure 4, the main formulas of the wavelet filtering based on the LG53 (5/3) filter are as follows:
$$H_i = hweight[0] \cdot MAP_{2i+1 \to 2i}(A_i) + hweight[1] \cdot B_i + hweight[2] \cdot MAP_{2i+1 \to 2i+2}(A_{i+1}) \qquad (1)$$
$$L_i = lweight[1] \cdot A_i + lweight[0] \cdot MAU_{2i \to 2i-1}(H_{i-1}) + lweight[2] \cdot MAU_{2i \to 2i+1}(H_i) \qquad (2)$$
where: $H_i$ is the filtered value of the high-frequency frame obtained after temporal filtering;
$L_i$ is the filtered value of the low-frequency frame obtained after temporal filtering;
$B_i$ is a pixel value in an odd frame;
$A_i$ is a pixel value in an even frame;
MAP is the predict step: the pixel in the following brackets is the pixel corresponding to $B_i$ along the motion trajectory;
MAU is the update step: the pixel in the following brackets is the pixel corresponding to $A_i$ along the motion trajectory.
hweight[0] = -0.42389563, hweight[1] = 0.84779125, hweight[2] = -0.42389563
lweight[0] = 0.36115757, lweight[1] = 1.2247449, lweight[2] = 0.36115757
Detailed process: First, following the usual lifting procedure, the temporal video frame sequence is split into two groups, the A frames and the B frames (i.e., grouped into even and odd frames). For frame $B_i$ of the B group, forward and backward motion estimation is performed against frames $A_i$ and $A_{i+1}$ of the A group, as shown in the figure.
Second, on the basis of the completed motion estimation, each pixel in frame $B_i$ can find its corresponding pixel in frames $A_i$ and $A_{i+1}$ along the motion estimation direction (in the ideal case; the other cases are discussed later). Prediction according to formula (1) then gives the filtered value of the corresponding pixel in the high-frequency frame $H_i$.
Third, on the basis of the obtained high-frequency frame sequence H, for frame $A_i$ of the A group, forward and backward motion estimation is performed against frames $H_{i-1}$ and $H_i$; obviously the two borders of the video sequence are not considered here and are discussed later.
Finally, on the basis of the completed motion estimation, each pixel in frame $A_i$ can find its corresponding pixel in frames $H_{i-1}$ and $H_i$ along the motion estimation direction (in the ideal case; the other cases are discussed later). Updating according to formula (2) then gives the filtered value of the corresponding pixel in the low-frequency frame $L_i$.
The above steps are repeated on the resulting low-frequency frame sequence until only the desired number of low-frequency frames remains in the sequence (for example, finally only one low-frequency frame).
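A minimal, hedged sketch of one decomposition level using formulas (1) and (2) (zero motion is assumed, so the MAP/MAU mappings degenerate to the co-located pixel; with real motion each A or H frame would first be warped along the per-pixel trajectories; all names other than the weights are illustrative):

```python
# Lifting weights as given in the description.
HWEIGHT = (-0.42389563, 0.84779125, -0.42389563)
LWEIGHT = (0.36115757, 1.2247449, 0.36115757)

def mctf_level(frames):
    """One temporal decomposition level of the 5/3 lifting scheme (zero-motion case).

    frames : list of frames (2-D arrays or scalars supporting * and +);
             even indices are the A frames, odd indices the B frames.
    Returns (low_frames, high_frames).
    """
    a, b = frames[0::2], frames[1::2]
    # Predict step, formula (1): high-frequency frames from the odd (B) frames.
    high = []
    for i, bi in enumerate(b):
        a_next = a[i + 1] if i + 1 < len(a) else a[i]        # mirror at the right border
        high.append(HWEIGHT[0] * a[i] + HWEIGHT[1] * bi + HWEIGHT[2] * a_next)
    # Update step, formula (2): low-frequency frames from the even (A) frames.
    low = []
    for i, ai in enumerate(a):
        h_prev = high[i - 1] if i >= 1 else (high[0] if high else 0.0)        # mirror at the left border
        h_cur = high[i] if i < len(high) else (high[-1] if high else 0.0)
        low.append(LWEIGHT[1] * ai + LWEIGHT[0] * h_prev + LWEIGHT[2] * h_cur)
    return low, high
```

Calling the same function again on the returned low-frequency frames produces the next decomposition level, which mirrors the repetition described above.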
The temporal wavelet decomposition is now carried out separately for the foreground and the background; the pixel positions of both regions are the absolute positions within the whole frame. For the background, a pixel judged to be unconnected during the region motion compensation is handled as follows: when filtering produces the high-frequency frame, the value at the corresponding pixel position is zero; when producing the low-frequency frame, the value at the corresponding pixel position is the value of the pixel itself. After the low- and high-frequency frames of the background and the foreground are obtained, they can be linearly superposed to obtain the low- and high-frequency frames of the whole frame. Because the motion estimation has fractional-pixel precision, interpolation is needed whenever the filtering requires a pixel value at a fractional position; since the pixels used for interpolation might fall inside the foreground, a boundary check is required. The ring-shaped (回-shaped) region around the ROI is divided into 8 sub-regions, each with its own criterion. In the figure, vx is the abscissa of the starting point (top-left corner) of the region of interest, vy is its ordinate, hor is the width of the video, ver is the height of the video, hor_content is the width of the region of interest, and ver_content is its height. For regions 1, 3, 5 and 7 the upper and left boundaries are 0, the lower boundary is ver and the right boundary is hor; for region 2 the upper and left boundaries are 0, the lower boundary is vy and the right boundary is hor; for region 4 the upper boundary is vy + ver_content, the left boundary is 0, the lower boundary is ver and the right boundary is hor; for region 6 the upper boundary is 0, the left boundary is vx + hor_content, the lower boundary is ver and the right boundary is hor; for region 8 the upper and left boundaries are 0, the lower boundary is ver and the right boundary is vx. These bounds are summarized in the table below.
Region | Upper boundary   | Left boundary    | Lower boundary | Right boundary
1      | 0                | 0                | ver            | hor
2      | 0                | 0                | vy             | hor
3      | 0                | 0                | ver            | hor
4      | vy + ver_content | 0                | ver            | hor
5      | 0                | 0                | ver            | hor
6      | 0                | vx + hor_content | ver            | hor
7      | 0                | 0                | ver            | hor
8      | 0                | 0                | ver            | vx
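The table maps directly onto a lookup; a small sketch (the function name and the return order are assumptions) might be:

```python
def interpolation_bounds(region, vx, vy, hor, ver, hor_content, ver_content):
    """Return (upper, left, lower, right) clipping bounds for fractional-pel
    interpolation, for one of the 8 sub-regions of the ring around the ROI.

    vx, vy                   : top-left corner of the region of interest
    hor, ver                 : width and height of the video frame
    hor_content, ver_content : width and height of the region of interest
    """
    bounds = {
        1: (0, 0, ver, hor),
        2: (0, 0, vy, hor),
        3: (0, 0, ver, hor),
        4: (vy + ver_content, 0, ver, hor),
        5: (0, 0, ver, hor),
        6: (0, vx + hor_content, ver, hor),
        7: (0, 0, ver, hor),
        8: (0, 0, ver, vx),
    }
    return bounds[region]
```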
With reference to Figure 5, the boundary of the region of interest for each frame of the original video is provided by the segmentation result. Each subsequent level, however, operates on the low-frequency frames produced by the level above, and the boundaries of these low-frequency frames are obtained from that upper level: a low-frequency frame takes the boundary of the corresponding even frame and a high-frequency frame takes the boundary of the corresponding odd frame, transferred downward level by level.
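A minimal sketch of this downward transfer (names are assumptions): the boundary list of the current level is simply split by frame parity for the next level, so no re-segmentation is needed:

```python
def propagate_roi_boundaries(boundaries):
    """Propagate per-frame ROI boundary descriptions down one decomposition level.

    boundaries : list of boundary descriptions, one per frame of the current level
    Returns (low_boundaries, high_boundaries): the i-th low-frequency frame takes
    the boundary of even frame 2i, the i-th high-frequency frame that of odd
    frame 2i+1.
    """
    low = boundaries[0::2]    # boundaries of the even frames
    high = boundaries[1::2]   # boundaries of the odd frames
    return low, high
```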

Claims (6)

1. A motion compensated temporal filtering method based on a region of interest, characterized by comprising the following steps:
1) the video sequence is segmented and tracked to obtain the boundary of the region of interest;
2) the obtained boundary divides the original video into background and foreground, and motion estimation of the region of interest (ROI) is carried out to obtain motion vector trees for the background and the foreground;
3) the obtained motion vector trees are used in ROI motion compensation to obtain the motion trajectory of each pixel in the background and the foreground;
4) ROI boundary transfer yields the region-of-interest boundary description corresponding to each level of low-frequency frames;
5) with the obtained per-level ROI boundaries and per-pixel motion trajectories, motion compensated temporal filtering of the ROI effectively removes the temporal redundancy of the video and at the same time provides temporal scalability.
2. The motion compensated temporal filtering method based on a region of interest according to claim 1, characterized in that the video segmentation uses manual labeling or an image segmentation algorithm to extract the region of interest (ROI) from the first frame of the original video, then applies a video tracking algorithm, and finally divides the video into foreground and background.
3. The motion compensated temporal filtering method based on a region of interest according to claim 1, characterized in that the ROI motion estimation performs motion estimation separately on the foreground and background regions of a group of frames (GOP); the foreground part is treated as a new group of region images, but the position of each pixel is taken as its absolute coordinate within the whole frame, while for the background part the motion estimation operates over the whole frame, so that two motion vector trees, one for the foreground and one for the background, are obtained.
4. The motion compensated temporal filtering method based on a region of interest according to claim 1, characterized in that the ROI motion compensation is performed after motion estimation: the block motion vectors are mapped to pixels and motion compensation is then carried out for each pixel; for the foreground, since motion estimation is carried out independently on the foreground elements, every motion-compensated pixel remains within the foreground, whereas for the background the corresponding position of some pixels falls inside the foreground after motion compensation, and to guarantee perfect reconstruction these pixels are all treated as unconnected pixels.
5. The motion compensated temporal filtering method based on a region of interest according to claim 1, characterized in that the ROI boundary transfer solves the following problem: the boundary description of the region of interest of the original video is provided by the video segmentation result, but the low-frequency frames of the other levels are not segmented again and still need boundaries; when the filtering of each level is performed, in addition to outputting the corresponding low- and high-frequency frames, the boundary descriptions of the regions of interest corresponding to these frames are also output, where a low-frequency frame takes the boundary of the corresponding original even frame and a high-frequency frame takes the boundary of the corresponding original odd frame.
6. The motion compensated temporal filtering method based on a region of interest according to claim 1, characterized in that the ROI motion compensated temporal filtering performs the temporal wavelet decomposition separately for the foreground and the background along the motion trajectories of the region pixels; when processing the background, the special class of unconnected pixels is handled by setting the high-frequency frame value to zero and keeping the pixel value in the low-frequency frame, which yields the corresponding low- and high-frequency frames; when fractional-precision interpolation is performed, the ring-shaped (回-shaped) region around the ROI is divided into different parts and a boundary check is applied to each; the foreground and background low- and high-frequency frames obtained after the wavelet decomposition can be directly linearly superposed to obtain the low- and high-frequency frames of the whole frame.
CN 200810236531 2008-12-30 2008-12-30 Time-domain filtering method based on interested region motion compensation Expired - Fee Related CN101478678B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 200810236531 CN101478678B (en) 2008-12-30 2008-12-30 Time-domain filtering method based on interested region motion compensation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN 200810236531 CN101478678B (en) 2008-12-30 2008-12-30 Time-domain filtering method based on interested region motion compensation

Publications (2)

Publication Number Publication Date
CN101478678A true CN101478678A (en) 2009-07-08
CN101478678B CN101478678B (en) 2011-06-01

Family

ID=40839301

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 200810236531 Expired - Fee Related CN101478678B (en) 2008-12-30 2008-12-30 Time-domain filtering method based on interested region motion compensation

Country Status (1)

Country Link
CN (1) CN101478678B (en)

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101945287A (en) * 2010-10-14 2011-01-12 杭州华三通信技术有限公司 ROI encoding method and system thereof
CN101945287B (en) * 2010-10-14 2012-11-21 浙江宇视科技有限公司 ROI encoding method and system thereof
CN102043949B (en) * 2010-12-28 2015-02-11 天津市亚安科技股份有限公司 Method for searching region of interest (ROI) of moving foreground
CN102043949A (en) * 2010-12-28 2011-05-04 天津市亚安科技电子有限公司 Method for searching region of interest (ROI) of moving foreground
CN102075757A (en) * 2011-02-10 2011-05-25 北京航空航天大学 Video foreground object coding method by taking boundary detection as motion estimation reference
CN102075757B (en) * 2011-02-10 2013-08-28 北京航空航天大学 Video foreground object coding method by taking boundary detection as motion estimation reference
CN102438152A (en) * 2011-12-29 2012-05-02 中国科学技术大学 Scalable video coding (SVC) fault-tolerant transmission method, coder, device and system
CN106937118A (en) * 2017-03-13 2017-07-07 西安电子科技大学 A kind of bit rate control method being combined based on subjective area-of-interest and time-space domain
CN106937118B (en) * 2017-03-13 2019-09-13 西安电子科技大学 A kind of bit rate control method combined based on subjective area-of-interest and time-space domain
CN111369592A (en) * 2020-03-13 2020-07-03 浙江工业大学 Rapid global motion estimation method based on Newton interpolation
CN112233128A (en) * 2020-10-15 2021-01-15 推想医疗科技股份有限公司 Image segmentation method, model training method, device, medium, and electronic device
CN112995678A (en) * 2021-02-22 2021-06-18 深圳创维-Rgb电子有限公司 Video motion compensation method and device and computer equipment
CN113259662A (en) * 2021-04-16 2021-08-13 西安邮电大学 Code rate control method based on three-dimensional wavelet video coding
CN114302137A (en) * 2021-12-23 2022-04-08 北京达佳互联信息技术有限公司 Time domain filtering method and device for video, storage medium and electronic equipment
CN114302137B (en) * 2021-12-23 2023-12-19 北京达佳互联信息技术有限公司 Time domain filtering method and device for video, storage medium and electronic equipment

Also Published As

Publication number Publication date
CN101478678B (en) 2011-06-01

Similar Documents

Publication Publication Date Title
CN101478678B (en) Time-domain filtering method based on interested region motion compensation
CN101420618B (en) Adaptive telescopic video encoding and decoding construction design method based on interest zone
US8644384B2 (en) Video coding reference picture prediction using information available at a decoder
CN107105278B (en) The video coding and decoding system that motion vector automatically generates
CN1113541C Segmented picture coding method and system, and corresponding decoding method and system
CN102595135B (en) Method and device for scalable video coding
CN100588257C (en) Scalable video coding with grid motion estimation and compensation
US20070140350A1 (en) Moving-picture layered coding and decoding methods, apparatuses, and programs
CN105338357B (en) A kind of distributed video compressed sensing decoding method
CN106507116B (en) A kind of 3D-HEVC coding method predicted based on 3D conspicuousness information and View Synthesis
US5675669A (en) Apparatus for encoding/decoding an image signal having a still object
JPH07203435A (en) Method and apparatus for enhancing distorted graphic information
CN104539961B (en) Gradable video encoding system based on the gradual dictionary learning of hierarchy
CN101841723B (en) Perceptual video compression method based on JND and AR model
CN105120290A (en) Fast coding method for depth video
CN105612751A (en) Systems and methods for inter-layer RPS derivation based on sub-layer reference prediction dependency
CN102547282B (en) Extensible video coding error hiding method, decoder and system
US10965958B2 (en) Mixed domain collaborative post filter for lossy still image coding
Bian et al. Wireless point cloud transmission
CN103220533A (en) Method for hiding loss errors of three-dimensional video macro blocks
US20070140335A1 (en) Method of encoding video signals
WO2017004883A1 (en) Time-domain information-based adaptive video pre-processing method
CN113542753A (en) AVS3 video coding method and encoder
CN107071447A (en) A kind of correlated noise modeling method based on two secondary side information in DVC
CN103813149B (en) A kind of image of coding/decoding system and video reconstruction method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20110601

Termination date: 20161230

CF01 Termination of patent right due to non-payment of annual fee