CN104125471A - Video image compression method - Google Patents

Video image compression method

Info

Publication number
CN104125471A
CN104125471A (application CN201410385560.3A)
Authority
CN
China
Prior art keywords
motion
motion vector
frame
block
vector
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201410385560.3A
Other languages
Chinese (zh)
Other versions
CN104125471B (en)
Inventor
高冬
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
CHENGDU RUIBO HUICHUANG INFORMATION TECHNOLOGY Co Ltd
Original Assignee
CHENGDU RUIBO HUICHUANG INFORMATION TECHNOLOGY Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by CHENGDU RUIBO HUICHUANG INFORMATION TECHNOLOGY Co Ltd
Priority to CN201410385560.3A
Publication of CN104125471A
Application granted
Publication of CN104125471B
Legal status: Active

Landscapes

  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The invention provides a video image compression method comprising the following steps: performing global motion estimation using a motion vector field; segmenting the local motion in the video; correcting the ROI (Region of Interest) image; and performing video coding based on the corrected ROI image. With this method, the ROI in a video can be detected accurately and completely, and the coding bit rate is reduced while subjective quality is maintained.

Description

Video image compression method
Technical field
The present invention relates to video coding, and in particular to a video image compression method.
Background art
With the development of information technology, the amount of video information people encounter in daily life has grown enormously, and efficiently extracting salient objects from video has attracted increasing research attention. Regions of interest (ROI) are widely used in video signal processing, for example in video retrieval, video compression, video surveillance, and video tracking. In video compression, because video resolutions keep increasing, efficient video compression methods are a focus of research. At the same time, video compression methods that incorporate a human visual model are a key technology for next-generation video coding, so ROI detection, as an important aspect of human visual modeling, is particularly important.
Because ROI techniques are widely applicable in video signal processing, their development is of great significance. Current image ROI methods mainly compute saliency from image features such as color and brightness, but they do not exploit the motion features of video, so applying image ROI methods directly to video detection gives poor results. Research on video ROI methods remains limited, existing methods are computationally complex, and the prior art does not consider the texture characteristics of video or the integrity of the regions of human visual interest, resulting in low compression ratios or poor subjective quality.
Therefore, for the above problems in the related art, no effective solution has yet been proposed.
Summary of the invention
To solve the above problems of the prior art, the present invention proposes a video image compression method, comprising:
Step 1: performing global motion estimation using the motion vector field in the video bitstream;
Step 2: after global motion estimation on the motion vector field, segmenting the local motion in the video;
Step 3: correcting the ROI image of the local motion extracted against the obtained global motion background;
Step 4: performing video coding based on the corrected ROI image.
Preferably, said step 1 adopts a projection model whose parameter is an 8-dimensional vector v = [v_0, v_1, ..., v_7]; the perspective transform of this model is defined as:
x_R = (v_0 x_C + v_1 y_C + v_2) / (v_6 x_C + v_7 y_C + 1);
y_R = (v_3 x_C + v_4 y_C + v_5) / (v_6 x_C + v_7 y_C + 1);
where (x_C, y_C) and (x_R, y_R) are the coordinates in the current frame and the reference frame, respectively; for each block at coordinate (x_C, y_C) in the current frame with corresponding motion model v, its motion components are defined as:
V_X(x_C, y_C; v) = x_R − x_C
V_Y(x_C, y_C; v) = y_R − y_C
where V_X and V_Y are the horizontal and vertical components of the motion vector V, respectively;
and first-type outliers and second-type outliers of the global motion model are removed from the vector field, the first-type outliers being motion vector noise from motion estimation during video coding, and the second-type outliers being motion vectors that do not conform to the background motion model.
Preferably, said removing of the first-type and second-type outliers of the global motion model from the vector field further comprises:
Step 3.1: comparing the magnitude of the current motion vector with that of its 8 neighboring motion vectors against a preset threshold, and removing the worst-fitting motion vector outliers according to:
‖V_C − V_N‖ / ‖V_C‖ < T_MV, where V_C is the current motion vector, V_N is a neighboring motion vector, and T_MV is the threshold;
Step 3.2: iterative computation with joint motion segmentation, wherein in the first round of iteration the moving-region segmentation map is predicted from the segmentation result of the previous frame, and in subsequent iterations the segmentation map of the current frame obtained by the previous round of iteration is used;
global motion is compensated through V(x, y, t), i.e. V_COM(x, y, t) = V(x, y, t) − V(x, y; v_t), where V_COM(x, y, t) is the compensated motion vector of the block at coordinate (x, y) in frame t, V(x, y, t) is the motion vector of the block at (x, y) in frame t, and v_t is the global motion parameter vector in a given iteration for frame t;
after the motion segmentation of frame t is determined, the coordinates of the motion-vector outlier blocks in frame t+1 are obtained by prediction: if V(x_t, y_t, t) is detected as a second-type outlier, the corresponding block (x_{t+1}, y_{t+1}) in frame t+1 is predicted as:
(x_{t+1}, y_{t+1}) = (x_t, y_t) − V(x_t, y_t, t)
the block with the largest coverage area is taken as the predicted outlier block, and before the first round of global motion estimation iteration for frame t+1 the motion vectors of these outlier blocks are all removed;
Step 3.3: after the first-type and second-type outliers are removed, estimating the parameter vector v_t from the remaining motion vectors of frame t; for a given v_t, letting the true motion vector of the block at (x, y) in the frame be V(x, y, t), and finding the v_t that minimizes the difference between V(x, y; v_t) and V(x, y, t):
v_t = arg min_v Σ ‖V(x, y, t) − V(x, y; v)‖², where the motion vectors used are those remaining after all outliers are removed.
Preferably, said step 3 further comprises:
Step 4.1: for the i-th block, if it is an ROI region, setting the weight M_i to 128, and otherwise setting the weight to 0;
Step 4.2: letting B_i be the number of coded bits of the i-th block, finding the whole-frame maximum B_max, and mapping B_i to the range (0, 128) as an additional weight;
Step 4.3: adding the additional weight to the ROI weight W_i according to:
W_i = M_i + 127 (B_i / B_max)
Step 4.4: obtaining the final corrected ROI image.
Preferably, said step 4 further comprises:
adopting adaptive frequency coefficient suppression, wherein for each transform unit C_P = C ∘ W is defined, where C is the frequency coefficient matrix of the transform unit, C_P is the frequency coefficient matrix after suppression, ∘ denotes element-wise multiplication of two matrices, and W is the frequency coefficient suppression matrix.
Compared with the prior art, the present invention has the following advantages:
texture characteristics and the integrity of the regions of human visual interest are considered during motion identification and coding, so compared with conventional methods the algorithm detects the regions of interest in video more accurately and completely, and it reduces the coding bit rate while maintaining almost identical subjective quality.
Brief description of the drawings
Fig. 1 is a flowchart of the video image compression method according to an embodiment of the present invention.
Detailed description
The principles of the invention are described below, together with the accompanying drawings, by a detailed description of one or more embodiments. The invention is described in conjunction with such embodiments, but is not limited to any particular embodiment; its scope is defined only by the claims, and it covers many alternatives, modifications, and equivalents. Many specific details are set forth in the following description to provide a thorough understanding of the invention; they are provided for exemplary purposes, and the invention may be practiced according to the claims without some or all of these details.
Because the goal of coding is to achieve higher compression efficiency at the same video quality, this work performs saliency analysis in the compressed domain of the H.265 bitstream, from the perspective of human visual perception, to obtain the ROI image.
One aspect of the present invention provides a video image compression method. Fig. 1 is a flowchart of the method according to an embodiment of the present invention. As shown in Fig. 1, the concrete steps of the invention are as follows:
1 Global motion estimation
The present invention adopts a perspective projection model with 8 parameters, given by the 8-dimensional vector v = [v_0, v_1, ..., v_7]. Let (x_C, y_C) and (x_R, y_R) be the coordinates in the current frame and the reference frame, respectively; the perspective transform is defined as:
x_R = (v_0 x_C + v_1 y_C + v_2) / (v_6 x_C + v_7 y_C + 1);
y_R = (v_3 x_C + v_4 y_C + v_5) / (v_6 x_C + v_7 y_C + 1). (Formula 1)
Let V_X and V_Y denote the horizontal and vertical components of the motion vector V, respectively. For each block at coordinate (x_C, y_C) in the current frame with corresponding motion model v, these components are defined as:
V_X(x_C, y_C; v) = x_R − x_C;
V_Y(x_C, y_C; v) = y_R − y_C. (Formula 2)
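As a minimal sketch of Formulas 1 and 2 (assuming NumPy; the parameter values and block coordinate in the example are illustrative, not taken from the patent):

```python
import numpy as np

def perspective_motion(v, xc, yc):
    """Motion components V_X, V_Y of the block at (xc, yc) under the
    8-parameter perspective model v = [v0, ..., v7] (Formulas 1 and 2)."""
    denom = v[6] * xc + v[7] * yc + 1.0
    xr = (v[0] * xc + v[1] * yc + v[2]) / denom
    yr = (v[3] * xc + v[4] * yc + v[5]) / denom
    return xr - xc, yr - yc

# A pure translation by (2, -1) corresponds to v = [1, 0, 2, 0, 1, -1, 0, 0].
v = np.array([1.0, 0.0, 2.0, 0.0, 1.0, -1.0, 0.0, 0.0])
print(perspective_motion(v, 16.0, 16.0))  # -> (2.0, -1.0)
```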
Since the purpose of global motion estimation is precisely to derive the parameter vector v from the motion vectors, and the H.265 bitstream already provides the motion vector field, the motion vectors do not need to be re-estimated. However, some motion vectors in the field do not fit the global motion model; these are called outliers. To improve the accuracy of global motion estimation, they must be removed. In the present invention, these outliers are divided into the following two classes.
Type 1: motion vector noise. This noise usually arises because motion estimation during video coding is inaccurate in certain regions and cannot capture the true motion, for example regions with little or no texture, the boundary regions of moving objects, and regions with repetitive texture.
Type 2: motion vectors that do not conform to the background motion model. These fall into two kinds: the true motion vectors of objects moving relative to the background, and the motion vectors of stationary objects so close to the camera that they blend into the background. Such outliers usually appear in spatially contiguous regions of a frame covered by these two kinds of objects, and are similar to their neighboring vectors.
To estimate the global motion accurately, these two types of motion vector outliers must be removed as far as possible; the concrete steps are as follows.
Step 1: remove Type 1 motion vector outliers. Motion vectors drawn from a single motion model usually exhibit strong spatial correlation. The present invention compares the magnitude of the current motion vector with that of its 8 neighboring motion vectors and removes the worst-fitting motion vector outliers against a preset threshold. The concrete criterion is ‖V_C − V_N‖ / ‖V_C‖ < T_MV, where V_C is the current motion vector, V_N is a neighboring motion vector, and T_MV is the threshold, set to 0.15 in the present invention.
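A minimal sketch of this filter, assuming NumPy and a dense block motion vector field; the text states the criterion but not how many of the 8 neighbours must satisfy it, so keeping a vector when at least one neighbour does is an assumption:

```python
import numpy as np

T_MV = 0.15  # threshold from the text

def remove_type1_outliers(mvf):
    """mvf: (H, W, 2) array of block motion vectors. Returns a boolean
    mask of kept vectors; a vector is dropped when no valid 8-neighbour
    satisfies ||V_C - V_N|| / ||V_C|| < T_MV (assumption: one suffices)."""
    h, w, _ = mvf.shape
    keep = np.ones((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            vc = mvf[y, x]
            norm_c = np.linalg.norm(vc)
            if norm_c == 0.0:
                continue  # a zero vector gives no evidence either way
            neighbours = [
                mvf[y + dy, x + dx]
                for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                if (dy, dx) != (0, 0)
                and 0 <= y + dy < h and 0 <= x + dx < w
            ]
            if not any(np.linalg.norm(vc - vn) / norm_c < T_MV
                       for vn in neighbours):
                keep[y, x] = False
    return keep
```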
Step 2: detect and remove Type 2 motion vector outliers using iterative computation with joint motion segmentation. In the first round of iteration, the moving-region segmentation map is predicted from the segmentation result of the previous frame; in subsequent iterations, the segmentation map of the current frame obtained by the previous round of iteration is used. Let v_t be the global motion parameter vector in a given iteration for frame t, and V(x, y, t) the motion vector of the block at coordinate (x, y) in frame t. Global motion can be compensated through V(x, y, t), i.e. V_COM(x, y, t) = V(x, y, t) − V(x, y; v_t), where V_COM(x, y, t) is the compensated motion vector of the block at (x, y) in frame t, and V(x, y; v_t) is computed by Formulas (1) and (2).
After global motion compensation, motion segmentation is performed on the compensated motion vectors. Since the segmentation result indicates the moving regions in the current frame, the motion vectors in those regions are taken as Type 2 motion vector outliers and removed before the next round of global motion estimation iteration.
In addition, after the motion segmentation of frame t is determined, the coordinates of the motion-vector outlier blocks in frame t+1 can be obtained by prediction. If V(x_t, y_t, t) is detected as a Type 2 motion vector outlier, the corresponding block (x_{t+1}, y_{t+1}) in frame t+1 is predicted as:
(x_{t+1}, y_{t+1}) = (x_t, y_t) − V(x_t, y_t, t).
However, the predicted block at (x_{t+1}, y_{t+1}) in frame t+1 may cover several blocks at once, so in the present method the block with the largest coverage area is taken as the predicted outlier block. Before the first round of global motion estimation iteration for frame t+1, the motion vectors of these outlier blocks are all removed. With such initialization and global motion compensation, both the motion segmentation and the removal of Type 2 motion vector outliers in subsequent iterations become more accurate.
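A sketch of this propagation step under assumed data structures (a dict-based motion vector field and a fixed block grid); snapping the displaced position to the nearest block stands in for the largest-coverage rule and is an approximation:

```python
def predict_outlier_blocks(outliers_t, mvf_t, block=16):
    """Propagate Type 2 outlier block positions from frame t to t+1 via
    (x_{t+1}, y_{t+1}) = (x_t, y_t) - V(x_t, y_t, t).
    outliers_t: iterable of (x, y) block positions detected in frame t;
    mvf_t: dict mapping (x, y) -> (vx, vy)."""
    predicted = set()
    for (x, y) in outliers_t:
        vx, vy = mvf_t[(x, y)]
        px, py = x - vx, y - vy
        # nearest block on the grid approximates the largest-coverage block
        predicted.add((block * round(px / block), block * round(py / block)))
    return predicted
```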
Step 3: global motion model parameter estimation. After all motion vector outliers are removed, the parameter vector v_t is estimated from the remaining motion vectors of frame t. For a given v_t, the motion vector V(x, y; v_t) of the block at (x, y) in the frame can be computed through Formulas (1) and (2), while the true motion vector is V(x, y, t). The goal of global motion estimation is to find the v_t that minimizes the difference between the two.
Squared error is the most common error criterion in global motion estimation, so the problem reduces to:
v_t = arg min_v Σ ‖V(x, y, t) − V(x, y; v)‖², where the motion vectors used are those remaining after all types of outliers have been removed. Finding v_t thus becomes a model-fitting process.
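A minimal sketch of this model fit, assuming NumPy and SciPy; the text specifies only the least-squares criterion, so the choice of scipy.optimize.least_squares as solver is an assumption:

```python
import numpy as np
from scipy.optimize import least_squares

def fit_global_motion(coords, mvs, v0=None):
    """v_t = argmin_v sum ||V(x,y,t) - V(x,y;v)||^2 over the motion
    vectors that survived outlier removal.
    coords: (N, 2) block coordinates; mvs: (N, 2) observed vectors."""
    xc, yc = coords[:, 0].astype(float), coords[:, 1].astype(float)

    def residuals(v):
        denom = v[6] * xc + v[7] * yc + 1.0
        vx = (v[0] * xc + v[1] * yc + v[2]) / denom - xc  # V_X(x,y;v)
        vy = (v[3] * xc + v[4] * yc + v[5]) / denom - yc  # V_Y(x,y;v)
        return np.concatenate([vx - mvs[:, 0], vy - mvs[:, 1]])

    if v0 is None:
        # identity transform (zero motion) as the starting point
        v0 = np.array([1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0])
    return least_squares(residuals, v0).x
```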
2 Motion segmentation
After global motion compensation of the motion vector field, motion segmentation is performed as follows.
Step 1: start with a single cluster (all motion vectors of the whole frame) and compute its center V_c = (Σ_k V_k)/N; then generate two new clusters with centers V_c ± V_c/2.
Step 2: assign every motion vector of the frame to the nearest existing cluster by similarity, then update the center of the i-th cluster as the mean of its members, V_i = (Σ_{V_k ∈ C_i} V_k)/N_i, where N_i is the number of motion vectors in cluster C_i.
Step 3: compute the distortion of each cluster, and split the cluster C_m with the maximum distortion into two clusters centered at its center ± P, where P = ((X_max − X_min)/2(M−1), (Y_max − Y_min)/2(M−1)), M is the number of clusters before the split, and X_min, X_max, Y_min, Y_max are the minimum and maximum horizontal and vertical components among the cluster center vectors.
Step 4: repeat steps 2 and 3 until the change in cluster distortion is less than a predefined threshold (in the present invention, 5% of the initial distortion), or until the smallest cluster falls below a predefined threshold (5% of the total number of motion vectors).
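A compact sketch of steps 1 through 4, assuming NumPy; the 5% stopping constants come from the text, while the max_clusters safeguard is an added assumption:

```python
import numpy as np

def segment_motion(mvs, max_clusters=8):
    """Splitting clustering of globally compensated motion vectors.
    mvs: (N, 2) array. Returns per-vector labels and cluster centers."""
    v_c = mvs.mean(axis=0)                                   # step 1
    centers = np.stack([v_c + v_c / 2.0, v_c - v_c / 2.0])
    initial = prev = None
    while True:
        # step 2: nearest-centre assignment, then centre update
        labels = np.argmin(((mvs[:, None] - centers[None]) ** 2).sum(-1), 1)
        dist = np.zeros(len(centers))
        for i in range(len(centers)):
            members = mvs[labels == i]
            if len(members):
                centers[i] = members.mean(axis=0)
                dist[i] = ((members - centers[i]) ** 2).sum()
        total = dist.sum()
        if initial is None:
            initial = total
        smallest = np.bincount(labels, minlength=len(centers)).min()
        # step 4: stop on a small distortion change or a too-small cluster
        if ((prev is not None and abs(prev - total) < 0.05 * initial)
                or smallest < 0.05 * len(mvs)
                or len(centers) >= max_clusters):
            return labels, centers
        prev = total
        # step 3: split the maximum-distortion cluster by +/- P
        m = int(np.argmax(dist))
        M = len(centers)
        lo, hi = centers.min(axis=0), centers.max(axis=0)
        P = (hi - lo) / (2.0 * max(M - 1, 1))
        centers = np.vstack([np.delete(centers, m, axis=0),
                             centers[m] + P, centers[m] - P])
```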
3 Visual ROI image correction
After the joint global motion estimation and motion segmentation, setting the background region weight to 0 and the foreground region weight to 128 yields the motion visual ROI image of the local-motion foreground extracted against the global motion background. However, a visual ROI image derived only from motion vectors considers only the motion features of the video sequence; it ignores the complex-texture regions that human eyes attend to and the integrity of foreground moving objects. Since the number of coded bits largely reflects the complexity and activity of a region, the present invention corrects the motion visual ROI image using the distribution of coded bits in the bitstream, as follows:
a. For the i-th block, if it is a motion-salient region, set the weight M_i to 128; otherwise set the weight to 0.
b. Let B_i be the number of coded bits of the i-th block; find the whole-frame maximum B_max, then map B_i to the range (0, 128) as an additional weight.
c. Add the additional weight to the ROI weight according to:
W_i = M_i + 127 (B_i / B_max)
This yields the final corrected visual saliency image.
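A minimal sketch of this correction, assuming per-block NumPy arrays; the names are illustrative:

```python
import numpy as np

def correct_roi(motion_mask, bits):
    """Fuse the motion ROI mask with coded-bit counts (steps a-c above).
    motion_mask: boolean array, True for motion-salient blocks;
    bits: per-block coded-bit counts with the same shape."""
    m = np.where(motion_mask, 128.0, 0.0)   # step a: M_i
    extra = 127.0 * bits / bits.max()       # step b: map B_i into (0, 128)
    return m + extra                        # step c: W_i = M_i + 127*B_i/B_max
```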
4 Coding based on the ROI image
The coding method of the present invention applies adaptive frequency coefficient suppression to non-ROI regions. For each transform unit, define C_P = C ∘ W, where C is the frequency coefficient matrix of the transform unit, C_P is the frequency coefficient matrix after suppression, ∘ denotes element-wise multiplication of two matrices, and W is the frequency coefficient suppression matrix:
W = [ w_0 w_1 w_2 w_3
      w_1 w_2 w_3 w_4
      w_2 w_3 w_4 w_5
      w_3 w_4 w_5 w_6 ]
Each w_i (i ∈ [0, 6]) takes the value 0 or 1 and satisfies the constraint w_{i+1} ≤ w_i. This constraint expresses that frequency coefficient suppression starts from the high-frequency components and transitions gradually toward the low-frequency components, so there are always 7 possible forms of the suppression matrix. In actual coding, the concrete values of w_i (i ∈ [0, 6]) are decided according to the ROI status of the block.
For example, for the most visually salient region, all w_i (i ∈ [0, 6]) can be set to 1; for the least visually salient region, w_0 can be set to 1 and w_i (i ∈ [1, 6]) to 0. Since an adaptive quadtree coding structure is adopted, DCT transform units from a minimum of 4×4 to a maximum of 32×32 are supported, so for each transform unit size the present invention defines 5 candidate frequency coefficient suppression matrices W(k)_ij, computed from the block coordinates i and j, where N is the block size, taking the values 4, 8, 16 and 32, and k is the index over the 5 candidate matrices. Corresponding to the 5 candidate matrices, the visual perception weight W_TU of a non-salient transform unit TU is also normalized to five levels L_TU:
L_TU = ceil[W_TU / (128 S_TU / 5)],
where S_TU takes the values 1, 4, 16 and 64 for the four block sizes, respectively. The frequency coefficient suppression matrix of the TU is then determined by:
W_TU = W{min[max(L_TU + W_init, 0), 4]},
where W_init is the initial index for selecting the frequency coefficient matrix, used to control the strength of the frequency coefficient suppression; it takes integer values in [−4, 4] and is dynamically updated according to the coding quantization parameter Q_P as
W_init = −(Q_P − C_nonVS)/S_TEP + O_GM,
where C_nonVS, S_TEP and O_GM denote the non-salient-region constant, the step size, and the global motion offset, respectively, determined by factors such as the content characteristics of the video scene. For video sequences containing global motion they are set to 30, 6 and 0, respectively; for video sequences with stationary backgrounds, to 24, 6 and −2.
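A sketch of the suppression itself, assuming NumPy; because the exact formula generating the candidate matrices for each transform unit size does not survive in the source text, the cutoff schedule below is an assumption that merely respects the constraint w_{i+1} ≤ w_i:

```python
import numpy as np

def suppression_matrix(n, cutoff):
    """Suppression matrix for an n x n transform unit: entry (i, j) is
    w_{i+j}, with w_m = 1 for m < cutoff and 0 afterwards, which
    respects the constraint w_{m+1} <= w_m."""
    i, j = np.indices((n, n))
    return (i + j < cutoff).astype(float)

def suppress(coeffs, roi_level):
    """Element-wise suppression C_P = C o W. roi_level in 0..4 selects
    one of 5 candidate matrices, from strongest suppression (only the DC
    coefficient survives) to none; this cutoff schedule is an assumed
    stand-in for the patent's mapping formula."""
    n = coeffs.shape[0]
    cutoffs = np.linspace(1, 2 * n - 1, 5).round().astype(int)
    return coeffs * suppression_matrix(n, cutoffs[roi_level])

# Example: a 4x4 TU at the lowest saliency level keeps only the DC term.
c = np.arange(16, dtype=float).reshape(4, 4)
print(suppress(c, 0))
```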
In summary, the present invention proposes a video image compression method that considers the texture characteristics of video and the integrity of the regions of human visual interest during motion identification and coding. Compared with existing ROI region detection methods, the algorithm detects the regions of interest in video more accurately and completely; compared with traditional algorithms, it reduces the coding bit rate while maintaining almost identical subjective quality.
Obviously, those skilled in the art should appreciate that the above modules or steps of the present invention can be implemented on a general-purpose computing system; they can be concentrated on a single computing system or distributed across a network formed by multiple computing systems; optionally, they can be implemented as program code executable by a computing system, so that they can be stored in a storage system and executed by a computing system. Thus, the present invention is not restricted to any specific combination of hardware and software.
It should be understood that the above embodiments of the present invention are only exemplary illustrations or explanations of the principles of the invention and do not limit the invention. Therefore, any modification, equivalent replacement, improvement, etc. made without departing from the spirit and scope of the present invention shall be included within its protection scope. Furthermore, the appended claims are intended to cover all variations and modifications falling within the scope and boundaries of the claims, or the equivalents of such scope and boundaries.

Claims (5)

1. A video image compression method, characterized by comprising:
Step 1: performing global motion estimation using the motion vector field in the video bitstream;
Step 2: after global motion estimation on the motion vector field, segmenting the local motion in the video;
Step 3: correcting the ROI image of the local motion extracted against the obtained global motion background;
Step 4: performing video coding based on the corrected ROI image.
2. The method according to claim 1, characterized in that said step 1 adopts a projection model whose parameter is an 8-dimensional vector v = [v_0, v_1, ..., v_7], the perspective transform of this model being defined as:
x_R = (v_0 x_C + v_1 y_C + v_2) / (v_6 x_C + v_7 y_C + 1);
y_R = (v_3 x_C + v_4 y_C + v_5) / (v_6 x_C + v_7 y_C + 1);
where (x_C, y_C) and (x_R, y_R) are the coordinates in the current frame and the reference frame, respectively; for each block at coordinate (x_C, y_C) in the current frame with corresponding motion model v, its motion components are defined as:
V_X(x_C, y_C; v) = x_R − x_C
V_Y(x_C, y_C; v) = y_R − y_C
where V_X and V_Y are the horizontal and vertical components of the motion vector V, respectively;
and first-type outliers and second-type outliers of the global motion model are removed from the vector field, the first-type outliers being motion vector noise from motion estimation during video coding, and the second-type outliers being motion vectors that do not conform to the background motion model.
3. The method according to claim 2, characterized in that said removing of the first-type and second-type outliers of the global motion model from the vector field further comprises:
Step 3.1: comparing the magnitude of the current motion vector with that of its 8 neighboring motion vectors against a preset threshold, and removing the worst-fitting motion vector outliers according to:
‖V_C − V_N‖ / ‖V_C‖ < T_MV, where V_C is the current motion vector, V_N is a neighboring motion vector, and T_MV is the threshold;
Step 3.2: iterative computation with joint motion segmentation, wherein in the first round of iteration the moving-region segmentation map is predicted from the segmentation result of the previous frame, and in subsequent iterations the segmentation map of the current frame obtained by the previous round of iteration is used;
global motion is compensated through V(x, y, t), i.e. V_COM(x, y, t) = V(x, y, t) − V(x, y; v_t), where V_COM(x, y, t) is the compensated motion vector of the block at coordinate (x, y) in frame t, V(x, y, t) is the motion vector of the block at (x, y) in frame t, and v_t is the global motion parameter vector in a given iteration for frame t;
after the motion segmentation of frame t is determined, the coordinates of the motion-vector outlier blocks in frame t+1 are obtained by prediction: if V(x_t, y_t, t) is detected as a second-type outlier, the corresponding block (x_{t+1}, y_{t+1}) in frame t+1 is predicted as:
(x_{t+1}, y_{t+1}) = (x_t, y_t) − V(x_t, y_t, t)
the block with the largest coverage area is taken as the predicted outlier block, and before the first round of global motion estimation iteration for frame t+1 the motion vectors of these outlier blocks are all removed;
Step 3.3: after the first-type and second-type outliers are removed, estimating the parameter vector v_t from the remaining motion vectors of frame t; for a given v_t, letting the true motion vector of the block at (x, y) in the frame be V(x, y, t), and finding the v_t that minimizes the difference between V(x, y; v_t) and V(x, y, t):
v_t = arg min_v Σ ‖V(x, y, t) − V(x, y; v)‖², where the motion vectors used are those remaining after all outliers are removed.
4. The method according to claim 3, characterized in that said step 3 further comprises:
Step 4.1: for the i-th block, if it is an ROI region, setting the weight M_i to 128, and otherwise setting the weight to 0;
Step 4.2: letting B_i be the number of coded bits of the i-th block, finding the whole-frame maximum B_max, and mapping B_i to the range (0, 128) as an additional weight;
Step 4.3: adding the additional weight to the ROI weight W_i according to:
W_i = M_i + 127 (B_i / B_max)
Step 4.4: obtaining the final corrected ROI image.
5. The method according to claim 4, characterized in that said step 4 further comprises:
adopting adaptive frequency coefficient suppression, wherein for each transform unit C_P = C ∘ W is defined, where C is the frequency coefficient matrix of the transform unit, C_P is the frequency coefficient matrix after suppression, ∘ denotes element-wise multiplication of two matrices, and W is the frequency coefficient suppression matrix.
CN201410385560.3A 2014-08-07 2014-08-07 Video image compression method Active CN104125471B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410385560.3A CN104125471B (en) 2014-08-07 2014-08-07 Video image compression method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410385560.3A CN104125471B (en) 2014-08-07 2014-08-07 Video image compression method

Publications (2)

Publication Number Publication Date
CN104125471A true CN104125471A (en) 2014-10-29
CN104125471B CN104125471B (en) 2016-01-20

Family

ID=51770715

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410385560.3A Active CN104125471B (en) 2014-08-07 2014-08-07 Video image compression method

Country Status (1)

Country Link
CN (1) CN104125471B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060285770A1 (en) * 2005-06-20 2006-12-21 Jongwoo Lim Direct method for modeling non-rigid motion with thin plate spline transformation
CN101286239A (en) * 2008-04-22 2008-10-15 北京航空航天大学 Aerial shooting traffic video frequency vehicle rapid checking method
CN101420618A (en) * 2008-12-02 2009-04-29 西安交通大学 Adaptive telescopic video encoding and decoding construction design method based on interest zone
CN102148934A (en) * 2011-04-02 2011-08-10 北京理工大学 Multi-mode real-time electronic image stabilizing system

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110692235A (en) * 2017-05-17 2020-01-14 株式会社克利普顿 Image processing apparatus, image processing program, and image processing method
CN111385583A (en) * 2018-12-28 2020-07-07 展讯通信(上海)有限公司 Image motion estimation method, device and computer readable storage medium
CN111385583B (en) * 2018-12-28 2022-04-22 展讯通信(上海)有限公司 Image motion estimation method, device and computer readable storage medium

Also Published As

Publication number Publication date
CN104125471B (en) 2016-01-20


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant