CN101917619B - Quick motion estimation method of multi-view video coding - Google Patents


Info

Publication number: CN101917619B
Authority: CN (China)
Application number: CN 201010262078
Other languages: Chinese (zh)
Other versions: CN101917619A
Inventors: 陈耀武, 朱威, 林翔宇, 周承涛
Current Assignee: Zhejiang University (ZJU)
Original Assignee: Zhejiang University (ZJU)
Legal status: Active; application filed by Zhejiang University (ZJU) and granted as CN101917619B

Landscapes

  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The invention discloses a fast motion estimation method for multi-view video coding, comprising the following steps: applying spatial median filtering to the motion vector field of the coded frame of the adjacent viewpoint; calculating a global disparity vector between the current viewpoint and the adjacent viewpoint using the disparity vector field of the coded frame of the current viewpoint; selecting a reference motion vector from the filtered motion vector field using the global disparity vector; selecting the search center of motion estimation using the reference motion vector and the motion vectors of the spatially adjacent, already-estimated blocks; selecting the search range of the motion estimation using the degree of deviation between the search center and the reference motion vector; and carrying out the final motion search within the search range, selecting the final motion vector of the current block, and completing the selection of the final motion vectors of all blocks by the same method, thereby completing the motion estimation of the multi-view video coding. The method is suitable for the motion estimation of multi-view video encoding and can effectively reduce the amount of computation of motion estimation while maintaining the coding rate-distortion performance.

Description

A fast motion estimation method for multi-view video coding
Technical field
The present invention relates to the field of digital video signal coding, and specifically to a fast motion estimation method for multi-view video coding.
Background technology
Multi-view video is obtained by shooting the same scene with a group of cameras from multiple angles. It is an important input for three-dimensional television (3DTV) and free-viewpoint television (FTV), and provides the user with a visual experience that traditional single-view video cannot match. Multiview Video Coding (MVC) compresses multi-view video to satisfy the huge storage and transmission needs of multiple viewpoints. The Joint Video Team (JVT) has carried out standardization work on MVC as an annex to H.264/AVC and has issued the Joint Multiview Video Model (JMVM) as its verification model.
Inter-frame estimation in MVC is divided into motion estimation and disparity estimation: motion estimation improves the efficiency of temporal prediction, and disparity estimation improves the efficiency of inter-view prediction. Like H.264/AVC, MVC adopts rate-distortion optimization in the inter-frame estimation process (see Wiegand T, Schwarz H, Joch A, Kossentini F, Sullivan G J. Rate-constrained coder control and comparison of video coding standards. IEEE Transactions on Circuits and Systems for Video Technology, 2003, 13(7): 688-703). The inter-frame rate-distortion cost J(s, v) is calculated as shown in formula (I):

J(s, v) = SAD(s, v) + λ_MOTION · R(s, v)   (I)

where s is the video signal of the current block, v is the motion vector or disparity vector of the current block, J(s, v) is the rate-distortion cost of the current block under vector v, SAD is the sum of absolute differences between the video signal s of the current block and the reference video signal pointed to by v, λ_MOTION is the Lagrange multiplier of inter-frame estimation, and R(s, v) is the number of bits consumed to encode the motion vector or disparity vector of the current block. Because a rate-distortion cost must be computed for every search match, and the vector with the minimum rate-distortion cost in the search is then chosen as the best vector, the computational complexity of motion estimation and disparity estimation in MVC is enormous; they consume most of the encoding time and seriously hinder the practical application of MVC.
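The cost of formula (I) can be sketched as follows. The SAD term follows the formula directly; the bit-cost model for R(s, v) and the value of the Lagrange multiplier are illustrative assumptions for this sketch, not the models prescribed by H.264/AVC.

```python
import numpy as np

def sad(current_block, reference_block):
    """Sum of absolute differences between current and reference samples."""
    return int(np.abs(current_block.astype(int) - reference_block.astype(int)).sum())

def mv_bits(v, predictor=(0, 0)):
    """Toy bit-cost model standing in for R(s, v): roughly the signed
    exp-Golomb length of each MV component differenced against a predictor."""
    bits = 0
    for c, p in zip(v, predictor):
        d = abs(c - p)
        bits += 2 * d.bit_length() + 1
    return bits

def rd_cost(current_block, reference_frame, pos, v, lam=4.0):
    """J(s, v) = SAD(s, v) + lambda_MOTION * R(s, v) for candidate MV v.
    pos is the (row, col) of the block; v is (vx, vy) in pixels."""
    y, x = pos[0] + v[1], pos[1] + v[0]
    h, w = current_block.shape
    ref_block = reference_frame[y:y + h, x:x + w]
    return sad(current_block, ref_block) + lam * mv_bits(v)
```

A perfectly matching block at the zero vector then costs only the bits of the vector itself, which is why the minimum-cost vector balances prediction error against vector coding cost.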
Fast motion estimation methods for traditional single-view video coding can be used to reduce the computational complexity of MVC motion estimation, but these methods were all designed for single-view video coding and use only the coded information within a single viewpoint. Because the videos of adjacent viewpoints come from the same scene, motion vectors are strongly correlated between viewpoints; the motion estimation of multi-view video coding can therefore make full use of this inter-view correlation to further reduce its computational complexity. Chinese patent application No. 200910044824.1 discloses a fast motion estimation method based on adaptive adjustment of the multi-view coding search range. According to the spatial correlation of adjacent viewpoints, it extracts the motion information of the adjacent coded views, analyzes the characteristics of the multi-view video, and classifies macroblocks into three types: macroblocks in motion-consistent regions, macroblocks in moderately motion-consistent regions, and macroblocks in complex-motion regions. For regions of simple or moderately consistent motion, the search range can be reduced appropriately; for regions of complex motion, the search range is not restricted. That method uses the motion vectors of the adjacent coded views directly and is therefore easily affected by their noise motion vectors. Moreover, the search-range patterns it provides are limited: the search range is adjusted only per macroblock type and is fixed within each type, so there are only three search-range patterns. The search range can only be set per macroblock type and cannot be adaptively adjusted for each individual macroblock.
Existing fast motion estimation methods for multi-view video coding mainly use disparity vectors to obtain motion information from adjacent coded views in order to accelerate motion estimation. However, these algorithms do not consider the influence of noise motion vectors and noise disparity vectors on their effectiveness. Affected by image noise, the motion vector field obtained by motion estimation contains noise motion vectors that deviate considerably from the true motion state of the objects; likewise, the disparity vectors obtained by disparity estimation can deviate from the true disparity of the objects. If the disparity vector of each block is used to locate the corresponding block in the adjacent viewpoint, the result is easily affected by noise disparity vectors and deviates from the real corresponding block, making the obtained motion information inaccurate.
Summary of the invention
The invention provides a fast motion estimation method for multi-view video coding that effectively reduces the amount of computation while preserving the compression efficiency of the video.
A fast motion estimation method for multi-view video coding comprises the following steps:
(1) Motion vector field filtering: apply spatial median filtering to the motion vector field of the coded frame of the adjacent viewpoint, eliminating the noise vectors in it and obtaining a filtered motion vector field of the adjacent viewpoint's coded frame that reflects the real motion state;
(2) Global disparity vector calculation: use the disparity vector field of the coded frame of the current viewpoint to calculate the global disparity vector between the current viewpoint and the adjacent viewpoint;
(3) Reference motion vector selection: using the global disparity vector between the current viewpoint and the adjacent viewpoint obtained in step (2), choose from the filtered motion vector field obtained in step (1) the vector with the minimum rate-distortion cost as the reference motion vector of the current block;
(4) Search center selection: from the reference motion vector obtained in step (3) and the motion vectors of the spatially adjacent, already-estimated blocks, choose the vector with the minimum rate-distortion cost as the search center of the current block's motion estimation;
(5) Search range selection: using the degree of deviation between the search center obtained in step (4) and the reference motion vector obtained in step (3), choose the horizontal and vertical search ranges of the current block's motion estimation;
(6) Final motion search: perform the final motion search within the search range obtained in step (5) and choose the final motion vector of the current block; any existing search algorithm can be used for the final motion search;
(7) Repeat steps (3)–(6) to obtain the final motion vectors of all blocks, completing the motion estimation of the multi-view video coding.
Further, step (1) comprises:
When performing motion estimation on the coded frame of the adjacent viewpoint, save the motion vectors of all blocks to form the motion vector field of the adjacent viewpoint's coded frame. For each block in this motion vector field, take its motion vector MV′ together with the motion vector MV′_U of the spatially adjacent block above, the motion vector MV′_B of the block below, the motion vector MV′_L of the block on the left and the motion vector MV′_R of the block on the right, and apply the spatial median filtering of formula (II) to obtain the filtered motion vector MV″; the filtered motion vectors MV″ of all blocks together form the filtered motion vector field:

MV″ = median(MV′_U, MV′_B, MV′_L, MV′_R, MV′)   (II)

The filtered motion vector field is smoother than the field before filtering and better reflects the real motion state of the objects.
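The filtering of formula (II) can be sketched as follows, taking the component-wise median of each block's motion vector and its four spatial neighbours. The patent does not specify how the median of vectors is formed; component-wise is an illustrative assumption. Missing neighbours at the image boundary are replaced by the zero vector, as in the embodiment below.

```python
import numpy as np

def median_filter_mv_field(mv_field):
    """Spatial median filtering of a per-block motion vector field.
    mv_field: (H, W, 2) array of motion vectors, one per block.
    Returns the filtered field MV'' of the same shape."""
    h, w, _ = mv_field.shape
    out = np.zeros_like(mv_field)
    zero = np.zeros(2, dtype=mv_field.dtype)
    for y in range(h):
        for x in range(w):
            up    = mv_field[y - 1, x] if y > 0     else zero
            down  = mv_field[y + 1, x] if y < h - 1 else zero
            left  = mv_field[y, x - 1] if x > 0     else zero
            right = mv_field[y, x + 1] if x < w - 1 else zero
            # Formula (II): median over the block and its four neighbours.
            cand = np.stack([up, down, left, right, mv_field[y, x]])
            out[y, x] = np.median(cand, axis=0)
    return out
```

An isolated outlier vector surrounded by consistent neighbours is replaced by the neighbourhood value, which is exactly the noise-suppression effect described above.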
Further, step (2) comprises:
Average the vectors in the disparity vector field of the coded frame of the current viewpoint to obtain the mean disparity vector DV_AVG; apply temporal weighting to DV_AVG to obtain the global disparity vector GDV between the current viewpoint and the adjacent viewpoint, as shown in formula (III):

GDV(k) = DV_AVG, for k = 0
GDV(k) = α × GDV(k−1) + (1 − α) × DV_AVG, for k > 0   (III)

In formula (III), α is the temporal weight factor and k is the update index of GDV, incremented after each update.
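The update of formula (III) is an exponentially weighted temporal average of the mean disparity vector, which can be sketched as follows; the value alpha = 0.5 is an illustrative choice, as the patent leaves the weight factor unspecified.

```python
def update_gdv(gdv_prev, dv_field, k, alpha=0.5):
    """Global disparity vector update of formula (III).
    dv_field: list of (dx, dy) disparity vectors of the current viewpoint's
    coded frame; gdv_prev: GDV(k-1), ignored when k == 0. Returns GDV(k)."""
    n = len(dv_field)
    # Mean disparity vector DV_AVG over the disparity vector field.
    dv_avg = (sum(v[0] for v in dv_field) / n,
              sum(v[1] for v in dv_field) / n)
    if k == 0:
        return dv_avg
    # Temporal weighting: alpha * GDV(k-1) + (1 - alpha) * DV_AVG.
    return (alpha * gdv_prev[0] + (1 - alpha) * dv_avg[0],
            alpha * gdv_prev[1] + (1 - alpha) * dv_avg[1])
```

The smoothing keeps GDV stable from frame to frame even when individual disparity vectors are noisy.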
Further, step (3) comprises:
(3.1) Use the global disparity vector GDV between the current viewpoint and the adjacent viewpoint obtained in step (2) to point into the coded frame of the adjacent viewpoint, obtaining the corresponding block of the current block in the adjacent viewpoint; the corresponding block is covered by four blocks of the same size as the current block;
(3.2) Using the motion vectors of these four covering blocks in the filtered motion vector field obtained in step (1), weighted by the area the corresponding block occupies in each of them, compute the motion vector MV″_W of the corresponding block, as shown in formula (IV):

MV″_W = (MV″_0 × a_0 + MV″_1 × a_1 + MV″_2 × a_2 + MV″_3 × a_3) / (a_0 + a_1 + a_2 + a_3)   (IV)

In formula (IV), MV″_0, MV″_1, MV″_2 and MV″_3 are the motion vectors of the four covering blocks in the filtered motion vector field obtained in step (1), and a_0, a_1, a_2 and a_3 are the areas the corresponding block occupies in each of the four covering blocks;
(3.3) Using the inter-frame rate-distortion cost formula (I), choose from the motion vector set Ω_1, composed of the corresponding-block motion vector MV″_W and the four covering-block motion vectors MV″_0, MV″_1, MV″_2 and MV″_3, the vector with the minimum rate-distortion cost as the reference motion vector RMV of the current block, that is:

RMV = argmin_{m ∈ Ω_1} J(S, m)   (V)

In formula (V), S is the video signal of the current block; RMV is the reference motion vector, which reflects the real motion state of the current block and can serve as the reference for the current block's motion estimation.
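The area-weighted average of formula (IV) can be sketched as follows. The corresponding block located via the global disparity vector generally straddles four grid-aligned blocks; its motion vector is the average of their filtered motion vectors weighted by overlap area. A 16×16 block grid, integer disparity, and a corresponding block that stays inside the field are illustrative assumptions of this sketch.

```python
import numpy as np

def corresponding_block_mv(filtered_field, block_pos, gdv, block=16):
    """Formula (IV): area-weighted MV of the corresponding block.
    filtered_field: (H, W, 2) per-block MV'' field of the adjacent view.
    block_pos: (row, col) of the current block in block units.
    gdv: (dx, dy) global disparity vector in pixels (integers assumed)."""
    # Pixel position of the corresponding block in the adjacent view.
    py = block_pos[0] * block + gdv[1]
    px = block_pos[1] * block + gdv[0]
    by, bx = py // block, px // block          # top-left covering block
    oy, ox = py - by * block, px - bx * block  # offsets inside it
    # Areas a_0..a_3 the corresponding block occupies in the four blocks.
    a = [(block - oy) * (block - ox), (block - oy) * ox,
         oy * (block - ox), oy * ox]
    mv = [filtered_field[by, bx], filtered_field[by, bx + 1],
          filtered_field[by + 1, bx], filtered_field[by + 1, bx + 1]]
    num = sum(w * v for w, v in zip(a, mv))
    return num / sum(a)
```

The weights a_0 + a_1 + a_2 + a_3 always sum to the block area, so the result is a proper average of the four covering-block vectors.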
Further, step (4) comprises:
(4.1) Form the search center candidate set Ω_2 of the current block from the reference motion vector RMV of the current block obtained in step (3), the motion vector MV_L of the block to the left of the current block, the motion vector MV_U of the block above, the motion vector MV_UL of the block above-left, the motion vector MV_UR of the block above-right, the coding prediction vector PMV, and the zero vector ZMV;
(4.2) Using the inter-frame rate-distortion cost formula (I), choose from the candidate set Ω_2 the vector with the minimum rate-distortion cost as the search center CMV of the current block's motion estimation, as shown in formula (VI):

CMV = argmin_{m ∈ Ω_2} J(S, m)   (VI)

In formula (VI), S is the video signal of the current block and CMV is the search center of the motion estimation.
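Both formula (V) and formula (VI) are the same argmin pattern over a candidate set, which can be sketched generically as follows; here `cost` stands in for J(S, m) of formula (I) and is supplied by the caller.

```python
def select_min_cost_vector(candidates, cost):
    """Return the candidate m with minimum cost(m).
    Used for the reference motion vector (formula V, over Omega_1)
    and for the search center (formula VI, over Omega_2)."""
    best, best_cost = None, float("inf")
    for m in candidates:
        c = cost(m)
        if c < best_cost:
            best, best_cost = m, c
    return best
```

In practice the candidate sets are tiny (five vectors for Ω_1, up to seven for Ω_2), so this selection adds negligible cost compared with a full motion search.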
Further, step (5) comprises:
(5.1) Using formulas (VII) and (VIII), compute the horizontal deviation D_X and the vertical deviation D_Y between the search center CMV obtained in step (4) and the reference motion vector RMV obtained in step (3):

D_X = |CMV_X − RMV_X|   (VII)
D_Y = |CMV_Y − RMV_Y|   (VIII)

(5.2) Multiply both the horizontal deviation D_X and the vertical deviation D_Y by the control parameter β to obtain the search range; β balances search accuracy against the amount of search computation, and its value should be greater than 1 so that the reference motion vector RMV is included within the search range;
(5.3) To improve the robustness of motion estimation, compute the horizontal search range SR_X and the vertical search range SR_Y with formulas (IX) and (X), which further limit the search range between a minimum and a maximum:

SR_X = max(SR_MIN, min(SR_MAX, β × D_X))   (IX)
SR_Y = max(SR_MIN, min(SR_MAX, β × D_Y))   (X)

where SR_MIN is the minimum search range of motion estimation and SR_MAX is the maximum search range of motion estimation.
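The adaptive search-range selection of formulas (VII)–(X) can be sketched directly; the defaults below are the values used in the patent's experiments (β = 1.3, SR_MIN = 4, SR_MAX = 64).

```python
def search_range(cmv, rmv, beta=1.3, sr_min=4, sr_max=64):
    """Adaptive search range of formulas (VII)-(X).
    cmv, rmv: (x, y) search center and reference motion vector.
    Returns (SR_X, SR_Y)."""
    d_x = abs(cmv[0] - rmv[0])                   # formula (VII)
    d_y = abs(cmv[1] - rmv[1])                   # formula (VIII)
    sr_x = max(sr_min, min(sr_max, beta * d_x))  # formula (IX)
    sr_y = max(sr_min, min(sr_max, beta * d_y))  # formula (X)
    return sr_x, sr_y
```

When CMV and RMV agree, the range collapses to SR_MIN; when they disagree strongly, it grows toward SR_MAX, which is the adaptive behaviour described above.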
In the present invention, the motion vector field of the coded frame of the adjacent viewpoint is filtered to eliminate the noise motion vectors in it, yielding a filtered motion vector field consistent with the real motion state and thereby effectively exploiting the information of the adjacent viewpoint. Making full use of the inter-view and spatial correlation of motion vectors, the search center is chosen from among the reference motion vector, which reflects the real motion state of the current block, and the already-estimated motion vectors of the spatially adjacent blocks; the search center is thus close to the optimal motion vector, so a relatively small search range suffices to contain it. At the same time, the degree of deviation between the search center and the reference motion vector is used to choose the search range of motion estimation, reducing the complexity of motion estimation.
Compared with the prior art, the present invention has the following beneficial technical effects:
The invention provides a motion estimation method for multi-view video coding based on the inter-view and spatial correlation of motion vectors. The method is suitable for the motion estimation of multi-view video coding and, compared with the prior art, has the following characteristics and advantages: it effectively uses the motion vector field of the coded frame of the adjacent viewpoint and the already-estimated motion vectors of the spatially adjacent blocks to choose a reference motion vector reflecting the real motion state and a search center near the optimal motion vector, and it uses the degree of deviation between the reference motion vector and the search center to adaptively choose the search range. The method can effectively reduce the amount of computation of multi-view motion estimation while maintaining the coding rate-distortion performance.
Description of drawings
Fig. 1 is the flow chart of the fast motion estimation method for multi-view video coding of the present invention;
Fig. 2 compares the motion vector field before and after filtering in the present invention;
Fig. 3 is a sketch of obtaining the corresponding block and its covering blocks in the present invention;
Fig. 4 shows the relation between the reference motion vector, the search center and the search range in the present invention.
Embodiment
As shown in Fig. 1, a fast motion estimation method for multi-view video comprises the following steps:
(1) motion vector field filtering;
(2) global disparity vector calculation;
(3) reference motion vector selection;
(4) search center selection;
(5) search range selection;
(6) final motion search;
(7) completion of motion estimation for all blocks.
Among the above steps, steps (1) and (2) are frame-level operations, while steps (3), (4), (5) and (6) are block-level operations. A block is one of the inter-frame macroblock partitions 16×16, 16×8, 8×16, 8×8, 8×4, 4×8 and 4×4, and the motion vector field or disparity vector field described below is the set of such motion vectors or disparity vectors.
Taking 16×16 blocks as an example, the overall process of the fast motion estimation method for multi-view video is described in detail below.
(1) Apply spatial median filtering to the motion vector field of the coded frame of the adjacent viewpoint to eliminate the noise vectors in it and obtain a filtered motion vector field consistent with the real motion state. The detailed process is as follows:
When performing 16×16 motion estimation on the coded frame of the adjacent viewpoint, save the motion vector of each 16×16 block to form the 16×16 block motion vector field of the adjacent viewpoint's coded frame. For each 16×16 block in this field, take its motion vector MV′ together with the motion vector MV′_U of the spatially adjacent block above, the motion vector MV′_B of the block below, the motion vector MV′_L of the block on the left and the motion vector MV′_R of the block on the right, and apply the spatial median filtering of formula (II) to obtain the filtered motion vector MV″ of each 16×16 block; the filtered motion vectors MV″ of all 16×16 blocks form the filtered 16×16 motion vector field of the adjacent viewpoint's coded frame:

MV″ = median(MV′_U, MV′_B, MV′_L, MV′_R, MV′)   (II)

At the image boundary, the block above, below, to the left or to the right of the current 16×16 block may not exist; the method of the present invention substitutes the zero vector ZMV for any non-existent motion vector.
The contrast between the motion vector fields before and after the filtering of step (1) is shown in Fig. 2: Fig. 2(a) is a sketch of the motion vector field before filtering and Fig. 2(b) of the filtered field. As can be seen from Fig. 2, the filtered motion vector field is smoother than the field before filtering and better reflects the real motion state of the objects.
(2) Use the disparity vector field of the coded frame of the current viewpoint to calculate the global disparity vector between the current viewpoint and the adjacent viewpoint. The detailed process is as follows:
Average the vectors in the disparity vector field of the coded frame of the current viewpoint to obtain the mean disparity vector DV_AVG; apply temporal weighting to DV_AVG to obtain the global disparity vector GDV between the current viewpoint and the adjacent viewpoint, as shown in formula (III):

GDV(k) = DV_AVG, for k = 0
GDV(k) = α × GDV(k−1) + (1 − α) × DV_AVG, for k > 0   (III)

In formula (III), α is the temporal weight factor and k is the update index of GDV, incremented after each update.
(3) Using the global disparity vector between the current viewpoint and the adjacent viewpoint obtained in step (2), choose from the filtered 16×16 motion vector field of the adjacent viewpoint's coded frame obtained in step (1) the vector with the minimum rate-distortion cost as the reference motion vector of the current 16×16 block. The detailed process is as follows:
(3.1) Use the global disparity vector GDV obtained in step (2) to point into the coded frame of the adjacent viewpoint. As shown in Fig. 3, obtain the 16×16 corresponding block in the adjacent viewpoint (the grey block in Fig. 3), which is covered by four blocks of the same size as the current block (Block0, Block1, Block2 and Block3 in Fig. 3);
(3.2) Using the motion vectors of these four covering blocks in the filtered motion vector field obtained in step (1), weighted by the area the corresponding block occupies in each of them, compute the motion vector MV″_W of the corresponding block, as shown in formula (IV):

MV″_W = (MV″_0 × a_0 + MV″_1 × a_1 + MV″_2 × a_2 + MV″_3 × a_3) / (a_0 + a_1 + a_2 + a_3)   (IV)

In formula (IV), MV″_0, MV″_1, MV″_2 and MV″_3 are the motion vectors of the four covering blocks in the filtered motion vector field obtained in step (1), and a_0, a_1, a_2 and a_3 are the areas the corresponding block occupies in each of the four covering blocks;
(3.3) Using the inter-frame rate-distortion cost formula (I), choose from the motion vector set Ω_1, composed of the corresponding-block motion vector MV″_W and the four covering-block motion vectors MV″_0, MV″_1, MV″_2 and MV″_3, the vector with the minimum rate-distortion cost as the reference motion vector RMV of the current 16×16 block:

RMV = argmin_{m ∈ Ω_1} J(S, m)   (V)

In formula (V), S is the video signal of the current block, here a 16×16 block; RMV is the reference motion vector, which reflects the real motion state of the current 16×16 block and can serve as the reference for its motion estimation.
(4) From the reference motion vector obtained in step (3) and the motion vectors of the spatially adjacent, already-estimated blocks, choose the vector with the minimum rate-distortion cost as the search center of the current 16×16 block's motion estimation. The detailed process is as follows:
(4.1) Form the search center candidate set Ω_2 of the current 16×16 block from the reference motion vector RMV of the current 16×16 block obtained in step (3), the motion vector MV_L of the block to the left of the current 16×16 block, the motion vector MV_U of the block above, the motion vector MV_UL of the block above-left, the motion vector MV_UR of the block above-right, the coding prediction vector PMV, and the zero vector ZMV;
(4.2) Using the inter-frame rate-distortion cost formula (I), choose from the candidate set Ω_2 the vector with the minimum rate-distortion cost as the search center CMV of the current 16×16 block's motion estimation, as shown in formula (VI):

CMV = argmin_{m ∈ Ω_2} J(S, m)   (VI)

In formula (VI), S is the video signal of the current block, here a 16×16 block; CMV is the search center of the motion estimation.
(5) Using the degree of deviation between the search center obtained in step (4) and the reference motion vector obtained in step (3), choose the horizontal and vertical search ranges of the current 16×16 block's motion estimation. The detailed process is as follows:
(5.1) Using formulas (VII) and (VIII), compute the horizontal deviation D_X and the vertical deviation D_Y between the search center CMV obtained in step (4) and the reference motion vector RMV obtained in step (3):

D_X = |CMV_X − RMV_X|   (VII)
D_Y = |CMV_Y − RMV_Y|   (VIII)

(5.2) Multiply both the horizontal deviation D_X and the vertical deviation D_Y by the control parameter β to obtain the search range; β balances search accuracy against the amount of search computation, and its value should be greater than 1 so that the reference motion vector RMV is included within the search range;
(5.3) To improve the robustness of motion estimation, compute the horizontal search range SR_X and the vertical search range SR_Y with formulas (IX) and (X), which further limit the search range between a minimum and a maximum:

SR_X = max(SR_MIN, min(SR_MAX, β × D_X))   (IX)
SR_Y = max(SR_MIN, min(SR_MAX, β × D_Y))   (X)

where SR_MIN is the minimum search range of motion estimation and SR_MAX is the maximum search range of motion estimation.
The relation between the search center CMV, the reference motion vector RMV and the search range is shown in Fig. 4; the search range is the rectangle measured by SR_X and SR_Y in Fig. 4.
(6) Perform the final motion search within the search range obtained in step (5) and choose the final motion vector of the current 16×16 block;
(7) Repeat steps (3)–(6) to obtain the final motion vectors of all 16×16 blocks, completing the motion estimation of the multi-view video coding.
The experiments were carried out on the multi-view video coding verification code JMVC 4.0, based on the common test conditions for multi-view video coding (Su Y P, Vetro A, Smolic A. Common test conditions for multiview video coding. Doc. U211, JVT 21st meeting, Hangzhou, 2006). The search pattern of JMVC uses the full-search algorithm, the search range is set to 64, the inter mode uses the 16×16 block partition, and the basis QP values are 22, 27, 32 and 37. Six typical multi-view test sequences were selected: the Exit sequence (sequence 1) of MERL, the Ballroom sequence (sequence 2) of MERL, the Race1 sequence (sequence 3) of KDDI, the Flamenco2 sequence (sequence 4) of KDDI, the Rena sequence (sequence 5) of the Tanimoto laboratory and the Akko&Kayo sequence (sequence 6) of the Tanimoto laboratory. The first two or three viewpoints of these sequences were used: because of its two-dimensional cross arrangement, only the first two viewpoints of the Flamenco2 sequence (sequence 4) were selected, while the first three viewpoints of the other sequences were selected. The second viewpoint is used to run the proposed algorithm, with the other viewpoints serving as reference viewpoints.
To assess how close CMV is to the optimal motion vector, Table 1 lists, for the forward temporal prediction of the second viewpoint of the six test sequences, the average straight-line distance between CMV and the optimal motion vector obtained with the full-search algorithm, together with the average straight-line distance between the candidate vectors of CMV and the optimal motion vector. As can be seen from Table 1, the distance between CMV and the optimal motion vector is smaller than the distance between the candidate vectors and the optimal motion vector, and its average for each sequence is within 4 pixels, so only a small search range is needed to contain the optimal motion vector. In addition, Table 1 shows that the distance between RMV and the optimal motion vector is only slightly larger than that between CMV and the optimal motion vector, so the optimal motion vector also lies near the reference motion vector RMV.
Table 1. Average straight-line distance of the optimal motion vector from the search center (CMV) and its candidate vectors (QP = 32)
As can be seen from Table 1, CMV is close to the optimal motion vector of the current block and approximately reflects its position. RMV is the reference motion vector obtained from the filtered motion vector field of the adjacent viewpoint and is a prediction of the real motion state of the current block. The distance of CMV from RMV reflects how far the optimal motion vector deviates from the real motion state. If CMV and RMV are close together, the optimal motion vector is consistent with the real motion state, and only a small-range search around CMV is needed. If CMV and RMV are far apart, the optimal motion vector deviates considerably from the real motion state, the motion is comparatively active, and the search range around CMV must be enlarged. In most cases, because of the motion correlation between viewpoints, CMV and RMV are close together, so the overall search range is reduced.
The method of the present invention can adopt any existing search algorithm for the motion search within the chosen search range; here the full-search algorithm is used to choose the final motion vector within the search range. To assess the performance of the method, the experiments take the JMVC full-search algorithm as the reference: the rate-distortion performance is measured by the Bjontegaard delta PSNR (BDPSNR) and Bjontegaard delta bit rate (BDBR) of the proposed method, and the change in computation is measured by the reduction of the non-anchor-frame encoding time (Dtime). To evaluate the motion estimation performance in isolation, disparity estimation is disabled for the non-anchor frames of the second viewpoint. The control parameters in formulas (IX) and (X) are set to β = 1.3, SR_MAX = 64 and SR_MIN = 4.
Table 2 is depicted as, and is reference with the full-search algorithm among the JMVC, the BDPSNR of the inventive method, BDBR and Dtime.Wherein BDPSNR is that positive number and BDBR are that on behalf of algorithm, negative have better distortion performance, and Dtime is the reduction that negative value is represented the algorithm coding time.
Can find out that from table 2 compare with the full-search algorithm among the JMVC, this paper method on average reduces for 95% scramble time.No matter be bigger Race1 sequence of motion amplitude (sequence 3.) and Ballroom sequence (sequence 2.); Or moderate Rena sequence (sequence 5.) and the Akko&Kayo sequence (sequence 6.) of motion amplitude; Or more Exit sequence (sequence 1.) and the Flamenco2 sequence (sequence 4.) of stagnant zone, the reduction of binary encoding time is comparatively even.Aspect distortion performance, the BDPSNR of Exit sequence (sequence 1.) has reduced 0.004dB, and BDBR has increased by 0.24%, but the BDPSNR of other sequence increases, and BDBR decreases.The average BDPSNR of six sequences has improved 0.021dB, and average BDBR has reduced 0.48%.Above data show that the inventive method has kept the encoding rate distortion performance consistent with full-search algorithm when significantly reducing encoding calculation amount.
The above experimental results were obtained with the 16 × 16 inter-mode partition; the algorithm applies equally to smaller inter-mode partitions and yields similar results.
Table 2
(The data of Table 2 appear as an image in the original publication and are not reproduced here.)

Claims (4)

1. A fast motion estimation method for multi-view video coding, characterized by comprising the following steps:
(1) applying spatial median filtering to the motion vector field of a coded frame of a neighbouring view to eliminate noise vectors therein, obtaining a filtered motion vector field that reflects the true motion;
(2) computing the global disparity vector between the current view and the neighbouring view from the disparity vector field of a coded frame of the current view, as follows:
averaging the vectors of the disparity vector field of the coded frame of the current view to obtain the mean disparity vector DV_AVG; applying temporal weighting to the mean disparity vector DV_AVG to obtain the global disparity vector GDV between the current view and the neighbouring view, as shown in formula (III):
GDV(k) = DV_AVG,                              k = 0
GDV(k) = α × GDV(k−1) + (1 − α) × DV_AVG,     k > 0        (III)
in formula (III), α is the temporal weighting factor and k is the update index of the global disparity vector GDV;
(3) using the global disparity vector between the current view and the neighbouring view obtained in step (2), selecting from the filtered motion vector field obtained in step (1) the vector with the minimum rate-distortion cost as the reference motion vector of the current block; the reference motion vector of the current block is selected as follows:
(3.1) using the global disparity vector GDV between the current view and the neighbouring view obtained in step (2) to point into the coded frame of the neighbouring view, locating the corresponding block of the current block in the neighbouring view and the four similar blocks, each of the same size as the current block, that the corresponding block overlaps;
(3.2) computing the weighted motion vector MV″_W of the corresponding block from the motion vectors of the four similar blocks in the filtered motion vector field obtained in step (1), weighted by the area of the corresponding block lying within each similar block, as shown in formula (IV):
MV″_W = (MV″_0 × a_0 + MV″_1 × a_1 + MV″_2 × a_2 + MV″_3 × a_3) / (a_0 + a_1 + a_2 + a_3)        (IV)
in formula (IV), MV″_0, MV″_1, MV″_2 and MV″_3 are the motion vectors of the four similar blocks in the filtered motion vector field obtained in step (1), and a_0, a_1, a_2 and a_3 are the areas of the corresponding block lying within the four similar blocks;
(3.3) using the inter-frame rate-distortion cost formula, selecting, from the motion vector set Ω_1 composed of the corresponding-block motion vector MV″_W and the four similar-block motion vectors MV″_0, MV″_1, MV″_2 and MV″_3, the vector with the minimum rate-distortion cost as the reference motion vector RMV of the current block;
(4) selecting, from the reference motion vector obtained in step (3) and the motion vectors of the spatially adjacent already-estimated blocks, the vector with the minimum rate-distortion cost as the search centre of the motion estimation of the current block;
(5) using the deviation between the search centre obtained in step (4) and the reference motion vector obtained in step (3), selecting the horizontal and vertical search ranges of the motion estimation of the current block;
(6) performing the final motion search within the search range obtained in step (5) and selecting the final motion vector of the current block;
(7) repeating steps (3) to (6) to obtain the final motion vectors of all blocks, completing the motion estimation of the multi-view video coding.
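The two formulas of claim 1 can be sketched as follows. This is an illustrative Python sketch, not code from the patent; it assumes two-component (x, y) vectors and an externally chosen weighting factor α.

```python
def update_gdv(prev_gdv, dv_avg, k, alpha):
    """Formula (III): GDV(0) = DV_AVG;
    GDV(k) = alpha * GDV(k-1) + (1 - alpha) * DV_AVG for k > 0."""
    if k == 0:
        return dv_avg
    return tuple(alpha * g + (1 - alpha) * d for g, d in zip(prev_gdv, dv_avg))

def weighted_mv(mvs, areas):
    """Formula (IV): area-weighted motion vector MV''_W of the corresponding block.
    mvs   -- motion vectors MV''_0..MV''_3 of the four similar blocks
    areas -- overlap areas a_0..a_3 of the corresponding block in each similar block"""
    total = sum(areas)
    return tuple(sum(mv[i] * a for mv, a in zip(mvs, areas)) / total
                 for i in range(2))
```

For example, with α = 0.5, a previous GDV of (4, 2) and a mean disparity of (8, 4), the updated GDV is (6, 3), matching the temporal weighting of formula (III).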
2. The fast motion estimation method for multi-view video coding according to claim 1, characterized in that the spatial median filtering in step (1) comprises:
while performing motion estimation on the coded frame of the neighbouring view, saving the motion vectors of all blocks to form the motion vector field of the coded frame of the neighbouring view; for each block in said field, applying the spatial median filtering of formula (II) to its motion vector MV′ and the motion vectors MV′_U, MV′_B, MV′_L and MV′_R of the spatially adjacent blocks above, below, to the left and to the right of it, obtaining the filtered motion vector MV″; the filtered motion vectors MV″ of all blocks together form the filtered motion vector field;
MV″ = median(MV′_U, MV′_B, MV′_L, MV′_R, MV′)        (II).
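A minimal sketch of the filtering of formula (II). Taking the median per component is an assumption here, since the claim does not spell out how the median of five vectors is formed; a true vector median could be substituted.

```python
import statistics

def median_filter_mv(mv, mv_u, mv_b, mv_l, mv_r):
    """Formula (II): spatial median of a block's motion vector MV' and the
    vectors of its four spatial neighbours (up, below, left, right).
    Component-wise median is an illustrative assumption."""
    return tuple(statistics.median(comp)
                 for comp in zip(mv_u, mv_b, mv_l, mv_r, mv))
```

An outlier vector such as (10, 10) surrounded by neighbours (1, 1) to (4, 4) is replaced by the median (3, 3), which is how the filter suppresses noise vectors in the field.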
3. The fast motion estimation method for multi-view video coding according to claim 1, characterized in that the search centre of the motion estimation of the current block in step (4) is selected as follows:
(4.1) forming the search-centre candidate set Ω_2 of the current block from the reference motion vector RMV of the current block obtained in step (3), the motion vectors MV_L, MV_U, MV_UL and MV_UR of the blocks spatially adjacent to the current block on the left, above, above-left and above-right, the coded prediction vector PMV, and the zero vector ZMV;
(4.2) using the inter-frame rate-distortion cost formula, selecting from said search-centre candidate set Ω_2 the vector with the minimum rate-distortion cost as the search centre CMV of the motion estimation.
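Step (4.2) reduces to a minimum-cost selection over the candidate set Ω_2. The sketch below leaves the inter-frame rate-distortion cost as a caller-supplied function, since the claim references that formula without reproducing it; the function name is illustrative.

```python
def select_search_center(candidates, rd_cost):
    """Pick the candidate vector with the minimum rate-distortion cost.
    candidates -- the set Omega_2 (RMV, spatial neighbours, PMV, ZMV)
    rd_cost    -- stand-in for the inter-frame RD cost formula"""
    return min(candidates, key=rd_cost)
```

For instance, with a toy cost equal to the L1 norm of the vector, the zero vector ZMV wins whenever no candidate predicts motion better than staying put.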
4. The fast motion estimation method for multi-view video coding according to claim 1, characterized in that the search range in step (5) is selected as follows:
(5.1) using formulas (VII) and (VIII) to compute the horizontal deviation D_X and the vertical deviation D_Y between the search centre CMV obtained in step (4) and the reference motion vector RMV obtained in step (3):
D_X = |CMV_X − RMV_X|        (VII)
D_Y = |CMV_Y − RMV_Y|        (VIII)
(5.2) multiplying both the horizontal deviation D_X and the vertical deviation D_Y by the control parameter β to obtain the search range;
(5.3) using formulas (IX) and (X) to compute the horizontal search range SR_X and the vertical search range SR_Y:
SR_X = max(SR_MIN, min(SR_MAX, β × D_X))        (IX)
SR_Y = max(SR_MIN, min(SR_MAX, β × D_Y))        (X)
where SR_MIN is the minimum search range of the motion estimation and SR_MAX is the maximum search range of the motion estimation.
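Formulas (VII)–(X) together amount to a clamped scaling of the centre-to-reference deviation. The sketch below is illustrative; the default β = 1.3, SR_MIN = 4 and SR_MAX = 64 are taken from the experiments reported in the description.

```python
def adaptive_search_range(cmv, rmv, beta=1.3, sr_min=4, sr_max=64):
    """Formulas (VII)-(X): deviation of the search centre CMV from the
    reference motion vector RMV, scaled by beta and clamped to
    [SR_MIN, SR_MAX] in each direction."""
    d_x = abs(cmv[0] - rmv[0])                   # (VII)
    d_y = abs(cmv[1] - rmv[1])                   # (VIII)
    sr_x = max(sr_min, min(sr_max, beta * d_x))  # (IX)
    sr_y = max(sr_min, min(sr_max, beta * d_y))  # (X)
    return sr_x, sr_y
```

When the centre agrees with the reference vector the range collapses to SR_MIN, and a large disagreement saturates at SR_MAX, which is how the method trades search effort against prediction reliability.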
CN 201010262078 2010-08-20 2010-08-20 Quick motion estimation method of multi-view video coding Active CN101917619B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 201010262078 CN101917619B (en) 2010-08-20 2010-08-20 Quick motion estimation method of multi-view video coding


Publications (2)

Publication Number Publication Date
CN101917619A CN101917619A (en) 2010-12-15
CN101917619B true CN101917619B (en) 2012-05-09

Family

ID=43324987

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 201010262078 Active CN101917619B (en) 2010-08-20 2010-08-20 Quick motion estimation method of multi-view video coding

Country Status (1)

Country Link
CN (1) CN101917619B (en)

Families Citing this family (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102045571B (en) * 2011-01-13 2012-09-05 北京工业大学 Fast iterative search algorithm for stereo video coding
EP2721825A4 (en) * 2011-06-15 2014-12-24 Mediatek Inc Method and apparatus of motion and disparity vector prediction and compensation for 3d video coding
WO2013068548A2 (en) 2011-11-11 2013-05-16 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Efficient multi-view coding using depth-map estimate for a dependent view
IN2014KN00990A (en) * 2011-11-11 2015-10-09 Fraunhofer Ges Forschung
CN103139569B (en) * 2011-11-23 2016-08-10 华为技术有限公司 The coding of multi-view point video, coding/decoding method, device and codec
CN102413332B (en) * 2011-12-01 2013-07-24 武汉大学 Multi-viewpoint video coding method based on time-domain-enhanced viewpoint synthesis prediction
US20130163880A1 (en) * 2011-12-23 2013-06-27 Chao-Chung Cheng Disparity search methods and apparatuses for multi-view videos
CN102801995B (en) * 2012-06-25 2016-12-21 北京大学深圳研究生院 A kind of multi-view video motion based on template matching and disparity vector prediction method
WO2014005280A1 (en) * 2012-07-03 2014-01-09 Mediatek Singapore Pte. Ltd. Method and apparatus to improve and simplify inter-view motion vector prediction and disparity vector prediction
EP2839664A4 (en) * 2012-07-09 2016-04-06 Mediatek Inc Method and apparatus of inter-view sub-partition prediction in 3d video coding
WO2014015807A1 (en) * 2012-07-27 2014-01-30 Mediatek Inc. Method of constrain disparity vector derivation in 3d video coding
AU2013321333B2 (en) 2012-09-28 2017-07-27 Sony Corporation Image processing device and method
US9253486B2 (en) * 2012-09-28 2016-02-02 Mitsubishi Electric Research Laboratories, Inc. Method and system for motion field backward warping using neighboring blocks in videos
EP2966868B1 (en) * 2012-10-09 2018-07-18 HFI Innovation Inc. Method for motion information prediction and inheritance in video coding
WO2014103966A1 (en) * 2012-12-27 2014-07-03 日本電信電話株式会社 Image encoding method, image decoding method, image encoding device, image decoding device, image encoding program, and image decoding program
CN104104933B (en) * 2013-04-12 2016-12-28 浙江大学 A kind of difference vector generates method and device
CN104243947B (en) * 2013-07-29 2018-03-16 深圳深讯和科技有限公司 Parallax estimation method and device
CN103747265B (en) * 2014-01-03 2017-04-12 华为技术有限公司 NBDV (Disparity Vector from Neighboring Block) acquisition method and video decoding device
KR20160132862A (en) * 2014-03-13 2016-11-21 퀄컴 인코포레이티드 Simplified advanced residual prediction for 3d-hevc
CN104394417B (en) * 2014-12-15 2017-07-28 哈尔滨工业大学 A kind of difference vector acquisition methods in multiple view video coding
CN108076347B (en) * 2016-11-15 2021-11-26 阿里巴巴集团控股有限公司 Method and device for acquiring coding starting point
CN109660800B (en) * 2017-10-12 2021-03-12 北京金山云网络技术有限公司 Motion estimation method, motion estimation device, electronic equipment and computer-readable storage medium
CN110365987B (en) * 2018-04-09 2022-03-25 杭州海康威视数字技术股份有限公司 Motion vector determination method, device and equipment
CN110493602A (en) * 2019-08-19 2019-11-22 张紫薇 Video coding fast motion searching method and system
CN115134574B (en) * 2022-06-24 2023-08-01 咪咕视讯科技有限公司 Dynamic metadata generation method, device, equipment and storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100481732B1 (en) * 2002-04-20 2005-04-11 전자부품연구원 Apparatus for encoding of multi view moving picture
US7778328B2 (en) * 2003-08-07 2010-08-17 Sony Corporation Semantics-based motion estimation for multi-view video coding
CN101459849A (en) * 2009-01-04 2009-06-17 上海大学 Fast motion estimation method based on motion searching scope adaptive regulation of multi-vision-point encoding
CN101600108B (en) * 2009-06-26 2011-02-02 北京工业大学 Joint estimation method for movement and parallax error in multi-view video coding

Also Published As

Publication number Publication date
CN101917619A (en) 2010-12-15

Similar Documents

Publication Publication Date Title
CN101917619B (en) Quick motion estimation method of multi-view video coding
CN106105191B (en) Method and apparatus for handling multiview video signal
CN104363451B (en) Image prediction method and relevant apparatus
US8693551B2 (en) Optimal angular intra prediction for block-based video coding
CN101010960B (en) Method and device for motion estimation and compensation for panorama image
CN101600108B (en) Joint estimation method for movement and parallax error in multi-view video coding
US8290289B2 (en) Image encoding and decoding for multi-viewpoint images
CN110312132A (en) A kind of decoding method, device and its equipment
CN103546758B (en) A kind of fast deep graphic sequence inter mode decision fractal coding
CN102905150B (en) Novel multi-view video fractal coding, compressing and decompressing method
CN105580371B (en) Based on adaptively sampled layering motion estimation method and equipment
CN103037218B (en) Multi-view stereoscopic video compression and decompression method based on fractal and H.264
CN103327327B (en) For the inter prediction encoding unit selection method of high-performance video coding HEVC
CN103596004A (en) Intra-frame prediction method and device based on mathematical statistics and classification training in HEVC
CN102045571B (en) Fast iterative search algorithm for stereo video coding
CN102984541B (en) Video quality assessment method based on pixel domain distortion factor estimation
CN103051894B (en) A kind of based on fractal and H.264 binocular tri-dimensional video compression & decompression method
CN101895749B (en) Quick parallax estimation and motion estimation method
CN101990103A (en) Method and device for multi-view video coding
CN102316323B (en) Rapid binocular stereo-video fractal compressing and uncompressing method
CN106791869B (en) Quick motion search method based on light field sub-aperture image relative positional relationship
CN102917233A (en) Stereoscopic video coding optimization method in space teleoperation environment
CN101557519B (en) Multi-view video coding method
CN101459849A (en) Fast motion estimation method based on motion searching scope adaptive regulation of multi-vision-point encoding
CN104618725A (en) Multi-view video coding algorithm combining quick search and mode optimization

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant