EP1468568A1 - Procede d'estimation du mouvement dominant dans une sequence d'images - Google Patents

Procede d'estimation du mouvement dominant dans une sequence d'images

Info

Publication number
EP1468568A1
EP1468568A1 (application EP02805377A)
Authority
EP
European Patent Office
Prior art keywords
movement
regression
motion
representation
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP02805377A
Other languages
German (de)
English (en)
French (fr)
Inventor
François Le Clerc
Sylvain Marrec
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Thomson Licensing SAS
Original Assignee
Thomson Licensing SAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Thomson Licensing SAS filed Critical Thomson Licensing SAS
Publication of EP1468568A1 publication Critical patent/EP1468568A1/fr
Withdrawn legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51Motion estimation or motion compensation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/14Picture signal circuitry for video frequency region
    • H04N5/144Movement detection
    • H04N5/145Movement estimation
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion

Definitions

  • the invention relates to a method and device for estimating the dominant movement in a video shot. More specifically, the method is based on the analysis of the motion fields transmitted with the video in compression schemes using motion compensation. Such schemes are implemented in the MPEG-1, MPEG-2 and MPEG-4 video compression standards. Known motion analysis methods are based on the estimation, from motion vectors originating from compressed video streams of MPEG type, of a motion model which is most often affine:
  • u_i and v_i are the components of the motion vector present at the position (x_i, y_i) of the motion field.
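The affine model announced above is not reproduced in this extract; its conventional six-parameter form, consistent with the parameters a to f and the vector components (u_i, v_i) described here, would read (a reconstruction, not a verbatim quotation of the patent):

```latex
u_i = a + b\,x_i + c\,y_i, \qquad v_i = d + e\,x_i + f\,y_i
```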
  • the estimation of the affine parameters a, b, c, d, e and f of the motion model is based on a least-squares error minimization technique. Such a process is described in the article by M.A. Smith and T. Kanade "Video Skimming and Characterization through the Combination of Image and Language Understanding" (proceedings of the IEEE 1998 International Workshop on Content-Based Access of Image and Video Databases, pages 61 to 70). The authors of this article use the parameters of the affine motion model, as well as the means ū and v̄ of the spatial components of the field vectors, to identify and classify the apparent motion.
  • the means ū and v̄ of the components of the vectors are analyzed to test the hypothesis of a pan.
  • a threshold on the variance associated with the number of motion vectors in each class ("bin") of the histogram, for each of the two histograms, is then used to identify the presence of dominant "zoom" and "pan" movements.
  • a secondary object is here defined as an object which occupies in the image a surface smaller than that of at least one other object of the image, the object associated with the dominant movement being the one which occupies the largest surface in the image.
  • the vectors of the compressed video stream which serve as the basis for the motion analysis do not always reflect the actual apparent movement of the image. Indeed, these vectors have been calculated in order to minimize the amount of information to be transmitted after motion compensation, not to estimate the physical motion of the pixels in the image.
  • a reliable estimation of a motion model from vectors of the compressed stream requires the use of a robust method, automatically eliminating from the calculation the motion vectors relating to secondary objects not following the dominant motion, as well as the vectors not corresponding to the physical movement of the main object of the image.
  • the invention presented here aims to overcome the drawbacks of the different families of methods of estimating the dominant movement presented above.
  • the subject of the invention is a method of detecting a dominant movement in a sequence of images, performing a calculation of a field of motion vectors associated with an image, defining, for an image element of coordinates (x_i, y_i), one or more motion vectors of components (u_i, v_i), characterized in that it also performs the following steps:
  • the robust regression is the least-median-of-squares method, which consists in seeking, among a set of candidate lines j (r_i,j being the residue of the i-th sample, of coordinates (x_i, u_i) or (y_i, v_i), with respect to the line j), the line providing the minimal median value of the set of squared residues:
  • the search for the least median of the squared residues is applied to a predefined number of lines, each determined by a pair of samples drawn randomly in the motion representation space considered.
  • the method performs, after the robust linear regression, a second non-robust linear regression making it possible to refine the estimates of the parameters of the motion model.
  • This second linear regression can exclude the points in the representation spaces whose regression residue from the first robust regression exceeds a predetermined threshold.
  • the method performs a test of equality of the slopes of the regression lines calculated in each of the representation spaces, based on a comparison of the sums of the squared residues obtained, first, by carrying out two separate regressions in each representation space and, second, by performing a regression with a common slope on the set of samples of the two representation spaces; if the test is positive, the parameter k of the model is estimated by the arithmetic mean of the slopes of the regression lines obtained in each representation space.
  • the invention also relates to a device for implementing the method.
  • the method allows the implementation of robust methods of identifying the movement model at a reduced cost.
  • the main interest of the method described in the invention lies in the use of a judicious representation space for the components of the motion vectors, which makes it possible to reduce the identification of the parameters of the motion model to a double linear regression.
  • FIG. 3 an illustration of the representation spaces of the motion vectors used in the invention
  • FIG. 4 the distribution of the theoretical vectors for a zoom movement centered in the representation spaces used in the invention
  • FIG. 5 the distribution of theoretical vectors for a movement of global oblique translation of the image in the representation spaces used in the invention
  • FIG. 8 a flow diagram of the method of detecting the dominant movement.
  • The characterization of the dominant movement in a sequence of images requires the identification of a parametric model of the apparent dominant movement. In the context of the exploitation of motion vector fields from compressed video streams, this model must represent the apparent motion in the 2D image plane. Such a model is obtained by approximating the projection onto the image plane of the movement of objects in three-dimensional space.
  • the 6-parameter affine model (a, b, c, d, e, f) presented above is commonly adopted in the literature.
  • the proposed method consists, basically, in identifying this parametric model of the movement from the fields of motion vectors provided in the video stream for decoding purposes, when the coding principle calls upon motion compensation techniques as used for example in the MPEG-1, MPEG-2 and MPEG-4 standards.
  • the method described in the invention is also applicable to motion vector fields which would have been calculated by a separate process from the images constituting the processed video sequence.
  • the adopted motion model is derived from a simplified linear model with 4 parameters (t_x, t_y, k, θ), which we will call SLM (acronym of the English expression Simplified Linear Model), defined by:
  • (x_g, y_g): coordinates of the reference point for the approximation of the 3D scene filmed by the camera as a 2D scene; this reference point will be assimilated to the point of coordinates (0, 0) of the image
  • (t_x, t_y): vector representing the translation component of the movement
  • k: divergence term representing the zoom component of the movement
  • θ: rotation angle of the movement around the camera axis.
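The defining equations of the SLM are not reproduced in this extract. A conventional reconstruction consistent with the four parameters listed above (a similarity-type linear model about the reference point, an assumption on our part rather than the patent's exact formula) is:

```latex
\begin{pmatrix} u \\ v \end{pmatrix}
=
\begin{pmatrix} t_x \\ t_y \end{pmatrix}
+
\begin{pmatrix} k & -\theta \\ \theta & k \end{pmatrix}
\begin{pmatrix} x - x_g \\ y - y_g \end{pmatrix}
```

With θ ≈ 0 and (x_g, y_g) = (0, 0), this reduces to u = t_x + k·x and v = t_y + k·y, which is what makes the linear-regression estimation described below possible.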
  • the objective sought is to identify the dominant movements caused by the movements and optical transformations of the cameras, for example an optical zoom, in video sequences. The aim is in particular to identify the camera movements which are statistically the most widespread in the composition of video documents, mainly comprising translation and zoom movements, their combination, and the absence of movement, i.e. static or fixed shots.
  • camera rotation effects, very rarely observed in practice, are not taken into account: the model is therefore restricted to 3 parameters (t_x, t_y, k) by assuming that θ ≈ 0.
  • the advantage of this simplified parametric representation of the movement is that the parameters t_x, t_y and k, respectively describing the two translation components and the zoom parameter of the motion model, can be estimated by linear regression in the motion representation spaces.
  • the representation of a field of motion vectors in these spaces generally provides, for each of them, a cloud of points distributed around a line of slope k.
  • the process of estimating the parameters of the simplified model of movement is based on the application of a linear regression of robust type in each of the spaces of representation of the movement.
  • Linear regression is a mathematical operation which determines the line which best fits a cloud of points, for example by minimizing the sum of the squares of the distances from each point to this line.
  • This operation is, in the context of the invention, implemented using a robust statistical estimation technique, so as to guarantee a certain insensitivity to the presence of outliers in the data.
  • the estimation of the dominant movement model must be made insensitive to: - the presence in the image of several objects, some of which follow secondary movements distinct from the dominant movement,
  • the motion vectors transmitted in a compressed video stream have been calculated with the aim of minimizing the amount of residual information to be transmitted after compensation for movement and not with the aim of providing the real movement of the objects constituting the imaged scene.
  • Figure 8 summarizes the different stages of the method for estimating the dominant movement in the sequence. Each of these steps is described in more detail below.
  • a first step 1 performs a normalization of the motion vector fields each associated with an image of the processed video sequence. These vector fields are assumed to have been calculated prior to the application of the algorithm, using a motion estimator.
  • the estimation of the movement can be carried out for rectangular blocks of pixels of the image, as in the block-pairing methods known as "block-matching", or provide a dense vector field, where a vector is estimated for each pixel of the image.
  • the present invention preferentially, but not exclusively, deals with the case where the vector fields used have been calculated by a video encoder and transmitted in the compressed video stream for decoding purposes.
  • the motion vectors are estimated for the current image at the rate of one vector per rectangular block of the image, relative to a reference frame whose temporal distance to the current image is variable. Furthermore, for certain so-called "B" frames, predicted bi-directionally, two motion vectors may have been calculated for the same block, one pointing from the current image to a past reference frame and the other from the current image to a future reference frame. A normalization step of the vector fields is therefore essential in order to process, in the subsequent steps, vectors calculated over time intervals of equal duration and pointing in the same direction.
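As a concrete illustration, the normalization of step 1 can be sketched as follows (the function name and arguments are our assumptions for illustration, not the patent's notation): each vector is rescaled to the displacement over one frame interval, and vectors of bi-directionally predicted blocks that point towards a future reference are flipped so that all vectors point in the same temporal direction.

```python
def normalize_vector(u, v, dist_frames, points_to_future):
    """Rescale a decoded motion vector (u, v) to the displacement over a
    single frame interval and orient it from the current image towards
    the past. dist_frames is the temporal distance, in frame intervals,
    between the current image and the reference frame (> 0)."""
    u, v = u / dist_frames, v / dist_frames
    if points_to_future:
        # B-frame vector pointing to a future reference: reverse it so
        # every normalized vector points in the same temporal direction.
        u, v = -u, -v
    return u, v
```

For instance, a vector (6, -3) estimated over three frame intervals towards a past reference normalizes to (2, -1).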
  • the second step constructs the spaces for representing the movement presented above.
  • each pair of points (x_i, u_i) and (y_i, v_i) corresponding to the representation of a vector of the motion field can be modeled relative to the regression lines in each of the spaces by:
  • FIG. 3 illustrates point clouds obtained after construction of these two spaces from a field of normalized motion vectors.
  • the parameters (a_0, b_0) and (a_1, b_1) obtained after the linear regressions in each of the representation spaces provide estimates of the parameters of the dominant motion model. Thus, the slopes a_0 and a_1 correspond to a double estimate of the divergence parameter k characterizing the zoom component, while the intercepts b_0 and b_1 correspond to an evaluation of the translation components t_x and t_y.
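The relationship between the regression parameters and the motion model can be checked on synthetic data. The sketch below (illustrative numeric values, assuming the representation spaces (x_i, u_i) and (y_i, v_i) described above) builds a noisy pan-plus-zoom field and recovers k from the slopes and t_x, t_y from the intercepts:

```python
import numpy as np

# Synthetic motion field for a combined pan + zoom: with theta ~ 0 the
# simplified model gives u = t_x + k*x and v = t_y + k*y (the numeric
# values of k, t_x, t_y below are illustrative, not from the patent).
rng = np.random.default_rng(0)
k, t_x, t_y = 0.05, 3.0, -1.5
x = rng.uniform(-100.0, 100.0, 200)
y = rng.uniform(-100.0, 100.0, 200)
u = t_x + k * x + rng.normal(0.0, 0.1, 200)    # small estimation noise
v = t_y + k * y + rng.normal(0.0, 0.1, 200)

# Point clouds in the two representation spaces (x_i, u_i) and (y_i, v_i)
# each lie around a line of slope k; the intercepts give the translation.
a0, b0 = np.polyfit(x, u, 1)    # a0 ~ k, b0 ~ t_x
a1, b1 = np.polyfit(y, v, 1)    # a1 ~ k, b1 ~ t_y
```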
  • FIGS 4 to 7 show some examples of possible configurations.
  • the next step 3 performs a robust linear regression for each of the motion representation spaces, with the aim of separating the data points representative of the real dominant movement from those corresponding either to the movement of secondary objects in the image, or to vectors which do not translate the physical movement of the pixels with which they are associated.
  • There are several families of robust estimation techniques.
  • the regression lines are calculated so as to satisfy the criterion of the least median of the squares.
  • the calculation method, briefly presented below, is described more fully in paragraph 3 of the article by P. Meer, D. Mintz and A. Rosenfeld "Robust Regression Methods for Computer Vision: A Review", published in the International Journal of Computer Vision, volume 6, no. 1, 1991, pages 59 to 70.
  • denoting r_i,j the residue of the i-th sample of a motion representation space in which we seek to estimate the set E_j of regression parameters (slope and intercept of the regression line), E_j is calculated so as to satisfy the following criterion: min over E_j of (med_i r_i,j²)
  • the residue r_i,j corresponds to the residual error in u_i or in v_i, depending on the representation space considered, associated with the modeling of the i-th sample by the regression line of parameters E_j.
  • the solution to this non-linear minimization problem requires a search for the line defined by E_j among all possible lines. In order to limit the calculations, the search is restricted to a finite set of p regression lines, defined by p pairs of points drawn randomly from the samples of the representation space studied. For each of the p lines, the squares of the residues are calculated and sorted so as to identify the squared residue which has the median value. The regression line is estimated as the one that provides the smallest of these median values of the squared residues.
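A minimal sketch of this random-sampling search, under the assumption that each of the p candidate lines is defined by a random pair of samples (function and variable names are ours):

```python
import numpy as np

def lmeds_line(x, y, n_trials=500, rng=None):
    """Least-median-of-squares line fit: draw random pairs of samples,
    form the line through each pair and keep the line whose squared
    residuals have the smallest median (robust up to ~50% outliers)."""
    if rng is None:
        rng = np.random.default_rng(0)
    best = (np.inf, 0.0, 0.0)          # (median of r^2, slope, intercept)
    for _ in range(n_trials):
        i, j = rng.choice(len(x), size=2, replace=False)
        if x[i] == x[j]:
            continue                   # degenerate (vertical) pair, skip
        a = (y[j] - y[i]) / (x[j] - x[i])
        b = y[i] - a * x[i]
        med = np.median((y - (a * x + b)) ** 2)
        if med < best[0]:
            best = (med, a, b)
    return best

# Illustrative data: dominant line of slope 0.05 plus 30% outliers that
# stand for a secondary object following its own movement.
rng = np.random.default_rng(1)
x = rng.uniform(-100.0, 100.0, 200)
y = 0.05 * x + 2.0 + rng.normal(0.0, 0.2, 200)
y[:60] += 15.0
med_r2, a, b = lmeds_line(x, y)
```

Despite 30% of the samples lying far from the dominant line, the retained line fits the dominant motion, which is precisely what least-squares regression alone cannot guarantee.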
  • the regression lines obtained by robust estimation in each representation space are then used to identify the outliers. For this purpose, we calculate, as a function of the median value of the squared residue corresponding to the best regression line found, a robust estimate σ̂ of the standard deviation of the residues associated with the non-aberrant samples, under the assumption that they follow a Gaussian distribution, and we label as an outlier any sample whose absolute residue value exceeds K times σ̂. It is advantageous to set the value of K at 2.5. Still in this step 3, classical, non-robust linear regressions are finally performed on the samples of each representation space, excluding the samples identified as outliers. These regressions provide refined estimates of the parameters (a_0, b_0) and (a_1, b_1) which will be used in the rest of the process.
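Continuing the sketch, the outlier rejection and the final non-robust refit can look as follows. The robust scale formula below is the classical least-median-of-squares estimate of Rousseeuw and Leroy, which we assume here since the patent text does not spell it out; the threshold K = 2.5 follows the description.

```python
import numpy as np

def refine_after_lmeds(x, y, a, b, med_r2, K=2.5):
    """Label outliers around the robust line of slope a and intercept b,
    then refit a classical least-squares regression on the inliers.
    The robust scale is Rousseeuw's classical estimate (an assumption,
    the patent does not give the formula); K = 2.5 follows the text."""
    n = len(x)
    sigma = 1.4826 * (1.0 + 5.0 / (n - 2)) * np.sqrt(med_r2)
    r = y - (a * x + b)
    inliers = np.abs(r) <= K * sigma
    a_ref, b_ref = np.polyfit(x[inliers], y[inliers], 1)
    return a_ref, b_ref, inliers

# Illustrative data: 150 samples on the dominant line, 50 outliers.
rng = np.random.default_rng(2)
x = rng.uniform(-100.0, 100.0, 200)
y = 0.05 * x + 2.0 + rng.normal(0.0, 0.2, 200)
y[:50] += 10.0
a0, b0 = 0.05, 2.0                 # suppose the robust step returned these
med_r2 = np.median((y - (a0 * x + b0)) ** 2)
a, b, inliers = refine_after_lmeds(x, y, a0, b0, med_r2)
```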
  • the next step 4 performs a linearity test of the regression lines in each of the representation spaces.
  • the purpose of this test is to verify that the point clouds in each space are indeed approximately distributed along straight lines, which the mere existence of a regression line, always produced by the estimation, does not guarantee.
  • the linearity test is carried out, in each representation space, by comparing to a predetermined threshold the standard deviation of the residues resulting from the linear regression relating to the non-outlier samples.
  • the value of the threshold depends on the temporal normalization applied to the motion vectors in step 1 of the method. In the case where, after normalization, each vector represents a displacement corresponding to the time interval separating two interlaced frames, i.e. 40 ms for a 50 Hz transmission, this threshold can advantageously be set at 6.
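In code, the linearity test of step 4 reduces to a one-liner per representation space (the default threshold value 6 is the one quoted above for vectors normalized to a 40 ms interval):

```python
import numpy as np

def is_linear(inlier_residuals, threshold=6.0):
    """Accept the representation space as linear when the standard
    deviation of the regression residues of the non-outlier samples
    stays below the predetermined threshold."""
    return float(np.std(inlier_residuals)) < threshold
```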
  • step 5 consists in verifying that the slopes a_0 and a_1, which provide a double estimate of the divergence parameter k of the motion model, do not differ significantly.
  • the equality test of two regression slopes is a known problem, dealt with in a number of statistical works; see for example the chapter devoted to the analysis of variance in C.R. Rao's book "Linear Statistical Inference and its Applications", published by Wiley (2nd edition). This test is carried out in a conventional manner by calculating an overall regression slope over the set of non-outlier samples of the two representation spaces of the motion vector field.
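A hedged sketch of this step 5: fit a common slope over the pooled samples of both spaces, then compare residual sums of squares against the two separate fits via an F-like statistic. The exact statistic and the cutoff value are our assumptions; the patent only names the principle.

```python
import numpy as np

def common_slope_fit(x0, u, y0, v):
    """Constrained fit with a single slope k but separate intercepts,
    minimising sum(u - k*x - c0)^2 + sum(v - k*y - c1)^2."""
    xc, uc = x0 - x0.mean(), u - u.mean()
    yc, vc = y0 - y0.mean(), v - v.mean()
    k = (xc @ uc + yc @ vc) / (xc @ xc + yc @ yc)
    return k, u.mean() - k * x0.mean(), v.mean() - k * y0.mean()

def slopes_equal(x0, u, y0, v, alpha_f=10.0):
    """Compare the residual sum of squares of two separate regressions
    with that of the common-slope regression via an F-like statistic
    (alpha_f is an assumed, deliberately conservative cutoff). Returns
    the test outcome and the mean of the two slopes."""
    a0, b0 = np.polyfit(x0, u, 1)
    a1, b1 = np.polyfit(y0, v, 1)
    rss_sep = np.sum((u - a0 * x0 - b0) ** 2) + np.sum((v - a1 * y0 - b1) ** 2)
    k, c0, c1 = common_slope_fit(x0, u, y0, v)
    rss_com = np.sum((u - k * x0 - c0) ** 2) + np.sum((v - k * y0 - c1) ** 2)
    n = len(x0) + len(y0)
    f = (rss_com - rss_sep) / (rss_sep / (n - 4))  # 1 constraint, 4 params
    return f < alpha_f, 0.5 * (a0 + a1)

# Illustrative check on synthetic data sharing a true slope k = 0.05:
rng = np.random.default_rng(3)
x0 = rng.uniform(-100.0, 100.0, 200)
y0 = rng.uniform(-100.0, 100.0, 200)
u = 0.05 * x0 + 3.0 + rng.normal(0.0, 0.1, 200)
v = 0.05 * y0 - 1.5 + rng.normal(0.0, 0.1, 200)
equal, k_hat = slopes_equal(x0, u, y0, v)

# A field whose two spaces disagree on the slope should be rejected:
v_bad = 0.2 * y0 - 1.5 + rng.normal(0.0, 0.1, 200)
equal_bad, _ = slopes_equal(x0, u, y0, v_bad)
```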
  • the value of the divergence coefficient k of the dominant motion model is estimated by the arithmetic mean of the regression slopes a_0 and a_1 obtained in each of the representation spaces.
  • the parameters t_x and t_y are estimated respectively by the values of the intercepts b_0 and b_1 from the linear regressions in the representation spaces.
  • the vector (k, t_x, t_y) of the estimated parameters is used to decide the category in which to classify the dominant movement, namely: - static,
  • the classification algorithm is based on nullity tests of the model parameters, in accordance with the table below:
  • the nullity tests of the estimates of the model parameters can be carried out by simple comparison of their absolute value with a threshold. More sophisticated techniques, based on statistical modeling of the distribution of the data, can also be used. In this statistical framework, an example of an algorithm for deciding the nullity of model parameters based on likelihood tests is presented in the article by P. Bouthemy, M. Gelgon and F. Ganansia entitled "A unified approach to shot change detection and camera motion characterization", published in IEEE Transactions on Circuits and Systems for Video Technology, volume 9, no. 7, October 1999, pages 1030 to 1044.
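As an illustration of the simple threshold-based variant of the nullity tests, a sketch (the threshold values eps_k and eps_t are illustrative assumptions, and the patent's own decision table is not reproduced in this extract):

```python
def classify_motion(k, t_x, t_y, eps_k=1e-3, eps_t=0.5):
    """Classify the dominant movement from nullity tests on the
    estimated parameters: a parameter is considered null when its
    absolute value falls below the corresponding threshold."""
    zoom = abs(k) > eps_k
    pan = abs(t_x) > eps_t or abs(t_y) > eps_t
    if zoom and pan:
        return "zoom + pan"
    if zoom:
        return "zoom"
    if pan:
        return "pan"
    return "static"
```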
  • One application of the invention relates to video indexing from the selection of key images.
  • the video indexing process generally begins with a pre-processing, which aims to restrict the volume of information to be processed in the video stream to a set of key images selected in the sequence.
  • the video indexing treatments, and in particular the extraction of the visual attributes, are carried out exclusively on these key images, each of which is representative of the content of a segment of the video.
  • all of the keyframes should form a comprehensive summary of the video, and redundancies between the visual content of the keyframes should be avoided, so as to minimize the computational burden of the indexing process.
  • the method of estimating the dominant movement within each video shot makes it possible to optimize the selection of the key images, within each shot, with respect to these criteria, by adapting it to the dominant movement.
  • the described method can also be used for the generation of metadata.
  • the dominant movements often coincide with the camera movements when shooting the video.
  • Some directors use specific camera movement sequences to communicate certain emotions or sensations to the viewer.
  • the method described in the invention can make it possible to detect these particular sequences in the video, and therefore to provide metadata relating to the atmosphere created by the director in certain passages of the video.
  • Another application of dominant motion detection is the detection, or aid to the detection, of shot changes. Indeed, an abrupt change in the properties of the dominant movement in a sequence can only be caused by a shot change.
  • the method described in the invention allows the identification, in each image, of the support of the dominant movement. This support indeed coincides with the set of pixels whose associated vector has not been identified as an outlier with respect to the dominant movement.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Compression, Expansion, Code Conversion, And Decoders (AREA)
EP02805377A 2001-12-19 2002-12-12 Procede d'estimation du mouvement dominant dans une sequence d'images Withdrawn EP1468568A1 (fr)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
FR0116466 2001-12-19
FR0116466A FR2833797B1 (fr) 2001-12-19 2001-12-19 Procede d'estimation du mouvement dominant dans une sequence d'images
PCT/FR2002/004316 WO2003055228A1 (fr) 2001-12-19 2002-12-12 Procede d'estimation du mouvement dominant dans une sequence d'images

Publications (1)

Publication Number Publication Date
EP1468568A1 true EP1468568A1 (fr) 2004-10-20

Family

ID=8870690

Family Applications (1)

Application Number Title Priority Date Filing Date
EP02805377A Withdrawn EP1468568A1 (fr) 2001-12-19 2002-12-12 Procede d'estimation du mouvement dominant dans une sequence d'images

Country Status (9)

Country Link
US (1) US20050163218A1 (en)
EP (1) EP1468568A1 (en)
JP (1) JP4880198B2 (en)
KR (1) KR100950617B1 (en)
CN (1) CN100411443C (en)
AU (1) AU2002364646A1 (en)
FR (1) FR2833797B1 (en)
MX (1) MXPA04005991A (en)
WO (1) WO2003055228A1 (en)

Families Citing this family (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU2003244192A1 (en) * 2003-06-25 2005-01-13 Nokia Corporation Image data compression parameter value controlling digital imaging device and image data compression parameter value decision method
CA2574590C (en) * 2004-07-20 2013-02-19 Qualcomm Incorporated Method and apparatus for motion vector prediction in temporal video compression
FR2875662A1 (fr) 2004-09-17 2006-03-24 Thomson Licensing Sa Procede de visualisation de document audiovisuels au niveau d'un recepteur, et recepteur apte a les visualiser
EP1956556B1 (en) * 2005-11-30 2010-06-02 Nikon Corporation Motion vector estimation
WO2009070508A1 (en) 2007-11-30 2009-06-04 Dolby Laboratories Licensing Corp. Temporally smoothing a motion estimate
JP5039921B2 (ja) * 2008-01-30 2012-10-03 インターナショナル・ビジネス・マシーンズ・コーポレーション 圧縮システム、プログラムおよび方法
JPWO2009128208A1 (ja) * 2008-04-16 2011-08-04 株式会社日立製作所 動画像符号化装置、動画像復号化装置、動画像符号化方法、および動画像復号化方法
CN102160381A (zh) * 2008-09-24 2011-08-17 索尼公司 图像处理设备和方法
TWI477144B (zh) * 2008-10-09 2015-03-11 Htc Corp 影像調整參數計算方法及裝置,及其電腦程式產品
CN101726256B (zh) * 2008-10-27 2012-03-28 鸿富锦精密工业(深圳)有限公司 从影像轮廓中搜寻拐点的计算机系统及方法
CN102377992B (zh) * 2010-08-06 2014-06-04 华为技术有限公司 运动矢量的预测值的获取方法和装置
JP2012084056A (ja) * 2010-10-14 2012-04-26 Foundation For The Promotion Of Industrial Science 物体検出装置
US9442904B2 (en) * 2012-12-21 2016-09-13 Vmware, Inc. Systems and methods for applying a residual error image
US9939253B2 (en) * 2014-05-22 2018-04-10 Brain Corporation Apparatus and methods for distance estimation using multiple image sensors
EP3343923B1 (en) * 2015-08-24 2021-10-06 Huawei Technologies Co., Ltd. Motion vector field coding method and decoding method, and coding and decoding apparatuses
JP2021513054A (ja) * 2018-02-02 2021-05-20 コーニンクレッカ フィリップス エヌ ヴェKoninklijke Philips N.V. 画像位置合わせ及び回帰解析を用いたシリアルポジトロンエミッショントモグラフィ(pet)検査における標準取り込み値(suv)のスケーリング差の補正
KR102710599B1 (ko) 2018-03-21 2024-09-27 삼성전자주식회사 이미지 데이터 처리 방법 및 이를 위한 장치
CN111491183B (zh) * 2020-04-23 2022-07-12 百度在线网络技术(北京)有限公司 一种视频处理方法、装置、设备及存储介质
US11227396B1 (en) * 2020-07-16 2022-01-18 Meta Platforms, Inc. Camera parameter control using face vectors for portal
JP7056708B2 (ja) * 2020-09-23 2022-04-19 カシオ計算機株式会社 情報処理装置、情報処理方法及びプログラム

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TW257924B (en) * 1995-03-18 1995-09-21 Daewoo Electronics Co Ltd Method and apparatus for encoding a video signal using feature point based motion estimation
US5802220A (en) * 1995-12-15 1998-09-01 Xerox Corporation Apparatus and method for tracking facial motion through a sequence of images
EP1229740A3 (en) * 1996-01-22 2005-02-09 Matsushita Electric Industrial Co., Ltd. Method and device for digital image encoding and decoding
EP1068576A1 (en) * 1999-02-01 2001-01-17 Koninklijke Philips Electronics N.V. Descriptor for a video sequence and image retrieval system using said descriptor
EP1050850A1 (en) * 1999-05-03 2000-11-08 THOMSON multimedia Process for estimating a dominant motion between two frames
EP1050849B1 (en) * 1999-05-03 2017-12-27 Thomson Licensing Process for estimating a dominant motion between two frames
US6865582B2 (en) * 2000-01-03 2005-03-08 Bechtel Bwxt Idaho, Llc Systems and methods for knowledge discovery in spatial data
JP3681342B2 (ja) * 2000-05-24 2005-08-10 三星電子株式会社 映像コーディング方法
WO2002003256A1 (en) * 2000-07-05 2002-01-10 Camo, Inc. Method and system for the dynamic analysis of data
US7499077B2 (en) * 2001-06-04 2009-03-03 Sharp Laboratories Of America, Inc. Summarization of football video content

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
FORMENTI P ET AL: "GLOBAL-LOCAL MOTION ESTIMATION IN MULTILAYER VIDEO CODING", OPTOMECHATRONIC MICRO/NANO DEVICES AND COMPONENTS III : 8 - 10 OCTOBER 2007, LAUSANNE, SWITZERLAND; [PROCEEDINGS OF SPIE , ISSN 0277-786X], SPIE, BELLINGHAM, WASH, vol. 1818, no. PART 02, 1 January 1992 (1992-01-01), pages 573 - 584, XP000911793, ISBN: 978-1-62841-730-2, DOI: 10.1117/12.131473 *

Also Published As

Publication number Publication date
KR100950617B1 (ko) 2010-04-01
AU2002364646A1 (en) 2003-07-09
FR2833797B1 (fr) 2004-02-13
MXPA04005991A (es) 2004-09-27
US20050163218A1 (en) 2005-07-28
FR2833797A1 (fr) 2003-06-20
CN1608380A (zh) 2005-04-20
JP2005513929A (ja) 2005-05-12
KR20040068291A (ko) 2004-07-30
CN100411443C (zh) 2008-08-13
WO2003055228A1 (fr) 2003-07-03
JP4880198B2 (ja) 2012-02-22

Similar Documents

Publication Publication Date Title
EP1468568A1 (fr) Procede d'estimation du mouvement dominant dans une sequence d'images
EP2326091B1 (en) Method and apparatus for synchronizing video data
US7508990B2 (en) Apparatus and method for processing video data
Yang et al. A fast source camera identification and verification method based on PRNU analysis for use in video forensic investigations
US20080273751A1 (en) Detection and Tracking of Moving Objects from a Moving Platform in Presence of Strong Parallax
JP2005513929A6 (ja) 画像のシーケンスにおける主要な動きを推定する方法
US12056949B1 (en) Frame-based body part detection in video clips
Chen et al. Variational fusion of time-of-flight and stereo data for depth estimation using edge-selective joint filtering
CN112396074A (zh) 基于单目图像的模型训练方法、装置及数据处理设备
Baracchi et al. Facing image source attribution on iPhone X
Joshi et al. Tampering detection and localization in digital video using temporal difference between adjacent frames of actual and reconstructed video clip
CN114239736A (zh) 光流估计模型的训练方法和装置
CN113592940A (zh) 基于图像确定目标物位置的方法及装置
Jung et al. Object Detection and Tracking‐Based Camera Calibration for Normalized Human Height Estimation
CN114170325B (zh) 确定单应性矩阵的方法、装置、介质、设备和程序产品
Li et al. Detection of blotch and scratch in video based on video decomposition
CN113592706A (zh) 调整单应性矩阵参数的方法和装置
US20060093215A1 (en) Methods of representing and analysing images
Milani et al. Audio tampering detection using multimodal features
Chittapur et al. Exposing digital forgery in video by mean frame comparison techniques
KR20160126985A (ko) 비디오의 방향을 결정하기 위한 방법 및 장치
Amer Object and event extraction for video processing and representation in on-line video applications
CN117132503A (zh) 一种图像局部高亮区域修复方法、系统、设备及存储介质
Springer et al. Robust Rotational Motion Estimation for efficient HEVC compression of 2D and 3D navigation video sequences
CN116453086A (zh) 识别交通标志的方法、装置和电子设备

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20040702

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR IE IT LI LU MC NL PT SE SI SK TR

AX Request for extension of the european patent

Extension state: AL LT LV MK RO

RIN1 Information on inventor provided before grant (corrected)

Inventor name: MARREC, SYLVAIN

Inventor name: LE CLERC, FRANÇOIS

RAP1 Party data changed (applicant data changed or rights of an application transferred)

Owner name: THOMSON LICENSING

RAP1 Party data changed (applicant data changed or rights of an application transferred)

Owner name: THOMSON LICENSING

17Q First examination report despatched

Effective date: 20161114

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20170325