EP1316065B1 - Video image segmentation method using elementary objects - Google Patents

Video image segmentation method using elementary objects

Info

Publication number
EP1316065B1
Authority
EP
European Patent Office
Prior art keywords
contour
active contour
image
active
elementary
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Lifetime
Application number
EP01967439A
Other languages
English (en)
French (fr)
Other versions
EP1316065A2 (de)
EP1316065B8 (de)
Inventor
Magali Maziere
Françoise Chassaing
Henri Sanson
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Orange SA
Original Assignee
France Telecom SA
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by France Telecom SA filed Critical France Telecom SA
Publication of EP1316065A2
Publication of EP1316065B1
Application granted
Publication of EP1316065B8
Anticipated expiration
Legal status: Expired - Lifetime

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/12Edge-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/149Segmentation; Edge detection involving deformable models, e.g. active contour models
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/755Deformable models or variational models, e.g. snakes or active contours
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20092Interactive image processing based on input by user
    • G06T2207/20104Interactive definition of region of interest [ROI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20112Image segmentation details
    • G06T2207/20116Active contour; Active surface; Snakes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/03Recognition of patterns in medical or anatomical images

Definitions

  • The invention relates to a method for segmenting a video image into elementary objects.
  • Computer-vision methods for segmenting video images into elementary objects in no way reproduce the functioning of the human visual and cognitive system: the image resulting from such processes is under-segmented or over-segmented. In both cases, these methods cannot automatically reproduce the ideal segmentation achieved by a human operator.
  • A first family corresponds to classical segmentation methods based on filtering, mathematical morphology, region growing, color-histogram partitioning and Markovian approaches. These automatic methods apply to an image, but the results depend heavily on the particular content of the image and are sensitive to its texture. They do not allow segmentation of the image into elementary objects, insofar as it is difficult to recover the contours of an object of interest. Images are over-segmented, and the detected contours do not all form a closed list guaranteeing the integrity of the outline of the object of interest and hence its segmentation. Results vary widely between methods and are not robust: two very similar images can lead to very different segmentations, and conversely the same image can yield very different segmentations with two different methods.
  • A second family groups methods based on mathematical morphology, which attempt to remedy the problems and drawbacks of the first family by means of processes built on a tree structure, the binary partition tree, which characterizes the content of the images.
  • Such a tree structure describing the spatial organization of the image is obtained by iteratively merging neighboring regions according to a homogeneity criterion until a single region remains. The tree is built by keeping track of the regions merged at each iteration of the process.
  • This method offers the possibility of manually marking regions of interest on the original image and of finding in the partition tree the nodes corresponding to this marking.
  • A third family includes statistical methods based on Markov fields. These methods label the regions of the image according to a criterion to be maximized. They can take into account a wide range of a priori information about the image and are particularly suited to satellite images composed of juxtaposed textured areas.
  • A fifth family of methods corresponds to a development of the previous family, in which, as regards the external forces applied to the active contour, the model behaves like a balloon inflating under the effect of these forces and stopping when it encounters marked or predefined contours. The active contour can thus cross weakly marked contours.
  • Other developments have proposed deformable geometric active contours. These use level sets, which allow automatic handling of changes in the topology of the active contour.
  • The methods of the above family require an initialization close to the final solution, that is, to the natural contour of the object, in order for the algorithm to converge well.
  • A first family uses a mesh technique.
  • A hierarchical mesh structure successively estimates the dominant motion of the object, then its internal motions.
  • A hierarchy of meshes is generated from the mask of the object, defining a polygonal envelope of this object.
  • An affine global model initializing the coarsest mesh of the hierarchy is estimated. This estimate is then propagated to the finer levels, where a global estimate is refined. It sometimes happens that a node deviates from the natural contour of the object and clings to the background of the scene, dragging its neighboring nodes with it. This drift is linked to a temporal accumulation of node-positioning errors, since only the initial segmentation is available during the optimization.
  • A second family relies on active contours, according to the methods described previously.
  • The active contour obtained on the current image is propagated from one image to the next and deformed to fit the contours of the object of interest on the successive images. Motion constraints can be added when minimizing the energy functional.
  • These methods can combine parameter estimation by optical flow or by a motion model, such as a translation, affine, perspective or bilinear transformation, with active-contour methods, in order to make the tracking of the object more robust.
  • One object-tracking method combines an active-contour method with a region-based motion analysis of the image. The motion of the object is detected by a motion-based segmentation algorithm. An active-contour model is then used to track and segment the object. The motion of the region enclosed by the active contour is then estimated with a multiresolution approach using an affine model. A Kalman filter predicts the position of this region and thus initializes the active contour in the next image.
  • A third family of methods uses label-based techniques that exploit image-partitioning processes, i.e. label maps over the pixels of an image.
  • A technique combining motion information and the spatial organization of the images has been proposed for the purpose of tracking an object.
  • The current image is partitioned by a mathematical-morphology method, and the resulting image is shifted by the motion vectors coarsely estimated by a block-matching algorithm.
  • The spatial homogeneity of the regions or markers is then verified.
  • These methods have the limitations of classical active-contour methods, including slow convergence.
  • A second method is based on the Markov-field technique. It segments the image, by statistical labeling, into regions that are homogeneous in the sense of motion. The partition is obtained according to a criterion of intensity, color and texture.
  • A third method performs a spatial segmentation of the image into homogeneous regions, and tracking is carried out by a back-projection method. The aim is to determine the mask of the object of interest on the current image. Each region of the segmented current image is back-projected, following the motion, onto the previous segmented image. The back-projected regions belonging to the mask of the object then form the new mask of the object on the current image.
  • A "lasso" feature allows the user to quickly trace an imprecise outline around an object with a mouse. Then, by pressing a button, the user obtains a shrinking of this contour so that it exactly envelops the object. The user does not have access to the program's source code to find out how the function is implemented.
  • The present invention aims to remedy the drawbacks of the prior-art techniques mentioned above, both as regards the image-segmentation process and the tracking of a moving object over successive images.
  • An object of the present invention is the implementation of a method for segmenting a video image into elementary objects in which no prior knowledge of the image is required.
  • Another object of the present invention is, because of this absence of a priori knowledge of the image, the implementation of an active-contour method for segmenting a video image into elementary objects in which the starting active contour, also called the starting contour, is arbitrary with respect to an elementary object of interest belonging to the image.
  • Another object of the present invention is also, given the initialization of the method from an arbitrary starting active contour, the implementation of a video-image segmentation method that is very flexible to use and highly tolerant of the selection made by an unsophisticated user, the starting contour possibly containing several loops, no orientation being required.
  • Another object of the present invention is also the implementation of an active-contour image-segmentation method in which, all a priori knowledge of the image being removed, the external-energy term is removed as a consequence, which yields a high speed of convergence of the current active contour towards the natural contour of the elementary object of interest.
  • Another object of the present invention is also the implementation of an active-contour image-segmentation method in which, owing to the absence of prior knowledge of the image, better tolerance to noise and to poorly defined image contours is obtained.
  • Another object of the present invention is also the implementation of an active-contour image-segmentation method in which, thanks to the tolerance of a multi-loop starting contour, the image can be segmented with respect to at least one multi-component elementary object, which gives the method a high degree of flexibility of use.
  • Another object of the present invention is the implementation of a method for segmenting a video image into elementary objects in which the speed of convergence of the starting contour towards the natural contour of the elementary object of interest gives the segmentation process great stability in each image and, as a result, stable tracking of moving objects over successive images, which yields great robustness when pursuing a mobile object of interest over a large number of successive images.
  • Another object of this invention is also the implementation of a method for segmenting a video image into elementary objects in which, because of the rapid convergence of the active contours, the robustness of object tracking and the permitted subdivision of an active contour into several active contours, each active contour resulting from such a subdivision evolves independently, being bound only to the corresponding subdivision of the elementary object of interest.
  • Another object of the present invention is, finally, the implementation of a method for segmenting a video image into elementary objects in which, thanks to a simplified motion-tracking process, the convergence of the current active contour towards the motion of the mobile elementary object of interest is accelerated.
  • The method that is the subject of the invention can particularly advantageously be implemented as program modules, and finds application in all video-image processing involving segmentation by objects for which a rough but reliable preselection of the object to be segmented is achievable.
  • The method is implemented from at least one image IM, such as a video image, but preferably from an image sequence comprising at least one elementary object, denoted OBJ, animated or not, and delimited by a natural contour, denoted CN.
  • The method that is the subject of the present invention is based on the fact that every elementary object OBJ present in an image, in particular a video image, has a natural contour CN whose trace appears in the image as light-intensity values exhibiting a discontinuity substantially all along it. This discontinuity introduces a notion of differential intensity between the object itself and its direct environment; in particular, the light-intensity gradient on the natural contour of the object, and thus on this trace, has a substantially stable value.
  • Based on the above observation, the object of the method is, starting from an absolutely arbitrary starting contour that nevertheless surrounds the object, to seek, by deforming this starting contour and contracting it towards the object, the stability of the position of the active contour on the natural contour of the object.
  • The method that is the subject of the present invention consists, in a step A, in defining around the aforementioned elementary object OBJ a starting contour, denoted CD, totally surrounding the elementary object OBJ.
  • The image IM, available as a video image and thus in the form of an image file, can advantageously be displayed on a display system, not shown in FIG. 1a, such as a video screen with a graphical interface and a pointer.
  • A user can then easily, with a pointing device, trace around the object OBJ any starting contour CD surrounding the object.
  • Step A is then followed by a step B defining, from the starting contour CD, an original active contour, denoted CAO, formed by a set of nodes distributed on this starting contour.
  • Step B is then followed by a step C of convergent deformation of the original active contour CAO, by moving at least one of the nodes of the original active contour towards the object OBJ, and in particular towards the natural contour of the elementary object.
  • The deformation of the original active contour CAO is performed by moving towards the natural contour of the elementary object at least one of the nodes of the original active contour, this displacement being normal and centripetal to the original contour CAO, driven by the elastic energy (or spring term) obtained from the distance of the nodes adjacent to the current node, and controlled by a blocking function on the contour image obtained from the intensity measured along the segments adjacent to the current node.
  • The deformation of the active contour CAO generates a current active contour, denoted CAC, which is then iteratively subjected to the aforementioned convergent deformation, generating distinct successive current active contours as long as the displacement and the deformation do not satisfy the blocking condition for all the nodes of the contour.
  • The final active contour substantially reproduces the natural contour CN of the object OBJ.
  • In step C, the deformation operation described above is applied to generate a current active contour.
  • From step B, that is, from the creation and drawing of the starting contour CD and the definition of the original active contour CAO, an energy function E is calculated; this energy function is linked to the light-intensity gradient calculated on the original active contour CAO, as will be described later in the description.
  • In step C, the application of a convergent deformation by displacement of at least one point or node of the original active contour CAO makes it possible to calculate a variation ΔE of elastic energy, to be minimized, for the current active contour CAC obtained from the applied deformation.
  • Step C can then be followed by a test step D consisting in checking that the energy variation ΔE is minimal.
  • The iteration is initiated by step E, which is noted: CAO ← CAC.
  • Step B, in which the original active contour CAO has been replaced by the current active contour CAC of the previous iteration, can then be reapplied via step C and step D described previously.
  • The deformation process is applied iteratively as long as there is displacement, which allows the successive current active contours to get closer to the natural contour CN of the object.
  • The current active contour CAC of the previous iteration then corresponds to a final active contour, which is substantially none other than the natural contour of the object OBJ; a sketch of this overall loop is given below.
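
The overall control flow of steps B to E can be summarized in the following sketch. It is a minimal reading of the method, not the patented implementation: the helper routines `resample`, `gradient_map` and `node_displacements` are illustrative names, sketched further down in this description, and contour nodes are assumed to be stored as (row, column) pixel coordinates.

```python
import numpy as np

def segment(image, start_contour, block_threshold=100.0, max_iter=500):
    """Steps B to E: contract an arbitrary starting contour CD onto the
    natural contour CN of the object (hedged sketch, illustrative names)."""
    nodes = resample(np.asarray(start_contour, dtype=float))  # step B: original active contour CAO
    grad = gradient_map(image)                                # gradient map (relations (1), (2))
    for _ in range(max_iter):                                 # steps C, D, E
        moves = node_displacements(nodes, grad, block_threshold)
        if np.allclose(moves, 0.0):                           # blocking condition met at every node
            break                                             # final active contour ~ natural contour CN
        nodes = resample(nodes + moves)                       # CAO <- CAC, re-sampled polygonally
    return nodes
```
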
  • Step B of defining an original active contour CAO from the starting contour CD, or where appropriate a current active contour CAC, will now be described in connection with Figure 1b.
  • Each active contour can advantageously be defined by polygonal modeling, by sampling on the trace of the active contour (original active contour CAO, respectively current active contour CAC) as a function of the distance between consecutive nodes.
  • The threshold values for polygonal sampling can be defined by the user.
  • Alternatively, the polygonal-sampling threshold values can be set substantially automatically from reference dimensions selected according to the size of the elementary object.
  • An intermediate node X3 is added substantially at the middle of the straight segment on the starting contour CD.
  • The node X3 is then taken into account and inserted between the nodes X1 and X2 to constitute the original active contour CAO, or where appropriate the current active contour CAC.
  • Other sampling and polygonal-modeling techniques can be implemented, such as interpolation or smoothing methods (splines, for example), in order to add differential constraints on the original active contour, respectively the current active contour.
  • Conversely, when two nodes are too close, the corresponding segment is merged: the nodes X1 and X2 are brought back into a single resulting node X4, represented in sub-step 4, positioned substantially at the middle of the segment of length d on the starting contour, on the original active contour CAO, or on the current active contour CAC.
  • An interpolated position other than the middle of the segment of length d may be used.
  • The nodes X1 and X2 are then deleted and replaced by the single node X4, as shown in sub-step 4.
  • At the above sub-step, a current active contour CAC or an original active contour CAO modeled by the set of segments represented in FIG. 1b is available, with successive segments d31, d32, and so on along the whole course of the original active contour, respectively the current active contour; a sketch of this re-modeling follows.
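
A minimal sketch of this polygonal re-modeling, assuming a closed contour stored as an array of node coordinates; the thresholds `d_min` and `d_max` stand in for the sampling thresholds (the Smin and Smax mentioned later) and their values are illustrative:

```python
import numpy as np

def resample(nodes, d_min=4.0, d_max=12.0):
    """Polygonal re-modeling of a closed contour: a segment longer than d_max
    receives an intermediate node X3 at its middle; two nodes X1, X2 closer
    than d_min are merged into a single midpoint node X4 (FIG. 1b)."""
    out, n, i = [], len(nodes), 0
    while i < n:
        a, b = nodes[i], nodes[(i + 1) % n]
        d = np.linalg.norm(b - a)
        if d < d_min and i + 1 < n:        # merge X1 and X2 into the midpoint node X4
            out.append((a + b) / 2.0)
            i += 2
        elif d > d_max:                    # insert the intermediate node X3 at the segment middle
            out.extend([a, (a + b) / 2.0])
            i += 1
        else:                              # segment length acceptable: keep the node
            out.append(a)
            i += 1
    return np.asarray(out)
```
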
  • A light-intensity gradient is calculated in the horizontal and vertical directions.
  • For any pixel of coordinates i, j in the rectangular pixel area considered, the luminance gradient satisfies relation (1), where I_x(i, j) denotes the value of the light-intensity (luminance) gradient in the horizontal direction and I_y(i, j) the value in the vertical direction, computed with respect to the adjacent pixels of address i+1, i-1, respectively j+1 and j-1.
  • The norm of the gradient is then given by relation (2): N = √(I_x²(i, j) + I_y²(i, j)), from the gradients in the aforementioned horizontal and vertical directions.
  • The strength of an active contour is measured by the norm N of the gradient as calculated above; a sketch of this computation is given below.
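
A gradient map consistent with relations (1) and (2) can be sketched as follows; central differences over the adjacent pixels are assumed, and the whole image is processed here for simplicity, whereas the patent restricts the computation to the region enclosed by the contour:

```python
import numpy as np

def gradient_map(image):
    """Gradient norm N = sqrt(Ix^2 + Iy^2) per pixel (relations (1) and (2)),
    with Ix, Iy taken as central differences over the adjacent pixels
    i+1, i-1 and j+1, j-1; `image` is a 2-D luminance array."""
    img = np.asarray(image, dtype=float)
    ix = np.zeros_like(img)
    iy = np.zeros_like(img)
    ix[1:-1, :] = img[2:, :] - img[:-2, :]   # Ix(i, j) = I(i+1, j) - I(i-1, j)
    iy[:, 1:-1] = img[:, 2:] - img[:, :-2]   # Iy(i, j) = I(i, j+1) - I(i, j-1)
    return np.hypot(ix, iy)                  # gradient norm, stored as the gradient map
```
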
  • For an original active contour CAO, respectively a current active contour CAC, the contributions of the light-intensity gradient are evaluated on the two segments adjacent to the considered node, that is, on the segments d31 and d32 for the successive nodes represented in sub-step 2 of FIG. 1b.
  • The contribution is taken from the set of stored gradient values GR, this set being designated the gradient map.
  • The contribution for the considered node is weighted by a form function equal to 1 at the current node and decreasing linearly towards 0 at the adjacent node. All the gradient contributions on the considered segment are summed.
  • The values associated with each segment are stored in a vector; a sketch of this evaluation follows.
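
The per-segment contribution can be read as the following sketch, in which the number of samples along a segment and the nearest-pixel lookup are assumptions of this illustration:

```python
import numpy as np

def segment_contribution(p, q, grad, samples=16):
    """Gradient contribution on the segment [p, q]: the gradient norm is read
    from the stored gradient map along the segment and weighted by the form
    function, equal to 1 at the current node p and decreasing linearly to 0
    at the adjacent node q; the weighted values are summed."""
    t = np.linspace(0.0, 1.0, samples)
    pts = (1.0 - t)[:, None] * p + t[:, None] * q            # points along the segment
    ij = np.clip(np.rint(pts).astype(int), 0,
                 np.array(grad.shape) - 1)                   # nearest pixels in the map
    return float(np.sum((1.0 - t) * grad[ij[:, 0], ij[:, 1]]))

def node_gradient(nodes, k, grad):
    """Contribution G at node k: sum over the two adjacent segments (d31, d32)."""
    return (segment_contribution(nodes[k], nodes[k - 1], grad)
            + segment_contribution(nodes[k], nodes[(k + 1) % len(nodes)], grad))
```
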
  • The functional of elastic energy, representative of the distance separating each node from a neighboring node, then satisfies relation (4),
  • where X, Xp and Xs are vectors of dimension 2 containing the coordinates of the current node, the previous node and the next node,
  • and where k represents a stiffness term, called the spring term, corresponding to the elastic energy representative of the distance separating each node from a neighboring node.
  • The spring term R tends to minimize the energy E, which results in a smoothing whose strength is weighted by the stiffness term k.
  • This term is a regularization term that prevents degeneration and notably eliminates the formation of folds.
  • The spring R is an oriented quantity, carried by the segment joining two consecutive nodes.
  • The spring terms are denoted R13, R31, R32, R23.
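
Relation (4) itself is rendered as an image in the source; one reading consistent with the surrounding text is E(X) = (k/2)(‖X − Xp‖² + ‖X − Xs‖²), whose minimization pulls each node towards its neighbors. A corresponding spring-force sketch:

```python
import numpy as np

def spring_force(nodes, k, stiffness=0.3):
    """Spring term R at node k, carried by the segments towards the previous
    node Xp and the next node Xs and weighted by the stiffness k; minimizing
    the elastic energy smooths the contour and prevents folds (hedged reading
    of relation (4), which is an image in the source)."""
    x, xp, xs = nodes[k], nodes[k - 1], nodes[(k + 1) % len(nodes)]
    return stiffness * ((xp - x) + (xs - x))   # pulls X towards its neighbors
```
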
  • The deformation applied to each original active contour CAO, respectively current active contour CAC, is effected by a displacement of at least one of the nodes constituting the contour considered, according to a relation binding the aforementioned spring term R, the displacement itself in a centripetal direction towards the elementary object, and a light-energy term linked to the gradient, designated the contribution of the gradient on the original active contour CAO, respectively the current active contour CAC, as will be described below.
  • The value of the light-intensity gradient is taken into account over the whole of each segment placed on either side of the considered node, the contribution G of the light-intensity gradient GR on each segment considered being evaluated by summing the gradient norm weighted by the weighting function previously mentioned in the description.
  • In this summation, d takes the value d31 and X moves from node X1 to node X3 on the segment d31.
  • A heuristic is used in order to assign a normal vector to the aforementioned active contour.
  • The normal vector N1 to the segment d31 and the normal vector N2 to the segment d32 are calculated.
  • The average, or resultant, of the normalized normal vectors N1 and N2 provides the direction of the resulting normal vector N3 at the node X3.
  • The vector N3, corresponding to a displacement vector N, is then oriented towards the inside of the object, starting for example from a concavity calculation of the path supporting the original active contour CAO, respectively the current active contour CAC.
  • Other calculation methods, based on splines or other interpolations, can be implemented to estimate the normal vector N; a sketch of the estimation follows.
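
A sketch of this normal estimation; the rotation by −90° assumes a counter-clockwise node ordering, whereas the patent orients the vector inwards from a concavity calculation:

```python
import numpy as np

def inward_normal(nodes, k):
    """Normal vector N3 at node k: resultant of the unit normals N1, N2 of the
    two adjacent segments, then normalized; the -90 degree rotation gives the
    inward direction for a counter-clockwise contour (assumed convention)."""
    x, xp, xs = nodes[k], nodes[k - 1], nodes[(k + 1) % len(nodes)]
    def seg_normal(a, b):
        t = (b - a) / (np.linalg.norm(b - a) + 1e-12)   # unit tangent of the segment
        return np.array([t[1], -t[0]])                  # unit normal (rotation by -90 degrees)
    n = seg_normal(xp, x) + seg_normal(x, xs)           # resultant of N1 and N2
    return n / (np.linalg.norm(n) + 1e-12)
```
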
  • The displacement constraint F applied along the displacement vector N at at least one of the nodes of the original active contour, respectively of the current active contour, is given by relation (7),
  • where the term σ(G < S) is a specific function equal to 1 if G < S and 0 otherwise, S denoting a threshold value predefined by the user and G the contribution of the gradient at the considered node.
  • The aforementioned relation (7) defines the condition for blocking the displacement of the nodes through the function σ(G < S): if this function is equal to 1, the node(s) of the current active contour are displaced by the resulting value F, and if this function is zero the displacement is stopped.
  • The node, and where appropriate the set of nodes constituting the original active contour CAO, respectively the current active contour CAC, is thus moved by the value of the displacement constraint F in the centripetal direction defined for the considered node.
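
A sketch of relation (7) as read here, F = σ(G < S)·(displacement along N plus spring term R), reusing the helpers sketched above; the unit step length is an illustrative choice:

```python
import numpy as np

def node_displacements(nodes, grad, threshold, step=1.0, stiffness=0.3):
    """Relation (7) as read here: F = sigma(G < S) * (step * N + R).
    sigma is 1 while the gradient contribution G at the node stays below the
    threshold S (the node keeps moving inwards) and 0 once the contour of the
    object is reached, which blocks the node."""
    moves = np.zeros_like(nodes)
    for k in range(len(nodes)):
        if node_gradient(nodes, k, grad) < threshold:   # blocking function sigma(G < S)
            moves[k] = step * inward_normal(nodes, k) + spring_force(nodes, k, stiffness)
    return moves
```
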
  • Step A, consisting in the definition of a starting contour CD around the object OBJ, may advantageously comprise a sub-step A11 consisting of an image-smoothing operation by means of a filtering process.
  • The current video image is filtered in order to limit the ambient noise present in this image and to obtain more spread-out contours.
  • The filtering used may consist of a conventional noise-suppression filtering process depending on the nature of the data constituting the image. For this reason, the filtering process will not be described in more detail.
  • Sub-step A11 can then be followed by a sub-step A12 consisting, starting from the initial contour CD, in initializing the computation of the gradient values for a determined zone of the image.
  • The gradient values given by the previous relations (1) and (2) are calculated only on the region enclosed by the initial contour CD, then by the successive current active contours, until the current active contour CAC reaches the final active contour corresponding to the natural contour of the object.
  • The calculated values of the gradient norm are then stored in a gradient map.
  • The above values can be calculated in gray level or in color.
  • The gradient map is an image of floating-point values, initialized to an arbitrary value for example.
  • FIG. 2b shows views displayed successively on a monitor for a video image comprising an object OBJ, an original active contour CAO or a current active contour CAC, and the area in which the gradient map CG is calculated.
  • The gradient map is calculated in a zone intermediate between a current active contour and the natural contour CN of the object, this zone being shown in gray in Figure 2b.
  • Step B of defining, from the starting contour CD, an original active contour CAO can also be subdivided into a first sub-step B11, consisting in carrying out the sampling for the polygonal modeling of the contour considered, as shown in Figure 1b; sub-step B11 can then advantageously be followed by a sub-step B12 of intersection detection on the active contour, original active contour CAO, respectively current active contour CAC.
  • Sub-step B12 may advantageously be implemented when the elementary object is an animated object in the image, and therefore capable of motion, deformation and partition, for any active contour likely to form a loop exhibiting at least one point of intersection following a partition or a deformation of this elementary object into elementary-object components.
  • The active contour, original active contour, respectively current active contour, is then split and regrouped into a number of distinct active contours equal to the number of intersections plus one, which makes it possible to assign a final active contour to each component of the aforementioned elementary object.
  • An active contour evolves over time, taking into account changes in the form or partition of the object, which means that loops can appear within the active contour.
  • The current active contour is then divided into several active contours according to the division rule mentioned previously.
  • Node A is disconnected from node B, and likewise node C from node D.
  • Node A and node C are connected to node D, respectively to node B.
  • The notion of connection consists in constituting each active contour, original active contour or current active contour, in the form of a closed list of nodes.
  • The aforementioned step is a recursive process comprising the creation of a new active contour, the addition of the nodes between nodes B and C to this new active contour, and the simultaneous deletion of these same nodes from the current active contour. If the new active contour is not degenerate, that is, if it comprises more than two nodes, it is stored in the form of a meta-snake representing a vector of active contours, the latter being themselves stored as lists of nodes. An active contour only makes sense for approximating the outer contours of an object. The above recursive function is called again until no intersection remains. Different intersection-detection processes can be implemented without departing from the scope of the present invention; the reconnection itself is sketched below.
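
The reconnection A→D, C→B described above can be sketched as follows, for a contour held as a closed list of nodes and a detected crossing between the segments (A, B) and (C, D); recursive application to each resulting list handles several intersections:

```python
def split_at_intersection(contour, i, j):
    """Split a looping active contour (closed list of nodes) at one detected
    intersection between the segments (A, B) = (contour[i], contour[i+1]) and
    (C, D) = (contour[j], contour[j+1]), with i < j: A is reconnected to D and
    C to B. Contours of two nodes or fewer are dropped as degenerate."""
    first = contour[:i + 1] + contour[j + 1:]   # ... A, D ... (A now connected to D)
    second = contour[i + 1:j + 1]               # B ... C (C closes back onto B)
    return [c for c in (first, second) if len(c) > 2]
```
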
  • Step D, consisting in carrying out the minimum-displacement test, can advantageously, as shown in FIG. 2a, on a negative response to the aforementioned test, be followed by a step F1 intended to modify the definition resolution of the current active contour CAC.
  • This can be done as described previously with reference to FIG. 1b, in particular by modifying the polygonal-sampling threshold values Smax and Smin.
  • The step F of stopping the displacement of the final active contour is then called, the final active contour being deemed to correspond to the natural contour of the elementary object of interest.
  • The method that is the subject of the present invention also makes it possible to follow, or track, the elementary object, given that the latter is liable to deform, to turn and, more generally, to move over time, that is, from one image to the next, over a sequence of video images for example.
  • It is assumed that the user has selected an elementary object of interest, that is, that step B of figure 1a has been implemented, and in addition that the acquisition of the elementary object of interest has been carried out, that is, that step F of figure 1a or 1b has been carried out, the final contour fitting the elementary object of interest satisfactorily.
  • The method then consists, in a step G called data preparation, carried out on the current image, in building the mask of the object delimited by the final active contour, and a band, called the crown, encompassing the nodes of the active contour considered, the crown being the difference of the regions encompassed by two expansions of the active contour, or obtained by successive dilations of a binary image initialized with this active contour.
  • Step G is itself followed by a step H of performing, on the crown, a motion estimation, in order to move the nodes of the active contour or the pixels of the crown along an estimated motion vector.
  • A test I can be planned to reiterate the motion estimation, by a return J to the motion estimation of step H.
  • The test I can correspond, for example, to a motion estimation over more than two images, according to the choice of the user, as will be described later in the description.
  • The estimated motion or displacement vector is then applied to the active contour considered, in order to ensure the tracking of the mobile elementary object by the final active contour and to discriminate the aforementioned mobile elementary object, taking its motion into account in the following image.
  • The method that is the subject of the present invention may then be reiterated by performing step B of FIG. 1a or FIG. 2a, then step C of deformation by displacement under the blocking condition for all the nodes of the contour.
  • The motion-estimation step H can be implemented in two sub-steps: a first sub-step H1 of estimating the actual motion applied to the dilated active contour, as mentioned above, followed by a sub-step H2 of refining the segmentation of the image, that is, the selection of the contour of the elementary object.
  • The motion-estimation method itself can be based on a multiresolution structure estimating the global motion of the object delimited by the current active contour CAC, using a translation model or an affine model.
  • The multiresolution is obtained by successively filtering the images, a process which accelerates the convergence of the solution and makes it more robust.
  • In these models, x and y denote the coordinates of a point M(x, y) of the current image transformed, owing to the motion of the elementary object, to a point M'(x', y') of coordinates x' and y' in the following image;
  • dx, dy denote the translation parameters in the horizontal direction x and the vertical direction y for the transformation by translation;
  • a1, a2, a3, a4, a5, a6 denote the affine transformation parameters for passing from the current active contour of the current image to the current active contour of the next image, owing to the displacement or deformation of the elementary object of interest; the two models are written out below.
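
The two motion models, written out under one conventional assignment of the six affine parameters (the patent names a1 to a6 without fixing their layout in the extracted text):

```latex
% Translation model, parameters d_x and d_y:
x' = x + d_x, \qquad y' = y + d_y
% Affine model, parameters a_1 \dots a_6 (one conventional layout):
x' = a_1 x + a_2 y + a_5, \qquad y' = a_3 x + a_4 y + a_6
```
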
  • Step G of data preparation, that is, of defining the band forming a crown from the current active contour or the final active contour segmenting the elementary object of interest, may consist in generating a binary image calculated on the aforementioned crown encompassing the nodes of the final active contour CAF.
  • The previously mentioned crown may correspond to the difference of the regions encompassed by two dilations of the final active contour CAF, these regions being definable relative to the geometric center of the active contour or to its center of gravity.
  • Another possibility consists in obtaining the aforementioned regions by successive dilations of a binary image initialized from the final active contour CAF considered.
  • The sub-step of refining the selection of the contour of the object, carried out in sub-step H2, may consist, as described in connection with FIG. 3b, following the estimation of the motion of the crown of the active contour considered (the final active contour CAF, for example, constituting for the motion estimation a current active contour CAC), in moving each node of this active contour CAC by the estimated motion value in a sub-step H21, to generate an initial active contour for the new image.
  • FIG. 3b shows the final active contour, in fact forming a current active contour CAC, as a dotted circle, in a nonlimiting manner so as not to overload the drawing; the motion estimation yields a displacement vector De, the displacement being illustrated symbolically by the displacement of the center of the current active contour CAC, and of course of its periphery.
  • This displacement generates a displaced current active contour CACD at the end of sub-step H21.
  • The displaced current active contour CACD thus constitutes an initial current active contour CACI for the next image.
  • Sub-step H21 is then followed by a sub-step H22 of expanding the initial current active contour CACI by geometric transformation, to generate a displaced and expanded current active contour CACDd constituting an initial reference active contour CAIR for this next image.
  • The expansion is carried out by geometric transformation, possibly consisting for example of a homothety with respect to the barycentre of the displaced current active contour CACD.
  • The initial reference active contour CAIR thus obtained constitutes an original active contour of the elementary object for the next image in sub-step H23, which of course makes it possible to restart iteratively the convergent deformation of the original active contour so as to generate the final current active contour for this next image. It is thus understood that, following sub-step H23 of FIG. 3b, it is then possible to call, for example, step B then step C of FIGS. 1a and 2a to ensure the segmentation of the object according to the method of the present invention; a sketch of sub-steps H21 and H22 is given below.
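
Sub-steps H21 and H22 can be sketched as follows; the homothety factor is an illustrative value, and the motion vector is assumed to have been estimated on the crown beforehand:

```python
import numpy as np

def init_next_image(caf_nodes, motion, dilation=1.15):
    """Sub-steps H21/H22: displace every node of the final active contour CAF
    by the estimated motion vector De (giving CACD), then expand the result by
    a homothety about its barycentre (giving the reference initial active
    contour CAIR, used as the original active contour for the next image)."""
    moved = np.asarray(caf_nodes, dtype=float) + np.asarray(motion)  # H21: CACD
    center = moved.mean(axis=0)                                      # barycentre of CACD
    return center + dilation * (moved - center)                      # H22: homothety -> CAIR
```
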
  • There are shown, for an arbitrary active contour, a mask consisting of a binary image and, finally, the crown corresponding to successive dilations of a binary image initialized with the active contour.
  • Figure 4 represents a ballet scene played by the two aforementioned characters. The first two images at the top present two possible selections (mouse-drawn and bounding box) encompassing the characters, and the six other images present moments of the temporal tracking of these characters.
  • The access terminal may be a terminal constituted by a desktop microcomputer, a laptop, a PDA-type digital assistant, or a mobile radiotelephone terminal with a display screen and a graphical interface, of WAP type for example, this mobile radiotelephone terminal using a transmission of UMTS or GPRS type for example, and allowing the exchange of files with the server site.
  • The terminal TA has a sample, actually constituted by a sample image denoted IECH, consisting of at least one sample video image taken from the sequence of images, or from the plurality of images, stored in a database of the server SERV.
  • The image sequence stored in the database of this server is in fact a sequence of reference images, SIR, this sequence being deemed to include a plurality of current reference images IRC, each current reference image being followed by a next reference image, denoted IRS.
  • The protocol for searching for an elementary object of interest, which is the subject of the present invention, consists, in a step K, in segmenting the sample video image IECH according to the method that is the subject of the present invention, as described previously with reference to FIGS. 1 to 4.
  • The purpose of this segmentation is to generate at least one sample active contour.
  • This sample active contour is, for example, a final active contour CAF, in the sense of the method that is the subject of the present invention, constituted by a list of nodes associated with the elementary object of interest belonging to the sample video image IECH.
  • The list of nodes constitutes in fact a list of points distributed on the active contour considered (the final contour), each point being furthermore associated with a stiffness-constant value representative of the elastic energy E, as previously mentioned in the description.
  • Step K is then followed by a step L of transmitting the list of nodes Le from the access terminal TA to the server site SERV.
  • Step L is then followed by a step M, at the server, of segmenting at least one current image of the sequence of images stored in the database, this segmentation being of course carried out in accordance with the segmentation method of the invention described previously.
  • The aforementioned segmentation operation, noted "IRC segmentation, to generate CAR", makes it possible to generate at least one reference active contour, denoted CAR.
  • Step M is then itself followed by a step N consisting of a similarity-comparison test of the sample active contour list Le against the reference active contour list Lr, denoted Le ≈ Lr.
  • When the sample list and the sample active contour CAE can be identified with the reference list Lr and with the reference active contour CAR, the comparison test step N is followed by a step P of stopping the search and transmitting, if necessary at the request of the terminal TA, all or part of the sequence of images stored in the database accessible on the server site SERV.
  • The protocol that is the subject of the present invention can be improved insofar as, with each sample active contour and, correspondingly, with each reference active contour CAR, different attribute parameters of the elementary object that is the subject of the search may be associated, in order to improve the performance of object recognition.
  • The protocol may thus include steps of discriminating, in the object of interest, sample-object component attributes, denoted AECH, such as the color, texture or motion parameters of the elementary object of interest in the sample image considered.
  • In step L, the sample-object component attributes AECH are transmitted from the access terminal TA to the server site SERV.
  • The protocol may then consist in discriminating, in the object delimited by the reference active contour CAR, reference-object component attributes of the same type as the sample-object component attributes.
  • The reference-object component attributes are denoted AIR and correspond in the same way to attributes such as texture, color, color temperature or others, in the object delimited by the reference active contour.
  • Step M is then followed by a step N in which the reference-object component attributes AIR and the sample-object component attributes AECH are also compared, to command the stopping, respectively the continuation, of the search.
  • This command can be carried out by coupling, through an AND function, the comparison of the sample list and sample active contour with the reference list and reference active contour, and the comparison of the sample attributes with the reference-object component attributes; a sketch of the resulting search loop follows.
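
The search loop of steps K to P, with the optional AND-coupling of contour and attribute comparisons, can be sketched as follows; `segment`, `similar` and `attributes` are placeholders for the segmentation method, the similarity measure and the attribute comparison, none of which are fixed by the protocol text:

```python
def search_sequence(sample_image, sequence, segment, similar, attributes=None):
    """Steps K to P of the search protocol: segment the sample image, then scan
    the stored reference images, stopping at the first one whose reference
    active contour (and, optionally, whose attributes) match the sample's."""
    sample_contour = segment(sample_image)              # step K: sample active contour (list Le)
    for frame in sequence:                              # step M over current reference images IRC
        ref_contour = segment(frame)                    # reference active contour CAR (list Lr)
        ok = similar(sample_contour, ref_contour)       # step N: similarity comparison Le ~ Lr
        if ok and attributes is not None:
            ok = attributes(sample_image, frame)        # AND-coupled attribute comparison
        if ok:
            return frame                                # step P: stop search, transmit sequence
    return None
```
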
  • With regard to the implementation of step M of segmentation of the current reference image, it is pointed out that the case where this image contains several elementary objects of interest does not constitute an obstacle to the implementation of the protocol that is the subject of this invention, insofar as, in such a case, it is possible to provide an arbitrary starting contour CD substantially surrounding the entire image at its periphery, the method that is the subject of the present invention allowing segmentation into several elementary objects of interest when they are disjoint.
  • Since the sample image IECH of an elementary object of interest is chosen by the user, there always exists, in every current reference image IRC, an elementary reference object corresponding substantially to the object chosen by the user in the sample image IECH.
  • The protocol that is the subject of the present invention thus appears particularly well suited to the implementation of image search in sequences of MPEG-4 standard video images, for example.

Claims (9)

  1. Method for segmenting an image of an animated sequence (IM) into elementary objects, characterized in that it consists, with respect to at least one elementary object (OBJ) of this image delimited by a natural contour (CN), in the following:
    a starting contour (CD) is defined around this elementary object, completely surrounding it;
    from this starting contour, an original active contour (CAO) is defined, formed by a set of nodes distributed on this starting contour, each node being formed by a point belonging to this starting contour and by an elastic energy function representing the distance (d) separating this node from a neighboring node;
    with respect to a set of reference values capable of representing the natural contour of this elementary object, this original active contour is subjected to a convergent deformation by moving at least one of the nodes of the original active contour towards the natural contour of the elementary object, so as to generate a current active contour (CAC), this current active contour being iteratively subjected to this convergent deformation so as to generate distinct successive current active contours as long as this movement satisfies a non-blocking condition, and otherwise any node movement of this current active contour is stopped, which makes it possible to generate a final current active contour substantially reproducing the natural contour of this elementary object.
  2. Method according to claim 1, characterized in that the set of nodes of each active contour is defined by polygonal modeling, by sampling on the trace of the active contour as a function of the distance (d) between consecutive nodes, which makes it possible to adapt the definition resolution of each of the successive active contours.
  3. Method according to either of claims 1 and 2, characterized in that this convergent deformation consists in the following:
    at each of the nodes of the current active contour (CAC), a vector normal to the active contour is calculated;
    at least one of the nodes of this active contour is subjected to a centripetal movement in the direction of the normal vector associated with this node.
  4. Method according to any one of claims 1 to 3, characterized in that this set of reference values consists of a set of image-intensity gradient values calculated on this active contour.
  5. Method according to any one of claims 1 to 4, characterized in that, when this elementary object (OBJ) consists of an animated object in the image, the animated object being capable of movement, deformation and partition, it consists, for any active contour capable of forming a loop exhibiting at least one point of intersection following a partition of this elementary object into elementary-object components, in the following:
    the presence of at least one point of intersection on this active contour is detected;
    this active contour is split/regrouped into a number of distinct active contours equal to the number of intersection points plus one, which makes it possible to assign a final active contour (CAF) to each component of this elementary object.
  6. Method according to any one of claims 1 to 5, characterized in that, when this elementary object consists of an animated object moving in the image, it further consists, for at least two successive images of an animated sequence, in the following:
    on each final active contour (CAF) of each image, a band forming a crown is defined, comprising the set of nodes belonging to this active contour;
    between points of this crown, an estimation of the movement of the elementary object (OBJ) from this image to the following image is carried out, which makes it possible to define a motion vector on the nodes of this active contour;
    this motion vector towards the following image is applied to each node of this active contour, which makes it possible to ensure the tracking of the mobile elementary object by this final active contour and to discriminate this mobile elementary object, taking the movement of the latter into account.
  7. Method according to claim 6, characterized in that, for the purpose of refining the segmentation of the image, it consists, following the estimation of the movement of the crown of the active contour, in the following:
    each node of this active contour is moved by the value of the estimated movement, so as to generate an initial active contour for the new image;
    this initial active contour is expanded by geometric transformation, so as to generate an initial reference active contour for this new image, this initial reference active contour forming an original active contour (CAO) of this object (OBJ);
    the convergent deformation of this original active contour is iteratively repeated, so as to generate this final current active contour (CAF).
  8. Protocol for searching for an elementary object of interest in an animated sequence of images stored in a database accessible on a server site (SERV), from a terminal for accessing this server site, this access terminal (TA) having a sample consisting of at least one sample image taken from this image sequence, characterized in that it consists at least in the following:
    this sample image is segmented according to the method of any one of claims 1 to 7, so as to generate at least one sample active contour consisting of a list of nodes associated with this elementary object of interest belonging to this sample video image;
    this list of nodes is transmitted from the access terminal to the server site;
    at least one current image of this image sequence stored in this database is segmented according to the method of any one of claims 1 to 7, so as to generate at least one reference active contour (CAR);
    this sample active contour is compared with this reference active contour by a similarity comparison and, on similarity, the search is stopped, so as to ensure the transmission of all or part of the sequence of stored images to the access terminal; otherwise the search is continued on the image following this current image in this sequence of stored images.
  9. Protocol according to claim 8, characterized in that it further comprises the steps consisting in the following:
    sample-object component attributes, such as color, texture or motion parameters, are discriminated in this object of interest in this sample video image;
    these object component attributes are transmitted from this access terminal (TA) to this server site (SERV);
    in the object delimited by this reference active contour (CAR), reference-object component attributes of the same type as the sample-object component attributes are discriminated;
    the reference-object component attributes and the sample-object component attributes are compared, in order to trigger the interruption or the continuation of the search.
EP01967439A 2000-09-07 2001-09-06 Video image segmentation method using elementary objects Expired - Lifetime EP1316065B8 (de)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
FR0011404 2000-09-07
FR0011404A FR2814312B1 (fr) 2000-09-07 2000-09-07 Method for segmenting a video image surface into elementary objects
PCT/FR2001/002771 WO2002021444A2 (fr) 2000-09-07 2001-09-06 Method for segmenting a video image into elementary objects

Publications (3)

Publication Number Publication Date
EP1316065A2 (de) 2003-06-04
EP1316065B1 (de) 2005-07-06
EP1316065B8 (de) 2005-09-14

Family

ID=8854051

Family Applications (1)

Application Number Title Priority Date Filing Date
EP01967439A Expired - Lifetime EP1316065B8 (de) 2000-09-07 2001-09-06 Video image segmentation method using elementary objects

Country Status (8)

Country Link
US (2) US7164718B2 (de)
EP (1) EP1316065B8 (de)
JP (1) JP4813749B2 (de)
AT (1) ATE299281T1 (de)
DE (1) DE60111851T2 (de)
ES (1) ES2245374T3 (de)
FR (1) FR2814312B1 (de)
WO (1) WO2002021444A2 (de)

Families Citing this family (42)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050129274A1 (en) * 2001-05-30 2005-06-16 Farmer Michael E. Motion-based segmentor detecting vehicle occupants using optical flow method to remove effects of illumination
US20080019568A1 (en) * 2002-05-23 2008-01-24 Kabushiki Kaisha Toshiba Object tracking apparatus and method
TWI226010B (en) * 2003-11-25 2005-01-01 Inst Information Industry System and method for object tracking path generation and computer-readable medium thereof
US7840074B2 (en) * 2004-02-17 2010-11-23 Corel Corporation Method and apparatus for selecting an object in an image
US7983835B2 (en) 2004-11-03 2011-07-19 Lagassey Paul J Modular intelligent transportation system
US7457472B2 (en) * 2005-03-31 2008-11-25 Euclid Discoveries, Llc Apparatus and method for processing video data
US9532069B2 (en) 2004-07-30 2016-12-27 Euclid Discoveries, Llc Video compression repository and model reuse
US9743078B2 (en) 2004-07-30 2017-08-22 Euclid Discoveries, Llc Standards-compliant model-based video encoding and decoding
US8902971B2 (en) 2004-07-30 2014-12-02 Euclid Discoveries, Llc Video compression repository and model reuse
US7436981B2 (en) * 2005-01-28 2008-10-14 Euclid Discoveries, Llc Apparatus and method for processing video data
US7457435B2 (en) 2004-11-17 2008-11-25 Euclid Discoveries, Llc Apparatus and method for processing video data
WO2010042486A1 (en) * 2008-10-07 2010-04-15 Euclid Discoveries, Llc Feature-based video compression
US7508990B2 (en) * 2004-07-30 2009-03-24 Euclid Discoveries, Llc Apparatus and method for processing video data
EP2602742A1 (de) * 2004-07-30 2013-06-12 Euclid Discoveries, LLC Vorrichtung und Verfahren zur Verarbeitung von Videodaten
US9578345B2 (en) 2005-03-31 2017-02-21 Euclid Discoveries, Llc Model-based video encoding and decoding
CN101061489B (zh) * 2004-09-21 2011-09-07 欧几里得发现有限责任公司 用来处理视频数据的装置和方法
NZ561570A (en) 2005-03-16 2010-02-26 Lucasfilm Entertainment Compan Three-dimensional motion capture
US7672516B2 (en) * 2005-03-21 2010-03-02 Siemens Medical Solutions Usa, Inc. Statistical priors for combinatorial optimization: efficient solutions via graph cuts
US20080246765A1 (en) * 2005-05-06 2008-10-09 Desmond Grenfell Method and apparatus for constraint-based texture generation
KR100746022B1 (ko) * 2005-06-14 2007-08-06 삼성전자주식회사 서브픽셀 움직임 추정시 모델 스위칭을 통한 압축 효율을증가시키는 인코딩 방법 및 장치
WO2008091485A2 (en) * 2007-01-23 2008-07-31 Euclid Discoveries, Llc Systems and methods for providing personal video services
US7554440B2 (en) 2006-07-25 2009-06-30 United Parcel Service Of America, Inc. Systems and methods for monitoring travel conditions
US8130225B2 (en) 2007-01-16 2012-03-06 Lucasfilm Entertainment Company Ltd. Using animation libraries for object identification
US8199152B2 (en) * 2007-01-16 2012-06-12 Lucasfilm Entertainment Company Ltd. Combining multiple session content for animation libraries
US8542236B2 (en) * 2007-01-16 2013-09-24 Lucasfilm Entertainment Company Ltd. Generating animation libraries
CN101622874A (zh) * 2007-01-23 2010-01-06 欧几里得发现有限责任公司 对象存档系统和方法
JP2010526455A (ja) * 2007-01-23 2010-07-29 ユークリッド・ディスカバリーズ・エルエルシー 画像データを処理するコンピュータ方法および装置
US8045800B2 (en) * 2007-06-11 2011-10-25 Microsoft Corporation Active segmentation for groups of images
US8144153B1 (en) 2007-11-20 2012-03-27 Lucasfilm Entertainment Company Ltd. Model production for animation libraries
TWI381717B (zh) * 2008-03-31 2013-01-01 Univ Nat Taiwan 數位視訊動態目標物體分割處理方法及系統
US9142024B2 (en) * 2008-12-31 2015-09-22 Lucasfilm Entertainment Company Ltd. Visual and physical motion sensing for three-dimensional motion capture
US9082222B2 (en) * 2011-01-18 2015-07-14 Disney Enterprises, Inc. Physical face cloning
US8948447B2 (en) 2011-07-12 2015-02-03 Lucasfilm Entertainment Company, Ltd. Scale independent tracking pattern
WO2013074926A1 (en) 2011-11-18 2013-05-23 Lucasfilm Entertainment Company Ltd. Path and speed based character control
US9299159B2 (en) 2012-11-09 2016-03-29 Cyberlink Corp. Systems and methods for tracking objects
CN104123713B (zh) * 2013-04-26 2017-03-01 富士通株式会社 多图像联合分割方法和装置
US9165182B2 (en) * 2013-08-19 2015-10-20 Cisco Technology, Inc. Method and apparatus for using face detection information to improve speaker segmentation
CN103544698A (zh) * 2013-09-30 2014-01-29 江南大学 基于模糊能量主动轮廓模型的高光谱图像分割方法
KR101392978B1 (ko) 2014-02-04 2014-05-08 (주)나인정보시스템 하이브리드 병렬처리를 이용한 영상 내 영역 라벨링 장치 및 방법
WO2015138008A1 (en) 2014-03-10 2015-09-17 Euclid Discoveries, Llc Continuous block tracking for temporal prediction in video encoding
US10091507B2 (en) 2014-03-10 2018-10-02 Euclid Discoveries, Llc Perceptual optimization for model-based video encoding
US10097851B2 (en) 2014-03-10 2018-10-09 Euclid Discoveries, Llc Perceptual optimization for model-based video encoding

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0512443A (ja) * 1991-07-05 1993-01-22 Nippon Telegr & Teleph Corp <Ntt> 動物体の輪郭追跡方法
JP3347508B2 (ja) * 1995-02-24 2002-11-20 キヤノン株式会社 撮像画像処理装置および撮像画像処理方法
JP3335814B2 (ja) * 1995-09-07 2002-10-21 株式会社東芝 画像処理方法及び装置
US5999651A (en) * 1997-06-06 1999-12-07 Matsushita Electric Industrial Co., Ltd. Apparatus and method for tracking deformable objects
JPH1131227A (ja) * 1997-07-14 1999-02-02 Tani Denki Kogyo Kk 画像認識による計測方法および記録媒体
US6031935A (en) * 1998-02-12 2000-02-29 Kimmel; Zebadiah M. Method and apparatus for segmenting images using constant-time deformable contours
US6560281B1 (en) * 1998-02-24 2003-05-06 Xerox Corporation Method and apparatus for generating a condensed version of a video sequence including desired affordances
US6400831B2 (en) * 1998-04-02 2002-06-04 Microsoft Corporation Semantic video object segmentation and tracking
US6804394B1 (en) * 1998-04-10 2004-10-12 Hsu Shin-Yi System for capturing and using expert's knowledge for image processing
EP0959625A3 (de) 1998-05-22 2001-11-07 Tektronix, Inc. Bereich- und konturinformationsbasierte Videobildsegmentierung
US6266443B1 (en) * 1998-12-22 2001-07-24 Mitsubishi Electric Research Laboratories, Inc. Object boundary detection using a constrained viterbi search
US6480615B1 (en) 1999-06-15 2002-11-12 University Of Washington Motion estimation within a sequence of data frames using optical flow with adaptive gradients
US7010567B1 (en) * 2000-06-07 2006-03-07 Alpine Electronic, Inc. Map-data distribution method, and map-data distribution server and client

Also Published As

Publication number Publication date
DE60111851D1 (de) 2005-08-11
EP1316065A2 (de) 2003-06-04
USRE42977E1 (en) 2011-11-29
WO2002021444A3 (fr) 2002-06-27
FR2814312A1 (fr) 2002-03-22
JP2004508642A (ja) 2004-03-18
US7164718B2 (en) 2007-01-16
US20030169812A1 (en) 2003-09-11
ATE299281T1 (de) 2005-07-15
EP1316065B8 (de) 2005-09-14
FR2814312B1 (fr) 2003-01-24
DE60111851T2 (de) 2006-04-20
WO2002021444A2 (fr) 2002-03-14
ES2245374T3 (es) 2006-01-01
JP4813749B2 (ja) 2011-11-09

Similar Documents

Publication Publication Date Title
EP1316065B1 (de) Videobildsegmentierungsverfahren unter verwendung von elementären objekten
Anantrasirichai et al. Artificial intelligence in the creative industries: a review
EP3707676B1 (de) Verfahren zur schätzung der installation einer kamera im referenzrahmen einer dreidimensionalen szene, vorrichtung, system mit erweiterter realität und zugehöriges computerprogramm
JP4898800B2 (ja) イメージセグメンテーション
Friedland et al. SIOX: Simple interactive object extraction in still images
US9762775B2 (en) Method for producing a blended video sequence
JP5355422B2 (ja) ビデオの索引付けとビデオシノプシスのための、方法およびシステム
Borgo et al. State of the art report on video‐based graphics and video visualization
Baskurt et al. Video synopsis: A survey
CN111724302A (zh) 利用机器学习的纵横比转换
CN111491187A (zh) 视频的推荐方法、装置、设备及存储介质
US11869125B2 (en) Generating composite images with objects from different times
EP0961227A1 (de) Verfahren zum detektieren der relativen Tiefe zweier Objekte in einer Szene ausgehend von zwei Aufnahmen in verschiedenen Blickrichtungen
Takacs et al. Hyper 360—towards a unified tool set supporting next generation VR film and TV productions
Lin et al. High resolution animated scenes from stills
Shrivastava et al. Broad neural network for change detection in aerial images
WO1999040539A1 (fr) Procede de segmentation spatiale d'une image en objets visuels et application
Takacs et al. Deep authoring-an AI Tool set for creating immersive MultiMedia experiences
Dissanayake et al. AutoRoto: Automated Rotoscoping with Refined Deep Masks
US20220237224A1 (en) Methods and system for coordinating uncoordinated content based on multi-modal metadata through data filtration and synchronization in order to generate composite media assets
Silva et al. Fast-Forward Methods for Egocentric Videos: A Review
Usón Peirón Design and development of a foreground segmentation approach based on Deep Learning for Free Viewpoint Video
Sidenko et al. Objects Segmentation in Augmented Reality Environment.
Garber Video Background Modeling for Efficient Transformations Between Images and Videos
Ginosar Modeling Visual Minutiae: Gestures, Styles, and Temporal Patterns

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20030305

AK Designated contracting states

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LI LU MC NL PT SE TR

AX Request for extension of the european patent

Extension state: AL LT LV MK RO SI

17Q First examination report despatched

Effective date: 20040120

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LI LU MC NL PT SE TR

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: TR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20050706

Ref country code: NL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20050706

Ref country code: IE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20050706

Ref country code: FI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20050706

Ref country code: AT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20050706

REG Reference to a national code

Ref country code: GB

Ref legal event code: FG4D

Free format text: NOT ENGLISH

REG Reference to a national code

Ref country code: CH

Ref legal event code: EP

REG Reference to a national code

Ref country code: IE

Ref legal event code: FG4D

Free format text: LANGUAGE OF EP DOCUMENT: FRENCH

Ref country code: GB

Ref legal event code: ERR

Free format text: NOTIFICATION HAS BEEN RECEIVED FROM THE EUROPEAN PATENT OFFICE THAT THE CORRECT NAME OF THE APPLICANT/PROPRIETOR IS: FRANCE TELECOM THIS CORRECTION WILL NOT BE PUBLISHED IN THE EUROPEAN PATENT BULLETIN

REF Corresponds to:

Ref document number: 60111851

Country of ref document: DE

Date of ref document: 20050811

Kind code of ref document: P

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: CY

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20050906

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MC

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20050930

Ref country code: LU

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20050930

Ref country code: LI

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20050930

Ref country code: CH

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20050930

Ref country code: BE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20050930

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20051006

Ref country code: GR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20051006

Ref country code: DK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20051006

GBT Gb: translation of ep patent filed (gb section 77(6)(a)/1977)

Effective date: 20050928

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: PT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20051212

REG Reference to a national code

Ref country code: ES

Ref legal event code: FG2A

Ref document number: 2245374

Country of ref document: ES

Kind code of ref document: T3

NLV1 Nl: lapsed or annulled due to failure to fulfill the requirements of art. 29p and 29m of the patents act
REG Reference to a national code

Ref country code: IE

Ref legal event code: FD4D

PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

REG Reference to a national code

Ref country code: CH

Ref legal event code: PL

26N No opposition filed

Effective date: 20060407

BERE Be: lapsed

Owner name: FRANCE TELECOM EXPLOITANT PUBLIC

Effective date: 20050930

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: ES

Payment date: 20080924

Year of fee payment: 8

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: IT

Payment date: 20080913

Year of fee payment: 8

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IT

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20090906

REG Reference to a national code

Ref country code: GB

Ref legal event code: 732E

Free format text: REGISTERED BETWEEN 20110317 AND 20110323

REG Reference to a national code

Ref country code: ES

Ref legal event code: FD2A

Effective date: 20110714

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: ES

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20110704

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: ES

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20090907

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 15

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 16

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 17

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 18

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: DE

Payment date: 20200812

Year of fee payment: 20

Ref country code: GB

Payment date: 20200828

Year of fee payment: 20

Ref country code: FR

Payment date: 20200814

Year of fee payment: 20

REG Reference to a national code

Ref country code: DE

Ref legal event code: R071

Ref document number: 60111851

Country of ref document: DE

REG Reference to a national code

Ref country code: GB

Ref legal event code: PE20

Expiry date: 20210905

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: GB

Free format text: LAPSE BECAUSE OF EXPIRATION OF PROTECTION

Effective date: 20210905