EP2441047A1 - Method and device for real-time tracking of objects in an image sequence in the presence of optical blur - Google Patents

Method and device for real-time tracking of objects in an image sequence in the presence of optical blur

Info

Publication number
EP2441047A1
EP2441047A1 (application EP10734231A)
Authority
EP
European Patent Office
Prior art keywords
image
tracking
points
images
objects
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP10734231A
Other languages
English (en)
French (fr)
Inventor
Nicolas Livet
Thomas Pasquier
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Total Immersion
Original Assignee
Total Immersion
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Total Immersion filed Critical Total Immersion
Publication of EP2441047A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30241Trajectory

Definitions

  • the present invention relates to the combination of real and virtual images in real time, in an augmented reality system, and more particularly to a method and a device for tracking objects in real time in a sequence of images comprising blurred images.
  • Augmented reality is intended to insert one or more virtual objects in the images of a video stream.
  • the position and orientation of these virtual objects can be determined by data external to the scene represented by the images, for example coordinates derived directly from a game scenario, or by data related to certain elements of this scene, for example the coordinates of a particular point of the scene such as the hand of a player or a decorative element.
  • the operations of tracking elements and incrustation of virtual objects in the real images can be executed by separate computers or by the same computer.
  • the purpose of the tracking algorithms used for these purposes is to find, very accurately, in a real scene, the pose, that is to say the position and orientation, of an object whose geometry information is generally available or, equivalently, to retrieve the extrinsic position and orientation parameters of a camera filming this object, thanks, for example, to image analysis.
  • tracking algorithms, also called target tracking algorithms, may use a marker that can be visual, or use other means such as sensors, preferably wireless, of radio-frequency or infrared type.
  • some algorithms use shape recognition to track a particular element in an image stream.
  • the École Polytechnique Fédérale de Lausanne has developed a visual tracking algorithm that does not use a marker and whose originality lies in the pairing of particular points between the current image of a video stream and a key image, called keyframe in English terminology, obtained at the initialization of the system, and a key image updated during the execution of the visual tracking.
  • the principle of this algorithm is described for example in the article entitled "Fusing Online and Offline Information for Stable 3D Tracking in Real Time" - Luca Vacchetti, Vincent Lepetit, Pascal Fua - IEEE Transactions on Pattern Analysis and Machine Intelligence 2004.
  • the objective of this visual tracking algorithm is to find, in a real scene, the pose of an object whose three-dimensional (3D) mesh is available as a 3D model, or to find, in an equivalent way, the extrinsic position and orientation parameters of a camera filming this motionless object, thanks to image analysis.
  • 3D three-dimensional mesh
  • a keyframe is composed of two elements: a captured image of the video stream and a pose (orientation and position) of the real object appearing in this image.
  • the keyframes are images extracted from the video stream in which the object to be tracked has been placed manually through the use of a pointing device such as a mouse.
  • Keyframes preferably characterize the pose of the same object in several images. They are created and registered "offline", that is to say outside the steady state of the tracking application. It is interesting to note that for targets or objects of planar type, for example a magazine, these keyframes can be directly generated from an available image of the object, for example in JPEG or bitmap format.
  • Each offline keyframe includes an image in which the object is present and a pose to characterize the location of that object as well as a number of points of interest that characterize the object in the image.
  • Points of interest are, for example, constructed from a Harris point detector, SURF (Speeded-Up Robust Features), SIFT (Scale-Invariant Feature Transform) or YAPE (Yet Another Point Extractor) and represent locations with high values of directional gradients in the image, together with a description of the variation of the image in the vicinity of these points.
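As an illustration of the Harris criterion mentioned above, the following is a minimal numpy sketch of a corner response map (window radius and the constant k are illustrative choices, not values from the patent):

```python
import numpy as np

def box_blur(a, r=1):
    """Separable box filter of radius r (simple neighborhood averaging)."""
    k = 2 * r + 1
    p = np.pad(a, r, mode='edge')
    h = sum(p[:, i:i + a.shape[1]] for i in range(k)) / k   # horizontal mean
    return sum(h[i:i + a.shape[0], :] for i in range(k)) / k  # vertical mean

def harris_response(img, k=0.04):
    """Harris response: large positive values mark pixels with high
    directional gradients in two directions (corner-like points)."""
    img = img.astype(float)
    Iy, Ix = np.gradient(img)                     # image gradients
    Sxx = box_blur(Ix * Ix)                       # structure tensor entries,
    Syy = box_blur(Iy * Iy)                       # averaged over a window
    Sxy = box_blur(Ix * Iy)
    det = Sxx * Syy - Sxy ** 2
    trace = Sxx + Syy
    return det - k * trace ** 2
```

On a synthetic image containing a white square, the response is positive at the square's corners, negative along its edges, and zero in flat regions.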
  • the manual preparation phase thus consists in finding a first estimate of the pose of the object in an image extracted from the video stream, which amounts to formalizing the initial affine transformation T_p→c, the transition matrix from the frame of reference associated with the tracked object to the frame attached to the camera.
  • T_p→c: the initial affine transformation
  • a single image can be used to construct an offline keyframe.
  • the offline keyframes are processed in order to position points of interest according to the parameters chosen when launching the application. These parameters are specified empirically for each type of application use and allow the detection and matching to be adapted so as to obtain a better quality of estimation of the pose of the object according to the characteristics of the real environment. Then, when the real object in the current image is in a pose that is close to the pose of this same object in one of the offline keyframes, the number of matches becomes large. It is then possible to find the affine transformation allowing the three-dimensional model of the object to be fixed on the real object.
  • this offline keyframe can be reprojected using the estimated pose of the previous image. This reprojection thus makes it possible to have a key image that contains a representation of the object similar to that of the current image and can thus allow the algorithm to operate with points of interest and descriptors that are not robust to rotations.
  • the tracking application thus combines two distinct types of algorithm: a detection of points of interest, for example a modified version of Harris point detection or detection of SIFT or SURF points, and a technique of reprojection of points of interest positioned on the three-dimensional model onto the image plane. This reprojection makes it possible to predict the result of a spatial transformation from one image, extracted from the video stream, to another. These two combined algorithms allow robust tracking of an object with six degrees of freedom.
  • a point p of the image is the projection of a point P of the real scene with p ≈ P_I · P_E · T_p→c · P, where:
  • P_I is the matrix of the intrinsic parameters of the camera, i.e. its focal length, the center of the image and the offset;
  • P_E is the matrix of the extrinsic parameters of the camera, that is to say the position of the camera in real space;
  • T_p→c is the transition matrix from the frame of reference associated with the tracked object to the frame attached to the camera. Only the position of the object relative to that of the camera is considered here, which amounts to placing the origin of the real scene at the optical center of the camera.
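The projection relation p ≈ P_I · P_E · T_p→c · P can be sketched in Python with numpy. The camera parameters (focal length 800, image center (320, 240)) and the object pose are illustrative assumptions:

```python
import numpy as np

# P_I: intrinsic parameters (focal length f, image center (cx, cy)); assumed values.
f, cx, cy = 800.0, 320.0, 240.0
P_I = np.array([[f, 0, cx],
                [0, f, cy],
                [0, 0, 1.0]])

# P_E: extrinsic parameters. The world frame is placed at the optical center
# of the camera (as in the text), so P_E is simply [I | 0].
P_E = np.hstack([np.eye(3), np.zeros((3, 1))])

# T_p->c: pose of the tracked object in the camera frame (assumed: no rotation,
# object 5 units in front of the camera).
T = np.eye(4)
T[:3, 3] = [0.0, 0.0, 5.0]

def project(P_obj):
    """p ~ P_I . P_E . T_p->c . P : project a 3D point of the object model."""
    P_h = np.append(P_obj, 1.0)     # homogeneous coordinates
    p = P_I @ P_E @ T @ P_h
    return p[:2] / p[2]             # perspective division

print(project(np.array([0.0, 0.0, 0.0])))  # object origin projects to the image center
```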
  • error minimization is used in order to find the best solution for the estimation of T_p→c by using the set of correspondences between three-dimensional points on the geometric model and two-dimensional (2D) points in the current image and in the keyframe.
  • robust estimation schemes such as RANSAC (RANdom SAmple Consensus) or PROSAC (PROgressive SAmple Consensus) can be used during this minimization to discard erroneous correspondences.
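The consensus principle behind RANSAC can be sketched on a simpler problem, robust 2D line fitting; the sample-score-keep-best loop is the same one used for pose estimation, with the minimal sample and the model adapted to 2D/3D correspondences (iteration count and threshold are illustrative):

```python
import numpy as np

def ransac_line(points, n_iter=200, thresh=0.1, rng=None):
    """Fit a line y = a*x + b to points containing outliers.
    Each iteration samples a minimal set, fits a model, and counts inliers;
    the model with the largest consensus set is kept."""
    if rng is None:
        rng = np.random.default_rng(0)
    best_model, best_inliers = None, 0
    for _ in range(n_iter):
        i, j = rng.choice(len(points), size=2, replace=False)
        (x1, y1), (x2, y2) = points[i], points[j]
        if x1 == x2:
            continue
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        residuals = np.abs(points[:, 1] - (a * points[:, 0] + b))
        inliers = int((residuals < thresh).sum())
        if inliers > best_inliers:          # keep the consensus-maximizing model
            best_model, best_inliers = (a, b), inliers
    return best_model, best_inliers
```

PROSAC follows the same scheme but draws samples preferentially from the best-ranked correspondences, which speeds up convergence.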
  • the applicant has developed a visual tracking algorithm for objects that does not use a marker and whose originality lies in the pairing of particular points between the current (and previous) image of a video stream and a set of keyframes obtained automatically at system start-up.
  • Such an algorithm is in particular described in the French patent application FR 2 911 707.
  • This algorithm makes it possible, in a first step, to identify the object positioned in front of the camera and then to initialize completely automatically without positioning constraints. the process of tracking the object.
  • This algorithm makes it possible in particular to recognize and follow a large number of objects present at the same time in a video stream and thus allows the identification and tracking of targets or objects in a real scene.
  • These objects can be of different geometries and have various colorimetric aspects. By way of example, but in a non-limiting way, they may be textured trays, faces, clothes, natural scenes, television studios or buildings.
  • The principle of an optical stabilizer is to link the optical group to an accelerometer-type sensor in order to detect the movements of the camera and to move this group slightly so as to counteract those movements.
  • Digital stabilizers work by changing the framing of the photograph in the image from the sensor. This approach requires the use of a sensor whose resolution is greater than that of the image. Detection of the movements of the camera can be achieved by the use of a gyro accelerometer or by image analysis.
  • the approach aims to minimize the following function, which describes the residual error between two regions belonging to two images I and J: ε(d) = Σ_{x∈W} w(x) · (J(x + d) − I(x))²
  • W describes the neighborhood around x and w(x) represents a weighting function such as a Gaussian.
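The weighted residual described above, with a Gaussian weighting w(x) over a square neighborhood W, can be transcribed directly in numpy (patch radius and σ are illustrative choices):

```python
import numpy as np

def klt_residual(I, J, x0, y0, dx, dy, r=2, sigma=1.0):
    """Residual e(d) = sum_{x in W} w(x) * (J(x + d) - I(x))^2 between a
    patch of I centered on (x0, y0) and the patch of J displaced by (dx, dy)."""
    ys, xs = np.mgrid[-r:r + 1, -r:r + 1]
    w = np.exp(-(xs ** 2 + ys ** 2) / (2 * sigma ** 2))   # Gaussian weighting w(x)
    patch_I = I[y0 - r:y0 + r + 1, x0 - r:x0 + r + 1]
    patch_J = J[y0 + dy - r:y0 + dy + r + 1, x0 + dx - r:x0 + dx + r + 1]
    return float((w * (patch_J - patch_I) ** 2).sum())
```

The tracked displacement d is the one that minimizes this residual; when J is I shifted by d, the residual vanishes.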
  • this tracking of characteristic points must, however, be coupled to a point-of-interest detector in an initial image.
  • the points of interest are thus located, in the initial image, on the pixels which have high values of second derivatives on their neighborhood.
  • an implementation of the search and tracking of these descriptors is proposed in the public library known as OpenCV (Open Computer Vision), developed by the company Intel.
  • This implementation notably proposes the use of a pyramid of subsampled images in order to increase the robustness of the solution to changes of scale when the size of the object in the image varies greatly.
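Such a pyramid of subsampled images can be built, for example, by repeated 2×2 averaging followed by subsampling (a minimal sketch; practical implementations, such as the one in OpenCV, typically apply Gaussian smoothing before subsampling):

```python
import numpy as np

def build_pyramid(img, levels=3):
    """Pyramid of subsampled images: each level halves the resolution, so a
    large image-space motion becomes a small motion at the coarsest level."""
    pyramid = [img.astype(float)]
    for _ in range(levels - 1):
        a = pyramid[-1]
        h, w = (a.shape[0] // 2) * 2, (a.shape[1] // 2) * 2
        # 2x2 block averaging, i.e. low-pass filtering, then subsampling
        a = a[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
        pyramid.append(a)
    return pyramid
```

Tracking then proceeds coarse-to-fine: the displacement estimated at the coarsest level initializes the search at the next finer level.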
  • Such a feature element tracking solution, also called template matching in English terminology, makes it possible to follow points of interest by using a portion of the image around the position of each point, which makes the repeatability of these points of interest more robust to the effects of blur.
  • Still other approaches aim at estimating, for each pixel of an image, the direction of movement (optical flow).
  • it is possible to transform a so-called “spatial” image into a frequency domain by means of a Fourier transform.
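The effect exploited by such frequency-domain approaches can be observed with a short numpy experiment: a horizontal motion blur multiplies the spectrum by a sinc-like factor, so energy at high horizontal frequencies drops (synthetic image and blur length are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
sharp = rng.random((64, 64))

# Simulate a horizontal motion blur of length L by averaging L shifted copies.
L = 8
blurred = np.mean([np.roll(sharp, s, axis=1) for s in range(L)], axis=0)

# Magnitude spectra of the "spatial" images in the frequency domain.
F_sharp = np.abs(np.fft.fft2(sharp))
F_blur = np.abs(np.fft.fft2(blurred))

# The blur attenuates high horizontal frequencies; the spacing of the
# resulting spectral zeros reveals the blur length and direction.
hi = slice(24, 40)  # band around the horizontal Nyquist frequency
print(F_blur[0, hi].sum(), "<", F_sharp[0, hi].sum())
```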
  • Such a method is in particular described in the thesis entitled "Visual Motion Estimation based on Motion Blur Interpretation" by Rekleitis (1995).
  • these approaches are often expensive in terms of calculations and therefore difficult to apply to a real-time context for consumer applications. Moreover, they do not make it possible to obtain easily exploitable information for an object tracking method.
  • the invention solves at least one of the problems discussed above.
  • the subject of the invention is thus a method of tracking a representation of at least one object in a sequence of images, in real time, at least one image of said sequence of images comprising at least one optical blur effect, said method comprising the following steps: identifying a representation of said at least one object in a first image of said sequence of images;
  • the method according to the invention thus makes it possible to follow in real time one or more real objects in a sequence of images, some of whose images comprise an optical blur effect, local or global, while optimizing the necessary resources.
  • said step of tracking said identified representation of said at least one object in said second image comprises a step of determining correspondences between a plurality of points of interest of said second image and a corresponding key image.
  • said blur detection step comprising a step of comparing, with a threshold, the number of matches between said plurality of points of interest of said second image and said corresponding keyframe.
  • said step of tracking said identified representation of said at least one object in a third image comprises a step of searching for characteristic points in said first or second image, the pose of said at least one object being at least partially determined by reprojection of said characteristic points onto a three-dimensional model of said at least one object.
  • the method according to the invention thus makes it possible to refine the tracking of real objects.
  • said step of tracking said identified representation of said at least one object in a third image comprises a step of searching for characteristic points in a key image corresponding to said third image, the pose of said at least one object being at least partially determined by reprojection of said characteristic points onto a three-dimensional model of said at least one object.
  • said step of tracking said identified representation of said at least one object in said second image comprises a step of determining a plurality of points of interest in said first and second images, said points of interest being identified as Harris points or SURF, SIFT or YAPE points.
  • said step of tracking said identified representation of said at least one object in said second image preferably comprises a step of determining a plurality of points of interest in said first or second image and in a keyframe corresponding, said points of interest being identified as Harris points or SURF, SIFT or YAPE points.
  • the method is recursively applied to several images of said plurality of images to improve the tracking of real objects.
  • the invention also relates to a computer program comprising instructions adapted to the implementation of each of the steps of the method described above when said program is executed on a computer, as well as information storage means, removable or not, partially or completely readable by a computer or a microprocessor, comprising code instructions of a computer program for performing each of the steps of this method.
  • the invention also relates to a device comprising means adapted to the implementation of each of the steps of the method described above.
  • FIG. 1 comprising FIGS. 1a, 1b, 1c and 1d, schematically illustrates different types of blur that may appear in an image
  • FIG. 2 schematically illustrates an example of an algorithm combining motion tracking and blur detection to enable objects to be tracked despite the presence of global or local blur in one or more images of a sequence of images in which objects are followed;
  • FIG. 3 presents a first embodiment of the algorithm illustrated in FIG. 2;
  • FIG. 4 illustrates the extraction of the 2D/3D correspondences between a current image and a 3D model by using the tracking of characteristic elements robust to blur between a current image and the image preceding it in the sequence;
  • FIG. 5 illustrates an exemplary device adapted to implement the invention or a part of the invention.
  • the aim of the invention is the robust and rapid tracking of one or more objects, in real time, in image sequences that may exhibit temporal optical blur effects.
  • the combination of an algorithm for identifying and tracking objects, such as the one developed by the company Total Immersion, with an algorithm for tracking image features that is more robust to motion blur is here implemented to solve the problems of stalls that can occur in the presence of blur.
  • these stalls can be frequent when low quality cameras are used or when movements of real objects in front of the camera are fast. They are most often the consequence of a series of images, generally over a specific period, which exhibit an optical blur effect.
  • image blur effects are generally "global" blurs, most often caused by rapid movements of the camera, more specifically of the image sensor, or "local" blurs, caused by the rapid movement of objects present in the field of view.
  • Figure 1 including Figures 1a, 1b, 1c and 1d, schematically illustrates different types of blur that may appear in an image.
  • Figure 1a is a schematic representation of an image 100-1 from a sequence of images, for example a video stream from a camera incorporating an image sensor.
  • the image 100-1 here represents a scene 105 in which the objects 110, 115 and 120 are placed. These objects are here static and the camera from which the image 100-1 is derived is stable.
  • the image 100-1 does not present any blur.
  • Figure 1b shows an image 100-2, similar to image 100-1 and from the same camera. However, during the capture of the image 100-2, the sensor moved, causing a global blur on the image.
  • Figure 1c shows an image 100-3, similar to image 100-1 and from the same camera. However, during the capture of the image 100-3, the object 120 moved rapidly along the translation axis 125, thus causing a local directional blur on the image.
  • Figure 1d shows an image 100-4, similar to image 100-1 and from the same camera. However, during the capture of the image 100-4, the object 120 moved rapidly around the axis of rotation 130, thus causing a radial or rotational local blur on the image.
  • FIG. 2 schematically illustrates an example of an algorithm combining motion tracking and blur detection to enable tracking of objects despite the presence of global or local blur in one or more images of a sequence of images in which the objects are followed.
  • the algorithm illustrated here is implemented on each of the images of the sequence, sequentially.
  • a first step here consists in detecting the presence of the object or objects to be tracked in the images and in tracking them (step 200).
  • the tracking mode used here is for example a standard object tracking algorithm, in steady state mode (the initialization phase, automatic or not, was previously performed), using so-called “stable" descriptors such as Harris or SIFT, SURF or YAPE type descriptors.
  • the steady state indicates that one or more objects are detected and tracked in the sequence of images from the camera. In this case, the pose of an object is precisely determined in each of the images successively outputted from the image sensor.
  • Recursive pairings consisting of determining the corresponding points in successive images, step by step, can be used in this standard tracking mode using the characteristic points of the previous image.
  • the tracking mode is called “hybrid”.
  • the pairings determined between a current image and key images are added to the determined pairings between the current image and the previous image to evaluate the pose.
  • Recursive pairings are particularly robust to vibration effects, while keyframe matches help to avoid recursive pairing drifts.
  • the use of these two types of pairings thus allows a more robust and stable visual tracking.
  • a next step is to detect the possible presence of blur in the image being processed (step 205), that is to say to detect fast movements of objects in the scene or camera shake.
  • an optical blur detection step is performed, systematically or not. This detection is a measure that makes it possible to determine the presence of optical blur in the current image or in a series of images. It can be based, for example, on the variation in the number of matches between the points of interest used in the standard object tracking mode. If this variation is greater than a predetermined threshold, for a given tracking object or for all objects tracked, the presence of blur is detected.
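A possible sketch of such a criterion, comparing the current number of matches with the recent average (the drop ratio is an illustrative, application-dependent threshold, not a value given by the text):

```python
def blur_detected(match_counts, drop_ratio=0.5):
    """Detect the presence of optical blur from the variation in the number
    of matches between points of interest: a sudden drop of the current
    count below a fraction of the recent average suggests a blurred image."""
    if len(match_counts) < 2:
        return False
    *history, current = match_counts
    baseline = sum(history) / len(history)   # recent average match count
    return current < drop_ratio * baseline
```

A gradual decrease (object leaving the field of view, as discussed later) does not trigger this criterion, whereas a sudden drop does.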
  • this step is performed only under certain conditions (step 210), for example by using motion sensors such as accelerometers or gyroscopes, for the case of camera shake, or following the loss of relevant information, especially when a fall in the number of matches between the points of interest used in the standard object tracking mode is observed.
  • steps 205 and 210 are combined.
  • If it is not necessary to detect the presence of optical blur in the image, the algorithm continues in a conventional manner (step 200).
  • the step of measuring blur in a sequence of images is important because it makes it possible to determine the exact moment when conventional object tracking is no longer suitable and may quickly generate a stall.
  • a test is performed to determine if the image contains an optical blur (step 215). If the measurement is negative, that is to say if no optical blur is detected in the processed image, the algorithm continues in a conventional manner (step 200). If not, a new object tracking mode is used to track objects in blurred images. If the presence of optical blur in the image is detected, a next step is to determine whether the object tracking mode used for tracking objects in blurred images is initialized (step 220).
  • If this mode has not been initialized, it is initialized (step 225).
  • the initialization consists in particular in creating the information needed to use a method of tracking characteristic elements robust to blur in a sequence of blurred images, in particular in detecting such blur-robust characteristic elements in the images.
  • This step may, in some implementations, be performed "offline" at the launch of the application, especially when these features robust to the blur are built directly on offline keyframes.
  • the mode of tracking characteristic elements robust to blur in a sequence of blurred images is then implemented (step 230).
  • a mode of tracking characteristic elements that are robust to blurring can be based on the use of KLT type descriptors or else the tracking of lines of strong gradients as previously described.
  • these two solutions are combined to obtain a more robust result.
  • at least a portion of the so-called stable descriptors used in the conventional object tracking (step 200) is replaced by the descriptors determined during the initialization phase of the object tracking mode used for blurred images, which is more robust to "local" and "global" optical blur effects.
  • the standard tracking mode is used again (step 200). Otherwise, the object tracking mode used to track objects in blurred images is maintained (step 230).
  • an object tracking algorithm comprising an object identification step, an initialization step depending on the object or objects present in the optical field of the camera and a step of tracking these objects is combined with a characteristic point tracking algorithm of KLT type, advantageously adapted to the context of tracking objects in a sequence of blurred images.
  • An optical blur detection operator in an image is directly extracted from the tracking algorithm.
  • Figure 3 partially illustrates this first embodiment for tracking objects in a current image 300.
  • a first step is to identify, or detect, the object or objects to follow present in the field of the camera and initialize the tracking of these objects (step 305).
  • This step implements a known algorithm, such as that developed by the company Total Immersion, presented above, which uses a database containing a large number of descriptors, for example points of interest and descriptors of HARRIS type, SIFT, SURF or YAPE, belonging to a large number of referenced objects 310.
  • classification trees such as binary decision trees (see for example the article "Keypoint Recognition using Randomized Trees", V. Lepetit and P. Fua, EPFL, 2006) or structures with multiple ramifications, also called fern-like decision trees (see for example the article "Fast Keypoint Recognition using Random Ferns", M. Ozuysal, P. Fua and V. Lepetit), allow a simple and fast classification by comparison of image intensities around a point of interest, enabling fast and robust identification of one or more objects in the current image.
  • This detection step also estimates an approximate pose of the recognized objects in the image in order to simplify the initialization step. This estimation also makes it possible to create a so-called current key image, referenced 315, which is then used in the object tracking method.
  • the current keyframe 315 is used to initialize the tracking system. During this initialization, points of interest, for example Harris points, are calculated on the current key image 315 to be used in the tracking of the identified object(s). After being initialized, the object tracking method is started (step 320).
  • This method is here a "hybrid" method that uses a correlation operator, for example of the ZNCC type (Zero-mean Normalized Cross-Correlation), for determining matches between the current image 300 and the current key image 315 and between the current image 300 and the previous image 325, preceding the current image in the sequence of images.
  • This set of correspondences is then used to determine the pose (position and orientation) of the tracked objects. It should be noted here that the more numerous and the more precisely located these points are, the more precise the result of the pose estimation.
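A minimal numpy version of the ZNCC correlation operator between two patches of equal size (the patch extraction and the search strategy around each point of interest are omitted here):

```python
import numpy as np

def zncc(a, b):
    """Zero-mean Normalized Cross-Correlation between two same-size patches:
    1.0 for identical patches up to an affine intensity change, which makes
    the score robust to uniform lighting variations."""
    a = a.astype(float).ravel() - a.mean()   # remove the mean (zero-mean)
    b = b.astype(float).ravel() - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0
```

A match is typically accepted when the best ZNCC score in the search window exceeds a threshold close to 1.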
  • a next step is to determine if the current image contains an optical blur effect (step 330).
  • the two sets of matches, between the current image 300 and the previous image 325 and between the current image 300 and the current key image 315, are used as an indicator of the quality of the current image.
  • a threshold may be predetermined or dynamically determined. It is important to note that a substantial drop in the number of matches may also occur when the object partially disappears from the image. However, in this case, the number of points often remains large and the number of matches decreases gradually over the processed image sequence.
  • step 320 If the number of these matches remains greater than the threshold, the tracking of the objects continues in a standard way (step 320).
  • Otherwise, a particular tracking mode, here the KLT point tracking algorithm, is initialized (step 340).
  • the previous image 325 and the previous pose resulting from the tracking algorithm are used to search for characteristic elements to follow, robust to blur.
  • the preceding image is a priori not blurred since the blur detector (step 330) has found a sufficiently large number of matches on this image.
  • the characteristic elements to follow that are robust to blur, called KLT characteristics, are sought in this previous image thanks to the estimation of the second derivatives for each pixel of the image.
  • If these second derivatives are large, that is to say greater than a predetermined threshold, in at least one of the two main directions, it is considered that the pixel characterizes a point of interest robust to blur.
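This selection criterion can be sketched as follows, using discrete second derivatives along the two main image directions (the threshold value is illustrative):

```python
import numpy as np

def blur_robust_points(img, thresh):
    """Select pixels whose second derivative exceeds `thresh` in at least
    one of the two main directions, as candidate blur-robust points."""
    img = img.astype(float)
    dyy = np.zeros_like(img)
    dxx = np.zeros_like(img)
    # Discrete second derivatives: f(x+1) - 2*f(x) + f(x-1) along each axis.
    dyy[1:-1, :] = img[2:, :] - 2 * img[1:-1, :] + img[:-2, :]
    dxx[:, 1:-1] = img[:, 2:] - 2 * img[:, 1:-1] + img[:, :-2]
    mask = (np.abs(dxx) > thresh) | (np.abs(dyy) > thresh)
    ys, xs = np.nonzero(mask)
    return list(zip(ys.tolist(), xs.tolist()))
```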
  • These points are stored (reference 345). Then, knowing the pose of the object in the preceding image 325 and knowing the geometric model 400 of the object, it is possible to estimate the reprojection of these KLT characteristics and to extract precise 3D coordinates on the geometric model 400 of the object.
  • In a following step (step 350), the correspondences of the KLT characteristics of the previous image 345 are searched for in the current image 300.
  • This characteristic element tracking method as described in the state of the art makes it possible to follow points on successive images. It is particularly robust in identifying pixel movements in different portions of the overall image.
  • the correspondences as illustrated in FIG. 4 are then obtained (reference 355).
  • FIG. 4 illustrates the extraction of the 2D/3D correspondences between the current image and the 3D model 400 by using the KLT characteristic tracking between the previous image 325 and the current image 300. It is thus shown that the knowledge of the 2D/3D correspondences between the previous image 325 and the 3D geometric model 400 and the construction of the 2D/2D pairings between the current image and the previous image allow the extraction of 2D/3D correspondences between the current image and the 3D geometric model. These new correspondences allow, as previously described, the pose of the object in the current image to be estimated. It should be noted that in Figure 4 it is possible to replace the previous image 325 by a keyframe. This figure thus describes the recursive current image - previous image pairings as well as the current image - key image pairings.
  • a next step (step 360) is to track objects using the KLT characteristics previously calculated. This step consists in particular in using the recursive correspondences between previous and current images in order to extract a list of matches between the image plane and the geometric model of the object. These matches are known because in step 340, the characteristic elements of the previous image have been reprojected on the geometric model of the object.
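The chaining of correspondences used in this step can be illustrated with hypothetical coordinates: the known 2D/3D matches on the previous image are composed with the 2D/2D KLT matches to obtain 2D/3D matches for the current image, which then feed the pose estimation:

```python
# 2D points of the previous image -> 3D model points (known from the
# reprojection step; coordinates are hypothetical, for illustration only).
prev_2d_to_3d = {
    (120, 80): (0.0, 0.1, 0.0),
    (200, 150): (0.2, 0.0, 0.1),
    (90, 210): (0.1, 0.3, 0.0),
}

# 2D points of the previous image -> 2D points of the current image
# (KLT matches; here one point was lost during tracking).
prev_2d_to_curr_2d = {
    (120, 80): (125, 78),
    (200, 150): (204, 149),
}

# Compose the two relations to get current-image 2D -> 3D correspondences.
curr_2d_to_3d = {
    curr: prev_2d_to_3d[prev]
    for prev, curr in prev_2d_to_curr_2d.items()
    if prev in prev_2d_to_3d
}
print(curr_2d_to_3d)
```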
  • When the number of matches of KLT characteristics is insufficient (step 365) with respect to a threshold, predetermined or determined dynamically, it is considered that the object is no longer present in the field of the camera. In this case, the object tracking method stalls and a new object detection phase is performed (steps 365 and 305) to detect objects that are potentially in the field of the camera.
  • the initialization step (step 340) is not repeated, as illustrated by the dashed arrow between blocks 335 and 350.
  • the initialization of the tracking of characteristic elements in a sequence of images comprising an optical blur effect is advantageously replaced by a tracking that is independent of the previous image.
  • the KLT characteristics used for the initialization are not estimated on the previous image but on the current key image, reprojected according to the previous pose estimated during the tracking step on the previous image.
  • the KLT characteristics tracked in the current image are similar to those of this reprojected key image, which allows a faster detection rate in the successive images of the image sequence.
  • This second embodiment makes it possible to avoid errors that would be linked to an erroneous pose estimate on the previous image, as well as possible occlusion problems, for example when the hand of a user passes in front of a real object. It should be noted that the two described embodiments can be combined in order to obtain more robust object tracking results. Such a combination, however, increases the computational cost.
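The reprojection on which this second embodiment relies, projecting the key image's 3D model points according to the pose estimated on the previous image, can be illustrated with a simple pinhole camera model. All names, the pose representation (3x3 rotation matrix plus translation vector) and the intrinsics layout are assumptions made for this sketch; lens distortion is ignored.

```python
def project_point(model_point, rotation, translation, fx, fy, cx, cy):
    """Project a 3D point of the geometric model into the image plane,
    given a pose (3x3 rotation matrix, translation vector) and pinhole
    intrinsic parameters (focal lengths fx, fy; principal point cx, cy)."""
    # transform the model point into the camera frame: Xc = R * X + t
    xc = sum(rotation[0][i] * model_point[i] for i in range(3)) + translation[0]
    yc = sum(rotation[1][i] * model_point[i] for i in range(3)) + translation[1]
    zc = sum(rotation[2][i] * model_point[i] for i in range(3)) + translation[2]
    # perspective division followed by the intrinsic mapping
    return (fx * xc / zc + cx, fy * yc / zc + cy)


def reproject_key_image(model_points, rotation, translation, intrinsics):
    """Reproject the 3D points associated with the key image according to
    the pose estimated on the previous image; the resulting 2D points can
    serve to initialize the KLT tracking in the current image."""
    fx, fy, cx, cy = intrinsics
    return [project_point(p, rotation, translation, fx, fy, cx, cy)
            for p in model_points]
```

Initializing the KLT search at these reprojected positions makes the tracked characteristics similar to those of the key image, independently of the content of the previous image.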
  • A device adapted to implement the invention, or a part of the invention, is illustrated in Figure 5.
  • the device shown is preferably a standard device, for example a personal computer.
  • the device 500 here comprises an internal communication bus 505 to which are connected:
  • a central processing unit or microprocessor 510 (CPU, Central Processing Unit);
  • a read-only memory 515 (ROM, Read Only Memory) that can comprise the aforementioned programs;
  • a random access memory or cache memory 520 (RAM, Random Access Memory) comprising registers adapted to record variables and parameters created and modified during the execution of the aforementioned programs; and
  • a communication interface 540 adapted to transmit and to receive data.
  • the device 500 also preferably has the following elements:
  • a hard disk 525 which may comprise the aforementioned programs and data processed or to be processed according to the invention.
  • a memory card reader 530 adapted to receive a memory card 535 and to read or write to it data processed or to be processed according to the invention.
  • the internal communication bus allows communication and interoperability between the various elements included in the device 500 or connected to it.
  • the representation of the internal bus is not limiting and, in particular, the microprocessor is capable of communicating instructions to any element of the device 500 directly or via another element of the device 500.
  • the executable code of each program enabling the programmable device to implement the processes according to the invention can be stored, for example, in the hard disk 525 or in the read-only memory 515.
  • the memory card 535 may contain data as well as the executable code of the aforementioned programs which, once read by the device 500, is stored in the hard disk 525.
  • the executable code of the programs can be received, at least partially, through the communication interface 540, to be stored in the same manner as described above.
  • the program or programs may be loaded into one of the storage means of the device 500 before being executed.
  • the microprocessor 510 will control and direct the execution of the instructions or portions of software code of the program or programs according to the invention, which instructions are stored in the hard disk 525, in the read-only memory 515 or in the other storage elements mentioned above.
  • the program or programs that are stored in a non-volatile memory for example the hard disk 525 or the read-only memory 515, are transferred into the RAM 520 which then contains the executable code of the program or programs according to the invention, as well as registers for storing the variables and parameters necessary for the implementation of the invention.
  • a person skilled in the field of the invention may apply modifications to the foregoing description.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
EP10734231A 2009-06-08 2010-06-04 Verfahren und vorrichtung zur echtzeitverfolgung von objekten in einer bildsequenz bei optischer unschärfe Withdrawn EP2441047A1 (de)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
FR0902764A FR2946446B1 (fr) 2009-06-08 2009-06-08 Procede et dispositif de suivi d'objets en temps reel dans une sequence d'images en presence de flou optique
PCT/FR2010/051104 WO2010142895A1 (fr) 2009-06-08 2010-06-04 Procédé et dispositif de suivi d'objets en temps réel dans une séquence d'images en présence de flou optique

Publications (1)

Publication Number Publication Date
EP2441047A1 true EP2441047A1 (de) 2012-04-18

Family

ID=41528530

Family Applications (1)

Application Number Title Priority Date Filing Date
EP10734231A Withdrawn EP2441047A1 (de) 2009-06-08 2010-06-04 Verfahren und vorrichtung zur echtzeitverfolgung von objekten in einer bildsequenz bei optischer unschärfe

Country Status (3)

Country Link
EP (1) EP2441047A1 (de)
FR (1) FR2946446B1 (de)
WO (1) WO2010142895A1 (de)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5709906B2 2010-02-24 2015-04-30 IPPLEX Holdings Corporation Augmented reality panorama for supporting visually impaired persons
US10970425B2 (en) * 2017-12-26 2021-04-06 Seiko Epson Corporation Object detection and tracking

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR2911707B1 (fr) 2007-01-22 2009-07-10 Total Immersion Sa Procede et dispositifs de realite augmentee utilisant un suivi automatique, en temps reel, d'objets geometriques planaires textures, sans marqueur, dans un flux video.

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See references of WO2010142895A1 *

Also Published As

Publication number Publication date
WO2010142895A1 (fr) 2010-12-16
FR2946446B1 (fr) 2011-07-15
FR2946446A1 (fr) 2010-12-10

Similar Documents

Publication Publication Date Title
EP2455916B1 Human-machine interface based on non-rigid tracking
EP2491532B1 Method, computer program and device for hybrid real-time tracking of object representations in an image sequence
EP2132710B1 Augmented reality method and devices using real-time automatic tracking of markerless, textured, planar geometric objects in a video stream
WO2008125754A1 Method and device for determining the pose of a three-dimensional object in an image, and method and device for creating at least one key image for object tracking
EP2111605B1 Method and device for creating at least two key images corresponding to a three-dimensional object
WO2017096949A1 Method, control device and system for tracking and photographing a target
Kurz et al. Inertial sensor-aligned visual feature descriptors
FR2933218A1 Method and device for detecting, in real time, interactions between a user and an augmented reality scene
US8452124B2 (en) Method and system for detecting motion blur
FR3073312A1 Method for estimating the pose of a camera in the frame of reference of a three-dimensional scene, associated device, augmented reality system and computer program
Porzi et al. Learning contours for automatic annotations of mountains pictures on a smartphone
EP2441048A1 Method and devices for detecting real objects, tracking representations of these objects and augmented reality, in an image sequence, in a client-server mode
EP2257924B1 Method for generating a density image of an observation zone
CA2825506A1 (en) Spectral scene simplification through background subtraction
WO2010142897A2 Method and device for calibrating an image sensor using a real-time object tracking system in an image sequence
GB2606807A (en) Image creation for computer vision model training
EP2441047A1 Method and device for real-time tracking of objects in an image sequence in the presence of optical blur
US20200090351A1 (en) Aligning digital images by selectively applying pixel-adjusted-gyroscope alignment and feature-based alignment models
EP3219094B1 Device for generating film rushes by video analysis
Lima et al. Model based 3d tracking techniques for markerless augmented reality
Jiddi et al. Photometric Registration using Specular Reflections and Application to Augmented Reality
CA3230088A1 Method for matching a candidate image with a reference image
WO2012107696A1 Methods, device and computer programs for real-time pattern recognition using an apparatus comprising limited resources

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20111229

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO SE SI SK SM TR

DAX Request for extension of the european patent (deleted)
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 1168456

Country of ref document: HK

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

INTG Intention to grant announced

Effective date: 20130424

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

RAP1 Party data changed (applicant data changed or rights of an application transferred)

Owner name: TOTAL IMMERSION

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20140103

REG Reference to a national code

Ref country code: HK

Ref legal event code: WD

Ref document number: 1168456

Country of ref document: HK