WO2012171839A1 - Video navigation through object location - Google Patents
- Publication number
- WO2012171839A1 WO2012171839A1 PCT/EP2012/060723 EP2012060723W WO2012171839A1 WO 2012171839 A1 WO2012171839 A1 WO 2012171839A1 EP 2012060723 W EP2012060723 W EP 2012060723W WO 2012171839 A1 WO2012171839 A1 WO 2012171839A1
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- images
- sequence
- image
- navigating
- selecting
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
- G06F3/04842—Selection of displayed objects or displayed text elements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/70—Information retrieval; Database structures therefor; File system structures therefor of video data
- G06F16/74—Browsing; Visualisation therefor
- G06F16/745—Browsing; Visualisation therefor the internal structure of a single video sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/70—Information retrieval; Database structures therefor; File system structures therefor of video data
- G06F16/78—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/783—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
- G06F16/7837—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using objects detected or recognised in the video content
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/10—Indexing; Addressing; Timing or synchronising; Measuring tape travel
- G11B27/102—Programmed access in sequence to addressed parts of tracks of operating record carriers
- G11B27/105—Programmed access in sequence to addressed parts of tracks of operating record carriers of operating discs
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/10—Indexing; Addressing; Timing or synchronising; Measuring tape travel
- G11B27/19—Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier
- G11B27/28—Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier by using information signals recorded by the same method as the main recording
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/10—Indexing; Addressing; Timing or synchronising; Measuring tape travel
- G11B27/34—Indicating arrangements
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/472—End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/70—Information retrieval; Database structures therefor; File system structures therefor of video data
- G06F16/73—Querying
- G06F16/732—Query formulation
- G06F16/7335—Graphical querying, e.g. query-by-region, query-by-sketch, query-by-trajectory, GUIs for designating a person/face/object as a query predicate
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/10—Indexing; Addressing; Timing or synchronising; Measuring tape travel
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/10—Indexing; Addressing; Timing or synchronising; Measuring tape travel
- G11B27/19—Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/472—End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
- H04N21/4728—End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for selecting a Region Of Interest [ROI], e.g. for requesting a higher resolution version of a selected region
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/85—Assembly of content; Generation of multimedia applications
- H04N21/858—Linking data to content, e.g. by linking an URL to a video object, by creating a hotspot
- H04N21/8583—Linking data to content, e.g. by linking an URL to a video object, by creating a hotspot by creating hot-spots
Definitions
- The present invention relates to a method for navigating in a sequence of images, e.g. in a movie, and for interactive rendering of the same, specifically for videos rendered on portable devices that allow easy user interaction, and to an apparatus for conducting the method.
- object segmentation is known in the art for producing spatial image segmentations, i.e. object boundaries, based on color and texture information.
- An object is defined quickly by the user with object segmentation technology just by selecting one or more points within the object.
- Known algorithms for object segmentation are "graph cut" and "watershed".
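As an illustration of point-seeded segmentation (a simplified region-growing stand-in, not the graph-cut or watershed algorithms themselves), a region can be grown from a single selected point by collecting connected pixels of similar colour. The `grow_region` helper is hypothetical, written only for this sketch:

```python
# Simplified sketch: the user clicks one point inside the object and the
# object region is grown from connected pixels of similar value.
from collections import deque

def grow_region(image, seed, tol=10):
    """Return the set of pixels connected to `seed` whose value differs
    from the seed value by at most `tol`. `image` is a 2-D list of ints."""
    h, w = len(image), len(image[0])
    sy, sx = seed
    base = image[sy][sx]
    region, queue = {(sy, sx)}, deque([(sy, sx)])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and (ny, nx) not in region \
                    and abs(image[ny][nx] - base) <= tol:
                region.add((ny, nx))
                queue.append((ny, nx))
    return region

# A 4x4 image with a bright object (~200) on a dark background (10):
img = [[10, 10, 10, 10],
       [10, 200, 205, 10],
       [10, 198, 202, 10],
       [10, 10, 10, 10]]
print(sorted(grow_region(img, (1, 1))))  # the four bright pixels
```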
- Another technology is called "object tracking". After an object has been defined by its spatial boundary, the object is tracked automatically in the subsequent sequence of images. For object tracking, the object is typically described by its color distribution.
- A known algorithm for object tracking is "mean shift". For increased precision and robustness, some algorithms rely on the structure of the object's appearance.
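A minimal sketch of the mean-shift idea: in practice the weight map would come from back-projecting the object's colour distribution onto the frame, but here it is supplied directly, and the search window repeatedly moves its centre to the weighted centroid of the pixels it covers. `mean_shift` is a hypothetical, heavily simplified implementation:

```python
# Toy mean-shift: shift the window centre to the weighted centroid of the
# covered pixels until the centre stops moving.
def mean_shift(weights, centre, radius=1, max_iter=20):
    h, w = len(weights), len(weights[0])
    cy, cx = centre
    for _ in range(max_iter):
        m, my, mx = 0.0, 0.0, 0.0
        for y in range(max(0, cy - radius), min(h, cy + radius + 1)):
            for x in range(max(0, cx - radius), min(w, cx + radius + 1)):
                m += weights[y][x]
                my += y * weights[y][x]
                mx += x * weights[y][x]
        if m == 0:
            break  # no weight under the window: give up
        ny, nx = round(my / m), round(mx / m)
        if (ny, nx) == (cy, cx):
            break  # converged
        cy, cx = ny, nx
    return cy, cx

# The object (high weights) sits bottom-right; start the window elsewhere:
w_map = [[0, 0, 0, 0],
         [0, 1, 0, 0],
         [0, 0, 9, 9],
         [0, 0, 9, 9]]
print(mean_shift(w_map, (1, 1)))  # converges onto the bright blob
```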
- The scale-invariant feature transform (SIFT) is a common descriptor for this purpose.
- Generic object detection technology makes use of machine learning for computing a statistical model of the appearance of the object to be detected. This requires many examples of the object (ground truth).
- Models typically rely on SIFT descriptors. The most common machine learning techniques used nowadays include boosting and the support vector machine (SVM).
- Face detection is a specific object detection application.
- The features used are typically filter parameters, more specifically "Haar wavelet" parameters.
- A well-known implementation relies on cascaded boosted classifiers, e.g. Viola & Jones.
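The cascade idea can be sketched as follows (an illustrative toy, not the actual Viola-Jones implementation): each stage computes a cheap weighted score over the window's features and rejects the window early if the score falls below the stage threshold, so only promising windows reach the later, stricter stages.

```python
# Toy cascade of boosted stages: early rejection keeps detection cheap.
def cascade_detect(window_features, stages):
    """`stages` is a list of (weights, threshold) pairs; each stage scores
    the window with a weighted sum and rejects it below its threshold."""
    for weights, threshold in stages:
        score = sum(w * f for w, f in zip(weights, window_features))
        if score < threshold:
            return False  # rejected early; later stages never run
    return True

stages = [([1.0, 0.5], 0.8),   # cheap, permissive first stage
          ([0.2, 2.0], 1.5)]  # stricter second stage
print(cascade_detect([1.0, 1.0], stages))  # passes both stages
print(cascade_detect([1.0, 0.0], stages))  # rejected by the second stage
```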
- A first example is skipping a fixed amount of playback time, e.g. moving forward in the video by 10 or 30 seconds.
- A second example is to jump to the next cut or to the next group of pictures (GOP).
- Here the skipping mechanism is oriented according to the video data, not according to the content of the movie. It is not clear to the user which image will be displayed at the end of the jump. Further, the length of the skipped interval is short.
- A third example is a jump to the next scene. A scene is a part of the action in a single location in a TV show or movie, composed of a series of shots.
- This method relies on the number of objects that the system can effectively index. For the time being, there are relatively few detectors compared to the huge variety of objects one can encounter in e.g. an average news video.
- a method for navigating in a sequence of images comprises the steps of:
- the first input is a user input or an input from another device that is connected to the device executing the method.
- The first object is indicated by a symbol, e.g. a cross, a plus or a circle, and this symbol is moved instead of the first object itself.
- The second position can likewise be indicated by such a symbol.
- Another way to define the second position is to define the position of the first object in relation to at least one other object in the image. The method then identifies at least one image in the sequence of images where the first object is close to the second position.
- The image in the sequence of images for which the distance between the two objects is smallest is used as the starting point for playback.
- As a measure of the distance between the objects, e.g. the absolute value (Euclidean distance) is used.
- Another way of defining whether an object is close to another object is to use only the X or Y coordinates, or to weight the distances in the X and Y directions with different weighting factors.
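These closeness criteria can be sketched as follows, assuming (hypothetically) that the object's position in each frame is already available from tracking. `weighted_distance` and `best_frame` are illustrative helper names, not names from the patent:

```python
# Sketch of the search step: pick the frame whose tracked object position
# minimises the (optionally axis-weighted) distance to the target position.
def weighted_distance(p, q, wx=1.0, wy=1.0):
    return ((wx * (p[0] - q[0])) ** 2 + (wy * (p[1] - q[1])) ** 2) ** 0.5

def best_frame(track, target, wx=1.0, wy=1.0):
    """`track` maps frame index -> (x, y) of the tracked object; return the
    frame index where the object is closest to `target`."""
    return min(track, key=lambda f: weighted_distance(track[f], target, wx, wy))

track = {0: (10, 50), 1: (40, 52), 2: (78, 49), 3: (95, 20)}
print(best_frame(track, (80, 50)))          # frame 2 is nearest
print(best_frame(track, (80, 50), wy=0.0))  # X-only weighting, same result here
```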
- Navigating in a sequence of images, e.g. a movie or news program, either broadcast or recorded, is thus done according to the content of the images and does not depend on some fixed structure of the broadcast stream, which is defined mainly for technical reasons. Navigation is made intuitive and more user-friendly.
- the method is performed in real-time so that the user has the feeling of actually moving the object.
- the user asks for the point in time where the designated object disappears from the screen.
- the first input for selecting the first object is clicking on the object or drawing a bounding box around the object.
- The user applies commonly known input methods for a man-machine interface. If an index exists, the user is also able to choose the objects by this index from a database.
- The step of moving the first object to a second position according to a second input includes moving the first object in relation to a second object displayed in the image.
- The step of identifying further includes identifying at least one image in the sequence of images where the first object is close to the second object.
- In a soccer game, for example, the first object might be the ball.
- The user can move the ball in the direction of the goal, as he expects that there is a scene he might be interested in when the ball is close to the goal, because this might be shortly before the team scores or a player kicks the ball over the goal.
- This kind of navigation by object is completely independent of the screen coordinates; it depends only on the relative distance of the two objects in the image.
- The position of the destination of the first object being close to the position of the second object also includes the case that the second object is exactly at the destination, or that the second object overlaps the destination of the moved first object.
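The overlap case can be tested with a simple bounding-box intersection check, sketched here with the hypothetical helper `boxes_overlap`:

```python
# Boxes are (x1, y1, x2, y2); touching edges count as overlapping.
def boxes_overlap(a, b):
    return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

print(boxes_overlap((0, 0, 10, 10), (5, 5, 20, 20)))   # True: partial overlap
print(boxes_overlap((0, 0, 10, 10), (11, 0, 20, 10)))  # False: disjoint in x
```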
- The size of the objects and their variation over time are considered to define the relative position of two objects to each other.
- The user selects an object, e.g. a face, and then zooms the bounding box of the face in order to define the size of the face. Afterwards, an image is searched for in the sequence of images on which the face is displayed at this size or a size close to it.
- This feature is advantageous if, e.g., an interview is played back and the user is interested in the speech of a specific person, assuming that the face of this person covers the biggest part of the screen when this person speaks.
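The size criterion can be sketched as follows, assuming the tracked face's bounding-box size is available per frame; `best_frame_by_size` is an illustrative name invented for this sketch:

```python
# Pick the frame whose tracked face size is closest to the size the user
# requested by resizing the bounding box (e.g. a close-up of a speaker).
def best_frame_by_size(face_sizes, wanted):
    """`face_sizes` maps frame index -> (width, height) of the tracked face."""
    return min(face_sizes,
               key=lambda f: abs(face_sizes[f][0] - wanted[0])
                           + abs(face_sizes[f][1] - wanted[1]))

sizes = {0: (40, 50), 1: (120, 150), 2: (300, 380)}
print(best_frame_by_size(sizes, (280, 360)))  # frame 2: the close-up
```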
- the further input for selecting the second object is clicking on the object or drawing a bounding box around the object.
- the user applies commonly known input methods for a man-machine interface. If an indexing exists, the user is also able to choose the objects by this index from a database.
- For selecting the objects, object segmentation, object detection or face detection is employed.
- object tracking techniques are used to track the position of this object in the subsequent images of the sequence of images.
- key-point technique is employed for selecting an object.
- key-point description is used for determining the similarity of objects in different images in the sequence of images.
- Hierarchical segmentation produces a tree whose nodes and leaves correspond to nested areas of the images. This segmentation is done in advance. If a user selects an object by tapping on a given point of an image, the corresponding node of the tree is selected.
- The node selected with the first tap is considered as the father of the node selected with the second tap.
- The corresponding area is considered to define the object.
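A minimal sketch of this tap-based selection in a precomputed region tree, following the text's reading that the first tap's node is the father of the second tap's node (i.e. the second tap refines the selection to a nested child region); the `Region` class and `refine` helper are hypothetical:

```python
# Nodes of a hierarchical segmentation are nested pixel regions.
class Region:
    def __init__(self, pixels, children=()):
        self.pixels, self.children = set(pixels), list(children)

def refine(father, point):
    """Return the child region of `father` containing `point`, or `father`
    itself if no child contains it."""
    for child in father.children:
        if point in child.pixels:
            return child
    return father

eye = Region({(2, 2)})
face = Region({(2, 2), (2, 3), (3, 2), (3, 3)}, [eye])

# First tap selected the face region; a second tap on (2, 2) refines to the eye:
print(refine(face, (2, 2)) is eye)   # True
print(refine(face, (3, 3)) is face)  # True: no child contains (3, 3)
```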
- Only a part of the images of the sequence of images is analyzed for identifying at least one image where the object is close to the second position.
- The part to be analyzed is a certain number of images following the current image, this number of images representing a certain playback time following the currently displayed image.
- Another way to implement the method is to analyze all following images, from the currently displayed image to the end of the sequence.
- The invention further concerns an apparatus for navigating in a sequence of images according to the above-described method.
- Fig. 1 shows an apparatus for playback of a sequence of images and for performing the inventive method
- Fig. 2 shows the inventive method for navigating
- Fig. 3 shows a flow chart illustrating the inventive method
- Fig. 4 shows a first example of navigation according to the inventive method
- Fig. 5 shows a second example of navigation according to the inventive method
- Fig. 1 schematically depicts a playback device for rendering a sequence of images. The playback device includes a screen 1, a TV receiver, HDD, DVD or BD player or the like as source 2 for a sequence of images, and a man-machine interface 3.
- The playback device can also be an apparatus integrating all these functions, e.g. a tablet, where the screen is also used as the man-machine interface (touchscreen), a hard disc or flash disc for storing a movie or documentary is present, and a broadcast receiver is also included in the device.
- Fig. 2 shows a sequence of images 100, e.g. of a movie, documentary or sports event, comprising multiple images.
- The image 101, which is currently displayed on the screen, is the starting point for the inventive method. In the first step, the screen view 11 displays this image 101.
- A first object 12 is selected according to a first input received from the man-machine interface. Then, this first object 12, or a symbol representing it, is moved to another location 13 on the screen, e.g. by drag and drop, according to a second input received by the man-machine interface. On screen view 21, the new location 13 of the first object 12 is illustrated. Then, the method identifies at least one image 102 in the sequence of images 100 in which the first object 12 is at a location 14 that is close to the location 13 to which this object has been moved. In this image, the location 14 has a certain distance 15 to the desired location 13 indicated by the drag-and-drop movement. This distance 15 is used as a measure for how closely the image matches the requested position.
- Fig. 3 illustrates the steps which are performed by the method.
- an object is selected in a displayed image according to a first input.
- the input is received from a man-machine interface. It is assumed that the selecting process described is performed in a short time period. This ensures that the object appearance does not change too much.
- Then an image analysis is performed: the current frame is analyzed and a set of key-points present in the image is extracted. These key-points are located where strong gradients are present, and each key-point is extracted together with a description of the surrounding texture. When a position in the image is selected, the key-points around this position are collected.
- the radius of the area in which key-points are collected is a parameter of the method.
- the selection of the key-points is assisted by other methods, e.g. by a spatial segmentation.
- The set of extracted key-points constitutes a description of the selected object.
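Collecting the key-points within a radius of the selected position can be sketched as follows; the key-point representation (position plus a descriptor placeholder) and the `collect_keypoints` helper are illustrative assumptions, not the patent's data structures:

```python
# Gather the key-points inside a circle around the selected position; the
# radius is a parameter of the method, as noted above.
def collect_keypoints(keypoints, centre, radius):
    cx, cy = centre
    return [kp for kp in keypoints
            if (kp[0] - cx) ** 2 + (kp[1] - cy) ** 2 <= radius ** 2]

kps = [(10, 10, "desc_a"), (12, 11, "desc_b"), (90, 80, "desc_c")]
selected = collect_keypoints(kps, (11, 10), 5)
print([kp[2] for kp in selected])  # only the two nearby key-points
```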
- the object is moved to a second position in step 210. This movement is executed according to a second input, which is an input from the man-machine interface. The movement is realized as drag and drop.
- In step 220, the method identifies at least one image in the sequence of images in which the first object is close to the second position; this is the image used as the new starting point.
- In step 230, the method jumps to the identified image and playback is continued from there.
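The flow described above (select, move, identify, jump) can be sketched end to end, under the simplifying assumption that the object's position per frame is already tracked; `navigate` is a hypothetical helper that restricts the search to frames after the current one:

```python
# End-to-end sketch: among frames after the current one, jump to the frame
# where the tracked object is closest to the position the user dragged it to.
def navigate(track, current_frame, target):
    """`track` maps frame index -> (x, y); return the frame to jump to."""
    candidates = [f for f in track if f > current_frame]
    return min(candidates,
               key=lambda f: (track[f][0] - target[0]) ** 2
                           + (track[f][1] - target[1]) ** 2)

track = {0: (5, 5), 10: (50, 50), 20: (98, 99), 30: (60, 10)}
print(navigate(track, 0, (100, 100)))  # jump to frame 20
```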
- Fig. 4 shows an example of applying the method when watching a TV show.
- the playback time of the whole show is indicated by an arrow t.
- The first image is displayed on the screen; the image includes three faces.
- The user is interested in the person displayed on the left-hand side of the screen and selects that person by drawing a bounding box around the face. Then the user drags the selected object (the face with the fancy hair) into the middle of the screen and, in addition, enlarges the bounding box to indicate that he wants to see this person in the middle of the screen and in a close-up view.
- An image fulfilling this requirement is searched for in the sequence of images; this image is found at time t2, and it is displayed and playback is started at this time t2.
- Fig. 5 shows an example of applying the method when watching a soccer game.
- a scene of a game in the middle of the field is shown.
- There are four players; one of them is close to the ball.
- the user is interested in a certain situation, e.g. in the next penalty.
- He selects the ball with a bounding box and drags it to the penalty spot to indicate that he wants to see a scene where the ball is exactly at this point.
- When this requirement is fulfilled, a scene is displayed where the ball lies on the penalty spot and a player prepares to take the penalty kick.
- the game is played back from this scene onwards.
- the user is able to conveniently navigate to the next scene he is interested in.
Priority Applications (8)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
RU2014101339A RU2609071C2 (en) | 2011-06-17 | 2012-06-06 | Video navigation through object location |
CA2839519A CA2839519A1 (en) | 2011-06-17 | 2012-06-06 | Video navigation through object location |
JP2014515137A JP6031096B2 (en) | 2011-06-17 | 2012-06-06 | Video navigation through object position |
EP12730823.7A EP2721528A1 (en) | 2011-06-17 | 2012-06-06 | Video navigation through object location |
KR1020137033446A KR20140041561A (en) | 2011-06-17 | 2012-06-06 | Video navigation through object location |
CN201280029819.XA CN103608813A (en) | 2011-06-17 | 2012-06-06 | Video navigation through object location |
US14/126,494 US20140208208A1 (en) | 2011-06-17 | 2012-06-06 | Video navigation through object location |
MX2013014731A MX2013014731A (en) | 2011-06-17 | 2012-06-06 | Video navigation through object location. |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP11305767 | 2011-06-17 | ||
EP11305767.3 | 2011-06-17 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2012171839A1 true WO2012171839A1 (en) | 2012-12-20 |
Family
ID=46420070
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/EP2012/060723 WO2012171839A1 (en) | 2011-06-17 | 2012-06-06 | Video navigation through object location |
Country Status (9)
Country | Link |
---|---|
US (1) | US20140208208A1 (en) |
EP (1) | EP2721528A1 (en) |
JP (1) | JP6031096B2 (en) |
KR (1) | KR20140041561A (en) |
CN (1) | CN103608813A (en) |
CA (1) | CA2839519A1 (en) |
MX (1) | MX2013014731A (en) |
RU (1) | RU2609071C2 (en) |
WO (1) | WO2012171839A1 (en) |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9405770B2 (en) | 2014-03-10 | 2016-08-02 | Google Inc. | Three dimensional navigation among photos |
CN104185086A (en) * | 2014-03-28 | 2014-12-03 | 无锡天脉聚源传媒科技有限公司 | Method and device for providing video information |
CN104270676B (en) * | 2014-09-28 | 2019-02-05 | 联想(北京)有限公司 | A kind of information processing method and electronic equipment |
JP6142897B2 (en) * | 2015-05-15 | 2017-06-07 | カシオ計算機株式会社 | Image display device, display control method, and program |
KR102474244B1 (en) * | 2015-11-20 | 2022-12-06 | 삼성전자주식회사 | Image display apparatus and operating method for the same |
TWI636426B (en) * | 2017-08-23 | 2018-09-21 | 財團法人國家實驗研究院 | Method of tracking a person's face in an image |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090052861A1 (en) * | 2007-08-22 | 2009-02-26 | Adobe Systems Incorporated | Systems and Methods for Interactive Video Frame Selection |
US20100082585A1 (en) * | 2008-09-23 | 2010-04-01 | Disney Enterprises, Inc. | System and method for visual search in a video media player |
Family Cites Families (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH06101018B2 (en) * | 1991-08-29 | 1994-12-12 | インターナショナル・ビジネス・マシーンズ・コーポレイション | Search of moving image database |
JP4226730B2 (en) * | 1999-01-28 | 2009-02-18 | 株式会社東芝 | Object region information generation method, object region information generation device, video information processing method, and information processing device |
KR100355382B1 (en) * | 2001-01-20 | 2002-10-12 | 삼성전자 주식회사 | Apparatus and method for generating object label images in video sequence |
JP2004240750A (en) * | 2003-02-06 | 2004-08-26 | Canon Inc | Picture retrieval device |
TW200537941A (en) * | 2004-01-26 | 2005-11-16 | Koninkl Philips Electronics Nv | Replay of media stream from a prior change location |
US20080285886A1 (en) * | 2005-03-29 | 2008-11-20 | Matthew Emmerson Allen | System For Displaying Images |
WO2007096003A1 (en) * | 2006-02-27 | 2007-08-30 | Robert Bosch Gmbh | Trajectory-based video retrieval system, method and computer program |
US7787697B2 (en) * | 2006-06-09 | 2010-08-31 | Sony Ericsson Mobile Communications Ab | Identification of an object in media and of related media objects |
US8488839B2 (en) * | 2006-11-20 | 2013-07-16 | Videosurf, Inc. | Computer program and apparatus for motion-based object extraction and tracking in video |
DE102007013811A1 (en) * | 2007-03-22 | 2008-09-25 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | A method for temporally segmenting a video into video sequences and selecting keyframes for finding image content including subshot detection |
US20100281371A1 (en) * | 2009-04-30 | 2010-11-04 | Peter Warner | Navigation Tool for Video Presentations |
JP5163605B2 (en) * | 2009-07-14 | 2013-03-13 | パナソニック株式会社 | Moving picture reproducing apparatus and moving picture reproducing method |
US20110113444A1 (en) * | 2009-11-12 | 2011-05-12 | Dragan Popovich | Index of video objects |
US9171075B2 (en) * | 2010-12-30 | 2015-10-27 | Pelco, Inc. | Searching recorded video |
-
2012
- 2012-06-06 WO PCT/EP2012/060723 patent/WO2012171839A1/en active Application Filing
- 2012-06-06 JP JP2014515137A patent/JP6031096B2/en not_active Expired - Fee Related
- 2012-06-06 CN CN201280029819.XA patent/CN103608813A/en active Pending
- 2012-06-06 US US14/126,494 patent/US20140208208A1/en not_active Abandoned
- 2012-06-06 MX MX2013014731A patent/MX2013014731A/en active IP Right Grant
- 2012-06-06 CA CA2839519A patent/CA2839519A1/en not_active Abandoned
- 2012-06-06 RU RU2014101339A patent/RU2609071C2/en not_active IP Right Cessation
- 2012-06-06 EP EP12730823.7A patent/EP2721528A1/en not_active Withdrawn
- 2012-06-06 KR KR1020137033446A patent/KR20140041561A/en not_active Application Discontinuation
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090052861A1 (en) * | 2007-08-22 | 2009-02-26 | Adobe Systems Incorporated | Systems and Methods for Interactive Video Frame Selection |
US20100082585A1 (en) * | 2008-09-23 | 2010-04-01 | Disney Enterprises, Inc. | System and method for visual search in a video media player |
Non-Patent Citations (1)
Title |
---|
JOSEF SIVIC ET AL: "Person Spotting: Video Shot Retrieval for Face Sets", IMAGE AND VIDEO RETRIEVAL; [LECTURE NOTES IN COMPUTER SCIENCE; LNCS], SPRINGER-VERLAG, BERLIN/HEIDELBERG, vol. 3568, 4 August 2005 (2005-08-04), pages 226 - 236, XP019012815, ISBN: 978-3-540-27858-0 * |
Also Published As
Publication number | Publication date |
---|---|
JP2014524170A (en) | 2014-09-18 |
RU2014101339A (en) | 2015-07-27 |
KR20140041561A (en) | 2014-04-04 |
JP6031096B2 (en) | 2016-11-24 |
RU2609071C2 (en) | 2017-01-30 |
US20140208208A1 (en) | 2014-07-24 |
CA2839519A1 (en) | 2012-12-20 |
CN103608813A (en) | 2014-02-26 |
MX2013014731A (en) | 2014-02-11 |
EP2721528A1 (en) | 2014-04-23 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20200218902A1 (en) | Methods and systems of spatiotemporal pattern recognition for video content development | |
AU2015222869B2 (en) | System and method for performing spatio-temporal analysis of sporting events | |
JP5355422B2 (en) | Method and system for video indexing and video synopsis | |
Pritch et al. | Webcam synopsis: Peeking around the world | |
US7802188B2 (en) | Method and apparatus for identifying selected portions of a video stream | |
US20140208208A1 (en) | Video navigation through object location | |
EP3513566A1 (en) | Methods and systems of spatiotemporal pattern recognition for video content development | |
Chen et al. | Personalized production of basketball videos from multi-sensored data under limited display resolution | |
Carlier et al. | Combining content-based analysis and crowdsourcing to improve user interaction with zoomable video | |
US10325628B2 (en) | Audio-visual project generator | |
JP2004508757A (en) | A playback device that provides a color slider bar | |
CN111031349B (en) | Method and device for controlling video playing | |
JP2011504034A (en) | How to determine the starting point of a semantic unit in an audiovisual signal | |
JP2007200249A (en) | Image search method, device, program, and computer readable storage medium | |
Wittenburg et al. | Rapid serial visual presentation techniques for consumer digital video devices | |
WO1999005865A1 (en) | Content-based video access | |
JP3629047B2 (en) | Information processing device | |
Coimbra et al. | The shape of the game | |
Zhuang | Sports video structure analysis and feature extraction in long jump video | |
KR20110114385A (en) | Manual tracing method for object in movie and authoring apparatus for object service | |
JP6219808B2 (en) | Video search device operating method, video search method, and video search device | |
Wang | Viewing support system for multi-view videos | |
Sumiya et al. | A Spatial User Interface for Browsing Video Key Frames | |
Pongnumkul | Facilitating Interactive Video Browsing through Content-Aware Task-Centric Interfaces |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 12730823 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2014515137 Country of ref document: JP Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: MX/A/2013/014731 Country of ref document: MX |
|
ENP | Entry into the national phase |
Ref document number: 2839519 Country of ref document: CA Ref document number: 20137033446 Country of ref document: KR Kind code of ref document: A |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2012730823 Country of ref document: EP |
|
ENP | Entry into the national phase |
Ref document number: 2014101339 Country of ref document: RU Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: 14126494 Country of ref document: US |