IE84144B1 - Multimedia management - Google Patents

Multimedia management

Info

Publication number
IE84144B1
IE84144B1 IE2003/0839A IE20030839A
Authority
IE
Ireland
Prior art keywords
content
media
meta data
component
controller
Prior art date
Application number
IE2003/0839A
Other versions
IE20030839A1 (en)
Inventor
Marlow Sean
Murphy Noel
O Connor Noel
Mcdonald Kieran
Browne Paul
Pollard Sheila
Smeaton Alan
Lee Hyowon
Original Assignee
Aliope Limited
Filing date
Publication date
Application filed by Aliope Limited filed Critical Aliope Limited
Priority to IE2003/0839A priority Critical patent/IE84144B1/en
Publication of IE20030839A1 publication Critical patent/IE20030839A1/en
Publication of IE84144B1 publication Critical patent/IE84144B1/en


Abstract

ABSTRACT A multimedia management system (1) receives live stream content at an interface (5) or static content (6) and captures (7) it to a storage device (10). An analysis component (11) in parallel or in series generates meta data for a database (12), the meta data acting as an index for the storage device (10). Automatic detection of events by the analysis component (11) allows provision of real time distribution of content such as news clips to fixed and mobile networks (MS). Near real time media services are also provided with some annotations being performed by an annotation component (14) under control of a browser (13).

Description

Multimedia Management

INTRODUCTION

Field of the Invention

The invention relates to management of multimedia such as digitised audio-visual content.
Prior Art Discussion

In recent years advances have been made in the field of distribution of multimedia in mobile networks. For example, the Multimedia Messaging Service (MMS) standard for mobile networks specifies peer-to-peer multimedia messaging. United States Patent Application No. US200205/442 describes a method of transmitting/receiving a broadcast message in a mobile network.
Also, advances have been made in processing of multimedia in various aspects, such as indexing and browsing. For example, the paper "Fischlar: an on-line system for indexing and browsing broadcast television", O'Connor N, Marlow S, Murphy N, Smeaton A, Browne P, Deasy S, Lee H and McDonald K, ICASSP 2001 - International Conference on Acoustics, Speech, and Signal Processing, Salt Lake City, UT, 7-11 May 2001, describes indexing and browsing. Television programmes are captured in MPEG-1 format and analysed using video indexing tools. Browsing interfaces allow a user to browse a visual index to locate content of interest, and selected content can then be streamed in real time.
While these developments are of benefit, there is still a need to provide more comprehensive multimedia asset management to support content-based operations for a broader range of devices and access mechanisms.
SUMMARY OF THE INVENTION

According to the invention, there is provided a multimedia management system comprising: a multimedia content capture component for receiving content and for writing it to a storage device; an analysis component for analysing received content to generate meta data, and a database for storing the meta data; a server component for distributing content or meta data to a network for delivery to subscriber devices; a controller for coordinating operation of the components of the system to provide configured services for delivery of content to subscriber devices; wherein the analysis component generates the meta data as an index to the content in the storage device and the server uses the meta data to access the content; wherein the analysis component extracts key frames from received video content and segments audio and video streams; wherein said key frames are stored to provide a storyboard of video content with images forming part of an index; and wherein the analysis component operates in parallel with capture of content by the capture component for analysis in real time as a video content stream is being received.
In one embodiment, the analysis component uses a shot boundary detection technique to generate the meta data.
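The patent does not specify the shot boundary algorithm; a common approach, and one consistent with the colour histogram data mentioned later in the description, compares intensity histograms of consecutive frames. The sketch below is a minimal illustration under that assumption (the function names and threshold value are hypothetical, not from the patent):

```python
def histogram(frame, bins=8):
    """Quantise pixel intensities (0-255) into a normalised histogram."""
    counts = [0] * bins
    for px in frame:
        counts[min(px * bins // 256, bins - 1)] += 1
    total = len(frame) or 1
    return [c / total for c in counts]

def shot_boundaries(frames, threshold=0.5):
    """Return frame indices where the histogram difference between
    consecutive frames exceeds the threshold, marking a likely cut."""
    cuts = []
    prev = None
    for i, frame in enumerate(frames):
        h = histogram(frame)
        if prev is not None:
            diff = sum(abs(a - b) for a, b in zip(prev, h))
            if diff > threshold:
                cuts.append(i)
        prev = h
    return cuts
```

Here each frame is modelled as a flat list of grey values; a real implementation would operate on decoded MPEG frames and tune the threshold per content type, as the description notes for slow-moving versus fast-motion video.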
In another embodiment, the analysis component automatically detects events in incoming content and generates notifications in real time.
In a further embodiment, the controller receives the notifications and automatically controls components of the system to generate a content output triggered by the detected event.
In one embodiment, the notification is an alert message transmitted to a mobile station.
In another embodiment, the controller directs routing of meta data from the database to a plurality of different devices, including local devices directly connected to the controller and remote subscriber devices.
In a further embodiment, the analysis component writes some of the meta data to the storage devices to provide an addressing index.
In one embodiment, the controller uses meta data solely from the database for directing content to some subscriber devices.
In another embodiment, the controller dynamically generates media interfaces with style sheet transformation, in which style sheets dynamically transform the media and code into user-viewable media and display code.
In a further embodiment, the transformation performs independent processing of screen windows in a manner analogous to application or operating system display windows.
In one embodiment, the style sheet transformation code is separate from underlying functionality of the controller and other components involved in media output.
In another embodiment, the system further comprises a live streaming interface connected to the content capture component.
In a further embodiment, the system further comprises a scheduling component connected to the live stream circuit for activating the live stream circuit and setting recording parameters for both the live stream circuit and the capture component.
DETAILED DESCRIPTION OF THE INVENTION

Brief Description of the Drawings

The invention will be more clearly understood from the following description of some embodiments thereof, given by way of example only with reference to the accompanying drawings in which:-

Fig. 1 is a block diagram of a multimedia management system of the invention.
Fig. 2 is a diagram showing interfacing modes; and Fig. 3 is a diagram illustrating media analysis, generation of meta-data, and presentation for content-based operations such as access, retrieval, sharing and consumption.
Description of the Embodiments

Referring to Fig. 1, a multimedia management system 1 of the invention comprises a controller 2 which controls in a comprehensive manner various multimedia processing components to provide a variety of services. These components are as follows.
- A live stream interface 5 for receiving broadcast digital TV channels.
- A reader device 6 for reading multimedia content from a static, non-live source such as DVDs.
- A capture circuit 7 for encoding and compressing content to a format such as MPEG.
- A content storage device 10 for storing the encoded and compressed content files together with related meta data. The latter includes images, hyperlinks, log files for data such as start/end time of capture, programme meta-data, user account information, user preferences, copyright information, number of key frames extracted, file size, and channel meta data. Some of the meta data is generated by the capture circuit 7, and the remainder by an analysis component 11.
- An analysis component 11 which extracts representative meta-data such as key frames, audio, and text from compressed audio-visual content. Extracted meta-data is encoded in formats and resolutions (for example JPEG, GIF, MPEG) of any size that can be displayed, selected, and interacted with. The analysis component 11 both extracts existing meta data and generates meta data which is stored in the storage device 10. However, it also writes meta data to a database 12, set out below. Additional meta data related to the precision and accuracy of the analysis techniques used against audio-visual content is generated by the analysis component 11 using the shot boundary technique to generate colour histogram, edge detection, and motion analysis data.
- A content database 12 for storing all meta data generated by the analysis component 11. Some of the meta data is also stored with the actual content in the storage device 10, as outlined above. The database 12 stores the meta data in a manner to provide an index for the storage device 10.
- A browser 13 for use by local/remote staff of the organisation hosting the system 1. This is used for accessing the content using indexes of the database 12. It activates interfaces for content-based operations (non-linear playback, annotation) accomplished over end-user devices. The browser 13 can also route content to a transcoding component 15.
- An annotation component 14 for annotation of stored content under instruction from the browser 13.
- A scheduling component 16 for scheduling operation of the interface 5 for receiving and encoding live streams.
- A Web server 17 for routing of content to infrastructure for fixed and mobile networks for onward routing to central/edge servers and end user playback devices.
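The role of the database 12 as an index into the storage device 10 can be pictured as a table mapping analysed segments back to positions in a content file. The sketch below uses an in-memory SQLite table; the table name, column names and file names are illustrative assumptions, not details from the patent:

```python
import sqlite3

# In-memory sketch of the meta-data index: each row points from a
# detected segment back into a content file held on the storage device.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE segment_index (
    content_file TEXT,
    start_s REAL,
    end_s REAL,
    keyframe TEXT
)""")

def index_segment(content_file, start_s, end_s, keyframe):
    """Record one analysed segment and its representative key frame."""
    conn.execute("INSERT INTO segment_index VALUES (?, ?, ?, ?)",
                 (content_file, start_s, end_s, keyframe))

def lookup(content_file, t):
    """Find the segment of a content file covering time t (seconds)."""
    return conn.execute(
        "SELECT start_s, end_s, keyframe FROM segment_index "
        "WHERE content_file = ? AND start_s <= ? AND end_s > ?",
        (content_file, t, t)).fetchone()
```

A server wanting the key frame for a given playback position would call `lookup("news.mpg", 15.0)` and retrieve the stored storyboard image without scanning the media file itself.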
The multimedia management processes performed by the system are now described.
One of the problems facing operators of communication systems such as mobile network operators is that while there is a vast amount of content available, it has not been possible to manage it so that content owners and aggregators, network operators and end users get optimum use of it. It is not just a matter of coping with the limited communication bandwidths and device capabilities of many user devices, but also syndicating content which end users wish to access.
The system 1 addresses this problem by acting as a platform which takes in the multimedia content and manages it in an optimum manner so that end users can access/receive what they subscribed to. For example, security camera systems generate a vast quantity of content; however, an end user is only interested in seeing suspicious activity. Thus, the analysis component 11 may automatically monitor live streams from such cameras and generate an alert for the controller 2 if a suspicious image and/or sound pattern is detected. The controller in turn notifies in real time the Web server 17 to retrieve the content meta data and push it onto either the fixed or mobile network to the end user subscriber playback devices. One suitable bearer for such alerts is GSM SMS to mobile devices.
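The automated alert path above — analysis component detects an event, controller fans it out to delivery channels — can be sketched as a simple publish/subscribe loop. The class and field names below are hypothetical illustrations, not terms from the patent:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Event:
    """A detection reported by the analysis component."""
    stream_id: str   # e.g. a camera or channel identifier
    kind: str        # e.g. "motion", "suspicious_sound"
    timestamp: float # seconds into the live stream

class AlertController:
    """Minimal sketch of the controller's alert path: handlers stand in
    for delivery channels such as an SMS gateway or the Web server."""
    def __init__(self):
        self.handlers: List[Callable[[Event], None]] = []

    def subscribe(self, handler: Callable[[Event], None]):
        self.handlers.append(handler)

    def on_event(self, event: Event):
        # Fan the notification out to every registered channel in real time.
        for handler in self.handlers:
            handler(event)
```

In a deployed system the handler registered here would format and transmit the GSM SMS alert mentioned above rather than simply record the event.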
In other situations some human input may be required such as downloading of a video sequence of a breaking news event or a score in a football match. In this case, staff monitor certain content channels and access audio-visual segments in non-linear fashion to achieve content-based operations for seamless syndication, transmission and retrieval over fixed and mobile networks. The content is cropped and rendered to be suitable for transmission to the subscriber's mobile station MS. Such operations can typically be carried out in a few seconds, thus providing a near real-time service to the subscriber.
Fig. 2 illustrates the versatility of the system 1, providing content to a wide variety of devices to present a variety of interfaces. These include a portal interface 20, a search interface 21, a browsing interface 22, a playback interface 23, an annotation interface 24, remote user device browsing/swapping interfaces 25, and a remote user device playback interface 26. This diagram illustrates versatility of the architecture of the system 1. The controller 2 is at the centre allowing interfacing with both staff of the hosting organisation and remote end users in a versatile manner.
The controller 2 includes a database of access control data for interfacing with staff.
It allows supervisor access to some of the components.
The controller 2 controls automatic deletion of content according to configured parameters.
The analysis component 11 generates the meta data in a manner whereby it forms a content index that is represented using images of video content and meta data referenced to additional related media content. Capture and analysis of content can take place in parallel, thus enabling text, image and video distribution without the need to wait for a programme to end before managing distribution to a mass audience.
The analysis component 11 generates the meta data for some content in the form of storyboards. Live and stored media files are analysed using algorithms processing specific data streams/layers, multiplexed/encapsulated in analogue and standard media compression formats (for example MPEG formats). Each algorithm can be controlled by a dynamic threshold/parameter set to analyse different types of media information (for example video with slow movement and short cuts compared to video with fast motion and fades). The analysis techniques are used to extract representational content from media files for indexing based on natural breaks or events in the media content. Analysis results derived from files composed of different types of media information (for example a news programme with newscaster and news agency footage) are based on a combination of algorithms in order to generate the most logical representation of the programme's content for specific playback devices.
The analysis component 11 analyses whole data streams/formats and media layers by segmenting audio, detecting faces, detecting edges, detecting black frames, and generating colour histograms. The combination of these algorithms detects video frames and video shots that are sequenced in a logical scene if media programmes follow a structure at the time of production.
Referring to Fig. 3, a media file 40 is analysed in step 45 to generate content-based meta-data 46, which is then automatically stored in the storage device 10. The meta-data 46 is retrieved using a style sheet 41 that can be marked up in representation languages (for example HTML, CHTML, WML, SMIL and XML languages) useful for static/dynamic representation of media information to media devices (for example PC, PDA, and mobile phone). Dynamic representation of media content provides the basis for progressive download of media information to media devices dependent on progressive and fast retrieval of media information over narrowband networks. Thus, media data is presented or exchanged between users by communicating representational information, meta-data information, and location information rather than communicating the media data itself.
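The style-sheet step above can be illustrated with a small transformation from meta-data XML to device-specific display code. The patent uses XSL stylesheets for this; since the Python standard library has no XSLT engine, plain string templates stand in for the stylesheets here, and the element names and file names are illustrative assumptions:

```python
import xml.etree.ElementTree as ET

# Hypothetical meta-data record for one indexed story.
META_XML = """<story><title>Breaking news</title>
<keyframe href="kf_000.jpg"/><keyframe href="kf_001.jpg"/></story>"""

# One "stylesheet" per device class, standing in for XSL.
TEMPLATES = {
    "pc":    '<html><h1>{title}</h1>{frames}</html>',
    "phone": '<wml><p>{title}</p>{frames}</wml>',
}

def render(meta_xml, device):
    """Transform meta-data XML into display code for the given device."""
    root = ET.fromstring(meta_xml)
    title = root.findtext("title")
    frames = "".join(f'<img src="{k.get("href")}"/>'
                     for k in root.iter("keyframe"))
    return TEMPLATES[device].format(title=title, frames=frames)
```

The point mirrored here is that only the representational information (title, key-frame references) travels to the device; the media data itself stays on the storage device until explicitly requested.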
The controller 2 acts as an interface between the storage device 10 and the web server 17, the Web browser 13, and a streaming server reading commands from hyperlinks and animated media formats. Such commands are sent over the fixed and mobile network to playback devices. The Web browser 13 is automatically refreshed and mapped against media files, metafiles and meta-data found in the database 12.
Requests may be made from subscriber devices to achieve content-based operations.
Each content-based operation is completed by connection (circuit switched and/or packet based) to a local media server and/or remote server depending on the availability of bandwidth that the mobile device can connect to and the location of the subscriber requesting the content. Media programmes and/or media clips are subsequently and automatically sent over packet-based networks to subscriber devices in synchronised, downloadable or streamable formats that can be played using messaging software, browsers or media decoders developed specifically for the media device.
Access to interfaces and media information in the system 1 is monitored at different levels. Content-based operations requested by a subscriber can only be fulfilled after inputting a user ID. Monitored access generates information on usage of the system 1 and the media data content. Such information can be used to improve media distribution on a personalised basis.
Recording functions integrated in the scheduling component 16 provide system users and end users with recording options. Recording requests can be made to the system by clicking on hyperlinked media information such as programme title and programme meta-data. End users can also rate media programmes in order to receive programme recommendations from the system.
Search interfaces with media highlights in an overview/detail view interface are also generated based upon the meta data. Media sequences are highlighted based on a search query and they can be browsed using a timeline bar in the detail view media interface.
The system 1 also generates a search interface with a detail view based on text results and a slide show media interface enabling system users to narrow down search results using text information (for example closed caption, annotation) tagged to indexed audio-visual content. A search interface for audio-visual information enables system users and end users to search audio-visual information indexed using different analysis techniques. Searches can be applied against different parameters populated by analysis techniques such as face detection, closed caption parsing, and shot detection. Search results are ranked and are based on representative meta-data. An overview media interface implements "overview first, zoom and filter, then details on demand". This interface provides an overview of media files using representative meta-data, thus indicating to the end user what the audio-visual information is about.
The system 1 also generates a scroll bar media interface. System users browse media content using a scroll bar that acts as a very versatile controller. Clicking on the arrows of the scroll bar moves the page by each row of representative keyframes, clicking on the empty bar jumps the page by a screenful, and dragging the square object provides immediate and continuous movement of representative keyframes, thus allowing a finer and more pleasant experience.
Another interface is an extension of the overview media interface. This is similar to the overview/detail browser, but allows movement into a detail view. Once in a detail view, a timeline bar is used for immediate system reaction and finer control in browsing. The timeline bar can be static or dynamic. Both types of timeline bar are useful when browsing a large amount of media information because they provide temporal orientation by showing the current point of browsing in relation to the whole media file. Not only useful as a time indicator, the bar in this browser allows random access by selecting related sections of the bar. The interface with the dynamic quick bar capabilities saves the tedious, continuous clicking that the static timeline bar browser may require. One sweep of the mouse allows viewing of all of the keyframes in the video. Much finer control and continuous, immediate response from the system provides 'direct manipulation'. The browsing task is less of an 'act-and-react' interaction, and is more natural and paced by the user.
A slide show media interface flips through all representative indexed meta-data in a single slot. Temporally presenting representative meta-data (rather than spatially) saves screen space considerably, and is thus useful for small end user devices. People with a preference for more dynamic interaction may also like this browser. It is important to provide some form of temporal orientation in this kind of temporal presentation, and in this case a timeline bar is used.
A speech annotation interface is useful for typists who, in an off-line mode, have to annotate and synchronise text against audio-visual content. The speech annotation interface is based on the slide show media interface and helps annotate text at a specific level of granularity into media files. Annotation against video can be searched and the system can provide audio-visual search results.
Another annotation interface supports portal updates on remote media servers located close to end-users. The audio-visual annotation interface also enables system users to insert rights and user access information.
A media portal interface for devices with small displays provides an overview of media content available to browse and retrieve on an end user device with a small display. Based on standard mark-up languages, this interface uses media segments as retrieval units, tracking users' access history, thus being able to recommend the most important news stories to the user in the form of a summarised list. This way, all the user has to do is simply select a story from the list, saving awkward text input and other interaction. The idea is to put most of the processing on the system itself, rather than requiring elaborate searching/browsing work from the user. Because this interface tracks user interactions, it can generate usability information useful for billing and rating mechanisms.
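A minimal sketch of the history-based recommendation described above: stories are ranked by how often the user has previously accessed each category. The data shapes and function name are illustrative assumptions, since the patent does not specify the recommendation algorithm:

```python
from collections import Counter

def recommend(access_history, stories, n=3):
    """Rank candidate stories by how often the user has accessed each
    category before, returning the top n titles as a summarised list."""
    prefs = Counter(access_history)  # category -> past access count
    ranked = sorted(stories,
                    key=lambda s: prefs[s["category"]],
                    reverse=True)
    return [s["title"] for s in ranked[:n]]
```

The summarised list this produces is what the portal would present on the small display, so the user only has to select a story rather than type a query.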
The server 17 also provides detailed video interfaces for devices with small display.
These two interfaces make video browsing similar to video game interactions on small devices. From these interfaces, the user can control the current point of browsing using the controller/arrows. This allows the user to use only the one hand holding the small device. Four arrow buttons are just about all that the user uses to browse representative media (other than another arrow at the bottom left, to bring the user back to related media information such as listings, related programmes, etc.).
Playback interfaces represent static and synchronised multimedia messaging that plays images, audio and video information in email clients, browsers, and MMS clients on MMS handsets. Receivers and viewers of MMS can interact with the media information to access related information such as related stories in video formats.
The system 1 provides searching capabilities across programme meta-data, meta-data parameters filled by analysis techniques, and captured and referenced closed caption data. Search results are complemented and hyperlinked to visual representation of media programmes to allow users to access media information in a random fashion thus reducing the time to play back specific media segments. Searches can be made from any web browser capable device.
To complement text-based search of media programmes, video skimming and visualisation is achieved by the system 1 to retrieve media information in a non-linear fashion. Information rendered by browsers can be progressively downloaded on a media device for a user to interact with locally or it can be requested on-demand by selecting media information stored remotely. Based on HTML and XML mark-up languages, media browsers are designed to optimise content-based operations for media devices having unique specifications (display, resolution, media format, and/or mark-up language rendering).
The controller 2 controls playback formats, resolutions, quality and method of transmission and monitoring of the playback of a media file. Playback of a media file is supported by streaming server and streaming software, video clients, and browsers that can render proprietary and standard media formats. The management module can control processes leading to playback of media content. Processes such as buffering, progressive download, image and ad removal/ insertion can be set to suit user preferences and media device specifications.
The system 1 also enables content-based interactions from file formats and streaming clients provided that the management module of the present invention can interface with the API of a streaming server.
The controller 2 generates interfaces from which a system user can automatically access, annotate, format, share and publish media segments to any file or directory.
This method allows distribution of media clips once in multiple formats, over many networks, to multiple devices. Text annotations to media clips are created so that they can only be accessed by authorised users. Annotation of media clips can be accomplished at different levels of granularity of media content (image, advertisement, or logical media segment such as shots, scenes, story) to allow for more efficient retrieval at a later stage.
Users operating this interface set terms-of-usage for each media file that is made available for content-based operations on local/remote servers. Terms-of-usage can be embedded in transmission delivery formats of each media file in order to track and confirm usage of each media file retrieved.
The outputs of media editing can be standard multimedia messaging services based on text, images, and synchronised multimedia presentations. Outputs can also be text, audio and video clips that can be rendered by email clients, MMSCs, browsers, streaming players, and streaming clients.
The above interfaces are dynamically generated by XSL transformation. A set of XSL stylesheets is used to take the necessary information from the underlying XML data, which is generated whenever a web request is made by a client. The generated XML data is collected from various sources and as a result contains all information to be displayed on the user's device in response to the user's action. XSL stylesheets then transform the data into user-viewable code (HTML, JavaScript, Java Applet and SVG) to render on the web browser and other devices.
For the web interfaces with sophisticated features, layers of frames are used, each frame corresponding to an XSL stylesheet that transforms the underlying XML data for that particular frame into a presentable format such as HTML. The framed interface allows application-like interaction in which a clicked widget stays on with the effect visible on other parts of the screen, instead of the whole screen refreshing.
For simpler interfaces such as for mobile devices, a single XSL stylesheet defines the layout and elements of the screen, thus corresponding to a single underlying XML data generated when requested by the user.
In this way, the XML-based architecture of the system allows easy device-dependent interface implementation using XSL. This also separates the underlying functionality of the system from the interface design, ensuring more user-oriented interface consideration by the designer.
The controller 2 can "push" and/ or "pull" media files that have been analysed and indexed onto servers that are closer and more accessible to authorised users of media content.
It will be appreciated that the system 1 acts as the hub around which specific applications can be supported and services provided with a short lead time. The system 1 captures, manages and distributes media information requested by authorised users across retrieval devices. The system allows easy development of services.
The following is a non-exhaustive list of services which may be provided:

- Distributed analysis of media information
- Protected editing of media information
- Face detection in live and encoded video programmes
- Caching of referenced media information
- Live broadcast & indexing of media content
- Real-time, near real-time, on-demand representation of media information
- Near real-time alerts of media events
- Image matching in live and encoded video programmes
- Face matching in structured video programmes
- Video object detection in live and encoded video programmes
- Video object matching in live and encoded video programmes
- Motion detection in live and encoded video programmes
- File merging
- Advertisement insertion based on user lists
- Searching across closed and open caption
- Searching across image and video features
- Personalised media caching
- Media management for instant messaging
- Multicasting
- Video skimming

Broadcasters transmit a continuous flow of pre-recorded and live television programmes through licensed networks (cable/satellite/terrestrial) and media devices (TV/STB). Broadcast models are currently based on physical locations where a TV signal can reach viewers without necessarily identifying them or engaging their interest.
The system 1 provides an effective mechanism to transmit beyond this traditional model and reach audiences that are mobile. This is achieved by making time-sensitive broadcast events available shortly after they have occurred and in a format suitable for retrieval across personal media devices.
Archive units of content owners, producers and aggregators like parliaments and broadcasting organisations are in charge of labour intensive operations. Restoration, digitisation, cataloguing, storing, copyright clearance, copying and shipping represent some of the main processes that the archive units have to manage before media assets can be available on-demand to internal and external viewers. In order to enable access to archived content, this system enables the archival process to be accomplished over a web browser capable device. It also allows retrieval of copyrighted media information based on catalogued information.
The invention is not limited to the embodiments described but may be varied in construction and detail.

Claims (2)

Claims

1. A multimedia management system comprising: a multimedia content capture component (7) for receiving content and for writing it to a storage device (10); an analysis component (11) for analysing received content to generate meta data, and a database (12) for storing the meta data; a server component (17) for distributing content or meta data to a network (MN) for delivery to subscriber devices; a controller (2) for coordinating operation of the components of the system to provide configured services for delivery of content to subscriber devices (MS); wherein the analysis component (11) generates the meta data as an index to the content in the storage device (10) and the server (17) uses the meta data to access the content; wherein the analysis component (11) extracts key frames from received video content and segments audio and video streams; wherein said key frames are stored to provide a storyboard of video content with images forming part of an index; and wherein the analysis component (11) operates in parallel with capture of content by the capture component (7) for analysis in real time as a video content stream is being received.

2. A system as claimed in any preceding claim, wherein the analysis component (11) uses a shot boundary detection technique to generate the meta data.

3. A system as claimed in any preceding claim, wherein the analysis component (11) automatically detects events in incoming content and generates notifications in real time.

4. A system as claimed in claim 3, wherein the controller (2) receives the notifications and automatically controls components of the system to generate a content output triggered by the detected event.

5. A system as claimed in claim 4, wherein the notification is an alert message transmitted to a mobile station.

6. A system as claimed in any preceding claim, wherein the controller directs routing of meta data from the database to a plurality of different devices, including local devices directly connected to the controller and remote subscriber devices.

7. A system as claimed in claim 6, wherein the analysis component (11) writes some of the meta data to the storage devices to provide an addressing index.

8. A system as claimed in claims 6 or 7, wherein the controller uses meta data solely from the database for directing content to some subscriber devices.

9. A system as claimed in any preceding claim, wherein the controller dynamically generates media interfaces with style sheet transformation, in which style sheets dynamically transform the media and code into user-viewable media and display code.

10. A system as claimed in claim 9, wherein the transformation performs independent processing of screen windows in a manner analogous to application or operating system display windows.

11. A system as claimed in claim 9 or 10, wherein the style sheet transformation code is separate from underlying functionality of the controller and other components involved in media output.

12. A system as claimed in any preceding claim, wherein the system further comprises a live streaming interface connected to the content capture component.

13. A system as claimed in claim 12, further comprising a scheduling component connected to the live stream circuit for activating the live stream circuit and setting recording parameters for both the live stream circuit and the capture component.

14. A computer program product comprising software code for performing operations of a multimedia management system of any preceding claim when executing on a digital system.

15. A multimedia management system substantially as described with reference to the drawings.
IE2003/0839A 2003-11-10 Multimedia management IE84144B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
IE2003/0839A IE84144B1 (en) 2003-11-10 Multimedia management

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
IE IRELAND 08/11/2003 2002/0870
IE20020870 2002-11-08
IE2003/0839A IE84144B1 (en) 2003-11-10 Multimedia management

Publications (2)

Publication Number Publication Date
IE20030839A1 IE20030839A1 (en) 2005-05-18
IE84144B1 true IE84144B1 (en) 2006-02-22

