EP1636799A2 - Data processing system and method, computer program product and audio/visual product - Google Patents

Data processing system and method, computer program product and audio/visual product

Info

Publication number
EP1636799A2
Authority
EP
European Patent Office
Prior art keywords
data
visual
asset
menu
assets
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP04742904A
Other languages
German (de)
French (fr)
Inventor
Stuart Antony Green
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zootech Ltd
Original Assignee
Zootech Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from GB0313216A (GB2402755B)
Priority claimed from US10/457,265 (US20040250275A1)
Application filed by Zootech Ltd filed Critical Zootech Ltd
Publication of EP1636799A2

Classifications

    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11BINFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/02Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
    • G11B27/031Electronic editing of digitised analogue information signals, e.g. audio or video signals
    • G11B27/034Electronic editing of digitised analogue information signals, e.g. audio or video signals on discs
    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11BINFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/10Indexing; Addressing; Timing or synchronising; Measuring tape travel
    • G11B27/102Programmed access in sequence to addressed parts of tracks of operating record carriers
    • G11B27/105Programmed access in sequence to addressed parts of tracks of operating record carriers of operating discs
    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11BINFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/10Indexing; Addressing; Timing or synchronising; Measuring tape travel
    • G11B27/34Indicating arrangements 
    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11BINFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B2220/00Record carriers by type
    • G11B2220/20Disc-shaped record carriers
    • G11B2220/25Disc-shaped record carriers characterised in that the disc is based on a specific recording technology
    • G11B2220/2537Optical discs
    • G11B2220/2562DVDs [digital versatile discs]; Digital video discs; MMCDs; HDCDs

Definitions

  • the present invention relates to a data processing system and method, a computer program product and an audio-visual product and, more particularly, to a DVD authoring system and method, a computer program product for such an authoring system and method, and a DVD product.
  • DVDs represent one of the fastest growing forms of multimedia entertainment throughout the world. Conventionally, DVDs have been used to present movies to users using extremely high quality digital audio/visual content.
  • Figure 1 shows, schematically, a typical home entertainment system 100 comprising a DVD player 102, a DVD 104 and a television 106.
  • the DVD 104 contains a number of programs and cells 108 each of which comprises corresponding digital audio-visual content 110 together with respective navigation data 112.
  • the navigation data 112 is used by a navigation engine 114 within the DVD player 102.
  • the presentation engine 116 presents the digital content 110 on a television or monitor 106 as rendered audio-visual content 118.
  • the rendered audio-visual content 118, conventionally, takes the form of a movie, or of photographic stills or text associated with that movie; so-called Bonus features.
  • a user can use a remote control 120 associated with the DVD player 102 to influence the operation of the navigation engine 114 via an infrared remote control interface 122.
  • the processing performed by the DVD player and, in particular, the navigation engine 114 is relatively simple and largely limited to responding to infrared remote control commands and retrieving and displaying, via the presentation engine 116, pre-authored or pre-determined digital audio-visual content 110. Beyond decoding and presenting the digital audio-visual content 110 as rendered visual content 118, the DVD player 102 performs relatively little real-time processing.
  • GUI: graphical user interface
  • Figure 2 depicts, schematically, a GUI 200 presented by, for example, Internet Explorer, running on the Windows 98 operating system.
  • the GUI 200 comprises an application window 202 with a menu bar 204.
  • the menu bar 204 has a number of menu items 206 to 216 that can be selected individually using a mouse and cursor or corresponding hot-keys as is well known within the art. Selecting one of the menu items 206 to 216, typically, causes a pull-down menu to be displayed.
  • Figure 3 depicts a pull-down menu 300 corresponding to the "File" menu item 206. It can be seen that the pulldown menu 300 comprises a number of further menu items, "New" 302 to "Close" 304, that can be selected to perform corresponding functions. Two of the further menu items; namely, "New" 302 and "Send" 306 invoke or produce further, respective, menus (not shown).
  • the menu items are selected and the various menus, pulldown or otherwise, are invoked in real-time, that is, the processing necessary for displaying and stepping through the various menu items presented is performed in real-time.
  • the instruction set of a microprocessor of a host computer is sufficiently sophisticated and flexible to imbue the Internet Explorer application 200 with the capability to perform the necessary calculations and manipulations to implement the display and selection of menu items in response to user commands issued in real-time.
  • the panes illustrated in figures 2 and 3 have been shown as lacking content.
  • the limitations of DVD players become even more apparent when considering providing such dynamic menus with content that can change or is dynamic.
  • the content displayable within a pane might be video or stills of digital images such as photographs or the like.
  • the process of producing DVD data is known as authoring.
  • the process of authoring comprises creating and/or marshalling the content for the DVD data and, optionally, encoding/formatting that data, together with corresponding navigational data, to allow the DVD data to be subsequently processed by a DVD player or to be output, in its pre-mastering form, in preparation for producing DVDs bearing the data.
  • Authoring is described in, for example, "Desktop DVD Authoring", ISBN 0789727528, and "DVD Production", ISBN 0240516117, both of which are incorporated herein by reference for all purposes. It will be appreciated that authoring comprises at least one of designing and creating the content of a DVD-Video title, that is, DVD-Video data.
  • a first aspect of embodiments of the present invention provides an asset authoring method comprising the steps of providing a data structure comprising data defining a menu structure having at least one menu having a respective number of menu items associated with a number of defined views of, or actions in relation to, a general visual asset; providing a visual asset; and creating, automatically, a number of visual assets using at least one of the visual assets provided and the data of the data structure; the visual assets created corresponding to respective views of the defined views of the visual asset provided or reflecting respective actions of the defined actions in relation to the visual asset provided.
  • embodiments of the present invention allow menus, such as, for example, pull-down menus or other menus, associated with viewing content to be realised on a DVD player, that is, the embodiments allow the real-time display of menus and invocation of menu items performed by computers to be at least emulated.
  • the number of visual assets comprises at least one visual asset and, preferably, more than one visual asset.
  • a further aspect of embodiments of the present invention provides a method of authoring visual content; the method comprising the step of creating a video sequence comprising data to display a progressively expanding menu comprising a number of menu items following invocation of a selected menu item or a user-generated event.
  • a still further aspect of embodiments of the present invention provides a method of authoring visual content; the method comprising the step of creating a video sequence comprising data to display a progressively contracting menu comprising a number of menu items following invocation of a selected menu item or a user generated event.
  • figure 1 shows a home entertainment system
  • figure 2 shows a GUI for Internet Explorer
  • figure 3 depicts a pull-down menu of the GUI
  • figure 4 shows, schematically, an asset authoring process according to an embodiment of the present invention
  • figure 5 depicts a data structure for defining a menu according to an embodiment
  • figure 6 shows, schematically, video sequences for expansion and contraction of pulldown menus according to embodiments of the present invention
  • figure 7 illustrates data for a pull-down menu to be used in the video sequences of figure 6;
  • figure 8 illustrates the generation of sub-picture menu data for the pull-down menus used in the video sequences of figure 6;
  • figure 9 depicts the display of the frames of the video sequences together with the schematic overlay of the sub-picture menu data
  • figure 10 shows the relationship between a sub-picture having menu item overlays and a corresponding video sequence or frame;
  • figure 11 illustrates the frames of a video sequence for the expansion and contraction of the further menu;
  • figure 12 illustrates the generation of a further menu item according to an embodiment
  • figure 13 shows menu data for generating a video sequence showing the progressive expansion or contraction of the further menu shown in figure 12;
  • figure 14 depicts the relationship between a graphical overlay of a sub-picture to a corresponding menu item of the further menu shown in figure 12;
  • figure 15 shows a first flowchart for generating a visual asset according to an embodiment
  • figure 16 shows a second flowchart for generating a visual asset according to an embodiment.
  • Figure 4 shows an authoring process 400 according to an embodiment of the present invention for automatically producing a number, M, of sets of assets 402 to 406 from corresponding assets 408 to 412 and a data structure 414 defining a menu structure having a number, N, of menu items 416 to 420.
  • the menu items, or only selected menu items if appropriate, have associated data 422 to 426 representing a graphical manifestation or representation of the menu items.
  • the menu items, or only selected menu items have associated data processing operations that perform, or at least provide access to functions that can perform, data processing operations or manipulations upon, or in relation to, a notional or generalised asset.
  • the notional or generalised assets are the provided assets 408 to 412 that are used to produce the sets of assets 402 to 406.
  • the sets of assets 402 to 406 comprise respective assets.
  • the first set of assets 402 comprises several visual assets 434 to 438 that were produced, from or in relation to the first asset 408, by applying appropriate or selected operations of the available operations 428 to 432 according to the menu structure, that is, according to whether or not a menu item is intended to be available for that first asset 408.
  • the assets 434 to 438 created are shown, for the purpose of a generalised description, as having been created from menu items that have operations A, B and C (not shown) associated with them.
  • the operations A, B and C will be operations associated with corresponding menu items selected from the N illustrated menu items.
  • the second set of assets 404 comprises several assets 440 to 444 that were produced, from or in relation to the second asset 410, by applying appropriate or selected operations of the available operations 428 to 432 according to the menu structure, that is, according to whether or not a menu item is intended to be available for that second asset 410.
  • the assets 440 to 444 created are shown, for the purpose of a generalised description, as having been created from menu items that have operations P, Q and R (not shown) associated with them.
  • the operations P, Q and R are associated with corresponding menu items selected from the N illustrated menu items.
  • Navigational data 452 to 468 is also created for each asset 434 to 450.
  • the navigational data is arranged to allow the navigation engine 114 of the DVD player 102 to obtain the next image or video sequence, that is, created asset, according to the menu structure.
  • the navigational data associated with that first asset 434 may comprise links to the second asset 436, which might represent an image or video sequence showing that image together with the progressive display of a number of menu options associated with that image.
  • the menu options might relate to image processing techniques such as "posterising" the image.
  • the links associated with the second asset 436 might comprise a link to a third asset (not shown) representing the image together with the progressive closing or contraction of the menu options previously displayed via the first asset 434 and a link to a fourth asset showing a "posterised” version of the original image shown in the original asset 408.
  • the assets might represent stills or video sequences.
  • the assets that relate to the menu options or menu items are video sequences that show the progressive expansion or contraction of the menus.
  • the assets might comprise two portions with a first portion representing a video sequence arranged to display or hide the dynamic menu and a second portion representing a still image or a further video sequence that is arranged to loop, that is, that is arranged to repeat once the menu has been displayed or hidden.
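  • By way of illustration only, the relationship described above between the menu-structure data structure 414, the operations 428 to 432 and the generated sets of assets 402 to 406 could be modelled along the lines of the following Python sketch; all class, function and variable names are hypothetical and are not taken from the description.

        # Hypothetical sketch of the figure 4 authoring process: menu items carry
        # graphical data and an operation; applying the applicable operations to a
        # provided asset yields a set of created assets plus simple navigation links.
        class MenuItem:
            def __init__(self, label, graphic, operation):
                self.label = label          # menu item label, e.g. "Posterise"
                self.graphic = graphic      # graphical manifestation (data 422 to 426)
                self.operation = operation  # callable applied to a provided asset, or None

        def author_asset_set(provided_asset, menu_items):
            """Create one derived asset per applicable menu item (assets 434 to 450)."""
            created = []
            for item in menu_items:
                if item.operation is not None:  # menu item applies to this asset
                    created.append({"menu_item": item.label,
                                    "asset": item.operation(provided_asset)})
            return created

        def link_assets(created):
            """Sketch of navigation data (452 to 468): each asset points to the next."""
            links = {}
            for index, entry in enumerate(created):
                nxt = created[(index + 1) % len(created)]
                links[entry["menu_item"]] = nxt["menu_item"]
            return links
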
  • Figure 5 illustrates graphically a possible menu structure definition in the form of a tree 500. The data structure will be described with reference to a menu structure to perform image-processing techniques on a number of images.
  • the tree 500 comprises a root node 502 at which an asset might be displayed in its original or unadulterated form. Selecting "OK", for example, using the remote control 120, might be intended to cause a transition to a node for displaying the menu options available at that level in the menu structure. It can be appreciated from the example that invoking the "OK" button or the like is intended to produce a pull-down menu having four menu items 504 to 510. In the example, the four menu items are "Action" 504, "Zoom" 506, "Pan" 508 and "Effect" 510.
  • an originally displayed asset is also intended to comprise a pull-down menu showing those options, the menu options having been progressively displayed via a corresponding video sequence.
  • sub-picture data is intended to be generated and shown as graphic overlays for each of the menu items "Action" 504, "Zoom" 506, "Pan" 508 and "Effect" 510.
  • the menu structure is defined such that selecting the first menu option 504 produces a further menu comprising a number of sub-menu items.
  • the sub-menu items are "First" 512, "Last" 514, "Next" 516, "Previous" 518, "Thumbs" 520 and "Category" 522.
  • the menu structure is arranged to have sub-picture graphic overlays associated with each of the options that can be used to select the options. Video assets are intended to be produced that give effect to operations associated with these options 512 to 522.
  • Selecting the "First" 512 option is intended to display a first image of a number of images. Therefore, an asset displaying that first image is intended to be produced.
  • Selecting the second option, "Last" 514, is intended to display the last image of the number of images. Therefore, an asset for displaying that image will be produced using the last image.
  • the "Next" 516 and "Previous" 518 menu items are intended to display next and previous images respectively. Suitably, video assets giving effect to the display of the next and previous images are intended to be created.
  • the option "Thumbs" 520 is intended to display thumbnail views of all, or selectable, images within a category or set of images.
  • the menu structure might be defined such that the second menu item, "Zoom" 506, produces a further menu having four zooming options; namely, "+" 528, "-" 530, "100%" 532 and "200%" 534, which, when selected, are intended to produce zoomed versions of an original asset.
  • giving effect to invocations of these menu items 528 to 534 will require corresponding video assets, firstly, to display the menu options and, secondly, to give effect to the transition from an initial, or starting, view of an asset to a zoomed view of the asset together with corresponding navigation data to allow the navigation engine 114, in conjunction with the presentation engine 116, to retrieve and render the video assets showing such zooming operations.
  • a sub-picture having appropriately positioned graphical overlays that are selectable and maskable will also be desirable.
  • the "Pan” 508 menu option produces a further sub-menu comprising four menu items or options 530 to 542 that are arranged to allow a user to pan around an image. Accordingly, for each original asset, various video assets need to be defined that support such panning.
  • the final menu option, "Effect” 510 is arranged to produce a further submenu comprising three menu items 544 to 548 that apply image processing techniques or effects to the original assets.
  • the illustrated menu items are "Colour” 544, "Black & White” 546 and "Posterise” 548, which require video assets to present the original assets in colour, in black and white and in a posterised forms respectively. Again, sub-picture image data would also be required to support selection of the menu items 544 to 548.
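  • Purely as an illustrative aid, the menu tree 500 of figure 5 could be captured in a nested data structure such as the Python dictionary sketched below; the operation names are hypothetical, and the labels of the four "Pan" options are assumed since they are not named above.

        # Hypothetical encoding of the figure 5 menu tree: each key is a menu item,
        # each value is either a nested sub-menu or the name of the operation to be
        # applied to an original asset when that item is invoked.
        menu_tree = {
            "Action": {"First": "show_first", "Last": "show_last",
                       "Next": "show_next", "Previous": "show_previous",
                       "Thumbs": "show_thumbnails", "Category": "show_category"},
            "Zoom": {"+": "zoom_in", "-": "zoom_out",
                     "100%": "zoom_100", "200%": "zoom_200"},
            "Pan": {"Left": "pan_left", "Right": "pan_right",   # labels assumed
                    "Up": "pan_up", "Down": "pan_down"},
            "Effect": {"Colour": "to_colour", "Black & White": "to_greyscale",
                       "Posterise": "posterise"},
        }

        def required_assets(tree, path=()):
            """Enumerate every (menu path, operation) pair for which an asset is needed."""
            for label, value in tree.items():
                if isinstance(value, dict):
                    yield from required_assets(value, path + (label,))
                else:
                    yield path + (label,), value
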
  • the assets produced, or intended to be produced, to give effect to traversing the menu structure and invoking menu items can be still images or video sequences representing a dynamic transition from one view of an asset to another view of an or the asset, representing a transition between views of an asset, or a transition to an asset.
  • Referring to figure 6, there is shown schematically an authoring process 600 for producing a pair of video sequences 602 and 604 comprising frames that illustrate the expansion and contraction of a pull-down menu, assuming that the menu structure and menu items are arranged to define a pull-down menu.
  • the first video sequence 602 has been shown, for illustrative purposes only, as comprising five frames 606 to 614.
  • the first frame 606 is a schematic representation of the image shown in figure 2. In the interests of clarity, only the menu bar 204 and window 202 of the image of figure 2 have been illustrated in each frame.
  • the second frame 608 is shown with a portion 616 of the pull-down menu 300 having been displayed.
  • the third and fourth frames 610 and 612 respectively illustrate progressively larger portions 618 and 620 of the pull-down menu 300.
  • the final frame 614 illustrates the complete pull-down menu 300 and corresponds to the image shown in figure 3.
  • the progressively increasing or expanding portions 616 to 620 of the pull-down menu 300 are illustrated as expanding on a per menu item basis, that is, each portion contains a greater number of menu items as compared to a previous portion.
  • the pull-down menu 300 has been shown as comprising four menu items rather than the full 13 menu items shown in figure 3.
  • a pull-down menu may present any predetermined number of menu items.
  • the progressive expansion and contraction of the menus corresponds to or emulates revealing or hiding of menus within a Windows context.
  • visual assets 606 to 614 will take the form of a number of frames, that is, video sequences.
  • visual asset 606 will, in practice, represent a video sequence comprising a number of frames that progressively displays the first portion 616 of the menu over a predetermined period of time. It will be appreciated that the number of frames constituting such a video sequence might be a function of the desired display speed for the menu.
  • Navigation data 622 to 628 provides links between video assets and allows the navigation engine to retrieve the first video sequence or set of video assets or sequences 602 from the DVD 104 and to cause the presentation engine 116 to display the first video sequence using that retrieved data.
  • the second video sequence 604 of figure 6 has also been shown, for illustrative purposes only, as comprising five frames 630 to 638.
  • the first frame 630 is a schematic representation of the image shown in figure 3, in which the pull-down menu 300 is in its fully expanded form.
  • the second frame 632 is shown with a smaller portion 640 of the pull-down menu 300 having been displayed. It can be seen that the third and fourth frames 634 and 636 respectively display progressively smaller portions 642 and 644 of the pull-down menu 300.
  • the final frame 638 illustrates the complete pull-down menu 300 in its most contracted form and corresponds to the image 200 shown in figure 2.
  • the progressively decreasing or contracting portions 640 to 644 of the pull-down menu 300 are illustrated, again, as contracting on a per menu item basis, that is, each portion contains progressively fewer menu items as compared to a previous portion.
  • Navigation data 646 to 652 linking each video asset will also be created to allow the navigation engine 114 to retrieve the asset and cause the presentation engine 116 to display that video asset.
  • each video asset 630 to 638 will, in practice, represent a video sequence and that the embodiment described above has been illustrated using frames rather than sequences for the purposes of clarity of illustration only.
  • video content panes of the video sequences 602 and 604 have been shown "empty" for the purposes of clarity only.
  • the content panes will contain content such as, for example, image data or video sequence data.
  • although the pull-down menu has been described with reference to expanding and contracting on a per menu item basis, embodiments can be realised in which any predetermined expansion or contraction step size is used. It will be appreciated that smaller or greater step sizes might affect the number of frames that are required to form the first 602 and second 604 video sequences or the smoothness of the display of the pull-down menu 300. It can be appreciated that rendering such pre-authored video sequences as the first 602 and second 604 video sequences enables pull-down menus to be provided, or at least emulated, using DVD players, which increases the richness of the user interfaces for, and the user experience of, DVDs.
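  • A minimal sketch, in Python, of how the frames of the expansion sequence 602 and contraction sequence 604 might be generated on a per menu item basis is given below; the rendering callback and all other names are hypothetical and are not taken from the description.

        # Hypothetical sketch of producing the figure 6 sequences: each successive
        # frame of the expansion sequence reveals one more menu item; the contraction
        # sequence is simply the reverse ordering of those frames.
        def expansion_frames(background, menu_items, draw_menu):
            """Frames 606 to 614: frame n shows the first n menu items of the pull-down menu."""
            frames = [background]                      # frame with no pull-down menu shown
            for count in range(1, len(menu_items) + 1):
                visible = menu_items[:count]           # portions 616 to 620, then the full menu
                frames.append(draw_menu(background, visible))
            return frames

        def contraction_frames(background, menu_items, draw_menu):
            """Frames 630 to 638: the expansion frames played in reverse order."""
            return list(reversed(expansion_frames(background, menu_items, draw_menu)))
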
  • Figure 7 shows, schematically, the graphical data 700 that can be used to produce a progressively expanding or contracting pull-down menu 300 according to an embodiment.
  • the data 700 comprises 13 pull-down menu portions 702 to 726. These portions 702 to 726 are used to produce the video sequences 602 and 604 described above with respect to figure 6.
  • a complete frame of video may comprise both the pull-down menu portions or complete menu with or without the "application" window, such as that displayed in figure 2, together with other data or information such as, for example, content for the application window and/or a background on which the application window sits, if it does not occupy the whole of the 720x480 or 720x576 pixels of the DVD NTSC and the DVD PAL/SECAM pixel resolutions, respectively.
  • the data representing the video sequences 602 and 604, stored on the DVD 104 will also be accompanied by sub-picture data, carried by at least one of the thirty-two available sub-picture streams.
  • the sub-picture data is used to produce graphical overlays or highlights for selecting the various menu items of the pull-down menu.
  • the sub-picture data is used to produce a bitmap image bearing graphical overlays that are displayed on top of, or otherwise combined with, corresponding video sequences.
  • the manner and position of display of the graphical elements are controlled or determined using corresponding sub-picture buttons with associated highlights that are selectively operated as masks to hide or reveal an associated graphical overlay.
  • Referring to figure 8, there is shown schematically the relationship 800 between a selected number of graphical overlays 802 to 808 and corresponding portions 802' to 808' of the pull-down menu 300.
  • the sub-picture buttons or masks associated with each graphical overlay 802 to 808 are arranged such that, when invoked in conjunction with the video sequence displaying the pull-down menu, the sub-picture bitmaps selectively highlight or overlay the corresponding portions 802' to 808' of the pull-down menu 300.
  • the presentation engine 116 under the control of the navigation engine 114, displays the appropriate sub-picture graphical overlay 802 to 808 in response to user commands received from the remote control 120 using the sub-picture buttons or masks.
  • figure 9 illustrates the relationship 900 between three central graphical overlays 902 to 906 of a sub-picture (not shown) and their corresponding menu items 902' to 906'.
  • the navigation engine 114, in response to an "up" or "down" user command received from the IR control 120, will cause the presentation engine 116 to display a selected overlay 902 or 906 to highlight the "Page Setup" 902' or "Print Preview" 906' menu items respectively by masking the appropriate overlays that are not required to be displayed.
  • Referring to figure 10, there is shown the relationship 1000 between a sub-picture 1002 containing graphical overlays 1004 and a video sequence or frame containing the pulldown menu 300 in its fully expanded state.
  • the sub-picture is notionally divided into a number of regions (not shown) known as buttons that can be selectively displayed in response to user actions, that is, commands received from the IR control 120. These buttons are used to reveal or hide a number of highlight regions that are aligned with respective menu options of the pull-down menu 300.
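  • For illustration only, the relationship between sub-picture buttons, their highlight regions and the remote control commands might be modelled as in the Python sketch below; the names and the command handling are assumptions and do not form part of the description.

        # Hypothetical model of the sub-picture 1002 of figure 10: one button region per
        # menu item; only the highlight of the currently selected button is revealed,
        # the remaining overlays being masked.
        class SubPictureButton:
            def __init__(self, label, rect, up=None, down=None):
                self.label = label   # e.g. "Save As"
                self.rect = rect     # (x, y, width, height), aligned with the menu item
                self.up = up         # label of the button selected on an "up" command
                self.down = down     # label of the button selected on a "down" command

        def handle_command(buttons, selected, command):
            """Return the label of the highlight to reveal after a remote control command."""
            current = buttons[selected]
            if command == "up" and current.up is not None:
                return current.up
            if command == "down" and current.down is not None:
                return current.down
            return selected          # no change; all other overlays remain masked
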
  • Figure 11 illustrates the process 1100 of successive display of the frames constituting the first and second video sequences, according to the direction of the time lines 1102 and 1104.
  • the relationship between the sub-picture graphical overlay 804 and the corresponding "Save As" menu item 804' of the pull-down menu 300 can be more easily appreciated.
  • Figure 12 illustrates a view 1200 of the application 200 with one menu item 1202 of the pull-down menu 300 having been invoked. It can be appreciated that this invocation has produced a further menu 1204.
  • the further menu 1204 is progressively displayed in a left-to-right manner in a similar process to the progressive display of the pull-down menu 300 itself.
  • the authoring process to produce the data used in producing a video sequence having such a left-to-right menu needs to produce data 1300 such as, for example, that shown in figure 13.
  • the left-to-right menu data 1300 comprises a number of portions 1302 to 1308 of the further menu 1204. Each portion 1302 to 1308 is progressively bigger or smaller than a succeeding or preceding portion respectively.
  • also shown are sub-picture graphical overlays 1310 to 1314 that correspond to the respective menu items 1310' to 1314' of the further menu 1204.
  • the data shown in figure 13 is used to produce video sequences for progressively expanding or contracting the further menu 1204 in a manner that is substantially similar to the process used to produce the first 602 and second 604 video sequences shown in figure 6.
  • Figure 14 illustrates, with greater clarity, the relationship 1400 between the further menu 1204 and the sub-picture graphical overlay 1310 for the "Page by E-mail" 1310' menu item.
  • the pull-down menu 300 has been invoked, followed by the selection of the "Send" menu item 1202, which has caused the display of the left-to-right menu 1204 and the corresponding sub-picture graphical overlay 1310.
  • the start frame 1402 and end frame 1404 are shown, together with intermediate frames 1406 to 1410, as constituting expansion and contraction video sequences according to the direction of the time lines 1412 and 1414 respectively.
  • the navigation data associated with the first video sequence 602 will include a link to the video sequence for expanding the further menu 1204 to give effect to that expansion should the "Send" menu item 1202 be invoked.
  • the process of marshalling or producing a visual asset for displaying and using dynamic menus involves producing video sequences for both the expansion and contraction, that is, the display and hiding, of the pull-down menu together with navigation data linking the frames and/or video sequences, according to planned or predetermined user operations and sub-picture graphical overlay data and navigation data for controlling the display of the sub-picture graphical overlays.
  • Referring to figure 15, there is shown a flowchart 1500 for producing visual assets according to an embodiment of the present invention.
  • an original visual asset is provided or obtained.
  • a data structure comprising a definition of a menu structure together with associated menus and menu items and operations related to those menu items is defined at step 1504. Such a data structure has been described above in relation to figures 4 and 5.
  • An asset is created, at step 1506, using appropriate menu items and their related operations as well as the originally provided video asset. It is determined at step 1508 whether or not all assets relating to the originally provided asset have been created. If the determination at step 1508 is negative, processing returns to step 1506 where a further asset is created, again, according to the needs or requirements defined by the menu structure defined in step 1504.
  • Having created the necessary video assets from an original asset, navigation data linking the assets according to an intended navigational strategy, which is, again, defined by the menu structure, is created at step 1510.
  • Figure 16 shows a flowchart 1600 that illustrates the steps undertaken in steps 1506 and 1508 of figure 15 in greater detail.
  • the menu items applicable to a provided video asset are identified and counted at step 1602.
  • a count, N, is set to 1 at step 1604.
  • the corresponding operation, such as, for example, one of the operations 428 to 432 shown in figure 4, is identified at step 1606.
  • a copy of the originally provided video asset is processed using the appropriate operation identified at step 1606 to create at least a portion, or a first portion, of an intended Nth video asset.
  • the graphical data associated with the Nth menu item is processed to produce a second portion of the Nth video asset.
  • the complete or whole of the Nth video asset is created using at least one of the first and second portions at step 1612. It is determined, at step 1614, whether there are more menu items to be processed for which corresponding video assets, derived from the originally provided video asset, are required. If the determination is positive, processing continues to step 1616 where N is incremented and control passes to step 1606, where the next menu item is considered. If the determination at step 1614 is negative, processing terminates or, more accurately, processing returns to step 1508 of figure 15. It will be appreciated by those skilled in the art that the menu structure defined in the data structure might comprise sub-menus. Therefore, the process of producing the assets for such a complex menu structure might require nested or recursive applications of the steps shown in the flowcharts.
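  • The loop described by flowcharts 1500 and 1600 could be sketched, purely illustratively, as follows in Python; the callables operations, render_graphic and combine are hypothetical stand-ins for the operations 428 to 432, the graphical data of the menu items, and the combination of the two portions respectively.

        # Hypothetical sketch of figures 15 and 16: for each menu item applicable to a
        # provided asset, process a copy of the asset with the item's operation (first
        # portion), render the item's graphical data (second portion), and combine the
        # two portions into the Nth video asset.
        def create_assets(original, menu_items, operations, render_graphic, combine):
            applicable = [m for m in menu_items if m in operations]     # step 1602
            assets = []
            for n, item in enumerate(applicable, start=1):              # n mirrors the count N (steps 1604 and 1616)
                operation = operations[item]                            # step 1606
                first_portion = operation(original)                     # step 1608
                second_portion = render_graphic(item)                   # step 1610
                assets.append(combine(first_portion, second_portion))   # step 1612
            return assets

        def create_navigation(asset_labels, links):
            """Step 1510 (sketch): navigation data mapping each asset to its successors."""
            return {label: links.get(label, []) for label in asset_labels}
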
  • the pull-down menus can be implemented in any context.
  • the "application" might be intended to step through an album of photographs or video sequences and the menu items might control the display of those photographs or video sequences.
  • the pull-down menu stems from a corresponding menu bar item.
  • the pull-down menu can be arranged to appear, at a predetermined screen position, in response to a user-generated event.
  • Embodiments can be realised in which, for example, modal or modeless dialogue boxes, or other GUI elements, are emulated via corresponding video sequences.
  • the video assets created in the above embodiments might be created using an image processing system or multimedia authoring system, such as, for example, Macromedia Flash, Macromedia Director or AfterEffects, by which an author can create the assets.
  • the navigation data associated with such created assets might use the invention described in UK patent application no. GB 0309814.2 (filed 30 April 2003 and claiming priority from UK Patent application no. GB 0209790.5) and US patent application serial number 60/383,825, the contents of which are incorporated herein for all purposes by reference and shown in appendix A.
  • the term "dynamic menus" encompasses the type of menus provided by computer systems as described above. These menus can be displayed, that is, rendered, in a number of ways: they can be progressively displayed, progressively contracted, displayed and removed substantially immediately, displayed in a pop-up/pop-down manner, or displayed in any other way.
  • the resulting data might be merely a copy of the provided data or data that is the result of processing the originally provided data.
  • the case in which the resulting data is merely a copy of the originally provided data might occur when the corresponding menu item is used to navigate to the copy of the originally provided data, that is, the menu option is used for navigational purposes rather than for modifying, according to an associated option, the originally provided data.
  • the operation associated with the menu option would be navigation data/commands that invoke the playing of the provided asset.
  • the embodiments of the present invention are preferably implemented, where appropriate, using software.
  • the software can be stored on or in various media such as, for example, magnetic or optical discs or in ROMs, PROMs and the like.
  • the phrase "one or more" followed by, for example, a noun comprises "one [noun]" and "two or more [nouns]", that is, it comprises "at least one [noun]", and vice versa. Therefore, the phrase "one or more video sequences" comprises one video sequence and, similarly, the phrase "one or more original assets" comprises one original asset, as well as both extending to "a plurality of video sequences" and "a plurality of original assets" respectively.
  • the present invention relates in general to a method and apparatus for authoring complex audiovisual products.
  • a feature movie or television program typically has a straightforward linear navigational sequence of individual scenes.
  • it is now desired to develop new categories of audiovisual products which have a much more complex navigational structure, such as a movie with many scene choices or different movie endings, and/or which have a large number of individual scenes, such as an interactive quiz game with say one thousand individual quiz questions.
  • the present invention relates to authoring of audiovisual content into a form compliant with a specification for DVD-video and able to be recorded on an optical disc recording medium.
  • An optical disc is a convenient storage medium for many different purposes.
  • a digital versatile disc (DVD) has been developed with a capacity of up to 4.7Gb on a single-sided single-layer disc, and up to 17Gb on a double-sided double-layer disc.
  • DVD-video is particularly intended for use with pre-recorded video content, such as a motion picture.
  • DVD discs are becoming popular and commercially important.
  • a DVD-video disc is played using a dedicated playback device with relatively simple user controls, and DVD players for playing DVD-video discs are becoming relatively widespread. More detailed background information concerning the DVD-video specification is available from DVD Forum at www.dvdforum.org.
  • although DVD-video discs and DVD-video players are becoming popular and widespread, at present only a limited range of content has been developed.
  • the process of authoring content into a DVD-video compatible format is relatively expensive and time consuming.
  • the flexibility and functions allowed in the DVD-video specification are compromised by the expensive and time consuming authoring task. Consequently, current DVD-video discs are relatively simple in their navigational complexity. Such simplicity can impede a user's enjoyment of a DVD-video disc, and also inhibits the development of new categories of DVD-video products.
  • An aim of the present invention is to provide a convenient and simple method and apparatus for authoring an audio-visual product.
  • An aim of the preferred embodiments of the present invention is to provide a method and apparatus able to create an audio-visual product having a complex navigational structure and/or having many individual content objects, whilst reducing the time required for authoring and minimising the need for highly skilled operators.
  • Another preferred aim is to provide an authoring tool which is intuitive to use and is highly flexible.
  • An aim of particularly preferred embodiments of the invention is to allow efficient creation of audio-visual products such as DVD-video products that run on commonly available DVD-video players.
  • an authoring method for use in creating an audiovisual product comprising the steps of: defining a plurality of components, the components implicitly representing functional sections of audiovisual content with respect to one or more raw content objects, and a plurality of transitions that represent movements between the plurality of components; expanding the plurality of components and the plurality of transitions to provide a set of explicitly realised AV assets and an expanded intermediate datastructure of nodes and links, where each node is associated with an AV asset of the set and the links represent movement from one node to another; and creating an audiovisual product in a predetermined output format, using the AV assets and the expanded intermediate datastructure of the nodes and the links.
  • an authoring method for use in creating a DVD-video product comprising the steps of: creating a plurality of components representing parameterised sections of audiovisual content, and a plurality of transitions representing movements between components; expanding the plurality of components and the plurality of transitions to provide a set of AV assets and an expanded datastructure of nodes and links, where each node is associated with an AV asset of the set and the links represent movement from one node to another; and creating a DVD-video format datastructure from the AV assets, using the nodes and links.
  • an authoring method for use in creating an audiovisual product according to a DVD-video specification comprising the steps of: generating a set of AV assets each comprising a video object, zero or more audio objects and zero or more sub-picture objects, and an expanded datastructure of nodes and links, where each node is associated with one AV asset of the set and the links represent navigational movement from one node to another; and creating a DVD-video format datastructure from the set of AV assets, using the nodes and links; the method characterised by the steps of: creating a plurality of components and a plurality of transitions, where a component implicitly defines a plurality of AV assets by referring to a presentation template and to items of raw content substitutable in the presentation template, and the plurality of transitions represent navigational movements between components; and expanding the plurality of components and the plurality of transitions to generate the set of AV assets and the expanded datastructure of nodes and links.
  • according to a further aspect of the present invention there is provided a recording medium having recorded thereon computer implementable instructions for performing any of the methods defined herein.
  • Figure 1 is an overview of an authoring method according to a preferred embodiment of the present invention
  • Figure 2 is a schematic diagram showing a simple abstraction of a desired audiovisual product
  • Figure 3 shows in more detail a component used as part of the abstraction of Figure 2;
  • Figure 4 shows an example prior art authoring method compared with an example preferred embodiment of the present authoring method
  • Figure 5 shows another example embodiment of the present authoring method using components and transitions
  • Figure 6 shows the example of Figure 5 in a tabular format
  • Figure 7 is an overview of a method for evaluating components and transitions
  • Figure 8 shows evaluation of components in more detail
  • Figure 10 shows a portion of an expanded datastructure during evaluation of components and transitions
  • Figure 11 is an overview of a preferred method for creating DVD-video structures from an expanded datastructure
  • Figure 12 shows a step of creating DVD video structure locations in more detail
  • Figure 13 shows a step of creating DVD-video compatible datastructures in more detail.
  • Figure 1 shows an overview of an authoring method according to a preferred embodiment of the present invention.
  • the present invention is applicable when authoring many types of audiovisual products, and in particular when complex navigational structure or content are involved.
  • the present invention is applicable to authoring of video on demand products delivered remotely from a service provider to a user, such as over a computer network or other telecommunications network.
  • the present invention is especially useful in authoring interactive products, where user choices and responses during playback of the product dictate navigational flow or content choices.
  • the present invention is particularly suitable for use in the authoring of an audiovisual product compliant with a DVD-video specification.
  • the audiovisual product is suitably recorded onto a recording medium such as an optical disk.
  • the DVD-video specification defines a series of data objects that are arranged in a hierarchical structure, with strict limits on the maximum number of objects that exist at each level of the hierarchy.
  • the resultant audiovisual product will play on commonly available DVD players.
  • audiovisual content is considered in terms of audio-visual assets (also called AV assets or presentation objects).
  • each AV asset contains at least one video object, zero or more audio objects, and zero or more sub-picture objects. That is, a section of video data is presented along with synchronised audio tracks and optional sub-picture objects.
  • the current DVD-video specification allows up to eight different audio tracks (audio streams) to be provided in association with up to nine video objects (video angle streams).
  • the video streams represent different camera angles
  • the audio streams represent different language versions of a soundtrack such as English, French, Arabic etc.
  • only one of the available video and audio streams is selected and reproduced when the DVD-video product is played back.
  • the current specification allows up to thirty-two sub-picture streams, which are used for functions such as language subtitles. Again, typically only one of the sub-picture streams is selected and played back, to give for example a movie video clip with English subtitles from the sub-picture stream reproduced in combination with a French audio stream.
  • this relatively simple combination of video, audio and sub-picture streams requires a high degree of co-ordination and effort during authoring, in order to achieve a finished product such as a feature movie.
  • due to the laborious and expensive nature of the authoring process there is a strong disincentive that inhibits the development of high-quality audiovisual products according to the DVD-video specification. There is then an even stronger impediment against the development of audiovisual products with complex navigational flow or using high numbers of individual raw content objects.
  • the authoring method of the present invention is implemented as a program, or a suite of programs.
  • the program or programs are recorded on any suitable recording medium, including a removable storage such as a magnetic disk, hard disk or solid state memory card, or as a signal modulated onto a carrier for transmission on any suitable data network, such as the internet.
  • the authoring method is suitably performed on a computing platform, ideally a general purpose computing platform such as a personal computer or a client-server computing network.
  • the method may be implemented, wholly or at least in part, by dedicated authoring hardware.
  • the authoring method of the preferred embodiment of the present invention comprises three main stages, namely: creating a high-level abstraction (or storyboard) representing functional sections of a desired audiovisual product in step 101; automatically evaluating the high-level abstraction to create a fully expanded intermediate structure and a set of AV assets in step 102; and creating an output datastructure compliant with a DVD-video specification using the expanded intermediate structure and AV assets in step 103.
  • the output datastructure is then recorded onto a recording medium, in this case being a blank optical disc, to create a DVD-video product.
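  • A highly simplified, purely illustrative outline of the three stages of Figure 1 is sketched below in Python; every identifier is invented for illustration, and the data shapes (dictionaries with "id", "content", "from", "to" and "event" keys) are assumptions rather than structures defined by the specification.

        # Hypothetical outline of steps 101 to 103: build the storyboard abstraction,
        # expand it into nodes and links, then map the result onto the output format.
        def build_abstraction(components, transitions):
            # step 101: in this sketch the storyboard is simply the two collections
            return {"components": components, "transitions": transitions}

        def expand(storyboard):
            # step 102: one node per component, one event-labelled link per transition
            nodes = [{"id": c["id"], "asset": c.get("content")}
                     for c in storyboard["components"]]
            links = [(t["from"], t["to"], t.get("event"))
                     for t in storyboard["transitions"]]
            return nodes, links

        def create_output(nodes, links):
            # step 103: stand-in for mapping nodes and links onto DVD-video structures
            return {"presentation": nodes, "navigation": links}

        def author_product(components, transitions):
            storyboard = build_abstraction(components, transitions)   # step 101
            nodes, links = expand(storyboard)                          # step 102
            return create_output(nodes, links)                         # step 103
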
  • the high-level abstraction is created by forming a plurality of components that implicitly represent functional elements of a desired DVD-video product and a set of transitions that represent movements between the components that will occur during playback.
  • FIG. 2 is a schematic diagram showing a simple abstraction of a desired audiovisual product.
  • the components 201 represent functional elements of the desired audiovisual product, where one or more portions of AV content (combinations of video clips, audio clips, etc) are to be reproduced during playback.
  • the transitions 202 indicate legitimate ways in which the product moves from one component to another during playback. In the example of Figure 2, the transitions 202 are all explicitly defined.
  • each transition 202 is associated with an event 203, which indicates the circumstances giving rise to that transition.
  • An event 203 is a triggering action such as the receipt of a user command, or the expiry of a timer, that influences movement through the sections of AV content during playback. Referring to Figure 2, starting from a particular component A, and given all possible actions, exactly one event 203 will be satisfied, allowing a transition 202 from the current component A to a next component B or C.
  • the preferred embodiment allows for three different types of component. These are an information component, a choice component and a meta-component.
  • An information component represents what will in due course become a single AV asset in the desired audiovisual product.
  • an information component simply comprises a reference to a raw content object or collection of raw content objects (i.e. raw video and audio clips) that will be used to create an AV asset in the audiovisual product.
  • as an example, an information component might refer to a welcome sequence that is displayed when the DVD-video product is played in a DVD-video player. The same welcome sequence is to be played each time playback begins. It is desired to display the welcome sequence, and then proceed to the next component.
  • An information component (which can also be termed a simple component) is used principally to define presentation data in the desired DVD-video product.
  • a choice component represents what will become a plurality of AV assets in the desired audiovisual product.
  • the choice component (alternately termed a multi-component) comprises a reference to at least one raw content object, and one or more parameters.
  • an example of a choice component is a welcome sequence in one of a plurality of languages, dependent upon a language parameter. That is, both a speaker's picture (video stream) and voice track (audio stream) are changed according to the desired playback language.
  • a choice component is used to represent a set of desired AV assets in the eventual audiovisual product, where a value of one or more parameters is used to distinguish between each member of the set.
  • a choice component represents mainly presentation data in a desired DVD-video product, but also represents some navigational structure (i.e. selecting amongst different available AV assets according to a language playback parameter).
  • a meta-component comprises a procedurally-defined structure representing a set of information components and/or a set of choice components, and associated transitions. Conveniently, a meta-component may itself define subsidiary meta-components. A meta-component is used principally to define navigational structure in the desired audiovisual product, by representing other components and transitions.
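  • The three component types described above might, for illustration, be represented by simple data classes as in the Python sketch below; the field names are hypothetical and are not taken from the description.

        # Hypothetical representation of the three component types.
        from dataclasses import dataclass
        from typing import Callable, Dict, List, Tuple

        @dataclass
        class InformationComponent:               # becomes a single AV asset
            label: str
            content_refs: List[str]               # references to raw content objects

        @dataclass
        class ChoiceComponent:                    # becomes one AV asset per parameter value
            label: str
            content_refs: List[str]
            parameters: Dict[str, List[str]]      # e.g. {"language": ["en", "fr", "ar"]}

        @dataclass
        class MetaComponent:                      # procedurally-defined structure
            label: str
            generate: Callable[[], Tuple[list, list]]   # returns (components, transitions)
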
  • Figure 3 shows a choice component or information component 201 in more detail.
  • the component is reached by following one of a set of incoming transitions 202, labelled Ti(1...n), and is left by following one of a set of outgoing transitions To(1...m).
  • the component 201 is defined with reference to zero or more parameters 301, which are used only during the authoring process. However, the component may also be defined with reference to zero or more runtime variables 302. Each variable 302 records state information that can be read and modified within the scope of each component, during playback of the audiovisual product such as in a standard DVD player. Conveniently, the component 201 is provided with a label 303 for ease of handling during the authoring process.
  • the component 201 contains references to one or more items of content 304.
  • the items of content are raw multi-media objects (still picture images, video clips, audio clips, text data, etc.) recorded in one or more source storage systems such as a file system, database, content management system, or asset management system, in any suitable format such as .gif, .tif, .bmp, .txt, .rtf, .jpg, .mpg, .qtf, .mov, .wav, .rm, .qtx, amongst many others. It will be appreciated that these raw content items are not necessarily at this stage in a format suitable for use in the DVD-video specification, which demands that video, audio and sub-picture objects are provided in selected predetermined formats (i.e. MPEG).
  • Each component 201 uses the references as a key or index which allows that item of content to be retrieved from the source storage systems.
  • the references may be explicit (e.g. an explicit file path), or may be determined implicitly, such as with reference to values of the parameters 301 and/or variables 302 (i.e. using the parameters 301 and/or variables 302 to construct an explicit file path).
  • the component 201 also comprises a reference to a template 305.
  • the template 305 provides, for example, a definition of presentation, layout, and format of a desired section of AV content to be displayed on screen during playback.
  • a template 305 draws on one or more items of content 304 to populate the template.
  • one template 305 is provided for each component 201.
  • a single template 305 may be shared between plural components 201, or vice versa.
  • a template 305 is provided in any suitable form, conveniently as an executable program, a plug-in or an active object.
  • a template is conveniently created using a programming language such as C++, Visual Basic, Shockwave or Flash, or by using a script such as HTML or Python, amongst many others.
  • a template allows a high degree of flexibility in the creation of AV assets for a DVD-video product.
  • templates already created for other products may be reused directly in the creation of another form of audiovisual product, in this case a DVD-video product.
  • creating a component 201 in this parameterised form allows a large plurality of AV assets to be represented simply and easily by a single component.
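  • To illustrate how a single parameterised component can stand for many AV assets, a Python sketch is given below in which each combination of parameter values selects content items and populates a template; the lookup and template callables, and the example file naming, are assumptions made purely for illustration.

        # Hypothetical expansion of a parameterised choice component: one realised AV
        # asset description is produced per combination of parameter values.
        from itertools import product

        def realise_choice_component(label, parameters, template, content_lookup):
            names = sorted(parameters)
            assets = []
            for values in product(*(parameters[name] for name in names)):
                binding = dict(zip(names, values))
                content = content_lookup(label, binding)    # e.g. derive a file path
                assets.append(template(content, binding))   # template renders the asset
            return assets

        # Usage sketch: a welcome sequence in three languages yields three assets.
        welcome_assets = realise_choice_component(
            "welcome",
            {"language": ["en", "fr", "ar"]},
            template=lambda content, binding: {"video": content,
                                               "language": binding["language"]},
            content_lookup=lambda label, binding: "content/%s_%s.mpg"
                                                  % (label, binding["language"]),
        )
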
  • Figure 4 compares a typical prior art method for authoring an audiovisual product against the preferred embodiment of the present invention.
  • in this example, it is desired to develop an audiovisual product which allows the user to play a simple quiz game.
  • each AV asset 401 which it is desired to present in the eventual audiovisual product must be created in advance, and navigation between the assets defined using navigation links represented by arrows 402.
  • the game involves answering a first question, and, if correct, then answering a second question.
  • the answer to each question is randomised at runtime using a runtime variable such that one of answers A, B and C is correct, whilst the other two are incorrect.
  • a large number of assets need to be created, with an even greater number of navigational links.
  • the process is relatively expensive and time consuming, and is prone to errors.
  • Figure 4b shows an abstraction, using components and transitions as described herein, for an equivalent quiz game. It will be appreciated that the abstraction shown in Figure 4b remains identical even if the number of questions increases to ten, twenty or even fifty questions, whereas the representation in Figure 4a becomes even more complex as each question is added.
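  • The point made above about Figure 4b can be illustrated with a hypothetical meta-component that generates the per-question components and transitions procedurally, so that the authored abstraction does not grow as questions are added; all names and data shapes below are invented for illustration only.

        # Hypothetical quiz meta-component: the abstraction is written once, and the
        # per-question components and transitions are produced procedurally at
        # evaluation time, however many questions the game contains.
        def quiz_meta_component(num_questions):
            components, transitions = [], []
            for q in range(1, num_questions + 1):
                components.append({"id": "question_%d" % q, "type": "choice",
                                   "parameters": {"question": [q]}})
                on_correct = "question_%d" % (q + 1) if q < num_questions else "game_won"
                transitions.append({"from": "question_%d" % q, "to": on_correct,
                                    "event": "answer_correct"})
                transitions.append({"from": "question_%d" % q, "to": "game_over",
                                    "event": "answer_wrong"})
            components.append({"id": "game_won", "type": "information"})
            components.append({"id": "game_over", "type": "information"})
            return components, transitions
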
  • Figure 5 shows another example abstraction using components and transitions.
  • Figure 5 illustrates an example abstraction for an audiovisual product that will contain a catalogue of goods sold by a retail merchant.
  • a welcome sequence is provided as an information component 201a.
  • Choice components 201b are used to provide a set of similar sections of AV content such as summary pages of product information, or pages of detailed product information including photographs or moving video, for each product in the catalogue.
  • the catalogue contains, for example, of the order of one thousand separate products, each of which will result in a separate AV asset in the desired DVD-video product.
  • Meta-components 201c provide functions such as the selection of products by category, name or by part code. These meta-components are procedurally defined.
  • Figure 6 shows a tabular representation for the abstraction shown in schematic form in Figure 5.
  • the authoring method and apparatus suitably presents a convenient user interface for creating components and transitions of the high-level abstraction.
  • a graphical user interface is provided allowing the definition of components, transitions and events, similar to the schematic diagram of Figure 5.
  • the user interface provides for the graphical creation of components such as by drawing boxes and entering details associated with those boxes, and defining transitions by drawing arrows between the boxes and associating events with those arrows.
  • a tabular textual interface is provided similar to the table of Figure 6.
  • the abstraction created in step 101 is itself a useful output.
  • the created abstraction may be stored for later use, or may be transferred to another party for further work.
  • the authoring method is used to automatically create a final audiovisual product, such as a DVD-video product, from the abstraction.
  • the method optionally includes the step 104 of checking for compliance with a DVD specification. It is desired to predict whether the resulting DVD-video product will conform to a desired output specification, in this case the DVD-video specification.
  • the DVD-video specification has a hierarchical structure with strict limits on a maximum number of objects that may exist at each level, and limits on the maximum quantity of data that can be stored on a DVD-video disc.
  • the checking step 104 is performed using the created components 201 and transitions 202.
  • the components 201 contain references to raw AV content objects 304 and templates 305, and authoring parameters 301, 302, that allow AV assets to be produced.
  • the checking step 104 comprises predicting a required number of objects at each level of the hierarchical structure, by considering the number of potential AV assets that will be produced given the possible values of the authoring parameters (i.e. authoring-only parameters 301 and runtime variables 302), and provides an indication of whether the limits for the maximum number of objects will be exceeded.
  • step 104 is performed without a detailed realisation of every AV asset, whilst providing an operator with a reasonably accurate prediction of expected conformance. If non-conformance is predicted, the operator may then take steps, at this early stage, to remedy the situation. As a result, it is possible to avoid unnecessary time and expense in the preparation of a full audiovisual product which is non-conformant.
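A rough sketch of the kind of prediction that checking step 104 might perform, assuming the same dictionary representation of components as above and using the structure limits (99 video title sets, 999 program chains per title set) quoted later in this appendix; it is not the specified algorithm, merely an indication of the idea.

```python
from math import prod

def predict_conformance(components, max_title_sets=99, max_pgcs_per_vts=999):
    # Predict how many AV assets the abstraction will yield, without
    # realising any of them, and compare with the number of structure
    # locations the hierarchical format can hold.
    predicted_assets = sum(
        prod(len(values) for values in c["authoring_params"].values())
        for c in components
    )
    capacity = max_title_sets * max_pgcs_per_vts
    return {"predicted_assets": predicted_assets,
            "capacity": capacity,
            "conformant": predicted_assets <= capacity}

report = predict_conformance(
    [{"authoring_params": {"question_index": list(range(1000))}},
     {"authoring_params": {}}])      # -> 1001 predicted assets, conformant
```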
  • In step 102, the components 201 and transitions 202 of the high level abstraction 200 are automatically evaluated and expanded to create AV assets and an intermediate datastructure of nodes and links.
  • Figure 7 shows the step 102 of Figure 1 in more detail.
  • the components 201 and transitions 202 may be evaluated in any order, but it is convenient to first evaluate the components, and then to evaluate the transitions. Ideally, any meta-components in the abstraction are evaluated first. Where a meta-component results in new components and transitions, these are added to the abstraction, until all meta-components have been evaluated, leaving only information components and parameterised choice components.
  • This expanded datastructure comprises branching logic derived from the events 203 attached to the transitions 202 (which will eventually become navigation data in the desired audiovisual product) and nodes associated with AV assets derived from the components 201 (which will eventually become presentation data in the audiovisual product).
  • the expanded datastructure is not yet in a suitable form for creating an audiovisual product in a restricted format such as a DVD-video product, since at this stage there is no mapping onto the hierarchical structure and other limitations of the DVD-video specification.
  • Figure 8 shows step 701 of Figure 7 in more detail, to explain the preferred method for evaluating the components 201.
  • each information component 201a and each choice component 201b is selected in turn in step 801.
  • Each component 201 is evaluated to provide one or more AV assets in step 802.
  • this evaluation comprises creating an AV asset from the referenced raw content objects 304.
  • this evaluation step suitably comprises evaluating a template 305 and one or more raw content objects 304 according to the authoring parameters 301/302, to provide a set of AV assets.
  • a node in the expanded datastructure is created to represent each AV asset, at step 803.
  • entry logic and/or exit logic is created to represent a link to or from each node such that each AV asset is reached or left under appropriate runtime conditions.
  • Figure 9 shows a preferred method for evaluating transitions in step 702 of Fig.7.
  • Each transition 202 is selected in any suitable order in step 901.
  • the conditions of the triggering event 203 associated with a particular transition 202 are used to create entry and/or exit logic for each node of the expanded datastructure.
  • explicit links are provided between the nodes.
  • Figure 10 is a schematic illustration of a component 201 during evaluation to create a set of nodes 110 each associated with an AV asset 120, together with entry logic 132 and exit logic 134, defining movement between one node 110 and the next.
  • the entry logic 132 and exit logic 134 reference runtime variables 302 which are available during playback (e.g. timer events, player status, and playback states), and the receipt of user commands.
  • the evaluation step consumes each of the authoring-only parameters 301 associated with the abstract components 201, such that only the runtime variables 302 and runtime actions such as timer events and user commands remain.
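The evaluation of steps 701 and 702 might be sketched as follows, again using hypothetical data shapes: each component is expanded into one node per combination of authoring-parameter values, and each transition contributes entry logic, exit logic and links between the resulting nodes.

```python
from dataclasses import dataclass, field
from itertools import product
from typing import Dict, List

@dataclass
class Node:
    asset: str                                   # the realised AV asset
    entry_logic: List[str] = field(default_factory=list)
    exit_logic: List[str] = field(default_factory=list)

def evaluate(components, transitions):
    # Step 701: evaluate each component into one node per AV asset.
    nodes: Dict[str, List[Node]] = {}
    for comp in components:
        params = comp.get("authoring_params", {})
        if params:
            names, value_lists = list(params), list(params.values())
            nodes[comp["name"]] = [
                Node(asset=f"{comp['name']}{dict(zip(names, combo))}")
                for combo in product(*value_lists)]
        else:
            nodes[comp["name"]] = [Node(asset=comp["name"])]
    # Step 702: evaluate each transition into entry/exit logic and links.
    links = []
    for tr in transitions:
        for src in nodes.get(tr["from"], []):
            for dst in nodes.get(tr["to"], []):
                src.exit_logic.append(f"on '{tr['event']}' go to {dst.asset}")
                dst.entry_logic.append(f"entered on '{tr['event']}'")
                links.append((src.asset, dst.asset, tr["event"]))
    return nodes, links
```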
  • a conformance checking step 105 may, additionally or alternatively to the checking step 104, be applied following the evaluation step 102.
  • Evaluation of the abstraction in step 102 to produce the expanded datastructure 100 allows a more accurate prediction of expected compliance with a particular output specification.
  • each node of the expanded datastructure represents one AV asset, such that the total number of AV assets and object locations can be accurately predicted, and the set of AV assets has been created, allowing an accurate prediction of the capacity required to hold these assets.
  • information about conformance or non-conformance is fed back to an operator. Changes to the structure of the product can then be suggested and made in the abstraction, to improve compliance.
  • In step 103, the expanded datastructure from step 102 is used to create an audiovisual product according to a predetermined output format, in this case by creating specific structures according to a desired DVD-video specification.
  • Figure 11 shows an example method for creation of the DVD video structures.
  • the nodes 110 in the expanded datastructure are placed in a list, such as in an order of the abstract components 201 from which those nodes originated, and in order of the proximity of those components to adjacent components in the abstraction.
  • jumps between DVD video structure locations during playback are minimised and localised, in order to improve playback speed and cohesion.
  • Each node is used to create a DVD video structure location at step 1102.
  • At step 1103, if the number of created DVD video structure locations exceeds the specified limit set by the DVD-video specification then creation is stopped at 1104, and an error reported. Assuming the number of structures is within the specified limit then DVD video compatible datastructures are created at step 1105. Finally, a DVD video disc image is created at step 1106.
  • commercially available tools are used to perform step 1106, and need not be described in detail here.
  • Step 1102 is illustrated in more detail in Figure 12.
  • variable T represents the number of a video title set, VTS (i.e. from 1 to 99), whilst variable P represents a program chain, PGC (i.e. from 1 to 999), within each video title set.
  • the nodes 110 of the expanded datastructure 100 are used to define locations in the video title sets and program chains. As the available program chains within each video title set are consumed, the locations move to the next video title set.
  • many alternate methods are available in order to optimise allocation of physical locations to the nodes of the expanded datastructure.
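One very simple allocation scheme consistent with Figure 12 is sketched below; it fills program chains in order and moves to the next video title set when the current one is consumed. Real allocation strategies may well differ, as noted above.

```python
def allocate_locations(node_count, max_title_sets=99, max_pgcs_per_vts=999):
    # Assign each node a (video title set, program chain) location in list
    # order, moving on to the next title set once the program chains of the
    # current one are consumed.
    locations = []
    for index in range(node_count):
        vts = index // max_pgcs_per_vts + 1       # T: 1..99
        pgc = index % max_pgcs_per_vts + 1        # P: 1..999
        if vts > max_title_sets:
            raise ValueError("DVD-video structure limits exceeded")
        locations.append((vts, pgc))
    return locations

print(allocate_locations(2000)[999])              # -> (2, 1)
```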
  • Step 1105 of Figure 11 is illustrated in more detail in Figure 13.
  • Figure 13 shows a preferred method for creating DVD-video compatible datastructures by placing the AV assets 120 associated with each node 110 in the structure location assigned for that node, and substituting links between the nodes with explicit references to destination locations. At step 1307 this results in an explicit DVD compatible datastructure which may then be used to create a DVD disc image. Finally, the DVD disc image is used to record a DVD disc as a new audiovisual product.
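A sketch of step 1105 under the same assumptions: assets are placed at their assigned locations and symbolic links are replaced with explicit references to destination locations. The data shapes are illustrative only.

```python
def build_dvd_structures(assets, links, locations):
    # Place each realised AV asset at its assigned (VTS, PGC) location and
    # replace the symbolic links between assets with explicit references to
    # destination locations, ready for disc-image creation.
    location_of = dict(zip(assets, locations))
    structures = [{"location": loc, "asset": asset}
                  for asset, loc in zip(assets, locations)]
    navigation = [{"from": location_of[src], "to": location_of[dst],
                   "event": event}
                  for src, dst, event in links]
    return structures, navigation

structures, navigation = build_dvd_structures(
    ["welcome", "question_1", "question_2"],
    [("welcome", "question_1", "play pressed"),
     ("question_1", "question_2", "correct answer")],
    [(1, 1), (1, 2), (1, 3)])
```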
  • the DVD authoring method and apparatus described above have a number of advantages. Creating components that represent parameterised sections of audiovisual content allows many individual AV assets to be implicitly defined and then automatically created. Repetitive manual tasks are avoided, which were previously time consuming, expensive and error-prone.
  • the authoring method and apparatus significantly enhance the range of features available in existing categories of audiovisual products such as movie presentations. They also allow new categories of audiovisual products to be produced. These new categories include both entertainment products such as quiz-based games and puzzle-based games, as well as information products such as catalogues, directories, reference guides, dictionaries and encyclopaedias.
  • the authoring method and apparatus described herein allow full use of the video and audio capabilities of DVD specifications such as DVD-video.
  • a user may achieve playback using a standard DVD player with ordinary controls such as a remote control device.
  • a DVD-video product having highly complex navigational content is readily created in a manner which is simple, efficient, cost effective and reliable.
  • An authoring method for use in creating an audiovisual product comprising the steps of:
  • the components implicitly representing functional sections of audiovisual content with respect to one or more raw content objects, and a plurality of transitions that represent movements between the plurality of components;
  • the defining step comprises defining at least one information component that comprises a reference to a raw content object.
  • the defining step comprises defining at least one choice component comprising a reference to at least one raw content object, and at least one authoring parameter.
  • choice component comprises a reference to a presentation template and a reference to at least one substitutable raw content object to be placed in the template according to the at least one authoring parameter.
  • the defining step comprises defining at least one meta-component representing a set of components and transitions.
  • the at least one meta-component is a procedurally defined representation of the set of components and transitions.
  • each transition represents a permissible movement from one component to another component.
  • triggering event is an event occurring during playback of the audiovisual product.
  • the predetermined output format is a hierarchical datastructure having limitations on a number of objects that may exist in the datastructure at each level of the hierarchy
  • the checking step comprises predicting an expected number of objects at a level and comparing the expected number with the limitations of the hierarchical datastructure.
  • the checking step comprises predicting an expected total size of the audiovisual product, and comparing the expected total size against a storage capacity of a predetermined storage medium.
  • the expanding step comprises, for each component, building one or more of the set of explicitly realised AV assets by reading and manipulating the one or more raw content objects.
  • the defining step comprises defining at least one choice component comprising a reference to a plurality of raw content objects and at least one authoring parameter;
  • the building step comprises:
  • each node represents one AV asset of the set
  • each transition is associated between first and second components, and creating the set of links comprises evaluating each transition to create one or more links, each of the links being between a node created from the first component and a node created from the second component.
  • the expanding step comprises evaluating at least one of the transitions to create exit logic associated with at least one first node, evaluating one of the components to create entry logic associated with at least one second node, and providing a link between the first and second nodes according to the entry logic and the exit logic.
  • the predetermined output format is a hierarchical datastructure having limitations on a number of objects that may exist in the datastructure at each level of the hierarchy
  • the checking step comprises predicting an expected number of objects at a level and comparing the expected number with the limitations of the hierarchical datastructure.
  • the checking step comprises predicting an expected total size of the audiovisual product, and comparing the expected total size against a storage capacity of a predetermined storage medium.
  • the AV assets each comprise a video object, zero or more audio objects, and zero or more sub-picture objects.
  • the AV assets each comprise at least one video object, zero to eight audio objects, and zero to thirty-two sub-picture objects, according to the DVD-video specification.
  • the creating step comprises creating objects in a hierarchical datastructure defined by the predetermined output format with objects at levels of the datastructure, according to the intermediate datastructure of nodes and links, and where the objects in the hierarchical datastructure include objects derived from the explicitly realised AV assets.
  • the creating step comprises creating DVD-video structure locations from the nodes of the expanded intermediate datastructure, placing the explicitly realised AV assets at the created structure locations, and substituting the links of the expanded intermediate datastructure with explicit references to the DVD-video structure locations.
  • the choice component comprises a reference to a presentation template and a reference to at least one item of substitutable content to be placed in the template according to the at least one parameter.
  • the choice component comprises at least one runtime variable available during playback of an audiovisual product in a DVD player, and at least one authoring parameter not available during playback.
  • each transition represents a permissible movement from one component to another component, each transition being associated with a triggering event.
  • a triggering event includes receiving a user command, or expiry of a timer.
  • each node represents one AV asset of the set
  • evaluating each choice component comprises creating entry logic associated with at least one node and/or evaluating at least one transition to create exit logic associated with at least one node, and providing a link between a pair of nodes according to the entry logic and the exit logic.
  • An authoring method for use in creating an audiovisual product according to a DVD-video specification comprising the steps of:
  • a component implicitly defines a plurality of AV assets by referring to a presentation template and to items of raw content substitutable in the presentation template, and the plurality of transitions represent navigational movements between components;
  • a recording medium having recorded therein computer implementable instructions for performing the method of claim 48.
  • An optical disk recording medium having recorded thereon an audiovisual product authored according to the method of claim 48.
  • An authoring method for creating an audiovisual product has three main stages.
  • the first stage defines components implicitly representing functional sections of audiovisual content and transitions that represent movements between components.
  • the second stage expands the components and transitions to provide a set of explicitly realised AV assets and an expanded intermediate datastructure of nodes and links. Each node is associated with one of the AV assets and the links represent movement from one node to another.
  • the third stage creates the audiovisual product in a predetermined output format, using the AV assets and the expanded intermediate datastructure of the nodes and the links.

Description

DATA PROCESSING SYSTEM AND METHOD, COMPUTER PROGRAM PRODUCT AND AUDIO/VISUAL PRODUCT
Field of the Invention
The present invention relates to a data processing system and method, a computer program product and an audio-visual product and, more particularly, to a DVD product, authoring system and method, a computer program product for such an authoring system and method and a DVD product.
Background to the Invention
DVDs represent one of the fastest growing forms of multimedia entertainment throughout the world. Conventionally, DVDs have been used to present movies to users using extremely high quality digital audio/visual content. Figure 1 shows, schematically, a typical home entertainment system 100 comprising a DVD player 102, a DVD 104 and a television 106. The DVD 104 contains a number of programs and cells 108 each of which comprises corresponding digital audio-visual content 110 together with respective navigation data 112. The navigation data 112 is used by a navigation engine 114 within the DVD player
102 to control the order or manner of presentation of the digital content 110 by a presentation engine 116. The presentation engine 116 presents the digital content 110 on a television or monitor 106 as rendered audio-visual content 118. As is well known within the art, the rendered audio-visual content 118, conventionally, takes the form of a movie or photographic stills or text associated with that movie; so-called Bonus features.
A user (not shown) can use a remote control 120 associated with the DVD player 102 to influence the operation of the navigation engine 114 via an infrared remote control interface 122. The combination of the infrared remote control 120 and the navigation engine
114 allows the user to make various selections from any menus presented by the presentation engine 116 under the control of the navigation engine 114 as mentioned above.
Due to the relatively limited set of commands that might form the navigation data, the processing performed by the DVD player and, in particular, the navigation engine 114, is relatively simple and largely limited to responding to infrared remote control commands and retrieving and displaying, via the presentation engine 116, pre-authored or pre-determined digital audio-visual content 110. Beyond decoding and presenting the digital audio-visual content 110 as rendered visual content 118, the DVD player 102 performs relatively little real-time processing.
This can be contrasted with the relatively sophisticated real-time processing performed by computers when providing or supporting a graphical user interface (GUI) such as that represented or presented by all of the members of the family of Windows operating systems available from Microsoft Corporation. Figure 2 depicts, schematically, a GUI 200 presented by, for example, Internet Explorer, running on the Windows 98 operating system. The GUI 200 comprises an application window 202 with a menu bar 204. The menu bar 204 has a number of menu items 206 to 216 that can be selected individually using a mouse and cursor or corresponding hot-keys as is well known within the art. Selecting one of the menu items 206 to 216, typically, causes a pull-down menu to be displayed. Figure 3 depicts a pull-down menu 300 corresponding to the "File" menu item 206. It can be seen that the pull-down menu 300 comprises a number of further menu items, "New" 302 to "Close" 304, that can be selected to perform corresponding functions. Two of the further menu items, namely "New" 302 and "Send" 306, invoke or produce further, respective, menus (not shown).
As will be appreciated, the menu items are selected and the various menus, pulldown or otherwise, are invoked in real-time, that is, the processing necessary for displaying and stepping through the various menu items presented is performed in real-time. Effectively, the instruction set of a microprocessor of a host computer is sufficiently sophisticated and flexible to imbue the Internet Explorer application 200 with the capability to perform the necessary calculations and manipulations to implement the display and selection of menu items in response to user commands issued in real-time.
It will be appreciated that this is in stark contrast to the operation of menus and the selection of menu items using current DVD players. As compared to computer applications, the menu options of those DVD players, and the mode of presentation of those options, are currently relatively crude and unsophisticated. This is, at least in part, due to most DVD players being unable to perform, in response to a user action or command, the real-time processing necessary to display such sophisticated menus and, subsequently, to select a menu item from such displayed menus. This is due, in part, to the very limited additional graphics element processing capacity offered by current DVD players.
It will be appreciated that the panes illustrated in figures 2 and 3 have been shown as lacking content. The limitations of DVD players become even more apparent when considering providing such dynamic menus with content that can change or is dynamic. For example, the content displayable within a pane might be video or stills of digital images such as photographs or the like.
The process of producing DVD data is known as authoring. The process of authoring comprises creating and/or marshalling the content for the DVD data and, optionally, encoding/formatting that data, together with corresponding navigational data, to allow the DVD data to be subsequently processed by a DVD player or to be output, in its pre-mastering form, in preparation for producing DVDs bearing the data. Authoring is described in, for example, "Desktop DVD Authoring", ISBN 0789727528, and "DVD Production", ISBN 0240516117, both of which are incorporated herein by reference for all purposes. It will be appreciated that authoring comprises at least one of designing and creating the content of a DVD-Video title, that is, DVD-Video data.
It is an object of embodiments of the present invention at least to mitigate some of the problems of the prior art.
Summary of Invention
Accordingly, a first aspect of embodiments of the present invention provides an asset authoring method comprising the steps of providing a data structure comprising data defining a menu structure having at least one menu having a respective number of menu items associated with a number of defined views of, or actions in relation to, a general visual asset; providing a visual asset; and creating, automatically, a number of visual assets using at least one of the visual assets provided and the data of the data structure; the visual assets created corresponding to respective views of the defined views of the visual asset provided or reflecting respective actions of the defined actions in relation to the visual asset provided.
Advantageously, embodiments of the present invention allow menus, such as, for example, pull-down menus or other menus, associated with viewing content to be realised on a DVD player, that is, the embodiments allow the real-time display of menus and invocation of menu items performed by computers to be at least emulated.
Preferably, the number of visual assets comprises at least one visual asset and, preferably, more than one visual asset.
A further aspect of embodiments of the present invention provides a method of authoring visual content; the method comprising the step of creating a video sequence comprising data to display a progressively expanding menu comprising a number of menu items following invocation of a selected menu item or a user-generated event. A still further aspect of embodiments of the present invention provides a method of authoring visual content; the method comprising the step of creating a video sequence comprising data to display a progressively contracting menu comprising a number of menu items following invocation of a selected menu item or a user generated event.
Other aspects of embodiments of the present invention are described herein and claimed in the claims.
Brief Description of the Drawings
Embodiments of the present invention will now be described, by way of example only, with reference to the accompanying drawings in which:
figure 1 shows a home entertainment system;
figure 2 shows a GUI for Internet Explorer;
figure 3 depicts a pull-down menu of the GUI;
figure 4 shows schematically an asset authoring process according to an embodiment of the present invention;
figure 5 depicts a data structure for defining a menu according to an embodiment;
figure 6 shows, schematically, video sequences for expansion and contraction of pulldown menus according to embodiments of the present invention;
figure 7 illustrates data for a pull-down menu to be used in the video sequences of figure 6;
figure 8 illustrates the generation of sub-picture menu data for the pull-down menus used in the video sequences of figure 6;
figure 9 depicts the display of the frames of the video sequences together with the schematic overlay of the sub-picture menu data;
figure 10 shows the relationship between a sub-picture having menu item overlays and a corresponding video sequence or frame; figure 11 illustrates the frames of a video sequence for the expansion and contraction of the further menu;
figure 12 illustrates the generation of a further menu item according to an embodiment;
figure 13 shows menu data for generating a video sequence showing the progressive expansion or contraction of the further menu shown in figure 12;
figure 14 depicts the relationship between a graphical overlay of a sub-picture to a corresponding menu item of the further menu shown in figure 12;
figure 15 shows a first flowchart for generating a visual asset according to an embodiment; and
figure 16 shows a second flowchart for generating a visual asset according to an embodiment.
Description of Preferred Embodiments
Figure 4 shows an authoring process 400 according to an embodiment of the present invention for automatically producing a number, M, of sets of assets 402 to 406 from corresponding assets 408 to 412 and a data structure 414 defining a menu structure having a number, N, of menu items 416 to 420. The menu items, or only selected menu items, if appropriate, have associated data 422 to 426 representing a graphical manifestation or representation of the menu items. Also, the menu items, or only selected menu items, have associated data processing operations that perform, or at least provide access to functions that can perform, data processing operations or manipulations upon, or in relation to, a notional or generalised asset. In the example shown, the notional or generalised assets are the provided assets 408 to 412 that are used to produce the sets of assets 402 to 406.
It can be appreciated that the sets of assets 402 to 406 comprise respective assets. For example, the first set of assets 402 comprises several visual assets 434 to 438 that were produced, from or in relation to the first asset 408, by applying appropriate or selected operations of the available operations 428 to 432 according to the menu structure, that is, according to whether or not a menu item is intended to be available for that first asset 408. The assets 434 to 438 created are shown, for the purpose of a generalised description, as having been created from menu items that have operations A, B and C (not shown) associated with them. The operations A, B and C will be operations associated with corresponding menu items selected from the N illustrated menu items.
Similarly, the second set of assets 404 comprises several assets 440 to 444 that were produced, from or in relation to the second asset 410, by applying appropriate or selected operations of the available operations 428 to 432 according to the menu structure, that is, according to whether or not a menu item is intended to be available for that second asset 410. The assets 440 to 444 created are shown, for the purpose of a generalised description, as having been created from menu items that have operations P, Q and R (not shown) associated with them. The operations P, Q and R are associated with corresponding menu items selected from the N illustrated menu items. The same applies to the Mth set of assets 406, which comprises respective assets 446 to 450 produced from or in relation to the Mth asset 412 and selected operations 428 to 432.
Navigational data 452 to 468 is also created for each asset 434 to 450. The navigational data is arranged to allow the navigation engine 114 of the DVD player 102 to obtain the next image or video sequence, that is, created asset, according to the menu structure. For example, if the first asset 434 of the first set of assets 402 represents an image, the navigational data associated with that first asset 434 may comprise links to the second asset 436, which might represent an image or video sequence showing that image together with the progressive display of a number of menu options associated with that image. For example, the menu options might relate to image processing techniques such as "posterising" the image. Therefore, in this example, the links associated with the second asset 436 might comprise a link to a third asset (not shown) representing the image together with the progressive closing or contraction of the menu options previously displayed via the first asset 434 and a link to a fourth asset showing a "posterised" version of the original image shown in the original asset 408.
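A compact sketch of the process of figure 4, with invented operation names; a real implementation would call out to an image- or video-processing tool wherever the lambdas appear.

```python
# Hypothetical operations associated with menu items; a real implementation
# would invoke an image- or video-processing tool at each of these points.
OPERATIONS = {
    "Posterise": lambda asset: f"posterise({asset})",
    "Black & White": lambda asset: f"greyscale({asset})",
    "Zoom 200%": lambda asset: f"zoom({asset}, 2.0)",
}

def create_asset_sets(original_assets, menu_items):
    # For each provided asset, create one derived asset per applicable menu
    # item, mirroring the sets of assets 402 to 406 of figure 4.
    return {original: {item: OPERATIONS[item](original) for item in menu_items}
            for original in original_assets}

asset_sets = create_asset_sets(["photo_001", "photo_002", "photo_003"],
                               ["Posterise", "Black & White", "Zoom 200%"])
```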
It will be appreciated that the assets might represent stills or video sequences. In preferred embodiments, the assets that relate to the menu options or menu items are video sequences that show the progressive expansion or contraction of the menus. Alternatively, or additionally, the assets might comprise two portions with a first portion representing a video sequence arranged to display or hide the dynamic menu and a second portion representing a still image or a further video sequence that is arranged to loop, that is, that is arranged to repeat once the menu has been displayed or hidden. Figure 5 illustrates graphically a possible menu structure definition in the form of a tree 500. The data structure will be described with reference to a menu structure to perform image-processing techniques on a number of images. It will be appreciated that this is for the purpose of illustration only and that embodiments of the present invention are not limited thereto. The tree 500 comprises a root node 502 at which an asset might be displayed in its original or unadulterated form. Selecting "OK", for example, using the remote control 120, might be intended to cause a transition to a node for displaying the menu options available at that level in the menu structure. It can be appreciated from the example that invoking the "OK" button or the like is intended to produce a pull-down menu having four menu items 504 to 510. In the example, the four menu items are "Action" 504, "Zoom" 506, "Pan" 508 and "Effect" 510. At this stage in the menu structure, an originally displayed asset will be intended to also comprise a pull-down menu showing those options with those menu options having been progressively displayed via a corresponding video sequence. In order to select the menu options, sub-picture data is intended to be generated and shown as graphic overlays for each of the menu items "Actions" 504, "Zoom" 506, "Pan" 508 and "Effect" 510.
It can be appreciated that the menu structure is defined such that selecting the first menu option 504 produces a further menu comprising a number of sub-menu items. In the illustrated example, the sub-menu items are "First" 512, "Last" 514, "Next" 516, "Previous" 518, "Thumbs" 520 and "Category" 522. Again, the menu structure is arranged to have sub- picture graphic overlays associated with each of the options that can be used to select the options. Video assets are intended to be produced that give effect to operations associated with these options 512 to 522.
Selecting the "First" 512 option is intended to display a first image of a number of images. Therefore, an asset displaying that first image is intended to be produced. Selecting the second option, "Last" 514, is intended to display the last image of the number of images. Therefore, an asset for displaying that image will be produced using the last image. The "Previous" 516 and "Next" 518 menu items are intended to display previous and next images respectively. Suitably, video assets giving effect to the display of the previous and next images are intended to be created. The option "Thumbs" 520 is intended to display thumbnail views of all, or selectable, images within a category or set of images. Again, selecting this option will necessitate producing a video asset that displays all of the thumbnail views or a selected number of those thumbnail views. It can be appreciated that any view of an asset might need associated navigation data to jump to the video asset or sequence showing the thumbnail views. The final option, "Category" 522, is arranged to present a further sub-menu containing a number of categories of image; each represented by a corresponding menu item 524 to 526. Selecting one of these menu items is intended to display the first image in the category of images or a number of thumbnail views of the images within that category.
The menu structure might be defined such that the second menu item, "Zoom" 506, produces a further menu having four zooming options; namely, "+" 528, "-" 530, "100%" 532 and "200%" 534, which, when selected, are intended to produce zoomed versions of an original asset. Suitably, giving effect to invocations of these menu items 528 to 534 will require corresponding video assets, firstly, to display the menu options and, secondly, to give effect to the transition from an initial, or starting, view of an asset to a zoomed view of the asset together with corresponding navigation data to allow the navigation engine 114, in conjunction with the presentation engine 116, to retrieve and render the video assets showing such zooming operations. Again, a sub-picture having appropriately positioned graphical overlays that are selectable and maskable will also be desirable.
The "Pan" 508 menu option produces a further sub-menu comprising four menu items or options 530 to 542 that are arranged to allow a user to pan around an image. Accordingly, for each original asset, various video assets need to be defined that support such panning. Similarly, the final menu option, "Effect" 510, is arranged to produce a further submenu comprising three menu items 544 to 548 that apply image processing techniques or effects to the original assets. The illustrated menu items are "Colour" 544, "Black & White" 546 and "Posterise" 548, which require video assets to present the original assets in colour, in black and white and in a posterised forms respectively. Again, sub-picture image data would also be required to support selection of the menu items 544 to 548.
It will be appreciated that the assets produced, or intended to be produced, to give effect to traversing the menu structure and invoking menu items can be still images or video sequences representing a dynamic transition from one view of an asset to another view of an or the asset, representing a transition between views of an asset, or a transition to an asset.
It can be appreciated from the above that marshalling or producing the assets in preparation for creating a DVD that uses, or at least emulates, dynamic menus requires a very large number of assets to be created that anticipate all possible combinations of asset views according to the number of menus and menu options or items within those menus defined in the data structure. Furthermore, corresponding assets that show the expansion or contraction of the menu items either jointly or severally with respective asset data will also require a large number of assets to be generated.
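For illustration, the menu structure of figure 5 might be encoded as a nested dictionary such as the one below (the category names are invented); counting its leaf operations gives a feel for how quickly the number of required assets grows once every original asset needs a derived asset per operation, plus expansion and contraction sequences.

```python
# Leaves name the operation to apply; nested dictionaries produce further
# sub-menus.  Category names are invented for the example.
MENU_TREE = {
    "Action": {"First": "show_first", "Last": "show_last",
               "Next": "show_next", "Previous": "show_previous",
               "Thumbs": "show_thumbnails",
               "Category": {"Holidays": "show_category",
                            "Family": "show_category"}},
    "Zoom": {"+": "zoom_in", "-": "zoom_out",
             "100%": "zoom_100", "200%": "zoom_200"},
    "Pan": {"Up": "pan_up", "Down": "pan_down",
            "Left": "pan_left", "Right": "pan_right"},
    "Effect": {"Colour": "colour", "Black & White": "greyscale",
               "Posterise": "posterise"},
}

def count_leaf_operations(tree):
    # Every provided asset needs at least one derived asset per leaf
    # operation, plus menu expansion and contraction sequences, which is why
    # the number of assets to create grows so quickly.
    return sum(count_leaf_operations(v) if isinstance(v, dict) else 1
               for v in tree.values())

print(count_leaf_operations(MENU_TREE))   # -> 18
```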
Referring to figure 6, there is shown schematically an authoring process 600 for producing a pair of video sequences 602 and 604 comprising frames that illustrate the expansion and contraction of a pull-down menu, assuming that the menu structure and menu items are arranged to define a pull-down menu. The first video sequence 602 has been shown, for illustrative purposes only, as comprising five frames 606 to 614. The first frame 606 is a schematic representation of the image shown in figure 2. In the interests of clarity, only the menu bar 204 and window 202 of the image of figure 2 have been illustrated in each frame. The second frame 608 is shown with a portion 616 of the pull-down menu 300 having been displayed. It can be seen that the third and fourth frames 610 and 612 respectively illustrate progressively larger portions 618 and 620 of the pull-down menu 300. The final frame 614 illustrates the complete pull-down menu 300 and corresponds to the image shown in figure 3. The progressively increasing or expanding portions 616 to 620 of the pull-down menu 300 are illustrated as expanding on a per menu item basis, that is, each portion contains a greater number of menu items as compared to a previous portion. Again, for the purpose of clarity of illustration only, the pull-down menu 300 has been shown as comprising four menu items rather than the full 13 menu items shown in figure 3. However, it will be appreciated that a pull-down menu, according to requirements, may present any predetermined number of menu items. The progressive expansion and contraction of the menus corresponds to or emulates revealing or hiding of menus within a Windows context.
Although figure 6 illustrates the creation of individual frames, it will be appreciated that in preferred embodiments the visual assets 606 to 614 will take the form of a number of frames, that is, video sequences. For example, visual asset 606 will, in practice, represent a video sequence comprising a number of frames that progressively displays the first portion 616 of the menu over a predetermined period of time. It will be appreciated that the number of frames constituting such a video sequence might be a function of the desired display speed for the menu.
Navigation data 622 to 628 provides links between video assets and allows the navigation engine to retrieve the first video sequence or set of video assets or sequences 602 from the DVD 104 and to cause the presentation engine 116 to display the first video sequence using that retrieved data. The second video sequence 604 of figure 6 has also been shown, for illustrative purposes only, as comprising five frames 630 to 638. The first frame 630 is a schematic representation of the image shown in figure 3, in which the pull-down menu 300 is in its fully expanded form. The second frame 632 is shown with a smaller portion 640 of the pull-down menu 300 having been displayed. It can be seen that the third and fourth frames 634 and 636 respectively display progressively smaller portions 642 and 644 of the pull-down menu 300. The final frame 638 illustrates the complete pull-down menu 300 in its most contracted form and corresponds to the image 200 shown in figure 2. The progressively decreasing or contracting portions 640 to 644 of the pull-down menu 300 are illustrated, again, as contracting on a per menu item basis, that is, each portion contains progressively fewer menu items as compared to a previous portion. Navigation data 646 to 652 linking each video asset will also be created to allow the navigation engine 114 to retrieve the asset and cause the presentation engine 116 to display that video asset. Again, it will be appreciated that each video asset 630 to 638 will, in practice, represent a video sequence and that the embodiment described above has been illustrated using frames rather than sequences for the purposes of clarity of illustration only.
It will be appreciated the video content panes of the video sequences 602 and 604 have been shown "empty" for the purposes of clarity only. In practice, the content panes will contain content such as, for example, image data or video sequence data.
It will be appreciated that although the pull-down menu has been described with reference to expanding and contracting on a per menu item basis, embodiments can be realised in which any predetermined expansion or contraction step size is used. It will be appreciated that smaller or greater step sizes might affect the number of frames that are required to form the first 602 and second 604 video sequences or the smoothness of the display of the pull-down menu 300. It can be appreciated that rendering such pre-authored video sequences as the first 602 and second 604 video sequences enables pull-down menus to be provided, or at least emulated, using DVD players, which increases the richness of the user interfaces for, and the user experience of, DVDs.
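A sketch of how the frame content of the expansion and contraction sequences 602 and 604 might be described, assuming a per-menu-item step size; frames_per_step is a hypothetical parameter standing in for the relationship between frame count and display speed noted above.

```python
def menu_reveal_frames(menu_items, frames_per_step=6, contracting=False):
    # Describe the frames of a video sequence that progressively reveals (or
    # hides) a pull-down menu one menu item at a time; frames_per_step
    # controls the apparent speed of the animation.
    steps = (range(len(menu_items), -1, -1) if contracting
             else range(len(menu_items) + 1))
    frames = []
    for visible_count in steps:
        frame = {"visible_items": menu_items[:visible_count]}
        frames.extend([frame] * frames_per_step)
    return frames

expansion = menu_reveal_frames(["New", "Open", "Save", "Close"])
contraction = menu_reveal_frames(["New", "Open", "Save", "Close"],
                                 contracting=True)
```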
Figure 7 shows, schematically, the graphical data 700 that can be used to produce a progressively expanding or contracting pull-down menu 300 according to an embodiment. It can be seen that the data 700 comprises 13 pull-down menu portions 702 to 726. These portions 702 to 726 are used to produce the video sequences 602 and 604 described above with respect to figure 6. A complete frame of video may comprise both the pull-down menu portions or complete menu with or without the "application" window, such as that displayed in figure 2, together with other data or information such as, for example, content for the application window and/or a background on which the application window sits, if it does not occupy the whole of the 720x480 or 720x576 pixels of the DVD NTSC and the DVD PAL/SECAM pixel resolutions, respectively.
The data representing the video sequences 602 and 604, stored on the DVD 104, will also be accompanied by sub-picture data, carried by at least one of the thirty-two available sub-picture streams. The sub-picture data is used to produce graphical overlays or highlights for selecting menu items of the various menu items of the pull-down menu. The sub-picture data is used to produce a bitmap image bearing graphical overlays that are displayed on top of, or otherwise combined with, corresponding video sequences. The manner and position of display of the graphical elements are controlled or determined using corresponding sub-picture buttons with associated highlights that are selectively operated as masks to hide or reveal an associated graphical overlay.
Referring to figure 8, there is shown schematically the relationship 800 between a selected number of graphical overlays 802 to 808 and corresponding portions 802' to 808' of the pull-down menu 300. The sub-picture buttons or masks associated with each graphical overlay 802 to 808 are arranged such that, when invoked in conjunction with the video sequence displaying the pull-down menu, the sub-picture bitmaps selectively highlight or overlay the corresponding portions 802' to 808' of the pull-down menu 300. The presentation engine 116, under the control of the navigation engine 114, displays the appropriate sub-picture graphical overlay 802 to 808 in response to user commands received from the remote control 120 using the sub-picture buttons or masks. For example, figure 9 illustrates the relationship 900 between three central graphical overlays 902 to 906 of a sub-picture (not shown) and their corresponding menu items 902' to 906'. Assume that the central graphical overlay 904 is currently displayed. The navigation engine 114, in response to an "up" or "down" user command received from the IR control 120, will cause the presentation engine 116 to display a selected overlay 902 or 906 to highlight the "Page Setup" 902' or "Print Preview" 906' menu items respectively by masking the appropriate overlays that are not required to be displayed.
Referring to figure 10, there is shown the relationship 1000 between a sub-picture 1002 containing graphical overlays 1004 and a video sequence or frame containing the pulldown menu 300 in its fully expanded state. The sub-picture is notionally divided into a number of regions (not shown) known as buttons that can be selectively displayed in response to user actions, that is, commands received from the IR control 120. These buttons are used to reveal or hide a number of highlight regions that are aligned with respective menu options of the pull-down menu 300. Figure 11 illustrates the process 1100 of successive display of the frames constituting the first and second video sequences, according to the direction of the time lines 1102 and 1104. The relationship between the sub-picture graphical overlay 804 and the corresponding "Save As" menu item 804' of the pull-down menu 300 can be more easily appreciated.
Figure 12 illustrates a view 1200 of the application 200 with one menu item 1202 of the pull-down menu 300 having been invoked. It can be appreciated that this invocation has produced a further menu 1204. In preferred embodiments, the further menu 1204 is progressively displayed in a left-to-right manner in a similar process to the progressive display of the pull-down menu 300 itself. The authoring process to produce the data used in producing a video sequence having such a left-to-right menu needs to produce data 1300 such as, for example, that shown in figure 13. The left-to-right menu data 1300 comprises a number of portions 1302 to 1308 of the further menu 1204. Each portion 1302 to 1308 is progressively bigger or smaller than a succeeding or preceding portion respectively. Also shown, in a manner analogous to that of figure 8, are the sub-picture graphical overlays 1310 to 1314 that correspond to the respective menu items 1310' to 1314' of the further menu 1204. The data shown in figure 13 is used to produce video sequences for progressively expanding or contracting the further menu 1204 in a manner that is substantially similar to the process used to produce the first 602 and second 604 video sequences shown in figure 6.
Figure 14 illustrates, with greater clarity, the relationship 1400 between the further menu 1204 and the sub-picture graphical overlay 1310 for the "Page by E-mail" 1310' menu item. It can be appreciated that the pull-down menu 300 has been invoked, followed by the selection of the "Send" menu item 1202, which has caused the display of the left-to-right menu 1204 and the corresponding sub-picture graphical overlay 1310. Again, the start frame 1402 and end frame 1404 are shown, together with intermediate frames 1406 to 1410, as constituting expansion and contraction video sequences according to the direction of the time lines 1412 and 1414 respectively.
It will be appreciated that the navigation data associated with the first video sequence 602 will include a link to the video sequence for expanding the further menu 1204 to give effect to that expansion should the "Send" menu item 1202 be invoked. It will be appreciated from the above that the process of marshalling or producing a visual asset for displaying and using dynamic menus involves producing video sequences for both the expansion and contraction, that is, the display and hiding, of the pull-down menu together with navigation data linking the frames and/or video sequences, according to planned or predetermined user operations and sub-picture graphical overlay data and navigation data for controlling the display of the sub-picture graphical overlays.
Referring to figure 15 there is shown a flowchart 1500 for producing visual assets according to an embodiment of the present invention. At step 1502, an original visual asset is provided or obtained. A data structure comprising a definition of a menu structure together with associated menus and menu items and operations related to those menu items is defined at step 1504. Such a data structure has been described above in relation to figures 4 and 5. An asset is created, at step 1506, using appropriate menu items and their related operations as well as the originally provided video asset. It is determined at step 1508 whether or not all assets relating to the originally provided asset have been created. If the determination at step 1508 is negative, processing returns to step 1506 where a further asset is created, again, according to the needs or requirements defined by the menu structure defined in step 1504. Having created the necessary video assets from an original asset, navigation data linking the assets according to an intended navigational strategy, which is, again, defined by the menu structure, is created at step 1510. A test, performed at step 1512, determines whether or not there are further a/v assets to process. If the test is positive, processing continues at step 1502, where the next asset to be processed is obtained. If the test is negative, processing terminates.
Figure 16 shows a flowchart 1600 that illustrates the steps undertaken in steps 1506 and 1508 of figure 15 in greater detail. The menu items applicable to a provided video asset are identified and counted at step 1602. A count, N, is set to 1 at step 1604. For the Nth menu item, the corresponding operation, such as, for example, one of the operations 428 to 432 shown in figure 4, is identified at step 1606. At step 1608 a copy of the originally provided video asset is processed using the appropriate operation identified at step 1606 to create at least a portion, or a first portion, of an intended Nth video asset.
At step 1610, the graphical data associated with the Nth menu item is processed to produce a second portion of the Nth video asset. The complete or whole of the Nth video asset is created using at least one of the first and second portions at step 1612. It is determined, at step 1614, whether there are more menu items to be processed for which corresponding video assets, derived from the originally provided video asset, are required. If the determination is positive, processing continues to step 1616 where N is incremented and control passes to step 1606, where the next menu item is considered. If the determination at step 1614 is negative, processing terminates or, more accurately, processing returns to step 1508 of figure 15. It will be appreciated by those skilled in the art that the menu structure defined in the data structure might comprise sub-menus. Therefore, the process of producing the assets for such a complex menu structure might require nested or recursive applications of the steps shown in the flowcharts.
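The flowcharts of figures 15 and 16 might be combined into a single routine along the following lines; all names are illustrative and the operations are stand-ins for real image or video processing.

```python
def author_assets(original_assets, menu_items, operations):
    # Outer loop: figure 15 (one pass per provided asset).
    # Inner loop: figure 16 (one derived asset per applicable menu item).
    products = []
    for original in original_assets:                    # step 1502
        derived = {}
        for item in menu_items:                         # steps 1602-1616
            operation = operations[item]                # step 1606
            first_portion = operation(original)         # step 1608
            second_portion = f"menu_graphic({item})"    # step 1610
            derived[item] = (first_portion, second_portion)   # step 1612
        navigation = {item: f"play {first}"             # step 1510
                      for item, (first, _) in derived.items()}
        products.append({"asset": original,
                         "derived": derived,
                         "navigation": navigation})
    return products

result = author_assets(
    ["photo_001"],
    ["Posterise", "Black & White"],
    {"Posterise": lambda a: f"posterise({a})",
     "Black & White": lambda a: f"greyscale({a})"})
```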
Although the above embodiments have been described within the context of a DVD equivalent of Internet Explorer, embodiments of the present invention are not limited thereto. Embodiments can be realised in which the pull-down menus are implemented in any context. For example, the "application" might be intended to step through an album of photographs or video sequences and the menu items might control the display of those photographs or video sequences. Still further, it will also be appreciated that the pull-down menu stems from a corresponding menu bar item. However, the pull-down menu can be arranged to appear, at a predetermined screen position, in response to a user-generated event.
The above embodiments have been described with reference to creating video or visual assets. However, embodiments of the present invention are not limited to such an arrangement. Embodiments can be realised in which the assets processed and/or produced are audio-visual assets.
Although the above embodiments have been described in the context of dynamic menus, embodiments of the present invention are not limited to such an arrangement. Embodiments can be realised in which, for example, modal or modeless dialogue boxes, or other GUI elements, are emulated via corresponding video sequences.
It will be appreciated that the video assets created in the above embodiments might use an image processing system or multimedia authoring system by which an author can create the assets. For example, to overlay menu image data on top of image or video data one skilled in the art might use Macromedia Flash, Macromedia Director or Adobe After Effects. Furthermore, the navigation data associated with such created assets might use the invention described in UK patent application no. GB 0309814.2 (filed 30 April 2003 and claiming priority from UK Patent application no. GB 0209790.5) and US patent application serial number 60/383,825, the contents of which are incorporated herein for all purposes by reference and shown in appendix A.
The embodiments of the present invention have been described with reference to producing DVD data that allows dynamic menus comparable to those provided by conventional computer systems to be provided, that is, at least emulated, using a conventional DVD player, which itself, might take the form of a hardware or software media player. The term "dynamic menus" encompasses the type of menus provided by computer systems as described above. These menus can be displayed, that is, rendered, in a number of ways: they can be progressively displayed, progressively contracted, displayed and removed substantially immediately, presented in a pop-up/pop-down manner, or presented in any other way.
Some of the above embodiments have been described with reference to deriving data to be displayed from an asset provided with the resulting data being influenced by a corresponding menu option having an associated action. It will be appreciated that the resulting data might be merely a copy of the provided data or data that is the result of processing the originally provided data. Instances in which the resulting data is merely a copy of the originally provided data might occur when the corresponding menu item is used to navigate to the copy of the originally provided data, that is, the menu option is used for navigational purposes rather than for modifying, according to an associated option, the originally provided data. In such a case, the operation associated with the menu option would be navigation data/commands that invoke the playing of the provided asset.
Furthermore, it will be appreciated that the embodiments of the present invention are preferably implemented, where appropriate, using software. The software can be stored on or in various media such as, for example, magnetic or optical discs or in ROMs, PROMs and the like.
For the avoidance of doubt, the phrase "one or more" followed by, for example, a noun comprises "one [noun]" and "two or more [nouns]", that is, it comprises "at least one [noun]" and vice versa. Therefore, the phrase "one or more video sequences" comprises one video sequence and, similarly, the phrase "one or more original assets" comprises one original asset as well as both extending to "a plurality of video sequences" and "a plurality of original assets" respectively.
The reader's attention is directed to all papers and documents which are filed concurrently with or previous to this specification in connection with this application and which are open to public inspection with this specification, and the contents of all such papers and documents are incorporated herein by reference.
All of the features disclosed in this specification (including any accompanying claims, abstract and drawings) and/or all of the steps of any method or process so disclosed, may be combined in any combination, except combinations where at least some of such features and/or steps are mutually exclusive.
Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise. Thus, unless expressly stated otherwise, each feature disclosed is one example only of a generic series of equivalent or similar features.
The invention is not restricted to the details of any foregoing embodiments. The invention extends to any novel one, or any novel combination, of the features disclosed in this specification (including any accompanying claims, abstract and drawings), or to any novel one, or any novel combination, of the steps of any method or process so disclosed.
APPENDIX
AUTHORING OF COMPLEX AUDIOVISUAL PRODUCTS
The present invention relates in general to a method and apparatus for authoring complex audiovisual products.
In general terms, it is desired to assemble many small sections of raw audio and video content (i.e. sound clips and video clips) to form a finished audiovisual product, by way of an authoring process. However, in many environments a considerable degree of specialist knowledge and time must be invested in the authoring process in order to achieve a desirable finished audiovisual product. These problems are exacerbated where the audiovisual product has a complex navigational structure or requires many separate raw content objects.
As a simple example, a feature movie or television program typically has a straightforward linear navigational sequence of individual scenes. By contrast, it is now desired to develop new categories of audiovisual products which have a much more complex navigational structure, such as a movie with many scene choices or different movie endings, and/or which have a large number of individual scenes, such as an interactive quiz game with say one thousand individual quiz questions.
In one preferred embodiment, the present invention relates to authoring of audiovisual content into a form compliant with a specification for DVD-video and able to be recorded on an optical disc recording medium.
An optical disc is a convenient storage medium for many different purposes. A digital versatile disc (DVD) has been developed with a capacity of up to 4.7Gb on a single-sided single-layer disc, and up to 17Gb on a double-sided double-layer disc. There are presently several different formats for recording data onto a DVD disc, including DVD-video, DVD-audio, and DVD-RAM, amongst others. Of these, DVD-video is particularly intended for use with pre-recorded video content, such as a motion picture. As a result of the large storage capacity and ease of use, DVD discs are becoming popular and commercially important. Conveniently, a DVD-video disc is played using a dedicated playback device with relatively simple user controls, and DVD players for playing DVD-video discs are becoming relatively widespread. More detailed background information concerning the DVD-video specification is available from DVD Forum at www.dvdforum.org.
Although DVD-video discs and DVD-video players are becoming popular and widespread, at present only a limited range of content has been developed. In particular, a problem arises in that, although the DVD specification is very flexible, it is also very complex. The process of authoring content into a DVD-video compatible format is relatively expensive and time consuming. In practice, the flexibility and functions allowed in the DVD-video specification are compromised by the expensive and time consuming authoring task. Consequently, current DVD-video discs are relatively simple in their navigational complexity. Such simplicity can impede a user's enjoyment of a DVD-video disc, and also inhibits the development of new categories of DVD-video products.
An example DVD authoring tool is disclosed in WO 99/38098 (Spruce Technologies) which provides an interactive graphical authoring interface and data management engine. This known authoring tool requires a relatively knowledgeable and experienced operator and encounters difficulties when attempting to develop an audiovisual product having a complex navigational structure. In particular, despite providing a graphical user interface, the navigational structure of the desired DVD-video product must be explicitly defined by the author. Hence, creating a DVD-video product with a complex navigational structure is expensive, time-consuming and error-prone.
An aim of the present invention is to provide a convenient and simple method and apparatus for authoring an audio-visual product.
An aim of the preferred embodiments of the present invention is to provide a method and apparatus able to create an audio-visual product having a complex navigational structure and/or having many individual content objects, whilst reducing a time required for authoring and minimising a need for highly skilled operators.
Another preferred aim is to provide an authoring tool which is intuitive to use and is highly flexible.
An aim of particularly preferred embodiments of the invention is to allow efficient creation of audio-visual products such as DVD-video products that run on commonly available DVD-video players.
According to the present invention there is provided a method and apparatus as set forth in the appended claims. Preferred features of the invention will be apparent from the dependent claims, and the description which follows.
In a first aspect of the present invention there is provided an authoring method for use in creating an audiovisual product, comprising the steps of: defining a plurality of components, the components implicitly representing functional sections of audiovisual content with respect to one or more raw content objects, and a plurality of transitions that represent movements between the plurality of components; expanding the plurality of components and the plurality of transitions to provide a set of explicitly realised AV assets and an expanded intermediate datastructure of nodes and links, where each node is associated with an AV asset of the set and the links represent movement from one node to another; and creating an audiovisual product in a predetermined output format, using the AV assets and the expanded intermediate datastructure of the nodes and the links.
In a second aspect of the present invention there is provided an authoring method for use in creating a DVD-video product, comprising the steps of: creating a plurality of components representing parameterised sections of audiovisual content, and a plurality of transitions representing movements between components; expanding the plurality of components and the plurality of transitions to provide a set of AV assets and an expanded datastructure of nodes and links, where each node is associated with an AV asset of the set and the links represent movement from one node to another; and creating a DVD-video format datastructure from the AV assets, using the nodes and links.
In a third aspect of the present invention there is provided an authoring method for use in creating an audiovisual product according to a DVD-video specification, comprising the steps of: generating a set of AV assets each comprising a video object, zero or more audio objects and zero or more sub-picture objects, and an expanded datastructure of nodes and links, where each node is associated with one AV asset of the set and the links represent navigational movement from one node to another; and creating a DVD-video format datastructure from the set of AV assets, using the nodes and links; the method characterised by the steps of: creating a plurality of components and a plurality of transitions, where a component implicitly defines a plurality of AV assets by referring to a presentation template and to items of raw content substitutable in the presentation template, and the plurality of transitions represent navigational movements between components; and expanding the plurality of components and the plurality of transitions to generate the set of AV assets and the expanded datastructure of nodes and links.
In another aspect of the present invention there is provided a recording medium having recorded thereon computer implementable instructions for performing any of the methods defined herein.
In yet another aspect of the present invention there is provided a recording medium having recorded thereon an audiovisual product authored according to any of the methods defined herein.
For a better understanding of the invention, and to show how embodiments of the same may be carried into effect, reference will now be made, by way of example, to the accompanying diagrammatic drawings in which:
Figure 1 is an overview of an authoring method according to a preferred embodiment of the present invention;
Figure 2 is a schematic diagram showing a simple abstraction of a desired audiovisual product;
Figure 3 shows in more detail a component used as part of the abstraction of Figure 2;
Figure 4 shows an example prior art authoring method compared with an example preferred embodiment of the present authoring method;
Figure 5 shows another example embodiment of the present authoring method using components and transitions;
Figure 6 shows the example of Figure 5 in a tabular format;
Figure 7 is an overview of a method for evaluating components and transitions;
Figure 8 shows evaluation of components in more detail;
Figure 9 shows evaluation of transitions in more detail;
Figure 10 shows a portion of an expanded datastructure during evaluation of components and transitions;
Figure 11 is an overview of a preferred method for creating DVD-video structures from an expanded datastructure;
Figure 12 shows a step of creating DVD video structure locations in more detail; and
Figure 13 shows a step of creating DVD-video compatible datastructures in more detail.
Figure 1 shows an overview of an authoring method according to a preferred embodiment of the present invention. The present invention is applicable when authoring many types of audiovisual products, and in particular when complex navigational structure or content are involved.
As one example, the present invention is applicable to authoring of video on demand products delivered remotely from a service provider to a user, such as over a computer network or other telecommunications network. Here, the present invention is especially useful in authoring interactive products, where user choices and responses during playback of the product dictate navigational flow or content choices.
As another example, the present invention is particularly suitable for use in the authoring of an audiovisual product compliant with a DVD-video specification. This example will be discussed in more detail below in order to illustrate the preferred arrangements of the present invention. The audiovisual product is suitably recorded onto a recording medium such as an optical disk. The DVD-video specification defines a series of data objects that are arranged in a hierarchical structure, with strict limits on the maximum number of objects that exist at each level of the hierarchy. Hence, in one preferred embodiment of the present invention it is desired to create an audiovisual product which meets these and other limitations of the specification. In particular it is desired that the resultant audiovisual product will play on commonly available DVD players. However, it is also desired to create the audiovisual product having a complex navigational structure, in order to increase a user's enjoyment of the product, and in order to allow the creation of new categories of audiovisual products.
In the field of DVD-video, audiovisual content is considered in terms of audio-visual assets (also called AV assets or presentation objects). According to the DVD-video specification each AV asset contains at least one video object, zero or more audio objects, and zero or more sub-picture objects. That is, a section of video data is presented along with synchronised audio tracks and optional sub-picture objects. The current DVD-video specification allows up to eight different audio tracks (audio streams) to be provided in association with up to nine video objects (video angle streams). Typically, the video streams represent different camera angles, whilst the audio streams represent different language versions of a soundtrack such as English, French, Arabic etc. Usually, only one of the available video and audio streams is selected and reproduced when the DVD-video product is played back. Similarly, the current specification allows up to thirty-two sub-picture streams, which are used for functions such as language subtitles. Again, typically only one of the sub-picture streams is selected and played back, to give, for example, a movie video clip with English subtitles from the sub-picture stream reproduced in combination with a French audio stream. Even this relatively simple combination of video, audio and sub-picture streams requires a high degree of co-ordination and effort during authoring, in order to achieve a finished product such as a feature movie. Hence, due to the laborious and expensive nature of the authoring process there is a strong disincentive that inhibits the development of high-quality audiovisual products according to the DVD-video specification. There is then an even stronger impediment against the development of audiovisual products with complex navigational flow or using high numbers of individual raw content objects.
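Purely as an informal aid, the stream limits quoted above might be checked as in the following Python sketch; the class and function names are hypothetical and do not form part of the DVD-video specification itself.

```python
# Illustrative sketch only; the limits reflect the stream counts mentioned
# above (at least one video object, up to 9 video angle streams, up to 8
# audio streams, up to 32 sub-picture streams).
from dataclasses import dataclass, field

@dataclass
class AVAsset:
    video_streams: list = field(default_factory=list)       # camera angles
    audio_streams: list = field(default_factory=list)       # language tracks
    subpicture_streams: list = field(default_factory=list)  # e.g. subtitles

def check_stream_limits(asset: AVAsset) -> list:
    """Return a list of problems; an empty list means the asset is within limits."""
    problems = []
    if not (1 <= len(asset.video_streams) <= 9):
        problems.append("need 1 to 9 video streams")
    if len(asset.audio_streams) > 8:
        problems.append("at most 8 audio streams")
    if len(asset.subpicture_streams) > 32:
        problems.append("at most 32 sub-picture streams")
    return problems

asset = AVAsset(video_streams=["main"], audio_streams=["en", "fr"],
                subpicture_streams=["en-subs"])
print(check_stream_limits(asset))   # [] -> within the limits
```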
Conveniently, the authoring method of the present invention is implemented as a program, or a suite of programs. The program or programs are recorded on any suitable recording medium, including a removable storage such as a magnetic disk, hard disk or solid state memory card, or as a signal modulated onto a carrier for transmission on any suitable data network, such as the internet.
In use, the authoring method is suitably performed on a computing platform, ideally a general purpose computing platform such as a personal computer or a client-server computing network. Alternatively, the method may be implemented, wholly or at least in part, by dedicated authoring hardware.
As shown in Figure 1, the authoring method of the preferred embodiment of the present invention comprises three main stages, namely: creating a high-level abstraction (or storyboard) representing functional sections of a desired audiovisual product in step 101; automatically evaluating the high-level abstraction to create a fully expanded intermediate structure and a set of AV assets in step 102; and creating an output datastructure compliant with a DVD-video specification using the expanded intermediate structure and AV assets in step 103. Suitably, the output datastructure is then recorded onto a recording medium, in this case being a blank optical disc, to create a DVD-video product.
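As an editorial illustration only, the three stages of Figure 1 might be pictured as the following Python sketch, in which every function name is a hypothetical stand-in rather than the disclosed implementation.

```python
# Illustrative sketch only; all names are hypothetical stand-ins for the
# three stages (steps 101, 102 and 103) of Figure 1.

def create_abstraction(raw_content):
    """Step 101: build a high-level storyboard of components and transitions."""
    return {"components": [{"label": c} for c in raw_content], "transitions": []}

def evaluate_abstraction(abstraction):
    """Step 102: expand the abstraction into AV assets and a node/link structure."""
    assets = [f"asset for {c['label']}" for c in abstraction["components"]]
    nodes = list(range(len(assets)))
    return assets, {"nodes": nodes, "links": []}

def create_output_datastructure(assets, expanded):
    """Step 103: map assets and nodes onto the predetermined output format."""
    return {"titles": list(zip(expanded["nodes"], assets))}

disc_image = create_output_datastructure(*evaluate_abstraction(
    create_abstraction(["welcome clip", "question 1"])))
print(disc_image)
```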
The method outlined in Figure 1 will now be explained in more detail.
Firstly, looking at the step 101 of Figure 1, the high-level abstraction is created by forming a plurality of components that implicitly represent functional elements of a desired DVD-video product, and a set of transitions that represent movements between the components that will occur during playback.
Figure 2 is a schematic diagram showing a simple abstraction of a desired audiovisual product. In the example of Figure 2 there are three components 201, linked by two transitions 202. The components 201 represent functional elements of the desired audiovisual product, where one or more portions of AV content (combinations of video clips, audio clips, etc) are to be reproduced during playback. The transitions 202 indicate legitimate ways in which the product moves from one component to another during playback. In the example of Figure 2, the transitions 202 are all explicitly defined. Suitably, each transition 202 is associated with an event 203, which indicates the circumstances giving rise to that transition. An event 203 is a triggering action such as the receipt of a user command, or the expiry of a timer, that influences movement through the sections of AV content during playback. Referring to Figure 2, starting from a particular component A, and given all possible actions, exactly one event 203 will be satisfied, allowing a transition 202 from the current component A to a next component B or C.
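The abstraction of Figure 2 can be pictured, purely informally, as a small data structure of components, transitions and events; the following Python sketch uses hypothetical labels and is not part of the disclosure.

```python
# Illustrative sketch only; component labels and event names are hypothetical.
components = {"A": "welcome clip", "B": "main menu", "C": "help page"}

# Each transition names its source, destination and triggering event (203).
transitions = [
    {"from": "A", "to": "B", "event": "timer_expired"},
    {"from": "A", "to": "C", "event": "user_pressed_help"},
]

def next_component(current: str, event: str) -> str:
    """Given all possible actions, exactly one event is satisfied, so at most
    one transition out of the current component matches."""
    matches = [t["to"] for t in transitions
               if t["from"] == current and t["event"] == event]
    assert len(matches) <= 1, "transitions out of a component must not overlap"
    return matches[0] if matches else current

print(next_component("A", "timer_expired"))      # -> B
print(next_component("A", "user_pressed_help"))  # -> C
```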
The preferred embodiment allows for three different types of component. These are an information component, a choice component and a meta-component.
An information component represents what will in due course become a single AV asset in the desired audiovisual product. Suitably, an information component simply comprises a reference to a raw content object or collection of raw content objects (i.e. raw video and audio clips) that will be used to create an AV asset in the audiovisual product. For example, an information component refers to a welcome sequence that is displayed when the DVD-video product is played in a DVD-video player. The same welcome sequence is to be played each time playback begins. It is desired to display the welcome sequence, and then proceed to the next component. An information component (which can also be termed a simple component) is used principally to define presentation data in the desired DVD-video product.
A choice component represents what will become a plurality of AV assets in the desired audiovisual product. In the preferred embodiment, the choice component (alternatively termed a multi-component) comprises a reference to at least one raw content object, and one or more parameters. Here, for example, it is desired to present a welcome sequence in one of a plurality of languages, dependent upon a language parameter. That is, both a speaker's picture (video stream) and voice track (audio stream) are changed according to the desired playback language. Conveniently, a choice component is used to represent a set of desired AV assets in the eventual audiovisual product, where a value of one or more parameters is used to distinguish between each member of the set. Hence, a choice component represents mainly presentation data in a desired DVD-video product, but also represents some navigational structure (i.e. selecting amongst different available AV assets according to a language playback parameter).
A meta-component comprises a procedurally-defined structure representing a set of information components and/or a set of choice components, and associated transitions. Conveniently, a meta-component may itself define subsidiary meta-components. A meta-component is used principally to define navigational structure in the desired audiovisual product, by representing other components and transitions.
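Again purely as an illustration, the three component types might be modelled along the following lines; the class and field names are hypothetical.

```python
# Illustrative sketch only; all names are hypothetical.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class InformationComponent:
    """Refers to raw content that becomes exactly one AV asset."""
    content_refs: List[str]                                # e.g. ["welcome.mpg"]

@dataclass
class ChoiceComponent:
    """Refers to raw content plus parameters; becomes one asset per parameter value."""
    content_pattern: str                                   # e.g. "welcome_{language}.mpg"
    parameters: dict = field(default_factory=dict)         # e.g. {"language": ["en", "fr"]}

@dataclass
class MetaComponent:
    """Procedurally defines further components and transitions."""
    generator: Callable[[], tuple]                         # returns (components, transitions)

welcome = ChoiceComponent("welcome_{language}.mpg", {"language": ["en", "fr", "de"]})
print(welcome)
```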
Figure 3 shows a choice component or information component 201 in more detail. The component is reached by following one of a set of incoming transitions 202, labelled Ti(1...n), and is left by following one of a set of outgoing transitions To(1...m).
The component 201 is defined with reference to zero or more parameters 301, which are used only during the authoring process. However, the component may also be defined with reference to zero or more runtime variables 302. Each variable 302 records state information that can be read and modified within the scope of each component, during playback of the audiovisual product such as in a standard DVD player. Conveniently, the component 201 is provided with a label 303 for ease of handling during the authoring process.
The component 201 contains references to one or more items of content 304. The items of content are raw multi-media objects (still picture images, video clips, audio clips, text data, etc.) recorded in one or more source storage systems such as a file system, database, content management system, or asset management system, in any suitable format such as .gif, .tif, .bmp, .txt, .rtf, .jpg, .mpg, .qtf, .mov, .wav, .rm, .qtx, amongst many others. It will be appreciated that these raw content items are not necessarily at this stage in a format suitable for use in the DVD-video specification, which demands that video, audio and sub-picture objects are provided in selected predetermined formats (i.e. MPEG).
Each component 201 uses the references as a key or index which allows that item of content to be retrieved from the source storage systems. The references may be explicit (e.g. an explicit file path), or may be determined implicitly, such as with reference to values of the parameters 301 and/or variables 302 (i.e. using the parameters 301 and/or variables 302 to construct an explicit file path).
Conveniently, the component 201 also comprises a reference to a template 305. The template 305 provides, for example, a definition of presentation, layout, and format of a desired section of AV content to be displayed on screen during playback. A template 305 draws on one or more items of content 304 to populate the template. Typically, one template 305 is provided for each component 201. However, a single template 305 may be shared between plural components 201, or vice versa. A template 305 is provided in any suitable form, conveniently as an executable program, a plug-in or an active object. A template is conveniently created using a programming language such as C++, Visual Basic, Shockwave or Flash, or by using a script such as HTML or Python, amongst many others. Hence, it will be appreciated that a template allows a high degree of flexibility in the creation of AV assets for a DVD-video product. Also, templates already created for other products (such as a website) may be reused directly in the creation of another form of audiovisual product, in this case a DVD-video product.
The parameters 301, runtime variables 302, content items 304 and template 305 together allow one or more AV assets to be produced for use in the desired audiovisual product. Advantageously, creating a component 201 in this parameterised form allows a large plurality of AV assets to be represented simply and easily by a single component.
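To illustrate how a single parameterised component can stand for many AV assets, the following hypothetical Python sketch expands a choice component over its parameter values using a simple template function; it is an informal aid only, not the disclosed implementation.

```python
# Illustrative sketch only; the template and parameter names are hypothetical.
from itertools import product

def expand_choice_component(content_pattern, parameters, template):
    """Yield one asset description per combination of authoring parameter values."""
    names, value_lists = zip(*parameters.items()) if parameters else ((), ())
    for values in product(*value_lists):
        bindings = dict(zip(names, values))
        content_ref = content_pattern.format(**bindings)   # implicit file path
        yield template(content_ref, bindings)

def welcome_template(content_ref, bindings):
    # A template defines presentation and layout; here it is just a dict.
    return {"video": content_ref, "caption": f"Welcome ({bindings['language']})"}

assets = list(expand_choice_component(
    "welcome_{language}.mpg", {"language": ["en", "fr", "de"]}, welcome_template))
print(len(assets))   # 3 assets represented by a single choice component
```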
To illustrate the power and advantages of creating components 201 and transitions 202 as described above, reference will now be made to Figure 4 which compares a typical prior art method for authoring an audiovisual product against the preferred embodiment of the present invention. In this example it is desired to develop an audiovisual product which allows the user to play a simple quiz game.
In Figure 4a, each AV asset 401 which it is desired to present in the eventual audiovisual product must be created in advance, and navigation between the assets defined using navigation links represented by arrows 402. Here, the game involves answering a first question, and, if correct, then answering a second question. The answer to each question is randomised at runtime using a runtime variable such that one of answers A, B and C is correct, whilst the other two are incorrect. In this simple example of Figure 4a it can be seen that a large number of assets need to be created, with an even greater number of navigational links. Hence, the process is relatively expensive and time consuming, and is prone to errors.
Figure 4b shows an abstraction, using components and transitions as described herein, for an equivalent quiz game. It will be appreciated that the abstraction shown in Figure 4b remains identical even if the number of questions increases to ten, twenty or even fifty questions, whereas the representation in Figure 4a becomes even more complex as each question is added.
Figure 5 shows another example abstraction using components and transitions. Figure 5 illustrates an example abstraction for an audiovisual product that will contain a catalogue of goods sold by a retail merchant. A welcome sequence is provided as an information component 201a. Choice components 201b are used to provide a set of similar sections of AV content such as summary pages of product information, or pages of detailed product information including photographs or moving video, for each product in the catalogue. Here, the catalogue contains, for example, of the order of one thousand separate products, each of which will result in a separate AV asset in the desired DVD-video product. Meta-components 201c provide functions such as the selection of products by category, name or by part code. These meta-components are procedurally defined.
Figure 6 shows a tabular representation for the abstraction shown in schematic form in Figure 5.
In use, the authoring method and apparatus suitably presents a convenient user interface for creating components and transitions of the high-level abstraction. Ideally, a graphical user interface is provided allowing the definition of components, transitions and events, similar to the schematic diagram of Figure 5. Most conveniently, the user interface provides for the graphical creation of components such as by drawing boxes and entering details associated with those boxes, and defining transitions by drawing arrows between the boxes and associating events with those arrows. Alternatively, a tabular textual interface is provided similar to the table of Figure 6.
Referring again to Figure 1, the abstraction created in step 101 is itself a useful output. The created abstraction may be stored for later use, or may be transferred to another party for further work. However, in most cases the authoring method is used to automatically create a final audiovisual product, such as a DVD-video product, from the abstraction.
Referring to Figure 1, the method optionally includes the step 104 of checking for compliance with a DVD specification. It is desired to predict whether the resulting DVD-video product will conform to a desired output specification, in this case the DVD-video specification. For example, the DVD-video specification has a hierarchical structure with strict limits on a maximum number of objects that may exist at each level, and limits on the maximum quantity of data that can be stored on a DVD-video disc.
In one embodiment, the checking step 104 is performed using the created components 201 and transitions 202. As discussed above, the components 201 contain references to raw AV content objects 304 and templates 305, and authoring parameters 301, 302, that allow AV assets to be produced. The checking step 104 comprises predicting a required number of objects at each level of the hierarchical structure, by considering the number of potential AV assets that will be produced given the possible values of the authoring parameters (i.e. authoring-only parameters 301 and runtime variables 302), and provides an indication of whether the limits for the maximum number of objects will be exceeded. Similarly, where a component defines a set of similar AV assets, then it is useful to predict the physical size of those assets, and so check that the audiovisual product is expected to fit within the available capacity of a DVD disc. Advantageously, the conformance check of step 104 is performed without a detailed realisation of every AV asset, whilst providing an operator with a reasonably accurate prediction of expected conformance. If non-conformance is predicted, the operator may then take steps, at this early stage, to remedy the situation. As a result, it is possible to avoid unnecessary time and expense in the preparation of a full audiovisual product which is non-conformant.
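The prediction made in the checking step 104 might be pictured, informally, as multiplying out the possible parameter values per component, as in the hypothetical sketch below; the limits and component format shown are illustrative stand-ins for the hierarchical limits of the DVD-video specification.

```python
# Illustrative sketch only; the limits and component format are hypothetical
# stand-ins for the hierarchical limits of the DVD-video specification.
from math import prod

MAX_OBJECTS = 99 * 999                # e.g. title sets x program chains
DISC_CAPACITY_BYTES = 4_700_000_000   # single-sided single-layer disc

def predict_conformance(components):
    """Estimate asset count and total size without realising any asset."""
    predicted_assets = 0
    predicted_bytes = 0
    for comp in components:
        # Number of potential assets = product of parameter value counts.
        count = prod(len(values) for values in comp["parameters"].values())
        predicted_assets += count
        predicted_bytes += count * comp["estimated_asset_bytes"]
    conforms = (predicted_assets <= MAX_OBJECTS
                and predicted_bytes <= DISC_CAPACITY_BYTES)
    return conforms, predicted_assets, predicted_bytes

catalogue_page = {"parameters": {"product_id": list(range(1000))},
                  "estimated_asset_bytes": 3_000_000}
print(predict_conformance([catalogue_page]))
```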
As shown in Figure 1, in step 102 the components 201 and transitions 202 of the high-level abstraction 200 are automatically evaluated and expanded to create AV assets and an intermediate datastructure of nodes and links. Figure 7 shows the step 102 of Figure 1 in more detail.
The components 201 and transitions 202 may be evaluated in any order, but it is convenient to first evaluate the components, and then to evaluate the transitions. Ideally, any meta-components in the abstraction are evaluated first. Where a meta-component results in new components and transitions, these are added to the abstraction, until all meta-components have been evaluated, leaving only information components and parameterised choice components.
An expanded intermediate datastructure is created to represent the abstract components 201 and transitions 202 in the new evaluated form. This expanded datastructure comprises branching logic derived from the events 203 attached to the transitions 202 (which will eventually become navigation data in the desired audiovisual product) and nodes associated with AV assets derived from the components 201 (which will eventually become presentation data in the audiovisual product). However, it is not intended that the expanded datastructure is yet in a suitable form for creating an audiovisual product in a restricted format such as a DVD-video product, since at this stage there is no mapping onto the hierarchical structure and other limitations of the DVD-video specification.
Figure 8 shows step 701 of Figure 7 in more detail, to explain the preferred method for evaluating the components 201. As shown in Figure 8, each information component 201a and each choice component 201b is selected in turn in step 801. Each component 201 is evaluated to provide one or more AV assets in step 802. In an information component, this evaluation comprises creating an AV asset from the referenced raw content objects 304. In a choice component, this evaluation step suitably comprises evaluating a template 305 and one or more raw content objects 304 according to the authoring parameters 301/302, to provide a set of AV assets. Suitably, a node in the expanded datastructure is created to represent each AV asset, at step 803. At step 804, entry logic and/or exit logic is created to represent a link to or from each node such that each AV asset is reached or left under appropriate runtime conditions.
Figure 9 shows a preferred method for evaluating transitions in step 702 of Figure 7.
Each transition 202 is selected in any suitable order in step 901. In step 902 the conditions of the triggering event 203 associated with a particular transition 202 are used to create entry and/or exit logic for each node of the expanded datastructure. In step 903 explicit links are provided between the nodes.
Figure 10 is a schematic illustration of a component 201 during evaluation to create a set of nodes 110 each associated with an AV asset 120, together with entry logic 132 and exit logic 134, defining movement between one node 110 and the next. The entry logic 132 and exit logic 134 reference runtime variables 302 which are available during playback (e.g. timer events, player status, and playback states), and the receipt of user commands. Conveniently, the evaluation step consumes each of the authoring-only parameters 301 associated with the abstract components 201, such that only the runtime variables 302 and runtime actions such as timer events and user commands remain.
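A rough, hypothetical sketch of how transitions and their triggering events might be evaluated into explicit links with associated runtime conditions is given below; it is illustrative only and not the disclosed implementation.

```python
# Illustrative sketch only; node and event names are hypothetical.
def evaluate_transitions(nodes_by_component, transitions):
    """Turn abstract transitions into explicit links with runtime conditions.

    nodes_by_component maps a component label to the nodes created from it;
    each resulting link records the condition under which it is followed.
    """
    links = []
    for t in transitions:
        for src in nodes_by_component[t["from"]]:
            for dst in nodes_by_component[t["to"]]:
                links.append({
                    "from_node": src,
                    "to_node": dst,
                    # Exit logic for the source node: follow this link when
                    # the triggering event's runtime condition is satisfied.
                    "condition": t["event"],
                })
    return links

nodes_by_component = {"Question": ["q1", "q2"], "WellDone": ["well_done"]}
transitions = [{"from": "Question", "to": "WellDone", "event": "answer_correct"}]
print(evaluate_transitions(nodes_by_component, transitions))
```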
Referring again to Figure 1, a conformance checking step 105 may, additionally or alternatively to the checking step 104, be applied following the evaluation step 102. Evaluation of the abstraction in step 102 to produce the expanded datastructure 100 allows a more accurate prediction of expected compliance with a particular output specification. In particular, each node of the expanded datastructure represents one AV asset, such that the total number of AV assets and object locations can be accurately predicted, and the set of AV assets has been created, allowing an accurate prediction of the capacity required to hold these assets. Conveniently, information about conformance or non-conformance is fed back to an operator. Changes to the structure of the product can then be suggested and made in the abstraction, to improve compliance.
Referring to Figure 1, in step 103 the expanded datastructure from step 102 is used to create an audiovisual product according to a predetermined output format, in this case by creating specific structures according to a desired DVD-video specification.
Figure 11 shows an example method for creation of the DVD video structures. In step 1101, the nodes 110 in the expanded datastructure are placed in a list, such as in an order of the abstract components 201 from which those nodes originated, and in order of the proximity of those components to adjacent components in the abstraction. As a result, jumps between DVD video structure locations during playback are minimised and localised, in order to improve playback speed and cohesion.
Each node is used to create a DVD video structure location at step 1102. Optionally at step 1103 if the number of created DVD video structure locations exceeds the specified limit set by the DVD-video specification then creation is stopped at 1104, and an error reported. Assuming the number of structures is within the specified limit then DVD video compatible datastructures are created at step 1105. Finally, a DVD video disc image is created at step 1106. Conveniently, commercially available tools are used to perform step 1106, and need not be described in detail here.
Step 1102 is illustrated in more detail in Figure 12. In this example variable T represents a number of a video title set VTS (i.e. from 1-99) whilst variable P represents a program chain PGC (i.e. from 1-999) within each video title set. As shown in Figure 12 the nodes 110 of the expanded datastructure 100 are used to define locations in the video title sets and program chains. As the available program chains within each video title set are consumed, then the locations move to the next video title set. Here, many alternate methods are available in order to optimise allocation of physical locations to the nodes of the expanded datastructure.
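The allocation described with reference to Figure 12 might be sketched, purely illustratively, as follows, using the limits quoted above (T from 1-99, P from 1-999); the node identifiers are hypothetical.

```python
# Illustrative sketch only; node identifiers are hypothetical.
MAX_VTS = 99        # video title sets, T = 1..99
MAX_PGC = 999       # program chains per title set, P = 1..999

def allocate_locations(ordered_nodes):
    """Assign each node a (video title set, program chain) location in order,
    moving to the next title set once its program chains are consumed."""
    if len(ordered_nodes) > MAX_VTS * MAX_PGC:
        raise ValueError("too many structure locations for the specification")
    locations = {}
    for index, node in enumerate(ordered_nodes):
        title_set = index // MAX_PGC + 1        # T
        program_chain = index % MAX_PGC + 1     # P
        locations[node] = (title_set, program_chain)
    return locations

print(allocate_locations([f"node{i}" for i in range(5)]))
```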
Step 1105 of Figure 11 is illustrated in more detail in Figure 13. Figure 13 shows a preferred method for creating DVD-video compatible datastructures by placing the AV assets 120 associated with each node 110 in the structure location assigned for that node, and substituting links between the nodes with explicit references to destination locations. At step 1307 this results in an explicit DVD compatible datastructure which may then be used to create a DVD disc image. Finally, the DVD disc image is used to record a DVD disc as a new audiovisual product.
The DVD authoring method and apparatus described above have a number of advantages. Creating components that represent parameterised sections of audiovisual content allows many individual AV assets to be implicitly defined and then automatically created. Repetitive manual tasks, which were previously time consuming, expensive and error-prone, are avoided. The authoring method and apparatus significantly enhance the range of features available in existing categories of audiovisual products such as movie presentations. They also allow new categories of audiovisual products to be produced. These new categories include both entertainment products such as quiz-based games and puzzle-based games, as well as information products such as catalogues, directories, reference guides, dictionaries and encyclopaedias. In each case, the authoring method and apparatus described herein allow full use of the video and audio capabilities of DVD specifications such as DVD-video. A user may achieve playback using a standard DVD player with ordinary controls such as a remote control device. A DVD-video product having highly complex navigational content is readily created in a manner which is simple, efficient, cost effective and reliable.
Although a few preferred embodiments have been shown and described, it will be appreciated by those skilled in the art that various changes and modifications might be made without departing from the scope of the invention, as defined in the appended claims.
Claims
1. An authoring method for use in creating an audiovisual product, comprising the steps of:
defining a plurality of components, the components implicitly representing functional sections of audiovisual content with respect to one or more raw content objects, and a plurality of transitions that represent movements between the plurality of components;
expanding the plurality of components and the plurality of transitions to provide a set of explicitly realised AV assets and an expanded intermediate datastructure of nodes and links, where each node is associated with an AV asset of the set and the links represent movement from one node to another; and
creating an audiovisual product in a predetermined output format, using the AV assets and the expanded intermediate datastructure of the nodes and the links.
2. The method of claim 1, wherein the defining step comprises defining at least one information component that comprises a reference to a raw content object.
3. The method of claim 2, wherein the reference denotes a file path to a location where the raw content object is stored.
4. The method of any preceding claim, wherein the defining step comprises defining at least one choice component comprising a reference to at least one raw content object, and at least one authoring parameter.
5. The method of claim 4, wherein the at least one authoring parameter is adapted to control a selection or modification of the at least one raw content object.
6. The method of claim 4 or 5, wherein the at least one authoring parameter comprises a runtime variable available during playback of the audiovisual product.
7. The method of claim 4, 5 or 6, wherein the at least one authoring parameter comprises an authoring-only parameter that will not be available during playback of the audiovisual product.
8. The method of any of claims 4 to 7, wherein the choice component comprises a reference to a presentation template and a reference to at least one substitutable raw content object to be placed in the template according to the at least one authoring parameter.
9. The method of any preceding claim, wherein the defining step comprises defining at least one meta-component representing a set of components and transitions.
10. The method of claim 9, wherein the at least one meta-component is a procedurally defined representation of the set of components and transitions.
11. The method of any preceding claim, wherein each transition represents a permissible movement from one component to another component.
12. The method of any preceding claim, wherein each transition is associated with a triggering event.
13. The method of claim 12, wherein the triggering event is an event occurring during playback of the audiovisual product.
14. The method of claim 13, wherein the triggering event is receiving a user command, or expiry of a timer.
15. The method of any preceding claim, further comprising the step of checking expected conformance of the audiovisual product with the predetermined output format, using the plurality of components and the plurality of transitions.
16. The method of claim 15, wherein the predetermined output format is a hierarchical datastructure having limitations on a number of objects that may exist in the datastructure at each level of the hierarchy, and the checking step comprises predicting an expected number of objects at a level and comparing the expected number with the limitations of the hierarchical datastructure.
17. The method of claim 15 or 16, wherein the checking step comprises predicting an expected total size of the audiovisual product, and comparing the expected total size against a storage capacity of a predetermined storage medium.
18. The method of any preceding claim, wherein the expanding step comprises, for each component, building one or more of the set of explicitly realised AV assets by reading and manipulating the one or more raw content objects.
19. The method of any preceding claim, wherein:
the defining step comprises defining at least one choice component comprising a reference to a plurality of raw content objects and at least one authoring parameter; and
the building step comprises:
selecting one or more raw content objects from amongst the plurality of raw content objects using the at least one authoring parameter; and
combining the selected raw content objects to form one of the AV assets.
20. The method of claim 19, comprising repeating the selecting and combining steps to automatically build a plurality of the explicitly realised AV assets from the one of the components.
21. The method of any preceding claim, wherein the expanding step comprises:
creating from each one of the plurality of components one or more explicitly realised AV assets to provide the set of AV assets;
creating the expanded intermediate datastructure wherein each node represents one AV asset of the set; and
creating a set of links between the nodes.
22. The method of any preceding claim, wherein each transition is associated between first and second components, and creating the set of links comprises evaluating each transition to create one or more links, each of the links being between a node created from the first component and a node created from the second component.
23. The method of any preceding claim, wherein the expanding step comprises evaluating at least one of the transitions to create exit logic associated with at least one first node, evaluating one of the components to create entry logic associated with at least one second node, and providing a link between the first and second nodes according to the entry logic and the exit logic.
24. The method of claim 23, wherein at least one of the transitions is associated with a triggering event, and the expanding step comprises evaluating the triggering event to determine the exit logic associated with the at least one first node.
25. The method of any preceding claim, further comprising the step of checking expected conformance of the audiovisual product with the predetermined output format, using the AV assets and the expanded intermediate datastructure of nodes and links.
26. The method of claim 25, wherein the predetermined output format is a hierarchical datastructure having limitations on a number of objects that may exist in the datastructure at each level of the hierarchy, and the checking step comprises predicting an expected number of objects at a level and comparing the expected number with the limitations of the hierarchical datastructure.
27. The method of claim 26, wherein the checking step comprises predicting an expected total size of the audiovisual product, and comparing the expected total size against a storage capacity of a predetermined storage medium.
28. The method of any preceding claim, wherein the AV assets have a data format specified according to the predetermined output format.
29. The method of any preceding claim, wherein the AV assets each have a data format according to the predetermined output format, whilst the raw content objects are not limited to a data format of the predetermined output format.
30. The method of any preceding claim, wherein the predetermined output format is a DVD-video specification.
31. The method of any preceding claim, wherein the AV assets each comprise a video object, zero or more audio objects, and zero or more sub-picture objects.
32. The method of any preceding claim, wherein the AV assets each comprise at least one video object, zero to eight audio objects, and zero to thirty-two sub-picture objects, according to the DVD-video specification.
33. The method of any preceding claim, wherein the creating step comprises creating objects in a hierarchical datastructure defined by the predetermined output format with objects at levels of the datastructure, according to the intermediate datastructure of nodes and links, and where the objects in the hierarchical datastructure include objects derived from the explicitly realised AV assets.
34. The method of any preceding claim, wherein the predetermined output format is a DVD-video specification and the creating step comprises creating DVD-video structure locations from the nodes of the expanded intermediate datastructure, placing the explicitly realised AV assets at the created structure locations, and substituting the links of the expanded intermediate datastructure with explicit references to the DVD-video structure locations.
35. An authoring method for use in creating a DVD-video product, comprising the steps of:
creating a plurality of components representing parameterised sections of audiovisual content, and a plurality of transitions representing movements between components;
expanding the plurality of components and the plurality of transitions to provide a set of AV assets and an expanded datastructure of nodes and links, where each node is associated with an AV asset of the set and the links represent movement from one node to another; and
creating a DVD-video format datastructure from the AV assets, using the nodes and links.
36. The method of claim 35, comprising creating at least one information component comprising a reference to an item of AV content.
37. The method of claim 35, comprising creating at least one choice component comprising a reference to at least one item of AV content, and at least one parameter for modifying the item of AV content.
38. The method of claim 37, wherein the choice component comprises a reference to a presentation template and a reference to at least one item of substitutable content to be placed in the template according to the at least one parameter.
39. The method of claim 37 or 38, wherein the choice component comprises at least one runtime variable available during playback of an audiovisual product in a DVD player, and at least one authoring parameter not available during playback.
40. The method of any of claims 35 to 39, comprising creating at least one meta- component representing a set of components and transitions.
41. The method of any of claims 35 to 40, wherein each transition represents a permissible movement from one component to another component, each transition being associated with a triggering event.
42. The method of claim 41, wherein a triggering event includes receiving a user command, or expiry of a timer.
43. The method of any of claims 35 to 42, wherein the expanding step comprises:
creating from each one of the plurality of components one or more AV assets to provide the set of AV assets;
creating the expanded datastructure wherein each node represents one AV asset of the set; and
creating a set of links between the nodes.
44. The method of claim 37 or any claim dependent thereon, wherein the expanding step comprises evaluating each choice component to create a plurality of AV assets according to each value of the at least one parameter.
45. The method of claim 44, wherein evaluating each choice component comprises creating entry logic associated with at least one node and/or evaluating at least one transition to create exit logic associated with at least one node, and providing a link between a pair of nodes according to the entry logic and the exit logic.
46. The method of any of claims 35 to 45, comprising the step of checking expected conformance with the DVD-video format using the created components and transitions.
47. The method of any of claims 35 to 40, comprising the step of checking expected conformance with the DVD-video format using the set of AV assets and the expanded datastructure of nodes and links.
48. An authoring method for use in creating an audiovisual product according to a DVD-video specification, comprising the steps of:
generating a set of AV assets each comprising a video object, zero or more audio objects and zero or more sub-picture objects, and an expanded datastructure of nodes and links, where each node is associated with one AV asset of the set and the links represent navigational movement from one node to another; and
creating a DVD-video format datastructure from the set of AV assets, using the nodes and links;
the method characterised by the steps of:
creating a plurality of components and a plurality of transitions, where a component implicitly defines a plurality of AV assets by referring to a presentation template and to items of raw content substitutable in the presentation template, and the plurality of transitions represent navigational movements between components; and
expanding the plurality of components and the plurality of transitions to generate the set of AV assets and the expanded datastructure of nodes and links.
49. A recording medium having recorded therein computer implementable instructions for performing the method of any of claims 1 to 34.
50. A recording medium having recorded therein computer implementable instructions for performing the method of any of claims 35 to 47.
51. A recording medium having recorded therein computer implementable instructions for performing the method of claim 48.
52. A recording medium having recorded thereon an audiovisual product authored according to the method of any of claims 1 to 34.
53. An optical disk recording medium having recorded thereon an audiovisual product authored according to the method of any of claims 35 to 47.
54. An optical disk recording medium having recorded thereon an audiovisual product authored according to the method of claim 48.
55. An authoring method for use in creating an audiovisual product, substantially as hereinbefore described with reference to the accompanying drawings.
57. An authoring method for use in creating an audiovisual product according to a DVD-video specification, substantially as hereinbefore described with reference to the accompanying drawings.
57. An authoring method for use in creating an audiovisual product according to a DVD-video specification, substantially as hereinbefore described with reference to the accompanying drawings.
APPENDIX
ABSTRACT
AUTHORING OF COMPLEX AUDIOVISUAL PRODUCTS
An authoring method for creating an audiovisual product. The method has three main stages. The first stage defines components implicitly representing functional sections of audiovisual content and transitions that represent movements between components. The second stage expands the components and transitions to provide a set of explicitly realised AV assets and an expanded intermediate datastructure of nodes and links. Each node is associated with one of the AV assets and the links represent movement from one node to another. The third stage creates the audiovisual product in a predetermined output format, using the AV assets and the expanded intermediate datastructure of the nodes and the links.
[Figure 1]

Claims

1. An asset authoring method comprising the steps of providing a data structure comprising data defining a menu structure having at least one menu having a respective number of menu items associated with a number of defined views of, or actions in relation to, a general visual asset; providing a visual asset; and creating a number of visual assets using at least one of the visual asset provided and the data of the data structure; the visual assets created corresponding to respective views of the defined views of the visual asset provided or reflecting respective actions of the defined actions in relation to the visual asset provided.
2. An asset authoring method as claimed in claim 1 in which the step of providing the visual asset comprises the step of providing at least one of image data and a video sequence.
3. An asset authoring method as claimed in any preceding claim in which the step of creating the number of visual assets comprises the step of deriving data from the provided visual asset to produce the number of visual assets.
4. An asset authoring method as claimed in claim 3 in which the step of deriving data from the provided visual asset comprises the step of copying data from the provided visual asset.
5. An asset authoring method as claimed in claim 3 in which the step of deriving data from the provided visual asset comprises the step of processing the data of the visual asset such that the number of visual assets comprises respective modified data of the provided visual asset.
6. An asset authoring method as claimed in any preceding claim in which the step of creating the number of visual assets comprises the step of including, in selected visual assets of the number of visual assets, visual data representing views of selected menu items of the number of menu items.
7. An asset authoring method as claimed in any preceding claim in which the step of creating the number of visual assets comprises the step of creating sub-picture data comprising data for at least one selectable graphical element associated with a respective menu item.
8. An asset authoring method as claimed in claim 7 in which the step of creating the sub-picture data comprises the step of creating, or providing, a number of selectable graphical elements associated with respective menu items.
9. An asset authoring method as claimed in claim 8 in which the step of creating the sub-picture data comprises the step of creating a mask for selectively displaying the number of selectable graphical elements.
10. An asset authoring method as claimed in any preceding claim in which the step of creating the number of visual assets comprises the steps of associating a visual asset processing operation with selected menu items of the menu items; and deriving the data for the number of visual assets from the provided visual asset using respective visual asset processing operations.
11. An asset authoring method as claimed in any preceding claim in which the step of providing the data structure comprises the step of defining image data or video data associated with a plurality of views of the menu.
12. An asset authoring method as claimed in claim 11 in which the step of defining image data or video data associated with the plurality of views of the menu comprises the step of creating image data or video data such that the plurality of views of the menu represent progressively expanding or contracting views of the menu.
13. An asset authoring method as claimed in any preceding claim, further comprising the step of creating navigational data associated with, or linking, the number of visual assets according to the menu structure to allow the number of visual assets to be accessed, played or displayed according to the menu structure.
14. An asset authoring method as claimed in any preceding claim, further comprising the step of providing a first number or plurality of visual assets; and creating, automatically, a second number of visual assets using the plurality of visual assets; the created visual assets corresponding to respective views of the defined views or to respective actions of the defined actions according to the menu structure.
15. An asset authoring method as claimed in any preceding claim in which the step of providing the visual assets comprises the step of providing an audio-visual asset.
16. An asset authoring method substantially as described herein with reference to and/or as illustrated in any of figures 4 to 16 of the accompanying drawings.
17. An asset authoring system comprising means to provide a data structure comprising data defining a menu structure having at least one menu having a respective number of menu items associated with a number of defined views of, or actions in relation to, a general visual asset; means to provide a visual asset; means to create, automatically, a number of visual assets using at least one of the visual assets provided and the data of the data structure; the visual assets created corresponding to respective views of the defined views of the visual asset provided or reflecting respective actions of the defined actions in relation to the visual asset provided.
18. An asset authoring system as claimed in claim 17 in which the means to provide the visual asset comprises means to provide at least one of image data and a video sequence.
19. An asset authoring system as claimed in either of claims 17 and 18 in which the means to create the number of visual assets comprises means to derive data from the provided visual asset to produce the number of visual assets.
20. An asset authoring system as claimed in claim 19 in which the means to derive data from the provided visual asset comprises means to copy data from the provided visual asset.
21. An asset authoring system as claimed in claim 19 in which the means to derive data from the provided visual asset comprises means to process the data of the visual asset such that the number of visual assets comprises respective modified data of the provided visual asset.
22. An asset authoring system as claimed in any of claims 17 to 21 in which the means to create the number of visual assets comprises means to include, in selected visual assets of the number of visual assets, visual data representing views of selected menu items of the number of menu items.
23. An asset authoring system as claimed in any of claims 17 to 22 in which the means to create the number of visual assets comprises means to create sub-picture data comprising data for at least one selectable graphical element associated with a respective menu item.
24. An asset authoring system as claimed in claim 23 in which the means to create the sub-picture data comprises means to create, or provide, a number of selectable graphical elements associated with respective menu items.
25. An asset authoring system as claimed in claim 24 in which the means to create the sub-picture data comprises means to create a mask for selectively displaying the number of selectable graphical elements.
26. An asset authoring system as claimed in any of claims 17 to 25 in which the means to create the number of visual assets comprises means to associate a visual asset processing operation with selected menu items of the menu items; and means to derive the data for the number of visual assets from the provided visual asset using respective visual asset processing operations.
27. An asset authoring system as claimed in any of claims 17 to 26 in which the means to provide the data structure comprises means to define image data or video data associated with a plurality of views of the menu.
28. An asset authoring system as claimed in claim 27 in which the means to define the image data or the video data associated with the plurality of views of the menu comprises the means to create the image data or the video data such that the plurality of views of the menu represent progressively expanding or contracting views of the menu.
29. An asset authoring system as claimed in any of claims 17 to 28 further comprising means to create navigational data associated with, or linking, the number of visual assets according to the menu structure to allow the number of visual assets to be accessed, played or displayed according to the menu structure.
30. An asset authoring system as claimed in any of claims 17 to 29 further comprising means to provide a first number or plurality of visual assets; and means to create, automatically, a second number of visual assets using the plurality of visual assets; the created visual assets corresponding to respective views of the defined views or to respective actions of the defined actions according to the menu structure.
31. An asset authoring system as claimed in any of claims 17 to 30 in which the means to provide the visual assets comprises means to provide an audio-visual asset.
32. An asset authoring system substantially as described herein with reference to and/or as illustrated in any of figures 4 to 16 of the accompanying drawings.
33. A system for authoring visual content; the system comprising means to create a video sequence comprising data to display a progressively expanding menu comprising a number of menu items following invocation of a selected menu item or receipt of a user-generated event and data derived from or associated with at least one of image data and a video sequence.
34. A system for authoring visual content; the system comprising means to create a video sequence comprising data to display a progressively contracting menu comprising a number of menu items following invocation of a selected menu item or receipt of a user-generated event.
35. A system as claimed in either of claims 33 and 34, further comprising means to generate sub-picture graphical elements for each menu item; each sub-picture graphical element having associated position data to position the elements in a predetermined position relative to corresponding menu items when rendered and data derived from or associated with at least one of image data or a video sequence.
36. A system as claimed in any of claims 33 to 35 in which the progressively varying menu represents a pull-down menu.
37. A computer program comprising computer executable code to implement a system or method as claimed in any preceding claim.
38. A computer program product comprising computer readable storage storing a computer program as claimed in claim 37.
39. A storage medium comprising at least visual content authored using a method, system, computer program or computer program product as claimed in any preceding claim.
40. A storage medium comprising data representing a video sequence comprising data to display a progressively variable or dynamic menu comprising a number of menu items following invocation of a selected menu item or receipt of a user-generated event; and data representing sub-picture graphical elements for each menu item; each sub-picture graphical element having associated position data to mask the elements in predetermined positions relative to corresponding menu items when rendered in response to a user-generated event.
41. A storage medium as claimed in either of claims 39 and 40 in which the storage medium is an optical medium.
42. A storage medium as claimed in claim 41 in which the optical medium is a DVD product.
43. A storage medium as claimed in either of claims 39 and 40 in which the storage medium is a magnetic medium.
44. A storage medium as claimed in claim 43 in which the storage medium is a digital linear tape.
45. A system to manufacture a DVD product; the system comprising means to create a data carrier comprising data representing a video sequence comprising data to display a progressively variable or dynamic menu comprising a number of menu items following invocation of a selected menu item or receipt of a user-generated event; and data representing sub-picture graphical elements for each menu item; each sub-picture graphical element having an associated maskable position relative to corresponding menu items when rendered in response to a user-generated event.
46. A system to manufacture a DVD product; the system comprising means to read a data carrier comprising data representing at least the set of visual assets created using a method, system, computer program, computer program product or storage medium as claimed in any preceding claim; and means to materially produce the DVD product using the data stored on the data carrier.
47. A DVD product comprising data representing a video sequence comprising data to display a progressively variable or dynamic menu comprising a number of menu items following invocation of a selected menu item or receipt of a user-generated event; and data representing sub-picture graphical elements for each menu item; each sub-picture graphical element having an associated maskable position relative to corresponding menu items when rendered in response to a user-generated event.
48. A data structure substantially as described herein with reference to and/or as illustrated in any of figures 4 to 16 of the accompanying drawings.
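
By way of a non-limiting illustration of the asset creation recited in claims 17 to 22 and 26, the following Python sketch derives one visual asset per menu item from a single provided asset, each via a processing operation associated with that menu item. Every identifier (VisualAsset, copy_asset, dim_asset, create_menu_assets) is hypothetical and is not taken from the specification.

from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class VisualAsset:
    name: str
    pixels: List[List[int]]  # toy stand-in for the image or video data of an asset

def copy_asset(src: VisualAsset, name: str) -> VisualAsset:
    # "means to copy data from the provided visual asset" (cf. claim 20)
    return VisualAsset(name, [row[:] for row in src.pixels])

def dim_asset(src: VisualAsset, name: str) -> VisualAsset:
    # "means to process the data ... respective modified data" (cf. claim 21)
    return VisualAsset(name, [[value // 2 for value in row] for row in src.pixels])

def create_menu_assets(provided: VisualAsset,
                       operations: Dict[str, Callable[[VisualAsset, str], VisualAsset]]
                       ) -> Dict[str, VisualAsset]:
    # Each menu item carries its own processing operation (cf. claim 26);
    # applying it to the provided asset yields that item's visual asset.
    return {item: op(provided, provided.name + "_" + item)
            for item, op in operations.items()}

source = VisualAsset("provided_footage", [[200, 180], [160, 140]])
menu_assets = create_menu_assets(source, {"play_all": copy_asset,
                                          "scene_select": dim_asset})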
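
Claims 23 to 25 recite sub-picture data comprising selectable graphical elements and a mask for selectively displaying them. The sketch below, again with entirely hypothetical names and an assumed PAL-sized frame, shows how such a mask could reveal only the highlight element of the currently selected menu item, so that a single sub-picture can serve every selection state of a menu.

from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class SubPictureElement:
    item: str
    position: Tuple[int, int]  # top-left corner of the highlight, in pixels
    size: Tuple[int, int]      # width and height of the highlight box

def render_mask(elements: List[SubPictureElement],
                selected_item: str,
                frame_size: Tuple[int, int] = (720, 576)) -> List[List[int]]:
    # 1 = show the overlay pixel, 0 = keep it transparent; only the element
    # belonging to the selected menu item is revealed.
    width, height = frame_size
    mask = [[0] * width for _ in range(height)]
    for element in elements:
        if element.item != selected_item:
            continue
        x, y = element.position
        w, h = element.size
        for row in range(y, min(y + h, height)):
            for col in range(x, min(x + w, width)):
                mask[row][col] = 1
    return mask

elements = [SubPictureElement("play_all", (40, 60), (120, 24)),
            SubPictureElement("scene_select", (40, 90), (120, 24))]
mask = render_mask(elements, "scene_select")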
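
Claims 27, 28, 33, 34 and 36 concern menu views that progressively expand or contract, in the manner of a pull-down menu. As a rough illustration only, the following sketch generates such a sequence as a list of frames, each frame recording which menu items are visible; the frame count and layout are assumptions made purely for the example.

from typing import List

def expanding_menu_frames(menu_items: List[str],
                          frames_per_item: int = 5) -> List[List[str]]:
    # The first frames show no items; each later group of frames reveals one
    # more item, giving the appearance of a pull-down menu opening.
    frames: List[List[str]] = []
    for visible in range(len(menu_items) + 1):
        frames.extend([menu_items[:visible]] * frames_per_item)
    return frames

def contracting_menu_frames(menu_items: List[str],
                            frames_per_item: int = 5) -> List[List[str]]:
    # A contracting menu is simply the expanding sequence played in reverse.
    return list(reversed(expanding_menu_frames(menu_items, frames_per_item)))

opening = expanding_menu_frames(["Play all", "Scene selection", "Extras"])
closing = contracting_menu_frames(["Play all", "Scene selection", "Extras"])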
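
Claims 13 and 29 recite navigational data linking the created visual assets according to the menu structure. The sketch below builds a simple lookup table from (menu, menu item) to a target asset; the table format is an assumption for illustration, since on an actual DVD this role would be performed by program chains and button commands rather than such a table.

from typing import Dict, List, Tuple

def build_navigation(menu_structure: Dict[str, List[Tuple[str, str]]]
                     ) -> Dict[Tuple[str, str], str]:
    # Key: (menu name, menu item); value: identifier of the visual asset to
    # play or display, or of the sub-menu to show, when the item is invoked.
    navigation: Dict[Tuple[str, str], str] = {}
    for menu, items in menu_structure.items():
        for item, target in items:
            navigation[(menu, item)] = target
    return navigation

navigation = build_navigation({
    "root": [("Play all", "title_1"), ("Scenes", "menu_scenes")],
    "menu_scenes": [("Scene 1", "chapter_1"), ("Scene 2", "chapter_2")],
})
# e.g. navigation[("root", "Scenes")] == "menu_scenes"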
EP04742904A 2003-06-09 2004-06-09 Data processing system and method, computer program product and audio/visual product Withdrawn EP1636799A2 (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
GB0313216A GB2402755B (en) 2003-06-09 2003-06-09 Data processing system and method, computer program product and audio/visual product
US10/457,265 US20040250275A1 (en) 2003-06-09 2003-06-09 Dynamic menus for DVDs
PCT/GB2004/002457 WO2004109699A2 (en) 2003-06-09 2004-06-09 Data processing system and method, computer program product and audio/visual product

Publications (1)

Publication Number Publication Date
EP1636799A2 2006-03-22

Family

ID=33512695

Family Applications (1)

Application Number Title Priority Date Filing Date
EP04742904A Withdrawn EP1636799A2 (en) 2003-06-09 2004-06-09 Data processing system and method, computer program product and audio/visual product

Country Status (2)

Country Link
EP (1) EP1636799A2 (en)
WO (1) WO2004109699A2 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2422946A (en) * 2004-12-16 2006-08-09 Zootech Ltd Menus for audiovisual product
WO2006064049A1 (en) 2004-12-16 2006-06-22 Zootech Limited Menus for audiovisual content

Also Published As

Publication number Publication date
WO2004109699A2 (en) 2004-12-16

Similar Documents

Publication Publication Date Title
US7904812B2 (en) Browseable narrative architecture system and method
US7574103B2 (en) Authoring of complex audiovisual products
KR20050121664A (en) Video based language learning system
WO2003077249A1 (en) Reproducing method and apparatus for interactive mode using markup documents
CN101276376A (en) Method and system to reproduce contents, and recording medium including program to reproduce contents
KR20080023314A (en) Synchronization aspects of interactive multimedia presentation management
JP5285052B2 (en) Recording medium on which moving picture data including mode information is recorded, reproducing apparatus and reproducing method
US20050097437A1 (en) Data processing system and method
US20040139481A1 (en) Browseable narrative architecture system and method
US20050094972A1 (en) Data processing system and method
AU2003222992B2 (en) Simplified preparation of complex interactive DVD
US20050097442A1 (en) Data processing system and method
US7650063B2 (en) Method and apparatus for reproducing AV data in interactive mode, and information storage medium thereof
US20110161923A1 (en) Preparing navigation structure for an audiovisual product
US20050094971A1 (en) Data processing system and method
WO2004109699A2 (en) Data processing system and method, computer program product and audio/visual product
CN101720483A (en) Authoring tools and the method that is used to realize this authoring tools
US20050094968A1 (en) Data processing system and method
JP5619838B2 (en) Synchronicity of interactive multimedia presentation management
JPH10199215A (en) Reproduction control information editing device of system stream, its method and recording medium recorded with the method
GB2402755A (en) Providing a dynamic menu system for a DVD system

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

AK Designated contracting states

Kind code of ref document: A2

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LI LU MC NL PL PT RO SE SI SK TR

AX Request for extension of the european patent

Extension state: AL HR LT LV MK

DAX Request for extension of the european patent (deleted)
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20090105

PUAK Availability of information related to the publication of the international search report

Free format text: ORIGINAL CODE: 0009015