WO2014131194A1 - Dynamic presentation prototyping and generation - Google Patents

Dynamic presentation prototyping and generation

Info

Publication number
WO2014131194A1
Authority
WO
WIPO (PCT)
Prior art keywords
presentation
points
input
style
slides
Prior art date
Application number
PCT/CN2013/072061
Other languages
French (fr)
Inventor
Darren K. Edge
Koji Yatani
Original Assignee
Microsoft Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Microsoft Corporation
Priority to PCT/CN2013/072061 (WO2014131194A1)
Priority to CN201380074201.XA (CN105144672B)
Priority to EP13876698.5A (EP2962259A1)
Publication of WO2014131194A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/10Office automation; Time management

Definitions

  • Some implementations may include a computing device to generate a presentation including a plurality of slides.
  • the presentation may be generated based on an input file that includes commands from a presentation markup language.
  • the commands may specify details associated with the presentation.
  • the details may include a title and a background image associated with each of the plurality of slides, one or more points to be included in each of the plurality of slides, and a style associated with each of the plurality of slides.
  • FIG. 1 is an illustrative architecture that includes creating a presentation according to some implementations.
  • FIG. 2 is an illustrative architecture that includes creating a presentation at a story level, a scene level, and a detail level according to some implementations.
  • FIG. 3 is a flow diagram of an example process that includes specifying and revising a presentation according to some implementations.
  • FIG. 4 is a flow diagram of an example process that includes specifying details associated with a presentation according to some implementations.
  • FIG. 5 is a flow diagram of an example process that includes generating a presentation based on a specification according to some implementations.
  • FIG. 6 is a flow diagram of an example process that includes presenting a presentation based on a specification according to some implementations.
  • FIG. 7 illustrates an example configuration of a computing device and environment that can be used to implement the modules and functions described herein.
  • the systems and techniques described herein may be used to create presentations that are dynamic compared to traditional linear presentations.
  • the presentations may include presentations that are dynamically alterable during rehearsal and delivery.
  • a user may specify and manipulate the points to be made in the presentation and the relationships between the points.
  • the user may select global style parameters (e.g., fonts, colors, spacing, and the like) independently or using suggested themes.
  • the presentation media (e.g., slide decks and/or other types of media) may be generated automatically based on the specified points, relationships, and styles.
  • the user may repeatedly review the presentation, tweak the presentation (e.g., by tweaking one or more of the points, relationships, or global style parameters), and re-generate the presentation media until the user is satisfied.
  • the resulting presentation may support interaction with points in the presentation medium based on the relationships between the points to enable self-testing during rehearsal and flexible navigation during delivery.
  • Presentations specified in such a manner may be edited and regenerated quickly using a rapid prototyping process, thereby providing a usable presentation quickly while supporting changes in presentation style and structure as the presentation evolves.
  • the presentation media may be automatically generated to include rich navigation options reflecting the relationships between points, in ways that would be laborious to set up by hand and fragile (e.g., manually hyperlinking slides) in response to structural changes.
  • the systems and techniques described herein may be deployed equally effectively on a variety of platforms, from desktop computers, to laptop computers, to touch-based tablet devices, enabling authoring capabilities that are idea-based rather than style-based, that support casual entry that is not labor intensive, and where touchscreen capabilities may be supported to enable dynamic navigation rather than linear presentation.
  • FIG. 1 is an illustrative architecture 100 that includes creating a presentation according to some implementations.
  • the architecture 100 includes a computing device 102 coupled to a network 106.
  • the network 106 may include one or more networks, such as a wireless local area network (e.g., WiFi®, Bluetooth™, or other type of near-field communication (NFC) network), a wireless wide area network (e.g., a code division multiple access (CDMA), a global system for mobile (GSM) network, or a long term evolution (LTE) network), a wired network (e.g., Ethernet, data over cable service interface specification (DOCSIS), Fiber Optic System (FiOS), Digital Subscriber Line (DSL) and the like), other type of network, or any combination thereof.
  • the computing device 102 may be coupled to the display device 108, such as a monitor.
  • the display device 108 may include a touchscreen.
  • the computing device 102 may be a desktop computing device, a laptop computing device, a tablet computing device, a wireless phone, a media playback device, a media recorder, another type of computing device, or any combination thereof.
  • the computing device 102 may include one or more processors 110 and one or more computer readable media 112.
  • the computer readable media 112 may include instructions that are organized into modules and that are executable by the one or more processors 110 to perform various functions.
  • the computer readable media 112 may include an authoring module 114, a generation module 116, and a presentation module 118.
  • the authoring module 114 may enable a user of the computing device 102 to author a presentation 120 by specifying points to be made, the relationships between the points, and styles associated with the points.
  • the generation module 116 may enable the user to generate the presentation 120 after authoring the presentation 120.
  • the presentation module 118 may enable the user to present the presentation 120 using a display device, such as the display device 108. If the user does not specify a style for the presentation 120, one or more of the modules 114, 116, or 118 may select a default style.
  • the presentation 120 may include one or more slides, such as a first slide 122 to an Nth slide 124 (where N>1).
  • Each of the N slides may include one or more points 126, text 128, one or more images 130, media data 132, links 134, or any combination thereof.
  • the points 126 may include one or more primary concepts or ideas that are to be conveyed to the audience.
  • the points 126 may be conveyed using one or more of the text 128, the images 130, or the media data 132.
  • the text 128 may include text that specifies details associated with one or more of the points 126.
  • the one or more images 130 may include images (e.g., photographs, graphics, icons, or the like) that visually illustrate one or more of the points 126.
  • the media data 132 may include audio data, video data, or other types of media data that may be played back to illustrate one or more of the points 126.
  • the links 134 may be specified by a user and may be used to connect different points (e.g., from the points 126) and different slides (e.g., from the N slides 122 to 124) with each other to enable a presenter to dynamically provide additional details associated with a particular point during the presentation. For example, based on the type of audience to which the presentation is being given, different questions may arise relating to the same point.
  • the links 134 may enable the presenter to branch off and present additional information to answer different questions arising from the same point.
  • a point may have three sub-points, A1, A2, and A3. If a question arises relating to sub-point A1, the presenter may select a first link to present additional materials associated with the sub-point A1. Similarly, if a question arises relating to sub-point A2, the presenter may select a second link to present additional materials associated with the sub-point A2. If a question arises relating to sub-point A3, the presenter may select a third link to present additional materials associated with the sub-point A3.
  • the links 134 may enable the presenter to dynamically customize the delivery of the presentation 120 while presenting the presentation 120.
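For illustration only, a minimal sketch (in Python, with hypothetical names; the document does not prescribe any particular data structure) of how such links between a point and its sub-point slides might be represented and followed during delivery:

    # Hypothetical mapping from (slide, sub-point) pairs to the slides that hold
    # the additional materials for that sub-point.
    links = {
        ("slide_3", "A1"): "slide_A1_details",
        ("slide_3", "A2"): "slide_A2_details",
        ("slide_3", "A3"): "slide_A3_details",
    }

    def follow_link(current_slide, sub_point):
        """Return the slide to jump to for a question on sub_point, or stay put."""
        return links.get((current_slide, sub_point), current_slide)

    # A question about sub-point A2 branches to its detail slide; with no question,
    # the presenter simply advances without accessing the additional materials.
    next_slide = follow_link("slide_3", "A2")  # -> "slide_A2_details"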
  • the server 104 may include one or more processors 136 and one or more computer readable media 138.
  • the computer readable media 138 may include one or more of the authoring module 114, the generation module 116, or the presentation module 118.
  • one or more of the modules 114, 116 or 118 may be downloaded from the server 104 and stored in the computer readable media 112 to enable a user of the computing device 102 to use the modules 114, 116 or 118.
  • the server 104 may host one or more of the modules 114, 116 or 118 and the computing device 102 may access one or more of the modules 114, 116 or 118 using the network 106.
  • the computing device 102 may send input data 140 to the server 104.
  • the input data 140 may include authoring information, such as points to be made in a presentation, the relationship between the points, and specified styles.
  • the server 104 may generate the presentation 120 based on the input data 140 and send the presentation 120 to the computing device 102.
  • the modules 114, 116, or 118 may be distributed across multiple computing devices, such as the computing device 102 and the server 104.
  • the computing device 102 may enable a user to author a presentation 120.
  • the presentation may be generated by the computing device 102 using the generation module 116 stored in the computer readable media 112.
  • the server 104 may generate the presentation 120 using the generation module 116 stored in the computer readable media 138 based on the input data 140 provided by the computing device 102.
  • the presentation 120 may be presented on the display device 108 using the computing device 102 (or another computing device).
  • the presentation 120 may be authored and generated using the computing device 102 and/or server 104 but may be presented using a different computing device.
  • the computer readable media 112, 138 are examples of storage media used to store instructions which are executed by the processor(s) 110, 136 to perform the various functions described above.
  • the computer readable media 112, 138 may generally include both volatile memory and non-volatile memory (e.g., RAM, ROM, or the like).
  • the computer readable media 112, 138 may include hard disk drives, solid-state drives, removable media, including external and removable drives, memory cards, flash memory, floppy disks, optical disks (e.g., CD, DVD), a storage array, a network attached storage, a storage area network, or the like.
  • the computer readable media 112, 138 may be one or more types of storage media capable of storing computer-readable, processor-executable program instructions as computer program code that can be executed by the processor(s) 110, 136 as a particular machine configured for carrying out the operations and functions described in the implementations herein.
  • the computing device 102 and server 104 may also include one or more communication interfaces for exchanging data with other devices, such as via a network, direct connection, or the like, as discussed above.
  • the communication interfaces may facilitate communications within a wide variety of networks and protocol types, including wired networks (e.g., LAN, cable, etc.) and wireless networks (e.g., WLAN, cellular, satellite, etc.), the Internet and the like.
  • the communication interfaces may also provide communication with external storage (not shown), such as in a storage array, network attached storage, storage area network, or the like.
  • module can represent program code (and/or declarative-type instructions) that performs specified tasks or operations when executed on a processing device or devices (e.g., CPUs or processors).
  • the program code can be stored in one or more computer-readable memory devices or other computer storage devices.
  • this disclosure provides various example implementations, as described and as illustrated in the drawings. However, this disclosure is not limited to the implementations described and illustrated herein, but can extend to other implementations, as would be known or as would become known to those skilled in the art. Reference in the specification to "one implementation," "this implementation," "these implementations" or "some implementations" means that a particular feature, structure, or characteristic described is included in at least one implementation, and the appearances of these phrases in various places in the specification are not necessarily all referring to the same implementation.
  • FIG. 2 is an illustrative architecture 200 that includes creating a presentation at a story level, a scene level, and a detail level according to some implementations.
  • the architecture 200 illustrates how a user may author the presentation 120 using the authoring module 114.
  • the authoring module 114 may provide a graphical user interface, a command line interface, markup commands, other types of authoring commands, or any combination thereof.
  • when creating a presentation, such as the presentation 120, presenters may be influenced by the relative performance of peers when the peers deliver similar presentations. Presenters may take into consideration the relationship between different kinds of audience members and the content to be presented, anticipating and formulating responses to questions that could arise as a result.
  • a presentation may include information and examples that are wrapped in a narrative and delivered through the interplay of visuals and speech.
  • the presentation may have multiple points and layers that are connected by a sense of coherence and flow.
  • the modules 114, 116, or 118 may enable a story to be developed before the presentation 120 is generated. Starting with the goal in mind may guide subsequent activities, including crafting implicit messages, explicit takeaways, or rhetorical questions. For example, mapping points to slide titles may provide a provisional structure for elaboration.
  • the modules 114, 116, or 118 may enable the user to quickly and easily replace text with images and/or graphics that convey the intended message.
  • the modules 114, 116, or 118 may enable connecting a slide to a next slide by using a leading question, a hint, or a concern prior to moving from the slide to the next slide. Planning and adding transition words to a presentation may enable the presenter to explain to the audience why the presentation is moving to the next topic.
  • the modules 114, 116, or 118 may enable the presenter to view and rearrange content at a high level to enable the presenter to create a sense of flow using images and/or descriptive text. For example, each part of the presentation may be tied to a main theme/story that the presentation is conveying.
  • the modules 114, 116, or 118 may enable a presenter to rehearse and refine a presentation to preserve the presentation structure in the mind of the presenter, thereby encouraging a natural delivery that is free from reading and recital.
  • the authoring module 114 may provide a rehearsal mode that enables the use of visual cues in slides for the recall of points that are to be made verbally.
  • the rehearsal mode may enable the presenter to learn the association between the visual cues and the points to be made using presenter notes, by providing physical or electronic flashcards to drill points into memory, or other types of cues.
  • practicing speaking slides out loud may highlight differences between written and spoken language and support the rephrasing of the language in the presenter notes.
  • the rehearsal mode may enable the presenter to establish a mental structure using performance-oriented rehearsals, such as practicing walking around, in front of a mirror, gesturing, or visualizing the delivery.
  • the presenter may desire to communicate a presentation that flows smoothly from start to finish by directing the audience's attention to key points using a combination of visuals, gestures, and speech. Breaking the flow of the presentation may be detrimental to the presenter and/or the audience by side-tracking the presentation from the key points of the presentation. For example, after creating the presentation but before presenting the presentation, the presenter may obtain information (e.g., a recent event that occurred) and alter the emphasis of the presentation based on the information. For example, the presenter may determine to go into more detail on some points while glossing over or skipping other points.
  • a presenter may desire to present portions of the presentation in a non-linear order based on information obtained prior to presenting the presentation, in response to audience questions, etc.
  • Memorizing where various pieces of information are presented in the presentation may be impractical for large presentations and/or presentations that have undergone significant revisions. Exiting a presentation to access additional files may result in losing the attention of audience members and/or creating a perception that the presenter is disorganized. Even if a presenter prepares extra slides (e.g., as an appendix at the end of the presentation) to enable the presenter to discuss points in greater detail, accessing the appropriate slides and then resuming the presentation may be disruptive to the flow of the presentation.
  • the authoring module 114 may enable linking various portions of the presentation. For example, a presenter may link points with other points (e.g., sub-points), slides with other slides, etc. to enable non-linear delivery of the presentation without disrupting the flow of the presentation. Such a presentation may enable the presenter to dynamically customize the presentation while the presenter is presenting the presentation. The presenter can respond to information obtained after the presentation was generated by presenting the presentation in a way that focuses on the points that are relevant to the information while skimming or ignoring irrelevant points. The presenter can respond to questions and explore details about points that are of interest to audience members while skimming or ignoring points that are of less interest to the audience.
  • presenters who do not rehearse may overestimate or underestimate the number of points that may be covered in a particular time period and end up either skipping through large portions of the presentation or zipping through the presentation. In either case, the audience may be frustrated because the presentation was not presented in a manner that was conducive to the audience learning the intended message.
  • the modules 114, 116, or 118 may enable the presenter to rehearse and time the presentation so that the audience leaves having understood the intended message.
  • the presenter may set time targets for high level scenes and one or more of the modules 114, 116, or 118 may proportionately distribute the time target to subordinate (e.g., detail) slides and/or points.
  • when timing feedback is displayed for a scene slide (e.g., during rehearsal or delivery) and the presenter navigates to a subordinate detail slide, the same timing feedback may seamlessly continue until the presenter moves to a different scene.
  • a per-scene approach might require less presenter effort and pressure as compared to a per-slide approach.
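As a rough sketch of the proportional distribution described above (Python; the weighting by point count and the example numbers are assumptions, not the module's actual algorithm):

    def distribute_time(scene_target_minutes, points_per_slide):
        """Distribute a scene-level time target across subordinate detail slides,
        proportionally to the number of points on each slide."""
        total_points = sum(points_per_slide.values()) or 1
        return {slide: scene_target_minutes * count / total_points
                for slide, count in points_per_slide.items()}

    # A 6-minute scene: slides carrying 1, 2, and 3 points get 1, 2, and 3 minutes.
    targets = distribute_time(6.0, {"detail_1": 1, "detail_2": 2, "detail_3": 3})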
  • audience interaction and time constraints may shape the authoring, rehearsal, and delivery of a presentation.
  • the authoring environment of conventional presentation applications may constrain presenters to a primarily linear presentation delivery.
  • the modules 114, 116, or 118 may enable a user to specify constraints (e.g., time constraints) while enabling dynamic presentation rehearsal and delivery.
  • the time saved by not directly manipulating text, images, and other slide parameters to achieve a particular style may be reallocated to more important activities, such as (a) telling stories using the presentation by thinking about the sequence, structure, and purpose of the points to be made and (b) preparing for structured spontaneity. Time spent on these activities may result in the presenter having a more rehearsed mastery of in-depth material, thereby creating the freedom for the presenter to be more dynamic, responsive, and improvisational during delivery.
  • the presentation modules 114, 116, or 118 may enable presenters to organize the points they wish to communicate and automatically generate a presentation based on the organization of the points. Enabling presenters to plan the presentation using points before committing them to a presentation (e.g., slides) may enable the presenter to visualize the entire story to be unfolded using the presentation. Enabling presenters to plan the presentation using points may enable presenters to focus on crafting a story that is effective, memorable, and appropriate for the audience.
  • the modules 114, 116, or 118 may enable presenters to take a collection of points sourced from multiple documents and/or multiple people and generate a presentation that includes those points along with a consistent style.
  • the style used for the presentation may be quickly and easily customizable while conforming to best practices in the visual design of presentations. Styling a presentation may enable the use of images to emotionally impact the audience. Concepts presented using images may be remembered by the audience for a longer period of time than concepts presented by words alone. In general, the styling may enable principles of visual design, including contrast, repetition, alignment, and proximity.
  • the modules 114, 116, or 118 may enable presenters to craft and connect the central scenes of a high-level narrative and encourage planning of verbal linkages between scenes to strike a balance between storytelling and analysis.
  • the points may be organized into scenes based on the presenter determining how deeply to explore each scene while completing the presentation on time.
  • the term "scene" as used herein refers to a set of one or more points that advance a higher-level story. Scenes may enable the presentation to flow from a portion of the presentation (e.g., a slide) to a next portion (e.g., a next slide), using a chronological flow, a problem/solution flow, or an opportunity/leverage flow.
  • Appropriate scenes for a presentation may be discovered by clustering related points into different organizational structures, such as columns or a hierarchical tree.
  • a hierarchical organization of the scenes and/or the points within each scene may enable the organization of the presentation such that supporting information branches off from the primary idea(s) being conveyed.
  • the modules 114, 116, or 118 may enable presenters to link scenes in various ways, such as using an opening gambit (e.g., a question, a factoid, or an anecdote), making repeated references to the flow structure, making logical transitions between outbound and inbound topics, closing with a call to action, etc.
  • Presenting a visual road map (e.g., outline) near the start and end of the presentation may guide the audience when the presentation is being presented and may assist the audience with retaining a mental model of the presentation.
  • Linking points to other points and scenes may enable the dynamic expansion of points into sub-points, notes, media, files, or web pages that support the point being presented.
  • Cued-recall learning refers to a flashcard-like method of testing recall of target information given an initial cue.
  • the modules 114, 116, or 118 may support cued-recall learning.
  • the ability to expand points as needed gives the presenter the freedom and flexibility to respond to the audience by presenting points that are appropriate for the audience at a depth that is appropriate for the audience.
  • the modules 114, 116, or 118 may generate a graphical user interface that enables a user to specify details associated with the presentation, such as a title, one or more points, and one or more graphics for each portion (e.g., slide) of the presentation.
  • a simple presentation markup language (PML) may be provided to enable a user to specify titles and points for each slide included in a presentation.
  • An example of a PML that enables a user to specify details associated with the presentation is provided in Table 2.
  • the PML described in Table 2 may support the development of high-level scenes illustrated with full-bleed images, the expansion of scenes into points, the expansion of points into sub-points, supporting files, media, and/or web pages, and the preparation of links between scenes.
  • the PML of Table 2 may enable a presenter to specify a variety of style parameters, such as font types, colors of titles and body text, sizes and colors of title backgrounds (e.g., to create contrast when overlaid on a background image), the aspect ratio of the slide, a background color, other style-related parameters, or any combination thereof.
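As a purely hypothetical illustration of the kind of markup such a PML could offer (invented syntax and content, not the PML defined in the tables referenced above):

    # Hypothetical PML-style input file (illustrative syntax only)
    style: font=Segoe UI; title-color=white; title-bg=dark-gray; aspect=16:9

    scene: Why prototyping matters
      image: city_skyline.jpg
      point: Slides are cheap to regenerate
        subpoint: Structure can change late without manual rework
      point: Style is applied globally, not per slide

    scene: From points to slides
      image: forest_path.jpg
      link: Why prototyping matters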
  • the modules 114, 116, and 118 may automatically (e.g., without human interaction) scale slide titles to fill the space available in each slide.
  • the modules 114, 116, and 118 may enable the adjustment of links between a slide and other slides in the presentation. For example, in some cases, a transparent linkage box may be added to zero or more of the four edges of each slide.
  • Each linkage box may be used to hyperlink a particular slide to one or more other slides of the presentation to create an interconnected slide network.
  • the hyperlinks may provide a mechanism for dynamic navigation between slides when the presentation is being presented. For example, the hyperlinks may be navigated using a mouse or using a touchscreen. When using a touchscreen, one or more of the modules 114, 116, or 118 may interpret directional swipe gestures as navigation commands.
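A minimal sketch (Python; the edge names and gesture mapping are assumptions for illustration) of how directional swipes might be resolved against a slide's edge linkage boxes:

    # Hypothetical edge linkage boxes: each edge of a slide may hyperlink elsewhere.
    edge_links = {
        "scene_2": {"left": "scene_1", "right": "scene_3",
                    "top": "storyline_overview", "bottom": "scene_2_points"},
    }

    swipe_to_edge = {"swipe_right": "left", "swipe_left": "right",
                     "swipe_down": "top", "swipe_up": "bottom"}

    def handle_swipe(current_slide, gesture):
        """Interpret a directional swipe as navigation along the slide's edge links."""
        edge = swipe_to_edge.get(gesture)
        return edge_links.get(current_slide, {}).get(edge, current_slide)

    handle_swipe("scene_2", "swipe_up")  # -> "scene_2_points"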
  • a presentation such as the presentation 120 of FIG. 1, may be generated using one or more of the modules 114, 116, or 118 by specifying details of the presentation at a story level 202, a scene level 204, and a detail level 206.
  • a user may define scenes, such as one or more of a first scene 208, a second scene 210, a third scene 212, or a fourth scene 214.
  • the scenes 208, 210, 212, and 214 may be displayed in a thumbnail view to enable the user to select a particular scene and link the particular scene with one or more other scenes. For example, the user may select the first scene 208 and add links from the first scene 208 to the second scene 210.
  • a selected scene may provide a visual indication that the scene has been selected, such as by displaying a darker border (as illustrated in FIG. 2), a flashing border, or other visual indicator.
  • the scene level 204 illustrates how the first scene 208 is horizontally linked to the second scene 210 and the second scene 210 is horizontally linked to the third scene 212.
  • the scene level 204 may enable the user to add a top level point, such as a first title and first image 218 to the first scene 208, a second title and second image 220 to the second scene 210, and a third title and a third image 222 to the third scene. Clicking the top border of a scene may cause a jump to a hyperlinked "storyline" slide with the outbound scene highlighted.
  • an overview of all scenes may be automatically created by one or more of the modules 114, 116, or 118 to support non-linear navigation and visual reference to the presentation structure.
  • the presentation structure may be created by statically hyper-linking slides to one another, with different overview slides created with different scenes highlighted according to the outbound scene.
  • a similar presentation structure may be achieved through navigation and highlights interpreted dynamically at presentation runtime (e.g., by the application). Clicking on a particular scene thumbnail may cause the presentation to jump directly to the particular scene, while horizontal navigation may display the links between scenes providing a story rehearsal path for preparing the presentation. Clicking on a bottom portion of a scene being displayed may display the high level points for the scene.
  • the high level points may be displayed using a drop down menu. If the user causes the high level points to be displayed (e.g., rather than speaking about just the displayed scene), the user may navigate back to the scene level 204 before advancing to a next slide.
  • Such a mechanism may enable the presenter to provide closure to each scene while prompting the presenter to verbally convey a previously prepared verbal linkage without displaying text that competes for the audience's attention.
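One way a set of statically hyperlinked overview slides could be assembled (a Python sketch under assumed names; the generation module's actual logic is not detailed here):

    scenes = ["scene_1", "scene_2", "scene_3", "scene_4"]

    def build_overview_slides(scene_ids):
        """Create one overview slide per outbound scene, with that scene highlighted
        and every scene thumbnail hyperlinked back to its scene slide."""
        return [{"id": f"overview_from_{outbound}",
                 "highlighted_scene": outbound,
                 "thumbnail_links": {scene: scene for scene in scene_ids}}
                for outbound in scene_ids]

    overview_slides = build_overview_slides(scenes)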
  • the detail level 206 may enable the user to add internal hyperlinks and/or external hyperlinks.
  • the details may not be added in a detail view, but the detail level 206 may be realized through hyperlinked bullets generated from hierarchically-related points (as well as points connected manually in a free-form structure).
  • the user may add a hyperlink to any of the points 126.
  • the hyperlink may be used to link to an external file, a web page, or another type of content that is external to the presentation 120.
  • the hyperlink may be used to link a point or a slide to other points, slides, descriptions, media data (e.g., image data, video data, audio data, etc.), or other material that is included in the presentation 120.
  • Navigating horizontally at the detail level 206 may enable the user to follow a detailed rehearsal path, performing a depth-first traversal of all expandable points in the presentation, with "cue" slides indicating which points may be expanded.
  • the presenter may repeatedly traverse the rehearsal path until the structure of the presentation and the content of each of the points can be recalled.
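The detailed rehearsal path amounts to a depth-first walk over the expandable points; a small sketch (Python, with a hypothetical point hierarchy) follows:

    # Hypothetical hierarchy: each point maps to its expandable sub-points.
    point_tree = {
        "scene_1": ["point_1a", "point_1b"],
        "point_1a": ["detail_1a_i"],
        "scene_2": ["point_2a"],
    }

    def rehearsal_path(roots, tree):
        """Depth-first traversal of all expandable points, in the order the
        corresponding cue slides would be visited during rehearsal."""
        path = []
        def visit(point):
            path.append(point)
            for child in tree.get(point, []):
                visit(child)
        for root in roots:
            visit(root)
        return path

    rehearsal_path(["scene_1", "scene_2"], point_tree)
    # -> ['scene_1', 'point_1a', 'detail_1a_i', 'point_1b', 'scene_2', 'point_2a']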
  • the modules 114, 116, or 118 may automatically generate slide notes to show scene linkages and previews of point expansions.
  • an input file including PML commands, an input text file, and any hyperlinked files or media that are to be embedded into the presentation 120 may be placed in a folder.
  • Providing the input file that includes the PML commands to an application (e.g., one or more of the modules 114, 116, or 118) may cause the application to generate or regenerate the presentation 120, which may automatically be opened in an installed presentation application, such as Microsoft® PowerPoint®.
  • An example of an input file may appear as:
  • each block represents one or more operations that can be implemented in hardware, software, or a combination thereof.
  • the blocks represent computer-executable instructions that, when executed by one or more processors, cause the processors to perform the recited operations.
  • computer-executable instructions include routines, programs, objects, modules, components, data structures, and the like that perform particular functions or implement particular abstract data types.
  • the order in which the blocks are described is not intended to be construed as a limitation, and any number of the described operations can be combined in any order and/or in parallel to implement the processes.
  • the processes 300, 400, 500, and 600 are described with reference to the architectures 100 and 200 as described above, although other models, frameworks, systems and environments may implement these processes.
  • FIG. 3 is a flow diagram of an example process that includes specifying and revising a presentation according to some implementations.
  • the process 300 describes how a user may create and refine a presentation.
  • the user may specify one or more aspects of a presentation.
  • the user may specify different components of the presentation 120, such as one or more of the points 126, the text 128, the images 130, the media data 132, or the links 134 using a PML (e.g., the PML illustrated in Table 1) or a GUI provided by one or more of the modules 114, 116, or 118.
  • the user may generate the presentation.
  • the user may use the generation module 116 to generate the presentation 120.
  • the user may present the presentation.
  • the user may use the presentation module 118 to display the presentation 120 on the display device 108.
  • the user may rehearse the presentation using a rehearsal mode of the presentation module 118 and deliver the presentation 120 to an audience using a delivery mode of the presentation module 118.
  • FIG. 4 is a flow diagram of an example process that includes specifying details associated with a presentation according to some implementations.
  • the process 400 may be performed by the authoring module 114 of FIG. 1.
  • a visual point may be an idea that is to be conveyed visually using the presentation, e.g., using one or more slides or media data from the presentation.
  • connections between the visual points may be created.
  • the authoring module 114 may be used to include visual points that are connected vertically, horizontally, hierarchically, linearly, non-linearly, circularly, or any combination thereof in the presentation 120.
  • verbal points may be specified.
  • a verbal point may be an idea that is to be conveyed verbally by the presenter.
  • a verbal point may be used to introduce the presentation, to transition from one slide to another slide during the presentation, or to make another type of point.
  • the authoring module 114 may be used to add cues (e.g., text, images, and/or media data) to prompt the presenter to convey verbal points.
  • the contents of the visual points and the verbal points may be edited.
  • the authoring module 114 may be used to specify one or more of text, images, or media data in the contents of the visual points and/or the verbal points.
  • styles associated with the presentation may be specified.
  • the authoring module 114 may be used to specify styles associated with each of the slides 122 to 124, such as fonts, colors, background images, foreground images, or other styles associated with presentations.
  • the presentation may be generated.
  • the generation module 116 may be used to generate the presentation 120 after the user has completed specifying the contents of the presentation 120.
  • the user may use the authoring module to author a presentation using a PML (e.g., the PML illustrated in Table 1), a graphical user interface, or other authoring tool and then generate the presentation 120 using the generation module 116 based on the authoring.
  • the user may review the generated presentation and repeat one or more of blocks 402, 404, 406, 408, 410, or 412 until the user is satisfied with the resulting presentation.
  • a presentation may be viewed as a set of points to be communicated through visuals with or without accompanying speech.
  • the points may be made in the presentation using text, images, media data, or other forms of media such as diagrams, photos, videos, web pages, etc.
  • a particular point may be followed by related points at the same level of abstraction as the particular point, or by secondary points (e.g., sub-points) that expand on the particular point by providing more details, evidence, or examples.
  • Some points may be grouped at higher levels, leading to a hierarchical structure in which the points at a particular level may be ordered to achieve a particular effect (e.g., the presentation of an argument).
  • a presentation may include a hierarchy in which a title slide is the root, while remaining slides may be children of the root.
  • the bullets and other non-title visual elements of a slide may be the children of the title, and the notes of the slide may be children of slide elements or the slide title.
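That hierarchy can be pictured as a simple tree; the following Python sketch (field names are hypothetical, purely to illustrate the parent/child relationships described above) shows one possible shape:

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Node:
        """A node in the presentation hierarchy: the title slide is the root,
        slides are its children, bullets hang off slide titles, and notes hang
        off slide elements or the slide title."""
        kind: str   # e.g. "title_slide", "slide", "bullet", "note"
        text: str
        children: List["Node"] = field(default_factory=list)

    deck = Node("title_slide", "Dynamic presentations", children=[
        Node("slide", "Scene 1: The problem", children=[
            Node("bullet", "Linear decks resist restructuring", children=[
                Node("note", "Mention the cost of manual hyperlinking")]),
        ]),
    ])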
  • modifying the structure may be difficult and/or time consuming. For example, the user may spend a large amount of time (e.g., several minutes) to convert a slide to a bullet or vice-versa.
  • the authoring module 114 may enable the user to specify logical relationships (e.g., sequence, transition, hierarchy, and the like) between points without committing to any arrangement or styling.
  • the user merely modifies the relationship between points and re-generates the presentation using the generation module 116.
  • the structure of the presentation 120 may be altered in a few seconds using the authoring module 114 as compared to a few minutes to alter the structure using a conventional presentation generation application.
  • the authoring module 114 may enable the user to specify a visual theme for the presentation 120.
  • the visual theme may include fonts and colors to be used for the presentation as well as spatial layout rules for the arrangement of points.
  • the generation of the presentation 120 using the generation module 116 may go beyond direct manipulation of object placement (e.g., as per the What You See Is What You Get (WYSIWYG) paradigm) to support an automated layout guided by principles of graphic and narrative design.
  • the presentation 120 may be specified using a simple markup language (e.g., similar to the PML of Table 1) or through a graphical editor that supports the hierarchical layout, styling, and restructuring of points (e.g., using a What You See Is What You Mean (WYSIWYM) paradigm).
  • FIG. 5 is a flow diagram of an example process that includes generating a presentation based on a specification according to some implementations.
  • the process 500 may be performed by the generation module 116 of FIG. 1.
  • a file that includes a presentation specification may be parsed.
  • the file may include PML commands (e.g., from Table 1) that specify details associated with a presentation that is to be generated.
  • the file may be generated by the user or by a graphical user interface provided by the authoring module 114.
  • appropriate design rules may be loaded.
  • the design rules may map an abstract presentation structure (e.g., points, scenes, and their corresponding relationships) into various representational forms, e.g., slides, web pages, handouts, canvas layouts and the like.
  • the design rules may include styling principles such as proportional spacing, in which the points of a slide are distributed equally across the height of the slide or the child points surrounding a parent point in a spatial canvas layout are placed at equal angular intervals around the parent point.
  • the styling principles may be used to automatically create aesthetic layouts in terms of the relative sizes and distances between presentation points.
  • the golden ratio 1.618 may be used to scale font sizes and inter-point spacing across levels of a presentation point hierarchy.
  • Calculating the visual weight of visual elements (e.g., amount of inked type) and the corresponding centers of mass of the visual elements may provide visual representations that are balanced with respect to the center of the display.
  • the stylistic and spatial relationships between visual elements may vary in accordance with the actions of the presenter while being constrained by the design rules.
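As a rough illustration of the golden-ratio scaling mentioned above (Python; the base size and number of levels are assumed example values):

    GOLDEN_RATIO = 1.618

    def scaled_sizes(base_size, levels):
        """Scale a font size (or inter-point spacing) down by the golden ratio
        at each deeper level of the point hierarchy."""
        return [base_size / (GOLDEN_RATIO ** level) for level in range(levels)]

    # A 40 pt top-level title yields roughly 24.7 pt and 15.3 pt at the next two levels.
    scaled_sizes(40.0, 3)  # -> [40.0, 24.72..., 15.27...]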
  • the presentation may be generated.
  • the generation module 116 may be used to generate the presentation 120.
  • At 508 at least some elements of the presentation may be arranged and/or styled.
  • the user may review the generated presentation 120, tweak one or more elements of the presentation 120 by modifying an arrangement of the points, a style associated with the presentation 120, or both.
  • At 510 at least some elements may be linked.
  • the user may link at least some elements of the presentation 120 by linking points or slides with others points or slides in the presentation 120 or by adding hyperlinks to content external to the presentation 120, such as external files, internet sites, etc.
  • the presentation may be saved. For example, in FIG. 1, once the user has generated the presentation 120 and is satisfied with the generated presentation 120, the user may save the presentation 120 (e.g., in the computer readable media 112 or 138).
  • the modules 114, 116, or 118 may enable a content and story-centered approach to specifying the presentation 120.
  • the modules 114, 116, or 118 may enable the generation of multiple media representations of a particular presentation.
  • the generation module 116 may be used to generate different types of presentations, such as a set of web pages suitable for display on a website, a slide deck (e.g., Microsoft® PowerPoint™) for display using a computing device, a canvas layout (e.g., Microsoft® Expression Studio™), a slide deck suitable for display on a computing device with display constraints (e.g., tablet device or mobile phone), a video (e.g., a movie), or some other presentation medium.
  • the generation module 116 may generate the presentation 120 with a structure of points that supports complex navigation such that the presenter can dynamically create a presentation tailored for a particular audience while presenting the presentation. For example, based on information (e.g., current events), audience comments and/or questions, or the like, the presenter can navigate the presentation 120 to go into more depth on some points while skipping or skimming over other points, without the audience being aware that the presenter is dynamically customizing the presentation 120.
  • the presentation 120 may be created by compiling the presentation specification into an extensible markup language (XML) of a document format using software tools, such as an Open XML software development kit (SDK).
  • the slides 122 to 124 may include a title, bullet points, media content, and spatial regions (e.g., slide borders) that support hyperlink-based navigation between the slides 122 to 124 according to the structure of the points in the presentation 120.
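The document names the Open XML SDK as one way to compile the specification into a document format; as a loose analogue only (Python with the third-party python-pptx library, which is this sketch's assumption rather than the tooling described above), emitting titled slides with bullet points might look like:

    from pptx import Presentation  # third-party package: python-pptx
    from pptx.util import Pt

    def generate_deck(spec, path):
        """Emit one slide per (title, points) entry in spec."""
        prs = Presentation()
        layout = prs.slide_layouts[1]  # built-in "Title and Content" layout
        for title, points in spec:
            slide = prs.slides.add_slide(layout)
            slide.shapes.title.text = title
            body = slide.placeholders[1].text_frame
            for i, point in enumerate(points):
                para = body.paragraphs[0] if i == 0 else body.add_paragraph()
                para.text = point
                para.font.size = Pt(24)
        prs.save(path)

    generate_deck([("Scene 1: The problem", ["Linear decks resist restructuring"]),
                   ("Scene 2: The approach", ["Points first, styling later"])],
                  "prototype.pptx")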
  • the user may specify the structure of the points by specifying relationships between the points using the authoring module 114.
  • the modules 114, 116, or 118 may enable hyperlink relationships between points to be quickly specified and modified.
  • FIG. 6 is a flow diagram of an example process that includes presenting a presentation based on a specification according to some implementations.
  • the process 600 may be performed by the presentation module 118 of FIG. 1.
  • a rehearsal mode may be entered.
  • the presentation may be reviewed.
  • visual points and verbal points in the presentation may be navigated to rehearse the presentation.
  • rehearsal mode may be exited.
  • the user may use the presentation module 118 to enter a rehearsal mode to rehearse the presentation 120.
  • in the rehearsal mode, the user may navigate the points 126, including verbal points and visual points, of the presentation 120.
  • the rehearsal mode may be used to familiarize the presenter with the structure and flow of the presentation 120. After the presenter has completed rehearsing the presentation 120, the presenter may exit the rehearsal mode.
  • delivery mode may be entered.
  • the visual points and/or the verbal points may be navigated while presenting the presentation.
  • the user may enter the delivery mode and present the presentation 120, including navigating the points 126 using the links 134.
  • a delivery mode (or generated file) may not include some material prepared for rehearsal because some material may be intended for the speaker but not the audience (e.g., private notes).
  • the presentation module 118 may provide various modes, including a rehearsal mode in which the presenter can rehearse the presentation and a delivery mode in which the presenter can deliver the presentation.
  • FIG. 7 illustrates an example configuration of a computing device 700 and environment that can be used to implement the modules and functions described herein.
  • the computing device 102 or the server 104 may include an architecture that is similar to or based on the computing device 700.
  • the computing device 700 may include one or more processors 702, a memory 704, communication interfaces 706, a display device 708, other input/output (I/O) devices 710, and one or more mass storage devices 712, able to communicate with each other, such as via a system bus 714 or other suitable connection.
  • the processor 702 may be a single processing unit or a number of processing units, all of which may include single or multiple computing units or multiple cores.
  • the processor 702 may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, and/or any devices that manipulate signals based on operational instructions.
  • the processor 702 may be configured to fetch and execute computer-readable instructions stored in the memory 704, mass storage devices 712, or other computer-readable media.
  • Memory 704 and mass storage devices 712 are examples of computer storage media for storing instructions, which are executed by the processor 702 to perform the various functions described above.
  • memory 704 may generally include both volatile memory and non-volatile memory (e.g., RAM, ROM, or the like).
  • mass storage devices 712 may generally include hard disk drives, solid-state drives, removable media, including external and removable drives, memory cards, flash memory, floppy disks, optical disks (e.g., CD, DVD), a storage array, a network attached storage, a storage area network, or the like.
  • Both memory 704 and mass storage devices 712 may be collectively referred to as memory or computer storage media herein, and may be capable of storing computer-readable, processor-executable program instructions as computer program code that can be executed by the processor 702 as a particular machine configured for carrying out the operations and functions described in the implementations herein.
  • the authoring module 114, the generation module 116, the presentation module 118, the presentation 120, other modules 716 and other data 718, or portions thereof may be implemented using any form of computer-readable media that is accessible by the computing device 700.
  • “computer-readable media” includes computer storage media and communication media.
  • Computer storage media includes non-volatile, removable and non-removable media implemented in any method or technology for storage of information, such as computer readable instructions, data structures, program modules, or other data.
  • Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store information for access by a computing device.
  • communication media may embody computer readable instructions, data structures, program modules, or other data in a modulated data signal, such as a carrier wave.
  • computer storage media does not include communication media.
  • the computing device 700 may also include one or more communication interfaces 706 for exchanging data with other devices, such as via a network, direct connection, or the like, as discussed above.
  • the communication interfaces 706 can facilitate communications within a wide variety of networks and protocol types, including wired networks (e.g., LAN, cable, etc.) and wireless networks (e.g., WLAN, cellular, satellite, etc.), the Internet and the like.
  • Communication interfaces 706 can also provide communication with external storage (not shown), such as in a storage array, network attached storage, storage area network, or the like.
  • a display device 708, such as a monitor may be included in some implementations for displaying information and images to users.
  • Other I/O devices 710 may be devices that receive various inputs from a user and provide various outputs to the user, and may include a keyboard, a remote controller, a mouse, a printer, audio input/output devices, voice input, and so forth.
  • Memory 704 may include modules and components for training machine learning algorithms (e.g., PRFs) or for using trained machine learning algorithms according to the implementations described herein.
  • the memory 704 may include multiple modules (e.g., the modules 114, 116, and 118) to perform various functions.
  • the memory 704 may also include other modules 716 that implement other features and other data 718 that includes intermediate calculations and the like.
  • the other modules 716 may include various software, such as an operating system, drivers, communication software, or the like.
  • module can represent program code (and/or declarative-type instructions) that performs specified tasks or operations when executed on a processing device or devices (e.g., CPUs or processors).
  • the program code can be stored in one or more computer-readable memory devices or other computer storage devices.
  • this disclosure provides various example implementations, as described and as illustrated in the drawings. However, this disclosure is not limited to the implementations described and illustrated herein, but can extend to other implementations, as would be known or as would become known to those skilled in the art. Reference in the specification to "one implementation," "this implementation," "these implementations" or "some implementations" means that a particular feature, structure, or characteristic described is included in at least one implementation, and the appearances of these phrases in various places in the specification are not necessarily all referring to the same implementation.

Abstract

Some implementations may include a computing device to generate a presentation including a plurality of slides. The presentation may be generated based on an input file that includes commands from a presentation markup language. The commands may specify details associated with the presentation. The details may include a title and a background image associated with each of the plurality of slides, one or more points to be included in each of the plurality of slides, and a style associated with each of the plurality of slides.

Description

DYNAMIC PRESENTATION PROTOTYPING AND GENERATION
BACKGROUND
[0001] Current software applications that enable the creation of presentations may structure workflow in such a way that users spend too much time on style rather than the substance of the messages to be communicated. For example, users may simply dump content onto slides rather than applying basic principles of visual design and storytelling. In addition, users may create a linear slide show that does not take into account in-depth information about related topics that may arise during the delivery of the presentation.
SUMMARY
[0002] This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key or essential features of the claimed subject matter; nor is it to be used for determining or limiting the scope of the claimed subject matter.
[0003] Some implementations may include a computing device to generate a presentation including a plurality of slides. The presentation may be generated based on an input file that includes commands from a presentation markup language. The commands may specify details associated with the presentation. The details may include a title and a background image associated with each of the plurality of slides, one or more points to be included in each of the plurality of slides, and a style associated with each of the plurality of slides.
BRIEF DESCRIPTION OF THE DRAWINGS
[0004] The detailed description is described with reference to the accompanying figures. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. The same reference numbers in different figures indicate similar or identical items.
[0005] FIG. 1 is an illustrative architecture that includes creating a presentation according to some implementations.
[0006] FIG. 2 is an illustrative architecture that includes creating a presentation at a story level, a scene level, and a detail level according to some implementations.
[0007] FIG. 3 is a flow diagram of an example process that includes specifying and revising a presentation according to some implementations.
[0008] FIG. 4 is a flow diagram of an example process that includes specifying details associated with a presentation according to some implementations.
[0009] FIG. 5 is a flow diagram of an example process that includes generating a presentation based on a specification according to some implementations.
[0010] FIG. 6 is a flow diagram of an example process that includes presenting a presentation based on a specification according to some implementations.
[0011] FIG. 7 illustrates an example configuration of a computing device and environment that can be used to implement the modules and functions described herein.
DETAILED DESCRIPTION
[0012] The systems and techniques described herein may be used to create presentations that are dynamic compared to traditional linear presentations. For example, the presentations may include presentations that are dynamically alterable during rehearsal and delivery. A user may specify and manipulate the points to be made in the presentation and the relationships between the points. The user may select global style parameters (e.g., fonts, colors, spacing, and the like) independently or using suggested themes. The presentation media (e.g., slide decks and/or other types of media) may be generated automatically based on the specified points, relationships, and styles. The user may repeatedly review the presentation, tweak the presentation (e.g., by tweaking one or more of the points, relationships, or global style parameters), and re-generate the presentation media until the user is satisfied. The resulting presentation may support interaction with points in the presentation medium based on the relationships between the points to enable self-testing during rehearsal and flexible navigation during delivery.
[0013] Presentations specified in such a manner may be edited and regenerated quickly using a rapid prototyping process, thereby providing a usable presentation quickly while supporting changes in presentation style and structure as the presentation evolves. The presentation media may be automatically generated to include rich navigation options reflecting the relationships between points, in ways that would be laborious to set up by hand (e.g., manually hyperlinking slides) and fragile in response to structural changes. The systems and techniques described herein may be deployed equally effectively on a variety of platforms, from desktop computers, to laptop computers, to touch-based tablet devices, enabling authoring capabilities that are idea-based rather than style-based, that support casual entry that is not labor intensive, and where touchscreen capabilities may be supported to enable dynamic navigation rather than linear presentation.
ILLUSTRATIVE ARCHITECTURES
[0014] FIG. 1 is an illustrative architecture 100 that includes creating a presentation according to some implementations. The architecture 100 includes a computing device 102 and a server 104 coupled to a network 106. The network 106 may include one or more networks, such as a wireless local area network (e.g., WiFi®, Bluetooth™, or other type of near-field communication (NFC) network), a wireless wide area network (e.g., a code division multiple access (CDMA) network, a global system for mobile (GSM) network, or a long term evolution (LTE) network), a wired network (e.g., Ethernet, data over cable service interface specification (DOCSIS), Fiber Optic System (FiOS), Digital Subscriber Line (DSL), and the like), another type of network, or any combination thereof.
[0015] The computing device 102 may be coupled to the display device 108, such as a monitor. In some implementations, the display device 108 may include a touchscreen. The computing device 102 may be a desktop computing device, a laptop computing device, a tablet computing device, a wireless phone, a media playback device, a media recorder, another type of computing device, or any combination thereof. The computing device 102 may include one or more processors 110 and one or more computer readable media 112. The computer readable media 112 may include instructions that are organized into modules and that are executable by the one or more processors 110 to perform various functions. For example, the computer readable media 112 may include an authoring module 114, a generation module 116, and a presentation module 118. The authoring module 114 may enable a user of the computing device 102 to author a presentation 120 by specifying points to be made, the relationships between the points, and styles associated with the points. The generation module 116 may enable the user to generate the presentation 120 after authoring the presentation 120. The presentation module 118 may enable the user to present the presentation 120 using a display device, such as the display device 108. If the user does not specify a style for the presentation 120, one or more of the modules 114, 116, or 118 may select a default style.
[0016] The presentation 120 may include one or more slides, such as a first slide 122 to an Nth slide 124 (where N>1). Each of the N slides may include one or more points 126, text 128, one or more images 130, media data 132, links 134, or any combination thereof. Of course, other types of data may also be included in the presentation 120. The points 126 may include one or more primary concepts or ideas that are to be conveyed to the audience. The points 126 may be conveyed using one or more of the text 128, the images 130, or the media data 132. The text 128 may include text that specifies details associated with one or more of the points 126. The one or more images 130 may include images (e.g., photographs, graphics, icons, or the like) that visually illustrate one or more of the points 126. The media data 132 may include audio data, video data, or other types of media data that may be played back to illustrate one or more of the points 126. The links 134 may be specified by a user and may be used to connect different points (e.g., from the points 126) and different slides (e.g., from the N slides 122 to 124) with each other to enable a presenter to dynamically provide additional details associated with a particular point during the presentation. For example, based on the type of audience to which the presentation is being given, different questions may arise relating to the same point. The links 134 may enable the presenter to branch off and present additional information to answer different questions arising from the same point. For example, a point may have three sub-points, A1, A2, and A3. If a question arises relating to sub-point A1, the presenter may select a first link to present additional materials associated with the sub-point A1. Similarly, if a question arises relating to sub-point A2, the presenter may select a second link to present additional materials associated with the sub-point A2. If a question arises relating to sub-point A3, the presenter may select a third link to present additional materials associated with the sub-point A3. If no questions arise relating to sub-points A1, A2, or A3, then the presenter may move to a next point without accessing the additional materials related to sub-points A1, A2, and A3. Thus, the links 134 may enable the presenter to dynamically customize the delivery of the presentation 120 while presenting the presentation 120.
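By way of illustration only, the following sketch (written in Python) shows one possible in-memory representation of the slides, points 126, and links 134 described above; the class and field names are hypothetical and do not denote any required implementation.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Point:
    text: str                                              # idea to be conveyed (e.g., sub-point A1)
    sub_points: List["Point"] = field(default_factory=list)
    media: List[str] = field(default_factory=list)         # paths to images, audio, or video

@dataclass
class Link:
    source: str          # identifier of the originating point or slide
    target: str          # identifier of the linked point, slide, or external resource
    external: bool = False

@dataclass
class Slide:
    title: str
    background_image: Optional[str] = None
    points: List[Point] = field(default_factory=list)
    links: List[Link] = field(default_factory=list)

# Following a link whose target identifies the slide holding the additional
# material for sub-point A1 lets the presenter branch off on demand and return.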
[0017] The server 104 may include one or more processors 136 and one or more computer readable media 138. The computer readable media 138 may include one or more of the authoring module 114, the generation module 116, or the presentation module 118. In some cases, one or more of the modules 114, 116 or 118 may be downloaded from the server 104 and stored in the computer readable media 112 to enable a user of the computing device 102 to use the modules 114, 116 or 118. In other cases (e.g., in a cloud computing environment), the server 104 may host one or more of the modules 114, 116 or 118 and the computing device 102 may access one or more of the modules 114, 116 or 118 using the network 106. For example, the computing device 102 may send input data 140 to the server 104. The input data 140 may include authoring information, such as points to be made in a presentation, the relationship between the points, and specified styles. The server 104 may generate the presentation 120 based on the input data 140 and send the presentation 120 to the computing device 102. The modules 114, 116, or 118 may be distributed across multiple computing devices, such as the computing device 102 and the server 104.
[0018] Thus, the computing device 102 may enable a user to author a presentation 120. In some cases, the presentation may be generated by the computing device 102 using the generation module 116 stored in the computer readable media 112. In other cases, the server 104 may generate the presentation 120 using the generation module 116 stored in the computer readable media 138 based on the input data 140 provided by the computing device 102. The presentation 120 may be presented on the display device 108 using the computing device 102 (or another computing device). For example, the presentation 120 may be authored and generated using the computing device 102 and/or server 104 but may be presented using a different computing device.
[0019] The computer readable media 112, 138 are examples of storage media used to store instructions which are executed by the processor(s) 110, 136 to perform the various functions described above. For example, the computer readable media 112, 138 may generally include both volatile memory and non-volatile memory (e.g., RAM, ROM, or the like). Further, the computer readable media 112, 138 may include hard disk drives, solid-state drives, removable media, including external and removable drives, memory cards, flash memory, floppy disks, optical disks (e.g., CD, DVD), a storage array, a network attached storage, a storage area network, or the like. The computer readable media 112, 138 may be one or more types of storage media capable of storing computer-readable, processor-executable program instructions as computer program code that can be executed by the processor(s) 110, 136 as a particular machine configured for carrying out the operations and functions described in the implementations herein.
[0020] The computing device 102 and server 104 may also include one or more communication interfaces for exchanging data with other devices, such as via a network, direct connection, or the like, as discussed above. The communication interfaces may facilitate communications within a wide variety of networks and protocol types, including wired networks (e.g., LAN, cable, etc.) and wireless networks (e.g., WLAN, cellular, satellite, etc.), the Internet and the like. The communication interfaces may also provide communication with external storage (not shown), such as in a storage array, network attached storage, storage area network, or the like.
[0021] The example systems and computing devices described herein are merely examples suitable for some implementations and are not intended to suggest any limitation as to the scope of use or functionality of the environments, architectures and frameworks that can implement the processes, components and features described herein. Thus, implementations herein are operational with numerous environments or architectures, and may be implemented in general purpose and special-purpose computing systems, or other devices having processing capability. Generally, any of the functions described with reference to the figures can be implemented using software, hardware (e.g., fixed logic circuitry) or a combination of these implementations. The term "module," "mechanism" or "component" as used herein generally represents software, hardware, or a combination of software and hardware that can be configured to implement prescribed functions. For instance, in the case of a software implementation, the term "module," "mechanism" or "component" can represent program code (and/or declarative-type instructions) that performs specified tasks or operations when executed on a processing device or devices (e.g., CPUs or processors). The program code can be stored in one or more computer-readable memory devices or other computer storage devices. Thus, the processes, components and modules described herein may be implemented by a computer program product.
[0022] Furthermore, this disclosure provides various example implementations, as described and as illustrated in the drawings. However, this disclosure is not limited to the implementations described and illustrated herein, but can extend to other implementations, as would be known or as would become known to those skilled in the art. Reference in the specification to "one implementation," "this implementation," "these implementations" or "some implementations" means that a particular feature, structure, or characteristic described is included in at least one implementation, and the appearances of these phrases in various places in the specification are not necessarily all referring to the same implementation.
[0023] Furthermore, while FIG. 1 sets forth an example of a suitable architecture to generate presentations, numerous other possible architectures, frameworks, systems and environments will be apparent to those of skill in the art in view of the disclosure herein.
[0024] FIG. 2 is an illustrative architecture 200 that includes creating a presentation at a story level, a scene level, and a detail level according to some implementations. The architecture 200 illustrates how a user may author the presentation 120 using the authoring module 114. To enable the user to author the presentation 120, the authoring module 114 may provide a graphical user interface, a command line interface, markup commands, other types of authoring commands, or any combination thereof.
SETTING GOALS GIVEN CONSTRAINTS
[0025] A presentation, such as the presentation 120, may be created with a goal that is constrained by parameters, such as the content, the audience, the schedule, the event, the preparation context, and the delivery context. For example, a presenter may desire that the look of the presentation not overshadow the actual content and/or message of the presentation. Presentations may be explicitly constrained by rules, such as an amount of time, a number of slides, or other constraint that is allocated to the presenter. When creating a presentation, presenters may be influenced by the relative performance of peers when the peers deliver similar presentations. Presenters may take into consideration the relationship between different kinds of audience members and the content to be presented, anticipating and formulating responses to questions that could arise as a result.
TELLING STORIES WITH A PRESENTATION
[0026] A presentation may include information and examples that are wrapped in a narrative and delivered through the interplay of visuals and speech. The presentation may have multiple points and layers that are connected by a sense of coherence and flow. The modules 114, 116, or 118 may enable a story to be developed before the presentation 120 is generated. Starting with the goal in mind may guide subsequent activities, including crafting implicit messages, explicit takeaways, or rhetorical questions. For example, mapping points to slide titles may provide a provisional structure for elaboration. The modules 114, 116, or 118 may enable the user to quickly and easily replace text with images and/or graphics that convey the intended message.
[0027] The modules 114, 116, or 118 may enable connecting a slide to a next slide by using a leading question, a hint, or a concern prior to moving from the slide to the next slide. Planning and adding transition words to a presentation may enable the presenter to explain to the audience why the presentation is moving to the next topic. The modules 114, 116, or 118 may enable the presenter to view and rearrange content at a high level to enable the presenter to create a sense of flow using images and/or descriptive text. For example, each part of the presentation may be tied to a main theme/story that the presentation is conveying.
PREPARING FOR STRUCTURED SPONTANEITY
[0028] The modules 114, 116, or 118 may enable a presenter to rehearse and refine a presentation to preserve the presentation structure in the mind of the presenter, thereby encouraging a natural delivery that is free from reading and recital. For example, the authoring module 114 may provide a rehearsal mode that enables the use of visual cues in slides for the recall of points that are to be made verbally. To illustrate, the rehearsal mode may enable the presenter to learn the association between the visual cues and the points to be made using presenter notes, by providing physical or electronic flashcards to drill points into memory, or other types of cues. During rehearsal, practicing speaking slides out loud may highlight differences between written and spoken language and support the rephrasing of the language in the presenter notes. The rehearsal mode may enable the presenter to establish a mental structure using performance-oriented rehearsals, such as practicing walking around, rehearsing in front of a mirror, gesturing, or visualizing the delivery.
ORCHESTRATING FOCUS AND FLOW
[0029] During delivery, the presenter may desire to communicate a presentation that flows smoothly from start to finish by directing the audience's attention to key points using a combination of visuals, gestures, and speech. Breaking the flow of the presentation may be detrimental to the presenter and/or the audience by side-tracking the presentation from the key points of the presentation. For example, after creating the presentation but before presenting the presentation, the presenter may obtain information (e.g., a recent event that occurred) and alter the emphasis of the presentation based on the information. For example, the presenter may determine to go into more detail on some points while glossing over or skipping other points. Thus, a presenter may desire to present portions of the presentation in a non-linear order based on information obtained prior to presenting the presentation, in response to audience questions, etc. Memorizing where various pieces of information are presented in the presentation may be impractical for large presentations and/or presentations that have undergone significant revisions. Exiting a presentation to access additional files may result in losing the attention of audience members and/or creating a perception that the presenter is disorganized. Even if a presenter prepares extra slides (e.g., as an appendix at the end of the presentation) to enable the presenter to discuss points in greater detail, accessing the appropriate slides and then resuming the presentation may be disruptive to the flow of the presentation.
[0030] To enable a presentation to be presented smoothly in a non-linear manner, the authoring module 114 may enable linking various portions of the presentation. For example, a presenter may link points with other points (e.g., sub-points), slides with other slides, etc. to enable non-linear delivery of the presentation without disrupting the flow of the presentation. Such a presentation may enable the presenter to dynamically customize the presentation while the presenter is presenting the presentation. The presenter can respond to information obtained after the presentation was generated by presenting the presentation in a way that focuses on the points that are relevant to the information while skimming or ignoring irrelevant points. The presenter can respond to questions and explore details about points that are of interest to audience members while skimming or ignoring points that are of less interest to the audience.
INFLUENCING THE AUDIENCE WITH TIMING
[0031] Communication takes place over time, and the presenter's timed rehearsals, attention to timekeeping, and rhythm of spoken delivery may affect audience perceptions. For example, even with good timekeeping, the number of questions asked or the amount of discussion generated by certain points may cause the presenter to skip portions of the presentation to remain within the allotted time constraints. As another example, a stressed presenter may speak too fast or go through the presentation so quickly that audience members are unable to absorb the contents of the presentation, leaving the audience frustrated. During delivery, presenters may desire to hit particular topics by particular times. To end the presentation gracefully, some presenters may use a timer that signals the presenter at a predetermined interval (e.g., 2 minutes, 5 minutes, etc.) before the end of the time allotted for the presentation. Finishing on time is often viewed as a measure of success for a presentation, particularly when the alternative is being told to stop speaking.
[0032] Thus, presenters who do not rehearse may overestimate or underestimate the number of points that may be covered in a particular time period and end up either skipping through large portions of the presentation or zipping through the presentation. In either case, the audience may be frustrated because the presentation was not presented in a manner that was conducive to the audience learning the intended message. The modules, 114, 116, or 118 may enable the presenter to rehearse and time presenting the presentation in a way that the audience leaves having understood the intended message. For example, the presenter may set time targets for high level scenes and one or more of the modules 114, 116, or 118 may proportionately distribute the time target to subordinate (e.g., detail) slides and/or points. Thus, when timing feedback is displayed for a scene slide (e.g., during rehearsal or delivery), and the presenter navigates to a subordinate detail slide, the same timing feedback may seamlessly continue until the presenter moves to a different scene. Such a per-scene approach might require less presenter effort and pressure as compared to a per-slide approach.
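As a non-limiting sketch of the per-scene timing approach, written in Python with hypothetical names, a scene-level time target might be distributed over subordinate detail slides in proportion to the number of points each slide carries:
def distribute_scene_time(scene_minutes, points_per_detail_slide):
    # Distribute a scene-level time target over detail slides in proportion to
    # the number of points each detail slide carries (illustrative weighting).
    total_points = sum(points_per_detail_slide.values())
    if total_points == 0:
        return {slide_id: 0.0 for slide_id in points_per_detail_slide}
    return {slide_id: scene_minutes * count / total_points
            for slide_id, count in points_per_detail_slide.items()}

# Example: a 6-minute scene whose detail slides carry 2, 1, and 3 points would
# receive 2.0, 1.0, and 3.0 minutes respectively; the timing feedback shown for
# the scene continues while any of its detail slides is displayed.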
[0033] As discussed above, audience interaction and time constraints may shape the authoring, rehearsal, and delivery of a presentation. However, the authoring environment of conventional presentation applications may constrain presenters to a primarily linear presentation delivery. In contrast, the modules 114, 116, or 118 enable a user to specify constraints (e.g., time constraints) while enabling dynamic presentation rehearsal and delivery. Given a fixed amount of time to prepare a presentation, the time saved by not directly manipulating text, images, and other slide parameters to achieve a particular style may be reallocated to more important activities, such as (a) telling stories using the presentation by thinking about the sequence, structure, and purpose of the points to be made and (b) preparing for structured spontaneity. Time spent on these activities may result in the presenter having a more rehearsed mastery of in-depth material, thereby creating the freedom for the presenter to be more dynamic, responsive, and improvisational during delivery.
[0034] The modules 114, 116, or 118 may enable presenters to organize the points they wish to communicate and automatically generate a presentation based on the organization of the points. Enabling presenters to plan the presentation using points before committing them to a presentation (e.g., slides) may enable the presenter to visualize the entire story to be unfolded using the presentation. Enabling presenters to plan the presentation using points may enable presenters to focus on crafting a story that is effective, memorable, and appropriate for the audience. The modules 114, 116, or 118 may enable presenters to take a collection of points sourced from multiple documents and/or multiple people and generate a presentation that includes those points along with a consistent style. The style used for the presentation may be quickly and easily customizable while conforming to best practices in the visual design of presentations. Styling a presentation may enable the use of images to emotionally impact the audience. Concepts presented using images may be remembered by the audience for a longer period of time than concepts presented by words alone. In general, the styling may apply principles of visual design, including contrast, repetition, alignment, and proximity.
[0035] The modules 114, 116, or 118 may enable presenters to craft and connect the central scenes of a high-level narrative and encourage planning of verbal linkages between scenes to strike a balance between storytelling and analysis. The points may be organized into scenes based on the presenter determining how deeply to explore each scene while completing the presentation on time. The term "scene" as used herein refers to a set of one or more points that advance a higher-level story. Scenes may enable the presentation to flow from a portion of the presentation (e.g., a slide) to a next portion (e.g., a next slide), using a chronological flow, a problem/solution flow, or an opportunity/leverage flow. Appropriate scenes for a presentation may be discovered by clustering related points into different organizational structures, such as columns or a hierarchical tree. A hierarchical organization of the scenes and/or the points within each scene may enable the organization of the presentation such that supporting information branches off from the primary idea(s) being conveyed.
[0036] The modules 114, 116, or 118 may enable presenters to link scenes in various ways, such as using an opening gambit (e.g., a question, a factoid, or an anecdote), making repeated references to the flow structure, making logical transitions between outbound and inbound topics, closing with a call to action, etc. Presenting a visual road map (e.g., outline) near the start and end of the presentation may guide the audience when the presentation is being presented and may assist the audience with retaining a mental model of the presentation. Linking points to other points and scenes may enable the dynamic expansion of points into sub-points, notes, media, files, or web pages that support the point being presented. Cued-recall learning refers to a flashcard-like method of testing recall of target information given an initial cue. Unlike conventional presentation software, the modules 114, 116, or 118 may support cued-recall learning. During delivery, the ability to expand points as needed (e.g., on-demand) gives the presenter the freedom and flexibility to respond to the audience by presenting points that are appropriate for the audience at a depth that is appropriate for the audience.
[0037] In some implementations, the modules 114, 116, or 118 may generate a graphical user interface that enables a user to specify details associated with the presentation, such as a title, one or more points, and one or more graphics for each portion (e.g., slide) of the presentation. In other implementations, a simple presentation markup language (PML) may be provided to enable a user to specify titles and points for each slide included in a presentation. An example of a PML that enables a user to specify details associated with the presentation is provided in Table 2. The PML described in Table 2 may support the development of high-level scenes illustrated with full-bleed images, the expansion of scenes into points, the expansion of points into sub-points, supporting files, media, and/or web pages, and the preparation of links between scenes.
Table 2 - Presentation Markup Language (PML) Example
[0038] The PML of Table 2 may enable a presenter to specify a variety of style parameters, such as font types, colors of titles and body text, sizes and colors of title backgrounds (e.g., to create contrast when overlaid on a background image), the aspect ratio of the slide, a background color, other style-related parameters, or any combination thereof. The modules 114, 116, and 118 may automatically (e.g., without human interaction) scale slide titles to fill the space available in each slide. The modules 114, 116, and 118 may enable the adjustment of links between a slide and other slides in the presentation. For example, in some cases, a transparent linkage box may be added to zero or more of the four edges of each slide. Each linkage box may be used to hyperlink a particular slide to one or more other slides of the presentation to create an interconnected slide network. The hyperlinks may provide a mechanism for dynamic navigation between slides when the presentation is being presented. For example, the hyperlinks may be navigated using a mouse or using a touchscreen. When using a touchscreen, one or more of the modules 114, 116, or 118 may interpret directional swipe gestures as navigation commands.
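One way such edge-based linkage boxes and swipe navigation might be wired together is sketched below in Python; the slide identifiers and the mapping of swipe directions to edges are assumptions made purely for illustration.
# Each slide may expose up to four edge linkage boxes, each hyperlinking to
# another slide; a directional swipe (or a click on the corresponding edge)
# is interpreted as a navigation command.
edge_links = {
    "scene_2": {"left": "scene_1", "right": "scene_3",
                "top": "storyline_overview", "bottom": "scene_2_points"},
}

swipe_to_edge = {"swipe_left": "right", "swipe_right": "left",
                 "swipe_up": "bottom", "swipe_down": "top"}

def navigate(current_slide, gesture):
    # Return the slide linked at the swiped edge, or stay on the current slide.
    edge = swipe_to_edge.get(gesture)
    links = edge_links.get(current_slide, {})
    return links.get(edge, current_slide)

# navigate("scene_2", "swipe_left") evaluates to "scene_3".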
[0039] A presentation, such as the presentation 120 of FIG. 1, may be generated using one or more of the modules 114, 116, or 118 by specifying details of the presentation at a story level 202, a scene level 204, and a detail level 206. At the story level 202, a user may define scenes, such as one or more of a first scene 208, a second scene 210, a third scene 212, or a fourth scene 214. The scenes 208, 210, 212, and 214 may be displayed in a thumbnail view to enable the user to select a particular scene and link the particular scene with one or more other scenes. For example, the user may select the first scene 208 and add links from the first scene 208 to the second scene 210. The user may then select another of the scenes 208, 210, 212, or 214 and add links from the selected scene to other scenes and so on. In the thumbnail view, a selected scene may provide a visual indication that the scene has been selected, such as by displaying a darker border (as illustrated in FIG. 2), a flashing border, or other visual indicator.
[0040] The scene level 204 illustrates how the first scene 208 is horizontally linked to the second scene 210 and the second scene 210 is horizontally linked to the third scene 212. The scene level 204 may enable the user to add a top level point, such as a first title and first image 218 to the first scene 208, a second title and second image 220 to the second scene 210, and a third title and a third image 222 to the third scene 212. Clicking the top border of a scene may cause a jump to a hyperlinked "storyline" slide with the outbound scene highlighted. From a sequence of scenes described by the user, an overview of all scenes may be automatically created by one or more of the modules 114, 116, or 118 to support non-linear navigation and visual reference to the presentation structure. The presentation structure may be created by statically hyperlinking slides to one another, with different overview slides created with different scenes highlighted according to the outbound scene. A similar presentation structure may be achieved through navigation and highlights interpreted dynamically at presentation runtime (e.g., by the application). Clicking on a particular scene thumbnail may cause the presentation to jump directly to the particular scene, while horizontal navigation may display the links between scenes, providing a story rehearsal path for preparing the presentation. Clicking on a bottom portion of a scene being displayed may display the high level points for the scene. For example, in response to the user clicking the bottom portion of the scene, the high level points may be displayed using a drop down menu. If the user causes the high level points to be displayed (e.g., rather than speaking about just the displayed scene), the user may navigate back to the scene level 204 before advancing to a next slide. Such a mechanism may enable the presenter to provide closure to each scene while prompting the presenter to verbally convey a previously prepared verbal linkage without displaying text that competes for the audience's attention.
[0041] The detail level 206 may enable the user to add internal hyperlinks and/or external hyperlinks. The details may not be added in a detail view, but the detail level 206 may be realized through hyperlinked bullets generated from hierarchically-related points (as well as points connected manually in a free-form structure). For example, at the detail level 206, the user may add a hyperlink to any of the points 126. The hyperlink may be used to link to an external file, a web page, or another type of content that is external to the presentation 120. The hyperlink may be used to link a point or a slide to other points, slides, descriptions, media data (e.g., image data, video data, audio data, etc.), or other material that is included in the presentation 120. Once a point has been expanded, clicking on the top border of each slide may enable the user to navigate back up the hierarchy. Navigating horizontally at the detail level 206 may enable the user to follow a detailed rehearsal path, performing a depth-first traversal of all expandable points in the presentation, with "cue" slides indicating which points may be expanded. The presenter may repeatedly traverse the rehearsal path until the structure of the presentation and the content of each of the points can be recalled. The modules 114, 116, or 118 may automatically generate slide notes to show scene linkages and previews of point expansions.
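The detailed rehearsal path mentioned above could be computed, for example, with a depth-first traversal such as the following Python sketch (the dictionary structure and names are hypothetical):
def rehearsal_path(point, path=None):
    # Depth-first traversal of expandable points, yielding the order in which a
    # presenter would visit them along the detailed rehearsal path.
    if path is None:
        path = []
    path.append(point["title"])
    for child in point.get("children", []):
        rehearsal_path(child, path)
    return path

story = {"title": "Storyline", "children": [
    {"title": "Authoring", "children": [
        {"title": "Planning with Points", "children": []}]},
    {"title": "Rehearsal", "children": []},
]}

# rehearsal_path(story) evaluates to
# ["Storyline", "Authoring", "Planning with Points", "Rehearsal"].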
[0042] To generate the presentation 120, an input file including PML commands, an input text file, and any hyperlinked files or media that are to be embedded into the presentation 120 may be placed in a folder. Providing the input file that includes the PML commands to an application (e.g., one or more of the modules 114, 116, or 118) may generate/regenerate the presentation 120, which may automatically be opened in an installed presentation application, such as Microsoft® PowerPoint®. An example of an input file may appear as follows (an illustrative parsing sketch is provided after the example):
{A new way to think about presentations}
Λ opening verbal introduction
[Presenting with HyperSlides < Presenting.jpg]
Λ title and image of scene 1
[> Dynamic Presentation Prototyping]
[» Dynamic Prototyping of Presentations]
[» Prototyping of Dynamic Presentations]
[> Practical Guidance > PresentationZen.jpg]
[> Empirical Grounding » GroundedTheoryStudy.docx]
Λ hyperlinked bullets, slides, and files of scene 1
{Presentation slides are prototyped dynamically}
Λ verbal transition to scene 2
[Authoring < Prototyping.jpg]
[> Setting Goals given Constraints]
[> Telling Stories with Slides]
[> Planning with Points]
[> Styling as a Service]
[> Linking between Scenes]
{Presentation links are rehearsed dynamically}
[Rehearsal < Rehearsing.jpg]
[> Preparing for Structured Spontaneity]
[> Linking between Scenes]
[> Expanding on Demand (to learn the story)]
{Presentation itself is delivered dynamically}
[Delivery < Delivering.jpg]
[> Orchestrating Focus and Flow]
[> Influencing Audience with Timing]
[> Expanding on Demand (to tell the story)]
{Rapid iterative prototyping of flexible presentations}
Λ closing verbal takeaway message
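Purely for illustration, the following Python sketch parses lines of the general form shown in the example above; the meanings assigned to the brace, bracket, angle-bracket, and annotation markers are inferred from the example and are assumptions rather than a definitive grammar.
def parse_pml(lines):
    # Build a simple structure from lines of the illustrated input format.
    # Assumed conventions (inferred from the example, not authoritative):
    #   {text}      verbal point (spoken introduction, transition, or takeaway)
    #   [T < img]   scene slide with title T and background image img
    #   [> text]    point (bullet) under the current scene
    #   [» text]    sub-point expanding the preceding point
    #   lines beginning with the annotation marker are treated as comments
    presentation = {"scenes": [], "verbal_points": []}
    current_scene = None
    for raw in lines:
        line = raw.strip()
        if not line or line.startswith(("^", "Λ")):   # annotation line, ignored
            continue
        if line.startswith("{") and line.endswith("}"):
            presentation["verbal_points"].append(line[1:-1].strip())
        elif line.startswith("[") and line.endswith("]"):
            body = line[1:-1].strip()
            if body.startswith(">") or body.startswith("»"):
                if current_scene is not None:          # point or sub-point
                    current_scene["points"].append(body.lstrip(">» ").strip())
            else:                                      # scene title, optionally "< image"
                title, _, image = body.partition("<")
                current_scene = {"title": title.strip(),
                                 "image": image.strip() or None,
                                 "points": []}
                presentation["scenes"].append(current_scene)
    return presentation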
EXAMPLE PROCESSES
[0043] In the flow diagrams of FIGS. 3-6, each block represents one or more operations that can be implemented in hardware, software, or a combination thereof. In the context of software, the blocks represent computer-executable instructions that, when executed by one or more processors, cause the processors to perform the recited operations. Generally, computer-executable instructions include routines, programs, objects, modules, components, data structures, and the like that perform particular functions or implement particular abstract data types. The order in which the blocks are described is not intended to be construed as a limitation, and any number of the described operations can be combined in any order and/or in parallel to implement the processes. For discussion purposes, the processes 300, 400, 500, and 600 are described with reference to the architectures 100 and 200 as described above, although other models, frameworks, systems and environments may implement these processes.
[0044] FIG. 3 is a flow diagram of an example process that includes specifying and revising a presentation according to some implementations. The process 300 describes how a user may create and refine a presentation.
[0045] At 302, the user may specify one or more aspects of a presentation. For example, in FIG. 1, the user may specify different components of the presentation 120, such as one or more of the points 126, the text 128, the images 130, the media data 132, or the links 134 using a PML (e.g., the PML illustrated in Table 2) or a GUI provided by one or more of the modules 114, 116, or 118.
[0046] At 304, the user may generate the presentation. For example, in FIG. 1, the user may use the generation module 116 to generate the presentation 120.
[0047] At 306, the user may present the presentation. For example, in FIG. 1, the user may use the presentation module 118 to display the presentation 120 on the display device 108. The user may rehearse the presentation using a rehearsal mode of the presentation module 118 and deliver the presentation 120 to an audience using a delivery mode of the presentation module 118.
[0048] During the rehearsal mode, if the user desires to modify one or more aspects of the presentation 120, the user may return to 302 to further revise 308 the presentation 120. Thus, 302, 304, 306, and 308 may be repeated until the user is satisfied with the resulting presentation 120.
[0049] FIG. 4 is a flow diagram of an example process that includes specifying details associated with a presentation according to some implementations. For example, the process 400 may be performed by the authoring module 114 of FIG. 1.
[0050] At 402, visual points may be specified. A visual point may be an idea that is to be conveyed visually using the presentation, e.g., using one or more slides or media data from the presentation.
[0051] At 404, connections between the visual points may be created. For example, in FIG. 1, the authoring module 114 may be used to include visual points that are connected vertically, horizontally, hierarchically, linearly, non-linearly, circularly, or any combination thereof in the presentation 120.
[0052] At 406, verbal points may be specified. A verbal point may be an idea that is to be conveyed verbally by the presenter. For example, a verbal point may be used to introduce the presentation, to transition from one slide to another slide during the presentation, or to make another type of point. In FIG. 1, the authoring module 114 may be used to add cues (e.g., text, images, and/or media data) to prompt the presenter to convey verbal points.
[0053] At 408, the contents of the visual points and the verbal points may be edited. For example, in FIG. 1, the authoring module 114 may be used to specify one or more of text, images, or media data in the contents of the visual points and/or the verbal points.
[0054] At 410, styles associated with the presentation may be specified. For example, in FIG. 1, the authoring module 114 may be used to specify styles associated with each of the slides 122 to 124, such as fonts, colors, background images, foreground images, or other styles associated with presentations.
[0055] At 412, the presentation may be generated. For example, in FIG. 1, the generation module 116 may be used to generate the presentation 120 after the user has completed specifying the contents of the presentation 120. To illustrate, the user may use the authoring module 114 to author a presentation using a PML (e.g., the PML illustrated in Table 2), a graphical user interface, or other authoring tool and then generate the presentation 120 using the generation module 116 based on the authoring. The user may review the generated presentation and repeat one or more of blocks 402, 404, 406, 408, 410, or 412 until the user is satisfied with the resulting presentation.
[0056] A presentation may be viewed as a set of points to be communicated through visuals with or without accompanying speech. The points may be made in the presentation using text, images, media data, or other forms of media such as diagrams, photos, videos, web pages, etc. A particular point may be followed by related points at the same level of abstraction as the particular point, or by secondary points (e.g., sub-points) that expand on the particular point by providing more details, evidence, or examples. Some points may be grouped at higher levels, leading to a hierarchical structure in which the points at a particular level may be ordered to achieve a particular effect (e.g., the presentation of an argument). A presentation may include a hierarchy in which a title slide is the root, while remaining slides may be children of the root. Similarly, the bullets and other non-title visual elements of a slide may be the children of the title, and the notes of the slide may be children of slide elements or the slide title. When using a conventional presentation generation application, once a user has committed to a particular hierarchical structure, modifying the structure may be difficult and/or time consuming. For example, the user may spend a large amount of time (e.g., several minutes) to convert a slide to a bullet or vice-versa. In contrast, the authoring module 114 may enable the user to specify logical relationships (e.g., sequence, transition, hierarchy, and the like) between points without committing to any arrangement or styling. To modify the structure of a presentation, the user merely modifies the relationship between points and re-generates the presentation using the generation module 116. Thus, the structure of the presentation 120 may be altered in a few seconds using the authoring module 114 as compared to a few minutes to alter the structure using a conventional presentation generation application.
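A minimal Python sketch of this idea follows: with points held in a hierarchy, converting a slide into a bullet (or back) reduces to re-parenting a node and regenerating the presentation; the class and function names are invented for illustration.
class Node:
    # A point in the presentation hierarchy: the root is the title slide, its
    # children are slides, their children are bullets, and so on.
    def __init__(self, text):
        self.text = text
        self.children = []

    def add(self, child):
        self.children.append(child)
        return child

def demote(parent, index):
    # Turn the slide-level node at `index` into a bullet of the preceding slide.
    node = parent.children.pop(index)
    parent.children[index - 1].add(node)

root = Node("Title slide")
scene_a = root.add(Node("Scene A"))
scene_b = root.add(Node("Scene B"))
demote(root, 1)   # Scene B becomes a child point (bullet) of Scene A
# Regenerating the presentation from `root` now yields one fewer slide, without
# any manual rearrangement of slide objects.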
[0057] In addition to the text and media content of the points themselves, the authoring module 114 may enable the user to specify a visual theme for the presentation 120. The visual theme may include fonts and colors to be used for the presentation as well as spatial layout rules for the arrangement of points. Thus, the generation of the presentation 120 using the generation module 116 may go beyond direct manipulation of object placement (e.g., as per the What You See Is What You Get (WYSIWYG) paradigm) to support an automated layout guided by principles of graphic and narrative design.
[0058] The presentation 120 may be specified using a simple markup language (e.g., similar to the PML of Table 2) or through a graphical editor that supports the hierarchical layout, styling, and restructuring of points (e.g., using a What You See Is What You Mean (WYSIWYM) paradigm). Thus, the modules 114, 116, or 118 may support assembling the presentation 120 from portions of different presentations because the combined points from the different presentations may be easily restyled to form one consistently styled presentation.
[0059] FIG. 5 is a flow diagram of an example process that includes generating a presentation based on a specification according to some implementations. For example, the process 500 may be performed by the generation module 116 of FIG. 1.
[0060] At 502, a file that includes a presentation specification may be parsed. For example, the file may include PML commands (e.g., from Table 2) that specify details associated with a presentation that is to be generated. The file may be generated by the user or by a graphical user interface provided by the authoring module 114.
[0061] At 504, appropriate design rules may be loaded. The design rules may map an abstract presentation structure (e.g., points, scenes, and their corresponding relationships) into various representational forms, e.g., slides, web pages, handouts, canvas layouts, and the like. The design rules may include styling principles such as proportional spacing, in which the points of a slide are distributed equally across the height of the slide or the child points surrounding a parent point in a spatial canvas layout are placed at equal angular intervals around the parent point. The styling principles may be used to automatically create aesthetic layouts in terms of the relative sizes and distances between presentation points. For example, the golden ratio 1.618 may be used to scale font sizes and inter-point spacing across levels of a presentation point hierarchy. Calculating the visual weight of visual elements (e.g., amount of inked type) and the corresponding centers of mass of the visual elements may provide visual representations that are balanced with respect to the center of the display. The stylistic and spatial relationships between visual elements may vary in accordance with the actions of the presenter while being constrained by the design rules.
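For instance, golden-ratio scaling of font sizes across hierarchy levels and proportional vertical spacing of points might be computed as in this illustrative Python sketch; the base size, slide height, and rounding are arbitrary assumptions:
GOLDEN_RATIO = 1.618

def font_sizes(base_size_pt, levels):
    # Scale the font size down by the golden ratio at each deeper level of the
    # point hierarchy (level 0 corresponds to the slide title).
    return [round(base_size_pt / (GOLDEN_RATIO ** level), 1) for level in range(levels)]

def proportional_y_positions(slide_height, point_count):
    # Distribute points equally across the height of the slide.
    step = slide_height / (point_count + 1)
    return [round(step * (i + 1), 1) for i in range(point_count)]

# font_sizes(44, 3) evaluates to [44, 27.2, 16.8].
# proportional_y_positions(540, 3) evaluates to [135.0, 270.0, 405.0].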
[0062] At 506, the presentation may be generated. For example, in FIG. 1, the generation module 116 may be used to generate the presentation 120.
[0063] At 508, at least some elements of the presentation may be arranged and/or styled. For example, in FIG. 1, the user may review the generated presentation 120, tweak one or more elements of the presentation 120 by modifying an arrangement of the points, a style associated with the presentation 120, or both.
[0064] At 510, at least some elements may be linked. For example, in FIG. 1, the user may link at least some elements of the presentation 120 by linking points or slides with other points or slides in the presentation 120 or by adding hyperlinks to content external to the presentation 120, such as external files, internet sites, etc.
[0065] At 512, the presentation may be saved. For example, in FIG. 1, once the user has generated the presentation 120 and is satisfied with the generated presentation 120, the user may save the presentation 120 (e.g., in the computer readable media 112 or 138).
[0066] Thus, the modules 114, 116, or 118 may enable a content and story-centered approach to specifying the presentation 120. In addition, the modules 114, 116, or 118 may enable the generation of multiple media representations of a particular presentation. For example, using the same authored input file produced by the authoring module 114, the generation module 116 may be used to generate different types of presentations, such as a set of web pages suitable for display on a website, a slide deck (e.g., Microsoft® PowerPoint™) for display using a computing device, a canvas layout (e.g., Microsoft® Expression Studio™), a slide deck suitable for display on a computing device with display constraints (e.g., tablet device or mobile phone), a video (e.g., a movie), or some other presentation medium.
[0067] The generation module 116 may generate the presentation 120 with a structure of points that supports complex navigation such that the presenter can dynamically create a presentation tailored for a particular audience while presenting the presentation. For example, based on information (e.g., current events), audience comments and/or questions, or the like, the presenter can navigate the presentation 120 to go into more depth on some points while skipping or skimming over other points, without the audience being aware that the presenter is dynamically customizing the presentation 120. The presentation 120 may be created by compiling the presentation specification into an extensible markup language (XML) of a document format using software tools, such as an Open XML software development kit (SDK). In some implementations, the slides 122 to 124 may include a title, bullet points, media content, and spatial regions (e.g., slide borders) that support hyperlink-based navigation between the slides 122 to 124 according to the structure of the points in the presentation 120. The user may specify the structure of the points by specifying relationships between the points using the authoring module 114. The modules 114, 116, or 118 may enable hyperlink relationships between points to be quickly specified and modified.
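As a rough illustration (Python, hypothetical structures), the hyperlink network between slides could be derived directly from the specified relationships between points, so that changing a relationship and regenerating the presentation updates every affected hyperlink automatically:
def build_hyperlinks(relationships):
    # Derive slide-to-slide hyperlinks from point relationships.
    # `relationships` maps a point identifier to the identifiers of the points it
    # expands into (sub-points, supporting media, and the like); each pair yields
    # a forward link (expand on demand) and a back link (return up the hierarchy).
    hyperlinks = []
    for parent, children in relationships.items():
        for child in children:
            hyperlinks.append((parent, child))   # expand: parent -> child
            hyperlinks.append((child, parent))   # return: child -> parent
    return hyperlinks

# build_hyperlinks({"Rehearsal": ["Structured Spontaneity", "Expanding on Demand"]})
# evaluates to [("Rehearsal", "Structured Spontaneity"),
#               ("Structured Spontaneity", "Rehearsal"),
#               ("Rehearsal", "Expanding on Demand"),
#               ("Expanding on Demand", "Rehearsal")].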
[0068] FIG. 6 is a flow diagram of an example process that includes presenting a presentation based on a specification according to some implementations. For example, the process 600 may be performed by the presentation module 118 of FIG. 1.
[0069] At 602, a rehearsal mode may be entered. At 604, the presentation may be reviewed. At 606, visual points and verbal points in the presentation may be navigated to rehearse the presentation. At 608, rehearsal mode may be exited. For example, the user may use the presentation module 118 to enter a rehearsal mode to rehearse the presentation 120. In the rehearsal mode, the user may navigate the points 126, including verbal points and visual points, of the presentation 120. The rehearsal mode may be used to familiarize the presenter with the structure and flow of the presentation 120. After the presenter has completed rehearsing the presentation 120, the presenter may exit the rehearsal mode.
[0070] At 610, delivery mode may be entered. At 612, the visual points and/or the verbal points may be navigated while presenting the presentation. For example, in FIG. 1, the user may enter the delivery mode and present the presentation 120, including navigating the points 126 using the links 134. A delivery mode (or generated file) may not include some material prepared for rehearsal because some material may be intended for the speaker but not the audience (e.g., private notes).
[0071] Thus, the presentation module 118 may provide various modes, including a rehearsal mode in which the presenter can rehearse the presentation and a delivery mode in which the presenter can deliver the presentation.
EXAMPLE COMPUTING DEVICE AND ENVIRONMENT
[0072] FIG. 7 illustrates an example configuration of a computing device 700 and environment that can be used to implement the modules and functions described herein. For example, the computing device 102 or the server 104 may include an architecture that is similar to or based on the computing device 700.
[0073] The computing device 700 may include one or more processors 702, a memory 704, communication interfaces 706, a display device 708, other input/output (I/O) devices 710, and one or more mass storage devices 712, able to communicate with each other, such as via a system bus 714 or other suitable connection.
[0074] The processor 702 may be a single processing unit or a number of processing units, all of which may include single or multiple computing units or multiple cores. The processor 702 may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, and/or any devices that manipulate signals based on operational instructions. Among other capabilities, the processor 702 may be configured to fetch and execute computer-readable instructions stored in the memory 704, mass storage devices 712, or other computer-readable media.
[0075] Memory 704 and mass storage devices 712 are examples of computer storage media for storing instructions, which are executed by the processor 702 to perform the various functions described above. For example, memory 704 may generally include both volatile memory and non-volatile memory (e.g., RAM, ROM, or the like). Further, mass storage devices 712 may generally include hard disk drives, solid-state drives, removable media, including external and removable drives, memory cards, flash memory, floppy disks, optical disks (e.g., CD, DVD), a storage array, a network attached storage, a storage area network, or the like. Both memory 704 and mass storage devices 712 may be collectively referred to as memory or computer storage media herein, and may be capable of storing computer-readable, processor-executable program instructions as computer program code that can be executed by the processor 702 as a particular machine configured for carrying out the operations and functions described in the implementations herein.
[0076] Although illustrated in FIG. 7 as being stored in memory 704 of computing device 700, the authoring module 114, the generation module 116, the presentation module 118, the presentation 120, other modules 716 and other data 718, or portions thereof, may be implemented using any form of computer-readable media that is accessible by the computing device 700. As used herein, "computer-readable media" includes computer storage media and communication media.
[0077] Computer storage media includes non-volatile, removable and non-removable media implemented in any method or technology for storage of information, such as computer readable instructions, data structures, program modules, or other data. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store information for access by a computing device.
[0078] In contrast, communication media may embody computer readable instructions, data structures, program modules, or other data in a modulated data signal, such as a carrier wave. As defined herein, computer storage media does not include communication media.
[0079] The computing device 700 may also include one or more communication interfaces 706 for exchanging data with other devices, such as via a network, direct connection, or the like, as discussed above. The communication interfaces 706 can facilitate communications within a wide variety of networks and protocol types, including wired networks (e.g., LAN, cable, etc.) and wireless networks (e.g., WLAN, cellular, satellite, etc.), the Internet and the like. Communication interfaces 706 can also provide communication with external storage (not shown), such as in a storage array, network attached storage, storage area network, or the like.
[0080] A display device 708, such as a monitor, may be included in some implementations for displaying information and images to users. Other I/O devices 710 may be devices that receive various inputs from a user and provide various outputs to the user, and may include a keyboard, a remote controller, a mouse, a printer, audio input/output devices, voice input, and so forth.
[0081] Memory 704 may include modules and components for authoring, generating, and presenting presentations according to the implementations described herein. The memory 704 may include multiple modules (e.g., the modules 114, 116, and 118) to perform various functions. The memory 704 may also include other modules 716 that implement other features and other data 718 that includes intermediate calculations and the like. The other modules 716 may include various software, such as an operating system, drivers, communication software, or the like.
[0082] The example systems and computing devices described herein are merely examples suitable for some implementations and are not intended to suggest any limitation as to the scope of use or functionality of the environments, architectures and frameworks that can implement the processes, components and features described herein. Thus, implementations herein are operational with numerous environments or architectures, and may be implemented in general purpose and special-purpose computing systems, or other devices having processing capability. Generally, any of the functions described with reference to the figures can be implemented using software, hardware (e.g., fixed logic circuitry) or a combination of these implementations. The term "module," "mechanism" or "component" as used herein generally represents software, hardware, or a combination of software and hardware that can be configured to implement prescribed functions. For instance, in the case of a software implementation, the term "module," "mechanism" or "component" can represent program code (and/or declarative-type instructions) that performs specified tasks or operations when executed on a processing device or devices (e.g., CPUs or processors). The program code can be stored in one or more computer-readable memory devices or other computer storage devices. Thus, the processes, components and modules described herein may be implemented by a computer program product.
[0083] Furthermore, this disclosure provides various example implementations, as described and as illustrated in the drawings. However, this disclosure is not limited to the implementations described and illustrated herein, but can extend to other implementations, as would be known or as would become known to those skilled in the art. Reference in the specification to "one implementation," "this implementation," "these implementations" or "some implementations" means that a particular feature, structure, or characteristic described is included in at least one implementation, and the appearances of these phrases in various places in the specification are not necessarily all referring to the same implementation.
CONCLUSION
[0084] Although the subject matter has been described in language specific to structural features and/or methodological acts, the subject matter defined in the appended claims is not limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims. This disclosure is intended to cover any and all adaptations or variations of the disclosed implementations, and the following claims should not be construed to be limited to the specific implementations disclosed in the specification. Instead, the scope of this document is to be determined entirely by the following claims, along with the full range of equivalents to which such claims are entitled.


CLAIMS
What is claimed is:
1. A method, comprising:
under control of one or more processors configured with executable instructions to perform acts comprising:
receiving scene input specifying a plurality of scenes associated with a presentation;
receiving content input specifying content of at least one scene of the plurality of scenes; and
generating the presentation based on the scene input, the content input, and a style.
2. The method as recited in claim 1, the acts further comprising:
receiving linkage information linking at least one scene with one or more other scenes of the plurality of scenes;
receiving a plurality of points for inclusion in the at least one scene; and
receiving structure information associated with the plurality of points, wherein the plurality of points are organized based on the structure information in the generated presentation.
3. The method as recited in claim 2, wherein the structure information specifies that at least a portion of the plurality of points are organized hierarchically in the at least one scene.
4. The method as recited in claim 1, the acts further comprising:
receiving navigation input while the presentation is being presented; and
dynamically displaying a specified portion of the presentation based on the navigation input without displaying other portions of the presentation.
5. The method as recited in claim 2, wherein the scene input, the linkage information, the content input, and style input are specified in an input file using commands from a presentation markup language.
6. The method as recited in claim 1, the acts further comprising:
in response to receiving style input, selecting the style based on the style input; and
in response to determining that style input was not received, selecting a default style as the style.
7. A computing device comprising:
one or more processors;
one or more computer-readable storage media storing instructions executable by the one or more processors to perform acts comprising:
receiving point input specifying a plurality of visual points associated with a presentation;
receiving connection input specifying one or more connections between at least two points of the plurality of visual points;
receiving content input specifying contents of each of the plurality of visual points;
receiving one or more edits associated with the presentation;
editing one or more of the plurality of visual points, the one or more connections, or the contents of at least one of the plurality of visual points based on the one or more edits; and
generating the presentation based on the point input, the connection input, the content input, and the one or more edits.
8. The computing device as recited in claim 7, the acts further comprising:
entering a rehearsal mode in response to receiving an enter rehearsal mode command;
enabling the presentation to be navigated; and
exiting the rehearsal mode in response to receiving an exit rehearsal mode command.
9. The computing device as recited in claim 7, wherein the generating the presentation comprises:
determining a default style in response to determining that style input was not received; and
generating the presentation based on the default style.
10. The computing device as recited in claim 7, wherein the generating the presentation comprises:
receiving style input specifying a style for the presentation; and
generating the presentation based on the style input.
11. The computing device as recited in claim 7, wherein:
the point input and the connection input are specified in an input file using commands from a presentation markup language; and
the generating the presentation comprises:
parsing the input file for the commands; and
generating the presentation based on the parsed commands.
12. The computing device as recited in claim 7, wherein the point input and the connection input are specified using a graphical user interface.
13. A method, comprising:
under control of one or more processors configured with executable instructions to perform acts comprising:
parsing an input file comprising commands from a presentation markup language;
creating a plurality of slides in a presentation, each of the plurality of slides including a title and a background image specified by at least one of the commands;
adding one or more points to at least one slide of the plurality of slides based on the commands;
adding a hyperlink to access a website to at least one slide of the plurality of slides based on the commands;
embedding media data into at least one slide of the plurality of slides based on the commands; and
applying a style specified by one of the commands to each of the plurality of slides.
14. The method as recited in claim 13, wherein the one or more points are organized hierarchically.
15. The method as recited in claim 13, the acts further comprising presenting the presentation using navigation commands.
16. The method as recited in claim 13, the acts further comprising adding a verbal linkage cue to at least one of the plurality of slides based on the commands.
17. The method as recited in claim 13, wherein the media data comprises at least one of audio data or video data.
18. The method as recited in claim 13, the acts further comprising adding a note to at least one of the plurality of slides based on the commands.
19. The method as recited in claim 13, wherein:
a first slide of the plurality of slides originated from a first presentation having a first style;
a second slide of the plurality of slides originated from a second presentation having a second style; and
the plurality of slides have a third style that is different from both the first style and the second style.
20. The method as recited in claim 13, the acts further comprising linking one slide of the plurality of slides to at least one other slide of the plurality of slides.
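The following sketch is illustrative only and forms no part of the claims. It assumes a hypothetical XML-style presentation markup language (the element names presentation, scene, point, link, media, and note, and the Slide class, are invented here for illustration) and shows, in Python, the parse-and-generate flow of claims 11 and 13, together with the hierarchical points of claim 14, the default-style fallback of claims 6 and 9, the embedded media of claim 17, and the notes of claim 18.

```python
# Illustrative sketch only: the markup syntax, element names, and the Slide
# class below are hypothetical and are not defined by the claims. The sketch
# parses an input file of presentation-markup commands, creates one slide per
# scene with a title and background image, adds (optionally hierarchical)
# points, hyperlinks, embedded media, and notes, then applies a single style
# (or a default style when none is specified) to every slide.

import xml.etree.ElementTree as ET
from dataclasses import dataclass, field
from typing import List, Optional

EXAMPLE_MARKUP = """
<presentation style="clean-light">
  <scene title="Why prototype talks?" background="img/whiteboard.jpg">
    <point>Slides are written before the story is clear</point>
    <point>Rehearsal reveals structure late
      <point>Sub-point: reordering slides is costly</point>
    </point>
    <link url="https://example.org/hyperslides">Project page</link>
    <media src="media/demo.mp4" kind="video"/>
    <note>Open with the whiteboard anecdote.</note>
  </scene>
  <scene title="A markup-first workflow" background="img/outline.jpg">
    <point>Author points, not pixels</point>
  </scene>
</presentation>
"""

@dataclass
class Slide:
    title: str
    background: Optional[str] = None
    points: List[dict] = field(default_factory=list)   # nested {"text", "children"}
    links: List[dict] = field(default_factory=list)
    media: List[dict] = field(default_factory=list)
    notes: List[str] = field(default_factory=list)
    style: str = "default"

def parse_point(elem) -> dict:
    """Recursively read a <point>, preserving the point hierarchy."""
    return {
        "text": (elem.text or "").strip(),
        "children": [parse_point(child) for child in elem.findall("point")],
    }

def generate(markup: str, default_style: str = "default") -> List[Slide]:
    """Parse the markup commands and build one styled slide per scene."""
    root = ET.fromstring(markup)
    style = root.get("style", default_style)           # fall back to a default style
    slides: List[Slide] = []
    for scene in root.findall("scene"):
        slide = Slide(title=scene.get("title", ""), background=scene.get("background"))
        slide.points = [parse_point(p) for p in scene.findall("point")]
        slide.links = [{"url": a.get("url"), "text": (a.text or "").strip()}
                       for a in scene.findall("link")]
        slide.media = [{"src": m.get("src"), "kind": m.get("kind", "video")}
                       for m in scene.findall("media")]
        slide.notes = [(n.text or "").strip() for n in scene.findall("note")]
        slide.style = style                             # same style applied to every slide
        slides.append(slide)
    return slides

if __name__ == "__main__":
    for slide in generate(EXAMPLE_MARKUP):
        print(f"[{slide.style}] {slide.title}: {len(slide.points)} points, "
              f"{len(slide.links)} links, {len(slide.media)} media items")
```

Running the sketch prints a one-line summary of each generated slide; an actual implementation would instead serialize the slides into a presentation file format.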

Priority Applications (3)

Application Number Priority Date Filing Date Title
PCT/CN2013/072061 WO2014131194A1 (en) 2013-03-01 2013-03-01 Dynamic presentation prototyping and generation
CN201380074201.XA CN105144672B (en) 2013-03-01 2013-03-01 Dynamic demonstration prototype and generation
EP13876698.5A EP2962259A1 (en) 2013-03-01 2013-03-01 Dynamic presentation prototyping and generation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2013/072061 WO2014131194A1 (en) 2013-03-01 2013-03-01 Dynamic presentation prototyping and generation

Publications (1)

Publication Number Publication Date
WO2014131194A1 (en)

Family

ID=51427491

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2013/072061 WO2014131194A1 (en) 2013-03-01 2013-03-01 Dynamic presentation prototyping and generation

Country Status (3)

Country Link
EP (1) EP2962259A1 (en)
CN (1) CN105144672B (en)
WO (1) WO2014131194A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108268436B (en) * 2016-12-30 2021-08-20 珠海金山办公软件有限公司 Method and device for beautifying and matching slides
JP2022507963A (en) * 2018-11-26 2022-01-18 フォト バトラー インコーポレイテッド Presentation file generation

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5488180B2 (en) * 2010-04-30 2014-05-14 ソニー株式会社 Content reproduction apparatus, control information providing server, and content reproduction system

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004184576A (en) * 2002-12-02 2004-07-02 Nomura Human Capital Solutions Co Ltd Presentation system
US20050108619A1 (en) * 2003-11-14 2005-05-19 Theall James D. System and method for content management
CN102169483A (en) * 2011-04-25 2011-08-31 江西省电力公司信息通信中心 Filmstrip automatic generation method based on electronic spreadsheet

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP2962259A4 *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10282075B2 (en) 2013-06-24 2019-05-07 Microsoft Technology Licensing, Llc Automatic presentation of slide design suggestions
US11010034B2 (en) 2013-06-24 2021-05-18 Microsoft Technology Licensing, Llc Automatic presentation of slide design suggestions
US9824291B2 (en) 2015-11-13 2017-11-21 Microsoft Technology Licensing, Llc Image analysis based color suggestions
US10528547B2 (en) 2015-11-13 2020-01-07 Microsoft Technology Licensing, Llc Transferring files
US10534748B2 (en) 2015-11-13 2020-01-14 Microsoft Technology Licensing, Llc Content file suggestions
CN109804372A (en) * 2016-02-02 2019-05-24 微软技术许可有限责任公司 Image section in demonstration is emphasized
CN109804372B (en) * 2016-02-02 2023-10-31 微软技术许可有限责任公司 Emphasizing image portions in a presentation
WO2019070982A1 (en) * 2017-10-05 2019-04-11 Fluent Forever, Inc. Language fluency system
US11288976B2 (en) 2017-10-05 2022-03-29 Fluent Forever Inc. Language fluency system
US11514924B2 (en) 2020-02-21 2022-11-29 International Business Machines Corporation Dynamic creation and insertion of content

Also Published As

Publication number Publication date
CN105144672B (en) 2018-02-27
CN105144672A (en) 2015-12-09
EP2962259A4 (en) 2016-01-06
EP2962259A1 (en) 2016-01-06

Similar Documents

Publication Publication Date Title
US9619128B2 (en) Dynamic presentation prototyping and generation
EP2962259A1 (en) Dynamic presentation prototyping and generation
US7917839B2 (en) System and a method for interactivity creation and customization
Bailey et al. DEMAIS: designing multimedia applications with interactive storyboards
US10671251B2 (en) Interactive eReader interface generation based on synchronization of textual and audial descriptors
US20150206444A1 (en) System and method for authoring animated content for web viewable textbook data object
US20050071736A1 (en) Comprehensive and intuitive media collection and management tool
US20080010585A1 (en) Binding interactive multichannel digital document system and authoring tool
US20160098250A1 (en) Application prototyping tool
US20200293266A1 (en) Platform for educational and interactive ereaders and ebooks
US20150301721A1 (en) Desktop publishing tool
US20110311952A1 (en) Modularized Computer-Aided Language Learning Method and System
EP2780828A2 (en) Framework for creating interactive digital content
Edge et al. HyperSlides: dynamic presentation prototyping
US20160188136A1 (en) System and Method that Internally Converts PowerPoint Non-Editable and Motionless Presentation Mode Slides Into Editable and Mobile Presentation Mode Slides (iSlides)
US10878470B2 (en) Frameworks to demonstrate live products
US20190371192A1 (en) Computerized training video system
TWI575457B (en) System and method for online editing and exchanging interactive three dimension multimedia, and computer-readable medium thereof
Chaudhary Tkinter GUI application development hotshot
Tosic Artificial Intelligence-driven web development and agile project management using OpenAI API and GPT technology: A detailed report on technical integration and implementation of GPT models in CMS with API and agile web development for quality user-centered AI chat service experience
KR20230023804A (en) Text-video creation methods, devices, facilities and media
Johnson Adobe Dreamweaver CS6 on Demand
Kosinska et al. Microsoft Expression Blend 4 Step by Step
Klinke et al. Tool support for collaborative creation of interactive storytelling media
Dębiński et al. Methods of creating graphical interfaces of web applications based on the example of FLEX Framework

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase (Ref document number: 201380074201.X; Country of ref document: CN)
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 13876698; Country of ref document: EP; Kind code of ref document: A1)
REEP Request for entry into the european phase (Ref document number: 2013876698; Country of ref document: EP)
WWE Wipo information: entry into national phase (Ref document number: 2013876698; Country of ref document: EP)
NENP Non-entry into the national phase (Ref country code: DE)