WO2018187534A1 - Method and apparatus for referencing, filtering, and combining content - Google Patents

Method and apparatus for referencing, filtering, and combining content

Info

Publication number
WO2018187534A1
Authority
WO
WIPO (PCT)
Prior art keywords
virtual
user
clip
media file
user input
Prior art date
Application number
PCT/US2018/026194
Other languages
French (fr)
Inventor
David HIRSCHFELD
Mark C. Phelps
Theodore V. HAIG
Barry FERNANDO
Original Assignee
Art Research And Technology, L.L.C.
Priority date
Filing date
Publication date
Priority claimed from US15/479,774 external-priority patent/US10609442B2/en
Application filed by Art Research And Technology, L.L.C. filed Critical Art Research And Technology, L.L.C.
Priority to EP18781305.0A priority Critical patent/EP3607457A4/en
Publication of WO2018187534A1 publication Critical patent/WO2018187534A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/40Information retrieval; Database structures therefor; File system structures therefor of multimedia data, e.g. slideshows comprising image and additional audio data
    • G06F16/43Querying
    • G06F16/435Filtering based on additional data, e.g. user or group profiles
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/10Office automation; Time management
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/02Marketing; Price estimation or determination; Fundraising
    • G06Q30/0241Advertisements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/01Social networking

Definitions

  • Embodiments generally relate to assemblies, methods, devices, and systems for referencing, filtering, and combining content.
  • Embodiments of the current disclosure describe a method for displaying information associated with a playable media file.
  • the method comprises the steps of obtaining stored data describing the information, the stored data comprising a storage location of the playable media file and a plurality of virtual clips each associated with the playable media file and including a first data element identifying a first time within the playable media file at which the corresponding virtual clip begins, and a second data element identifying a first user profile associated with creating the corresponding virtual clip; accessing the playable media file at the storage location; causing a graphical user interface (GUI) to be displayed on a computing device of a user, wherein said GUI enables the user to generate user inputs by interacting with the GUI; receiving a first user input indicating a first interaction of the user with a first display position on the timeline; determining a selected time within the playable media file that corresponds to the first display position; identifying a first virtual clip of the plurality of the virtual clips and one or more of the virtual clips; and updating the
  • certain embodiments of the current disclosure depict a method for marking a portion of interest in a playable media file.
  • the method comprises the steps of causing a recording device to begin capturing a recording of a live event as the Playable Media File; while the recording device is capturing the recording, receiving a first user input, the recording device continuing to capture the live content subsequent to the first user input; determining from the first user input, a first temporal point of interest during said recording of the Playable Media File;
  • certain embodiments of the current disclosure describe a method of annotating a playable media file.
  • the method comprises the steps of obtaining a virtual clip comprising a first location within the playable media file and a second location within the playable media file, the first and second locations together defining a clip of the playable media file occurring between the first and second locations; causing, using the virtual clip, the clip to be displayed on a computing device of a user; receiving a first user input associated with the virtual clip;
  • FIG. 1 illustrates an exemplary embodiment of a system for making a composite video with annotation(s);
  • FIG. 2 illustrates another exemplary embodiment of a system for making a composite video with annotation(s);
  • FIG. 3 is a table of information fields stored in association with each playable media file
  • FIG. 4 summarizes steps in Applicants' method, steps implemented by Applicants' article of manufacture, and steps performed by a programmable processor implementing Applicants' computer program product
  • FIG. 5 summarizes steps in Applicants' method, steps implemented by Applicants' article of manufacture, and steps performed by a programmable processor implementing Applicants' computer program product
  • FIG. 6 summarizes steps in Applicants' method, steps implemented by Applicants' article of manufacture, and steps performed by a programmable processor implementing Applicants' computer program product;
  • FIG. 7 summarizes steps in Applicants' method, steps implemented by Applicants' article of manufacture, and steps performed by a programmable processor implementing Applicants' computer program product
  • FIG. 8 summarizes steps in Applicants' method, steps implemented by Applicants' article of manufacture, and steps performed by a programmable processor implementing Applicants' computer program product
  • FIG. 9 summarizes steps in Applicants' method, steps implemented by Applicants' article of manufacture, and steps performed by a programmable processor implementing Applicants' computer program product
  • FIG. 10 summarizes steps in Applicants' method, steps implemented by Applicants' article of manufacture, and steps performed by a programmable processor implementing Applicants' computer program product
  • FIG. 11 summarizes steps in Applicants' method, steps implemented by Applicants' article of manufacture, and steps performed by a programmable processor implementing Applicants' computer program product;
  • FIG. 12 summarizes steps in Applicants' method, steps implemented by Applicants' article of manufacture, and steps performed by a programmable processor implementing Applicants' computer program product;
  • FIG. 13 summarizes steps in Applicants' method, steps implemented by Applicants' article of manufacture, and steps performed by a programmable processor implementing Applicants' computer program product;
  • FIG. 14 summarizes steps in Applicants' method, steps implemented by Applicants' article of manufacture, and steps performed by a programmable processor implementing Applicants' computer program product;
  • FIG. 15 is a flowchart of the method and/or process related to setting a bookmark during a recording of a playable media file
  • FIG. 16 is a flowchart of the method and/or process related to displaying annotations associated with a playable media file
  • FIG. 17A is an example of a graphical user interface for tagging
  • FIG. 17B is an example of a graphical user interface that enables a user to configure his user account to identify virtual clips in a particular subcategory
  • FIGS. 18A and 18B are examples of graphical user interfaces for displaying
  • The schematic flow chart diagrams included are generally set forth as a logical flowchart diagram (e.g., FIGS. 4-16). As such, the depicted order and labeled steps are indicative of one embodiment of the presented method. In certain embodiments, other steps and methods are conceived that are equivalent in function, logic, or effect to one or more steps, or portions thereof, of the illustrated method. Additionally, the format and symbols employed are provided to explain the logical steps of the method and are understood not to limit the scope of the method. Although various arrow types and line types are employed in the flow-chart diagrams, they are understood not to limit the scope of the corresponding method (e.g., FIGS. 4-16).
  • arrows or other connectors may be used to indicate only the logical flow of the method. For instance, an arrow indicates a waiting or monitoring period of unspecified duration between enumerated steps of the depicted method. Additionally, the order in which a particular method occurs may or may not strictly adhere to the order of the corresponding steps shown.
  • Applicants' system and method includes a network wherein a video can be created using any available video format, and that video can be shared between a plurality of people.
  • Applicants' system and method can be used by multiple members of a social network to associate annotations with a Playable Media File including a composite digital clip, and/or to initiate discussion threads associated with that Playable Media File including a composite digital clip.
  • network 100 comprises a social network.
  • Applicants' social network 100 is an open social network.
  • Applicants' social network 100 is a closed social network.
  • network 100 comprises a network server 130 that is communicatively connected to a computing device 110 through a first communication fabric 120 and a computing device 150 through a second communication fabric 140.
  • the network server 130 is owned and/or operated by a social networking service provider while computing devices 110 and 150 are owned and/or operated by users or members of the social network 100, where a member has a profile containing information about the member stored in information 137 of the social network server 130.
  • the computing device 110 is owned and operated by a first member and the computing device 150 is owned and operated by a second member.
  • FIG. 1 shows a first computing device 110, network server 130, and a second computing device 150.
  • FIG. 1 should not be taken as limiting. Rather, in other embodiments any number of entities and corresponding devices can be part of the network 100, and further, although FIG. 1 shows two communication fabrics 120 and 140, in other embodiments, less than, or more than, two communication fabrics are provided in the social network 100.
  • the communication fabric 120 and the communication fabric 140 are the same communication fabric.
  • the computing devices 110 and 150 and host 130 are each an article of manufacture.
  • the article of manufacture include: a server, a mainframe computer, a mobile telephone, a smart telephone, a personal digital assistant, a personal computer, a laptop, a set-top box, an MP3 player, an email enabled device, a tablet computer, a web enabled device, or other special purpose computer each having one or more processors (e.g., a Central Processing Unit, a Graphical Processing Unit, or a microprocessor) that are configured to execute Applicants' API to receive information fields, transmit information fields, store information fields, or perform methods.
  • FIG. 1 illustrates the computing device 110, the network server 130, and the computing device 150 as each including a processor 112, 132, and 152, respectively, a non-transitory computer readable medium 113, 133, and 153, respectively, having a series of instructions 114, 134, and 154, respectively, encoded therein, and an input/output means 111, 131, and 151, respectively, such as a keyboard, a mouse, a stylus, a touch screen, a camera, a scanner, or a printer.
  • Computer readable program code 114, 134, and 154 is encoded in non-transitory computer readable media 113, 133, and 153, respectively.
  • Processors 112, 132, and 152 utilize computer readable program code 114, 134, and 154, respectively, to operate computing devices 110, 130, and 150, respectively.
  • the computing devices 110, 130, and 150 employ hardware and/or software that supports accelerometers, gyroscopes, magnetometers (e.g., solid state compasses), and the like.
  • Processors 112 and 152 utilize Applicants' Application Program Interfaces (APIs) 116 and 156, respectively.
  • Algorithm 136 comprises Applicants' source code to operate a public or private social network, and when implemented by computing device 110 causes a graphic user interface ("GUI") to be displayed on display screen 115, wherein that GUI comprises and displays a plurality of graphical interactable objects.
  • a member using computing device 110 can utilize that GUI to access a logical volume, such as for example and without limitation logical volume 180 (FIG. 2), wherein information specific to that user is encoded in logical volume 180.
  • the member and/or user can further utilize the GUI to access Applicants' social network as described herein.
  • Processor 132 accesses the computer readable program code 134, encoded on the non-transitory computer readable medium 133, and executes an instruction 136 to electronically communicate with the computing device 110 via the communication fabric 120 or electronically communicate with the computing device 150 via the communication fabric 140.
  • Encoded information 137 includes, for example and without limitation, the data communicated or information fields communicated, e.g., date and time of transmission, frequency of transmission and the like, with any or all of the computing device 110 and the computing device 150.
  • information 137 is analyzed and/or mined.
  • information 137 is encoded in a plurality of individual logical volumes specific to each member / user.
  • computing devices 110 and 150 further comprise one or more display screens 115 and 155, respectively.
  • display screens 115 and 155 comprise an LED display device.
  • the information fields received from the computing device 110 are identical to the information fields received from the computing device 150.
  • information fields received by the network server 130 are exchanged with other computing devices not shown in FIG. 1.
  • information fields received from a social network in which the member has an Internet presence is sent to the social network server 130 and stored at the information 137 in association with a profile of the member.
  • the information fields transmitted from the computing device 110 to the social network server 130 are sent to an account of the member within the social network.
  • information 137 is encoded in one or more hard disk drives, tape cartridge libraries, optical disks, combinations thereof, and/or any suitable data storage medium, storing one or more databases, or the components thereof, in a single location or in multiple locations, or as an array such as a Direct Access Storage Device (DASD), redundant array of independent disks (RAID), virtualization device, etc.
  • information 137 is structured by a database model, such as a relational model, a hierarchical model, a network model, an entity-relationship model, an object-oriented model, or a combination thereof.
  • the information 137 is structured in a relational model that stores a plurality of Identities for each of a plurality of members as attributes in a matrix.
  • the computing devices 110, 130, and 150 include wired and/or wireless communication capabilities utilizing various communication protocols, including near field (e.g., "Bluetooth") and/or far field communication capabilities (e.g., satellite communication or communication to cell sites of a cellular network), that support any number of services such as telephony, Short Message Service (SMS), Multimedia Messaging Service (MMS), electronic mail (Email), and Global Positioning System (GPS).
  • the communication fabrics 120 and 140 each comprise one or more switches 121 and 141, respectively.
  • communication fabrics 120 and 140 are the same.
  • at least one of the communication fabrics 120 and 140 comprises the Internet, an intranet, an extranet, a storage area network (SAN), a wide area network (WAN), a local area network (LAN), a virtual private network, a satellite communications network, an interactive television network, or any combination of the foregoing.
  • at least one of the communication fabrics 120 and 140 contains either or both wired or wireless connections for the transmission of signals including electrical connections, magnetic connections, or a combination thereof.
  • communication fabrics 120 and 140 utilize any of a variety of communication protocols, such as Transmission Control Protocol/Internet Protocol (TCP/IP), for example.
  • the computing devices 110, 130 and 150 are each
  • the network server 130 is a computing device that is owned and/or operated by a networking service provider, and computing devices 110 and 150 are owned and/or operated by individual network users.
  • the network server is owned and/or operated by a social network provider.
  • the network server 130 provides access to the computing devices 110 and 150 to execute Applicants' source code 136 via a Software as a Service (SaaS) means.
  • information fields are received from one or more computing devices 110, 130 and/or 150 and stored on the "Cloud" such as data storage library 160 and/or 170.
  • each of the data storage libraries 160 and 170 has corresponding physical storage devices, such as and without limitation physical data storage devices 163-169 for data storage library 160 and 173-179 for data storage library 170.
  • data storage library 160 and data storage library 170 are configured in a Peer To Peer Remote Copy ("PPRC") storage system, wherein the information fields in data storage library 160 are automatically backed up in data storage library 170.
  • Applicants' PPRC storage system utilizes synchronous copying.
  • Applicants' PPRC storage system utilizes asynchronous copying.
  • physical storage device 163 is configured to comprise logical volume 180.
  • each physical storage device in data storage library 160 is configured to comprise a plurality of logical volumes.
  • each physical storage device in data storage library 170 is configured to comprise a corresponding plurality of logical volumes.
  • each member of the social network is assigned a unique logical volume.
  • a permission file 157 may be encoded in computer readable medium 133 or in data storage libraries 160 and 170 that associates each logical volume with a social network member and further associates each logical volume with access permissions for certain designated other social network users.
  • Each social network user configures his/her own logical volume permissions.
  • If a first user desires to remove access permissions from a second user, that first member simply accesses his/her permissions file and deletes the second user. Thereafter, the second user cannot retrieve data stored on the logical volume associated with the first user.
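  • As a rough illustration of such a permission file (cf. permission file 157), the sketch below models a logical volume owner granting and revoking access for other members; the class and method names (PermissionFile, grant, revoke, can_access) are illustrative assumptions rather than anything specified in the disclosure.

```python
# Minimal sketch of a permission file such as permission file 157: each logical
# volume is owned by one member and maps other members to access permissions.
# Class/method names are illustrative assumptions, not the patent's API.
from dataclasses import dataclass, field

@dataclass
class PermissionFile:
    owner: str                                   # member who owns the logical volume
    grants: dict = field(default_factory=dict)   # user id -> access level, e.g. "view", "view/edit"

    def grant(self, user_id: str, level: str = "view") -> None:
        """Give another social-network user access to the owner's logical volume."""
        self.grants[user_id] = level

    def revoke(self, user_id: str) -> None:
        """Delete a user from the permission file; that user can no longer retrieve data."""
        self.grants.pop(user_id, None)

    def can_access(self, user_id: str) -> bool:
        return user_id == self.owner or user_id in self.grants

# Usage: the first user removes the second user's access.
permissions = PermissionFile(owner="member_1")
permissions.grant("member_2", "view/edit")
permissions.revoke("member_2")
assert not permissions.can_access("member_2")
```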
  • Applicants' algorithm 136 can be accessed by users of Applicants' network 100 to create, share, edit, associate one or more annotations with, and/or associate one or more discussion threads with, a Playable Media File.
  • One member, using a computing device such as computing device 110 or 150 to access network server 130, streams a Playable Media File from its original storage location.
  • the Playable Media File is encoded in a unique logical volume accessible by a first user. That first user can grant access to the Playable Media File to one or more other users by storing access permissions in permission file 157.
  • the access includes levels such as, and without limitation, view only, view/edit, view/edit/share, and the like. In certain embodiments the access includes conditions or restrictions such as expiration dates, limitations on the number of times the file can be viewed, and the like.
  • a data profile 300 is created for the Playable Media File and is stored on network server 130, and optionally on data storage library 160 or 170.
  • Data profile 300 includes various information fields, including the Global Unique Identifier (GUID) 302 associated with the creating member, a description 304 of the Playable Media File (e.g., a title), and permissions 306 held by various members to access, edit, and/or share the Playable Media File.
  • GUID Global Unique Identifier
  • Data profile 300 may further include subsequently added annotations 312 and discussion threads 328.
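  • A minimal sketch of a data profile along the lines of data profile 300 follows; the dictionary keys mirror fields 302, 304, 306, 312, and 328, but the layout and the helper name member_may_edit are assumptions.

```python
# Illustrative sketch of a data profile (cf. data profile 300) stored for each
# Playable Media File. Field names mirror items 302-328; the structure is assumed.
data_profile_300 = {
    "guid": "member-guid-0001",            # 302: GUID of the creating member
    "description": "Soccer match, April",  # 304: title/description of the media file
    "permissions": {                       # 306: per-member access levels
        "member_2": "view",
        "member_3": "view/edit/share",
    },
    "annotations": [],                     # 312: annotations added later
    "discussion_threads": [],              # 328: discussion threads added later
}

def member_may_edit(profile: dict, member_id: str) -> bool:
    """Return True if the member holds an access level that permits editing."""
    return "edit" in profile["permissions"].get(member_id, "")

print(member_may_edit(data_profile_300, "member_3"))  # True
```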
  • Applicants' system and method further disclose an article of manufacture comprising a platform for information management, such as computing device 110, 130, and/or 150, comprising computer readable program code, such as API 116, API 156, and/or Applicants' social network source code 136, residing in a non-transitory computer readable medium, such as computer readable medium 113, 133, and/or 153, where that computer readable program code can be executed by a processor, such as processor 112 (FIG. 1) and/or 132 (FIG. 1), and/or 152, to implement Applicants' method recited in FIGS. 4-16.
  • Applicants' system and method further disclose a non-transitory computer readable medium having Applicants' computer program product encoded therein.
  • Applicants' computer program product comprises computer readable program code that can be executed by a programmable processor to implement Applicants' method recited in FIGS. 4-16.
  • the computer readable program code is encoded in a non-transitory computer readable medium comprising, for example, a magnetic information storage medium, an optical information storage medium, an electronic information storage medium, and the like.
  • “Electronic storage media” means, for example and without limitation, one or more devices, such as and without limitation, a PROM, EPROM, EEPROM, Flash PROM, CompactFlash, SmartMedia, and the like.
  • a method for setting a bookmark during a recording of a playable media file is disclosed.
  • a network user can use one of the computing devices 110 and 150 (FIG. 1) to record a playable media file of a live event.
  • FIG. 1 should not be taken as limiting.
  • any number of computing devices that are capable of recording a playable media file of a live event can be used by a network user and can be part of the network 100.
  • a user device may be configured to, on its own or in cooperation with one or more servers as described above, create a virtual clip of a playable media file encoding a recording of a live event while the event and/or the recording is taking place.
  • the exemplary method is described with reference to a touchscreen user device that is recording the event, and variations on the exemplary method are contemplated.
  • the user device may be configured to receive audio inputs and/or inputs from peripheral devices such as a keyboard, remote controller, and/or mouse.
  • the user device may not be recording the live event and may not create or store the playable media file; another device may create the playable media file and store it in a location from which the user device or another device accessible by the user may concurrently and/or later download and/or stream the media file.
  • the user device receives a user input signaling the user device to begin recording the live event, and starts to record a playable media file of the live event.
  • the user device may also display a user interface including one or more interactable graphical objects that serve as the user's controls for the virtual clip.
  • the user device receives another user input, and at step 530 the user device determines that the user input includes a command to start a virtual clip of the recording.
  • the user device generates a temporal place marker that indexes the temporal point of interest in the recording that corresponds to the time that the user initiated the virtual clip.
  • the temporal place marker is stored on the user device or the recording computing device in step 550.
  • the user device continues to record the live event, subsequently receiving another user input at step 560.
  • the user device determines whether the user input includes a command to stop capturing the virtual clip; this command may be a selection of an END CLIP object in the user interface, or it may be a selection by the user to stop recording the live event.
  • the user device generates a temporal place marker that indexes the temporal point of interest in the recording that corresponds to the time that the user ended the virtual clip.
  • the user device may create and store the virtual clip containing an identifier for the media file encoding the recording of the live event, the first temporal place marker identifying the start time (i.e., time elapsed from the beginning of the recording) of the virtual clip, and the second temporal place marker identifying the end time of the virtual clip.
  • the steps 520-590 of creating a virtual clip may be repeated to capture a second virtual clip of the media file.
  • the user input that ends the capture of the first virtual clip may also serve as the user input that starts the capture of the second virtual clip.
  • the playable media file and the virtual clip(s) may be transferred by the user device to a server or other computer storage device and later accessed using the systems described herein.
  • the temporal place markers may be used to identify "trim" locations within the media file; the user device or recording device may store - only or additionally - the encoded content captured between the temporal place markers.
  • the user device may be used to view the media file subsequent to the live event occurring, and to generate virtual clips of the media file as described above.
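  • The sketch below illustrates, under simplifying assumptions, the flow of steps 520-590 for capturing a virtual clip while a live event is being recorded; the class name LiveClipRecorder and its methods are hypothetical, and only temporal place markers are stored rather than any media content.

```python
# Sketch of steps 520-590: while a recording is in progress, a user input starts
# a virtual clip (first temporal place marker) and a later input ends it (second
# marker). Only marker times are stored; the media file itself is untouched.
import time

class LiveClipRecorder:
    def __init__(self, media_file_id: str):
        self.media_file_id = media_file_id
        self.recording_started = time.monotonic()   # step 510: recording begins
        self._clip_start = None
        self.virtual_clips = []

    def _elapsed(self) -> float:
        """Seconds elapsed since the beginning of the recording."""
        return time.monotonic() - self.recording_started

    def start_clip(self) -> None:
        """Steps 530-550: store a temporal place marker for the clip start."""
        self._clip_start = self._elapsed()

    def end_clip(self) -> None:
        """Steps 570-590: store the end marker and save the virtual clip."""
        if self._clip_start is None:
            return
        clip = {
            "media_file_id": self.media_file_id,
            "start_time": self._clip_start,
            "end_time": self._elapsed(),
        }
        self.virtual_clips.append(clip)
        self._clip_start = None   # a new clip could also start here (back-to-back clips)

recorder = LiveClipRecorder("live-event-001")
recorder.start_clip()   # user taps START CLIP during the event
time.sleep(0.1)         # event continues to be recorded
recorder.end_clip()     # user taps END CLIP
print(recorder.virtual_clips)
```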
  • a user can communicate to a recording computing device to generate a temporal bookmark that indexes a temporal point of interest during recording of a live event.
  • the algorithm 136 comprising Applicants' source code generates a temporal place marker.
  • the algorithm 136 comprises voice recognition source code so that when the user speaks verbally to the recording computing device, a temporal place marker is generated.
  • the user is able to communicate to the recording computing device using a control device 105 (FIG. 1), which is connected to the recording computing device via a connected data link.
  • the control device is connected to the recording computing device remotely via Bluetooth, ultra-wideband, wireless local area network, Wi-Fi, AirPort, Infrared, ZigBee, and/or other similar technologies.
  • a signal can be transferred from the control device 105 to the recording computing device so that the algorithm 136 comprising Applicants' source code generates a temporal place marker during the recording of a live event.
  • a recording computing device can be used to make a composite video file.
  • In step 610, Applicants disclose determining whether to create a plurality of virtual clips, wherein each virtual clip comprises content encoded in one or more Media Files, playable or static, from a beginning of the Media File, playable or static, up to a designated end point.
  • the depicted order and labeled steps in FIG. 4 are indicative of one embodiment of the presented method. Further, the order in which a particular method occurs may or may not strictly adhere to the order of the corresponding steps shown in FIG. 4 because some users may choose to perform certain steps before other steps.
  • a "Media File, playable or static,” may be a file containing data that encodes one or more types of media content, such as audio content, video content, audiovisual content, image and other computer graphic content, text content, slide-show and similar sequenced-graphic content, and the like.
  • media files include AVI file, MP3 file, MP4 file, WMA file, WAV file, Flash, MPEG file, an image file (JPG, TIF, PNG, GIF, Bitmap, and the like), a PDF file, a text file (e.g., a .doc file), a VISIO file, a .ppt file, a .key file, a spreadsheet file, and any type of 3D media file.
  • such a 3D media file requires holographic projection / holographic viewing.
  • “Media File, playable or static” further includes any file which generates a stereoscopic visual display that can be viewed through stereoscopic eyewear or played on 3D display technology such as 3D TV, and in certain embodiments comprises a Virtual Reality/Augmented Reality file that can be viewed through Virtual Reality devices such as HoloLens, Oculus Rift, Sony PlayStation VR, HTC VIVE, Razer OSVR HDK, Zeiss VR One, SOV VR, Freefly, and the like.
  • a "virtual clip" created from one or more of such media files may, in some embodiments, be a set of data points that together delineate a particular subset of the content encoded in the media file.
  • the virtual clip is comprised of references that identify specific content in the corresponding media file, but the virtual clip is not necessarily itself a stored media file containing the content data.
  • the content data may remain in its original storage location, and the present systems (e.g., described in FIGS. 1 and 2) may obtain the virtual clip, read the set of data points, access the media file in its original stored location, and then obtain (e.g., via file transfer or streaming) the subset of content that is delineated by the data points.
  • the set of data points may include a start point and an end point; together with an identifier of the media file, the start point and end point may identify the content to be included in the virtual clip.
  • the data describing the data points may be selected depending on the type of media file encoding the content.
  • start and end points include: in a video or audio file, a start time (e.g., a duration measured from the beginning of the video/audio content at time 0.0) and an end time which together define a "clip" of the video/audio content; in an image, a starting coordinate (e.g., with respect to a top-left corner of the image being at coordinate (0, 0)) of a first pixel representing the top-left corner of a rectangular region of the image, and an ending coordinate of a second pixel representing the lower-right corner of the region; in a slide show, a starting slide number and an ending slide number; in a plain text, formatted text, or binary text file, a starting pointer and an ending pointer identifying positions in the character stream.
  • each data point in the set may include a time (e.g., time elapsed since the beginning of the simulation), a coordinate location within the simulated environment (e.g., xyz coordinates of a user-controlled camera within a geographic environment mapped to a Cartesian coordinate system), and data (e.g., a vector) identifying the camera line-of-sight.
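  • To make the notion of a virtual clip as a set of data points concrete, the following sketch shows one possible representation for video, image, and text media; the dictionary layout is an assumption, not a format defined by the disclosure.

```python
# Illustrative virtual clips: each is only a set of data points that delineate a
# subset of content in a media file; no content data is copied. Layouts assumed.
video_clip = {
    "media_file_id": "lecture.mp4",
    "start_time": 45.0,        # seconds from time 0.0
    "end_time": 90.5,          # start and end together define the "clip"
}

image_clip = {
    "media_file_id": "diagram.png",
    "start_coordinate": (0, 0),    # top-left pixel of the rectangular region
    "end_coordinate": (640, 480),  # lower-right pixel of the region
}

text_clip = {
    "media_file_id": "report.txt",
    "start_pointer": 0,        # position in the character stream
    "end_pointer": 1024,
}

def clip_duration(clip: dict) -> float:
    """For time-based media, duration = end_time - start_time."""
    return clip["end_time"] - clip["start_time"]

print(clip_duration(video_clip))  # 45.5
```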
  • In step 710, the method, without pausing the media play, displays an END CLIP interactable graphical object and a CANCEL CLIP interactable graphical object.
  • If the user activates the CANCEL CLIP interactable graphical object in step 720, then the method transitions from step 720 to step 750 and ends. Alternatively, if the user does not activate the CANCEL CLIP interactable graphical object in step 710, then the method transitions from step 710 to step 730 wherein the method determines if the END CLIP interactable graphical object has been activated. If the method determines in step 730 that the END CLIP interactable graphical object has not been activated, then the method waits at step 730, while the system continues to play or otherwise display the media file, until the user activates the END CLIP interactable graphical object. At step 740, the system determines that the user has selected to end the virtual clip, determines the location within the media file at which the virtual clip should end, and temporarily stores a start point, an end point, and any other data needed to identify the virtual clip.
  • step 740 for a virtual clip of a video or audio file may include identifying the time elapsed in the content when the END CLIP interactable graphical object was selected, and then creating and storing a virtual clip containing the media file identifier, a start time of 0.0, and an end time representing the time elapsed.
  • the system may subtract the start time from the end time to determine a duration of the virtual clip, and may store the start time and the duration in the virtual clip.
  • step 740 for a virtual clip of an image file may include identifying an end coordinate of the pixel over which a mouse cursor was located when the END CLIP interactable graphical object was selected, and then creating and storing a virtual clip containing the media file identifier, a start point of (0, 0), and an end point at the end coordinate.
  • the virtual clip would thus identify the region within an implied bounding box; if the end coordinate were (x, y), the bounding box would have clockwise corners at (0, 0), (x, 0), (x, y), and (0, y).
  • step 740 for a virtual clip of a text file may include identifying a cursor location within the text file and determining a target position, within the data stream (e.g., ASCII or other plain text stream, rich text or other formatted text stream, binary file stream, etc.) representing the text file, corresponding to the cursor location, then creating and storing a virtual clip containing the media file identifier, a starting stream position of 0, and the target position as an ending stream position.
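  • A hedged sketch of the step 740 behavior for the three example media types appears below; the helper names (make_video_clip, make_image_clip, make_text_clip) are hypothetical.

```python
# Sketch of step 740: when END CLIP is selected, determine the end point for the
# media type and store a virtual clip that starts at the beginning of the file.
# Helper names are illustrative assumptions.

def make_video_clip(media_file_id: str, elapsed_seconds: float) -> dict:
    """Video/audio: end point is the time elapsed when END CLIP was selected."""
    return {"media_file_id": media_file_id, "start_time": 0.0,
            "end_time": elapsed_seconds,
            "duration": elapsed_seconds - 0.0}   # optional duration field

def make_image_clip(media_file_id: str, cursor_xy: tuple) -> dict:
    """Image: end point is the pixel under the mouse cursor; the region is the
    implied bounding box with corners (0,0), (x,0), (x,y), (0,y)."""
    return {"media_file_id": media_file_id, "start_point": (0, 0),
            "end_point": cursor_xy}

def make_text_clip(media_file_id: str, stream_position: int) -> dict:
    """Text: end point is the stream position corresponding to the cursor."""
    return {"media_file_id": media_file_id, "start_position": 0,
            "end_position": stream_position}

print(make_video_clip("talk.mp4", 73.2))
print(make_image_clip("map.png", (320, 200)))
print(make_text_clip("notes.txt", 580))
```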
  • a virtual clip is saved to the user's computing device.
  • the virtual clip is saved to Applicants' network server 130 (FIG. 1).
  • If the user elects in step 610 NOT to create a plurality of virtual clips each from a beginning of the media file to a designated end point, then the method transitions from step 610 to step 620 wherein the user may elect to create a plurality of virtual clips comprising content from one or more Media Files, playable or static, from a designated start point to a designated end point.
  • the system may display on the user's device a user interface that displays the media file along with a START CLIP interactable graphical object, and the system may receive a user input indicating the START CLIP interactable graphical object was selected.
  • the system may identify, as the start point of the virtual clip, the point within the media file that was "in focus" when the START CLIP interactable graphical object was selected, and then transitions from step 620 to step 810 (FIG. 6).
  • a determination of the "in focus" point, and thus the start point, may depend on the type of the content file, but in any case can be objectively determined.
  • the time during playback that the START CLIP object is selected may be the start point; additional data may be needed for 2D or 3D recorded simulations, such as the camera location and line-of-sight when the START CLIP object is selected.
  • the "in focus" point may be the slide being displayed when the START CLIP object is selected, and in other static files such as text and image files, the cursor position may identify the "in focus" point.
  • In step 810, the method streams the Media File, playable or static, from a designated start point and, without pausing the media play, displays an END CLIP interactable graphical object and a CANCEL CLIP interactable graphical object.
  • If the user activates the CANCEL CLIP interactable graphical object in step 820, then the method transitions from step 820 to step 850 and ends. Alternatively, if the user does not activate the CANCEL CLIP interactable graphical object in step 810, then the method transitions from step 810 to step 830 wherein the method determines if the END CLIP interactable graphical object has been activated. If the method determines in step 830 that the END CLIP interactable graphical object has not been activated, then the method waits at step 830, while the system continues to play or otherwise display the media file, until the user activates the END CLIP interactable graphical object.
  • the system determines that the user has selected to end the virtual clip, determines the location within the media file at which the virtual clip should end, and temporarily stores a start point, an end point, and any other data needed to identify the virtual clip.
  • Any of the above examples described with respect to step 740 of FIG. 5 may illustrate the system's operation to create and store the virtual clip, with the additional processing required to identify the start point within the media file.
  • the system may provide a user interface that enables the user to draw a visible bounding box (e.g., using a mouse cursor and clicks), and may identify the start and end points using the top-left and lower-right coordinates of the visible bounding box.
  • In step 840, the virtual clip is saved to the user's computing device. In certain embodiments, in step 840 the virtual clip is saved to Applicants' network server 130 (FIG. 1).
  • If the user elects in step 610 NOT to create a plurality of virtual clips each from a beginning to a designated end point, and if the user elects NOT to create a plurality of virtual clips, where each virtual clip comprises content from one or more Media Files, playable or static, and wherein the user specifies a designated timeline location to begin the virtual clip, then the method transitions from step 620 to step 630 wherein the method determines if the user elects to configure a composite virtual clip. If the user elects to configure a composite virtual clip in step 630, the method transitions from step 630 to step 910. Referring now to FIG. 7, in step 910 the method selects (N) saved virtual clips to configure a composite virtual clip, and determines an order of presentation for those (N) virtual clips.
  • In step 920, the method sets (M) initially to 1.
  • In step 930, the method configures a (M)th link to the (M)th selected saved virtual clip. In step 940, the method saves the (M)th link in a composite virtual clip file.
  • In step 950, the method determines if (M) equals (N), i.e., if all (N) links to the (N) selected saved virtual clips have been created and saved. If the method determines in step 950 that (M) does not equal (N), then the method transitions from step 950 to step 960 wherein the method increments (M) by 1, i.e., sets (M) equal to (M)+1. The method transitions from step 960 to step 930 and continues as described herein. Alternatively, if the method determines in step 950 that (M) equals (N), then the method transitions from step 950 to step 970 and ends.
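  • The loop of steps 910-970 might be sketched as follows, assuming the composite virtual clip file is simply an ordered list of links (storage references) to the (N) selected virtual clips; the function name and data layout are illustrative.

```python
# Sketch of steps 910-970: select (N) saved virtual clips, then for M = 1..N
# configure and save a link to the (M)th clip in a composite virtual clip file.
def configure_composite_clip(selected_clips: list) -> dict:
    """selected_clips: (N) saved virtual clips already placed in presentation order."""
    composite = {"links": []}            # the composite virtual clip file
    if not selected_clips:
        return composite
    n = len(selected_clips)
    m = 1                                # step 920: set (M) initially to 1
    while True:
        clip = selected_clips[m - 1]
        link = {"order": m, "clip_location": clip["storage_location"]}  # step 930
        composite["links"].append(link)  # step 940: save the (M)th link
        if m == n:                       # step 950: all (N) links created and saved
            break
        m += 1                           # step 960: increment (M)
    return composite

saved_clips = [
    {"storage_location": "server://clips/clip-A"},
    {"storage_location": "server://clips/clip-B"},
    {"storage_location": "server://clips/clip-C"},
]
print(configure_composite_clip(saved_clips))
```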
  • If the user elects in step 610 NOT to create a plurality of virtual clips each from a beginning to a designated end point, and if the user elects in step 620 NOT to create a plurality of virtual clips, where each virtual clip comprises content from one or more Media Files, playable or static, and wherein the user specifies a designated timeline location to begin the virtual clip, and if the user does NOT elect in step 630 to configure a composite virtual clip, then in step 640 the method determines whether to display a composite virtual clip.
  • In step 1010, the method provides a storage location for a composite virtual clip file configured to access (M) saved clips.
  • In step 1020, the method sets (P) initially to 1.
  • In step 1030, the method activates a (P)th link encoded in the composite virtual clip file to stream a (P)th saved virtual clip to the user's device.
  • In step 1040, the method determines if all (N) clips comprising the selected composite virtual clip have been displayed, i.e., if (P) equals (N). If the method determines in step 1040 that (P) does not equal (N), then the method transitions from step 1040 to step 1050 and increments (P) by 1, i.e., sets (P) equal to (P)+1. The method transitions from step 1050 to step 1030 and continues as described herein.
  • Alternatively, if the method determines in step 1040 that (P) equals (N), then all (N) clips comprising the selected composite virtual clip have been displayed.
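  • Playback of a composite virtual clip (steps 1010-1060) follows the same pattern, activating each stored link in order; in the sketch below, stream_clip is a placeholder assumption for delivering the referenced clip to the user's device.

```python
# Sketch of steps 1010-1060: activate the (P)th link in the composite virtual
# clip file to stream the (P)th saved virtual clip, until all (N) are displayed.
def stream_clip(clip_location: str) -> None:
    """Placeholder for streaming the referenced virtual clip to the user's device."""
    print(f"streaming {clip_location}")

def display_composite_clip(composite: dict) -> None:
    links = composite["links"]
    p = 1                                  # step 1020: set (P) initially to 1
    while p <= len(links):                 # step 1040: stop once (P) exceeds (N)
        stream_clip(links[p - 1]["clip_location"])   # step 1030
        p += 1                             # step 1050: increment (P)

display_composite_clip({"links": [
    {"clip_location": "server://clips/clip-A"},
    {"clip_location": "server://clips/clip-B"},
]})
```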
  • In step 962, the method displays an Annotation Panel.
  • In step 964, the method determines if the user entered an annotation in the Annotation Panel of step 962. If the method determines in step 964 that the user entered an annotation in the Annotation Panel, then the method transitions from step 964 to step 1410. Alternatively, if the user did not enter an annotation in the Annotation Panel of step 962, then the method transitions from step 964 to step 966 wherein the method determines if the user elects to change visibility from PUBLIC to PRIVATE.
  • If the method determines in step 966 that the user does not elect to change the visibility of the identified content, then the method transitions from step 966 to step 968 wherein the method determines if the user elects to share saved data with specific recipients. If the user elects to share saved data with specific recipients, then the method transitions from step 968 to step 1610. If the user elects not to share saved data with specific recipients, then the method transitions from step 968 to step 1060 and ends.
  • In step 1410, the method saves the annotation entered into the Annotation Panel of step 962.
  • the user's annotation is saved to the user's computing device.
  • In certain embodiments, in step 1410 the user's annotation is saved to network server 130 (FIG. 1).
  • the method determines whether a user input associated with the virtual clip, such as an annotation, conforms to a predetermined format.
  • the format comprises a tag identifier indicating the user input includes taxonomy tags, a first tag following the tag identifier and identifying a first category of the composite virtual clip, and (P) subtag(s) sequentially following the first tag and each including a delimiter indicating a previous tag is complete and an additional tag follows, identifying an additional category of the virtual clip.
  • the tag identifier may be a character (the "#" or hash symbol in the examples herein), character string, or other data element that the system is configured to identify as an indicator that the text following the tag identifier should conform to the taxonomy tag format, and contains at least one tag if so.
  • the format comprises zero subtags. In other embodiments, the format comprises 1, 2, 3, 4, 5, or any number of subtags greater than 1.
  • the method creates and saves a taxonomy tag for the annotation saved in step 1410.
  • the taxonomy tag comprises a form "#content:TITLE."
  • the taxonomy tag comprises a form "#firsttag:subtag1:subtag2:...:subtagP," where the first tag and each subtag(1..P) are character strings separated by the delimiter character (":" in the examples herein).
  • the method also identifies one or more taxonomy tags from the user and associates the virtual clip with one or more categories identified by the one or more taxonomy tags.
  • each tag immediately following a tag identifier corresponds to a main category
  • each subtag corresponds to a subcategory of the (sub)category corresponding to the immediately preceding tag (i.e., the tag to the left of the delimiter).
  • one or more categories are arranged into a hierarchy determined from a sequence of the corresponding tags identified in the user input.
  • each taxonomy tag identifies a corresponding hierarchy of categories.
  • the method associates the virtual clip with each of the one or more categories corresponding to one of the tags/subtags in each taxonomy tag associated with the virtual clip.
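  • The taxonomy tag format described above can be parsed roughly as sketched below; the "#" tag identifier and ":" delimiter follow the examples herein, while the function names and returned structures are assumptions.

```python
# Sketch of parsing taxonomy tags of the form "#firsttag:subtag1:...:subtagP".
# The "#" tag identifier and ":" delimiter follow the examples in the text;
# function names and return structures are illustrative assumptions.
import re

def extract_taxonomy_tags(annotation: str) -> list:
    """Return each taxonomy tag in the annotation as an ordered category hierarchy."""
    hierarchies = []
    for match in re.findall(r"#([\w:]+)", annotation):
        # First tag is the main category; each subtag is a subcategory of the
        # (sub)category corresponding to the tag immediately preceding it.
        hierarchies.append(match.split(":"))
    return hierarchies

def associate_clip_with_categories(virtual_clip: dict, annotation: str) -> None:
    """Associate the clip with every category/subcategory named in its tags."""
    virtual_clip["taxonomy"] = extract_taxonomy_tags(annotation)
    virtual_clip["categories"] = sorted({c for h in virtual_clip["taxonomy"] for c in h})

clip = {"media_file_id": "match.mp4"}
associate_clip_with_categories(clip, "Great save! #sports:soccer:goalkeeping")
print(clip["taxonomy"])    # [['sports', 'soccer', 'goalkeeping']]
print(clip["categories"])  # ['goalkeeping', 'soccer', 'sports']
```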
  • the categories and any corresponding hierarchy may exist in a data store (e.g., the global data store), and associating the taxonomy tags with the categories may include matching the tags to the categories.
  • the taxonomy tags and their respective tagging sequence may represent a realtime, ad hoc "categorization" in absence of a centralized hierarchy.
  • the virtual clip may be associated with the taxonomy tags to produce a searchable virtual clip that is delivered to a requesting device in response to a query from the requesting device for any of the plurality of virtual clips that are associated with the taxonomy tags.
  • the system may require that the taxonomy tags of the query appear in the same sequence of the stored taxonomy tags, in order to improve accuracy and relevance of the search results.
  • associating a virtual clip with the taxonomy tags may include creating, based on an order in an input character string of the one or more taxonomy tags, a directed relationship between a first taxonomy tag and a second taxonomy tag sequentially following the first taxonomy tag in the character string, the directed relationship enabling a user to retrieve the first virtual clip from the stored data using an ordered combination of the first and second taxonomy tags as the query.
  • the system may provide for the query to include a user identifier, such that the virtual clips may further be searched to return any virtual clips that have the corresponding taxonomy tags and were created by a particular user.
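  • One hedged way to realize the ordered-tag query just described is sketched below, assuming each stored virtual clip carries an ordered taxonomy list and a creator identifier; requiring the query tags to appear as a contiguous, ordered run within a stored tag sequence, optionally filtered by user, is an illustrative choice rather than the disclosed implementation.

```python
# Sketch of querying stored virtual clips by an ordered combination of taxonomy
# tags, optionally restricted to clips created by a particular user. The
# requirement that query tags appear in the stored sequence is taken from the
# text; data layout and function names are assumptions.
def matches_ordered_tags(stored_tags: list, query_tags: list) -> bool:
    """True if query_tags appear, in order and contiguously, within stored_tags."""
    n, m = len(stored_tags), len(query_tags)
    return any(stored_tags[i:i + m] == query_tags for i in range(n - m + 1))

def search_clips(clips: list, query_tags: list, user_id: str = None) -> list:
    results = []
    for clip in clips:
        if user_id is not None and clip["creator"] != user_id:
            continue
        if any(matches_ordered_tags(h, query_tags) for h in clip["taxonomy"]):
            results.append(clip)
    return results

stored = [
    {"creator": "member_1", "taxonomy": [["sports", "soccer", "goalkeeping"]]},
    {"creator": "member_2", "taxonomy": [["music", "jazz"]]},
]
print(search_clips(stored, ["sports", "soccer"]))                       # first clip
print(search_clips(stored, ["sports", "soccer"], user_id="member_2"))   # []
```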
  • This configuration also provides for a user to configure the associated user account to "follow" a particular user, and further a particular set of taxonomy tags; subsequent to implementing this configuration, the system may automatically send matching virtual clips to the user's account.
  • the system may execute computer readable program code to generate a graphical user interface (GUI), such as the example GUI 1700 of FIG. 17A.
  • the GUI 1700 may include a navigation element 1702 that displays visual representations of one or more of the category hierarchies, in accordance with parameters that may be provided by the user.
  • the GUI may enable the user to configure the navigation element 1702 to display a particular subset of all accessible (i.e., to the user via permissions in a user account) hierarchies, non-limiting examples of such a subset including: all hierarchies derived from taxonomy tags associated with virtual clips created, stored, and/or saved (e.g., via a bookmark function) by the user; all hierarchies derived from taxonomy tags of virtual clips shared with the user's user account; each hierarchy derived from a taxonomy tag used within a specified portion of a social network; and the like.
  • the GUI 1700 may enable the user to interact with the displayed hierarchies, such as by displaying an interactable icon (e.g., an arrow 1704) indicating that a displayed category 1706 has one or more subcategories; selecting the icon may cause the system to update the navigation element 1702 to display the subcategory/ies that were previously hidden.
  • the user may be able to select a displayed category 1706;
  • the system may filter all virtual clips accessible by the user to produce a subset of such virtual clips that are also associated with the selected category 1706, as specified by a taxonomy tag associated with the virtual clip.
  • the system may then update the GUI 1700 to include a content display panel 1712 displaying visual representations 1714 of the virtual clips that belong to the filtered subset.
  • the visual representations 1714 may be interactable graphical objects, such as a selectable element that generates a user input causing the system to update the GUI 1700 to include a virtual clip display panel (not shown) that displays the virtual clip associated with the selected visual representation 1714.
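  • As a simplified illustration of the filtering that populates the content display panel 1712, the sketch below keeps only the user's accessible virtual clips whose taxonomy tags mention the selected category; names and data layout are assumptions.

```python
# Sketch of the filtering behind content display panel 1712: when the user
# selects a displayed category (e.g., 1706), keep only the accessible virtual
# clips whose taxonomy tags mention that category. Names are illustrative.
def filter_by_category(accessible_clips: list, selected_category: str) -> list:
    return [clip for clip in accessible_clips
            if any(selected_category in hierarchy for hierarchy in clip["taxonomy"])]

accessible = [
    {"title": "Penalty kick", "taxonomy": [["sports", "soccer"]]},
    {"title": "Trumpet solo", "taxonomy": [["music", "jazz"]]},
]
# Selecting the "soccer" category updates the GUI with the matching clips.
print(filter_by_category(accessible, "soccer"))   # [{'title': 'Penalty kick', ...}]
```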
  • FIG. 17B illustrates an example GUI 1760 that enables a user to configure his user account to identify virtual clips in a particular subcategory, and further to identify virtual clips created by a particular user and belonging to a particular subcategory.
  • FIG. 17B further shows that the system may configure such filtering in-context - that is, the filtering may be performed upon encountering a taxonomy tag 1764 of a virtual clip.
  • the system may configure the GUI 1760 to render the taxonomy tag 1764 as an interactable object; the user may, for example, tap on or direct a mouse cursor to hover over the taxonomy tag 1764, producing a user input that the system processes and in turn updates the GUI 1760 to include a popup information window 1770 containing information as well as objects that may initiate commands.
  • One such object 1772 may invoke a filtering command that causes the system to aggregate virtual clips associated with the subcategory identified by the taxonomy tag 1764.
  • the user is enabled to click on the object 1772 to "follow" the subcategory.
  • Another such object 1774 may invoke a filtering command that is constructed from the category hierarchy of the taxonomy tag as well as additional metadata of the virtual clip.
  • the additional metadata includes the user identifier of the user that created or "posted" the virtual clip 1762. The object 1774 thus invites the user to aggregate virtual clips associated with the subcategory only if the virtual clips were created or posted by the identified user.
  • taxonomy tags may further be used to aggregate information about social network activity.
  • the illustrated information window 1770 displays exemplary network aggregation data, including a number of virtual clips network-wide having the selected taxonomy tag, a number of annotations and/or comments made on virtual clips in the corresponding category, and a number of users who have associated virtual clips or otherwise have participated in the subcategory. Any suitable metadata associated with the virtual clips may be aggregated and presented for analysis in this manner.
  • In step 1440, the method determines if the user activates a CANCEL graphical interactable object.
  • If the method determines that the user does activate the CANCEL graphical interactable object, then the method transitions from step 1440 to step 1490 wherein the method ends without saving any selected content. Alternatively, if the method determines in step 1440 that the user does not activate the CANCEL graphical interactable object, then the method transitions from step 1440 to step 1450 wherein the method determines if the user activates the SAVE graphical interactable object.
  • If the method determines in step 1450 that the user activates the SAVE graphical interactable object, then the method transitions from step 1450 to step 1460 wherein the method collects available data including content from the media file, metadata from the media file, begin and end points in the media file, media file location (URL), annotation text, annotation Taxonomy Tag(s), visibility settings, and designated recipients.
  • The method transitions from step 1460 to step 1470 wherein the method indexes and saves the collected data of step 1460.
  • The method transitions from step 1470 to step 1480 wherein the method resumes play of the media file.
  • If the user elects in step 966 to change visibility from PUBLIC to PRIVATE, the method transitions from step 966 to step 1510 (FIG. 13) wherein the method does NOT include a location for the media file, or a location for any saved data abstracted from that media file, in a sitemap published to search engines.
  • the method transitions from step 1510 to step 1440 and continues as described herein.
  • If a user elects to provide saved content to specific persons in step 968, then the method transitions from step 968 to step 1610 wherein the method enters recipients in the form of name(s), email(s), and/or social media account(s). The method transitions from step 1610 to step 1440 and continues as described herein.
  • a “transition” comprises an animation-like effect when Applicants' method to display a composite virtual clip moves from one previously saved virtual clip to a next previously saved virtual clip during an onscreen presentation.
  • Applicants' method allows control of the speed of each transition effect.
  • Applicants' method also permits the addition of sound transitions when moving from a saved virtual clip to the next saved virtual clip.
  • If a user desires in step 650 to add one or more transition effects to a previously configured composite virtual clip, Applicants' method transitions from step 650 to step 1110 (FIG. 9).
  • In step 1110, the method selects a previously configured composite virtual clip, wherein that composite virtual clip is configured to include (N) previously saved virtual clips in an order from 1 to (N).
  • In step 1120, the method selects a transition effect having a known storage location.
  • In step 1130, the method configures an (i)th transition effect link pointing to the known storage location for the desired transition effect.
  • In step 1140, the method configures the (i)th transition effect link to be activated after activation of a link to an (i)th virtual clip and before activation of a link to an (i+1)th virtual clip.
  • In step 1150, the method updates the composite virtual clip file to include the (i)th transition effect link.
  • In step 1160, the method determines if the user desires to configure additional transition effects for the selected composite virtual clip. If the user elects to configure additional transition effect links, then the method transitions from step 1160 to step 1120 and continues as described herein. Alternatively, if the user does not elect to configure additional transition effect links, then the method transitions from step 1160 to step 1170 and ends.
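  • The steps 1110-1170 for adding transition effect links might be sketched as follows; the representation of the composite virtual clip file and the function name are assumptions.

```python
# Sketch of steps 1110-1170: insert an (i)th transition effect link into a
# composite virtual clip file so that it activates after the link to the (i)th
# virtual clip and before the link to the (i+1)th. Data layout is assumed.
def add_transition_effect(composite: dict, i: int, effect_location: str) -> None:
    """Configure and store a transition effect link between clip i and clip i+1."""
    transition_link = {
        "effect_location": effect_location,   # steps 1120-1130: known storage location
        "after_clip": i,                      # step 1140: activate after the (i)th clip
        "before_clip": i + 1,                 #            and before the (i+1)th clip
    }
    composite.setdefault("transition_links", []).append(transition_link)   # step 1150

composite_clip = {"links": [
    {"order": 1, "clip_location": "server://clips/clip-A"},
    {"order": 2, "clip_location": "server://clips/clip-B"},
]}
add_transition_effect(composite_clip, 1, "server://effects/cross-fade")
print(composite_clip["transition_links"])
```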
  • If a user desires in step 660 to add one or more lensing effects to a previously configured composite virtual clip, Applicants' method transitions from step 660 to step 1210 (FIG. 10). As those skilled in the art will appreciate, a "lensing" effect includes, for example and without limitation, overlay of one or more color filters, image distortions, and annotations.
  • In step 1210, the method selects a previously configured composite virtual clip, wherein that composite virtual clip is configured to include (N) previously saved virtual clips in an order from 1 to (N).
  • In step 1220, the method selects a lensing effect having a known storage location.
  • In step 1230, the method configures an (i)th lensing effect link pointing to the known storage location for the desired lensing effect.
  • In step 1240, the method configures the (i)th lensing effect link to be activated simultaneously with activation of a link to an (i)th virtual clip.
  • In step 1250, the method updates the composite virtual clip file to include the (i)th lensing effect link.
  • In step 1260, the method determines if the user desires to configure additional lensing effects for the selected composite virtual clip. If the user elects to configure additional lensing effect links, then the method transitions from step 1260 to step 1220 and continues as described herein. Alternatively, if the user does not elect to configure additional lensing effect links, then the method transitions from step 1260 to step 1270 and ends.
  • If a user desires in step 670 to add one or more sound effects to a previously configured composite virtual clip, Applicants' method transitions from step 670 to step 1310 (FIG. 11).
  • In step 1310 the method selects a previously configured composite virtual clip, wherein that composite virtual clip is configured to include (N) previously saved virtual clips in an order from 1 to (N).
  • In step 1320 the method selects a sound effect having a known storage location.
  • In step 1330 the method configures an (i)th sound effect link pointing to the known storage location for the desired sound effect.
  • In step 1340 the method configures the (i)th sound effect link to be activated simultaneously with activation of a link to an (i)th virtual clip.
  • In step 1350 the method updates the composite virtual clip file to include the (i)th sound effect link.
  • In step 1360 the method determines if the user desires to configure additional sound effects for the selected composite virtual clip. If the user elects to configure additional sound effect links, then the method transitions from step 1360 to step 1320 and continues as described herein. Alternatively, if the user does not elect to configure additional sound effect links, then the method transitions from step 1360 to step 1370 and ends. (A code sketch of this effect-link pattern appears after this list.)
  • a method for displaying annotations associated with a playable media file is disclosed. Either computing device 110 or 150 from the network 100 can be used to display annotations associated with a playable media file.
  • the system may execute computer readable program code to generate a graphical user interface (GUI), such as the example GUI 1800 of FIG. 18A.
  • the graphical user interface 1800 may include a display window 1802 for displaying content encoded by the playable media file.
  • the system may generate the GUI 1800 to include such playback (or other display) of the playable media file.
  • the system may obtain a virtual clip as described herein, and may determine a storage location of the playable media file from the virtual clip; then, the system may include, in the program instructions for displaying the GUI 1800, instructions that cause the display device 107 to access and/or retrieve the playable media file at the storage location. Additionally or alternatively, the system may itself access and/or retrieve the playable media file at the storage location, and may deliver the playable media file to the user's device for playback.
  • the GUI 1800 may include a first interactable graphical object 1810, which displays a timeline representing a duration of play of a playable media file.
  • the first interactable graphical object 1810 may overlay the display window 1802 that displays the playable media file content.
  • the first interactable graphical object 1810 may display a plurality of visible annotation indicators.
  • a visible annotation indicator may be a clip indicator 1830 associated with a corresponding virtual clip that is associated with the playable media file.
  • the clip indicator 1830 may identify a start time of the associated virtual clip, and may appear at the corresponding location along the timeline.
  • each virtual clip associated with the playable media file in a global data store may have a corresponding clip indicator 1830 appearing in the appropriate display position for the corresponding start time of the virtual clip.
  • the clip indicator 1830 of each virtual clip may have a corresponding color that is selected based on an access type of the virtual clip with respect to the active user account.
  • the colors of clip indicators 1830 are associated with: a public access type, wherein any user and non-user visitor can access the virtual clip; a shared access type, wherein another user of the social network has shared the virtual clip with the user of the active user account; and, a private access type, which are virtual clips created by the active user.
  • the GUI 1800 may further include a second interactable graphical object 1820 that also overlays a portion of the display window 1802.
  • the second interactable graphical object 1820 may be configured to dynamically display up to a maximum number of graphic elements each associated with a corresponding virtual clip of the plurality of virtual clips; the graphic elements may be selected based on an interaction by the user with a certain display position on the timeline.
  • a user input indicating that the user interacted with a first display position on the timeline may be received by the system.
  • the system may create each of the graphic elements to include information related to a virtual clip that has a start time within a certain duration from the time associated with the first display position.
  • the system may identify one, some, or all of the virtual clips as displayable virtual clips: the virtual clip having its start time closest to the time at the first display position may be selected as a first clip; then, one or more virtual clips preceding (e.g., sequentially preceding) the first clip and/or one or more virtual clips subsequent (e.g., sequentially subsequent) may be selected, such that a number of virtual clips no greater than the maximum number of graphic elements are selected.
  • the displayable virtual clips are each associated with one of the graphic elements, such that information about the clip is displayed in the graphic element when the second interactable graphical object 1820 is visible in the GUI 1800.
  • the graphic elements may be displayed in a stacked list, as illustrated, with the first clip approximately at the vertical center of the list.
  • the system may revise the selection of displayable virtual clips and update the GUI 1800 accordingly each time a new user input indicates another interaction with the timeline. (A code sketch of this selection logic appears after this list.)
  • the second interactable graphical object 1820 may have a setting that the system can switch to make the second interactable graphical object 1820 visible or not visible within the GUI 1800.
  • the system causes the second interactable graphical object 1820 not to be displayed when the GUI 1800 is first displayed. Then (e.g., in step 1740 of FIG. 16), when a user interacts with a visible annotation indicator 1830, or any other part of the first interactable graphical object 1810, the system updates the GUI 1800 to display the second interactable graphical object 1820, itself displaying the list of graphical elements (e.g., annotations 1822a-g).
  • the system may create additional annotation indicators, which are displayed on the first interactable graphical object 1810, based on a user's input.
  • the data profile 300 in FIG. 3 further comprises an access type indicating whether an annotation is a public annotation available to all network users of a social network, a shared annotation made accessible to a user by one of the network users, or a private annotation accessible only by a user; and an identifier of a creating user of an annotation.
  • the graphical user interface 1800 may further include a third interactable graphical object 1840 overlaying a third portion of the display window.
  • the third interactable graphical object 1840 could be made visible by the system, as described above.
  • the system may receive a user input indicative of a user interaction with the first graphic element.
  • the system may obtain entries in a discussion thread associated with the first clip, and may render information from the discussion thread into the third interactable graphical object 1840 and update the GUI 1800 to display the third interactable graphical object 1840.
  • a machine learning engine can learn what the behavior of a network user looks like, and the machine learning engine can interact with the computing device and the control device within the network 100.
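The effect-link pattern of steps 1110-1370 above can be illustrated with a short sketch. The following Python is not part of the disclosed embodiments; the names CompositeClip, ClipLink, and EffectLink, the field layout, and the playback helper are assumptions made only to show how an (i)th transition link can be activated between the (i)th and (i+1)th clip links while lensing and sound effect links activate together with the (i)th clip link.

    # Minimal sketch (illustrative names, not the disclosed implementation) of a composite
    # virtual clip file that interleaves transition, lensing, and sound effect links.
    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class EffectLink:
        kind: str                       # "transition", "lensing", or "sound"
        location: str                   # known storage location (URL) of the effect
        speed: Optional[float] = None   # optional playback speed for transition effects

    @dataclass
    class ClipLink:
        clip_location: str                                # storage location of the saved virtual clip
        transition_after: Optional[EffectLink] = None     # activated after this clip (cf. step 1140)
        overlays: List[EffectLink] = field(default_factory=list)  # activated with this clip (cf. steps 1240, 1340)

    @dataclass
    class CompositeClip:
        links: List[ClipLink]           # the (N) saved virtual clips, in presentation order

        def add_transition(self, i: int, effect: EffectLink) -> None:
            # the (i)th transition plays after the (i)th clip and before the (i+1)th clip
            self.links[i].transition_after = effect

        def add_overlay(self, i: int, effect: EffectLink) -> None:
            # lensing and sound effect links activate simultaneously with the (i)th clip
            self.links[i].overlays.append(effect)

        def playback_order(self):
            # yields storage locations in the order their links would be activated
            for link in self.links:
                yield link.clip_location, link.overlays
                if link.transition_after is not None:
                    yield link.transition_after.location, []

Under these assumptions, add_transition(2, effect) would schedule the effect between the third and fourth saved clips (indices are zero-based in the sketch), while add_overlay attaches a lensing or sound effect to a single clip link.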
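The timeline interaction described above (clip indicators 1830 colored by access type, and up to a maximum number of graphic elements listed in the second interactable graphical object 1820) can be sketched as follows. This Python is illustrative only: the color values, the default maximum of seven elements, and the function names are assumptions rather than part of the specification.

    # Sketch of selecting displayable virtual clips around the selected timeline position
    # and of choosing a clip-indicator color by access type (all values illustrative).
    from dataclasses import dataclass

    ACCESS_COLORS = {"public": "green", "shared": "yellow", "private": "red"}

    @dataclass
    class VirtualClip:
        start_time: float      # seconds from the beginning of the playable media file
        access_type: str       # "public", "shared", or "private"
        creator: str

    def indicator_color(clip: VirtualClip) -> str:
        return ACCESS_COLORS[clip.access_type]

    def displayable_clips(clips, selected_time, max_elements=7):
        """Pick the clip whose start time is closest to the selected time, then fill the
        remaining slots with its sequential neighbors, never exceeding max_elements."""
        ordered = sorted(clips, key=lambda c: c.start_time)
        if not ordered:
            return []
        target = min(max_elements, len(ordered))
        first = min(range(len(ordered)),
                    key=lambda i: abs(ordered[i].start_time - selected_time))
        lo = hi = first
        while hi - lo + 1 < target:
            if lo > 0:
                lo -= 1
            if hi - lo + 1 < target and hi < len(ordered) - 1:
                hi += 1
        return ordered[lo:hi + 1]   # the first clip sits near the middle of the stacked list

Each time a new user input indicates another interaction with the timeline, the selection would be recomputed and the list of graphic elements refreshed.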

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Theoretical Computer Science (AREA)
  • Strategic Management (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • General Business, Economics & Management (AREA)
  • Human Resources & Organizations (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • Tourism & Hospitality (AREA)
  • Accounting & Taxation (AREA)
  • Finance (AREA)
  • Data Mining & Analysis (AREA)
  • Development Economics (AREA)
  • Operations Research (AREA)
  • Quality & Reliability (AREA)
  • Game Theory and Decision Science (AREA)
  • Multimedia (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Primary Health Care (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

A method and system for annotating playable media files in a social network having a plurality of members, for displaying information associated with the playable media files, and for marking a portion of interest in the playable media files is disclosed herein.

Description

METHOD AND APPARATUS FOR REFERENCING, FILTERING, AND COMBINING
CONTENT
Technology Field
[0001] Embodiments generally relate to assemblies, methods, devices, and systems for
managing information, and more particularly, to assemblies, methods, devices, and systems for sharing and annotating content between members of a social network.
Summary
[0002] Embodiments of the current disclosure describe a method for displaying information associated with a playable media file. The method comprises the steps of obtaining stored data describing the information, the stored data comprising a storage location of the playable media file and a plurality of virtual clips each associated with the playable media file and including a first data element identifying a first time within the playable media file at which the corresponding virtual clip begins, and a second data element identifying a first user profile associated with creating the corresponding virtual clip; accessing the playable media file at the storage location; causing a graphical user interface (GUI) to be displayed on a computing device of a user, wherein said GUI enables the user to generate user inputs by interacting with the GUI; receiving a first user input indicating a first interaction of the user with a first display position on the timeline; determining a selected time within the playable media file that corresponds to the first display position; identifying a first virtual clip of the plurality of the virtual clips and one or more of the virtual clips; and updating the user interface on the computing device to display a list of the one or more displayable virtual clips in the second interactable graphical object.
[0003] Further, certain embodiments of the current disclosure depict a method for marking a portion of interest in a playable media file. The method comprises the steps of causing a recording device to begin capturing a recording of a live event as the Playable Media File; while the recording device is capturing the recording, receiving a first user input, the recording device continuing to capture the live content subsequent to the first user input; determining from the first user input, a first temporal point of interest during said recording of the Playable Media File;
generating a first temporal place marker that indexes said first temporal point of interest; and electronically storing the first temporal place marker.
Moreover, certain embodiments of the current disclosure describe a method of annotating a playable media file. The method comprises the steps of obtaining a virtual clip comprising a first location within the playable media file and a second location within the playable media file, the first and second locations together defining a clip of the playable media file occurring between the first and second locations; causing, using the virtual clip, the clip to be displayed on a computing device of a user; receiving a first user input associated with the virtual clip;
determining that the first user input conforms to a predetermined format defining taxonomy tags; identifying one or more taxonomy tags from the user input; and associating, in an account of the user, the virtual clip with each of the one or more taxonomy tags identified from the user input.
Brief Description of the Drawings
The invention will be better understood from a reading of the following detailed description taken in conjunction with the drawings in which like reference designators are used to designate like elements, and in which:
FIG. 1 illustrates an exemplary embodiment of a system for making a composite video with annotation(s);
FIG. 2 illustrates another exemplary embodiment of a system for making a composite video with annotation(s);
FIG. 3 is a table of information fields stored in association with each playable media file;
FIG. 4 summarizes steps in Applicants' method, steps implemented by Applicants' article of manufacture, and steps performed by a programmable processor implementing Applicants' computer program product; [00010] FIG. 5 summarizes steps in Applicants' method, steps implemented by Applicants' article of manufacture, and steps performed by a programmable processor implementing Applicants' computer program product;
[00011] FIG. 6 summarizes steps in Applicants' method, steps implemented by Applicants' article of manufacture, and steps performed by a programmable processor implementing Applicants' computer program product;
[00012] FIG. 7 summarizes steps in Applicants' method, steps implemented by Applicants' article of manufacture, and steps performed by a programmable processor implementing Applicants' computer program product;
[00013] FIG. 8 summarizes steps in Applicants' method, steps implemented by Applicants' article of manufacture, and steps performed by a programmable processor implementing Applicants' computer program product;
[00014] FIG. 9 summarizes steps in Applicants' method, steps implemented by Applicants' article of manufacture, and steps performed by a programmable processor implementing Applicants' computer program product;
[00015] FIG. 10 summarizes steps in Applicants' method, steps implemented by Applicants' article of manufacture, and steps performed by a programmable processor implementing Applicants' computer program product;
[00016] FIG. 11 summarizes steps in Applicants' method, steps implemented by Applicants' article of manufacture, and steps performed by a programmable processor implementing Applicants' computer program product;
[00017] FIG. 12 summarizes steps in Applicants' method, steps implemented by Applicants' article of manufacture, and steps performed by a programmable processor implementing Applicants' computer program product;
[00018] FIG. 13 summarizes steps in Applicants' method, steps implemented by Applicants' article of manufacture, and steps performed by a programmable processor implementing Applicants' computer program product; [00019] FIG. 14 summarizes steps in Applicants' method, steps implemented by Applicants' article of manufacture, and steps performed by a programmable processor implementing Applicants' computer program product;
[00020] FIG. 15 is a flowchart of the method and/or process related to setting a bookmark during a recording of a playable media file;
[00021] FIG. 16 is a flowchart of the method and/or process related to displaying annotations associated with a playable media file;
[00022] FIG. 17A is an example of a graphical user interface for tagging;
[00023] FIG. 17B is an example of a graphical user interface that enables a user to configure his user account to identify virtual clips in a particular subcategory; and
[00024] FIGS. 18A and 18B are examples of graphical user interfaces for displaying
annotations.
Detailed Description
[00025] This invention is described in preferred embodiments in the following description with reference to the Figures, in which like numbers represent the same or similar elements. Reference throughout this specification to "one embodiment," "an embodiment," or similar language means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, appearances of the phrases "in one embodiment," "in an embodiment," and similar language throughout this specification may, but do not necessarily, all refer to the same embodiment.
[00026] The described features, structures, or characteristics of the invention may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are recited to provide a thorough understanding of embodiments of the invention. One skilled in the relevant art will recognize, however, that the invention may be practiced without one or more of the specific details, or with other methods, components, materials, and so forth. In other instances, well-known structures, materials, or operations are not shown or described in detail to avoid obscuring aspects of the invention.
[00027] The schematic flow chart diagrams included are generally set forth as a logical flowchart diagram (e.g., FIGS. 4- 16). As such, the depicted order and labeled steps are indicative of one embodiment of the presented method. In certain embodiments, other steps and methods are conceived that are equivalent in function, logic, or effect to one or more steps, or portions thereof, of the illustrated method. Additionally, the format and symbols employed are provided to explain the logical steps of the method and are understood not to limit the scope of the method. Although various arrow types and line types are employed in the flow-chart diagrams, they are understood not to limit the scope of the corresponding method (e.g., FIGS. 4- 16). Indeed, some arrows or other connectors may be used to indicate only the logical flow of the method. For instance, an arrow indicates a waiting or monitoring period of unspecified duration between enumerated steps of the depicted method. Additionally, the order in which a particular method occurs may or may not strictly adhere to the order of the corresponding steps shown.
[00028] In certain embodiments, individual steps recited in FIGS. 4-16 are combined, eliminated, or reordered.
[00029] Applicants' system and method includes a network wherein a video can be created using any available video format, and that video can be shared between a plurality of people. In certain embodiments, Applicants' system and method can be used by multiple members of a social network to associate annotations with a Playable Media File including a composite digital clip, and/or to initiate discussion threads associated with that Playable Media File including a composite digital clip.
[00030] Referring to FIG. 1, a portion of Applicants' network 100 is illustrated. In certain embodiments, Applicants' network 100 comprises a social network. In certain embodiments, Applicants' social network 100 is an open social network. In certain embodiments, Applicants' social network 100 is a closed social network. [00031] In the illustrated embodiment of FIG. 1, network 100 comprises a network server 130 that is communicatively connected to a computing device 110 through a first communication fabric 120 and a computing device 150 through a second communication fabric 140. In certain embodiments, the network server 130 is owned and/or operated by a social networking service provider while computing devices 110 and 150 are owned and/or operated by users or members of the social network 100, where a member has a profile containing information about the member stored in information 137 of the social network server 130. In some embodiments, the computing device 110 is owned and operated by a first member and the computing device 150 is owned and operated by a second member.
[00032] For the sake of clarity, FIG. 1 shows a first computing device 110, network server 130, and a second computing device 150. FIG. 1 should not be taken as limiting. Rather, in other embodiments any number of entities and corresponding devices can be part of the network 100, and further, although FIG. 1 shows two communication fabrics 120 and 140, in other embodiments, less than, or more than, two communication fabrics are provided in the social network 100. For example, in certain embodiments, the communication fabric 120 and the communication fabric 140 are the same communication fabric.
[00033] In certain embodiments, the computing devices 110 and 150 and host 130 are each an article of manufacture. Examples of the article of manufacture include: a server, a mainframe computer, a mobile telephone, a smart telephone, a personal digital assistant, a personal computer, a laptop, a set-top box, an MP3 player, an email enabled device, a tablet computer, a web enabled device, or other special purpose computer each having one or more processors (e.g., a Central Processing Unit, a Graphical Processing Unit, or a microprocessor) that are configured to execute Applicants' API to receive information fields, transmit information fields, store information fields, or perform methods.
[00034] By way of illustration and not limitation, FIG. 1 illustrates the computing device 110, the network server 130, and the computing device 150 as each including a processor 112, 132, and 152, respectively, a non-transitory computer readable medium 113, 133, and 153, respectively, having a series of instructions 114, 134, and 154, respectively, encoded therein, an input/output means 111, 131, and 151, respectively, such as a keyboard, a mouse, a stylus, touch screen, a camera, a scanner, or a printer. Computer readable program code 114, 134, and 154 is encoded in non-transitory computer readable media 113, 133, and 153, respectively. Processors 112, 132, and 152 utilize computer readable program code 114, 134, and 154, respectively, to operate computing devices 110, 130, and 150, respectively. In certain embodiments, the computing devices 110, 130, and 150 employ hardware and/or software that supports accelerometers, gyroscopes, magnetometers (e.g., solid state compasses) and the like.
[00035] Processors 112 and 152 utilize Applicants' Application Program Interfaces (APIs) 116 and 156, respectively, encoded in computer readable media 113 and 153, respectively, to communicate with host 130 and access Applicants' algorithm 136 encoded in computer readable medium 133 to implement Applicants' social network and method described herein. Algorithm 136 comprises Applicants' source code to operate a public or private social network, and when implemented by computing device 110 causes a graphic user interface ("GUI") to be displayed on display screen 115, wherein that GUI comprises and displays a plurality of graphical interactable objects. A member using computing device 110 (or computing device 150) can utilize that GUI to access a logical volume, such as for example and without limitation logical volume 180 (FIG. 2), wherein information specific to that user is encoded in logical volume 180. The member and/or user can further utilize the GUI to access Applicants' social network as described herein.
[00036] Processor 132 accesses the computer readable program code 134, encoded on the non-transitory computer readable medium 133, and executes an instruction 136 to electronically communicate with the computing device 110 via the communication fabric 120 or electronically communicate with the computing device 150 via the communication fabric 140. Encoded information 137 includes, for example and without limitation, the data communicated or information fields communicated, e.g., date and time of transmission, frequency of transmission and the like, with any or all of the computing device 110 and the computing device 150. In certain embodiments, information 137 is analyzed and/or mined. In certain embodiments, information 137 is encoded in a plurality of individual logical volumes specific to each member / user.
[00037] In certain embodiments, computing devices 110 and 150 further comprise one or more display screens 115 and 155, respectively. In certain embodiments, display screens 115 and 155 comprise an LED display device.
[00038] In certain embodiments, the information fields received from the computing device 110 at the network server 130 are exchanged with other computing devices not shown in FIG. 1. For example, information fields received from a social network in which the member has an Internet presence are sent to the social network server 130 and stored at the information 137 in association with a profile of the member. Alternatively, or in combination, the information fields transmitted from the computing device 110 to the social network server 130 are sent to an account of the member within the social network.
[00039] In certain embodiments, information 137 is encoded in one or more hard disk drives, tape cartridge libraries, optical disks, combinations thereof, and/or any suitable data storage medium, storing one or more databases, or the components thereof, in a single location or in multiple locations, or as an array such as a Direct Access Storage Device (DASD), redundant array of independent disks (RAID), virtualization device, etc. In certain embodiments, information 137 is structured by a database model, such as a relational model, a hierarchical model, a network model, an entity-relationship model, an object-oriented model, or a combination thereof. For example, in certain embodiments, the information 137 is structured in a relational model that stores a plurality of Identities for each of a plurality of members as attributes in a matrix.
[00040] In certain embodiments, the computing devices 110, 130, and 150 include wired and/or wireless communication devices which employ various communication protocols including near field (e.g., "Blue Tooth") and/or far field communication capabilities (e.g., satellite communication or communication to cell sites of a cellular network) that support any number of services such as: telephony, Short Message Service (SMS) for text messaging, Multimedia Messaging Service (MMS) for transfer of photographs and videos, electronic mail (email) access, or Global Positioning System (GPS) service, for example.
[00041] As illustrated in FIG. 1, the communication fabrics 120 and 140 each comprise one or more switches and 141, respectively. In certain embodiments, communication fabrics 120 and 140 are the same. In certain embodiments, at least one of the communication fabrics 120 and 140 comprises the Internet, an intranet, an extranet, a storage area network (SAN), a wide area network (WAN), a local area network (LAN), a virtual private network, a satellite communications network, an interactive television network, or any combination of the foregoing. In certain embodiments, at least one of the communication fabrics 120 and 140 contains either or both wired or wireless connections for the transmission of signals including electrical connections, magnetic connections, or a combination thereof. Examples of these types of connections include: radio frequency connections, optical connections, telephone links, a Digital Subscriber Line, or a cable link. Moreover, communication fabrics 120 and 140 utilize any of a variety of communication protocols, such as Transmission Control Protocol/Internet Protocol (TCP/IP), for example.
[00042] Referring to FIG. 2, the computing devices 110, 130 and 150 are each communicatively connected to the communication fabric 120, such as a WAN or the Internet. The network server 130 is a computing device that is owned and/or operated by a networking service provider, and computing devices 110 and 150 are owned and/or operated by individual network users. In certain embodiments, the network server is owned and/or operated by a social network provider. In certain embodiments, the network server 130 provides access to the computing devices 110 and 150 to execute Applicants' source code 136 via a Software as a Service (SaaS) means. [00043] In certain embodiments, information fields are received from one or more computing devices 110, 130 and/or 150 and stored on the "Cloud" such as data storage library 160 and/or 170. Referring to FIG. 2, each of the data storage libraries 160 and 170 has corresponding physical storage devices, such as and without limitation physical data storage devices 163-169 for data storage library 160 and 173-179 for data storage library 170.
[00044] In certain embodiments, data storage library 160 and data storage library 170 are configured in a Peer To Peer Remote Copy ("PPRC") storage system, wherein the information fields in data storage library 160 are automatically backed up in data storage library 170. In certain embodiments, Applicants' PPRC storage system utilizes synchronous copying. In certain embodiments, Applicants' PPRC storage system utilizes asynchronous copying.
[00045] In the illustrated embodiment of FIG. 2, physical storage device 163 is configured to comprise logical volume 180. In certain embodiments, each physical storage device in data storage library 160 is configured to comprise a plurality of logical volumes. Similarly, each physical storage device in data storage library 170 is configured to comprise a corresponding plurality of logical volumes. In certain embodiments, each member of the social network is assigned a unique logical volume. In such embodiments a permission file 157 may be encoded in computer readable medium 133 or in data storage libraries 160 and 170 that associates each logical volume with a social network member and further associates each logical volume with access permissions for certain designated other social network users. Each social network user configures his/her own logical volume permissions. In certain embodiments, if a first user desires to remove access permissions from a second user, that first member simply accesses his/her permissions file and deletes the second user. Thereafter, the second user cannot retrieve data stored on the logical volume associated with the first user.
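A minimal sketch of such a permission file, assuming a simple dictionary layout keyed by logical volume; the structure and function names are illustrative and are not the disclosed implementation:

    # Illustrative permission file 157 entry: each logical volume maps to its owning member
    # and to the other members who have been granted access.
    permission_file = {
        "volume-180": {
            "owner": "member-A",
            "grants": {"member-B": "view", "member-C": "view/edit/share"},
        }
    }

    def can_access(volume_id: str, member_id: str) -> bool:
        # the owner always has access; others only while an entry remains in "grants"
        entry = permission_file.get(volume_id, {})
        return member_id == entry.get("owner") or member_id in entry.get("grants", {})

    def revoke(volume_id: str, member_id: str) -> None:
        # deleting the entry means the member can no longer retrieve data on that volume
        permission_file[volume_id]["grants"].pop(member_id, None)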
[00046] Referring to FIGS. 1, 2, and 3, Applicants' algorithm 136, and its functions, can be accessed by users of Applicants' network 100 to create, share, edit, associate one or more annotations with, and/or associate one or more discussion threads with, a Playable Media File. One member, using a computing device such as computing device 110 or 150 to access network server 130, streams a Playable Media File from its original storage location. In certain embodiments the Playable Media File is encoded in a unique logical volume accessible by a first user. That first user can grant access to the Playable Media File to one or more other users by storing access permissions in permission file 157. In certain embodiments the access includes levels such as, and without limitation, view only, view/edit, view/edit/share, and the like. In certain embodiments the access includes conditions or restrictions such as expiration dates, limitations on the number of times the file can be viewed, and the like.
[00047] Referring now to FIG. 3, when a user having permission streams the Playable Media file, and if that user associates an annotation with the Playable Media File, a data profile 300 is created for the Playable Media File and is stored on network server 130, and optionally on data storage library 160 or 170. Data profile 300 includes various information fields, including the Global Unique Identifier (GUID) 302 associated with the creating member, a description 304 of the Playable Media File (e.g., a title), and permissions 306 held by various members to access, edit, and/or share the Playable Media File. Data profile 300 may further include subsequently added annotations 312 and discussion threads 328.
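A minimal sketch of data profile 300, using the reference numerals of FIG. 3 for the field names; the Python types are assumptions for illustration only:

    # Illustrative representation of data profile 300 (fields 302, 304, 306, 312, 328).
    from dataclasses import dataclass, field
    from typing import Dict, List

    @dataclass
    class DataProfile:
        guid: str                                                   # GUID 302 of the creating member
        description: str                                            # description 304, e.g., a title
        permissions: Dict[str, str] = field(default_factory=dict)   # permissions 306: member -> access level
        annotations: List[dict] = field(default_factory=list)       # annotations 312 added over time
        discussion_threads: List[dict] = field(default_factory=list)  # discussion threads 328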
[00048] Applicants' system and method further disclose an article of manufacture comprising a platform for information management, such as computing device 110, 130, and/or 150, comprising computer readable program code, such as API 116, API 156, and/or Applicants' social network source code 136, residing in a non-transitory computer readable medium, such as computer readable medium 113, 133, and/or 153, where that computer readable program code can be executed by a processor, such as processor 112 (FIG. 1), 132 (FIG. 1), and/or 152, to implement Applicants' method recited in FIGS. 4-16. [00049] Applicants' system and method further disclose a non-transitory computer readable medium wherein Applicants' computer program product is encoded therein. Applicants' computer program product comprises computer readable program code that can be executed by a programmable processor to implement Applicants' method recited in FIGS. 4-16. In either case, in certain embodiments, the computer readable program code is encoded in a non-transitory computer readable medium comprising, for example, a magnetic information storage medium, an optical information storage medium, an electronic information storage medium, and the like. "Electronic storage media," means, for example and without limitation, one or more devices, such as and without limitation, a PROM, EPROM, EEPROM, Flash PROM, compactflash, smartmedia, and the like.
[00050] A method for setting a bookmark during a recording of a playable media file is disclosed. In certain embodiments, a network user can use one of the computing devices 110 and 150 (FIG. 1) to record a playable media file of a live event. Again, FIG. 1 should not be taken as limiting. In other embodiments, any number of computing devices that are capable of recording a playable media file of a live event can be used by a network user and can be part of the network 100.
[00051] Referring to FIG. 15, a user device may be configured to, on its own or in cooperation with one or more servers as described above, create a virtual clip of a playable media file encoding a recording of a live event while the event and/or the recording is taking place. The exemplary method is described with reference to a touchscreen user device that is recording the event, and variations on the exemplary method are contemplated.
For example, rather than touchscreen inputs, in other embodiments the user device may be configured to receive audio inputs and/or inputs from peripheral devices such as a keyboard, remote controller, and/or mouse. In other embodiments, the user device may not be recording the live event and may not create or store the playable media file; another device may create the playable media file and store it in a location from which the user device or another device accessible by the user may concurrently and/or later download and/or stream the media file. [00052] At step 510, the user device receives a user input signaling the user device to begin recording the live event, and starts to record a playable media file of the live event. In conjunction with starting the recording, the user device may also display a user interface including one or more interactable graphical objects that serve as the user's controls for the virtual clip. At step 520, during the recording the user device receives another user input, and at step 530 the user device determines that the user input includes a command to start a virtual clip of the recording. At step 540, the user device generates a temporal place marker that indexes the temporal point of interest in the recording that corresponds to the time that the user initiated the virtual clip. In certain embodiments, the temporal place marker is stored on the user device or the recording computing device in step 550.
[00053] The user device continues to record the live event, subsequently receiving another user input at step 560. At step 570, the user device determines whether the user input includes a command to stop capturing the virtual clip; this command may be a selection of an END CLIP object in the user interface, or it may be a selection by the user to stop recording the live event. In either case, at step 580 the user device generates a temporal place marker that indexes the temporal point of interest in the recording that corresponds to the time that the user ended the virtual clip. At step 590, the user device may create and store the virtual clip containing an identifier for the media file encoding the recording of the live event, the first temporal place marker identifying the start time (i.e., time elapsed from the beginning of the recording) of the virtual clip, and the second temporal place marker identifying the end time of the virtual clip.
[00054] If the command to end the virtual clip did not terminate the recording of the live event, the steps 520-590 of creating a virtual clip may be repeated to capture a second virtual clip of the media file. In some embodiments, the user input that ends the capture of the first virtual clip may also serve as the user input that starts the capture of the second virtual clip. The playable media file and the virtual clip(s) may be transferred by the user device to a server or other computer storage device and later accessed using the systems described herein. Additionally or alternatively, the temporal place markers may be used to identify "trim" locations within the media file; the user device or recording device may store - only or additionally - the encoded content captured between the temporal place markers. In other embodiments of the method, the user device may be used to view the media file subsequent to the live event occurring, and to generate virtual clips of the media file as described above.
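The FIG. 15 flow can be sketched as follows; the class and method names are illustrative assumptions, and elapsed monotonic time stands in for whatever clock the recording device actually uses:

    # Sketch of steps 510-590: user inputs during a live recording open and close a virtual
    # clip by writing temporal place markers (names and structure are assumptions).
    import time

    class LiveClipMarker:
        def __init__(self, media_file_id: str):
            self.media_file_id = media_file_id
            self.recording_started = time.monotonic()   # step 510: recording begins
            self.pending_start = None
            self.virtual_clips = []

        def _elapsed(self) -> float:
            return time.monotonic() - self.recording_started

        def on_start_clip_input(self) -> None:
            # steps 520-550: generate and store the first temporal place marker
            self.pending_start = self._elapsed()

        def on_end_clip_input(self) -> None:
            # steps 560-590: second place marker; store a virtual clip referencing the recording
            if self.pending_start is None:
                return
            self.virtual_clips.append({
                "media_file": self.media_file_id,
                "start_time": self.pending_start,   # seconds elapsed from the beginning of the recording
                "end_time": self._elapsed(),
            })
            self.pending_start = None

In this sketch, calling on_start_clip_input again after on_end_clip_input simply opens the next clip, mirroring the repetition of steps 520-590 described above.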
[00055] There are several different ways for a user to communicate to a recording computing device to generate a temporal bookmark that indexes a temporal point of interest during recording of a live event. In certain embodiments, when a user touches a screen of the recording computing device, the algorithm 136 comprising Applicants' source code generates a temporal place marker. In other embodiments, the algorithm 136 comprises voice recognition source code so that when the user speaks verbally to the recording computing device, a temporal place marker is generated. In yet other embodiments, the user is able to communicate to the recording computing device using a control device 105 (FIG. 1), which is connected to the recording computing device via a connected data link. In other embodiments, the control device is connected to the recording computing device remotely via Bluetooth, ultra-wideband, wireless local area network, Wi-Fi, AirPort, Infrared, ZigBee, and/or other similar technologies. A signal can be transferred from the control device 105 to the recording computing device so that the algorithm 136 comprising Applicants' source code generates a temporal place marker during the recording of a live event.
[00056] The playable media file with at least one temporal bookmark generated from a recording computing device can be used to make a composite video file. Referring now to FIG. 4, in step 610 Applicants disclose determining whether to create a plurality of virtual clips, wherein each virtual clip comprises content encoded in one or more Media File, playable or static, from a beginning of the Media File, playable or static, up to a designated end point. The depicted order and labeled steps in FIG. 4 are indicative of one embodiment of the presented method. Further, the order in which a particular method occurs may or may not strictly adhere to the order of the corresponding steps shown in FIG. 4 because some users may choose to perform certain steps before other steps. A "Media File, playable or static," may be a file containing data that encodes one or more types of media content, such as audio content, video content, audiovisual content, image and other computer graphic content, text content, slide-show and similar sequenced-graphic content, and the like. Non-limiting examples of particular formats of media files include: AVI file, MP3 file, MP4 file, WMA file, WAV file, Flash, MPEG file, an image file (JPG, TIF, PNG, GIF, Bitmap, and the like), a PDF file, a text file (e.g., a .doc file), a VISIO file, a .ppt file, a .key file, a spreadsheet file, and any type of 3D media file. In certain embodiments, such a 3D media file requires holographic projection / holographic viewing. In certain embodiments, "Media File, playable or static," further includes any file which generates a stereoscopic visual display that can be viewed through stereoscopic eyewear or played on 3D display technology such as 3D TV, and in certain embodiments comprises a Virtual Reality / Augmented Reality file that can be viewed through Virtual Reality devices such as HoloLens, Oculus Rift, Sony PlayStation VR, HTC VIVE, Razer OSVR HDK, Zeiss VR One, SOV VR, Freefly, and the like.
A "virtual clip" created from one or more of such media files may, in some embodiments, be a set of data points that together delineate a particular subset of the content encoded in the media file. Thus, the virtual clip is comprised of references that identify specific content in the corresponding media file, but the virtual clip is not necessarily itself a stored media file containing the content data. The content data may remain in its original storage location, and the present systems (e.g., described in FIGS. 1 and 2) may obtain the virtual clip, read the set of data points, access the media file in its original stored location, and then obtain (e.g., via file transfer or streaming) the subset of content that is delineated by the data points. [00058] In some embodiments, the set of data points may include a start point and an end point; together with an identifier of the media file, the start point and end point may identify the content to be included in the virtual clip. The data describing the data points may be selected depending on the type of media file encoding the content. Non-limiting examples of start and end points include: in a video or audio file, a start time (e.g., a duration measured from the beginning of the video/audio content at time 0.0) and an end time which together define a "clip" of the video/audio content; in an image, a starting coordinate (e.g., with respect to a top-left corner of the image being at coordinate (0, 0)) of a first pixel representing the top-left corner of a rectangular region of the image, and an ending coordinate of a second pixel representing the lower- right corner of the region; in a slide show, a starting slide number and an ending slide number; in a plain text, formatted text, or binary text file, a starting pointer and an ending pointer identifying positions in the character stream. In a particular example of a 2D or 3D media file encoding a recorded computer simulation, each data point in the set may include a time (e.g., time elapsed since the beginning of the simulation), a coordinate location within the simulated environment (e.g., xyz coordinates of a user-controlled camera within a geographic environment mapped to a Cartesian coordinate system), and data (e.g., a vector) identifying the camera line-of-sight.
[00059] Referring to FIG. 4, if a user elects to create such a plurality of virtual clips, the
system may identify the beginning of the media file as the start point of the virtual clip, and then transitions from step 610 (FIG. 4) to step 710 (FIG. 5). Referring now to FIG. 5, in step 710 the method, without pausing the media play, displays an END CLIP interactable graphical object and a CANCEL CLIP interactable graphical object.
[00060] If the user activates the CANCEL CLIP interactable graphical object in step 720, then the method transitions from step 720 to step 750 and ends. Alternatively, if the user does not activate the CANCEL CLIP interactable graphical object in step 710, then the method transitions from step 710 to step 730 wherein the method determines if the END CLIP interactable graphical object has been activated. If the method determines in step 730 that the END CLIP interactable graphical object has not been activated, then the method waits at step 730, while the system continues to play or otherwise display the media file, until the user activates the END CLIP interactable graphical object. At step 740, the system determines that the user has selected to end the virtual clip, determines the location within the media file at which the virtual clip should end, and temporarily stores a start point, an end point, and any other data needed to identify the virtual clip.
In one example, step 740 for a virtual clip of a video or audio file may include identifying the time elapsed in the content when the END CLIP interactable graphical object was selected, and then creating and storing a virtual clip containing the media file identifier, a start time of 0.0, and an end time representing the time elapsed. In another example, rather than storing an end time, the system may subtract the start time from the end time to determine a duration of the virtual clip, and may store the start time and the duration in the virtual clip. In another example, step 740 for a virtual clip of an image file may include identifying an end coordinate of the pixel over which a mouse cursor was located when the END CLIP interactable graphical object was selected, and then creating and storing a virtual clip containing the media file identifier, a start point of (0, 0), and an end point at the end coordinate. The virtual clip would thus identify the region within an implied bounding box; if the end coordinate were (x, y), the bounding box would have clockwise corners at (0, 0), (x, 0), (x, y), and (0, y). In another example, step 740 for a virtual clip of a text file may include identifying a cursor location within the text file and determining a target position, within the data stream (e.g., ASCII or other plain text stream, rich text or other formatted text stream, binary file stream, etc.) representing the text file, corresponding to the cursor location, then creating and storing a virtual clip containing the media file identifier, a starting stream position of 0, and the target position as an ending stream position. [00062] In certain embodiments, in step 740 a virtual clip is saved to the user's computing device. In certain embodiments, in step 740 the virtual clip is saved to Applicants' network server 130 (FIG. 1).
[00063] Referring to FIG. 4 again, if the user elects in step 610 NOT to create a plurality of virtual clips each from a beginning of the media file to a designated end point, then the method transitions from step 610 to step 620 wherein the user may elect to create a plurality of virtual clips comprising content from one or more Media File, playable or static from a designated start point to a designated end point. In one embodiment, to determine that the user has elected to create a virtual clip, the system may display on the user's device a user interface that displays the media file along with a START CLIP interactable graphical object, and the system may receive a user input indicating the START CLIP interactable graphical object was selected. If the user elects to create a plurality of virtual clips, where each virtual clip comprises content from one or more Media File, playable or statics, and wherein the user specifies a designated timeline location to begin the virtual clip, then the system may identify, as the start point of the virtual clip, the point within the media file that was "in focus" when the START CLIP interactable graphical object was selected, and then transitions from step 620 to step 810 (FIG. 6).
[00064] A determination of the "in focus" point, and thus the start point, may depend on the type of the content file, but in any case can be objectively determined. In a playable media file, the time during playback that the START CLIP object is selected may be the start point; additional data may be needed for 2D or 3D recorded simulations, such as the camera location and line-of-sight when the START CLIP object is selected. In a slide show file, the "in focus" point may be the slide being displayed when the START CLIP object is selected, and in other static files such as text and image files, the cursor position may identify the "in focus" point.
[00065] Referring now to FIG. 6, in step 810 the method streams the Media File, playable or static from a designated start point, and without pausing the media play, displays an END CLIP interactable graphical object and a CANCEL CLIP interactable graphical object. If the user activates the CANCEL CLIP interactable graphical object in step 820, then the method transitions from step 820 to step 850 and ends. Alternatively, if the user does not activate the CANCEL CLIP interactable graphical object in step 810, then the method transitions from step 810 to step 830 wherein the method determines if the END CLIP interactable graphical object has been activated. If the method determines in step 830 that the END CLIP interactable graphical object has not been activated, then the method waits at step 830, while the system continues to play or otherwise display the media file, until the user activates the END CLIP interactable graphical object. At step 840, the system determines that the user has selected to end the virtual clip, determines the location within the media file at which the virtual clip should end, and temporarily stores a start point, an end point, and any other data needed to identify the virtual clip. Any of the above examples described with respect to step 740 of FIG. 5 may illustrate the system's operation to create and store the virtual clip, with the additional processing required to identify the start point within the media file. For example, to obtain a virtual clip of an image, the system may provide a user interface that enables the user to draw a visible bounding box (e.g., using a mouse cursor and clicks), and may identify the start and end points using the top-left and lower-right coordinates of the visible bounding box.
[00066] In certain embodiments, in step 840 the virtual clip is saved to the user's computing device. In certain embodiments, in step 840 the virtual clip is saved to Applicants' network server 130 (FIG. 1).
[00067] Referring to FIG. 4, if the user elects in step 610 NOT to create a plurality of virtual clips each from a beginning to a designated end point, and if the user elects NOT to create a plurality of virtual clips, where each virtual clip comprises content from one or more Media File, playable or statics, and wherein the user specifies a designated timeline location to begin the virtual clip, then the method transitions from step 620 to step 630 wherein the method determines if the user elects to configure a composite virtual clip. [00068] If the user elects to configure a composite virtual clip in step 630, the method transitions from step 630 to step 910. Referring now to FIG. 7, in step 910 the method selects (N) saved virtual clips to configure a composite virtual clip, and determines an order of presentation for those (N) virtual clips.
[00069] In step 920, the method sets (M) initially to 1. In step 930, the method configures a
(M)th link to a (M)th saved virtual clip, wherein the (M)th saved virtual clip will be the (M)th virtual clip to be displayed when the composite virtual clip is activated. In step 940, the method saves the (M)th link in a composite virtual clip file.
[00070] In step 950, the method determines if (M) equals (N), i.e., if all (N) links to the (N) selected saved virtual clips have been created and saved. If the method determines in step 950 that (M) does not equal (N), then the method transitions from step 950 to step 960 wherein the method increments (M) by 1, i.e., sets (M) equal to (M)+1. The method transitions from step 960 to step 930 and continues as described herein. Alternatively, if the method determines in step 950 that (M) equals (N), then the method transitions from step 950 to step 970 and ends.
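A minimal sketch of the FIG. 7 loop, assuming the composite virtual clip file is simply an ordered list of links; the function and key names are illustrative rather than the disclosed implementation:

    # Sketch of steps 910-970: build a composite virtual clip file as ordered links to (N)
    # previously saved virtual clips.
    def build_composite_clip(saved_clip_locations):
        """saved_clip_locations: the (N) storage locations, already placed in presentation order."""
        composite = {"links": []}
        if not saved_clip_locations:
            return composite
        m, n = 1, len(saved_clip_locations)                              # step 920: (M) starts at 1
        while True:
            link = {"order": m, "target": saved_clip_locations[m - 1]}   # step 930: configure the (M)th link
            composite["links"].append(link)                              # save the link in the composite file
            if m == n:                                                   # step 950: all (N) links created?
                return composite
            m += 1                                                       # step 960: (M) = (M)+1

Displaying the composite clip (FIG. 8) would then amount to activating each stored link in order and streaming the referenced saved clip to the user's device.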
[00071] If the user elects in step 610 NOT to create a plurality of virtual clips each from a beginning to a designated end point, and if the user elects in step 620 NOT to create a plurality of virtual clips, where each virtual clip comprises content from one or more Media File, playable or statics, and wherein the user specifies a designated timeline location to begin the virtual clip, and if the user does NOT elect in step 630 to configure a composite virtual clip, then in step 640 the method determines whether to display a composite virtual clip.
[00072] If the user elects to display a composite virtual clip in step 640, the method transitions to step 1010 (FIG. 8) where the method provides a storage location for a composite virtual clip file configured to access (M) saved clips. In step 1020, the method sets (P) initially to 1. In step 1030 the method activates a (P)th link encoded in the composite virtual clip file to stream a (P)th saved virtual clip to the user's device.
[00073] In step 1040 the method determines if all (N) clips comprising the selected composite virtual clip have been displayed, i.e., if (P) equals (N). If the method determines in step 1040 that (P) does not equal (N), then the method transitions from step 1040 to step 1050 and increments (P) by 1, i.e., sets (P) equal to (P)+1. The method transitions from step 1050 to step 1030 and continues as described herein.
Alternatively, if the method determines in step 1040 that (P) equals (N), the method transitions to step 962 wherein the method displays an Annotation Panel.
In step 964, the method determines if the user entered an annotation in the Annotation Panel of step 962. If the method determines in step 964 that a user entered an annotation in the Annotation Panel, then the method transitions from step 964 to step 1410. Alternatively, if the user did not enter an annotation in the Annotation Panel of step 962, then the method transitions from step 964 to step 966 wherein the method determines if the user elects to change visibility from PUBLIC to PRIVATE.
If the method determines in step 966 that the user does not elect to change the visibility of the identified content, then the method transitions from step 966 to step 968 wherein the method determines if the user elects to share saved data with specific recipients. If the user elects to share saved data with specific recipients, then the method transitions from step 966 to step 1510. If the user elects not to share saved data with specific recipients, then the method transitions from step 968 to step 1060 and ends.
If the method determines in step 964 that a user entered an annotation in the Annotation Panel, then the method transitions from step 964 to step 1410. Referring now to FIG. 12, in step 1410 the method saves the annotation entered into the Annotation Panel of step 962. In certain embodiments, in step 1410 the user's annotation is saved to the user's computing device. In certain embodiments, in step 1410 the user's annotation is saved to network server 130 (FIG. 1).
In step 1420, the method determines whether a user input associated with the virtual clip, such as an annotation, conforms to a predetermined format. In certain embodiments, the format comprises a tag identifier indicating the user input includes taxonomy tags, a first tag following the tag identifier and identifying a first category of the composite virtual clip, and (P) subtag(s) sequentially following the first tag and each including a delimiter indicating a previous tag is complete and an additional tag follows, identifying an additional category of the virtual clip. The tag identifier may be a character (the "#" or hash symbol in the examples herein), character string, or other data element that the system is configured to identify as an indicator that the text following the tag identifier should conform to the taxonomy tag format, and contains at least one tag if so. In some embodiments, the format comprises zero subtags. In other embodiments, the format comprises 1, 2, 3, 4, 5, or any number that is greater than 1 subtags. In step 1430, the method creates and saves a taxonomy tag for the annotation saved in step 1410. In certain embodiments, the taxonomy tag comprises a form "#content:TITLE." In other embodiments, the taxonomy tag comprises a form "#first tag:subtag1:subtag2:...:subtagP," where the first tag and each subtag(1..P) are character strings separated by the delimiter character (e.g., the ":" character).
[00078] Further, in step 1430, the method also identifies one or more taxonomy tags from the user and associates the virtual clip with one or more categories identified by the one or more taxonomy tags. In one embodiment, each tag immediately following a tag identifier corresponds to a main category, and each subtag corresponds to a subcategory of the (sub)category corresponding to the immediately preceding tag (i.e., the tag to the left of the delimiter). Thus, one or more categories are arranged into a hierarchy determined from a sequence of the corresponding tags identified in the user input. As described, each taxonomy tag identifies a corresponding hierarchy of categories. In some embodiments, the method associates the virtual clip with each of the one or more categories corresponding to one of the tags/subtags in each taxonomy tag associated with the virtual clip.
[00079] In some embodiments, the categories and any corresponding hierarchy may exist in a data store (e.g., the global data store), and associating the taxonomy tags with the categories may include matching the tags to the categories. Additionally or alternatively, the taxonomy tags and their respective tagging sequence may represent a real-time, ad hoc "categorization" in absence of a centralized hierarchy. The virtual clip may be associated with the taxonomy tags to produce a searchable virtual clip that is delivered to a requesting device in response to a query from the requesting device for any of the plurality of virtual clips that are associated with the taxonomy tags. In some embodiments, the system may require that the taxonomy tags of the query appear in the same sequence as the stored taxonomy tags, in order to improve accuracy and relevance of the search results. Thus, associating a virtual clip with the taxonomy tags may include creating, based on an order in an input character string of the one or more taxonomy tags, a directed relationship between a first taxonomy tag and a second taxonomy tag sequentially following the first taxonomy tag in the character string, the directed relationship enabling a user to retrieve the first virtual clip from the stored data using an ordered combination of the first and second taxonomy tags as the query. Additionally, the system may provide for the query to include a user identifier, such that the virtual clips may further be searched to return any virtual clips that have the corresponding taxonomy tags and were created by a particular user. This configuration also provides for a user to configure the associated user account to "follow" a particular user, and further a particular set of taxonomy tags; subsequent to implementing this configuration, the system may automatically send matching virtual clips to the user's account.
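By way of non-limiting illustration, the directed relationship between sequential taxonomy tags might be realized as an ordered prefix match against the stored tag sequences, as in the following sketch; the in-memory list stands in for the global data store and all names are hypothetical.

```python
# Hypothetical in-memory stand-in for the global data store of tagged virtual clips.
tagged_clips = []  # each entry: (ordered tag sequence as a tuple, clip identifier)

def associate_clip(tag_sequence, clip_id):
    """Record a directed tag sequence (first tag -> subtags) for a virtual clip."""
    tagged_clips.append((tuple(tag_sequence), clip_id))

def query_clips(query_sequence, user_id=None, clip_owners=None):
    """Return clips whose stored tag sequence begins with the query sequence, in the same order."""
    query = tuple(query_sequence)
    results = []
    for stored_sequence, clip_id in tagged_clips:
        if stored_sequence[:len(query)] == query:
            # Optionally restrict to clips created by a particular user.
            if user_id is None or (clip_owners or {}).get(clip_id) == user_id:
                results.append(clip_id)
    return results

associate_clip(["Politics", "Presidential", "Debates"], "clip-001")
associate_clip(["Presidential", "Politics"], "clip-002")   # different order, not matched below
print(query_clips(["Politics", "Presidential"]))            # ['clip-001']
```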
[00080] Referring to FIG. 17A, the system may generate a graphical user interface (GUI) 1700 for display to a user on a user device. The GUI 1700 may include a navigation element 1702 that displays visual representations of one or more of the category hierarchies, in accordance with parameters that may be provided by the user. For example, the GUI may enable the user to configure the navigation element 1702 to display a particular subset of all accessible (i.e., to the user via permissions in a user account) hierarchies, non-limiting examples of such a subset including: all hierarchies derived from taxonomy tags associated with virtual clips created, stored, and/or saved (e.g., via a bookmark function) by the user; all hierarchies derived from taxonomy tags of virtual clips shared with the user's user account; each hierarchy derived from a taxonomy tag used within a specified portion of a social network; and the like. The GUI 1700 may enable the user to interact with the displayed hierarchies, such as by displaying an interactable icon (e.g., an arrow 1704) indicating that a displayed category 1706 has one or more subcategories; selecting the icon may cause the system to update the navigation element 1702 to display the subcategory or subcategories that were previously hidden.
[00081] In some embodiments, the user may be able to select a displayed category 1706; when the system receives the user's selection, the system may filter all virtual clips accessible by the user to produce a subset of such virtual clips that are also associated with the selected category 1706, as specified by a taxonomy tag associated with the virtual clip. The system may then update the GUI 1700 to include a content display panel 1712 displaying visual representations 1714 of the virtual clips that belong to the filtered subset. The visual representations 1714 may be interactable graphical objects, such as a selectable element that generates a user input causing the system to update the GUI 1700 to include a virtual clip display panel (not shown) that displays the virtual clip associated with the selected visual representation 1714.
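By way of non-limiting illustration, the filtering that populates the content display panel 1712 might be implemented as follows; the clip record shape (a dictionary with a "tags" field holding ordered tag sequences) is an assumption made for this sketch.

```python
def filter_clips_by_category(accessible_clips, selected_category):
    """Return the subset of clips whose taxonomy tags include the selected category."""
    subset = []
    for clip in accessible_clips:
        # clip["tags"] is assumed to be a list of ordered tag sequences, e.g. [["Politics", "Presidential"]].
        if any(selected_category in tag_sequence for tag_sequence in clip["tags"]):
            subset.append(clip)
    return subset

clips = [
    {"id": "clip-001", "tags": [["Politics", "Presidential"]]},
    {"id": "clip-002", "tags": [["Sports", "Baseball"]]},
]
print([c["id"] for c in filter_clips_by_category(clips, "Politics")])  # ['clip-001']
```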
[00082] The system may use the taxonomy tags associated with a virtual clip to filter virtual clips according to any suitable parameter or combination of parameters. FIG. 17B illustrates an example GUI 1760 that enables a user to configure his user account to identify virtual clips in a particular subcategory, and further to identify virtual clips created by a particular user and belonging to a particular subcategory. FIG. 17B further shows that the system may configure such filtering in-context; that is, the filtering may be performed upon encountering a taxonomy tag 1764 of a virtual clip 1762 while viewing the virtual clip 1762 (of a text file in the illustrated example), rather than from a dedicated category navigation system as described with respect to FIG. 17A. In one embodiment, the system may configure the GUI 1760 to render the taxonomy tag 1764 as an interactable object; the user may, for example, tap on or direct a mouse cursor to hover over the taxonomy tag 1764, producing a user input that the system processes and in turn updates the GUI 1760 to include a popup information window 1770 containing information as well as objects that may initiate commands.
[00083] One such object 1772 may invoke a filtering command that causes the system to configure the user account to aggregate references to newly posted virtual clips containing a taxonomy tag with a certain (sub)category. In the illustrated example, the user is enabled to click on the object 1772 to "follow" the subcategory "Politics:Presidential:*," the wildcard indicating that virtual clips associated with any subcategory of "Presidential" will be included in the aggregation. Another such object 1774 may invoke a filtering command that is constructed from the category hierarchy of the taxonomy tag as well as additional metadata of the virtual clip. In the illustrated example, the additional metadata includes the user identifier of the user that created or "posted" the virtual clip 1762. The object 1774 thus invites the user to aggregate virtual clips associated with the subcategory only if the virtual clips were created or posted by the identified user.
[00084] The taxonomy tags may further be used to aggregate information about social network usage of particular tags, and the GUI 1760 may be used to present such information. The illustrated information window 1770 displays exemplary network aggregation data, including a number of virtual clips network-wide having the selected taxonomy tag, a number of annotations and/or comments made on virtual clips in the corresponding category, and a number of users who have associated virtual clips or otherwise have participated in the subcategory. Any suitable metadata associated with the virtual clips may be aggregated and presented for analysis in this manner.
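By way of non-limiting illustration, such network aggregation data might be computed as in the following sketch; the clip and annotation record shapes are assumptions made for the example.

```python
from collections import Counter

def aggregate_tag_usage(clips, annotations):
    """Aggregate network-wide usage counts per taxonomy tag sequence (hypothetical record shapes)."""
    clip_counts = Counter()
    annotation_counts = Counter()
    users_per_tag = {}
    for clip in clips:
        for tag_sequence in clip["tags"]:
            key = tuple(tag_sequence)
            clip_counts[key] += 1
            users_per_tag.setdefault(key, set()).add(clip["creator"])
    for note in annotations:
        for tag_sequence in note.get("tags", []):
            annotation_counts[tuple(tag_sequence)] += 1
    return {key: {"clips": clip_counts[key],
                  "annotations": annotation_counts[key],
                  "users": len(users)}
            for key, users in users_per_tag.items()}

stats = aggregate_tag_usage(
    [{"tags": [["Politics", "Presidential"]], "creator": "user-1"},
     {"tags": [["Politics", "Presidential"]], "creator": "user-2"}],
    [{"tags": [["Politics", "Presidential"]]}])
print(stats[("Politics", "Presidential")])  # {'clips': 2, 'annotations': 1, 'users': 2}
```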
[00085] Referring again to FIG. 12, in step 1440, the method determines if the user activates a CANCEL graphical interactable object. If the method determines that the user does activate the CANCEL graphical interactable object, then the method transitions from step 1440 to step 1490 wherein the method ends without saving any selected content. Alternatively, if the method determines in step 1440 that the user does not activate the CANCEL graphical interactable object, then the method transitions from step 1440 to step 1450 wherein the method determines if the user activates the SAVE graphical interactable object.
If the method determines in step 1450 that the user activates the SAVE graphical interactable object, then the method transitions from step 1450 to step 1460 wherein the method collects available data including content from the media file, metadata from the media file, begin and end points in the media file, media file location (URL), annotation text, annotation Taxonomy Tag(s), visibility settings, and designated recipients.
The method transitions from step 1460 to step 1470 wherein the method indexes and saves the collected data of step 1460. The method transitions from step 1470 to step 1480 wherein the method resumes playback of the media file.
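By way of non-limiting illustration, the data collected in step 1460 might be assembled into a single record for indexing and saving in step 1470 as in the following sketch; the field names are hypothetical.

```python
import json

def collect_save_record(media_file, clip, annotation, visibility, recipients):
    """Assemble the data collected in step 1460 into a single record (field names are hypothetical)."""
    record = {
        "media_content": media_file.get("content_excerpt"),
        "media_metadata": media_file.get("metadata", {}),
        "begin_point": clip["begin"],
        "end_point": clip["end"],
        "media_location": media_file["url"],
        "annotation_text": annotation["text"],
        "taxonomy_tags": annotation.get("tags", []),
        "visibility": visibility,              # e.g. "PUBLIC" or "PRIVATE"
        "recipients": recipients,              # names, emails, and/or social media accounts
    }
    return json.dumps(record)                  # step 1470 would index and persist this record

print(collect_save_record(
    {"url": "https://example.com/media.mp4", "metadata": {"title": "Example"}},
    {"begin": 12.5, "end": 47.0},
    {"text": "Key moment #Politics:Presidential"},
    "PUBLIC",
    ["friend@example.com"]))
```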
If the user elects to change the visibility settings in step 966 (FIG. 8), then the method transitions from step 966 to step 1510 (FIG. 13) wherein the method does NOT include a location for the media file, or a location for any saved data abstracted from that media file, in a sitemap published to search engines. The method transitions from step 1510 to step 1440 and continues as described herein.
If a user elects to provide saved content to specific persons in step 968, then the method transitions from step 968 to step 1610 wherein the method enters recipients in the form of name(s), email(s), and/or social media account(s). The method transitions from step 1610 to step 1440 and continues as described herein.
If a user elects in step 650 to apply one or more transition effects to one or more saved virtual clips, then the method transitions from step 650 to step 1110. As those skilled in the art will appreciate, a "transition" comprises an animation-like effect when Applicants' method to display a composite virtual clip moves from one previously saved virtual clip to a next previously saved virtual clip during an onscreen presentation. Applicants' method allows control of the speed of each transition effect. In addition, Applicants' method also permits the addition of sound transitions when moving from a saved virtual clip to the next saved virtual clip.
If a user desires in step 650 to add one or more transition effects to a previously configured composite virtual clip, Applicants' method transitions from step 650 to step 1110 (FIG. 9). Referring now to FIG. 9, in step 1110 the method selects a previously configured composite virtual clip, wherein that composite virtual clip is configured to include (N) previously saved virtual clips in an order from 1 to (N). In step 1120, the method selects a transition effect having a known storage location. In step 1130, the method configures an (i)th transition effect link pointing to the known storage location for the desired transition effect.
In step 1140, the method configures the (i)th transition effect link to be activated after activation of a link to an (i)th virtual clip and before activation of a link to an (i+1)th virtual clip. In step 1150, the method updates the composite virtual clip file to include the (i)th transition effect link.
In step 1160, the method determines if the user desires to configure additional transition effects for the selected composite virtual clip. If the user elects to configure additional transition effect links, then the method transitions from step 1160 to step 1120 and continues as described herein. Alternatively, if the user does not elect to configure additional transition effect links, then the method transitions from step 1160 to step 1170 and ends.
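By way of non-limiting illustration, steps 1130 through 1150 might insert a transition effect link between consecutive clip links as in the following sketch; the composite virtual clip file layout is a hypothetical simplification.

```python
def add_transition_link(composite_clip_file, i, transition_location):
    """Insert the (i)th transition effect link so it activates after the (i)th clip link
    and before the (i+1)th clip link (steps 1130-1150); the file layout is hypothetical."""
    playback_sequence = composite_clip_file.setdefault("playback_sequence", [])
    # Find the position of the (i)th clip link in the playback sequence (1-based i).
    clip_positions = [idx for idx, entry in enumerate(playback_sequence) if entry["type"] == "clip"]
    insert_at = clip_positions[i - 1] + 1
    playback_sequence.insert(insert_at, {"type": "transition", "link": transition_location})
    return composite_clip_file

composite = {"playback_sequence": [
    {"type": "clip", "link": "https://example.com/clip1"},
    {"type": "clip", "link": "https://example.com/clip2"},
]}
add_transition_link(composite, 1, "https://example.com/effects/crossfade")
print([entry["type"] for entry in composite["playback_sequence"]])  # ['clip', 'transition', 'clip']
```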
If a user desires in step 660 to add one or more lensing effects to a previously configured composite virtual clip, Applicants' method transitions from step 660 to step 1210 (FIG. 10). As those skilled in the art will appreciate, a "lensing" effect includes, for example and without limitation, overlay of one or more color filters, image distortions, and annotations.
Referring now to FIG. 10, in step 1210 the method selects a previously configured composite virtual clip, wherein that composite virtual clip is configured to include (N) previously saved virtual clips in an order from 1 to (N).

[00098] In step 1220, the method selects a lensing effect having a known storage location. In step 1230, the method configures an (i)th lensing effect link pointing to the known storage location for the desired lensing effect.
[00099] In step 1240, the method configures the (i)th lensing effect link to be activated simultaneously with activation of a link to an (i)th virtual clip. In step 1250, the method updates the composite virtual clip file to include the (i)th lensing effect link.
[000100] In step 1260, the method determines if the user desires to configure additional lensing effects for the selected composite virtual clip. If the user elects to configure additional lensing effect links, then the method transitions from step 1260 to step 1220 and continues as described herein. Alternatively, if the user does not elect to configure additional lensing effect links, then the method transitions from step 1260 to step 1270 and ends.
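By way of non-limiting illustration, the lensing effect links of steps 1230 through 1250 (and, analogously, the sound effect links of steps 1330 through 1350 described below) might be attached to a clip entry for simultaneous activation as in the following sketch; the composite clip file layout and field names are hypothetical.

```python
def add_simultaneous_effect_link(composite_clip_file, i, effect_location, effect_type):
    """Attach an effect link (e.g. a lensing or sound effect) to the (i)th clip link so both
    are activated together; the file layout is hypothetical."""
    clip_entries = [e for e in composite_clip_file["playback_sequence"] if e["type"] == "clip"]
    clip_entry = clip_entries[i - 1]                      # 1-based index of the clip link
    clip_entry.setdefault("simultaneous_effects", []).append(
        {"type": effect_type, "link": effect_location})
    return composite_clip_file

composite = {"playback_sequence": [{"type": "clip", "link": "https://example.com/clip1"}]}
add_simultaneous_effect_link(composite, 1, "https://example.com/effects/sepia", "lensing")
print(composite["playback_sequence"][0]["simultaneous_effects"])
# [{'type': 'lensing', 'link': 'https://example.com/effects/sepia'}]
```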
[000101] If a user desires in step 670 to add one or more sound effects to a previously configured composite virtual clip, Applicants' method transitions from step 670 to step 1310 (FIG. 11). Referring now to FIG. 11, in step 1310 the method selects a previously configured composite virtual clip, wherein that composite virtual clip is configured to include (N) previously saved virtual clips in an order from 1 to (N).
[000102] In step 1320, the method selects a sound effect having a known storage location. In step 1330, the method configures an (i)th sound effect link pointing to the known storage location for the desired sound effect.
[000103] In step 1340, the method configures the (i)th sound effect link to be activated simultaneously with activation of a link to an (i)th virtual clip. In step 1350, the method updates the composite virtual clip file to include the (i)th sound effect link.
[000104] In step 1360, the method determines if the user desires to configure additional sound effects for the selected composite virtual clip. If the user elects to configure additional sound effect links, then the method transitions from step 1360 to step 1320 and continues as described herein. Alternatively, if the user does not elect to configure additional sound effect links, then the method transitions from step 1360 to step 1370 and ends.

[000105] Referring to FIG. 16, a method for displaying annotations associated with a playable media file is disclosed. Either computing device 110 or 150 from the network 100 can be used to display annotations associated with a playable media file. In step 1720, the system may execute computer readable program code to generate a graphical user interface (GUI), such as the example GUI 1800 of FIG. 18A, and may cause the GUI 1800 to be displayed on a display device 107 (FIG. 1), such as by transmitting the GUI 1800 to the display device 107 or, when the system is implemented within the display device 107, by controlling a display of the display device 107 to display the GUI 1800. In certain embodiments, the graphical user interface 1800 may include a display window 1802 for displaying content encoded by the playable media file. The system may generate the GUI 1800 to include such playback (or other display) of the playable media file. In one embodiment, the system may obtain a virtual clip as described herein, and may determine a storage location of the playable media file from the virtual clip; then, the system may include, in the program instructions for displaying the GUI 1800, instructions that cause the display device 107 to access and/or retrieve the playable media file at the storage location. Additionally or alternatively, the system may itself access and/or retrieve the playable media file at the storage location, and may deliver the playable media file to the user's device for playback.
[000106] Referring further to FIG. 18A, in some embodiments the GUI 1800 may include a first interactable graphical object 1810, which displays a timeline representing a duration of play of a playable media file. The first interactable graphical object 1810 may overlay the display window 1802 that displays the playable media file content. Further, the first interactable graphical object 1810 may display a plurality of visible annotation indicators. For example, a visible annotation indicator may be a clip indicator 1830 associated with a corresponding virtual clip that is associated with the playable media file. The clip indicator 1830 may identify a start time of the associated virtual clip, and may appear at the corresponding location along the timeline. In some embodiments, each virtual clip associated with the playable media file in a global data store may have a corresponding clip indicator 1830 appearing in the appropriate display position for the corresponding start time of the virtual clip. The clip indicator 1830 of each virtual clip may have a corresponding color that is selected based on an access type of the virtual clip with respect to the active user account. In the illustrated example, the colors of clip indicators 1830 are associated with: a public access type, wherein any user and non-user visitor can access the virtual clip; a shared access type, wherein another user of the social network has shared the virtual clip with the user of the active user account; and, a private access type, which are virtual clips created by the active user.
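By way of non-limiting illustration, the selection of a display color for each clip indicator 1830 might proceed as in the following sketch; the particular colors and record shapes are assumptions made for the example, not part of the disclosure.

```python
# Hypothetical mapping from access type to clip indicator color; the colors are illustrative only.
ACCESS_TYPE_COLORS = {
    "public": "green",
    "shared": "yellow",
    "private": "red",
}

def clip_indicator_color(virtual_clip, active_user_id, shared_with):
    """Pick a display color for a clip indicator 1830 based on the clip's access type
    relative to the active user account (record shapes are hypothetical)."""
    if virtual_clip["creator"] == active_user_id:
        access_type = "private"      # clips created by the active user
    elif virtual_clip["visibility"] == "PUBLIC":
        access_type = "public"       # accessible to any user or non-user visitor
    elif active_user_id in shared_with.get(virtual_clip["id"], set()):
        access_type = "shared"       # shared with the active user by another network user
    else:
        return None                  # not accessible: no indicator is drawn
    return ACCESS_TYPE_COLORS[access_type]

clip = {"id": "clip-001", "creator": "user-7", "visibility": "PRIVATE"}
print(clip_indicator_color(clip, "user-7", {}))   # 'red' (a clip created by the active user)
```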
[000107] The GUI 1800 may further include a second interactable graphical object 1820 that also overlays a portion of the display window 1802. The second interactable graphical object 1820 may be configured to dynamically display up to a maximum number of graphic elements each associated with a corresponding virtual clip of the plurality of virtual clips; the graphic elements may be selected based on an interaction by the user with a certain display position on the timeline. In some embodiments, when the system receives a user input indicating that the user interacted with (e.g., clicked or tapped on, or hovered over with a mouse pointer) the timeline at a first display position, the system may create each of the graphic elements to include information related to a virtual clip that has a start time within a certain duration from the time associated with the first display position. For example, based on the time within the playable media file that corresponds to the first display position, the system may identify one, some, or all of the virtual clips as displayable virtual clips: the virtual clip having its start time closest to the time at the first display position may be selected as a first clip; then, one or more virtual clips preceding (e.g., sequentially preceding) the first clip and/or one or more virtual clips subsequent (e.g., sequentially subsequent) may be selected, such that a number of virtual clips no greater than the maximum number of graphic elements are selected. Then, in order of their start times, the displayable virtual clips are each associated with one of the graphic elements, such that information about the clip is displayed in the graphic element when the second interactable graphical object 1820 is visible in the GUI 1800. For example, the graphic elements may be displayed in a stacked list, as illustrated, with the first clip approximately at the vertical center of the list. The system may revise the selection of displayable virtual clips and update the GUI 1800 accordingly each time a new user input indicates another interaction with the timeline.
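By way of non-limiting illustration, the selection of displayable virtual clips around the selected time might be performed as in the following sketch; the field names and the symmetric expansion strategy around the first clip are assumptions made for the example.

```python
def select_displayable_clips(virtual_clips, selected_time, max_elements):
    """Choose up to max_elements clips around the clip whose start time is closest
    to the selected time (field names are hypothetical)."""
    ordered = sorted(virtual_clips, key=lambda c: c["start_time"])
    if not ordered:
        return []
    # The "first clip" is the one whose start time is closest to the selected time.
    first_index = min(range(len(ordered)),
                      key=lambda i: abs(ordered[i]["start_time"] - selected_time))
    # Expand around the first clip until max_elements clips are selected or the list is exhausted.
    lo = hi = first_index
    while hi - lo + 1 < max_elements and (lo > 0 or hi < len(ordered) - 1):
        if lo > 0:
            lo -= 1
        if hi - lo + 1 < max_elements and hi < len(ordered) - 1:
            hi += 1
    return ordered[lo:hi + 1]     # displayable clips, in order of their start times

clips = [{"id": f"clip-{t}", "start_time": t} for t in (5, 30, 60, 90, 120)]
print([c["id"] for c in select_displayable_clips(clips, 58, 3)])  # ['clip-30', 'clip-60', 'clip-90']
```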
[000108] The second interactable graphical object 1820 may have a setting that the system can switch to make the second interactable graphical object 1820 visible or not visible within the GUI 1800. In one embodiment, the system causes the second interactable graphical object 1820 not to be displayed when the GUI 1800 is first displayed. Then (e.g., in step 1740 of FIG. 16), when a user interacts with a visible annotation indicator 1830, or any other part of the first interactable graphical object 1810, the system updates the GUI 1800 to display the second interactable graphical object 1820, itself displaying the list of graphical elements (e.g., annotations 1822a-g). Additionally, in step 1750, the system may create additional annotation indicators, which are displayed on the first interactable graphical object 1810, based on a user's input. The data profile 300 in FIG. 3 further comprises an access type indicating whether an annotation is a public annotation available to all network users of a social network, a shared annotation made accessible to a user by one of the network users, or a private annotation accessible only by a user; and an identifier of a creating user of an annotation.
[000109] Referring to FIG. 18B, the graphical user interface 1800 may further include a third interactable graphical object 1840 overlaying a third portion of the display window. When a user interacts with an annotation displayed on the second interactable graphical object 1820, the third interactable graphical object 1840 could be made visible by the system, as described above. For example, while the GUI 1800 is displaying information for a first virtual clip in a first graphic element of the second interactable graphical object 1820, the system may receive a user input indicative of a user interaction with the first graphic element. In response, the system may obtain entries in a discussion thread associated with the first clip, and may render information from the discussion thread into the third interactable graphical object 1840 and update the GUI 1800 to display the third interactable graphical object 1840.
[000110] For the purpose of this application, a machine learning engine can learn what the behavior of a user of the network looks like, and the machine learning engine can interact with the computing device and the control device within the network 100.
[000111] While the preferred embodiments of the present invention have been illustrated in detail, it should be apparent that modifications and adaptations to those embodiments may occur to one skilled in the art without departing from the scope of the present invention.

Claims

We claim:
1. A method for displaying information associated with a Playable Media File, comprising:
obtaining stored data describing the information, the stored data comprising a storage location of the playable media file and a plurality of virtual clips each associated with the playable media file and including a first data element identifying a first time within the playable media file at which the corresponding virtual clip begins, and a second data element identifying a first user profile associated with creating the corresponding virtual clip;
accessing the playable media file at the storage location;
causing a graphical user interface (GUI) to be displayed on a computing device of a user, wherein said GUI enables the user to generate user inputs by interacting with the GUI, and the GUI comprises:
a display window for displaying content encoded by the playable media file; a first interactable graphical object, wherein the first interactable graphical object overlays a first portion of the display window and displays a timeline representing a duration of the playable media file and a plurality of clip indicators each associated with a corresponding virtual clip of the plurality of virtual clips, each clip indicator appearing on the timeline at a display position corresponding to the first time identified by the first data element of the corresponding virtual clip; and
a second interactable graphical object, wherein the second interactable graphical object overlays a second portion of the display window, is configured to display up to a first number of graphic elements each associated with a corresponding virtual clip of the plurality of virtual clips, and is initially not displayed in the GUI;
receiving a first user input indicating a first interaction of the user with a first display position on the timeline;
determining a selected time within the playable media file that corresponds to the first display position;
identifying, as a plurality of displayable virtual clips:
a first virtual clip of the plurality of virtual clips, the corresponding first time of the first virtual clip being the closest, of the plurality of virtual clips, to the selected time; and
one or more of the virtual clips wherein the corresponding first time precedes and is approximate to the first time of the first virtual clip, and one or more of the virtual clips wherein the corresponding first time is subsequent and approximate to the first time of the first virtual clip, such that at most the first number of the plurality of virtual clips are selected as the one or more displayable virtual clips; and
updating the user interface on the computing device to display a list of the one or more displayable virtual clips in the second interactable graphical object.
2. The method of claim 1, wherein each of the plurality of virtual clips further comprises a first access type indicating whether the virtual clip is a public clip available to any requesting user, a shared clip made accessible to the user by one of a plurality of network users of a web service, or a private clip accessible only by the user, and wherein causing the user interface to be displayed on the computing device comprises determining, for each of the plurality of clip indicators, a display color based on the first access type of the corresponding virtual clip.
3. The method of claim 2, further comprising:
receiving a second user input;
creating, from the second user input, an annotation associated with a first displayable virtual clip of the one or more displayable virtual clips;
providing said annotation to a network server;
providing a data profile to said network server, wherein said data profile comprises a location in said Playable Media File where said annotation should be made visible;
determining by said network server if said annotation is a first annotation submitted for said Playable media File;
if said annotation is not a first annotation submitted for said Playable Media File, encoding said data profile in a previously-created table of contents for said Playable Media File; if said annotation is a first annotation submitted for said Playable Media File:
creating a table of contents by said network server for said Playable Media File; and
encoding by said network server said data profile in said table of contents.
4. The method of claim 3, wherein the data profile further comprises: a second access type indicating whether the corresponding annotation is a public annotation available to all network users of a social network, a shared annotation made accessible to the user by one of the network users, or a private annotation accessible only by the user; and an identifier of a creating user of the corresponding annotation.
5. The method of claim 1, wherein the graphical user interface further comprises a third interactable graphical object overlaying a third portion of the display window, having a visible state controlling whether the third interactable graphical object is visible or hidden, and when visible displaying a discussion thread comprising annotations associated with a selected virtual clip of the one or more displayable virtual clips.
6. The method of claim 5, further comprising:
receiving a second user input describing an interaction with the selected virtual clip within the second interactable graphical object; and
responsive to the second user input, displaying the discussion thread within the third interactable graphical object.
7. The method of claim 1, wherein obtaining the stored data comprises:
causing a recording device to begin capturing, as the playable media file, a recording of live content;
while the recording device is capturing the live content, receiving a second user input at a first time, the recording device continuing to capture the live content subsequent to the second user input;
while the recording device is capturing the live content, receiving on the user interface a third user input at a second time; and
creating a first virtual clip of the plurality of virtual clips, the first virtual clip having the first time as the corresponding first position and the second time as the corresponding second position.
8. The method of claim 1, further comprising:
receiving a second user input associated with a first virtual clip of the plurality of virtual clips;
determining that the second user input comprises a character string in a predetermined format comprising:
a tag identifier indicating that the user input includes one or more taxonomy tags; a first tag following the tag identifier; and
one or more subtags sequentially following the first tag and delimited by a delimiter character;
identifying one or more taxonomy tags from the second user input; and
associating, in the stored data, the first virtual clip with the one or more taxonomy tags to produce a searchable virtual clip that is delivered to a requesting device in response to a query from the requesting device for any of the plurality of virtual clips that are associated with the user profile and the one or more taxonomy tags.
9. The method of claim 8, wherein associating the first virtual clip with the one or more taxonomy tags comprises creating, based on an order in the second user input of the one or more taxonomy tags, a directed relationship between a first taxonomy tag and a second taxonomy tag sequentially following the first taxonomy tag in the character string, the directed relationship enabling a user to retrieve the first virtual clip from the stored data using an ordered combination of the first and second taxonomy tags as the query.
10. A method for marking a portion of interest in a Playable Media File, comprising:
causing a recording device to begin capturing a recording of a live event as the Playable Media File;
while the recording device is capturing the recording, receiving a first user input, the recording device continuing to capture the live content subsequent to the first user input;
determining from the first user input, a first temporal point of interest during said recording of the Playable Media File;
generating a first temporal place marker that indexes said first temporal point of interest; and
electronically storing the first temporal place marker.
11. The method of claim 10, further comprising:
receiving a second user input while the recording device is capturing the recording; determining from the second user input, a second temporal point of interest during said recording of the Playable Media File; and
generating a second temporal place marker that indexes said second temporal point of interest;
wherein electronically storing the first temporal place marker comprises electronically storing a first virtual clip associated with the playable media file, the first virtual clip comprising the first temporal place marker and the second temporal place marker.
12. The method of claim 10, wherein the generating step further comprises using a control device to create the first temporal place marker.
13. The method of claim 12, wherein the control device communicates to the recording device via a connected data link.
14. The method of claim 12, wherein the control device communicates to the recording device remotely.
15. The method of claim 10, wherein said recording device is selected from the group consisting of a mainframe computer, a mobile telephone, a smart telephone, a personal digital assistant, a personal computer, a laptop, a set-top box, an MP3 player, an email enabled device, a tablet computer, a web enabled device, and other special purpose computers each having one or more processors.
16. A method of annotating a playable media file, the method comprising:
obtaining a virtual clip comprising a first location within the playable media file and a second location within the playable media file, the first and second locations together defining a clip of the playable media file occurring between the first and second locations;
causing, using the virtual clip, the clip to be displayed on a computing device of a user; receiving a first user input associated with the virtual clip;
determining that the first user input conforms to a predetermined format defining taxonomy tags;
identifying one or more taxonomy tags from the user input; and
associating, in an account of the user, the virtual clip with each of the one or more taxonomy tags identified from the user input.
17. The method of claim 16, wherein identifying the one or more taxonomy tags comprises determining the one or more taxonomy tags using the predetermined format comprising:
a tag identifier indicating that the user input includes taxonomy tags; and
one or more tags following the tag identifier and separated from each other by a delimiter, the one or more tags including a first tag and zero or more subtags arranged in sequence.
18. The method of claim 17, wherein associating the virtual clip with the one or more taxonomy tags comprises arranging the one or more taxonomy tags according to the sequence in the user input of the one or more taxonomy tags.
19. The method of claim 18, wherein the virtual clip is associated with each of the one or more taxonomy tags in a global data store tracking use of the taxonomy tags by all network users of a social network, and wherein the first user input includes a first taxonomy tag following the tag identifier, a second taxonomy tag following the first taxonomy tag, and a third taxonomy tag following the second taxonomy tag, the method further comprising:
causing the first user input to be displayed in association with an interactable graphical object on the computing device;
receiving a second user input indicating a selection of the second taxonomy tag in the first user input;
determining, as a tag filter, a portion of the sequence including the first taxonomy tag and the second taxonomy tag; querying the global data store to obtain a plurality of filtered virtual clips each associated with the first taxonomy tag and the second taxonomy tag according to the tag filter; and
causing the plurality of filtered virtual clips to be displayed on the computing device.
20. The method of claim 19, wherein associating the virtual clip with the one or more taxonomy tags further comprises associating the virtual clip in the global data store with a user identifier of a creating user of the virtual clip, and wherein the tag filter further includes the user identifier, such that the plurality of filtered virtual clips are each associated with the user identifier.
PCT/US2018/026194 2017-04-05 2018-04-05 Method and apparatus for referencing, filtering, and combining content WO2018187534A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP18781305.0A EP3607457A4 (en) 2017-04-05 2018-04-05 Method and apparatus for referencing, filtering, and combining content

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US15/479,774 US10609442B2 (en) 2016-07-20 2017-04-05 Method and apparatus for generating and annotating virtual clips associated with a playable media file
US15/479,774 2017-04-05

Publications (1)

Publication Number Publication Date
WO2018187534A1 true WO2018187534A1 (en) 2018-10-11

Family

ID=63712323

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2018/026194 WO2018187534A1 (en) 2017-04-05 2018-04-05 Method and apparatus for referencing, filtering, and combining content

Country Status (2)

Country Link
EP (1) EP3607457A4 (en)
WO (1) WO2018187534A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115665461A (en) * 2022-10-13 2023-01-31 聚好看科技股份有限公司 Video recording method and virtual reality equipment

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100251386A1 (en) 2009-03-30 2010-09-30 International Business Machines Corporation Method for creating audio-based annotations for audiobooks
US20120151346A1 (en) * 2010-12-10 2012-06-14 Mcclements Iv James Burns Parallel echo version of media content for comment creation and delivery
US20140229866A1 (en) 2008-11-24 2014-08-14 Shindig, Inc. Systems and methods for grouping participants of multi-user events
US20170013042A1 (en) 2013-01-31 2017-01-12 David Hirschfeld Social networking with video annotation

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8837906B2 (en) * 2012-12-14 2014-09-16 Motorola Solutions, Inc. Computer assisted dispatch incident report video search and tagging systems and methods

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140229866A1 (en) 2008-11-24 2014-08-14 Shindig, Inc. Systems and methods for grouping participants of multi-user events
US20100251386A1 (en) 2009-03-30 2010-09-30 International Business Machines Corporation Method for creating audio-based annotations for audiobooks
US20120151346A1 (en) * 2010-12-10 2012-06-14 Mcclements Iv James Burns Parallel echo version of media content for comment creation and delivery
US20170013042A1 (en) 2013-01-31 2017-01-12 David Hirschfeld Social networking with video annotation

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3607457A4 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115665461A (en) * 2022-10-13 2023-01-31 聚好看科技股份有限公司 Video recording method and virtual reality equipment
CN115665461B (en) * 2022-10-13 2024-03-22 聚好看科技股份有限公司 Video recording method and virtual reality device

Also Published As

Publication number Publication date
EP3607457A4 (en) 2021-01-13
EP3607457A1 (en) 2020-02-12

Similar Documents

Publication Publication Date Title
US20200186869A1 (en) Method and apparatus for referencing, filtering, and combining content
JP6303023B2 (en) Temporary eventing system and method
US11188586B2 (en) Organization, retrieval, annotation and presentation of media data files using signals captured from a viewing environment
US20070162953A1 (en) Media package and a system and method for managing a media package
US12056142B2 (en) Content capture across diverse sources
CN117633257A (en) System and method for conversion between media content items
US10681103B2 (en) Social networking with video annotation
CN106062741A (en) Method and system for processing information within social network
EP3516614A1 (en) Social networking with video annotation
US20170017382A1 (en) System and method for interaction between touch points on a graphical display
US20200034338A1 (en) System and method of virtual/augmented/mixed (vam) reality data storage
WO2018187534A1 (en) Method and apparatus for referencing, filtering, and combining content
US20140123076A1 (en) Navigating among edit instances of content
US20140244698A1 (en) Method for Skipping Empty Folders when Navigating a File System
US10678842B2 (en) Geostory method and apparatus
CN105765985B (en) Unified content indicates
US20220308720A1 (en) Data augmentation and interface for controllable partitioned sections
CA3188009A1 (en) System and method for digital information management
CN116801024A (en) Comment-based interaction method, comment-based interaction device, comment-based interaction equipment, comment-based interaction storage medium and comment-based interaction program product
KR20200104105A (en) Apparatus and method for producing integrated information using image
WO2018052458A1 (en) Social networking with video annotation
AU2005233653A1 (en) A media package and a system and method for managing a media package

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18781305

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2018781305

Country of ref document: EP

Effective date: 20191105