US20180019002A1 - Activating a video based on location in screen - Google Patents

Activating a video based on location in screen

Info

Publication number
US20180019002A1
US20180019002A1 US15/668,465 US201715668465A
Authority
US
United States
Prior art keywords
video
frame object
frame
gui
method
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
US15/668,465
Inventor
David McIntosh
Chris Pennello
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
ALC HOLDINGS Inc
Original Assignee
ALC HOLDINGS Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority to US201361761096P
Priority to US201361822105P
Priority to US201361847996P
Priority to US201361905772P
Priority to US14/173,753 (US9767845B2)
Application filed by ALC HOLDINGS Inc
Priority to US15/668,465
Publication of US20180019002A1
Assigned to REDUX, INC. reassignment REDUX, INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: MCINTOSH, DAVID, PENNELLO, CHRIS
Assigned to ALC HOLDINGS, INC. reassignment ALC HOLDINGS, INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: REDUX, INC.

Classifications

    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11B INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B 27/00 Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B 27/02 Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
    • G11B 27/031 Electronic editing of digitised analogue information signals, e.g. audio or video signals
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object or an image, setting a parameter value or selecting a range
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object or an image, setting a parameter value or selecting a range
    • G06F 3/04842 Selection of a displayed object
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object or an image, setting a parameter value or selecting a range
    • G06F 3/0485 Scrolling or panning
    • G06F 3/04855 Interaction with scrollbars
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06K RECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K 9/00 Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
    • G06K 9/00624 Recognising scenes, i.e. recognition of a whole field of perception; recognising scene-specific objects
    • G06K 9/00711 Recognising video content, e.g. extracting audiovisual features from movies, extracting representative key-frames, discriminating news vs. sport content
    • G06K 9/00744 Extracting features from the video content, e.g. video "fingerprints", or characteristics, e.g. by automatic extraction of representative shots or key frames
    • G06K 9/00751 Detecting suitable features for summarising video content
    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11B INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B 20/00 Signal processing not specific to the method of recording or reproducing; Circuits therefor
    • G11B 20/00007 Time or data compression or expansion
    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11B INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B 27/00 Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B 27/02 Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
    • G11B 27/031 Electronic editing of digitised analogue information signals, e.g. audio or video signals
    • G11B 27/034 Electronic editing of digitised analogue information signals, e.g. audio or video signals on discs
    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11B INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B 27/00 Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B 27/02 Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
    • G11B 27/031 Electronic editing of digitised analogue information signals, e.g. audio or video signals
    • G11B 27/036 Insert-editing
    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11B INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B 27/00 Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B 27/10 Indexing; Addressing; Timing or synchronising; Measuring tape travel
    • G11B 27/102 Programmed access in sequence to addressed parts of tracks of operating record carriers
    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11B INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B 27/00 Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B 27/10 Indexing; Addressing; Timing or synchronising; Measuring tape travel
    • G11B 27/102 Programmed access in sequence to addressed parts of tracks of operating record carriers
    • G11B 27/105 Programmed access in sequence to addressed parts of tracks of operating record carriers of operating discs
    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11B INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B 27/00 Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B 27/10 Indexing; Addressing; Timing or synchronising; Measuring tape travel
    • G11B 27/34 Indicating arrangements
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/47 End-user applications
    • H04N 21/472 End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N 21/47217 End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for controlling playback functions for recorded or on-demand content, e.g. using progress bars, mode or play-point indicators or bookmarks
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/47 End-user applications
    • H04N 21/482 End-user interface for program selection
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N 21/83 Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N 21/845 Structuring of content, e.g. decomposing content into time segments
    • H04N 21/8455 Structuring of content, e.g. decomposing content into time segments involving pointers to the content, e.g. pointers to the I-frames of the video stream
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N 21/85 Assembly of content; Generation of multimedia applications
    • H04N 21/854 Content authoring
    • H04N 21/8549 Creating video summaries, e.g. movie trailer
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00 Details of television systems
    • H04N 5/04 Synchronising

Abstract

A method is provided for browsing portions of videos called video previews. A video preview may be associated with a link or a predefined duration of a full video, such that the video preview is generated from a portion of the full video and viewed by a user. The video preview is configured to play a series of images associated with images from the portion of the full video when the video preview is activated.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • This application is a continuation of non-provisional U.S. patent application Ser. No. 14/173,753, filed on Feb. 5, 2014, which is a non-provisional of U.S. Patent Application No. 61/761,096, filed on Feb. 5, 2013, U.S. Patent Application No. 61/822,105, filed on May 10, 2013, U.S. Patent Application No. 61/847,996, filed on Jul. 18, 2013, and U.S. Patent Application No. 61/905,772, filed on Nov. 18, 2013, which are herein incorporated by reference in their entirety for all purposes.
  • This application is related to commonly owned and concurrently filed U.S. patent application Ser. No. 14/173,697, entitled “Video Preview Creation with Link” (Attorney Docket 91283-000710US-896497), U.S. patent application Ser. No. 14/173,715, entitled “User Interface for Video Preview Creation” (Attorney Docket 91283-000720US-897301), U.S. patent application Ser. No. 14/173,732, entitled “Video Preview Creation based on Environment” (Attorney Docket 91283-000730US-897293), U.S. patent application Ser. No. 14/173,740, entitled “Video Preview Creation with Audio” (Attorney Docket 91283-000740US-897294), U.S. patent application Ser. No. 14/173,745, entitled “Generation of Layout of Videos” (Attorney Docket 91283-000750US-897295), which are herein incorporated by reference in their entirety for all purposes.
  • BACKGROUND
  • Users commonly provide video content to websites (e.g., YouTube®), which can be referred to as “posting a video.” For example, the user can associate a title, a static thumbnail image, and/or a textual description with the video. Other users (e.g., viewers) can access and view this content via the websites. For example, the viewers can see a video's title and static thumbnail before deciding whether to play the full video. However, the viewers may find it difficult to select particular videos of interest because the title may not be descriptive of the contents of the video, the static thumbnail image may not summarize the essence of the video, or the textual description may be a poor signal for whether the video will be interesting to the viewer. Thus, viewers may spend significant amounts of time searching for and watching videos that they do not enjoy.
  • Additionally, if the viewer selects and starts watching a video, it often takes a significant amount of time before the viewer can determine whether they like the video and want to keep watching, or whether they want to select another video. Even when videos are relatively short (e.g., 3 minutes), the viewer may watch a substantial portion of a video before they can determine whether they are interested in viewing it. This process can be frustrating to viewers who are accustomed to the instant gratification provided by other consumer internet services, and the viewers may stop watching internet videos because it takes too long for them to find an interesting video to watch.
  • SUMMARY
  • Embodiments of the present invention provide methods, systems, and apparatuses for viewing portions of videos called “video previews.” Once the video previews are created, they may be associated with a video channel (e.g., a collection of videos) for a viewer to browse. Each video channel or video in a channel can provide a short, playable video preview that users can view to better decide whether to watch the full video or video channel. For example, when the video preview moves to a particular location on a display, the video preview may start playing (e.g., within a frame object). In another example, if the viewer selects a video preview, a full video associated with the video preview can be provided. In another example, a video preview can represent the video channel or collection, and a selection of the video preview can provide an interface with video previews of the videos of the channel. Also, embodiments can organize video previews and channels to be visually pleasing and efficient for a viewer.
  • Other embodiments are directed to systems and computer readable media associated with methods described herein.
  • A better understanding of the nature and advantages of the present invention may be gained with reference to the following detailed description and the accompanying drawings.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 shows a flowchart illustrating a method of creating a video preview, organizing the video previews, and providing a user interface that includes the video previews according to an embodiment of the present invention.
  • FIG. 2 shows block diagrams of various subsystems used to generate or provide a video preview.
  • FIG. 3 shows a flowchart illustrating a method of generating a video preview object according to an embodiment of the present invention.
  • FIG. 4 shows a graphical user interface for browsing one or more video previews according to an embodiment of the present invention.
  • FIG. 5 shows a graphical user interface for browsing one or more video previews according to an embodiment of the present invention.
  • FIG. 6 illustrates video previews that are configured to play based on the location of other video previews according to an embodiment of the present invention.
  • FIG. 7 illustrates video previews that are configured to play based on the location of other video previews according to an embodiment of the present invention.
  • FIG. 8 illustrates the correlation of video previews with audio according to an embodiment of the present invention.
  • FIG. 9 shows a block diagram of a computer apparatus according to an embodiment of the present invention.
  • DEFINITIONS
  • A “video preview” is a visual representation of a portion of a video (also referred to as a “full video” to contrast a “video preview” of the video). The full video may correspond to the entirety of a video file or a portion of the video file, e.g., when only a portion of the video file has been streamed to a user device. The video preview is shorter (e.g., fewer images, less time) than the full (e.g., more images, longer time, substantially complete) video, and the full video can in turn be shorter than the complete video file. The preview can convey the essence of the full video. In various embodiments, a preview can be a continuous portion of the full video or include successive frames that are not continuous in the full video (e.g., two successive frames of the preview may actually be one or more seconds apart in the full video).
  • A “frame object” is an object on a GUI that is configured to play a video preview (e.g., an iframe, a frame in a current window, a frame buffer object (FBO)). Frame objects (e.g., placeholders, 2-dimensional boxes, windows, squares) can be generated by a computing device for displaying the video preview. In some embodiments, the frame object will also provide filters or effects for the video preview (e.g., defined by the computing device, or defined by a programming language that generates a frame object class).
  • A “composite of video previews” (also referred to simply as a “composite” or “preview composite”) is an area where one or more related video previews will be played. If the composite has one video preview, then the composite simply corresponds to playing the preview. When the composite includes multiple previews, many frame objects can be associated with each other, each playing a video preview. The video previews in a composite can each link to the same full video. In one embodiment, the creator of a preview can identify the previews to include in a composite, and the composite can exist as a single display object, where the previews of the composite start playing at the time the composite is activated. The shape of a composite of video previews can be a square (2 blocks×2 blocks, N blocks×N blocks) or rectangle (1×N, 2×N, 2×2 with blocks comprising unequal sides, N×N with blocks comprising unequal sides). The composite of video previews may have a right-wise or left-wise orientation.
  • A “cluster of video previews” (also referred to simply as a “cluster”) is a group of composites that are grouped together because they are related in some way. In one embodiment, the composites may relate to each other and be placed in a cluster simply by being next to each other in a stream of video previews pulled from a queue by a computing device. In another embodiment, the composites in a cluster may be filtered by category, popular items, and/or trending items, and thus the composites of a cluster may be related to each other by the matching filter criteria. Each composite in the cluster can link to different full videos.
  • A “channel” or “collection” is a group of related videos that is accessible by a user and organized in a layout. For example, a user may decide to associate three composites (e.g., video previews relating to baking a cake, baking a cookie, and freezing ice cream) in a single collection called “Desserts I Love.” Once the user associates the three composites, the composites may form a cluster. The cluster may be organized in a layout that will make it aesthetically pleasing for other users to view the cluster. The user can access their collection called “Desserts I Love” through a web browser, like a bookmarked area or toolbar, at a later time. Once the user accesses their collection, each of the composites that the user has associated with this collection can be displayed so the user can easily access them again.
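The nesting of previews, composites, clusters, and channels defined above can be sketched as plain data types. This is an illustrative model only: all type names, field names, and URLs below are hypothetical and not drawn from the specification.

```typescript
// Illustrative data model for the terms defined above.
// All names and URLs are hypothetical.

interface VideoPreview {
  fullVideoUrl: string; // link back to the full video
  startSec: number;     // start of the excerpted portion
  durationSec: number;  // previews are short, e.g. 1-10 seconds
}

// A composite groups one or more previews that link to the SAME full video.
interface Composite {
  previews: VideoPreview[];
}

// A cluster groups related composites; each composite in a cluster
// can link to a DIFFERENT full video.
interface Cluster {
  composites: Composite[];
}

// A channel/collection is a named, user-accessible group of related videos.
interface Channel {
  name: string;
  clusters: Cluster[];
}

// Example mirroring the "Desserts I Love" collection described above:
// three single-preview composites grouped into one cluster.
const desserts: Channel = {
  name: "Desserts I Love",
  clusters: [{
    composites: [
      { previews: [{ fullVideoUrl: "https://example.com/cake", startSec: 12, durationSec: 4 }] },
      { previews: [{ fullVideoUrl: "https://example.com/cookie", startSec: 0, durationSec: 5 }] },
      { previews: [{ fullVideoUrl: "https://example.com/ice-cream", startSec: 30, durationSec: 3 }] },
    ],
  }],
};
```

The nesting mirrors the definitions: a composite's previews all point at one full video, while sibling composites in a cluster may point at different full videos.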
  • DETAILED DESCRIPTION
  • Embodiments of the present invention can enhance video viewing by providing short, playable video previews. Users or viewers (used interchangeably) can watch the video previews provided through a website, computing device, messaging service, television, or other devices to better decide whether to watch a full video or channel of videos.
  • I. Providing Video Previews
  • FIG. 1 shows a flowchart illustrating a method 100 of creating a video preview, organizing the video previews, and providing a user interface that includes the video previews according to an embodiment of the present invention. The method 100 may comprise a plurality of steps for implementing an embodiment of creating a video preview with a link performed by various computing devices (e.g., video server, provider server, user device, third party server).
  • At step 110, a video preview may be generated. Embodiments of the invention may provide a graphical user interface for a user that allows the user to select a portion of a video (e.g., a full video) to use as a video preview. The system may generate the video preview based on the input received from the user. The input may be active (e.g., the user providing an identification of a video portion of a full video) or passive (e.g., a plurality of users view a section of the full video a threshold number of times, which identifies a video portion of a full video). Additional means of generating video previews can be found in U.S. patent application Ser. No. 14/173,697, entitled “Video Preview Creation with Link” (Attorney Docket 91283-000710US-896497), U.S. patent application Ser. No. 14/173,715, entitled “User Interface for Video Preview Creation” (Attorney Docket 91283-000720US-897301), U.S. patent application Ser. No. 14/173,732, entitled “Video Preview Creation based on Environment” (Attorney Docket 91283-000730US-897293), and U.S. patent application Ser. No. 14/173,740, entitled “Video Preview Creation with Audio” (Attorney Docket 91283-000740US-897294), which are incorporated by reference in their entirety.
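The passive input described in step 110, where repeated viewing by many users identifies a portion of the full video, could be implemented roughly as follows. This is a sketch under an assumed data shape (per-second view counts for the full video); the function name and parameters are illustrative, not part of the specification.

```typescript
// Hypothetical sketch: pick a preview window from per-second view counts.
// viewsPerSecond[i] = how many times users viewed second i of the full video.
function findPreviewWindow(
  viewsPerSecond: number[],
  windowSec: number, // desired preview length, e.g. 4 seconds
  threshold: number  // minimum total views for a window to qualify
): { start: number; end: number } | null {
  let bestStart = -1;
  let bestViews = -1;

  // Slide a fixed-length window across the video and total the views inside it.
  for (let start = 0; start + windowSec <= viewsPerSecond.length; start++) {
    let total = 0;
    for (let i = start; i < start + windowSec; i++) total += viewsPerSecond[i];
    if (total > bestViews) {
      bestViews = total;
      bestStart = start;
    }
  }

  // Only identify a portion if it was viewed a threshold number of times.
  if (bestViews < threshold) return null;
  return { start: bestStart, end: bestStart + windowSec };
}
```

For example, with view counts peaking in seconds 2-5, a 4-second window starting at second 2 would be selected, provided its total views meet the threshold.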
  • At step 120, one or more video previews may be organized into one or more channels or collections. For example, the method 100 can associate the video preview generated in step 110 (e.g., a 4-second animated GIF of a snowboarder jumping off a ledge) with a channel (e.g., a collection of videos about snowboarders). In some embodiments, the video previews may be organized in a group (e.g., a composite, a playable group, a cluster of video previews) and displayed on a network page. Additional information about the organization and layout of video previews can be found in U.S. patent application Ser. No. 14/173,745, entitled “Generation of Layout of Videos” (Attorney Docket 91283-000750US-897295), which is incorporated by reference in its entirety.
  • At step 130, a GUI may be provided with the video previews. For example, the GUI may provide one or more channels (e.g., channel relating to snowboarders, channel relating to counter cultures), one or more videos within a channel (e.g., a first snowboarding video, a second snowboarding video, and a first counter culture video), or a network page displaying one or more video previews. The video previews may be shared through social networking pages, text messaging, or other means.
  • II. System for Providing Video Previews
  • Various systems and computing devices can be involved in the workflows used to activate a video based on its location on a screen.
  • FIG. 2 shows block diagrams of various subsystems used to generate or provide a video preview. For example, the computing devices can include a video server 210, a provider server 220, a user device 230, or a third party server 240 according to an embodiment of the present invention. In some embodiments, any or all of these servers, subsystems, or devices may be considered a computing device.
  • The computing devices can be implemented in various ways without departing from the essence of the invention. For example, the video server 210 can provide, transmit, and store full videos and/or video previews (e.g., Ooyala®, Brightcove®, Vimeo®, YouTube®, CNN®, NFL®, Hulu®, Vevo®). The provider server 220 can interact with the video server 210 to provide the video previews. In some embodiments, the provider server 220 can receive information to generate the video preview (e.g., a timestamp to a location in the full video, the link to the full video, the full video file, a push notification including the link to the full video). The user device 230 can receive a video preview and/or full video to view, browse, or store the generated video previews. The third party server 240 can also receive a video preview and/or full video to view or browse the generated video previews. In some embodiments, the user device 230 or third party server 240 can also be used to generate the video preview or create a frame object.
  • Additional information about the video server 210, provider server 220, user device 230, and third party server 240 can be found in U.S. patent application Ser. No. 14/173,697, entitled “Video Preview Creation with Link” (Attorney Docket 91283-000710US-896497) and U.S. patent application Ser. No. 14/173,715, entitled “User Interface for Video Preview Creation” (Attorney Docket 91283-000720US-897301), which are incorporated by reference in their entirety.
  • In some embodiments, the video server 210, provider server 220, a user device 230, and third party server 240 can be used to activate a video preview. For example, the computing device (e.g., provider server 220, user device 230) may receive a video preview that comprises one or more images. The images may be associated with a video portion of the full video that corresponds to a series of images from the full video. The computing device can generate a frame object for displaying the video preview when the frame object is located on a display of a computing device. In response to the frame object moving to a particular location on the display, the frame object may be configured (e.g., by the computing device) to play the video preview within the frame object.
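The behavior just described, a frame object that plays its preview while at a particular location and stops otherwise, can be sketched as a small state holder. This is an illustrative sketch: the class, method names, and the optional iteration cap (mentioned later in connection with stopping playback) are assumptions, not the claimed implementation.

```typescript
// Hypothetical sketch of a frame object's activation behavior: the frame
// plays its preview while it sits at the particular location on the display,
// and can optionally stop after a set number of preview loops.
class FrameObject {
  playing = false;
  private iterations = 0;

  constructor(
    readonly previewId: string,
    private maxIterations = Infinity // optional cap on preview loops
  ) {}

  // Called by the GUI whenever the frame's position on the display changes.
  onPositionChanged(atParticularLocation: boolean): void {
    if (atParticularLocation && this.iterations < this.maxIterations) {
      this.playing = true;  // moved to the location: start the preview
    } else if (!atParticularLocation) {
      this.playing = false; // moved away: stop the preview
    }
  }

  // Called when one loop of the preview's image series completes.
  onLoopComplete(): void {
    this.iterations++;
    if (this.iterations >= this.maxIterations) this.playing = false;
  }
}
```

In use, the GUI would call `onPositionChanged` on every scroll or swipe, so playback follows the frame object's position without any explicit play command from the user.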
  • More than one frame object may be supported as well. For example, when a second video preview is received, the computing device can generate a second frame object for displaying the second video preview when the second frame object is located on a display of the computing device. In response to the second frame object moving to a particular location on the display, the second frame object may be configured (e.g., by the computing device) to play the second video preview.
  • In some embodiments, the video preview may be configured to stop playing (e.g., by the computing device). For example, a first video preview may be stopped from playing when the second frame object is located at the particular location on the display in the GUI. In another example, the video preview may be stopped from playing when an activation device (e.g., finger, mouse pointer) touches or taps on a screen of the computing device, or when the frame object is configured to stop playing after a certain number of iterations.
  • III. Playing a Video Preview
  • Video previews and/or frame objects can be created to display the video preview in a particular location on a screen. Video previews may be activated and selected. For example, a video preview can be configured to play a series of images when the video preview is tapped or selected by a user (e.g., activated) and retrieve a full video when the video preview is tapped or selected a second time (e.g., selected). The video preview may also be configured to be activated based on its location in a screen. In some examples, the particular location on the screen can be identified by a region of the screen, defined by two boundary lines, or just one (e.g., center) line.
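The three ways of identifying the particular location mentioned above (a region of the screen, two boundary lines, or a single center line) can each be expressed as a simple test on a frame object's bounding box. This is a sketch under assumed coordinate conventions (pixel coordinates with y increasing downward); all names are illustrative.

```typescript
// Assumed frame geometry: top/bottom edges in screen coordinates (pixels).
interface FrameRect { top: number; bottom: number; }

// 1) Region of the screen: activate when the frame is fully inside it.
function inRegion(f: FrameRect, regionTop: number, regionBottom: number): boolean {
  return f.top >= regionTop && f.bottom <= regionBottom;
}

// 2) Two boundary lines: activate when the frame overlaps the band between them.
function betweenBoundaries(f: FrameRect, upper: number, lower: number): boolean {
  return f.bottom > upper && f.top < lower;
}

// 3) One line (e.g., screen center): activate when the frame straddles the line.
function crossesLine(f: FrameRect, line: number): boolean {
  return f.top <= line && f.bottom >= line;
}
```

Which test to use is a design choice: a single center line gives at most one naturally activated frame per column, while a band or region can admit several.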
  • FIG. 3 shows a flowchart illustrating a method of generating a video preview object according to an embodiment of the present invention (e.g., performed by a computing device including a user device 230). In some embodiments, a video may begin as a series of frames or images (e.g., in a raw format) that are encoded by the video server 210 into a full video. The encoding may reduce the size of the corresponding file and enable a more efficient transmission of the full video to other devices (e.g., provider server 220, user device 230). In some embodiments, the provider server 220 can transcode the full video (e.g., change the encoding of the full video to a different encoding, or re-encode it with the same encoding) in order to generate and transmit the video preview. For example, transcoding may change the start time of a video, its duration, or caption information.
  • The user may create a video preview that may later be accessed by a viewer. For example, the user may select the best 1-10 seconds of a video to convey the essence of the full video. The video preview can be shorter (e.g., fewer images, less time) than a full (e.g., more images, longer time, substantially complete) video. In some embodiments, the video preview comprises less than five percent of the images from the full video, or another relatively small number of images in comparison to the full video. The system associated with the GUI may generate a smaller file to associate with the video portion (e.g., animated GIF, MP4, collection of frames, RIFF). The system may provide the GUI on a variety of systems. For example, the GUI can be provided via an internet browser or client application (e.g., software configured to be executed on a device) and configured to run on a variety of devices (e.g., mobile, tablet, set-top, television).
  • At block 310, a computing device receives a video preview. For example, a provider server 220 can transmit a video preview to a user device 230 or a third party server 240 through a GUI or application (e.g., a web browser). The video preview may comprise a set of images, e.g., from a portion of a full video, multiple portions from one full video, or multiple portions from multiple full videos.
  • In one embodiment, the computing device can be used to browse a website that contains video previews. For example, a browser can be used to access a server that stores the video previews. The server can send video previews to the computing device (e.g., user device 230). The server can send a series of video previews at one time, so that a browser of the computing device can display the video previews consecutively without having to wait for communications from the server to provide a next video preview.
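The batched delivery just described, where the server sends a series of previews at one time so the browser need not wait between items, can be sketched as a simple buffered queue on the client. This is an illustrative sketch; the class name, callback shape, and batch sizes are assumptions.

```typescript
// Hypothetical sketch: the server delivers previews in batches so the
// browser can display the next preview without a round trip per item.
class PreviewQueue {
  private queue: string[] = [];

  constructor(
    private fetchBatch: (count: number) => string[], // e.g. wraps a server call
    private batchSize = 4,
    private lowWater = 1 // refill when this few previews remain buffered
  ) {}

  // Return the next preview to display, refilling the buffer as needed.
  next(): string | undefined {
    if (this.queue.length <= this.lowWater) {
      this.queue.push(...this.fetchBatch(this.batchSize));
    }
    return this.queue.shift();
  }
}
```

Refilling at a low-water mark rather than on exhaustion lets the browser display previews consecutively while the next batch is being fetched.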
  • At block 320, a frame object is generated for displaying the video preview. For example, the video preview may play the set of images within the frame object when the frame object is located at a particular location on a display in a GUI of a computing device. The frame object may be configured to play the video preview within the boundaries of the object.
  • At block 330, a series of frame objects may be displayed (e.g., in a GUI). For example, the series can include the frame object. In some embodiments, the GUI can allow motion of the frame objects in the GUI. In some embodiments, the user may touch a surface of the computing device that displays the GUI or use an activation device to move the frame objects. For example, when two frame objects are displayed on the GUI, the user can swipe the screen (e.g., touch a surface of a computing device and drag an activation device, including a finger, mouse pointer, etc.) to move the second frame object to the particular location on the display.
  • At block 340, the frame object can be identified as having moved to a particular location. For example, the computing device may identify that the frame object has moved to a particular location on the display in the GUI. The computing device may identify the current location of the frame object (e.g., pixel location, relative window location, in relation to a “scrolling line” as shown in FIGS. 6-7, etc.) and compare the current frame location with the particular location (e.g., as defined within the GUI, defined by the video preview, or defined within the frame object using data obtained from the GUI or a server). When the video preview is placed in a particular location (e.g., the middle of a browser window, 10 pixels from the edge of the screen, in screen), the video preview can be activated.
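The comparison of the current frame location with the particular location can be sketched as a simple geometric test. The following Python sketch assumes pixel coordinates and uses the "middle of the window" as the particular location; all names and the tolerance value are illustrative assumptions, not part of the specification:

```python
from dataclasses import dataclass

@dataclass
class FrameObject:
    x: float        # left edge of the frame in the window, in pixels
    y: float        # top edge of the frame in the window, in pixels
    width: float
    height: float

def is_at_particular_location(frame: FrameObject, view_width: float,
                              view_height: float, tolerance: float = 50.0) -> bool:
    """Return True when the frame's center lies within `tolerance` pixels of
    the center of the visible window (one possible "particular location")."""
    frame_cx = frame.x + frame.width / 2
    frame_cy = frame.y + frame.height / 2
    return (abs(frame_cx - view_width / 2) <= tolerance
            and abs(frame_cy - view_height / 2) <= tolerance)
```

A centered frame object would pass this test and be activated, while one near a corner would not; other embodiments could instead test "10 pixels from the edge of the screen" or any other region.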
  • The movement to a particular location on the display may activate the video preview. For example, the user may activate the video preview by swiping the frame object to the left so that it is in the middle of the screen; depending on the embodiment, multiple video previews may play simultaneously, or only one visible video preview may play at a time. In some embodiments, the user may touch a surface of the computing device that displays the GUI or use an activation device to activate the frame objects. For example, when two frame objects are displayed on the GUI and the first frame object is playing the first video preview, the user can swipe the screen (e.g., touch a surface of a computing device and drag an activation device, including a finger, mouse pointer, etc.) to activate the second video preview.
  • The deactivation of a preview can be similarly determined by a location, which may be the same as or different from the particular location used for activation. For example, the video preview can be deactivated when it leaves the particular location or reaches another location. Various embodiments can restrict the number of previews playing at any one time.
  • At block 350, the video preview may be played within the frame object. For example, the video preview may be played in response to movement. Once the frame object is identified as moving, or once it has moved to a particular location on the display, the video preview may be played within the frame object.
  • Once the activation is identified, the browser can play the video portion. For example, the browser can play the video portion in a continuous loop, a limited loop that would stop after a certain number of times, or stop after one time. The browser may also play the video preview forwards and/or backwards.
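The loop modes above (continuous loop, limited loop, forwards and backwards) can be sketched as a generator of frame indices. The "ping-pong" forwards-then-backwards ordering and the function name below are illustrative assumptions; the specification does not mandate a particular playback order:

```python
def pingpong_indices(frame_count: int, loops: int):
    """Yield frame indices for a preview played forwards then backwards,
    repeated for `loops` passes (a limited loop)."""
    if frame_count < 2:
        yield from [0] * frame_count * loops
        return
    forward = list(range(frame_count))               # 0, 1, ..., n-1
    backward = list(range(frame_count - 2, 0, -1))   # n-2, ..., 1 (endpoints not repeated)
    for _ in range(loops):
        yield from forward + backward
```

A continuous loop would simply iterate this generator indefinitely, and a stop-after-one-time mode would yield only the forward pass.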
  • A. Composites
  • Composites may be supported. For example, a composite may be an area where one or more related video preview blocks will be played, each video preview in a composite linking to the same full video. When a 2×2 composite is placed in a particular location (e.g., more than 50% visible to a user, in the middle of the browser, when the adjacent video preview is fully viewable), the video preview can be activated. In yet another embodiment, the location of other video previews around a first video preview can determine when the first video preview is activated. In still another embodiment, a first video preview can be activated once an adjacent video preview has played once.
  • In an embodiment, frame objects can be associated with a composite of video previews. For example, a first frame object can be associated with a composite of video previews, which includes one or more frame objects that move together when one of the frame objects is moved in the GUI. The first frame object can display a first video preview. When the composite of video previews is activated, the first video preview can play. Other frame objects may also be displayed by the GUI. For example, when a second frame is displayed that is not part of the composite of video previews, the second frame object may play. Alternatively, in response to identifying that the composite of video previews is located at the particular location in the GUI of the computing device, the video previews associated with the composite of video previews can play instead of the second video preview. In some examples, the video previews associated with the composite of video previews can play simultaneously, without also playing the second video preview associated with the second frame object.
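The composite behavior described above can be sketched as follows; the class and function names are illustrative assumptions. Every member of an activated composite plays together, while frame objects outside the composite remain paused:

```python
class FrameObject:
    """A display slot that can play a video preview."""
    def __init__(self, name: str):
        self.name = name
        self.playing = False

class Composite:
    """One or more frame objects that move, and activate, as a unit."""
    def __init__(self, members):
        self.members = list(members)

def activate_composite(composite: Composite, all_frames):
    """Play every preview in the composite; pause frame objects outside it."""
    members = {id(m) for m in composite.members}
    for frame in all_frames:
        frame.playing = id(frame) in members
```

In this sketch, activating a 2×2 composite starts all four of its previews simultaneously and stops any second frame object that is not part of the composite.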
  • B. Selecting Video Preview
  • The user may also select the video preview (e.g., double-tapping the video preview, double-clicking) and the browser can transmit a selection request to the server. The computing device can retrieve the link associated with the full video (e.g., from a data store associated with the video server 210, the data store that stores the full video in an original location). For example, in response to receiving a selection of the frame object from a user, the full video may be played. The full video can correspond with the video preview and include at least a portion of the set of images that were included in the video preview.
  • In an embodiment, the computing device may also retrieve metadata (e.g., title, caption). For example, if the viewer of the video preview uses a web browser that translates all webpages and text to Spanish, the caption can be translated to Spanish as well. The caption may be adjusted by color, font, size, position on a screen, or other options as well. The caption may be a “soft caption” that is independent of a video preview file, instead of a “hard caption” that is part of the file format. Metadata may also include categories or classifications of the video preview (e.g., to help organize a video preview on a display).
  • The browser can display the full video from the link or an application. The location of the full video can be external (e.g., services provided by Hulu®, Netflix®, or iTunes®) or internal to the computing device. In an embodiment, the full video can continue seamlessly from the point when the video preview stopped.
  • IV. Browsing One or More Video Previews
  • Users may browse one or more video previews through the use of a computing device (e.g., user device 230). The video previews can be displayed via a software application, such as a graphical user interface (GUI), web browser, television application, or other method of viewing digital images.
  • A. GUI for Browsing Video Previews
  • FIG. 4 shows a GUI for browsing one or more video previews according to an embodiment of the present invention. The GUI 400 can allow a user to browse a plurality of video previews through one or more frame objects 410, 420, 430. The series of frames may be reserved locations associated with the GUI for static images, video previews, and the like. The series of frame objects may comprise a first frame object 410, a second frame object 420, and a third frame object 430. The GUI 400 may also include a series of menu options, a selected menu option, a channel, a background, or other information for browsing and playing video previews.
  • As shown, more than one frame object may be displayed at a particular time. For example, a first frame object and second frame object can be displayed at the same time, so that at least a portion of the first and second frame objects are displayed by the GUI simultaneously. Displaying the frame objects may not be a factor in playing the video preview, which may instead rely on the location of the frame on the display or the activation of associated frame objects in a composite.
  • The frame objects may be placed at relative positions in the GUI for the video previews to play. Each position may or may not have a playing video preview. In an embodiment, the first frame 410 can play the video preview because the first frame is located in substantially the middle of the GUI. The second frame 420 and the third frame 430 would not be in the middle of the screen, so these frames could display a static image or a portion of the static image associated with the video preview.
  • In an embodiment, the viewer can activate the thumbnail associated with the video preview by zero-click viewing (e.g., based on the location of the video preview and not a click or selection of a video preview). For example, once a video preview is placed in a particular location (e.g., the middle of a browser window, 10 pixels from the edge of the screen, off screen, in screen), the video preview can be activated.
  • In some embodiments, the activation by the user can transmit an activation request from the browser to a computing device. In other embodiments, the activation may be contained within the frame object, so that no correspondence is needed between the frame object and the computing device. For example, the computing device can retrieve a cataloged video portion associated with the activation request. The computing device can transmit the video portion to the browser (e.g., receive a video preview from a video server 210 and forward to a browser, receive a video preview from a data store with a provider server 220 and forward to a user device 230).
  • In some embodiments, the frame object may represent a link to a full video or video channel. The viewer can select (e.g., double-tap, click) the activated video preview to access the full video. The viewer may be redirected to another application to view the full content. For example, a viewer may view Scarface on a GUI related to one of several video services (Netflix®, Hulu®, Comcast® television) from her iPad. Alternatively, the viewer may open a browser tab to watch the full video, outside of the GUI disclosed herein. In another instance, the full video could play in the same place as the video preview in the GUI. In some embodiments, the channels may be organized and socially curated. A curator can decide the order and programming.
  • The series of frames can move to different locations in the GUI 400. For example, first frame 410 may be activated automatically because it is located in substantially the middle of the screen. The viewer can swipe the first frame 410 to the left. In response, the GUI would move the first frame 410 to the left-side of the screen and the second frame 420 would be placed in the middle of the screen. The third frame 430 may not be visible after the swipe. The second frame 420 could be automatically activated, because it is located in the middle of the screen, and the first frame 410 would no longer be activated, because it is on the left-side of the screen. When activated, the second frame 420 would provide the location for the video preview associated with the second frame to play.
  • In an embodiment, the video preview can begin to play in near-immediate response to a user's activation (e.g., when the video preview is in the middle of a screen and playing instantly) or selection (e.g., when the video preview was activated and began playing, and the user has double-tapped the location of the video preview to initiate the complete video). For example, the video preview may play for the user even when the image file has not completely loaded on the device. This may be advantageous because the user can get a substantially immediate response from the GUI and a more gratifying video preview experience.
  • In some embodiments, a video preview can play in response to selecting a frame object. For example, in response to selecting a first frame object, a second frame object can be provided that displays additional content that corresponds to the video preview.
  • The second frame object can be displayed in the GUI (e.g., with the first frame object, in a new GUI without the first frame object).
  • In some embodiments, the video preview can play automatically or independent of receiving an activation of the video preview from a user. For example, when the browser opens at a computing device, the video preview can start playing.
  • In some embodiments, the video preview can be activated or selected in response to an activation device (e.g., a finger, mouse pointer, keyboard shortcuts). For example, the frame object and/or video preview can be activated by placing an activation device in a substantially close proximity to a location of the frame object. In another example, the activation may occur when the activation device is placed over the location of the frame object (e.g., hovering). The video preview may be selected in response to an activation device as well. For example, the frame object is selected by placing an activation device in a substantially close proximity to a location of the frame object and clicking or tapping the frame object.
  • Further, the video previews may be encoded as images. This may be advantageous because the video preview could be shorter in length and a smaller file size. The video preview could load quickly on most connections with little to no buffering. The video previews provided by the frame object can play as the frame object is loaded.
  • Embodiments can provide a website or client application where users can browse short video previews and watch the full video or channel that is represented by the video preview. The website may provide additional aspects of video browsing, including a series of menu options, a selected menu option, one or more channels that organize the video previews, and a background. The series of menu options can be a list of categories (e.g., sports, movies, comedy). The menu option may become a selected menu option by being selected or touched. The channel can be a collection of frame objects for a particular category. The background may be a static image located behind the video previews, series of menu options, and channel. The background could be a video playing behind the currently selected channel. The GUI can be designed to let viewers browse videos, while still consuming the video that they had previously selected.
  • FIG. 5 shows an alternative GUI for browsing one or more video previews according to an embodiment of the present invention. For example, the GUI 500 on a computing device (e.g., mobile device, tablet, web browser, TV) can enable users to browse a hierarchy of video previews (e.g., video preview in a frame object, composite, cluster, channel). The GUI 500 can include an area for video previews 510, one or more channel previews and/or frame objects 520 (e.g., 521, 522, 523, 524, 525, 526), one or more playable groups of video previews or frame objects 530, a menu (e.g., menu options 540, 550, 560), and a background 570.
  • The one or more channel previews and/or frame objects 520 can be representative of a channel, where a channel is a collection of video previews of full videos of a same category. A channel preview can correspond to a video preview of the channel, a collection of video previews of the channel, or be a new video preview that is created from one video or multiple full videos of the channel. For example, when the user activates a frame object 521, the video preview associated with the frame object can start playing. When the user selects the frame object 521, the user can be directed to a channel of video previews, e.g., a GUI as depicted in FIG. 4. The video previews in the channel may be related (e.g., via metadata, by categories). In an embodiment, the activated channel on the GUI plays an associated video preview.
  • When the frame object is not activated, the video preview may not play. In some instances, the frame object can show thumbnail images in place of the activated video preview. When a user swipes, hits the right/left arrows or clicks, additional channels may be visible on the GUI and the video preview for a particular channel can play automatically.
  • In some embodiments, the user can select (e.g., double-tap, click) the activated video preview to access the full video. When the video preview represents a channel of videos, selecting the video preview may allow the user to access the full videos associated with the channel.
  • In one embodiment, one video preview may be activated by default. The video frame may be highlighted to signify that the frame has been activated and the video preview associated with the frame may begin to play at the location of the frame. The user can view the GUI on a device that displays an indicator (e.g., mouse pointer). When the indicator is located in near proximity to a channel (e.g., rolling over the channel location), the video preview may begin to play immediately.
  • In some examples, the series of frames 520 may be locations of channels and each channel location can display a static image (e.g., a thumbnail) or a video preview. In an embodiment, the series of frames 520 may be displayed as a grid, such that there are multiple frames in each row and multiple frames in each column.
  • The first frame 521 may be activated by default (e.g., because it is located in the upper left-hand corner of the series of frames 520) and play a video preview without the user's instructions (e.g., tapping, hovering a mouse pointer, swiping). In an embodiment, the first frame can be highlighted by a thicker border around the video preview to signify that the first frame 521 has been activated. When the user selects (e.g., double-tap, click) the activated video preview for a channel, the device may display a GUI so that the user can access the full videos associated with the channel.
  • A user may activate another channel (e.g., by placing a mouse pointer or indicator in near proximity to a channel), in which the video preview associated with that channel may begin to play. For example, if the user activates the second frame 522, a thicker border may be placed around the second frame 522 and the video preview associated with the second frame 522 can begin to play.
  • The frame objects can form a playable group 530 (e.g., including 522, 523, 525, 526). For example, when an activation of frame object 521 moves to frame object 522, the frame objects associated with frame object 522 (e.g., 523, 525, 526) can also begin to play because they are part of playable group 530. In some examples, the playable group 530 can be associated with a single full video, such that when any of the frame objects associated with the playable group is selected, the GUI will display the full video associated with the playable group 530.
  • Other functionality may be provided by GUI 500 as well. For example, the user may be able to browse any video previews that are highlighted or currently playing 540, any collections or channels 550 that the user is associated with, and video previews or channels associated with a particular category 560. A background 570 may also appear on the GUI 500 (e.g., thumbnail, video preview, text).
  • B. Identifying that the Frame Object Moves to a Particular Location
  • FIG. 6 illustrates video previews that are configured to play based on the location of other video previews according to an embodiment of the present invention. As shown, the cluster of video previews in GUI 610 includes a first video preview 630, a second video preview 640, and a scrolling line 650. For example, the first video preview 630 may be a composite of video previews in a 1×2 grid (1 column by 2 rows) from a cross country driving video showing a car driving near snow, and the second video preview 640 may be a composite of video previews in a 1×2 grid from a racing video showing a car drifting on an empty freeway. These two sets of videos can be grouped in a cluster and played for the user simultaneously.
  • When a cluster of video previews comes into view, the cluster can start playing. The videos associated with the cluster of video previews can be grouped by any method, including when the video previews share a visual consistency and/or come from a similar category. The consistency can make logical sense to avoid seeming jarring for the user. The cluster can also portray visual beauty.
  • The plurality of video previews can be configured to play based on the location of other video previews. For example, as shown in GUIs 610 and 620, the layout shows two video previews, a first video preview 630 and a second video preview 640. In GUI 610, the first video preview 630 is playing and the second video preview 640 is not playing. In this embodiment, the second video preview 640 is fully displayed, but it has not been activated while the first video preview 630 is still activated. The activation of the second video preview can be dependent on the activation of the first video preview. In GUI 620, the second video preview is activated and playing the video preview, once the first video preview is no longer activated. For example, the video preview may become active as it moves toward the middle of the display.
  • C. Identifying a Scrolling Line
  • As illustrated in FIG. 6, a scrolling line 650 can be identified to determine which video preview can be activated to play at a particular location. The scrolling line 650 may be an invisible line across the screen of the user device 230 that helps identify which video preview (e.g., or playable group of video previews, cluster of video previews, etc.) should play at a particular time.
  • The approach for determining which playable groups play as the user scrolls through a grid of items depends on several ideas. For example, playable groups are generally constructed such that their height may not exceed the height of the screen of the user device 230. In another example, some playable groups can play by default (e.g., when the screen views the top of the layout, the first playable group should play, when the screen views the bottom of the layout, the last playable group should play, etc.). For playable groups near the middle of the layout, a scrolling line may be used to determine which playable group can play (e.g., if more than one playable group is visible on the screen of the user device 230).
  • When the user has scrolled to a particular location, the GUI can compute the position of the scrolling line 650 on the screen of the user device 230. For example, the determination of the scrolling line position “s,” given a “y” offset (e.g., y can represent the amount by which the user has scrolled down from the top) can be expressed by the following general formula, with “v” as the height of the screen at the user device 230 and “c” as the height of the video previews in the layout (e.g., the grid):

  • When y < v: s = (3/2)y

  • When v ≤ y ≤ 2v: s = y + (v/2)

  • When y > 2v: s = y + v − (1/2)(c − v − y)
  • Some optimizations and/or special considerations may be made. For example, if the content height is less than thrice the view height, but greater than twice the view height, the scrolling line may be a line from 0 to c. The scrolling line position can be expressed in the following formula:

  • When 2v < c < 3v: s = y(c/(c − v))
  • In another example, a padding can be added to the layout. For example, when a short grid is considered, the padding can help ensure that the last playable group can play (e.g., a scrolling line from 0 to c may not play the last playable group because the user may not be able to scroll far enough for the scroll line to intersect the last playable group). The padding can be added to the height of “c” (e.g., the height of the video previews in the layout), such that c=2v and:

  • When the grid is short (c = 2v): s = 2y
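The piecewise rules above can be collected into a single function. This is a sketch only; the order in which the special c-dependent cases take precedence over the general y-dependent formula at their boundaries is an assumption:

```python
def scrolling_line(y: float, v: float, c: float) -> float:
    """Scrolling-line position s for scroll offset y, view height v,
    and content height c, following the piecewise rules in the text."""
    if c == 2 * v:             # short grid with padding added so that c = 2v
        return 2 * y
    if 2 * v < c < 3 * v:      # content between twice and thrice the view height
        return y * (c / (c - v))
    if y < v:                  # near the top of the layout
        return 1.5 * y
    if y <= 2 * v:             # middle of the layout
        return y + v / 2
    return y + v - 0.5 * (c - v - y)   # near the bottom of the layout
```

For example, with a view height of 100 and a tall grid, scrolling down 50 places the line at 75 (moving faster than the scroll so it sweeps the whole screen), and scrolling down 150 places it at 200.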
  • Once the scrolling line is determined, the playable group that is currently intersecting the scrolling line may play. For example, the GUI may analyze contiguous offset ranges of each of the playable groups and/or cluster of video previews and place them into an interval tree (e.g., a binary interval tree that permits O(log(n)) analysis and/or lookups). An interval may be a set of real numbers with the property that any number that lies between two numbers in the set is also included in the set. The interval tree may use these intervals to allow an efficient lookup of playable groups. The lookup can find the intervals that overlap with any given interval. In some examples, the interval tree is dynamic (e.g., the interval tree can allow insertion and deletion of intervals dynamically).
  • Other data structures (e.g., ordered tree data structures in addition to an interval tree) are available as well without departing from the essence of the invention. For example, leaf contiguous intervals can be stored in an interval tree with a linked list. This can permit linear-time traversal for slow scrolling (e.g., when the user is progressing linearly from one interval to the next). The tree can allow for efficient lookups if the user scrolled very quickly from one position in the grid to another. In another example, a skip list can be used. A skip list can also permit general logarithmic random lookups, but also simultaneously permits linear traversal of the data.
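Because the offset ranges of the playable groups are contiguous and non-overlapping, the interval lookup can be sketched with a binary search over sorted interval starts, which gives the same O(log n) lookup as the interval tree described above; all names here are illustrative assumptions:

```python
import bisect

class PlayableGroupIndex:
    def __init__(self, intervals):
        # intervals: (start_offset, end_offset, group) tuples covering the
        # contiguous, non-overlapping offset range of each playable group.
        self.intervals = sorted(intervals)
        self.starts = [start for start, _end, _group in self.intervals]

    def group_at(self, s):
        """Return the group whose [start, end) range contains scrolling-line
        position s, or None if s falls outside every range."""
        i = bisect.bisect_right(self.starts, s) - 1
        if i < 0:
            return None
        start, end, group = self.intervals[i]
        return group if start <= s < end else None
```

A full interval tree or skip list would additionally support overlapping intervals and dynamic insertion/deletion, as the text notes.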
  • D. Identifying that a Second Frame Object is Still Active
  • FIG. 7 illustrates video previews that are configured to play based on the location of other video previews according to an embodiment of the present invention. As shown, GUIs 710 and 720 include a plurality of video previews. GUI 710 can display a plurality of video previews 730, 740 and GUI 720 can represent the same video previews at a later point in time. A scrolling line 750 may also be calculated. As shown, the video previews with a solid line are illustrated as active video previews and the video previews with a dashed line are illustrated as not active video previews. For example, video preview 730 may be playing at the time associated with GUI 710 and may be off-screen at the time associated with GUI 720. Video preview 740 may be viewable at both times associated with GUI 710 and GUI 720.
  • As shown in GUI 710, the first video preview 730 occupies nearly 1/3 of the screen, while the second video preview 740 occupies nearly 2/3 of the screen. However, the second video preview 740 may not be activated (e.g., based on the configuration when generated, based on a location relative to other video previews in a composite). As shown in GUI 720, the second video preview 740 may still be activated even though the video preview is not fully displayed. The activation may be due to the calculation of the scrolling line 750 and determining which one or more video previews are intersecting the scrolling line 750 at a particular point of time.
  • E. Determination of Location(s)
  • As shown in FIGS. 4-7, the frame objects and/or video previews can occupy various portions of the screen to be activated, including the middle of the viewable area, center, left, right, outside the viewable area, or substantially close to any such location. The computing device may determine the location of the frame object in a variety of ways. For example, the computing device (e.g., provider server 220, user device 230) can create a frame object that identifies the coordinates of the object. When the frame object is loaded at a particular coordinate setting, the frame object can be activated and play the video preview within the frame object. In another example, the computing device can create the frame object to compare other object properties (e.g., imageview, button, iframe) with the user device's provided display specifications (e.g., scrollview.bounds). In yet another example, the computing device can create a frame object that identifies the frame object in relation to a group of frame objects (e.g., cluster, composite) and automatically start playing the video preview displayed by the frame object when the preview frame object starts playing (e.g., in a composite) or when the previous frame object stops playing (e.g., in a cluster).
  • F. Correlation with Audio
  • FIG. 8 illustrates the correlation of video previews with audio according to an embodiment of the present invention. The GUI 800 can include one or more video previews 810, 820, 830 and corresponding audio. The audio may correlate with the video preview, such that video preview 820 corresponds with a first audio 850, video preview 810 corresponds with a second audio 860, and video preview 830 corresponds with a third audio 870. Alternatively, a general audio file 880 can correspond with the plurality of video previews.
  • Audio may correlate with one or more video previews. For example, a GUI 840 can display one or more audio files associated with one or more video previews in a temporal-based interface (e.g., the x-axis/horizontal-axis is the progression of time in which the audio plays). The user may be able to view or adjust (e.g., dragging, tapping, clicking, sliding) a visual representation of the audio (e.g. 850, 860, 870, the background of GUI 840) in order to correlate the audio as the audio plays with the video preview (e.g., duration, pitch, volume, fading) through the use of the GUI 840. In another example, the computing device can adjust the audio. Additional information relating to audio with video previews can be found in U.S. patent application Ser. No. 14/173,740, entitled “Video Preview Creation with Audio” (Attorney Docket 91283-000740US-897294), which is incorporated by reference.
  • When audio corresponds with a particular video preview, the location of the video preview may affect the audio (e.g., pitch, volume, fading, Doppler shift). For example, the audio may play while these previews are selected and/or activated, and the video associated with these groups of video previews are playing. The audio associated with other composites may be silent. When a user swipes or changes the focus (e.g., clicks on an object displayed by the GUI other than the video preview) so that other video previews and/or frame objects become activated, the system can gauge, in a Doppler Shift-like audio effect, how the audio would change if an object were moving farther away from the user.
  • In another example, the pitch may change to seem like one video preview is physically moving farther away from the user. For example, when first audio 850 corresponding with video preview 820 is moved from one location to another location (e.g., from the center location to the far left location, partially off the screen), the pitch for first audio 850 may be adjusted to simulate that the source of the audio is moving farther from the user. In another example, when second audio 860 corresponding to video preview 810 is moved from one location to another location (e.g., from far left location to the center location), the pitch for second audio 860 may be adjusted to simulate that the source of the audio is moving closer to the user. The pitch may also change for the audio associated with other composites to simulate the videos physically moving closer to the user.
  • In some embodiments, the audio may fade in or out. For example, there may be one video preview associated with one audio on a GUI. When the video preview appears on the screen (e.g., moving from right to left, video preview 810), the audio can fade in as the video preview moves closer to the center of the screen. The audio may also fade out the farther the video preview moves from the center of the screen (e.g., near the location of video preview 830). In another example, in response to identifying that the frame object is located at the particular location on the display in the GUI of the computing device, the audio can be faded in or out, irrespective of the location of the video preview to the center of the screen (e.g., top of the screen to bottom of the screen, closer or farther from the corner of the screen).
  • When more than one video preview is identified on a GUI, the locations of both video previews can affect the audio. For example, in response to identifying that the first frame object is located at the particular location on the display in the GUI, a first audio file (that corresponds with the first video preview) can fade out and a second audio file (that corresponds with the second video preview) can fade in.
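One common way to realize the simultaneous fade-out/fade-in above is an equal-power crossfade, which keeps perceived loudness roughly constant during the transition. The specific curve is an assumption for illustration; the description does not prescribe one:

```python
import math

def crossfade_gains(progress: float) -> tuple[float, float]:
    """Return (gain_out, gain_in) for two audio tracks.

    `progress` runs from 0.0 (first frame object at the trigger
    location, first audio at full volume) to 1.0 (second frame
    object at the trigger location, second audio at full volume).
    Uses an equal-power cosine/sine pair so the combined loudness
    stays approximately constant mid-transition.
    """
    p = max(0.0, min(1.0, progress))
    return math.cos(p * math.pi / 2.0), math.sin(p * math.pi / 2.0)
```

As the user swipes the second frame object toward the particular location, `progress` would track the gesture, smoothly trading the first audio for the second.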
  • G. Sharing for Others to Browse
  • In one embodiment, users can share video previews through communication mechanisms such as social networks and email. Because the video previews are relatively small (e.g., a short duration, small file size, encoded to be easily transmittable), the video previews can be compelling to share with other people. In addition, because the file size of the video moment can be small, it can be embedded in the communication mechanism so that when the message is opened, the video moment will play instantly. This can further increase the virality of a video moment, as a recipient may not need to click on a link to see the video preview; instead, the recipient can see it play immediately, making it more likely that the recipient will forward the video preview on to other people. For instance, a video preview could be attached to an email, and the video preview could play instantly when another user receives the email. Alternatively, the video preview could be posted to social networks such as Facebook or Twitter and could play automatically within a stream of content the user is browsing.
  • V. Example Subsystems and Components
  • Any of the clients or servers may utilize any suitable number of subsystems. Examples of such subsystems or components are shown in FIG. 9. The subsystems shown in FIG. 9 are interconnected via a system bus 975. Additional subsystems such as a printer 974, keyboard 978, fixed disk 979, monitor 976, which is coupled to display adapter 982, and others are shown. Peripherals and input/output (I/O) devices, which couple to I/O controller 971, can be connected to the computer system by any number of means known in the art, such as input/output (I/O) port 977 (e.g., USB, FireWire®). For example, I/O port 977 or external interface 981 (e.g. Ethernet, Wi-Fi, etc.) can be used to connect the computer apparatus to a wide area network such as the Internet, a mouse input device, or a scanner. The interconnection via system bus allows the central processor 973, which may include one or more processors, to communicate with each subsystem and to control the execution of instructions from system memory 972 or the fixed disk 979 (such as a hard drive or optical disk), as well as the exchange of information between subsystems. The system memory 972 and/or the fixed disk 979 may embody a computer readable medium. Any of the data mentioned herein can be output from one component to another component and can be output to the user.
  • It should be understood that any of the embodiments of the present invention can be implemented in the form of control logic using hardware (e.g. an application specific integrated circuit or field programmable gate array) and/or using computer software with a generally programmable processor in a modular or integrated manner. As used herein, a processor includes a multi-core processor on a same integrated chip, or multiple processing units on a single circuit board or networked. Based on the disclosure and teachings provided herein, a person of ordinary skill in the art will know and appreciate other ways and/or methods to implement embodiments of the present invention using hardware and/or a combination of hardware and software.
  • Any of the software components or functions described in this application may be implemented as software code to be executed by a processor using any suitable computer language such as, for example, Java®, C++ or Perl using, for example, conventional or object-oriented techniques. The software code may be stored as a series of instructions or commands on a computer readable medium for storage and/or transmission. Suitable media include random access memory (RAM), read only memory (ROM), a magnetic medium such as a hard-drive or a floppy disk, an optical medium such as a compact disk (CD) or DVD (digital versatile disk), flash memory, and the like. The computer readable medium may be any combination of such storage or transmission devices.
  • Such programs may also be encoded and transmitted using carrier signals adapted for transmission via wired, optical, and/or wireless networks conforming to a variety of protocols, including the Internet. As such, a computer readable medium according to an embodiment of the present invention may be created using a data signal encoded with such programs. Computer readable media encoded with the program code may be packaged with a compatible device or provided separately from other devices (e.g., via Internet download). Any such computer readable medium may reside on or within a single computer program product (e.g. a hard drive, a CD, or an entire computer system), and may be present on or within different computer program products within a system or network. A computer system may include a monitor, printer, or other suitable display for providing any of the results mentioned herein to a user.
  • Any of the methods described herein may be totally or partially performed with a computer system including one or more processors, which can be configured to perform the steps. Thus, embodiments can be directed to computer systems configured to perform the steps of any of the methods described herein, potentially with different components performing a respective step or a respective group of steps. Although presented as numbered steps, steps of methods herein can be performed at a same time or in a different order. Additionally, portions of these steps may be used with portions of other steps from other methods. Also, all or portions of a step may be optional. Additionally, any of the steps of any of the methods can be performed with modules, circuits, or other means for performing these steps.
  • The specific details of particular embodiments may be combined in any suitable manner without departing from the spirit and scope of embodiments of the invention. However, other embodiments of the invention may be directed to specific embodiments relating to each individual aspect, or specific combinations of these individual aspects.
  • The above description of exemplary embodiments of the invention has been presented for the purposes of illustration and description. It is not intended to be exhaustive or to limit the invention to the precise form described, and many modifications and variations are possible in light of the teaching above. The embodiments were chosen and described in order to best explain the principles of the invention and its practical applications to thereby enable others skilled in the art to best utilize the invention in various embodiments and with various modifications as are suited to the particular use contemplated.
  • A recitation of “a”, “an” or “the” is intended to mean “one or more” unless specifically indicated to the contrary.

Claims (20)

1.-19. (canceled)
20. A method of browsing videos, the method comprising:
receiving a video, wherein the video comprises a set of images;
receiving, by a computing device, a frame object for displaying the video, wherein the set of images are played within the frame object when the frame object is located at a particular location on a display in a graphical user interface (GUI) of the computing device;
displaying, by the computing device, a series of frame objects in the GUI, the series including the frame object, wherein the GUI allows motion of the frame objects in the GUI;
identifying, by the computing device, that the frame object has moved to the particular location on the display in the GUI; and
playing, by the computing device, the video within the frame object in response to identifying that the frame object moves to the particular location on the display in the GUI.
21. The method of claim 20, wherein the video is a first video, the frame object is a first frame object, and the method further comprises:
receiving a second video;
receiving a second frame object for displaying the second video;
displaying the second frame object next to the first frame object in the GUI; and
in response to identifying that the second frame object is located at the particular location on the display in the GUI of the computing device, playing the second video within the second frame object.
22. The method of claim 21, further comprising stopping the first video from playing when the second frame object is located at the particular location on the display in the GUI.
23. The method of claim 21, wherein a user touches a surface of the computing device that displays the GUI and swipes the screen to move the second frame object to the particular location on the display and play the second video.
24. The method of claim 21, wherein a section of the first frame object is displayed while the second frame object is displayed, so that at least a portion of the first and second frame objects are displayed by the GUI simultaneously.
25. The method of claim 21, further comprising:
in response to identifying that the first frame object is located at the particular location on the display in the GUI of the computing device, fading a first audio file out, wherein the first audio file is associated with the first video; and
fading a second audio file in, wherein the second audio file is associated with the second video.
26. The method of claim 20, further comprising in response to receiving a selection of the frame object from a user, playing a full video that corresponds to the video, wherein the full video includes at least a portion of the set of images in the video.
27. The method of claim 20, wherein the video plays independent of receiving an activation of the video from a user.
28. The method of claim 20, wherein the frame object is activated by placing a selection device in a substantially close proximity to a location of the frame object.
29. The method of claim 28, wherein the selection device is a finger.
30. The method of claim 28, wherein the selection device is a mouse pointer.
31. The method of claim 20, wherein the frame object is selected by placing a selection device in a substantially close proximity to a location of the frame object, and clicking the frame object.
32. The method of claim 20, wherein the frame object is selected by placing a selection device in a substantially close proximity to a location of the frame object, and tapping the frame object.
33. The method of claim 20, wherein the video comprises less than five percent of the images from a full video.
34. The method of claim 20, wherein the frame object is a first frame object and the method further comprises:
associating the first frame object with a composite of videos, wherein the composite of videos includes one or more frame objects that move together when one of the frame objects is moved in the GUI;
displaying a second frame object that is not part of the composite of videos, wherein the second frame object displays a second video; and
in response to identifying that the composite of videos is located at the particular location in the GUI of the computing device, playing the videos associated with the composite of videos.
35. The method of claim 34, wherein the videos associated with the composite of videos play simultaneously while the second video associated with the second frame object is not played.
36. The method of claim 20, wherein the frame object is located at the particular location when the GUI is originally displayed by the computing device.
37. The method of claim 20, wherein the frame object is a first frame object and the method further comprises:
in response to selecting the first frame object, providing a second frame object that displays additional content that corresponds to the video, wherein the second frame object is displayed in the GUI.
38. A computer product comprising a non-transitory computer readable medium storing a plurality of instructions that when executed control a computer system to browse videos, the instructions comprising:
receiving a video, wherein the video comprises a set of images;
receiving a frame object for displaying the video, wherein the set of images are played within the frame object when the frame object is located at a particular location on a display in a graphical user interface (GUI) of a computing device;
displaying a series of frame objects in the GUI, the series including the frame object, wherein the GUI allows motion of the frame objects in the GUI;
identifying that the frame object has moved to the particular location on the display in the GUI; and
playing the video within the frame object in response to identifying that the frame object moves to the particular location on the display in the GUI.
US15/668,465 2013-02-05 2017-08-03 Activating a video based on location in screen Pending US20180019002A1 (en)

Priority Applications (6)

Application Number Priority Date Filing Date Title
US201361761096P true 2013-02-05 2013-02-05
US201361822105P true 2013-05-10 2013-05-10
US201361847996P true 2013-07-18 2013-07-18
US201361905772P true 2013-11-18 2013-11-18
US14/173,753 US9767845B2 (en) 2013-02-05 2014-02-05 Activating a video based on location in screen
US15/668,465 US20180019002A1 (en) 2013-02-05 2017-08-03 Activating a video based on location in screen

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US15/668,465 US20180019002A1 (en) 2013-02-05 2017-08-03 Activating a video based on location in screen

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US14/173,753 Continuation US9767845B2 (en) 2013-02-05 2014-02-05 Activating a video based on location in screen

Publications (1)

Publication Number Publication Date
US20180019002A1 true US20180019002A1 (en) 2018-01-18

Family

ID=51259288

Family Applications (11)

Application Number Title Priority Date Filing Date
US14/173,732 Abandoned US20140219634A1 (en) 2013-02-05 2014-02-05 Video preview creation based on environment
US14/173,745 Active 2034-08-20 US9589594B2 (en) 2013-02-05 2014-02-05 Generation of layout of videos
US14/173,697 Active US9530452B2 (en) 2013-02-05 2014-02-05 Video preview creation with link
US14/173,740 Active 2034-03-26 US9244600B2 (en) 2013-02-05 2014-02-05 Video preview creation with audio
US14/173,715 Active 2034-02-07 US9349413B2 (en) 2013-02-05 2014-02-05 User interface for video preview creation
US14/173,753 Active 2034-03-27 US9767845B2 (en) 2013-02-05 2014-02-05 Activating a video based on location in screen
US14/937,557 Active US9881646B2 (en) 2013-02-05 2015-11-10 Video preview creation with audio
US15/091,358 Active US9852762B2 (en) 2013-02-05 2016-04-05 User interface for video preview creation
US15/449,174 Active 2034-12-28 US10373646B2 (en) 2013-02-05 2017-03-03 Generation of layout of videos
US15/668,465 Pending US20180019002A1 (en) 2013-02-05 2017-08-03 Activating a video based on location in screen
US15/882,422 Pending US20180218756A1 (en) 2013-02-05 2018-01-29 Video preview creation with audio

Family Applications Before (9)

Application Number Title Priority Date Filing Date
US14/173,732 Abandoned US20140219634A1 (en) 2013-02-05 2014-02-05 Video preview creation based on environment
US14/173,745 Active 2034-08-20 US9589594B2 (en) 2013-02-05 2014-02-05 Generation of layout of videos
US14/173,697 Active US9530452B2 (en) 2013-02-05 2014-02-05 Video preview creation with link
US14/173,740 Active 2034-03-26 US9244600B2 (en) 2013-02-05 2014-02-05 Video preview creation with audio
US14/173,715 Active 2034-02-07 US9349413B2 (en) 2013-02-05 2014-02-05 User interface for video preview creation
US14/173,753 Active 2034-03-27 US9767845B2 (en) 2013-02-05 2014-02-05 Activating a video based on location in screen
US14/937,557 Active US9881646B2 (en) 2013-02-05 2015-11-10 Video preview creation with audio
US15/091,358 Active US9852762B2 (en) 2013-02-05 2016-04-05 User interface for video preview creation
US15/449,174 Active 2034-12-28 US10373646B2 (en) 2013-02-05 2017-03-03 Generation of layout of videos

Family Applications After (1)

Application Number Title Priority Date Filing Date
US15/882,422 Pending US20180218756A1 (en) 2013-02-05 2018-01-29 Video preview creation with audio

Country Status (1)

Country Link
US (11) US20140219634A1 (en)

Families Citing this family (75)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9826197B2 (en) 2007-01-12 2017-11-21 Activevideo Networks, Inc. Providing television broadcasts over a managed network and interactive content over an unmanaged network to a client device
US9773059B2 (en) * 2010-11-09 2017-09-26 Storagedna, Inc. Tape data management
US10042516B2 (en) * 2010-12-02 2018-08-07 Instavid Llc Lithe clip survey facilitation systems and methods
US9800945B2 (en) 2012-04-03 2017-10-24 Activevideo Networks, Inc. Class-based intelligent multiplexing over unmanaged networks
US9557885B2 (en) 2011-08-09 2017-01-31 Gopro, Inc. Digital media editing
USD732049S1 (en) 2012-11-08 2015-06-16 Uber Technologies, Inc. Computing device display screen with electronic summary or receipt graphical user interface
US20140219634A1 (en) 2013-02-05 2014-08-07 Redux, Inc. Video preview creation based on environment
WO2014145921A1 (en) 2013-03-15 2014-09-18 Activevideo Networks, Inc. A multiple-mode system and method for providing user selectable video content
US9326047B2 (en) 2013-06-06 2016-04-26 Activevideo Networks, Inc. Overlay rendering of user interface onto source video
US9270964B1 (en) * 2013-06-24 2016-02-23 Google Inc. Extracting audio components of a portion of video to facilitate editing audio of the video
US9620169B1 (en) * 2013-07-26 2017-04-11 Dreamtek, Inc. Systems and methods for creating a processed video output
USD745550S1 (en) * 2013-12-02 2015-12-15 Microsoft Corporation Display screen with animated graphical user interface
USD745551S1 (en) * 2014-02-21 2015-12-15 Microsoft Corporation Display screen with animated graphical user interface
WO2015134537A1 (en) 2014-03-04 2015-09-11 Gopro, Inc. Generation of video based on spherical content
US20150277686A1 (en) * 2014-03-25 2015-10-01 ScStan, LLC Systems and Methods for the Real-Time Modification of Videos and Images Within a Social Network Format
US9832418B2 (en) * 2014-04-15 2017-11-28 Google Inc. Displaying content between loops of a looping media item
US9788029B2 (en) 2014-04-25 2017-10-10 Activevideo Networks, Inc. Intelligent multiplexing using class-based, multi-dimensioned decision logic for managed networks
US10158847B2 (en) * 2014-06-19 2018-12-18 Vefxi Corporation Real—time stereo 3D and autostereoscopic 3D video and image editing
US20160026874A1 (en) 2014-07-23 2016-01-28 Gopro, Inc. Activity identification in video
US9685194B2 (en) 2014-07-23 2017-06-20 Gopro, Inc. Voice-based video tagging
KR20160031217A (en) * 2014-09-12 2016-03-22 삼성전자주식회사 Method for controlling and an electronic device thereof
US20160127807A1 (en) * 2014-10-29 2016-05-05 EchoStar Technologies, L.L.C. Dynamically determined audiovisual content guidebook
US10264293B2 (en) 2014-12-24 2019-04-16 Activevideo Networks, Inc. Systems and methods for interleaving video streams on a client device
US9734870B2 (en) 2015-01-05 2017-08-15 Gopro, Inc. Media identifier generation for camera-captured media
WO2016118537A1 (en) * 2015-01-19 2016-07-28 Srinivas Rao Method and system for creating seamless narrated videos using real time streaming media
US9679605B2 (en) 2015-01-29 2017-06-13 Gopro, Inc. Variable playback speed template for video editing application
US10375444B2 (en) * 2015-02-13 2019-08-06 Performance and Privacy Ireland Limited Partial video pre-fetch
US20160275989A1 (en) * 2015-03-16 2016-09-22 OZ ehf Multimedia management system for generating a video clip from a video file
CN104837050B (en) * 2015-03-23 2018-09-04 腾讯科技(北京)有限公司 Information processing method and terminal
EP3086321A1 (en) * 2015-04-24 2016-10-26 ARRIS Enterprises LLC Designating partial recordings as personalized multimedia clips
WO2016187235A1 (en) 2015-05-20 2016-11-24 Gopro, Inc. Virtual lens simulation for video and photo cropping
US9727749B2 (en) 2015-06-08 2017-08-08 Microsoft Technology Licensing, Llc Limited-access functionality accessible at login screen
US20170026721A1 (en) * 2015-06-17 2017-01-26 Ani-View Ltd. System and Methods Thereof for Auto-Playing Video Content on Mobile Devices
US20160372155A1 (en) * 2015-06-19 2016-12-22 Elmer Tolentino, JR. Video bit processing
US9715901B1 (en) * 2015-06-29 2017-07-25 Twitter, Inc. Video preview generation
KR101708318B1 (en) * 2015-07-23 2017-02-20 엘지전자 주식회사 Mobile terminal and control method for the mobile terminal
US20170031876A1 (en) * 2015-07-27 2017-02-02 Adp, Llc Web Page Generation System
US10324600B2 (en) 2015-07-27 2019-06-18 Adp, Llc Web page generation system
US20170060372A1 (en) * 2015-08-28 2017-03-02 Facebook, Inc. Systems and methods for providing interactivity for panoramic media content
US20170060404A1 (en) * 2015-08-28 2017-03-02 Facebook, Inc. Systems and methods for providing interactivity for panoramic media content
US9894393B2 (en) 2015-08-31 2018-02-13 Gopro, Inc. Video encoding for reduced streaming latency
US9721611B2 (en) 2015-10-20 2017-08-01 Gopro, Inc. System and method of generating video from video clips based on moments of interest within the video clips
US10204273B2 (en) 2015-10-20 2019-02-12 Gopro, Inc. System and method of providing recommendations of moments of interest within video clips post capture
US9923941B2 (en) 2015-11-05 2018-03-20 International Business Machines Corporation Method and system for dynamic proximity-based media sharing
CN105635837B (en) * 2015-12-30 2019-04-19 努比亚技术有限公司 A kind of video broadcasting method and device
US10095696B1 (en) 2016-01-04 2018-10-09 Gopro, Inc. Systems and methods for generating recommendations of post-capture users to edit digital media content field
US10109319B2 (en) 2016-01-08 2018-10-23 Gopro, Inc. Digital media editing
US9620140B1 (en) * 2016-01-12 2017-04-11 Raytheon Company Voice pitch modification to increase command and control operator situational awareness
US10083537B1 (en) 2016-02-04 2018-09-25 Gopro, Inc. Systems and methods for adding a moving visual element to a video
US9972066B1 (en) 2016-03-16 2018-05-15 Gopro, Inc. Systems and methods for providing variable image projection for spherical visual content
US9838730B1 (en) 2016-04-07 2017-12-05 Gopro, Inc. Systems and methods for audio track selection in video editing
US9794632B1 (en) 2016-04-07 2017-10-17 Gopro, Inc. Systems and methods for synchronization based on audio track changes in video editing
US9838731B1 (en) 2016-04-07 2017-12-05 Gopro, Inc. Systems and methods for audio track selection in video editing with audio mixing option
US9762971B1 (en) * 2016-04-26 2017-09-12 Amazon Technologies, Inc. Techniques for providing media content browsing
US9911223B2 (en) * 2016-05-13 2018-03-06 Yahoo Holdings, Inc. Automatic video segment selection method and apparatus
US10250894B1 (en) 2016-06-15 2019-04-02 Gopro, Inc. Systems and methods for providing transcoded portions of a video
US9922682B1 (en) 2016-06-15 2018-03-20 Gopro, Inc. Systems and methods for organizing video files
US9998769B1 (en) 2016-06-15 2018-06-12 Gopro, Inc. Systems and methods for transcoding media files
US10045120B2 (en) 2016-06-20 2018-08-07 Gopro, Inc. Associating audio with three-dimensional objects in videos
US20170374423A1 (en) * 2016-06-24 2017-12-28 Glen J. Anderson Crowd-sourced media playback adjustment
US10185891B1 (en) 2016-07-08 2019-01-22 Gopro, Inc. Systems and methods for compact convolutional neural networks
US9836853B1 (en) 2016-09-06 2017-12-05 Gopro, Inc. Three-dimensional convolutional neural networks for video highlight detection
US10268898B1 (en) 2016-09-21 2019-04-23 Gopro, Inc. Systems and methods for determining a sample frame order for analyzing a video via segments
US10282632B1 (en) 2016-09-21 2019-05-07 Gopro, Inc. Systems and methods for determining a sample frame order for analyzing a video
US20180102143A1 (en) * 2016-10-12 2018-04-12 Lr Acquisition, Llc Modification of media creation techniques and camera behavior based on sensor-driven events
US10002641B1 (en) 2016-10-17 2018-06-19 Gopro, Inc. Systems and methods for determining highlight segment sets
US10284809B1 (en) 2016-11-07 2019-05-07 Gopro, Inc. Systems and methods for intelligently synchronizing events in visual content with musical features in audio content
US10262639B1 (en) 2016-11-08 2019-04-16 Gopro, Inc. Systems and methods for detecting musical features in audio content
CN106604086B (en) * 2016-12-08 2019-06-04 武汉斗鱼网络科技有限公司 The played in full screen method and system of preview video in Android application
US10339443B1 (en) 2017-02-24 2019-07-02 Gopro, Inc. Systems and methods for processing convolutional neural network operations using textures
US10127943B1 (en) 2017-03-02 2018-11-13 Gopro, Inc. Systems and methods for modifying videos based on music
US10185895B1 (en) 2017-03-23 2019-01-22 Gopro, Inc. Systems and methods for classifying activities captured within images
US10083718B1 (en) 2017-03-24 2018-09-25 Gopro, Inc. Systems and methods for editing videos based on motion
US10187690B1 (en) 2017-04-24 2019-01-22 Gopro, Inc. Systems and methods to detect and correlate user responses to media content
WO2019003040A1 (en) * 2017-06-28 2019-01-03 Sourcico Ltd. Pulsating image

Family Cites Families (101)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2528446B2 (en) * 1992-09-30 1996-08-28 セイコーエプソン株式会社 Audio and image processing apparatus
DE69322047T2 (en) 1992-10-01 1999-06-24 Hudson Soft Co Ltd Image processing device
US5745103A (en) * 1995-08-02 1998-04-28 Microsoft Corporation Real-time palette negotiations in multimedia presentations
US6263507B1 (en) 1996-12-05 2001-07-17 Interval Research Corporation Browser for use in navigating a body of information, with particular application to browsing information represented by audiovisual data
US6335985B1 (en) * 1998-01-07 2002-01-01 Kabushiki Kaisha Toshiba Object extraction apparatus
US6526577B1 (en) 1998-12-01 2003-02-25 United Video Properties, Inc. Enhanced interactive program guide
US7509580B2 (en) 1999-09-16 2009-03-24 Sharp Laboratories Of America, Inc. Audiovisual information management system with preferences descriptions
JP2001195051A (en) * 2000-01-12 2001-07-19 Konami Co Ltd Data generating device for image display and recording medium
JP3617413B2 (en) 2000-06-02 2005-02-02 日産自動車株式会社 Control apparatus for an electromagnetically driven valve
US20020112244A1 (en) 2000-12-19 2002-08-15 Shih-Ping Liou Collaborative video delivery over heterogeneous networks
US7169996B2 (en) 2002-11-12 2007-01-30 Medialab Solutions Llc Systems and methods for generating music using data/music data file transmitted/received via a network
US20070074269A1 (en) * 2002-02-22 2007-03-29 Hai Hua Video processing device, video recorder/playback module, and methods for use therewith
EP1383327B1 (en) * 2002-06-11 2013-12-25 Panasonic Corporation Content distributing system and data-communication controlling device
MXPA05013029A (en) 2003-06-02 2006-03-02 Disney Entpr Inc System and method of programmatic window control for consumer video players.
US7324119B1 (en) 2003-07-14 2008-01-29 Adobe Systems Incorporated Rendering color images and text
US20050144016A1 (en) 2003-12-03 2005-06-30 Christopher Hewitt Method, software and apparatus for creating audio compositions
US9715898B2 (en) 2003-12-16 2017-07-25 Core Wireless Licensing S.A.R.L. Method and device for compressed-domain video editing
US20050193341A1 (en) 2004-02-27 2005-09-01 Hayward Anthony D. System for aggregating, processing and delivering video footage, documents, audio files and graphics
JP4385974B2 (en) 2004-05-13 2009-12-16 ソニー株式会社 Image display method, image processing apparatus, a program and a recording medium
EP1762095A1 (en) * 2004-06-17 2007-03-14 Philips Electronics N.V. Personalized summaries using personality attributes
US20060059504A1 (en) 2004-09-14 2006-03-16 Eduardo Gomez Method for selecting a preview of a media work
US20060204214A1 (en) 2005-03-14 2006-09-14 Microsoft Corporation Picture line audio augmentation
KR100654455B1 (en) 2005-05-26 2006-12-06 삼성전자주식회사 Apparatus and method for providing addition information using extension subtitle file
KR100710752B1 (en) 2005-06-03 2007-04-24 삼성전자주식회사 System and apparatus and method for generating panorama image
US20070006262A1 (en) * 2005-06-30 2007-01-04 Microsoft Corporation Automatic content presentation
US20070118801A1 (en) 2005-11-23 2007-05-24 Vizzme, Inc. Generation and playback of multimedia presentations
US20070136750A1 (en) 2005-12-13 2007-06-14 Microsoft Corporation Active preview for media items
US20070157240A1 (en) 2005-12-29 2007-07-05 United Video Properties, Inc. Interactive media guidance system having multiple devices
US8607287B2 (en) * 2005-12-29 2013-12-10 United Video Properties, Inc. Interactive media guidance system having multiple devices
US20080036917A1 (en) 2006-04-07 2008-02-14 Mark Pascarella Methods and systems for generating and delivering navigatable composite videos
US8077153B2 (en) * 2006-04-19 2011-12-13 Microsoft Corporation Precise selection techniques for multi-touch screens
US7844354B2 (en) 2006-07-27 2010-11-30 International Business Machines Corporation Adjusting the volume of an audio element responsive to a user scrolling through a browser window
US7844352B2 (en) 2006-10-20 2010-11-30 Lehigh University Iterative matrix processor based implementation of real-time model predictive control
US8881011B2 (en) * 2006-12-05 2014-11-04 Crackle, Inc. Tool for creating content for video sharing platform
JP2008146453A (en) * 2006-12-12 2008-06-26 Sony Corp Picture signal output device and operation input processing method
AU2006252196B2 (en) 2006-12-21 2009-05-14 Canon Kabushiki Kaisha Scrolling Interface
EP1988703A1 (en) * 2007-05-02 2008-11-05 TTPCOM Limited Image transformation
US20080301579A1 (en) 2007-06-04 2008-12-04 Yahoo! Inc. Interactive interface for navigating, previewing, and accessing multimedia content
US20090094159A1 (en) 2007-10-05 2009-04-09 Yahoo! Inc. Stock video purchase
KR101434498B1 (en) 2007-10-29 2014-09-29 삼성전자주식회사 Portable terminal and method for managing dynamic image thereof
JP2011505596A (en) 2007-11-30 2011-02-24 スリーエム イノベイティブ プロパティズ カンパニー Method of making an optical waveguide
US7840661B2 (en) * 2007-12-28 2010-11-23 Yahoo! Inc. Creating and editing media objects using web requests
US8181197B2 (en) 2008-02-06 2012-05-15 Google Inc. System and method for voting on popular video intervals
JP2011516907A (en) 2008-02-20 2011-05-26 オーイーエム インコーポレーティッド Music learning and mixing system
US8468572B2 (en) 2008-03-26 2013-06-18 Cisco Technology, Inc. Distributing digital video content to multiple end-user devices
US8139072B2 (en) 2008-04-14 2012-03-20 Mcgowan Scott James Network hardware graphics adapter compression
WO2010006334A1 (en) * 2008-07-11 2010-01-14 Videosurf, Inc. Apparatus and software system for and method of performing a visual-relevance-rank subsequent search
US20100023984A1 (en) 2008-07-28 2010-01-28 John Christopher Davi Identifying Events in Addressable Video Stream for Generation of Summary Video Stream
US20100125875A1 (en) * 2008-11-20 2010-05-20 Comcast Cable Communications, Llc Method and apparatus for delivering video and video-related content at sub-asset level
WO2010068175A2 (en) 2008-12-10 2010-06-17 Muvee Technologies Pte Ltd Creating a new video production by intercutting between multiple video clips
US8655466B2 (en) 2009-02-27 2014-02-18 Apple Inc. Correlating changes in audio
US8259816B2 (en) 2009-03-12 2012-09-04 MIST Innovations, Inc. System and method for streaming video to a mobile device
US8818172B2 (en) 2009-04-14 2014-08-26 Avid Technology, Inc. Multi-user remote video editing
US8527646B2 (en) * 2009-04-14 2013-09-03 Avid Technology Canada Corp. Rendering in a multi-user video editing system
US8392004B2 (en) 2009-04-30 2013-03-05 Apple Inc. Automatic audio adjustment
JP5523752B2 (en) 2009-07-08 2014-06-18 Kyocera Corp Display control device
US8457470B2 (en) 2009-07-13 2013-06-04 Echostar Technologies L.L.C. Systems and methods for a common image data array file
EP2461317A4 (en) 2009-07-31 2013-10-30 Sharp Kk Image processing device, control method for image processing device, control program for image processing device, and recording medium in which control program is recorded
US8438484B2 (en) 2009-11-06 2013-05-07 Sony Corporation Video preview module to enhance online video experience
US8736561B2 (en) * 2010-01-06 2014-05-27 Apple Inc. Device, method, and graphical user interface with content display modes and display rotation heuristics
GB2489784A (en) 2010-01-29 2012-10-10 Hewlett Packard Development Co Portable computer having multiple embedded audio controllers
CN102196001B (en) * 2010-03-15 2014-03-19 腾讯科技(深圳)有限公司 Movie file downloading device and method
US20120017150A1 (en) 2010-07-15 2012-01-19 MySongToYou, Inc. Creating and disseminating of user generated media over a network
US9572995B2 (en) 2010-09-29 2017-02-21 Verizon Patent And Licensing Inc. Creating and using a virtual video asset in a video provisioning system
US8743953B2 (en) 2010-10-22 2014-06-03 Motorola Solutions, Inc. Method and apparatus for adjusting video compression parameters for encoding source video based on a viewer's environment
US9160960B2 (en) 2010-12-02 2015-10-13 Microsoft Technology Licensing, Llc Video preview based browsing user interface
EP2646970A4 (en) 2010-12-02 2015-08-05 Dayspark Inc Systems, devices and methods for streaming multiple different media content in a digital container
US8923607B1 (en) 2010-12-08 2014-12-30 Google Inc. Learning sports highlights using event detection
CA2825927A1 (en) 2011-01-28 2012-08-02 Eye IO, LLC Color conversion based on an HVS model
JP2012165313A (en) 2011-02-09 2012-08-30 Sony Corp Editing device, method, and program
US8244103B1 (en) * 2011-03-29 2012-08-14 Capshore, Llc User interface for method for creating a custom track
US9779097B2 (en) 2011-04-28 2017-10-03 Sony Corporation Platform agnostic UI/UX and human interaction paradigm
US9135371B2 (en) 2011-05-09 2015-09-15 Google Inc. Contextual video browsing
AU2011202182B1 (en) 2011-05-11 2011-10-13 Frequency Ip Holdings, Llc Creation and presentation of selective digital content feeds
US8291452B1 (en) 2011-05-20 2012-10-16 Google Inc. Interface for watching a stream of videos
US8649668B2 (en) * 2011-06-03 2014-02-11 Adobe Systems Incorporated Client playback of streaming video adapted for smooth transitions and viewing in advance display modes
US20120323897A1 (en) 2011-06-14 2012-12-20 Microsoft Corporation Query-dependent audio/video clip search result previews
JP2013009218A (en) 2011-06-27 2013-01-10 Sony Corp Editing device, method, and program
US8868680B2 (en) 2011-06-30 2014-10-21 Infosys Technologies Ltd. Methods for recommending personalized content based on profile and context information and devices thereof
WO2013010177A2 (en) 2011-07-14 2013-01-17 Surfari Inc. Online groups interacting around common content
US9973800B2 (en) 2011-08-08 2018-05-15 Netflix, Inc. Merchandising streaming video content
US20130047084A1 (en) 2011-08-18 2013-02-21 Christopher John Sanders Management of Local and Remote Media Items
US20130097550A1 (en) * 2011-10-14 2013-04-18 Tovi Grossman Enhanced target selection for a touch-based input enabled user interface
US10148762B2 (en) 2011-10-18 2018-12-04 Facebook, Inc. Platform-specific notification delivery channel
US9111579B2 (en) * 2011-11-14 2015-08-18 Apple Inc. Media editing with multi-camera media clips
US20130163963A1 (en) 2011-12-21 2013-06-27 Cory Crosland System and method for generating music videos from synchronized user-video recorded content
US9378283B2 (en) 2012-04-23 2016-06-28 Excalibur Ip, Llc Instant search results with page previews
US8959453B1 (en) * 2012-05-10 2015-02-17 Google Inc. Autohiding video player controls
US20130317951A1 (en) * 2012-05-25 2013-11-28 Rawllin International Inc. Auto-annotation of video content for scrolling display
US9027064B1 (en) 2012-06-06 2015-05-05 Susie Opare-Abetia Unified publishing platform that seamlessly delivers content by streaming for on-demand playback and by store-and-forward delivery for delayed playback
US9158440B1 (en) * 2012-08-01 2015-10-13 Google Inc. Display of information areas in a view of a graphical interface
US9179232B2 (en) 2012-09-17 2015-11-03 Nokia Technologies Oy Method and apparatus for associating audio objects with content and geo-location
US8610730B1 (en) 2012-09-19 2013-12-17 Google Inc. Systems and methods for transferring images and information from a mobile computing device to a computer monitor for display
US8717500B1 (en) * 2012-10-15 2014-05-06 At&T Intellectual Property I, L.P. Relational display of images
KR20140064162A (en) 2012-11-19 2014-05-28 삼성전자주식회사 Method for displaying a screen in mobile terminal and the mobile terminal therefor
CN103873944B (en) * 2012-12-18 2017-04-12 Realtek Semiconductor Corp. Method and apparatus for establishing timing relationships between content played by different players
US20140219634A1 (en) 2013-02-05 2014-08-07 Redux, Inc. Video preview creation based on environment
US9077956B1 (en) 2013-03-22 2015-07-07 Amazon Technologies, Inc. Scene identification
US20140325568A1 (en) 2013-04-26 2014-10-30 Microsoft Corporation Dynamic creation of highlight reel tv show
US9467750B2 (en) 2013-05-31 2016-10-11 Adobe Systems Incorporated Placing unobtrusive overlays in video content
GB2520319A (en) 2013-11-18 2015-05-20 Nokia Corp Method, apparatus and computer program product for capturing images

Also Published As

Publication number Publication date
US9530452B2 (en) 2016-12-27
US20160217826A1 (en) 2016-07-28
US9881646B2 (en) 2018-01-30
US20140223307A1 (en) 2014-08-07
US20140219634A1 (en) 2014-08-07
US20170270966A1 (en) 2017-09-21
US20140219629A1 (en) 2014-08-07
US9589594B2 (en) 2017-03-07
US9767845B2 (en) 2017-09-19
US20140223306A1 (en) 2014-08-07
US10373646B2 (en) 2019-08-06
US9852762B2 (en) 2017-12-26
US9349413B2 (en) 2016-05-24
US9244600B2 (en) 2016-01-26
US20140219637A1 (en) 2014-08-07
US20140223482A1 (en) 2014-08-07
US20180218756A1 (en) 2018-08-02
US20160064034A1 (en) 2016-03-03

Similar Documents

Publication Publication Date Title
KR101829782B1 (en) Sharing television and video programming through social networking
US9684432B2 (en) Web-based system for collaborative generation of interactive videos
US9723335B2 (en) Serving objects to be inserted to videos and tracking usage statistics thereof
CN104219559B (en) Unobvious superposition is launched in video content
US9819999B2 (en) Interactive media display across devices
US8615777B2 (en) Method and apparatus for displaying posting site comments with program being viewed
US8285121B2 (en) Digital network-based video tagging system
EP2309738A1 (en) Distributed scalable media environment
US20110289458A1 (en) User interface animation for a content system
US20090259971A1 (en) Media mashing across multiple heterogeneous platforms and devices
US9349413B2 (en) User interface for video preview creation
KR101460462B1 (en) Techniques for object based operations
US20120284623A1 (en) Online search, storage, manipulation, and delivery of video content
CN102483742B (en) System and method for managing Internet media content
US8640030B2 (en) User interface for creating tags synchronized with a video playback
US9008491B2 (en) Snapshot feature for tagged video
US20100217884A2 (en) Method and system of providing multimedia content
US20100318520A1 (en) System and method for processing commentary that is related to content
AU2006252196B2 (en) Scrolling Interface
US8819559B2 (en) Systems and methods for sharing multimedia editing projects
KR101716350B1 (en) Animation sequence associated with image
US20080052742A1 (en) Method and apparatus for presenting media content
US9407964B2 (en) Method and system for navigating video to an instant time
KR101633805B1 (en) Animation sequence associated with feedback user-interface element
US20080189733A1 (en) Content rating systems and methods

Legal Events

Date Code Title Description
STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

AS Assignment

Owner name: REDUX, INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:MCINTOSH, DAVID;PENNELLO, CHRIS;REEL/FRAME:048622/0384

Effective date: 20140204

Owner name: ALC HOLDINGS, INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:REDUX, INC.;REEL/FRAME:048629/0144

Effective date: 20140624

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED