US10614854B2 - Speedy clipping - Google Patents
Speedy clipping
- Publication number
- US10614854B2 (application US16/037,073)
- Authority
- US
- United States
- Prior art keywords
- slice
- snippet
- frames
- encoding
- slices
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Images
Classifications
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/02—Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
- G11B27/031—Electronic editing of digitised analogue information signals, e.g. audio or video signals
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/103—Selection of coding mode or of prediction mode
- H04N19/114—Adapting the group of pictures [GOP] structure, e.g. number of B-frames between two anchor frames
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/134—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
- H04N19/142—Detection of scene cut or scene change
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/17—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
- H04N19/172—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a picture, frame or field
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/40—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using video transcoding, i.e. partial or full decoding of a coded input stream followed by re-encoding of the decoded output stream
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/234—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
- H04N21/2343—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/81—Monomedia components thereof
- H04N21/816—Monomedia components thereof involving special video data, e.g. 3D video
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/85—Assembly of content; Generation of multimedia applications
- H04N21/854—Content authoring
- H04N21/8549—Creating video summaries, e.g. movie trailer
Definitions
- Content distribution is continuing to move away from traditional broadcast mediums to online and over-the-top distribution.
- New tools are needed to aid in this transition.
- With respect to video, new tools are needed to efficiently create the content, bring the content online, and distribute the content. Accordingly, there is a need for better video editing, manipulation, encoding, transcoding, and distribution tools.
- Clipping is the process of cropping or selecting one or more portions of a video asset or streaming asset and preserving those portions as a new video asset.
- The video assets that are generated from different portions of existing video assets are referred to as snippets.
- A snippet can be used to present or promote existing video content through alternative formats. For instance, a snippet from a broadcast news report can be used to enhance an online text-based article relating to the same report. A snippet can also be used as a trailer, advertisement, or teaser for promotional purposes.
- Traditional snippet generation involves an editor identifying a start marker and an end marker somewhere within an original video asset.
- A clipping tool then re-encodes the portion of the original asset falling within the start and end marker clip boundaries in order to produce the snippet from the original asset.
- FIG. 1 conceptually illustrates the video generation system creating a sliced encoding of an original video asset from which snippets can be efficiently generated in accordance with some embodiments.
- FIG. 2 presents a process for creating a snippet from the sliced encoding of an original video asset in accordance with some embodiments.
- FIG. 3 conceptually illustrates the snippet creation in accordance with some embodiments.
- FIG. 4 conceptually illustrates creating a snippet at a particular bit rate from an original video asset that is encoded at different bit rates.
- FIG. 5 is a block diagram of an exemplary video generation system for encoding video content into a sequence of slices and for efficiently creating snippets from encoded slices.
- FIG. 6 illustrates a computer system or server with which some embodiments are implemented.
- A video generation system is provided for efficiently creating snippets from existing video assets without re-encoding the entire portion of already encoded video falling within the snippet boundaries.
- The video generation system creates a snippet by re-encoding very short durations of the original video asset at the beginning and end of the snippet, and by reusing the already encoded portions of the original asset falling in between. In doing so, the video generation system presented herein is able to create snippets faster and with little or no quality loss relative to prior art systems and methods for snippet generation.
- FIG. 1 conceptually illustrates the video generation system creating a sliced encoding of an original video asset from which snippets can be efficiently generated in accordance with some embodiments.
- FIG. 1 illustrates the video generation system 110, an original video asset 120, and a resulting sliced encoding 130 of the original video asset produced by the video generation system.
- The original video asset 120 can originate from a source feed (i.e., a digital video camera), a broadcast signal, a stream, or a media file.
- The original video asset 120 can include a combination of audio and video or just video. It should be noted that the video generation system and described embodiments can also be applied to create audio-only snippets from an original audio asset. Accordingly, the embodiments are applicable to a variety of media files or media content.
- The video generation system 110 partitions the original video asset 120 into a sequence of slices.
- The slices can then be distributed across one or more encoders of the video generation system 110 for encoding.
- Each resulting encoded slice 130 represents a different temporal chunk that encodes a short but different duration or portion of the original video asset 120.
- Each encoded slice is therefore defined by a set of video frames.
- The frames are a combination of key frames or I-frames, P-frames, and B-frames.
- I-frames are decoded based on information solely contained within the I-frame. An image or individual video frame can therefore be rendered based on information from a single I-frame.
- P-frames are decoded with reliance on data from one or more previous frames.
- B-frames are decoded with reliance on data from one or more previous and forward frames.
- P-frames and B-frames reference other frames for changes in image data or vector displacement so that different images can be rendered from the P and B frames without duplicating information from the other referenced frames.
- The embodiments and slices can be adapted to include other types of frames in addition to or in place of the I, P, and B frames. The efficient clipping can be performed on any such frame type.
- Each encoded slice commences with an I-frame. Subsequent frames for the remaining duration of each encoded slice can contain a mix of I, P, B, and other frames.
- The frame mix is produced by the encoder based on encoding settings that were specified for the video asset, wherein the encoding settings can include bit rates, quality settings, compression settings, resolution, etc. Further detail regarding the slicing of a video asset and the encoding of the slices can be found in U.S.
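- The patent does not prescribe a particular data structure for a sliced encoding; the following minimal Python sketch (all names, and the fixed five-second default slice duration, are illustrative assumptions rather than details from the patent) shows one way such slices could be modeled:

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List


class FrameType(Enum):
    I = "I"   # decodable from its own data alone
    P = "P"   # references one or more previous frames
    B = "B"   # references previous and forward frames


@dataclass
class EncodedSlice:
    index: int                  # position of the slice within the original asset
    start_time: float           # seconds from the start of the asset
    duration: float             # seconds of video covered by this slice
    frame_types: List[FrameType] = field(default_factory=list)
    data: bytes = b""           # the encoded bitstream for this slice


def slice_boundaries(asset_duration: float, slice_duration: float = 5.0):
    """Yield (index, start_time, duration) for fixed-length slices of an asset."""
    start, index = 0.0, 0
    while start < asset_duration:
        yield index, start, min(slice_duration, asset_duration - start)
        start += slice_duration
        index += 1
```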
- FIG. 2 presents a process 200 for creating a snippet from the sliced encoding of an original video asset in accordance with some embodiments.
- The process 200 is performed by the video generation system on a sliced encoding of the original video asset.
- The process 200 commences by receiving (at 210) user-defined start and end times for a snippet of an original video asset.
- The process retrieves (at 215) the sliced encoding of the original video asset.
- The video generation system retrieves the encoded slices that were produced from encoding the original video asset.
- The process identifies (at 220) a first slice and a second slice from the plurality of slices of the sliced encoding of the original video asset based on the user-defined snippet start and end times.
- The first slice identified at 220 is a slice from the plurality of slices that encodes a duration of the video asset spanning the user-defined snippet start time.
- The first slice identified at 220 may therefore not correspond to the initial slice that encodes the first seconds of the original video asset.
- The second slice identified at 220 is a different slice from the plurality of slices that encodes a duration of the video asset spanning the snippet end time.
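- Assuming fixed-duration slices (the patent does not require this), locating the first and second slices reduces to simple arithmetic; the sketch below is illustrative only:

```python
import math


def locate_boundary_slices(start_time: float, end_time: float,
                           slice_duration: float = 5.0):
    """Return the indexes of the slices spanning the snippet start and end times."""
    if end_time <= start_time:
        raise ValueError("snippet end time must be after its start time")
    first_index = math.floor(start_time / slice_duration)
    second_index = math.floor(end_time / slice_duration)
    return first_index, second_index


# Example: a start time of 7s and an end time of 18s with 5-second slices fall in
# the second slice (index 1) and the fourth slice (index 3), respectively.
print(locate_boundary_slices(7.0, 18.0))  # -> (1, 3)
```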
- The first and second slices cannot be arbitrarily clipped to the defined snippet start and end times, because the snippet start and end times may point to P or B frames from the first and second slices that reference data from other frames in those slices. Accordingly, the process decodes (at 230) the first and second slices. From the decoding, the video generation system obtains frame information for all frames in the slices. The decoding essentially converts each frame in the first and second slices into an I-frame. In some embodiments, the decoding also associates a timestamp or duration with each frame.
- The process produces (at 240) the snippet start slice by re-encoding the decoded first slice frames from the frame at the defined snippet start time to the last frame of the first slice.
- The re-encoding clips the first slice by excluding any frames that are before the defined snippet start time.
- The specific frame at the snippet start time is encoded as an I-frame, and subsequent frames from the first slice of the original asset are encoded as a combination of I, P, B, and other frame types depending on encoder settings. Consequently, the resulting snippet start slice commences with the specific frame at the snippet start time.
- The process produces (at 250) the snippet end slice by re-encoding the decoded second slice frames from the first frame to the frame at the defined snippet end time.
- The re-encoding clips the second slice such that the resulting snippet end slice ends with the specific frame at the snippet end time.
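- The patent does not name a specific decoder or encoder for this step; the sketch below assumes ffmpeg with libx264/AAC is available, and the file paths are hypothetical. It only illustrates the idea of decoding and re-encoding the two boundary slices so that the frames at the snippet start and end times become clean slice boundaries:

```python
import subprocess


def reencode_start_slice(first_slice_path: str, offset_in_slice: float, out_path: str):
    """Re-encode a boundary slice from the snippet start offset to the slice's last frame.

    ffmpeg decodes the slice and re-encodes it so the frame at the offset becomes
    an I-frame; frames before the offset are dropped.
    """
    subprocess.run(
        ["ffmpeg", "-y", "-i", first_slice_path, "-ss", str(offset_in_slice),
         "-c:v", "libx264", "-c:a", "aac", out_path],
        check=True,
    )


def reencode_end_slice(second_slice_path: str, offset_in_slice: float, out_path: str):
    """Re-encode a boundary slice from its first frame up to the snippet end offset."""
    subprocess.run(
        ["ffmpeg", "-y", "-i", second_slice_path, "-to", str(offset_in_slice),
         "-c:v", "libx264", "-c:a", "aac", out_path],
        check=True,
    )
```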
- The process retrieves (at 260) a subset of slices from the sliced encoding of the original video asset falling in between the first and second slices, or equivalently, the snippet start and end slices. More specifically, the subset of slices encodes a duration of the video asset that is in between the snippet start and end slices and is not already encoded within the snippet start and end slices. Unlike the first and second slices of the original video asset, the subset of slices in between the first and second slices is not decoded or re-encoded.
- The retrieval at 260 involves obtaining the original encoding for subsequent reuse in creating the snippet.
- The process orders the snippet start slice, the retrieved subset of slices from the encoding of the original video asset, and the snippet end slice to produce the snippet.
- The ordering involves creating a manifest file listing the slices and other information (e.g., slice duration, encoded bit rate, etc.) for snippet playback.
- The manifest file provides client players with the slice ordering, thereby allowing the players to request the proper slice sequence for snippet playback.
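- The manifest format is not specified in the patent; the following sketch writes an HLS-style media playlist as one plausible realization (the tag set, slice file names, and durations shown are assumptions):

```python
import math


def write_snippet_manifest(slices, manifest_path: str) -> None:
    """Write an HLS-style media playlist listing the snippet slices in playback order.

    `slices` is a list of (uri, duration_seconds) tuples: the re-encoded snippet
    start slice, the reused middle slices, then the re-encoded snippet end slice.
    """
    target = math.ceil(max(duration for _, duration in slices))
    lines = ["#EXTM3U", "#EXT-X-VERSION:3",
             f"#EXT-X-TARGETDURATION:{target}", "#EXT-X-MEDIA-SEQUENCE:0"]
    for uri, duration in slices:
        lines.append(f"#EXTINF:{duration:.3f},")  # per-slice duration
        lines.append(uri)                         # slice location
    lines.append("#EXT-X-ENDLIST")
    with open(manifest_path, "w") as f:
        f.write("\n".join(lines) + "\n")


# Hypothetical usage:
# write_snippet_manifest([("snippet_start.ts", 2.5),
#                         ("slice_003.ts", 5.0),
#                         ("snippet_end.ts", 1.8)], "snippet.m3u8")
```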
- The ordering may further include storing the snippet start and end slices along with the retrieved subset of slices for subsequent distribution.
- The video generation system merges the snippet start slice, the retrieved subset of slices, and the snippet end slice into a new video or snippet asset that exists independent of the original video asset.
- The merging can involve combining the snippet slices into a single file for subsequent distribution.
- The process serves (at 280) the snippet start slice in response to any request for the snippet.
- The process will continue to serve the remaining snippet slices until the snippet end slice is served or until playback is terminated.
- The video generation system produces the snippet, and in particular the snippet start slice and the snippet end slice, with insignificant time and computer resource usage, as each of the slices spans no more than a few seconds of video. Accordingly, producing the snippet start slice and snippet end slice involves re-encoding those few seconds of video, rather than the entire snippet, regardless of the total length of the snippet. There is therefore little to no difference between creating a snippet that is ten seconds in duration and one that is ten minutes in duration.
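- A rough worked example of that claim, assuming the five-second slice duration used in the FIG. 3 example (the numbers are illustrative, not from the patent):

```python
SLICE_DURATION = 5.0  # seconds per slice, as in the FIG. 3 example (an assumption)


def seconds_reencoded(snippet_duration: float) -> float:
    """Upper bound on video that must be re-encoded with the sliced approach."""
    return min(snippet_duration, 2 * SLICE_DURATION)  # at most the two boundary slices


# A 10-second snippet and a 10-minute snippet both re-encode at most ~10 seconds of
# video, whereas re-encoding the whole clip would cost 10 vs. 600 seconds of encoding work.
print(seconds_reencoded(10.0), seconds_reencoded(600.0))  # -> 10.0 10.0
```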
- The video generation system largely preserves the quality of the snippet relative to the original video asset encoding from which the snippet is produced.
- The video generation system loses quality in just the snippet start and end slices as a result of re-encoding these slices. There is, however, no quality lost in all other slices between the snippet start slice and the snippet end slice, as these other slices are reused without any modification from the original video asset encoding.
- FIG. 3 conceptually illustrates the snippet creation in accordance with some embodiments.
- The figure illustrates five encoded slices 320, 330, 340, 350, and 360 of an original video asset from which the video generation system 310 clips to create a snippet with a re-encoded start slice 390, a re-encoded end slice 395, and an original encoded slice 340.
- The entire original video asset may involve many more slices than those depicted in FIG. 3; the five encoded slices are presented for illustrative purposes.
- Each slice 320, 330, 340, 350, and 360 represents a different five seconds of the original video asset.
- The frame types that form each slice 320, 330, 340, 350, and 360 are shown for illustrative purposes.
- The video generation system 310 receives user-specified start and end times for a desired snippet.
- The video generation system 310 identifies the snippet start time to be within the second slice 330 and the snippet end time to be within the fourth slice 350. More specifically, the video generation system 310 identifies frame 370 from the second slice 330 corresponding to the snippet start time and frame 380 from the fourth slice 350 corresponding to the snippet end time.
- Frame 370 is a P-frame that references a prior P-frame that references a prior I-frame of the second slice 330.
- The snippet start slice 390 therefore cannot be directly created from frame 370. Accordingly, the video generation system 310 decodes the second slice 330. The video generation system 310 then re-encodes a subset of the decoded frames spanning from frame 370 to the last frame of the second slice 330. The re-encoded set of frames forms the snippet start slice 390. The re-encoding converts starting frame 370 to an I-frame, with subsequent frames of the snippet start slice 390 comprising a combination of I, P, B, and other frame types.
- Frame 380 is a B-frame that references a forward I-frame and a previous I-frame.
- The snippet end slice 395 therefore cannot be directly created from frame 380.
- The video generation system 310 decodes the fourth slice 350 in order to then re-encode a subset of decoded frames spanning from the first frame of the fourth slice 350 to frame 380.
- The re-encoded set of frames forms the snippet end slice 395.
- The re-encoding clips the fourth slice 350 such that the snippet end slice 395 stops at frame 380, which corresponds to the snippet end time.
- The video generation system generates the snippet from the snippet start slice 390, the third slice 340 of the original video asset encoding, and the snippet end slice 395.
- Snippet generation may further include creating a manifest that identifies the snippet slices for playback.
- The manifest identifies the snippet as starting with the snippet start slice.
- The manifest identifies the clipped duration of the snippet start slice.
- The manifest then identifies the subsequent ordering of the remaining snippet slices as well as their respective durations.
- The manifest is provided to a client player requesting the snippet. Upon receiving the manifest, the client player is then able to request the snippet slices in the correct order for playback.
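- A sketch of the client side, assuming the HLS-style manifest from the earlier sketch and a simple relative-URL layout (neither is specified by the patent):

```python
import urllib.request


def fetch_snippet_slices(manifest_url: str):
    """Fetch an HLS-style snippet manifest and yield the slice payloads in playback order."""
    base = manifest_url.rsplit("/", 1)[0]
    with urllib.request.urlopen(manifest_url) as resp:
        playlist = resp.read().decode("utf-8")
    for line in playlist.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue                       # skip playlist tags; remaining lines are slice URIs
        slice_url = line if "://" in line else f"{base}/{line}"
        with urllib.request.urlopen(slice_url) as slice_resp:
            yield slice_resp.read()        # hand each encoded slice to the player's decoder
```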
- The video generation system can also create a snippet at multiple bit rates.
- The video generation system encodes the original video asset at each of the different bit rates.
- The video generation system then creates the snippet by re-encoding the snippet first slice and the snippet last slice at each of the bit rates.
- The snippet first and last slices encoded at each bit rate are then matched with the slices of the original video asset that were encoded at the corresponding bit rate and that fall in between the snippet first slice and the snippet last slice.
- FIG. 4 conceptually illustrates creating a snippet at a particular bit rate from an original video asset that is encoded at different bit rates.
- The original video asset encoding produces at least a first set of slices 410 that encode the original video asset at a high or higher quality first bit rate and a second set of slices 420 that encode the original video asset at a low or lower quality second bit rate.
- The video generation system identifies a first slice spanning the snippet start time and a second slice spanning the snippet end time. Since re-encoding the snippet start slice and end slice is a lossy process, and in order to minimize quality loss, the video generation system obtains the first and second slices from the first set of slices 410 encoded at the high quality first bit rate rather than the low quality second bit rate at which the snippet 430 is generated. The video generation system clips the first and second slices from the first set of slices 410 and re-encodes them at the low quality second bit rate to produce the snippet start and end slices. The video generation system then generates the snippet 430 from the snippet start slice, a subset of slices from the second set of slices 420 (i.e., the low quality second bit rate encoding) between the first and second slices, and the snippet end slice.
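- A sketch of this assembly step, assuming list-indexed slice ladders and a caller-supplied re-encode helper (the function and parameter names are assumptions, not terms from the patent):

```python
from typing import Callable, List, Optional


def build_snippet_at_bitrate(
    high_quality_slices: List[bytes],
    target_bitrate_slices: List[bytes],
    first_index: int, second_index: int,
    start_offset: float, end_offset: float,
    reencode: Callable[[bytes, Optional[float], Optional[float]], bytes],
) -> List[bytes]:
    """Assemble a snippet at a target bit rate.

    Boundary slices are clipped from the high-quality encoding and re-encoded at the
    target bit rate via the caller-supplied `reencode(slice, start, end)` callable;
    the slices in between are reused unmodified from the target-bit-rate encoding
    of the original asset.
    """
    start_slice = reencode(high_quality_slices[first_index], start_offset, None)
    end_slice = reencode(high_quality_slices[second_index], None, end_offset)
    middle = target_bitrate_slices[first_index + 1:second_index]  # reused as-is
    return [start_slice, *middle, end_slice]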
- FIG. 5 is a block diagram of an exemplary video generation system for encoding video content into a sequence of slices and for efficiently creating snippets from encoded slices.
- The system involves at least one slicer 510, encoder 520, storage 530, and server 540, each communicably coupled via a data communications network 550 (e.g., a public network such as the Internet or a private network such as a local area network (LAN)).
- A video creator feeds video content to the slicer 510.
- The slicer 510 partitions the video content into a plurality of slices.
- The slicer 510 may be a lightweight piece of software that runs on a computing system near the signal source (e.g., a source file or live feed), such as a laptop computer of the video creator.
- The slicer 510 can also run on a remote machine to which the video content is uploaded.
- The plurality of slices pass from the slicer 510 to the one or more encoders 520.
- The encoders 520 collectively work to produce the encoded slices. For example, a different encoder 520 may retrieve a slice, encode the slice, and store the encoded slice to the storage 530.
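- The patent only states that slices can be distributed across one or more encoders; as one illustrative realization (ffmpeg, the output naming, and the worker-pool arrangement are assumptions), the slice encodes could be fanned out like this:

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor
from typing import Iterable, List


def encode_slice(raw_slice_path: str) -> str:
    """Encode one raw slice with ffmpeg and return the path of the encoded slice."""
    out_path = raw_slice_path.rsplit(".", 1)[0] + ".enc.ts"
    subprocess.run(["ffmpeg", "-y", "-i", raw_slice_path,
                    "-c:v", "libx264", "-c:a", "aac", out_path], check=True)
    return out_path


def encode_all_slices(raw_slice_paths: Iterable[str], workers: int = 4) -> List[str]:
    """Fan slice encodes out across a pool of workers, one ffmpeg process per slice."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(encode_slice, raw_slice_paths))
```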
- The encoders 520 also produce the snippets according to the above-described embodiments.
- Multiple encoders 520 may be used to encode a video asset at different bit rates or quality settings.
- The encoders 520 can similarly generate snippets of a video asset at each of the bit rates at which the video asset was encoded.
- The encoders 520 can also generate a snippet at a particular bit rate.
- The encoders 520 may generate the snippet start and end slices at the particular bit rate from the highest quality encoded slices of the video asset, while reusing the video asset slices that were encoded at the particular bit rate to fill the duration in between the snippet start and end slices.
- The server 540 receives requests for encoded video content over the network 550 from media players executing on the client computing systems (referred to herein as the "client").
- The server 540 passes the encoded slices of the original video content from the storage 530 in response to such requests.
- The client and the server 540, which may be executed on a server of a content delivery network, may be coupled by the network 550.
- The network 550 may include any digital public or private network.
- The client may be a client workstation, a server, a computer, a portable electronic device, an entertainment system configured to communicate over a network, such as a set-top box, a digital receiver, a digital television, a mobile phone, or other electronic devices.
- Portable electronic devices may include, but are not limited to, cellular phones, portable gaming systems, portable computing devices, or the like.
- The server 540 may be a network appliance, a gateway, a personal computer, a desktop computer, a workstation, etc.
- Video creators use an application or application programming interface (API) to submit snippet requests to the video generation system.
- The snippet requests can be passed to the server 540 or directly to the one or more encoders 520 for execution and snippet generation.
- The snippet request identifies the video asset from which the snippet is to be generated as well as start and end times for the snippet.
- The snippet request may specify additional information, including a bit rate or other quality settings for the snippet.
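- The patent does not define a wire format for snippet requests; the endpoint, field names, and JSON payload below are purely hypothetical, and only illustrate carrying the asset identifier, start and end times, and an optional bit rate:

```python
import json
import urllib.request
from typing import Optional


def request_snippet(api_url: str, asset_id: str, start: float, end: float,
                    bitrate_kbps: Optional[int] = None) -> dict:
    """POST a snippet request; the endpoint and payload shape are hypothetical."""
    payload = {"asset": asset_id, "start": start, "end": end}
    if bitrate_kbps is not None:
        payload["bitrate_kbps"] = bitrate_kbps
    req = urllib.request.Request(
        api_url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


# Hypothetical usage:
# request_snippet("https://example.invalid/snippets", "asset-123",
#                 start=7.0, end=18.0, bitrate_kbps=2500)
```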
- The encoder 520 decodes and re-encodes the slices from which the snippet start and end slices are produced, and stores the snippet start and end slices, along with a manifest for the snippet, back to the storage 530 for subsequent distribution by the server 540.
- The video generation system of some embodiments is a standalone machine that produces the snippets.
- The video generation system produces the original video asset encoded slices or has access to those slices, whether the encoded slices are stored on local or remote storage.
- Any device can be used to efficiently clip video as described above, as long as the device has access to the encoded video slices that form the original video asset.
- Server, computer, and computing machine are meant in their broadest sense, and can include any electronic device with a processor, including cellular telephones, smartphones, portable digital assistants, tablet devices, laptops, notebooks, and desktop computers.
- Examples of computer-readable media include, but are not limited to, CD-ROMs, flash drives, RAM chips, hard drives, EPROMs, etc.
- FIG. 6 illustrates a computer system or server with which some embodiments are implemented.
- A computer system includes various types of computer-readable media and interfaces for various other types of computer-readable media that implement the various methods and machines described above (e.g., the video generation system).
- Computer system 600 includes a bus 605, a processor 610, a system memory 615, a read-only memory 620, a permanent storage device 625, input devices 630, and output devices 635.
- The bus 605 collectively represents all system, peripheral, and chipset buses that communicatively connect the numerous internal devices of the computer system 600.
- The bus 605 communicatively connects the processor 610 with the read-only memory 620, the system memory 615, and the permanent storage device 625. From these various memory units, the processor 610 retrieves instructions to execute and data to process in order to execute the processes of the invention.
- The processor 610 is a processing device such as a central processing unit, integrated circuit, graphical processing unit, etc.
- The read-only memory (ROM) 620 stores static data and instructions that are needed by the processor 610 and other modules of the computer system.
- The permanent storage device 625 is a read-and-write memory device. This device is a non-volatile memory unit that stores instructions and data even when the computer system 600 is off. Some embodiments use a mass-storage device (such as a magnetic, solid-state, or optical disk) as the permanent storage device 625.
- The system memory 615 is a read-and-write memory device. However, unlike the storage device 625, the system memory is a volatile read-and-write memory, such as random access memory (RAM).
- The system memory stores some of the instructions and data that the processor needs at runtime. In some embodiments, the processes are stored in the system memory 615, the permanent storage device 625, and/or the read-only memory 620.
- The bus 605 also connects to the input and output devices 630 and 635.
- The input devices enable the user to communicate information and select commands to the computer system.
- The input devices 630 include alphanumeric keypads (including physical keyboards and touchscreen keyboards) and pointing devices.
- The input devices 630 also include audio input devices (e.g., microphones, MIDI musical instruments, etc.).
- The output devices 635 display images generated by the computer system.
- The output devices include printers and display devices, such as cathode ray tubes (CRT) or liquid crystal displays (LCD).
- The bus 605 also couples the computer 600 to a network 665 through a network adapter (not shown).
- The computer can be a part of a network of computers (such as a local area network ("LAN"), a wide area network ("WAN"), or an Intranet), or a network of networks, such as the Internet.
- The computer system 600 may include one or more of a variety of different computer-readable media.
- Some examples of such computer-readable media include RAM, ROM, read-only compact discs (CD-ROM), recordable compact discs (CD-R), rewritable compact discs (CD-RW), read-only digital versatile discs (e.g., DVD-ROM, dual-layer DVD-ROM), a variety of recordable/rewritable DVDs (e.g., DVD-RAM, DVD-RW, DVD+RW, etc.), flash memory (e.g., SD cards, mini-SD cards, micro-SD cards, etc.), magnetic and/or solid state hard drives, read-only and recordable Blu-ray discs, any other optical or magnetic media, and floppy disks.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Databases & Information Systems (AREA)
- Computer Security & Cryptography (AREA)
- Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
Abstract
Description
Claims (20)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US16/037,073 US10614854B2 (en) | 2016-03-22 | 2018-07-17 | Speedy clipping |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/077,795 US10032481B2 (en) | 2016-03-22 | 2016-03-22 | Speedy clipping |
US16/037,073 US10614854B2 (en) | 2016-03-22 | 2018-07-17 | Speedy clipping |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/077,795 Continuation US10032481B2 (en) | 2016-03-22 | 2016-03-22 | Speedy clipping |
Publications (2)
Publication Number | Publication Date |
---|---|
US20180322907A1 US20180322907A1 (en) | 2018-11-08 |
US10614854B2 true US10614854B2 (en) | 2020-04-07 |
Family
ID=59898600
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/077,795 Active 2037-02-08 US10032481B2 (en) | 2016-03-22 | 2016-03-22 | Speedy clipping |
US16/037,073 Active US10614854B2 (en) | 2016-03-22 | 2018-07-17 | Speedy clipping |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/077,795 Active 2037-02-08 US10032481B2 (en) | 2016-03-22 | 2016-03-22 | Speedy clipping |
Country Status (1)
Country | Link |
---|---|
US (2) | US10032481B2 (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2017179593A1 (en) * | 2016-04-13 | 2017-10-19 | Sony Corporation | AV server and AV server system
CN112218118A (en) * | 2020-10-13 | 2021-01-12 | 湖南快乐阳光互动娱乐传媒有限公司 | Audio and video clipping method and device |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20160100173A1 (en) * | 2014-10-03 | 2016-04-07 | International Business Machines Corporation | Enhanced Video Streaming |
- 2016-03-22: US application US15/077,795 filed; issued as US10032481B2 (status: Active)
- 2018-07-17: US application US16/037,073 filed; issued as US10614854B2 (status: Active)
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080154889A1 (en) * | 2006-12-22 | 2008-06-26 | Pfeiffer Silvia | Video searching engine and methods |
US20110292288A1 (en) * | 2010-05-25 | 2011-12-01 | Deever Aaron T | Method for determining key video frames |
US20140133837A1 (en) * | 2011-06-21 | 2014-05-15 | Nokia Corporation | Video remixing system |
US20130188700A1 (en) * | 2012-01-19 | 2013-07-25 | Qualcomm Incorporated | Context adaptive entropy coding with a reduced initialization value set |
US20160191961A1 (en) * | 2014-12-31 | 2016-06-30 | Imagine Communications Corp. | Fragmented video transcoding systems and methods |
US9715901B1 (en) * | 2015-06-29 | 2017-07-25 | Twitter, Inc. | Video preview generation |
Also Published As
Publication number | Publication date |
---|---|
US20170278543A1 (en) | 2017-09-28 |
US20180322907A1 (en) | 2018-11-08 |
US10032481B2 (en) | 2018-07-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10924523B2 (en) | Encodingless transmuxing | |
US9852762B2 (en) | User interface for video preview creation | |
US10743008B2 (en) | Video file transcoding system, segmentation method, and transcoding method and apparatus | |
US9961123B2 (en) | Media production system with score-based display feature | |
US9478256B1 (en) | Video editing processor for video cloud server | |
CA2896175C (en) | Media distribution and management platform | |
EP3713224B1 (en) | Live data processing method and system, and server | |
US7133881B2 (en) | Encoding and transferring media content onto removable storage | |
US20120266203A1 (en) | Ingest-once write-many broadcast video production system | |
KR102012682B1 (en) | Systems and Methods for Encoding and Sharing Content Between Devices | |
CN105376612A (en) | Video playing method, media equipment, playing equipment and multimedia system | |
WO2017092327A1 (en) | Playing method and apparatus | |
JP6508206B2 (en) | INFORMATION PROCESSING APPARATUS AND METHOD | |
WO2016002496A1 (en) | Information processing device and method | |
US10614854B2 (en) | Speedy clipping | |
US9998513B2 (en) | Selecting bitrate to stream encoded media based on tagging of important media segments | |
US10070174B2 (en) | Movie package file format to persist HLS onto disk | |
US11545185B1 (en) | Method and apparatus for frame accurate high resolution video editing in cloud using live video streams | |
CN112738573A (en) | Video data transmission method and device and video data distribution method and device | |
US11315604B2 (en) | Thumbnail video player for video scrubbing | |
CN108737355B (en) | Streaming media playback based on user bandwidth | |
JP6385474B2 (en) | Cloud streaming-based broadcast-linked service system, broadcast-linked service client device, trigger content providing server, and method using the same | |
CN104796732A (en) | Audio and video editing method and device | |
RU2690163C2 (en) | Information processing device and information processing method | |
CN114124941B (en) | m3u8 format file downloading method, playing method and m3u8 format file downloading system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: VERIZON DIGITAL MEDIA SERVICES INC., VIRGINIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:VERIZON PATENT AND LICENSING INC.;REEL/FRAME:046367/0012 Effective date: 20160811 Owner name: VERIZON PATENT AND LICENSING INC., NEW JERSEY Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:OWEN, CALVIN RYAN;WILLEY, TYLER;BRUECK, DAVID FREDERICK;SIGNING DATES FROM 20160316 TO 20160322;REEL/FRAME:046366/0966 |
|
FEPP | Fee payment procedure |
Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT RECEIVED |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
AS | Assignment |
Owner name: EDGECAST INC., VIRGINIA Free format text: CHANGE OF NAME;ASSIGNOR:VERIZON DIGITAL MEDIA SERVICES INC.;REEL/FRAME:059367/0990 Effective date: 20211101 |
|
AS | Assignment |
Owner name: EDGIO, INC., ARIZONA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:EDGECAST INC.;REEL/FRAME:061738/0972 Effective date: 20221021 |
|
AS | Assignment |
Owner name: LYNROCK LAKE MASTER FUND LP (LYNROCK LAKE PARTNERS LLC, ITS GENERAL PARTNER), NEW YORK Free format text: PATENT SECURITY AGREEMENT;ASSIGNORS:EDGIO, INC.;MOJO MERGER SUB, LLC;REEL/FRAME:065597/0212 Effective date: 20231114 Owner name: U.S. BANK TRUST COMPANY, NATIONAL ASSOCIATION, ARIZONA Free format text: PATENT SECURITY AGREEMENT;ASSIGNORS:EDGIO, INC.;MOJO MERGER SUB, LLC;REEL/FRAME:065597/0406 Effective date: 20231114 |
|
FEPP | Fee payment procedure |
Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
FEPP | Fee payment procedure |
Free format text: SURCHARGE FOR LATE PAYMENT, LARGE ENTITY (ORIGINAL EVENT CODE: M1554); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 4 |
|
AS | Assignment |
Owner name: LYNROCK LAKE MASTER FUND LP (LYNROCK LAKE PARTNERS LLC, ITS GENERAL PARTNER), NEW YORK Free format text: PATENT SECURITY AGREEMENT;ASSIGNORS:EDGIO, INC.;MOJO MERGER SUB, LLC;REEL/FRAME:068763/0276 Effective date: 20240823 |