US20210127118A1 - System and method for optimized encoding and transmission of a plurality of substantially similar video fragments - Google Patents

System and method for optimized encoding and transmission of a plurality of substantially similar video fragments Download PDF

Info

Publication number
US20210127118A1
Authority
US
United States
Prior art keywords
variant
video
section
variant section
frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US17/138,577
Inventor
Seth Haberman
Gerrit Niemeijer
Richard L. Booth
Alex Jansen
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Adeia Media Holdings LLC
Original Assignee
Tivo LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tivo LLC filed Critical Tivo LLC
Priority to US17/138,577
Assigned to VISIBLE WORLD, LLC reassignment VISIBLE WORLD, LLC CHANGE OF NAME (SEE DOCUMENT FOR DETAILS). Assignors: VISIBLE WORLD, INC.
Assigned to VISIBLE WORLD, INC. reassignment VISIBLE WORLD, INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: BOOTH, RICHARD L., HABERMAN, SETH, JANSEN, ALEX, NIEMEIJER, GERRIT
Assigned to TIVO CORPORATION reassignment TIVO CORPORATION ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: VISIBLE WORLD, LLC
Publication of US20210127118A1
Assigned to BANK OF AMERICA, N.A., AS COLLATERAL AGENT reassignment BANK OF AMERICA, N.A., AS COLLATERAL AGENT SECURITY INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: ADEIA GUIDES INC., ADEIA IMAGING LLC, ADEIA MEDIA HOLDINGS LLC, ADEIA MEDIA SOLUTIONS INC., ADEIA SEMICONDUCTOR ADVANCED TECHNOLOGIES INC., ADEIA SEMICONDUCTOR BONDING TECHNOLOGIES INC., ADEIA SEMICONDUCTOR INC., ADEIA SEMICONDUCTOR SOLUTIONS LLC, ADEIA SEMICONDUCTOR TECHNOLOGIES LLC, ADEIA SOLUTIONS LLC
Legal status: Abandoned

Classifications

    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/152: Data rate or code amount at the encoder output, by measuring the fullness of the transmission buffer
    • H04N19/114: Adapting the group of pictures [GOP] structure, e.g. number of B-frames between two anchor frames
    • H04N19/124: Quantisation
    • H04N19/174: Adaptive coding characterised by the coding unit, the unit being an image region, the region being a slice, e.g. a line of blocks or a group of blocks
    • H04N19/176: Adaptive coding characterised by the coding unit, the unit being an image region, the region being a block, e.g. a macroblock
    • H04N19/30: Coding using hierarchical techniques, e.g. scalability
    • H04N19/40: Coding using video transcoding, i.e. partial or full decoding of a coded input stream followed by re-encoding of the decoded output stream
    • H04N19/513: Processing of motion vectors

Definitions

  • In the case of VBR (Variable BitRate) video, the system can recalculate the peak (maximum) bitrate of the stream (which might have changed due to the f_code and motion vector adaptation) and fill in this value in the bit_rate_value fields of the video. The vbv_delay fields have a constant, fixed value in the case of VBR video, and the value for vbv_buffer_size can simply be chosen as the maximum allowed VBV buffer size for the particular profile and level of the MPEG video (e.g., 224 Kbyte for Main Profile/Main Level MPEG video).
  • macroblock data tends to slightly expand to reflect the necessary f_code value changes during stitching, causing an increase in required bandwidth.
  • sequence, GOP, and picture header information from all but one of them is stripped off to leave just the slices of macroblocks, causing a slight decrease in required bandwidth.
  • Let E(f) be the expansion ratio of frame f after stitching, and let Emax be the maximum of all values E(0), . . . , E(F−1), i.e., the maximum expansion ratio across all F frames. A typical value is around 1.1 (a 10% expansion).
  • The worst-case VBV buffer size for the stitched video is then VBVmax = (VBV(0) + VBV(1) + . . . + VBV(N)) * Emax, where VBV(0), . . . , VBV(N) are the VBV buffer sizes of the separately encoded regions.
  • As long as VBVmax does not exceed the maximum allowed VBV buffer size (e.g., 224 Kbyte for MPEG-2 Main Profile/Main Level, or MP@ML), it can be used for the vbv_buffer_size_value of the stitched video, with the bit_rate_value and vbv_delay fields filled in accordingly.
  • Step 9 in FIG. 6 illustrates the final step of padding between frames to achieve the final VBV buffer size for the stitched video sequence in the case of CBR video.
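  • A minimal sketch of the buffer-size bound expressed by the formula above follows. The function and variable names are illustrative, and the example expansion ratios are assumptions rather than values from the patent.

```python
# Worst-case VBV buffer size of the stitched stream, following the formula
# above: the sum of the per-region VBV buffer sizes, scaled by the maximum
# per-frame expansion ratio Emax.

MP_ML_MAX_VBV_BYTES = 224 * 1024   # MPEG-2 Main Profile / Main Level limit

def stitched_vbv_max(region_vbv_bytes, frame_expansion_ratios):
    e_max = max(frame_expansion_ratios)        # Emax over all F frames
    return sum(region_vbv_bytes) * e_max

# FIG. 1 style buffers (140 Kbyte top, 70 Kbyte bottom) with an assumed
# 1-2% per-frame expansion from the f_code/motion-vector re-encoding.
vbv_max = stitched_vbv_max([140 * 1024, 70 * 1024], [1.01, 1.02, 1.01])
print(vbv_max, vbv_max <= MP_ML_MAX_VBV_BYTES)
```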

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

A system and method for stitching separately encoded MPEG video fragments, each representing a different rectangular area of the screen, together into one single full-screen MPEG-encoded video fragment.

Description

    CROSS REFERENCE TO RELATED APPLICATIONS
  • This application is a continuation of Ser. No. 15/856,171, filed on Dec. 28, 2017, which is a continuation of U.S. patent application Ser. No. 15/155,827, filed on May 16, 2016, now U.S. Pat. No. 10,298,934, which issued on May 21, 2019, which is a continuation of U.S. patent application Ser. No. 13/455,836, filed on Apr. 25, 2012, now U.S. Pat. No. 9,344,734, which issued on May 17, 2016, which is a continuation of U.S. patent application Ser. No. 10/991,674, filed on Nov. 18, 2004, now U.S. Pat. No. 8,170,096, which issued on May 1, 2012, which claims the benefit of U.S. Provisional Patent Application No. 60/523,035, filed on Nov. 18, 2003, the contents of which are incorporated by reference herein in their entireties.
  • FIELD OF THE INVENTION
  • This invention is directed towards digital compressed video, and more particularly towards a method for merging separately encoded MPEG video segments into a single full-screen encoded MPEG video segment.
  • BACKGROUND
  • Current electronic distribution of television messages, such as commercials, from an originator or distributor to one or more television broadcast stations and/or cable television master control centers, does not easily allow for regional customization of such messages. The reason for this is that each different variant of a message has to be transmitted completely by itself, and independent from the other variants, from sender to receiver. Each extra variant will thus require proportionally extra bandwidth usage over the transmission channel. Sending fifty different variants will require fifty times as much bandwidth as sending one single variant. This added bandwidth consumption would be prohibitively costly and/or time consuming.
  • Regional customization of television messages would be desirable, for example, in the distribution of infomercials. An infomercial is a television commercial with the specific purpose of getting the viewer to pick up the phone and order the advertised product immediately. Typically, there are two types of infomercials: long form messages (paid programming), having a length of approximately 30 minutes, and short form messages (direct response ads, aired in the space of normal commercials), having a length of 30-120 seconds.
  • A typical infomercial (just like any other television commercial) is broadcast in many different geographic regions from many different (easily 50 or more) broadcast stations. To measure the effectiveness of the commercial in the different regions it would be advantageous to have a different call-in phone number in use for the commercial in each region. Typically, such phone numbers to be called by the viewer are overlaid over a small portion, typically the bottom, of the video for the entire duration (or large portions) of the commercial.
  • Regional customization of television messages would also be advantageous, for instance, in the case of a commercial for a brand or chain that has many stores throughout the country. Commercials for the brand could be regionalized by showing the address of the nearest store for that brand, by showing different promotions for different products for each region, by showing different prices for the same product in different regions, etc. Such region-specific customizations could be added to small portions of the video of the commercial, for example as (but not limited to) a text overlay.
  • The above examples have in common that a small portion of the video varies between the different versions of the message, while larger portions of the video are common to many versions. Therefore, it would be advantageous if there were a method to independently encode and distribute different portions of the screen, to exploit the different amounts of variation for different portions of the screen, and thus achieve a saving in required transmission bandwidth. An additional advantage would be reduced encoding and decoding time, since the amount of video to be encoded and decoded would be less.
  • However, television commercials are currently mainly distributed in MPEG-2 format. Unfortunately, the MPEG-2 video compression standard as well as existing MPEG-2 encoding and decoding equipment do not allow for independent encoding, decoding, and/or assembly into full screen video, of different portions of the video screen.
  • SUMMARY
  • “Video stitching” is a solution to the problem of how to efficiently encode, bundle, and assemble many different variants of a single piece of digital video, where certain (preferably small) rectangular parts of the screen can be identified as containing a lot of variations, and the remaining, preferably large, rectangular portion(s) of the screen are invariant (or have very little variation) across all versions of the video.
  • In a situation where only parts of the video, for instance the bottom, are different over all variants, it is advantageous to encode only one copy of the (common) top of the screen and multiple different copies of the bottom of the screen, rather than to encode each full-screen variant. At the receiving end, a video stitching system would be used to reconstruct any desired variant. Savings are made in encoding speed, decoding speed, and the size of the complete package with all variants.
  • An illustrative embodiment of the present invention includes a method of encoding partial-frame video segments, the method including dividing the full-video frame area into rectangular regions, wherein the rectangular regions have a length and width that are each a multiple of 16 pixels. Then upon obtaining video segments for at least one of the rectangular regions, the method includes determining a target VBV buffer size for the video segments for a selected rectangular region, and then encoding the video segments for the selected rectangular region using the determined target VBV buffer size. Preferably, a common GOP pattern is used for encoding all video segments for all of the rectangular regions.
  • The step of determining a target VBV buffer size includes selecting a target VBV buffer size that is substantially proportional to a full frame VBV buffer size based on the relative size of the rectangular region as compared to the full frame size. Alternatively, a target VBV buffer size is selected that is smaller than a value that is substantially proportional to a full frame VBV buffer size based on the relative size of the rectangular region as compared to the full frame size. The method also includes determining a VBV bit rate for encoding the video segments for the selected rectangular region, wherein a VBV bit rate is selected that is substantially proportional to a full frame VBV bit rate based on the relative size of the rectangular region as compared to the full frame size. Alternatively, a VBV bit rate is selected that is smaller than a value that is substantially proportional to a full frame VBV bit rate based on the relative size of the rectangular region as compared to the full frame size.
  • The illustrative embodiment also includes determining f-code array values for the video segments and, after encoding the video segments, modifying the f-code array values and motion vectors of the encoded video according to the determined f-code array values. The f-code array values are determined by using the maximum values for the f-code array values from the plurality of video segments.
  • The illustrative embodiment also includes assembling a row of encoded video segment macroblocks into a slice, wherein header information for each macroblock in the slice is modified for a proper format for a complete slice.
  • The method also includes obtaining a plurality of different video segments for the rectangular region, wherein the different video segments are then merged with other video segments for other rectangular regions to create multiple different full-video frame video segments.
  • The present invention also includes an encoding system for encoding partial-frame video segments to allow different partial-frame video segments to be merged into a full-video frame.
  • The savings in overall package size provided by the present invention are important, for instance, in situations of satellite (multicast/broadcast) distribution of the entire package from the originator to the local broadcast stations. Each local station has to extract the desired variant from the package. This is shown in FIG. 2. Large savings in the size of the package to be distributed will make it commercially feasible to have many different variants of the infomercial.
  • Even in the case of (unicast) distribution of the entire package from a single originator to a single intermediate distribution point (such as a cable television local advertising master control center) the savings in bandwidth between originator and distribution point will allow for major cost and time savings.
  • It is important to note that Video Stitching as described in the present invention is very different from other object-based compressed video technologies (such as MPEG-4) since the order of decoding and composing (stitching) the individual segments is reversed. In MPEG-4, the video decoder first decodes the various encoded video segments, and then composes the resulting uncompressed segments (typically on-the-fly) into full screen video. In the proposed video stitching method, the compressed segments are first stitched together into a full-screen compressed segment which is subsequently decoded.
  • Another difference between the present invention and other object-based video compression methods such as MPEG-4 is that Video Stitching as described integrates naturally with existing MPEG-2 decoding equipment deployed in, for example, cable television headends and broadcast television stations. To support video stitching in such environments would mean that the individually encoded MPEG video segments are stitched together just after reception at the point of decoding. Decoding itself will not change since the Video Stitcher will produce the same type of (MPEG-2) streams that are currently handled by such systems. As a comparison, using MPEG-4 would mean that a transcoder from MPEG-4 to MPEG-2 is needed at the point of reception to be able to reuse the existing MPEG-2 decoders, which is disadvantageous. Another option would be replacing the existing MPEG-2 decoders with MPEG-4 decoders which is disadvantageous from a cost perspective.
  • Note that the present invention will work with videos of any resolution, and is not restricted to just NTSC or PAL. Any resolution that is legal under MPEG is supported.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The foregoing and other features and advantages of the present invention will be more fully understood from the following detailed description of illustrative embodiments, taken in conjunction with the accompanying drawings in which:
  • FIG. 1 illustrates a video segment split-up in a top and bottom region;
  • FIG. 2 illustrates a satellite broadcast of a package consisting of a single top video file and multiple bottom video files;
  • FIG. 3 illustrates a video segment split-up into five different rectangular regions;
  • FIG. 4 illustrates a video segment split-up into seven different rectangular regions;
  • FIG. 5 illustrates part of preparing stitchable video segments and stitching them together into valid MPEG, according to an illustrative embodiment; and
  • FIG. 6 illustrates another part of preparing stitchable video segments and stitching them together into valid MPEG according to an illustrative embodiment.
  • DETAILED DESCRIPTION
  • A method of creating personalized messages that can be used for regionally, or even personalized, targeting based on variations in the commercial is described in co-pending U.S. patent application Ser. No. 09/545,015 filed on Apr. 7, 2000 and incorporated herein by reference. The present invention is directed towards variations between video and commercials based on differences that are confined to certain portions of the screen.
  • The present invention finds utility in various data transmission applications including, but not limited to, encoding, transmission, reception and decoding of digital compressed video, regardless of the means of transmission.
  • An application of an illustrative embodiment is shown in FIG. 1. A piece of video 20 is to be created of which fifty variations must be made, each with a different 1-800-number, promotion code, or any other piece of information in the bottom ⅓rd of the screen. The final videos have to be encoded in MPEG-2, have a total resolution of 720×480 (NTSC) and a bitrate of 4,500,000 bps. The top video will have a resolution of 720×320 and an encoding bitrate of 3,000,000 bps. The bottom video will have a resolution of 720×160 and an encoding bitrate of 1,500,000 bps, leading to the desired merged resolution and bitrate. Typical MPEG VBV buffer size values will be 140 Kbyte for the top and 70 Kbyte for the bottom (leading to an overall VBV buffer size after merging below the maximum of 224 Kbyte).
  • Now, assuming a thirty minute message, fifty completely encoded variants would require a total storage capacity (and transmission bandwidth) of (30×60×4.5×50)/(8×1024) = 50 Gbyte. Separate encoding of the single top section and the 50 different bottom sections will only require (30×60×(3+1.5×50))/(8×1024) = 17 Gbyte, which represents a reduction in size by a factor of about 3 (34% of the full size).
  • In situations where the bottom section is smaller and/or the number of different variants is larger, the savings increase even more dramatically. For example, a bottom section of ⅕th of the screen size and a total of 100 variants will lead to a size of 100 Gbyte for encoding each full variant, and 30×60×4.5×(⅘+⅕×100)/(8×1024) = 20.5 Gbyte for separate encoding of the top part and the bottom parts. This reflects a reduction by a factor of 5 (20% of the full size).
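  • The arithmetic above can be reproduced directly; the short sketch below recomputes both examples. The function name and units are illustrative, not part of the patent.

```python
# Recompute the package-size comparison above: encoding every full-screen
# variant versus encoding the common part once plus every variant of the
# small part. Bitrates are in Mbit/s, duration in seconds, sizes in Gbyte.

def package_size_gbyte(duration_s, total_bitrate_mbps):
    return duration_s * total_bitrate_mbps / (8 * 1024)

duration = 30 * 60        # thirty-minute message
full_rate = 4.5           # full-screen bitrate in Mbit/s

# Example 1: bottom third varies, 50 variants (top 3.0 Mbit/s, bottom 1.5 Mbit/s).
print(package_size_gbyte(duration, full_rate * 50),          # ~50 Gbyte, all full variants
      package_size_gbyte(duration, 3.0 + 1.5 * 50))          # ~17 Gbyte, stitched package

# Example 2: bottom fifth varies, 100 variants.
print(package_size_gbyte(duration, full_rate * 100),                  # ~100 Gbyte
      package_size_gbyte(duration, full_rate * (4 / 5 + 100 / 5)))    # ~20.5 Gbyte
```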
  • FIGS. 5 and 6 describe the various steps in preparing stitchable video segments, and stitching such video segments together into full-screen segments. Throughout the following text references will be made to the steps shown in these pictures as further illustration of the present invention.
  • A first aspect of the present invention is how to encode the video to make it suitable for stitching at a later point in time. One possibility is to encode all variant commercials in their full-screen entirety, and then post-process them into the separate parts. This has proven not to be practical because MPEG motion vectors allow video at any given point on the screen to “migrate” to other parts of the screen over time (the duration of a GOP (Group of Pictures) is typically half a second), and it cannot easily be guaranteed that changes in the variable part of the screen won't end up in the invariant part, causing a visible video glitch.
  • A workable solution according to the present invention is to encode the separate rectangular parts (2 or more) of the video fully independently of each other. Each part would represent a rectangular region of the screen. Each such region is a multiple of 16 pixels high and a multiple of 16 pixels wide, and thus corresponds to an integral number of MPEG macroblocks. All rectangular regions together would precisely cover the entire screen 20 (without overlaps). An NTSC picture, for example, is 720 pixels wide and 480 pixels high, corresponding to 45 macroblocks wide and 30 macroblocks high. One can, for instance, encode the top 25 macroblock rows as the invariant part of the picture and make multiple versions of the bottom 5 rows to be stitched together with the top 25.
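  • As a sanity check on the constraints just stated (each region a multiple of 16 pixels in both dimensions, regions tiling the frame exactly once), a minimal sketch follows; the function name and the region coordinates are illustrative assumptions.

```python
# Check that a proposed region layout is macroblock-aligned (every coordinate
# and dimension a multiple of 16 pixels) and tiles the frame exactly once,
# which is what allows each region to be encoded independently.

MB = 16  # macroblock size in pixels

def check_partition(frame_w, frame_h, regions):
    """regions: list of (x, y, width, height) rectangles in pixels."""
    covered = set()
    for x, y, w, h in regions:
        if any(v % MB for v in (x, y, w, h)):
            raise ValueError("region not macroblock-aligned")
        for mx in range(x // MB, (x + w) // MB):
            for my in range(y // MB, (y + h) // MB):
                if (mx, my) in covered:
                    raise ValueError("regions overlap")
                covered.add((mx, my))
    if len(covered) != (frame_w // MB) * (frame_h // MB):
        raise ValueError("regions do not cover the full frame")

# NTSC frame split as in the text: invariant top 25 macroblock rows (720x400)
# and variant bottom 5 rows (720x80).
check_partition(720, 480, [(0, 0, 720, 400), (0, 400, 720, 80)])
```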
  • More complicated situations are shown in FIGS. 3 and 4, where the screen is split up into five and seven rectangular regions respectively, each an exact multiple of 16 pixels wide and high. In FIG. 3 there is one variable region 22 of the screen (with a call to action). In FIG. 4 there is an additional logo 24 in the top right hand corner that is variable.
  • By running the rectangular regions of the picture through an encoder separately, the present invention guarantees that motion vector “crosstalk” between the different segments will not occur.
  • FIG. 5 shows the steps according to an illustrative embodiment. Step 1 is the step of deciding on the different screen regions, as previously described. For this example, there are five different regions. At step 2, the video for each region is created or extracted from source material of other dimensions, as is well known in the art. If the source material is already of the right dimensions, it can be encoded directly. All the segments (including variations) are then encoded, and then packaged for delivery, step 3. The packaging may include all segments, or be divided up by delivery area or other parameters. The package is then transferred to the receiving station by any type of transmission, step 4. The receiving station can be a head end, set top box, or other place where the segments will be assembled. The proper segments are selected, and assembled, or stitched, into the appropriate full screen size, step 5. Details for this process are provided below.
  • Enabling efficient stitching of separately-encoded pieces of video according to the illustrative embodiment utilizes several steps during encoding, step 2 of FIG. 5. A first requirement on the separately encoded videos is that they must maintain exactly identical GOP patterns. For instance, an IPBBPBBPBBPBB segment can't be stitched together with an IPBPBPBPBPBPB segment. Even though the number of frames is the same in these GOPs, the differing patterns preclude stitching because a resultant frame cannot be part P (interframe, or forward-predicted frame) and part B (bi-directionally predicted frame) at the same time. Stitching such videos together would effectively mean decoding and subsequently reencoding the video at the point of reception, thus defeating the purpose.
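  • The GOP-pattern requirement can be verified mechanically before attempting a stitch. The sketch below is a minimal illustration; the function name and the frame-type strings are assumptions.

```python
# Segments are only stitchable if their frame-type sequences (GOP patterns)
# match position for position; equal length alone is not enough.

def gop_patterns_match(*patterns):
    """Each pattern is a string of frame types in coded order, e.g. 'IPBB...'."""
    return len(set(patterns)) == 1

print(gop_patterns_match("IPBBPBBPBBPBB", "IPBBPBBPBBPBB"))  # True: stitchable
print(gop_patterns_match("IPBBPBBPBBPBB", "IPBPBPBPBPBPB"))  # False: same length, different pattern
```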
  • When encoding the rectangular regions according to the illustrative embodiment, it is beneficial for the VBV buffer size and bitrate of each region to be chosen in approximate proportion to the relative sizes of the regions. For example, in a two region situation, as shown in FIG. 1, if one region takes 90% of the area, it should use about 90% of the intended final VBV buffer size and bitrate, while the other region should get about 10% of each. It is furthermore important to note, as will be explained below, that stitching will typically result in some data expansion, so the VBV sizes and bitrates actually chosen for encoding should be slightly smaller than the ones that are computed based only on the proportion of the relative region sizes, applied to the fixed desired VBV buffer size and bitrate. In practice, this means a slight reduction by a fixed factor, usually between 10% and 15%, for each segment. For example, for a VBV buffer size of 224 Kbyte and a bitrate of 6,000,000 bits/sec, the sum of the two parts should be about 200 Kbyte for the VBV buffer and about 5,400,000 bits/sec for the bitrate.
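  • A minimal sketch of the proportional split with the 10-15% safety reduction described above; the function name, the default reduction factor, and the region list are illustrative assumptions.

```python
# Split the intended full-frame VBV buffer size and bitrate across regions in
# proportion to their areas, then shrink each share by a fixed safety factor
# (10-15%) to leave headroom for the data expansion caused by stitching.

def allocate(regions, full_vbv_bytes, full_bitrate_bps, reduction=0.10):
    """regions: list of (width, height) in pixels; returns (vbv_bytes, bitrate_bps) per region."""
    total_area = sum(w * h for w, h in regions)
    return [(int(full_vbv_bytes * (w * h / total_area) * (1 - reduction)),
             int(full_bitrate_bps * (w * h / total_area) * (1 - reduction)))
            for w, h in regions]

# Two-region example: 720x320 top and 720x160 bottom, targeting 224 Kbyte VBV
# and 6,000,000 bits/sec overall; the shares sum to ~200 Kbyte and ~5,400,000 bits/sec.
for vbv_bytes, bitrate in allocate([(720, 320), (720, 160)], 224 * 1024, 6_000_000):
    print(vbv_bytes, bitrate)
```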
  • Another typical requirement for encoding “stitchable” video segments according to the illustrative embodiment is that the global stream parameters that control macroblock decoding should be identical across clips. In particular, the optional quantization matrices included in sequence headers should be the same, as well as any quantization matrix extensions present. Several fields of the picture coding extension should also agree across clips, most importantly the alternate scan bit, the intra_dc_precision value, and the 4-value f_code array. Most encoders presently used in the industry either pick fixed values for these fields or can be instructed to use certain fixed values. The exception is the 4-value f_code array, which governs the decoding of motion vectors.
  • The f_code array contains four elements: one each for forward horizontal motion, forward vertical motion, backward horizontal motion, and backward vertical motion. Only B frames actually use all four. P frames only use the forward components. I frames (Intraframes) don't use them at all.
  • The values in the f_code array reflect the “worst case” motion vectors found in the associated image. A picture with a lot of large-scale gross motion relative to its predecessor picture tends to have larger maximum motion vector sizes than a relatively static picture. Bigger motion vectors are reflected in bigger f_code values, which essentially control the number of bits used to express motion vectors in the picture. In order to perform video stitching, the f_code values of the video frames that must be stitched together must be modified so that they are consistent. Subsequently, according to the illustrative embodiment, the motion vectors that are expressed in terms of these f_codes are reencoded to match the new f_codes. An advantage is that any given motion vector can be re-encoded for a larger f_code value, but not necessarily for a smaller f_code value. Therefore, to be able to stitch two or more video frames together, the illustrative embodiment defines the f_code values for the stitched frame to be at least the maximum of the alternatives for any f_code array component. After thus determining the new f_code for the stitched frame, the motion vectors of each frame are reencoded in terms of the new f_codes. This reencoding typically involves some data expansion, but practice has shown it is often on the order of just 1%.
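  • The two rules above (take the element-wise maximum of the contributing f_code arrays, and note that a vector can always be re-expressed under a larger f_code) can be sketched as follows. The range formula assumes MPEG-2 semantics, where f_code value f covers roughly [-16·2^(f-1), 16·2^(f-1) - 1] motion vector units; the function names and example values are illustrative.

```python
# Element-wise maximum of the contributing frames' 4-value f_code arrays, and
# the smallest f_code able to represent a given motion vector component
# (MPEG-2 style ranges assumed).

def stitched_f_code(f_code_arrays):
    return [max(values) for values in zip(*f_code_arrays)]

def min_f_code_for(component):
    for f in range(1, 10):
        if -16 * 2 ** (f - 1) <= component <= 16 * 2 ** (f - 1) - 1:
            return f
    raise ValueError("motion vector component out of range")

# Two regions encoded with different f_code arrays
# (forward horizontal, forward vertical, backward horizontal, backward vertical):
print(stitched_f_code([[2, 2, 2, 2], [3, 1, 2, 2]]))  # [3, 2, 2, 2]
print(min_f_code_for(40))                             # 3: a component of 40 needs the +/-64 range
```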
  • According to the illustrative embodiment, there are two options for modifying f_code values and reencoding motion vectors to make video fragments stitchable. The first option is to make this part of the actual stitching process, i.e., in the local stations or set top boxes as shown in FIG. 2. The advantage of this approach is that the package that has to be transmitted is as small as possible, leading to a saving in bandwidth. However, this means that the stitching work in the local station is more complex, since all the f_code modification and motion-vector reencoding has to be done locally.
  • The second option is to find the maximum f_code values for a given frame across all variants and modify all the variants (i.e., reencode the motion vectors) for this maximum f_code array before packaging for distribution. This will simplify the stitching process in the local station at the expense of needing more bandwidth, and leading to slightly larger (1% or so) stitched videos (since the max. f_code is computed across all variants, and not on a per variant basis).
  • FIG. 5 illustrates the first option, i.e., packaging the video directly after encoding, and modifying f_codes and motion vectors after distribution. Step 3 illustrates the packaging of the separately encoded segments into a single package. Step 4 illustrates the distribution of that package (using any available transmission mechanism). Step 5 shows the selection, at the reception point, of a set of segments (one for each region) to be stitched together. Step 6 in FIG. 6 illustrates the modification of f_code arrays and motion vectors at the point of reception.
  • The actual stitching process, after having modified all the f_code arrays and having reencoded all the motion vectors for all the (slices of the) frames of the videos to be stitched together, is now described.
  • In MPEG, a single row of macroblocks is encoded as a sequence of one or more MPEG “slices”, where a slice is a row of macroblocks (one macroblock high, and one or more macroblocks wide). In a single MPEG video frame, these slices are encoded from top to bottom.
  • The first task in composing a full-screen frame from multiple separately encoded smaller frames is to produce the screen-wide rows of macroblocks. Simply appending the slices from the different encoded frames for each such row in left to right order is sufficient. The only extra work that has to be done for each slice is setting the macroblock_address_increment value in the first macroblock to indicate the horizontal starting position of the slice in the macroblock row.
  • Having composed all the individual screen-wide macroblock rows, the next step is composing these rows into a full frame. This can be done by appending the rows in a top to bottom order. The only extra work that has to be done is adjusting the slice_vertical_position value in each slice to indicate how far down the screen from the top the slice goes in the final stitched frame.
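  • The following sketch illustrates the two composition steps just described, using a hypothetical Slice record; real slice rewriting operates on the MPEG bitstream itself, so this only shows which header fields are touched and in what order the slices are emitted.

```python
# Compose screen-wide macroblock rows from per-region slices (left to right),
# then stack the rows into a full frame (top to bottom), adjusting only the
# first macroblock's address increment and the slice_vertical_position.
from dataclasses import dataclass

@dataclass
class Slice:               # illustrative stand-in for a parsed MPEG slice
    region: str
    mb_row: int            # row index within its own region
    mb_start: int          # first macroblock column within its region
    width_mb: int

def compose_frame(region_layout, slices_by_region):
    """region_layout: {name: (x_mb, y_mb)} top-left offsets in macroblock units."""
    frame = []
    for name, (x_mb, y_mb) in region_layout.items():
        for s in slices_by_region[name]:
            s.mb_start += x_mb                    # macroblock_address_increment fix-up
            frame.append((y_mb + s.mb_row, s))    # slice_vertical_position fix-up
    # Emit slices top to bottom, and left to right within each row.
    frame.sort(key=lambda item: (item[0], item[1].mb_start))
    return frame

layout = {"top": (0, 0), "bottom": (0, 20)}       # FIG. 1: bottom region starts at MB row 20
slices = {"top": [Slice("top", r, 0, 45) for r in range(20)],
          "bottom": [Slice("bottom", r, 0, 45) for r in range(10)]}
print(len(compose_frame(layout, slices)))         # 30 slices for a 720x480 frame
```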
  • It is important to consider that, although it is perfectly legal in MPEG to have multiple slices per screen-wide row of macroblocks, some decoders have problems with more than one slice per macroblock row, since this is not common practice in the industry. It is therefore safer to concatenate two or more slices into a single full-width slice. Slice concatenation according to the illustrative embodiment is described by the following six-step process (a schematic sketch in code follows the list).
  • 1. All but the first of the slices being concatenated must have their slice headers removed.
  • 2. The first macroblock of a slice to be appended to a previous slice may need a macroblock_address_increment adjustment to indicate how many macroblocks have been skipped between the end of the slice to its left and the current macroblock. When there is no gap (as is usually the case), this value will need no change.
  • 3. If the quantiser_scale_code in use at the end of a slice differs from the one declared in the excised slice header of the following slice, the first macroblock of that following slice must carry the correct quantiser_scale_code. This is indicated by setting the macroblock_quant_flag in that macroblock, followed by the appropriate quantiser_scale_code.
  • 4. The predicted motion vectors at the beginning of a slice that follows a previous slice must be updated. At the beginning of a slice, motion vectors are predicted to be zero, so the first motion vectors encoded for a macroblock represent absolute values. Subsequent macroblocks derive their motion vectors as deltas from those of the macroblock to their immediate left. Forward and reverse motion vectors are tracked separately, and if macroblocks are skipped within the slice, predictions may revert to zero. The exact rules about how skipped macroblocks affect predictions differ between P and B frames and between field and frame pictures. In any event, one or more macroblocks at or near the beginning of a slice to be appended to a previous slice most likely need to be modified to take into account the motion vector predictions inherited from the previous slice. Once inherited correctly, macroblocks farther to the right need not be modified.
  • 5. The “dct_dc_differential” values found in the first macroblock of an appended slice must be modified to reflect inheritance of predicted dc levels from the last macroblock of the slice onto which the appending operation is being performed. Normally slices start out with known fixed dc level assumptions in the first macroblock, and inherit dc levels, with modification by dct_dc_differential values, from left to right. Modification of the dct_dc_differential value is required at the start of an appended slice because its dc calculation must be based on inheritance from the macroblock to its left instead of on the initial fixed values.
  • 6. The stop code for a slice is a run of 23 zero bits. These stop bits have to be removed for all but the last appended slice in a screen-wide row of macroblocks.
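  • A high-level sketch of the bookkeeping behind these six steps is given below, assuming the bitstream has already been parsed into the hypothetical objects shown. Steps 4 and 5 involve genuine variable-length re-coding of motion vectors and dct_dc_differential values and are therefore only marked by comments; all type and field names are illustrative, not part of the patent.

```python
# Hypothetical sketch of the six-step concatenation of the slices that make up
# one screen-wide macroblock row. Only the bookkeeping is shown; bit-level
# re-coding (steps 4 and 5) is indicated by comments.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Macroblock:
    macroblock_quant_flag: bool = False
    quantiser_scale_code: Optional[int] = None

@dataclass
class ParsedSlice:
    macroblocks: List[Macroblock]
    declared_quantiser_scale_code: int      # from the slice header
    last_quantiser_scale_code: int          # in effect at the end of the slice
    has_header: bool = True
    has_stop_bits: bool = True              # 23-zero-bit run terminating the slice

def concatenate_row(slices: List[ParsedSlice]) -> ParsedSlice:
    """Merge the slices of one macroblock row into a single full-width slice."""
    merged = slices[0]
    for left, right in zip(slices, slices[1:]):
        right.has_header = False            # Step 1: drop the appended slice's header.
        # Step 2: no gap between stitched segments, so the first
        # macroblock_address_increment of the appended slice stays at 1.
        if right.declared_quantiser_scale_code != left.last_quantiser_scale_code:
            # Step 3: carry the correct quantiser scale in the first macroblock.
            first_mb = right.macroblocks[0]
            first_mb.macroblock_quant_flag = True
            first_mb.quantiser_scale_code = right.declared_quantiser_scale_code
        # Step 4: re-code the leading motion vectors of `right` as deltas from
        # the predictors inherited from the last macroblock of `left` (not shown).
        # Step 5: recompute the leading dct_dc_differential values of `right`
        # against the dc predictors inherited from `left` (not shown).
        left.has_stop_bits = False          # Step 6: keep stop bits only on the last slice.
        merged.macroblocks.extend(right.macroblocks)
    return merged
```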
  • FIG. 6 illustrates the stitching of video segments and merging of slices. Step 7 illustrates the concatenation of slices for single rows of macroblocks as well as concatenation of such rows into a full-screen picture. Step 8 illustrates the subsequent merging of multiple slices for one screen-wide macroblock row into a single slice.
  • The final data fields that have to be determined for a complete frame that has thus been stitched together are the vertical_size_value, bit_rate_value, and vbv_buffer_size_value, all in the MPEG sequence header, as well as the vbv_delay field in the MPEG picture header.
  • The value for vertical_size_value is simply the height of the stitched frame and is hence easy to modify. However, in the case of CBR (Constant BitRate) MPEG video, obtaining legal values for bit_rate_value, vbv_buffer_size_value, and vbv_delay requires additional work. It is generally necessary to add varying amounts of padding between frames of video while appending all frames together, in order to ensure that the video has a constant bitrate and satisfies the rules of the CBR video buffering verifier (VBV) model that govern the legal playability of MPEG video. Only after this padding is complete can the values for bit_rate_value, vbv_buffer_size_value, and vbv_delay be filled in for each frame (step 9). This is described further in the separate section below.
  • In the case of VBR (Variable BitRate) MPEG video, padding is not strictly necessary. Instead, the system can recalculate the peak (maximum) bitrate of the stream (which might have changed due to the f_code and motion vector adaptation) and fill in this value in the bit_rate_value fields of the video. The vbv_delay fields have a constant, fixed value in the case of VBR video, and the value for vbv_buffer_size can simply be chosen as the maximum allowed VBV buffer size for the particular profile and level of the MPEG video (e.g., 224 Kbyte for Main Profile/Main Level MPEG video).
  • As mentioned previously, macroblock data tends to slightly expand to reflect the necessary f_code value changes during stitching, causing an increase in required bandwidth. Conversely, when streams are stitched together, sequence, GOP, and picture header information from all but one of them is stripped off to leave just the slices of macroblocks, causing a slight decrease in required bandwidth.
  • The net effect of f_code-related expansion and header-stripping contraction is usually a net expansion of the raw data to be transmitted. This data expansion is unevenly distributed, since some frames in the stream expand more than others. Therefore, in order to maintain a constant bitrate (CBR) and satisfy the associated VBV buffer model, the new video data must be properly distributed, and this can be achieved by adding padding bytes between frames of video.
  • The most straightforward way to pad for VBV legality and constant bitrate is to measure the size of each original frame of video and of the stitched frame produced by their concatenation with f_code adjustment, as follows.
      • Let S(n,f) be the size in bytes of frame f of video segment n, where there are N segments (0, . . . , N−1) that are being stitched together and F frames of video (0, . . . , F−1) in each segment.
      • Let S(f) be the size in bytes of frame f of the stitched-together video (after f_code and motion vector adjustment and the discarding of headers from all but one of the stitched frames).
      • Then

  • E(f) = S(f)/(S(0,f)+S(1,f)+ . . . +S(N−1,f))
  • is the expansion ratio of frame f of the stitched video. Now, let Emax be the maximum of all values E(0), . . . , E(F−1), i.e., the maximum expansion ratio across all F frames. Practice has shown that a typical value is around 1.1 (a 10% expansion). By padding all stitched frames (except the one(s) that have this maximum expansion ratio) with an appropriate number of zero-bytes, it is possible to make E(f) equal to this maximum value Emax for each video frame. Now, furthermore assuming that the initial frame VBV delay times were equal in the original separately encoded clips (which is easy to accomplish with existing encoders), we can define:

  • VBVmax = (VBV(0)+VBV(1)+ . . . +VBV(N−1))*Emax
      • where
      • VBV(n) is the VBV buffer size used to encode segment n, and
      • VBVmax is the VBV buffer size in which the stitched video is guaranteed to run
  • Padding to a maximum expansion rate as just described is a simple way of guaranteeing a certain VBV buffer size for the resultant video. Keeping VBVmax below the MPEG-defined maximum VBV buffer size (e.g., 224 Kbyte for MPEG-2 Main Profile/Main Level, or MP@ML) guarantees legal video that any decoder will be able to decode.
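  • The worst-case padding calculation can be sketched as follows, using the definitions above. The function names and the list-based inputs are illustrative only; in practice the frame sizes would be measured directly from the original and stitched bitstreams.

```python
# Minimal sketch (hypothetical helpers) of the worst-case padding calculation:
# stitched_sizes[f] corresponds to S(f); segment_sizes[n][f] corresponds to S(n,f).
from typing import List

def expansion_ratios(stitched_sizes: List[int],
                     segment_sizes: List[List[int]]) -> List[float]:
    """E(f) = S(f) / (S(0,f) + ... + S(N-1,f)) for every frame f."""
    return [
        stitched_sizes[f] / sum(seg[f] for seg in segment_sizes)
        for f in range(len(stitched_sizes))
    ]

def padding_per_frame(stitched_sizes: List[int],
                      segment_sizes: List[List[int]]) -> List[int]:
    """Zero-bytes to add to each stitched frame so that every frame expands by Emax."""
    e_max = max(expansion_ratios(stitched_sizes, segment_sizes))
    return [
        int(round(e_max * sum(seg[f] for seg in segment_sizes))) - stitched_sizes[f]
        for f in range(len(stitched_sizes))
    ]

def stitched_vbv_buffer_size(segment_vbv_sizes: List[int],
                             stitched_sizes: List[int],
                             segment_sizes: List[List[int]]) -> float:
    """VBVmax = (VBV(0) + ... + VBV(N-1)) * Emax, the buffer the stitched video needs."""
    e_max = max(expansion_ratios(stitched_sizes, segment_sizes))
    return sum(segment_vbv_sizes) * e_max
```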
  • One issue with the simple padding algorithm just described is that it can result in a significant (10% or so) expansion of the final stitched video, which can be wasteful. If the expansion is already done before transmission, it also leads to extra bandwidth consumption.
  • However, a person skilled in the art can see that it is possible to construct more intelligent variations of this worst-case padding algorithm which reduce the wasted bandwidth and VBV buffer growth by removing or relocating sections of padding in such a way that buffer models are not violated.
  • After the padding of each frame is complete, the values for bit_rate_value, vbv_buffer_size_value, and vbv_delay can finally be computed and filled in the sequence and picture headers for each frame, thus solving the last required action to complete the stitching process.
  • Step 9 in FIG. 6 illustrates the final step of padding between frames to achieve the final VBV buffer size for the stitched video sequence in the case of CBR video.
  • Although the invention has been shown and described with respect to illustrative embodiments thereof, various other changes, omissions and additions in the form and detail thereof may be made therein without departing from the spirit and scope of the invention.
  • It will be understood that various modifications may be made to the embodiments disclosed herein. Therefore, the above description should not be construed as limiting, but merely as exemplification of the various embodiments. Those skilled in the art will envision other modifications within the scope and spirit of the claims appended hereto.

Claims (20)

1. A method comprising:
receiving a video frame;
dividing the video frame into a variant section and a non-variant section;
encoding the variant section separately from the non-variant section; and
transmitting the variant sections and the non-variant sections as separate sections to enable each copy of the variant section to be stitched to a copy of the non-variant section to form respective customized separate full video-frames.
2. The method of claim 1, wherein transmitting, to a receiver, the variant section and the non-variant section as separate sections utilizes less network bandwidth than transmitting the variant and non-variant section together as a full video-frame utilizes.
3. The method of claim 1, wherein encoding the variant and non-variant section comprises:
receiving video segments for the variant and non-variant section; and
encoding the received video segments.
4. The method of claim 1, wherein the variant section is divided into smaller rectangular portions as compared to the non-variant section.
5. The method of claim 1, wherein each full-video frame from the multiple separate customized full video-frames represents a single variant section with a single non-variant section.
6. The method of claim 1, wherein the received video frame includes a plurality of variant and non-variant sections.
7. The method of claim 1, wherein the received video frame is selected from a group consisting of a commercial, advertisement, and an infomercial.
8. The method of claim 1, wherein the variant section includes a regionally customized message.
9. The method of claim 1, wherein encoding the variant section separately from the non-variant section results in eliminating crosstalk between the variant and the non-variant section.
10. The method of claim 1, wherein the variant section and the non-variant section are encoded using a common GOP pattern.
11. A method comprising:
receiving a plurality of video frames having a variant section separately from a plurality of video frames having a non-variant section, wherein the variant section and the non-variant section are part of a video frame;
separately decoding previously encoded video frames having the variant section from the video frames having the non-variant section; and
stitching each video frame having the variant section to a video frame having the non-variant section to enable each copy of the variant section to be stitched to a copy of the non-variant section to form respective customized separate full video-frames.
12. The method of claim 11, wherein the stitching of each video frame having the variant section to the video frame having the non-variant section enables forming a respective regionally customized full video-frame.
13. The method of claim 11, wherein the received plurality of video frames are selected from a group consisting of a commercial, advertisement, and an infomercial.
14. The method of claim 11, wherein receiving a plurality of video frames having a variant section separately from a plurality of video frames having a non-variant section utilizes less network bandwidth than receiving the variant and non-variant section together as a full video-frame utilizes.
15. An apparatus comprising:
one or more processors; and
memory storing instructions that, when executed by the one or more processors, cause the apparatus to:
receive a video frame;
divide the received video frame into a variant section and a non-variant section;
encode the variant section separately from the non-variant section; and
transmit the variant sections and the non-variant sections as separate sections to enable each copy of the variant section to be stitched to a copy of the non-variant section to form respective customized separate full video-frames.
16. The apparatus of claim 15, wherein encoding the variant and non-variant section comprises:
receiving video segments for the variant and non-variant section; and
encoding the received video segments.
17. The apparatus of claim 15, wherein transmitting, to a receiver, the variant section and the non-variant section as separate sections utilizes less network bandwidth than transmitting the variant and non-variant section together as a full video-frame utilizes.
18. The apparatus of claim 15, wherein each full-video frame from the multiple separate customized full video-frames represents a single variant section with a single non-variant section.
19. The apparatus of claim 15, wherein the variant section includes a regionally customized message.
20. The apparatus of claim 15, wherein encoding the variant section separately from the non-variant section eliminates crosstalk between the variant and the non-variant section.
US17/138,577 2003-11-18 2020-12-30 System and method for optimized encoding and transmission of a plurality of substantially similar video fragments Abandoned US20210127118A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/138,577 US20210127118A1 (en) 2003-11-18 2020-12-30 System and method for optimized encoding and transmission of a plurality of substantially similar video fragments

Applications Claiming Priority (7)

Application Number Priority Date Filing Date Title
US52303503P 2003-11-18 2003-11-18
US10/991,674 US8170096B1 (en) 2003-11-18 2004-11-18 System and method for optimized encoding and transmission of a plurality of substantially similar video fragments
US13/455,836 US9344734B2 (en) 2003-11-18 2012-04-25 System and method for optimized encoding and transmission of a plurality of substantially similar video fragments
US15/155,827 US10298934B2 (en) 2003-11-18 2016-05-16 System and method for optimized encoding and transmission of a plurality of substantially similar video fragments
US15/856,171 US10666949B2 (en) 2003-11-18 2017-12-28 System and method for optimized encoding and transmission of a plurality of substantially similar video fragments
US16/872,831 US11503303B2 (en) 2003-11-18 2020-05-12 System and method for optimized encoding and transmission of a plurality of substantially similar video fragments
US17/138,577 US20210127118A1 (en) 2003-11-18 2020-12-30 System and method for optimized encoding and transmission of a plurality of substantially similar video fragments

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US16/872,831 Continuation US11503303B2 (en) 2003-11-18 2020-05-12 System and method for optimized encoding and transmission of a plurality of substantially similar video fragments

Publications (1)

Publication Number Publication Date
US20210127118A1 true US20210127118A1 (en) 2021-04-29

Family

ID=45990876

Family Applications (6)

Application Number Title Priority Date Filing Date
US10/991,674 Active 2027-07-28 US8170096B1 (en) 2003-11-18 2004-11-18 System and method for optimized encoding and transmission of a plurality of substantially similar video fragments
US13/455,836 Active US9344734B2 (en) 2003-11-18 2012-04-25 System and method for optimized encoding and transmission of a plurality of substantially similar video fragments
US15/155,827 Active US10298934B2 (en) 2003-11-18 2016-05-16 System and method for optimized encoding and transmission of a plurality of substantially similar video fragments
US15/856,171 Active US10666949B2 (en) 2003-11-18 2017-12-28 System and method for optimized encoding and transmission of a plurality of substantially similar video fragments
US16/872,831 Active US11503303B2 (en) 2003-11-18 2020-05-12 System and method for optimized encoding and transmission of a plurality of substantially similar video fragments
US17/138,577 Abandoned US20210127118A1 (en) 2003-11-18 2020-12-30 System and method for optimized encoding and transmission of a plurality of substantially similar video fragments

Country Status (1)

Country Link
US (6) US8170096B1 (en)

Families Citing this family (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8170096B1 (en) 2003-11-18 2012-05-01 Visible World, Inc. System and method for optimized encoding and transmission of a plurality of substantially similar video fragments
US8074248B2 (en) 2005-07-26 2011-12-06 Activevideo Networks, Inc. System and method for providing video content associated with a source image to a television in a communication network
EP3145200A1 (en) * 2007-01-12 2017-03-22 ActiveVideo Networks, Inc. Mpeg objects and systems and methods for using mpeg objects
US9826197B2 (en) 2007-01-12 2017-11-21 Activevideo Networks, Inc. Providing television broadcasts over a managed network and interactive content over an unmanaged network to a client device
CA2814070A1 (en) 2010-10-14 2012-04-19 Activevideo Networks, Inc. Streaming digital video between video devices using a cable television system
US9204203B2 (en) 2011-04-07 2015-12-01 Activevideo Networks, Inc. Reduction of latency in video distribution networks using adaptive bit rates
US8793393B2 (en) * 2011-11-23 2014-07-29 Bluespace Corporation Video processing device, video server, client device, and video client-server system with low latency thereof
WO2013106390A1 (en) 2012-01-09 2013-07-18 Activevideo Networks, Inc. Rendering of an interactive lean-backward user interface on a television
US9800945B2 (en) 2012-04-03 2017-10-24 Activevideo Networks, Inc. Class-based intelligent multiplexing over unmanaged networks
US9123084B2 (en) 2012-04-12 2015-09-01 Activevideo Networks, Inc. Graphical application integration with MPEG objects
US20140025710A1 (en) * 2012-07-23 2014-01-23 Espial Group Inc. Storage Optimizations for Multi-File Adaptive Bitrate Assets
US10275128B2 (en) 2013-03-15 2019-04-30 Activevideo Networks, Inc. Multiple-mode system and method for providing user selectable video content
US9294785B2 (en) 2013-06-06 2016-03-22 Activevideo Networks, Inc. System and method for exploiting scene graph information in construction of an encoded video sequence
US9219922B2 (en) 2013-06-06 2015-12-22 Activevideo Networks, Inc. System and method for exploiting scene graph information in construction of an encoded video sequence
EP3005712A1 (en) 2013-06-06 2016-04-13 ActiveVideo Networks, Inc. Overlay rendering of user interface onto source video
US9788029B2 (en) 2014-04-25 2017-10-10 Activevideo Networks, Inc. Intelligent multiplexing using class-based, multi-dimensioned decision logic for managed networks
US10200707B2 (en) * 2015-10-29 2019-02-05 Microsoft Technology Licensing, Llc Video bit stream decoding
WO2017196582A1 (en) * 2016-05-11 2017-11-16 Advanced Micro Devices, Inc. System and method for dynamically stitching video streams
US20170332096A1 (en) * 2016-05-11 2017-11-16 Advanced Micro Devices, Inc. System and method for dynamically stitching video streams
US11636516B2 (en) 2017-02-13 2023-04-25 Adcuratio Media, Inc. System and method for targeting individuals with advertisement spots during national broadcast and cable television
US10560728B2 (en) * 2017-05-29 2020-02-11 Triton Us Vp Acquisition Co. Systems and methods for stitching separately encoded NAL units into a stream
CN117651139B (en) * 2024-01-29 2024-04-02 鹏钛存储技术(南京)有限公司 Video coding method and system for dynamically calculating relative index position of macro block

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6094455A (en) * 1996-09-25 2000-07-25 Matsushita Electric Industrial Co., Ltd. Image compression/encoding apparatus and system with quantization width control based on bit generation error
US6249613B1 (en) * 1997-03-31 2001-06-19 Sharp Laboratories Of America, Inc. Mosaic generation and sprite-based coding with automatic foreground and background separation
US20020018072A1 (en) * 2000-05-11 2002-02-14 Chui Charles K. Scalable graphics image drawings on multiresolution image with/without image data re-usage
US20020147987A1 (en) * 2001-03-20 2002-10-10 Steven Reynolds Video combiner
US6584229B1 (en) * 1999-08-30 2003-06-24 Electronics And Telecommunications Research Institute Macroblock-based object-oriented coding method of image sequence having a stationary background
US20030202124A1 (en) * 2002-04-26 2003-10-30 Alden Ray M. Ingrained field video advertising process

Family Cites Families (179)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3366731A (en) 1967-08-11 1968-01-30 Comm And Media Res Services In Television distribution system permitting program substitution for selected viewers
US3639686A (en) 1969-04-25 1972-02-01 Homarket Inc Television receiver cut-in device
US4331974A (en) 1980-10-21 1982-05-25 Iri, Inc. Cable television with controlled signal substitution
US4475123A (en) 1981-04-02 1984-10-02 Theta-Com., Division Of Texscan Addressable subscriber cable television system
US4965825A (en) 1981-11-03 1990-10-23 The Personalized Mass Media Corporation Signal processing apparatus and methods
US4625235A (en) 1983-05-19 1986-11-25 Westinghouse Electric Corp. Remote control switching of television sources
US4638359A (en) 1983-05-19 1987-01-20 Westinghouse Electric Corp. Remote control switching of television sources
JPS60130282A (en) 1983-12-16 1985-07-11 Pioneer Electronic Corp Data transmission system of catv
US4602279A (en) 1984-03-21 1986-07-22 Actv, Inc. Method for providing targeted profile interactive CATV displays
US4573072A (en) 1984-03-21 1986-02-25 Actv Inc. Method for expanding interactive CATV displayable choices for a given channel capacity
US4703423A (en) 1984-07-10 1987-10-27 Recipe Terminal Corporation Apparatus and method for generation of brand name specific advertising media
US4789235A (en) 1986-04-04 1988-12-06 Applied Science Group, Inc. Method and system for generating a description of the distribution of looking time as people watch television commercials
US5099422A (en) 1986-04-10 1992-03-24 Datavision Technologies Corporation (Formerly Excnet Corporation) Compiling system and method of producing individually customized recording media
US4850007A (en) 1987-06-25 1989-07-18 American Telephone And Telegraph Company Telephone toll service with advertising
US4847700A (en) 1987-07-16 1989-07-11 Actv, Inc. Interactive television system for providing full motion synched compatible audio/visual displays from transmitted television signals
US4847698A (en) 1987-07-16 1989-07-11 Actv, Inc. Interactive television system for providing full motion synched compatible audio/visual displays
US4847699A (en) 1987-07-16 1989-07-11 Actv, Inc. Method for providing an interactive full motion synched compatible audio/visual television display
US4918516A (en) 1987-10-26 1990-04-17 501 Actv, Inc. Closed circuit television system having seamless interactive television programming and expandable user participation
USRE34340E (en) 1987-10-26 1993-08-10 Actv, Inc. Closed circuit television system having seamless interactive television programming and expandable user participation
US4814883A (en) 1988-01-04 1989-03-21 Beam Laser Systems, Inc. Multiple input/output video switch for commerical insertion system
CA1337132C (en) 1988-07-15 1995-09-26 Robert Filepp Reception system for an interactive computer network and method of operation
GB8918553D0 (en) 1989-08-15 1989-09-27 Digital Equipment Int Message control system
US5155591A (en) 1989-10-23 1992-10-13 General Instrument Corporation Method and apparatus for providing demographically targeted television commercials
US5105184B1 (en) 1989-11-09 1997-06-17 Noorali Pirani Methods for displaying and integrating commercial advertisements with computer software
US5220501A (en) 1989-12-08 1993-06-15 Online Resources, Ltd. Method and system for remote delivery of retail banking services
US5446919A (en) 1990-02-20 1995-08-29 Wilkins; Jeff K. Communication system and method with demographically or psychographically defined audiences
US5260778A (en) 1990-06-26 1993-11-09 General Instrument Corporation Apparatus for selective distribution of messages over a communications network
US5291395A (en) 1991-02-07 1994-03-01 Max Abecassis Wallcoverings storage and retrieval system
US5173900A (en) 1991-05-17 1992-12-22 General Instrument Corporation Method and apparatus for communicating different categories of data in a single data stream
US5401946A (en) 1991-07-22 1995-03-28 Weinblatt; Lee S. Technique for correlating purchasing behavior of a consumer to advertisements
US5426281A (en) 1991-08-22 1995-06-20 Abecassis; Max Transaction protection system
US5231494A (en) 1991-10-08 1993-07-27 General Instrument Corporation Selection of compressed television signals from single channel allocation based on viewer characteristics
US5734413A (en) 1991-11-20 1998-03-31 Thomson Multimedia S.A. Transaction based interactive television system
US5343239A (en) 1991-11-20 1994-08-30 Zing Systems, L.P. Transaction based interactive television system
US5519433A (en) 1991-11-20 1996-05-21 Zing Systems, L.P. Interactive television security through transaction time stamping
US5724091A (en) 1991-11-25 1998-03-03 Actv, Inc. Compressed digital data interactive program system
US5861881A (en) 1991-11-25 1999-01-19 Actv, Inc. Interactive computer system for providing an interactive presentation with personalized video, audio and graphics responses for multiple viewers
US5802314A (en) 1991-12-17 1998-09-01 Canon Kabushiki Kaisha Method and apparatus for sending and receiving multimedia messages
US6850252B1 (en) 1999-10-05 2005-02-01 Steven M. Hoffberg Intelligent electronic appliance system and method
US5361393A (en) 1992-01-28 1994-11-01 Prodigy Services Company Method for improving interactive-screen uploading of accounting data
US6208805B1 (en) 1992-02-07 2001-03-27 Max Abecassis Inhibiting a control function from interfering with a playing of a video
US5434678A (en) 1993-01-11 1995-07-18 Abecassis; Max Seamless transmission of non-sequential video segments
US5684918A (en) 1992-02-07 1997-11-04 Abecassis; Max System for integrating video and communications
US5953485A (en) 1992-02-07 1999-09-14 Abecassis; Max Method and system for maintaining audio during video control
US5610653A (en) 1992-02-07 1997-03-11 Abecassis; Max Method and system for automatically tracking a zoomed video image
US5253940A (en) 1992-02-19 1993-10-19 Max Abecassis User selectable numeric keycaps layout
US5305195A (en) 1992-03-25 1994-04-19 Gerald Singer Interactive advertising system for on-line terminals
EP0657049A4 (en) 1992-08-26 1995-08-09 Datavision Technologies Compiling system and method for mass producing individually customized media.
US5422468A (en) 1992-10-30 1995-06-06 Abecassis; Max Deposit authorization system
US6463585B1 (en) 1992-12-09 2002-10-08 Discovery Communications, Inc. Targeted advertisement using television delivery systems
US5600364A (en) 1992-12-09 1997-02-04 Discovery Communications, Inc. Network controller for cable television delivery systems
CA2121151A1 (en) 1993-04-16 1994-10-17 Trevor Lambert Method and apparatus for automatic insertion of a television signal from a remote source
US5356151A (en) 1993-04-20 1994-10-18 Max Abecassis Gameboard and scale model game
US5442390A (en) 1993-07-07 1995-08-15 Digital Equipment Corporation Video on demand with memory accessing and or like functions
US5414455A (en) 1993-07-07 1995-05-09 Digital Equipment Corporation Segmented video on demand system
US5761601A (en) 1993-08-09 1998-06-02 Nemirofsky; Frank R. Video distribution of advertisements to businesses
JPH09507108A (en) 1993-10-29 1997-07-15 ケイスリー,ロナルド,ディ. Interactive multimedia communication system to access industry-specific information
US5537141A (en) 1994-04-15 1996-07-16 Actv, Inc. Distance learning system providing individual television participation, audio responses and memory for every student
US5548532A (en) 1994-04-28 1996-08-20 Thomson Consumer Electronics, Inc. Apparatus and method for formulating an interactive TV signal
US5448568A (en) 1994-04-28 1995-09-05 Thomson Consumer Electronics, Inc. System of transmitting an interactive TV signal
US5636346A (en) 1994-05-09 1997-06-03 The Electronic Address, Inc. Method and system for selectively targeting advertisements and programming
US5768521A (en) 1994-05-16 1998-06-16 Intel Corporation General purpose metering mechanism for distribution of electronic information
US5499046A (en) 1994-05-23 1996-03-12 Cable Services Technologies, Inc. CATV distribution system with each channel having its own remote scheduler
US5873068A (en) 1994-06-14 1999-02-16 New North Media Inc. Display based marketing message control system and method
US5566353A (en) 1994-09-06 1996-10-15 Bylon Company Limited Point of purchase video distribution system
US5515098A (en) 1994-09-08 1996-05-07 Carles; John B. System and method for selectively distributing commercial messages over a communications network
US5632007A (en) 1994-09-23 1997-05-20 Actv, Inc. Interactive system and method for offering expert based interactive programs
US5717923A (en) 1994-11-03 1998-02-10 Intel Corporation Method and apparatus for dynamically customizing electronic information to individual end users
US5724521A (en) 1994-11-03 1998-03-03 Intel Corporation Method and apparatus for providing electronic advertisements to end users in a consumer best-fit pricing manner
US5617142A (en) 1994-11-08 1997-04-01 General Instrument Corporation Of Delaware Method and apparatus for changing the compression level of a compressed digital signal
US5758257A (en) 1994-11-29 1998-05-26 Herz; Frederick System and method for scheduling broadcast of and access to video programs and other data using customer profiles
US5913031A (en) 1994-12-02 1999-06-15 U.S. Philips Corporation Encoder system level buffer management
US5774170A (en) 1994-12-13 1998-06-30 Hite; Kenneth C. System and method for delivering targeted advertisements to consumers
US5585838A (en) 1995-05-05 1996-12-17 Microsoft Corporation Program time guide
GB9510093D0 (en) * 1995-05-18 1995-07-12 Philips Electronics Uk Ltd Interactive image manipulation
US5796945A (en) 1995-06-07 1998-08-18 Tarabella; Robert M. Idle time multimedia viewer method and apparatus for collecting and displaying information according to user defined indicia
US5740549A (en) 1995-06-12 1998-04-14 Pointcast, Inc. Information and advertising distribution system and method
US5682196A (en) 1995-06-22 1997-10-28 Actv, Inc. Three-dimensional (3D) video presentation system providing interactive 3D presentation with personalized audio responses for multiple viewers
US5652615A (en) 1995-06-30 1997-07-29 Digital Equipment Corporation Precision broadcast of composite programs including secondary program content such as advertisements
US5784095A (en) 1995-07-14 1998-07-21 General Instrument Corporation Digital audio system with video output program guide
US5907837A (en) 1995-07-17 1999-05-25 Microsoft Corporation Information retrieval system in an on-line network including separate content and layout of published titles
US6026368A (en) 1995-07-17 2000-02-15 24/7 Media, Inc. On-line interactive system and method for providing content and advertising information to a targeted set of viewers
US5805974A (en) 1995-08-08 1998-09-08 Hite; Kenneth C. Method and apparatus for synchronizing commercial advertisements across multiple communication channels
US6002393A (en) 1995-08-22 1999-12-14 Hite; Kenneth C. System and method for delivering targeted advertisements to consumers using direct commands
US5758259A (en) 1995-08-31 1998-05-26 Microsoft Corporation Automated selective programming guide
US5671225A (en) 1995-09-01 1997-09-23 Digital Equipment Corporation Distributed interactive multimedia service system
SG80607A1 (en) 1995-09-29 2001-05-22 Matsushita Electric Ind Co Ltd Method and device for recording and reproducing interleaved bitstream on and from medium
US5949951A (en) * 1995-11-09 1999-09-07 Omnimedia Systems, Inc. Interactive workstation for creating customized, watch and do physical exercise programs
US5732217A (en) 1995-12-01 1998-03-24 Matsushita Electric Industrial Co., Ltd. Video-on-demand system capable of performing a high-speed playback at a correct speed
US5774664A (en) 1996-03-08 1998-06-30 Actv, Inc. Enhanced video programming system and method for incorporating and displaying retrieved integrated internet information segments
US6018768A (en) 1996-03-08 2000-01-25 Actv, Inc. Enhanced video programming system and method for incorporating and displaying retrieved integrated internet information segments
US5778181A (en) 1996-03-08 1998-07-07 Actv, Inc. Enhanced video programming system and method for incorporating and displaying retrieved integrated internet information segments
JP3480777B2 (en) 1996-03-15 2003-12-22 パイオニア株式会社 Information recording apparatus, information recording method, information reproducing apparatus, and information reproducing method
US5848396A (en) 1996-04-26 1998-12-08 Freedom Of Information, Inc. Method and apparatus for determining behavioral profile of a computer user
US5740388A (en) 1996-05-10 1998-04-14 Custom Communications, Inc. Apparatus for creating individually customized videos
US6137834A (en) 1996-05-29 2000-10-24 Sarnoff Corporation Method and apparatus for splicing compressed information streams
US6424991B1 (en) 1996-07-01 2002-07-23 Sun Microsystems, Inc. Object-oriented system, method and article of manufacture for a client-server communication framework
US5929850A (en) 1996-07-01 1999-07-27 Thomson Consumer Electronices, Inc. Interactive television system and method having on-demand web-like navigational capabilities for displaying requested hyperlinked web-like still images associated with television content
US5937331A (en) 1996-07-01 1999-08-10 Kalluri; Rama Protocol and system for transmitting triggers from a remote network and for controlling interactive program content at a broadcast station
US5825884A (en) 1996-07-01 1998-10-20 Thomson Consumer Electronics Method and apparatus for operating a transactional server in a proprietary database environment
US6078619A (en) * 1996-09-12 2000-06-20 University Of Bath Object-oriented video system
US5986692A (en) 1996-10-03 1999-11-16 Logan; James D. Systems and methods for computer enhanced broadcast monitoring
US5917830A (en) 1996-10-18 1999-06-29 General Instrument Corporation Splicing compressed packetized digital video streams
US6400886B1 (en) 1996-11-15 2002-06-04 Futuretel, Inc. Method and apparatus for stitching edited video segments
US5931901A (en) 1996-12-09 1999-08-03 Robert L. Wolfe Programmed music on demand from the internet
US6038000A (en) 1997-05-28 2000-03-14 Sarnoff Corporation Information stream syntax for indicating the presence of a splice point
US5978799A (en) 1997-01-30 1999-11-02 Hirsch; G. Scott Search engine including query database, user profile database, information templates and email facility
US6806909B1 (en) 1997-03-03 2004-10-19 Koninklijke Philips Electronics N.V. Seamless splicing of MPEG-2 multimedia data streams
US20020154694A1 (en) 1997-03-21 2002-10-24 Christopher H. Birch Bit stream splicer with variable-rate output
US6009409A (en) * 1997-04-02 1999-12-28 Lucent Technologies, Inc. System and method for scheduling and controlling delivery of advertising in a communications network
US6075551A (en) 1997-07-08 2000-06-13 United Video Properties, Inc. Video promotion system with flexible local insertion capabilities
US6141358A (en) 1997-07-25 2000-10-31 Sarnoff Corporation Method and apparatus for aligning sub-stream splice points in an information stream
US6463444B1 (en) 1997-08-14 2002-10-08 Virage, Inc. Video cataloger system with extensibility
US6567980B1 (en) 1997-08-14 2003-05-20 Virage, Inc. Video cataloger system with hyperlinked output
US6360234B2 (en) 1997-08-14 2002-03-19 Virage, Inc. Video cataloger system with synchronized encoders
WO1999011065A1 (en) 1997-08-27 1999-03-04 Starsight Telecast, Inc. Systems and methods for replacing television signals
US5989692A (en) 1997-09-02 1999-11-23 Cytonix Corporation Porous surface for laboratory apparatus and laboratory apparatus having said surface
KR100574186B1 (en) * 1997-10-03 2006-04-27 소니 가부시끼 가이샤 Encoded stream splicing device and method, and an encoded stream generating device and method
US5867208A (en) 1997-10-28 1999-02-02 Sun Microsystems, Inc. Encoding system and method for scrolling encoded MPEG stills in an interactive television application
US6049569A (en) 1997-12-09 2000-04-11 Philips Electronics N.A. Corporation Method and apparatus for encoding digital video bit streams with seamless splice points and method and apparatus for splicing such digital video bit streams
US6029045A (en) 1997-12-09 2000-02-22 Cogent Technology, Inc. System and method for inserting local content into programming content
US5986712A (en) * 1998-01-08 1999-11-16 Thomson Consumer Electronics, Inc. Hybrid global/local bit rate control
CA2265089C (en) * 1998-03-10 2007-07-10 Sony Corporation Transcoding system using encoding history information
US6611624B1 (en) 1998-03-13 2003-08-26 Cisco Systems, Inc. System and method for frame accurate splicing of compressed bitstreams
JP3657424B2 (en) 1998-03-20 2005-06-08 松下電器産業株式会社 Center device and terminal device for broadcasting program information
US6160570A (en) 1998-04-20 2000-12-12 U.S. Philips Corporation Digital television system which selects images for display in a video sequence
US20020095676A1 (en) 1998-05-15 2002-07-18 Robert A. Knee Interactive television program guide system for determining user values for demographic categories
US6785289B1 (en) 1998-06-05 2004-08-31 Sarnoff Corporation Method and apparatus for aligning sub-stream splice points in an information stream
US6698020B1 (en) 1998-06-15 2004-02-24 Webtv Networks, Inc. Techniques for intelligent video ad insertion
US6327574B1 (en) 1998-07-07 2001-12-04 Encirq Corporation Hierarchical models of consumer attributes for targeting content in a privacy-preserving manner
US6067348A (en) 1998-08-04 2000-05-23 Universal Services, Inc. Outbound message personalization
US6588013B1 (en) 1998-08-18 2003-07-01 United Video Properties, Inc. Promotional material distribution system with automatic updating of promotional material selection algorithms
US6694482B1 (en) 1998-09-11 2004-02-17 Sbc Technology Resources, Inc. System and methods for an architectural framework for design of an adaptive, personalized, interactive content delivery system
US6357042B2 (en) 1998-09-16 2002-03-12 Anand Srinivasan Method and apparatus for multiplexing separately-authored metadata for insertion into a video data stream
US6522694B1 (en) * 1998-10-09 2003-02-18 Matsushita Electric Industrial Co., Ltd. Programmable filter for removing stuffing bits from an MPEG-2 bit-stream
US6671880B2 (en) 1998-10-30 2003-12-30 Intel Corporation Method and apparatus for customized rendering of commercials
US6408278B1 (en) 1998-11-10 2002-06-18 I-Open.Com, Llc System and method for delivering out-of-home programming
US6457010B1 (en) 1998-12-03 2002-09-24 Expanse Networks, Inc. Client-server based subscriber characterization system
US7328448B2 (en) 2000-08-31 2008-02-05 Prime Research Alliance E, Inc. Advertisement distribution system for distributing targeted advertisements in television systems
DE69937674T2 (en) 1998-12-23 2008-10-30 Koninklijke Philips Electronics N.V. PROGRAMS RECEIVERS
JP2000261647A (en) * 1999-03-04 2000-09-22 Fuji Xerox Co Ltd Image processing unit
US6621870B1 (en) * 1999-04-15 2003-09-16 Diva Systems Corporation Method and apparatus for compressing video sequences
US6343287B1 (en) 1999-05-19 2002-01-29 Sun Microsystems, Inc. External data store link for a profile service
US6411992B1 (en) 1999-05-28 2002-06-25 Qwest Communications Int'l, Inc. Method and apparatus for broadcasting information over a network
US6502076B1 (en) 1999-06-01 2002-12-31 Ncr Corporation System and methods for determining and displaying product promotions
US6330286B1 (en) 1999-06-09 2001-12-11 Sarnoff Corporation Flow control, latency control, and bitrate conversions in a timing correction and frame synchronization apparatus
US6304852B1 (en) 1999-07-21 2001-10-16 Vignette Graphics, Llc Method of communicating computer operation during a wait period
US6449657B2 (en) 1999-08-06 2002-09-10 Namezero.Com, Inc. Internet hosting system
US6466975B1 (en) 1999-08-23 2002-10-15 Digital Connexxions Corp. Systems and methods for virtual population mutual relationship management using electronic computer driven networks
US6463441B1 (en) 1999-10-12 2002-10-08 System Improvements, Inc. Incident analysis and solution system
US6857024B1 (en) 1999-10-22 2005-02-15 Cisco Technology, Inc. System and method for providing on-line advertising and information
US6421386B1 (en) * 1999-12-29 2002-07-16 Hyundai Electronics Industries Co., Ltd. Method for coding digital moving video including gray scale shape information
US6678332B1 (en) 2000-01-04 2004-01-13 Emc Corporation Seamless splicing of encoded MPEG video and audio
US6389467B1 (en) 2000-01-24 2002-05-14 Friskit, Inc. Streaming media search and continuous playback system of media resources located by multiple network addresses
BR0108295A (en) 2000-02-02 2003-03-18 Worldgate Service Inc System and method for transmitting and displaying directed information
DE10005818A1 (en) * 2000-02-10 2001-08-16 Moeller Gmbh Device for mounting switchgear on mounting rails
US20020026359A1 (en) 2000-02-22 2002-02-28 Long Kenneth W. Targeted advertising method and system
US6574793B1 (en) 2000-02-25 2003-06-03 Interval Research Corporation System and method for displaying advertisements
US20020057336A1 (en) 2000-03-02 2002-05-16 Gaul Michael A. Interactive program guide configuration system
US8572639B2 (en) 2000-03-23 2013-10-29 The Directv Group, Inc. Broadcast advertisement adapting method and apparatus
US7548565B2 (en) 2000-07-24 2009-06-16 Vmark, Inc. Method and apparatus for fast metadata generation, delivery and access for live broadcast program
US9503789B2 (en) * 2000-08-03 2016-11-22 Cox Communications, Inc. Customized user interface generation in a video on demand environment
US7096483B2 (en) 2000-12-21 2006-08-22 Thomson Licensing Dedicated channel for displaying programs
US20040105492A1 (en) * 2001-01-22 2004-06-03 Goh Kwong Huang Method and apparatus for video buffer verifier underflow and overflow control
US8060906B2 (en) 2001-04-06 2011-11-15 At&T Intellectual Property Ii, L.P. Method and apparatus for interactively retrieving content related to previous query results
US7170938B1 (en) * 2001-08-21 2007-01-30 Cisco Systems Canada Co. Rate control method for video transcoding
US6956600B1 (en) * 2001-09-19 2005-10-18 Bellsouth Intellectual Property Corporation Minimal decoding method for spatially multiplexing digital video pictures
US20030110500A1 (en) 2001-12-06 2003-06-12 Rodriguez Arturo A. Prediction-based adaptative control of television viewing functionality
US6978470B2 (en) * 2001-12-26 2005-12-20 Bellsouth Intellectual Property Corporation System and method for inserting advertising content in broadcast programming
KR20030059399A (en) * 2001-12-29 2003-07-10 엘지전자 주식회사 Video browsing systme based on mosaic image
US7177356B2 (en) * 2002-01-11 2007-02-13 Webtv Networks, Inc. Spatially transcoding a video stream
US20040010549A1 (en) * 2002-03-17 2004-01-15 Roger Matus Audio conferencing system with wireless conference control
US9445133B2 (en) 2002-07-10 2016-09-13 Arris Enterprises, Inc. DVD conversion for on demand
US20050086692A1 (en) 2003-10-17 2005-04-21 Mydtv, Inc. Searching for programs and updating viewer preferences with reference to program segment characteristics
US8170096B1 (en) * 2003-11-18 2012-05-01 Visible World, Inc. System and method for optimized encoding and transmission of a plurality of substantially similar video fragments
JP4695928B2 (en) * 2005-06-29 2011-06-08 ホシデン株式会社 Locking device
EP1908303A4 (en) * 2005-07-01 2011-04-06 Sonic Solutions Method, apparatus and system for use in multimedia signal encoding
US20100232504A1 (en) 2009-03-13 2010-09-16 The State of Oregon acting by and through the State Board of Higher Education on behalf of the Supporting region-of-interest cropping through constrained compression
WO2011057044A2 (en) 2009-11-06 2011-05-12 Hercules Incorporated Surface application of polymers and polymer mixtures to improve paper strength

Also Published As

Publication number Publication date
US11503303B2 (en) 2022-11-15
US20120224641A1 (en) 2012-09-06
US20160261871A1 (en) 2016-09-08
US10298934B2 (en) 2019-05-21
US9344734B2 (en) 2016-05-17
US20180124410A1 (en) 2018-05-03
US10666949B2 (en) 2020-05-26
US8170096B1 (en) 2012-05-01
US20200344483A1 (en) 2020-10-29

Similar Documents

Publication Publication Date Title
US11503303B2 (en) System and method for optimized encoding and transmission of a plurality of substantially similar video fragments
EP0944249B1 (en) Encoded stream splicing device and method, and an encoded stream generating device and method
US6094457A (en) Statistical multiplexed video encoding using pre-encoding a priori statistics and a priori and a posteriori statistics
US6968567B1 (en) Latency reduction in providing interactive program guide
AU756355B2 (en) Video encoder and encoding method with buffer control
US6674796B1 (en) Statistical multiplexed video encoding for diverse video formats
US20040218093A1 (en) Seamless splicing of MPEG-2 multimedia data streams
US20060285586A1 (en) Methods and systems for achieving transition effects with MPEG-encoded picture content
KR102572947B1 (en) bit stream merge
US8681874B2 (en) Video insertion information insertion in a compressed bitstream
US7394850B1 (en) Method and apparatus for performing digital-to-digital video insertion
US20060239563A1 (en) Method and device for compressed domain video editing
JP2009207163A (en) Decoding device and method, and recording medium
KR20230019103A (en) Video encoders, video decoders, methods for encoding and decoding, and video data streams to realize advanced video coding concepts.
US7058965B1 (en) Multiplexing structures for delivery of interactive program guide
US20130215973A1 (en) Image processing apparatus, image processing method, and image processing system
CN101621663B (en) Method for preconditioning ad content for digital program insertion
US6993080B2 (en) Signal processing
EP0871337A2 (en) Method and apparatus for modifying a digital data stream
KR100780844B1 (en) Decoder, processing system and processing method for multi-view frame data, and recording medium having program performing this
KR102717190B1 (en) Video encoder, video decoder, methods for encoding and decoding and video data stream for realizing advanced video coding concepts
US9219930B1 (en) Method and system for timing media stream modifications
CA2388944C (en) Method and apparatus for performing digital-to-digital video insertion
KR20240151280A (en) Video encoder, video decoder, methods for encoding and decoding and video data stream for realizing advanced video coding concepts
Drury et al. Picture quality and multiplex management in digital video broadcasting systems

Legal Events

Date Code Title Description
AS Assignment

Owner name: TIVO CORPORATION, CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:VISIBLE WORLD, LLC;REEL/FRAME:054781/0619

Effective date: 20201124

Owner name: VISIBLE WORLD, LLC, PENNSYLVANIA

Free format text: CHANGE OF NAME;ASSIGNOR:VISIBLE WORLD, INC.;REEL/FRAME:054895/0452

Effective date: 20180627

Owner name: VISIBLE WORLD, INC., NEW YORK

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:HABERMAN, SETH;NIEMEIJER, GERRIT;BOOTH, RICHARD L.;AND OTHERS;REEL/FRAME:054895/0276

Effective date: 20050126

STPP Information on status: patent application and granting procedure in general

Free format text: APPLICATION DISPATCHED FROM PREEXAM, NOT YET DOCKETED

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

AS Assignment

Owner name: BANK OF AMERICA, N.A., AS COLLATERAL AGENT, NORTH CAROLINA

Free format text: SECURITY INTEREST;ASSIGNORS:ADEIA GUIDES INC.;ADEIA IMAGING LLC;ADEIA MEDIA HOLDINGS LLC;AND OTHERS;REEL/FRAME:063529/0272

Effective date: 20230501

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION