US8817881B1 - Video processing apparatus and video processing method - Google Patents

Video processing apparatus and video processing method

Info

Publication number
US8817881B1
Authority
United States (US)
Prior art keywords
video image
stream
video
slice
encoded
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
US14/187,647
Other versions
US20140253806A1 (en)
Inventor
Koji Yano
Yuji Fujimoto
Junichiro Enoki
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sony Corp
Original Assignee
Sony Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony Corp filed Critical Sony Corp
Assigned to SONY CORPORATION reassignment SONY CORPORATION ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: ENOKI, JUNICHIRO, FUJIMOTO, YUJI, YANO, KOJI
Application granted
Publication of US8817881B1 publication Critical patent/US8817881B1/en
Publication of US20140253806A1 publication Critical patent/US20140253806A1/en

Classifications

    • H04N 21/23424: Processing of video elementary streams involving splicing one content stream with another, e.g. for inserting or substituting an advertisement
    • H04N 5/265: Studio circuits for mixing
    • H04N 19/40: Video transcoding, i.e. partial or full decoding of a coded input stream followed by re-encoding of the decoded output stream
    • H04N 19/48: Compressed-domain processing techniques other than decoding, e.g. modification of transform coefficients, variable length coding [VLC] data or run-length data
    • H04N 19/597: Predictive coding specially adapted for multi-view video sequence encoding
    • H04N 21/234363: Reformatting of video signals by altering the spatial resolution, e.g. for clients with a lower screen resolution
    • H04N 21/8451: Structuring of content into time segments using Advanced Video Coding [AVC]

Definitions

  • the read unit 33 reads out an insertion stream associated with the movement information from the storage unit 32 on the basis of the movement information supplied from the receiving unit 31 , and supplies the insertion stream to the insertion unit 35 .
  • the combining unit 34 selects a plurality of encoded streams of videos to be displayed from a plurality of encoded streams of videos input from outside sources based on the history of the movement information supplied from the receiving unit 31 .
  • the combining unit 34 combines the encoded streams of the selected videos on a frame by frame basis and supplies the frame-based encoded stream of the resultant multiple-video image to the insertion unit 35 .
  • the insertion unit 35 updates slice data of the insertion stream with the use of the frame-based encoded stream of a predetermined video that is input from an outside source on the basis of the movement information.
  • the insertion unit 35 inserts the insertion stream into the frame-based encoded stream of the multiple-video image supplied from the combining unit 34 .
  • the insertion unit 35 changes (generates) the slice header of the insertion stream on the basis of the slice header of the frame of the multiple-video image that immediately follows the insertion stream.
  • the insertion unit 35 also changes the slice headers of the encoded stream of the multiple-video image after the insertion stream.
  • the insertion unit 35 supplies the frame-based encoded stream of the multiple-video image with the insertion stream inserted, which is regarded as a combined stream, to the transmission unit 36 .
  • when no insertion stream is inserted, the insertion unit 35 supplies the frame-based encoded stream of the multiple-video image, as is, as a combined stream to the transmission unit 36 .
  • the transmission unit 36 transmits the combined stream, which is supplied from the insertion unit 35 , to the receiving device 13 via the network 12 shown in FIG. 3 .
  • FIG. 5 illustrates how the combining unit 34 in FIG. 4 generates a frame-based encoded stream of a multiple-video image.
  • FIG. 5 shows the combining unit 34 generating a frame-based encoded stream of a multiple-video image with the 0th to 3rd videos (View 0 to View 3) arranged at the upper left, upper right, lower left, and lower right, respectively.
  • each of the encoded streams input to the combining unit 34 is a stream encoded by the advanced video coding (AVC) method in which the macroblocks (the coding units) in each horizontal line are encoded as one slice, and each slice refers not to the outside of the screen but to the frame one frame before the current frame in coding order.
  • the frame referred to is called a reference frame in inter coding.
  • each video has four macroblocks in a vertical direction, and accordingly the number of slices in each video is four.
  • the combining unit 34 reorders the slices of the encoded streams of the 0th to 3rd videos selected from the input encoded streams and combines them, thereby generating a frame-based encoded stream of a multiple-video image.
  • the combining unit 34 arranges the 0th slice of the 0th video as the 0th slice of the combined encoded stream, while arranging the 0th slice of the 1st video as the 1st slice of the combined encoded stream. Thereafter, the slices of the 0th video and the slices of the 1st video are alternately arranged, and the last slice of the 1st video is arranged so as to be the 7th slice in the combined encoded stream.
  • the combining unit 34 arranges the 0th slice of the 2nd video as the 8th slice of the combined encoded stream, while arranging the 0th slice of the 3rd video as the 9th slice of the combined encoded stream. Thereafter, the slices of the 2nd video and the slices of the 3rd video are alternately arranged, and the last slice of the 3rd video is arranged so as to be the 15th slice of the combined encoded stream.
  • because the encoded streams input to the combining unit 34 have the macroblocks on each horizontal line encoded as one slice, there is no dependence between macroblocks arranged at vertically different positions. Therefore, decoding can be performed properly even if the slices are decoded in a different order.
  • the combining unit 34 can generate a frame-based encoded stream of a multiple-video image by only reordering the slices of the encoded streams of the respective videos.
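  • as an illustration of this reordering, the following Python sketch (ours, not the patent's implementation) interleaves the slice NAL units of the four views into the combined order of FIG. 5; the byte-string slice representation and function names are assumptions. Note that in a real AVC bitstream the first_mb_in_slice field of each slice header would also have to be rewritten to match each slice's new vertical position.

```python
from typing import List

Slice = bytes  # stand-in type: one slice NAL unit (one macroblock row)

def interleave(left: List[Slice], right: List[Slice]) -> List[Slice]:
    """Alternate the rows of two side-by-side videos: L0, R0, L1, R1, ..."""
    assert len(left) == len(right)
    out: List[Slice] = []
    for l, r in zip(left, right):
        out.extend((l, r))
    return out

def combine_2x2(views: List[List[Slice]]) -> List[Slice]:
    """views[i] holds the four slices of View i for one frame (FIG. 5).
    Slices 0-7 of the result interleave View 0 and View 1 (upper half);
    slices 8-15 interleave View 2 and View 3 (lower half)."""
    return interleave(views[0], views[1]) + interleave(views[2], views[3])
```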
  • FIG. 6 illustrates a multiple-video image when a view area is moved.
  • in FIG. 6, the horizontal axis represents time (T); the same applies to FIG. 7, described later.
  • when a view area of the 1st frame (Frame 1) is positioned at the bottom left with respect to a view area of the 0th frame (Frame 0) in coding order, or, in other words, when the view area of the 0th frame moves toward the bottom left, the videos in the multiple-video image move toward the top right on the screen. That is, a video 43 in the multiple-video image of the 1st frame, which is associated with the multiple-video image 41 of the 0th frame, is positioned at the top right with respect to the multiple-video image 41 on the screen.
  • the reference block of the macroblock 52 is changed to a block positioned at the top right with respect to a block 53 to which the macroblock 52 is supposed to refer.
  • the macroblock 52 corresponds to a macroblock 51 of a multiple-video image 42 of the 1st frame when the view area is not moved. Therefore, it is necessary for the motion vector of the macroblock 52 to point to the block 53 that is a reference block of the macroblock 51 . However, if the motion vector of the macroblock 51 is applied to the motion vector of the macroblock 52 , the macroblock 52 refers to a block 54 , as the reference block, that is positioned at the top right with respect to the block 53 in the multiple-video image 41 because the macroblock 52 is positioned at the top right with respect to the macroblock 51 on the screen. As a result, the macroblock 52 is not properly decoded.
  • the distribution server 11 inserts an insertion stream in order to shift the position pointed at by the motion vector of the inter-coded macroblock in the direction in which the view area has moved by an amount of movement of the view area.
  • the distribution server 11 generates an insertion stream in which all macroblocks of an insertion video are replaced with skip macroblocks with a motion vector indicating the direction in which and the amount by which the view area has moved, and inserts the insertion stream before the encoded stream of the multiple-video image in which the view area has moved.
  • for example, when the view area is moved toward the bottom left as shown in FIG. 6, the distribution server 11 generates an insertion stream in which all macroblocks are replaced with skip macroblocks with a motion vector pointing toward the bottom left, as shown in FIG. 7.
  • the insertion video image 71 refers to the multiple-video image 41, which is positioned further toward the bottom left than the insertion video image on the screen; in other words, the insertion video image 71 corresponds to the multiple-video image 41 with the view area having moved toward the bottom left.
  • the insertion stream is inserted as an encoded stream of the 0′th frame (Frame 0′) before the encoded stream of the multiple-video image of the 1st frame in which the view area has moved.
  • thus, the multiple-video image of the 1st frame is decoded with reference to the insertion video image 71, which is now one frame before it, rather than the multiple-video image 41.
  • the macroblock 52 is decoded with reference to a block 72 of the insertion video image 71 .
  • the block 72 is in a position corresponding to the block 54 on the screen, and the block 54 is positioned at the top right with respect to the block 53 to which the macroblock 51 refers.
  • the block 72 corresponds to the block 53 that is positioned further toward the bottom left than the block 72 on the screen. Therefore, the macroblock 52 is decoded with reference to the block 53 that is a reference block of the corresponding macroblock 51 .
  • in other words, by inserting the insertion stream, the distribution server 11 shifts the position pointed to by the motion vector of the macroblock 52 in the direction of the view-area movement, by the amount of that movement, without any re-encoding.
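  • the decoder-side effect of the insertion stream can be modeled at the pixel level. The following numpy sketch is our illustration of the idea, not actual AVC decoding: because every macroblock of the insertion frame is a skip macroblock carrying the same motion vector, decoding it amounts to shifting the previous decoded frame, with out-of-screen references clamped to the nearest in-screen pixels as in AVC motion compensation (compare FIG. 8 below).

```python
import numpy as np

def decode_insertion_frame(prev: np.ndarray, mv_x: int, mv_y: int) -> np.ndarray:
    """prev: previous decoded frame, shape (H, W).
    (mv_x, mv_y): motion vector of every skip macroblock, in pixels,
    pointing from each output pixel toward its reference in `prev`."""
    h, w = prev.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # References that fall outside the screen are clamped to the
    # closest in-screen pixels (edge extension).
    ry = np.clip(ys + mv_y, 0, h - 1)
    rx = np.clip(xs + mv_x, 0, w - 1)
    return prev[ry, rx]
```

For the FIG. 6 example, a view area moved one macroblock down and to the left would correspond to something like decode_insertion_frame(prev, mv_x=-16, mv_y=16); the sign convention here is our assumption.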
  • FIG. 8 illustrates an insertion video image in a case where slice data of an insertion stream is not updated.
  • in the example of FIG. 8, the frame-based encoded stream of the multiple-video image generated in FIG. 5 is the encoded stream of the 0th frame of the multiple-video image.
  • suppose an insertion stream is inserted as an encoded stream of the 0′th frame, with all its macroblocks replaced by skip macroblocks whose motion vector indicates a downward direction and an amount of movement equivalent to the size of one macroblock.
  • the insertion video image of the insertion stream becomes a multiple-video image with videos in the multiple-video image of the 0th frame moved upward by just one slice.
  • the macroblocks of the insertion stream are skip macroblocks with a motion vector pointing in a downward direction by just one slice.
  • the slices of the insertion stream are decoded with reference to the slices of the multiple-video image of the 0th frame positioned one macroblock lower than the slices of the insertion stream on the screen. Therefore, the upper part of the decoded video image of the insertion stream is composed of the 2nd to 15th slices of the multiple-video image of the 0th frame.
  • the lowermost slices of the insertion video image, whose references lie below the screen, instead refer to the pixels within the screen that are closest to their reference.
  • those pixels are in the lowermost part of the 14th and 15th slices, which are the lowermost slices of the multiple-video image of the 0th frame, and have a predetermined pixel value (0 in this example).
  • consequently, the pixels of the slices at the lowermost part of the decoded video image of the insertion stream also have the predetermined pixel value (0 in this example).
  • in this manner, the pixel value of the part of the decoded video image of the insertion stream that is decoded with reference to the outside of the screen can be set to the predetermined value.
  • without this, that part of the decoded video image, which is supposed to be decoded with reference to pixels outside the screen, would be decoded from whatever in-screen pixels are closest to the reference, resulting in corruption of the decoded video image of the insertion stream.
  • with the predetermined pixel value, the decoded video image of the insertion stream is prevented from becoming corrupted and is displayed with high quality.
  • furthermore, by updating the slice data as described next with reference to FIG. 9, this embodiment can display the video that is supposed to be displayed rather than a fixed color (black in this example), which further improves the quality of the decoded video image of the insertion stream.
  • FIG. 9 illustrates an insertion video image in a case where slice data of an insertion stream is updated.
  • when the view area moves vertically by an amount equivalent to n slices, the slice data of the n slices on the movement-direction side of the insertion video image is updated to the intra-coded slice data of the n slices on the side opposite to the movement direction of the videos that are to be displayed at those positions.
  • in other words, slices of the insertion video that have no reference slices in the multiple-video image are replaced with intra-coded slices of the videos to be displayed at their positions.
  • in the example of FIG. 9, the slice data of the one slice at the bottom of the insertion video image is updated to the intra-coded slice data of one slice (Slice A or Slice B) from the top of the videos that are to be displayed at that position.
  • as a result, the lowermost slices of the insertion video image are decoded without referring to the 0th frame of the multiple-video image, and their decoded image becomes the video image that is supposed to be displayed at those positions.
  • the image quality of the decoded video image of the insertion stream is improved.
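  • a minimal sketch of this slice-data update, under our reading of FIGS. 8 and 9 (a downward view-area movement invalidates the bottom n slices): the slices without a valid reference are simply swapped for intra-coded slices, such as Slice A and Slice B in FIG. 9, of the videos that belong at those positions. The names and byte-string representation are assumptions for illustration.

```python
from typing import List

Slice = bytes  # one slice's encoded data

def update_insertion_slices(insertion: List[Slice],
                            intra_replacements: List[Slice],
                            n: int, view_moved_down: bool) -> List[Slice]:
    """Replace the n slices of the insertion frame that would otherwise
    have no valid reference with intra-coded slices of the videos that
    are supposed to appear at those positions."""
    out = list(insertion)
    if view_moved_down:
        out[-n:] = intra_replacements   # bottom n slices had no reference
    else:
        out[:n] = intra_replacements    # top n slices had no reference
    return out
```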
  • FIG. 10 illustrates how the insertion unit 35 in FIG. 4 inserts an insertion stream.
  • as shown in FIG. 10, the storage unit 32 stores (bit streams of) insertion streams in association with movement information. Specifically, for each piece of movement information, the storage unit 32 stores (a bit stream of) an insertion stream in which all macroblocks are replaced with skip macroblocks whose motion vector indicates that movement information.
  • the read unit 33 reads out (the bit stream of) the insertion stream associated with the movement information supplied from the receiving unit 31 . Then, the insertion unit 35 updates the slice data of (the bit stream of) the read insertion stream on the basis of the movement information, and inserts the updated insertion stream into (a bit stream of) a frame-based encoded stream of a multiple-video image supplied from the combining unit 34 .
  • for example, when the movement information supplied from the receiving unit 31 is movement information A and does not indicate a vertical movement by an amount equivalent to n slices, (a bit stream of) an insertion stream A is inserted as it is between (the bit streams of) the encoded streams of the 0th and 1st frames of the multiple-video image.
  • in this way, the distribution server 11 can insert the insertion stream with a much lighter processing load than if it generated an insertion stream associated with the movement information every time it received that information.
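  • a sketch of how such a pre-generated store might look (covering the roles of the storage unit 32 and the read unit 33); the key type and method names are our assumptions, not interfaces defined by the patent.

```python
from typing import Dict, Tuple

MovementInfo = Tuple[int, int]   # assumed encoding of direction and amount

class InsertionStreamStore:
    """Pre-generated insertion streams, keyed by movement information."""

    def __init__(self) -> None:
        self._streams: Dict[MovementInfo, bytes] = {}

    def put(self, movement: MovementInfo, bitstream: bytes) -> None:
        # Pre-store an insertion stream whose skip macroblocks all
        # carry a motion vector equal to `movement`.
        self._streams[movement] = bitstream

    def read(self, movement: MovementInfo) -> bytes:
        # Read unit 33: fetch by movement information rather than
        # encoding a new insertion stream for every request.
        return self._streams[movement]
```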
  • when the insertion stream is inserted, its slice header is changed. Specifically, frame_num, pic_order_cnt_lsb, delta_pic_order_cnt_bottom, delta_pic_order_cnt[0], and delta_pic_order_cnt[1], which are included in a slice header of an insertion stream, are made the same as those in the slice header of the frame immediately following the insertion stream in coding order.
  • here, frame_num is a frame identifier, and pic_order_cnt_lsb, delta_pic_order_cnt_bottom, delta_pic_order_cnt[0], and delta_pic_order_cnt[1] are information used to determine the picture order count (POC).
  • in addition, the values of frame_num, pic_order_cnt_lsb, delta_pic_order_cnt_bottom, delta_pic_order_cnt[0], and delta_pic_order_cnt[1] of all frames from the frame after the insertion stream to the next IDR picture are increased by an amount equivalent to the inserted frame 0′.
  • note that updating the slice data of the insertion stream also involves changing data other than the slice data when the intra-coded slice data used for the update is slice data of an IDR picture.
  • specifically, nal_unit_type, which indicates the type of the network abstraction layer (NAL) unit of the slice data used for the update, is changed from 5, indicating slice data of an IDR picture, to 1, indicating slice data of a picture other than an IDR picture.
  • idr_pic_id, an identifier of the IDR picture included in the slice header of the slice data used for the update, is deleted.
  • furthermore, when nal_unit_type of the slice data used for the update is 5 and nal_ref_idc is not 0, no_output_of_prior_pics_flag and long_term_reference_flag included in the slice header are deleted, and adaptive_ref_pic_marking_mode_flag is changed to 0.
  • no_output_of_prior_pics_flag is a flag specifying how the pictures decoded prior to the IDR picture are treated after decoding of the IDR picture.
  • long_term_reference_flag is a flag specifying whether the IDR picture is used as a long-term reference picture.
  • adaptive_ref_pic_marking_mode_flag is a flag to be set to use memory management control operation (MMCO) and is set to 0 when MMCO is not used.
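  • the header rewrites described above can be summarized field by field. In the sketch below each slice header is modeled as a plain Python dict so the changes are visible; a real implementation would re-serialize the entropy-coded AVC slice headers rather than edit dicts.

```python
from typing import Any, Dict

POC_FIELDS = ("frame_num", "pic_order_cnt_lsb", "delta_pic_order_cnt_bottom",
              "delta_pic_order_cnt[0]", "delta_pic_order_cnt[1]")

def match_insertion_header(ins_hdr: Dict[str, Any],
                           following_hdr: Dict[str, Any]) -> None:
    """Copy the numbering fields from the slice header of the frame that
    immediately follows the insertion stream in coding order."""
    for field in POC_FIELDS:
        ins_hdr[field] = following_hdr[field]

def de_idr(slice_hdr: Dict[str, Any]) -> None:
    """Turn intra slice data taken from an IDR picture into non-IDR slice
    data so it can be spliced into the middle of the stream."""
    if slice_hdr.get("nal_unit_type") == 5:   # coded slice of an IDR picture
        slice_hdr["nal_unit_type"] = 1        # coded slice, non-IDR
        slice_hdr.pop("idr_pic_id", None)
        if slice_hdr.get("nal_ref_idc", 0) != 0:
            slice_hdr.pop("no_output_of_prior_pics_flag", None)
            slice_hdr.pop("long_term_reference_flag", None)
            slice_hdr["adaptive_ref_pic_marking_mode_flag"] = 0
```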
  • FIGS. 11A, 11B, and 11C illustrate how the combining unit 34 in FIG. 4 updates a frame-based encoded stream of a multiple-video image.
  • the encoded stream of a multiple-video image of the 0th frame shown in FIG. 11A corresponds to the frame-based encoded stream of the multiple-video image in FIG. 5 .
  • the 2nd slice of the multiple-video image of the 1st frame refers to the 0th slice of the 0th frame which is one frame previous to the 1st frame.
  • an insertion stream is generated as described with reference to FIG. 9 and is inserted as an encoded stream of the 0′th frame as shown in FIG. 11B .
  • an encoded stream of a multiple-video image of the 1st frame is generated so as to contain slices that correspond to the 2nd to 15th slices of the multiple-video image of the 0th frame in an upper part of the multiple-video image of the 1st frame.
  • an encoded stream of a multiple-video image composed of the 1st to 3rd slices of the 1st and 2nd videos and the 0th to 3rd slices of the 3rd and 4th videos is generated as the encoded stream of the multiple-video image of the 1st frame.
  • however, the 0th slice of the multiple-video image of the 1st frame refers to a slice of the 0′th frame positioned one slice above the 0th slice, namely outside the screen. Consequently, proper decoding is not performed.
  • accordingly, the combining unit 34 replaces all the macroblocks in the slices of the 1st video in the multiple-video image of the 1st frame, which refer to the outside of the screen, with skip macroblocks with a motion vector of 0, as shown in FIG. 11C. As a result, display of the 1st video is stopped.
  • the 2nd video also contains a slice that refers to a slice outside the screen, as in the case of the 1st video, and therefore display of the 2nd video is also stopped.
  • likewise, for the area other than the 1st to 4th videos in the multiple-video image of the 1st frame, the combining unit 34 replaces all the macroblocks of that area with skip macroblocks with a motion vector of 0, as shown in FIG. 11C. As a result, display of the area other than the 1st to 4th videos is stopped.
  • FIG. 12 is a flowchart illustrating how the distribution server 11 in FIG. 4 performs generation processing.
  • in step S11 of FIG. 12, the combining unit 34 selects encoded streams equivalent to one frame of each of a plurality of videos to be displayed from the encoded streams of the videos input from outside sources and combines the selected streams to generate an encoded stream of the 0th frame of the multiple-video image.
  • the combining unit 34 supplies the encoded stream of the 0th frame of the multiple-video image to the transmission unit 36 via the insertion unit 35 .
  • in step S12, the transmission unit 36 transmits the encoded stream of the 0th frame of the multiple-video image, which is supplied from the insertion unit 35, as a combined stream to the receiving device 13 via the network 12 shown in FIG. 3 .
  • in step S13, the receiving unit 31 receives movement information transmitted from the receiving device 13 via the network 12 and supplies the movement information to the read unit 33, the combining unit 34, and the insertion unit 35 .
  • in step S14, the combining unit 34 selects encoded streams equivalent to one frame of each of a plurality of videos to be displayed from the encoded streams of the videos input from outside sources on the basis of the history of the movement information, and combines the selected streams to generate a frame-based encoded stream of a multiple-video image.
  • the combining unit 34 supplies the encoded stream to the insertion unit 35 .
  • in step S15, the read unit 33 determines whether the amount of movement indicated by the movement information supplied from the receiving unit 31 is 0. If it is determined in step S15 that the amount of movement is 0, the insertion unit 35 supplies the frame-based encoded stream of the multiple-video image supplied from the combining unit 34 to the transmission unit 36 .
  • in step S16, the transmission unit 36 transmits the frame-based encoded stream of the multiple-video image supplied from the combining unit 34 as a combined stream to the receiving device 13 via the network 12, and the processing goes to step S25.
  • on the other hand, if it is determined in step S15 that the amount of movement is not 0, the combining unit 34 performs, in step S17, multiple-video image update processing to update the frame-based encoded stream of the multiple-video image generated in step S14.
  • in step S18, the read unit 33 reads out the insertion stream associated with the movement information from the storage unit 32 on the basis of the movement information supplied from the receiving unit 31 and supplies the insertion stream to the insertion unit 35 .
  • in step S19, the insertion unit 35 updates the slice header of the insertion stream with the use of the slice header of the frame-based encoded stream of the multiple-video image supplied from the combining unit 34 .
  • specifically, the insertion unit 35 makes frame_num, pic_order_cnt_lsb, delta_pic_order_cnt_bottom, delta_pic_order_cnt[0], and delta_pic_order_cnt[1], which are contained in the slice header of the insertion stream, the same as those in the slice header of the encoded stream supplied from the combining unit 34 .
  • in step S20, the insertion unit 35 determines whether the movement information supplied from the receiving unit 31 indicates that the movement direction is vertical and the amount of movement is equivalent to n slices.
  • if so, in step S21 the insertion unit 35 updates the slice data of the n slices on the movement-direction side of the insertion video image to the intra-coded slice data of the n slices on the opposite side of the videos that are to be displayed at those positions.
  • the slice data used for the update is selected from the slice data of the encoded streams of the videos input from outside sources. If the slice data used for the update is slice data of an IDR picture, the insertion unit 35 changes nal_unit_type and idr_pic_id of the insertion stream; if nal_ref_idc is not 0, the insertion unit 35 also changes no_output_of_prior_pics_flag, long_term_reference_flag, and adaptive_ref_pic_marking_mode_flag. Subsequent to the processing in step S21, the processing goes to step S22.
  • on the other hand, if it is determined in step S20 that the movement direction is not vertical or the amount of movement is not equivalent to n slices, the processing skips step S21 and goes to step S22.
  • in step S22, the insertion unit 35 updates the slice header of the encoded stream of the multiple-video image supplied from the combining unit 34 . Specifically, the insertion unit 35 increases the values of frame_num, pic_order_cnt_lsb, delta_pic_order_cnt_bottom, delta_pic_order_cnt[0], and delta_pic_order_cnt[1], which are contained in the slice header of the encoded stream of the multiple-video image, by the number of frames of the insertion stream inserted between an IDR picture and the picture of the multiple-video image.
  • in step S23, the insertion unit 35 inserts the insertion stream before the frame-based encoded stream of the multiple-video image whose slice header was updated in step S22.
  • in this way, the insertion stream is inserted into the frame-based encoded stream of the multiple-video image, and the insertion unit 35 supplies the resulting stream to the transmission unit 36 .
  • in step S24, the transmission unit 36 transmits the frame-based encoded stream of the multiple-video image with the insertion stream inserted, as a combined stream, to the receiving device 13 via the network 12, and the processing goes to step S25.
  • in step S25, the distribution server 11 determines whether to terminate the generation processing in response to a user's instruction or the like. If the distribution server 11 determines not to terminate the generation processing, the processing returns to step S13, and steps S13 to S25 are repeated until the generation processing is terminated.
  • if the distribution server 11 determines in step S25 to terminate the generation processing, the processing ends.
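  • the flow of steps S11 to S25 can be sketched as a single loop. The following Python outline is ours; every object and method name (recv, combiner.combine_next_frame, and so on) is a placeholder standing in for the corresponding unit of FIG. 4, not an API defined by the patent.

```python
def generation_processing(recv, reader, combiner, inserter, sender,
                          should_terminate):
    # S11-S12: build and send the 0th frame of the multiple-video image.
    sender.transmit(combiner.combine_first_frame())
    while True:
        movement = recv.receive_movement_info()            # S13
        frame = combiner.combine_next_frame(movement)      # S14
        if movement.amount == 0:                           # S15
            sender.transmit(frame)                         # S16
        else:
            combiner.update_multiple_video_image(frame)    # S17 (FIG. 13)
            ins = reader.read_insertion_stream(movement)   # S18
            inserter.update_slice_header(ins, frame)       # S19
            if movement.is_vertical_by_n_slices():         # S20
                inserter.update_slice_data(ins, movement)  # S21
            inserter.renumber_following_frames(frame)      # S22
            frame = inserter.insert_before(ins, frame)     # S23
            sender.transmit(frame)                         # S24
        if should_terminate():                             # S25
            break
```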
  • FIG. 13 is a flowchart that details multiple-video image update processing in step S 17 in FIG. 12 .
  • in step S41 of FIG. 13, the combining unit 34 selects a video that has not yet been processed from the videos making up the multiple-video image as the target video to be processed.
  • in step S42, the combining unit 34 determines whether all slices of the target video are present in the multiple-video image.
  • if it is determined in step S42 that all slices of the target video are present in the multiple-video image, it is then determined in step S43 whether any slices to be referred to by the slices of the target video lie outside the insertion video image.
  • if it is determined in step S43 that slices to be referred to by the slices of the target video lie outside the insertion video image, the processing goes to step S44.
  • likewise, if it is determined in step S42 that not all the slices of the target video are present in the multiple-video image, the processing goes to step S44.
  • in step S44, the combining unit 34 changes all macroblocks of the target video in the frame-based encoded stream of the multiple-video image generated in step S14 of FIG. 12 to skip macroblocks with a motion vector of 0. Then, the processing goes to step S45.
  • on the other hand, if it is determined in step S43 that no slices to be referred to by the slices of the target video lie outside the insertion video image, the processing skips step S44 and goes to step S45.
  • in step S45, the combining unit 34 determines whether all videos making up the multiple-video image have been selected as target videos to be processed. If not, the processing returns to step S41, and steps S41 to S45 are repeated until all the videos have been selected.
  • if it is determined in step S45 that all the videos making up the multiple-video image have been selected as target videos to be processed, the processing returns to step S17 and goes to step S18 in FIG. 12 .
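  • the per-video decision logic of FIG. 13 can be sketched as follows; again, the method names on video and encoded_frame are placeholders for the checks in steps S42 and S43 and the rewrite in step S44.

```python
def update_multiple_video_image(videos, encoded_frame):
    # One pass over the videos making up the multiple-video image
    # (steps S41 and S45 select each video in turn).
    for video in videos:
        all_present = video.slices_all_present(encoded_frame)             # S42
        refs_outside = video.refs_outside_insertion_image(encoded_frame)  # S43
        if not all_present or refs_outside:
            # S44: turn every macroblock of this video into a skip
            # macroblock with a motion vector of 0, so its display is
            # stopped, as described for FIG. 11C.
            encoded_frame.make_skip_with_zero_mv(video)
```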
  • as described above, when a view area of a multiple-video image is moved, the distribution server 11 inserts an insertion stream into the encoded stream of the multiple-video image, thereby eliminating the need to change the motion vectors of the encoded stream of the multiple-video image. Accordingly, the distribution server 11 can generate a combined stream without re-encoding when the view area of the multiple-video image is moved.
  • in addition, the distribution server 11 can generate a combined stream that shows the multiple-video image as it is supposed to appear after the view area has moved.
  • furthermore, the quality degradation of the multiple-video image that re-encoding would cause is prevented.
  • the aforementioned series of processes performed by the distribution server 11 can be implemented not only by hardware but also by software.
  • when the series of processes is implemented by software, the software programs are installed on a computer.
  • the computer used herein may be a computer incorporated in hardware for specific purposes or, for example, a general-purpose personal computer that can perform various functions by installing various programs thereon.
  • FIG. 14 is a block diagram showing an exemplary hardware configuration of a computer that executes the aforementioned series of processes performed by the distribution server 11 with programs.
  • in the computer, a central processing unit (CPU) 201, a read only memory (ROM) 202, and a random access memory (RAM) 203 are interconnected by a bus 204 .
  • the bus 204 is further connected with an input-output interface 205 .
  • the input-output interface 205 is connected with an input section 206 , an output section 207 , a storage section 208 , a communicating section 209 , and a drive 210 .
  • the input section 206 includes a keyboard, a mouse, a microphone, etc.
  • the output section 207 includes a display, a speaker, etc.
  • the storage section 208 includes a hard disk, a nonvolatile memory, etc.
  • the communication section 209 includes a network interface, etc.
  • the drive 210 drives a removable medium 211, such as a magnetic disk, an optical disc, a magneto-optical disc, or a semiconductor memory.
  • in the computer configured as described above, the CPU 201 loads a program stored in the storage section 208 into the RAM 203 via the input-output interface 205 and the bus 204, and then executes the program.
  • thereby, the series of processes described above is performed.
  • the program to be executed by the computer (CPU 201) can be recorded on a removable medium 211, such as a packaged medium, and provided in that form.
  • the program can be provided via a wired or wireless transmission medium such as a local area network, the Internet, digital satellite broadcasting or the like.
  • the program can be installed from the removable medium 211 loaded into the drive 210 onto the storage section 208 of the computer through the input-output interface 205 .
  • the program can be also installed into the storage section 208 from the communication section 209 that receives the program through a wired or wireless transmission medium.
  • the program can be preinstalled in the ROM 202 or the storage section 208 .
  • the programs executed by the computer may be programs that are processed in time series in accordance with the sequence described in this specification.
  • the programs may be programs to be executed in parallel or at necessary timing, such as at the time of being invoked, or the like.
  • a single step may include a plurality of processes that can be executed by a single apparatus or can be shared by a plurality of apparatuses.
  • note that, whatever the movement information of the view area indicates, all macroblocks of the insertion stream can be changed into skip macroblocks with a motion vector indicating that movement information.
  • the coding method of the encoded streams of the videos may instead be the high efficiency video coding (HEVC) method; in this case, the unit of coding is a coding unit (CU) rather than a macroblock.
  • the present disclosure can be configured as follows.
  • a video processing apparatus includes a combining unit that combines encoded streams of a plurality of videos to generate an encoded stream of a multiple-video image composed of the videos, each encoded stream of each video having coding units in each horizontal line that are encoded as a slice, and an insertion unit that inserts an insertion stream into the encoded stream of the multiple-video image generated by the combining unit when a view area of the multiple-video image is moved, the insertion stream being an encoded stream in which all the coding units in the multiple-video image are replaced with skip macroblocks with a motion vector indicating a direction and an amount of movement of the view area.
  • a video processing method performed by a video processing apparatus includes combining encoded streams of a plurality of videos to generate an encoded stream of a multiple-video image composed of the videos, each encoded stream of each video having coding units in each horizontal line that are encoded as a slice, and inserting an insertion stream in the encoded stream of the multiple-video image generated by the combining process when a view area of the multiple-video image is moved, the insertion stream being an encoded stream in which all the coding units in the multiple-video image are replaced with skip macroblocks with a motion vector indicating a direction and an amount of movement of the view area.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Business, Economics & Management (AREA)
  • Marketing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

A video processing apparatus includes a combining unit that combines encoded streams of a plurality of videos to generate an encoded stream of a multiple-video image composed of the videos, each encoded stream of each video having coding units in each horizontal line that are encoded as a slice, and an insertion unit that inserts an insertion stream into the encoded stream of the multiple-video image generated by the combining unit when a view area of the multiple-video image is moved. The insertion stream is an encoded stream in which all the coding units in the multiple-video image are replaced with skip macroblocks with a motion vector indicating a direction and an amount of movement of the view area.

Description

CROSS REFERENCE TO RELATED APPLICATIONS
This application claims the benefit of Japanese Priority Patent Application JP 2013-046836 filed Mar. 8, 2013, the entire contents of which are incorporated herein by reference.
BACKGROUND
The present disclosure relates to video processing apparatuses and video processing methods and, more particularly, to a video processing apparatus and a video processing method configured to reduce the processing load caused by movement of a view area of a multiple-video image.
Due to progress in content digitization and the development of a video transmission infrastructure, video distribution through the Internet is spreading. Recently, in addition to personal computers, network-connectable television receivers have been increasing as reception side devices, and therefore it has become possible to view distributed videos on a television receiver. Furthermore, the recent development of cloud services has made it possible to provide various channels including private contents to viewers via a network. Thus, there has been an increasing demand for a multiple-video reproduction system that allows viewers to simultaneously watch a plurality of videos as shown in FIG. 1 in order to allow the viewers to easily search for a preferred video to watch.
In the multiple-video reproduction system of FIG. 1, a multiple-video image displayed on a screen is composed of a plurality of videos. Among the videos displayed, a main video that is supposed to be mainly viewed is arranged at the center of the screen in a maximum size. Around the main video, selectable (switchable) videos are arranged in sizes smaller than that of the main video. The selectable videos are, for example, TV broadcasting channels, Web screens, video contents of movies and the like, and TV chat screens, and are obtained, for example, from within a cloud (network).
Among methods for displaying such a multiple-video image, a first method employs servers in the cloud that distribute a plurality of encoded streams associated with a plurality of videos, respectively. A client apparatus receives and decodes the encoded streams and performs combination processing on the decoded streams so as to generate a multiple-video image. By way of example, Japanese Unexamined Patent Application Publication No. 2002-064818 discloses a multiple-video image that is formed by receiving a plurality of elementary streams (ESs) and assigning larger view areas to the ESs in order of decreasing display priority.
However, distribution of the plurality of encoded streams involves a substantially wide transmission band. Also, it is necessary for client apparatuses to have the capabilities of simultaneously decoding the encoded streams and of performing combination processing on the decoded streams, which makes the client apparatuses expensive.
As a second method for displaying a multiple-video image, there is a method in which a server generates a multiple-video image in the form of a single encoded stream and delivers the stream as illustrated in FIG. 2. In this case, the server decodes a plurality of videos to be combined, resizes the videos, combines the videos, and then re-encodes the combined videos so as to generate an encoded stream of a multiple-video image. These processes put a substantially large processing load on the server.
SUMMARY
When a view area of the multiple-video image is moved by scrolling or other operations, the server has to decode the encoded stream of the multiple-video image, change the motion vector, and re-encode the decoded stream. These processes also put a large processing load on the server.
The present disclosure has been made in view of these circumstances and enables reduction of the processing load caused by movement of the view area of the multiple-video image.
A video processing apparatus according to one embodiment of the present disclosure includes a combining unit that combines encoded streams of a plurality of videos in order to generate an encoded stream of a multiple-video image composed of the videos, each encoded stream of each video having coding units in each horizontal line that are encoded as a slice. The video processing apparatus also includes an insertion unit that inserts an insertion stream into the encoded stream of the multiple-video image generated by the combining unit when a view area of the multiple-video image is moved, and the insertion stream is an encoded stream in which all the coding units in the multiple-video image are replaced with skip macroblocks with a motion vector indicating a direction and an amount of movement of the view area.
A video processing method according to an embodiment of the present disclosure is associated with the video processing apparatus according to the above-described embodiment of the present disclosure.
The video processing method in the embodiment of the present disclosure includes combining encoded streams of a plurality of videos in order to generate an encoded stream of a multiple-video image composed of the videos, each encoded stream of each video having coding units in each horizontal line that are encoded as a slice. The video processing method also includes inserting an insertion stream into the encoded stream of the multiple-video image generated by the combining unit when a view area of the multiple-video image is moved, and the insertion stream is an encoded stream in which all the coding units in the multiple-video image are replaced with skip macroblocks with a motion vector indicating a direction and an amount of movement of the view area.
According to the embodiments of the present disclosure, the processing load caused by movement of a view area of a multiple-video image can be reduced.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 illustrates a multiple-video reproduction system;
FIG. 2 illustrates an example of a method for distributing an encoded stream of a multiple-video image;
FIG. 3 is an exemplary configuration of a multiple-video reproduction system, to which the present technology is applied, according to an embodiment;
FIG. 4 is a block diagram illustrating an exemplary configuration of the distribution server in FIG. 3;
FIG. 5 illustrates generation of a frame-based encoded stream of a multiple-video image;
FIG. 6 illustrates a multiple-video image when a view area is moved;
FIG. 7 illustrates an insertion stream;
FIG. 8 illustrates an insertion video image in a case where slice data of an insertion stream is not updated;
FIG. 9 illustrates an insertion video image in a case where slice data of an insertion stream is updated;
FIG. 10 illustrates how to insert an insertion stream;
FIGS. 11A, 11B, and 11C illustrate an update of a frame-based encoded stream of a multiple-video image;
FIG. 12 is a flowchart illustrating how the distribution server in FIG. 4 performs generation processing;
FIG. 13 is a flowchart illustrating details of multiple-video image update processing in FIG. 12; and
FIG. 14 is a block diagram illustrating an exemplary hardware configuration of a computer.
DETAILED DESCRIPTION OF EMBODIMENTS
Embodiment
Exemplary Configuration of Multiple-Video Reproduction System According to Embodiment
FIG. 3 shows an exemplary configuration of a multiple-video reproduction system, to which the present technology is applied, according to an embodiment.
A multiple-video reproduction system 1 functions as a video processing apparatus and includes a distribution server 11 for distributing videos and a receiving device 13 connected to the distribution server 11 via a network 12. The multiple-video reproduction system 1 distributes an encoded stream of a multiple-video image composed of a plurality of videos to display the multiple-video image.
Specifically, a plurality of frame-based encoded streams of videos are input from outside sources to the distribution server 11 of the multiple-video reproduction system 1. The distribution server 11 selects a plurality of encoded streams of videos to be displayed from the input encoded streams of the videos on the basis of the history of movement information, which indicates the direction and amount of movement of a view area and is transmitted from the receiving device 13. The distribution server 11 combines the selected encoded streams of videos on a frame-by-frame basis to generate a frame-based encoded stream of a multiple-video image.
If the amount of movement indicated by the movement information is 0, the distribution server 11 transmits the frame-based encoded stream of the multiple-video image as a combined stream without changes to the receiving device 13.
On the other hand, if the amount of movement indicated by the movement information is anything other than 0, the distribution server 11 generates, on the basis of the movement information, an encoded stream of an insertion video image, that is, a frame to be inserted between frames of the multiple-video image and associated with the encoded stream of the multiple-video image. The generated encoded stream of the insertion video is referred to as an insertion stream. The distribution server 11 then inserts the insertion stream into the frame-based encoded stream of the multiple-video image and transmits the result as a combined stream to the receiving device 13.
The receiving device 13 receives the combined stream transmitted by the distribution server 11 via the network 12 such as the Internet. The receiving device 13 decodes the received combined stream and displays the resultant multiple-video image and insertion video image on a built-in liquid crystal display or other types of display.
After starting to display the multiple-video image, the receiving device 13 generates movement information in response to the user's operation, such as scrolling and cursor movement, and transmits the generated movement information to the distribution server 11. Thus, when the amount of movement is anything other than 0, the encoded streams of the videos to be displayed are changed and accordingly the view area is shifted.
The receiving device 13 does not have to include a liquid crystal display, and may instead display the multiple-video image on a display device connected thereto. In addition, the receiving device 13 can be, for example, a television receiver having a network connecting function, a set top box (STB), a personal computer, or a portable terminal device.
The network 12 can be configured so as to be connected to a plurality of receiving devices 13. In this case, the distribution server 11 multicasts a combined stream to the receiving devices 13.
[Exemplary Configuration of Distribution Server]
FIG. 4 is a block diagram illustrating an exemplary configuration of the distribution server 11 in FIG. 3.
As shown in FIG. 4, the distribution server 11 includes a receiving unit 31, a storage unit 32, a read unit 33, a combining unit 34, an insertion unit 35, and a transmission unit 36.
The receiving unit 31 of the distribution server 11 receives movement information transmitted from the receiving device 13 via the network 12 shown in FIG. 3, and supplies the movement information to the read unit 33, combining unit 34, and insertion unit 35. The storage unit 32 stores insertion streams associated with the movement information indicating amounts of movement other than 0.
The read unit 33 reads out an insertion stream associated with the movement information from the storage unit 32 on the basis of the movement information supplied from the receiving unit 31, and supplies the insertion stream to the insertion unit 35.
The combining unit 34 selects a plurality of encoded streams of videos to be displayed from a plurality of encoded streams of videos input from outside sources based on the history of the movement information supplied from the receiving unit 31. The combining unit 34 combines the encoded streams of the selected videos on a frame by frame basis and supplies the frame-based encoded stream of the resultant multiple-video image to the insertion unit 35.
When an insertion stream is supplied from the read unit 33 or, that is, when the amount of movement is anything other than 0, the insertion unit 35 updates slice data of the insertion stream with the use of the frame-based encoded stream of a predetermined video that is input from an outside source on the basis of the movement information. The insertion unit 35 inserts the insertion stream into the frame-based encoded stream of the multiple-video image supplied from the combining unit 34.
During insertion, the insertion unit 35 changes (generates) the slice header of the insertion stream on the basis of the slice header of the frame-based encoded stream, which is placed immediately after the insertion stream, of the multiple-video image. The insertion unit 35 also changes the slice header of the encoded stream of the multiple-video image after the insertion stream.
In addition, the insertion unit 35 supplies the frame-based encoded stream of the multiple-video image with the insertion stream having been inserted, which is regarded as a combined stream, to the transmission unit 36.
On the other hand, when the insertion stream is not supplied from the read unit 33 or, that is, when the amount of movement is 0, the insertion unit 35 supplies a frame-based encoded stream of a multiple-video image, which is regarded as a combined stream, to the transmission unit 36.
The transmission unit 36 transmits the combined stream, which is supplied from the insertion unit 35, to the receiving device 13 via the network 12 shown in FIG. 3.
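The data flow among these units can be pictured with a short sketch. The following Python fragment is purely illustrative; the class, method, and field names are our own shorthand for the units of FIG. 4, not an actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Movement:
    dx: int  # horizontal movement of the view area, in pixels
    dy: int  # vertical movement of the view area, in pixels

class DistributionServer:
    def __init__(self, insertion_store):
        self.insertion_store = insertion_store  # (dx, dy) -> pre-built insertion stream
        self.history = []                       # history of received movement information

    def on_movement_info(self, movement, input_streams):
        """Receiving unit -> combining/read/insertion units -> transmission unit."""
        self.history.append(movement)
        combined = self.combine(input_streams)             # combining unit (FIG. 5)
        if movement.dx == 0 and movement.dy == 0:
            return combined                                # no movement: pass through
        insertion = self.insertion_store[(movement.dx, movement.dy)]  # read unit
        return self.insert(insertion, combined)            # insertion unit

    def combine(self, input_streams):
        # Placeholder for the slice reordering illustrated in FIG. 5.
        return [s for stream in input_streams for s in stream]

    def insert(self, insertion, combined):
        # Placeholder: the real insertion unit also rewrites slice headers.
        return [insertion] + combined
```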
[Description on Generation of Frame-Based Encoded Stream of Multiple-Video Image]
FIG. 5 illustrates how the combining unit 34 in FIG. 4 generates a frame-based encoded stream of a multiple-video image.
Note that the example of FIG. 5 shows that the combining unit 34 generates a frame-based encoded stream of a multiple-video image with the 0th to 3rd videos (i.e., View 0 to View 3) arranged at the upper left, upper right, lower left, and lower right, respectively.
As shown in FIG. 5, the videos associated with the encoded streams input to the combining unit 34 have, in their surrounding areas, one or more pixels with a predetermined pixel value (0 in this example). Each of the encoded streams input to the combining unit 34 is a stream encoded by an advanced video coding (AVC) method in which the macroblocks, which are coding units, in each horizontal line are encoded as a slice, and each slice does not refer to the outside of the screen and refers only to the frame one frame before the current frame in coding order. The frame referred to is called a reference frame in inter coding. In the example of FIG. 5, each video has four macroblocks in a vertical direction, and accordingly the number of slices in each video is four.
The combining unit 34 reorders slices in the respective encoded streams of the 0th to 3rd videos to be displayed in the input encoded streams and combines the encoded streams of the 0th to 3rd videos, thereby generating a frame-based encoded stream of a multiple-video image.
Specifically, the combining unit 34 arranges the 0th slice of the 0th video as the 0th slice of the combined encoded stream, while arranging the 0th slice of the 1st video as the 1st slice of the combined encoded stream. Thereafter, the slices of the 0th video and the slices of the 1st video are alternately arranged, and the last slice of the 1st video is arranged so as to be the 7th slice in the combined encoded stream.
Then, the combining unit 34 arranges the 0th slice of the 2nd video as the 8th slice of the combined encoded stream, while arranging the 0th slice of the 3rd video as the 9th slice of the combined encoded stream. Thereafter, the slices of the 2nd video and the slices of the 3rd video are alternately arranged, and the last slice of the 3rd video is arranged so as to be the 15th slice of the combined encoded stream.
Since the encoded streams input to the combining unit 34 are encoded streams with the macroblocks on each horizontal line encoded as a slice, there is no dependence between macroblocks arranged at vertically different positions. Therefore, decoding can be properly performed even if the slices are decoded in a different order. Thus, the combining unit 34 can generate a frame-based encoded stream of a multiple-video image by only reordering the slices of the encoded streams of the respective videos.
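The reordering described above amounts to a simple row-wise interleave. A minimal sketch follows, assuming each input stream is a list of four independently decodable slices as in FIG. 5; the function name is hypothetical.

```python
def combine_2x2(v0, v1, v2, v3):
    """Interleave the slices of four 4-slice streams into the 16-slice
    combined order described for FIG. 5 (a 2x2 arrangement)."""
    combined = []
    for row in range(4):                 # top half: View 0 (left) and View 1 (right)
        combined.append(v0[row])         # even slice indices 0, 2, 4, 6
        combined.append(v1[row])         # odd slice indices 1, 3, 5, 7
    for row in range(4):                 # bottom half: View 2 (left) and View 3 (right)
        combined.append(v2[row])         # slice indices 8, 10, 12, 14
        combined.append(v3[row])         # slice indices 9, 11, 13, 15
    return combined

# Example: slices labeled "<view>-<row>".
views = [[f"{v}-{r}" for r in range(4)] for v in range(4)]
print(combine_2x2(*views)[:4])           # ['0-0', '1-0', '0-1', '1-1']
```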
[Description on Insertion Stream]
FIG. 6 illustrates a multiple-video image when a view area is moved.
In FIG. 6, the horizontal axis represents time (T). The same applies to FIG. 7, described later.
As shown in FIG. 6, when a view area of the 1st frame (Frame 1) is positioned at the bottom left with respect to a view area of the 0th frame (Frame 0) in the coding order, or, in other words, when a view area of the 0th frame moves in a direction toward the bottom left, the position of the videos in the multiple-video image moves in a direction toward the top right on the screen. That is, a video 43 in the multiple-video image of the 1st frame, which is associated with the multiple-video image 41 of the 0th frame, is positioned at the top right with respect to the multiple-video image 41 on the screen.
In this case, if the motion vector (MV) of a macroblock (Inter MB) 52 in the video 43, which is inter-coded with reference to the multiple-video image 41, is not changed, the reference block of the macroblock 52 is changed to a block positioned at the top right with respect to a block 53 to which the macroblock 52 is supposed to refer.
In short, the macroblock 52 corresponds to a macroblock 51 of a multiple-video image 42 of the 1st frame when the view area is not moved. Therefore, it is necessary for the motion vector of the macroblock 52 to point to the block 53, which is the reference block of the macroblock 51. However, if the motion vector of the macroblock 51 is used as the motion vector of the macroblock 52, the macroblock 52 refers to a block 54 as its reference block, which is positioned at the top right with respect to the block 53 in the multiple-video image 41, because the macroblock 52 is positioned at the top right with respect to the macroblock 51 on the screen. As a result, the macroblock 52 is not properly decoded.
To achieve proper decoding even when the view area is moved, it is necessary to shift the position pointed at by the motion vector of each inter-coded macroblock in the direction in which the view area has moved by the amount of movement of the view area. However, decoding the multiple-video image, changing its motion vectors, and recoding the decoded multiple-video image increases the processing load, thereby making it difficult to deliver the combined stream in real time. In addition, recoding of the multiple-video image may deteriorate the image quality.
Therefore, the distribution server 11 inserts an insertion stream in order to shift the position pointed at by the motion vector of the inter-coded macroblock in the direction in which the view area has moved by an amount of movement of the view area.
More specifically, the distribution server 11 generates an insertion stream in which all macroblocks of an insertion video are replaced with skip macroblocks with a motion vector indicating the direction in which and the amount by which the view area has moved, and inserts the insertion stream before the encoded stream of the multiple-video image in which the view area has moved.
For example, when the view area is moved in a direction toward the bottom left as shown in FIG. 6, the distribution server 11 generates an insertion stream in which all macroblocks are replaced with skip macroblocks with a motion vector pointing in the direction toward the bottom left as shown in FIG. 7. Thus, the insertion video image 71 corresponds to the multiple-video image 41 that is positioned further toward the bottom left than the insertion video image on the screen. In short, the insertion video image 71 corresponds to the multiple-video image 41 with the view area having moved in the direction toward the bottom left.
The insertion stream is inserted as an encoded stream of the 0′th frame (Frame 0′) before the encoded stream of the multiple-video image of the 1st frame in which the view area has moved. As a result, the multiple-video image of the 1st frame is decoded with reference to the insertion video image 71, which is one frame before the multiple-video image of the 1st frame, but not the multiple-video image 41.
As described above, for example, the macroblock 52 is decoded with reference to a block 72 of the insertion video image 71. The block 72 is in a position corresponding to the block 54 on the screen, and the block 54 is positioned at the top right with respect to the block 53 to which the macroblock 51 refers. Also, the block 72 corresponds to the block 53 that is positioned further toward the bottom left than the block 72 on the screen. Therefore, the macroblock 52 is decoded with reference to the block 53 that is a reference block of the corresponding macroblock 51.
As described above, the distribution server 11 shifts the position pointed at by the motion vector of the macroblock 52 in the direction in which the view area has moved by an amount of movement of the view area by inserting the insertion stream without recoding.
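Because every macroblock of the insertion frame is a skip macroblock sharing one motion vector, the whole frame is determined by the movement of the view area. A sketch under that assumption follows; the record layout is our own, not a bitstream syntax.

```python
def make_insertion_frame(mb_cols, mb_rows, move_dx, move_dy):
    """Describe an insertion frame in which every macroblock is a skip
    macroblock whose motion vector equals the view-area movement
    (y increases downward; cf. the bottom-left movement of FIGS. 6 and 7)."""
    skip_mb = {"type": "skip", "mv": (move_dx, move_dy)}
    return [[dict(skip_mb) for _ in range(mb_cols)] for _ in range(mb_rows)]

# View area moved one macroblock (16 pixels) downward, as in the FIG. 8 example.
frame = make_insertion_frame(mb_cols=8, mb_rows=16, move_dx=0, move_dy=16)
assert frame[0][0] == {"type": "skip", "mv": (0, 16)}
```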
With reference to FIGS. 8 and 9, a description will be made about updating of slice data of an insertion stream.
FIG. 8 illustrates an insertion video image in a case where slice data of an insertion stream is not updated.
In the example of FIG. 8, the frame-based encoded stream of the multiple-video image generated in FIG. 5 is an encoded stream of a multiple-video image of the 0th frame. Next to the 0th frame, an insertion stream is inserted as an encoded stream of the 0′th frame, and all macroblocks of the insertion stream are replaced with skip macroblocks with a motion vector that indicates a downward direction and an amount of movement equivalent to the size of one macroblock. The same applies to FIG. 9, described later.
When the slice data of the insertion stream is not updated as shown in FIG. 8, the insertion video image of the insertion stream becomes a multiple-video image in which the videos in the multiple-video image of the 0th frame are moved upward by exactly one slice. In short, the macroblocks of the insertion stream are skip macroblocks with a motion vector pointing downward by exactly one slice.
Therefore, the slices of the insertion stream are decoded with reference to the slices of the multiple-video image of the 0th frame positioned one macroblock lower than the slices of the insertion stream on the screen. As a result, the upper part of the decoded video image of the insertion stream is composed of the 2nd to 15th slices of the multiple-video image of the 0th frame.
On the other hand, there are no slices of the multiple-video image of the 0th frame positioned one macroblock lower than the lowermost slices of the insertion video image. Therefore, those slices of the insertion video image refer to the pixels that are closest to their reference within the screen. In this example, these pixels are in the lowermost part of the 14th and 15th slices, which are the lowermost slices of the multiple-video image of the 0th frame, and have a predetermined pixel value (0 in this example). As a result, the pixels of the slices at the lowermost part of the decoded video image of the insertion stream have the predetermined pixel value (0 in this example).
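This fallback to the nearest in-screen pixels is ordinary motion-compensation edge extension: the reference coordinates are clamped to the frame boundary. A one-function sketch of the arithmetic follows; the helper is hypothetical, not a codec API.

```python
def clamped_ref(x, y, mv_x, mv_y, width, height):
    """Clamp a motion-compensated reference coordinate to the frame, so a
    motion vector pointing outside the screen reads the nearest edge pixel."""
    rx = min(max(x + mv_x, 0), width - 1)
    ry = min(max(y + mv_y, 0), height - 1)
    return rx, ry

# A bottom-row pixel with a downward motion vector reads the bottom edge,
# whose value is the predetermined pixel value (0) in this embodiment.
print(clamped_ref(x=3, y=255, mv_x=0, mv_y=16, width=128, height=256))  # (3, 255)
```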
Since one or more pixels in surrounding areas of the videos making up the multiple-video image have a predetermined pixel value, the pixel value of the decoded video image of the insertion stream that is decoded with reference to the outside of the screen can be set to the predetermined value.
In contrast, if one or more pixels in the surrounding areas of the videos making up the multiple-video image do not have a predetermined pixel value, the decoded video image of the insertion stream, which is supposed to be decoded with reference to pixels outside the screen, refers to pixels inside the screen that are closest to the reference, resulting in corruption of the decoded video image of the insertion stream.
As shown in FIG. 8, even if the distribution server 11 does not update the slice data of the insertion stream, the decoded video image of the insertion stream is prevented from becoming corrupted and is displayed with high quality.
However, if the amount of movement of the view area is equivalent to n slices, where n is an integer (e.g., 16×n pixels), and the view area moves in a vertical direction, this embodiment is configured to display the video that is supposed to be displayed rather than a fixed color (black in this example). This can further improve the quality of the decoded video image of the insertion stream.
FIG. 9 illustrates an insertion video image in a case where slice data of an insertion stream is updated.
When the view area moves vertically by an amount of n slices, the slice data of the n slices on the movement-direction side of the insertion video image is updated with intra-coded slice data of the n slices on the opposite side of the videos that are to be displayed at those positions. In other words, slices of the insertion video image that have no reference slices in the multiple-video image are replaced with intra-coded slices of the videos to be displayed at the positions of those slices.
For example, as shown in FIG. 9, when the view area moves downward by an amount of one slice, the slice data of the lowermost slice of the insertion video image is updated with intra-coded slice data of the uppermost slice (Slice A or Slice B) of the videos that are to be displayed at that position.
Accordingly, the lowermost slices of the insertion video image are decoded without referring to the 0th frame of the multiple-video image, and the decoded video image of those slices becomes the video image that is supposed to be displayed at their position. As a result, the image quality of the decoded video image of the insertion stream is improved.
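A sketch of this update follows, assuming the insertion video image is held as a list of slice rows from top to bottom; the representation and names are ours.

```python
def update_insertion_rows(rows, intra_rows, n, downward=True):
    """rows: the insertion video image as a list of slice rows (top to bottom).
    Replace the n rows on the movement-direction side with intra-coded rows of
    the videos to be displayed at those positions (FIG. 9: n = 1, downward)."""
    updated = list(rows)
    if downward:
        updated[-n:] = intra_rows[:n]   # bottom rows <- top rows of new videos
    else:
        updated[:n] = intra_rows[-n:]   # upward movement is symmetric
    return updated

rows = [f"skip-row-{i}" for i in range(8)]
intra = ["intra(SliceA|SliceB)"]          # intra-coded row of the videos to display
print(update_insertion_rows(rows, intra, n=1)[-1])  # intra(SliceA|SliceB)
```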
[Description on Insertion of Insertion Stream]
FIG. 10 illustrates how the insertion unit 35 in FIG. 4 inserts an insertion stream.
As shown in FIG. 10, the storage unit 32 stores (bit streams of) insertion streams associated with movement information. Specifically, for each piece of movement information, the storage unit 32 stores (a bit stream of) an insertion stream in which all macroblocks are replaced with skip macroblocks whose motion vector indicates that movement information.
The read unit 33 reads out (the bit stream of) the insertion stream associated with the movement information supplied from the receiving unit 31. Then, the insertion unit 35 updates the slice data of (the bit stream of) the read insertion stream on the basis of the movement information, and inserts the updated insertion stream into (a bit stream of) a frame-based encoded stream of a multiple-video image supplied from the combining unit 34.
For example, as shown in FIG. 10, if the movement information supplied from the receiving unit 31 is movement information A and does not indicate a vertical movement by an amount equivalent to n slices, (a bit stream of) an insertion stream A is inserted as it is between (the bit streams of) the encoded streams of the multiple-video image of the 0th and 1st frames.
As described above, since the storage unit 32 stores an insertion stream associated with movement information, the distribution server 11 can insert the insertion stream with much lighter processing load in comparison with the case where the distribution server 11 generates an insertion stream associated with movement information every time the distribution server 11 receives the movement information.
At the time of insertion of the insertion stream, the slice header of the insertion stream is changed. Specifically, frame_num, pic_order_cnt_lsb, delta_pic_order_cnt_bottom, delta_pic_order_cnt[0], and delta_pic_order_cnt[1], which are included in the slice header of the insertion stream, are made the same as those in the slice header of the frame immediately following the insertion stream in coding order.
frame_num is a frame identifier, and pic_order_cnt_lsb, delta_pic_order_cnt_bottom, delta_pic_order_cnt[0], and delta_pic_order_cnt[1] are information to be used to determine a picture order count (POC).
During insertion of the insertion stream, the values of frame_num, pic_order_cnt_lsb, delta_pic_order_cnt_bottom, delta_pic_order_cnt[0], and delta_pic_order_cnt[1] of all frames from the frame of the insertion stream to an IDR picture are increased by a value corresponding to the inserted frame 0′.
Furthermore, updating the slice data of the insertion stream involves a change of data other than the slice data of the insertion stream when the intra-coded slice data to be used for the update is slice data of the IDR picture.
Specifically, nal_unit_type, which indicates the type of a network abstraction layer (NAL) unit of the slice data to be used for the update, is changed from 5 indicating that the slice data is slice data of the IDR picture to 1 indicating that the slice data is slice data of a picture other than the IDR picture. In addition, idr_pic_id, which is an identifier of the IDR picture and included in the slice header of the slice data to be used for the update, is deleted.
When nal_unit_type of the slice data to be used for the update is 5 and nal_ref_idc is not 0, no_output_of_prior_pics_flag and long_term_reference_flag included in the slice header are deleted and adaptive_ref_pic_marking_mode_flag is changed to 0.
In other words, when the slice data to be used for the update is slice data of a reference picture or the like, no_output_of_prior_pics_flag and long_term_reference_flag are deleted and adaptive_ref_pic_marking_mode_flag is changed to 0.
no_output_of_prior_pics_flag is a flag specifying how the pictures decoded prior to the IDR picture are treated after decoding of the IDR picture. long_term_reference_flag is a flag specifying whether the IDR picture is used as a long-term reference picture. adaptive_ref_pic_marking_mode_flag is a flag to be set to use memory management control operation (MMCO) and is set to 0 when MMCO is not used.
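Taken together, these fix-ups can be sketched as a transform over parsed header fields. The dictionary layout below is our own simplification of the AVC syntax elements just named (delta_pic_order_cnt[0] and [1] are flattened to suffixed keys); it is not a bitstream parser.

```python
POC_FIELDS = ("frame_num", "pic_order_cnt_lsb", "delta_pic_order_cnt_bottom",
              "delta_pic_order_cnt_0", "delta_pic_order_cnt_1")

def fix_insertion_headers(insertion, next_frame):
    """Copy frame-numbering/POC fields from the immediately following frame,
    and demote IDR slice data used for the update to an ordinary slice."""
    for field in POC_FIELDS:
        insertion["slice_header"][field] = next_frame["slice_header"][field]
    if insertion.get("nal_unit_type") == 5:          # slice data of an IDR picture
        insertion["nal_unit_type"] = 1               # now a non-IDR picture
        insertion["slice_header"].pop("idr_pic_id", None)
        if insertion.get("nal_ref_idc", 0) != 0:     # slice of a reference picture
            insertion["slice_header"].pop("no_output_of_prior_pics_flag", None)
            insertion["slice_header"].pop("long_term_reference_flag", None)
            insertion["slice_header"]["adaptive_ref_pic_marking_mode_flag"] = 0
    return insertion

ins = {"nal_unit_type": 5, "nal_ref_idc": 1,
       "slice_header": {**{f: 0 for f in POC_FIELDS}, "idr_pic_id": 7,
                        "no_output_of_prior_pics_flag": 0,
                        "long_term_reference_flag": 0}}
nxt = {"slice_header": {f: 3 for f in POC_FIELDS}}
print(fix_insertion_headers(ins, nxt)["nal_unit_type"])  # -> 1
```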
[Description on Update of Frame-Based Encoded Stream of Multiple-Video Image]
FIGS. 11A, 11B, 11C illustrate how the combining unit 34 in FIG. 4 updates a frame-based encoded stream of a multiple-video image.
In the example shown in FIGS. 11A to 11C, the encoded stream of a multiple-video image of the 0th frame shown in FIG. 11A corresponds to the frame-based encoded stream of the multiple-video image in FIG. 5. As shown in FIG. 11A, the 2nd slice of the multiple-video image of the 1st frame refers to the 0th slice of the 0th frame, which is one frame before the 1st frame.
When a view area is moved downwardly by one slice between the 0th and 1st frames in such a multiple-video image, an insertion stream is generated as described with reference to FIG. 9 and is inserted as an encoded stream of the 0′th frame as shown in FIG. 11B.
Then, an encoded stream of a multiple-video image of the 1st frame is generated so as to contain slices that correspond to the 2nd to 15th slices of the multiple-video image of the 0th frame in an upper part of the multiple-video image of the 1st frame.
Specifically, an encoded stream of a multiple-video image composed of the 1st to 3rd slices of the 1st and 2nd videos and the 0th to 3rd slices of the 3rd and 4th videos is generated as the encoded stream of the multiple-video image of the 1st frame.
However, the 0th slice of the multiple-video image of the 1st frame (corresponding to the 2nd slice of the multiple-video image of the 0th frame) refers to a slice of the 0′th frame that is positioned one slice above the 0th slice, that is, outside the screen. Consequently, proper decoding is not performed.
For proper decoding, the combining unit 34 replaces all the macroblocks, which are in the slices of the 1st video in the multiple-video image of the 1st frame and refer to outside the screen, with skip macroblocks with a motion vector of 0 as shown in FIG. 11C. This properly decodes the 1st video in the multiple-video image of the 1st frame and makes the 1st video in the multiple-video image of the 1st frame the same as the 1st video in the multiple-video image of the 0′th frame. Consequently, display of the 1st video is stopped.
In the example of FIGS. 11A to 11C, the 2nd video also contains a slice that refers to a slice outside the screen as with the case of the 1st video, and therefore display of the 2nd video is also stopped.
In addition, if an area other than the 1st to 4th videos in the multiple-video image of the 1st frame is not an area containing all slices of videos to be displayed, the combining unit 34 replaces all the macroblocks of the area with skip macroblocks with a motion vector of 0 as shown in FIG. 11C. As a result, display of the area other than the 1st to 4th videos in the multiple-video image of the 1st frame is stopped.
[Description on Processing by Distribution Server]
FIG. 12 is a flowchart illustrating how the distribution server 11 in FIG. 4 performs generation processing.
In step S11 in FIG. 12, the combining unit 34 selects encoded streams equivalent to one frame of each of a plurality of videos to be displayed from the encoded streams of the plurality of videos input from outside sources and combines the selected streams to generate an encoded stream of the 0th frame of the multiple-video image. The combining unit 34 supplies the encoded stream of the 0th frame of the multiple-video image to the transmission unit 36 via the insertion unit 35.
In step S12, the transmission unit 36 transmits the encoded stream of the 0th frame of the multiple-video image, which is supplied from the insertion unit 35, as a combined stream to the receiving device 13 via the network 12 shown in FIG. 3.
In step S13, the receiving unit 31 receives movement information transmitted from the receiving device 13 via the network 12 and supplies the movement information to the read unit 33, the combining unit 34, and the insertion unit 35.
In step S14, the combining unit 34 selects encoded streams equivalent to one frame of each of a plurality of videos to be displayed from encoded streams of a plurality of videos input from outside sources on the basis of the history of the movement information and combines the selected streams to generate a frame-based encoded stream of a multiple-video image. The combining unit 34 supplies the encoded stream to the insertion unit 35.
In step S15, the read unit 33 determines whether the amount of movement, which is indicated by the movement information supplied from the receiving unit 31, is 0. If it is determined that the amount of movement is 0 in step S15, the insertion unit 35 supplies the frame-based encoded stream of the multiple-video image supplied from the combining unit 34 to the transmission unit 36.
In step S16, the transmission unit 36 transmits the frame-based encoded stream of the multiple-video image supplied from the insertion unit 35 as a combined stream to the receiving device 13 via the network 12, and the processing goes to step S25.
On the other hand, if it is determined that the amount of movement is not 0 in step S15, the combining unit 34 performs, in step S17, multiple-video image update processing to update the frame-based encoded stream of the multiple-video image generated in step S14. The multiple-video image update processing is described in detail later with reference to FIG. 13.
In step S18, the read unit 33 reads out an insertion stream associated with the movement information from the storage unit 32 on the basis of the movement information supplied from the receiving unit 31 and supplies the insertion stream to the insertion unit 35.
In step S19, the insertion unit 35 updates the slice header of the insertion stream with the use of the slice header of the frame-based encoded stream of the multiple-video image supplied from the combining unit 34. Specifically, the insertion unit 35 makes frame_num, pic_order_cnt_lsb, delta_pic_order_cnt_bottom, delta_pic_order_cnt[0], and delta_pic_order_cnt[1], which are contained in the slice header of the insertion stream, the same as those in the slice header of the encoded stream supplied from the combining unit 34.
In step S20, the insertion unit 35 determines whether the movement information supplied from the receiving unit 31 indicates that the movement direction is a vertical direction and the amount of movement is n slices.
If it is determined that the movement direction is a vertical direction and the amount of movement is n slices in step S20, the processing goes to step S21. In step S21, the insertion unit 35 updates the slice data of the n slices on the movement-direction side of the insertion video image with intra-coded slice data of the n slices on the opposite side of the videos that are to be displayed at those positions.
The slice data used for the update is selected from the slice data of the encoded streams of the plurality of videos input from outside sources. If the slice data used for the update is slice data of an IDR picture, the insertion unit 35 changes nal_unit_type and idr_pic_id of the insertion stream. If nal_ref_idc is not 0, the insertion unit 35 also changes no_output_of_prior_pics_flag, long_term_reference_flag, and adaptive_ref_pic_marking_mode_flag. Subsequent to the processing in step S21, the processing goes to step S22.
On the other hand, if it is determined that the movement direction is not a vertical direction or the amount of movement is not n slices in step S20, the processing skips step S21 and goes to step S22.
In step S22, the insertion unit 35 updates the slice header of the encoded stream of the multiple-video image supplied from the combining unit 34. Specifically, the insertion unit 35 increases the values of frame_num, pic_order_cnt_lsb, delta_pic_order_cnt_bottom, delta_pic_order_cnt[0], and delta_pic_order_cnt[1], which are contained in the slice header of the encoded stream of the multiple-video image, by the number of frames of the insertion stream inserted between the IDR picture and the picture of the multiple-video image.
In step S23, the insertion unit 35 inserts the insertion stream before the frame-based encoded stream of the multiple-video image with the slice header updated in step S22. Thus, the insertion stream is inserted in the frame-based encoded stream of the multiple-video image. In addition, the insertion unit 35 supplies the frame-based encoded stream of the multiple-video image with the insertion stream inserted therein to the transmission unit 36.
In step S24, the transmission unit 36 transmits the frame-based encoded stream of the multiple-video image in which the insertion stream supplied from the insertion unit 35 is inserted, as a combined stream, to the receiving device 13 via the network 12, and the processing goes to step S25.
In step S25, the distribution server 11 determines whether to terminate the generation processing in response to a user's instruction or the like. If the distribution server 11 determines not to terminate the generation processing in step S25, the processing returns to step S13 and the processing from step S13 to step S25 is repeated until the generation processing is terminated.
If the distribution server 11 determines to terminate the generation processing in step S25, the processing is terminated.
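Condensed to control flow, the generation processing of FIG. 12 is a receive-combine-insert-transmit loop. The sketch below uses hypothetical server methods and callables; the flowchart's step numbers are noted in comments.

```python
def generation_loop(server, recv_movement, send, input_streams, should_stop):
    """Illustrative control flow of FIG. 12 (all callables are assumptions)."""
    send(server.combine(input_streams))                      # steps S11-S12
    while not should_stop():                                 # step S25
        movement = recv_movement()                           # step S13
        frame = server.combine(input_streams)                # step S14
        if movement.dx == 0 and movement.dy == 0:            # step S15
            send(frame)                                      # step S16
            continue
        server.update_multiple_video_image(frame, movement)  # step S17 (FIG. 13)
        insertion = server.read_insertion_stream(movement)   # step S18
        server.update_insertion_headers(insertion, frame)    # steps S19-S21
        server.update_frame_headers(frame, insertion)        # step S22
        send(server.insert_before(insertion, frame))         # steps S23-S24
```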
FIG. 13 is a flowchart that details multiple-video image update processing in step S17 in FIG. 12.
In step S41 in FIG. 13, the combining unit 34 selects a video that has not been processed yet from videos making up a multiple-video image as a target video to be processed. In step S42, the combining unit 34 determines whether all slices of the target video are present in the multiple-video image.
If it is determined in step S42 that all the slices of the target video are present in the multiple-video image, it is then determined in step S43 whether the slices to be referred to by the slices of the target video are present outside the insertion video image.
If it is determined in step S43 that the slices to be referred to by the slices of the target video are present outside the insertion video image, the processing goes to step S44.
On the other hand, if it is determined in step S42 that not all the slices of the target video are present in the multiple-video image, the processing also goes to step S44.
In step S44, the combining unit 34 changes all macroblocks of the target video in the frame-based encoded stream of the multiple-video image generated in step S14 of FIG. 12 to skip macroblocks with a motion vector of 0. Then, the processing goes to step S45.
On the other hand, if it is determined that the slices to be referred to by the slices of the target video are not present outside the insertion video image in step S43, the processing skips step S44 and goes to step S45.
In step S45, the combining unit 34 determines whether all videos making up the multiple-video image have been selected as target videos to be processed. If it is determined that not all the videos making up the multiple-video image have been selected in step S45, the processing returns to step S41 and repeats step S41 to step S45 until all the videos are selected as target videos to be processed.
If it is determined that all the videos making up the multiple-video image have been selected as target videos to be processed in step S45, the processing returns from step S17 and proceeds to step S18 in FIG. 12.
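The per-video test of FIG. 13 can be sketched directly. The predicates below are assumptions about how slice bookkeeping is exposed; the zero-motion skip replacement mirrors steps S42 to S44.

```python
def update_multiple_video_image(videos):
    """videos: iterable of records with 'all_slices_present' and
    'refs_outside_insertion_image' booleans plus a 'macroblocks' list.
    Stop display of any video that cannot be decoded properly (FIG. 13)."""
    for video in videos:                                    # steps S41, S45
        if (not video["all_slices_present"]                 # step S42
                or video["refs_outside_insertion_image"]):  # step S43
            for mb in video["macroblocks"]:                 # step S44
                mb["type"], mb["mv"] = "skip", (0, 0)       # zero-MV skip MB
    return videos
```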
As described above, when a view area of a multiple-video image is moved, the distribution server 11 inserts an insertion stream in an encoded stream of the multiple-video image, thereby eliminating the necessity to change the motion vector of the encoded stream of the multiple-video image. Accordingly, the distribution server 11 can generate a combined stream without recoding when the view area of the multiple-video image is moved.
Thus, the processing load caused by movement of the view area of the multiple-video image is reduced. As a result, even if the distribution server 11 has a low processing capacity, the distribution server 11 can generate the combined stream that is supposed to appear after the view area of the multiple-video image has moved. In addition, quality degradation of the multiple-video image caused by recoding can be prevented.
[Description on Computer to which the Present Disclosure is Applied]
The aforementioned series of processes performed by the distribution server 11 can be implemented not only by hardware but also by software. When the series of processes is implemented by software, the software programs are installed in a computer. The computer used herein may be a computer incorporated in dedicated hardware or, for example, a general-purpose personal computer that can perform various functions by installing various programs thereon.
FIG. 14 is a block diagram showing an exemplary hardware configuration of a computer that executes the aforementioned series of processes performed by the distribution server 11 with programs.
In the computer, a central processing unit (CPU) 201, a read only memory (ROM) 202, and a random access memory (RAM) 203 are interconnected by a bus 204.
The bus 204 is further connected with an input-output interface 205. The input-output interface 205 is connected with an input section 206, an output section 207, a storage section 208, a communication section 209, and a drive 210.
The input section 206 includes a keyboard, a mouse, a microphone, and the like. The output section 207 includes a display, a speaker, and the like. The storage section 208 includes a hard disk, a nonvolatile memory, and the like. The communication section 209 includes a network interface and the like. The drive 210 drives a removable medium 211, such as a magnetic disk, an optical disc, a magneto-optical disc, or a semiconductor memory.
In the computer configured as described above, for example, the CPU 201 loads a program stored in the storage section 208 into the RAM 203 via the input-output interface 205 and the bus 204, and then executes the program. Thus, the series of processes described above is performed.
The program to be executed by the computer (CPU 201) can be recorded on a removable medium 211, such as a packaged medium, and provided in that form. The program can also be provided via a wired or wireless transmission medium, such as a local area network, the Internet, or digital satellite broadcasting.
The program can be installed into the storage section 208 of the computer from the removable medium 211 loaded into the drive 210 through the input-output interface 205. The program can also be installed into the storage section 208 from the communication section 209, which receives the program through a wired or wireless transmission medium. Alternatively, the program can be preinstalled in the ROM 202 or the storage section 208.
The programs executed by the computer may be programs that are processed in time series in accordance with the sequence described in this specification. Alternatively, the programs may be programs to be executed in parallel or at necessary timing, such as at the time of being invoked, or the like.
The embodiments of the present disclosure are not limited to the foregoing embodiments, and various changes can be made without departing from the spirit of the present disclosure.
For example, in the present disclosure, it is possible to employ a cloud-computing configuration in which one function is shared and processed in cooperation by a plurality of devices through a network.
It is also possible to execute each step in the foregoing flowchart by a single apparatus or to share the steps among a plurality of apparatuses.
Furthermore, a single step may include a plurality of processes that can be executed by a single apparatus or can be shared by a plurality of apparatuses.
All macroblocks of the insertion stream can be changed into skip macroblocks whose motion vector indicates the movement information, irrespective of the movement information of the view area. In addition, the coding method of the encoded streams of the videos may be a high efficiency video coding (HEVC) method. In this case, the unit of coding is a coding unit (CU).
The present disclosure can be configured as follows.
[1] A video processing apparatus includes a combining unit that combines encoded streams of a plurality of videos to generate an encoded stream of a multiple-video image composed of the videos, each encoded stream of each video having coding units in each horizontal line that are encoded as a slice, and an insertion unit that inserts an insertion stream into the encoded stream of the multiple-video image generated by the combining unit when a view area of the multiple-video image is moved, the insertion stream being an encoded stream in which all the coding units in the multiple-video image are replaced with skip macroblocks with a motion vector indicating a direction and an amount of movement of the view area.
[2] The video processing apparatus according to [1], wherein one or more pixels in surrounding areas in the videos have a predetermined pixel value.
[3] The video processing apparatus according to [1] or [2], wherein the insertion unit replaces a slice that is in the insertion stream and has no reference slice in the multiple-video image with an intra-coded slice of a video to be displayed at a position at which the slice is positioned.
[4] The video processing apparatus according to [3], wherein when the direction of movement of the view area is a vertical direction and the amount of movement of the view area is an integral multiple of a slice, the insertion unit replaces a slice that is in the insertion stream and has no reference slice in the multiple-video image with an intra-coded slice of a video to be displayed at a position at which the slice is positioned.
[5] The video processing apparatus according to any of [1] to [4], wherein the insertion unit generates a slice header of the insertion stream on the basis of a slice header of the encoded stream, which is placed immediately after the insertion stream, of the multiple-video image.
[6] The video processing apparatus according to any of [1] to [5], wherein when the reference slice of a slice of the video in a multiple-video image associated with the encoded stream of the multiple-video image is present outside a multiple-video image associated with the insertion stream, the combining unit replaces all the coding units in the video with skip macroblocks with a motion vector of 0.
[7] The video processing apparatus according to any of [1] to [6], wherein when all the slices of the video are not present in a multiple-video image associated with the encoded stream of the multiple-video image, the combining unit replaces all the coding units of the video with skip macroblocks with a motion vector of 0.
[8] A video processing method performed by a video processing apparatus, includes combining encoded streams of a plurality of videos to generate an encoded stream of a multiple-video image composed of the videos, each encoded stream of each video having coding units in each horizontal line that are encoded as a slice, and inserting an insertion stream in the encoded stream of the multiple-video image generated by the combining process when a view area of the multiple-video image is moved, the insertion stream being an encoded stream in which all the coding units in the multiple-video image are replaced with skip macroblocks with a motion vector indicating a direction and an amount of movement of the view area.

Claims (8)

What is claimed is:
1. A video processing apparatus comprising:
a combining unit that combines encoded streams of a plurality of videos to generate an encoded stream of a multiple-video image composed of the videos, each encoded stream of each video having coding units in each horizontal line that are encoded as a slice; and
an insertion unit that inserts an insertion stream into the encoded stream of the multiple-video image generated by the combining unit when a view area of the multiple-video image is moved, the insertion stream being an encoded stream in which all the coding units in the multiple-video image are replaced with skip macroblocks with a motion vector indicating a direction and an amount of movement of the view area.
2. The video processing apparatus according to claim 1, wherein
one or more pixels in surrounding areas in the videos have a predetermined pixel value.
3. The video processing apparatus according to claim 1, wherein
the insertion unit replaces a slice that is in the insertion stream and has no reference slice in the multiple-video image with an intra-coded slice of a video to be displayed at a position at which the slice is positioned.
4. The video processing apparatus according to claim 3, wherein
when the direction of movement of the view area is a vertical direction and the amount of movement of the view area is an integral multiple of a slice, the insertion unit replaces a slice that is in the insertion stream and has no reference slice in the multiple-video image with an intra-coded slice of a video to be displayed at a position at which the slice is positioned.
5. The video processing apparatus according to claim 1, wherein
the insertion unit generates a slice header of the insertion stream on the basis of a slice header of the encoded stream, which is placed immediately after the insertion stream, of the multiple-video image.
6. The video processing apparatus according to claim 1, wherein
when the reference slice of a slice of the video in a multiple-video image associated with the encoded stream of the multiple-video image is present outside a multiple-video image associated with the insertion stream, the combining unit replaces all the coding units in the video with skip macroblocks with a motion vector of 0.
7. The video processing apparatus according to claim 1, wherein
when all the slices of the video are not present in a multiple-video image associated with the encoded stream of the multiple-video image, the combining unit replaces all the coding units of the video with skip macroblocks with a motion vector of 0.
8. A video processing method performed by a video processing apparatus, comprising:
combining encoded streams of a plurality of videos to generate an encoded stream of a multiple-video image composed of the videos, each encoded stream of each video having coding units in each horizontal line that are encoded as a slice; and
inserting an insertion stream into the encoded stream of the multiple-video image generated by the combining process when a view area of the multiple-video image is moved, the insertion stream being an encoded stream in which all the coding units in the multiple-video image are replaced with skip macroblocks with a motion vector indicating a direction and an amount of movement of the view area.
US14/187,647 2013-03-08 2014-02-24 Video processing apparatus and video processing method Expired - Fee Related US8817881B1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2013046836A JP5812028B2 (en) 2013-03-08 2013-03-08 Video processing apparatus and video processing method
JP2013-046836 2013-03-08

Publications (2)

Publication Number Publication Date
US8817881B1 true US8817881B1 (en) 2014-08-26
US20140253806A1 US20140253806A1 (en) 2014-09-11

Family

ID=51469347

Family Applications (1)

Application Number Title Priority Date Filing Date
US14/187,647 Expired - Fee Related US8817881B1 (en) 2013-03-08 2014-02-24 Video processing apparatus and video processing method

Country Status (3)

Country Link
US (1) US8817881B1 (en)
JP (1) JP5812028B2 (en)
CN (1) CN104038776B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11336909B2 (en) * 2016-12-27 2022-05-17 Sony Corporation Image processing apparatus and method
CN114650371A (en) * 2020-12-17 2022-06-21 安讯士有限公司 Method and digital camera for forming combined image frames of a combined video stream

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6669617B2 (en) * 2016-09-12 2020-03-18 ルネサスエレクトロニクス株式会社 Video processing system
CN116405694A (en) * 2022-03-18 2023-07-07 杭州海康威视数字技术股份有限公司 Encoding and decoding method, device and equipment

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5414469A (en) * 1991-10-31 1995-05-09 International Business Machines Corporation Motion video compression system with multiresolution features
US5764277A (en) * 1995-11-08 1998-06-09 Bell Communications Research, Inc. Group-of-block based video signal combining for multipoint continuous presence video conferencing
US5812791A (en) * 1995-05-10 1998-09-22 Cagent Technologies, Inc. Multiple sequence MPEG decoder
US6141062A (en) * 1998-06-01 2000-10-31 Ati Technologies, Inc. Method and apparatus for combining video streams
US6147695A (en) * 1996-03-22 2000-11-14 Silicon Graphics, Inc. System and method for combining multiple video streams
JP2002064818A (en) 2000-08-21 2002-02-28 Sony Corp Data transmission system, apparatus and method for transmitting data, apparatus and method for processing data as well as recording medium
US7492387B2 (en) * 2002-08-05 2009-02-17 Chih-Lung Yang Implementation of MPCP MCU technology for the H.264 video standard

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2962348B2 (en) * 1996-02-08 1999-10-12 日本電気株式会社 Image code conversion method
US5867208A (en) * 1997-10-28 1999-02-02 Sun Microsystems, Inc. Encoding system and method for scrolling encoded MPEG stills in an interactive television application
JP2000341587A (en) * 1999-05-25 2000-12-08 Sony Corp Device and method for image processing
BRPI0107875B1 (en) * 2000-01-28 2015-09-08 Opentv Inc method, decoder, interactive and systems designed to combine multiple mpeg encoded video streams
US8473628B2 (en) * 2008-08-29 2013-06-25 Adobe Systems Incorporated Dynamically altering playlists

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5414469A (en) * 1991-10-31 1995-05-09 International Business Machines Corporation Motion video compression system with multiresolution features
US5812791A (en) * 1995-05-10 1998-09-22 Cagent Technologies, Inc. Multiple sequence MPEG decoder
US5764277A (en) * 1995-11-08 1998-06-09 Bell Communications Research, Inc. Group-of-block based video signal combining for multipoint continuous presence video conferencing
US6147695A (en) * 1996-03-22 2000-11-14 Silicon Graphics, Inc. System and method for combining multiple video streams
US6141062A (en) * 1998-06-01 2000-10-31 Ati Technologies, Inc. Method and apparatus for combining video streams
JP2002064818A (en) 2000-08-21 2002-02-28 Sony Corp Data transmission system, apparatus and method for transmitting data, apparatus and method for processing data as well as recording medium
US7031317B2 (en) 2000-08-21 2006-04-18 Sony Corporation Transmission apparatus and transmission method
US7492387B2 (en) * 2002-08-05 2009-02-17 Chih-Lung Yang Implementation of MPCP MCU technology for the H.264 video standard

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11336909B2 (en) * 2016-12-27 2022-05-17 Sony Corporation Image processing apparatus and method
CN114650371A (en) * 2020-12-17 2022-06-21 安讯士有限公司 Method and digital camera for forming combined image frames of a combined video stream
CN114650371B (en) * 2020-12-17 2024-01-05 安讯士有限公司 Method and digital camera for forming combined image frames of a combined video stream

Also Published As

Publication number Publication date
JP5812028B2 (en) 2015-11-11
US20140253806A1 (en) 2014-09-11
CN104038776B (en) 2017-09-08
CN104038776A (en) 2014-09-10
JP2014175857A (en) 2014-09-22

Similar Documents

Publication Publication Date Title
US11812080B2 (en) System and method for smooth transition of live and replay program guide displays
US10356459B2 (en) Information processing apparatus and method
EP2568709B1 (en) Image processing apparatus, image processing method, and image processing system
US9414065B2 (en) Dynamic image distribution system, dynamic image distribution method and dynamic image distribution program
JP6305279B2 (en) Video compression device and video playback device
US10264293B2 (en) Systems and methods for interleaving video streams on a client device
JP2013134762A (en) Image processing system, image providing server, information processor, and image processing method
US8817881B1 (en) Video processing apparatus and video processing method
CN109963176B (en) Video code stream processing method and device, network equipment and readable storage medium
KR20170047489A (en) Apparatus for Processing Images, Method for Processing Images, and Computer Readable Recording Medium
CN105379281B (en) Picture reference control for video decoding using a graphics processor
Uchihara et al. Fast H.264/AVC stream joiner for interactive free view-area multivision video
US20180199002A1 (en) Video processing apparatus and video processing method cooperating with television broadcasting system
EP4064704A1 (en) Personalization of a video sequence
US12028394B2 (en) Method and apparatus for providing cloud streaming service
KR102719787B1 (en) Ranking information for immersive media processing
US20130287100A1 (en) Mechanism for facilitating cost-efficient and low-latency encoding of video streams
JP4672561B2 (en) Image processing apparatus, receiving apparatus, broadcast system, image processing method, image processing program, and recording medium
WO2024126057A1 (en) Reference picture marking process based on temporal identifier
WO2024126058A1 (en) Reference picture lists signaling
WO2009140072A1 (en) Object-based video coding with intra-object or inter-object intra-frame prediction

Legal Events

Date Code Title Description
FEPP Fee payment procedure

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

AS Assignment

Owner name: SONY CORPORATION, JAPAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:YANO, KOJI;FUJIMOTO, YUJI;ENOKI, JUNICHIRO;REEL/FRAME:032330/0148

Effective date: 20140115

STCF Information on status: patent grant

Free format text: PATENTED CASE

CC Certificate of correction
MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551)

Year of fee payment: 4

FEPP Fee payment procedure

Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

LAPS Lapse for failure to pay maintenance fees

Free format text: PATENT EXPIRED FOR FAILURE TO PAY MAINTENANCE FEES (ORIGINAL EVENT CODE: EXP.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STCH Information on status: patent discontinuation

Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362

FP Lapsed due to failure to pay maintenance fee

Effective date: 20220826