WO2007013334A1 - Video Composition Device and Program
- Publication number: WO2007013334A1 (PCT/JP2006/314264)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- video
- display
- time
- sub
- displayed
- Prior art date
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/44—Receiver circuitry for the reception of television signals according to analogue transmission standards
- H04N5/445—Receiver circuitry for the reception of television signals according to analogue transmission standards for displaying additional information
- H04N5/45—Picture in picture, e.g. displaying simultaneously another television channel in a region of the screen
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G5/00—Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G5/00—Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
- G09G5/36—Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory
- G09G5/38—Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory with means for controlling the display position
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/4302—Content synchronisation processes, e.g. decoder synchronisation
- H04N21/4307—Synchronising the rendering of multiple content streams or additional data on devices, e.g. synchronisation of audio on a mobile phone with the video output on the TV screen
- H04N21/43072—Synchronising the rendering of multiple content streams or additional data on devices, e.g. synchronisation of audio on a mobile phone with the video output on the TV screen of multiple content streams on the same device
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/431—Generation of visual interfaces for content selection or interaction; Content or additional data rendering
- H04N21/4312—Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
- H04N21/4316—Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations for displaying supplemental content in a region of the screen, e.g. an advertisement in a separate window
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/83—Generation or processing of protective or descriptive data associated with content; Content structuring
- H04N21/84—Generation or processing of descriptive data, e.g. content descriptors
Definitions
- The present invention relates to a video composition device and a program that input a first video and a second video, and synthesize and display the second video on the first video.
- “Picture-in-picture” is a function that displays two videos simultaneously by overlaying a small sub-screen on the main screen (parent screen). It is used, for example, for “multi-angle display”, which shows in the sub-screen footage shot from an angle different from the main-screen video, and for “commentary display”, which shows supplementary information related to the main-screen video in commentary form (for example, a director's commentary video containing behind-the-scenes stories about a movie).
- Picture-in-picture is realized, for example, by decoding the two video streams on two different layers and then superimposing the decoded videos, as shown in FIG. 17. At this time, the display size and display position of the sub-screen video are adjusted before it is superimposed on the main screen.
- A rectangular picture, as shown in FIG. 17, or an arbitrarily shaped picture may be used as the sub-screen during picture-in-picture display.
- Such a picture-in-picture function and a method of realizing it are described in Patent Document 1, for example.
- Patent Document 1 JP 2005-123775 A
- In Patent Document 1, the display position of the sub-screen is determined in advance, and the sub-screen is displayed at that fixed position.
- The sub-screen video is displayed on top of the main-screen video. Therefore, while the sub-screen is displayed, part of the main-screen video may be hidden by it. It is thus desirable that the display position of the sub-screen can be switched according to changes in the content of the main-screen video.
- Also, playback of the sub-screen video can be freely started, paused, and restarted at any time within a certain period.
- The sub-screen is displayed only while its video is being played back.
- This applies, for example, when the sub-screen video is a supplementary video for the main-screen video.
- The present invention has been made in view of the above problem. For the display position of the sub-screen during picture-in-picture playback, it provides display data indicating the displayable time and, for each time, the display area or displayable area. Its object is thereby to provide a video composition device and program that can give an appropriate display position for the sub-screen even when the playback and stop times of the sub-screen video are changed freely.
- The first invention is a video composition device that inputs a first video and a second video, and synthesizes the second video with the first video and outputs the result, the device comprising: designating means for receiving display data including time information indicating the times of the first video at which the second video can be displayed, and display area information, given in correspondence with those times, indicating the display area of the second video, and for designating, based on the display data, the display position in the first video at which the second video is displayed; and synthesizing means for synthesizing the second video at the display position in the first video designated by the designating means.
- The second invention is the video composition device of the first invention, wherein the display data further includes second display area information indicating a display area of the second video, given in correspondence with the times of the second video, and the designating means, when displaying the second video, designates the display position according to the display area information and/or the second display area information included in the display data.
- The third invention is the video composition device of the first invention, wherein the display data further includes displayable area information indicating a displayable area of the second video, given in correspondence with the times of the second video, and the designating means, when displaying the second video, designates the display position according to the display area information and/or the displayable area information included in the display data.
- The fourth invention is a video composition device that inputs a first video and a second video, synthesizes the second video with the first video, and outputs the result, the device comprising: designating means for receiving display data including time information indicating the times of the first video at which the second video can be displayed, and displayable area information, given in correspondence with those times, indicating a displayable area of the second video, and for designating, based on the display data, the display position in the first video when displaying the second video; and synthesizing means for superimposing and synthesizing the second video at the display position in the first video designated by the designating means.
- The fifth invention is the video composition device of the fourth invention, wherein the display data further includes display area information indicating a display area of the second video, given in correspondence with the times of the second video, and the designating means, when displaying the second video, designates the display position according to the displayable area information and/or the display area information included in the display data.
- The sixth invention is the video composition device of the fourth invention, wherein the display data further includes second displayable area information indicating a displayable area of the second video, given in correspondence with the times of the second video, and the designating means, when displaying the second video, designates the display position according to the displayable area information and/or the second displayable area information included in the display data.
- The seventh invention is a video composition device that inputs a first video and a second video, synthesizes the second video with the first video, and outputs the result, the device comprising: designating means for receiving display data including time information indicating the times of the second video (rather than those of the first video), and display area information, given in correspondence with the times of the second video, indicating the display area of the second video, and for designating, based on the display data, the display position in the first video when displaying the second video; and synthesizing means for superimposing and synthesizing the second video at the display position in the first video designated by the designating means.
- The eighth invention is a video composition device that inputs a first video and a second video, synthesizes the second video with the first video, and outputs the result. It receives display data including time information indicating the times of the second video (rather than those of the first video), and displayable area information, given in correspondence with the times of the second video, indicating the displayable area of the second video, and the display position in the first video is set so as to fall within the area indicated by the displayable area information.
- The ninth invention is the video composition device of any one of the first to eighth inventions, wherein the synthesized and output video is a picture-in-picture video, the first video being the video displayed on the main screen and the second video being the video displayed on the sub-screen.
- The tenth invention is a program that causes a computer to perform control for inputting a first video and a second video, synthesizing the second video with the first video, and outputting the result: the program causes the computer to receive display data including time information indicating the times of the first video at which the second video can be displayed, and display area information, given in correspondence with those times, indicating the display area of the second video, and to designate, based on the display data, the display position in the first video when displaying the second video.
- The eleventh invention is the video composition device of the first or seventh invention, wherein the display area information includes the upper-left vertex coordinates of a rectangular area in which the second video is displayed.
- The twelfth invention is the video composition device of the fourth or eighth invention, wherein the displayable area information includes the upper-left vertex coordinates of a rectangular area in which the second video can be displayed.
- Thus, the present invention provides display data indicating the displayable time and a display area or displayable area for the display position of the sub-screen during picture-in-picture playback.
- This display data is embedded in the video data of the sub-screen video or the main-screen video, or is stored in management data independent of the video data, and is handled together with the video data when the video is transmitted or distributed.
- During playback, the display data is read out and used to determine, at each point in time, the display position of the sub-screen corresponding to the playback time of the main-screen (or sub-screen) video.
- As a result, when the sub-screen video is synthesized and displayed on the main-screen video in picture-in-picture, it can be displayed and played back at an appropriate display position.
- The sub-screen video can be freely switched between display and non-display within the displayable time range, and even when it is switched, it is synthesized and displayed at an appropriate position each time. Picture-in-picture playback can therefore be performed as intended by the content provider.
- FIG. 1 is a functional block diagram showing a schematic configuration of a video display device according to first, second, and third embodiments of the present invention.
- FIG. 2 shows an example of display data handled by the video display device according to the first embodiment of the present invention.
- FIG. 3 is a diagram showing another example of display data handled by the video display device according to the first embodiment of the present invention.
- FIG. 4 is an explanatory view showing a variation of display data handled by the video display apparatus according to the first embodiment of the present invention.
- FIG. 5 is a diagram showing still another example of display data handled by the video display device according to the first embodiment of the present invention.
- FIG. 6 is a flowchart showing processing when video is displayed on the video display device according to the first, second, and third embodiments of the present invention.
- FIG. 7 is an explanatory diagram showing a first display state when video is displayed on the video display device according to the first embodiment of the present invention.
- FIG. 8 is an explanatory diagram showing a second display state when video is displayed on the video display device according to the first embodiment of the present invention.
- FIG. 9 is an explanatory diagram showing a third display state when video is displayed on the video display device according to the first embodiment of the present invention.
- FIG. 10 is an explanatory diagram showing a fourth display state when video is displayed on the video display device according to the first embodiment of the present invention.
- FIG. 11 is a diagram showing an example of display data handled by the video display device according to the second embodiment of the present invention.
- FIG. 13 is an explanatory diagram showing a second display state when video is displayed on the video display device according to the second embodiment of the present invention.
- FIG. 14 is an explanatory diagram showing a third display state when video is displayed on the video display device according to the second embodiment of the present invention.
- FIG. 15 is an explanatory diagram showing a fourth display state when video is displayed on the video display device according to the second embodiment of the present invention.
- FIG. 17 is an explanatory diagram showing a conventional method of realizing the picture-in-picture function.
- A video display device, method, and display data according to a first embodiment of the present invention will be described with reference to FIGS. 1 to 10.
- FIG. 1 is a functional block diagram showing a schematic configuration of a video display apparatus 1 according to the first embodiment of the present invention.
- In the video display device 1, two pieces of video data (encoded video streams) are input, decoded, synthesized, and displayed in a so-called picture-in-picture state.
- Hereinafter, the video displayed on the main screen is referred to as the “main video”, and the video displayed on the sub-screen as the “sub video”.
- The video display device 1 comprises: a decoding unit 101 and a buffering unit 102 that decode and output the video data of the main video; a decoding unit 103 and a buffering unit 104 that decode and output the video data of the sub video; a synthesis unit 105 that synthesizes the sub video with the main video and contains an adjustment unit 106 as its internal component; a display unit 107 that displays the output video; an input unit 108 that accepts instructions to switch the sub video between display and non-display; a processing control unit 109 that adjusts the processing of the decoding unit 103 and/or the buffering unit 104 in response to the switching; and a position designation unit 110 that designates the display position of the sub video (sub-screen) from separately input display data for the sub video and from the time information at playback.
- Hereinafter, the display data used for designating the display position of the sub video (sub-screen) is referred to as “metadata” for the video data.
- Although the video display device 1 is shown here as including the decoding units 101 and 103, they are not essential; for example, if the input video data is not encoded, the video display device 1 need not include them.
- The video display device 1 in FIG. 1 includes only the functional blocks related to the processing of video data (data related to the video signal). Actual video data, however, contains more than the video signal: it also carries audio data and management data (information necessary for decoding the encoded data, such as the encoding method, and information necessary for playback, such as a playlist specifying how the video is segmented and connected), and an actual display device is configured with functional blocks for processing these as well.
- The configuration of FIG. 1 is thus implemented as part of the internal configuration of an actual video display device.
- The video data of the input main video is decoded by the decoding unit 101, and the decoded video is output after its timing is adjusted by the buffering unit 102. While the sub video is not displayed, the decoded video output from the buffering unit 102 passes through the synthesis unit 105 unchanged and is input to the display unit 107, so the main video is displayed as it is.
- Similarly, the input video data of the sub video is decoded by the decoding unit 103, and the decoded video is output after its timing is adjusted by the buffering unit 104.
- the decoded video of the sub video is input to the adjustment unit 106 in the synthesis unit 105.
- The adjustment unit 106 converts and adjusts the image size of the decoded sub video and its display position on the screen as preprocessing for synthesizing the sub video with the main video. The adjustment is performed so that the sub video (sub-screen) is synthesized at the display position in the main video (main screen) designated by the position designation unit 110 described later.
- The adjusted sub video is synthesized with the decoded main video and is output and displayed through the display unit 107. It is also possible to set a transparency at the time of synthesis so that the main video shows through the synthesized sub video.
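As a rough sketch (not taken from the patent itself), the transparency-based synthesis described above can be modeled as per-pixel alpha blending. `blend_pixel` and `composite` are hypothetical helper names, and frames are simplified to nested lists of RGB tuples:

```python
def blend_pixel(main, sub, alpha):
    """Blend one RGB pixel of the sub video over the main video.

    alpha = 1.0 shows the sub video opaquely; alpha = 0.0 leaves the
    main video untouched (illustrative helper, not from the patent).
    """
    return tuple(round(alpha * s + (1.0 - alpha) * m)
                 for m, s in zip(main, sub))

def composite(main_frame, sub_frame, top_left, alpha=1.0):
    """Overlay sub_frame onto main_frame at top_left with the given alpha.

    Frames are lists of rows of (R, G, B) tuples. A copy of the main
    frame is returned so the decoded main video is left intact.
    """
    x0, y0 = top_left
    out = [row[:] for row in main_frame]
    for dy, row in enumerate(sub_frame):
        for dx, pix in enumerate(row):
            out[y0 + dy][x0 + dx] = blend_pixel(out[y0 + dy][x0 + dx],
                                                pix, alpha)
    return out
```

With `alpha=1.0` this reproduces plain sub-screen overlay; intermediate values let the main video show through, as the passage above allows.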
- The video display device 1 includes the input unit 108, which accepts instructions from the user to switch the sub video (sub-screen) between display and non-display. Based on the input switching instruction, the input unit 108 generates display state information indicating whether the sub video (sub-screen) is currently displayed or hidden, and conveys it to the processing control unit 109 and the position designation unit 110.
- The processing control unit 109 receives the display state information from the input unit 108 and controls the processing of the decoding unit 103 and/or the buffering unit 104 based on it. For example, when the display state information changes to “non-display”, the decoding process in the decoding unit 103 and/or the output from the buffering unit 104 are stopped, and when it changes to “display”, these processes are resumed, so that the sub video is paused while it is hidden.
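The pause-while-hidden behavior can be sketched as a tiny state machine. `ProcessingControl` and its method names are illustrative stand-ins for the processing control unit 109, not names from the patent:

```python
class ProcessingControl:
    """Sketch of the processing control unit (109): while the sub video
    is hidden, decoding is suspended so the sub video is effectively
    paused; showing it again resumes decoding where it left off."""

    def __init__(self):
        self.decoding = True      # sub video starts in the display state
        self.decoded_frames = 0   # stands in for decoder progress

    def set_display_state(self, visible):
        # Display state information from the input unit (108).
        self.decoding = visible

    def tick(self):
        # One decode step; skipped entirely while hidden (paused).
        if self.decoding:
            self.decoded_frames += 1
```

Because hidden ticks decode nothing, playback resumes from the frame at which the sub video was hidden, matching the pause behavior described above.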
- The position designation unit 110 receives the display state information from the input unit 108 and, when the sub video (sub-screen) is in the display state, determines the display position of the sub video in the main video (main screen) using the metadata described later, and notifies the adjustment unit 106.
- The main video changes with time, so the position at which the sub video should be displayed also changes as the main video changes. Therefore, when the sub video has been hidden and temporarily paused by the processing control unit 109 and the decoding unit 103 and/or buffering unit 104 under its control, and playback is resumed and the sub video is displayed again after some time has elapsed, it is not necessarily desirable to display it in the same position as before it was hidden.
- The sub video display data, that is, the metadata provided by the present invention, indicates for each time position of the main video at which position of the main video the sub video may be displayed. The position designation unit 110 uses the metadata input together with the video data of the sub video and outputs the display position of the sub video (sub-screen) corresponding to the time position indicated by the time information at playback.
- FIG. 2 and FIG. 3 show specific examples of the metadata relating to the sub video display provided by the present invention.
- The video stream (sub video stream) included in the video data is composed of a header part and a video data part. The header part contains various kinds of information about the stream, and the metadata is included in the header part.
- FIGS. 2 and 3 each show the specific metadata structure (FIGS. 2(a) and 3(a)) and the display area or displayable area indicated by the metadata, together with a diagram modeling the display area or displayable area one-dimensionally so that its temporal changes can be easily understood (FIGS. 2(c) and 3(c)). That is, the vertical axis in FIGS. 2(c) and 3(c) represents the two-dimensional spatial position on the screen, and the vertical width of the band shown in the figures corresponds to the size of the display area or displayable area.
- Fig. 2 (a) shows an example of the metadata structure.
- The metadata is composed of the total playback time 200 of the sub video, displayable time information 201 indicating in which time range of the main video the sub video can be displayed, based on the playback time of the main video (with the playback start position as time “00:00:00”), and display area information 202 indicating the position in the main video at which the sub video is displayed at each time within the displayable time range.
- The display area information 202 shown in FIG. 2 indicates the upper-left vertex position of the sub-screen, assuming that the display size of the sub video (sub-screen) is a fixed size given in advance.
- For example, from time “00:00:10”, the sub video is displayed with (x1, y1) as its upper-left vertex position.
- The vertex coordinates are not limited to the upper-left coordinates; they may of course be used as, for example, the center coordinates of the sub video.
- FIG. 2(b) is a two-dimensional representation of the display area in which the sub video is displayed at each time of the main video. For example, between times “00:00:15” and “00:00:30”, the sub video is synthesized and displayed in the area of the main video having (x2, y2) as its upper-left vertex coordinates.
- FIG. 2(c) shows a one-dimensional representation of the display area in which the sub video is displayed; the vertical direction shows the spatial position (region) in the main video, and the horizontal direction shows time (the time position of the main video). For example, it illustrates that at time “00:00:15” the upper-left vertex coordinates of the sub video move from (x1, y1) to (x2, y2).
- That is, the display area of the sub video with respect to the main video is represented in FIG. 2(c) as a band-shaped area whose position changes at times “00:00:15” and “00:00:30”.
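The lookup the position designation unit performs against display data like that of FIG. 2 can be sketched as a table of time intervals mapped to upper-left vertices. The interval bounds and coordinates below are placeholder values standing in for the figure's (x1, y1) and (x2, y2):

```python
# Displayable time ranges and per-interval upper-left vertex, modeled on
# FIG. 2: times are seconds from the main video's start, and the
# coordinates are made-up stand-ins for (x1, y1) and (x2, y2).
DISPLAY_AREA_INFO = [
    (10, 15, (16, 120)),   # "00:00:10"-"00:00:15" -> (x1, y1)
    (15, 30, (320, 40)),   # "00:00:15"-"00:00:30" -> (x2, y2)
]

def position_at(t):
    """Return the sub-screen's upper-left vertex for main-video time t,
    or None when t is outside the displayable time range."""
    for start, end, top_left in DISPLAY_AREA_INFO:
        if start <= t < end:
            return top_left
    return None
```

A `None` result corresponds to a time at which the sub video may not be displayed at all; the device would keep the sub-screen hidden there.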
- FIG. 3 (a) also shows an example of the metadata structure.
- The metadata shown in FIG. 3(a) is composed of the total playback time 300 of the sub video, displayable time information 301 indicating, based on the playback time of the main video, the time range in which the sub video can be displayed, and displayable area information 302 indicating the area in the main video in which the sub video can be displayed (is allowed to be displayed) at each time within the displayable time range.
- The displayable area information 302 shown in FIG. 3 represents the area in which the sub-screen can be displayed by its upper-left and lower-right vertex coordinates. For example, referring to FIG. 3(b), the sub video (sub-screen) can be displayed anywhere within the indicated area.
- If the display size of the sub video (sub-screen) is a fixed size given in advance and the displayable area specified by the displayable area information 302 in FIG. 3 is larger than that display size, the sub video can be displayed at any position within the displayable area at display time.
- the displayed sub video (sub-screen) may be moved or enlarged within the displayable area.
- That is, the area in which the sub video can be displayed with respect to the main video is represented as a band-shaped area whose position and width change at times “00:00:00” and “00:00:30”.
- The description so far has assumed that the size of the sub video (sub-screen) is fixed and that the metadata represents the display (or displayable) area under that assumption. However, the display area information may also represent the display size of the sub video itself: the display area is represented by upper-left and lower-right vertex coordinates as in FIG. 3, and when the sub video is displayed, it is enlarged or reduced to the size of the display area.
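Under the fixed-size assumption, keeping a freely chosen sub-screen position inside a displayable area given by two vertex coordinates amounts to a clamp. `clamp_into_area` is a hypothetical helper illustrating this, not a function from the patent:

```python
def clamp_into_area(pos, sub_size, area):
    """Clamp a requested sub-screen upper-left position `pos` so that a
    sub video of `sub_size` = (width, height) lies entirely inside the
    displayable area, given as ((left, top), (right, bottom)) vertices."""
    (x, y), (w, h) = pos, sub_size
    (left, top), (right, bottom) = area
    # Keep the whole sub-screen rectangle inside the displayable area.
    x = max(left, min(x, right - w))
    y = max(top, min(y, bottom - h))
    return (x, y)
```

A position already inside the area passes through unchanged; one outside it is pulled to the nearest valid position, which also covers moving the sub-screen within the area as the passage above permits.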
- The table of FIG. 4 shows variations of the metadata given in the present invention, organized by how the time range specifying the display (or displayable) area is set and by the description format of the display (or displayable) area. FIG. 4 covers only the case where the display (or displayable) area is a rectangular area.
- There are two methods for setting the time range: specifying arbitrary sections, and giving a display (or displayable) area for each fixed unit section. When specifying arbitrary sections, if there is no time gap or overlap between consecutive sections, either the start or the end time of each section may be omitted. Also, although “hours:minutes:seconds” is used here as the common time expression, the format is not limited to this; the whole time may instead be given in “seconds” or “milliseconds”.
- When using fixed unit sections, one display (or displayable) area is given per unit section, for example every second, every 250 milliseconds, or every minute. The length of the unit section is set appropriately according to the nature of the stored video.
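Converting the “hours:minutes:seconds” expression used above into a plain seconds count, the alternative format the text mentions, is straightforward; this small helper is an illustration, not part of the patent:

```python
def to_seconds(ts):
    """Parse an "hours:minutes:seconds" timestamp into total seconds."""
    h, m, s = (int(part) for part in ts.split(":"))
    return h * 3600 + m * 60 + s
```

Either representation identifies the same time positions; a device could normalize all metadata timestamps to seconds once on load and index unit sections by integer division.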
- the display (possible) area there are methods of specifying only one coordinate, two coordinates, coordinates and size.
- the one that can be specified with one coordinate is the case where the display size of the sub video is determined in advance.
- Otherwise, the area is represented by two coordinates, or by a coordinate and a size. When the display size of the sub video is smaller than the specified area, the specified area is a so-called displayable area, and the sub video is displayed within it. When the sub video is to fill the specified display area, it is resized (enlarged or reduced) to fit.
- As the displayable area, it is also possible to specify a band-shaped area extending vertically or horizontally across the main video (for example, the upper half or the lower half of the screen).
- Figure 4 shows an example in which the display (possible) area is a rectangular area.
- However, the display (possible) area may also be given as a non-rectangular shape such as a polygon or an ellipse, or as an arbitrary shape.
- An arbitrarily shaped area can be given by, for example, a mask image representing it. A description of the specific description format for arbitrary shapes is omitted here.
- In the examples so far, the position of the display (possible) area changes discretely at certain times. It is also possible to describe an area that changes continuously.
- For example, as shown in FIG. 5(a), the display (possible) area information 502 included in the metadata gives, for each time section, the position of the display (possible) area at the start time position of the section; the position of the display (possible) area at the end time position of the section is shown in FIG. 5(b).
- The sub-screen is first displayed in the display area with (x1, y1) as its upper-left coordinates. The display area is then moved continuously so that at time “00:00:20” the sub-screen is displayed in the display area with (x2, y2) as its upper-left coordinates, and at time “00:00:40” in the display area with (x3, y3) as its upper-left coordinates.
- Fig. 5 (c) schematically shows the display area or displayable area in this case as one dimension.
- The method for specifying a continuously changing area is not limited to the one described here.
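The continuous movement of FIG. 5, where a position is given at the start and end of a time section and the area moves between them, amounts to interpolating the area's corner over time. A minimal sketch, assuming linear interpolation (the patent describes continuous movement but does not mandate a specific interpolation):

```python
def interpolated_position(t, t0, p0, t1, p1):
    """Linearly interpolate the upper-left corner of the display area between
    position p0, given at section start time t0, and position p1, given at
    section end time t1 (times in seconds, positions as (x, y))."""
    if t <= t0:
        return p0
    if t >= t1:
        return p1
    f = (t - t0) / (t1 - t0)
    return (p0[0] + f * (p1[0] - p0[0]), p0[1] + f * (p1[1] - p0[1]))

# Midway between (x1, y1) = (0, 0) at 00:00:00 and (x2, y2) = (100, 40) at 00:00:20:
print(interpolated_position(10, 0, (0, 0), 20, (100, 40)))  # (50.0, 20.0)
```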
- Note that the area represented by the metadata may be designated not only as a display area or a displayable area, but also as a display-prohibited area (an area in which the sub video must not be displayed). That is, the present invention applies similarly to metadata designating a displayable time and a display-prohibited area.
- FIG. 6 is a flowchart showing the processing at the time of sub video display, including switching the sub video (sub-screen) between display and non-display.
- This flowchart mainly shows the operations of the position specifying unit 110, the processing control unit 109, and the synthesizing unit 105 in the configuration of the video display device 1 shown in FIG. 1. FIGS. 7 to 10 show examples of the operation results when the sub video is synthesized and displayed by the video display device 1 according to the flowchart of FIG. 6.
- black portions indicate the time when the sub video is displayed and the display position at that time.
- The playback and display process will be described below using the metadata shown in FIG. 2, in which the display area is the same as the display size of the sub video, as an example.
- Even when using metadata in which the display area size indicates a so-called displayable area larger than the display size of the sub video, the behavior does not change, except that the position specifying unit 110 appropriately determines the display position from within the displayable area and outputs it.
- When the position specifying unit 110 reads the metadata (step S1), it first checks the current playback time of the main video against the displayable time information (201 in FIG. 2) included in the metadata (steps S2 and S3). If the current time is before the start of the displayable time, the unit waits until the displayable time starts, and the sub video is not displayed (step S2; No).
- If the current playback time of the main video is within the displayable time (step S2; Yes → step S3; No), the position specifying unit 110 accepts an instruction to change the display/non-display state of the sub video from the input unit 108.
- When an instruction to display the sub video has been received and the sub video is in the display state (step S4; Yes), the sub video is decoded and a decoded image is output (step S5).
- Next, the position specifying unit 110 acquires time information on the current playback time position of the main video (step S6), and determines the display position of the sub video corresponding to the current playback time position using the metadata (step S7). The synthesizing unit 105 then synthesizes and displays the sub video at the designated display position in the main video (step S8). If the sub video data has not ended (step S9; No), the process returns to step S3 and continues.
- If the sub video is in the non-display state, the sub video output is stopped and the sub video playback itself is temporarily paused (step S10).
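The control flow of FIG. 6 (steps S1 to S10) can be sketched as a loop. This is a hypothetical sketch only; the `player` and `user` interfaces and all method names are assumptions standing in for the decoders, the synthesizing unit 105, and the input unit 108, not APIs defined by the patent.

```python
def play_with_sub_video(metadata, player, user):
    """Sketch of the FIG. 6 loop. Assumed interfaces:
    player.main_time()  -> current main video playback time (seconds),
    player.decode_sub() -> next decoded sub video frame, or None when ended,
    player.pause_sub()  -> stop sub video output / pause its playback,
    player.compose(f,p) -> synthesize frame f at position p (synthesizing unit),
    user.sub_visible()  -> current display/non-display state (input unit)."""
    start, end = metadata.displayable_time          # step S1: read metadata
    while True:
        t = player.main_time()
        if t < start:                               # step S2: before displayable time
            continue                                # wait; sub video not displayed
        if t >= end:                                # step S3: displayable time over
            break
        if user.sub_visible():                      # step S4: display state?
            frame = player.decode_sub()             # step S5: decode sub video
            if frame is None:                       # step S9: sub video data ended
                break
            pos = metadata.position_at(t)           # steps S6-S7: position for time t
            player.compose(frame, pos)              # step S8: synthesize and display
        else:
            player.pause_sub()                      # step S10: pause sub video output
```

A real implementation would block on a clock rather than busy-wait; the sketch only shows how the flowchart's branches relate.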
- FIGS. 7 to 10 are diagrams schematically showing the positional relationship between the main video and the sub video.
- In each figure, the spatial position of the main video is shown in the vertical direction and time in the horizontal direction. Output of the main video starts from time “00:00:00”. The figures represent the display state of the sub video when the metadata structure shown in FIG. 2(a) is used.
- FIG. 7 is a diagram showing a state up to time “00:00:13”.
- the sub video displayable time starts from time “00:00:10”.
- the sub video is decoded. (Step S5).
- Then, the sub video is synthesized with the main video, and its display starts at the display position corresponding to time “00:00:13” indicated in the metadata (the black portion in FIG. 7).
- FIG. 8 is a diagram showing a state up to time “00: 00: 20”.
- In the display area information 202 of the metadata, a change of the sub video display area at time “00:00:15” is described.
- Therefore, the display position of the sub video is changed according to the display area information 202 of the metadata (step S7). Then, at time “00:00:20”, an instruction to switch the sub video to the non-display state is input from the input unit 108.
- the position designation unit 110 outputs a signal for stopping the sub video output to the synthesis unit 105.
- the synthesis unit 105 stops outputting the sub video (Step S10).
- FIG. 9 is a diagram showing the state up to time “00:00:28”, when the sub video (sub-screen) is switched to the display state again. At this time, the sub video returns from the paused state to the playback state, and playback resumes from the position at which it was paused. The display position of the sub video (sub-screen) at that time is the display position corresponding to time “00:00:28” given by the metadata.
- FIG. 10 is a diagram showing the state up to time “00:00:36”, where the playback of the sub video with a total playback time of “15” seconds has finished.
- the display area of the sub video is changed at time “00:00:30” (step S7).
- When playback of the sub video data ends, the sub video output stops (step S9; Yes).
- As described above, by using metadata, the display position of the sub video in the main video can be appropriately specified in correspondence with the display time.
- In addition, the sub video can be switched freely between display and non-display within the displayable time range, and even when display and non-display are switched freely, the sub video can be prevented from being displayed at a position undesirable for the main video.
- The above-described metadata has been depicted as being input independently of the video data.
- However, the metadata may instead be provided as part of management data for managing the video data (data that contains information necessary for decoding the encoded data, such as the encoding method, and information necessary for video playback, such as playlists specifying how videos are segmented and connected), or stored in the video stream containing the video data of the sub video. In the latter case, the metadata must be separated from the video stream of the sub video before being input to the video display device 1.
- Although the metadata is stored here at the header position of the video stream, the storage position is not limited to this; it may be placed in the middle of the stream. For example, when the video data is divided into a plurality of packets, the metadata may be embedded as a new data packet between video packets, or stored in the packet header of each video packet.
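Embedding metadata as its own data packet between video packets can be illustrated with a toy framing. The 4-byte type and length-prefixed layout below is an assumption for illustration; the patent does not fix any packet syntax.

```python
import struct

def make_packet(ptype: bytes, payload: bytes) -> bytes:
    # Toy framing: 4-byte type | 4-byte big-endian length | payload
    return ptype + struct.pack(">I", len(payload)) + payload

def mux(video_packets, metadata: bytes):
    """Embed the metadata as a new data packet among the video packets
    (here, ahead of the first video packet)."""
    return [make_packet(b"META", metadata)] + [make_packet(b"VID0", p) for p in video_packets]

stream = mux([b"frame1", b"frame2"], b"<display areas...>")
print(stream[0][:4])   # b'META'
```

The alternative mentioned above, storing the metadata in each video packet's header, would instead extend the per-packet header rather than add a separate packet.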
- By providing metadata together with the video data as described above, the video provider can have the sub video displayed at the provider's intended display position during picture-in-picture playback.
- Note that the synthesizing unit 105 of the video display device 1 shown in FIG. 1 adjusts only the sub video; the main video is not adjusted (that is, the main video is displayed full screen). When the main video is also to be adjusted, an adjustment unit 106a (separate from the sub video adjustment unit 106) may be provided on the input side of the decoded main video.
- A video display device, method, and display data according to the second embodiment of the present invention will be described with reference to FIGS. 1, 6, and 11 to 15.
- the schematic configuration of the video display device 2 according to the second embodiment of the present invention is represented by the functional block diagram of FIG. 1 as in the first embodiment.
- The metadata handled is different from that in the first embodiment, and in the operation of the display device, only the position specifying unit differs between the video display device 1 (position specifying unit 110) and the video display device 2 (position specifying unit 210). Therefore, the following description focuses on the differences from the first embodiment: the metadata used in the video display device 2 of the second embodiment, and the specific operation during playback using that metadata.
- FIG. 11 shows an example of metadata handled in the second embodiment.
- The metadata given in the first embodiment (FIGS. 2 and 3) specified the displayable time of the sub video and a display area of the sub video (sub-screen) in the main video that is preferable with respect to the main video. That is, in the metadata shown in FIGS. 2 and 3, the sub video display areas corresponding to the playback times of the main video are given on the basis of the main video's playback time axis.
- In contrast, the metadata according to the second embodiment shown in FIG. 11 gives a display area in which the sub video should preferably be displayed, determined by the content of the sub video itself and the intention of its production. That is, a sub video display area corresponding to each playback time of the sub video is given on the basis of the sub video's playback time axis.
- A desired display position that depends on the content of the sub video arises, for example, with a 10-second sub video in which person A faces right for the first 5 seconds and person B faces left for the remaining 5 seconds. In such a use case, the sub video may be displayed on the left of the screen for the first 5 seconds and on the right for the remaining 5 seconds, so that both persons face the center of the screen. Of course, this is only an example, and it is not always desirable that both face the center.
- In this way, the metadata according to the second embodiment shown in FIG. 11 is additional information, for the sub video itself, for reflecting the intention of the sub video producer in playback.
- FIG. 11(a) shows a specific metadata structure, and FIG. 11(b) shows the display area indicated by the metadata. In FIG. 11(c), the display area is shown schematically in one dimension so that its temporal change is easy to understand.
- the reproduction time position of the sub video is given on the horizontal axis of FIGS. 11 (b) and 11 (c).
- the vertical axis in Fig. 11 (c) represents the spatial two-dimensional position of the screen, and the vertical width of the band shown corresponds to the size of the display area.
- The metadata shown in FIG. 11(a) consists of displayable time information 1101, indicating in which time range of the main video the sub video can be displayed, and display area information 1102, indicating, on the basis of the sub video's playback time axis, the position in the main video at which the sub video should be displayed at each of its playback times.
- The displayable time information 1101 is not essential and may be omitted; if omitted, the sub video is interpreted as displayable over the entire main video.
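The FIG. 11(a) structure, a displayable time on the main video's axis plus display positions keyed on the sub video's own axis, can be sketched as a small data structure. The field and method names below are assumptions for illustration; the patent defines the information items 1101 and 1102 but no concrete encoding.

```python
from dataclasses import dataclass, field
from typing import Optional, List, Tuple

@dataclass
class SubVideoMetadata:
    # 1101: (start, end) on the MAIN video's time axis; None means the whole main video
    displayable_time: Optional[Tuple[float, float]]
    # 1102: (sub_video_time, (x, y)) entries keyed on the SUB video's own time axis
    display_areas: List[Tuple[float, Tuple[int, int]]] = field(default_factory=list)

    def position_at_sub_time(self, sub_t: float):
        pos = None
        for t, p in self.display_areas:   # entries assumed sorted by sub video time
            if sub_t >= t:
                pos = p
        return pos

md = SubVideoMetadata((10, 40), [(0, (0, 0)), (5, (200, 0)), (10, (400, 0))])
print(md.position_at_sub_time(7))   # (200, 0): between 5 s and 10 s of sub video playback
```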
- Here, as the display area information 1102, the display size of the sub video (sub-screen) is assumed to be a fixed size given in advance, and the position of the upper-left vertex of the sub-screen (or the center position of the sub-screen) is given. However, the present invention is not limited to this; two coordinates may be given to indicate a displayable area (see FIG. 3), or two coordinates may be given as a display area to which the sub video is enlarged or reduced.
- In FIG. 11, the display area in which the sub video should be displayed is expressed as a band-shaped area whose position changes at sub video playback times “00:00:05” (that is, 5 seconds after the start of sub video playback) and “00:00:10” (that is, 10 seconds in total after the start of sub video playback).
- The processing at the time of sub video display, including switching the sub video (sub-screen) between display and non-display, is represented by the flowchart of FIG. 6, as in the first embodiment. In this case the flowchart shows the operations of the position specifying unit 210, the processing control unit 109, and the synthesizing unit 105 in the configuration of the video display device 2 shown in FIG. 1.
- When the position specifying unit 210 reads the input metadata (step S1), it first checks the current playback time of the main video against the displayable time information 1101 included in the metadata (steps S2 and S3). If the current playback time is before the start of the displayable time, the sub video is not displayed and the device enters a standby state (step S2; No).
- If the current playback time of the main video is within the displayable time (step S2; Yes → step S3; No), the position specifying unit 210 accepts an instruction to change the display/non-display state of the sub video from the input unit 108.
- Next, the position specifying unit 210 acquires time information on the current playback time position of the sub video (step S6), and determines the display position corresponding to the current playback time position of the sub video using the metadata (step S7).
- the composition unit 105 synthesizes and displays the sub video at the designated display position in the main video (step S8).
- The differences from the first embodiment are two points: in step S6, the total playback time position of the sub video itself is acquired as the time information, and in step S7, the display position corresponding to the playback time position of the sub video is determined using the metadata.
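Expressed in code, the only change in steps S6 and S7 is which clock keys the metadata lookup. A hypothetical sketch; the stub classes and method names are assumptions standing in for the two metadata types:

```python
class Meta1:
    """First-embodiment metadata: positions keyed on the MAIN video's time."""
    def position_at(self, t):
        return (t, 0)

class Meta2:
    """Second-embodiment metadata: positions keyed on the SUB video's own time."""
    def position_at_sub_time(self, t):
        return (0, t)

def display_position(meta, main_time, sub_elapsed):
    """Steps S6-S7: look up the display position with the clock the metadata uses."""
    if hasattr(meta, "position_at_sub_time"):
        return meta.position_at_sub_time(sub_elapsed)   # second embodiment
    return meta.position_at(main_time)                  # first embodiment

print(display_position(Meta1(), 18, 5))  # (18, 0)
print(display_position(Meta2(), 18, 5))  # (0, 5)
```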
- FIGS. 12 to 15 are diagrams schematically showing an example of operation results when the sub video is synthesized and displayed on the video display device 2.
- In FIGS. 12 to 15, because the metadata is managed on the basis of the sub video's playback time, the playback time of the sub video, indicating how far the sub video has been played back and displayed, is shown in addition to the playback time of the main video.
- In each figure, (a) shows how the display area is specified by the metadata on the basis of the sub video's playback time, and (b) shows how the sub video is synthesized with the main video on the basis of the main video's time.
- black portions indicate the time when the sub video is displayed and the display position at that time.
- FIG. 12 is a diagram showing a state up to time “00:00:13”.
- the sub video displayable time starts from time “00:00:10”.
- the sub video is decoded. (Step S5).
- the display of the sub video is started at the display position corresponding to the time “00:00:13” which is synthesized with the main video and indicated by the metadata.
- FIG. 12 (a) shows a state in which the video starts to be output from the reproduction time “00:00:00” in the sub video.
- FIG. 12(b) shows that the sub video starts to be output when the playback time of the main video is “00:00:13”.
- FIG. 13 is a diagram showing the state up to time “00:00:20”.
- In the display area information 1102 of the metadata structure of FIG. 11(a), a change of the sub video display area at sub video playback time “00:00:05” is described. Therefore, as shown in FIG. 13(a), the display area is changed at the sub video playback time “00:00:05”. Accordingly, as shown in FIG. 13(b), on the composite video the display position is changed at time “00:00:18”, which is 5 seconds after the sub video started to be played back (displayed).
- FIG. 14 is a diagram showing the state up to time “00:00:28”, when the sub video (sub-screen) is switched to the display state again.
- At this time, the sub video returns from the paused state to the playback state, resuming from the position that was being played back at main video time “00:00:20”, that is, from time position “00:00:07” of the sub video itself (the 7-second total playback time position). The display position of the sub video (sub-screen) at that time is likewise given by the metadata in correspondence with the sub video's own time position “00:00:07” (7 seconds of total playback time).
- FIG. 15 is a diagram showing the state up to time “00:00:36” in the main video, where the playback of the sub video with a total playback time of “15” seconds has finished.
- As shown, the display position of the sub video is changed at sub video time “00:00:10” (10 seconds of total playback time), that is, at main video time “00:00:31”.
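The timing in FIGS. 12 to 15 follows from accumulating only the main-video time during which the sub video was in the display state. A hypothetical sketch of that bookkeeping (the interval representation is an assumption, not a mechanism the patent specifies):

```python
def sub_elapsed(main_time, play_intervals):
    """Total sub video playback time at a given main video time.
    play_intervals: [(start, end_or_None)] on the MAIN video's time axis,
    one entry per span in which the sub video was in the display state;
    None as an end means the span is still running."""
    total = 0.0
    for s, e in play_intervals:
        if main_time > s:
            total += min(main_time, e if e is not None else main_time) - s
    return total

# Sub video displayed 00:00:13-00:00:20, paused, resumed at 00:00:28:
print(sub_elapsed(31, [(13, 20), (28, None)]))  # 10.0
```

At main video time “00:00:31” this yields 10 seconds of sub video playback, matching the display position change described above.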
- As described above, by using metadata that indicates the display area (or displayable area) of the sub video on the basis of the sub video's own playback time, the sub video is synthesized with the main video such that it can be switched freely between display and non-display, and even when switched freely, it is synthesized and displayed at a display position that accords with the content of the sub video and the intention of its production.
- As for the provision form of the metadata according to the present embodiment (FIG. 11(a)), as in the first embodiment it may be provided in a data stream of management data independent of the video data, or it may be stored and provided in the video stream containing the video data of the sub video. In the latter case, a process for separating the metadata from the video stream of the sub video is required before input to the video display device 2.
- However, the metadata used in the second embodiment is given in one-to-one correspondence with the sub video, and is therefore basically handled together with the video data of the sub video or with management data related to the sub video. Further, although the metadata in FIG. 11 is stored at the header position of the video stream, the storage position is not limited to this; it may be placed in the middle of the stream. For example, when the video data is divided into a plurality of packets and sent, the metadata may be embedded as a new data packet between video packets, or stored in the packet header of each video packet.
- A video display apparatus, method, and display data according to the third embodiment of the present invention will be described with reference to FIGS. 1, 6, and 16.
- the schematic configuration of the video display device 3 according to the third embodiment of the present invention is represented by the functional block diagram of FIG. 1, as in the first and second embodiments.
- However, the operation of the position specifying unit is different, and in the present embodiment it is denoted as the position specifying unit 310.
- The processing when the sub video display is performed by the video display device 3 according to the third embodiment is represented by the flowchart of FIG. 6, as in the first and second embodiments.
- the operation of the video display device 3 according to the third embodiment will be described with a focus on differences from the video display device 1 according to the first embodiment.
- In the third embodiment, the two types of metadata described in the first embodiment and the second embodiment are both input as sub video display metadata, and the display area of the sub video is determined based on the combination of the two. Therefore, the position specifying unit 310 of the video display device 3 receives the two types of metadata and two pieces of time information (the playback time position of the main video and the playback time position of the sub video) (step S6 in the flowchart), and determines an appropriate sub video display area (step S7 in the flowchart).
- FIG. 16 is a diagram schematically showing the state of the main video and the sub video.
- FIG. 16(a) shows the displayable area of the sub video specified with respect to the main video, given by the metadata described in the first embodiment, and FIG. 16(b) shows the display area specified for the sub video itself, given by the metadata described in the second embodiment. FIG. 16(c) shows how the sub video display area is specified during playback by the metadata of FIG. 16(b), and FIG. 16(d) shows how the sub video is synthesized and displayed using FIG. 16(a) together with FIG. 16(c).
- FIGS. 16(c) and 16(d) show the case where, using the above two types of metadata, display of the sub video starts at time “00:00:13”, as in the first and second embodiments. FIG. 16(c) shows the display area 16B for the sub video shown in (b), and FIG. 16(d) additionally shows the sub video displayable area 16A in the main video shown in (a). The shaded and blacked-out portions in FIG. 16(d) indicate the times when the sub video is displayed and the display positions at those times.
- Here, the sub video is generally given to the main video as surplus value-added content. It is therefore generally desirable to play the main video while obstructing it as little as possible. For this reason, when the above two types of metadata are given, the display area is determined by giving the displayable area 16A given with respect to the main video priority over the display area 16B given to the sub video itself.
- Where the displayable area 16A and the display area 16B overlap each other, the display area of the sub video is determined based on both metadata, as follows.
- In the time ranges 1602 (“00:00:15” to “00:00:20”, and from “00:00:28”), the display area 16B is completely included in the displayable area 16A. For this reason, in the ranges 1602 the sub video is displayed in the display area given for the sub video itself, based on the metadata as shown in the second embodiment.
- In the time range 1603, the sub video displayable area 16A given with respect to the main video and the sub video display area 16B specified according to the content of the sub video itself are separated into different areas. In this case, priority is given to the sub video displayable area 16A given with respect to the main video. That is, in the time range 1603, the sub video is displayed in the sub video displayable area given with respect to the main video, based on the metadata as shown in the first embodiment.
- Note that when the displayable area shown in FIG. 16(a) and the display area shown in FIG. 16(b) are separated into different areas, and the displayable area of FIG. 16(a) indicates an area larger than the display size of the sub video (sub-screen), a process may be added in which the position that is included in the displayable area of FIG. 16(a) and is closest to the display area of FIG. 16(b) is determined as the sub video display area.
- Conversely, by giving priority to the display area of FIG. 16(b), it is also possible to forcibly set the display position of the sub video based on the display area given to the sub video itself.
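The priority rule of this embodiment, with the optional closest-position refinement, can be sketched for rectangular areas. The patent states the rule only in prose; the rectangle representation `(x1, y1, x2, y2)` and the function names below are assumptions for illustration (and the sketch assumes the sub-screen fits inside area 16A).

```python
def rect_contains(outer, inner):
    """True if rectangle `inner` lies completely inside rectangle `outer`."""
    (ox1, oy1, ox2, oy2), (ix1, iy1, ix2, iy2) = outer, inner
    return ox1 <= ix1 and oy1 <= iy1 and ix2 <= ox2 and iy2 <= oy2

def clamp_into(outer, inner):
    """Closest position to `inner` whose rectangle lies inside `outer`
    (the optional refinement described above)."""
    ox1, oy1, ox2, oy2 = outer
    ix1, iy1, ix2, iy2 = inner
    w, h = ix2 - ix1, iy2 - iy1
    x = min(max(ix1, ox1), ox2 - w)
    y = min(max(iy1, oy1), oy2 - h)
    return (x, y, x + w, y + h)

def merged_display_area(area_16a, area_16b):
    """Priority rule of the third embodiment: use the sub video's own display
    area 16B only while it lies completely inside the displayable area 16A
    given for the main video; otherwise fall back to a position inside 16A."""
    if rect_contains(area_16a, area_16b):
        return area_16b
    return clamp_into(area_16a, area_16b)

print(merged_display_area((0, 0, 100, 100), (10, 10, 30, 30)))   # (10, 10, 30, 30)
print(merged_display_area((0, 0, 100, 100), (90, 10, 120, 30)))  # (70, 10, 100, 30)
```

Reversing the priority, as mentioned above, would simply return `area_16b` unconditionally.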
- In each of the embodiments above, video data (and management data) and metadata, input through a transmission path such as broadcasting or communication or recorded in advance on a recording medium, are read out sequentially for playback and display.
- The video display apparatus, method, and display data of the present invention can thus be applied to configurations such as a broadcast video receiving apparatus, a video communication receiving apparatus, or a recording/playback apparatus having a recording medium, and also to recording media on which the metadata shown in each embodiment is recorded.
- In addition, the video data (and management data) and the metadata shown in each embodiment of the present invention can be managed separately. Therefore, metadata may be generated on the playback side, and when video data input via separate broadcast, communication, or recording media is played back as picture-in-picture, the metadata generated on the playback side can be used in combination with it.
- For example, metadata may be formed through processing such as setting, according to the user's preference, areas of the main video that may be hidden when the sub video is displayed in it, or areas that the user does not want to be hidden.
- Metadata generation on the playback side may be performed when video data (and management data) input through a transmission path such as broadcasting or communication is recorded on a recording medium, or when video data (and management data) is read out from the recording medium (for example, by a Java (registered trademark) application).
- In any case, the present invention can be applied to a video display device and method that use the metadata described in each embodiment, regardless of where the metadata is actually set.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Business, Economics & Management (AREA)
- Marketing (AREA)
- Computer Hardware Design (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Controls And Circuits For Display Device (AREA)
- Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
- Television Signal Processing For Recording (AREA)
- Studio Circuits (AREA)
Priority Applications (10)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP06768290A EP1912201B1 (en) | 2005-07-27 | 2006-07-19 | Video synthesis device and program |
US11/989,212 US8687121B2 (en) | 2005-07-27 | 2006-07-19 | Video synthesizing apparatus and program |
KR1020097020484A KR100983185B1 (ko) | 2005-07-27 | 2006-07-19 | 영상 합성 장치 및 기록매체 |
CN2006800277450A CN101233558B (zh) | 2005-07-27 | 2006-07-19 | 视频合成设备 |
ES06768290T ES2392350T3 (es) | 2005-07-27 | 2006-07-19 | Dispositivo y programa de síntesis de video |
US12/661,904 US8736698B2 (en) | 2005-07-27 | 2010-03-25 | Video synthesizing apparatus and program |
US12/661,898 US8743228B2 (en) | 2005-07-27 | 2010-03-25 | Video synthesizing apparatus and program |
US12/661,899 US9100619B2 (en) | 2005-07-27 | 2010-03-25 | Video synthesizing apparatus and program |
US12/661,901 US8836804B2 (en) | 2005-07-27 | 2010-03-25 | Video synthesizing apparatus and program |
US12/661,900 US8836803B2 (en) | 2005-07-27 | 2010-03-25 | Video synthesizing apparatus and program |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2005218064A JP4408845B2 (ja) | 2005-07-27 | 2005-07-27 | 映像合成装置及びプログラム |
JP2005-218064 | 2005-07-27 |
Related Child Applications (6)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/989,212 A-371-Of-International US8687121B2 (en) | 2005-07-27 | 2006-07-19 | Video synthesizing apparatus and program |
US12/661,904 Continuation US8736698B2 (en) | 2005-07-27 | 2010-03-25 | Video synthesizing apparatus and program |
US12/661,899 Continuation US9100619B2 (en) | 2005-07-27 | 2010-03-25 | Video synthesizing apparatus and program |
US12/661,900 Continuation US8836803B2 (en) | 2005-07-27 | 2010-03-25 | Video synthesizing apparatus and program |
US12/661,898 Continuation US8743228B2 (en) | 2005-07-27 | 2010-03-25 | Video synthesizing apparatus and program |
US12/661,901 Continuation US8836804B2 (en) | 2005-07-27 | 2010-03-25 | Video synthesizing apparatus and program |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2007013334A1 true WO2007013334A1 (ja) | 2007-02-01 |
Family
ID=37683235
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2006/314264 WO2007013334A1 (ja) | 2005-07-27 | 2006-07-19 | 映像合成装置及びプログラム |
Country Status (7)
Country | Link |
---|---|
US (6) | US8687121B2 (ja) |
EP (6) | EP2200289A3 (ja) |
JP (1) | JP4408845B2 (ja) |
KR (2) | KR100983185B1 (ja) |
CN (6) | CN101790058B (ja) |
ES (3) | ES2450080T3 (ja) |
WO (1) | WO2007013334A1 (ja) |
Cited By (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2010177848A (ja) * | 2009-01-28 | 2010-08-12 | Sharp Corp | テレビ装置、pc装置、テレビ装置とpc装置とからなる表示システム |
WO2013021643A1 (ja) * | 2011-08-11 | 2013-02-14 | パナソニック株式会社 | 放送通信連携システム、データ生成装置及び受信装置 |
JP2013538482A (ja) * | 2010-07-13 | 2013-10-10 | トムソン ライセンシング | マルチメディアアプリケーションのためのピクチャ・イン・ピクチャの方法 |
JP2014509130A (ja) * | 2011-02-04 | 2014-04-10 | クゥアルコム・インコーポレイテッド | グラフィックのための低レイテンシワイヤレスディスプレイ |
WO2014080879A1 (ja) * | 2012-11-26 | 2014-05-30 | ソニー株式会社 | 送信装置、送信方法、受信装置、受信方法および受信表示方法 |
US8964783B2 (en) | 2011-01-21 | 2015-02-24 | Qualcomm Incorporated | User input back channel for wireless displays |
US9065876B2 (en) | 2011-01-21 | 2015-06-23 | Qualcomm Incorporated | User input back channel from a wireless sink device to a wireless source device for multi-touch gesture wireless displays |
US9198084B2 (en) | 2006-05-26 | 2015-11-24 | Qualcomm Incorporated | Wireless architecture for a traditional wire-based protocol |
US9264248B2 (en) | 2009-07-02 | 2016-02-16 | Qualcomm Incorporated | System and method for avoiding and resolving conflicts in a wireless mobile display digital interface multicast environment |
US9398089B2 (en) | 2008-12-11 | 2016-07-19 | Qualcomm Incorporated | Dynamic resource sharing among multiple wireless devices |
US9413803B2 (en) | 2011-01-21 | 2016-08-09 | Qualcomm Incorporated | User input back channel for wireless displays |
US9525998B2 (en) | 2012-01-06 | 2016-12-20 | Qualcomm Incorporated | Wireless display with multiscreen service |
US9582239B2 (en) | 2011-01-21 | 2017-02-28 | Qualcomm Incorporated | User input back channel for wireless displays |
US9582238B2 (en) | 2009-12-14 | 2017-02-28 | Qualcomm Incorporated | Decomposed multi-stream (DMS) techniques for video display systems |
US9787725B2 (en) | 2011-01-21 | 2017-10-10 | Qualcomm Incorporated | User input back channel for wireless displays |
US10108386B2 (en) | 2011-02-04 | 2018-10-23 | Qualcomm Incorporated | Content provisioning for wireless back channel |
US10135900B2 (en) | 2011-01-21 | 2018-11-20 | Qualcomm Incorporated | User input back channel for wireless displays |
Families Citing this family (37)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP5194343B2 (ja) * | 2005-08-08 | 2013-05-08 | Hitachi, Ltd. | Information reproduction device and information reproduction method |
JP5058637B2 (ja) * | 2007-03-14 | 2012-10-24 | NEC Casio Mobile Communications, Ltd. | Electronic device and processing program for electronic device |
JP2009027552A (ja) * | 2007-07-20 | 2009-02-05 | Funai Electric Co Ltd | Optical disc playback device |
JP2009105505A (ja) * | 2007-10-19 | 2009-05-14 | Sony Corp | Video display system, video display method, and display control device |
US8595748B1 (en) * | 2007-12-21 | 2013-11-26 | Ibiquity Digital Corporation | Systems and methods for transmitting and receiving large objects via digital radio broadcast |
EP2117231A1 (en) * | 2008-05-06 | 2009-11-11 | Sony Corporation | Service providing method and service providing apparatus for generating and transmitting a digital television signal stream and method and receiving means for receiving and processing a digital television signal stream |
JP2010026021A (ja) * | 2008-07-16 | 2010-02-04 | Sony Corp | 表示装置および表示方法 |
US8351768B2 (en) * | 2009-07-23 | 2013-01-08 | Microsoft Corporation | Media processing comparison system and techniques |
BR112012007573A2 (pt) * | 2009-09-29 | 2016-08-16 | Sharp Kk | Image output device and image synthesis method |
JP5740822B2 (ja) * | 2010-03-04 | 2015-07-01 | Sony Corporation | Information processing device, information processing method, and program |
US9582533B2 (en) * | 2010-06-08 | 2017-02-28 | Sharp Kabushiki Kaisha | Content reproduction device, control method for content reproduction device, control program, and recording medium |
US8537201B2 (en) * | 2010-10-18 | 2013-09-17 | Silicon Image, Inc. | Combining video data streams of differing dimensionality for concurrent display |
EP2645730A4 (en) * | 2010-11-24 | 2014-08-27 | Lg Electronics Inc | VIDEO DISPLAY DEVICE AND METHOD FOR THEIR CONTROL |
JP5649429B2 (ja) | 2010-12-14 | 2015-01-07 | Panasonic Intellectual Property Management Co., Ltd. | Video processing device, camera device, and video processing method |
JP5201251B2 (ja) * | 2011-09-20 | 2013-06-05 | Hitachi, Ltd. | Information reproduction device and information reproduction method |
WO2013061526A1 (ja) * | 2011-10-26 | 2013-05-02 | Panasonic Corporation | Broadcast receiving device, broadcast receiving method, and program |
US8654256B2 (en) * | 2011-12-23 | 2014-02-18 | Mediatek Inc. | Video processing apparatus for generating multiple video outputs by employing hardware sharing technique |
JP2012133880A (ja) * | 2012-01-20 | 2012-07-12 | Hitachi Ltd | Decoding device, decoding method, and disc-type recording medium |
KR101903443B1 (ko) * | 2012-02-02 | 2018-10-02 | Samsung Electronics Co., Ltd. | Apparatus and method for transmitting and receiving scene composition information in a multimedia communication system |
US9407961B2 (en) * | 2012-09-14 | 2016-08-02 | Intel Corporation | Media stream selective decode based on window visibility state |
JP5825279B2 (ja) * | 2013-02-27 | 2015-12-02 | Brother Industries, Ltd. | Terminal device and program |
EP2816801B1 (en) | 2013-04-27 | 2018-05-30 | Huawei Technologies Co., Ltd. | Video conference processing method and device |
US9426539B2 (en) | 2013-09-11 | 2016-08-23 | Intel Corporation | Integrated presentation of secondary content |
KR102277258B1 (ko) * | 2014-02-27 | 2021-07-14 | LG Electronics Inc. | Digital device and method for processing an application in the digital device |
CN105187733B (zh) * | 2014-06-06 | 2019-03-01 | Tencent Technology (Beijing) Co., Ltd. | Video processing method, device, and terminal |
US9992553B2 (en) * | 2015-01-22 | 2018-06-05 | Engine Media, Llc | Video advertising system |
JP6760718B2 (ja) * | 2015-07-22 | 2020-09-23 | Run.Edge Inc. | Video playback program, device, and method |
JP6290143B2 (ja) * | 2015-07-30 | 2018-03-07 | Sharp Corporation | Information processing device, information processing program, and information processing method |
CN105933756A (zh) * | 2016-06-27 | 2016-09-07 | Beijing Qihoo Technology Co., Ltd. | Method and device for live-streaming video in picture-in-picture mode |
CN106296819A (zh) * | 2016-08-12 | 2017-01-04 | Beihang University | Panoramic video player based on a smart set-top box |
US11240567B2 (en) | 2016-10-25 | 2022-02-01 | Aether Media, Inc. | Video content switching and synchronization system and method for switching between multiple video formats |
EP3533224A1 (en) * | 2016-10-25 | 2019-09-04 | Aether, Inc. | Video content switching and synchronization system and method for switching between multiple video formats |
US10320728B2 (en) | 2016-12-13 | 2019-06-11 | Google Llc | Methods, systems, and media for generating a notification in connection with a video content item |
WO2019004073A1 (ja) | 2017-06-28 | 2019-01-03 | Sony Interactive Entertainment Inc. | Image placement determination device, display control device, image placement determination method, display control method, and program |
WO2022245362A1 (en) * | 2021-05-20 | 2022-11-24 | Hewlett-Packard Development Company, L.P. | Video signal classifications |
CN115834923A (zh) | 2021-09-16 | 2023-03-21 | 艾锐势企业有限责任公司 | 用于视频内容处理的网络设备、系统和方法 |
KR102461153B1 (ko) * | 2022-06-07 | 2022-11-01 | 주식회사 종달랩 | 사용자가 선택한 원단 착용 이미지를 자동 생성하는 온라인 쇼핑몰 시스템 |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH07181954A (ja) * | 1993-12-22 | 1995-07-21 | Nippon Telegr & Teleph Corp <Ntt> | Video scaling display method |
JPH10134030A (ja) * | 1996-05-22 | 1998-05-22 | Fujitsu Ltd | Multimedia data presentation system and method |
JP2001312737A (ja) * | 2000-05-01 | 2001-11-09 | Sony Corp | Information processing apparatus and method, and program storage medium |
JP2002247475A (ja) * | 2001-02-14 | 2002-08-30 | Matsushita Electric Ind Co Ltd | Video display device |
JP2004140813A (ja) * | 2002-09-26 | 2004-05-13 | Canon Inc | Receiving device, image display system, and broadcasting method |
JP2005130083A (ja) * | 2003-10-22 | 2005-05-19 | Canon Inc | Display device, display control method, and information presentation method |
Family Cites Families (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
NL9002109A (nl) * | 1990-09-19 | 1992-04-16 | Koninkl Philips Electronics Nv | Information retrieval and display device. |
JP2565045B2 (ja) | 1991-12-11 | 1996-12-18 | NEC Corporation | Scenario editing/presentation method and apparatus |
KR960007545B1 (ko) | 1993-05-08 | 1996-06-05 | LG Electronics Inc. | Main screen position compensation circuit and method |
CN2236200Y (zh) * | 1994-10-14 | 1996-09-25 | Nanjing University | Multifunctional desktop television monitoring device |
JPH0965225A (ja) * | 1995-08-24 | 1997-03-07 | Hitachi Ltd | Television device and display method thereof |
US7877766B1 (en) * | 2000-05-04 | 2011-01-25 | Enreach Technology, Inc. | Method and system of providing a non-skippable sub-advertisement stream |
AU2001258788A1 (en) | 2000-05-22 | 2001-12-03 | Sony Computer Entertainment Inc. | Information processing apparatus, graphic processing unit, graphic processing method, storage medium, and computer program |
US7206029B2 (en) * | 2000-12-15 | 2007-04-17 | Koninklijke Philips Electronics N.V. | Picture-in-picture repositioning and/or resizing based on video content analysis |
JP4414604B2 (ja) | 2001-02-28 | 2010-02-10 | Olympus Corporation | Mobile phone |
US6833874B2 (en) * | 2001-03-27 | 2004-12-21 | Sony Corporation | Ticker tape picture-in-picture system |
US6697123B2 (en) * | 2001-03-30 | 2004-02-24 | Koninklijke Philips Electronics N.V. | Adaptive picture-in-picture |
US20030079224A1 (en) * | 2001-10-22 | 2003-04-24 | Anton Komar | System and method to provide additional information associated with selectable display areas |
US20040098753A1 (en) * | 2002-03-20 | 2004-05-20 | Steven Reynolds | Video combiner |
CN2550981Y (zh) * | 2002-06-07 | 2003-05-14 | Li Zhi | Music television synthesizer |
JP4002804B2 (ja) | 2002-08-13 | 2007-11-07 | RealViz Co., Ltd. | Multi-screen display method using multiple display devices, program for the method, and recording medium |
ES2413529T3 (es) * | 2002-09-26 | 2013-07-16 | Koninklijke Philips N.V. | Apparatus for receiving a digital information signal |
US20040150748A1 (en) * | 2003-01-31 | 2004-08-05 | Qwest Communications International Inc. | Systems and methods for providing and displaying picture-in-picture signals |
JP3982465B2 (ja) | 2003-06-26 | 2007-09-26 | Sony Corporation | Disc device, disc device control method, and disc device control program |
EP1679885A1 (en) * | 2003-08-11 | 2006-07-12 | Matsushita Electric Industrial Co., Ltd. | Photographing system and photographing method |
JP2005123775A (ja) | 2003-10-15 | 2005-05-12 | Sony Corp | Playback device, playback method, playback program, and recording medium |
- 2005
- 2005-07-27 JP JP2005218064A patent/JP4408845B2/ja active Active
- 2006
- 2006-07-19 US US11/989,212 patent/US8687121B2/en not_active Expired - Fee Related
- 2006-07-19 ES ES10158752.5T patent/ES2450080T3/es active Active
- 2006-07-19 EP EP10158761A patent/EP2200289A3/en not_active Withdrawn
- 2006-07-19 CN CN2010101449344A patent/CN101790058B/zh not_active Expired - Fee Related
- 2006-07-19 WO PCT/JP2006/314264 patent/WO2007013334A1/ja active Application Filing
- 2006-07-19 EP EP10158759A patent/EP2200288A3/en not_active Withdrawn
- 2006-07-19 EP EP10158767.3A patent/EP2200290B1/en not_active Not-in-force
- 2006-07-19 ES ES10158767.3T patent/ES2506141T3/es active Active
- 2006-07-19 CN CN201010144943A patent/CN101789232A/zh active Pending
- 2006-07-19 CN CN2010101449541A patent/CN101815186B/zh not_active Expired - Fee Related
- 2006-07-19 EP EP10158756A patent/EP2200287A3/en not_active Withdrawn
- 2006-07-19 ES ES06768290T patent/ES2392350T3/es active Active
- 2006-07-19 EP EP06768290A patent/EP1912201B1/en not_active Not-in-force
- 2006-07-19 EP EP10158752.5A patent/EP2200286B1/en not_active Not-in-force
- 2006-07-19 KR KR1020097020484A patent/KR100983185B1/ko active IP Right Grant
- 2006-07-19 KR KR1020087004564A patent/KR100955141B1/ko not_active IP Right Cessation
- 2006-07-19 CN CN2006800277450A patent/CN101233558B/zh not_active Expired - Fee Related
- 2006-07-19 CN CN2010101453674A patent/CN101808207B/zh not_active Expired - Fee Related
- 2006-07-19 CN CN2010101449607A patent/CN101815187B/zh not_active Expired - Fee Related
- 2010
- 2010-03-25 US US12/661,904 patent/US8736698B2/en active Active
- 2010-03-25 US US12/661,898 patent/US8743228B2/en not_active Expired - Fee Related
- 2010-03-25 US US12/661,901 patent/US8836804B2/en not_active Expired - Fee Related
- 2010-03-25 US US12/661,899 patent/US9100619B2/en not_active Expired - Fee Related
- 2010-03-25 US US12/661,900 patent/US8836803B2/en not_active Expired - Fee Related
Cited By (23)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9198084B2 (en) | 2006-05-26 | 2015-11-24 | Qualcomm Incorporated | Wireless architecture for a traditional wire-based protocol |
US9398089B2 (en) | 2008-12-11 | 2016-07-19 | Qualcomm Incorporated | Dynamic resource sharing among multiple wireless devices |
JP2010177848A (ja) * | 2009-01-28 | 2010-08-12 | Sharp Corp | Television device, PC device, and display system comprising a television device and a PC device |
US9264248B2 (en) | 2009-07-02 | 2016-02-16 | Qualcomm Incorporated | System and method for avoiding and resolving conflicts in a wireless mobile display digital interface multicast environment |
US9582238B2 (en) | 2009-12-14 | 2017-02-28 | Qualcomm Incorporated | Decomposed multi-stream (DMS) techniques for video display systems |
JP2013538482A (ja) * | 2010-07-13 | 2013-10-10 | Thomson Licensing | Picture-in-picture method for multimedia applications |
US8964783B2 (en) | 2011-01-21 | 2015-02-24 | Qualcomm Incorporated | User input back channel for wireless displays |
US10911498B2 (en) | 2011-01-21 | 2021-02-02 | Qualcomm Incorporated | User input back channel for wireless displays |
US9413803B2 (en) | 2011-01-21 | 2016-08-09 | Qualcomm Incorporated | User input back channel for wireless displays |
US10382494B2 (en) | 2011-01-21 | 2019-08-13 | Qualcomm Incorporated | User input back channel for wireless displays |
US10135900B2 (en) | 2011-01-21 | 2018-11-20 | Qualcomm Incorporated | User input back channel for wireless displays |
US9582239B2 (en) | 2011-01-21 | 2017-02-28 | Qualcomm Incorporated | User input back channel for wireless displays |
US9065876B2 (en) | 2011-01-21 | 2015-06-23 | Qualcomm Incorporated | User input back channel from a wireless sink device to a wireless source device for multi-touch gesture wireless displays |
US9787725B2 (en) | 2011-01-21 | 2017-10-10 | Qualcomm Incorporated | User input back channel for wireless displays |
US10108386B2 (en) | 2011-02-04 | 2018-10-23 | Qualcomm Incorporated | Content provisioning for wireless back channel |
JP2014509130A (ja) * | 2011-02-04 | 2014-04-10 | Qualcomm Incorporated | Low-latency wireless display for graphics |
US9503771B2 (en) | 2011-02-04 | 2016-11-22 | Qualcomm Incorporated | Low latency wireless display for graphics |
US9723359B2 (en) | 2011-02-04 | 2017-08-01 | Qualcomm Incorporated | Low latency wireless display for graphics |
WO2013021643A1 (ja) * | 2011-08-11 | 2013-02-14 | Panasonic Corporation | Broadcast/communication cooperation system, data generation device, and receiving device |
US9525998B2 (en) | 2012-01-06 | 2016-12-20 | Qualcomm Incorporated | Wireless display with multiscreen service |
US10123069B2 (en) | 2012-11-26 | 2018-11-06 | Saturn Licensing Llc | Receiving apparatus, receiving method, and receiving display method for displaying images at specified display positions |
JPWO2014080879A1 (ja) * | 2012-11-26 | 2017-01-05 | Sony Corporation | Transmitting device, transmitting method, receiving device, receiving method, and reception display method |
WO2014080879A1 (ja) * | 2012-11-26 | 2014-05-30 | Sony Corporation | Transmitting device, transmitting method, receiving device, receiving method, and reception display method |
Also Published As
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP4408845B2 (ja) | Video synthesis device and program | |
JP4364176B2 (ja) | Video data playback device and video data generation device | |
JP2010206803A (ja) | Video synthesis device and program | |
JP4800353B2 (ja) | Video synthesis device, program, and recording medium | |
JP4964319B2 (ja) | Video synthesis device, program, and recording medium | |
JP4964318B2 (ja) | Recording medium and data generation device for display data | |
JP5148651B2 (ja) | Recording medium and data generation device for display data | |
JP4964320B2 (ja) | Video synthesis device and recording medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WWE | Wipo information: entry into national phase | Ref document number: 200680027745.0; Country of ref document: CN
121 | Ep: the epo has been informed by wipo that ep was designated in this application
WWE | Wipo information: entry into national phase | Ref document number: 2006768290; Country of ref document: EP
WWE | Wipo information: entry into national phase | Ref document number: 11989212; Country of ref document: US
NENP | Non-entry into the national phase | Ref country code: DE
WWE | Wipo information: entry into national phase | Ref document number: 1020087004564; Country of ref document: KR
WWE | Wipo information: entry into national phase | Ref document number: 1020097020484; Country of ref document: KR