WO2015155893A1 - Video output apparatus, video reception apparatus, and video output method - Google Patents

Video output apparatus, video reception apparatus, and video output method

Info

Publication number
WO2015155893A1
Authority
WO
WIPO (PCT)
Prior art keywords
video
data
metadata
unit
input
Prior art date
Application number
PCT/JP2014/060516
Other languages
French (fr)
Japanese (ja)
Inventor
坂本 哲也
吉澤 和彦
益岡 信夫
浩朗 伊藤
健 木佐貫
Original Assignee
日立マクセル株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 日立マクセル株式会社
Priority to PCT/JP2014/060516
Publication of WO2015155893A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/435 Processing of additional data, e.g. decrypting of additional data, reconstructing software from modules extracted from the transport stream
    • H04N21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/4402 Processing of video elementary streams involving reformatting operations of video signals for household redistribution, storage or real-time display

Definitions

  • the present invention relates to a video output device, a video reception device, and a video output method.
  • HDMI (High Definition Multimedia Interface) is a standard for transmitting video data between video devices.
  • 4k2k refers to a resolution of horizontal 3840 pixels × vertical 2160 pixels.
  • A method of broadcasting 4k2k-resolution video data at 120 frames/second is being studied. Consequently, if video data of 4k2k resolution at 120 frames/second (hereinafter referred to as 4k2k@120Hz) is adopted for broadcasting, content distribution, and the like, the transmission bandwidth of the HDMI 2.0 specification described above becomes insufficient.
  • Patent Document 1 discloses that "when a playback device that decodes a video signal encoded with motion vectors and a display device that displays the decoded video signal at a high frame rate are separate, the display device performs frame rate conversion using the motion vectors obtained by the decoding process in the playback device" (see the abstract of Patent Document 1).
  • According to Patent Document 1, it is not necessary to connect the playback device and the display device with a high-capacity cable supporting high frame rate video.
  • However, since the interpolation frames generated using the motion vectors are uniquely determined, the high frame rate video displayed on the display device is also uniquely determined regardless of the video type.
  • An object of the present invention is to provide a technique capable of obtaining a high-quality video when a transmission band is limited or while suppressing a transmission data amount.
  • The present invention is a video output device that converts input video data into video data for transmission and outputs it, comprising a data generation unit that generates, from the input video data, video thinning data obtained by thinning out part of the video data and metadata for interpolating the video thinning data, and a data output unit that outputs the video thinning data and the metadata to a transmission path.
  • The data generation unit changes the specification of the metadata according to the video type of the video data.
  • The present invention is also a video receiving device that displays video data transmitted from a video output device on a display unit, comprising a data input unit that receives, from the video output device, video thinning data from which a part of the video data has been thinned out and metadata for interpolating the video thinning data, and a data generation unit that generates the video data to be displayed on the display unit based on the received video thinning data and metadata.
  • Information for changing the specification of the metadata to be transmitted is sent to the video output device in accordance with the video type of the video data received at the data input unit.
  • The present invention is also a video output method for converting input video data into video data for transmission and outputting it, comprising the steps of: determining the video type of the input video data; generating video thinning data by thinning out part of the data from the input video data; generating metadata for interpolating the video thinning data according to the determined video type; and outputting the video thinning data and the metadata to a transmission path.
  • According to the present invention, it is possible to view high-quality video under conditions suitable for the type of video without widening the transmission band, or while suppressing the amount of transmission data.
  • FIG. 1 is a diagram showing the block configuration of a video transmission system according to Embodiment 1. FIG. 2 is a diagram showing an example of video types and metadata specifications. FIG. 3 is a diagram showing the transmission period of a video signal in one frame period.
  • FIG. 4 is a diagram showing the block configuration of a video transmission system according to Embodiment 2.
  • FIG. 5 is a diagram showing the block configuration of a video transmission system according to Embodiment 3. FIG. 6 is a diagram showing an example of a mode selection screen.
  • FIG. 7 is a diagram showing the block configuration of a video transmission system according to Embodiment 4. FIG. 8 is a diagram showing an example of video types and metadata specifications.
  • FIG. 9 is a diagram showing the block configuration of a video transmission system according to Embodiment 5.
  • FIG. 1 is a block diagram of a video transmission system according to the first embodiment.
  • the video transmission system has a configuration in which a video output device 100 and a video reception device 200 are connected by a transmission cable 300.
  • the video output device 100 receives, for example, a digital broadcast, decodes it so that it can be viewed, and generates video data for transmission. Alternatively, video data for transmitting video data captured by a camera or the like is generated. Examples of the video output device 100 include a set-top box, a recorder, a personal computer with a built-in recorder function, a mobile phone with a camera function and a recorder function, a smartphone, and a camcorder.
  • the video receiving device 200 receives the video data from the video output device 100 and displays the video on the display unit.
  • Examples of the video receiver 200 include a digital TV, a display, a projector, a mobile phone, a tablet terminal, a signage device, a monitor for a monitoring camera, and the like.
  • the transmission cable 300 is a data transmission path for performing data communication such as video data between the video output device 100 and the video reception device 200.
  • Examples of the transmission cable 300 include a wired cable conforming to the HDMI standard or the DisplayPort standard, or a data transmission path that performs data communication wirelessly.
  • Video data (video signal 1) is input to the data input unit 101.
  • Examples of input video data include a digital broadcast received as radio waves from a broadcasting station or from a relay station such as a broadcasting satellite.
  • video data includes digital broadcasting and video content distributed via a network using an Internet broadband connection.
  • digital broadcast or video content may be input from an external recording medium connected to the input unit 101. Examples of the recording medium include an optical disk, a magnetic disk, and a semiconductor memory.
  • the data analysis unit 102 determines the type of video for the input video data. For example, for the genre of video, the video information reading unit 103 reads incidental information (program guide (EPG) for digital broadcasting) and identifies the genre (sports, movies, etc.). Alternatively, the motion detection unit 104 detects the amount of motion of the input video and determines whether or not the scene has a large motion. The determination result of the video type is sent to the data generation unit 105.
  • The data analysis unit 102 includes at least one of the video information reading unit 103 and the motion detection unit 104. Alternatively, the type of video may be determined by a different method.
  • the data generation unit 105 generates video thinning data 150 and metadata 151 from the input video data.
  • The video thinning data 150 is obtained by thinning out part of the frame data from the input video data, for example, by thinning out the even frames from the frame sequence constituting the video signal 1 to leave a frame sequence of only the odd frames.
  • Alternatively, part of the pixel data in each frame of the input video data is thinned out, that is, the number of pixels per frame is reduced.
  • Alternatively, part of the gradation data in each pixel of the input video data is thinned out, that is, the number of bits is reduced.
  • For example, when frames are thinned out, the metadata 151 may be the data of the thinned-out even frames.
  • Alternatively, it may be motion vector data calculated from the even frame to be thinned out and the odd frames before and after it.
  • the specification of the metadata 151 can be changed according to the video discrimination result in the data analysis unit 102.
  • the specifications of the metadata 151 are conditions relating to the frame rate, resolution, and the like, and specific examples will be described later.
  • the metadata 151 may be composed of one type of difference data, but may be composed of a plurality of types of difference data having different specifications.
  • the metadata 151 may be composed of difference data relating to the frame rate and difference data relating to the resolution.
  • On the video receiving device 200 side, the data thinned out on the video output device 100 side can be restored by using the video thinning data 150 and the metadata 151.
  • For example, if the metadata 151 is composed of difference data related to the frame rate, the even frame data thinned out on the video output device 100 side can be restored.
  • the data output unit 106 multiplexes the metadata 151 with the video thinning data 150, converts it into video data (video signal 2) in a format suitable for cable transmission, and outputs the video data to the transmission cable 300.
  • As the output timing of the video signal 2, for example, the video thinning data 150 is arranged in the effective pixel period and the metadata 151 is arranged in the blanking period.
  • the video signal 2 is input to the data input unit 201 via the transmission cable 300.
  • the data input unit 201 separates the input video signal 2 into video thinning data 150 and metadata 151.
  • the video thinning data 150 is supplied to the data generation unit 203, and the metadata 151 is supplied to the metadata selection unit 202.
  • the metadata selection unit 202 selects and outputs one difference data when the metadata 151 includes a plurality of types of difference data. This selection can be performed by a remote control operation by the user. When only one type of difference data is sent as the metadata 151, the difference data is output.
  • the data generation unit 203 generates video data for display based on the input video thinning data 150 and metadata 151. For example, an odd frame is generated from the video thinning data 150, and the metadata 151 is combined with this to interpolate the even frame, thereby restoring the frame rate of the video signal 1 before transmission. Alternatively, the pixel data thinned out in the video thinning data 150 is interpolated with the metadata 151 to restore the number of pixels of the video signal 1 before transmission.
  • the data output unit 204 outputs the video data (video signal 3) generated by the data generation unit 203 to the display unit 205.
  • the display unit 205 displays the input video signal 3. Examples of the display unit 205 include a liquid crystal display, a plasma display, an organic EL display, and a projector projection display.
  • the display unit 205 is built in the video receiving apparatus 200, but may be configured to be connected to the video receiving apparatus 200 as an external device.
  • FIG. 2 is a diagram showing an example of video types and metadata specifications.
  • the input video signal 1 has a resolution of 4k2k, a frame rate of 120 Hz, and a gradation number of 12 bits.
  • the video thinning data 150 of the video signal 2 is appropriately thinned out with respect to the resolution, frame rate, and number of gradations.
  • Regardless of the type of video, the resolution is reduced to 2k1k (horizontal 1920 pixels × vertical 1080 pixels), the frame rate to 60 Hz, and the gradation depth to 8 bits. Thereby, the data amount of the video thinning data 150 can be greatly reduced, and the load on the transmission path can be lightened.
  • the specification of the metadata 151 of the video signal 2 is changed according to the type of the video signal 1 identified by the video information reading unit 103 (here, TV program, genre of video content). For example, when the genre is “sports”, priority is given to the smoothness or speed of movement, and the metadata 151 is used as frame interpolation data. As a result, the video receiving apparatus 200 can restore the frame rate of the video signal 1 (original). When the genre is “movie”, the resolution is prioritized, and the metadata 151 added to the video thinning data 150 is set as pixel interpolation data. Thereby, in the video receiving apparatus 200, the number of pixels of the video signal 1 (original) can be restored.
  • When the genre is "hobby" covering "painting" or the like, priority is given to the number of gradations, and the metadata 151 is used as gradation interpolation data. Thereby, the video receiving apparatus 200 can restore the number of quantization bits of the video signal 1 (original).
  • In any case, since the metadata 151 is difference data, the amount of data is greatly reduced.
  • The video signal 3 is the video data displayed on the display unit 205; it can be displayed at a frame rate of 120 Hz for "sports", at a resolution of 4k2k for "movies", and with a gradation depth of 12 bits for "paintings".
  • the specification of the metadata 151 of the video signal 2 may be changed according to the amount of motion of the video signal 1 determined by the motion detection unit 104. For scenes with large movements, priority is given to speed, and frame interpolation data is added. For a scene with small motion, priority is given to the resolution and the number of gradations, and pixel interpolation data and gradation interpolation data are added. In this way, the metadata can be changed depending on the genre of the video and the scene, and the user can select either.
  • It goes without saying that the specifications of the video thinning data 150 and the metadata 151 described above are set in consideration of the transmission capacity of the transmission cable 300 so as not to exceed it.
  • FIG. 3 is a diagram showing the transmission period of the video signal 2 in one frame period, and shows one frame period corresponding to the vertical and horizontal directions of the display screen.
  • the vertical period 400 includes a vertical blanking period 401 and a vertical effective period 402 with the vertical synchronization signal VSYNC signal as a base point.
  • the horizontal period 403 includes a horizontal blanking period 404 and a horizontal effective period 405 with the horizontal synchronization signal HSYNC signal as a base point.
  • the area where the vertical effective period 402 and the horizontal effective period 405 overlap is the effective period 406, and the video thinning data 150 is allocated to this area.
  • An area corresponding to the vertical blanking period 401 or the horizontal blanking period 404 is a blanking period 407, and metadata 151 is allocated to this period.
  • the video thinning data 150 and the metadata 151 can be multiplexed and transmitted in one video signal 2.
  • a synchronization pattern, voice, auxiliary information, and the like can be multiplexed and transmitted.
  • According to this embodiment, since metadata according to the priority condition for the type of video is transmitted, high-quality video can be viewed under conditions suitable for the type of video without expanding the transmission band of the transmission path.
  • FIG. 4 is a block diagram of the video transmission system according to the second embodiment.
  • This video transmission system has a configuration in which a video output device 100a and a video reception device 200a are connected by a transmission cable 300.
  • the difference from the configuration of the first embodiment (FIG. 1) is that a function for determining the type of video is provided not in the video output device 100a but in the video reception device 200a.
  • the video receiving apparatus 200a is provided with a data analysis unit 206 that determines the type of the video signal 2 input to the data input unit 201.
  • the data analysis unit 206 includes a video information reading unit 207 and a motion detection unit 208.
  • the video information reading unit 207 identifies the genre of the video signal 2, and the motion detection unit 208 detects the amount of motion of the video signal 2.
  • the video information reading unit 207 performs the same operation as the video information reading unit 103, and the motion detection unit 208 performs the same operation as the motion detection unit 104, respectively.
  • The discrimination information 160 of the video signal 2 acquired by the data analysis unit 206 is sent to the data generation unit 105 of the video output device 100a via the data input unit 201, the transmission cable 300, and the data output unit 106 of the video output device 100a.
  • the data generation unit 105 generates video thinning data 150 and metadata 151 from the input video signal 1 in accordance with the discrimination information 160 from the data analysis unit 206.
  • The metadata 151 may be composed of a plurality of types of difference data. The relationship between the video type (discrimination information 160) and the metadata 151 is as described with reference to FIG. 2. The operation of the other components is the same as in the first embodiment.
  • Also in the configuration of the second embodiment, since metadata according to the priority condition for the type of video is transmitted, high-quality video can be viewed under conditions suitable for the type of video without increasing the transmission band of the transmission path.
  • FIG. 5 is a block diagram of the video transmission system according to the third embodiment.
  • This video transmission system has a configuration in which a video output device 100b and a video reception device 200b are connected by a transmission cable 300.
  • The difference from the configurations of the first embodiment (FIG. 1) and the second embodiment (FIG. 4) is that a function for setting a display mode by the user is provided instead of the data analysis units 102 and 206 for determining the type of video.
  • the video receiving device 200b is provided with a mode setting unit 209 for the user to select and set a display mode.
  • the user selects a preferred display mode according to the type of video to be viewed.
  • the selected mode information 170 is sent to the data generation unit 105 of the video output device 100b via the data input unit 201, the transmission cable 300, and the data output unit 106 of the video output device 100b.
  • the data generation unit 105 generates video thinning data 150 and metadata 151 from the input video signal 1 in accordance with the mode information 170 from the mode setting unit 209.
  • the operation of other components is the same as that in the first or second embodiment.
  • FIG. 6 is a diagram showing an example of the mode selection screen.
  • a selection screen 500 or 501 is displayed on the screen of the display unit 205.
  • the selection screen 501 is a case where the data analysis units 102 and 206 in the first and second embodiments are provided.
  • In the display mode selection screen 500 of (a), the video display mode can be selected from "sport mode", "movie mode", "art painting mode", and the like.
  • When the user selects a mode suitable for the video to be viewed using a main body operation button (not shown) or the supplied remote controller, the selection result is transmitted to the video output device 100b as mode information 170.
  • The data generation unit 105 generates the metadata 151 according to the mode information 170, following the relationship shown in FIG. 2.
  • In the discrimination operation selection screen 501 of (b), the user selects whether the video discrimination operation by the data analysis units 102 and 206 is performed by "genre" or by "scene". If "genre" is selected, the video information reading units 103 and 207 identify the genre of the video, and if "scene" is selected, the motion detection units 104 and 208 detect the motion amount of the scene. Thus, the user can select whether to switch the display for each video content or for each scene.
  • In this case, the video output device 100b or the video receiving device 200b is configured so that the mode information 170 from the mode setting unit 209 is sent to the data analysis units 102 and 206 instead of the data generation unit 105.
  • Example 4 describes a video transmission system that combines video transmission by broadcast waves and metadata transmission by a network.
  • FIG. 7 is a block diagram of a video transmission system according to the fourth embodiment.
  • This video transmission system is configured to transmit broadcast waves from the video output device 100c to the video receiving device 200c via the antennas 301 and 302, and to transmit metadata via the network 303 and the router 304.
  • the video output device 100c corresponds to a broadcasting station
  • the video reception device 200c corresponds to a digital TV.
  • Video master data (video signal 4) is input to the data input unit 101.
  • the video master data is, for example, super high resolution video data having a resolution of 8k4k (horizontal 7680 pixels ⁇ vertical 4320 pixels) and a frame rate of 120 Hz.
  • the data analysis unit 102 determines the type of video for the input video signal 4. That is, the video information reading unit 103 identifies the genre of the video, and the motion detection unit 104 detects the amount of motion of the video.
  • the data generation unit 105 generates video thinning data 150 and metadata 151 as differential data from the input video signal 4.
  • the specification of the metadata 151 is determined according to the video discrimination result in the data analysis unit 102.
  • the metadata 151 may be composed of a plurality of types of difference data.
  • the data output unit 106 transmits the generated video thinning data 150 as a broadcast wave (video signal 5a) via the antenna 301.
  • The data output unit 106 also has a function as a data server for distributing the generated metadata 151 as the video signal 5b via the network 303.
  • The data input unit 201 receives the video signal 5a via the antenna 302. Further, by accessing a predetermined URL (Uniform Resource Locator), the data input unit 201 receives via the network 303 and the router 304 the video signal 5b stored in the data output unit 106 of the video output device 100c. Both video signals 5a and 5b received by the data input unit 201 are supplied to the data generation unit 203 and the metadata selection unit 202. The subsequent processing is the same as in the first embodiment (FIG. 1): the data generation unit 203 generates the video signal 6 using the metadata selected by the metadata selection unit 202 and displays it on the display unit 205. A sketch of the metadata fetch and synchronization follows below.
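As an illustrative sketch of the receiver side of this hybrid arrangement, the snippet below fetches the network-delivered metadata and aligns it with the broadcast frames. The URL, the JSON payload shape, and the keying by a PCR-like timestamp are assumptions for illustration only; the patent does not specify the payload format.

```python
import json
import urllib.request

# Hypothetical address of the broadcaster's metadata server (video signal 5b).
METADATA_URL = "http://broadcaster.example.com/metadata/current"

def fetch_metadata(url: str = METADATA_URL) -> dict:
    """Fetch the metadata (video signal 5b) from the data server over the
    network, assumed here to be JSON keyed by a PCR-like timestamp."""
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

def align_metadata(broadcast_frame_pcr: int, metadata: dict):
    """Pick the metadata entry whose timestamp matches the broadcast frame,
    so interpolation data is applied to the correct 60 Hz frame."""
    return metadata.get(str(broadcast_frame_pcr))
```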
  • FIG. 8 is a diagram showing an example of video types and metadata specifications.
  • the video signal 4 to be input has a resolution of 8k4k, a frame rate of 120 Hz, and a gradation number of 12 bits.
  • the video signal 5a (video thinning data 150) has a resolution of 4k2k, a frame rate of 60 Hz, and a gradation number of 8 bits regardless of the type of video.
  • the video signal 5a may be a signal that has been compression-encoded according to a standard such as MPEG (Moving Picture Experts Group) -2 system.
  • the video signal 5b includes time information similar to the PCR (Program Clock Reference) information of the MPEG-2 system for synchronizing with the video signal 5a.
  • the specifications of the video signal 5b (metadata 151) are changed according to the type of video. For example, when the type of video is “sports”, priority is given to smoothness or speed of movement, and the video signal 5b (metadata 151) is used as frame interpolation data. When the video type is “news / movie”, the resolution is prioritized and the video signal 5b (metadata 151) is used as pixel interpolation data. When the video type is “art / painting”, priority is given to the number of gradations, and the video signal 5b (metadata 151) is used as gradation interpolation data.
  • The video signal 6 is the video displayed on the display unit 205; the frame rate is 120 Hz for "sports", the resolution is 8k4k for "news/movies", and the gradation depth is 12 bits for "art/painting".
  • the specification of the video signal 5b may be changed according to the amount of motion of the video signal 4.
  • Frame interpolation data is added to a scene with a large motion giving priority to speed, and pixel interpolation data and gradation interpolation data are added to a scene with little motion giving priority to the resolution and the number of gradations.
  • Also in the configuration of the fourth embodiment, since metadata according to the priority condition for the type of video is transmitted, high-quality video can be viewed under conditions suitable for the type of video without increasing the transmission band of the transmission path.
  • Furthermore, since the transmission path is divided so that video transmission by broadcast waves is combined with metadata transmission over a network, a large amount of metadata can be transmitted, and the user can view ultra-high-resolution video.
  • Embodiment 5 provides the video transmission system of Embodiment 4 with a function for setting a display mode by the user.
  • FIG. 9 is a block diagram of the video transmission system according to the fifth embodiment.
  • the video output device 100d includes a metadata storage unit 107 that stores a plurality of types of metadata having different specifications for the video signal 4. This metadata is generated and stored in advance by the data generation unit 105.
  • the video receiving device 200d is provided with a mode setting unit 209 for a user to select and set a display mode.
  • the user selects a preferred display mode according to the type of video to be viewed.
  • the selected mode information 170 is sent to the data generation unit 105 of the video output device 100d via the data input unit 201, the router 304 and the network 303, and the data output unit 106 of the video output device 100d.
  • the data generation unit 105 extracts the corresponding metadata 151 from the metadata storage unit 107 according to the mode information 170 from the mode setting unit 209. Then, the extracted metadata 151 is transmitted as a video signal 5b to the video reception device 200d via the network 303.
  • the operation of other components is the same as that of the fourth embodiment.
  • The display mode selection screen displayed on the display unit 205 may be the same as that shown in FIG. 6. However, since the network 303 is used in the present embodiment, a different URL 502 is set in advance for each set of metadata stored in the metadata storage unit 107, and each option on the display mode selection screen 500 is associated with the corresponding URL 502 (a sketch of such an association follows below). Thus, when the user selects a video display mode on the display mode selection screen 500, the receiver can access the URL 502 associated with the selected display mode and receive the predetermined metadata 151.
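A minimal sketch of the mode-to-URL association described here. The URLs are placeholders; the patent only states that each stored metadata set is reachable at its own pre-assigned URL 502.

```python
# Hypothetical association between display-mode menu options and the
# URL 502 at which the corresponding metadata is stored (Embodiment 5).
MODE_TO_URL = {
    "sport mode":        "http://broadcaster.example.com/meta/frame",
    "movie mode":        "http://broadcaster.example.com/meta/pixel",
    "art painting mode": "http://broadcaster.example.com/meta/gradation",
}

def url_for_selected_mode(mode: str) -> str:
    """Return the URL the receiver should access for the chosen display mode."""
    return MODE_TO_URL[mode]
```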
  • Also in this embodiment, the user can view ultra-high-resolution video. Further, since the metadata is transmitted according to the display mode selected by the user, it is possible to view high-quality video under conditions that suit the user's preference.
  • the present invention is not limited to the above-described embodiments, and includes various modifications.
  • the video thinning data is transmitted with the same specification regardless of the type of video, but the specification may be changed according to the type of video.
  • an efficient combination of video thinning data and metadata is possible.
  • transmission efficiency can be improved by appropriately performing compression encoding processing and transmitting.
  • When the metadata specification is determined on the video output device side, the video reception device side needs to know which metadata specification was determined.
  • an identification code for identifying the metadata specification is inserted into a part of the video signal and transmitted to the video receiving device side, so that it is possible to cope with an increase in the type of specification.
  • the configuration of each part is described in detail in order to explain the present invention in an easy-to-understand manner, and the present invention is not necessarily limited to one having all the configurations described.
  • a part of the configuration of a certain embodiment can be replaced with the configuration of another embodiment, or the configuration of another embodiment can be added to the configuration of a certain embodiment.
  • a combination of the data analysis units 102 and 206 of the first embodiment (FIG. 1) or the second embodiment (FIG. 4) and the mode setting unit 209 of the third embodiment (FIG. 5) is also effective.
  • a configuration in which the data analysis unit 102 of the fourth embodiment (FIG. 7) and the mode setting unit 209 of the fifth embodiment (FIG. 9) are combined is also effective.
  • 100: Video output device, 101: Data input unit, 102: Data analysis unit, 103: Video information reading unit, 104: Motion detection unit, 105: Data generation unit, 106: Data output unit, 107: Metadata storage unit, 150: Video thinning data, 151: Metadata, 160: Discrimination information, 170: Mode information, 200: Video receiving device, 201: Data input unit, 202: Metadata selection unit, 203: Data generation unit, 204: Data output unit, 205: Display unit, 206: Data analysis unit, 207: Video information reading unit, 208: Motion detection unit, 209: Mode setting unit, 300: Transmission cable, 303: Network.

Abstract

In a video output apparatus (100), a data generation unit (105) generates, from input video data, video thinned-out data (150) obtained by thinning out part of the video data and also generates meta data (151) for interpolating the video thinned-out data. A data output unit (106) outputs the video thinned-out data and meta data to a video reception apparatus (200) via a transmission cable (300). The data generation unit (105) changes the specification of the meta data (151) in accordance with the type of video of the video data determined by a data analysis unit (102). The data analysis unit (102) includes a video information reading unit (103) that identifies the genre of the video, or a motion detection unit (104) that detects a motion amount of the video so as to determine whether the scene thereof includes a large motion. In this way, the video reception apparatus (200) enables the viewing of a high-picture-quality video under conditions suitable for the type of the video without expansion of the transmission band.

Description

Video output device, video receiver, and video output method
The present invention relates to a video output device, a video reception device, and a video output method.
There is the HDMI (High Definition Multimedia Interface) standard (a registered trademark of HDMI Licensing, LLC) as a method for transmitting video data between video devices. For example, under the HDMI 2.0 standard, video data having a resolution of 4k2k (horizontal 3840 pixels × vertical 2160 pixels) can be transmitted at a frame rate of 60 frames/second. Meanwhile, in Europe and elsewhere, a method of broadcasting 4k2k-resolution video data at 120 frames/second is being studied. Consequently, if video data of 4k2k resolution at 120 frames/second (hereinafter referred to as 4k2k@120Hz) is adopted for broadcasting, content distribution, and the like, the transmission bandwidth of the HDMI 2.0 specification described above becomes insufficient.
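To make the bandwidth shortfall concrete, a rough uncompressed-payload estimate can be computed as below. This is a minimal sketch: it ignores blanking and line-coding overhead and assumes 4:4:4 sampling, none of which the text specifies, and compares against HDMI 2.0's roughly 18 Gbps nominal bandwidth.

```python
# Rough uncompressed video payload estimate in Gbit/s
# (ignores blanking, audio, and line-coding overhead).
def payload_gbps(width, height, fps, bits_per_pixel):
    return width * height * fps * bits_per_pixel / 1e9

# 4k2k @ 60 Hz, 8-bit 4:4:4 (24 bits/pixel): about 11.9 Gbps, within HDMI 2.0
print(payload_gbps(3840, 2160, 60, 24))
# 4k2k @ 120 Hz, 8-bit 4:4:4: about 23.9 Gbps, already exceeding ~18 Gbps
print(payload_gbps(3840, 2160, 120, 24))
# 4k2k @ 120 Hz with 12-bit gradation (36 bits/pixel), as in FIG. 2's input
print(payload_gbps(3840, 2160, 120, 36))   # about 35.8 Gbps
```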
As a technique for solving this, Patent Document 1 describes that "when a playback device that decodes a video signal encoded with motion vectors and a display device that displays the decoded video signal at a high frame rate are separate, the display device performs frame rate conversion using the motion vectors obtained by the decoding process in the playback device" (see the abstract of Patent Document 1).
JP 2007-274679 A
According to the technique of Patent Document 1, it is not necessary to connect the playback device and the display device with a high-capacity cable supporting high frame rate video. However, since the interpolation frames generated using the motion vectors are uniquely determined, the high frame rate video displayed on the display device is also uniquely determined regardless of the type of video.
It is desirable to vary the display conditions of the video shown on the display device depending on its type (genre or scene). For example, it is desirable to display fast-moving video at a higher frame rate to eliminate flicker, and to display high-quality video at a higher resolution by increasing the number of pixels. Patent Document 1 gives no consideration to changing the display conditions of the displayed video according to the type of video, that is, to changing the transmission conditions of the video.
An object of the present invention is to provide a technique capable of obtaining a high-quality video when a transmission band is limited, or while suppressing the amount of transmission data.
The present invention is a video output device that converts input video data into video data for transmission and outputs it, comprising a data generation unit that generates, from the input video data, video thinning data obtained by thinning out part of the video data and metadata for interpolating the video thinning data, and a data output unit that outputs the video thinning data and the metadata to a transmission path, wherein the data generation unit changes the specification of the metadata according to the video type of the video data.
The present invention is also a video receiving device that displays video data transmitted from a video output device on a display unit, comprising a data input unit that receives, from the video output device, video thinning data from which a part of the video data has been thinned out and metadata for interpolating the video thinning data, and a data generation unit that generates the video data to be displayed on the display unit based on the received video thinning data and metadata, wherein information for changing the specification of the metadata to be transmitted is sent to the video output device according to the video type of the video data received at the data input unit.
The present invention is also a video output method for converting input video data into video data for transmission and outputting it, comprising the steps of: determining the video type of the input video data; generating video thinning data by thinning out part of the data from the input video data; generating metadata for interpolating the video thinning data according to the determined video type; and outputting the video thinning data and the metadata to a transmission path.
According to the present invention, it is possible to view high-quality video under conditions suitable for the type of video without widening the transmission band, or while suppressing the amount of transmission data.
FIG. 1 is a diagram showing the block configuration of a video transmission system according to Embodiment 1. FIG. 2 is a diagram showing an example of video types and metadata specifications. FIG. 3 is a diagram showing the transmission period of a video signal in one frame period. FIG. 4 is a diagram showing the block configuration of a video transmission system according to Embodiment 2. FIG. 5 is a diagram showing the block configuration of a video transmission system according to Embodiment 3. FIG. 6 is a diagram showing an example of a mode selection screen. FIG. 7 is a diagram showing the block configuration of a video transmission system according to Embodiment 4. FIG. 8 is a diagram showing an example of video types and metadata specifications. FIG. 9 is a diagram showing the block configuration of a video transmission system according to Embodiment 5.
Hereinafter, embodiments of the present invention will be described with reference to the drawings.
FIG. 1 is a block diagram of a video transmission system according to the first embodiment. The video transmission system has a configuration in which a video output device 100 and a video reception device 200 are connected by a transmission cable 300.
The video output device 100 receives, for example, a digital broadcast, decodes it so that it can be viewed, and generates video data for transmission. Alternatively, it generates video data for transmitting video captured by a camera or the like. Examples of the video output device 100 include a set-top box, a recorder, a personal computer with a built-in recorder function, a mobile phone with camera and recorder functions, a smartphone, and a camcorder.
The video receiving device 200 receives the video data from the video output device 100 and displays the video on its display unit. Examples of the video receiving device 200 include a digital TV, a display, a projector, a mobile phone, a tablet terminal, a signage device, and a monitor for a surveillance camera.
The transmission cable 300 is a data transmission path for performing data communication, such as of video data, between the video output device 100 and the video reception device 200. Examples of the transmission cable 300 include a wired cable conforming to the HDMI standard or the DisplayPort standard, or a data transmission path that performs data communication wirelessly.
First, the configuration of the video output device 100 will be described.
Video data (video signal 1) is input to the data input unit 101. The input video data may be a digital broadcast received as radio waves from a broadcasting station or from a relay station such as a broadcasting satellite. Other video data include digital broadcasts and video content distributed over a network using an Internet broadband connection. A digital broadcast or video content may also be input from an external recording medium connected to the data input unit 101. Examples of the recording medium include an optical disk, a magnetic disk, and a semiconductor memory.
The data analysis unit 102 determines the type of video of the input video data. For example, regarding the genre of the video, the video information reading unit 103 reads incidental information accompanying the input video (the program guide (EPG) in the case of digital broadcasting) and identifies the genre (sports, movie, and so on). Alternatively, the motion detection unit 104 detects the amount of motion of the input video and determines whether or not the scene has large motion. The determination result of the video type is sent to the data generation unit 105. The data analysis unit 102 includes at least one of the video information reading unit 103 and the motion detection unit 104. Alternatively, the type of video may be determined by a different method.
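The patent does not specify how the motion detection unit 104 measures motion. As one illustrative sketch, assuming frames are available as 8-bit grayscale NumPy arrays, a mean-absolute-difference measure with a hypothetical threshold could classify a scene as high-motion:

```python
import numpy as np

def motion_amount(prev_frame: np.ndarray, curr_frame: np.ndarray) -> float:
    """Mean absolute luma difference between consecutive frames (0..255)."""
    return float(np.mean(np.abs(curr_frame.astype(np.int16) -
                                prev_frame.astype(np.int16))))

def is_high_motion_scene(frames, threshold=8.0):
    """Classify a scene as 'large motion' if the average inter-frame
    difference exceeds a (hypothetical) threshold."""
    diffs = [motion_amount(a, b) for a, b in zip(frames, frames[1:])]
    return bool(diffs) and sum(diffs) / len(diffs) > threshold
```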
The data generation unit 105 generates video thinning data 150 and metadata 151 from the input video data. The video thinning data 150 is obtained by thinning out part of the frame data from the input video data, for example, by thinning out the even frames from the frame sequence constituting the video signal 1 to leave a frame sequence of only the odd frames. Alternatively, part of the pixel data in each frame of the input video data is thinned out, that is, the number of pixels per frame is reduced. Alternatively, part of the gradation data in each pixel of the input video data is thinned out, that is, the number of bits is reduced. These thinning processes may also be combined as appropriate. The metadata 151 is difference data between the input video data and the video thinning data 150. For example, when the thinning process removes part of the frame data from the input video data, the metadata 151 (difference data) may be the data of the thinned-out even frames. Alternatively, it may be motion vector data calculated from the even frame to be thinned out and the odd frames before and after it.
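A minimal sketch of the frame-thinning case described above, not the patent's implementation: it simply drops the even frames and carries them as difference metadata.

```python
def generate_thinning_and_metadata(frames):
    """Split a 120 Hz frame sequence into 60 Hz video thinning data
    (the 1st, 3rd, ... i.e. odd frames are kept) and metadata holding
    the dropped even frames as difference data."""
    thinning_data = frames[0::2]                       # transmitted frames
    metadata = {"type": "frame_interpolation",
                "dropped_frames": frames[1::2]}        # difference data
    return thinning_data, metadata
```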
The specification of the metadata 151 (difference data) can be changed according to the video discrimination result of the data analysis unit 102. Here, the specification of the metadata 151 refers to conditions relating to the frame rate, resolution, and so on; specific examples are described later. The metadata 151 may be composed of one type of difference data, or of a plurality of types of difference data having different specifications. For example, the metadata 151 may be composed of difference data relating to the frame rate and difference data relating to the resolution. On the video reception device 200 side, the data thinned out on the video output device 100 side can be restored by using the video thinning data 150 and the metadata 151. For example, if the metadata 151 is composed of difference data related to the frame rate, the even frame data thinned out on the video output device 100 side can be restored.
The data output unit 106 multiplexes the metadata 151 with the video thinning data 150, converts the result into video data (video signal 2) in a format suitable for cable transmission, and outputs it to the transmission cable 300. As the output timing of the video signal 2, for example, the video thinning data 150 is arranged in the effective pixel period and the metadata 151 is arranged in the blanking period.
Next, the configuration of the video reception device 200 will be described.
The video signal 2 is input to the data input unit 201 via the transmission cable 300. The data input unit 201 separates the input video signal 2 into the video thinning data 150 and the metadata 151. The video thinning data 150 is supplied to the data generation unit 203, and the metadata 151 is supplied to the metadata selection unit 202.
The metadata selection unit 202 selects and outputs one set of difference data when the metadata 151 includes a plurality of types of difference data. This selection can be performed by the user, for example by remote control operation. When only one type of difference data is sent as the metadata 151, that difference data is output.
The data generation unit 203 generates video data for display based on the input video thinning data 150 and metadata 151. For example, the odd frames are generated from the video thinning data 150 and the metadata 151 is combined with them to interpolate the even frames, thereby restoring the frame rate of the video signal 1 before transmission. Alternatively, the pixel data thinned out from the video thinning data 150 are interpolated using the metadata 151 to restore the number of pixels of the video signal 1 before transmission.
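A matching sketch of the restoration in the data generation unit 203, under the same simplifying assumption as the earlier thinning sketch that the metadata carries the dropped even frames.

```python
def restore_frames(thinning_data, metadata):
    """Re-interleave the transmitted odd frames with the even frames carried
    in the metadata, restoring the original 120 Hz sequence."""
    dropped = metadata["dropped_frames"]
    restored = []
    for i, frame in enumerate(thinning_data):
        restored.append(frame)                 # odd frame (transmitted)
        if i < len(dropped):
            restored.append(dropped[i])        # even frame (interpolated back)
    return restored
```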
The data output unit 204 outputs the video data (video signal 3) generated by the data generation unit 203 to the display unit 205. The display unit 205 displays the input video signal 3. Examples of the display unit 205 include a liquid crystal display, a plasma display, an organic EL display, and a projector projection display. Here, the display unit 205 is built into the video receiving apparatus 200, but it may instead be connected to the video receiving apparatus 200 as an external device.
FIG. 2 is a diagram showing an example of video types and metadata specifications. Here, it is assumed that the input video signal 1 has a resolution of 4k2k, a frame rate of 120 Hz, and a gradation depth of 12 bits. The video thinning data 150 of the video signal 2 is obtained by appropriately thinning out the resolution, frame rate, and number of gradations; regardless of the type of video, the resolution is 2k1k (horizontal 1920 pixels × vertical 1080 pixels), the frame rate is 60 Hz, and the gradation depth is 8 bits. Thereby, the data amount of the video thinning data 150 can be greatly reduced, and the load on the transmission path can be lightened.
The specification of the metadata 151 of the video signal 2 is changed according to the type of the video signal 1 identified by the video information reading unit 103 (here, the genre of the TV program or video content). For example, when the genre is "sports", priority is given to smoothness or speed of motion, and the metadata 151 is made frame interpolation data. As a result, the video receiving apparatus 200 can restore the frame rate of the (original) video signal 1. When the genre is "movie", resolution is prioritized, and the metadata 151 added to the video thinning data 150 is made pixel interpolation data. Thereby, the video receiving apparatus 200 can restore the number of pixels of the (original) video signal 1. When the genre is "hobby" covering "painting" or the like, priority is given to the number of gradations, and the metadata 151 is made gradation interpolation data. Thereby, the video receiving apparatus 200 can restore the number of quantization bits of the (original) video signal 1. In any case, since the metadata 151 is difference data, the amount of data is greatly reduced.
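The genre-to-priority mapping of FIG. 2 could be expressed as a simple lookup. The dictionary below mirrors the figure; the key names and the fallback choice are illustrative encodings, not details given in the patent.

```python
# Mapping of video type to the metadata specification, mirroring FIG. 2.
METADATA_SPEC_BY_GENRE = {
    "sports": "frame_interpolation",              # restore 120 Hz frame rate
    "movie": "pixel_interpolation",               # restore 4k2k resolution
    "hobby_painting": "gradation_interpolation",  # restore 12-bit gradation
}

def select_metadata_spec(genre: str) -> str:
    # Fall back to frame interpolation for genres not listed in FIG. 2
    # (the fallback choice is an assumption, not stated in the patent).
    return METADATA_SPEC_BY_GENRE.get(genre, "frame_interpolation")
```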
The video signal 3 is the video data displayed on the display unit 205; it can be displayed at a frame rate of 120 Hz for "sports", at a resolution of 4k2k for "movies", and with a gradation depth of 12 bits for "paintings".
The specification of the metadata 151 of the video signal 2 may also be changed according to the amount of motion of the video signal 1 determined by the motion detection unit 104. For scenes with large motion, priority is given to a sense of speed, and frame interpolation data is added. For scenes with small motion, priority is given to the resolution and the number of gradations, and pixel interpolation data or gradation interpolation data is added. In this way, the metadata can be changed either by the genre of the video or by the scene, and the user can select which is used.
It goes without saying that the specifications of the video thinning data 150 and the metadata 151 described above are set in consideration of the transmission capacity of the transmission cable 300 so as not to exceed it.
FIG. 3 is a diagram showing the transmission period of the video signal 2 in one frame period, with the frame period drawn to correspond to the vertical and horizontal directions of the display screen.
The vertical period 400 consists of a vertical blanking period 401 and a vertical effective period 402, taking the vertical synchronization signal VSYNC as the base point. The horizontal period 403 consists of a horizontal blanking period 404 and a horizontal effective period 405, taking the horizontal synchronization signal HSYNC as the base point.
The area where the vertical effective period 402 and the horizontal effective period 405 overlap is the effective period 406, and the video thinning data 150 is allocated to this period. The area corresponding to the vertical blanking period 401 or the horizontal blanking period 404 is the blanking period 407, and the metadata 151 is allocated to this period. In this way, the video thinning data 150 and the metadata 151 can be multiplexed into one video signal 2 and transmitted. In the blanking period 407, a synchronization pattern, audio, auxiliary information, and the like can also be multiplexed and transmitted.
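One way to picture the multiplexing of FIG. 3, purely as an illustration rather than an actual packing format, is a per-frame scheduler that places the thinning data in the effective period and the metadata in the blanking period. The 2k1k-style timing numbers and the one-byte-per-slot capacity model are assumptions.

```python
def multiplex_frame(thinning_pixels, metadata_bytes,
                    h_active=1920, v_active=1080,
                    h_total=2200, v_total=1125):
    """Lay out one transmitted frame: the effective period carries the video
    thinning data, the blanking period carries the metadata.
    Capacities are counted in transmission slots, one metadata byte per slot,
    purely for illustration."""
    active_capacity = h_active * v_active
    blanking_capacity = h_total * v_total - active_capacity
    if len(metadata_bytes) > blanking_capacity:
        raise ValueError("metadata exceeds blanking-period capacity")
    return {"effective_period": thinning_pixels[:active_capacity],
            "blanking_period": metadata_bytes}
```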
According to this embodiment, since metadata according to the priority condition for the type of video is transmitted, high-quality video can be viewed under conditions suitable for the type of video without expanding the transmission band of the transmission path.
 図4は、実施例2にかかる映像伝送システムのブロック構成を示す図である。本映像伝送システムは、映像出力装置100aと映像受信装置200aを伝送ケーブル300で接続した構成である。実施例1(図1)の構成と異なる点は、映像の種類を判別する機能を映像出力装置100aではなく映像受信装置200aに設けたことである。 FIG. 4 is a block diagram of the video transmission system according to the second embodiment. This video transmission system has a configuration in which a video output device 100a and a video reception device 200a are connected by a transmission cable 300. The difference from the configuration of the first embodiment (FIG. 1) is that a function for determining the type of video is provided not in the video output device 100a but in the video reception device 200a.
The video reception device 200a is provided with a data analysis unit 206 that determines the type of the video signal 2 input to the data input unit 201. The data analysis unit 206 has a video information reading unit 207 and a motion detection unit 208; the video information reading unit 207 identifies the genre of the video signal 2, and the motion detection unit 208 detects the amount of motion in the video signal 2. Otherwise, the video information reading unit 207 operates in the same way as the video information reading unit 103, and the motion detection unit 208 operates in the same way as the motion detection unit 104.
The discrimination information 160 about the video signal 2 obtained by the data analysis unit 206 is sent to the data generation unit 105 of the video output device 100a via the data input unit 201, the transmission cable 300, and the data output unit 106 of the video output device 100a. The data generation unit 105 generates the video thinning data 150 and the metadata 151 from the input video signal 1 in accordance with the discrimination information 160 from the data analysis unit 206. The metadata 151 may consist of plural types of difference data. The relationship between the type of video (discrimination information 160) and the metadata 151 is as described with reference to FIG. 2. The other components operate in the same way as in the first embodiment.
With the configuration of the second embodiment as well, metadata that follows a priority condition matched to the type of video is transmitted, so the viewer can watch high-quality video under conditions suited to that type of video without widening the transmission band of the transmission path.
FIG. 5 is a block diagram of the video transmission system according to the third embodiment. This video transmission system connects a video output device 100b and a video reception device 200b by a transmission cable 300. It differs from the configurations of the first embodiment (FIG. 1) and the second embodiment (FIG. 4) in that, instead of the data analysis units 102 and 206 that determine the type of video, a function is provided that lets the user set a display mode.
The video reception device 200b is provided with a mode setting unit 209 with which the user selects and sets a display mode. The user selects a preferred display mode according to the type of video to be viewed. The selected mode information 170 is sent to the data generation unit 105 of the video output device 100b via the data input unit 201, the transmission cable 300, and the data output unit 106 of the video output device 100b. The data generation unit 105 generates the video thinning data 150 and the metadata 151 from the input video signal 1 in accordance with the mode information 170 from the mode setting unit 209. The other components operate in the same way as in the first or second embodiment.
FIG. 6 shows an example of the mode selection screen. A selection screen 500 or 501 is displayed on the screen of the display unit 205. The selection screen 501 is used when the data analysis units 102 and 206 of the first and second embodiments are also provided.
On the display mode selection screen 500 in (a), the video display mode can be chosen from options such as "sports mode", "movie mode", and "art/painting mode". When the user selects a mode suited to the video to be viewed using operation buttons on the main body or the supplied remote control (not shown), the user's selection is transmitted to the video output device 100b as the mode information 170. The data generation unit 105 then generates the metadata 151 according to the mode information 170, following FIG. 2.
On the discrimination operation selection screen 501 in (b), the user selects whether the video discrimination operation by the data analysis units 102 and 206 is performed by "genre" or by "scene". If "genre" is selected, the video information reading units 103 and 207 identify the genre of the video; if "scene" is selected, the motion detection units 104 and 208 detect the amount of motion in the scene. The user can thus choose whether the display is switched per item of video content or per scene. When the discrimination operation selection screen 501 in (b) is displayed, the video output device 100b or the video reception device 200b is configured so that the mode information 170 from the mode setting unit 209 is sent to the data analysis units 102 and 206 rather than to the data generation unit 105.
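The routing just described might be sketched as follows; the message fields, mode names, and function name are assumptions made purely for illustration.

    DISPLAY_MODE_TO_METADATA = {
        "sports_mode": "frame_interpolation",
        "movie_mode": "pixel_interpolation",
        "art_mode": "gradation_interpolation",
    }

    def handle_mode_info(mode_info):
        """Route screen (a) selections to the data generation unit and
        screen (b) selections to the data analysis unit."""
        if mode_info["screen"] == "display_mode":              # Fig. 6(a)
            return ("data_generation_unit",
                    DISPLAY_MODE_TO_METADATA[mode_info["choice"]])
        if mode_info["screen"] == "discrimination_operation":  # Fig. 6(b)
            # "genre" -> switch per content item, "scene" -> switch per scene.
            return ("data_analysis_unit", mode_info["choice"])
        raise ValueError("unknown selection screen")

    print(handle_mode_info({"screen": "display_mode", "choice": "sports_mode"}))
    print(handle_mode_info({"screen": "discrimination_operation", "choice": "scene"}))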
With the configuration of the third embodiment as well, metadata is transmitted in accordance with the display mode selected by the user, so the viewer can watch high-quality video under conditions matching the user's preference without widening the transmission band of the transmission path.
The fourth embodiment describes a video transmission system that combines video transmission over broadcast waves with metadata transmission over a network.
FIG. 7 is a block diagram of the video transmission system according to the fourth embodiment. This video transmission system transmits broadcast waves from a video output device 100c to a video reception device 200c via antennas 301 and 302, and transmits metadata via a network 303 and a router 304. The video output device 100c corresponds, for example, to a broadcasting station, and the video reception device 200c corresponds to a digital TV.
The configuration of the video output device 100c is as follows. Video master data (video signal 4) is input to the data input unit 101. The video master data is, for example, ultra-high-resolution video data with a resolution of 8k4k (7680 horizontal pixels × 4320 vertical pixels) and a frame rate of 120 Hz. The data analysis unit 102 determines the type of video of the input video signal 4: the video information reading unit 103 identifies the genre of the video, and the motion detection unit 104 detects the amount of motion in the video.
The data generation unit 105 generates, from the input video signal 4, video thinning data 150 and metadata 151 consisting of difference data. The specification of the metadata 151 is determined according to the video discrimination result of the data analysis unit 102. The metadata 151 may also consist of plural types of difference data.
The data output unit 106 transmits the generated video thinning data 150 as a broadcast wave (video signal 5a) via the antenna 301. The data output unit 106 also functions as a data server for distributing the generated metadata 151 over the network 303 as the video signal 5b.
The configuration of the video reception device 200c is as follows. The data input unit 201 receives the video signal 5a via the antenna 302. The data input unit 201 also accesses a predetermined URL (Uniform Resource Locator) to receive, via the network 303 and the router 304, the video signal 5b stored in the data output unit 106 of the video output device 100c. Both video signals 5a and 5b received by the data input unit 201 are supplied to the data generation unit 203 and the metadata selection unit 202. The subsequent processing is the same as in the first embodiment (FIG. 1): the data generation unit 203 generates the video signal 6 using the metadata selected by the metadata selection unit 202, and the resulting video is displayed on the display unit 205.
FIG. 8 shows an example of video types and metadata specifications. Here, the input video signal 4 is assumed to have a resolution of 8k4k, a frame rate of 120 Hz, and 12-bit gradation. The video signal 5a (video thinning data 150) has a resolution of 4k2k, a frame rate of 60 Hz, and 8-bit gradation regardless of the type of video. The video signal 5a may be compression-encoded according to a standard such as the MPEG (Moving Picture Experts Group)-2 system. The video signal 5b contains time information, similar to the PCR (Program Clock Reference) information of the MPEG-2 system, for synchronizing it with the video signal 5a.
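A hedged sketch of how a receiver might pair broadcast frames (signal 5a) with network-delivered metadata packets (signal 5b) using shared time stamps; the field name pts and the tolerance handling are assumptions standing in for the PCR-like time information mentioned above.

    def pair_by_timestamp(video_frames, metadata_packets, tolerance=1):
        """Yield (frame, metadata) pairs whose timestamps agree within `tolerance`."""
        meta_by_pts = {m["pts"]: m for m in metadata_packets}
        for frame in video_frames:
            meta = meta_by_pts.get(frame["pts"])
            if meta is None:
                # Fall back to the nearest packet within the tolerance window.
                candidates = [m for m in metadata_packets
                              if abs(m["pts"] - frame["pts"]) <= tolerance]
                meta = candidates[0] if candidates else None
            yield frame, meta

    frames = [{"pts": 0, "data": "f0"}, {"pts": 3003, "data": "f1"}]
    metas  = [{"pts": 0, "type": "frame_interpolation"},
              {"pts": 3003, "type": "frame_interpolation"}]
    for f, m in pair_by_timestamp(frames, metas):
        print(f["data"], m["type"] if m else "no metadata")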
The specification of the video signal 5b (metadata 151) is changed according to the type of video. For example, when the type of video is "sports", smoothness of motion or a sense of speed is prioritized and the video signal 5b (metadata 151) is frame interpolation data. When the type of video is "news/movie", resolution is prioritized and the video signal 5b (metadata 151) is pixel interpolation data. When the type of video is "art/painting", the number of gradations is prioritized and the video signal 5b (metadata 151) is gradation interpolation data. The video signal 6 is the video displayed on the display unit 205: it can be displayed at a frame rate of 120 Hz for "sports", at a resolution of 8k4k for "news/movie", and with 12-bit gradation for "art/painting".
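The Fig. 8 relationship between the fixed broadcast specification and the displayed specification can be illustrated with the following sketch; the dictionaries and function name are assumptions, not data structures of the embodiment.

    BASE_5A = {"resolution": "4k2k", "frame_rate_hz": 60, "bit_depth": 8}

    METADATA_EFFECT = {
        "frame_interpolation":     {"frame_rate_hz": 120},   # sports
        "pixel_interpolation":     {"resolution": "8k4k"},    # news / movies
        "gradation_interpolation": {"bit_depth": 12},         # art / paintings
    }

    def displayed_spec(metadata_type):
        """Combine the fixed broadcast spec with the improvement the metadata restores."""
        spec = dict(BASE_5A)
        spec.update(METADATA_EFFECT.get(metadata_type, {}))
        return spec

    print(displayed_spec("frame_interpolation"))
    # {'resolution': '4k2k', 'frame_rate_hz': 120, 'bit_depth': 8}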
The specification of the video signal 5b (metadata 151) may also be changed according to the amount of motion in the video signal 4. For scenes with large motion, a sense of speed is prioritized and frame interpolation data is added; for scenes with small motion, resolution and the number of gradations are prioritized and pixel interpolation data or gradation interpolation data is added.
With the configuration of the fourth embodiment as well, metadata that follows a priority condition matched to the type of video is transmitted, so the viewer can watch high-quality video under conditions suited to that type of video without widening the transmission band of the transmission path. In particular, because this embodiment separates the transmission paths and combines video transmission over broadcast waves with metadata transmission over a network, a large volume of metadata can be transmitted and the user can view ultra-high-resolution video.
The fifth embodiment adds, to the video transmission system of the fourth embodiment, a function that lets the user set a display mode.
FIG. 9 is a block diagram of the video transmission system according to the fifth embodiment. The video output device 100d has a metadata storage unit 107 that stores plural types of metadata with different specifications for the video signal 4. This metadata is generated in advance by the data generation unit 105 and stored.
The video reception device 200d is provided with a mode setting unit 209 with which the user selects and sets a display mode. The user selects a preferred display mode according to the type of video to be viewed. The selected mode information 170 is sent to the data generation unit 105 of the video output device 100d via the data input unit 201, the router 304 and the network 303, and the data output unit 106 of the video output device 100d. The data generation unit 105 extracts the corresponding metadata 151 from the metadata storage unit 107 according to the mode information 170 from the mode setting unit 209, and transmits the extracted metadata 151 as the video signal 5b to the video reception device 200d via the network 303. The other components operate in the same way as in the fourth embodiment.
The display mode selection screen displayed on the display unit 205 may be the same as in FIG. 6(a). In this embodiment, however, since the network 303 is used, a different URL 502 is set in advance for each piece of metadata stored in the metadata storage unit 107, and each option on the display mode selection screen 500 is associated with the corresponding URL 502. As a result, when the user selects a video display mode on the display mode selection screen 500, the receiver accesses the URL 502 associated with the selected display mode and receives the corresponding metadata 151.
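As an illustrative sketch of the URL association, assuming a hypothetical host name and Python's standard urllib; none of the URLs below come from the embodiment.

    from urllib.parse import urljoin

    BASE_URL = "http://example.com/metadata/"   # hypothetical data-server address

    MODE_TO_URL = {
        "sports_mode": urljoin(BASE_URL, "frame_interpolation.bin"),
        "movie_mode": urljoin(BASE_URL, "pixel_interpolation.bin"),
        "art_mode": urljoin(BASE_URL, "gradation_interpolation.bin"),
    }

    def metadata_url_for(selected_mode):
        """Return the URL 502 associated with the display mode chosen on screen 500."""
        return MODE_TO_URL[selected_mode]

    print(metadata_url_for("movie_mode"))
    # The receiver would then fetch this URL over the network 303 to obtain
    # the corresponding metadata 151 (signal 5b).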
With the configuration of the fifth embodiment as well, video transmission over broadcast waves is combined with metadata transmission over a network, so the user can view ultra-high-resolution video. Furthermore, because the metadata is transmitted in accordance with the display mode selected by the user, high-quality video can be viewed under conditions matching the user's preference.
The present invention is not limited to the embodiments described above and includes various modifications. For example, although the video thinning data is transmitted with the same specification regardless of the type of video, its specification may instead be changed according to the type of video, which enables an efficient combination of video thinning data and metadata. Transmission efficiency can also be improved by applying compression encoding to the video thinning data and the metadata as appropriate before transmission. Furthermore, when the specification of the metadata is determined on the video output device side, the video reception device side needs to know which specification was chosen; by inserting an identification code identifying the metadata specification into part of the video signal and transmitting it to the video reception device, this can be handled even as the number of specifications grows.
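The identification-code idea mentioned above might look like the following sketch, assuming a one-byte code prepended to the metadata payload; the code values and function names are hypothetical.

    SPEC_CODES = {0x01: "frame_interpolation",
                  0x02: "pixel_interpolation",
                  0x03: "gradation_interpolation"}
    CODES_BY_SPEC = {v: k for k, v in SPEC_CODES.items()}

    def insert_spec_code(payload: bytes, spec: str) -> bytes:
        """Prefix the metadata payload with a code identifying its specification."""
        return bytes([CODES_BY_SPEC[spec]]) + payload

    def read_spec_code(signal: bytes):
        """Recover the specification and the payload on the receiving side."""
        return SPEC_CODES.get(signal[0], "unknown"), signal[1:]

    packed = insert_spec_code(b"\x00\x01\x02", "pixel_interpolation")
    print(read_spec_code(packed))   # ('pixel_interpolation', b'\x00\x01\x02')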
In each embodiment of the present invention, a detailed description is given of the parts that characterize the invention, while description of configurations and techniques that are general to video output devices and video reception devices is omitted. For example, transmitting and receiving video signals generally involves modulation/demodulation processing, encoding/decoding processing, and the like, but description of such processing is omitted.
The embodiments above describe the configuration of each part in detail so that the present invention is easy to understand, and the invention is not necessarily limited to a configuration having all of the described parts. Part of the configuration of one embodiment may be replaced with the configuration of another embodiment, and the configuration of another embodiment may be added to the configuration of a given embodiment. For example, a configuration that combines the data analysis units 102 and 206 of the first embodiment (FIG. 1) or the second embodiment (FIG. 4) with the mode setting unit 209 of the third embodiment (FIG. 5) is also effective, as is a configuration that combines the data analysis unit 102 of the fourth embodiment (FIG. 7) with the mode setting unit 209 of the fifth embodiment (FIG. 9).
100: video output device, 101: data input unit, 102: data analysis unit, 103: video information reading unit, 104: motion detection unit, 105: data generation unit, 106: data output unit, 107: metadata storage unit, 150: video thinning data, 151: metadata, 160: discrimination information, 170: mode information, 200: video reception device, 201: data input unit, 202: metadata selection unit, 203: data generation unit, 204: data output unit, 205: display unit, 206: data analysis unit, 207: video information reading unit, 208: motion detection unit, 209: mode setting unit, 300: transmission cable, 303: network.

Claims (14)

  1.  A video output device that converts input video data into video data for transmission and outputs it, the video output device comprising:
     a data generation unit that generates, from the input video data, video thinning data obtained by thinning out part of the video data, and metadata for interpolating the video thinning data; and
     a data output unit that outputs the video thinning data and the metadata to a transmission path,
     wherein the data generation unit changes the specification of the metadata according to the type of video of the video data.
  2.  The video output device according to claim 1, further comprising a data analysis unit that determines the type of video of the input video data,
     wherein the data generation unit changes the specification of the metadata according to the type of video determined by the data analysis unit.
  3.  The video output device according to claim 2, wherein the data analysis unit includes a video information reading unit that identifies the genre of the video from incidental information accompanying the input video data in order to determine the type of video.
  4.  The video output device according to claim 2, wherein the data analysis unit includes a motion detection unit that detects the amount of motion of the input video and determines whether a scene has large motion in order to determine the type of video.
  5.  The video output device according to any one of claims 1 to 4,
     wherein the video thinning data generated by the data generation unit is obtained by thinning out, from the input video data, at least one of part of the frame data, part of the pixel data within a frame, and part of the gradation data within the pixel data, and
     the metadata generated by the data generation unit is data for interpolating at least one of the frame data, the pixel data, and the gradation data thinned out from the input video data.
  6.  The video output device according to any one of claims 1 to 5, wherein the data generation unit generates, as the metadata, a plurality of types of metadata having different specifications.
  7.  The video output device according to any one of claims 1 to 6, wherein the data output unit outputs the video thinning data and the metadata to different transmission paths.
  8.  A video reception device that displays, on a display unit, video data transmitted from a video output device, the video reception device comprising:
     a data input unit that receives, from the video output device, video thinning data in which part of the video data has been thinned out and metadata for interpolating the video thinning data; and
     a data generation unit that generates the video data to be displayed on the display unit based on the received video thinning data and the received metadata,
     wherein information for changing the specification of the metadata to be transmitted is sent to the video output device according to the type of video of the video data received by the data input unit.
  9.  The video reception device according to claim 8, further comprising a data analysis unit that determines the type of video of the input video data,
     wherein the determination result of the data analysis unit is transmitted to the video output device as information for changing the specification of the metadata to be transmitted.
  10.  The video reception device according to claim 8, further comprising a mode setting unit with which a user sets a display mode of the input video data,
     wherein the setting result of the mode setting unit is transmitted to the video output device as information for changing the specification of the metadata to be transmitted.
  11.  The video reception device according to claim 8, further comprising a metadata selection unit with which a user selects one piece of metadata when the metadata received by the data input unit includes a plurality of types of metadata having different specifications.
  12.  A video output method for converting input video data into video data for transmission and outputting it, the video output method comprising the steps of:
     determining the type of video of the input video data;
     generating, from the input video data, video thinning data obtained by thinning out part of the video data;
     generating metadata for interpolating the video thinning data according to the determined type of video; and
     outputting the video thinning data and the metadata to a transmission path.
  13.  The video output method according to claim 12, wherein, in the step of determining the type of video, the genre of the video is identified from incidental information accompanying the input video data, or whether a scene has large motion is determined from the amount of motion of the input video.
  14.  The video output method according to claim 12 or 13,
     wherein the video thinning data is obtained by thinning out, from the input video data, at least one of part of the frame data, part of the pixel data within a frame, and part of the gradation data within the pixel data, and
     the metadata is data for interpolating at least one of the frame data, the pixel data, and the gradation data thinned out from the input video data.
PCT/JP2014/060516 2014-04-11 2014-04-11 Video output apparatus, video reception apparatus, and video output method WO2015155893A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2014/060516 WO2015155893A1 (en) 2014-04-11 2014-04-11 Video output apparatus, video reception apparatus, and video output method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2014/060516 WO2015155893A1 (en) 2014-04-11 2014-04-11 Video output apparatus, video reception apparatus, and video output method

Publications (1)

Publication Number Publication Date
WO2015155893A1 true WO2015155893A1 (en) 2015-10-15

Family

ID=54287493

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2014/060516 WO2015155893A1 (en) 2014-04-11 2014-04-11 Video output apparatus, video reception apparatus, and video output method

Country Status (1)

Country Link
WO (1) WO2015155893A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH08336134A (en) * 1995-04-06 1996-12-17 Sanyo Electric Co Ltd Method and device for moving image compression coding, method and device for moving image decoding and recording medium
JPH11266457A (en) * 1998-01-14 1999-09-28 Canon Inc Method and device for picture processing and recording medium
JP2008125015A (en) * 2006-11-15 2008-05-29 Funai Electric Co Ltd Video/audio recording and reproducing device
JP2009200848A (en) * 2008-02-21 2009-09-03 Fujitsu Ltd Image encoder, image encoding method, and image encoding program
JP2012070113A (en) * 2010-09-22 2012-04-05 Panasonic Corp Multicast delivery system, and transmitter and receiver for use in the same

Similar Documents

Publication Publication Date Title
JP6279463B2 (en) Content transmission device, content reception device, content transmission method, and content reception method
US7644425B2 (en) Picture-in-picture mosaic
US9787968B2 (en) Transmission apparatus, transmission method, reception apparatus, reception method, and transmission/reception system using audio compression data stream as a container of other information
JP4715886B2 (en) Video display device, video display system, and video display method
EP1956848A2 (en) Image information transmission system, image information transmitting apparatus, image information receiving apparatus, image information transmission method, image information transmitting method, and image information receiving method
US20110229106A1 (en) System for playback of ultra high resolution video using multiple displays
US8798132B2 (en) Video apparatus to combine graphical user interface (GUI) with frame rate conversion (FRC) video and method of providing a GUI thereof
WO2008085874A2 (en) Methods and systems for improving low-resolution video
US20110149022A1 (en) Method and system for generating 3d output video with 3d local graphics from 3d input video
US20120050154A1 (en) Method and system for providing 3d user interface in 3d televisions
KR102383117B1 (en) Display apparatus, display method and display system
KR20170005366A (en) Method and Apparatus for Extracting Video from High Resolution Video
KR20090067119A (en) Video processing system with layered video coding and methods for use therewith
KR20240007097A (en) Video transmitting device and video playing device
JP2016532386A (en) Method for displaying video and apparatus for displaying video
US10681345B2 (en) Image processing apparatus, image processing method, and image display system
WO2013015116A1 (en) Encoding device and encoding method, and decoding device and decoding method
US9456192B2 (en) Method of coding and transmission of progressive video using differential signal overlay
EP2312859A2 (en) Method and system for communicating 3D video via a wireless communication link
WO2016031912A1 (en) Control information generating device, transmission device, reception device, television receiver, video signal transmission system, control program, and recording medium
JP2007013949A (en) Digital broadcasting system and channel changing method in the digital broadcast system
US20190027077A1 (en) Electronic device and method
US9743034B2 (en) Video transmitting/receiving device and video display device
WO2015155893A1 (en) Video output apparatus, video reception apparatus, and video output method
JP6637233B2 (en) Electronic equipment and display method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14889009

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 14889009

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP