WO2019112398A1 - Method and system for providing sign language broadcast service using companion screen service - Google Patents

Method and system for providing sign language broadcast service using companion screen service

Info

Publication number
WO2019112398A1
Authority
WO
WIPO (PCT)
Prior art keywords
sign language
signaling information
language video
video
service
Prior art date
Application number
PCT/KR2018/015611
Other languages
French (fr)
Korean (ko)
Inventor
김태우
홍성욱
김회웅
Original Assignee
주식회사 에어코드
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 주식회사 에어코드
Priority claimed from KR1020180157994A external-priority patent/KR102153708B1/en
Publication of WO2019112398A1 publication Critical patent/WO2019112398A1/en

Links

Images

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04H BROADCAST COMMUNICATION
    • H04H 20/00 Arrangements for broadcast or for distribution combined with broadcast
    • H04H 20/65 Arrangements characterised by transmission systems for broadcast
    • H04H 20/71 Wireless systems
    • H04H 20/72 Wireless systems of terrestrial networks
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N 21/236 Assembling of a multiplex stream, e.g. transport stream, by combining a video stream with other content or additional data, e.g. inserting a URL [Uniform Resource Locator] into a video stream, multiplexing software data into a video stream; Remultiplexing of multiplex streams; Insertion of stuffing bits into the multiplex stream, e.g. to obtain a constant bit-rate; Assembling of a packetised elementary stream
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/434 Disassembling of a multiplex stream, e.g. demultiplexing audio and video streams, extraction of additional data from a video stream; Remultiplexing of multiplex streams; Extraction or processing of SI; Disassembling of packetised elementary stream
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/60 Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client
    • H04N 21/63 Control signaling related to video distribution between client, server and network components; Network processes for video distribution between server and clients or between remote clients, e.g. transmitting basic layer and enhancement layers over different transmission paths, setting up a peer-to-peer communication via Internet between remote STB's; Communication protocols; Addressing
    • H04N 21/647 Control signaling between network components and server or clients; Network processes for video distribution between server and clients, e.g. controlling the quality of the video stream, by dropping packets, protecting content from unauthorised alteration within the network, monitoring of network load, bridging between two different networks, e.g. between IP and wireless

Definitions

  • the present invention relates to a technology for providing a sign language broadcast service in a digital broadcast system.
  • ATSC 3.0 uses IP-based transmission and, through a hybrid environment of broadcast and broadband networks, can provide not only the basic TV service but also various supplementary services such as a service guide, HTML5 (HyperText Markup Language 5) application services, video on demand, mobile interworking services, and dynamic advertising.
  • the sign language broadcast service is a broadcast service accompanied by sign language interpretation for the hearing impaired. Together with closed captioning, the sign language broadcast service makes broadcasts easier for hearing-impaired viewers to watch and is used as one of the important means of guaranteeing a right of broadcast access equal to that of non-disabled persons.
  • in this sign language broadcast service, the sign language video of a sign language interpreter is displayed in a small region of the main broadcast screen. Because the sign language video is so small, it is difficult for hearing-impaired viewers to see the interpreter's facial expressions and hand gestures.
  • the present invention provides a method and system for providing a sign language broadcast service using a Companion Screen Service.
  • a method by which a device operated by at least one processor provides a sign language broadcast service comprises the steps of: pairing with at least one companion device; extracting signaling information of a sign language video service from a broadcast signal received over a broadcast network; and transmitting the extracted signaling information to the paired at least one companion device, wherein the extracted signaling information is used by the at least one companion device to connect to a server that streams a sign language video and to receive the sign language video.
  • the extracting step extracts an MPD (Media Presentation Description) of the sign language video service from the service level signaling information, and the MPD can be transmitted to the at least one companion device.
  • the extracting step may extract, from the service level signaling information, a first MPD of a 2D (two-dimensional) sign language video service and a second MPD of a virtual reality (VR) sign language video service, and the first MPD and the second MPD may be transmitted to different companion devices and used to receive the 2D sign language video and the virtual reality sign language video, respectively.
  • a method by which a broadcast server operated by at least one processor provides a sign language broadcast service comprises: transmitting, over a broadcast network to a main device, a broadcast signal to which signaling information for a sign language video service is added; receiving, from a companion device paired with the main device, a sign language video request using the signaling information; and transmitting the requested sign language video to the companion device, wherein the companion device receives from the broadcast server a sign language video describing the broadcast screen output on the main device and outputs it to its own screen.
  • the signaling information comprises first signaling information for a sign language video service of a first content type and second signaling information for a sign language video service of a second content type that is different from the first content type; the receiving step comprises receiving a sign language video request using the first signaling information from a first companion device and receiving a sign language video request using the second signaling information from a second companion device different from the first companion device; and the step of transmitting to the companion devices may transmit the sign language video corresponding to the first signaling information and the sign language video corresponding to the second signaling information, respectively.
  • the step of transmitting to the main device may additionally include an MPD (Media Presentation Description) of the sign language video service in the service level signaling information of the broadcast signal.
  • a broadcasting system includes a signaling encoder that encodes first signaling information for a main video and second signaling information for a sign language video describing the main video, a broadcast signal transmission device that transmits, over a broadcast network to a main device, a broadcast signal in which the main video and the encoded first and second signaling information are multiplexed, and a sign language video transmission device that receives, from a companion device paired with the main device, a sign language video request using the second signaling information and transmits the requested sign language video to the companion device.
  • the sign language video transmission apparatus may be a DASH server that transmits the sign language video using the MPEG-DASH (Moving Picture Experts Group Dynamic Adaptive Streaming over HTTP) protocol.
  • the signaling encoder may further encode third signaling information for a virtual reality (VR) type sign language video describing the main video, and the sign language video transmission device may receive a sign language video request using the third signaling information from a virtual reality device paired with the main device and transmit the requested sign language video to the virtual reality device, so that the main video, the sign language video, and the virtual reality type sign language video can be output simultaneously on their respective devices.
  • the second signaling information and the third signaling information may be included in service level signaling information of the broadcast signal.
  • by using a companion screen service to separate the device that outputs the main video from the device that outputs the sign language video, the inconvenience caused by the conventionally small size of the sign language video can be resolved.
  • in addition, the sign language video is delivered over a communication network, separately from the main video, to a TV, a companion device, a VR (Virtual Reality) device, or the like, so that viewers can watch the sign language video on various devices other than the device that outputs the main video.
  • FIG. 1 shows a network configuration for providing a sign language broadcast service according to an embodiment of the present invention.
  • FIG. 2 is a flowchart illustrating a process of providing a sign language broadcast service according to an embodiment of the present invention.
  • FIG. 3 illustrates a network configuration for providing a sign language broadcast service according to another embodiment of the present invention.
  • FIG. 4 is a flowchart illustrating a process of providing a sign language broadcast service according to another embodiment of the present invention.
  • FIG. 5 is a hardware block diagram of a broadcasting system to which an embodiment of the present invention can be applied.
  • FIG. 6 is a hardware block diagram of a device to which an embodiment of the present invention may be applied.
  • signaling refers to the transmission and reception of service information (SI) provided in a broadcasting system, an Internet broadcasting system, or a broadcasting and Internet convergence system.
  • the broadcast signal is defined as a concept encompassing the signals and/or data provided not only in terrestrial UHD (Ultra High Definition) broadcasting, cable broadcasting, satellite broadcasting, and/or mobile broadcasting, but also in bidirectional broadcasting such as Internet broadcasting, broadband broadcasting, communication broadcasting, data broadcasting, and/or VOD (Video On Demand).
  • Embodiments of the present invention may be supported by ATSC (Advanced Television Systems Committee) standard documents. That is, steps or parts of the embodiments that are not described here in order to clearly present the technical idea of the invention may be supported by those documents, and all terms used in this document may be explained by the standard documents.
  • a companion screen service is a type of N-screen service that provides broadcast-content-related services by linking a TV with various smart devices.
  • a sign language broadcast service is provided using a companion screen service.
  • FIG. 1 shows a network configuration for providing a sign language broadcast service according to an embodiment of the present invention
  • FIG. 2 is a flowchart illustrating a process for providing a sign language broadcast service according to an embodiment of the present invention.
  • the broadcasting system 100 is a digital broadcasting system such as a terrestrial UHD broadcasting system.
  • the broadcast system 100 includes a signaling encoder 101, an audio and video (AV) encoder 103, a sign language video encoder 105, a broadcast signal transmission device 107 and a sign language video transmission device 109.
  • the signaling encoder 101 encodes signaling information that includes metadata of the main video and signaling information that includes metadata of the sign language video.
  • the main video refers to video data constituting a specific broadcast program.
  • the AV encoder 103 receives and encodes the main video.
  • the main picture may include video data and audio data (AV).
  • the sign language video encoder 105 receives and encodes the sign language video.
  • here, the sign language video is synchronized with the main video and describes the main video.
  • the broadcast signal transmission apparatus 107 generates a broadcast signal in which the encoded signaling signal output from the signaling encoder 101 and the encoded main video signal output from the AV encoder 103 are multiplexed, and transmits the broadcast signal to the main device 300 through the broadcasting network 200.
  • the broadcasting network 200 provides a path for transmitting the broadcast signal to the main device 300 and may be, for example, a terrestrial broadcasting network, a satellite broadcasting network, or a cable broadcasting network.
  • the sign language video transmission device 109 transmits the encoded sign language video signal output from the sign language video encoder 105 to the companion devices 501 and 503 via the communication network 400.
  • the communication network 400 may be a broadband network such as an IP (Internet Protocol) network or a mobile broadcasting network.
  • the sign language video signal is transmitted through the communication network 400 according to the MPEG-DASH (Moving Picture Experts Group Dynamic Adaptive Streaming over HTTP) standard.
  • DASH is a streaming standard that divides AV data into files called segments and delivers them over HTTP (HyperText Transfer Protocol).
  • the sign language video transmission apparatus 109 may be a DASH server.
  • the sign language video transmission apparatus 109 converts media files such as MPEG-2 TS (Transport Stream) files or ISO (International Organization for Standardization) BMFF (Base Media File Format) files into segments suitable for transmission and transmits them.
  • the companion devices 501 and 503 may include a DASH client corresponding to the DASH server.
  • the main device 300 is a device that can receive and reproduce a digital broadcast signal from the broadcast signal transmission device 107, such as a TV (television) or a set-top box.
  • the at least one companion device 501, 503 may be a mobile terminal capable of receiving a digital broadcast signal, such as a smartphone or tablet, a desktop PC (personal computer), a laptop PC, or the like.
  • the main device 300 is connected to the broadcasting network 200 and the at least one companion device 501 and 503 is connected to the communication network 400.
  • the main device 300 transmits, to the at least one companion device 501, 503, a URL (Uniform Resource Locator) from which the sign language video can be acquired and an MPD (Media Presentation Description), which is the signaling information of the sign language video.
  • MPD describes a DASH media presentation corresponding to a sign language video service.
  • the MPD describes the representations that are delivered over the communication network 400.
  • the MPD provides all of the URLs of the initialization segment file needed to initialize the decoder and of the media segment files.
  • the main device 300 and the at least one companion device 501, 503 may generate a pairing session using a Universal Plug and Play (UPnP) protocol.
  • according to another embodiment, the main device 300 and the at least one companion device 501, 503 may communicate with each other over a WebSocket connection.
  • the main device 300 and the at least one companion device 501, 503 reproduce a sign language video image received from the communication network 400 based on the MPD.
  • the companion devices 501 and 503 use the MPD to request the sign language video from the sign language video transmission device 109 and receive it. Then, using the MPD, they reproduce the sign language video streamed from the sign language video transmission device 109.
  • Companion devices 501 and 503 may include a DASH client.
  • the main device 300 searches for nearby companion devices 501 and 503 and pairs with the discovered companion devices 501 and 503 (S101, S103).
  • the main device 300 extracts the MPD for the sign language video service from the signaling information of the broadcast signal broadcast by the broadcast signal transmission device 107 (S107). When the main device 300 receives the broadcast signal, it checks whether the signaling information includes information related to the sign language video service.
  • the main device 300 transmits the extracted MPD to the companion devices 501 and 503 that have been paired (S103) (S109).
  • the companion devices 501 and 503 paired in step S103 parse and interpret the received MPD (S111) and, based on the parsed content, request the sign language video from the sign language video transmission device 109 (S113).
  • the sign language video transmission apparatus 109 streams the sign language video that has been requested (S113) to the companion devices 501 and 503 (S115).
  • the main device 300 outputs the main video of the broadcast signal received in step S105 to its screen (S117), and the companion devices 501 and 503 paired with the main device 300 (S103) output the sign language video received in step S115 to their screens (S119). At this time, steps S117 and S119 are synchronized.
  • FIG. 3 shows a network configuration for providing a sign language broadcast service according to another embodiment of the present invention
  • FIG. 4 is a flowchart illustrating a process for providing a sign language broadcast service according to another embodiment of the present invention.
  • FIGS. 3 and 4 are similar to FIGS. 1 and 2, so description overlapping with FIGS. 1 and 2 is omitted.
  • a signaling encoder 101 encodes main video signaling information, sign language video signaling information, and VR sign language video signaling information.
  • the broadcast signal transmission apparatus 107 broadcasts, over the broadcasting network 200, a broadcast signal in which the encoded signals output from the signaling encoder 101 and the AV encoder 103 are multiplexed.
  • the sign language video encoder 105 encodes and outputs the sign language video.
  • the VR sign language video encoder 111 encodes and outputs the sign language video of the virtual reality type.
  • the sign language video transmission device 109 streams the sign language video stream at the request of the companion device 505. In addition, the sign language video transmission device 109 streams the sign language video signal of the virtual reality type at the request of the VR device 507.
  • the main device 300 searches for nearby companion devices 505 and 507 and pairs with the discovered companion devices 505 and 507 (S201, S203, S205, S207).
  • the main device 300 extracts MPD1 for the sign language video service and MPD2 for the VR sign language video service from the signaling information of the broadcast signal broadcast (S209) by the broadcast signal transmission device 107 (S211).
  • the main device 300 transmits the extracted (S211) MPD1 to the paired (S203) companion device 505 (S213). Then, the paired companion device 505 parses and interprets the received MPD (S215) and, based on the parsed content, requests the sign language video from the sign language video transmission device 109 (S217). The sign language video transmission device 109 streams the requested (S217) sign language video to the companion device 505 (S219).
  • the main device 300 transmits the extracted (S211) MPD2 to the paired (S207) VR device 507 (S221). Then, the paired VR device 507 parses and interprets the received MPD (S223) and, based on the parsed content, requests the sign language video from the sign language video transmission device 109 (S225). The sign language video transmission device 109 streams the requested (S225) sign language video to the VR device 507 (S227).
  • the main device 300 outputs the main video of the broadcast signal received in step S209 to its screen (S229), and the companion devices 505 and 507 paired with the main device 300 output the sign language videos received in steps S219 and S227 to their screens, respectively (S231, S233).
  • steps S229, S231, and S233 are synchronized.
  • FIG. 5 is a hardware block diagram of a broadcasting system to which an embodiment of the present invention can be applied, and shows a hardware configuration of the broadcasting system 100 described with reference to FIG. 1 to FIG.
  • a broadcast system 600 includes a communication device 601, a memory 603, a storage device 605, and at least one processor 607.
  • the communication device 601 is connected to at least one processor 607 to transmit and receive data.
  • the memory 603 is connected to the at least one processor 607 and stores a program containing instructions for executing the configurations and/or methods according to the embodiments described in FIGS. 1 to 4.
  • the program, in combination with hardware such as the memory 603, the storage device 605, and the at least one processor 607, implements the present invention.
  • FIG. 6 is a hardware block diagram of a device to which an embodiment of the present invention can be applied.
  • a device 700 is composed of hardware including a communication device 701, a memory device 703, an input device 705, a display 707, and at least one processor 709, and stores a program that is executed in combination with this hardware at a designated location.
  • the communication device 701 is connected to at least one processor 709 to receive a broadcasting signal through the broadcasting network 200 and receive the media signal through the communication network 400.
  • the memory device 703 is coupled to the processor 709 and stores a program containing instructions for executing the configurations and/or methods according to the embodiments described in FIGS. 1 to 4.
  • the program is implemented in conjunction with hardware such as memory device 703 and processor 709 to implement the present invention.
  • the input device 705 is coupled to the processor 709 and is a means for user input operation in accordance with the embodiments described in Figures 1-4.
  • the display 707 is connected to the processor 709 and outputs data according to the embodiments described in Figs. 1 to 4 to the screen.
  • the input device 705 and the display 707 may be implemented as a single device.
  • Processor 709 in combination with hardware, such as memory device 703, implements the present invention.
  • the embodiments of the present invention described above are not implemented only by the apparatus and method, but may be implemented through a program for realizing the function corresponding to the configuration of the embodiment of the present invention or a recording medium on which the program is recorded.

Abstract

A method and a system for providing a sign language broadcast service are provided. The method is a method by which a device operated by at least one processor provides a sign language broadcast service, and comprises the steps of: pairing with at least one companion device; extracting signaling information of a sign language video service from a broadcast signal received through a broadcasting network; and transmitting the extracted signaling information to the at least one paired companion device, wherein the extracted signaling information is used by the at least one companion device to connect to a server that streams a sign language video and to receive the sign language video.

Description

Method and system for providing a sign language broadcast service using a companion screen service
The present invention relates to a technology for providing a sign language broadcast service in a digital broadcast system.
The Advanced Television Systems Committee (ATSC) is developing a broadcast technology specification named ATSC 3.0, a terrestrial UHD (Ultra High Definition) broadcast transmission standard for IP (Internet Protocol) based next-generation terrestrial broadcast services.
ATSC 3.0 uses IP-based transmission and, through a hybrid environment of broadcast and broadband networks, can provide not only the basic TV service but also various supplementary services such as a service guide, HTML5 (HyperText Markup Language 5) application services, video on demand, mobile interworking services, and dynamic advertising.
Meanwhile, many countries have enacted legislation prohibiting discrimination against persons with disabilities and providing remedies for their rights, so that persons with disabilities can obtain a variety of information through TV. In particular, the sign language broadcast service is a broadcast service accompanied by sign language interpretation for the hearing impaired. Together with closed captioning, it makes broadcasts easier for hearing-impaired viewers to watch and is used as one of the important means of guaranteeing a right of broadcast access equal to that of non-disabled persons.
At present, terrestrial, satellite, and cable broadcasters provide sign language broadcast services for the hearing impaired, mainly on news channels. In these services, the sign language video of a sign language interpreter is displayed in a small region of the main broadcast screen. Because the sign language video is so small, it is difficult for hearing-impaired viewers to see the interpreter's facial expressions and hand gestures.
An object of the present invention is to provide a method and system for providing a sign language broadcast service using a companion screen service.
According to one aspect of the present invention, a method by which a device operated by at least one processor provides a sign language broadcast service includes: pairing with at least one companion device; extracting signaling information of a sign language video service from a broadcast signal received over a broadcast network; and transmitting the extracted signaling information to the paired at least one companion device. The extracted signaling information is used by the at least one companion device to connect to a server that streams a sign language video and to receive the sign language video.
In the extracting step, an MPD (Media Presentation Description) of the sign language video service may be extracted from service level signaling information, and the MPD may be transmitted to the at least one companion device.
In the extracting step, a first MPD of a 2D (two-dimensional) sign language video service and a second MPD of a virtual reality (VR) sign language video service may be extracted from the service level signaling information, and the first MPD and the second MPD may be transmitted to different companion devices and used to receive the 2D sign language video and the VR sign language video, respectively. An illustrative extraction sketch follows.
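As a rough, non-normative illustration of this extraction step, the Python sketch below pulls sign language MPD entries out of a service level signaling document and groups them by content type. The XML layout used here, a SignLanguageService element carrying a contentType attribute and an embedded MPD child, is an invented stand-in for illustration only; actual ATSC 3.0 service level signaling has its own table structure.

    import xml.etree.ElementTree as ET

    def extract_sign_language_mpds(sls_xml: str) -> dict:
        """Return {content type: MPD XML string} found in service level signaling.

        The element and attribute names below are illustrative assumptions,
        not the real ATSC 3.0 signaling schema.
        """
        root = ET.fromstring(sls_xml)
        mpds = {}
        for svc in root.findall(".//SignLanguageService"):
            content_type = svc.get("contentType", "2D")   # e.g. "2D" or "VR"
            mpd_elem = svc.find("MPD")
            if mpd_elem is not None:
                # Keep the embedded MPD as a standalone XML document string.
                mpds[content_type] = ET.tostring(mpd_elem, encoding="unicode")
        return mpds

    # mpds = extract_sign_language_mpds(received_sls_xml)
    # mpds.get("2D") and mpds.get("VR") would then be forwarded to the paired
    # 2D companion device and VR device, respectively (see the later sketches).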
According to another aspect of the present invention, a method by which a broadcast server operated by at least one processor provides a sign language broadcast service includes: transmitting, over a broadcast network to a main device, a broadcast signal to which signaling information for a sign language video service has been added; receiving, from a companion device paired with the main device, a sign language video request that uses the signaling information; and transmitting the requested sign language video to the companion device. The companion device receives from the broadcast server a sign language video describing the broadcast screen output on the main device and outputs it to its own screen.
The signaling information may include first signaling information for a sign language video service of a first content type and second signaling information for a sign language video service of a second content type different from the first content type. The receiving step may include receiving a sign language video request that uses the first signaling information from a first companion device and receiving a sign language video request that uses the second signaling information from a second companion device different from the first companion device, and the transmitting step may transmit the sign language video corresponding to the first signaling information and the sign language video corresponding to the second signaling information, respectively.
The step of transmitting to the main device may additionally include an MPD (Media Presentation Description) of the sign language video service in the service level signaling information of the broadcast signal.
According to yet another aspect of the present invention, a broadcasting system includes: a signaling encoder that encodes first signaling information for a main video and second signaling information for a sign language video describing the main video; a broadcast signal transmission device that transmits, over a broadcast network to a main device, a broadcast signal in which the main video and the encoded first and second signaling information are multiplexed; and a sign language video transmission device that receives, from a companion device paired with the main device, a sign language video request using the second signaling information and transmits the requested sign language video to the companion device.
The sign language video transmission device may be a DASH server that transmits the sign language video using the MPEG-DASH (Moving Picture Experts Group Dynamic Adaptive Streaming over HTTP) protocol.
The signaling encoder may further encode third signaling information for a virtual reality (VR) type sign language video describing the main video, and the sign language video transmission device may receive a sign language video request using the third signaling information from a virtual reality device paired with the main device and transmit the requested sign language video to the virtual reality device. The main video, the sign language video, and the VR-type sign language video can then be output simultaneously on their respective devices.
The second signaling information and the third signaling information may be included in the service level signaling information of the broadcast signal.
According to embodiments of the present invention, the device that outputs the main video and the device that outputs the sign language video are separated using a companion screen service, which resolves the inconvenience caused by the conventionally small size of the sign language video.
In addition, the sign language video is delivered over a communication network, separately from the main video, to a TV, a companion device, a VR (Virtual Reality) device, or the like, so that viewers can watch the sign language video on various devices other than the device that outputs the main video.
FIG. 1 shows a network configuration for providing a sign language broadcast service according to an embodiment of the present invention.
FIG. 2 is a flowchart illustrating a process of providing a sign language broadcast service according to an embodiment of the present invention.
FIG. 3 shows a network configuration for providing a sign language broadcast service according to another embodiment of the present invention.
FIG. 4 is a flowchart illustrating a process of providing a sign language broadcast service according to another embodiment of the present invention.
FIG. 5 is a hardware block diagram of a broadcasting system to which an embodiment of the present invention can be applied.
FIG. 6 is a hardware block diagram of a device to which an embodiment of the present invention can be applied.
Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings so that those of ordinary skill in the art can easily carry them out. The present invention may, however, be embodied in many different forms and is not limited to the embodiments set forth herein. In the drawings, parts not related to the description are omitted for clarity, and like reference numerals denote like parts throughout the specification.
Throughout the specification, when a part is said to "include" a component, this means that it may further include other components rather than excluding them, unless explicitly stated otherwise. The terms "unit", "-er/-or", and "module" used in the specification denote a unit that processes at least one function or operation, which may be implemented as hardware, software, or a combination of hardware and software.
In this specification, signaling refers to the transmission and reception of service information (SI) provided in a broadcasting system, an Internet broadcasting system, or a broadcasting and Internet convergence system.
A broadcast signal is defined as a concept encompassing the signals and/or data provided not only in terrestrial UHD (Ultra High Definition) broadcasting, cable broadcasting, satellite broadcasting, and/or mobile broadcasting, but also in bidirectional broadcasting such as Internet broadcasting, broadband broadcasting, communication broadcasting, data broadcasting, and/or VOD (Video On Demand).
Embodiments of the present invention may be supported by ATSC (Advanced Television Systems Committee) standard documents. That is, steps or parts of the embodiments that are not described here in order to clearly present the technical idea of the invention may be supported by those documents, and all terms used in this document may be explained by the standard documents.
A companion screen service is a type of N-screen service that provides broadcast-content-related services by linking a TV with various smart devices.
In embodiments of the present invention, a sign language broadcast service is provided using the companion screen service.
FIG. 1 shows a network configuration for providing a sign language broadcast service according to an embodiment of the present invention, and FIG. 2 is a flowchart illustrating a process of providing a sign language broadcast service according to an embodiment of the present invention.
Referring to FIG. 1, the broadcast system 100 is a digital broadcast system such as a terrestrial UHD broadcast system. The broadcast system 100 includes a signaling encoder 101, an AV (audio and video) encoder 103, a sign language video encoder 105, a broadcast signal transmission device 107, and a sign language video transmission device 109.
The signaling encoder 101 encodes signaling information that includes metadata of the main video and signaling information that includes metadata of the sign language video. Here, the main video refers to the video data constituting a specific broadcast program.
The AV encoder 103 receives and encodes the main video. The main video may include video data and audio data (AV).
The sign language video encoder 105 receives and encodes the sign language video. Here, the sign language video is synchronized with the main video and describes the main video.
The broadcast signal transmission device 107 generates a broadcast signal in which the encoded signaling signal output by the signaling encoder 101 and the encoded main video signal output by the AV encoder 103 are multiplexed, and transmits the broadcast signal to the main device 300 over the broadcast network 200. Here, the broadcast network 200 provides the path for delivering the broadcast signal to the main device 300 and may be, for example, a terrestrial, satellite, or cable broadcast network.
The sign language video transmission device 109 transmits the encoded sign language video signal output by the sign language video encoder 105 to the companion devices 501 and 503 over the communication network 400. The communication network 400 may be a broadband network such as an IP (Internet Protocol) network or a mobile broadcast network.
The sign language video signal is transmitted over the communication network 400 according to the MPEG-DASH (Moving Picture Experts Group Dynamic Adaptive Streaming over HTTP) standard. DASH is a streaming standard that divides AV data into files called segments and delivers them over HTTP (HyperText Transfer Protocol).
In this case, the sign language video transmission device 109 may be a DASH server. The sign language video transmission device 109 converts media files such as MPEG-2 TS (Transport Stream) files or ISO (International Organization for Standardization) BMFF (Base Media File Format) files into segments suitable for transmission and transmits them. A minimal server-side sketch is given below.
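The following sketch stands in for the segment-serving side of such a DASH server. It assumes the sign language video has already been packaged into an MPD plus initialization and media segment files in a local directory named dash_out (a hypothetical path); a production DASH origin would additionally handle MIME types, caching, and live segment management.

    from functools import partial
    from http.server import ThreadingHTTPServer, SimpleHTTPRequestHandler

    # Serve an already packaged DASH presentation (MPD + init + media segments)
    # over plain HTTP, which is all a DASH client strictly needs for streaming.
    SEGMENT_DIR = "dash_out"   # assumed output directory of the DASH packager
    PORT = 8080

    Handler = partial(SimpleHTTPRequestHandler, directory=SEGMENT_DIR)

    if __name__ == "__main__":
        with ThreadingHTTPServer(("", PORT), Handler) as httpd:
            print(f"Serving sign language DASH segments on port {PORT}")
            httpd.serve_forever()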
In addition, the companion devices 501 and 503 may include a DASH client corresponding to the DASH server.
The main device 300 is a device that can receive and reproduce the digital broadcast signal from the broadcast signal transmission device 107, such as a TV (television) or a set-top box. The at least one companion device 501, 503 may be a mobile terminal capable of receiving the digital broadcast signal, such as a smartphone or tablet, a desktop PC (personal computer), a laptop PC, or the like.
The main device 300 is connected to the broadcast network 200, and the at least one companion device 501, 503 is connected to the communication network 400.
The main device 300 transmits, to the at least one companion device 501, 503, a URL (Uniform Resource Locator) from which the sign language video can be acquired and an MPD (Media Presentation Description), which is the signaling information of the sign language video.
Here, the MPD describes the DASH media presentation corresponding to the sign language video service. The MPD describes the representations delivered over the communication network 400 and provides all of the URLs of the initialization segment file needed to initialize the decoder and of the media segment files. A rough parsing sketch follows.
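As a sketch of how a companion device can read such an MPD, the code below parses a simple on-demand MPD that lists its segments with a SegmentList (one of several addressing schemes DASH allows) and resolves the initialization and media segment URLs against the BaseURL. The MPD content shown is an invented example, not one taken from the patent.

    import xml.etree.ElementTree as ET
    from urllib.parse import urljoin

    NS = {"dash": "urn:mpeg:dash:schema:mpd:2011"}

    EXAMPLE_MPD = """<MPD xmlns="urn:mpeg:dash:schema:mpd:2011" type="static">
      <BaseURL>http://example.com/signlang/</BaseURL>
      <Period>
        <AdaptationSet mimeType="video/mp4">
          <Representation id="sign-720p" bandwidth="1500000">
            <SegmentList duration="4">
              <Initialization sourceURL="init.mp4"/>
              <SegmentURL media="seg-0001.m4s"/>
              <SegmentURL media="seg-0002.m4s"/>
            </SegmentList>
          </Representation>
        </AdaptationSet>
      </Period>
    </MPD>"""

    def segment_urls(mpd_xml: str):
        """Return (init_url, [media_urls]) for the first Representation."""
        root = ET.fromstring(mpd_xml)
        base = root.findtext("dash:BaseURL", default="", namespaces=NS)
        init = root.find(".//dash:Initialization", NS).get("sourceURL")
        media = [urljoin(base, seg.get("media"))
                 for seg in root.findall(".//dash:SegmentURL", NS)]
        return urljoin(base, init), media

    init_url, media_urls = segment_urls(EXAMPLE_MPD)
    print(init_url)        # http://example.com/signlang/init.mp4
    print(media_urls[0])   # http://example.com/signlang/seg-0001.m4s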
At this point, it is assumed that pairing between the main device 300 and the at least one companion device 501, 503 has already been performed.
According to one embodiment, the main device 300 and the at least one companion device 501, 503 may establish a pairing session using the UPnP (Universal Plug and Play) protocol.
According to another embodiment, the main device 300 and the at least one companion device 501, 503 may communicate with each other over a WebSocket connection. One possible discovery step for such pairing is sketched below.
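The sketch below shows one way the main device could discover nearby companion devices with an SSDP M-SEARCH, the multicast discovery step used by UPnP. The search target string urn:example:service:companion-screen:1 is a made-up placeholder; the patent does not specify the device or service types used for pairing.

    import socket

    SSDP_ADDR, SSDP_PORT = "239.255.255.250", 1900
    SEARCH_TARGET = "urn:example:service:companion-screen:1"  # placeholder ST

    MSEARCH = "\r\n".join([
        "M-SEARCH * HTTP/1.1",
        f"HOST: {SSDP_ADDR}:{SSDP_PORT}",
        'MAN: "ssdp:discover"',
        "MX: 2",
        f"ST: {SEARCH_TARGET}",
        "", "",
    ]).encode("ascii")

    def discover_companions(timeout: float = 3.0):
        """Multicast an SSDP M-SEARCH and collect responder addresses."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.settimeout(timeout)
        sock.sendto(MSEARCH, (SSDP_ADDR, SSDP_PORT))
        found = []
        try:
            while True:
                data, addr = sock.recvfrom(4096)
                found.append((addr[0], data.decode("ascii", "replace")))
        except socket.timeout:
            pass
        finally:
            sock.close()
        return found   # each entry: (companion IP, raw SSDP response headers)

    # After discovery, the main device would open the pairing session to each
    # responder (for example a WebSocket) and push the sign language MPD to it.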
Based on the MPD, the main device 300 and the at least one companion device 501, 503 reproduce the sign language video received from the communication network 400.
The companion devices 501 and 503 use the MPD to request the sign language video from the sign language video transmission device 109 and receive it, and then reproduce, using the MPD, the sign language video streamed from the sign language video transmission device 109.
The companion devices 501 and 503 may include a DASH client. Here, the DASH client may include an MPD parser that parses the MPD, a segment parser that parses the segments, an HTTP client that transmits HTTP request messages and receives HTTP response messages, and a media engine that plays the media. A minimal fetch loop on the companion side is sketched below.
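Building on the MPD parsing sketch above, the following illustrates the HTTP client part of such a DASH client on the companion device: it downloads the initialization segment and then the media segments in order and hands each one to a media engine. Feeding an actual decoder is outside the scope of this sketch, so the media engine here is only a stand-in that appends the bytes to a local file.

    import urllib.request

    class FileMediaEngine:
        """Stand-in for a real media engine: just concatenates the segments."""
        def __init__(self, path="sign_language.mp4"):
            self.out = open(path, "wb")

        def feed(self, segment_bytes: bytes):
            self.out.write(segment_bytes)

        def close(self):
            self.out.close()

    def fetch(url: str) -> bytes:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.read()

    def play_sign_language(init_url: str, media_urls: list, engine=None):
        """Minimal DASH-style fetch loop: initialization segment, then media."""
        engine = engine or FileMediaEngine()
        engine.feed(fetch(init_url))      # initialization segment first
        for url in media_urls:            # media segments, in MPD order
            engine.feed(fetch(url))
        engine.close()

    # Usage, reusing segment_urls() from the earlier MPD sketch:
    # init_url, media_urls = segment_urls(mpd_received_from_main_device)
    # play_sign_language(init_url, media_urls)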
Referring to FIG. 2, the main device 300 searches for nearby companion devices 501 and 503 and pairs with the discovered companion devices 501 and 503 (S101, S103).
The main device 300 extracts the MPD for the sign language video service from the signaling information of the broadcast signal broadcast (S105) by the broadcast signal transmission device 107 (S107). When the broadcast signal is received, the main device 300 checks whether the signaling information contains information related to the sign language video service.
The main device 300 transmits the extracted (S107) MPD to the paired (S103) companion devices 501 and 503 (S109).
The paired (S103) companion devices 501 and 503 parse and interpret the received (S109) MPD (S111) and, based on the parsed content, request the sign language video from the sign language video transmission device 109 (S113). The sign language video transmission device 109 streams the requested (S113) sign language video to the companion devices 501 and 503 (S115).
Then, the main device 300 outputs the main video of the broadcast signal received in step S105 to its screen (S117), and the companion devices 501 and 503 paired (S103) with the main device 300 output the sign language video received in step S115 to their screens (S119). Steps S117 and S119 are synchronized; one possible way to achieve this is sketched below.
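The patent does not prescribe a particular synchronization mechanism for steps S117 and S119. One common approach in hybrid broadcast and broadband systems is to anchor both outputs to a shared wall clock reference; the toy sketch below assumes the main device shares a playback start timestamp with the companion device over the pairing channel, and that every media segment is 4 seconds long. Both of those details are assumptions made for illustration only.

    import time

    SEGMENT_DURATION = 4.0   # seconds per media segment, assumed fixed here

    def present_in_sync(wall_clock_start: float, media_urls: list, engine):
        """Present segment i at wall_clock_start + i * SEGMENT_DURATION."""
        for i, url in enumerate(media_urls):
            due = wall_clock_start + i * SEGMENT_DURATION
            delay = due - time.time()
            if delay > 0:
                time.sleep(delay)        # wait until this segment is due
            engine.feed(fetch(url))      # fetch() and engine as sketched above

    # wall_clock_start would be exchanged over the pairing channel when the
    # main device begins outputting the main video in step S117.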
FIG. 3 shows a network configuration for providing a sign language broadcast service according to another embodiment of the present invention, and FIG. 4 is a flowchart illustrating a process of providing a sign language broadcast service according to another embodiment of the present invention.
Since FIGS. 3 and 4 are similar to FIGS. 1 and 2, description overlapping with FIGS. 1 and 2 is omitted.
Referring to FIG. 3, the signaling encoder 101 encodes main video signaling information, sign language video signaling information, and VR sign language video signaling information. The broadcast signal transmission device 107 broadcasts, over the broadcast network 200, a broadcast signal in which the encoded signals output by the signaling encoder 101 and the AV encoder 103 are multiplexed.
In addition, the sign language video encoder 105 encodes and outputs the sign language video, and the VR sign language video encoder 111 encodes and outputs the virtual reality type sign language video.
The sign language video transmission device 109 streams the sign language video at the request of the companion device 505, and streams the virtual reality type sign language video at the request of the VR device 507. A sketch of how the main device might route the two MPDs to the two device types follows.
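To make the two-MPD case concrete, the sketch below routes MPD1 (the 2D sign language service) to paired companion devices and MPD2 (the VR sign language service) to paired VR devices, assuming each paired device reported a device_type string during pairing. The device_type attribute and the send() method are hypothetical details introduced only for this example.

    def distribute_mpds(paired_devices, mpd_2d: str, mpd_vr: str):
        """Send the 2D MPD to companion devices and the VR MPD to VR devices.

        paired_devices: iterable of objects with a device_type attribute
        ("companion" or "vr", assumed to be reported at pairing time) and a
        send(payload) method bound to the pairing channel.
        """
        for dev in paired_devices:
            if dev.device_type == "vr":
                dev.send(mpd_vr)     # MPD2: virtual reality sign language video
            else:
                dev.send(mpd_2d)     # MPD1: 2D sign language video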
Referring to FIG. 4, the main device 300 searches for nearby companion devices 505 and 507 and pairs with the discovered companion devices 505 and 507 (S201, S203, S205, S207).
The main device 300 extracts, from the signaling information of the broadcast signal broadcast (S209) by the broadcast signal transmission device 107, MPD1 for the sign language video service and MPD2 for the VR sign language video service (S211).
The main device 300 transmits the extracted (S211) MPD1 to the paired (S203) companion device 505 (S213). The paired companion device 505 then parses and interprets the received MPD (S215) and, based on the parsed content, requests the sign language video from the sign language video transmission device 109 (S217). The sign language video transmission device 109 streams the requested (S217) sign language video to the companion device 505 (S219).
The main device 300 also transmits the extracted (S211) MPD2 to the paired (S207) VR device 507 (S221). The paired VR device 507 then parses and interprets the received MPD (S223) and, based on the parsed content, requests the sign language video from the sign language video transmission device 109 (S225). The sign language video transmission device 109 streams the requested (S225) sign language video to the VR device 507 (S227).
Thereafter, the main device 300 outputs the main video of the broadcast signal received in step S209 to its screen (S229), and the companion devices 505 and 507 paired with the main device 300 output the sign language videos received in steps S219 and S227 to their screens, respectively (S231, S233). Steps S229, S231, and S233 are synchronized.
Meanwhile, FIG. 5 is a hardware block diagram of a broadcasting system to which an embodiment of the present invention can be applied, and shows the hardware configuration of the broadcast system 100 described with reference to FIGS. 1 to 4.
Referring to FIG. 5, the broadcast system 600 includes a communication device 601, a memory 603, a storage device 605, and at least one processor 607. The communication device 601 is connected to the at least one processor 607 and transmits and receives data. The memory 603 is connected to the at least one processor 607 and stores a program containing instructions for executing the configurations and/or methods according to the embodiments described with reference to FIGS. 1 to 4. The program, in combination with hardware such as the memory 603, the storage device 605, and the at least one processor 607, implements the present invention.
FIG. 6 is a hardware block diagram of a device to which an embodiment of the present invention can be applied, and shows the hardware configuration of the main device 300 and the companion devices 501, 503, 505, and 507 described with reference to FIGS. 1 to 4.
Referring to FIG. 6, the device 700 is composed of hardware including a communication device 701, a memory device 703, an input device 705, a display 707, and at least one processor 709, and stores a program that is executed in combination with this hardware at a designated location.
The communication device 701 is connected to the at least one processor 709, receives a broadcast signal through the broadcast network 200, and receives a media signal through the communication network 400.
The memory device 703 is connected to the processor 709 and stores a program containing instructions that execute the configurations and/or methods according to the embodiments described with reference to FIGS. 1 to 4. The program implements the present invention in combination with hardware such as the memory device 703 and the processor 709.
The input device 705 is connected to the processor 709 and serves as the means for user input operations according to the embodiments described with reference to FIGS. 1 to 4.
The display 707 is connected to the processor 709 and outputs the data according to the embodiments described with reference to FIGS. 1 to 4 to the screen. The input device 705 and the display 707 may be implemented as a single device.
The processor 709 executes the present invention in combination with hardware such as the memory device 703.
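A corresponding structural sketch of the device 700, again with assumed class and method names, highlights the two reception paths of the communication device 701 (the broadcast network 200 and the communication network 400) and the output through the display 707; it is an illustration, not the disclosed device.

```python
from dataclasses import dataclass

@dataclass
class ReceiverDevice:
    """Structural stand-in for the device of FIG. 6 (components 701-709)."""
    name: str

    def receive_broadcast(self) -> str:
        # 701 over the broadcast network 200: main video plus signaling.
        return "<broadcast signal>"

    def receive_media(self, url: str) -> str:
        # 701 over the communication network 400: streamed sign language video.
        return f"<media from {url}>"

    def display(self, frame: str) -> None:
        # 707: render to screen; 705 (input) may share the same touch panel.
        print(f"[{self.name}] showing {frame}")
```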
The embodiments of the present invention described above are not implemented only through an apparatus and a method; they may also be implemented through a program that realizes the functions corresponding to the configurations of the embodiments, or through a recording medium on which such a program is recorded.
Although the embodiments of the present invention have been described in detail above, the scope of the present invention is not limited thereto, and various modifications and improvements made by those skilled in the art using the basic concept of the present invention as defined in the following claims also fall within the scope of the present invention.

Claims (10)

  1. A method for a device operated by at least one processor to provide a sign language broadcast service, the method comprising:
    pairing with at least one companion device;
    extracting signaling information of a sign language video service from a broadcast signal received through a broadcast network; and
    transmitting the extracted signaling information to the at least one paired companion device,
    wherein the extracted signaling information is used by the at least one companion device to access a server that streams a sign language video and to receive the sign language video.
  2. The method of claim 1, wherein
    the extracting comprises extracting an MPD (Media Presentation Description) of the sign language video service from service level signaling information, and
    the MPD is transmitted to the at least one companion device.
  3. The method of claim 1, wherein
    the extracting comprises extracting, from service level signaling information, a first MPD (Media Presentation Description) of a 2D sign language video service and a second MPD of a virtual reality (VR) sign language video service, and
    the first MPD and the second MPD are transmitted to different companion devices and are used to receive a 2D sign language video and a virtual reality sign language video, respectively.
  4. A method for a broadcast server operated by at least one processor to provide a sign language broadcast service, the method comprising:
    transmitting, to a main device through a broadcast network, a broadcast signal to which signaling information for a sign language video service has been added;
    receiving, from a companion device paired with the main device, a sign language video request based on the signaling information; and
    transmitting the requested sign language video to the companion device,
    wherein the companion device receives, from the broadcast server, a sign language video describing the broadcast screen output on the main device and outputs the sign language video to its own screen.
  5. The method of claim 4, wherein
    the signaling information includes first signaling information for a sign language video service of a first content type and second signaling information for a sign language video service of a second content type different from the first content type,
    the receiving comprises receiving a sign language video request based on the first signaling information from a first companion device, and receiving a sign language video request based on the second signaling information from a second companion device different from the first companion device, and
    the transmitting to the companion devices comprises transmitting the sign language video requested with the first signaling information and the sign language video requested with the second signaling information, respectively.
  6. The method of claim 4, wherein
    the transmitting to the main device comprises further including an MPD (Media Presentation Description) of the sign language video service in service level signaling information of the broadcast signal.
  7. A broadcast system comprising:
    a signaling encoder that encodes first signaling information for a main video and second signaling information for a sign language video describing the main video;
    a broadcast signal transmission device that transmits, to a main device through a broadcast network, a broadcast signal in which the main video, the encoded first signaling information, and the encoded second signaling information are multiplexed; and
    a sign language video transmission device that receives, from a companion device paired with the main device, a sign language video request based on the second signaling information and transmits the requested sign language video to the companion device.
  8. The broadcast system of claim 7, wherein
    the sign language video transmission device is a DASH server that transmits the sign language video using the MPEG (Moving Picture Experts Group)-DASH (Dynamic Adaptive Streaming over HTTP) protocol.
  9. The broadcast system of claim 7, wherein
    the signaling encoder further encodes third signaling information for a virtual reality (VR) type sign language video describing the main video,
    the sign language video transmission device receives, from a virtual reality device paired with the main device, a sign language video request based on the third signaling information and transmits the requested sign language video to the virtual reality device, and
    the main video, the sign language video, and the virtual reality type sign language video are output simultaneously on their respective devices.
  10. The broadcast system of claim 9, wherein
    the second signaling information and the third signaling information are included in service level signaling information of the broadcast signal.
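As a hedged illustration of the broadcast system of claims 7 and 8, the sign language video transmission device can be approximated by a plain HTTP file server that exposes an MPD and its media segments. The directory layout, file names, and port below are assumptions for the sketch; a production MPEG-DASH origin would additionally manage MIME types, segment availability windows, and live timeline updates.

```python
import functools
import http.server

def serve_sign_language_dash(content_dir: str = "./sign_language_dash",
                             port: int = 8080) -> None:
    """Serve an MPD and its media segments from a local directory over HTTP."""
    # content_dir is expected to hold e.g. manifest.mpd plus segment files.
    handler = functools.partial(http.server.SimpleHTTPRequestHandler,
                                directory=content_dir)
    with http.server.ThreadingHTTPServer(("", port), handler) as server:
        server.serve_forever()
```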
PCT/KR2018/015611 2017-12-08 2018-12-10 Method and system for providing sign language broadcast service using companion screen service WO2019112398A1 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
KR20170168601 2017-12-08
KR10-2017-0168601 2017-12-08
KR10-2018-0157994 2018-12-10
KR1020180157994A KR102153708B1 (en) 2017-12-08 2018-12-10 Method and system for providing sign language broadcast service using companion screen service

Publications (1)

Publication Number Publication Date
WO2019112398A1 true WO2019112398A1 (en) 2019-06-13

Family

ID=66751695

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2018/015611 WO2019112398A1 (en) 2017-12-08 2018-12-10 Method and system for providing sign language broadcast service using companion screen service

Country Status (1)

Country Link
WO (1) WO2019112398A1 (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20110115103A (en) * 2010-04-14 2011-10-20 삼성전자주식회사 Method and apparatus for generating broadcasting bitstream for digital tv captioning, method and apparatus for receiving broadcasting bitstream for digital tv captioning
KR20130056829A (en) * 2011-11-22 2013-05-30 한국전자통신연구원 Transmitter/receiver for 3dtv broadcasting, and method for controlling the same
KR20160111462A (en) * 2014-03-10 2016-09-26 엘지전자 주식회사 Broadcast reception device and operating method thereof, and companion device interoperating with the broadcast reception device and operating method thereof
KR20160074532A (en) * 2014-04-09 2016-06-28 엘지전자 주식회사 Broadcast signal transmission apparatus, broadcast signal reception apparatus, broadcast signal transmission method, and broadcast signal reception method
KR20160091929A (en) * 2014-11-13 2016-08-03 엘지전자 주식회사 Broadcasting signal transmission device, broadcasting signal reception device, broadcasting signal transmission method, and broadcasting signal reception method

Similar Documents

Publication Publication Date Title
WO2016129981A1 (en) Method and device for transmitting/receiving media data
WO2015147590A1 (en) Broadcast and broadband hybrid service with mmt and dash
WO2013015546A2 (en) Method and system for providing additional information on broadcasting content
WO2013077525A1 (en) Control method and device using same
WO2011136496A2 (en) Method and apparatus for playing live content
WO2013077524A1 (en) User interface display method and device using same
WO2014171803A1 (en) Method and apparatus for transmitting and receiving additional information in a broadcast communication system
US20020144291A1 (en) Network publication of data synchronized with television broadcasts
WO2011115424A2 (en) Content output system and codec information sharing method in same system
WO2018169255A1 (en) Electronic apparatus and control method thereof
CN106464933B (en) Apparatus and method for remotely controlling rendering of multimedia content
WO2012121571A2 (en) Method and device for transmitting/receiving non-real-time stereoscopic broadcasting service
WO2014021624A1 (en) Method and apparatus of providing broadcasting and communication convergence service
WO2011159093A2 (en) Hybrid delivery mechanism in a multimedia transmission system
WO2013172581A1 (en) Content receiving device, display device, and method thereof
WO2003032576A1 (en) Service information multicasting method and system
WO2013154364A1 (en) Streaming playback method and computing apparatus using same
WO2019112398A1 (en) Method and system for providing sign language broadcast service using companion screen service
WO2017047848A1 (en) Zapping advertisement system using multiplexing characteristics
WO2016018102A1 (en) System for cloud streaming-based broadcast-associated service, client apparatus for broadcast-associated service, trigger content provision server and method utilizing same
KR102153708B1 (en) Method and system for providing sign language broadcast service using companion screen service
KR20030055645A (en) Broadcast receiving apparatus adapted to preference and capacities of personal multimedia device and broadcast service method using the same apparatus
WO2011132973A2 (en) Method and apparatus for transmitting and receiving service discovery information in multimedia transmission system and file structure for the same
WO2010074399A2 (en) Apparatus and method for multiplexig and demultiplxeing based on digitgal multimedia broadcasting
WO2015186986A1 (en) Method and apparatus for providing backward compatibility for hybrid broadcast

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18885245

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18885245

Country of ref document: EP

Kind code of ref document: A1
