WO2023017578A1 - Device, method and program for combining video signals - Google Patents
Device, method and program for combining video signals
- Publication number
- WO2023017578A1 (PCT/JP2021/029618, filed as JP2021029618W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- screen
- sub
- video
- video signals
- screens
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/66—Transforming electric information into light information
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/431—Generation of visual interfaces for content selection or interaction; Content or additional data rendering
- H04N21/4312—Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
- H04N21/4316—Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations for displaying supplemental content in a region of the screen, e.g. an advertisement in a separate window
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G3/00—Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
- G09G3/20—Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix no fixed position being assigned to or needed to be assigned to the individual characters or partial characters
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/4302—Content synchronisation processes, e.g. decoder synchronisation
- H04N21/4307—Synchronising the rendering of multiple content streams or additional data on devices, e.g. synchronisation of audio on a mobile phone with the video output on the TV screen
- H04N21/43072—Synchronising the rendering of multiple content streams or additional data on devices, e.g. synchronisation of audio on a mobile phone with the video output on the TV screen of multiple content streams on the same device
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/435—Processing of additional data, e.g. decrypting of additional data, reconstructing software from modules extracted from the transport stream
- H04N21/4355—Processing of additional data, e.g. decrypting of additional data, reconstructing software from modules extracted from the transport stream involving reformatting operations of additional data, e.g. HTML pages on a television screen
- H04N21/4356—Processing of additional data, e.g. decrypting of additional data, reconstructing software from modules extracted from the transport stream involving reformatting operations of additional data, e.g. HTML pages on a television screen by altering the spatial resolution, e.g. to reformat additional data on a handheld device, attached to the STB
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/04—Synchronising
- H04N5/12—Devices in which the synchronising signals are only operative if a phase difference occurs between synchronising and synchronised scanning devices, e.g. flywheel synchronising
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/44—Receiver circuitry for the reception of television signals according to analogue transmission standards
- H04N5/445—Receiver circuitry for the reception of television signals according to analogue transmission standards for displaying additional information
- H04N5/45—Picture in picture, e.g. displaying simultaneously another television channel in a region of the screen
Definitions
- the present disclosure relates to a video synthesizer that synthesizes one screen from a plurality of video input signals and outputs the result.
- the video signal of such a video device transmits one screen over a time equal to one frame period. For example, in the case of a video signal of 60 frames per second (hereinafter 60 fps (frames per second)), the video of one screen is transmitted in 1/60 second, that is, approximately 16.7 milliseconds.
- screen synthesis is performed, for example, by splitting one screen to display a plurality of images side by side, or by embedding other images, reduced in size, within a given image screen.
- because the video signals are not synchronized with one another, the signals to be synthesized arrive at different timings; they are therefore temporarily buffered in memory or the like and then synthesized. As a result, a delay occurs in the output of the composite screen.
- the delay associated with this synthesis greatly impairs the feasibility of such applications.
- at a musical tempo of 120 BPM (beats per minute), for example, one beat lasts only 500 milliseconds, so synthesis delay is readily perceptible.
- in addition to the processing related to synthesis, the end-to-end time from camera capture to display includes other delays such as image processing time in the camera, display time on the monitor, and transmission time.
- as a result, with conventional technology it is difficult to perform cooperative work in applications where timing is important, such as ensemble performance while viewing video from remote locations.
- the purpose of the present disclosure is to reduce the time delay from video input of asynchronous video to its composite video output.
- the apparatus and method of the present disclosure provide a device for synthesizing a plurality of asynchronously input video signals into a video signal displayed on one screen,
- wherein the one screen is composed of a plurality of sub-screens, more numerous than the plurality of video signals,
- and the plurality of video signals are arranged on those sub-screens such that the output delay of each video signal is reduced, and the plurality of video signals are synthesized.
- the device of the present disclosure can also be realized by a computer and a program, and the program can be recorded on a recording medium or provided through a network.
- the program of the present disclosure is a program for causing a computer to function as each functional unit provided in the device according to the present disclosure, and for causing the computer to execute each step included in the method executed by that device.
- Fig. 1 shows an example of screen information included in a video signal.
- Fig. 2 shows a system configuration example of the present disclosure.
- Fig. 3 shows an example of combining four input frames into one output frame.
- Fig. 4 shows an example of sub-screens obtained by dividing one screen.
- Fig. 5 shows an example of input frames and an output frame.
- Fig. 6 shows an example of placement on sub-screens.
- Fig. 7 shows an example of input frames and an output frame.
- Fig. 8 shows an example of placement on sub-screens.
- Fig. 9 shows a configuration example of the video synthesizer.
- Fig. 1 shows an example of screen information included in a video signal.
- Information on the screen is transmitted by scanning the screen in the horizontal direction for each scanning line 21 and sequentially scanning the scanning lines 21 below.
- This scan covers not only the display screen 24 but also overhead information such as the blanking portion 22 and the border portion 23.
- Information other than video information, such as control information and audio information, may be included in the blanking portion 22 (for example, see Non-Patent Document 1).
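The share of each frame occupied by this overhead can be quantified. The raster numbers below are the standard 1080p60 timing (2200×1125 total samples, 1920×1080 active), used purely as an illustration; they are not values taken from this disclosure.

```python
# Fraction of a 1080p60 frame spent on blanking/border overhead versus the
# active display area (the "display screen 24" in Fig. 1). The 2200x1125
# total raster for 1920x1080 active video is the standard 1080p60 timing.
total_w, total_h = 2200, 1125    # samples per line, lines per frame (incl. blanking)
active_w, active_h = 1920, 1080  # visible display area

active_fraction = (active_w * active_h) / (total_w * total_h)
blanking_fraction = 1.0 - active_fraction
print(f"active: {active_fraction:.1%}, overhead: {blanking_fraction:.1%}")
```

Roughly one sixth of each frame period is blanking, which is why control or audio information can be carried there without disturbing the picture.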
- Fig. 2 shows a system configuration example of the present disclosure.
- four video signals V1 to V4 are input to the video synthesizer 10, and the video synthesizer 10 synthesizes and outputs the video signal displayed on one screen 20.
- one screen is transmitted over a time equal to one frame period. For example, in the case of a video signal of 60 frames per second (hereinafter 60 fps), the video signal for one screen is transmitted over 1/60 second, that is, about 16.7 milliseconds.
- the information of one screen at each time included in the video signal is called a "frame"
- the information of one screen of each video signal input to the video synthesizer 10 is called an “input frame”
- the synthesized information for one screen output from the video synthesizer 10 is called an "output frame".
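The frame-period arithmetic used throughout the description can be sketched as follows; `frame_period_ms` is an illustrative helper, not a function from this disclosure.

```python
# One frame is transmitted over a period T_f = 1/fps.
def frame_period_ms(fps: float) -> float:
    """Time to transmit one frame, in milliseconds."""
    return 1000.0 / fps

print(round(frame_period_ms(60), 1))  # ≈ 16.7 ms per frame at 60 fps
print(round(frame_period_ms(30), 1))  # ≈ 33.3 ms per frame at 30 fps
```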
- Fig. 3 shows an example of inputting four videos with different timings, synthesizing them into one screen, and outputting them.
- the video synthesizing device 10 reads all input video screens, synthesizes them, and outputs them.
- the output frame is delayed by at most 2T_f + T_p from the input time of the first input frame, where T_f is the frame period and T_p is the processing time.
- the combined video therefore includes a delay of two frame times or more, that is, 33.3 milliseconds or more at 60 fps.
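Assuming, as the surrounding text suggests, that T_f is the frame period and T_p the synthesis processing time, the conventional worst-case bound can be checked numerically. The function name is illustrative.

```python
# Worst-case output delay of conventional full-frame synthesis: an input frame
# must first be received in full (up to one frame period), then read out as
# part of the next output frame (another frame period), plus processing T_p.
def worst_case_delay_ms(fps: float, t_p_ms: float = 0.0) -> float:
    t_f = 1000.0 / fps           # frame period T_f in milliseconds
    return 2 * t_f + t_p_ms      # the 2*T_f + T_p bound from the description

# At 60 fps the combined video is delayed by two frame times or more:
print(round(worst_case_delay_ms(60), 1))  # ≈ 33.3 ms even with T_p = 0
```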
- FIG. 4 shows an example of the screen 20 of this embodiment.
- This embodiment shows an example in which the screen 20 is divided into nine sub-screens of 3×3.
- one screen 20 shown in Fig. 2 is composed of five or more sub-screens, that is, more sub-screens than the four video signals V1 to V4.
- the sub-screens arranged horizontally on the same scanning lines are regarded as one group, called a "sub-screen group".
- when the video synthesizer 10 synthesizes the four video signals of inputs 1 to 4, the horizontally arranged sub-screens D1-1, D1-2, and D1-3 form a sub-screen group G1, the sub-screens D2-1, D2-2, and D2-3 form a sub-screen group G2, and the sub-screens D3-1, D3-2, and D3-3 form a sub-screen group G3. The data of the output frame is output first from the topmost sub-screen group G1, followed by the sub-screen groups G2 and G3.
- the present disclosure is a system that inputs a plurality of asynchronous videos and synthesizes those images, characterized by arranging them from the top to the bottom of the screen 20 in order of earliest input timing so that the output delay is reduced.
- because the number of output sub-screens is greater than the number of input frames, some areas of the screen 20 may remain unused by any input frame.
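The placement rule described above can be sketched as follows: each input is assigned to the topmost sub-screen group whose readout finishes after the input's data has fully arrived. This is a hypothetical sketch using the 3×3, 60 fps example; the function and the timing values are illustrative, not taken from the disclosure.

```python
# For each asynchronous input, pick the topmost sub-screen group whose readout
# completes after the input's data has fully arrived. Times are in
# milliseconds, measured within one output frame.
def earliest_group(input_complete, group_readout_ends):
    """group_readout_ends: readout-completion times of the groups, top to bottom."""
    for g, end in enumerate(group_readout_ends):
        if input_complete <= end:    # data ready before this group finishes scanning out
            return g
    return None                      # too late for this frame: wait for the next one

# Three groups of a 3x3 split at 60 fps: each group spans about a third of
# the 16.7 ms frame period.
ends = [5.6, 11.1, 16.7]
print([earliest_group(t, ends) for t in (2.0, 7.0, 12.0)])  # → [0, 1, 2]
```

An input that completes early lands in G1 near the top of the screen, while later inputs fall through to G2 or G3, which is exactly the top-to-bottom ordering by input timing described in the text.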
- FIGS. 5 and 6 show examples of screen synthesis according to the present disclosure.
- FIGS. 5 and 6 show the output timing of an output frame obtained by arranging four input frames in ascending order of input timing and synthesizing them.
- Input 3 is output to the sub-screen group G2 because the data input can be completed by the time t5 when the output of the sub-screen group G2 is completed.
- it can be arranged on the leftmost sub-screen D2-1 of the sub-screen group G2.
- this arrangement is arbitrary within the same sub-screen group G2.
- the central and rightmost sub-screens D2-2 and D2-3 of the sub-screen group G2 are blank.
- Input 4 is output to the sub-screen group G3 because the data input can be completed by the time t6 when the output of the sub-screen group G3 is completed.
- it can be arranged on the leftmost sub-screen D3-1 of the sub-screen group G3.
- this arrangement is arbitrary within the same sub-screen group G3.
- the central and rightmost sub-screens D3-2 and D3-3 of the sub-screen group G3 are blank.
- each input frame is thus arranged so as to form a composite screen with the shortest delay.
- when more screens arrive than fit in the sub-screen group with the shortest delay, they can be arranged step by step across sub-screen groups. For example, as shown in Fig. 7, if the frames of inputs 1 to 4 all arrive at the same input timing, only three of them can be arranged in the sub-screen group G1. In such a case, as shown in Fig. 8, the remaining one can be placed in the adjacent sub-screen group G2 and output; in the figure, only input 4 is arranged in sub-screen group G2. This reduces the average delay.
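The stepwise overflow just described can be sketched as a greedy assignment with a per-group capacity, ignoring per-group deadlines for simplicity. The function `assign` and its parameters are illustrative, not from the disclosure.

```python
# Assign inputs (sorted by arrival time) to sub-screen groups top to bottom,
# at most `cols` per group; when a group is full, spill into the next group,
# as in the Fig. 7/8 example where four simultaneous inputs place three in G1
# and one in G2.
def assign(arrivals, n_groups=3, cols=3):
    order = sorted(range(len(arrivals)), key=lambda i: arrivals[i])
    groups = [[] for _ in range(n_groups)]
    g = 0
    for i in order:
        while g < n_groups and len(groups[g]) >= cols:
            g += 1               # current group full: overflow downward
        if g == n_groups:
            break                # no room this frame; remaining inputs wait
        groups[g].append(i + 1)  # store 1-based input numbers
    return groups

# Four inputs arriving simultaneously: G1 holds inputs 1-3, input 4 spills to G2.
print(assign([0.0, 0.0, 0.0, 0.0]))  # → [[1, 2, 3], [4], []]
```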
- the method of the present disclosure arranges each video signal that is in time for the output timing of a sub-screen group on any one of the sub-screens included in that group; the placement may be changed for each frame.
- FIG. 9 shows a configuration example of the video synthesizing device 10 according to this embodiment.
- the video synthesizing device 10 according to this embodiment includes a detection unit 101 , a crossbar switch 102 , an up/down converter 103 , a buffer 104 and a pixel synthesizing unit 105 .
- although the figure shows four inputs and one output, any number of inputs and outputs may be used.
- the detection unit 101 detects the input order of the N inputs within one frame time.
- the crossbar switch 102 rearranges the inputs according to the detection results from the detection unit 101 and outputs them in that order.
- An up-down converter 103 scales the number of pixels to an arbitrary size.
- the crossbar switch 102 and the up/down converter 103 may be connected in the reverse order with respect to the inputs (a, b, c, d, ...). That is, the inputs a, b, c, and d may first be scaled by the converter 103 and then rearranged into input order by the switch 102 and output.
- the buffer 104 buffers the inputs from 103 (or 102) and can output them in any order.
- the pixel synthesizing unit 105 reads pixel data out of the buffer 104 in the output order of the entire output screen, synthesizes them, and outputs the result; the timing is as described above. The unit 105 may add an arbitrary control signal to the blanking portion of the screen.
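The pipeline of Fig. 9 might be sketched as below. All names are illustrative, and the scaling stage is a crude decimation placeholder standing in for the actual up/down conversion.

```python
# Minimal sketch of the Fig. 9 pipeline: detect input order within the frame
# time (101), reorder via the crossbar switch (102), scale to sub-screen size
# (103), buffer (104), then read out in screen order (105).
def detect_order(arrival_times):           # detection unit 101
    return sorted(range(len(arrival_times)), key=lambda i: arrival_times[i])

def crossbar(frames, order):               # crossbar switch 102: reorder inputs
    return [frames[i] for i in order]

def scale(frame, factor):                  # up/down converter 103 (placeholder:
    return [row[::factor] for row in frame[::factor]]  # simple decimation)

def synthesize(frames, arrival_times, factor=2):
    ordered = crossbar(frames, detect_order(arrival_times))
    buffered = [scale(f, factor) for f in ordered]     # buffer 104
    return buffered                        # pixel synthesizing unit 105 reads
                                           # these out in sub-screen order

f = [[1, 2], [3, 4]]                       # a tiny 2x2 "frame"
out = synthesize([f, f], [5.0, 1.0])
print(len(out))  # → 2
```

As noted in the text, the scale and reorder stages commute, so 102 and 103 could be swapped without changing the result.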
- the video synthesizing device 10 of the present disclosure can also be realized by a computer and a program, and the program can be recorded on a recording medium or provided through a network.
- the system according to the present disclosure can shorten the delay time from asynchronous video input signals to the synthesized output.
- in a system that synthesizes multiple screens from multiple sites, this enables cooperative work with strict low-delay requirements, and in particular with stricter low-delay requirements for specific inputs.
- the present disclosure is a system for inputting a plurality of asynchronous videos and synthesizing those images, and arranging the images from the top to the bottom of the screen 20 in order of early input timing so as to reduce the output delay.
- the present disclosure enables cooperative work with strict low-delay requirements in a system that synthesizes multiple screens at multiple sites.
- This disclosure can be applied to the information and communications industry.
- 10: Video synthesizer; 20: Screen; 21: Scanning line; 22: Blanking portion; 23: Border portion; 24: Display screen; 101: Detection unit; 102: Crossbar switch; 103: Up/down converter; 104: Buffer; 105: Pixel synthesizing unit
Abstract
Description
A device for synthesizing a plurality of asynchronously input video signals into a video signal displayed on one screen,
wherein the one screen is composed of a plurality of sub-screens more numerous than the plurality of video signals,
and the plurality of video signals are arranged on sub-screens, among the plurality of sub-screens, such that the output delay of each video signal is reduced, and the plurality of video signals are synthesized.
102 is a crossbar switch, whose function is to rearrange and output the inputs according to the detection results of the input order from 101.
103 is an up/down converter that scales the number of pixels to an arbitrary size.
102 and 103 may be connected in the reverse order with respect to the inputs (a, b, c, d, ...). That is, the inputs a, b, c, and d may first be scaled by 103 and then rearranged into input order by 102 and output.
104 is a buffer. The inputs from 103 or 102 can be buffered and output in any order.
105 is a pixel synthesizing unit. Pixel data are read out from 104 in the order in which the entire output screen is output, synthesized, and output. This timing is as described above. 105 may add an arbitrary control signal to the blanking portion of the screen.
The system according to the present disclosure can shorten the delay time from asynchronous video input signals to the synthesized output. This enables, in a system that synthesizes multiple screens from multiple sites, cooperative work with strict low-delay requirements, and in particular with stricter low-delay requirements for specific inputs.
In a system that synthesizes and displays video from multiple sites, cooperative work with strict low-delay requirements, such as an ensemble performance, requires low-delay synthesis processing. The present disclosure is a system that inputs a plurality of asynchronous videos and synthesizes those images, arranging them from the top to the bottom of the screen 20 in order of earliest input timing so as to reduce the output delay. The present disclosure thereby enables cooperative work with strict low-delay requirements in a system that synthesizes multiple screens from multiple sites.
20: Screen
21: Scanning line
22: Blanking portion
23: Border portion
24: Display screen
101: Detection unit
102: Crossbar switch
103: Up/down converter
104: Buffer
105: Pixel synthesizing unit
Claims (6)
- A device for synthesizing a plurality of asynchronously input video signals into a video signal displayed on one screen,
wherein the one screen is composed of a plurality of sub-screens more numerous than the plurality of video signals,
and the plurality of video signals are arranged on sub-screens, among the plurality of sub-screens, such that the output delay of each video signal is reduced, and the plurality of video signals are synthesized.
- The device according to claim 1, wherein the plurality of video signals are arranged from the top to the bottom of the plurality of sub-screens in order of earliest input timing.
- The device according to claim 1 or 2, wherein video signals included in the plurality of video signals are output for each sub-screen group constituting a part of the one screen,
and a video signal that is in time for the output timing of the sub-screen group is arranged on one of the sub-screens included in the sub-screen group.
- The device according to claim 3, wherein the sub-screen group is a set of sub-screens arranged on the same scanning lines of the screen.
- A method for synthesizing a plurality of asynchronously input video signals into a video signal displayed on one screen,
wherein the one screen is composed of a plurality of sub-screens more numerous than the plurality of video signals,
and the plurality of video signals are arranged on sub-screens, among the plurality of sub-screens, such that the output delay of each video signal is reduced, and the plurality of video signals are synthesized.
- A program for causing a computer to function as each functional unit provided in the device according to any one of claims 1 to 4.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2021/029618 WO2023017578A1 (ja) | 2021-08-11 | 2021-08-11 | 映像信号を合成する装置、方法及びプログラム |
JP2023541164A JPWO2023017578A1 (ja) | 2021-08-11 | 2021-08-11 | |
US18/681,662 US20240283890A1 (en) | 2021-08-11 | 2021-08-11 | Device, method and program for combining video signals |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2021/029618 WO2023017578A1 (ja) | 2021-08-11 | 2021-08-11 | 映像信号を合成する装置、方法及びプログラム |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2023017578A1 true WO2023017578A1 (ja) | 2023-02-16 |
Family
ID=85200079
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2021/029618 WO2023017578A1 (ja) | 2021-08-11 | 2021-08-11 | 映像信号を合成する装置、方法及びプログラム |
Country Status (3)
Country | Link |
---|---|
US (1) | US20240283890A1 (ja) |
JP (1) | JPWO2023017578A1 (ja) |
WO (1) | WO2023017578A1 (ja) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH11234654A (ja) * | 1998-02-19 | 1999-08-27 | Fujitsu Ltd | 多画面合成方法及び多画面合成装置 |
JP2001309368A (ja) * | 2000-04-26 | 2001-11-02 | Matsushita Electric Ind Co Ltd | 監視用デジタル画像記録再生装置 |
-
2021
- 2021-08-11 US US18/681,662 patent/US20240283890A1/en active Pending
- 2021-08-11 WO PCT/JP2021/029618 patent/WO2023017578A1/ja active Application Filing
- 2021-08-11 JP JP2023541164A patent/JPWO2023017578A1/ja active Pending
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH11234654A (ja) * | 1998-02-19 | 1999-08-27 | Fujitsu Ltd | 多画面合成方法及び多画面合成装置 |
JP2001309368A (ja) * | 2000-04-26 | 2001-11-02 | Matsushita Electric Ind Co Ltd | 監視用デジタル画像記録再生装置 |
Also Published As
Publication number | Publication date |
---|---|
JPWO2023017578A1 (ja) | 2023-02-16 |
US20240283890A1 (en) | 2024-08-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US5459477A (en) | Display control device | |
JP4646446B2 (ja) | 映像信号処理装置 | |
JP2004522365A (ja) | 多チャンネル入力の高画質多重画面分割装置及び方法 | |
JP4559976B2 (ja) | 映像合成装置、映像合成方法及び映像合成プログラム | |
WO2023017578A1 (ja) | 映像信号を合成する装置、方法及びプログラム | |
JPH0775014A (ja) | 映像表示装置、マルチ画面表示システム及び拡大処理回路 | |
JP6448189B2 (ja) | 映像処理装置 | |
WO2023013072A1 (ja) | 映像信号を合成する装置、方法及びプログラム | |
JP3685668B2 (ja) | マルチスクリーン用画面合成装置 | |
WO2023017577A1 (ja) | 映像信号を合成する装置、方法及びプログラム | |
JP7521604B2 (ja) | 映像信号を合成する装置、方法及びプログラム | |
KR102258501B1 (ko) | Fpga 기반의 다 채널 영상 조합 출력 장치 | |
WO2022137326A1 (ja) | 映像音響合成装置、方法及びプログラム | |
WO2022137325A1 (ja) | 映像信号を合成する装置、方法及びプログラム | |
JP7480908B2 (ja) | 映像合成装置及び映像合成方法 | |
JPH11355683A (ja) | 映像表示装置 | |
JP2003289553A (ja) | 映像データ処理装置及び立体映像表示システム | |
JP2878400B2 (ja) | マルチウインドウ表示装置 | |
JPH0359696A (ja) | 画像信号の合成装置 | |
JPH0470797A (ja) | 画像信号合成装置 | |
JP2737557B2 (ja) | 2画面表示テレビジョン受信機及び2画面処理回路 | |
KR0147152B1 (ko) | 메모리 어드레스를 이용한 다수화면분할 및 정지화면 구현 방법 | |
JPH0294974A (ja) | 画像表示装置 | |
JPH05176229A (ja) | 多入力映像信号表示装置 | |
JP6083288B2 (ja) | 映像効果装置及び映像効果処理方法 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 21953474 Country of ref document: EP Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2023541164 Country of ref document: JP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 18681662 Country of ref document: US |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 21953474 Country of ref document: EP Kind code of ref document: A1 |