TWI740623B - Apparatus, method, and computer program product thereof for integrating videos - Google Patents


Info

Publication number
TWI740623B
TWI740623B TW109129154A
Authority
TW
Taiwan
Prior art keywords
virtual camera
main
imaging
video
main virtual
Prior art date
Application number
TW109129154A
Other languages
Chinese (zh)
Other versions
TW202209859A (en)
Inventor
劉記顯
王上銘
Original Assignee
財團法人資訊工業策進會
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 財團法人資訊工業策進會 filed Critical 財團法人資訊工業策進會
Priority to TW109129154A priority Critical patent/TWI740623B/en
Application granted granted Critical
Publication of TWI740623B publication Critical patent/TWI740623B/en
Publication of TW202209859A publication Critical patent/TW202209859A/en

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

An apparatus, method, and computer program product thereof for integrating videos are provided. The apparatus includes a processor and a transceiving interface, wherein the processor is electrically connected to the transceiving interface. The processor sets up a main virtual camera and a plurality of render objects in a three-dimensional virtual space, and the processor renders a plurality of video streams on the render objects one-to-one. The processor derives a main video stream by enabling the main virtual camera to shoot all or a part of the render objects. The transceiving interface outputs the main video stream.

Description

Video integration device, method, and computer program product thereof

The present invention relates to a video integration device, a video integration method, and a computer program product thereof. More specifically, the present invention relates to a video integration device, method, and computer program product that exploit the ability of a main virtual camera to move freely in a three-dimensional virtual space.

With the rapid development of network and multimedia technology, the demand for actively delivering video streams to users via push technology grows by the day. When video streams from multiple sources need to be played together, those streams must first be integrated before various viewing services (for example, a service that switches among the streams) can be provided. Conventional video integration techniques, however, must re-encode and stitch the streams before the integrated stream can be delivered to users, and this kind of integration requires a large amount of computation, which delays the push delivery of the video stream.

In view of this, providing a video integration technology that integrates video streams from multiple sources, lets users operate it easily and freely, enhances the user's viewing experience, and preserves the interactive experience with users is an important topic in this field.

One objective of the present invention is to provide a video integration device. The video integration device comprises a processor and a transceiving interface, wherein the processor is electrically connected to the transceiving interface. The processor sets up a main virtual camera and a plurality of render objects in a three-dimensional virtual space and renders a plurality of video streams on the render objects one-to-one. The processor makes the main virtual camera shoot all or a part of the render objects to obtain a main video stream. The transceiving interface outputs the main video stream.

Another objective of the present invention is to provide a video integration method, which is executed by an electronic device. The video integration method comprises the following steps: setting up a main virtual camera and a plurality of render objects in a three-dimensional virtual space, rendering a plurality of video streams on the render objects one-to-one, making the main virtual camera shoot all or a part of the render objects to obtain a main video stream, and outputting the main video stream.

Yet another objective of the present invention is to provide a computer program product. After an electronic device loads the computer program product, the electronic device executes the program instructions comprised in the computer program product to perform a video integration method. The video integration method comprises the following steps: setting up a main virtual camera and a plurality of render objects in a three-dimensional virtual space, rendering a plurality of video streams on the render objects one-to-one, making the main virtual camera shoot all or a part of the render objects to obtain a main video stream, and outputting the main video stream.

The video integration technology provided by the present invention (including the device, the method, and the computer program product thereof) sets up a main virtual camera and a plurality of render objects in a three-dimensional virtual space and, exploiting the ability of render objects to present video and image content in the three-dimensional virtual space, renders a plurality of video streams on the render objects one-to-one. The technology further exploits the ability of a virtual camera to move freely in the three-dimensional virtual space, making the main virtual camera shoot all or a part of the render objects to obtain a main video stream. The technology then outputs the main video stream, for example, by pushing it to clients. Because the camera work is performed with three-dimensional object translation, the main virtual camera can switch scenes and combine multiple scenes quickly as it moves through the three-dimensional virtual space, with no re-encoding or stitching. The video integration technology provided by the present invention therefore reduces the computational cost effectively and does not delay the output of the main video stream, thereby maintaining good output quality and providing viewers with a good viewing experience.

The detailed technology and embodiments of the present invention are described below in conjunction with the drawings so that a person having ordinary skill in the art can understand the features of the claimed invention.

1: video integration device

11: processor

13: transceiving interface

20: three-dimensional virtual space

30: main virtual camera

O1, O2, ..., O9: render objects

S1, S2, ..., S9: video streams

c1, c2, ..., c9: center coordinates

VA: shooting range

VS: main video stream

S201~S205: steps

FIG. 1A is a schematic view of the architecture of the video integration device 1 according to the first embodiment of the present invention.

FIG. 1B is a schematic view of the main virtual camera 30 and the render objects O1, O2, ..., O9 set up by the video integration device 1 in the three-dimensional virtual space 20.

FIG. 1C is a schematic view of the video integration device 1 rendering the video streams S1, S2, ..., S9 one-to-one on the render objects O1, O2, ..., O9.

FIG. 1D is a schematic view of the video integration device 1 making the main virtual camera 30 focus on the center coordinate of the render object O9.

FIG. 1E is a schematic view of the movement path of the main virtual camera 30 when its focus coordinate changes.

FIG. 1F is a schematic view of the shooting range VA of the main virtual camera 30.

FIG. 2 is a flowchart of the video integration method according to the second embodiment of the present invention.

The video integration device, method, and computer program product provided by the present invention are explained below through embodiments. These embodiments, however, are not intended to limit the present invention to any environment, application, or implementation described therein. The description of the following embodiments is therefore intended only to explain the present invention, not to limit its scope. It should be understood that, in the following embodiments and drawings, elements not directly related to the present invention are omitted from depiction, and the sizes of, and size ratios among, the depicted elements are provided only for ease of illustration and description and are not intended to limit the scope of the present invention.

The first embodiment of the present invention is the video integration device 1, whose architecture is depicted in FIG. 1A. The video integration device 1 comprises a processor 11 and a transceiving interface 13, and the processor 11 is electrically connected to the transceiving interface 13. The processor 11 may be any of various processors, central processing units (CPUs), microprocessor units (MPUs), digital signal processors (DSPs), or other computing devices known to a person having ordinary skill in the art. The transceiving interface 13 may be any interface that can work with the processor 11 and can receive and transmit signals, such as a universal serial bus interface or a network interface card, but is not limited thereto.

Please refer to FIG. 1B through FIG. 1E together. In this embodiment, the video integration device 1 can integrate video streams from multiple sources in a three-dimensional virtual space 20, shoot the integrated streams through a main virtual camera 30, and output the content shot by the main virtual camera 30. The video integration device 1 may integrate the video streams with a three-dimensional engine (for example, the Unity engine developed by the game engine developer Unity Technologies) or with any other technology capable of projecting video and images in a three-dimensional virtual space. The specific operations of the video integration device 1 are detailed below.

In this embodiment, the transceiving interface 13 of the video integration device 1 receives the nine video streams S1, S2, ..., S9 to be integrated. It should be noted that the present invention does not limit the number, sources, or kinds of the video streams to be integrated by the video integration device 1. In other words, the video integration device 1 may integrate some other number of video streams. Moreover, each of the video streams S1, S2, ..., S9 may come from a real camera, a virtual camera, a video platform, or another video source. If a video stream comes from the video integration device 1 itself (for example, from a virtual camera in the three-dimensional engine executed by the video integration device 1), that stream does not need to be received through the transceiving interface 13.

In addition, the processor 11 of the video integration device 1 sets up nine render objects O1, O2, ..., O9 in the three-dimensional virtual space 20 (see FIG. 1B). It should be noted that each of the render objects O1, O2, ..., O9 is an object capable of presenting video and image content in the three-dimensional virtual space 20, such as a render panel. The number of render objects may depend on the number of video streams (for example, the number of render objects may be greater than or equal to the number of video streams so that every video stream has a corresponding render object). In this embodiment, the number of render objects O1, O2, ..., O9 equals the number of video streams S1, S2, ..., S9. It should also be noted that the present invention limits neither the size and shape of each of the render objects O1, O2, ..., O9 nor the arrangement of the render objects O1, O2, ..., O9 in the three-dimensional virtual space 20.

In this embodiment, the processor 11 renders the video streams S1, S2, ..., S9 one-to-one on the render objects O1, O2, ..., O9 (see FIG. 1C). For example, each of the video streams S1, S2, ..., S9 may comprise a plurality of frames (not shown); the processor 11 may convert each frame into a render texture and present each converted render texture on the corresponding render object.
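The one-to-one pairing between streams and render panels described above can be sketched as follows. This is an illustrative Python stand-in for the engine-side render-texture update; all class and variable names are invented for the example and are not taken from the patent or from any real engine API.

```python
# Hypothetical sketch: each incoming stream's latest frame is shown on its
# paired render panel, one-to-one (stream i renders onto panel i).

class RenderPanel:
    """Stand-in for a render object that can display a texture."""
    def __init__(self, name):
        self.name = name
        self.texture = None  # last frame presented on this panel

def present_streams(panels, frames):
    # One-to-one: requires at least as many panels as streams.
    assert len(panels) >= len(frames)
    for panel, frame in zip(panels, frames):
        panel.texture = frame  # stand-in for a render-texture upload

panels = [RenderPanel(f"O{i}") for i in range(1, 10)]   # O1..O9
frames = [f"S{i}-frame" for i in range(1, 10)]          # latest frame of S1..S9
present_streams(panels, frames)
```

In a real engine the frame-to-texture conversion would go through the engine's render-texture facility; the sketch only captures the one-to-one assignment.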

Meanwhile, the processor 11 of the video integration device 1 sets up the main virtual camera 30 in the three-dimensional virtual space 20 (see FIG. 1B and FIG. 1C) so that the main virtual camera 30 can shoot all or a part of the render objects O1, O2, ..., O9 to obtain a main video stream VS. It should be noted that the position at which the main virtual camera 30 is set up (that is, the range of the render objects O1, O2, ..., O9 that the main virtual camera 30 can shoot) may depend on the picture the user wants to present.

In this embodiment, the main virtual camera 30 shoots all or a part of the render objects O1, O2, ..., O9 to obtain the main video stream VS, and the transceiving interface 13 outputs the main video stream VS (for example, pushes it to clients). For example, the transceiving interface 13 may output the frames shot by the main virtual camera 30 in real time while the main virtual camera 30 is shooting, and these frames form the aforesaid main video stream VS.

In some embodiments, while the main virtual camera 30 is shooting in the three-dimensional virtual space 20, the processor 11 may move the main virtual camera 30 (for example, the processor 11 may determine a movement path according to a preset instruction, or according to an instruction input by the user through the transceiving interface 13 or another interface, and move the main virtual camera 30 along that path) to change the range shot by the main virtual camera 30. It should be noted that the video integration device 1 may perform the camera work (that is, move the main virtual camera 30) with three-dimensional object translation. With three-dimensional object translation, the main virtual camera 30 can switch scenes and combine multiple scenes quickly as it moves through the three-dimensional virtual space 20, so no re-encoding or stitching is needed. How the processor 11 moves the main virtual camera 30 in the three-dimensional virtual space 20 is detailed below.

Specifically, each of the render objects O1, O2, ..., O9 has a world coordinate in the three-dimensional virtual space 20, and the processor 11 may move the main virtual camera 30 according to the world coordinates of the render objects O1, O2, ..., O9.

In the three-dimensional virtual space 20, the main virtual camera 30 focuses on a certain world coordinate when shooting. For example, when the main virtual camera 30 shoots all of the render objects O1, O2, ..., O9, it focuses on the center coordinate of the central render object O5, and its field of view (FOV) may be set flush with the outer frame formed by the render objects O1, O2, ..., O9, as shown in FIG. 1C. As another example, when the main virtual camera 30 shoots the render object O9, it focuses on the center coordinate of the render object O9, and its field of view may be adjusted to be flush with the outer frame of the render object O9, as shown in FIG. 1D. The processor 11 may switch the main virtual camera 30 from focusing according to the world coordinate of one render object (for example, render object O5) to focusing according to the world coordinate of another render object (for example, render object O9), thereby moving the main virtual camera 30 in the three-dimensional virtual space 20 and changing the picture it shoots. In addition, to give the user a better viewing experience, after moving the main virtual camera 30 in the three-dimensional virtual space 20, the processor 11 may further adjust the field of view of the main virtual camera 30, or further change the distance between the main virtual camera 30 and the render objects, so as to avoid shooting unwanted content (described later).

In some embodiments, the world coordinate representing a render object is the center coordinate of that render object. In those embodiments, the processor 11 switches the main virtual camera 30 from focusing on the world coordinate of one render object to focusing on the world coordinate of another render object.

In some embodiments, the world coordinate representing a render object is the coordinate of that render object's upper-left corner. In those embodiments, the processor 11 must first calculate the center coordinate of each of the render objects O1, O2, ..., O9 in the three-dimensional virtual space 20. For each of the render objects O1, O2, ..., O9, the processor 11 calculates the render object's center coordinate from its world coordinate (that is, the upper-left corner coordinate), its length, and its height. For example, for each of the render objects O1, O2, ..., O9, the processor 11 first calculates half the length and half the height of the render object and then calculates the center coordinate of the render object from the world coordinate, the half length, and the half height. As shown in FIG. 1C, the render objects O1, O2, ..., O9 have the center coordinates c1, c2, ..., c9, respectively.
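The center-coordinate calculation described above (upper-left corner, half the length, half the height) can be written out as a short sketch. The coordinate convention here (+x to the right, +y upward, panels parallel to the xy-plane) is an assumption for illustration, not something the patent specifies:

```python
# Hypothetical sketch: derive a render object's center from its upper-left
# world coordinate.  Assumes +x right, +y up, panel parallel to the xy-plane.

def center_from_top_left(top_left, width, height):
    x, y, z = top_left
    # Move right by half the width and down by half the height.
    return (x + width / 2.0, y - height / 2.0, z)

# A 4-wide, 2-tall panel whose upper-left corner sits at (0, 3, 0):
center = center_from_top_left((0.0, 3.0, 0.0), 4.0, 2.0)
```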

To make it easier to understand how the processor 11 moves the main virtual camera 30, please refer to the concrete example shown in FIG. 1E. In this example, the processor 11 wants the main virtual camera 30 to shoot all of the render objects O1, O2, ..., O9, so it makes the main virtual camera 30 focus on the center coordinate c5 of the render object O5 and sets the field of view of the main virtual camera 30 flush with the outer frame formed by the render objects O1, O2, ..., O9. To switch the main virtual camera 30 from shooting all of the render objects O1, O2, ..., O9 to shooting only the render object O9, the processor 11 switches the main virtual camera 30 from focusing on the center coordinate c5 of the render object O5 to focusing on the center coordinate c9 of the render object O9, so that the main virtual camera 30 moves in the three-dimensional virtual space 20 to focus on the center coordinate c9 and thereby shoots the render object O9.
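A focus transition like the one from c5 to c9 could, for instance, be sampled along a straight-line path with linear interpolation; the patent does not specify the interpolation scheme, and the coordinates below are invented for illustration rather than taken from the figures:

```python
# Hypothetical sketch: sample a straight-line path for the camera's focus
# coordinate as it moves from one panel center (c5) to another (c9).

def lerp(a, b, t):
    """Linear interpolation between points a and b at parameter t in [0, 1]."""
    return tuple(ai + (bi - ai) * t for ai, bi in zip(a, b))

c5 = (0.0, 0.0, 0.0)    # illustrative center of the middle panel
c9 = (4.0, -4.0, 0.0)   # illustrative center of the bottom-right panel

# Five evenly spaced focus coordinates along the transition.
path = [lerp(c5, c9, t / 4) for t in range(5)]
```

Each step of `path` would be fed to the camera as its new focus coordinate, producing a smooth on-screen pan rather than a hard cut.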

In some embodiments, while moving the main virtual camera 30, the processor 11 may also adjust the distance between the main virtual camera 30 and the render objects O1, O2, ..., O9 (for example, if the render objects O1, O2, ..., O9 are parallel to the xy-plane, the main virtual camera 30 may move along the z-axis), thereby changing the range shot by the main virtual camera 30. Taking the concrete example of FIG. 1E, after the processor 11 switches the main virtual camera 30 from focusing on the center coordinate c5 of the render object O5 to focusing on the center coordinate c9 of the render object O9, the main virtual camera 30 moves in the three-dimensional virtual space 20 accordingly in order to shoot the render object O9. If the distance between the main virtual camera 30 and the render objects O1, O2, ..., O9 is not adjusted, however, the picture shot by the main virtual camera 30 will cover a range beyond the render object O9. The processor 11 therefore also adjusts the distance between the main virtual camera 30 and the render object O9 so that the range of the picture shot by the main virtual camera 30 is flush with the outer frame of the render object O9.

In addition, in some embodiments, while moving the main virtual camera 30, the processor 11 may also reset the field of view of the main virtual camera 30, thereby changing the range shot by the main virtual camera 30. Taking the concrete example of FIG. 1E, after the processor 11 switches the main virtual camera 30 from focusing on the center coordinate c5 of the render object O5 to focusing on the center coordinate c9 of the render object O9, the picture shot by the main virtual camera 30 will cover a range beyond the render object O9. To avoid this, the processor 11 may adjust the field of view of the main virtual camera 30 to be flush with the outer frame of the render object O9 so that the main virtual camera 30 does not shoot any content outside the render object O9.
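The two fitting strategies just described, moving the camera closer versus narrowing the field of view, reduce to the same right-triangle relation between panel height, camera distance, and vertical FOV: a head-on panel of height h exactly fills the frame when d = (h/2) / tan(fov/2). The following numeric sketch uses invented panel dimensions and FOV values purely for illustration:

```python
import math

# Fit-to-frame relation for a panel viewed head-on:
#   d = (h / 2) / tan(fov_v / 2)
# The processor can either move the camera to that distance (keeping the FOV)
# or keep the distance and set the matching FOV.

def distance_for_fit(panel_height, fov_v_deg):
    """Camera-to-panel distance at which the panel fills the frame."""
    return (panel_height / 2.0) / math.tan(math.radians(fov_v_deg) / 2.0)

def fov_for_fit(panel_height, distance):
    """Vertical FOV (degrees) at which the panel fills the frame."""
    return math.degrees(2.0 * math.atan((panel_height / 2.0) / distance))

# A 2-unit-tall panel with a 90-degree vertical FOV fits at distance 1.
d = distance_for_fit(2.0, 90.0)
```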

As described above, the transceiving interface 13 outputs the main video stream VS (for example, pushes it to clients). In some embodiments, the video integration device 1 may further comprise a buffer memory (not shown) electrically connected to the processor 11. The buffer memory stores the frames shot by the main virtual camera 30. In those embodiments, the transceiving interface 13 outputs the frames buffered in the buffer memory (for example, pushes each buffered frame to clients), and these frames form the aforesaid main video stream VS. It should be noted that, under conventional push technology, a host does not push the buffered frames shot by a virtual camera but instead pushes the picture displayed by the host, so other information displayed by the host (for example, instant messages received by communication software) gets pushed out along with it. Because the video integration device 1 stores only the frames shot by the main virtual camera 30 in the buffer memory and pushes those frames to users directly, it avoids the conventional problem of pushing out other information together with the video.

In some embodiments, the video integration device 1 may also dynamically disable the rendering function of one or more render objects to avoid wasting display resources. For example, the processor 11 may use view frustum culling to disable the rendering function of one or more render objects. The processor 11 may determine whether the render objects O1, O2, ..., O9 fall within the viewing frustum of the main virtual camera 30. If the processor 11 determines that a subset of the render objects O1, O2, ..., O9 does not fall within the viewing frustum of the main virtual camera 30, the processor 11 disables the rendering function of at least one render object comprised in that subset. For ease of understanding, please refer to the concrete example of FIG. 1D. In this example, when the main virtual camera 30 shoots the render object O9, the processor 11 determines that a subset of the render objects O1, O2, ..., O9 (namely, the render objects O1, O2, ..., O8) does not fall within the viewing frustum of the main virtual camera 30, so the processor 11 disables the rendering function of the render objects O1, O2, ..., O8 comprised in the subset. Disabling the rendering function of the render objects O1, O2, ..., O8 not shot by the main virtual camera 30 reduces the consumption of display resources.
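As a simplified illustration of the culling decision (not the patent's actual implementation), the visibility test for axis-aligned panels viewed head-on reduces to a rectangle-overlap check between each panel and the camera's view rectangle at the panels' depth. All numbers and names below are invented for the example:

```python
import math

# Simplified frustum culling under strong assumptions: panels are
# axis-aligned rectangles parallel to the xy-plane, and the camera looks
# straight down the z-axis at them.

def view_half_extents(distance, fov_v_deg, aspect):
    """Half-width and half-height of the view rectangle at a given depth."""
    half_h = distance * math.tan(math.radians(fov_v_deg) / 2.0)
    return half_h * aspect, half_h

def is_visible(panel_center, panel_w, panel_h, cam_xy, half_w, half_h):
    """Rectangle-overlap test between a panel and the view rectangle."""
    px, py = panel_center
    cx, cy = cam_xy
    return (abs(px - cx) < half_w + panel_w / 2.0 and
            abs(py - cy) < half_h + panel_h / 2.0)

# Camera aimed at an illustrative O9 center, close enough to frame one panel:
half_w, half_h = view_half_extents(distance=1.0, fov_v_deg=90.0, aspect=1.0)
centers = {"O5": (0.0, 0.0), "O9": (4.0, -4.0)}  # invented panel centers
visible = {name: is_visible(c, 2.0, 2.0, (4.0, -4.0), half_w, half_h)
           for name, c in centers.items()}
```

Panels for which the test is false would have their rendering function disabled until the camera moves again.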

In some embodiments, the video integration device 1 disables the rendering function of render objects dynamically with a granularity as fine as one distance unit (for example, a meter or a centimeter) in the three-dimensional engine executed by the video integration device 1. In the concrete example shown in FIG. 1F, the viewing frustum of the main virtual camera 30 at a certain moment covers all of the render object O9 and a part of each of the render objects O5, O6, and O8 (that is, the shooting range VA in FIG. 1F). In this example, the processor 11 treats the portions of the render objects not covered by the viewing frustum of the main virtual camera 30 as the subset whose rendering function should be disabled; that subset comprises all of the render objects O1, O2, O3, O4, and O7 and a part of each of the render objects O5, O6, and O8. The processor 11 may disable the rendering function corresponding to that subset.

In some embodiments, the processor 11 may also exploit object layering in the three-dimensional virtual space 20 by creating an inserted object (for example, text) between the main virtual camera 30 and one of the render objects O1, O2, ..., O9 to add further rendering effects. Specifically, the processor 11 may determine the range of render objects shot by the main virtual camera 30 and then decide where (that is, between the main virtual camera 30 and which render object or objects) to create the inserted object. For example, when the main virtual camera 30 shoots all of the render objects O1, O2, ..., O9, the processor 11 may create the inserted object between the main virtual camera 30 and the render objects O1, O2, ..., O9. As another example, when the main virtual camera 30 shoots only the render object O9, the processor 11 may create the inserted object between the main virtual camera 30 and the render object O9.

In some embodiments, the processor 11 may further add a filter in front of the main virtual camera 30 so that the frames of the main video stream captured by the main virtual camera 30 exhibit the effect that the filter presents. People of ordinary skill in the art are familiar with the use of various filters, so the details are omitted.

In some embodiments, the processor 11 may further rotate all of the render objects O1, O2, …, O9 by an angle, or rotate at least one of the render objects O1, O2, …, O9 by an angle, thereby providing different visual effects. People of ordinary skill in the art understand how to rotate a render object in the three-dimensional virtual space 20, so the details are omitted.
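The rotation mentioned above reduces to a standard rotation of the object's vertices about its center. A minimal in-plane sketch (illustrative only; a real engine would rotate all vertices about an arbitrary axis via the object's transform):

```python
import math

def rotate_point(p, center, angle_deg):
    """Rotate point p about `center` by angle_deg, counterclockwise, in-plane."""
    a = math.radians(angle_deg)
    dx, dy = p[0] - center[0], p[1] - center[1]
    return (center[0] + dx * math.cos(a) - dy * math.sin(a),
            center[1] + dx * math.sin(a) + dy * math.cos(a))
```

Applying `rotate_point` to each corner of a render object rotates the object while keeping its center fixed.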

In summary, the video integration apparatus 1 sets up a main virtual camera and a plurality of render objects in a three-dimensional virtual space and, exploiting the ability of these render objects to present video and image content in the three-dimensional virtual space, presents a plurality of video streams on the render objects one-to-one. The video integration apparatus 1 exploits the ability of a virtual camera to move freely in the three-dimensional virtual space: the main virtual camera captures all or a part of the render objects to obtain a main video stream, and the main video stream is then output. Through the foregoing operations, the video integration apparatus 1 can integrate the video streams via the main virtual camera and the render objects, thereby achieving the integration and viewing effects the user needs. In addition, since the video integration apparatus 1 performs camera movement with three-dimensional object-movement techniques, the main virtual camera can quickly switch frames and integrate multiple frames as it moves in the three-dimensional virtual space, without re-encoding and stitching. Therefore, the video integration apparatus 1 can effectively reduce the computational cost and will not delay the output of the main video stream, thereby maintaining good output (e.g., push-streaming) quality and providing users a good viewing experience.

The second embodiment of the present invention is a video integration method, whose flowchart is depicted in FIG. 2A. The video integration method is applicable to an electronic apparatus, e.g., the video integration apparatus described in the first embodiment.

In this embodiment, the video integration method executes steps S201, S203, and S205. In step S201, the electronic apparatus sets up a main virtual camera and a plurality of render objects in a three-dimensional virtual space. In step S203, the electronic apparatus presents a plurality of video streams on the render objects one-to-one. Then, in step S205, the electronic apparatus has the main virtual camera capture all or a part of the render objects to obtain a main video stream, and outputs the main video stream. Specifically, step S205 has the main virtual camera capture all or a part of the render objects to obtain a plurality of frames, and outputs those frames as the main video stream.
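Steps S201 through S205 can be illustrated with a minimal Python sketch. The list-of-frames data model, the fixed set of visible object IDs, and the tuple-based "captured picture" are all assumptions made for illustration; they are not the patent's actual engine API:

```python
def integrate_videos(streams, visible_ids):
    """Sketch of steps S201-S205 (hypothetical data model).

    S201 + S203: one render object per stream, presenting that stream's frames.
    S205: each tick, the main virtual camera 'captures' the visible objects,
          and the captured pictures form the main video stream.
    """
    render_objects = {i: frames for i, frames in enumerate(streams)}  # S201 + S203
    n_ticks = min(len(frames) for frames in streams)
    main_stream = [tuple(render_objects[i][t] for i in visible_ids)   # S205
                   for t in range(n_ticks)]
    return main_stream
```

In the real apparatus the "capture" is the camera rendering the scene; here it is reduced to collecting the current frame of each visible object.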

Each video stream processed in step S203 includes a plurality of frames. In some embodiments, in step S203 the electronic apparatus executes a step of converting each frame into a render texture, and then executes a step of presenting each render texture on the corresponding render object.
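The frame-to-render-texture conversion can be sketched as follows; the dict-based frame and texture formats and the function names are illustrative assumptions, standing in for a 3D engine's texture objects (e.g., a render texture bound to an object's material):

```python
def to_render_texture(frame):
    """Convert one decoded frame into a 'render texture' (illustrative dict)."""
    return {"width": frame["w"], "height": frame["h"], "pixels": frame["data"]}

def present_stream(frames, render_object):
    """S203 sketch: per incoming frame, build a render texture and present it
    on the render object; returns the object holding the most recent texture."""
    for frame in frames:
        render_object["texture"] = to_render_texture(frame)
    return render_object
```

In a real engine the loop body would run once per decoded frame as the stream arrives, updating the texture in place.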

In addition, in some embodiments, while step S205 is being executed, the electronic apparatus may further execute a step of moving the main virtual camera.

In some embodiments, each render object has a world coordinate in the three-dimensional virtual space, and the render objects include a first render object (e.g., the render object O1 shown in FIG. 1C) and a second render object (e.g., the render object O9 shown in FIG. 1C). In these embodiments, while the electronic apparatus moves the main virtual camera, the video integration method further includes a step of adjusting the main virtual camera from focusing according to the world coordinate of the first render object to focusing according to the world coordinate of the second render object.

Further, in some embodiments, the world coordinate representing a render object is the center coordinate of that render object. In these embodiments, the video integration method adjusts the main virtual camera from focusing on the world coordinate of the first render object to focusing on the world coordinate of the second render object.

In some embodiments, the world coordinate representing a render object is the coordinate of its upper-left corner. In these embodiments, the video integration method further includes the following steps: calculating, by the electronic apparatus, a first center coordinate of the first render object according to the world coordinate, a length, and a width of the first render object; calculating a second center coordinate of the second render object according to the world coordinate, a length, and a width of the second render object; and adjusting the main virtual camera from focusing on the first center coordinate to focusing on the second center coordinate.
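The center-coordinate calculation above is a direct offset from the upper-left corner by half the length and half the width. A minimal sketch, assuming an axis-aligned object whose length runs along +x and whose width runs along -y (the patent does not fix the axis convention):

```python
def center_from_top_left(top_left, length, width):
    """Center of an axis-aligned render object whose world coordinate is its
    upper-left corner (assumed layout: length along +x, width along -y)."""
    x, y, z = top_left
    return (x + length / 2.0, y - width / 2.0, z)
```

Refocusing then amounts to computing `center_from_top_left` for the first and second render objects and moving the camera's focus target from the first result to the second.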

In some embodiments, the video integration method further includes a step of adjusting, by the electronic apparatus, a distance between the main virtual camera and the second render object. In these embodiments, the video integration method may adjust the distance between the main virtual camera and the second render object according to a view angle of the main virtual camera.
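One common way to derive such a distance from the view angle (an assumption here; the patent does not prescribe a formula) is to place the camera so that the object exactly fills the vertical view angle, i.e., d = (h/2) / tan(fov/2):

```python
import math

def fitting_distance(object_height, vertical_fov_deg):
    """Distance at which an object of the given height exactly fills the
    camera's vertical view angle: d = (h/2) / tan(fov/2)."""
    return (object_height / 2.0) / math.tan(math.radians(vertical_fov_deg) / 2.0)
```

For example, with a 90-degree vertical view angle, an object 2 units tall fills the frame at a distance of 1 unit; narrowing the view angle pushes the camera farther back.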

In some embodiments, the electronic apparatus further stores a plurality of frames of the main virtual camera, e.g., in a temporary memory of the electronic apparatus. In these embodiments, step S205 has the electronic apparatus output the frames in the temporary memory as the main video stream.

In some embodiments, the video integration method further has the electronic apparatus execute the following steps: determining that a subset of the render objects does not fall within a view angle of the main virtual camera, and stopping the imaging function of at least one render object included in the subset. Disabling the render objects not covered by the view angle of the main virtual camera reduces the consumption of display resources.

In some embodiments, the video integration method further has the electronic apparatus execute the following step: creating an inserted object between the main virtual camera and one of the render objects. In addition, in some embodiments, the video integration method further has the electronic apparatus execute the following step: rotating at least one of the render objects by an angle. Through these steps, the video integration method can add other imaging effects to the main video stream.

In addition to the above steps, the second embodiment can also execute all the operations and steps described in the first embodiment, has the same functions, and achieves the same technical effects. People of ordinary skill in the art can directly understand how the second embodiment executes these operations and steps based on the first embodiment, so the details are not repeated.

The video integration method described in the second embodiment may be implemented by a computer program product comprising a plurality of program instructions. The computer program product may be a file transmissible over a network, or may be stored in a non-transitory computer-readable storage medium. The non-transitory computer-readable storage medium may be an electronic product, e.g., a read-only memory (ROM), a flash memory, a floppy disk, a hard disk, a compact disk (CD), a digital versatile disc (DVD), a flash drive, or any other storage medium with the same function known to people of ordinary skill in the art. After the program instructions included in the computer program product are loaded into an electronic apparatus (e.g., the video integration apparatus 1), the computer program executes the video integration method described in the second embodiment.

It should be noted that, in the specification and the claims of the present invention, certain terms (including render object and center coordinate) are preceded by "first" or "second"; "first" and "second" are only used to distinguish different instances of those terms.

In summary, the video integration technology provided by the present invention (comprising at least the apparatus, the method, and the computer program product thereof) sets up a main virtual camera and a plurality of render objects in a three-dimensional virtual space and, exploiting the ability of these render objects to present video and image content in the three-dimensional virtual space, presents a plurality of video streams on the render objects one-to-one. The video integration technology provided by the present invention uses the main virtual camera to capture all or a part of the render objects, thereby obtaining a main video stream, and also outputs the main video stream (e.g., pushes the main video stream to clients). The video integration technology provided by the present invention performs camera movement with three-dimensional object-movement techniques, so the main virtual camera can quickly switch frames and integrate multiple frames as it moves in the three-dimensional virtual space, without re-encoding and stitching. Therefore, the video integration technology provided by the present invention can effectively reduce the computational cost without delaying the output of the main video stream, thereby maintaining good output quality and providing the audience a good viewing experience.

The above embodiments are only used to exemplify some implementation aspects of the present invention and to explain its technical features, not to limit the protection scope of the present invention. Any change or equivalent arrangement that can be easily accomplished by people of ordinary skill in the art belongs to the claimed scope of the present invention, and the protection scope of the present invention is subject to the claims.

20: three-dimensional virtual space

30: main virtual camera

c1, c2, …, c9: center coordinates

Claims (10)

1. A video integration apparatus, comprising: a processor, configured to set up a main virtual camera and a plurality of render objects in a three-dimensional virtual space, present a plurality of video streams on the render objects one-to-one, and have the main virtual camera capture all or a part of the render objects to obtain a main video stream; and a transceiving interface, electrically connected to the processor and configured to output the main video stream.

2. The video integration apparatus of claim 1, wherein the processor further moves the main virtual camera while the main virtual camera is capturing.

3. The video integration apparatus of claim 2, wherein the render objects comprise a first render object and a second render object, each render object has a world coordinate in the three-dimensional virtual space, and the processor moves the main virtual camera by the following operation: adjusting the main virtual camera from focusing according to the world coordinate of the first render object to focusing according to the world coordinate of the second render object.

4. The video integration apparatus of claim 3, wherein the processor, in moving the main virtual camera, further adjusts a distance between the main virtual camera and the second render object.
5. The video integration apparatus of claim 1, wherein each video stream comprises a plurality of frames, and the processor converts each frame into a render texture and presents each render texture on the corresponding render object.

6. A video integration method, executed by an electronic apparatus, the video integration method comprising: (a) setting up a main virtual camera and a plurality of render objects in a three-dimensional virtual space; (b) presenting a plurality of video streams on the render objects one-to-one; (c) having the main virtual camera capture all or a part of the render objects to obtain a main video stream; and (d) outputting the main video stream.

7. The video integration method of claim 6, wherein the render objects comprise a first render object and a second render object, each render object has a world coordinate in the three-dimensional virtual space, and the method further comprises: (e) during the execution of step (c), adjusting the main virtual camera from focusing according to the world coordinate of the first render object to focusing according to the world coordinate of the second render object.

8. The video integration method of claim 6, wherein the electronic apparatus stores a plurality of frames of the main virtual camera, and step (d) outputs those frames as the main video stream.

9. The video integration method of claim 6, wherein each video stream comprises a plurality of frames, and step (b) comprises: converting each frame into a render texture; and presenting each render texture on the corresponding render object.

10. A computer program product, wherein after the computer program product is loaded into an electronic apparatus, the electronic apparatus executes a plurality of program instructions included in the computer program product to implement a video integration method comprising: setting up a main virtual camera and a plurality of render objects in a three-dimensional virtual space; presenting a plurality of video streams on the render objects one-to-one; having the main virtual camera capture all or a part of the render objects to obtain a main video stream; and outputting the main video stream.
TW109129154A 2020-08-26 2020-08-26 Apparatus, method, and computer program product thereof for integrating videos TWI740623B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW109129154A TWI740623B (en) 2020-08-26 2020-08-26 Apparatus, method, and computer program product thereof for integrating videos


Publications (2)

Publication Number Publication Date
TWI740623B true TWI740623B (en) 2021-09-21
TW202209859A TW202209859A (en) 2022-03-01

Family

ID=78778089


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2007027854A2 (en) * 2005-08-31 2007-03-08 Rah Color Technologies Llc Color calibration of color image rendering devices
TW200906169A (en) * 2007-07-17 2009-02-01 Inventec Corp Equipment and method for examining quality of image display apparatus
US7969444B1 (en) * 2006-12-12 2011-06-28 Nvidia Corporation Distributed rendering of texture data
WO2013123696A1 (en) * 2012-02-21 2013-08-29 海尔集团公司 Method and system for split-screen display applicable in multi-screen sharing
US20150279037A1 (en) * 2014-01-11 2015-10-01 Userful Corporation System and Method of Video Wall Setup and Adjustment Using Automated Image Analysis
TW202025716A (en) * 2018-09-26 2020-07-01 美商卡赫倫特羅吉克斯公司 Surround view generation


