TW201225638A - Method and system for generating three-dimensional video utilizing a monoscopic camera - Google Patents

Method and system for generating three-dimensional video utilizing a monoscopic camera

Info

Publication number
TW201225638A
TW201225638A (application TW100130759A)
Authority
TW
Taiwan
Prior art keywords
captured
image data
depth information
camera
dimensional image
Prior art date
Application number
TW100130759A
Other languages
Chinese (zh)
Inventor
Nambi Seshadri
Jeyhan Karaoguz
Xuemin Chen
Chris Boross
Original Assignee
Broadcom Corp
Priority date
Filing date
Publication date
Priority claimed from US 13/077,900 (published as US 2012/0050480 A1)
Application filed by Broadcom Corp filed Critical Broadcom Corp
Publication of TW201225638A


Landscapes

  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
  • Studio Devices (AREA)

Abstract

Aspects of a method and system for generating three-dimensional video utilizing a monoscopic camera are provided. A monoscopic camera may comprise one or more image sensors and one or more depth sensors. Two-dimensional image data may be captured via the image sensor(s) and depth information may be captured via the depth sensor(s). The depth sensor may utilize infrared waves transmitted by an emitter of the monoscopic camera. The monoscopic camera may be operable to utilize the captured depth information to generate a three-dimensional video stream from the captured two-dimensional image data. The monoscopic camera may be operable to synchronize the captured depth information with the captured two-dimensional image data. The monoscopic camera may be operable to generate a two-dimensional video stream from the captured two-dimensional image data. The monoscopic camera may be configurable to output the two-dimensional video and/or the three-dimensional video stream.

Description

TECHNICAL FIELD OF THE INVENTION

The present invention relates to video processing. More specifically, the present invention relates to a method and system for generating three-dimensional video utilizing a monoscopic camera.

BACKGROUND OF THE INVENTION

In recent years, support for, and demand for, video systems that support three-dimensional (3-D) video have increased rapidly. Both literally and physically, 3-D video provides a whole new way to watch video, in the home and in theaters. However, 3-D video is still in its infancy in many respects, and there remains much room for improvement in both cost and performance.

Further limitations and disadvantages of conventional approaches will become apparent to one of skill in the art through comparison of such approaches with the system of the present invention, described in the remainder of this application with reference to the drawings.

SUMMARY OF THE INVENTION

The present invention provides a system and/or method for generating three-dimensional video utilizing a monoscopic camera, substantially as shown in and/or described in connection with at least one of the figures, and as set forth more completely in the claims.
According to one aspect of the invention, a method includes: capturing two-dimensional image data via one or more image sensors of a monoscopic camera; capturing depth information via a depth sensor of the monoscopic camera; and generating a three-dimensional video stream from the captured two-dimensional image data utilizing the captured depth information.

Preferably, the method further includes synchronizing the captured depth information with the captured two-dimensional image data.

Preferably, the method further includes scaling the resolution of the depth information to match the resolution of the two-dimensional image data.

Preferably, the method further includes adjusting the frame rate of the captured depth information to match the frame rate of the captured two-dimensional image data.

Preferably, the method further includes storing the captured depth information in memory independently of the captured two-dimensional image data.

Preferably, the captured two-dimensional image data comprises one or both of luminance information and color information.

Preferably, the method further includes rendering a two-dimensional video stream from the captured two-dimensional image data.

Preferably, the monoscopic camera is operable to output one of the two-dimensional video stream and the three-dimensional video stream to a display of the monoscopic camera.

Preferably, the monoscopic camera is operable to output one or both of the two-dimensional video stream and the three-dimensional video stream to one or more electronic devices coupled to the monoscopic camera via one or more interfaces.

Preferably, the depth sensor utilizes infrared waves transmitted by an emitter of the monoscopic camera.
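The claimed method, capturing 2-D image data and depth information, scaling the depth map to match the image resolution, and pairing the two for 3-D rendering, can be sketched as follows. This is an illustrative sketch only; the helper names and the nearest-neighbor scaling policy are assumptions, not part of the claims.

```python
import numpy as np

def scale_depth_to_image(depth, image_shape):
    """Nearest-neighbor scale of a depth map to the 2-D image resolution."""
    h, w = image_shape
    dh, dw = depth.shape
    rows = np.arange(h) * dh // h   # map each output row to a depth row
    cols = np.arange(w) * dw // w   # map each output column to a depth column
    return depth[rows[:, None], cols[None, :]]

def generate_3d_frame(image, depth):
    """Pair a luminance/color frame with a matching-resolution depth map."""
    depth_scaled = scale_depth_to_image(depth, image.shape[:2])
    return {"image": image, "depth": depth_scaled}

# A 4x4 image frame with a 2x2 depth map, as captured by separate sensors.
image = np.zeros((4, 4), dtype=np.uint8)
depth = np.array([[10, 20], [30, 40]], dtype=np.uint16)
frame_3d = generate_3d_frame(image, depth)
```

After scaling, every image pixel has an associated depth value, which is the precondition for the 3-D rendering steps that follow.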
According to one aspect of the invention, a system comprises one or more circuits for use in a monoscopic camera, the one or more circuits comprising one or more image sensors and a depth sensor, the one or more circuits being operable to: capture two-dimensional image data via the one or more image sensors of the monoscopic camera; capture depth information via the depth sensor of the monoscopic camera; and generate a three-dimensional video stream from the captured two-dimensional image data utilizing the captured depth information.

Preferably, the one or more circuits are operable to synchronize the captured depth information with the captured two-dimensional image data.

Preferably, the one or more circuits are operable to scale the resolution of the depth information to match the resolution of the two-dimensional image data.

Preferably, the one or more circuits are operable to adjust the frame rate of the captured depth information to match the frame rate of the captured two-dimensional image data.

Preferably, the one or more circuits are operable to store the captured depth information in memory independently of the captured two-dimensional image data.

Preferably, the captured two-dimensional image data comprises one or both of luminance information and color information.

Preferably, the one or more circuits are operable to render a two-dimensional video stream from the captured two-dimensional image data.

Preferably, the monoscopic camera is operable to output one of the two-dimensional video stream and the three-dimensional video stream to a display of the monoscopic camera.

Preferably, the monoscopic camera is operable to output one or both of the two-dimensional video stream and the three-dimensional video stream to one or more electronic devices coupled to the monoscopic camera via one or more interfaces.
Preferably, the depth sensor utilizes infrared waves transmitted by an emitter of the monoscopic camera.

These and other advantages, aspects, and novel features of the present invention, as well as details of an illustrated embodiment thereof, will be more fully understood from the following description and drawings.

DETAILED DESCRIPTION OF THE INVENTION

Certain implementations of the invention provide a method and system for generating three-dimensional video utilizing a monoscopic camera. In various embodiments of the invention, a monoscopic camera may comprise one or more image sensors and one or more depth sensors. Two-dimensional image data may be captured via the image sensor(s) and depth information may be captured via the depth sensor(s). The depth sensor may utilize infrared waves transmitted by an emitter of the monoscopic camera. The monoscopic camera may be operable to utilize the captured depth information to generate a three-dimensional video stream from the captured two-dimensional image data. The monoscopic camera may be operable to synchronize the captured depth information with the captured two-dimensional image data. The monoscopic camera may be operable to scale the resolution of the depth information to match the resolution of the two-dimensional image data, and/or to adjust the frame rate of the captured depth information to match the frame rate of the captured two-dimensional image data. The monoscopic camera may be operable to store the captured depth information in memory independently of the captured two-dimensional image data. In this manner, the image data and the depth data may be utilized separately and/or in combination to render one or more video streams. The captured two-dimensional image data may comprise one or both of luminance information and color information. The monoscopic camera may be operable to render a two-dimensional video stream from the captured two-dimensional image data. The monoscopic camera may be operable to output one or both of the two-dimensional video stream and the three-dimensional video stream to a display of the monoscopic camera, and/or to one or more electronic devices coupled to the monoscopic camera via one or more interfaces.
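The synchronization step described above can be sketched as a nearest-timestamp pairing of depth frames with image frames. The nearest-neighbor matching policy is assumed here for illustration; the description does not specify how frames are matched.

```python
def synchronize(image_timestamps, depth_timestamps):
    """For each image frame, pick the index of the nearest depth frame in time."""
    pairs = []
    for t_img in image_timestamps:
        nearest = min(range(len(depth_timestamps)),
                      key=lambda i: abs(depth_timestamps[i] - t_img))
        pairs.append(nearest)
    return pairs

# Image sensor at ~60 fps, depth sensor at ~30 fps (timestamps in milliseconds).
image_ts = [0.0, 16.7, 33.3, 50.0]
depth_ts = [0.0, 33.3]
mapping = synchronize(image_ts, depth_ts)
```

Each depth frame here serves two consecutive image frames, which is also where the frame-rate adjustment mentioned above comes in.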
As used herein, a "3-D image" refers to a stereoscopic image and "3-D video" refers to stereoscopic video.

FIG. 1 compares an exemplary monoscopic camera embodying aspects of the present invention with a conventional stereoscopic camera. Referring to FIG. 1, the stereoscopic camera 100 may comprise two lenses 101a and 101b. Each of the lenses 101a and 101b may capture images from a different viewpoint, and the images captured via the two lenses 101a and 101b may be combined to generate a 3-D image. In this regard, electromagnetic (EM) waves in the visible spectrum may be focused on a first one or more image sensors by the lens 101a (and associated optics), and on a second one or more image sensors by the lens 101b (and associated optics).

The monoscopic camera 102 may capture images via a single viewpoint corresponding to the lens 101c. In this regard, EM waves in the visible spectrum may be focused on one or more image sensors by the lens 101c. The image sensors may capture luminance and/or color information. The captured luminance and/or color information may be represented in any suitable color space, such as the YCrCb color space or the RGB color space. The monoscopic camera 102 may also capture depth information via the lens 101c (and associated optics). For example, the monoscopic camera 102 may comprise an infrared emitter, an infrared sensor, and associated circuitry operable to determine the distance to objects based on reflected infrared waves. Additional details of the monoscopic camera 102 are described below.

The monoscopic camera 102 may comprise a processor 124, memory 126, and one or more sensors 128.
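Where the stereoscopic camera 100 obtains its second viewpoint optically through the lens 101b, a monoscopic camera must synthesize it from one view plus depth. One common way to do that, assumed here purely for illustration (the description does not name a rendering algorithm), is depth-image-based rendering: each pixel is shifted horizontally by a disparity derived from its depth.

```python
import numpy as np

def synthesize_stereo_pair(image, depth, max_disparity=8):
    """Create left/right views by shifting pixels by a depth-derived disparity.

    In this toy convention, larger depth values mean nearer objects and
    therefore larger shifts."""
    h, w = image.shape
    # Normalize depth to an integer per-pixel disparity in [0, max_disparity].
    disparity = (depth.astype(np.float64) / depth.max() * max_disparity).astype(int)
    left = np.zeros_like(image)
    right = np.zeros_like(image)
    for y in range(h):
        for x in range(w):
            d = disparity[y, x]
            if x + d < w:
                left[y, x + d] = image[y, x]   # shift right for the left eye
            if x - d >= 0:
                right[y, x - d] = image[y, x]  # shift left for the right eye
    return left, right

image = np.arange(16, dtype=np.uint8).reshape(4, 4)
depth = np.full((4, 4), 100, dtype=np.uint16)  # a uniform depth plane
left, right = synthesize_stereo_pair(image, depth, max_disparity=1)
```

A real renderer would also fill the disocclusion holes left by the shifts; that is omitted here for brevity.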
The processor 124 may comprise suitable logic, circuitry, interfaces, and/or code operable to manage the operation of the various components of the camera and to perform a variety of computing and processing tasks. A single processor 124 is utilized only for illustration and not to limit the invention; in an exemplary embodiment of the invention, various portions of the camera described in FIG. 2 below may correspond to the processor 124 described in FIG. 1. The memory 126 may comprise, for example, DRAM, SRAM, flash memory, a hard disk drive or other magnetic memory, or another suitable storage device. The sensors 128 may comprise one or more image sensors, one or more depth sensors, and one or more microphones. Exemplary sensors are described below with reference to FIG. 2.

CROSS-REFERENCE TO RELATED APPLICATIONS

This patent application makes reference to, and claims priority to, U.S. provisional patent application No. 61/439,193, filed on February 3, 2011, and U.S. provisional patent application No. 61/377,867, filed on August 27, 2010, each of which is hereby incorporated herein by reference in its entirety.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a diagram comparing an exemplary monoscopic, or single-view, camera embodying aspects of the present invention with a conventional stereoscopic camera;

FIG. 2 is a diagram of an exemplary monoscopic camera, in accordance with an embodiment of the invention;

FIG. 3 is a diagram illustrating exemplary processing of depth information and 2-D image information to generate a 3-D image, in accordance with an embodiment of the invention;

FIG. 4 is a flow chart of exemplary steps for generating 3-D video utilizing a 2-D image sensor and a depth sensor, in accordance with an embodiment of the invention.

REFERENCE NUMERALS: stereoscopic camera 100; lenses 101a, 101b, 101c; monoscopic camera 102; processor 104; memory 106; video encoder/decoder 107; depth sensor 108; audio encoder/decoder 109; digital signal processor (DSP) 110; speaker 111; input/output module 112; microphone 113; image sensor 114; optics 116; lens 118; digital display 120; controller 122; optical viewfinder 124; processor 124; memory 126; sensors 128; depth information 130; plane 132; 2-D image information 134; frame 136; objects 138, 140, 142.

FIG. 2 is a diagram of an exemplary monoscopic camera, in accordance with an embodiment of the present invention.
Referring to FIG. 2, the camera 102 may comprise a processor 104, memory 106, a video encoder/decoder 107, a depth sensor 108, an audio encoder/decoder 109, a digital signal processor (DSP) 110, an input/output module 112, one or more image sensors 114, optics 116, a lens 118, a digital display 120, a controller 122, and an optical viewfinder 124.

The processor 104 may comprise suitable logic, circuitry, interfaces, and/or code operable to coordinate the operation of the various components of the camera 102. For example, the processor 104 may run the operating system of the camera 102 and control the communication of information and signals between the components of the camera 102. The processor 104 may execute instructions stored in the memory 106.

The memory 106 may comprise, for example, DRAM, SRAM, flash memory, a hard disk drive or other magnetic memory, or another suitable storage device. For example, the SRAM may be utilized to store data utilized and/or generated by the processor 104, and the hard disk drive and/or flash memory may be utilized to store recorded image data and depth data.

The video encoder/decoder 107 may comprise suitable logic, circuitry, interfaces, and/or code operable to process captured color, luminance, and/or depth data so that the data is suitable for conveyance, via the I/O module 112, to, for example, the display 120 and/or to one or more external devices. For example, the video encoder/decoder 107 may convert between raw RGB or YCrCb pixel values and an MPEG encoding. Although depicted as a separate block 107, the video encoder/decoder 107 may be implemented in the DSP 110.

The depth sensor 108 may comprise suitable logic, circuitry, interfaces, and/or code operable to detect electromagnetic waves in the infrared spectrum and to determine the distance to objects based on reflected infrared waves. In an embodiment of the invention, the distance may be determined based on the time of flight of infrared waves transmitted by the emitter 109 and reflected back to the sensor 108.
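The time-of-flight measurement described for the depth sensor reduces to a simple relation: distance is half the round-trip time of the reflected infrared pulse multiplied by the speed of light. A minimal sketch with assumed numbers, not circuitry from the patent:

```python
SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def tof_distance(round_trip_seconds):
    """Distance to an object from the round trip of a reflected infrared pulse."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A pulse that returns after 20 nanoseconds corresponds to roughly 3 meters.
d = tof_distance(20e-9)
```

Real time-of-flight sensors typically measure phase shift of a modulated carrier rather than timing a single pulse, but the distance relation is the same.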
In an embodiment of the invention, depth may be determined based on the distortion of a captured grid.

The audio encoder/decoder 109 may comprise suitable logic, circuitry, interfaces, and/or code operable to process captured audio data so that the data is suitable for conveyance, via the I/O module 112, to, for example, the speaker 111 and/or to one or more external devices. For example, the audio encoder/decoder 109 may convert between raw pulse-code-modulated audio and an MP3 or AAC encoding. Although depicted as a separate block 109, the audio encoder/decoder 109 may be implemented in the DSP 110.

The digital signal processor (DSP) 110 may comprise suitable logic, circuitry, interfaces, and/or code operable to perform complex processing of captured image data, captured depth data, and captured audio data. For example, the DSP 110 may compress and/or decompress the data, encode and/or decode the data, and/or filter the data to remove noise and/or otherwise improve perceived audio and/or video quality for a listener and/or viewer.

The input/output module 112 may comprise suitable logic, circuitry, interfaces, and/or code that enable the camera 102 to interface with other devices in accordance with one or more standards, such as USB, PCI-X, IEEE 1394, HDMI, DisplayPort, and/or analog audio and/or analog video standards. For example, the I/O module 112 may be operable to send and receive signals from the controller 122; output video to the display 120; output audio to the speaker 111; handle audio input from the microphone 113; read from and write to cassettes, flash memory cards, hard disk drives, solid state drives, or other external memory attached to the camera 102; and/or output audio and/or video via one or more ports, such as an IEEE 1394 or USB port.

The microphone 113 may comprise a transducer and associated logic, circuitry, interfaces, and/or code operable to convert acoustic waves into electrical signals.
The microphone 113 may be operable to amplify, equalize, and/or otherwise process the captured audio signals. The directionality of the microphone 113 may be controlled electronically and/or mechanically.

Each image sensor 114 may comprise suitable logic, circuitry, interfaces, and/or code operable to convert optical signals into electrical signals. For example, each image sensor 114 may comprise a charge-coupled device (CCD) image sensor or a complementary metal-oxide-semiconductor (CMOS) image sensor. Each image sensor 114 may capture 2-D luminance and/or color information.

The optics 116 may comprise optical devices for conditioning and directing electromagnetic waves received via the lens 101c. The optics 116 may direct electromagnetic waves in the visible spectrum to the image sensors 114 and direct electromagnetic waves in the infrared spectrum to the depth sensor 108. The optics 116 may comprise, for example, one or more lenses, prisms, filters, and/or mirrors.

The lens 118 may be operable to collect and sufficiently focus electromagnetic waves in the visible and infrared spectra.

The digital display 120 may comprise an LCD, LED, OLED, or other digital display technology on which images recorded via the camera 102 may be displayed. In an embodiment of the invention, the digital display 120 may be operable to display 3-D images.

The controller 122 may comprise suitable logic, circuitry, interfaces, and/or code that enable a user to interact with the camera 102, for example, controls for controlling recording and playback. In an embodiment of the invention, the controller 122 may enable the user to select whether the camera 102 records and/or outputs video in 2-D or 3-D mode.

The optical viewfinder 124 may enable a user to see what the lens 101c "sees," that is, what is "in frame."

In operation, the depth sensor 108 may capture depth information and the image sensors 114 may capture 2-D image information.
For lower-end applications of the camera 102, such as a security camera, the image sensors 114 may capture only luminance information, so as to provide black-and-white 3-D video. The depth information may, for example, be stored and/or communicated as metadata and/or as an additional layer of information associated with the 2-D image information. In this regard, a data structure in which the 2-D image information is stored may comprise one or more fields and/or indications that depth data associated with the stored 2-D image information is available for rendering a 3-D image. Similarly, a packet in which 2-D image information is communicated may comprise one or more fields and/or indications that depth data associated with the communicated 2-D image information is available for rendering a 3-D image. Accordingly, to output 2-D video, the camera 102 may read the 2-D image information from memory and process it to generate a 2-D video stream for the display and/or the I/O block. To output 3-D video, the camera 102 may: (1) read the 2-D image information from memory; (2) determine, based on an indication stored in memory along with the 2-D image information, that associated depth information is available; (3) read the depth information from memory; and (4) process the 2-D image information and the depth information to generate a 3-D video stream.

Processing the 2-D image information and the depth information may comprise synchronizing the depth information with the 2-D image information. Processing the 2-D image information and the depth information may also comprise scaling and/or interpolating either or both of the 2-D image information and the associated depth information. For example, the resolution of the depth sensor 108 may be less than the resolution of the image sensors 114. Accordingly, to generate depth information for each pixel, or group of pixels, of the 2-D image information, the camera 102 may be operable to interpolate between pixels of the depth information.
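The per-pixel interpolation described above can be sketched as a bilinear upscale of a low-resolution depth map to the image sensor's resolution. The bilinear kernel is an assumption for illustration; the description does not specify which interpolation method is used.

```python
import numpy as np

def upscale_depth_bilinear(depth, out_h, out_w):
    """Bilinearly interpolate a low-resolution depth map to (out_h, out_w)."""
    in_h, in_w = depth.shape
    ys = np.linspace(0, in_h - 1, out_h)   # fractional source row per output row
    xs = np.linspace(0, in_w - 1, out_w)   # fractional source column per output column
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, in_h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, in_w - 1)
    wy = (ys - y0)[:, None]
    wx = (xs - x0)[None, :]
    d = depth.astype(np.float64)
    top = d[y0][:, x0] * (1 - wx) + d[y0][:, x1] * wx
    bot = d[y1][:, x0] * (1 - wx) + d[y1][:, x1] * wx
    return top * (1 - wy) + bot * wy

# Upscale a 2x2 depth map to 3x3; midpoints land halfway between neighbors.
depth = np.array([[0.0, 10.0], [20.0, 30.0]])
up = upscale_depth_bilinear(depth, 3, 3)
```

The same idea applied along the time axis (interpolating between depth frames) covers the frame-rate mismatch discussed next.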
Similarly, the frame rate of the depth sensor 108 may be less than the frame rate of the image sensors 114. Accordingly, to generate a frame of depth information for each frame of 2-D image information, the camera 102 may be operable to interpolate between frames of the depth information.

FIG. 3 illustrates exemplary processing of depth information and 2-D image information to generate a 3-D image, in accordance with an embodiment of the invention. Referring to FIG. 3, a frame of depth information 130 captured by the depth sensor 108 and a frame of 2-D image information 134 captured by the image sensors 114 may be processed to generate a frame of a 3-D image. The plane 132, indicated by the dashed line, is used merely to represent depth on a two-dimensional drawing.

In the frame 130, the line weight is used to indicate depth: heavier lines are closer to the viewer. Thus, the object 138 is furthest from the camera 102, the object 142 is closest to the camera 102, and the object 140 is at an intermediate distance. In various embodiments of the invention, the depth information may be mapped to a grayscale, or pseudo-grayscale, image for display to a viewer. Such a mapping may be performed, for example, by the DSP 110.

The image in the frame 134 is a conventional 2-D image. A viewer of the frame 134, for example on the display 120 or on a device connected to the camera 102 via the I/O module 112, perceives the same distance between himself and each of the objects 138, 140, and 142. In other words, each of the objects 138, 140, and 142 appears to lie on the plane 132.

The image in the frame 136 is a 3-D image. A viewer of the frame 136, for example on the display 120 or on a device connected to the camera 102 via the I/O module 112, perceives the object 138 as being furthest from him, the object 142 as being closest to him, and the object 140 as being at an intermediate distance.
In this regard, the object 138 appears to be behind the reference plane, the object 140 appears to be on the reference plane, and the object 142 appears to be in front of the reference plane.

FIG. 4 is a flow chart of exemplary steps for generating 3-D video utilizing a 2-D image sensor and a depth sensor, in accordance with an embodiment of the invention. Referring to FIG. 4, the exemplary steps begin at step 150, in which the camera 102 may be powered on. In step 152, it is determined whether a 3-D mode is enabled. If not, then in step 154 the camera 102 may capture 2-D images and/or video.

Returning to step 152, if the 3-D mode is enabled (for example, based on a user selection), then in step 156 the camera 102 may capture 2-D image information (luminance information and/or color information) via the sensors 114 and capture depth information via the sensor 108. In step 158, the depth information may be associated with the corresponding 2-D image information. This association may comprise, for example, synchronizing the 2-D image information and the depth information, and associating the 2-D image information and the depth information in the memory 106.

In step 160, playback of the captured video may be requested. In step 162, it may be determined whether the camera is in a 2-D or a 3-D video playback mode. For the 2-D playback mode, the exemplary steps may proceed to step 164, in which the 2-D image information may be read from the memory 106. In step 166, the camera 102 may render and/or otherwise process the 2-D image information to generate a 2-D video stream. In step 168, the 2-D video stream may be output to the display 120 and/or to an external device via the I/O block 112.

Returning to step 162, for the 3-D playback mode, the exemplary steps may proceed to step 170, in which the 2-D image information and the associated depth information may be read from the memory 106.
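The playback branch of FIG. 4, steps 160 through 170, can be sketched as a mode check followed by the appropriate read path. The frame-store dictionary and its `depth_available` flag below are invented for illustration of the "indication stored in memory" described earlier, not a structure defined by the patent.

```python
def play_back(store, mode):
    """Follow the FIG. 4 playback branch: 2-D reads image frames only; 3-D
    additionally requires that associated depth information is flagged as
    available before reading it."""
    frames = store["image_frames"]                       # step 164 / step 170
    if mode == "3d":
        if not store.get("depth_available", False):      # indication in memory
            raise ValueError("no depth information associated with this video")
        return [("3d", img, dep)
                for img, dep in zip(frames, store["depth_frames"])]
    return [("2d", img) for img in frames]               # steps 164-168

store = {"image_frames": ["f0", "f1"],
         "depth_frames": ["d0", "d1"],
         "depth_available": True}
stream_3d = play_back(store, "3d")  # step 170: image and depth read together
stream_2d = play_back(store, "2d")
```

The returned tuples stand in for the rendered 2-D or 3-D video stream that steps 166/172 would generate.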
At step 172, the camera 102 may process the 2-D image information and the depth information to generate a 3-D video stream. At step 174, the 3-D video stream may be output to the display 120 and/or to an external device via the I/O block 112. The present invention provides various aspects of a method and system for generating 3-D video using a single field of view camera. In various embodiments of the invention, the single field of view camera 102 may comprise one or more image sensors 114 and one or more depth sensors 108. Two-dimensional image data may be captured via the image sensor 114 and depth information may be captured via the depth sensor 108. The depth sensor may utilize infrared waves transmitted by the emitter 109 of the single field of view camera. The single field of view camera 102 may be operable to utilize the captured depth information to generate a three-dimensional video stream from the captured two-dimensional image data. The single field of view camera 102 may be operable to synchronize the captured depth information with the captured two-dimensional image data. The single field of view camera 102 may be operable to scale the resolution of the captured depth information to match the resolution of the captured two-dimensional image data. The single field of view camera 102 may be operable to adjust the frame rate of the captured depth information to match the frame rate of the captured two-dimensional image data. The single field of view camera 102 may be operable to store the captured depth information in the memory 106 independently of the captured two-dimensional image data. Thus, the image data and the depth information may be utilized separately and/or in combination to provide one or more video streams. The captured two-dimensional image data may comprise one or both of brightness information and color information. The single field of view camera 102 may be operable to generate a two-dimensional video stream from the captured two-dimensional image data.
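The resolution scaling mentioned above is not detailed in the text. One simple possibility is nearest-neighbour upsampling of the low-resolution depth map to the image sensor's resolution; the sketch below assumes NumPy and is only one plausible approach (a real pipeline might prefer bilinear or edge-aware filtering):

```python
import numpy as np

def upscale_depth(depth_frame, target_shape):
    """Nearest-neighbour upscale of a low-resolution depth frame to the
    image sensor's resolution. Each output pixel takes the value of the
    source pixel it maps onto."""
    h, w = depth_frame.shape
    th, tw = target_shape
    rows = np.arange(th) * h // th  # source row for each output row
    cols = np.arange(tw) * w // tw  # source column for each output column
    return depth_frame[rows][:, cols]
```

Nearest-neighbour is cheap enough for an embedded DSP, but it preserves the blocky footprint of the depth sensor; smoother filters trade computation for better-looking object boundaries in the generated 3-D stream.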
The single field of view camera 102 may be configurable to output one or both of the two-dimensional video stream and the three-dimensional video stream. Other embodiments of the invention may provide a non-transitory computer readable medium and/or storage medium, having stored thereon machine code and/or a computer program having at least one code section executable by a machine and/or a computer, thereby causing the machine and/or computer to perform the steps described herein for generating 3-D video using a single field of view camera. Accordingly, the invention may be realized in hardware, software, or a combination of hardware and software. The invention may be realized in a centralized fashion in at least one computer system, or in a distributed fashion where different elements are spread across several interconnected computer systems. Any kind of computer system or other apparatus adapted for carrying out the methods described herein is suitable. A typical combination of hardware and software may be a general-purpose computer system with a computer program that, when loaded and executed, controls the computer system such that it carries out the methods described herein. The invention may also be embedded in a computer program product, which comprises all the features enabling the implementation of the methods described herein, and which, when loaded in a computer system, is able to carry out these methods. Computer program in the present context means any expression, in any language, code or notation, of a set of instructions intended to cause a system having information processing capability to perform a particular function either directly, or after either or both of the following: a) conversion to another language, code or notation; b) reproduction in a different material form. While the invention has been described with reference to certain embodiments, it will be understood by those skilled in the art that various changes may be made and equivalents may be substituted without departing from the scope of the invention. In addition, many modifications may be made to adapt a particular situation or material to the teachings of the invention without departing from its scope.
Therefore, it is intended that the invention not be limited to the particular embodiments disclosed, but that the invention will include all embodiments falling within the scope of the appended claims. This application makes reference to, claims priority to, and claims the benefit of U.S. Provisional Application Serial No. 61/377,867, filed on August 27, 2010, the entire disclosure of which is hereby incorporated herein by reference in its entirety. BRIEF DESCRIPTION OF THE DRAWINGS: FIG. 1 is a schematic diagram comparing an exemplary single field of view, or monoscopic, camera embodying features of the present invention with a conventional stereoscopic camera; FIG. 2 is a schematic diagram of an exemplary single field of view camera, in accordance with an embodiment of the present invention; FIG. 3 is a schematic diagram of exemplary processing of depth information and image information to generate a 3-D image, in accordance with an embodiment of the present invention; FIG. 4 is a flow diagram of exemplary steps for generating 3-D video using a 2-D image sensor and a depth sensor, in accordance with an embodiment of the present invention. [Major component symbol description] Lenses 101a, 101b; Stereoscopic camera lenses 101c; Single field of view camera 102; Processor 104; Memory 106; Video encoder/decoder 107; Depth sensor 108; Audio encoder/decoder 109; Digital signal processor (DSP) 110; Speaker 111; Input/output module 112; Microphone 113; Image sensor 114; Optics 116; Lens 118; Digital display 120; Controller 122; Optical viewfinder 124; Memory 126; Sensor 128; Depth information 130; Plane 132; 2-D image information 134; Frame 136; Objects 138, 140, 142

Claims (1)

VII. Scope of the claims:
1. A method, comprising:
capturing two-dimensional image data via one or more image sensors of a monoscopic (single field of view) camera;
capturing depth information via a depth sensor of said monoscopic camera; and
utilizing said captured depth information to generate a three-dimensional video stream from said captured two-dimensional image data.
2. The method according to claim 1, comprising synchronizing said captured depth information with said captured two-dimensional image data.
3. The method according to claim 1, comprising scaling a resolution of said depth information to match a resolution of said two-dimensional image data.
4. The method according to claim 1, comprising adjusting a frame rate of said captured depth information to match a frame rate of said captured two-dimensional image data.
5. The method according to claim 1, comprising storing said captured depth information in a memory independently of said captured two-dimensional image data.
6. A system, comprising:
one or more circuits for use in a monoscopic camera, said one or more circuits comprising one or more image sensors and a depth sensor, said one or more circuits being operable to:
capture two-dimensional image data via said one or more image sensors of said monoscopic camera;
capture depth information via said depth sensor of said monoscopic camera; and
utilize said captured depth information to generate a three-dimensional video stream from said captured two-dimensional image data.
7. The system according to claim 6, wherein said one or more circuits are operable to synchronize said captured depth information with said captured two-dimensional image data.
8. The system according to claim 6, wherein said one or more circuits are operable to scale a resolution of said depth information to match a resolution of said two-dimensional image data.
9. The system according to claim 6, wherein said one or more circuits are operable to adjust a frame rate of said captured depth information to match a frame rate of said captured two-dimensional image data.
10. The system according to claim 6, wherein said one or more circuits are operable to store said captured depth information in a memory independently of said captured two-dimensional image data.
TW100130759A 2010-08-27 2011-08-26 Method and system for generating three-dimensional video utilizing a monoscopic camera TW201225638A (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US37786710P 2010-08-27 2010-08-27
US43919311P 2011-02-03 2011-02-03
US13/077,900 US20120050480A1 (en) 2010-08-27 2011-03-31 Method and system for generating three-dimensional video utilizing a monoscopic camera

Publications (1)

Publication Number Publication Date
TW201225638A true TW201225638A (en) 2012-06-16

Family

ID=46726253

Family Applications (1)

Application Number Title Priority Date Filing Date
TW100130759A TW201225638A (en) 2010-08-27 2011-08-26 Method and system for generating three-dimensional video utilizing a monoscopic camera

Country Status (1)

Country Link
TW (1) TW201225638A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI563842B (en) * 2013-10-25 2016-12-21 Lips Inc Sensing device and signal processing method thereof


Similar Documents

Publication Publication Date Title
KR101245214B1 (en) Method and system for generating three-dimensional video utilizing a monoscopic camera
CN110832883B (en) Mixed Order Ambisonics (MOA) audio data for computer mediated reality systems
US8810565B2 (en) Method and system for utilizing depth information as an enhancement layer
US10389994B2 (en) Decoder-centric UV codec for free-viewpoint video streaming
US9013552B2 (en) Method and system for utilizing image sensor pipeline (ISP) for scaling 3D images based on Z-depth information
US20120050478A1 (en) Method and System for Utilizing Multiple 3D Source Views for Generating 3D Image
US8994792B2 (en) Method and system for creating a 3D video from a monoscopic 2D video and corresponding depth information
US9071831B2 (en) Method and system for noise cancellation and audio enhancement based on captured depth information
US20120054575A1 (en) Method and system for error protection of 3d video
US20120050491A1 (en) Method and system for adjusting audio based on captured depth information
TW201907707A (en) Audio-driven selection
JP2011160299A (en) Three-dimensional imaging system and camera for the same
US20120050477A1 (en) Method and System for Utilizing Depth Information for Providing Security Monitoring
US20120050495A1 (en) Method and system for multi-view 3d video rendering
US20120050479A1 (en) Method and System for Utilizing Depth Information for Generating 3D Maps
EP2485494A1 (en) Method and system for utilizing depth information as an enhancement layer
TW201225638A (en) Method and system for generating three-dimensional video utilizing a monoscopic camera
TWI526044B (en) Method and system for creating a 3d video from a monoscopic 2d video and corresponding depth information
EP2485493A2 (en) Method and system for error protection of 3D video
KR101419419B1 (en) Method and system for creating a 3d video from a monoscopic 2d video and corresponding depth information
KR101303719B1 (en) Method and system for utilizing depth information as an enhancement layer
EP2541945A2 (en) Method and system for utilizing an image sensor pipeline (ISP) for 3D imaging processing utilizing Z-depth information
KR20120089604A (en) Method and system for error protection of 3d video