TWI686079B - Method and apparatus of processing 360-degree virtual reality images - Google Patents
- Publication number
- TWI686079B (application TW107121493A)
- Authority
- TW
- Taiwan
- Prior art keywords
- motion vector
- projection
- sphere
- dimensional
- frame
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/597—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/503—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
- H04N19/51—Motion estimation or motion compensation
- H04N19/55—Motion estimation with spatial constraints, e.g. at image or region borders
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
- Processing Or Creating Images (AREA)
Description
The present invention relates to image/video processing and coding of 360-degree virtual reality (VR) images/sequences. In particular, the present invention relates to deriving motion vectors for three-dimensional (3D) content represented in different projection formats.
360-degree video, also known as immersive video, is an emerging technology that can provide a "feeling of being there". The immersive feeling is achieved by surrounding the user with a wrap-around scene covering a panoramic view, in particular a full 360-degree field of view. The sense of immersion can be further improved by stereographic rendering. Accordingly, panoramic video is widely used in virtual reality (VR) applications.
Immersive video involves capturing a scene with multiple cameras to cover a panoramic view, e.g., a 360-degree field of view. An immersive camera typically uses a panoramic camera or a set of cameras arranged to capture a 360-degree field of view, usually with two or more cameras. All videos must be captured simultaneously, and a single segment of the scene (also called a single perspective) is recorded. In addition, the camera set is usually arranged to capture views horizontally, although other camera arrangements are possible.
A 360-degree VR image can be captured using a 360-degree spherical panoramic camera, or using multiple images covering all fields of view around 360 degrees. A 3D spherical image is difficult to process or store with conventional image/video processing devices, so 360-degree VR images are usually converted into a 2D format using a 3D-to-2D projection method. For example, equirectangular projection (ERP) and cubemap projection (CMP) are commonly adopted projection methods, and 360-degree images can be stored in the equirectangular projection format. ERP projects the entire surface of the sphere onto a flat image, with latitude on the vertical axis and longitude on the horizontal axis. Fig. 1 shows an example of projecting a sphere 110 onto a rectangular image 120 according to ERP, where each meridian is mapped to a vertical line of the ERP image. For the ERP projection, areas near the north and south poles of the sphere are stretched far more severely than areas near the equator (a single pole point becomes an entire line). Because of this stretching distortion, especially near the two poles, predictive coding tools usually cannot make good predictions, which reduces coding efficiency. Fig. 2 shows a cube 210 with six faces, onto which a 360-degree VR image can be projected according to CMP. There are different ways to lift the six faces off the cube and assemble them into a rectangular image. The example in Fig. 2 divides the six faces into two parts (i.e., 220a and 220b), each comprising three connected faces. The two parts can be unfolded into two strips (i.e., 230a and 230b), each corresponding to a continuous face image. Depending on the selected layout format, the two strips can be combined into a compact rectangular frame.
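The ERP mapping described above can be sketched as a pair of coordinate conversions. The sketch below is illustrative only (not part of the claimed invention); the function names and the pixel-coordinate convention (x grows eastward from longitude −π, y grows southward from latitude +π/2) are assumptions for the example.

```python
import math

def erp_sphere_to_pixel(lon, lat, width, height):
    """Map longitude/latitude (radians) to ERP pixel coordinates.

    Assumes lon in [-pi, pi), lat in [-pi/2, pi/2]; every meridian
    becomes a vertical line of the width x height ERP image."""
    x = (lon / (2.0 * math.pi) + 0.5) * width
    y = (0.5 - lat / math.pi) * height
    return x, y

def erp_pixel_to_sphere(x, y, width, height):
    """Inverse mapping: ERP pixel coordinates back to longitude/latitude."""
    lon = (x / width - 0.5) * 2.0 * math.pi
    lat = (0.5 - y / height) * math.pi
    return lon, lat
```

The pole stretching mentioned in the text follows directly from this mapping: every row has the same pixel width, but the circle of latitude it represents shrinks as cos(lat), so rows near the poles are stretched the most.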
As described in JVET-F1003 (Y. Ye, et al., "Algorithm descriptions of projection format conversion and video quality metrics in 360Lib", Joint Video Exploration Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 6th Meeting: Hobart, AU, 31 March – 7 April 2017, Document: JVET-F1003), both the ERP format and the CMP format have been included in the projection format conversion being considered for next-generation video coding. Besides ERP and CMP, there are various other VR projection formats, such as Adjusted Cubemap Projection (ACP), Equal-Area Projection (EAP), Octahedron Projection (OHP), Icosahedron Projection (ISP), Segmented Sphere Projection (SSP) and Rotated Sphere Projection (RSP), which are widely used in the field.
Fig. 3 shows an example of OHP, in which a sphere is projected onto the eight faces of an octahedron 310. The eight faces 320 lifted off the octahedron 310 can be converted into an intermediate format 330 by cutting the face edge between faces 1 and 5, rotating faces 1 and 5 to connect to faces 2 and 6 respectively, and applying a similar process to faces 3 and 7. The intermediate format can then be packed into a rectangular image 340.
Fig. 4 shows an example of ISP, in which a sphere is projected onto the 20 faces of an icosahedron 410. The 20 faces 420 from the icosahedron 410 can be packed into a rectangular image 430 (referred to as a projection layout).
SSP has been disclosed in JVET-E0025 (Zhang et al., "AHG8: Segmented Sphere Projection for 360-degree video", Joint Video Exploration Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 5th Meeting: Geneva, CH, 12–20 January 2017, Document: JVET-E0025) as a method to convert a spherical image into the SSP format. Fig. 5 shows an example of segmented sphere projection, in which a spherical image 500 is mapped into a north-pole image 510, a south-pole image 520 and an equatorial-segment image 530. The boundaries of the three segments correspond to latitude 45°N (i.e., 502) and latitude 45°S (i.e., 504), where 0° corresponds to the equator (i.e., 506). The north and south poles are mapped into two circular areas (i.e., 510 and 520), and the projection of the equatorial segment can be the same as ERP or EAP. The diameter of each circle is equal to the width of the equatorial segment, because the pole segments and the equatorial segment each span 90° of latitude. The north-pole image 510, south-pole image 520 and equatorial-segment image 530 can be packed into a rectangular image.
Fig. 6 shows an example of RSP, in which a sphere 610 is partitioned into a middle 270°x90° region 620 and a remaining part 622. Each RSP part can be further stretched at the top and bottom sides to generate a deformed part with an oval shape. As shown in Fig. 6, the two oval-shaped parts can be fitted into a rectangular format 630.
ACP is based on CMP. If the 2D coordinates (u′, v′) of CMP are determined, the 2D coordinates (u, v) of ACP can be calculated by adjusting (u′, v′) according to the adjustment equations (1) and (2).
Using a lookup table with a given position (u, v) and face index f, the 3D coordinates (X, Y, Z) can be derived. For the 3D-to-2D coordinate conversion, given (X, Y, Z), the coordinates (u′, v′) and the face index f can be calculated according to the CMP table, and the 2D coordinates of ACP can then be calculated according to the equation set.
Similarly to ERP, EAP also maps the sphere surface onto a single face. In the (u, v) plane, u and v are both in the range [0, 1]. For the 2D-to-3D coordinate conversion, given a sampling position (m, n), the 2D coordinates (u, v) are first calculated in the same way as for ERP. The longitude and latitude (ϕ, θ) on the sphere can then be calculated from (u, v) as:
ϕ = (u − 0.5) * (2π) (3)
θ = sin⁻¹(1.0 − 2v) (4)
Finally, (X, Y, Z) can be calculated using the same equations as for ERP:
X = cos(θ) cos(ϕ) (5)
Y = sin(θ) (6)
Z = −cos(θ) sin(ϕ) (7)
Conversely, the longitude and latitude (ϕ, θ) can be evaluated from the (X, Y, Z) coordinates as follows:
ϕ = tan⁻¹(−Z/X) (8)
θ = sin⁻¹(Y/(X² + Y² + Z²)^(1/2)) (9)
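Equations (3)-(9) can be sketched directly in code. This is an illustrative transcription only; the function names are assumptions, and `atan2` is used for Eq. (8) so that the longitude keeps the correct quadrant.

```python
import math

def eap_2d_to_3d(u, v):
    """EAP (u, v) in [0,1]^2 to a 3D point on the unit sphere, per Eq. (3)-(7)."""
    phi = (u - 0.5) * (2.0 * math.pi)        # longitude, Eq. (3)
    theta = math.asin(1.0 - 2.0 * v)         # latitude, Eq. (4)
    x = math.cos(theta) * math.cos(phi)      # Eq. (5)
    y = math.sin(theta)                      # Eq. (6)
    z = -math.cos(theta) * math.sin(phi)     # Eq. (7)
    return x, y, z

def sphere_to_lonlat(x, y, z):
    """(X, Y, Z) back to longitude/latitude, per Eq. (8)-(9)."""
    phi = math.atan2(-z, x)                                  # Eq. (8)
    theta = math.asin(y / math.sqrt(x*x + y*y + z*z))        # Eq. (9)
    return phi, theta
```

A round trip through both functions returns the original (ϕ, θ), which is a quick way to check such mapping pairs.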
Since images or video associated with virtual reality can require a large amount of storage space or transmission bandwidth, image/video compression is usually applied to reduce the required storage or bandwidth. Inter prediction has become a powerful coding tool that exploits inter-frame redundancy using motion estimation/motion compensation. If conventional inter prediction is applied to 2D frames converted from 3D space, motion estimation/motion compensation cannot work properly, because objects in 3D space may become distorted or deformed in the 2D frames due to object motion or relative motion between the object and the camera. To improve inter prediction for 2D frames converted from 3D space, various inter prediction techniques have been developed to increase its accuracy.
The present invention discloses a method and apparatus for processing 360-degree virtual reality images. According to one method, input data of a current block in a 2D frame is received, where the 2D frame is projected from a 3D sphere. A first motion vector associated with a neighbouring block in the 2D frame is determined, where the first motion vector points from a first start position in the neighbouring block to a first end position in the 2D frame. According to a target projection, the first motion vector is projected onto the 3D sphere. The first motion vector on the 3D sphere is rotated about a rotation axis along a rotation circle on the surface of the 3D sphere to generate a second motion vector on the 3D sphere. According to the inverse target projection, the second motion vector on the 3D sphere is mapped back to the 2D frame. The current block in the 2D frame is then encoded or decoded using the second motion vector. The second motion vector may be included as a candidate in a merge candidate list or an AMVP candidate list for encoding or decoding the current block.
In one embodiment, the rotation circle corresponds to a great circle on the surface of the 3D sphere. In another embodiment, the rotation circle is smaller than a great circle on the surface of the 3D sphere. The target projection may correspond to ERP, CMP, ACP, EAP, OHP, ISP, SSP, RSP or CLP.
In one embodiment, projecting the first motion vector onto the 3D sphere comprises projecting the first start position, the first end position and a second start position in the 2D frame onto the 3D sphere according to the target projection, where the second start position is located at the position in the current block corresponding to the first start position in the neighbouring block. Rotating the first motion vector on the 3D sphere along the rotation circle comprises: determining a target rotation for rotating from the first start position to the second start position on the 3D sphere about the rotation axis along the rotation circle on the surface of the 3D sphere; and rotating the first end position to a second end position on the 3D sphere using the target rotation. Mapping the second motion vector on the 3D sphere to the 2D frame comprises: mapping the second end position on the 3D sphere back to the 2D frame according to the inverse target projection; and determining the second motion vector in the 2D frame from the second start position and the second end position in the 2D frame.
According to another method, two 2D frames are received, where the two frames are projected using a target projection from 3D spheres corresponding to two different viewpoints, and a current block and a neighbouring block are located in the two 2D frames. A front point of the camera is determined based on the two 2D frames. Multiple moving flows are determined in the two 2D frames. The translation of the camera is determined based on a first motion vector associated with the neighbouring block. A second motion vector associated with the current block is derived based on the camera translation. The current block in the 2D frame is then encoded or decoded using the second motion vector.
For the above method, the moving flows in the two 2D frames can be calculated from the tangent direction at each pixel of the two 2D frames. The second motion vector may be included as a candidate in a merge candidate list or an AMVP candidate list for encoding or decoding the current block. The target projection may correspond to ERP, CMP, ACP, EAP, OHP, ISP, SSP, RSP or CLP.
According to yet another method, input data of a current block in a 2D frame is received, where the 2D frame is projected from a 3D sphere according to a target projection. A first motion vector associated with a neighbouring block in the 2D frame is determined, where the first motion vector points from a first start position in the neighbouring block to a first end position in the 2D frame. The first motion vector is scaled to generate a second motion vector. The current block in the 2D frame is then encoded or decoded using the second motion vector.
In one embodiment, scaling the first motion vector to generate the second motion vector comprises projecting the first start position, the first end position and a second start position in the 2D frame onto the 3D sphere according to the target projection, where the second start position is located at the position in the current block corresponding to the first start position in the neighbouring block. The scaling further comprises: scaling the longitude component of the first motion vector to generate a scaled longitude component of the first motion vector; scaling the latitude component of the first motion vector to generate a scaled latitude component of the first motion vector; and determining a second end position corresponding to the second start position based on the scaled longitude component and the scaled latitude component. The scaling further comprises: mapping the second end position on the 3D sphere back to the 2D frame according to the inverse target projection; and determining the second motion vector in the 2D frame based on the second start position and the second end position in the 2D frame.
In another embodiment, scaling the first motion vector to generate the second motion vector comprises applying a first combined function to generate the x component of the second motion vector and applying a second combined function to generate the y component of the second motion vector, where both combined functions are based on the first start position, the second start position in the current block corresponding to the first start position, the first motion vector and the target projection; and where the first and second combined functions combine the target projection, the scaling and the inverse target projection, the target projection being used to project first data in the 2D frame to second data on the 3D sphere, the scaling being used to scale a selected motion vector on the 3D sphere into a scaled motion vector on the 3D sphere, and the inverse target projection being used to project the scaled motion vector into the 2D frame. In one embodiment, when the target projection corresponds to equirectangular projection, the first combined function corresponds to (cos(φ₁)/cos(φ₂)) and the second combined function corresponds to the identity function, where φ₁ corresponds to the first latitude associated with the first start position and φ₂ corresponds to the second latitude associated with the second start position.
The following description presents the preferred modes of carrying out the invention. It is intended to illustrate the general principles of the invention and should not be taken in a limiting sense. The scope of the invention is best determined by reference to the appended claims.
In conventional inter prediction for video coding, motion estimation/motion compensation is widely used to exploit correlation in video data so as to reduce the transmitted information. Conventional video content corresponds to 2D video data, and motion estimation and motion compensation techniques usually assume translational motion. In next-generation video coding, more advanced motion models, such as the affine model, are considered. However, these techniques derive the 3D motion model from 2D images.
In the present invention, motion information is derived in the 3D domain, so that more accurate motion information can be obtained. According to one method, rotation of the sphere is assumed in order to account for block deformation in the 2D video content. Figs. 7A and 7B show examples of deformation of the 2D image caused by rotation of the sphere. In Fig. 7A, a block 701 on the 3D sphere 700 is moved to become block 702. When block 701 and the moved block 702 are mapped onto an ERP frame 703, the two corresponding blocks become blocks 704 and 705 in the ERP frame. Although the blocks on the 3D sphere (i.e., 701 and 702) correspond to the same block, the two blocks (i.e., 704 and 705) have different shapes in the ERP frame. In other words, object movement or rotation in 3D space may distort the object in the 2D frame (the ERP frame in this example). In Fig. 7B, a region 711 is located on the surface 710 of the sphere. Due to rotation of the sphere along path 713, region 711 is moved to region 714, and the motion vector 712 associated with region 711 is mapped to motion vector 715. The counterparts of parts 711-715 in the 2D domain 720 are shown as parts 721-725. In another example, the parts corresponding to 713-725 for trajectory 733 on the sphere surface 730 are shown as parts 733-735, and the corresponding parts in the 2D domain 740 are drawn as 743-745. In yet another example, the parts corresponding to 713-725 for trajectory 753 on the sphere surface 750 are shown as parts 753-755, and the corresponding parts in the 2D domain 760 are shown as 763-765.
Fig. 8 shows an example of sphere rotation 800 from a point a 830 to a point b 840 on a larger circle 820 of a sphere 810, where the larger circle 820 corresponds to a great circle on the surface of the sphere 810. The rotation axis 850 is indicated by the arrow in Fig. 8, and the rotation angle is θa.
Fig. 9 shows an example of sphere rotation 900 from a point a to a point b on a smaller circle 910 of a sphere 920, where the smaller circle 910 corresponds to a circle smaller than a great circle on the surface of the sphere 920 (i.e., circle 930). The centre point of the rotation is shown as point 912 in Fig. 9. Another example is sphere rotation 950 from a point a to a point b on a smaller circle 960 of a sphere 970. The larger circle 980 (i.e., a great circle) is shown in Fig. 9, and the rotation axis 990 is indicated by an arrow.
Fig. 10 illustrates deriving a motion vector for a 2D projection image using the sphere rotation model. In Fig. 10, diagram 1010 describes deriving a motion vector for the 2D projection image using the sphere rotation model according to an embodiment of the present invention. Positions a and b are two positions in the 2D projection image, and motion vector mva points from a to a′. The goal is to find the motion vector mvb for position b. The present invention projects the 2D projection image onto a sphere 1020 using the 2D-to-3D projection. A rotation around the smaller or larger circle is applied to rotate from position a to position b, and position a′ is rotated to position b′ according to the same selected sphere rotation. The inverse projection is then applied to the 3D sphere to convert position b′ to the 2D frame 1030. The motion vector for position b can be calculated as mvb = b′ − b. Motion vectors mva and mvb are two-dimensional vectors in the (x, y) domain; positions a, a′, b and b′ are two-dimensional coordinates in the (x, y) domain, and their counterparts on the sphere are three-dimensional coordinates in the (θ, Φ) domain.
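The rotation step of this derivation can be sketched with Rodrigues' rotation formula: find the great-circle rotation taking a to b, then apply the same rotation to a′ to obtain b′. This is an illustrative sketch only (the 2D-to-3D projection and its inverse are omitted), and the function names are assumptions; it handles the great-circle case, not a rotation along a smaller circle.

```python
import math

def rotate(vec, axis, angle):
    """Rodrigues' formula: rotate vec about the unit axis by angle (radians)."""
    dot = sum(a * v for a, v in zip(axis, vec))
    cross = (axis[1]*vec[2] - axis[2]*vec[1],
             axis[2]*vec[0] - axis[0]*vec[2],
             axis[0]*vec[1] - axis[1]*vec[0])
    c, s = math.cos(angle), math.sin(angle)
    return tuple(v*c + cr*s + a*dot*(1.0 - c)
                 for v, cr, a in zip(vec, cross, axis))

def great_circle_rotation(a, b):
    """Axis and angle of the great-circle rotation taking unit vector a to b."""
    cross = (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])
    norm = math.sqrt(sum(c*c for c in cross))
    dot = sum(x*y for x, y in zip(a, b))
    axis = tuple(c / norm for c in cross)        # undefined if a, b are (anti)parallel
    return axis, math.atan2(norm, dot)
```

Given sphere points a, a′ and b, the rotated end point is `b_prime = rotate(a_prime, *great_circle_rotation(a, b))`, after which the inverse projection and mvb = b′ − b follow as in the text.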
The derived MV can be used as a candidate for video coding using merge mode or AMVP mode, both of which, as disclosed in High Efficiency Video Coding (HEVC), are techniques for predictively coding the motion information of a block. When a block is coded in merge mode, the current block uses the motion information of the block indicated by a merge index, which points to the selected candidate in a merge candidate list. When a block is coded in AMVP mode, the motion vector of the current block is predicted by the predictor indicated by an AMVP index, which points to the selected candidate in an AMVP candidate list. Fig. 11 illustrates using MVs derived from the sphere rotation as merge candidates or AMVP candidates. In Fig. 11, a layout 1110 of the neighbouring blocks used for deriving merge candidates is shown for block 1112, where the neighbouring blocks include spatial neighbours A0, A1, B0, B1 and B2, and temporal neighbour Col0 or Col1. For a current block 1120 in the 2D frame, the motion vectors from blocks A0 and B0 can be used to derive motion candidates for the current block. As disclosed above, according to the sphere rotation, the motion vector mva of neighbouring block A0 can be used to derive the corresponding motion vector mva′ of the current block. Similarly, according to the sphere rotation, the motion vector mvb of neighbouring block B0 can be used to derive the corresponding motion vector mvb′ of the current block. Motion vectors mva′ and mvb′ can be included in the merge candidate list or the AMVP candidate list.
In the present invention, motion vector derivation based on viewpoint translation is disclosed. Fig. 12 shows an example in which an object (i.e., a tree) is projected onto the surface of a sphere at different camera positions. At camera position A, the tree is projected onto sphere 1210 to form tree image 1240. At a forward position B, the tree is projected onto sphere 1220 to form tree image 1250; the tree image 1241 corresponding to camera position A is also shown on sphere 1220 for comparison. At a further forward position C, the tree is projected onto sphere 1230 to form tree image 1260; the tree image 1242 corresponding to camera position A and the tree image 1251 corresponding to camera position B are also shown for comparison. In Fig. 12, for video captured by a camera moving along a straight line, the movement direction of the camera in 3D space (as indicated by arrows 1212, 1222 and 1232 for the three camera positions) can be represented by latitude and longitude coordinates (θ, φ), where (θ, φ) corresponds to the intersection of the motion vector with the 3D sphere. The point (θ, φ) is projected onto the 2D target projection plane, and this point becomes the front point.
Fig. 13 is an example of an ERP frame of a model overlaid with moving flows, where the flow of the background (i.e., static objects) can be determined if the camera front point is known. These flows are shown as arrows. A moving flow corresponds to the movement direction of the video content for a camera moving in one direction: the camera movement causes relative movement of static background objects, and the movement direction of background objects in the 2D frames captured by the camera can be represented as a moving flow. Fig. 14 shows an exemplary process of MV derivation based on viewpoint translation. In Fig. 14, an object 1410 is projected onto sphere 1420 to form an image 1422 on the sphere surface corresponding to camera position 1424. At camera position 1434, the object 1410 is projected onto sphere 1430 to form image 1432 on the sphere surface; on the surface of sphere 1430, the corresponding position of the object is shown at position 1423. The process of deriving a motion vector based on viewpoint translation is as follows:
· Find the front point of the camera (1440);
· Calculate the moving flows in the 2D frame (1450), i.e., the tangent direction at each pixel;
· Determine the motion vector of the current block using the MV of a neighbouring block:
a. as shown by arrow 1460, the MV of the neighbouring block can be used to determine the camera translation;
b. as shown by arrow 1470, the camera translation can be used to determine the MV of the current block.
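The direction of the background moving flow at any view direction can be sketched geometrically: for a camera translating along a unit direction d, the apparent motion of a distant static point seen in unit direction p is the projection of −d onto the tangent plane at p, which vanishes exactly at the front point (p = d) and the rear point (p = −d). This is an illustrative geometric sketch under those assumptions, not the patented derivation procedure, and the function name is invented for the example.

```python
def background_flow(p, d):
    """Un-normalised apparent-motion (moving-flow) direction at unit view
    direction p for a camera translating along unit direction d.

    Projects -d onto the tangent plane at p; the result is zero at the
    front point (p == d) and at the rear point (p == -d)."""
    dot = sum(pi * di for pi, di in zip(p, d))
    return tuple(-di + dot * pi for pi, di in zip(p, d))
```

Mapping this tangent direction at every pixel's sphere position into the 2D frame gives flow fields like the arrows of Fig. 13.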
MV derivation based on viewpoint translation can be applied to different projection methods. The moving flows in a 2D frame can be mapped onto the 3D sphere. Fig. 15 shows the moving flows on a 3D sphere 1510, where the front point and two different moving flow lines (i.e., 1512 and 1514) are shown. The moving flows on the 3D sphere associated with ERP 1520 are shown in Fig. 15, where the flows are shown for an ERP frame 1526. The moving flows on the 3D sphere associated with CMP 1530 are shown for the six faces of a CMP frame 1536 in the 2x3 layout format. The moving flows on the 3D sphere associated with OHP 1540 are shown for the eight faces of an OHP frame 1546. The moving flows on the 3D sphere associated with ISP 1550 are shown for the twenty faces of an ISP frame 1556. The moving flows on the 3D sphere associated with SSP 1560 are shown for the segmented faces of an SSP frame 1566.
As mentioned above, there are different mappings, such as ERP, CMP, SSP, OHP and ISP, for projecting a 3D sphere onto a 2D frame. When an object on the 3D sphere moves, the corresponding object in the 2D frame may deform, and the deformation depends on the projection type: lengths, angles and areas may no longer be preserved across mapped positions in the 2D frame. An MV scaling technique on the 3D sphere is disclosed to handle the deformation by scaling the MV on the 3D sphere, so as to minimise the influence of the projection type on the deformation. An MV scaling technique in the 2D frame is also disclosed, which can be applied directly to the 2D frame by combining the projection, the MV scaling and the inverse projection into a single function.
Fig. 16 shows an example of deformation associated with motion in an ERP frame. In Fig. 16, three motion vectors along three latitude lines (i.e., 1612, 1614 and 1616) are shown on the surface of a 3D sphere 1610, where the three motion vectors have approximately the same length. If the surface of the 3D sphere is unfolded into a 2D plane 1620, the three motion vectors (1622, 1624 and 1626) keep their equal lengths. For the ERP frame, however, the unfolded image needs to be stretched, with more stretching required at higher latitudes. Therefore, as shown in Fig. 16, the three motion vectors (1632, 1634 and 1636) have different lengths. If a neighbouring block with motion vector 1634 is used for the current block (i.e., the motion vector at position 1632), motion vector 1634 must be scaled properly before being used for coding, e.g., via a merge index or AMVP index. The example in Fig. 16 demonstrates the need for MV scaling.
An exemplary process of the MV scaling technique on the 3D sphere is disclosed below. Suppose the start point of mva is a and the end point of mva is a′. The motion vector mvb at a point b can be predicted by the following process:
1. Map a, a′ and b from the 2D frame 1710 onto the 3D sphere 1720, as shown in Fig. 17.
i. Let the mapping function be (θ, φ) = P_projection-type(x, y), which maps the pixel at (x, y) in the 2D frame to (θ, φ).
2. Calculate Δθ and Δφ:
i. (Δθ, Δφ) = (θ_a′ − θ_a, φ_a′ − φ_a).
3. Apply the scaling functions scale_θ() and scale_φ() to Δθ and Δφ:
i. Δθ′ = scale_θ(Δθ, Δφ);
ii. Δφ′ = scale_φ(Δθ, Δφ).
4. Calculate the scaled end position according to:
i. (θ_b′, φ_b′) = (θ_b, φ_b) + (Δθ′, Δφ′).
5. Map b′ from the 3D sphere 1720 back to the 2D frame 1730 to produce b′.
i. Let the inverse function be (x, y) = IP_projection-type(θ, φ), which maps the point at (θ, φ) on the 3D sphere to the 2D frame.
6. The MV mvb can then be determined according to mvb = b′ − b.
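The six-step sphere-domain scaling process can be sketched end to end. The sketch below is illustrative only: it assumes ERP as the projection type (with an assumed pixel convention) and uses the identity scaling for the sphere-domain step; the function names are not from the source.

```python
import math

def erp_to_sphere(x, y, w, h):
    """Step 1 mapping P: an ERP pixel to (theta, phi) = (longitude, latitude)."""
    return ((x / w - 0.5) * 2.0 * math.pi, (0.5 - y / h) * math.pi)

def sphere_to_erp(theta, phi, w, h):
    """Step 5 inverse mapping IP: (theta, phi) back to an ERP pixel."""
    return ((theta / (2.0 * math.pi) + 0.5) * w, (0.5 - phi / math.pi) * h)

def scale_mv_3d(a, a_prime, b, w, h,
                scale=lambda dt, dp: (dt, dp)):    # identity scaling by default
    """Steps 1-6: derive mvb at position b from mva = a' - a."""
    ta, pa = erp_to_sphere(*a, w, h)               # step 1: map a, a', b
    tap, pap = erp_to_sphere(*a_prime, w, h)
    tb, pb = erp_to_sphere(*b, w, h)
    d_theta, d_phi = tap - ta, pap - pa            # step 2: (delta_theta, delta_phi)
    d_theta_s, d_phi_s = scale(d_theta, d_phi)     # step 3: apply scaling functions
    b_prime = sphere_to_erp(tb + d_theta_s,        # steps 4-5: offset b, map back
                            pb + d_phi_s, w, h)
    return (b_prime[0] - b[0], b_prime[1] - b[1])  # step 6: mvb = b' - b
```

With identity scaling the ERP round trip is linear, so mvb simply equals mva; a non-trivial `scale` argument reproduces the latitude-dependent behaviour discussed in the text.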
In the above process, the scaling functions for Δθ and Δφ are scale_θ(Δθ, Δφ) and scale_φ(Δθ, Δφ), respectively. Some examples of scaling functions are shown below.
Example 1:
scale_θ(Δθ, Δφ) = Δθ; scale_φ(Δθ, Δφ) = Δφ
Therefore, Δθ′ = Δθ; Δφ′ = Δφ
Example 2:
scale_θ(Δθ, Δφ) = Δθ · cos(φ_a)/cos(φ_b); scale_φ(Δθ, Δφ) = Δφ
Therefore, Δθ′ = Δθ · cos(φ_a)/cos(φ_b); Δφ′ = Δφ
Example 3: general form of the scaling function, based on the positions
Δθ′ = scale_θ(Δθ, Δφ, θ1, φ1, θ2, φ2)
Δφ′ = scale_φ(Δθ, Δφ, θ1, φ1, θ2, φ2)
Example 4: general form of the scaling function, based on the positions and the projection
Δθ′ = scale_θ(Δθ, Δφ, θ1, φ1, θ2, φ2, projection type)
Δφ′ = scale_φ(Δθ, Δφ, θ1, φ1, θ2, φ2, projection type)
By combining the projection function, the scaling function and the inverse projection function into a single function f(a, b, mva, projection type), the MV scaling in the 3D domain can also be performed in the 2D domain, so the mapping between 3D and 2D can be skipped:
mvb = (x_mvb, y_mvb) = f(a, b, mva, projection type)
x_mvb = fx(a, b, mva, projection type)
y_mvb = fy(a, b, mva, projection type)
In the above equations, fx(a, b, mva, projection type) is a single function generating x_mvb, and fy(a, b, mva, projection type) is a single function generating y_mvb. Fig. 18 shows an exemplary process of MV scaling in the 2D frame. Two blocks (i.e., A and B) in a 2D frame 1810 are shown. The motion vector at position a in block A is marked as mva, and the end position associated with mva is marked as a′ in Fig. 18. The motion vector at position b in block B is marked as mvb, and the end position associated with mvb is marked as b′. The steps of the projection function (i.e., forward projection 1830), the scaling function on the 3D sphere 1820 and the inverse projection function (i.e., inverse projection 1840) can be combined into a single function according to the above equations.
Fig. 19 shows an example of MV scaling in an ERP frame. In Fig. 19, the surface of the 3D sphere 1910 is unfolded into a 2D plane 1920. For the ERP frame 1930, the unfolded image needs to be stretched so that all latitude lines have the same length. Due to this characteristic of ERP, the closer a horizontal line is to the north or south pole, the more its length is expanded. In the ERP frame, the neighbouring block 1932 has a motion vector mv1. A motion vector mv2 needs to be derived from mv1 for the current block 1934. The derived motion vector can be used to code the current block, e.g., the derived vector can be used as a merge candidate or an AMVP candidate. Since the neighbouring block is located at a different position on the 3D sphere, the motion vector associated with the neighbouring block needs to be scaled before it is used to code the current block. Specifically, because the neighbouring block is located at a higher latitude, its motion vector is stretched more in the x direction than the motion vector of the current block, so the x component of mv1 needs to be scaled down before it is used for the current block. The MV scaling process 1940 is shown in Fig. 19, where the motion vector at position a of the neighbouring block is scaled into mv2 and used to code the current block. In one embodiment of the present invention, a scaling function that preserves the motion distance is disclosed, where mv2 is a function of mv1, θ1, φ1, θ2, φ2:
mv2 = f(mv1, θ1, φ1, θ2, φ2, ERP)
mvx2 = mvx1 * (cos(φ1)/cos(φ2))
mvy2 = mvy1
In the above equations, (θ1, φ1) corresponds to the longitude and latitude of position a in the neighbouring block, and (θ2, φ2) corresponds to the longitude and latitude of position b in the current block. Positions a and b may correspond to the centres of the respective blocks, or to other positions in the blocks. The y component of the derived motion vector is the same as the y component of mv1; in other words, the scaling function for the y component is the identity function.
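The ERP scaling function above reduces to a small routine. This sketch assumes the scaling factor is the latitude-cosine ratio cos(φ1)/cos(φ2), consistent with the distance-preserving behaviour described for ERP (a horizontal step at latitude φ spans 1/cos(φ) times more ERP pixels); the function name is invented for the example.

```python
import math

def scale_erp_mv(mv1, phi1, phi2):
    """Scale a neighbouring block's MV (mvx1, mvy1) at latitude phi1 for use
    at latitude phi2 in an ERP frame, preserving the motion distance.

    x component: scaled by cos(phi1)/cos(phi2), since higher latitudes are
    stretched more horizontally by ERP.  y component: identity."""
    mvx1, mvy1 = mv1
    mvx2 = mvx1 * math.cos(phi1) / math.cos(phi2)
    mvy2 = mvy1
    return mvx2, mvy2
```

For instance, a neighbour at 60°N supplying a candidate to a block at the equator has its x component halved, since cos(60°)/cos(0°) = 0.5.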
MV scaling for other projections is also disclosed. In Figure 20, MV scaling for ERP is disclosed, where the current block 2012 and the neighboring block 2014 in the ERP image 2010 are shown. The neighboring block has a motion vector mv1. The mv1 from the neighboring block is used to derive the motion vector mv2 of the current block. According to the present invention, the motion vector mv1 from the neighboring block needs to be scaled using the scaling function mv2 = f(mv1, x1, y1, x2, y2, ERP), where (x1, y1) corresponds to the (x, y) coordinates of position a in the neighboring block and (x2, y2) corresponds to the (x, y) coordinates of position b in the current block.
In Figure 21, MV scaling for CMP is disclosed, where the current block 2112 and the neighboring block 2114 in the CMP image 2110 are shown. The neighboring block has a motion vector mv1. The mv1 from the neighboring block is used to derive the motion vector mv2 of the current block. According to the present invention, the motion vector mv1 from the neighboring block needs to be scaled using the scaling function mv2 = f(mv1, x1, y1, x2, y2, CMP).
In Figure 22, MV scaling for SSP is disclosed, where the current block 2212 and the neighboring block 2214 in the SSP image 2210 are shown. The neighboring block has a motion vector mv1. The mv1 from the neighboring block is used to derive the motion vector mv2 of the current block. According to the present invention, the motion vector mv1 from the neighboring block needs to be scaled using the scaling function mv2 = f(mv1, x1, y1, x2, y2, SSP).
In Figure 23, MV scaling for OHP is disclosed, where the current block 2312 and the neighboring block 2314 in the OHP image 2310 are shown. The neighboring block has a motion vector mv1. The mv1 from the neighboring block is used to derive the motion vector mv2 of the current block. According to the present invention, the motion vector mv1 from the neighboring block needs to be scaled using the scaling function mv2 = f(mv1, x1, y1, x2, y2, OHP).
In Figure 24, MV scaling for ISP is disclosed, where the current block 2412 and the neighboring block 2414 in the ISP image 2410 are shown. The neighboring block has a motion vector mv1. The mv1 from the neighboring block is used to derive the motion vector mv2 of the current block. According to the present invention, the motion vector mv1 from the neighboring block needs to be scaled using the scaling function mv2 = f(mv1, x1, y1, x2, y2, ISP).
In Figure 25, MV scaling for EAP is disclosed, where the current block 2512 and the neighboring block 2514 in the EAP image 2510 are shown. The neighboring block has a motion vector mv1. The mv1 from the neighboring block is used to derive the motion vector mv2 of the current block. According to the present invention, the motion vector mv1 from the neighboring block needs to be scaled using the scaling function mv2 = f(mv1, x1, y1, x2, y2, EAP).
In Figure 26, MV scaling for ACP is disclosed, where the current block 2612 and the neighboring block 2614 in the ACP image 2610 are shown. The neighboring block has a motion vector mv1. The mv1 from the neighboring block is used to derive the motion vector mv2 of the current block. According to the present invention, the motion vector mv1 from the neighboring block needs to be scaled using the scaling function mv2 = f(mv1, x1, y1, x2, y2, ACP).
In Figure 27, MV scaling for RSP is disclosed, where the current block 2712 and the neighboring block 2714 in the RSP image 2710 are shown. The neighboring block has a motion vector mv1. The mv1 from the neighboring block is used to derive the motion vector mv2 of the current block. According to the present invention, the motion vector mv1 from the neighboring block needs to be scaled using the scaling function mv2 = f(mv1, x1, y1, x2, y2, RSP).
In addition to the projections described above, cylindrical projections are also used to project the 3D sphere into a 2D frame. As shown in Figure 28, a cylindrical projection is conceptually created by wrapping a cylinder 2820 around the sphere 2830 and casting light rays through the sphere onto the cylinder. A cylindrical projection represents meridians as straight, evenly spaced vertical lines and parallels as straight horizontal lines. As on the sphere, the meridians and parallels intersect at right angles. Depending on the position of the light source, different cylindrical projections (CLP) are generated. In Figure 28, MV scaling for CLP is disclosed, where the current block 2812 and the neighboring block 2814 in the CLP image 2810 are shown. The neighboring block has a motion vector mv1. The mv1 from the neighboring block is used to derive the motion vector mv2 of the current block. According to the present invention, the motion vector mv1 from the neighboring block needs to be scaled using the scaling function mv2 = f(mv1, x1, y1, x2, y2, CLP).
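How the light-source position changes the mapping can be illustrated with two classic cylindrical projections from cartography. These formulas are standard and are given here only as examples, not taken from the patent: a central cylindrical projection, with the light source at the sphere's center, maps latitude φ to tan(φ), while Lambert's equal-area cylindrical projection maps it to sin(φ). In both, longitude maps linearly to x, so meridians remain straight, evenly spaced vertical lines.

```python
import math

def central_cylindrical(lon, lat):
    """Light source at the sphere's center: vertical coordinate grows as
    tan(latitude), so the poles are pushed to infinity."""
    return lon, math.tan(lat)

def lambert_cylindrical(lon, lat):
    """Parallel (orthographic) light rays: vertical coordinate grows as
    sin(latitude), giving an equal-area mapping with finite height."""
    return lon, math.sin(lat)
```

Both projections agree at the equator (y = 0) but diverge toward the poles, which is why the per-projection scaling function f above must take the projection type as an argument.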
Figure 29 shows an exemplary flowchart of a system that applies sphere rotation to adjust motion vectors for processing 360-degree virtual reality images according to an embodiment of the present invention. The steps shown in the flowchart may be implemented as program code executable on one or more processors (e.g., one or more CPUs) at the encoder side. The steps shown in the flowchart may also be implemented in hardware, for example, one or more electronic devices or processors arranged to perform the steps in the flowchart. According to this method, in step 2910, input data for a current block in a 2D frame is received, where the 2D frame is projected from a 3D sphere. The input data may correspond to pixel data of the 2D frame to be encoded. In step 2920, a first motion vector associated with a neighboring block in the 2D frame is determined, where the first motion vector points from a first start position in the neighboring block to a first end position in the 2D frame. In step 2930, the first motion vector is projected onto the 3D sphere according to a target projection. In step 2940, the first motion vector on the 3D sphere is rotated about a rotation axis along a rotation circle on the surface of the 3D sphere to generate a second motion vector on the 3D sphere. In step 2950, the second motion vector on the 3D sphere is mapped back to the 2D frame according to an inverse target projection. In step 2960, the current block in the 2D frame is encoded or decoded using the second motion vector.
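Steps 2910-2960 can be sketched as follows. This is an illustrative sketch under simplifying assumptions that are ours, not the patent's: ERP is taken as the target projection, the sphere is a unit sphere, the rotation axis and angle are supplied by the caller, and all helper names are invented.

```python
import math

def erp_to_xyz(x, y, w, h):
    """Target projection (assumed ERP): map a 2D pixel to the unit sphere."""
    lon = (x / w - 0.5) * 2.0 * math.pi
    lat = (0.5 - y / h) * math.pi
    return (math.cos(lat) * math.cos(lon),
            math.cos(lat) * math.sin(lon),
            math.sin(lat))

def xyz_to_erp(p, w, h):
    """Inverse target projection: map a unit-sphere point back to the 2D frame."""
    lon = math.atan2(p[1], p[0])
    lat = math.asin(max(-1.0, min(1.0, p[2])))
    return ((lon / (2.0 * math.pi) + 0.5) * w, (0.5 - lat / math.pi) * h)

def rotate(p, axis, angle):
    """Rodrigues' formula: rotate point p about a unit axis by angle (radians)."""
    c, s = math.cos(angle), math.sin(angle)
    ax, ay, az = axis
    dot = ax * p[0] + ay * p[1] + az * p[2]
    cross = (ay * p[2] - az * p[1], az * p[0] - ax * p[2], ax * p[1] - ay * p[0])
    return tuple(p[i] * c + cross[i] * s + (ax, ay, az)[i] * dot * (1.0 - c)
                 for i in range(3))

def adjust_mv(start, mv, axis, angle, w, h):
    """Steps 2920-2950: lift the MV's start and end points onto the sphere,
    rotate both along the rotation circle about the given axis, and map the
    rotated vector back into the 2D frame."""
    end = (start[0] + mv[0], start[1] + mv[1])
    s3 = erp_to_xyz(start[0], start[1], w, h)
    e3 = erp_to_xyz(end[0], end[1], w, h)
    s2 = xyz_to_erp(rotate(s3, axis, angle), w, h)
    e2 = xyz_to_erp(rotate(e3, axis, angle), w, h)
    return (e2[0] - s2[0], e2[1] - s2[1])
```

A rotation about the z axis only shifts longitude, so a horizontal MV on the equator is carried to the new position unchanged; rotations about other axes deform the 2D vector, which is the effect the method compensates for.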
Figure 30 shows an exemplary flowchart of a system that derives a motion vector from a translation of the viewpoint for processing 360-degree virtual reality images according to an embodiment of the present invention. According to this other method, in step 3010, two 2D frames are received, where the two frames are projected, using a target projection, from 3D spheres corresponding to two different viewpoints, and a current block and a neighboring block are located in the two 2D frames. In step 3020, a forward point of the camera is determined based on the two 2D frames. In step 3030, a moving flow is determined in the two 2D frames. In step 3040, a translation of the camera is determined based on a first motion vector associated with the neighboring block. In step 3050, a second motion vector associated with the current block is derived based on the translation of the camera. In step 3060, the current block in the 2D frame is encoded or decoded using the second motion vector.
Figure 31 shows an exemplary flowchart of a system that applies scaling to adjust motion vectors for processing 360-degree virtual reality images according to an embodiment of the present invention. According to this method, in step 3110, input data for a current block in a 2D frame is received, where the 2D frame is projected from a 3D sphere according to a target projection. In step 3120, a first motion vector associated with a neighboring block in the 2D frame is determined, where the first motion vector points from a first start position in the neighboring block to a first end position in the 2D frame. In step 3130, the first motion vector is scaled to generate a second motion vector. In step 3140, the current block in the 2D frame is encoded or decoded using the second motion vector.
The flowcharts included in the present invention are intended to illustrate examples of video coding according to the present invention. Without departing from the spirit of the present invention, those skilled in the art may modify each step, reorder the steps, split a step, or combine steps to practice the present invention.
The above description is presented to enable a person of ordinary skill in the art to practice the present invention in the context of a particular application and its requirements. Various modifications to the described embodiments will be apparent to those skilled in the art, and the general principles defined herein may be applied to other embodiments. Therefore, the present invention is not intended to be limited to the particular embodiments shown and described, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein. In the above detailed description, various specific details are set forth in order to provide a thorough understanding of the present invention. Nevertheless, it will be understood by those skilled in the art that the present invention may be practiced.
The embodiments of the present invention described above may be implemented in various hardware, software code, or a combination of both. For example, an embodiment of the present invention may be circuitry integrated into a video compression chip, or program code integrated into video compression software, to perform the processing described herein. An embodiment of the present invention may also be program code executed on a digital signal processor (DSP) to perform the processing described herein. The invention may also involve a number of functions performed by a computer processor, a digital signal processor, a microprocessor, or a field programmable gate array (FPGA). According to the present invention, these processors may be configured to perform particular tasks by executing machine-readable software code or firmware code that defines the particular methods embodied by the present invention. The software code or firmware code may be developed in different programming languages and in different formats or styles. The software code may also be compiled for different target platforms. However, different code formats, styles and languages of software code, and other forms of configuration code to perform the tasks of the present invention, do not depart from the spirit and scope of the present invention.
The present invention may be embodied in other specific forms without departing from its spirit or essential characteristics. The described examples are to be considered in all respects only as illustrative and not restrictive. The scope of the invention is therefore indicated by the appended claims rather than by the foregoing description. All changes that come within the meaning and range of equivalency of the claims are to be embraced within their scope.
110, 610, 700, 810, 920, 970, 1020, 1210, 1220, 1230, 1420, 1430, 1510, 1610, 1720, 1910, 2830‧‧‧sphere
120, 340, 430, 630‧‧‧rectangular image
210‧‧‧cube
220a, 220b, 620, 622‧‧‧part
230a, 230b‧‧‧band
310‧‧‧octahedron
320, 420‧‧‧face
330‧‧‧intermediate format
410‧‧‧icosahedron
500‧‧‧spherical image
510‧‧‧North Pole image
520‧‧‧South Pole image
530‧‧‧equatorial segment image
502‧‧‧latitude 45°N
504‧‧‧latitude 45°S
506‧‧‧equator
701, 702, 704, 705, 1112, 1120, 1932, 1934, 2012, 2014, 2112, 2114, 2212, 2214, 2312, 2314, 2412, 2414, 2512, 2514, 2612, 2614, 2712, 2714, 2814, 2812‧‧‧block
703, 1526, 1930, 2010‧‧‧ERP frame
711, 714‧‧‧region
710‧‧‧surface
713‧‧‧path
712, 715, 1612, 1614, 1616, 1622, 1624, 1626, 1632, 1634, 1636‧‧‧motion vector
720, 740, 760‧‧‧2D domain
721-725‧‧‧parts of the 2D domain 720 corresponding to parts 711-715
733, 753‧‧‧trajectory
733-735‧‧‧parts of the trajectory 733 corresponding to 713-725
743-745‧‧‧parts of the 2D domain 740 corresponding to parts 733-735
750‧‧‧surface of the sphere
753-755‧‧‧parts of the trajectory 753 corresponding to 713-725
763-765‧‧‧parts of the 2D domain 760 corresponding to parts 753-755
820, 910, 930, 960, 980‧‧‧circle
830, 840, 912‧‧‧point
800, 900, 950‧‧‧sphere rotation
850, 990‧‧‧rotation axis
1010, 1030‧‧‧illustration
1100‧‧‧layout
1240, 1250, 1241, 1260, 1242, 1251, 1422, 1432‧‧‧image
1212, 1222, 1232, 1460, 1470‧‧‧arrow
1410‧‧‧object
1424, 1434, 1423‧‧‧position
1440‧‧‧forward point
1450‧‧‧moving flow
1512, 1514‧‧‧moving flow line
1520‧‧‧ERP
1530‧‧‧CMP
1536, 2110‧‧‧CMP frame
1540‧‧‧OHP
1546, 2310‧‧‧OHP frame
1550‧‧‧ISP
1556, 2410‧‧‧ISP frame
1560‧‧‧SSP
1566, 2210‧‧‧SSP frame
1620, 1920‧‧‧2D plane
1710, 1730, 1810‧‧‧2D frame
1830‧‧‧forward projection
1840‧‧‧inverse projection
1940‧‧‧MV scaling flow
2510‧‧‧EAP image
2610‧‧‧ACP image
2710‧‧‧RSP image
2820‧‧‧cylinder
2810‧‧‧CLP image
2910-2960, 3010-3060‧‧‧steps
Figure 1 is an example of projecting a sphere onto a rectangular image according to the equirectangular projection, where each longitude line is mapped to a vertical line of the ERP image.
Figure 2 is a cube with six faces, onto which a 360-degree VR image can be projected according to the cubemap projection.
Figure 3 is an example of octahedron projection, in which a sphere is projected onto the 8 faces of an octahedron.
Figure 4 is an example of icosahedron projection, in which a sphere is projected onto the 20 faces of an icosahedron.
Figure 5 is an example of segmented sphere projection (SSP), in which the sphere image is mapped to a North Pole image, a South Pole image and an equatorial segment image.
Figure 6 is an example of rotated sphere projection (RSP), in which the sphere is split into a middle 270°x90° region and a remaining part. The two parts of the RSP can further be stretched on the top and bottom sides to generate deformed parts with elliptical boundaries at the top and bottom.
Figures 7A and 7B are examples of deformation in 2D images caused by the rotation of the sphere.
Figure 8 is an example of a rotation of the sphere from point a to point b along a larger circle of the sphere, where the larger circle corresponds to a greatest circle on the surface of the sphere.
Figure 9 is an example of a rotation of the sphere from point a to point b along a smaller circle of the sphere, where the smaller circle corresponds to a circle smaller than a greatest circle on the surface of the sphere.
Figure 10 illustrates the derivation of motion vectors for a 2D projected image using the sphere rotation model.
Figure 11 is an example of using a motion vector (MV) derived based on the rotation of the sphere as a merge or advanced motion vector prediction (AMVP) candidate.
Figure 12 is an example of an object (i.e., a tree) being projected onto the surface of the sphere at different camera positions.
Figure 13 is an example of an ERP frame overlaid with a model of the moving flow, where the flow of the background (i.e., static objects) can be determined if the camera forward point is known.
Figure 14 is an exemplary flow of MV derivation based on viewpoint translation.
Figure 15 is an exemplary MV derivation based on viewpoint translation for different projection methods.
Figure 16 is an example of deformation associated with motion in an ERP frame.
Figure 17 is an exemplary flow of the MV scaling technique on a 3D sphere.
Figure 18 is an exemplary flow of the MV scaling technique in a 2D frame.
Figure 19 is an exemplary flow of the MV scaling technique in an ERP frame.
Figure 20 is an exemplary flow of the MV scaling technique for an ERP frame, where the current block and a neighboring block are shown in the ERP image.
Figure 21 is an exemplary flow of the MV scaling technique for a CMP frame, where the current block and a neighboring block are shown in the CMP image.
Figure 22 is an exemplary flow of the MV scaling technique for an SSP frame, where the current block and a neighboring block are shown in the SSP image.
Figure 23 is an exemplary flow of the MV scaling technique for an octahedron projection (OHP) frame, where the current block and a neighboring block are shown in the OHP image.
Figure 24 is an exemplary flow of the MV scaling technique for an icosahedron projection (ISP) frame, where the current block and a neighboring block are shown in the ISP image.
Figure 25 is an exemplary flow of the MV scaling technique for an equal-area projection (EAP) frame, where the current block and a neighboring block are shown in the EAP image.
Figure 26 is an exemplary flow of the MV scaling technique for an adjusted cubemap projection (ACP) frame, where the current block and a neighboring block are shown in the ACP image.
Figure 27 is an exemplary flow of the MV scaling technique for a rotated sphere projection (RSP) frame, where the current block and a neighboring block are shown in the RSP image.
Figure 28 is an exemplary flow of the MV scaling technique for a cylindrical projection (CLP) frame, where the current block and a neighboring block are shown in the CLP image.
Figure 29 is an exemplary flowchart of a system that applies rotation of the sphere to adjust motion vectors for processing 360-degree VR images according to an embodiment of the present invention.
Figure 30 is an exemplary flowchart of a system that derives motion vectors from a translation of the viewpoint to process 360-degree VR images according to an embodiment of the present invention.
Figure 31 is an exemplary flowchart of a system that applies scaling to adjust motion vectors to process 360-degree VR images according to an embodiment of the present invention.
703‧‧‧ERP frame
701, 702, 704, 705‧‧‧block
700‧‧‧sphere
Claims (20)
Applications Claiming Priority (7)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201762523885P | 2017-06-23 | 2017-06-23 | |
US201762523883P | 2017-06-23 | 2017-06-23 | |
US62/523,885 | 2017-06-23 | ||
US62/523,883 | 2017-06-23 | ||
??PCT/CN2018/092143 | 2018-06-21 | ||
WOPCT/CN2018/092143 | 2018-06-21 | ||
PCT/CN2018/092143 WO2018233662A1 (en) | 2017-06-23 | 2018-06-21 | Method and apparatus of motion vector derivations in immersive video coding |
Publications (2)
Publication Number | Publication Date |
---|---|
TW201911867A TW201911867A (en) | 2019-03-16 |
TWI686079B true TWI686079B (en) | 2020-02-21 |
Family
ID=64735503
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
TW107121493A TWI686079B (en) | 2017-06-23 | 2018-06-22 | Method and apparatus of processing 360-degree virtual reality images |
TW107121492A TWI690193B (en) | 2017-06-23 | 2018-06-22 | Method and apparatus of processing 360-degree virtual reality images |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
TW107121492A TWI690193B (en) | 2017-06-23 | 2018-06-22 | Method and apparatus of processing 360-degree virtual reality images |
Country Status (3)
Country | Link |
---|---|
CN (2) | CN109691104B (en) |
TW (2) | TWI686079B (en) |
WO (2) | WO2018233661A1 (en) |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10904558B2 (en) * | 2019-04-26 | 2021-01-26 | Tencent America LLC | Method and apparatus for motion compensation for 360 video coding |
CN110248212B (en) * | 2019-05-27 | 2020-06-02 | 上海交通大学 | Multi-user 360-degree video stream server-side code rate self-adaptive transmission method and system |
JP7198949B2 (en) * | 2019-06-13 | 2023-01-04 | ベイジン、ターチア、インターネット、インフォメーション、テクノロジー、カンパニー、リミテッド | Motion vector prediction for video coding |
US11095912B2 (en) | 2019-10-28 | 2021-08-17 | Mediatek Inc. | Video decoding method for decoding part of bitstream to generate projection-based frame with constrained guard band size, constrained projection face size, and/or constrained picture size |
US11263722B2 (en) | 2020-06-10 | 2022-03-01 | Mediatek Inc. | Video processing method for remapping sample locations in projection-based frame with hemisphere cubemap projection layout to locations on sphere and associated video processing apparatus |
CN115423812B (en) * | 2022-11-05 | 2023-04-18 | 松立控股集团股份有限公司 | Panoramic monitoring planarization display method |
CN116540872B (en) * | 2023-04-28 | 2024-06-04 | 中广电广播电影电视设计研究院有限公司 | VR data processing method, device, equipment, medium and product |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103039075A (en) * | 2010-05-21 | 2013-04-10 | Jvc建伍株式会社 | Image encoding apparatus, image encoding method, image encoding program, image decoding apparatus, image decoding method and image decoding program |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102333221B (en) * | 2011-10-21 | 2013-09-04 | 北京大学 | Panoramic background prediction video coding and decoding method |
CN104063843B (en) * | 2014-06-18 | 2017-07-28 | 长春理工大学 | A kind of method of the integrated three-dimensional imaging element image generation based on central projection |
US9277122B1 (en) * | 2015-08-13 | 2016-03-01 | Legend3D, Inc. | System and method for removing camera rotation from a panoramic video |
KR102432085B1 (en) * | 2015-09-23 | 2022-08-11 | 노키아 테크놀로지스 오와이 | A method, an apparatus and a computer program product for coding a 360-degree panoramic video |
2018
- 2018-06-21 WO PCT/CN2018/092142 patent/WO2018233661A1/en active Application Filing
- 2018-06-21 CN CN201880002044.4A patent/CN109691104B/en active Active
- 2018-06-21 WO PCT/CN2018/092143 patent/WO2018233662A1/en active Application Filing
- 2018-06-21 CN CN201880001715.5A patent/CN109429561B/en active Active
- 2018-06-22 TW TW107121493A patent/TWI686079B/en active
- 2018-06-22 TW TW107121492A patent/TWI690193B/en active
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103039075A (en) * | 2010-05-21 | 2013-04-10 | Jvc建伍株式会社 | Image encoding apparatus, image encoding method, image encoding program, image decoding apparatus, image decoding method and image decoding program |
Non-Patent Citations (3)
Title |
---|
Jill Boyce et al., "Spherical rotation orientation SEI for HEVC and AVC coding of 360 video", JCTVC-Z0025, Joint Collaborative Team on Video Coding (JCT-VC) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 26th Meeting: Geneva, CH, 12–20 January 2017 * |
Sejin Oh et al., "SEI message for signaling of 360-degree video information", JCTVC-Z0026, Joint Collaborative Team on Video Coding (JCT-VC) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 26th Meeting: Geneva, CH, 12–20 January 2017 * |
Also Published As
Publication number | Publication date |
---|---|
WO2018233662A1 (en) | 2018-12-27 |
CN109691104A (en) | 2019-04-26 |
CN109429561A (en) | 2019-03-05 |
TW201911867A (en) | 2019-03-16 |
WO2018233661A1 (en) | 2018-12-27 |
CN109429561B (en) | 2022-01-21 |
TW201911861A (en) | 2019-03-16 |
CN109691104B (en) | 2021-02-23 |
TWI690193B (en) | 2020-04-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
TWI686079B (en) | Method and apparatus of processing 360-degree virtual reality images | |
US10600233B2 (en) | Parameterizing 3D scenes for volumetric viewing | |
US10264282B2 (en) | Method and apparatus of inter coding for VR video using virtual reference frames | |
EP3043320B1 (en) | System and method for compression of 3d computer graphics | |
US10212411B2 (en) | Methods of depth based block partitioning | |
KR101131756B1 (en) | Mesh-based video compression with domain transformation | |
WO2017125030A1 (en) | Apparatus of inter prediction for spherical images and cubic images | |
TWI666913B (en) | Method and apparatus for mapping virtual-reality image to a segmented sphere projection format | |
EP3610647B1 (en) | Apparatuses and methods for encoding and decoding a panoramic video signal | |
US20190045212A1 (en) | METHOD AND APPARATUS FOR PREDICTIVE CODING OF 360º VIDEO | |
TWI702835B (en) | Method and apparatus of motion vector derivation for vr360 video coding | |
JP2018530225A (en) | Method and apparatus for encoding and decoding a light field base image and corresponding computer program product | |
KR102141319B1 (en) | Super-resolution method for multi-view 360-degree image and image processing apparatus | |
Meuleman et al. | Real-time sphere sweeping stereo from multiview fisheye images | |
US20210150665A1 (en) | Image processing method and device | |
Lee et al. | Farfetchfusion: Towards fully mobile live 3d telepresence platform | |
US20180338160A1 (en) | Method and Apparatus for Reduction of Artifacts in Coded Virtual-Reality Images | |
Pintore et al. | PanoVerse: automatic generation of stereoscopic environments from single indoor panoramic images for Metaverse applications | |
KR101946715B1 (en) | Adaptive search ragne determination method for motion estimation of 360 degree video | |
Lee et al. | Improved reference frame by adopting a video stabilization technique | |
Li et al. | MonoSelfRecon: Purely Self-Supervised Explicit Generalizable 3D Reconstruction of Indoor Scenes from Monocular RGB Views |