TWI712918B - Method, device and equipment for displaying images of augmented reality - Google Patents

Method, device and equipment for displaying images of augmented reality

Info

Publication number
TWI712918B
Authority
TW
Taiwan
Prior art keywords
module
human eye
scene
photography module
relative position
Prior art date
Application number
TW108124231A
Other languages
Chinese (zh)
Other versions
TW202013149A (en)
Inventor
周岳峰
Original Assignee
開曼群島商創新先進技術有限公司
Priority date
Filing date
Publication date
Application filed by 開曼群島商創新先進技術有限公司
Publication of TW202013149A
Application granted granted Critical
Publication of TWI712918B


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/006 Mixed reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00 Indexing scheme for image data processing or generation, in general
    • G06T2200/08 Indexing scheme for image data processing or generation, in general, involving all processing steps from image acquisition to 3D model generation

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The embodiments of this specification provide an augmented reality image display method, device and equipment. A person image captured by a person photography module is obtained, and the relationship between the human eye region in the person image and the image as a whole is used to determine the relative position of the human eye with respect to the person photography module. Human eye position information is then determined based at least on that relative position and on the position information of a scene photography module. The human eye position information is used as the scene photography module's position information in the rendering parameters, and a three-dimensional model is rendered to obtain a projected image shown on the display screen. In this way, the augmented reality content is changed from a projected image rendered from the camera module's viewpoint to one rendered from the human eye's viewpoint, so that the projected image updates as the position of the human eye changes.

Description

Method, device and equipment for displaying images of augmented reality

This specification relates to the field of image processing technology, and in particular to methods, devices and equipment for displaying augmented reality images.

Augmented reality (AR) refers to technology that combines the virtual world with real-world scenes and allows the two to interact, by using the position and angle of a camera together with image analysis techniques. It can superimpose virtual objects onto the real environment within a single picture so that both exist at the same time, giving the user a sensory experience that goes beyond reality. In an AR scene, the camera position can be used as one of the rendering parameters for rendering the three-dimensional model obtained from the virtual objects and the real scene, producing a projected image. However, the displayed projected image depends only on the pose of the mobile device; when the device is stationary and it is the viewer who moves, the device cannot respond accordingly.

To overcome the problems in the related art, this specification provides an augmented reality image display method, device and equipment.

According to a first aspect of the embodiments of this specification, an augmented reality image display method is provided, the method including: obtaining a person image captured by a person photography module, and determining the relative position of the human eye with respect to the person photography module by using the relationship between the human eye region in the person image and the person image; determining human eye position information based at least on the relative position of the human eye with respect to the person photography module and on position information of a scene photography module; and rendering a three-dimensional model by using the human eye position information as the scene photography module's position information in the rendering parameters, to obtain a projected image shown on a display screen, the three-dimensional model being obtained by combining virtual objects with the real scene scanned by the scene photography module.

In one embodiment, the method is applied to an electronic device, the person photography module includes a front camera of the electronic device, and the scene photography module includes a rear camera of the electronic device.

In one embodiment, constructing the three-dimensional model includes: performing three-dimensional reconstruction of the real scene by using real-scene images captured by the scene photography module, to obtain a scene model; and superimposing virtual objects onto the scene model based on a preset overlay strategy, to obtain the three-dimensional model.

In one embodiment, determining the human eye position information based at least on the relative position of the human eye with respect to the person photography module and on the position information of the scene photography module includes: obtaining the relative position of the person photography module with respect to the scene photography module; using that relative position to convert the relative position of the human eye with respect to the person photography module into the relative position of the human eye with respect to the scene photography module; and combining the relative position of the human eye with respect to the scene photography module with the position information of the scene photography module to calculate the human eye position information.

In one embodiment, the method further includes: before determining the human eye position, determining that the relative position of the human eye with respect to the person photography module has changed, based on the currently obtained relative position and the previously obtained relative position of the human eye with respect to the person photography module.

According to a second aspect of the embodiments of this specification, an augmented reality image display device is provided, the device including: a relative position determination module, configured to obtain a person image captured by a person photography module and determine the relative position of the human eye with respect to the person photography module by using the relationship between the human eye region in the person image and the person image; a human eye position determination module, configured to determine human eye position information based at least on the relative position of the human eye with respect to the person photography module and on position information of a scene photography module; and an image rendering module, configured to render a three-dimensional model by using the human eye position information as the scene photography module's position information in the rendering parameters, to obtain a projected image shown on a display screen, the three-dimensional model being obtained by combining virtual objects with the real scene scanned by the scene photography module.

In one embodiment, the device is provided in an electronic device, the person photography module includes a front camera of the electronic device, and the scene photography module includes a rear camera of the electronic device.

In one embodiment, the device further includes a three-dimensional model construction module, configured to: perform three-dimensional reconstruction of the real scene by using real-scene images captured by the scene photography module, to obtain a scene model; and superimpose virtual objects onto the scene model based on a preset overlay strategy, to obtain the three-dimensional model.

In one embodiment, the human eye position determination module is specifically configured to: obtain the relative position of the person photography module with respect to the scene photography module; use that relative position to convert the relative position of the human eye with respect to the person photography module into the relative position of the human eye with respect to the scene photography module; and combine the relative position of the human eye with respect to the scene photography module with the position information of the scene photography module to calculate the human eye position information.

In one embodiment, the device further includes a position judgment module, configured to: before the human eye position is determined, determine that the relative position of the human eye with respect to the person photography module has changed, based on the currently obtained relative position and the previously obtained relative position of the human eye with respect to the person photography module.

According to a third aspect of the embodiments of this specification, a computer device is provided, including a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements the method described in any one of the above.

The technical solutions provided by the embodiments of this specification may include the following beneficial effects. A person image captured by the person photography module is obtained, and the relationship between the human eye region in the person image and the person image is used to determine the relative position of the human eye with respect to the person photography module; human eye position information is determined based at least on that relative position and on the position information of the scene photography module; the human eye position information is then used as the scene photography module's position information in the rendering parameters, and the three-dimensional model is rendered together with the other rendering parameters to obtain the projected image shown on the display screen. The augmented reality content is thereby changed from a projected image rendered from the camera module's viewpoint to one rendered from the human eye's viewpoint, so that the projected image updates as the position of the human eye changes.

It should be understood that the above general description and the following detailed description are merely exemplary and explanatory, and do not limit this specification.

Exemplary embodiments are described in detail here, with examples shown in the accompanying drawings. Where the following description refers to the drawings, the same numbers in different drawings denote the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with this specification; rather, they are merely examples of devices and methods consistent with some aspects of this specification as detailed in the appended claims.

The terms used in this specification are for the purpose of describing particular embodiments only and are not intended to limit this specification. The singular forms "a", "the" and "said" used in this specification and the appended claims are also intended to include the plural forms, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" used herein refers to and includes any and all possible combinations of one or more of the associated listed items.

It should be understood that although the terms first, second, third and so on may be used in this specification to describe various kinds of information, the information should not be limited by these terms. These terms are only used to distinguish information of the same type from one another. For example, without departing from the scope of this specification, first information may also be referred to as second information, and similarly, second information may also be referred to as first information.
Depending on the context, the word "if" as used herein may be interpreted as "when", "while" or "in response to determining".

Augmented reality (AR) technology is a technology that seamlessly integrates real-world information with virtual-world information. Through computer techniques, virtual information is applied to the real world, so that the real environment and virtual objects are superimposed in real time onto the same picture or into the same space and exist simultaneously.

A common application scenario of AR technology is that a user shoots the real environment with the camera module of a handheld or wearable mobile device, and software providing the AR service renders one or more virtual objects onto the captured initial image data. The key to realizing this scenario is how to combine the virtual objects with the actually captured real scene. On the one hand, the software providing the AR service can pre-configure one or more models corresponding to the virtual objects, and the model of each virtual object specifies the state evolution rules of that object, which determine its different motion states. On the other hand, the software can determine the position of a virtual object in the real scene from the image data captured by the device, and thereby determine where on the image data the virtual object should be rendered. After successful rendering, the user sees a picture in which virtual objects are superimposed on the real environment.

However, when the three-dimensional model built from the virtual objects and the real scene is rendered, it is rendered from the viewpoint of the camera module. Such augmented display solutions rely on the device's gyroscope, accelerometer and gravity sensor to sense changes in the device's orientation. As a result, if the camera module does not move but the photographer/viewer does, the rendered image does not respond accordingly, and the sense of immersion and depth is poor.

For example, FIG. 1 is a schematic diagram of AR scene shooting provided according to an exemplary embodiment of this specification. In FIG. 1, the virtual object is a puppy and the AR system uses an ordinary display; in this case, the user can see the fusion of the real environment and the virtual object on the display screen without wearing any display device. The photographer/viewer shoots the real scene with the rear camera of a mobile phone, and a projected image including the puppy is shown on the phone screen. However, when the photographer/viewer keeps the phone still but the relative position between the eyes and the phone changes, the picture shown on the phone screen does not change at all.

In view of this, this specification provides an augmented reality image display method that changes the device's augmented reality content from a composite image rendered from the camera's viewpoint to a composite image rendered from the human eye's viewpoint, so that the displayed image comes closer to what the eye would see and the sense of depth and immersion is enhanced. A camera module is able to capture images mainly because its lens focuses the subject into an image on the imaging surface of the camera tube or solid-state imaging device.
How much of a scene a camera lens can cover is usually expressed as an angle, which may be called the lens's angle of view. The human eye viewpoint referred to in the embodiments of this specification does not mean the entire field of view of the human eye, but rather the view that can be seen through the display screen.

The embodiments of this specification are illustrated below with reference to the drawings.

FIG. 2 is a flowchart of an augmented reality image display method according to an exemplary embodiment of this specification. The method includes:

In step 202, a person image captured by the person photography module is obtained, and the relationship between the human eye region in the person image and the person image is used to determine the relative position of the human eye with respect to the person photography module.

In step 204, human eye position information is determined based at least on the relative position of the human eye with respect to the person photography module and on the position information of the scene photography module.

In step 206, the human eye position information is used as the scene photography module's position information in the rendering parameters, and the three-dimensional model is rendered to obtain a projected image shown on the display screen, the three-dimensional model being obtained by combining virtual objects with the real scene scanned by the scene photography module.

In the embodiments of this specification, the person photography module and the scene photography module are different camera modules with different shooting areas. In one example, the two modules shoot in opposite directions, the person photography module's camera is on the same side of the electronic device as the display screen, and its lens surface may even lie in the same plane as the display screen; further, the two camera modules are installed in the same electronic device. In practice, because images captured by the rear camera are sharper than those captured by the front camera, the photographer/viewer is usually accustomed to shooting the real scene with the rear camera, while the lens surface of the front camera is coplanar with the display screen. The person photography module can therefore be the front camera and the scene photography module the rear camera, so that the rear camera carries out the augmented reality application while the front camera assists the AR enhancement.

It should be understood that the person photography module and the scene photography module are both camera modules and are named differently only to distinguish them. In other examples, some terminals have display screens on both the front and the back, in which case the rear camera may serve as the person photography module and the front camera as the scene photography module; alternatively, the person photography module and the scene photography module may be camera modules provided on different devices.

The human eye position information indicates the position of the photographer's/viewer's eyes in space, and may be the three-dimensional coordinates of the eyes in the world coordinate system or in the scene photography module's coordinate system.
Steps 202 and 204 describe how the human eye position information is determined. As an application example, the relative position of the human eye with respect to the person photography module may be determined first, and the human eye position information may then be determined from that relative position together with the position information of the scene photography module.

Regarding step 202, the person photography module can capture person images, in particular images of the photographer within its shooting range. The relative position of the human eye with respect to the person photography module may be a relative pose, including a relative distance and a relative direction. In one example, the relative position can be represented by a directed vector.

The relative position of the human eye with respect to the person photography module can be obtained by running a face detection algorithm on the person image. For example, the face region in the person image may be detected first; the eye region is then determined from the face region according to the relationship between the eyes and the face; and the relative position of the human eye with respect to the person photography module is determined from the relationship between the eye region and the image.

In one embodiment, a model trained by deep learning can be used to determine the relative position of the human eye with respect to the person photography module. For example, training samples may be built from person images annotated with the relative position of the human eye with respect to the camera module, and a preset initial model is trained on these samples to obtain a detection model for detecting that relative position. At application time, the detection model is run on the image to be examined to obtain the relative position of the human eye with respect to the camera module. It should be understood that, in other examples, each set of training samples may also include other sample features that help improve the relative position detection result, such as a face bounding box. Other approaches that obtain the relative position of the human eye with respect to the person photography module by recognizing the person image may also be used, and are not enumerated here.

Regarding step 204, the position information of the scene photography module indicates the position of the scene photography module in space, and may be its three-dimensional coordinates in the world coordinate system or in the scene photography module's own coordinate system; for example, it can be obtained when the scene photography module is calibrated. It should be understood that the human eye position and the scene photography module position are coordinates in the same coordinate system. In image measurement and machine vision applications, in order to determine the relationship between the three-dimensional geometric position of a point on the surface of an object in space and its corresponding point in the image, a geometric model of camera imaging is established, and the parameters of this geometric model are the camera parameters, which may include intrinsic parameters, extrinsic parameters, distortion parameters and so on.
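As an illustration of the detection route just described, the following is a minimal sketch that recovers the eye position relative to the front (person) camera from a detected pair of eye centres, assuming the front camera's intrinsic parameters are known and using a nominal interpupillary distance to estimate depth. The function name, its parameters and the 63 mm constant are illustrative assumptions rather than anything defined by this specification.

```python
import numpy as np

def eye_position_from_image(eye_centers_px, fx, fy, cx, cy, ipd_m=0.063):
    """Estimate the eye midpoint relative to the person (front) camera.

    eye_centers_px: ((u_left, v_left), (u_right, v_right)) pixel centres of the
    two detected eye regions; fx, fy, cx, cy: front-camera intrinsics;
    ipd_m: assumed interpupillary distance in metres (illustrative constant).
    Returns a 3D offset in the front-camera coordinate frame.
    """
    (ul, vl), (ur, vr) = eye_centers_px
    # Pixel distance between the eyes; with a pinhole model, depth is roughly
    # focal length * real size / pixel size for a fronto-parallel face.
    pixel_dist = np.hypot(ur - ul, vr - vl)
    z = fx * ipd_m / max(pixel_dist, 1e-6)
    # Back-project the midpoint of the two eye centres to a 3D point at depth z.
    u_mid, v_mid = (ul + ur) / 2.0, (vl + vr) / 2.0
    x = (u_mid - cx) / fx * z
    y = (v_mid - cy) / fy * z
    return np.array([x, y, z])
```

Any face or landmark detector that yields the two eye centres could feed this step; the accuracy of the recovered depth depends mainly on the assumed interpupillary distance.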
In practice, the camera can be calibrated with calibration methods from the related art, for example a linear calibration method, a nonlinear optimization calibration method or Tsai's classic two-step calibration method, which are not limited here.

After the relative position of the human eye with respect to the person photography module and the position information of the scene photography module have been obtained, the human eye position information can be determined based at least on these two pieces of information.

In some application scenarios, the person photography module and the scene photography module are mounted so close together that the relative position between the two modules can be ignored; this holds in particular when the two modules are mounted back to back. In that case the human eye position information can be determined directly from the relative position of the human eye with respect to the person photography module and the position information of the scene photography module. For example, if the position of the rear camera in the scene is X and the position of the human eye relative to the front camera is Y, the eye position can be taken as X + Y and the viewing direction as -Y.

In other application scenarios, in order to improve the accuracy of the human eye position information, the relative position between the person photography module and the scene photography module can also be taken into account. When the two modules are installed on the same device, their relative position is fixed and can be determined from the device information of that device. Accordingly, determining the human eye position information based at least on the relative position of the human eye with respect to the person photography module and on the position information of the scene photography module may include: obtaining the relative position between the person photography module and the scene photography module; using that relative position to convert the relative position of the human eye with respect to the person photography module into the relative position of the human eye with respect to the scene photography module; and combining the relative position of the human eye with respect to the scene photography module with the position information of the scene photography module to calculate the human eye position information.

It can be seen that in this embodiment the relative position of the human eye with respect to the scene photography module is obtained through the relative position between the person photography module and the scene photography module, which improves the accuracy of the human eye position information.
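The following minimal sketch combines the quantities described above, assuming the eye offset relative to the front camera has already been estimated. The optional rotation and translation stand for the fixed extrinsics between the two modules; when they are omitted, the code reduces to the simplified "position X + Y, orientation -Y" case. All names are illustrative, not an API defined by this specification.

```python
import numpy as np

def eye_position_in_scene_frame(scene_cam_pos, eye_rel_front,
                                R_front_to_scene=None, t_front_to_scene=None):
    """Combine the eye-to-front-camera offset with the scene camera's position.

    scene_cam_pos: scene (rear) camera position X in the scene/world frame;
    eye_rel_front: eye offset Y relative to the front camera;
    R_front_to_scene / t_front_to_scene: optional fixed rotation/translation
    between the two modules, known for a given device. When omitted, the two
    modules are treated as co-located.
    Returns the eye position and an approximate viewing direction.
    """
    if R_front_to_scene is None:
        R_front_to_scene = np.eye(3)
    if t_front_to_scene is None:
        t_front_to_scene = np.zeros(3)
    # Express the eye offset in the scene camera's frame, then add the camera position.
    eye_rel_scene = R_front_to_scene @ np.asarray(eye_rel_front) + t_front_to_scene
    eye_pos = np.asarray(scene_cam_pos) + eye_rel_scene
    gaze_dir = -eye_rel_scene / np.linalg.norm(eye_rel_scene)
    return eye_pos, gaze_dir
```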
The embodiments of this specification aim to replace the camera viewpoint with the human eye viewpoint, dynamically rendering the background scene (the real scene) and the virtual objects from the eye's viewpoint to enhance the sense of depth and immersion. To this end, the human eye position information is used as the scene photography module's position information in the rendering parameters, and the three-dimensional model is rendered to obtain the projected image shown on the display screen; the three-dimensional model is obtained by combining virtual objects with the real scene scanned by the scene photography module, and the rendering parameters are the parameters required for rendering the three-dimensional model.

When the model is rendered to obtain the projected image, the most important rendering parameters include the camera position and the projection plane information; this embodiment mainly adjusts the camera position among the rendering parameters so as to change the camera viewpoint into the human eye viewpoint. For this reason, the human eye position information can be used as the scene photography module's position information in the rendering parameters, thereby replacing the viewpoint of the scene photography module with that of the human eye, and the three-dimensional model is rendered with the adjusted rendering parameters to obtain the projected image shown on the display screen. The projection plane information among the rendering parameters can be determined from the display screen information. The rendering parameters also include other parameters needed during rendering, such as lighting parameters, which are not listed one by one here.

In this embodiment, the rendering parameters can be adjusted according to the human eye position, and the rendering parameters together with the three-dimensional model are fed into the rendering module, which renders the projected image.
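As one way to picture how the eye position takes the place of the camera position among the rendering parameters, the sketch below builds a standard look-at view matrix whose viewpoint is the estimated eye position; a real renderer would pair it with a projection derived from the display screen geometry, for example an off-axis frustum. This is an assumption-laden illustration, not the rendering procedure prescribed by this specification.

```python
import numpy as np

def look_at_view_matrix(eye_pos, target, up=(0.0, 1.0, 0.0)):
    """Build a right-handed view matrix whose viewpoint is the eye position.

    This matrix simply replaces the one normally derived from the scene
    camera's pose; projection-plane parameters would still come from the
    display screen geometry.
    """
    eye = np.asarray(eye_pos, dtype=float)
    f = np.asarray(target, dtype=float) - eye
    f /= np.linalg.norm(f)                      # forward axis
    s = np.cross(f, np.asarray(up, dtype=float))
    s /= np.linalg.norm(s)                      # right axis
    u = np.cross(s, f)                          # recomputed up axis
    view = np.eye(4)
    view[0, :3], view[1, :3], view[2, :3] = s, u, -f
    view[:3, 3] = -view[:3, :3] @ eye           # translate world into eye space
    return view

# Hypothetical use: render(model_3d, view=look_at_view_matrix(eye_pos, screen_center), proj=...)
```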
In the traditional pipeline of an AR system, one starts from the real world and performs digital imaging; the system then perceives and understands the three-dimensional world from the image data together with sensor data, while also obtaining an understanding of three-dimensional interaction. The purpose of 3D interaction understanding is to tell the system what to "augment"; the purpose of 3D environment understanding is to tell the system where to "augment". Once the system has determined the content and the location to be augmented, the virtual and the real can be combined, which is done by the rendering module. Finally, the synthesized video is delivered to the user's visual system, achieving the augmented reality effect.

The three-dimensional model in this specification may be a model obtained by combining virtual objects with the scene scanned by the scene photography module; it is obtained through scene modeling followed by virtual-object overlay. One construction method is listed below. In this embodiment, constructing the three-dimensional model may include: performing three-dimensional reconstruction of the real scene by using real-scene images captured by the scene photography module, to obtain a scene model; and superimposing virtual objects onto the scene model based on a preset overlay strategy, to obtain the three-dimensional model.

The scene model, also called a space model, includes but is not limited to an initialized scene model for implementing augmented reality. In the embodiments of this specification, the scene model can be obtained by performing three-dimensional reconstruction of the real scene. Three-dimensional reconstruction (3D reconstruction) builds 3D models of the objects in a real scene from input data. Vision-based three-dimensional reconstruction refers to acquiring image data of the objects in a scene with a camera, analyzing and processing those images, and deriving, with the help of computer vision knowledge, the three-dimensional information of the objects in the real environment.

In one embodiment, two-dimensional images can be taken as input to reconstruct the three-dimensional scene model: from RGB images of an object shot from different angles, a three-dimensional model of the object can be reconstructed using the relevant computer graphics and vision techniques.

With the advent of depth cameras, in another embodiment the scene photography module may be a depth camera. For the points in the real scene, each frame of data scanned by the depth camera includes not only the color RGB image of those points but also the distance from each point to the vertical plane in which the depth camera lies. This distance is called the depth value, and the depth values together make up the depth image of the frame. A depth image can be understood as a grayscale image in which the gray value of each point represents that point's depth value, that is, the true distance from the point's position in the real world to the vertical plane of the camera. The RGB image and the depth image collected by the depth camera can therefore be taken as input to reconstruct the three-dimensional scene model.

The three-dimensional reconstruction process may involve image acquisition, camera calibration, feature extraction, stereo matching, three-dimensional reconstruction and so on. Since three-dimensional reconstruction is a relatively mature existing technique, it is not described further here; for example, simultaneous localization and mapping (SLAM) methods can be used to reconstruct the real scene in three dimensions.

After the scene model is obtained, virtual objects can be selected based on the preset overlay strategy and the positions where they need to be superimposed are located, so that the virtual objects are superimposed onto the scene model to obtain the three-dimensional model. The preset overlay strategy may be any strategy for determining the content and the location to be augmented, and is not limited here.

It can be seen from the above embodiments that while the scene photography module is used for the augmented reality application, the person photography module is used to locate the human eye, so that the real scene and the virtual objects are dynamically rendered from the eye's viewpoint. When the position of the eye relative to the camera module changes, the displayed projected image responds accordingly, enhancing the sense of depth and immersion.

In one embodiment, before the human eye position is determined, it is determined that the relative position of the human eye with respect to the person photography module has changed, based on the currently obtained relative position and the previously obtained relative position. Steps 204 and 206 are then executed only when the relative position of the human eye with respect to the person photography module has changed, and are skipped when it has not, which avoids the waste of resources caused by recomputing on every frame.
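A minimal sketch of that change check is given below; the 5 mm threshold and the function name are illustrative assumptions rather than values taken from this specification.

```python
import numpy as np

def eye_moved(current_rel_pos, last_rel_pos, threshold_m=0.005):
    """Return True when the eye-to-front-camera offset has changed enough to
    justify re-deriving the eye position and re-rendering (steps 204 and 206).
    The 5 mm threshold is an illustrative value."""
    if last_rel_pos is None:
        return True
    delta = np.asarray(current_rel_pos) - np.asarray(last_rel_pos)
    return np.linalg.norm(delta) > threshold_m
```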
The various technical features in the above embodiments can be combined in any way, as long as the combination involves no conflict or contradiction; they are not described one by one here for reasons of space, but any combination of these technical features also falls within the scope disclosed by this specification.

One such combination is illustrated below.

FIG. 3A is a flowchart of another augmented reality image display method according to an exemplary embodiment of this specification. The method can be applied to a mobile device to change the device's augmented reality content from a composite image rendered from the camera's viewpoint to a composite image rendered from the human eye's viewpoint. The method may include:

In step 302, three-dimensional reconstruction of the scene behind the device is performed from images captured by the rear camera, and virtual objects are superimposed to obtain a three-dimensional model.

In step 304, the position of the user's eyes is detected from images captured by the front camera, using a face detection algorithm.

In step 306, based on the eye position, the projection of the reconstructed three-dimensional scene onto the device screen and the projection of the virtual objects onto the device screen are recomputed, and a projected image is obtained.

In step 308, the projected image is displayed on the device screen.

Parts of FIG. 3A that are similar to the technique of FIG. 2 are not repeated here; a sketch tying these steps together is given after this description.

For ease of understanding, the display position of the virtual object in this embodiment is compared with its display position in the prior art with reference to FIG. 3B. The angle of view of the rear camera is usually larger than the angle of view of the scene that the eye can see through the screen frame (the eye viewpoint, for short); therefore the region occluded by the virtual object under the camera viewpoint is larger than under the eye viewpoint. In the figure, 32 denotes the display position of the virtual object on the screen with the solution of this embodiment, and 34 denotes its display position with the existing solution.

In this embodiment, the displayed content is adjusted based on the front camera's estimate of the eye position, so that the displayed scene comes closer to the eye's viewpoint, with a stronger sense of immersion and depth. Using three-dimensional scene reconstruction to model the background makes it possible to respond better to changes of the eye position and to show the background from different angles. At the same time, three-dimensional scene reconstruction makes it possible to respond more appropriately to scenes in which the device is stationary while the background moves.
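To tie steps 302 to 308 together, the following per-frame sketch strings the earlier helpers into one loop body. The four callables passed in stand for the reconstruction, overlay, eye-detection and rendering stages, and the state fields are hypothetical; it is a sketch of one possible arrangement, not the implementation of this specification.

```python
def ar_frame(front_image, rear_image, state,
             reconstruct_scene, overlay_virtual_objects,
             detect_eye_offset, render_to_screen):
    """One display refresh following steps 302-308. The callables are
    placeholders for the four stages; eye_moved, eye_position_in_scene_frame
    and look_at_view_matrix are the sketches given earlier."""
    # Step 302: rebuild or update the scene model behind the device and add virtual objects.
    scene_model = reconstruct_scene(state.scene_model, rear_image)
    model_3d = overlay_virtual_objects(scene_model, state.overlay_strategy)

    # Step 304: locate the viewer's eyes with the front camera.
    eye_rel_front = detect_eye_offset(front_image)

    # Optional gate: skip re-projection when the viewer has not moved.
    if not eye_moved(eye_rel_front, state.last_eye_rel_front):
        return state.last_frame

    # Steps 306 and 308: recompute the projection from the eye's viewpoint and display it.
    eye_pos, _ = eye_position_in_scene_frame(state.scene_cam_pos, eye_rel_front)
    frame = render_to_screen(model_3d,
                             view=look_at_view_matrix(eye_pos, state.screen_center))
    state.last_eye_rel_front, state.last_frame = eye_rel_front, frame
    return frame
```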
Corresponding to the foregoing embodiments of the augmented reality image display method, this specification also provides embodiments of an augmented reality image display device and of the electronic equipment to which it is applied.

The embodiments of the augmented reality image display device of this specification can be applied to computer equipment. The device embodiments can be implemented by software, by hardware, or by a combination of software and hardware. Taking software implementation as an example, the device in the logical sense is formed by the processor of the computer equipment in which it resides reading the corresponding computer program instructions from non-volatile memory into internal memory and running them. In terms of hardware, FIG. 4 is a hardware structure diagram of the computer equipment in which the augmented reality image display device of this specification resides; in addition to the processor 410, the network interface 420, the internal memory 430 and the non-volatile memory 440 shown in FIG. 4, the computer equipment in which the augmented reality image display device 431 of the embodiment resides usually also includes other hardware according to the actual function of that equipment, which is not described further here.

FIG. 5 is a block diagram of an augmented reality image display device according to an exemplary embodiment of this specification, the device including:

a relative position determination module 52, configured to obtain a person image captured by the person photography module and determine the relative position of the human eye with respect to the person photography module by using the relationship between the human eye region in the person image and the person image;

a human eye position determination module 54, configured to determine human eye position information based at least on the relative position of the human eye with respect to the person photography module and on the position information of the scene photography module; and

an image rendering module 56, configured to render the three-dimensional model by using the human eye position information as the scene photography module's position information in the rendering parameters, to obtain a projected image shown on the display screen, the three-dimensional model being obtained by combining virtual objects with the real scene scanned by the scene photography module.

In one embodiment, the device is provided in an electronic device, the person photography module includes a front camera of the electronic device, and the scene photography module includes a rear camera of the electronic device.

In one embodiment, the device further includes a three-dimensional model construction module (not shown in FIG. 5), configured to: perform three-dimensional reconstruction of the real scene by using real-scene images captured by the scene photography module, to obtain a scene model; and superimpose virtual objects onto the scene model based on a preset overlay strategy, to obtain the three-dimensional model.

In one embodiment, the human eye position determination module is specifically configured to: obtain the relative position of the person photography module with respect to the scene photography module; use that relative position to convert the relative position of the human eye with respect to the person photography module into the relative position of the human eye with respect to the scene photography module; and combine the relative position of the human eye with respect to the scene photography module with the position information of the scene photography module to calculate the human eye position information.
In one embodiment, the device further includes a position judgment module (not shown in FIG. 5), configured to: before the human eye position is determined, determine that the relative position of the human eye with respect to the person photography module has changed, based on the currently obtained relative position and the previously obtained relative position of the human eye with respect to the person photography module.

Since the device embodiments basically correspond to the method embodiments, reference may be made to the relevant parts of the description of the method embodiments. The device embodiments described above are merely illustrative; the modules described as separate components may or may not be physically separate, and components shown as modules may or may not be physical modules, that is, they may be located in one place or distributed over multiple network modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solutions of this specification, and a person of ordinary skill in the art can understand and implement them without inventive effort.

Correspondingly, the embodiments of this specification also provide a computer device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements the following method: obtaining a person image captured by the person photography module, and determining the relative position of the human eye with respect to the person photography module by using the relationship between the human eye region in the person image and the person image; determining human eye position information based at least on the relative position of the human eye with respect to the person photography module and on the position information of the scene photography module; and rendering a three-dimensional model by using the human eye position information as the scene photography module's position information in the rendering parameters, to obtain a projected image shown on the display screen, the three-dimensional model being obtained by combining virtual objects with the real scene scanned by the scene photography module.

The various embodiments in this specification are described in a progressive manner; for identical or similar parts, the embodiments may refer to one another, and each embodiment focuses on its differences from the others. In particular, the device embodiments are described relatively briefly because they are basically similar to the method embodiments, and reference may be made to the relevant parts of the description of the method embodiments.

A computer storage medium is also provided, the storage medium storing program instructions, the program instructions including: obtaining a person image captured by the person photography module, and determining the relative position of the human eye with respect to the person photography module by using the relationship between the human eye region in the person image and the person image; determining human eye position information based at least on the relative position of the human eye with respect to the person photography module and on the position information of the scene photography module; and rendering a three-dimensional model by using the human eye position information as the scene photography module's position information in the rendering parameters, to obtain a projected image shown on the display screen, the three-dimensional model being obtained by combining virtual objects with the real scene scanned by the scene photography module.
The embodiments of this specification may take the form of a computer program product implemented on one or more storage media (including but not limited to magnetic disk storage, CD-ROM, optical storage and the like) that contain program code. Computer-usable storage media include permanent and non-permanent, removable and non-removable media, and information storage can be implemented by any method or technology. The information may be computer-readable instructions, data structures, program modules or other data. Examples of computer storage media include, but are not limited to: phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission media that can be used to store information accessible to computer equipment.

Those skilled in the art will readily conceive of other embodiments of this specification after considering the specification and practicing the invention applied for here. This specification is intended to cover any variations, uses or adaptive changes of this specification that follow its general principles and include common knowledge or customary technical means in the art that are not disclosed in this specification. The specification and the embodiments are to be regarded as exemplary only, and the true scope and spirit of this specification are indicated by the following claims.

It should be understood that this specification is not limited to the precise structures described above and shown in the drawings, and that various modifications and changes may be made without departing from its scope. The scope of this specification is limited only by the appended claims.

The above are merely preferred embodiments of this specification and are not intended to limit it; any modification, equivalent replacement, improvement and the like made within the spirit and principles of this specification shall fall within its scope of protection.

202: Step
204: Step
206: Step
302: Step
304: Step
306: Step
308: Step
32: Display position
34: Display position
410: Processor
420: Network interface
430: Internal memory
431: Augmented reality image display device
440: Non-volatile memory
52: Relative position determination module
54: Human eye position determination module
56: Image rendering module

The drawings here are incorporated into and constitute a part of this specification, show embodiments consistent with this specification, and together with the specification serve to explain the principles of this specification.
Fig. 1 is a schematic diagram of AR scene shooting provided by this specification according to an exemplary embodiment.
Fig. 2 is a flowchart of an augmented reality image display method according to an exemplary embodiment of this specification.
Fig. 3A is a flowchart of another augmented reality image display method according to an exemplary embodiment of this specification.
Fig. 3B is a schematic diagram comparing the display positions of a virtual object according to an exemplary embodiment of this specification.
Fig. 4 is a hardware structure diagram of a computer device in which an augmented reality image display device is located, according to an exemplary embodiment of this specification.
Fig. 5 is a block diagram of an augmented reality image display device according to an exemplary embodiment of this specification.
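Fig. 3B compares the display position of a virtual object under the two viewpoints discussed in this specification (the scene photography module's viewpoint versus the human eye's viewpoint). That shift can be illustrated numerically with a simplified pinhole projection; the focal length, principal point, and coordinates below are arbitrary example values chosen for this sketch, not values taken from this specification or its drawings.

```python
import numpy as np

FX, FY = 1000.0, 1000.0   # example focal lengths, in pixels
CX, CY = 540.0, 960.0     # example principal point, in pixels

def project(point_world, cam_pos):
    """Pinhole projection for a camera at cam_pos looking along +Z.
    The orientation is fixed for simplicity; a full implementation would
    also apply the camera's rotation."""
    p = point_world - cam_pos
    return np.array([FX * p[0] / p[2] + CX, FY * p[1] / p[2] + CY])

virtual_obj = np.array([0.10, 0.00, 1.00])    # a virtual object about 1 m in front (metres)
scene_cam   = np.array([0.00, 0.00, 0.00])    # the rear (scene) photography module
eye         = np.array([0.03, -0.05, -0.35])  # the eye, roughly 35 cm behind the device

print(project(virtual_obj, scene_cam))  # approx. [640.0, 960.0]
print(project(virtual_obj, eye))        # approx. [591.9, 997.0]
```

Moving the rendering viewpoint from the scene photography module to the eye shifts where the same virtual object lands on the screen, which is the kind of difference Fig. 3B is drawn to show.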

Claims (11)

1. An augmented reality image display method, the method comprising:
obtaining a person image collected by a person photography module, and determining the relative position of the human eye and the person photography module by using the relationship between the human eye area in the person image and the person image;
determining human eye position information based at least on the relative position of the human eye and the person photography module and the position information of a scene photography module; and
rendering a three-dimensional model by using the human eye position information as the position information of the scene photography module in the rendering parameters, to obtain a projected image projected on a display screen, wherein the three-dimensional model is obtained by combining a virtual object with the real scene scanned by the scene photography module.

2. The method according to claim 1, wherein the method is applied to an electronic device, the person photography module comprises a front camera of the electronic device, and the scene photography module comprises a rear camera of the electronic device.

3. The method according to claim 1, wherein the construction of the three-dimensional model comprises:
performing three-dimensional reconstruction of the real scene by using real-scene images collected by the scene photography module, to obtain a scene model; and
overlaying the virtual object onto the scene model based on a preset overlay strategy, to obtain the three-dimensional model.

4. The method according to any one of claims 1 to 3, wherein determining the human eye position information based at least on the relative position of the human eye and the person photography module and the position information of the scene photography module comprises:
obtaining the relative position of the person photography module and the scene photography module;
converting the relative position of the human eye and the person photography module into the relative position of the human eye and the scene photography module by using the relative position of the person photography module and the scene photography module; and
calculating the human eye position information by combining the relative position of the human eye and the scene photography module with the position information of the scene photography module.

5. The method according to any one of claims 1 to 3, further comprising:
before the human eye position is determined, determining that the relative position of the human eye and the person photography module has changed, according to the currently obtained relative position of the human eye and the person photography module and the previously obtained relative position of the human eye and the person photography module.
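As a reading aid only: the conversion between relative positions in claim 4 and the change test in claim 5 are both small geometric operations. The sketch below shows one way they might look in code, assuming the relative pose between the person and scene photography modules is available from device calibration and that poses are represented as rotation matrices and translation vectors; all function names and the tolerance value are invented for this sketch and are not part of the claims.

```python
import numpy as np

def eye_relative_to_scene_module(eye_in_person_cam, R_person_to_scene, t_person_to_scene):
    """Claim 4: convert the relative position of the human eye and the person
    photography module into the relative position of the human eye and the
    scene photography module, using the relative pose between the two modules."""
    return R_person_to_scene @ eye_in_person_cam + t_person_to_scene

def eye_position_info(eye_in_scene_cam, R_scene_to_world, t_scene_to_world):
    """Claim 4, final step: combine with the scene photography module's own
    position information to obtain the human eye position information."""
    return R_scene_to_world @ eye_in_scene_cam + t_scene_to_world

def relative_position_changed(current, previous, tolerance=2.0):
    """Claim 5: treat the relative position as changed (and hence worth
    re-rendering) only if it has moved by more than a small tolerance."""
    return previous is None or np.linalg.norm(current - previous) > tolerance
```

In this reading, re-rendering is triggered only when relative_position_changed returns True, which matches the intent of claim 5.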
6. An augmented reality image display device, the device comprising:
a relative position determination module, configured to obtain a person image collected by a person photography module, and determine the relative position of the human eye and the person photography module by using the relationship between the human eye area in the person image and the person image;
a human eye position determination module, configured to determine human eye position information based at least on the relative position of the human eye and the person photography module and the position information of a scene photography module; and
an image rendering module, configured to render a three-dimensional model by using the human eye position information as the position information of the scene photography module in the rendering parameters, to obtain a projected image projected on a display screen, wherein the three-dimensional model is obtained by combining a virtual object with the real scene scanned by the scene photography module.

7. The device according to claim 6, wherein the device is provided in an electronic device, the person photography module comprises a front camera of the electronic device, and the scene photography module comprises a rear camera of the electronic device.

8. The device according to claim 6, further comprising a three-dimensional model construction module, configured to:
perform three-dimensional reconstruction of the real scene by using real-scene images collected by the scene photography module, to obtain a scene model; and
overlay the virtual object onto the scene model based on a preset overlay strategy, to obtain the three-dimensional model.

9. The device according to any one of claims 6 to 8, wherein the human eye position determination module is specifically configured to:
obtain the relative position of the person photography module and the scene photography module;
convert the relative position of the human eye and the person photography module into the relative position of the human eye and the scene photography module by using the relative position of the person photography module and the scene photography module; and
calculate the human eye position information by combining the relative position of the human eye and the scene photography module with the position information of the scene photography module.

10. The device according to any one of claims 6 to 8, further comprising a position judgment module, configured to:
before the human eye position is determined, determine that the relative position of the human eye and the person photography module has changed, according to the currently obtained relative position of the human eye and the person photography module and the previously obtained relative position of the human eye and the person photography module.
11. A computer device, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the method according to any one of claims 1 to 5 when executing the program.
TW108124231A 2018-09-28 2019-07-10 Method, device and equipment for displaying images of augmented reality TWI712918B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811142256.0A CN109615703B (en) 2018-09-28 2018-09-28 Augmented reality image display method, device and equipment
CN201811142256.0 2018-09-28

Publications (2)

Publication Number Publication Date
TW202013149A TW202013149A (en) 2020-04-01
TWI712918B true TWI712918B (en) 2020-12-11

Family

ID=66002749

Family Applications (1)

Application Number Title Priority Date Filing Date
TW108124231A TWI712918B (en) 2018-09-28 2019-07-10 Method, device and equipment for displaying images of augmented reality

Country Status (3)

Country Link
CN (1) CN109615703B (en)
TW (1) TWI712918B (en)
WO (1) WO2020063100A1 (en)

Families Citing this family (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109615703B (en) * 2018-09-28 2020-04-14 阿里巴巴集团控股有限公司 Augmented reality image display method, device and equipment
CN112797897B (en) * 2019-04-15 2022-12-06 Oppo广东移动通信有限公司 Method and device for measuring geometric parameters of object and terminal
CN110428388B (en) * 2019-07-11 2023-08-08 创新先进技术有限公司 Image data generation method and device
CN112306222A (en) * 2019-08-06 2021-02-02 北京字节跳动网络技术有限公司 Augmented reality method, device, equipment and storage medium
US10993417B2 (en) * 2019-08-14 2021-05-04 International Business Machines Corporation Detection and management of disease outbreaks in livestock using health graph networks
CN110930518A (en) * 2019-08-29 2020-03-27 广景视睿科技(深圳)有限公司 Projection method and projection equipment based on augmented reality technology
CN110928627B (en) * 2019-11-22 2023-11-24 北京市商汤科技开发有限公司 Interface display method and device, electronic equipment and storage medium
CN111405263A (en) * 2019-12-26 2020-07-10 的卢技术有限公司 Method and system for enhancing head-up display by combining two cameras
CN111179438A (en) * 2020-01-02 2020-05-19 广州虎牙科技有限公司 AR model dynamic fixing method and device, electronic equipment and storage medium
WO2021184388A1 (en) * 2020-03-20 2021-09-23 Oppo广东移动通信有限公司 Image display method and apparatus, and portable electronic device
CN111553972B (en) * 2020-04-27 2023-06-30 北京百度网讯科技有限公司 Method, apparatus, device and storage medium for rendering augmented reality data
CN111625101B (en) * 2020-06-03 2024-05-17 上海商汤智能科技有限公司 Display control method and device
CN114125418A (en) * 2020-08-25 2022-03-01 陕西红星闪闪网络科技有限公司 Holographic tourist service center and implementation method thereof
CN112672139A (en) * 2021-03-16 2021-04-16 深圳市火乐科技发展有限公司 Projection display method, device and computer readable storage medium
TWI779922B (en) * 2021-11-10 2022-10-01 財團法人資訊工業策進會 Augmented reality processing device and method
CN114401414B (en) * 2021-12-27 2024-01-23 北京达佳互联信息技术有限公司 Information display method and system for immersive live broadcast and information pushing method
CN114706936B (en) * 2022-05-13 2022-08-26 高德软件有限公司 Map data processing method and location-based service providing method

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8730309B2 (en) * 2010-02-23 2014-05-20 Microsoft Corporation Projectors and depth cameras for deviceless augmented reality and interaction
US8994558B2 (en) * 2012-02-01 2015-03-31 Electronics And Telecommunications Research Institute Automotive augmented reality head-up display apparatus and method
CN106302132A (en) * 2016-09-14 2017-01-04 华南理工大学 A kind of 3D instant communicating system based on augmented reality and method
CN108153502B (en) * 2017-12-22 2021-11-12 长江勘测规划设计研究有限责任公司 Handheld augmented reality display method and device based on transparent screen
CN108181994A (en) * 2018-01-26 2018-06-19 成都科木信息技术有限公司 For the man-machine interaction method of the AR helmets
CN108287609B (en) * 2018-01-26 2021-05-11 成都科木信息技术有限公司 Image drawing method for AR glasses
CN109615703B (en) * 2018-09-28 2020-04-14 阿里巴巴集团控股有限公司 Augmented reality image display method, device and equipment

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106462370A (en) * 2014-04-29 2017-02-22 微软技术许可有限责任公司 Stabilization plane determination based on gaze location
US20170230760A1 (en) * 2016-02-04 2017-08-10 Magic Leap, Inc. Technique for directing audio in augmented reality system
CN106710002A (en) * 2016-12-29 2017-05-24 深圳迪乐普数码科技有限公司 AR implementation method and system based on positioning of visual angle of observer
CN107038746A (en) * 2017-03-27 2017-08-11 联想(北京)有限公司 A kind of information processing method and electronic equipment

Also Published As

Publication number Publication date
CN109615703B (en) 2020-04-14
CN109615703A (en) 2019-04-12
TW202013149A (en) 2020-04-01
WO2020063100A1 (en) 2020-04-02

Similar Documents

Publication Publication Date Title
TWI712918B (en) Method, device and equipment for displaying images of augmented reality
US11869205B1 (en) Techniques for determining a three-dimensional representation of a surface of an object from a set of images
US10269177B2 (en) Headset removal in virtual, augmented, and mixed reality using an eye gaze database
US20210344891A1 (en) System and method for generating combined embedded multi-view interactive digital media representations
US10979693B2 (en) Stereoscopic 3D camera for virtual reality experience
US10650574B2 (en) Generating stereoscopic pairs of images from a single lens camera
US8711204B2 (en) Stereoscopic editing for video production, post-production and display adaptation
WO2021030002A1 (en) Depth-aware photo editing
US20150235408A1 (en) Parallax Depth Rendering
US9813693B1 (en) Accounting for perspective effects in images
US20140009503A1 (en) Systems and Methods for Tracking User Postures to Control Display of Panoramas
US11720996B2 (en) Camera-based transparent display
JP7479729B2 (en) Three-dimensional representation method and device
US20190266802A1 (en) Display of Visual Data with a Virtual Reality Headset
CN111047709A (en) Binocular vision naked eye 3D image generation method
WO2014008320A1 (en) Systems and methods for capture and display of flex-focus panoramas
JP2016504828A (en) Method and system for capturing 3D images using a single camera
US20230152883A1 (en) Scene processing for holographic displays
Louis et al. Rendering stereoscopic augmented reality scenes with occlusions using depth from stereo and texture mapping
US11044464B2 (en) Dynamic content modification of image and video based multi-view interactive digital media representations
US20230217001A1 (en) System and method for generating combined embedded multi-view interactive digital media representations
US20240078743A1 (en) Stereo Depth Markers
JP2024062935A Method and apparatus for generating stereoscopic display content
CN115334296A (en) Stereoscopic image display method and display device