TWI659335B - Graphic processing method and device, virtual reality system, computer storage medium - Google Patents

Graphic processing method and device, virtual reality system, computer storage medium

Info

Publication number
TWI659335B
Authority
TW
Taiwan
Prior art keywords
eye
information
picture
target
position information
Prior art date
Application number
TW107116847A
Other languages
Chinese (zh)
Other versions
TW201835723A (en)
Inventor
劉皓
Original Assignee
大陸商騰訊科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 大陸商騰訊科技(深圳)有限公司
Publication of TW201835723A
Application granted
Publication of TWI659335B

Classifications

    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013 Eye tracking input arrangements
    • G06F3/015 Input arrangements based on nervous system activity detection, e.g. brain waves [EEG] detection, electromyograms [EMG] detection, electrodermal response detection
    • G06T15/005 General purpose rendering architectures
    • G06T15/04 Texture mapping
    • H04N13/275 Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals
    • G06F2203/011 Emotion or mood input determined on the basis of sensed human body parameters such as pulse, heart rate or beat, temperature of skin, facial expressions, iris, voice pitch, brain activity patterns
    • G06F2203/012 Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment
    • G06T2215/16 Using real world measurements to influence rendering

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Computer Graphics (AREA)
  • Neurosurgery (AREA)
  • Neurology (AREA)
  • General Health & Medical Sciences (AREA)
  • Dermatology (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present application provides a graphics processing method and apparatus, a virtual reality system, and a computer storage medium. The method includes: obtaining position information of an observer; determining, according to the position information, a target object in a virtual reality (VR) picture to be displayed; obtaining at least two pre-stored images corresponding to the target object, the at least two images having been captured from different shooting positions; generating a target image from the at least two images according to the position information and the shooting positions corresponding to the at least two images, the target image being an image of the target object as seen from the observer's position; and displaying the VR picture and rendering the target image in the VR picture.

Description

Graphic processing method and device, virtual reality system and computer storage medium

The present application relates to the field of graphics processing, and in particular to a graphics processing method and apparatus, a virtual reality system, and a computer storage medium.

A current mainstream technique for generating virtual reality (VR) scenes is three-dimensional (3D) modeling, in which VR scenes are built from 3D models. In some VR game products, VR scenes are produced mainly by combining 3D modeling with real-time rendering. Wearing a VR head-mounted display device, such as VR glasses or a VR helmet, as the viewing medium, the user is immersed in the VR scene and interacts with characters and other objects in it, thereby gaining a realistic sense of space. A common example is a roller-coaster VR scene.

An embodiment of the present application provides a graphics processing method, applied to a computing device, including: obtaining position information of an observer; determining, according to the position information, a target object in a virtual reality (VR) picture to be displayed; obtaining at least two pre-stored images corresponding to the target object, the at least two images having been captured from different shooting positions; generating a target image from the at least two images according to the position information and the shooting positions corresponding to the at least two images, the target image being an image of the target object as seen from the observer's position; and displaying the VR picture and rendering the target image in the VR picture.

An embodiment of the present application provides a graphics processing apparatus, including a processor and a memory, the memory storing computer-readable instructions that can cause the processor to: obtain position information of an observer; determine, according to the position information, a target object in a VR picture to be displayed; obtain at least two pre-stored images corresponding to the target object, the at least two images having been captured from different shooting positions; generate a target image from the at least two images according to the position information and the shooting positions corresponding to the at least two images, the target image being an image of the target object as seen from the observer's position; and display the VR picture and render the target image in the VR picture.

An embodiment of the present application provides a graphics processing method, applied to a computing device, including: collecting current pose information of an observer; obtaining position information of the observer according to the pose information; determining, according to the position information, a target object in a VR picture to be displayed; obtaining at least two pre-stored images corresponding to the target object, captured from different shooting positions; generating a target image from the at least two images according to the position information and the corresponding shooting positions, the target image being an image of the target object as seen from the observer's position; and displaying the VR picture and rendering the target image in the VR picture.

An embodiment of the present application provides a virtual reality (VR) system, including a pose collection device, a processing device, and a display device. The pose collection device is configured to collect current pose information of an observer. The processing device is configured to: obtain position information of the observer according to the pose information; determine, according to the position information, a target object in a VR picture to be displayed; obtain at least two pre-stored images corresponding to the target object, captured from different shooting positions; and generate a target image from the at least two images according to the position information and the corresponding shooting positions, the target image being an image of the target object as seen from the observer's position. The display device is configured to display the VR picture and render the target image in the VR picture.

An embodiment of the present application provides a computer storage medium storing instructions that, when run on a computer, cause the computer to execute the method described in the embodiments of the present application.

An embodiment of the present application provides a computer program product including instructions that, when run by a computer, cause the computer to execute the method described in the embodiments of the present application.

30‧‧‧VR system
32‧‧‧Pose collection device
34‧‧‧Processing device
36‧‧‧Display device
42‧‧‧Character
43‧‧‧Object
44‧‧‧Object
46‧‧‧Object
52‧‧‧Object
54‧‧‧Object
82‧‧‧First texture
84‧‧‧Second texture
86‧‧‧Third texture
88‧‧‧Fourth texture
101‧‧‧VR head-mounted display device
102‧‧‧Computing device
103‧‧‧Camera
201~205‧‧‧Steps
900‧‧‧Technical equipment
901‧‧‧Processor
902‧‧‧Storage
903‧‧‧I/O interface
904‧‧‧Display interface
905‧‧‧Network communication interface
906‧‧‧Bus
907‧‧‧Operating system
908‧‧‧I/O module
909‧‧‧Communication module
900A‧‧‧Graphics processing device
900B‧‧‧Processor
910‧‧‧Acquisition module
920‧‧‧Computing module
930‧‧‧Rendering module
1000‧‧‧VR helmet
1010‧‧‧Head tracker
1011‧‧‧Angle sensor
1012‧‧‧Signal processor
1013‧‧‧Data transmitter
1014‧‧‧Display
1020‧‧‧CPU
1030‧‧‧GPU
1040‧‧‧Display
1110‧‧‧VR glasses
1112‧‧‧Angle sensor
1114‧‧‧Signal processor
1116‧‧‧Data transmitter
1118‧‧‧Display
1120‧‧‧Host
C1‧‧‧Shooting position
C2‧‧‧Shooting position
C3‧‧‧Shooting position
Cview‧‧‧Average position
L41‧‧‧Object
L80‧‧‧Left-eye picture
LE‧‧‧Left-eye position
R45‧‧‧Object
R80‧‧‧Right-eye picture
RE‧‧‧Right-eye position
S310~S370‧‧‧Steps

FIG. 1 is a schematic diagram of a VR system according to an embodiment of the present application.

FIG. 2 is a schematic flowchart of a graphics processing method according to an embodiment of the present application.

FIG. 3 is a schematic flowchart of a graphics processing method according to an embodiment of the present application.

FIG. 4 is a schematic diagram of a scene to be presented according to an embodiment of the present application.

FIG. 5 is a schematic diagram of a pre-shot scene according to an embodiment of the present application.

FIG. 6 is a schematic diagram of videos obtained at different shooting positions according to an embodiment of the present application.

FIG. 7 is a schematic diagram of determining a target video according to an embodiment of the present application.

FIG. 8 is a schematic diagram of presenting a target video according to an embodiment of the present application.

FIG. 9A is a schematic structural diagram of the computing device in which a graphics processing apparatus according to an embodiment of the present application is located.

FIG. 9B is a schematic block diagram of a processor according to an embodiment of the present application.

FIG. 10 is a schematic diagram of a virtual reality system according to an embodiment of the present application.

FIG. 11 is a schematic diagram of a virtual reality system according to another embodiment of the present application.

The technical solutions in the present application are described below with reference to the drawings.

Embodiments of the present application provide a graphics processing method, apparatus, and VR system.

It should be understood that the methods and devices of the embodiments of the present application apply to the field of VR scenes, for example VR games, as well as other interactive scenes such as interactive VR movies and interactive VR concerts; the embodiments of the present application are not limited in this respect.

Before the graphics processing method of the embodiments is described in detail, the real-time rendering technique involved in the embodiments is first introduced. The essence of real-time rendering is the real-time computation and output of graphics data; its defining characteristic is that it happens in real time. Currently, processors in personal computers (PCs), workstations, game consoles, mobile devices, and VR systems compute at a rate of at least 24 frames per second. In other words, each frame must be rendered within 1/24 of a second. In actual 3D games, the required frame rate is even higher. It is precisely this real-time property that makes continuous playback of 3D games possible and allows users to interact with characters and other objects in the game scene.

The real-time rendering involved in the embodiments of the present application may be implemented by a central processing unit (CPU) or a graphics processing unit (GPU); the embodiments are not limited in this respect. Specifically, a GPU is a processor dedicated to image computation; it may reside on a graphics card and is also known as a display core, visual processor, or display chip.

FIG. 1 is a schematic diagram of a VR system according to an embodiment of the present application. As shown in FIG. 1, the system includes a VR head-mounted display device 101 and a computing device 102.

The VR head-mounted display device 101 may be VR glasses or a VR helmet, and may include an angle sensor 1011, a signal processor 1012, a data transmitter 1013, and a display 1014. The angle sensor 1011 can collect pose information of the observer.

The computing device 102 may be a smart terminal device such as a personal computer (PC) or a laptop, or a smart mobile terminal device such as a smartphone, a PAD, or a tablet. It may include a CPU and a GPU for computing and rendering the observation picture, which is then sent to the display 1014 for display. The signal processor 1012 and the data transmitter 1013 are mainly used for communication between the VR head-mounted display device 101 and the computing device 102.

In some examples, the VR system of the embodiments may further include a camera 103 for shooting videos of objects in the VR scene from multiple different shooting positions.

Based on the system shown in FIG. 1, an embodiment of the present application proposes a graphics processing method. FIG. 2 is a flowchart of a graphics processing method 200 provided by an embodiment of the present application; the method is executed by the computing device 102 in the VR system. As shown in FIG. 2, the method includes the following steps.

Step 201: Obtain position information of the observer.

In some examples, the observer's left-eye position information, right-eye position information, left-eye orientation information, and right-eye orientation information are obtained. These are determined from the collected current pose information of the user, which includes at least one of head pose information, limb pose information, trunk pose information, muscle electrical stimulation information, eye tracking information, skin perception information, motion perception information, and brain signal information.
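
As a minimal sketch of how per-eye positions might be derived (the patent prescribes no formula; the helper name, the rotation-matrix convention, and the interpupillary distance are all assumptions):

```python
import numpy as np

def eye_positions(head_pos, head_rot, ipd=0.064):
    """Derive left/right eye positions from a tracked head pose.

    head_pos: (3,) head position in virtual-space coordinates.
    head_rot: (3, 3) rotation matrix giving the head's orientation.
    ipd: assumed interpupillary distance in meters (0.064 is a common average).
    """
    right_axis = head_rot[:, 0]                   # head-local +x axis in world space
    left_eye = head_pos - right_axis * (ipd / 2)
    right_eye = head_pos + right_axis * (ipd / 2)
    forward = head_rot[:, 2]                      # both eyes share the head's forward axis
    return left_eye, right_eye, forward
```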

Step 202: Determine, according to the position information, a target object in the virtual reality (VR) picture to be displayed.

In some examples, the target object may be a character, the character being an object whose realism is to be improved, that is, the target object.

For example, each scene, or a group of scenes, may have a target object list; when generating the VR scene, the target objects in that scene are found according to the list. As another example, the game design of a VR scene may stipulate that characters in the near field (within a certain range of the user) are target objects, that objects other than characters in the near field are not, and that no object in the far field (beyond a certain range of the user) is a target object, and so on. Determining the target objects in a scene may be performed by the processing device 34, for example by the CPU in the processing device 34; the embodiments are not limited in this respect.

Step 203: Obtain at least two pre-stored images corresponding to the target object, the at least two images having been captured from different shooting positions.

In some examples, according to the time information of the VR picture to be displayed, the video frame corresponding to that time information in each of several pre-shot videos is selected as the image; the time information of the VR picture may be the current time of the VR picture.
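
A minimal sketch of that frame lookup, assuming each pre-shot video is stored as an array of frames at a known frame rate (the names are illustrative, not from the patent):

```python
def frame_at(video_frames, fps, t_seconds):
    """Pick the frame of a pre-shot video that corresponds to the
    current time of the VR picture to be displayed."""
    idx = min(int(t_seconds * fps), len(video_frames) - 1)  # clamp past the end
    return video_frames[idx]
```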

Step 204: Generate a target image from the at least two images according to the position information and the shooting positions corresponding to the at least two images, the target image being an image of the target object as seen from the observer's position.

In some examples, the target image is rendered onto a first preset texture in the VR picture, where the first preset texture is based on the billboard patch technique.
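
One common formulation of the billboard idea, sketched here as an assumption since the patent does not spell out the math, rotates the textured quad so that it always faces the observer:

```python
import numpy as np

def billboard_rotation(quad_pos, eye_pos, world_up=np.array([0.0, 1.0, 0.0])):
    """Build a rotation matrix that turns a textured quad toward the viewer,
    so the flat target image always faces the observer."""
    forward = eye_pos - quad_pos
    forward /= np.linalg.norm(forward)
    right = np.cross(world_up, forward)
    right /= np.linalg.norm(right)
    up = np.cross(forward, right)
    return np.column_stack((right, up, forward))  # columns: local x, y, z axes
```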

In some examples, a first object in the VR picture is determined according to the position information; a three-dimensional model corresponding to the first object is determined from a 3D model library; and the 3D model is rendered onto a second preset texture of the VR picture. Here the VR picture includes the target object and the first object other than the target object, and the second preset texture may be the background of the VR picture.

In some examples, the plurality of videos are versions of their original videos that have been made transparent so as to include only the target object, and the target object may be a character.

In some examples, in the step of determining the target image, the left-eye position information and the right-eye position information are averaged to obtain an average position; at least two videos are selected from the plurality of pre-shot videos according to the average position, the videos having been shot from different shooting positions; one video frame is selected from each of the at least two videos as an image; and the target image is computed from these images according to the spatial relationship between the average position and the shooting positions of the at least two videos.

Specifically, after the average position is obtained, at least one video is selected on each side (left and right) of the average position, and from each selected video a video frame corresponding to the time information is selected as an image, where the time information may be the current time information of the VR picture; the target image is then obtained by interpolating between these images according to the spatial relationship between the average position and the shooting positions of the at least two videos.

In some examples, in the step of determining the target image, the left-eye position information and the right-eye position information are averaged to obtain an average position; a target video is selected from the plurality of pre-shot videos according to the average position, the distance between the shooting position of the target video and the average position being the smallest among the spatial distances between the shooting positions of the pre-shot videos and the average position; and one video frame is selected from the target video as the target image.

Specifically, after the average position is obtained, the video whose shooting position is closest to the average position is selected from the pre-shot videos as the target video, and the video frame corresponding to the time information is selected from the target video as the target image, where the time information may be the current time information of the VR picture.
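
A minimal sketch of this nearest-position selection, assuming each pre-shot video carries its shooting position (the data layout is an assumption):

```python
import numpy as np

def nearest_video(videos, avg_pos):
    """Select the pre-shot video whose shooting position is closest to the
    average of the left-eye and right-eye positions.

    videos: list of (shoot_pos, frames) pairs, shoot_pos being a (3,) array.
    """
    return min(videos, key=lambda video: np.linalg.norm(video[0] - avg_pos))
```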

Step 205: Display the VR picture, and render the target image in the VR picture.

In some examples, the left-eye picture is determined according to the left-eye position information and the left-eye orientation information, and the right-eye picture according to the right-eye position information and the right-eye orientation information; the left-eye picture is rendered in real time according to the left-eye orientation information and the target image, with the target image rendered in the left-eye picture; and the right-eye picture is rendered in real time according to the right-eye orientation information and the target image, with the target image rendered in the right-eye picture.
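
A hedged sketch of that per-eye loop; the scene and picture objects and their methods are placeholders for whatever rendering engine is actually used, not an API from the patent:

```python
def render_stereo(scene, left_eye, right_eye, target_image):
    """Render one stereo frame: each eye gets its own view of the scene,
    and the target image is composited into both pictures."""
    left_picture = scene.render(eye_pos=left_eye.pos, eye_dir=left_eye.direction)
    left_picture.draw_billboard(target_image)
    right_picture = scene.render(eye_pos=right_eye.pos, eye_dir=right_eye.direction)
    right_picture.draw_billboard(target_image)
    return left_picture, right_picture
```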

The technical solution of the embodiments of the present application can determine a target object in the VR picture to be displayed according to the observer's position information; obtain at least two pre-stored images of the target object, captured from different shooting positions; generate a target image from those images according to the position information and the corresponding shooting positions, the target image being an image of the target object as seen from the observer's position; and display the VR picture with the target image rendered in it. Such a VR picture reproduces the real scene faithfully and, while keeping the whole VR scene interactive, gives the user a genuine sense of presence, thereby improving the user experience.
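
Putting steps 201 through 205 together, the per-frame flow might be sketched as follows. Every helper here is hypothetical (frame_at and render_stereo are sketched above, interpolate_view alongside FIG. 7 below); the patent describes the flow, not an API:

```python
def graphics_processing_frame(tracker, scene, clips_for, t):
    """One frame of the method, mirroring steps 201-205."""
    # Step 201: obtain the observer's position information.
    left_eye, right_eye = tracker.eye_positions()
    avg_pos = (left_eye.pos + right_eye.pos) / 2
    # Step 202: determine the target object in the VR picture to be displayed.
    target = scene.lookup_target_object(avg_pos)
    # Step 203: fetch pre-stored images shot from different positions,
    # one frame per clip at the picture's current time t.
    (c1, f1), (c2, f2) = [(clip.shoot_pos, frame_at(clip.frames, clip.fps, t))
                          for clip in clips_for(target)[:2]]
    # Step 204: blend the frames into a view-dependent target image.
    target_image = interpolate_view(c1, f1, c2, f2, avg_pos)
    # Step 205: display the VR picture with the target image rendered in it.
    return render_stereo(scene, left_eye, right_eye, target_image)
```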

FIG. 3 is a schematic flowchart of a graphics processing method 300 according to an embodiment of the present application. The method 300 is executed by a VR system 30, which may include a pose collection device 32, a processing device 34, and a display device 36. The method 300 may include the following steps.

S310: Collect the user's current pose information. It should be understood that S310 may be performed by the pose collection device 32.

S320: Obtain the user's left-eye position information, right-eye position information, left-eye orientation information, and right-eye orientation information according to the pose information.

S330: Determine a target 3D model from the 3D model library according to the left-eye position information and the right-eye position information.

S340: Determine a target video according to the left-eye position information, the right-eye position information, and a plurality of pre-shot videos, where the videos were shot from different shooting positions.

S350: Render the left-eye picture in real time according to the left-eye orientation information, the target 3D model, and the target video.

S360: Render the right-eye picture in real time according to the right-eye orientation information, the target 3D model, and the target video.

It should be understood that S320 to S360 may be performed by the processing device 34.

S370: Display the left-eye picture and the right-eye picture, which together form a VR scene when displayed; the VR scene includes the image of the target 3D model and the image of the target video.

It should be understood that S370 may be performed by the display device 36.

The graphics processing method of this embodiment collects the user's pose information to determine the positions of the user's left and right eyes, determines the target 3D model from those positions, determines the target video from a plurality of pre-shot videos, and renders the left-eye and right-eye pictures separately by real-time rendering, thereby displaying a VR scene that includes the image of the target 3D model and the image of the target video. The target video reproduces the real scene faithfully and, while keeping the whole VR scene interactive, gives the user a genuine sense of presence, thereby improving the user experience.

It should be understood that the VR system 30 generally includes a VR head-mounted display device, and the display device 36 may be integrated into that head-mounted device. The processing device 34 and/or the pose collection device 32 of this embodiment may be integrated into the VR head-mounted display device or deployed separately from it; the VR head-mounted display device may be, for example, VR glasses or a VR helmet. The pose collection device 32, the processing device 34, and the display device 36 may communicate with one another by wired or wireless communication, which the embodiments do not limit.

The steps of the graphics processing method 300 and the components of the VR system 30 are described in detail below.

In this embodiment of the present application, in S310 the pose collection device 32 collects the user's current pose information.

The pose collection device 32 may include sensors in a VR head-mounted display device such as VR glasses or a VR helmet. The sensors may include photosensitive sensors, such as infrared sensors and camera lenses; force-sensitive sensors, such as gyroscopes; magnetically sensitive sensors, such as brain-computer interfaces; and acoustically sensitive sensors; the embodiments do not limit the specific sensor types. The sensors in the VR head-mounted display device can collect at least one of the user's current head pose information, eye tracking information, skin perception information, muscle electrical stimulation information, and brain signal information. The processing device 34 can then determine the user's left-eye position information, right-eye position information, left-eye orientation information, and right-eye orientation information from these data.

In a specific example, in a VR scene the user's viewing angle refers to the azimuth of the user's line of sight in the virtual space, including the position and orientation of the eyes. In the virtual space, the user's viewing angle can change as the user's head pose changes in real space. In one specific case, the change of the user's viewing angle in the virtual space has the same speed and direction as the change of the user's head pose in real space. The user's viewing angle comprises a left-eye viewing angle and a right-eye viewing angle, that is, the user's left-eye position, right-eye position, left-eye orientation, and right-eye orientation.

In this example, the sensors on the VR head-mounted display device worn by the user can sense head rotation, translation, and other movements and pose changes while the device is in use, and compute the relevant head pose information (such as the speed and angle of the motion); from this information the processing device 34 can determine the user's left-eye position information, right-eye position information, left-eye orientation information, and right-eye orientation information.

The pose collection device 32 may also include positioners, handheld controllers, motion-sensing gloves, motion-sensing clothing, and motion platforms such as treadmills, all used to collect the user's pose information, which the processing device 34 then processes into the user's left-eye position information, right-eye position information, left-eye orientation information, and right-eye orientation information. Through the handheld controllers, motion-sensing gloves, motion-sensing clothing, and treadmill, the pose collection device 32 can collect the user's limb pose information, trunk pose information, muscle electrical stimulation information, skin perception information, motion perception information, and so on.

In a specific example, one or more positioners may be provided on the VR head-mounted display device to monitor the position (possibly including height) and orientation of the user's head. In this case, a positioning system may be installed in the real space where the user wears the device; the positioning system communicates with the positioner(s) on the device to determine pose information such as the user's specific position (possibly including height) and orientation in that real space. The processing device 34 can then convert this pose information into the corresponding position (possibly including height) and orientation of the user's head in the virtual space; that is, the processing device 34 obtains the user's left-eye position information, right-eye position information, left-eye orientation information, and right-eye orientation information.

It should be understood that the left-eye and right-eye position information of the embodiments may be expressed as coordinate values in a coordinate system, and the left-eye and right-eye orientation information as vectors in that coordinate system; the embodiments are not limited to this.

It should also be understood that after collecting the pose information, the pose collection device 32 sends it to the processing device 34 through wired or wireless communication, which is not elaborated here.

It should also be understood that the embodiments may collect the user's pose information in other ways, and may obtain and/or express the left-eye position information, right-eye position information, left-eye orientation information, and right-eye orientation information in other ways; the embodiments do not limit the specific manner.

In the design of a VR scene, for example in the game design of a VR scene, each position is designed to correspond to a group of objects. In a specific example, the objects corresponding to the user's left-eye position LE and right-eye position RE are shown in FIG. 4. The left-eye position corresponds to object L41, object 43, object 44, object 46, and character 42; the right-eye position corresponds to object R45, object 43, object 44, object 46, and character 42. The character 42 is the object whose realism is to be improved, that is, the target object.

Specifically, determining which object in the object group corresponding to the user's left-eye or right-eye position is the target object may be based on the design of the VR scene. For example, each scene, or a group of scenes, may have a target object list, and when generating the VR scene the target objects in it are found according to that list. As another example, the game design of a VR scene may stipulate that characters in the near field (within a certain range of the user) are target objects, that other near-field objects are not, and that no far-field object (beyond a certain range of the user) is. Determining the target objects in a scene may be performed by the processing device 34, for example by its CPU; the embodiments are not limited in this respect.

It should be understood that, in a VR scene, the objects other than the target object may have 3D models generated in advance through 3D modeling and stored in a 3D model library. Specifically, the 3D models of object L41, object 43, object 44, object R45, and object 46 shown in FIG. 4 are all stored in the 3D model library. After obtaining the left-eye and right-eye position information, the processing device 34 (for example, its CPU) determines the target 3D models from the library, namely the 3D models of objects L41, 43, 44, R45, and 46, for use in subsequent rendering. Of course, the target 3D models may also be determined in other ways, which the embodiments do not limit.

The target object in the VR scene, such as the character 42 in the VR scene of FIG. 4, is generated from a plurality of pre-shot videos, which are videos that include the target object, shot from different shooting positions.

Specifically, assuming the target object is the character 42, this embodiment pre-shoots multiple videos of the character 42 from multiple shooting positions. FIG. 5 is a schematic diagram of the pre-shot scene. As shown in FIG. 5, the scene to be shot includes the character 42, an object 52, and an object 54; the scene to be shot should be as close as possible to the final displayed VR scene to increase realism. Multiple cameras can be placed in the horizontal direction to film the scene from shooting positions C1, C2, and C3, yielding original videos of the character at different shooting positions, as shown in FIG. 6.

It should be understood that the pre-shot videos may be filmed on a circle of a certain radius around the target object. The more densely the shooting positions are sampled on that circle, the greater the probability of selecting one that matches or approximates the user's left-eye or right-eye position, and the more realistic the finally selected or computed target video will appear when placed into the VR scene.

Furthermore, the shooting positions of the pre-shot videos need not lie on a straight line or on a circle of a certain radius around the target object; they can also form a plane or a curved surface, or even occupy arbitrary positions in three-dimensional space, thereby enabling 360-degree panoramic capture.

In this embodiment, the plurality of videos may be versions of the original videos that have been made transparent so as to include only the target object. Specifically, the character 42 can be separated from the background objects 52 and 54 in the three videos shot from the three shooting positions, yielding three videos that contain only the character 42. The three videos are filmed at the same time and have the same duration.

Optionally, in this embodiment the transparency processing may be based on the alpha transparency technique. Specifically, if the 3D environment of the VR scene allows each pixel to carry an alpha value recording its transparency, objects can have different degrees of transparency. In this embodiment, the target object (character 42) in the original videos can be processed as opaque, and the background objects 52 and 54 as transparent.
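
A minimal sketch of such alpha processing; chroma keying is used here as a stand-in assumption, since the patent specifies only that background pixels end up transparent, not how the matte is produced:

```python
import numpy as np

def key_out_background(frame_rgb, key_color, tol=30):
    """Turn an RGB video frame into RGBA: pixels close to the key color
    become fully transparent (the background), everything else opaque."""
    dist = np.linalg.norm(frame_rgb.astype(int) - np.array(key_color), axis=-1)
    alpha = np.where(dist < tol, 0, 255).astype(np.uint8)
    return np.dstack((frame_rgb, alpha))
```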

In one specific scheme, S340, determining the target video according to the left-eye position information, the right-eye position information, and a plurality of pre-shot videos, may include: averaging the left-eye position information and the right-eye position information to obtain an average position; and selecting the target video from the plurality of videos according to the average position, the shooting position of the target video being the closest, among all the shooting positions of the plurality of videos, to the average position.

It should be understood that in the embodiments of the present application, the left-eye position, the right-eye position, and the shooting positions can be uniformly expressed as coordinates of the virtual space in the VR scene, for example coordinates in a three-axis (x, y, z) Cartesian system or spherical coordinates. They may also be expressed in other forms, which the embodiments do not limit.

In this scheme, the left-eye and right-eye position information is averaged to obtain the average position. For example, in a three-axis coordinate system with the left eye at (x1, y1, z1) and the right eye at (x2, y2, z2), the average position is ((x1+x2)/2, (y1+y2)/2, (z1+z2)/2). The video whose shooting position is closest to the average position is selected from the plurality of videos as the target video.

In the case where the shooting positions lie on a circle of a certain radius around the target object, "the shooting position of the target video is closest to the average position" can be understood as requiring the distance between the shooting position (xt, yt, zt) of the target video and the average position ((x1+x2)/2, (y1+y2)/2, (z1+z2)/2) to be smaller than a preset threshold, that is, sufficiently small.

In the case where the shooting positions do not lie on a circle of a certain radius around the target object, "the shooting position of the target video is closest to the average position" can be understood as follows: among the angles between the segment from the average position to the target object and the segments from each shooting position to the target object, the angle for the target video's shooting position is the smallest.
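
A sketch of that angular criterion (names are illustrative; positions are 3-vectors in virtual-space coordinates):

```python
import numpy as np

def nearest_by_angle(videos, avg_pos, target_pos):
    """Select the video whose shooting position makes the smallest angle,
    measured at the target object, with the average eye position."""
    def angle(shoot_pos):
        a = avg_pos - target_pos
        b = shoot_pos - target_pos
        cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
        return np.arccos(np.clip(cos, -1.0, 1.0))
    return min(videos, key=lambda video: angle(video[0]))
```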

In another specific scheme, S340 may include: averaging the left-eye and right-eye position information to obtain an average position; selecting at least two videos from the plurality of videos according to the average position; extracting from each of the at least two videos the video frame corresponding to the current moment; and interpolating between the at least two video frames, according to the average position and the shooting positions of the at least two videos, to obtain the target video.

In this scheme, at least one shooting position may be selected on each side (left and right) of the average position of the user's left and right eyes, and the videos shot from those positions are selected from the plurality of videos as references for computing the target video. The video frames corresponding to the same moment in the at least two videos are extracted and interpolated to obtain the target video.

In the case where the shooting positions lie on a circle of a certain radius around the target object, selecting at least two videos from the plurality may mean selecting the at least two videos whose shooting positions have the smallest distances to the average position ((x1+x2)/2, (y1+y2)/2, (z1+z2)/2), with at least one shooting position on the left of the average position and at least one on the right.

In the case where the shooting positions do not lie on such a circle, selecting at least two videos may mean selecting those whose shooting-position-to-target segments make the smallest angles with the average-position-to-target segment, again with at least one shooting position on each side of the average position.
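
A sketch of the bracketing selection (assumptions: shooting positions can be ordered by horizontal angle around the target object, and at least one lies on each side of the average position):

```python
import numpy as np

def bracketing_videos(videos, avg_pos, target_pos):
    """Pick one reference video on each side of the average eye position,
    ordering positions by their horizontal angle around the target object."""
    def azimuth(pos):
        d = pos - target_pos
        return np.arctan2(d[2], d[0])   # angle in the horizontal (x-z) plane
    view = azimuth(avg_pos)
    left = [v for v in videos if azimuth(v[0]) < view]
    right = [v for v in videos if azimuth(v[0]) >= view]
    closest = lambda side: min(side, key=lambda v: abs(azimuth(v[0]) - view))
    return closest(left), closest(right)
```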

It should be understood that in the embodiments of the present application, the reference videos may also be selected according to other criteria, which the embodiments do not limit.

It should also be understood that in the embodiments, videos shot from different shooting positions represent different observation positions of the target object (for example, the character 42). In other words, the frames of the three videos in FIG. 6 that correspond to the same physical moment are the images seen from different observation positions. The three shooting angles correspond to the three shooting positions C1, C2, and C3, respectively.

It should be understood that in the embodiments, instead of pre-shooting multiple videos, multiple groups of photos (or images) of the target object may be pre-shot from multiple shooting positions. According to the relationship between the left-eye and right-eye positions (or the average position) and the multiple shooting positions, at least two images corresponding to at least two shooting positions are found among the groups, and the target image is obtained by interpolating between them. The specific interpolation algorithm is described in detail below.

FIG. 7 is a schematic diagram of determining a target video according to an embodiment of the present application. According to the average position, at least two videos are selected from the multiple videos; the video frame corresponding to the relevant moment is extracted from each of the at least two videos; and the at least two video frames are interpolated according to the average position and the shooting positions of the at least two videos to obtain the target video. The specific process may be as shown in FIG. 7.

When the user observes the VR scene, the observation position may change; for example, when facing the VR scene, the user may move in the left-right direction. The three shooting positions are C1, C2 and C3. C1, C2 and C3 may be expressed as coordinates in a three-dimensional rectangular coordinate system, as coordinates in a spherical coordinate system, or in other ways, which is not limited by the embodiments of the present application. According to the user's left-eye position information and right-eye position information, the average observation position Cview can be determined. As shown in FIG. 7, the average position Cview lies between C1 and C2. When determining the target video, because Cview lies between C1 and C2, the videos shot in advance at shooting positions C1 and C2 are selected as references. When generating a video frame (image) of the target video, the frames I1 and I2 that the videos corresponding to C1 and C2 contain at the same moment are taken out, and the two frames I1 and I2 are interpolated, for example linearly. The interpolation weights depend on the distances from the average position Cview to C1 and to C2. Writing |A - B| for the spatial distance between two positions A and B, the output frame of the target video is Iout = I1 * (1 - |C1 - Cview| / |C1 - C2|) + I2 * (1 - |C2 - Cview| / |C1 - C2|); the two weights sum to 1 when Cview lies on the segment between C1 and C2.
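
As a minimal sketch of this linear blend (the function and variable names are assumptions for illustration, not the disclosed implementation), the per-frame computation can be written as follows; when Cview coincides with C1 the first weight becomes 1 and the output frame equals I1, as expected:

import numpy as np

def interpolate_frames(i1, i2, c1, c2, c_view):
    # Blend frame i1 (shot at position c1) with frame i2 (shot at c2)
    # for a viewer at c_view lying between c1 and c2. Frames are float
    # arrays of identical shape; positions are 3-vectors.
    c1, c2, c_view = np.asarray(c1), np.asarray(c2), np.asarray(c_view)
    d12 = np.linalg.norm(c1 - c2)
    w1 = 1.0 - np.linalg.norm(c1 - c_view) / d12
    w2 = 1.0 - np.linalg.norm(c2 - c_view) / d12  # w1 + w2 == 1 on the segment
    return w1 * i1 + w2 * i2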

It should be understood that only movement of the user's observation position in the left-right direction has been discussed above. If the user's observation position moves forward or backward, then, because this happens in a 3D VR scene, the person the observer sees naturally appears larger when near and smaller when far. Strictly speaking, the displayed angle should also change, but the effect of that change is small, and ordinary users will not notice it. Moreover, in typical scenes the user mostly moves forward, backward, left and right, and rarely moves over a large range in the vertical direction, so the distortion perceived by the user in a target video determined according to the method of the embodiments of the present application is also very small.

It should be understood that the embodiments of the present application are described taking a person as the target object. Of course, the target object may also be an animal for which realism is required, or even a building, a plant and so on, which is not limited by the embodiments of the present application.

Optionally, in the embodiments of the present application, S350, rendering the left-eye picture in real time according to the left-eye orientation information, the target three-dimensional model and the target video, may include: rendering the target three-dimensional model onto a first texture according to the left-eye orientation information; and rendering the target video onto a second texture according to the left-eye orientation information, where the first texture may be the background of the left-eye picture and the second texture is based on the billboard patch technique. S360, rendering the right-eye picture in real time according to the right-eye orientation information, the target three-dimensional model and the target video, may include: rendering the target three-dimensional model onto a third texture according to the right-eye orientation information; and rendering the target video onto a fourth texture according to the right-eye orientation information, where the third texture may be the background of the right-eye picture and the fourth texture is based on the billboard patch technique.

The process of rendering the left-eye picture and the right-eye picture in the embodiments of the present application is described in detail below with reference to FIG. 8. As described above, the processing apparatus 34 (for example, its CPU) has determined the target three-dimensional model in S330 and the target video in S340. The processing apparatus 34 (for example, its GPU) determines the left-eye picture to be presented according to the left-eye orientation information, and determines the right-eye picture to be presented according to the right-eye orientation information. For example, in the scene shown in FIG. 4, according to the left-eye orientation information (facing person 42), it is determined that object L41, object 43, object 44 and person 42 appear in the left-eye picture; according to the right-eye orientation information (facing person 42), it is determined that object 43, object 44, object R45 and person 42 appear in the right-eye picture.

The processing apparatus 34 (for example, its GPU) renders the target three-dimensional models, namely object L41, object 43 and object 44, onto the first texture 82 of the left-eye picture L800, and renders the target video onto the second texture 84 of the left-eye picture L800; it renders the target three-dimensional models, namely object 43, object 44 and object R45, onto the third texture 86 of the right-eye picture R800, and renders the target video onto the fourth texture 88 of the right-eye picture R800.

Specifically, for each of the left-eye picture and the right-eye picture, a billboard patch may be placed at the position of the target object in the picture, and the target video is presented on the billboard patch. The billboard technique is a fast-drawing method in the field of computer graphics. In situations with high real-time requirements, such as 3D games, the billboard technique can greatly speed up drawing and thereby improve the smoothness of the 3D game picture. The billboard technique represents an object in a 3D scene with a 2D surface that always faces the user.

Specifically, the billboard patch may have a tilt angle in the left-eye picture, and the specific parameters of that tilt angle may be calculated from the left-eye position information; likewise, the billboard patch may have a tilt angle in the right-eye picture, and the specific parameters of that tilt angle may be calculated from the right-eye position information.
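
By way of illustration only (a sketch under assumed names; the disclosure does not fix a particular parameterization), such a per-eye tilt can be derived from the eye position, so that each eye's picture rotates the patch by its own angle:

import math

def billboard_yaw(eye_pos, board_pos):
    # Yaw angle (radians, about the vertical axis) that turns the
    # billboard patch at board_pos so that its normal points at eye_pos.
    dx = eye_pos[0] - board_pos[0]
    dz = eye_pos[2] - board_pos[2]
    return math.atan2(dx, dz)

# Computed separately for each eye:
# yaw_left = billboard_yaw(left_eye_pos, board_pos)
# yaw_right = billboard_yaw(right_eye_pos, board_pos)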

In fact, because the VR scene is rendered in real time, at any moment the video frame obtained by the interpolation described above can be regarded as being presented at the position of the target object. Over a continuous period of scene changes, this is equivalent to the video being played on the billboard patch.

As shown in FIG. 8, a billboard patch is placed at the position corresponding to the target object, and each frame of the video is drawn as a texture onto the billboard patch, so that every frame of the video always faces the user.

It should be understood that, when rendering the left-eye picture and the right-eye picture, the depth-buffer technique may be combined with the billboard technique. The depth-buffer technique helps the target object form correct occlusion relationships and size proportions with other objects according to their distances. In the embodiments of the present application, other techniques may also be used to render the target video, which is not limited by the embodiments of the present application.
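
A minimal sketch of how a depth buffer produces that occlusion (illustrative only; a real GPU pipeline performs this test per fragment, and every name here is an assumption) is the following software composition, where each layer supplies a color image and a depth image holding +inf wherever the layer draws nothing:

import numpy as np

def compose_with_depth(layers, h, w):
    # layers: iterable of (color HxWx3, depth HxW) pairs, for example the
    # rasterized scene geometry followed by the billboard patch.
    color = np.zeros((h, w, 3), np.float32)
    depth = np.full((h, w), np.inf, np.float32)
    for layer_color, layer_depth in layers:
        mask = layer_depth < depth          # depth test: the nearer fragment wins
        color[mask] = layer_color[mask]
        depth[mask] = layer_depth[mask]
    return color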

It should also be understood that the embodiments of the present application further provide a graphics processing method including steps S320 to S360, the method being executed by a processor.

It should also be understood that, in the various embodiments of the present invention, the magnitude of the sequence numbers of the above processes does not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present application.

The graphics processing method according to the embodiments of the present application has been described in detail above with reference to FIG. 1 to FIG. 8. The apparatus, the processor and the VR system according to the embodiments of the present application are described in detail below with reference to FIG. 9A, FIG. 9B and FIG. 10.

FIG. 9A is a schematic structural diagram of a computing device used for the graphics processing method in an embodiment of the present application. As shown in FIG. 9A, the computing device 900 includes a processor 901, a non-volatile computer-readable storage 902, an I/O interface 903, a display interface 904 and a network communication interface 905. These components communicate over a bus 906. In some embodiments of the present application, the storage 902 stores a plurality of program modules: an operating system 907, an I/O module 908, a communication module 909 and an image processing apparatus 900A. The processor 901 can read the computer-readable instructions corresponding to the image processing apparatus 900A from the storage 902 to implement the solutions provided by the embodiments of the present application.

In the embodiments of the present application, the I/O interface 903 may be connected to input/output devices. The I/O interface 903 sends input data received from an input device to the I/O module 908 for processing, and sends data output by the I/O module 908 to an output device.

The network communication interface 905 may deliver data received over the communication bus 906 to the communication module 909, and send data received from the communication module 909 out through the bus 906.

In some examples, the computer-readable instructions corresponding to the image processing apparatus 900A stored in the storage 902 may cause the processor 901 to: obtain position information of an observer; determine, according to the position information, a target object in a virtual reality (VR) picture to be displayed; obtain at least two pre-stored images corresponding to the target object, the at least two images being images shot from different shooting positions; generate a target image using the at least two images according to the position information and the shooting positions corresponding to the at least two images, the target image being an image of the target object corresponding to the position of the observer; and display the VR picture and render the target image in the VR picture.

In some examples, the instructions may cause the processor 901 to: determine, according to time information of the VR picture to be displayed, the video frame corresponding to the time information in each of a plurality of pre-shot videos as the image.

In some examples, the instructions may cause the processor 901 to: render the target image onto a first preset texture in the VR picture, where the first preset texture is based on the billboard patch technique.

In some examples, the instructions may cause the processor 901 to: obtain the observer's left-eye position information, right-eye position information, left-eye orientation information and right-eye orientation information, where the VR picture includes a left-eye picture and a right-eye picture; determine the left-eye picture according to the left-eye position information and the left-eye orientation information; determine the right-eye picture according to the right-eye position information and the right-eye orientation information; render the left-eye picture in real time according to the left-eye orientation information and the target image, and render the target image in the left-eye picture; and render the right-eye picture in real time according to the right-eye orientation information and the target image, and render the target image in the right-eye picture.

In some examples, the instructions may cause the processor 901 to: determine a first object in the VR picture according to the position information; determine the target three-dimensional model corresponding to the first object from a three-dimensional model library; and render the three-dimensional model onto a second preset texture of the VR picture. In some examples, the instructions may cause the processor 901 to: average the left-eye position information and the right-eye position information to obtain an average position; select at least two videos from the plurality of pre-shot videos according to the average position, the multiple videos having been shot from different shooting positions; select one video frame from each of the at least two videos as the images; and compute the target image from those images according to the spatial relationship between the average position and the shooting positions of the at least two videos.

In some examples, the instructions may cause the processor 901 to: average the left-eye position information and the right-eye position information to obtain an average position; select a target video from the plurality of pre-shot videos according to the average position, where the distance from the target video's shooting position to the average position is the smallest among the spatial distances from the shooting positions of the pre-shot videos to the average position; and select one video frame from the target video and use that video frame as the target image.
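
As a minimal sketch of these two steps (all names are assumptions added for illustration; math.dist requires Python 3.8 or later):

import math

def pick_nearest_video(left_eye, right_eye, shot_positions):
    # Average the two eye positions, then return the index of the
    # pre-shot video whose shooting position is spatially closest.
    avg = tuple((l + r) / 2.0 for l, r in zip(left_eye, right_eye))
    return min(range(len(shot_positions)),
               key=lambda i: math.dist(shot_positions[i], avg))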

In some examples, the plurality of videos are videos that include only the target object, obtained by applying transparency processing to the original videos of the plurality of videos, and the target object is a person.

In some examples, the left-eye position information, the right-eye position information, the left-eye orientation information and the right-eye orientation information are determined according to the collected current posture information of the user.

In some examples, the posture information includes at least one of head posture information, limb posture information, trunk posture information, muscle electrical stimulation information, eye-tracking information, skin perception information, motion perception information and brain signal information.

FIG. 9B is a schematic block diagram of a processor 900B according to an embodiment of the present application. The processor 900B may correspond to the processing apparatus 34 described above. As shown in FIG. 9B, the processor 900B may include an obtaining module 910, a computing module 920 and a rendering module 930.

The obtaining module 910 is configured to obtain the user's left-eye position information, right-eye position information, left-eye orientation information and right-eye orientation information.

The computing module 920 is configured to determine a target three-dimensional model from a three-dimensional model library according to the left-eye position information and the right-eye position information obtained by the obtaining module; the computing module 920 is further configured to determine a target video according to the left-eye position information, the right-eye position information and a plurality of pre-shot videos, where the plurality of videos are videos shot from different shooting positions.

The rendering module 930 is configured to render the left-eye picture in real time according to the left-eye orientation information, the target three-dimensional model and the target video; the rendering module 930 is further configured to render the right-eye picture in real time according to the right-eye orientation information, the target three-dimensional model and the target video. When displayed on a virtual reality (VR) display, the left-eye picture and the right-eye picture form a VR scene, and the VR scene includes the image of the target three-dimensional model and the image of the target video.

The graphics processing apparatus of the embodiments of the present application determines the target three-dimensional model according to the position information of the user's left and right eyes, determines the target video from a plurality of pre-shot videos, and renders the left-eye picture and the right-eye picture separately by real-time rendering, thereby displaying a VR scene that includes the image of the target three-dimensional model and the image of the target video. The target video can faithfully present a real scene, which provides the user with a genuine sense of presence while keeping the entire VR scene interactive, and can thus improve the user experience.

Optionally, as an embodiment, the rendering module 930 may be specifically configured to: render the target three-dimensional model onto a first texture according to the left-eye orientation information; render the target video onto a second texture according to the left-eye orientation information, where the second texture is based on the billboard patch technique; render the target three-dimensional model onto a third texture according to the right-eye orientation information; and render the target video onto a fourth texture according to the right-eye orientation information, where the fourth texture is based on the billboard patch technique.

Optionally, as an embodiment, the computing module 920 determining the target video according to the left-eye position information, the right-eye position information and the plurality of pre-shot videos may include: averaging the left-eye position information and the right-eye position information to obtain an average position; selecting at least two videos from the plurality of videos according to the average position; extracting the video frame corresponding to the relevant moment from each of the at least two videos; and interpolating the at least two video frames according to the average position and the shooting positions of the at least two videos to obtain the target video.

Optionally, as an embodiment, the computing module 920 determining the target video according to the left-eye position information, the right-eye position information and the plurality of pre-shot videos may include: averaging the left-eye position information and the right-eye position information to obtain an average position; and selecting the target video from the plurality of videos according to the average position, where the shooting position of the target video is the closest to the average position among all the shooting positions of the plurality of videos.

Optionally, as an embodiment, the plurality of videos are videos that include only the target object, obtained by applying transparency processing to the original videos.

Optionally, as an embodiment, the target object is a person.

Optionally, as an embodiment, the left-eye position information, the right-eye position information, the left-eye orientation information and the right-eye orientation information obtained by the obtaining module 910 are determined according to the collected current posture information of the user.

Optionally, as an embodiment, the posture information includes at least one of head posture information, limb posture information, trunk posture information, muscle electrical stimulation information, eye-tracking information, skin perception information, motion perception information and brain signal information.

It should be understood that the processor 900B may be a CPU or a GPU. The processor 900B may also combine the functions of a CPU and those of a GPU; for example, the functions of the obtaining module 910 and the computing module 920 (S320 to S340) may be executed by the CPU, while the functions of the rendering module 930 (S350 and S360) may be executed by the GPU, which is not limited by the embodiments of the present application.

FIG. 10 is a schematic diagram of a VR system according to an embodiment of the present application. FIG. 10 shows a VR helmet 1000, which may include a head tracker 1010, a CPU 1020, a GPU 1030 and a display 1040. The head tracker 1010 corresponds to the posture collection apparatus, the CPU 1020 and the GPU 1030 correspond to the processing apparatus, and the display 1040 corresponds to the display apparatus; the functions of the head tracker 1010, the CPU 1020, the GPU 1030 and the display 1040 are not repeated here.

It should be understood that the head tracker 1010, the CPU 1020, the GPU 1030 and the display 1040 shown in FIG. 10 are integrated in the VR helmet 1000. There may also be other posture collection apparatuses outside the VR helmet 1000 that collect the user's posture information and send it to the CPU 1020 for processing, which is not limited by the embodiments of the present application.

FIG. 11 is a schematic diagram of another VR system according to an embodiment of the present application. FIG. 11 shows a VR system composed of VR glasses 1110 and a host 1120. The VR glasses 1110 may include an angle sensor 1112, a signal processor 1114, a data transmitter 1116 and a display 1118. The angle sensor 1112 corresponds to the posture collection apparatus; the host 1120, which includes a CPU and a GPU, corresponds to the processing apparatus and computes and renders the pictures; and the display 1118 corresponds to the display apparatus. The angle sensor 1112 collects the user's posture information and sends it to the host 1120 for processing; the host 1120 computes and renders the left-eye picture and the right-eye picture and sends them to the display 1118 for display. The signal processor 1114 and the data transmitter 1116 are mainly used for communication between the VR glasses 1110 and the host 1120.

There may also be other posture collection apparatuses outside the VR glasses 1110 that collect the user's posture information and send it to the host 1120 for processing, which is not limited by the embodiments of the present application.

The virtual reality system of the embodiments of the present application collects the user's posture information to determine the positions of the user's left and right eyes, determines the target three-dimensional model according to the position information of the left and right eyes, determines the target video from a plurality of pre-shot videos, and renders the left-eye picture and the right-eye picture separately by real-time rendering, thereby displaying a VR scene that includes the image of the target three-dimensional model and the image of the target video. The target video can faithfully present a real scene, which provides the user with a genuine sense of presence while keeping the entire VR scene interactive, and can thus improve the user experience.

The embodiments of the present application further provide a computer-readable storage medium having instructions stored thereon; when the instructions are run on a computer, the computer is caused to execute the graphics processing method of the above method embodiments. Specifically, the computer may be the above VR system or a processor.

The embodiments of the present application further provide a computer program product including instructions; when a computer runs the instructions of the computer program product, the computer executes the graphics processing method of the above method embodiments. Specifically, the computer program product may run in a VR system or on a processor.

In the above embodiments, implementation may be wholly or partly by software, hardware, firmware or any combination thereof. When implemented by software, implementation may be wholly or partly in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions according to the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server or data center to another website, computer, server or data center by wired means (for example, coaxial cable, optical fiber or digital subscriber line (DSL)) or wireless means (for example, infrared, radio or microwave). The computer-readable storage medium may be any available medium accessible to the computer, or a data storage device such as a server or data center integrating one or more available media. The available medium may be a magnetic medium (for example, a floppy disk, a hard disk or a magnetic tape), an optical medium (for example, a digital video disc (DVD)) or a semiconductor medium (for example, a solid-state drive (SSD)), and so on.

It should be understood that the terms "first" and "second" and the various numeric labels referred to herein are merely distinctions made for convenience of description and are not intended to limit the scope of the present application.

It should be understood that the term "and/or" herein merely describes an association between associated objects and indicates that three relationships may exist; for example, A and/or B may mean: A alone, both A and B, or B alone. In addition, the character "/" herein generally indicates an "or" relationship between the objects before and after it.

A person of ordinary skill in the art may realize that the units and algorithm steps of the examples described with the embodiments disclosed herein can be implemented by electronic hardware, or by a combination of computer software and electronic hardware. Whether these functions are performed by hardware or by software depends on the particular application and the design constraints of the technical solution. Skilled artisans may use different methods to implement the described functions for each particular application, but such implementation should not be considered beyond the scope of the present application.

A person skilled in the art can clearly understand that, for convenience and brevity of description, for the specific working processes of the systems, apparatuses and units described above, reference may be made to the corresponding processes in the foregoing method embodiments, and details are not repeated here.

In the several embodiments provided in the present application, it should be understood that the disclosed systems, apparatuses and methods may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; for instance, the division of the units is merely a division by logical function, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. Furthermore, the mutual couplings, direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some ports, apparatuses or units, and may be electrical, mechanical or in other forms.

The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solutions of the embodiments.

In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit.

The above are merely specific implementations of the present application, but the protection scope of the present application is not limited thereto. Any variation or replacement readily conceivable by a person skilled in the art within the technical scope disclosed in the present application shall be covered by the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (34)

一種圖形處理方法,應用於計算設備,包括:獲取觀察者的位置訊息;根據所述位置訊息確定待展示的虛擬實境(Virtual Reality,VR)畫面中的目標物體;獲取預先儲存的所述目標物體對應的至少兩個圖像,所述至少兩個圖像為分別從不同的拍攝位置拍攝的圖像;根據所述位置訊息和所述至少兩個圖像對應的拍攝位置,利用所述至少兩個圖像生成目標圖像,所述目標圖像為所述觀察者的位置對應的所述目標物體的圖像;以及展示所述VR畫面,並在所述VR畫面中渲染所述目標圖像,其中獲取所述觀察者的所述位置訊息包括:獲取所述觀察者的左眼位置訊息、右眼位置訊息、左眼朝向訊息和右眼朝向訊息;其中,所述VR畫面包括左眼畫面和右眼畫面,所述展示所述VR畫面,並在所述VR畫面中渲染所述目標圖像,包括:根據所述左眼位置訊息和所述左眼朝向訊息確定所述左眼畫面;根據所述右眼位置訊息和所述右眼朝向訊息確定所述右眼畫面;根據所述左眼朝向訊息和所述目標圖像,實時(real time)渲染所述左眼畫面,並在所述左眼畫面中渲染所述目標圖像;以及根據所述右眼朝向訊息和所述目標圖像,實時渲染所述右眼畫面,並在所述右眼畫面中渲染所述目標圖像,其中根據所述位置訊息和所述至少兩個圖像對應的拍攝位置,利用所述至少兩個圖像生成所述目標圖像,包括:對所述左眼位置訊息和所述右眼位置訊息求平均值,得到平均位置;根據所述平均位置,從所述預先拍攝的多個視頻中選取出目標視頻,其中,所述目標視頻的拍攝位置與所述平均位置的距離是所述預先拍攝的多個視頻的拍攝位置與所述平均位置的空間距離中最小的;以及從所述目標視頻中選取一個視頻幀,並將所述視頻幀作為所述目標圖像。A graphics processing method applied to a computing device includes: obtaining position information of an observer; determining a target object in a virtual reality (VR) picture to be displayed according to the position information; and obtaining the target stored in advance At least two images corresponding to the object, the at least two images are images respectively taken from different shooting positions; according to the position information and the shooting positions corresponding to the at least two images, using the at least Two images generate a target image, the target image being an image of the target object corresponding to the position of the observer; and displaying the VR picture, and rendering the target picture in the VR picture Image, wherein obtaining the position information of the observer includes: obtaining left-eye position information, right-eye position information, left-eye orientation information, and right-eye orientation information of the observer; wherein the VR picture includes the left eye A picture and a right-eye picture, and displaying the VR picture, and rendering the target image in the VR picture include: determining the left-eye position information and the left-eye direction information according to The left-eye picture; determining the right-eye picture according to the right-eye position information and the right-eye orientation message; rendering the left-eye in real time according to the left-eye orientation message and the target image An eye picture, and rendering the target image in the left eye picture; and rendering the right eye picture in real time according to the right eye orientation information and the target image, and rendering in the right eye picture The target image, wherein generating the target image by using the at least two images according to the shooting information corresponding to the position information and the at least two images includes: The right eye position information is averaged to obtain an average position; a target video is selected from the plurality of pre-captured videos according to the average position, wherein the shooting position of the target video and the average position are The distance is the smallest of the spatial distance between the shooting position of the plurality of pre-shot videos and the average position; and selecting a video frame from the target video, and using the video frame as the target image. 
如申請專利範圍第1項所述之方法,其中獲取預先儲存的所述目標物體對應的至少兩個圖像包括:根據待展示的所述VR畫面的時間訊息,從預先拍攝的多個視頻中確定所述時間訊息對應的至少兩個視頻幀作為所述至少兩個圖像。The method according to item 1 of the scope of patent application, wherein obtaining at least two images corresponding to the target object stored in advance includes: from a plurality of videos shot in advance according to the time information of the VR picture to be displayed Determining at least two video frames corresponding to the time information as the at least two images. 如申請專利範圍第1項所述之方法,進一步包括:將所述目標圖像渲染到所述VR畫面中的第一預設紋理上,其中所述第一預設紋理是基於廣告牌(billboard)面片(patch)技術的。The method according to item 1 of the scope of patent application, further comprising: rendering the target image onto a first preset texture in the VR picture, wherein the first preset texture is based on a billboard (billboard ) Patch technology. 如申請專利範圍第1項所述之方法,進一步包括根據所述位置訊息,確定所述VR畫面中的第一物體;從三維模型庫中確定出所述第一物體對應的三維模型;以及將所述三維模型渲染到所述VR畫面的第二預設紋理上。The method according to item 1 of the scope of patent application, further comprising determining a first object in the VR frame according to the position information; determining a three-dimensional model corresponding to the first object from a three-dimensional model library; and The three-dimensional model is rendered onto a second preset texture of the VR picture. 如申請專利範圍第2項所述之方法,其中所述多個視頻是對所述多個視頻的原始視頻經過透明處理後的僅包括所述目標物體的視頻。The method according to item 2 of the scope of patent application, wherein the plurality of videos are videos including only the target object after the original videos of the plurality of videos are transparently processed. 如申請專利範圍第5項所述之方法,其中所述目標物體為人物。The method according to item 5 of the scope of patent application, wherein the target object is a person. 如申請專利範圍第1項所述之方法,其中所述左眼位置訊息、所述右眼位置訊息、所述左眼朝向訊息和所述右眼朝向訊息是根據所收集的用戶當前的姿態訊息確定的。The method according to item 1 of the scope of patent application, wherein the left-eye position information, the right-eye position information, the left-eye orientation information, and the right-eye orientation information are based on the collected current posture information of the user definite. 如申請專利範圍第7項所述之方法,其中所述姿態訊息包括頭部姿態訊息、四肢姿態訊息、軀幹姿態訊息、肌肉電刺激訊息、眼球跟蹤訊息、皮膚感知訊息、運動感知訊息和腦訊號訊息中的至少一種。The method according to item 7 of the scope of patent application, wherein the posture information includes head posture information, limb posture information, trunk posture information, muscle electrical stimulation information, eye tracking information, skin perception information, motion perception information, and brain signals At least one of the messages. 
一種圖形處理裝置,包括:處理器和儲存器,所述儲存器中儲存有計算機可讀指令,可以使所述處理器執行:獲取觀察者的位置訊息;根據所述位置訊息確定待展示的虛擬實境(Virtual Reality,VR)畫面中的目標物體;獲取預先儲存的所述目標物體對應的至少兩個圖像,所述至少兩個圖像為分別從不同的拍攝位置拍攝的圖像;根據所述位置訊息和所述至少兩個圖像對應的拍攝位置,利用所述至少兩個圖像生成目標圖像,所述目標圖像為所述觀察者的位置對應的所述目標物體的圖像;以及展示所述VR畫面,並在所述VR畫面中渲染所述目標圖像,其中所述指令可以使所述處理器:獲取所述觀察者的左眼位置訊息、右眼位置訊息、左眼朝向訊息和右眼朝向訊息;其中,所述VR畫面包括左眼畫面和右眼畫面;根據所述左眼位置訊息和所述左眼朝向訊息確定所述左眼畫面;根據所述右眼位置訊息和所述右眼朝向訊息確定所述右眼畫面;根據所述左眼朝向訊息和所述目標圖像,實時(real time)渲染所述左眼畫面,並在所述左眼畫面中渲染所述目標圖像;以及根據所述右眼朝向訊息和所述目標圖像,實時渲染所述右眼畫面,並在所述右眼畫面中渲染所述目標圖像,其中所述指令可以使所述處理器:對所述左眼位置訊息和所述右眼位置訊息求平均值,得到平均位置;根據所述平均位置,從所述預先拍攝的多個視頻中選取出目標視頻,其中,所述目標視頻的拍攝位置與所述平均位置的距離是所述預先拍攝的多個視頻的拍攝位置中與所述平均位置的空間距離中最小的;以及從所述目標視頻中選取一個視頻幀,並將所述視頻幀作為所述目標圖像。A graphics processing device includes a processor and a storage, and the storage stores computer-readable instructions that can cause the processor to perform: obtaining an observer's location information; and determining a virtual to be displayed according to the location information A target object in a virtual reality (VR) picture; acquiring at least two images corresponding to the target object stored in advance, the at least two images are images taken from different shooting positions respectively; according to A shooting position corresponding to the position information and the at least two images, and using the at least two images to generate a target image, where the target image is a map of the target object corresponding to the position of the observer And displaying the VR picture and rendering the target image in the VR picture, wherein the instruction may cause the processor to: obtain left-eye position information, right-eye position information of the observer, A left-eye orientation message and a right-eye orientation message; wherein the VR picture includes a left-eye picture and a right-eye picture; and determining the left-eye according to the left-eye position message and the left-eye orientation message Determine the right-eye picture according to the right-eye position information and the right-eye orientation information; render the left-eye picture in real time according to the left-eye orientation information and the target image, and Rendering the target image in the left-eye frame; and rendering the right-eye frame in real-time according to the right-eye orientation message and the target image; and rendering the target image in the right-eye frame. For example, the instructions may cause the processor to: average the left-eye position information and the right-eye position information to obtain an average position; and based on the average position, from the plurality of videos shot in advance Select a target video, wherein the distance between the shooting position of the target video and the average position is the smallest of the spatial distance from the shooting position of the plurality of pre-shot videos and the average position; and A video frame is selected from the target video, and the video frame is used as the target image. 如申請專利範圍第9項所述之裝置,其中所述指令可以使所述處理器:根據待展示的所述VR畫面的時間訊息,從預先拍攝的多個視頻中確定每個視頻中所述時間訊息對應的至少兩個視頻幀作為所述至少兩個圖像。The device according to item 9 of the scope of patent application, wherein the instruction may cause the processor to determine the description in each video from a plurality of pre-shot videos according to the time information of the VR picture to be displayed. At least two video frames corresponding to the time information are used as the at least two images. 
如申請專利範圍第9項所述之裝置,其中所述指令可以使所述處理器:將所述目標圖像渲染到所述VR畫面中的第一預設紋理上,其中,所述第一預設紋理是基於廣告牌(billboard)面片(patch)技術的。The device according to item 9 of the scope of patent application, wherein the instruction may cause the processor to: render the target image onto a first preset texture in the VR picture, wherein the first The preset texture is based on billboard patch technology. 如申請專利範圍第11項所述之裝置,其中所述指令可以使所述處理器:根據所述位置訊息,確定所述VR畫面中的第一物體;從三維模型庫中確定出所述第一物體對應的目標三維模型;以及將所述三維模型渲染到所述VR畫面的第二預設紋理上。The device according to item 11 of the scope of patent application, wherein the instructions enable the processor to: determine a first object in the VR frame according to the position information; and determine the first object from a three-dimensional model library. A target three-dimensional model corresponding to an object; and rendering the three-dimensional model onto a second preset texture of the VR picture. 如申請專利範圍第10項所述之裝置,其中所述多個視頻是對所述多個視頻的原始視頻經過透明處理後的僅包括所述目標物體的視頻。The device according to item 10 of the scope of patent application, wherein the plurality of videos are videos including only the target object after transparent processing of the original videos of the plurality of videos. 如申請專利範圍第13項所述之裝置,其中所述目標物體為人物。The device according to item 13 of the patent application scope, wherein the target object is a person. 如申請專利範圍第9項所述之裝置,其中所述左眼位置訊息、所述右眼位置訊息、所述左眼朝向訊息和所述右眼朝向訊息是根據所收集的所述用戶當前的姿態訊息確定的。The device according to item 9 of the scope of patent application, wherein the left-eye position information, the right-eye position information, the left-eye orientation information, and the right-eye orientation information are based on the collected current user information The gesture message is ok. 如申請專利範圍第15項所述之裝置,其中所述姿態訊息包括頭部姿態訊息、四肢姿態訊息、軀幹姿態訊息、肌肉電刺激訊息、眼球跟蹤訊息、皮膚感知訊息、運動感知訊息和腦訊號訊息中的至少一種。The device according to item 15 of the scope of patent application, wherein the posture information includes head posture information, limb posture information, trunk posture information, electrical muscle stimulation information, eye tracking information, skin perception information, motion perception information, and brain signals At least one of the messages. 如申請專利範圍第9至16項任一項所述之裝置,其中所述處理器包括中央處理器(Central Processing Unit,CPU)和圖形處理器(Graphics Processing Init,GPU)中的至少一種。The device according to any one of claims 9 to 16, wherein the processor includes at least one of a Central Processing Unit (CPU) and a Graphics Processing Init (GPU). 一種圖形處理方法,適應於計算設備,包括:收集觀察者當前的姿態訊息;根據所述姿態訊息,得到所述觀察者的位置訊息;根據所述位置訊息確定待展示的虛擬實境(Virtual Reality,VR)畫面中的目標物體;獲取預先儲存的所述目標物體對應的至少兩個圖像,所述至少兩個圖像為分別從不同的拍攝位置拍攝的圖像;根據所述位置訊息和所述至少兩個圖像對應的拍攝位置,利用所述至少兩個圖像生成目標圖像,所述目標圖像為所述觀察者的位置對應的所述目標物體的圖像;以及展示所述VR畫面,並在所述VR畫面中渲染所述目標圖像,其中所述獲取觀察者的位置訊息,包括:獲取所述觀察者的左眼位置訊息、右眼位置訊息、左眼朝向訊息和右眼朝向訊息;其中,所述VR畫面包括左眼畫面和右眼畫面,所述展示所述VR畫面,並在所述VR畫面中渲染所述目標圖像,包括:根據所述左眼位置訊息和所述左眼朝向訊息確定所述左眼畫面;根據所述右眼位置訊息和所述右眼朝向訊息確定所述右眼畫面;以及根據所述左眼朝向訊息和所述目標圖像,實時(real time)渲染所述左眼畫面,並在所述左眼畫面中渲染所述目標圖像;根據所述右眼朝向訊息和所述目標圖像,實時渲染所述右眼畫面,並在所述右眼畫面中渲染所述目標圖像,其中所述根據所述位置訊息和所述至少兩個圖像對應的拍攝位置,利用所述至少兩個圖像生成目標圖像,包括:對所述左眼位置訊息和所述右眼位置訊息求平均值,得到平均位置;根據所述平均位置,從所述預先拍攝的多個視頻中選取出目標視頻,其中,所述目標視頻的拍攝位置與所述平均位置的距離是所述預先拍攝的多個視頻的拍攝位置與所述平均位置的空間距離中最小的;以及從所述目標視頻中選取一個視頻幀,並將所述視頻幀作為所述目標圖像。A graphics processing method adapted to a computing device includes: collecting current posture information of an observer; obtaining position information of the observer according to the posture information; and determining a virtual reality to be displayed according to the position information. 
(VR) a target object in the picture; acquiring at least two images corresponding to the target object stored in advance, the at least two images are images respectively taken from different shooting positions; according to the position information and A shooting position corresponding to the at least two images, using the at least two images to generate a target image, the target image being an image of the target object corresponding to the position of the observer; The VR picture, and rendering the target image in the VR picture, wherein the obtaining the position information of the observer includes: obtaining the left-eye position information, the right-eye position information, and the left-eye orientation information of the observer And the right eye orientation message; wherein the VR picture includes a left eye picture and a right eye picture, the displaying the VR picture, and rendering the target image in the VR picture, Including: determining the left-eye picture according to the left-eye position information and the left-eye orientation information; determining the right-eye picture according to the right-eye position information and the right-eye orientation information; and Towards the message and the target image, rendering the left-eye picture in real time, and rendering the target image in the left-eye picture; according to the right-eye orientation message and the target image, Rendering the right-eye picture in real time and rendering the target image in the right-eye picture, wherein according to the position information and a shooting position corresponding to the at least two images, using the at least two The image generating a target image includes: averaging the left-eye position information and the right-eye position information to obtain an average position; and selecting a target from the plurality of pre-shot videos according to the average position. A video, wherein the distance between the shooting position of the target video and the average position is the smallest of the spatial distances between the shooting position of the plurality of previously shot videos and the average position; and from the target view Select a video frame in the frequency, and use the video frame as the target image. 如申請專利範圍第18項所述之方法,其中根據待展示的所述VR畫面的時間訊息,從預先拍攝的多個視頻中確定每個視頻中所述時間訊息對應的視頻幀作為所述圖像。The method according to item 18 of the scope of patent application, wherein according to the time information of the VR picture to be displayed, a video frame corresponding to the time information in each video is determined from the multiple videos taken in advance as the picture image. 如申請專利範圍第18項所述之方法,進一步包括:將所述目標圖像渲染到所述VR畫面中的第一預設紋理上,其中,所述第一預設紋理是基於廣告牌(billboard)面片(patch)技術的。The method according to item 18 of the patent application scope, further comprising: rendering the target image onto a first preset texture in the VR picture, wherein the first preset texture is based on a billboard ( billboard) patch technology. 如申請專利範圍第18項所述之方法,進一步包括:根據所述位置訊息,確定所述VR畫面中的第一物體;從三維模型庫中確定出所述第一物體對應的三維模型;以及將所述三維模型渲染到所述VR畫面的第二預設紋理上。The method of claim 18, further comprising: determining a first object in the VR frame based on the position information; determining a three-dimensional model corresponding to the first object from a three-dimensional model library; and Rendering the three-dimensional model onto a second preset texture of the VR picture. 
如申請專利範圍第19項所述之方法,其中所述多個視頻是對所述多個視頻的原始視頻經過透明處理後的僅包括所述目標物體的視頻。The method according to item 19 of the scope of patent application, wherein the plurality of videos are videos including only the target object after transparent processing of the original videos of the plurality of videos. 如申請專利範圍第22項所述之方法,其中所述目標物體為人物。The method according to item 22 of the scope of patent application, wherein the target object is a person. 如申請專利範圍第18項所述之方法,其中所述收集觀察者當前的姿態訊息,包括:收集所述觀察者當前的頭部姿態訊息、四肢姿態訊息、軀幹姿態訊息、肌肉電刺激訊息、眼球跟蹤訊息、皮膚感知訊息、運動感知訊息和腦訊號訊息中的至少一種。The method according to item 18 of the scope of patent application, wherein said collecting current observer posture information includes collecting current observer posture information, limb posture information, trunk posture information, electrical muscle stimulation information, At least one of an eye-tracking message, a skin-aware message, a motion-aware message, and a brain signal message. 一種虛擬實境(Virtual Reality,VR)系統,包括姿態收集裝置、處理裝置和顯示裝置:所述姿態收集裝置用於:收集觀察者當前的姿態訊息;所述處理裝置用於:根據所述姿態訊息,得到所述觀察者的位置訊息;根據所述位置訊息確定待展示的VR畫面中的目標物體;獲取預先儲存的所述目標物體對應的至少兩個圖像,所述至少兩個圖像為分別從不同的拍攝位置拍攝的圖像;以及根據所述位置訊息和所述至少兩個圖像對應的拍攝位置,利用所述至少兩個圖像生成目標圖像,所述目標圖像為所述觀察者的位置對應的所述目標物體的圖像;所述顯示裝置用於:展示所述VR畫面,並在所述VR畫面中渲染所述目標圖像,其中所述處理裝置,獲取所述觀察者的左眼位置訊息、右眼位置訊息、左眼朝向訊息和右眼朝向訊息;其中,所述VR畫面包括左眼畫面和右眼畫面,其中,所述處理裝置根據所述左眼位置訊息和所述左眼朝向訊息確定所述左眼畫面;根據所述右眼位置訊息和所述右眼朝向訊息確定所述右眼畫面;根據所述左眼朝向訊息和所述目標圖像,實時(real time)渲染所述左眼畫面,並在所述左眼畫面中渲染所述目標圖像;以及根據所述右眼朝向訊息和所述目標圖像,實時渲染所述右眼畫面,並在所述右眼畫面中渲染所述目標圖像,其中所述處理裝置根據所述位置訊息和所述至少兩個圖像對應的拍攝位置,利用所述至少兩個圖像生成目標圖像,包括:對所述左眼位置訊息和所述右眼位置訊息求平均值,得到平均位置;根據所述平均位置,從所述預先拍攝的多個視頻中選取出目標視頻,其中,所述目標視頻的拍攝位置與所述平均位置的距離是所述預先拍攝的多個視頻的拍攝位置與所述平均位置的空間距離中最小的;以及從所述目標視頻中選取一個視頻幀,並將所述視頻幀作為所述目標圖像。A virtual reality (VR) system includes a gesture collection device, a processing device, and a display device. The gesture collection device is configured to: collect current pose information of an observer; the processing device is configured to: Information to obtain the position information of the observer; determine the target object in the VR picture to be displayed according to the position information; obtain at least two images corresponding to the target object stored in advance, the at least two images Are images taken from different shooting positions respectively; and a target image is generated by using the at least two images according to the position information and the shooting positions corresponding to the at least two images, where the target images are An image of the target object corresponding to the position of the observer; the display device is configured to display the VR image and render the target image in the VR image, wherein the processing device obtains The observer's left-eye position information, right-eye position information, left-eye orientation information, and right-eye orientation information; wherein the VR picture includes a left-eye picture and a right-eye picture, where The processing device determines the left-eye picture according to the left-eye position information and the left-eye orientation information; determines the right-eye picture according to the right-eye position information and the right-eye orientation information; The left-eye orientation message and the target image, rendering the left-eye picture in real time, and rendering the target image in the left-eye picture; and according to the right-eye orientation message and the target Image, rendering the right-eye picture in real time, and rendering the target image in the right-eye picture, wherein the processing device uses the position information and a 
shooting position corresponding to the at least two images to use Generating a target image by the at least two images includes: averaging the left-eye position information and the right-eye position information to obtain an average position; and based on the average position, from a plurality of previously photographed A target video is selected from the videos, and the distance between the shooting position of the target video and the average position is the smallest of the spatial distances between the shooting positions of the plurality of pre-shot videos and the average position. And selecting a video frame from the target video, and the video frame as the target image. 如申請專利範圍第25項所述之VR系統,其中所述處理裝置根據待展示的所述VR畫面的時間訊息,從預先拍攝的多個視頻中確定每個視頻中所述時間訊息對應的視頻幀作為所述圖像。The VR system as described in claim 25, wherein the processing device determines a video corresponding to the time information in each video from a plurality of pre-shot videos according to the time information of the VR picture to be displayed. A frame is used as the image. 如申請專利範圍第25項所述之VR系統,其中所述處理裝置將所述目標圖像渲染到所述VR畫面中的第一預設紋理上,其中,所述第一預設紋理是基於廣告牌(billboard)面片(patch)技術的。The VR system as described in claim 25, wherein the processing device renders the target image onto a first preset texture in the VR picture, wherein the first preset texture is based on Billboard patch technology. 如申請專利範圍第25項所述之VR系統,其中所述處理裝置,進一步用於:根據所述位置訊息,確定所述VR畫面中的第一物體;從三維模型庫中確定出所述第一物體對應的三維模型;以及將所述三維模型渲染到所述VR畫面的第二預設紋理上。The VR system according to item 25 of the scope of patent application, wherein the processing device is further configured to: determine a first object in the VR frame according to the position information; and determine the first object from a three-dimensional model library. A three-dimensional model corresponding to an object; and rendering the three-dimensional model onto a second preset texture of the VR picture. 如申請專利範圍第25項所述之VR系統,其中所述多個視頻是對所述多個視頻的原始視頻經過透明處理後的僅包括所述目標物體的視頻。The VR system according to item 25 of the scope of patent application, wherein the plurality of videos are videos including only the target object after transparent processing of the original videos of the plurality of videos. 如申請專利範圍第29項所述之VR系統,其中所述目標物體為人物。The VR system according to item 29 of the application, wherein the target object is a person. 如申請專利範圍第25項所述之VR系統,其中所述姿態收集裝置具體用於:收集所述用戶當前的頭部姿態訊息、四肢姿態訊息、軀幹姿態訊息、肌肉電刺激訊息、眼球跟蹤訊息、皮膚感知訊息、運動感知訊息和腦訊號訊息中的至少一種。The VR system according to item 25 of the scope of patent application, wherein the posture collecting device is specifically configured to collect the current head posture information, limb posture information, trunk posture information, electrical muscle stimulation information, and eye tracking information of the user. , Skin-aware messages, motion-aware messages, and brain signals. 如申請專利範圍第25項所述之VR系統,其中所述處理裝置包括中央處理器(Central Processing Unit,CPU)和圖形處理器(Graphics Processing Init,GPU)中的至少一種。The VR system according to item 25 of the scope of patent application, wherein the processing device includes at least one of a Central Processing Unit (CPU) and a Graphics Processing Init (GPU). 一種計算機儲存介質,其上儲存有指令,當所述指令在計算機上運行時,使得所述計算機執行申請專利範圍第1至8項任一項所述的方法。A computer storage medium stores instructions thereon. When the instructions are run on a computer, the computer is caused to execute the method according to any one of claims 1 to 8. 
A computer storage medium having instructions stored thereon which, when run on a computer, cause the computer to perform the method according to any one of claims 18 to 24.
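
The averaging and minimum-distance selection recited in the VR system claim above is easy to picture in code. The following is a minimal sketch in Python, assuming eye positions and shooting positions are 3-D tuples in the same coordinate system and that each pre-captured video record carries its shooting position, frame rate, and decoded frames; every name here (average_eye_position, select_target_video, select_frame, the dictionary keys) is illustrative and not taken from the patent.

import math

def average_eye_position(left_eye, right_eye):
    # Component-wise mean of the left-eye and right-eye positions.
    return tuple((l + r) / 2.0 for l, r in zip(left_eye, right_eye))

def select_target_video(videos, avg_pos):
    # Minimum-distance rule from the VR system claim: the target video is
    # the one whose shooting position is spatially closest to the average.
    return min(videos, key=lambda v: math.dist(v["shooting_position"], avg_pos))

def select_frame(video, display_time):
    # Time-information rule from the dependent claim: pick the frame
    # matching the time of the VR picture to be displayed (by index here).
    index = min(int(display_time * video["fps"]), len(video["frames"]) - 1)
    return video["frames"][index]

videos = [
    {"shooting_position": (0.0, 1.6, 2.0), "fps": 30, "frames": ["f0", "f1", "f2"]},
    {"shooting_position": (1.5, 1.6, 1.0), "fps": 30, "frames": ["g0", "g1", "g2"]},
]
avg = average_eye_position((-0.03, 1.6, 0.0), (0.03, 1.6, 0.0))
target_image = select_frame(select_target_video(videos, avg), display_time=0.05)

The selected frame then serves as the target image that the display device renders into both the left-eye and right-eye pictures.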
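The dependent claims also recite rendering the target image onto a first preset texture based on billboard patch technology, i.e. a flat textured quad that is continually re-oriented toward the viewer. Below is a minimal sketch of that orientation step, assuming numpy, a right-handed Y-up world, and an eye position that is not directly above or below the patch; the function name and basis convention are illustrative, not taken from the patent.

import numpy as np

def billboard_basis(patch_position, eye_position, world_up=(0.0, 1.0, 0.0)):
    # The vector from the patch to the eye becomes the quad's facing axis.
    forward = np.asarray(eye_position, dtype=float) - np.asarray(patch_position, dtype=float)
    forward /= np.linalg.norm(forward)
    # Derive orthonormal right and up vectors; this degenerates when forward
    # is parallel to world_up, hence the assumption stated above.
    right = np.cross(world_up, forward)
    right /= np.linalg.norm(right)
    up = np.cross(forward, right)
    # Columns are the quad's +X, +Y, +Z axes, i.e. its rotation matrix.
    return np.column_stack((right, up, forward))

Rebuilding this basis every frame from the observer's averaged eye position keeps the flat video frame facing the viewer, which is what lets a 2-D patch stand in for the pre-captured target object inside the 3-D scene.
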
TW107116847A 2017-05-25 2018-05-17 Graphic processing method and device, virtual reality system, computer storage medium TWI659335B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710379516.5 2017-05-25
CN201710379516.5A CN107315470B (en) 2017-05-25 2017-05-25 Graphic processing method, processor and virtual reality system

Publications (2)

Publication Number Publication Date
TW201835723A TW201835723A (en) 2018-10-01
TWI659335B true TWI659335B (en) 2019-05-11

Family

ID=60182018

Family Applications (1)

Application Number Title Priority Date Filing Date
TW107116847A TWI659335B (en) 2017-05-25 2018-05-17 Graphic processing method and device, virtual reality system, computer storage medium

Country Status (3)

Country Link
CN (1) CN107315470B (en)
TW (1) TWI659335B (en)
WO (1) WO2018214697A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI833560B (en) * 2022-11-25 2024-02-21 大陸商立訊精密科技(南京)有限公司 Image scene construction method, apparatus, electronic equipment and storage medium

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107315470B (en) * 2017-05-25 2018-08-17 腾讯科技(深圳)有限公司 Graphic processing method, processor and virtual reality system
CN110134222A * 2018-02-02 2019-08-16 上海集鹰科技有限公司 Sight positioning and aiming system for VR display and sight positioning method thereof
CN108616752B (en) * 2018-04-25 2020-11-06 北京赛博恩福科技有限公司 Head-mounted equipment supporting augmented reality interaction and control method
CN109032350B (en) * 2018-07-10 2021-06-29 深圳市创凯智能股份有限公司 Vertigo sensation alleviating method, virtual reality device, and computer-readable storage medium
CN110570513B (en) * 2018-08-17 2023-06-20 创新先进技术有限公司 Method and device for displaying vehicle loss information
US11500455B2 (en) 2018-10-16 2022-11-15 Nolo Co., Ltd. Video streaming system, video streaming method and apparatus
CN111065053B (en) * 2018-10-16 2021-08-17 北京凌宇智控科技有限公司 System and method for video streaming
CN111064985A (en) * 2018-10-16 2020-04-24 北京凌宇智控科技有限公司 System, method and device for realizing video streaming
CN109976527B (en) * 2019-03-28 2022-08-12 重庆工程职业技术学院 Interactive VR display system
CN112015264B (en) * 2019-05-30 2023-10-20 深圳市冠旭电子股份有限公司 Virtual reality display method, virtual reality display device and virtual reality equipment
CN111857336B (en) * 2020-07-10 2022-03-25 歌尔科技有限公司 Head-mounted device, rendering method thereof, and storage medium
CN113947652A (en) * 2020-07-15 2022-01-18 北京芯海视界三维科技有限公司 Method and device for realizing target object positioning and display device
CN112073669A (en) * 2020-09-18 2020-12-11 三星电子(中国)研发中心 Method and device for realizing video communication
CN112308982A (en) * 2020-11-11 2021-02-02 安徽山水空间装饰有限责任公司 Decoration effect display method and device
CN113436489A * 2021-06-09 2021-09-24 深圳大学 Study-abroad experience system and method based on virtual reality
WO2024174050A1 (en) * 2023-02-20 2024-08-29 京东方科技集团股份有限公司 Video communication method and device

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080316299A1 (en) * 2007-06-25 2008-12-25 Ati Technologies Ulc Virtual stereoscopic camera
CN102404584A (en) * 2010-09-13 2012-04-04 腾讯科技(成都)有限公司 Method and device for adjusting scene left camera and scene right camera, three dimensional (3D) glasses and client side
TW201224516A (en) * 2010-11-08 2012-06-16 Microsoft Corp Automatic variable virtual focus for augmented reality displays
TW201329853A (en) * 2011-10-14 2013-07-16 Microsoft Corp User controlled real object disappearance in a mixed reality display
US20150058102A1 (en) * 2013-08-21 2015-02-26 Jaunt Inc. Generating content for a virtual reality system
US20150358539A1 (en) * 2014-06-06 2015-12-10 Jacob Catt Mobile Virtual Reality Camera, Method, And System

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100573595C (en) * 2003-06-20 2009-12-23 日本电信电话株式会社 Virtual visual point image generating method and three-dimensional image display method and device
KR100656342B1 (en) * 2004-12-16 2006-12-11 한국전자통신연구원 Apparatus for visual interface for presenting multiple mixed stereo image
KR101629479B1 (en) * 2009-11-04 2016-06-10 삼성전자주식회사 High density multi-view display system and method based on the active sub-pixel rendering
WO2011111349A1 (en) * 2010-03-10 2011-09-15 パナソニック株式会社 3d video display device and parallax adjustment method
US9380287B2 (en) * 2012-09-03 2016-06-28 Sensomotoric Instruments Gesellschaft Fur Innovative Sensorik Mbh Head mounted system and method to compute and render a stream of digital images using a head mounted display
CN104679509B * 2015-02-06 2019-11-15 腾讯科技(深圳)有限公司 Method and apparatus for rendering graphics
WO2017062268A1 (en) * 2015-10-04 2017-04-13 Thika Holdings Llc Eye gaze responsive virtual reality headset
CN106385576B (en) * 2016-09-07 2017-12-08 深圳超多维科技有限公司 Stereoscopic Virtual Reality live broadcasting method, device and electronic equipment
CN106507086B * 2016-10-28 2018-08-31 北京灵境世界科技有限公司 3D rendering method for roaming real-scene VR
CN106527696A (en) * 2016-10-31 2017-03-22 宇龙计算机通信科技(深圳)有限公司 Method for implementing virtual operation and wearable device
CN106657906B (en) * 2016-12-13 2020-03-27 国家电网公司 Information equipment monitoring system with self-adaptive scene virtual reality function
CN106643699B (en) * 2016-12-26 2023-08-04 北京互易科技有限公司 Space positioning device and positioning method in virtual reality system
CN107315470B (en) * 2017-05-25 2018-08-17 腾讯科技(深圳)有限公司 Graphic processing method, processor and virtual reality system

Also Published As

Publication number Publication date
WO2018214697A1 (en) 2018-11-29
CN107315470A (en) 2017-11-03
TW201835723A (en) 2018-10-01
CN107315470B (en) 2018-08-17

Similar Documents

Publication Publication Date Title
TWI659335B (en) Graphic processing method and device, virtual reality system, computer storage medium
US11238568B2 (en) Method and system for reconstructing obstructed face portions for virtual reality environment
US8878846B1 (en) Superimposing virtual views of 3D objects with live images
US20190139297A1 (en) 3d skeletonization using truncated epipolar lines
CN107801045B (en) Method, device and system for automatically zooming when playing augmented reality scene
JP7566973B2 (en) Information processing device, information processing method, and program
US20230350489A1 (en) Presenting avatars in three-dimensional environments
CN110363133B (en) Method, device, equipment and storage medium for sight line detection and video processing
WO2015122108A1 (en) Information processing device, information processing method and program
KR102461232B1 (en) Image processing method and apparatus, electronic device, and storage medium
US20120162384A1 (en) Three-Dimensional Collaboration
CN109671141B (en) Image rendering method and device, storage medium and electronic device
CN111862348B (en) Video display method, video generation method, device, equipment and storage medium
CN104508600A (en) Three-dimensional user-interface device, and three-dimensional operation method
KR101892735B1 (en) Apparatus and Method for Intuitive Interaction
WO2022147227A1 (en) Systems and methods for generating stabilized images of a real environment in artificial reality
CN110349269A (en) A kind of target wear try-in method and system
Rasool et al. Haptic interaction with 2D images
US11128836B2 (en) Multi-camera display
CN113678173A (en) Method and apparatus for graph-based placement of virtual objects
TWI817335B (en) Stereoscopic image playback apparatus and method of generating stereoscopic images thereof
WO2023277043A1 (en) Information processing device
CN108416255B (en) System and method for capturing real-time facial expression animation of character based on three-dimensional animation
CN108388351B (en) Mixed reality experience system
JP7501543B2 (en) Information processing device, information processing method, and information processing program