CN107315470A - Graphic processing method, processor and virtual reality system - Google Patents
- Publication number
- CN107315470A (application number CN201710379516.5A)
- Authority
- CN
- China
- Prior art keywords
- video
- information
- target
- left eye
- right eye
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/013—Eye tracking input arrangements
- G06F3/015—Input arrangements based on nervous system activity detection, e.g. brain waves [EEG] detection, electromyograms [EMG] detection, electrodermal response detection
- G06T15/005—General purpose rendering architectures
- G06T15/04—Texture mapping
- H04N13/275—Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals
- G06F2203/011—Emotion or mood input determined on the basis of sensed human body parameters such as pulse, heart rate or beat, temperature of skin, facial expressions, iris, voice pitch, brain activity patterns
- G06F2203/012—Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment
- G06T2215/16—Using real world measurements to influence rendering
Abstract
The application provides a graphics processing method, a processor, and a virtual reality system. The method includes: obtaining left-eye position information, right-eye position information, left-eye orientation information, and right-eye orientation information of a user; determining a target 3D model from a 3D model library according to the left-eye position information and the right-eye position information; determining a target video according to the left-eye position information, the right-eye position information, and multiple pre-shot videos, where the multiple videos are videos shot from different camera positions; rendering a left-eye picture in real time according to the left-eye orientation information, the target 3D model, and the target video; and rendering a right-eye picture in real time according to the right-eye orientation information, the target 3D model, and the target video. The VR scene formed by the left-eye picture and the right-eye picture includes an image of the target 3D model and an image of the target video. The graphics processing method of the application can realistically present real-scene objects, giving the user a genuine sense of presence and thereby improving the user experience.
Description
Technical field
The application relates to the field of graphics processing, and more particularly to a graphics processing method, a processor, and a virtual reality system.
Background technology
A mainstream technique for generating virtual reality (Virtual Reality, VR) scenes today is three-dimensional (three dimensional, 3D) modeling. Generating a VR scene with 3D modeling mainly means designing the VR scene from 3D models. In some VR game products, the VR scene is produced with 3D modeling combined with real-time rendering. Wearing a VR head-mounted display device, such as VR glasses or a VR helmet, as the viewing medium, the user is immersed in the VR scene and interacts with characters or other objects in it, obtaining a realistic sense of space; a roller-coaster VR scene is a common example. Although current 3D modeling can already achieve fairly lifelike results for the objects in a VR scene, it still falls far short of users' expectations.
Summary of the invention
The application provides a graphics processing method, a processor, and a virtual reality system that can realistically present real-scene objects, giving the user a genuine sense of presence and thereby improving the user experience.
A first aspect provides a graphics processing method, including: obtaining left-eye position information, right-eye position information, left-eye orientation information, and right-eye orientation information of a user; determining a target 3D model from a 3D model library according to the left-eye position information and the right-eye position information; determining a target video according to the left-eye position information, the right-eye position information, and multiple pre-shot videos, where the multiple videos are videos shot from different camera positions; rendering a left-eye picture in real time according to the left-eye orientation information, the target 3D model, and the target video; and rendering a right-eye picture in real time according to the right-eye orientation information, the target 3D model, and the target video. The left-eye picture and the right-eye picture form a VR scene when displayed on a virtual reality (VR) display, and the VR scene includes an image of the target 3D model and an image of the target video.
In the graphics processing method of the first aspect, the target 3D model is determined from the position information of the user's two eyes, the target video is determined from the multiple pre-shot videos, and the left-eye and right-eye pictures are each rendered in real time to display the VR scene. Because the VR scene includes both an image of the target 3D model and an image of the target video, and the target video can realistically present a real scene, the method gives the user a genuine sense of presence while keeping the whole VR scene interactive, thereby improving the user experience.
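As a minimal sketch of the per-frame flow described above (every name and data shape here is illustrative, not taken from the patent), the steps of the first aspect might be wired together as follows:

```python
from dataclasses import dataclass

# Illustrative sketch of the first aspect's per-frame flow; all
# names and data shapes are assumptions, not patent text.

@dataclass
class EyePose:
    left_pos: tuple    # left-eye position information
    right_pos: tuple   # right-eye position information
    left_dir: tuple    # left-eye orientation information
    right_dir: tuple   # right-eye orientation information

def render_frame(pose, select_model, select_video, render_eye):
    # Determine the target 3D model from the two eye positions.
    model = select_model(pose.left_pos, pose.right_pos)
    # Determine the target video from the pre-shot videos.
    video = select_video(pose.left_pos, pose.right_pos)
    # Render each eye's picture with its own orientation but the
    # same target model and target video.
    left = render_eye(pose.left_dir, model, video)
    right = render_eye(pose.right_dir, model, video)
    return left, right
```

The two selection callbacks stand in for the model-library lookup and the target-video determination; the two rendered pictures together form the VR scene on the display.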
In a possible implementation of the first aspect, rendering the left-eye picture in real time according to the left-eye orientation information, the target 3D model, and the target video includes: rendering the target 3D model onto a first texture according to the left-eye orientation information; and rendering the target video onto a second texture according to the left-eye orientation information, where the second texture is based on a billboard quad. Rendering the right-eye picture in real time according to the right-eye orientation information, the target 3D model, and the target video includes: rendering the target 3D model onto a third texture according to the right-eye orientation information; and rendering the target video onto a fourth texture according to the right-eye orientation information, where the fourth texture is based on a billboard quad.
It should be understood that the billboard quad may be tilted in the left-eye picture, with the specific tilt angle computed from the left-eye position information; likewise, the billboard quad may be tilted in the right-eye picture, with the specific tilt angle computed from the right-eye position information.
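To illustrate the tilt-angle remark above (the formula is our assumption; the patent only states that the angle is computed from the eye position), a billboard quad can be rotated about the vertical axis so that it faces a given eye:

```python
import math

# Hypothetical tilt computation: yaw the billboard quad about the
# vertical (y) axis so its normal points from the quad toward the
# eye position. The patent does not give this formula.

def billboard_yaw(board_pos, eye_pos):
    dx = eye_pos[0] - board_pos[0]
    dz = eye_pos[2] - board_pos[2]
    return math.atan2(dx, dz)  # radians; 0 means facing +z
```

Evaluating this once with the left-eye position and once with the right-eye position yields the two slightly different tilt angles used for the left-eye and right-eye pictures.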
In a possible implementation of the first aspect, determining the target video according to the left-eye position information, the right-eye position information, and the multiple pre-shot videos includes: averaging the left-eye position information and the right-eye position information to obtain a mean position; selecting at least two videos from the multiple videos according to the mean position; extracting, from each of the at least two videos, the video frame corresponding to the current moment; and performing interpolation on the at least two video frames according to the mean position and the camera positions of the at least two videos to obtain the current target video.
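The interpolation variant can be sketched as follows. Videos are modeled as (camera position, current frame) pairs and frames as flat pixel lists; the linear blend weights are our assumption, since the patent does not specify the interpolation:

```python
# Sketch of the averaging + interpolation variant; data shapes and
# the linear weighting are assumptions, not patent text.

def interpolate_target_frame(left_pos, right_pos, videos):
    # Average the left-eye and right-eye positions.
    mean = tuple((l + r) / 2 for l, r in zip(left_pos, right_pos))

    def dist(p):
        return sum((a - b) ** 2 for a, b in zip(p, mean)) ** 0.5

    # Select the two videos whose camera positions are nearest.
    (p1, f1), (p2, f2) = sorted(videos, key=lambda v: dist(v[0]))[:2]

    # Interpolate the two current frames, weighting each frame by
    # how close its camera position is to the mean position.
    d1, d2 = dist(p1), dist(p2)
    w1 = d2 / (d1 + d2) if d1 + d2 else 0.5
    return [w1 * a + (1 - w1) * b for a, b in zip(f1, f2)]
```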
In a possible implementation of the first aspect, determining the target video according to the left-eye position information, the right-eye position information, and the multiple pre-shot videos includes: averaging the left-eye position information and the right-eye position information to obtain a mean position; and selecting the target video from the multiple videos according to the mean position, where the camera position of the target video is the one closest to the mean position among all camera positions of the multiple videos.
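The nearest-position variant is simpler; a sketch under the same illustrative data shapes:

```python
# Sketch of the nearest-camera-position variant: pick the single
# pre-shot video whose camera position is closest to the mean of
# the two eye positions. Data shapes are illustrative.

def nearest_target_video(left_pos, right_pos, videos):
    mean = tuple((l + r) / 2 for l, r in zip(left_pos, right_pos))
    return min(
        videos,
        key=lambda v: sum((a - b) ** 2 for a, b in zip(v[0], mean)),
    )
```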
In a possible implementation of the first aspect, the multiple videos are videos that contain only a target object after transparency processing of the original videos.
It should be understood that the transparency processing may be based on alpha transparency.
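As an illustration of alpha-based transparency processing (the background predicate is our assumption; a chroma-key style test is one common choice), background pixels can be given zero alpha so that only the target object survives compositing:

```python
# Hypothetical alpha keying: pixels classified as background get
# alpha 0, so only the target object remains when the video is
# composited into the VR scene. The predicate is an assumption.

def key_out_background(frame, is_background):
    # frame: list of (r, g, b) pixels -> list of (r, g, b, a)
    return [
        (r, g, b, 0 if is_background((r, g, b)) else 255)
        for (r, g, b) in frame
    ]
```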
In a possible implementation of the first aspect, the target object is a person.
In a possible implementation of the first aspect, the left-eye position information, the right-eye position information, the left-eye orientation information, and the right-eye orientation information are determined from the collected current posture information of the user.
In a possible implementation of the first aspect, the posture information includes at least one of head posture information, limb posture information, torso posture information, muscle electrical stimulation information, eye tracking information, skin sensing information, motion sensing information, and brain signal information.
A second aspect provides a processor, including an acquisition module, a computing module, and a rendering module. The acquisition module is configured to obtain left-eye position information, right-eye position information, left-eye orientation information, and right-eye orientation information of a user. The computing module is configured to determine a target 3D model from a 3D model library according to the left-eye position information and the right-eye position information obtained by the acquisition module, and is further configured to determine a target video according to the left-eye position information, the right-eye position information, and multiple pre-shot videos, where the multiple videos are videos shot from different camera positions. The rendering module is configured to render a left-eye picture in real time according to the left-eye orientation information, the target 3D model, and the target video, and is further configured to render a right-eye picture in real time according to the right-eye orientation information, the target 3D model, and the target video. The left-eye picture and the right-eye picture form a VR scene when displayed on a virtual reality (VR) display, and the VR scene includes an image of the target 3D model and an image of the target video.
In a possible implementation of the second aspect, the rendering module is specifically configured to: render the target 3D model onto a first texture according to the left-eye orientation information; render the target video onto a second texture according to the left-eye orientation information, where the second texture is based on a billboard quad; render the target 3D model onto a third texture according to the right-eye orientation information; and render the target video onto a fourth texture according to the right-eye orientation information, where the fourth texture is based on a billboard quad.
In a possible implementation of the second aspect, the computing module determines the target video according to the left-eye position information, the right-eye position information, and the multiple pre-shot videos by: averaging the left-eye position information and the right-eye position information to obtain a mean position; selecting at least two videos from the multiple videos according to the mean position; extracting, from each of the at least two videos, the video frame corresponding to the current moment; and performing interpolation on the at least two video frames according to the mean position and the camera positions of the at least two videos to obtain the current target video.
In a possible implementation of the second aspect, the computing module determines the target video according to the left-eye position information, the right-eye position information, and the multiple pre-shot videos by: averaging the left-eye position information and the right-eye position information to obtain a mean position; and selecting the target video from the multiple videos according to the mean position, where the camera position of the target video is the one closest to the mean position among all camera positions of the multiple videos.
In a possible implementation of the second aspect, the multiple videos are videos that contain only a target object after transparency processing of the original videos.
In a possible implementation of the second aspect, the target object is a person.
In a possible implementation of the second aspect, the left-eye position information, the right-eye position information, the left-eye orientation information, and the right-eye orientation information obtained by the acquisition module are determined from the collected current posture information of the user.
In a possible implementation of the second aspect, the posture information includes at least one of head posture information, limb posture information, torso posture information, muscle electrical stimulation information, eye tracking information, skin sensing information, motion sensing information, and brain signal information.
It should be understood that the processor may include at least one of a central processing unit (CPU) and a graphics processing unit (GPU). The functions of the computing module may correspond to the CPU, and the functions of the rendering module may correspond to the GPU. The GPU may reside on a graphics card and is also known as a display core, vision processor, or display chip.
The effects obtainable by the second aspect and its corresponding implementations correspond to those obtainable by the first aspect and its corresponding implementations, and are not repeated here.
A third aspect provides a graphics processing method, including: collecting current posture information of a user; obtaining, according to the posture information, left-eye position information, right-eye position information, left-eye orientation information, and right-eye orientation information of the user; determining a target 3D model from a 3D model library according to the left-eye position information and the right-eye position information; determining a target video according to the left-eye position information, the right-eye position information, and multiple pre-shot videos, where the multiple videos are videos shot from different camera positions; rendering a left-eye picture in real time according to the left-eye orientation information, the target 3D model, and the target video; rendering a right-eye picture in real time according to the right-eye orientation information, the target 3D model, and the target video; and displaying the left-eye picture and the right-eye picture, where the displayed left-eye and right-eye pictures form a VR scene that includes an image of the target 3D model and an image of the target video.
In the graphics processing method of the third aspect, the user's posture information is collected to determine the positions of the user's two eyes; the target 3D model is determined from the eye position information, the target video is determined from the multiple pre-shot videos, and the left-eye and right-eye pictures are each rendered in real time to display the VR scene. Because the VR scene includes both an image of the target 3D model and an image of the target video, and the target video can realistically present a real scene, the method gives the user a genuine sense of presence while keeping the whole VR scene interactive, thereby improving the user experience.
In a possible implementation of the third aspect, rendering the left-eye picture in real time according to the left-eye orientation information, the target 3D model, and the target video includes: rendering the target 3D model onto a first texture according to the left-eye orientation information; and rendering the target video onto a second texture according to the left-eye orientation information, where the second texture is based on a billboard quad. Rendering the right-eye picture in real time according to the right-eye orientation information, the target 3D model, and the target video includes: rendering the target 3D model onto a third texture according to the right-eye orientation information; and rendering the target video onto a fourth texture according to the right-eye orientation information, where the fourth texture is based on a billboard quad.
In a possible implementation of the third aspect, determining the target video according to the left-eye position information, the right-eye position information, and the multiple pre-shot videos includes: averaging the left-eye position information and the right-eye position information to obtain a mean position; selecting at least two videos from the multiple videos according to the mean position; extracting, from each of the at least two videos, the video frame corresponding to the current moment; and performing interpolation on the at least two video frames according to the mean position and the camera positions of the at least two videos to obtain the current target video.
In a possible implementation of the third aspect, determining the target video according to the left-eye position information, the right-eye position information, and the multiple pre-shot videos includes: averaging the left-eye position information and the right-eye position information to obtain a mean position; and selecting the target video from the multiple videos according to the mean position, where the camera position of the target video is the one closest to the mean position among all camera positions of the multiple videos.
In a possible implementation of the third aspect, the multiple videos are videos that contain only a target object after transparency processing of the original videos.
In a possible implementation of the third aspect, the target object is a person.
In a possible implementation of the third aspect, collecting the current posture information of the user includes collecting at least one of the user's current head posture information, limb posture information, torso posture information, muscle electrical stimulation information, eye tracking information, skin sensing information, motion sensing information, and brain signal information.
A fourth aspect provides a virtual reality (VR) system, including a posture collection apparatus, a processing apparatus, and a display apparatus. The posture collection apparatus is configured to collect current posture information of a user. The processing apparatus is configured to: obtain, according to the posture information, left-eye position information, right-eye position information, left-eye orientation information, and right-eye orientation information of the user; determine a target 3D model from a 3D model library according to the left-eye position information and the right-eye position information; determine a target video according to the left-eye position information, the right-eye position information, and multiple pre-shot videos, where the multiple videos are videos shot from different camera positions; render a left-eye picture in real time according to the left-eye orientation information, the target 3D model, and the target video; and render a right-eye picture in real time according to the right-eye orientation information, the target 3D model, and the target video. The display apparatus is configured to display the left-eye picture and the right-eye picture, where the displayed left-eye and right-eye pictures form a VR scene that includes an image of the target 3D model and an image of the target video.
In a possible implementation of the fourth aspect, the processing apparatus renders the left-eye picture in real time according to the left-eye orientation information, the target 3D model, and the target video by: rendering the target 3D model onto a first texture according to the left-eye orientation information; and rendering the target video onto a second texture according to the left-eye orientation information, where the second texture is based on a billboard quad. The processing apparatus renders the right-eye picture in real time according to the right-eye orientation information, the target 3D model, and the target video by: rendering the target 3D model onto a third texture according to the right-eye orientation information; and rendering the target video onto a fourth texture according to the right-eye orientation information, where the fourth texture is based on a billboard quad.
In a possible implementation of the fourth aspect, the processing apparatus determines the target video according to the left-eye position information, the right-eye position information, and the multiple pre-shot videos by: averaging the left-eye position information and the right-eye position information to obtain a mean position; selecting at least two videos from the multiple videos according to the mean position; extracting, from each of the at least two videos, the video frame corresponding to the current moment; and performing interpolation on the at least two video frames according to the mean position and the camera positions of the at least two videos to obtain the current target video.
In a possible implementation of the fourth aspect, the processing apparatus determines the target video according to the left-eye position information, the right-eye position information, and the multiple pre-shot videos by: averaging the left-eye position information and the right-eye position information to obtain a mean position; and selecting the target video from the multiple videos according to the mean position, where the camera position of the target video is the one closest to the mean position among all camera positions of the multiple videos.
In a possible implementation of the fourth aspect, the multiple videos are videos that contain only a target object after transparency processing of the original videos.
In a possible implementation of the fourth aspect, the target object is a person.
In a possible implementation of the fourth aspect, the posture collection apparatus is specifically configured to collect at least one of the user's current head posture information, limb posture information, torso posture information, muscle electrical stimulation information, eye tracking information, skin sensing information, motion sensing information, and brain signal information.
In a possible implementation of the fourth aspect, the processing apparatus includes at least one of a central processing unit (CPU) and a graphics processing unit (GPU).
A fifth aspect provides a computer storage medium storing instructions that, when run on a computer, cause the computer to perform the method of the first aspect or any possible implementation of the first aspect.
A sixth aspect provides a computer storage medium storing instructions that, when run on a computer, cause the computer to perform the method of the third aspect or any possible implementation of the third aspect.
A seventh aspect provides a computer program product including instructions that, when run by a computer, cause the computer to perform the method of the first aspect or any possible implementation of the first aspect.
An eighth aspect provides a computer program product including instructions that, when run by a computer, cause the computer to perform the method of the third aspect or any possible implementation of the third aspect.
The effects obtainable by the second to eighth aspects and their corresponding implementations correspond to those obtainable by the first aspect and its corresponding implementations, and are not repeated here.
Brief description of the drawings
Fig. 1 is a schematic diagram of a video frame in a panoramic video.
Fig. 2 is a comparison of VR scenes generated by 3D modeling and by panoramic video shooting.
Fig. 3 is a schematic flowchart of a graphics processing method according to an embodiment of the invention.
Fig. 4 is a schematic diagram of a scene to be presented according to an embodiment of the invention.
Fig. 5 is a schematic diagram of pre-shooting a scene according to an embodiment of the invention.
Fig. 6 is a schematic diagram of videos obtained at different camera positions according to an embodiment of the invention.
Fig. 7 is a schematic diagram of determining the target video according to an embodiment of the invention.
Fig. 8 is a schematic diagram of presenting the target video according to an embodiment of the invention.
Fig. 9 is a schematic block diagram of a processor according to an embodiment of the invention.
Fig. 10 is a schematic diagram of a virtual reality system according to an embodiment of the invention.
Fig. 11 is a schematic diagram of a virtual reality system according to another embodiment of the invention.
Embodiment
The technical solutions of the present application are described below with reference to the accompanying drawings.
Another technology for generating VR scenes is described below, namely generating VR scenes using panoramic video shooting technology. A panoramic video, also known as a 360-degree stereo video, is similar to an ordinary video, except that it contains the omnidirectional information around the shooting point. Fig. 1 is a schematic diagram of a video frame in a panoramic video. Generating a VR scene with panoramic video shooting technology means that a professional panoramic video shooting device and a shooting team complete the panoramic video shooting of the VR scene, and the panoramic video is then converted into the VR scene. Because the panoramic video is shot in advance, in a VR scene generated by panoramic video shooting technology the user can only change the direction of the eyes to watch the video; the user cannot merge into the VR scene or interact with the characters or other objects in it. Moreover, because a panoramic video contains omnidirectional information, panoramic video files are generally very large, and so are the files of the generated VR scenes.
Fig. 2 is a schematic diagram contrasting VR scenes generated by 3D modeling technology and by panoramic video shooting technology, respectively. As shown in Fig. 2, the images of a VR scene generated by 3D modeling technology are digital images that can be interacted with, and must be realized by real-time rendering technology; the VR scene generated by panoramic video shooting technology is live-action footage and does not need to be realized by real-time rendering technology. A VR scene generated by 3D modeling technology has better mobility: it provides an immersive scene experience, and the user can walk around in the scene. A VR scene generated by panoramic video shooting technology is limited to the scene captured by the director; the user can obtain a 360-degree viewing angle from the position of the camera, but cannot walk around in the scene. A VR scene generated by 3D modeling technology takes user activity as its timeline: the VR scene is played out through a series of user activities, and the user can also experience new VR scenes through independent exploration. A VR scene generated by panoramic video shooting technology takes the director's camera movement as its timeline, and the scene is played in the order in which the director shot it. The playing platform of a VR scene generated by 3D modeling technology usually requires a VR head-mounted display device (referred to as a VR headset), such as VR glasses or a VR helmet, which can be connected to a PC or a mobile device; the playing platform of a VR scene generated by panoramic video shooting technology is usually a computing device or platform that includes a panoramic video player, including PCs, mobile devices, the YouTube platform, and so on. The storytelling mode of a VR scene generated by 3D modeling technology is that the user triggers the plot: the director does not control the physical position of the user in the constructed scene, and must guide and motivate the user along the direction of the story to trigger the following plot. The storytelling mode of a VR scene generated by panoramic video shooting technology is that the director controls the physical movement of the camera to trigger the plot and attract the user's attention.
It can be seen that a VR scene generated by 3D modeling technology is built mainly from 3D models, and the user can merge into the VR scene and interact with the characters or other objects in it. However, the degree of realism that current 3D modeling technology achieves when processing objects falls far short of users' requirements. In a VR scene made with panoramic video shooting technology, the user cannot interact with the characters or other objects in the scene, and the file of the generated VR scene is huge. In view of the above technical problems, embodiments of the present invention provide a graphic processing method, a processor, and a VR system.
It should be understood that the methods and devices of the embodiments of the present invention are applicable to the field of VR scenes, for example to VR games, and also to other interactive scenes such as interactive VR movies and interactive VR concerts; the embodiments of the present invention are not limited in this respect.
Before the graphic processing method of the embodiments of the present invention is described in detail, the real-time rendering technology involved in the embodiments is first introduced. The essence of real-time rendering technology is the real-time computation and output of graphics data, and its most important characteristic is its real-time nature. Currently, the processor in a PC (Personal Computer), workstation, game console, mobile device, or VR system performs these computations at a rate of at least 24 frames per second; that is, rendering one screen of imagery must take no more than 1/24 of a second. In actual 3D games the required frame rate is much higher. It is precisely the real-time nature of real-time rendering that makes the coherent playback of 3D games possible and allows the user to interact with the characters or other objects in the game scene.
The real-time rendering involved in the embodiments of the present invention can be realized by a central processing unit (CPU) or a graphics processing unit (GPU); the embodiments of the present invention are not limited in this respect. Specifically, a GPU is a processor dedicated to image computation; it may reside in a video card and is also known as the display core, vision processor, or display chip.
Fig. 3 is a schematic flowchart of a graphic processing method 300 according to an embodiment of the invention. The method 300 is performed by a VR system 30, where the VR system 30 may include a posture collection device 32, a processing unit 34, and a display device 36. The method 300 may include the following steps.
S310: collect the user's current posture information. It should be understood that S310 may be performed by the posture collection device 32.
S320: according to the posture information, obtain the user's left eye position information, right eye position information, left eye orientation information, and right eye orientation information.
S330: according to the left eye position information and the right eye position information, determine a target three-dimensional model from a 3D model library.
S340: according to the left eye position information, the right eye position information, and multiple videos shot in advance, determine a target video, where the multiple videos are videos shot from different shooting positions.
S350: according to the left eye orientation information, the target three-dimensional model, and the target video, render the left eye picture in real time.
S360: according to the right eye orientation information, the target three-dimensional model, and the target video, render the right eye picture in real time.
It should be understood that S320 to S360 may be performed by the processing unit 34.
S370: display the left eye picture and the right eye picture, where a VR scene is formed when the left eye picture and the right eye picture are displayed, and the VR scene includes the image of the target three-dimensional model and the image of the target video.
It should be understood that S370 may be performed by the display device 36.
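As a minimal sketch of one pass of steps S310 to S370, the per-frame flow can be written as follows. All function names and the dictionary keys are hypothetical, introduced only for illustration; the patent does not define a concrete API, and the component implementations are supplied by the caller.

```python
def render_frame(collect_pose, select_models, select_video, render, show):
    """One frame of method 300 (S310-S370); callables are hypothetical stand-ins
    for the posture collection device 32, processing unit 34, and display 36."""
    # S310: collect the user's current posture information
    pose = collect_pose()
    # S320: obtain eye positions and orientations from the posture information
    l_pos, r_pos = pose["left_pos"], pose["right_pos"]
    l_dir, r_dir = pose["left_dir"], pose["right_dir"]
    # S330: determine the target 3D models from the model library
    models = select_models(l_pos, r_pos)
    # S340: determine the target video from the videos shot in advance
    video = select_video(l_pos, r_pos)
    # S350/S360: render the left and right eye pictures in real time
    left_picture = render(l_dir, models, video)
    right_picture = render(r_dir, models, video)
    # S370: display both pictures; together they form the VR scene
    show(left_picture, right_picture)
    return left_picture, right_picture
```

In use, the two eye pictures differ only through the eye-specific orientation passed to the renderer, while the selected models and target video are shared by both.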
In the graphic processing method of the embodiment of the present invention, the user's posture information is collected to determine the positions of the user's left and right eyes; according to the position information of the left and right eyes, the target three-dimensional model is determined and the target video is determined from the multiple videos shot in advance; the left eye picture and the right eye picture are then rendered separately by real-time rendering, so that a VR scene is displayed. The VR scene includes the image of the target three-dimensional model and the image of the target video, and the target video can truthfully present the real scene. While keeping the whole VR scene interactive, this provides the user with a real sense of presence, thereby improving the user experience.
It should be understood that a VR system 30 typically includes a VR head-mounted display device, and the display device 36 can be integrated in the VR head-mounted display device. The processing unit 34 and/or the posture collection device 32 of the embodiment of the present invention can be integrated in the VR head-mounted display device, or can be deployed separately, independently of it. The posture collection device 32, the processing unit 34, and the display device 36 may communicate with each other by wired or wireless communication; the embodiments of the present invention are not limited in this respect.
Each step of the graphic processing method 300 of the present application and each component of the VR system 30 are described in detail below.
In the embodiment of the present invention, in S310 the posture collection device 32 collects the user's current posture information.
The posture collection device 32 may include sensors in a VR head-mounted display device, such as VR glasses or a VR helmet. The sensors may include optical sensors, such as infrared sensors and cameras; force sensors, such as gyroscopes; magnetic sensors, such as a brain-computer interface; acoustic sensors; and so on. The embodiments of the present invention do not limit the specific types of sensors. The sensors in the VR head-mounted display device can collect at least one of the user's current head posture information, eye tracking information, skin sensing information, muscle electrical stimulation information, and brain signal information. The processing unit 34 can then determine the user's left eye position information, right eye position information, left eye orientation information, and right eye orientation information according to this information.
In a specific example, in a VR scene the user's viewing angle refers to the azimuth of the user's line of sight in the virtual space, which includes the position and orientation of the human eyes. In the virtual space, the user's viewing angle can change with the change of the posture of the user's head in real space. In one particular case, the change of the user's viewing angle in the virtual space is synchronized with, and in the same direction as, the change of the user's head posture in real space. The user's viewing angle comprises a left eye viewing angle and a right eye viewing angle, i.e. it includes the user's left eye position, right eye position, left eye direction, and right eye direction.
In this example, while the user uses the VR head-mounted display device, its sensors can sense motions such as rotation and movement of the head and the resulting posture changes, and resolve these motions to obtain the relevant head posture information (such as the speed and angle of the motion). The processing unit 34 can then determine the user's left eye position information, right eye position information, left eye orientation information, and right eye orientation information from the obtained head posture information.
The posture collection device 32 may also include locators, control handles, motion-sensing gloves, motion-sensing clothes, dynamic devices such as treadmills, and so on, which collect the user's posture information; the processing unit 34 then processes it to obtain the user's left eye position information, right eye position information, left eye orientation information, and right eye orientation information. Specifically, through control handles, motion-sensing gloves, motion-sensing clothes, treadmills, and the like, the posture collection device 32 can collect the user's limb posture information, trunk posture information, muscle electrical stimulation information, skin sensing information, motion perception information, and so on.
In a specific example, one or more locators may be provided on the VR head-mounted display device for monitoring the position (possibly including the height) and orientation of the user's head. In this case, a positioning system may be arranged in the real space where the user wears the VR head-mounted display device; the positioning system can carry out positioning communication with the one or more locators on the device worn by the user, and determine posture information such as the user's specific position (possibly including the height) and orientation in this real space. The processing unit 34 can then convert the above posture information into the corresponding position (possibly including the height) and orientation of the user's head in the virtual space. That is, the processing unit 34 obtains the user's left eye position information, right eye position information, left eye orientation information, and right eye orientation information.
It should be understood that the left eye position information and right eye position information of the embodiment of the present invention can be represented by coordinate values in a coordinate system, and the left eye orientation information and right eye orientation information can be represented by vectors in a coordinate system; however, the embodiments of the present invention are not limited in this respect.
It should also be understood that after collecting the posture information, the posture collection device 32 needs to send it to the processing unit 34 by wired or wireless communication, which is not repeated herein.
It should also be understood that the embodiments of the present invention may also collect the user's posture information in other ways, and obtain and/or represent the left eye position information, right eye position information, left eye orientation information, and right eye orientation information in other ways; the embodiments of the present invention do not limit the specific manner.
In the design of a VR scene, for example in the game design of a VR scene, one position is designed to correspond to one group of objects. In a specific example, the objects corresponding to the user's left eye position LE and right eye position RE are as shown in Fig. 4. The user's left eye position corresponds to object L41, object 43, object 44, object 46, and character 42; the user's right eye position corresponds to object R45, object 43, object 44, object 46, and character 42. Character 42 is the object whose realism is to be enhanced, i.e. the target object.
Specifically, determining which object in the group of objects corresponding to the user's left eye position or right eye position is the target object can be based on the design of the VR scene. For example, each scene, or a set of scenes, may have a target object list, and when the VR scene is generated, the target object in the target scene is found according to the target object list. As another example, the game design of the VR scene may stipulate that characters in the close shot (the part of the scene within a certain range of the user) are target objects, that other objects in the close shot are not target objects, and that none of the objects in the distant view (the scene beyond that range) are target objects, and so on. Determining the target object in a scene can be performed by the processing unit 34, for example by the CPU in the processing unit 34; the embodiments of the present invention are not limited in this respect.
It should be understood that for a VR scene, the objects other than the target object can be generated beforehand as 3D models through 3D modeling and stored in the 3D model library. Specifically, the 3D models of object L41, object 43, object 44, object R45, and object 46 shown in Fig. 4 are stored in the 3D model library. After obtaining the left eye position information and right eye position information, the processing unit 34 (for example, the CPU in the processing unit 34) determines the target three-dimensional models from the 3D model library, i.e. the 3D models of object L41, object 43, object 44, object R45, and object 46, for use in the subsequent rendering of the pictures. Of course, the target three-dimensional models may also be determined in other ways; the embodiments of the present invention are not limited in this respect.
The target object in the VR scene, such as character 42 in the VR scene shown in Fig. 4, is generated from multiple videos shot in advance, where the multiple videos are videos containing the target object shot from different shooting positions. Specifically, assuming the target object is character 42, the embodiment of the present invention can shoot multiple videos of character 42 in advance from multiple shooting positions. Fig. 5 shows a schematic diagram of the scene shot in advance. As shown in Fig. 5, the scene to be shot includes character 42, object 52, and object 54, and the scene to be shot is kept as close as possible to the VR scene finally presented, to increase the sense of reality. For the scene to be shot, multiple shooting devices can be placed in the horizontal direction, filming from shooting position C1, shooting position C2, and shooting position C3 respectively; the original videos of the character at the different shooting positions can be obtained as shown in Fig. 6.
It should be understood that when shooting the videos in advance, the shooting can be done on a circle of a certain radius around the target object. The more densely the shooting positions are chosen on this circle, the greater the probability of selecting one that is the same as or close to the user's left eye position or right eye position, and the higher the realism of the target video that is finally selected, or computed and placed into the VR scene.
In the embodiment of the present invention, the multiple videos may be videos that contain only the target object after the original videos have undergone transparent processing. Specifically, character 42 can be separated from the background objects 52 and 54 in each of the 3 videos shot from the 3 shooting positions, yielding 3 videos that contain only character 42. The 3 videos are videos of identical duration produced over the same period of time.
Optionally, in the embodiment of the present invention, the transparent processing can be processing based on the alpha transparency technique. Specifically, if each pixel in the 3D environment of the VR scene is allowed to possess a set of alpha values, the alpha value records the transparency of the pixel, so that objects can possess different degrees of transparency. In the embodiment of the present invention, the target object, character 42, in the original videos can be processed as opaque, and the background objects 52 and 54 can be processed as transparent.
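The transparent processing above can be sketched as follows, as a minimal per-pixel example. The chroma-key test used to decide which pixels belong to the background is an assumption for illustration; the patent only states that the background is made transparent via alpha values, not how background pixels are detected.

```python
def make_background_transparent(frame, background_color, tolerance=10):
    """frame: list of (r, g, b) pixels; returns a list of (r, g, b, a) pixels.
    Background pixels (close to background_color, a hypothetical color key)
    get alpha 0 (fully transparent); target-object pixels get alpha 255."""
    out = []
    for r, g, b in frame:
        is_background = all(
            abs(c - k) <= tolerance
            for c, k in zip((r, g, b), background_color)
        )
        alpha = 0 if is_background else 255
        out.append((r, g, b, alpha))
    return out
```

When such RGBA frames are composited into the scene, only the opaque target object (character 42) remains visible, which matches the "videos containing only the target object" described above.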
In a specific scheme, S340, determining the target video according to the left eye position information, the right eye position information, and the multiple videos shot in advance, can include: averaging the left eye position information and the right eye position information to obtain a mean position; and selecting the target video from the multiple videos according to the mean position, where the shooting position of the target video is, among all the shooting positions of the multiple videos, the one closest to the mean position.
It should be understood that in the embodiments of the present invention, the left eye position, right eye position, and shooting positions can all be expressed uniformly as coordinates of the virtual space of the VR scene, for example coordinates in a three-axis coordinate system of x, y, and z axes, or spherical coordinates. The left eye position, right eye position, and shooting positions can also be represented in other ways; the embodiments of the present invention are not limited in this respect.
In this scheme, the left eye position information and the right eye position information are averaged to obtain the mean position. For example, taking a three-axis coordinate system as an example, if the left eye position is (x1, y1, z1) and the right eye position is (x2, y2, z2), the mean position is ((x1+x2)/2, (y1+y2)/2, (z1+z2)/2). The video whose shooting position is closest to the mean position is selected from the multiple videos as the target video.
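This scheme can be sketched in a few lines; the function names are hypothetical, and "closest" is taken here as smallest Euclidean distance in the virtual-space coordinates, consistent with the description above.

```python
import math

def mean_position(left_eye, right_eye):
    """Average of the two (x, y, z) eye positions: ((x1+x2)/2, ...)."""
    return tuple((l + r) / 2 for l, r in zip(left_eye, right_eye))

def select_target_video(left_eye, right_eye, shooting_positions):
    """Return the index of the shooting position closest to the mean position;
    the video shot at that position is chosen as the target video."""
    mean = mean_position(left_eye, right_eye)
    return min(
        range(len(shooting_positions)),
        key=lambda i: math.dist(mean, shooting_positions[i]),
    )
```

For example, with the left eye at the origin and the right eye at (2, 0, 4), the mean position is (1, 0, 2), and the camera placed exactly there would be selected.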
In the case where the multiple shooting positions are multiple positions on a circle of a certain radius around the target object, the shooting position of the target video being closest to the mean position can be understood as meaning that the distance between the shooting position (xt, yt, zt) of the target video and the mean position ((x1+x2)/2, (y1+y2)/2, (z1+z2)/2) must be less than a preset threshold, i.e. the distance between the shooting position of the target video and the mean position is sufficiently small.
In the case where the multiple shooting positions are not on a circle of a certain radius around the target object, the shooting position of the target video being closest to the mean position can be understood as meaning that, among all the angles between the segment formed by the mean position and the target object and the segments formed by each shooting position and the target object, the angle for the shooting position of the target video is the smallest.
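The angle criterion above can be sketched as follows; the angle between the two segments is computed from the dot product of the vectors pointing from the target object to the mean position and to each shooting position. Function names are hypothetical.

```python
import math

def angle_to_target(position, mean, target):
    """Angle (radians) between segment mean->target and segment
    position->target, via the dot product of the two direction vectors."""
    u = tuple(m - t for m, t in zip(mean, target))
    v = tuple(p - t for p, t in zip(position, target))
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.hypot(*u) * math.hypot(*v)
    return math.acos(max(-1.0, min(1.0, dot / norm)))

def select_by_angle(mean, target, shooting_positions):
    """Index of the shooting position with the smallest such angle."""
    return min(
        range(len(shooting_positions)),
        key=lambda i: angle_to_target(shooting_positions[i], mean, target),
    )
```

Note that a shooting position directly on the line from the target object through the mean position yields an angle of zero, so it is preferred even if it is farther away than other cameras, which is the point of using the angle rather than the distance here.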
In another specific scheme, S340, determining the target video according to the left eye position information, the right eye position information, and the multiple videos shot in advance, can include: averaging the left eye position information and the right eye position information to obtain a mean position; selecting at least two videos from the multiple videos according to the mean position; extracting from each of the at least two videos the video frame corresponding to the current moment; and performing an interpolation operation on the at least two video frames according to the mean position and the shooting positions of the at least two videos to obtain the current frame of the target video.
In this scheme, at least one shooting position on each side (left and right) of the mean position of the user's left eye and right eye can be chosen, and the videos shot at these positions are selected from the multiple videos as references for computing the target video. The video frames of the at least two videos corresponding to the same moment are extracted and interpolated to obtain the target video.
In the case where the multiple shooting positions are multiple positions on a circle of a certain radius around the target object, the at least two videos chosen from the multiple videos can be the at least two videos whose shooting positions have the smallest distances to the mean position ((x1+x2)/2, (y1+y2)/2, (z1+z2)/2), with at least one of the shooting positions of the at least two videos distributed on the left side of the mean position and at least one on the right side.
In the case where the multiple shooting positions are not on a circle of a certain radius around the target object, the at least two videos chosen from the multiple videos can be those for which the angle between the segment formed by the mean position and the target object and the segment formed by the video's shooting position and the target object is among the smallest of all such angles, with at least one of the shooting positions of the at least two videos distributed on the left side of the mean position and at least one on the right side.
It should be understood that in the embodiments of the present invention, the reference videos can also be chosen according to other criteria; the embodiments of the present invention are not limited in this respect.
It should also be understood that in the embodiments of the present invention, the videos shot at different shooting positions represent different observation positions for observing the target object (for example, character 42). In other words, the video frames corresponding to the same moment in the 3 videos shown in Fig. 6 are images observed at different observation positions. The 3 shooting angles can correspond to the 3 shooting positions C1, C2, and C3, respectively.
It should be understood that in the embodiments of the present invention, instead of shooting multiple videos in advance, multiple groups of photos (or multiple sets of images) of the target object can be shot in advance from multiple shooting positions. According to the relationship of the left eye position and right eye position (or the mean position) to the multiple shooting positions, at least two images corresponding to at least two shooting positions are found from the multiple sets of images, and an interpolation operation is performed on the at least two images to obtain the target image. The specific interpolation algorithm is described in more detail below.
Fig. 7 is a schematic diagram of determining a target video according to an embodiment of the invention. The detailed process of selecting at least two videos from the multiple videos according to the mean position, extracting from each of the at least two videos the video frame corresponding to the current moment, and performing an interpolation operation on the at least two video frames according to the mean position and the shooting positions of the at least two videos to obtain the current frame of the target video can be as shown in Fig. 7.
When the user observes the VR scene, the observation position can change; for example, the user's observation position may move left and right while facing the VR scene. The 3 shooting positions are C1, C2, and C3, which can be represented by coordinate values in a three-dimensional Cartesian coordinate system, by coordinate values in a spherical coordinate system, or in other ways; the embodiments of the present invention are not limited in this respect. According to the user's left eye position information and right eye position information, the mean position Cview at which the user observes can be determined. As shown in Fig. 7, the mean position Cview lies between C1 and C2. When determining the target video, because the mean position Cview lies between C1 and C2, the videos shot in advance at shooting positions C1 and C2 are chosen as references. When generating a video frame (image) of the target video, the video frames I1 and I2 of the videos corresponding to C1 and C2 at the same moment are taken out, and the two video frames I1 and I2 are interpolated, for example by linear interpolation, where the interpolation weights depend on the distances of the mean position Cview to C1 and C2. The output video frame of the target video is

Iout = I1 × (1 − |C1 − Cview| / |C1 − C2|) + I2 × (1 − |C2 − Cview| / |C1 − C2|).
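A minimal sketch of this linear interpolation follows. Frames are represented here as flat lists of gray values for brevity (a real implementation would blend full RGB frames, for example on the GPU); with Cview on the segment between C1 and C2 the two weights sum to 1, so the output is a proper blend of the two frames.

```python
import math

def interpolate_frames(i1, i2, c1, c2, c_view):
    """Blend frames i1 (shot at c1) and i2 (shot at c2) for a viewer whose
    mean observation position is c_view, using the weights
    w1 = 1 - |C1 - Cview| / |C1 - C2| and w2 = 1 - |C2 - Cview| / |C1 - C2|."""
    d = math.dist(c1, c2)
    w1 = 1.0 - math.dist(c1, c_view) / d  # weight of frame I1
    w2 = 1.0 - math.dist(c2, c_view) / d  # weight of frame I2
    return [w1 * p1 + w2 * p2 for p1, p2 in zip(i1, i2)]
```

For instance, a viewer a quarter of the way from C1 to C2 sees a frame that is 75% I1 and 25% I2, so the synthesized view moves smoothly as the user's head moves between the two cameras.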
It should be understood that only the case in which the user's observation position moves left and right is discussed above. If the user's observation position moves forward and backward, then because the character seen by the observer in the VR 3D scene naturally appears larger when near and smaller when far, the angle that should physically be presented would also vary; but the influence of this variation is very small, and ordinary users will not notice or observe it. In addition, in general scenes the user mostly moves forward, backward, left, and right, and seldom moves over a large range in the vertical direction, so for a target video determined by the method according to the embodiment of the present invention, the sense of distortion experienced by the user is also very small.
It should be understood that the embodiments of the present invention are illustrated taking a character as the target object. Of course, the target object can also be an animal for which realism is required, or even a building or a plant; the embodiments of the present invention are not limited in this respect.
Optionally, in the embodiments of the present invention, S350, rendering the left eye picture in real time according to the left eye orientation information, the target three-dimensional model, and the target video, can include: rendering the target three-dimensional model onto a first texture according to the left eye orientation information; and rendering the target video onto a second texture according to the left eye orientation information, where the second texture is based on the billboard patch technique. S360, rendering the right eye picture in real time according to the right eye orientation information, the target three-dimensional model, and the target video, can include: rendering the target three-dimensional model onto a third texture according to the right eye orientation information; and rendering the target video onto a fourth texture according to the right eye orientation information, where the fourth texture is based on the billboard patch technique.
The process of rendering the left eye picture and the right eye picture in the embodiment of the present invention is described in detail below with reference to Fig. 8. As described above, the processing unit 34 (for example, its CPU) has determined the target three-dimensional model in S330 and the target video in S340. The processing unit 34 (for example, its GPU) determines the left eye picture to be presented according to the left eye orientation information, and the right eye picture to be presented according to the right eye orientation information. For example, in the scene shown in Fig. 4, according to the left eye orientation information (facing character 42), object L41, object 43, object 44, and character 42 are to be presented in the left eye picture; according to the right eye orientation information (facing character 42), object 43, object 44, object R45, and character 42 are to be presented in the right eye picture.
The processing unit 34 (for example, its GPU) renders the target three-dimensional models object L41, object 43, and object 44 onto the first texture 82 of the left eye picture L800, and renders the target video onto the second texture 84 of the left eye picture L800; it renders the target three-dimensional models object 43, object 44, and object R45 onto the third texture 86 of the right eye picture R800, and renders the target video onto the fourth texture 88 of the right eye picture R800.
Specifically, for the left eye picture and the right eye picture respectively, a billboard patch can be set at the position of the target object in the picture, and the target video is presented on the billboard patch. The billboard technique is a method of fast drawing in the field of computer graphics. In cases with high real-time requirements, such as 3D games, adopting the billboard technique can greatly speed up drawing and thus improve the fluency of 3D game pictures. The billboard technique uses a 2D representation of an object in a 3D scene while keeping the object always facing the user.
Specifically, the billboard patch may have a tilt angle in the left eye picture, and the specific parameter of the tilt angle can be calculated according to the left eye position information; the billboard patch may likewise have a tilt angle in the right eye picture, and the specific parameter of that tilt angle can be calculated according to the right eye position information.
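For illustration only (the patent does not prescribe any implementation or API), the "always facing the user" behaviour of a billboard patch can be sketched as follows: the patch normal is aimed from the patch position toward the eye position, and an orthonormal basis is built around it. All function and parameter names here are hypothetical.

```python
import numpy as np

def billboard_basis(patch_pos, eye_pos, world_up=(0.0, 1.0, 0.0)):
    """Return an orthonormal (right, up, normal) basis for a billboard patch
    placed at patch_pos, oriented so that its normal points at eye_pos.
    Computing this separately from the left eye position and from the right eye
    position yields the per-eye tilt described above."""
    normal = np.asarray(eye_pos, float) - np.asarray(patch_pos, float)
    normal /= np.linalg.norm(normal)          # unit vector toward the viewer
    right = np.cross(world_up, normal)
    right /= np.linalg.norm(right)            # horizontal axis of the patch
    up = np.cross(normal, right)              # completes the right-handed basis
    return right, up, normal
```

In a real renderer these three vectors would form the rotation part of the patch's model matrix, recomputed every frame as the eye moves.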
In fact, because the VR scene is rendered in real time, at any given moment it can be considered that the video frame obtained by the interpolation mentioned above is presented at the position of the target object. Over a continuous period of scene changes, this is equivalent to the video being played on the billboard patch.
As shown in Fig. 8, a billboard patch is set at the position corresponding to the target object, and each video frame is drawn as a map texture onto the billboard patch, so that each video frame always faces the user.
It should be understood that, when rendering the left eye picture and the right eye picture, the Z-buffer (depth buffer) technique can be combined with the billboard technique. The Z-buffer helps the target object form correct occlusion relations and size relations with other objects according to their distances. In the embodiment of the present invention, other techniques may also be used to render the target video, which is not limited by the embodiment of the present invention.
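As a minimal illustration of how a depth buffer produces the occlusion relations mentioned above (again, purely editorial and not part of the patent), the following sketch composites several layers per pixel, keeping the color of the nearest one. A real renderer performs this test per fragment on the GPU.

```python
import numpy as np

def zbuffer_compose(layers):
    """Minimal depth-buffer sketch. Each layer is a (depth_map, color_map) pair
    of equally sized 2D arrays. For every pixel, the color of the layer with
    the smallest depth wins, which is how a billboard patch carrying the video
    gains correct occlusion against other rendered geometry."""
    h, w = layers[0][0].shape
    zbuf = np.full((h, w), np.inf)            # depth buffer, initially "infinitely far"
    color = np.zeros((h, w), dtype=int)
    for depth, col in layers:
        mask = depth < zbuf                   # depth test: nearer than what is stored?
        zbuf[mask] = depth[mask]              # depth write
        color[mask] = col[mask]               # color write
    return color
```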
It should also be understood that the embodiment of the present invention also provides a graphics processing method including steps S320 to S360, the method being performed by a processor.
It should also be understood that, in the various embodiments of the present invention, the size of the sequence numbers of the above processes does not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present invention.
The graphics processing method according to the embodiments of the present invention has been described in detail above with reference to Fig. 1 to Fig. 8. The processor and the VR system according to the embodiments of the present invention will be described in detail below with reference to Fig. 9 and Fig. 10.
Fig. 9 is a schematic block diagram of a processor 900 according to an embodiment of the present invention. The processor 900 may correspond to the processing unit 34 described above. As shown in Fig. 9, the processor 900 may include an acquisition module 910, a computing module 920 and a rendering module 930.
The acquisition module 910 is configured to acquire left eye position information, right eye position information, left eye orientation information and right eye orientation information of a user.
The computing module 920 is configured to determine a target three-dimensional model from a three-dimensional model library according to the left eye position information and the right eye position information acquired by the acquisition module; the computing module 920 is further configured to determine a target video according to the left eye position information, the right eye position information and multiple videos shot in advance, wherein the multiple videos are videos shot respectively from different shooting positions.
The rendering module 930 is configured to render a left eye picture in real time according to the left eye orientation information, the target three-dimensional model and the target video; the rendering module 930 is further configured to render a right eye picture in real time according to the right eye orientation information, the target three-dimensional model and the target video; wherein a VR scene is formed when the left eye picture and the right eye picture are displayed on a virtual reality (VR) display, the VR scene including an image of the target three-dimensional model and an image of the target video.
The graphics processing apparatus of the embodiment of the present invention determines the target three-dimensional model according to the position information of the user's left and right eyes, determines the target video according to the multiple videos shot in advance, and renders the left eye picture and the right eye picture respectively by way of real-time rendering, so that the VR scene is displayed. The VR scene includes the image of the target three-dimensional model and the image of the target video; the target video can truly present a real scene, providing the user with a real sense of presence while the interactivity of the whole VR scene is preserved, thereby improving the user experience.
Alternatively, as an embodiment, the rendering module 930 may be specifically configured to: render the target three-dimensional model onto a first texture according to the left eye orientation information; render the target video onto a second texture according to the left eye orientation information, wherein the second texture is based on the billboard patch technique; render the target three-dimensional model onto a third texture according to the right eye orientation information; and render the target video onto a fourth texture according to the right eye orientation information, wherein the fourth texture is based on the billboard patch technique.
Alternatively, as an embodiment, the computing module 920 determining the target video according to the left eye position information, the right eye position information and the multiple videos shot in advance may include: averaging the left eye position information and the right eye position information to obtain an average position; selecting at least two videos from the multiple videos according to the average position; extracting, for each of the at least two videos, the video frame corresponding to the current moment; and performing an interpolation operation on the at least two video frames according to the average position and the shooting positions of the at least two videos, to obtain the target video.
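The interpolation step above can be sketched as follows, for the two-video case. This is one plausible weighting scheme only; the patent does not fix a particular interpolation formula, and all names here are hypothetical.

```python
import numpy as np

def interpolate_frames(avg_pos, cam_a_pos, frame_a, cam_b_pos, frame_b):
    """Blend the current frames of two selected videos with weights inversely
    proportional to the distance between the average eye position and each
    video's shooting position: the closer camera contributes more."""
    da = np.linalg.norm(np.asarray(avg_pos, float) - np.asarray(cam_a_pos, float))
    db = np.linalg.norm(np.asarray(avg_pos, float) - np.asarray(cam_b_pos, float))
    wa = db / (da + db)                       # closer camera gets the larger weight
    wb = da / (da + db)
    return wa * np.asarray(frame_a, float) + wb * np.asarray(frame_b, float)
```

The average position itself is simply the midpoint of the two eye positions, e.g. `avg_pos = (np.asarray(left_eye) + np.asarray(right_eye)) / 2`.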
Alternatively, as an embodiment, the computing module 920 determining the target video according to the left eye position information, the right eye position information and the multiple videos shot in advance may include: averaging the left eye position information and the right eye position information to obtain an average position; and selecting the target video from the multiple videos according to the average position, wherein the shooting position of the target video is, among all shooting positions of the multiple videos, the one closest to the average position.
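The nearest-shooting-position selection in this embodiment reduces to an argmin over distances, which can be sketched as follows (illustrative names; not part of the claimed subject matter).

```python
import numpy as np

def select_target_video(left_eye, right_eye, shooting_positions):
    """Return the index of the video whose shooting position is closest to the
    average of the two eye positions, as in the selection embodiment above."""
    avg = (np.asarray(left_eye, float) + np.asarray(right_eye, float)) / 2.0
    dists = [np.linalg.norm(avg - np.asarray(p, float)) for p in shooting_positions]
    return int(np.argmin(dists))
```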
Alternatively, as an embodiment, the multiple videos are videos that contain only the target object after transparent processing has been applied to the original videos.
Alternatively, as an embodiment, the target object is a person.
Alternatively, as an embodiment, the left eye position information, the right eye position information, the left eye orientation information and the right eye orientation information acquired by the acquisition module 910 are determined according to the collected current posture information of the user.
Alternatively, as an embodiment, the posture information includes at least one of head posture information, limb posture information, torso posture information, muscle electrical stimulation information, eye tracking information, skin perception information, motion perception information and brain signal information.
It should be understood that the processor 900 may be a CPU or a GPU. The processor 900 may also include both CPU functions and GPU functions; for example, the functions of the acquisition module 910 and the computing module 920 (S320 to S340) may be performed by a CPU, and the function of the rendering module 930 (S350 and S360) by a GPU, which is not limited by the embodiment of the present invention.
Fig. 10 is a schematic diagram of a VR system according to an embodiment of the present invention. Shown in Fig. 10 is a VR helmet 1000, which may include a head tracker 1010, a CPU 1020, a GPU 1030 and a display 1040, where the head tracker 1010 corresponds to the posture collection device, the CPU 1020 and the GPU 1030 correspond to the processing unit, and the display 1040 corresponds to the display device. The functions of the head tracker 1010, the CPU 1020, the GPU 1030 and the display 1040 are not repeated here.
It should be understood that the head tracker 1010, the CPU 1020, the GPU 1030 and the display 1040 shown in Fig. 10 are integrated in the VR helmet 1000. There may also be other posture collection devices outside the VR helmet 1000, which collect the posture information of the user and send it to the CPU 1020 for processing, which is not limited by the embodiment of the present invention.
Fig. 11 is a schematic diagram of another VR system according to an embodiment of the present invention. Shown in Fig. 11 is a VR system composed of VR glasses 1110 and a host 1120. The VR glasses 1110 may include an angle sensor 1112, a signal processor 1114, a data transmitter 1116 and a display 1118, where the angle sensor 1112 corresponds to the posture collection device, the host 1120, which includes a CPU and a GPU, corresponds to the processing unit that calculates and renders the pictures, and the display 1118 corresponds to the display device. The angle sensor 1112 collects the posture information of the user and sends it to the host 1120 for processing; the host 1120 calculates and renders the left eye picture and the right eye picture, and sends them to the display 1118 for display. The signal processor 1114 and the data transmitter 1116 are mainly used for communication between the VR glasses 1110 and the host 1120.
There may also be other posture collection devices outside the VR glasses 1110, which collect the posture information of the user and send it to the host 1120 for processing, which is not limited by the embodiment of the present invention.
The virtual reality system of the embodiment of the present invention collects the posture information of the user to determine the positions of the user's left and right eyes, determines the target three-dimensional model according to the position information of the left and right eyes, determines the target video according to the multiple videos shot in advance, and renders the left eye picture and the right eye picture respectively by way of real-time rendering, so that the VR scene is displayed. The VR scene includes the image of the target three-dimensional model and the image of the target video; the target video can truly present a real scene, providing the user with a real sense of presence while the interactivity of the whole VR scene is preserved, thereby improving the user experience.
The embodiment of the present invention also provides a computer-readable storage medium having instructions stored thereon which, when run on a computer, cause the computer to perform the graphics processing method of the above method embodiment. Specifically, the computer may be the above VR system or the processor.
The embodiment of the present invention also provides a computer program product including instructions which, when run by a computer, cause the computer to perform the graphics processing method of the above method embodiment. Specifically, the computer program product may run in the VR system or on the processor.
The above embodiments may be implemented wholly or partly by software, hardware, firmware or any combination thereof. When implemented in software, they may be realized wholly or partly in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the flows or functions described in the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network or another programmable device. The computer instructions may be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from a website, computer, server or data center to another website, computer, server or data center by wired means (such as coaxial cable, optical fiber, or Digital Subscriber Line (DSL)) or wireless means (such as infrared, radio or microwave). The computer-readable storage medium may be any usable medium accessible to a computer, or a data storage device such as a server or data center integrating one or more usable media. The usable medium may be a magnetic medium (for example, a floppy disk, hard disk or magnetic tape), an optical medium (for example, a Digital Video Disc (DVD)) or a semiconductor medium (for example, a Solid State Disk (SSD)), etc.
It should be understood that the terms "first", "second" and the various numeral numberings referred to herein are distinctions made only for convenience of description and are not intended to limit the scope of the present application.
It should be understood that the term "and/or" herein only describes an association relation between associated objects, indicating that three relations may exist; for example, "A and/or B" may represent: A alone, both A and B, or B alone. In addition, the character "/" herein generally indicates an "or" relation between the objects before and after it.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented by electronic hardware, or by a combination of computer software and electronic hardware. Whether these functions are performed in hardware or in software depends on the specific application and the design constraints of the technical solution. A skilled person may use different methods to implement the described functions for each particular application, but such implementation should not be considered beyond the scope of the present application.
Those skilled in the art can clearly understand that, for convenience and brevity of description, the specific working processes of the systems, devices and units described above may refer to the corresponding processes in the foregoing method embodiments, and are not repeated here.
In the several embodiments provided in the present application, it should be understood that the disclosed systems, devices and methods may be implemented in other ways. For example, the device embodiments described above are only schematic; the division of the units is only a division by logical function, and there may be other division manners in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the mutual couplings, direct couplings or communication connections shown or discussed may be indirect couplings or communication connections of devices or units through some interfaces, and may be electrical, mechanical or in other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network elements. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist physically alone, or two or more units may be integrated in one unit.
The above is only the embodiments of the present application, but the protection scope of the present application is not limited thereto. Any person familiar with the technical field can readily conceive of changes or replacements within the technical scope disclosed in the present application, which should all be covered by the protection scope of the present application. Therefore, the protection scope of the present application should be defined by the scope of the claims.
Claims (34)
1. A graphics processing method, characterized by comprising:
acquiring left eye position information, right eye position information, left eye orientation information and right eye orientation information of a user;
determining a target three-dimensional model from a three-dimensional model library according to the left eye position information and the right eye position information;
determining a target video according to the left eye position information, the right eye position information and multiple videos shot in advance, wherein the multiple videos are videos shot respectively from different shooting positions;
rendering a left eye picture in real time according to the left eye orientation information, the target three-dimensional model and the target video;
rendering a right eye picture in real time according to the right eye orientation information, the target three-dimensional model and the target video;
wherein a virtual reality (VR) scene is formed when the left eye picture and the right eye picture are displayed on a VR display, the VR scene including an image of the target three-dimensional model and an image of the target video.
2. The method according to claim 1, characterized in that the rendering a left eye picture in real time according to the left eye orientation information, the target three-dimensional model and the target video comprises:
rendering the target three-dimensional model onto a first texture according to the left eye orientation information;
rendering the target video onto a second texture according to the left eye orientation information, wherein the second texture is based on a billboard patch technique;
and the rendering a right eye picture in real time according to the right eye orientation information, the target three-dimensional model and the target video comprises:
rendering the target three-dimensional model onto a third texture according to the right eye orientation information;
rendering the target video onto a fourth texture according to the right eye orientation information, wherein the fourth texture is based on the billboard patch technique.
3. The method according to claim 1 or 2, characterized in that the determining a target video according to the left eye position information, the right eye position information and the multiple videos shot in advance comprises:
averaging the left eye position information and the right eye position information to obtain an average position;
selecting at least two videos from the multiple videos according to the average position;
extracting, for each of the at least two videos, the video frame corresponding to the current moment;
performing an interpolation operation on the at least two video frames according to the average position and the shooting positions of the at least two videos, to obtain the target video.
4. The method according to claim 1 or 2, characterized in that the determining a target video according to the left eye position information, the right eye position information and the multiple videos shot in advance comprises:
averaging the left eye position information and the right eye position information to obtain an average position;
selecting the target video from the multiple videos according to the average position, wherein the shooting position of the target video is, among all shooting positions of the multiple videos, the one closest to the average position.
5. The method according to any one of claims 1 to 4, characterized in that the multiple videos are videos that contain only a target object after transparent processing has been applied to the original videos.
6. The method according to claim 5, characterized in that the target object is a person.
7. The method according to any one of claims 1 to 6, characterized in that the left eye position information, the right eye position information, the left eye orientation information and the right eye orientation information are determined according to the collected current posture information of the user.
8. The method according to claim 7, characterized in that the posture information includes at least one of head posture information, limb posture information, torso posture information, muscle electrical stimulation information, eye tracking information, skin perception information, motion perception information and brain signal information.
9. A processor, characterized by comprising an acquisition module, a computing module and a rendering module, wherein
the acquisition module is configured to acquire left eye position information, right eye position information, left eye orientation information and right eye orientation information of a user;
the computing module is configured to determine a target three-dimensional model from a three-dimensional model library according to the left eye position information and the right eye position information acquired by the acquisition module;
the computing module is further configured to determine a target video according to the left eye position information, the right eye position information and multiple videos shot in advance, wherein the multiple videos are videos shot respectively from different shooting positions;
the rendering module is configured to render a left eye picture in real time according to the left eye orientation information, the target three-dimensional model and the target video;
the rendering module is further configured to render a right eye picture in real time according to the right eye orientation information, the target three-dimensional model and the target video;
wherein a virtual reality (VR) scene is formed when the left eye picture and the right eye picture are displayed on a VR display, the VR scene including an image of the target three-dimensional model and an image of the target video.
10. The processor according to claim 9, characterized in that the rendering module is specifically configured to:
render the target three-dimensional model onto a first texture according to the left eye orientation information;
render the target video onto a second texture according to the left eye orientation information, wherein the second texture is based on a billboard patch technique;
render the target three-dimensional model onto a third texture according to the right eye orientation information;
render the target video onto a fourth texture according to the right eye orientation information, wherein the fourth texture is based on the billboard patch technique.
11. The processor according to claim 9 or 10, characterized in that the computing module determining the target video according to the left eye position information, the right eye position information and the multiple videos shot in advance comprises:
averaging the left eye position information and the right eye position information to obtain an average position;
selecting at least two videos from the multiple videos according to the average position;
extracting, for each of the at least two videos, the video frame corresponding to the current moment;
performing an interpolation operation on the at least two video frames according to the average position and the shooting positions of the at least two videos, to obtain the target video.
12. The processor according to claim 9 or 10, characterized in that the computing module determining the target video according to the left eye position information, the right eye position information and the multiple videos shot in advance comprises:
averaging the left eye position information and the right eye position information to obtain an average position;
selecting the target video from the multiple videos according to the average position, wherein the shooting position of the target video is, among all shooting positions of the multiple videos, the one closest to the average position.
13. The processor according to any one of claims 9 to 12, characterized in that the multiple videos are videos that contain only a target object after transparent processing has been applied to the original videos.
14. The processor according to claim 13, characterized in that the target object is a person.
15. The processor according to any one of claims 9 to 14, characterized in that the left eye position information, the right eye position information, the left eye orientation information and the right eye orientation information acquired by the acquisition module are determined according to the collected current posture information of the user.
16. The processor according to claim 15, characterized in that the posture information includes at least one of head posture information, limb posture information, torso posture information, muscle electrical stimulation information, eye tracking information, skin perception information, motion perception information and brain signal information.
17. The processor according to any one of claims 9 to 16, characterized in that the processor includes at least one of a central processing unit (CPU) and a graphics processing unit (GPU).
18. A graphics processing method, characterized by comprising:
collecting current posture information of a user;
acquiring left eye position information, right eye position information, left eye orientation information and right eye orientation information of the user according to the posture information;
determining a target three-dimensional model from a three-dimensional model library according to the left eye position information and the right eye position information;
determining a target video according to the left eye position information, the right eye position information and multiple videos shot in advance, wherein the multiple videos are videos shot respectively from different shooting positions;
rendering a left eye picture in real time according to the left eye orientation information, the target three-dimensional model and the target video;
rendering a right eye picture in real time according to the right eye orientation information, the target three-dimensional model and the target video;
displaying the left eye picture and the right eye picture, wherein a VR scene is formed when the left eye picture and the right eye picture are displayed, the VR scene including an image of the target three-dimensional model and an image of the target video.
19. The method according to claim 18, characterized in that the rendering a left eye picture in real time according to the left eye orientation information, the target three-dimensional model and the target video comprises:
rendering the target three-dimensional model onto a first texture according to the left eye orientation information;
rendering the target video onto a second texture according to the left eye orientation information, wherein the second texture is based on a billboard patch technique;
and the rendering a right eye picture in real time according to the right eye orientation information, the target three-dimensional model and the target video comprises:
rendering the target three-dimensional model onto a third texture according to the right eye orientation information;
rendering the target video onto a fourth texture according to the right eye orientation information, wherein the fourth texture is based on the billboard patch technique.
20. The method according to claim 18 or 19, characterized in that the determining a target video according to the left eye position information, the right eye position information and the multiple videos shot in advance comprises:
averaging the left eye position information and the right eye position information to obtain an average position;
selecting at least two videos from the multiple videos according to the average position;
extracting, for each of the at least two videos, the video frame corresponding to the current moment;
performing an interpolation operation on the at least two video frames according to the average position and the shooting positions of the at least two videos, to obtain the target video.
21. The method according to claim 18 or 19, characterized in that the determining a target video according to the left eye position information, the right eye position information and the multiple videos shot in advance comprises:
averaging the left eye position information and the right eye position information to obtain an average position;
selecting the target video from the multiple videos according to the average position, wherein the shooting position of the target video is, among all shooting positions of the multiple videos, the one closest to the average position.
22. The method according to any one of claims 18 to 21, characterized in that the multiple videos are videos that contain only a target object after transparent processing has been applied to the original videos.
23. The method according to claim 22, characterized in that the target object is a person.
24. The method according to any one of claims 18 to 23, characterized in that the collecting current posture information of the user comprises:
collecting at least one of current head posture information, limb posture information, torso posture information, muscle electrical stimulation information, eye tracking information, skin perception information, motion perception information and brain signal information of the user.
25. A virtual reality (VR) system, characterized by comprising a posture collection device, a processing unit and a display device, wherein
the posture collection device is configured to collect current posture information of a user;
the processing unit is configured to:
acquire left eye position information, right eye position information, left eye orientation information and right eye orientation information of the user according to the posture information;
determine a target three-dimensional model from a three-dimensional model library according to the left eye position information and the right eye position information;
determine a target video according to the left eye position information, the right eye position information and multiple videos shot in advance, wherein the multiple videos are videos shot respectively from different shooting positions;
render a left eye picture in real time according to the left eye orientation information, the target three-dimensional model and the target video;
render a right eye picture in real time according to the right eye orientation information, the target three-dimensional model and the target video;
and the display device is configured to display the left eye picture and the right eye picture, wherein a VR scene is formed when the left eye picture and the right eye picture are displayed, the VR scene including an image of the target three-dimensional model and an image of the target video.
26. The VR system according to claim 25, characterized in that the processing unit rendering the left eye picture in real time according to the left eye orientation information, the target three-dimensional model, and the target video comprises:
rendering the target three-dimensional model onto a first texture according to the left eye orientation information; and
rendering the target video onto a second texture according to the left eye orientation information, wherein the second texture is based on a billboard patch technique;
and the processing unit rendering the right eye picture in real time according to the right eye orientation information, the target three-dimensional model, and the target video comprises:
rendering the target three-dimensional model onto a third texture according to the right eye orientation information; and
rendering the target video onto a fourth texture according to the right eye orientation information, wherein the fourth texture is based on the billboard patch technique.
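The billboard patch technique referenced in claim 26 places the video on a quad that is rotated each frame so its normal points at the viewer's eye. A geometric sketch of that idea, assuming a world-up vector of +y (the patent does not fix a convention; a degenerate case arises when the eye is directly above the patch):

```python
def _sub(a, b): return tuple(x - y for x, y in zip(a, b))
def _add(a, b): return tuple(x + y for x, y in zip(a, b))
def _scale(a, s): return tuple(x * s for x in a)

def _cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def _norm(a):
    n = sum(x * x for x in a) ** 0.5
    return tuple(x / n for x in a)

def billboard_corners(center, eye, width, height):
    """Corners of a viewer-facing quad carrying the video texture.

    Sketch of the billboard idea: build an orthonormal basis whose
    normal points from the patch centre toward the eye, then span the
    quad with the resulting right/up axes. World-up (0,1,0) is assumed.
    """
    normal = _norm(_sub(eye, center))
    right = _norm(_cross((0.0, 1.0, 0.0), normal))
    up = _cross(normal, right)
    w, h = width / 2.0, height / 2.0
    return [_add(center, _add(_scale(right, sx * w), _scale(up, sy * h)))
            for sx, sy in ((-1, -1), (1, -1), (1, 1), (-1, 1))]
```

In a real renderer the same computation would feed the vertex positions of the second and fourth textures' quads, once per eye.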
27. The VR system according to claim 25 or 26, characterized in that the processing unit determining the target video according to the left eye position information, the right eye position information, and the plurality of pre-shot videos comprises:
averaging the left eye position information and the right eye position information to obtain a mean position;
selecting at least two videos from the plurality of videos according to the mean position;
extracting, from each of the at least two videos, the video frame corresponding to the current moment; and
performing an interpolation operation on the at least two video frames according to the mean position and the shooting positions of the at least two videos, to obtain the target video.
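Claim 27 does not specify the interpolation rule; inverse-distance weighting of the co-timed frames is one common choice and is assumed here. A sketch in which frames are nested lists of grey levels (the data layout and function name are illustrative, not from the patent):

```python
def blend_frames(mean_pos, shoot_positions, frames):
    """Distance-weighted blend of co-timed frames from selected videos.

    Weights are inversely proportional to the distance between the mean
    eye position and each video's shooting position -- an assumed rule;
    the claim only states that the frames are interpolated.
    """
    dists = [max(sum((m - p) ** 2 for m, p in zip(mean_pos, pos)) ** 0.5, 1e-9)
             for pos in shoot_positions]
    weights = [1.0 / d for d in dists]
    total = sum(weights)
    weights = [w / total for w in weights]
    # Blend per pixel across all selected frames.
    return [[sum(w * f[r][c] for w, f in zip(weights, frames))
             for c in range(len(frames[0][0]))]
            for r in range(len(frames[0]))]
```

With the viewer midway between two shooting positions the weights are equal, so the output frame is the plain average of the two input frames.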
28. The VR system according to claim 25 or 26, characterized in that the processing unit determining the target video according to the left eye position information, the right eye position information, and the plurality of pre-shot videos comprises:
averaging the left eye position information and the right eye position information to obtain a mean position; and
selecting the target video from the plurality of videos according to the mean position, wherein, among all the shooting positions of the plurality of videos, the shooting position of the target video is the closest to the mean position.
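The selection rule of claim 28 amounts to a nearest-neighbour search over shooting positions after averaging the eye positions. A minimal sketch (function names and 3-D tuples are illustrative assumptions):

```python
def mean_position(left_eye, right_eye):
    """Mean of the two eye positions -- the first step of claims 27 and 28."""
    return tuple((l + r) / 2.0 for l, r in zip(left_eye, right_eye))

def nearest_video(mean_pos, shoot_positions):
    """Index of the pre-shot video whose shooting position is closest
    to the mean position (the selection rule of claim 28)."""
    def dist2(pos):
        return sum((m - p) ** 2 for m, p in zip(mean_pos, pos))
    return min(range(len(shoot_positions)),
               key=lambda i: dist2(shoot_positions[i]))
```

Squared distance is compared instead of true distance, which avoids a square root without changing which shooting position wins.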
29. The VR system according to any one of claims 25 to 28, characterized in that each of the plurality of videos is a video obtained by applying transparency processing to an original video so that it contains only a target object.
30. The VR system according to claim 29, characterized in that the target object is a person.
31. The VR system according to any one of claims 25 to 30, characterized in that the posture collection device is specifically configured to:
collect at least one of current head posture information, limb posture information, torso posture information, muscle electrical stimulation information, eye tracking information, skin sensing information, motion perception information, and brain signal information of the user.
32. The VR system according to any one of claims 25 to 31, characterized in that the processing unit comprises at least one of a central processing unit (CPU) and a graphics processing unit (GPU).
33. A computer storage medium having instructions stored thereon which, when run on a computer, cause the computer to perform the method according to any one of claims 1 to 8.
34. A computer storage medium having instructions stored thereon which, when run on a computer, cause the computer to perform the method according to any one of claims 18 to 24.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710379516.5A CN107315470B (en) | 2017-05-25 | 2017-05-25 | Graphic processing method, processor and virtual reality system |
PCT/CN2018/084714 WO2018214697A1 (en) | 2017-05-25 | 2018-04-27 | Graphics processing method, processor, and virtual reality system |
TW107116847A TWI659335B (en) | 2017-05-25 | 2018-05-17 | Graphic processing method and device, virtual reality system, computer storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710379516.5A CN107315470B (en) | 2017-05-25 | 2017-05-25 | Graphic processing method, processor and virtual reality system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107315470A true CN107315470A (en) | 2017-11-03 |
CN107315470B CN107315470B (en) | 2018-08-17 |
Family
ID=60182018
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710379516.5A Active CN107315470B (en) | 2017-05-25 | 2017-05-25 | Graphic processing method, processor and virtual reality system |
Country Status (3)
Country | Link |
---|---|
CN (1) | CN107315470B (en) |
TW (1) | TWI659335B (en) |
WO (1) | WO2018214697A1 (en) |
Family Cites Families (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN100573595C (en) * | 2003-06-20 | 2009-12-23 | 日本电信电话株式会社 | Virtual visual point image generating method and three-dimensional image display method and device |
US8400493B2 (en) * | 2007-06-25 | 2013-03-19 | Qualcomm Incorporated | Virtual stereoscopic camera |
CN102404584B (en) * | 2010-09-13 | 2014-05-07 | 腾讯科技(成都)有限公司 | Method and device for adjusting scene left camera and scene right camera, three dimensional (3D) glasses and client side |
US9292973B2 (en) * | 2010-11-08 | 2016-03-22 | Microsoft Technology Licensing, Llc | Automatic variable virtual focus for augmented reality displays |
US9255813B2 (en) * | 2011-10-14 | 2016-02-09 | Microsoft Technology Licensing, Llc | User controlled real object disappearance in a mixed reality display |
US9451162B2 (en) * | 2013-08-21 | 2016-09-20 | Jaunt Inc. | Camera array including camera modules |
US20150358539A1 (en) * | 2014-06-06 | 2015-12-10 | Jacob Catt | Mobile Virtual Reality Camera, Method, And System |
CN106385576B (en) * | 2016-09-07 | 2017-12-08 | 深圳超多维科技有限公司 | Stereoscopic Virtual Reality live broadcasting method, device and electronic equipment |
CN106507086B (en) * | 2016-10-28 | 2018-08-31 | 北京灵境世界科技有限公司 | A kind of 3D rendering methods of roaming outdoor scene VR |
CN107315470B (en) * | 2017-05-25 | 2018-08-17 | 腾讯科技(深圳)有限公司 | Graphic processing method, processor and virtual reality system |
- 2017-05-25 CN CN201710379516.5A patent/CN107315470B/en active Active
- 2018-04-27 WO PCT/CN2018/084714 patent/WO2018214697A1/en active Application Filing
- 2018-05-17 TW TW107116847A patent/TWI659335B/en active
Patent Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20060132915A1 (en) * | 2004-12-16 | 2006-06-22 | Yang Ung Y | Visual interfacing apparatus for providing mixed multiple stereo images |
CN102056003A (en) * | 2009-11-04 | 2011-05-11 | 三星电子株式会社 | High density multi-view image display system and method with active sub-pixel rendering |
WO2011111349A1 (en) * | 2010-03-10 | 2011-09-15 | パナソニック株式会社 | 3d video display device and parallax adjustment method |
CN104603673A (en) * | 2012-09-03 | 2015-05-06 | Smi创新传感技术有限公司 | Head mounted system and method to compute and render stream of digital images using head mounted system |
CN104679509A (en) * | 2015-02-06 | 2015-06-03 | 腾讯科技(深圳)有限公司 | Graph rendering method and device |
US20170099478A1 (en) * | 2015-10-04 | 2017-04-06 | Thika Holdings Llc | Eye gaze responsive virtual reality headset |
CN106527696A (en) * | 2016-10-31 | 2017-03-22 | 宇龙计算机通信科技(深圳)有限公司 | Method for implementing virtual operation and wearable device |
CN106657906A (en) * | 2016-12-13 | 2017-05-10 | 国家电网公司 | Information equipment monitoring system with function of self-adaptive scenario virtual reality |
CN106643699A (en) * | 2016-12-26 | 2017-05-10 | 影动(北京)科技有限公司 | Space positioning device and positioning method in VR (virtual reality) system |
Cited By (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2018214697A1 (en) * | 2017-05-25 | 2018-11-29 | 腾讯科技(深圳)有限公司 | Graphics processing method, processor, and virtual reality system |
CN110134222A (en) * | 2018-02-02 | 2019-08-16 | 上海集鹰科技有限公司 | A kind of VR shows positioning sighting system and its positioning method of sight |
CN108616752A (en) * | 2018-04-25 | 2018-10-02 | 北京赛博恩福科技有限公司 | Support the helmet and control method of augmented reality interaction |
CN108616752B (en) * | 2018-04-25 | 2020-11-06 | 北京赛博恩福科技有限公司 | Head-mounted equipment supporting augmented reality interaction and control method |
CN109032350A (en) * | 2018-07-10 | 2018-12-18 | 深圳市创凯智能股份有限公司 | Spinning sensation mitigates method, virtual reality device and computer readable storage medium |
CN109032350B (en) * | 2018-07-10 | 2021-06-29 | 深圳市创凯智能股份有限公司 | Vertigo sensation alleviating method, virtual reality device, and computer-readable storage medium |
CN110570513A (en) * | 2018-08-17 | 2019-12-13 | 阿里巴巴集团控股有限公司 | method and device for displaying vehicle damage information |
CN111065053A (en) * | 2018-10-16 | 2020-04-24 | 北京凌宇智控科技有限公司 | System and method for video streaming |
CN111064985A (en) * | 2018-10-16 | 2020-04-24 | 北京凌宇智控科技有限公司 | System, method and device for realizing video streaming |
US11500455B2 (en) | 2018-10-16 | 2022-11-15 | Nolo Co., Ltd. | Video streaming system, video streaming method and apparatus |
CN109976527B (en) * | 2019-03-28 | 2022-08-12 | 重庆工程职业技术学院 | Interactive VR display system |
CN109976527A (en) * | 2019-03-28 | 2019-07-05 | 重庆工程职业技术学院 | Interactive VR display systems |
CN112015264A (en) * | 2019-05-30 | 2020-12-01 | 深圳市冠旭电子股份有限公司 | Virtual reality display method, virtual reality display device and virtual reality equipment |
CN112015264B (en) * | 2019-05-30 | 2023-10-20 | 深圳市冠旭电子股份有限公司 | Virtual reality display method, virtual reality display device and virtual reality equipment |
CN111857336A (en) * | 2020-07-10 | 2020-10-30 | 歌尔科技有限公司 | Head-mounted device, rendering method thereof, and storage medium |
CN111857336B (en) * | 2020-07-10 | 2022-03-25 | 歌尔科技有限公司 | Head-mounted device, rendering method thereof, and storage medium |
CN112073669A (en) * | 2020-09-18 | 2020-12-11 | 三星电子(中国)研发中心 | Method and device for realizing video communication |
CN112308982A (en) * | 2020-11-11 | 2021-02-02 | 安徽山水空间装饰有限责任公司 | Decoration effect display method and device |
CN113436489A (en) * | 2021-06-09 | 2021-09-24 | 深圳大学 | Study leaving experience system and method based on virtual reality |
Also Published As
Publication number | Publication date |
---|---|
CN107315470B (en) | 2018-08-17 |
TW201835723A (en) | 2018-10-01 |
TWI659335B (en) | 2019-05-11 |
WO2018214697A1 (en) | 2018-11-29 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107315470B (en) | Graphic processing method, processor and virtual reality system | |
US11687151B2 (en) | Methods and apparatuses for determining and/or evaluating localizing maps of image display devices | |
US10109113B2 (en) | Pattern and method of virtual reality system based on mobile devices | |
US10324522B2 (en) | Methods and systems of a motion-capture body suit with wearable body-position sensors | |
KR101295471B1 (en) | A system and method for 3D space-dimension based image processing | |
US20190080516A1 (en) | Systems and methods for augmented reality preparation, processing, and application | |
CN105188516B (en) | For strengthening the System and method for virtual reality | |
CN108369653A (en) | Use the eyes gesture recognition of eye feature | |
CN111862348B (en) | Video display method, video generation method, device, equipment and storage medium | |
CN112198959A (en) | Virtual reality interaction method, device and system | |
JP2002058045A (en) | System and method for entering real object into virtual three-dimensional space | |
JP2020135222A (en) | Image generation device, image generation method, and program | |
CN108416832A (en) | Display methods, device and the storage medium of media information | |
WO2017139695A1 (en) | Multiuser telepresence interaction | |
JP6775669B2 (en) | Information processing device | |
CN108416255B (en) | System and method for capturing real-time facial expression animation of character based on three-dimensional animation | |
US20200042077A1 (en) | Information processing apparatus | |
US20240078767A1 (en) | Information processing apparatus and information processing method | |
JP7044846B2 (en) | Information processing equipment | |
JP2023156940A (en) | Image processing apparatus, image processing method, and storage medium storing program | |
JP6739539B2 (en) | Information processing equipment | |
Saffold et al. | Virtualizing Humans for Game Ready Avatars |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
TR01 | Transfer of patent right | ||
Effective date of registration: 20240104
Address after: 518057, 35th floor, Tencent Building, No. 1 High-tech Zone, Nanshan District, Shenzhen City, Guangdong Province
Patentee after: TENCENT TECHNOLOGY (SHENZHEN) Co.,Ltd.
Patentee after: TENCENT CLOUD COMPUTING (BEIJING) Co.,Ltd.
Address before: 518000, 35th floor, Tencent Building, No. 1 High-tech Zone, Nanshan District, Shenzhen City, Guangdong Province
Patentee before: TENCENT TECHNOLOGY (SHENZHEN) Co.,Ltd.