CN107315470B - Graphic processing method, processor and virtual reality system - Google Patents
Graphic processing method, processor and virtual reality system
- Publication number
- CN107315470B (application CN201710379516.5A)
- Authority
- CN
- China
- Prior art keywords
- video
- information
- target
- left eye
- right eye
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/013—Eye tracking input arrangements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/015—Input arrangements based on nervous system activity detection, e.g. brain waves [EEG] detection, electromyograms [EMG] detection, electrodermal response detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/005—General purpose rendering architectures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/04—Texture mapping
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/275—Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2203/00—Indexing scheme relating to G06F3/00 - G06F3/048
- G06F2203/01—Indexing scheme relating to G06F3/01
- G06F2203/011—Emotion or mood input determined on the basis of sensed human body parameters such as pulse, heart rate or beat, temperature of skin, facial expressions, iris, voice pitch, brain activity patterns
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2203/00—Indexing scheme relating to G06F3/00 - G06F3/048
- G06F2203/01—Indexing scheme relating to G06F3/01
- G06F2203/012—Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2215/00—Indexing scheme for image rendering
- G06T2215/16—Using real world measurements to influence rendering
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Human Computer Interaction (AREA)
- Computer Graphics (AREA)
- Biomedical Technology (AREA)
- Health & Medical Sciences (AREA)
- Dermatology (AREA)
- General Health & Medical Sciences (AREA)
- Neurology (AREA)
- Neurosurgery (AREA)
- Signal Processing (AREA)
- Multimedia (AREA)
- Processing Or Creating Images (AREA)
Abstract
This application provides a graphics processing method, a processor, and a virtual reality system. The method includes: obtaining left-eye position information, right-eye position information, left-eye orientation information, and right-eye orientation information of a user; determining a target three-dimensional model from a three-dimensional model library according to the left-eye position information and the right-eye position information; determining a target video according to the left-eye position information, the right-eye position information, and multiple pre-shot videos, where the multiple videos are videos shot from different shooting positions; rendering a left-eye picture in real time according to the left-eye orientation information, the target three-dimensional model, and the target video; and rendering a right-eye picture in real time according to the right-eye orientation information, the target three-dimensional model, and the target video. The VR scene formed by the left-eye picture and the right-eye picture includes an image of the target three-dimensional model and an image of the target video. The graphics processing method of this application can realistically present real-scene objects and provide the user with a true sense of presence, thereby improving the user experience.
Description
Technical field
This application relates to the field of graphics processing, and more particularly to a graphics processing method, a processor, and a virtual reality system.
Background technology
A mainstream technology currently used to generate virtual reality (Virtual Reality, VR) scenes is three-dimensional (three dimensional, 3D) modeling, which generates VR scenes mainly by designing them from 3D models. In some VR game products, VR scenes are built by combining 3D modeling with real-time rendering. Wearing a VR head-mounted display device, such as VR glasses or a VR helmet, as the viewing medium, the user is immersed in the VR scene and interacts with the characters or other objects in it, thereby obtaining a true sense of space; a roller-coaster VR scene is one of the most common examples. Although current 3D modeling technology can already achieve fairly lifelike effects for the objects in a VR scene, it still falls far short of users' requirements.
Summary of the invention
This application provides a graphics processing method, a processor, and a virtual reality system that can realistically present real-scene objects and provide the user with a true sense of presence, thereby improving the user experience.
In a first aspect, a graphics processing method is provided, including: obtaining left-eye position information, right-eye position information, left-eye orientation information, and right-eye orientation information of a user; determining a target three-dimensional model from a three-dimensional model library according to the left-eye position information and the right-eye position information; determining a target video according to the left-eye position information, the right-eye position information, and multiple pre-shot videos, where the multiple videos are videos shot from different shooting positions; rendering a left-eye picture in real time according to the left-eye orientation information, the target three-dimensional model, and the target video; and rendering a right-eye picture in real time according to the right-eye orientation information, the target three-dimensional model, and the target video. When displayed on a virtual reality (VR) display, the left-eye picture and the right-eye picture form a VR scene that includes an image of the target three-dimensional model and an image of the target video.
The graphics processing method of the first aspect determines the target three-dimensional model according to the position information of the user's left and right eyes, determines the target video from multiple pre-shot videos, and renders the left-eye picture and the right-eye picture separately in real time to display the VR scene. Because the VR scene includes both an image of the target three-dimensional model and an image of the target video, and the target video can realistically present a real scene, the method provides the user with a true sense of presence while keeping the whole VR scene interactive, thereby improving the user experience.
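By way of illustration only (this is not the claimed implementation), the selection-and-render flow of the first aspect could be sketched as follows; all function and variable names are hypothetical, and content selection is reduced to a nearest-position lookup:

```python
import math

def render_frame(eyes, models, videos):
    """Toy end-to-end flow for one VR frame.

    eyes: maps "left"/"right" to (position, orientation) tuples.
    models/videos: lists of (position, name) pairs standing in for the
    3D model library and the pre-shot videos.
    """
    (lp, _), (rp, _) = eyes["left"], eyes["right"]
    # Use the midpoint of the two eye positions to select scene content.
    mid = tuple((a + b) / 2 for a, b in zip(lp, rp))
    model = min(models, key=lambda m: math.dist(m[0], mid))[1]
    video = min(videos, key=lambda v: math.dist(v[0], mid))[1]
    # Each eye picture combines the same content, rendered per-eye orientation.
    return {eye: (model, video, ori) for eye, (_, ori) in eyes.items()}
```

In a real system the last step would invoke a GPU renderer per eye; here it simply records which content each eye would be rendered with.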
In a possible implementation of the first aspect, rendering the left-eye picture in real time according to the left-eye orientation information, the target three-dimensional model, and the target video includes: rendering the target three-dimensional model onto a first texture according to the left-eye orientation information; and rendering the target video onto a second texture according to the left-eye orientation information, where the second texture is based on a billboard patch technique. Rendering the right-eye picture in real time according to the right-eye orientation information, the target three-dimensional model, and the target video includes: rendering the target three-dimensional model onto a third texture according to the right-eye orientation information; and rendering the target video onto a fourth texture according to the right-eye orientation information, where the fourth texture is based on the billboard patch technique.
It should be understood that the billboard patch may be tilted in the left-eye picture, and the specific tilt angle can be calculated from the left-eye position information; likewise, the billboard patch may be tilted in the right-eye picture, and that tilt angle can be calculated from the right-eye position information.
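For illustration, one simple way to compute such a tilt angle, here a yaw rotation about the vertical axis that turns the billboard toward the eye, might look like this; the coordinate convention (y up, z forward) is an assumption:

```python
import math

def billboard_yaw(billboard_pos, eye_pos):
    """Yaw angle (radians, about the vertical y-axis) that rotates a
    billboard quad at billboard_pos to face the eye at eye_pos."""
    dx = eye_pos[0] - billboard_pos[0]
    dz = eye_pos[2] - billboard_pos[2]
    return math.atan2(dx, dz)  # 0 when the eye is straight ahead on +z
```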
In a possible implementation of the first aspect, determining the target video according to the left-eye position information, the right-eye position information, and the multiple pre-shot videos includes: averaging the left-eye position information and the right-eye position information to obtain a mean position; selecting at least two videos from the multiple videos according to the mean position; extracting, from each of the at least two videos, the video frame corresponding to the current moment; and performing an interpolation operation on the at least two video frames according to the mean position and the shooting positions of the at least two videos, to obtain the target video.
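The averaging-and-interpolation steps above can be sketched as follows. This toy version treats frames as rows of grayscale values and blends the two nearest cameras by inverse-distance weighting, which is only one plausible choice of interpolation; all names are hypothetical:

```python
import math

def blend_frames(videos, left_pos, right_pos, t):
    """videos: list of (camera_pos, frames) pairs, where frames[t] is a
    list of grayscale pixel values. Needs at least two videos.

    Averages the eye positions, picks the two cameras nearest the mean
    position, and linearly blends their frame-t pixels so that the closer
    camera gets the larger weight."""
    mean = tuple((l + r) / 2 for l, r in zip(left_pos, right_pos))
    two = sorted(videos, key=lambda v: math.dist(v[0], mean))[:2]
    (p0, f0), (p1, f1) = two
    d0, d1 = math.dist(p0, mean), math.dist(p1, mean)
    w = d0 / (d0 + d1) if d0 + d1 else 0.5  # weight on the farther camera's frame
    return [a * (1 - w) + b * w for a, b in zip(f0[t], f1[t])]
```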
In another possible implementation of the first aspect, determining the target video according to the left-eye position information, the right-eye position information, and the multiple pre-shot videos includes: averaging the left-eye position information and the right-eye position information to obtain a mean position; and selecting, from the multiple videos according to the mean position, the target video whose shooting position is the closest to the mean position among all the shooting positions of the multiple videos.
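This simpler alternative amounts to a nearest-neighbour lookup over shooting positions; a minimal sketch, assuming positions are plain coordinate tuples:

```python
import math

def pick_nearest_video(videos, left_pos, right_pos):
    """videos: list of (shooting_position, video) pairs. Returns the video
    whose shooting position is closest to the averaged eye position."""
    mean = tuple((l + r) / 2 for l, r in zip(left_pos, right_pos))
    return min(videos, key=lambda v: math.dist(v[0], mean))[1]
```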
In a possible implementation of the first aspect, each of the multiple videos is a video obtained by applying transparency processing to an original video so that it contains only a target object. It should be understood that the transparency processing may be based on alpha transparency.
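A minimal sketch of such transparency processing, assuming the original video's background is a known solid colour that is keyed out to alpha 0 (chroma-key style); the pixel layout and tolerance are assumptions, not details from the patent:

```python
def key_out_background(frame, bg_color, tol=10):
    """frame: list of rows of (r, g, b) pixels. Returns RGBA pixels where
    any pixel within tol of bg_color on every channel becomes fully
    transparent (alpha 0); everything else becomes fully opaque (alpha 255)."""
    def is_bg(px):
        return all(abs(c - b) <= tol for c, b in zip(px, bg_color))
    return [[(*px, 0 if is_bg(px) else 255) for px in row] for row in frame]
```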
In a possible implementation of the first aspect, the target object is a person.
In a possible implementation of the first aspect, the left-eye position information, the right-eye position information, the left-eye orientation information, and the right-eye orientation information are determined according to collected information about the user's current posture.
In a possible implementation of the first aspect, the posture information includes at least one of head posture information, limb posture information, trunk posture information, muscle electrical-stimulation information, eye tracking information, skin perception information, motion perception information, and brain signal information.
In a second aspect, a processor is provided, including an acquisition module, a computing module, and a rendering module. The acquisition module is configured to obtain left-eye position information, right-eye position information, left-eye orientation information, and right-eye orientation information of a user. The computing module is configured to determine a target three-dimensional model from a three-dimensional model library according to the left-eye position information and the right-eye position information obtained by the acquisition module, and is further configured to determine a target video according to the left-eye position information, the right-eye position information, and multiple pre-shot videos, where the multiple videos are videos shot from different shooting positions. The rendering module is configured to render a left-eye picture in real time according to the left-eye orientation information, the target three-dimensional model, and the target video, and to render a right-eye picture in real time according to the right-eye orientation information, the target three-dimensional model, and the target video. When displayed on a virtual reality (VR) display, the left-eye picture and the right-eye picture form a VR scene that includes an image of the target three-dimensional model and an image of the target video.
In a possible implementation of the second aspect, the rendering module is specifically configured to: render the target three-dimensional model onto a first texture according to the left-eye orientation information; render the target video onto a second texture according to the left-eye orientation information, where the second texture is based on a billboard patch technique; render the target three-dimensional model onto a third texture according to the right-eye orientation information; and render the target video onto a fourth texture according to the right-eye orientation information, where the fourth texture is based on the billboard patch technique.
In a possible implementation of the second aspect, the computing module determines the target video by: averaging the left-eye position information and the right-eye position information to obtain a mean position; selecting at least two videos from the multiple videos according to the mean position; extracting, from each of the at least two videos, the video frame corresponding to the current moment; and performing an interpolation operation on the at least two video frames according to the mean position and the shooting positions of the at least two videos, to obtain the target video.
In another possible implementation of the second aspect, the computing module determines the target video by: averaging the left-eye position information and the right-eye position information to obtain a mean position; and selecting, from the multiple videos according to the mean position, the target video whose shooting position is the closest to the mean position among all the shooting positions of the multiple videos.
In a possible implementation of the second aspect, the multiple videos are videos obtained by applying transparency processing to original videos so that they contain only a target object.
In a possible implementation of the second aspect, the target object is a person.
In a possible implementation of the second aspect, the left-eye position information, the right-eye position information, the left-eye orientation information, and the right-eye orientation information obtained by the acquisition module are determined according to collected information about the user's current posture.
In a possible implementation of the second aspect, the posture information includes at least one of head posture information, limb posture information, trunk posture information, muscle electrical-stimulation information, eye tracking information, skin perception information, motion perception information, and brain signal information.
It should be understood that the processor may include at least one of a central processing unit (CPU) and a graphics processing unit (GPU). The functions of the computing module may correspond to the CPU, and the functions of the rendering module may correspond to the GPU. The GPU may reside on a graphics card and is also known as a display core, vision processor, or display chip.
The effects obtainable by the second aspect and its implementations correspond to those obtainable by the first aspect and its corresponding implementations, and are not repeated here.
In a third aspect, a graphics processing method is provided, including: collecting information about a user's current posture; obtaining, according to the posture information, left-eye position information, right-eye position information, left-eye orientation information, and right-eye orientation information of the user; determining a target three-dimensional model from a three-dimensional model library according to the left-eye position information and the right-eye position information; determining a target video according to the left-eye position information, the right-eye position information, and multiple pre-shot videos, where the multiple videos are videos shot from different shooting positions; rendering a left-eye picture in real time according to the left-eye orientation information, the target three-dimensional model, and the target video; rendering a right-eye picture in real time according to the right-eye orientation information, the target three-dimensional model, and the target video; and displaying the left-eye picture and the right-eye picture, where the displayed left-eye picture and right-eye picture form a VR scene that includes an image of the target three-dimensional model and an image of the target video.
The graphics processing method of the third aspect collects the user's posture information to determine the positions of the user's left and right eyes, determines the target three-dimensional model according to the position information of the two eyes, determines the target video from multiple pre-shot videos, and renders the left-eye picture and the right-eye picture separately in real time to display the VR scene. Because the VR scene includes both an image of the target three-dimensional model and an image of the target video, and the target video can realistically present a real scene, the method provides the user with a true sense of presence while keeping the whole VR scene interactive, thereby improving the user experience.
In a possible implementation of the third aspect, rendering the left-eye picture in real time according to the left-eye orientation information, the target three-dimensional model, and the target video includes: rendering the target three-dimensional model onto a first texture according to the left-eye orientation information; and rendering the target video onto a second texture according to the left-eye orientation information, where the second texture is based on a billboard patch technique. Rendering the right-eye picture in real time according to the right-eye orientation information, the target three-dimensional model, and the target video includes: rendering the target three-dimensional model onto a third texture according to the right-eye orientation information; and rendering the target video onto a fourth texture according to the right-eye orientation information, where the fourth texture is based on the billboard patch technique.
In a possible implementation of the third aspect, determining the target video according to the left-eye position information, the right-eye position information, and the multiple pre-shot videos includes: averaging the left-eye position information and the right-eye position information to obtain a mean position; selecting at least two videos from the multiple videos according to the mean position; extracting, from each of the at least two videos, the video frame corresponding to the current moment; and performing an interpolation operation on the at least two video frames according to the mean position and the shooting positions of the at least two videos, to obtain the target video.
In another possible implementation of the third aspect, determining the target video according to the left-eye position information, the right-eye position information, and the multiple pre-shot videos includes: averaging the left-eye position information and the right-eye position information to obtain a mean position; and selecting, from the multiple videos according to the mean position, the target video whose shooting position is the closest to the mean position among all the shooting positions of the multiple videos.
In a possible implementation of the third aspect, the multiple videos are videos obtained by applying transparency processing to original videos so that they contain only a target object.
In a possible implementation of the third aspect, the target object is a person.
In a possible implementation of the third aspect, collecting the information about the user's current posture includes collecting at least one of the user's current head posture information, limb posture information, trunk posture information, muscle electrical-stimulation information, eye tracking information, skin perception information, motion perception information, and brain signal information.
In a fourth aspect, a virtual reality (VR) system is provided, including a posture collection apparatus, a processing apparatus, and a display apparatus. The posture collection apparatus is configured to collect information about a user's current posture. The processing apparatus is configured to: obtain, according to the posture information, left-eye position information, right-eye position information, left-eye orientation information, and right-eye orientation information of the user; determine a target three-dimensional model from a three-dimensional model library according to the left-eye position information and the right-eye position information; determine a target video according to the left-eye position information, the right-eye position information, and multiple pre-shot videos, where the multiple videos are videos shot from different shooting positions; render a left-eye picture in real time according to the left-eye orientation information, the target three-dimensional model, and the target video; and render a right-eye picture in real time according to the right-eye orientation information, the target three-dimensional model, and the target video. The display apparatus is configured to display the left-eye picture and the right-eye picture, where the displayed pictures form a VR scene that includes an image of the target three-dimensional model and an image of the target video.
In a possible implementation of the fourth aspect, the processing apparatus renders the left-eye picture in real time by: rendering the target three-dimensional model onto a first texture according to the left-eye orientation information; and rendering the target video onto a second texture according to the left-eye orientation information, where the second texture is based on a billboard patch technique. The processing apparatus renders the right-eye picture in real time by: rendering the target three-dimensional model onto a third texture according to the right-eye orientation information; and rendering the target video onto a fourth texture according to the right-eye orientation information, where the fourth texture is based on the billboard patch technique.
In a possible implementation of the fourth aspect, the processing apparatus determines the target video by: averaging the left-eye position information and the right-eye position information to obtain a mean position; selecting at least two videos from the multiple videos according to the mean position; extracting, from each of the at least two videos, the video frame corresponding to the current moment; and performing an interpolation operation on the at least two video frames according to the mean position and the shooting positions of the at least two videos, to obtain the target video.
In another possible implementation of the fourth aspect, the processing apparatus determines the target video by: averaging the left-eye position information and the right-eye position information to obtain a mean position; and selecting, from the multiple videos according to the mean position, the target video whose shooting position is the closest to the mean position among all the shooting positions of the multiple videos.
In a possible implementation of the fourth aspect, the multiple videos are videos obtained by applying transparency processing to original videos so that they contain only a target object.
In a possible implementation of the fourth aspect, the target object is a person.
In a possible implementation of the fourth aspect, the posture collection apparatus is specifically configured to collect at least one of the user's current head posture information, limb posture information, trunk posture information, muscle electrical-stimulation information, eye tracking information, skin perception information, motion perception information, and brain signal information.
In a possible implementation of the fourth aspect, the processing apparatus includes at least one of a central processing unit (CPU) and a graphics processing unit (GPU).
In a fifth aspect, a computer storage medium is provided that stores instructions which, when run on a computer, cause the computer to execute the method described in the first aspect or any possible implementation of the first aspect.
In a sixth aspect, a computer storage medium is provided that stores instructions which, when run on a computer, cause the computer to execute the method described in the third aspect or any possible implementation of the third aspect.
In a seventh aspect, a computer program product including instructions is provided; when a computer runs the instructions of the computer program product, the computer executes the method described in the first aspect or any possible implementation of the first aspect.
In an eighth aspect, a computer program product including instructions is provided; when a computer runs the instructions of the computer program product, the computer executes the method described in the third aspect or any possible implementation of the third aspect.
The effects obtainable by the second through eighth aspects and their implementations correspond to those obtainable by the first aspect and its corresponding implementations, and are not repeated here.
Description of the drawings
Fig. 1 is a schematic diagram of a video frame in a panoramic video.
Fig. 2 is a schematic comparison of VR scenes generated by 3D modeling technology and by panoramic video shooting technology, respectively.
Fig. 3 is a schematic flowchart of a graphics processing method according to an embodiment of the invention.
Fig. 4 is a schematic diagram of a scene to be presented according to an embodiment of the invention.
Fig. 5 is a schematic diagram of a scene being shot in advance according to an embodiment of the invention.
Fig. 6 is a schematic diagram of videos obtained at different shooting positions according to an embodiment of the invention.
Fig. 7 is a schematic diagram of determining the target video according to an embodiment of the invention.
Fig. 8 is a schematic diagram of presenting the target video according to an embodiment of the invention.
Fig. 9 is a schematic block diagram of a processor according to an embodiment of the invention.
Fig. 10 is a schematic diagram of a virtual reality system according to an embodiment of the invention.
Fig. 11 is a schematic diagram of a virtual reality system according to another embodiment of the invention.
Specific implementation modes
The technical solutions in this application are described below with reference to the accompanying drawings.
Another technology for generating VR scenes is described below, namely generating VR scenes with panoramic video shooting technology. A panoramic video, also known as a 360-degree stereo video, is similar to an ordinary video, except that it contains omnidirectional information around the shooting point. Fig. 1 is a schematic diagram of a video frame in a panoramic video. Generating a VR scene by panoramic video shooting means completing the panoramic video shooting of the VR scene with professional panoramic video shooting equipment and a shooting team, and then converting the panoramic video into a VR scene. Since the panoramic video is shot in advance, in a VR scene generated by panoramic video shooting the user can only change the direction of his or her eyes to watch the video; the user cannot enter the VR scene or interact with the people or other objects in it. Moreover, because a panoramic video contains omnidirectional information, the panoramic video file is generally very large, and the file of the generated VR scene is also very large.
Fig. 2 is a schematic comparison of VR scenes generated by 3D modeling technology and by panoramic video shooting technology, respectively. As shown in Fig. 2, the image of a VR scene generated by 3D modeling is a digital, interactive image that must be realized by real-time rendering, while a VR scene generated by panoramic video shooting is a live-action animation that does not need real-time rendering. A VR scene generated by 3D modeling offers good mobility and provides an immersive scene experience: the user can walk around everywhere in the scene. A VR scene generated by panoramic video shooting is limited to the scene actually captured: the user can obtain a 360-degree viewing angle from the position of the lens, but cannot walk around in the scene. A VR scene generated by 3D modeling takes user activity as its timeline; the VR scene is played out through a series of user activities, and the user can also experience new VR scenes through autonomous exploration. A VR scene generated by panoramic video shooting takes the movement of the director's lens as its timeline, and the scene is played in the order in which the director shot it. The playing platform of a VR scene generated by 3D modeling usually requires a VR head-mounted display device (referred to as a VR headset), such as VR glasses or a VR helmet; the VR headset can be connected to a PC, a mobile device, and so on. The playing platform of a VR scene generated by panoramic video shooting is usually a computing device or platform that includes a panoramic video player, including PCs, mobile devices, the YouTube platform, and so on. The storytelling mode of a VR scene generated by 3D modeling is that the user triggers the plot: the director does not control the physical position of the user in the constructed scene, and needs to guide and motivate the user to trigger the following plot along the direction of story development. The storytelling mode of a VR scene generated by panoramic video shooting is that the director controls the physical movement of the lens to trigger the plot and attract the user's attention.
It can be seen that 3D modeling technology generates VR scenes mainly from 3D models; the user can enter the VR scene and interact with the people or other objects in it. However, the degree of realism achieved by current 3D modeling technology when processing objects falls far short of user requirements. In a VR scene made with panoramic video shooting technology, the user cannot interact with the people or other objects in the VR scene, and the file of the generated VR scene is huge. Based on the above technical problems, embodiments of the present invention provide a graphic processing method, a processor, and a VR system.
It should be understood that the methods and apparatuses of the embodiments of the present invention apply to the field of VR scenes. For example, they can be applied to VR games, and also to other interactive scenarios such as interactive VR movies and interactive VR concerts; the embodiments of the present invention are not limited in this respect.
Before describing the graphic processing methods of embodiments of the present invention in detail, the real-time rendering technology involved in the embodiments is first introduced. The essence of real-time rendering technology is the real-time computation and output of graphics data; its greatest characteristic is its real-time nature. Currently, the processor in a personal computer (PC), workstation, game console, mobile device, or VR system performs this operation at a rate of at least 24 frames per second. That is, rendering one screen of imagery takes at most 1/24 of a second. In actual 3D games, the frames-per-second requirement is much higher. It is precisely because of the real-time nature of real-time rendering that coherent playback of a 3D game is possible, and that the user can interact with the people or other objects in the game scene.
The real-time rendering involved in embodiments of the present invention can be realized by a central processing unit (CPU) or a graphics processing unit (GPU); the embodiments of the present invention are not limited in this respect. Specifically, a GPU is a processor dedicated to image computation; it can reside in a video card and is also known as a display core, vision processor, or display chip.
Fig. 3 is a schematic flowchart of a graphic processing method 300 according to an embodiment of the invention. The method 300 is executed by a VR system 30, where the VR system 30 may include a posture collection device 32, a processing unit 34, and a display device 36. The method 300 may include the following steps.
S310: collect the user's current posture information. It should be understood that S310 can be executed by the posture collection device 32.
S320: according to the posture information, obtain the user's left eye position information, right eye position information, left eye orientation information, and right eye orientation information.
S330: determine a target three-dimensional model from a 3D model library according to the left eye position information and the right eye position information.
S340: determine a target video according to the left eye position information, the right eye position information, and multiple videos shot in advance, where the multiple videos were shot from different shooting positions.
S350: render the left eye picture in real time according to the left eye orientation information, the target three-dimensional model, and the target video.
S360: render the right eye picture in real time according to the right eye orientation information, the target three-dimensional model, and the target video.
It should be understood that S320 through S360 can be executed by the processing unit 34.
S370: display the left eye picture and the right eye picture, where the displayed left eye picture and right eye picture form a VR scene, and the VR scene includes the image of the target three-dimensional model and the image of the target video.
It should be understood that S370 can be executed by the display device 36.
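The flow of S310 through S370 can be sketched as a per-frame pipeline. This is an illustrative skeleton only, not the patented implementation: every name (`Pose`, `method_300`, and the injected callables) is hypothetical, and the substantive work of each step is passed in from outside.

```python
from dataclasses import dataclass

@dataclass
class Pose:
    # hypothetical condensed posture record: head position and yaw angle
    head: tuple
    yaw_deg: float

def method_300(collect, derive_eyes, pick_model, pick_video, render, show):
    """Run one frame of S310-S370, with each step injected as a callable."""
    pose = collect()                                     # S310: posture collection device
    le_pos, re_pos, le_dir, re_dir = derive_eyes(pose)   # S320: per-eye position/orientation
    model = pick_model(le_pos, re_pos)                   # S330: target 3D model from library
    video = pick_video(le_pos, re_pos)                   # S340: target video from pre-shot set
    left = render(le_dir, model, video)                  # S350: real-time render, left eye
    right = render(re_dir, model, video)                 # S360: real-time render, right eye
    return show(left, right)                             # S370: display both pictures
```

Injecting the steps keeps the split of responsibilities visible: S310 belongs to the posture collection device 32, S320 through S360 to the processing unit 34, and S370 to the display device 36.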
In the graphic processing method of this embodiment of the invention, the user's posture information is collected to determine the positions of the user's left and right eyes; according to the position information of the user's left and right eyes, a target three-dimensional model is determined and a target video is determined from multiple videos shot in advance; and the left eye picture and the right eye picture are each rendered by real-time rendering so as to display a VR scene, where the VR scene includes the image of the target three-dimensional model and the image of the target video. The target video can realistically present real scenery and give the user a true sense of presence, thereby improving the user experience while keeping the entire VR scene interactive.
It should be understood that a VR system 30 typically includes a VR headset, and the display device 36 can be integrated into the VR headset. The processing unit 34 and/or the posture collection device 32 of this embodiment can also be integrated into the VR headset, or can be deployed separately, independently of it. The posture collection device 32, the processing unit 34, and the display device 36 can communicate with one another by wire or wirelessly; the embodiments of the present invention are not limited in this respect.
Each step of the graphic processing method 300 of this application and each component of the VR system 30 are described in detail below.
In this embodiment of the invention, in S310 the posture collection device 32 collects the user's current posture information.
The posture collection device 32 may include sensors in a VR head-mounted display device such as VR glasses or a VR helmet. The sensors may include photosensitive sensors such as infrared sensors and cameras; force-sensitive sensors such as gyroscopes; magnetically sensitive sensors such as brain-computer interfaces; and acoustically sensitive sensors; the embodiments of the present invention are not limited as to the specific type of sensor. The sensors in the VR head-mounted display device can collect at least one of the user's current head pose information, eye tracking information, skin perception information, muscle electrical stimulation information, and brain signal information. The processing unit 34 can then determine the user's left eye position information, right eye position information, left eye orientation information, and right eye orientation information from this information.
In a specific example, in a VR scene the user's viewing angle refers to the azimuth of the user's line of sight in the virtual space, including the position and orientation of the eyes. In the virtual space, the user's viewing angle can change as the posture of the user's head changes in real space. In one particular case, the change of the user's viewing angle in the virtual space is synchronized and co-directional with the change of the user's head pose in real space. The user's viewing angle comprises a left eye viewing angle and a right eye viewing angle, that is, the user's left eye position, right eye position, left eye orientation, and right eye orientation.
In this example, the sensors in the VR headset worn by the user can sense head movements such as rotation and translation, and the resulting attitude changes, while the user uses the headset, resolve each movement, and obtain the relevant head pose information (such as speed and angle of movement). From the obtained head pose information, the processing unit 34 can determine the user's left eye position information, right eye position information, left eye orientation information, and right eye orientation information.
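As a hedged sketch of how the resolved head pose might be converted into per-eye information in S320: given the head position, a yaw angle about the vertical axis, and an assumed interpupillary distance (the 0.064 m default below is an illustrative average, not a figure from this patent), the two eye positions sit half that distance to either side of the head along its right vector, and both eyes share the head's forward vector as their orientation. All names here are illustrative.

```python
import math

def eye_poses(head_pos, yaw_deg, ipd=0.064):
    """Derive left/right eye positions and orientation vectors from a head
    position (x, y, z) and a yaw angle about the vertical (y) axis.
    ipd is an assumed interpupillary distance in metres."""
    yaw = math.radians(yaw_deg)
    forward = (math.sin(yaw), 0.0, math.cos(yaw))   # shared viewing direction
    right = (math.cos(yaw), 0.0, -math.sin(yaw))    # head's right vector
    x, y, z = head_pos
    left_eye = (x - right[0] * ipd / 2, y, z - right[2] * ipd / 2)
    right_eye = (x + right[0] * ipd / 2, y, z + right[2] * ipd / 2)
    return left_eye, right_eye, forward, forward
```

This matches the note below that eye positions can be expressed as coordinate values and eye orientations as vectors in a coordinate system.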
The posture collection device 32 may also include locators, control handles, motion-sensing gloves, motion-sensing clothing, treadmills, and other motion-sensing devices, which collect the user's posture information; the processing unit 34 then processes it to obtain the user's left eye position information, right eye position information, left eye orientation information, and right eye orientation information. Via the control handles, motion-sensing gloves, motion-sensing clothing, treadmill, and the like, the posture collection device 32 can collect the user's limb pose information, torso pose information, muscle electrical stimulation information, skin perception information, motion perception information, and so on.
In a specific example, one or more locators can be installed in the VR headset to monitor the position (possibly including height) and orientation of the user's head. In this case, a positioning system can be installed in the real space where the user wears the VR headset; this positioning system carries out positioning communication with the one or more locators in the headset worn by the user, and determines posture information such as the user's specific position (possibly including height) and orientation in that real space. The processing unit 34 can then convert this posture information into information such as the corresponding position (possibly including height) and orientation of the user's head in the virtual space. That is, the processing unit 34 obtains the user's left eye position information, right eye position information, left eye orientation information, and right eye orientation information.
It should be understood that in embodiments of the invention the left eye position information and right eye position information can be expressed as coordinate values in a coordinate system, and the left eye orientation information and right eye orientation information can each be expressed as a vector in a coordinate system, but the embodiments of the present invention are not limited in this respect.
It should also be understood that after collecting the posture information, the posture collection device 32 sends it to the processing unit 34 by wired or wireless communication; this is not elaborated further herein.
It should also be understood that embodiments of the present invention may collect the user's posture information in other ways, and may obtain and/or express the left eye position information, right eye position information, left eye orientation information, and right eye orientation information in other ways; the embodiments of the present invention are not limited as to the specific manner.
In the design of a VR scene, for example in the game design of a VR scene, one position is designed to correspond to one group of objects. In a specific example, the user's left eye position LE and right eye position RE correspond to objects as shown in Fig. 4. The user's left eye position corresponds to object L41, object 43, object 44, object 46, and person 42; the user's right eye position corresponds to object R45, object 43, object 44, object 46, and person 42. Among these, person 42 is the object whose realism is to be enhanced, that is, the target object.
Specifically, which objects in the group corresponding to the user's left eye position or right eye position are target objects can be determined based on the design of the VR scene. For example, each scene, or a set of scenes, may have a target object list; when generating a VR scene, the target objects in that scene are found according to the target object list. As another example, the game design of a VR scene may stipulate that people in the close view (the scene within a certain range of the user) are target objects, that other objects in the close view besides people are not target objects, and that no objects in the distant view (the scene outside the user's certain range) are target objects, and so on. Determining the target objects in a scene can be executed by the processing unit 34, for example by the CPU in the processing unit 34; the embodiments of the present invention are not limited in this respect.
It should be understood that for a VR scene, the objects other than the target object can be 3D models generated in advance by 3D modeling and stored in the 3D model library. Specifically, the 3D models of object L41, object 43, object 44, object R45, and object 46 shown in Fig. 4 are stored in the 3D model library. After obtaining the left eye position information and right eye position information, the processing unit 34 (for example, its CPU) determines the target three-dimensional models from the 3D model library, that is, the 3D models of object L41, object 43, object 44, object R45, and object 46, for use in subsequent rendering of the pictures. Of course, the target three-dimensional models may also be determined in other ways; the embodiments of the present invention are not limited in this respect.
The target object in a VR scene, such as person 42 in the VR scene shown in Fig. 4, is instead generated from multiple videos shot in advance, where the multiple videos contain the target object and were shot from different shooting positions. Specifically, assuming the target object is person 42, an embodiment of the invention can shoot multiple videos of person 42 in advance from multiple shooting positions. Fig. 5 is a schematic diagram of the scene shot in advance. As shown in Fig. 5, the scene to be shot includes person 42, object 52, and object 54; the scene to be shot should be as close as possible to the finally displayed VR scene, to increase the sense of reality. For the scene to be shot, multiple shooting devices can be placed in the horizontal direction and shoot from shooting positions C1, C2, and C3 respectively; the original videos of the person from the different shooting positions are obtained as shown in Fig. 6.
It should be understood that when shooting the videos in advance, they can be shot on a circle of a certain radius around the target object. The more densely the shooting positions on this circle are chosen, the greater the probability of selecting one that is the same as or close to the user's left eye position or right eye position, and the higher the realism of the finally selected or computed target video placed into the VR scene.
In embodiments of the invention, the multiple videos may be videos containing only the target object, obtained by applying transparency processing to the original videos. Specifically, person 42 can be separated from the background objects 52 and 54 in each of the 3 videos shot from the 3 shooting positions, yielding 3 videos containing only person 42. The 3 videos are also produced with identical time spans.
Optionally, in embodiments of the invention, the transparency processing can be based on alpha transparency technology. Specifically, if each pixel in the 3D environment of the VR scene is allowed to possess an alpha value, the alpha value records the transparency of the pixel, so that objects can possess different degrees of transparency. In embodiments of the invention, the target object, person 42, in the original videos can be processed as opaque, and the background objects 52 and 54 as transparent.
In a specific scheme, S340 (determining the target video according to the left eye position information, the right eye position information, and the multiple videos shot in advance) may include: averaging the left eye position information and the right eye position information to obtain an average position; and, according to the average position, selecting the target video from the multiple videos, where the shooting position of the target video is the one closest to the average position among all the shooting positions of the multiple videos.
It should be understood that in embodiments of the invention, the left eye position, right eye position, and shooting positions can be uniformly expressed as coordinates of the virtual space of the VR scene, for example coordinates in a three-axis (x, y, z) coordinate system, or spherical coordinates. The left eye position, right eye position, and shooting positions may also be expressed in other ways; the embodiments of the present invention are not limited in this respect.
In this scheme, the left eye position information and the right eye position information are averaged to obtain the average position. For example, in a three-axis coordinate system, if the left eye position is (x1, y1, z1) and the right eye position is (x2, y2, z2), then the average position is ((x1+x2)/2, (y1+y2)/2, (z1+z2)/2). The video whose shooting position is closest to the average position is selected from the multiple videos as the target video.
In the case where the multiple shooting positions are multiple positions on a circle of a certain radius around the target object, "the shooting position of the target video is closest to the average position" can be understood as requiring that the distance between the shooting position (xt, yt, zt) of the target video and the average position ((x1+x2)/2, (y1+y2)/2, (z1+z2)/2) be less than a preset threshold, that is, ensuring that the distance between the shooting position of the target video and the average position is sufficiently small.
When the multiple shooting positions are not on a circle of a certain radius around the target object, "the shooting position of the target video is closest to the average position" can be understood as meaning that, among the angles between the line segment from the average position to the target object and the line segments from each of the shooting positions to the target object, the angle for the shooting position of the target video is the smallest.
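The two selection criteria above, Euclidean distance when the cameras sit on a circle around the target and viewing angle measured at the target otherwise, can be sketched as follows. The function names are illustrative, not from the patent:

```python
import math

def average_position(left_eye, right_eye):
    """Component-wise midpoint of the two eye positions."""
    return tuple((a + b) / 2 for a, b in zip(left_eye, right_eye))

def nearest_by_distance(mean_pos, cam_positions):
    """Pick the camera whose shooting position is closest to the average
    eye position (suits cameras placed on a circle around the target)."""
    return min(cam_positions, key=lambda c: math.dist(c, mean_pos))

def nearest_by_angle(mean_pos, cam_positions, target):
    """Pick the camera whose segment to the target makes the smallest angle
    with the segment from the average position to the target."""
    def angle(cam):
        v1 = [m - t for m, t in zip(mean_pos, target)]
        v2 = [c - t for c, t in zip(cam, target)]
        dot = sum(a * b for a, b in zip(v1, v2))
        cos_theta = dot / (math.hypot(*v1) * math.hypot(*v2))
        return math.acos(max(-1.0, min(1.0, cos_theta)))
    return min(cam_positions, key=angle)
```

For the threshold variant described above, the caller could additionally check that `math.dist` for the chosen camera is below the preset threshold before accepting it.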
In another specific scheme, S340 (determining the target video according to the left eye position information, the right eye position information, and the multiple videos shot in advance) may include: averaging the left eye position information and the right eye position information to obtain an average position; according to the average position, selecting at least two videos from the multiple videos; extracting from each of the at least two videos the video frame corresponding to the current moment; and performing an interpolation operation on the at least two video frames according to the average position and the shooting positions of the at least two videos, to obtain the target video.
In this scheme, at least one shooting position can be chosen on each side (left and right) of the average position of the user's left eye and right eye, and the videos shot at those positions are selected from the multiple videos as references for computing the target video. The video frames corresponding to the same moment are extracted from the at least two videos and interpolated to obtain the target video.
In the case where the multiple shooting positions are multiple positions on a circle of a certain radius around the target object, the at least two videos chosen from the multiple videos can be the at least two whose shooting positions are at the smallest distance from the average position ((x1+x2)/2, (y1+y2)/2, (z1+z2)/2). At least one of the shooting positions of the at least two videos lies to the left of the average position, and at least one lies to the right.
When the multiple shooting positions are not on a circle of a certain radius around the target object, the at least two videos chosen from the multiple videos can be those whose shooting positions give the smallest angles between the line segment from the average position to the target object and the line segments from the shooting positions to the target object, among all such angles. Again, at least one of the shooting positions of the at least two videos lies to the left of the average position, and at least one lies to the right.
It should be understood that in embodiments of the invention, the reference videos may also be chosen according to other criteria; the embodiments of the present invention are not limited in this respect.
It should also be understood that in embodiments of the invention, the videos taken at different shooting positions represent different observation positions from which the target object (for example, person 42) is observed. In other words, the video frames of the 3 videos shown in Fig. 6 corresponding to the same moment are the images observed from the different observation positions. The 3 shooting angles can correspond to the 3 shooting positions C1, C2, and C3 respectively.
It should be understood that in embodiments of the invention, instead of shooting multiple videos in advance, multiple groups of photos (or multiple groups of images) of the target object, taken in advance from multiple shooting positions, may also be used. According to the relationship between the left eye position and right eye position (or the average position) and the multiple shooting positions, at least two images corresponding to at least two shooting positions are found from the multiple groups of images, and an interpolation operation is performed on the at least two images to obtain the target image. The specific interpolation algorithm is described in more detail below.
Fig. 7 is a schematic diagram of determining the target video according to an embodiment of the invention. The detailed process of selecting at least two videos from the multiple videos according to the average position, extracting from each of the at least two videos the video frame corresponding to the current moment, and performing an interpolation operation on the at least two video frames according to the average position and the shooting positions of the at least two videos to obtain the target video, can be as shown in Fig. 7.
When observing the VR scene, the user's observation position may change; for example, the user's observation position may move left and right while facing the VR scene. The 3 shooting positions are C1, C2, and C3. C1, C2, and C3 can be expressed as coordinate values of a three-dimensional Cartesian coordinate system, as coordinate values of a spherical coordinate system, or in other ways; the embodiments of the present invention are not limited in this respect. According to the user's left eye position information and right eye position information, the average observation position Cview of the user can be determined. As shown in Fig. 7, the average position Cview lies between C1 and C2. When determining the target video, because the average position Cview lies between C1 and C2, the videos shot in advance at shooting positions C1 and C2 are chosen as references. When generating a video frame (image) of the target video, the video frames I1 and I2 of the videos corresponding to C1 and C2 at the same moment are taken out, and the two video frames I1 and I2 are interpolated, for example by linear interpolation, where the interpolation weights depend on the distances from the average position Cview to C1 and to C2. The output video frame of the target video is
Iout = I1 * (1 - |C1 - Cview| / |C1 - C2|) + I2 * (1 - |C2 - Cview| / |C1 - C2|),
where |Ca - Cb| denotes the distance between positions Ca and Cb.
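The linear interpolation above, with weights determined by how far Cview is from C1 and C2, can be sketched as follows. For simplicity the positions are taken as scalars along the left-right axis and the frames as numpy arrays; this is an illustrative sketch, not the patented implementation:

```python
import numpy as np

def interpolate_frames(i1, i2, c1, c2, c_view):
    """Blend the same-moment frames i1 (shot at c1) and i2 (shot at c2)
    for a viewer at c_view lying between them; the nearer shooting
    position receives the larger weight."""
    span = abs(c1 - c2)
    w1 = 1.0 - abs(c1 - c_view) / span
    w2 = 1.0 - abs(c2 - c_view) / span
    # for c_view between c1 and c2 the two weights sum to 1
    return w1 * i1.astype(np.float64) + w2 * i2.astype(np.float64)
```

A viewer a quarter of the way from C1 to C2 thus sees a frame that is three-quarters I1 and one-quarter I2.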
It should be understood that only the case where the user's observation position moves left and right is discussed above. If the user's observation position moves forward or backward, then, because this is the 3D scene of a VR system, the person seen by the observer naturally appears larger when near and smaller when far. Although the physically displayed angle should also vary somewhat, this variation has very little influence, and the average user will not notice or observe it. Moreover, in a typical scene the user only moves forward, backward, left, and right, and rarely moves over a large range in the vertical direction, so the distortion the user perceives in a target video determined by the method of this embodiment of the invention is also very small.
It should be understood that the embodiments of the invention are illustrated with a person as the target object. The target object may of course also be an animal, or even a building or a plant, for which realism is required; the embodiments of the present invention are not limited in this respect.
Optionally, in embodiments of the invention, S350 (rendering the left eye picture in real time according to the left eye orientation information, the target three-dimensional model, and the target video) may include: rendering the target three-dimensional model onto a first texture according to the left eye orientation information; and rendering the target video onto a second texture according to the left eye orientation information, where the second texture is based on a billboard patch technique. S360 (rendering the right eye picture in real time according to the right eye orientation information, the target three-dimensional model, and the target video) may include: rendering the target three-dimensional model onto a third texture according to the right eye orientation information; and rendering the target video onto a fourth texture according to the right eye orientation information, where the fourth texture is based on a billboard patch technique.
The process of rendering the left eye picture and the right eye picture in embodiments of the invention is described in detail below in conjunction with Fig. 8. As described above, the processing unit 34 (for example, its CPU) has determined the target three-dimensional model in S330 and the target video in S340. The processing unit 34 (for example, its GPU) determines the left eye picture to be presented according to the left eye orientation information, and the right eye picture to be presented according to the right eye orientation information. For example, in the scene shown in Fig. 4, according to the left eye orientation information (facing person 42), it is determined that object L41, object 43, object 44, and person 42 are presented in the left eye picture; according to the right eye orientation information (facing person 42), it is determined that object 43, object 44, object R45, and person 42 are presented in the right eye picture.
The processing unit 34 (for example, its GPU) renders the target three-dimensional models object L41, object 43, and object 44 onto the first texture 82 of the left eye picture L800, and renders the target video onto the second texture 84 of the left eye picture L800; it renders the target three-dimensional models object 43, object 44, and object R45 onto the third texture 86 of the right eye picture R800, and renders the target video onto the fourth texture 88 of the right eye picture R800.
Specifically, for the left eye picture and the right eye picture respectively, a billboard patch may be set at the position of the target object in the picture, and the target video is presented on the billboard patch. The billboard technique is a method for fast drawing in the field of computer graphics. In applications with high real-time requirements, such as 3D games, adopting the billboard technique can greatly speed up drawing and thereby improve the fluency of the 3D game pictures. The billboard technique uses a 2D representation of an object in a 3D scene, so that the object always faces the user.
Specifically, the billboard patch may have an inclination angle in the left eye picture, and the specific parameter of the inclination angle can be calculated according to the left eye position information; the billboard patch may likewise have an inclination angle in the right eye picture, and the specific parameter of the inclination angle can be calculated according to the right eye position information.
In fact, since a VR scene is rendered in real time, at any one moment it can be considered that the video frame obtained by the interpolation mentioned above is presented at the position of the target object. Over a continuous period of scene changes, this is equivalent to the video being played on the billboard patch.
As shown in Fig. 8, a billboard patch is arranged at the position corresponding to the target object, and each frame of the video is drawn as a texture map onto the billboard patch, so that each frame of the video always faces the user.
It should be understood that, when rendering the left eye picture and the right eye picture, Z-buffering may be used in combination with the billboard technique. Z-buffering helps the target object form occlusion and size relationships with other objects according to distance. In the embodiment of the present invention, other techniques may also be used for rendering the target video; the embodiment of the present invention is not limited thereto.
It should also be understood that the embodiment of the present invention also provides a graphic processing method, including steps S320 to S360, the method being executed by a processor.
It should also be understood that, in the various embodiments of the present invention, the sequence numbers of the above processes do not imply an order of execution; the execution order of each process should be determined by its function and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present invention.
The graphic processing method according to the embodiments of the present invention has been described in detail above in conjunction with Fig. 1 to Fig. 8. In the following, the processor and the VR system according to the embodiments of the present invention are described in detail in conjunction with Fig. 9 and Fig. 10.
Fig. 9 is a schematic block diagram of a processor 900 according to an embodiment of the present invention. The processor 900 may correspond to the processing unit 34 described above. As shown in Fig. 9, the processor 900 may include an acquisition module 910, a computing module 920 and a rendering module 930.
The acquisition module 910 is used to obtain the left eye position information, right eye position information, left eye orientation information and right eye orientation information of the user.
The computing module 920 is used to determine a target three-dimensional model from a three-dimensional model library according to the left eye position information and the right eye position information obtained by the acquisition module; the computing module 920 is further used to determine a target video according to the left eye position information, the right eye position information and a plurality of videos shot in advance, wherein the plurality of videos are videos shot respectively from different shooting positions.
The rendering module 930 is used to render the left eye picture in real time according to the left eye orientation information, the target three-dimensional model and the target video; the rendering module 930 is further used to render the right eye picture in real time according to the right eye orientation information, the target three-dimensional model and the target video. When displayed on a virtual reality (VR) display, the left eye picture and the right eye picture form a VR scene, and the VR scene includes the image of the target three-dimensional model and the image of the target video.
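As a non-normative sketch, the three modules described above can be pictured as stages of one pipeline per rendered frame; all names below are hypothetical stand-ins, and the selection and rendering internals are passed in as placeholder functions.

```python
def graphics_pipeline(eye_info, model_library, videos,
                      pick_model, pick_video, render_eye):
    """eye_info: dict with left/right eye positions and orientations.
    pick_model / pick_video stand in for the computing module (920);
    render_eye stands in for the rendering module (930)."""
    # Computing stage: target model and target video are chosen
    # from the eye *positions*.
    model = pick_model(model_library, eye_info["left_pos"], eye_info["right_pos"])
    video = pick_video(videos, eye_info["left_pos"], eye_info["right_pos"])
    # Rendering stage: one picture per eye, driven by the eye *orientations*.
    left = render_eye(eye_info["left_dir"], model, video)
    right = render_eye(eye_info["right_dir"], model, video)
    return left, right

eye = {"left_pos": (-0.03, 1.7, 0.0), "right_pos": (0.03, 1.7, 0.0),
       "left_dir": "L", "right_dir": "R"}
left, right = graphics_pipeline(eye, ["model"], ["video"],
                                pick_model=lambda lib, lp, rp: lib[0],
                                pick_video=lambda vs, lp, rp: vs[0],
                                render_eye=lambda d, m, v: (d, m, v))
```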
The graphic processing apparatus of the embodiment of the present invention determines the target three-dimensional model according to the position information of the left and right eyes of the user, determines the target video according to the plurality of videos shot in advance, and renders the left eye picture and the right eye picture respectively by means of real-time rendering, so as to display the VR scene, wherein the VR scene includes the image of the target three-dimensional model and the image of the target video. The apparatus can truly present real scenes while keeping the whole VR scene interactive, providing the user with a true sense of presence and thereby improving the user experience.
Optionally, as an embodiment, the rendering module 930 may be specifically used to: render the target three-dimensional model onto a first texture according to the left eye orientation information; render the target video onto a second texture according to the left eye orientation information, wherein the second texture is based on the billboard patch technique; render the target three-dimensional model onto a third texture according to the right eye orientation information; and render the target video onto a fourth texture according to the right eye orientation information, wherein the fourth texture is based on the billboard patch technique.
Optionally, as an embodiment, the computing module 920 determining the target video according to the left eye position information, the right eye position information and the plurality of videos shot in advance may include: averaging the left eye position information and the right eye position information to obtain an average position; selecting at least two videos from the plurality of videos according to the average position; extracting, for each of the at least two videos, the video frame corresponding to the current moment; and performing an interpolation operation on the at least two video frames according to the average position and the shooting positions of the at least two videos to obtain the target video.
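As an illustrative sketch (not part of the claims), the interpolation step above can be realized as an inverse-distance-weighted blend of two video frames, weighted by how far each shooting position lies from the average eye position. The function names are hypothetical and frames are represented as flat pixel lists for simplicity.

```python
def average_position(left_eye, right_eye):
    """Midpoint of the two eye positions, each an (x, y, z) tuple."""
    return tuple((l + r) / 2 for l, r in zip(left_eye, right_eye))

def interpolate_frames(frame_a, frame_b, pos_a, pos_b, mean_pos):
    """Blend two same-size frames with weights inversely proportional to
    the distance from each shooting position to the mean position."""
    da = sum((m - a) ** 2 for m, a in zip(mean_pos, pos_a)) ** 0.5
    db = sum((m - b) ** 2 for m, b in zip(mean_pos, pos_b)) ** 0.5
    wa = db / (da + db)   # closer shooting position -> larger weight
    wb = da / (da + db)
    return [wa * pa + wb * pb for pa, pb in zip(frame_a, frame_b)]

# Eyes ~6 cm apart; two cameras symmetric about the viewer.
mean = average_position((-0.03, 1.7, 0.0), (0.03, 1.7, 0.0))  # (0.0, 1.7, 0.0)
frame = interpolate_frames([0.0, 100.0], [100.0, 200.0],
                           pos_a=(-1.0, 1.7, 0.0), pos_b=(1.0, 1.7, 0.0),
                           mean_pos=mean)                      # [50.0, 150.0]
```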
Optionally, as an embodiment, the computing module 920 determining the target video according to the left eye position information, the right eye position information and the plurality of videos shot in advance may include: averaging the left eye position information and the right eye position information to obtain an average position; and selecting the target video from the plurality of videos according to the average position, wherein the shooting position of the target video is, among all the shooting positions of the plurality of videos, the one closest to the average position.
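Similarly, the nearest-shooting-position variant can be sketched as a simple minimum-distance selection; the names are hypothetical and Euclidean distance is assumed.

```python
def select_nearest_video(videos, mean_pos):
    """videos: list of (shooting_position, video_handle) pairs.
    Returns the handle whose shooting position is closest to mean_pos."""
    def dist(p):
        return sum((m - c) ** 2 for m, c in zip(mean_pos, p)) ** 0.5
    return min(videos, key=lambda v: dist(v[0]))[1]

videos = [((-2.0, 0.0, 0.0), "cam_left"),
          (( 0.0, 0.0, 2.0), "cam_front"),
          (( 2.0, 0.0, 0.0), "cam_right")]
target = select_nearest_video(videos, (0.5, 0.0, 1.5))  # -> "cam_front"
```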
Optionally, as an embodiment, the plurality of videos are videos that only include the target object after transparent processing of the original videos.
Optionally, as an embodiment, the target object is a person.
Optionally, as an embodiment, the left eye position information, the right eye position information, the left eye orientation information and the right eye orientation information obtained by the acquisition module 910 are determined according to the collected current posture information of the user.
Optionally, as an embodiment, the posture information includes at least one of head posture information, limb posture information, trunk posture information, muscle electric stimulation information, eye tracking information, skin sensing information, motion perception information and brain signal information.
It should be understood that the processor 900 may be a CPU or a GPU. The processor 900 may also combine the functions of a CPU and a GPU; for example, the functions of the acquisition module 910 and the computing module 920 (S320 to S340) may be executed by the CPU, and the function of the rendering module 930 (S350 and S360) by the GPU. The embodiment of the present invention is not limited thereto.
Fig. 10 shows a schematic diagram of a VR system according to an embodiment of the present invention. Shown in Fig. 10 is a VR helmet 1000, which may include a head tracker 1010, a CPU 1020, a GPU 1030 and a display 1040. The head tracker 1010 corresponds to the posture collection apparatus, the CPU 1020 and the GPU 1030 correspond to the processing unit, and the display 1040 corresponds to the display apparatus; the functions of the head tracker 1010, the CPU 1020, the GPU 1030 and the display 1040 are not repeated here.
It should be understood that the head tracker 1010, the CPU 1020, the GPU 1030 and the display 1040 shown in Fig. 10 are integrated in the VR helmet 1000. There may also be other posture collection apparatuses outside the VR helmet 1000, which collect the posture information of the user and send it to the CPU 1020 for processing; the embodiment of the present invention is not limited thereto.
Fig. 11 shows a schematic diagram of another VR system according to an embodiment of the present invention. Shown in Fig. 11 is a VR system composed of VR glasses 1110 and a host 1120. The VR glasses 1110 may include an angle sensor 1112, a signal processor 1114, a data transmitter 1116 and a display 1118. The angle sensor 1112 corresponds to the posture collection apparatus; the host 1120 includes a CPU and a GPU, corresponding to the processing unit, which calculate and render the pictures; the display 1118 corresponds to the display apparatus. The angle sensor 1112 collects the posture information of the user, and the posture information is sent to the host 1120 for processing. The host 1120 calculates and renders the left eye picture and the right eye picture, and sends them to the display 1118 for display. The signal processor 1114 and the data transmitter 1116 are mainly used for communication between the VR glasses 1110 and the host 1120.
There may also be other posture collection apparatuses outside the VR glasses 1110, which collect the posture information of the user and send it to the host 1120 for processing; the embodiment of the present invention is not limited thereto.
The virtual reality system of the embodiment of the present invention collects the posture information of the user to determine the positions of the user's left and right eyes, determines the target three-dimensional model according to the position information of the left and right eyes, determines the target video according to the plurality of videos shot in advance, and renders the left eye picture and the right eye picture respectively by means of real-time rendering, so as to display the VR scene, wherein the VR scene includes the image of the target three-dimensional model and the image of the target video. The system can truly present real scenes while keeping the whole VR scene interactive, providing the user with a true sense of presence and thereby improving the user experience.
The embodiment of the present invention also provides a computer-readable storage medium on which instructions are stored; when the instructions are run on a computer, the computer executes the graphic processing method of the above method embodiments. Specifically, the computer may be the above VR system or the processor.
The embodiment of the present invention also provides a computer program product including instructions; when a computer runs the instructions of the computer program product, the computer executes the graphic processing method of the above method embodiments. Specifically, the computer program product may run in the VR system or the processor.
The above embodiments may be implemented wholly or partly by software, hardware, firmware or any combination thereof. When implemented in software, they may be realized wholly or partly in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the flows or functions described in the embodiments of the present application are generated wholly or partly. The computer may be a general-purpose computer, a special-purpose computer, a computer network or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server or data center to another website, computer, server or data center by wired means (such as coaxial cable, optical fiber, or Digital Subscriber Line (DSL)) or wireless means (such as infrared, radio, or microwave). The computer-readable storage medium may be any usable medium accessible by a computer, or a data storage device such as a server or data center integrating one or more usable media. The usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, a Digital Video Disc (DVD)), or a semiconductor medium (for example, a Solid State Disk (SSD)), etc.
It should be understood that the terms "first", "second" and the various numerals referred to herein are distinctions made only for convenience of description, and are not intended to limit the scope of the present application.
It should be understood that the term "and/or" herein only describes an association relationship between associated objects, indicating that three relationships may exist; for example, "A and/or B" may indicate three cases: A exists alone, both A and B exist, or B exists alone. In addition, the character "/" herein generally indicates an "or" relationship between the associated objects before and after it.
Those of ordinary skill in the art may realize that the units and algorithm steps described in conjunction with the embodiments disclosed herein can be implemented by electronic hardware, or by a combination of computer software and electronic hardware. Whether these functions are implemented in hardware or software depends on the specific application and the design constraints of the technical solution. A skilled person may use different methods to implement the described functions for each specific application, but such implementation should not be considered beyond the scope of the present application.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the systems, apparatuses and units described above may refer to the corresponding processes in the foregoing method embodiments, and are not repeated here.
In the several embodiments provided in the present application, it should be understood that the disclosed systems, apparatuses and methods may be implemented in other ways. For example, the apparatus embodiments described above are merely exemplary; the division of the units is only a division of logical functions, and there may be other division manners in actual implementation; for instance, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual coupling, direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, apparatuses or units, and may be electrical, mechanical or in other forms.
The units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; that is, they may be located in one place, or may be distributed over multiple network elements. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
The above are only specific implementations of the present application, but the protection scope of the present application is not limited thereto. Any person familiar with the technical field can easily think of changes or replacements within the technical scope disclosed by the present application, and these should all be covered within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
Claims (34)
1. A graphic processing method, characterized by comprising:
obtaining left eye position information, right eye position information, left eye orientation information and right eye orientation information of a user;
determining a target three-dimensional model from a three-dimensional model library according to the left eye position information and the right eye position information;
determining a target video according to the left eye position information, the right eye position information and a plurality of videos shot in advance, wherein the plurality of videos are videos shot respectively from different shooting positions;
rendering a left eye picture in real time according to the left eye orientation information, the target three-dimensional model and the target video;
rendering a right eye picture in real time according to the right eye orientation information, the target three-dimensional model and the target video;
wherein the left eye picture and the right eye picture form a VR scene on a virtual reality (VR) display, and the VR scene includes an image of the target three-dimensional model and an image of the target video.
2. The method according to claim 1, characterized in that the rendering a left eye picture in real time according to the left eye orientation information, the target three-dimensional model and the target video comprises:
rendering the target three-dimensional model onto a first texture according to the left eye orientation information;
rendering the target video onto a second texture according to the left eye orientation information, wherein the second texture is based on a billboard patch technique;
and the rendering a right eye picture in real time according to the right eye orientation information, the target three-dimensional model and the target video comprises:
rendering the target three-dimensional model onto a third texture according to the right eye orientation information;
rendering the target video onto a fourth texture according to the right eye orientation information, wherein the fourth texture is based on the billboard patch technique.
3. The method according to claim 1 or 2, characterized in that the determining a target video according to the left eye position information, the right eye position information and the plurality of videos shot in advance comprises:
averaging the left eye position information and the right eye position information to obtain an average position;
selecting at least two videos from the plurality of videos according to the average position;
extracting, for each of the at least two videos, a video frame corresponding to the current moment;
performing an interpolation operation on the at least two video frames according to the average position and the shooting positions of the at least two videos to obtain the target video.
4. The method according to claim 1 or 2, characterized in that the determining a target video according to the left eye position information, the right eye position information and the plurality of videos shot in advance comprises:
averaging the left eye position information and the right eye position information to obtain an average position;
selecting the target video from the plurality of videos according to the average position, wherein the shooting position of the target video is, among all the shooting positions of the plurality of videos, the one closest to the average position.
5. The method according to claim 1, characterized in that the plurality of videos are videos that only include a target object after transparent processing of original videos.
6. The method according to claim 5, characterized in that the target object is a person.
7. The method according to claim 1, characterized in that the left eye position information, the right eye position information, the left eye orientation information and the right eye orientation information are determined according to collected current posture information of the user.
8. The method according to claim 7, characterized in that the posture information includes at least one of head posture information, limb posture information, trunk posture information, muscle electric stimulation information, eye tracking information, skin sensing information, motion perception information and brain signal information.
9. A processor, characterized by comprising an acquisition module, a computing module and a rendering module, wherein:
the acquisition module is used to obtain left eye position information, right eye position information, left eye orientation information and right eye orientation information of a user;
the computing module is used to determine a target three-dimensional model from a three-dimensional model library according to the left eye position information and the right eye position information obtained by the acquisition module;
the computing module is further used to determine a target video according to the left eye position information, the right eye position information and a plurality of videos shot in advance, wherein the plurality of videos are videos shot respectively from different shooting positions;
the rendering module is used to render a left eye picture in real time according to the left eye orientation information, the target three-dimensional model and the target video;
the rendering module is further used to render a right eye picture in real time according to the right eye orientation information, the target three-dimensional model and the target video;
wherein, when displayed on a virtual reality (VR) display, the left eye picture and the right eye picture form a VR scene, and the VR scene includes an image of the target three-dimensional model and an image of the target video.
10. The processor according to claim 9, characterized in that the rendering module is specifically used to:
render the target three-dimensional model onto a first texture according to the left eye orientation information;
render the target video onto a second texture according to the left eye orientation information, wherein the second texture is based on a billboard patch technique;
render the target three-dimensional model onto a third texture according to the right eye orientation information;
render the target video onto a fourth texture according to the right eye orientation information, wherein the fourth texture is based on the billboard patch technique.
11. The processor according to claim 9 or 10, characterized in that the computing module determining the target video according to the left eye position information, the right eye position information and the plurality of videos shot in advance comprises:
averaging the left eye position information and the right eye position information to obtain an average position;
selecting at least two videos from the plurality of videos according to the average position;
extracting, for each of the at least two videos, a video frame corresponding to the current moment;
performing an interpolation operation on the at least two video frames according to the average position and the shooting positions of the at least two videos to obtain the target video.
12. The processor according to claim 9 or 10, characterized in that the computing module determining the target video according to the left eye position information, the right eye position information and the plurality of videos shot in advance comprises:
averaging the left eye position information and the right eye position information to obtain an average position;
selecting the target video from the plurality of videos according to the average position, wherein the shooting position of the target video is, among all the shooting positions of the plurality of videos, the one closest to the average position.
13. The processor according to claim 9, characterized in that the plurality of videos are videos that only include a target object after transparent processing of original videos.
14. The processor according to claim 13, characterized in that the target object is a person.
15. The processor according to claim 9, characterized in that the left eye position information, the right eye position information, the left eye orientation information and the right eye orientation information obtained by the acquisition module are determined according to collected current posture information of the user.
16. The processor according to claim 15, characterized in that the posture information includes at least one of head posture information, limb posture information, trunk posture information, muscle electric stimulation information, eye tracking information, skin sensing information, motion perception information and brain signal information.
17. The processor according to claim 9, characterized in that the processor includes at least one of a central processing unit (CPU) and a graphics processing unit (GPU).
18. A graphic processing method, characterized by comprising:
collecting current posture information of a user;
obtaining left eye position information, right eye position information, left eye orientation information and right eye orientation information of the user according to the posture information;
determining a target three-dimensional model from a three-dimensional model library according to the left eye position information and the right eye position information;
determining a target video according to the left eye position information, the right eye position information and a plurality of videos shot in advance, wherein the plurality of videos are videos shot respectively from different shooting positions;
rendering a left eye picture in real time according to the left eye orientation information, the target three-dimensional model and the target video;
rendering a right eye picture in real time according to the right eye orientation information, the target three-dimensional model and the target video;
displaying the left eye picture and the right eye picture, wherein the left eye picture and the right eye picture form a VR scene when displayed, and the VR scene includes an image of the target three-dimensional model and an image of the target video.
19. The method according to claim 18, characterized in that the rendering a left eye picture in real time according to the left eye orientation information, the target three-dimensional model and the target video comprises:
rendering the target three-dimensional model onto a first texture according to the left eye orientation information;
rendering the target video onto a second texture according to the left eye orientation information, wherein the second texture is based on a billboard patch technique;
and the rendering a right eye picture in real time according to the right eye orientation information, the target three-dimensional model and the target video comprises:
rendering the target three-dimensional model onto a third texture according to the right eye orientation information;
rendering the target video onto a fourth texture according to the right eye orientation information, wherein the fourth texture is based on the billboard patch technique.
20. The method according to claim 18 or 19, characterized in that the determining a target video according to the left eye position information, the right eye position information and the plurality of videos shot in advance comprises:
averaging the left eye position information and the right eye position information to obtain an average position;
selecting at least two videos from the plurality of videos according to the average position;
extracting, for each of the at least two videos, a video frame corresponding to the current moment;
performing an interpolation operation on the at least two video frames according to the average position and the shooting positions of the at least two videos to obtain the target video.
21. The method according to claim 18 or 19, characterized in that the determining a target video according to the left eye position information, the right eye position information and the plurality of videos shot in advance comprises:
averaging the left eye position information and the right eye position information to obtain an average position;
selecting the target video from the plurality of videos according to the average position, wherein the shooting position of the target video is, among all the shooting positions of the plurality of videos, the one closest to the average position.
22. The method according to claim 18, characterized in that the plurality of videos are videos that only include a target object after transparent processing of original videos.
23. The method according to claim 22, characterized in that the target object is a person.
24. The method according to claim 18, characterized in that the collecting current posture information of the user comprises:
collecting at least one of current head posture information, limb posture information, trunk posture information, muscle electric stimulation information, eye tracking information, skin sensing information, motion perception information and brain signal information of the user.
25. A virtual reality (VR) system, characterized by comprising a posture collection apparatus, a processing unit and a display apparatus, wherein:
the posture collection apparatus is used to collect current posture information of a user;
the processing unit is used to:
obtain left eye position information, right eye position information, left eye orientation information and right eye orientation information of the user according to the posture information;
determine a target three-dimensional model from a three-dimensional model library according to the left eye position information and the right eye position information;
determine a target video according to the left eye position information, the right eye position information and a plurality of videos shot in advance, wherein the plurality of videos are videos shot respectively from different shooting positions;
render a left eye picture in real time according to the left eye orientation information, the target three-dimensional model and the target video;
render a right eye picture in real time according to the right eye orientation information, the target three-dimensional model and the target video;
and the display apparatus is used to display the left eye picture and the right eye picture, wherein the left eye picture and the right eye picture form a VR scene when displayed, and the VR scene includes an image of the target three-dimensional model and an image of the target video.
26. The VR system according to claim 25, wherein the processing unit rendering the left eye picture in real time according to the left eye orientation information, the target three-dimensional model and the target video comprises:
rendering the target three-dimensional model onto a first texture according to the left eye orientation information;
rendering the target video onto a second texture according to the left eye orientation information, wherein the second texture is based on the billboard patch technique;
and the processing unit rendering the right eye picture in real time according to the right eye orientation information, the target three-dimensional model and the target video comprises:
rendering the target three-dimensional model onto a third texture according to the right eye orientation information;
rendering the target video onto a fourth texture according to the right eye orientation information, wherein the fourth texture is based on the billboard patch technique.
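The billboard patch technique named in the claim keeps the video quad turned toward the viewpoint. A small sketch of the core facing computation follows; the function name `billboard_yaw` and the y-up coordinate convention are assumptions of the sketch, not details from the patent:

```python
import math

def billboard_yaw(quad_pos, eye_pos):
    """Yaw angle (radians, about the y axis) that rotates a video
    quad at quad_pos so its normal points toward eye_pos -- the
    facing step of a billboard patch, computed per eye per frame."""
    dx = eye_pos[0] - quad_pos[0]
    dz = eye_pos[2] - quad_pos[2]
    return math.atan2(dx, dz)
```

Because the two eyes sit at different positions, each eye yields a slightly different yaw, which is consistent with the claim keeping separate second and fourth textures for the video.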
27. The VR system according to claim 25 or 26, wherein the processing unit determining the target video according to the left eye position information, the right eye position information and the plurality of videos shot in advance comprises:
averaging the left eye position information and the right eye position information to obtain a mean position;
selecting at least two videos from the plurality of videos according to the mean position;
extracting, from each of the at least two videos, the video frame corresponding to the same moment;
performing an interpolation operation on the at least two video frames according to the mean position and the camera sites of the at least two videos, to obtain the target video.
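One plausible reading of the interpolation step is an inverse-distance blend of time-aligned frames, weighted by how close each camera site is to the mean eye position. The sketch below assumes frames are flat lists of pixel values and the function names are illustrative; it is not the patented algorithm:

```python
import math

def mean_position(left_eye, right_eye):
    """Average the two eye positions (claim 27, first step)."""
    return tuple((a + b) / 2 for a, b in zip(left_eye, right_eye))

def blend_frames(frames, sites, mean_pos):
    """Inverse-distance-weighted blend of time-aligned frames from
    the selected videos. frames: equal-length pixel lists; sites:
    the corresponding videos' camera sites."""
    dists = [math.dist(site, mean_pos) for site in sites]
    for d, frame in zip(dists, frames):
        if d == 0.0:            # viewer exactly at a camera site
            return list(frame)
    weights = [1.0 / d for d in dists]
    total = sum(weights)
    weights = [w / total for w in weights]
    return [sum(w * frame[i] for w, frame in zip(weights, frames))
            for i in range(len(frames[0]))]
```

With two cameras equidistant from the mean position, the blend reduces to a plain per-pixel average.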
28. The VR system according to claim 25 or 26, wherein the processing unit determining the target video according to the left eye position information, the right eye position information and the plurality of videos shot in advance comprises:
averaging the left eye position information and the right eye position information to obtain a mean position;
selecting the target video from the plurality of videos according to the mean position, wherein, among the camera sites of all of the plurality of videos, the camera site of the target video is the one closest to the mean position.
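Claim 28's simpler variant is a nearest-neighbour pick over camera sites. A minimal sketch, in which the `(camera_site, video)` pair representation is an assumption made for illustration:

```python
import math

def nearest_video(videos, mean_pos):
    """Return the video whose camera site lies closest to the mean
    eye position. videos: iterable of (camera_site, video) pairs."""
    site, video = min(videos, key=lambda pair: math.dist(pair[0], mean_pos))
    return video
```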
29. The VR system according to claim 25, wherein the plurality of videos are videos obtained by applying transparency processing to original videos so that only a target object remains.
30. The VR system according to claim 29, wherein the target object is a person.
31. The VR system according to claim 25, wherein the posture collection device is specifically configured to:
collect at least one of the user's current head posture information, limb posture information, trunk posture information, muscle electrical stimulation information, eye tracking information, skin sensing information, motion perception information and brain signal information.
32. The VR system according to claim 25, wherein the processing unit comprises at least one of a central processing unit (CPU) and a graphics processing unit (GPU).
33. A computer storage medium having instructions stored thereon which, when run on a computer, cause the computer to perform the method according to any one of claims 1 to 8.
34. A computer storage medium having instructions stored thereon which, when run on a computer, cause the computer to perform the method according to any one of claims 18 to 24.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710379516.5A CN107315470B (en) | 2017-05-25 | 2017-05-25 | Graphic processing method, processor and virtual reality system |
PCT/CN2018/084714 WO2018214697A1 (en) | 2017-05-25 | 2018-04-27 | Graphics processing method, processor, and virtual reality system |
TW107116847A TWI659335B (en) | 2017-05-25 | 2018-05-17 | Graphic processing method and device, virtual reality system, computer storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710379516.5A CN107315470B (en) | 2017-05-25 | 2017-05-25 | Graphic processing method, processor and virtual reality system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107315470A CN107315470A (en) | 2017-11-03 |
CN107315470B true CN107315470B (en) | 2018-08-17 |
Family
ID=60182018
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710379516.5A Active CN107315470B (en) | 2017-05-25 | 2017-05-25 | Graphic processing method, processor and virtual reality system |
Country Status (3)
Country | Link |
---|---|
CN (1) | CN107315470B (en) |
TW (1) | TWI659335B (en) |
WO (1) | WO2018214697A1 (en) |
Families Citing this family (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107315470B (en) * | 2017-05-25 | 2018-08-17 | 腾讯科技(深圳)有限公司 | Graphic processing method, processor and virtual reality system |
CN110134222A (en) * | 2018-02-02 | 2019-08-16 | 上海集鹰科技有限公司 | A kind of VR shows positioning sighting system and its positioning method of sight |
CN108616752B (en) * | 2018-04-25 | 2020-11-06 | 北京赛博恩福科技有限公司 | Head-mounted equipment supporting augmented reality interaction and control method |
CN109032350B (en) * | 2018-07-10 | 2021-06-29 | 深圳市创凯智能股份有限公司 | Vertigo sensation alleviating method, virtual reality device, and computer-readable storage medium |
CN110570513B (en) * | 2018-08-17 | 2023-06-20 | 创新先进技术有限公司 | Method and device for displaying vehicle loss information |
CN111065053B (en) * | 2018-10-16 | 2021-08-17 | 北京凌宇智控科技有限公司 | System and method for video streaming |
CN111064985A (en) * | 2018-10-16 | 2020-04-24 | 北京凌宇智控科技有限公司 | System, method and device for realizing video streaming |
US11500455B2 (en) | 2018-10-16 | 2022-11-15 | Nolo Co., Ltd. | Video streaming system, video streaming method and apparatus |
CN109976527B (en) * | 2019-03-28 | 2022-08-12 | 重庆工程职业技术学院 | Interactive VR display system |
CN112015264B (en) * | 2019-05-30 | 2023-10-20 | 深圳市冠旭电子股份有限公司 | Virtual reality display method, virtual reality display device and virtual reality equipment |
CN111857336B (en) * | 2020-07-10 | 2022-03-25 | 歌尔科技有限公司 | Head-mounted device, rendering method thereof, and storage medium |
CN112073669A (en) * | 2020-09-18 | 2020-12-11 | 三星电子(中国)研发中心 | Method and device for realizing video communication |
CN112308982A (en) * | 2020-11-11 | 2021-02-02 | 安徽山水空间装饰有限责任公司 | Decoration effect display method and device |
CN113436489A (en) * | 2021-06-09 | 2021-09-24 | 深圳大学 | Study leaving experience system and method based on virtual reality |
CN115713614A (en) * | 2022-11-25 | 2023-02-24 | 立讯精密科技(南京)有限公司 | Image scene construction method and device, electronic equipment and storage medium |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20060132915A1 (en) * | 2004-12-16 | 2006-06-22 | Yang Ung Y | Visual interfacing apparatus for providing mixed multiple stereo images |
CN102056003A (en) * | 2009-11-04 | 2011-05-11 | 三星电子株式会社 | High density multi-view image display system and method with active sub-pixel rendering |
WO2011111349A1 (en) * | 2010-03-10 | 2011-09-15 | パナソニック株式会社 | 3d video display device and parallax adjustment method |
CN104603673A (en) * | 2012-09-03 | 2015-05-06 | Smi创新传感技术有限公司 | Head mounted system and method to compute and render stream of digital images using head mounted system |
CN104679509A (en) * | 2015-02-06 | 2015-06-03 | 腾讯科技(深圳)有限公司 | Graph rendering method and device |
Family Cites Families (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN100573595C (en) * | 2003-06-20 | 2009-12-23 | 日本电信电话株式会社 | Virtual visual point image generating method and three-dimensional image display method and device |
US8400493B2 (en) * | 2007-06-25 | 2013-03-19 | Qualcomm Incorporated | Virtual stereoscopic camera |
CN102404584B (en) * | 2010-09-13 | 2014-05-07 | 腾讯科技(成都)有限公司 | Method and device for adjusting scene left camera and scene right camera, three dimensional (3D) glasses and client side |
US9292973B2 (en) * | 2010-11-08 | 2016-03-22 | Microsoft Technology Licensing, Llc | Automatic variable virtual focus for augmented reality displays |
US9255813B2 (en) * | 2011-10-14 | 2016-02-09 | Microsoft Technology Licensing, Llc | User controlled real object disappearance in a mixed reality display |
US9451162B2 (en) * | 2013-08-21 | 2016-09-20 | Jaunt Inc. | Camera array including camera modules |
US20150358539A1 (en) * | 2014-06-06 | 2015-12-10 | Jacob Catt | Mobile Virtual Reality Camera, Method, And System |
EP3356877A4 (en) * | 2015-10-04 | 2019-06-05 | Thika Holdings LLC | Eye gaze responsive virtual reality headset |
CN106385576B (en) * | 2016-09-07 | 2017-12-08 | 深圳超多维科技有限公司 | Stereoscopic Virtual Reality live broadcasting method, device and electronic equipment |
CN106507086B (en) * | 2016-10-28 | 2018-08-31 | 北京灵境世界科技有限公司 | A kind of 3D rendering methods of roaming outdoor scene VR |
CN106527696A (en) * | 2016-10-31 | 2017-03-22 | 宇龙计算机通信科技(深圳)有限公司 | Method for implementing virtual operation and wearable device |
CN106657906B (en) * | 2016-12-13 | 2020-03-27 | 国家电网公司 | Information equipment monitoring system with self-adaptive scene virtual reality function |
CN106643699B (en) * | 2016-12-26 | 2023-08-04 | 北京互易科技有限公司 | Space positioning device and positioning method in virtual reality system |
CN107315470B (en) * | 2017-05-25 | 2018-08-17 | 腾讯科技(深圳)有限公司 | Graphic processing method, processor and virtual reality system |
2017
- 2017-05-25 CN CN201710379516.5A patent/CN107315470B/en active Active

2018
- 2018-04-27 WO PCT/CN2018/084714 patent/WO2018214697A1/en active Application Filing
- 2018-05-17 TW TW107116847A patent/TWI659335B/en active
Also Published As
Publication number | Publication date |
---|---|
TW201835723A (en) | 2018-10-01 |
CN107315470A (en) | 2017-11-03 |
TWI659335B (en) | 2019-05-11 |
WO2018214697A1 (en) | 2018-11-29 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107315470B (en) | Graphic processing method, processor and virtual reality system | |
US10324522B2 (en) | Methods and systems of a motion-capture body suit with wearable body-position sensors | |
CN106843456B (en) | A kind of display methods, device and virtual reality device based on posture tracking | |
US20190080516A1 (en) | Systems and methods for augmented reality preparation, processing, and application | |
KR101295471B1 (en) | A system and method for 3D space-dimension based image processing | |
US8878846B1 (en) | Superimposing virtual views of 3D objects with live images | |
US8462198B2 (en) | Animation generation systems and methods | |
CN111862348B (en) | Video display method, video generation method, device, equipment and storage medium | |
CN107636534A (en) | General sphere catching method | |
CN112198959A (en) | Virtual reality interaction method, device and system | |
JP7073481B2 (en) | Image display system | |
JP2002058045A (en) | System and method for entering real object into virtual three-dimensional space | |
CN108416832B (en) | Media information display method, device and storage medium | |
US9955120B2 (en) | Multiuser telepresence interaction | |
JP7459870B2 (en) | Image processing device, image processing method, and program | |
US11537162B2 (en) | Wearable article for a performance capture system | |
JP6775669B2 (en) | Information processing device | |
JP2022028091A (en) | Image processing device, image processing method, and program | |
CN108416255B (en) | System and method for capturing real-time facial expression animation of character based on three-dimensional animation | |
WO2022014170A1 (en) | Information processing device, information processing method, and information processing system | |
US20240078767A1 (en) | Information processing apparatus and information processing method | |
WO2018173206A1 (en) | Information processing device | |
JP7044846B2 (en) | Information processing equipment | |
US20240005600A1 (en) | Information processing apparatus, information processing method, and information processing program |
JP6739539B2 (en) | Information processing equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
TR01 | Transfer of patent right | ||
Effective date of registration: 20240104
Address after: 518057 Tencent Building, No. 1 High-tech Zone, Nanshan District, Shenzhen City, Guangdong Province, 35 floors
Patentee after: TENCENT TECHNOLOGY (SHENZHEN) Co.,Ltd.
Patentee after: TENCENT CLOUD COMPUTING (BEIJING) Co.,Ltd.
Address before: 518000 Tencent Building, No. 1 High-tech Zone, Nanshan District, Shenzhen City, Guangdong Province, 35 floors
Patentee before: TENCENT TECHNOLOGY (SHENZHEN) Co.,Ltd.