WO2018214697A1 - Graphics processing method, processor, and virtual reality system - Google Patents

Graphics processing method, processor, and virtual reality system

Info

Publication number
WO2018214697A1
Authority
WO
WIPO (PCT)
Prior art keywords
information
picture
target
videos
image
Prior art date
Application number
PCT/CN2018/084714
Other languages
English (en)
Chinese (zh)
Inventor
刘皓
Original Assignee
腾讯科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 腾讯科技(深圳)有限公司
Publication of WO2018214697A1

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013Eye tracking input arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/015Input arrangements based on nervous system activity detection, e.g. brain waves [EEG] detection, electromyograms [EMG] detection, electrodermal response detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/005General purpose rendering architectures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/04Texture mapping
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/275Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/01Indexing scheme relating to G06F3/01
    • G06F2203/011Emotion or mood input determined on the basis of sensed human body parameters such as pulse, heart rate or beat, temperature of skin, facial expressions, iris, voice pitch, brain activity patterns
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/01Indexing scheme relating to G06F3/01
    • G06F2203/012Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2215/00Indexing scheme for image rendering
    • G06T2215/16Using real world measurements to influence rendering

Definitions

  • the present application relates to the field of graphics processing, and more particularly to a graphics processing method, a processor, and a virtual reality system.
  • VR modeling technology creates VR scenes mainly based on 3D models.
  • VR scenes are mainly implemented using 3D modeling technology combined with real-time rendering technology.
  • The user uses a VR head-mounted display device, such as VR glasses or a VR helmet, as an observation medium, and is immersed in a VR scene to interact with characters or other objects in the VR scene to obtain a realistic spatial experience.
  • Among the most common examples are roller coaster VR scenes.
  • The embodiment of the present application provides a graphics processing method applied to a computing device, including: acquiring position information of an observer; determining a target object in a virtual reality (VR) picture to be displayed according to the position information; acquiring at least two pre-stored images corresponding to the target object, the at least two images being images respectively taken from different shooting positions; generating a target image using the at least two images according to the position information and the shooting positions corresponding to the at least two images, the target image being an image of the target object corresponding to the position of the observer; and presenting the VR picture and rendering the target image in the VR picture.
  • The embodiment of the present application provides a graphics processing apparatus including a processor and a memory, where the memory stores computer readable instructions, and the processor may be configured to: obtain position information of an observer; determine a target object in the virtual reality VR picture to be displayed according to the position information; acquire at least two pre-stored images corresponding to the target object, the at least two images being images respectively taken from different shooting positions; generate a target image using the at least two images according to the position information and the shooting positions corresponding to the at least two images, the target image being an image of the target object corresponding to the position of the observer; and display the VR picture and render the target image in the VR picture.
  • The embodiment of the present application provides a graphics processing method, which is applicable to a computing device, including: collecting current posture information of an observer; obtaining position information of the observer according to the posture information; determining a target object in the virtual reality VR picture to be displayed according to the position information; acquiring at least two pre-stored images corresponding to the target object, the at least two images being images respectively taken from different shooting positions; generating a target image using the at least two images according to the position information and the shooting positions corresponding to the at least two images, the target image being an image of the target object corresponding to the position of the observer; and displaying the VR picture and rendering the target image in the VR picture.
  • The embodiment of the present application provides a virtual reality VR system, including a gesture collection device, a processing device, and a display device. The gesture collection device is configured to collect current posture information of an observer. The processing device is configured to: obtain position information of the observer according to the posture information; determine a target object in the virtual reality VR picture to be displayed according to the position information; acquire at least two pre-stored images corresponding to the target object, the at least two images being images respectively taken from different shooting positions; and generate a target image using the at least two images according to the position information and the shooting positions corresponding to the at least two images, the target image being an image of the target object corresponding to the position of the observer. The display device is configured to display the VR picture and render the target image in the VR picture.
  • The embodiment of the present application provides a computer storage medium on which instructions are stored, and when the instructions are run on a computer, the computer is caused to execute the method described in the embodiment of the present application.
  • The embodiment of the present application provides a computer program product including instructions. When the computer runs the instructions of the computer program product, the computer executes the method described in the embodiment of the present application.
  • FIG. 1 is a schematic diagram of a VR system according to an embodiment of the present application.
  • FIG. 2 is a schematic flowchart of a graphics processing method according to an embodiment of the present application.
  • FIG. 3 is a schematic flowchart of a graphics processing method according to an embodiment of the present application.
  • FIG. 4 is a schematic diagram of a scenario that needs to be presented in an embodiment of the present application.
  • FIG. 5 is a schematic diagram of a scene for performing pre-shooting according to an embodiment of the present application.
  • FIG. 6 is a schematic diagram of a video obtained at different shooting positions according to an embodiment of the present application.
  • FIG. 7 is a schematic diagram of determining a target video according to an embodiment of the present application.
  • FIG. 8 is a schematic diagram of a presentation target video according to an embodiment of the present application.
  • FIG. 9A is a schematic structural diagram of a computing device where a graphics processing apparatus according to an embodiment of the present application is located.
  • FIG. 9B is a schematic block diagram of a processor according to an embodiment of the present application.
  • FIG. 10 is a schematic diagram of a virtual reality system according to an embodiment of the present application.
  • FIG. 11 is a schematic diagram of a virtual reality system of another embodiment of the present application.
  • the embodiment of the present application provides a graphics processing method, apparatus, and VR system.
  • The methods and devices of the embodiments of the present application are applied to VR scenarios, for example, VR games, and can also be applied to other interactive scenarios, such as interactive VR movies and interactive VR concerts.
  • the embodiments of the present application do not limit this.
  • Real-time rendering technology is involved in various embodiments of the present application.
  • The essence of real-time rendering technology is the real-time calculation and output of graphics data; its key feature is real-time operation.
  • Processors in personal computers (PCs), workstations, game consoles, mobile devices, or VR systems render at least 24 frames per second. In other words, rendering one frame of a picture should take no more than 1/24 of a second. In actual 3D games, the required frame rate is even higher. It is precisely because of this real-time nature that 3D games can be played smoothly and users can interact with characters or other objects in the game scene.
  • the real-time rendering of the embodiments of the present application may be implemented by a central processing unit (CPU) or a graphics processing unit (GPU), which is not limited in this embodiment of the present application.
  • the GPU is a processor dedicated to image computing operations, which may be present in a graphics card, also known as a display core, a visual processor, or a display chip.
  • FIG. 1 is a schematic diagram of a VR system according to an embodiment of the present application. As shown in FIG. 1, the system includes a VR head display device 101 and a computing device 102.
  • The VR head display device 101 may be VR glasses, a VR helmet, or the like, and may include an angle sensor 1011, a signal processor 1012, a data transmitter 1013, and a display 1014.
  • the angle sensor 1011 can collect the posture information of the observer.
  • The computing device 102 can be a smart terminal device such as a personal computer (PC) or a notebook computer, or a smart mobile terminal device such as a smartphone or a tablet computer, and can include a CPU and a GPU for calculating and rendering the observation picture, which is then sent to the display 1014 for display.
  • Signal processor 1012 and data transmitter 1013 are primarily used for communication between VR head-display device 101 and computing device 102.
  • The VR system of the embodiments of the present application may further include: a camera 103 for capturing videos of an object in the VR scene from a plurality of different shooting positions.
  • FIG. 2 is a flowchart of a graphics processing method 200 provided by an embodiment of the present application, which is performed by a computing device 102 in a VR system. As shown in FIG. 2, the method includes the following steps:
  • Step 201 Obtain the position information of the observer.
  • the observer's left eye position information, right eye position information, left eye orientation information, and right eye orientation information are acquired.
  • the left eye position information, the right eye position information, the left eye orientation information, and the right eye orientation information are determined according to the collected current posture information of the user, and the posture information includes a head posture.
  • Step 202 Determine a target object in the virtual reality VR picture to be displayed according to the location information.
  • the target object can be a character.
  • The character is an object whose realism is to be improved, that is, the target object.
  • each scene or multiple scenes may have a target object list, and when the VR scene is generated, the target object in the target scene is found according to the target object list.
  • Regarding the target object list: for example, the game design of the VR scene may stipulate that a person in the near scene (the scene within a certain range of the user) is a target object, that objects other than the person in the near scene are not target objects, that no objects in the distant scene (the scene outside that range) are target objects, and so on. Determining the target object in the scene can be performed by the processing device 34, for example, by the CPU in the processing device 34, which is not limited by the embodiment of the present application.
  • Step 203 Acquire at least two images corresponding to the target object stored in advance, and the at least two images are images respectively taken from different shooting positions.
  • A video frame corresponding to the time information in each video is determined as the image from a plurality of pre-captured videos according to time information of the VR picture to be displayed.
  • The time information can be the current time of the VR picture.
  • Step 204 Generate a target image using the at least two images according to the position information and the shooting positions corresponding to the at least two images, where the target image is an image of the target object corresponding to the position of the observer.
  • The target image is rendered onto a first predetermined texture in the VR picture, wherein the first predetermined texture is based on a billboard patch technique.
  • the plurality of videos are videos that include only the target object after the original video of the plurality of videos is transparently processed, wherein the target object may be a character.
  • At least one video is selected from each of the left and right sides of the average position, and a video frame corresponding to the time information is selected from each selected video as the image.
  • the time information may be current time information of the VR picture, and the image may be interpolated to obtain the target image according to a spatial positional relationship between the average position and the shooting positions of the at least two videos.
  • Determining the target image may include: averaging the left eye position information and the right eye position information to obtain an average position; selecting a target video from the pre-captured videos according to the average position, wherein the distance between the shooting position of the target video and the average position is the smallest of the spatial distances between the shooting positions of the plurality of pre-captured videos and the average position; and selecting one video frame from the target video as the target image.
  • the time information may be current time information of the VR screen.
  • Step 205 Display the VR picture and render the target image in the VR picture.
  • The left eye picture is determined according to the left eye position information and the left eye orientation information; the right eye picture is determined according to the right eye position information and the right eye orientation information; the left eye picture is rendered in real time according to the left eye orientation information and the target image, and the target image is rendered in the left eye picture; the right eye picture is rendered in real time according to the right eye orientation information and the target image, and the target image is rendered in the right eye picture.
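  • Purely as an illustrative sketch (not the claimed implementation), each eye's picture can be set up from that eye's position and orientation with a standard look-at view matrix, and the scene, including the target image, is then drawn once per eye. The function below assumes a right-handed coordinate system and NumPy; none of the names come from the disclosure.

```python
import numpy as np

def look_at(eye_pos, eye_dir, up=np.array([0.0, 1.0, 0.0])):
    """Build a 4x4 view matrix for one eye from its position and orientation.
    A left-eye picture and a right-eye picture are rendered with their own matrices."""
    eye_pos = np.asarray(eye_pos, dtype=float)
    f = eye_dir / np.linalg.norm(eye_dir)            # viewing direction
    r = np.cross(f, up); r /= np.linalg.norm(r)      # camera-right axis
    u = np.cross(r, f)                               # recomputed up axis
    view = np.eye(4)
    view[0, :3], view[1, :3], view[2, :3] = r, u, -f
    view[:3, 3] = -view[:3, :3] @ eye_pos            # translate world into eye space
    return view

# One view matrix per eye; the same scene (including the rendered target image)
# is drawn twice, once with each matrix, to obtain the left and right pictures.
left_view = look_at(np.array([-0.032, 1.6, 0.0]), np.array([0.0, 0.0, -1.0]))
right_view = look_at(np.array([0.032, 1.6, 0.0]), np.array([0.0, 0.0, -1.0]))
```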
  • The technical solution of the embodiment of the present application may determine a target object in the virtual reality VR picture to be displayed according to the position information of the observer; acquire at least two pre-stored images corresponding to the target object, the at least two images being images respectively taken from different shooting positions; generate a target image using the at least two images according to the position information and the shooting positions corresponding to the at least two images, the target image being an image of the target object corresponding to the position of the observer; and present the VR picture and render the target image in the VR picture.
  • the VR picture can realistically display the real scene, and provide the user with a real sense of presence on the basis of maintaining the interactivity of the entire VR scene, thereby improving the user experience.
  • the method 300 is performed by the VR system 30.
  • the VR system 30 can include a gesture collection device 32, a processing device 34, and a display device 36.
  • the method 300 can include the following steps.
  • S310 Collect current user posture information. It should be understood that S310 can be performed by gesture collection device 32.
  • S340 Determine a target video according to the left eye position information, the right eye position information, and the plurality of pre-captured videos, wherein the plurality of videos are videos respectively taken from different shooting positions.
  • S350 Render a left eye picture in real time according to the left eye orientation information, the target three-dimensional model, and the target video.
  • S320 through S360 can be performed by processing device 34.
  • S370 can be performed by display device 36.
  • The graphics processing method of the embodiment of the present application collects the posture information of the user to determine the positions of the left and right eyes of the user, determines the target three-dimensional model according to the position information of the left and right eyes of the user, determines the target video according to the plurality of pre-captured videos, and renders the left-eye picture and the right-eye picture respectively by real-time rendering to display the VR scene. The VR scene includes the image of the target three-dimensional model and the image of the target video, and the target video can realistically display the real scene. On the basis of maintaining the interactivity of the entire VR scene, this provides users with a real sense of presence, which can enhance the user experience.
  • VR system 30 includes a VR head display device, and display device 36 can be integrated into the VR head display device.
  • The processing device 34 and/or the gesture collection device 32 of the embodiment of the present application may be integrated in the VR head display device, or may be deployed separately, independent of the VR head display device, wherein the VR head display device may be a VR head mounted display device, for example, VR glasses or a VR helmet.
  • the gesture collection device 32, the processing device 34, and the display device 36 may be connected by wire or by wireless communication, which is not limited by the embodiment of the present application.
  • the gesture collection device 32 collects the current posture information of the user.
  • Gesture collection device 32 may include a sensor in a VR head mounted display device, such as VR glasses or a VR helmet.
  • the sensor may include a photosensitive sensor, such as an infrared sensor, a camera, etc.; the sensor may also include a force sensitive sensor, such as a gyroscope, etc.; the sensor may also include a magnetic sensor, such as a brain-computer interface, etc.; the sensor may also include an acoustic sensor, etc.
  • The specific form of the sensor is not limited in the embodiment of the present application.
  • the sensor in the VR head mounted display device may collect at least one of a user's current head posture information, eye tracking information, skin sensing information, muscle electrical stimulation information, and brain signal information. Then, the processing device 34 can determine the left eye position information, the right eye position information, the left eye orientation information, and the right eye orientation information of the user based on the information.
  • the user's perspective refers to the azimuth of the user's human eye's line of sight direction in the virtual space, including the position and orientation of the human eye.
  • the user's perspective can change as the user's head changes in the posture in real space.
  • the change in the perspective of the user in the virtual space is the same as the change in the posture of the user's head in the real space.
  • the user's perspective includes the left eye view and the right eye view, that is, the left eye position, the right eye position, the left eye orientation, and the right eye orientation of the user.
  • The sensor on the VR head display device worn by the user can sense the movement and posture of the user's head while the VR head display device is in use, and resolve these motions to obtain related head posture information (for example, the speed and angle of the motion). The processing device 34 can then determine the left eye position information, the right eye position information, the left eye orientation information, and the right eye orientation information of the user according to the obtained head posture information.
  • the gesture collection device 32 may further include a positioner, a manipulation handle, a somatosensory glove, a somatosensory garment, and a dynamic device such as a treadmill, etc., for collecting posture information of the user, and then processed by the processing device 34 to obtain the left eye position information of the user, Right eye position information, left eye orientation information, and right eye orientation information.
  • the posture collecting device 32 can collect the user's limb posture information, trunk posture information, muscle electrical stimulation information, skin sensing information, motion sensing information, and the like through a manipulation handle, a somatosensory glove, a somatosensory garment, and a treadmill.
  • one or more locators may be provided on the VR head display device for monitoring the position of the user's head (which may include height), orientation, and the like.
  • A positioning system may be provided in the real space where the user uses the VR head display device, and the positioning system may perform positioning communication with one or more locators on the VR head display device worn by the user to determine posture information such as the user's specific position (which may include height) and orientation in real space. The above posture information may then be converted by the processing device 34 into information such as the corresponding position (which may include height) and orientation of the user's head in the virtual space; that is, the processing device 34 obtains the user's left eye position information, right eye position information, left eye orientation information, and right eye orientation information.
  • left eye position information and the right eye position information of the embodiment of the present application may be represented by coordinate values in a coordinate system; the left eye orientation information and the right eye orientation information may be represented by a vector in a coordinate system.
  • this embodiment of the present application does not limit this.
  • the gesture collection device 32 sends the gesture information to the processing device 34 through wired communication or wireless communication after the gesture information is collected, which is not described herein.
  • The embodiment of the present application may also collect the posture information of the user by other means, and may obtain and/or represent the left eye position information, the right eye position information, the left eye orientation information, and the right eye orientation information in other ways; the embodiment of the present application is not limited to these specific manners.
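  • As a hedged illustration of how the collected posture information might be converted into the left/right eye positions and orientations described above (the disclosure does not prescribe a formula), a common approach offsets the head position by half the interpupillary distance along the head's right axis. The helper below and the IPD value are assumptions, not part of the patent.

```python
import numpy as np

def eye_poses(head_pos, head_forward, head_up, ipd=0.064):
    """Sketch: derive left/right eye positions and orientations from a head pose.
    head_pos: (3,) head position in the virtual space.
    head_forward, head_up: unit vectors describing the head orientation.
    ipd: assumed interpupillary distance in metres (0.064 is a typical value)."""
    forward = head_forward / np.linalg.norm(head_forward)
    up = head_up / np.linalg.norm(head_up)
    right = np.cross(forward, up)                    # head's right axis
    right /= np.linalg.norm(right)

    left_eye_pos = head_pos - right * (ipd / 2.0)
    right_eye_pos = head_pos + right * (ipd / 2.0)
    # In this simple sketch both eyes share the head's viewing direction.
    return (left_eye_pos, forward), (right_eye_pos, forward)

# Example: head 1.6 m above the origin, looking down -z with +y up.
(left_pos, left_dir), (right_pos, right_dir) = eye_poses(
    np.array([0.0, 1.6, 0.0]),
    np.array([0.0, 0.0, -1.0]),
    np.array([0.0, 1.0, 0.0]))
```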
  • a location is designed to correspond to a group of objects.
  • The objects corresponding to the left eye position LE and the right eye position RE of the user are as shown in FIG. 4.
  • the user's left eye position corresponds to the object L41, the object 43, the object 44, the object 46, and the character 42
  • the user's right eye position corresponds to the object R45, the object 43, the object 44, the object 46, and the character 42.
  • The character 42 is an object whose realism is to be improved, and is a target object.
  • determining which object in the object group corresponding to the left eye position or the right eye position of the user is the target object may be based on the design of the VR scene.
  • each scene or multiple scenes may have a target object list, and when the VR scene is generated, the target object in the target scene is found according to the target object list.
  • For example, it may be stipulated that a person in the near scene (the scene within a certain range of the user) is a target object, that objects other than the person in the near scene are not target objects, that no objects in the distant scene (the scene outside that range) are target objects, and so on.
  • Determining the target object in the scene can be performed by the processing device 34, for example, by the CPU in the processing device 34, which is not limited by the embodiment of the present application.
  • The target three-dimensional model may be generated in advance by 3D modeling and stored in the 3D model library.
  • the 3D models of the object L41, the object 43, the object 44, the object R45, and the object 46 shown in FIG. 4 are all stored in the 3D model library.
  • After the processing device 34 obtains the left eye position information and the right eye position information, the target three-dimensional models, that is, the object L41, the object 43, the object 44, the object R45, and the object 46, are determined from the 3D model library.
  • The 3D model for subsequent rendering can also be determined by other means, which is not limited by the embodiment of the present application.
  • the character 42 in the VR scene shown in FIG. 4 is generated based on a plurality of videos taken in advance.
  • the plurality of videos are videos including target objects respectively taken from different shooting positions.
  • FIG. 5 shows a schematic diagram of a pre-taken scene.
  • The scene to be photographed includes the character 42, the object 52, and the object 54, and should be as close as possible to the finally displayed VR scene to increase the sense of reality.
  • Multiple shooting devices can be placed in the horizontal direction at the shooting position C1, the shooting position C2, and the shooting position C3, respectively, and the original videos of the character at the different shooting positions can be obtained as shown in FIG. 6.
  • shooting may be performed on a circumference having a certain radius from the target object when the video is captured in advance.
  • The more densely the shooting positions are selected on the circumference, the greater the probability that a shooting position is the same as or similar to the left eye position or the right eye position of the user, and the higher the realism of the finally selected or calculated target video placed in the VR scene.
  • The shooting positions used when pre-capturing the videos may lie on a straight line or on a circle having a certain radius from the target object, and may also lie on a plane or a curved surface, or even at arbitrary positions in three-dimensional space, thereby achieving a 360-degree panorama.
  • The plurality of videos may be videos including only the target object after the original videos are transparently processed. Specifically, the person 42 can be separated from the object 52 and the object 54 constituting the background in each of the three videos respectively photographed from the three shooting positions, so as to obtain three videos including only the person 42.
  • The three videos have the same duration and are synchronized in time.
  • the transparent processing may be a processing based on an alpha (alpha) transparent technology.
  • the alpha value is used to record the transparency of the pixels so that the objects can have different degrees of transparency.
  • The target object, the person 42, in the original video may be processed as opaque, and the object 52 and the object 54 constituting the background are processed as transparent.
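  • A minimal sketch of this alpha processing, assuming a chroma-key style criterion (keying on a known background colour) as a stand-in for whatever segmentation was actually applied to the original videos; the key colour and tolerance are illustrative assumptions.

```python
import numpy as np

def make_background_transparent(frame_rgb, key_color=(0, 255, 0), tol=40):
    """Keep the target character opaque (alpha = 255) and make background pixels
    fully transparent (alpha = 0), applied frame by frame to the original video.
    frame_rgb: H x W x 3 uint8 frame; key_color / tol: assumed chroma-key settings."""
    diff = np.abs(frame_rgb.astype(np.int16) - np.array(key_color, dtype=np.int16))
    is_background = diff.max(axis=-1) < tol           # pixels close to the key colour
    alpha = np.where(is_background, 0, 255).astype(np.uint8)
    return np.dstack([frame_rgb, alpha])              # H x W x 4 frame with per-pixel alpha
```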
  • S340, determining the target video according to the left eye position information, the right eye position information, and the plurality of pre-captured videos, may include: averaging the left eye position information and the right eye position information to obtain an average position; and selecting the target video from the plurality of videos according to the average position, wherein the distance between the shooting position of the target video and the average position is the smallest among the distances between all the shooting positions of the plurality of videos and the average position.
  • The left eye position, the right eye position, and the shooting positions may be uniformly represented as coordinates of the virtual space in the VR scene, for example, coordinates in an x-axis, y-axis, z-axis three-axis Cartesian coordinate system, or spherical coordinates.
  • the left eye position, the right eye position, and the shooting position may also be represented in other forms, which is not limited in the embodiment of the present application.
  • the left eye position information and the right eye position information are averaged to obtain an average position.
  • For example, if the left eye position is (x1, y1, z1) and the right eye position is (x2, y2, z2), the average position is ((x1 + x2)/2, (y1 + y2)/2, (z1 + z2)/2).
  • the video whose shooting position is closest to the average position is selected from the plurality of videos as the target video.
  • That the shooting position of the target video is closest to the average position can also be understood as requiring that the distance between the shooting position (xt, yt, zt) of the target video and the average position ((x1 + x2)/2, (y1 + y2)/2, (z1 + z2)/2) is less than a preset threshold, that is, ensuring that the distance between the shooting position of the target video and the average position is small enough.
  • That the shooting position of the target video is closest to the average position can also be understood as requiring that the angle between the line segment formed by the average position and the target object and the line segment formed by the shooting position of the target video and the target object is the smallest among the angles between the line segment formed by the average position and the target object and the line segments formed by all the shooting positions and the target object.
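  • A minimal sketch of this target-video selection, assuming the eye positions, the shooting positions and, optionally, the target object's position are all expressed in the same virtual-space coordinates. Both criteria described above are shown: smallest spatial distance to the average position, and smallest angle at the target object. All names and numbers are illustrative.

```python
import numpy as np

def select_target_video(left_eye, right_eye, shooting_positions, target_pos=None):
    """Pick the index of the pre-captured video whose shooting position best matches the viewer.
    left_eye, right_eye: (3,) eye positions; shooting_positions: (N, 3) array.
    If target_pos is given, the smallest-angle criterion is used instead of distance."""
    shooting_positions = np.asarray(shooting_positions, dtype=float)
    avg = (np.asarray(left_eye) + np.asarray(right_eye)) / 2.0    # average position

    if target_pos is None:
        # Criterion 1: shooting position with the smallest spatial distance.
        dists = np.linalg.norm(shooting_positions - avg, axis=1)
        return int(np.argmin(dists)), avg

    # Criterion 2: smallest angle between (target -> average position)
    # and (target -> shooting position).
    to_avg = avg - np.asarray(target_pos)
    to_cams = shooting_positions - np.asarray(target_pos)
    cos = (to_cams @ to_avg) / (
        np.linalg.norm(to_cams, axis=1) * np.linalg.norm(to_avg) + 1e-9)
    return int(np.argmax(cos)), avg                   # largest cosine = smallest angle

# Example: three shooting positions C1, C2, C3 placed in front of the character.
cams = np.array([[-1.0, 1.6, 2.0], [0.0, 1.6, 2.0], [1.0, 1.6, 2.0]])
idx, avg = select_target_video([-0.032, 1.6, 0.0], [0.032, 1.6, 0.0], cams)
```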
  • S340, determining the target video according to the left eye position information, the right eye position information, and the plurality of pre-captured videos, may also include: averaging the left eye position information and the right eye position information to obtain an average position; selecting at least two videos from the plurality of videos according to the average position; extracting a video frame at the corresponding moment from each of the at least two videos; and interpolating the at least two video frames according to the average position and the shooting positions of the at least two videos to obtain the target video.
  • At least one shooting position on each of the left and right sides of the average position of the user's eyes may be selected, and the videos taken at these shooting positions are selected from the plurality of videos as references for calculating the target video; at least two video frames at the same moment are extracted and interpolated to obtain the target video.
  • Selecting at least two videos from the plurality of videos may mean selecting the at least two videos whose shooting positions have the smallest distances from the average position ((x1 + x2)/2, (y1 + y2)/2, (z1 + z2)/2). At least one of the shooting positions of the at least two videos is located on the left side of the average position, and at least one is located on the right side of the average position.
  • Alternatively, selecting at least two videos from the plurality of videos may mean selecting the at least two videos for which the angle between the line segment formed by the average position and the target object and the line segment formed by the video's shooting position and the target object is the smallest among the angles between the line segment formed by the average position and the target object and the line segments formed by all the shooting positions and the target object. At least one of the shooting positions of the at least two videos is located on the left side of the average position, and at least one is located on the right side of the average position.
  • The videos used as references may also be selected according to other criteria, which is not limited in the embodiment of the present application.
  • the video captured at different shooting positions represents different viewing positions when viewing the target object (eg, the character 42).
  • the video frames corresponding to the three videos shown in FIG. 6 at the same physical moment are images viewed at different viewing positions.
  • The three shooting angles can correspond to the three shooting positions C1, C2, and C3, respectively.
  • a plurality of sets of photos (or groups of images) of the target object are photographed in advance from a plurality of shooting positions.
  • At least two images corresponding to at least two shooting positions are found from the plurality of sets of images, and the at least two images are interpolated to obtain the target image.
  • the specific interpolation algorithm will be described in detail below.
  • FIG. 7 is a schematic diagram of determining a target video according to an embodiment of the present application. At least two videos are selected from the plurality of videos according to the average position, a video frame at the corresponding moment is extracted from each of the at least two videos, and the at least two video frames are interpolated according to the average position and the shooting positions of the at least two videos; the specific process of obtaining the target video can be as shown in FIG. 7.
  • the viewing position may change. For example, when the user faces the VR scene, the viewing position may move in the left and right direction.
  • the three shooting positions are C 1 , C 2 and C 3 , respectively .
  • C 1 , C 2 , and C 3 may be represented by the coordinate values of the three-dimensional Cartesian coordinate system, or may be represented by the coordinate values of the spherical coordinate system, and may be represented by other means, which is not limited by the embodiment of the present application.
  • The average position Cview at which the user observes can be determined. As shown in FIG. 7, the average position Cview lies between C1 and C2.
  • The videos photographed in advance at the shooting positions C1 and C2 are selected as references.
  • For the same moment, the corresponding video frames I1 and I2 may be linearly interpolated. The interpolation weights depend on the distances between the average position Cview and the shooting positions C1 and C2. The video frame of the output target video is Iout = I1 × (1 − |C1 − Cview| / |C1 − C2|) + I2 × (1 − |C2 − Cview| / |C1 − C2|), where |·| denotes the distance between two positions.
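  • The linear interpolation just described can be sketched as follows; the mapping from the VR picture's current time to a frame index (frame index = time x frame rate) is an assumption about how the corresponding video frames are obtained, and the frame rate is illustrative.

```python
import numpy as np

def interpolate_target_frame(video1, video2, c1, c2, c_view, t, fps=30):
    """Weighted blend Iout = I1*(1 - |C1-Cview|/|C1-C2|) + I2*(1 - |C2-Cview|/|C1-C2|).
    video1, video2: sequences of frames shot at positions c1 and c2;
    c_view lies between c1 and c2; t is the VR picture's current time in seconds."""
    idx = int(round(t * fps))                         # assumed time-to-frame mapping
    i1 = np.asarray(video1[idx], dtype=np.float32)
    i2 = np.asarray(video2[idx], dtype=np.float32)

    d12 = np.linalg.norm(np.asarray(c1) - np.asarray(c2))
    w1 = 1.0 - np.linalg.norm(np.asarray(c1) - np.asarray(c_view)) / d12
    w2 = 1.0 - np.linalg.norm(np.asarray(c2) - np.asarray(c_view)) / d12
    # When c_view lies on the segment between c1 and c2, w1 + w2 == 1.
    return np.clip(i1 * w1 + i2 * w2, 0, 255).astype(np.uint8)
```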
  • the embodiment of the present application is described by taking a target object as a character as an example.
  • the target object may also be an animal that requires authenticity, even a building or a plant, etc., which is not limited by the embodiment of the present application.
  • S350, rendering the left-eye picture in real time according to the left-eye orientation information, the target three-dimensional model, and the target video, may include: rendering the target three-dimensional model onto a first texture according to the left-eye orientation information; and rendering the target video onto a second texture according to the left-eye orientation information, wherein the first texture may be the background of the left-eye picture and the second texture is based on the billboard patch technique. S360, rendering the right-eye picture in real time according to the right-eye orientation information, the target three-dimensional model, and the target video, may include: rendering the target three-dimensional model onto a third texture according to the right-eye orientation information; and rendering the target video onto a fourth texture according to the right-eye orientation information, wherein the third texture may be the background of the right-eye picture and the fourth texture is based on the billboard patch technique.
  • The processing device 34 (e.g., the CPU therein) determines the left eye picture that should be presented based on the left eye orientation information, and determines the right eye picture that should be presented based on the right eye orientation information. For example, in the scene shown in FIG. 4, it is determined according to the left-eye orientation information that the object L41, the object 43, the object 44, and the person 42 are in the left-eye picture, and it is determined according to the right-eye orientation information that the object 43, the object 44, the object R45, and the person 42 are in the right-eye picture.
  • The processing device 34 (e.g., a GPU therein) renders the target three-dimensional models, namely the object L41, the object 43, and the object 44, onto the first texture 82 of the left-eye picture L800, and renders the target video onto the second texture 84 of the left-eye picture L800; it renders the target three-dimensional models, namely the object 43, the object 44, and the object R45, onto the third texture 86 of the right-eye picture R800, and renders the target video onto the fourth texture 88 of the right-eye picture R800.
  • A billboard patch may be set at the position of the target object in the picture, and the target video may be presented on the billboard patch.
  • Billboard technology is a fast drawing method in the field of computer graphics. In applications with real-time requirements, such as 3D games, the use of billboard technology can greatly speed up drawing and improve the fluency of 3D game graphics. Billboard technology represents an object in 2D within a 3D scene, so that the object always faces the user.
  • The billboard patch may have an inclination angle in the left eye picture, and the specific parameters of the inclination angle may be calculated according to the left eye position information; similarly, the billboard patch may have an inclination angle in the right eye picture, and the specific parameters of the inclination angle may be calculated according to the right eye position information.
  • Since the VR scene is rendered in real time, at any moment it can be considered that the aforementioned video frame obtained by interpolation is presented at the position of the target object; over a continuous period of time, this is equivalent to playing the video on the billboard patch.
  • A billboard patch is set at the position corresponding to the target object, each frame of the video is drawn as a texture onto the billboard patch, and each frame of the video always faces the user.
  • Depth buffering techniques can be employed in conjunction with billboard technology when rendering the left eye and right eye pictures.
  • the depth buffering technique helps the target object form an occlusion relationship and a proportional relationship with other objects according to the distance.
  • other technologies may be used to render the target video, which is not limited in this embodiment of the present application.
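  • A geometric sketch of the billboard patch described above: a quad anchored at the target object's position is re-oriented every frame so that it faces the viewer, the interpolated video frame is used as its texture, and depth buffering then resolves occlusion against the 3D-modelled objects. The quad size and axis conventions below are assumptions.

```python
import numpy as np

def billboard_corners(anchor, eye_pos, width=1.0, height=2.0,
                      world_up=np.array([0.0, 1.0, 0.0])):
    """Return the four corners of a billboard quad at `anchor` that faces `eye_pos`.
    The interpolated target-video frame is drawn onto this quad as a texture."""
    anchor = np.asarray(anchor, dtype=float)
    normal = np.asarray(eye_pos, dtype=float) - anchor
    normal /= np.linalg.norm(normal)                  # quad normal points at the viewer
    right = np.cross(world_up, normal)
    right /= np.linalg.norm(right)
    up = np.cross(normal, right)

    half_w, half_h = width / 2.0, height / 2.0
    return np.array([
        anchor - right * half_w - up * half_h,        # bottom-left
        anchor + right * half_w - up * half_h,        # bottom-right
        anchor + right * half_w + up * half_h,        # top-right
        anchor - right * half_w + up * half_h,        # top-left
    ])

# Recomputed per frame (and per eye), so the video always faces the user; the quad's
# depth values let the depth buffer occlude it behind, or reveal it in front of,
# the 3D-modelled objects at the correct scale.
corners = billboard_corners([0.0, 1.0, -3.0], [-0.032, 1.6, 0.0])
```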
  • the embodiment of the present application further provides a graphics processing method, including steps S320 to S360, where the method is performed by a processor.
  • The graphics processing method according to the embodiments of the present application has been described in detail above with reference to FIGS. 1 through 8. An apparatus, a processor, and a VR system according to embodiments of the present application will be described in detail below with reference to FIGS. 9A, 9B, and 10.
  • FIG. 9A is a schematic structural diagram of a computing device used in a graphics processing method according to an embodiment of the present application.
  • the computing device 900 includes a processor 901, a non-volatile computer readable memory 902, an I/O interface 903, a display interface 904, and a network communication interface 905. These components communicate over bus 906.
  • a plurality of program modules are stored in the memory 902: an operating system 907, an I/O module 908, a communication module 909, and an image processing device 900A.
  • the processor 901 can read the computer readable instructions corresponding to the image processing device 900A in the memory 902 to implement the solution provided by the embodiment of the present application.
  • the I/O interface 903 can be connected to an input/output device.
  • the I/O interface 903 transmits the input data received from the input device to the I/O module 908 for processing, and transmits the data output by the I/O module 908 to the output device.
  • the network communication interface 905 can transmit data received from the communication bus 906 to the communication module 909 and transmit the data received from the communication module 909 over the communication bus 906.
  • The computer readable instructions corresponding to the image processing apparatus 900A stored in the memory 902 may cause the processor 901 to perform: acquiring position information of an observer; determining a target object in the virtual reality VR picture to be displayed according to the position information; acquiring at least two pre-stored images corresponding to the target object, the at least two images being images respectively taken from different shooting positions; generating a target image using the at least two images according to the position information and the shooting positions corresponding to the at least two images, the target image being an image of the target object corresponding to the position of the observer; and displaying the VR picture and rendering the target image in the VR picture.
  • The instructions may cause the processor 901 to: determine, according to the time information of the VR picture to be displayed, a video frame corresponding to the time information in each video from a plurality of pre-captured videos as the image.
  • The instructions may cause the processor 901 to: render the target image onto a first predetermined texture in the VR picture, wherein the first predetermined texture is based on the billboard patch technique.
  • The instructions may cause the processor 901 to: acquire left eye position information, right eye position information, left eye orientation information, and right eye orientation information of the observer, wherein the VR picture includes a left eye picture and a right eye picture; determine the left eye picture according to the left eye position information and the left eye orientation information; determine the right eye picture according to the right eye position information and the right eye orientation information; render the left eye picture in real time according to the left eye orientation information and the target image, and render the target image in the left eye picture; and render the right eye picture in real time according to the right eye orientation information and the target image, and render the target image in the right eye picture.
  • The instructions may cause the processor 901 to: determine a first object in the VR picture according to the position information; determine a target three-dimensional model corresponding to the first object from a three-dimensional model library; and render the three-dimensional model onto a second predetermined texture of the VR picture.
  • The instructions may cause the processor 901 to: average the left eye position information and the right eye position information to obtain an average position; select at least two videos from the plurality of pre-captured videos according to the average position, the plurality of videos being captured from different shooting positions; select one video frame from each of the at least two videos as the image; and operate on the images according to the spatial positional relationship between the average position and the shooting positions of the at least two videos to obtain the target image.
  • The instructions may cause the processor 901 to: average the left eye position information and the right eye position information to obtain an average position; select a target video from the plurality of pre-captured videos according to the average position, wherein the distance between the shooting position of the target video and the average position is the smallest of the spatial distances between the shooting positions of the plurality of pre-captured videos and the average position; and select a video frame from the target video as the target image.
  • the plurality of videos are videos that include only the target object after the original video of the plurality of videos is transparently processed, the target object being a character.
  • the left eye position information, the right eye position information, the left eye orientation information, and the right eye orientation information are determined according to the collected current posture information of the user.
  • the posture information includes at least one of head posture information, limb posture information, torso posture information, muscle electrical stimulation information, eye tracking information, skin sensing information, motion sensing information, and brain signal information.
  • FIG. 9B is a schematic block diagram of a processor 900B according to an embodiment of the present application.
  • Processor 900B may correspond to processing device 34 as previously described.
  • the processor 900B can include an acquisition module 910, a calculation module 920, and a rendering module 930.
  • the obtaining module 910 is configured to acquire left eye position information, right eye position information, left eye orientation information, and right eye orientation information of the user.
  • The calculating module 920 is configured to determine a target three-dimensional model from the three-dimensional model library according to the left-eye position information and the right-eye position information acquired by the acquiring module. The calculating module 920 is further configured to determine a target video according to the left-eye position information, the right-eye position information, and a plurality of pre-captured videos, wherein the plurality of videos are videos respectively taken from different shooting positions.
  • The rendering module 930 is configured to render the left eye picture in real time according to the left eye orientation information, the target three-dimensional model, and the target video; the rendering module 930 is further configured to render the right eye picture in real time according to the right eye orientation information, the target three-dimensional model, and the target video; wherein the left eye picture and the right eye picture are displayed on a virtual reality VR display, and the VR scene includes the image of the target three-dimensional model and the image of the target video.
  • The graphics processing apparatus of the embodiment of the present application determines the target three-dimensional model according to the position information of the left and right eyes of the user, determines the target video according to the plurality of pre-captured videos, and renders the left-eye picture and the right-eye picture respectively by real-time rendering to display the VR scene. The VR scene includes the image of the target three-dimensional model and the image of the target video, and the target video can realistically display the real scene; on the basis of maintaining the interactivity of the entire VR scene, this provides the user with a real sense of presence, which can enhance the user experience.
  • The rendering module 930 may be configured to: render the target three-dimensional model onto the first texture according to the left-eye orientation information; render the target video onto the second texture according to the left-eye orientation information, wherein the second texture is based on the billboard patch technique; render the target three-dimensional model onto the third texture according to the right-eye orientation information; and render the target video onto the fourth texture according to the right-eye orientation information, wherein the fourth texture is based on the billboard patch technique.
  • The calculating module 920 determining the target video according to the left eye position information, the right eye position information, and the plurality of pre-captured videos may include: averaging the left eye position information and the right eye position information to obtain an average position; selecting at least two videos from the plurality of videos according to the average position; extracting a video frame at the corresponding moment from each of the at least two videos; and interpolating the at least two video frames according to the average position and the shooting positions of the at least two videos to obtain the target video.
  • The calculating module 920 determining the target video according to the left eye position information, the right eye position information, and the plurality of pre-captured videos may also include: averaging the left eye position information and the right eye position information to obtain an average position; and selecting the target video from the plurality of videos according to the average position, wherein the distance between the shooting position of the target video and the average position is the smallest among the distances between all the shooting positions of the plurality of videos and the average position.
  • the multiple videos are videos that only include the target object after the original video is transparently processed.
  • the target object is a character.
  • The left eye position information, the right eye position information, the left eye orientation information, and the right eye orientation information acquired by the acquiring module 910 are determined according to the collected current posture information of the user.
  • the posture information includes at least one of head posture information, limb posture information, trunk posture information, muscle electrical stimulation information, eyeball tracking information, skin sensing information, motion sensing information, and brain signal information.
  • the processor 900B may be a CPU or a GPU.
  • the processor 900B may also include both the functions of the CPU and the functions of the GPU.
  • the functions of the acquisition module 910 and the calculation module 920 (S320 to S340) are executed by the CPU, and the functions of the rendering module 930 (S350 and S360) are performed by the GPU. This embodiment of the present application does not limit this.
  • FIG. 10 is a schematic diagram of a VR system according to an embodiment of the present application. Shown in FIG. 10 is a VR helmet 1000 that may include a head tracker 1010, a CPU 1020, a GPU 1030, and a display 1040. Wherein, the head tracker 1010 corresponds to the gesture collecting device, the CPU 1020 and the GPU 1030 correspond to the processing device, and the display 1040 corresponds to the display device, where the functions of the head tracker 1010, the CPU 1020, the GPU 1030 and the display 1040 are not Let me repeat.
  • the head tracker 1010, the CPU 1020, the GPU 1030, and the display 1040 shown in FIG. 10 are integrated in the VR helmet 1000. There may also be other posture collection devices outside the VR helmet 1000 that collect the user's posture information and send it to the CPU 1020 for processing, which is not limited in the embodiment of the present application.
  • FIG. 11 is a schematic diagram of another VR system of an embodiment of the present application.
  • FIG. 11 shows a VR system composed of VR glasses 1110 and a host 1120.
  • the VR glasses 1110 may include an angle sensor 1112, a signal processor 1114, a data transmitter 1116, and a display 1118.
  • the angle sensor 1112 corresponds to the posture collection device
  • the host 1120 includes a CPU and a GPU, which correspond to the processing device and are used to calculate and render the pictures
  • the display 1118 corresponds to the display device.
  • the angle sensor 1112 collects the user's posture information and transmits it to the host 1120 for processing; the host 1120 calculates and renders the left-eye picture and the right-eye picture, and transmits them to the display 1118 for display.
  • The signal processor 1114 and the data transmitter 1116 are primarily used for communication between the VR glasses 1110 and the host 1120.
  • other posture collection devices may also be provided outside the VR glasses 1110 to collect the user's posture information and send it to the host 1120 for processing; this embodiment of the present application does not limit this.
  • the virtual reality system of the embodiment of the present application collects the user's posture information to determine the positions of the user's left and right eyes, determines the target three-dimensional model according to the position information of the left and right eyes, determines the target video according to the plurality of pre-captured videos, and renders the left-eye picture and the right-eye picture in real time to display the VR scene, wherein the VR scene includes the picture of the target three-dimensional model and the picture of the target video. The target video can realistically present the real scene; on the basis of keeping the entire VR scene interactive, it provides users with a real sense of presence, which can enhance the user experience.
  • the embodiment of the present application further provides a computer readable storage medium having stored thereon instructions that, when executed on a computer, cause the computer to execute the graphics processing method of the foregoing method embodiment.
  • the computer may be the above VR system or a processor.
  • the embodiment of the present application further provides a computer program product comprising instructions, wherein, when a computer runs the instructions of the computer program product, the computer executes the graphics processing method of the foregoing method embodiment.
  • the computer program product can be run in a VR system or processor.
  • the computer program product includes one or more computer instructions.
  • the computer can be a general purpose computer, a special purpose computer, a computer network, or other programmable device.
  • the computer instructions can be stored in a computer readable storage medium or transferred from one computer readable storage medium to another computer readable storage medium; for example, the computer instructions can be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wire (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wirelessly (e.g., infrared, radio, microwave).
  • the computer readable storage medium can be any available medium that can be accessed by a computer, or a data storage device, such as a server or data center, that integrates one or more available media.
  • the available medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, a high-density digital video disc (DVD)), or a semiconductor medium (for example, a solid state drive (SSD)), and so on.
  • the disclosed system, apparatus, and method may be implemented in other manners.
  • the device embodiments described above are merely illustrative.
  • the division of units is only a logical function division; in actual implementation there may be other division manners, for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed.
  • the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, devices, or units, and may be in electrical, mechanical, or other forms.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Computer Graphics (AREA)
  • Biomedical Technology (AREA)
  • Dermatology (AREA)
  • General Health & Medical Sciences (AREA)
  • Neurology (AREA)
  • Neurosurgery (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Processing Or Creating Images (AREA)

Abstract

Disclosed are a graphics processing method, a processor, and a virtual reality system. The method comprises the steps of: acquiring information associated with a viewer's position; determining, according to the information associated with the viewer's position, a target object in a virtual reality (VR) picture to be displayed; acquiring at least two pre-stored images corresponding to the target object, the at least two images having been captured at different shooting positions; generating a target image from the at least two images according to the information associated with the viewer's position and the shooting positions corresponding to the at least two images, the target image being an image of the target object corresponding to the viewer's position; and displaying the VR picture, the target image being rendered in the VR picture.
PCT/CN2018/084714 2017-05-25 2018-04-27 Procédé de traitement graphique, processeur, et système de réalité virtuelle WO2018214697A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710379516.5 2017-05-25
CN201710379516.5A CN107315470B (zh) 2017-05-25 2017-05-25 图形处理方法、处理器和虚拟现实系统

Publications (1)

Publication Number Publication Date
WO2018214697A1 true WO2018214697A1 (fr) 2018-11-29

Family

ID=60182018

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/084714 WO2018214697A1 (fr) 2017-05-25 2018-04-27 Procédé de traitement graphique, processeur, et système de réalité virtuelle

Country Status (3)

Country Link
CN (1) CN107315470B (fr)
TW (1) TWI659335B (fr)
WO (1) WO2018214697A1 (fr)

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107315470B (zh) * 2017-05-25 2018-08-17 腾讯科技(深圳)有限公司 图形处理方法、处理器和虚拟现实系统
CN110134222A (zh) * 2018-02-02 2019-08-16 上海集鹰科技有限公司 一种vr头显定位瞄准系统及其定位瞄准方法
CN108616752B (zh) * 2018-04-25 2020-11-06 北京赛博恩福科技有限公司 支持增强现实交互的头戴设备及控制方法
CN109032350B (zh) * 2018-07-10 2021-06-29 深圳市创凯智能股份有限公司 眩晕感减轻方法、虚拟现实设备及计算机可读存储介质
CN110570513B (zh) * 2018-08-17 2023-06-20 创新先进技术有限公司 一种展示车损信息的方法和装置
CN111065053B (zh) * 2018-10-16 2021-08-17 北京凌宇智控科技有限公司 一种视频串流的系统及方法
US11500455B2 (en) 2018-10-16 2022-11-15 Nolo Co., Ltd. Video streaming system, video streaming method and apparatus
CN111064985A (zh) * 2018-10-16 2020-04-24 北京凌宇智控科技有限公司 一种实现视频串流的系统、方法及装置
CN109976527B (zh) * 2019-03-28 2022-08-12 重庆工程职业技术学院 交互式vr展示系统
CN112015264B (zh) * 2019-05-30 2023-10-20 深圳市冠旭电子股份有限公司 虚拟现实显示方法、虚拟现实显示装置及虚拟现实设备
CN111857336B (zh) * 2020-07-10 2022-03-25 歌尔科技有限公司 头戴式设备及其渲染方法、存储介质
CN112073669A (zh) * 2020-09-18 2020-12-11 三星电子(中国)研发中心 一种视频通信的实现方法和装置
CN112308982A (zh) * 2020-11-11 2021-02-02 安徽山水空间装饰有限责任公司 一种装修效果展示方法及其装置
CN113436489A (zh) * 2021-06-09 2021-09-24 深圳大学 一种基于虚拟现实的留学体验系统及留学体验方法
CN115713614A (zh) * 2022-11-25 2023-02-24 立讯精密科技(南京)有限公司 一种图像场景构造方法、装置、电子设备和存储介质

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1809844A (zh) * 2003-06-20 2006-07-26 日本电信电话株式会社 虚拟视点图像生成方法和三维图像显示方法及装置
CN106385576A (zh) * 2016-09-07 2017-02-08 深圳超多维科技有限公司 立体虚拟现实直播方法、装置及电子设备
CN106507086A (zh) * 2016-10-28 2017-03-15 北京灵境世界科技有限公司 一种漫游实景vr的3d呈现方法
CN107315470A (zh) * 2017-05-25 2017-11-03 腾讯科技(深圳)有限公司 图形处理方法、处理器和虚拟现实系统

Family Cites Families (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100656342B1 (ko) * 2004-12-16 2006-12-11 한국전자통신연구원 다중 입체 영상 혼합 제시용 시각 인터페이스 장치
US8400493B2 (en) * 2007-06-25 2013-03-19 Qualcomm Incorporated Virtual stereoscopic camera
KR101629479B1 (ko) * 2009-11-04 2016-06-10 삼성전자주식회사 능동 부화소 렌더링 방식 고밀도 다시점 영상 표시 시스템 및 방법
WO2011111349A1 (fr) * 2010-03-10 2011-09-15 パナソニック株式会社 Dispositif d'affichage vidéo 3d et procédé de réglage de parallaxe
CN102404584B (zh) * 2010-09-13 2014-05-07 腾讯科技(成都)有限公司 调整场景左右摄像机的方法及装置、3d眼镜、客户端
US9292973B2 (en) * 2010-11-08 2016-03-22 Microsoft Technology Licensing, Llc Automatic variable virtual focus for augmented reality displays
US9255813B2 (en) * 2011-10-14 2016-02-09 Microsoft Technology Licensing, Llc User controlled real object disappearance in a mixed reality display
WO2014033306A1 (fr) * 2012-09-03 2014-03-06 SensoMotoric Instruments Gesellschaft für innovative Sensorik mbH Système de visiocasque, et procédé permettant de calculer un flux d'images numériques et d'assurer le rendu de ce flux d'images numériques à l'aide d'un système de visiocasque
US9451162B2 (en) * 2013-08-21 2016-09-20 Jaunt Inc. Camera array including camera modules
US20150358539A1 (en) * 2014-06-06 2015-12-10 Jacob Catt Mobile Virtual Reality Camera, Method, And System
CN104679509B (zh) * 2015-02-06 2019-11-15 腾讯科技(深圳)有限公司 一种渲染图形的方法和装置
EP3356877A4 (fr) * 2015-10-04 2019-06-05 Thika Holdings LLC Casque à réalité virtuelle répondant au regard de l' oeil
CN106527696A (zh) * 2016-10-31 2017-03-22 宇龙计算机通信科技(深圳)有限公司 一种实现虚拟操作的方法以及可穿戴设备
CN106657906B (zh) * 2016-12-13 2020-03-27 国家电网公司 具有自适应场景虚拟现实功能的信息设备监控系统
CN106643699B (zh) * 2016-12-26 2023-08-04 北京互易科技有限公司 一种虚拟现实系统中的空间定位装置和定位方法

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1809844A (zh) * 2003-06-20 2006-07-26 日本电信电话株式会社 虚拟视点图像生成方法和三维图像显示方法及装置
CN106385576A (zh) * 2016-09-07 2017-02-08 深圳超多维科技有限公司 立体虚拟现实直播方法、装置及电子设备
CN106507086A (zh) * 2016-10-28 2017-03-15 北京灵境世界科技有限公司 一种漫游实景vr的3d呈现方法
CN107315470A (zh) * 2017-05-25 2017-11-03 腾讯科技(深圳)有限公司 图形处理方法、处理器和虚拟现实系统

Also Published As

Publication number Publication date
TW201835723A (zh) 2018-10-01
CN107315470B (zh) 2018-08-17
TWI659335B (zh) 2019-05-11
CN107315470A (zh) 2017-11-03

Similar Documents

Publication Publication Date Title
WO2018214697A1 (fr) Procédé de traitement graphique, processeur, et système de réalité virtuelle
CN110908503B (zh) 跟踪设备的位置的方法
CN107852573B (zh) 混合现实社交交互
JP6632443B2 (ja) 情報処理装置、情報処理システム、および情報処理方法
US8878846B1 (en) Superimposing virtual views of 3D objects with live images
US9262950B2 (en) Augmented reality extrapolation techniques
US10607403B2 (en) Shadows for inserted content
JP2021530817A (ja) 画像ディスプレイデバイスの位置特定マップを決定および/または評価するための方法および装置
JP7073481B2 (ja) 画像表示システム
CN112198959A (zh) 虚拟现实交互方法、装置及系统
KR101892735B1 (ko) 직관적인 상호작용 장치 및 방법
TWI758869B (zh) 互動對象的驅動方法、裝置、設備以及電腦可讀儲存介質
CN111862348B (zh) 视频显示方法、视频生成方法、装置、设备及存储介质
KR20200138349A (ko) 화상 처리 방법 및 장치, 전자 디바이스, 및 저장 매체
CN115917474A (zh) 在三维环境中呈现化身
CN106843790B (zh) 一种信息展示系统和方法
US11302023B2 (en) Planar surface detection
JP2016105279A (ja) 視覚データを処理するための装置および方法、ならびに関連するコンピュータプログラム製品
US20230037750A1 (en) Systems and methods for generating stabilized images of a real environment in artificial reality
CN107065164B (zh) 图像展示方法及装置
JP6775669B2 (ja) 情報処理装置
US11128836B2 (en) Multi-camera display
CN113678173A (zh) 用于虚拟对象的基于图绘的放置的方法和设备
CN108388351B (zh) 一种混合现实体验系统
US20240005600A1 (en) Nformation processing apparatus, information processing method, and information processing program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18806762

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18806762

Country of ref document: EP

Kind code of ref document: A1