WO2020140720A1 - Rendering method, apparatus and device for virtual reality scene - Google Patents

Rendering method, apparatus and device for virtual reality scene

Info

Publication number
WO2020140720A1
Authority
WO
WIPO (PCT)
Prior art keywords
rendering
virtual reality
area
reality scene
display
Prior art date
Application number
PCT/CN2019/124860
Other languages
English (en)
French (fr)
Inventor
丁亚东
孙剑
郭子强
林琳
訾峰
刘炳鑫
邵继洋
王亚坤
孙宾华
Original Assignee
BOE Technology Group Co., Ltd. (京东方科技集团股份有限公司)
Beijing BOE Optoelectronics Technology Co., Ltd. (北京京东方光电科技有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by BOE Technology Group Co., Ltd. (京东方科技集团股份有限公司) and Beijing BOE Optoelectronics Technology Co., Ltd. (北京京东方光电科技有限公司)
Priority to US16/764,401 priority Critical patent/US11263803B2/en
Publication of WO2020140720A1 publication Critical patent/WO2020140720A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/10Geometric effects
    • G06T15/20Perspective computation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/10Geometric effects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013Eye tracking input arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/0093Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00 with means for monitoring data relating to the user, e.g. head-tracking, eye-tracking
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2215/00Indexing scheme for image rendering
    • G06T2215/16Using real world measurements to influence rendering

Definitions

  • the present application relates to the field of virtual reality technology, and in particular, to a rendering method, device and equipment for virtual reality scenes.
  • As a simulation technology that can create and experience a virtual world, virtual reality technology has gradually become one of the research hotspots in human-computer interaction. With the development of virtual reality technology, users' requirements for the realism and sense of immersion of virtual reality have become higher and higher.
  • The first purpose of this application is to propose a method for rendering virtual reality scenes. By rendering images in the rendering idle state and calling them at display time, the method solves the problem of the display refresh rate being limited by the low GPU rendering refresh rate, thereby improving the display refresh rate, reducing latency, and reducing the user's dizziness when using a virtual reality device.
  • the second purpose of the present application is to propose a rendering device for virtual reality scenes.
  • the third purpose of this application is to propose a virtual reality device.
  • The fourth purpose of the present application is to propose a computer-readable storage medium.
  • An embodiment of the first aspect of the present application provides a rendering method of a virtual reality scene, including:
  • The method for rendering a virtual reality scene in the embodiment of the present application obtains a virtual reality scene and determines whether the device is in a rendering idle state; if so, it performs image rendering on the virtual reality scene to generate display images, and stores the correspondence between each display image and its display area. It then obtains the target area of the virtual reality scene to be displayed, and calls and displays the target display image corresponding to the target area according to the stored correspondence. By rendering images in the rendering idle state and calling them at display time, this avoids the display refresh rate being limited by the low GPU rendering refresh rate of real-time GPU rendering, thereby increasing the display refresh rate, reducing latency, and reducing the user's dizziness when using the virtual reality device. In addition, while the display refresh rate is improved, the authenticity of the picture is ensured, and calling pre-rendered display images reduces the power consumption of the device and improves its battery life.
  • the judging whether it is in the rendering idle state includes: judging whether it is in the scene initialization state, and if so, determining that it is in the rendering idle state;
  • the rendering of the virtual reality scene includes: acquiring the gaze area in the virtual reality scene; and performing image rendering on the gaze area.
  • the method further includes: acquiring a non-gazing area in the virtual reality scene; and performing image rendering on the non-gazing area in a preset order.
  • the judging whether it is in a rendering idle state includes: judging whether the device is in a scene display state and the gaze area has completed image rendering, and if so, determining that it is in a rendering idle state;
  • the rendering of the virtual reality scene includes: obtaining the non-gazing area in the virtual reality scene; and performing image rendering on the non-gazing area according to a preset order.
  • the method further includes: interrupting the image rendering operation when the position of the gaze area changes, and acquiring the changed gaze area; and performing image rendering on the changed gaze area.
  • the rendering of the virtual reality scene includes: obtaining an area to be rendered, and determining whether the area to be rendered has completed image rendering; if so, determining the next area to be rendered according to the area to be rendered, and performing image rendering on the next area to be rendered.
  • the image rendering of the virtual reality scene, generating a display image and storing the correspondence between the display image and the display area includes: establishing a coordinate system according to the virtual reality scene, and dividing the virtual reality scene into a plurality of display areas according to the coordinate system; performing image rendering on each display area to generate a display image; and storing the display image and the correspondence between the display image and the display area.
  • An embodiment of the second aspect of the present application provides a rendering device of a virtual reality scene, including:
  • the judgment module is used to obtain the virtual reality scene and judge whether it is in the rendering idle state;
  • the processing module is configured to perform image rendering on the virtual reality scene if it is learned that the rendering is idle, generate a display image, and store the correspondence between the display image and the display area;
  • the display module is configured to acquire a target area to be displayed of the virtual reality scene, and call and display a target display image corresponding to the target area according to the corresponding relationship.
  • The device for rendering a virtual reality scene in the embodiment of the present application obtains a virtual reality scene and determines whether the device is in a rendering idle state; if so, it performs image rendering on the virtual reality scene to generate display images, and stores the correspondence between each display image and its display area. It then obtains the target area of the virtual reality scene to be displayed, and calls and displays the target display image corresponding to the target area according to the stored correspondence. By rendering images in the rendering idle state and calling them at display time, this avoids the display refresh rate being limited by the low GPU rendering refresh rate of real-time GPU rendering, thereby increasing the display refresh rate, reducing latency, and reducing the user's dizziness when using the virtual reality device. In addition, while the display refresh rate is improved, the authenticity of the picture is ensured, and calling pre-rendered display images reduces the power consumption of the device and improves its battery life.
  • the judgment module is specifically used to judge whether the device is in the scene initialization state, and if so, to determine that it is in the rendering idle state; the processing module is specifically used to: obtain the gaze area in the virtual reality scene; and perform image rendering on the gaze area.
  • the processing module is specifically configured to: obtain a non-gaze area in the virtual reality scene; and perform image rendering on the non-gaze area in a preset order.
  • the judgment module is specifically used to judge whether the device is in a scene display state and the gaze area has completed image rendering, and if so, to determine that it is in a rendering idle state; the processing module is specifically used to: obtain the non-gaze area in the virtual reality scene; and perform image rendering on the non-gaze area according to a preset order.
  • the device further includes: an interrupt module, configured to interrupt an image rendering operation when the position of the gaze area changes, and acquire the changed gaze area; and perform image rendering on the changed gaze area.
  • the processing module is further configured to: obtain an area to be rendered, and determine whether the area to be rendered has completed image rendering; if so, determine the next area to be rendered according to the area to be rendered, and Image rendering is performed on the next area to be rendered.
  • An embodiment of the third aspect of the present application proposes a virtual reality device, including the apparatus for rendering a virtual reality scene as described in the embodiment of the second aspect.
  • An embodiment of the fourth aspect of the present application provides a computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, it implements the method for rendering a virtual reality scene as described in the embodiment of the first aspect.
  • FIG. 1 is a schematic flowchart of a method for rendering a virtual reality scene provided by an embodiment of the present application
  • FIG. 2 is a schematic flowchart of another virtual reality scene rendering method provided by an embodiment of the present application.
  • FIG. 3 is a schematic diagram of a virtual reality scene
  • FIG. 4 is a schematic diagram of rendering a virtual reality scene
  • FIG. 5 is a schematic flowchart of another virtual reality scene rendering method provided by an embodiment of the present application.
  • FIG. 6 is a schematic diagram of another virtual reality scene rendering
  • FIG. 8 is a schematic structural diagram of a rendering device for a virtual reality scene provided by an embodiment of the present application.
  • FIG. 9 is a schematic structural diagram of another virtual reality scene rendering device provided by an embodiment of the present application.
  • FIG. 10 shows a block diagram of an exemplary electronic device suitable for implementing embodiments of the present application.
  • FIG. 1 is a schematic flowchart of a method for rendering a virtual reality scene provided by an embodiment of the present application. As shown in FIG. 1, the method includes:
  • Step 101 Obtain a virtual reality scene and determine whether it is in a rendering idle state.
  • When displaying a virtual reality scene, the virtual reality scene may be acquired first. For example, when performing medical diagnosis through a virtual reality scene, a user's selection instruction for the medical scene can be received, and the medical virtual reality scene can be obtained according to the instruction. As another example, when teaching through a virtual reality scene, a user's selection instruction for an educational scene may be received, and an educational virtual reality scene may be obtained according to the instruction.
  • Next, it is determined whether rendering is idle; for example, it is determined whether the virtual reality device is in a rendering idle state.
  • In one example, it can be determined whether the device is in the scene initialization state, and if so, it is determined to be in the rendering idle state.
  • In another example, when the device is in the scene display state (for example, when the virtual reality device is displaying the virtual reality scene) and the gaze area has completed image rendering, it is determined to be in the rendering idle state.
  • Step 102: If so, perform image rendering on the virtual reality scene, generate a display image, and store the correspondence between the display image and the display area.
  • the corresponding rendering information may be obtained according to the virtual reality scene that has been acquired to perform image rendering.
  • three-dimensional model information, three-dimensional animation definition information, material information, etc. corresponding to the virtual reality scene can be acquired, the virtual reality scene can be image rendered, and then a display image can be generated.
  • the virtual reality scene can be rendered according to different strategies to generate a display image, and the correspondence between the display image and the display area is stored.
  • a partial area in the virtual reality scene may be acquired, and the partial area is image-rendered to generate a display image corresponding to the partial area.
  • all regions of the virtual reality scene can be image rendered and display images can be generated.
  • a coordinate system can be established based on the virtual reality scene, and the virtual reality scene can be divided into multiple display areas according to the coordinate system.
  • For example, image rendering is performed on display area 1 to generate display image 2; display image 2 is stored, and a mapping table is generated and stored according to the correspondence between display image 2 and display area 1.
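The grid division and mapping-table idea described above can be sketched as follows. This is a minimal illustrative sketch, not the patent's implementation: all names (`divide_scene`, `render_area`, `build_mapping_table`) and the use of a plain dictionary as the mapping table are assumptions, and `render_area` is only a stand-in for actual GPU rendering.

```python
def divide_scene(width, height, cell):
    """Divide the scene into square display areas of side length `cell`,
    expressed as (x, y, w, h) tuples in a scene coordinate system."""
    areas = []
    for y in range(0, height, cell):
        for x in range(0, width, cell):
            areas.append((x, y, cell, cell))
    return areas

def render_area(area):
    """Stand-in for GPU rendering of one display area into a display image."""
    return f"image_for_{area}"

def build_mapping_table(areas):
    """Render each area during the rendering idle state and store the
    area -> display image correspondence for later lookup at display time."""
    return {area: render_area(area) for area in areas}

areas = divide_scene(width=8, height=4, cell=2)
table = build_mapping_table(areas)
```

At display time, only a dictionary lookup is needed instead of a fresh render, which is the mechanism behind the higher display refresh rate the embodiments describe.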
  • Step 103 Obtain the target area to be displayed of the virtual reality scene, and call and display the target display image corresponding to the target area according to the corresponding relationship.
  • For example, the position and posture information of the user's head can be detected by sensors such as gyroscopes and accelerometers, the user's fixation point position in the virtual reality scene can be obtained, and the gaze area is determined according to the fixation point position; the target area to be displayed is then determined according to the gaze area. Further, by querying the pre-stored correspondence between display images and display areas, the corresponding target display image is obtained according to the target area and displayed.
  • For example, the virtual reality scene includes display areas A and B, corresponding to display images a and b respectively; if the target area is matched to display area A according to the position and posture information detected by the sensors, the corresponding display image a is obtained and displayed.
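The display-time call in the example above amounts to a lookup in the stored correspondence. A minimal sketch, with all names (`image_for_target`, the `mapping` contents) being illustrative assumptions rather than anything specified by the patent:

```python
# Stored correspondence between display areas and pre-rendered display images,
# mirroring the A -> a, B -> b example above.
mapping = {"A": "display_image_a", "B": "display_image_b"}

def image_for_target(target_area, table):
    """Return the pre-rendered display image for the target area, instead of
    rendering it on the fly at display time."""
    image = table.get(target_area)
    if image is None:
        raise KeyError(f"area {target_area!r} has not been rendered yet")
    return image
```

For instance, once head-pose matching selects area "A", `image_for_target("A", mapping)` yields the stored image a directly.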
  • In the related art, GPUs (Graphics Processing Units) are used to render images in real time when displaying virtual reality scenes. Because the rendering capability of a GPU is limited, its relatively low rendering refresh rate limits the display refresh rate during real-time rendering, resulting in a lower display refresh rate and a lower frame rate.
  • For example, a GPU rendering refresh rate of 30-40 Hz is much smaller than a display refresh rate of 60-120 Hz. The frame rate can be increased by inserting a fake frame, or by calling the previous frame image plus an offset for display, but the resulting picture is less authentic and is not the exact picture that should be displayed at that moment.
  • Compared with inserting a fake frame or calling the previous frame image plus an offset for display, the rendering method of the virtual reality scene in the embodiment of the present application improves the display refresh rate while ensuring the authenticity of the picture, and, by calling display images that have already been rendered, reduces the power consumption of the device.
  • The method for rendering a virtual reality scene in the embodiment of the present application obtains a virtual reality scene and determines whether the device is in a rendering idle state; if so, it performs image rendering on the virtual reality scene to generate display images, and stores the correspondence between each display image and its display area. It then obtains the target area of the virtual reality scene to be displayed, and calls and displays the target display image corresponding to the target area according to the stored correspondence. By rendering images in the rendering idle state and calling them at display time, this avoids the display refresh rate being limited by the low GPU rendering refresh rate of real-time GPU rendering, thereby increasing the display refresh rate, reducing latency, and reducing the user's dizziness when using the virtual reality device. In addition, while the display refresh rate is improved, the authenticity of the picture is ensured, and calling pre-rendered display images reduces the power consumption of the device and improves its battery life.
  • FIG. 2 is a schematic flowchart of another virtual reality scene rendering method provided by an embodiment of the present application. As shown in FIG. 2, the method includes:
  • Step 201: Determine whether the device is in the scene initialization state, and if so, determine that it is in the rendering idle state.
  • In one example, a scene initialization instruction may be acquired, and the scene is determined to be in the initialization state within a preset time after the initialization instruction is received.
  • the preset time can be determined based on a large amount of experimental data, or can be set according to need, and there is no limitation here.
  • the display state of the virtual reality device may be obtained, and if it is known that the current virtual reality device is not displaying a virtual reality scene, it is determined to be in a scene initialization state.
  • Step 202 Obtain the gaze area in the virtual reality scene.
  • Step 203 Perform image rendering on the gaze area.
  • For example, the position and posture information of the user's head can be detected by sensors such as gyroscopes and accelerometers to obtain the user's gaze point position in the virtual reality scene, and the gaze area is then determined based on the detected gaze point position together with parameters such as the gaze range of the human eye, the field angle, and the screen center.
  • the gaze area in the virtual reality scene may be acquired, and the gaze area may be image rendered to generate a corresponding display image.
  • the gaze area is rendered in the scene initialization state to generate a display image, and then when the scene initialization is completed and the scene is displayed, the display image corresponding to the gaze area is called for display.
  • Step 204 Obtain the non-gazing area in the virtual reality scene.
  • Step 205 Perform image rendering on the non-gazing area according to a preset order.
  • The non-gazing area in the virtual reality scene can also be rendered, and a corresponding display image can be generated. That is to say, in the scene initialization state, image rendering of the entire virtual reality scene can be completed to generate and store the corresponding display images, so that when the user uses the virtual reality scene, the display images are called and displayed to increase the frame rate and reduce latency and dizziness.
  • In one example, the distance between each non-gazing area and the center of the screen may be obtained, and the non-gazing areas may then be image-rendered in order from the center of the screen outwards, generating the corresponding display images.
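The centre-outwards ordering just described can be sketched as a sort by distance from the screen centre. This is a hypothetical sketch: the function name, the representation of areas as centre-point tuples, and the Euclidean distance metric are all assumptions for illustration.

```python
import math

def center_out_order(areas, center):
    """Return the non-gaze areas sorted by distance from the screen centre,
    so rendering proceeds from the centre of the screen outwards."""
    cx, cy = center
    def dist(area):
        ax, ay = area
        return math.hypot(ax - cx, ay - cy)
    return sorted(areas, key=dist)

# Areas given by their centre coordinates; screen centre at (2, 1).
areas = [(0, 0), (4, 0), (1, 1), (3, 2)]
order = center_out_order(areas, center=(2, 1))
```

Ties in distance keep their original order because Python's `sorted` is stable, which gives a deterministic preset order as the embodiments require.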
  • the following describes the application scenarios for rendering the gaze area and the non-gaze area.
  • points a and b in the figure are fixation points, and point c is an intermediate point between the two fixation points.
  • A coordinate system can be established based on the virtual reality scene area, and the virtual reality scene can be divided into multiple areas according to the coordinate system, for example into the multiple square areas shown in the figure.
  • the display image is displayed to the human eye through the lens or the display component on the virtual reality device.
  • FIG. 4 is a schematic plan view of the virtual reality scene area in FIG. 3.
  • For fixation points a(x1, y1) and b(x2, y2), the left-eye fixation area is the region enclosed by (x1±d, y1±d), and the right-eye fixation area is the region enclosed by (x2±d, y2±d), where d is the side length of a square area in the figure.
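The per-eye fixation-area bounds above can be written out directly. A minimal sketch under the stated definition, where the function name and the (x_min, y_min, x_max, y_max) return convention are illustrative assumptions:

```python
def fixation_area(point, d):
    """For a fixation point (x, y) and square side length d, return the
    region enclosed by (x - d, y - d) .. (x + d, y + d) as
    (x_min, y_min, x_max, y_max)."""
    x, y = point
    return (x - d, y - d, x + d, y + d)

left = fixation_area((3, 2), d=1)   # left eye, fixation point a(x1, y1)
right = fixation_area((6, 2), d=1)  # right eye, fixation point b(x2, y2)
```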
  • the image is first rendered on the gaze area, and the display image is generated and stored.
  • The non-gazing areas are rendered sequentially, in ascending order of the area numbers in the figure, and the display images are stored, to be called when the image of a given area needs to be displayed.
  • the area numbers in the figure can be determined according to the preset correspondence relationship of the data table, and only represent a sequence example of rendering the non-gazing area.
  • the shape, size, and relative position of the gaze area are only examples, and are used to explain the present application, and are not limited herein.
  • The display image is called and displayed when the user uses the virtual reality scene, to increase the frame rate and reduce latency and dizziness.
  • FIG. 5 is a schematic flowchart of another virtual reality scene rendering method provided by an embodiment of the present application. As shown in FIG. 5, the method includes:
  • Step 301: Determine whether the device is in the scene display state and the gaze area has completed image rendering; if so, determine that it is in the rendering idle state.
  • the display state of the virtual reality device may be obtained, and if it is known that the current virtual reality device is displaying the virtual reality scene, it is further determined whether the current gaze area has completed image rendering. If it is known that the current gaze area has completed image rendering, it is determined to be in the rendering idle state.
  • the virtual reality scene is rendered in real time by the GPU.
  • the image rendering is performed on other areas in advance.
  • For example, the virtual reality scene includes areas 1, 2, and 3.
  • the current gaze area is area 1.
  • In the rendering idle state, image rendering is performed on areas 2 and 3 in advance, so that the pre-rendered display images can be called when those areas are displayed, thereby improving the display refresh rate and reducing latency.
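The idle-state check during scene display, followed by pre-rendering of the remaining areas 2 and 3 from the example above, can be sketched as follows. The function name and the use of a set to track rendered areas are illustrative assumptions, not the patent's implementation:

```python
def is_render_idle(displaying, gaze_area, rendered):
    """During scene display, the device counts as rendering-idle once the
    current gaze area has already completed image rendering."""
    return displaying and gaze_area in rendered

rendered = {1}                        # gaze area 1 has been rendered
if is_render_idle(True, 1, rendered):
    for area in (2, 3):               # pre-render the non-gaze areas
        rendered.add(area)            # stand-in for rendering + storing
```

When areas 2 or 3 are later displayed, their images are already stored, so no real-time GPU render is needed on the display path.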
  • Step 302 Obtain the non-gazing area in the virtual reality scene.
  • Step 303 Perform image rendering on the non-gazing area according to a preset order.
  • In one example, the position of the fixation point may be detected at preset intervals, and if the fixation point position is found to have changed, it is determined that the gaze area position has changed. The current image rendering process is then interrupted, the gaze area is reacquired, and the corresponding image rendering operation is performed according to the changed gaze area position.
  • It is also possible to obtain the current area to be rendered and determine whether it has already completed image rendering. If so, the area is skipped and the next area is obtained for image rendering in the preset order.
  • The area to be rendered is an area in the virtual reality scene on which image rendering is about to be performed. For example, when changing from gaze area 1 to gaze area 2, image rendering needs to be performed on gaze area 2, so gaze area 2 is determined as the area to be rendered. It is then determined whether gaze area 2 has already completed image rendering; if so, gaze area 2 is skipped and the next area is acquired in the preset order for image rendering. When the gaze area changes, the current area to be rendered may already have been rendered; therefore, by querying the correspondence between display images and display areas, already-rendered areas can be identified, reducing repeated processing and improving processing efficiency.
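The skip-if-already-rendered logic above can be sketched against the stored correspondence. A hypothetical sketch, with the function name, the preset order argument, and the dictionary-as-mapping-table representation all being illustrative assumptions:

```python
def render_pending(order, table):
    """Render, in the preset order, only the areas that have no stored
    display image; areas already present in the correspondence are skipped
    to avoid repeated processing."""
    for area in order:
        if area in table:             # already rendered: skip
            continue
        table[area] = f"image_{area}" # stand-in for rendering + storing
    return table

table = {2: "image_2"}                # gaze area 2 was rendered previously
render_pending([2, 1, 3], table)
```

After the call, area 2 kept its existing image while areas 1 and 3 were rendered once each, which is exactly the efficiency gain the passage describes.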
  • the original fixation area is an area determined according to fixation points a and b.
  • When the fixation area changes, an interruption is triggered according to the program, the new fixation points a1 and b1 are obtained, and the current fixation area is determined; rendering of the current fixation area then begins. After that, image rendering proceeds outwards from the center point c1 of the new fixation points a1 and b1; scene areas that have already been rendered are skipped, and the current algorithm continues rendering the remaining areas.
  • The virtual reality scene is acquired at the beginning; for example, the user's selection instruction is received to select the corresponding virtual reality scene. When the virtual reality scene is a known virtual reality scene, such as a medical scene, the next step is performed; otherwise, the virtual reality scene is displayed directly.
  • After entering the scene, the gaze point coordinates are determined from the posture detected by sensors such as the gyroscope and the accelerometer.
  • the gaze area is determined according to the gaze point coordinates, gaze range, field angle, and screen center, where the gaze range, field angle, and screen center can be obtained from related measurements; otherwise, the virtual reality scene is displayed.
  • image rendering is performed on the gaze area of the virtual reality scene, and when the gaze area rendering is completed and the gaze area is unchanged, the non-gaze area is sequentially image rendered according to the preset data table correspondence relationship, and the generated display image is stored . Further, when the gaze area changes, a new gaze area is acquired and the corresponding image rendering operation is performed.
  • When the scene initialization is completed, the virtual reality scene display is entered and the display images stored in the current coordinate system begin to be called for display. Therefore, by rendering images in the rendering idle state and calling them at display time, the problem of the display refresh rate being limited by the low GPU rendering refresh rate during real-time GPU rendering is solved, thereby increasing the display refresh rate, reducing latency, and reducing the user's dizziness when using the virtual reality device.
  • In addition, while the display refresh rate is improved, the authenticity of the picture is ensured; and by calling the rendered display images, the power consumption of the device is reduced and its battery life is improved.
  • the present application also proposes a rendering device for virtual reality scenes.
  • FIG. 8 is a schematic structural diagram of a rendering apparatus for a virtual reality scene provided by an embodiment of the present application. As shown in FIG. 8, the apparatus includes: a judgment module 100, a processing module 200, and a display module 300.
  • the determination module 100 is used to obtain a virtual reality scene and determine whether it is in a rendering idle state.
  • the processing module 200 is configured to perform image rendering on the virtual reality scene if it is learned that the rendering is in an idle state, generate a display image, and store the correspondence between the display image and the display area.
  • the display module 300 is configured to acquire the target area to be displayed of the virtual reality scene, and call and display the target display image corresponding to the target area according to the corresponding relationship.
  • the device shown in FIG. 9 further includes: an interrupt module 400.
  • the interrupt module 400 is used to determine whether the position of the gaze area has changed; if so, interrupt the current image rendering and obtain the changed gaze area; and perform image rendering on the changed gaze area.
  • the judgment module 100 is specifically used to judge whether it is in the scene initialization state, and if it is, it is determined to be in the rendering idle state; the processing module 200 is specifically used to: obtain the gaze area in the virtual reality scene; perform on the gaze area Image rendering.
  • processing module 200 is specifically configured to: obtain the non-gazing area in the virtual reality scene; and perform image rendering on the non-gazing area according to a preset order.
  • The judgment module 100 is specifically used to judge whether the device is in the scene display state and the gaze area has completed image rendering, and if so, to determine that it is in the rendering idle state; the processing module 200 is specifically used to: obtain the non-gaze area in the virtual reality scene; and perform image rendering on the non-gaze area according to a preset order.
  • processing module 200 is also used to: obtain the current area to be rendered, and determine whether the area to be rendered has completed image rendering; if so, skip the area to be rendered.
  • a module in this embodiment may be a module with data processing capability and/or program execution capability, including but not limited to one or more of a processor, a single-chip microcomputer, a digital signal processor (Digital Signal Processing, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), or similar devices; for example, it may be a central processing unit (CPU), a field-programmable gate array (FPGA), or a tensor processing unit (TPU). Each module may include one or more chips of the above devices.
  • the rendering apparatus for a virtual reality scene of the embodiment of the present application acquires a virtual reality scene and determines whether a rendering-idle state is present; if so, it performs image rendering on the virtual reality scene, generates display images, and stores the correspondence between the display images and the display areas. The target area of the virtual reality scene to be displayed is then acquired, and the target display image corresponding to the target area is retrieved and displayed according to the correspondence. Thus, by rendering images during the rendering-idle state and retrieving them at display time, the problem that the display refresh rate is limited by the low GPU rendering refresh rate during real-time GPU rendering is solved, thereby increasing the display refresh rate, reducing latency, and reducing the dizziness users experience when using a virtual reality device. In addition, while the display refresh rate is increased, the fidelity of the picture is preserved, and by retrieving already-rendered display images, device power consumption is reduced and battery life is improved.
  • the present application also proposes a virtual reality device, including the rendering device of the virtual reality scene as described in any one of the foregoing embodiments.
  • the present application also proposes an electronic device, including a processor and a memory; the processor runs the program corresponding to executable program code by reading the executable program code stored in the memory, so as to implement the rendering method for a virtual reality scene described in any of the foregoing embodiments.
  • the present application also proposes a computer program product, which implements the virtual reality scene rendering method described in any of the foregoing embodiments when the instructions in the computer program product are executed by the processor.
  • the present application also proposes a computer-readable storage medium on which a computer program is stored, which when executed by a processor implements the rendering method of the virtual reality scene described in any of the foregoing embodiments.
  • FIG. 10 shows a block diagram of an exemplary electronic device suitable for implementing embodiments of the present application.
  • the electronic device 12 shown in FIG. 10 is only an example, and should not bring any limitation to the functions and use scope of the embodiments of the present application.
  • the electronic device 12 is represented in the form of a general-purpose computing device.
  • the components of the electronic device 12 may include, but are not limited to, one or more processors or processing units 16, a system memory 28, and a bus 18 connecting different system components (including the system memory 28 and the processing unit 16).
  • the bus 18 represents one or more of several types of bus structures, including a memory bus or a memory controller, a peripheral bus, a graphics acceleration port, a processor, or a local bus using any of a variety of bus structures.
  • these architectures include, but are not limited to, the Industry Standard Architecture (ISA) bus, the Micro Channel Architecture (MCA) bus, the Enhanced ISA bus, the Video Electronics Standards Association (VESA) local bus, and the Peripheral Component Interconnect (PCI) bus.
  • the electronic device 12 typically includes a variety of computer system readable media. These media may be any available media that can be accessed by the electronic device 12, including volatile and non-volatile media, removable and non-removable media.
  • the memory 28 may include a computer-system-readable medium in the form of volatile memory, such as random access memory (RAM) 30 and/or cache memory 32.
  • the electronic device 12 may further include other removable/non-removable, volatile/nonvolatile computer system storage media.
  • the storage system 34 may be used to read and write non-removable, non-volatile magnetic media (not shown in FIG. 10, commonly referred to as a "hard drive").
  • although not shown in FIG. 10, a disk drive for reading from and writing to a removable non-volatile magnetic disk (for example, a "floppy disk") may be provided, as well as an optical disc drive for reading from and writing to a removable non-volatile optical disc (for example, a compact disc read-only memory (CD-ROM), a digital versatile disc read-only memory (DVD-ROM), or other optical media).
  • each drive may be connected to the bus 18 through one or more data medium interfaces.
  • the memory 28 may include at least one program product having a set of (eg, at least one) program modules configured to perform the functions of the embodiments of the present application.
  • a program/utility tool 40 having a set of (at least one) program modules 42 may be stored in, for example, the memory 28.
  • such program modules 42 include, but are not limited to, an operating system, one or more application programs, other program modules, and program data; each of these examples, or some combination thereof, may include an implementation of a network environment.
  • the program module 42 generally performs the functions and/or methods in the embodiments described in this application.
  • the electronic device 12 may also communicate with one or more external devices 14 (e.g., a keyboard, a pointing device, a display 24), with one or more devices that enable users to interact with the computer system/server 12, and/or with any device (e.g., a network card or modem) that enables the computer system/server 12 to communicate with one or more other computing devices.
  • This communication can be performed through an input/output (I/O) interface 22.
  • the electronic device 12 can also communicate through the network adapter 20 with one or more networks, such as a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet.
  • the network adapter 20 communicates with the other modules of the electronic device 12 via the bus 18. It should be understood that, although not shown in the figure, other hardware and/or software modules may be used in conjunction with the electronic device 12, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems.
  • the processing unit 16 executes various functional applications and data processing by running a program stored in the system memory 28, for example, to implement the method mentioned in the foregoing embodiment.
  • the terms "first" and "second" are used for descriptive purposes only and cannot be understood as indicating or implying relative importance or implicitly indicating the number of technical features indicated.
  • a feature qualified by "first" or "second" may explicitly or implicitly include at least one such feature.
  • the meaning of "plurality" is at least two, such as two or three, unless specifically defined otherwise.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Human Computer Interaction (AREA)
  • Geometry (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Processing Or Creating Images (AREA)

Abstract

A rendering method, apparatus, and device for a virtual reality scene. The method includes: acquiring a virtual reality scene, and determining whether a rendering-idle state is present (101); if so, performing image rendering on the virtual reality scene, generating display images, and storing the correspondence between the display images and display regions (102); and acquiring a target region of the virtual reality scene to be displayed, and retrieving and displaying, according to the correspondence, the target display image corresponding to the target region (103). Thus, by rendering images during the rendering-idle state and retrieving them at display time, the problem that the display refresh rate is limited by a low GPU rendering refresh rate is solved, thereby increasing the display refresh rate, reducing latency, and reducing the dizziness users experience when using a virtual reality device.

Description

Rendering method, apparatus, and device for a virtual reality scene
Cross-reference to related applications
This application claims priority to Chinese Patent Application No. 201910001295.7, entitled "Rendering method, apparatus, and device for a virtual reality scene", filed on January 2, 2019 by BOE Technology Group Co., Ltd. and Beijing BOE Optoelectronics Technology Co., Ltd.
Technical field
This application relates to the field of virtual reality technology, and in particular to a rendering method, apparatus, and device for a virtual reality scene.
Background
As a simulation technology for creating and experiencing virtual worlds, virtual reality has gradually become one of the research hotspots in human-computer interaction. As virtual reality technology develops, users demand ever higher realism and immersion.
At present, one of the main obstacles to the development of virtual reality is the sense of dizziness, and the root cause of this dizziness is the high latency of virtual reality devices. A method that can reduce latency and thereby reduce dizziness is urgently needed.
Summary
A first objective of this application is to propose a rendering method for a virtual reality scene. By rendering images during a rendering-idle state and retrieving them at display time, the method solves the problem that the display refresh rate is limited by a low GPU rendering refresh rate, thereby increasing the display refresh rate, reducing latency, and reducing the dizziness users experience when using a virtual reality device.
A second objective of this application is to propose a rendering apparatus for a virtual reality scene.
A third objective of this application is to propose a virtual reality device.
A fourth objective of this application is to propose a computer-readable storage medium.
An embodiment of the first aspect of this application proposes a rendering method for a virtual reality scene, including:
acquiring a virtual reality scene, and determining whether a rendering-idle state is present;
if so, performing image rendering on the virtual reality scene, generating a display image, and storing the correspondence between the display image and a display region; and
acquiring a target region of the virtual reality scene to be displayed, and retrieving and displaying, according to the correspondence, the target display image corresponding to the target region.
In the rendering method for a virtual reality scene of the embodiments of this application, a virtual reality scene is acquired and it is determined whether a rendering-idle state is present; if so, image rendering is performed on the virtual reality scene, display images are generated, and the correspondence between the display images and display regions is stored. The target region of the virtual reality scene to be displayed is then acquired, and the target display image corresponding to the target region is retrieved and displayed according to the correspondence. Thus, by rendering images during the rendering-idle state and retrieving them at display time, the problem that the display refresh rate is limited by the low GPU rendering refresh rate during real-time GPU rendering is solved, thereby increasing the display refresh rate, reducing latency, and reducing the dizziness users experience when using a virtual reality device. Moreover, while the display refresh rate is increased, the fidelity of the picture is preserved, and by retrieving already-rendered display images, device power consumption is reduced and battery life is improved.
Optionally, determining whether a rendering-idle state is present includes: determining whether a scene-initialization state is present, and if so, determining that a rendering-idle state is present; and rendering the virtual reality scene includes: acquiring a gaze region in the virtual reality scene; and performing image rendering on the gaze region.
Optionally, the method further includes: acquiring a non-gaze region in the virtual reality scene; and performing image rendering on the non-gaze region in a preset order.
Optionally, determining whether a rendering-idle state is present includes: determining whether a scene-display state is present and the gaze region has completed image rendering, and if so, determining that a rendering-idle state is present; and rendering the virtual reality scene includes: acquiring a non-gaze region in the virtual reality scene; and performing image rendering on the non-gaze region in a preset order.
Optionally, the method further includes: when the position of the gaze region changes, interrupting the image rendering operation and acquiring the changed gaze region; and performing image rendering on the changed gaze region.
Optionally, rendering the virtual reality scene includes: acquiring a region to be rendered, and determining whether the region to be rendered has already completed image rendering; and if so, determining the next region to be rendered from the region to be rendered, and performing image rendering on the next region to be rendered.
Optionally, performing image rendering on the virtual reality scene, generating a display image, and storing the correspondence between the display image and a display region includes: establishing a coordinate system based on the virtual reality scene, and dividing the virtual reality scene into a plurality of display regions according to the coordinate system; performing image rendering on the display regions to generate display images; and storing the display images and the correspondence between the display images and the display regions.
An embodiment of the second aspect of this application proposes a rendering apparatus for a virtual reality scene, including:
a judgment module, configured to acquire a virtual reality scene and determine whether a rendering-idle state is present;
a processing module, configured to, if it is learned that a rendering-idle state is present, perform image rendering on the virtual reality scene, generate a display image, and store the correspondence between the display image and a display region; and
a display module, configured to acquire a target region of the virtual reality scene to be displayed, and retrieve and display, according to the correspondence, the target display image corresponding to the target region.
In the rendering apparatus for a virtual reality scene of the embodiments of this application, a virtual reality scene is acquired and it is determined whether a rendering-idle state is present; if so, image rendering is performed on the virtual reality scene, display images are generated, and the correspondence between the display images and display regions is stored. The target region of the virtual reality scene to be displayed is then acquired, and the target display image corresponding to the target region is retrieved and displayed according to the correspondence. Thus, by rendering images during the rendering-idle state and retrieving them at display time, the problem that the display refresh rate is limited by the low GPU rendering refresh rate during real-time GPU rendering is solved, thereby increasing the display refresh rate, reducing latency, and reducing the dizziness users experience when using a virtual reality device. Moreover, while the display refresh rate is increased, the fidelity of the picture is preserved, and by retrieving already-rendered display images, device power consumption is reduced and battery life is improved.
Optionally, the judgment module is specifically configured to: determine whether a scene-initialization state is present, and if so, determine that a rendering-idle state is present; and the processing module is specifically configured to: acquire a gaze region in the virtual reality scene; and perform image rendering on the gaze region.
Optionally, the processing module is specifically configured to: acquire a non-gaze region in the virtual reality scene; and perform image rendering on the non-gaze region in a preset order.
Optionally, the judgment module is specifically configured to: determine whether a scene-display state is present and the gaze region has completed image rendering, and if so, determine that a rendering-idle state is present; and the processing module is specifically configured to: acquire a non-gaze region in the virtual reality scene; and perform image rendering on the non-gaze region in a preset order.
Optionally, the apparatus further includes: an interrupt module, configured to, when the position of the gaze region changes, interrupt the image rendering operation and acquire the changed gaze region; and perform image rendering on the changed gaze region.
Optionally, the processing module is further configured to: acquire a region to be rendered, and determine whether the region to be rendered has already completed image rendering; and if so, determine the next region to be rendered from the region to be rendered, and perform image rendering on the next region to be rendered.
An embodiment of the third aspect of this application proposes a virtual reality device, including the rendering apparatus for a virtual reality scene described in the embodiment of the second aspect.
An embodiment of the fourth aspect of this application proposes a computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the rendering method for a virtual reality scene described in the embodiment of the first aspect.
Additional aspects and advantages of this application will be set forth in part in the following description; some will become apparent from the description, or will be learned through practice of this application.
Brief description of the drawings
FIG. 1 is a schematic flowchart of a rendering method for a virtual reality scene provided by an embodiment of this application;
FIG. 2 is a schematic flowchart of another rendering method for a virtual reality scene provided by an embodiment of this application;
FIG. 3 is a schematic diagram of a virtual reality scene;
FIG. 4 is a schematic diagram of virtual reality scene rendering;
FIG. 5 is a schematic flowchart of another rendering method for a virtual reality scene provided by an embodiment of this application;
FIG. 6 is another schematic diagram of virtual reality scene rendering;
FIG. 7 is a schematic flowchart of an application scenario provided by an embodiment of this application;
FIG. 8 is a schematic structural diagram of a rendering apparatus for a virtual reality scene provided by an embodiment of this application;
FIG. 9 is a schematic structural diagram of another rendering apparatus for a virtual reality scene provided by an embodiment of this application;
FIG. 10 is a block diagram of an exemplary electronic device suitable for implementing embodiments of this application.
Detailed description
Embodiments of this application are described in detail below; examples of the embodiments are shown in the accompanying drawings, in which identical or similar reference numerals throughout denote identical or similar elements, or elements having identical or similar functions. The embodiments described below with reference to the drawings are exemplary; they are intended to explain this application and must not be construed as limiting it.
The rendering method, apparatus, and device for a virtual reality scene of the embodiments of this application are described below with reference to the drawings.
FIG. 1 is a schematic flowchart of a rendering method for a virtual reality scene provided by an embodiment of this application. As shown in FIG. 1, the method includes:
Step 101: acquire a virtual reality scene, and determine whether a rendering-idle state is present.
In this embodiment, before displaying a virtual reality scene, the scene may first be acquired. For example, when a virtual reality scene is used for medical diagnosis, a user's selection instruction for a medical scene may be received and the medical virtual reality scene acquired according to the instruction. As another example, when a virtual reality scene is used for teaching, a user's selection instruction for an education scene may be received and the education virtual reality scene acquired according to the instruction.
It is then determined whether a rendering-idle state is present, for example whether the virtual reality device is in a rendering-idle state.
As one example, it may be determined whether a scene-initialization state is present; if it is learned that the device is in the scene-initialization state, it is determined that a rendering-idle state is present.
As another example, if it is learned that the device is in a scene-display state (for example, the device is judged to be in the scene-display state while it is displaying the virtual reality scene) and the gaze region has already completed image rendering, it is determined that a rendering-idle state is present.
Step 102: if so, perform image rendering on the virtual reality scene, generate display images, and store the correspondence between the display images and display regions.
As one possible implementation, the corresponding rendering information may be acquired for the previously acquired virtual reality scene and used for image rendering. For example, the three-dimensional model information, three-dimensional animation definitions, material information, and so on corresponding to the virtual reality scene may be acquired, the scene rendered, and display images generated.
In one embodiment of this application, image rendering may be performed on the virtual reality scene according to different strategies to generate display images, and the correspondence between the display images and display regions is stored.
As one example, a partial region of the virtual reality scene may be acquired and rendered to generate display images corresponding to that partial region.
As another example, the entire virtual reality scene may be rendered to generate display images.
The correspondence between display images and display regions can be implemented in multiple ways. For example, a coordinate system may be established based on the virtual reality scene, and the scene divided into a plurality of display regions according to the coordinate system. Then, after display region 1 is rendered to generate display image 2, display image 2 is stored, and a mapping table is generated from the correspondence between display image 2 and display region 1 and stored.
Step 103: acquire a target region of the virtual reality scene to be displayed, and retrieve and display, according to the correspondence, the target display image corresponding to the target region.
As one example, taking a virtual reality head-mounted display as an example, sensors such as a gyroscope and an accelerometer may detect the position and attitude of the user's head; the position of the user's gaze point in the virtual reality scene is obtained, the gaze region is determined from the gaze-point position, and the target region to be displayed is determined from the gaze region. Further, by querying the pre-stored correspondence between display images and display regions, the target display image corresponding to the target region is obtained and displayed. For example, if the virtual reality scene includes display regions A and B, corresponding to display images a and b respectively, and the position and attitude information detected by the sensors indicates that the target region matches display region A, the corresponding display image a is retrieved and displayed.
It can be understood that the main cause of dizziness is high latency; to reduce dizziness, latency must be reduced, and increasing the frame rate is the main way to reduce latency. In the related art, image rendering is performed in real time by a GPU (Graphics Processing Unit) while the virtual reality scene is displayed. Because the GPU's rendering capability is limited, its low rendering refresh rate caps the display refresh rate during real-time rendering, resulting in a low display refresh rate and therefore a low frame rate.
In this embodiment, by rendering images in advance during the rendering-idle state and retrieving them at display time, the problem that the display refresh rate is limited by the low GPU rendering refresh rate during real-time GPU rendering is solved, thereby increasing the display refresh rate, reducing latency, and reducing the dizziness users experience when using a VR device.
Further, in the related art, the frame rate is also raised by inserting synthetic frames or by displaying the previous frame shifted by an offset. For example, a GPU rendering refresh rate of 30-40 Hz is much lower than a display refresh rate of 60-120 Hz; inserting synthetic frames or displaying an offset copy of the previous frame raises the frame rate, but picture fidelity suffers, because the generated picture is not the exact picture that should be displayed at that moment. Compared with inserting synthetic frames or displaying an offset previous frame, the rendering method of the embodiments of this application increases the display refresh rate while preserving picture fidelity, and, by retrieving already-rendered display images, reduces device power consumption.
In the rendering method for a virtual reality scene of the embodiments of this application, a virtual reality scene is acquired and it is determined whether a rendering-idle state is present; if so, image rendering is performed on the virtual reality scene, display images are generated, and the correspondence between the display images and display regions is stored. The target region of the virtual reality scene to be displayed is then acquired, and the target display image corresponding to the target region is retrieved and displayed according to the correspondence. Thus, by rendering images during the rendering-idle state and retrieving them at display time, the problem that the display refresh rate is limited by the low GPU rendering refresh rate during real-time GPU rendering is solved, thereby increasing the display refresh rate, reducing latency, and reducing the dizziness users experience when using a virtual reality device. Moreover, while the display refresh rate is increased, the fidelity of the picture is preserved, and by retrieving already-rendered display images, device power consumption is reduced and battery life is improved.
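The region division and mapping-table scheme just described can be sketched as follows. This is a minimal illustration, not the application's implementation: the grid layout, region ids, and the `render_region` stub are all assumptions standing in for real GPU rendering.

```python
# Sketch: divide a scene into square display regions on a grid and keep a
# mapping table from region id to its rendered display image.

def divide_into_regions(width, height, d):
    """Divide a width x height scene into square regions of side d.

    Returns a dict mapping region id (col, row) -> bounding box (x0, y0, x1, y1).
    """
    regions = {}
    for row in range(height // d):
        for col in range(width // d):
            regions[(col, row)] = (col * d, row * d, (col + 1) * d, (row + 1) * d)
    return regions

def render_region(region_id, bbox):
    """Stand-in for GPU rendering: returns a placeholder 'display image'."""
    return f"image-for-{region_id}-{bbox}"

def build_mapping_table(regions):
    """Render every region and store the display-image <-> region correspondence."""
    return {rid: render_region(rid, bbox) for rid, bbox in regions.items()}

regions = divide_into_regions(width=8, height=6, d=2)   # a 4 x 3 grid of regions
table = build_mapping_table(regions)
```

At display time the table is only read, which is what lets the display refresh rate decouple from the rendering rate.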
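The display-time lookup just described can be sketched as below. Mapping a gaze point to a grid cell by integer division is an assumed stand-in for the sensor-based region matching in the text; the table contents are illustrative.

```python
# Sketch: at display time, match the gaze-derived target region against the
# stored regions and retrieve the pre-rendered display image.

def gaze_point_to_region(gaze_xy, d):
    """Map a gaze point to the id of the grid region containing it (cell size d)."""
    x, y = gaze_xy
    return (int(x // d), int(y // d))

def display_target(gaze_xy, d, mapping_table):
    """Look up the pre-rendered image for the region the user is looking at."""
    region_id = gaze_point_to_region(gaze_xy, d)
    return mapping_table.get(region_id)   # None if the region was never rendered

table = {(0, 0): "image-a", (1, 0): "image-b"}
assert display_target((0.5, 0.5), d=1, mapping_table=table) == "image-a"
assert display_target((1.5, 0.2), d=1, mapping_table=table) == "image-b"
```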
Based on the above embodiments, the rendering method for a virtual reality scene of the embodiments of this application is further described below in connection with scene initialization.
FIG. 2 is a schematic flowchart of another rendering method for a virtual reality scene provided by an embodiment of this application. As shown in FIG. 2, the method includes:
Step 201: determine whether a scene-initialization state is present; if so, determine that a rendering-idle state is present.
As one example, a scene-initialization instruction may be acquired, and within a preset time after the initialization instruction is received, the device is judged to be in the scene-initialization state. The preset time may be determined from extensive experimental data or set as needed, and is not limited here.
As another example, the display state of the virtual reality device may be acquired; if it is learned that the device is not currently displaying a virtual reality scene, it is judged to be in the scene-initialization state.
Step 202: acquire the gaze region in the virtual reality scene.
Step 203: perform image rendering on the gaze region.
As one possible implementation, sensors such as a gyroscope and an accelerometer may detect the position and attitude of the user's head to obtain the position of the user's gaze point in the virtual reality scene; the gaze region is then determined from the detected gaze-point position together with parameters such as the human-eye gaze range, the field of view, and the screen center.
In this embodiment, the gaze region in the virtual reality scene may be acquired and rendered to generate the corresponding display image. For example, the gaze region is rendered during the scene-initialization state to generate a display image; when initialization ends and the scene is displayed, the display image corresponding to the gaze region is retrieved and displayed.
Step 204: acquire the non-gaze region in the virtual reality scene.
Step 205: perform image rendering on the non-gaze region in a preset order.
In this embodiment, the non-gaze region of the virtual reality scene may also be rendered to generate corresponding display images. In other words, during the scene-initialization state, the entire virtual reality scene can be rendered to generate and store the corresponding display images, so that when the user uses the scene, the display images are retrieved and displayed, increasing the frame rate and reducing latency and dizziness.
As one example, the distance between each non-gaze region and the screen center may be acquired, and the non-gaze regions rendered in order outward from the screen center, generating the corresponding display images.
It should be noted that the above rendering order for the non-gaze region is merely exemplary and is not limited here.
The application scenario of rendering the gaze region and the non-gaze region is described below.
Referring to FIG. 3, points a and b in the figure are gaze points and point c is the midpoint between the two gaze points. A coordinate system may be established based on the virtual reality scene region, and the scene divided into multiple regions according to the coordinate system, for example the multiple square regions in the figure. Through GPU rendering and anti-distortion processing of the virtual reality scene, the display image is presented to the human eye through the lenses or the display assembly of the virtual reality device.
Referring to FIG. 4, which is a plan view of the virtual reality scene region in FIG. 3, with a(x1, y1) and b(x2, y2), the detected left-eye gaze region is the area enclosed by (x1 ± d, y1 ± d) and the right-eye gaze region is the area enclosed by (x2 ± d, y2 ± d), where d is the side length of a square region in the figure. During scene initialization, the gaze regions are rendered first, and the display images are generated and stored. The non-gaze regions are then rendered one by one in ascending order of the region numbers in the figure, and the display images are stored; when the image of a region needs to be displayed, it is retrieved. The region numbers in the figure may be determined from a preset data-table correspondence and represent only one example of a rendering order for non-gaze regions.
It should be noted that, in the above region division of the virtual reality scene, the region shape and size and the relative position of the gaze region are merely an example used to explain this application and are not limited here.
In the rendering method for a virtual reality scene of the embodiments of this application, the gaze region and the non-gaze region are rendered during the scene-initialization state, so that when the user uses the virtual reality scene, the display images are retrieved and displayed, increasing the frame rate and reducing latency and dizziness.
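Determining the gaze region from a detected gaze point can be sketched as below; a square region of half-side `d`, clamped to the screen, is an assumed simplification standing in for the gaze-range, field-of-view, and screen-center parameters mentioned in the text.

```python
# Sketch: determine a square gaze region (x +/- d, y +/- d) around the detected
# gaze point, clamped to the screen bounds.

def gaze_region(gaze_x, gaze_y, d, screen_w, screen_h):
    """Return the (x0, y0, x1, y1) box of half-side d centred on the gaze point."""
    x0 = max(0, gaze_x - d)
    y0 = max(0, gaze_y - d)
    x1 = min(screen_w, gaze_x + d)
    y1 = min(screen_h, gaze_y + d)
    return (x0, y0, x1, y1)

assert gaze_region(10, 10, 4, 100, 100) == (6, 6, 14, 14)
assert gaze_region(1, 1, 4, 100, 100) == (0, 0, 5, 5)   # clamped at the screen edge
```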
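The center-outward preset order can be sketched as follows; using squared Euclidean distance between region centers as the ordering key is an assumption, since the text only specifies "outward from the screen center".

```python
# Sketch: render non-gaze regions in order of increasing distance from the
# screen centre, as one possible "preset order".

def center_out_order(regions, center):
    """Sort region ids by squared distance from each region's centre to `center`."""
    cx, cy = center
    def dist2(item):
        _, (x0, y0, x1, y1) = item
        mx, my = (x0 + x1) / 2, (y0 + y1) / 2
        return (mx - cx) ** 2 + (my - cy) ** 2
    return [rid for rid, _ in sorted(regions.items(), key=dist2)]

regions = {
    "centre": (4, 4, 6, 6),
    "near":   (6, 4, 8, 6),
    "far":    (8, 8, 10, 10),
}
assert center_out_order(regions, center=(5, 5)) == ["centre", "near", "far"]
```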
Based on the above embodiments, rendering the virtual reality scene during the scene-display state is further described below.
FIG. 5 is a schematic flowchart of another rendering method for a virtual reality scene provided by an embodiment of this application. As shown in FIG. 5, the method includes:
Step 301: determine whether a scene-display state is present and the gaze region has completed image rendering; if so, determine that a rendering-idle state is present.
As one example, the display state of the virtual reality device may be acquired; if it is learned that the device is currently displaying a virtual reality scene, it is further determined whether the current gaze region has completed image rendering. If it is learned that the current gaze region has completed image rendering, it is determined that a rendering-idle state is present.
It can be understood that, whereas the related art renders the virtual reality scene in real time with a GPU, in this embodiment other regions may be pre-rendered during display once the current gaze region has finished rendering. For example, if the virtual reality scene includes regions 1, 2, and 3 and the current gaze region is region 1, then when region 1 finishes rendering, regions 2 and 3 are pre-rendered, so that the pre-rendered display images can be retrieved when the other regions are displayed, increasing the display refresh rate and reducing latency.
Step 302: acquire the non-gaze region in the virtual reality scene.
Step 303: perform image rendering on the non-gaze region in a preset order.
It should be noted that the explanations of rendering the non-gaze region in the foregoing embodiments also apply to this embodiment and are not repeated here.
In some embodiments of this application, it may also be determined whether the position of the gaze region has changed; if so, the current image rendering operation is interrupted, the changed gaze region is acquired, and image rendering is performed on the changed gaze region. For example, while gaze region 1 is being rendered during the scene-initialization state, if the position of gaze region 1 changes, the current rendering of gaze region 1 is interrupted, the changed gaze region 2 is acquired, and gaze region 2 is rendered.
As one possible implementation, the position of the gaze point may be detected at preset intervals; if it is learned that the gaze-point position has changed, it is judged that the gaze-region position has changed. The current image rendering process is then interrupted, the gaze region is reacquired, and the corresponding rendering operation is performed according to the position of the changed gaze region.
In some embodiments of this application, the current region to be rendered may also be acquired, and it may be determined whether that region has already completed image rendering; if so, the region is skipped and the next region is acquired in the preset order for rendering. The region to be rendered is the region of the virtual reality scene that is about to be rendered; for example, when the gaze region changes from gaze region 1 to gaze region 2, gaze region 2 needs to be rendered and is determined to be the region to be rendered. It is then determined whether gaze region 2 has already completed image rendering; if so, gaze region 2 is skipped and the next region is acquired in the preset order for rendering. For example, when the gaze region changes, the current region to be rendered may already have been rendered; therefore, the regions that have already been rendered can be identified by querying the correspondence between display images and display regions, reducing duplicate processing and improving processing efficiency.
The case where the gaze region changes is described below.
Referring to FIG. 6, the original gaze region is the region determined from gaze points a and b. During rendering, after the current gaze region and the numbered regions in the figure have been rendered, the gaze region changes; an interrupt is set according to the program, the new gaze points a1 and b1 are acquired, and the current gaze region is determined. Rendering of the current gaze region then begins; after it finishes, rendering proceeds outward region by region, centered on the midpoint c1 of the new gaze points a1 and b1, skipping any scene region that has already been rendered and continuing according to the current algorithm.
A practical application scenario is described below.
Referring to FIG. 7, a virtual reality scene is first acquired, for example by receiving a user's selection instruction for the scene; when the virtual reality scene is a known virtual reality scene, such as a medical scene, the next step is executed; otherwise, the virtual reality scene is displayed. When entering the scene, for example during the scene-initialization state, the gaze-point coordinates after entering the scene are determined from the pose reported by sensors such as a gyroscope and an accelerometer. The gaze region is determined from the gaze-point coordinates, the gaze range, the field of view, and the screen center, where the gaze range, field of view, and screen center may be obtained from relevant measurements; otherwise, the virtual reality scene is displayed. The gaze region of the virtual reality scene is then rendered; when the gaze region has finished rendering and has not changed, the non-gaze regions are rendered in the order given by the preset data-table correspondence, and the generated display images are stored. Further, when the gaze region changes, the new gaze region is acquired and the corresponding rendering operation is performed. When scene initialization completes, virtual reality scene display begins, and the display images stored under the current coordinate system are retrieved and displayed. Thus, by rendering images during the rendering-idle state and retrieving them at display time, the problem that the display refresh rate is limited by the low GPU rendering refresh rate during real-time GPU rendering is solved, thereby increasing the display refresh rate, reducing latency, and reducing the dizziness users experience when using a virtual reality device. Moreover, while the display refresh rate is increased, the fidelity of the picture is preserved, and by retrieving already-rendered display images, device power consumption is reduced and battery life is improved.
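The idle-time pre-rendering just described can be sketched as below; the `is_idle` callback, the region queue, and the placeholder image strings are simplified assumptions.

```python
# Sketch: once the current gaze region is rendered, use display-state idle time
# to pre-render the remaining regions so their images are ready when needed.

def prerender_pending(pending, rendered, is_idle):
    """Render queued regions while the device reports a rendering-idle state."""
    while pending and is_idle():
        region = pending.pop(0)
        if region not in rendered:            # skip anything already rendered
            rendered[region] = f"image-{region}"
    return rendered

rendered = {1: "image-1"}                     # gaze region 1 already rendered
result = prerender_pending([2, 3], rendered, is_idle=lambda: True)
assert set(result) == {1, 2, 3}
```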
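The periodic gaze check and interrupt can be sketched as follows. `sample_gaze` and `region_of` are hypothetical stand-ins for the gyroscope/accelerometer-driven gaze detection; checking once per region approximates the preset-interval polling.

```python
# Sketch: poll the gaze point as rendering proceeds and interrupt the current
# rendering pass when the gaze moves into a different region.

def render_with_interrupt(order, sample_gaze, region_of):
    """Render regions in `order`; stop and report the new gaze region on change."""
    current = region_of(sample_gaze())
    done = []
    for region in order:
        new = region_of(sample_gaze())        # periodic gaze check
        if new != current:                    # gaze moved: interrupt this pass
            return done, new
        done.append(region)
    return done, current

samples = iter([(0, 0), (0, 0), (9, 9)])      # gaze jumps on the third sample
done, new_gaze = render_with_interrupt(
    order=["r1", "r2", "r3"],
    sample_gaze=lambda: next(samples),
    region_of=lambda p: (p[0] // 5, p[1] // 5),
)
assert done == ["r1"] and new_gaze == (1, 1)
```

The caller would then restart rendering from `new_gaze`, skipping regions already in `done`.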
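Skipping already-rendered regions by querying the stored image-to-region correspondence can be sketched as below; the region names and table contents are illustrative.

```python
# Sketch: before rendering a region, query the stored display-image <-> region
# correspondence; if an image already exists, skip to the next region in the
# preset order.

def next_unrendered(preset_order, mapping_table):
    """Yield regions from the preset order that still lack a display image."""
    for region in preset_order:
        if region not in mapping_table:       # already rendered -> skip
            yield region

table = {"gaze-2": "image-gaze-2"}            # gaze-2 was rendered earlier
todo = list(next_unrendered(["gaze-2", "r5", "r6"], table))
assert todo == ["r5", "r6"]
```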
To implement the above embodiments, this application also proposes a rendering apparatus for a virtual reality scene.
FIG. 8 is a schematic structural diagram of a rendering apparatus for a virtual reality scene provided by an embodiment of this application. As shown in FIG. 8, the apparatus includes: a judgment module 100, a processing module 200, and a display module 300.
The judgment module 100 is configured to acquire a virtual reality scene and determine whether a rendering-idle state is present.
The processing module 200 is configured to, if it is learned that a rendering-idle state is present, perform image rendering on the virtual reality scene, generate display images, and store the correspondence between the display images and display regions.
The display module 300 is configured to acquire a target region of the virtual reality scene to be displayed, and retrieve and display, according to the correspondence, the target display image corresponding to the target region.
On the basis of FIG. 8, the apparatus shown in FIG. 9 further includes: an interrupt module 400.
The interrupt module 400 is configured to determine whether the position of the gaze region has changed; if so, interrupt the current image rendering and acquire the changed gaze region; and perform image rendering on the changed gaze region.
Further, the judgment module 100 is specifically configured to: determine whether a scene-initialization state is present, and if so, determine that a rendering-idle state is present; and the processing module 200 is specifically configured to: acquire the gaze region in the virtual reality scene; and perform image rendering on the gaze region.
Further, the processing module 200 is specifically configured to: acquire the non-gaze region in the virtual reality scene; and perform image rendering on the non-gaze region in a preset order.
The judgment module 100 is specifically configured to: determine whether a scene-display state is present and the gaze region has completed image rendering, and if so, determine that a rendering-idle state is present; and the processing module 200 is specifically configured to: acquire the non-gaze region in the virtual reality scene; and perform image rendering on the non-gaze region in a preset order.
Further, the processing module 200 is also configured to: acquire the current region to be rendered and determine whether it has already completed image rendering; if so, skip that region.
It should be noted that the explanations of the rendering method for a virtual reality scene in the foregoing embodiments also apply to the rendering apparatus of this embodiment and are not repeated here. A module in this embodiment may be a module with data processing capability and/or program execution capability, including but not limited to one or more of a processor, a single-chip microcomputer, a digital signal processor (Digital Signal Processing, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), or similar devices; for example, it may be a central processing unit (CPU), a field-programmable gate array (FPGA), or a tensor processing unit (TPU). Each module may include one or more chips of the above devices.
In the rendering apparatus for a virtual reality scene of the embodiments of this application, a virtual reality scene is acquired and it is determined whether a rendering-idle state is present; if so, image rendering is performed on the virtual reality scene, display images are generated, and the correspondence between the display images and display regions is stored. The target region of the virtual reality scene to be displayed is then acquired, and the target display image corresponding to the target region is retrieved and displayed according to the correspondence. Thus, by rendering images during the rendering-idle state and retrieving them at display time, the problem that the display refresh rate is limited by the low GPU rendering refresh rate during real-time GPU rendering is solved, thereby increasing the display refresh rate, reducing latency, and reducing the dizziness users experience when using a virtual reality device. Moreover, while the display refresh rate is increased, the fidelity of the picture is preserved, and by retrieving already-rendered display images, device power consumption is reduced and battery life is improved.
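The three-module structure (judgment module 100, processing module 200, display module 300) can be sketched as plain classes; the state flag, placeholder render step, and region names are illustrative assumptions, not the application's implementation.

```python
# Sketch of the judgment / processing / display module pipeline.

class JudgmentModule:
    """Decides whether the device is in a rendering-idle state."""
    def __init__(self, initializing):
        self.initializing = initializing
    def is_render_idle(self):
        return self.initializing              # scene initialization counts as idle

class ProcessingModule:
    """Renders regions ahead of time and stores the image <-> region mapping."""
    def __init__(self):
        self.table = {}
    def render_scene(self, regions):
        for r in regions:
            self.table[r] = f"image-{r}"      # stand-in for GPU rendering

class DisplayModule:
    """Retrieves the pre-rendered image for the target region at display time."""
    def __init__(self, table):
        self.table = table
    def show(self, target_region):
        return self.table.get(target_region)

judge = JudgmentModule(initializing=True)
proc = ProcessingModule()
if judge.is_render_idle():
    proc.render_scene(["A", "B"])
disp = DisplayModule(proc.table)
assert disp.show("A") == "image-A"
```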
To implement the above embodiments, this application also proposes a virtual reality device, including the rendering apparatus for a virtual reality scene described in any of the foregoing embodiments.
To implement the above embodiments, this application also proposes an electronic device, including a processor and a memory; the processor runs the program corresponding to executable program code by reading the executable program code stored in the memory, so as to implement the rendering method for a virtual reality scene described in any of the foregoing embodiments.
To implement the above embodiments, this application also proposes a computer program product; when the instructions in the computer program product are executed by a processor, the rendering method for a virtual reality scene described in any of the foregoing embodiments is implemented.
To implement the above embodiments, this application also proposes a computer-readable storage medium on which a computer program is stored; when executed by a processor, the program implements the rendering method for a virtual reality scene described in any of the foregoing embodiments.
FIG. 10 is a block diagram of an exemplary electronic device suitable for implementing embodiments of this application. The electronic device 12 shown in FIG. 10 is merely an example and should not impose any limitation on the functions and scope of use of the embodiments of this application.
As shown in FIG. 10, the electronic device 12 takes the form of a general-purpose computing device. The components of the electronic device 12 may include, but are not limited to: one or more processors or processing units 16, a system memory 28, and a bus 18 connecting the different system components (including the system memory 28 and the processing unit 16).
The bus 18 represents one or more of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, these architectures include, but are not limited to, the Industry Standard Architecture (ISA) bus, the Micro Channel Architecture (MCA) bus, the Enhanced ISA bus, the Video Electronics Standards Association (VESA) local bus, and the Peripheral Component Interconnect (PCI) bus.
The electronic device 12 typically includes a variety of computer-system-readable media. These media may be any available media accessible by the electronic device 12, including volatile and non-volatile media and removable and non-removable media.
The memory 28 may include computer-system-readable media in the form of volatile memory, such as random access memory (RAM) 30 and/or cache memory 32. The electronic device 12 may further include other removable/non-removable, volatile/non-volatile computer-system storage media. By way of example only, the storage system 34 may be used to read from and write to non-removable, non-volatile magnetic media (not shown in FIG. 10, commonly called a "hard drive"). Although not shown in FIG. 10, a disk drive for reading from and writing to a removable non-volatile magnetic disk (for example, a "floppy disk") may be provided, as well as an optical disc drive for reading from and writing to a removable non-volatile optical disc (for example, a compact disc read-only memory (CD-ROM), a digital versatile disc read-only memory (DVD-ROM), or other optical media). In these cases, each drive may be connected to the bus 18 through one or more data-media interfaces. The memory 28 may include at least one program product having a set of (e.g., at least one) program modules configured to perform the functions of the embodiments of this application.
A program/utility 40 having a set of (at least one) program modules 42 may be stored, for example, in the memory 28; such program modules 42 include, but are not limited to, an operating system, one or more application programs, other program modules, and program data, and each of these examples, or some combination thereof, may include an implementation of a network environment. The program modules 42 generally perform the functions and/or methods of the embodiments described in this application.
The electronic device 12 may also communicate with one or more external devices 14 (e.g., a keyboard, a pointing device, a display 24), with one or more devices that enable a user to interact with the computer system/server 12, and/or with any device (e.g., a network card or modem) that enables the computer system/server 12 to communicate with one or more other computing devices. Such communication may occur through input/output (I/O) interfaces 22. In addition, the electronic device 12 may communicate through the network adapter 20 with one or more networks, such as a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet. As shown, the network adapter 20 communicates with the other modules of the electronic device 12 via the bus 18. It should be understood that, although not shown in the figure, other hardware and/or software modules may be used in conjunction with the electronic device 12, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems.
The processing unit 16 executes various functional applications and data processing by running programs stored in the system memory 28, for example implementing the methods mentioned in the foregoing embodiments.
In the description of this application, it should be understood that the terms "first" and "second" are used for descriptive purposes only and must not be understood as indicating or implying relative importance or implicitly indicating the number of the technical features indicated. Thus, a feature qualified by "first" or "second" may explicitly or implicitly include at least one such feature. In the description of this application, "plurality" means at least two, such as two or three, unless expressly and specifically limited otherwise.
In the description of this specification, a description with reference to the terms "one embodiment", "some embodiments", "example", "specific example", or "some examples" means that a specific feature, structure, material, or characteristic described in connection with that embodiment or example is included in at least one embodiment or example of this application. In this specification, schematic expressions of the above terms do not necessarily refer to the same embodiment or example. Moreover, the specific features, structures, materials, or characteristics described may be combined in a suitable manner in any one or more embodiments or examples. In addition, those skilled in the art may combine the different embodiments or examples described in this specification, and the features of the different embodiments or examples, provided they do not contradict one another.
Although embodiments of this application have been shown and described above, it can be understood that the above embodiments are exemplary and must not be understood as limiting this application; within the scope of this application, those of ordinary skill in the art may make changes, modifications, substitutions, and variations to the above embodiments.

Claims (15)

  1. A rendering method for a virtual reality scene, comprising the following steps:
    acquiring a virtual reality scene, and determining whether a rendering-idle state is present;
    if so, performing image rendering on the virtual reality scene, generating a display image, and storing a correspondence between the display image and a display region;
    acquiring a target region of the virtual reality scene to be displayed, and retrieving and displaying, according to the correspondence, a target display image corresponding to the target region.
  2. The rendering method for a virtual reality scene according to claim 1, wherein the determining whether a rendering-idle state is present comprises:
    determining whether a scene-initialization state is present, and if so, determining that a rendering-idle state is present;
    wherein the performing image rendering on the virtual reality scene comprises:
    acquiring a gaze region in the virtual reality scene;
    performing image rendering on the gaze region.
  3. The rendering method for a virtual reality scene according to claim 2, further comprising:
    acquiring a non-gaze region in the virtual reality scene;
    performing image rendering on the non-gaze region in a preset order.
  4. The rendering method for a virtual reality scene according to any one of claims 1 to 3, wherein the determining whether a rendering-idle state is present comprises:
    determining whether a scene-display state is present and a gaze region has completed image rendering, and if so, determining that a rendering-idle state is present;
    wherein the rendering the virtual reality scene comprises:
    acquiring a non-gaze region in the virtual reality scene;
    performing image rendering on the non-gaze region in a preset order.
  5. The rendering method for a virtual reality scene according to claim 3, further comprising:
    when a position of the gaze region changes, interrupting the image rendering operation, and acquiring the changed gaze region;
    performing image rendering on the changed gaze region.
  6. The rendering method for a virtual reality scene according to any one of claims 1 to 5, wherein the rendering the virtual reality scene comprises:
    acquiring a region to be rendered, and determining whether the region to be rendered has already completed image rendering;
    if so, determining a next region to be rendered from the region to be rendered, and performing image rendering on the next region to be rendered.
  7. The rendering method for a virtual reality scene according to any one of claims 1 to 6, wherein the performing image rendering on the virtual reality scene, generating a display image, and storing a correspondence between the display image and a display region comprises:
    establishing a coordinate system based on the virtual reality scene, and dividing the virtual reality scene into a plurality of display regions according to the coordinate system;
    performing image rendering on the display regions to generate display images;
    storing the display images and the correspondence between the display images and the display regions.
  8. A rendering apparatus for a virtual reality scene, comprising:
    a judgment module, configured to acquire a virtual reality scene and determine whether a rendering-idle state is present;
    a processing module, configured to, if it is learned that the rendering-idle state is present, perform image rendering on the virtual reality scene, generate a display image, and store a correspondence between the display image and a display region;
    a display module, configured to acquire a target region of the virtual reality scene to be displayed, and retrieve and display, according to the correspondence, a target display image corresponding to the target region.
  9. The rendering apparatus for a virtual reality scene according to claim 8, wherein the judgment module is specifically configured to:
    determine whether a scene-initialization state is present, and if so, determine that a rendering-idle state is present;
    and the processing module is specifically configured to:
    acquire a gaze region in the virtual reality scene;
    perform image rendering on the gaze region.
  10. The rendering apparatus for a virtual reality scene according to claim 9, wherein the processing module is specifically configured to:
    acquire a non-gaze region in the virtual reality scene;
    perform image rendering on the non-gaze region in a preset order.
  11. The rendering apparatus for a virtual reality scene according to any one of claims 8 to 10, wherein the judgment module is specifically configured to:
    determine whether a scene-display state is present and a gaze region has completed image rendering, and if so, determine that a rendering-idle state is present;
    and the processing module is specifically configured to:
    acquire a non-gaze region in the virtual reality scene;
    perform image rendering on the non-gaze region in a preset order.
  12. The rendering apparatus for a virtual reality scene according to claim 10, further comprising:
    an interrupt module, configured to, when a position of the gaze region changes, interrupt the image rendering operation and acquire the changed gaze region;
    and perform image rendering on the changed gaze region.
  13. The rendering apparatus for a virtual reality scene according to any one of claims 8 to 12, wherein the processing module is further configured to:
    acquire a region to be rendered, and determine whether the region to be rendered has already completed image rendering;
    if so, determine a next region to be rendered from the region to be rendered, and perform image rendering on the next region to be rendered.
  14. A virtual reality device, comprising the rendering apparatus for a virtual reality scene according to any one of claims 8 to 13.
  15. A computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the rendering method for a virtual reality scene according to any one of claims 1 to 7.
PCT/CN2019/124860 2019-01-02 2019-12-12 Virtual reality scene rendering method, apparatus and device WO2020140720A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US16/764,401 US11263803B2 (en) 2019-01-02 2019-12-12 Virtual reality scene rendering method, apparatus and device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910001295.7 2019-01-02
CN201910001295.7A CN109741463B (zh) 2019-01-02 2019-01-02 Virtual reality scene rendering method, apparatus and device

Publications (1)

Publication Number Publication Date
WO2020140720A1 true WO2020140720A1 (zh) 2020-07-09

Family

ID=66363116

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/124860 WO2020140720A1 (zh) 2019-01-02 2019-12-12 虚拟现实场景的渲染方法、装置及设备

Country Status (3)

Country Link
US (1) US11263803B2 (zh)
CN (1) CN109741463B (zh)
WO (1) WO2020140720A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114845147A (zh) * 2022-04-29 2022-08-02 北京奇艺世纪科技有限公司 屏幕渲染方法、显示画面合成方法装置及智能终端

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109741463B (zh) 2019-01-02 2022-07-19 京东方科技集团股份有限公司 虚拟现实场景的渲染方法、装置及设备
CN110351480B (zh) * 2019-06-13 2021-01-15 歌尔光学科技有限公司 用于电子设备的图像处理方法、装置及电子设备
CN110930307B (zh) * 2019-10-31 2022-07-08 江苏视博云信息技术有限公司 图像处理方法和装置
CN111402263B (zh) * 2020-03-04 2023-08-29 南方电网科学研究院有限责任公司 高分大屏可视化优化方法
CN111381967A (zh) * 2020-03-09 2020-07-07 中国联合网络通信集团有限公司 虚拟对象的处理方法及装置
CN111367414B (zh) * 2020-03-10 2020-10-13 厦门络航信息技术有限公司 虚拟现实对象控制方法、装置、虚拟现实系统及设备
CN111429333A (zh) * 2020-03-25 2020-07-17 京东方科技集团股份有限公司 一种gpu动态调频的方法、装置及系统
US11721052B2 (en) * 2020-09-24 2023-08-08 Nuvolo Technologies Corporation Floorplan image tiles
CN114125301B (zh) * 2021-11-29 2023-09-19 卡莱特云科技股份有限公司 一种虚拟现实技术拍摄延迟处理方法及装置

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6791549B2 (en) * 2001-12-21 2004-09-14 Vrcontext S.A. Systems and methods for simulating frames of complex virtual environments
CN105892683A (zh) * 2016-04-29 2016-08-24 上海乐相科技有限公司 一种显示方法及目标设备
CN106652004A (zh) * 2015-10-30 2017-05-10 北京锤子数码科技有限公司 基于头戴式可视设备对虚拟现实进行渲染的方法及装置
CN107274472A (zh) * 2017-06-16 2017-10-20 福州瑞芯微电子股份有限公司 一种提高vr播放帧率的方法和装置
CN109741463A (zh) * 2019-01-02 2019-05-10 京东方科技集团股份有限公司 虚拟现实场景的渲染方法、装置及设备

Family Cites Families (32)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2003039698A1 (en) * 2001-11-02 2003-05-15 Atlantis Cyberspace, Inc. Virtual reality game system with pseudo 3d display driver & mission control
US20110273466A1 (en) * 2010-05-10 2011-11-10 Canon Kabushiki Kaisha View-dependent rendering system with intuitive mixed reality
CN103164541B (zh) * 2013-04-15 2017-04-12 北京世界星辉科技有限责任公司 图片呈现方法及设备
EP2849080B1 (en) 2013-08-02 2017-10-11 Huawei Technologies Co., Ltd. Image display method and device
EP3129958B1 (en) * 2014-04-05 2021-06-02 Sony Interactive Entertainment LLC Method for efficient re-rendering objects to vary viewports and under varying rendering and rasterization parameters
US10162412B2 (en) * 2015-03-27 2018-12-25 Seiko Epson Corporation Display, control method of display, and program
US10089790B2 (en) * 2015-06-30 2018-10-02 Ariadne's Thread (Usa), Inc. Predictive virtual reality display system with post rendering correction
EP3333691B1 (en) * 2015-08-04 2021-04-28 Wacom Co., Ltd. User notification method, handwriting-data capture apparatus, and program
CN105472207A (zh) * 2015-11-19 2016-04-06 中央电视台 一种视音频文件渲染方法及装置
CN105976424A (zh) * 2015-12-04 2016-09-28 乐视致新电子科技(天津)有限公司 一种图像渲染处理的方法及装置
US10218968B2 (en) * 2016-03-05 2019-02-26 Maximilian Ralph Peter von und zu Liechtenstein Gaze-contingent display technique
US10255714B2 (en) * 2016-08-24 2019-04-09 Disney Enterprises, Inc. System and method of gaze predictive rendering of a focal area of an animation
CN106331823B (zh) * 2016-08-31 2019-08-20 北京奇艺世纪科技有限公司 一种视频播放方法及装置
GB2553353B (en) * 2016-09-05 2021-11-24 Advanced Risc Mach Ltd Graphics processing systems and graphics processors
KR20180028796A (ko) * 2016-09-09 2018-03-19 삼성전자주식회사 이미지 표시 방법, 저장 매체 및 전자 장치
CN106485790A (zh) * 2016-09-30 2017-03-08 珠海市魅族科技有限公司 一种画面显示的方法以及装置
CN109901710B (zh) * 2016-10-19 2020-12-01 腾讯科技(深圳)有限公司 媒体文件的处理方法和装置、存储介质及终端
CN106600515A (zh) * 2016-11-07 2017-04-26 深圳市金立通信设备有限公司 一种虚拟现实低延迟处理方法及终端
US10735691B2 (en) * 2016-11-08 2020-08-04 Rockwell Automation Technologies, Inc. Virtual reality and augmented reality for industrial automation
CA3042553C (en) * 2016-11-16 2024-01-02 Magic Leap, Inc. Mixed reality system with reduced power rendering
CN106502427B (zh) * 2016-12-15 2023-12-01 北京国承万通信息科技有限公司 虚拟现实系统及其场景呈现方法
CN106919360B (zh) * 2017-04-18 2020-04-14 珠海全志科技股份有限公司 一种头部姿态补偿方法及装置
US10109039B1 (en) * 2017-04-24 2018-10-23 Intel Corporation Display engine surface blending and adaptive texel to pixel ratio sample rate system, apparatus and method
CN107145235A (zh) * 2017-05-11 2017-09-08 杭州幻行科技有限公司 一种虚拟现实系统
CN107516335A (zh) * 2017-08-14 2017-12-26 歌尔股份有限公司 虚拟现实的图形渲染方法和装置
CN107317987B (zh) * 2017-08-14 2020-07-03 歌尔股份有限公司 虚拟现实的显示数据压缩方法和设备、系统
CN107562212A (zh) * 2017-10-20 2018-01-09 网易(杭州)网络有限公司 虚拟现实场景防抖方法、装置、存储介质及头戴显示设备
US10949947B2 (en) * 2017-12-29 2021-03-16 Intel Corporation Foveated image rendering for head-mounted display devices
CN109032350B (zh) * 2018-07-10 2021-06-29 深圳市创凯智能股份有限公司 眩晕感减轻方法、虚拟现实设备及计算机可读存储介质
CN109242943B (zh) * 2018-08-21 2023-03-21 腾讯科技(深圳)有限公司 一种图像渲染方法、装置及图像处理设备、存储介质
US20200110271A1 (en) * 2018-10-04 2020-04-09 Board Of Trustees Of Michigan State University Photosensor Oculography Eye Tracking For Virtual Reality Systems
JP2022510843A (ja) * 2018-11-30 2022-01-28 マジック リープ, インコーポレイテッド アバタ移動のためのマルチモードの手の場所および配向

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6791549B2 (en) * 2001-12-21 2004-09-14 Vrcontext S.A. Systems and methods for simulating frames of complex virtual environments
CN106652004A (zh) * 2015-10-30 2017-05-10 北京锤子数码科技有限公司 基于头戴式可视设备对虚拟现实进行渲染的方法及装置
CN105892683A (zh) * 2016-04-29 2016-08-24 上海乐相科技有限公司 一种显示方法及目标设备
CN107274472A (zh) * 2017-06-16 2017-10-20 福州瑞芯微电子股份有限公司 一种提高vr播放帧率的方法和装置
CN109741463A (zh) * 2019-01-02 2019-05-10 京东方科技集团股份有限公司 虚拟现实场景的渲染方法、装置及设备

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114845147A (zh) * 2022-04-29 2022-08-02 北京奇艺世纪科技有限公司 屏幕渲染方法、显示画面合成方法装置及智能终端
CN114845147B (zh) * 2022-04-29 2024-01-16 北京奇艺世纪科技有限公司 屏幕渲染方法、显示画面合成方法装置及智能终端

Also Published As

Publication number Publication date
CN109741463A (zh) 2019-05-10
US11263803B2 (en) 2022-03-01
CN109741463B (zh) 2022-07-19
US20210225064A1 (en) 2021-07-22

Similar Documents

Publication Publication Date Title
WO2020140720A1 (zh) 虚拟现实场景的渲染方法、装置及设备
US11954805B2 (en) Occlusion of virtual objects in augmented reality by physical objects
CN112042182B (zh) 通过面部表情操纵远程化身
US11244427B2 (en) Image resolution processing method, system, and apparatus, storage medium, and device
US9325960B2 (en) Maintenance of three dimensional stereoscopic effect through compensation for parallax setting
JP2011510407A (ja) グラフィックス処理システムにおけるオフスクリーンサーフェスのためのマルチバッファサポート
JP4234089B2 (ja) エンタテインメント装置、オブジェクト表示装置、オブジェクト表示方法、プログラム、およびキャラクタ表示方法
WO2022121653A1 (zh) 确定透明度的方法、装置、电子设备和存储介质
JP2008077372A (ja) 画像処理装置、画像処理装置の制御方法及びプログラム
JP4031509B1 (ja) 画像処理装置、画像処理装置の制御方法及びプログラム
JPH10295934A (ja) ビデオゲーム装置及びモデルのテクスチャの変化方法
TWI566205B (zh) 圖形驅動程式在顯像圖框中近似動態模糊的方法
US11551383B2 (en) Image generating apparatus, image generating method, and program for generating an image using pixel values stored in advance
WO2023000547A1 (zh) 图像处理方法、装置和计算机可读存储介质
US11640688B2 (en) Apparatus and method for data generation
JP5063022B2 (ja) プログラム、情報記憶媒体及び画像生成システム
WO2022121654A1 (zh) 确定透明度的方法、装置、电子设备及存储介质
US11966278B2 (en) System and method for logging visible errors in a videogame
TWI817832B (zh) 頭戴顯示裝置、其控制方法及非暫態電腦可讀取儲存媒體
TWI775397B (zh) 3d顯示系統與3d顯示方法
CN115426505B (zh) 基于面部捕捉的预设表情特效触发方法及相关设备
WO2022121652A1 (zh) 确定透明度的方法、装置、电子设备及存储介质
US20230158403A1 (en) Graphics data processing
WO2023158375A2 (zh) 表情包生成方法及设备
TW201915938A (zh) 影像處理系統和方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19906868

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19906868

Country of ref document: EP

Kind code of ref document: A1

122 Ep: pct application non-entry in european phase

Ref document number: 19906868

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205 DATED 24.08.2021)