CN111103979A - Partition rendering method and device based on visual focus - Google Patents

Partition rendering method and device based on visual focus

Info

Publication number
CN111103979A
Authority
CN
China
Prior art keywords
area
rendering
focus
visual
visual focus
Prior art date
Legal status
Pending
Application number
CN201911281874.8A
Other languages
Chinese (zh)
Inventor
陈杰
唐勇
刘晓军
Current Assignee
Xuancai Interactive Network Science And Technology Co ltd
Original Assignee
Xuancai Interactive Network Science And Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Xuancai Interactive Network Science And Technology Co ltd
Priority to CN201911281874.8A
Publication of CN111103979A


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013: Eye tracking input arrangements
    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/20: Input arrangements for video game devices
    • A63F13/21: Input arrangements for video game devices characterised by their sensors, purposes or types
    • A63F13/211: Input arrangements for video game devices characterised by their sensors, purposes or types using inertial sensors, e.g. accelerometers or gyroscopes
    • A63F13/50: Controlling the output signals based on the game progress
    • A63F13/52: Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A63F2300/00: Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/10: Features of games using an electronically generated display having two or more dimensions characterized by input arrangements for converting player-generated signals into game device control signals
    • A63F2300/105: Features of games using an electronically generated display having two or more dimensions characterized by input arrangements for converting player-generated signals into game device control signals using inertial sensors, e.g. accelerometers, gyroscopes
    • A63F2300/60: Methods for processing data by generating or executing the game program
    • A63F2300/66: Methods for processing data by generating or executing the game program for rendering three dimensional images
    • A63F2300/6692: Methods for processing data by generating or executing the game program for rendering three dimensional images using special effects, generally involving post-processing, e.g. blooming

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a partition rendering method and device based on a visual focus, belonging to the technical field of image processing. The method comprises: obtaining real-time data through an inertial measurement unit of a VR device and determining a visual focus; taking the visual focus as the center and dividing the screen display picture outward into four nested rectangular rendering areas; and rendering the divided areas at different rendering precisions. The invention has the advantages that the user's requirement for a sharp core game picture is met while picture rendering computation is saved, the occupation of computing resources is effectively reduced, and picture stuttering and user dizziness are alleviated.

Description

Partition rendering method and device based on visual focus
Technical Field
The invention relates to the technical field of image processing, in particular to a partition rendering method and device based on a visual focus.
Background
VR technology simulates senses such as vision, hearing and touch by constructing a virtual three-dimensional world, giving the user an immersive, first-person experience and the ability to interact freely with objects in that space. Owing to its unique immersion and natural, human-like interaction, VR is regarded as the next-generation computing platform after the computer and the mobile phone.
The development of VR technology is constrained by the combined limits of computing-device processing power and VR headset display capability; it is difficult to provide VR applications with both high definition and low latency, and severe vertigo is easily induced during use.
As one of the main practical applications of VR technology, VR gaming has good market potential, and with the commercial rollout of 5G and the move toward wireless VR devices, its market prospects are bright.
However, conventional VR games render the whole frame at a single precision, which imposes heavy processing pressure on the computation. When the user's visual range changes, the display can stutter badly and the user may even be unable to stay focused on the core game picture, greatly degrading the user experience and seriously undermining the immersive effect of VR.
Disclosure of Invention
The invention provides a partition rendering method and device based on a visual focus. By rendering different partitions at different precisions, locating the visual focus in real time and dynamically adjusting the extent of each area, the method reduces computational pressure, shortens image display latency, effectively improves the smoothness of the VR display and improves the user experience.
In order to achieve the technical purpose, the invention adopts the following technical scheme:
a method of visual focus-based zone rendering, comprising:
detecting real-time data of VR equipment worn by a user, and determining a visual focus;
dividing a screen display picture into at least two rendering areas from inside to outside according to the distance of the visual focus by taking the visual focus as a center;
the rendering precision is gradually decreased from inside to outside according to the rendering area.
Preferably, the corresponding positions of the central points of the left eye and the right eye in the display picture are determined according to the corresponding tracking and the distance and the angle of the up-down, left-right, front-back movement of the head of the user, which are correspondingly tracked by an inertia measurement unit in the VR device, and the middle point of the two central points is taken as a visual focus.
Preferably, taking the visual focus as a center, the screen display is divided into four rendering regions by adopting a rectangular phase-nested mode, wherein the four rendering regions are respectively as follows: a focus region, a proximity region, a periphery region, an extension region;
the focus area is a rectangular area which is formed by taking a visual focus as a center, extending outwards to the periphery and being parallel to the boundary of the total display screen; the focus area comprises corresponding positions of central points of left and right eyes in the screen, and the field angle range of the focus area is (0, 20 degrees);
the adjacent area is an area which extends outwards from the focus area and has a length and a width smaller than that of the total display screen, and the area of the adjacent area is larger than that of the focus area; the field angle range of the proximity zone is (20 °,40 ° ];
the peripheral area is an area which extends outwards from the adjacent area and has a length and a width slightly larger than that of the total display screen, and the sum of the range of the focus area, the adjacent area and the peripheral area is 15-20% larger than that of the total display screen;
the extension area is an area which is not in the current visual range except the peripheral area; the epitaxial region is 40% greater than the sum of the extents of the remaining three regions.
Further preferably, the rendering precision of the focus area is equal to the highest resolution of the screen, and the rendering precision of the remaining three rendering areas decreases from inside to outside according to an equal ratio of decreasing coefficients of the (0, 1) interval.
Preferably, when the visual focus of the user changes and new visual focus is generated to appear in the peripheral region and the extension region, the visual focus is reset, the rendering region is reset, and the rendering precision is reset.
A partition rendering device based on a visual focus, comprising:
a measurement module, which detects real-time data of the VR device worn by the user and determines the visual focus;
a dividing module, which, taking the visual focus as the center, divides the screen display picture from the inside outward, according to distance from the visual focus, into at least two rendering areas;
and a rendering module, which renders the areas at a precision that decreases progressively from the inside outward.
Preferably, the measurement module determines the corresponding positions of the left-eye and right-eye center points in the display picture from the distance and angle of the user's head movement up and down, left and right, and back and forth, as tracked by an inertial measurement unit in the VR device, and takes the midpoint of the two center points as the visual focus.
Preferably, the dividing module divides the screen display picture, with the visual focus as the center, into four rendering regions arranged as nested rectangles, respectively: a focus region, an adjacent region, a peripheral region and an extension region;
the focus region is a rectangular region centered on the visual focus, extending outward with its edges parallel to the borders of the total display screen; it contains the corresponding positions of the left-eye and right-eye center points on the screen, and its field-angle range is (0°, 20°];
the adjacent region extends outward from the focus region, its length and width are smaller than those of the total display screen, and its area is larger than that of the focus region; its field-angle range is (20°, 40°];
the peripheral region extends outward from the adjacent region, with length and width slightly larger than those of the total display screen; the combined extent of the focus, adjacent and peripheral regions is 15-20% larger than the total display screen;
the extension region is the region outside the peripheral region that is not within the current visual range; it is 40% larger than the combined extent of the other three regions.
Further preferably, the rendering precision of the focus region equals the highest resolution of the screen, and the rendering precisions of the remaining three regions decrease from the inside outward geometrically, with a common ratio in the interval (0, 1).
Preferably, when the user's visual focus changes and a new visual focus appears in the peripheral region or the extension region, the visual focus is repositioned, the rendering regions are re-divided, and the rendering precisions are reset.
Advantageous effects
The method takes the user's true visual focus as the center, divides the screen display picture from the inside outward into four nested rectangular rendering areas, and reduces the rendering precision step by step; this guarantees the picture quality in the focus area while also ensuring a smooth transition of the picture across the other areas.
The invention supports instant migration of the visual focus and per-region precision adjustment while the user moves extensively, solving problems such as the inability to keep up with changes in the user's visual range when computational pressure is too high, blurred and stuttering game pictures, and severe dizziness.
Drawings
FIG. 1 is a general functional framework of the present invention;
FIG. 2 is a flow chart of the system of the present invention;
FIG. 3 is a schematic diagram of partition division according to the present invention.
Detailed Description
The invention is further described below with reference to the accompanying drawings.
The invention provides a partition rendering method and device based on a visual focus, which divide the VR display picture into nested rectangular areas centered on the user's visual focus and reduce the rendering precision step by step from the inside outward. When the user's visual range changes, the visual focus is repositioned in real time and the extent of each area is adjusted dynamically, so that the rendering follows the user's visual range seamlessly.
This embodiment comprises five parts: comprehensive capability evaluation, picture area division, picture rendering processing, picture output presentation, and focus change processing; the VR game pictures covered by the visual focus and the peripheral areas convert into one another as the user's field of view changes. The overall architecture of this embodiment is shown in Fig. 1:
dividing a picture area: the method is characterized in that a user visual focus is taken as a center, and a rectangular phase-nested mode is adopted to gradually and outwards divide the focus area, the adjacent area, the peripheral area, the extension area and the like into four different parts.
Picture rendering processing: the process of rendering the game picture of each area according to a fixed precision-decreasing rule.
1) Rendering rules: first, define the extent of each divided region, chiefly the distribution and range of the corresponding pixel coordinates; then assign the display precisions in turn from the inside outward. For example, render the focus area at the highest picture quality the VR screen supports, output the adjacent area at 70% of the precision of the focus area, the peripheral area at 70% of the precision of the adjacent area, and the extension area at 70% of the precision of the peripheral area.
If precision decreases by 30% per step from the inside outward and the four areas are preset to equal size, the extension, peripheral and adjacent areas end up at 34.3%, 49% and 70% of the precision of the focus area respectively. Under this calculation, 36.7% of the rendering computation is saved while user experience is preserved; the arithmetic is checked in the sketch after item 2) below. Meanwhile, the cross-region precision-decreasing mechanism adapts effectively to changes in the user's field of view, reducing the dizziness, picture stuttering and other symptoms of the long latency that uniform-precision rendering incurs under excessive computational pressure.
2) Picture rendering: the rendering of the VR game picture in each area is completed according to the rendering rules. Throughout the rendering process a uniform output frequency, i.e. the same frame rate in every area, is used, which makes it easy to splice the per-area game pictures together naturally.
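The precision ladder and the 36.7% figure above can be reproduced in a few lines; a sketch assuming, as stated, equal-area regions and a 70% ratio between neighbouring regions:

```python
def precision_ladder(ratio: float = 0.7, levels: int = 4) -> list:
    """Relative rendering precision per region, innermost first."""
    return [ratio ** i for i in range(levels)]

ladder = precision_ladder()
print([round(p, 3) for p in ladder])   # [1.0, 0.7, 0.49, 0.343]

# With the four regions preset to equal area, the average rendering
# cost relative to uniform full-precision rendering is the mean of
# the ladder, so the saving is one minus that mean.
saving = 1.0 - sum(ladder) / len(ladder)
print(f"{saving:.1%}")                 # 36.7%
```

Treating rendering cost as directly proportional to precision is the assumption behind these figures; a real GPU cost model would differ in detail.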
Focus change processing: when the user's visual range changes, adaptive rendering of the VR game picture is completed by repositioning the visual focus, resetting the picture areas, resetting the rendering rules and so on; a sketch of these steps follows item 3) below.
1) Resetting the visual focus: the Inertial Measurement Unit (IMU) of the user's VR display terminal (VR helmet) tracks the distance and angle of the head's movement up and down, left and right, and back and forth; from this the current positions of the user's left-eye and right-eye center points are determined and the corresponding pixel range of the VR game picture is obtained.
2) Resetting the picture areas: the picture areas are re-divided around the repositioned visual focus.
3) Resetting the rendering rules: imitating the way the human eye refocuses, and keeping the picture display frequency unchanged, first ensure that the focus area is output at the highest resolution of the VR display screen within a short latency (for example <20 milliseconds), then ensure the output of the adjacent, peripheral and extension areas in turn. Display latency here means the time from a large turn of the user's head to the completion of normal output in the focus area; industry experience holds that when this stays below 20 milliseconds the user is essentially unaware of the change and does not become dizzy.
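A sketch of the three reset steps together with the 20-millisecond budget, reusing divide_regions and precision_ladder from the sketches above; locate_visual_focus and the renderer methods are hypothetical stand-ins for the IMU-based focus computation and the rendering back end:

```python
import time

LATENCY_BUDGET_S = 0.020  # the <20 ms industry figure cited above

def on_view_change(imu_pose, screen_w, screen_h, renderer):
    """Run the three reset steps when the user's visual range changes."""
    t0 = time.perf_counter()

    # 1) Reset the visual focus from the IMU-tracked head pose
    #    (hypothetical helper: midpoint of the two eye center points).
    fx, fy = locate_visual_focus(imu_pose, screen_w, screen_h)

    # 2) Reset the picture areas around the new focus.
    regions = divide_regions(fx, fy, screen_w, screen_h)

    # 3) Reset the rendering rules: focus area first, at the screen's
    #    highest resolution, then the outer areas in turn.
    order = ("focus", "adjacent", "peripheral", "extension")
    for name, scale in zip(order, precision_ladder()):
        renderer.draw_region(regions[name], resolution_scale=scale)

    # If the budget was missed, fall back as in Step 5 of the flow
    # below: temporarily drop the adjacent area's precision.
    if time.perf_counter() - t0 > LATENCY_BUDGET_S:
        renderer.reduce_adjacent_precision()
```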
In this embodiment, the whole process, from starting the VR game to displaying the VR game picture at partitioned precision, is shown in Fig. 2:
Step 1: start the VR game. The user prepares the VR game computing device, VR headset and so on, then starts and runs the VR game.
Step 2: collect capability data. Collect computing and display capabilities, including CPU/GPU chip specifications and VR screen display capabilities (resolution, refresh rate and field angle), and from these plan the overall frame rate, highest definition and so on. In implementation these are obtained directly through the relevant API interfaces provided by the computing device's operating system and the VR helmet driver.
Step 3: determine the visual focus. Position and motion information is obtained through the Inertial Measurement Unit (IMU) of the user's VR helmet; based on devices such as the magnetometer, accelerometer and gyroscope, the helmet's freedom of translation along the three orthogonal axes x, y and z and of rotation about them is obtained. From this the positions of the left-eye and right-eye center points in the game picture are determined, and the midpoint between the two center points is the visual focus of the whole displayed VR game picture.
Step 4: divide the game picture. With the visual focus of the VR game picture as the center, the region containing the left-eye and right-eye center points is designated the focus area, and the adjacent area, the peripheral area and the extension area are divided outward in turn; the division method is shown in Fig. 3.
Step 5: perform partitioned-precision rendering. According to the established display precisions, render the game picture from the focus area outward with precision decreasing step by step; using DirectX multi-view rendering, output game pictures of different resolutions at the same frame rate for the pixel ranges of the respective areas, and integrate them into a single picture in real time.
If computing power is insufficient, display the high-definition picture in the focus area first and temporarily render the adjacent area at the precision of the peripheral area. When computing power is sufficient again, the adjacent area is restored to the definition set by the rule.
Step 6: output the picture. Output the VR game pictures rendered in their respective partitions to the left and right screens of the VR helmet, one per eye, according to the helmet's own display rules.
Step 7: the user's field of view changes. When the user's head rotates or body moves, determine the user's current position from the Inertial Measurement Unit (IMU) data obtained in real time, return to Step 3 to redetermine the visual focus and precisions, and repeat Steps 3 to 6. A consolidated sketch of Steps 3 to 6 follows.
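Putting Steps 3 to 6 together, one per-frame loop might look like the following consolidated sketch, reusing the helpers above; read_imu, project_eye_centers and the renderer calls are hypothetical placeholders for the helmet SDK and the graphics API (the embodiment names DirectX multi-view rendering, which is not modelled here):

```python
def visual_focus(pose, screen_w, screen_h):
    """Step 3: the visual focus is the midpoint of the projected
    left-eye and right-eye center points."""
    (lx, ly), (rx, ry) = project_eye_centers(pose, screen_w, screen_h)
    return (lx + rx) // 2, (ly + ry) // 2

def run_frame(renderer, screen_w, screen_h):
    pose = read_imu()                                     # 6-DoF head pose
    fx, fy = visual_focus(pose, screen_w, screen_h)       # Step 3
    regions = divide_regions(fx, fy, screen_w, screen_h)  # Step 4
    order = ("focus", "adjacent", "peripheral", "extension")
    for name, scale in zip(order, precision_ladder()):    # Step 5
        # Same frame rate in every area; only the resolution varies.
        renderer.draw_region(regions[name], resolution_scale=scale)
    renderer.present_stereo()                             # Step 6: left and right screens
```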

Claims (10)

1. A partition rendering method based on a visual focus, comprising:
detecting real-time data of a VR device worn by a user and determining a visual focus;
taking the visual focus as the center, dividing the screen display picture from the inside outward, according to distance from the visual focus, into at least two rendering areas;
rendering the areas at a precision that decreases progressively from the inside outward.
2. The partition rendering method based on a visual focus of claim 1, wherein an inertial measurement unit in the VR device tracks the distance and angle of the user's head movement up and down, left and right, and back and forth; the corresponding positions of the left-eye and right-eye center points in the display picture are determined therefrom, and the midpoint of the two center points is taken as the visual focus.
3. The partition rendering method based on a visual focus of claim 1, wherein, with the visual focus as the center, the screen display picture is divided into four rendering regions arranged as nested rectangles, respectively: a focus region, an adjacent region, a peripheral region and an extension region;
the focus region is a rectangular region centered on the visual focus, extending outward with its edges parallel to the borders of the total display screen; it contains the corresponding positions of the left-eye and right-eye center points on the screen, and its field-angle range is (0°, 20°];
the adjacent region extends outward from the focus region, its length and width are smaller than those of the total display screen, and its area is larger than that of the focus region; its field-angle range is (20°, 40°];
the peripheral region extends outward from the adjacent region, with length and width slightly larger than those of the total display screen; the combined extent of the focus, adjacent and peripheral regions is 15-20% larger than the total display screen;
the extension region is the region outside the peripheral region that is not within the current visual range; it is 40% larger than the combined extent of the other three regions.
4. The partition rendering method based on a visual focus of claim 3, wherein the rendering precision of the focus region equals the highest resolution of the screen, and the rendering precisions of the remaining three regions decrease from the inside outward geometrically, with a common ratio in the interval (0, 1).
5. The partition rendering method based on a visual focus of claim 1, wherein, when the user's visual focus changes and a new visual focus appears in the peripheral region or the extension region, the visual focus is repositioned, the rendering regions are re-divided, and the rendering precisions are reset.
6. A partition rendering device based on a visual focus, comprising:
a measurement module, which detects real-time data of the VR device worn by the user and determines the visual focus;
a dividing module, which, taking the visual focus as the center, divides the screen display picture from the inside outward, according to distance from the visual focus, into at least two rendering areas;
and a rendering module, which renders the areas at a precision that decreases progressively from the inside outward.
7. The partition rendering device based on a visual focus of claim 6, wherein the measurement module determines the corresponding positions of the left-eye and right-eye center points in the display picture from the distance and angle of the user's head movement up and down, left and right, and back and forth, as tracked by an inertial measurement unit in the VR device, and takes the midpoint of the two center points as the visual focus.
8. The partition rendering device based on a visual focus of claim 6, wherein the dividing module divides the screen display picture, with the visual focus as the center, into four rendering regions arranged as nested rectangles, respectively: a focus region, an adjacent region, a peripheral region and an extension region;
the focus region is a rectangular region centered on the visual focus, extending outward with its edges parallel to the borders of the total display screen; it contains the corresponding positions of the left-eye and right-eye center points on the screen, and its field-angle range is (0°, 20°];
the adjacent region extends outward from the focus region, its length and width are smaller than those of the total display screen, and its area is larger than that of the focus region; its field-angle range is (20°, 40°];
the peripheral region extends outward from the adjacent region, with length and width slightly larger than those of the total display screen; the combined extent of the focus, adjacent and peripheral regions is 15-20% larger than the total display screen;
the extension region is the region outside the peripheral region that is not within the current visual range; it is 40% larger than the combined extent of the other three regions.
9. The partition rendering device based on a visual focus of claim 8, wherein the rendering precision of the focus region equals the highest resolution of the screen, and the rendering precisions of the remaining three regions decrease from the inside outward geometrically, with a common ratio in the interval (0, 1).
10. The partition rendering device based on a visual focus of claim 6, wherein, when the user's visual focus changes and a new visual focus appears in the peripheral region or the extension region, the visual focus is repositioned, the rendering regions are re-divided, and the rendering precisions are reset.
CN201911281874.8A 2019-12-11 2019-12-11 Partition rendering method and device based on visual focus Pending CN111103979A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911281874.8A CN111103979A (en) 2019-12-11 2019-12-11 Partition rendering method and device based on visual focus


Publications (1)

Publication Number Publication Date
CN111103979A 2020-05-05

Family

ID=70421767

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911281874.8A Pending CN111103979A (en) 2019-12-11 2019-12-11 Partition rendering method and device based on visual focus

Country Status (1)

Country Link
CN (1) CN111103979A (en)


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106101533A (en) * 2016-06-15 2016-11-09 努比亚技术有限公司 Render control method, device and mobile terminal
CN106570923A (en) * 2016-09-27 2017-04-19 乐视控股(北京)有限公司 Frame rendering method and device
CN106485790A (en) * 2016-09-30 2017-03-08 珠海市魅族科技有限公司 Method and device that a kind of picture shows

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112565898A (en) * 2020-11-27 2021-03-26 福州智象信息技术有限公司 Method and system for moving focus based on smart television operating system
CN112565898B (en) * 2020-11-27 2023-04-07 福州智象信息技术有限公司 Method and system for moving focus based on smart television operating system
CN114942814A (en) * 2022-06-01 2022-08-26 咪咕视讯科技有限公司 Page component focusing method, system, terminal device and medium
CN114942814B (en) * 2022-06-01 2023-07-11 咪咕视讯科技有限公司 Page component focusing method, system, terminal equipment and medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination