CN117632069A - Image display method and device and virtual reality equipment

Info

Publication number: CN117632069A
Application number: CN202311752471.3A
Authority: CN
Language: Chinese (zh)
Inventors: 王文, 杨青河, 邱绪东
Assignee: Goertek Technology Co., Ltd.
Filing / priority date: 2023-12-19
Publication date: 2024-03-01
Legal status: Pending
Prior art keywords: pixel, type, value, image, points

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The disclosure relates to an image display method, an image display device, and a virtual reality device. The method includes: acquiring a gazing area corresponding to a user gazing range and a pixel transparency value; determining pixel values of the first type of pixel points and pixel values of the second type of pixel points according to the gazing area, where the first type of pixel points are pixel points of the image to be displayed located within the gazing area and the second type of pixel points are pixel points of the image to be displayed located outside the gazing area; determining the pixel value of each first-type pixel point as its display pixel value, and determining the display pixel value of each second-type pixel point according to the pixel transparency value and its pixel value; and displaying the image to be displayed on a screen of the virtual reality device according to the display pixel values of the first type of pixel points and the display pixel values of the second type of pixel points.

Description

Image display method and device and virtual reality equipment
Technical Field
The present disclosure relates to virtual reality technology, and more particularly, to an image display method, apparatus, and virtual reality device.
Background
An OLED (Organic Light-Emitting Diode) screen has very high contrast and color saturation and can present realistic, finely detailed images. It can also achieve higher gray levels, presenting more colors at lower brightness and improving picture fidelity. Its response speed is very fast, so frames can be switched instantly, avoiding picture blur and afterimages. OLED screens are therefore widely used in VR/AR devices, providing users with a more realistic and immersive virtual experience and improving the realism and immersion of VR/AR applications.
At present, a technical solution is needed to reduce the display power consumption of OLED screens and improve the user experience.
Disclosure of Invention
An object of the present invention is to provide a new technical solution for image display.
According to a first aspect of the present invention, there is provided an image display method comprising:
acquiring a gazing area and a pixel transparency value corresponding to a user gazing range;
determining pixel values of the first type of pixel points and pixel values of the second type of pixel points according to the gazing area corresponding to the user gazing range; the first type pixel points are pixel points of the image to be displayed, which are located in the gazing area, and the second type pixel points are pixel points of the image to be displayed, which are located outside the gazing area;
determining the pixel value of the first type pixel point as the display pixel value of the first type pixel point, and determining the display pixel value of the second type pixel point according to the pixel transparency value and the pixel value of the second type pixel point;
and displaying the image to be displayed on a screen of the virtual reality equipment according to the display pixel values of the first type of pixel points and the display pixel values of the second type of pixel points.
Optionally, before the acquiring the gazing area corresponding to the user gazing range, the method further includes:
obtaining a direction vector of the user gazing line of sight, a preset distance value and a radius value of the gazing area; the preset distance value is a distance value between an eyeball and an imaging plane of the image to be displayed;
determining intersection point position information of the user gazing sight line and the virtual reality equipment screen according to the direction vector of the user gazing sight line and the preset distance value;
and determining a gazing area corresponding to the user gazing range according to the intersection point position information and the radius value.
Optionally, before the obtaining the direction vector of the gaze of the user, the method further comprises:
acquiring pupil images shot by an eyeball tracking module of the virtual reality equipment;
and determining a direction vector of the user's gazing line of sight according to the pupil image.
Optionally, the obtaining, according to the gaze area corresponding to the user gaze range, a pixel value of the first type of pixel point and a pixel value of the second type of pixel point includes:
determining position information of the gazing area in a preset coordinate system and position information of each pixel point of the image to be displayed in the preset coordinate system;
determining the pixel points of the image to be displayed in the gazing area and the pixel points of the image to be displayed outside the gazing area according to the position information of the gazing area in a preset coordinate system and the position information of each pixel point of the image to be displayed in the preset coordinate system;
obtaining pixel values of pixel points of the image to be displayed located within the gazing area as pixel values of the first type of pixel points, and obtaining pixel values of pixel points of the image to be displayed located outside the gazing area as pixel values of the second type of pixel points.
Optionally, the determining the display pixel value of the second type pixel point according to the pixel transparency value and the pixel value of the second type pixel point includes:
and multiplying the pixel transparency value and the pixel value of the second type of pixel point respectively to obtain the display pixel value of the second type of pixel point.
Optionally, the pixel transparency value includes a plurality of pixel transparency values, and each pixel transparency value corresponds to a pixel value interval; wherein,
the determining the display pixel value of the second type pixel point according to the pixel transparency value and the pixel value of the second type pixel point includes:
determining the average pixel value of the second class pixel points according to the pixel values of the second class pixel points;
determining a pixel value interval corresponding to the average pixel value of the second class pixel points according to the average pixel value of the second class pixel points;
determining a pixel transparency value corresponding to the second type of pixel points according to a pixel value interval corresponding to the average pixel value of the second type of pixel points;
and determining the display pixel value of the second type pixel point according to the pixel transparency value corresponding to the second type pixel point and the pixel value of the second type pixel point.
Optionally, the screen of the virtual reality device is an OLED screen, where displaying the image to be displayed on the screen of the virtual reality device according to the display pixel value of the first type of pixel point and the display pixel value of the second type of pixel point includes:
determining a voltage value corresponding to each pixel point according to the display pixel value of the first type pixel point and the display pixel value of the second type pixel point;
and displaying the image to be displayed on a screen of the virtual reality equipment based on the voltage value corresponding to each pixel point.
Optionally, the pixel transparency value is greater than 0 and less than 1.
According to a second aspect of the present invention, there is provided an image display apparatus comprising:
the acquisition module is used for acquiring a gazing area and a pixel transparency value corresponding to the gazing range of the user;
the first pixel value determining module is used for determining the pixel value of the first type of pixel points and the pixel value of the second type of pixel points according to the gaze area corresponding to the user gaze range; the first type pixel points are pixel points of the image to be displayed, which are located in the gazing area, and the second type pixel points are pixel points of the image to be displayed, which are located outside the gazing area;
a second pixel value determining module, configured to determine a pixel value of the first type of pixel point as a display pixel value of the first type of pixel point, and determine a display pixel value of the second type of pixel point according to the pixel transparency value and the pixel value of the second type of pixel point;
and the display module is used for displaying the image to be displayed on a screen of the virtual reality equipment according to the display pixel values of the first type pixel points and the display pixel values of the second type pixel points.
According to a third aspect of the present invention there is provided a virtual reality device comprising a memory and a processor, the memory storing a computer program for controlling the processor to operate to perform the image display method according to any one of the first aspects of the present invention.
According to the image display method provided by the invention, the display brightness outside the gazing area corresponding to the user gazing range is reduced by configuring the pixel transparency value, so that the display power consumption is reduced.
Features of the embodiments of the present specification and their advantages will become apparent from the following detailed description of exemplary embodiments of the present specification with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the specification and, together with the description, serve to explain the principles of the embodiments of the specification.
Fig. 1 is a schematic process flow diagram of an image display method according to an embodiment of the present invention.
Fig. 2 is a schematic diagram of intersection location information of a user's gaze range and a virtual reality device screen, according to one embodiment of this invention.
Fig. 3 is a functional block diagram of an image display apparatus according to an embodiment of the present invention.
Fig. 4 is a schematic diagram of a hardware structure of a virtual reality device according to an embodiment of this invention.
Detailed Description
Various exemplary embodiments of the present specification will now be described in detail with reference to the accompanying drawings.
The following description of at least one exemplary embodiment is merely exemplary in nature and is in no way intended to limit the disclosure, its application, or uses.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further discussion thereof is necessary in subsequent figures.
< method example >
In the present embodiment, an image display method is provided. Referring to fig. 1, the image display method of the present embodiment may include the following steps S110 to S140.
Step S110, a gazing area corresponding to a user gazing range and a pixel transparency value are obtained.
In one embodiment, before step S110, the method for displaying an image further includes steps S210 to S230 to determine a gaze area corresponding to the user gaze range.
Step S210, a direction vector of the user gazing line of sight, a preset distance value and a radius value of the gazing area are obtained; the preset distance value is a distance value between an eyeball and an imaging plane of the image to be displayed.
The direction vector of the user's gaze line of sight may be determined using eye tracking techniques. Eye tracking, which may also be referred to as gaze tracking, is a technique that estimates the gaze direction of an eye by measuring eye movement.
In one embodiment, the virtual reality device includes an eyeball tracking module, which may be used to determine the direction vector of the user's gazing line of sight. The eyeball tracking module includes a camera, and the direction vector is determined from pupil images shot by the camera. The principle is as follows: the camera records the eye movement of the user to acquire an eye image reflecting that movement; a pupil image is extracted from the eye image; the position of the pupil center in the image changes as the eyeball rotates; and the direction vector of the user's gazing line of sight is determined according to the position of the pupil center in the image.
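To make this principle concrete, the following is a minimal Python sketch of estimating the gazing direction from the pupil center, assuming a grayscale eye image and a pinhole-style camera model. The threshold constant and the parameters cx, cy, f are illustrative assumptions, not details from this disclosure:

```python
import cv2
import numpy as np

def gaze_direction(eye_img: np.ndarray, cx: float, cy: float, f: float) -> np.ndarray:
    """Return a unit gaze direction vector from the pupil center position."""
    # The pupil is the darkest region of the eye image; a fixed threshold
    # isolates it (the value 50 is an assumed constant).
    _, mask = cv2.threshold(eye_img, 50, 255, cv2.THRESH_BINARY_INV)
    m = cv2.moments(mask)
    px, py = m["m10"] / m["m00"], m["m01"] / m["m00"]  # pupil center (pixels)
    # The offset of the pupil center from the optical axis (cx, cy) maps to
    # a viewing direction under a pinhole model with focal length f.
    d = np.array([px - cx, py - cy, f])
    return d / np.linalg.norm(d)
```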
The preset distance value is a vertical distance value between the eyeball and an imaging plane of the image to be displayed. Taking the virtual reality device as VR glasses as an example, the imaging plane of the image to be displayed is a screen of the VR glasses. Taking the virtual reality device as an example of AR glasses, the imaging plane of the image to be displayed is a lens of the AR glasses.
The preset distance value and the radius value of the gazing area are values pre-stored in the virtual reality device, and may be obtained directly.
In one embodiment, the pixel transparency value is greater than 0 and less than 1. The pixel transparency value is used to reduce the pixel values of the pixel points of the image to be displayed that are located outside the gazing area.
Step S220, determining the intersection point position information of the user gazing sight and the virtual reality equipment screen according to the direction vector of the user gazing sight and the preset distance value.
The intersection point position information is position information of the intersection point in a two-dimensional coordinate system established on the imaging plane of the image to be displayed.
Fig. 2 is a schematic diagram of intersection location information of a user's gaze range and a virtual reality device screen, according to one embodiment of this invention. Referring to fig. 2, a two-dimensional coordinate system is established with the upper left corner of the screen as the origin. The intersection point of the user's gazing line of sight with the virtual reality device screen, determined according to the above steps S210 to S220, is (x_e, y_e).
Step S230, determining a gazing area corresponding to the user gazing range according to the intersection point position information and the radius value.
The pre-stored radius value of the gazing area is r. Referring to fig. 2, the circular area centered at (x_e, y_e) with radius r is taken as the gazing area corresponding to the user gazing range.
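The following is a minimal Python sketch of steps S210 to S230, assuming the eyeball sits at the origin with the imaging plane at depth equal to the preset distance value; the shift into the screen's top-left-origin coordinates is omitted, and all names are illustrative:

```python
import numpy as np

def gazing_area(direction: np.ndarray, depth: float, radius: float):
    """Steps S220-S230: intersection point plus radius -> gazing area.

    direction: unit gaze vector (dx, dy, dz) with dz > 0 toward the screen;
    depth: the preset eyeball-to-imaging-plane distance value.
    """
    t = depth / direction[2]                       # scale the ray to reach the plane
    x_e, y_e = direction[0] * t, direction[1] * t  # intersection point (x_e, y_e)
    return (x_e, y_e), radius                      # circle center and radius r

center, r = gazing_area(np.array([0.1, 0.05, 0.99]), depth=40.0, radius=10.0)
```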
Step S120, according to the gazing area corresponding to the user gazing range, obtaining pixel values of the first type of pixel points and pixel values of the second type of pixel points; the first type of pixel points are pixel points of the image to be displayed located within the gazing area, and the second type of pixel points are pixel points of the image to be displayed located outside the gazing area.
In one embodiment, step S120 specifically includes steps S121-S123.
Step S121, determining the position information of the gaze area in the preset coordinate system and the position information of each pixel of the image to be displayed in the preset coordinate system.
Step S122, determining the pixel points of the image to be displayed in the gazing area and the pixel points of the image to be displayed outside the gazing area according to the position information of the gazing area in the preset coordinate system and the position information of each pixel point of the image to be displayed in the preset coordinate system.
Step S123, obtaining the pixel values of the pixel points of the image to be displayed located within the gazing area as the pixel values of the first type of pixel points, and obtaining the pixel values of the pixel points of the image to be displayed located outside the gazing area as the pixel values of the second type of pixel points.
Taking fig. 2 as an example, a two-dimensional coordinate system is established with the upper left corner of the screen as the origin, where one unit of the x-axis and y-axis coordinates corresponds to one pixel point. The position of the gazing area in the two-dimensional coordinate system is the circular area centered at (x_e, y_e) with radius r.
Each pixel point of the image to be displayed is numbered to obtain the number information of each pixel point. For example, if the screen of the virtual reality device is 1920×1080, the pixel points of the first row are numbered 1, 2, 3, …, 1919, 1920, and the pixel points of the second row are numbered 1921, 1922, 1923, …, 3839, 3840. The numbers of the pixel points of the remaining rows are obtained in the same way.
Based on the following formulas (1a) and (1b), it is determined whether each pixel point is located within the gazing area. A pixel point that satisfies both formulas (1a) and (1b) is determined to be located within the gazing area; a pixel point that does not satisfy them is determined to be located outside the gazing area.
x_e − r ≤ L % 1920 ≤ x_e + r — formula (1a),
y_e − r ≤ L / 1920 ≤ y_e + r — formula (1b),
where L is the number information of a pixel point, L % 1920 is the remainder obtained by dividing L by 1920, and L / 1920 is the integer quotient obtained by dividing L by 1920.
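A minimal Python sketch of this classification follows. Note that, as written, formulas (1a) and (1b) test the square circumscribing the circular gazing area; the function name and example values are illustrative:

```python
WIDTH = 1920  # screen width in pixels, per the 1920x1080 example

def is_first_type(L: int, x_e: float, y_e: float, r: float) -> bool:
    """Formulas (1a)-(1b): True if pixel number L lies within the gazing area."""
    col = L % WIDTH    # remainder of L / 1920 -> x coordinate, formula (1a)
    row = L // WIDTH   # quotient of L / 1920  -> y coordinate, formula (1b)
    return (x_e - r <= col <= x_e + r) and (y_e - r <= row <= y_e + r)

# e.g. with the gazing area centered at (960, 540) and radius 100:
print(is_first_type(1920 * 540 + 960, 960, 540, 100))  # column 960, row 540 -> True
```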
Step S130, determining the pixel value of the first type pixel point as the display pixel value of the first type pixel point, and determining the display pixel value of the second type pixel point according to the pixel transparency value and the pixel value of the second type pixel point.
In one embodiment, the pixel transparency value is multiplied by the pixel value of each of the second type of pixel points to obtain the display pixel values of the second type of pixel points. Specifically, the display pixel value of each of the second type of pixel points is obtained according to the following formulas (2a)-(2c).
R = R_0 × A — formula (2a),
G = G_0 × A — formula (2b),
B = B_0 × A — formula (2c),
where (R_0, G_0, B_0) is the pixel value of a pixel point, A is the pixel transparency value, and (R, G, B) is the display pixel value of the pixel point.
As can be seen from formulas (2a)-(2c) above, after the pixel value of each of the second type of pixel points is multiplied by the pixel transparency value, the obtained display pixel value is smaller than the pixel value before multiplication.
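As an illustration, the following Python sketch applies formulas (2a)-(2c) to an array of second-type pixel values; the function name and sample values are assumptions for demonstration:

```python
import numpy as np

def apply_transparency(pixels: np.ndarray, a: float) -> np.ndarray:
    """pixels: (N, 3) array of (R_0, G_0, B_0); a: pixel transparency value."""
    # Formulas (2a)-(2c): R = R_0 * A, G = G_0 * A, B = B_0 * A.
    return (pixels * a).astype(np.uint8)

print(apply_transparency(np.array([[200, 120, 80]]), 0.6))  # -> [[120  72  48]]
```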
In one embodiment, the pixel transparency value comprises a plurality of pixel transparency values, each pixel transparency value corresponding to a pixel value interval. Step S130 specifically includes steps S131 to S134.
Step S131, determining the average pixel value of the second class pixel according to the pixel value of the second class pixel.
Step S132, determining a pixel value interval corresponding to the average pixel value of the second class pixel according to the average pixel value of the second class pixel.
Step S133, determining the pixel transparency value corresponding to the second class pixel according to the pixel value interval corresponding to the average pixel value of the second class pixel.
Step S134, according to the pixel transparency value corresponding to the second type pixel point and the pixel value of the second type pixel point, determining the display pixel value of the second type pixel point.
The corresponding relation between the pixel transparency value and the pixel value interval is pre-stored in the virtual reality equipment, and can be directly obtained.
In one embodiment, the pixel transparency values include 3 values, A_1, A_2 and A_3. Pixel transparency value A_1 corresponds to the pixel value interval [0, 80). Pixel transparency value A_2 corresponds to the pixel value interval [80, 180). Pixel transparency value A_3 corresponds to the pixel value interval [180, 255].
For example, suppose the average pixel value of all of the second type of pixel points is determined to be 200. According to the calculated average pixel value 200, the corresponding pixel value interval is determined to be [180, 255]. Each pixel value interval corresponds one-to-one to a pixel transparency value, and the interval [180, 255] corresponds to pixel transparency value A_3. The pixel transparency value A_3 is then multiplied by the pixel value of each of the second type of pixel points to obtain their display pixel values. For the calculation, refer to formulas (2a)-(2c) above.
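The interval lookup of steps S131 to S134 can be sketched in Python as follows. The concrete values chosen for A_1, A_2 and A_3 are illustrative assumptions, since the embodiment fixes only the intervals:

```python
import numpy as np

# Illustrative transparency values for the three intervals; the disclosure
# fixes only the intervals [0, 80), [80, 180) and [180, 255], not A1-A3.
A1, A2, A3 = 0.9, 0.75, 0.6

def second_type_display_values(pixels: np.ndarray) -> np.ndarray:
    """pixels: (N, 3) RGB values of the second type of pixel points."""
    avg = pixels.mean()                   # step S131: average pixel value
    if avg >= 180:                        # steps S132-S133: find the interval,
        a = A3                            # then its transparency value
    elif avg >= 80:
        a = A2
    else:
        a = A1
    return (pixels * a).astype(np.uint8)  # step S134: formulas (2a)-(2c)
```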
It should be noted that the number of pixel transparency values in the above embodiment, 3, is merely an example and does not limit the invention in any way. The number of pixel transparency values can be set arbitrarily as needed, for example, to 4 or 5.
And step S140, displaying the image to be displayed on the screen of the virtual reality equipment according to the display pixel values of the first type pixel points and the display pixel values of the second type pixel points.
In one embodiment, the screen of the virtual reality device is an OLED screen. Step S140 specifically includes steps S141 to S142.
Step S141, determining a voltage value corresponding to each pixel according to the display pixel value of the first type pixel and the display pixel value of the second type pixel.
In step S142, the image to be displayed is displayed on the virtual reality device screen based on the voltage value corresponding to each pixel point.
As can be seen from formulas (2a)-(2c) above, after the pixel value of each of the second type of pixel points is multiplied by the pixel transparency value, the obtained display pixel value is smaller than the pixel value before multiplication, so the voltage value corresponding to each such pixel point is reduced, thereby reducing the display power consumption.
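For illustration, a minimal Python sketch of steps S141 to S142 is given below. The linear mapping from display pixel value to drive voltage, and the full-scale voltage V_MAX, are assumptions for demonstration; actual OLED drivers use panel-specific gamma curves:

```python
V_MAX = 5.0  # assumed full-scale drive voltage in volts

def pixel_voltage(display_value: int) -> float:
    """Step S141: smaller display pixel values yield lower drive voltages."""
    return V_MAX * (display_value / 255.0)

print(pixel_voltage(120))  # a dimmed second-type pixel -> about 2.35 V
```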
According to the image display method provided by the embodiment of the invention, the display brightness outside the gazing area corresponding to the user gazing range is reduced by configuring the pixel transparency value, so that the display power consumption is reduced.
< device example >
An embodiment of the present invention provides an image display apparatus. Fig. 3 is a functional block diagram of an image display apparatus according to an embodiment of the present invention. Referring to fig. 3, the image display apparatus 300 includes an acquisition module 310, a first pixel value determination module 320, a second pixel value determination module 330, and a display module 340.
The acquiring module 310 is configured to acquire a gaze area and a pixel transparency value corresponding to a user gaze range.
The first pixel value determining module 320 is configured to determine the pixel values of the first type of pixel points and the pixel values of the second type of pixel points according to the gazing area corresponding to the user gazing range; the first type of pixel points are pixel points of the image to be displayed located within the gazing area, and the second type of pixel points are pixel points of the image to be displayed located outside the gazing area.
The second pixel value determining module 330 is configured to determine a pixel value of the first type of pixel point as a display pixel value of the first type of pixel point, and determine a display pixel value of the second type of pixel point according to the pixel transparency value and the pixel value of the second type of pixel point.
The display module 340 is configured to display an image to be displayed on a screen of the virtual reality device according to the display pixel values of the first type of pixel points and the display pixel values of the second type of pixel points.
In one embodiment, the image display device further comprises a gaze region determination module.
The gazing area determining module is used for obtaining a direction vector of the user gazing sight, a preset distance value and a radius value of the gazing area; the preset distance value is the distance value between the eyeball and an imaging plane of the image to be displayed; determining intersection point position information of the user gazing sight and a virtual reality equipment screen according to the direction vector of the user gazing sight and a preset distance value; and determining a gazing area corresponding to the gazing range of the user according to the intersection point position information and the radius value.
In one embodiment, the gazing area determining module is further configured to obtain a pupil image captured by the eye tracking module of the virtual reality device; from the pupil image, a direction vector of the user's gaze line of sight is determined.
In one embodiment, the first pixel value determining module is configured to determine position information of the gazing area in a preset coordinate system and position information of each pixel point of the image to be displayed in the preset coordinate system; determine the pixel points of the image to be displayed located within the gazing area and the pixel points of the image to be displayed located outside the gazing area according to the position information of the gazing area in the preset coordinate system and the position information of each pixel point of the image to be displayed in the preset coordinate system; and obtain the pixel values of the pixel points of the image to be displayed located within the gazing area as the pixel values of the first type of pixel points, and the pixel values of the pixel points of the image to be displayed located outside the gazing area as the pixel values of the second type of pixel points.
In one embodiment, the second pixel value determining module is further configured to multiply the pixel transparency value and the pixel value of the second class of pixel points respectively, to obtain a display pixel value of the second class of pixel points.
In one embodiment, the pixel transparency value comprises a plurality of pixel transparency values, each pixel transparency value corresponding to a pixel value interval. The second pixel value determining module is further configured to determine an average pixel value of the second class of pixel points according to the pixel values of the second class of pixel points; determining a pixel value interval corresponding to the average pixel value of the second class pixel points according to the average pixel value of the second class pixel points; determining a pixel transparency value corresponding to the second class pixel point according to a pixel value interval corresponding to the average pixel value of the second class pixel point; and determining the display pixel value of the second type pixel point according to the pixel transparency value corresponding to the second type pixel point and the pixel value of the second type pixel point.
In one embodiment, the screen of the virtual reality device is an OLED screen. The display module is also used for determining a voltage value corresponding to each pixel point according to the display pixel value of the first type pixel point and the display pixel value of the second type pixel point; and displaying the image to be displayed on a screen of the virtual reality equipment based on the voltage value corresponding to each pixel point.
In one embodiment, the pixel transparency value is a value in the open interval (0, 1).
Fig. 4 is a schematic diagram of a virtual reality device 400 provided by one embodiment of the present disclosure. The virtual reality device 400 includes a processor 410 and a memory 420, the memory 420 storing computer instructions that when executed by the processor 410 implement the image display method disclosed in any of the preceding embodiments.
In one embodiment, the virtual reality device may be VR glasses.
In one embodiment, the virtual reality device may be AR glasses.
In this specification, the embodiments are described in a progressive manner; identical and similar parts of the embodiments may be referred to one another, and each embodiment focuses on its differences from the other embodiments. For the device embodiments, reference may be made to the description of the method embodiments for the relevant points.
The foregoing describes specific embodiments of the present disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims can be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
Embodiments of the present description may be systems, methods, and/or computer program products. The computer program product may include a computer-readable storage medium having computer instructions for causing a processor to implement aspects of embodiments of the present description.
The computer readable storage medium may be a tangible device that can hold and store computer instructions for use by a computer instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium include: a portable computer disk, a hard disk, a Random Access Memory (RAM), a Read-Only Memory (ROM), an Erasable Programmable Read-Only Memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable Compact Disk Read-Only Memory (CD-ROM), a Digital Versatile Disk (DVD), a memory stick, a floppy disk, a mechanical encoding device such as a punch card or an in-groove protrusion structure having computer instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media, as used herein, are not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through waveguides or other transmission media (e.g., optical pulses through fiber optic cables), or electrical signals transmitted through wires.
The computer instructions described herein may be downloaded from a computer readable storage medium to a respective computing/processing device or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmissions, wireless transmissions, routers, firewalls, switches, gateway computers and/or edge servers. The network interface card or network interface in each computing/processing device receives computer instructions from the network and forwards the computer instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present description. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of computer instructions, which comprises one or more executable computer instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions. It is well known to those skilled in the art that implementation by hardware, implementation by software, and implementation by a combination of software and hardware are all equivalent.
The embodiments of the present specification have been described above, and the above description is illustrative, not exhaustive, and not limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope of the various embodiments described. The terminology used herein was chosen in order to best explain the principles of the embodiments, the practical application, or the improvement of technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (10)

1. An image display method, comprising:
acquiring a gazing area and a pixel transparency value corresponding to a user gazing range;
determining pixel values of the first type of pixel points and pixel values of the second type of pixel points according to the gazing area corresponding to the user gazing range; the first type pixel points are pixel points of the image to be displayed, which are located in the gazing area, and the second type pixel points are pixel points of the image to be displayed, which are located outside the gazing area;
determining the pixel value of the first type pixel point as the display pixel value of the first type pixel point, and determining the display pixel value of the second type pixel point according to the pixel transparency value and the pixel value of the second type pixel point;
and displaying the image to be displayed on a screen of the virtual reality equipment according to the display pixel values of the first type of pixel points and the display pixel values of the second type of pixel points.
2. The method of claim 1, wherein prior to the acquiring the gaze region corresponding to the user gaze range, the method further comprises:
obtaining a direction vector of the user gazing line of sight, a preset distance value and a radius value of the gazing area; the preset distance value is a distance value between an eyeball and an imaging plane of the image to be displayed;
determining intersection point position information of the user gazing sight line and the virtual reality equipment screen according to the direction vector of the user gazing sight line and the preset distance value;
and determining a gazing area corresponding to the user gazing range according to the intersection point position information and the radius value.
3. The method of claim 2, wherein prior to the obtaining a direction vector of a user gaze line of sight, the method further comprises:
acquiring pupil images shot by an eyeball tracking module of the virtual reality equipment;
and determining a direction vector of the user's gazing line of sight according to the pupil image.
4. The method of claim 1, wherein the obtaining the pixel values of the first type of pixel points and the pixel values of the second type of pixel points according to the gaze area corresponding to the user gaze range comprises:
determining position information of the gazing area in a preset coordinate system and position information of each pixel point of the image to be displayed in the preset coordinate system;
determining the pixel points of the image to be displayed in the gazing area and the pixel points of the image to be displayed outside the gazing area according to the position information of the gazing area in a preset coordinate system and the position information of each pixel point of the image to be displayed in the preset coordinate system;
obtaining pixel values of pixel points of the image to be displayed located within the gazing area as pixel values of the first type of pixel points, and obtaining pixel values of pixel points of the image to be displayed located outside the gazing area as pixel values of the second type of pixel points.
5. The method of claim 1, wherein determining the display pixel value for the second type of pixel based on the pixel transparency value and the pixel value for the second type of pixel comprises:
and multiplying the pixel transparency value and the pixel value of the second type of pixel point respectively to obtain the display pixel value of the second type of pixel point.
6. The method of claim 1, wherein the pixel transparency values comprise a plurality of pixel transparency values, each pixel transparency value corresponding to a pixel value interval; wherein,
the determining the display pixel value of the second type pixel point according to the pixel transparency value and the pixel value of the second type pixel point includes:
determining the average pixel value of the second class pixel points according to the pixel values of the second class pixel points;
determining a pixel value interval corresponding to the average pixel value of the second class pixel points according to the average pixel value of the second class pixel points;
determining a pixel transparency value corresponding to the second type of pixel points according to a pixel value interval corresponding to the average pixel value of the second type of pixel points;
and determining the display pixel value of the second type pixel point according to the pixel transparency value corresponding to the second type pixel point and the pixel value of the second type pixel point.
7. The method of claim 1, wherein the screen of the virtual reality device is an OLED screen, wherein the displaying the image to be displayed on the screen of the virtual reality device according to the display pixel values of the first type of pixel points and the display pixel values of the second type of pixel points includes:
determining a voltage value corresponding to each pixel point according to the display pixel value of the first type pixel point and the display pixel value of the second type pixel point;
and displaying the image to be displayed on a screen of the virtual reality equipment based on the voltage value corresponding to each pixel point.
8. The method of any one of claims 1-7, wherein the pixel transparency value is greater than 0 and less than 1.
9. An image display device, comprising:
the acquisition module is used for acquiring a gazing area and a pixel transparency value corresponding to the gazing range of the user;
the first pixel value determining module is used for determining the pixel value of the first type of pixel points and the pixel value of the second type of pixel points according to the gaze area corresponding to the user gaze range; the first type pixel points are pixel points of the image to be displayed, which are located in the gazing area, and the second type pixel points are pixel points of the image to be displayed, which are located outside the gazing area;
a second pixel value determining module, configured to determine a pixel value of the first type of pixel point as a display pixel value of the first type of pixel point, and determine a display pixel value of the second type of pixel point according to the pixel transparency value and the pixel value of the second type of pixel point;
and the display module is used for displaying the image to be displayed on a screen of the virtual reality equipment according to the display pixel values of the first type pixel points and the display pixel values of the second type pixel points.
10. A virtual reality device comprising a memory and a processor, the memory storing computer instructions that, when executed by the processor, implement the image display method of any of claims 1-8.