CN113467619A - Picture display method, picture display device, storage medium and electronic equipment - Google Patents

Picture display method, picture display device, storage medium and electronic equipment

Info

Publication number
CN113467619A
CN113467619A (application CN202110827578.4A); granted as CN113467619B
Authority
CN
China
Prior art keywords
target
picture
display
display screen
image
Prior art date
Legal status
Granted
Application number
CN202110827578.4A
Other languages
Chinese (zh)
Other versions
CN113467619B (en)
Inventor
林明田
周伟彪
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202110827578.4A priority Critical patent/CN113467619B/en
Publication of CN113467619A publication Critical patent/CN113467619A/en
Application granted granted Critical
Publication of CN113467619B publication Critical patent/CN113467619B/en
Status: Active

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 — Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 — Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 — Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/013 — Eye tracking input arrangements
    • G06F 3/14 — Digital output to display device; Cooperation and interconnection of the display device with other functional units

Abstract

The invention discloses a picture display method, a picture display device, a storage medium and electronic equipment. The method comprises: acquiring a target eyeball image captured by an image acquisition component in a virtual reality device, the target eyeball image being an image of the eyeballs of a target user currently viewing a target picture with the device; identifying gaze point information of the target user from the target eyeball image, the gaze point information representing the position of the gaze point of the target user's eyeballs on the target picture; and, while the target picture is displayed in the virtual reality device, displaying the target sub-picture matched with the gaze point information at a first display resolution, the first display resolution being higher than the second display resolution of the target picture. The invention can be applied to virtual reality scenes and may also involve technologies such as eye tracking. It solves the technical problem of undifferentiated, single-resolution picture display.

Description

Picture display method, picture display device, storage medium and electronic equipment
Technical Field
The invention relates to the field of computers, in particular to a picture display method, a picture display device, a storage medium and electronic equipment.
Background
In recent years, Virtual Reality (VR) devices have developed rapidly. Their basic principle is to use a computer to simulate a virtual environment so as to give the user a highly realistic sense of environmental immersion.
However, there is a gap between the pictures displayed by related-art virtual reality devices and actual scenes in the real environment. For example, the core viewing angle of the human eye is 5 to 18°; within this angle the eye obtains the most information, while in the surrounding "peripheral vision" the eye can hardly capture display details. As a result, a user's attention in a real environment is concentrated within the core angle, and the field of view outside it is ignored or blurred. Related-art virtual reality devices, however, do not distinguish between the picture inside and outside the core angle during display, so the picture is displayed uniformly and the user cannot experience a highly realistic sense of immersion. That is, the related art suffers from the technical problem that picture display is undifferentiated.
In view of the above problems, no effective solution has been proposed.
Disclosure of Invention
The embodiments of the invention provide a picture display method, a picture display device, a storage medium and electronic equipment, which at least solve the technical problem of undifferentiated picture display.
According to one aspect of the embodiments of the present invention, a picture display method is provided, including: acquiring a target eyeball image captured by an image acquisition component in a virtual reality device, wherein the target eyeball image comprises an image of the eyeballs of a target user currently viewing a target picture with the virtual reality device; identifying gaze point information of the target user from the target eyeball image, wherein the gaze point information indicates the position of the gaze point of the target user's eyeballs on the target picture; and, in the process of displaying the target picture in the virtual reality device, displaying a target sub-picture matched with the gaze point information at a first display resolution, wherein the first display resolution is higher than a second display resolution corresponding to the target picture.
According to another aspect of the embodiments of the present invention, there is also provided a picture display apparatus including: an image acquisition component configured to capture a target eyeball image, the target eyeball image being an image of the eyeballs of a target user currently viewing a target picture with a virtual reality device; a processor configured to identify gaze point information of the target user from the target eyeball image; a first display screen configured to display a target sub-picture matched with the gaze point information at a first display resolution; and a second display screen configured to display the target picture at a second display resolution, the first display resolution being higher than the second display resolution corresponding to the target picture.
According to another aspect of the embodiments of the present invention, there is also provided a picture display apparatus including: an acquisition unit configured to acquire a target eyeball image captured by an image acquisition component in a virtual reality device, the target eyeball image comprising an image of the eyeballs of a target user currently viewing a target picture with the device; a recognition unit configured to identify, from the target eyeball image, gaze point information of the target user, the gaze point information indicating the position of the gaze point of the target user's eyeballs on the target picture; and a display unit configured to display, in the process of displaying the target picture in the virtual reality device, a target sub-picture matched with the gaze point information at a first display resolution, the first display resolution being higher than a second display resolution corresponding to the target picture.
As an optional solution, the display unit includes: a first display module configured to display the target sub-picture on a first display screen included in the target display screen, wherein the resolution configuration parameter of the first display screen is the first display resolution, and the value of this parameter is positively correlated with the precision of the image displayed on the screen; and a second display module configured to display the target picture on a second display screen included in the target display screen, wherein the resolution configuration parameter of the second display screen is the second display resolution.
As an optional solution, the first display module includes: an extraction submodule configured to extract pixel data corresponding to the target sub-picture, wherein the pixel data indicates the color value and position of each pixel in the target sub-picture; and a display submodule configured to display the target sub-picture on the first display screen according to the pixel data.
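The extraction submodule's job — collecting a colour value and a position for every pixel of the target sub-picture — can be sketched as follows; the row-major `frame` representation and the helper name are illustrative assumptions, not the patent's actual data layout:

```python
def extract_pixel_data(frame, box):
    """Collect (x, y, colour) triples for every pixel of the sub-picture.
    `frame` is a row-major list of rows of colour values; `box` is
    (left, top, right, bottom) in pixel coordinates."""
    left, top, right, bottom = box
    return [(x, y, frame[y][x])
            for y in range(top, bottom)
            for x in range(left, right)]

# A tiny 4x4 frame whose "colour" at (x, y) is 10*y + x, cropped to the
# central 2x2 sub-picture:
frame = [[r * 10 + c for c in range(4)] for r in range(4)]
data = extract_pixel_data(frame, (1, 1, 3, 3))
print(data)  # [(1, 1, 11), (2, 1, 12), (1, 2, 21), (2, 2, 22)]
```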
As an alternative, the first display module includes: and a first imaging sub-module, configured to reflect the target sub-screen displayed on the first display screen through a half-mirror lens, so that the target sub-screen is imaged at a lens configured in the virtual reality device, where the half-mirror lens is disposed between the lens and the second display screen according to a target angle.
As an alternative, the second display module includes: and the second sub-module is used for transmitting the target picture displayed on the second display screen through the semi-reflective lens so as to enable the target picture to be imaged at the lens.
As an optional solution, the recognition unit includes: a recognition module configured to perform recognition processing on the target eyeball image to obtain first position information of the pupil within the target user's eyeball; and a first determining module configured to determine, based on the first position information, second position information of the gaze point of the target user's eyeball on the target picture.
As an optional solution, the apparatus further includes: a second determining module configured to determine, after the second position information has been determined from the first position information, a target gaze area on the target picture based on the second position information, wherein the target gaze area indicates a core gaze area on the target picture; and to determine the target sub-picture from the picture of the target gaze area and the target picture.
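A minimal sketch of the recognition path described above (pupil position → gaze point → core gaze area), assuming a simple per-user affine calibration; real eye-tracking pipelines are considerably more involved, and every name and constant here is hypothetical:

```python
import math

def pupil_to_gaze(pupil_xy, calib):
    """Map a pupil centre (pixels in the eye image) to a gaze point on the
    target picture via an affine calibration (ax, bx, ay, by)."""
    px, py = pupil_xy
    ax, bx, ay, by = calib
    return (ax * px + bx, ay * py + by)

def gaze_area(gaze_xy, half_angle_deg=9.0, eye_to_screen_mm=600.0):
    """Approximate the core gaze area as a circle whose radius matches the
    5-18 degree core viewing angle cited in the description (9 deg here)."""
    radius = eye_to_screen_mm * math.tan(math.radians(half_angle_deg))
    gx, gy = gaze_xy
    return (gx, gy, radius)

gaze = pupil_to_gaze((100, 80), (2.0, 10.0, 2.0, 5.0))
print(gaze)            # (210.0, 165.0)
area = gaze_area(gaze)
print(round(area[2]))  # ~95 mm core-area radius at 600 mm viewing distance
```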
According to still another aspect of the embodiments of the present invention, there is also provided a computer-readable storage medium in which a computer program is stored, wherein the computer program is configured to execute the above picture display method when run.
According to another aspect of the embodiments of the present invention, there is also provided an electronic device including a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor executes the above picture display method through the computer program.
In the embodiments of the invention, a target eyeball image captured by an image acquisition component in the virtual reality device is acquired, the gaze point information of the target user is identified from it, and, while the target picture is displayed, the target sub-picture matched with the gaze point information is displayed at a first display resolution higher than the second display resolution of the target picture. In other words, the position of the user's gaze point on the target picture is obtained by eye tracking, the target sub-picture within the user's core field of view is determined from that position, and the target sub-picture is displayed at high resolution in a targeted manner, while the rest of the target picture is shown to the user at a lower resolution. This achieves the purpose of displaying the picture inside and outside the user's core field of view differently, improves the diversity of picture display, and thereby solves the technical problem of undifferentiated picture display.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
fig. 1 is a schematic diagram of an application environment of an alternative screen display method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram illustrating a flow of an alternative screen display method according to an embodiment of the present invention;
FIG. 3 is a diagram illustrating an alternative screen display method according to an embodiment of the present invention;
FIG. 4 is a diagram illustrating an alternative screen display method according to an embodiment of the present invention;
FIG. 5 is a diagram illustrating an alternative screen display method according to an embodiment of the present invention;
FIG. 6 is a diagram illustrating an alternative screen display method according to an embodiment of the present invention;
FIG. 7 is a diagram illustrating an alternative screen display method according to an embodiment of the present invention;
FIG. 8 is a diagram illustrating an alternative screen display method according to an embodiment of the present invention;
FIG. 9 is a schematic diagram of an alternative screen display method according to an embodiment of the invention;
FIG. 10 is a schematic diagram of an alternative screen display apparatus according to an embodiment of the present invention;
fig. 11 is a schematic structural diagram of an alternative electronic device according to an embodiment of the invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
First, in order to facilitate understanding of the embodiments of the present invention, some terms or nouns related to the present invention are explained as follows:
Artificial Intelligence (AI) is a theory, method, technique and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend and expand human intelligence, perceive the environment, acquire knowledge, and use that knowledge to obtain the best results. In other words, artificial intelligence is a comprehensive branch of computer science that attempts to understand the essence of intelligence and to produce new intelligent machines that can react in a manner similar to human intelligence. Artificial intelligence studies the design principles and implementation methods of intelligent machines, so that the machines can perceive, reason and make decisions.
Artificial intelligence is a comprehensive discipline involving a wide range of technologies at both the hardware and software levels. Basic AI technologies generally include sensors, dedicated AI chips, cloud computing, distributed storage, big-data processing, operation/interaction systems, mechatronics, and the like. AI software technologies mainly include computer vision, speech processing, natural language processing, and machine learning/deep learning.
Computer Vision (CV) is the science of how to make machines "see": using cameras and computers in place of human eyes to identify, track and measure targets, and further processing the captured images so that they become more suitable for human observation or for transmission to instruments for detection. As a scientific discipline, computer vision studies theories and techniques that attempt to build artificial intelligence systems capable of obtaining information from images or multidimensional data. Computer vision technologies generally include image processing, image recognition, image semantic understanding, image retrieval, OCR, video processing, video semantic understanding, video content/behavior recognition, three-dimensional object reconstruction, 3D technologies, virtual reality, augmented reality, and simultaneous localization and mapping, as well as common biometric technologies such as face recognition and fingerprint recognition.
With the research and progress of artificial intelligence technology, the artificial intelligence technology is developed and applied in a plurality of fields, such as common smart homes, smart wearable devices, virtual assistants, smart speakers, smart marketing, unmanned driving, automatic driving, unmanned aerial vehicles, robots, smart medical care, smart customer service, and the like.
The scheme provided by the embodiment of the application relates to the technologies of artificial intelligence, such as computer vision, computer simulation and the like, and is specifically explained by the following embodiments:
according to an aspect of the embodiments of the present invention, a screen display method is provided, and optionally, as an optional implementation, the screen display method may be applied, but not limited, to the environment as shown in fig. 1. The system may include, but is not limited to, a user equipment 102, a network 110, and a server 112, wherein the user equipment 102 may include, but is not limited to, a display 108, a processor 106, and a memory 104.
The specific process comprises the following steps:
step S102, the user equipment 102 performs image acquisition through an image acquisition component in the virtual reality equipment 1022 to obtain a target eyeball image of the target user 1024;
step S104-S106, the user device 102 sends the target eyeball image to the server 112 through the network 110;
in steps S108 to S110, the server 112 determines the target screen 1026 viewed by the current target user 1024 through the database 114, and determines the target sub-screen 1026 in the target screen 1026 by using the target eyeball image through the processing engine 116;
steps S112-S114, server 112 sends target sprite 1026 to user device 102 via network 110;
In step S116, in the process of displaying the target screen 1026, the processor 106 in the user equipment 102 displays the target sub-screen 1026 on the display 108 and stores the target sub-screen 1026, together with its corresponding gaze point information, in the memory 104. Optionally, if the gaze point information acquired at the next moment is found in the memory 104, the corresponding picture is displayed directly.
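The lookup described in step S116 — reusing a stored sub-picture when the next gaze point is found in memory — can be sketched as a small cache keyed by quantized gaze coordinates; the class name, the 32-pixel cell size and the string payloads are illustrative assumptions:

```python
class SubPictureCache:
    """Cache sub-pictures by quantized gaze point: if the gaze point seen at
    the next instant falls in an already-stored cell, the stored sub-picture
    is displayed directly instead of being recomputed."""
    def __init__(self, cell=32):
        self.cell = cell   # quantization grid, in pixels
        self.store = {}

    def _key(self, gaze_xy):
        x, y = gaze_xy
        return (int(x) // self.cell, int(y) // self.cell)

    def get(self, gaze_xy):
        return self.store.get(self._key(gaze_xy))

    def put(self, gaze_xy, sub_picture):
        self.store[self._key(gaze_xy)] = sub_picture

cache = SubPictureCache()
cache.put((100, 200), "sub-A")
print(cache.get((110, 210)))  # same 32-px cell -> cache hit: sub-A
print(cache.get((400, 400)))  # different cell  -> None
```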
In addition to the example shown in fig. 1, the above steps may be performed by the user device 102 independently, that is, the user device 102 performs the steps of determining the target screen 1026 viewed by the current target user 1024, determining the target sub-screen 1026 in the target screen 1026 by using the target eyeball image, and the like, so as to relieve the processing pressure of the server. The user equipment 102 includes, but is not limited to, a handheld device (e.g., a mobile phone), a notebook computer, a desktop computer, a vehicle-mounted device, and the like, and the specific implementation manner of the user equipment 102 is not limited in the present invention.
Optionally, as an optional implementation manner, as shown in fig. 2, the screen display method includes:
s202, acquiring a target eyeball image acquired by an image acquisition component in virtual reality equipment, wherein the target eyeball image comprises an eyeball image of a target user watching a target picture by using the virtual reality equipment;
s204, identifying the gaze point information of the target user from the target eyeball image, wherein the gaze point information is used for representing the distribution position of the gaze point of the eyeball of the target user on the target picture;
and S206, in the process of displaying the target picture in the virtual reality equipment, displaying the target sub-picture matched with the gazing point information according to a first display resolution, wherein the first display resolution is higher than a second display resolution corresponding to the target picture.
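Steps S202-S206 can be condensed into a sketch that locates the sub-picture box around the identified gaze point; the normalized gaze coordinates, the 30% crop fraction and the helper names are illustrative assumptions rather than the patent's actual parameters:

```python
from dataclasses import dataclass

@dataclass
class GazePoint:
    x: float  # normalized horizontal position on the target picture, [0, 1]
    y: float  # normalized vertical position, [0, 1]

def crop_around(width, height, gaze, frac=0.3):
    """Return the (left, top, right, bottom) pixel box of the sub-picture
    centred on the gaze point, covering `frac` of each dimension and
    clamped to the picture bounds."""
    w, h = int(width * frac), int(height * frac)
    cx, cy = int(gaze.x * width), int(gaze.y * height)
    left = min(max(cx - w // 2, 0), width - w)
    top = min(max(cy - h // 2, 0), height - h)
    return (left, top, left + w, top + h)

# S204 has produced a gaze point at the picture centre; S206 would render
# this box at the first (higher) display resolution:
box = crop_around(1920, 1080, GazePoint(0.5, 0.5))
print(box)  # (672, 378, 1248, 702)
```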
Optionally, in this embodiment, the above picture display method can be applied, but is not limited, to virtual reality display scenes, for example a user wearing a virtual reality device to experience a realistic virtual game or to watch highly immersive film and television works;
Moreover, the picture display method may also involve eye-tracking technology: specifically, the user's pupils are tracked so that the area the user's eyes are gazing at is always the one displayed and rendered, saving computing resources and reducing the power consumption of the whole device. Tracking the pupils also enables the VR device to implement visual interaction based on the user's gaze point, preparing for subsequent operations such as automatic visual calibration and player interaction.
Optionally, in this embodiment, the virtual reality device may be, but is not limited to, a portable wearing device, such as VR glasses, VR helmets, and the like; in addition, the virtual reality device may also be, but is not limited to, all devices in an entire virtual reality system, such as a virtual display device including a VR headset (for displaying a picture of virtual reality), a VR seat (for providing physical contact of virtual reality), and a VR temperature control system (for providing a physical environment of virtual reality), which are not limited herein.
Optionally, in this embodiment, the target eyeball image includes an image of the eyeballs of a target user currently viewing the target picture with the virtual reality device, and may further include an image of the face of that user, and the like;
further taking an example that the target eyeball image includes an image of an eyeball of a target user who is currently viewing the target picture using the virtual reality device, the image of the eyeball may include, but is not limited to, an image including at least one of the following: eyelids, sclera, pupil, iris, eyelashes, cornea, and the like.
Optionally, in this embodiment, the image capturing component may capture, but is not limited to, the target eyeball image by at least one of the following methods:
(1) Near-eye lateral eye tracking. The advantage is that such a tracker can be used as a peripheral; the disadvantage is that the eye-tracking camera sits below and to the side, so the eyeball can only be photographed from the side, and detection precision is generally low and varies widely between users owing to differences in the height of the eyelids, eyelashes and cheekbones;
(2) Bending the optical axis sideways with a half-mirror and placing the eye-detection camera on the opposite side of the half-mirror. The advantage is that the eyeball can be photographed fully from the front through the half-mirror, making tracking more accurate;
(3) Opening an aperture in the middle of the forward display screen, so that the eye-tracking camera can be placed centrally and work with infrared LEDs arranged around the circumference of the VR optical lens to capture the movement of the user's pupils.
Optionally, in this embodiment, the gaze point information represents the position of the gaze point of the target user's eyeballs on the target picture; it may also, but need not, be interpreted as indicating the target user's current gaze core area on the target picture;
Specifically, in terms of the physiology of the human eye, the core viewing angle is 5-18°. Within this area the eye obtains the greatest amount of information; as the angle widens, the eye can hardly capture display details, so the overall impression of the display effect comes mainly from the core gaze area;
On this basis, to highlight the gaze core area actually perceived by the human eye, eye tracking is used to obtain the target user's gaze core area on the target picture, which is then given the best display resolution, while in the non-core area the display resolution is reduced step by step with distance. This effectively reduces the total amount of display data while leaving the subjective visual effect unchanged: the method not only highlights the gaze core area (the target sub-picture), bringing the display closer to a real viewing environment, but also reduces the overall demands on the image-processing chip, whole-device power consumption, the battery and other modules.
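The step-by-step resolution reduction described above can be sketched as a per-pixel resolution multiplier that depends on distance from the gaze point; the linear ramp and the 0.25 floor are illustrative choices, not the patented falloff scheme:

```python
def resolution_scale(dist, core_radius, full_radius):
    """Display-resolution multiplier for a pixel at `dist` from the gaze
    point: 1.0 inside the core gaze area, decreasing with distance outside
    it, down to a floor of 0.25 at the far periphery."""
    if dist <= core_radius:
        return 1.0
    if dist >= full_radius:
        return 0.25
    # linear ramp between the core area and the periphery
    t = (dist - core_radius) / (full_radius - core_radius)
    return 1.0 - 0.75 * t

print(resolution_scale(50, 100, 500))   # 1.0   (inside the core area)
print(resolution_scale(300, 100, 500))  # 0.625 (halfway out)
print(resolution_scale(900, 100, 500))  # 0.25  (far periphery)
```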
Optionally, in this embodiment, the relationship between the target picture and the target sub-picture may be, but is not limited to, one of the following: the target picture contains the target sub-picture; or the two are independent pictures, with the image represented by the target sub-picture overlapping part of the target picture;
Taking first the case where the target picture contains the target sub-picture: assuming a target display screen is configured in the virtual reality device, then while the target picture is shown on that screen, the display resolution of the target sub-picture within it is raised, thereby highlighting the target sub-picture;
As for the case where the two are independent pictures whose images overlap: assuming a first display screen and a second display screen are configured in the virtual reality device, the target sub-picture is shown on the first screen and the target picture on the second, with the image of the sub-picture overlapping part of the target picture. Throughout the display process, the picture finally seen by the target user is the one represented by the target picture, but the display resolution of the overlapped region is visibly higher than that of the rest; that is, the target sub-picture covers the corresponding region of the target picture.
Optionally, in this embodiment, the display resolution may, but need not, represent the number of pixels each image in the picture contains; in general, the more pixels, the finer the displayed image. For example, an image 300 pixels in both width and height has 90,000 pixels in total.
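The pixel arithmetic behind this, together with the data saving from rendering only the sub-picture at full detail, can be checked directly (the 100 × 100 sub-picture size and the quarter-detail periphery are illustrative assumptions):

```python
# The 300 x 300 = 90,000-pixel example from the text, plus the saving from
# foveated display: only the sub-picture is held at full detail while the
# periphery is kept at quarter detail.
def pixel_count(w, h):
    return w * h

full = pixel_count(300, 300)
print(full)  # 90000

sub = pixel_count(100, 100)          # sub-picture at full detail
rest = (full - sub) // 4             # periphery at quarter detail
print(sub + rest)  # 30000 "full-quality-equivalent" pixels, a 3x reduction
```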
In addition, in general, when the user's eyes gaze straight ahead, the perceived definition of the observed image is highest. When the user keeps the head still and observes only by rotating the eyeballs, definition does not drop much within a range of about ±18°; beyond that, definition drops sharply, the eyes tire quickly, and the posture can be held only briefly, so the user subconsciously turns the head toward the image to bring it directly in front of the eyes. The above picture display method can therefore be applied within a ±18° range.
It should be noted that the position of the gaze point of the target user's eyeball on the target picture is obtained by eye tracking, the target sub-picture within the user's core field of view is determined from that position, and the target sub-picture is then displayed at high resolution in a targeted manner; the rest of the target picture is presented to the user at a lower resolution than the target sub-picture, achieving the purpose of displaying the picture inside and outside the user's core field of view differently.
To further illustrate, optionally, as shown in (a) of fig. 3, assume that a target user 304 wears a virtual reality device 302 to view a target screen 306; as further shown in (b) of fig. 3, a gaze point 308 on the target screen 306 (which may be, but is not limited to, the gaze point of a pupil of the target user 304 on the target screen 306) is obtained by using the eyeball tracking technique; based on this, in the display of the target screen 306, the target sub-screen 310 is highlighted, where the highlighting may be, but is not limited to, displaying at a higher display resolution.
According to the embodiment provided by the application, the distribution position of the gaze point of the eyeballs of the target user on the target picture is obtained through the eyeball tracking technology, the target sub-picture within the core visual field range of the target user in the target picture is determined according to the distribution position, and the target sub-picture is displayed at high resolution in a targeted manner, thereby achieving the purpose of displaying the visual field pictures inside and outside the core visual field of the user in a differentiated manner.
As an optional scheme, in the process of displaying a target picture in a virtual reality device, displaying a target sub-picture matched with gaze point information according to a first display resolution includes:
s1, displaying a target sub-picture on a first display screen included in the target display screen, wherein the analysis configuration parameter of the first display screen is the first display resolution, and the analysis value of the analysis configuration parameter is positively correlated with the precision of the image displayed on the display screen;
and S2, displaying the target picture on a second display screen included in the target display screen, wherein the analysis configuration parameter of the second display screen is a second display resolution.
It should be noted that, with the development of VR devices in recent years, in order to improve the use experience of VR devices, the related art often adopts the approach of improving the display resolution of the VR device, so that the image observable in the VR device further approaches the state of an image observed in a real scene;
further, from the principle and existing devices: in order to further improve the display resolution of a VR device, the resolution of its display module must be further improved. However, because the human visual range is very wide (FOV exceeding 100°), and the edge distortion of the optical imaging module (the optical lens) must be considered and balanced, a larger display screen size (about 3.5 inches by industry convention) is needed; when the DPD (angular pixel density) approaches 60, the resolution of the display approaches 7K, the requirement on the production and manufacturing process of the display screen is extremely high, and mass production is difficult to realize in the short term. In other words, if the display resolution of the picture is to be improved in the related art, the configuration of the display needs to be improved as a whole at a higher cost, which consumes huge manpower and material resources and is inefficient. That is, the related art has the problem of low picture display resolution.
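The 7K figure above follows from treating the required panel resolution as roughly DPD × FOV; the sketch below illustrates that assumption (the function name is hypothetical, not from this disclosure):

```python
def required_resolution_px(dpd: float, fov_deg: float) -> int:
    """Horizontal pixels needed for a given angular pixel density
    (pixels per degree) across a given field of view, assuming the
    simple linear relation resolution = DPD * FOV."""
    return round(dpd * fov_deg)

# With DPD close to 60 over a >100° FOV, the panel resolution approaches 7K:
print(required_resolution_px(60, 110))  # 6600
```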
Optionally, in this embodiment, the first display screen may be, but is not limited to, a secondary display screen, and the second display screen may be, but is not limited to, a primary display screen, where the size of the primary display screen is larger than that of the secondary display screen, and the analysis configuration parameter of the primary display screen is lower than that of the secondary display screen;
based on this, the first display screen may be, but is not limited to be, responsible for displaying the target sub-picture, and the second display screen may be, but is not limited to be, responsible for displaying the target picture; the smaller first display screen can achieve a higher resolution configuration, so the display of the target sub-picture can be completed at a higher resolution. From the target user's perspective, there is no obvious difference between improving the overall analysis configuration of the display screen and improving the analysis configuration of the key area only; for the virtual reality device, however, an effect the same as, or even more beneficial than (i.e., highlighting), improving the overall analysis configuration can be achieved by improving only the analysis configuration of the key area, which provides a more efficient method of improving picture display.
Optionally, in the present embodiment, the second display screen may be, but is not limited to be, primarily responsible for image display of the non-central visual region in the large FOV (100° and above), and may include, but is not limited to, a TFT LCD or an AMOLED. The first display screen may be, but is not limited to, a micro OLED; since its process differs from the traditional TFT LCD and AMOLED production processes and adopts the semiconductor silicon wafer process, its area is generally small but its resolution can be extremely high, thereby realizing high-resolution presentation of central vision within the roughly 30° FOV region of the user's eye in the VR device.
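Conversely, the angular pixel density achievable by a small high-resolution panel over a narrow central FOV can be estimated the same way; the panel width used below is a hypothetical example value, not a figure from this disclosure:

```python
def angular_pixel_density(resolution_px: int, fov_deg: float) -> float:
    """Angular pixel density (pixels per degree) of a panel whose
    resolution_px pixels span fov_deg degrees of the visual field."""
    return resolution_px / fov_deg

# Hypothetical numbers: a 2560-pixel-wide micro OLED covering a 30° central FOV
print(angular_pixel_density(2560, 30))  # ≈ 85 pixels per degree
```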
By way of further example, optionally based on the scenario shown in fig. 3, continuing with the example shown in fig. 4, the target screen 306 is displayed on the second display screen 402, and the target sub-screen 310 is displayed on the first display screen 404, so as to combine the target screen 306 and the target sub-screen 310 to complete the final display, that is, the final display screen provided by the virtual reality device 302 for the target user 304.
According to the embodiment provided by the application, the target sub-picture is displayed on the first display screen included in the target display screen, wherein the analysis configuration parameter of the first display screen is the first display resolution, and the analysis value of the analysis configuration parameter is in positive correlation with the precision of the image displayed on the display screen; and displaying the target picture on a second display screen included in the target display screen, wherein the analysis configuration parameter of the second display screen is a second display resolution, so that the effect of improving the display efficiency of the picture is realized.
As an alternative, displaying a target sprite on a first display screen included in a target display screen includes:
s1, extracting pixel data corresponding to the target sub-picture, wherein the pixel data is used for representing the color value and the distribution position of each pixel in the target sub-picture;
s2, the target sprite is displayed on the first display screen according to the pixel data.
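Steps S1 and S2 above can be sketched as a simple region extraction: the pixel data of the sub-picture are its color values, and its distribution position is recorded alongside. This is an illustrative sketch assuming the target sub-picture is a rectangular region of the frame, with hypothetical names and sizes:

```python
import numpy as np

def extract_subpicture(frame: np.ndarray, x: int, y: int, w: int, h: int):
    """Return the pixel data (color values) of a rectangular sub-picture,
    together with its distribution position (top-left corner) in the frame."""
    sub = frame[y:y + h, x:x + w].copy()
    return sub, (x, y)

# Hypothetical 8x8 RGB frame; extract a 4x4 region starting at (2, 2).
frame = np.zeros((8, 8, 3), dtype=np.uint8)
frame[2:6, 2:6] = (255, 0, 0)  # mark the region red
sub, pos = extract_subpicture(frame, 2, 2, 4, 4)
print(sub.shape, pos)  # (4, 4, 3) (2, 2)
```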
Optionally, in this embodiment, the target sprite may be, but is not limited to, extracted to obtain a plurality of image data, and the plurality of image data may be, but is not limited to, converted into corresponding pixel data.
As an alternative, displaying the target sprite on the first display screen included in the target display screen includes: and reflecting the target sub-picture displayed on the first display screen through a semi-reflective lens so as to enable the target sub-picture to be presented at a lens configured in the virtual reality equipment, wherein the semi-reflective lens is arranged between the lens and the second display screen according to a target angle.
Displaying the target screen on a second display screen included in the target display screen includes: and transmitting the target picture displayed on the second display screen through the semi-reflective lens so that the target picture is imaged at the lens.
Optionally, in this embodiment, the semi-reflective lens, which may also be called a beam splitter, may be, but is not limited to, an optical component for splitting incident light into two different beams at a specified ratio. When the half mirror is placed at an angle of 45° along the optical axis, the main optical axis of the VR device can be extended laterally, adding a path for the image displayed on the first display screen without affecting the image displayed on the second display screen. In addition, if the image capturing component is mounted on the second display screen, image superposition can also, but is not limited to, prevent the image capturing component on the second display screen from being visible to the target user.
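The superposition described above, where the reflected sub-picture covers the transmitted main picture at the lens, can be modeled in software roughly as follows; this is a simplified sketch for intuition only, not the device's actual optics:

```python
import numpy as np

def combine_at_lens(main_img: np.ndarray, sub_img: np.ndarray,
                    mask: np.ndarray) -> np.ndarray:
    """Where the reflected sub-picture lands (mask == True) it covers the
    transmitted main picture; elsewhere the main picture passes through."""
    return np.where(mask, sub_img, main_img)

main_p = np.full((4, 4), 10)   # low-resolution main picture (stand-in values)
sub_p = np.full((4, 4), 99)    # high-resolution sub-picture (stand-in values)
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True          # central region covered by the sub-picture
combined = combine_at_lens(main_p, sub_p, mask)
print(combined)  # 99 in the masked centre, 10 elsewhere
```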
The target sub-picture displayed on the first display screen is reflected by the semi-reflective lens, so that the target sub-picture is presented at a lens configured in the virtual reality device; and transmitting the target picture displayed on the second display screen through the semi-reflective lens so that the target picture is imaged at the lens.
By way of further illustration, and optionally based on the scenario shown in fig. 4, continuing with the example shown in fig. 5, the target sub-screen 310 displayed on the first display screen 404 is reflected by the semi-reflective lens 502 to complete the presentation of the target sub-screen 310 to the target user 304 by the virtual reality device 302.
By way of further illustration, and optionally based on the scenario shown in fig. 5, continuing with the example shown in fig. 6, the target screen 306 displayed on the second display screen 402 is transmitted through the semi-reflective lens 502 to complete the presentation of the target screen 306 to the target user 304 by the virtual reality device 302.
As an alternative, identifying the gaze point information of the target user from the target eyeball image includes:
s1, carrying out recognition processing on the target eyeball image to obtain first position information of the pupil in the eyeball of the target user;
and S2, determining second position information of the fixation point of the eyeball of the target user on the target picture based on the first position information.
Optionally, in this embodiment, the first position information may be, but is not limited to, representing the relative position of the pupil in the eyeball of the target user; the second position information may be, but is not limited to, representing a relative position of a gaze point of an eyeball of the target user on the target screen;
by way of further example, assuming that the first position information is a relative position between the pupil and the cornea, the second position information may be, but is not limited to, a relative position between the gaze point and a target element on the target screen, wherein the target element corresponds to the cornea and the pupil corresponds to the gaze point.
As an alternative, after determining second position information of the gaze point of the eyeball of the target user on the target screen based on the first position information, the method includes:
s1, determining a target watching area on the target picture based on the second position information, wherein the target watching area is used for representing a core watching area of the watching point of the eyeball of the target user on the target picture;
and S2, determining a target sub-picture according to the picture of the target watching area and the target picture.
To further illustrate, optionally, for example, as shown in (a) of fig. 7, a gaze point 704 is displayed on a target screen 702 in accordance with the second position information; as further shown in (b) of fig. 7, a target gaze area 706 is determined based on the gaze point 704, for example, a circle is drawn with the gaze point 704 as the center and a preset distance as the radius, and the obtained circle is taken as the target gaze area 706; further, the portion of the target picture 702 that coincides with the target gaze area 706 may be, but is not limited to be, determined as the target sub-picture.
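The circle construction above can be sketched as a boolean mask over the frame; an illustrative snippet with hypothetical dimensions:

```python
import numpy as np

def gaze_region_mask(shape, gaze_xy, radius):
    """Boolean mask of the circular target gaze area: pixels within
    `radius` of the gaze point, as in the circle construction above."""
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    gx, gy = gaze_xy
    return (xx - gx) ** 2 + (yy - gy) ** 2 <= radius ** 2

mask = gaze_region_mask((100, 100), (50, 50), 10)
print(mask[50, 50], mask[0, 0])  # True False
```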
For further example, as shown in fig. 8, optionally, a target gazing area 808 is determined on a target image 804 (assumed to be one frame image of the target screen) through a gazing point 806, and the corresponding image of the target gazing area 808 on the target image 804 is then separately displayed in the form of a target sub-image 802 (assumed to be one frame image of the target sub-screen); thus, from the perspective of the target user 810, the observed image is the target sub-image 802 together with the target image 804, the latter partially covered by the target sub-image 802.
By the embodiment provided by the application, a target watching area on the target picture is determined based on the second position information, wherein the target watching area is used for representing a core watching area of a watching point of eyeballs of a target user on the target picture; and determining the target sub-picture according to the picture of the target watching area and the target picture, so that the effect of improving the determination efficiency of the target sub-picture is realized.
As an alternative, for ease of understanding, assume that the above-described picture display method is applied to a VR scene. As shown in fig. 9, a hole area is configured in the center of the main display screen (the second display screen), an eyeball tracking camera (the image capturing component) is disposed in the hole area, and a sub display screen (the first display screen) is disposed perpendicular to the main display screen. Because the hole area at the center of the main display screen is covered by the display image of the sub display screen reflected by the semi-reflective lens, it cannot be perceived by the user, and the eyeball tracking camera can be arranged on the main optical axis (i.e., the horizontal line between the lens and the eyeball tracking camera shown in fig. 9), so that the position of the eye pupil can be located accurately. In cooperation with the tracking of the gaze point of the eyeball, the rendering requirement of the gaze point area of the display picture is adjusted, and the high-resolution first display screen can provide image presentation in time wherever the user needs to observe the picture clearly. The design purpose of realizing high resolution and eyeball tracking at the same time is thus achieved;
wherein the components shown in fig. 9 may be, but are not limited to, as follows:
the lens may be, but is not limited to, a VR optical lens module: for the user to see the displayed picture clearly and naturally in the VR device, a set of optical lens systems is needed; the optical system in this case has no special requirement, and includes but is not limited to a single-lens or multi-lens scheme, resin or glass lenses, and an aspheric, Fresnel, or composite lens scheme;
the half mirror, also called a beam splitter, may be, but is not limited to, an optical component for splitting incident light into two different beams at a specified ratio. When placed at an angle of 45° along the optical axis, it can extend the main optical axis of the VR device laterally, adding a path for the display image of the first display screen without affecting the image displayed by the main display screen; in addition, image superposition can prevent the opening for the eyeball tracking module in the main display screen from being seen by the user;
the main display screen may be, but is not limited to be, responsible for providing a larger visible FOV angle for the VR device; generally speaking, the visible angle should be greater than or equal to 100° (but is not limited thereto). Its DPD may be a more conventional display resolution; the current mass-production specification in the industry is about DPD ≈ 18, i.e., a fairly conventional 3.5-inch 1440 × 1600 display is adequate. In order to place the eyeball tracking camera on the main optical axis, a hole needs to be cut in the display screen, similar to the hole-punch scheme of a typical mobile phone front camera. Screens include but are not limited to TFT LCDs and AMOLEDs;
the secondary display screen includes but is not limited to a Micro OLED display screen, which uses a monocrystalline silicon semiconductor as the substrate, integrates a CMOS driving circuit formed by tens of millions of transistors in the semiconductor, and manufactures OLEDs (organic light emitting diodes) on the top layer of the CMOS driving circuit; it is thus a micro display device that achieves high resolution and a micro size at the same time. Compared with the conventional TFT or OLED process, its resolution can be extremely high, easily reaching more than five times that of a conventional TFT or AMOLED. Therefore, when local content within a narrower FOV needs to be displayed, its resolution can be extremely high;
the eyeball tracking scheme generally adopts an infrared camera, which identifies the positions of the eyeball and the pupil by capturing images of the user's eyeball; a matching infrared LED illuminates the eyeball in the dark environment in which the VR device is used. The present solution places no specific limitation on the eyeball motion recognition technology, which includes but is not limited to (1) the pupil-corneal reflection method, (2) the retina image method, (3) the eyeball modeling method, (4) the retina reflected light intensity method, and (5) the cornea reflected light intensity method;
the eyeball tracking camera passes through the opening in the main display screen and, in order to capture eye movement more accurately, is arranged on the main optical axis connecting the center of the eyeball pupil, the optical center of the lens, and the display.
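As a rough illustration of dark-pupil detection in an infrared image (one simplified stand-in for the recognition methods listed above, not a method this disclosure mandates), the pupil can be located by thresholding the darkest pixels and taking their centroid; the threshold and the synthetic image below are assumptions:

```python
import numpy as np

def pupil_center(ir_image: np.ndarray, threshold: int = 40):
    """Estimate the pupil centre in an infrared eye image: the pupil
    appears as the darkest region, so threshold the dark pixels and
    return their centroid (x, y), or None if nothing is dark enough."""
    ys, xs = np.nonzero(ir_image < threshold)
    if len(xs) == 0:
        return None
    return (float(xs.mean()), float(ys.mean()))

# Synthetic 64x64 image: bright background, dark 'pupil' blob around (20, 30).
img = np.full((64, 64), 200, dtype=np.uint8)
img[28:33, 18:23] = 10
print(pupil_center(img))  # (20.0, 30.0)
```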
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the order of acts, as some steps may occur in other orders or concurrently in accordance with the invention. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required by the invention.
According to another aspect of the embodiments of the present invention, there is also provided a screen display apparatus for implementing the screen display method described above. The apparatus comprises:
the image acquisition component is used for acquiring a target eyeball image, wherein the target eyeball image is an image of eyeballs of a target user watching a target picture by using virtual reality equipment at present;
a processor, wherein the processor is used for identifying the gazing point information of the target user from the target eyeball image;
the first display screen is used for displaying a target sub-picture matched with the fixation point information according to a first display resolution;
and the second display screen is used for displaying the target picture according to a second display resolution, and the first display resolution is higher than the second display resolution corresponding to the target picture.
For a specific embodiment, reference may be made to the example shown in the above-mentioned screen display method, and details in this example are not described herein again.
As an optional scheme, the apparatus further includes:
the semi-reflective lens is used for reflecting and imaging a target sub-picture displayed on the first display screen at a lens configured in the virtual reality equipment, and the semi-reflective lens is arranged between the lens and the second display screen according to a target angle;
the semi-reflective lens is also used for presenting a target picture displayed on the second display screen at the lens through transmission.
For a specific embodiment, reference may be made to the example shown in the above-mentioned screen display method, and details in this example are not described herein again.
As an alternative, the image capturing element is placed in the middle of the second display screen.
For a specific embodiment, reference may be made to the example shown in the above-mentioned screen display method, and details in this example are not described herein again.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the order of acts, as some steps may occur in other orders or concurrently in accordance with the invention. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required by the invention.
According to another aspect of the embodiments of the present invention, there is also provided a screen display apparatus for implementing the screen display method described above. As shown in fig. 10, the apparatus includes:
an obtaining unit 1002, configured to obtain a target eyeball image collected by an image collecting component in a virtual reality device, where the target eyeball image includes an image of an eyeball of a target user who uses the virtual reality device to view a target picture currently;
a recognition unit 1004 for recognizing the gaze point information of the target user from the target eyeball image, wherein the gaze point information is used for indicating the distribution position of the gaze point of the eyeball of the target user on the target picture;
the display unit 1006 is configured to display a target sub-picture matched with the gaze point information according to a first display resolution in a process of displaying a target picture in the virtual reality device, where the first display resolution is higher than a second display resolution corresponding to the target picture.
Optionally, in this embodiment, the screen display method described above may be applied, but is not limited, to screen display scenes of virtual reality, such as a user wearing a virtual reality device to experience a virtual game that simulates reality or to watch highly immersive movie and television works;
moreover, the image display device can also relate to an eyeball tracking technology, specifically, the image display device can always display and render the area watched by the eyes of the user through tracking the pupils of the user so as to save computing resources and reduce the power consumption of the whole machine; in addition, tracking of the eye pupil also enables the VR device to realize visual interaction by tracking the user's gaze point, and prepares for subsequent additional operations such as visual automatic calibration, player interaction, and the like.
Optionally, in this embodiment, the virtual reality device may be, but is not limited to, a portable wearing device, such as VR glasses, VR helmets, and the like; in addition, the virtual reality device may also be, but is not limited to, all devices in an entire virtual reality system, such as a virtual display device including a VR headset (for displaying a picture of virtual reality), a VR seat (for providing physical contact of virtual reality), and a VR temperature control system (for providing a physical environment of virtual reality), which are not limited herein.
Optionally, in this embodiment, the target eyeball image includes an image of an eyeball of a target user who is currently viewing the target picture using the virtual reality device, and the target eyeball image further includes an image of a face of the target user who is currently viewing the target picture using the virtual reality device, and the like;
further taking an example that the target eyeball image includes an image of an eyeball of a target user who is currently viewing the target picture using the virtual reality device, the image of the eyeball may include, but is not limited to, an image including at least one of the following: eyelids, sclera, pupil, iris, eyelashes, cornea, and the like.
It should be noted that, the distribution position of the gaze point of the eyeball of the target user on the target picture is obtained by the eyeball tracking technology, the target sub-picture in the core visual field range of the target user in the target picture is determined according to the distribution position, and then the target sub-picture is displayed with high resolution in a targeted manner, in addition, the resolutions of other pictures except the target sub-picture in the target picture are lower than those of the target sub-picture for the user side, and thus the purpose of displaying the visual field pictures in the core visual field or outside the core visual field of the user in a differentiated manner is achieved.
For a specific embodiment, reference may be made to the example shown in the above-mentioned screen display device, and details in this example are not described herein again.
According to the embodiment provided by the application, the distribution position of the gaze point of the eyeballs of the target user on the target picture is obtained through the eyeball tracking technology, the target sub-picture within the core visual field range of the target user in the target picture is determined according to the distribution position, and the target sub-picture is displayed at high resolution in a targeted manner, thereby achieving the purpose of displaying the visual field pictures inside and outside the core visual field of the user in a differentiated manner.
As an alternative, the display unit 1006 includes:
the first display module is used for displaying a target sub-picture on a first display screen included in the target display screen, wherein the analysis configuration parameter of the first display screen is a first display resolution, and the analysis value of the analysis configuration parameter is in positive correlation with the precision of an image displayed on the display screen;
and the second display module is used for displaying a target picture on a second display screen included in the target display screen, wherein the analysis configuration parameter of the second display screen is a second display resolution.
For a specific embodiment, reference may be made to the example shown in the above-mentioned screen display method, and details in this example are not described herein again.
As an alternative, the first display module includes:
the extraction sub-module is used for extracting pixel data corresponding to the target sub-picture, wherein the pixel data is used for representing the color value and the distribution position of each pixel in the target sub-picture;
and the display sub-module is used for displaying the target sub-picture on the first display screen according to the pixel data.
For a specific embodiment, reference may be made to the example shown in the above-mentioned screen display method, and details in this example are not described herein again.
As an alternative, the first display module includes:
and the first imaging sub-module is used for reflecting the target sub-picture displayed on the first display screen through the semi-reflective lens so as to enable the target sub-picture to be imaged at a lens configured in the virtual reality equipment, wherein the semi-reflective lens is arranged between the lens and the second display screen according to a target angle.
For a specific embodiment, reference may be made to the example shown in the above-mentioned screen display method, and details in this example are not described herein again.
As an alternative, the second display module includes:
and the second sub-module is used for transmitting the target picture displayed on the second display screen through the semi-reflective lens so as to enable the target picture to be imaged at the lens.
For a specific embodiment, reference may be made to the example shown in the above-mentioned screen display method, and details in this example are not described herein again.
As an alternative, the identifying unit 1004 includes:
the identification module is used for identifying the target eyeball image so as to obtain first position information of a pupil in an eyeball of a target user;
and the first determining module is used for determining second position information of the fixation point of the eyeballs of the target user on the target picture based on the first position information.
For a specific embodiment, reference may be made to the example shown in the above-mentioned screen display method, and details in this example are not described herein again.
As an alternative, the apparatus further includes:
the second determination module is used for determining a target watching area on the target picture based on the second position information after determining the second position information of the gazing point of the eyeball of the target user on the target picture based on the first position information, wherein the target watching area is used for representing a core watching area of the gazing point of the eyeball of the target user on the target picture;
and determining a target sub-picture according to the picture of the target watching area and the target picture.
For a specific embodiment, reference may be made to the example shown in the above-mentioned screen display method, and details in this example are not described herein again.
According to still another aspect of the embodiments of the present invention, there is also provided an electronic device for implementing the screen display method, as shown in fig. 11, the electronic device includes a memory 1102 and a processor 1104, the memory 1102 stores therein a computer program, and the processor 1104 is configured to execute the steps in any one of the method embodiments by the computer program.
Optionally, in this embodiment, the electronic device may be located in at least one network device of a plurality of network devices of a computer network.
Optionally, in this embodiment, the processor may be configured to execute the following steps by a computer program:
s1, acquiring a target eyeball image acquired by an image acquisition component in the virtual reality device, wherein the target eyeball image comprises an image of an eyeball of a target user watching a target picture by using the virtual reality device;
s2, identifying the gaze point information of the target user from the target eyeball image, wherein the gaze point information is used for indicating the distribution position of the gaze point of the eyeball of the target user on the target picture;
and S3, in the process of displaying the target picture in the virtual reality equipment, displaying the target sub-picture matched with the gazing point information according to a first display resolution, wherein the first display resolution is higher than a second display resolution corresponding to the target picture.
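The steps S1 to S3 above can be tied together in a minimal end-to-end sketch; it assumes, purely for illustration, that the eye image and the target frame share coordinates and that the gaze estimate is a simple dark-pixel centroid:

```python
import numpy as np

def display_frame(ir_eye_image, main_frame, radius=16, threshold=40):
    """Sketch of S1-S3: estimate the gaze point from the eye image,
    cut the target sub-picture around it, and return what each display
    would show. The shared coordinate system is an illustrative assumption."""
    ys, xs = np.nonzero(ir_eye_image < threshold)  # S1/S2: dark-pupil gaze estimate
    gx, gy = int(xs.mean()), int(ys.mean())
    x0, y0 = max(gx - radius, 0), max(gy - radius, 0)
    sub = main_frame[y0:y0 + 2 * radius, x0:x0 + 2 * radius]  # S3: high-res region
    return main_frame, sub, (x0, y0)

# Synthetic inputs: bright eye image with a dark 'pupil' blob, blank RGB frame.
eye = np.full((128, 128), 200, dtype=np.uint8)
eye[60:65, 60:65] = 5
frame = np.zeros((128, 128, 3), dtype=np.uint8)
full, sub, pos = display_frame(eye, frame)
print(sub.shape, pos)  # (32, 32, 3) (46, 46)
```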
Alternatively, it can be understood by those skilled in the art that the structure shown in fig. 11 is only an illustration, and the electronic device may also be a terminal device such as a smart phone (e.g., an Android phone, an iOS phone, etc.), a tablet computer, a palmtop computer, a Mobile Internet Device (MID), a PAD, and the like. FIG. 11 does not limit the structure of the electronic device described above. For example, the electronic device may also include more or fewer components (e.g., network interfaces, etc.) than shown in FIG. 11, or have a different configuration than shown in FIG. 11.
The memory 1102 may be used to store software programs and modules, such as the program instructions/modules corresponding to the picture display method and apparatus in the embodiments of the present invention. The processor 1104 runs the software programs and modules stored in the memory 1102 to perform various functional applications and data processing, thereby implementing the picture display method described above. The memory 1102 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 1102 may further include memory located remotely from the processor 1104, and such remote memory may be connected to the terminal via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof. The memory 1102 may be used, specifically but not exclusively, to store information such as the target eyeball image, the gaze point information, and the target sub-picture. As an example, as shown in Fig. 11, the memory 1102 may include, but is not limited to, the acquisition unit 1002, the identification unit 1004, and the display unit 1006 of the above picture display apparatus. It may further include, but is not limited to, other module units of the picture display apparatus, which are not described again in this example.
Optionally, the transmission device 1106 is used to receive or transmit data via a network. Specific examples of the network may include wired and wireless networks. In one example, the transmission device 1106 includes a Network Interface Controller (NIC), which can be connected to a router via a network cable so as to communicate with the internet or a local area network. In another example, the transmission device 1106 is a Radio Frequency (RF) module, which is used to communicate with the internet wirelessly.
In addition, the electronic device further includes: a display 1108 for displaying the target eyeball image, the gaze point information, and the target sub-picture; and a connection bus 1110 for connecting the module components of the electronic device.
In other embodiments, the terminal device or server may be a node in a distributed system, where the distributed system may be a blockchain system, and the blockchain system may be a distributed system formed by a plurality of nodes connected through network communication. The nodes may form a peer-to-peer (P2P) network, and any form of computing device, such as a server, a terminal, or another electronic device, may become a node of the blockchain system by joining the peer-to-peer network.
According to an aspect of the application, a computer program product or computer program is provided, which comprises computer instructions stored in a computer-readable storage medium. A processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, so that the computer device performs the above picture display method, wherein the computer program is configured to execute the steps in any one of the method embodiments described above.
Alternatively, in this embodiment, the above computer-readable storage medium may be configured to store a computer program for executing the following steps:
S1, acquiring a target eyeball image captured by an image acquisition component in a virtual reality device, wherein the target eyeball image comprises an image of the eyeballs of a target user who is currently viewing a target picture with the virtual reality device;
S2, identifying gaze point information of the target user from the target eyeball image, wherein the gaze point information is used for indicating the distribution position, on the target picture, of the gaze point of the target user's eyeballs;
S3, in the process of displaying the target picture in the virtual reality device, displaying a target sub-picture matching the gaze point information at a first display resolution, wherein the first display resolution is higher than a second display resolution corresponding to the target picture.
Alternatively, in this embodiment, those skilled in the art will understand that all or part of the steps in the methods of the foregoing embodiments may be implemented by a program instructing the relevant hardware of the terminal device. The program may be stored in a computer-readable storage medium, and the storage medium may include a flash disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, and the like.
The above serial numbers of the embodiments of the present invention are for description only and do not represent the relative merits of the embodiments.
If the integrated unit in the above embodiments is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in the above computer-readable storage medium. Based on such understanding, the part of the technical solution of the present invention that in essence contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The software product is stored in a storage medium and includes instructions for causing one or more computer devices (which may be personal computers, servers, network devices, or the like) to execute all or part of the steps of the method described in the embodiments of the present invention.
In the above embodiments of the present invention, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed client may be implemented in other manners. The apparatus embodiments described above are merely illustrative. For example, the division of units is merely a division by logical function, and other divisions are possible in actual implementation; for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual couplings or direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, units, or modules, and may be electrical or in other forms.
The units described as separate parts may or may not be physically separate, and the parts displayed as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
The foregoing is only a preferred embodiment of the present invention. It should be noted that those skilled in the art can make various improvements and modifications without departing from the principle of the present invention, and these improvements and modifications should also be regarded as falling within the protection scope of the present invention.

Claims (13)

1. A picture display method, comprising:
acquiring a target eyeball image captured by an image acquisition component in a virtual reality device, wherein the target eyeball image comprises an image of the eyeballs of a target user who is currently viewing a target picture with the virtual reality device;
identifying gaze point information of the target user from the target eyeball image, wherein the gaze point information is used for representing the distribution position, on the target picture, of the gaze point of the target user's eyeballs; and
in the process of displaying the target picture in the virtual reality device, displaying a target sub-picture matching the gaze point information at a first display resolution, wherein the first display resolution is higher than a second display resolution corresponding to the target picture.
2. The method according to claim 1, wherein displaying the target sub-picture matching the gaze point information at the first display resolution in the process of displaying the target picture in the virtual reality device comprises:
displaying the target sub-picture on a first display screen included in a target display screen, wherein a resolution configuration parameter of the first display screen is the first display resolution, and the value of the resolution configuration parameter is positively correlated with the precision of the image displayed on the display screen; and
displaying the target picture on a second display screen included in the target display screen, wherein the resolution configuration parameter of the second display screen is the second display resolution.
3. The method according to claim 2, wherein displaying the target sub-picture on the first display screen included in the target display screen comprises:
extracting pixel data corresponding to the target sub-picture, wherein the pixel data is used for representing the color value and the distribution position of each pixel in the target sub-picture; and
displaying the target sub-picture on the first display screen according to the pixel data.
4. The method according to claim 2, wherein displaying the target sub-picture on the first display screen included in the target display screen comprises:
reflecting the target sub-picture displayed on the first display screen by means of a semi-reflective lens, so that the target sub-picture is presented at a lens configured in the virtual reality device, wherein the semi-reflective lens is arranged at a target angle between the lens and the second display screen.
5. The method according to claim 4, wherein displaying the target picture on the second display screen included in the target display screen comprises:
transmitting the target picture displayed on the second display screen through the semi-reflective lens, so that the target picture is imaged at the lens.
6. The method according to claim 1, wherein identifying the gaze point information of the target user from the target eyeball image comprises:
performing identification processing on the target eyeball image to obtain first position information of the pupil in the eyeballs of the target user; and
determining, based on the first position information, second position information of the gaze point of the target user's eyeballs on the target picture.
7. The method according to claim 6, wherein after determining the second position information of the gaze point of the target user's eyeballs on the target picture based on the first position information, the method further comprises:
determining a target gaze area on the target picture based on the second position information, wherein the target gaze area is used for representing the core gaze area of the gaze point of the target user's eyeballs on the target picture; and
determining the target sub-picture according to the picture of the target gaze area and the target picture.
8. A picture display device, characterized by comprising:
an image acquisition component, configured to acquire a target eyeball image, wherein the target eyeball image is an image of the eyeballs of a target user who is currently viewing a target picture with a virtual reality device;
a processor, wherein the processor is configured to identify gaze point information of the target user from the target eyeball image;
a first display screen, configured to display a target sub-picture matching the gaze point information at a first display resolution; and
a second display screen, configured to display the target picture at a second display resolution, wherein the first display resolution is higher than the second display resolution.
9. The device according to claim 8, further comprising:
a semi-reflective lens, wherein the semi-reflective lens is configured to present, by reflection, the target sub-picture displayed on the first display screen at a lens configured in the virtual reality device, the semi-reflective lens being arranged at a target angle between the lens and the second display screen; and the semi-reflective lens is further configured to present, by transmission, the target picture displayed on the second display screen at the lens.
10. The device according to claim 8, wherein
the image acquisition component is arranged in the middle of the second display screen.
11. A picture display apparatus, characterized by comprising:
an acquisition unit, configured to acquire a target eyeball image captured by an image acquisition component in a virtual reality device, wherein the target eyeball image comprises an image of the eyeballs of a target user who is currently viewing a target picture with the virtual reality device;
an identification unit, configured to identify gaze point information of the target user from the target eyeball image, wherein the gaze point information is used for indicating the distribution position, on the target picture, of the gaze point of the target user's eyeballs; and
a display unit, configured to display, in the process of displaying the target picture in the virtual reality device, a target sub-picture matching the gaze point information at a first display resolution, wherein the first display resolution is higher than a second display resolution corresponding to the target picture.
12. A computer-readable storage medium, comprising a stored program, wherein the program, when run, performs the method according to any one of claims 1 to 7.
13. An electronic device, comprising a memory and a processor, characterized in that the memory stores a computer program, and the processor is arranged to execute the method according to any one of claims 1 to 7 by means of the computer program.
CN202110827578.4A 2021-07-21 2021-07-21 Picture display method and device, storage medium and electronic equipment Active CN113467619B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110827578.4A CN113467619B (en) 2021-07-21 2021-07-21 Picture display method and device, storage medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN113467619A true CN113467619A (en) 2021-10-01
CN113467619B CN113467619B (en) 2023-07-14

Family

ID=77881745

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110827578.4A Active CN113467619B (en) 2021-07-21 2021-07-21 Picture display method and device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN113467619B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105334628A (en) * 2015-11-21 2016-02-17 胡东海 Virtual reality helmet
CN109901290A (en) * 2019-04-24 2019-06-18 京东方科技集团股份有限公司 The determination method, apparatus and wearable device of watching area
CN110413108A (en) * 2019-06-28 2019-11-05 广东虚拟现实科技有限公司 Processing method, device, system, electronic equipment and the storage medium of virtual screen
CN110679147A (en) * 2017-03-22 2020-01-10 奇跃公司 Depth-based foveated rendering for display systems
CN111290581A (en) * 2020-02-21 2020-06-16 京东方科技集团股份有限公司 Virtual reality display method, display device and computer readable medium
CN111831119A (en) * 2020-07-10 2020-10-27 Oppo广东移动通信有限公司 Eyeball tracking method and device, storage medium and head-mounted display equipment

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115103094A (en) * 2022-06-16 2022-09-23 深圳市天趣星空科技有限公司 Camera module far-view angle adjusting method and system based on fixation point
CN115509345A (en) * 2022-07-22 2022-12-23 北京微视威信息科技有限公司 Virtual reality scene display processing method and virtual reality equipment
CN115509345B (en) * 2022-07-22 2023-08-18 北京微视威信息科技有限公司 Virtual reality scene display processing method and virtual reality device
CN115562490A (en) * 2022-10-12 2023-01-03 西北工业大学太仓长三角研究院 Cross-screen eye movement interaction method and system for aircraft cockpit based on deep learning
CN115562490B (en) * 2022-10-12 2024-01-09 西北工业大学太仓长三角研究院 Deep learning-based aircraft cockpit cross-screen-eye movement interaction method and system
CN115761249A (en) * 2022-12-28 2023-03-07 北京曼恒数字技术有限公司 Image processing method, system, electronic equipment and computer program product
CN115761249B (en) * 2022-12-28 2024-02-23 北京曼恒数字技术有限公司 Image processing method, system, electronic equipment and computer program product

Also Published As

Publication number Publication date
CN113467619B (en) 2023-07-14

Similar Documents

Publication Publication Date Title
CN113467619B (en) Picture display method and device, storage medium and electronic equipment
CN109086726B (en) Local image identification method and system based on AR intelligent glasses
Plopski et al. Corneal-imaging calibration for optical see-through head-mounted displays
CN106873778B (en) Application operation control method and device and virtual reality equipment
US9135508B2 (en) Enhanced user eye gaze estimation
US10831268B1 (en) Systems and methods for using eye tracking to improve user interactions with objects in artificial reality
US9967555B2 (en) Simulation device
US9911214B2 (en) Display control method and display control apparatus
CN103873840B (en) Display methods and display equipment
US20180082479A1 (en) Virtual fitting method, virtual fitting glasses and virtual fitting system
EP3251092A1 (en) Automatic generation of virtual materials from real-world materials
CN104036169B (en) Biological authentication method and biological authentication apparatus
CN109901290B (en) Method and device for determining gazing area and wearable device
CN114967926A (en) AR head display device and terminal device combined system
CN104865705A (en) Reinforced realistic headwear equipment based intelligent mobile equipment
US11442275B2 (en) Eyewear including a push-pull lens set
US10255676B2 (en) Methods and systems for simulating the effects of vision defects
CN107291233B (en) Wear visual optimization system, intelligent terminal and head-mounted device of 3D display device
CN108427199A (en) A kind of augmented reality equipment, system and method
CN107544660B (en) Information processing method and electronic equipment
CN109917908B (en) Image acquisition method and system of AR glasses
CN113450448A (en) Image processing method, device and system
CN116503475A (en) VRAR binocular 3D target positioning method based on deep learning
Mori et al. A wide-view parallax-free eye-mark recorder with a hyperboloidal half-silvered mirror and appearance-based gaze estimation
US20220067878A1 (en) Method and device for presenting ar information based on video communication technology

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40053497

Country of ref document: HK

GR01 Patent grant