CN115883816A - Display method and device, head-mounted display equipment and storage medium - Google Patents

Display method and device, head-mounted display equipment and storage medium Download PDF

Info

Publication number
CN115883816A
Authority
CN
China
Prior art keywords
image
real scene
scene image
determining
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211494355.1A
Other languages
Chinese (zh)
Inventor
杨青河
王玉影
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Goertek Technology Co Ltd
Original Assignee
Goertek Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Goertek Technology Co Ltd
Priority to CN202211494355.1A
Publication of CN115883816A
Legal status: Pending

Landscapes

  • Controls And Circuits For Display Device (AREA)

Abstract

The invention discloses a display method, a display device, a head-mounted display device and a storage medium, relates to the field of head-mounted display devices, and is applied to a head-mounted display device provided with a first camera for collecting real scene images. First, a target exposure of the first camera is determined according to the image content of a region of interest of the user in the real scene image; a target real scene image is then acquired by the first camera based on the target exposure and displayed. By acquiring a real scene image that meets the user's brightness requirement with an exposure determined from the image content of the region of interest, the method solves the technical problem that improper exposure in the region viewed by the human eye makes the picture displayed by the head-mounted display device too dark or too bright and thus fails to meet the user's viewing requirement on picture brightness. Displaying a real scene picture with appropriate brightness improves the VST (video see-through) display effect of the head-mounted display device and improves the user experience.

Description

Display method and device, head-mounted display equipment and storage medium
Technical Field
The present invention relates to the field of head-mounted display technologies, and in particular, to a display method, a display device, a head-mounted display apparatus, and a computer-readable storage medium.
Background
Video see-through (VST) is currently a mainstream interaction mode of head-mounted display devices. Its working principle is as follows: after a user puts on the head-mounted display device, a camera on the device acquires a real scene image, and the real scene image acquired by the camera is then displayed on the display screen of the head-mounted display device for the user to watch.
However, video see-through on current head-mounted display devices has the following problem: the picture viewed by the user's eyes may be too dark or too bright, which degrades the experience of the user wearing the display device.
Disclosure of Invention
The invention mainly aims to provide a display method, a display device, a head-mounted display device and a computer-readable storage medium, and aims to solve the technical problem in the prior art that the picture displayed by a head-mounted display device is too dark or too bright and does not meet the user's viewing requirement on picture brightness.
In order to achieve the above object, the present invention provides a display method, where the display method is applied to a head-mounted display device, and a first camera for collecting an image of a real scene is installed on the head-mounted display device, and the display method includes:
determining a region of interest of a user in a real scene image;
determining a target exposure of the first camera based on image content of the region of interest;
and acquiring a target real scene image acquired by the first camera based on the target exposure, and displaying the target real scene image.
Optionally, the step of determining a region of interest of the user in the real scene image includes:
the method comprises the steps of acquiring an eye image of a user, and determining an interested area of the user in a real scene image according to the eye image.
Optionally, the step of determining a region of interest of the user in the real scene image according to the eye image includes:
and calculating the fixation point coordinate of the user according to the eye image, and determining the region of interest of the user in the real scene image based on the fixation point coordinate.
Optionally, the step of determining a region of interest of the user in the real scene image based on the gaze point coordinates includes:
determining a preset angle range within which content gazed at by the human eye appears clear;
and determining the region of interest of the user in the real scene image according to the gaze point coordinates and the preset angle range.
Optionally, after the step of displaying the target real scene image, the method further includes:
adjusting the virtual image brightness of the virtual scene image based on the real image brightness of the target real scene image to obtain a target virtual scene image;
and fusing and displaying the target virtual scene image and the target real scene image.
Optionally, the display method further includes:
determining a peripheral area corresponding to the region of interest on the acquired real scene image, and determining peripheral brightness corresponding to the peripheral area;
adjusting the real scene image corresponding to the peripheral area to the peripheral brightness to obtain a peripheral scene image;
and displaying the target real scene image and the peripheral scene image.
Optionally, the step of determining the peripheral brightness corresponding to the peripheral area includes:
determining real image brightness of the target real scene image, and determining peripheral brightness corresponding to the peripheral area based on the real image brightness;
wherein the real image luminance is greater than the peripheral luminance.
In addition, to achieve the above object, the present invention also provides a display device including:
the region-of-interest determining module, configured to determine the region of interest of the user in the real scene image;
a target exposure determination module for determining a target exposure of the first camera based on image content of the region of interest;
and the display module is used for acquiring a target real scene image acquired by the first camera based on the target exposure and displaying the target real scene image.
Further, to achieve the above object, the present invention also provides a head mounted display device including: a first camera for capturing images of a real scene, a second camera for capturing images of a user's eyes, a memory, a processor, and a computer program stored on the memory and executable on the processor, which computer program, when executed by the processor, implements the steps of the display method as set forth in any one of the preceding claims.
Further, to achieve the above object, the present invention also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the display method as described in any one of the above.
The embodiment of the invention provides a display method, a display device, a head-mounted display device and a computer readable storage medium, wherein the display method is applied to the head-mounted display device, a first camera for collecting real scene images is installed on the head-mounted display device, and the display method comprises the following steps: determining a region of interest of a user in a real scene image; determining a target exposure of the first camera based on image content of the region of interest; and acquiring a target real scene image acquired by the first camera based on the target exposure, and displaying the target real scene image.
Firstly, determining the exposure of a first camera for acquiring a real scene image according to the image content of a region of interest of a user in the real scene image, and then displaying a target real scene image acquired by the first camera based on the target exposure.
Therefore, by acquiring a real scene image that meets the user's brightness requirement with an exposure determined from the image content of the region of interest, the method solves the technical problem that improper exposure in the region viewed by the human eye makes the picture displayed by the head-mounted display device too dark or too bright and thus fails to meet the user's viewing requirement on picture brightness. Displaying a real scene picture with appropriate brightness improves the VST (video see-through) display effect of the head-mounted display device and improves the user experience.
Drawings
Fig. 1 is a schematic diagram of a terminal structure of a hardware operating environment according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart illustrating a display method according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a transmission area of a display method according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a display device according to an embodiment of the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and do not limit the invention.
Referring to fig. 1, fig. 1 is a schematic structural diagram of an operating device in a hardware operating environment according to an embodiment of the present invention.
As shown in fig. 1, the operation device may include: a processor 1001 such as a Central Processing Unit (CPU), a communication bus 1002, a user interface 1003, a network interface 1004, and a memory 1005. The communication bus 1002 is used to enable connection and communication between these components. The user interface 1003 may include a display screen (Display) and an input unit such as a keyboard (Keyboard); optionally, the user interface 1003 may also include a standard wired interface and a wireless interface. The network interface 1004 may optionally include a standard wired interface and a wireless interface (e.g., a Wi-Fi interface). The memory 1005 may be a Random Access Memory (RAM) or a Non-Volatile Memory (NVM), such as a disk memory. The memory 1005 may alternatively be a storage device separate from the processor 1001.
Those skilled in the art will appreciate that the configuration shown in FIG. 1 does not constitute a limitation of the operating device and may include more or fewer components than shown, or some components may be combined, or a different arrangement of components.
As shown in fig. 1, a memory 1005, which is a kind of storage medium, may include therein an operating system, a data storage module, a network communication module, a user interface module, and a computer program.
In the operating device shown in fig. 1, the network interface 1004 is mainly used for data communication with other devices; the user interface 1003 is mainly used for data interaction with the user; the processor 1001 and the memory 1005 may be provided in the operation device, which calls the computer program stored in the memory 1005 through the processor 1001 and performs the following operations:
determining a region of interest of a user in a real scene image;
determining a target exposure of the first camera based on image content of the region of interest;
and acquiring a target real scene image acquired by the first camera based on the target exposure, and displaying the target real scene image.
Further, the processor 1001 may call the computer program stored in the memory 1005, and also perform the following operations:
the step of determining the region of interest of the user in the real scene image includes:
acquiring an eye image of the user, and determining the region of interest of the user in the real scene image according to the eye image.
Further, the processor 1001 may call the computer program stored in the memory 1005, and also perform the following operations:
the step of determining the region of interest of the user in the real scene image according to the eye image comprises the following steps:
and calculating the fixation point coordinate of the user according to the eye image, and determining the region of interest of the user in the real scene image based on the fixation point coordinate.
Further, the processor 1001 may call the computer program stored in the memory 1005, and also perform the following operations:
the step of determining the region of interest of the user in the real scene image based on the gazing point coordinates comprises the following steps:
determining a preset angle range within which content gazed at by the human eye appears clear;
and determining the region of interest of the user in the real scene image according to the gaze point coordinates and the preset angle range.
Further, the processor 1001 may call the computer program stored in the memory 1005, and also perform the following operations:
after the step of displaying the target real scene image, the method further includes:
adjusting the virtual image brightness of the virtual scene image based on the real image brightness of the target real scene image to obtain a target virtual scene image;
and fusing and displaying the target virtual scene image and the target real scene image.
Further, the processor 1001 may call the computer program stored in the memory 1005, and also perform the following operations:
the display method further comprises the following steps:
determining a peripheral area corresponding to the region of interest on the acquired real scene image, and determining peripheral brightness corresponding to the peripheral area;
adjusting the real scene image corresponding to the peripheral area to the peripheral brightness to obtain a peripheral scene image;
and displaying the target real scene image and the peripheral scene image.
Further, the processor 1001 may call the computer program stored in the memory 1005, and also perform the following operations:
the step of determining the peripheral brightness corresponding to the peripheral area includes:
determining real image brightness of the target real scene image, and determining peripheral brightness corresponding to the peripheral area based on the real image brightness;
wherein the real image brightness is greater than the peripheral brightness.
Referring to fig. 2, the present invention provides a display method, where the display method is applied to a head-mounted display device, and a first camera for collecting an image of a real scene is installed on the head-mounted display device, and the display method includes:
and step S10, determining the interested area of the user in the real scene image.
On a head-mounted display device such as AR glasses, the first camera for capturing the real scene image is a scene camera (which may be a monochrome camera or an RGB camera), and the second camera for capturing images of the user's eyes is an eye-tracking camera. The real scene image refers to an image of the real-world scene collected by the camera on the head-mounted display device, and the region of interest refers to a region determined from the specific coordinate position of the user's gaze point collected by the eye-tracking camera.
Further, the step of determining the region of interest on the acquired real scene image further includes: acquiring voice information of the user, analyzing the voice information to obtain a main object specified by the user, matching a target subject on the real scene image whose similarity to the main object is greater than a preset threshold, and taking the region where the target subject is located as the region of interest. Therefore, in addition to determining the region of interest through the user's gaze point, the user can actively designate the region of interest on the real scene image, which further improves the applicability of the display method and provides the user with more optional operations for fusion display.
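By way of illustration only, the following Python sketch shows one possible form of this voice-driven selection; the recognize_speech and detect_objects callables are hypothetical stand-ins (the patent does not name a speech or object-detection engine), and string similarity via difflib is merely one possible similarity measure for the preset threshold.

from difflib import SequenceMatcher

def roi_from_voice(audio, scene_image, detect_objects, recognize_speech, threshold=0.6):
    """Pick the detected object whose label best matches the spoken request.

    detect_objects(scene_image) -> list of (label, (x, y, w, h)) tuples
    recognize_speech(audio)     -> spoken label, e.g. "the red cup"
    Both callables are hypothetical stand-ins for real speech/vision modules.
    """
    spoken = recognize_speech(audio).lower()
    best_box, best_score = None, 0.0
    for label, box in detect_objects(scene_image):
        score = SequenceMatcher(None, spoken, label.lower()).ratio()
        if score > best_score:
            best_box, best_score = box, score
    # Only accept the match if it clears the preset similarity threshold.
    return best_box if best_score >= threshold else None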
Further, the step of determining the region of interest on the acquired real scene image includes: recognizing a first gesture action of the user, and determining the region of interest of the user on the real scene image according to the first gesture action. In addition to determining the region of interest through the user's voice information, the region of interest may also be determined through a first gesture action of the user. The user may select one or more sub-regions as the region of interest with a first gesture on a real scene image divided according to a preset division mode, or may circle a region as the region of interest with a first gesture on an undivided real scene image. The specific gesture and motion of the first gesture action are not limited in this embodiment. In this way, the display capability of the head-mounted display device is fully utilized, and the region of interest can be determined by selecting or circling in addition to voice selection.
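A minimal sketch of the grid-selection variant follows, assuming the gesture recognizer has already returned the indices of the selected grid cells; the 3x3 grid and the (row, col) cell format are illustrative assumptions, not values given in the patent.

def roi_from_grid_selection(image_shape, selected_cells, rows=3, cols=3):
    """Union the selected grid cells into one bounding-box region of interest.

    image_shape: (height, width) of the real scene image
    selected_cells: iterable of (row, col) indices chosen via the first gesture
    """
    h, w = image_shape
    cells = list(selected_cells)
    cell_h, cell_w = h // rows, w // cols
    xs = [c * cell_w for _, c in cells] + [(c + 1) * cell_w for _, c in cells]
    ys = [r * cell_h for r, _ in cells] + [(r + 1) * cell_h for r, _ in cells]
    return min(xs), min(ys), max(xs) - min(xs), max(ys) - min(ys)  # (x, y, width, height)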
Step S20, determining a target exposure of the first camera based on the image content of the region of interest.
After the region of interest is determined, the target exposure of the first camera needs to be further determined according to the image content of the region of interest, the first camera acquires a target real scene image based on the target exposure, and the target real scene image is further displayed.
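The patent does not specify an exposure formula, so the sketch below assumes a simple mean-luminance controller: the exposure time is scaled so that the average brightness of the region of interest approaches a mid-grey target, clamped to the camera's limits. The 8-bit target of 118 and the proportional exposure model are illustrative assumptions.

import numpy as np

def target_exposure(roi_pixels, current_exposure_us,
                    target_mean=118.0, min_exposure_us=100, max_exposure_us=20000):
    """Scale exposure time so the ROI's mean luminance approaches target_mean.

    roi_pixels: HxW (or HxWx3) uint8 array cropped from the real scene image.
    """
    gray = roi_pixels.mean(axis=2) if roi_pixels.ndim == 3 else roi_pixels
    roi_mean = float(gray.mean()) + 1e-6          # avoid division by zero
    new_exposure = current_exposure_us * (target_mean / roi_mean)
    return float(np.clip(new_exposure, min_exposure_us, max_exposure_us))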
And step S30, acquiring a target real scene image acquired by the first camera based on the target exposure, and displaying the target real scene image.
Video see-through (VST) means that a head-mounted display device such as AR glasses acquires an image of the real scene through a miniature camera mounted on the glasses; through scene understanding and analysis, the AR glasses superimpose the information and image signals to be added onto the camera's video signal, fuse the virtual scene generated by the AR glasses with the real scene, and finally present the result to the user through the AR glasses' display screen.
Optionally, after the step of displaying the target real scene image, the method further includes: adjusting the virtual image brightness of the virtual scene image based on the real image brightness of the target real scene image to obtain a target virtual scene image; and fusing and displaying the target virtual scene image and the target real scene image.
In the process of displaying the target real scene image, the brightness of the target real scene image may differ from the brightness of the original virtual scene image, which can cause discomfort to the user's eyes and a sense of disconnection in the immersive experience. Therefore, after the target exposure of the first camera is determined based on the image content of the region of interest and the brightness of the real scene image captured by the first camera changes, the brightness of the original virtual scene image also needs to be adjusted synchronously. The virtual image brightness of the virtual scene image is therefore adjusted based on the real image brightness of the target real scene image to obtain the target virtual scene image. Preferably, the virtual image brightness is adjusted to be the same as the real image brightness. Furthermore, since the human eye does not perceive all details in the visual field when viewing objects, only the vicinity of the visual focus is clear, and definition gradually decreases in any region beyond a preset angle range around the center of the gaze area, for example beyond 5 degrees. Therefore, the virtual image brightness within the region of interest may be set to be the same as the real image brightness, while the virtual image brightness outside the region of interest is set lower than the real image brightness. Finally, after the virtual image brightness of the virtual scene image is adjusted to obtain the target virtual scene image, the adjusted target virtual scene image and the acquired target real scene image are fused and displayed. This avoids the brightness inconsistency caused by adjusting only the brightness of the real scene image: the real and virtual scene images are adjusted to be consistent, or the virtual image brightness is set lower than the real image brightness, so that the picture displayed by the head-mounted display device better matches the viewing habits of the naked eye.
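A minimal sketch of this brightness matching and fusion, assuming 8-bit images of identical size and a boolean region-of-interest mask; the alpha-blend fusion and the 0.7 dimming factor outside the region of interest are illustrative choices rather than values prescribed by the patent.

import numpy as np

def match_and_fuse(real_img, virtual_img, roi_mask, outside_factor=0.7, alpha=0.5):
    """Match the virtual image brightness to the real image and fuse them.

    real_img, virtual_img: HxWx3 uint8 arrays of identical size.
    roi_mask: HxW boolean array, True inside the region of interest.
    Inside the ROI the virtual brightness is scaled to the real mean;
    outside it is additionally dimmed by outside_factor (< 1).
    """
    real_mean = real_img.mean()
    virt_mean = virtual_img.mean() + 1e-6
    gain = real_mean / virt_mean

    adjusted = virtual_img.astype(np.float32) * gain
    adjusted[~roi_mask] *= outside_factor          # dimmer outside the ROI
    adjusted = np.clip(adjusted, 0, 255)

    fused = alpha * adjusted + (1 - alpha) * real_img.astype(np.float32)
    return np.clip(fused, 0, 255).astype(np.uint8)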
In this embodiment, the display method is applied to a head-mounted display device on which a first camera for acquiring an image of a real scene is installed, and includes: determining a region of interest of a user in a real scene image; determining a target exposure of the first camera based on image content of the region of interest; and acquiring a target real scene image acquired by the first camera based on the target exposure, and displaying the target real scene image.
Firstly, determining the exposure of a first camera for acquiring a real scene image according to the image content of a region of interest of a user in the real scene image, and then displaying a target real scene image acquired by the first camera based on the target exposure.
Therefore, by acquiring a real scene image that meets the user's brightness requirement with an exposure determined from the image content of the region of interest, the method solves the technical problem that improper exposure in the region viewed by the human eye makes the picture displayed by the head-mounted display device too dark or too bright and thus fails to meet the user's viewing requirement on picture brightness. Displaying a real scene picture with appropriate brightness improves the VST (video see-through) display effect of the head-mounted display device and improves the user experience.
Further, in another embodiment of the display method of the present invention, the step of determining the region of interest of the user in the real scene image includes: acquiring an eye image of the user, and determining the region of interest of the user in the real scene image according to the eye image.
Gaze point tracking, also called eye tracking, estimates the gaze direction or the gaze point position by capturing and extracting eyeball feature information with a sensor such as an infrared camera and measuring the movement of the eyes. The second camera of the head-mounted display device, such as AR glasses, used for acquiring the eye image of the user is an eye-tracking camera; after the eye image of the user is acquired through this second camera, the region of interest of the user on the real scene image can be determined according to the eye image.
Optionally, the step of determining a region of interest of the user in the image of the real scene according to the eye image includes: and calculating the fixation point coordinate of the user according to the eye image, and determining the region of interest of the user in the real scene image based on the fixation point coordinate.
After the eye image of the user is acquired through the second camera of the head-mounted display device, the gaze point coordinates of the user are calculated according to the eye image, and the region of interest is determined based on the gaze point coordinates. In this embodiment, the gaze point coordinates are determined by the pupil-corneal reflection method: with the positions of the infrared light source and the eye-tracking camera in the gaze tracking system of the head-mounted display device fixed, and based on an eyeball model, the corneal curvature center is obtained by calculating the positions of the glint point and the light source. The pupil center is obtained through image processing; the optical axis of the eyeball is the line connecting the corneal curvature center and the pupil center; and the true gaze direction, i.e. the visual axis, and the gaze point coordinates are obtained from the angle between the optical axis and the visual axis.
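The geometry described above can be condensed into the short sketch below; the fixed kappa angle, the display plane at z = 0 and the assumption that the corneal curvature center and pupil center are already available as 3D points are all simplifications (a full pupil-corneal-reflection pipeline must first recover these positions from the eye image and the light-source geometry).

import numpy as np

def gaze_point_on_screen(cornea_center, pupil_center, kappa_deg=5.0, screen_z=0.0):
    """Estimate the gaze point from the corneal curvature center and pupil center.

    cornea_center, pupil_center: 3D points (metres) in a common coordinate frame.
    The optical axis is the line cornea_center -> pupil_center; the visual axis
    is obtained by rotating it by the kappa angle about the vertical axis
    (a simplification of the true per-user kappa offset).
    """
    optical_axis = pupil_center - cornea_center
    optical_axis = optical_axis / np.linalg.norm(optical_axis)

    k = np.radians(kappa_deg)
    rot_y = np.array([[np.cos(k), 0, np.sin(k)],
                      [0,         1, 0        ],
                      [-np.sin(k), 0, np.cos(k)]])
    visual_axis = rot_y @ optical_axis

    # Intersect the visual axis with the display plane z = screen_z.
    t = (screen_z - cornea_center[2]) / visual_axis[2]
    return cornea_center + t * visual_axis

# Example with made-up coordinates (eye roughly 35 mm in front of the display plane):
gaze = gaze_point_on_screen(np.array([0.0, 0.0, 0.035]),
                            np.array([0.001, 0.0, 0.030]))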
Optionally, the step of determining the region of interest of the user in the real scene image based on the gaze point coordinates includes: determining a preset angle range within which content gazed at by the human eye appears clear; and determining the region of interest of the user in the real scene image according to the gaze point coordinates and the preset angle range.
When viewing objects, the human eye does not perceive all details in the visual field; only the vicinity of the visual focus is clear, and definition gradually decreases in any region beyond a preset angle range around the center of the gaze area, for example beyond 5 degrees, so such regions can be treated as neglected regions. This is because the cone cells on the retina, which are responsible for perceiving color and detail, are distributed with different densities; the area with a high density of cones, called the fovea, corresponds to the gaze point in the human visual field. Due to the structure of the retina, the resolution of the fovea is highest and the visual quality of the peripheral field is relatively lower; the foveal position corresponds to the gaze point region of the human eye. In VST, the camera's focus area needs to coincide with the eye's gaze point region in real time to ensure that the perceived view and the visual range are consistent. Therefore, in this embodiment, the region of interest is determined in the real scene image based on the gaze point coordinates and the preset angle range. The determined region of interest may be a circular region or a rectangular region; whatever its specific shape or size, it at least includes the region within the preset angle range centered on the gaze point coordinates.
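A sketch of how the preset angle range could be mapped onto image pixels, assuming a pinhole camera model in which the scene camera's focal length in pixels is known; the rectangular shape and the 5-degree default half angle are illustrative, the patent only requires that the region cover the preset angle range around the gaze point.

import math

def roi_from_gaze(gaze_xy, image_size, focal_px, half_angle_deg=5.0):
    """Rectangle covering the preset angular range around the gaze point.

    gaze_xy: (x, y) gaze point in image pixel coordinates.
    image_size: (width, height) of the real scene image.
    focal_px: camera focal length expressed in pixels (pinhole model).
    """
    radius = focal_px * math.tan(math.radians(half_angle_deg))
    gx, gy = gaze_xy
    w, h = image_size
    x0 = max(0, int(gx - radius))
    y0 = max(0, int(gy - radius))
    x1 = min(w, int(gx + radius))
    y1 = min(h, int(gy + radius))
    return x0, y0, x1 - x0, y1 - y0  # (x, y, width, height)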
In this embodiment, an eye-tracking camera and an infrared fill light (IR LED) are first used to collect an image of the user's eye. A gaze tracking algorithm performs image preprocessing including graying, binarization and edge detection, then locates the pupil center and the corneal reflection (glint) center respectively, and calculates the user's gaze point coordinates, obtaining the gaze direction and the specific gaze point coordinate value. The region of interest in the real scene image can then be obtained from the gaze point coordinates and the preset angle range. Therefore, in addition to determining the region of interest through the user's voice information or through selection and circling, the region of interest on the real scene image can be determined purely from the point the user is attending to, so that fusion display can follow the user's attention, improving the augmented reality effect and experience of the head-mounted display device.
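The preprocessing chain above (graying, binarization, edge detection, pupil and glint localization) could look roughly like the following OpenCV sketch; the fixed thresholds are placeholders, and a production gaze tracker would add the calibration and robustness steps the description only alludes to.

import cv2

def locate_pupil_and_glint(eye_bgr, pupil_thresh=40, glint_thresh=220):
    """Return (pupil_center, glint_center) in pixel coordinates, or None if not found.

    The pupil is found as the largest dark blob, the corneal reflection (glint)
    as the brightest blob; centers are taken from contour moments.
    """
    gray = cv2.cvtColor(eye_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (5, 5), 0)

    def blob_center(binary):
        contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            return None
        c = max(contours, key=cv2.contourArea)
        m = cv2.moments(c)
        if m["m00"] == 0:
            return None
        return (m["m10"] / m["m00"], m["m01"] / m["m00"])

    # Pupil: dark region -> inverse threshold; glint: bright IR reflection.
    _, pupil_bin = cv2.threshold(gray, pupil_thresh, 255, cv2.THRESH_BINARY_INV)
    _, glint_bin = cv2.threshold(gray, glint_thresh, 255, cv2.THRESH_BINARY)
    return blob_center(pupil_bin), blob_center(glint_bin)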
Further, in another embodiment of the display method of the present invention, the display method further includes:
determining a peripheral area corresponding to the region of interest on the acquired real scene image, and determining peripheral brightness corresponding to the peripheral area;
adjusting the real scene image corresponding to the peripheral area to the peripheral brightness to obtain a peripheral scene image;
and displaying the target real scene image and the peripheral scene image.
After the region of interest is determined, the peripheral region corresponding to the region of interest is determined on the acquired real scene image. The peripheral region may be an annular region surrounding the region of interest, or a rectangular border of equal width on the top, bottom, left and right sides; the way the peripheral region is determined, its size and its shape are not limited in this embodiment. Because the cone cells on the retina responsible for perceiving color and detail are distributed with different densities, the human eye cannot perceive all details in the visual field: only the vicinity of the visual focus is clear, and definition gradually decreases beyond a preset angle range around the center of the gaze area, for example beyond 5 degrees. Therefore, in this embodiment, not only the region of interest, which has the highest definition and receives most of the user's attention, is considered, but also the peripheral region corresponding to it. The peripheral region is treated as a region whose brightness is gradually reduced: the real scene image corresponding to the peripheral region is adjusted to the peripheral brightness to obtain the peripheral scene image. Finally, the target real scene image and the peripheral scene image are displayed, simulating and restoring the user's real viewing experience as much as possible. This alleviates the sense of disconnection between the virtual and the real when the user wears the head-mounted display device, and provides a viewing experience as close to reality as possible.
Optionally, the step of determining the peripheral brightness corresponding to the peripheral area includes:
determining real image brightness of the target real scene image, and determining peripheral brightness corresponding to the peripheral area based on the real image brightness;
wherein the real image luminance is greater than the peripheral luminance.
After the peripheral region is determined, the peripheral brightness corresponding to the peripheral region is determined based on the real image brightness of the target real scene image. Further, the step of determining the peripheral brightness corresponding to the peripheral region from the real image brightness includes: determining the division level in which the peripheral region is located, and determining the peripheral brightness of the peripheral scene image corresponding to the real image brightness of the target real scene image based on that division level. The periphery of the region of interest is divided in advance into different distance levels, and brightness values that gradually decrease outward from the real image brightness are preset for the different distance levels, so that the peripheral brightness of a peripheral region can be determined directly from the division level in which it is located. Alternatively, the periphery of the region of interest is divided in advance, and different brightness weights, each smaller than 1 and gradually decreasing, are set for peripheral regions at different distances; the peripheral brightness of a peripheral region is then determined based on its distance and the brightness weight corresponding to that distance.
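A sketch of the weight-based variant, assuming concentric square distance bands around the region of interest and an outwardly decreasing weight list; both the band geometry and the 0.85/0.7/0.55 weights are illustrative, not prescribed by the patent.

import numpy as np

def apply_peripheral_falloff(image, roi, level_px=60, weights=(0.85, 0.7, 0.55)):
    """Dim the image progressively with distance from the region of interest.

    image: HxWx3 uint8 real scene image; roi: (x, y, w, h) region of interest.
    Each level is a band level_px pixels wide; pixels beyond the last band
    keep the last (smallest) weight.
    """
    x, y, w, h = roi
    ys, xs = np.mgrid[0:image.shape[0], 0:image.shape[1]]
    # Chebyshev distance (in pixels) from each pixel to the ROI rectangle.
    dx = np.maximum(np.maximum(x - xs, xs - (x + w)), 0)
    dy = np.maximum(np.maximum(y - ys, ys - (y + h)), 0)
    dist = np.maximum(dx, dy)

    level = np.minimum(dist // level_px, len(weights) - 1).astype(int)
    weight = np.where(dist == 0, 1.0, np.array(weights)[level])
    out = image.astype(np.float32) * weight[..., None]
    return np.clip(out, 0, 255).astype(np.uint8)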
Here it is again considered that, because the cone cells on the retina responsible for perceiving color and detail are distributed with different densities, the human eye cannot perceive all details in the visual field when viewing objects: only the vicinity of the visual focus is clear, and definition gradually decreases beyond a preset angle range around the center of the gaze area, for example beyond 5 degrees. Therefore, the real image brightness of the target real scene image is set greater than the peripheral brightness of the peripheral region, so that the region of interest is highlighted in the picture displayed by the head-mounted display device.
Further, in another embodiment of the display method of the present invention, the display method further includes:
identifying a primary object in the region of interest;
determining a target exposure for the first camera based on image content of the primary object.
In addition to determining the target exposure of the first camera based on the image content of the whole region of interest, the target exposure may be determined more precisely based only on the main object in the region of interest. Further, the step of identifying the main object in the region of interest includes: identifying each candidate object in the region of interest, and selecting the candidate object with the largest area as the main object, or selecting the candidate object in the topmost layer as the main object, or determining whether each candidate object is located in the foreground or the background and taking one or more candidate objects in the foreground as the main object. Similarly, the main object may also be chosen from multiple candidate objects through operations such as voice and gestures; the specific steps are similar to the ways of determining the region of interest on the acquired real scene image described above, and are not repeated here.
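For illustration, the largest-area rule could be combined with the earlier exposure sketch as follows; detections is assumed to come from a hypothetical object detector, and target_exposure refers to the mean-luminance sketch given earlier in this description.

def exposure_from_main_object(roi_image, detections, current_exposure_us, target_exposure):
    """Pick the largest detected object in the ROI and meter exposure on it alone.

    detections: list of (label, (x, y, w, h)) boxes in ROI coordinates,
                e.g. from a hypothetical detect_objects(roi_image).
    target_exposure: the mean-luminance exposure function sketched above.
    """
    if not detections:
        # Fall back to metering on the whole region of interest.
        return target_exposure(roi_image, current_exposure_us)
    _, (x, y, w, h) = max(detections, key=lambda d: d[1][2] * d[1][3])
    main_object_pixels = roi_image[y:y + h, x:x + w]
    return target_exposure(main_object_pixels, current_exposure_us)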
By identifying the main object in the region of interest and determining the target exposure of the first camera according to the image content of the main object, the amount of data and computation required by the head-mounted display device during display can be reduced, the processing efficiency of the device is improved, the exposure accuracy is further improved, and the display effect of the head-mounted display device and the user's augmented reality experience are ultimately improved.
Referring to fig. 3, in another embodiment of the display method of the present invention, the user first wears a head-mounted display device such as VR/AR glasses; with the scene camera, eye-tracking camera and IR LED working normally, a mature gaze tracking algorithm is called to calculate the user's gaze direction. Specifically, clear eye images are collected with the eye-tracking camera and the IR LED, the eye images undergo preprocessing including graying, filtering, binarization and edge detection, the pupil center and the corneal reflection spot center are located, and the user's gaze point is calculated. The target exposure of the first camera for acquiring the real scene image is then determined based on the image content at the gaze point position, and finally the virtual scene image and the target real scene image acquired by the first camera based on the target exposure are fused and displayed on the VR/AR glasses display screen. Therefore, by acquiring a real scene image that meets the user's brightness requirement with an exposure determined from the image content of the region of interest, the method solves the technical problem that improper exposure in the region viewed by the human eye makes the picture displayed by the head-mounted display device too dark or too bright and fails to meet the user's viewing requirement on picture brightness. Displaying a real scene picture with appropriate brightness improves the VST video see-through display effect of the head-mounted display device and the user experience.
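Tying the pieces together, one frame of the flow in fig. 3 might be orchestrated as in the sketch below; the camera, eye-tracker and display objects are hypothetical interfaces standing in for the device's actual drivers, and the helper functions are the illustrative sketches from the earlier sections.

import numpy as np

def vst_frame(scene_camera, eye_tracker, display, virtual_img,
              roi_from_gaze, target_exposure, match_and_fuse, focal_px=1200.0):
    """One video-see-through frame: gaze -> ROI -> exposure -> capture -> fuse -> show.

    scene_camera, eye_tracker, display are hypothetical device interfaces
    (capture(), gaze_point(), exposure_us, set_exposure_us(), show());
    the helper functions are the illustrative sketches from earlier sections.
    """
    frame = scene_camera.capture()                      # current real scene image
    gaze_xy = eye_tracker.gaze_point()                  # from pupil/glint tracking
    h, w = frame.shape[:2]
    x, y, rw, rh = roi_from_gaze(gaze_xy, (w, h), focal_px)

    # Meter exposure on the region of interest only, then re-capture.
    exposure = target_exposure(frame[y:y + rh, x:x + rw], scene_camera.exposure_us)
    scene_camera.set_exposure_us(exposure)
    target_frame = scene_camera.capture()

    roi_mask = np.zeros((h, w), dtype=bool)
    roi_mask[y:y + rh, x:x + rw] = True
    display.show(match_and_fuse(target_frame, virtual_img, roi_mask))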
In addition, referring to fig. 4, an embodiment of the present invention further provides a display device, where the display device includes:
the interested region determining module M1 is used for determining the interested region of the user in the real scene image;
a target exposure determination module M2, configured to determine a target exposure of the first camera based on the image content of the region of interest;
and the display module M3 is used for acquiring a target real scene image acquired by the first camera based on the target exposure and displaying the target real scene image.
Optionally, the region-of-interest determining module M1 is further configured to acquire an eye image of the user, and determine the region of interest of the user in the real scene image according to the eye image.
Optionally, the region-of-interest determining module M1 is further configured to calculate a gaze point coordinate of the user according to the eye image, and determine the region of interest of the user in the real scene image based on the gaze point coordinate.
Optionally, the region-of-interest determining module M1 is further configured to determine a preset angle range within which content gazed at by the human eye appears clear;
and determine the region of interest of the user in the real scene image according to the gaze point coordinates and the preset angle range.
Optionally, the display device further includes: the advanced fusion module is used for adjusting the virtual image brightness of the virtual scene image based on the real image brightness of the target real scene image to obtain a target virtual scene image;
and fusing and displaying the target virtual scene image and the target real scene image.
Optionally, the display device further includes: the peripheral region fusion module is used for determining a peripheral region corresponding to the region of interest on the acquired real scene image and determining peripheral brightness corresponding to the peripheral region;
adjusting the real scene image corresponding to the peripheral area to the peripheral brightness to obtain a peripheral scene image;
and displaying the target real scene image and the peripheral scene image.
Optionally, the peripheral region fusion module is further configured to determine a real image brightness of the target real scene image, and determine a peripheral brightness corresponding to the peripheral region based on the real image brightness;
wherein the real image luminance is greater than the peripheral luminance.
The display device provided by the invention adopts the display method of the above embodiments, and solves the technical problem in the prior art that the picture displayed by a head-mounted display device is too dark or too bright and does not meet the user's viewing requirement on picture brightness. Compared with the prior art, the display device provided by the embodiment of the invention has the same beneficial effects as the display method provided by the above embodiment, and its other technical features are the same as those disclosed in the method embodiment, which are not repeated here.
In addition, an embodiment of the present invention further provides a head-mounted display device, where the head-mounted display device includes: a first camera for capturing images of a real scene, a second camera for capturing images of the eyes of a user, a memory, a processor, and a computer program stored on the memory and executable on the processor, which computer program, when executed by the processor, implements the steps of the display method as defined in any one of the preceding claims.
Furthermore, an embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the computer program implements the steps of the display method according to any one of the above.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising a/an ..." does not exclude the presence of other like elements in the process, method, article, or system comprising that element.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) as described above and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
The above description is only a preferred embodiment of the present invention, and is not intended to limit the scope of the present invention, and all equivalent structures or equivalent processes performed by the present invention or directly or indirectly applied to other related technical fields are also included in the scope of the present invention.

Claims (10)

1. A display method is applied to a head-mounted display device, wherein a first camera for collecting real scene images is installed on the head-mounted display device, and the display method comprises the following steps:
determining a region of interest of a user in a real scene image;
determining a target exposure of the first camera based on image content of the region of interest;
and acquiring a target real scene image acquired by the first camera based on the target exposure, and displaying the target real scene image.
2. The display method of claim 1, wherein the step of determining the region of interest of the user in the image of the real scene comprises:
acquiring an eye image of the user, and determining the region of interest of the user in the real scene image according to the eye image.
3. The display method according to claim 2, wherein the step of determining the region of interest of the user in the image of the real scene from the eye image comprises:
and calculating the fixation point coordinate of the user according to the eye image, and determining the region of interest of the user in the real scene image based on the fixation point coordinate.
4. The display method according to claim 3, wherein the step of determining the region of interest of the user in the image of the real scene based on the gaze point coordinates comprises:
determining a preset angle range within which content gazed at by the human eye appears clear;
and determining the region of interest of the user in the real scene image according to the gaze point coordinates and the preset angle range.
5. The display method according to claim 1, wherein after the step of displaying the target real scene image, the method further comprises:
adjusting the virtual image brightness of the virtual scene image based on the real image brightness of the target real scene image to obtain a target virtual scene image;
and fusing and displaying the target virtual scene image and the target real scene image.
6. The display method according to claim 1, further comprising:
determining a peripheral area corresponding to the region of interest on the acquired real scene image, and determining peripheral brightness corresponding to the peripheral area;
adjusting the real scene image corresponding to the peripheral area to the peripheral brightness to obtain a peripheral scene image;
and displaying the target real scene image and the peripheral scene image.
7. The display method according to claim 6, wherein the step of determining the peripheral brightness corresponding to the peripheral area comprises:
determining real image brightness of the target real scene image, and determining peripheral brightness corresponding to the peripheral area based on the real image brightness;
wherein the real image brightness is greater than the peripheral brightness.
8. A display device, comprising:
the region-of-interest determining module, configured to determine the region of interest of the user in the real scene image;
a target exposure determination module for determining a target exposure of the first camera based on image content of the region of interest;
and the display module is used for acquiring a target real scene image acquired by the first camera based on the target exposure and displaying the target real scene image.
9. A head-mounted display device, the head-mounted display device comprising: a first camera capturing images of a real scene, a second camera capturing images of a user's eyes, a memory, a processor, and a computer program stored on the memory and executable on the processor, which computer program, when executed by the processor, implements the steps of the display method according to any one of claims 1 to 7.
10. A computer-readable storage medium, characterized in that a computer program is stored thereon, which computer program, when being executed by a processor, carries out the steps of the display method according to any one of claims 1 to 7.
CN202211494355.1A 2022-11-25 2022-11-25 Display method and device, head-mounted display equipment and storage medium Pending CN115883816A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211494355.1A CN115883816A (en) 2022-11-25 2022-11-25 Display method and device, head-mounted display equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211494355.1A CN115883816A (en) 2022-11-25 2022-11-25 Display method and device, head-mounted display equipment and storage medium

Publications (1)

Publication Number Publication Date
CN115883816A (en) 2023-03-31

Family

ID=85764140

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211494355.1A Pending CN115883816A (en) 2022-11-25 2022-11-25 Display method and device, head-mounted display equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115883816A (en)

Similar Documents

Publication Publication Date Title
CN108427503B (en) Human eye tracking method and human eye tracking device
CN108919958B (en) Image transmission method and device, terminal equipment and storage medium
US9720238B2 (en) Method and apparatus for a dynamic “region of interest” in a display system
EP2621169B1 (en) An apparatus and method for augmenting sight
CN106327584B (en) Image processing method and device for virtual reality equipment
CN109901290B (en) Method and device for determining gazing area and wearable device
CN107260506B (en) 3D vision training system, intelligent terminal and head-mounted device based on eye movement
CN109283997A (en) Display methods, device and system
CN113467619B (en) Picture display method and device, storage medium and electronic equipment
CN112666705A (en) Eye movement tracking device and eye movement tracking method
CN111880654A (en) Image display method and device, wearable device and storage medium
CN107422844A (en) A kind of information processing method and electronic equipment
CN113325947A (en) Display method, display device, terminal equipment and storage medium
CN106708249B (en) Interaction method, interaction device and user equipment
CN107291233B (en) Wear visual optimization system, intelligent terminal and head-mounted device of 3D display device
CN113552947A (en) Virtual scene display method and device and computer readable storage medium
CN109917908B (en) Image acquisition method and system of AR glasses
GB2597917A (en) Gaze tracking method and apparatus
CN114967128B (en) Sight tracking system and method applied to VR glasses
CN115883816A (en) Display method and device, head-mounted display equipment and storage medium
CN115937959A (en) Method and device for determining gazing information and eye movement tracking equipment
CN111179860A (en) Backlight mode adjusting method of electronic equipment, electronic equipment and device
US10779726B2 (en) Device and method for determining eye movements by tactile interface
CN114895790A (en) Man-machine interaction method and device, electronic equipment and storage medium
CN114816065A (en) Screen backlight adjusting method, virtual reality device and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination