CN117197225A - Image display method, device, head-mounted equipment and medium - Google Patents


Info

Publication number
CN117197225A
Authority
CN
China
Prior art keywords
image
target image
head
data
pose data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311035122.XA
Other languages
Chinese (zh)
Inventor
杨青河
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Goertek Techology Co Ltd
Original Assignee
Goertek Techology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Goertek Techology Co Ltd filed Critical Goertek Techology Co Ltd
Priority to CN202311035122.XA
Publication of CN117197225A
Legal status: Pending


Abstract

The application discloses an image display method, an image display device, a head-mounted device and a medium, and relates to the technical field of image processing. The method comprises the following steps: acquiring original image data and first pose data of the head-mounted device, wherein the first pose data are the pose data of the head-mounted device at the moment the original image data are acquired; generating an image to be displayed from the original image data; selecting a first target image from the image to be displayed, wherein the first target image is an image within the visible range whose size equals the screen size of the head-mounted device, and is smaller than the image to be displayed; displaying the first target image and acquiring second pose data, wherein the second pose data are the pose data of the head-mounted device at the moment the first target image is displayed; selecting a second target image from the image to be displayed according to the first pose data, the second pose data and the first target image; and displaying the second target image. The method provides a new intermediate-frame generation technique that can replace the ATW function.

Description

Image display method, device, head-mounted equipment and medium
Technical Field
The present application relates to the field of image processing technology, and more particularly, to an image display method, an image display apparatus, a head-mounted device, and a computer-readable storage medium.
Background
With the development of technology and economy, head mounted display devices (e.g., XR head mounted display devices) have become increasingly popular.
Currently, the graphics processor (Graphics Processing Unit, GPU) on a head-mounted display device is mainly used to implement image rendering and asynchronous timewarp (Asynchronous Timewarp, ATW). ATW is a technique for generating intermediate frames, used, for example, when a game cannot maintain a sufficient frame rate for a scene, so as to effectively reduce judder of the game image. When the GPU implements ATW, the images output by the GPU generally need to stay synchronized with the refresh of the screen. To achieve this synchronization, it must be ensured that the threads implementing ATW are not blocked inside the GPU. Thus, ATW threads are typically given the highest thread priority, and are typically deployed to run on the high-performance cores of the GPU. This in turn increases the GPU's power consumption and resource occupation.
In summary, an image display method capable of replacing the ATW function is needed.
Disclosure of Invention
It is an object of the present application to provide a new solution for image display.
According to a first aspect of the present application, there is provided an image display method comprising:
acquiring original image data and first pose data of a head-mounted device, wherein the first pose data are the pose data of the head-mounted device at the moment the original image data are acquired;
generating an image to be displayed from the original image data;
selecting a first target image from the image to be displayed, wherein the first target image is an image within the visible range whose size equals the screen size of the head-mounted device, and is smaller than the image to be displayed;
displaying the first target image and acquiring second pose data, wherein the second pose data are the pose data of the head-mounted device at the moment the first target image is displayed;
selecting a second target image from the image to be displayed according to the first pose data, the second pose data and the first target image;
and displaying the second target image.
Optionally, the selecting a second target image from the images to be displayed according to the first pose data, the second pose data and the first target image includes:
determining the pose change amount of the head-mounted device at the display moment compared with the acquisition moment according to the first pose data and the second pose data;
determining an image selection parameter according to the pose change amount, wherein the image selection parameter comprises at least one of a scaling factor and an offset relative to the first target image;
and selecting a second target image from the images to be displayed according to the image selection parameters and the first target image.
Optionally, the second target image is an RGB image, and displaying the second target image includes:
extracting a red pixel image, a green pixel image and a blue pixel image from the second target image;
performing dispersion correction processing on the red pixel image, the green pixel image, and the blue pixel image, respectively, according to dispersion parameters of a lens of the head-mounted device;
fusing the red pixel image, the green pixel image and the blue pixel image after the dispersion correction processing to obtain a second target image after the dispersion correction;
and displaying the second target image after the dispersion correction.
Optionally, the selecting a first target image from the images to be displayed includes:
acquiring an eye-movement heatmap of the wearer of the head-mounted device;
and determining the first target image according to the eye-movement heatmap and the image to be displayed.
Optionally, the determining the image selection parameter according to the pose change amount includes:
determining an image selection parameter according to the pose change amount and preset mapping data;
the preset mapping data are data reflecting the corresponding relation between the pose change amount and the image selection parameters.
Optionally, the selecting a first target image from the images to be displayed includes:
acquiring the size of a screen of the head-mounted device;
and selecting a first target image from the images to be displayed according to the size.
Optionally, the method is performed by a display processor.
According to a second aspect of the present application, there is provided an image display apparatus comprising:
the first acquisition module is used for acquiring original image data and first pose data of the head-mounted equipment, wherein the first pose data are pose data of the head-mounted equipment at the moment of acquiring the original image data;
the generation module is used for generating an image to be displayed according to the original image data;
the first selecting module is used for selecting a first target image from the image to be displayed, wherein the first target image is an image within the visible range whose size equals the screen size of the head-mounted device, and is smaller than the image to be displayed;
the first display module is used for displaying the first target image;
the second acquisition module is used for acquiring second pose data, wherein the second pose data is the pose data of the head-mounted equipment at the display moment of the first target image;
the second selecting module is used for selecting a second target image from the images to be displayed according to the first pose data, the second pose data and the first target image;
and the second display module is used for displaying a second target image.
According to a third aspect of the present application, there is provided a head-mounted device comprising the image display apparatus as described in the second aspect; or,
the head-mounted device comprises a memory for storing computer instructions and a processor for invoking the computer instructions from the memory to perform the image display method of any of the first aspects.
According to a fourth aspect of the present application, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the image display method according to any one of the first aspects.
In an embodiment of the present application, there is provided an image display method comprising: acquiring original image data and first pose data of the head-mounted device, wherein the first pose data are the pose data of the head-mounted device at the moment the original image data are acquired; generating an image to be displayed from the original image data; selecting a first target image from the image to be displayed, wherein the first target image is an image within the visible range whose size equals the screen size of the head-mounted device, and is smaller than the image to be displayed; displaying the first target image and acquiring second pose data, wherein the second pose data are the pose data of the head-mounted device at the moment the first target image is displayed; selecting a second target image from the image to be displayed according to the first pose data, the second pose data and the first target image; and displaying the second target image. With this method, after the first target image is displayed, an intermediate frame image can be displayed. An intermediate-frame technique is thus realized, replacing the conventional ATW technique. That is, embodiments of the present application provide a new intermediate-frame generation technique that can replace the ATW function.
Other features of the present application and its advantages will become apparent from the following detailed description of exemplary embodiments of the application, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description, serve to explain the principles of the application.
Fig. 1 is a first block diagram of a hardware configuration of a head-mounted device for implementing an image display method according to an embodiment of the present application;
fig. 2 is a schematic flow chart of a method for implementing image display according to an embodiment of the present application;
fig. 3 is a schematic flow chart of a method for implementing image display according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a dispersion principle provided according to an embodiment of the present application;
fig. 5 is a schematic structural view of an image display device according to an embodiment of the present application;
fig. 6 is a second block diagram of a hardware configuration of a head-mounted device for implementing an image display method according to an embodiment of the present application.
Detailed Description
Various exemplary embodiments of the present application will now be described in detail with reference to the accompanying drawings. It should be noted that: the relative arrangement of the components and steps, numerical expressions and numerical values set forth in these embodiments do not limit the scope of the present application unless it is specifically stated otherwise.
The following description of at least one exemplary embodiment is merely exemplary in nature and is in no way intended to limit the application, its application, or uses.
Techniques, methods, and apparatus known to one of ordinary skill in the relevant art may not be discussed in detail, but are intended to be part of the specification where appropriate.
In all examples shown and discussed herein, any specific values should be construed as merely illustrative, and not a limitation. Thus, other examples of exemplary embodiments may have different values.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further discussion thereof is necessary in subsequent figures.
Fig. 1 is a block diagram one of a hardware configuration of a head-mounted device for implementing an image display method according to an embodiment of the present application.
The head-mounted device may be, for example, an AR, MR, VR or XR device, and may be a split-type head-mounted device or an integrated head-mounted device. A split-type head-mounted device consists of a head-mounted display device (such as AR glasses or a helmet) and a matching control device, the control device being the data processing unit of the head-mounted display device. The control device, like an integrated head-mounted device, is a device with an operating system. The operating system may be the Android operating system, the iOS operating system, or another possible operating system, which is not particularly limited in the embodiments of the present application.
It should be noted that, in the case that the head-mounted device is a split-type head-mounted device, a technician may deploy all or part of the steps in the image display method provided by the embodiment of the present application on the head-mounted display device or the control device according to the actual situation.
The headset 1000 may include a processor 1100, a memory 1200, an interface device 1300, a communication device 1400, a display device 1500, an input device 1600, a speaker 1700, a microphone 1800, and so on. The processor 1100 may be a central processing unit (CPU), a microcontroller (MCU), a display processor (DPU), or the like. The memory 1200 includes, for example, ROM (read-only memory), RAM (random-access memory), and nonvolatile memory such as a hard disk. The interface device 1300 includes, for example, a USB interface, a headphone interface, and the like. The communication device 1400 can perform wired or wireless communication. The display device 1500 is, for example, a liquid crystal display or a touch display. The input device 1600 may include, for example, a touch screen, a keyboard, etc. The user may input and output voice information through the speaker 1700 and the microphone 1800.
Although multiple devices are shown for the headset 1000 in fig. 1, the present application may involve only some of them; for example, the headset 1000 may involve only the memory 1200, the processor 1100 and the display device 1500.
In an embodiment of the present application, the memory 1200 of the head-mounted device 1000 is used for storing instructions for controlling the processor 1100 to perform the image display method provided by the embodiment of the present application.
Based on the above description, a skilled person can design instructions according to the solution disclosed in the present application. How instructions control the processor to operate is well known in the art and will not be described in detail here.
The embodiment of the application provides an image display method which can generate an intermediate frame image to replace the traditional ATW technology.
As shown in fig. 2, the method includes the following S2100-S2600:
S2100, acquiring original image data and first pose data of the head-mounted device.
The first pose data are pose data of the head-mounted equipment at the acquisition time of the original image.
In the embodiment of the application, for an integrated head-mounted device, the original image data are data collected by a plurality of image capture devices (such as cameras) on the head-mounted device that capture images of the external environment from different angles. The head-mounted device is provided with a pose sensor, which collects the pose data of the head-mounted device. The pose data collected by the pose sensor at the moment the original image is captured are recorded as the first pose data.
For a split-type head-mounted device, the original image data are data collected by a plurality of image capture devices (such as cameras) on the head-mounted display device that capture images of the external environment from different angles. The head-mounted display device is provided with a pose sensor, which collects the pose data of the head-mounted display device. The pose data collected by the pose sensor at the moment the original image is captured are recorded as the first pose data.
Based on the foregoing, it can be appreciated that the first pose data is specifically pose data at an image rendering time, and the first pose data may reflect head pose information of a wearer at the image rendering time.
S2200, generating an image to be displayed according to the original image data.
In the embodiment of the application, the original image data are data collected by a plurality of image capture devices (such as cameras) capturing images of the external environment from different angles around the head-mounted device or the head-mounted display device. On this basis, the original image data are fused to obtain an image reflecting the environment around the head-mounted device or the head-mounted display device, which is recorded as the image to be displayed. It will be appreciated that the image to be displayed contains the image to be shown at the upcoming display time.
It should be noted that, in the embodiment of the present application, the size of the image to be displayed needs to be larger than the size of the screen of the head-mounted device. In one example, the size of the image to be displayed is 1.2 times the size of the screen of the headset.
S2300, selecting a first target image from the image to be displayed.
The first target image is an image within the visible range whose size equals the screen size of the head-mounted device, and its size is smaller than that of the image to be displayed.
In the embodiment of the application, the first target image is the image that needs to be shown on the screen of the head-mounted device at the upcoming display time.
In one embodiment of the present application, the above S2300 may be implemented by the following S2310 and S2311:
S2310, acquiring an eye-movement heatmap of the wearer of the head-mounted device.
In an embodiment of the present application, the headset is further provided with an image capture device (for example, a camera) for capturing images of the wearer's eyes. Based on this, in one embodiment of the present application, S2310 may be implemented as follows: the captured eye images of the wearer are fed into an eye-tracking algorithm to obtain the eye-movement heatmap. The eye-movement heatmap reflects the wearer's gaze and shows the distribution of the wearer's attention. Typically, red in an eye-movement heatmap marks the region where gaze is most concentrated, while yellow and green mark regions with less gaze.
S2311, determining the first target image according to the eye-movement heatmap and the image to be displayed.
In one embodiment of the present application, an image in the image to be displayed that contains the hotspot region of the eye-movement heatmap and has the same size as the screen of the head-mounted device may be determined as the first target image.
Further, the image that is centered on the hotspot region of the eye-movement heatmap, contains it, and has the same size as the screen of the head-mounted device may be determined as the first target image.
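Steps S2310-S2311 can be sketched as a crop centred on the hottest point of the eye-movement heatmap. This is a minimal illustration, not the patent's implementation: the function name, the single-channel heatmap, and the argmax-based hotspot pick are all assumptions.

```python
import numpy as np

def select_first_target(to_display, heatmap, screen_h, screen_w):
    """Crop a screen-sized window from the image to be displayed, centred
    on the hottest point of the eye-movement heatmap and clamped so the
    window stays fully inside the image."""
    cy, cx = np.unravel_index(np.argmax(heatmap), heatmap.shape)
    top = int(np.clip(cy - screen_h // 2, 0, to_display.shape[0] - screen_h))
    left = int(np.clip(cx - screen_w // 2, 0, to_display.shape[1] - screen_w))
    return to_display[top:top + screen_h, left:left + screen_w]
```

With the 1.2x sizing mentioned above, e.g. a 120x120 image to be displayed and a 100x100 screen, the clamping leaves a 20-pixel margin for the window to slide in.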
In one embodiment of the present application, since the size of the first target image is the same as the screen size of the head-mounted device, it is necessary to first determine what the screen size of the head-mounted device is when selecting the target image from the images to be displayed. Based on this, in one embodiment of the present application, the above S2300 may be implemented by specifically the following S2320 and S2321:
S2320, acquiring the size of the screen of the head-mounted device.
In one embodiment of the application, the size of the screen of the headset is pre-stored in the headset. Based on this, S2320 described above can be implemented.
S2321, selecting a first target image from the images to be displayed according to the size.
In the embodiment of the application, a first target image whose size equals that of the screen of the head-mounted device is selected from the image to be displayed.
S2400, displaying the first target image and acquiring second pose data.
The second pose data are pose data of the head-mounted equipment at the display time of the first target image.
Note that the method for acquiring the second pose data in S2400 is the same as the method for acquiring the first pose data in S2100, and is not repeated here. The second pose data can reflect the wearer's head pose at the moment the image is displayed.
In the embodiment of the application, the first target image is displayed when the image display time arrives. Based on this, the display of the real frame that precedes the intermediate frame image is realized.
S2500, selecting a second target image from the images to be displayed according to the first pose data, the second pose data and the first target image.
Wherein the second target image is an intermediate frame image.
According to the embodiment of the application, the amount by which the pose changed between the image rendering moment and the image display moment can be determined from the first pose data and the second pose data. From this pose change, it can be inferred which part of the image to be displayed, and in which direction relative to the first target image, the wearer sees after seeing the first target image. In the embodiment of the present application, this inferred partial image is recorded as the second target image.
Both the first target image and the second target image belong to the image to be displayed, and the second target image is the part of the image to be displayed that is inferred, from the pose change, to lie in a certain direction relative to the first target image. The second target image is therefore continuous with the first target image and can serve as an intermediate frame image.
In one embodiment of the present application, the above S2500 may be specifically implemented by the following S2510 to S2512:
S2510, determining the pose change amount of the head-mounted device at the display moment compared with the acquisition moment according to the first pose data and the second pose data.
In the embodiment of the application, the difference between the first pose data and the second pose data is used as the pose variation.
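A minimal sketch of S2510 follows. It assumes (the patent does not specify this) that a pose sample is a 6-DoF vector (x, y, z, roll, pitch, yaw) with angles in radians, and wraps the angular components so that, for example, a yaw moving from +3 rad to -3 rad yields a small positive change rather than -6 rad.

```python
import numpy as np

def pose_delta(first_pose, second_pose):
    """Pose change of the headset at display time versus acquisition time,
    taken as the difference of two 6-DoF pose vectors (x, y, z, roll,
    pitch, yaw).  Angular components are wrapped into [-pi, pi)."""
    delta = np.asarray(second_pose, float) - np.asarray(first_pose, float)
    delta[3:] = (delta[3:] + np.pi) % (2.0 * np.pi) - np.pi
    return delta
```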
S2511, determining an image selection parameter according to the pose change amount.
Wherein the image selection parameter comprises at least one of a scaling factor and an offset relative to the first target image.
In an embodiment of the application, at least one of a scaling factor and the offset, relative to the first target image, of the image the wearer sees is determined from the pose change amount. The scaling factor is a positive number. A scaling factor greater than 0 and less than 1 means the image to be displayed is reduced; a scaling factor greater than 1 means it is enlarged; a scaling factor equal to 1 means no scaling is applied. The offset can be understood as the sliding amount of a window the size of the first target image; it is a vector with both direction and magnitude.
In one example, following the principle that nearer objects appear larger and farther objects smaller, if the pose change amount indicates that the wearer has moved toward an object in the first target image, the scaling factor is determined to be less than 1. Further, the specific value of the scaling factor may be determined from how close the wearer has moved.
If the pose change amount indicates that the wearer has moved away from an object in the first target image, the offset is determined from the displacement relative to that object.
It should be noted that when the image selection parameters include both a scaling factor and an offset relative to the first target image, the image to be displayed is first scaled by the scaling factor, and the second target image is then selected from the scaled image according to the offset. In this scenario, even though the image to be displayed is scaled, the window corresponding to the offset does not scale with it; that is, the size of the window does not change.
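The selection step can be sketched as follows: scale the image to be displayed, then slide a fixed-size window from the first target image's centre. This is illustrative only; the nearest-neighbour resampling, the (dy, dx) offset convention, and the choice to scale the window centre along with the image (the patent does not specify this) are all assumptions.

```python
import numpy as np

def select_second_target(to_display, first_center, scale, offset, win_h, win_w):
    """Scale the (2-D) image to be displayed, then take a window of the
    first target image's size, slid from the (scaled) first-target centre
    by offset = (dy, dx).  The window itself is NOT scaled."""
    h, w = to_display.shape
    sh = max(win_h, int(round(h * scale)))
    sw = max(win_w, int(round(w * scale)))
    rows = np.clip((np.arange(sh) / scale).astype(int), 0, h - 1)
    cols = np.clip((np.arange(sw) / scale).astype(int), 0, w - 1)
    scaled = to_display[rows][:, cols]           # nearest-neighbour resize
    cy = int(round(first_center[0] * scale)) + offset[0]
    cx = int(round(first_center[1] * scale)) + offset[1]
    top = int(np.clip(cy - win_h // 2, 0, sh - win_h))
    left = int(np.clip(cx - win_w // 2, 0, sw - win_w))
    return scaled[top:top + win_h, left:left + win_w]
```

Note that the returned window always has the size (win_h, win_w) of the first target image, regardless of the scaling factor, matching the remark above that the window does not scale with the image.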
In one embodiment of the present application, the above S2511 may be implemented by the following S2511-1:
S2511-1, determining an image selection parameter according to the pose change amount and preset mapping data.
The preset mapping data are data reflecting the corresponding relation between the pose change amount and the image selection parameters.
In an embodiment of the present application, the generating process of the preset mapping data may be:
a designer of the head-mounted device performs multiple experiments, each proceeding as follows: view an image through the head-mounted device at time T1, record it as the first image, and record the pose data of the head-mounted device at time T1; view an image through the head-mounted device at time T2, record it as the second image, and record the pose data of the head-mounted device at time T2; calculate the difference between the pose data at time T1 and the pose data at time T2, together with the offset and scaling factor of the second image relative to the first image; take the difference, the corresponding offset and the scaling factor as one set of mapping data;
fitting the multiple sets of mapping data obtained from the experiments yields a functional relationship reflecting the correspondence between the pose change amount and the image selection parameters;
this functional relationship serves as the preset mapping data in S2511-1.
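The fit described above can be sketched with ordinary least squares. Everything here is illustrative: the 6-DoF pose-change vectors, the synthetic "experimental" data, and the affine (linear-plus-bias) model standing in for the patent's unspecified functional relationship are all assumptions.

```python
import numpy as np

# Synthetic "experimental" mapping data: n trials of a 6-DoF pose change
# and the measured selection parameters (offset_x, offset_y, scale).
rng = np.random.default_rng(0)
deltas = rng.normal(size=(50, 6))
true_w = rng.normal(size=(6, 3))
params = deltas @ true_w + np.array([0.0, 0.0, 1.0])   # bias: scale ~ 1

# Fit an affine model params ~ [delta, 1] @ coef by least squares.
A = np.hstack([deltas, np.ones((len(deltas), 1))])
coef, *_ = np.linalg.lstsq(A, params, rcond=None)

def selection_params(pose_delta):
    """Map a pose-change vector to (offset_x, offset_y, scale)."""
    return np.append(pose_delta, 1.0) @ coef
```

A lookup table over discretized pose changes would serve equally well as the "preset mapping data"; the fitted function is just one convenient form of it.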
S2512, selecting a second target image from the images to be displayed according to the image selection parameters and the first target image.
In one example, take the case where the image selection parameters include a scaling factor of 0.8 and an offset of 300 pixels to the left and 400 pixels upward relative to the first target image. The second target image is then selected as follows: reduce the image to be displayed to 0.8 times its original size; then, in the reduced image, starting from the center of the first target image, slide a window of the same size as the first target image in the original (unscaled) image to be displayed by 300 pixels to the left and 400 pixels upward (a total displacement of 500 pixels), and take the image inside the window as the second target image.
S2600, displaying the second target image.
Based on S2600, the intermediate frame image can be displayed.
After the execution of S2600, the execution of S2100 to S2600 is repeated, so that the intermediate frame technology can be implemented instead of the conventional ATW technology.
In an embodiment of the present application, there is provided an image display method comprising: acquiring original image data and first pose data of the head-mounted device, wherein the first pose data are the pose data of the head-mounted device at the moment the original image data are acquired; generating an image to be displayed from the original image data; selecting a first target image from the image to be displayed, wherein the first target image is an image within the visible range whose size equals the screen size of the head-mounted device, and is smaller than the image to be displayed; displaying the first target image and acquiring second pose data, wherein the second pose data are the pose data of the head-mounted device at the moment the first target image is displayed; selecting a second target image from the image to be displayed according to the first pose data, the second pose data and the first target image; and displaying the second target image. With this method, after the first target image is displayed, an intermediate frame image can be displayed. An intermediate-frame technique is thus realized, replacing the conventional ATW technique. That is, embodiments of the present application provide a new intermediate-frame generation technique that can replace the ATW function.
In one embodiment of the present application, the second target image is an RGB image, and on the basis of this, the specific implementation of S2600 may be S2610-S2613 as follows:
S2610, extracting a red pixel image, a green pixel image and a blue pixel image from the second target image.
In the embodiment of the present application, before the second target image undergoes dispersion correction, there may be a dispersion problem, as shown in fig. 4, caused by the dispersion of the lens of the head-mounted device: the R, G and B sub-pixels of each pixel of the second target image are misaligned. This greatly affects the wearer's viewing experience. To avoid this problem, after the second target image is obtained, it may be dispersion-corrected, and the dispersion-corrected second target image displayed.
When performing dispersion correction, S2610 is executed first. Specifically, the image composed of the red sub-pixels of each pixel in the second target image is recorded as the red pixel image; the image composed of the green sub-pixels is recorded as the green pixel image; and the image composed of the blue sub-pixels is recorded as the blue pixel image.
S2611, according to the dispersion parameter of the lens of the head-mounted device, performs dispersion correction processing on the red pixel image, the green pixel image, and the blue pixel image, respectively.
In the embodiment of the application, in the case that the lens of the head-mounted device is fixed, the dispersion parameter of the lens of the head-mounted device is fixed. On this basis, the dispersion parameters of the lens of the head-mounted device are first acquired before S2611 described above is performed. Further, according to the optical model of the lens, dispersion correction processing is performed on the red pixel image, the green pixel image, and the blue pixel image, respectively, by the dispersion parameters of the lens of the head-mounted device.
S2612, fusing the red pixel image, the green pixel image and the blue pixel image after the dispersion correction processing to obtain a second target image after the dispersion correction.
In the embodiment of the application, after the red pixel image, the green pixel image and the blue pixel image which are subjected to dispersion correction processing are obtained, the three are fused. R pixels, G pixels and B pixels are overlapped at the same pixel position in the fused image.
And S2613, displaying the second target image after dispersion correction.
In the embodiment of the application, the second target image after the chromatic dispersion correction is displayed, so that the problem of influencing the watching experience of a wearer caused by the existence of chromatic dispersion of the lens of the head-mounted equipment can be avoided.
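Steps S2610-S2613 can be illustrated with the following sketch. It assumes, purely for illustration, that the lens dispersion is modeled as a single radial scale factor per color channel (real lens optical models are considerably more complex); `correct_channel`, `dispersion_correct`, and the scale values are hypothetical names and numbers, not the patent's dispersion parameters.

```python
import numpy as np

def correct_channel(channel, scale, center):
    """Radially rescale one color plane about the optical center
    (nearest-neighbor inverse mapping)."""
    h, w = channel.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    # Inverse mapping: sample the source at positions scaled about the center
    src_x = np.clip(np.round(center[0] + (xs - center[0]) / scale), 0, w - 1).astype(int)
    src_y = np.clip(np.round(center[1] + (ys - center[1]) / scale), 0, h - 1).astype(int)
    return channel[src_y, src_x]

def dispersion_correct(rgb, scales):
    """S2610-S2612: split the image into R/G/B planes, correct each plane
    with its own scale, then fuse the planes back into one image."""
    h, w = rgb.shape[:2]
    center = (w / 2.0, h / 2.0)
    planes = [correct_channel(rgb[..., i], scales[i], center) for i in range(3)]
    return np.stack(planes, axis=-1)

# Red focuses slightly wider than blue, so R is shrunk and B is stretched a little
img = np.full((64, 64, 3), 128, dtype=np.uint8)
corrected = dispersion_correct(img, scales=(1.01, 1.0, 0.99))
```

The per-channel warp is what realigns the R, G, and B sub-pixels; fusing the corrected planes then yields the dispersion-corrected second target image of S2612.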
In one embodiment of the application, the image display method provided by the embodiment of the application is executed by a display processor (display processing unit, DPU) of the head-mounted device. Compared with executing the method on the CPU (or GPU), this neither occupies CPU (or GPU) resources nor incurs CPU (or GPU) power consumption.
On the basis of the above, the image display method provided by the embodiment of the present application, as shown in fig. 4, includes the following steps:
S4001, acquire original image data and first pose data of the head-mounted device, where the first pose data is the pose data of the head-mounted device at the acquisition time of the original image data;
S4002, generate an image to be displayed according to the original image data;
S4003, select a first target image from the image to be displayed;
S4004, display the first target image and acquire second pose data, where the second pose data is the pose data of the head-mounted device at the display time of the first target image;
S4005, determine the pose change amount of the head-mounted device at the display time compared with the acquisition time according to the first pose data and the second pose data;
S4006, determine an image selection parameter according to the pose change amount, where the image selection parameter includes at least one of a scaling factor and an offset relative to the first target image;
S4007, select a second target image from the image to be displayed according to the image selection parameter and the first target image;
S4008, extract a red pixel image, a green pixel image, and a blue pixel image from the second target image;
S4009, perform dispersion correction processing on the red pixel image, the green pixel image, and the blue pixel image, respectively, according to the dispersion parameters of the lens of the head-mounted device;
S4010, fuse the dispersion-corrected red pixel image, green pixel image, and blue pixel image to obtain the dispersion-corrected second target image;
S4011, display the dispersion-corrected second target image.
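Steps S4005-S4007 (pose change to image selection parameters) can be sketched as below. The linear mapping from head rotation to pixel offset stands in for the patent's preset mapping data and is an assumption; `selection_params`, the `(yaw, pitch, z)` pose layout, and `px_per_rad` are hypothetical names.

```python
def selection_params(pose1, pose2, px_per_rad, max_zoom_delta=0.05):
    """Map the pose change between acquisition time (pose1) and display time
    (pose2) to a crop offset and scaling factor.
    pose1/pose2: (yaw, pitch, z) tuples in radians/meters.
    px_per_rad: preset mapping from head rotation to on-screen pixel shift."""
    dyaw = pose2[0] - pose1[0]
    dpitch = pose2[1] - pose1[1]
    dz = pose2[2] - pose1[2]
    # Yaw shifts the crop horizontally, pitch vertically (screen y grows downward)
    offset = (round(dyaw * px_per_rad), round(-dpitch * px_per_rad))
    # Moving toward the scene slightly enlarges it; clamp the scale change
    scale = 1.0 + max(-max_zoom_delta, min(max_zoom_delta, dz * 0.1))
    return offset, scale

offset, scale = selection_params((0.0, 0.0, 0.0), (0.02, -0.01, 0.0), px_per_rad=1000)
# offset is (20, 10); scale stays 1.0 because the head did not translate along z
```

The resulting offset and scaling factor are then applied relative to the first target image's crop window to select the second target image (S4007), as in claim 2.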
The embodiment of the present application further provides an image display apparatus 500, as shown in fig. 5, the apparatus 500 includes:
a first obtaining module 510, configured to obtain original image data and first pose data of a head-mounted device, where the first pose data is the pose data of the head-mounted device at the acquisition time of the original image data;
a generating module 520, configured to generate an image to be displayed according to the original image data;
a first selecting module 530, configured to select a first target image from the image to be displayed, where the first target image is an image whose size within the visible range matches the screen size of the head-mounted device, and the size of the first target image is smaller than the size of the image to be displayed;
a first display module 540, configured to display the first target image;
a second obtaining module 550, configured to obtain second pose data, where the second pose data is pose data of the head-mounted device at a display time of the first target image;
a second selecting module 560, configured to select a second target image from the images to be displayed according to the first pose data, the second pose data, and the first target image;
a second display module 570 for displaying a second target image.
The embodiment of the application also provides an image display apparatus, which includes: a first acquisition module, configured to acquire original image data and first pose data of the head-mounted device, where the first pose data is the pose data of the head-mounted device at the acquisition time of the original image data; a generation module, configured to generate an image to be displayed according to the original image data; a first selecting module, configured to select a first target image from the image to be displayed, where the first target image is an image whose size within the visible range matches the screen size of the head-mounted device, and the size of the first target image is smaller than the size of the image to be displayed; a first display module, configured to display the first target image; a second acquisition module, configured to acquire second pose data, where the second pose data is the pose data of the head-mounted device at the display time of the first target image; a second selecting module, configured to select a second target image from the image to be displayed according to the first pose data, the second pose data, and the first target image; and a second display module, configured to display the second target image. With this apparatus, an intermediate frame image can be displayed after the first target image is displayed. An intermediate frame technique is thereby realized that replaces conventional ATW. That is, embodiments of the present application provide a new intermediate frame generation technique that can replace the ATW function.
In one embodiment of the present application, the second selecting module 560 is specifically configured to determine, according to the first pose data and the second pose data, a pose change amount of the head-mounted device at the display time compared to the acquisition time;
determining an image selection parameter according to the pose change amount, wherein the image selection parameter comprises at least one of a scaling factor and an offset relative to the first target image;
and selecting a second target image from the images to be displayed according to the image selection parameters and the first target image.
In one embodiment of the present application, the second display module 570 is specifically configured to extract a red pixel image, a green pixel image, and a blue pixel image from the second target image;
performing dispersion correction processing on the red pixel image, the green pixel image, and the blue pixel image, respectively, according to dispersion parameters of a lens of the head-mounted device;
fusing the red pixel image, the green pixel image and the blue pixel image after the dispersion correction processing to obtain a second target image after the dispersion correction;
and displaying the second target image after the dispersion correction.
In one embodiment of the present application, the first selecting module 530 is specifically configured to acquire a human eye movement heat map of a wearer wearing the head-mounted device;
and determine a first target image according to the human eye movement heat map and the image to be displayed.
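This heat-map-based selection can be sketched as follows, assuming for illustration that the gaze hotspot is simply the maximum of the heat map and that the crop is clamped to the image bounds; `first_target_from_heatmap` is a hypothetical name, not a function named in the patent.

```python
import numpy as np

def first_target_from_heatmap(image, heatmap, screen_w, screen_h):
    """Center the screen-sized first target image on the hottest gaze point,
    clamped so the crop stays inside the image to be displayed."""
    h, w = image.shape[:2]
    hy, hx = np.unravel_index(np.argmax(heatmap), heatmap.shape)
    x0 = int(np.clip(hx - screen_w // 2, 0, w - screen_w))
    y0 = int(np.clip(hy - screen_h // 2, 0, h - screen_h))
    return image[y0:y0 + screen_h, x0:x0 + screen_w]

img = np.zeros((200, 200, 3), dtype=np.uint8)
heat = np.zeros((200, 200))
heat[150, 160] = 1.0   # wearer mostly looks toward the lower right
crop = first_target_from_heatmap(img, heat, 100, 100)
```

Anchoring the first crop on the gaze hotspot leaves margin around the region the wearer actually watches, so the later pose-driven re-crop is less likely to run off the edge of the rendered image.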
In one embodiment of the present application, the first selecting module 530 is specifically configured to acquire the size of the screen of the head-mounted device;
and select a first target image from the image to be displayed according to the size.
In one embodiment of the application, the apparatus is deployed in a display processor.
The embodiment of the application also provides a head-mounted device 600, and the head-mounted device 600 includes the image display apparatus 500 provided by any one of the above apparatus embodiments; or,
as shown in fig. 6, the headset 600 includes a memory 610 and a processor 620, the memory 610 is configured to store computer instructions, and the processor 620 is configured to call the computer instructions from the memory 610 to perform the image display method according to any of the above method embodiments.
Embodiments of the present application also provide a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements an image display method according to any of the above-described method embodiments.
The present application may be a system, method, and/or computer program product. The computer program product may include a computer readable storage medium having computer readable program instructions embodied thereon for causing a processor to implement aspects of the present application.
The computer readable storage medium may be a tangible device that can hold and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium include: a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), static random access memory (SRAM), portable compact disc read-only memory (CD-ROM), digital versatile disc (DVD), memory stick, floppy disk, mechanical encoding device such as a punch card or a raised in-groove structure having instructions stored thereon, and any suitable combination of the foregoing. A computer readable storage medium, as used herein, is not to be construed as a transitory signal per se, such as a radio wave or other freely propagating electromagnetic wave, an electromagnetic wave propagating through a waveguide or other transmission medium (e.g., a light pulse through a fiber optic cable), or an electrical signal transmitted through a wire.
The computer readable program instructions described herein may be downloaded from a computer readable storage medium to a respective computing/processing device or to an external computer or external storage device over a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmissions, wireless transmissions, routers, firewalls, switches, gateway computers and/or edge servers. The network interface card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium in the respective computing/processing device.
Computer program instructions for carrying out operations of the present application may be assembly instructions, instruction set architecture (ISA) instructions, machine instructions, machine-related instructions, microcode, firmware instructions, state-setting data, or source or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk and C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. Where a remote computer is involved, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, aspects of the present application are implemented by personalizing electronic circuitry, such as programmable logic circuitry, field-programmable gate arrays (FPGAs), or programmable logic arrays (PLAs), with state information of the computer readable program instructions, the electronic circuitry executing the computer readable program instructions.
Various aspects of the present application are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable medium having the instructions stored therein includes an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions. It is well known to those skilled in the art that implementation by hardware, implementation by software, and implementation by a combination of software and hardware are all equivalent.
The foregoing description of embodiments of the application has been presented for purposes of illustration and description, and is not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the various embodiments described. The terminology used herein was chosen in order to best explain the principles of the embodiments, the practical application, or the technical improvement of the technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein. The scope of the application is defined by the appended claims.

Claims (10)

1. An image display method, the method comprising:
acquiring original image data and first pose data of a head-mounted device, wherein the first pose data is the pose data of the head-mounted device at the acquisition time of the original image data;
generating an image to be displayed according to the original image data;
selecting a first target image from the image to be displayed, wherein the first target image is an image whose size within the visible range matches the screen size of the head-mounted device, and the size of the first target image is smaller than the size of the image to be displayed;
displaying the first target image and acquiring second pose data, wherein the second pose data is the pose data of the head-mounted device at the display time of the first target image;
selecting a second target image from the image to be displayed according to the first pose data, the second pose data, and the first target image;
and displaying the second target image.
2. The method of claim 1, wherein selecting a second target image from the images to be displayed based on the first pose data, the second pose data, and the first target image comprises:
determining, according to the first pose data and the second pose data, the pose change amount of the head-mounted device at the display time compared with the acquisition time;
determining an image selection parameter according to the pose change amount, wherein the image selection parameter comprises at least one of a scaling factor and an offset relative to the first target image;
and selecting a second target image from the images to be displayed according to the image selection parameters and the first target image.
3. The method of claim 1, wherein the second target image is an RGB image, and wherein displaying the second target image comprises:
extracting a red pixel image, a green pixel image, and a blue pixel image from the second target image;
performing dispersion correction processing on the red pixel image, the green pixel image, and the blue pixel image, respectively, according to dispersion parameters of a lens of the head-mounted device;
fusing the red pixel image, the green pixel image and the blue pixel image after the dispersion correction processing to obtain a second target image after the dispersion correction;
and displaying the second target image after the dispersion correction.
4. The method of claim 1, wherein selecting a first target image from the images to be displayed comprises:
acquiring a human eye movement heat map of a wearer wearing the head-mounted device;
and determining a first target image according to the human eye movement heat map and the image to be displayed.
5. The method according to claim 2, wherein determining an image selection parameter according to the pose change amount includes:
determining an image selection parameter according to the pose change amount and preset mapping data;
the preset mapping data are data reflecting the corresponding relation between the pose change amount and the image selection parameters.
6. The method of claim 1, wherein selecting a first target image from the images to be displayed comprises:
acquiring the size of a screen of the head-mounted device;
and selecting a first target image from the images to be displayed according to the size.
7. The method of any of claims 1-6, wherein the method is performed by a display processor.
8. An image display device, the device comprising:
the first acquisition module is used for acquiring original image data and first pose data of the head-mounted device, wherein the first pose data is the pose data of the head-mounted device at the acquisition time of the original image data;
the generation module is used for generating an image to be displayed according to the original image data;
the first selecting module is used for selecting a first target image from the image to be displayed, wherein the first target image is an image whose size within the visible range matches the screen size of the head-mounted device, and the size of the first target image is smaller than the size of the image to be displayed;
the first display module is used for displaying the first target image;
the second acquisition module is used for acquiring second pose data, wherein the second pose data is the pose data of the head-mounted device at the display time of the first target image;
the second selecting module is used for selecting a second target image from the image to be displayed according to the first pose data, the second pose data, and the first target image;
and the second display module is used for displaying the second target image.
9. A head-mounted device comprising the image display apparatus according to claim 8; or,
the head-mounted device comprising a memory for storing computer instructions and a processor for invoking the computer instructions from the memory to perform the image display method of any of claims 1-7.
10. A computer-readable storage medium, characterized in that a computer program is stored thereon, which, when being executed by a processor, implements the image display method according to any one of claims 1-7.
CN202311035122.XA 2023-08-16 2023-08-16 Image display method, device, head-mounted equipment and medium Pending CN117197225A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311035122.XA CN117197225A (en) 2023-08-16 2023-08-16 Image display method, device, head-mounted equipment and medium

Publications (1)

Publication Number Publication Date
CN117197225A true CN117197225A (en) 2023-12-08

Family

ID=88984110

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311035122.XA Pending CN117197225A (en) 2023-08-16 2023-08-16 Image display method, device, head-mounted equipment and medium

Country Status (1)

Country Link
CN (1) CN117197225A (en)

Similar Documents

Publication Publication Date Title
US11823360B2 (en) Virtual, augmented, and mixed reality systems and methods
JP6023801B2 (en) Simulation device
US20180158246A1 (en) Method and system of providing user facial displays in virtual or augmented reality for face occluding head mounted displays
US9911214B2 (en) Display control method and display control apparatus
WO2017095655A1 (en) Multi-optical surface optical design
US20220215688A1 (en) Systems and methods for image adjustment based on pupil size
US20100091031A1 (en) Image processing apparatus and method, head mounted display, program, and recording medium
US9965898B2 (en) Overlay display
CN107610044A (en) Image processing method, computer-readable recording medium and virtual reality helmet
WO2018214431A1 (en) Method and apparatus for presenting scene using virtual reality device, and virtual reality apparatus
WO2017023471A1 (en) Depth image enhancement for hardware generated depth images
US10572764B1 (en) Adaptive stereo rendering to reduce motion sickness
CN103929605B (en) Control method is presented in image and control device is presented in image
US20180054568A1 (en) Display control method and program for executing the display control method on computer
US20160252730A1 (en) Image generating system, image generating method, and information storage medium
US20140267617A1 (en) Adaptive depth sensing
US20220189433A1 (en) Application programming interface for setting the prominence of user interface elements
EP3407167B1 (en) Head-mounted display system, method for adaptively adjusting hidden area mask, and computer program product
US11010865B2 (en) Imaging method, imaging apparatus, and virtual reality device involves distortion
US20220004250A1 (en) Information processing apparatus, information processing method, and program
CN117197225A (en) Image display method, device, head-mounted equipment and medium
US11521297B2 (en) Method and device for presenting AR information based on video communication technology
CN111736692B (en) Display method, display device, storage medium and head-mounted device
US20220382055A1 (en) Head-mounted display generated status message
US20230215108A1 (en) System and method for adaptive volume-based scene reconstruction for xr platform applications

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination