CN118276671A - Image display method, device, equipment and medium - Google Patents

Image display method, device, equipment and medium

Info

Publication number
CN118276671A
CN118276671A
Authority
CN
China
Prior art keywords
scene
area
virtual
resolution
image
Prior art date
Legal status
Pending
Application number
CN202211740322.0A
Other languages
Chinese (zh)
Inventor
胡修祥
付延生
Current Assignee
Beijing Zitiao Network Technology Co Ltd
Original Assignee
Beijing Zitiao Network Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Zitiao Network Technology Co Ltd filed Critical Beijing Zitiao Network Technology Co Ltd
Priority to CN202211740322.0A priority Critical patent/CN118276671A/en
Publication of CN118276671A publication Critical patent/CN118276671A/en
Pending legal-status Critical Current

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The present disclosure provides an image display method, apparatus, device, and storage medium, the method including: determining a first scene area and a second scene area in a virtual scene to be displayed, wherein the attention of a user to the first scene area is greater than the attention of the user to the second scene area; acquiring a first scene image generated by rendering a first scene area with a first resolution, and acquiring a second scene image generated by rendering a second scene area with a second resolution, wherein the first resolution is greater than the second resolution, and the difference between the first resolution and the second resolution is greater than a preset threshold; a target scene image is generated and displayed based on the first scene image and the second scene image. Therefore, the high-resolution image is displayed in the first scene area with higher attention, the low-resolution image is displayed in the second scene area with lower attention, and the immersive experience effect of the user can be improved while the resource consumption is reduced.

Description

Image display method, device, equipment and medium
Technical Field
The disclosure relates to the technical field of virtual display, and in particular relates to an image display method, device, equipment and medium.
Background
Virtual Reality (VR) technology can provide users with a variety of virtual scenes, so that by wearing a VR device a user can experience the virtual world and interact with it.
Current VR devices render the entire virtual scene at the same resolution and then display the rendered virtual scene image. In practice, however, the user's attention to different areas of the virtual scene is not equal. If rendering is performed at a uniform high resolution, the user can clearly view the rendered virtual scene image, but the VR device incurs substantial resource consumption; if rendering is performed at a uniform low resolution, the user cannot clearly view the rendered image, which degrades the viewing experience.
Disclosure of Invention
In order to solve the above technical problems or at least partially solve the above technical problems, the present disclosure provides an image display method, apparatus, device, and medium.
In a first aspect, the present disclosure provides an image display method, the method comprising:
Determining a first scene area and a second scene area in a virtual scene to be displayed, wherein the attention of a user to the first scene area is greater than the attention of the user to the second scene area;
Acquiring a first scene image generated by rendering the first scene area at a first resolution, and acquiring a second scene image generated by rendering the second scene area at a second resolution, wherein the first resolution is greater than the second resolution, and the difference between the first resolution and the second resolution is greater than a preset threshold;
a target scene image is generated and displayed based on the first scene image and the second scene image.
In a second aspect, the present disclosure provides an image display apparatus, the apparatus comprising:
The device comprises a determining module, an acquisition module and a display module, wherein the determining module is used for determining a first scene area and a second scene area in a virtual scene to be displayed, and the attention of a user to the first scene area is greater than that to the second scene area;
an acquisition module for acquiring a first scene image generated by rendering the first scene area with a first resolution and acquiring a second scene image generated by rendering the second scene area with a second resolution, wherein the first resolution is greater than the second resolution and a difference between the first resolution and the second resolution is greater than a preset threshold;
And the display module is used for generating and displaying a target scene image based on the first scene image and the second scene image.
In a third aspect, the present disclosure provides a computer readable storage medium having instructions stored therein, which when run on a terminal device, cause the terminal device to implement the above-described method.
In a fourth aspect, the present disclosure provides a device comprising: a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the above-described method when executing the computer program.
In a fifth aspect, the present disclosure provides a computer program product comprising computer programs/instructions which when executed by a processor implement the above-described method.
Compared with the prior art, the technical scheme provided by the embodiment of the disclosure has at least the following advantages:
The embodiment of the disclosure provides an image display method, device, equipment and medium, wherein the method comprises the following steps: determining a first scene area and a second scene area in a virtual scene to be displayed, wherein the attention of a user to the first scene area is greater than the attention of the user to the second scene area; acquiring a first scene image generated by rendering a first scene area with a first resolution, and acquiring a second scene image generated by rendering a second scene area with a second resolution, wherein the first resolution is greater than the second resolution, and the difference between the first resolution and the second resolution is greater than a preset threshold; a target scene image is generated and displayed based on the first scene image and the second scene image. Therefore, the first scene image with high resolution is displayed in the first scene area with higher attention, the second scene image with low resolution is displayed in the second scene area with lower attention, and the user can clearly watch the scene image of interest to improve the immersive experience effect while avoiding the VR equipment from generating larger resource consumption.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure.
In order to more clearly illustrate the embodiments of the present disclosure or the solutions in the prior art, the drawings that are required for the description of the embodiments or the prior art will be briefly described below, and it will be obvious to those skilled in the art that other drawings can be obtained from these drawings without inventive effort.
Fig. 1 is a schematic flow chart of an image display method according to an embodiment of the disclosure;
Fig. 2 is a schematic structural diagram of an image display device according to an embodiment of the disclosure;
fig. 3 is a schematic structural diagram of an image display device according to an embodiment of the present disclosure.
Detailed Description
In order that the above objects, features and advantages of the present disclosure may be more clearly understood, a further description of aspects of the present disclosure will be provided below. It should be noted that, without conflict, the embodiments of the present disclosure and features in the embodiments may be combined with each other.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present disclosure, but the present disclosure may be practiced otherwise than as described herein; it will be apparent that the embodiments in the specification are only some, but not all, embodiments of the disclosure.
Fig. 1 shows a flowchart of an image display method according to an embodiment of the present disclosure. As shown in fig. 1, the image display method includes the following steps.
S110, determining a first scene area and a second scene area in the virtual scene to be displayed, wherein the attention of a user to the first scene area is larger than the attention of the user to the second scene area.
In this embodiment, when a user wears a VR device to experience a virtual scene, because the user pays different degrees of attention to different areas of the virtual scene, the scene areas corresponding to these different attention degrees need to be determined from the virtual scene before the virtual scene is displayed or while it is being displayed.
In one case, the first scene area and the second scene area may be determined at the stage of capturing the virtual scene image.
In another case, the first scene area and the second scene area are determined after the virtual scene image is captured and during a rendering phase of the virtual scene image.
The virtual scene to be displayed refers to the virtual scene that the user experiences by wearing the VR device. Optionally, the virtual scene may be of a game type, an animation type, an outer-space type, an undersea-world type, or the like.
The first scene area is an area with higher user attention, and high resolution rendering is needed for the first scene area, so that a user can clearly see the virtual scene displayed in the first scene area.
The second scene area is an area to which the user pays little attention, so it can be rendered at low resolution; because the user does not need to focus on this area, excessive resource consumption by the VR device is avoided.
In some embodiments, the first scene region is a center region of the virtual scene and the second scene region is an edge region of the virtual scene.
In other embodiments, the first scene region is a foreground region of the virtual scene and the second scene region is a background region of the virtual scene.
In still other embodiments, the first scene region is a region in which moving objects in the virtual scene are located, and the second scene region is a region in which stationary objects in the virtual scene are located.
In still other embodiments, the first scene region is a region within the field of view of the user's eyes or within the user's gaze range, and the second scene region is a region outside the field of view of the user's eyes or outside the user's gaze range.
It should be noted that, when determining the first scene area and the second scene area, virtual objects at boundary positions are taken into account during region division so that the same virtual object is not split into two different scene areas. This avoids rendering the same virtual object at two different resolutions and ensures the reliability of scene rendering.
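The following is a minimal sketch of this boundary handling, assuming rectangular scene areas and per-object bounding boxes (neither of which the patent prescribes); the helper names are illustrative only.

```python
from typing import List, Tuple

Box = Tuple[int, int, int, int]   # (x_min, y_min, x_max, y_max) in screen coordinates

def overlaps(a: Box, b: Box) -> bool:
    """True if the two axis-aligned boxes intersect."""
    return not (a[2] <= b[0] or b[2] <= a[0] or a[3] <= b[1] or b[3] <= a[1])

def absorb_boundary_objects(first_area: Box, objects: List[Box]) -> Box:
    """Grow the first scene area so that no virtual object is split across its boundary."""
    x0, y0, x1, y1 = first_area
    for ox0, oy0, ox1, oy1 in objects:
        if overlaps(first_area, (ox0, oy0, ox1, oy1)):
            # The object touches the first area: pull it fully inside.
            x0, y0 = min(x0, ox0), min(y0, oy0)
            x1, y1 = max(x1, ox1), max(y1, oy1)
    return (x0, y0, x1, y1)

print(absorb_boundary_objects((400, 200, 1200, 900), [(1100, 850, 1400, 1000)]))
# -> (400, 200, 1400, 1000): the straddling object now lies entirely in the first area
```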
S120, acquiring a first scene image generated by rendering a first scene area with a first resolution, and acquiring a second scene image generated by rendering a second scene area with a second resolution, wherein the first resolution is greater than the second resolution, and the difference between the first resolution and the second resolution is greater than a preset threshold.
In this embodiment, before the VR device displays the virtual scene, two situations are possible. If the regions are distinguished at the stage of shooting the virtual scene, virtual cameras with different resolutions may be used to shoot the different scene areas, so that images with different resolutions are obtained for the different scene areas. If different resolutions are applied only after the virtual scene has been shot, the image corresponding to the shot virtual scene has an initial resolution, and the different scene areas are then rendered at different resolutions to obtain images with different resolutions for the different scene areas.
It can be appreciated that, because the difference between the first resolution and the second resolution is greater than the preset threshold, the resolutions of the first scene image and the second scene image are clearly distinguishable, which prevents the second scene image from interfering with the user's attention to the first scene image.
The preset threshold may be a resolution determined empirically in advance, and the preset threshold may enable the user to clearly see the first scene image while controlling the resource consumption to be sufficiently small.
Alternatively, the first resolution may be 4K or 8K, and the second resolution may be 1K or 2K.
Therefore, rendering the two different scene areas at two different resolutions yields scene images with two different resolutions. This avoids the resource consumption caused by rendering the whole scene area at a uniform high resolution, while also avoiding the problem that the user cannot see clearly when the whole scene area is rendered at a uniform low resolution.
S130, generating and displaying a target scene image based on the first scene image and the second scene image.
In this embodiment, the two images may be stitched based on the positions of the pixels in the first scene image and the second scene image, the first scene area, and the second scene area, and the stitched image may be used as the target scene image, and then the target scene image may be displayed on the VR device.
Wherein the target scene image refers to an image to be displayed on a display screen of the VR device.
In this embodiment, optionally, S130 specifically includes: matching the position coordinates of each pixel point in the first scene image with the first scene area to obtain a first target pixel point set successfully matched; matching the position coordinates of each pixel point in the second scene image with the second scene area to obtain a second target pixel point set successfully matched; and synthesizing the first target pixel point set and the second target pixel point set to generate a target scene image for display.
Specifically, by matching each scene image with its corresponding scene area, the position of the pixel point in each scene image in its corresponding scene area can be determined, and the target pixel point set to be spliced is determined, and further based on the position of the pixel point in each scene image in its corresponding scene area, the target pixel point sets corresponding to the two scene images respectively are spliced, so as to obtain a target scene image and display the target scene image.
In this embodiment, the target scene image may be displayed on two display devices, namely the display screens for the left and right eyes of the VR device, so that the user sees a target scene image containing two different resolutions with both eyes. This ensures that the user can clearly view the scene images of interest while avoiding excessive resource consumption by the VR device.
It should be noted that, whether the two scene images with different resolutions are obtained at the stage of shooting the virtual scene or are rendered after the virtual scene is shot, the first scene image and the second scene image are generated at the same time, that is, they belong to the same image frame, and they are then synthesized into a target scene image corresponding to the whole virtual scene. The user therefore views a single image frame (the target scene image) for an immersive experience in the virtual scene, which guarantees the immersive effect.
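To make the pixel-matching and synthesis step concrete, the sketch below composites the target scene image with NumPy. It is an illustrative assumption, not the patent's implementation: it presumes both scene images have already been resampled to the display grid and that the first scene area is given as a boolean mask.

```python
import numpy as np

def compose_target_image(first_img, second_img, first_region_mask):
    """Synthesize the target scene image from the two renderings.

    first_img:         H x W x 3 array, the first scene area rendered at the first
                       (high) resolution, resampled to the display grid.
    second_img:        H x W x 3 array, the second scene area rendered at the second
                       (low) resolution, likewise resampled to the display grid.
    first_region_mask: H x W boolean array, True where a pixel's position
                       coordinates fall inside the first scene area.
    """
    # Pixels whose coordinates match the first scene area form the first target
    # pixel set; all remaining pixels form the second target pixel set.
    return np.where(first_region_mask[..., None], first_img, second_img)

# Toy usage: a 4x6 display whose left half is the high-attention area.
h, w = 4, 6
first_img = np.full((h, w, 3), 255, dtype=np.uint8)   # stands in for high-res content
second_img = np.full((h, w, 3), 40, dtype=np.uint8)   # stands in for low-res content
mask = np.zeros((h, w), dtype=bool)
mask[:, : w // 2] = True
print(compose_target_image(first_img, second_img, mask).shape)  # (4, 6, 3)
```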
The embodiment of the disclosure provides an image display method, which comprises the following steps: determining a first scene area and a second scene area in a virtual scene to be displayed, wherein the attention of a user to the first scene area is greater than the attention of the user to the second scene area; acquiring a first scene image generated by rendering a first scene area with a first resolution, and acquiring a second scene image generated by rendering a second scene area with a second resolution, wherein the first resolution is greater than the second resolution, and the difference between the first resolution and the second resolution is greater than a preset threshold; a target scene image is generated and displayed based on the first scene image and the second scene image. Therefore, the first scene image with high resolution is displayed in the first scene area with higher attention, the second scene image with low resolution is displayed in the second scene area with lower attention, and the user can clearly watch the scene image of interest to improve the immersive experience effect while avoiding the VR equipment from generating larger resource consumption.
In another embodiment of the present disclosure, the first scene area and the second scene area may be determined in different manners in different virtual scenes, and further a scene image corresponding to each scene area is determined.
In some embodiments of the present disclosure, S110 may specifically include the following steps:
a first scene area is determined from the virtual scene according to the eye view range of the user, and an area other than the first scene area is taken as a second scene area.
Wherein the eye field of view is used to characterize the gaze range of the user. The eye field of view range may be determined from the angle of view of the user and the center point of the eyeball. Specifically, an area in the virtual scene within the eye view is taken as a first scene area, and the remaining area of the virtual scene is taken as a second scene area.
It will be appreciated that, when determining the eye field-of-view range from the user's viewing angle and the eyeball center point, the same virtual object may fall partly inside and partly outside that range. In this case, the eye field-of-view range can be fine-tuned based on the virtual object so that it falls entirely inside or entirely outside the range, avoiding the subsequent problem of rendering the same virtual object at two different resolutions.
Therefore, the first scene area inside the eye field-of-view range and the second scene area outside it are determined according to the user's actual eye field-of-view range. Compared with determining the scene areas from a uniform field-of-view range, this ensures the reliability of the two scene areas.
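As a hedged illustration of this step, the sketch below marks which display pixels fall inside the eye field-of-view range given an assumed gaze center point and view angle. The simple angular-distance mapping (a fixed number of degrees per pixel) and all parameter names are assumptions for illustration; the patent does not specify this geometry.

```python
import numpy as np

def eye_fov_mask(height, width, gaze_px, view_angle_deg, deg_per_pixel=0.1):
    """Boolean mask of display pixels inside the user's eye field of view.

    gaze_px:        (row, col) of the gaze / eyeball center projected onto the display.
    view_angle_deg: full field-of-view angle of the eye.
    deg_per_pixel:  assumed angular size of one pixel (depends on the headset optics).
    """
    rows, cols = np.mgrid[0:height, 0:width]
    # Angular distance of every pixel from the gaze center.
    angle = np.hypot(rows - gaze_px[0], cols - gaze_px[1]) * deg_per_pixel
    return angle <= view_angle_deg / 2.0

# First scene area = pixels inside the eye FOV; second scene area = everything else.
mask = eye_fov_mask(1080, 1200, gaze_px=(540, 600), view_angle_deg=40)
first_scene_area = mask
second_scene_area = ~mask
print(first_scene_area.sum(), second_scene_area.sum())
```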
Further, in some embodiments, S120 may specifically include the following steps:
And rendering the virtual scene corresponding to the first scene area by adopting the first resolution to obtain a first resolution scene image, and rendering the virtual scene corresponding to the second scene area by adopting the second resolution to obtain a second resolution scene image.
Specifically, a first virtual camera and a second virtual camera are arranged in the VR device: the first-resolution camera shoots the first scene area to obtain the first scene image rendered at the first resolution, and the second-resolution camera shoots the second scene area to obtain the second scene image rendered at the second resolution. The shooting center points of the first-resolution camera and the second-resolution camera are the same.
Therefore, scene rendering can be performed through two cameras with different resolutions, so that a first scene image and a second scene image are obtained, and the method is suitable for the situation based on double-camera rendering.
Further, in other embodiments, S120 may specifically include the following steps:
acquiring an initial scene image corresponding to the virtual scene, wherein the resolution of the initial scene image is a third resolution;
and adjusting the third resolution corresponding to the first scene area in the initial scene image to be the first resolution, and adjusting the third resolution corresponding to the second scene area in the initial scene image to be the second resolution, so as to obtain the first scene image and the second scene image. Wherein the third resolution is less than the first resolution and greater than the second resolution.
Specifically, the virtual scene is first shot by a third-resolution camera in the VR device to obtain the initial scene image. The first scene area of the initial scene image is then rendered at high resolution, that is, the third resolution corresponding to the first scene area is adjusted to the first resolution; at the same time, the second scene area of the initial scene image is rendered at low resolution, that is, the third resolution corresponding to the second scene area is adjusted to the second resolution.
Therefore, the first scene image and the second scene image can be obtained by performing high-low resolution rendering on different scene areas after performing scene rendering through one camera, and the method is suitable for the situation based on single-camera rendering.
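A minimal sketch of the single-camera case follows. It assumes, purely for illustration, that the resolution adjustment is a plain per-region resample of the initial image (the patent does not specify the resampling method), and that the first scene area is a rectangular slice; the function names and scale factors are hypothetical.

```python
import numpy as np

def resample_nearest(img, scale):
    """Nearest-neighbour resample of an H x W x C image by a scale factor."""
    h, w = img.shape[:2]
    rows = (np.arange(int(h * scale)) / scale).astype(int).clip(0, h - 1)
    cols = (np.arange(int(w * scale)) / scale).astype(int).clip(0, w - 1)
    return img[rows][:, cols]

def split_and_adjust(initial_img, first_area_slice, up_scale=2.0, down_scale=0.5):
    """From one initial (third-resolution) image, derive the two scene images.

    first_area_slice: (row_slice, col_slice) marking the first scene area inside the
                      initial image; for brevity the second scene image here is the
                      whole frame, whose first-area pixels are overwritten later
                      during compositing.
    """
    # First scene image: crop the first scene area and raise its resolution.
    first_scene_image = resample_nearest(initial_img[first_area_slice], up_scale)
    # Second scene image: lower the resolution of the full initial image.
    second_scene_image = resample_nearest(initial_img, down_scale)
    return first_scene_image, second_scene_image

initial = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
area = (slice(120, 360), slice(160, 480))
hi_img, lo_img = split_and_adjust(initial, area)
print(hi_img.shape, lo_img.shape)  # (480, 640, 3) (240, 320, 3)
```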
Therefore, the single camera or the double cameras are adopted to acquire the scene images corresponding to different areas, and the flexibility of the scene image acquisition mode is improved.
In other embodiments of the present disclosure, S110 may specifically include the following steps:
Identifying scene content corresponding to the virtual scene, and determining a background area and a foreground area in the virtual scene;
the foreground region is taken as a first scene region and the background region is taken as a second scene region.
Specifically, the scene content is matched with the predetermined foreground content and background content, and a foreground area or a background area is determined, so that a first scene area and a second scene area are obtained.
Alternatively, the scene content may include a color of the virtual scene, a style of the virtual scene, a layout manner of the virtual scene, motion information and position information of the virtual object, and the like.
Therefore, the foreground area and the background area of the virtual scene are distinguished by analyzing the actual scene content, yielding the first scene area and the second scene area; different virtual scenes thus lead to different first and second scene areas.
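As a rough, assumed illustration of this matching step (the patent does not define the content descriptors), the sketch below tags each virtual object against predetermined foreground and background labels and collects the tagged objects' screen bounding boxes into the two areas; every label and field name here is hypothetical.

```python
from dataclasses import dataclass
from typing import List, Tuple

# Predetermined content labels; which labels count as foreground is an
# application-level choice, not something specified by the patent.
FOREGROUND_LABELS = {"character", "vehicle", "interactive_prop"}
BACKGROUND_LABELS = {"sky", "terrain", "distant_building"}

@dataclass
class VirtualObject:
    label: str
    bbox: Tuple[int, int, int, int]   # (x_min, y_min, x_max, y_max) on screen

def split_foreground_background(objects: List[VirtualObject]):
    """Return (foreground_bboxes, background_bboxes) = (first, second) scene areas."""
    first_area = [o.bbox for o in objects if o.label in FOREGROUND_LABELS]
    second_area = [o.bbox for o in objects if o.label in BACKGROUND_LABELS]
    return first_area, second_area

scene = [
    VirtualObject("character", (400, 300, 700, 900)),
    VirtualObject("sky", (0, 0, 1920, 400)),
    VirtualObject("terrain", (0, 800, 1920, 1080)),
]
print(split_foreground_background(scene))
```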
Further, the first scene image corresponding to the first scene area and the second scene image corresponding to the second scene area may also be determined in the manner described in the foregoing embodiment, which is not described herein again.
In still other embodiments of the present disclosure, S110 may specifically include the following steps:
Detecting motion information of a virtual object in a virtual scene, and determining a moving object and a static object from the virtual scene based on the motion information;
Determining a first scene area in the virtual scene based on a preset first area dividing parameter by taking the moving object as a center;
And determining a second scene area in the virtual scene based on a preset second area dividing parameter by taking the static object as a center.
The first region-dividing parameter is used to determine the radius, length, or width of the first scene area. The second region-dividing parameter is used to determine the radius, length, or width of the second scene area.
In order for the user to focus on more scene content related to the moving object, the first region-dividing parameter may be greater than the second region-dividing parameter, so that the extent of the first scene area is greater than that of the second scene area. The moving object is then presented to the user through the first scene area, together with more of the scene content related to it.
Therefore, by analyzing the motion information of the actual virtual scene, moving objects and static objects are divided into two different areas, yielding the first scene area and the second scene area.
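The sketch below is one assumed way to realize this division: objects whose speed exceeds a small threshold are treated as moving, and a circular region whose radius is given by a (hypothetical) first or second region-dividing parameter is placed around each object's screen position. The thresholds, radii, and data layout are illustrative assumptions.

```python
import math
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class VirtualObject:
    position: Tuple[float, float]   # screen-space position (x, y)
    velocity: Tuple[float, float]   # screen-space velocity per frame

# Hypothetical region-dividing parameters: radii of the circular regions.
FIRST_AREA_RADIUS = 300.0    # around moving objects (larger area)
SECOND_AREA_RADIUS = 120.0   # around static objects (smaller area)
SPEED_THRESHOLD = 1.0        # below this, an object counts as static

def divide_regions(objects: List[VirtualObject]):
    """Return two lists of (center, radius) circles: the first and second scene areas."""
    first_area, second_area = [], []
    for obj in objects:
        speed = math.hypot(*obj.velocity)
        if speed > SPEED_THRESHOLD:
            first_area.append((obj.position, FIRST_AREA_RADIUS))
        else:
            second_area.append((obj.position, SECOND_AREA_RADIUS))
    return first_area, second_area

objs = [VirtualObject((960, 540), (5.0, 0.0)), VirtualObject((300, 200), (0.0, 0.0))]
print(divide_regions(objs))
```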
Further, the first scene image corresponding to the first scene area and the second scene image corresponding to the second scene area may also be determined in the manner described in the foregoing embodiment, which is not described herein again.
In still other embodiments of the present disclosure, before the scene areas are determined, the method further comprises: arranging a first virtual camera and a second virtual camera for shooting in the virtual head-mounted device, wherein the shooting center points of the first virtual camera and the second virtual camera are the same, the shooting field-of-view angle of the first virtual camera is a first field-of-view angle, the shooting field-of-view angle of the second virtual camera is a second field-of-view angle, and the second field-of-view angle is larger than the first field-of-view angle;
Correspondingly, S110 may specifically include the following steps:
Determining a scene display area of the virtual scene based on the shooting center point and the second field-of-view angle;
determining a first scene area from the scene display area based on the shooting center point and the first field-of-view angle, and taking the area of the scene display area other than the first scene area as the second scene area.
The virtual head-mounted device is VR device, and the shooting center point is the lens center point of the virtual camera.
Alternatively, the first field of view may be 40 degrees and the second field of view may be 140 degrees.
Specifically, the first virtual camera and the second virtual camera are concentric cameras, that is, their shooting center points are the same. The virtual scene is shot with the larger second field-of-view angle, and the maximum display range of the virtual scene is determined as the scene display area based on the shooting center point. Meanwhile, the virtual scene is shot with the smaller first field-of-view angle, and the first scene area is determined from the scene display area based on the shooting center point. Finally, the area of the scene display area other than the first scene area is taken as the second scene area.
In the case where the first virtual camera and the second virtual camera are provided in the virtual head-mounted device, that is, when the shot scene is rendered into images, S120 may specifically include the following steps: the first virtual camera shoots the virtual scene corresponding to the first scene area at the first resolution to obtain the first-resolution scene image; the second virtual camera shoots the virtual scene corresponding to the second scene area at the second resolution to obtain the second-resolution scene image.
Therefore, two scene areas are determined by using two virtual cameras arranged in the virtual head-mounted equipment, scene shooting is carried out through the two virtual cameras, scene images corresponding to the two areas are obtained, and the method is suitable for the situation based on double-camera rendering.
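As a hedged sketch of the concentric two-camera setup, the code below derives the scene display area from the wider (second) field-of-view angle and the first scene area from the narrower (first) one, reusing the same angular-distance idea as in the earlier eye-FOV sketch; the degrees-per-pixel mapping and the example angles are assumptions, not details from the patent.

```python
import numpy as np

def angular_mask(height, width, center_px, fov_deg, deg_per_pixel=0.1):
    """Pixels within fov_deg / 2 of the shared shooting center point."""
    rows, cols = np.mgrid[0:height, 0:width]
    angle = np.hypot(rows - center_px[0], cols - center_px[1]) * deg_per_pixel
    return angle <= fov_deg / 2.0

h, w, center = 1080, 1200, (540, 600)
first_fov, second_fov = 40.0, 140.0                 # second angle larger, as required

scene_display_area = angular_mask(h, w, center, second_fov)   # widest extent of the scene
first_scene_area = angular_mask(h, w, center, first_fov)
# Second scene area: inside the scene display area but outside the first scene area.
second_scene_area = scene_display_area & ~first_scene_area
print(first_scene_area.sum(), second_scene_area.sum())
```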
In summary, in different virtual scenes, the first scene area and the second scene area can be determined in different manners, and the scene image corresponding to each scene area is determined, so that the method can be deployed and applied in different scenes, and the flexibility of the image display method is improved.
Based on the same inventive concept as the above-mentioned method embodiments, the present disclosure further provides an image display device, referring to fig. 2, which is a schematic structural diagram of the image display device provided in the embodiment of the present disclosure, where the image display device 200 includes:
A determining module 210, configured to determine a first scene area and a second scene area in a virtual scene to be displayed, where a user's attention to the first scene area is greater than that to the second scene area;
an obtaining module 220, configured to obtain a first scene image generated by rendering the first scene area with a first resolution, and obtain a second scene image generated by rendering the second scene area with a second resolution, where the first resolution is greater than the second resolution, and a difference between the first resolution and the second resolution is greater than a preset threshold;
And a display module 230, configured to generate and display a target scene image based on the first scene image and the second scene image.
In an alternative embodiment, the determining module 210 is specifically configured to determine the first scene area from the virtual scene according to the eye field of view of the user, and take an area other than the first scene area as the second scene area.
In an optional implementation manner, the determining module 210 is specifically configured to identify scene content corresponding to the virtual scene, and determine a background area and a foreground area in the virtual scene;
the foreground region is taken as the first scene region and the background region is taken as the second scene region.
In an alternative embodiment, the determining module 210 is specifically configured to detect motion information of a virtual object in the virtual scene, and determine a moving object and a static object from the virtual scene based on the motion information;
Taking the moving object as a center, and determining a first scene area in the virtual scene based on a preset first area dividing parameter;
And taking the static object as a center, and determining a second scene area in the virtual scene based on a preset second area dividing parameter.
In an alternative embodiment, the apparatus further comprises:
A setting module, configured to arrange a first virtual camera and a second virtual camera for shooting in the virtual head-mounted device, wherein the shooting center points of the first virtual camera and the second virtual camera are the same, the shooting field-of-view angle of the first virtual camera is a first field-of-view angle, the shooting field-of-view angle of the second virtual camera is a second field-of-view angle, and the second field-of-view angle is larger than the first field-of-view angle;
correspondingly, the determining module 210 is specifically configured to determine a scene display area of the virtual scene based on the shooting center point and the second field angle;
The first scene area is determined from the scene display areas based on the shooting center point and the first angle of view, and an area other than the first scene area in the scene display areas is taken as the second scene area.
In an optional implementation manner, the obtaining module 220 is specifically configured to obtain the first scene image by shooting, by using the first virtual camera, a virtual scene corresponding to the first scene area with a first resolution;
And shooting a virtual scene corresponding to the second scene area by the second virtual camera with a second resolution to obtain the second scene image.
In an optional implementation manner, the display module 230 is specifically configured to match the position coordinates of each pixel point in the first scene image with the first scene area, so as to obtain a first target pixel point set that is successfully matched;
matching the position coordinates of each pixel point in the second scene image with the second scene area to obtain a second target pixel point set successfully matched;
And synthesizing the first target pixel point set and the second target pixel point set to generate the target scene image for display.
The embodiment of the disclosure provides an image display device, which determines a first scene area and a second scene area in a virtual scene to be displayed, wherein the attention of a user to the first scene area is greater than the attention of the user to the second scene area; acquiring a first scene image generated by rendering a first scene area with a first resolution, and acquiring a second scene image generated by rendering a second scene area with a second resolution, wherein the first resolution is greater than the second resolution, and the difference between the first resolution and the second resolution is greater than a preset threshold; a target scene image is generated and displayed based on the first scene image and the second scene image. Therefore, the first scene image with high resolution is displayed in the first scene area with higher attention, the second scene image with low resolution is displayed in the second scene area with lower attention, and the user can clearly watch the scene image of interest to improve the immersive experience effect while avoiding the VR equipment from generating larger resource consumption.
In addition to the above-described methods and apparatuses, embodiments of the present disclosure also provide a computer-readable storage medium having instructions stored therein that, when executed on a terminal device, cause the terminal device to implement the image display method of the embodiments of the present disclosure.
The disclosed embodiments also provide a computer program product comprising a computer program/instruction which, when executed by a processor, implements the image display method of the disclosed embodiments.
In addition, the embodiment of the present disclosure further provides an image display apparatus, referring to fig. 3, the image display apparatus may include:
A processor 301, a memory 302, an input device 303 and an output device 304. The number of processors 301 in the image display device may be one or more; one processor is taken as an example in fig. 3. In some embodiments of the present disclosure, the processor 301, the memory 302, the input device 303 and the output device 304 may be connected by a bus or other means, with a bus connection taken as an example in fig. 3.
The memory 302 may be used to store software programs and modules, and the processor 301 executes the various functional applications and data processing of the image display device by running the software programs and modules stored in the memory 302. The memory 302 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function, and the like. In addition, the memory 302 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another solid-state storage device. The input device 303 may be used to receive entered number or character information and to generate signal inputs related to user settings and function control of the image display device.
In particular, in this embodiment, the processor 301 loads executable files corresponding to the processes of one or more application programs into the memory 302 according to the following instructions, and the processor 301 executes the application programs stored in the memory 302, so as to implement the various functions of the image display device.
It should be noted that, in this document, relational terms such as "first" and "second" are used merely to distinguish one entity or operation from another entity or operation, and do not necessarily require or imply any actual relationship or order between these entities or operations. Moreover, the terms "comprising", "including", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article or apparatus that comprises a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in the process, method, article or apparatus that comprises the element.
The foregoing is merely a specific embodiment of the disclosure, enabling those skilled in the art to understand or practice the disclosure. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the general principles defined herein may be implemented in other embodiments without departing from the spirit or scope of the disclosure. Thus, the present disclosure is not intended to be limited to the embodiments shown herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (11)

1. An image display method, the method comprising:
Determining a first scene area and a second scene area in a virtual scene to be displayed, wherein the attention of a user to the first scene area is greater than the attention of the user to the second scene area;
Acquiring a first scene image generated by rendering the first scene area at a first resolution, and acquiring a second scene image generated by rendering the second scene area at a second resolution, wherein the first resolution is greater than the second resolution, and the difference between the first resolution and the second resolution is greater than a preset threshold;
a target scene image is generated and displayed based on the first scene image and the second scene image.
2. The method of claim 1, wherein the determining a first scene region and a second scene region in the virtual scene to be displayed comprises:
And determining the first scene area from the virtual scene according to the eye field range of the user, and taking the area except the first scene area as the second scene area.
3. The method of claim 1, wherein the determining a first scene region and a second scene region in the virtual scene to be displayed comprises:
Identifying scene content corresponding to the virtual scene, and determining a background area and a foreground area in the virtual scene;
the foreground region is taken as the first scene region and the background region is taken as the second scene region.
4. The method of claim 1, wherein the determining a first scene region and a second scene region in the virtual scene to be displayed comprises:
Detecting motion information of a virtual object in the virtual scene, and determining a moving object and a static object from the virtual scene based on the motion information;
Taking the moving object as a center, and determining a first scene area in the virtual scene based on a preset first area dividing parameter;
And taking the static object as a center, and determining a second scene area in the virtual scene based on a preset second area dividing parameter.
5. The method as recited in claim 1, further comprising:
Setting a first virtual camera and a second virtual camera to shoot in the virtual head-mounted equipment, wherein shooting center points of the first virtual camera and the second virtual camera are the same, a shooting view angle of the first virtual camera is a first view angle, a shooting view angle of the second virtual camera is a second view angle, and the second view angle is larger than the first view angle;
the determining a first scene area and a second scene area in the virtual scene to be displayed includes:
determining a scene display area of the virtual scene based on the shooting center point and the second field angle;
The first scene area is determined from the scene display areas based on the shooting center point and the first angle of view, and an area other than the first scene area in the scene display areas is taken as the second scene area.
6. The method of claim 5, wherein the acquiring a first scene image corresponding to the first scene region and acquiring a second scene image corresponding to the second scene region comprises:
shooting a virtual scene corresponding to the first scene area by the first virtual camera with a first resolution to obtain a first scene image;
And shooting a virtual scene corresponding to the second scene area by the second virtual camera with a second resolution to obtain the second scene image.
7. The method of any of claims 1-6, wherein the generating and displaying a target scene image based on the first scene image and the second scene image comprises:
matching the position coordinates of each pixel point in the first scene image with the first scene area to obtain a first target pixel point set successfully matched;
matching the position coordinates of each pixel point in the second scene image with the second scene area to obtain a second target pixel point set successfully matched;
And synthesizing the first target pixel point set and the second target pixel point set to generate the target scene image for display.
8. An image display device, the device comprising:
The device comprises a determining module, a display module and a display module, wherein the determining module is used for determining a first scene area and a second scene area in a virtual scene to be displayed, and the attention of a user to the first scene area is larger than that to the second scene area;
an acquisition module for acquiring a first scene image generated by rendering the first scene area with a first resolution and acquiring a second scene image generated by rendering the second scene area with a second resolution, wherein the first resolution is greater than the second resolution and a difference between the first resolution and the second resolution is greater than a preset threshold;
And the display module is used for generating and displaying a target scene image based on the first scene image and the second scene image.
9. A computer readable storage medium, characterized in that the computer readable storage medium has stored therein instructions, which when run on a terminal device, cause the terminal device to implement the method of any of claims 1-7.
10. An apparatus, comprising: a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor implementing the method of any one of claims 1-7 when the computer program is executed.
11. A computer program product, characterized in that it comprises a computer program/instruction which, when executed by a processor, implements the method according to any of claims 1-7.
CN202211740322.0A 2022-12-30 2022-12-30 Image display method, device, equipment and medium Pending CN118276671A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211740322.0A CN118276671A (en) 2022-12-30 2022-12-30 Image display method, device, equipment and medium


Publications (1)

Publication Number Publication Date
CN118276671A true CN118276671A (en) 2024-07-02

Family

ID=91639007

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211740322.0A Pending CN118276671A (en) 2022-12-30 2022-12-30 Image display method, device, equipment and medium

Country Status (1)

Country Link
CN (1) CN118276671A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination