WO2022233256A1 - A display method and electronic device - Google Patents

A display method and electronic device

Info

Publication number
WO2022233256A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
frame
frames
depth
field
Prior art date
Application number
PCT/CN2022/089315
Other languages
English (en)
French (fr)
Inventor
付钟奇
沈钢
Original Assignee
华为技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司 filed Critical 华为技术有限公司
Publication of WO2022233256A1 publication Critical patent/WO2022233256A1/zh

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/013 Eye tracking input arrangements
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 19/006 Mixed reality
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/194 Segmentation; Edge detection involving foreground-background segmentation
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G 5/00 Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G 5/36 Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory
    • G09G 5/37 Details of the operation on graphic patterns

Definitions

  • the present application relates to the field of electronic technology, and in particular, to a display method and an electronic device.
  • VR technology is a human-computer interaction method created with the help of computer and sensor technology.
  • VR technology integrates various scientific technologies such as computer graphics technology, computer simulation technology, sensor technology, display technology, etc., and can create a virtual environment. Users are immersed in the virtual environment by wearing VR wearable devices.
  • the virtual environment is presented through continuous refresh of many rendered 3D images, which include objects at different depths of field, giving users a three-dimensional feel.
  • The image rendering frame rate is the number of image frames rendered per unit time.
  • A higher image rendering frame rate requires more computing power from the graphics processing unit (GPU) and increases the power consumption of the device, so it is often difficult to provide a larger image rendering frame rate.
  • the purpose of the present application is to provide a display method and an electronic device for reducing power consumption caused by image rendering.
  • a display method is provided, and the method can be performed by a display device.
  • For example, the display device may be a VR display device, an augmented reality (AR) display device, or a mixed reality (MR) display device; the display device may be a wearable device, such as a head-mounted device (e.g., glasses, a helmet, etc.).
  • the method may also be performed by an electronic device connected to the display device, for example, the electronic device may be a host (eg, a VR host) or a server (eg, a VR server).
  • In the method, N frames of images are presented to the user through a display device; wherein, among the N frames of images, the first object at a first depth of field on the jth frame image is the same as the first object at the first depth of field on the ith frame image, and the second object at a second depth of field on the jth frame image is different from the second object at the second depth of field on the ith frame image; N, i, and j are positive integers, and i is less than j.
  • When the user wears the display device (such as VR glasses), the objects the user sees have depth of field; for example, some objects appear closer to the user and some appear farther away.
  • Moreover, across the frames the user sees, the first object at the first depth of field is the same (or unchanged), while the second object at the second depth of field is different (or changed).
  • Therefore, the first object at the first depth of field can be rendered at a lower frame rate; for example, the first object is rendered only once, and the second and third frames after it reuse that rendered first object (as sketched below), which greatly reduces rendering power consumption.
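  • A minimal sketch of this reuse idea, using hypothetical helper names (render_layer, compose) that are not part of the application: the first-depth (far) layer is rendered once and reused, while the second-depth (near) layer is re-rendered for every displayed frame.

```python
def render_layer(name, frame_index):
    """Stand-in for an expensive per-layer render; returns a placeholder image."""
    return f"{name}@frame{frame_index}"

def compose(far_image, near_image):
    """Stand-in for fusing two depth layers into one displayed frame."""
    return (far_image, near_image)

frames = []
far_cache = None                        # cached render of the first (far) object
for i in range(3):                      # three displayed frames
    if far_cache is None:               # render the far layer only for the first frame
        far_cache = render_layer("far", i)
    near = render_layer("near", i)      # the near layer is rendered every frame
    frames.append(compose(far_cache, near))

# frames reuses 'far@frame0' three times while the near layer changes each frame
print(frames)
```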
  • In a possible implementation, both the first object and the second object are changing objects. A changing object may be understood as an object that, in the eyes of the user, is constantly changing, for example in at least one of action, position, shape, color, or size.
  • For example, the first object at the first depth of field is a little boy playing football, the second object at the second depth of field is a boat at sea, and both the little boy and the boat are changing objects.
  • the first depth of field is greater than the second depth of field.
  • the first depth of field is greater than a first threshold, and/or the second depth of field is less than a second threshold, and the first threshold is greater than or equal to the second threshold.
  • the specific values of the first threshold and the second threshold are not limited in this embodiment of the present application.
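  • The sketch below only illustrates how such thresholds could be used to split objects into far-field and near-field groups; the threshold values and the object list are assumptions for illustration, since the embodiment does not specify them.

```python
FIRST_THRESHOLD = 10.0   # metres; assumed value, not specified by the application
SECOND_THRESHOLD = 3.0   # metres; assumed value, FIRST_THRESHOLD >= SECOND_THRESHOLD

objects = [("mountain", 50.0), ("boat", 12.0), ("boy", 1.5)]  # (name, depth) pairs, assumed

far_field = [name for name, depth in objects if depth > FIRST_THRESHOLD]
near_field = [name for name, depth in objects if depth < SECOND_THRESHOLD]

print(far_field)   # ['mountain', 'boat']  -> candidates for the low rendering frame rate
print(near_field)  # ['boy']               -> candidates for the high rendering frame rate
```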
  • In this case, the first object far from the user (which can be understood as a far-field object) remains unchanged, while the second object closer to the user (which can be understood as a near-field object) changes. Because the near-field objects change in real time while the far-field objects change little or even remain unchanged, the user's viewing experience is not affected and rendering power consumption can be saved.
  • In a possible implementation, the second depth of field changes as the depth of field of the user's gaze point changes. For example, when the user's gaze point moves from far to near, the second depth of field also changes from far to near, and the second object at the second depth of field changes accordingly; this avoids the object at the user's gaze point remaining unchanged and degrading the viewing experience.
  • Behind the scenes, the image rendering frame rate of the second object at the second depth of field increases, so fewer frames need to be inserted for the second object and its changes appear to speed up.
  • In a possible implementation, the second depth of field is the depth of field where the user's gaze point is located. That is, whichever depth of field the user's gaze point falls in, the objects the user sees at that depth of field may change in real time, while objects at other depths of field (such as the first depth of field) may remain unchanged or change little (see the sketch below).
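  • A small sketch of this gaze-driven selection, under assumed layer boundaries and frame rates (the application does not prescribe these values): the layer containing the gaze depth is treated as the second depth of field and given the high rendering rate.

```python
LAYERS = {"near": (0.0, 3.0), "middle": (3.0, 10.0), "far": (10.0, float("inf"))}
HIGH_RATE, LOW_RATE = 60, 30  # frames per unit time (assumed values)

def pick_rates(gaze_depth):
    """Give the layer containing the gaze depth the high rate, all others the low rate."""
    rates = {}
    for layer, (lo, hi) in LAYERS.items():
        focused = lo <= gaze_depth < hi
        rates[layer] = HIGH_RATE if focused else LOW_RATE
    return rates

print(pick_rates(1.2))   # {'near': 60, 'middle': 30, 'far': 30}
print(pick_rates(25.0))  # {'near': 30, 'middle': 30, 'far': 60}
```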
  • Alternatively, the second depth of field may be the depth of field where a preset object is located, and the preset object may be one or more of a virtual object, a real object, or an interface.
  • the preset object may be set by the system by default or set by the user.
  • In a possible implementation, that the first object at the first depth of field on the jth frame image among the N frames of images is the same as the first object at the first depth of field on the ith frame image includes: at least one of the action, position, shape, color, or size of the first object is the same on the jth frame image as on the ith frame image. That the second object at the second depth of field on the jth frame image is different from the second object at the second depth of field on the ith frame image includes: at least one of the action, position, shape, color, or size of the second object differs between the jth frame image and the ith frame image.
  • That is, the first object at the first depth of field remains unchanged (for example, at least one of its action, position, shape, or size is the same), while the second object at the second depth of field changes (for example, at least one of its action, position, shape, or size is different).
  • the first object and the second object are of different types.
  • For example, the first object includes one or more of a virtual object, a real object, or an interface; and/or the second object includes one or more of a virtual object, a real object, or an interface.
  • For example, the first object may be a virtual object (such as a VR game character) and the second object a real object, where a real object refers to an object in the real world captured by a camera. That is, what the user sees includes virtual objects placed in the real world, where the virtual objects change in real time and the real world changes slowly or not at all.
  • the first object may be an interface (such as a video playing interface), and the second object may be a background object, such as a virtual theater.
  • In a possible implementation, that the first object at the first depth of field on the jth frame image is the same as the first object at the first depth of field on the ith frame image includes: the first object at the first depth of field on the jth frame image is a copy of the first object at the first depth of field on the ith frame image; or, the first object at the first depth of field on the jth frame image is the first object at the first depth of field on the ith frame image after translation and/or rotation.
  • That is, the first object at the first depth of field on the jth frame image does not need to be re-rendered; the first object at the first depth of field on the ith frame image can be used directly.
  • Duplicating the first object at the first depth of field in the ith frame image, or translating and/or rotating it, helps to save rendering power consumption.
  • In a possible implementation, that the second object at the second depth of field on the jth frame image is different from the second object at the second depth of field on the ith frame image includes: the second object at the second depth of field on the jth frame image and the second object at the second depth of field on the ith frame image are different objects; and/or, the second object at the second depth of field on the jth frame image and the second object at the second depth of field on the ith frame image are different forms of the same object.
  • the first object in the first depth of field is the same (or unchanged), and the second object in the second depth of field is different (or changed).
  • That is, compared with the previous frame, the objects at the second depth of field in the current frame have changed: a new object has entered the second depth of field of the virtual environment, or the form of the second object at the second depth of field has changed, where the form includes the second object's action, position, shape, size, color, and so on.
  • the second object in the second depth of field seen by the user changes in real time, and the user has a better viewing experience.
  • In a possible implementation, before the N frames of images are presented to the user through the display device, the method further includes: within a certain period of time, generating M frames of first object images and N frames of second object images, where M and N are positive integers and M is less than N; and inserting N-M frames of first object images into the M frames of first object images, where each inserted first object image is a copy of at least one of the M frames of first object images, or is such a frame after rotation and/or translation. The N frames of images are obtained by correspondingly fusing the N frames of first object images with the N frames of second object images.
  • The inserted first object image may be a copy of the previous frame or the previous frame after rotation and/or translation; it may also be a copy of an earlier frame (the nth frame before it) or that earlier frame after rotation and/or translation, which is not limited in this embodiment of the present application.
  • In a possible implementation, inserting N-M frames of first object images into the M frames of first object images includes: making M frames of second object images among the N frames of second object images correspond to the M frames of first object images, where the generation times of those M frames of second object images are adjacent to the generation times of the M frames of first object images; and inserting N-M frames of first object images, where the inserted N-M frames of first object images correspond to the remaining N-M frames of second object images.
  • Here, the generation times of the M frames of second object images being adjacent to the generation times of the M frames of first object images can be understood as the generation times being close or the closest, or the difference between the generation times being the smallest or smaller than a threshold, and so on.
  • Since M < N, it is necessary to insert N-M frames of first object images.
  • In other words, the M frames of first object images and the N frames of second object images are aligned according to their generation times, and after alignment, frames are inserted into the empty slots; for the specific process, refer to the introduction below and the sketch that follows.
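  • The following is one possible reading of this alignment-and-insertion step, with assumed timestamps and a "copy the nearest earlier first-object frame" policy (the embodiment also allows rotated/translated copies instead of plain copies).

```python
def align_and_fill(first_frames, second_frames):
    """first_frames: [(time, image)] with M entries; second_frames: [(time, image)] with N entries, M < N.
    Returns N first-object images, one aligned to each second-object frame."""
    aligned = []
    for t_second, _ in second_frames:
        # pick the generated first-object frame whose generation time is closest
        # and not later than this second-object frame (fallback: the earliest one)
        candidates = [(t, img) for t, img in first_frames if t <= t_second]
        _, img = candidates[-1] if candidates else first_frames[0]
        aligned.append(img)             # inserted frames are copies of this image
    return aligned

first = [(0, "F0"), (30, "F1")]                           # M = 2 first-object renders
second = [(0, "S0"), (10, "S1"), (20, "S2"), (30, "S3")]  # N = 4 second-object renders
print(align_and_fill(first, second))  # ['F0', 'F0', 'F0', 'F1'] -> fused with S0..S3
```

  • After this step, each of the N second-object frames has a matching first-object frame, and the two sets can be fused into the N displayed frames.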
  • In a possible implementation, the M frames of first object images are images obtained by rendering the first object according to the posture of the display device at M moments, and the N frames of second object images are images obtained by rendering the second object according to the posture of the display device at N moments, where the M moments and the N moments are within a first duration.
  • the image rendering frame rates of the first object in the first depth of field and the second object in the second depth of field are different.
  • The image rendering frame rate refers to the number of frames of images rendered per unit time. Assuming the rendering frame rate of the first object is M and the rendering frame rate of the second object is N, then within a certain period of time (for example, a unit period of time), M frames of first object images and N frames of second object images are rendered.
  • For example, the user wears VR glasses; when the user's head moves, the posture of the VR glasses changes. The first object is rendered based on the posture of the VR glasses, so that the rendered first object adapts to the user's head movement and the user experience is better.
  • In a possible implementation, presenting N frames of images to the user through a display device includes: when N is less than the image refresh rate P of the display device, inserting P-N frames of images into the N frames of images, where each inserted image is a copy of at least one of the N frames of images or is such a frame after rotation and/or translation; and presenting the resulting P frames of images to the user through the display device, where P is a positive integer.
  • For example, if 60 frames are rendered and the refresh rate is 90, the 30 inserted frames can be copies of any one or more of the 60 rendered frames.
  • The inserted image can be a copy of the previous frame or the previous frame after rotation and/or translation; it may also be a copy of an earlier frame (the nth frame before it) or that earlier frame after rotation and/or translation, which is not limited in this embodiment of the present application.
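  • A minimal sketch of padding N rendered frames up to the refresh rate P by repetition; spreading the repeats evenly is an assumption, since the embodiment only requires that inserted frames be copies (or rotated/translated versions) of rendered frames.

```python
def pad_to_refresh_rate(rendered, refresh_rate):
    """Repeat rendered frames so that the output has refresh_rate frames per unit time."""
    n = len(rendered)
    out = []
    for k in range(refresh_rate):
        out.append(rendered[k * n // refresh_rate])  # each rendered frame appears >= once
    return out

rendered = [f"img{i}" for i in range(60)]      # N = 60 rendered frames
displayed = pad_to_refresh_rate(rendered, 90)  # P = 90 displayed frames
print(len(displayed), displayed[:4])           # 90 ['img0', 'img0', 'img1', 'img2']
```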
  • In a possible implementation, the method further includes: when the user pays attention to the first object at the first depth of field, displaying W frames of images through the display device; wherein, among the W frames of images, the object at the second depth of field on the tth frame image is the same as the object at the second depth of field on the rth frame image, and the object at the first depth of field on the tth frame image is different from the object at the first depth of field on the rth frame image; W, t, and r are positive integers, and r is less than t.
  • That is, before the user pays attention to the first object, the first object at the first depth of field stays the same (unchanged) across frames while the second object at the second depth of field differs (changes); after the user pays attention to the first object at the first depth of field, the first object differs (changes) across frames while the second object at the second depth of field stays the same (unchanged).
  • This is because the original image rendering frame rate of the first object at the first depth of field is low, so many frames are inserted for it and it appears to change little or not at all.
  • When the user pays attention to the first object at the first depth of field, the image rendering frame rate of the first object is increased, so fewer frames are interpolated for it and it appears to change faster. To save power consumption, when the image rendering frame rate of the first object is increased, the image rendering frame rate of the second object is reduced, so the second object at the second depth of field appears unchanged or changes slowly (as sketched below).
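  • A toy sketch of this rate swap, with assumed rate values: whichever depth layer holds the user's attention gets the high rendering rate, and the other layer drops to the low rate so that the total rendering budget stays similar.

```python
HIGH_RATE, LOW_RATE = 60, 30  # frames per unit time (assumed values)

def choose_rates(user_attends_first_object):
    """Swap the high and low rendering rates between the two depth layers."""
    if user_attends_first_object:
        return {"first_object": HIGH_RATE, "second_object": LOW_RATE}
    return {"first_object": LOW_RATE, "second_object": HIGH_RATE}

print(choose_rates(False))  # {'first_object': 30, 'second_object': 60}
print(choose_rates(True))   # {'first_object': 60, 'second_object': 30}
```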
  • An electronic device is provided, comprising: a processor, a memory, and one or more programs; wherein the one or more programs are stored in the memory, and the one or more programs include instructions that, when executed by the processor, cause the electronic device to perform the method steps provided in the first aspect above.
  • A computer-readable storage medium is provided, where the computer-readable storage medium is used to store a computer program, and when the computer program runs on a computer, the computer is caused to execute the method provided in the first aspect above.
  • a computer program product comprising a computer program, which, when the computer program is run on a computer, causes the computer to perform the method provided in the above-mentioned first aspect.
  • A fifth aspect provides a graphical user interface on an electronic device, the electronic device having a display screen, a memory, and a processor for executing one or more computer programs stored in the memory, where the graphical user interface includes the graphical user interface displayed when the electronic device executes the method provided in the first aspect.
  • the embodiments of the present application further provide a chip system, the chip system is coupled with a memory in an electronic device, and is used to call a computer program stored in the memory and execute the technical solutions of the first aspect of the embodiments of the present application.
  • "Coupling" in the application embodiments means that two components are directly or indirectly combined with each other.
  • FIG. 1 is a schematic diagram of a system architecture provided by an embodiment of the present application.
  • FIG. 2 is a schematic diagram of a virtual environment seen when the posture of the wearable device changes according to an embodiment of the present application
  • FIG. 3 is a schematic diagram of an image rendering method provided by an embodiment of the present application.
  • FIG. 4 is a schematic diagram of another image rendering method provided by an embodiment of the present application.
  • FIG. 5 is a schematic diagram of a response delay caused by rendering at a low rendering frame rate according to an embodiment of the present application
  • FIG. 6 is a schematic diagram of image translation provided by an embodiment of the present application.
  • FIG. 7 is a schematic diagram of a first application scenario provided by an embodiment of the present application.
  • FIG. 8 is a schematic diagram of a second application scenario provided by an embodiment of the present application.
  • FIG. 9 is a schematic diagram of a third application scenario provided by an embodiment of the present application.
  • FIG. 10 is a schematic structural diagram of a wearable device provided by an embodiment of the application.
  • FIG. 11 is a schematic flowchart of an image rendering method provided by an embodiment of the present application.
  • FIG. 12 is a schematic diagram of a near-field object and a far-field object rendered at different frame rates according to an embodiment of the present application;
  • FIG. 13A and FIG. 13B are schematic diagrams of processing procedures for near-field objects and far-field objects according to an embodiment of the present application
  • FIG. 14A and FIG. 14B are schematic diagrams of alignment of a near-field object and a far-field object according to an embodiment of the present application;
  • FIG. 15A to FIG. 15C are schematic diagrams of a frame insertion process provided by an embodiment of the present application;
  • FIG. 16A and FIG. 16B are schematic diagrams of the processing flow of a close-range object, a medium-range object, and a distant object provided by an embodiment of the present application;
  • FIG. 17 to FIG. 20 are schematic diagrams of a frame insertion process provided by an embodiment of the application.
  • FIG. 21 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
  • "At least one" in the embodiments of the present application means one or more, and "multiple" means two or more.
  • Words such as "first" and "second" are used only for the purpose of distinguishing the description and cannot be interpreted as expressing or implying relative importance, nor as expressing or implying order.
  • For example, "first object" and "second object" are used only to distinguish the objects and do not indicate their importance or order.
  • "and/or” is an association relationship that describes an associated object, indicating that there can be three kinds of relationships, for example, A and/or B, which can mean that A exists alone, and A and B exist at the same time , there are three cases of B alone.
  • the character "/" in this document generally indicates that the related objects are an "or" relationship.
  • VR technology is a human-computer interaction method created with the help of computer and sensor technology.
  • VR technology integrates various scientific technologies such as computer graphics technology, computer simulation technology, sensor technology, display technology, etc., and can create a virtual environment.
  • The virtual environment includes realistic three-dimensional images that are generated by a computer and played dynamically in real time to give users visual perception. Besides the visual perception generated by computer graphics technology, there are also perceptions such as hearing, touch, force, and motion, and even smell and taste, which is also called multi-sensing. In addition, the system can detect the user's head rotation, eye movement, gestures, or other human behaviors, and the computer can process data adapted to the user's actions and respond to them.
  • a user wearing a VR wearable device can see the VR game interface, and can interact with the VR game interface through operations such as gestures, handles, etc., as if they are in a game.
  • AR technology refers to superimposing computer-generated virtual objects on top of real-world scenes to enhance the real world. That is to say, AR technology needs to capture real-world scenes, and then add virtual environments to the real world.
  • The difference is that VR technology creates a completely virtual environment in which everything the user sees is virtual, while AR technology superimposes virtual objects on the real world, so the user sees both objects in the real world and virtual objects.
  • the user wears transparent glasses, through which the real environment around can be seen, and virtual objects can also be displayed on the glasses, so that the user can see both real objects and virtual objects.
  • The technical solutions in the embodiments of the present application can be applied to VR, AR, MR (Mixed Reality), naked-eye 3D scenes (naked-eye 3D displays, naked-eye 3D projection, etc.), theaters (such as 3D movies), VR software in electronic devices, and other scenarios in which three-dimensional images are displayed, where the three-dimensional images include objects at different depths of field (or image depths).
  • the following description mainly takes the VR scene as an example.
  • FIG. 1 is a schematic diagram of a VR system according to an embodiment of the present application.
  • the VR system includes a VR wearable device, and a host (such as a VR host) or a server (such as a VR server), and the VR wearable device is connected (wired connection or wireless connection) to the VR host or the VR server.
  • a VR host or a VR server can be a device with large computing power.
  • the VR host can be a mobile phone, a tablet computer, a laptop and other devices
  • the VR server can be a cloud server.
  • the VR host or VR server is responsible for image generation, image rendering, etc., and then sends the rendered image to the VR wearable device for display, and the user can see the image when wearing the VR wearable device.
  • the VR wearable device may be a head mounted device (Head Mounted Display, HMD), such as glasses, helmets, and the like.
  • the VR wearable device, VR host or VR server can use the rendering method provided in this application (the specific principle will be described later) to render images, so as to save the rendering power consumption of the VR host or VR server.
  • the VR system in FIG. 1 may not include a VR host or a VR server.
  • the VR wearable device has local image generation and rendering capabilities, and does not need to obtain images from the VR host or VR server for display.
  • In this case, the VR wearable device can use the rendering method provided by the embodiments of the present application to render the image, saving the rendering power consumption of the VR wearable device.
  • the following mainly takes the local image rendering of the VR wearable device as an example to introduce.
  • image rendering includes rendering the image with color, transparency, etc., and also includes rotating and/or translating the image according to the posture of the VR wearable device.
  • The posture of the VR wearable device includes multiple degrees of freedom such as rotation angle and/or translation distance, where the rotation angle includes yaw, pitch, and roll, and the translation distance includes the translation distance along the three axes (X, Y, Z).
  • the image rendering includes rotating the image according to the rotation angle of the VR wearable device, and/or performing translation processing on the image according to the translation distance of the VR wearable device.
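  • For illustration only, a simplified pose transform (a yaw-only rotation plus a translation, with an assumed axis convention of X to the right and Z forward); the application does not specify its rendering pipeline at this level of detail.

```python
import math

def yaw_matrix(yaw_deg):
    """3x3 rotation matrix about the vertical (Y) axis."""
    c, s = math.cos(math.radians(yaw_deg)), math.sin(math.radians(yaw_deg))
    return [[c, 0.0, s],
            [0.0, 1.0, 0.0],
            [-s, 0.0, c]]

def apply_pose(point, yaw_deg, translation):
    """Rotate the scene opposite to the head yaw, then shift it opposite to the
    head translation, so the virtual environment stays linked with the user."""
    r = yaw_matrix(-yaw_deg)
    x, y, z = point
    rx = r[0][0] * x + r[0][1] * y + r[0][2] * z
    ry = r[1][0] * x + r[1][1] * y + r[1][2] * z
    rz = r[2][0] * x + r[2][1] * y + r[2][2] * z
    tx, ty, tz = translation
    return (rx - tx, ry - ty, rz - tz)

# Head turned 40 degrees to the right, no translation: a point straight ahead
# ends up rotated 40 degrees to the left of the view direction (as in FIG. 2).
print(apply_pose((0.0, 0.0, 5.0), 40.0, (0.0, 0.0, 0.0)))
```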
  • The posture may include the orientation and position of the user, and when the user's posture changes, the user's perspective changes. For example, the posture may be the user's head posture.
  • the pose can be acquired by sensors and/or cameras in the VR wearable device.
  • FIG. 2 is a schematic diagram of image rendering in the VR field.
  • As shown in FIG. 2, initially the screen in the rendered image is directly in front of the user and the background objects (such as mountains and water) are also in front; when the user's head posture rotates to the right (for example, by 40 degrees), the screen in the image rotates 40 degrees to the left and the background objects (such as mountains and water) also rotate 40 degrees to the left, so that the virtual environment seen by the user moves together with the user and the experience is better.
  • the VR wearable device can render (rotate and/or translate) the image according to the current pose.
  • For example, at the 1st ms, the image (which can be understood as the original, i.e., unrendered, image) is rendered according to the posture at the 1st ms, where the posture at the 1st ms can be the motion data generated by the motion sensor at the 1st ms, such as rotation angle and/or translation distance; at the 2nd ms, the image is rendered according to the posture at the 2nd ms (the motion data generated by the motion sensor at the 2nd ms, such as rotation angle and/or translation distance), and so on.
  • Three-dimensional images include objects at different image depths.
  • When a VR wearable device displays a three-dimensional image, a user wearing the device sees a three-dimensional scene in which different objects are at different distances from the user's eyes, producing a sense of depth. Therefore, the image depth can be understood as the distance from an object in the three-dimensional image to the user's eyes: the smaller the depth, the closer the object appears. Image depth may also be referred to as "depth of field".
  • The image rendering frame rate refers to the number of image frames rendered in a unit time (such as 1 s or 60 ms), that is, how many frames of images can be rendered per unit time. If the unit time is 1 s, the unit of the image rendering frame rate can be fps. The higher the image rendering frame rate, the higher the computing power required of the chip. It should be noted that this application does not limit the specific length (duration) of the unit time, which may be 1 s, 1 ms, 60 ms, etc., as long as it is a fixed period of time.
  • the image refresh rate refers to the frame rate at which the display refreshes the image within a unit time (such as 1s, 60ms, etc.), that is, how many frames of images the display can refresh per unit time. If the unit time is 1s, the unit of the image refresh rate may be Hertz (Hz).
  • the image rendering frame rate needs to be adapted to the image refresh rate. For example, if the image refresh rate is 90Hz, then the image rendering frame rate needs to be at least 90fps to ensure sufficient image refresh on the display.
  • the images in the image stream to be rendered are rendered one by one, and the rendered image stream is refreshed on the display screen.
  • For example, if the image refresh rate of the VR wearable device reaches 90 Hz, the image rendering frame rate must reach at least 90 fps, which requires the support of a powerful graphics processor and also means high power consumption; with a given battery capacity, this reduces the battery life of mobile VR wearable devices.
  • the image rendering frame rate may be lower than the image refresh frame rate.
  • For example, the image rendering frame rate can be 30 fps or 60 fps. Taking 30 fps as an example (see FIG. 4), only 30 frames of images (such as the black images) are rendered per unit time; but since the image refresh rate is 90 Hz, the rendered 30 frames are obviously not enough for the display to refresh per unit time, so frames must be interpolated into the rendered 30 frames, for example inserting 60 frames so that the total reaches 90 frames, which ensures that enough images are refreshed on the display per unit time and guarantees the display effect.
  • In this way, rendering power consumption is reduced to a certain extent, but it causes higher latency for VR operations.
  • multiple frames of images need to be inserted between the rendered image of the ith frame and the rendered image of the i+1th frame, and the inserted image may be a copy of the ith frame of the image.
  • the rendered image stream is displayed on the VR wearable device.
  • Assume that a trigger operation is detected while the rendered image of the ith frame is displayed, and that the inserted images are displayed until the rendered image of the (i+1)th frame is displayed. Because each inserted image is a copy of the previous image (the ith frame image), the trigger operation is not responded to while the inserted images are displayed; it is only responded to when the (i+1)th frame is rendered. Therefore, the response time to the user's trigger operation is long, the display effect is poor, and the user experience is poor (see the rough calculation below).
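  • A back-of-the-envelope sketch of this latency argument: because inserted frames are copies of frame i, an input arriving just after frame i is rendered is only reflected once frame i+1 is rendered, so the worst-case response delay is bounded by the rendering interval rather than the shorter refresh interval.

```python
def worst_case_response_ms(render_fps, refresh_hz):
    """Compare the rendering interval (delay when copies are inserted)
    with the refresh interval (delay if every frame were truly rendered)."""
    return {
        "render_interval_ms": 1000.0 / render_fps,
        "refresh_interval_ms": 1000.0 / refresh_hz,
    }

print(worst_case_response_ms(30, 90))  # render ~33.3 ms vs refresh ~11.1 ms
print(worst_case_response_ms(90, 90))  # both ~11.1 ms when rendering matches refresh
```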
  • In addition, the above scheme of reducing the image rendering frame rate can cause close-range objects in the image to appear jittery.
  • the inserted image may be an image processed (translated and/or rotated) according to the posture of the VR wearable device.
  • For example, the inserted image is obtained by processing the rendered image of the ith frame according to the posture of the VR wearable device. In that case, there may be a parallax between the inserted image and the rendered image of the (i+1)th frame, because the rendered (i+1)th frame and the rendered ith frame are themselves continuous; visually, the object appears to shake. Moreover, the lower the image rendering frame rate, the more frames need to be inserted and the more obvious this difference becomes. In addition, a three-dimensional image has the characteristic that near objects appear large and far objects appear small, so the shaking of near-field objects is more obvious, the display effect is poor, and the experience is poor.
  • As shown in FIG. 6, continue to take inserting one frame between the rendered ith frame and the rendered (i+1)th frame as an example. The inserted image is obtained by rotating and/or translating the ith frame image based on the posture of the VR wearable device; assume the inserted image is the ith frame image translated to the right.
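  • A rough sketch of why near objects shake more, using a pinhole-camera approximation (the focal length and head displacement below are assumed values): for a given pose change, the on-screen shift of a reprojected point is roughly inversely proportional to its depth.

```python
def screen_shift_px(depth_m, dx_m=0.01, focal_px=1000.0):
    """Approximate on-screen shift (pixels) of a point at depth_m metres
    caused by a sideways head displacement dx_m, for focal length focal_px."""
    return focal_px * dx_m / depth_m

for depth in (0.5, 2.0, 20.0):
    print(f"depth {depth:>4} m -> shift ~{screen_shift_px(depth):.1f} px")
# depth  0.5 m -> ~20.0 px (near object: visible jump between inserted and rendered frames)
# depth  2.0 m -> ~5.0 px
# depth 20.0 m -> ~0.5 px  (far object: the shift is barely noticeable)
```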
  • an embodiment of the present application provides a display method.
  • In this method, N frames of images are presented to a user through a display device; wherein the objects at a first depth of field on the jth frame of the N frames of images are the same as the objects at the first depth of field on the ith frame image, and the objects at a second depth of field on the jth frame image are different from the objects at the second depth of field on the ith frame image; i is less than j.
  • For example, the VR wearable device displays N frames of images, and the user wearing the VR wearable device sees the N frames being continuously refreshed, in which the near-view objects change constantly while the far-view objects remain relatively unchanged.
  • This is because the close-range objects use a higher image rendering frame rate and the distant objects use a lower image rendering frame rate, so the number of close-range object frames rendered per unit time is higher than that of distant objects. The missing distant-object frames can be obtained by frame interpolation, and interpolating the distant objects makes them appear unchanged; meanwhile, the rendering frame rate of the close-range objects remains high, ensuring the user experience.
  • FIG. 7 is a schematic diagram of a first application scenario provided by an embodiment of the present application.
  • An image 701 is displayed on the display screen of the VR wearable device.
  • the image 701 is a rendered three-dimensional image.
  • The three-dimensional image includes multiple objects such as mountains, the sea, and a little boy playing football; therefore, what the user sees when wearing the VR wearable device is a virtual environment 702 in which a young boy plays football in an environment of mountains and sea.
  • the VR wearable device can determine the object that the user's eyes pay attention to. When rendering images, it can use a high frame rate to render the object that the user's eyes pay attention to, and use a low frame rate to render other objects.
  • One or more of the close-range objects may be real objects captured by the camera of the VR wearable device.
  • the close-up object may also be an interface such as a user interface (User Interface, UI for short) or a video playback interface.
  • If the VR wearable device determines that the object the user pays attention to is the little boy, then when rendering the image 701, the VR wearable device uses a higher image rendering frame rate to render the little boy and a lower image rendering frame rate to render other objects such as the mountains, sea, birds, and boat.
  • The rendered objects are composited into the image 701.
  • In one implementation, the VR wearable device may by default treat the close-range object (such as the little boy) as the object the user pays attention to; in another implementation, the VR wearable device may track the user's gaze point to determine the object the user pays attention to, and when that object is the little boy, render the little boy at a higher image rendering frame rate and render other objects such as the mountains, sea, birds, and boat at a lower image rendering frame rate.
  • In this way, the number of frames of the attention object rendered per unit time is higher than that of the other objects; that is, some frames of the other objects are missing. The missing frames can be obtained by frame insertion. For example, if 60 frames of the attention object and 30 frames of the other objects are rendered per unit time, then 30 frames of the other objects are missing per unit time; 30 frames of the other objects can be inserted, so that after frame insertion there are 60 frames of the attention object and 60 frames of the other objects per unit time, and 60 frames of images can be composited and displayed (see the sketch below).
  • Since the image rendering frame rate of the other objects is low and frame interpolation is used, the user visually perceives the other objects in the virtual environment 702 as changing slowly. This has little impact on the user experience (the user is not paying attention to these objects) and saves rendering power consumption.
  • Moreover, the high rendering frame rate for the object the user pays attention to reduces latency and improves the user experience.
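  • An end-to-end sketch of the 60/30 example above (with hypothetical frame names): the other objects are padded from 30 to 60 frames by showing each rendered frame twice, then composited with the 60 attention-object frames.

```python
attention = [f"boy_{i}" for i in range(60)]          # attention object: 60 frames per unit time
others = [f"scene_{i}" for i in range(30)]           # other objects: 30 frames per unit time

padded_others = [others[i // 2] for i in range(60)]  # each scene frame is shown twice

composited = list(zip(attention, padded_others))     # 60 displayable frames
print(len(composited), composited[:3])
# 60 [('boy_0', 'scene_0'), ('boy_1', 'scene_0'), ('boy_2', 'scene_1')]
```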
  • FIG. 8 is a schematic diagram of a second application scenario provided by an embodiment of the present application.
  • An image 801 is displayed on the display screen of the VR wearable device.
  • the image 801 is a rendered three-dimensional image, and the three-dimensional image includes objects such as a virtual theater and a video playback interface. Therefore, what the user sees when wearing the VR wearable device is the virtual environment 802 of watching a movie in the theater.
  • In one implementation, the VR wearable device may by default treat the close-range object (such as the video playback interface) as the object the user pays attention to; in another implementation, the VR wearable device may track the user's gaze point to determine the object the user pays attention to. When the object the user pays attention to is the video playback interface, the video playback interface is rendered at a higher image rendering frame rate, and the virtual theater and other objects are rendered at a lower image rendering frame rate.
  • Alternatively, when the VR wearable device renders images, it can use a high frame rate for close-range objects and a low frame rate for distant objects.
  • Since the image depth h1 of the video playback interface is smaller than the image depth h2 of the virtual theater, that is, the video playback interface is a close-range object and the virtual theater is a distant object, when the VR wearable device renders the image 801 it uses a higher image rendering frame rate for the close-range object (such as the video playback interface) and a lower image rendering frame rate for the distant objects (the virtual theater, etc.).
  • The rendered near-field and far-field objects are composited into the image 801; for the distant-object frames that are missing per unit time, frame interpolation can be used.
  • The close-range objects may include real close-range objects, UI interfaces, and so on; in short, a close-range object may be any object whose image depth is less than a first threshold, or a UI interface.
  • FIG. 9 is a schematic diagram of a third application scenario provided by an embodiment of the present application.
  • The camera on the VR wearable device can collect images that include the real environment around the user (for example, real objects such as mountains and the sea), and the VR wearable device can composite the camera images of the real environment with virtual objects (for example, a UI interface) into three-dimensional images and display them.
  • the UI interface may be a UI interactive interface, such as a mobile phone desktop, a game operation interface, a video playback interface, and the like.
  • an image 901 is displayed on the display screen of the VR wearable device, and the image 901 is synthesized by images collected by a camera (including real objects such as mountains and seas) and virtual objects (including UI interfaces). Therefore, what the user sees when wearing the VR wearable device is the scene 902 in which the virtual UI interface is displayed in the real environment.
  • When the VR wearable device renders images, it can render virtual objects at a high frame rate and real objects at a low frame rate.
  • the VR wearable device can default to the virtual object that is the object that the user pays attention to; in another implementation, the VR wearable device can track the user's gaze point to determine the object that the user pays attention to.
  • When the object the user pays attention to is a virtual object, a higher image rendering frame rate is used to render the virtual object and a lower image rendering frame rate is used to render other objects such as real objects; conversely, when the object the user pays attention to is a real object, a higher image rendering frame rate is used to render that real object and a lower image rendering frame rate is used to render other objects such as virtual objects.
  • In this example, when the VR wearable device renders the image 901, it uses a higher image rendering frame rate to render the virtual objects (such as the UI interface) and a lower image rendering frame rate to render the real objects (mountains, sea, birds, ships, etc.).
  • The rendered real objects and virtual objects are composited into the image 901. Since the image rendering frame rate of the virtual objects is higher than that of the real objects, the number of virtual-object frames rendered per unit time is higher than that of real-object frames, and frame interpolation can be used for the missing real-object frames.
  • In this way, rendering power consumption is saved, and because the image rendering frame rate of the virtual object (the UI interface) is high, the response delay to operations is reduced and the user experience is better.
  • In some embodiments, a high frame rate may be used to render the virtual objects and some of the real objects, and a lower frame rate may be used for the other real objects. For example, when some real objects are located at the same depth of field as the virtual objects, or are closer to the user's eyes than the virtual objects, those real objects and the virtual objects may be rendered at the same high frame rate, while the other real objects are rendered at a lower frame rate.
  • the wearable device may be a VR wearable device, an AR wearable device, an MR wearable device, and the like.
  • FIG. 10 is a schematic structural diagram of a wearable device provided by an embodiment of the present application.
  • the wearable device 100 may include a processor 110 , a memory 120 , a sensor module 130 (which can be used to acquire the user's gesture), a microphone 140 , a button 150 , an input/output interface 160 , a communication module 170 , a camera 180 , and a battery 190, an optical display module 1100, an eye tracking module 1200, and the like.
  • the structures illustrated in the embodiments of the present application do not constitute a specific limitation on the wearable device 100 .
  • the wearable device 100 may include more or less components than shown, or combine some components, or separate some components, or arrange different components.
  • the illustrated components may be implemented in hardware, software, or a combination of software and hardware.
  • the processor 110 is generally used to control the overall operation of the wearable device 100, and may include one or more processing units.
  • the processor 110 may include an application processor (application processor, AP), a modem processor, a graphics processor ( graphics processing unit, GPU), image signal processor (image signal processor, ISP), video processing unit (video processing unit, VPU) controller, memory, video codec, digital signal processor (digital signal processor, DSP) , baseband processor, and/or neural-network processing unit (NPU), etc.
  • a memory may also be provided in the processor 110 for storing instructions and data.
  • the memory in processor 110 is cache memory. This memory may hold instructions or data that have just been used or recycled by the processor 110 . If the processor 110 needs to use the instruction or data again, it can be called directly from the memory. Repeated accesses are avoided and the latency of the processor 110 is reduced, thereby increasing the efficiency of the system.
  • the processor 110 may include one or more interfaces.
  • For example, the interface may include an inter-integrated circuit (I2C) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, a universal serial bus (USB) interface, a serial peripheral interface (SPI), and the like.
  • the processor 110 may render different objects based on different frame rates, for example, using a high frame rate for close-range objects and a low frame rate for distant objects.
  • the I2C interface is a bidirectional synchronous serial bus that includes a serial data line (SDA) and a serial clock line (SCL).
  • the processor 110 may contain multiple sets of I2C buses.
  • the UART interface is a universal serial data bus used for asynchronous communication.
  • the bus may be a bidirectional communication bus. It converts the data to be transmitted between serial communication and parallel communication.
  • a UART interface is typically used to connect the processor 110 with the communication module 170 .
  • the processor 110 communicates with the Bluetooth module in the communication module 170 through the UART interface to implement the Bluetooth function.
  • the MIPI interface can be used to connect the processor 110 with peripheral devices such as the display screen and the camera 180 in the optical display module 1100 .
  • the GPIO interface can be configured by software.
  • the GPIO interface can be configured as a control signal or as a data signal.
  • the GPIO interface can be used to connect the processor 110 with the camera 180 , the display screen in the optical display module 1100 , the communication module 170 , the sensor module 130 , the microphone 140 and the like.
  • the GPIO interface can also be configured as I2C interface, I2S interface, UART interface, MIPI interface, etc.
  • For example, the camera 180 may collect images including real objects, the processor 110 may fuse the images collected by the camera with virtual objects, and the fused images may be displayed through the optical display module 1100; for details, refer to the scenario shown in FIG. 9, which will not be repeated here.
  • the USB interface is an interface that conforms to the USB standard specification, which can be a Mini USB interface, a Micro USB interface, a USB Type C interface, etc.
  • the USB interface can be used to connect a charger to charge the wearable device 100, and can also be used to transmit data between the wearable device 100 and peripheral devices. It can also be used to connect headphones to play audio through the headphones.
  • the interface can also be used to connect other electronic devices, such as mobile phones.
  • the USB interface can be USB3.0, which is compatible with high-speed display port (DP) signal transmission, and can transmit high-speed video and audio data.
  • the interface connection relationship between the modules illustrated in the embodiments of the present application is only a schematic illustration, and does not constitute a structural limitation of the wearable device 100 .
  • the wearable device 100 may also adopt different interface connection manners in the foregoing embodiments, or a combination of multiple interface connection manners.
  • the wearable device 100 may include a wireless communication function.
  • For example, the wearable device 100 may receive a rendered image from another electronic device (such as a VR host or a VR server) for display, or receive an unrendered image which the processor 110 then renders and displays.
  • the communication module 170 may include a wireless communication module and a mobile communication module.
  • the wireless communication function may be implemented by an antenna (not shown), a mobile communication module (not shown), a modem processor (not shown), a baseband processor (not shown), and the like.
  • Antennas are used to transmit and receive electromagnetic wave signals.
  • the wearable device 100 may include multiple antennas, and each antenna may be used to cover a single or multiple communication frequency bands. Different antennas can also be reused to improve antenna utilization.
  • the antenna 1 can be multiplexed as a diversity antenna of the wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
  • The mobile communication module can provide wireless communication solutions such as second generation (2G), third generation (3G), fourth generation (4G), and fifth generation (5G) networks.
  • the mobile communication module may include at least one filter, switch, power amplifier, low noise amplifier (LNA) and the like.
  • the mobile communication module can receive electromagnetic waves by the antenna, filter and amplify the received electromagnetic waves, and transmit them to the modulation and demodulation processor for demodulation.
  • the mobile communication module can also amplify the signal modulated by the modulation and demodulation processor, and then convert it into electromagnetic waves for radiation through the antenna.
  • at least part of the functional modules of the mobile communication module may be provided in the processor 110 .
  • at least part of the functional modules of the mobile communication module may be provided in the same device as at least part of the modules of the processor 110 .
  • the modem processor may include a modulator and a demodulator.
  • the modulator is used to modulate the low frequency baseband signal to be sent into a medium and high frequency signal.
  • the demodulator is used to demodulate the received electromagnetic wave signal into a low frequency baseband signal. Then the demodulator transmits the demodulated low-frequency baseband signal to the baseband processor for processing.
  • the low frequency baseband signal is processed by the baseband processor and passed to the application processor.
  • the application processor outputs sound signals through audio devices (not limited to speakers, etc.), or displays images or videos through the display screen in the optical display module 1100 .
  • the modem processor may be a stand-alone device.
  • the modulation and demodulation processor may be independent of the processor 110, and may be provided in the same device as the mobile communication module or other functional modules.
  • the wireless communication module can provide applications on the wearable device 100 including wireless local area networks (WLAN) (such as wireless fidelity (Wi-Fi) networks), bluetooth (BT), global navigation satellite system (global navigation satellite system, GNSS), frequency modulation (frequency modulation, FM), near field communication technology (near field communication, NFC), infrared technology (infrared, IR) and other wireless communication solutions.
  • the wireless communication module may be one or more devices integrating at least one communication processing module.
  • the wireless communication module receives electromagnetic waves via the antenna, frequency modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 110 .
  • the wireless communication module can also receive the signal to be sent from the processor 110, perform frequency modulation on it, amplify it, and radiate it into electromagnetic waves through the antenna.
  • the antenna of the wearable device 100 is coupled with the mobile communication module, so that the wearable device 100 can communicate with the network and other devices through wireless communication technology.
  • the wireless communication technology may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code Division Multiple Access (WCDMA), Time Division Code Division Multiple Access (TD-SCDMA), Long Term Evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technology, etc.
  • GNSS may include global positioning system (GPS), global navigation satellite system (GLONASS), Beidou navigation satellite system (BDS), quasi-zenith satellite system (quasi-zenith) satellite system, QZSS) and/or satellite based augmentation systems (SBAS).
  • the wearable device 100 realizes the display function through the GPU, the optical display module 1100, and the application processor.
  • the GPU is a microprocessor for image processing, and is connected to the optical display module 1100 and the application processor.
  • the GPU is used to perform mathematical and geometric calculations for graphics rendering.
  • Processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
  • Memory 120 may be used to store computer-executable program code, which includes instructions.
  • the processor 110 executes various functional applications and data processing of the wearable device 100 by executing the instructions stored in the memory 120 .
  • the memory 120 may include a stored program area and a stored data area.
  • the storage program area can store an operating system, an application program required for at least one function (such as a sound playback function, an image playback function, etc.), and the like.
  • the storage data area can store data (such as audio data, phone book, etc.) created during the use of the wearable device 100 and the like.
  • the memory 120 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, universal flash storage (UFS), and the like.
  • the wearable device 100 may implement audio functions through an audio module, a speaker, a microphone 140, an earphone interface, and an application processor. Such as music playback, recording, etc.
  • the audio module is used to convert digital audio information into analog audio signal output, and also used to convert analog audio input to digital audio signal.
  • the audio module can also be used to encode and decode audio signals.
  • the audio module may be provided in the processor 110 , or some functional modules of the audio module may be provided in the processor 110 .
  • the speaker, also called a "horn", is used to convert an audio electrical signal into a sound signal.
  • the wearable device 100 can listen to music through the speaker, or listen to a hands-free call.
  • the microphone 140, also called a "mic" or "sound transducer", is used to convert sound signals into electrical signals.
  • the wearable device 100 may be provided with at least one microphone 140 .
  • the wearable device 100 may be provided with two microphones 140, which can implement a noise reduction function in addition to collecting sound signals.
  • the wearable device 100 may further be provided with three, four or more microphones 140 to collect sound signals, reduce noise, identify sound sources, and implement directional recording functions.
  • the headphone jack is used to connect wired headphones.
  • the headphone interface may be a USB interface, a 3.5 mm open mobile terminal platform (OMTP) standard interface, or a cellular telecommunications industry association of the USA (CTIA) standard interface.
  • the wearable device 100 may include one or more buttons 150 that may control the wearable device and provide the user with access to functions on the wearable device 100 .
  • the keys 150 may be in the form of buttons, switches, dials, and touch or near-touch sensing devices (e.g., touch sensors). For example, the user can turn on the optical display module 1100 of the wearable device 100 by pressing a button.
  • the keys 150 include a power-on key, a volume key, and the like.
  • the keys 150 may be mechanical keys. It can also be a touch key.
  • the wearable device 100 may receive key input, and generate key signal input related to user settings and function control of the wearable device 100 .
  • the wearable device 100 may include an input/output interface 160, and the input/output interface 160 may connect other devices to the wearable device 100 through suitable components.
  • Components may include, for example, audio/video jacks, data connectors, and the like.
  • the optical display module 1100 is used to present images to the user under the control of the processor.
  • the optical display module 1100 can convert a real pixel image display into a virtual image display of near-eye projection through one or more optical devices such as a reflector, a transmission mirror or an optical waveguide, so as to realize a virtual interactive experience, or an interactive experience combining virtuality and reality.
  • the optical display module 1100 receives the image data information sent by the processor, and presents the corresponding image to the user.
  • the wearable device 100 may further include an eye-tracking module 1200, and the eye-tracking module 1200 is configured to track the movement of the human eye, thereby determining the gaze point of the human eye.
  • image processing technology can be used to locate the position of the pupil, obtain the coordinates of the center of the pupil, and then calculate the gaze point of the person.
  • the implementation principle of the eye tracking module 1200 may be to collect the image of the user's eyes through a camera. Based on the image of the user's eyes, the coordinates of the position on the display screen that the user's eyes are looking at are calculated, and the coordinate position is the user's gaze point, and the gaze point is sent to the processor 110 .
  • the processor 110 may render the object at the gaze point using a high rendering frame rate.
  • the eye tracking module 1200 may include an infrared transmitter, and the infrared light emitted by the infrared transmitter is directed to the pupil of the user's eye.
  • the cornea of the eye reflects infrared light, and an infrared camera tracks the reflected infrared light, thereby tracking the movement of the gaze point.
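  • Illustrative note: a minimal sketch, not drawn from the disclosure, of mapping a detected pupil centre (in normalized eye-camera coordinates) to a gaze point on the display through a simple affine calibration; the calibration parameters and screen resolution are hypothetical stand-ins for a real per-user calibration model.
    def pupil_to_gaze(pupil_xy, calib, screen_w=1920, screen_h=1080):
        # pupil_xy: (x, y) in [0, 1] from the eye camera; calib: (sx, sy, ox, oy).
        sx, sy, ox, oy = calib
        gx = min(max(pupil_xy[0] * sx + ox, 0.0), 1.0) * screen_w
        gy = min(max(pupil_xy[1] * sy + oy, 0.0), 1.0) * screen_h
        return gx, gy  # gaze point handed to the processor for high-frame-rate rendering

    if __name__ == "__main__":
        # Identity-like calibration, pupil slightly right of and above centre.
        print(pupil_to_gaze((0.55, 0.48), (1.0, 1.0, 0.0, 0.0)))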
  • FIG. 11 is a schematic flowchart of a display information processing method provided by an embodiment of the present application.
  • the method may be applicable to a wearable device (e.g., a VR wearable device), or to another electronic device connected to the wearable device (e.g., a VR host or a VR server).
  • as shown in FIG. 11, the flow of the method includes:
  • S1: Determine the first object.
  • the first object may be the user's point of interest among all objects to be rendered.
  • Mode 1: Determine the gaze point of the user according to the eye-tracking technology; the gaze point is the point of interest. For example, taking FIG. 7 as an example, the VR wearable device determines according to the eye-tracking technology that the user is looking at the little boy, and determines that the little boy is the point of interest.
  • Mode 2: The point of interest may be a preset object, and the preset object includes a UI interface, a close-range object, a virtual object, and the like.
  • the scene in which the point of interest is a close-range object or a UI interface is shown in FIG. 8
  • the scene in which the point of interest is a virtual object is shown in FIG. 9 .
  • Mode 1 and Mode 2 may be used alone or in combination, which are not limited in the embodiments of the present application.
  • S2: Determine the second object. For example, the second object is an object other than the first object among all the objects to be rendered.
  • for example, the first object is an object at a first depth of field (such as a close-range object), and the second object may be an object at a second depth of field (such as a distant object) and/or an object at a third depth of field (such as a medium-range object); that is, the image depth of the second object is greater than the image depth of the first object.
  • exemplarily, the first image depth of the first object is less than a first threshold, the second image depth of the second object is greater than a second threshold, and the first threshold is less than or equal to the second threshold.
  • the specific values of the first threshold and the second threshold are not limited in this embodiment of the present application.
  • for example, the image depths of close-range objects and distant objects can be seen in Table 1 below:
  • Table 1: Image depth ranges of close-range objects and distant objects
  • object / image depth: close-range object, 0.1 m to 10 m; distant object, 100 m to 1000 m
  • S3: Render the first object at a first image rendering frame rate, where the first image rendering frame rate indicates the number of frames of the first object that can be rendered within a certain period of time.
  • S4: Render the second object at a second image rendering frame rate, where the second image rendering frame rate indicates the number of frames of the second object that can be rendered within the same period of time, and the first image rendering frame rate is greater than the second image rendering frame rate (an illustrative sketch follows below).
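  • Illustrative note (not part of the original disclosure): a minimal Python sketch of the idea of S3/S4 under assumed numbers, rendering the close-range layer N times and the distant layer M times within one unit of time; the function names and frame counts are hypothetical.
    def render_layers(unit_time_ms, n_near, m_far, render_near, render_far):
        # Render the close-range object n_near times and the distant object m_far
        # times within one unit of time, returning (timestamp, frame) lists.
        near_frames = [(i * unit_time_ms / n_near, render_near(i)) for i in range(n_near)]
        far_frames = [(i * unit_time_ms / m_far, render_far(i)) for i in range(m_far)]
        return near_frames, far_frames

    if __name__ == "__main__":
        near, far = render_layers(60, 6, 3,
                                  render_near=lambda i: f"near_frame_{i}",
                                  render_far=lambda i: f"far_frame_{i}")
        print(len(near), len(far))  # 6 close-range frames, 3 distant frames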
  • for example, take the case where the first object is a close-range object whose first image rendering frame rate is N, and the second object is a distant object whose second image rendering frame rate is M; this illustrates the rendering principle of the first object and the second object, where M and N are positive integers and N is greater than M.
  • N frames of close-range objects and M frames of distant objects are rendered in a unit time. Since N is greater than M, there are N-M frames more for close-range objects than for distant objects in unit time.
  • the number of frames M of the distant object per unit time is less than the number of frames N of the close-range object. Therefore, before fusion, it is necessary to perform frame interpolation processing on the distant object, and insert N-M frames of distant objects to ensure that the close-range objects are inserted. Same number of frames as far objects before blending.
  • N frames of close-range objects and M frames of distant objects are rendered within a certain period of time, where N is greater than M. Due to the small number of frames of distant objects, N-M frames of distant objects can be inserted.
  • the inserted N-M frames of distant objects may be duplicates of at least one frame of the M frames of distant objects.
  • a frame may be inserted every few frames, which is not limited in this embodiment of the present application.
  • the number of frames of close-range objects and distant-view objects is the same, which is N, and N-frames of close-range objects and N-frames of distant objects can be fused correspondingly to obtain N-frame fused images.
  • the inserted P-N frame fusion image may be an image obtained by translating and/or rotating at least one frame of the N frame fusion images according to the posture of the VR wearable device.
  • N-M frames of the distant object can be inserted.
  • the inserted N-M frames of distant objects may be distant objects obtained after at least one frame of the M frames of distant objects is rotated and/or translated according to the posture of the VR wearable device.
  • a frame may be inserted every few frames, which is not limited in this embodiment of the present application.
  • the number of frames of the near-field objects and the distant-view objects is the same as N, and the N-frames of near-view objects and N-frames of distant-view objects can be correspondingly fused to obtain N-frame fusion images.
  • the difference between FIG. 13B and FIG. 13A is that the inserted N-M frames of distant objects are different. If, as in FIG. 13A, the inserted image is a copy of the previous frame, the workload is smaller and the efficiency is higher. If, as in FIG. 13B, the inserted image is the previous frame after translation and/or rotation according to the posture of the VR wearable device, the image seen by the user is adapted to the user's posture (the user's posture corresponds to the posture of the VR wearable device), and the user experience is better. A sketch of both interpolation choices is given below.
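  • Illustrative note: a minimal sketch, assuming a toy frame representation and a simple pose delta, of the two interpolation choices discussed above (copying the previous distant frame versus re-projecting it by the headset's motion); all names and fields are hypothetical.
    def interpolate_far_frame(prev_frame, pose_delta=None):
        # FIG. 13A style: plain copy of the previous distant-object frame.
        if pose_delta is None:
            return dict(prev_frame)
        # FIG. 13B style: translate/rotate the copy by the headset motion since then.
        shifted = dict(prev_frame)
        shifted["x"] = shifted.get("x", 0.0) + pose_delta.get("dx", 0.0)
        shifted["y"] = shifted.get("y", 0.0) + pose_delta.get("dy", 0.0)
        shifted["yaw"] = shifted.get("yaw", 0.0) + pose_delta.get("dyaw", 0.0)
        return shifted

    if __name__ == "__main__":
        frame = {"x": 0.0, "y": 0.0, "yaw": 0.0, "content": "mountain"}
        print(interpolate_far_frame(frame))                             # copied
        print(interpolate_far_frame(frame, {"dx": 0.01, "dyaw": 2.0}))  # pose-adjusted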
  • S5 may include the following steps:
  • Step 1 align the N frames of close-range objects and M frames of distant objects.
  • the rendering time of N frames of close-range objects and the rendering time of M frames of distant objects may be staggered.
  • the rendering time of the close-range objects in the first frame and the distant objects in the first frame is the same, that is, the rendering starts at the same time, but due to the different rendering frame rates, the rendering times of the close-range objects in the second frame and the distant objects in the second frame are different. Therefore, in step 1, N frames of near-field objects and M frames of far-field objects can be aligned.
  • the first alignment method is to determine the jth frame of close-range objects in the N-frame close-range objects that is close to the rendering time of the i-th frame of distant-view objects in the M-frame distant-view objects, and align the i-th frame of distant view objects with the jth frame of close-range objects.
  • for example, if the distant object in the i-th frame is the distant object in the second frame, and it is determined that, among the N frames of close-range objects, the rendering time of the close-range object in the third frame is closest to that of the distant object in the second frame, then the distant object in the second frame is aligned with the close-range object in the third frame; the effect after alignment is shown in FIG. 14A.
  • for example, if close-range objects are rendered at 2 times the rendering speed of distant objects, with one frame of close-range objects rendered every T ms and one frame of distant objects every 2T ms, then the first frame of close-range objects and the first frame of distant objects are rendered at T ms, the second frame of close-range objects is rendered at 2T ms (at this time the second frame of distant objects is not yet rendered), and the third frame of close-range objects and the second frame of distant objects are rendered at 3T ms.
  • in this case, the rendering times of the close-range objects and the distant objects are already aligned, and no additional alignment is required.
  • the second alignment method is to align the M frames of distant objects with the first M frames among the N frames of close-range objects.
  • for example, the distant object in the first frame is aligned with the close-range object in the first frame, the distant object in the second frame is aligned with the close-range object in the second frame, and so on, as shown in FIG. 14B. A sketch of both alignment strategies is given below.
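  • Illustrative note: a small sketch, under assumed timestamps, of the two alignment strategies above; method 1 pairs each distant frame with the close-range frame whose render time is closest, and method 2 pairs the M distant frames with the first M close-range frames.
    def align_by_time(near_times, far_times):
        # Method 1: map each distant-frame index to the index of the close-range
        # frame whose render time is closest.
        return {i: min(range(len(near_times)), key=lambda k: abs(near_times[k] - t))
                for i, t in enumerate(far_times)}

    def align_by_index(num_far):
        # Method 2: the i-th distant frame is aligned with the i-th close-range frame.
        return {i: i for i in range(num_far)}

    if __name__ == "__main__":
        near = [1, 2, 3, 4, 5, 6]        # close-range frames rendered every 1 ms
        far = [1, 3, 5]                  # distant frames rendered every 2 ms
        print(align_by_time(near, far))  # {0: 0, 1: 2, 2: 4}
        print(align_by_index(len(far)))  # {0: 0, 1: 1, 2: 2}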
  • Step 2 inserting N-M frames of distant objects, so that the number of frames of distant objects reaches N frames.
  • the number of frames of the distant object is less than the number of frames of the close-range object by N-M frames, so after aligning the distant object and the close-range object in the previous step 1, there are N-M frames of the close-range object that do not correspond to the distant object.
  • N-M frames of distant objects are inserted, and the inserted N-M frames of distant objects correspond to close-range objects that do not correspond to distant objects among the N-frames of close-range objects.
  • the first case is for the first alignment method
  • the second case is for the second alignment method
  • the alignment is the previous first alignment (ie, the alignment in FIG. 14A ).
  • the first frame insertion method may be as shown in FIG. 15A: a frame of distant object is inserted between the first frame of distant object and the second frame of distant object, and the inserted distant object may be the previous frame of distant object, that is, the first frame of distant object; a frame of distant object is inserted between the second frame of distant object and the third frame of distant object, and the inserted distant object may be the previous frame of distant object, that is, the second frame of distant object; and so on. After N-M frames of distant objects are inserted, the number of frames of distant objects reaches N.
  • This frame insertion method can be simply understood as inserting the previous frame of distant objects at the missing frame.
  • the second frame insertion manner may also be as shown in FIG. 15A: a frame of distant object is inserted between the first frame of distant object and the second frame of distant object, and the inserted distant object may be the image obtained after the first frame of distant object is translated and/or rotated according to the posture of the VR wearable device.
  • the difference from the first frame insertion method is that the first method directly inserts the previous frame of distant object between the first frame and the second frame, while the second method inserts, between the first frame and the second frame, the image obtained after the previous frame of distant object is rotated and/or translated according to the posture of the VR wearable device.
  • similarly, a frame of distant object is inserted between the second frame of distant object and the third frame of distant object, and the inserted distant object may be the image obtained after the second frame of distant object is rotated and/or translated according to the posture of the VR wearable device, and so on.
  • this frame insertion method can be simply understood as inserting, at the missing frame, an image obtained by processing the previous frame of distant object.
  • the alignment is the previous second alignment (ie, the alignment of FIG. 14B ).
  • the first frame insertion method is, as shown in FIG. 15B, to insert N-M frames of distant objects after the M-th frame of distant objects.
  • the inserted N-M frames of distant objects may include at least one frame of the M frames of distant objects.
  • for example, the inserted N-M frames of distant objects are all the M-th frame of distant object, that is, the (M+1)-th frame to the N-th frame are all copies of the M-th frame.
  • the second frame insertion method is, as shown in FIG. 15B, to insert N-M frames of distant objects after the M-th frame of distant objects, where the inserted N-M frames of distant objects may include distant objects obtained after at least one frame of the M frames of distant objects is processed (rotated and/or translated) according to the posture of the VR wearable device; for example, the inserted N-M frames of distant objects are all obtained by processing (rotating and/or translating) the M-th frame of distant object according to the posture of the VR wearable device.
  • the difference from the first frame insertion method is that the first method directly inserts the M-th frame of distant object at the missing frames, while the second method inserts, at the missing frames, the distant object obtained after the M-th frame of distant object is rotated and/or translated according to the posture of the VR wearable device.
  • after the frame insertion, the number of frames of the distant objects and that of the close-range objects are the same, which is N, and step 3 can be performed.
  • Step 3: correspondingly fuse the N frames of distant objects with the N frames of close-range objects.
  • for the first case, the first frame of close-range objects and the first frame of distant objects are fused to obtain the first frame of fused image, the second frame of close-range objects and the inserted distant object are fused to obtain the second frame of fused image, and so on, to obtain N frames of fused images.
  • for the second case, the first frame of close-range objects and the first frame of distant objects are fused to obtain the first frame of fused image, and so on, until the M-th frame of close-range objects and the M-th frame of distant objects are fused to obtain the M-th frame of fused image; then the (M+1)-th frame of distant object (the first inserted frame of distant object) is fused with the (M+1)-th frame of close-range objects to obtain the (M+1)-th frame of fused image, and so on, to obtain N frames of fused images. A minimal sketch of this per-frame fusion follows below.
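  • Illustrative note: a minimal sketch of step 3 under the assumption that, after interpolation, both layers contain N frames; the k-th distant frame and the k-th close-range frame are combined into the k-th output frame (a dictionary merge stands in for real layer compositing).
    def fuse_layers(near_frames, far_frames):
        # Pairwise fusion: requires both layers to have the same number of frames.
        assert len(near_frames) == len(far_frames)
        return [{"far": f, "near": n} for n, f in zip(near_frames, far_frames)]

    if __name__ == "__main__":
        near = [f"boy_pose_{k}" for k in range(6)]                    # N = 6 rendered
        far = ["mtn_0", "mtn_0", "mtn_1", "mtn_1", "mtn_2", "mtn_2"]  # M = 3 rendered + 3 inserted
        for k, frame in enumerate(fuse_layers(near, far)):
            print(k, frame)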
  • N frames of fused images are displayed through a virtual display device.
  • among the N frames of fused images, the distant object in the j-th frame of fused image is the same as the distant object in the i-th frame of fused image, while the close-range objects are different, and i is less than j.
  • for example, i = 1 and j = 2.
  • the distant object in the jth frame fused image is the copied distant object in the ith frame fused image or an object obtained by rotating and/or translating the distant object in the ith frame fused image. Therefore, from the user's point of view, the distant object does not change, and the close object changes.
  • S6 may further include the following steps: determining the image refresh frame rate P of the virtual display device (such as a VR wearable device), where P is greater than N and the image refresh frame rate indicates the number of frames refreshed per unit time; and performing frame interpolation on the N frames of fused images so that the number of fused images reaches P, ensuring that there are enough images to refresh on the display screen.
  • the fused image includes N frames, the image refresh frame rate is P, and N is less than P.
  • P-N frames of fused images are inserted, and the inserted P-N frames of fused images may include at least one frame of the N frames of fused images; for example, they may all be copies of the N-th frame of fused image (see the padding sketch below).
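  • Illustrative note: a minimal sketch, with made-up frame counts, of padding the N fused frames up to the display's refresh frame rate P by repeating an existing frame (a pose-adjusted copy could be used instead).
    def pad_to_refresh_rate(fused_frames, refresh_frames):
        # Repeat the last fused frame until there are enough frames for the panel.
        padded = list(fused_frames)
        while len(padded) < refresh_frames:
            padded.append(padded[-1])
        return padded

    if __name__ == "__main__":
        fused = [f"fused_{k}" for k in range(60)]    # N = 60 fused frames
        print(len(pad_to_refresh_rate(fused, 90)))   # P = 90 frames for a 90 Hz panel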
  • the near-field objects are rendered at a high image rendering frame rate
  • the far-field objects are rendered at a low image rendering frame rate.
  • the image rendering frame rate of the virtual object can be adjusted according to the change of the user's attention to the virtual object.
  • the image rendering frame rate corresponding to the virtual object increases.
  • the image rendering frame rate corresponding to the virtual object is reduced.
  • the VR wearable device may determine the user's degree of attention to a virtual object according to the degree of interaction between the user and the virtual object. For example, if it is detected that the user interacts with the distant object more frequently, it is determined that the user pays attention to the distant object. Alternatively, the VR wearable device determines through eye tracking that the user's eyes are gazing at the distant object, and thereby determines that the user pays attention to the distant object. A sketch of this attention-driven adjustment follows below.
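  • Illustrative note: a sketch, under an assumed total rendering budget and made-up numbers, of raising the frame rate of the layer the user attends to while lowering the others, in the spirit of the adjustment described above.
    def adjust_frame_rates(rates, focused_layer, total_budget=90):
        # Boost the attended layer and share the remaining budget among the others.
        rates = dict(rates)
        boosted = min(total_budget, rates[focused_layer] * 2)
        remaining = max(total_budget - boosted, 15)
        others = [k for k in rates if k != focused_layer]
        for k in others:
            rates[k] = remaining // max(len(others), 1)
        rates[focused_layer] = boosted
        return rates

    if __name__ == "__main__":
        # The user starts paying attention to the distant object.
        print(adjust_frame_rates({"near": 60, "far": 30}, focused_layer="far"))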
  • different rendering frame rates are used for close-range objects and distant objects.
  • in other embodiments, the multiple virtual objects to be rendered may be divided into more depth levels according to image depth information, for example, into a first object, a second object and a third object, wherein the first image depth of the first object is less than the third image depth of the third object, and the third image depth of the third object is less than the second image depth of the second object.
  • the first object may be referred to as a "close-range object”
  • the third object may be referred to as a "medium-range object”
  • the second object may be referred to as a "far-range object”.
  • the depth of the first image of the first object is less than the first threshold
  • the depth of the third image of the third object is greater than the first threshold and less than the second threshold
  • the depth of the second image of the second object is greater than the second threshold.
  • the specific values of the first threshold and the second threshold are not limited in this embodiment of the present application.
  • the depth threshold ranges of close-range objects, medium-range objects and distant objects are shown in Table 2 below:
  • Table 2 Image Depth Ranges for Near, Medium, and Far Objects
  • object / image depth: close-range object, 0.1 m to 10 m; medium-range object, 10 m to 100 m; distant object, 100 m to 1000 m (see the classification sketch below)
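  • Illustrative note: a minimal sketch that buckets each object into a depth level using the Table 2 ranges and looks up a rendering frame rate for that level; the frame-rate values are illustrative assumptions, not values from the disclosure.
    DEPTH_LEVELS = [            # (upper bound in metres, level name, example frame rate)
        (10.0,   "close-range",  60),
        (100.0,  "medium-range", 45),
        (1000.0, "distant",      30),
    ]

    def classify(depth_m):
        for upper, name, fps in DEPTH_LEVELS:
            if depth_m <= upper:
                return name, fps
        return "distant", 30     # anything beyond 1000 m is treated as distant

    if __name__ == "__main__":
        for obj, depth in [("boy", 2.0), ("boat", 40.0), ("mountain", 800.0)]:
            print(obj, classify(depth))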
  • the first image rendering frame rate N of the close-range object is greater than the third image rendering frame rate K of the mid-range object, and the third image rendering frame rate K of the mid-range object is greater than the second image rendering frame rate M of the distant object.
  • within a certain period of time, N frames of close-range objects, K frames of medium-range objects, and M frames of distant objects are rendered, where N is greater than K and K is greater than M. Since the numbers of frames of distant objects and medium-range objects are smaller, frame insertion is needed, for example, inserting N-K frames of medium-range objects (the inserted N-K frames of medium-range objects may be copies of at least one frame of the K frames of medium-range objects) and inserting N-M frames of distant objects (the inserted N-M frames of distant objects may be copies of at least one frame of the M frames of distant objects).
  • after interpolation, the numbers of frames of close-range objects, medium-range objects, and distant objects are all N, and they can be fused correspondingly to obtain N frames of fused images. If N is less than the image refresh frame rate P, P-N frames of fused images are further inserted to obtain P frames of fused images, which are then displayed.
  • the inserted P-N frame fusion image may be an image obtained by translating and/or rotating at least one frame of the N frame fusion images according to the posture of the VR wearable device.
  • alternatively, N frames of close-range objects, K frames of medium-range objects, and M frames of distant objects are rendered, where N is greater than K and K is greater than M. Since the numbers of frames of distant objects and medium-range objects are smaller, frame insertion is needed: N-K frames of medium-range objects are inserted (the inserted N-K frames of medium-range objects may be obtained after at least one frame of the K frames of medium-range objects is rotated and/or translated according to the posture of the VR wearable device), and N-M frames of distant objects are inserted (the inserted N-M frames of distant objects may be obtained after at least one frame of the M frames of distant objects is rotated and/or translated according to the posture of the VR wearable device).
  • the number of frames of the close-range object, the medium-range object, and the distant object are the same as N, which can be fused correspondingly to obtain N-frame fused images.
  • take as an example that the close-range object is a boy, the medium-range object is a boat, and the distant object is a mountain.
  • the close-range object is rendered one frame every 1 ms, the medium-range object one frame every 1.33 ms, and the distant object one frame every 2 ms.
  • assume that the close-range object, the medium-range object, and the distant object start to be rendered at the same time: the first frame of the close-range object, the first frame of the medium-range object, and the first frame of the distant object are rendered at the 1st ms, the second frame of the close-range object is rendered at the 2nd ms, and so on.
  • the unit time can be a time period of any length, such as 1s (ie, 1000ms).
  • the medium-range object in the first frame is aligned with the close-range object in the first frame.
  • the rendering time of the medium-range object in the second frame is 2.33 ms, which is closest to the rendering time of the close-range object in the second frame, so the medium-range object in the second frame is aligned with the close-range object in the second frame, as shown in FIG. 18.
  • the rendering time of the medium-range object in the third frame is 3.66 ms, which is closer to the rendering time of the close-range object in the fourth frame (i.e., the 4th ms), so the medium-range object in the third frame is aligned with the close-range object in the fourth frame, and so on. Since the distant objects are already aligned with the close-range objects, there is no need to align them again.
  • for the missing medium-range objects, frames can be interpolated.
  • the medium-range object inserted here may be the medium-range object in the previous frame (that is, the second frame of the medium-range object) or the previous frame of the medium-range object processed (rotated and/or translated) according to the posture of the VR wearable device.
  • after interpolation, the number of frames of medium-range objects reaches 60.
  • similarly, a frame of distant object is inserted between the first frame of distant object and the second frame of distant object, where the inserted distant object may be the previous frame (i.e., the first frame of distant object) or the previous frame of distant object processed (rotated and/or translated) according to the posture of the VR wearable device.
  • a frame of distant object is inserted between the second frame of distant object and the third frame of distant object, where the inserted distant object may be the previous frame (i.e., the second frame of distant object) or the previous frame of distant object processed (rotated and/or translated) according to the posture of the VR wearable device, and so on. After 30 frames of distant objects are inserted, the number of frames of distant objects reaches 60.
  • after the close-range objects, the medium-range objects, and the distant objects all reach 60 frames, the corresponding fusion can be performed.
  • for example, the first frame of close-range object, the first frame of medium-range object, and the first frame of distant object are fused to obtain the first frame of fused image; the second frame of close-range object is fused with the corresponding medium-range object and the inserted distant object to obtain the second frame of fused image; and so on, to obtain 60 frames of fused images.
  • the virtual display device sequentially displays the 60 frames of fused images, wherein the medium-range object on the third frame of fused image is the same as the medium-range object on the second frame of fused image.
  • this is because the medium-range object on the third frame of fused image is a copy of the medium-range object on the second frame of fused image, or the medium-range object on the second frame of fused image after processing (rotation and/or translation); therefore, when the display refreshes from the second frame of fused image to the third frame of fused image, in the user's view the medium-range object has not changed.
  • the close-range object on the third frame of fused image is different from the close-range object on the second frame of fused image; that is, relative to the second frame of fused image, the form of the close-range object (the form of the little boy) has changed. It should be understood that the more distant an object is, the more of its frames are interpolated, so distant objects appear to change the slowest. Therefore, when the 60 frames of fused images are refreshed, the user sees that the distant object changes the slowest, the medium-range object the second slowest, and the close-range object the fastest. The timing walk-through below illustrates this example.
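  • Illustrative note: a short walk-through, under the assumed 60 ms window, of the example above; it generates the render timestamps for the three depth levels and counts how many frames each level needs interpolated to reach 60 frames.
    def schedule(period_ms, window_ms=60.0):
        # Timestamps (in ms, starting at 0) at which a layer is rendered in the window.
        count = int(window_ms / period_ms + 1e-9)
        return [round(k * period_ms, 2) for k in range(count)]

    if __name__ == "__main__":
        near, mid, far = schedule(1.0), schedule(4.0 / 3.0), schedule(2.0)
        print(len(near), len(mid), len(far))   # 60, 45, 30 rendered frames
        print(60 - len(mid), 60 - len(far))    # 15 and 30 frames to interpolate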
  • in some embodiments, the effects presented by the same virtual object at different depth positions are different.
  • taking the same virtual object (such as the video playback interface in FIG. 9) as an example, since the rendering frame rate corresponding to the virtual object at the close-range position is higher, the virtual object at the close-range position changes faster and appears smoother.
  • the rendering frame rate corresponding to the virtual object at the distant position is relatively low, so the object at the distant position changes slowly and appears less smooth.
  • the image rendering frame rate is 30, which is less than the image refresh frame rate 90.
  • this low rendering frame rate is for the entire image, in other words, all virtual objects in each image correspond to the same rendering frame rate, that is, 30 frames.
  • the rendering frame rate of close-range objects is too low, which will lead to large trigger delay and jitter.
  • in contrast, in the embodiments of the present application, the rendering frame rates corresponding to different virtual objects on an image are different: a larger rendering frame rate can be used for close-range objects to ensure the viewing experience of close-range objects, while a relatively lower rendering frame rate can be used for medium-range objects and distant objects, which reduces the rendering power consumption without affecting the user experience.
  • the close-range objects and the distant-view objects have different rendering frame rates, so the number of frames that need to be inserted is different for the close-range objects and the distant-view objects.
  • correspondingly, the black edges corresponding to the close-range objects and the distant objects are different.
  • for example, one frame of close-range object is inserted between the i-th frame of close-range object and the (i+1)-th frame of close-range object, and the inserted close-range object is the i-th frame of close-range object processed (translated and/or rotated) according to the posture of the VR wearable device.
  • the width of the non-overlapping portion between the inserted close-range object and the i-th frame of close-range object is equal to the displacement of the VR wearable device. Since the image rendering frame rate corresponding to the close-range object is high, the time interval between the i-th frame of close-range object and the (i+1)-th frame of close-range object is short; within this interval, when the VR wearable device moves at a constant speed, its displacement is small, so the width of the non-overlapping portion between the inserted close-range object and the i-th frame of close-range object is small.
  • similarly, the inserted distant object is the i-th frame of distant object processed according to the posture of the VR wearable device, and the width of the non-overlapping portion between the inserted distant object and the i-th frame of distant object is equal to the displacement of the VR wearable device. Since the image rendering frame rate corresponding to the distant object is low, the time interval between the i-th frame of distant object and the (i+1)-th frame of distant object is long, the displacement of the VR wearable device within this interval is larger, and the width of the non-overlapping portion between the inserted distant object and the i-th frame of distant object is large. Therefore, the width of the black edge corresponding to the close-range object is smaller than the width of the black edge corresponding to the distant object, as the sketch below illustrates.
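  • Illustrative note: a tiny sketch, with assumed numbers, of why the black edge is narrower for the high-frame-rate layer; the non-overlapping strip left by re-projecting the previous frame grows with the headset displacement accumulated over one frame interval.
    def black_edge_width(device_speed_px_per_ms, frame_interval_ms):
        # Width of the strip with no rendered content after translating the previous frame.
        return device_speed_px_per_ms * frame_interval_ms

    if __name__ == "__main__":
        speed = 0.8                                  # assumed constant headset motion (px/ms)
        print(black_edge_width(speed, 1.0))          # close-range layer, rendered every 1 ms
        print(black_edge_width(speed, 2.0))          # distant layer, rendered every 2 ms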
  • the first object is a close-range object and the second object is a distant-view object.
  • the first object and the second object may not be divided according to the image depth, but determined based on other methods.
  • for example, the first object may be a virtual object at the central position of the image to be rendered, and the second object may be an object at another position of the image to be rendered.
  • alternatively, the first object may be an object or object type set by default by the system or specified by the user, and the second object may be all objects on the image to be rendered other than the first object, and so on.
  • FIG. 21 shows an electronic device 2000 provided by the present application.
  • the electronic device 2000 may be the aforementioned mobile phone.
  • the electronic device 2000 may include: one or more processors 2001; one or more memories 2002; a communication interface 2003; and one or more computer programs 2004, where the foregoing components may be connected through one or more buses 2005.
  • the one or more computer programs 2004 are stored in the aforementioned memory 2002 and configured to be executed by the one or more processors 2001, and the one or more computer programs 2004 comprise instructions that can be used to perform the relevant steps of the mobile phone in the corresponding embodiments above.
  • the communication interface 2003 is used to implement communication with other devices, for example, the communication interface may be a transceiver.
  • the methods provided by the embodiments of the present application have been introduced from the perspective of an electronic device (such as a mobile phone) as an execution subject.
  • the electronic device may include a hardware structure and/or software modules, and implement the above functions in the form of a hardware structure, a software module, or a hardware structure plus a software module. Whether a certain function of the above functions is performed in the form of a hardware structure, a software module, or a hardware structure plus a software module depends on the specific application and design constraints of the technical solution.
  • references in this specification to "one embodiment” or “some embodiments” and the like mean that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application.
  • appearances of the phrases “in one embodiment,” “in some embodiments,” “in other embodiments,” “in other embodiments,” etc. in various places in this specification are not necessarily All refer to the same embodiment, but mean “one or more but not all embodiments” unless specifically emphasized otherwise.
  • the terms “including”, “including”, “having” and their variants mean “including but not limited to” unless specifically emphasized otherwise.
  • in the above-mentioned embodiments, the implementation may be in whole or in part by software, hardware, firmware, or any combination thereof.
  • when software is used, the implementation may be in whole or in part in the form of a computer program product.
  • the computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, all or part of the processes or functions described in the embodiments of the present solution are generated.
  • the computer may be a general purpose computer, special purpose computer, computer network, or other programmable device.
  • the computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium; for example, the computer instructions may be transmitted from a website, computer, server, or data center to another website, computer, server, or data center in a wired (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (e.g., infrared, radio, microwave) manner.
  • the computer-readable storage medium can be any available medium that can be accessed by a computer or a data storage device such as a server, a data center, or the like that includes an integration of one or more available media.
  • the usable media may be magnetic media (eg, floppy disks, hard disks, magnetic tapes), optical media (eg, DVD), or semiconductor media (eg, Solid State Disk (SSD)), and the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Processing Or Creating Images (AREA)

Abstract

A display method and an electronic device, used to reduce image rendering power consumption. The method comprises: presenting N frames of images to a user through a display device, wherein a first object at a first depth of field in a j-th frame of the N frames of images is the same as the first object at the first depth of field in an i-th frame of the images, a second object at a second depth of field in the j-th frame is different from the second object at the second depth of field in the i-th frame, and N, i and j are positive integers with i less than j.

Description

一种显示方法与电子设备
相关申请的交叉引用
本申请要求在2021年05月07日提交中国专利局、申请号为202110496915.6、申请名称为“一种显示方法与电子设备”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及电子技术领域,尤其涉及一种显示方法与电子设备。
背景技术
虚拟现实(Virtual Reality,VR)技术是借助计算机及传感器技术创造的一种人机交互手段。VR技术综合了计算机图形技术、计算机仿真技术、传感器技术、显示技术等多种科学技术,可以创建虚拟环境,用户通过佩戴VR穿戴设备沉浸于虚拟环境中。
虚拟环境是通过许多张经过渲染的三维图像不断刷新而呈现出来的,三维图像中包括处于不同景深的对象,给用户带来立体感。一般,图像渲染帧率(单位时间内渲染的图像的帧数)越高越好,但是,受限于图形处理芯片(Graphics Processing Unit,GPU)的计算能力、设备的功耗等原因,往往难以提供较大的图像渲染帧率。
发明内容
本申请的目的在于提供了一种显示方法与电子设备,用于降低图像渲染所带来的功耗。
第一方面,提供一种显示方法,该方法可以由显示设备执行。其中,显示设备可以是VR显示设备、增强现实(Augmented Reality,AR)显示设备、混合现实技术(Mixed Reality,MR)显示设备,所述显示设备可以是可穿戴设备,比如头戴式设备(如,眼睛、头盔等)。或者,该方法也可以由与显示设备连接的电子设备执行,所述电子设备比如可以是主机(如VR主机)或服务器(如VR服务器)等。该方法中,通过显示设备向用户呈现N帧图像;其中,所述N帧图像中第j帧图像上第一景深处的第一对象与第i帧图像上第一景深处的第一对象相同;所述第j帧图像上第二景深处的第二对象与所述第i图像上第二景深处的第二对象不同;N、i、j为正整数,i小于j。
以VR为例,用户佩戴显示设备(如VR眼镜)时,可以看到虚拟环境,该虚拟环境是通过不断刷新图像(三维图像)而呈现出来的,所以,用户看到的对象具有景深,比如,用户会看到有些对象距离用户比较近,有些对象距离用户比较远。在本申请实施例中,由于第j帧图像和第i帧图像上第一景深处的第一对象相同,第二景深处的第二对象不同。对应的,在第i帧、第j帧不断播放过程中,在用户看来,第一景深处的第一对象相同(或不变),第二景深处的第二对象不同(或变化)。这样的话,第一景深的第一对象可以以较低的帧率渲染,比如,只渲染出一帧第一对象,之后的第二帧、第三帧等等都使用这一帧第一对象,大大节省了渲染功耗。
示例性的,所述第一对象和所述第二对象均为变化对象。其中,变化对象可以理解为, 在用户看来,第一对象和第二对象是不断变化的,比如动作、位置、形状、颜色或大小中的至少一项变化。
举例来说,第一景深的第一对象是小男孩踢球,第二景深的第二对象是海上的船只,小男孩和船只都是变化对象。通过本申请实施例的技术方案,用户看到的是小男孩不断变化,而海上的船只不变或变化较慢。可以简单的理解为,小男孩是实时变化的,而船只变化缓慢甚至不变。
在一种可能的设计中,所述第一景深大于所述第二景深。比如,所述第一景深大于第一阈值,和/或,所述第二景深小于第二阈值,所述第一阈值大于或等于所述第二阈值。其中,第一阈值和第二阈值的具体取值,本申请实施例不作限定。
也就是说,用户看到的虚拟环境中,距离用户较远的第一对象(可以理解为远景对象)不变,距离用户较近的第二对象(可以理解为近景对象)变化。一般来说,用户往往更关注距离用户较近的对象,所以本申请实施例中近景对象实时的变化,远景对象变化较小甚至可以不变,这样,既不影响用户观看体验,还可以节省渲染功耗。
在一种可能的设计中,当所述用户注视点的景深变化时,所述第二景深的变化。或者说,第二景深随着用户注视点的景深的变化而变化。比如,当用户的注视点从远到近变化(比如从10m往1m),第二景深也从远到近变化。这样的话,用户目光从远到近过程中,第二景深处的第二对象变化逐渐加快,避免用户关注点对应的对象不变,影响用户观看体验。具体的,后台的实现方式为,第二景深处的第二对象的图像渲染帧率增大,所以第二对象需要插帧的数量减少,看上去变化加快。
示例性的,所述第二景深为所述用户的注视点所在的景深。也就是说,用户注视点位于哪一个景深,那么用户看到的该景深处的对象可以实时变化,其它景深(比如第一景深)处的对象可以不变或变化较小。
示例性的,所述第二景深可以是预设对象所在景深,所述预设对象可以是虚拟对象、显示对象或界面中的一种或多种。所述预设对象可以是系统默认设置的,或者用户设置的。
在一些实施例中,所述N帧图像中第j帧图像上第一景深处的第一对象与第i帧图像上第一景深处的第一对象相同,包括:在所述第j帧图像和所述第i帧图像上,所述第一对象的动作、位置、形状、颜色或大小中的至少一项相同;所述第j帧图像上第二景深处的第二对象与所述第i图像上第二景深处的第二对象不同,包括:在所述第j帧图像和所述第i帧图像上,所述第二对象的动作、位置、形状、颜色或大小中的至少一项不同。
也就是说,用户佩戴显示设备(如VR眼镜)时,可以看到虚拟环境中,第一景深处的第一对象的不变(比如动作、位置、形状或大小中的至少一项相同),第二景深处的第二对象变化(如,动作、位置、形状或大小中的至少一项不同)。
在一种可能的设计中,所述第一对象和所述第二对象的类型不同。
示例性的,所述第一对象包括虚拟对象、显示对象或界面中的一种类型或多种类型;和/或,所述第二对象包括虚拟对象、显示对象或界面中的一种类型或多种类型。
举例来说,第一对象可以是虚拟对象(比如VR游戏人物),第二对象是真实对象,所述真实对象是指摄像头采集的真实世界中的对象。即,用户看到的是在真实世界中包括虚拟对象,其中,虚拟对象是实时变化的,真实世界变化较慢甚至不变。
再例如,第一对象可以是界面(比如视频播放界面)、第二对象可以是背景对象,比如虚拟影院等。这样,用户看到的是在虚拟影院中观看电影。具体地,电影是实时变化的, 虚拟应用变化较慢甚至不变。
在一种可能的设计中,第i帧图像可以是第j帧图像的前一帧图像,即i=j-1;或者,第i帧图像可以是第j帧图像的前n帧图像,即i=j-n,n>1,本申请实施例不作限定。
在一种可能的设计中,所述第j帧图像上第一景深处的第一对象与第i帧图像上第一景深处的第一对象相同,包括:所述第j帧图像上第一景深处的第一对象是复制所述第i帧图像上第一景深处的第一对象;或者,所述第j帧图像上第一景深处的第一对象是所述第i帧图像上第一景深处的第一对象经过平移和/或旋转后的对象。
这样的话,第j帧图像上第一景深处的第一对象不需要重新渲染,直接利用第i帧图像上的第一景深处的第一对象即可。比如,复制第i帧图像上第一景深处的第一对象或者将第i帧图像上第一景深处的第一对象经过平移和/或旋转,有助于节省渲染功耗。
在一种可能的设计中,所述j帧图像上第二景深处的第二对象与所述第i图像上第二景深处的第二对象不同,包括:所述j帧图像上第二景深处的第二对象与所述第i图像上第二景深处的第二对象是不同的对象;和/或,所述j帧图像上第二景深处的第二对象与所述第i图像上第二景深处的第二对象为同一对象的不同形态。
如前文所述,N帧图像播放的过程中,在用户看来,第一景深处的第一对象相同(或不变),第二景深处的第二对象不同(或变化)。比如,当前帧和上一帧上第二景深处的对象改变了,即有新对象进入虚拟环境的第二景深,或者,第二景深处的第二对象形态变化,所述形态包括第二对象的动作、位置、形状、大小、颜色等等。总之,用户看到的第二景深处的第二对象实时变化,用户观看体验较好。
在一种可能的设计中,在所述通过显示设备向用户呈现N帧图像之前,所述方法还包括:在一定时长内,生成M帧第一对象图像和N帧第二对象图像,M和N是正整数,M小于N;在所述M帧第一对象图像中插入N-M帧第一对象图像;其中,插入的N-M帧第一对象图像是复制所述M帧第一对象图像中至少一帧第一对象图像或者是所述至少一帧第一对象图像经过旋转和/或平移后的图像;将N帧第一对象图像和N帧第二对象图像对应融合得到所述N帧图像。
应理解,M<N,所以需要插帧N-M帧第一对象图像。可选的,插入的第一对象图像可以是复制上一帧或上一帧经过旋转和/或平移后的图像。比如,M=3、N=6,那么在3帧第一对象图像中每隔1帧插入1帧,插入的1帧可以是复制前一帧或前一帧经过旋转和/或平移后的图像。或者,插入的第一对象图像时可以是复制前n帧或前n帧经过旋转和/或平移后的图像,本申请实施例不作限定。
在一种可能的设计中,在所述M帧第一对象图像中插入N-M帧第一对象图像,包括;将所述N帧第二对象图像中的M帧第二对象图像与所述M帧第一对象图像对应,所述M帧第二对象图像与所述M帧第一对象图像生成时间相邻;插入N-M帧第一对象图像,其中,插入的N-M帧第一对象图像与所述N帧第二对象图像剩余的N-M帧第二对象图像对应。
其中,M帧第二对象图像与M帧第一对象图像的生成时间相邻,可以理解为生成时间接近或靠近、生成时间最靠近或最接近、或者,生成时间之间的时间差最小或小于阈值等等。应理解,M<N,所以需要插帧N-M帧第一对象图像。在插入N-M帧第一对象图像之前,将M帧第一对象图像和N帧第二对象图像按照生成时间对齐,对齐之后,在空处 插帧,具体过程请参见后文介绍。
在一种可能的设计中,所述M帧第一对象图像分别是根据所述显示设备在M个时刻时的姿态对所述第一对象进行渲染得到的图像;所述N帧第二对象图像分别是根据所述显示设备在N个时刻时的姿态对所述第二对象进行渲染得到的图像,所述M个时刻和所述N个时刻位于所述第一时长内。
在本申请实施例中,第一景深处的第一对象和第二景深处的第二对象的图像渲染帧率不同。图像渲染帧率是指单位时间内渲染图像的帧数。假设第一对象的渲染帧率是M,第二对象的渲染帧率是N,那么在一定时长(如单位时长)内,渲染出M帧第一对象图像和N帧第二对象图像。以渲染第一对象为例,用户佩戴VR眼镜,当用户头部运动时,VR眼镜姿态变化,基于VR眼镜的姿态对第一对象渲染,使得渲染后的第一对象适配用户的头部运动,用户体验较好。
在一种可能的设计中,通过显示设备向用户呈现N帧图像,包括:在所述N小于所述显示设备的图像刷新率P的情况下,在所述N帧图像中插入N-P帧所述图像;其中,插入的N-P帧图像是复制所述N帧图像中至少一帧图像或者是至少一帧图像经过旋转和/或平移后的图像;通过显示设备向用户呈现P帧图像,P是正整数。
举例来说,P=90,N=60,那么需要插入30帧,插入的30帧可以是60帧中的任意一帧或多种,比如,插入的图像可以是复制前一帧或前一帧经过旋转和/或平移后的图像。或者,插入的图像时可以是复制前n帧或前n帧经过旋转和/或平移后的图像,本申请实施例不作限定。
在一种可能的设计中,所述方法还包括:当用户关注所述第一景深处的第一对象时,通过所述显示设备显示W帧图像;其中,所述W帧图像中第t帧图像上第二景深处的对象与第r帧图像上的第二景深处的对象相同,所述第t帧图像上第一景深处的对象与第r帧图像上的第一景深处的对象不同;N、t、r为正整数,r小于t。
也就是说,原本在用户看来,第一景深处的第一对象相同(或不变),第二景深处的第二对象不同(或变化)。当用户的关注第一景深处的第一对象时,第一景深处的第一对象不同(或变化),第二景深处的第二对象相同(或不变)。这是因为,原本第一景深处的第一对象的图像渲染帧率低,所以插帧较多,看上去变化不大或不变化,当用户关注第一景深处的第一对象时,增大第一对象的图像渲染帧率,所以插帧减少,看上去变化速度加快。为了节省功耗,增大第一对象的图像渲染帧率时,降低了第二对象的图像渲染帧率,所以第二景深的第二对象看上去不变或变化较慢。
第二方面,还提供一种电子设备,包括:
处理器,存储器,以及,一个或多个程序;
其中,所述一个或多个程序被存储在所述存储器中,所述一个或多个程序包括指令,当所述指令被所述处理器执行时,使得所述电子设备执行如上述第一方面提供的方法步骤。
第三方面,提供一种计算机可读存储介质,所述计算机可读存储介质用于存储计算机程序,当所述计算机程序在计算机上运行时,使得所述计算机执行如上述第一方面提供的方法。
第四方面,提供一种计算机程序产品,包括计算机程序,当所述计算机程序在计算机上运行时,使得所述计算机执行如上述第一方面提供的方法。
第五方面,提供一种电子设备上的图形用户界面,所述电子设备具有显示屏、存储器、 以及处理器,所述处理器用于执行存储在所述存储器中的一个或多个计算机程序,所述图形用户界面包括所述电子设备执行上述第一方面提供的方法时显示的图形用户界面。
第六方面,本申请实施例还提供一种芯片系统,所述芯片系统与电子设备中的存储器耦合,用于调用存储器中存储的计算机程序并执行本申请实施例第一方面的技术方案,本申请实施例中“耦合”是指两个部件彼此直接或间接地结合。
上述第二方面至第六方面的有益效果,参见第一方面的有益效果,不重复赘述。
附图说明
图1为本申请一实施例提供的系统架构的示意图;
图2为本申请一实施例提供的可穿戴设备姿态变化时看到的虚拟环境的示意图;
图3为本申请一实施例提供的一种图像渲染方法的示意图;
图4为本申请一实施例提供的另一种图像渲染方法的示意图;
图5为本申请一实施例提供的低渲染帧率渲染时导致的响应时延的示意图;
图6为本申请一实施例提供的图像平移的示意图;
图7为本申请一实施例提供的第一种应用场景的示意图;
图8为本申请一实施例提供的第二种应用场景的示意图;
图9为本申请一实施例提供的第三种应用场景的示意图;
图10为本申请一实施例提供的穿戴设备的结构示意图;
图11为本申请一实施例提供的图像渲染方法的流程示意图;
图12为本申请一实施例提供的以不同帧率渲染出的近景物体和远景物体的示意图;
图13A和图13B为本申请一实施例提供的近景物体和远景物体的处理流程的示意图;
图14A和图14B为本申请一实施例提供的近景物体和远景物体对齐的示意图;
图15A至图15C为本申请一实施例提供的插帧过程的示意图;
图16A和图16B为本申请一实施例提供的近景物体、中景物体和远景物体的处理流程的示意图;
图17至图20为本申请一实施例提供的插帧过程的示意图;
图21为本申请一实施例提供的电子设备的结构示意图。
具体实施方式
以下,对本申请实施例中的部分用语进行解释说明,以便于本领域技术人员理解。
(1)本申请实施例涉及的至少一个,包括一个或者多个;其中,多个是指大于或者等于两个。另外,需要理解的是,在本申请的描述中,“第一”、“第二”等词汇,仅用于区分描述的目的,而不能理解为明示或暗示相对重要性,也不能理解为明示或暗示顺序。比如,第一对象和第二对象并不代表二者的重要程度,或者代表二者的顺序,是为了区分对象。
在本申请实施例中,“和/或”,是一种描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B这三种情况。另外,本文中字符“/”,一般表示前后关联对象是一种“或”的关系。
(2)虚拟现实(Virtual Reality,VR)技术是借助计算机及传感器技术创造的一种人 机交互手段。VR技术综合了计算机图形技术、计算机仿真技术、传感器技术、显示技术等多种科学技术,可以创建虚拟环境。虚拟环境包括由计算机生成的、并实时动态播放的三维立体逼真图像为用户带来视觉感知;而且,除了计算机图形技术所生成的视觉感知外,还有听觉、触觉、力觉、运动等感知,甚至还包括嗅觉和味觉等,也称为多感知;此外,还可以检测用户的头部转动,眼睛、手势、或其他人体行为动作,由计算机来处理与用户的动作相适应的数据,并对用户的动作实时响应,并分别反馈到用户的五官,进而形式虚拟环境。示例性的,用户佩戴VR穿戴设备可以看到VR游戏界面,通过手势、手柄等操作,可以与VR游戏界面交互,仿佛身处游戏中。
(3)增强现实(Augmented Reality,AR)技术是指将计算机生成的虚拟对象叠加到真实世界的场景之上,从而实现对真实世界的增强。也就是说,AR技术中需要采集真实世界的场景,然后在真实世界上增加虚拟环境。
因此,VR技术与AR技术的区别在于,AR技术创建的是完全的虚拟环境,用户看到的全部是虚拟对象;而AR技术是在真实世界上叠加了虚拟对象,即既包括真实世界中对象也包括虚拟对象。比如,用户佩戴透明眼镜,通过该眼镜可以看到周围的真实环境,而且该眼镜上还可以显示虚拟对象,这样,用户既可以看到真实对象也可以看到虚拟对象。
(4)混合现实技术(Mixed Reality,MR),是通过在虚拟环境中引入现实场景信息(或称为真实场景信息),将虚拟环境、现实世界和用户之间搭起一个交互反馈信息的桥梁,从而增强用户体验的真实感。具体来说,把现实对象虚拟化,(比如,使用摄像头来扫描现实对象进行三维重建,生成虚拟对象),经过虚拟化的真实对象引入到虚拟环境中,这样,用户在虚拟环境中可以看到真实对象。
需要说明的是,本申请实施例提供的技术方案可以适用于VR场景、AR场景或MR场景中。
当然,除了VR、AR和MR之外还可以适用于其它场景。比如,裸眼3D场景(裸眼3D显示屏、裸眼3D投影等)、影院(如3D电影)、电子设备中的VR软件等,总之,可以适用于任何需要生成三维图像的场景,其中三维图像中包括位于不同景深(或图像深度)的对象。
为了方便描述,下文主要以VR场景为例进行介绍。
示例性的,请参见图1,为本申请实施例VR系统的示意图。VR系统中包括VR穿戴设备,以及主机(例如VR主机)或服务器(例如VR服务器),VR穿戴设备与VR主机或VR服务器连接(有线连接或无线连接)。VR主机或VR服务器可以是具有较大计算能力的设备。例如,VR主机可以是手机、平板电脑、笔记本电脑等设备,VR服务器可以是云服务器等。VR主机或VR服务器负责图像生成、图像渲染等,然后将渲染后的图像发送给VR穿戴设备显示,用户佩戴VR穿戴设备可以看到图像。示例性的,VR穿戴设备可以是头戴式设备(Head Mounted Display,HMD),比如眼镜、头盔等。
对于这种VR架构,VR穿戴设备、VR主机或VR服务器可以使用本申请提供的渲染方式(具体原理将在后文介绍)对图像进行渲染,以节省VR主机或VR服务器的渲染功耗。可选的,图1中VR系统中也可以不包括VR主机或VR服务器。比如,VR穿戴设备本地具有图像生成、渲染的能力,无需从VR主机或VR服务器获取图像进行显示,这样情况下,VR穿戴设备可以使用本申请实施例提供的渲染方法对图像进行渲染,节省VR 穿戴设备的渲染功耗。
下文主要以VR穿戴设备本地进行图像渲染为例进行介绍。
(5)图像渲染
可以理解的是,用户佩戴VR穿戴设备时,可能会发生位置移动、扭头等行为,为了使得虚拟环境更加真实,当VR穿戴设备发生位置移动、扭头等行为时,需要对图像进行相应的处理,给用户真实的感受。因此,在VR领域中,图像渲染包括对图像进行色彩、透明度等渲染,还包括根据VR穿戴设备的姿态对图像进行旋转和/或平移。其中,VR穿戴设备的姿态包括旋转角度和/或平移距离等多个自由度,其中,选择角度包括偏航角、俯仰角、横滚角,平移距离包括相对于在三轴方向(X,Y,Z)的平移距离。因此,图像渲染包括根据VR穿戴设备的旋转角度对图像进行旋转处理,和/或,根据VR穿戴设备的平移距离对图像进行平移处理。在一些实施例中,姿态可以包括用户的方向(orientation)和位置(position),当用户的姿态变化时,用户的视角发生变化。具体的,姿态可以为用户的头部姿态。姿态可以通过VR穿戴设备中的传感器和/或摄像头获取。
示例性的,请参见图2,为VR领域中图像渲染的一种示意图。当用户佩戴VR穿戴设备朝前时,经渲染后的图像上屏幕位于正前方,且背景对象(例如山、水等)在正前方;当用户头部姿态向右旋转角度(比如40度)后,图像上屏幕向左旋转40度,而且,背景对象(例如山、水等)向左旋转40度,这样,用户看到的虚拟环境是与用户联动的,体验较好。
可以理解的是,VR穿戴设备可以根据当前姿态对图像进行渲染(旋转和/或平移)。比如,VR穿戴设备60ms内渲染60帧图像,那么可以在第1ms时,根据第1ms的姿态对图像(可以理解为原始图像,即未渲染的图像)进行渲染,其中,第1ms的姿态可以是运动传感器在第1ms产生的运动数据,如,旋转角度和/或平移距离等。在第2ms时,根据第2ms的姿态(运动传感器在第2ms产生的运动数据,如,旋转角度和/或平移距离等)对图像进行渲染,以此类推。
(6)景深(Depth of Field,简称DOF)
三维图像包括不同图像深度的对象。比如,VR穿戴设备显示三维图像,用户佩戴VR穿戴设备看到的是三维场景,该三维场景中不同对象到用户人眼的距离不同,呈现立体感。因此,图像深度可以理解为三维图像上对象与用户人眼之间的距离,图像深度越大,视觉上距离用户越远,看上去像是远景,图像深度越小,视觉上距离用户越近,看上去像是近景。图像深度还可以称为“景深”。
(7)图像渲染帧率和图像刷新帧率
图像渲染帧率,是指单位时间(比如1s、60ms等)内渲染的图像的帧数,即单位时间内能渲染出多少帧图像。如果单位时间是1s,图像渲染帧率的单位可以是fps。图像渲染帧率越高对芯片的计算能力要求越高。需要说明的是,本申请不限定单位时间的具体长度(时长),可以是1s、也可以是1ms,或者60ms等等,只要是时长固定的一段时间即可。
图像刷新率,是指显示器在单位时间(比如1s、60ms等)内的刷新图像的帧率,即单位时间内显示屏能刷新多少帧图像。如果单位时间是1s,图像刷新率的单位可以是赫兹(Hz)。
一般来说,如果图像刷新率是固定的,那么图像渲染帧率需要适配所述图像刷新率。比如,图像刷新率是90Hz,那么图像渲染帧率至少需要90fps,以保证显示器上有足够的 图像刷新。
一种方式为,请参见图3,对待渲染的图像流中的图像一张一张进行渲染,渲染后的图像流在显示屏上刷新。假设VR穿戴设备的图像刷新率达到90Hz,则图像渲染帧率要达到至少90fps,需要性能强大的图形处理器支持,这也意味着高额的功耗,在电池容量一定的情况下,会减少移动VR穿戴设备的续航。
为了降低渲染功耗,一种解决方式为,降低图像渲染帧率,比如,图像渲染帧率可以低于图像刷新帧率。假设图像刷新率是90Hz,图像渲染帧率可以是30fps或60fps。以30帧为例,请参见图4,单位时间内只能渲染30帧图像(比如呈黑色的图像),但是由于图像刷新率是90Hz,渲染出的30帧图像显然不够显示屏单位时间内的刷新量,所以需要对渲染后的30帧图像进行插帧,比如插入60帧渲染后的图像使得渲染后的图像达到90帧,以保证单位时间内有足够的图像在显示屏上刷新,保证显示效果。
这种方式,由于图像渲染帧率较低,一定程度上降低了渲染功耗,但是会导致VR操作有较高的延迟。比如,请参见图5,在第i帧渲染后的图像和第i+1帧渲染后的图像之间需要插入多帧图像,插入的图像可以是第i帧图像的复制版。VR穿戴设备上显示渲染后的图像流,假设在显示第i帧渲染后的图像时,检测到触发操作,到显示第i+1帧渲染图像之前显示的都是插入的图像,由于插入的图像是前面图像(第i帧图像)的复制版,所以显示插入的图像的期间内不响应用户的触发操作,等到第i+1帧渲染后的图像时,才会响应触发操作。因此,对用户触发操作的响应时间较长,显示效果较差,用户体验较差。
此外,上面的降低图像渲染帧率的方案会导致图像上近景对象看上去发生抖动。这是因为,在插入图像时,插入的图像可以是根据VR穿戴设备的姿态对图像进行处理(平移和/或旋转)后的图像。比如,图5中,在第i帧渲染后的图像与第i+1帧渲染后的图像之间插入的是根据VR穿戴设备的姿态对第i帧渲染后的图像进行处理后的图像,这样的话,插入的图像与第i+1帧渲染后的图像之间可能存在视差,因为第i+1帧渲染后的图像与第i帧渲染后的图像之间本身是连续的。这样的话,视觉上会感受到对象抖动,而且,图像渲染帧率越低,需要插帧数量越多,时差越明显,而且,三维图像中具有近大远小的特点,所以近景对象抖动现象更为明显,显示效果较差,体验较差。
而且,上面的降低图像渲染帧率的方案中,图像上会出现黑边。比如,请参见图6,继续以在第i帧渲染后的图像和第i+1帧渲染后的图像之间插入一帧图像为例,该插入的图像是根据VR穿戴设备的姿态对第i帧图像进行旋转和/或平移后得到的图像。比如,VR穿戴设备右移时,插入的图像是相对于第i帧图像向右平移后的图像,这样,两帧图像出现错位,那么重叠部分在显示屏上显示,不重叠部分(化斜线部分)显示黑屏,所以显示屏上出看到黑边,影响用户体验。因此,图4中图像渲染帧率较低的方案存在较多的问题。
为了更好的改善显示效果,本申请实施例提供一种显示方法,该方法中,通过显示设备向用户呈现N帧图像;其中,N帧图像中第j帧图像上第一景深处的对象与第i帧图像上第一景深处的对象相同;第j帧图像上第二景深处的对象与所述第i图像上第二景深处的对象不同;i小于j。举例来说,VR穿戴设备显示N帧图像,用户佩戴VR穿戴设备看到N帧图像不断刷新,其中,近景对象不断变化,远景对象相对不变。这是因为,近景对象使用较高的图像渲染帧率,远景对象使用较低的图像渲染帧率,所以单位时间内渲染出 的近景对象的帧数高于远景对象的帧数,对于缺少的远景对象可以使用插帧的方式获取,而插帧的远景对象会导致远景对象看上去没有变化。一般,用户对远景对象的关注较低,对近景对象关注度较高,所以使用低渲染帧率渲染远景对象可以节省渲染功耗,而且不影响用户体验,对于近景对象渲染帧率高,保证用户体验。
下面介绍本申请实施例提供的几种应用场景。
示例性的,图7为本申请实施例提供的第一种应用场景的示意图。
VR穿戴设备的显示屏上显示图像701,图像701是经过渲染后的三维图像,该三维图像包括山、海、以及小男孩踢足球等多个对象,因此用户佩戴VR穿戴设备时所看到的是在包括山、海的环境中有小男孩踢足球的虚拟环境702。该场景中VR穿戴设备可以确定用户眼部关注的对象,在渲染图像时,可以使用高帧率渲染用户眼部关注的对象,使用低帧率渲染其他对象。在一些实施例中,近景对象(小男孩)、中景对象(海或船)或远景对象(山)中的一个或多个对象可以是VR穿戴设备的摄像头采集的真实对象。在一些实施例中,近景对象还可以是用户界面(User Interface,简称UI)或视频播放界面等界面。
比如,VR穿戴设备确定用户关注对象是小男孩,那么VR穿戴设备在渲染图像701时,使用较高的图像渲染帧率对小男孩进行渲染,使用较低的图像渲染帧率对山、海、鸟、船等其他对象进行渲染。经过渲染后的对象合成图像701。在一种实现方式中,VR穿戴设备可以默认为近景对象(例如小男孩)是用户关注的对象;在另外一种实现方式中,VR穿戴设备可以通过追踪用户的注视点,确定用户关注的对象,当用户关注的对象是小男孩时,使用较高的图像渲染帧率对小男孩进行渲染,使用较低的图像渲染帧率对山、海、鸟、船等其他对象进行渲染。
由于用于关注对象的图像渲染帧率高于其他对象的图像渲染帧率,所以单位时间内渲染出的用户关注对象的帧数高于其他对象的帧数,即缺少部分其他对象,对于缺少的其他对象可以使用插帧的方式获取。比如,单位时间内渲染60帧用户关注对象、30帧其他对象,即单位时间内缺少30帧其他对象,此时,可以插入30帧其他对象,经过插帧后,单位时间内有60帧用户关注对象和60帧其他对象,可以合成60帧图像并显示。由于其他对象对应的图像渲染帧率低,使用了插帧方式,所以用户视觉上看到虚拟环境702中其他对象变化缓慢,这种方式对用户体验影响不大(用户不关注这些对象),而且能节省渲染功耗。对于用户关注的对象渲染帧率高,可以降低时延,提升用户体验。
示例性的,图8为本申请一实施例提供的第二种应用场景的示意图。
VR穿戴设备的显示屏上显示图像801,图像801是经过渲染后的三维图像,该三维图像包括虚拟影院、视频播放界面等对象。因此,用户佩戴VR穿戴设备所看到的是在影院中看电影的虚拟环境802。在一种实现方式中,在图8所示的场景中,VR穿戴设备可以默认为近景对象(例如视频播放界面)是用户关注的对象;在另外一种实现方式中,VR穿戴设备可以通过追踪用户的注视点,确定用户关注的对象,当用户关注的对象是视频播放界面时,使用较高的图像渲染帧率对视频播放界面进行渲染,使用较低的图像渲染帧率对虚拟影院等其他对象进行渲染。
该场景中VR穿戴设备在渲染图像时,可以使用高帧率渲染近景对象,使用低帧率渲染远景对象。
由于视频播放界面的图像深度h1小于虚拟影院的图像深度h2,即视频播放界面是近景对象,虚拟影院是远景对象,那么,VR穿戴设备在渲染图像801时,使用较高的图像渲染帧率对近景对象(如视频播放界面)进行渲染,使用较低的图像渲染帧率对远景对象(虚拟影院等)进行渲染。经过渲染后的近景对象和远景对象合成图像801。对于单位时间内缺少的远景对象,可以使用插帧方式。这种观影体验中,用户对背景(即虚拟影院)的关注较低,所以使用较低渲染帧率以节省渲染功耗,对于近景对象(视频播放界面)渲染帧率高,保证视频播放顺畅。
在图8所示的示例中,以近景图像是视频播放界面为例,需要说明的是,近景图像可以包括近景对象、UI界面等,总之,可以是图像深度小于第一阈值的任意的对象或UI界面。
示例性的,图9为本申请实施例提供的第三种应用场景的示意图。
VR穿戴设备上的摄像头可以采集图像,该图像可以包括用户周围的真实环境(如,包括山、海等真实对象),VR穿戴设备可以将摄像头采集的包括真实环境的图像与虚拟对象(如,UI界面)合成三维图像并显示。其中,UI界面可以是UI交互界面,比如手机桌面、游戏操作界面、视频播放界面等等。
示例性的,如图9,VR穿戴设备的显示屏上显示图像901,图像901是由摄像头采集的图像(包括山、海等真实对象),以及虚拟对象(包括UI界面)合成的。因此,用户佩戴VR穿戴设备所看到的是在真实环境中显示虚拟的UI界面的场景902。VR穿戴设备在渲染图像时,可以使用高帧率渲染虚拟对象,使用低帧率渲染真实对象。在一种实现方式中,在VR穿戴设备可以默认为虚拟对象是用户关注的对象;在另外一种实现方式中,VR穿戴设备可以通过追踪用户的注视点,确定用户关注的对象,当用户关注的对象是虚拟对象时,使用较高的图像渲染帧率对虚拟对象进行渲染,使用较低的图像渲染帧率对真实对象等其他对象进行渲染。当用户关注的对象是真实对象时,使用较高的图像渲染帧率对真实对象进行渲染,使用较低的图像渲染帧率对虚拟对象等其他对象进行渲染。
比如,VR穿戴设备在渲染图像901时,使用较高的图像渲染帧率对虚拟对象(如UI界面)进行渲染,使用较低的图像渲染帧率对真实对象(山、海、鸟、船等)进行渲染。经过渲染后的真实对象和虚拟对象合成图像901。由于虚拟对象的图像渲染帧率高于真实对象的图像渲染帧率,所以单位时间内渲染出的虚拟对象的帧数高于真实对象的帧数,对于缺少的真实对象,可以使用插帧方式,节省渲染功耗,而虚拟对象(UI界面)的图像渲染帧率高,可以降低用于操作的响应时延,用户体验更好。
或者,VR穿戴设备在渲染图像901时,对于虚拟对象和部分真实对象可以使用高帧率渲染,对于其他真实对象可以使用较低帧率。比如,所述部分真实对象与虚拟对象位于同一景深或者所述部分真实对象比虚拟对象更靠近用户眼睛,这种情况下,可以将所述部分真实对象和虚拟对象使用相同的高帧率进行渲染,对于其他真实对象使用较低帧率渲染。
下面介绍穿戴设备的结构,所述穿戴设备可以是VR穿戴设备、AR穿戴设备、MR穿戴设备等。
图10是本申请实施例提供的一种穿戴设备的结构示意图。如图10所示,穿戴设备100可以包括处理器110,存储器120,传感器模块130(可以用于获取用户的姿态),麦克风 140,按键150,输入输出接口160,通信模块170,摄像头180,电池190、光学显示模组1100以及眼动追踪模组1200等。
可以理解的是,本申请实施例示意的结构并不构成对穿戴设备100的具体限定。在本申请另一些实施例中,穿戴设备100可以包括比图示更多或更少的部件,或者组合某些部件,或者拆分某些部件,或者不同的部件布置。图示的部件可以以硬件,软件或软件和硬件的组合实现。
处理器110通常用于控制穿戴设备100的整体操作,可以包括一个或多个处理单元,例如:处理器110可以包括应用处理器(application processor,AP),调制解调处理器,图形处理器(graphics processing unit,GPU),图像信号处理器(image signal processor,ISP),视频处理单元(video processing unit,VPU)控制器,存储器,视频编解码器,数字信号处理器(digital signal processor,DSP),基带处理器,和/或神经网络处理器(neural-network processing unit,NPU)等。其中,不同的处理单元可以是独立的器件,也可以集成在一个或多个处理器中。
处理器110中还可以设置存储器,用于存储指令和数据。在一些实施例中,处理器110中的存储器为高速缓冲存储器。该存储器可以保存处理器110刚用过或循环使用的指令或数据。如果处理器110需要再次使用该指令或数据,可从所述存储器中直接调用。避免了重复存取,减少了处理器110的等待时间,因而提高了系统的效率。
在一些实施例中,处理器110可以包括一个或多个接口。接口可以包括集成电路(inter-integrated circuit,I2C)接口,通用异步收发传输器(universal asynchronous receiver/transmitter,UART)接口,移动产业处理器接口(mobile industry processor interface,MIPI),通用输入输出(general-purpose input/output,GPIO)接口,用户标识模块(subscriber identity module,SIM)接口,和/或通用串行总线(universal serial bus,USB)接口,串行外设接口(serial peripheral interface,SPI)接口等。
在一些实施例中,处理器110可以基于不同帧率对不同对象进行渲染,比如,对近景对象使用高帧率渲染,对远景对象使用低帧率进行渲染。
I2C接口是一种双向同步串行总线,包括一根串行数据线(serial data line,SDA)和一根串行时钟线(derail clock line,SCL)。在一些实施例中,处理器110可以包含多组I2C总线。
UART接口是一种通用串行数据总线,用于异步通信。该总线可以为双向通信总线。它将要传输的数据在串行通信与并行通信之间转换。在一些实施例中,UART接口通常被用于连接处理器110与通信模块170。例如:处理器110通过UART接口与通信模块170中的蓝牙模块通信,实现蓝牙功能。
MIPI接口可以被用于连接处理器110与光学显示模组1100中的显示屏,摄像头180等外围器件。
GPIO接口可以通过软件配置。GPIO接口可以被配置为控制信号,也可被配置为数据信号。在一些实施例中,GPIO接口可以用于连接处理器110与摄像头180,光学显示模组1100中的显示屏,通信模块170,传感器模块130,麦克风140等。GPIO接口还可以被配置为I2C接口,I2S接口,UART接口,MIPI接口等。可选的,摄像头180可以采集包括真实对象的图像,处理器110可以将摄像头采集的图像与虚拟对象融合,通过光学显示模组1100现实融合得到的图像,该示例可以参见图9所示的应用场景,在此不重复赘述。
USB接口是符合USB标准规范的接口,具体可以是Mini USB接口,Micro USB接口,USB Type C接口等。USB接口可以用于连接充电器为穿戴设备100充电,也可以用于穿戴设备100与外围设备之间传输数据。也可以用于连接耳机,通过耳机播放音频。该接口还可以用于连接其他电子设备,例如手机等。USB接口可以是USB3.0,用于兼容高速显示接口(display port,DP)信号传输,可以传输视音频高速数据。
It can be understood that the interface connection relationships between the modules illustrated in this embodiment of the present application are only schematic illustrations and do not constitute a structural limitation on the wearable device 100. In other embodiments of the present application, the wearable device 100 may also adopt interface connection manners different from those in the foregoing embodiments, or a combination of multiple interface connection manners.
In addition, the wearable device 100 may have a wireless communication function. For example, the wearable device 100 may receive rendered images from another electronic device (such as a VR host or a VR server) for display, or receive unrendered images which the processor 110 then renders and displays. The communication module 170 may include a wireless communication module and a mobile communication module. The wireless communication function may be implemented by an antenna (not shown), the mobile communication module (not shown), a modem processor (not shown), a baseband processor (not shown), and so on.
The antenna is used to transmit and receive electromagnetic wave signals. The wearable device 100 may include multiple antennas, and each antenna may be used to cover one or more communication frequency bands. Different antennas may also be multiplexed to improve antenna utilization. For example, antenna 1 may be multiplexed as a diversity antenna of the wireless local area network. In other embodiments, the antennas may be used in combination with a tuning switch.
The mobile communication module may provide solutions for wireless communication applied on the wearable device 100, including 2nd generation (2G)/3rd generation (3G)/4th generation (4G)/5th generation (5G) networks. The mobile communication module may include at least one filter, a switch, a power amplifier, a low noise amplifier (LNA), and the like. The mobile communication module may receive electromagnetic waves via the antenna, filter and amplify the received electromagnetic waves, and transmit them to the modem processor for demodulation. The mobile communication module may also amplify the signal modulated by the modem processor and convert it into an electromagnetic wave to be radiated out via the antenna. In some embodiments, at least some functional modules of the mobile communication module may be provided in the processor 110. In some embodiments, at least some functional modules of the mobile communication module and at least some modules of the processor 110 may be provided in the same device.
The modem processor may include a modulator and a demodulator. The modulator is used to modulate the low-frequency baseband signal to be sent into a medium/high-frequency signal. The demodulator is used to demodulate the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then transmits the demodulated low-frequency baseband signal to the baseband processor for processing. After being processed by the baseband processor, the low-frequency baseband signal is passed to the application processor. The application processor outputs a sound signal through an audio device (not limited to a speaker, etc.), or displays an image or video through the display screen in the optical display module 1100. In some embodiments, the modem processor may be an independent device. In other embodiments, the modem processor may be independent of the processor 110 and provided in the same device as the mobile communication module or other functional modules.
The wireless communication module may provide solutions for wireless communication applied on the wearable device 100, including wireless local area networks (WLAN) (such as wireless fidelity (Wi-Fi) networks), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR), and the like. The wireless communication module may be one or more devices integrating at least one communication processing module. The wireless communication module receives electromagnetic waves via the antenna, performs frequency modulation and filtering on the electromagnetic wave signals, and sends the processed signals to the processor 110. The wireless communication module may also receive signals to be sent from the processor 110, perform frequency modulation and amplification on them, and convert them into electromagnetic waves to be radiated out via the antenna.
In some embodiments, the antenna of the wearable device 100 is coupled with the mobile communication module, so that the wearable device 100 can communicate with networks and other devices through wireless communication technologies. The wireless communication technologies may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technologies. The GNSS may include the global positioning system (GPS), the global navigation satellite system (GLONASS), the BeiDou navigation satellite system (BDS), the quasi-zenith satellite system (QZSS), and/or the satellite based augmentation systems (SBAS).
The wearable device 100 implements the display function through the GPU, the optical display module 1100, the application processor, and so on. The GPU is a microprocessor for image processing and connects the optical display module 1100 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
The memory 120 may be used to store computer-executable program code, which includes instructions. The processor 110 executes various functional applications and data processing of the wearable device 100 by running the instructions stored in the memory 120. The memory 120 may include a program storage area and a data storage area. The program storage area may store the operating system and applications required by at least one function (such as a sound playing function or an image playing function). The data storage area may store data created during the use of the wearable device 100 (such as audio data and a phone book). In addition, the memory 120 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash storage (UFS), etc.
The wearable device 100 may implement audio functions, such as music playback and recording, through the audio module, the speaker, the microphone 140, the earphone interface, the application processor, and so on.
The audio module is used to convert digital audio information into an analog audio signal output, and is also used to convert an analog audio input into a digital audio signal. The audio module may also be used to encode and decode audio signals. In some embodiments, the audio module may be provided in the processor 110, or some functional modules of the audio module may be provided in the processor 110.
The speaker, also called a "loudspeaker", is used to convert an audio electrical signal into a sound signal. The wearable device 100 can play music or a hands-free call through the speaker.
The microphone 140, also called a "mic" or "mouthpiece", is used to convert a sound signal into an electrical signal. The wearable device 100 may be provided with at least one microphone 140. In other embodiments, the wearable device 100 may be provided with two microphones 140, which can implement a noise reduction function in addition to collecting sound signals. In still other embodiments, the wearable device 100 may be provided with three, four, or more microphones 140 to collect sound signals, reduce noise, identify sound sources, implement directional recording, and so on.
The earphone interface is used to connect wired earphones. The earphone interface may be a USB interface, or may be a 3.5 mm open mobile terminal platform (OMTP) standard interface or a cellular telecommunications industry association of the USA (CTIA) standard interface.
In some embodiments, the wearable device 100 may include one or more buttons 150 that control the wearable device and provide the user with access to functions on the wearable device 100. The buttons 150 may take the form of push buttons, switches, dials, and touch or near-touch sensing devices (such as touch sensors). For example, the user can turn on the optical display module 1100 of the wearable device 100 by pressing a button. The buttons 150 include a power button, volume buttons, and so on. The buttons 150 may be mechanical buttons or touch buttons. The wearable device 100 may receive button input and generate key signal input related to user settings and function control of the wearable device 100.
In some embodiments, the wearable device 100 may include an input/output interface 160, and the input/output interface 160 may connect other apparatuses to the wearable device 100 through suitable components. The components may include, for example, audio/video jacks, data connectors, and the like.
The optical display module 1100 is used to present images to the user under the control of the processor. The optical display module 1100 may convert a real-pixel image display into a near-eye projected virtual image display through one or more optical devices such as mirrors, transmissive lenses, or optical waveguides, thereby realizing a virtual interactive experience or an interactive experience combining virtuality and reality. For example, the optical display module 1100 receives image data information sent by the processor and presents the corresponding image to the user.
In some embodiments, the wearable device 100 may further include an eye-tracking module 1200, which is used to track the movement of the human eye and thereby determine the gaze point of the eye. For example, image processing technology may be used to locate the pupil position and obtain the coordinates of the pupil center, and the gaze point of the person is then computed. Exemplarily, the eye-tracking module 1200 may work as follows: a camera collects an image of the user's eye; from the image of the user's eye, the position coordinates on the display screen at which the user's eye is gazing are calculated; this coordinate position is the user's gaze point and is sent to the processor 110. The processor 110 may render the object at that gaze point at a high rendering frame rate. In another embodiment, the eye-tracking module 1200 may include an infrared emitter whose infrared light is directed at the pupil of the user's eye. The cornea of the eye reflects the infrared light, and an infrared camera tracks the reflected infrared light, thereby tracking the movement of the gaze point.
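As an illustration of the "pupil center to screen coordinates" step, the sketch below uses a simple affine calibration. The calibration model and all function names are assumptions made for this sketch; the module's actual gaze-estimation algorithm is not specified in this embodiment.

```python
import numpy as np

# Illustrative sketch (assumed affine calibration model): map a detected pupil
# center (px, py) to display coordinates using parameters A, b obtained from a
# prior calibration step with known (pupil, screen) pairs.
def gaze_point(pupil_center, A, b):
    p = np.asarray(pupil_center, dtype=float)
    return A @ p + b                      # (x, y) on the display screen

def calibrate(pupil_pts, screen_pts):
    """Least-squares fit of screen = A @ pupil + b from calibration samples."""
    P = np.hstack([np.asarray(pupil_pts, float), np.ones((len(pupil_pts), 1))])
    X, *_ = np.linalg.lstsq(P, np.asarray(screen_pts, float), rcond=None)
    return X[:2].T, X[2]                  # A is 2x2, b is length-2
```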
The technical solutions provided in the embodiments of the present application are described below with reference to the accompanying drawings. The following technical solutions can all be applied to various application scenarios such as those in FIG. 7 to FIG. 9.
Referring to FIG. 11, which is a schematic flowchart of a display information processing method provided by an embodiment of the present application, the method may be applied to a wearable device (for example, a VR wearable device), or to another electronic device connected to the wearable device (for example, a VR host or a VR server). As shown in FIG. 11, the flow of the method includes:
S1: Determine a first object.
Exemplarily, the first object may be the user's point of interest among all the objects to be rendered.
Manner 1: The user's gaze point is determined according to eye-tracking technology, and the gaze point is the point of interest. For example, taking FIG. 7 as an example, the VR wearable device determines, according to eye-tracking technology, that the user is gazing at the little boy, and determines the little boy as the point of interest.
Manner 2: The point of interest may be a preset object, and the preset object includes a UI interface, a near-field object, a virtual object, and so on. For a scenario in which the point of interest is a near-field object or a UI interface, see FIG. 8; for a scenario in which the point of interest is a virtual object, see FIG. 9. Manner 2 does not need eye-tracking technology to determine the user's point of interest.
Optionally, Manner 1 and Manner 2 may be used alone or in combination, which is not limited in this embodiment of the present application.
S2: Determine a second object.
Exemplarily, the second object is an object other than the first object among all the objects to be rendered. For example, if the first object is an object at a first depth of field (for example, a near-field object), the second object may be an object at a second depth of field (for example, a far-field object) and/or an object at a third depth of field (for example, a mid-field object); that is, the image depth of the second object is greater than the image depth of the first object. Exemplarily, the first image depth of the first object is smaller than a first threshold, the second image depth of the second object is greater than a second threshold, and the first threshold is less than or equal to the second threshold. The specific values of the first threshold and the second threshold are not limited in this embodiment of the present application. For example, the image depths of near-field objects and far-field objects may be as shown in Table 1 below:
Table 1: Image depth ranges of near-field objects and far-field objects
Object             Image depth
Near-field object  0.1 m to 10 m
Far-field object   100 m to 1000 m
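The grouping by depth threshold can be expressed compactly as in the sketch below; the thresholds follow Table 1, while the helper and field names are assumed for illustration. Depths between the two thresholds are left unclassified here, since the embodiment leaves that choice open.

```python
# Illustrative sketch (thresholds from Table 1; names are assumed): split the
# objects to be rendered into near-field and far-field groups by image depth.
NEAR_MAX_M = 10.0     # first threshold
FAR_MIN_M = 100.0     # second threshold

def classify_by_depth(objects):
    near, far = [], []
    for obj in objects:                   # obj: dict with 'name', 'depth_m'
        if obj["depth_m"] < NEAR_MAX_M:
            near.append(obj)              # first object(s): high rendering frame rate
        elif obj["depth_m"] > FAR_MIN_M:
            far.append(obj)               # second object(s): low rendering frame rate
    return near, far
```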
S3: Render the first object at a first image rendering frame rate, where the first image rendering frame rate indicates the number of frames of the first object that can be rendered within a certain duration.
S4: Render the second object at a second image rendering frame rate, where the second image rendering frame rate indicates the number of frames of the second object that can be rendered within a certain duration, and the first image rendering frame rate is greater than the second image rendering frame rate.
The following describes the principle of rendering the first object and the second object, taking as an example that the first object is a near-field object whose first image rendering frame rate is N, and the second object is a far-field object whose second image rendering frame rate is M, where M and N are positive integers and N is greater than M.
Exemplarily, as shown in FIG. 12, N frames of the near-field object and M frames of the far-field object are rendered per unit time. Since N is greater than M, there are N-M more frames of the near-field object than of the far-field object per unit time.
S5: Fuse the rendered first object and second object to obtain a virtual image.
Exemplarily, referring again to FIG. 12, the number of frames M of the far-field object per unit time is smaller than the number of frames N of the near-field object. Therefore, before fusion, frame interpolation needs to be performed on the far-field object by inserting N-M far-field frames, so that the near-field object and the far-field object have the same number of frames, and then the fusion is performed.
One implementation is shown in FIG. 13A: within a certain duration, N frames of the near-field object and M frames of the far-field object are rendered, with N greater than M. Since the far-field object has fewer frames, N-M far-field frames can be inserted. The inserted N-M far-field frames may be copies of at least one of the M far-field frames. When inserting the N-M far-field frames, one frame may be inserted every few frames, which is not limited in this embodiment of the present application. In this way, the near-field object and the far-field object have the same number of frames, namely N, and the N near-field frames and the N far-field frames can be fused correspondingly to obtain N fused image frames. If N is smaller than the image refresh frame rate P, P-N fused image frames are further inserted to obtain P fused image frames for display. When inserting the P-N fused image frames, the inserted P-N fused image frames may be images obtained by translating and/or rotating at least one of the N fused image frames according to the posture of the VR wearable device.
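A minimal sketch of the duplication approach of FIG. 13A is given below; the data layout is assumed. It pads the far-field frame list to N entries by repeating already-rendered frames, spreading the copies roughly evenly rather than appending them all at the end.

```python
# Minimal sketch (assumed inputs): pad M rendered far-field frames to N slots by
# duplicating rendered frames, one copy inserted every few frames.
def pad_by_duplication(far_frames, n):
    m = len(far_frames)                    # m rendered frames, m < n
    padded = []
    for i in range(n):
        # map slot i of the N near-field frames to the nearest rendered far frame
        padded.append(far_frames[i * m // n])
    return padded                          # length n; extra entries are copies

# Example: 4 rendered far frames stretched to match 6 near frames
# pad_by_duplication(['f0', 'f1', 'f2', 'f3'], 6) -> ['f0', 'f0', 'f1', 'f2', 'f2', 'f3']
```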
Another implementation is shown in FIG. 13B: since the far-field object has fewer frames, N-M far-field frames can be inserted. The inserted N-M far-field frames may be far-field objects obtained by rotating and/or translating at least one of the M far-field frames according to the posture of the VR wearable device. When inserting the N-M far-field frames, one frame may be inserted every few frames, which is not limited in this embodiment of the present application. In this way, the near-field object and the far-field object both have N frames, and the N near-field frames and the N far-field frames can be fused correspondingly to obtain N fused image frames. The difference between FIG. 13B and FIG. 13A lies in the inserted N-M far-field frames. If, as in FIG. 13A, the inserted image is a copy of the previous frame, the workload is small and the efficiency is high. If, as in FIG. 13B, the inserted image is the previous frame after translation and/or rotation, then, because the interpolated image is the previous frame translated and/or rotated according to the posture of the VR wearable device, the image seen by the user matches the user's posture (the user's posture corresponds to the posture of the VR wearable device), and the user experience is better.
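The sketch below shows one greatly simplified form of the pose-based insertion of FIG. 13B: a planar horizontal shift proportional to the head rotation since the previous frame. It is an assumption-laden stand-in for a real timewarp/reprojection, used here only to make the idea concrete.

```python
import numpy as np

# Illustrative sketch (simplified planar reprojection, not the device's actual
# timewarp): synthesize an inserted far-field frame by shifting the previous
# frame according to the head-pose change between the two render times.
def reproject(prev_frame, yaw_delta_rad, fov_rad):
    """prev_frame: HxWxC array; yaw_delta_rad: head rotation since prev frame."""
    h, w = prev_frame.shape[:2]
    shift_px = int(round(yaw_delta_rad / fov_rad * w))   # small-angle approximation
    warped = np.zeros_like(prev_frame)
    if shift_px >= 0:
        warped[:, shift_px:] = prev_frame[:, :w - shift_px]
    else:
        warped[:, :w + shift_px] = prev_frame[:, -shift_px:]
    return warped   # the uncovered strip stays empty (the "black edge" discussed later)
```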
Specifically, S5 may include the following steps:
Step 1: Align the N near-field frames with the M far-field frames.
Exemplarily, in FIG. 12, the rendering times of the N near-field frames and the M far-field frames may be staggered. For example, the first near-field frame and the first far-field frame have the same rendering time, i.e., they start rendering at the same time, but because the rendering frame rates are different, the rendering time of the second near-field frame differs from that of the second far-field frame. Therefore, in Step 1, the N near-field frames and the M far-field frames can be aligned.
First alignment manner: for the i-th far-field frame among the M far-field frames, determine the j-th near-field frame among the N near-field frames whose rendering time is closest to that of the i-th far-field frame, and align the i-th far-field frame with the j-th near-field frame.
Exemplarily, referring to FIG. 12, suppose the i-th far-field frame is the second far-field frame, and it is determined that, among the N near-field frames, the third near-field frame has a rendering time closest to that of the second far-field frame; then the second far-field frame is aligned with the third near-field frame, and the effect after alignment is shown in FIG. 14A.
It can be understood that, in some cases, Step 1 may not need to be performed. For example, if N=60 and M=30, i.e., 60 near-field frames and 30 far-field frames are rendered per unit time, the near-field rendering rate is exactly twice the far-field rendering rate: one near-field frame is rendered every T ms and one far-field frame every 2T ms. For example, the first near-field frame and the first far-field frame are rendered at T ms, the second near-field frame is rendered at 2T ms (the second far-field frame is not rendered at this time), and the third near-field frame and the second far-field frame are rendered at 3T ms. In this case, the rendering times of the near-field object and the far-field object are already aligned, and no additional alignment is needed.
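The first alignment manner can be written as a nearest-timestamp match, as in the sketch below; the input format and function name are assumed for illustration.

```python
# Minimal sketch (assumed inputs) of the first alignment manner: pair each
# far-field frame with the near-field frame whose render timestamp is closest.
def align_by_render_time(near_times, far_times):
    """near_times, far_times: ascending lists of render timestamps (ms).
    Returns a dict: far frame index -> aligned near frame index."""
    mapping = {}
    for i, t_far in enumerate(far_times):
        j = min(range(len(near_times)), key=lambda k: abs(near_times[k] - t_far))
        mapping[i] = j
    return mapping

# Example matching the text: near frames at 1, 2, 3 ms, far frames at 1, 3 ms
# align_by_render_time([1, 2, 3], [1, 3]) -> {0: 0, 1: 2}
```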
Second alignment manner: align the M far-field frames one-to-one with the first M near-field frames among the N near-field frames. Exemplarily, referring to FIG. 14B, the first far-field frame is aligned with the first near-field frame, the second far-field frame is aligned with the second near-field frame, and so on.
The above are two alignment manners; other alignment manners are also feasible and are not limited in this embodiment of the present application.
Step 2: Insert N-M far-field frames so that the number of far-field frames reaches N.
The number of far-field frames is N-M fewer than the number of near-field frames, so after the far-field object and the near-field object are aligned in Step 1, there are N-M near-field frames that do not correspond to any far-field frame. For example, in FIG. 14A and FIG. 14B, some near-field frames have no corresponding far-field frame. Therefore, N-M far-field frames are inserted, and the inserted N-M far-field frames correspond to those near-field frames among the N near-field frames that have no corresponding far-field frame.
Since there are the two alignment manners above and different alignment manners correspond to different interpolation manners, the following description is divided into two cases: the first case is for the first alignment manner, and the second case is for the second alignment manner.
First case: the alignment manner is the foregoing first alignment manner (i.e., the alignment manner of FIG. 14A).
For the first case, the first interpolation manner may be as shown in FIG. 15A: a far-field frame is inserted between the first far-field frame and the second far-field frame, and the inserted far-field frame may be the previous far-field frame, i.e., the first far-field frame. A far-field frame is inserted between the second far-field frame and the third far-field frame, and the far-field frame inserted here may be the previous far-field frame, i.e., the second far-field frame, and so on. After N-M far-field frames are inserted, the number of far-field frames reaches N. This interpolation manner can simply be understood as inserting the previous far-field frame at each missing-frame position.
For the first case, the second interpolation manner may also be as shown in FIG. 15A. For example, a far-field frame is inserted between the first far-field frame and the second far-field frame, and the inserted far-field frame may be an image obtained by processing (rotating and/or translating) the previous frame, i.e., the first far-field frame, according to the posture of the VR wearable device. The difference from the foregoing first interpolation manner is that the first manner directly inserts the previous far-field frame between the first and second far-field frames, whereas the second manner inserts, between the first and second far-field frames, an image obtained by rotating and/or translating the previous far-field frame according to the VR wearable device. Similarly, still referring to FIG. 15A, a far-field frame is inserted between the second far-field frame and the third far-field frame, and the far-field frame inserted here may be an image obtained by processing (rotating and/or translating) the previous frame, i.e., the second far-field frame, according to the posture of the VR wearable device, and so on. In this interpolation manner, what is inserted at each missing-frame position is an image obtained by processing the previous far-field frame.
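Both interpolation manners for the first case can share one gap-filling loop, as sketched below. The input layout is assumed, and the sketch assumes the first slot already has a rendered far-field frame (the first frames start together).

```python
# Minimal sketch (assumed inputs): fill every near-field slot without a far-field
# frame with either a copy of the previous far-field frame (manner 1) or a
# pose-warped version of it (manner 2).
def fill_far_gaps(aligned, n, warp=None, pose_deltas=None):
    """aligned: dict near-slot -> rendered far-field frame (from the alignment step).
    warp: optional function (frame, pose_delta) -> warped frame."""
    filled, last = [], None
    for slot in range(n):
        if slot in aligned:
            last = aligned[slot]
            filled.append(last)
        elif warp is None:
            filled.append(last)                           # manner 1: duplicate previous
        else:
            filled.append(warp(last, pose_deltas[slot]))  # manner 2: warp previous
    return filled
```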
Second case: the alignment manner is the foregoing second alignment manner (i.e., the alignment manner of FIG. 14B).
For the second case, the first interpolation manner is, as shown in FIG. 15B, to insert N-M far-field frames after the M-th far-field frame. The inserted N-M far-field frames may include at least one of the M far-field frames; for example, the inserted N-M far-field frames may all be the M-th far-field frame, i.e., frames M+1 to N are copies of the M-th frame.
For the second case, the second interpolation manner is, still as shown in FIG. 15B, to insert N-M far-field frames after the M-th far-field frame, where the inserted N-M far-field frames may include far-field objects obtained by processing (rotating and/or translating) at least one of the M far-field frames according to the VR wearable device; for example, the inserted N-M far-field frames may all be far-field objects obtained by processing (rotating and/or translating) the M-th far-field frame according to the posture of the VR wearable device. The difference from the first interpolation manner is therefore that the first manner directly inserts the M-th far-field frame at the missing-frame positions, whereas the second manner inserts far-field objects obtained by rotating and/or translating the M-th far-field frame according to the VR wearable device.
After the interpolation, the far-field object and the near-field object have the same number of frames, namely N, and Step 3 can be performed.
Step 3: Fuse the N far-field frames with the N near-field frames correspondingly.
Exemplarily, referring to FIG. 15A, the first near-field frame is fused with the first far-field frame to obtain the first fused image frame, the second near-field frame is fused with the inserted far-field frame to obtain the second fused image frame, and so on, obtaining N fused image frames.
Exemplarily, referring to FIG. 15B, the first near-field frame is fused with the first far-field frame to obtain the first fused image frame, and so on until the M-th near-field frame is fused with the M-th far-field frame to obtain the M-th fused image frame; the (M+1)-th far-field frame (the first inserted far-field frame) is fused with the (M+1)-th near-field frame to obtain the (M+1)-th fused image frame, and so on, obtaining N fused image frames.
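One simple way to realize the per-frame fusion of Step 3 is alpha compositing of the near-field layer over the far-field layer, as sketched below. The frame format (per-layer RGB plus alpha) is an assumption; the embodiment does not prescribe a specific compositing method.

```python
import numpy as np

# Minimal sketch (assumed frame format) of Step 3: composite each aligned pair of
# layers into one fused frame, drawing the near-field layer over the far-field one.
def fuse_layers(near_frames, far_frames):
    """Each frame is a dict: 'rgb' (HxWx3 array), 'alpha' (HxW array in [0, 1])."""
    assert len(near_frames) == len(far_frames)        # both padded to N by now
    fused = []
    for near, far in zip(near_frames, far_frames):
        a = near["alpha"][..., None]                  # broadcast over the color axis
        rgb = a * near["rgb"] + (1.0 - a) * far["rgb"]
        fused.append(rgb)
    return fused                                      # N fused image frames
```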
S6: Present the virtual image to the user through the virtual display device.
Exemplarily, taking FIG. 15A as an example, the N fused image frames are displayed through the virtual display device. Among the N fused image frames, the far-field object in the j-th fused frame is the same as that in the i-th fused frame, while the near-field objects are different, where i is smaller than j, for example, i=1 and j=2. This is because the far-field object in the j-th fused frame is a copy of the far-field object in the i-th fused frame, or is the far-field object in the i-th fused frame after rotation and/or translation. Therefore, from the user's point of view, the far-field object does not change while the near-field object changes.
Optionally, before S6, the method may further include the following step: determine the image refresh frame rate P of the virtual display device (for example, the VR wearable device), where P is greater than N and the image refresh frame rate indicates the number of image frames refreshed per unit time, and perform frame interpolation on the N fused image frames so that the number of fused image frames reaches P, ensuring that there are enough images to refresh on the display screen.
Exemplarily, referring to FIG. 15C, the fused images include N frames, the image refresh frame rate is P, and N is smaller than P. After the N-th fused image frame, P-N fused image frames are inserted; the P-N fused image frames inserted here may include at least one of the N fused image frames, for example, they may all be the N-th fused image frame.
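The refresh-rate padding before S6 can be sketched as follows; the inputs and names are assumed, and the optional pose warp mirrors the translated/rotated variant described above.

```python
# Minimal sketch (assumed inputs): pad the N fused frames up to the display's
# refresh frame rate P, either by repeating an existing frame or by pose-warping it.
def pad_to_refresh_rate(fused, p, warp=None, pose_deltas=None):
    out = list(fused)
    last = fused[-1]
    for k in range(p - len(fused)):                   # insert P - N extra frames
        if warp is None:
            out.append(last)                          # copy of an existing fused frame
        else:
            out.append(warp(last, pose_deltas[k]))    # translated/rotated per device pose
    return out                                        # P frames sent to the display
```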
In the above embodiments, when rendering images, the near-field object is rendered at a high image rendering frame rate and the far-field object at a low image rendering frame rate. There is a situation in which, while wearing the VR wearable device and watching the rendered images, the user may pay attention to the far-field object. If it is determined that the user is paying attention to the far-field object, the image rendering frame rate corresponding to the far-field object can be increased, and/or the image rendering frame rate of the near-field object can be decreased.
In other words, the image rendering frame rate of a virtual object can be adjusted according to changes in the user's degree of attention to that virtual object: when the user pays attention to the virtual object, the image rendering frame rate corresponding to it is increased; when the user does not pay attention to it, the image rendering frame rate corresponding to it is decreased. Exemplarily, the VR wearable device may determine the user's degree of attention to a virtual object from the degree of the user's interaction with that virtual object. For example, if it is detected that the user interacts with the far-field object frequently, it is determined that the user is paying attention to the far-field object. Alternatively, the VR wearable device determines through eye tracking that the user's eyes are gazing at the far-field object, and then determines that the user is paying attention to the far-field object.
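A minimal sketch of this attention-driven adjustment is given below. The dwell-time and interaction-count thresholds, the field names, and the two rate values are assumptions chosen only to illustrate the rule described above.

```python
# Minimal sketch (assumed thresholds and field names): raise the rendering rate of
# the object the user currently attends to and lower the rates of the others.
HIGH_FPS, LOW_FPS = 90, 30
GAZE_MIN_S, INTERACTIONS_MIN = 0.5, 3        # illustrative attention thresholds

def update_render_fps(objects, gaze_target, dwell_s, interaction_counts):
    for obj in objects:
        attended = (obj is gaze_target and dwell_s >= GAZE_MIN_S) or \
                   interaction_counts.get(obj["name"], 0) >= INTERACTIONS_MIN
        obj["render_fps"] = HIGH_FPS if attended else LOW_FPS
    return objects
```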
In the above embodiments, different rendering frame rates are used for the near-field object and the far-field object. In other embodiments, the multiple virtual objects to be rendered may also be divided into more depth levels according to the image depth information, for example, into a first object, a second object, and a third object, where the first image depth of the first object is smaller than the third image depth of the third object, and the third image depth of the third object is smaller than the second image depth of the second object. The first object may be called a "near-field object", the third object a "mid-field object", and the second object a "far-field object".
Exemplarily, the first image depth of the first object is smaller than a first threshold, the third image depth of the third object is greater than the first threshold and smaller than a second threshold, and the second image depth of the second object is greater than the second threshold. The specific values of the first threshold and the second threshold are not limited in this embodiment of the present application. Exemplarily, the depth threshold ranges of the near-field, mid-field, and far-field objects are shown in Table 2 below:
Table 2: Image depth ranges of near-field, mid-field, and far-field objects
Object             Image depth
Near-field object  0.1 m to 10 m
Mid-field object   10 m to 100 m
Far-field object   100 m to 1000 m
The first image rendering frame rate N of the near-field object is greater than the third image rendering frame rate K of the mid-field object, and the third image rendering frame rate K of the mid-field object is greater than the second image rendering frame rate M of the far-field object.
One implementation is shown in FIG. 16A: within a certain duration, N near-field frames, K mid-field frames, and M far-field frames are rendered, with N greater than K and K greater than M. Since the far-field and mid-field objects have fewer frames, frame interpolation is needed; for example, N-K mid-field frames are inserted (the inserted N-K mid-field frames may be copies of at least one of the K mid-field frames), and N-M far-field frames are inserted (the inserted N-M far-field frames may be copies of at least one of the M far-field frames). In this way, the near-field, mid-field, and far-field objects all have N frames and can be fused correspondingly to obtain N fused image frames. If N is smaller than the image refresh frame rate P, P-N fused image frames are further inserted to obtain P fused image frames for display. When inserting the P-N fused image frames, the inserted P-N fused image frames may be images obtained by translating and/or rotating at least one of the N fused image frames according to the posture of the VR wearable device.
Another implementation is shown in FIG. 16B: within a certain duration, N near-field frames, K mid-field frames, and M far-field frames are rendered, with N greater than K and K greater than M. Since the far-field and mid-field objects have fewer frames, frame interpolation is needed; for example, N-K mid-field frames are inserted (the inserted N-K mid-field frames may be mid-field objects obtained by rotating and/or translating at least one of the K mid-field frames according to the posture of the VR wearable device), and N-M far-field frames are inserted (the inserted N-M far-field frames may be far-field objects obtained by rotating and/or translating at least one of the M far-field frames according to the posture of the VR wearable device). In this way, the near-field, mid-field, and far-field objects all have N frames and can be fused correspondingly to obtain N fused image frames.
The rendering process is described below taking the application scenario shown in FIG. 7 as an example, where the first image rendering frame rate of the near-field object is N=60, the third image rendering frame rate of the mid-field object is K=45, and the second image rendering frame rate of the far-field object is M=30. In the scene of FIG. 7, the near-field object is the little boy, the mid-field object is the boat, and the far-field object is the mountain.
As shown in FIG. 17, taking a unit time of 60 ms as an example, 60 near-field frames, 45 mid-field frames, and 30 far-field frames are rendered per unit time. Specifically, one near-field frame is rendered every 1 ms, one mid-field frame every 1.33 ms, and one far-field frame every 2 ms. For example, if the near-field, mid-field, and far-field objects start rendering at the same time, then the first near-field frame, the first mid-field frame, and the first far-field frame are rendered at the 1st ms, the second near-field frame is rendered at the 2nd ms, the second mid-field frame at 2.33 ms, the third near-field frame and the second far-field frame at the 3rd ms, and so on. Therefore, the near-field object has the most rendered frames per unit time, the mid-field object the next most, and the far-field object the fewest. It should be noted that a unit time of 60 ms is used here as an example; in practice the unit time can be a period of any length, for example 1 s (i.e., 1000 ms).
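The per-layer render timestamps used in FIG. 17 can be generated as in the sketch below; the unit time, rates, and the convention that all layers start at the 1st ms follow the example above, while the function and key names are assumed.

```python
# Minimal sketch (60 ms unit time as in FIG. 17): compute the render timestamps of
# each layer from its frame count within the unit time.
def render_schedule(unit_ms=60, rates=None):
    rates = rates or {"near": 60, "mid": 45, "far": 30}
    times = {}
    for layer, n_frames in rates.items():
        period = unit_ms / n_frames               # near: 1 ms, mid: ~1.33 ms, far: 2 ms
        # all layers start together at the 1st ms, then step by their own period
        times[layer] = [round(1 + i * period, 2) for i in range(n_frames)]
    return times

# render_schedule()["far"][:3] -> [1.0, 3.0, 5.0]
# render_schedule()["mid"][:3] -> [1.0, 2.33, 3.67]
```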
After rendering, the 60 near-field frames, 45 mid-field frames, and 30 far-field frames can be aligned. For the alignment principle, see the two alignment manners provided above; here the foregoing first manner is taken as an example, i.e., near-field, mid-field, and far-field frames whose rendering times are close are aligned.
Exemplarily, referring again to FIG. 17, the first mid-field frame is already aligned with the first near-field frame. The rendering time of the second mid-field frame is 2.33 ms, which is close to the second near-field frame, so the second mid-field frame is aligned with the second near-field frame, as shown in FIG. 18. The rendering time of the third mid-field frame is 3.66 ms, which is close to the rendering time of the fourth near-field frame (i.e., the 4th ms), so the third mid-field frame is aligned with the fourth near-field frame, and so on. Since the far-field object is already aligned with the near-field object, no further alignment is needed.
After the near-field, mid-field, and far-field objects are aligned, frames can be inserted for the missing objects.
Exemplarily, referring to FIG. 19, the mid-field object is short of 60-45=15 frames, so 15 mid-field frames need to be inserted at the missing positions; for example, one mid-field frame is inserted between the second mid-field frame and the third mid-field frame. The mid-field frame inserted here may be the previous mid-field frame (i.e., the second mid-field frame) or an object obtained by processing (rotating and/or translating) the previous mid-field frame according to the posture of the VR wearable device. By analogy, after 15 mid-field frames are inserted, the number of mid-field frames reaches 60.
Still referring to FIG. 19, the far-field object is short of 60-30=30 frames, so 30 far-field frames need to be inserted. As shown in FIG. 19, one far-field frame is inserted between the first far-field frame and the second far-field frame, and the far-field frame inserted here may be the previous frame (i.e., the first far-field frame) or an object obtained by processing (rotating and/or translating) the previous far-field frame according to the posture of the VR wearable device. Similarly, one far-field frame is inserted between the second far-field frame and the third far-field frame, and the far-field frame inserted here may be the previous far-field frame (i.e., the second far-field frame) or an object obtained by processing (rotating and/or translating) the previous far-field frame according to the posture of the VR wearable device. By analogy, after 30 far-field frames are inserted, the number of far-field frames reaches 60.
When the near-field, mid-field, and far-field objects all reach 60 frames, they can be fused correspondingly.
Exemplarily, referring to FIG. 20, the first near-field frame, the first mid-field frame, and the first far-field frame are fused to obtain the first fused image frame; the second near-field frame, the second mid-field frame, and the inserted far-field frame are fused to obtain the second fused image frame; and so on, obtaining 60 fused image frames.
It should be understood that, because frame interpolation has been performed on the mid-field and far-field objects, the mid-field and far-field objects change slowly across the different fused images. For example, in FIG. 20, the virtual display device displays the 60 fused image frames in sequence, and the mid-field object in the third fused frame is the same as that in the second fused frame. This is because the mid-field object in the third fused frame is a copy of the mid-field object in the second fused frame, or is the mid-field object in the second fused frame after processing (rotation and/or translation); therefore, when the display refreshes from the second fused frame to the third fused frame, the mid-field object appears unchanged to the user. However, the near-field object in the third fused frame differs from that in the second fused frame; as shown in FIG. 20, the form of the near-field object (the posture of the little boy) in the third fused frame has changed relative to the second fused frame. It should be understood that more frames are interpolated for the far-field object, so the far-field object appears to change the most slowly. Therefore, when the 60 fused image frames are refreshed, the user sees the far-field object changing the most slowly, the mid-field object next, and the near-field object changing the fastest. In general, the user pays more attention to near-field objects, so ensuring that the near-field object changes in real time improves the viewing experience; the user pays relatively little attention to mid-field or far-field objects, so their relatively slow changes do not affect the user experience, and rendering power consumption can also be saved.
Therefore, if the same virtual object is placed at different image depths, the virtual object presents different effects at the different depth positions. For example, if the same virtual object (such as the video playback interface in FIG. 9) is placed at both a near-field position and a far-field position, the rendering frame rate corresponding to the virtual object at the near-field position is higher, so the virtual object at the near-field position changes quickly and smoothly, whereas the rendering frame rate corresponding to the virtual object at the far-field position is lower, so the object at the far-field position changes slowly and appears choppy.
It should be noted that, as introduced above, there are existing solutions that render images at a low rendering frame rate; for example, in FIG. 4, the image rendering frame rate is 30, lower than the image refresh frame rate of 90. However, this low rendering frame rate applies to the entire image; in other words, all virtual objects in each image correspond to the same rendering frame rate, namely 30 frames. In such a solution, the rendering frame rate of the near-field objects is too low, which leads to large trigger latency, judder, and other phenomena. In the embodiments of the present application, by contrast, different virtual objects in one image correspond to different rendering frame rates: the near-field objects can use a higher rendering frame rate to ensure their viewing experience, while the mid-field and far-field objects can use relatively lower rendering frame rates, reducing rendering power consumption without affecting the user experience.
In addition, in the current solutions, black edges appear when the posture of the VR wearable device changes, as shown in FIG. 6. This is because the image inserted during frame interpolation is an image that has been processed (rotated and/or translated), so a black edge appears in the non-overlapping part between the inserted image and the rendered image (for example, the i-th image frame). Because in the current solutions all virtual objects in the entire image correspond to the same image rendering frame rate, all virtual objects need the same number of inserted image frames and the inserted images undergo the same rotation and/or translation, so the black edges corresponding to all virtual objects are the same.
In the embodiments of the present application, the near-field object and the far-field object correspond to different rendering frame rates, so the numbers of frames to be interpolated for the near-field object and the far-field object are different, and correspondingly the black edges of the near-field object and the far-field object are different. For example, one near-field frame is inserted between the i-th near-field frame and the (i+1)-th near-field frame, and the inserted near-field frame is obtained by processing the i-th near-field frame according to the posture of the VR wearable device. Suppose the width of the non-overlapping part between the inserted near-field frame and the i-th near-field frame is equal to the displacement of the VR wearable device. Because the image rendering frame rate of the near-field object is high, the time interval between the i-th and (i+1)-th near-field frames is short; within this interval, for a given movement speed of the VR wearable device, the displacement of the device is small, so the width of the non-overlapping part between the inserted near-field frame and the i-th near-field frame is small. By the same reasoning, when a far-field frame is inserted between the i-th far-field frame and the (i+1)-th far-field frame, the inserted far-field frame is obtained by processing the i-th far-field frame according to the posture of the VR wearable device. Suppose the width of the non-overlapping part between the inserted far-field frame and the i-th far-field frame is equal to the displacement of the VR wearable device. Because the image rendering frame rate of the far-field object is low, the time interval between the i-th and (i+1)-th far-field frames is long; within this interval, for a given movement speed of the VR wearable device, the displacement of the device is large, so the width of the non-overlapping part between the inserted far-field frame and the i-th far-field frame is large. Therefore, the width of the black edge corresponding to the near-field object is smaller than the width of the black edge corresponding to the far-field object.
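The relationship in the preceding paragraph can be made concrete with the small worked example below; the pixel units and the assumption that the non-overlap width equals the device displacement over one frame interval follow the text, while the numbers are illustrative.

```python
# Minimal sketch (assumed units) of the reasoning above: the non-overlapping
# ("black edge") width grows with the inter-frame interval, which is longer for
# layers rendered at a lower frame rate.
def black_edge_width_px(render_fps, device_speed_px_per_s):
    frame_interval_s = 1.0 / render_fps
    return device_speed_px_per_s * frame_interval_s   # displacement over one interval

# At the same head speed, the near-field layer shows a narrower edge:
# black_edge_width_px(60, 300) -> 5.0 px   vs   black_edge_width_px(30, 300) -> 10.0 px
```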
The above embodiments are described taking the first object being a near-field object and the second object being a far-field object as an example. It can be understood that the first object and the second object may also be determined not by image depth but in other ways. For example, the first object may be a virtual object at the center of the image to be rendered and the second object a virtual object at the edge of the image to be rendered; or the first object may be an object or object type set by default by the system or specified by the user, and the second object all objects in the image to be rendered other than the first object, and so on.
In short, different image rendering frame rates can be used for different virtual objects to be rendered, and the rendering principle is the same as that for near-field and far-field objects.
Based on the same concept, FIG. 21 shows an electronic device 2000 provided by the present application. The electronic device 2000 may be the mobile phone described above. As shown in FIG. 21, the electronic device 2000 may include one or more processors 2001, one or more memories 2002, a communication interface 2003, and one or more computer programs 2004, where the above components may be connected through one or more communication buses 2005. The one or more computer programs 2004 are stored in the memory 2002 and configured to be executed by the one or more processors 2001; the one or more computer programs 2004 include instructions, and the instructions can be used to perform the relevant steps of the mobile phone in the corresponding embodiments above. The communication interface 2003 is used to implement communication with other devices; for example, the communication interface may be a transceiver.
In the embodiments provided above, the methods provided by the embodiments of the present application are described from the perspective of the electronic device (for example, a mobile phone) as the execution subject. In order to implement the functions in the methods provided by the embodiments of the present application, the electronic device may include a hardware structure and/or a software module, and implement the above functions in the form of a hardware structure, a software module, or a hardware structure plus a software module. Whether a certain function among the above functions is executed in the form of a hardware structure, a software module, or a hardware structure plus a software module depends on the specific application and design constraints of the technical solution.
As used in the above embodiments, depending on the context, the term "when" or "after" may be interpreted to mean "if", "after", "in response to determining", or "in response to detecting". Similarly, depending on the context, the phrase "upon determining" or "if (a stated condition or event) is detected" may be interpreted to mean "if it is determined", "in response to determining", "when (the stated condition or event) is detected", or "in response to detecting (the stated condition or event)". In addition, in the above embodiments, relational terms such as first and second are used to distinguish one entity from another and do not limit any actual relationship or order between these entities.
Reference in this specification to "one embodiment" or "some embodiments" and the like means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, statements such as "in one embodiment", "in some embodiments", "in other embodiments", and "in still other embodiments" appearing in different places in this specification do not necessarily all refer to the same embodiment, but rather mean "one or more but not all embodiments", unless specifically emphasized otherwise. The terms "include", "comprise", "have", and their variants all mean "including but not limited to", unless specifically emphasized otherwise.
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, they may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the present application are generated in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wire (for example, coaxial cable, optical fiber, or digital subscriber line (DSL)) or wirelessly (for example, infrared, radio, or microwave). The computer-readable storage medium may be any usable medium accessible to a computer, or a data storage device such as a server or data center integrating one or more usable media. The usable medium may be a magnetic medium (for example, a floppy disk, hard disk, or magnetic tape), an optical medium (for example, a DVD), or a semiconductor medium (for example, a solid state disk (SSD)), etc. Where there is no conflict, the solutions of the above embodiments may be used in combination.
It should be noted that a portion of this patent application document contains material that is subject to copyright protection. The copyright owner reserves the copyright, except for the making of copies of the patent document or the recorded patent document contents of the Patent Office.

Claims (20)

  1. A display method, comprising:
    presenting N frames of image to a user through a display device;
    wherein a first object at a first depth of field in a j-th frame of image among the N frames of image is the same as a first object at the first depth of field in an i-th frame of image; a second object at a second depth of field in the j-th frame of image is different from a second object at the second depth of field in the i-th frame of image; and N, i and j are positive integers, with i smaller than j.
  2. The method according to claim 1, wherein the first depth of field is greater than the second depth of field.
  3. The method according to claim 1 or 2, wherein the first depth of field is greater than a first threshold, and/or the second depth of field is smaller than a second threshold, and the first threshold is greater than or equal to the second threshold.
  4. The method according to any one of claims 1 to 3, wherein when the depth of field of the user's gaze point changes, the second depth of field changes.
  5. The method according to claim 4, wherein the second depth of field is the depth of field at which the user's gaze point is located.
  6. The method according to any one of claims 1 to 5, wherein i=j-1.
  7. The method according to any one of claims 1 to 6, wherein
    the first object at the first depth of field in the j-th frame of image among the N frames of image being the same as the first object at the first depth of field in the i-th frame of image comprises:
    at least one of the action, position, shape, color or size of the first object being the same in the j-th frame of image and the i-th frame of image; and
    the second object at the second depth of field in the j-th frame of image being different from the second object at the second depth of field in the i-th frame of image comprises:
    at least one of the action, position, shape, color or size of the second object being different in the j-th frame of image and the i-th frame of image.
  8. The method according to any one of claims 1 to 7, wherein the first object and the second object are both changing objects.
  9. The method according to any one of claims 1 to 8, wherein
    the first object comprises one or more of the following types: a virtual object, a display object or an interface; and/or
    the second object comprises one or more of the following types: a virtual object, a display object or an interface.
  10. The method according to any one of claims 1 to 9, wherein the first object and the second object are of different types.
  11. The method according to any one of claims 1 to 10, wherein the first object at the first depth of field in the j-th frame of image being the same as the first object at the first depth of field in the i-th frame of image comprises:
    the first object at the first depth of field in the j-th frame of image being a copy of the first object at the first depth of field in the i-th frame of image; or
    the first object at the first depth of field in the j-th frame of image being the first object at the first depth of field in the i-th frame of image after translation and/or rotation.
  12. The method according to any one of claims 1 to 10, wherein the second object at the second depth of field in the j-th frame of image being different from the second object at the second depth of field in the i-th frame of image comprises:
    the second object at the second depth of field in the j-th frame of image and the second object at the second depth of field in the i-th frame of image being different objects; and/or
    the second object at the second depth of field in the j-th frame of image and the second object at the second depth of field in the i-th frame of image being different forms of the same object.
  13. The method according to any one of claims 1 to 12, wherein before the presenting N frames of image to the user through the display device, the method further comprises:
    generating, within a certain duration, M frames of first object image and N frames of second object image, M and N being positive integers and M being smaller than N;
    inserting N-M frames of first object image into the M frames of first object image, wherein the inserted N-M frames of first object image are copies of at least one frame of first object image among the M frames of first object image, or are images of the at least one frame of first object image after rotation and/or translation; and
    fusing the N frames of first object image and the N frames of second object image correspondingly to obtain the N frames of image.
  14. The method according to claim 13, wherein the inserting N-M frames of first object image into the M frames of first object image comprises:
    making M frames of second object image among the N frames of second object image correspond to the M frames of first object image, wherein the M frames of second object image and the M frames of first object image are adjacent in generation time; and
    inserting N-M frames of first object image, wherein the inserted N-M frames of first object image correspond to the remaining N-M frames of second object image among the N frames of second object image.
  15. The method according to claim 13 or 14, wherein
    the M frames of first object image are images obtained by rendering the first object according to the postures of the display device at M moments respectively; and
    the N frames of second object image are images obtained by rendering the second object according to the postures of the display device at N moments respectively, the M moments and the N moments being within the certain duration.
  16. The method according to any one of claims 13 to 15, wherein the presenting N frames of image to the user through the display device comprises:
    in a case where N is smaller than an image refresh rate P of the display device, inserting P-N frames of the image into the N frames of image, wherein the inserted P-N frames of image are copies of at least one frame of image among the N frames of image, or are images of the at least one frame of image after rotation and/or translation; and
    presenting P frames of image to the user through the display device, P being a positive integer.
  17. The method according to any one of claims 1 to 16, wherein the method further comprises:
    displaying W frames of image through the display device when the user pays attention to the first object at the first depth of field;
    wherein an object at the second depth of field in a t-th frame of image among the W frames of image is the same as an object at the second depth of field in an r-th frame of image, and an object at the first depth of field in the t-th frame of image is different from an object at the first depth of field in the r-th frame of image; and W, t and r are positive integers, with r smaller than t.
  18. An electronic device, comprising:
    a processor, a memory, and one or more programs;
    wherein the one or more programs are stored in the memory, the one or more programs comprise instructions, and when the instructions are executed by the processor, the electronic device is caused to perform the method steps according to any one of claims 1 to 17.
  19. A computer-readable storage medium, wherein the computer-readable storage medium is used to store a computer program, and when the computer program runs on a computer, the computer is caused to perform the method according to any one of claims 1 to 17.
  20. A computer program product, comprising a computer program, wherein when the computer program runs on a computer, the computer is caused to perform the method according to any one of claims 1 to 17.
PCT/CN2022/089315 2021-05-07 2022-04-26 一种显示方法与电子设备 WO2022233256A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110496915.6 2021-05-07
CN202110496915.6A CN115309256A (zh) 2021-05-07 2021-05-07 一种显示方法与电子设备

Publications (1)

Publication Number Publication Date
WO2022233256A1 true WO2022233256A1 (zh) 2022-11-10

Family

ID=83853107

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/089315 WO2022233256A1 (zh) 2021-05-07 2022-04-26 一种显示方法与电子设备

Country Status (2)

Country Link
CN (1) CN115309256A (zh)
WO (1) WO2022233256A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117205549A (zh) * 2022-11-30 2023-12-12 腾讯科技(深圳)有限公司 画面渲染方法、装置、设备、存储介质及程序产品

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108289175A (zh) * 2018-02-05 2018-07-17 黄淮学院 一种低延迟虚拟现实显示方法及显示系统
CN108734626A (zh) * 2017-04-17 2018-11-02 英特尔公司 通过标记对象来编码3d渲染图像
US20210049983A1 (en) * 2019-08-16 2021-02-18 Facebook Technologies, Llc Display rendering
CN112700377A (zh) * 2019-10-23 2021-04-23 华为技术有限公司 图像泛光处理方法及装置、存储介质


Also Published As

Publication number Publication date
CN115309256A (zh) 2022-11-08


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22798597

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 22798597

Country of ref document: EP

Kind code of ref document: A1