WO2018086295A1 - Method and apparatus for displaying an application interface

Method and apparatus for displaying an application interface

Info

Publication number
WO2018086295A1
Authority
WO
WIPO (PCT)
Prior art keywords
eye image
interface
displayed
right eye
left eye
Application number
PCT/CN2017/078027
Other languages
English (en)
Chinese (zh)
Inventor
曹海恒
谭利文
孙伟
陈心
Original Assignee
华为技术有限公司
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority to CN201780010154.0A (published as CN108604385A)
Publication of WO2018086295A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering

Definitions

  • The present application relates to the field of computer applications, and in particular to an application interface display method and apparatus.
  • Virtual Reality (VR) technology is a computer simulation technology that can create and let users experience virtual worlds. It uses a computer to generate a simulated environment and immerses the user in an interactive, three-dimensional dynamic simulation of vision and entity behavior fused from multiple information sources. With the rising demand for quality of life, the development of virtual reality display technology has become a focus of public attention.
  • A virtual reality device requires that a left eye image and a right eye image be rendered separately to produce a stereoscopic effect, whereas most existing application interfaces are two-dimensional (2D) and cannot meet this requirement, so a large number of applications cannot be used in virtual reality systems.
  • In the prior art, a virtual reality scene is written into a frame buffer of the Android system in a left-right split-screen manner by using Open Graphics Library (OpenGL) functions, and the Android system reads the content of the frame buffer to perform rendering, so that the two-dimensional application interface is rendered as left eye and right eye images simultaneously and acquires a three-dimensional effect.
  • However, rendering a two-dimensional application interface into an image with a three-dimensional visual effect takes a long time; the rendered result lags behind and cannot stay attached to the user's field of view, which easily causes a visual misalignment for the user, leading to dizziness and a poor experience.
  • The embodiment of the present application provides an application interface display method that can avoid the dizziness caused by misalignment between the image position and the user's field of view due to changes in the user's head posture while a two-dimensional application interface is being rendered into an image with a three-dimensional visual effect, thereby enhancing the user experience.
  • A first aspect of the present application provides an application interface display method, used by an application interface display device to display an interface of a two-dimensional application on a VR device. The method includes:
  • The application interface display device obtains an interface to be displayed, where the interface to be displayed is an interface of the two-dimensional application.
  • The interface to be displayed is subjected to dimension conversion processing to obtain a first left eye image and a first right eye image corresponding to the interface to be displayed; the first left eye image and the first right eye image are used to present the interface to be displayed with a three-dimensional visual effect. The application interface display device then acquires the user's current head posture, that is, the first head posture, adjusts the first left eye image and the first right eye image according to the first head posture to obtain a second left eye image and a second right eye image, and finally displays the second left eye image in the left eye view area of the VR device and the second right eye image in the right eye view area of the VR device.
  • The dimension conversion processing refers to converting an interface of a two-dimensional application into an interface with a three-dimensional visual effect.
  • The left eye image refers to an image generated for the user's left eye field of view, and the right eye image refers to an image generated for the user's right eye field of view.
  • The VR device is divided into a left eye view area and a right eye view area according to the user's left and right eyes: the left eye view area is the screen area or optical lens group in the VR device that is aligned with the user's left eye field of view, and the right eye view area is the screen area or optical lens group in the VR device that is aligned with the user's right eye field of view.
  • In the embodiment of the present application, the user's current head posture is obtained, the images corresponding to the left and right screens are adjusted according to the head posture, and the adjusted images are then displayed on the left and right screens respectively. That is to say, after the dimension conversion of the interface to be displayed yields an image with a three-dimensional visual effect, the converted result is further adjusted according to the user's latest head posture, so that the position of the finally displayed image matches the user's field of view more closely. This avoids the dizziness caused by misalignment between the image position and the user's field of view due to changes in the user's head posture while the two-dimensional application interface is being rendered into an image with a three-dimensional visual effect, thereby improving the user experience.
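  • For illustration only, the overall flow of the method can be sketched as follows in C++-style pseudocode; every type and function name here (DimensionConvert, TimeWarp, and so on) is an illustrative assumption rather than an API of the application.

```cpp
#include <cstdio>
#include <utility>

struct Image { int id = 0; };                       // placeholder for an eye buffer
struct HeadPose { float yaw = 0, pitch = 0, roll = 0; };

// Hypothetical stand-ins for the stages named in the method.
std::pair<Image, Image> DimensionConvert(const Image& ui) {
    return {Image{ui.id + 1}, Image{ui.id + 2}};    // binocular render + barrel distortion
}
HeadPose ReadHeadPose() { return {}; }              // query the headset sensor
Image TimeWarp(const Image& img, const HeadPose&) { return img; }  // asynchronous time warp
void Display(const char* area, const Image& img) { std::printf("%s <- image %d\n", area, img.id); }

// One display cycle of the first-aspect method.
void ShowFrame(const Image& interfaceToDisplay) {
    // Dimension conversion: 2D interface -> first left/right eye images.
    auto [firstLeft, firstRight] = DimensionConvert(interfaceToDisplay);
    // Acquire the user's latest head posture (the first head posture) ...
    HeadPose firstPose = ReadHeadPose();
    // ... and adjust both images to it -> second left/right eye images.
    Image secondLeft = TimeWarp(firstLeft, firstPose);
    Image secondRight = TimeWarp(firstRight, firstPose);
    // Display in the left and right eye view areas of the VR device.
    Display("left eye view area", secondLeft);
    Display("right eye view area", secondRight);
}
```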
  • The process of performing the dimension conversion processing by the application interface display device may specifically include:
  • The application interface display device performs binocular rendering on the interface to be displayed to obtain a third left eye image and a third right eye image of the interface to be displayed, and then performs barrel distortion processing on the third left eye image and the third right eye image to obtain the first left eye image and the first right eye image of the interface to be displayed.
  • Because the magnification of an optical system in the region away from the optical axis is lower than near the optical axis, the scene appears to bulge outward in a barrel shape in the image plane; this is called barrel distortion. The barrel distortion processing in the present application is used to counteract the distortion produced by the optical lenses in the VR device.
  • That is, after obtaining an image with a three-dimensional visual effect through binocular rendering, the embodiment of the present application also performs barrel distortion processing on the image to offset the distortion generated by the optical lenses in the VR device, thereby improving image quality and enhancing the user experience.
  • The process in which the application interface display device performs binocular rendering on the interface to be displayed to obtain the third left eye image and the third right eye image may specifically include:
  • After acquiring the interface to be displayed, the application interface display device acquires the user's current head posture, that is, the second head posture, then determines a first region and a second region according to the second head posture, draws the interface to be displayed in the first region to obtain the third left eye image, and draws the interface to be displayed in the second region to obtain the third right eye image, where the first region is the region for displaying the interface to be displayed in the left eye image of a preset three-dimensional scene, and the second region is the region for displaying the interface to be displayed in the right eye image of the preset three-dimensional scene.
  • The embodiment of the present application performs binocular rendering by drawing the interface to be displayed into the preset three-dimensional scene, so that the user can browse the interface to be displayed within the preset three-dimensional scene, which improves the flexibility of the solution and further enhances the user experience.
  • The process in which the application interface display device adjusts the first left eye image and the first right eye image according to the first head posture to obtain the second left eye image and the second right eye image may specifically include:
  • The application interface display device performs asynchronous time warping on the first left eye image according to the first head posture to obtain the second left eye image, and performs asynchronous time warping on the first right eye image to obtain the second right eye image.
  • Asynchronous time warping is an image correction technique: when the head moves too fast for scene rendering to keep up, that is, the head has already turned but the image has not yet been rendered, asynchronous time warping solves the delay problem by warping the image before it is sent to the display device.
  • The embodiment of the present application provides a specific manner of adjusting an image, which improves the realizability of the solution.
  • The application interface display device obtaining the interface to be displayed may specifically be: the application interface display device obtains the interface to be displayed from a mobile terminal.
  • The application interface display device displaying the second left eye image in the left eye view area of the VR device and the second right eye image in the right eye view area of the VR device may specifically be: the application interface display device sends the second left eye image and the second right eye image to the mobile terminal, where the screen of the mobile terminal includes a third area corresponding to the left eye view area of the VR device and a fourth area corresponding to the right eye view area of the VR device, and the mobile terminal displays the second left eye image in the third area of the screen and the second right eye image in the fourth area of the screen.
  • The embodiment of the present application provides a specific manner of obtaining the interface to be displayed and displaying it, which improves the realizability of the solution.
  • The mobile terminal includes a SurfaceFlinger module, where SurfaceFlinger is the module responsible for display composition in the Android system: it can calculate the position of each layer in the final composite image, generate the final display buffer, and display it on a particular display device.
  • The application interface display device may obtain the interface to be displayed from the mobile terminal by acquiring it from the SurfaceFlinger module;
  • and the application interface display device may send the second left eye image and the second right eye image to the mobile terminal by sending them to the SurfaceFlinger module, so that the SurfaceFlinger module displays the second left eye image in the third area of the screen of the mobile terminal and the second right eye image in the fourth area of the screen.
  • The application interface display device in the embodiment of the present application may be a device independent of the Android system; that is, the application interface display method in the embodiment of the present application need not depend on the Android system, which can reduce the computing load of the mobile terminal. When the algorithm used in the method needs to be updated, the update can be performed independently of the Android system; and when the internal architecture of the Android system is updated, the algorithm used in the method does not need to be modified accordingly, so the flexibility and versatility are higher.
  • A second aspect of the present application provides an application interface display device for displaying an interface of a two-dimensional application on a VR device, the device comprising:
  • a first acquiring module configured to acquire an interface to be displayed, where the interface to be displayed is an interface of a two-dimensional application;
  • a processing module configured to perform dimension conversion processing on the interface to be displayed acquired by the first acquiring module, to obtain a first left eye image and a first right eye image corresponding to the interface to be displayed, where the first left eye image and the first right eye image are used to present the interface to be displayed with a three-dimensional visual effect;
  • a second acquiring module configured to acquire a first head posture of the user;
  • an adjusting module configured to adjust, according to the first head posture acquired by the second acquiring module, the first left eye image to obtain a second left eye image and the first right eye image to obtain a second right eye image;
  • a display module configured to display the second left eye image in the left eye view area of the VR device, and display the second right eye image in the right eye view area of the VR device.
  • In the embodiment of the present application, the user's current head posture is obtained, the images corresponding to the left and right screens are adjusted according to the head posture, and the adjusted images are then displayed on the left and right screens respectively. That is to say, after the dimension conversion of the interface to be displayed yields an image with a three-dimensional visual effect, the converted result is further adjusted according to the user's latest head posture, so that the position of the finally displayed image matches the user's field of view more closely. This avoids the dizziness caused by misalignment between the image position and the user's field of view due to changes in the user's head posture while the two-dimensional application interface is being rendered into an image with a three-dimensional visual effect, thereby improving the user experience.
  • The processing module may specifically include:
  • a rendering unit configured to perform binocular rendering on the interface to be displayed, to obtain a third left eye image and a third right eye image of the interface to be displayed;
  • a processing unit configured to perform barrel distortion processing on the third left eye image and the third right eye image to obtain the first left eye image and the first right eye image of the interface to be displayed.
  • The processing unit in the embodiment of the present application can perform distortion processing on the images to offset the distortion generated by the optical lenses in the VR device, improving image quality and enhancing the user experience.
  • The rendering unit may specifically include:
  • a first acquiring subunit configured to acquire a second head posture of the user;
  • a determining subunit configured to respectively determine a first region and a second region according to the second head posture, where the first region is the region for displaying the interface to be displayed in the left eye image of a preset three-dimensional scene, and the second region is the region for displaying the interface to be displayed in the right eye image of the preset three-dimensional scene;
  • a drawing subunit configured to draw the interface to be displayed in the first region to obtain the third left eye image, and draw the interface to be displayed in the second region to obtain the third right eye image.
  • The rendering unit in the embodiment of the present application draws the interface to be displayed into the preset three-dimensional scene through the drawing subunit, so that the user can browse the interface to be displayed within the preset three-dimensional scene, which improves the flexibility of the solution and further enhances the user experience.
  • The adjusting module may specifically include:
  • a time warping unit configured to perform asynchronous time warping on the first left eye image according to the first head posture to obtain the second left eye image, and perform asynchronous time warping on the first right eye image to obtain the second right eye image.
  • The embodiment of the present application provides a specific manner in which the adjusting module adjusts an image, which improves the realizability of the solution.
  • The first acquiring module may specifically include:
  • an acquiring unit configured to acquire the interface to be displayed from a mobile terminal;
  • and the display module may specifically include:
  • a sending unit configured to send the second left eye image and the second right eye image to the mobile terminal, so that the mobile terminal displays the second left eye image in the third area of its screen and the second right eye image in the fourth area of its screen,
  • where the screen of the mobile terminal includes a third area corresponding to the left eye view area of the VR device and a fourth area corresponding to the right eye view area of the VR device.
  • The embodiment of the present application provides a specific manner in which the acquiring unit obtains the interface to be displayed and the display module displays it, which improves the realizability of the solution.
  • The mobile terminal includes a SurfaceFlinger module;
  • the acquiring unit may specifically include:
  • a second acquiring subunit configured to acquire the interface to be displayed from the SurfaceFlinger module;
  • and the sending unit may specifically include:
  • a sending subunit configured to send the second left eye image and the second right eye image to the SurfaceFlinger module, so that the SurfaceFlinger module displays the second left eye image in the third area of the screen of the mobile terminal and the second right eye image in the fourth area of the screen.
  • The application interface display device in the embodiment of the present application may be a device independent of the Android system; that is, the method in the embodiment of the present application may be executed independently of the Android system, which can reduce the computing load of the mobile terminal. When the algorithm used in the method needs to be updated, the update can be performed independently of the Android system; and when the internal architecture of the Android system is updated, the algorithm used in the method does not need to be modified accordingly, so the flexibility and versatility are higher.
  • A third aspect of the present application provides a computer readable storage medium having instructions stored therein which, when run on a computer, cause the computer to perform the method according to the first aspect or any one of the first to fifth implementation manners of the first aspect.
  • A fourth aspect of the present application provides a computer program product comprising instructions which, when run on a computer, cause the computer to perform the method according to the first aspect or any one of the first to fifth implementation manners of the first aspect.
  • The embodiments of the present application have the following advantages:
  • The user's current head posture is obtained, the images corresponding to the left and right screens are adjusted according to the head posture, and the adjusted images are then displayed on the left and right screens respectively. That is to say, after the dimension conversion of the interface to be displayed yields an image with a three-dimensional visual effect, the converted result is further adjusted according to the user's latest head posture, so that the position of the finally displayed image matches the user's field of view more closely. This avoids the dizziness caused by misalignment between the image position and the user's field of view due to changes in the user's head posture while the two-dimensional application interface is being rendered into an image with a three-dimensional visual effect, thereby improving the user experience.
  • FIG. 1 is a flowchart of an embodiment of an application interface display method in an embodiment of the present application;
  • FIG. 2 is a flowchart of another embodiment of an application interface display method in an embodiment of the present application;
  • FIG. 3 is a schematic diagram of a third left eye image and a third right eye image in an embodiment of the present application;
  • FIG. 4 is a schematic diagram of an embodiment of an application interface display system in an embodiment of the present application;
  • FIG. 5 is a flowchart of another embodiment of an application interface display method in an embodiment of the present application;
  • FIG. 6 is a schematic diagram of an embodiment of an application interface display device in an embodiment of the present application;
  • FIG. 7 is a schematic diagram of another embodiment of an application interface display device in an embodiment of the present application;
  • FIG. 8 is a schematic diagram of another embodiment of an application interface display device in an embodiment of the present application;
  • FIG. 9 is a schematic diagram of another embodiment of an application interface display device in an embodiment of the present application.
  • The embodiment of the present application provides an application interface display method and device for avoiding the dizziness caused by misalignment between the image position and the user's field of view due to changes in the user's head posture while a two-dimensional application interface is being rendered into an image with a three-dimensional visual effect, thereby enhancing the user experience.
  • VR devices are hardware products related to the field of virtual reality technology and are the hardware devices used in virtual reality solutions. At present, the hardware commonly used in virtual reality can be roughly divided into four types: modeling devices, three-dimensional visual display devices, sound devices, and interactive devices.
  • The VR device in the embodiment of the present application refers to a three-dimensional visual display device, such as a three-dimensional display system, a large projection system (such as CAVE), or a head-mounted display device.
  • A VR head-mounted display device (VR headset) closes off the user's vision and hearing from the outside world and guides the user toward the sensation of being in a virtual environment.
  • The VR device in the embodiment of the present application includes a left eye view area for displaying a left eye image to the user's left eye, and a right eye view area for displaying a right eye image to the user's right eye. After the user's left and right eyes respectively see the left eye image and the right eye image, which differ from each other, the user forms a stereoscopic impression in the mind.
  • VR headsets can be subdivided into three categories: external (tethered) headsets, all-in-one headsets, and mobile headsets.
  • External headsets and all-in-one headsets have an independent screen: an external headset displays the left eye image and the right eye image on its own screen from data input by an external device, so that the user is immersed in the virtual environment, while an all-in-one headset lets the user immerse in a virtual environment without any external input/output device.
  • A mobile headset, also called a VR glasses box, requires the mobile terminal to be placed into the box; the left eye image and the right eye image are displayed on the screen of the mobile terminal, and the user views them through the VR glasses box, producing a sense of three-dimensionality and immersion in the mind.
  • The application interface display method in the embodiment of the present application is used by an application interface display device to display the interface of a two-dimensional application on a VR device.
  • The application interface display device may be the external headset itself, or an input device that can be connected to the external headset, such as a personal computer (PC) or a mobile phone;
  • it may be the all-in-one headset, or the component in the all-in-one headset that renders images;
  • it may be the mobile headset, or a mobile terminal that can be placed in the mobile headset to display the left eye image and the right eye image;
  • or it may be another device capable of communicating with the above three types of headsets, input devices, or mobile terminals, such as a cloud server.
  • An embodiment of the application interface display method in the embodiment of the present application includes:
  • 101. The application interface display device obtains an interface to be displayed.
  • The interface to be displayed is the interface that needs to be displayed on the screen of the display device. It may be the interface of any single two-dimensional application, or an interface synthesized from the interfaces of a plurality of two-dimensional applications, which is not limited herein. It should be understood that a two-dimensional application refers to an application developed for two-dimensional display.
  • 102. The application interface display device performs dimension conversion processing on the interface to be displayed, and obtains a first left eye image and a first right eye image of the interface to be displayed.
  • After the application interface display device obtains the interface to be displayed, it performs dimension conversion processing on the interface to obtain a first left eye image and a first right eye image of the interface to be displayed; the first left eye image and the first right eye image are used to present the interface to be displayed with a three-dimensional visual effect.
  • The dimension conversion processing refers to converting the interface of a two-dimensional application into an interface with a three-dimensional visual effect. The left eye image in the embodiment of the present application refers to an image generated for the user's left eye field of view, and the right eye image refers to an image generated for the user's right eye field of view.
  • 103. The application interface display device acquires a first head posture of the user.
  • During the display process, the posture of the user's head may change. After the application interface display device performs step 102 and obtains the first left eye image and the first right eye image, it acquires the user's latest head posture, that is, the first head posture.
  • The head posture may specifically include the deflection direction of the user's head, the deflection angle of the user's head, or the motion mode of the user's head, and may also include other posture information, which is not limited herein.
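  • Purely as an illustration, the posture information listed above might be carried in a small structure such as the following; the fields and names are assumptions, not part of the application.

```cpp
// Hypothetical representation of the head posture described above.
struct HeadPosture {
    float yawDeg;    // deflection direction/angle around the vertical axis
    float pitchDeg;  // up/down deflection
    float rollDeg;   // tilt toward either shoulder
    enum class Motion { None, SwingLeftRight, SwingUpDown } motion;  // motion mode
    long  timestampUs;  // when the sensor sampled this posture
};
```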
  • 104. The application interface display device adjusts the first left eye image according to the first head posture to obtain a second left eye image, and adjusts the first right eye image to obtain a second right eye image.
  • After acquiring the first head posture, the application interface display device adjusts the first left eye image according to the first head posture to obtain the second left eye image, and adjusts the first right eye image to obtain the second right eye image.
  • 105. The application interface display device displays the second left eye image in the left eye view area of the VR device, and displays the second right eye image in the right eye view area of the VR device.
  • The VR device is divided into a left eye view area and a right eye view area according to the user's left and right eye fields of view. Specifically, if the VR device has an independent screen, the left eye view area is the area of the screen seen by the user's left eye and the right eye view area is the area of the screen seen by the user's right eye, and the application interface display device displays the second left eye image and the second right eye image in those areas. If the VR device does not have an independent screen, the left eye view area and the right eye view area are the optical lens groups on the VR device aligned with the user's left eye and right eye; the application interface display device displays the second left eye image and the second right eye image in the areas of the external screen aligned with those optical lens groups, and the two images finally reach the user's left and right eyes through optical path deformation.
  • The left eye view area and the right eye view area of the VR device respectively present the second left eye image and the second right eye image to the user's left and right eyes, and the user can synthesize a stereoscopic image in the brain.
  • In the embodiment of the present application, the user's current head posture is acquired, the images corresponding to the left and right eyes are adjusted according to the head posture, and the adjusted images are respectively displayed in the left and right eye view areas of the VR device. That is to say, after the dimension conversion of the interface to be displayed yields an image with a three-dimensional visual effect, the converted result is further adjusted according to the user's latest head posture, so that the position of the finally displayed image matches the user's field of view more closely. This avoids the dizziness caused by misalignment between the image position and the user's field of view due to changes in the user's head posture while the two-dimensional application interface is being rendered into an image with a three-dimensional visual effect, thereby improving the user experience.
  • In an actual application, the application interface display device can convert the interface of a two-dimensional application into an interface with a three-dimensional visual effect in various manners; the application interface display method of the embodiment of the present application is described in detail below using one of them as an example.
  • Another embodiment of the application interface display method in the embodiment of the present application includes:
  • 201. The application interface display device obtains an interface to be displayed.
  • The interface to be displayed is the interface that needs to be displayed on the screen of the display device. It may be the interface of any single two-dimensional application, or an interface synthesized from the interfaces of a plurality of two-dimensional applications, which is not limited herein. A two-dimensional application refers to an application developed for two-dimensional display.
  • SurfaceFlinger is the module responsible for display composition in the Android system. It receives windows and layers as input, calculates the position of each layer in the final composite image according to parameters such as each layer's depth, transparency, size, and position, generates the final display buffer (Buffer), and displays it on a specific display device.
  • 202. The application interface display device performs binocular rendering on the interface to be displayed, and obtains a third left eye image and a third right eye image of the interface to be displayed.
  • The user's left eye and right eye can observe objects independently, and there is a certain distance between them, so for the same target the image in the user's left eye differs from the image in the user's right eye.
  • The difference produced by observing one target from two points separated by a certain distance is called parallax.
  • The user's brain can fuse a left eye image and a right eye image with parallax to produce a stereoscopic visual effect, so that the user sees a stereoscopic object.
  • Drawing the left eye image and the right eye image of the interface to be displayed for the user's left eye and right eye, that is, performing stereoscopic rendering of the interface to be displayed, yields what are referred to here as the third left eye image and the third right eye image.
  • FIG. 3 is an example of a third left eye image and a third right eye image. After the user acquires the third left eye image and the third right eye image through the VR device, the user's brain can fuse the two images to produce a stereoscopic visual effect, allowing the user to see the three-dimensional interface to be displayed.
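  • The idea of binocular rendering can be sketched as follows: the same scene is drawn twice with the viewpoint shifted by half the interpupillary distance for each eye, which produces the parallax described above. The helpers below are hypothetical stand-ins, not code from the application.

```cpp
#include <array>
#include <cstdio>

using Mat4 = std::array<float, 16>;  // minimal 4x4 matrix placeholder

// Hypothetical helpers; a real implementation would use a math/graphics library.
Mat4 ViewFromEye(float eyeX) { Mat4 m{}; m[12] = -eyeX; return m; }  // translate by -eyeX
Mat4 Perspective(float fovDeg, float aspect) { Mat4 m{}; m[0] = fovDeg; m[5] = aspect; return m; }
void RenderScene(const Mat4&, const Mat4&, int targetTexture) {
    std::printf("scene drawn into texture %d\n", targetTexture);     // stand-in draw call
}

// Draw the third left eye image and third right eye image: the camera is
// offset horizontally by +-ipd/2, which produces the parallax described above.
void RenderBinocular(float ipdMeters, int leftTexture, int rightTexture) {
    const float half = ipdMeters / 2.0f;           // a typical ipd is about 0.064 m
    Mat4 proj = Perspective(90.0f, 1.0f);
    RenderScene(ViewFromEye(-half), proj, leftTexture);   // left eye viewpoint
    RenderScene(ViewFromEye(+half), proj, rightTexture);  // right eye viewpoint
}
```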
  • Specifically, the application interface display device may draw the third left eye image and the third right eye image of the interface to be displayed for the user's left eye and right eye by acquiring a second head posture of the user, respectively determining a first region and a second region according to the second head posture, drawing the interface to be displayed in the first region to obtain the third left eye image, and drawing the interface to be displayed in the second region to obtain the third right eye image, where the first region is the region for displaying the interface to be displayed in the left eye image of a preset three-dimensional scene, and the second region is the region for displaying the interface to be displayed in the right eye image of the preset three-dimensional scene.
  • In the embodiment of the present application, the user or the system may preset one or more three-dimensional scenes, for which a left eye image and a right eye image are drawn for the user's left and right eyes. When the user aligns the left eye with the left eye view area of the VR device and the right eye with the right eye view area of the VR device to obtain the left eye image and the right eye image, the brain synthesizes them to produce a sense of three-dimensionality and immersion, placing the user inside the preset three-dimensional scene.
  • Each preset three-dimensional scene includes a display area for displaying the interface to be displayed; the corresponding area in the left eye image of the three-dimensional scene is the first region, and the corresponding area in the right eye image of the three-dimensional scene is the second region.
  • The application interface display device draws the interface to be displayed in the first region and the second region, so that when the user is placed in the preset three-dimensional scene, the user sees the interface to be displayed in the display area.
  • The preset three-dimensional scene may be a movie theater, a shopping mall, a classroom, and so on, which are not enumerated here; the corresponding display area may be the screen in the movie theater, an advertising screen in the shopping mall, the blackboard in the classroom, and the like.
  • The purpose of VR technology is to immerse the user in a simulated environment, so the three-dimensional scene the user sees through the VR device simulates a real situation: when the user's head rotates, the three-dimensional scene seen by the user rotates accordingly, and the elements in the scene change.
  • For example, if the preset three-dimensional scene is a classroom and the user's initial field of view is set at the center of the classroom, the user can see the tables and chairs in front, the podium, and the entire blackboard; when the user's head is lifted up, the user can only see the upper half of the blackboard and the ceiling. Therefore, as the user's head moves, the position of the display area within the user's field of view changes, and the display area may even move out of the user's field of view.
  • Therefore, the application interface display device acquires the user's current head posture, that is, the second head posture, and then determines the position of the display area in the user's field of view according to the second head posture, that is, determines first position information of the first region and second position information of the second region. The position information may be the coordinate information of the region's vertices on the screen, or other information from which the position can be determined, which is not limited herein. The application interface display device then draws the interface to be displayed into the first region according to the first position information to obtain the third left eye image, and draws the interface to be displayed into the second region according to the second position information to obtain the third right eye image.
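  • A minimal sketch of how the regions might be located from the second head posture and then drawn into: the pose-to-position mapping is reduced to a simple yaw/pitch offset, and every name and number is an illustrative assumption.

```cpp
#include <cstdio>

struct HeadPosture { float yawDeg, pitchDeg; };   // simplified second head posture
struct Rect { float x, y, w, h; };                // region position (vertex-derived)

// Hypothetical mapping from head posture to where the scene's display area
// lands in one eye's image: turning the head right shifts the area left, and
// looking up shifts it down, mirroring the classroom example above.
Rect LocateDisplayArea(const HeadPosture& pose, float eyeOffsetPx) {
    const float pxPerDeg = 10.0f;                 // assumed screen scale
    return Rect{512.0f - pose.yawDeg * pxPerDeg + eyeOffsetPx,
                512.0f + pose.pitchDeg * pxPerDeg, 400.0f, 300.0f};
}

void DrawInterfaceInto(const Rect& r) {           // stand-in textured-quad draw
    std::printf("interface drawn at (%.0f, %.0f)\n", r.x, r.y);
}

void RenderThirdImages(const HeadPosture& secondPose) {
    Rect first  = LocateDisplayArea(secondPose, -16.0f);  // first region (left eye image)
    Rect second = LocateDisplayArea(secondPose, +16.0f);  // second region (right eye image)
    DrawInterfaceInto(first);   // -> third left eye image
    DrawInterfaceInto(second);  // -> third right eye image
}
```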
  • It should be noted that the second head posture refers to the head posture acquired when the application interface display device performs binocular rendering on the interface to be displayed in the above manner, while the first head posture in step 204 below refers to the head posture acquired after the barrel distortion processing and before the image adjustment is performed.
  • That is, the first head posture and the second head posture are the user's head posture as acquired by the application interface display device at different times, the second head posture being used for binocular rendering and the first head posture for image adjustment.
  • The first head posture or the second head posture in the embodiment of the present application is determined by a sensor, which may be a sensor in the VR device, a sensor in the application interface display device, or a sensor in another external device, which is not limited herein.
  • In an actual application, the application interface display device can also obtain the third left eye image and the third right eye image of the interface to be displayed by other means, which is not limited herein.
  • 203. The application interface display device performs barrel distortion processing on the third left eye image and the third right eye image to obtain a first left eye image and a first right eye image of the interface to be displayed.
  • The VR device includes one or more sets of optical lenses through which the user views the images, so the edges of the image may be distorted to different degrees. Therefore, after drawing the third left eye image and the third right eye image for the user's left and right eyes, the application interface display device performs barrel distortion processing on the third left eye image to obtain the first left eye image and on the third right eye image to obtain the first right eye image, so as to offset the distortion generated by the optical lenses.
  • Specifically, the application interface display device may use a shader to barrel-distort the third left eye image and the third right eye image into the first left eye image and the first right eye image through a set of preset parameters, where the preset parameters are set according to the lens parameters of the VR device, such as thickness, refractive index, and pitch.
  • In an actual application, the application interface display device can also perform the barrel distortion processing in other manners, which is not limited herein.
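  • For illustration, a fragment shader of the kind mentioned above might look like the following sketch, embedded here as a C++ string constant; the coefficients uK1/uK2 and their derivation stand in for the preset lens-dependent parameters and are assumptions, not values from the application.

```cpp
// Fragment shader (GLSL ES 3.0) that applies barrel pre-distortion while
// sampling a rendered eye texture with a radial polynomial model.
static const char* kBarrelDistortFrag = R"(
    #version 300 es
    precision mediump float;
    in vec2 vUV;                     // [0,1] texture coordinates
    uniform sampler2D uEyeImage;     // third left or right eye image
    uniform vec2  uCenter;           // optical axis in UV space, e.g. (0.5, 0.5)
    uniform float uK1, uK2;          // preset lens-derived coefficients (assumed)
    out vec4 fragColor;
    void main() {
        vec2 d = vUV - uCenter;
        float r2 = dot(d, d);
        // Sample further from the centre as radius grows -> barrel appearance.
        vec2 src = uCenter + d * (1.0 + uK1 * r2 + uK2 * r2 * r2);
        // Outside the source image -> black border, otherwise sample it.
        fragColor = (src == clamp(src, 0.0, 1.0))
                        ? texture(uEyeImage, src) : vec4(0.0);
    }
)";
```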
  • 204. The application interface display device acquires a first head posture of the user.
  • During the display process, the posture of the user's head may change. After the application interface display device performs step 203 and obtains the first left eye image and the first right eye image, it acquires the user's latest head posture, that is, the first head posture.
  • The head posture may specifically include the deflection direction of the user's head, the deflection angle of the user's head, or the motion mode of the user's head, where the motion mode may be a left-right swing, an up-down swing, or the like, which is not limited herein. The head posture may also include other posture information, which is not limited herein.
  • 205. The application interface display device adjusts the first left eye image according to the first head posture to obtain a second left eye image, and adjusts the first right eye image to obtain a second right eye image.
  • After acquiring the first head posture, the application interface display device adjusts the first left eye image according to the first head posture to obtain the second left eye image, and adjusts the first right eye image to obtain the second right eye image. Specifically, the application interface display device may calculate a transformation matrix according to the first head posture, transform the first left eye image according to the transformation matrix to obtain the second left eye image, and transform the first right eye image to obtain the second right eye image; that is, the first left eye image is asynchronously time-warped to obtain the second left eye image, and the first right eye image is asynchronously time-warped to obtain the second right eye image.
  • Specifically, the application interface display device may use a shader to perform the asynchronous time warping operation on the texture data of the first left eye image and the first right eye image through a set of preset parameters to obtain the second left eye image and the second right eye image; the second left eye image and the second right eye image may also be obtained by performing asynchronous time warping in other manners, which is not limited herein.
  • Asynchronous Time Warp (ATW) is an image correction technique: when the head moves too fast for scene rendering to keep up, that is, the head has already turned but the new image has not yet been rendered (or the previous frame's image is rendered instead), asynchronous time warping solves the delay problem by warping the image before it is sent to the display device.
  • The asynchronous time warping in the embodiment of the present application refers to stretching and translating an image. For example, when the acquired first head posture is a rotation to the left, the application interface display device stretches and translates the first left eye image and the first right eye image to the left according to the first head posture to obtain the second left eye image and the second right eye image; when the acquired first head posture is a downward rotation, the application interface display device stretches and translates the first left eye image and the first right eye image downward according to the first head posture to obtain the second left eye image and the second right eye image.
  • The manner of adjustment differs according to the acquired first head posture information, and the cases are not enumerated here.
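  • A simplified sketch of this warping step: the difference between the pose used at render time and the newest first head posture becomes a screen-space shift, and the image is resampled accordingly. Real asynchronous time warp uses a full reprojection matrix; the linear shift and all names below are assumptions.

```cpp
#include <cstdint>
#include <vector>

struct HeadPosture { float yawDeg, pitchDeg; };

// Warp one eye image by the pose delta (newest first head posture minus the
// pose the frame was rendered with); yaw maps to a horizontal shift and pitch
// to a vertical shift, with an assumed pixels-per-degree scale.
std::vector<uint32_t> TimeWarp(const std::vector<uint32_t>& img, int w, int h,
                               const HeadPosture& renderPose,
                               const HeadPosture& firstPose) {
    const float pxPerDeg = 12.0f;  // assumed screen scale
    int dx = static_cast<int>((firstPose.yawDeg   - renderPose.yawDeg)   * pxPerDeg);
    int dy = static_cast<int>((firstPose.pitchDeg - renderPose.pitchDeg) * pxPerDeg);
    std::vector<uint32_t> out(img.size(), 0);
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            int sx = x - dx, sy = y - dy;          // inverse mapping into the source
            if (sx >= 0 && sx < w && sy >= 0 && sy < h)
                out[y * w + x] = img[sy * w + sx];
        }
    return out;  // the second left or right eye image
}
```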
  • 206. The application interface display device displays the second left eye image in the left eye view area of the VR device, and displays the second right eye image in the right eye view area of the VR device.
  • The VR device is divided into a left eye view area and a right eye view area according to the user's left and right eye fields of view. Specifically, if the VR device has an independent screen, the left eye view area is the area of the screen seen by the user's left eye and the right eye view area is the area seen by the user's right eye; the application interface display device displays the second left eye image and the second right eye image in those areas, and the two images are presented to the user's left and right eyes through the corresponding optical lens groups. If the VR device does not have an independent screen, the left eye view area is the optical lens group on the VR device aligned with the user's left eye and the right eye view area is the optical lens group aligned with the user's right eye; the application interface display device displays the second left eye image and the second right eye image in the areas of the external screen aligned with those optical lens groups, and the two images finally reach the user's eyes through optical path deformation.
  • The left eye view area and the right eye view area of the VR device respectively present the second left eye image and the second right eye image to the user's left and right eyes; the user can synthesize a stereoscopic image in the brain, and the interface to be displayed is presented to the user with a three-dimensional effect.
  • In the embodiment of the present application, the user's current head posture is acquired, the images corresponding to the left and right eyes are adjusted according to the head posture, and the adjusted images are respectively displayed in the left and right eye view areas of the VR device. That is to say, after the dimension conversion of the interface to be displayed yields an image with a three-dimensional visual effect, the converted result is further adjusted according to the user's latest head posture, so that the position of the finally displayed image matches the user's field of view more closely. This avoids the dizziness caused by misalignment between the image position and the user's field of view due to changes in the user's head posture while the two-dimensional application interface is being rendered into an image with a three-dimensional visual effect, thereby improving the user experience.
  • Secondly, the embodiment of the present application performs binocular rendering on the interface to be displayed and, after rendering the two-dimensional application interface into an image with a three-dimensional visual effect, also applies barrel distortion to the image to eliminate the distortion generated by the optical lenses in the VR device, which improves image quality and enhances the user experience.
  • Further, the embodiment of the present application provides a variety of manners of adjusting the image according to the head posture, which improves the flexibility of the solution.
  • FIG. 4 is a schematic diagram of the component structure of a system to which the application interface display method and apparatus provided by the embodiments of the present application are applied.
  • The system may include a mobile headset 401 and a mobile terminal 402.
  • The mobile terminal 402 includes a screen, and the screen includes a third area and a fourth area. The user first places the mobile terminal into the mobile headset 401 so that the third area is aligned with the left eye view area of the mobile headset 401 and the fourth area is aligned with the right eye view area of the mobile headset 401, and then wears the mobile headset 401 with the left eye aligned with its left eye view area and the right eye aligned with its right eye view area.
  • The left eye view area and the right eye view area of the mobile headset each include at least one set of optical lenses for optically processing the images displayed by the mobile terminal 402 and presenting the processed images on the user's retinas, creating a sense of three-dimensionality and immersion in the user's mind.
  • In addition, the mobile headset 401 may also include a sensor for tracking the posture of the user's head, a CPU for processing data, and the like.
  • Another embodiment of the application interface display method in the embodiment of the present application includes:
  • 501. The application interface display device obtains an interface to be displayed from the mobile terminal.
  • The mobile terminal determines the interface of the two-dimensional application that needs to be displayed according to the user's operation, and the application interface display device obtains the interface to be displayed from the mobile terminal.
  • Specifically, the mobile terminal may include a SurfaceFlinger module. SurfaceFlinger is the module responsible for display composition in Android: it receives windows and layers as input, calculates the position of each layer in the final composite image according to parameters such as each layer's depth, transparency, size, and position, generates the final display buffer (Buffer), and displays it on a specific display device.
  • The mobile terminal can generate the interface to be displayed through the SurfaceFlinger module, and the application interface display device acquires the interface to be displayed from the SurfaceFlinger module.
  • Specifically, the application interface display device may be the mobile terminal itself: the mobile terminal synthesizes the interface to be displayed through the SurfaceFlinger module, then transmits it through a cross-process communication interface to another process independent of the Android system, and performs the following steps 502 to 505 in that process.
  • The application interface display device in the embodiment of the present application may also be another user device independent of the Android system, such as a PC. The user device can establish a connection with the mobile terminal through a data cable, a wireless network, Bluetooth, or other means; after the mobile terminal synthesizes the interface to be displayed through the SurfaceFlinger module, the user device obtains the interface to be displayed from the mobile terminal through the connection and performs the following steps 502 to 505.
  • The application interface display device in the embodiment of the present application may further be a cloud server independent of the Android system: the mobile terminal communicates with the cloud server through a wireless network and transmits the interface to be displayed synthesized by the SurfaceFlinger module to the cloud server, and the cloud server receives the interface to be displayed and performs the following steps 502 to 505.
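  • As a purely illustrative sketch of the first variant (a separate process on the same device), the loop below pulls the composed interface from the SurfaceFlinger side and later pushes the warped images back over a cross-process interface; the Interfacer type is hypothetical, standing in for whatever IPC mechanism (such as Binder) an implementation actually uses.

```cpp
#include <cstdint>
#include <vector>

// Hypothetical cross-process interface to the SurfaceFlinger side; none of
// these names are actual Android APIs.
struct Interfacer {
    std::vector<uint32_t> FetchComposedInterface(int* w, int* h) {
        *w = *h = 1024;                         // stub: pretend a 1024x1024 UI buffer
        return std::vector<uint32_t>(1024 * 1024, 0);
    }
    void PushEyeImages(const std::vector<uint32_t>&, const std::vector<uint32_t>&) {
        // stub: hand both images back for display in the third/fourth areas
    }
};

void ConverterLoop(Interfacer& ipc) {
    for (;;) {
        int w = 0, h = 0;
        // Step 501: obtain the interface to be displayed from the mobile terminal.
        std::vector<uint32_t> ui = ipc.FetchComposedInterface(&w, &h);
        // Steps 502-504 (sketched in the earlier examples): dimension conversion,
        // head posture acquisition, asynchronous time warp.
        std::vector<uint32_t> left = ui, right = ui;   // placeholder results
        // Step 505: send the second left/right eye images back to be displayed.
        ipc.PushEyeImages(left, right);
    }
}
```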
  • 502. The application interface display device performs dimension conversion processing on the interface to be displayed, and obtains a first left eye image and a first right eye image corresponding to the interface to be displayed.
  • The first left eye image and the first right eye image may be obtained by performing dimension conversion processing on the interface to be displayed in the manner described in steps 202 to 203 of the embodiment corresponding to FIG. 2 above, or by other methods, which is not limited herein.
  • 503. The application interface display device acquires a first head posture of the user.
  • The sensor in the mobile headset can track the user's head posture in real time. After obtaining the first left eye image and the first right eye image, the application interface display device acquires the user's current head posture, that is, the first head posture, from the sensor in the mobile headset.
  • 504. The application interface display device adjusts the first left eye image according to the first head posture to obtain a second left eye image, and adjusts the first right eye image to obtain a second right eye image.
  • The first left eye image and the first right eye image may be adjusted to obtain the second left eye image and the second right eye image in the manner described in step 205 of the embodiment corresponding to FIG. 2, or by other methods, which is not limited herein.
  • 505. The application interface display device sends the second left eye image and the second right eye image to the mobile terminal.
  • After obtaining the second left eye image and the second right eye image, the application interface display device sends them to the mobile terminal, so that the mobile terminal displays the second left eye image in the third area of the screen and the second right eye image in the fourth area of the screen.
  • The user's left eye sees the second left eye image in the third area through the left eye view area of the mobile headset, and the right eye sees the second right eye image in the fourth area through the right eye view area of the mobile headset; the two images can be combined into a stereoscopic image, presenting the interface to be displayed with a three-dimensional effect.
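  • For illustration, the third and fourth areas typically correspond to the two halves of a landscape phone screen; the sketch below computes the two rectangles, with the Rect type and the half-split layout being assumptions rather than details from the application.

```cpp
struct Rect { int x, y, w, h; };

// Split the mobile terminal's landscape screen into the third area (left
// half, seen by the left eye) and the fourth area (right half, right eye).
void SplitScreen(int screenW, int screenH, Rect* thirdArea, Rect* fourthArea) {
    *thirdArea  = Rect{0,           0, screenW / 2, screenH};
    *fourthArea = Rect{screenW / 2, 0, screenW / 2, screenH};
}
```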
  • Specifically, the application interface display device may be the mobile terminal itself. When the mobile terminal performs steps 502 to 504 in another process independent of the Android system and obtains the second left eye image and the second right eye image, it sends the two images to the SurfaceFlinger module through the cross-process communication interface; the SurfaceFlinger module generates the display buffer and displays the second left eye image and the second right eye image on the screen.
  • The application interface display device in the embodiment of the present application may also be another user device or a cloud server independent of the Android system. After the user device or the cloud server performs steps 502 to 504 and obtains the second left eye image and the second right eye image, it may send the two images to the SurfaceFlinger module through a wireless network or in another manner; the SurfaceFlinger module generates the display buffer and displays the second left eye image and the second right eye image on the screen.
  • In the embodiment of the present application, the user's current head posture is acquired, the images corresponding to the left and right eyes are adjusted according to the head posture, and the adjusted images are respectively displayed in the left and right eye view areas of the VR device. That is to say, after the dimension conversion of the interface to be displayed yields an image with a three-dimensional visual effect, the converted result is further adjusted according to the user's latest head posture, so that the position of the finally displayed image matches the user's field of view more closely. This avoids the dizziness caused by misalignment between the image position and the user's field of view due to changes in the user's head posture while the two-dimensional application interface is being rendered into an image with a three-dimensional visual effect, thereby improving the user experience.
  • Secondly, the application interface display method in the embodiment of the present application may run in another process of the mobile terminal independent of the Android system, or in a user device or a cloud server independent of the Android system. That is, the application interface display method in the embodiment of the present application does not depend on the Android system and can reduce the computing burden of the mobile terminal.
  • When the algorithm used in the method needs to be updated, the update can be performed independently of the Android system; and when the internal architecture of the Android system is updated, the algorithm used in the method does not need to be modified accordingly, so the flexibility and versatility are higher.
  • In an actual application, the mobile terminal may be an Android-based mobile phone that includes a SurfaceFlinger process and another 3DConverter process independent of the Android system.
  • When the user clicks the icon of a two-dimensional application on the Android phone, the phone starts the process corresponding to the two-dimensional application (Process100), and SurfaceFlinger creates a layer (Surface) for Process100 together with the graphic buffer (GraphicBuffer) corresponding to that Surface, referred to here as the first graphic buffer (gb100).
  • SurfaceFlinger passes the data in the first graphic buffer to Process100 through the Binder mechanism, and Process100 maps the data in gb100 into its process space. Process100 then performs drawing operations through OpenGL functions according to the drawing logic of the application, writes the drawing result into the process space, and notifies SurfaceFlinger through the Binder mechanism that drawing is complete.
  • SurfaceFlinger checks at a fixed period whether the data in gb100 has been updated and, if so, marks gb100; the mark mainly records SurfaceFlinger's composition strategy for gb100.
  • SurfaceFlinger processes gb100 through the Graphics Processing Unit (GPU) or the Hardware Composer (HWC), where processing refers to composing the graphic buffers of multiple applications and sending the result to the framebuffer for display.
  • In this embodiment, SurfaceFlinger traverses the data to be displayed in gb100 and draws it into the graphic buffer corresponding to the framebuffer Surface (gb200) through calls to the glDrawArrays function.
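  • For illustration only, the composition step above can be pictured with the following GLES 2.0 sketch, in which the layer's graphic buffer, already bound as a texture, is drawn into the currently bound render target (standing in for gb200) with a single glDrawArrays call. The identifiers (composeLayer, quadProgram, layerTex) are assumptions for this sketch and do not come from the patent; the real SurfaceFlinger additionally handles layer ordering, blending, and the HWC path.

```cpp
#include <GLES2/gl2.h>

// Interleaved fullscreen quad: position (x, y) followed by texture coords (u, v).
static const GLfloat kQuad[] = {
    -1.f, -1.f, 0.f, 0.f,
     1.f, -1.f, 1.f, 0.f,
    -1.f,  1.f, 0.f, 1.f,
     1.f,  1.f, 1.f, 1.f,
};

// Draw one layer texture (the contents of gb100) into the bound target (gb200).
void composeLayer(GLuint quadProgram, GLuint layerTex) {
    glUseProgram(quadProgram);
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, layerTex);
    GLint pos = glGetAttribLocation(quadProgram, "aPos");
    GLint uv  = glGetAttribLocation(quadProgram, "aUV");
    glVertexAttribPointer(pos, 2, GL_FLOAT, GL_FALSE, 4 * sizeof(GLfloat), kQuad);
    glVertexAttribPointer(uv,  2, GL_FLOAT, GL_FALSE, 4 * sizeof(GLfloat), kQuad + 2);
    glEnableVertexAttribArray(pos);
    glEnableVertexAttribArray(uv);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);  // one draw call per composed layer
}
```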
  • After the mobile phone starts the 3DConverter process, 3DConverter obtains the data in gb200, that is, the texture data of the interface to be displayed, from SurfaceFlinger through a cross-process communication interface (Interfacer100). 3DConverter updates this texture data into the first texture block (P200_texture100), then uses the first texture block as input to the OpenGL functions to render the interface to be displayed once, and stores the result of this first rendering into the second texture block (P200_texture200).
  • The specific process of the first rendering is as follows:
  • After updating the data in gb200 into the first texture block, 3DConverter determines that the preset three-dimensional scene is a cinema scene, which includes a screen (the display area). 3DConverter acquires the user's current head posture (the second head posture) through the sensor in the VR device, and then takes as input the model data of the cinema scene (including the vertices, geometry, and colors of the cinema model), the texture data of the interface to be displayed stored in the first texture block, and the acquired head posture. The process calls glDrawArrays twice: it calculates the position of the screen in the virtual scene to obtain the coordinates of its four vertices, and then draws the interface to be displayed into the three-dimensional scene according to the vertex coordinates and the texture data of the interface to be displayed, obtaining a third left eye image and a third right eye image corresponding to the three-dimensional scene.
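  • The two glDrawArrays calls can be sketched as follows (GLES 2.0, illustrative only): the quad carrying the interface texture is drawn once per eye, each time with that eye's model-view-projection matrix derived from the head posture and the cinema model, into the left and right halves of the render target. The names renderBothEyes and uMvp are assumptions, and the vertex attributes are presumed to be bound beforehand as in the earlier sketch.

```cpp
#include <GLES2/gl2.h>

// Draw the screen quad once per eye; the caller supplies per-eye MVP matrices
// computed from the cinema model and the acquired head posture.
void renderBothEyes(GLuint prog, GLuint interfaceTex,
                    const GLfloat leftMvp[16], const GLfloat rightMvp[16],
                    GLint eyeW, GLint eyeH) {
    glUseProgram(prog);
    glBindTexture(GL_TEXTURE_2D, interfaceTex);        // texture data from P200_texture100
    GLint mvpLoc = glGetUniformLocation(prog, "uMvp");

    glViewport(0, 0, eyeW, eyeH);                      // left half: third left eye image
    glUniformMatrix4fv(mvpLoc, 1, GL_FALSE, leftMvp);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);             // first glDrawArrays call

    glViewport(eyeW, 0, eyeW, eyeH);                   // right half: third right eye image
    glUniformMatrix4fv(mvpLoc, 1, GL_FALSE, rightMvp);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);             // second glDrawArrays call
}
```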
  • Then, the third left eye image and the third right eye image are barrel-distorted through a set of preset parameters to obtain the first left eye image and the first right eye image, and the first left eye image and the first right eye image are stored in texture form into the second texture block.
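  • Barrel distortion of this kind is commonly implemented as a radial polynomial in a fragment shader. The sketch below shows one minimal GLES 2.0 form, embedded as a C++ string; the coefficients uK1 and uK2 stand in for the "set of preset parameters", whose actual values and distortion model are not specified by the patent, so they are assumptions here.

```cpp
// Minimal radial (barrel) distortion fragment shader, GLES 2.0.
static const char* kBarrelFragSrc = R"(
precision mediump float;
varying vec2 vUV;
uniform sampler2D uTex;   // third left eye image or third right eye image
uniform float uK1;        // radial distortion coefficients (preset parameters)
uniform float uK2;
void main() {
    vec2 c = vUV - vec2(0.5);                       // coordinates relative to the center
    float r2 = dot(c, c);                           // squared radius
    vec2 uv = c * (1.0 + uK1 * r2 + uK2 * r2 * r2) + vec2(0.5);
    if (uv.x < 0.0 || uv.x > 1.0 || uv.y < 0.0 || uv.y > 1.0)
        gl_FragColor = vec4(0.0, 0.0, 0.0, 1.0);    // outside the source image: black
    else
        gl_FragColor = texture2D(uTex, uv);
}
)";
```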
  • After 3DConverter stores the first left eye image and the first right eye image into the second texture block, it uses the second texture block as input to the OpenGL functions for a secondary rendering, and stores the secondary rendering result into the first texture block.
  • The specific process of the secondary rendering is as follows:
  • 3DConverter acquires the user's current head posture (the first head posture) through the sensor in the VR device, calculates a transformation matrix according to this head posture, transforms the images stored in the second texture block with the transformation matrix, and draws the transformed images. Specifically, the OpenGL shader performs an asynchronous time warping operation on the texture data in the second texture block using another set of preset parameters to obtain the second left eye image and the second right eye image, and stores the second left eye image and the second right eye image in texture form into the first texture block.
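  • A minimal sketch of the transformation-matrix step follows: the rotation accumulated between the pose used for rendering and the newest sampled pose becomes a matrix that re-projects the already-rendered images. For brevity only the yaw component is handled; a full implementation would use the complete orientation delta (for example as a quaternion). The function name and pose representation are assumptions.

```cpp
#include <cmath>

// Build a column-major 4x4 rotation matrix for the yaw change between the
// head posture used at render time and the most recently sampled posture.
void timeWarpYawMatrix(float renderYaw, float latestYaw, float out[16]) {
    float d = latestYaw - renderYaw;   // how far the head turned since rendering
    float c = std::cos(d), s = std::sin(d);
    const float m[16] = {
          c, 0.f,  -s, 0.f,
        0.f, 1.f, 0.f, 0.f,
          s, 0.f,   c, 0.f,
        0.f, 0.f, 0.f, 1.f,
    };
    for (int i = 0; i < 16; ++i) out[i] = m[i];
    // Uploading this matrix as the quad's transform before redrawing the
    // images in the second texture block shifts them toward the newest pose.
}
```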
  • It should be noted that a texture block is normally an input of OpenGL drawing, while a frame buffer is the output of drawing. To output the drawing results to the texture blocks, the embodiment of the present application associates the second texture block with the color mount point of the first frame buffer (p200_framebuffer100) and the first texture block with the color mount point of the second frame buffer (p200_framebuffer200). In this way, the first frame buffer can be called to store the result of the first rendering into the second texture block, and the second frame buffer can be called to store the result of the secondary rendering into the first texture block.
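  • In OpenGL ES terms, associating a texture block with the "color mount point" of a frame buffer corresponds to attaching a texture to a framebuffer object's GL_COLOR_ATTACHMENT0; drawing into that framebuffer then writes into the attached texture. A minimal sketch with hypothetical handle names:

```cpp
#include <GLES2/gl2.h>

// Attach a texture to a framebuffer's color attachment point so that
// subsequent draws into the framebuffer land in the texture.
void attachColor(GLuint framebuffer, GLuint texture) {
    glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, texture, 0);
}

// Ping-pong setup mirroring the description above:
//   attachColor(fb100, texture200);  // first rendering writes the second texture block
//   attachColor(fb200, texture100);  // secondary rendering writes the first texture block
```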
  • Finally, 3DConverter notifies SurfaceFlinger of the end of rendering through another cross-process communication interface (Interfacer200) and sends the texture data in the first texture block to SurfaceFlinger.
  • SurfaceFlinger displays the second left eye image in the third area of the mobile phone's screen and the second right eye image in the fourth area of the screen. When the user aligns the left eye with the left half of the phone's screen and the right eye with the right half, the user feels placed in the preset cinema scene and sees the interface to be displayed on the screen of the cinema.
  • The application interface display method in the embodiment of the present application is described above; the following describes the application interface display device in the embodiment of the present application. It should be understood that the application interface display device in the embodiment of the present application is used to display the interface of a 2D application on a VR device. The application interface display device may be the VR device itself, a communication device capable of connecting with the VR device (such as a PC, a mobile terminal, or a cloud server), or a component in the VR device or the communication device; this is not specifically limited here.
  • Referring to FIG. 6, an embodiment of the application interface display device in the embodiment of the present application includes:
  • the first obtaining module 601 is configured to obtain an interface to be displayed, where the interface to be displayed is an interface of a 2D application;
  • the processing module 602 is configured to perform dimension conversion processing on the interface to be displayed acquired by the first obtaining module 601 to obtain a first left eye image and a first right eye image corresponding to the interface to be displayed, where the first left eye image and the first right eye image are used to present the interface to be displayed with a three-dimensional visual effect;
  • a second acquiring module 603, configured to acquire a first head posture of the user;
  • the adjusting module 604 is configured to adjust the first left eye image to obtain the second left eye image and adjust the first right eye image to obtain the second right eye image according to the first head posture acquired by the second acquiring module 603;
  • the display module 605 is configured to display a second left eye image in a left eye view area of the VR device, and display a second right eye image in a right eye view area of the VR device.
  • In the embodiment of the present application, after the processing module 602 performs the dimension conversion processing on the interface to be displayed to obtain the images corresponding to the left and right eyes, the second acquiring module 603 acquires the current head posture of the user, the adjusting module 604 adjusts the images corresponding to the left and right eyes according to the head posture, and the display module 605 then displays the adjusted images in the left and right eye view areas of the VR device. That is to say, after the dimension conversion of the interface to be displayed yields images with a three-dimensional visual effect, the converted result is further adjusted according to the latest head posture of the user, so that the position of the finally displayed images matches the user's field of vision more closely. This avoids the dizziness caused by misalignment between the image position and the user's field of vision when the user's head posture changes while the two-dimensional application interface is being rendered into an image with a three-dimensional visual effect, thereby improving the user experience.
  • The processing module can convert the interface of the two-dimensional application into an interface with a three-dimensional visual effect in a plurality of manners; one of them is described below as an example. Referring to FIG. 7, another embodiment of the application interface display device in the embodiment of the present application includes:
  • the first obtaining module 701 is configured to obtain an interface to be displayed, where the interface to be displayed is an interface of a 2D application.
  • the processing module 702 is configured to perform dimension conversion processing on the interface to be displayed acquired by the first obtaining module 701 to obtain a first left eye image and a first right eye image corresponding to the interface to be displayed, where the first left eye image and the first right eye image are used to present the interface to be displayed with a three-dimensional visual effect;
  • a second acquiring module 703, configured to acquire a first head posture of the user;
  • the adjusting module 704 is configured to adjust the first left eye image to obtain a second left eye image and adjust the first right eye image to obtain a second right eye image according to the first head posture acquired by the second acquiring module 703;
  • the display module 705 is configured to display a second left eye image in a left eye view area of the VR device, and display a second right eye image in a right eye view area of the VR device;
  • the processing module 702 includes:
  • a rendering unit 7021, configured to perform binocular rendering on the interface to be displayed to obtain a third left eye image and a third right eye image of the interface to be displayed;
  • the processing unit 7022 is configured to perform barrel distortion processing on the third left eye image and the third right eye image to obtain a first left eye image of the interface to be displayed and a first right eye image of the interface to be displayed.
  • the rendering unit 7021 may include:
  • a first obtaining subunit 70211, configured to acquire a second head posture of the user;
  • the determining subunit 70212 is configured to respectively determine a first area and a second area according to the second head posture, where the first area is an area for displaying the interface to be displayed in a left eye image of the preset three-dimensional scene, and the second area is an area for displaying the interface to be displayed in a right eye image of the preset three-dimensional scene;
  • the drawing sub-unit 70213 is configured to draw an interface to be displayed in the first area to obtain a third left-eye image, and draw an interface to be displayed in the second area to obtain a third right-eye image.
  • the adjusting module 704 may include:
  • the time warping unit 7041 is configured to perform asynchronous time warping on the first left eye image according to the first head posture to obtain a second left eye image, and perform asynchronous time warping on the first right eye image to obtain a second right eye image.
  • In the embodiment of the present application, after the processing module 702 performs the dimension conversion processing on the interface to be displayed to obtain the images corresponding to the left and right eyes, the second acquiring module 703 acquires the current head posture of the user, the adjusting module 704 adjusts the images corresponding to the left and right eyes according to the head posture, and the display module 705 displays the adjusted images in the left and right eye view areas of the VR device. That is to say, after the dimension conversion of the interface to be displayed yields images with a three-dimensional visual effect, the converted result is further adjusted according to the latest head posture of the user, so that the position of the finally displayed images matches the user's field of vision more closely. This avoids the dizziness caused by misalignment between the image position and the user's field of vision when the user's head posture changes while the two-dimensional application interface is being rendered into an image with a three-dimensional visual effect, thereby improving the user experience.
  • Furthermore, in the embodiment of the present application, after the interface to be displayed is binocularly rendered into images with a three-dimensional visual effect, the images are also subjected to barrel distortion to cancel the distortion produced by the optical lenses in the VR device, which improves the image quality and enhances the user experience.
  • In addition, the embodiment of the present application provides a method for performing image adjustment according to the head posture, which improves the realizability of the solution.
  • Referring to FIG. 8, another embodiment of the application interface display device in the embodiment of the present application includes:
  • the first obtaining module 801 is configured to obtain an interface to be displayed, where the interface to be displayed is an interface of a 2D application;
  • the processing module 802 is configured to perform dimension conversion processing on the interface to be displayed acquired by the first obtaining module 801 to obtain a first left eye image and a first right eye image corresponding to the interface to be displayed, where the first left eye image and the first right eye image are used to present the interface to be displayed with a three-dimensional visual effect;
  • a second acquiring module 803, configured to acquire a first head posture of the user;
  • the adjusting module 804 is configured to adjust the first left eye image to obtain a second left eye image and adjust the first right eye image to obtain a second right eye image according to the first head posture acquired by the second acquiring module 803;
  • the display module 805 is configured to display a second left eye image in a left eye view area of the VR device, and display a second right eye image in a right eye view area of the VR device;
  • the first obtaining module 801 includes:
  • the obtaining unit 8011 is configured to acquire the to-be-displayed interface from the mobile terminal;
  • the display module 805 includes:
  • the sending unit 8051 is configured to send the second left eye image and the second right eye image to the mobile terminal, so that the mobile terminal displays the second left eye image in a third area of the screen and displays the second right eye image in a fourth area of the screen, where the screen of the mobile terminal includes the third area and the fourth area, the third area corresponds to the left eye view area of the VR device, and the fourth area corresponds to the right eye view area of the VR device;
  • the obtaining unit 8011 may include:
  • a second obtaining subunit 80111, configured to obtain the interface to be displayed from the SurfaceFlinger module;
  • the sending unit 8051 can include:
  • the sending subunit 80511 is configured to send the second left eye image and the second right eye image to the SurfaceFlinger module, so that the SurfaceFlinger module displays the second left eye image in the third area of the screen of the mobile terminal and displays the second right eye image in the fourth area of the screen.
  • In the embodiment of the present application, the application interface display device may be a mobile terminal as shown in FIG. 4, another user device independent of the Android system (such as a PC), a cloud server independent of the Android system, or another device; this is not limited here.
  • In the embodiment of the present application, after the processing module 802 performs the dimension conversion processing on the interface to be displayed to obtain the images corresponding to the left and right eyes, the second acquiring module 803 acquires the current head posture of the user, the adjusting module 804 adjusts the images corresponding to the left and right eyes according to the head posture, and the display module 805 then displays the adjusted images in the left and right eye view areas of the VR device. That is to say, after the dimension conversion of the interface to be displayed yields images with a three-dimensional visual effect, the converted result is further adjusted according to the latest head posture of the user, so that the position of the finally displayed images matches the user's field of vision more closely. This avoids the dizziness caused by misalignment between the image position and the user's field of vision when the user's head posture changes while the two-dimensional application interface is being rendered into an image with a three-dimensional visual effect, thereby improving the user experience.
  • In addition, the application interface display device in the embodiment of the present application may be a user device or a cloud server independent of the Android system; that is, the application interface display method in the embodiment of the present application does not depend on the Android system and can reduce the computational burden of the mobile terminal. When the algorithm used in the method needs to be updated, the update can be performed independently of the Android system; conversely, when the internal architecture of the Android system is updated, the algorithm used in the method does not need to be modified accordingly, so the method has higher flexibility and versatility.
  • The application interface display device in the embodiment of the present application is introduced above from the perspective of function modules; the following introduces it from the perspective of physical hardware. Please refer to FIG. 9, which shows an application interface display device according to an embodiment of the present application. The application interface display device 90 may include an input device 910, an output device 920, a processor 930, and a memory 940.
  • Memory 940 can include read only memory and random access memory and provides instructions and data to processor 930. A portion of the memory 940 may also include a Non-Volatile Random Access Memory (NVRAM).
  • Memory 940 stores the following elements, executable modules or data structures, or subsets thereof, or their extended sets:
  • Operation instructions: include various operation instructions for implementing various operations.
  • Operating system: includes various system programs for implementing various basic services and handling hardware-based tasks.
  • In the embodiment of the present application, the application interface display device or the VR device includes at least one display, and the processor 930 in the application interface display device is specifically configured to control the display to show the second left eye image in the left eye view area of the VR device and the second right eye image in the right eye view area of the VR device.
  • The processor 930 controls the operation of the application interface display device 90; the processor 930 may also be referred to as a Central Processing Unit (CPU).
  • the components of the application interface display device 90 are coupled together by a bus system 950.
  • the bus system 950 may include a power bus, a control bus, a status signal bus, and the like in addition to the data bus. However, for clarity of description, various buses are labeled as bus system 950 in the figure.
  • Processor 930 may be an integrated circuit chip with signal processing capabilities. In the implementation process, each step of the foregoing method may be completed by an integrated logic circuit of hardware in the processor 930 or by instructions in the form of software.
  • The processor 930 may be a general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
  • The processor 930 can implement or execute the methods, steps, and logical block diagrams disclosed in the embodiments of the present application. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
  • The steps of the methods disclosed in the embodiments of the present application may be directly executed and completed by a hardware decoding processor, or executed and completed by a combination of hardware and software modules in a decoding processor. The software module may be located in a mature storage medium in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory 940, and the processor 930 reads the information in the memory 940 and completes the steps of the above method in combination with its hardware.
  • The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, they may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the procedures or functions described in the embodiments of the present application are generated in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus.
  • The computer instructions may be stored in a computer-readable storage medium or transferred from one computer-readable storage medium to another; for example, the computer instructions may be transferred from a website, computer, server, or data center to another website, computer, server, or data center in a wired manner (e.g., coaxial cable, optical fiber, or digital subscriber line (DSL)) or a wireless manner (e.g., infrared, radio, or microwave).
  • The computer-readable storage medium may be any available medium accessible to a computer, or a data storage device such as a server or a data center integrating one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, a hard disk, or a magnetic tape), an optical medium (e.g., a DVD), a semiconductor medium (e.g., a Solid State Disk (SSD)), or the like.
  • In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus, and method may be implemented in other manners.
  • the device embodiments described above are merely illustrative.
  • For example, the division of the units is only a logical function division; in actual implementation there may be another division manner. For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interface, device or unit, and may be in an electrical, mechanical or other form.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • the above integrated unit can be implemented in the form of hardware or in the form of a software functional unit.
  • the integrated unit if implemented in the form of a software functional unit and sold or used as a standalone product, may be stored in a computer readable storage medium.
  • Based on such an understanding, the computer-readable storage medium includes a number of instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods described in the various embodiments of the present application. The foregoing storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • User Interface Of Digital Computer (AREA)
  • Processing Or Creating Images (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

The present invention relates to an application interface display method and apparatus. The method in the embodiments of the present invention is used to display an interface of a 2D application program on a virtual reality (VR) device by means of an application interface display apparatus. The method comprises: acquiring an interface to be displayed, the interface to be displayed being an interface of a 2D application program; performing dimension conversion processing on the interface to be displayed to obtain a first left eye image and a first right eye image corresponding to the interface to be displayed, the first left eye image and the first right eye image being used to present the interface to be displayed with a three-dimensional visual effect; acquiring a first head posture of a user; according to the first head posture, adjusting the first left eye image to obtain a second left eye image and adjusting the first right eye image to obtain a second right eye image; and displaying the second left eye image in a left eye field of view of the VR device and the second right eye image in a right eye field of view of the VR device.
PCT/CN2017/078027 2016-11-08 2017-03-24 Procédé et appareil d'affichage d'interface d'application WO2018086295A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201780010154.0A CN108604385A (zh) 2016-11-08 2017-03-24 一种应用界面显示方法及装置

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610980760.2 2016-11-08
CN201610980760 2016-11-08

Publications (1)

Publication Number Publication Date
WO2018086295A1 true WO2018086295A1 (fr) 2018-05-17

Family

ID=62110374

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/078027 WO2018086295A1 (fr) 2016-11-08 2017-03-24 Procédé et appareil d'affichage d'interface d'application

Country Status (2)

Country Link
CN (1) CN108604385A (fr)
WO (1) WO2018086295A1 (fr)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110597577A (zh) * 2019-05-31 2019-12-20 珠海全志科技股份有限公司 一种头戴可视设备及其分屏显示方法和装置
CN111556305A (zh) * 2020-05-20 2020-08-18 京东方科技集团股份有限公司 图像处理方法、vr设备、终端、显示系统和计算机可读存储介质
CN112639681A (zh) * 2018-08-23 2021-04-09 苹果公司 用于进程数据共享的方法和设备
CN112965773A (zh) * 2021-03-03 2021-06-15 闪耀现实(无锡)科技有限公司 用于信息显示的方法、装置、设备和存储介质
CN113538648A (zh) * 2021-07-27 2021-10-22 歌尔光学科技有限公司 图像渲染方法、装置、设备及计算机可读存储介质
CN113589927A (zh) * 2021-07-23 2021-11-02 杭州灵伴科技有限公司 分屏显示方法、头戴式显示设备和计算机可读介质
CN113660476A (zh) * 2021-08-16 2021-11-16 纵深视觉科技(南京)有限责任公司 一种基于Web页面的立体显示系统及方法
CN114674531A (zh) * 2021-08-30 2022-06-28 北京新能源汽车股份有限公司 一种车辆后视镜的边界确定方法、装置、控制设备及汽车
CN114972607A (zh) * 2022-07-29 2022-08-30 烟台芯瞳半导体科技有限公司 加速图像显示的数据传输方法、装置及介质
CN115190284A (zh) * 2022-07-06 2022-10-14 敏捷医疗科技(苏州)有限公司 一种图像处理方法
CN115272568A (zh) * 2022-07-12 2022-11-01 重庆大学 一种位错界面特征三维可视化方法

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112015264B (zh) * 2019-05-30 2023-10-20 深圳市冠旭电子股份有限公司 虚拟现实显示方法、虚拟现实显示装置及虚拟现实设备
CN110286866A (zh) * 2019-06-24 2019-09-27 上海临奇智能科技有限公司 一种虚拟透明屏的呈现方法及设备
CN113342220B (zh) * 2021-05-11 2023-09-12 杭州灵伴科技有限公司 窗口渲染方法、头戴式显示套件和计算机可读介质

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103236074A (zh) * 2013-03-25 2013-08-07 深圳超多维光电子有限公司 一种2d/3d图像处理方法及装置
CN103402106A (zh) * 2013-07-25 2013-11-20 青岛海信电器股份有限公司 三维图像显示方法及装置
CN105376546A (zh) * 2015-11-09 2016-03-02 中科创达软件股份有限公司 一种2d转3d方法、装置及移动终端

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120200676A1 (en) * 2011-02-08 2012-08-09 Microsoft Corporation Three-Dimensional Display with Motion Parallax
JP5874176B2 (ja) * 2011-03-06 2016-03-02 ソニー株式会社 表示装置、並びに中継装置
CN105447898B (zh) * 2015-12-31 2018-12-25 北京小鸟看看科技有限公司 一种虚拟现实设备中显示2d应用界面的方法和装置

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103236074A (zh) * 2013-03-25 2013-08-07 深圳超多维光电子有限公司 一种2d/3d图像处理方法及装置
CN103402106A (zh) * 2013-07-25 2013-11-20 青岛海信电器股份有限公司 三维图像显示方法及装置
CN105376546A (zh) * 2015-11-09 2016-03-02 中科创达软件股份有限公司 一种2d转3d方法、装置及移动终端

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112639681A (zh) * 2018-08-23 2021-04-09 苹果公司 用于进程数据共享的方法和设备
CN110597577A (zh) * 2019-05-31 2019-12-20 珠海全志科技股份有限公司 一种头戴可视设备及其分屏显示方法和装置
CN111556305A (zh) * 2020-05-20 2020-08-18 京东方科技集团股份有限公司 图像处理方法、vr设备、终端、显示系统和计算机可读存储介质
US11838494B2 (en) 2020-05-20 2023-12-05 Beijing Boe Optoelectronics Technology Co., Ltd. Image processing method, VR device, terminal, display system, and non-transitory computer-readable storage medium
CN112965773A (zh) * 2021-03-03 2021-06-15 闪耀现实(无锡)科技有限公司 用于信息显示的方法、装置、设备和存储介质
CN112965773B (zh) * 2021-03-03 2024-05-28 闪耀现实(无锡)科技有限公司 用于信息显示的方法、装置、设备和存储介质
CN113589927B (zh) * 2021-07-23 2023-07-28 杭州灵伴科技有限公司 分屏显示方法、头戴式显示设备和计算机可读介质
CN113589927A (zh) * 2021-07-23 2021-11-02 杭州灵伴科技有限公司 分屏显示方法、头戴式显示设备和计算机可读介质
CN113538648A (zh) * 2021-07-27 2021-10-22 歌尔光学科技有限公司 图像渲染方法、装置、设备及计算机可读存储介质
CN113538648B (zh) * 2021-07-27 2024-04-30 歌尔科技有限公司 图像渲染方法、装置、设备及计算机可读存储介质
CN113660476A (zh) * 2021-08-16 2021-11-16 纵深视觉科技(南京)有限责任公司 一种基于Web页面的立体显示系统及方法
CN114674531A (zh) * 2021-08-30 2022-06-28 北京新能源汽车股份有限公司 一种车辆后视镜的边界确定方法、装置、控制设备及汽车
CN115190284A (zh) * 2022-07-06 2022-10-14 敏捷医疗科技(苏州)有限公司 一种图像处理方法
CN115190284B (zh) * 2022-07-06 2024-02-27 敏捷医疗科技(苏州)有限公司 一种图像处理方法
CN115272568A (zh) * 2022-07-12 2022-11-01 重庆大学 一种位错界面特征三维可视化方法
CN114972607A (zh) * 2022-07-29 2022-08-30 烟台芯瞳半导体科技有限公司 加速图像显示的数据传输方法、装置及介质

Also Published As

Publication number Publication date
CN108604385A (zh) 2018-09-28

Similar Documents

Publication Publication Date Title
WO2018086295A1 (fr) Procédé et appareil d'affichage d'interface d'application
US20230082705A1 (en) Reprojecting holographic video to enhance streaming bandwidth/quality
US11010958B2 (en) Method and system for generating an image of a subject in a scene
JP7009494B2 (ja) カラー仮想コンテンツワーピングを伴う複合現実システムおよびそれを使用して仮想コンテンツ生成する方法
US9818228B2 (en) Mixed reality social interaction
CN111880644A (zh) 多用户即时定位与地图构建(slam)
CN107924589B (zh) 通信系统
JP2007020142A (ja) 3dグラフィック処理装置及びこれを利用した立体映像表示装置
JP2012079291A (ja) プログラム、情報記憶媒体及び画像生成システム
WO2019020608A1 (fr) Procédé et système de fourniture d'une expérience de réalité virtuelle sur la base de données ultrasonores
CN111355944B (zh) 生成并用信号传递全景图像之间的转换
JP2019197368A (ja) 立体映像奥行き圧縮装置および立体映像奥行き圧縮プログラム
US10957106B2 (en) Image display system, image display device, control method therefor, and program
JP2023505235A (ja) 仮想、拡張、および複合現実システムおよび方法
CN116610213A (zh) 虚拟现实中的交互显示方法、装置、电子设备、存储介质
JP7426413B2 (ja) ブレンドモード3次元ディスプレイシステムおよび方法
US11187914B2 (en) Mirror-based scene cameras
TWM630947U (zh) 立體影像播放裝置
WO2017085803A1 (fr) Dispositif d'affichage vidéo, et procédé d'affichage vidéo
KR101425321B1 (ko) 적응형 렌즈 어레이를 구비하는 3차원 집적 영상 디스플레이 시스템 및 적응형 렌즈 어레이에 대한 요소 영상 생성 방법
CN110620917A (zh) 一种虚拟现实跨屏立体显示的方法
TWI817335B (zh) 立體影像播放裝置及其立體影像產生方法
LU503478B1 (en) Method of virtual reality cross-screen stereoscopic display
EP4030752A1 (fr) Système et procédé de génération d'image
US20230262406A1 (en) Visual content presentation with viewer position-based audio

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17869609

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17869609

Country of ref document: EP

Kind code of ref document: A1