CN117173309A - Image rendering method, apparatus, device, medium, and program product - Google Patents
- Publication number: CN117173309A (application CN202310956618.4A)
- Authority: CN (China)
- Prior art keywords: image, application, rendering, virtual screen, synthesizer
- Legal status: Pending (an assumption, not a legal conclusion; Google has not performed a legal analysis)
Abstract
The embodiment of the application provides an image rendering method, apparatus, device, medium, and program product. A 2D application is started in a virtual screen according to an opening instruction for the 2D application; a 2D rendering compositor acquires a plurality of layers of the 2D application from the virtual screen, composites the plurality of layers to obtain a 2D image of the 2D application, and sends the 2D image to a transmission component; the transmission component transmits the 2D image to a 3D rendering compositor; and the 3D rendering compositor renders and composites the 2D image with the data to be rendered of a 3D application to obtain a 3D image of the 3D application, which is displayed on a display screen. The rendering of the 2D application and the rendering of the 3D application are executed independently: the former is performed by the 2D rendering compositor and the latter by the 3D rendering compositor, and the rendering of the 3D application no longer depends on the SurfaceFlinger of the Android system. The method of the embodiment is therefore no longer limited to the Android system and can run flexibly on various operating systems.
Description
Technical Field
The embodiment of the application relates to the technical field of virtual reality, and in particular to an image rendering method, apparatus, device, medium, and program product.
Background
Extended Reality (XR) is the collective name for technologies such as Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR). It combines the real and the virtual through a computer to create a virtual environment with which humans can interact. By integrating the visual interaction technologies of the three, it gives the experiencer an immersive sense of seamless transition between the virtual world and the real world.
In XR devices, users need to use conventional 2D applications (apps). In the prior art, rendering mainly relies on the system service module (SurfaceFlinger) of the Android system: the SurfaceFlinger completes both the image rendering of the 2D application and the rendering of the 3D image, so that the user interface (UI) of the 2D application can be displayed in the extended reality scene provided by the 3D application.
However, the existing rendering method depends on the SurfaceFlinger of the Android system and is therefore limited in use.
Disclosure of Invention
The embodiment of the application provides an image rendering method, apparatus, device, medium, and program product, in which the rendering of a 2D application and the rendering of a 3D application are executed independently: the former is performed by a 2D rendering compositor and the latter by a 3D rendering compositor, and the rendering of the 3D application no longer depends on the SurfaceFlinger of the Android system. The method of the embodiment is therefore no longer limited to the Android system and can run flexibly on various operating systems.
In a first aspect, an embodiment of the present application provides an image rendering method applied to an XR device, the XR device including a 2D rendering compositor, a 3D rendering compositor, and a display screen, the method comprising:
starting the 2D application in the virtual screen according to an opening instruction of the 2D application;
the 2D rendering compositor acquires a plurality of layers of the 2D application from the virtual screen, composites the plurality of layers to obtain a 2D image of the 2D application, and sends the 2D image to a transmission component of the virtual screen;
the transmission component transmits the 2D image to the 3D rendering compositor;
the 3D rendering compositor renders and composites the 2D image with the data to be rendered of a 3D application to obtain a 3D image of the 3D application, wherein the virtual screen is displayed in the 3D image and the 2D image is displayed on the virtual screen;
the display screen displays the 3D image.
In some embodiments, the transmission component includes a buffer queue for storing the 2D image, and the transmission component transmits the 2D image using a producer-consumer model.
In some embodiments, before the display screen displays the 3D image, the method further comprises:
the 3D rendering compositor performs inverse dispersion processing on the 3D image and sends the processed 3D image to the display screen.
In some embodiments, the XR device further comprises a hardware compositor HWC, and before the display screen displays the 3D image, the method further comprises:
the 3D rendering compositor sends the 3D image to the HWC;
the HWC performs inverse dispersion processing on the 3D image and sends the processed 3D image to the display screen.
In some embodiments, a plurality of 2D applications are launched in the 3D application and displayed through a plurality of virtual screens, each virtual screen being able to be associated with only one 2D application.
In some embodiments, the rendering frequency of the 2D rendering compositor is less than or equal to the rendering frequency of the 3D rendering compositor.
In some embodiments, the 3D rendering compositor renders and composites the 2D image and the data to be rendered of the 3D application to obtain a 3D image of the 3D application, including:
dividing the 2D image into a left-eye 2D image and a right-eye 2D image;
acquiring left eye data and right eye data of the 3D application from a buffer zone of the 3D application;
and rendering and compositing according to the left-eye data, the right-eye data, the left-eye 2D image, and the right-eye 2D image of the 3D application to obtain the left-eye image and the right-eye image of the 3D application.
In some embodiments, when the XR device employs the Android system, the 2D rendering compositor is the system service module SurfaceFlinger, the transmission component includes the Surface of the virtual screen and a SurfaceTexture module, and the SurfaceTexture module includes a first buffer queue used to store the 2D image.
In some embodiments, the transmission component transmitting the 2D image to the 3D rendering compositor comprises:
the Surface of the virtual screen stores the 2D image into the first buffer queue of the SurfaceTexture module;
the 3D rendering compositor reads the 2D image from the first buffer queue.
In some embodiments, the first buffer queue employs a multi-level buffer.
In a second aspect, an embodiment of the present application provides an image rendering apparatus including:
a starting module, configured to start a 2D application in a virtual screen according to an opening instruction for the 2D application;
a 2D rendering compositor, configured to acquire a plurality of layers of the 2D application from the virtual screen, composite the plurality of layers to obtain a 2D image of the 2D application, and send the 2D image to a transmission component of the virtual screen;
a transmission component, configured to transmit the 2D image to a 3D rendering compositor;
a 3D rendering compositor, configured to render and composite the 2D image with data to be rendered of a 3D application to obtain a 3D image of the 3D application, wherein the virtual screen is displayed in the 3D image and the 2D image is displayed on the virtual screen;
and a display module, configured to display the 3D image.
In a third aspect, embodiments of the present application provide an XR device, the XR device comprising: a processor and a memory for storing a computer program, the processor being adapted to invoke and run the computer program stored in the memory to perform the method according to any of the first aspect above.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium for storing a computer program, the computer program causing a computer to perform the method according to any one of the first aspects.
In a fifth aspect, embodiments of the present application provide a computer program product comprising a computer program which, when executed by a processor, implements a method as in any of the first aspects above.
According to the image rendering method, apparatus, device, medium, and program product provided by the embodiment of the application, a 2D application is started in a virtual screen according to an opening instruction for the 2D application; the 2D rendering compositor acquires a plurality of layers of the 2D application from the virtual screen, composites the plurality of layers to obtain a 2D image of the 2D application, and sends the 2D image to the transmission component; the transmission component transmits the 2D image to the 3D rendering compositor; and the 3D rendering compositor renders and composites the 2D image with the data to be rendered of the 3D application to obtain a 3D image of the 3D application, which is displayed on the display screen. The rendering of the 2D application and the rendering of the 3D application are executed independently: the former is performed by the 2D rendering compositor and the latter by the 3D rendering compositor, and the rendering of the 3D application no longer depends on the SurfaceFlinger of the Android system. The method of the embodiment is therefore no longer limited to the Android system and can run flexibly on various operating systems.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present application, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of an image rendering method according to an embodiment of the present application;
fig. 2 is a schematic diagram of the image rendering method under the Android system;
fig. 3 is a flowchart of an image rendering method according to a second embodiment of the present application;
fig. 4 is a schematic structural diagram of an image rendering device according to a third embodiment of the present application;
fig. 5 is a schematic diagram of an XR device according to a fourth embodiment of the application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
It should be noted that the terms "first," "second," and the like in the description, the claims, and the above figures are used to distinguish similar objects and not necessarily to describe a particular sequence or chronological order. It is to be understood that data so used may be interchanged where appropriate, so that the embodiments of the application described herein can be implemented in orders other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
To facilitate understanding, before the embodiments of the present application are described, some concepts involved in all of the embodiments are first explained, as follows:
1) Virtual Reality (VR) is a technology for creating and experiencing a virtual world. It computes and generates a virtual environment from multi-source information (the virtual reality referred to herein at least includes visual perception, and may further include auditory perception, tactile perception, motion perception, and even gustatory and olfactory perception), provides a fused, interactive three-dimensional dynamic view and a simulation of physical behavior, and immerses the user in the simulated virtual reality environment, enabling applications in a variety of virtual environments such as maps, games, video, education, medical treatment, simulation, collaborative training, sales, manufacturing assistance, and maintenance and repair.
2) A virtual reality device (VR device) may take the form of glasses, a head-mounted display (Head Mounted Display, HMD), or contact lenses for realizing visual perception and other forms of perception; the form of the virtual reality device is not limited to these and may be further miniaturized or enlarged according to actual needs.
Optionally, VR devices described in embodiments of the present application may include, but are not limited to, the following types:
2.1) External VR devices: connected to an external device in a wired or wireless manner, with the external device performing the computation related to the virtual reality function and outputting data to the VR device. The external device may be an electronic device such as a smartphone or a personal computer (PC); when the external device is a PC, the external VR device is also called a PC virtual reality (PCVR) device.
2.2) All-in-one VR devices: equipped with a processor that performs the computation related to the virtual reality function, so that they have independent virtual reality input and output capabilities, require no connection to a PC or mobile terminal, and offer a high degree of freedom in use.
3) Mixed Reality (MR): the creation of new environments and visualizations by combining the real and virtual worlds, in which physical entities and digital objects coexist and interact in real time to simulate real objects; it mixes reality, augmented reality, and augmented virtuality. MR can be regarded as the synthesis of Virtual Reality (VR) and Augmented Reality (AR) and as an extension of VR technology: by presenting virtual scenery within a real scene, it can increase the realism of the user experience. The MR field involves computer vision, the science of how to make machines "see": cameras and computers take the place of human eyes to perform machine vision tasks such as recognition, tracking, and measurement on targets, and to process images into forms better suited to human observation or to transmission to instruments for inspection.
That is, MR is a simulated scenery that integrates computer-created sensory input (e.g., virtual objects) with sensory input from a physical scenery or a representation thereof, in some MR sceneries, the computer-created sensory input may be adapted to changes in sensory input from the physical scenery. In addition, some electronic systems for rendering MR scenes may monitor orientation and/or position relative to the physical scene to enable virtual objects to interact with real objects (i.e., physical elements from the physical scene or representations thereof). For example, the system may monitor movement such that the virtual plants appear to be stationary relative to the physical building.
Fig. 1 is a schematic flow chart of an image rendering method according to the first embodiment of the present application. The method is performed by an XR device, which may be a VR device, an AR device, or an MR device. The XR device comprises a 2D rendering compositor for compositing the images of a 2D application, a 3D rendering compositor for compositing the images of a 3D application, and a display screen; the XR device also supports displaying the content of the 2D application in the virtual reality space of the 3D application. As shown in fig. 1, the method provided by this embodiment includes the following steps.
S101, starting the 2D application in the virtual screen according to an opening instruction of the 2D application.
The 2D application refers to a traditional application running on an electronic device such as a mobile phone, a computer, a tablet computer and the like, and an image displayed to a user by the 2D application is a 2D image. The 3D application refers to an application running on the XR device, and the image presented to the user by the 3D application is a 3D image. The 2D applications include, but are not limited to: video playback applications, short video applications, music applications, instant messaging applications, shopping software, and the like.
The usage scenario of the embodiment of the application is that the content of a 2D application is displayed in the virtual reality space (also called the extended reality space or virtual scene) of a 3D application. The user can input an opening instruction while the 3D application is running or from the 3D desktop environment, and the XR device starts the 2D application in the corresponding virtual screen according to the opening instruction.
The virtual screen may be understood as a carrying container for the 2D application; in some operating systems it may also be called a container.
In one implementation, each virtual screen can be associated with only one 2D application. After receiving the opening instruction, the 3D application creates a virtual screen and the transmission component corresponding to that virtual screen for the 2D application, and associates the 2D application with the virtual screen. The virtual screen has an identity (ID) as well as parameters such as size and resolution. After the virtual screen is created, the content of the 2D application is displayed through it. The transmission component corresponding to the virtual screen is used to transmit the image of the associated 2D application to the 3D rendering compositor, and can equally be understood as the transmission component corresponding to the 2D application.
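As an illustrative aside (not part of the disclosure), on the Android system such a virtual screen can be created through the DisplayManager API; in the sketch below, the class name, screen parameters, and display flag are assumptions:

```java
import android.app.ActivityOptions;
import android.content.Context;
import android.content.Intent;
import android.hardware.display.DisplayManager;
import android.hardware.display.VirtualDisplay;
import android.view.Surface;

/** Illustrative sketch only; names and parameter values are assumptions. */
final class VirtualScreenLauncher {
    static void launchOnVirtualScreen(Context context, Surface screenSurface, Intent launchIntent) {
        DisplayManager dm = context.getSystemService(DisplayManager.class);
        // The Surface is where the composited 2D image of the app ends up.
        VirtualDisplay virtualScreen = dm.createVirtualDisplay(
                "xr-2d-app-screen",  // hypothetical name; doubles as an identity
                1280, 720,           // assumed size/resolution of the virtual screen
                320,                 // assumed density (dpi)
                screenSurface,
                DisplayManager.VIRTUAL_DISPLAY_FLAG_OWN_CONTENT_ONLY);
        // Start the 2D application on the new display (one app per virtual screen).
        ActivityOptions opts = ActivityOptions.makeBasic();
        opts.setLaunchDisplayId(virtualScreen.getDisplay().getDisplayId());
        context.startActivity(launchIntent, opts.toBundle());
    }
}
```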
In another implementation, each virtual screen may be associated with multiple 2D applications, and the user may open a new application in an already created virtual screen, in which case the virtual screen need not be recreated. Only one 2D application can be displayed on a virtual screen at a time, so after a new 2D application is opened it is displayed on the virtual screen and the previously displayed 2D application is switched to the background. Correspondingly, the XR device, according to the opening instruction for the 2D application, switches the previously displayed 2D application on the virtual screen to the background and starts the new 2D application in the virtual screen.
S102, the 2D rendering compositor acquires a plurality of layers of the 2D application from the virtual screen, composites the plurality of layers to obtain a 2D image of the 2D application, and sends the 2D image to the transmission component of the virtual screen.
The method of this embodiment decouples the rendering of the 2D application from the rendering of the 3D application: the former is performed by the 2D rendering compositor and the latter by the 3D rendering compositor (also called the XR compositor). Because the rendering of the 3D application no longer depends on the SurfaceFlinger of the Android system, i.e., 3D application rendering is decoupled from the SurfaceFlinger, it is more flexible and more portable, is not restricted to the Android system, and the method of the embodiment of the application can be applied to any existing operating system.
When each frame of the 2D application is rendered, the 2D rendering compositor acquires the contents of the layers of the current frame from the virtual screen and composites those layers to obtain the current frame image, i.e., the 2D image of the 2D application. The 2D image corresponds to the current frame and generally consists of several layers. Taking a video application as an example, the 2D image of its current frame may include the following windows: a video window, a bullet screen window, a control menu window, and an advertisement window. Each window may correspond to one layer, or several windows may correspond to one layer; for example, the bullet screen window and the control menu window may share a layer.
In one implementation, the 2D rendering compositor reads each layer of the current frame from the virtual screen, each layer being generated by the 2D application. In another implementation, the 2D rendering compositor reads the data of each layer of the current frame from the virtual screen and generates each layer from that data, where the data of a layer is the content displayed in its window, for example the content of the video window, i.e., the video data.
Taking the case where each window corresponds to one layer as an example, the 2D rendering compositor reads the layers corresponding to the video window, the bullet screen window, the control menu window, and the advertisement window from the virtual screen, and composites these layers to obtain the current frame image of the video application.
Illustratively, the 2D rendering compositor composites the layers of the 2D application together with other properties of the image, for example the position of each window, its transparency (e.g., translucent or fully transparent), and its color, to obtain the image displayed to the user.
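A minimal sketch of this kind of layer composition, assuming Android's Bitmap/Canvas API and a hypothetical Layer type (a production compositor would composite GPU buffers instead):

```java
import android.graphics.Bitmap;
import android.graphics.Canvas;
import android.graphics.Paint;
import java.util.List;

/** Minimal sketch: blend layers back-to-front by position and transparency.
 *  The Layer type is hypothetical; a real compositor works on GPU buffers. */
final class LayerCompositor {
    static final class Layer {
        final Bitmap content; final float x; final float y; final int alpha; // 0..255
        Layer(Bitmap content, float x, float y, int alpha) {
            this.content = content; this.x = x; this.y = y; this.alpha = alpha;
        }
    }

    static Bitmap composite(int width, int height, List<Layer> layersBackToFront) {
        Bitmap frame = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
        Canvas canvas = new Canvas(frame);
        Paint paint = new Paint();
        // e.g. video window, bullet screen window, control menu window, ad window
        for (Layer layer : layersBackToFront) {
            paint.setAlpha(layer.alpha);            // translucent or fully opaque window
            canvas.drawBitmap(layer.content, layer.x, layer.y, paint);
        }
        return frame;                               // the 2D image of the current frame
    }
}
```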
It will be appreciated that the 2D rendering compositor may have different names in different operating systems. In the Android system, for example, the 2D rendering compositor is the system service module SurfaceFlinger, which sits at the system layer of the XR device's layered architecture, while the 2D application and the 3D application sit at the application layer.
S103, the transmission component transmits the 2D image to the 3D rendering compositor.
The transmission component may be understood as a transmission channel between the 2D rendering compositor and the 3D rendering compositor. Optionally, the transmission component includes a buffer queue used to store the 2D image, and the transmission component transmits the 2D image using a producer-consumer model.
There are two roles in the producer-consumer model, the producer and the consumer, which communicate through the buffer queue (a memory buffer). The producer generates the data required by the consumer and stores it in the buffer queue; the consumer reads data from the buffer queue and consumes it. The two do not directly affect each other, and the coupling is low.
Without the producer-consumer model, the producer and the consumer are directly related, so the production rate and the consumption rate affect each other, and the next item can be produced only after the previous one has been consumed.
Alternatively, the buffer queue may use a multi-level buffer (e.g., a 3-level or 4-level buffer) that works in a swap mode. With a 3-level buffer, for example, the first frame image is drawn using a first buffer, the second frame image using a second buffer, the third frame image using a third buffer, the fourth frame image using the first buffer again, and so on. At any moment the producer may be drawing data into the first buffer while the second buffer stores already composited data that the consumer is consuming, and the third buffer is either empty or also stores composited data.
In this embodiment, the 2D rendering compositor is the producer and the 3D rendering compositor is the consumer: the 2D rendering compositor stores the 2D images it produces into the buffer queue of the transmission component, and the 3D rendering compositor consumes the composited 2D images from that buffer queue.
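A minimal sketch of such a transmission component, modeling the producer-consumer hand-off with a bounded blocking queue of capacity 3; the class and method names are placeholders, not the disclosed implementation:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

/** Sketch of the transmission component, assuming a 3-level buffer.
 *  Frame stands in for a composited 2D image; the names are placeholders. */
final class TransmissionComponent<Frame> {
    // Bounded queue models the buffer queue; capacity 3 models triple buffering.
    private final BlockingQueue<Frame> bufferQueue = new ArrayBlockingQueue<>(3);

    /** Producer side: the 2D rendering compositor stores a finished 2D image. */
    void submit(Frame composited2dImage) throws InterruptedException {
        bufferQueue.put(composited2dImage); // blocks only when all three buffers are full
    }

    /** Consumer side: the 3D rendering compositor takes the next 2D image. */
    Frame acquire() throws InterruptedException {
        return bufferQueue.take();          // blocks only when no frame is ready
    }
}
```

A bounded queue keeps the producer from running arbitrarily ahead of the consumer while still letting the two sides run at their own pace, which matches the low-coupling point above.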
S104, the 3D rendering compositor renders and composites the 2D image with the data to be rendered of the 3D application to obtain a 3D image of the 3D application, wherein the virtual screen is displayed in the 3D image and the 2D image is displayed on the virtual screen.
Illustratively, because the data to be rendered of the 3D application is divided into left-eye data and right-eye data, the 3D rendering compositor divides the 2D image into a left-eye 2D image and a right-eye 2D image. It then acquires the left-eye data and the right-eye data of the 3D application from the application's buffers, and renders and composites the left-eye data, the right-eye data, the left-eye 2D image, and the right-eye 2D image to obtain the left-eye image and the right-eye image of the 3D application.
The left-eye data and the right-eye data of the 3D application may be stored in different buffers; usually two buffers are set, a left-eye buffer for storing the left-eye data and a right-eye buffer for storing the right-eye data. Optionally, the buffers of the 3D application also use multiple levels, for example 3-level buffers for both the left-eye and right-eye buffers, working in a swap mode.
When rendering and compositing, the 3D rendering compositor renders the left-eye data of the 3D application together with the left-eye 2D image to obtain the left-eye image of the 3D application, and renders the right-eye data of the 3D application together with the right-eye 2D image to obtain the right-eye image of the 3D application.
The 2D image is displayed through the virtual screen in both the left-eye image and the right-eye image of the 3D application, which enables the 2D application to be displayed in the extended reality space of the 3D application.
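The per-eye composition could be sketched as follows; every type here is a hypothetical stand-in for a GPU-side buffer, and the pixel loop merely illustrates drawing the virtual screen's content into each eye view:

```java
/** Hypothetical stand-in types; real eye buffers live on the GPU. */
record Frame2D(int[] argb) {}
record EyeBuffer(int[] argb) {}
record StereoFrame(int[] leftEye, int[] rightEye) {}

final class StereoCompositor {
    /** The single composited 2D image is used once per eye. */
    static StereoFrame compose(Frame2D image2d, EyeBuffer left, EyeBuffer right) {
        int[] leftEye  = overlay(left.argb(),  image2d.argb());  // left-eye 2D image
        int[] rightEye = overlay(right.argb(), image2d.argb());  // right-eye 2D image
        return new StereoFrame(leftEye, rightEye);
    }

    /** Placeholder blend: in practice this would be a textured quad rendered at
     *  the virtual screen's pose inside the 3D scene, not a CPU pixel loop. */
    private static int[] overlay(int[] eyeData, int[] screenContent) {
        int[] out = eyeData.clone();
        int n = Math.min(out.length, screenContent.length);
        for (int i = 0; i < n; i++) {
            if ((screenContent[i] >>> 24) != 0) out[i] = screenContent[i]; // keep opaque pixels
        }
        return out;
    }
}
```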
Since the 2D rendering compositor and the 3D rendering compositor render independently, each has its own rendering frequency. A rendering frequency corresponds to a frame rate, the frequency at which frames appear in succession on a display, and can be considered equal to the frame rate.
In the prior art, the SurfaceFlinger renders the 2D application and the 3D application at the same rendering frequency. In this embodiment, the rendering frequency of the 2D rendering compositor may be equal to or less than that of the 3D rendering compositor. When it is less, the extended reality space provided by the 3D application is updated more frequently than the 2D image displayed within it.
To present a realistic scene to the user, a 3D application generally uses a higher frame rate, while the display requirements for a 2D application are generally lower and do not call for the same frame rate as the 3D application. The prior-art method increases rendering overhead because the 2D application and the 3D application share the same rendering frequency; in the method of this embodiment, the rendering frequency of the 2D application can be lower than that of the 3D application, which reduces rendering overhead.
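To illustrate the decoupling, a sketch with two independent timer-driven loops; the 30 Hz and 90 Hz figures are assumptions chosen for illustration, not values from the disclosure:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

/** Sketch of decoupled render loops; 30 Hz and 90 Hz are illustrative assumptions. */
final class RenderLoops {
    public static void main(String[] args) {
        ScheduledExecutorService exec = Executors.newScheduledThreadPool(2);
        // 2D compositor at ~30 Hz: cheaper, generally sufficient for a flat app UI.
        exec.scheduleAtFixedRate(RenderLoops::composite2dFrame,
                0, 1_000_000 / 30, TimeUnit.MICROSECONDS);
        // 3D compositor at ~90 Hz: keeps the extended reality space smooth.
        exec.scheduleAtFixedRate(RenderLoops::composite3dFrame,
                0, 1_000_000 / 90, TimeUnit.MICROSECONDS);
    }
    private static void composite2dFrame() { /* produce a 2D image into the buffer queue */ }
    private static void composite3dFrame() { /* consume the latest 2D image, render the 3D frame */ }
}
```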
In this embodiment, the 2D rendering compositor and the 3D rendering compositor are two independent modules: the 2D rendering compositor completes the rendering of the 2D application on its own, the 3D rendering compositor completes the rendering of the 3D application on its own, and the 3D rendering compositor supports displaying the 2D image of the 2D application in 3D space. Because the 3D rendering compositor is independent of the Android SurfaceFlinger, the method of this embodiment is no longer limited to the Android system and can run flexibly on various operating systems. The two compositors can be developed, upgraded, and maintained independently, and the low coupling makes them easier to manage.
S105, the display screen displays the 3D image.
After the 3D rendering compositor has rendered the 3D image, it submits the 3D image to the display screen for display.
In one implementation, before submitting the 3D image to the display screen, the 3D rendering compositor performs inverse dispersion processing on the 3D image and sends the processed 3D image to the display screen.
Inverse dispersion applies an inverse compensation for the dispersion affecting the 3D image, compensating the image's color and counteracting the influence of the dispersion on the image.
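One common way such compensation can be realized (an assumption for illustration, not the disclosed algorithm) is to sample the red, green, and blue channels at slightly different radial scales about the lens center, so that the lens's own dispersion re-aligns the channels for the eye:

```java
/** Sketch of a per-channel inverse dispersion (an assumed technique, not the
 *  disclosed algorithm): red, green, and blue are sampled at slightly different
 *  radial scales about the lens center so the lens re-aligns them on screen. */
final class InverseDispersion {
    // Hypothetical per-channel scale factors for a given lens.
    static final float R_SCALE = 1.010f, G_SCALE = 1.000f, B_SCALE = 0.990f;

    /** Returns the {u, v} sampling coordinate for one color channel,
     *  scaled radially about the lens center (cx, cy). */
    static float[] channelUv(float u, float v, float cx, float cy, float scale) {
        return new float[] { cx + (u - cx) * scale, cy + (v - cy) * scale };
    }
    // Per output pixel at (u, v), the compensated color would be assembled as:
    //   red   = sample(src, channelUv(u, v, cx, cy, R_SCALE))
    //   green = sample(src, channelUv(u, v, cx, cy, G_SCALE))
    //   blue  = sample(src, channelUv(u, v, cx, cy, B_SCALE))
    // where sample() is a hypothetical bilinear texture lookup.
}
```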
In another implementation, the XR device further comprises a hardware compositor (Hardware Composer, HWC). The 3D rendering compositor sends the 3D image to the HWC, the HWC performs the inverse dispersion processing on the 3D image, and the HWC sends the processed 3D image to the display screen.
According to the method of this embodiment, the 2D application is started in a virtual screen according to an opening instruction for the 2D application; the 2D rendering compositor acquires a plurality of layers of the 2D application from the virtual screen, composites the plurality of layers to obtain a 2D image of the 2D application, and sends the 2D image to the transmission component; the transmission component transmits the 2D image to the 3D rendering compositor; and the 3D rendering compositor renders and composites the 2D image with the data to be rendered of the 3D application to obtain a 3D image of the 3D application, which is displayed on the display screen. The rendering of the 2D application and the rendering of the 3D application are executed independently: the former is performed by the 2D rendering compositor and the latter by the 3D rendering compositor, and the rendering of the 3D application no longer depends on the SurfaceFlinger of the Android system, so the method of this embodiment is no longer limited to the Android system and can run flexibly on various operating systems.
On the basis of the first embodiment, a second embodiment of the present application provides an image rendering method applied in the Android system. Fig. 2 is a schematic diagram of the image rendering method under the Android system. As shown in fig. 2, under Android the 2D rendering compositor is the SurfaceFlinger, and the transmission component comprises the Surface of the virtual screen and a SurfaceTexture module. The SurfaceTexture module comprises a first buffer queue used to store 2D images; the first buffer queue shown in the figure uses a multi-level buffer. Under Android the buffers of the SurfaceTexture module are also called drawing buffers (i.e., GraphicBuffers).
Fig. 3 is a flowchart of an image rendering method according to a second embodiment of the present application, and the method according to the present embodiment is described with reference to fig. 2 and 3, and as shown in fig. 3, the method according to the present embodiment includes the following steps.
S201, in the virtual reality space of the 3D application, create a virtual screen for the 2D application and the surface of that virtual screen according to an opening instruction for the 2D application, and start the 2D application in the virtual screen.
As shown in fig. 2, a plurality of 2D applications may be opened in the 3D application, namely 2D app A, 2D app B, and 2D app C, with one virtual screen for each 2D application.
After the user triggers an opening instruction for any 2D application, the XR device creates a virtual screen for that 2D application according to the opening instruction, creates the surface of the virtual screen, and binds the 2D application with the virtual screen and its surface.
S202, the SurfaceFlinger acquires a plurality of layers of the 2D application from the virtual screen and composites the layers of the 2D application onto the surface of the virtual screen to obtain a 2D image of the 2D application.
As shown in fig. 2, the 2D application may include a plurality of 2D activity components, and may further include a 2D window component.
An activity component is typically a single screen in an Android application, on which controls can be displayed and user events can be monitored, processed, and responded to. An activity component typically provides a full-screen interface, while a window component provides a non-full-screen interface, such as a popup window or a dialog box.
The SurfaceFlinger obtains the layers of the 2D application from its activity components. In general, one activity component corresponds to one layer and one window component corresponds to one layer. In some special cases one activity component may correspond to multiple layers; for example, when a video is displayed in an activity component, the video is one layer and the activity component itself is another.
The surface of a virtual screen may be understood as the canvas of the virtual screen or of the 2D application: the image of the 2D application is rendered onto this canvas, so that the content of the 2D application is displayed only on the virtual screen.
S203, the Surface of the virtual screen stores the 2D image into the first buffer queue of the SurfaceTexture.
The Surface of the virtual screen is bound with the SurfaceTexture. The Surface of the virtual screen can be understood as the interface between the SurfaceFlinger and the SurfaceTexture, and the 2D image composited by the SurfaceFlinger is ultimately stored in the first buffer queue of the SurfaceTexture.
In this embodiment, the 2D image is transferred to the 3D rendering compositor by means of the Android system's existing SurfaceTexture; the creation and principle of the SurfaceTexture are not described here. The first buffer queue in the SurfaceTexture may use a multi-level buffer, with data passed between the buffer levels in a swap mode.
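A sketch of this Android-specific path using the real SurfaceTexture and Surface APIs; the wrapper class and its parameters are assumptions:

```java
import android.graphics.SurfaceTexture;
import android.view.Surface;

/** Sketch of the Android transmission path: the virtual screen's Surface feeds a
 *  SurfaceTexture whose buffer queue the 3D rendering compositor drains. The
 *  wrapper class is hypothetical; texId is an OpenGL ES texture name created
 *  on the 3D compositor's GL thread. */
final class SurfaceTextureBridge {
    static Surface createVirtualScreenSurface(int texId, int width, int height, Runnable onFrame) {
        SurfaceTexture surfaceTexture = new SurfaceTexture(texId);
        surfaceTexture.setDefaultBufferSize(width, height);
        // Fired when the SurfaceFlinger queues a newly composited 2D image.
        surfaceTexture.setOnFrameAvailableListener(st -> onFrame.run());
        // Hand this Surface to the virtual screen: the SurfaceFlinger is the
        // producer and the 3D rendering compositor is the consumer.
        return new Surface(surfaceTexture);
    }

    /** Called on the GL thread before drawing the virtual screen each 3D frame. */
    static void latchLatestFrame(SurfaceTexture surfaceTexture) {
        surfaceTexture.updateTexImage(); // swaps the newest buffer into the GL texture
    }
}
```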
S204, the 3D rendering compositor reads the 2D image from the first buffer queue of the SurfaceTexture.
After a plurality of 2D applications have been started in the 3D application, the 3D rendering compositor reads a 2D image from the SurfaceTexture corresponding to each 2D application.
S205, the 3D rendering compositor reads left-eye data from the left-eye buffer of the 3D application and reads right-eye data from the right-eye buffer of the 3D application.
S206, the 3D rendering compositor performs rendering and compositing according to the left-eye data, the right-eye data, and the 2D image of the 3D application to obtain a 3D image, and sends the 3D image to the HWC.
After a plurality of 2D applications have been started in the 3D application, for each frame the 3D rendering compositor begins rendering the current frame once it has acquired the 2D images of the started applications and the data to be rendered of the 3D application. The 3D rendering compositor divides each 2D image into a left-eye 2D image and a right-eye 2D image, renders the left-eye 2D image together with the left-eye data of the 3D application to obtain the left-eye 3D image, and renders the right-eye 2D image together with the right-eye data of the 3D application to obtain the right-eye 3D image.
The 3D image rendered and composited by the 3D rendering compositor is stored in the 3D rendering compositor's GraphicBuffer. In fig. 2, the GraphicBuffer of the 3D rendering compositor is distinct from the GraphicBuffer of the SurfaceTexture: the former stores a 3D image, while the latter stores a 2D image.
S207, the HWC performs inverse dispersion processing on the 3D image and sends the processed 3D image to the display screen.
S208, the display screen displays the 3D image.
The 3D image includes the content of the 2D applications, displayed on their virtual screens. In the example shown in fig. 2, the 3D application provides the virtual reality space, and the content of the three 2D applications is displayed in it. The initial position of each 2D application's virtual screen may be fixed; the user can adjust the position, size, and pose of a virtual screen according to need, and can also close it, where closing a virtual screen closes the 2D application displayed on it.
In this embodiment, the rendering of the 3D application is separated from the Android SurfaceFlinger: the SurfaceFlinger performs the image rendering of the 2D application, while the rendering of the 3D application is completed by the independent 3D rendering compositor. After the SurfaceFlinger finishes rendering the image of the 2D application, the 2D image is transferred to the 3D rendering compositor through the SurfaceTexture, and the 3D rendering compositor renders the data to be rendered of the 3D application together with the 2D image to obtain the 3D image. Separating the 3D rendering compositor from the SurfaceFlinger makes the 3D rendering compositor easier to extend and upgrade.
In order to facilitate better implementation of the image rendering method of the embodiment of the application, the embodiment of the application also provides an image rendering device. Fig. 4 is a schematic structural diagram of an image rendering device according to a third embodiment of the present application, and as shown in fig. 4, the image rendering device 100 may include:
a starting module 11, configured to start a 2D application in a virtual screen according to an opening instruction for the 2D application;
a 2D rendering compositor 12, configured to acquire a plurality of layers of the 2D application from the virtual screen, composite the plurality of layers to obtain a 2D image of the 2D application, and send the 2D image to a transmission component of the virtual screen;
a transmission component 13, configured to transmit the 2D image to a 3D rendering compositor;
a 3D rendering compositor 14, configured to render and composite the 2D image with data to be rendered of a 3D application to obtain a 3D image of the 3D application, wherein the virtual screen is displayed in the 3D image and the 2D image is displayed on the virtual screen;
and a display module 15, configured to display the 3D image.
In some embodiments, the transmission component includes a buffer queue for storing the 2D image, and the transmission component transmits the 2D image using a producer-consumer model.
In some embodiments, the 3D rendering compositor 14 is further configured to perform inverse dispersion processing on the 3D image and send the processed 3D image to the display module 15.
In some embodiments, the XR device further comprises a hardware compositor HWC, and the 3D rendering compositor 14 is further configured to send the 3D image to the HWC;
the HWC is configured to perform inverse dispersion processing on the 3D image and send the processed 3D image to the display module 15.
In some embodiments, a plurality of 2D applications are launched in the 3D application and displayed through a plurality of virtual screens, each virtual screen being able to be associated with only one 2D application.
In some embodiments, the rendering frequency of the 2D rendering compositor is less than or equal to the rendering frequency of the 3D rendering compositor.
In some embodiments, the 3D rendering compositor 14 is specifically configured to:
dividing the 2D image into a left-eye 2D image and a right-eye 2D image;
acquiring left eye data and right eye data of the 3D application from a buffer zone of the 3D application;
and render and composite according to the left-eye data, the right-eye data, the left-eye 2D image, and the right-eye 2D image of the 3D application to obtain the left-eye image and the right-eye image of the 3D application.
In some embodiments, when the XR device employs the Android system, the 2D rendering compositor is the system service module SurfaceFlinger, the transmission component includes the Surface of the virtual screen and a SurfaceTexture module, and the SurfaceTexture module includes a first buffer queue used to store the 2D image.
In some embodiments, the Surface of the virtual screen stores the 2D image into the first buffer queue of the SurfaceTexture module;
the 3D rendering compositor reads the 2D image from the first buffer queue.
In some embodiments, the first buffer queue employs a multi-level buffer.
It should be understood that apparatus embodiments and method embodiments may correspond with each other and that similar descriptions may refer to the method embodiments. To avoid repetition, no further description is provided here.
The apparatus 100 of the embodiment of the present application has been described above from the perspective of its functional modules in connection with the accompanying drawings. It should be understood that the functional modules may be implemented in hardware, by instructions in software, or by a combination of hardware and software modules. Specifically, each step of the method embodiments of the present application may be completed by integrated logic circuits of hardware in a processor and/or instructions in software form, and the steps of the methods disclosed in connection with the embodiments of the present application may be directly executed by a hardware decoding processor or by a combination of hardware and software modules in the decoding processor. Alternatively, the software modules may be located in a storage medium well established in the art, such as a random access memory, flash memory, read-only memory, programmable read-only memory, electrically erasable programmable memory, or registers. The storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the above method embodiments in combination with its hardware.
The embodiment of the application also provides an XR device. Fig. 5 is a schematic diagram of an XR device provided by a fourth embodiment of the application, as shown in fig. 5, the XR device 300 may comprise:
a memory 31 and a processor 32, the memory 31 being for storing a computer program and for transmitting the program code to the processor 32. In other words, the processor 32 may call and run a computer program from the memory 31 to implement the method in the embodiment of the present application.
For example, the processor 32 may be configured to perform, according to instructions in the computer program, the method steps performed by the XR device or the server in the method embodiments described above; accordingly, the XR device 300 may be an XR device or a server.
In some embodiments of the present application, the processor 32 may include, but is not limited to:
a general purpose processor, digital signal processor (Digital Signal Processor, DSP), application specific integrated circuit (Application Specific Integrated Circuit, ASIC), field programmable gate array (Field Programmable Gate Array, FPGA) or other programmable logic device, discrete gate or transistor logic device, discrete hardware components, or the like.
In some embodiments of the present application, the memory 31 includes, but is not limited to:
Volatile memory and/or nonvolatile memory. The nonvolatile memory may be a Read-Only Memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically Erasable EPROM (EEPROM), or a flash memory. The volatile memory may be a Random Access Memory (RAM), which acts as an external cache. By way of example and not limitation, many forms of RAM are available, such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchronous Link DRAM (SLDRAM), and Direct Rambus RAM (DR RAM).
In some embodiments of the present application, the computer program may be divided into one or more modules, which are stored in the memory 31 and executed by the processor 32 to perform the methods provided by the present application. The one or more modules may be a series of computer program instruction segments capable of performing the specified functions, which are used to describe the execution of the computer program in the XR device.
As shown in fig. 5, the XR device may further comprise: a transceiver 33, the transceiver 33 being connectable to the processor 32 or the memory 31.
The processor 32 may control the transceiver 33 to communicate with other devices, and in particular, may send information or data to other devices or receive information or data sent by other devices. The transceiver 33 may include a transmitter and a receiver. The transceiver 33 may further include antennas, the number of which may be one or more.
It will be appreciated that although not shown in fig. 5, the XR device 300 may further include a camera module, a WIFI module, a positioning module, a bluetooth module, a display, a controller, etc., which are not described herein.
It will be appreciated that the various components in the XR device are connected by a bus system comprising, in addition to a data bus, a power bus, a control bus and a status signal bus.
The present application also provides a computer storage medium, on which a computer program is stored, which when executed by a computer, enables the computer to perform the method performed by the XR device or the server in the above method embodiments, which is not described in detail herein.
The present application also provides a computer program product comprising a computer program stored in a computer readable storage medium. The processor of the XR device reads the computer program from the computer readable storage medium, and the processor executes the computer program, so that the XR device executes the method executed by the XR device or the server in the foregoing method embodiment, which is not described herein.
In the several embodiments provided by the present application, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, and for example, the division of the modules is merely a logical function division, and there may be additional divisions when actually implemented, for example, multiple modules or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or modules, which may be in electrical, mechanical, or other forms.
The modules illustrated as separate components may or may not be physically separate, and components shown as modules may or may not be physical modules, i.e., may be located in one place, or may be distributed over a plurality of network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. For example, functional modules in various embodiments of the present application may be integrated into one processing module, or each module may exist alone physically, or two or more modules may be integrated into one module.
The foregoing is merely illustrative of the present application, and the present application is not limited thereto, and any person skilled in the art will readily appreciate variations or alternatives within the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
Claims (14)
1. An image rendering method, applied to an XR device comprising a 2D rendering compositor, a 3D rendering compositor, and a display screen, the method comprising:
starting the 2D application in the virtual screen according to an opening instruction of the 2D application;
the 2D rendering compositor acquires a plurality of layers of the 2D application from the virtual screen, composites the plurality of layers to obtain a 2D image of the 2D application, and sends the 2D image to a transmission component of the virtual screen;
the transmission component transmits the 2D image to the 3D rendering compositor;
the 3D rendering compositor renders and composites the 2D image with the data to be rendered of a 3D application to obtain a 3D image of the 3D application, wherein the virtual screen is displayed in the 3D image and the 2D image is displayed on the virtual screen;
the display screen displays the 3D image.
2. The method of claim 1, wherein the transmission component comprises a buffer queue for storing the 2D image, and the transmission component transmits the 2D image using a producer-consumer model.
3. The method of claim 1, wherein before the display screen displays the 3D image, the method further comprises:
the 3D rendering compositor performs inverse dispersion processing on the 3D image and sends the processed 3D image to the display screen.
4. The method of claim 1, wherein the XR device further comprises a hardware compositor HWC, and before the display screen displays the 3D image, the method further comprises:
the 3D rendering compositor sends the 3D image to the HWC;
the HWC performs inverse dispersion processing on the 3D image and sends the processed 3D image to the display screen.
5. The method of claim 1, wherein a plurality of 2D applications are launched in the 3D application and displayed through a plurality of virtual screens, each virtual screen being able to be associated with only one 2D application.
6. The method of claim 1, wherein a rendering frequency of the 2D rendering compositor is less than or equal to a rendering frequency of the 3D rendering compositor.
7. The method according to claim 1, wherein the 3D rendering compositor renders and composites the 2D image and data to be rendered of a 3D application to obtain a 3D image of the 3D application, comprising:
dividing the 2D image into a left-eye 2D image and a right-eye 2D image;
acquiring left eye data and right eye data of the 3D application from a buffer zone of the 3D application;
and rendering and compositing according to the left-eye data, the right-eye data, the left-eye 2D image, and the right-eye 2D image of the 3D application to obtain the left-eye image and the right-eye image of the 3D application.
8. The method of any of claims 1-7, wherein when the XR device employs the Android system, the 2D rendering compositor is the system service module SurfaceFlinger, the transmission component comprises the Surface of the virtual screen and a SurfaceTexture module, and the SurfaceTexture module comprises a first buffer queue configured to store the 2D image.
9. The method of claim 8, wherein the transmission component transmitting the 2D image to the 3D rendering compositor comprises:
the Surface of the virtual screen stores the 2D image into the first buffer queue of the SurfaceTexture module;
the 3D rendering compositor reads the 2D image from the first buffer queue.
10. The method of claim 9, wherein the first buffer queue employs a multi-level buffer.
11. An image rendering apparatus, comprising:
a starting module, configured to start a 2D application in a virtual screen according to an opening instruction for the 2D application;
a 2D rendering compositor, configured to acquire a plurality of layers of the 2D application from the virtual screen, composite the plurality of layers to obtain a 2D image of the 2D application, and send the 2D image to a transmission component of the virtual screen;
a transmission component, configured to transmit the 2D image to a 3D rendering compositor;
a 3D rendering compositor, configured to render and composite the 2D image with data to be rendered of a 3D application to obtain a 3D image of the 3D application, wherein the virtual screen is displayed in the 3D image and the 2D image is displayed on the virtual screen;
and the display module is used for displaying the 3D image.
12. An XR device comprising:
a processor and a memory for storing a computer program, the processor being for invoking and running the computer program stored in the memory to perform the method of any of claims 1 to 10.
13. A computer readable storage medium storing a computer program for causing a computer to perform the method of any one of claims 1 to 10.
14. A computer program product comprising a computer program which, when executed by a processor, implements the method of any one of claims 1 to 10.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202310956618.4A | 2023-07-31 | 2023-07-31 | Image rendering method, apparatus, device, medium, and program product |

Publications (1)

| Publication Number | Publication Date |
|---|---|
| CN117173309A | 2023-12-05 |
Legal Events

| Code | Title |
|---|---|
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |