CN117193915A - Terminal control method, device, electronic equipment and storage medium

Info

Publication number: CN117193915A
Application number: CN202311163699.9A
Country: China (CN)
Language: Chinese (zh)
Inventor: 王东旭 (Wang Dongxu)
Applicant and assignee: Beijing Zitiao Network Technology Co Ltd
Priority and filing date: 2023-09-11
Publication date: 2023-12-08
Legal status: Pending

Abstract

The present disclosure provides a terminal control method and apparatus, an electronic device, and a storage medium. The terminal control method includes: in response to a target application completing the drawing of a first graphic, mounting a first graphics buffer carrying the first graphic to a first graphics buffer queue, where the target application is a single-layer application; a first compositor acquiring the first graphics buffer from the first graphics buffer queue and mounting the first graphics buffer to a second graphics buffer queue without using the first graphic for layer composition, where the first compositor performs layer composition for a first screen; and a second compositor acquiring the first graphics buffer from the second graphics buffer queue to perform layer composition using the first graphic, where the second compositor performs layer composition for a second screen. By eliminating one composition pass, the method reduces GPU and CPU overhead, lowers system power consumption, and improves efficiency.

Description

Terminal control method, device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the technical field of intelligent terminals, and in particular to a terminal control method and apparatus, an electronic device, and a storage medium.
Background
A terminal such as a virtual reality device has applications installed on it, and the content displayed by an application is composed of layers. For example, in video software with bullet-screen comments, the video at the bottom is one layer, the bullet-screen comments above it are another layer, and the two layers are stacked to form a complete interface.
Disclosure of Invention
The present disclosure provides a terminal control method and apparatus, an electronic device, and a storage medium.
The present disclosure adopts the following technical solutions.
In some embodiments, the present disclosure provides a method for controlling a terminal, including:
in response to a target application completing the drawing of a first graphic, mounting a first graphics buffer carrying the first graphic to a first graphics buffer queue, wherein the target application is a single-layer application;
a first compositor acquiring the first graphics buffer from the first graphics buffer queue and mounting the first graphics buffer to a second graphics buffer queue without using the first graphic for layer composition, wherein the first compositor is configured to perform layer composition for a first screen;
a second compositor acquiring the first graphics buffer from the second graphics buffer queue to perform layer composition using the first graphic, wherein the second compositor is configured to perform layer composition for a second screen.
In some embodiments, the present disclosure provides a control apparatus of a terminal, including:
an application unit configured to mount a first graphics buffer carrying a first graphic to a first graphics buffer queue in response to a target application completing the drawing of the first graphic, wherein the target application is a single-layer application;
a first compositor configured to acquire the first graphics buffer from the first graphics buffer queue and to mount the first graphics buffer to a second graphics buffer queue without using the first graphic for layer composition, the first compositor performing layer composition for a first screen; and
a second compositor configured to acquire the first graphics buffer from the second graphics buffer queue to perform layer composition using the first graphic, the second compositor performing layer composition for a second screen.
In some embodiments, the present disclosure provides an electronic device comprising: at least one memory and at least one processor;
wherein the memory is configured to store program code, and the processor is configured to call the program code stored in the memory to perform the above method.
In some embodiments, the present disclosure provides a computer-readable storage medium for storing program code which, when executed by a processor, causes the processor to perform the above method.
According to the terminal control method of the present disclosure, a single graphics buffer is transferred, circulated, and consumed across two graphics buffer queues. When the target application is a single-layer application, the composition performed by the first compositor is skipped, which reduces GPU and CPU overhead, lowers system power consumption, and improves efficiency.
Drawings
The above and other features, advantages, and aspects of embodiments of the present disclosure will become more apparent by reference to the following detailed description when taken in conjunction with the accompanying drawings. The same or similar reference numbers will be used throughout the drawings to refer to the same or like elements. It should be understood that the figures are schematic and that elements and components are not necessarily drawn to scale.
Fig. 1 is a schematic diagram of the use of an extended reality device according to an embodiment of the present disclosure.
Fig. 2 is a schematic diagram of a virtual field of view of an extended reality device according to an embodiment of the present disclosure.
Fig. 3 is a flow diagram of graphics buffers according to an embodiment of the present disclosure.
Fig. 4 is a flowchart of a control method of a terminal according to an embodiment of the present disclosure.
Fig. 5 is a schematic diagram of a control method of a terminal according to an embodiment of the present disclosure.
Fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the present disclosure are for illustration purposes only and are not intended to limit the scope of the present disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in different orders and/or in parallel. Furthermore, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "including" and variations thereof as used herein are open-ended, i.e., "including, but not limited to". The term "based on" means "based at least in part on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Related definitions of other terms will be given in the description below.
It should be noted that the terms "first," "second," and the like in this disclosure are merely used to distinguish between different devices, modules, or units and are not used to define an order or interdependence of functions performed by the devices, modules, or units.
It should be noted that references to "a" and "an" in this disclosure are intended to be illustrative rather than limiting; those of ordinary skill in the art will appreciate that they should be construed as "one or more" unless the context clearly indicates otherwise.
The names of messages or information exchanged between the various devices in the embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of such messages or information.
The following describes in detail the schemes provided by the embodiments of the present disclosure with reference to the accompanying drawings.
Extended reality (XR) technology in one or more embodiments of the present disclosure may be mixed reality technology, augmented reality technology, or virtual reality technology. XR technology can combine the real and the virtual through a computer to provide an extended reality space in which users can interact. In the extended reality space, a user may engage in social interaction, entertainment, learning, work, remote office, authoring of UGC (User Generated Content), and so on through an extended reality device such as a head-mounted display (Head Mount Display, HMD).
Referring to Fig. 1, a user may enter an extended reality space through an extended reality device such as head-mounted glasses and control his or her avatar (Avatar) in that space to engage in social interaction, entertainment, learning, remote office, and so on with avatars controlled by other users.
In one embodiment, in the extended reality space, a user may perform interactive operations through a controller, which may be a handle; for example, the user performs operation control through the keys of the handle. Of course, in other embodiments, the target object in the extended reality device may be controlled using gestures, voice, or multi-modal control instead of a controller.
The extended reality device described in embodiments of the present disclosure may include, but is not limited to, the following types:
PC-side extended reality device: it uses the PC to perform the computation related to the extended reality function and the data output, while the external PC-side extended reality device uses the data output by the PC to realize the extended reality effect.
Mobile extended reality device: it supports setting a mobile terminal (such as a smartphone) in various ways (such as a head-mounted display with a dedicated card slot); through a wired or wireless connection with the mobile terminal, the mobile terminal performs the computation related to the extended reality function and outputs the data to the mobile extended reality device, for example watching an extended reality video through an APP of the mobile terminal.
All-in-one extended reality device: it has a processor for performing the computation related to the extended reality function, so it has independent extended reality input and output functions, does not need to be connected to a PC or a mobile terminal, and offers a high degree of freedom in use.
Of course, the form in which the extended reality device is implemented is not limited to these, and it may be further miniaturized or enlarged as needed.
A sensor for posture detection (such as a nine-axis sensor) is arranged in the extended reality device to detect posture changes of the device in real time. If a user wears the extended reality device, when the posture of the user's head changes, the real-time posture of the head is transmitted to the processor, which calculates the gaze point of the user's line of sight in the extended reality space environment. Based on the gaze point, the image within the user's gaze range (namely, the virtual field of view) in the three-dimensional model of the extended reality space environment is calculated and displayed on the display screen, producing an immersive experience as if the user were viewing the real environment.
Fig. 2 shows an optional schematic view of a virtual field of view of an extended reality device provided by some embodiments of the present disclosure, where a horizontal field-of-view angle and a vertical field-of-view angle describe the distribution range of the virtual field of view in the virtual environment: the vertical distribution range is represented by the vertical field-of-view angle, and the horizontal distribution range by the horizontal field-of-view angle. The human eye always perceives the image of the virtual field of view in the extended reality space through a lens; the larger the field-of-view angles, the larger the virtual field of view, and the larger the region of the extended reality space the user can perceive. The field-of-view angle represents the distribution range of viewing angles the lens has when sensing the environment. For example, the field-of-view angle of an extended reality device represents the distribution range of viewing angles the human eye has when perceiving the extended reality space environment through the device's lens; for another example, in a mobile terminal provided with a camera, the field-of-view angle of the camera is the distribution range of viewing angles the camera has when sensing the real environment for shooting.
An extended reality device such as an HMD incorporates several cameras (e.g., depth cameras, RGB cameras, etc.), whose purpose is not limited to providing a pass-through view. The camera images and an integrated inertial measurement unit (IMU) provide data that can be processed by computer vision methods to automatically analyze and understand the environment. HMDs are designed to support not only passive but also active computer vision analysis. Passive computer vision methods analyze image information captured from the environment; they may be monoscopic (images from a single camera) or stereoscopic (images from two cameras), and include but are not limited to feature tracking, object recognition, and depth estimation. Active computer vision methods add information to the environment by projecting a pattern that is visible to the cameras but not necessarily to the human visual system; such techniques include time-of-flight (ToF) cameras, laser scanning, and structured light, which simplify the stereo matching problem. Active computer vision is used to implement scene depth reconstruction.
Terms used in some embodiments of the present disclosure are explained below:
layer (c): the 2D application is composed of layers, such as a bullet screen video software, where the underlying video is one layer and the bullet screen displayed above it is another layer, the two layers being stacked together to form a complete interface.
Compositor: a 2D application is only responsible for drawing the content of each layer; how multiple layers are overlaid to produce the final interface is not the 2D application's responsibility but the compositor's. The compositor in the Android system is the SurfaceFlinger process. In addition, an extended reality device has another compositor: the ATW (Asynchronous TimeWarp) compositor in the runtime.
Virtual screen (virtual display): a screen simulated by software, as distinguished from a physical screen. The virtual screen can store 2D application interface data. A compositor may composite the layers of a 2D application onto the physical screen or onto a virtual screen. For a non-extended-reality device, 2D application layers are typically composited to the physical screen. For an extended reality device, 2D applications are generally composited to a virtual screen first, and the data of the virtual screen is then composited to the physical screen by the ATW compositor, a compositor specific to extended reality devices.
Graphics buffer queue (GraphicBufferQueue): a queue containing a plurality of graphics buffers (GraphicBuffers). The underlying implementation of a 2D application layer is a graphics buffer queue. When the 2D application needs to draw content, it takes out (dequeues) a graphics buffer from the graphics buffer queue for drawing and adds it back to the queue after drawing; the 2D application is thus the producer (Producer). The compositor takes out (acquires) the graphics buffer drawn by the producer for composition, and after composition releases (releases) it back into the queue for the producer to reuse next time; the compositor is thus the consumer (Consumer).
The underlying implementation of a layer is a GraphicBufferQueue whose Producer is the 2D application and whose Consumer is SurfaceFlinger.
The underlying implementation of the virtual screen is also a GraphicBufferQueue, whose Producer is the SurfaceFlinger compositor and whose Consumer is the ATW compositor.
This relationship is shown in Fig. 3, which shows the flow direction of the GraphicBufferQueue.
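To make the four operations concrete, the following is a minimal C++ sketch of such a queue. It is an illustrative model only: the class and member names are assumptions, and the real AOSP BufferQueue additionally handles slots, fences, and cross-process transport, which are omitted here.

#include <cstdint>
#include <deque>
#include <memory>
#include <mutex>
#include <vector>

// Stand-in for a GPU-backed buffer; in AOSP this would be a GraphicBuffer.
struct GraphicBuffer {
    std::vector<uint8_t> pixels;
};

// Minimal model of a graphics buffer queue: a free list for the producer
// and a queued list for the consumer.
class BufferQueue {
public:
    explicit BufferQueue(size_t slots) {
        for (size_t i = 0; i < slots; ++i)
            free_.push_back(std::make_shared<GraphicBuffer>());
    }
    // Producer side: take a free buffer to draw into (dequeue).
    std::shared_ptr<GraphicBuffer> dequeue() {
        std::lock_guard<std::mutex> lock(mutex_);
        if (free_.empty()) return nullptr;
        auto buffer = free_.front();
        free_.pop_front();
        return buffer;
    }
    // Producer side: hand a drawn buffer to the consumer (queue).
    void queue(std::shared_ptr<GraphicBuffer> buffer) {
        std::lock_guard<std::mutex> lock(mutex_);
        queued_.push_back(std::move(buffer));
    }
    // Consumer side: take the oldest drawn buffer (acquire).
    std::shared_ptr<GraphicBuffer> acquire() {
        std::lock_guard<std::mutex> lock(mutex_);
        if (queued_.empty()) return nullptr;
        auto buffer = queued_.front();
        queued_.pop_front();
        return buffer;
    }
    // Consumer side: return a consumed buffer for the producer to reuse (release).
    void release(std::shared_ptr<GraphicBuffer> buffer) {
        std::lock_guard<std::mutex> lock(mutex_);
        free_.push_back(std::move(buffer));
    }
private:
    std::mutex mutex_;
    std::deque<std::shared_ptr<GraphicBuffer>> free_;
    std::deque<std::shared_ptr<GraphicBuffer>> queued_;
};

In these terms, the layer queue's producer is the 2D application and its consumer is SurfaceFlinger, while the virtual screen queue's producer is SurfaceFlinger and its consumer is the ATW compositor.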
As shown in Fig. 4, which is a flowchart of a terminal control method according to an embodiment of the present disclosure, the method includes the following steps.
S11: in response to the target application completing the drawing of a first graphic, mount a first graphics buffer carrying the first graphic to a first graphics buffer queue, where the target application is a single-layer application.
In some embodiments, the method proposed by the present disclosure may be used in a terminal, which may be an extended reality device, for example a virtual reality device, an augmented reality device, or a mixed reality device. The target application is a single-layer application, specifically a 2D application, so only the graphic of one layer needs to be drawn, and the first graphic is the graphic of that layer. The first graphics buffer may be a layer graphics buffer, which the target application may obtain from the first graphics buffer queue.
S12: a first compositor acquires the first graphics buffer from the first graphics buffer queue and mounts the first graphics buffer to a second graphics buffer queue without using the first graphic for layer composition, where the first compositor performs layer composition for a first screen.
In some embodiments, the first compositor may be the SurfaceFlinger compositor, which would normally perform layer composition on the graphics drawn by the target application. Because the target application is a single-layer application, in this embodiment the first compositor skips the layer composition and instead mounts the first graphics buffer to the second graphics buffer queue. The first screen is, for example, a virtual screen.
S13: a second compositor acquires the first graphics buffer from the second graphics buffer queue to perform layer composition using the first graphic, where the second compositor performs layer composition for a second screen.
In some embodiments, the second compositor may be the ATW compositor, and the second graphics buffer queue may be the virtual screen graphics buffer queue. After the second compositor takes the first graphics buffer, it takes out the first graphic for layer composition, and the composed result can then be displayed on the second screen, for example a real physical screen.
In some embodiments of the present disclosure, the target application is a single-layer application that draws only one layer itself, and the first compositor is controlled to skip layer composition of the first graphic. By eliminating one layer composition, leaving the single layer to be composed by the second compositor, and transferring the graphics buffer across graphics buffer queues, the power consumption of the first compositor and the load on the CPU and GPU are reduced.
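The single-layer fast path of steps S12 and S13 can be sketched on top of the toy BufferQueue above. firstCompositorFrame is a hypothetical helper, not the patent's or AOSP's actual code; queue() on the second queue stands in for the attach operation described later in this description.

// First compositor (SurfaceFlinger's role): acquire the drawn layer buffer
// and either composite it normally or, for a single-layer application,
// attach it unchanged to the virtual-screen queue (second queue).
void firstCompositorFrame(BufferQueue& layerQueue,
                          BufferQueue& virtualScreenQueue,
                          bool targetAppIsSingleLayer) {
    auto buffer = layerQueue.acquire();          // S12: take the drawn buffer
    if (!buffer) return;                         // nothing drawn this frame
    if (targetAppIsSingleLayer) {
        // Skip composition: hand the very same buffer across queues for the
        // second compositor (ATW) to consume in S13.
        virtualScreenQueue.queue(std::move(buffer));
    } else {
        // Multi-layer path: composite into a virtual-screen buffer, then
        // return the layer buffer to its own queue.
        // compositeToVirtualScreen(*buffer);    // GPU work omitted
        layerQueue.release(std::move(buffer));
    }
}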
In some embodiments of the present disclosure, after the second compositor performs layer composition using the first graphic, the method further includes: the second compositor releases the first graphics buffer and removes the first graphics buffer from the second graphics buffer queue. In some embodiments, after the second compositor finishes composing, the first graphics buffer is no longer needed there and is therefore released; and since the first graphics buffer is to be updated with a new graphic by the target application rather than recycled within the second graphics buffer queue, it is also removed from that queue, so that it no longer resides in the second graphics buffer queue and the target application can reuse it.
In some embodiments of the present disclosure, the method further includes: the first compositor releases the first graphics buffer back to the first graphics buffer queue. In this way, the target application can retrieve the first graphics buffer from the first graphics buffer queue to draw a new graphic.
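In the toy model above, acquire() already takes a buffer out of the queued list, so "removing" it from the second queue (the detach of the real BufferQueue) amounts to simply not calling release() on that queue; returning the buffer to the application is then a release() on the layer queue. A hedged one-line helper with illustrative names:

// Return path described in the two paragraphs above: the buffer leaves the
// virtual-screen queue for good (no release() there) and is released back
// into the layer queue so the target application can dequeue it again.
void returnBufferToApp(BufferQueue& layerQueue,
                       std::shared_ptr<GraphicBuffer> buffer) {
    layerQueue.release(std::move(buffer));
}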
In some embodiments of the present disclosure, the method further includes: adding a target object to the first graphics buffer, where the target object is used to identify that the first graphics buffer is available for use by the central processor, and the target object has a target attribute that varies with whether the first graphics buffer is being used by the graphics processor.
In some embodiments, the steps of the method are performed by a CPU (central processing unit), which is responsible for logic control, while the drawing and display of graphics are performed by a GPU (graphics processing unit). After the second compositor finishes composing, the first graphics buffer may already be usable at the CPU level even though the GPU may still be using it. Therefore, after the second compositor finishes composing, the first graphics buffer is released and removed from the second graphics buffer queue, and a target object is added or updated. Whether the first graphics buffer can be used by the CPU is determined by whether the target object exists: the presence of the target object indicates that the first graphics buffer can be used by the CPU. Meanwhile, the target object has a target attribute whose value changes according to whether the first graphics buffer is being used by the GPU, so whether the GPU has finished with the first graphics buffer is determined by examining the target attribute.
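The target object can be modelled as a small record attached to the buffer. The sketch below is an assumption for illustration: the presence of the object signals CPU availability, and its target attribute is modelled as an atomic flag that a GPU-completion signal would flip (Android would typically use a sync fence file descriptor here).

#include <atomic>

// Target object: its existence means the CPU may reuse the buffer; its
// target attribute (gpuDone) tracks whether the GPU has finished with it.
struct TargetObject {
    std::atomic<bool> gpuDone{false};
};

// A buffer slot as the target application sees it: the buffer plus the
// target object delivered by the cross-process callback (steps 7-1/7-2 in
// the detailed flow later in this description). A null pointer means the
// callback has not arrived, i.e. the CPU may not reuse the buffer yet.
struct TrackedBuffer {
    std::shared_ptr<GraphicBuffer> buffer;
    std::shared_ptr<TargetObject> fence;
};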
In some embodiments of the present disclosure, the method further includes: determining whether the first graphics buffer is in an available state; and if the first graphics buffer is in an available state, taking the first graphics buffer out of the first graphics buffer queue for drawing the next graphic. The first graphics buffer is in an available state if the target object exists and the target attribute indicates that the graphics processor is no longer using the buffer.
In some embodiments, when the first graphics buffer can be used by the CPU and has already been used and released by the GPU, the buffer has been fully consumed and may be reused; otherwise, the first graphics buffer is still in use and may not be reused.
In some embodiments of the present disclosure, the step of adding a target object to the first graphics buffer and the step of determining whether the first graphics buffer is in an available state are performed asynchronously. Asynchronous execution avoids waiting; and since there are multiple graphics buffers, the next one can be checked while another is still pending, which improves efficiency.
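Putting the two conditions together, the availability test and the producer-side scan might look like the following sketch (illustrative names, building on the structs above). The callback that adds the target object runs on another thread and never blocks this scan, which is the asynchrony described above.

// Available state: the target object exists (CPU-available) and its target
// attribute shows the GPU is done with the buffer.
bool isAvailable(const TrackedBuffer& tracked) {
    return tracked.fence != nullptr && tracked.fence->gpuDone.load();
}

// The target application scans its slots and reuses the first available
// buffer; unavailable slots are simply skipped and checked again later.
std::shared_ptr<GraphicBuffer> takeForDrawing(std::vector<TrackedBuffer>& slots) {
    for (auto& tracked : slots) {
        if (isAvailable(tracked)) {
            tracked.fence = nullptr;   // consume the availability signal
            return tracked.buffer;     // draw the next graphic into this one
        }
    }
    return nullptr;                    // every buffer is still in flight
}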
In some embodiments of the present disclosure, the first screen is a virtual screen and the second screen is a physical screen. In some embodiments, the terminal is an extended reality device; the first screen may be a virtual screen in the extended reality space, and the physical screen may be the real screen on the extended reality device.
The method proposed in the embodiments of the present disclosure is better explained by the following detailed example. On a VR device (based on the Android system), displaying the interface of a 2D application (the target application) generally requires three steps:
(1) The 2D application draws the layer(s); that is, the 2D application takes a graphics buffer from the layer graphics buffer queue, renders into it, and puts the drawn graphics buffer back into the layer graphics buffer queue.
(2) The SurfaceFlinger compositor composites the layer(s) to the virtual screen; that is, the SurfaceFlinger compositor acquires graphics buffers from the layer graphics buffer queues and composites them onto a virtual screen graphics buffer, releasing them after composition. Compositing to the virtual screen graphics buffer also requires the SurfaceFlinger compositor to obtain a virtual screen graphics buffer for composition from the virtual screen graphics buffer queue and to put it back into that queue after composition.
(3) The ATW compositor composites the virtual screen to the physical screen; that is, after the SurfaceFlinger compositor finishes composing and puts the buffer back into the virtual screen graphics buffer queue, the ATW compositor acquires the graphics buffer from the virtual screen graphics buffer queue, composites it to the physical screen, and then releases it.
In this process, the 2D application's data is copied twice: from the application to the virtual screen, and from the virtual screen to the physical screen.
When the 2D application has multiple layers, the composition by the SurfaceFlinger compositor is indispensable. However, when the 2D application has only one layer, the SurfaceFlinger composition merely copies the content to the virtual screen. Therefore, when the 2D application has only one layer, the SurfaceFlinger composition is redundant and can be skipped; that is, the single layer of the 2D application is composited directly to the physical screen, skipping the SurfaceFlinger composition and saving the power consumption and CPU/GPU load it would cause. Accordingly, in some embodiments of the present disclosure, when the target application is a 2D application with only one layer, the target application draws its single layer itself and the ATW compositor composites that layer to the physical screen, eliminating the SurfaceFlinger composition. This reduces CPU and GPU load, lowers power consumption, and increases running speed.
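The saving can be seen by counting composition passes per frame. The following sketch contrasts the two paths using the toy queues from earlier; the function names and the commented-out GPU calls are illustrative assumptions, and null checks are omitted for brevity.

// Normal path: two GPU passes per frame (layer -> virtual screen -> physical).
void presentFrameNormal(BufferQueue& layerQueue, BufferQueue& virtualScreenQueue) {
    auto layerBuffer = layerQueue.acquire();
    auto screenBuffer = virtualScreenQueue.dequeue();
    // compositeInto(*screenBuffer, *layerBuffer);  // GPU pass 1 (SurfaceFlinger)
    layerQueue.release(std::move(layerBuffer));
    virtualScreenQueue.queue(std::move(screenBuffer));
    auto toScanOut = virtualScreenQueue.acquire();
    // scanOutWithTimeWarp(*toScanOut);             // GPU pass 2 (ATW)
    virtualScreenQueue.release(std::move(toScanOut));
}

// Single-layer fast path: one GPU pass; SurfaceFlinger only moves the handle.
void presentFrameSingleLayer(BufferQueue& layerQueue, BufferQueue& virtualScreenQueue) {
    auto buffer = layerQueue.acquire();
    virtualScreenQueue.queue(std::move(buffer));    // attach, no composition
    auto toScanOut = virtualScreenQueue.acquire();
    // scanOutWithTimeWarp(*toScanOut);             // the only GPU pass (ATW)
    (void)toScanOut;  // release/detach and the callback to the app follow
                      // (see the numbered steps below)
}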
To have the target application's layer graphics buffer composited directly by the ATW compositor, the graphics buffer must be transferred, consumed, and circulated across graphics buffer queues. The target application, the SurfaceFlinger compositor, and the runtime (ATW compositor) run in different processes: the layer graphics buffer queue is the production/consumption channel for graphics buffers between the target application and SurfaceFlinger, and the virtual screen graphics buffer queue is the production/consumption channel between SurfaceFlinger and the ATW compositor. Normally, graphics buffers circulate only within their own queue; the embodiments of the present disclosure provide techniques for transferring, consuming, and circulating a graphics buffer across graphics buffer queues so that the SurfaceFlinger composition can be omitted. As shown in Fig. 5, the specific steps are as follows:
1. After the target application (App) draws the graphic (the first graphic), it adds the graphics buffer (the first graphics buffer) to the layer graphics buffer queue (the BufferQueue between the App and SF, i.e., the first graphics buffer queue).
2. The SurfaceFlinger (SF) compositor (the first compositor) takes out (acquires) this graphics buffer.
3. The SurfaceFlinger compositor does not proceed to composition, but instead prepares to mount (attach) the graphics buffer directly into the virtual screen graphics buffer queue (the second graphics buffer queue).
4. The SurfaceFlinger compositor adds the graphics buffer to the virtual screen graphics buffer queue (the BufferQueue between SF and the runtime).
5. The ATW compositor (the second compositor) acquires the graphics buffer and composites it to the physical screen.
6. After the ATW compositor finishes, the graphics buffer is released (release) and removed (detach) from the virtual screen graphics buffer queue.
7-1. A cross-process communication callback (callback set fence) informs the target application that this graphics buffer can be reused at the CPU level; that is, it adds the fence object (the target object), which carries a flag (the target attribute) indicating whether the buffer is available at the GPU level.
7-2. When the target application wants to take the buffer out again, it judges whether the buffer is available at the CPU level by whether the fence object exists, and whether it is available at the GPU level by the state of the fence flag. Once available, the graphics buffer can be taken from the graphics buffer queue and used again by the target application to draw layer content, after which the flow repeats from step 1 above.
Steps 7-1 and 7-2 are executed asynchronously.
Through the above steps, one graphics buffer is transferred, circulated, and consumed across two graphics buffer queues, and the composition by the SurfaceFlinger compositor is skipped when the target application is a single-layer application. Eliminating this composition pass reduces GPU and CPU overhead, lowers system power consumption, and improves efficiency. The sketch below ties the preceding code fragments together into a single frame of this flow.
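As a usage example, the fragments above can be wired into one frame of the flow in Fig. 5. Everything here is the illustrative toy model, not the patent's implementation; in particular, the cross-process callback of step 7-1 is simulated by setting the fence in-process.

#include <cstdio>

int main() {
    BufferQueue layerQueue(2);           // first graphics buffer queue
    BufferQueue virtualScreenQueue(0);   // second queue: holds attached buffers only

    // Step 1: the app dequeues a buffer, draws the first graphic, queues it.
    TrackedBuffer slot{layerQueue.dequeue(), nullptr};
    // drawFirstGraphic(*slot.buffer);   // rendering omitted
    layerQueue.queue(slot.buffer);

    // Steps 2-4: the first compositor acquires and attaches across queues.
    firstCompositorFrame(layerQueue, virtualScreenQueue, /*targetAppIsSingleLayer=*/true);

    // Step 5: the second compositor (ATW) acquires and composites to screen.
    auto toScanOut = virtualScreenQueue.acquire();
    // scanOutWithTimeWarp(*toScanOut);  // GPU work omitted
    (void)toScanOut;

    // Steps 6 and 7-1: release/detach (no release() into the second queue),
    // then the simulated cross-process callback adds the target object.
    slot.fence = std::make_shared<TargetObject>();
    slot.fence->gpuDone.store(true);     // GPU finished in this toy run

    // Step 7-2: the app checks availability and reuses the buffer.
    std::vector<TrackedBuffer> slots{slot};
    if (auto reusable = takeForDrawing(slots)) {
        returnBufferToApp(layerQueue, std::move(reusable));
        std::puts("buffer returned; the app can dequeue it for the next graphic");
    }
    return 0;
}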
The present disclosure also provides a terminal control apparatus, including:
an application unit configured to mount a first graphics buffer carrying a first graphic to a first graphics buffer queue in response to a target application completing the drawing of the first graphic, where the target application is a single-layer application;
a first compositor configured to acquire the first graphics buffer from the first graphics buffer queue and to mount the first graphics buffer to a second graphics buffer queue without using the first graphic for layer composition, the first compositor performing layer composition for a first screen; and
a second compositor configured to acquire the first graphics buffer from the second graphics buffer queue to perform layer composition using the first graphic, the second compositor performing layer composition for a second screen.
In some embodiments, the second compositor is further configured to: after performing layer composition using the first graphic, release the first graphics buffer and remove the first graphics buffer from the second graphics buffer queue.
In some embodiments, the first compositor is further configured to: release the first graphics buffer back to the first graphics buffer queue.
In some embodiments, the apparatus further includes a control unit configured to add a target object to the first graphics buffer, where the target object is used to identify that the first graphics buffer is available for use by a central processor, and the target object has a target attribute that varies with whether the first graphics buffer is being used by a graphics processor.
In some embodiments, the control unit is further configured to: determine whether the first graphics buffer is in an available state;
the application unit is further configured to take the first graphics buffer out of the first graphics buffer queue for drawing the next graphic if the first graphics buffer is in an available state, where the first graphics buffer is in an available state if the target object exists and the target attribute indicates that the graphics processor is no longer using the buffer.
In some embodiments, the step of adding a target object to the first graphics buffer and the step of determining whether the first graphics buffer is in an available state are performed asynchronously.
In some embodiments, at least one of the following is satisfied: the first screen is a virtual screen and the second screen is a physical screen; the terminal is an extended reality device.
Since the apparatus embodiments substantially correspond to the method embodiments, reference may be made to the description of the method embodiments for the relevant points. The apparatus embodiments described above are merely illustrative; the modules illustrated as separate modules may or may not be physically separate. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment, and those of ordinary skill in the art can understand and implement it without undue burden.
The method and apparatus of the present disclosure are described above based on the embodiments and applications. In addition, the present disclosure also provides an electronic device and a computer-readable storage medium, which are described below.
Referring now to fig. 6, a schematic diagram of an electronic device (e.g., a terminal device or server) 800 suitable for use in implementing embodiments of the present disclosure is shown. The terminal devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), in-vehicle terminals (e.g., in-vehicle navigation terminals), and the like, and stationary terminals such as digital TVs, desktop computers, and the like. The electronic device shown in the drawings is merely an example and should not be construed to limit the functionality and scope of use of the disclosed embodiments.
The electronic device 800 may include a processing device (e.g., a central processor, a graphics processor, etc.) 801 that may perform various appropriate actions and processes in accordance with programs stored in a read-only memory (ROM) 802 or loaded from a storage device 808 into a random access memory (RAM) 803. The RAM 803 also stores various programs and data required for the operation of the electronic device 800. The processing device 801, the ROM 802, and the RAM 803 are connected to each other by a bus 804. An input/output (I/O) interface 805 is also connected to the bus 804.
In general, the following devices may be connected to the I/O interface 805: input devices 806 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, and the like; an output device 807 including, for example, a Liquid Crystal Display (LCD), speakers, vibrators, etc.; storage 808 including, for example, magnetic tape, hard disk, etc.; communication means 809. The communication means 809 may allow the electronic device 800 to communicate wirelessly or by wire with other devices to exchange data. While an electronic device 800 having various means is shown, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may be implemented or provided instead.
In particular, according to embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method shown in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from a network via communication device 809, or installed from storage device 808, or installed from ROM 802. The above-described functions defined in the methods of the embodiments of the present disclosure are performed when the computer program is executed by the processing device 801.
It should be noted that the computer readable medium described in the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer-readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present disclosure, however, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
In some implementations, the clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed networks.
The computer readable medium may be contained in the electronic device; or may exist alone without being incorporated into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to perform the methods of the present disclosure described above.
Computer program code for carrying out operations of the present disclosure may be written in one or more programming languages, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units involved in the embodiments of the present disclosure may be implemented by means of software, or may be implemented by means of hardware. Wherein the names of the units do not constitute a limitation of the units themselves in some cases.
The functions described above herein may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), an Application Specific Standard Product (ASSP), a system on a chip (SOC), a Complex Programmable Logic Device (CPLD), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
According to one or more embodiments of the present disclosure, there is provided a control method of a terminal, including:
in response to a target application completing the drawing of a first graphic, mounting a first graphics buffer carrying the first graphic to a first graphics buffer queue, wherein the target application is a single-layer application;
a first compositor acquiring the first graphics buffer from the first graphics buffer queue and mounting the first graphics buffer to a second graphics buffer queue without using the first graphic for layer composition, wherein the first compositor is configured to perform layer composition for a first screen;
a second compositor acquiring the first graphics buffer from the second graphics buffer queue to perform layer composition using the first graphic, wherein the second compositor is configured to perform layer composition for a second screen.
According to one or more embodiments of the present disclosure, there is provided a terminal control method, where after the second compositor performs layer composition using the first graphic, the method further includes:
the second compositor releases the first graphics buffer and removes the first graphics buffer from the second graphics buffer queue.
According to one or more embodiments of the present disclosure, there is provided a control method of a terminal, further including:
the first compositor releases the first graphics buffer back to the first graphics buffer queue.
According to one or more embodiments of the present disclosure, there is provided a control method of a terminal, further including:
adding a target object to the first graphics buffer;
wherein the target object is used to identify that the first graphics buffer is available for use by a central processor; and the target object has a target attribute that varies with whether the first graphics buffer is being used by a graphics processor.
According to one or more embodiments of the present disclosure, there is provided a control method of a terminal, further including:
determining whether the first graphics buffer is in an available state;
if the first graphics buffer is in an available state, taking the first graphics buffer out of the first graphics buffer queue for drawing a next graphic;
wherein the first graphics buffer is in an available state if the target object exists and the target attribute indicates that the first graphics buffer is no longer being used by the graphics processor.
According to one or more embodiments of the present disclosure, there is provided a terminal control method, wherein the step of adding a target object to the first graphics buffer and the step of determining whether the first graphics buffer is in an available state are performed asynchronously.
According to one or more embodiments of the present disclosure, there is provided a control method of a terminal satisfying at least one of:
the first screen is a virtual screen, and the second screen is a physical screen;
the terminal is an extended reality device.
According to one or more embodiments of the present disclosure, there is provided a control apparatus of a terminal, including:
an application unit configured to mount a first graphics buffer carrying a first graphic to a first graphics buffer queue in response to a target application completing the drawing of the first graphic, wherein the target application is a single-layer application;
a first compositor configured to acquire the first graphics buffer from the first graphics buffer queue and to mount the first graphics buffer to a second graphics buffer queue without using the first graphic for layer composition, the first compositor performing layer composition for a first screen; and
a second compositor configured to acquire the first graphics buffer from the second graphics buffer queue to perform layer composition using the first graphic, the second compositor performing layer composition for a second screen.
According to one or more embodiments of the present disclosure, there is provided an electronic device including: at least one memory and at least one processor;
wherein the at least one memory is configured to store program code, and the at least one processor is configured to invoke the program code stored by the at least one memory to perform any of the methods described above.
According to one or more embodiments of the present disclosure, a computer-readable storage medium is provided for storing program code which, when executed by a processor, causes the processor to perform the above-described method.
The foregoing description is only of the preferred embodiments of the present disclosure and an explanation of the technical principles employed. It will be appreciated by persons skilled in the art that the scope of the disclosure is not limited to technical solutions formed by the specific combinations of the features described above, but also covers other technical solutions formed by any combination of the above features or their equivalents without departing from the concept of the disclosure, for example, technical solutions formed by replacing the above features with technical features having similar functions disclosed in (but not limited to) the present disclosure.
Moreover, although operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. In certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limiting the scope of the present disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are example forms of implementing the claims.

Claims (10)

1. A control method of a terminal, comprising:
in response to a target application completing the drawing of a first graphic, mounting a first graphics buffer carrying the first graphic to a first graphics buffer queue, wherein the target application is a single-layer application;
a first compositor acquiring the first graphics buffer from the first graphics buffer queue and mounting the first graphics buffer to a second graphics buffer queue without using the first graphic for layer composition, wherein the first compositor is configured to perform layer composition for a first screen; and
a second compositor acquiring the first graphics buffer from the second graphics buffer queue to perform layer composition using the first graphic, wherein the second compositor is configured to perform layer composition for a second screen.
2. The method of claim 1, further comprising, after the second compositor performs layer composition using the first graphic:
the second compositor releases the first graphics buffer and removes the first graphics buffer from the second graphics buffer queue.
3. The method as recited in claim 2, further comprising:
the first compositor releases the first graphics buffer back to the first graphics buffer queue.
4. A method according to claim 2 or 3, further comprising:
adding a target object to the first graphics buffer;
wherein the target object is used to identify that the first graphics buffer is available for use by a central processor; and the target object has a target attribute that varies with whether the first graphics buffer is being used by a graphics processor.
5. The method as recited in claim 4, further comprising:
determining whether the first graphics buffer is in an available state;
if the first graphics buffer is in an available state, taking the first graphics buffer out of the first graphics buffer queue for drawing a next graphic;
wherein the first graphics buffer is in an available state if the target object exists and the target attribute indicates that the first graphics buffer is no longer being used by the graphics processor.
6. The method of claim 5, wherein the step of adding a target object to the first graphics buffer and the step of determining whether the first graphics buffer is in an available state are performed asynchronously.
7. The method of claim 1, wherein at least one of the following is satisfied:
the first screen is a virtual screen, and the second screen is a physical screen;
the terminal is an extended reality device.
8. A control apparatus of a terminal, comprising:
an application unit configured to mount a first graphics buffer carrying a first graphic to a first graphics buffer queue in response to a target application completing the drawing of the first graphic, wherein the target application is a single-layer application;
a first compositor configured to acquire the first graphics buffer from the first graphics buffer queue and to mount the first graphics buffer to a second graphics buffer queue without using the first graphic for layer composition, the first compositor performing layer composition for a first screen; and
a second compositor configured to acquire the first graphics buffer from the second graphics buffer queue to perform layer composition using the first graphic, the second compositor performing layer composition for a second screen.
9. An electronic device, comprising:
at least one memory and at least one processor;
wherein the at least one memory is configured to store program code, and the at least one processor is configured to invoke the program code stored by the at least one memory to perform the method of any of claims 1 to 7.
10. A computer readable storage medium for storing program code which, when executed by a processor, causes the processor to perform the method of any one of claims 1 to 7.
Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination