CN117372594A - Display method, electronic device and storage medium - Google Patents


Info

Publication number
CN117372594A
CN117372594A (application CN202311309150.6A)
Authority
CN
China
Prior art keywords
rendering
canvas
canvas object
objects
rendering engine
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311309150.6A
Other languages
Chinese (zh)
Inventor
王浩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hubei Xingji Meizu Technology Co ltd
Original Assignee
Hubei Xingji Meizu Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hubei Xingji Meizu Technology Co ltd filed Critical Hubei Xingji Meizu Technology Co ltd
Priority to CN202311309150.6A priority Critical patent/CN117372594A/en
Publication of CN117372594A publication Critical patent/CN117372594A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/005 General purpose rendering architectures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017 Gesture based interaction, e.g. based on a set of recognized hand gestures

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Human Computer Interaction (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application discloses a display method, an electronic device, and a storage medium, belonging to the technical field of image rendering. The display method of the embodiments of the application comprises the following steps: a first process creates a view subclass and requests, from a second process, a handle of a canvas object corresponding to the view subclass; the first process transmits the handle to a third process, so that the third process obtains the canvas object; the third process runs a rendering engine to render the canvas object; and the first process displays the rendered canvas object on the view subclass.

Description

Display method, electronic device and storage medium
Technical Field
The present disclosure relates to the field of image rendering technologies, and in particular, to a display method, an electronic device, and a storage medium.
Background
In a vehicle HMI (Human-Machine Interface) system, real-time 3D rendering is often used to showcase the vehicle's functional highlights.
Currently, the mainstream approach to 3D rendering in a vehicle HMI system is for each application that needs to display 3D content to integrate 3D rendering capability separately. As a result, every such application must independently access the 3D rendering function, which inflates the size of each of these applications; when 3D rendering is used on a large scale, the cost of the vehicle system rises and the stability of the vehicle system becomes hard to control.
Disclosure of Invention
In a first aspect, the present application provides a method comprising:
creating a view sub-class by a first process, and requesting a handle of a canvas object corresponding to the view sub-class from a second process by the first process;
the first process transmits the handle to a third process, so that the third process obtains the canvas object;
the third process runs a rendering engine to render the canvas object;
the first process displays the rendered canvas object on the view sub-class.
In some embodiments, there are a plurality of canvas objects, and the third process running a rendering engine to render the canvas objects comprises:
allocating a corresponding display identifier to each canvas object, wherein each display identifier points to a corresponding screen object;
and activating the screen object corresponding to a canvas object to render that canvas object.
In some embodiments, further comprising:
and activating a multi-screen function of the rendering engine under the condition that the number of the canvas objects is greater than or equal to two, so that the rendering engine can switch the screen objects based on the canvas objects.
In some embodiments, the third process runs a rendering engine that renders the canvas object, comprising:
running the rendering engine to render the canvas object according to rendering resources of the canvas object;
the rendering resources are preloaded to memory.
In some embodiments, the rendering resources are determined based on view content of a currently presented view subclass.
In some embodiments, the third process runs a rendering engine that renders the canvas object, comprising:
and running the rendering engine to render the canvas object according to the gesture event corresponding to the canvas object.
In some embodiments, the gesture event is a gesture event detected under the first process.
In some embodiments, the third process is a wallpaper service.
In a second aspect, the present application provides an electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing a method as described in any one of the preceding claims when executing the program.
In a third aspect, the present application provides a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements a method as described in any of the above.
In a fourth aspect, the present application provides a computer program product comprising a computer program which, when executed by a processor, implements a method as described in any of the above.
Drawings
For a clearer description of the present application or of the prior art, the drawings that are used in the description of the embodiments or of the prior art will be briefly described, it being apparent that the drawings in the description below are some embodiments of the present application, and that other drawings may be obtained from these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic flow chart of a display method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of multi-screen rendering provided in an embodiment of the present application;
FIG. 3 is a second schematic flow chart of a display method according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the present application more apparent, the technical solutions in the present application will be clearly and completely described below with reference to the drawings in the present application, and it is apparent that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be made by one of ordinary skill in the art based on the embodiments herein without making any inventive effort, are intended to be within the scope of the present application.
The terms "first", "second", and the like in the description and claims are used to distinguish between similar objects, not necessarily to describe a particular sequential or chronological order. It is to be understood that the terms so used are interchangeable under appropriate circumstances, such that the embodiments of the application can operate in sequences other than those illustrated or described herein. "First" and "second" are generally used in a generic sense and do not limit the number of objects; for example, the first object may be one or more. Furthermore, in the description and claims, "and/or" means at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
For 3D rendering of a vehicle machine HMI system, the main stream technical scheme comprises:
the rendering engine is used for 3D rendering to implement all HMI functions. The disadvantage of such a solution is that it is easy to cause that the developed HMI system cannot be fused with Android, thereby causing ecological rupture. For example, desktop widget capabilities supported by Android ecology cannot be implemented in HMI Unity 3D. Moreover, such schemes require a large number of Unity developers to engage in HMI development, with a dramatic increase in learning thresholds and technical costs.
Alternatively, each application that needs to display 3D functionality accesses the 3D rendering capability separately. If 3D rendering involves multiple applications, each one typically requires individual access. Assuming the 3D rendering function package is 240 MB on disk and occupies 300 MB of memory when the system loads it, independent access means the size of each application grows by 240 MB; if the system needs to render 3D effects for multiple (e.g., N) applications at the same time, the corresponding memory overhead also grows by N × 300 MB.
Or, rendering-capability support is customized in cooperation with the provider of the rendering engine. Rendering techniques under such schemes are not open and incur ongoing maintenance costs.
Therefore, the application provides a display method in which, through interaction among a first process, a second process, and a third process, the third process supports the rendering capability of the first process through a rendering engine running in the background. The rendering requirements of all first processes can thus be met uniformly by the third process, without each first process integrating the rendering engine separately. This achieves cross-process 3D rendering, greatly reduces system implementation cost, and ensures the running stability of the system.
In some embodiments, the method runs on a vehicle-mounted terminal, i.e., a terminal disposed inside the vehicle for human-machine interaction, which provides various optional functions, such as viewing and controlling vehicle status, reserving and using vehicle-related services (e.g., car washing, refueling), and Internet-based applications such as social networking and web browsing; further examples include vehicle positioning and navigation and multimedia playback. The display method provided by this embodiment can be applied to a vehicle machine, and also to other terminals with 3D rendering requirements, such as mobile phones, tablets, desktops, and notebooks.
Fig. 1 is a schematic flow chart of a display method according to an embodiment of the present application. As shown in fig. 1, a display method is provided, which includes the following steps: step 110, step 120, step 130, step 140. The method flow steps are only one possible implementation of the present application.
Step 110, a first process creates a view sub-class, and the first process requests a handle of a canvas object corresponding to the view sub-class from a second process.
Here, the first process is the process that needs to expose the 3D function. There may be a plurality of first processes, and when there are a plurality of first processes, the first processes are independent of each other and do not affect each other.
The second process is the process in the system that creates and distributes handles of canvas objects; for example, in the Android system, the second process may be SurfaceFlinger. SurfaceFlinger is an important component of the Android system, responsible for rendering the system UI (User Interface); in particular, it composites the canvas objects' handles (Surfaces) into the frame buffer, which the screen then reads to display the canvas objects. In embodiments of the present application, SurfaceFlinger may be used to create and distribute Surface handles for canvas objects.
For a first process that needs to expose 3D functionality, view subclass creation may be performed by the first process itself. The view subclass herein, i.e., the view on which graphics can be drawn, is commonly used for scenes where a large number of graphics need to be drawn on a screen. For example, in the Android system, the view subclass may be SurfaceView.
After creating the view subclass, the first process may request from the second process a handle of the canvas object corresponding to the view subclass. For example, in the Android system, the view subclass may be SurfaceView, the canvas object corresponding to SurfaceView is Canvas, and the handle of the Canvas object is Surface. After the first process requests the handle, the second process may assign a handle to the first process as the handle of the canvas object corresponding to the view subclass.
Step 120, the first process transmits the handle to a third process, so that the third process obtains the canvas object.
And 130, the third process runs a rendering engine to render the canvas object.
Here, the third process is a process supporting background 3D rendering. For example, in the Android system, the third process may be WallpaperService. WallpaperService is an abstract wallpaper-service class provided by the Android system, and it can support rendering by, for example, the 3D rendering engine Unity in the background.
The first process, after obtaining the handle of the canvas object corresponding to the view sub-class, may transmit the handle to a third process. After the third process obtains the handle, the canvas object corresponding to the handle can be obtained so as to conveniently perform 3D rendering on the canvas object in the background.
It can be appreciated that, since the third process supports background 3D rendering, after obtaining the canvas object to be rendered, the third process can run a rendering engine supporting background rendering, thereby implementing 3D rendering for the canvas object. Specifically, in the Android system, the third process WallpaperService may run Unity in background-rendering mode to implement 3D rendering of the Canvas objects.
Step 140, the first process displays the canvas object with the rendering completed on the view sub-class.
After the third process finishes rendering the canvas object through the rendering engine, the first process can display the canvas object after the rendering is finished through the view subclass, so that the effect of displaying the 3D rendered graphics in the first process is achieved.
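The four-step flow of steps 110 to 140 can be modeled as a minimal Python sketch. All class and method names here are hypothetical stand-ins for the components named in the text (SurfaceFlinger, WallpaperService, Unity); the real flow uses Android Binder and Surface APIs, not direct method calls:

```python
class SecondProcess:
    """Stands in for SurfaceFlinger: creates and distributes canvas handles."""
    def __init__(self):
        self._canvases = {}        # handle -> canvas object
        self._next_handle = 0

    def request_handle(self):
        handle = self._next_handle
        self._next_handle += 1
        self._canvases[handle] = {"content": None}   # the canvas object
        return handle

    def canvas_for(self, handle):
        return self._canvases[handle]


class ThirdProcess:
    """Stands in for the background wallpaper service running the engine."""
    def __init__(self, second):
        self._second = second

    def render(self, handle):
        canvas = self._second.canvas_for(handle)     # step 120: obtain canvas
        canvas["content"] = "3D frame"               # step 130: engine renders
        return canvas


class FirstProcess:
    """Stands in for an app that needs to show 3D content."""
    def __init__(self, second, third):
        self.view = None                             # the view subclass
        self._second, self._third = second, third

    def show_3d(self):
        handle = self._second.request_handle()       # step 110: request handle
        canvas = self._third.render(handle)          # step 120: hand off to renderer
        self.view = canvas["content"]                # step 140: display result
        return self.view
```

Note that the first process never touches the rendering engine itself: it only moves the handle, which is the point of the cross-process design.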
In the embodiment of the application, through interaction among the first process, the second process and the third process, the third process can support the rendering capability of the first process through the rendering engine in the background, so that the rendering requirements of all the first processes can be uniformly met through the third process, the rendering engines are not required to be respectively connected for all the first processes, 3D rendering of the cross process is realized, the system realization cost is greatly reduced, and the running stability of the system is ensured.
Moreover, because the rendering capability of the application corresponding to each first process is controlled by one unified rendering engine, animations can flow seamlessly between applications in a continuous "one-shot" (一镜到底) effect.
It should be noted that each embodiment of the present application may be freely combined, permuted, or executed separately, and does not need to rely on or rely on a fixed execution sequence.
In some embodiments, there are a plurality of canvas objects;
accordingly, the step 130 includes:
allocating a corresponding display identifier to each canvas object, wherein each display identifier points to a corresponding screen object;
and activating the screen object corresponding to a canvas object to render that canvas object.
In particular, there may be multiple canvas objects that need to be rendered at the same time. The plurality of canvas objects herein may be a plurality of canvas objects corresponding to a plurality of view subclasses created by one first process, or may be canvas objects corresponding to view subclasses created by a plurality of first processes, which is not specifically limited in the embodiment of the present application.
For the case that a plurality of canvas objects need to be rendered at the same time, the third process can respectively allocate corresponding display identifiers for each canvas object, so that different canvas objects are corresponding to different screen objects through the corresponding relations between the display identifiers and the screen objects. It will be appreciated that for terminals supporting multi-screen display, i.e. for terminals supporting parallel display of multiple screen windows, the number of screen objects, i.e. the number of screen windows the terminal supports to be displayed simultaneously. If the current terminal does not support multi-screen display, that is, the current terminal does not support simultaneous display of multiple screen windows, the number of screen objects is 1, and at this time, a scheme of simultaneous rendering of multiple canvas objects cannot be supported. The method provided by the embodiment is aimed at a terminal supporting multi-screen display.
Here, the correspondence between screen objects and presentation identifiers may be one-to-one: one screen object corresponds to one presentation identifier, so the presentation identifier can be understood as the identity of a screen object, and allocating a corresponding presentation identifier to each canvas object amounts to allocating each canvas object to its own screen object. For example, in the Android system the screen object is a Display, and the presentation identifier may be the Camera's targetDisplay property, which designates the Display that the camera renders to.
After the allocation of the presentation identifier is completed, the third process may activate the screen object pointed to by the presentation identifier corresponding to the canvas object, thereby rendering the canvas object under the activated screen object.
For example, in the vehicle HMI service, functions such as the 3D live desktop, vehicle settings, air-conditioning management, the energy center, and the scene center are each implemented by an independent application. When entering air-conditioning management from the 3D live desktop, the desktop application and the air-conditioning application must render simultaneously. Each application, acting as a first process, creates a view subclass and transmits the handle of its corresponding canvas object to the third process. The third process then holds two canvas objects that need simultaneous rendering, so it allocates display identifiers pointing to different screen objects to the desktop application's canvas object and the air-conditioning application's canvas object, thereby rendering the desktop and the air conditioner under different screen objects.
In the embodiment of the application, a display identifier is allocated to each canvas object, and the screen object pointed to by the identifier is activated for rendering, so that different 3D contents can be rendered simultaneously.
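The one-to-one allocation of display identifiers to canvas objects can be sketched as follows; the function name and the rule for exhausted screens are illustrative assumptions, not part of the patent or of any Android/Unity API:

```python
def allocate_displays(canvas_ids, num_displays):
    """Map each canvas object to its own screen object (one-to-one).

    Raises if the terminal does not offer enough screen windows, mirroring
    the constraint that a single-screen terminal cannot render multiple
    canvas objects simultaneously.
    """
    if len(canvas_ids) > num_displays:
        raise RuntimeError("terminal does not support enough screen windows")
    # display identifiers 1, 2, ... each point to a distinct screen object
    return {canvas: display for display, canvas in enumerate(canvas_ids, start=1)}
```

For the desktop-plus-air-conditioning example above, `allocate_displays(["desktop", "aircon"], 8)` yields `{"desktop": 1, "aircon": 2}`.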
In some embodiments, the method further comprises:
and activating a multi-screen function of the rendering engine under the condition that the number of the canvas objects is greater than or equal to two, so that the rendering engine can switch the screen objects based on the canvas objects.
Specifically, when the number of canvas-object handles received by the third process is greater than or equal to two, i.e., when two or more canvas objects need to be rendered, there is a requirement for synchronous multi-screen rendering; the multi-screen function of the rendering engine therefore needs to be activated in the third process, so that the third process can render different content in each screen object, i.e., render different 3D contents simultaneously.
Here, in the Android system, activating the multi-screen function may specifically be achieved by calling an update-display interface (e.g., updateDisplayInternal) to activate the screen object (Display) of the scene Camera in the Unity script. It will be appreciated that after the multi-screen function is activated, the rendering engine can automatically detect the number of screen objects (Displays) of the current terminal, i.e., how many screen windows the current device has, and initialize that many Display objects. After initialization, the screen object can be switched according to the canvas object currently to be rendered; for example, when the presentation identifier (the Camera's targetDisplay) of a canvas object points to a given Display, rendering switches to that screen object.
Fig. 2 is a schematic diagram of multi-screen rendering according to an embodiment of the present application. As shown in fig. 2, in the editor of the rendering engine Unity, the presentation identifier targetDisplay of the scene Camera points to, and is displayed in, the corresponding screen object Display. The display identifier targetDisplay has 8 available settings, namely display 1 through display 8. When 4 applications A, B, C, and D, each acting as a first process, transmit the handles of their canvas objects to the third process, display identifiers display 1, display 2, display 3, and display 4 can be allocated to the canvas objects corresponding to applications A, B, C, and D respectively, so that the rendered 3D content of the different applications is displayed in different screen objects.
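The activation rule — enable the engine's multi-screen function only when two or more canvas objects must render at once, then switch the active screen object per canvas — can be sketched as below. The class and method names are hypothetical; in real Unity the corresponding concepts are `Display.displays` and `Camera.targetDisplay`:

```python
class RenderEngine:
    """Illustrative model of a multi-screen-capable rendering engine."""
    def __init__(self, detected_displays):
        # the engine detects how many screen windows the terminal offers
        self.displays = list(range(1, detected_displays + 1))
        self.multi_screen = False
        self.active = None

    def prepare(self, num_canvases):
        # activate the multi-screen function only when >= 2 canvases render
        if num_canvases >= 2:
            self.multi_screen = True

    def switch_to(self, target_display):
        # switching away from the primary screen requires multi-screen mode
        if target_display != self.displays[0] and not self.multi_screen:
            raise RuntimeError("multi-screen function not activated")
        self.active = target_display
```

With four applications as in Fig. 2, `prepare(4)` enables multi-screen mode, after which `switch_to(3)` moves rendering to the third screen object.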
In some embodiments, step 130 comprises:
running the rendering engine to render the canvas object according to rendering parameters of the canvas object;
the rendering parameters include gesture events and/or rendering resources.
In particular, in the process of rendering canvas objects by running a rendering engine in a third process, rendering parameters may be applied for rendering. The rendering parameters herein correspond to canvas objects, i.e., different rendering parameters may correspond to different canvas objects.
Rendering parameters, i.e. parameters applied in the rendering process, may specifically comprise gesture events and/or rendering resources.
The gesture event may be a user gesture detected by the first process while interacting with the user. For example, on a vehicle-settings page, when the user operates the screen to turn on a car light, the page may send a "turn on light" gesture event to the third process, which then turns on the 3D car light when rendering that page. As another example, on the 3D live desktop, when the user slides the desktop, a sliding gesture event may be sent to the third process, which adjusts the vehicle model on the desktop so that the model rotates with the user's sliding gesture.
Rendering resources are resources for 3D rendering, such as 3D models, texture shaders, texture maps, logical scripts, etc. For example, if the vehicle model needs to be displayed under the 3D live desktop, the rendering resources carrying the vehicle model may be used as parameters required for rendering under the 3D live desktop.
In some embodiments, prior to step 130, further comprising:
and preloading the rendering resources to a memory.
Specifically, when the third process performs rendering, a preloading function can be applied to achieve hot start: by loading rendering resources in advance, the first process gains instant-start capability when displaying 3D content.
It will be appreciated that loading rendering resources typically requires IO operations; reading rendering resources from local storage takes a long time and directly affects rendering efficiency and the response speed of presentation on the view subclass. For example, loading the rendering resources of a vehicle model takes 200-300 ms; loading and rendering the model only when it needs to be displayed obviously incurs a large delay.
Therefore, in the embodiment of the application, rendering resources are loaded into memory in advance, ensuring that loading is already complete when rendering and display are needed, thereby realizing zero-delay display.
In addition, when the first process or its view subclass is closed, the rendering resources associated with the first process can be released in time to reclaim memory.
In some embodiments, the rendering resources are determined based on view content of a currently presented view subclass.
Specifically, in order to implement preloading of rendering resources, thereby ensuring zero-delay presentation, it is necessary to determine rendering resources that need to be preloaded based on view contents of view subclasses currently being presented.
Here, the view content of the currently displayed view subclass reflects the content likely to be jumped to next; therefore, the view content to be displayed subsequently can be predicted from the current view content, and the rendering resources required to render it can be preloaded.
For example, the currently displayed view subclass is a 3D live desktop, the 3D live desktop includes view contents such as vehicle setting, air conditioning setting, energy center, etc., and the view contents are all jumped by a user through operation. In this case, the rendering contents associated with the vehicle setting, the air conditioning setting, the energy center, and the like may be preloaded. Therefore, when the 3D live desktop jumps to the view content, the preloaded rendering resources can be directly applied to rendering and displaying. For example, when a user starts the air conditioner setting through the 3D live desktop, the rendering resources of the air conditioner setting are loaded, and the display can be directly rendered, so that zero-delay display is realized.
It should be noted that the volume of rendering resources is usually small; for example, the combined size of the rendering resources associated with vehicle settings, air-conditioning settings, and the energy center does not exceed 100 MB. Preloading them does not occupy too much memory, and hot-starting the related applications greatly reduces startup time.
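The preload-and-release behavior described above can be sketched as a small in-memory cache. The route table in `predict_next` and all names are illustrative assumptions, not the patent's implementation:

```python
class ResourceCache:
    """Illustrative in-memory cache for preloaded rendering resources."""
    def __init__(self):
        self._mem = {}

    def preload(self, names, load_fn):
        for name in names:
            self._mem[name] = load_fn(name)   # pay the IO cost up front

    def get(self, name, load_fn):
        if name in self._mem:                 # hot start: zero-delay path
            return self._mem[name]
        return load_fn(name)                  # cold start: blocks on IO

    def release(self, names):
        for name in names:                    # reclaim memory when the
            self._mem.pop(name, None)         # first process closes


def predict_next(current_view):
    """Guess which views can be jumped to from the current one (illustrative)."""
    routes = {
        "3d_live_desktop": ["vehicle_setting", "aircon_setting", "energy_center"],
    }
    return routes.get(current_view, [])
```

While the 3D live desktop is showing, `cache.preload(predict_next("3d_live_desktop"), load)` pulls the vehicle-setting, air-conditioning, and energy-center resources into memory, so a later `get` returns without touching disk.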
In some embodiments, the gesture event is a gesture event detected under the first process.
Specifically, the first process may detect the gesture event in real time, so that after the gesture event is detected, the detected gesture event is transmitted to the third process, so as to adjust rendering of the canvas object corresponding to the first process in the third process, so that the canvas object can respond with the gesture event in real time.
Further, the first process may send gesture events to the third process through an IPC (Inter-Process Communication) service established between the processes, enabling interactive control of the rendered content. The IPC service handles inter-process communication between applications and can be implemented, for example, via the AIDL mechanism of the Android system.
In the embodiment of the application, gesture events are transmitted through IPC to control the 3D rendering, so the first process only performs business control and need not concern itself with the specific implementation of 3D rendering, effectively decoupling business logic from 3D display.
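Forwarding a detected gesture event across the process boundary — the app performing only business control while the third process reacts in rendering — can be sketched as follows. The IPC call is modeled as a plain method call; a real system would use AIDL/Binder, and all names and event strings are illustrative:

```python
class Renderer:
    """Lives in the third process; reacts to forwarded gesture events."""
    def __init__(self):
        self.state = {}

    def on_gesture(self, canvas_id, event):
        if event == "turn_on_light":
            self.state[canvas_id] = "light_on"       # re-render the 3D car light
        elif event == "slide":
            self.state[canvas_id] = "model_rotated"  # rotate the car model


class App:
    """Lives in the first process; detects gestures and forwards them."""
    def __init__(self, renderer, canvas_id):
        self._renderer, self._canvas_id = renderer, canvas_id

    def detect_gesture(self, event):
        # forward over IPC; the app never touches rendering details
        self._renderer.on_gesture(self._canvas_id, event)
```

This mirrors the decoupling described above: the app's only rendering-related responsibility is emitting events.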
In some embodiments, the third process is a wallpaper service.
Specifically, the special feature of the wallpaper service is that it controls the device's full-screen background display. Rendering based on the wallpaper service guarantees that, in cross-process rendering, canvas objects flow seamlessly between the canvas views of the first processes, achieving a continuous "one-shot" (一镜到底) effect.
Alternatively, the wallpaper service herein may be WallpaperService.
In some embodiments, fig. 3 is a second flowchart of a display method according to an embodiment of the present application. As shown in fig. 3, under the Android system, application A is the first process, the system SurfaceFlinger is the second process, and the background WallpaperService is the third process. The rendering engine Unity runs in the third process WallpaperService, and Unity supports background rendering.
The first-process application A creates a view subclass SurfaceView and applies to the second process SurfaceFlinger for the handle (Surface) of the Canvas object corresponding to the SurfaceView.
Accordingly, the second process SurfaceFlinger allocates the Surface handle of the Canvas object to the first-process application A and transmits the Surface to it, so that application A obtains the Surface.
Then, the first-process application A transmits the Surface handle to the third process WallpaperService, which runs the rendering engine Unity to render the Canvas object corresponding to the Surface handle. The rendering resources required for rendering in the third process may be preloaded, and the first-process application A may also transmit detected gesture events to the third process for use in rendering.
In addition, the first process application a may add View to render non-3D content within application a.
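The hand-off above can be sketched in plain Java. The classes below (`SurfaceFlingerStub`, `WallpaperServiceStub`, and the `Surface`/`Canvas` stand-ins) are hypothetical simplifications for illustration only, not Android framework types: the real SurfaceFlinger and WallpaperService run in separate processes and exchange the handle over Binder IPC.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative stand-ins for the types named in the description.
class Surface { final int id; Surface(int id) { this.id = id; } }  // handle to a canvas object
class Canvas { String content = ""; }                              // the canvas object itself

// "Second process": allocates canvas objects and hands out their handles.
class SurfaceFlingerStub {
    private final Map<Integer, Canvas> canvases = new HashMap<>();
    private int nextId = 0;
    Surface allocateSurface() {
        Surface s = new Surface(nextId++);
        canvases.put(s.id, new Canvas());
        return s;
    }
    Canvas resolve(Surface s) { return canvases.get(s.id); }       // look up a canvas by its handle
}

// "Third process": resident rendering service that draws into the canvas located via the handle.
class WallpaperServiceStub {
    private final SurfaceFlingerStub compositor;
    WallpaperServiceStub(SurfaceFlingerStub compositor) { this.compositor = compositor; }
    void render(Surface handle, String gestureEvent) {
        Canvas c = compositor.resolve(handle);
        c.content = "3D scene (" + gestureEvent + ")";             // stand-in for Unity rendering
    }
}

// "First process": owns the view, but delegates all rendering across the process boundary.
public class AppA {
    public static void main(String[] args) {
        SurfaceFlingerStub flinger = new SurfaceFlingerStub();
        WallpaperServiceStub wallpaper = new WallpaperServiceStub(flinger);

        Surface handle = flinger.allocateSurface();  // step 1: request handle from second process
        wallpaper.render(handle, "swipe-left");      // step 2: pass handle + gesture to third process
        Canvas shown = flinger.resolve(handle);      // step 3: rendered result appears in the view
        System.out.println(shown.content);
    }
}
```

The first process never touches the rendering engine: it only moves the handle and forwards gestures, which is the decoupling the description claims.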
It will be appreciated that in the above scenario, the 3D rendering capability of the third process can be offered as a system service that provides cross-process rendering. An application acting as a first process only needs to integrate a rendering-signaling service of about 100 KB to gain 3D capability. Compared with each application independently integrating 3D rendering capability, this saves each application more than 200 MB of package size and reduces memory occupation by about 300 MB.
Through the interaction among the first, second, and third processes, the rendering requirements of all first processes are met uniformly by the third process, so the first processes do not need to integrate a rendering engine individually. Combined with the third process's preloading of rendering resources and the transmission of gesture events between the first and third processes, an application acting as a first process can remain lightweight, reducing development and subsequent maintenance costs.
Taking a vehicle as an example, if each application independently integrates 3D rendering capability, then after the vehicle starts, the background launches related applications such as vehicle settings and air-conditioning settings, multiplying the memory overhead. For example, the memory overhead of an APP bundling the Unity engine is 300-500 MB. If the vehicle needs to display multiple (e.g., six) service scenes simultaneously, with one APP per scene, then taking the midpoint of 400 MB, the total overhead of six APPs is 400 MB × 6; pre-starting them in the background permanently occupies about 2.4 GB of memory.
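The memory arithmetic in this example can be checked directly. In the sketch below, the 400 MB figure is the midpoint of the 300-500 MB range the passage assumes, and 512 MB stands in for the "about 0.5 G" resident 3D service described next:

```java
public class MemoryOverhead {
    public static void main(String[] args) {
        int scenes = 6;            // concurrent service scenes on the vehicle HMI
        int perAppMb = 400;        // midpoint of the 300-500 MB per-app Unity footprint
        int sharedServiceMb = 512; // single resident 3D service (~0.5 GB)

        int perProcessTotal = scenes * perAppMb;  // each scene bundles its own engine
        System.out.println(perProcessTotal);      // 2400 MB, i.e. about 2.4 GB resident
        System.out.println(perProcessTotal - sharedServiceMb); // memory reclaimed by sharing
    }
}
```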
With the 3D capability realized by this scheme, the 3D service is an independent background process; taking Unity resources as an example, about 0.5 GB suffices to realize all functions. Because the 3D service is resident in the background, applications such as air-conditioning settings, vehicle settings, and the energy center no longer need to be resident in the background. After these applications start, the 3D content to be rendered can be hot-started once the preloading of rendering resources has completed.
It should be noted that when an independent application acting as a first process accesses the rendering engine, two steps are needed at startup: initializing the rendering engine and loading resources. That is, after the first process is created, the rendering engine in the third process is initialized and the rendering resources associated with the first process are loaded, after which the display method is executed to perform 3D rendering.
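A plain-Java sketch of this start-up ordering, using hypothetical names: in the resident service, engine initialization happens once globally and resource loading happens once per application, so a repeated launch of the same application hot-starts with no extra work.

```java
import java.util.HashSet;
import java.util.Set;

// Resident rendering service: cold-start work happens at most once.
class RenderingService {
    private boolean engineInitialized = false;
    private final Set<String> loadedResources = new HashSet<>();
    int expensiveSteps = 0;                   // counts cold-start work actually performed

    void onAppStart(String app) {
        if (!engineInitialized) {             // step 1: initialize the rendering engine
            engineInitialized = true;
            expensiveSteps++;
        }
        if (loadedResources.add(app)) {       // step 2: load this app's rendering resources
            expensiveSteps++;
        }
        // step 3: execute the display method (render the canvas object) — the hot path
    }
}

public class StartupDemo {
    public static void main(String[] args) {
        RenderingService svc = new RenderingService();
        svc.onAppStart("airConditioning");    // cold start: init + load = 2 steps
        svc.onAppStart("airConditioning");    // hot start: no extra work
        svc.onAppStart("vehicleSettings");    // engine reused, only its resources load
        System.out.println(svc.expensiveSteps); // prints 3
    }
}
```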
Based on the 3D capability realized by this scheme, under a vehicle HMI system, about 1 GB of storage space occupied by the applications in the system can be saved, and the memory overhead when the applications are invoked is reduced by about 1.2 GB.
Fig. 4 is a schematic structural diagram of an electronic device provided in an embodiment of the present application. As shown in fig. 4, the electronic device may include: a processor 410, a communication interface 420, a memory 430, and a communication bus 440, wherein the processor 410, the communication interface 420, and the memory 430 communicate with each other via the communication bus 440. The processor 410 may invoke logic instructions in the memory 430 to perform a display method comprising:
creating a view sub-class by a first process, and requesting a handle of a canvas object corresponding to the view sub-class from a second process by the first process;
the first process transmits the handle to a third process, so that the third process obtains the canvas object;
the third process runs a rendering engine to render the canvas object;
the first process displays the rendered canvas object on the view sub-class.
Further, the logic instructions in the memory 430 may be implemented in the form of software functional units and, when sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
In another aspect, the present application also provides a computer program product. The computer program product includes a computer program, which may be stored on a non-transitory computer-readable storage medium. When the computer program is executed by a processor, it can perform the display method provided by the above method embodiments, the method comprising:
creating a view sub-class by a first process, and requesting a handle of a canvas object corresponding to the view sub-class from a second process by the first process;
the first process transmits the handle to a third process, so that the third process obtains the canvas object;
the third process runs a rendering engine to render the canvas object;
the first process displays the rendered canvas object on the view sub-class.
In yet another aspect, the present application further provides a non-transitory computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the display method provided by the above method embodiments, the method comprising:
creating a view sub-class by a first process, and requesting a handle of a canvas object corresponding to the view sub-class from a second process by the first process;
the first process transmits the handle to a third process, so that the third process obtains the canvas object;
the third process runs a rendering engine to render the canvas object;
the first process displays the rendered canvas object on the view sub-class.
The apparatus embodiments described above are merely illustrative; units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units, i.e., they may be located in one place or distributed over a plurality of network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art can understand and implement it without undue effort.
From the above description of the embodiments, it will be apparent to those skilled in the art that the embodiments may be implemented by means of software plus a necessary general hardware platform, or of course by hardware. Based on this understanding, the foregoing technical solution, in essence, or the part contributing to the prior art, may be embodied in the form of a software product stored in a computer-readable storage medium, such as a ROM/RAM, a magnetic disk, or an optical disk, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to execute the methods described in the respective embodiments or in some parts of the embodiments.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solution of the present application, not to limit it. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the corresponding technical solutions.

Claims (10)

1. A display method, comprising:
creating a view sub-class by a first process, and requesting a handle of a canvas object corresponding to the view sub-class from a second process by the first process;
the first process transmits the handle to a third process, so that the third process obtains the canvas object;
the third process runs a rendering engine to render the canvas object;
the first process displays the rendered canvas object on the view sub-class.
2. The display method of claim 1, wherein the canvas object comprises a plurality of canvas objects, and the third process running a rendering engine to render the canvas object comprises:
allocating a corresponding display identifier to each canvas object, wherein each display identifier points to a corresponding screen object;
and activating the screen object corresponding to the canvas object to render the canvas object.
3. The display method according to claim 2, further comprising:
activating a multi-screen function of the rendering engine when the number of canvas objects is greater than or equal to two, so that the rendering engine can switch among the screen objects based on the canvas objects.
4. The display method of claim 1, wherein the third process running a rendering engine to render the canvas object comprises:
running the rendering engine to render the canvas object according to rendering resources of the canvas object;
wherein the rendering resources are preloaded into memory.
5. The display method of claim 4, wherein the rendering resources are determined based on view content of the currently presented view sub-class.
6. The display method of claim 1, wherein the third process running a rendering engine to render the canvas object comprises:
running the rendering engine to render the canvas object according to the gesture event corresponding to the canvas object.
7. The display method of claim 6, wherein the gesture event is a gesture event detected by the first process.
8. The display method of any of claims 1 to 7, wherein the third process is a wallpaper service.
9. An electronic device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the display method of any of claims 1 to 8 when executing the program.
10. A non-transitory computer-readable storage medium having stored thereon a computer program, wherein the computer program, when executed by a processor, implements the display method of any of claims 1 to 8.
CN202311309150.6A 2023-10-10 2023-10-10 Display method, electronic device and storage medium Pending CN117372594A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311309150.6A CN117372594A (en) 2023-10-10 2023-10-10 Display method, electronic device and storage medium


Publications (1)

Publication Number Publication Date
CN117372594A true CN117372594A (en) 2024-01-09

Family

ID=89395761

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311309150.6A Pending CN117372594A (en) 2023-10-10 2023-10-10 Display method, electronic device and storage medium

Country Status (1)

Country Link
CN (1) CN117372594A (en)

Similar Documents

Publication Publication Date Title
EP3754490B1 (en) User interface rendering method and apparatus, and terminal
CN110162343B (en) Application starting method and device, electronic equipment and storage medium
CN109324903B (en) Display resource scheduling method and device for embedded system
US20160328241A1 (en) Data processing method for multiple operating systems and terminal equipment
CN109542614B (en) Resource allocation method, device, terminal and storage medium
CN108496198B (en) Image processing method and device
CN110990075A (en) Starting method, device and equipment of fast application and storage medium
JP7100154B6 (en) Processor core scheduling method, device, terminal and storage medium
CN110955499B (en) Processor core configuration method, device, terminal and storage medium
WO2011050683A1 (en) Method and device for displaying application image
CN110874217A (en) Interface display method and device for fast application and storage medium
CN110750664B (en) Picture display method and device
CN107122176B (en) Graph drawing method and device
US20200258195A1 (en) Image Processing Method and Device
CN110825467A (en) Rendering method, rendering apparatus, hardware apparatus, and computer-readable storage medium
CN111008057A (en) Page display method and device and storage medium
CN110865863B (en) Interface display method and device for fast application and storage medium
CN109951732B (en) Method for preventing startup LOGO and application from switching to black screen for Android smart television
CN108228139B (en) Singlechip development system and device based on HTML5 browser frame
US11507633B2 (en) Card data display method and apparatus, and storage medium
CN117372594A (en) Display method, electronic device and storage medium
CN116932144A (en) Control method and system for virtual machine of vehicle machine
CN114092312A (en) Image generation method, image generation device, electronic equipment and storage medium
CN113360230A (en) Application program display method and system and vehicle
CN112307383A (en) Page loading method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination