CN117593433A - Rendering interaction method, electronic device and storage medium

Info

Publication number
CN117593433A
CN117593433A
Authority
CN
China
Prior art keywords
rendering
application
window
engine
application windows
Prior art date
Legal status
Pending
Application number
CN202311633220.3A
Other languages
Chinese (zh)
Inventor
安书鹏
Current Assignee
Weilai Automobile Technology Anhui Co Ltd
Original Assignee
Weilai Automobile Technology Anhui Co Ltd
Priority date
Filing date
Publication date
Application filed by Weilai Automobile Technology Anhui Co Ltd filed Critical Weilai Automobile Technology Anhui Co Ltd
Priority to CN202311633220.3A
Publication of CN117593433A
Legal status: Pending (current)

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 - 3D [Three Dimensional] image rendering
    • G06T15/005 - General purpose rendering architectures
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00 - Indexing scheme for image data processing or generation, in general
    • G06T2200/04 - Indexing scheme for image data processing or generation, in general involving 3D image data
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 - Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application relates to the technical field of rendering and particularly provides a rendering interaction method, an electronic device and a storage medium, aiming at solving the technical problem that existing rendering interaction methods have low interaction efficiency. To this end, the rendering interaction method of the present application includes: a client obtains rendered images of at least two application windows and receives a first operation of a user; a server side sends the first operation to a rendering engine; and the rendering engine responds to the first operation. In this way, rendering and interaction for multiple application processes are realized on the basis of one rendering engine, which improves the rendering efficiency and the drawing capability of cross-process calls into the rendering process and raises the utilization rate of the rendering engine.

Description

Rendering interaction method, electronic device and storage medium
Technical Field
The application relates to the technical field of rendering, and particularly provides a rendering interaction method, electronic equipment and a storage medium.
Background
With the continued growth of CPU and GPU performance in vehicle intelligent cockpits, modern intelligent cockpit systems have acquired powerful three-dimensional image and animation rendering capabilities. The Unity engine is currently the mainstream rendering engine for three-dimensional human-machine interaction in the cockpit, and how to use it efficiently has become a new subject.
Existing methods use a Unity engine to provide rendering interaction for only one application; their functionality is limited and the user experience is poor.
Accordingly, there is a need in the art for a new rendering interaction scheme to address the above-described problems.
Content of the application
The present application has been made to overcome the above drawbacks, that is, to solve, or at least partially solve, the above technical problem. To this end, the application provides a rendering interaction method, an electronic device and a storage medium.
In a first aspect, the present application provides a rendering interaction method, the method comprising:
a client obtains rendered images of at least two application windows and receives a first operation of a user, wherein the first operation is a touch operation by the user on the window where a rendered image is located;
the server side sends the first operation to a rendering engine;
the first operation is responded to by a rendering engine.
In one embodiment, before the client obtains the rendered image of the at least two application windows, the method further comprises:
creating at least two application windows by the client;
the rendering client sends handles of the at least two application windows to a rendering server;
rendering, by the rendering engine, the at least two application windows based on the handles of the at least two application windows, resulting in rendered images of the at least two application windows.
In one embodiment, the rendering, by the rendering engine, the at least two application windows based on the handles of the at least two application windows, to obtain the rendered images of the at least two application windows, includes:
determining at least two display output modules and at least two rendering scenes based on the handles of the at least two application windows;
binding a rendering scene and a display output module;
and rendering the at least two application windows based on the bound rendering scene and the display output module to obtain rendering images of the at least two application windows.
In one embodiment, the method further comprises: the rendering engine adjusts the size of the display interface based on the window size of the application window.
In one embodiment, the rendering of the at least two application windows based on the bound rendered scene and the display output module includes:
decoupling the rendering content corresponding to the application window based on the display output module to obtain rendering sub-content;
and drawing a rendering primitive based on the rendering sub-content in the rendering scene bound with the display output module.
In one embodiment, the responding, by the rendering engine, to the first operation includes:
acquiring a first coordinate corresponding to the first operation by the rendering engine;
judging a trigger position of the first operation based on the first coordinate;
acquiring a rendering scene corresponding to the trigger position;
determining a second coordinate based on the first coordinate;
and in the rendering scene, calling the rendering primitive in the preset range corresponding to the second coordinate to respond to the first operation.
In one embodiment, in a case where the at least two application windows include a first application window and a second application window, the determining the second coordinate based on the first coordinate includes:
acquiring a first height of the first application window and a second height of the second application window;
determining a height difference between the first application window and the second application window based on the first height and the second height;
the second coordinates are obtained based on the first coordinates and the height difference.
In one embodiment, the method further comprises:
monitoring, by the rendering engine, a working state of the display output module, the working state including busy and idle;
and recycling the display output module under the condition that the display output module is in an idle state.
In a second aspect, an electronic device is provided, comprising at least one processor and at least one memory adapted to store a plurality of program code adapted to be loaded and executed by the processor to perform the rendering interaction method of any of the preceding claims.
In a third aspect, a computer readable storage medium is provided, in which a plurality of program codes are stored, the program codes being adapted to be loaded and executed by a processor to perform the rendering interaction method of any of the preceding claims.
The technical scheme has at least one or more of the following beneficial effects:
the application provides a rendering interaction method, which comprises the following steps: the method comprises the steps that a client side obtains rendering images of at least two application windows, and receives first operation of a user, wherein the first operation is touch operation of the user on the rendering images; the server side sends a first operation to the rendering engine; the first operation is responded to by the rendering engine. Therefore, the rendering and interaction capability of a plurality of application processes are realized on the basis of one rendering engine, the rendering efficiency and the drawing capability of calling the rendering process by the cross-process part are improved, and the utilization rate of the rendering engine is improved.
Drawings
The disclosure of the present application will become more readily understood with reference to the accompanying drawings. As will be readily appreciated by those skilled in the art: these drawings are for illustrative purposes only and are not intended to limit the scope of the present application. Moreover, like numerals in the figures are used to designate like parts, wherein:
FIG. 1 is a flow diagram of the main steps of a rendering interaction method according to one embodiment of the present application;
FIG. 2 is a schematic diagram showing binding of an output module to a scene in one embodiment of the present application;
FIG. 3 is a schematic diagram of a rendering flow based on a C/S architecture in one embodiment of the present application;
FIG. 4 is a schematic diagram of a display interface of a client according to one embodiment of the present application;
FIG. 5 is a schematic diagram of coordinate transformation in one embodiment of the present application;
FIG. 6 is a schematic diagram of a rendering interaction flow in one embodiment of the present application;
FIG. 7 is a schematic diagram of a rendering interaction flow in another embodiment of the present application;
FIG. 8 is a schematic diagram of multiplexing display output modules of the Unity engine according to an embodiment of the present application;
fig. 9 is a main structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
Some embodiments of the present application are described below with reference to the accompanying drawings. It should be understood by those skilled in the art that these embodiments are merely for explaining the technical principles of the present application, and are not intended to limit the scope of the present application.
In the description of the present application, a "module" or "processor" may include hardware, software, or a combination of both. A module may comprise hardware circuitry, various suitable sensors, communication ports, and memory, or software components such as program code, or a combination of software and hardware. The processor may be a central processing unit, a microprocessor, an image processor, a digital signal processor, or any other suitable processor. The processor has data and/or signal processing functions, and may be implemented in software, hardware, or a combination of both. Non-transitory computer-readable storage media include any suitable medium that can store program code, such as magnetic disks, hard disks, optical disks, flash memory, read-only memory, and random access memory. The term "A and/or B" denotes all possible combinations of A and B, such as A alone, B alone, or A and B. The term "at least one A or B" or "at least one of A and B" has a meaning similar to "A and/or B" and may include A alone, B alone, or A and B. The singular forms "a", "an" and "the" include plural referents.
At present, a traditional method can realize rendering interaction of only one application with one Unity engine; its functionality is limited and the user experience is poor.
To this end, the application provides a rendering interaction method, an electronic device and a storage medium. The method includes the following steps: a client obtains rendered images of at least two application windows and receives a first operation of a user, the first operation being a touch operation by the user on the window where a rendered image is located; a server side sends the first operation to a rendering engine; and the rendering engine responds to the first operation. In this way, rendering and interaction for multiple application processes are realized on the basis of one rendering engine, which improves the rendering efficiency and the drawing capability of cross-process calls into the rendering process and raises the utilization rate of the rendering engine.
Referring to fig. 1, fig. 1 is a schematic flow chart of main steps of a rendering interaction method according to an embodiment of the present application.
As shown in fig. 1, the rendering interaction method in the embodiment of the present application mainly includes the following steps S100 to S300.
Step S100: the method comprises the steps of obtaining rendering images of at least two application windows by a client, and receiving a first operation of a user, wherein the first operation is a touch operation of the user on a window where the rendering images are located.
Step S200: and the server sends the first operation to a rendering engine.
Step S300: the first operation is responded to by a rendering engine.
In one embodiment, a Unity engine may be used as an example of the rendering engine, but is not limited thereto.
The Unity engine is a real-time 3D interactive content authoring and operation platform. Creators in fields including game development, art, architecture, automotive design, and film and television bring their ideas to life with Unity. The Unity engine provides a complete set of software solutions for authoring, operating and rendering any real-time interactive 2D and 3D content, and the supported platforms include mobile phones, tablet computers, PCs, game consoles, and augmented and virtual reality devices.
In order to make the objects, technical solutions and advantages of the present invention clearer, the multi-window rendering method of the present application is described in detail below with reference to the drawings and embodiments, using a Unity engine as an example of the rendering engine. It should be understood by those skilled in the art that the specific embodiments described herein merely explain the present invention and are not intended to limit the multi-window rendering method described herein to using a Unity engine as its rendering engine.
Based on steps S100-S300, the client obtains rendered images of at least two application windows and receives a first operation of a user, the first operation being a touch operation by the user on the rendered images; the server side sends the first operation to the rendering engine; and the rendering engine responds to the first operation. In this way, rendering and interaction for multiple application processes are realized on the basis of one rendering engine, which improves the rendering efficiency and the drawing capability of cross-process calls into the rendering process and raises the utilization rate of the rendering engine.
The following further describes the steps S100 to S300.
In step S100, it is first described how the rendered images corresponding to at least two application windows are rendered and obtained in the Unity engine.
In a specific embodiment, before the client obtains the rendered images of the at least two application windows, the method further includes: creating at least two application windows by the client; the rendering client sends handles of the at least two application windows to a rendering server; rendering, by the rendering engine, the at least two application windows based on the handles of the at least two application windows, resulting in rendered images of the at least two application windows.
The application window is used for displaying the application's 3D content; it is created independently by the application program but is ultimately drawn by the Unity rendering service. The application window is carried by an Android Surface object, which may be obtained through SurfaceView or TextureView.
An application window can be simply understood as a segment of drawing buffer in memory. The handle (Surface) of an application window is an identifier of that window, i.e., its virtual address, through which the native buffer and the contents therein can be accessed.
The application program may invoke a third application program interface to create an application window in response to at least one operation request of the user. For example, the operation request may be that the window of a first application program is displayed full screen while the window of a second application program is displayed on the right side of the display window. The third application program interface may be a standard Android application program interface (API).
In one embodiment, the code to create the application window is as follows:
SurfaceView sv = new SurfaceView(this);
In this embodiment, the rendering of at least two application windows is described in detail by taking three application windows as an example. The multi-window rendering method described in the present invention is not limited to three application windows; four, five, six or more are possible, although one Unity engine can render at most eight windows.
In a specific embodiment, the rendering, by the rendering engine, of the at least two application windows based on the handles of the at least two application windows to obtain the rendered images of the at least two application windows includes: determining at least two display output modules and at least two rendering scenes based on the handles of the at least two application windows; binding each rendering scene to a display output module; and rendering the at least two application windows based on the bound rendering scenes and display output modules to obtain the rendered images of the at least two application windows.
Specifically, a first application program interface is called, an application window is converted into a display output module based on a handle of the application window, and interface information of the application window is obtained based on the handle of each application window; and calling a second application program interface, and creating a rendering scene based on the interface information.
The first application program interface refers to the android application program interface of UnityPlayer.
The second application program interface refers to the standard API of the Unity engine.
The Display output module (Unity Display) can be regarded as a handle of the application window in the Unity engine.
In one embodiment, the Android application program interface (Android API) of UnityPlayer is used to convert the handle (Surface) of each application window into a display output module (Unity Display); the specific code is as follows:
UnityPlayer.displayChanged(1, surface);
The Unity engine further obtains the interface information of each application window through that window's handle, where the interface information refers to the specific content the application window needs to draw.
The Unity engine comprises a scene manager that manages the lifecycle of rendered scenes for all applications and the binding of rendered scenes to the display output module.
Specifically, the scene manager calls a standard API of the Unity engine to create a scene, and configures the created scene parameters by using interface information of the application window to obtain a rendering scene.
In one embodiment, the code that invokes the standard API creation scenario of the Unity engine is as follows:
SceneManager.LoadSceneAsync("targetscene", LoadSceneMode.Additive);
Based on the above method, one rendering scene can be created for each application window.
Further, the scene manager calls the standard API of the Unity engine to bind a rendering scene camera (Camera) to a display output module, thereby binding a rendering scene to a display output module. In addition, after an application window is destroyed, the scene manager also destroys the corresponding display output module.
In one embodiment, the application program interface that binds the Display with the Surface belongs to UnityPlayer; through this API, the Unity-internal Display can be bound with the Android Surface.
In another embodiment, the code that binds the rendered scene and the display output module is as follows:
Display.displays[i].Activate();
camera.SetTargetBuffers(Display.displays[i].colorBuffer, Display.displays[i].depthBuffer);
In this way, each rendering scene can be bound to a display output module.
As shown in fig. 2, each application window is bound to one display output module and one rendering scene.
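The binding flow described above can be sketched in C# as follows. This is a minimal sketch: the helper class, the scene/camera lookup and the index mapping are illustrative assumptions, while Display.displays, Display.Activate, SceneManager.GetSceneByName, Scene.GetRootGameObjects and Camera.SetTargetBuffers are standard Unity APIs:

using UnityEngine;
using UnityEngine.SceneManagement;

// Illustrative sketch: activate display output module i and route one scene
// camera's output into its color/depth buffers, so the rendered image lands
// in the application window bound to that display.
public static class SceneBindingHelper
{
    public static void BindSceneToDisplay(string sceneName, int displayIndex)
    {
        if (displayIndex >= Display.displays.Length) return;

        Display target = Display.displays[displayIndex];
        target.Activate(); // make the display output module available for rendering

        Scene scene = SceneManager.GetSceneByName(sceneName);
        foreach (GameObject root in scene.GetRootGameObjects())
        {
            Camera cam = root.GetComponentInChildren<Camera>();
            if (cam != null)
            {
                // Equivalent to the SetTargetBuffers call shown above.
                cam.SetTargetBuffers(target.colorBuffer, target.depthBuffer);
                break;
            }
        }
    }
}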
In one embodiment, the method further comprises: the rendering engine adjusts the size of the display interface based on the window size of the application window.
In a specific embodiment, the rendering of the at least two application windows based on the bound rendered scene and the display output module includes: decoupling the rendering content corresponding to the application window based on the display output module to obtain rendering sub-content; and drawing a rendering primitive based on the rendering sub-content in the rendering scene bound with the display output module.
Rendering primitives refer to rendering objects in a current rendering scene.
Specifically, after each rendering scene is bound to a display output module, the interface information of each application window is rendered in its rendering scene, where the interface information refers to the content the application window specifically needs to draw. The interface information of each application window is first decoupled into several pieces of sub-content (for example, a game interface may be decoupled into characters, sky, obstacles and the like), and rendering primitives corresponding to each piece of sub-content are then drawn in the rendering scene.
Different display output modules are decoupled and separated using a mask layer (Layer Mask), so that the rendering scene corresponding to each display output module renders only the primitives belonging to that scene.
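A minimal sketch of this layer-based grouping, assuming each window's scene has been assigned its own layer (the helper and the layer-name convention are illustrative, not from the patent):

using UnityEngine;

// Illustrative sketch: restrict a scene camera to one window's layer so the
// rendering scene bound to a display output module draws only its own primitives.
public static class LayerGrouping
{
    public static void AssignWindowLayer(Camera sceneCamera, string windowLayerName)
    {
        int layer = LayerMask.NameToLayer(windowLayerName);
        if (layer < 0) return; // layer not defined in the project settings

        // The culling mask is a bit mask: render only objects on this layer.
        sceneCamera.cullingMask = 1 << layer;
    }
}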
Illustratively, the geometric information required for the rendering operation is first prepared for use by the subsequent rendering stages; the objects are then rendered with their positions, shapes and so on; finally, a rasterization process determines, for example, which pixels of each rendering primitive should be drawn on the screen, and their colors are merged and blended. This completes the rendering of one application window; for multiple application windows, the process is simply repeated to obtain the rendered image of each window. Through the above steps, rendering of a plurality of application windows can be realized.
The Unity Canvas Scaler component normally adapts the UI width and height by a factor derived from the real resolution of the screen when handling native Android events (the first-operation events acquired by the client). When a Surface is used to simulate a real screen, the display adaptation of the UGUI must instead follow the size of the current Surface's real rendering window. In general, the application window handle passed to the Unity engine carries the window size, and the Canvas Scaler component adjusts Unity's display interface size directly according to that window size. This realizes window self-adaptation of the application window, so that the finally drawn rendered images can be displayed on the client as the user requires, improving user satisfaction.
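A sketch of this window self-adaptation, assuming the window size carried by the handle is delivered to a component like the following (the component and its entry point are illustrative; CanvasScaler and its properties are standard UGUI):

using UnityEngine;
using UnityEngine.UI;

// Illustrative sketch: drive the CanvasScaler from the real rendering window
// size of the current Surface instead of the physical screen resolution.
public class SurfaceCanvasAdapter : MonoBehaviour
{
    public CanvasScaler scaler;

    public void OnWindowSizeChanged(int windowWidth, int windowHeight)
    {
        scaler.uiScaleMode = CanvasScaler.ScaleMode.ScaleWithScreenSize;
        // Scale the UGUI canvas against the window, not the device screen.
        scaler.referenceResolution = new Vector2(windowWidth, windowHeight);
    }
}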
As shown in fig. 3, each application 11 may create one application window, three applications may create three application windows, and the Unity engine may implement rendering of multiple windows simultaneously.
Rendering client 111 is a module in application 11 and may also be considered a thread. The rendering Client 111 and the rendering Server 12 are two parts in a C/S (Client/Server) architecture, and the rendering Client and the rendering Server can be regarded as two different threads.
First, a communication connection is established between the rendering client of each application program and the rendering server; this subsequently supports transferring application window handles across applications between the rendering client and the rendering server.
AIDL (Android Interface Definition Language) is a description language for defining client/server communication interfaces.
In one embodiment, the handle of each application window located at the rendering client is sent to the rendering server through the AIDL communication mechanism. The image is then rendered in the Unity engine, and the rendered image is finally delivered through the server side to the client for display.
Fig. 4 is a schematic diagram showing two rendered images drawn on the display interface of a client: the first rendered image is displayed in full screen, and the second rendered image is displayed on the right side of the screen.
When a user clicks a certain position of a screen presenting a plurality of rendering images, a first operation of the user is acquired by the client, wherein the first operation refers to a touch operation of the user on the screen presenting the plurality of rendering images.
Note that the first operation may be a gesture operation, a mouse movement, or the like in addition to a touch operation, which is not particularly limited.
The above is a further explanation of step S100, and the following further explanation of step S200 is continued.
For step S200, after the client obtains the first operation, it sends the first operation to the server side through the client-server (C/S) architecture, and the server side then sends the first operation to the rendering engine.
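The patent does not specify the transport between the server side and the engine; one common route is an Android-to-Unity message such as UnityPlayer.UnitySendMessage("TouchRouter", "OnFirstOperation", "x,y"), received by a MonoBehaviour like the sketch below (the object name, method name and payload format are assumptions):

using UnityEngine;

// Illustrative engine-side receiver for the forwarded first operation.
public class TouchRouter : MonoBehaviour
{
    public void OnFirstOperation(string payload)
    {
        string[] parts = payload.Split(',');
        float x = float.Parse(parts[0]);
        float y = float.Parse(parts[1]);
        // Hand the client coordinates to the response logic of step S300.
        Debug.Log($"First operation received at ({x}, {y})");
    }
}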
The above is a further explanation of step S200, and the following further explanation of step S300 is continued.
In one embodiment, the responding, by the rendering engine, to the first operation includes: acquiring a first coordinate corresponding to the first operation by the rendering engine; judging a trigger position of the first operation based on the first coordinate; acquiring a rendering scene corresponding to the trigger position; determining a second coordinate based on the first coordinate; and in the rendering scene, calling the rendering primitive in the preset range corresponding to the second coordinate to respond to the first operation.
The preset range may be a preset value, and may specifically be determined according to an actual application scenario, which is not limited.
Specifically, after the first operation is obtained, a first coordinate (a coordinate on the client's display interface) and the specific trigger position of the first operation are obtained, which identifies the application window on the screen that the user triggered. The rendering scene corresponding to that window is then obtained, the first coordinate is converted into a second coordinate in the coordinate system of the rendering engine, and, in that rendering scene, the rendering primitive within the preset range corresponding to the second coordinate is called to respond to the first operation.
For example, taking fig. 4 as an example, when a user clicks a position of the rendered image where a map is located, the client obtains the first operation and sends it to the server side, and the rendering engine receives the first operation forwarded by the server. The rendering engine determines the specific triggered position from the first coordinate of the first operation; if the trigger occurs at a position of the map application window, the rendering scene corresponding to the map window is obtained, the first coordinate is converted into a second coordinate in the rendering engine's coordinate system, and, in the rendering scene corresponding to the map window, the rendering primitive within the preset range corresponding to the second coordinate is called to respond to the click on the map window.
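A minimal sketch of this response step, assuming the second coordinate and the scene camera are already known; expressing the "preset range" as the maximum hit distance of a ray cast through the coordinate is an assumption:

using UnityEngine;

// Illustrative sketch: in the rendering scene picked by the trigger position,
// cast a ray through the second coordinate and notify the hit primitive.
public static class OperationResponder
{
    public static void Respond(Camera sceneCamera, Vector2 secondCoordinate)
    {
        Ray ray = sceneCamera.ScreenPointToRay(secondCoordinate);
        if (Physics.Raycast(ray, out RaycastHit hit, 100f)) // 100f: assumed preset range
        {
            // Let the rendering primitive respond to the first operation.
            hit.collider.SendMessage("OnFirstOperation",
                SendMessageOptions.DontRequireReceiver);
        }
    }
}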
In a specific embodiment, in a case that the at least two application windows include a first application window and a second application window, the determining the second coordinate based on the first coordinate includes: acquiring a first height of the first application window and a second height of the second application window; determining a height difference between the first application window and the second application window based on the first height and the second height; the second coordinates are obtained based on the first coordinates and the height difference.
As shown in fig. 5, the coordinate origin of the client display interface (e.g., the Android system) is at the upper left of the screen, while the coordinate origin of the Unity engine's coordinate system is at the lower left. The first window, SurfaceView1 (the main SurfaceView rendered by default, displayed full screen), and a SurfaceView2 transferred from another process are therefore not in the same coordinate system, so coordinate conversion is required. Touch coordinates delivered under window 2 (SurfaceView2) must be converted before the Unity engine can recognize them.
The principle of the coordinate conversion is that the abscissa of the first coordinate of the first operation is unchanged, while the ordinate is corrected by the difference offsetY obtained by subtracting the height of window 2 from the height of window 1. Because the X-axis coordinates of Android and the Unity engine agree while their Y axes run in opposite directions, the ordinate must be corrected when Android passes the event coordinates to the Unity engine, according to the formula y' = y + (H_SurfaceView1 - H_SurfaceView2). This converts the client window coordinates into the rendering engine's window coordinates and helps identify the specific window of the operation event, improving the accuracy of the response to the first operation event.
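The correction can be written as a small helper (a sketch; the window heights would come from the stored window handles):

using UnityEngine;

// Illustrative sketch of the ordinate correction y' = y + (H_SurfaceView1 - H_SurfaceView2):
// the X axes of Android and Unity agree, so only Y needs adjusting.
public static class CoordinateConverter
{
    public static Vector2 ClientToEngine(Vector2 first, float heightWindow1, float heightWindow2)
    {
        return new Vector2(first.x, first.y + (heightWindow1 - heightWindow2));
    }
}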
In addition, the client in the application can perform layered display on a plurality of interfaces rendered by the Unity engine, so that the requirement of respectively displaying a plurality of windows is met.
Efficient response of the Unity engine to user operations is realized by extending several components of the Unity engine. As shown in fig. 6, the Canvas Scaler component is extended to adapt the width and height to the corresponding application window size and to enable display adaptation for different devices.
The component extensions include an extension of the mask layer (Layer Mask), so that the rendering scene corresponding to a display output module renders only the primitives attributed to that scene, thereby achieving display grouping.
In the Unity engine, operation events are detected through the PhysicsRaycaster, GraphicRaycaster and PanelRaycaster components, so that operation events are delivered to the corresponding UI elements, achieving event grouping.
The PhysicsRaycaster component is part of Unity's UGUI event system and performs physical ray detection on elements.
The GraphicRaycaster component is built into the Canvas; it monitors all primitives on the canvas and determines whether an event has been triggered.
The PanelRaycaster component is a UI component in the Unity engine for detecting whether a UI element has been clicked with a mouse or touched. When a PanelRaycaster detects a click event, it sends the event to the UI element associated with it so that the corresponding operation can be performed.
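Wiring these detectors per window scene might look like the following sketch (the setup helper is illustrative; PhysicsRaycaster and GraphicRaycaster are the standard components named above, while PanelRaycaster is omitted from the sketch):

using UnityEngine;
using UnityEngine.EventSystems;
using UnityEngine.UI;

// Illustrative sketch: give each window's scene its own event detectors so
// operation events are grouped per scene.
public static class EventGrouping
{
    public static void Setup(Camera sceneCamera, Canvas sceneCanvas)
    {
        // 3D primitives: physical ray detection driven by the scene camera.
        if (sceneCamera.GetComponent<PhysicsRaycaster>() == null)
            sceneCamera.gameObject.AddComponent<PhysicsRaycaster>();

        // UGUI primitives: the canvas's own raycaster.
        if (sceneCanvas.GetComponent<GraphicRaycaster>() == null)
            sceneCanvas.gameObject.AddComponent<GraphicRaycaster>();
    }
}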
As shown in fig. 7, one Unity engine can draw multiple windows, for example starting the window drawing of scene 1, starting the window drawing of scene 2, and closing the window drawing of scene 2. Once scene 2 has finished drawing, resource recovery can be performed on that scene's display output module so that it can be reused for the next window.
In one embodiment, the method further comprises: monitoring, by the rendering engine, a working state of the display output module, the working state including busy and idle; and recycling the display output module under the condition that the display output module is in an idle state.
Specifically, a manager of the rendering engine monitors the working state of each display output module (Unity Display); when a display output module is in the idle state, it is reclaimed so that it can be reused when rendering a subsequent process.
As shown in fig. 8, the manager of the rendering engine maintains an internal cyclic-multiplexing mechanism for windows with an upper limit of 8, from which further 3D window requests can be served. Display output modules that have finished their task (D3-D8) are reclaimed and marked as idle, achieving cyclic multiplexing of windows.
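A sketch of this recycling mechanism, assuming simple busy/idle bookkeeping per display output module (the pool class is illustrative; the upper limit of eight matches the figure and the engine's window limit noted earlier):

using UnityEngine;

// Illustrative sketch of cyclic multiplexing with an upper limit of 8 display
// output modules: finished modules are marked idle and handed to the next window.
public class DisplayPool
{
    private const int MaxDisplays = 8;
    private readonly bool[] busy = new bool[MaxDisplays];

    // Returns the index of an idle display output module, or -1 if all are busy.
    public int Acquire()
    {
        for (int i = 0; i < MaxDisplays && i < Display.displays.Length; i++)
        {
            if (!busy[i])
            {
                busy[i] = true;
                Display.displays[i].Activate();
                return i;
            }
        }
        return -1;
    }

    // Called when a window's drawing task finishes: mark the module idle for reuse.
    public void Release(int index)
    {
        if (index >= 0 && index < MaxDisplays) busy[index] = false;
    }
}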
It should be noted that, although the foregoing embodiments describe the steps in a specific sequential order, it should be understood by those skilled in the art that, in order to achieve the effects of the present application, different steps need not be performed in such an order, and may be performed simultaneously (in parallel) or in other orders, and these variations are within the scope of protection of the present application.
Further, the application also provides electronic equipment. In one embodiment of an electronic device according to the present application, as shown in fig. 9, the electronic device includes at least one processor 91 and at least one memory 92, the memory 92 may be configured to store a program for executing the rendering interaction method of the above-described method embodiment, and the processor 91 may be configured to execute the program in the memory, including but not limited to the program for executing the rendering interaction method of the above-described method embodiment. For convenience of explanation, only those portions relevant to the embodiments of the present application are shown, and specific technical details are not disclosed, refer to the method portions of the embodiments of the present application.
The electronic device in the embodiments of the present application may be a control apparatus composed of various devices. In some possible implementations, the electronic device may include multiple memories and multiple processors, and the program for executing the rendering interaction method of the above method embodiment may be divided into multiple subprograms, each of which may be loaded and executed by a processor to perform different steps of the rendering interaction method. Specifically, the subprograms may be stored in different memories, and each processor may be configured to execute the programs in one or more memories, so that the processors jointly implement the rendering interaction method of the above method embodiment, with each processor performing different steps of the method.
The plurality of processors may be processors disposed on the same device, for example, the electronic device may be a high-performance device composed of a plurality of processors, and the plurality of processors may be processors configured on the high-performance device. In addition, the plurality of processors may be processors disposed on different devices, for example, the electronic device may be a server cluster, and the plurality of processors may be processors on different servers in the server cluster.
Further, the present application also provides a computer-readable storage medium. In one computer-readable storage medium embodiment according to the present application, the computer-readable storage medium may be configured to store a program that performs the rendering interaction method of the above-described method embodiment, the program being loadable and executable by a processor to implement the above-described rendering interaction method. For convenience of explanation, only those portions relevant to the embodiments of the present application are shown, and specific technical details are not disclosed, refer to the method portions of the embodiments of the present application. The computer readable storage medium may be a memory device formed by including various electronic devices, and optionally, in embodiments of the present application, the computer readable storage medium is a non-transitory computer readable storage medium.
Thus far, the technical solution of the present application has been described in connection with the preferred embodiments shown in the drawings, but it is easily understood by those skilled in the art that the scope of protection of the present application is not limited to these specific embodiments. Equivalent modifications and substitutions for related technical features may be made by those skilled in the art without departing from the principles of the present application, and such modifications and substitutions will be within the scope of the present application.

Claims (10)

1. A method of rendering interactions, the method comprising:
the method comprises the steps that a client side obtains rendering images of at least two application windows, and receives first operation of a user, wherein the first operation is touch operation of the user on a window where the rendering images are located;
the server side sends the first operation to a rendering engine;
the first operation is responded to by a rendering engine.
2. The method of claim 1, wherein prior to the client capturing the rendered image of the at least two application windows, the method further comprises:
creating at least two application windows by the client;
the rendering client sends handles of the at least two application windows to a rendering server;
rendering, by the rendering engine, the at least two application windows based on the handles of the at least two application windows, resulting in rendered images of the at least two application windows.
3. The method of claim 2, wherein rendering, by the rendering engine, the at least two application windows based on the handles of the at least two application windows, results in a rendered image of the at least two application windows, comprising:
determining at least two display output modules and at least two rendering scenes based on the handles of the at least two application windows;
binding a rendering scene and a display output module;
and rendering the at least two application windows based on the bound rendering scene and the display output module to obtain rendering images of the at least two application windows.
4. The rendering interaction method of claim 2, wherein the method further comprises: the rendering engine adjusts the size of the display interface based on the window size of the application window.
5. A rendering interaction method according to claim 3, wherein said rendering of said at least two application windows based on said bound rendered scene and said display output module comprises:
decoupling the rendering content corresponding to the application window based on the display output module to obtain rendering sub-content;
and drawing a rendering primitive based on the rendering sub-content in the rendering scene bound with the display output module.
6. The render interaction method of claim 1, wherein the responding, by a render engine, to the first operation comprises:
acquiring a first coordinate corresponding to the first operation by the rendering engine;
judging a trigger position of the first operation based on the first coordinate;
acquiring a rendering scene corresponding to the trigger position;
determining a second coordinate based on the first coordinate;
and in the rendering scene, calling the rendering primitive in the preset range corresponding to the second coordinate to respond to the first operation.
7. The method of rendering interaction according to claim 6, wherein in case the at least two application windows comprise a first application window and a second application window, the determining the second coordinates based on the first coordinates comprises:
acquiring a first height of the first application window and a second height of the second application window;
determining a height difference between the first application window and the second application window based on the first height and the second height;
the second coordinates are obtained based on the first coordinates and the height difference.
8. A rendering interaction method as claimed in claim 3, further comprising:
monitoring, by the rendering engine, a working state of the display output module, the working state including busy and idle;
and recycling the display output module under the condition that the display output module is in an idle state.
9. An electronic device comprising at least one processor and at least one memory adapted to store a plurality of program code, characterized in that the program code is adapted to be loaded and executed by the processor to perform the rendering interaction method of any of claims 1 to 8.
10. A computer readable storage medium, in which a plurality of program codes are stored, characterized in that the program codes are adapted to be loaded and executed by a processor to perform the rendering interaction method of any one of claims 1 to 8.
CN202311633220.3A 2023-11-30 2023-11-30 Rendering interaction method, electronic device and storage medium Pending CN117593433A (en)

Priority Applications (1)

Application Number: CN202311633220.3A | Priority Date: 2023-11-30 | Filing Date: 2023-11-30 | Title: Rendering interaction method, electronic device and storage medium

Applications Claiming Priority (1)

Application Number: CN202311633220.3A | Priority Date: 2023-11-30 | Filing Date: 2023-11-30 | Title: Rendering interaction method, electronic device and storage medium

Publications (1)

Publication Number: CN117593433A | Publication Date: 2024-02-23

Family

ID=89921699

Family Applications (1)

Application Number: CN202311633220.3A | Title: Rendering interaction method, electronic device and storage medium | Priority Date: 2023-11-30 | Filing Date: 2023-11-30 | Status: Pending

Country Status (1)

CN: CN117593433A


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination