CN116483301A - Multi-screen display method, device, equipment and storage medium - Google Patents

Multi-screen display method, device, equipment and storage medium

Info

Publication number
CN116483301A
CN116483301A (application CN202310462927.6A)
Authority
CN
China
Prior art keywords
display
combined
displays
target
manager
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310462927.6A
Other languages
Chinese (zh)
Inventor
赵崇瑜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Lichi Semiconductor Technology Co ltd
Original Assignee
Shenzhen Lichi Semiconductor Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Lichi Semiconductor Technology Co ltd filed Critical Shenzhen Lichi Semiconductor Technology Co ltd
Priority to CN202310462927.6A priority Critical patent/CN116483301A/en
Publication of CN116483301A publication Critical patent/CN116483301A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/14 - Digital output to display device; Cooperation and interconnection of the display device with other functional units
    • G06F 3/1423 - Digital output to display device; Cooperation and interconnection of the display device with other functional units controlling a plurality of local displays, e.g. CRT and flat panel display
    • G06F 3/1431 - Digital output to display device; Cooperation and interconnection of the display device with other functional units controlling a plurality of local displays, e.g. CRT and flat panel display using a single graphics controller
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0487 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0488 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser, using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 - Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The disclosure provides a multi-screen display method, device, equipment and storage medium. A target virtual screen is created and displayed across at least two displays to be combined. A display manager determines the display parameters of each display to be combined, comprising a target layer stack and a target view area, and sends them to a layer manager. Based on the display parameters, the layer manager determines the interface image of each display to be combined and sends each interface image to the frame buffer of the corresponding display to be combined; each frame buffer then delivers its interface image to that display's display controller for output, thereby achieving integrated multi-screen display.

Description

Multi-screen display method, device, equipment and storage medium
Technical Field
The disclosure relates to the technical field of automobiles, and in particular relates to a multi-screen display method, a device, equipment and a storage medium.
Background
With rising living standards, automobiles are expected not merely to provide basic transportation; users increasingly demand fashion, entertainment and personalization.
To improve the visual experience, vehicle-mounted displays keep growing in size, but cabin space prevents a single display from being extended indefinitely. The prior art contemplates treating two screens as one to display the same multimedia data, yet it does not actually solve, at the software level, the problem of jointly displaying the same multimedia data across two screens.
Disclosure of Invention
The present disclosure provides a multi-screen display method, apparatus, device, and storage medium, so as to at least solve the above technical problems in the prior art.
According to a first aspect of the present disclosure, there is provided a multi-screen display method, the method comprising:
creating a target virtual screen, wherein the target virtual screen is displayed on at least two displays to be combined;
determining display parameters of each display to be combined through a display manager, and sending the display parameters to a layer manager, wherein the display parameters comprise a target layer stack and a target view area;
determining interface images of all to-be-combined displays through the layer manager based on the display parameters;
and respectively sending the interface images, through the layer manager, to the frame buffers of the corresponding displays to be combined, each frame buffer delivering its interface image to the display controller of the corresponding display to be combined so as to display the interface image.
In an embodiment, the creating the target virtual screen includes:
determining a target resolution of the target virtual screen according to the resolutions of the at least two displays to be combined;
setting a joint instruction, wherein the joint instruction is used for establishing connection between the identity information of the target virtual screen and the identity information of each display to be joined;
the joint instruction is sent to a display manager for recording;
and displaying the multimedia data to be output on the target virtual screen through a display invocation instruction.
In an embodiment, after the displaying the multimedia data to be output on the target virtual screen, the method further includes:
setting a joint termination instruction, and sending the joint termination instruction to a display manager for recording, wherein the joint termination instruction is used for canceling connection established by the identity information of the target virtual screen and the identity information of each display to be joined;
and restoring the original display state of each display to be combined through the display manager.
In an embodiment, the determining, by the display manager, the display parameters of each display to be combined includes:
and calculating the target layer stack and the target view area of each display to be combined through the display manager, wherein the target view area is the partial region clipped while the multimedia data to be output is displayed in the target virtual screen.
In an embodiment, the determining, by the layer manager, the interface image of each display to be combined based on the display parameter includes:
setting the initial layer stack of each display to be combined as the target layer stack through the layer manager, and setting the initial view area of each display to be combined as a corresponding target view area;
synthesizing the target layer stack into an interface image through the layer manager;
and determining interface images displayed by the displays to be combined according to the target view areas of the displays to be combined.
In an embodiment, before the determining, by the display manager, the display parameters of each display to be combined, the method further includes:
and setting the displays to be combined into a dormant state.
In an embodiment, after the creating the target virtual screen, the method further includes:
and setting virtual touch coordinates of the target virtual screen to receive a touch instruction of a user, wherein the virtual touch coordinates are determined according to the initial coordinates of the to-be-combined displays and the positions of the to-be-combined displays corresponding to the target virtual screen.
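The virtual-touch-coordinate determination described above amounts to an offset translation: a touch reported in one display's local coordinates is shifted by that display's origin within the target virtual screen. A minimal plain-Java sketch follows; the class and method names are illustrative assumptions, not part of the disclosure.

```java
// Hypothetical sketch of the virtual touch coordinate mapping: each to-be-combined
// display has an origin (top-left corner) inside the target virtual screen, and a
// local touch is translated by that origin. Names are illustrative assumptions.
public class VirtualTouchMapper {
    private final int originX;
    private final int originY;

    public VirtualTouchMapper(int originX, int originY) {
        this.originX = originX;
        this.originY = originY;
    }

    // Translate a touch in the display's local coordinates into
    // virtual-screen coordinates.
    public int[] toVirtual(int localX, int localY) {
        return new int[] { originX + localX, originY + localY };
    }
}
```

For example, in a horizontal join of two 1280 × 720 panels, the right panel's origin in the virtual screen would be (1280, 0), so a local touch at (100, 50) maps to virtual coordinate (1380, 50).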
According to a second aspect of the present disclosure, there is provided a multi-screen display device, the device comprising:
the system comprises a creation module, a display module and a display module, wherein the creation module is used for creating a target virtual screen, and the target virtual screen is displayed on at least two to-be-combined displays;
the parameter determining module is used for determining display parameters of each display to be combined through the display manager and sending the display parameters to the layer manager, wherein the display parameters comprise a layer stack and a view area;
the image determining module is used for determining interface images of the displays to be combined through the layer manager based on the display parameters;
and the display module is used for respectively sending the interface images, through the layer manager, to the frame buffers of the corresponding displays to be combined, and invoking, through the frame buffers, the display controllers of the corresponding displays to be combined so as to display the interface images.
In an embodiment, the creating module is specifically configured to:
determining a target resolution of the target virtual screen according to the resolutions of the at least two displays to be combined;
setting a joint instruction, wherein the joint instruction is used for establishing connection between the identity information of the target virtual screen and the identity information of each display to be joined;
The joint instruction is sent to a display manager for recording;
and displaying the multimedia data to be output on the target virtual screen through a display invocation instruction.
In an embodiment, the device further comprises:
the termination module is used for setting a joint termination instruction after the multimedia data to be output are displayed on the target virtual screen, and sending the joint termination instruction to a display manager for recording, wherein the joint termination instruction is used for canceling connection established by the identity information of the target virtual screen and the identity information of each display to be joined; and restoring the original display state of each display to be combined through the display manager.
In one embodiment, the parameter determining module is specifically configured to:
and calculating the target layer stack and the target view area of each display to be combined through the display manager, wherein the target view area is the partial region clipped while the multimedia data to be output is displayed in the target virtual screen.
In one embodiment, the image determining module is specifically configured to:
setting the initial layer stack of each display to be combined as the target layer stack through the layer manager, and setting the initial view area of each display to be combined as a corresponding target view area;
synthesizing the target layer stack into an interface image through the layer manager;
and determining interface images displayed by the displays to be combined according to the target view areas of the displays to be combined.
In an embodiment, the device further comprises: a dormancy setting module, configured to set each display to be combined into a dormant state before the display parameters of each display to be combined are determined through the display manager.
In an embodiment, the device further comprises: a touch module, configured to set, after the creation of the target virtual screen, virtual touch coordinates of the target virtual screen to receive a user's touch instruction, wherein the virtual touch coordinates are determined according to the initial coordinates of each display to be combined and the position of each display to be combined within the target virtual screen.
According to a third aspect of the present disclosure, there is provided an electronic device comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the methods described in the present disclosure.
According to a fourth aspect of the present disclosure, there is provided a non-transitory computer readable storage medium storing computer instructions for causing the computer to perform the method of the present disclosure.
According to the multi-screen display method, device, equipment and storage medium of the present disclosure, a target virtual screen is created and displayed across at least two displays to be combined; a display manager determines the display parameters of each display to be combined, comprising a target layer stack and a target view area, and sends them to a layer manager; the layer manager determines the interface image of each display to be combined based on the display parameters and sends each interface image to the frame buffer of the corresponding display to be combined; each frame buffer delivers its interface image to the display controller of the corresponding display to be combined for output, thereby achieving integrated multi-screen display.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
The above, as well as additional purposes, features, and advantages of exemplary embodiments of the present disclosure will become readily apparent from the following detailed description when read in conjunction with the accompanying drawings. Several embodiments of the present disclosure are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings, in which:
in the drawings, the same or corresponding reference numerals indicate the same or corresponding parts.
Fig. 1 is a schematic implementation flow diagram of a multi-screen display method according to an embodiment of the disclosure;
FIG. 2 illustrates a schematic diagram of the resolution of an exemplary target virtual screen provided by embodiments of the present disclosure;
FIG. 3 illustrates a schematic diagram of the resolution of yet another exemplary target virtual screen provided by embodiments of the present disclosure;
FIG. 4 illustrates a schematic diagram of an exemplary screen display principle provided by embodiments of the present disclosure;
fig. 5 shows a flowchart of a specific implementation of step S130 according to an embodiment of the disclosure;
FIG. 6 illustrates a schematic implementation flow diagram of an exemplary multi-screen display method provided by an embodiment of the present disclosure;
fig. 7 is a schematic structural diagram of a multi-screen display device according to an embodiment of the present disclosure;
fig. 8 shows a schematic diagram of a composition structure of an electronic device according to an embodiment of the present disclosure.
Detailed Description
To make the objects, features and advantages of the present disclosure more comprehensible, the technical solutions in the embodiments of the present disclosure are described below with reference to the accompanying drawings. The described embodiments are clearly only some, not all, of the embodiments of the present disclosure; all other embodiments obtained by a person skilled in the art based on these embodiments without inventive effort fall within the scope of protection of the present disclosure.
Fig. 1 is a flowchart of a multi-screen display method according to an embodiment of the present disclosure, where the method may be performed by a multi-screen display device according to an embodiment of the present disclosure, and the device may be implemented in software and/or hardware. The method specifically comprises the following steps:
s110, creating a target virtual screen.
The target virtual screen is displayed on at least two displays to be combined and carries the interface the user wants shown; it is denoted a virtual display. A display to be combined refers to the software-level display object corresponding to a physical display watched by the user.
For example, if to-be-combined display A and to-be-combined display B are two independent displays, each may play its own multimedia data. After the target virtual screen is created, if it is connected with to-be-combined display A and to-be-combined display B, the multimedia data shown on the target virtual screen can be divided into two parts, one displayed on display A and the other on display B, so that the two displays jointly present the same multimedia data.
In an embodiment of the present disclosure, creating the target virtual screen includes: determining the target resolution of the target virtual screen according to the resolutions of the at least two displays to be combined; setting a joint instruction, wherein the joint instruction establishes the connection between the identity information of the target virtual screen and the identity information of each display to be combined; sending the joint instruction to the display manager for recording; and displaying the multimedia data to be output on the target virtual screen through a display invocation instruction.
The target resolution is the resolution of the target virtual screen. The display manager is the core service through which the system manages displays, for example the Android display-management service, denoted DisplayManagerService. The display invocation instruction is used to launch an application interface onto the target virtual screen and is denoted the StartActivity interface. The multimedia data to be output is the multimedia data to be shown on the target virtual screen.
Specifically, if the target virtual screen is to be displayed on at least two displays to be combined, its resolution necessarily differs from that of either individual display. Since the target virtual screen replaces the original content of the two displays to be combined, its resolution should be the sum of the two displays' resolutions along the splicing direction, as shown in fig. 2 and fig. 3.
Fig. 2 and fig. 3 are schematic diagrams of the resolution of an exemplary target virtual screen according to an embodiment of the disclosure, each showing a to-be-combined Display 0 and a to-be-combined Display 1. The original resolution of each display is 1280 × 720; the API interfaces launching their application interfaces are Activity1 and Activity2 respectively, and the API interface launching the application interface of the target virtual screen is Activity3. As shown in fig. 2, Display 0 and Display 1 are joined horizontally: if an application initiates a joint-display request, the resolution of the target virtual screen is 2560 × 720, and the coordinate areas finally displayed on the two displays are crop (0, 0 - 1280, 720) and crop (1280, 0 - 2560, 720) respectively. As shown in fig. 3, Display 0 and Display 1 are joined vertically: the resolution of the target virtual screen is 1440 × 1280, and the coordinate areas displayed on the two displays are crop (0, 0 - 1280, 720) and crop (0, 720 - 1280, 1440) respectively.
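The resolution and crop arithmetic behind these figures can be sketched directly. The plain-Java class below is a minimal illustration for two equally sized panels; all names are hypothetical, and rectangles are expressed as {left, top, right, bottom} to match the crop notation above (the vertical-join result is the 1280-wide, 1440-tall screen the crops imply).

```java
// Sketch of the target-resolution and crop-region arithmetic from figs. 2 and 3,
// for two equally sized panels of w x h each. Names are illustrative assumptions.
public class JointResolution {
    // Horizontal join: widths add, height unchanged. Returns {width, height}.
    public static int[] horizontal(int w, int h) {
        return new int[] { 2 * w, h };
    }

    // Vertical join: heights add, width unchanged. Returns {width, height}.
    public static int[] vertical(int w, int h) {
        return new int[] { w, 2 * h };
    }

    // Crop rectangle {left, top, right, bottom} for panel index 0 or 1
    // in a horizontal join.
    public static int[] cropHorizontal(int w, int h, int index) {
        return new int[] { index * w, 0, (index + 1) * w, h };
    }

    // Crop rectangle {left, top, right, bottom} for panel index 0 or 1
    // in a vertical join.
    public static int[] cropVertical(int w, int h, int index) {
        return new int[] { 0, index * h, w, (index + 1) * h };
    }
}
```

With w = 1280, h = 720, the horizontal join yields a 2560 × 720 virtual screen with crops (0, 0, 1280, 720) and (1280, 0, 2560, 720), matching fig. 2.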
Specifically, this embodiment defines a joint instruction and triggers the joint-display behavior by adding an API interface. Here, multiple screens jointly play the same multimedia data, for example two displays to be combined are occupied to play one piece of multimedia data of larger resolution, so a new interface must be added and a trigger mechanism set up to realize joint display. The joint instruction establishes the connection between the identity information of the target virtual screen and that of each display to be combined, and is then sent to the display manager for recording, so that the playback sources of the displays requiring joint display are marked at the bottom layer.
For ease of understanding, this embodiment is described on an Android system with two displays to be combined as an example: by calling a newly added system interface, DisplayManagerService is notified to mark the target virtual screen, the application token, and the IDs of the two displays to be combined. Illustratively, the signature of the added system interface is as follows:
DisplayManager.setConflateDisplay(int sourceDisplayId, IBinder token, int dstDisplayId0, int dstDisplayId1);
Here sourceDisplayId is the identity information of the target virtual screen, i.e. the source display ID; the IBinder token is the application token, used to monitor whether the triggering instruction behaves abnormally during execution; dstDisplayId0 and dstDisplayId1 are the identity information of the two displays to be combined. After the joint instruction is issued, this embodiment notifies DisplayManagerService to mark the source display ID to be merged, the application token, and the display IDs of the two target displays to be combined.
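What DisplayManagerService is asked to record can be modeled in a few lines of plain Java (outside Android, so the IBinder token is omitted). The registry class below is purely an illustrative assumption about the bookkeeping, not the actual service implementation; it also models the joint-termination instruction described later as removal of the recorded mapping.

```java
import java.util.HashMap;
import java.util.Map;

// Plain-Java model of the record that setConflateDisplay asks DisplayManagerService
// to keep: which virtual screen (source) feeds which pair of physical displays.
// ConflateRegistry is an illustrative assumption; the real service would also hold
// the application's IBinder token to monitor abnormal behavior.
public class ConflateRegistry {
    private final Map<Integer, int[]> sourceToTargets = new HashMap<>();

    // Record a joint-display request: sourceDisplayId -> {dstDisplayId0, dstDisplayId1}.
    public void setConflateDisplay(int sourceDisplayId, int dstDisplayId0, int dstDisplayId1) {
        sourceToTargets.put(sourceDisplayId, new int[] { dstDisplayId0, dstDisplayId1 });
    }

    // Joint termination: forget the mapping so each display can be restored.
    public void clearConflateDisplay(int sourceDisplayId) {
        sourceToTargets.remove(sourceDisplayId);
    }

    // Look up the pair of to-be-combined display IDs for a source, or null if none.
    public int[] targetsOf(int sourceDisplayId) {
        return sourceToTargets.get(sourceDisplayId);
    }
}
```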
S120, determining display parameters of each display to be combined through the display manager, and sending the display parameters to the layer manager.
The display parameters comprise a target layer stack and a target view area. The target layer stack comprises the multiple layers of the screen, i.e. the screen's layer data, denoted layerstack; the target view area is the clipping area of each to-be-combined display's data source, denoted viewport rect. The layer manager is the core manager that delivers display parameters to each display to be combined for display, denoted SurfaceFlinger.
Because the layer manager is the service that manages display parameters and dispatches them for display, this embodiment sends the display parameters to the layer manager once the display parameters of each display to be combined have been determined.
In an embodiment of the present disclosure, determining, by a display manager, display parameters of each display to be joined includes: and calculating target layer stacks and target view areas of the to-be-combined displays through the display manager, wherein the target view areas are partial areas intercepted in the process that the to-be-output multimedia data are displayed in the target virtual screen, and belong to the visual areas.
Since one target virtual screen occupies at least two displays to be combined, each display to be combined shows only part of the image of the target virtual screen. This embodiment therefore calculates the target layer stack and target view area of each display to be combined through the display manager, so that each display's display area can be determined. Illustratively, DisplayManagerService in this embodiment calculates layerStack, viewport rect and related information from the display information in the configureDisplayLocked flow and transmits them to SurfaceFlinger.
S130, determining interface images of the to-be-combined displays through the layer manager based on the display parameters.
The target layer stack is used to synthesize the interface image of each display to be combined, and the target view area determines which region of the target virtual screen each display to be combined shows. Therefore, after determining the target layer stack and target view area of each display to be combined, this embodiment connects, through the layer manager, each display's interface image with its corresponding target layer stack and target view area.
And S140, respectively sending the interface images, through the layer manager, to the frame buffers of the corresponding displays to be combined, each frame buffer delivering its interface image to the display controller of the corresponding display to be combined so as to display the interface image.
The frame buffer is memory that temporarily stores interface image data; each display to be combined has its own frame buffer.
In this embodiment, the layer manager sends each interface image to the frame buffer of the corresponding display to be combined, and the frame buffer delivers it to that display's display controller. This is the software-level process of sending interface images to the displays to be combined; at the hardware level, the data is transmitted over the data line to the display controller inside the physical display, which drives the corresponding interface image onto the screen.
Fig. 4 is a schematic diagram of an exemplary screen display principle according to an embodiment of the present disclosure, comprising: to-be-combined Display 0, to-be-combined Display 1, a Virtual Display serving as the target virtual screen, physical screen 0 and physical screen 1. Physical screen 0 and physical screen 1 are driven by display controller 0 and display controller 1 respectively; the data of display controller 0 comes from the frame buffer of Display 0, and the data of display controller 1 comes from the frame buffer of Display 1.
As shown in fig. 4, under normal operation this embodiment shows the content of each original display data structure on physical screen 0 and physical screen 1; once joint display is entered, physical screen 0 and physical screen 1 jointly show the content of the target virtual screen.
In this embodiment, the target virtual screen is created; the display manager determines the display parameters of each display to be combined and sends them to the layer manager; the layer manager determines the interface image of each display to be combined based on the display parameters; and the layer manager sends the interface images to the frame buffers of the corresponding displays to be combined, each frame buffer invoking the display controller of its display so as to display the interface image. This improves on the prior art, in which screens could only be merged into one by hardware means, and truly realizes, at the software level, multiple screens displaying the same content.
In the embodiment of the present disclosure, step S130 of determining the interface images of each display to be combined through the layer manager based on the display parameters further comprises steps S130a-S130c, as shown in fig. 5, specifically as follows:
s130a, setting an initial layer stack of each display to be combined as a target layer stack through a layer manager, and setting an initial view area of each display to be combined as a corresponding target view area.
The initial layer stack and the initial view area are source data of an interface image of the original to-be-combined display.
Because the target virtual screen is projected onto multiple displays to be combined in this embodiment, the source data of each display's interface image also changes to the target virtual screen. The initial layer stack of each display to be combined is stored in the layer manager, so the layer manager must set each display's initial layer stack to the target layer stack and each display's initial view area to the corresponding target view area, thereby changing the display source data of each interface image from the underlying data architecture.
That is, surfaceFlinger sets a source tag for each display to be joined display, where the source tag is derived from an ID value corresponding to the target virtual screen in the display ManagerService. In an exemplary embodiment, the layerstack of each to-be-combined display is set to layerstack of the virtual display, so that the display content of each to-be-combined display is derived from the virtual display. Meanwhile, the embodiment also modifies the viewport area of each display to be combined in the surfeflinger layer.
S130b, synthesizing the target layer stack into an interface image through the layer manager.
Specifically, the layer manager of this embodiment, in addition to sending the display parameters to each display to be combined, also manages the synthesis of the target layer stack. For example, this embodiment may call a layer compositor or a graphics processing unit (GPU) through the layer manager to synthesize the target layer stack, thereby generating complete image data to be sent to each display to be combined for display.
And S130c, determining interface images displayed by the to-be-combined displays according to the target view areas of the to-be-combined displays.
Specifically, since the displays to be combined jointly display the same target virtual screen, the interface images they display in this embodiment are derived from the same target layer stack and differ only in the regions displayed. Thus, this embodiment determines, through the target view area of each display to be combined, which portion of the interface image that display shows.
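Steps S130b-S130c amount to composing one frame and then cropping it per display. A minimal sketch, assuming a frame is modeled as a list of pixel rows (the real path would go through a hardware compositor or GPU, not Python lists):

```python
def crop_for_display(frame, target_viewport):
    """Select the portion of the composed interface image that one
    display to be combined shows, according to its target view area
    (left, top, right, bottom)."""
    left, top, right, bottom = target_viewport
    return [row[left:right] for row in frame[top:bottom]]
```

For example, on a 2560-pixel-wide composed frame, a right-hand display with target view area (1280, 0, 2560, 720) receives only the right half.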
The display manager and the layer manager in this embodiment belong to two different processes; therefore, after the display manager determines the display parameters, it needs to transfer them to the layer manager. Because the layer manager manages the data structure and data source of each display to be combined, its internal data structures are modified synchronously, and the purpose of jointly displaying the same target virtual screen is achieved by changing the data structure and source of each display to be combined.
In an embodiment of the present disclosure, after displaying the multimedia data to be output on the target virtual screen, the method further includes: setting a joint termination instruction, and sending the joint termination instruction to a display manager for recording, wherein the joint termination instruction is used for canceling connection established by the identity information of the target virtual screen and the identity information of each display to be joined; and restoring the original display state of each display to be combined through the display manager.
Restoring the original display state of each display to be combined may mean restoring the data structure of each display to be combined to the initial layer stack and the initial view area.
Specifically, this embodiment further provides a joint termination instruction, implemented through a newly added interface. After the connection established between the identity information of the target virtual screen and the identity information of each display to be combined is canceled, each display to be combined recovers its original state of independent use. For example, this embodiment is likewise based on the Android system and is explained taking two displays to be combined as an example; by calling a newly added system interface, the function of the newly added interface may be:
DisplayManager.cancelConflateDisplay(int sourceDisplayId, IBinder token, int dstDisplayId0, int dstDisplayId1)
By setting the joint termination instruction, this embodiment can dynamically switch the display states of the displays to be combined: in the normal state each display shows different application content; when a joint instruction is received, the displays jointly show the content of one application; and when a joint termination instruction is received, the displays return to the normal state. The displays can thus be used flexibly, enabling more display possibilities and meeting the needs of different users.
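The normal/combined switching described above can be modeled as a small state machine. The class and method names below are invented for illustration and are not the framework's API:

```python
class CombinedDisplayState:
    """Toggle displays between independent content and joint display."""

    def __init__(self, own_content):
        self.own_content = dict(own_content)  # display_id -> its own app
        self.current = dict(own_content)      # what each display shows now

    def on_joint_instruction(self, virtual_content):
        # Every display to be combined now shows the same source
        # (the target virtual screen's content).
        for display_id in self.current:
            self.current[display_id] = virtual_content

    def on_joint_termination(self):
        # Restore each display's original, independent content.
        self.current = dict(self.own_content)
```

This mirrors the lifecycle in the text: independent use, joint display of one application, then restoration of the normal state.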
In an embodiment of the present disclosure, before determining, by the display manager, display parameters of each display to be joined, the method further includes: each display to be combined is set to a dormant state.
Since each display to be combined actually runs its own program, after the joint instruction is issued the original program keeps running even though it is covered by the target virtual screen. Therefore, this embodiment sets each display to be combined to a sleep state so that the original program is stopped and performance consumption is reduced.
Specifically, this embodiment can notify the system server through the display manager and set the sleep state of each display to be combined through the system server, which is a core service process. For example, continuing with the Android system, DisplayManagerService sets the state of each display to be combined from the running state (Activity) to the sleep state (Sleep) through an internal interface of system_server.
According to the embodiment, the to-be-combined displays are set to be in the dormant state, so that the performance consumption of the system can be effectively reduced on the basis that the combined display is not affected.
In an embodiment of the present disclosure, after creating the target virtual screen, the method further includes: setting virtual touch coordinates of the target virtual screen to receive a user's touch instruction, where the virtual touch coordinates are determined according to the initial coordinates of each display to be combined and the position of each display to be combined on the target virtual screen. The touch instruction is used to trigger a touch event.
In actual use, a user often triggers operations on a touch screen, and the corresponding process is completed using the coordinates detected at the touch. Before the joint instruction is initiated, the touch input of each display to be combined is independent. After the joint instruction is initiated, multiple displays to be combined show the same target virtual screen, so the coordinates detected by a user's touch operation no longer equal the coordinates of the originally independent displays. Therefore, this embodiment converts the coordinates of a touch into the corresponding coordinates on the target virtual screen through the position offset of each display to be combined on the joined target virtual screen. Specifically, the coordinates are reset using the initial coordinates and the position offsets of the displays to be combined to obtain the virtual touch coordinates.
For example, the touch events of the displays to be combined may be offset by the TouchInputMapper of InputFlinger. When a touch event occurs, the coordinates of the touch are converted into coordinates on the target virtual screen. InputFlinger is the core server through which the Android system receives and dispatches input events; TouchInputMapper is a class that processes touch events and translates them into events for applications to handle.
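The offset conversion itself is a few lines. The following sketch models what the touch-mapping step does; `to_virtual_coords` is an invented name, not InputFlinger's interface:

```python
def to_virtual_coords(local_x, local_y, display_offset):
    """Translate a touch on an individual display to be combined into
    coordinates on the target virtual screen by adding that display's
    position offset within the joined layout."""
    offset_x, offset_y = display_offset
    return local_x + offset_x, local_y + offset_y
```

For example, on the right-hand display of a landscape joint (offset (1280, 0)), a touch at local coordinates (100, 50) becomes (1380, 50) on the target virtual screen; on the lower display of a portrait joint (offset (0, 720)), the same local touch becomes (100, 770).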
As shown in fig. 2 and fig. 3, the coordinate areas crop(0,0-1280,720) and crop(1280,0-2560,720) in fig. 2 are the virtual touch coordinates set after a landscape joint screen; the coordinate areas crop(0,0-1280,720) and crop(0,720-1280,1440) in fig. 3 are the virtual touch coordinates set after a portrait joint screen.
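The coordinate areas in fig. 2 and fig. 3 follow directly from the display sizes and the joint orientation. A sketch of that computation (an illustrative helper, not part of the described system):

```python
def crop_regions(display_sizes, horizontal=True):
    """Compute each display's crop rectangle (left, top, right, bottom)
    on the target virtual screen, for a landscape (side-by-side) or
    portrait (stacked) joint."""
    regions, offset = [], 0
    for width, height in display_sizes:
        if horizontal:
            regions.append((offset, 0, offset + width, height))
            offset += width
        else:
            regions.append((0, offset, width, offset + height))
            offset += height
    return regions
```

With two 1280x720 displays this reproduces the figure values: landscape gives (0,0,1280,720) and (1280,0,2560,720); portrait gives (0,0,1280,720) and (0,720,1280,1440).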
The embodiment realizes the touch operation of the target virtual screen by setting the virtual touch coordinates of the target virtual screen.
Fig. 6 is a flowchart of an exemplary multi-screen display method according to an embodiment of the present disclosure, including the interfaces and steps called at the application (App) layer, and the steps executed on the underlying data structures of the service-layer display manager service and the input processing server InputFlinger to implement the App-layer effect. It should be noted that:
The call interface function that sets the resolution of the target virtual screen may be: DisplayManager.setConflateDisplaySize(dstDisplayId0, dstDisplayId1);
the call interface function that creates the target virtual screen may be: DisplayManager.createVirtualDisplay;
the call interface function that puts the application on the target virtual screen may be: startActivity;
the call interface function that triggers the join instruction may be: DisplayManager.
According to this embodiment, the display mode of the display screens can be flexibly switched according to instructions, achieving the display effect of the target virtual screen, i.e., a large screen; a corresponding, realizable triggering procedure is further provided for implementing trigger events on the target virtual screen, meeting both the routine and personalized needs of users.
Fig. 7 is a schematic structural diagram of a multi-screen display device according to an embodiment of the present disclosure, where the device includes:
a creating module 710, configured to create a target virtual screen, where the target virtual screen is displayed on at least two to-be-combined displays;
a parameter determining module 720, configured to determine, by using a display manager, display parameters of each display to be combined, and send the display parameters to a layer manager, where the display parameters include a layer stack and a view area;
An image determining module 730, configured to determine, based on the display parameters, interface images of the displays to be combined through the layer manager;
the display module 740 is configured to send the interface images respectively to the buffer frames of the corresponding displays to be combined through the layer manager, and to call, through the buffer frames, the display controllers of the corresponding displays to be combined to display the interface images.
In one embodiment, the creation module 710 is specifically configured for: determining the target resolution of the target virtual screen according to the resolutions of the at least two displays to be combined; setting a joint instruction, where the joint instruction is used to establish a connection between the identity information of the target virtual screen and the identity information of each display to be combined; transmitting the joint instruction to the display manager for recording; and displaying the multimedia data to be output on the target virtual screen by calling a display instruction.
In an embodiment, the method further comprises: the termination module is used for setting a joint termination instruction after the multimedia data to be output are displayed on the target virtual screen, and sending the joint termination instruction to the display manager for recording, wherein the joint termination instruction is used for canceling the connection established by the identity information of the target virtual screen and the identity information of each display to be joined; and restoring the original display state of each display to be combined through the display manager.
In one embodiment, the parameter determining module 720 is specifically configured for: calculating the target layer stacks and target view areas of the displays to be combined through the display manager, where each target view area is a partial area cropped from the multimedia data to be output as displayed on the target virtual screen.
In one embodiment, the image determining module 730 is specifically configured for: setting the initial layer stack of each display to be combined as the target layer stack through the layer manager, and setting the initial view area of each display to be combined as the corresponding target view area; synthesizing the target layer stack into an interface image through the layer manager; and determining the interface images displayed by the displays to be combined according to the target view areas of the displays to be combined.
In an embodiment, the method further comprises: and the dormancy setting module is used for setting each display to be combined into a dormancy state before the display parameters of each display to be combined are determined through the display manager.
In an embodiment, the method further comprises: and the touch module is used for setting virtual touch coordinates of the target virtual screen after the target virtual screen is created so as to receive a touch instruction of a user, wherein the virtual touch coordinates are determined according to initial coordinates of each to-be-combined display and positions of the to-be-combined displays corresponding to the target virtual screen.
According to embodiments of the present disclosure, the present disclosure also provides an electronic device and a readable storage medium.
Fig. 8 illustrates a schematic block diagram of an example electronic device 800 that may be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 8, the apparatus 800 includes a computing unit 801 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 802 or a computer program loaded from a storage unit 808 into a Random Access Memory (RAM) 803. In the RAM 803, various programs and data required for the operation of the device 800 can also be stored. The computing unit 801, the ROM 802, and the RAM 803 are connected to each other by a bus 804. An input/output (I/O) interface 805 is also connected to the bus 804.
Various components in device 800 are connected to I/O interface 805, including: an input unit 806 such as a keyboard, mouse, etc.; an output unit 807 such as various types of displays, speakers, and the like; a storage unit 808, such as a magnetic disk, optical disk, etc.; and a communication unit 809, such as a network card, modem, wireless communication transceiver, or the like. The communication unit 809 allows the device 800 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks.
The computing unit 801 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of computing unit 801 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 801 performs the respective methods and processes described above, such as the multi-screen display method. For example, in some embodiments, the multi-screen display method may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 808. In some embodiments, part or all of the computer program may be loaded and/or installed onto device 800 via ROM 802 and/or communication unit 809. When a computer program is loaded into RAM 803 and executed by computing unit 801, one or more of the steps of the multi-screen display method described above may be performed. Alternatively, in other embodiments, the computing unit 801 may be configured to perform the multi-screen display method by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described here can be implemented in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on a Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implemented in one or more computer programs, the one or more computer programs may be executed and/or interpreted on a programmable system including at least one programmable processor, which may be a special purpose or general-purpose programmable processor, that may receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. These program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowchart and/or block diagram to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a background component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such background, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), and the internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server incorporating a blockchain.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps recited in the present disclosure may be performed in parallel or sequentially or in a different order, provided that the desired results of the technical solutions of the present disclosure are achieved, and are not limited herein.
Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include at least one such feature. In the description of the present disclosure, the meaning of "a plurality" is two or more, unless explicitly defined otherwise.
The foregoing is merely specific embodiments of the disclosure, but the protection scope of the disclosure is not limited thereto; any changes or substitutions that would readily occur to a person skilled in the art within the technical scope of the disclosure shall be covered by the protection scope of the disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (10)

1. A multi-screen display method, the method comprising:
creating a target virtual screen, wherein the target virtual screen is displayed on at least two displays to be combined;
determining display parameters of each display to be combined through a display manager, and sending the display parameters to a layer manager, wherein the display parameters comprise a target layer stack and a target view area;
determining interface images of all to-be-combined displays through the layer manager based on the display parameters;
and sending the interface images respectively to buffer frames of the corresponding displays to be combined through the layer manager, and calling, through the buffer frames, display controllers of the corresponding displays to be combined so as to display the interface images.
2. The method of claim 1, wherein the creating the target virtual screen comprises:
determining a target resolution of the target virtual screen according to the resolutions of the at least two displays to be combined;
setting a joint instruction, wherein the joint instruction is used for establishing connection between the identity information of the target virtual screen and the identity information of each display to be joined;
sending the joint instruction to a display manager for recording;
and displaying the multimedia data to be output on the target virtual screen by calling a display instruction.
3. The method of claim 2, further comprising, after the displaying the multimedia data to be output onto the target virtual screen:
setting a joint termination instruction, and sending the joint termination instruction to a display manager for recording, wherein the joint termination instruction is used for canceling connection established by the identity information of the target virtual screen and the identity information of each display to be joined;
and restoring the original display state of each display to be combined through the display manager.
4. A method according to claim 3, wherein determining, by the display manager, display parameters for each display to be joined comprises:
and calculating target layer stacks and target view areas of the displays to be combined through the display manager, wherein each target view area is a partial area cropped from the multimedia data to be output as displayed on the target virtual screen.
5. A method according to claim 3, wherein said determining, by said layer manager, interface images of respective displays to be joined based on said display parameters, comprises:
Setting the initial layer stack of each display to be combined as the target layer stack through the layer manager, and setting the initial view area of each display to be combined as a corresponding target view area;
synthesizing the target layer stack into an interface image through the layer manager;
and determining interface images displayed by the displays to be combined according to the target view areas of the displays to be combined.
6. A method according to claim 3, further comprising, prior to said determining, by the display manager, display parameters for each display to be joined:
and setting the displays to be combined into a dormant state.
7. The method of claim 3, further comprising, after said creating the target virtual screen:
and setting virtual touch coordinates of the target virtual screen to receive a touch instruction of a user, wherein the virtual touch coordinates are determined according to the initial coordinates of the to-be-combined displays and the positions of the to-be-combined displays corresponding to the target virtual screen.
8. A multi-screen display device, the device comprising:
The system comprises a creation module, a display module and a display module, wherein the creation module is used for creating a target virtual screen, and the target virtual screen is displayed on at least two to-be-combined displays;
the parameter determining module is used for determining display parameters of each display to be combined through the display manager and sending the display parameters to the layer manager, wherein the display parameters comprise a layer stack and a view area;
the image determining module is used for determining interface images of the displays to be combined through the layer manager based on the display parameters;
and the display module is used for respectively sending the interface images to the buffer frames of the corresponding displays to be combined through the layer manager, and calling the display controllers of the corresponding displays to be combined through the buffer frames so as to display the interface images.
9. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-7.
10. A non-transitory computer readable storage medium storing computer instructions for causing a computer to perform the method of any one of claims 1-7.
CN202310462927.6A 2023-04-21 2023-04-21 Multi-screen display method, device, equipment and storage medium Pending CN116483301A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310462927.6A CN116483301A (en) 2023-04-21 2023-04-21 Multi-screen display method, device, equipment and storage medium


Publications (1)

Publication Number Publication Date
CN116483301A true CN116483301A (en) 2023-07-25

Family

ID=87211574

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310462927.6A Pending CN116483301A (en) 2023-04-21 2023-04-21 Multi-screen display method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116483301A (en)

Similar Documents

Publication Publication Date Title
US9317344B2 (en) Power efficient brokered communication supporting notification blocking
US8924502B2 (en) System, method and computer program product for updating a user session in a mach-derived system environment
CN112614202A (en) GUI rendering display method, terminal, server, electronic device and storage medium
CN112114761A (en) Wireless screen projection control method and device, terminal equipment and readable storage medium
US9801146B2 (en) Terminal and synchronization control method among terminals
KR20230133970A (en) Photography methods, devices and electronics
CN114748873B (en) Interface rendering method, device, equipment and storage medium
US11249771B2 (en) Terminal input invocation
CN110045958B (en) Texture data generation method, device, storage medium and equipment
CN113655975B (en) Image display method, image display device, electronic apparatus, and medium
CN114327087A (en) Input event processing method and device, electronic equipment and storage medium
WO2021042910A1 (en) User interaction method and electronic device
CN111638966A (en) Resource acquisition method and device and electronic equipment
JP2021522721A (en) Screen capture method, terminal and storage medium
CN111857902A (en) Application display method, device, equipment and readable storage medium
CN116483301A (en) Multi-screen display method, device, equipment and storage medium
CN114146406A (en) Method and device for allocating operation resources, electronic equipment and storage medium
CN109960562B (en) Information display method and device and computer readable storage medium
CN107329654A (en) Draw method, device and the computer-readable recording medium of element floating layer
CN113836455A (en) Special effect rendering method, device, equipment, storage medium and computer program product
CN112684965A (en) Dynamic wallpaper state changing method and device, electronic equipment and storage medium
CN111880702A (en) Interface switching method and device and electronic equipment
US20180300160A1 (en) Host and Component Relationship between Applications
CN114416234B (en) Page switching method and device, computer equipment and storage medium
WO2013185664A1 (en) Operating method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination