WO2023020025A1 - Screen projection method and electronic device (投屏方法和电子设备) - Google Patents

Screen projection method and electronic device (投屏方法和电子设备)

Info

Publication number: WO2023020025A1
Application number: PCT/CN2022/091554
Authority: WIPO (PCT)
Prior art keywords: control, layer, display, screen, electronic device
Other languages: English (en), French (fr)
Inventor: 刘诗聪 (Liu Shicong)
Original Assignee: 荣耀终端有限公司 (Honor Device Co., Ltd.)
Application filed by 荣耀终端有限公司 (Honor Device Co., Ltd.)
Publication of WO2023020025A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/14 - Digital output to display device; cooperation and interconnection of the display device with other functional units
    • G06F 3/1454 - Digital output to display device involving copying of the display data of a local workstation or window to a remote workstation or window so that an actual copy of the data is displayed simultaneously on two or more displays, e.g. teledisplay
    • G06F 3/1462 - Digital output to display device involving copying of the display data of a local workstation or window to a remote workstation or window, with means for detecting differences between the image stored in the host and the images displayed on the remote displays
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/24 - Classification techniques
    • G06F 18/241 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches

Definitions

  • The embodiments of the present application relate to the field of terminals, and in particular to a screen projection method and an electronic device.
  • With the development of terminal technology, more and more terminals support screen projection. For example, in home, work, teaching, and e-sports scenarios, a terminal can greatly facilitate viewing by projecting its currently displayed screen onto a large screen.
  • With current screen projection methods, the content projected onto the large screen is identical to what is displayed on the terminal.
  • A conference, for instance, often involves operations such as adding participants, designating a chairperson, muting participants, and selecting shared content. With existing screen projection methods, the conference control buttons operated on the terminal interface are also projected onto the large screen, which interferes with the video stream content displayed there and degrades the experience of users watching the conference on the large screen.
  • To address this, the present application proposes a screen projection method and an electronic device.
  • The method identifies the layers in the interface of a first application, generates pictures containing different controls according to the identified layer types and requirements, and transmits the generated pictures to different electronic devices, so that the picture finally displayed on the electronic device that initiates the screen projection differs from the picture displayed on the electronic device that receives the projected content, allowing the method to better adapt to different screen projection scenarios.
  • In a first aspect, a screen projection method is provided, applied to a first electronic device that initiates screen projection. The first electronic device has a first application installed; the first application includes a first control and a second control, where the first control is located on a first layer and the second control is located on a second layer.
  • The screen projection method includes: obtaining control information of the first control and control information of the second control; determining the layer type of the first layer according to the control information of the first control; determining the layer type of the second layer according to the control information of the second control, where the layer type of the first layer is different from that of the second layer; generating a first display picture and a second display picture according to the layer type of the first layer and the layer type of the second layer; displaying the first display picture on the screen of the first electronic device; and projecting the second display picture onto the screen of a second electronic device for display. The first display picture includes the first control and the second control, while the second display picture includes the first control but does not include the second control.
  • In this way, the first electronic device that initiates screen projection identifies the control information of the first control on the first layer and of the second control on the second layer in the first application, and determines the layer type of the layer where each control is located from the respective control information.
  • Display pictures containing different controls are then generated according to the determined layer types and sent to different electronic devices for display: the first display picture, containing both the first control and the second control, goes to the first electronic device, while the second display picture, containing the first control but not the second control, goes to the second electronic device.
  • Different electronic devices can thus display different pictures, which better adapts to different screen projection scenarios.
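The patent gives no code; as a minimal illustration of the flow above, the sketch below composes two display pictures from the same set of typed layers. All names (`LayerType`, `build_frame`, the layer dictionaries) are hypothetical, not from the patent:

```python
from enum import Enum

class LayerType(Enum):
    VIDEO_STREAM = "video_stream_playback"
    CONFERENCE_BUTTONS = "conference_control_buttons"

def build_frame(layers, allowed_types):
    """Compose a display picture from only the layers whose type is allowed."""
    return [layer["control"] for layer in layers if layer["type"] in allowed_types]

# Two controls on two layers, as in the first aspect.
layers = [
    {"control": "video_view", "type": LayerType.VIDEO_STREAM},
    {"control": "mute_button", "type": LayerType.CONFERENCE_BUTTONS},
]

# The initiating device shows all layers; the projection target gets only video.
first_display = build_frame(layers, {LayerType.VIDEO_STREAM, LayerType.CONFERENCE_BUTTONS})
second_display = build_frame(layers, {LayerType.VIDEO_STREAM})
```

Here `first_display` contains both controls while `second_display` omits the button, mirroring the two pictures described above.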
  • For example, the electronic device that initiates screen projection is a mobile phone.
  • For example, the first application is a conference application.
  • When the first application is a conference application, its interface includes at least a video stream playback layer and a conference control button layer.
  • The video stream playback layer includes at least a video stream playback control.
  • The conference control button layer includes at least a conference control button control.
  • The first display picture displayed on the screen of the first electronic device includes the controls of layers of all layer types, such as the first control in the first layer and the second control in the second layer.
  • The second display picture displayed on the screen of the second electronic device includes only the controls of the video stream playback layer, for example including the first control in the first layer but not the second control in the second layer.
  • Determining the layer type of the first layer according to the control information of the first control includes: extracting the control name of the first control and the size information of the first control from the control information of the first control; parsing the control name; when the control type of the first control, the package name of the first application, and the information of the interface where the first control is located are parsed from the control name, determining the layer type of the first layer according to the control type, the package name, the interface information, and the size information; and, when the control type of the first control and the interface information are parsed from the control name but the package name of the first application is not, obtaining the process identification number (PID) of the process that draws the first control and determining the source of the first control according to the PID.
  • The layer type of the first layer is then determined according to the control type, the source, the interface information, and the size information, where the source includes the package name of the first application.
  • The purpose of the first control can be determined from the control type of the first control.
  • The application type of the first application can be determined from the package name of the first application.
  • Combining the purpose of the first control and the application type of the first application with the size information of the first control and the specific interface information within the first application makes it possible to accurately identify the layer type of the layer containing the first control for most applications on the market.
  • When the package name cannot be parsed from the control name, the source of the process corresponding to the unique PID (that is, the first application) can be determined, and the package name of the first application can then be obtained.
  • In this way, the layer type of the first layer can be accurately determined according to the control information of the first control.
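The patent does not specify a control-name format, so the parsing step can only be sketched under an assumption. The code below assumes a hypothetical `"package/interface#type"` name layout and toy classification rules; both are illustrative, not the patent's actual scheme:

```python
def parse_control_name(name):
    """Split an assumed "package/interface#type" control name.

    Returns (package, interface, control_type); fields that cannot be
    parsed come back as None.
    """
    package = interface = ctype = None
    prefix = name
    if "#" in name:
        prefix, ctype = name.rsplit("#", 1)
    if "/" in prefix:
        package, interface = prefix.split("/", 1)
    else:
        interface = prefix or None
    return package or None, interface, ctype

def determine_layer_type(name, size, pid, pid_to_package):
    """Determine a layer type from control info, with a PID fallback."""
    package, interface, ctype = parse_control_name(name)
    if package is None:
        # Package not in the name: resolve the control's source via the
        # PID of the process that draws it.
        package = pid_to_package.get(pid)
    if package is None:
        return "unknown"
    # Toy rules combining control type, package, interface and size info.
    if ctype == "SurfaceView" and size[0] * size[1] >= 1280 * 720:
        return "video_stream_playback"
    if ctype == "Button":
        return "conference_control_buttons"
    return "unknown"
```

For example, a full-screen `SurfaceView` name classifies as the video stream playback layer, while a small `Button` whose package is recovered through the PID mapping classifies as the conference control button layer.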
  • Before parsing the control name, the method further includes: using the control name and the size information as retrieval keywords; searching a layer identification record library for a control matching the keywords; when a matching control is found, determining the layer type corresponding to that control as the layer type of the first layer; and, when no matching control is found, performing the step of parsing the control name.
  • A table lookup is performed first using the control name and size information. When a matching control is found, the layer type corresponding to the found control is directly
  • determined as the layer type of the first layer, so no analysis of the control information of the first control is needed, which speeds up processing. This both improves speed and avoids resource consumption while still determining an appropriate layer type.
  • When the layer type of the first layer cannot be determined from the control information of the first control,
  • the screen currently displayed by the first application is obtained, where the currently displayed screen includes the first control; the layer type of the first layer is then determined according to the content displayed by the first control in the currently displayed screen.
  • In this way, even when the control information is insufficient, the layer type of the first layer where the first control is located can be accurately determined by analyzing the screen currently displayed by the first application, thereby ensuring that layers can subsequently be drawn separately according to layer type.
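The patent does not say how the displayed content is analyzed (its CPC classification under G06F 18/241 suggests a classification technique). Purely as a hypothetical illustration, a toy heuristic might separate a video region from flat button chrome by color diversity; the threshold and function are inventions for this sketch, not the claimed method:

```python
def classify_by_content(pixels):
    """Toy fallback: guess a layer type from the control's current pixels.

    Heuristic assumption: a video region shows many distinct colors,
    while a button layer is mostly flat UI chrome.
    """
    distinct = len(set(pixels))
    if distinct > len(pixels) // 4:
        return "video_stream_playback"
    return "conference_control_buttons"

video_region = list(range(100))              # 100 distinct values: video-like
button_region = [0xFFFFFF] * 90 + [0] * 10   # nearly flat fill: button-like
```

A real implementation would presumably use a trained classifier on the captured screen region rather than a pixel-count rule.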
  • Determining the layer type of the second layer according to the control information of the second control includes: extracting the control name of the second control and the size information of the second control from the control information of the second control; parsing the control name; when the control type of the second control, the package name of the first application, and the information of the interface where the second control is located are parsed from the control name, determining the layer type of the second layer according to the control type, the package name, the interface information, and the size information; and, when the control type of the second control and the interface information are parsed from the control name but the package name of the first application is not, obtaining the process identification number (PID) of the process that draws the second control, determining the source of the second control according to the PID, and determining the layer type of the second layer according to the control type, the source, the interface information, and the size information, where the source includes the package name of the first application.
  • The purpose of the second control can be determined from the control type of the second control.
  • The application type of the first application can be determined from the package name of the first application.
  • Combining the purpose of the second control and the application type of the first application with the size information of the second control and the specific interface information within the first application makes it possible to accurately identify the layer type of the layer containing the second control for most applications on the market.
  • When the layer type of the second layer cannot be determined from the control information of the second control,
  • the screen currently displayed by the first application is obtained, where the currently displayed screen includes the second control; the layer type of the second layer is then determined according to the content displayed by the second control in the currently displayed screen.
  • In this way, the layer type of the second layer where the second control is located can be accurately determined by analyzing the screen currently displayed by the first application, thereby ensuring that layers can subsequently be drawn separately according to layer type.
  • Generating the first and second display pictures according to the layer type of the first layer and the layer type of the second layer, displaying the first display picture on the screen of the first electronic device, and projecting the second display picture onto the screen of the second electronic device for display includes: generating the first display picture and the second display picture according to the two layer types; caching the first display picture in a first display cache and the second display picture in a second display cache; taking the first display picture from the first display cache in cache order and displaying it on the screen of the first electronic device; recording the second display picture in the second display cache to obtain screen projection content; and sending the screen projection content to the second electronic device, so that the second electronic device decodes the screen projection content to obtain the second display picture and displays it on its screen.
  • Generating the first display picture and the second display picture according to the layer types and caching them in the first and second display caches includes: determining a first layer filter rule corresponding to the first display cache and a second layer filter rule corresponding to the second display cache; determining, according to the first layer filter rule, the layer type of the first layer, and the layer type of the second layer, that the first display picture includes the first layer and the second layer; obtaining the resource of the first control in the first layer and the resource of the second control in the second layer, generating the first display picture according to those resources, and caching the first display picture in the first display cache; determining, according to the second layer filter rule, the layer type of the first layer, and the layer type of the second layer, that the second display picture includes the first layer; and obtaining the resource of the first control in the first layer, generating the second display picture according to that resource, and caching the second display picture in the second display cache.
  • The layers to be cached in each display cache can thus be determined from the layer filter rule corresponding to that display cache together with the determined layer type of each layer.
  • The controls that each display picture needs to include are obtained, and the resources of those controls are used to draw the display picture, yielding display pictures suitable for display on the different electronic devices.
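A filter rule, as described, is effectively the set of layer types a display cache admits. The sketch below (names and FIFO-deque caches are illustrative assumptions) composes a frame per rule and stages it in its cache:

```python
from collections import deque

def compose(layers, filter_rule):
    """Draw a display picture from the layers the filter rule admits."""
    return tuple(l["resource"] for l in layers if l["type"] in filter_rule)

layers = [
    {"type": "video_stream_playback", "resource": "decoded_video"},
    {"type": "conference_control_buttons", "resource": "button_bitmap"},
]
first_rule = {"video_stream_playback", "conference_control_buttons"}
second_rule = {"video_stream_playback"}

first_cache, second_cache = deque(), deque()  # display caches, cache (FIFO) order
first_cache.append(compose(layers, first_rule))
second_cache.append(compose(layers, second_rule))

local_frame = first_cache.popleft()       # shown on the first electronic device
projected_frame = second_cache.popleft()  # recorded, encoded, sent to the second
```

The local frame carries both layer resources; the projected frame carries only the video stream layer, matching the claim above.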
  • Determining the first layer filter rule corresponding to the first display cache and the second layer filter rule corresponding to the second display cache includes: obtaining a first device identifier of the first electronic device and a second device identifier of the second electronic device; searching a layer filter rule table for the layer filter rule matching the first device identifier, and determining the found layer filter rule as the first layer filter rule corresponding to the first display cache; and searching the layer filter rule table for the layer filter rule matching the second device identifier, and determining the found layer filter rule as the second layer filter rule corresponding to the second display cache.
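The rule-table lookup can be sketched as a plain mapping from device identifier to rule. The table contents, identifiers, and the default for unknown devices are all hypothetical:

```python
DEFAULT_RULE = frozenset({"video_stream_playback"})

# Hypothetical layer filter rule table keyed by device identifier.
rule_table = {
    "phone-001": frozenset({"video_stream_playback", "conference_control_buttons"}),
    "tv-002": frozenset({"video_stream_playback"}),
}

def filter_rule_for(device_id):
    """Look up the layer filter rule matching a device identifier."""
    return rule_table.get(device_id, DEFAULT_RULE)
```

The initiating phone's identifier maps to a rule admitting every layer type, while the large screen's identifier maps to the video-only rule.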
  • Alternatively, determining the first layer filter rule corresponding to the first display cache and the second layer filter rule corresponding to the second display cache includes: displaying, on the screen of the first electronic device, a user-operable layer filter rule decision interface, where the decision interface includes the first control together with the layer type of the first layer where the first control is located, and the second control together with the layer type of the second layer where the second control is located; generating the first layer filter
  • rule in response to the user's operation of setting the first layer filter rule for the first display cache; and generating the second layer filter rule in response to the user's operation of setting the second layer filter rule for the second display cache.
  • Letting the user decide the layer filter rules for the current first application not only improves user participation but also lets the screen projection scenario better adapt to different user needs.
  • Recording the second display picture in the second display cache to obtain the screen projection content includes: obtaining a first screen aspect ratio of the first electronic device and a second screen aspect ratio of the second electronic device; when the first screen aspect ratio is different from the second screen aspect ratio, performing black border removal on the second display picture and recording the second display picture after black border removal to obtain the screen projection content; and, when the first screen aspect ratio is the same as the second screen aspect ratio, recording the second display picture in the second display cache to obtain the screen projection content.
  • Introducing black border removal ensures that the second display picture shown on the second electronic device has no black borders, or reduces them as much as possible, thereby improving the viewing experience of users watching the projected screen.
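When aspect ratios differ, the recorded frame would carry letterbox or pillarbox bars around the content. As a sketch, the crop rectangle removing centred bars can be computed from the frame size and the content's aspect ratio; the centred-bars assumption and the function itself are illustrative, not the patent's procedure:

```python
def remove_black_borders(frame_w, frame_h, content_aspect):
    """Crop a recorded frame down to its content, removing black bars.

    content_aspect is the width/height ratio of the real picture inside
    the frame; bars are assumed centred. Returns (x, y, width, height).
    """
    frame_aspect = frame_w / frame_h
    if abs(frame_aspect - content_aspect) < 1e-9:
        return 0, 0, frame_w, frame_h  # aspect ratios match: nothing to remove
    if frame_aspect > content_aspect:
        # Pillarbox: black bars on the left and right.
        content_w = round(frame_h * content_aspect)
        return (frame_w - content_w) // 2, 0, content_w, frame_h
    # Letterbox: black bars on the top and bottom.
    content_h = round(frame_w / content_aspect)
    return 0, (frame_h - content_h) // 2, frame_w, content_h
```

For a 9:16 phone picture recorded into a 1920x1080 frame, this crops to a centred 608x1080 region; when both ratios already match, the full frame is kept.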
  • Recording the second display picture in the second display cache to obtain the screen projection content includes: obtaining the display capability of the second electronic device; determining a video stream refresh frame rate according to the display capability; and recording the second display picture in the second display cache according to the video stream refresh frame rate to obtain the screen projection content.
  • The first electronic device first negotiates the video stream refresh frame rate with the second electronic device, which ensures that the transmitted video stream can be displayed normally on the second electronic device and avoids excessive bandwidth consumption.
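The negotiation step can be sketched as choosing a rate the sink can actually display; the optional bandwidth cap is an illustrative extra, not claimed in the patent:

```python
def negotiate_refresh_rate(source_fps, sink_max_fps, bandwidth_cap_fps=None):
    """Pick a video stream refresh rate the second device can display.

    Capped by the sink's display capability, and optionally by a
    bandwidth-derived limit, so the link is not over-occupied with
    frames the sink would drop anyway.
    """
    rate = min(source_fps, sink_max_fps)
    if bandwidth_cap_fps is not None:
        rate = min(rate, bandwidth_cap_fps)
    return rate
```

A 60 fps source projecting to a 30 Hz display records at 30 fps, while a 24 fps source keeps its own rate even on a faster display.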
  • For example, the first application is a conference application, the first control is a video control, the layer type of the first layer is the video stream playback layer, the second control is a button control, and the layer type of the second layer is the conference control button layer.
  • In this case, the first display picture displayed on the screen of the first electronic device includes both the video pictures of the participants shown through the video stream playback control and the conference control button controls that the user can operate, while the second display picture displayed on the screen of the second electronic device includes only the video pictures of the participants and not the conference control button controls. A user who participates in the conference through the first electronic device can therefore both watch the conference picture and operate the conference control buttons, while a user who watches the conference through the second electronic device is not disturbed by operations performed on the first electronic device.
  • As another example, the first application is a conference application, the first control is a whiteboard annotation control, the layer type of the first layer is the whiteboard annotation layer, the second control is a button control, and the layer type of the second layer is the conference control button layer.
  • In this case, the first display picture displayed on the screen of the first electronic device includes both the whiteboard content and the conference control button controls that the user can operate, while the second display picture displayed on the screen of the second electronic device includes only the whiteboard content and not the conference control button controls. A user who participates in the conference through the first electronic device can therefore both watch the drawing on the whiteboard and operate the conference control buttons, while a user who watches the conference through the second electronic device is not disturbed by operations performed on the first electronic device side.
  • In a second aspect, an electronic device is provided.
  • The electronic device is a first electronic device on which a first application is installed; the first application includes a first control located on a first layer and a second control located on a second layer. The electronic device includes: one or more processors; a memory; and one or more computer programs stored in the memory. When the computer programs are executed by the one or more processors, the electronic device performs the following steps: obtaining control information of the first control and control information of the second control; determining the layer type of the first layer according to the control information of the first control; determining the layer type of the second layer according to the control information of the second control, where the layer type of the first layer is different from the layer type of the second layer; generating a first display picture and a second display picture according to the layer type of the first layer and the layer type of the second layer; displaying the first display picture on the screen of the first electronic device; and projecting the second display picture onto the screen of the second electronic device for display.
  • When the computer programs are executed by the one or more processors, the electronic device further performs the following steps: extracting the control name of the first control and the size information of the first control from the control information of the first control; parsing the control name; when the control type of the first control, the package name of the first application, and the information of the interface where the first control is located are parsed from the control name, determining the layer type of the first layer according to the control type, the package name, the interface information, and the size information; and, when the control type of the first control and the interface information are parsed from the control name but the package name of the first application is not, obtaining the process identification number (PID) of the process that draws the first control, determining the source of the first control according to the PID, and determining the layer type of the first layer according to the control type, the source, the interface information, and the size information, where the source includes the package name of the first application.
  • When the computer programs are executed by the one or more processors, the electronic device further performs the following steps: using the control name and the size information as retrieval keywords; searching the layer identification record library for a control matching the keywords; when a matching control is found, determining the layer type corresponding to that control as the layer type of the first layer; and, when no matching control is found, performing the step of parsing the control name.
  • When the computer programs are executed by the one or more processors, the electronic device further performs the following steps: when the layer type of the first layer cannot be determined from the control information of the first control, obtaining the screen currently displayed by the first application, where the currently displayed screen includes the first control; and determining the layer type of the first layer according to the content displayed by the first control in the currently displayed screen.
  • When the computer programs are executed by the one or more processors, the electronic device further performs the following steps: extracting the control name of the second control and the size information of the second control from the control information of the second control; parsing the control name; when the control type of the second control, the package name of the first application, and the information of the interface where the second control is located are parsed from the control name, determining the layer type of the second layer according to the control type, the package name, the interface information, and the size information; and, when the control type of the second control and the interface information are parsed from the control name but the package name of the first application is not, obtaining the process identification number (PID) of the process that draws the second control, determining the source of the second control according to the PID, and determining the layer type of the second layer according to the control type, the source, the interface information, and the size information, where the source includes the package name of the first application.
  • When the computer programs are executed by the one or more processors, the electronic device further performs the following steps: when the layer type of the second layer cannot be determined from the control information of the second control, obtaining the screen currently displayed by the first application, where the currently displayed screen includes the second control; and determining the layer type of the second layer according to the content displayed by the second control in the currently displayed screen.
  • When the computer programs are executed by the one or more processors, the electronic device further performs the following steps: generating the first display picture and the second display picture according to the layer type of the first layer and the layer type of the second layer, caching the first display picture in the first display cache, and caching the second display picture in the second display cache; taking the first display picture from the first display cache in cache order and displaying it on the screen of the first electronic device; recording the second display picture in the second display cache to obtain screen projection content; and sending the screen projection content to the second electronic device, so that the second electronic device decodes the screen projection content to obtain the second display picture and displays it on its screen.
  • When the computer programs are executed by the one or more processors, the electronic device further performs the following steps: determining the first layer filter rule corresponding to the first display cache and the second layer filter rule corresponding to the second display cache; determining, according to the first layer filter rule, the layer type of the first layer, and the layer type of the second layer, that the first display picture includes the first layer and the second layer; obtaining the resource of the first control in the first layer and the resource of the second control in the second layer, generating the first display picture according to those resources, and caching the first display picture in the first display cache; determining, according to the second layer filter rule, the layer type of the first layer, and the layer type of the second layer, that the second display picture includes the first layer; and obtaining the resource of the first control in the first layer, generating the second display picture according to that resource, and caching the second display picture in the second display cache.
  • When the computer programs are executed by the one or more processors, the electronic device further performs the following steps: obtaining the first device identifier of the first electronic device and the second device identifier of the second electronic device; searching the layer filter rule table for the layer filter rule matching the first device identifier, and determining the found layer filter rule as the first layer filter rule corresponding to the first display cache; and searching the layer filter rule table for the layer filter rule matching the second device identifier, and determining the found layer filter rule as the second layer filter rule corresponding to the second display cache.
  • When the computer programs are executed by the one or more processors, the electronic device further performs the following steps: displaying, on the screen of the first electronic device, a user-operable layer filter rule decision interface, where the decision interface includes the first control together with the layer type of the first layer where the first control is located, and the second control together with the layer type of the second layer where the second control is located; generating the first layer filter rule in response to the user's operation of setting the first layer filter rule for the first display cache; and generating the second layer filter rule in response to the user's operation of setting the second layer filter rule for the second display cache.
  • When the computer programs are executed by the one or more processors, the electronic device further performs the following steps: obtaining the first screen aspect ratio of the first electronic device and the second screen aspect ratio of the second electronic device; when the first screen aspect ratio is different from the second screen aspect ratio, performing black border removal on the second display picture and recording the second display picture after black border removal to obtain the screen projection content; and, when the first screen aspect ratio is the same as the second screen aspect ratio, recording the second display picture in the second display cache to obtain screen projection content.
  • when the computer program is executed by the one or more processors, the electronic device is caused to perform the following steps: acquiring the display capability of the second electronic device; determining the video stream refresh frame rate according to the display capability; recording, according to the video stream refresh frame rate, the second display picture in the second display cache to obtain the screen projection content.
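The exact policy for deriving a frame rate from the display capability is not specified above; capping the recording rate at whatever the large screen can actually display is one plausible reading, sketched below with a hypothetical 60 fps source rate.

```python
def pick_refresh_frame_rate(display_capability_hz, stream_fps=60):
    """Choose the video stream refresh frame rate from the second
    device's display capability: there is no benefit in recording
    faster than the large screen can display. The 60 fps default
    source rate is an assumption for illustration."""
    return min(display_capability_hz, stream_fps)
```

For example, a 30 Hz television would be recorded for at 30 fps even if the conference video stream itself arrives at 60 fps.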
  • The second aspect and any implementation manner of the second aspect correspond to the first aspect and any implementation manner of the first aspect, respectively.
  • For the technical effects corresponding to the second aspect and any implementation manner of the second aspect, reference may be made to the technical effects corresponding to the above-mentioned first aspect and any implementation manner of the first aspect, and details are not repeated here.
  • In a third aspect, a computer-readable storage medium includes a computer program, and when the computer program runs on the electronic device, the electronic device is caused to execute the screen projection method in the first aspect and any implementation manner of the first aspect.
  • the electronic device may be a mobile phone.
  • the third aspect and any implementation manner of the third aspect correspond to the first aspect and any implementation manner of the first aspect respectively.
  • For the technical effects corresponding to the third aspect and any implementation manner of the third aspect, reference may be made to the technical effects corresponding to the above-mentioned first aspect and any implementation manner of the first aspect, and details are not repeated here.
  • an embodiment of the present application provides a computer program, where the computer program includes instructions for executing the method in the first aspect and any possible implementation manner of the first aspect.
  • the fourth aspect and any implementation manner of the fourth aspect correspond to the first aspect and any implementation manner of the first aspect respectively.
  • For the technical effects corresponding to the fourth aspect and any implementation manner of the fourth aspect, reference may be made to the technical effects corresponding to the above-mentioned first aspect and any implementation manner of the first aspect, and details are not repeated here.
  • In a fifth aspect, the embodiment of the present application provides a chip, and the chip includes a processing circuit and transceiver pins.
  • The transceiver pins and the processing circuit communicate with each other through an internal connection path, and the processing circuit executes the method in the second aspect or any possible implementation of the second aspect to control the receiving pin to receive signals and control the sending pin to send signals.
  • the chip is a chip of an electronic device, and the electronic device may be a mobile phone.
  • the fifth aspect and any implementation manner of the fifth aspect correspond to the first aspect and any implementation manner of the first aspect respectively.
  • the technical effects corresponding to the fifth aspect and any one of the implementation manners of the fifth aspect refer to the technical effects corresponding to the above-mentioned first aspect and any one of the implementation manners of the first aspect, and details are not repeated here.
  • Fig. 1 is one of the schematic diagrams of a scenario in which the screen projection function is enabled;
  • Fig. 2 is a second schematic diagram of a scenario in which the screen projection function is enabled;
  • Fig. 3 is one of the schematic diagrams of the content displayed on the terminal and the large screen after projection using the screen projection method provided by an embodiment of the present application;
  • Fig. 4 is an exemplary schematic diagram of the software structure of a mobile phone;
  • Fig. 5 is a schematic diagram of the layer structure included in a picture displayed by a conference application;
  • Fig. 6 is a schematic diagram of the modules included in the mobile phone and the large screen;
  • Fig. 7 is a schematic flowchart of a screen projection method provided by an embodiment of the present application;
  • Fig. 8 is a schematic diagram of control information acquired in the screen projection method provided by an embodiment of the present application;
  • Fig. 9 is a schematic diagram of an interface on which the user decides layer filter rules, provided by an embodiment of the present application;
  • Fig. 10 is one of the module interaction schematic diagrams for displaying different pictures on the mobile phone and the large screen using the screen projection method provided by an embodiment of the present application;
  • Fig. 11 is one of the sequence diagrams of drawing a picture to be displayed using the screen projection method provided by an embodiment of the present application;
  • Fig. 12 is a second sequence diagram of drawing a picture to be displayed using the screen projection method provided by an embodiment of the present application;
  • Fig. 13 is a second schematic diagram of the content displayed on the terminal and the large screen after projection using the screen projection method provided by an embodiment of the present application;
  • Fig. 14a and Fig. 14b are third schematic diagrams of the content displayed on the terminal and the large screen after projection using the screen projection method provided by an embodiment of the present application;
  • Fig. 15 is a second module interaction schematic diagram for displaying different pictures on the mobile phone and the large screen using the screen projection method provided by an embodiment of the present application;
  • Fig. 16a is a schematic diagram of the layer relationship between the physical screen and the virtual screen of the mobile phone in an embodiment of the present application;
  • Fig. 16b is a schematic diagram of a method for establishing a screen coordinate system in an exemplary embodiment of the present application;
  • Fig. 17 is a schematic structural diagram of an apparatus provided by an embodiment of the present application.
  • The terms "first" and "second" in the description and claims of the embodiments of the present application are used to distinguish different objects, rather than to describe a specific order of objects.
  • For example, a first target object and a second target object are used to distinguish different target objects, rather than to describe a specific order of the target objects.
  • Words such as "exemplary" or "for example" are used to present examples, illustrations, or explanations. Any embodiment or design scheme described as "exemplary" or "for example" in the embodiments of the present application shall not be interpreted as being more preferred or advantageous than other embodiments or design schemes. Rather, the use of words such as "exemplary" or "for example" is intended to present related concepts in a concrete manner.
  • multiple processing units refer to two or more processing units; multiple systems refer to two or more systems.
  • The embodiment of the present application takes a conference held using a conference application as an example of the content to be projected, uses a mobile phone as the electronic device that projects the conference screen, and uses a TV as the large screen that displays the projected picture.
  • The display interface of the mobile phone 100 displays a setting page 10a of the mobile phone.
  • The setting page 10a includes one or more controls, such as sound and vibration setting options, notification setting options, device connection setting options, application setting options, battery setting options, storage setting options, and security setting options.
  • the mobile phone jumps from the setting page 10a to the device connection page 10b in response to the user's operation behavior.
  • the device connection page 10b includes one or more controls, such as Bluetooth setting options, NFC (Near Field Communication, near field communication) setting options, mobile phone screen projection setting options, USB (Universal Serial Bus, Universal Serial Bus ) setting options and print setting options, etc.
  • the mobile phone after the user clicks the mobile phone screen projection 10b-1 in the device connection page 10b, the mobile phone jumps from the device connection page 10b to the mobile phone screen projection page 10c in response to the user's operation behavior.
  • the mobile phone screen projection page 10c includes a control for enabling the mobile phone screen projection function, such as the wireless screen projection setting option 10c-1 shown in FIG. 1 .
  • The setting option for enabling the screen projection function displayed on the mobile phone screen projection page 10c varies with the mobile phone model and system version; it may also be named "multi-screen interaction", "screen mirroring", etc., which are not listed here and are not limited in this implementation.
  • the mobile phone when the user clicks the wireless screen projection setting option 10c-1 in the screen projection page 10c of the mobile phone, the mobile phone will display a list of available devices in the blank area of the screen projection page 10c in response to the user's operation, and use Control 10c-2 displays the content "Searching for available devices... Please ensure that the wireless projection function of the large-screen device is turned on" in the display area of the available device list.
  • FIG. 1 shows only one specific display style of the available device list while searching for available large-screen devices; it is an example given for a better understanding of the technical solution of this embodiment and shall not be taken as the only limitation on this embodiment.
  • In other implementations, the mobile phone may, in response to the user's operation behavior, jump from the mobile phone screen projection page 10c to a dedicated available device list page for display.
  • control 10c-3 will be used to display the searched large-screen devices, such as large-screen 1 and large-screen 2, in the display area of the available device list.
  • the searched large-screen devices may be TVs, projectors, etc., which are not listed here, and are not limited in this implementation.
  • The presented screen may be a TV screen, or a large screen composed of multiple TV screens, which is not limited in this application.
  • When the user clicks large screen 1 (10c-3-1) on the mobile phone screen projection page 10c, the mobile phone responds to the user's operation behavior, initiates a pairing request to large screen 1 and establishes a network connection, after which the content displayed on the mobile phone's display interface is projected onto large screen 1.
  • the embodiment of the present application also provides another method for starting the screen projection function.
  • the process of enabling the screen projection function of the mobile phone will be described below with reference to FIG. 2 .
  • the display interface of the mobile phone 100 shows a picture 20 during a conference using a conference application program.
  • In response to the user's operation behavior, the mobile phone displays the drop-down notification bar 30 in the upper edge area of the display interface.
  • the drop-down notification bar 30 includes one or more controls, such as a time bar, Wi-Fi setting options, Bluetooth setting options, mobile data setting options, automatic rotation setting options, and screen mirroring setting options.
  • the mobile phone may pop up an interface for searching available devices on the display interface in response to the user's operation behavior, and after searching for available large-screen devices , displayed on this interface, for the user to select the large-screen device that needs to be paired and establish a network connection.
  • the interface for searching available devices that pops up on the display interface may cover the entire display interface in full screen, or may only cover a partial area, and the specific implementation manner is not limited in this application.
  • the screen projection method provided by the embodiment of the present application is used to project the screen of the current meeting.
  • The operating system of the mobile phone determines the layer on which each control in the picture to be displayed is located, filters out the picture that needs to be displayed on the mobile phone screen and the picture that needs to be displayed on the large screen (such as a TV screen) according to preset filter rules, performs composite rendering on the filtered layers, and finally sends the resulting pictures for display separately, so that the mobile phone and the large screen each display different content during the conference.
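The per-frame flow just described can be sketched as: determine each control's layer, filter the layers per display using a preset rule, composite the survivors, and send each result to its display. The control names, layer names, and rule contents below are hypothetical, and composite rendering is reduced to collecting control names.

```python
# Hypothetical controls in the conference picture, each marked with
# the layer it belongs to.
CONTROLS = [
    {"name": "SurfaceView#0", "layer": "video_stream_playback"},
    {"name": "Button#mute", "layer": "conference_control_button"},
]

# Preset filter rules: the phone shows everything, the large screen
# shows only the video stream playback layer.
FILTER_RULES = {
    "phone": {"video_stream_playback", "conference_control_button"},
    "large_screen": {"video_stream_playback"},
}

def compose_for(display):
    """Filter layers per the display's rule, then 'composite' them
    (here: just collect the surviving control names in order)."""
    allowed = FILTER_RULES[display]
    return [c["name"] for c in CONTROLS if c["layer"] in allowed]

picture_a = compose_for("phone")         # sent to the phone's screen
picture_b = compose_for("large_screen")  # projected to the TV
```

With this rule set, the conference control buttons never appear in picture B, matching the control/display separation described below.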
  • the display interface of the mobile phone 100 displays a screen 20 during a conference, and the screen 20 includes a video stream playback layer 20-1 and a conference control button layer 20-2.
  • the video stream playback layer 20-1 includes one or more video stream playback controls, and these video stream playback controls are used to display the video stream acquired during the conference.
  • multiple video stream playback controls can be integrated into one video stream playback layer.
  • When the server corresponding to the conference application transmits video streams to the mobile phone, the video streams corresponding to multiple video stream playback controls can be combined into one stream for transmission.
  • each video stream playback control can be set to correspond to a video stream playback layer.
  • In this case, when the server corresponding to the conference application transmits video streams to the mobile phone, it needs to send a separate video stream to each video stream playback control.
  • The conference control button layer 20-2 may include one or more controls, such as mute setting options, video setting options, sharing setting options, participant setting options, and more setting options, which are not listed one by one here and are not limited in this application.
  • The preset filter rule is that the content projected onto the large screen 200 includes only the video stream playback controls for displaying video streams; that is, the only layer included in the picture on the large screen 200 is the video stream playback layer.
  • The picture 20' finally projected on the display interface of the large screen 200 includes only the mirror content 20-1' of the video stream playback layer 20-1.
  • The display interface of the large screen always displays the mirror content 20-1' of the video stream playback control 20-1; operations on the conference control buttons in the conference control button layer 20-2 are not projected onto the large screen, so that the user watching the video stream on the large screen is not disturbed, realizing the separation of control and display.
  • The screen projection method provided in the embodiment of the present application is applicable not only to one-to-one projection scenarios but also to one-to-many projection scenarios, as long as the mobile phone or other electronic device with the screen projection function enabled supports one-to-one and one-to-many projection. The specific implementation details of one-to-one and one-to-many projection are not described in this application.
  • A mobile phone is taken as an example for illustration; in other embodiments, the present application is also applicable to laptop computers, desktop computers, palmtop computers (such as tablet computers), and other electronic devices that support screen projection.
  • the conference application program is installed, and the electronic device for screen projection is a mobile phone.
  • the software structure of the mobile phone is described in conjunction with FIG. 4 .
  • FIG. 4 is a software structural block diagram of the mobile phone 100 according to the embodiment of the present application.
  • the layered architecture divides the software into several layers, and each layer has a clear role and division of labor. Layers communicate through software interfaces.
  • the embodiment of the present application takes the Android system as an example to describe the software structure of the mobile phone 100 with the Android system.
  • The Android system is divided into five layers, which from top to bottom are the application layer, the application framework layer (also known as the system framework layer), the system library and Android runtime layer, the hardware abstraction layer (HAL), and the kernel layer.
  • The application layer may include application programs such as camera, gallery, calendar, WLAN, meeting, music, and video (hereinafter referred to as applications). It should be noted that the applications included in the application layer shown in FIG. 4 are only illustrative, and the present application does not limit this. Understandably, the applications included in the application layer do not constitute a specific limitation on the mobile phone 100. In other embodiments of the present application, the mobile phone 100 may include more or fewer applications than those contained in the application layer shown in FIG. 4, or different applications.
  • the application framework layer provides an application programming interface (Application Programming Interface, API) and a programming framework for the application of the application layer, including various components and services to support the developer's Android development.
  • the application framework layer also includes some predefined functions. As shown in Figure 4, the application framework layer may include a window manager, a content provider, a view system, a resource manager, a notification manager, a camera service, and the like.
  • a window manager is used to manage window programs.
  • the window manager can get the size of the display screen, determine whether there is a status bar, lock the screen, capture the screen, etc.
  • Content providers are used to store and retrieve data and make it accessible to applications.
  • Data can include videos, images, audio, calls made and received, browsing history and bookmarks, phonebook, etc.
  • the view system includes visual controls, such as controls for displaying text, controls for displaying pictures, and so on.
  • the view system can be used to build applications.
  • a display interface can consist of one or more views.
  • a display interface including a text message notification icon may include a view for displaying text and a view for displaying pictures.
  • the resource manager provides various resources for the application, such as localized strings, icons, pictures, layout files, video files, and so on.
  • the notification manager enables the application to display notification information in the status bar, which can be used to convey notification-type messages, and can automatically disappear after a short stay without user interaction.
  • the notification manager is used to notify the download completion, message reminder, etc.
  • the notification manager can also be a notification that appears on the top status bar of the system in the form of a chart or scroll bar text, such as a notification of an application running in the background, or a notification that appears on the screen in the form of a dialog window.
  • For example, the notification manager may prompt text information in the status bar, issue a prompt sound, vibrate the electronic device, flash the indicator light, and so on.
  • the camera service is used to call a camera (including a front camera and/or a rear camera) in response to an application request.
  • the application framework layer further includes a display management framework and a display rendering framework.
  • the display management framework is used to identify the layers to which the controls included in the conference application belong, and mark and record them;
  • the display rendering framework is used to filter the layers identified by the display management framework according to preset filtering rules, and Filtered layers for composite rendering.
  • The picture 20 displayed on the display interface of the mobile phone 100 in FIG. 3 is obtained by composite rendering of two layers, namely the video stream playback layer 20-1 and the conference control button layer 20-2 shown in FIG. 5.
  • The display management framework in the application framework layer identifies the controls drawn by the conference application installed in the application layer, determines the layer type of each control, and marks it. For example, for video stream controls such as the SurfaceView control, the corresponding layer type is the video stream playback layer, i.e., 20-1 in FIG. 5; for conference control buttons such as the mute option setting control, video option setting control, sharing option setting control, and participant option setting control, the corresponding layer type is the conference control button layer, i.e., 20-2 in FIG. 5.
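The marking step above amounts to a mapping from control type to layer type. A minimal sketch, with a hypothetical two-entry mapping and an "unknown" fallback for control types the table does not cover:

```python
# Control type -> layer type, as the display management framework is
# described to mark controls. The table contents are illustrative.
CONTROL_TYPE_TO_LAYER = {
    "SurfaceView": "video_stream_playback",   # 20-1 in Fig. 5
    "Button": "conference_control_button",    # 20-2 in Fig. 5
}

def mark_layer_type(control_type):
    """Return the layer type marked for a control's type."""
    return CONTROL_TYPE_TO_LAYER.get(control_type, "unknown")
```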
  • System library and Android runtime layer includes system library and Android runtime (Android Runtime).
  • a system library can include multiple function modules. For example: surface manager, 2D graphics engine, 3D graphics processing library (eg: OpenGL ES), media library, font library, etc.
  • The browser kernel is responsible for interpreting webpage syntax (such as HTML and JavaScript, applications under the standard generalized markup language) and rendering (displaying) webpages; the 3D graphics processing library is used to implement 3D graphics drawing, image rendering, compositing, layer processing, etc.; the media library is used to support input of different streaming media; the font library is used to support input of different fonts.
  • the Android runtime is responsible for the scheduling and management of the Android system, including the core library and virtual machine.
  • The core library includes two parts: one part is the functions that the Java language needs to call, and the other part is the core library of Android; the virtual machine is used to run Android applications developed in the Java language.
  • both the application program layer and the application program framework layer need to run in the virtual machine.
  • the virtual machine executes the java files of the application program layer and the application program framework layer as binary files.
  • virtual machines are used to perform functions such as object life cycle management, stack management, thread management, security and exception management, and garbage collection.
  • the components contained in the application framework layer, system library and runtime layer shown in FIG. 4 do not constitute a specific limitation on the mobile phone 100 .
  • the mobile phone 100 may include more or fewer components than shown, or combine some components, or separate some components, or arrange different components.
  • the HAL layer is an interface layer between the operating system kernel and the hardware circuit.
  • the HAL layer includes but is not limited to: Audio Hardware Abstraction Layer (Audio HAL) and Camera Hardware Abstraction Layer (Camera HAL).
  • Audio HAL is used to process the audio stream, for example, to perform noise reduction and directional enhancement on the audio stream
  • Camera HAL is used to process the image stream.
  • the kernel layer is the layer between hardware and software.
  • the kernel layer includes at least a display driver, a camera driver, an audio driver, and a sensor driver.
  • the hardware may include devices such as a camera, a display screen, a microphone, a processor, and a memory.
  • the display screen in the hardware can display pictures during the meeting
  • the camera in the hardware can be used to collect images
  • the microphone in the hardware can be used to collect sound signals and generate analog audio electrical signals.
  • An electronic device (such as a mobile phone) and a large screen used to display the content projected by the electronic device (such as a TV) need to include at least the modules shown in FIG. 6.
  • The mobile phone used for projecting content needs to install at least the conference application in the application layer, and introduce, in the application framework layer, a layer identification record library for recording layer identification information, a layer identification module for identifying the layer corresponding to each control, a layer filter module for filtering the layers identified by the layer identification module, and a composite rendering module for compositing and rendering the layers filtered by the layer filter module, as well as a collaborative assistant and a network communication module in the system library and Android runtime layer.
  • the layer identification record library and the layer identification module are specifically located in the display management framework of the application program framework layer, and the layer filtering module and the composite rendering module are specifically located in the display rendering framework of the application program framework layer.
  • Since the large screen used to display the content projected by the mobile phone only needs to display the projected content and does not need to perform layer identification or composite rendering, the large screen need only include a projection display module located in the application framework layer, a composite rendering module for compositing and rendering the content transmitted from the mobile phone, and a collaborative assistant and network communication module located in the system library and Android runtime layer.
  • After the user triggers the operation of enabling the screen projection function through the method shown in FIG. 1 or FIG. 2, the collaborative assistant in the system library and Android runtime layer is matched to a large screen that also has a collaborative assistant, i.e., supports the screen projection function, and a communication connection is established with the matched large screen through the network communication module.
  • The meeting application at the application layer interacts with the layer identification record library and the layer identification module in the display management framework of the application framework layer; after the layers are identified, they are filtered by the layer filter module in the display rendering framework of the application framework layer, and the filtered layers are composited and rendered by the composite rendering module to obtain the picture that needs to be displayed on the mobile phone screen (such as picture A) and the picture that needs to be displayed on the large screen (such as picture B).
  • Picture A can be sent directly for display through the display driver of the mobile phone, so that picture A is displayed on the mobile phone screen.
  • The mirroring process is performed based on the mirroring protocol: picture B is recorded by the screencast recording module in the system library and Android runtime layer of the mobile phone, and after recording is completed, it is transmitted to the large screen via the communication connection established by the network communication module; the large screen performs display processing through the projection display module and finally presents picture B.
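The record-then-transmit flow for picture B can be sketched as a producer/consumer pair. The function names, the string-based "encoding", and the queue standing in for the network connection are all hypothetical illustrations of the described flow, not the real modules.

```python
from queue import Queue

network_connection = Queue()  # stands in for the established Wi-Fi link

def record_and_transmit(picture_b_frames):
    """Screencast recording module: encode each recorded frame of
    picture B and push it over the network connection."""
    for frame in picture_b_frames:
        encoded = f"encoded({frame})"    # recording/encoding step
        network_connection.put(encoded)  # transmission to the large screen

def large_screen_display():
    """Projection display module on the large screen: drain the
    connection and 'display' (collect) the received frames in order."""
    shown = []
    while not network_connection.empty():
        shown.append(network_connection.get())
    return shown
```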
  • the application to be projected is not limited to a meeting application, and the above description is only for illustration.
  • the screen projection method provided in this embodiment is specifically applied to the first electronic device that initiates screen projection, for example, a mobile phone.
  • the first electronic device is installed with the first application, and for ease of understanding, the meeting application is still taken as an example below.
  • the first application includes a first control and a second control, and the first control is located on the first layer, and the second control is located on the second layer.
  • the first control and the second control are different types of controls.
  • For example, the first control is a video control, such as SurfaceView; correspondingly, the layer type of the first layer is the video stream playback layer. The second control is a button control, for example, Button.
  • the layer type of the second layer is a conference control button layer.
  • In another example, the first control is a whiteboard annotation control, such as BlankWindow; correspondingly, the layer type of the first layer is the whiteboard annotation layer.
  • The second control is a button control, for example, Button.
  • The layer type of the second layer is the conference control button layer.
  • the screen projection method provided by this application specifically includes the following steps:
  • Step S1 Obtain control information of the first control and control information of the second control.
  • This embodiment still takes the first application being a meeting application as an example, and the premise of executing step S1 is that the user has established a communication link between the mobile phone 100 and the TV 200 using the method of enabling the screen projection function shown in FIG. 1 or FIG. 2 above.
  • In a specific application, the interface of any application is usually composited and rendered from multiple layers, which include at least two layer types.
  • For the conference application, the included layers include at least the video stream playback layer and the conference control button layer.
  • each type of layer includes at least one control. That is, the number of first controls in the first layer may be one or more, and the number of second controls in the second layer may also be one or more.
  • the video stream playback layer includes one or more video stream playback controls
  • The conference control button layer includes one or more conference control button controls. See the description of FIG. 3 for details, which are not repeated here.
  • The control information of the first control and the control information of the second control may be obtained by the display processing module located in the application framework layer of the mobile phone 100 calling a preset control information capture program.
  • Step S2 Determine the layer type of the first layer according to the control information of the first control.
  • the format of the obtained control information of each control in the conference application may be as shown in FIG. 8 .
  • the obtained control information includes but is not limited to the control name (Name) and size information (disp frame); it may also include other information, such as the window type, which is not limited in this application.
  • For example, in Figure 8, "SurfaceView-com.huawei.welink/com.[...]on.view.activity.InMeetingActivity#0rel-2" is the control name of control a, and "0 0 283 283" is the size information of control a; "SurfaceView-com.huawei.welink/com.[...]on.view.activity.InMeetingActivity#1rel-1" is the control name of control b, and "0 283 2288 1080" is the size information of control b; "Button-com.huawei.welink/com.[...]on.view.activity.InMeetingActivity#0rel-0" is the control name of control c, and "413 2130 625 2200" is the size information of control c.
  • the control name usually includes three parts. Taking the control name of control a as an example, "SurfaceView" indicates the control type, which means that control a is used to display video stream content; "com.huawei.welink" indicates the application to which control a belongs; and the remaining part indicates the interface information of control a.
  • first, the control name of the first control and the size information of the first control are extracted from the control information of the first control. For example, the control name of control a is "SurfaceView-com.huawei.welink/com.[...]on.view.activity.InMeetingActivity#0rel-2", and its size information is "0 0 283 283".
  • when the control type of the first control, the package name of the first application, and the interface information of the first control are parsed from the control name, the layer type of the first layer is determined according to the control type, the package name, the interface information, and the size information.
  • specifically, the purpose of the first control can be determined according to the control type of the first control, and the application type of the first application can be determined according to the package name of the first application. Combining the application type of the first application with the size information of the first control and the specific interface information in the first application makes it possible to accurately identify the layer type of the layer containing the first control for most applications on the market.
  • when the control type of the first control and the interface information of the first control are parsed from the control name but the package name of the first application is not, the process identification number (PID) of the process drawing the first control is obtained, the source of the first control is determined according to the PID, and the layer type of the first layer is determined according to the control type, the source, the interface information, and the size information, where the source includes the package name of the first application. Since the PID is unique, the source of the process corresponding to the PID, that is, the first application, can be determined according to it, and the package name of the first application can then be obtained. In this way, regardless of whether the control name of the first control contains the package name of the first application, the layer type of the first layer can be accurately determined according to the control information of the first control.
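  • The parsing step described above can be sketched as splitting the control name into control type, package name, and interface information, with a PID-based fallback when the package part is absent. All identifiers below are hypothetical, and the `pid_to_package` map stands in for querying the system for a process's source:

```python
def parse_control_name(name, pid=None, pid_to_package=None):
    """Split a control name of the shape "<Type>-<package>/<interface>"
    into (control_type, package, interface).  When the package part is
    missing, fall back to a PID lookup to recover the source application."""
    control_type, _, rest = name.partition("-")
    package, _, interface = rest.partition("/")
    if not package and pid is not None and pid_to_package:
        package = pid_to_package.get(pid, "")  # source of the drawing process
    return control_type, package, interface

# Name carrying the package directly (hypothetical identifiers):
full = parse_control_name(
    "SurfaceView-com.example.meeting/com.example.meeting.InMeetingActivity#0")

# Name without a package: resolve the source through the drawing process's PID.
bare = parse_control_name("SurfaceView", pid=1234,
                          pid_to_package={1234: "com.example.meeting"})
```

Either way, the parsed control type, package name (or PID-derived source), interface information, and size information together determine the layer type.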
  • in addition, in order to increase the speed of determining the layer type, a layer identification record library can be set up in advance, associating known controls of various sizes and positions in various applications with the layer types of the layers in which they are located. In this way, before parsing the control name and determining the layer type according to the parsing result and position information, the control name and size information are first used as search keywords, and the layer identification record library is searched for a control matching those keywords. Correspondingly, when a matching control is found, the layer type corresponding to that control is determined as the layer type of the first layer; when no matching control is found, the control name is parsed and the layer type is determined according to the parsing result and position information. In this way, not only can the layer type be determined, but the processing speed and the consumption of device resources are also taken into account.
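  • The record-library lookup can be sketched as a dictionary keyed by control name and size, falling back to name parsing only on a miss (entries and names here are illustrative, not real records):

```python
# Known controls (name, size) mapped to the layer type of their layer,
# recorded in advance; all entries here are illustrative.
layer_record_library = {
    ("SurfaceView-com.example.meeting/Main#0", (0, 0, 283, 283)):
        "video stream playback layer",
    ("Button-com.example.meeting/Main#0", (413, 2130, 625, 2200)):
        "conference control button layer",
}

def identify_layer(name, size, parse_fallback):
    """Search the record library first; parse the control name only on a miss."""
    hit = layer_record_library.get((name, tuple(size)))
    return hit if hit is not None else parse_fallback(name)

hit_type = identify_layer("Button-com.example.meeting/Main#0",
                          (413, 2130, 625, 2200),
                          parse_fallback=lambda n: "parsed")
miss_type = identify_layer("Unknown#0", (0, 0, 1, 1),
                           parse_fallback=lambda n: "parsed")
```

The dictionary hit avoids the cost of parsing, which is the speed/resource trade-off the text describes.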
  • a screen currently displayed by the first application may also be acquired.
  • the currently displayed screen includes the first control.
  • the layer type of the first layer is determined according to the content displayed by the first control in the currently displayed screen.
  • the layer type may be manually determined by technicians from captured page data and then updated to the layer identification library, or it may be determined by analysis based on a preset algorithm, for example, by analyzing the content displayed in each control in the screen and the text corresponding to the control's icon. The specific analysis process is not repeated in this application.
  • Step S3: Determine the layer type of the second layer according to the control information of the second control.
  • the process is as follows:
  • when the control type of the second control, the package name of the first application, and the interface information of the second control are parsed from the control name, the layer type of the second layer is determined according to the control type, the package name, the interface information, and the size information;
  • when the control type of the second control and the interface information of the second control are parsed from the control name but the package name of the first application is not, the process identification number (PID) of the process drawing the second control is obtained, the source of the second control is determined according to the PID, and the layer type of the second layer is determined according to the control type, the source, the interface information, and the size information, where the source includes the package name of the first application.
  • likewise, when a control matching the keywords is found in the layer identification record library, the layer type corresponding to that control is determined as the layer type of the second layer; when no matching control is found, the control name is parsed and the layer type is determined according to the parsing result and position information. In this way, not only can the layer type be determined, but the processing speed and the consumption of device resources are also taken into account.
  • alternatively, the currently displayed screen of the first application is acquired, the currently displayed screen including the second control, and the layer type of the second layer is determined according to the content displayed by the second control in the currently displayed screen.
  • the layer type of the first layer is different from the layer type of the second layer.
  • for example, when the first control is a video control, such as SurfaceView, the layer type of the first layer is a video stream playback layer; when the first control is a whiteboard annotation control, the layer type of the first layer is a whiteboard annotation layer; and when the second control is a button control, such as Button, the layer type of the second layer is a conference control button layer.
  • Step S4: Generate a first display screen and a second display screen according to the layer type of the first layer and the layer type of the second layer, display the first display screen on the screen of the first electronic device, and project the second display screen to the screen of the second electronic device for display.
  • the first display screen includes the first control and the second control, while the second display screen includes the first control but excludes the second control.
  • of course, the first application may also include controls located in other layers, and the generated first display screen and second display screen may correspondingly include other controls according to business requirements, which is not limited in this application.
  • a first display cache for caching the first display picture and a second display cache for caching the second display picture may be allocated in the first electronic device.
  • specifically, the first display picture may be cached in the first display cache and the second display picture in the second display cache; then, according to the cache order, the first display picture is fetched from the first display cache and displayed on the screen of the first electronic device, and the second display picture in the second display cache is recorded to obtain the screen projection content; the screen projection content is sent to the second electronic device, so that the second electronic device decodes it to obtain the second display picture and displays it on the screen of the second electronic device.
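  • The two-cache flow just described can be sketched with simple queues; the `encode` function below is only a stand-in for the real screen-recording/encoding step, and all names are illustrative:

```python
from collections import deque

first_display_cache = deque()    # pictures for the first device's own screen
second_display_cache = deque()   # pictures to be projected to the second device

def encode(picture):
    # Stand-in for recording/encoding the frame into screen projection content.
    return ("encoded", picture)

def cache(first_picture, second_picture):
    first_display_cache.append(first_picture)
    second_display_cache.append(second_picture)

def present(send):
    """Fetch one picture from each cache in cache order: display the first
    picture locally and send the encoded second picture to the peer device."""
    local = first_display_cache.popleft()
    send(encode(second_display_cache.popleft()))
    return local

sent = []
cache("picture A", "picture B")
shown = present(sent.append)
```

The peer device would then decode the received content and display it, completing the projection path.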
  • in this application, different layer filtering rules are set for different display caches, so that when generating a display picture, the controls to be included in the picture cached in a display cache can be determined according to the layer filtering rule corresponding to that display cache and the determined layer type of each layer; the resources of those controls are then obtained to draw the display picture, yielding display pictures suitable for display by different electronic devices.
  • the process of generating the first display screen and the second display screen according to the layer type of the first layer and the layer type of the second layer, and caching the generated display screens to the corresponding display caches, is as follows: first, determine the first layer filter rule corresponding to the first display cache and the second layer filter rule corresponding to the second display cache; then, according to the first layer filter rule, the layer type of the first layer, and the layer type of the second layer, determine that the first display screen includes the first layer and the second layer; then, obtain the resources of the first control in the first layer and the resources of the second control in the second layer, generate the first display screen according to the resources of the first control and the resources of the second control, and cache the first display screen to the first display cache; then, according to the second layer filter rule, the layer type of the first layer, and the layer type of the second layer, determine that the second display screen includes the first layer; finally, obtain the resources of the first control in the first layer, generate the second display screen according to the resources of the first control, and cache the second display screen to the second display cache.
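  • The generation process above reduces to filtering layers by type according to each cache's rule and composing a picture from the controls of the retained layers. A minimal sketch, with rules encoded as sets of allowed layer types (an illustrative assumption):

```python
def generate_picture(layers, allowed_types):
    """Compose a display picture from the controls of the allowed layers."""
    return [control for layer_type, controls in layers.items()
            if layer_type in allowed_types
            for control in controls]

layers = {"video stream playback layer": ["control 1"],
          "conference control button layer": ["control 2"]}

# First layer filter rule: the local picture keeps both layer types.
first_picture = generate_picture(layers, {"video stream playback layer",
                                          "conference control button layer"})
# Second layer filter rule: the projected picture keeps only the video layer.
second_picture = generate_picture(layers, {"video stream playback layer"})
```

Each resulting picture would then be cached in its corresponding display cache, as described above.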
  • this embodiment provides two ways of determining layer filtering rules, which will be described respectively below.
  • Method 1: Select the first layer filter rule and the second layer filter rule from a predetermined layer filter rule table.
  • specifically, the layer filter rule table is searched according to the device identifier, and the obtained layer filtering rule is determined as the second layer filtering rule corresponding to the second display cache.
  • assume that the layer type of the first layer is a video stream playback layer, the layer type of the second layer is a conference control button layer, the device ID of the first electronic device is D_01, and the device ID of the second electronic device is D_02.
  • then the first layer filter rule suitable for the first display cache, found in layer filter rule table 1 according to the device identifier of the first electronic device, is "the display layer types are the video stream playback layer and the conference control button layer", and the second layer filter rule suitable for the second display cache, found in layer filter rule table 1, is "the display layer type is the video stream playback layer, and the conference control button layer is not displayed".
  • the first display screen generated according to the filtering rules of the first layer and each layer type will include the first control on the video stream playback layer and the second control on the conference control button layer.
  • the second display screen generated according to the filtering rule of the second layer and each layer type will include the first control located at the video stream playback layer, but not include the second control located at the conference control button layer.
  • assume instead that the layer type of the first layer is a whiteboard annotation layer, the layer type of the second layer is a conference control button layer, the device ID of the first electronic device is D_01, and the device ID of the second electronic device is D_02.
  • then the first layer filter rule suitable for the first display cache, found in layer filter rule table 2 according to the device identifier of the first electronic device, is "the display layer types are the whiteboard annotation layer and the conference control button layer", and the second layer filter rule suitable for the second display cache, found in layer filter rule table 2, is "the display layer type is the whiteboard annotation layer, and the conference control button layer is not displayed".
  • the first display screen generated according to the filter rules of the first layer and each layer type will include a first control on the whiteboard annotation layer and a second control on the meeting control button layer.
  • the second display screen generated according to the filtering rule of the second layer and each layer type will include the first control located at the whiteboard annotation layer, but not include the second control located at the conference control button layer.
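  • Method 1 therefore amounts to indexing a predetermined table by device identifier. A sketch for the whiteboard scenario, with the rules again encoded as sets of allowed layer types (an illustrative assumption):

```python
# Hypothetical layer filter rule table keyed by device identifier.
layer_filter_rule_table = {
    "D_01": {"whiteboard annotation layer", "conference control button layer"},
    "D_02": {"whiteboard annotation layer"},   # large screen hides the buttons
}

first_rule = layer_filter_rule_table["D_01"]   # rule for the first display cache
second_rule = layer_filter_rule_table["D_02"]  # rule for the second display cache
```

Looking the rules up by device ID keeps the per-device behavior fixed in advance, in contrast to Method 2 below, where the user decides.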
  • Method 2: Provide a user entry and let the user decide the first layer filter rule and the second layer filter rule.
  • a user operation entry can also be provided, and the user decides layer filtering rules.
  • specifically, a layer filtering rule decision interface that can be operated by the user is displayed on the screen of the first electronic device; the layer filtering rule decision interface includes the first control and the layer type of the first layer where the first control is located, and the second control and the layer type of the second layer where the second control is located. Then, in response to the user's operation behavior of setting the first layer filter rule for the first display cache, the first layer filter rule is generated; in response to the user's operation behavior of setting the second layer filter rule for the second display cache, the second layer filter rule is generated.
  • FIG. 9 shows a schematic diagram of an interface for users to decide the filtering rules of the first layer and the filtering rules of the second layer.
  • the display interface of the mobile phone 100 displays an operation interface for setting the filtering rules of the first layer and the filtering rules of the second layer.
  • this operation interface is divided into an area for setting the first layer filter rule and an area for setting the second layer filter rule.
  • take the case where the first control is located on the first layer and the second control is located on the second layer as an example.
  • assuming that, according to the above method, the layer type of the first layer is determined to be the video stream playback layer and that of the second layer the conference control button layer, continue to refer to FIG. 9.
  • the video stream playback layer and the conference control button layer are displayed in the area for setting the first layer filter rule, with a corresponding check box after each layer option; the video stream playback layer and the conference control button layer are also displayed in the area for setting the second layer filter rule, likewise with a check box after each layer option.
  • when the user selects the check boxes of the video stream playback layer and the conference control button layer in the setting area of the first layer filter rule and clicks the save button, the mobile phone responds to the user's operation behavior and generates the first layer filter rule of the first display cache according to the user's selection, specifically "the display layer types are the video stream playback layer and the conference control button layer".
  • when the user selects the check box of the video stream playback layer in the setting area of the second layer filter rule and clicks the save button, the mobile phone responds to the user's operation behavior and generates the second layer filter rule of the second display cache according to the user's selection, specifically "the display layer type is the video stream playback layer, and the conference control button layer is not displayed".
  • for ease of understanding, the embodiment of this application takes as an example that the first electronic device initiating projection is a mobile phone, the large-screen device is a TV, and the first application is a conference application.
  • the following describes three scenarios in combination with the screen projection method provided in the embodiment of this application.
  • the screen projection process can be divided into three parts.
  • the first part is the layer identification process.
  • the layer identification process is mainly the process of identifying the controls that need to be drawn by the conference application and determining the layer type.
  • the second part is the filtering and rendering synthesis process.
  • the filtering and rendering synthesis process is mainly to filter different types of layers according to the identified layer identification information, and then render and synthesize the filtered layers.
  • the third part is the display process, which is to render and synthesize different images, and transmit them to the corresponding screens (mobile phone screen and TV screen) for display.
  • after the conference application installed in the application layer of the mobile phone is started, assuming that the conference application is used to join a meeting, the screen to be presented on the mobile phone screen by default includes control 1 (assumed to be a video stream playback control) and control 2 (assumed to be a conference control button control). The conference application sends drawing requests for these controls to the display processing module in the display management framework located in the application framework layer, which determines the control information from the drawing requests.
  • the drawing request includes, but is not limited to: the application ID of the conference application (for example, it may be the name of the application package), the ID of the control to be drawn (for example, it may be the name of the control), and the like.
  • the display processing module after receiving the drawing request sent by the conference application for drawing control 1 and control 2 , the display processing module respectively determines the control information corresponding to the control to be drawn according to the information carried in the drawing request.
  • the display processing module determines the application type according to the captured application ID.
  • the application type determined according to the ID of the conference application may be a conference type.
  • the corresponding relationship between the application ID and the application type can be preset, and then when the application ID is captured, the corresponding application type can be directly determined according to the preset corresponding relationship.
  • the display processing module determines the control type according to the captured control name, for example, when the captured control name is SurfaceView, the determined control type is a video stream playback control or a 3D picture display control.
  • the control type can also be determined through a preset correspondence; that is, the correspondence between different control names and control types is determined in advance, and when a control name is captured, the corresponding control type is determined directly according to the preset correspondence.
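  • The preset correspondences described above (application ID to application type, control name prefix to control type) can be sketched as lookup tables; the identifiers below are hypothetical:

```python
# Preset correspondences, set up in advance (identifiers are hypothetical).
app_type_by_id = {"com.example.meeting": "conference"}
control_type_by_name = {
    "SurfaceView": "video stream playback or 3D picture display control",
    "Button": "conference control button control",
}

def classify(app_id, control_name):
    """Resolve the application type and control type from the captured IDs."""
    prefix = control_name.split("-", 1)[0]   # e.g. "SurfaceView" from the name
    return app_type_by_id.get(app_id), control_type_by_name.get(prefix)

app_type, control_type = classify("com.example.meeting", "SurfaceView-x#0")
```

A captured ID is thus mapped directly to its type, without re-analyzing the request each time.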
  • the display processing module can determine the control information of the control to be drawn according to the received drawing request.
  • the display processing module determines the control information of the control 1 and the control information of the control 2 to be drawn, it sends the determined drawing information of each control to the layer identification module, and the layer identification module identifies the layer to which the control belongs.
  • the display processing module can send drawing requests for the control 1 and the control 2 to the layer filtering module in the display rendering framework.
  • the display processing module may obtain configuration information corresponding to the conference application according to the determined conference type and control type, for example, it may be the resolution corresponding to the screen displayed by the conference application (for example, 1080*720) , and the configuration information of the control, such as the size and position of the control, and then generate a drawing request that needs to be sent to the layer filtering module according to the determined information, and send the drawing request to the layer filtering module.
  • the display processing module may also send the determined configuration information as control information corresponding to each control to the layer identification module for processing.
  • the operation of the display processing module sending a drawing request to the layer filtering module may be performed synchronously with the operation of sending the determined control information to the layer identification module, or it may be performed before or after sending the control information.
  • alternatively, the drawing request can be sent to the layer filtering module only after the layer identification module has identified the layer to which the drawn control belongs.
  • the layer identification module can be divided into two parts: a layer identification module and a layer identification record library.
  • the layer identification module identifies and analyzes the control information sent by the display processing module according to the preset layer identification algorithm, determines the type of the layer to which the control belongs, and identifies the layer; the layer identification record library is used to record the relationship between known controls and layers.
  • in a specific implementation, after the layer identification module receives the control information of each control delivered by the display processing module, it can first search the layer identification record library according to the control information. If the corresponding layer and identification information are found, the found layer is directly determined as the layer type corresponding to the control; if not found, the layer identification module identifies and analyzes the control information according to the preset layer identification algorithm, and then determines the layer type to which the control belongs.
  • after receiving the control information of control 1 and the control information of control 2 sent by the display processing module, the layer identification module first searches the layer identification record library according to them. Assuming that content matching the control information of control 1 is found in the layer identification record library and the corresponding layer type is a video stream playback layer, the layer type of control 1 is determined as the video stream playback layer.
  • for control 2, the layer identification module identifies and analyzes its control information according to the preset layer identification algorithm, and then determines the layer type to which control 2 belongs; for example, according to the layer identification algorithm, it is determined that the layer type of control 2 is the conference control button layer.
  • the layer identification module may determine the layer type to which the control belongs according to the size, position, splicing condition of the control, and in combination with the application type and control type in the control information.
  • the video stream playback controls of the current speaker will be located in the middle area of the entire screen, and the video stream playback controls of other participants will be located in the top area of the mobile phone screen.
  • the conference control button control operated by the user is located at the bottom area of the mobile phone screen, and the specific style is shown in FIG. 3 .
  • the sizes of the video stream playback controls used to display the participants are the same, and the video stream playback controls of multiple participants are spliced together to form a complete video stream playback layer.
  • the conference control buttons for the user to operate are displayed in a conference control button layer, which is located at the bottom area, the top area, or the left and right sides of the mobile phone screen.
  • the layer identification module can determine the layer type to which the control belongs according to the size, position, splicing condition of the control, combined with the application type and control type in the control information.
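  • The position heuristic above (participant video in the top/middle area, control buttons in a bottom strip) can be sketched as a simple region check; the screen height and the 85% threshold are illustrative choices, not values from this application:

```python
def classify_by_position(rect, screen_height=2340):
    """Guess the layer from where the control sits on a portrait screen.

    rect is (left, top, right, bottom); the 85% threshold for the bottom
    strip is an illustrative assumption, not a value from this application.
    """
    bottom = rect[3]
    if bottom > screen_height * 0.85:      # bottom area: control buttons
        return "conference control button layer"
    return "video stream playback layer"   # top/middle area: participant video

button_guess = classify_by_position((413, 2130, 625, 2200))
video_guess = classify_by_position((0, 0, 283, 283))
```

In practice such a check would be combined with the application type and control type from the control information, as the text notes.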
  • the video stream playback controls used to display the pictures of the participants together correspond to a video stream playback layer 20-1, and the mute, video, sharing, participants, and more setting options for the user to operate together correspond to a conference control button layer 20-2.
  • it is also possible for a single video stream playback control to correspond to a video stream playback layer, and for a single conference control button control to correspond to a conference control button layer.
  • in addition, for non-video stream playback controls, that is, controls not used to display the video streams of participants, it needs to be determined whether they should be classified into the video stream playback layer, so that in screen projection mode, when the large screen only displays the content in the video stream playback layer, such controls can still be projected to the large screen for display.
  • for such controls, the layer identification module can monitor whether they can accept touch, the touch duration (for example, a duration of 10 frames), and whether there are cursor changes, prompts, and other information, to determine whether they need to be classified into the video stream playback layer.
  • for example, the video stream playback controls of the participants are usually not displayed in this mode, but the whiteboard annotation controls need to be displayed; if such a control is not classified into the video stream playback layer, the large screen cannot display the whiteboard annotation screen synchronously with the mobile phone screen. Therefore, for this type of control, by monitoring the changes of the whiteboard brush/cursor, as well as position movement, prompt information, and the like, when it is determined to be a whiteboard annotation control, it is also classified into the video stream playback layer, that is, it is displayed on the large screen.
  • the drawing request sent by the conference application to the display processing module does not carry a specific package name feature, and the display processing module cannot determine the specific application type and control type, as shown in Fig.
  • the layer identification module can determine the layer type by judging the source information of the drawing request, and then perform layer identification on the control.
  • when determining the layer type according to the source information, it is possible to judge related information such as the virtual IP address (Virtual IP Address, VIP) and the process identification number (Process Identification, PID) of the drawing request, and determine the information corresponding to the control according to this related information.
  • the layer identification module can determine the layer type to which the control belongs according to the source information corresponding to the initiated drawing request and the predetermined corresponding relationship.
  • in addition, the layer identification module can intercept the data stream uploaded from the HAL layer, for example, intercept the first 5 frames, analyze those 5 frames of data, and use a pre-trained recognition model to identify whether the current screen contains controls whose layer type is the video stream playback layer or controls whose layer type is the conference control button layer, thereby determining the layer type to which each control belongs.
  • after the layer identification module determines the layer type to which each control belongs, it sends the control and the corresponding layer type to the layer filtering module, so that the layer filtering module can filter the layers according to the identification information added to the identified layers.
  • multiple filtering rules may be pre-installed in the layer filtering module.
  • the filtering rules can be divided according to the type and model of the large-screen device. For example, for a large-screen device such as a TV, the filtering rule can be to project only the video stream playback layer to the large screen; for a large-screen device such as a projector, the filtering rule can be to display all layers on the large-screen device.
  • the filtering rule may also be that the large-screen device displays all content in the video stream playback layer, and displays some controls in the conference control button layer, for example, only displaying the mute setting option.
  • all layers may be displayed, that is, no layer is filtered.
  • according to business needs, the mobile phone screen may directly display no content, and all layers may be displayed on the set main control device.
  • control 1 is a video stream playback control
  • control 2 is a conference control button control.
  • after control 1 and control 2, whose layer types have been determined, reach the layer filtering module, assuming that the filtering rule corresponding to the mobile phone screen (rule 1 in Figure 10) is to send and display the contents of all layers, then after filtering according to rule 1, control 1 and control 2 are in essence not filtered out, and all resources of control 1 and control 2 are sent to the composite rendering module for picture drawing.
  • assuming that the filtering rule corresponding to the TV screen is to display only the video stream playback layer, the layer filtering module will filter out control 2 and send only the resources of control 1 to the composite rendering module for picture drawing.
  • filtering rules can also be decided by users.
  • specifically, the identified layer types are sent respectively to the application layer, for example the conference application in Figure 11, and to the layer filtering module in the display rendering framework.
  • the mobile phone responds by popping up, on the current interface, an interface displaying the layer corresponding to each control, allowing the user to choose which layers' controls are displayed on the TV screen and which on the mobile phone screen; that is, the filtering rule corresponding to the TV screen and the filtering rule of the mobile phone screen are obtained.
  • the filtering rules determined by the user are sent to the layer filtering module in the display rendering framework, so that the layer filtering module can filter the layers according to the filtering rules.
  • after the layer filtering module filters the controls according to the preset filtering rules, it sends the resources of the controls satisfying the filtering rules to the composite rendering module for picture drawing.
  • the resources of the controls to be sent can be determined according to the information written in the drawing request issued by the display processing module.
  • the specific determination method is neither limited nor described in detail in this application.
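As a minimal sketch of this filtering step (the device names, layer-type strings, and data layout are illustrative assumptions, not the actual framework API):

```python
# Hypothetical sketch of per-device layer filtering. Rule 1 (phone: all
# layers) and rule 2 (TV: video stream playback layer only) follow the
# rules described in this embodiment; everything else is illustrative.
FILTER_RULES = {
    "phone": {"video_stream_playback", "conference_control_button"},
    "tv": {"video_stream_playback"},
}

def filter_controls(controls, device):
    """Keep only the controls whose layer type the device's rule allows;
    their resources would then go to the composite rendering module."""
    allowed = FILTER_RULES[device]
    return [c for c in controls if c["layer_type"] in allowed]

controls = [
    {"name": "control 1", "layer_type": "video_stream_playback"},
    {"name": "control 2", "layer_type": "conference_control_button"},
]
```

Filtering for the phone keeps both controls; filtering for the TV drops control 2, matching the behavior described above.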
  • the composite rendering module first draws each control to be displayed according to the received control resources (for example, the drawing logic), and then composites the layers where the controls are located to obtain a complete picture.
  • in essence, this is the process of drawing each video stream playback control in the video stream playback layer 20-1 shown in FIG. 3 and compositing the layers where these controls are located into one picture; that is, the final picture A obtained is 20-1 in FIG. 3.
  • the composite rendering module will draw control 1 according to the resources of control 1 and control 2 according to the resources of control 2, then composite the layer where control 1 is located with the layer where control 2 is located, finally obtaining a complete picture B.
  • after picture A, which needs to be displayed on the TV screen, is obtained, it is cached in display cache A; in the subsequent process, picture A can be fetched directly from display cache A.
  • the screen B will be buffered into the display buffer B, and in the subsequent process, the screen B can be obtained directly from the display buffer B.
  • display cache A and display cache B may be the same cache area.
  • a specific identifier may, for example, indicate which device the picture is to be sent to.
  • the composite rendering module draws the controls according to their resources to obtain the picture that meets the filtering requirements; for details, see the description of step 109 in FIG. 11, which is not repeated here.
  • after compositing and drawing the picture to be sent for display, the composite rendering module temporarily buffers the obtained picture in the corresponding display cache, and the display cache then sends the buffered picture to the corresponding module.
  • picture A, which needs to be projected to the TV screen (large screen) for display, is cached in display cache A. When projection is needed, the screen projection recording module takes picture A out of display cache A to record it; that is, display cache A sends the cached picture A to the screen projection recording module.
  • after the screen projection recording module obtains picture A, it records it, performs video encoding on the recorded content when recording is complete, and thereby obtains the screen projection content.
  • each frame of the video stream corresponds to one picture, so picture A can be obtained at different times. When recording, the screen projection recording module records multiple frames according to preset recording requirements, such as 30 frames per second; the obtained screen projection content is therefore essentially a dynamic video stream.
  • the screen projection recording module is provided by the system of the electronic device that has the screen projection function; this application neither describes the specific recording process nor limits the video encoding performed after recording.
  • after the screen projection recording module completes the recording and video encoding of picture A, it transmits the obtained screen projection content to the video decoding module in the large-screen device (a TV in this embodiment) through a pre-established communication connection.
  • after the video decoding module in the TV receives the screen projection content transmitted by the mobile phone, it decodes it according to the predetermined method and parses out picture A.
  • the screen projection content recorded by the screen projection recording module is essentially a video stream, so the decoding operation here decodes the video stream to obtain the video content to be displayed.
  • after the video decoding module parses out picture A, it sends picture A to the TV screen, specifically to the screen projection display module of the TV.
  • the screen projection display module displays the received picture A on the display screen 2, that is, the TV screen.
  • screen A only includes controls whose layer type is a video stream playback layer
  • the content displayed on the TV screen is specifically the mirror image of 20-1 in FIG. 3 , that is, 20-1'.
  • the composite rendering module draws the controls according to their resources to obtain the picture that meets the filtering requirements; for details, see the description of step 109 in FIG. 11, which is not repeated here.
  • after compositing and drawing the picture to be sent for display, the composite rendering module temporarily buffers the obtained picture in the corresponding display cache, and the display cache then sends the buffered picture to the corresponding module.
  • the picture B that needs to be displayed on the screen of the mobile phone will be cached in the display buffer B.
  • the cached picture B is sent to the display driver of the mobile phone.
  • after receiving picture B, the display driver of the mobile phone uploads picture B to the conference application at the application layer, which displays it on the conference interface corresponding to the conference application.
  • since picture B includes all the layers, both 20-1 and 20-2 in FIG. 3 are displayed on the screen of the mobile phone.
  • the layer identification module can identify, among the layers of the currently joined conference, which layer is the video stream playback layer and which is the conference control button layer.
  • layer filtering is then performed in the layer filtering module, which determines, according to the preset filtering rules, which layers need to be displayed on the mobile phone screen and which need to be displayed on the large screen.
  • for example, for the mobile phone screen, all layers are set to be displayed, so that the user can see both the video stream content involved in the whole conference process and the operable conference control buttons, through which the user can control the conference.
  • for the large screen, only the video stream playback layer is set to be displayed, so that other users watching the conference on the large screen see only the video stream content involved in the conference process.
  • the entire operation of the conference control buttons displayed on the mobile phone screen is invisible to users watching the conference on the large screen, and therefore does not affect large-screen viewing.
  • since the large screen displays only the video stream content involved in the conference process, other content related to private information displayed on the mobile phone screen is not projected onto the large screen, thereby protecting user privacy.
  • the screen of the mobile phone displays images of all participants and conference control buttons
  • the screen of the TV only displays images of participants.
  • the following describes in detail the scene in which the mobile phone screen displays the pictures of all participants, the annotation content added by participants to the picture, and the conference control buttons, while the TV screen displays the pictures of the participants and the annotation content added by participants to the picture.
  • the display interface of the mobile phone 100 displays a picture 20 after joining a conference through a conference application; the picture 20 includes a video stream playback layer 20-1, a conference control button layer 20-2, annotation content 20-3 added by other conference participants, and annotation content 20-4 added by the user of the mobile phone 100.
  • the live broadcaster can add annotation content drawn based on Augmented Reality (AR) technology, see Figure 13.
  • the AR annotation content added by the peer user is 20-3
  • the AR annotation added by the local user is 20-4.
  • when determining whether the controls of such content should be classified into the video stream playback layer, the determination can be made specifically according to the positional relationship between these controls and the video stream playback controls. Therefore, when the layer identification module determines a layer type from a control's control information, it decides whether such controls need to be classified into the video stream playback layer according to the positional relationship between controls of known layer type and these controls.
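A possible shape for this positional test (an illustrative sketch; the rectangle layout and the overlap criterion are assumptions, not the framework's actual rule):

```python
def overlaps(a, b):
    """True if rectangles a and b, each given as (left, top, right,
    bottom), intersect."""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def classify_annotation(annotation_rect, video_rects):
    """Classify an annotation control into the video stream playback
    layer when it lies over a known video playback control; otherwise
    keep it in a separate annotation layer."""
    if any(overlaps(annotation_rect, r) for r in video_rects):
        return "video_stream_playback"
    return "annotation"
```

An AR mark drawn on top of a participant's video tile would thus travel with the video stream playback layer when projected.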
  • since the AR annotation content added by the local user is intended to be viewed by the other participants, the annotated AR content needs to be transmitted through the server to the other participants; that is, it needs to be displayed on the other participants' mobile phone screens and large screens. On the large screen matched with the local machine, however, there is essentially no need to show it. Therefore, in one implementation scenario, the AR annotation content added by the local user is not displayed on the local matching large screen, while it is displayed on the large screens matched with the other users' mobile phones.
  • 20-4 is added by the local user, so the content of this layer is displayed only on the mobile phone screen and not on the matching large screen 200, while 20-3 is added by other participants on their mobile phones. Thus 20-4 is displayed on the picture 20 of the mobile phone 100, and 20-3 also needs to be displayed when projecting onto the large screen 200; the large screen 200 therefore displays the mirror image content 20-1' of 20-1 and the mirror image content 20-3' of 20-3.
  • in another implementation scenario, the picture projected on the large screen 200 includes not only the mirror image content of the AR annotation content 20-3 added by the peer user but also the mirror image content of the AR annotation content 20-4 added by the local user.
  • the content displayed by the non-video stream playback controls can also be projected to the large screen for display, further enriching the projection screen scene.
  • the screen of the mobile phone displays images of all participants and conference control buttons, and the screen of the TV only displays images of participants.
  • with reference to FIGS. 14a, 14b and 15, the following describes in detail the changes in the pictures displayed on the mobile phone screen and the TV screen when the mobile phone screen is switched from scene 1 to the whiteboard annotation mode.
  • the display interface of the mobile phone 100 displays a screen 20 after joining a conference through a conference application, and the screen 20 includes a video stream playing layer 20-1 and a conference control button layer 20-2.
  • the video stream playback layer 20-1 includes one or more video stream playback controls. For a specific description, refer to Scene 1, which will not be repeated here.
  • the picture 20' projected onto the TV 200 through the mobile phone 100 only includes the mirror image 20-1' of the video stream playback layer 20-1.
  • the video stream playback layer 20-1' includes one or more video stream playback controls; the displayed video stream playback controls are in one-to-one correspondence with those displayed on the picture 20 of the mobile phone 100, and the content displayed in the video stream playback controls is the same.
  • in one implementation, the conference control button layer 20-2 displayed on the display interface of the mobile phone 100 includes one or more conference control buttons, such as mute setting options, video setting options, sharing setting options, attendee setting options, and more setting options.
  • the mobile phone 100 responds to the user's operation behavior, and displays a prompt box 40 for selecting shared content on the display interface, as shown in FIG. 14a .
  • the prompt box 40 includes one or more controls, such as desktop setting option 40-1, whiteboard setting option 40-2, cancel setting option 40-3 and start sharing 40-4 shown in FIG. 14a.
  • the mobile phone 100 marks the state of the whiteboard setting option 40-2 as selected in response to the user's operation. If the user then clicks the start sharing setting option 40-4, the mobile phone 100 switches from the current interface to the whiteboard annotation mode interface in response to the user's operation, as shown in FIG. 14b.
  • after switching to the whiteboard annotation mode, the picture 20 displayed on the display interface of the mobile phone 100 includes the whiteboard annotation layer 20-5 drawn by the whiteboard annotation control and the conference control button layer 20-6 for stopping sharing, rather than the conference control button layer 20-2 shown in FIG. 14a.
  • the conference control button layer 20-6 includes one or more controls, such as a prompt control in FIG. 14b for prompting the user that the whiteboard is currently being shared, and a control for the user to stop sharing.
  • when casting the screen, the layer identification module identifies such controls and the layer filtering module filters them out, so as to ensure that the picture 20' projected on the TV screen includes only the content drawn by the user in the whiteboard annotation layer 20-5'.
  • the mobile phone 100 marks the state of the whiteboard setting option 40-2 as selected in response to the user's operation. If the user then clicks the cancel setting option 40-3, the mobile phone 100 responds to the user's operation, the prompt box 40 disappears from the display interface, and the display returns to the content shown in the lower left corner of FIG. 14a.
  • the brush that draws content on the whiteboard will be directly used as a separate layer.
  • the picture displayed on the screen usually does not display the brush that can draw on the whiteboard, but directly displays the content drawn on the whiteboard.
  • the brush is displayed so that the user who draws the content knows where to draw.
  • the cursor displayed on the mobile phone screen can be projected onto the large screen, so that users watching the large screen can know the currently selected content.
  • when the conference application (subsequently referred to as: conference application) installed at the application layer of the mobile phone is started and, it is assumed, used to join a conference, the picture to be presented on the mobile phone screen by default includes control 1 (assumed to be a video stream playback control) and control 2 (assumed to be a conference control button control). The conference application sends a drawing request for drawing the controls to the display processing module, located in the display management framework at the application framework layer, which determines the control information of the drawing request.
  • when the user switches to whiteboard mode, the conference application also sends a drawing request for drawing the whiteboard annotation control to the display processing module in the display management framework at the application framework layer.
  • for the process of switching to whiteboard mode on the mobile phone interface, refer to the interface diagrams shown in FIG. 14a and FIG. 14b.
  • after the display processing module receives the drawing requests for drawing control 1 and control 2 sent by the conference application, it determines, according to the information carried in the drawing requests, the control information corresponding to each control to be drawn, and the layer identification module identifies the layer types of control 1 and control 2; for a specific description, refer to scene 1, which is not repeated here.
  • the whiteboard annotation control is also a kind of control; its specific control properties and control information differ from those of the video stream playback control and the conference control button control, but the process of determining its control information is similar to that for those controls.
  • the display processing module can also determine the application type from the captured application ID, and the control type from the captured control name. For example, since the drawing request above was initiated by a conference application, the application type determined from the conference application ID can be the conference type; and when the captured control is named BlankWindow, the determined control type is a whiteboard annotation control.
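The lookup just described might be sketched as follows (only the BlankWindow → whiteboard annotation mapping comes from this embodiment; the application ID and the other table entries are hypothetical):

```python
# Hypothetical tables mapping captured identifiers to types.
APP_TYPES = {"com.example.conference": "conference"}  # assumed app ID
CONTROL_TYPES = {
    "BlankWindow": "whiteboard_annotation",  # named in this embodiment
    "VideoWindow": "video_stream_playback",  # assumed control name
}

def determine_types(app_id, control_name):
    """Resolve the application type from the app ID and the control type
    from the control name, falling back to 'unknown'."""
    return (APP_TYPES.get(app_id, "unknown"),
            CONTROL_TYPES.get(control_name, "unknown"))
```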
  • after the display processing module determines the control information corresponding to the whiteboard annotation control, it sends the determined control information to the layer identification module, which identifies the layer type and sends it to the layer filtering module. The layer filtering module then filters according to the preset filtering rules and sends the resources of the controls to be displayed on each device to the composite rendering module, which composites the pictures to be sent to the different devices for display.
  • if rule 1 specifies that the contents of all layers are displayed on the mobile phone screen, the content retained after filtering according to rule 1 is the whiteboard annotation control displayed after switching to whiteboard mode and the conference control button control for user operations.
  • if rule 2 stipulates that in whiteboard mode the TV screen displays only the content of the whiteboard annotation control, then according to rule 2 the layer filtering module filters out the video stream playback control and the conference control button control, and sends only the resources of the whiteboard annotation control to the composite rendering module for picture drawing.
  • the composite rendering module draws the control according to the corresponding resource, and then obtains images to be displayed on different devices.
  • the picture drawn by the composite rendering module can be cached in the corresponding display cache first.
  • the picture A drawn according to the resource of the whiteboard annotation control and the resource of the conference control button control needs to be cached in the display cache A.
  • the picture A is taken out from the display buffer A and sent to the display driver of the mobile phone, and then sent to the screen of the mobile phone for display by the display driver of the mobile phone, thereby obtaining the picture 20 displayed on the display interface of the mobile phone 100 in FIG. 14b.
  • picture B, drawn according to the resources of the whiteboard annotation control, is cached in display cache B. When projection is required, picture B is taken out of display cache B and sent to the screen projection recording module for recording, producing the screen projection content. The screen projection recording module sends the obtained screen projection content to the TV over the pre-established communication connection; the video decoding module in the TV decodes it, parses out picture B, and transmits picture B to the screen projection display module, which finally displays picture B on the TV screen.
  • for the electronic device that projects the content (such as a mobile phone) and the large-screen device that displays the projected content (such as a TV), only the video stream playback layer is displayed on the TV. Therefore, when the user operates the conference control button controls in the conference control button layer displayed on the mobile phone side to switch the current conference picture to whiteboard mode, the switching process itself is not shown on the TV screen; the TV screen continues to display the content of the video stream playback layer.
  • after the mobile phone switches to whiteboard mode, the TV screen switches the currently displayed picture to the content of the whiteboard annotation control. This does not affect the visual experience of users watching the conference on the TV screen, and the switched whiteboard picture can be displayed in time, achieving smooth switching between different interfaces.
  • the resolution, screen size, and aspect ratio of the electronic device projecting the content and of the large-screen device displaying it generally differ, so in one implementation the frame rate of the recorded screen projection content can be determined in advance according to the resolution.
  • black border processing can be performed on the screen projected on the large-screen device according to information such as screen size and ratio.
  • the first screen aspect ratio of the first electronic device and the second screen aspect ratio of the second electronic device may be obtained first. Then, when the first screen aspect ratio differs from the second screen aspect ratio, black border removal processing is performed on the second display picture in the second display buffer, and the second display picture after black border removal is recorded to obtain the screen projection content; when the first and second screen aspect ratios are the same, the second display picture in the second display buffer is recorded directly to obtain the screen projection content.
  • black border removal processing is introduced to ensure that the second display picture displayed on the second electronic device has no black borders, or to reduce the black borders as much as possible, thereby improving the viewing experience of the user watching the projected screen.
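The aspect-ratio comparison above reduces to a small check (a sketch; the function name and tuple layout are hypothetical):

```python
from fractions import Fraction

def needs_black_border_removal(first_screen, second_screen):
    """Compare the two devices' screen aspect ratios, each given as
    (width, height) in pixels; black border removal is applied only
    when the ratios differ."""
    (w1, h1), (w2, h2) = first_screen, second_screen
    return Fraction(w1, h1) != Fraction(w2, h2)
```

A 19.5:9 phone (2340*1080) projecting to a 16:9 display triggers removal; two 16:9 screens of different sizes do not.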
  • the second display frame is drawn on a virtual screen in the first electronic device.
  • the visible area of the virtual screen may be set first.
  • the screen of the mobile phone is called a physical screen.
  • the physical screen of a mobile phone is generally rectangular or approximately rectangular.
  • the virtual screen corresponds to the physical screen and is generally rectangular.
  • the visible area of the virtual screen is part or all of the display area of the physical screen.
  • when the physical screen of a mobile phone displays an interface, the interface includes a display content layer (DisplayRect) and an observation window layer (Viewport).
  • a virtual screen can likewise include a display content layer and an observation window layer.
  • for both physical and virtual screens, the area of the screen visible to the human eye is related to the area setting information of the observation window layer.
  • the observation window layer of the physical screen is 2340*1080
  • the visible area of the human eye is 2340*1080
  • the observation window layer of the virtual screen is 1920*1080
  • the visible area of the human eye is also 1920*1080.
  • the interface displayed on the virtual screen is not displayed to the user during screen projection, and its visible area to the human eye refers to the area that can be recorded during screen recording.
  • the setting information of its area is related to the resolution of the physical screen.
  • the origin O can be set at the top left corner of the physical screen
  • the x-axis passes horizontally to the right through the origin O
  • the y-axis passes vertically downwards through the origin O to establish a coordinate system.
  • each pixel of the screen can be identified by coordinates (x, y).
  • the display area of the physical screen's display content layer can be set to (0, 0, 2340, 1080) and the display area of its observation window layer to (0, 0, 2340, 1080); the display area of the virtual screen's display content layer can be set to (0, 0, 2340, 1080), and the display area of its observation window layer to (210, 0, 2130, 1080).
  • the visible area of the virtual screen may be the display area of the observation window layer of the virtual screen.
  • the visible area of the virtual screen can be changed, that is, the aspect ratio and content of the video frame sent to the second electronic device can be changed.
  • the visible area is set to (210, 0, 2130, 1080)
  • the aspect ratio of the visible area becomes 16:9, that is, the aspect ratio of the second display image obtained by recording the virtual screen with this ratio is 16:9
  • the content in the actual video frame is the content in the rectangle ABCD in Figure 16b.
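The (210, 0, 2130, 1080) observation window above follows from centring a 16:9 window on the 2340*1080 virtual screen; a sketch of that arithmetic (the helper function is not part of the source):

```python
def centered_viewport(screen_w, screen_h, ratio_w, ratio_h):
    """Centre a ratio_w:ratio_h observation window on a screen of
    screen_w x screen_h pixels, cropping the excess width (assumes the
    screen is at least as wide as the target ratio)."""
    visible_w = screen_h * ratio_w // ratio_h   # 1080 * 16 / 9 = 1920
    left = (screen_w - visible_w) // 2          # (2340 - 1920) / 2 = 210
    return (left, 0, left + visible_w, screen_h)
```

`centered_viewport(2340, 1080, 16, 9)` reproduces the rectangle ABCD region (210, 0, 2130, 1080) described above.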
  • setting the visible area of the virtual screen on the mobile phone may include:
  • the mobile phone obtains the size of the second display picture drawn in the virtual screen and the size of the display control in the large-screen device;
  • the visible area of the virtual screen is set according to the display area of the display control
  • the visible area of the virtual screen is set to be the same as the display area of the screen of the large-screen device.
  • in another implementation, setting the visible area of the virtual screen in the mobile phone may include:
  • the mobile phone takes a screenshot of the second display picture drawn for the first time according to the second layer filtering rules and the layer type of each layer, and performs black edge detection on the captured picture;
  • the visible area of the virtual screen is set according to the position of the non-black border area in the second display frame;
  • black edge detection determines whether the pixel color in specified regions of the picture is black. For example, if the screen resolution of the mobile phone is 2340*1080 (screen aspect ratio 19.5:9) while video or PPT full-screen playback generally has an aspect ratio of 16:9, it can be detected whether the RGB values of the pixels in the regions (0, 0, 210, 1080) and (2130, 0, 2340, 1080) of the picture are all (0, 0, 0); if so, those regions are black border areas, otherwise they are not.
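The check described above can be sketched like this (illustrative; a real implementation would read the screenshot's pixel buffer, stood in here by a dict):

```python
def region_is_black(pixels, region):
    """pixels: mapping (x, y) -> (r, g, b); region: (left, top, right,
    bottom) with right/bottom exclusive. True if every pixel in the
    region is pure black."""
    left, top, right, bottom = region
    return all(pixels[(x, y)] == (0, 0, 0)
               for x in range(left, right) for y in range(top, bottom))

def has_side_black_borders(pixels, width, height, border_w):
    """Detect the two side regions used in the example above, e.g.
    (0, 0, 210, 1080) and (2130, 0, 2340, 1080) on a 2340*1080 shot."""
    return (region_is_black(pixels, (0, 0, border_w, height)) and
            region_is_black(pixels, (width - border_w, 0, width, height)))
```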
  • setting the visible area of the virtual screen on the mobile phone may include:
  • the mobile phone obtains the size of the second display screen and the size of the screen display control of the large-screen device;
  • the visible area of the virtual screen is set according to the display area of the display control
  • the mobile phone can take multiple screenshots of the virtual screen, and set the visible area of the virtual screen according to the black border detection results of multiple pictures obtained from multiple screenshots.
  • in this way, black border removal processing can be completed, and finally, by recording the virtual screen picture after black border removal, the second display picture to be displayed on the large-screen device is obtained.
  • the display capability of the second electronic device may be obtained first; the refresh frame rate of the video stream is then determined according to the display capability; and the second display picture in the second display buffer is recorded according to that refresh frame rate to obtain the screen projection content.
  • the first electronic device first negotiates the refresh frame rate of the video stream with the second electronic device, so as to ensure that the transmitted video stream can be displayed normally on the second electronic device and to avoid excessive bandwidth occupation.
  • the hardware capabilities of the electronic device for projecting content and the large-screen device may not be equal.
  • if the video stream is transmitted at a frame rate higher than the device can display, the excess frames must be discarded during playback, so transmitting at a high frame rate actually wastes bandwidth. Therefore, when projecting the screen, determining the refresh frame rate of the video stream according to the display capability of the large-screen device both ensures that the transmitted video stream can be displayed normally on the large-screen device and avoids excessive bandwidth occupation.
  • frame processing of the video stream can be performed in the rendering and composition module; that is, for the pictures to be sent to the large-screen device for display, the rendering and composition module directly drops frames and composites a video stream that meets the requirements according to the refresh frame rate of the large-screen device. Since the video stream to be transmitted to the large-screen device has already been frame-processed according to that device's display capability, this reduces both the bandwidth occupied during network transmission and the power consumption when the large-screen device renders the received video stream.
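A minimal sketch of that frame-dropping step (assumes for simplicity that the source frame rate is an integer multiple of the large-screen device's refresh rate; the function is illustrative, not the module's API):

```python
def drop_frames(frames, src_fps, dst_fps):
    """Keep only every (src_fps // dst_fps)-th frame so the composed
    video stream matches the sink's negotiated refresh rate."""
    if src_fps % dst_fps != 0:
        raise ValueError("sketch assumes src_fps is a multiple of dst_fps")
    return frames[::src_fps // dst_fps]
```

For a 60 fps source and a 30 fps TV, every second frame is kept, roughly halving the bandwidth of the transmitted stream.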
  • the electronic device for projecting content includes corresponding hardware and/or software modules for performing various functions.
  • the present application can be implemented in the form of hardware or a combination of hardware and computer software. Whether a certain function is executed by hardware or by computer software driving hardware depends on the specific application and design constraints of the technical solution. Those skilled in the art may use different methods to implement the described functions for each specific application, but such implementation should not be regarded as exceeding the scope of the present application.
  • the functional modules of the electronic device may be divided according to the above method example.
  • each functional module may be divided corresponding to each function, or two or more functions may be integrated into one processing module.
  • the above integrated modules may be implemented in the form of hardware. It should be noted that the division of modules in this embodiment is schematic, and is only a logical function division, and there may be other division methods in actual implementation.
  • FIG. 17 shows a schematic block diagram of an apparatus 300 according to an embodiment of the present application.
  • the device 300 may include: a processor 301 , a transceiver/transceiving pin 302 , and optionally a memory 303 .
  • bus 304 includes a power bus, a control bus, and a status signal bus in addition to a data bus.
  • the various buses are referred to as bus 304 in the figure.
  • the memory 303 may be used to store the instructions in the foregoing method embodiments.
  • the processor 301 may be used to execute the instructions in the memory 303, to control the receiving pin to receive signals, and to control the sending pin to send signals.
  • the apparatus 300 may be an electronic device for projecting a picture to a large screen for display in the above method embodiment, such as a mobile phone.
  • the apparatus 300 is the first electronic device that initiates screen projection; the first electronic device has a first application installed, the first application includes a first control and a second control, the first control is located on a first layer, and the second control is located on a second layer; one or more computer programs of the electronic device are stored in the memory, and when the computer programs are executed by the one or more processors, the electronic device is caused to perform the corresponding steps, in which:
  • the first display picture includes the first control and the second control;
  • the second display picture includes the first control and excludes the second control.
  • when the computer program is executed by the one or more processors, the electronic device is caused to perform the following steps:
  • parsing the control name; when the control type of the first control, the package name of the first application, and the interface information of the first control are parsed from the control name, determining the layer type of the first layer according to the control type, the package name, the interface information, and the size information;
  • when the control type of the first control and the interface information of the first control are parsed from the control name but the package name of the first application is not, obtaining the process identification number (PID) of the process that draws the first control, determining the source of the first control according to the PID, and determining the layer type of the first layer according to the control type, the source, the interface information, and the size information, where the source includes the package name of the first application.
  • when the computer program is executed by the one or more processors, the electronic device is caused to perform the following steps:
  • using the control name and the size information as retrieval keywords;
  • when a control matching the keywords is found, determining the layer type corresponding to that control as the layer type of the first layer.
  • when the computer program is executed by the one or more processors, the electronic device is caused to perform the following steps:
  • when the layer type of the first layer cannot be determined according to the control information of the first control, acquiring a picture currently displayed by the first application, where the currently displayed picture includes the first control;
  • determining the layer type of the first layer according to the content displayed by the first control in the currently displayed picture.
  • when the computer program is executed by the one or more processors, the electronic device is caused to perform the following steps:
  • parsing the control name; when the control type of the second control, the package name of the first application, and the interface information of the second control are parsed from the control name, determining the layer type of the second layer according to the control type, the package name, the interface information, and the size information;
  • when the control type of the second control and the interface information of the second control are parsed from the control name but the package name of the first application is not, obtaining the process identification number (PID) of the process that draws the second control, determining the source of the second control according to the PID, and determining the layer type of the second layer according to the control type, the source, the interface information, and the size information, where the source includes the package name of the first application.
  • when the computer program is executed by the one or more processors, the electronic device is caused to perform the following steps:
  • when the layer type of the second layer cannot be determined according to the control information of the second control, acquiring the picture currently displayed by the first application, where the currently displayed picture includes the second control;
  • determining the layer type of the second layer according to the content displayed by the second control in the currently displayed picture.
  • when the computer program is executed by the one or more processors, the electronic device is caused to perform the following steps:
  • determining the first layer filter rule corresponding to the first display buffer and the second layer filter rule corresponding to the second display buffer;
  • determining, according to the first layer filter rule, the layer type of the first layer, and the layer type of the second layer, that the first display picture includes the first layer and the second layer;
  • determining, according to the second layer filter rule, the layer type of the first layer, and the layer type of the second layer, that the second display picture includes the first layer.
  • when the computer program is executed by the one or more processors, the electronic device is caused to perform the following steps:
  • displaying, on the screen of the first electronic device, a layer filter rule decision interface operable by the user, where the layer filter rule decision interface includes the first control and the layer type of the first layer where the first control is located, and the second control and the layer type of the second layer where the second control is located;
  • generating the first layer filter rule in response to the user's operation of setting the first layer filter rule for the first display buffer;
  • generating the second layer filter rule in response to the user's operation of setting the second layer filter rule for the second display buffer.
  • when the computer program is executed by the one or more processors, the electronic device is caused to perform the following steps: acquiring a first screen aspect ratio of the first electronic device and a second screen aspect ratio of the second electronic device; when the first screen aspect ratio and the second screen aspect ratio differ, performing black-bar removal on the second display picture in the second display buffer and recording the de-black-barred second display picture to obtain the projection content; when the aspect ratios are the same, recording the second display picture in the second display buffer to obtain the projection content.
  • when the computer program is executed by the one or more processors, the electronic device is caused to perform the following steps: acquiring the display capability of the second electronic device; determining a video stream refresh frame rate according to the display capability; and recording the second display picture in the second display buffer according to the video stream refresh frame rate to obtain the projection content.
  • This embodiment also provides a computer-readable storage medium storing computer instructions; when the computer instructions run on an electronic device/network device (such as an OTA server or a CABE server), the electronic device/network device executes the above related method steps to implement the screen projection method in the above embodiment.
  • This embodiment also provides a computer program product, which, when running on a computer, causes the computer to execute the above related steps, so as to implement the screen projection method in the above embodiment.
  • embodiments of the present application also provide a chip (which may also be a component or module), which may include one or more processing circuits and one or more transceiver pins; the transceiver pins and the processing circuits communicate with each other through internal connection paths, and the processing circuits execute the above related method steps to implement the screen projection method in the above embodiment, to control the receiving pin to receive signals, and to control the sending pin to send signals.
  • the electronic device, computer-readable storage medium, computer program product, and chip provided in this embodiment are all used to execute the corresponding method provided above; therefore, for the beneficial effects they can achieve, refer to the beneficial effects of the corresponding method above, which are not repeated here.
  • the disclosed devices and methods may be implemented in other ways.
  • the device embodiments described above are only illustrative.
  • the division into modules or units is only a logical function division; in actual implementation, there may be other division methods.
  • multiple units or components may be combined or integrated into another device, or some features may be omitted or not implemented.
  • the mutual coupling, direct coupling, or communication connection shown or discussed may be implemented through some interfaces, and the indirect coupling or communication connection of devices or units may be in electrical, mechanical, or other forms.
  • a unit described as a separate component may or may not be physically separated, and a component shown as a unit may be one physical unit or multiple physical units, which may be located in one place or distributed to multiple different places. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, each unit may exist separately physically, or two or more units may be integrated into one unit.
  • the above integrated units may be implemented in the form of hardware or in the form of software functional units.
  • if an integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a readable storage medium.
  • based on such an understanding, the technical solution of the embodiments of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product; the software product is stored in a storage medium and includes several instructions for causing a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to execute all or part of the steps of the methods in the various embodiments of the present application.
  • the aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Controls And Circuits For Display Device (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

Embodiments of this application provide a screen projection method and an electronic device. A first electronic device identifies the control information of a first control located on a first layer and of a second control located on a second layer in a first application, and determines the layer type of the layer each control belongs to according to the respective control information. When the two layers are determined to have different layer types, display pictures containing different controls are generated according to the determined layer types and sent to different electronic devices for display. This separates the different layers of the interface of one application and allows display pictures containing different controls to be shown on different electronic devices, so that the first electronic device initiating the projection and the second electronic device receiving the projected content can display different pictures, better adapting to different projection scenarios.

Description

Screen projection method and electronic device
This application claims priority to Chinese Patent Application No. 202110958660.0, filed with the China Patent Office on August 20, 2021 and entitled "Screen Projection Method and Electronic Device", which is incorporated herein by reference in its entirety.
Technical Field
Embodiments of this application relate to the field of terminals, and in particular to a screen projection method and an electronic device.
Background
With the development of terminal technology, more and more terminals support the screen projection function. In home, work, teaching, and competitive gaming scenarios, a terminal can project its currently displayed picture onto a large screen, which greatly facilitates viewing the picture content.
However, with current projection methods, the picture content projected onto the large screen is identical to the picture content displayed on the terminal. For a conference application (software), a meeting often involves operations such as adding participants, setting a chairperson, muting participants, and selecting shared content. With existing projection methods, operations on the conference control buttons in the terminal interface are also projected onto the large screen, which interferes with the video stream content displayed on the large screen and degrades the experience of users watching the meeting content through the large screen.
Summary
To solve the above technical problem, this application proposes a screen projection method and an electronic device. In this method, the layers in the interface corresponding to a first application are identified, pictures containing different controls are then generated according to the identified layer types and the requirements, and the generated pictures are transmitted to different electronic devices, so that the picture finally displayed on the electronic device initiating the projection differs from the picture displayed on the electronic device receiving the projected content, better adapting to different projection scenarios.
According to a first aspect, a screen projection method is provided, applied to a first electronic device that initiates screen projection. The first electronic device has a first application installed; the first application includes a first control and a second control; the first control is located on a first layer and the second control is located on a second layer. The method includes: obtaining control information of the first control and control information of the second control; determining the layer type of the first layer according to the control information of the first control; determining the layer type of the second layer according to the control information of the second control, where the layer type of the first layer differs from the layer type of the second layer; and generating a first display picture and a second display picture according to the layer type of the first layer and the layer type of the second layer, displaying the first display picture on the screen of the first electronic device, and projecting the second display picture onto the screen of a second electronic device for display, where the first display picture includes the first control and the second control, and the second display picture includes the first control but not the second control. In this way, by identifying the control information of the first control on the first layer and of the second control on the second layer, determining from that information the layer type of each control's layer and, when the two layer types differ, generating display pictures that contain different controls and sending them to different electronic devices for display (for example, sending the first display picture, which includes the first and second controls, to the first electronic device, and the second display picture, which includes the first control but not the second, to the second electronic device), the different layers of one application's interface are separated and shown on different devices, so that the first electronic device initiating the projection and the second electronic device receiving the projected content can display different pictures, better adapting to different projection scenarios.
For example, the electronic device initiating the projection is a mobile phone.
For example, the first application is a conference application.
For example, when the first application is a conference application, the interface includes at least a video stream playback layer and a conference control button layer.
For example, the video stream playback layer includes at least a video stream playback control, and the conference control button layer includes at least a conference control button control.
For example, when the first application is a conference application, the first display picture shown on the screen of the first electronic device includes the controls of the layers of all layer types, such as the first control on the first layer and the second control on the second layer, while the second display picture shown on the screen of the second electronic device includes only the controls of the video stream playback layer, such as the first control on the first layer but not the second control on the second layer.
According to the first aspect, determining the layer type of the first layer according to the control information of the first control includes: extracting the control name and the size information of the first control from the control information of the first control; parsing the control name; when the control type of the first control, the package name of the first application, and the interface information of the first control are parsed from the control name, determining the layer type of the first layer according to the control type, the package name, the interface information, and the size information; when the control type of the first control and the interface information of the first control are parsed from the control name but the package name of the first application is not, obtaining the process identification number (PID) of the process that draws the first control, determining the source of the first control according to the PID, and determining the layer type of the first layer according to the control type, the source, the interface information, and the size information, where the source includes the package name of the first application. In this way, the purpose of the first control can be determined from its control type, and the application type of the first application from its package name; combined with the size information of the first control and the specific interface information within the first application, the layer type of the layer containing the first control can be accurately identified for most applications on the market.
For example, when the package name of the first application is not parsed from the control name of the first control, the PID of the process drawing the first control is obtained; since the PID is unique, the source that created the process corresponding to the PID, i.e., the first application, can be determined from it, yielding the package name of the first application. Thus, whether or not the control name of the first control contains the package name of the first application, the layer type of the first layer can be accurately determined from the control information of the first control.
According to the first aspect or any implementation thereof, before parsing the control name, the method further includes: using the control name and the size information as retrieval keywords; searching a layer identification record library for a control matching the keywords; when a control matching the keywords is found, determining the layer type corresponding to that control as the layer type of the first layer; when no matching control is found, performing the step of parsing the control name. In this way, a table lookup based on the control name and size information is performed before each determination of the layer type of the first layer from the obtained control information; when a matching control is found, the layer type recorded for that control is used directly, so no analysis of the control information of the first control is needed and processing is faster; only when no match is found is the control information analyzed, balancing speed, avoiding resource occupation, and still determining a suitable layer type.
According to the first aspect or any implementation thereof, when the layer type of the first layer cannot be determined according to the control information of the first control, a picture currently displayed by the first application is obtained, where the currently displayed picture includes the first control; the layer type of the first layer is determined according to the content displayed by the first control in the currently displayed picture. In this way, when the control information of the first control is insufficient, analyzing the picture currently displayed by the first application allows the layer type of the first layer containing the first control to be accurately determined, guaranteeing the subsequent separation and drawing of layers based on layer types.
According to the first aspect or any implementation thereof, determining the layer type of the second layer according to the control information of the second control includes: extracting the control name and the size information of the second control from the control information of the second control; parsing the control name; when the control type of the second control, the package name of the first application, and the interface information of the second control are parsed from the control name, determining the layer type of the second layer according to the control type, the package name, the interface information, and the size information; when the control type of the second control and the interface information of the second control are parsed from the control name but the package name of the first application is not, obtaining the PID of the process that draws the second control, determining the source of the second control according to the PID, and determining the layer type of the second layer according to the control type, the source, the interface information, and the size information, where the source includes the package name of the first application. In this way, the purpose of the second control can be determined from its control type, and the application type of the first application from its package name; combined with the size information of the second control and the specific interface information within the first application, the layer type of the layer containing the second control can be accurately identified for most applications on the market.
According to the first aspect or any implementation thereof, when the layer type of the second layer cannot be determined according to the control information of the second control, the picture currently displayed by the first application is obtained, where the currently displayed picture includes the second control; the layer type of the second layer is determined according to the content displayed by the second control in the currently displayed picture. In this way, when the control information of the second control is insufficient, analyzing the picture currently displayed by the first application allows the layer type of the second layer containing the second control to be accurately determined, guaranteeing the subsequent separation and drawing of layers based on layer types.
According to the first aspect or any implementation thereof, generating the first display picture and the second display picture according to the layer type of the first layer and the layer type of the second layer, displaying the first display picture on the screen of the first electronic device, and projecting the second display picture onto the screen of the second electronic device includes: generating the first display picture and the second display picture according to the two layer types, caching the first display picture in a first display buffer and the second display picture in a second display buffer; taking the first display picture out of the first display buffer in cache order and displaying it on the screen of the first electronic device; recording the second display picture in the second display buffer to obtain projection content; and sending the projection content to the second electronic device, for the second electronic device to decode the projection content, obtain the second display picture, and display it on the screen of the second electronic device. In this way, by caching the display pictures destined for the screens of different electronic devices in different display buffers and then fetching them from the corresponding buffers for display, cached content can be processed in batches while thread congestion is avoided, ensuring smooth delivery of the transmitted display pictures.
According to the first aspect or any implementation thereof, generating the first and second display pictures according to the two layer types and caching them in the first and second display buffers includes: determining the first layer filter rule corresponding to the first display buffer and the second layer filter rule corresponding to the second display buffer; determining, according to the first layer filter rule, the layer type of the first layer, and the layer type of the second layer, that the first display picture includes the first layer and the second layer; obtaining the resources of the first control on the first layer and the resources of the second control on the second layer, generating the first display picture from these resources, and caching it in the first display buffer; determining, according to the second layer filter rule, the layer type of the first layer, and the layer type of the second layer, that the second display picture includes the first layer; and obtaining the resources of the first control on the first layer, generating the second display picture from them, and caching it in the second display buffer. By setting different layer filter rules for different display buffers, the controls that each buffered display picture must include can be determined, when generating a display picture, from the filter rule of the corresponding display buffer and the determined layer types, and the resources of those controls can then be obtained to draw display pictures suited to the displays of different electronic devices.
According to the first aspect or any implementation thereof, determining the first layer filter rule corresponding to the first display buffer and the second layer filter rule corresponding to the second display buffer includes: obtaining a first device identifier of the first electronic device and a second device identifier of the second electronic device; looking up, in a layer filter rule table, the layer filter rule matching the first device identifier and determining it as the first layer filter rule corresponding to the first display buffer; and looking up, in the layer filter rule table, the layer filter rule matching the second device identifier and determining it as the second layer filter rule corresponding to the second display buffer. By predetermining and storing the layer filter rules, an existing rule can be fetched directly when layer filtering is needed, which is convenient and fast.
According to the first aspect or any implementation thereof, determining the first layer filter rule corresponding to the first display buffer and the second layer filter rule corresponding to the second display buffer includes: displaying, on the screen of the first electronic device, a layer filter rule decision interface operable by the user, where the interface includes the first control and the layer type of the first layer where the first control is located, and the second control and the layer type of the second layer where the second control is located; generating the first layer filter rule in response to the user's operation of setting the first layer filter rule for the first display buffer; and generating the second layer filter rule in response to the user's operation of setting the second layer filter rule for the second display buffer. By providing a user entry point and letting the user decide the layer filter rules of the current first application, user participation is increased and projection scenarios can better match different user needs.
According to the first aspect or any implementation thereof, recording the second display picture in the second display buffer to obtain the projection content includes: obtaining a first screen aspect ratio of the first electronic device and a second screen aspect ratio of the second electronic device; when the first screen aspect ratio and the second screen aspect ratio differ, performing black-bar removal on the second display picture in the second display buffer and recording the de-black-barred second display picture to obtain the projection content; when the first and second screen aspect ratios are the same, recording the second display picture in the second display buffer to obtain the projection content. Introducing black-bar removal ensures that the second display picture shown on the second electronic device has no black bars, or that the black bars are reduced as much as possible, improving the viewing experience of users watching the projected picture.
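The aspect-ratio comparison above can be sketched as a crop computation: when the source and target ratios differ, the centered sub-rectangle of the buffered picture whose aspect ratio matches the target screen is kept, and the surrounding black-bar region is discarded before recording. This is an illustrative sketch under assumed integer pixel sizes, not the patent's implementation:

```python
def deletterbox(src_w, src_h, dst_w, dst_h):
    """Return the (x, y, w, h) sub-rectangle of a src_w x src_h picture whose
    aspect ratio matches the dst_w x dst_h screen, i.e. the region left after
    cropping the black bars that a mismatched fit would introduce."""
    if src_w * dst_h == dst_w * src_h:
        return (0, 0, src_w, src_h)  # same aspect ratio: record as-is
    if src_w * dst_h > dst_w * src_h:
        # Source is wider than the target: crop the left/right margins.
        new_w = round(src_h * dst_w / dst_h)
        return ((src_w - new_w) // 2, 0, new_w, src_h)
    # Source is taller than the target: crop the top/bottom margins.
    new_h = round(src_w * dst_h / dst_w)
    return (0, (src_h - new_h) // 2, src_w, new_h)
```

For example, a 1080 x 2340 phone picture projected to a 1920 x 1080 screen keeps a centered 1080 x 608 band, so the recorded stream carries no letterbox bars.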
According to the first aspect or any implementation thereof, recording the second display picture in the second display buffer to obtain the projection content includes: obtaining the display capability of the second electronic device; determining a video stream refresh frame rate according to the display capability; and recording the second display picture in the second display buffer according to the video stream refresh frame rate to obtain the projection content. In this way, when recording the projection content, the first electronic device first negotiates the video stream refresh frame rate with the second electronic device, which both ensures that the transmitted video stream can be displayed properly on the second electronic device and avoids excessive occupation of bandwidth.
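The negotiation above amounts to capping the recording rate by what the sink can display. A minimal sketch; the function name and the optional list of discrete supported rates are assumptions, not part of the patent:

```python
def negotiate_refresh_rate(source_fps, sink_max_fps, sink_supported=None):
    """Pick the recording frame rate for the projected video stream:
    never above what the source renders, never above what the sink
    (the large-screen device) reports it can display."""
    rate = min(source_fps, sink_max_fps)
    if sink_supported:
        # Snap down to the highest supported rate not exceeding `rate`.
        candidates = [r for r in sink_supported if r <= rate]
        rate = max(candidates) if candidates else min(sink_supported)
    return rate
```

A 60 fps phone projecting to a 30 Hz screen would record at 30 fps, halving the bandwidth relative to mirroring the source rate blindly.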
According to the first aspect or any implementation thereof, the first application is a conference application; the first control is a video control, and the layer type of the first layer is the video stream playback layer; the second control is a button control, and the layer type of the second layer is the conference control button layer. Thus the first display picture shown on the screen of the first electronic device includes both the participants' video pictures shown by the video stream playback control and the conference control button controls operable by the user, while the second display picture shown on the screen of the second electronic device includes only the participants' video pictures shown by the video stream playback control and not the operable conference control button controls. A user attending the meeting through the first electronic device can both watch the meeting picture and operate the conference control buttons, while users watching the meeting through the second electronic device are not disturbed by operations performed on the first electronic device side.
According to the first aspect or any implementation thereof, the first application is a conference application; the first control is a whiteboard annotation control, and the layer type of the first layer is the whiteboard annotation layer; the second control is a button control, and the layer type of the second layer is the conference control button layer. Thus the first display picture shown on the screen of the first electronic device includes both the whiteboard content and the conference control button controls operable by the user, while the second display picture shown on the screen of the second electronic device includes only the whiteboard content and not the operable conference control button controls. A user attending the meeting through the first electronic device can both watch the drawing on the whiteboard and operate the conference control buttons, while users watching the meeting through the second electronic device are not disturbed by operations performed on the first electronic device side.
According to a second aspect, an electronic device is provided. The electronic device is a first electronic device with a first application installed; the first application includes a first control and a second control; the first control is located on a first layer and the second control on a second layer. The electronic device includes: one or more processors; a memory; and one or more computer programs stored in the memory which, when executed by the one or more processors, cause the electronic device to perform the following steps: obtaining control information of the first control and control information of the second control; determining the layer type of the first layer according to the control information of the first control; determining the layer type of the second layer according to the control information of the second control, where the layer type of the first layer differs from the layer type of the second layer; and generating a first display picture and a second display picture according to the two layer types, displaying the first display picture on the screen of the first electronic device, and projecting the second display picture onto the screen of a second electronic device for display, where the first display picture includes the first control and the second control, and the second display picture includes the first control but not the second control.
According to the second aspect, when the computer program is executed by the one or more processors, the electronic device is caused to perform the following steps: extracting the control name and size information of the first control from its control information; parsing the control name; when the control type of the first control, the package name of the first application, and the interface information of the first control are parsed from the control name, determining the layer type of the first layer according to the control type, the package name, the interface information, and the size information; when the control type and interface information are parsed but the package name is not, obtaining the process identification number (PID) of the process that draws the first control, determining the source of the first control according to the PID, and determining the layer type of the first layer according to the control type, the source, the interface information, and the size information, where the source includes the package name of the first application.
According to the second aspect or any implementation thereof, when the computer program is executed by the one or more processors, the electronic device is caused to perform the following steps: using the control name and the size information as retrieval keywords; searching a layer identification record library for a control matching the keywords; when a matching control is found, determining the layer type corresponding to that control as the layer type of the first layer; when no matching control is found, performing the step of parsing the control name.
According to the second aspect or any implementation thereof, when the computer program is executed by the one or more processors, the electronic device is caused to perform the following steps: when the layer type of the first layer cannot be determined from the control information of the first control, acquiring the picture currently displayed by the first application, where the currently displayed picture includes the first control; determining the layer type of the first layer according to the content displayed by the first control in the currently displayed picture.
According to the second aspect or any implementation thereof, when the computer program is executed by the one or more processors, the electronic device is caused to perform the following steps: extracting the control name and size information of the second control from its control information; parsing the control name; when the control type of the second control, the package name of the first application, and the interface information of the second control are parsed from the control name, determining the layer type of the second layer according to the control type, the package name, the interface information, and the size information; when the control type and interface information are parsed but the package name is not, obtaining the PID of the process that draws the second control, determining the source of the second control according to the PID, and determining the layer type of the second layer according to the control type, the source, the interface information, and the size information, where the source includes the package name of the first application.
According to the second aspect or any implementation thereof, when the computer program is executed by the one or more processors, the electronic device is caused to perform the following steps: when the layer type of the second layer cannot be determined from the control information of the second control, acquiring the picture currently displayed by the first application, where the currently displayed picture includes the second control; determining the layer type of the second layer according to the content displayed by the second control in the currently displayed picture.
According to the second aspect or any implementation thereof, when the computer program is executed by the one or more processors, the electronic device is caused to perform the following steps: generating the first display picture and the second display picture according to the layer types of the first and second layers, caching the first display picture in a first display buffer and the second display picture in a second display buffer; taking the first display picture out of the first display buffer in cache order and displaying it on the screen of the first electronic device; recording the second display picture in the second display buffer to obtain projection content; sending the projection content to the second electronic device, for the second electronic device to decode it, obtain the second display picture, and display it on the screen of the second electronic device.
According to the second aspect or any implementation thereof, when the computer program is executed by the one or more processors, the electronic device is caused to perform the following steps: determining the first layer filter rule corresponding to the first display buffer and the second layer filter rule corresponding to the second display buffer; determining, according to the first layer filter rule and the layer types of the first and second layers, that the first display picture includes the first layer and the second layer; obtaining the resources of the first control on the first layer and of the second control on the second layer, generating the first display picture from these resources, and caching it in the first display buffer; determining, according to the second layer filter rule and the layer types of the first and second layers, that the second display picture includes the first layer; obtaining the resources of the first control on the first layer, generating the second display picture from them, and caching it in the second display buffer.
According to the second aspect or any implementation thereof, when the computer program is executed by the one or more processors, the electronic device is caused to perform the following steps: obtaining a first device identifier of the first electronic device and a second device identifier of the second electronic device; looking up, in a layer filter rule table, the layer filter rule matching the first device identifier and determining it as the first layer filter rule corresponding to the first display buffer; looking up, in the layer filter rule table, the layer filter rule matching the second device identifier and determining it as the second layer filter rule corresponding to the second display buffer.
According to the second aspect or any implementation thereof, when the computer program is executed by the one or more processors, the electronic device is caused to perform the following steps: displaying, on the screen of the first electronic device, a layer filter rule decision interface operable by the user, where the interface includes the first control and the layer type of the first layer where the first control is located, and the second control and the layer type of the second layer where the second control is located; generating the first layer filter rule in response to the user's operation of setting it for the first display buffer; generating the second layer filter rule in response to the user's operation of setting it for the second display buffer.
According to the second aspect or any implementation thereof, when the computer program is executed by the one or more processors, the electronic device is caused to perform the following steps: obtaining a first screen aspect ratio of the first electronic device and a second screen aspect ratio of the second electronic device; when the first and second screen aspect ratios differ, performing black-bar removal on the second display picture in the second display buffer and recording the de-black-barred second display picture to obtain the projection content; when the aspect ratios are the same, recording the second display picture in the second display buffer to obtain the projection content.
According to the second aspect or any implementation thereof, when the computer program is executed by the one or more processors, the electronic device is caused to perform the following steps: obtaining the display capability of the second electronic device; determining a video stream refresh frame rate according to the display capability; recording the second display picture in the second display buffer according to the video stream refresh frame rate to obtain the projection content.
The second aspect and any implementation thereof correspond to the first aspect and any implementation thereof, respectively. For the technical effects of the second aspect and any implementation thereof, refer to those of the first aspect and the corresponding implementation; details are not repeated here.
According to a third aspect, a computer-readable storage medium is provided. The medium includes a computer program which, when run on an electronic device, causes the electronic device to perform the screen projection method of the first aspect or any implementation thereof. For example, the electronic device may be a mobile phone.
The third aspect and any implementation thereof correspond to the first aspect and any implementation thereof, respectively. For the technical effects of the third aspect and any implementation thereof, refer to those of the first aspect and the corresponding implementation; details are not repeated here.
According to a fourth aspect, an embodiment of this application provides a computer program including instructions for performing the method of the first aspect or any possible implementation thereof.
The fourth aspect and any implementation thereof correspond to the first aspect and any implementation thereof, respectively. For the technical effects of the fourth aspect and any implementation thereof, refer to those of the first aspect and the corresponding implementation; details are not repeated here.
According to a fifth aspect, an embodiment of this application provides a chip. The chip includes a processing circuit and transceiver pins; the transceiver pins and the processing circuit communicate with each other through internal connection paths, and the processing circuit performs the method of the second aspect or any possible implementation thereof, to control the receiving pin to receive signals and to control the sending pin to send signals. For example, the chip is a chip of an electronic device, and the electronic device may be a mobile phone.
The fifth aspect and any implementation thereof correspond to the first aspect and any implementation thereof, respectively. For the technical effects of the fifth aspect and any implementation thereof, refer to those of the first aspect and the corresponding implementation; details are not repeated here.
Brief Description of the Drawings
FIG. 1 is a first schematic diagram of a scenario of enabling the screen projection function;
FIG. 2 is a second schematic diagram of a scenario of enabling the screen projection function;
FIG. 3 is a first schematic diagram of content displayed by a terminal and a large screen after projection using the screen projection method provided by an embodiment of this application;
FIG. 4 is a schematic diagram of the software structure of a mobile phone;
FIG. 5 is a schematic diagram of the layer structure of a picture displayed by a conference application;
FIG. 6 is a schematic diagram of the modules included in a mobile phone and a large screen;
FIG. 7 is a schematic flowchart of the screen projection method provided by an embodiment of this application;
FIG. 8 is a schematic diagram of control information obtained in the screen projection method provided by an embodiment of this application;
FIG. 9 is a schematic diagram of an interface for a user to decide layer filter rules according to an embodiment of this application;
FIG. 10 is a first schematic diagram of module interaction for displaying different pictures on a mobile phone and a large screen using the screen projection method provided by an embodiment of this application;
FIG. 11 is a first sequence diagram of drawing the pictures to be displayed using the screen projection method provided by an embodiment of this application;
FIG. 12 is a second sequence diagram of drawing the pictures to be displayed using the screen projection method provided by an embodiment of this application;
FIG. 13 is a second schematic diagram of content displayed by a terminal and a large screen after projection using the screen projection method provided by an embodiment of this application;
FIG. 14a and FIG. 14b are third schematic diagrams of content displayed by a terminal and a large screen after projection using the screen projection method provided by an embodiment of this application;
FIG. 15 is a schematic diagram of module interaction for displaying different pictures on a mobile phone and a large screen using the screen projection method provided by an embodiment of this application;
FIG. 16a is a schematic diagram of the layer relationship between the physical screen and the virtual screen of the mobile phone in an embodiment of this application;
FIG. 16b is a schematic diagram of a method for establishing a screen coordinate system in an embodiment of this application;
FIG. 17 is a schematic structural diagram of an apparatus provided by an embodiment of this application.
Detailed Description
The technical solutions in the embodiments of this application are described below clearly and completely with reference to the accompanying drawings. Evidently, the described embodiments are only some rather than all of the embodiments of this application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of this application without creative effort fall within the protection scope of this application.
The term "and/or" herein merely describes an association relationship between associated objects and indicates that three relationships may exist. For example, A and/or B may indicate three cases: only A exists, both A and B exist, and only B exists.
The terms "first", "second", and so on in the specification and claims of the embodiments of this application are used to distinguish different objects rather than to describe a particular order of objects. For example, a first target object and a second target object are used to distinguish different target objects, not to describe a particular order of target objects.
In the embodiments of this application, words such as "exemplary" or "for example" indicate an example, illustration, or explanation. Any embodiment or design described as "exemplary" or "for example" in the embodiments of this application should not be construed as preferable to or more advantageous than other embodiments or designs; rather, such words are intended to present related concepts in a concrete manner.
In the description of the embodiments of this application, unless otherwise stated, "multiple" means two or more. For example, multiple processing units means two or more processing units; multiple systems means two or more systems.
Before the technical solutions of the embodiments of this application are described, the application scenarios of the embodiments are explained with reference to the accompanying drawings. For ease of description, the embodiments of this application take the picture of a meeting held in a conference application as the content to be projected, a mobile phone as the electronic device on the projecting end that projects the meeting picture, and a television as the large screen that displays the projected picture.
The process of enabling the screen projection function on the mobile phone is described below with reference to FIG. 1.
Referring to FIG. 1, for example, the display interface of the mobile phone 100 shows the settings page 10a, which includes one or more controls, such as sound and vibration settings, notification settings, device connection settings, app settings, battery settings, storage settings, and security settings.
For example, when the user taps Device connection 10a-1 on the settings page 10a, the phone responds to the user's operation and jumps from the settings page 10a to the device connection page 10b.
For example, the device connection page 10b includes one or more controls, such as Bluetooth settings, NFC (Near Field Communication) settings, phone projection settings, USB (Universal Serial Bus) settings, and print settings.
For example, when the user taps Phone projection 10b-1 on the device connection page 10b, the phone responds to the user's operation and jumps from the device connection page 10b to the phone projection page 10c.
For example, the phone projection page 10c includes a control for enabling the phone projection function, such as the wireless projection setting option 10c-1 shown in FIG. 1.
Understandably, besides being named "Wireless projection" as in FIG. 1, the setting option on page 10c for enabling the phone projection function may, depending on the phone model and system version, also be named "Multi-screen interaction", "Screen mirroring", and so on in practical scenarios; these are not enumerated here, and this embodiment does not limit them.
For example, when the user taps the wireless projection setting option 10c-1 on the phone projection page 10c, the phone responds to the user's operation by displaying an available-device list in the blank area of the phone projection page 10c and uses the control 10c-2 to display, in the list's display area, the text "Searching for available devices... Make sure the wireless projection function of the large-screen device is enabled".
Understandably, FIG. 1 shows only one specific display style of the available-device list when searching for available large-screen devices; it is an example listed for better understanding of the technical solution of this embodiment and not the only limitation on it. In practical scenarios, tapping the wireless projection setting option 10c-1 may instead cause the phone to jump from page 10c to a dedicated page for displaying the available-device list.
For example, after available large-screen devices are found, the control 10c-3 displays the found large-screen devices, such as Large screen 1 and Large screen 2, in the list's display area.
Understandably, in practical scenarios the found large-screen devices may be televisions, projectors, and so on; they are not enumerated here, and this embodiment does not limit them.
For example, in practical scenarios, when the large-screen device is a television, the presented screen may be a single television screen or one large screen assembled from multiple television screens; this application does not limit this.
For example, when the user taps Large screen 1 (10c-3-1) on the phone projection page 10c, the phone responds to the user's operation by initiating a pairing request to Large screen 1 and establishing a network connection, after which the content shown on the phone's display interface is projected onto Large screen 1.
This completes the operation of enabling the projection function on the phone with the settings page as the entry.
In addition, the embodiments of this application provide another way to enable the projection function, described below with reference to FIG. 2.
Referring to FIG. 2, for example, the display interface of the mobile phone 100 shows the picture 20 of an ongoing meeting in the conference application. When the user swipes down from the top edge of the phone along the arrow direction, the phone responds to the user's operation by displaying the pull-down notification panel 30 in the top edge area of the display interface.
For example, the pull-down notification panel 30 includes one or more controls, such as a time bar, Wi-Fi settings, Bluetooth settings, mobile data settings, auto-rotate settings, and screen mirroring settings.
For example, when the user taps Screen mirroring 30-1 in the pull-down notification panel 30, the phone responds to the user's operation by popping up a device-search interface on the display and, after available large-screen devices are found, displaying them in that interface for the user to select the large-screen device to pair with and establish a network connection to.
Understandably, the device-search interface popped up on the display may cover the whole display in full screen or only a partial area; this application does not limit the specific implementation.
This completes the operation of enabling the projection function on the phone with the pull-down notification panel as the entry.
For example, after the phone and the large screen establish a network connection by enabling the projection function as shown in FIG. 1 or FIG. 2, and the current meeting picture is projected using the screen projection method provided by the embodiments of this application, the phone's operating system determines the layer that each control of the picture to be displayed belongs to, filters the picture to be shown on the phone screen and the picture to be shown on the large screen (e.g., a television screen) according to preset filter rules, composes and renders the filtered layers, and finally sends the resulting pictures to the respective displays, so that the phone and the large screen display different content during the meeting.
Referring to FIG. 3, for example, the display interface of the phone 100 shows the meeting picture 20, which includes the video stream playback layer 20-1 and the conference control button layer 20-2.
Understandably, the video stream playback layer 20-1 includes one or more video stream playback controls, which are used to display the video streams obtained during the meeting.
In addition, it should be noted that, in one example, multiple video stream playback controls can be integrated into one video stream playback layer; in this scenario, the server corresponding to the conference application can merge the video streams of the multiple playback controls into one stream when transmitting them to the phone.
Correspondingly, in another example, each video stream playback control can correspond to its own video stream playback layer; in this scenario, the server corresponding to the conference application needs to transmit video streams to the different playback controls separately.
It should be understood that the above description is an example listed for better understanding of the technical solution of this embodiment and not the only limitation on it.
In addition, referring to FIG. 3, the conference control button layer 20-2 may include one or more controls, such as mute settings, video settings, share settings, participant settings, and more settings; they are not enumerated here, and this application does not limit them.
For the picture 20 displayed by the phone 100 in FIG. 3, assume the preset filter rule is that the picture content projected onto the large screen 200 includes only the video stream playback controls used to display the video streams, i.e., the only layer included in the picture on the large screen 200 is the video stream playback layer. On this basis, after the picture 20 on the display interface of the phone 100 is processed by the screen projection method of the embodiments, the picture 20' finally projected onto the display interface of the large screen 200 includes only the mirrored content 20-1' of the video stream playback layer 20-1. Thus, even if the user operates the conference control buttons in the conference control button layer 20-2 on the phone (muting, adding participants, etc.), the display interface of the large screen always shows the mirrored content 20-1' of the video stream playback layer 20-1; the operations on the conference control buttons in the layer 20-2 are not projected onto the large screen and therefore do not disturb the user watching the video stream picture displayed on the large screen, achieving the separation of control and display.
In addition, it should be noted that the names and numbers of the controls shown on the phone display interface and in the pull-down notification panel in FIG. 1 to FIG. 3 and in the figures of subsequent embodiments are only schematic examples; this application does not limit them.
In addition, it should be noted that the screen projection method provided by the embodiments of this application is applicable not only to one-to-one projection scenarios but also to one-to-many projection scenarios, as long as the phone or other electronic device that enables the projection function supports one-to-one and one-to-many projection; the specific implementation details of one-to-one and one-to-many projection are not described in this application.
In addition, understandably, the embodiments of this application are described with a mobile phone as an example; in other embodiments, this application is equally applicable to laptop computers, desktop computers, handheld computers (such as tablets), and other electronic devices supporting the projection function.
To better describe the screen projection method provided by the embodiments of this application, the software structure of the phone is described with reference to FIG. 4, taking the phone as the electronic device that has the conference application installed and performs the projection.
Referring to FIG. 4, FIG. 4 is a block diagram of the software structure of the phone 100 according to an embodiment of this application. The layered architecture divides the software into several layers, each with a clear role and division of labor; the layers communicate with each other through software interfaces.
For ease of description, the embodiments of this application take the Android system as an example to describe the software structure of the phone 100 running the Android system.
Specifically, in some embodiments, the Android system is divided into five layers: from top to bottom, the application layer, the application framework layer (also called the system framework layer), the system library and Android runtime layer, the hardware abstraction layer (HAL), and the kernel layer.
The application layer may include applications (hereinafter "apps") such as Camera, Gallery, Calendar, WLAN, Conference, Music, and Video. It should be noted that the apps shown in the application layer in FIG. 4 are only examples and do not limit this application. Understandably, the apps included in the application layer do not constitute a specific limitation on the phone 100; in other embodiments of this application, compared with the apps shown in FIG. 4, the phone 100 may include more or fewer apps, and different phones 100 may include the same apps or entirely different ones.
The application framework layer provides the application programming interface (API) and programming framework for the apps in the application layer, including various components and services to support developers' Android development. The application framework layer also includes some predefined functions. As shown in FIG. 4, it may include a window manager, content providers, a view system, a resource manager, a notification manager, a camera service, and so on.
The window manager is used to manage window programs. The window manager can obtain the display size, determine whether there is a status bar, lock the screen, capture the screen, and so on.
Content providers are used to store and retrieve data and make the data accessible to applications. The data may include videos, images, audio, calls made and received, browsing history and bookmarks, the phone book, and so on.
The view system includes visual controls, such as controls for displaying text and controls for displaying pictures, and can be used to build applications. A display interface may consist of one or more views; for example, a display interface including an SMS notification icon may include a view for displaying text and a view for displaying pictures.
The resource manager provides various resources for applications, such as localized strings, icons, pictures, layout files, and video files.
The notification manager enables applications to display notification information in the status bar and can be used to convey messages of a notifying nature, which may disappear automatically after a short stay without user interaction. For example, the notification manager is used to announce download completion, message reminders, and so on. The notification manager may also present notifications in the top status bar of the system as charts or scrolling text, such as notifications of applications running in the background, or as dialog windows on the screen; for example, prompting text information in the status bar, sounding a prompt tone, vibrating the electronic device, or flashing the indicator light.
The camera service is used to invoke the camera (including the front camera and/or the rear camera) in response to an application's request.
In addition, to implement the screen projection method provided by the embodiments of this application, the application framework layer further includes a display management framework and a display rendering framework. The display management framework is used to identify the layer that each control included in the conference app belongs to, and to mark and record it; the display rendering framework is used to filter the layers identified and marked by the display management framework according to preset filter rules and to compose and render the filtered layers.
For example, for the picture 20 shown on the display interface of the phone 100 in FIG. 3, in one example it is composed and rendered from the two layers shown in FIG. 5: the video stream playback layer 20-1 and the conference control button layer 20-2.
Specifically, the display management framework in the application framework layer identifies the controls that the conference app installed in the application layer requests to draw, determines the layer type of each control, and marks it. For example, through identification it determines that a control for displaying the video stream, such as a SurfaceView control, corresponds to the video stream playback layer, i.e., 20-1 in FIG. 5, and that conference control buttons such as the mute option setting control, video option setting control, share option setting control, and participant option setting control correspond to the conference control button layer, i.e., 20-2 in FIG. 5.
The system library and Android runtime layer includes the system library and the Android runtime. The system library may include multiple function modules, such as a surface manager, a 2D graphics engine, a 3D graphics processing library (e.g., OpenGL ES), a media library, and a font library. The browser kernel is responsible for interpreting web page syntax (such as HTML and JavaScript, applications under the standard generalized markup language) and rendering (displaying) web pages; the 2D graphics engine is used to implement 2D drawing, image rendering, composition, and layer processing; the 3D graphics processing library is used to implement 3D drawing, image rendering, composition, and layer processing; the media library is used to implement the input of different streaming media; the font library is used to implement the input of different fonts.
The Android runtime is responsible for scheduling and managing the Android system and specifically includes the core library and the virtual machine. The core library consists of two parts: one part is the function functions that the Java language needs to call, and the other is the Android core library; the virtual machine is used to run Android applications developed in Java.
In addition, it should be noted that, for Android applications to run in the virtual machine, both the application layer and the application framework layer need to run in the virtual machine. When running an Android application, the virtual machine executes the Java files of the application layer and the application framework layer as binary files.
In addition, it should be noted that, in practical application, the virtual machine performs functions such as object life-cycle management, stack management, thread management, security and exception management, and garbage collection.
Understandably, the components contained in the application framework layer and the system library and runtime layer shown in FIG. 4 do not constitute a specific limitation on the phone 100. In practical application, the phone 100 may include more or fewer components than shown, combine certain components, split certain components, or arrange components differently.
The HAL layer is the interface layer between the operating system kernel and the hardware circuits. The HAL layer includes but is not limited to the audio hardware abstraction layer (Audio HAL) and the camera hardware abstraction layer (Camera HAL). The Audio HAL is used to process audio streams, for example, noise reduction and directional enhancement of audio streams; the Camera HAL is used to process image streams.
The kernel layer is the layer between hardware and software. The kernel layer contains at least the display driver, camera driver, audio driver, and sensor driver. The hardware may include components such as a camera, a display, a microphone, a processor, and a memory.
In the embodiments of this application, the display in the hardware can display the meeting picture, the camera in the hardware can be used to capture images, and the microphone in the hardware can be used to capture sound signals and generate analog audio electrical signals.
In addition, it should be noted that, in practical application scenarios, to implement the screen projection method provided by the embodiments of this application, the electronic device used to project content (such as a mobile phone) and the large screen used to display the content projected by the electronic device (such as a television) need to include at least the content shown in FIG. 6.
Taking the meeting picture as the content to be projected, referring to FIG. 6, the phone used to project content needs at least: the conference application installed in the application layer; introduced into the application framework layer, a layer identification record library for recording layer identification information, a layer identification module for identifying the layer corresponding to each control, a layer filtering module for filtering the layers marked by the layer identification module, and a composition and rendering module for composing and rendering the layers filtered by the layer filtering module; and, in the system library and Android runtime layer, the collaboration assistant and the network communication module.
The layer identification record library and the layer identification module are located in the display management framework of the application framework layer; the layer filtering module and the composition and rendering module are located in the display rendering framework of the application framework layer.
Since the large screen used to display the content projected by the phone only needs to display the projected content and does not need to perform layer identification or composition and rendering, the large screen needs at least: the projection display module in the application framework layer and a composition and rendering module for composing and rendering the content transmitted from the phone, as well as the collaboration assistant and the network communication module in the system library and Android runtime layer.
Based on the above structure, in a practical application scenario, the user triggers the operation of enabling the projection function in the manner shown in FIG. 1 or FIG. 2; the phone responds to the user's operation, uses the collaboration assistant in the system library and Android runtime layer to match a large screen that also has the collaboration assistant, i.e., supports the projection function, and establishes a communication connection with the matched large screen through the network communication module.
After the user opens the conference application and joins a meeting, the conference application in the application layer exchanges data with the layer identification record library and the layer identification module in the display management framework of the application framework layer. After layer identification is completed, the layers are handed to the layer filtering module in the display rendering framework of the application framework layer for filtering, and the filtered layers are handed to the composition and rendering module for composition and rendering, yielding the picture to be displayed on the phone screen (e.g., picture A) and the picture to be displayed on the large screen (e.g., picture B).
For example, after pictures A and B are obtained, picture A can be sent directly for display through the phone's display driver and then shown on the phone screen.
In addition, understandably, the projection function is essentially mirroring based on a mirroring protocol: picture B is recorded in the projection recording module of the phone's system library and Android runtime layer and, after recording is completed, transmitted to the large screen over the communication connection established through the network communication module; the large screen processes it for display through the projection display module and finally presents picture B on the large screen.
In addition, it should be noted that, in practical application scenarios, the application to be projected is not limited to a conference application; the above description is only for ease of explanation.
For a systematic understanding of the screen projection method provided by this embodiment, it is described in detail below with reference to FIG. 7.
First, it should be noted that the screen projection method provided by this embodiment is specifically applied to the first electronic device that initiates the projection, which may be, for example, a mobile phone.
For example, a first application is installed in the first electronic device; for ease of understanding, a conference application is still taken as the example below.
For example, the first application includes a first control and a second control; the first control is located on a first layer and the second control is located on a second layer.
For example, the first control and the second control are controls of different types.
For example, in one implementation scenario, the first control is a video control, such as a SurfaceView; correspondingly, the layer type of the first layer is the video stream playback layer. The second control is a button control, such as a Button; correspondingly, the layer type of the second layer is the conference control button layer.
For example, in another implementation scenario, the first control is a whiteboard annotation control, such as a BlankWindow; correspondingly, the layer type of the first layer is the whiteboard annotation layer. The second control is a button control, such as a Button; correspondingly, the layer type of the second layer is the conference control button layer.
Referring to FIG. 7, the screen projection method provided by this application specifically includes the following steps:
Step S1: obtain the control information of the first control and the control information of the second control.
For ease of description, this embodiment still takes the conference application as the first application; the premise of performing step S1 is that the user has completed the communication connection between the phone 100 and the television 200 by enabling the projection function in the manner shown in FIG. 1 or FIG. 2.
In addition, it should be noted that the interface of any application is usually rendered and composed from multiple layers, and the layers included are of at least two types. Taking the conference application as an example, the layers included are at least the video stream playback layer and the conference control button layer.
For example, each type of layer includes at least one control. That is, the number of first controls on the first layer may be one or more, and the number of second controls on the second layer may also be one or more. For example, in the conference application, the video stream playback layer includes one or more video stream playback controls, and the conference control button layer includes one or more conference control button controls; see the description of FIG. 3 for details, which are not repeated here.
For example, the operation of obtaining the control information of the first control and of the second control may be performed by the display processing module in the application framework layer of the phone 100 by calling a preset control-information capture program.
Step S2: determine the layer type of the first layer according to the control information of the first control.
For example, taking the Android system as the phone's operating system, the format of the obtained control information of each control in the conference application may be as shown in FIG. 8.
Referring to FIG. 8, the obtained control information includes but is not limited to the control name (Name) and the size information (disp frame); it may also include, for example, the window type, which this application does not limit.
Suppose there are two first controls used to display the participants' video pictures, e.g., control a and control b, and one second control for user operation, e.g., control c. For example, in FIG. 8, "SurfaceView-com.huawei.welink/com.[...]on.view.activity.InMeetingActivity#0rel-2" is the control name of control a and "0 0 283 283" is its size information; "SurfaceView-com.huawei.welink/com.[...]on.view.activity.InMeetingActivity#1rel-1" is the control name of control b and "0 283 2288 1080" is its size information; "Button-com.huawei.welink/com.[...]on.view.activity.InMeetingActivity#0rel-0" is the control name of control c and "413 2130 625 2200" is its size information.
A control name usually consists of three parts. Taking the control name of control a as an example: "SurfaceView" indicates the control type, meaning that control a is used to display video stream content; "com.huawei.welink" indicates the package name of the application containing control a, from which the application type can be determined (for example, WeLink is a conference application); "on.view.activity.InMeetingActivity" indicates that the currently displayed interface is the meeting interface, i.e., the interface information is the meeting interface.
In summary, when determining the layer type of the first layer from the control information of the first control, the flow is as follows:
First, extract the control name and the size information of the first control from its control information, for example control a's control name "SurfaceView-com.huawei.welink/com.[...]on.view.activity.InMeetingActivity#0rel-2" and size information "0 0 283 283".
Then, parse the control name.
Specifically, when the control type of the first control, the package name of the first application, and the interface information of the first control are parsed from the control name, determine the layer type of the first layer according to the control type, the package name, the interface information, and the size information. In this way, the purpose of the first control can be determined from its control type, and the application type of the first application from its package name; combined with the size information of the first control and the specific interface information within the first application, the layer type of the layer containing the first control can be accurately identified for most applications on the market.
When the control type of the first control and the interface information of the first control are parsed from the control name but the package name of the first application is not, obtain the process identification number (PID) of the process that draws the first control, determine the source of the first control according to the PID, and determine the layer type of the first layer according to the control type, the source, the interface information, and the size information, where the source includes the package name of the first application. In this way, when the package name is not parsed from the control name, the PID of the drawing process is obtained; since the PID is unique, the source that created the process corresponding to the PID, i.e., the first application, can be determined from it, yielding the package name of the first application. Thus, whether or not the control name of the first control contains the package name, the layer type of the first layer can be accurately determined from the control information of the first control.
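The parsing flow above can be sketched as follows. This is an illustrative sketch, not the patent's implementation: the control-name format (`<Type>-<package>/<activity>#<n>rel-<m>`) is inferred from the FIG. 8 examples, and the `pid_to_package` mapping, the fully spelled-out activity names, and the `CONFERENCE_APPS` set are assumptions introduced for illustration.

```python
import re

# Assumed control-name format, modeled on the FIG. 8 examples:
#   "<ControlType>-<package.name>/<activity.name>#<n>rel-<m>"
NAME_RE = re.compile(r"^(?P<type>\w+)-(?P<pkg>[\w.]+)/(?P<activity>[\w.]+)#")
TYPE_RE = re.compile(r"^(?P<type>\w+)-")

# Known conference-application package names (an assumed lookup set).
CONFERENCE_APPS = {"com.huawei.welink"}

def classify_layer(name, size, pid_to_package=None, pid=None):
    """Determine the layer type of the layer a control belongs to.

    The size information also feeds the decision in the full method; it is
    accepted here for completeness, but these simplified rules key only on
    the control type and the package name.
    """
    m = NAME_RE.match(name)
    if m:
        # Control type, package name, and interface info all parsed from the name.
        ctype, pkg = m.group("type"), m.group("pkg")
    else:
        # Package name missing from the control name: recover the source
        # application from the PID of the drawing process (hypothetical map).
        t = TYPE_RE.match(name)
        if t is None or pid_to_package is None:
            return None  # cannot decide from the control info alone
        ctype, pkg = t.group("type"), pid_to_package.get(pid)
    if pkg in CONFERENCE_APPS:
        if ctype == "SurfaceView":
            return "video_stream_layer"
        if ctype == "Button":
            return "conference_control_button_layer"
    return None
```

Returning `None` models the case handled later in the text, where the layer type cannot be determined from the control information and the currently displayed picture must be analyzed instead.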
In addition, to improve overall efficiency, a layer identification record library can be set up in advance, establishing the correspondence between controls of various sizes at various positions in various known applications and the layer types of the layers they belong to. Before parsing the control name of the first control and determining the layer type from the parse result and the position information, the control name and the size information are first used as retrieval keywords, and the layer identification record library is searched for a control matching the keywords. Correspondingly, when a control matching the keywords is found, the layer type corresponding to that control is determined as the layer type of the first layer; when no matching control is found, the control name is parsed and the layer type is determined from the parse result and the position information. This not only determines the layer type but also balances processing speed and device resource consumption.
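The record-library lookup described above amounts to a cache consulted before the slower name-parsing path. A minimal sketch, assuming a simple dictionary keyed by (control name, size); the class and function names are illustrative, not from the patent:

```python
class LayerIdRecordLibrary:
    """Record library mapping (control name, size) -> layer type,
    consulted before the control name is parsed."""

    def __init__(self):
        self._records = {}

    def lookup(self, name, size):
        return self._records.get((name, tuple(size)))

    def record(self, name, size, layer_type):
        self._records[(name, tuple(size))] = layer_type

def layer_type_for(control, library, parse_fn):
    """Try the record library first; fall back to parsing the control name,
    and record the parse result so later lookups hit the fast path."""
    hit = library.lookup(control["name"], control["size"])
    if hit is not None:
        return hit
    layer = parse_fn(control["name"], control["size"])  # slower analysis path
    if layer is not None:
        library.record(control["name"], control["size"], layer)
    return layer
```

With this structure, the analysis of a control's information runs at most once per distinct (name, size) pair; subsequent frames are served from the library.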
此外,如果在实际的应用场景中,通过查找图层标识记录库的方式和根据第一控件的控件信息确定第一图层的图层类型的方式均无法确定第一图层的图层类型,还可以获取所述第一应用当前显示的画面。
需要说明的,所述当前显示的画面中包括所述第一控件。
相应地,根据所述当前显示的画面中所述第一控件显示的内容确定所述第一图层的图层类型。
具体的,该方式可以是由技术人员通过抓取的页面数据人工确定后更新到图层标识库的,也可以是基于预设的算法,分析确定的。例如,通过对画面中每一控件中显示的内容,以及控件的图标对应的文字分析确定,具体的分析过程,本申请不做赘述。
应当理解的是,以上给出的仅仅是几种确定待绘制控件所属图层类型的具体方式,是为了更好的理解本实施例的技术方案而列举的示例,不作为对本实施例的唯一限制。
步骤S3:根据所述第二控件的控件信息确定所述第二图层的图层类型。
具体的说,在根据第二控件的控件信息确定第二图层的图层类型时,其过程为:
从所述第二控件的控件信息中提取所述第二控件的控件名和所述第二控件的尺寸信息;
对所述控件名进行解析,在从所述控件名中解析出所述第二控件的控件类型、所述第一应用的包名和所述第二控件所在的界面信息时,根据所述控件类型、所述包名、所述界面信息和所述尺寸信息,确定所述第二图层的图层类型;
在从所述控件名中解析出所述第二控件的控件类型和所述第二控件所在的界面信息,未解析出所述第一应用的包名时,获取绘制所述第二控件的进程的进程标识号PID,根据所述PID确定所述第二控件的来源,根据所述控件类型、所述来源、所述界面信息和所述尺寸信息,确定所述第二图层的图层类型,所述来源包括所述第一应用的包名。
同样，在对第二控件的控件名进行解析，根据解析结果和位置信息确定图层类型前，先将控件名和尺寸信息作为检索关键词，然后根据关键词在图层标识记录库中查找与该关键词匹配的控件。相应地，在查找到与所述关键词匹配的控件时，将所述控件对应的图层类型确定为所述第二图层的图层类型；在未查找到与该关键词匹配的控件时，再对控件名进行解析，根据解析结果和位置信息确定图层类型。这样不仅能够确定图层类型，同时又能兼顾处理速度和设备资源的消耗。
同样,在根据所述第二控件的控件信息无法确定所述第二图层的图层类型时,获取所述第一应用当前显示的画面,所述当前显示的画面中包括所述第二控件;根据所述当前显示的画面中所述第二控件显示的内容确定所述第二图层的图层类型。
不难发现，根据第二控件的控件信息确定第二图层的图层类型的过程与步骤S2中根据第一控件的控件信息确定第一图层的图层类型大致相同，此处不再赘述。
此外,可理解的,所述第一图层的图层类型与所述第二图层的图层类型不同。例如,在第一控件为视频类控件,如SurfaceView时,第一图层的图层类型为视频流播放图层;在第一控件为白板批注控件时,第一图层的图层类型为白板批注图层;在第二控件为按钮类控件,如button时,第二图层的图层类型为会议控制按钮图层。
应当理解的是,上述说明仅是为了更好的理解本实施例的技术方案而列举的示例,不作为对本实施例的唯一限制。
步骤S4:根据所述第一图层的图层类型和所述第二图层的图层类型,生成第一显示画面和第二显示画面,并在所述第一电子设备的屏幕显示所述第一画面,将所述第二显示画面投射到第二电子设备的屏幕进行显示。
可理解的,在第一应用仅包括第一控件和第二控件时,所述第一显示画面包括所述第一控件和所述第二控件,所述第二显示画面包括所述第一控件,不包括所述第二控件。
示例性的，在实际的应用场景中，第一应用还可以包括位于其他图层的控件，相应地生成的第一显示画面和第二显示画面也可以根据业务需求包括其他控件，本申请对此不做限制。
示例性的,在实际的应用场景中,可以在第一电子设备中分配用于缓存第一显示画面的第一显示缓存和用于缓存第二显示画面的第二显示缓存。
相应地,在根据所述第一图层的图层类型和所述第二图层的图层类型,生成第一显示画面和第二显示画面后,可以先将所述第一显示画面缓存到第一显示缓存,将所述第二显示画面缓存到第二显示缓存;然后,按照缓存顺序,从所述第一显示缓存中取出所述第一显示画面,并将所述第一显示画面在所述第一电子设备的屏幕进行显示;对所述第二显示缓存中的所述第二显示画面进行录制,得到投屏内容;将所述投屏内容发送至所述第二电子设备,供所述第二电子设备解码所述投屏内容,得到所述第二显示画面,并在所述第二电子设备的屏幕进行显示。这样,通过将需要在不同电子设备的屏幕显示的显示画面缓存到不同的显示缓存中,然后从对应的显示缓存中取出显示画面送显,从而既可以实现对缓存内容的批量处理,又可以避免线程拥堵,保证传送的显示画面的流畅性。
此外,可理解的,在实际的应用场景中,为不同的显示缓存设置不同的图层过滤规则,从而在生成显示画面时,能够根据对应显示缓存的图层过滤规则和确定的各图层的图层类型确定该显示缓存中缓存的显示画面需要包括的控件,进而获取该控件的资源进行显示画面的绘制,得到适合不同电子设备显示的显示画面。
具体的，在根据第一图层的图层类型和第二图层的图层类型生成第一显示画面和第二显示画面，并将生成的显示画面缓存到对应的显示缓存的过程，具体如下：首先，确定所述第一显示缓存对应的第一图层过滤规则和所述第二显示缓存对应的第二图层过滤规则；然后，根据所述第一图层过滤规则、所述第一图层的图层类型和所述第二图层的图层类型，确定所述第一显示画面包括所述第一图层和所述第二图层；接着，获取所述第一图层中所述第一控件的资源和所述第二图层中所述第二控件的资源，根据所述第一控件的资源和所述第二控件的资源，生成所述第一显示画面，并将所述第一显示画面缓存到第一显示缓存；接着，根据所述第二图层过滤规则、所述第一图层的图层类型和所述第二图层的图层类型，确定所述第二显示画面包括所述第一图层；接着，获取所述第一图层中所述第一控件的资源，根据所述第一控件的资源生成所述第二显示画面，并将所述第二显示画面缓存到所述第二显示缓存。
示例性的,本实施例给出两种确定图层过滤规则的方式,以下分别进行说明。
方式一:从预先确定的图层过滤规则表中选取第一图层过滤规则和第二图层过滤规则
获取所述第一电子设备的第一设备标识和所述第二电子设备的第二设备标识;在图层过滤规则表中查找与所述第一设备标识匹配的图层过滤规则,将查找到的图层过滤规则确定为所述第一显示缓存对应的所述第一图层过滤规则;在所述图层过滤规则表中查找与所述第二设备标识匹配的图层过滤规则,将查找到的图层过滤规则确定为所述第二显示缓存对应的所述第二图层过滤规则。这样,通过预先确定图层过滤规则,并进行存储,在需要进行图层过滤时,直接获取已有的图层过滤规则,方便快速。
示例性的,参见表1给出一种预设的图层过滤规则表。
表1 图层过滤规则表1
Figure PCTCN2022091554-appb-000001
基于表1,在第一图层的图层类型是视频流播放图层,第二图层的类型为会议控制按钮图层,第一电子设备的设备标识为D_01,第二电子设备的设备标识为D_02时,则根据第一电子设备的设备标识在图层过滤规则表1中查找到的适合第一显示缓存的第一图层过滤规则为“显示图层类型为视频流播放图层和会议控制按钮图层”,根据第二电子设备的设备标识在图层过滤规则表1中查找到的适合第二显示缓存的第二图层过滤规则为“显示图层类型为视频流播放图层,不显示会议控制按钮图层”。
可理解的,由于视频类的第一控件位于视频流播放图层,按钮类的第二控件位于会议控制按钮图层,因此根据第一图层过滤规则和各图层类型生成的第一显示画面中会包括位于视频流播放图层的第一控件和位于会议控制按钮图层的第二控件。
相应地,根据第二图层过滤规则和各图层类型生成的第二显示画面中会包括位于视频流播放图层的第一控件,但不包括位于会议控制按钮图层的第二控件。
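示例性的，上述根据设备标识查找图层过滤规则并对图层中的控件进行过滤的过程，可以用如下示意性代码（Python）表示。其中规则表的内容与表1的描述对应，数据结构和函数名均为本文假设，并非本申请的实际实现：

```python
# 假设的图层过滤规则表：设备标识 -> 允许显示的图层类型集合
filter_rule_table = {
    "D_01": {"视频流播放图层", "会议控制按钮图层"},  # 第一电子设备：全部显示
    "D_02": {"视频流播放图层"},                      # 第二电子设备：仅显示视频流播放图层
}

def filter_controls(device_id, controls):
    """controls 为 (控件, 图层类型) 列表，返回该设备的显示画面需要包括的控件。"""
    allowed = filter_rule_table[device_id]
    return [ctrl for ctrl, layer_type in controls if layer_type in allowed]
```

例如，对位于视频流播放图层的第一控件和位于会议控制按钮图层的第二控件进行过滤，第一电子设备得到两个控件，第二电子设备仅得到第一控件。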
示例性的,参见表2给出另一种预设的图层过滤规则表。
表2 图层过滤规则表2
Figure PCTCN2022091554-appb-000002
基于表2，在第一图层的图层类型是白板批注图层，第二图层的类型为会议控制按钮图层，第一电子设备的设备标识为D_01，第二电子设备的设备标识为D_02时，则根据第一电子设备的设备标识在图层过滤规则表2中查找到的适合第一显示缓存的第一图层过滤规则为“显示图层类型为白板批注图层和会议控制按钮图层”，根据第二电子设备的设备标识在图层过滤规则表2中查找到的适合第二显示缓存的第二图层过滤规则为“显示图层类型为白板批注图层，不显示会议控制按钮图层”。
可理解的,由于白板批注类的第一控件位于白板批注图层,按钮类的第二控件位于会议控制按钮图层,因此根据第一图层过滤规则和各图层类型生成的第一显示画面中会包括位于白板批注图层的第一控件和位于会议控制按钮图层的第二控件。
相应地,根据第二图层过滤规则和各图层类型生成的第二显示画面中会包括位于白板批注图层的第一控件,但不包括位于会议控制按钮图层的第二控件。
应当理解的是,上述说明仅是为了更好的理解本实施例的技术方案而列举的示例,不作为对本实施例的唯一限制。
方式二:提供用户入口,由用户决策第一图层过滤规则和第二图层过滤规则
为了提升用户参与度,还可以提供用户操作入口,由用户决策图层过滤规则,具体的:首先,在所述第一电子设备的屏幕上显示可供用户操作的图层过滤规则决策界面,所述图层过滤规则决策界面包括所述第一控件和所述第一控件所在的所述第一图层的图层类型,所述第二控件和所述第二控件所在的所述第二图层的图层类型;然后,响应于用户为所述第一显示缓存设置所述第一图层过滤规则的操作行为,生成所述第一图层过滤规则;响应于用户为所述第二显示缓存设置所述第二图层过滤规则的操作行为,生成所述第二图层过滤规则。
参见图9,给出了一种供用户决策第一图层过滤规则和第二图层过滤规则的界面示意图。
示例性的,手机100的显示界面显示的是设置第一图层过滤规则和第二图层过滤规则的操作界面。在该操作界面中,分成设置第一图层过滤规则的区域和设置第二图层过滤规则的区域。
依旧以第一应用包括第一控件和第二控件,第一控件位于第一图层,第二控件位于第二图层这一前提为例。在根据上述方式确定第一图层的图层类型为视频流播放图层,第二图层为会议控制按钮图层时,继续参见图9。在设置第一图层过滤规则的区域中显示了视频流播放图层和会议控制按钮图层,同时在每个图层选项后对应设置了复选框;在设置第二图层过滤规则的区域中同样显示了视频流播放图层和会议控制按钮图层,同时在每个图层选项后对应设置了复选框。
示例性的,当用户在第一图层过滤规则的设置区域中选中了视频流播放图层和会议控制按钮图层这两个图层的复选框,并点击了保存按钮后,手机响应于用户的操作行为,根据用户的选择生成第一显示缓存的第一图层过滤规则,具体为“显示图层类型为视频流播放图层和会议控制按钮图层”。
示例性的,当用户在第二图层过滤规则的设置区域中选中了视频流播放图层这一个图层的复选框,并点击了保存按钮后,手机响应于用户的操作行为,根据用户的选择生成第二显示缓存的第二图层过滤规则,具体为“显示图层类型为视频流播放图层,不显示会议控制按钮图层”。
应当理解的是,上述说明仅是为了更好的理解本实施例的技术方案而列举的示例,不作为对本实施例的唯一限制。
为了更好的说明发起投射的第一电子设备向接受投屏的第二电子设备（以下称为大屏设备）投射内容的流程，本申请实施例以发起投射的第一电子设备为手机，大屏设备为电视，第一应用为会议应用为例，通过以下三种场景，结合本申请实施例提供的投屏方法进行说明。
场景一:
下面结合图3和图10至图12对本申请实施例具体实现方式进行详细说明。具体的,投屏的过程可分为三部分,第一部分为图层识别过程,图层识别过程主要是对会议应用程序需要绘制的控件进行识别,确定图层类型的过程。第二部分为过滤和渲染合成过程,过滤和渲染合成过程主要是根据识别出的图层标识信息对不同类型的图层进行过滤,然后将过滤后的图层进行渲染合成。第三部分为送显过程,即将渲染合成的不同画面,分别传输给对应的屏幕(手机屏幕和电视屏幕)进行显示。
下面结合图10所示的手机内各模块与电视（大屏）内各模块的交互流程示意图、图11所示的手机内各模块交互的时序图，以及图12所示的手机内各模块与电视内各模块交互的时序图，对整个投屏过程进行详细说明。
参见图11,具体包括:
101,发送绘制控件1和控件2的绘制请求。
参见图10，示例性的，安装在手机应用程序层的会议应用程序（后续称为：会议应用）完成启动后，假设用户使用会议应用加入会议后，默认状态下需要呈现在手机屏幕的画面包括控件1（假设为视频流播放控件）和控件2（假设为会议控制按钮控件），会议应用便会向位于应用程序框架层的显示管理框架中用于确定控件信息的显示处理模块发送申请绘制控件的绘制请求。
可选地,绘制请求中包括但不限于:会议应用的应用ID(例如可以是应用程序包名)、需要绘制的控件的ID(例如可以是控件名)等。
102,分别确定控件1的控件信息和控件2的控件信息。
参见图10,示例性的,显示处理模块接收到会议应用发送的绘制控件1和控件2的绘制请求后,根据绘制请求中携带的信息,分别确定需要绘制的控件对应的控件信息。
示例性的,显示处理模块根据抓取到的应用ID确定应用类型,例如根据上述发起绘制请求的是会议应用,则根据会议应用的ID确定的应用类型可以为会议类型。
可理解的,关于应用类型的确定,可以通过预设应用ID与应用类型的对应关系,进而在抓取到应用ID时,直接根据预设的对应关系确定对应的应用类型。
示例性的,显示处理模块根据抓取到的控件名确定控件类型,例如在抓取到的控件名为SurfaceView时,确定的控件类型为视频流播放控件,或者3D图片显示控件。
可理解的,关于控件类型的确定,同样可以通过预设对应关系的方式确定,即预先确定不同控件名与控件类型的对应关系,进而在抓取到控件名时,直接根据预设的对应关系确定对应的控件类型。
由此,显示处理模块根据接收到的绘制请求,便可以确定需要绘制的控件的控件信息。
应当理解的是,上述说明仅是为了更好的理解本实施例的技术方案而列举的示例,不作为对本实施例的唯一限制。
103,发送控件1的控件信息和控件2的控件信息。
显示处理模块在确定需要绘制的控件1的控件信息和控件2的控件信息后，将确定的各控件的控件信息发送给图层识别模块，由图层识别模块对控件所属的图层进行识别。
104,发送控件1和控件2的绘制请求。
显示处理模块在向图层识别模块发送控件信息的同时，可以向显示渲染框架中的图层过滤模块发送针对控件1和控件2的绘制请求。
示例性的,在实际的应用场景中,显示处理模块可以根据确定的会议类型和控件类型,获取会议应用对应的配置信息,例如可以是会议应用显示的画面对应的分辨率(例如1080*720),以及控件的配置信息,例如可以是控件的大小、位置等信息,然后根据确定的信息生成需要向图层过滤模块发送的绘制请求,并将绘制请求发送给图层过滤模块。
此外,显示处理模块也可以将确定的上述配置信息作为各个控件对应的控件信息,发送给图层识别模块进行处理。
示例性的,在实际的应用场景中,显示处理模块向图层过滤模块发送绘制请求的操作,可以是与向图层识别模块发送确定的控件信息的操作同步进行的,也可以是在发送控件信息之前进行的,还可以是在发送控件信息之后进行的。
示例性的,在实际的应用场景中,如果显示处理模块是在向图层识别模块发送控件信息之后,再向图层过滤模块发送绘制请求,那么绘制请求可以是在图层识别模块识别出需要绘制的控件所属图层后才向图层过滤模块发送的。
应当理解的是,上述说明仅是为了更好的理解本实施例的技术方案而列举的示例,不作为对本实施例的唯一限制。
105,根据控件1的控件信息确定控件1的图层类型,根据控件2的控件信息确定控件2的图层类型。
需要说明的,在实际的应用场景中,图层识别模块可以划分为图层标识模块和图层标识记录库两部分。
其中，图层标识模块具体是依据预设的图层标识算法对显示处理模块发送的控件信息进行识别分析，进而确定控件所属的图层类型，并对控件所属图层进行标识的；图层标识记录库是用于记录已知控件与图层关系的。
示例性的，在实际的应用场景中，图层识别模块在接收到来自显示处理模块下发的各控件的控件信息后，可以先根据控件信息在图层标识记录库中进行查找，如果查找到对应的图层，以及标识信息，则直接将查找到的图层确定为控件对应的图层类型；如果没有查找到，则由图层标识模块根据预设的图层标识算法对显示处理模块发送的控件信息进行识别分析，进而确定控件所属的图层类型。
例如,图层识别模块在接收到显示处理模块发送的控件1的控件信息和控件2的控件信息后,首先根据控件1的控件信息和控件2的控件信息在图层标识记录库中进行查找。假设,在图层标识记录库中查找到了与控件1的控件信息匹配的内容,如控件1的控件信息对应的图层类型为视频流播放图层,则将控件1的图层类型确定为视频流播放图层。假设,在图层标识记录库中没有查找到与控件2的控件信息匹配的内容,则由图层标识模块根据预设的图层标识算法对控件2的控件信息进行识别分析,进而确定控件2所属的图层类型,例如根据图层标识算法确定控件2的图层类型为会议控制按钮图层。
关于基于预设的图层标识算法对控件信息进行识别分析的过程，本申请实施例给出几种具体的实现方式，具体如下：
示例性的,在一种可能的实现方式中,图层标识模块可以根据控件的尺寸、位置、拼接情况,并结合控件信息中的应用类型和控件类型,确定控件所属的图层类型。
例如,对于会议类应用,在进行视频会议时,通常情况下当前主讲人的视频流播放控件会位于整个画面的中间区域,其他与会人员的视频流播放控件会位于手机屏幕的顶部区域,可供用户操作的会议控制按钮控件则位于手机屏幕的底部区域,具体样式如图3所示。
还比如，用于显示与会人员（包括主讲人）的视频流播放控件的尺寸是相同的，多个与会人员的视频流播放控件拼接在一起构成一个完整的视频流播放图层。而供用户操作的会议控制按钮控件则显示在一个会议控制按钮图层中，位于手机屏幕的底部区域，或者顶部区域，或者左右两侧。
应当理解的是,上述说明仅是为了更好的理解本实施例的技术方案而列举的示例,不作为对本实施例的唯一限制。
基于这种预知的信息,图层识别模块根据控件的尺寸、位置、拼接情况,并结合控件信息中的应用类型和控件类型,便可以确定控件所属的图层类型。
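示例性的，这种结合控件类型与位置信息的启发式判断可以粗略地表示为如下示意性代码（Python）。其中的阈值、坐标格式（“左 上 右 下”）和判定规则仅为便于理解的假设，实际的图层标识算法会更复杂：

```python
def classify_layer(control_type, disp_frame, screen_height=2340):
    """disp_frame 为“左 上 右 下”格式的尺寸信息字符串；按控件类型和位置做简化判断。"""
    top = int(disp_frame.split()[1])
    if control_type == "SurfaceView":
        # 视频类控件判定为视频流播放图层
        return "视频流播放图层"
    if control_type == "Button" and top > screen_height * 0.8:
        # 位于屏幕底部区域的按钮类控件，判定为会议控制按钮图层
        return "会议控制按钮图层"
    return "未知图层"
```

例如，对上文中控件c（Button，尺寸信息“413 2130 625 2200”）按此规则判断，即因其位于屏幕底部区域而归入会议控制按钮图层。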
例如,图3中用于显示与会人员的画面的视频流播放控件一起对应一个视频播放图层20-1,供用户操作的静音设置选项、视频设置选项、共享设置选项、与会者设置选项、更多设置选项一起对应一个会议控制按钮图层20-2。
可选地,在实际的应用场景中,也可以设置一个视频流播放控件对应一个视频流播放图层,一个会议控制按钮控件对应一个会议控制按钮图层,本申请对此不做限制。
示例性的，在另一种可能实现的方式中，对于一些非视频流播放控件（即并非用来显示与会人员的视频流的控件），为了在投屏模式下、大屏仅显示视频流播放图层中的内容的情况下也能将其投射到大屏进行显示，需要判断是否将其归类到视频流播放图层。图层标识模块可以通过监测此类非视频流播放控件能否接受触摸、触摸时长（例如10帧的时长），以及是否存在光标变化、提示等信息，来确定此类控件是否需要归类到视频流播放图层。
例如,对于共享设置选项下的白板批注模式,该模式下通常不会显示与会者的视频流播放控件,但是白板批注控件需要进行显示,如果不将该控件归类到视频流播放图层,那么大屏就不能与手机屏幕同步显示白板批注画面。因此,对于此类控件,通过监测白板画笔/光标的变化,以及位置的移动,提示信息等,在确定是白板批注控件时,同样将其归类到视频流播放图层,即在大屏进行显示。
示例性的，在另一种可能实现的方式中，对于会议应用向显示处理模块发送的绘制请求没有携带具体的包名特征，显示处理模块无法确定具体的应用类型和控件类型的场景，图层标识模块可以通过判断绘制请求的来源信息来确定图层类型，进而对控件进行图层标识。
可选地,在根据来源信息确定图层类型时,具体可以通过判断虚拟IP地址(Virtual IP Address,VIP)、进程识别号(Process Identification,PID)等关联信息,进而根据这些关联信息确定控件对应的图层类型。
可理解的,在实际的应用场景中,可以建立各种已知控件的来源信息与控件和对应的图层类型之间的对应关系,进而在绘制请求没有携带具体的包名特征,无法确定控件的控件信息时,图层标识模块可以根据发起的绘制请求对应的来源信息和预先确定的对应关系,确定控件所属的图层类型。
示例性的，在另一种可能的实现方式中，对于采用上述任一种实现方式均无法确定控件所属图层类型的场景，图层标识模块可以通过截取来自HAL层上传的数据流，例如可以截取前5帧，然后对这5帧数据进行解析，利用预先训练好的识别模型分析识别当前画面中是否包括了控件所属图层类型为视频流播放图层的控件，或者所属图层类型为会议控制按钮图层的控件，进而确定各控件所属的图层类型。
应当理解的是,上述说明仅是为了更好的理解本实施例的技术方案而列举的示例,不作为对本实施例的唯一限制。
106,发送控件1的图层类型和控件2的图层类型。
图层识别模块在完成对各控件所属图层类型的确定后,将控件和对应的图层类型发送给图层过滤模块,以便图层过滤模块根据识别出的图层中添加的标识信息对图层进行过滤。
需要说明的，在实际的应用场景中，为了保证图层标识记录库中记录的内容的有效性，在每次通过图层标识模块识别出控件的图层类型后，都可以将控件和对应的图层类型更新到图层标识记录库中。
107,根据预设过滤规则过滤掉控件2。
示例性的,参见图10,在实际的应用场景中,图层过滤模块中可以预先置入多种过滤规则。
具体的,过滤规则可以根据大屏设备的类型、型号等来划分,比如对于电视类的大屏设备,过滤规则可以为仅在大屏投射视频流播放图层;对于投影仪类的大屏设备,过滤规则可以为所有图层均显示在大屏设备。
可选地,在实际的应用场景中,过滤规则还可以是大屏设备显示视频流播放图层中所有的内容,显示会议控制按钮图层中部分控件,例如仅显示静音设置选项。
此外,对于手机屏幕显示的画面对应的过滤规则,可以是所有图层都显示,即不过滤任何图层。
此外,在实际的应用场景中,如果采用其他大屏设备作为主控设备,即供用户进行操作的设备,则手机屏幕可以根据业务需求直接不显示任何内容,而在设定的主控设备显示所有图层。
示例性的,参见图10和图11,控件1为视频流播放控件,控件2为会议控制按钮控件。当确定了图层类型的控件1和控件2到达图层过滤模块后,假设对应手机屏幕的过滤规则(图10中的规则1)为所有图层的内容都送显,根据规则1过滤出来后,实质并没有过滤掉控件1和控件2,而是将控件1的资源和控件2的资源全部发送至合成渲染模块进行画面绘制。
相应地,假设对应电视屏幕(大屏)的过滤规则(图10中的规则2)为仅显示视频流播放图层中的内容,则根据规则2,图层过滤模块会将控件2过滤掉,仅将控件1的资源发送至合成渲染模块进行画面绘制。
应当理解的是,上述说明仅是为了更好的理解本实施例的技术方案而列举的示例,不作为对本实施例的唯一限制。
在实际的应用场景中,过滤规则也可以由用户决策。
示例性的，通过设置图层识别模块识别出当前应用需要绘制的画面包括的图层类型后，分别将识别出的图层类型发送给应用程序层的应用，如图11中的会议应用，以及显示渲染框架中的图层过滤模块。
相应地，识别出的图层类型到达应用程序层后，手机做出响应，可以在当前界面弹出显示每一控件对应的图层的界面，供用户选择哪些控件所在的图层投放到电视屏幕，哪些显示在手机屏幕，即得到对应电视屏幕的过滤规则和手机屏幕的过滤规则。接着，将由用户决策的过滤规则下发给显示渲染框架中的图层过滤模块，供图层过滤模块根据过滤规则进行图层过滤。
通过提供用户决策过滤规则的入口,让用户参与到投屏内容的选择中,使得投屏的应用场景能够更好的满足不同的用户需求,进而提升了用户体验。
108,发送控件1的资源。
图层过滤模块根据预置的过滤规则对控件进行过滤后,将满足过滤规则的控件资源发送至合成渲染模块进行画面的绘制。
需要说明的，在根据预置的过滤规则对控件进行过滤后，需要发送的控件的资源，可以根据显示处理模块下发的绘制请求中携带的信息确定，具体确定方式本申请不做限制，对此也不做描述。
109,根据控件1的资源绘制控件1,得到画面A。
具体的，合成渲染模块先根据接收到的控件的资源（例如可以是绘制逻辑）绘制需要显示的各个控件，然后将各控件所在的图层进行合成，得到一个完整的画面。
示例性的，对于本申请实施例中根据控件1的资源绘制控件1，进而得到画面A的过程，实质就是绘制图3所示的视频流播放图层20-1中各个视频流播放控件，并将这些控件所在图层合成为一个画面的过程，即最终得到的画面A就是图3中的20-1。
相应地,对于根据规则1过滤后得到的内容是控件1和控件2时,则合成渲染模块会根据控件1的资源绘制控件1,根据控件2的资源绘制控件2,然后将控件1所在的图层和控件2所在的图层进行合成,最后得到一个完整的画面B。
应当理解的是,上述说明仅是为了更好的理解本实施例的技术方案而列举的示例,不作为对本实施例的唯一限制。
由此,实现了图层的分离、过滤和合成渲染。
在得到需要在电视屏幕显示的画面A后,会将画面A缓存到显示缓存A中,在后续流程中,直接从显示缓存A获取画面A即可。
相应地,在得到需要在手机屏幕显示的画面B后,会将画面B缓冲到显示缓存B中,在后续流程中,直接从显示缓存B获取画面B即可。
此外，需要说明的是，在实际的应用场景中，显示缓存A和显示缓存B可以是同一个缓存区域，对于这种场景，在将不同画面缓存到缓存区域时，携带好具体的标识（例如可以是具体对应哪一个设备），这样无需在手机内部分配多个缓存区域，从而避免冗余问题。
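示例性的，共用同一缓存区域并以设备标识区分画面的做法，可以用如下示意性代码（Python）表示。其中的函数名和数据结构均为本文假设，并非本申请的实际实现：

```python
display_buffer = []  # 共用的缓存区域，每个条目携带目标设备标识

def push_frame(device_id, frame):
    """按缓存顺序存入画面，并携带其目标设备的标识。"""
    display_buffer.append((device_id, frame))

def pop_frames(device_id):
    """按缓存顺序取出属于指定设备的画面，其余画面保留在缓存中。"""
    taken = [f for d, f in display_buffer if d == device_id]
    display_buffer[:] = [(d, f) for d, f in display_buffer if d != device_id]
    return taken
```

这样，投屏录制模块和手机显示驱动各自按设备标识取出属于自己的画面，互不干扰。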
此外,在得到画面A和画面B后,对于如何从手机的显示缓存中将缓存的画面分别送显到不同的屏幕进行显示的过程,以下结合图10和图12进行详细说明。
参见图12,具体包括:
201,根据控件1的资源绘制控件1,得到画面A。
合成渲染模块,根据控件的资源绘制控件,进而得到满足过滤要求的画面的过程,详见图11中步骤109的描述,此处不再赘述。
202,发送画面A。
示例性的，参见图10，合成渲染模块在合成绘制得到需要送显的画面后，会将得到的画面暂时缓存到对应的显示缓存中，然后由显示缓存将缓存的画面发送给对应的模块。
示例性的，参见图10，需要投射到电视屏幕（大屏）显示的画面A会缓存到显示缓存A中，在需要投射给电视屏幕时，投屏录制模块会从显示缓存A中取出画面A，对画面A进行录制，即显示缓存A中缓存的画面A是发送给投屏录制模块的。
203,录制画面A,得到投屏内容。
投屏录制模块获取到画面A后,对画面A进行录制,并在录制完成后对录制的内容进行视频编码,进而得到投屏内容。
可理解的，在实际的应用场景中，每一帧视频流都会对应一个画面，因而得到的画面A可以是不同时刻的。投屏录制模块在进行录制时，可以按照预设的录制要求对多帧的画面进行录制，比如每秒30帧。因此，得到的投屏内容实质为动态的视频流。
需要说明的,投屏录制模块为具备投屏功能的电子设备系统自带的,关于具体的录制过程本申请不做描述,对于录制后进行的视频编码本申请也不做限制。
204,发送投屏内容。
投屏录制模块在完成对画面A的录制和视频编码操作后,会将得到的投屏内容通过预先建立的通信连接传输给大屏设备(本实施例为电视机)内的视频解码模块。
205,解码投屏内容,得到画面A。
电视内的视频解码模块在接收到手机传输的投屏内容后,会按照预定的方式进行解码,进而解析出画面A。
可理解的，经投屏录制模块录制后得到的投屏内容实质为视频流，因此此处的解码操作为对视频流进行解码，进而得到需要显示的视频内容。
可理解的，对于不同时刻的视频内容，其实是由多帧画面构成的，因而在电视屏幕显示的是连续变化的每一帧对应的画面。
206,发送画面A。
视频解码模块在解析出画面A后,会将画面A发送给电视屏幕,具体为发送给电视的投屏显示模块。
207,显示画面A。
投屏显示模块将接收到的画面A显示在显示屏2,即电视屏幕中。
可理解的,由于画面A中仅包括图层类型为视频流播放图层的控件,因而在电视屏幕中显示的内容具体是图3中20-1的镜像,即20-1’。
208,根据控件1的资源绘制控件1,根据控件2的资源绘制控件2,得到画面B。
合成渲染模块,根据控件的资源绘制控件,进而得到满足过滤要求的画面的过程,详见图11中步骤109的描述,此处不再赘述。
209,发送画面B。
示例性的，参见图10，合成渲染模块在合成绘制得到需要送显的画面后，会将得到的画面暂时缓存到对应的显示缓存中，然后由显示缓存将缓存的画面发送给对应的模块。
示例性的，参见图10，需要在手机屏幕显示的画面B会缓存到显示缓存B中，在需要显示到手机屏幕时，手机显示驱动会从显示缓存B中取出画面B，即显示缓存B中缓存的画面B是发送给手机显示驱动的。
210,发送画面B。
手机显示驱动接收到画面B后,会将画面B上传给位于应用程序层的会议应用,在会议应用对应的会议界面进行显示。
211,显示画面B。
可理解的,由于画面B包括了所有的图层,因而在手机屏幕,不仅会显示图3中的20-1,还会显示20-2。
通过上述描述不难发现,通过在投射内容的电子设备,如手机侧的应用程序框架层中增加图层识别模块和图层过滤模块,从而可以由图层识别模块识别出当前通过会议应用加入的会议的图层中,哪一层是视频流播放图层,哪一层是会议控制按钮图层。而后,在图层过滤模块进行图层过滤处理,从而根据预置的过滤规则,过滤出哪些图层需要在手机屏幕显示,哪些图层需要在大屏显示。例如,对于手机屏幕,设置所有图层都显示,这样用户不仅可以看到整个会议过程中涉及的视频流内容,还可以看到可供用户操作的会议控制按钮,以便用户通过会议控制按钮对会议进行操作。对于大屏,设置仅显示视频流播放图层,这样使用大屏观看会议的其他用户,仅能看到会议过程中涉及的视频流内容,当用户操作手机屏幕显示的会议控制按钮时,整个操作过程使用大屏观看会议的用户不会看到,因而不会影响大屏的观看。
此外,由于大屏仅显示会议过程中涉及的视频流内容,对于手机屏幕显示的其他涉及隐私信息的内容,同样不会投射在大屏,从而保证了用户隐私。
场景二:
场景一所述的实施例是手机屏幕显示所有与会人员的画面和会议控制按钮,电视屏幕仅显示与会人员的画面。下面结合图13对手机屏幕显示所有与会人员的画面,以及与会人员添加到画面中的标注内容和会议控制按钮,电视屏幕显示与会人员的画面以及与会人员添加到画面中的标注内容的场景进行详细说明。
参见图13，示例性的，手机100的显示界面显示的是通过会议应用加入会议后的画面20，在画面20中包括视频流播放图层20-1、会议控制按钮图层20-2、其他与会人员添加的标注内容20-3，以及使用手机100的用户为自己添加的标注内容20-4。
可理解的,对于图层20-1和图层20-2的识别和过滤方式,具体过程可以参照场景一中的描述,此处不再赘述。
对于图层20-3和图层20-4，在实际的应用场景中，并非是通过与会人员使用的手机的摄像头采集的，而是与会人员自己在呈现的画面上添加的。比如目前一些直播互动中，直播人员可以自己添加基于增强现实（Augmented Reality，AR）技术绘制的标注内容，参见图13，在对端用户处添加的AR标注内容为20-3，在本机用户处添加的AR标注内容为20-4。对于此类内容对应的控件，在确定其是否应当归类到视频流播放图层时，具体可以根据这些控件与视频流播放控件之间的位置关系来确定。因而，图层识别模块在根据控件的控件信息确定图层类型时，会根据已知图层类型的控件与此类控件之间的位置关系，确定此类控件是否需要归类到视频流播放图层。
相应地，由于本机用户添加的AR标注内容是想要给其他与会者观看的，因此添加的AR标注内容实质是需要通过服务器传输给其他与会者的，即在其他与会者的手机屏幕和大屏均需显示，而在本机匹配的大屏上，实质上不需要进行投放，故而在一种实现场景下，本机用户添加的AR标注内容，在大屏不显示，而在其他用户的手机匹配的大屏上显示。
参见图13，20-4是本机用户添加的，因此该图层的内容仅在手机屏幕显示，在匹配的大屏200上不显示；而20-3是其他与会者在自己的手机上添加的，故而手机100的屏幕20上需要显示20-3，同时在将该画面投射到大屏200时，需要将20-3也进行显示，故而大屏200上会显示20-1的镜像内容20-1'和20-3的镜像内容20-3'。
此外，在另一种实现场景中，为了让观看大屏200的用户能够看到使用手机100加入会议的用户添加的AR标注内容20-4，也可以将其归类到视频流播放图层中，从而使得投射到大屏200上的画面不仅包括对端用户添加的AR标注内容20-3的镜像内容，还会显示本机用户添加的AR标注内容20-4的镜像内容。
应当理解的是,上述说明仅是为了更好的理解本实施例的技术方案而列举的示例,不作为对本实施例的唯一限制。
由此,通过根据业务需要,将不属于视频流播放图层的控件归类到视频流播放图层,从而可以将非视频流播放控件显示的内容也投射到大屏进行显示,进一步丰富了投屏场景。
场景三:
场景一所述的实施例是手机屏幕显示所有与会人员的画面和会议控制按钮,电视屏幕仅显示与会人员的画面。下面结合图14a、图14b和图15对手机屏幕从场景1的画面切换到白板批注模式时,手机屏幕和电视屏幕显示画面的变化进行详细说明。
参见图14a,示例性的,手机100的显示界面显示的是通过会议应用加入会议后的画面20,在画面20中包括了视频流播放图层20-1和会议控制按钮图层20-2。视频流播放图层20-1包括一个或多个视频流播放控件,具体描述可参照场景一,此处不再赘述。通过手机100投射到电视200上的画面20’仅包括视频流播放图层20-1的镜像20-1’。
示例性的,视频流播放图层20-1’包括一个或多个视频流播放控件,并且显示的视频流播放控件分别与手机100的画面20中显示的视频流播放控件一一对应,且视频流播放控件中显示的内容均相同。
此外,参见图14a,示例性的,手机100的显示界面显示的会议控制按钮图层20-2包括一个或多个会议控制按钮,例如图14a中示出的静音设置选项、视频设置选项、共享设置选项、与会者设置选项和更多设置选项。
示例性的,当用户点击了会议控制按钮图层20-2中的共享设置选项后,手机100响应于用户的操作行为,在显示界面上显示选择共享内容的提示框40,如图14a所示。其中,提示框40包括一个或多个控件,例如可以是图14a示出的桌面设置选项40-1、白板设置选项40-2、取消设置选项40-3和开始共享40-4。
需要说明的是，图14a及图14b所涉及的手机100的显示界面显示的控件的名称和数量，以及提示框40中显示的控件的名称和数量仅为示意性举例，本申请不做限定。
继续参照图14a,示例性的,当用户点击了提示框40中的白板设置选项40-2后,手机100响应于用户的操作行为,将白板设置选项40-2的状态标注为选中,接着如果用户点击了开始共享设置选项40-4,手机100响应于用户的操作行为,便会从当前界面切换为白板批注模式的界面,如图14b所示。
参见图14b，示例性的，在切换到白板批注模式后，手机100的显示界面显示的画面20中包括但不限于图14a中已显示的会议控制按钮图层20-2、切换到白板批注模式后由白板批注控件绘制得到的白板批注图层20-5，以及用于停止共享的会议控制按钮图层20-6。其中，会议控制按钮图层20-6包括一个或多个控件，例如图14b中用于提示用户当前正在共享白板的提示控件，以及供用户操作停止共享的控件。
继续参照图14b,示例性的,由于会议控制按钮图层20-6中的控件依旧是供用户进行操作的,不涉及白板内容,因此在进行投屏时,图层识别模块会将此类控件识别出,并由图层过滤模块进行过滤,以保证投射到电视屏幕上的画面20’仅包括白板批注图层20-5’中用户绘制的内容。
此外，应当理解的，若在用户点击了提示框40中的白板设置选项40-2后，手机100响应于用户的操作行为，将白板设置选项40-2的状态标注为选中后，接着用户点击了取消设置选项40-3，手机100响应于用户的操作行为，提示框40便会从显示界面消失，恢复到图14a左下角示出的手机100显示界面显示的内容。
需要说明的是,在实际的应用场景中,切换到白板批注模式下,会直接将在白板中绘制内容的画笔作为一个单独的图层,为了避免画笔对观看大屏画面的用户的干扰,大屏显示的画面通常不会显示能够在白板绘制内容的画笔,而是直接显示使用白板绘制出的内容。但在手机侧,为了便于绘制内容的用户知道绘制位置,会显示画笔。
此外,在另一种实现场景中,例如对共享文档中的内容进行展示时,在手机屏幕显示的光标可以投射到大屏,以便观看大屏的用户获知当前选中内容。
下面结合图15所示的手机内各模块与电视（大屏）内各模块的交互流程示意图，对场景三涉及的整个投屏过程进行详细说明。
参见图15，示例性的，安装在手机应用程序层的会议应用程序（后续称为：会议应用）完成启动后，假设用户使用会议应用加入会议后，默认状态下需要呈现在手机屏幕的画面包括控件1（假设为视频流播放控件）和控件2（假设为会议控制按钮控件），会议应用便会向位于应用程序框架层的显示管理框架中用于确定控件信息的显示处理模块发送申请绘制控件的绘制请求。此时，如果用户点击了会议控制按钮图层中显示的共享设置选项，并选中了提供的白板模式，会议应用同样会向位于应用程序框架层的显示管理框架中用于确定控件信息的显示处理模块发送申请绘制白板批注控件的绘制请求。关于在手机界面执行白板模式切换的操作的过程，详见图14a和图14b示出的界面图。
继续参照图15,示例性的,显示处理模块接收到会议应用发送的绘制控件1和控件2的绘制请求后,根据绘制请求中携带的信息,分别确定需要绘制的控件对应的控件信息,以及由图层识别模块识别控件1和控件2的图层类型的操作,具体描述可参照场景一,此处不再赘述。
此外，应当理解的，白板批注控件本质上也是一种控件，只是具体的控件属性和控件信息与视频流播放控件、会议控制按钮控件不同，但确定白板批注控件的控件信息的过程与视频流播放控件和会议控制按钮控件的过程类似。显示处理模块同样可以根据抓取到的应用ID确定应用类型，例如根据上述发起绘制请求的是会议应用，则根据会议应用的ID确定的应用类型可以为会议类型，以及根据抓取到的控件名确定控件类型，例如在抓取到的控件名为BlankWindow时，确定的控件类型为白板批注控件。
相应地,显示处理模块在确定白板批注控件对应的控件信息后,同样会将确定的控件信息发送给图层识别模块,由图层识别模块进行图层类型的识别,进而由图层识别模块将识别出的图层类型发送给图层过滤模块。接着,图层过滤模块根据预置的过滤规则进行过滤,并将过滤出的需要显示在不同设备的控件的资源分别发送给合成渲染模块,由合成渲染模块合成需要输送到不同设备进行显示的画面。
继续参照图15,示例性的,假设过滤模块中预置的规则1是对应手机屏幕的过滤规则,规则1中规定了所有图层的内容都在手机屏幕显示,则根据规则1过滤出的内容为切换到白板模式显示的白板批注控件,以及供用户操作的会议控制按钮控件。
相应地，假设过滤模块中预置的规则2是对应电视屏幕的过滤规则，规则2中规定了在白板模式下，电视屏幕仅显示白板批注控件中的内容，则根据过滤规则2，图层过滤模块会将视频流播放控件、会议控制按钮控件过滤掉，仅将白板批注控件的资源发送至合成渲染模块进行画面绘制。
继续参照图15,示例性的,合成渲染模块在接收到图层过滤模块根据不同过滤规则过滤出的控件的资源后,根据对应的资源绘制控件,进而得到需要显示到不同设备的画面。
同样,合成渲染模块绘制出的画面可以先缓存到对应的显示缓存中,例如根据白板批注控件的资源和会议控制按钮控件的资源绘制的画面A需要缓存到显示缓存A中,在需要进行画面显示时,从显示缓存A取出画面A发送给手机显示驱动,由手机显示驱动发送给手机屏幕进行显示,从而得到图14b手机100的显示界面显示的画面20。
相应地，对于根据白板批注控件的资源绘制的画面B需要缓存到显示缓存B中，在需要进行画面投射时，从显示缓存B取出画面B发送给投屏录制模块进行录制，进而得到投屏内容，并由投屏录制模块将得到的投屏内容发送给预先建立通信连接的电视机，由电视机中的视频解码模块进行解码处理，进而解析出画面B，然后将画面B传输给投屏显示模块，最终在电视屏幕显示画面B。
由此，通过对会议画面的图层进行分离，在投射内容的电子设备，如手机侧显示全部图层合成的画面，在显示投射内容的大屏设备，如电视机上仅显示视频流播放图层，从而在用户通过操作手机侧会议控制按钮图层中显示的会议控制按钮控件，将当前会议画面切换为白板模式时，整个切换过程不会在电视屏幕显示，电视屏幕依旧显示视频流播放图层中的内容；在手机侧切换到白板模式后，电视屏幕将当前显示的画面切换为白板批注画面。这样既不影响使用电视屏幕观看会议的用户的视觉体验，又能够及时显示切换后的白板画面，实现了不同界面的无缝切换。
此外,需要说明的是,在实际的应用场景中,由于投射内容的电子设备与显示投射的内容的大屏设备的分辨率、屏幕尺寸、比例会有所差异,因此在一个实现方式中,可以根据分辨率预先确定录制的投屏内容的帧率,在另一个实现方式中,可以根据屏幕尺寸、比例等信息对投射在大屏设备的画面进行去黑边处理。
关于投屏时,去黑边的处理方式,具体如下:
在对所述第二显示缓存中的所述第二显示画面进行录制,得到投屏内容的过程中,可以先获取所述第一电子设备的第一屏幕纵横比和所述第二电子设备的第二屏幕纵横比;然后,在所述第一屏幕纵横比和所述第二屏幕纵横比不同时,对所述第二显示缓存中的所述第二显示画面进行去黑边处理,并录制去黑边后的所述第二显示画面得到所述投屏内容;在所述第一屏幕纵横比和所述第二屏幕纵横比相同时,对所述第二显示缓存中的所述第二显示画面进行录制,得到投屏内容。这样,引入去黑边处理,从而可以保证显示在第二电子设备的第二显示画面没有黑边,或者尽可能减小黑边,从而提升了观看投屏画面的用户观看体验。
可理解的,第二显示画面是在第一电子设备内的虚拟屏幕绘制的。
示例性的,在第一屏幕纵横比和第二屏幕纵横比不相同时,为了去除第二显示画面的黑边,可以先设置虚拟屏幕的可见区域。
为了与虚拟屏幕区分,将手机的屏幕称为物理屏幕。手机的物理屏幕一般是矩形或者近似矩形,虚拟屏幕与物理屏幕对应,一般是矩形,本申请实施例中虚拟屏幕的可见区域是物理屏幕显示区域的部分区域或者全部区域。
参见图16a,手机的物理屏幕在显示界面时,包括显示内容层(DisplayRect)和观察窗口层(Viewport),同样的,虚拟屏幕也可以包括显示内容层和观察窗口层,不管是物理屏幕还是虚拟屏幕,屏幕的人眼可见区域与观察窗口层的区域设置信息相关,例如图16a中物理屏幕的观察窗口层为2340*1080,那么人眼可见区域为2340*1080,虚拟屏幕的观察窗口层为1920*1080,那么人眼可见区域也为1920*1080。需要说明的是,在投屏时虚拟屏幕显示的界面是不向用户显示的,其人眼可见区域是指录屏时可以被录制的区域。
不管是物理屏幕还是虚拟屏幕，其区域的设置信息与物理屏幕的分辨率相关。举例来说，参见图16b所示，可以以物理屏幕的左上角顶点为原点O，经过原点O水平向右为x轴，经过原点O竖直向下为y轴建立坐标系，则物理屏幕中的每个像素点可以通过坐标(x,y)标识。基于该坐标系，参见图16a，以物理屏幕的分辨率是2340*1080为例，物理屏幕显示内容层的显示区域可以设置为(0,0,2340,1080)，观察窗口层的显示区域为(0,0,2340,1080)，虚拟屏幕的显示内容层的显示区域可以设置为(0,0,2340,1080)，观察窗口层的显示区域可以设置为(210,0,2130,1080)。本申请实施例中，虚拟屏幕的可见区域可以是虚拟屏幕的观察窗口层的显示区域。通过设置观察窗口层的显示区域，可以改变虚拟屏幕的人眼可见区域，也即改变发送至第二电子设备的视频帧的长宽比和内容。例如，可见区域设置为(210,0,2130,1080)，则可见区域的长宽比例变为16:9，也即通过录制该比例的虚拟屏画面得到的第二显示画面的长宽比例为16:9，实际视频帧中的内容是图16b中矩形ABCD中的内容。
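示例性的，按照上述坐标系计算虚拟屏幕可见区域（即观察窗口层显示区域）的过程，可以用如下示意性代码（Python）表示。其中假设内容按高度撑满、黑边只出现在左右两侧，函数名和参数均为本文假设：

```python
def visible_region(screen_w, screen_h, content_ratio=(16, 9)):
    """返回 (left, top, right, bottom) 形式的可见区域。"""
    rw, rh = content_ratio
    content_w = screen_h * rw // rh        # 内容按高度撑满时的宽度
    left = (screen_w - content_w) // 2     # 左右两侧黑边各占的宽度
    return (left, 0, left + content_w, screen_h)
```

例如，对分辨率为2340*1080的物理屏幕和16:9的内容，计算得到的可见区域即为上文中的(210,0,2130,1080)。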
在一种可能的实现方式中,手机设置虚拟屏幕的可见区域可以包括:
手机获取虚拟屏中绘制的第二显示画面的尺寸以及大屏设备中显示控件的尺寸;
如果判断显示控件的尺寸小于第二显示画面的尺寸,根据显示控件的显示区域设置虚拟屏幕的可见区域;
如果判断显示控件的尺寸不小于第二显示画面的尺寸,将虚拟屏幕的可见区域设置为与大屏设备的屏幕的显示区域相同。
在另一种可能的实现方式中,手机中虚拟屏幕的可见区域可以包括:
手机对初次根据第二图层过滤规则和各图层的图层类型绘制的第二显示画面进行截屏，对截屏得到的图片进行黑边检测；
如果检测到黑边,根据非黑边区域在第二显示画面中的位置设置虚拟屏幕的可见区域;
如果未检测到黑边,将虚拟屏幕的可见区域设置为与大屏设备的屏幕的显示区域相同。
在检测图片是否存在黑边时,可以通过检测图片的指定区域中像素的颜色是否是黑色实现,例如,如果手机的屏幕分辨率是2340*1080,也即屏幕纵横比为19.5:9,而视频或者PPT等全屏播放时的长宽比例一般为16:9,那么,可以检测图片的区域(0,0,210,1080)以及区域(2130,0,2340,1080)中像素的RGB值是否均为(0,0,0),如果是,则上述区域为黑边区域,否则,不是黑边区域。
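示例性的，上述按指定区域检测像素是否全黑的判断可以用如下示意性代码（Python）表示。这里以行优先的RGB像素二维列表模拟截屏图片，并缩小了尺寸，仅用于说明思路，并非本申请的实际实现：

```python
def detect_black_bars(frame, bar_width):
    """frame 为二维列表，每个元素是 (R, G, B)；检测左右两侧宽度为 bar_width 的区域是否全黑。"""
    height, width = len(frame), len(frame[0])

    def all_black(x0, x1):
        # 区域内所有像素的 RGB 值均为 (0, 0, 0) 时判定为黑边
        return all(frame[y][x] == (0, 0, 0)
                   for y in range(height) for x in range(x0, x1))

    return all_black(0, bar_width) and all_black(width - bar_width, width)
```

实际场景中检测的区域宽度由屏幕纵横比与内容长宽比的差值决定，例如上文中2340*1080屏幕上16:9内容对应左右各210像素的检测区域。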
在又一种可能的实现方式中,手机设置虚拟屏幕的可见区域可以包括:
手机获取第二显示画面的尺寸以及大屏设备的屏幕显示控件的尺寸;
如果判断显示控件的尺寸小于第二显示画面的尺寸,根据显示控件的显示区域设置虚拟屏幕的可见区域;
如果判断显示控件的尺寸不小于第二显示画面的尺寸，对虚拟屏画面进行截屏，对截屏得到的图片进行黑边检测；
如果检测到黑边,根据非黑边区域设置虚拟屏幕的可见区域;
如果未检测到黑边,将虚拟屏幕的可见区域设置为与大屏设备的屏幕的显示区域相同。
为了保证黑边检测结果的准确性，手机可以对虚拟屏画面进行多次截屏，根据多次截屏得到的多个图片的黑边检测结果设置虚拟屏幕的可见区域。
通过上述几种方式,便可以完成去黑边处理,最终通过录制完成去黑边处理后的虚拟屏画面,便可以得到需要显示在大屏设备的第二显示画面。
上述说明仅是为了更好的理解本实施例的技术方案而列举的示例,不作为对本实施例的唯一限制。
关于投屏时,端到端帧率的协商方法,具体如下:
在对所述第二显示缓存中的所述第二显示画面进行录制，得到投屏内容的过程中，可以先获取所述第二电子设备的显示能力；根据所述显示能力确定视频流刷新帧率；根据所述视频流刷新帧率，对所述第二显示缓存中的所述第二显示画面进行录制，得到投屏内容。这样，在录制投屏内容时，第一电子设备先与第二电子设备协商视频流刷新帧率，从而既保证了传输的视频流能在第二电子设备正常显示，又可以避免对带宽的过度占用。
也就是说,先根据大屏设备的显示能力确定本次投屏过程中,投射内容的电子设备到大屏设备需要的视频流刷新帧率。
可理解的，在实际的应用场景中，投射内容的电子设备与大屏设备的硬件能力会存在不对等的情况，如果按照投射内容的电子设备的高帧率投射，在帧率较低的大屏设备播放时就需要做丢弃处理，这样按照高帧率传输的视频流实际是存在带宽浪费的。因此，在进行投屏时，根据大屏设备的显示能力来确定视频流的刷新帧率，这样既保证了传输的视频流能在大屏设备正常显示，又可以避免对带宽的过度占用。
然后，在投射内容的电子设备侧根据协商的刷新帧率，直接录制对应刷新帧率的投屏内容，从而保证传输到大屏设备侧的视频流的刷新帧率就是大屏设备支持的，故而大屏设备无需对视频做丢弃处理。
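示例性的，帧率协商与降帧处理可以用如下示意性代码（Python）表示：取两端帧率能力的较小值作为协商结果，并以简单抽帧的方式示意降帧。函数名和抽帧策略均为本文假设，实际实现可能采用更复杂的策略：

```python
def negotiate_fps(source_fps, sink_fps):
    """取投射端与大屏端帧率能力的较小值，避免大屏侧丢帧和带宽浪费。"""
    return min(source_fps, sink_fps)

def downsample(frames, source_fps, target_fps):
    """按协商出的帧率对一秒内的帧序列做简单抽帧（假设帧率成整数倍关系）。"""
    step = source_fps // target_fps
    return frames[::step]
```

例如，投射端能力为60帧/秒、大屏端为30帧/秒时，协商结果为30帧/秒，一秒内的60帧经抽帧后只传输30帧。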
此外，在另一种实现方式中，对视频流的降帧处理，可以是在渲染合成模块中进行的，即对于需要输送到大屏设备显示的画面，渲染合成模块在合成时，直接按照大屏设备的视频流刷新帧率进行降帧处理，合成满足要求的视频流，从而实现需要传输到大屏设备显示的视频流已经按照大屏设备的显示能力进行降帧，这样既可以减少网络传输过程中对带宽的占用，又可以降低大屏设备渲染接收到的视频流时消耗的功耗。
此外,可理解的,在具体实现时,投射内容的电子设备为了实现上述功能,其包含了执行各个功能相应的硬件和/或软件模块。结合本文中所公开的实施例描述的各示例的算法步骤,本申请能够以硬件或硬件和计算机软件的结合形式来实现。某个功能究竟以硬件还是计算机软件驱动硬件的方式来执行,取决于技术方案的特定应用和设计约束条件。本领域技术人员可以结合实施例对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请的范围。
本实施例可以根据上述方法示例对电子设备进行功能模块的划分,例如,可以对应各个功能划分各个功能模块,也可以将两个或两个以上的功能集成在一个处理模块中。上述集成的模块可以采用硬件的形式实现。需要说明的是,本实施例中对模块的划分是示意性的,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式。
示例性的,图17示出了本申请实施例的一种装置300的示意性框图。装置300可包括:处理器301和收发器/收发管脚302,可选地,还包括存储器303。
装置300的各个组件通过总线304耦合在一起,其中,总线304除包括数据总线之外,还包括电源总线、控制总线和状态信号总线。但是为了清楚说明起见,在图中将各种总线都称为总线304。
可选地,存储器303可以用于前述方法实施例中的指令。该处理器301可用于执行存储器303中的指令,并控制接收管脚接收信号,以及控制发送管脚发送信号。
此外,需要说明的是,在实际的应用场景中,装置300可以是上述方法实施例中的用于投射画面到大屏进行显示的电子设备,例如手机。
具体的,在装置300为发起投屏的第一电子设备,且所述第一电子设备安装有第一应用,所述第一应用包括第一控件和第二控件,所述第一控件位于第一图层,所述第二控件位于第二图层时,电子设备中的一个或多个计算机程序存储在所述存储器上,当所述计算机程序被所述一个或多个处理器执行时,使得所述电子设备执行以下步骤:
获取所述第一控件的控件信息和所述第二控件的控件信息;
根据所述第一控件的控件信息确定所述第一图层的图层类型;
根据所述第二控件的控件信息确定所述第二图层的图层类型,所述第一图层的图层类型与所述第二图层的图层类型不同;
根据所述第一图层的图层类型和所述第二图层的图层类型,生成第一显示画面和第二显示画面,并在所述第一电子设备的屏幕显示所述第一画面,将所述第二显示画面投射到第二电子设备的屏幕进行显示,所述第一显示画面包括所述第一控件和所述第二控件,所述第二显示画面包括所述第一控件,不包括所述第二控件。
示例性的,在一个例子中,当所述计算机程序被所述一个或多个处理器执行时,使得所述电子设备执行以下步骤:
从所述第一控件的控件信息中提取所述第一控件的控件名和所述第一控件的尺寸信息;
对所述控件名进行解析,在从所述控件名中解析出所述第一控件的控件类型、所述第一应用的包名和所述第一控件所在的界面信息时,根据所述控件类型、所述包名、所述界面信息和所述尺寸信息,确定所述第一图层的图层类型;
在从所述控件名中解析出所述第一控件的控件类型和所述第一控件所在的界面信息,未解析出所述第一应用的包名时,获取绘制所述第一控件的进程的进程标识号PID,根据所述PID确定所述第一控件的来源,根据所述控件类型、所述来源、所述界面信息和所述尺寸信息,确定所述第一图层的图层类型,所述来源包括所述第一应用的包名。
示例性的,在一个例子中,当所述计算机程序被所述一个或多个处理器执行时,使得所述电子设备执行以下步骤:
将所述控件名和所述尺寸信息作为检索关键词;
根据所述关键词,在图层标识记录库中查找与所述关键词匹配的控件;
在查找到与所述关键词匹配的控件时,将所述控件对应的图层类型确定为所述第一图层的图层类型;
在未查找到与所述关键词匹配的控件时,执行对所述控件名进行解析的步骤。
示例性的,在一个例子中,当所述计算机程序被所述一个或多个处理器执行时,使得所述电子设备执行以下步骤:
在根据所述第一控件的控件信息无法确定所述第一图层的图层类型时,获取所述第一应用当前显示的画面,所述当前显示的画面中包括所述第一控件;
根据所述当前显示的画面中所述第一控件显示的内容确定所述第一图层的图层类型。
示例性的,在一个例子中,当所述计算机程序被所述一个或多个处理器执行时,使得所述电子设备执行以下步骤:
从所述第二控件的控件信息中提取所述第二控件的控件名和所述第二控件的尺寸信息;
对所述控件名进行解析,在从所述控件名中解析出所述第二控件的控件类型、所述第一应用的包名和所述第二控件所在的界面信息时,根据所述控件类型、所述包名、所述界面信息和所述尺寸信息,确定所述第二图层的图层类型;
在从所述控件名中解析出所述第二控件的控件类型和所述第二控件所在的界面信息,未解析出所述第一应用的包名时,获取绘制所述第二控件的进程的进程标识号PID,根据所述PID确定所述第二控件的来源,根据所述控件类型、所述来源、所述界面信息和所述尺寸信息,确定所述第二图层的图层类型,所述来源包括所述第一应用的包名。
示例性的,在一个例子中,当所述计算机程序被所述一个或多个处理器执行时,使得所述电子设备执行以下步骤:
在根据所述第二控件的控件信息无法确定所述第二图层的图层类型时,获取所述第一应用当前显示的画面,所述当前显示的画面中包括所述第二控件;
根据所述当前显示的画面中所述第二控件显示的内容确定所述第二图层的图层类型。
示例性的,在一个例子中,当所述计算机程序被所述一个或多个处理器执行时,使得所述电子设备执行以下步骤:
根据所述第一图层的图层类型和所述第二图层的图层类型,生成第一显示画面和第二显示画面,并将所述第一显示画面缓存到第一显示缓存,将所述第二显示画面缓存到第二显示缓存;
按照缓存顺序,从所述第一显示缓存中取出所述第一显示画面,并将所述第一显示画面在所述第一电子设备的屏幕进行显示;
对所述第二显示缓存中的所述第二显示画面进行录制,得到投屏内容;
将所述投屏内容发送至所述第二电子设备,供所述第二电子设备解码所述投屏内容,得到所述第二显示画面,并在所述第二电子设备的屏幕进行显示。
示例性的,在一个例子中,当所述计算机程序被所述一个或多个处理器执行时,使得所述电子设备执行以下步骤:
确定所述第一显示缓存对应的第一图层过滤规则和所述第二显示缓存对应的第二图层过滤规则;
根据所述第一图层过滤规则、所述第一图层的图层类型和所述第二图层的图层类型,确定所述第一显示画面包括所述第一图层和所述第二图层;
获取所述第一图层中所述第一控件的资源和所述第二图层中所述第二控件的资源,根据所述第一控件的资源和所述第二控件的资源,生成所述第一显示画面,并将所述第一显示画面缓存到第一显示缓存;
根据所述第二图层过滤规则、所述第一图层的图层类型和所述第二图层的图层类型,确定所述第二显示画面包括所述第一图层;
获取所述第一图层中所述第一控件的资源,根据所述第一控件的资源生成所述第二显示画面,并将所述第二显示画面缓存到所述第二显示缓存。
示例性的,在一个例子中,当所述计算机程序被所述一个或多个处理器执行时,使得所述电子设备执行以下步骤:
获取所述第一电子设备的第一设备标识和所述第二电子设备的第二设备标识;
在图层过滤规则表中查找与所述第一设备标识匹配的图层过滤规则,将查找到的图层过滤规则确定为所述第一显示缓存对应的所述第一图层过滤规则;
在所述图层过滤规则表中查找与所述第二设备标识匹配的图层过滤规则,将查找到的图层过滤规则确定为所述第二显示缓存对应的所述第二图层过滤规则。
示例性的,在一个例子中,当所述计算机程序被所述一个或多个处理器执行时,使得所述电子设备执行以下步骤:
在所述第一电子设备的屏幕上显示可供用户操作的图层过滤规则决策界面,所述图层过滤规则决策界面包括所述第一控件和所述第一控件所在的所述第一图层的图层类型,所述第二控件和所述第二控件所在的所述第二图层的图层类型;
响应于用户为所述第一显示缓存设置所述第一图层过滤规则的操作行为,生成所述第一图层过滤规则;
响应于用户为所述第二显示缓存设置所述第二图层过滤规则的操作行为,生成所述第二图层过滤规则。
示例性的,在一个例子中,当所述计算机程序被所述一个或多个处理器执行时,使得所述电子设备执行以下步骤:
获取所述第一电子设备的第一屏幕纵横比和所述第二电子设备的第二屏幕纵横比;
在所述第一屏幕纵横比和所述第二屏幕纵横比不同时,对所述第二显示缓存中的所述第二显示画面进行去黑边处理,并录制去黑边后的所述第二显示画面得到所述投屏内容;
在所述第一屏幕纵横比和所述第二屏幕纵横比相同时,对所述第二显示缓存中的所述第二显示画面进行录制,得到投屏内容。
示例性的,在一个例子中,当所述计算机程序被所述一个或多个处理器执行时,使得所述电子设备执行以下步骤:
获取所述第二电子设备的显示能力;
根据所述显示能力确定视频流刷新帧率;
根据所述视频流刷新帧率,对所述第二显示缓存中的所述第二显示画面进行录制,得到投屏内容。
其中,上述方法实施例涉及的各步骤的所有相关内容均可以援引到对应功能模块的功能描述,在此不再赘述。
本实施例还提供一种计算机可读存储介质,该计算机存储介质中存储有计算机指令,当该计算机指令在电子设备/网络设备(例如OTA服务器、CABE服务器)上运行时,使得电子设备/网络设备执行上述相关方法步骤实现上述实施例中的投屏方法。
本实施例还提供了一种计算机程序产品,当该计算机程序产品在计算机上运行时,使得计算机执行上述相关步骤,以实现上述实施例中的投屏方法。
另外,本申请的实施例还提供一种芯片(也可以是组件或模块),该芯片可包括一个或多个处理电路和一个或多个收发管脚;其中,所述收发管脚和所述处理电路通过内部连接通路互相通信,所述处理电路执行上述相关方法步骤实现上述实施例中的投屏方法,以控制接收管脚接收信号,以控制发送管脚发送信号。
其中,本实施例提供的电子设备、计算机可读存储介质、计算机程序产品或芯片均用于执行上文所提供的对应的方法,因此,其所能达到的有益效果可参考上文所提供的对应的方法中的有益效果,此处不再赘述。
通过以上实施方式的描述,所属领域的技术人员可以了解到,为描述的方便和简洁,仅以上述各功能模块的划分进行举例说明,实际应用中,可以根据需要而将上述功能分配由不同的功能模块完成,即将装置的内部结构划分成不同的功能模块,以完成以上描述的全部或者部分功能。
在本申请所提供的几个实施例中,应该理解到,所揭露的装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,模块或单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个装置,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是一个物理单元或多个物理单元,即可以位于一个地方,或者也可以分布到多个不同地方。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本申请各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。
本申请各个实施例的任意内容,以及同一实施例的任意内容,均可以自由组合。对上述内容的任意组合均在本申请的范围之内。
集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个可读取存储介质中。基于这样的理解,本申请实施例的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来,该软件产品存储在一个存储介质中,包括若干指令用以使得一个设备(可以是单片机,芯片等)或处理器(processor)执行本申请各个实施例方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(read only memory,ROM)、随机存取存储器(random access memory,RAM)、磁碟或者光盘等各种可以存储程序代码的介质。
上面结合附图对本申请的实施例进行了描述,但是本申请并不局限于上述的具体实施方式,上述的具体实施方式仅仅是示意性的,而不是限制性的,本领域的普通技术人员在本申请的启示下,在不脱离本申请宗旨和权利要求所保护的范围情况下,还可做出很多形式,均属于本申请的保护之内。

Claims (50)

  1. 一种投屏方法,其特征在于,应用于发起投屏的第一电子设备,所述第一电子设备安装有第一应用,所述第一应用包括第一控件和第二控件,所述第一控件位于第一图层,所述第二控件位于第二图层,所述方法包括:
    获取所述第一控件的控件信息和所述第二控件的控件信息;
    根据所述第一控件的控件信息确定所述第一图层的图层类型;
    根据所述第二控件的控件信息确定所述第二图层的图层类型,所述第一图层的图层类型与所述第二图层的图层类型不同;
    根据所述第一图层的图层类型和所述第二图层的图层类型,生成第一显示画面和第二显示画面,并在所述第一电子设备的屏幕显示所述第一画面,将所述第二显示画面投射到第二电子设备的屏幕进行显示,所述第一显示画面包括所述第一控件和所述第二控件,所述第二显示画面包括所述第一控件,不包括所述第二控件。
  2. 根据权利要求1所述的方法,其特征在于,所述根据所述第一控件的控件信息确定所述第一图层的图层类型,包括:
    从所述第一控件的控件信息中提取所述第一控件的控件名和所述第一控件的尺寸信息;
    对所述控件名进行解析,在从所述控件名中解析出所述第一控件的控件类型、所述第一应用的包名和所述第一控件所在的界面信息时,根据所述控件类型、所述包名、所述界面信息和所述尺寸信息,确定所述第一图层的图层类型;
    在从所述控件名中解析出所述第一控件的控件类型和所述第一控件所在的界面信息,未解析出所述第一应用的包名时,获取绘制所述第一控件的进程的进程标识号PID,根据所述PID确定所述第一控件的来源,根据所述控件类型、所述来源、所述界面信息和所述尺寸信息,确定所述第一图层的图层类型,所述来源包括所述第一应用的包名。
  3. 根据权利要求2所述的方法,其特征在于,在对所述控件名进行解析之前,所述方法还包括:
    将所述控件名和所述尺寸信息作为检索关键词;
    根据所述关键词,在图层标识记录库中查找与所述关键词匹配的控件;
    在查找到与所述关键词匹配的控件时,将所述控件对应的图层类型确定为所述第一图层的图层类型;
    在未查找到与所述关键词匹配的控件时,执行对所述控件名进行解析的步骤。
  4. 根据权利要求3所述的方法,其特征在于,所述方法还包括:
    在根据所述第一控件的控件信息无法确定所述第一图层的图层类型时,获取所述第一应用当前显示的画面,所述当前显示的画面中包括所述第一控件;
    根据所述当前显示的画面中所述第一控件显示的内容确定所述第一图层的图层类型。
  5. 根据权利要求1所述的方法,其特征在于,所述根据所述第二控件的控件信息确定所述第二图层的图层类型,包括:
    从所述第二控件的控件信息中提取所述第二控件的控件名和所述第二控件的尺寸信息;
    对所述控件名进行解析,在从所述控件名中解析出所述第二控件的控件类型、所述第一应用的包名和所述第二控件所在的界面信息时,根据所述控件类型、所述包名、所述界面信息和所述尺寸信息,确定所述第二图层的图层类型;
    在从所述控件名中解析出所述第二控件的控件类型和所述第二控件所在的界面信息,未解析出所述第一应用的包名时,获取绘制所述第二控件的进程的进程标识号PID,根据所述PID确定所述第二控件的来源,根据所述控件类型、所述来源、所述界面信息和所述尺寸信息,确定所述第二图层的图层类型,所述来源包括所述第一应用的包名。
  6. 根据权利要求5所述的方法,其特征在于,所述方法还包括:
    在根据所述第二控件的控件信息无法确定所述第二图层的图层类型时,获取所述第一应用当前显示的画面,所述当前显示的画面中包括所述第二控件;
    根据所述当前显示的画面中所述第二控件显示的内容确定所述第二图层的图层类型。
  7. 根据权利要求1所述的方法,其特征在于,所述根据所述第一图层的图层类型和所述第二图层的图层类型,生成第一显示画面和第二显示画面,并在所述第一电子设备的屏幕显示所述第一画面,将所述第二显示画面投射到第二电子设备的屏幕进行显示,包括:
    根据所述第一图层的图层类型和所述第二图层的图层类型,生成第一显示画面和第二显示画面,并将所述第一显示画面缓存到第一显示缓存,将所述第二显示画面缓存到第二显示缓存;
    按照缓存顺序,从所述第一显示缓存中取出所述第一显示画面,并将所述第一显示画面在所述第一电子设备的屏幕进行显示;
    对所述第二显示缓存中的所述第二显示画面进行录制,得到投屏内容;
    将所述投屏内容发送至所述第二电子设备,供所述第二电子设备解码所述投屏内容,得到所述第二显示画面,并在所述第二电子设备的屏幕进行显示。
  8. 根据权利要求7所述的方法,其特征在于,所述根据所述第一图层的图层类型和所述第二图层的图层类型,生成第一显示画面和第二显示画面,并将所述第一显示画面缓存到第一显示缓存,将所述第二显示画面缓存到第二显示缓存,包括:
    确定所述第一显示缓存对应的第一图层过滤规则和所述第二显示缓存对应的第二图层过滤规则;
    根据所述第一图层过滤规则、所述第一图层的图层类型和所述第二图层的图层类型,确定所述第一显示画面包括所述第一图层和所述第二图层;
    获取所述第一图层中所述第一控件的资源和所述第二图层中所述第二控件的资源,根据所述第一控件的资源和所述第二控件的资源,生成所述第一显示画面,并将所述第一显示画面缓存到第一显示缓存;
    根据所述第二图层过滤规则、所述第一图层的图层类型和所述第二图层的图层类型,确定所述第二显示画面包括所述第一图层;
    获取所述第一图层中所述第一控件的资源,根据所述第一控件的资源生成所述第二显示画面,并将所述第二显示画面缓存到所述第二显示缓存。
  9. 根据权利要求8所述的方法,其特征在于,所述确定所述第一显示缓存对应的第一图层过滤规则和所述第二显示缓存对应的第二图层过滤规则,包括:
    获取所述第一电子设备的第一设备标识和所述第二电子设备的第二设备标识;
    在图层过滤规则表中查找与所述第一设备标识匹配的图层过滤规则,将查找到的图层过滤规则确定为所述第一显示缓存对应的所述第一图层过滤规则;
    在所述图层过滤规则表中查找与所述第二设备标识匹配的图层过滤规则,将查找到的图层过滤规则确定为所述第二显示缓存对应的所述第二图层过滤规则。
  10. 根据权利要求8所述的方法,其特征在于,所述确定所述第一显示缓存对应的第一图层过滤规则和所述第二显示缓存对应的第二图层过滤规则,包括:
    在所述第一电子设备的屏幕上显示可供用户操作的图层过滤规则决策界面,所述图层过滤规则决策界面包括所述第一控件和所述第一控件所在的所述第一图层的图层类型,所述第二控件和所述第二控件所在的所述第二图层的图层类型;
    响应于用户为所述第一显示缓存设置所述第一图层过滤规则的操作行为,生成所述第一图层过滤规则;
    响应于用户为所述第二显示缓存设置所述第二图层过滤规则的操作行为,生成所述第二图层过滤规则。
  11. 根据权利要求7所述的方法,其特征在于,所述对所述第二显示缓存中的所述第二显示画面进行录制,得到投屏内容,包括:
    获取所述第一电子设备的第一屏幕纵横比和所述第二电子设备的第二屏幕纵横比;
    在所述第一屏幕纵横比和所述第二屏幕纵横比不同时,对所述第二显示缓存中的所述第二显示画面进行去黑边处理,并录制去黑边后的所述第二显示画面得到所述投屏内容;
    在所述第一屏幕纵横比和所述第二屏幕纵横比相同时,对所述第二显示缓存中的所述第二显示画面进行录制,得到投屏内容。
  12. 根据权利要求7所述的方法,其特征在于,所述对所述第二显示缓存中的所述第二显示画面进行录制,得到投屏内容,包括:
    获取所述第二电子设备的显示能力;
    根据所述显示能力确定视频流刷新帧率;
    根据所述视频流刷新帧率,对所述第二显示缓存中的所述第二显示画面进行录制,得到投屏内容。
  13. 根据权利要求1至12任一项所述的方法,其特征在于,所述第一应用为会议应用,所述第一控件为视频类控件,所述第一图层的图层类型为视频流播放图层;所述第二控件为按钮类控件,所述第二图层的图层类型为会议控制按钮图层。
  14. 根据权利要求1至12任一项所述的方法,其特征在于,所述第一应用为会议应用,所述第一控件为白板批注控件,所述第一图层的图层类型为白板批注图层;所述第二控件为按钮类控件,所述第二图层的图层类型为会议控制按钮图层。
  15. 一种电子设备,其特征在于,所述电子设备为第一电子设备,所述第一电子设备安装有第一应用,所述第一应用包括第一控件和第二控件,所述第一控件位于第一图层,所述第二控件位于第二图层,所述电子设备包括:
    一个或多个处理器;
    存储器;
    以及一个或多个计算机程序,其中所述一个或多个计算机程序存储在所述存储器上,当所述计算机程序被所述一个或多个处理器执行时,使得所述电子设备执行以下步骤:
    获取所述第一控件的控件信息和所述第二控件的控件信息;
    根据所述第一控件的控件信息确定所述第一图层的图层类型;
    根据所述第二控件的控件信息确定所述第二图层的图层类型,所述第一图层的图层类型与所述第二图层的图层类型不同;
    根据所述第一图层的图层类型和所述第二图层的图层类型,生成第一显示画面和第二显示画面,并在所述第一电子设备的屏幕显示所述第一画面,将所述第二显示画面投射到第二电子设备的屏幕进行显示,所述第一显示画面包括所述第一控件和所述第二控件,所述第二显示画面包括所述第一控件,不包括所述第二控件。
  16. 根据权利要求15所述的电子设备,其特征在于,当所述计算机程序被所述一个或多个处理器执行时,使得所述电子设备执行以下步骤:
    从所述第一控件的控件信息中提取所述第一控件的控件名和所述第一控件的尺寸信息;
    对所述控件名进行解析,在从所述控件名中解析出所述第一控件的控件类型、所述第一应用的包名和所述第一控件所在的界面信息时,根据所述控件类型、所述包名、所述界面信息和所述尺寸信息,确定所述第一图层的图层类型;
    在从所述控件名中解析出所述第一控件的控件类型和所述第一控件所在的界面信息，未解析出所述第一应用的包名时，获取绘制所述第一控件的进程的进程标识号PID，根据所述PID确定所述第一控件的来源，根据所述控件类型、所述来源、所述界面信息和所述尺寸信息，确定所述第一图层的图层类型，所述来源包括所述第一应用的包名。
  17. 根据权利要求16所述的电子设备,其特征在于,当所述计算机程序被所述一个或多个处理器执行时,使得所述电子设备执行以下步骤:
    将所述控件名和所述尺寸信息作为检索关键词;
    根据所述关键词,在图层标识记录库中查找与所述关键词匹配的控件;
    在查找到与所述关键词匹配的控件时,将所述控件对应的图层类型确定为所述第一图层的图层类型;
    在未查找到与所述关键词匹配的控件时,执行对所述控件名进行解析的步骤。
  18. 根据权利要求17所述的电子设备,其特征在于,当所述计算机程序被所述一个或多个处理器执行时,使得所述电子设备执行以下步骤:
    在根据所述第一控件的控件信息无法确定所述第一图层的图层类型时,获取所述第一应用当前显示的画面,所述当前显示的画面中包括所述第一控件;
    根据所述当前显示的画面中所述第一控件显示的内容确定所述第一图层的图层类型。
  19. 根据权利要求15所述的电子设备,其特征在于,当所述计算机程序被所述一个或多个处理器执行时,使得所述电子设备执行以下步骤:
    从所述第二控件的控件信息中提取所述第二控件的控件名和所述第二控件的尺寸信息;
    对所述控件名进行解析,在从所述控件名中解析出所述第二控件的控件类型、所述第一应用的包名和所述第二控件所在的界面信息时,根据所述控件类型、所述包名、所述界面信息和所述尺寸信息,确定所述第二图层的图层类型;
    在从所述控件名中解析出所述第二控件的控件类型和所述第二控件所在的界面信息,未解析出所述第一应用的包名时,获取绘制所述第二控件的进程的进程标识号PID,根据所述PID确定所述第二控件的来源,根据所述控件类型、所述来源、所述界面信息和所述尺寸信息,确定所述第二图层的图层类型,所述来源包括所述第一应用的包名。
  20. 根据权利要求19所述的电子设备,其特征在于,当所述计算机程序被所述一个或多个处理器执行时,使得所述电子设备执行以下步骤:
    在根据所述第二控件的控件信息无法确定所述第二图层的图层类型时,获取所述第一应用当前显示的画面,所述当前显示的画面中包括所述第二控件;
    根据所述当前显示的画面中所述第二控件显示的内容确定所述第二图层的图层类型。
  21. 根据权利要求15所述的电子设备,其特征在于,当所述计算机程序被所述一个或多个处理器执行时,使得所述电子设备执行以下步骤:
    根据所述第一图层的图层类型和所述第二图层的图层类型,生成第一显示画面和第二显示画面,并将所述第一显示画面缓存到第一显示缓存,将所述第二显示画面缓存到第二显示缓存;
    按照缓存顺序,从所述第一显示缓存中取出所述第一显示画面,并将所述第一显示画面在所述第一电子设备的屏幕进行显示;
    对所述第二显示缓存中的所述第二显示画面进行录制,得到投屏内容;
    将所述投屏内容发送至所述第二电子设备,供所述第二电子设备解码所述投屏内容,得到所述第二显示画面,并在所述第二电子设备的屏幕进行显示。
  22. 根据权利要求21所述的电子设备,其特征在于,当所述计算机程序被所述一个或多个处理器执行时,使得所述电子设备执行以下步骤:
    确定所述第一显示缓存对应的第一图层过滤规则和所述第二显示缓存对应的第二图层过滤规则;
    根据所述第一图层过滤规则、所述第一图层的图层类型和所述第二图层的图层类型,确定所述第一显示画面包括所述第一图层和所述第二图层;
    获取所述第一图层中所述第一控件的资源和所述第二图层中所述第二控件的资源,根据所述第一控件的资源和所述第二控件的资源,生成所述第一显示画面,并将所述第一显示画面缓存到第一显示缓存;
    根据所述第二图层过滤规则、所述第一图层的图层类型和所述第二图层的图层类型,确定所述第二显示画面包括所述第一图层;
    获取所述第一图层中所述第一控件的资源,根据所述第一控件的资源生成所述第二显示画面,并将所述第二显示画面缓存到所述第二显示缓存。
  23. 根据权利要求21所述的电子设备,其特征在于,当所述计算机程序被所述一个或多个处理器执行时,使得所述电子设备执行以下步骤:
    获取所述第一电子设备的第一设备标识和所述第二电子设备的第二设备标识;
    在图层过滤规则表中查找与所述第一设备标识匹配的图层过滤规则,将查找到的图层过滤规则确定为所述第一显示缓存对应的所述第一图层过滤规则;
    在所述图层过滤规则表中查找与所述第二设备标识匹配的图层过滤规则,将查找到的图层过滤规则确定为所述第二显示缓存对应的所述第二图层过滤规则。
  24. 根据权利要求21所述的电子设备,其特征在于,当所述计算机程序被所述一个或多个处理器执行时,使得所述电子设备执行以下步骤:
    在所述第一电子设备的屏幕上显示可供用户操作的图层过滤规则决策界面,所述图层过滤规则决策界面包括所述第一控件和所述第一控件所在的所述第一图层的图层类型,所述第二控件和所述第二控件所在的所述第二图层的图层类型;
    响应于用户为所述第一显示缓存设置所述第一图层过滤规则的操作行为,生成所述第一图层过滤规则;
    响应于用户为所述第二显示缓存设置所述第二图层过滤规则的操作行为,生成所述第二图层过滤规则。
  25. 根据权利要求21所述的电子设备,其特征在于,当所述计算机程序被所述一个或多个处理器执行时,使得所述电子设备执行以下步骤:
    获取所述第一电子设备的第一屏幕纵横比和所述第二电子设备的第二屏幕纵横比;
    在所述第一屏幕纵横比和所述第二屏幕纵横比不同时,对所述第二显示缓存中的所述第二显示画面进行去黑边处理,并录制去黑边后的所述第二显示画面得到所述投屏内容;
    在所述第一屏幕纵横比和所述第二屏幕纵横比相同时,对所述第二显示缓存中的所述第二显示画面进行录制,得到投屏内容。
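上一项权利要求中"纵横比不同则先去黑边再录制、相同则直接录制"的判定,可用下面的 Python 草图示意。这里假设黑边对称分布、内容居中,仅以画面尺寸的变化代表去黑边处理,并非权利要求限定的实现:

```python
from fractions import Fraction

def crop_letterbox(frame_w, frame_h, content_ratio):
    """假设内容居中、黑边对称:按内容纵横比裁掉黑边,返回 (宽, 高)。"""
    r = Fraction(*content_ratio)
    if Fraction(frame_w, frame_h) > r:   # 画面比内容更宽:左右有黑边
        return int(frame_h * r), frame_h
    return frame_w, int(frame_w / r)     # 上下有黑边(或恰好无黑边)

def record_for_cast(frame_size, src_ratio, dst_ratio):
    """纵横比不同时先去黑边再录制;相同时直接录制(返回录制的画面尺寸)。"""
    if Fraction(*src_ratio) != Fraction(*dst_ratio):
        return crop_letterbox(*frame_size, dst_ratio)
    return frame_size
```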
  26. 根据权利要求21所述的电子设备,其特征在于,当所述计算机程序被所述一个或多个处理器执行时,使得所述电子设备执行以下步骤:
    获取所述第二电子设备的显示能力;
    根据所述显示能力确定视频流刷新帧率;
    根据所述视频流刷新帧率,对所述第二显示缓存中的所述第二显示画面进行录制,得到投屏内容。
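上一项权利要求中"按第二电子设备的显示能力确定视频流刷新帧率"的一种直观做法,是取对端最大刷新率与源帧率中的较小者。以下 Python 草图仅作示意,`max_refresh_hz` 字段名与默认值均为假设:

```python
def cast_frame_rate(display_capability, source_fps=60):
    """按第二电子设备的显示能力确定投屏视频流刷新帧率(字段名为假设)。"""
    max_hz = display_capability.get("max_refresh_hz", 30)  # 能力未知时保守取 30
    return min(max_hz, source_fps)
```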
  27. 一种投屏方法,其特征在于,应用于发起投屏的第一电子设备,所述第一电子设备安装有第一应用,所述第一应用包括第一控件和第二控件,所述第一控件位于第一图层,所述第二控件位于第二图层,所述方法包括:
    获取所述第一控件的控件信息和所述第二控件的控件信息;
    根据所述第一控件的控件信息确定所述第一图层的图层类型;
    根据所述第二控件的控件信息确定所述第二图层的图层类型,所述第一图层的图层类型与所述第二图层的图层类型不同;
    根据所述第一图层的图层类型和所述第二图层的图层类型,生成第一显示画面和第二显示画面,并在所述第一电子设备的屏幕显示所述第一显示画面,将所述第二显示画面投射到第二电子设备的屏幕进行显示,所述第一显示画面包括所述第一控件和所述第二控件,所述第二显示画面包括所述第一控件,不包括所述第二控件;
    其中,所述根据所述第一控件的控件信息确定所述第一图层的图层类型,包括:
    从所述第一控件的控件信息中提取所述第一控件的控件名和所述第一控件的尺寸信息;
    对所述第一控件的控件名进行解析,在从所述第一控件的控件名中解析出所述第一控件的控件类型、所述第一应用的包名和所述第一控件所在的界面信息时,根据所述第一控件的控件类型、所述包名、所述第一控件所在的界面信息和所述第一控件的尺寸信息,确定所述第一图层的图层类型;
    在从所述第一控件的控件名中解析出所述第一控件的控件类型和所述第一控件所在的界面信息,未解析出所述第一应用的包名时,获取绘制所述第一控件的进程的进程标识号PID,根据绘制所述第一控件的PID确定所述第一控件的来源,根据所述第一控件的控件类型、所述第一控件的来源、所述第一控件所在的界面信息和所述第一控件的尺寸信息,确定所述第一图层的图层类型,所述第一控件的来源包括所述第一应用的包名;
    所述根据所述第二控件的控件信息确定所述第二图层的图层类型,包括:
    从所述第二控件的控件信息中提取所述第二控件的控件名和所述第二控件的尺寸信息;
    对所述第二控件的控件名进行解析,在从所述第二控件的控件名中解析出所述第二控件的控件类型、所述第一应用的包名和所述第二控件所在的界面信息时,根据所述第二控件的控件类型、所述包名、所述第二控件所在的界面信息和所述第二控件的尺寸信息,确定所述第二图层的图层类型;
    在从所述第二控件的控件名中解析出所述第二控件的控件类型和所述第二控件所在的界面信息,未解析出所述第一应用的包名时,获取绘制所述第二控件的进程的进程标识号PID,根据绘制所述第二控件的PID确定所述第二控件的来源,根据所述第二控件的控件类型、所述第二控件的来源、所述第二控件所在的界面信息和所述第二控件的尺寸信息,确定所述第二图层的图层类型,所述第二控件的来源包括所述第一应用的包名。
  28. 根据权利要求27所述的方法,其特征在于,在对所述控件名进行解析之前,所述方法还包括:
    将所述控件名和所述尺寸信息作为检索关键词;
    根据所述关键词,在图层标识记录库中查找与所述关键词匹配的控件;
    在查找到与所述关键词匹配的控件时,将所述控件对应的图层类型确定为所述第一图层的图层类型;
    在未查找到与所述关键词匹配的控件时,执行对所述控件名进行解析的步骤。
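权利要求28所述的"先以控件名和尺寸为关键词查图层标识记录库,未命中再解析控件名"的步骤顺序,可用下面的 Python 草图示意(记录库以字典代表,回填逻辑为本示例的假设):

```python
def layer_type_with_record_lib(name, size, record_lib, parse_fallback):
    """先以(控件名, 尺寸)为检索关键词查图层标识记录库;
    未命中时才执行对控件名进行解析的步骤。"""
    key = (name, size)
    if key in record_lib:
        return record_lib[key]               # 命中:直接使用已记录的图层类型
    layer_type = parse_fallback(name, size)  # 未命中:执行控件名解析步骤
    if layer_type is not None:
        record_lib[key] = layer_type         # 回填记录库,下次免于解析(假设)
    return layer_type
```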
  29. 根据权利要求28所述的方法,其特征在于,所述方法还包括:
    在根据所述第一控件的控件信息无法确定所述第一图层的图层类型时,获取所述第一应用当前显示的画面,所述当前显示的画面中包括所述第一控件;
    根据所述当前显示的画面中所述第一控件显示的内容确定所述第一图层的图层类型。
  30. 根据权利要求27所述的方法,其特征在于,所述方法还包括:
    在根据所述第二控件的控件信息无法确定所述第二图层的图层类型时,获取所述第一应用当前显示的画面,所述当前显示的画面中包括所述第二控件;
    根据所述当前显示的画面中所述第二控件显示的内容确定所述第二图层的图层类型。
  31. 根据权利要求27所述的方法,其特征在于,所述根据所述第一图层的图层类型和所述第二图层的图层类型,生成第一显示画面和第二显示画面,并在所述第一电子设备的屏幕显示所述第一显示画面,将所述第二显示画面投射到第二电子设备的屏幕进行显示,包括:
    根据所述第一图层的图层类型和所述第二图层的图层类型,生成第一显示画面和第二显示画面,并将所述第一显示画面缓存到第一显示缓存,将所述第二显示画面缓存到第二显示缓存;
    按照缓存顺序,从所述第一显示缓存中取出所述第一显示画面,并将所述第一显示画面在所述第一电子设备的屏幕进行显示;
    对所述第二显示缓存中的所述第二显示画面进行录制,得到投屏内容;
    将所述投屏内容发送至所述第二电子设备,供所述第二电子设备解码所述投屏内容,得到所述第二显示画面,并在所述第二电子设备的屏幕进行显示。
  32. 根据权利要求31所述的方法,其特征在于,所述根据所述第一图层的图层类型和所述第二图层的图层类型,生成第一显示画面和第二显示画面,并将所述第一显示画面缓存到第一显示缓存,将所述第二显示画面缓存到第二显示缓存,包括:
    确定所述第一显示缓存对应的第一图层过滤规则和所述第二显示缓存对应的第二图层过滤规则;
    根据所述第一图层过滤规则、所述第一图层的图层类型和所述第二图层的图层类型,确定所述第一显示画面包括所述第一图层和所述第二图层;
    获取所述第一图层中所述第一控件的资源和所述第二图层中所述第二控件的资源,根据所述第一控件的资源和所述第二控件的资源,生成所述第一显示画面,并将所述第一显示画面缓存到第一显示缓存;
    根据所述第二图层过滤规则、所述第一图层的图层类型和所述第二图层的图层类型,确定所述第二显示画面包括所述第一图层;
    获取所述第一图层中所述第一控件的资源,根据所述第一控件的资源生成所述第二显示画面,并将所述第二显示画面缓存到所述第二显示缓存。
  33. 根据权利要求32所述的方法,其特征在于,所述确定所述第一显示缓存对应的第一图层过滤规则和所述第二显示缓存对应的第二图层过滤规则,包括:
    获取所述第一电子设备的第一设备标识和所述第二电子设备的第二设备标识;
    在图层过滤规则表中查找与所述第一设备标识匹配的图层过滤规则,将查找到的图层过滤规则确定为所述第一显示缓存对应的所述第一图层过滤规则;
    在所述图层过滤规则表中查找与所述第二设备标识匹配的图层过滤规则,将查找到的图层过滤规则确定为所述第二显示缓存对应的所述第二图层过滤规则。
  34. 根据权利要求32所述的方法,其特征在于,所述确定所述第一显示缓存对应的第一图层过滤规则和所述第二显示缓存对应的第二图层过滤规则,包括:
    在所述第一电子设备的屏幕上显示可供用户操作的图层过滤规则决策界面,所述图层过滤规则决策界面包括所述第一控件和所述第一控件所在的所述第一图层的图层类型,所述第二控件和所述第二控件所在的所述第二图层的图层类型;
    响应于用户为所述第一显示缓存设置所述第一图层过滤规则的操作行为,生成所述第一图层过滤规则;
    响应于用户为所述第二显示缓存设置所述第二图层过滤规则的操作行为,生成所述第二图层过滤规则。
  35. 根据权利要求31所述的方法,其特征在于,所述对所述第二显示缓存中的所述第二显示画面进行录制,得到投屏内容,包括:
    获取所述第一电子设备的第一屏幕纵横比和所述第二电子设备的第二屏幕纵横比;
    在所述第一屏幕纵横比和所述第二屏幕纵横比不同时,对所述第二显示缓存中的所述第二显示画面进行去黑边处理,并录制去黑边后的所述第二显示画面得到所述投屏内容;
    在所述第一屏幕纵横比和所述第二屏幕纵横比相同时,对所述第二显示缓存中的所述第二显示画面进行录制,得到投屏内容。
  36. 根据权利要求31所述的方法,其特征在于,所述对所述第二显示缓存中的所述第二显示画面进行录制,得到投屏内容,包括:
    获取所述第二电子设备的显示能力;
    根据所述显示能力确定视频流刷新帧率;
    根据所述视频流刷新帧率,对所述第二显示缓存中的所述第二显示画面进行录制,得到投屏内容。
  37. 根据权利要求27至36任一项所述的方法,其特征在于,所述第一应用为会议应用,所述第一控件为视频类控件,所述第一图层的图层类型为视频流播放图层;所述第二控件为按钮类控件,所述第二图层的图层类型为会议控制按钮图层。
  38. 根据权利要求27至36任一项所述的方法,其特征在于,所述第一应用为会议应用,所述第一控件为白板批注控件,所述第一图层的图层类型为白板批注图层;所述第二控件为按钮类控件,所述第二图层的图层类型为会议控制按钮图层。
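权利要求37、38给出的会议应用中"控件类型 → 图层类型"的对应关系,可整理成如下查找表(仅为对权利要求文字的汇总示意):

```python
# 权利要求37、38中会议应用的控件与图层类型对应关系
CONFERENCE_LAYER_TYPES = {
    "视频类控件": "视频流播放图层",
    "白板批注控件": "白板批注图层",
    "按钮类控件": "会议控制按钮图层",
}
```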
  39. 一种电子设备,其特征在于,所述电子设备为第一电子设备,所述第一电子设备安装有第一应用,所述第一应用包括第一控件和第二控件,所述第一控件位于第一图层,所述第二控件位于第二图层,所述电子设备包括:
    一个或多个处理器;
    存储器;
    以及一个或多个计算机程序,其中所述一个或多个计算机程序存储在所述存储器上,当所述计算机程序被所述一个或多个处理器执行时,使得所述电子设备执行以下步骤:
    获取所述第一控件的控件信息和所述第二控件的控件信息;
    根据所述第一控件的控件信息确定所述第一图层的图层类型;
    根据所述第二控件的控件信息确定所述第二图层的图层类型,所述第一图层的图层类型与所述第二图层的图层类型不同;
    根据所述第一图层的图层类型和所述第二图层的图层类型,生成第一显示画面和第二显示画面,并在所述第一电子设备的屏幕显示所述第一显示画面,将所述第二显示画面投射到第二电子设备的屏幕进行显示,所述第一显示画面包括所述第一控件和所述第二控件,所述第二显示画面包括所述第一控件,不包括所述第二控件;
    其中,所述根据所述第一控件的控件信息确定所述第一图层的图层类型,包括:
    从所述第一控件的控件信息中提取所述第一控件的控件名和所述第一控件的尺寸信息;
    对所述第一控件的控件名进行解析,在从所述第一控件的控件名中解析出所述第一控件的控件类型、所述第一应用的包名和所述第一控件所在的界面信息时,根据所述第一控件的控件类型、所述包名、所述第一控件所在的界面信息和所述第一控件的尺寸信息,确定所述第一图层的图层类型;
    在从所述第一控件的控件名中解析出所述第一控件的控件类型和所述第一控件所在的界面信息,未解析出所述第一应用的包名时,获取绘制所述第一控件的进程的进程标识号PID,根据绘制所述第一控件的PID确定所述第一控件的来源,根据所述第一控件的控件类型、所述第一控件的来源、所述第一控件所在的界面信息和所述第一控件的尺寸信息,确定所述第一图层的图层类型,所述第一控件的来源包括所述第一应用的包名;
    所述根据所述第二控件的控件信息确定所述第二图层的图层类型,包括:
    从所述第二控件的控件信息中提取所述第二控件的控件名和所述第二控件的尺寸信息;
    对所述第二控件的控件名进行解析,在从所述第二控件的控件名中解析出所述第二控件的控件类型、所述第一应用的包名和所述第二控件所在的界面信息时,根据所述第二控件的控件类型、所述包名、所述第二控件所在的界面信息和所述第二控件的尺寸信息,确定所述第二图层的图层类型;
    在从所述第二控件的控件名中解析出所述第二控件的控件类型和所述第二控件所在的界面信息,未解析出所述第一应用的包名时,获取绘制所述第二控件的进程的进程标识号PID,根据绘制所述第二控件的PID确定所述第二控件的来源,根据所述第二控件的控件类型、所述第二控件的来源、所述第二控件所在的界面信息和所述第二控件的尺寸信息,确定所述第二图层的图层类型,所述第二控件的来源包括所述第一应用的包名。
  40. 根据权利要求39所述的电子设备,其特征在于,当所述计算机程序被所述一个或多个处理器执行时,使得所述电子设备执行以下步骤:
    将所述控件名和所述尺寸信息作为检索关键词;
    根据所述关键词,在图层标识记录库中查找与所述关键词匹配的控件;
    在查找到与所述关键词匹配的控件时,将所述控件对应的图层类型确定为所述第一图层的图层类型;
    在未查找到与所述关键词匹配的控件时,执行对所述控件名进行解析的步骤。
  41. 根据权利要求40所述的电子设备,其特征在于,当所述计算机程序被所述一个或多个处理器执行时,使得所述电子设备执行以下步骤:
    在根据所述第一控件的控件信息无法确定所述第一图层的图层类型时,获取所述第一应用当前显示的画面,所述当前显示的画面中包括所述第一控件;
    根据所述当前显示的画面中所述第一控件显示的内容确定所述第一图层的图层类型。
  42. 根据权利要求39所述的电子设备,其特征在于,当所述计算机程序被所述一个或多个处理器执行时,使得所述电子设备执行以下步骤:
    在根据所述第二控件的控件信息无法确定所述第二图层的图层类型时,获取所述第一应用当前显示的画面,所述当前显示的画面中包括所述第二控件;
    根据所述当前显示的画面中所述第二控件显示的内容确定所述第二图层的图层类型。
  43. 根据权利要求39所述的电子设备,其特征在于,当所述计算机程序被所述一个或多个处理器执行时,使得所述电子设备执行以下步骤:
    根据所述第一图层的图层类型和所述第二图层的图层类型,生成第一显示画面和第二显示画面,并将所述第一显示画面缓存到第一显示缓存,将所述第二显示画面缓存到第二显示缓存;
    按照缓存顺序,从所述第一显示缓存中取出所述第一显示画面,并将所述第一显示画面在所述第一电子设备的屏幕进行显示;
    对所述第二显示缓存中的所述第二显示画面进行录制,得到投屏内容;
    将所述投屏内容发送至所述第二电子设备,供所述第二电子设备解码所述投屏内容,得到所述第二显示画面,并在所述第二电子设备的屏幕进行显示。
  44. 根据权利要求43所述的电子设备,其特征在于,当所述计算机程序被所述一个或多个处理器执行时,使得所述电子设备执行以下步骤:
    确定所述第一显示缓存对应的第一图层过滤规则和所述第二显示缓存对应的第二图层过滤规则;
    根据所述第一图层过滤规则、所述第一图层的图层类型和所述第二图层的图层类型,确定所述第一显示画面包括所述第一图层和所述第二图层;
    获取所述第一图层中所述第一控件的资源和所述第二图层中所述第二控件的资源,根据所述第一控件的资源和所述第二控件的资源,生成所述第一显示画面,并将所述第一显示画面缓存到第一显示缓存;
    根据所述第二图层过滤规则、所述第一图层的图层类型和所述第二图层的图层类型,确定所述第二显示画面包括所述第一图层;
    获取所述第一图层中所述第一控件的资源,根据所述第一控件的资源生成所述第二显示画面,并将所述第二显示画面缓存到所述第二显示缓存。
  45. 根据权利要求43所述的电子设备,其特征在于,当所述计算机程序被所述一个或多个处理器执行时,使得所述电子设备执行以下步骤:
    获取所述第一电子设备的第一设备标识和所述第二电子设备的第二设备标识;
    在图层过滤规则表中查找与所述第一设备标识匹配的图层过滤规则,将查找到的图层过滤规则确定为所述第一显示缓存对应的所述第一图层过滤规则;
    在所述图层过滤规则表中查找与所述第二设备标识匹配的图层过滤规则,将查找到的图层过滤规则确定为所述第二显示缓存对应的所述第二图层过滤规则。
  46. 根据权利要求43所述的电子设备,其特征在于,当所述计算机程序被所述一个或多个处理器执行时,使得所述电子设备执行以下步骤:
    在所述第一电子设备的屏幕上显示可供用户操作的图层过滤规则决策界面,所述图层过滤规则决策界面包括所述第一控件和所述第一控件所在的所述第一图层的图层类型,所述第二控件和所述第二控件所在的所述第二图层的图层类型;
    响应于用户为所述第一显示缓存设置所述第一图层过滤规则的操作行为,生成所述第一图层过滤规则;
    响应于用户为所述第二显示缓存设置所述第二图层过滤规则的操作行为,生成所述第二图层过滤规则。
  47. 根据权利要求43所述的电子设备,其特征在于,当所述计算机程序被所述一个或多个处理器执行时,使得所述电子设备执行以下步骤:
    获取所述第一电子设备的第一屏幕纵横比和所述第二电子设备的第二屏幕纵横比;
    在所述第一屏幕纵横比和所述第二屏幕纵横比不同时,对所述第二显示缓存中的所述第二显示画面进行去黑边处理,并录制去黑边后的所述第二显示画面得到所述投屏内容;
    在所述第一屏幕纵横比和所述第二屏幕纵横比相同时,对所述第二显示缓存中的所述第二显示画面进行录制,得到投屏内容。
  48. 根据权利要求43所述的电子设备,其特征在于,当所述计算机程序被所述一个或多个处理器执行时,使得所述电子设备执行以下步骤:
    获取所述第二电子设备的显示能力;
    根据所述显示能力确定视频流刷新帧率;
    根据所述视频流刷新帧率,对所述第二显示缓存中的所述第二显示画面进行录制,得到投屏内容。
  49. 一种计算机可读存储介质,包括计算机程序,其特征在于,当所述计算机程序在电子设备上运行时,使得所述电子设备执行如权利要求1至14任一项所述的投屏方法,或者执行如权利要求27至38任一项所述的投屏方法。
  50. 一种芯片,其特征在于,包括:一个或多个处理电路和一个或多个收发管脚; 其中,所述收发管脚和所述处理电路通过内部连接通路互相通信,所述处理电路执行权利要求1至14任一项所述的投屏方法,以控制接收管脚接收信号,以控制发送管脚发送信号,或者执行如权利要求27至38任一项所述的投屏方法。
PCT/CN2022/091554 2021-08-20 2022-05-07 投屏方法和电子设备 WO2023020025A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110958660.0 2021-08-20
CN202110958660.0A CN113778360B (zh) 2021-08-20 2021-08-20 投屏方法和电子设备

Publications (1)

Publication Number Publication Date
WO2023020025A1 true WO2023020025A1 (zh) 2023-02-23

Family ID: 78838463

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/091554 WO2023020025A1 (zh) 2021-08-20 2022-05-07 投屏方法和电子设备

Country Status (2)

Country Link
CN (1) CN113778360B (zh)
WO (1) WO2023020025A1 (zh)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113778360B (zh) * 2021-08-20 2022-07-22 荣耀终端有限公司 投屏方法和电子设备
WO2024007719A1 (zh) * 2022-07-07 2024-01-11 海信视像科技股份有限公司 一种显示设备和显示设备的控制方法
CN118042210A (zh) * 2023-04-14 2024-05-14 深圳支点电子智能科技有限公司 智能录屏方法及相关装置、存储介质和计算机程序

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150195601A1 (en) * 2014-01-08 2015-07-09 Samsung Electronics Co., Ltd. Method and device for screen mirroring
CN110928468A (zh) * 2019-10-09 2020-03-27 广州视源电子科技股份有限公司 智能交互平板的页面显示方法、装置、设备和存储介质
CN111443884A (zh) * 2020-04-23 2020-07-24 华为技术有限公司 投屏方法、装置和电子设备
CN112099705A (zh) * 2020-09-04 2020-12-18 维沃移动通信有限公司 投屏方法、装置及电子设备
CN113778360A (zh) * 2021-08-20 2021-12-10 荣耀终端有限公司 投屏方法和电子设备

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8689121B2 (en) * 2010-05-06 2014-04-01 Cadence Design Systems, Inc. System and method for management of controls in a graphical user interface
CN102375733A (zh) * 2010-08-24 2012-03-14 北大方正集团有限公司 一种便捷的界面布局方法
CN108984137B (zh) * 2017-06-01 2021-10-22 福建星网视易信息系统有限公司 双屏显示方法及其系统、计算机可读存储介质
CN108363571B (zh) * 2018-01-02 2022-02-18 武汉斗鱼网络科技有限公司 一种基于智能过滤的控件布局方法及系统
CN109639898B (zh) * 2018-12-25 2021-07-23 努比亚技术有限公司 一种多屏显示方法、设备及计算机可读存储介质
CN110378145B (zh) * 2019-06-10 2022-04-22 华为技术有限公司 一种分享内容的方法和电子设备
JP7404891B2 (ja) * 2020-01-28 2023-12-26 セイコーエプソン株式会社 プロジェクターの制御方法及びプロジェクター
CN111324327B (zh) * 2020-02-20 2022-03-25 华为技术有限公司 投屏方法及终端设备


Also Published As

Publication number Publication date
CN113778360A (zh) 2021-12-10
CN113778360B (zh) 2022-07-22

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 22857343; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)