WO2023030099A1 - Method and apparatus for cross-device interaction, screen projection system, and terminal - Google Patents

Method and apparatus for cross-device interaction, screen projection system, and terminal

Info

Publication number
WO2023030099A1
Authority
WO
WIPO (PCT)
Prior art keywords
screen
terminal
event
knuckle
user
Prior art date
Application number
PCT/CN2022/114303
Other languages
English (en)
French (fr)
Inventor
Ren Guofeng (任国锋)
Original Assignee
Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd.
Publication of WO2023030099A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/04842 Selection of displayed objects or displayed text elements
    • G06F 3/04847 Interaction techniques to control parameter settings, e.g. interaction with sliders or dials
    • G06F 3/0485 Scrolling or panning
    • G06F 3/14 Digital output to display device; Cooperation and interconnection of the display device with other functional units
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/451 Execution arrangements for user interfaces

Definitions

  • the present application relates to the field of communication technology, and in particular to a method and device for cross-device interaction, a screen projection system and a terminal.
  • the content displayed by the screen projection device with the screen projection function can be projected to other display devices with the display function for display.
  • the content displayed by the display device can include various media information displayed on the screen projection device and various operation data screens, etc.
  • For example, a mobile phone is used as a screen projection device, a TV is used as a display device, and an interface displayed on the screen of the mobile phone is projected to the TV for display. The display interface of the mobile phone can be projected to the TV, and videos or live content can then be watched on the TV.
  • Embodiments of the present application provide a cross-device interaction method, apparatus, screen projection system, and terminal, so that the terminal that casts the screen can respond to a user's screen capture operation, screen recording operation, or knuckle operation performed on the terminal being projected to, realizing cross-device user interaction and ensuring user experience.
  • In a first aspect, an embodiment of the present application provides a method for cross-device interaction, which is applied to a screen projection system.
  • the screen projection system includes a first terminal and a second terminal, and a screen projection connection is established between the first terminal and the second terminal.
  • the method includes: the second terminal displays a first screen sent by the first terminal; the second terminal generates an operation event according to a target operation performed by the user on the first screen, where the target operation is a screen capture operation, a screen recording operation, or a knuckle operation; and the first terminal determines an operation instruction corresponding to the operation event and executes the operation instruction based on the first screen.
  • In this way, the first terminal that casts the screen can respond to a screen capture operation, screen recording operation, or knuckle operation performed by the user on the second terminal that displays the projected screen, realizing cross-device user interaction and ensuring user experience.
  • In a possible implementation, the target operation is a knuckle operation.
  • the second terminal generating an operation event according to the target operation performed by the user on the first screen includes: the second terminal generates a first input event according to the knuckle operation performed by the user on the first screen, where the first input event includes knuckle touch information and knuckle press information; when the second terminal recognizes, based on a knuckle algorithm, that the first input event is generated by the user's knuckle, it determines a knuckle identifier for the first input event; and the second terminal encapsulates the first input event and the knuckle identifier into an operation event.
  • the first terminal determining the operation instruction corresponding to the operation event includes: identifying the operation event to determine the knuckle action made by the user on the first screen, and determining the operation instruction corresponding to the operation event based on the knuckle action.
  • In this way, knuckle recognition is performed on the projected-to second terminal, while the screen-casting first terminal only recognizes knuckle actions, ensuring the response speed and performance of the first terminal.
  • Moreover, the screen-casting first terminal does not need to perform coordinate transformation when recognizing knuckle actions, that is, it does not need to convert data from the screen coordinate system of the second terminal into its own screen coordinate system, further ensuring the response speed and performance of the first terminal.
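To make the encapsulation concrete, here is a minimal, self-contained Java sketch of how the projected-to second terminal might tag a raw input event with a knuckle identifier before forwarding it to the first terminal. All class names, fields, and thresholds are illustrative assumptions, not definitions from the patent.

```java
// Hypothetical sketch: the second terminal tags a raw input event with a
// knuckle identifier before forwarding; names and thresholds are invented.
import java.io.Serializable;

class InputEvent implements Serializable {
    float x, y;            // touch coordinates on the second terminal's screen
    float pressure;        // knuckle press information
    float vibrationFreq;   // vibration frequency from the acceleration sensor
    long timestampMs;
}

class OperationEvent implements Serializable {
    final InputEvent raw;
    final boolean knuckle;     // knuckle identifier set by the second terminal
    final String deviceId;     // identifies the second terminal to the first
    OperationEvent(InputEvent raw, boolean knuckle, String deviceId) {
        this.raw = raw; this.knuckle = knuckle; this.deviceId = deviceId;
    }
}

class KnuckleTagger {
    // Stand-in for the knuckle algorithm: the patent describes classifying by
    // touch area and by the vibration frequency caused by gravitational
    // acceleration; these thresholds are purely illustrative.
    static boolean isKnuckle(InputEvent e) {
        return e.pressure > 2.0f && e.vibrationFreq > 150f;
    }

    static OperationEvent wrap(InputEvent e, String deviceId) {
        return new OperationEvent(e, isKnuckle(e), deviceId);
    }
}
```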
  • In a possible implementation, the second terminal displays a target screen before displaying the first screen; the first screen is a drop-down menu screen; the target operation is a click operation on the area where the screen capture button or the screen recording button is located in the drop-down menu screen; the second terminal stores the layout information of the drop-down menu screen; and the operation instruction is a screen capture instruction or a screen recording instruction for the target screen. The second terminal generating an operation event according to the target operation made by the user on the first screen includes: the second terminal generates a second input event according to the click operation made by the user on the area where the screen capture button or the screen recording button is located in the drop-down menu screen, where the second input event includes finger click operation information; the second terminal determines the identifier of the screen capture button or the screen recording button according to the layout information of the drop-down menu screen and the second input event; and the second terminal encapsulates the identifier of the screen capture button or the screen recording button and the second input event into an operation event.
  • In this way, the screen capture button or screen recording button in the drop-down menu screen is identified on the projected-to second terminal, so that the screen-casting first terminal can directly determine the screen capture instruction or the screen recording instruction, ensuring the response speed and performance of the first terminal.
  • Moreover, because the second terminal itself identifies the buttons in the drop-down menu, the difference in screen size between the first terminal and the second terminal does not need to be considered, ensuring the accuracy of operation events.
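The following sketch illustrates the idea of resolving a tap against locally stored layout information so that only a button identifier travels across devices. The MenuLayout class, its Rect type, and the identifier strings are assumptions for illustration, not the patent's implementation.

```java
// Illustrative sketch: resolve a finger tap against the stored layout of the
// drop-down menu in the second terminal's own coordinates, so no coordinate
// transformation is needed on the first terminal.
import java.util.LinkedHashMap;
import java.util.Map;

class MenuLayout {
    static class Rect {
        final int left, top, right, bottom;
        Rect(int l, int t, int r, int b) { left = l; top = t; right = r; bottom = b; }
        boolean contains(int x, int y) {
            return x >= left && x < right && y >= top && y < bottom;
        }
    }

    // Layout information for the drop-down menu screen, keyed by button id.
    private final Map<String, Rect> buttons = new LinkedHashMap<>();

    void addButton(String id, Rect bounds) { buttons.put(id, bounds); }

    // Returns e.g. "screenshot_button" or "screen_record_button",
    // or null if the tap missed every button area.
    String resolve(int x, int y) {
        for (Map.Entry<String, Rect> e : buttons.entrySet()) {
            if (e.getValue().contains(x, y)) return e.getKey();
        }
        return null;
    }
}
```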
  • In a possible implementation, the target operation is pressing a screen capture key of the second terminal. The second terminal generating an operation event according to the target operation performed by the user on the first screen includes: the second terminal generates an operation event according to the pressed screen capture key, where the operation event includes key time information and a key value; when the second terminal judges that the operation event is not a local event, it sends the operation event to the first terminal. The first terminal determining the operation instruction corresponding to the operation event includes: when the first terminal recognizes that the operation event is a screen capture event, it determines that the operation instruction corresponding to the operation event is a screen capture instruction.
  • In this way, the screen-casting first terminal can identify the screen capture key and determine the corresponding screen capture instruction, ensuring the user experience of cross-device interaction.
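A hedged sketch of the routing decision just described: local key events are handled on the second terminal, while a screen-capture key event is forwarded to the first terminal. The key code value and the Transport interface are invented placeholders.

```java
// Sketch of routing a key press on the second terminal: local keys are
// handled locally, a screen-capture key is packaged and forwarded.
class KeyEvent {
    final int keyCode;       // key value distinguishing which key was pressed
    final long downTimeMs;   // key time information
    KeyEvent(int keyCode, long downTimeMs) {
        this.keyCode = keyCode; this.downTimeMs = downTimeMs;
    }
}

interface Transport { void send(String deviceId, KeyEvent event); }

class KeyRouter {
    private static final int KEY_SCREEN_CAPTURE = 120; // invented key value
    private final Transport toFirstTerminal;
    private final String secondTerminalId;

    KeyRouter(Transport t, String deviceId) {
        toFirstTerminal = t; secondTerminalId = deviceId;
    }

    void onKey(KeyEvent e) {
        if (e.keyCode == KEY_SCREEN_CAPTURE) {
            // Not a local event: forward so the first terminal can recognize
            // it as a screen capture event and run the screen capture instruction.
            toFirstTerminal.send(secondTerminalId, e);
        } else {
            handleLocally(e);
        }
    }

    private void handleLocally(KeyEvent e) { /* e.g. adjust local volume */ }
}
```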
  • In a possible implementation, the screen projection mode between the first terminal and the second terminal is heterogeneous screen projection; the first terminal is provided with a virtual screen of the second terminal, and the screen size of the virtual screen is adapted to the screen size of the second terminal; the operation event carries the device identifier of the second terminal, so that the first terminal can identify the device identifier to determine the virtual screen of the second terminal.
  • In this way, the screen displayed by the screen-casting first terminal and the screen displayed by the projected-to second terminal are independent, so that the two terminals can be used separately to meet the different needs of different users.
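On Android-based devices, one plausible realization of such a virtual screen is a VirtualDisplay sized to the second terminal's reported screen dimensions. The patent does not prescribe this API, so treat the following as an assumption-laden sketch rather than the described implementation.

```java
// One plausible Android realization of the "virtual screen" used for
// heterogeneous projection; parameter values are examples only.
import android.content.Context;
import android.hardware.display.DisplayManager;
import android.hardware.display.VirtualDisplay;
import android.view.Surface;

class VirtualScreenFactory {
    static VirtualDisplay createFor(Context context, String secondTerminalId,
                                    int widthPx, int heightPx, int densityDpi,
                                    Surface encoderSurface) {
        DisplayManager dm =
                (DisplayManager) context.getSystemService(Context.DISPLAY_SERVICE);
        // The display name carries the second terminal's device identifier so
        // that incoming operation events can be matched to this virtual screen.
        return dm.createVirtualDisplay(
                "projection-" + secondTerminalId,
                widthPx, heightPx, densityDpi,
                encoderSurface,
                DisplayManager.VIRTUAL_DISPLAY_FLAG_PRESENTATION);
    }
}
```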
  • In a possible implementation, the screen projection mode between the first terminal and the second terminal is mirror projection; executing the operation instruction based on the first screen includes: executing the operation instruction on the screen displayed by the first terminal, where the first screen is a mirror image of the screen displayed by the first terminal.
  • In this way, the screen displayed by the screen-casting first terminal is consistent with the screen displayed by the projected-to second terminal, so that the user of the first terminal can understand the operation status of the user of the second terminal, and reverse control can be realized.
  • the method further includes: sending the second screen obtained by executing the operation instruction based on the first screen to the second terminal, so that the second terminal displays the second screen.
  • In a second aspect, an embodiment of the present application provides a cross-device interaction method applied to the first terminal, including: sending a first screen to the second terminal, so that the second terminal displays the first screen, where a screen projection connection is established between the first terminal and the second terminal; receiving an operation event sent by the second terminal, where the operation event is generated by the second terminal according to a target operation made by the user on the first screen, and the target operation is a screen capture operation, a screen recording operation, or a knuckle operation; determining an operation instruction corresponding to the operation event; and executing the operation instruction based on the first screen.
  • In this way, the first terminal that casts the screen can respond to a screen capture operation, screen recording operation, or knuckle operation performed by the user on the second terminal that displays the projected screen, realizing cross-device user interaction and ensuring user experience.
  • In a possible implementation, the target operation is a knuckle operation; the operation event is generated by the second terminal by encapsulating a first input event and the knuckle identifier of the first input event; the first input event is generated according to the knuckle operation made by the user on the first screen, and the knuckle identifier is determined when it is recognized, based on a knuckle algorithm, that the first input event is generated by the user's knuckle. Determining the operation instruction corresponding to the operation event includes: identifying the operation event to determine the knuckle action made by the user on the first screen, and determining, based on the knuckle action, the operation instruction corresponding to the operation event.
  • In a possible implementation, the second terminal displays a target screen before displaying the first screen; the first screen is a drop-down menu screen; the target operation is a click operation on the area where the screen capture button or the screen recording button is located in the drop-down menu screen; the operation instruction is a screen capture instruction or a screen recording instruction for the target screen; and the operation event includes the identifier of the screen capture button or the screen recording button clicked by the user in the drop-down menu screen. The method further includes: receiving a second input event sent by the second terminal, where the second input event is generated by the second terminal according to the user's sliding operation on the target screen and includes finger sliding operation information; and, when the second input event is identified as displaying the drop-down menu, sending the internally stored drop-down menu screen and the layout information of the drop-down menu screen to the second terminal, so that the second terminal displays the drop-down menu screen on the basis of the displayed target screen and determines the identifier of the screen capture button or the screen recording button according to the layout information of the drop-down menu screen and a third input event generated by the user's click operation on the area where the screen capture button or the screen recording button is located.
  • In a possible implementation, the target operation is pressing a screen capture key of the second terminal; the operation event includes key time information and a key value, and the operation event is not a local event of the second terminal; determining the operation instruction corresponding to the operation event includes: when the operation event is identified as a screen capture event, determining that the operation instruction corresponding to the operation event is a screen capture instruction.
  • In a possible implementation, the screen projection mode between the first terminal and the second terminal is heterogeneous screen projection; the first terminal is provided with a virtual screen of the second terminal, and the screen size of the virtual screen is adapted to the screen size of the second terminal; the operation event carries the device identifier of the second terminal, so that the first terminal can identify the device identifier to determine the virtual screen of the second terminal.
  • In a possible implementation, the screen projection mode between the first terminal and the second terminal is mirror projection; executing the operation instruction based on the first screen includes: executing the operation instruction on the screen displayed by the first terminal, where the first screen is a mirror image of the screen displayed by the first terminal.
  • the method further includes: sending the second screen obtained by executing the operation instruction based on the first screen to the second terminal, so that the second terminal displays the second screen.
  • In a third aspect, an embodiment of the present application provides a method for cross-device interaction, which is applied to the second terminal, including: displaying a first screen sent by the first terminal, where a screen projection connection is established between the first terminal and the second terminal; generating an operation event according to a target operation made by the user on the first screen on the second terminal, where the target operation is a screen capture operation, a screen recording operation, or a knuckle operation; and sending the operation event to the first terminal, so that the first terminal determines an operation instruction corresponding to the operation event and executes the operation instruction based on the first screen.
  • In a possible implementation, the target operation is a knuckle operation.
  • generating an operation event according to the user's target operation on the first screen on the second terminal includes: generating a first input event according to the knuckle operation made by the user on the first screen on the second terminal, where the first input event includes knuckle touch information and knuckle press information; when it is recognized, based on a knuckle algorithm, that the first input event is generated by the user's knuckle, determining the knuckle identifier of the first input event; and encapsulating the first input event and the knuckle identifier into an operation event.
  • In a possible implementation, the second terminal displays a target screen before displaying the first screen; the first screen is a drop-down menu screen; the target operation is a click operation on the area where the screen capture button or the screen recording button is located in the drop-down menu screen; and the operation instruction is a screen capture instruction or a screen recording instruction for the target screen. Before generating the operation event according to the user's target operation on the first screen, the method includes: generating a second input event according to the user's sliding operation on the target screen on the second terminal, where the second input event includes finger sliding operation information; sending the second input event to the first terminal, so that, when the first terminal recognizes the second input event as displaying the drop-down menu, it sends the internally stored drop-down menu screen and the layout information of the drop-down menu screen to the second terminal; and displaying the drop-down menu screen on the basis of the target screen and storing the layout information of the drop-down menu screen.
  • the second terminal generating an operation event according to the target operation made by the user on the first screen then includes: generating a third input event according to the click operation made by the user on the second terminal on the area where the screen capture button or the screen recording button is located in the drop-down menu screen, determining the identifier of the clicked button according to the third input event and the layout information, and encapsulating the identifier and the third input event into the operation event.
  • In a possible implementation, the target operation is pressing a screen capture key of the second terminal; the operation event includes key time information and a key value; and the second terminal generating the operation event according to the target operation performed by the user on the first screen further includes: when it is determined that the operation event is not a local event, sending the operation event to the first terminal.
  • In a possible implementation, the screen projection mode between the first terminal and the second terminal is heterogeneous screen projection; the first terminal is provided with a virtual screen of the second terminal, and the screen size of the virtual screen is adapted to the screen size of the second terminal; the operation event carries the device identifier of the second terminal, so that the first terminal can identify the device identifier to determine the virtual screen of the second terminal.
  • the screen projection mode between the first terminal and the second terminal is mirror projection.
  • In a possible implementation, the method further includes: receiving a second screen obtained by the first terminal executing the operation instruction based on the first screen, and displaying the second screen.
  • In a fourth aspect, an embodiment of the present application provides a screen projection system, including a first terminal and a second terminal, where the first terminal is configured to execute the method described in the second aspect, and the second terminal is configured to execute the method described in the third aspect.
  • the embodiment of the present application provides a terminal, including: at least one memory for storing programs; and at least one processor for executing the programs stored in the memory, where, when the programs stored in the memory are executed, the processor is configured to execute the method provided in the second aspect or the method provided in the third aspect.
  • the embodiment of the present application provides a cross-device interaction device, characterized in that the device runs computer program instructions to execute the method provided in the second aspect, or execute the method provided in the third aspect.
  • the device may be a chip or a processor.
  • the device may include a processor, and the processor may be coupled with the memory, read instructions in the memory, and, according to the instructions, execute the method provided in the second aspect or the method provided in the third aspect.
  • the memory may be integrated in the chip or the processor, or independent of the chip or the processor.
  • the embodiment of the present application provides a computer storage medium in which instructions are stored; when the instructions are run on a computer, the computer is made to execute the method provided in the second aspect or the method provided in the third aspect.
  • the embodiments of the present application provide a computer program product containing instructions, which, when run on a computer, cause the computer to execute the method provided in the second aspect, or execute the method provided in the third aspect.
  • FIG. 1 is a system architecture diagram of a projection system provided by an embodiment of the present application
  • Fig. 2a is a first schematic diagram of the interface display of the screen projection system provided by the embodiment of the present application.
  • Fig. 2b is a second schematic diagram of the interface display of the screen projection system provided by the embodiment of the present application.
  • Fig. 2c is a third schematic diagram of the interface display of the projection system provided by the embodiment of the present application.
  • Fig. 2d is a schematic diagram 4 of the interface display of the screen projection system provided by the embodiment of the present application.
  • Fig. 2e is a schematic diagram 5 of the interface display of the screen projection system provided by the embodiment of the present application.
  • Fig. 2f is a schematic diagram 6 of the interface display of the screen projection system provided by the embodiment of the present application.
  • Fig. 3 is a schematic diagram of the response process of the knuckle operation provided by the embodiment of the present application.
  • Fig. 4 is a schematic diagram of the screen recording principle provided by the embodiment of the present application.
  • Fig. 5 is a schematic diagram 1 of the screen capture/recording principle of the screen projection system provided in Fig. 2b;
  • Fig. 6 is a second schematic diagram of the screen capture/recording principle of the screen projection system provided in Fig. 2b;
  • FIG. 7a is a schematic diagram of a scene of a screenshot of the projection system provided in FIG. 2b;
  • Fig. 7b is a schematic diagram of the second scene of the screenshot of the projection system provided in Fig. 2b;
  • Fig. 7c is a schematic diagram of scene three of the screenshot of the projection system provided in Fig. 2b;
  • Fig. 7d is a schematic diagram of scene four of the screenshot of the projection system provided in Fig. 2b;
  • Fig. 7e is a schematic diagram of scene five of the screenshot of the projection system provided in Fig. 2b;
  • Fig. 7f is a schematic diagram of the sixth scene of the screenshot of the projection system provided in Fig. 2b;
  • Fig. 8a is a first schematic diagram of the screen recording scene of the screen projection system provided in Fig. 2b;
  • Fig. 8b is a second schematic diagram of the screen recording scene of the screen projection system provided in Fig. 2b;
  • FIG. 9 is a schematic structural diagram of a first terminal provided in an embodiment of the present application.
  • Fig. 10a is a schematic diagram of a software structure of a first terminal provided in an embodiment of the present application.
  • Fig. 10b is a schematic diagram of a software structure of a second terminal provided by an embodiment of the present application.
  • Fig. 11 is a schematic diagram of the software implementation of the screenshot provided by the embodiment of the present application.
  • FIG. 12 is a schematic flowchart of a cross-device interaction solution provided by an embodiment of the present application.
  • Fig. 13a is a schematic flowchart of a knuckle operation cross-device interaction solution provided by an embodiment of the present application
  • FIG. 13b is a schematic flow diagram of a cross-device interaction solution for clicking the screen capture button or screen recording button in the drop-down menu provided by the embodiment of the present application;
  • Fig. 13c is a schematic flowchart of a cross-device interaction solution for pressing a screen capture button of a second terminal provided in an embodiment of the present application;
  • FIG. 14 is a schematic flowchart of a method for cross-device interaction provided by an embodiment of the present application.
  • words such as “exemplary”, “for example” or “for example” are used to represent examples, illustrations or illustrations. Any embodiment or design described as “exemplary”, “for example” or “for example” in the embodiments of the present application shall not be construed as being more preferred or more advantageous than other embodiments or designs. Rather, the use of words such as “exemplary”, “for example” or “for example” is intended to present related concepts in a specific manner.
  • the term "and/or" is only an association relationship describing associated objects, indicating that there may be three relationships, for example, A and/or B may indicate: A exists alone, A exists alone There is B, and there are three cases of A and B at the same time.
  • the term "plurality" means two or more. For example, multiple systems refer to two or more systems, and multiple terminals refer to two or more terminals.
  • first and second are used for descriptive purposes only, and cannot be understood as indicating or implying relative importance or implicitly indicating the number of the indicated technical features. Thus, a feature defined with "first" or "second" may explicitly or implicitly include one or more of these features.
  • the terms “including”, “comprising”, “having” and variations thereof mean “including but not limited to”, unless specifically stated otherwise.
  • Screen devices used for office scenarios have weak service processing capabilities and rely on screen projection connections with mobile phones, tablets, computers, and the like.
  • the screen projected by the first terminal to the second terminal is independent of the screen displayed by the first terminal.
  • For example, a mobile phone is used as the first terminal and a convenience screen is used as the second terminal. After the mobile phone and the convenience screen are connected through screen projection, the mobile phone and the convenience screen run different applications without interfering with each other.
  • the application running on the mobile phone 110 shown is a chat application, such as WeChat, and the application running on the convenience screen 120 is a video playback application, such as Huawei Video, iQiyi, or Tencent Video.
  • the screen displayed by the first terminal with the screen projection function is projected to other second terminals with the display function.
  • the image displayed on the convenience screen is a mirror image of the display image of the mobile phone.
  • the applications run on the mobile phone and the convenience screen shown in FIG. 2a are both video playback applications.
  • the distributed soft bus provides a unified distributed communication capability for the interconnection and intercommunication between various terminals, and creates conditions for non-inductive discovery and zero-wait transmission between devices.
  • the distributed soft bus allows terminals to call various functions through Wi-Fi or other wireless methods by means of minimalist communication protocol technology, which covers discovery and connection, networking (multi-hop ad hoc networking and multi-protocol hybrid networking), and transmission (a simplified transport protocol with diversified protocols and algorithms, and intelligent perception and decision-making).
  • An "invisible" bus is thus built between 1+8+N devices, with the characteristics of self-discovery, self-organizing networking, high bandwidth, and low latency. Here, 1 refers to mobile phones; 8 refers to in-vehicle devices, speakers, earphones, watches/bracelets, tablets, large screens, PCs, and AR/VR devices; and N generally refers to other Internet of Things (IoT) devices.
  • device virtualization can virtualize the functions of various terminals connected together through a soft bus into a file that can be shared, and assemble various functions based on the file.
  • the second terminal may be connected to the first terminal through a distributed soft bus.
  • the touch screen is composed of a touch sensor and a display screen, and is also referred to as a "touch panel".
  • the display screen may also be referred to as a screen.
  • the content displayed on the screen of a terminal with a screen projection function (the first terminal) can be projected to other terminals with a display function (the second terminal) for display.
  • content such as conference content, multimedia files, games, movies, or videos of the first terminal can be projected on the screen of the second terminal for presentation, which can bring better user experience and more convenience to users.
  • the content displayed on the first terminal with a smaller screen and with a screen projection function is projected to the second terminal with a relatively larger screen for display. Displaying on a terminal with a larger screen provides more convenience for users to interact, entertain or watch. For example, by projecting the screen displayed on the mobile phone to a convenient screen or a TV for display, the user can view the screen displayed by the screen projection device through a terminal with a larger screen, thereby improving the user experience.
  • For example, a mobile phone is used as the first terminal for screen projection, a TV is used as the second terminal, and the image displayed on the screen of the mobile phone is projected to the TV for display. The screen displayed on the mobile phone can be projected to the TV, and videos or live content can be watched on the TV.
  • However, interactive operations on the screen displayed on the mobile phone cannot be performed from the TV side, and operations such as screenshots and screen recordings need to be completed through the mobile phone. Such a screen projection effect is therefore not ideal for the user, resulting in poor user experience.
  • a cross-device interaction method provided in the embodiment of the present application can be applied to the screen projection system 100 shown in FIG. 1 .
  • the screen projection system 100 includes a first terminal 110 and a second terminal 120, and the second terminal 120 and the first terminal 110 can be connected through a network, so that the first terminal 110 and the second terminal 120 can perform data interaction.
  • As a possible situation, Fig. 2a and Fig. 2b show one second terminal 120 connected to one first terminal 110; one first terminal 110 can also be connected to multiple second terminals 120, and Fig. 2c and Fig. 2d show one first terminal 110 connected to two second terminals 120; likewise, multiple first terminals 110 may be connected to one second terminal 120, as shown in Fig. 2e and Fig. 2f.
  • the network can include a cable network, wired network, optical fiber network, telecommunication network, internal network, the Internet, a local area network (LAN), wide area network (WAN), wireless local area network (WLAN), metropolitan area network (MAN), public switched telephone network (PSTN), Bluetooth network, ZigBee network, near field communication (NFC), an in-device bus, an in-device line, a cable connection, etc., or any combination thereof.
  • the second terminal 120 and the first terminal 110 may be connected to the same local area network, connected to the same wireless local area network, or connected across local area networks through the Internet.
  • the second terminal 120 and the first terminal 110 can communicate with each other through a wide area network (WAN).
  • the second terminal 120 and the first terminal 110 may be connected to the same router.
  • the second terminal 120 and the first terminal 110 may form a local area network (LAN), and the terminals in the local area network (LAN) may communicate with each other through a router.
  • both the second terminal 120 and the first terminal 110 join the Wi-Fi network named "XXXXX".
  • the terminals in the Wi-Fi network form a peer-to-peer network.
  • the connection between the first terminal 110 and the second terminal 120 can be established through the Miracast protocol, and data can be transmitted through the Wi-Fi network.
  • the second terminal 120 and the first terminal 110 are interconnected through a transfer device (for example, a USB data cable or a Dock device), so as to realize communication.
  • a connection between the second terminal 120 and the first terminal 110 may be established through a high definition multimedia interface (high definition multimedia interface, HDMI), and data may be transmitted through an HDMI transmission line.
  • the distributed soft bus connection between the second terminal 120 and the first terminal 110 can be realized through the above network.
  • a screen projection connection is established between the first terminal 110 and the second terminal 120, in other words, the content stored in the first terminal 110 can be projected to the second terminal 120 for display .
  • the content stored in the first terminal 110 with a smaller screen is projected to the second terminal 120 with a relatively larger screen for display.
  • the first terminal may be a mobile phone or a tablet
  • the second terminal may be a display device with weak service capabilities, such as a convenience screen.
  • the screen sent by the first terminal 110 to the second terminal 120 is adapted to the screen size of the second terminal 120, so that the second terminal 120 only needs to realize the display function, reducing the amount of data processing and ensuring display speed.
  • the screen size of the second terminal 120 indicates the length and width of the screen of the second terminal 120 .
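As a simple illustration of this size adaptation, the following sketch computes a letterbox fit of the rendered picture into the second terminal's screen while preserving aspect ratio. This generic computation is an assumption for illustration, not text from the patent.

```java
// Minimal sketch: scale a source picture to fit the second terminal's screen
// (all values in pixels) while preserving aspect ratio.
class FitResult { int width, height; }

class SizeAdapter {
    static FitResult fit(int srcW, int srcH, int dstW, int dstH) {
        // Pick the smaller scale factor so the picture fits both dimensions.
        double scale = Math.min((double) dstW / srcW, (double) dstH / srcH);
        FitResult r = new FitResult();
        r.width = (int) Math.round(srcW * scale);
        r.height = (int) Math.round(srcH * scale);
        return r;
    }
}
```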
  • the screen projection method between the first terminal 110 and the second terminal 120 is mirror projection, that is, the first terminal 110 can project its displayed content to the second terminal 120 for display.
  • a user watches a video through a first terminal 110 , and can project the video screen displayed on the first terminal 110 to a second terminal 120 for display, and then watch the video through a second terminal 120 .
  • As shown in FIG. 2c, when a user watches a video through the first terminal 110, the video screen displayed on the first terminal 110 can be projected to two second terminals 120 for display, which is suitable for scenarios where multiple people watch a video.
  • the screen projection method between the first terminal 110 and the second terminal 120 is heterogeneous screen projection, that is, the screens displayed by the first terminal 110 and the second terminal 120 are independent, which can also be understood as the first The applications run by the terminal 110 and the second terminal 120 do not interfere with each other.
  • As shown in FIG. 2b, the first terminal 110 displays a WeChat chat screen, and the second terminal 120 displays a video screen.
  • the second terminal 120 may display a screen not displayed by the first terminal 110.
  • For example, the user chats on WeChat through the first terminal 110 and, at the same time, can project an undisplayed video screen of the first terminal 110 to the second terminal 120 for display and watch the video through the second terminal 120.
  • As shown in FIG. 2d, the user chats on WeChat through the first terminal 110, casts an undisplayed video screen of the first terminal 110 to one second terminal 120, and at the same time projects an undisplayed music playback screen of the first terminal 110 to another second terminal 120 for display, meeting the needs of different users through the two second terminals 120.
  • As shown in FIG. 2e, user 1 chats on WeChat through one first terminal 110 and can cast an undisplayed video screen of that first terminal 110 to the second terminal 120, while user 2 uses another first terminal 110 and can project its undisplayed music playback screen to the same second terminal 120. As a possible situation, FIG. 2e shows the second terminal 120 displaying only the undisplayed video screen of one first terminal 110; through screen switching, it then displays the undisplayed music playback screen (not shown in the figure) of the other first terminal 110.
  • FIG. 2f shows that the second terminal 120 can simultaneously display a non-displayed video screen of a first terminal 110 and a non-displayed music playback screen of another first terminal 110 .
  • the embodiment of the present application does not intend to limit the screen projection method between the first terminal 110 and the second terminal 120, which needs to be determined in combination with actual scenarios.
  • the screen projection connection between a first terminal 110 and a second terminal 120 will be described below as an example.
  • the first terminal 110 has the capability of processing user's gesture operation.
  • the case where the first terminal 110 and the second terminal 120 both run the same system is taken as an example to illustrate the process of event handling.
  • the first terminal 110 is provided with a hardware layer, a kernel layer (Kernel), a system layer, an application architecture layer and an application layer.
  • for a detailed introduction of each layer, refer to the description of the system architecture below and FIG. 10a; the layers are mentioned here only for the convenience of introducing the process of event processing.
  • the hardware layer is configured to generate a corresponding hardware interrupt signal according to a user's gesture operation; wherein, the gesture operation refers to various operations of the user's hand on the first terminal 110 .
  • the gesture operation of the user specifically needs to be combined with the hardware of the hardware layer of the first terminal 110 .
  • the hardware layer may include but not limited to a display screen, a pressure sensor, a distance sensor, an acceleration sensor, a keyboard, a touch sensor (Touch Sensor) and an intelligent sensor hub (Sensor Hub) shown in FIG. 3 , etc.
  • the first terminal 110 has buttons and a touch screen
  • gesture operations may be operations on buttons, such as pressing the power button, volume up button, and volume down button, and operations on the touch screen, such as touching, clicking, sliding, and tapping etc., which are not specifically limited in this embodiment of the present application.
  • the kernel layer is used to receive and report hardware interrupt signals generated by the hardware layer and generate input events according to the hardware interrupt information. It may include a driver layer, which converts the input of the hardware layer into a unified event form; in other words, the hardware interrupt information generated by the hardware layer is converted into an input event.
  • for a touch operation, the input event may at least include touch coordinates and the timestamp of the touch operation, and may also include the event type, such as sliding or clicking;
  • the input event may include at least the key value of the pressed button, and may also include the event type, such as short press, long press, and the like.
  • the driver layer can include multiple drivers, such as display driver, audio driver, camera driver, sensor driver, etc.
  • FIG. 3 shows a touch sensor driver (Touch Driver) and an input hub driver (Input Hub Driver).
  • an input core layer and an event processing layer may also be included.
  • the input core layer is responsible for coordinating the driver layer and the event processing layer, so that the data transfer between the driver layer and the event processing layer can be completed, and the event processing layer can provide the input events obtained from the input core layer to the user space.
  • the kernel layer (Kernel) can also be understood as the kernel space.
  • the user space is used to read, process and distribute input events provided by the kernel layer (Kernel), and may include the system layer and the application framework layer.
  • the embodiment of the present application mainly relates to the operation of the touch screen and buttons.
  • the kernel layer (Kernel) supports touch screen related events, including absolute coordinates, touch press, and touch lift events. The user space not only uses these events directly; in order to improve touch screen interaction, these simple events are often combined in the user space to implement some specific instructions.
  • the embodiment of the present application also involves the operation of the key, and the kernel layer (Kernel) can support the key-related events involved in the embodiment of the present application, without implementing specific instructions in the user space.
  • the system layer is configured to process and distribute input events provided by the kernel layer (Kernel). It may be the native framework layer (Native Framework) shown in FIG. 3. It may include InputFramework, and may also include algorithms for identifying input events provided by the kernel layer (Kernel), such as the knuckle algorithm shown in FIG. 3 (which judges the force feature of the knuckle from the vibration frequency generated by the knuckle's gravitational acceleration, and judges from the touch area whether a knuckle is moving on the touch screen, so as to determine whether the input is a knuckle action), and may also include applications (which execute the operation instruction corresponding to the input event).
  • InputFramework is mainly responsible for the management of user events. Specifically: 1. Various original event messages can be obtained from the kernel layer (Kernel), including event messages from buttons, the touch screen, the mouse, and the trackball. 2. The events are preprocessed, in two aspects: on the one hand, an event is converted into a message event that the system can handle; on the other hand, some special events, such as the home button, menu button, and power button, are handled. 3. The processed events are distributed to each application process (applications in the system layer, application framework layer, or application layer). In practical applications, the kernel layer (Kernel) writes the input event to a device node, and InputFramework reads the input event from the device node.
  • if the system layer obtains new event information before distributing events through InputFramework, it can repackage the new event message and the input event reported by the kernel layer (Kernel) into a new input event, and then pass it to InputFramework.
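The following is a toy model of the three InputFramework responsibilities listed above (reading raw events from a device node, preprocessing them into messages the system can handle, and distributing them to consumers). Every name here is illustrative, and the real framework is far richer.

```java
// Toy model of read -> preprocess -> distribute; names are illustrative.
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

class RawEvent { int type; int code; int value; long timeMs; }
class MessageEvent { String kind; RawEvent source; }

interface EventConsumer { boolean handle(MessageEvent m); }

class MiniInputFramework implements Runnable {
    private final BlockingQueue<RawEvent> deviceNode = new LinkedBlockingQueue<>();
    private final List<EventConsumer> consumers = new ArrayList<>();

    void register(EventConsumer c) { consumers.add(c); }
    void inject(RawEvent e) { deviceNode.add(e); }   // stands in for the device node

    public void run() {
        try {
            while (true) {
                RawEvent raw = deviceNode.take();       // 1. read original event
                MessageEvent m = preprocess(raw);       // 2. preprocess
                for (EventConsumer c : consumers) {     // 3. distribute
                    if (c.handle(m)) break;             // stop once consumed
                }
            }
        } catch (InterruptedException ignored) {
            Thread.currentThread().interrupt();
        }
    }

    private MessageEvent preprocess(RawEvent raw) {
        MessageEvent m = new MessageEvent();
        m.kind = raw.type == 1 ? "key" : "touch";       // invented mapping
        m.source = raw;
        return m;
    }
}
```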
  • the application framework layer can read, process and dispatch input events provided by the system layer. It can be the Java Framework shown in Figure 3.
  • the application framework layer may include an algorithm for recognizing input events provided by the system layer, such as Huawei gesture recognition (HwGestureAction) as shown in FIG. 3 , and may also include applications (executing operation instructions corresponding to input events).
  • if the application framework layer processes the event and obtains new event information, the new event message and the input event reported by the system layer can be repackaged into a new input event.
  • the events reported by the application framework layer to the application layer should be events that can be handled by applications in the application layer, at least including gesture operations, such as a knuckle double-tap, a knuckle drawing an S, a knuckle drawing a closed figure, holding the hand palm-up for 1 second at a distance of about 30 cm from the touch screen, making a grabbing motion at about 30 cm from the touch screen, sliding down 3 cm from the top of the touch screen, and the like.
  • the application layer can read and process the input events provided by the application framework layer, and execute the operation instructions corresponding to the input events distributed by the application framework layer. It may be the Application shown in FIG. 3 .
  • For ease of description, the input events reported by the kernel layer to the system layer are regarded as a first event, and the input events before entering InputFramework in the system layer are regarded as a second event.
  • the input event distributed by the application framework layer to the application layer is regarded as a third event. It should be understood that when the system layer only distributes events, the first event and the second event can be understood as the same event; when the application framework layer only performs event distribution, the second event and the third event can be understood as the same event.
  • the first terminal 110 is provided with a touch sensor, an acceleration sensor, and a knuckle algorithm.
  • the knuckle algorithm is placed at the system layer, such as the local framework (Native Framework) shown in FIG. 3 .
  • the touch sensor can sense the area touched by the knuckle, and the acceleration sensor senses the vibration frequency brought by the acceleration of gravity of the knuckle; then the knuckle algorithm can determine whether it is a knuckle action.
  • the following takes a knuckle double-tap as an example to describe the workflow in which the first terminal 110 processes the user's knuckle operation. Specifically, as shown in FIG. 3, when the user double-taps the touch screen of the first terminal 110 with a knuckle, the intelligent sensor hub (Sensor Hub) in the hardware layer (Hardware) processes the related raw data (Acc Rawdata) generated by the knuckle double-tap based on the accelerometer and the pressure sensor.
  • the data processed by the intelligent sensor hub (Sensor Hub) and the data collected by the touch sensor are sent to the kernel layer (Kernel) as hardware interrupt signals; the input hub driver (Input Hub Driver) and the touch sensor driver (Touch Driver) process the hardware interrupt signals into a first event, which may include information such as touch coordinates, the timestamp of the touch operation, the pressing pressure, and the vibration frequency of the gravitational acceleration generated by the tap, and report the first event to the user space. The user space recognizes the first event through the knuckle algorithm in the native framework (Native Framework); when it determines that the first event was generated by a knuckle, it marks the first event with a knuckle identifier, that is, a knuckle label, encapsulates the marked first event into a second event, and sends the second event to InputFramework, which distributes the second event to Huawei gesture recognition (HwGestureAction) in the Java Framework.
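To illustrate the final gesture-recognition step (the HwGestureAction role in this pipeline), here is a hedged sketch that turns two knuckle taps close together in time and space into a "global screenshot" instruction. The thresholds and the instruction name are invented for illustration.

```java
// Sketch of recognizing a knuckle double-tap; thresholds are invented.
class KnuckleTap { float x, y; long timeMs; }

class KnuckleDoubleTapDetector {
    private KnuckleTap last;
    private static final long MAX_GAP_MS = 400;   // max time between taps
    private static final float MAX_DIST_PX = 80f; // max distance between taps

    // Returns the operation instruction name, or null if no gesture yet.
    String onKnuckleTap(KnuckleTap tap) {
        String result = null;
        if (last != null
                && tap.timeMs - last.timeMs <= MAX_GAP_MS
                && Math.hypot(tap.x - last.x, tap.y - last.y) <= MAX_DIST_PX) {
            result = "GLOBAL_SCREENSHOT";
            last = null;
        } else {
            last = tap; // remember the first tap and wait for a second one
        }
        return result;
    }
}
```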
  • the user after the first terminal 110 and the second terminal 120 establish a screen projection connection, the user performs a target operation on the second terminal 120, and the target operation can be a screen capture operation, a screen recording operation, a knuckle operation, or an air operation
  • the first terminal 110 responds to the target operation and executes an operation instruction corresponding to the target operation, so as to realize cross-device user interaction and improve user experience.
  • knuckle operations can include: drawing an S with a knuckle (the operation instruction is a scrolling screenshot), drawing a closed figure with a knuckle (the operation instruction is a partial screenshot), drawing a horizontal line with a knuckle (the operation instruction is split screen), drawing a letter with a knuckle (the operation instruction is to open an application; for example, W opens the weather application, C opens the camera, and e opens the browser), a single-knuckle double-tap (the operation instruction is a global screenshot), and a double-knuckle double-tap (the operation instruction is screen recording).
  • air operations can include: grabbing in the air after the hand icon appears (the operation instruction is a global screenshot), swiping left (the operation instruction is turning the page to the left), swiping right (the operation instruction is turning the page to the right), swiping up (the operation instruction is sliding the screen up), and swiping down (the operation instruction is sliding the screen down).
  • the screen capture operation includes pressing the power button and the volume down button at the same time, drawing an S with a knuckle, drawing a closed figure with a knuckle, double-tapping with a knuckle, clicking the screenshot button in the drop-down menu, and the like; the screen recording operation includes, for example, a double-knuckle double-tap and clicking the screen recording button in the drop-down menu.
  • other operations are involved in the screen capture operation or screen recording operation, and these operations can also generate corresponding operation events.
  • a sliding operation is performed on the second terminal 120 to display a drop-down menu screen.
  • For example, the screenshot editing screen shown in Figure 7d includes buttons such as save, share, free shape, heart shape, rectangle, and ellipse, as well as buttons such as pen shape, color, thickness, share, graffiti, mosaic, eraser, and scrolling screenshot.
  • the target operations that the user can perform on the second terminal 120 are generally all target operations supported by the first terminal 110 .
  • the target operations that can be realized by the second terminal 120 require the support of the hardware layer and the kernel layer (Kernel) of the second terminal 120 .
  • the second terminal 120 should have some or all of the hardware in the hardware layer of the first terminal 110, and have corresponding hardware drivers, so that the second terminal 120 can generate events, and the generated events can be processed by the first terminal 110.
  • the second terminal 120 may also have hardware that the first terminal 110 does not have, so as to realize its specific functions.
  • the user performs a target operation on the second terminal 120, the second terminal 120 sends the generated event to the first terminal 110, and the first terminal 110 determines the operation instruction of the event and executes the operation instruction.
  • an event sent by the second terminal 120 to the first terminal 110 is called an operation event, and operation events will be described below as an example.
  • the operation instructions refer to the description above, and will not be repeated here.
  • the information contained in operation events is critical. If an operation event carries too much information, on the one hand, the amount of data transmitted between the first terminal 110 and the second terminal 120 is large, which reduces data transmission efficiency; on the other hand, the data processing efficiency of the terminal that parses the event will be reduced. Therefore, the information contained in the operation event may have a great impact on the performance and reaction speed of the first terminal 110 and the second terminal 120.
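The following sketch shows one way to keep a forwarded operation event compact, serializing only the fields the receiving terminal needs. The fixed binary wire format is an assumption for illustration, not the patent's encoding.

```java
// Compact operation-event encoding: device id, button id, timestamp only.
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

class CompactOperationEvent {
    static byte[] encode(String deviceId, short buttonId, long timestampMs) {
        byte[] id = deviceId.getBytes(StandardCharsets.UTF_8);
        ByteBuffer buf = ByteBuffer.allocate(2 + id.length + 2 + 8);
        buf.putShort((short) id.length); // length-prefixed device identifier
        buf.put(id);
        buf.putShort(buttonId);          // e.g. screenshot button identifier
        buf.putLong(timestampMs);        // event time information
        return buf.array();
    }
}
```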
  • the first terminal 110 executes the operation instruction of the operation event based on the screen displayed by the second terminal 120 .
  • the first terminal 110 will create a virtual screen of the second terminal 120, and the size of the virtual screen is adapted to the screen size of the second terminal 120.
  • the first terminal 110 displays the screen to execute the operation command.
  • the screen projection method between the first terminal 110 and the second terminal 120 is mirror projection, it is enough for the first terminal 110 to execute the operation instruction based on the screen displayed by itself.
  • the target operations that the user can perform on the second terminal 120 are mainly based on hardware shared by the first terminal 110 and the second terminal 120 .
• the gesture operations that the user can perform on the second terminal 120 include clicking, pressing, dragging, sliding, tapping, and drawing graphics on the touch screen, and the user can also perform air operations above the touch screen.
  • the operations that the user can perform on the second terminal 120 include short pressing and long pressing of the buttons. The following description will be made by taking the first terminal 110 and the second terminal 120 both having touch screens and buttons as an example.
  • the first terminal 110 and the second terminal 120 may further have hardware such as a mouse and a keyboard.
• the key is to process the hardware interrupt signal generated by the hardware layer so as to understand the user's target operation: for example, which button the user pressed, which interface element in the screen the user clicked, and which part of the user's hand (finger pad, knuckle, or whole hand) performed the operation, so that the first terminal 110 can quickly understand the target operation and thereby ensure its response speed and performance.
  • the target operation is a pressing operation on a key of the second terminal 120 .
• the operation event indicates that the gesture operation is the user pressing a key on the second terminal 120, and may include the key value, the key-press time information, and the device identifier of the second terminal 120; after that, the first terminal 110 recognizes the operation event and determines the corresponding operation instruction.
  • the key value of the button indicates what button the user has pressed, which may be a home button, a return button, a power button, a volume button, and the like. Different keys have different key values, thereby distinguishing different keys.
  • the key-press time information may be the duration of the key-press, or descriptive information indicating the duration of the key-press, such as short press and long press.
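• to make the key-event content above concrete, the following is a minimal Java sketch of such an operation-event payload; the patent does not specify a wire format, so the class name, field names, and the 500 ms long-press threshold are assumptions for illustration only.

    public final class KeyOperationEvent {
        public final int keyValue;          // distinguishes power, volume, home, and return keys
        public final long pressDurationMs;  // key-press time information
        public final String deviceId;       // device identifier of the second terminal 120

        public KeyOperationEvent(int keyValue, long pressDurationMs, String deviceId) {
            this.keyValue = keyValue;
            this.pressDurationMs = pressDurationMs;
            this.deviceId = deviceId;
        }

        // Coarse descriptive information for the press duration; the threshold is assumed.
        public String pressType() {
            return pressDurationMs >= 500 ? "LONG_PRESS" : "SHORT_PRESS";
        }
    }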
  • the operation event generated by the second terminal 120 may be the first event generated by the kernel layer (Kernel).
  • the target operation is a click operation on a button in the pull-down menu screen displayed on the second terminal 120 .
• the operation event indicates that the gesture operation is the user clicking the screen capture button or screen recording button in the drop-down menu screen displayed on the touch screen of the second terminal 120, and may include the identifier of the screen capture button or screen recording button that the user clicked and the device identifier of the second terminal 120.
  • the first terminal 110 can directly identify the screen capture button or screen recording button ID in the operation event, determine the screen capture command or screen recording command, and ensure the response speed and performance. It can be understood that the second terminal 120 recognizes the buttons in the drop-down menu screen, so that the first terminal 110 can directly determine the operation instruction corresponding to the button indicated by the operation event.
• the second terminal 120 stores the layout information of the pull-down menu screen, such as the position information of each button in the pull-down menu and the identifier bound to it, so that the second terminal 120 can understand the meaning of each button in the pull-down menu screen it displays.
  • the stored layout information of the pull-down menu screen is adapted to the screen size of the second terminal 120 , but not adapted to the screen size of the first terminal 110 .
• if the first terminal 110 identifies the meaning of the button clicked by the user, the relevant data represented in the screen coordinate system of the second terminal 120 needs to be converted into the screen coordinate system of the first terminal 110 during the identification process; if the meaning of the button clicked by the user is recognized by the second terminal 120, there is no need to convert between the screen coordinate system of the second terminal 120 and the screen coordinate system of the first terminal 110, which reduces the amount of data to be processed and ensures data processing efficiency.
• the first terminal 110 may send the drop-down menu screen and the layout information of the drop-down menu screen to the second terminal 120 for storage; subsequently, the first terminal 110 sends an operation instruction for displaying the pull-down menu to the second terminal 120, and the second terminal 120 calls the stored pull-down menu screen and displays it.
  • the second terminal 120 can directly call the layout information of the drop-down menu screen to determine the identity of the screen capture button or screen recording button.
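• a minimal Java sketch of how stored layout information might let the second terminal 120 resolve a tap to a button identifier in its own screen coordinate system (so no conversion to the first terminal's coordinate system is needed); the identifiers and bound rectangles below are assumed values, not ones from the patent.

    import android.graphics.Rect;
    import java.util.LinkedHashMap;
    import java.util.Map;

    public final class DropDownMenuLayout {
        // Button identifier -> position information, adapted to the second terminal's screen size.
        private final Map<String, Rect> buttonBounds = new LinkedHashMap<>();

        public DropDownMenuLayout() {
            buttonBounds.put("BUTTON_SCREENSHOT", new Rect(40, 200, 200, 360));
            buttonBounds.put("BUTTON_SCREEN_RECORD", new Rect(240, 200, 400, 360));
        }

        // Returns the identifier of the tapped button, or null if empty space was tapped.
        public String hitTest(int x, int y) {
            for (Map.Entry<String, Rect> entry : buttonBounds.entrySet()) {
                if (entry.getValue().contains(x, y)) {
                    return entry.getKey();
                }
            }
            return null;
        }
    }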
  • the operation event may also indicate that the user clicks a button in the screenshot editing screen displayed on the touch screen of the second terminal 120, and the operation event generated by the second terminal 120 includes the identifier of the button in the screenshot editing screen clicked by the user.
• the second terminal 120 stores the layout information of the screenshot editing screen, so that the second terminal 120 can understand the meaning of each button in the screenshot editing screen it displays.
  • the target operation is a sliding operation in which the user slides down from the top of the screen of the second terminal 120 .
• the operation event indicates that the gesture operation is a sliding operation by the user from the top to the bottom of the touch screen of the second terminal 120, and may include the sliding direction, sliding distance, sliding duration, the device identifier of the second terminal 120, and so on, so that the first terminal 110 can directly determine the operation instruction for displaying the pull-down menu corresponding to the operation event.
• the operation event is determined based on operation information indicating the finger's contact with the screen. The operation information may be the coordinates, in the screen coordinate system, and the touch times of the multiple touched pixel points on the touch screen of the second terminal 120, where the screen coordinate system is the coordinate system of the touch screen of the second terminal 120, and a touched pixel can be understood as a pressed pixel.
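• as a hedged illustration of how the sliding direction, distance, and duration carried in the operation event could be derived from those pixel coordinates and touch times, consider the following Java sketch; the 100-pixel threshold is an assumption.

    public final class SwipeRecognizer {
        // xs/ys are pixel coordinates in the screen coordinate system of the
        // second terminal 120; times are the corresponding touch moments (ms).
        public static String recognize(float[] xs, float[] ys, long[] times) {
            float dx = xs[xs.length - 1] - xs[0];
            float dy = ys[ys.length - 1] - ys[0];
            long durationMs = times[times.length - 1] - times[0];
            double distance = Math.hypot(dx, dy);
            String direction;
            if (distance < 100.0) {
                direction = "TAP";  // too short to count as a swipe (threshold assumed)
            } else if (Math.abs(dy) > Math.abs(dx)) {
                // A top-to-bottom slide maps to the "display pull-down menu" instruction.
                direction = dy > 0 ? "SWIPE_DOWN" : "SWIPE_UP";
            } else {
                direction = dx > 0 ? "SWIPE_RIGHT" : "SWIPE_LEFT";
            }
            // Direction, distance, and duration together form the event content.
            return direction + " distance=" + Math.round(distance) + "px duration=" + durationMs + "ms";
        }
    }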
  • the operations of pressing the button of the second terminal 120 , clicking the button in the drop-down menu screen displayed on the second terminal 120 , and sliding down from the top of the screen of the second terminal 120 are relatively simple gesture operations.
  • the operation event usually includes more information, and correspondingly, the amount of data processing is also very large.
• knuckle operations and air operations involve not only the recognition of knuckles and of hands in the air, but also the recognition of knuckle motions and of hand air motions.
• the second terminal 120 may be used to identify knuckles or hands in the air, while the first terminal 110 is used to identify the knuckle motions or hand air motions and, based on them, determine the operation instruction.
  • the system layer (Native Framework) of the first terminal 110 can directly perform event distribution as a division node.
  • the operation event sent by the second terminal 120 to the first terminal 110 may include knuckle identification, knuckle touch information, and knuckle press information.
• the first terminal 110 identifies the operation event to determine the knuckle motion performed by the user on the screen displayed by the second terminal 120, and determines the operation instruction corresponding to the operation event based on the knuckle motion, such as taking a screenshot, recording the screen, splitting the screen, or opening an application; see above for details.
• the knuckle touch information may include the coordinates, in the screen coordinate system, and the touch times of the multiple pixel points on the screen of the second terminal 120 touched by the knuckle, and may also include the touch area, where the touch area can be understood as the area of the region formed by the pixel points touched continuously at the same moment; the knuckle press information may include the vibration frequency generated by the gravitational acceleration of the knuckle.
  • the recorded graphic S corresponds to the respective touch position and touch moment of each pixel on the screen.
  • the operation event may include the same pixel at different touch times. For example, in the process of drawing a closed figure with the user's knuckles, the pixel at the start point and the end point are the same, but the touch time is different.
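• the closed-figure check just described (same start and end pixel, different touch times) can be sketched as follows; this is illustrative Java under assumed tolerances, not the patent's algorithm.

    import java.util.List;

    public final class KnuckleTrace {
        public static final class Point {
            public final int x, y;  // pixel coordinates in the screen coordinate system
            public final long t;    // touch moment in milliseconds
            public Point(int x, int y, long t) { this.x = x; this.y = y; this.t = t; }
        }

        // True when the trace starts and ends at (nearly) the same pixel at
        // different touch moments. The 20-pixel tolerance is an assumption.
        public static boolean isClosedFigure(List<Point> trace) {
            if (trace.size() < 3) return false;
            Point first = trace.get(0);
            Point last = trace.get(trace.size() - 1);
            boolean samePixel = Math.abs(first.x - last.x) <= 20
                    && Math.abs(first.y - last.y) <= 20;
            return samePixel && last.t > first.t;
        }
    }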
  • the code for generating an operation event by the second terminal 120 is transplanted from the first terminal 110;
• alternatively, the second terminal 120 requests the knuckle identification from the first terminal 110; for example, when the second terminal 120 recognizes through the knuckle algorithm that the first event generated by the kernel layer is generated by a knuckle, it requests the knuckle identification from the first terminal 110.
  • the second terminal 120 is used to recognize motions of knuckle joints or air motions of hands.
  • the first terminal 110 is configured to determine an operation instruction based on the recognized knuckle motion or air motion of the hand.
  • the operation event generated by the second terminal 120 may include an air movement, so that the first terminal 110 can directly read the air movement in the operation event, and determine the corresponding operation instruction .
  • the processing capability of the second terminal 120 can be considered.
• if the processing capability of the second terminal 120 is strong, the second terminal 120 can identify air movements and/or knuckle movements; otherwise, the second terminal 120 can only recognize the hand or knuckles in the air.
  • the operation instruction corresponding to the operation may be determined directly based on knuckle motions or hand air motions, without considering the content of the screen displayed by the second terminal 120 .
• for example, if the knuckle double-taps, the operation instruction is to take a screenshot; if an S is drawn with the knuckle, the operation instruction is a scrolling screenshot.
  • the operation instruction corresponding to the operation needs to be determined based on the knuckle movement or the air movement of the hand and the content of the screen displayed by the second terminal 120 .
  • the operation instruction of the air press is to pause playing.
• regardless of whether the second terminal 120 can only identify whether knuckles or hands are in the air, or can also identify the knuckle motions and hand air motions, the first terminal 110 may determine the operation instruction in combination with the displayed screen.
  • the above operation events are only examples and do not limit the present application, as long as the performance and response speed of the first terminal 110 and the second terminal 120 can be balanced. It should be understood that in this embodiment of the present application, the process of identifying the target operation by the first terminal 110 does not need to consider the difference in screen size between the first terminal 110 and the second terminal 120 . In order to ensure the reliability of the event interaction between the first terminal 110 and the second terminal 120 , preferably, the programs related to obtaining operation events in the second terminal 120 are transplanted from the first terminal 110 .
  • the operation event further enables the first terminal 110 to respond to the operation event of the second terminal 120, execute the operation instruction corresponding to the operation event, realize cross-device screen operation, handle cross-device usage scenarios, and improve user experience.
• so that the second terminal 120 can recognize knuckles, referring to FIG. 5, the second terminal 120 has a smart sensor hub (Sensor Hub), a touch sensor (Touch Sensor), and a display screen (not shown in the figure), and is transplanted with the input hub driver (Input Hub Driver) and the touch sensor driver (Touch Driver) in the kernel layer (Kernel), the knuckle algorithm in the native framework (Native Framework) in user space, the front-end framework (Java/Js UI Framework), and the Application.
  • the second terminal 120 has an Input subsystem.
• the Input subsystem includes the above-mentioned driver layer, input core layer, and event processing layer, and is located in the kernel layer (Kernel).
• a balance of data processing efficiency between the first terminal 110 and the second terminal 120 is achieved, which is conducive to improving the processing efficiency of the user's interactive screen operations and improving the screen projection experience.
• a collaborative application at the Sink end is usually installed on the second terminal 120; this application is used to realize the processing of the second terminal 120 in the screen projection connection and the communication with the first terminal 110.
• a collaborative application at the Source end is installed on the first terminal 110; this application is used to realize the processing of the first terminal 110 in the screen projection connection and the communication with the second terminal 120.
• the collaborative application can interact with the applications in the application layer; correspondingly, the collaborative application of the first terminal 110 can handle some special operation events, such as a key event or a click on a button in the drop-down menu: the collaborative application of the first terminal 110 can directly distribute the operation event to the corresponding application, and that application responds to the operation event.
• the Input subsystem in the second terminal 120 processes the key event and, after recognizing the screenshot key, delivers the key event to the top-level window, where the Sink-end collaborative application responds and delivers it to the Source-end collaborative application on the first terminal 110 side; after the Source-end collaborative application recognizes the key event, it triggers the corresponding screenshot service.
• the event interaction between the first terminal 110 and the second terminal 120 can be carried out through communication modules; for example, a communication module can be understood as a physical communication channel.
• the following takes the first terminal 110 responding to the gesture operation of the second terminal 120 as an example for illustration.
• when the screen projection method of the first terminal 110 and the second terminal 120 is mirror projection and the user performs a screenshot trigger on the second terminal 120, the way the first terminal 110 responds to the gesture operation of the second terminal 120 to realize the screenshot mainly includes the following:
  • the first terminal 110 responds to the screen capture trigger of the second terminal 120.
• the installed screen capture application calls the SurfaceControl.screenshot family of interfaces to obtain the picture displayed on the display of the first terminal 110, and calls the SurfaceFlinger drawing function through the interface of the native Framework; the cached display content is drawn into a Bitmap file and returned to the screen capture application, thereby completing the screen capture, and a screenshot preview screen is displayed on both the first terminal 110 and the second terminal 120.
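• SurfaceControl.screenshot is a hidden framework interface reserved for system-privileged code, and its signature has varied across Android releases; the following reflection-based Java sketch assumes the older two-int overload and is illustrative only.

    import android.graphics.Bitmap;
    import java.lang.reflect.Method;

    public final class MirrorScreenshot {
        // Captures the currently displayed picture; SurfaceFlinger composites the
        // on-screen buffers into the returned Bitmap. Hidden-API access is assumed.
        public static Bitmap capture(int width, int height) throws Exception {
            Class<?> surfaceControl = Class.forName("android.view.SurfaceControl");
            Method screenshot = surfaceControl.getMethod("screenshot", int.class, int.class);
            return (Bitmap) screenshot.invoke(null, width, height);
        }
    }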
• when the screen projection method of the first terminal 110 and the second terminal 120 is heterogeneous projection and the user performs a screenshot trigger on the second terminal 120, the way the first terminal 110 responds to the gesture operation of the second terminal 120 to realize the screenshot mainly includes the following:
  • the trigger source displayId can be understood as the device identifier of the second terminal 120 .
• the first terminal 110 internally caches the screen displayed by the second terminal 120 corresponding to the displayId.
  • the first terminal 110 controls the second terminal 120 to display the screenshot interaction animation and the screenshot interaction screen.
• the screenshot interaction screen can be the screenshot preview screen shown in Figure 7a, Figure 7b, Figure 7c, and Figure 7f, the screenshot editing screen shown in Figure 7d and Figure 7e, the screen where the hand icon 121 is located shown in Figure 7f, or the screen recording preview screen shown in Figure 8a and Figure 8b; the screenshot interaction animation may be the screen scrolling animation shown in Figure 7e.
  • the first terminal 110 may control the display of the second terminal 120 .
  • the screen stored in the first terminal 110 can be projected to the second terminal 120 for display, and the user can perform corresponding operations on the second terminal 120 .
  • the second terminal 120 can support related operations of the touch screen and buttons, see the above for details, and will not go into details here.
  • the user performs an operation of triggering a screenshot on the second terminal 120 , the interactive recognition of the second terminal 120 can recognize the operation, transmit the device ID, and trigger the screenshot service of the first terminal 110 .
  • the device ID can be understood as the above displayId.
  • the first terminal 110 can determine the screenshot instruction corresponding to the target operation and trigger the screenshot service.
• the screenshot service can, based on the screenshot instruction and the device ID, take a screenshot of the screen displayed by the second terminal to obtain a screenshot interaction screen, such as a screenshot preview screen or a screenshot editing screen (a sketch of this per-display flow is given below).
  • multiple screenshot interactive screens can also form a screenshot interactive animation.
• Fig. 7a, Fig. 7b, Fig. 7c, Fig. 7f, Fig. 8a, and Fig. 8b show screenshot preview screens; Fig. 7d and Fig. 7e show screenshot editing screens; and Fig. 7e shows the screenshot interaction animation produced by screen scrolling.
  • the screenshot service of the first terminal 110 sends the screenshot interaction screen to the second terminal 120, so that the second terminal can display it.
  • the screenshot interaction screen is a screenshot editing screen
  • the first terminal 110 saves the screenshot without displaying a screenshot preview screen
  • the first terminal 110 deletes the screenshot.
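• a minimal Java sketch of the per-display screenshot flow above: the first terminal 110 keeps an internally cached frame per displayId and serves screenshot instructions from that cache; class and method names are illustrative, not the patent's code.

    import android.graphics.Bitmap;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    public final class PerDisplayScreenshotService {
        // displayId (device identifier of the second terminal) -> latest cached frame.
        private final Map<Integer, Bitmap> cachedFrames = new ConcurrentHashMap<>();

        public void onFrameRendered(int displayId, Bitmap frame) {
            cachedFrames.put(displayId, frame);
        }

        // Handles a screenshot instruction triggered from the second terminal 120.
        public Bitmap takeScreenshot(int displayId) {
            Bitmap frame = cachedFrames.get(displayId);
            if (frame == null) {
                return null;  // no screen is currently projected for this displayId
            }
            // Copy so that later rendering does not mutate the returned screenshot.
            return frame.copy(Bitmap.Config.ARGB_8888, false);
        }
    }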
• the response of the first terminal 110 to the gesture operation of the second terminal 120 is further described below.
• when the screen projection method of the first terminal 110 and the second terminal 120 is mirror projection and the user performs a screen recording trigger on the second terminal 120, the way the first terminal 110 responds to the screen recording operation of the second terminal 120 to realize screen recording mainly includes the following:
  • the first terminal 110 uses the MediaProjection interface to render the image of the first terminal 110 on a specified surface.
• the main implementation steps include: creating a VirtualDisplay through the MediaProjection obtained from the MediaProjectionManager; the Display of the first terminal 110 can be "projected" onto the VirtualDisplay; the VirtualDisplay renders the image into a Surface, and this Surface is created from the MediaCodec encoder, so that the image displayed by the first terminal 110 is automatically fed to the MediaCodec encoder; finally, the MediaMuxer encapsulates the image metadata obtained from the MediaCodec and outputs it to an MP4 file, thereby obtaining a screen recording file.
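• the steps above can be illustrated with the public MediaProjection/MediaCodec/MediaMuxer APIs; the Java sketch below omits the encoder drain loop and stop logic, and the bit rate, frame rate, and display name are assumed values.

    import android.hardware.display.DisplayManager;
    import android.hardware.display.VirtualDisplay;
    import android.media.MediaCodec;
    import android.media.MediaCodecInfo;
    import android.media.MediaFormat;
    import android.media.MediaMuxer;
    import android.media.projection.MediaProjection;
    import android.view.Surface;

    public final class MirrorScreenRecorder {
        public void start(MediaProjection projection, int width, int height,
                          int densityDpi, String outputPath) throws Exception {
            MediaFormat format = MediaFormat.createVideoFormat(
                    MediaFormat.MIMETYPE_VIDEO_AVC, width, height);
            format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
                    MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);
            format.setInteger(MediaFormat.KEY_BIT_RATE, 6_000_000);
            format.setInteger(MediaFormat.KEY_FRAME_RATE, 30);
            format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1);

            MediaCodec encoder = MediaCodec.createEncoderByType(MediaFormat.MIMETYPE_VIDEO_AVC);
            encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
            // The Surface is created from the MediaCodec encoder, so images rendered
            // into it are automatically fed to the encoder.
            Surface inputSurface = encoder.createInputSurface();

            // The Display of the terminal is "projected" onto this VirtualDisplay,
            // which renders the image into the encoder's input Surface.
            VirtualDisplay display = projection.createVirtualDisplay(
                    "mirror-recording", width, height, densityDpi,
                    DisplayManager.VIRTUAL_DISPLAY_FLAG_AUTO_MIRROR,
                    inputSurface, null, null);

            // MediaMuxer encapsulates the encoded stream into an MP4 screen recording file.
            MediaMuxer muxer = new MediaMuxer(outputPath,
                    MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);
            encoder.start();
            // ... drain encoder output, muxer.addTrack(...) / writeSampleData(...),
            // then stop and release everything when recording ends ...
        }
    }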
  • both the first terminal 110 and the second terminal 120 display the same screen recording image.
• when the screen projection method of the first terminal 110 and the second terminal 120 is heterogeneous screen projection and the user performs a screen recording trigger on the second terminal 120, the way the first terminal 110 responds to the gesture operation of the second terminal 120 to realize screen recording mainly includes the following:
  • the trigger source displayId can be understood as the device identifier of the second terminal 120 .
  • the AudioRecord of the trigger source displayId will acquire the sound signal collected by the microphone on the second terminal 120 .
  • the MediaMuxer encapsulates the image metadata of the displayId obtained from the MediaCodec and outputs it into an MP4 file, thereby obtaining a screen recording file.
  • the first terminal 110 controls the second terminal 120 to display multiple screen recording pictures, and the screen recording time is displayed in the screen recording icons of these screen recording pictures.
• for the screen recording screen and the screen recording icon 122, refer to the screen recording screens shown in FIG. 8a and FIG. 8b.
• the first terminal in the embodiment of the present application may be a terminal capable of projecting and sending (Source) screens and processing input events, such as a mobile phone, a tablet computer, a personal digital assistant (PDA), a desktop computer, a wearable device, a notebook, etc.
• the second terminal in the embodiment of the present application may be a terminal capable of generating input events and at least capable of receiving (Sink) and displaying screens, such as a convenience screen or a tablet computer.
  • the second terminal may also have a sound output capability and a sound collection capability.
• the specific types of the first terminal and the second terminal are not limited here, and may be determined according to actual scenarios.
• when the screen is projected from a mobile phone to a convenience screen in an actual scene, the mobile phone is the first terminal and the convenience screen is the second terminal.
• the first terminal and the second terminal run the same operating system, including but not limited to iOS, Android, Microsoft, or other operating systems.
• the following uses the mobile phone as the first terminal 110 and the convenience screen as the second terminal 120 to introduce specific scenarios of heterogeneous screen projection in the embodiment of the present application.
• the convenience screen 120 is transplanted with the knuckle recognition algorithm, and knuckle events are obtained from the convenience screen.
• when keys on the convenience screen 120 are pressed, such as the power key and the volume down key, the convenience screen 120 sends the key event to the mobile phone 110; after the mobile phone 110 recognizes that the key event is a screen capture event, it takes a screenshot of the internally cached video screen (not shown in the figure) displayed on the convenience screen 120, generates a screenshot preview screen, and sends it to the convenience screen 120; the convenience screen 120 displays the screenshot preview screen.
• for the mobile phone 110, the window manager PhoneWindowManager performs combined-key recognition; when a screen capture event is recognized, PhoneWindowManager calls the screenshot assistant ScreenShotHelper(Ex), so that ScreenShotHelper(Ex) triggers and calls the screenshot service TakeScreenShotService, TakeScreenShotService calls the screenshot management (HW)GlobalScreenshot, and (HW)GlobalScreenshot calls the screenshot generation interface and calls the graphics SurfaceControl to generate the screenshot, then adds a preview thumbnail to obtain the screenshot preview screen.
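• a hedged Java sketch of the combined-key recognition that the window manager performs; the class below is illustrative and not the framework's actual PhoneWindowManager code.

    import android.view.KeyEvent;

    public final class CombinedKeyRecognizer {
        private boolean powerPressed;
        private boolean volumeDownPressed;

        public void onKeyEvent(KeyEvent event, Runnable screenshotTrigger) {
            boolean down = event.getAction() == KeyEvent.ACTION_DOWN;
            switch (event.getKeyCode()) {
                case KeyEvent.KEYCODE_POWER:       powerPressed = down; break;
                case KeyEvent.KEYCODE_VOLUME_DOWN: volumeDownPressed = down; break;
                default: return;
            }
            // Power + volume-down held together is recognized as a screen capture event.
            if (powerPressed && volumeDownPressed) {
                screenshotTrigger.run();  // e.g. trigger TakeScreenShotService
            }
        }
    }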
• the user slides down the drop-down menu on the touch screen of the convenience screen 120, and the convenience screen 120 sends the sliding event to the mobile phone 110; after the mobile phone 110 recognizes that the sliding event is a drop-down menu event, it generates a drop-down menu screen and sends it to the convenience screen 120; the convenience screen 120 displays the drop-down menu screen; the user clicks the screenshot button in the drop-down menu screen of the convenience screen 120, and the click event is sent to the mobile phone 110; the mobile phone 110 recognizes the click event as a click on the screenshot button, takes a screenshot of the internally cached video screen (not shown in the figure) displayed on the convenience screen 120, generates a screenshot preview screen, and sends it to the convenience screen 120; the convenience screen 120 displays the screenshot preview screen.
• the user performs a pull-down action on the touch screen of the convenience screen 120; for the mobile phone 110, when the pull-down menu action is recognized, the screenshot service TakeScreenShotService is called, TakeScreenShotService calls the screenshot management (HW)GlobalScreenshot, and (HW)GlobalScreenshot calls the screenshot generation interface, calls the graphics SurfaceControl to generate the screenshot, and adds a preview thumbnail to obtain the screenshot preview screen.
• the user double-taps with a knuckle on the video screen displayed on the touch screen of the convenience screen 120, and the convenience screen 120 sends the knuckle event to the mobile phone 110; the mobile phone 110 recognizes the knuckle event and determines that the user action is a knuckle double-tap; it takes a screenshot of the internally cached video screen (not shown in the figure) displayed on the convenience screen 120, generates a screenshot preview screen, and sends it to the convenience screen 120; the convenience screen 120 displays the screenshot preview screen.
• for the mobile phone 110, the screenshot assistant ScreenShotHelper(Ex) is called to trigger and call the screenshot service TakeScreenShotService; TakeScreenShotService calls the screenshot management (HW)GlobalScreenshot; (HW)GlobalScreenshot calls the screenshot generation interface and calls the graphics SurfaceControl to generate the screenshot, then adds a preview thumbnail to obtain the screenshot preview screen.
• the user taps with a knuckle and draws a closed figure on the video screen displayed on the touch screen of the convenience screen 120, and the convenience screen 120 sends the knuckle event to the mobile phone 110; the mobile phone 110 recognizes the knuckle event and determines that the user's action is tapping with a knuckle and drawing a closed figure; it takes a screenshot of the internally cached video screen (not shown in the figure) displayed on the convenience screen 120, generates a screenshot editing screen, and sends it to the convenience screen 120; the convenience screen 120 displays the screenshot editing screen, and when the user clicks the save button of the screenshot editing screen, the mobile phone 110 saves the screenshot (not shown in the figure).
• the user taps with a knuckle and draws an S on the weather screen displayed on the touch screen of the convenience screen 120, and the convenience screen 120 sends the knuckle event to the mobile phone 110; the mobile phone 110 recognizes the knuckle event and determines that the user's action is tapping with a knuckle and drawing an S; it scrolls and captures the internally cached weather screen (not shown in the figure) displayed on the convenience screen 120, and sends the generated scrolling screen and screenshot editing screen to the convenience screen 120; after displaying the scrolling animation, the convenience screen 120 displays the screenshot editing screen, and when the user clicks the save button on the screenshot editing screen, the mobile phone 110 saves the screenshot (not shown in the figure).
• the user draws an S with a knuckle on the touch screen of the convenience screen 120; for the mobile phone 110, the knuckle listener SystemwideActionListener performs knuckle recognition, and when the knuckle is recognized, the scrolling screenshot management MultiScreenShotService is triggered, and MultiScreenShotService works on the basis of the screenshot management (HW)GlobalScreenshot.
• the user holds a hand up and hovers at a distance of 20-40 cm from the touch screen of the convenience screen 120, and the convenience screen 120 sends the air event to the mobile phone 110; the mobile phone 110 recognizes the air event and determines that the user's action is hovering the hand at a distance of 20-40 cm from the touch screen of the convenience screen 120; it adds a hand icon 121 to the internally cached video screen (not shown in the figure) displayed on the convenience screen 120, generates a screenshot prompt screen, and sends it to the convenience screen 120; the convenience screen 120 displays the screenshot prompt screen.
• the user then grasps in the air, and the convenience screen 120 sends the air-grasping event to the mobile phone 110; the mobile phone 110 recognizes the air-grasping event and takes a screenshot of the internally cached video screen (not shown in the figure) displayed on the convenience screen 120, generates a screenshot preview screen, and sends it to the convenience screen 120; the convenience screen 120 displays the screenshot preview screen.
• the user double-taps with two knuckles on the touch screen of the convenience screen 120, and the convenience screen 120 sends the knuckle event to the mobile phone 110; the mobile phone 110 recognizes the knuckle event and determines that the user action is a double-knuckle double-tap; it records the internally cached video screen (not shown in the figure) displayed on the convenience screen 120, generates a screen recording screen, and sends it to the convenience screen 120; the convenience screen 120 displays the screen recording screen.
• the user clicks the screen recording icon 122, and the convenience screen 120 sends the click event to the mobile phone 110; the mobile phone 110 recognizes the click event as a click on the screen recording icon 122 in the screen recording interface, takes a screenshot of the internally cached video screen (not shown in the figure) displayed on the convenience screen 120, generates a screenshot preview screen, and sends it to the convenience screen 120; the convenience screen 120 displays the screenshot preview screen.
• the user slides down the drop-down menu on the touch screen of the convenience screen 120, and the convenience screen 120 sends the sliding event to the mobile phone 110; after the mobile phone 110 recognizes that the sliding event is a drop-down menu event, it generates a drop-down menu screen and sends it to the convenience screen 120; the convenience screen 120 displays the drop-down menu screen; the user clicks the screen recording button in the drop-down menu screen of the convenience screen 120, and the click event is sent to the mobile phone 110; the mobile phone 110 records the internally cached video screen (not shown in the figure) displayed on the convenience screen 120, generates a screen recording screen, and sends it to the convenience screen 120; the convenience screen 120 displays the screen recording screen.
• the user clicks the screen recording icon 122, and the convenience screen 120 sends the click event to the mobile phone 110; the mobile phone 110 recognizes the click event as a click on the screen recording icon 122 in the screen recording interface, takes a screenshot of the internally cached video screen (not shown in the figure) displayed on the convenience screen 120, generates a screenshot preview screen, and sends it to the convenience screen 120; the convenience screen 120 displays the screenshot preview screen.
• the key events, click events, sliding events, knuckle events, and air events sent by the second terminal 120 to the first terminal 110 are all operation events as described above and below.
  • FIG. 9 shows a schematic structural diagram of a first terminal 110 .
• the first terminal 110 may include a processor 1110, an external memory interface 1120, an internal memory 1121, a universal serial bus (USB) interface 1130, a charging management module 1140, a power management module 1141, a battery 1142, an antenna 1, an antenna 2, a mobile communication module 1150, a wireless communication module 1160, an audio module 1170, a speaker 1170A, a receiver 1170B, a microphone 1170C, a sensor module 1180, keys 1190, a camera 1191, a display screen 1192, and the like.
  • the sensor module 1180 may include a pressure sensor 1180A, an attitude sensor 1180B, a distance sensor 1180C, a touch sensor 1180D and the like.
  • the structure illustrated in the embodiment of the present application does not constitute a specific limitation on the first terminal 110 .
  • the first terminal 110 may include more or fewer components than shown in the figure, or combine certain components, or separate certain components, or arrange different components.
  • the illustrated components can be realized in hardware, software or a combination of software and hardware.
• the processor 1110 may include one or more processing units; for example, the processor 1110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc. The different processing units may be independent devices, or may be integrated in one or more processors.
  • the controller may be the nerve center and command center of the first terminal 110 .
• the controller can generate operation control signals according to the instruction operation code and timing signals, and complete the control of fetching and executing instructions.
  • a memory may also be provided in the processor 1110 for storing instructions and data.
  • the memory in processor 1110 is a cache memory.
  • the memory may hold instructions or data that the processor 1110 has just used or recycled. If the processor 1110 needs to use the instruction or data again, it can be called directly from the memory. Repeated access is avoided, and the waiting time of the processor 1110 is reduced, thereby improving the efficiency of the system.
  • processor 1110 may include one or more interfaces.
• the interface may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, etc.
• the I2C interface is a bidirectional synchronous serial bus, including a serial data line (SDA) and a serial clock line (SCL).
  • processor 1110 may include multiple sets of I2C buses.
• the processor 1110 may be separately coupled to the touch sensor 1180D, the charger, the flash, the camera 1191, and the like through different I2C bus interfaces.
  • the processor 1110 may be coupled to the touch sensor 1180D through the I2C interface, so that the processor 1110 and the touch sensor 1180D communicate through the I2C bus interface to realize the touch function of the first terminal 110 .
  • the I2S interface can be used for audio communication.
  • processor 1110 may include multiple sets of I2S buses.
  • the processor 1110 may be coupled to the audio module 1170 through an I2S bus to implement communication between the processor 1110 and the audio module 1170 .
  • the audio module 1170 can transmit audio signals to the wireless communication module 1160 through the I2S interface, so as to realize the function of answering calls through the Bluetooth headset.
  • the PCM interface can also be used for audio communication, sampling, quantizing and encoding the analog signal.
  • the audio module 1170 and the wireless communication module 1160 can be coupled through a PCM bus interface.
  • the audio module 1170 can also transmit audio signals to the wireless communication module 1160 through the PCM interface, so as to realize the function of answering calls through the Bluetooth headset. Both the I2S interface and the PCM interface can be used for audio communication.
  • the UART interface is a universal serial data bus used for asynchronous communication.
  • the bus can be a bidirectional communication bus. It converts the data to be transmitted between serial communication and parallel communication.
  • a UART interface is generally used to connect the processor 1110 and the wireless communication module 1160 .
  • the processor 1110 communicates with the Bluetooth module in the wireless communication module 1160 through the UART interface to realize the Bluetooth function.
  • the audio module 1170 can transmit audio signals to the wireless communication module 1160 through the UART interface, so as to realize the function of playing music through the Bluetooth headset.
  • the MIPI interface can be used to connect the processor 1110 with the display screen 1192, the camera 1191 and other peripheral devices.
• the MIPI interface includes a camera serial interface (CSI), a display serial interface (DSI), and the like.
  • the processor 1110 communicates with the camera 1191 through a CSI interface to realize the shooting function of the first terminal 110 .
  • the processor 1110 communicates with the display screen 1192 through the DSI interface to implement the display function of the first terminal 110 .
  • the GPIO interface can be configured by software.
  • the GPIO interface can be configured as a control signal or as a data signal.
  • the GPIO interface can be used to connect the processor 1110 with the camera 1191 , the display screen 1192 , the wireless communication module 1160 , the audio module 1170 , the sensor module 1180 and so on.
  • the GPIO interface can also be configured as an I2C interface, I2S interface, UART interface, MIPI interface, etc.
  • the USB interface 1130 is an interface conforming to the USB standard specification, specifically, it may be a Mini USB interface, a Micro USB interface, a USB Type C interface, and the like.
  • the USB interface 1130 can be used to connect a charger to charge the first terminal 110, and can also be used to transmit data between the first terminal 110 and peripheral devices. It can also be used to connect headphones and play audio through them.
• the interface may also be used to connect other terminals, such as AR devices.
  • the interface connection relationship between the modules shown in the embodiment of the present application is only a schematic illustration, and does not constitute a structural limitation of the first terminal 110 .
  • the first terminal 110 may also adopt different interface connection manners in the foregoing embodiments, or a combination of multiple interface connection manners.
  • the charging management module 1140 is used for receiving charging input from the charger.
  • the charger may be a wireless charger or a wired charger.
  • the charging management module 1140 can receive charging input from the wired charger through the USB interface 1130 .
  • the charging management module 1140 may receive wireless charging input through the wireless charging coil of the first terminal 110 .
  • the charging management module 1140 can also supply power to the first terminal 110 through the power management module 1141 while charging the battery 1142 .
  • the power management module 1141 is used for connecting the battery 1142 , the charging management module 1140 and the processor 1110 .
  • the power management module 1141 receives the input of the battery 1142 and/or the charging management module 1140, and supplies power for the processor 1110, the internal memory 1121, the external memory 1120, the display screen 1192, the camera 1191, and the wireless communication module 1160, etc.
  • the power management module 1141 can also be used to monitor parameters such as battery capacity, battery cycle times, and battery health status (leakage, impedance).
  • the power management module 1141 can also be set in the processor 1110 .
  • the power management module 1141 and the charging management module 1140 may also be set in the same device.
  • the wireless communication function of the first terminal 110 may be realized by the antenna 1, the antenna 2, the mobile communication module 1150, the wireless communication module 1160, the modem processor and the baseband processor.
  • Antenna 1 and Antenna 2 are used to transmit and receive electromagnetic wave signals.
  • Each antenna in the first terminal 110 may be used to cover single or multiple communication frequency bands. Different antennas can also be multiplexed to improve the utilization of the antennas.
  • Antenna 1 can be multiplexed as a diversity antenna of a wireless local area network.
  • the antenna may be used in conjunction with a tuning switch.
  • the mobile communication module 1150 may provide wireless communication solutions including 2G/3G/4G/5G applied on the first terminal 110 .
  • the mobile communication module 1150 may include at least one filter, switch, power amplifier, low noise amplifier (low noise amplifier, LNA) and the like.
  • the mobile communication module 1150 can receive electromagnetic waves through the antenna 1, filter and amplify the received electromagnetic waves, and send them to the modem processor for demodulation.
  • the mobile communication module 1150 can also amplify the signal modulated by the modem processor, convert it into electromagnetic wave and radiate it through the antenna 1 .
  • at least part of the functional modules of the mobile communication module 1150 may be set in the processor 1110 .
  • at least part of the functional modules of the mobile communication module 1150 and at least part of the modules of the processor 1110 may be set in the same device.
  • a modem processor may include a modulator and a demodulator.
  • the modulator is used for modulating the low-frequency baseband signal to be transmitted into a medium-high frequency signal.
  • the demodulator is used to demodulate the received electromagnetic wave signal into a low frequency baseband signal. Then the demodulator sends the demodulated low-frequency baseband signal to the baseband processor for processing.
  • the low-frequency baseband signal is passed to the application processor after being processed by the baseband processor.
  • the application processor outputs sound signals through audio equipment (not limited to speaker 1170A, receiver 1170B, etc.), or displays images or videos through display screen 1192 .
  • the modem processor may be a stand-alone device.
  • the modem processor may be independent of the processor 1110, and be set in the same device as the mobile communication module 1150 or other functional modules.
• the wireless communication module 1160 can provide wireless communication solutions applied on the first terminal 110, including wireless local area networks (WLAN) (such as a wireless fidelity (Wi-Fi) network), Bluetooth (BT), the global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR) technology, and the like.
  • the wireless communication module 1160 may be one or more devices integrating at least one communication processing module.
  • the wireless communication module 1160 receives electromagnetic waves via the antenna 2, frequency-modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 1110.
  • the wireless communication module 1160 can also receive the signal to be transmitted from the processor 1110 , frequency-modulate it, amplify it, and convert it into electromagnetic waves through the antenna 2 for radiation.
  • the antenna 1 of the first terminal 110 is coupled to the mobile communication module 1150, and the antenna 2 is coupled to the wireless communication module 1160, so that the first terminal 110 can communicate with the network and other devices through wireless communication technology.
• the wireless communication technology may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technology, etc.
• the GNSS may include the global positioning system (GPS), the global navigation satellite system (GLONASS), the BeiDou navigation satellite system (BDS), the quasi-zenith satellite system (QZSS), and/or satellite based augmentation systems (SBAS).
  • the first terminal 110 implements a display function through a GPU, a display screen 1192, an application processor, and the like.
  • the GPU is a microprocessor for image processing, and is connected to the display screen 1192 and the application processor. GPUs are used to perform mathematical and geometric calculations for graphics rendering.
  • Processor 1110 may include one or more GPUs that execute program instructions to generate or change display information.
  • the display screen 1192 is used to display images, videos and the like.
  • Display 1192 includes a display panel.
• the display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a MiniLED, a MicroLED, a Micro-OLED, quantum dot light emitting diodes (QLED), etc.
  • the first terminal 110 may include 1 or N display screens 1192, where N is a positive integer greater than 1.
  • the display screen 1192 may be punched out, for example, a through hole is provided in the upper left corner or upper right corner of the display screen 1192, and the camera 1191 may be embedded in the through hole.
  • the first terminal 110 may implement a shooting function through an ISP, a camera 1191 , a video codec, a GPU, a display screen 1192 , and an application processor.
  • the ISP is used to process the data fed back by the camera 1191 .
  • the light is transmitted to the photosensitive element of the camera through the lens, and the light signal is converted into an electrical signal, and the photosensitive element of the camera transmits the electrical signal to the ISP for processing, and converts it into an image visible to the naked eye.
  • ISP can also perform algorithm optimization on image noise, brightness, and skin color.
  • ISP can also optimize the exposure, color temperature and other parameters of the shooting scene.
  • the ISP may be located in the camera 1191.
  • Camera 1191 is used to capture still images or video.
  • the object generates an optical image through the lens and projects it to the photosensitive element.
  • the photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor.
  • the photosensitive element converts the light signal into an electrical signal, and then transmits the electrical signal to the ISP to convert it into a digital image signal.
  • the ISP outputs the digital image signal to the DSP for processing.
  • DSP converts digital image signals into standard RGB, YUV and other image signals.
  • the first terminal 110 may include 1 or N cameras 1191, where N is a positive integer greater than 1.
  • the position of the camera on the first terminal 110 may be front or rear, which is not limited in this embodiment of the present application.
  • the first terminal 110 may include a single camera, dual cameras, or triple cameras, etc., which is not limited in this embodiment of the present application.
  • a mobile phone may include three cameras, wherein one is a main camera, one is a wide-angle camera, and one is a telephoto camera.
  • the first terminal 110 includes multiple cameras, all of the multiple cameras may be front-mounted, or all may be rear-mounted, or some may be front-mounted and the other part may be rear-mounted, which is not limited in this embodiment of the present application.
  • Digital signal processors are used to process digital signals. In addition to digital image signals, they can also process other digital signals. For example, when the first terminal 110 selects a frequency point, the digital signal processor is used to perform Fourier transform on the energy of the frequency point.
  • Video codecs are used to compress or decompress digital video.
  • the first terminal 110 may support one or more video codecs.
  • the first terminal 110 can play or record videos in multiple encoding formats, for example: moving picture experts group (moving picture experts group, MPEG) 1, MPEG2, MPEG3, MPEG4, etc.
• the NPU is a neural-network (NN) computing processor. By drawing on the structure of biological neural networks, for example the transfer mode between neurons in the human brain, it can quickly process input information such as operation events and can continuously self-learn.
  • Applications such as intelligent cognition of the first terminal 110 can be implemented through the NPU, such as image recognition, face recognition, speech recognition, text understanding, and the like.
  • the external memory interface 1120 may be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the first terminal 110.
  • the external memory card communicates with the processor 1110 through the external memory interface 1120 to implement a data storage function. Such as saving music, video and other files in the external memory card.
  • the internal memory 1121 may be used to store computer-executable program codes including instructions.
  • the processor 1110 executes various functional applications and data processing of the first terminal 110 by executing instructions stored in the internal memory 1121 .
  • the internal memory 1121 may include an area for storing programs and an area for storing data.
• the program storage area can store the operating system, at least one application program required by a function (such as a sound playing function or an image playing function), and the like.
  • the storage data area can store data created during the use of the first terminal 110 (such as audio data, phonebook, etc.) and the like.
  • the internal memory 1121 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, flash memory device, universal flash storage (universal flash storage, UFS) and the like.
  • the first terminal 110 may implement an audio function through an audio module 1170, a speaker 1170A, a receiver 1170B, a microphone 1170C, an earphone interface (not shown in the figure), and an application processor. Such as music playback, recording, etc.
  • the audio module 1170 is used to convert digital audio information into analog audio signal output, and is also used to convert analog audio input into digital audio signal.
  • the audio module 1170 may also be used to encode and decode audio signals.
  • the audio module 1170 may be set in the processor 1110 , or some functional modules of the audio module 1170 may be set in the processor 1110 .
• the speaker 1170A, also called the "horn", is used to convert audio electrical signals into sound signals.
  • the first terminal 110 can listen to music through the speaker 1170A, or listen to a hands-free call.
• the receiver 1170B, also called the "earpiece", is used to convert audio electrical signals into sound signals.
  • the receiver 1170B can be placed close to the human ear to listen to the voice.
• the microphone 1170C, also called the "mic" or "mouthpiece", is used to convert sound signals into electrical signals. When making a phone call or sending a voice message, the user can put the mouth close to the microphone 1170C to make a sound, inputting the sound signal into the microphone 1170C.
  • the first terminal 110 may be provided with at least one microphone 1170C. In some other embodiments, the first terminal 110 may be provided with two microphones 1170C, which may also implement a noise reduction function in addition to collecting sound signals. In some other embodiments, the first terminal 110 can also be equipped with three, four or more microphones 1170C to collect sound signals, reduce noise, identify sound sources, and realize directional recording functions, etc.
  • the headphone jack is used to connect wired headphones.
  • the earphone interface can be a USB interface 1130, or a 3.5mm open mobile terminal platform (open mobile terminal platform, OMTP) standard interface, or a cellular telecommunications industry association of the USA (CTIA) standard interface .
  • the pressure sensor 1180A is used to sense the pressure signal and convert the pressure signal into an electrical signal.
  • pressure sensor 1180A may be located on display screen 1192 .
  • a capacitive pressure sensor may be comprised of at least two parallel plates with conductive material.
  • the first terminal 110 determines the strength of the pressure according to the change in capacitance.
  • when a touch operation acts on the display screen 1192, the first terminal 110 detects the intensity of the touch operation according to the pressure sensor 1180A.
  • the first terminal 110 may also calculate the touched position according to the detection signal of the pressure sensor 1180A.
  • touch operations acting on the same touch position but with different intensities may correspond to different operation instructions. For example, when a touch operation whose intensity is less than a first pressure threshold acts on the icon of the short message application, an instruction of viewing the short message is executed; when a touch operation whose intensity is greater than or equal to the first pressure threshold acts on the short message application icon, an instruction of creating a new short message is executed.
  • the first terminal 110 can also calculate the touched area according to the detection signal of the pressure sensor 1180A.
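  • As a hedged illustration of the pressure-threshold dispatch described above, the following Java sketch maps touch intensity to different instructions; the threshold value, class name, and instruction strings are assumptions for illustration, not the terminal's actual implementation.

```java
// Sketch only: illustrates the pressure-threshold dispatch described above.
// FIRST_PRESSURE_THRESHOLD and the instruction names are assumptions.
public class PressureDispatcher {
    private static final float FIRST_PRESSURE_THRESHOLD = 0.5f; // assumed scale 0..1

    /** Returns the instruction for a touch on the short message icon. */
    public String dispatch(float pressure) {
        if (pressure < FIRST_PRESSURE_THRESHOLD) {
            return "VIEW_SHORT_MESSAGE";   // light press: view messages
        } else {
            return "CREATE_SHORT_MESSAGE"; // firm press: create a new message
        }
    }
}
```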
  • the attitude sensor 1180B may be used to determine the motion attitude of the first terminal 110. It includes motion sensors such as a three-axis gyroscope, a three-axis accelerometer, and a three-axis electronic compass, and temperature-compensated three-dimensional attitude and orientation data can be obtained through an embedded low-power ARM processor.
  • the angular velocity of the first terminal 110 around three axes (i.e., the x, y, and z axes) can be determined by the gyroscope.
  • the acceleration sensor in the attitude sensor 1180B can detect the acceleration of the first terminal 110 in various directions (generally along three axes). When the first terminal 110 is stationary, the magnitude and direction of gravity can be detected.
  • an air screenshot can be realized based on the attitude sensor 1180B and the distance sensor 1180C.
  • the first terminal 110 may measure the distance by infrared or laser. In some embodiments, when shooting a scene, the first terminal 110 may use the distance sensor 1180C for distance measurement to achieve fast focusing. In some embodiments, in the screenshot scene, the first terminal 110 may use the distance sensor 1180C to measure the distance between the hand and the first terminal 110 .
  • the touch sensor 1180D is also called “touch panel”.
  • the touch sensor 1180D can be arranged on the display screen 1192, and the touch sensor 1180D and the display screen 1192 form a touch screen, also called “touch screen”.
  • the touch sensor 1180D is used to detect touch operation data acting on or near it.
  • the touch sensor can transmit the detected touch operation data to the application processor to determine the type of touch event.
  • visual output related to the touch operation data can be provided through the display screen 1192.
  • the touch sensor 1180D may also be disposed on the surface of the first terminal 110 , which is different from the position of the display screen 1192 .
  • the sensor module 1180 may also include an air pressure sensor, a magnetic sensor, a proximity light sensor, a fingerprint sensor, a temperature sensor, an ambient light sensor, a bone conduction sensor, and the like.
  • the keys 1190 include a power key, a volume key, and the like. The keys 1190 may be mechanical keys or touch keys.
  • the first terminal 110 may receive key input and generate key signal input related to user settings and function control of the first terminal 110 .
  • the first terminal 110 may further include a motor, an indicator, a SIM interface, and the like.
  • the second terminal 120 may include a processor, an internal memory, a universal serial bus (USB) interface, a charging management module, a power management module, a battery, an antenna, a mobile communication module, a wireless communication module, an audio module, a speaker, a receiver, a microphone, a sensor module, keys, a camera, a display screen, and the like.
  • the sensor module may include a pressure sensor, a touch sensor, an attitude sensor, a distance sensor, and the like. For details, refer to the above; they are not repeated here. It should be noted that the above content is only an example of the structure of the second terminal 120 and does not constitute a specific limitation on the second terminal 120. In other embodiments, the second terminal 120 may include more or fewer components than those shown in FIG. 9, or combine certain components, or split certain components, or have a different component arrangement.
  • the software system of the first terminal 110 may adopt a layered architecture, an event-driven architecture, a micro-kernel architecture, a micro-service architecture, or a cloud architecture.
  • an Android system with a layered architecture is taken as an example to illustrate the software structure of the first terminal 110.
  • Fig. 10a is a block diagram of the software structure of the first terminal 110 according to the embodiment of the present application.
  • the layered architecture divides the software into several layers, and each layer has a clear role and division of labor. Layers communicate through software interfaces.
  • the Android system is divided into four layers: the application layer, the application framework layer, the system layer (including the Android runtime and system libraries), and the kernel layer (Kernel layer), from top to bottom.
  • the application layer can consist of a series of application packages.
  • the application package may include applications such as camera, gallery, calendar, call, map, navigation, WLAN, Bluetooth, music, video, short message, screen capture, and screen recording.
  • the application framework layer provides an application programming interface (application programming interface, API) and a programming framework for applications in the application layer.
  • the application framework layer includes some predefined functions, such as functions for receiving events sent by the application layer.
  • the application framework layer may include a window manager, a content provider, a view system, a phone manager, a resource manager, a notification manager, and the like.
  • a window manager is used to manage window programs.
  • the window manager can get the size of the display screen, determine whether there is a status bar, lock the screen, capture the screen, etc.
  • Content providers are used to store and retrieve data and make it accessible to applications.
  • the data may include videos, images, audio, calls made and received, browsing history and bookmarks, a phonebook, and the like.
  • the resource manager provides various resources for the application, such as localized strings, icons, pictures, layout files, video files, and so on.
  • the notification manager enables the application to display notification information in the status bar, which can be used to convey notification-type messages, and can automatically disappear after a short stay without user interaction.
  • the notification manager is used to notify the download completion, message reminder, etc.
  • the notification manager can also present a notification in the top status bar of the system in the form of a chart or scroll-bar text, such as a notification of an application running in the background, or a notification that appears on the screen in the form of a dialog window. For example, text information is prompted in the status bar, a prompt sound is issued, the first terminal 110 vibrates, and the indicator light blinks.
  • the application framework layer can also include:
  • the view system includes visual controls, such as controls for displaying text, controls for displaying pictures, and so on.
  • the view system can be used to build applications.
  • a display interface can consist of one or more views.
  • a display interface including a text message notification icon may include a view for displaying text and a view for displaying pictures.
  • the phone manager is used to provide the communication function of the first terminal 110 .
  • for example, the management of call status (including connected, hung up, and the like).
  • a system library can include multiple function modules. For example: surface manager (surface manager), media library (Media Libraries), 3D graphics processing library (eg: OpenGL ES), 2D graphics engine (eg: SGL), etc.
  • the surface manager is used to manage the display subsystem and provides the fusion of 2D and 3D layers for multiple applications.
  • the media library supports playback and recording of various commonly used audio and video formats, as well as still image files, etc.
  • the media library can support a variety of audio and video encoding formats, such as: MPEG4, H.264, MP3, AAC, AMR, JPG, PNG, etc.
  • the 3D graphics processing library is used to implement 3D graphics drawing, image rendering, compositing, and layer processing, etc.
  • 2D graphics engine is a drawing engine for 2D drawing.
  • System libraries can also include:
  • the sensor service module is used to monitor the sensor data uploaded by various sensors at the hardware layer, and determine the physical state of the first terminal 110;
  • the physical state recognition module is used to analyze and recognize user gestures, faces, etc., and may include knuckle algorithms;
  • the Android Runtime includes core library and virtual machine. The Android runtime is responsible for the scheduling and management of the Android system.
  • the core library consists of two parts: one part is the functions that the Java language needs to call, and the other part is the core library of Android.
  • the application layer and the application framework layer run in virtual machines.
  • the virtual machine executes the Java files of the application layer and the application framework layer as binary files.
  • the virtual machine is used to perform functions such as object life cycle management, stack management, thread management, security and exception management, and garbage collection.
  • the kernel layer is the layer between hardware and software.
  • the kernel layer at least includes a display driver, a camera driver, an audio driver, and a sensor driver, which are used to drive related hardware at the hardware layer, such as the display screen, the camera, the speaker, and the sensors.
  • the software system of the second terminal 120 may adopt a layered architecture, an event-driven architecture, a micro-kernel architecture, a micro-service architecture, or a cloud architecture.
  • the architecture of the software system of the second terminal 120 is consistent with that of the software system of the first terminal.
  • an Android system with a layered architecture is taken as an example to illustrate the software structure of the second terminal 120 .
  • Fig. 10b is a block diagram of the software structure of the second terminal 120 according to the embodiment of the present application.
  • the layered architecture divides the software into several layers, and each layer has a clear role and division of labor. Layers communicate through software interfaces.
  • the Android system is divided into four layers: the application layer, the application framework layer, the system layer (including the Android runtime and system libraries), and the kernel layer (Kernel layer), from top to bottom.
  • the application layer can consist of a series of application packages.
  • the application package may include a collaborative application for implementing a screen projection connection with the first terminal 110 , and applications such as WLAN and Bluetooth for connection with the first terminal 110 .
  • the application framework layer provides an application programming interface (application programming interface, API) and a programming framework for applications in the application layer.
  • the application framework layer may include a content provider, a view system, a resource manager, a notification manager, etc.
  • a system library can include multiple function modules. For example: surface manager (surface manager), sensor service module and physical state recognition module, etc. See the description above for details.
  • the Android runtime includes the core library and the virtual machine. See the description above for details.
  • the kernel layer includes at least a display driver and a touch sensor driver to drive the touch screen.
  • an audio driver and an attitude sensor driver may also be included to drive hardware such as the speaker, the microphone, and the attitude sensor.
  • FIG. 10b is a schematic diagram of a software structure provided by an embodiment of this application, and does not constitute any limitation on the content of the software structure.
  • the content in the software architecture of the second terminal 120 may be the content shown in FIG. 10a, or may be more or less than the content shown in FIG. 10a.
  • the software architectures of the first terminal 110 and the second terminal 120 may be the same or different, as long as data interaction between the first terminal 110 and the second terminal 120 is possible.
  • when the touch sensor 1180D of the second terminal 120 receives a touch operation, corresponding hardware interrupt information is sent to the kernel layer (Kernel).
  • the kernel layer (Kernel) processes the hardware interrupt information into an input event (including touch coordinates, the time stamp of the touch operation, etc.) and distributes the input event to the system layer, which then distributes the input event to the application framework layer.
  • the application framework layer processes the input event into an operation event (indicating that the control clicked by the user is the screenshot control of the pull-down notification menu), sends the operation event to the first terminal 110, acquires the screenshot preview screen sent by the first terminal 110, starts the display driver by calling the kernel layer (Kernel), and displays the corresponding screenshot preview screen through the display screen 1192.
  • the user space of the first terminal 110 obtains the operation event sent by the second terminal 120, reads the information of the operation event, triggers the screenshot application to call the interface of the system layer, intercepts the pixel values of the current entire screen of the second terminal 120 in the internal cache, and adds a preview thumbnail to obtain a screenshot preview screen, which is then sent to the corresponding second terminal 120.
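  • As a rough sketch of the operation-event hand-off described above, the following Java snippet packs the touch coordinates, control identifier, and device identifier into one serialized event and writes it onto the projection connection; the class layout, field names, and byte-stream channel are all assumptions for illustration, not the patent's actual protocol.

```java
// Minimal sketch of the cross-device screenshot hand-off; names are illustrative.
import java.io.ObjectOutputStream;
import java.io.OutputStream;
import java.io.Serializable;

class OperationEvent implements Serializable {
    final long timestamp;
    final float touchX, touchY; // touch coordinates from the input event
    final String controlId;     // e.g. the screenshot control of the pull-down menu
    final String deviceId;      // identifies the second terminal

    OperationEvent(long ts, float x, float y, String controlId, String deviceId) {
        this.timestamp = ts; this.touchX = x; this.touchY = y;
        this.controlId = controlId; this.deviceId = deviceId;
    }
}

class EventSender {
    /** Serializes the operation event onto the projection connection. */
    static void send(OperationEvent event, OutputStream channel) throws Exception {
        new ObjectOutputStream(channel).writeObject(event);
    }
}
```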
  • the following exemplarily describes the workflow of the software and hardware of the first terminal 110 and the second terminal 120 for realizing screen recording.
  • the kernel layer processes touch operations into input events (including touch coordinates, time stamps of touch operation data, etc.), and distributes the input events to the system layer.
  • the system layer distributes input events to the application framework layer.
  • the application framework layer processes the input event into an operation event (indicating that the control clicked by the user is a screen recording control of the pull-down notification menu) and sends the operation event to the first terminal 110 .
  • the second terminal 120 starts the audio driver by calling the kernel layer (Kernel), collects the sound signal input to the second terminal 120 through the microphone 1170C, and reports it to the first terminal 110.
  • the second terminal 120 obtains the screen recording image sent by the first terminal 110, then starts the display driver by calling the kernel layer (Kernel), and displays the corresponding screen recording image through the display screen 1192.
  • the user space of the first terminal 110 obtains the operation event sent by the second terminal 120, reads the information of the operation event, triggers the screen recording application to call the interface of the system layer, intercepts the pixel values of the current entire screen of the second terminal in the internal cache, and draws the entire screen to obtain a screen recording image; at the same time, during the recording process, it generates a screen recording file based on the sound signal collected by the microphone and reported by the second terminal 120 and the screen recording image. The screen recording image is sent to the corresponding second terminal 120.
  • the first terminal establishes a screen projection connection with the second terminal.
  • the second terminal and the first terminal are connected together through the above-mentioned network to implement a distributed soft bus connection, so as to project the screen displayed by the first terminal to the second terminal for display.
  • the first terminal can create a virtual screen of the second terminal, and the virtual screen is adapted to the screen size of the second terminal.
  • the virtual screen of the second terminal can be identified based on the device identifier of the second terminal, and an operation instruction can be executed on the screen of the virtual screen of the second terminal.
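  • On Android, the per-device virtual screen described above could plausibly be backed by the public DisplayManager API, as in the following hedged sketch; the surface, dimensions, and naming scheme are assumptions for illustration, not the actual implementation.

```java
// Hedged sketch: one plausible way to back the second terminal's virtual screen.
import android.content.Context;
import android.hardware.display.DisplayManager;
import android.hardware.display.VirtualDisplay;
import android.view.Surface;

class VirtualScreenFactory {
    static VirtualDisplay createForDevice(Context ctx, String deviceId,
                                          int width, int height, int dpi,
                                          Surface surface) {
        DisplayManager dm =
                (DisplayManager) ctx.getSystemService(Context.DISPLAY_SERVICE);
        // Name the display after the second terminal's device identifier so the
        // first terminal can later route operation instructions to it.
        return dm.createVirtualDisplay("projection-" + deviceId,
                width, height, dpi, surface,
                DisplayManager.VIRTUAL_DISPLAY_FLAG_PRESENTATION);
    }
}
```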
  • the first terminal can manage the microphone of the second terminal. Subsequently, in screen recording or other scenarios that require the microphone to collect sound signals, the second terminal reports the sound signals collected by the microphone to the first terminal.
  • the first terminal sends the target screen to the second terminal.
  • the screen projection mode between the first terminal and the second terminal is mirror projection.
  • the target screen is a mirror image of the screen currently displayed by the first terminal.
  • the screen projection mode between the first terminal and the second terminal is heterogeneous screen projection.
  • the target screen is a screen not displayed by the first terminal.
  • alternatively, the target screen may initially be a mirror image of the screen currently displayed by the first terminal; subsequently, the screens displayed by the first terminal and the second terminal are independent of each other and do not interfere with each other.
  • the size of the target screen is adapted to the screen size of the second terminal.
  • the second terminal displays the target screen.
  • the target picture may be a video picture as shown in FIGS. 7a-7d, 7f, 8a, and 8b, or a weather picture as shown in FIG. 7d.
  • the second terminal generates an input event according to the knuckle operation performed by the user on the target screen.
  • the input event may include knuckle touch information and knuckle press information. See the above for details; they are not repeated here.
  • the input event may be generated by the kernel space after processing the hardware interrupt information generated by the user's knuckle operation on the target screen, or may be generated by the kernel layer after processing the hardware interrupt information that the hardware layer reports for the user's knuckle operation on the target screen.
  • for the kernel space, kernel layer, and hardware layer, refer to the above description; details are not repeated here.
  • when the second terminal recognizes, based on the knuckle algorithm, that the input event is generated by the user's knuckle, it determines the knuckle identifier of the input event.
  • the knuckle algorithms of the first terminal and the second terminal are the same; for example, the knuckle algorithm of the second terminal is transplanted from the first terminal.
  • the knuckle identification may be determined by a knuckle algorithm.
  • alternatively, the knuckle algorithms of the first terminal and the second terminal are different.
  • when the knuckle algorithm of the second terminal recognizes that the input event is generated by the user's knuckle, the second terminal converts the knuckle identifier determined by the knuckle algorithm into a knuckle identifier recognizable by the first terminal, or directly determines a knuckle identifier recognizable by the first terminal, where the knuckle identifier indicates that the input event is generated by the user's knuckle.
  • the second terminal encapsulates the input event and the knuckle identifier into an operation event.
  • the operation event should be an event that can be processed by the first terminal, in other words, the operation event complies with the data exchange protocol between the first terminal and the second terminal.
  • the second terminal performs the recognition of the knuckle, and the first terminal performs the recognition of the knuckle motion.
  • the second terminal encapsulates the input event, the knuckle identifier, and the device identifier of the second terminal into an operation event, so that the first terminal can recognize the terminal and the knuckle movement operated by the user.
  • the operation event generated by the second terminal 120 includes the knuckle identifier.
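  • The encapsulation step described above might look like the following minimal sketch, where the input event, knuckle identifier, and device identifier are packed into one operation event; the class layout and field names are illustrative assumptions, not the patent's actual protocol.

```java
// Minimal sketch of the knuckle operation event; names are illustrative.
import java.io.Serializable;

class KnuckleOperationEvent implements Serializable {
    final byte[] inputEvent;  // serialized knuckle touch info + knuckle press info
    final int knuckleId;      // knuckle identifier recognizable by the first terminal
    final String deviceId;    // lets the first terminal tell which terminal was touched

    KnuckleOperationEvent(byte[] inputEvent, int knuckleId, String deviceId) {
        this.inputEvent = inputEvent;
        this.knuckleId = knuckleId;
        this.deviceId = deviceId;
    }
}
```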
  • the second terminal sends the operation event to the first terminal.
  • the first terminal recognizes the operation event, so as to determine the knuckle motion of the user on the target screen.
  • the first terminal may identify the knuckle motion based on the knuckle touch information and the knuckle press information in the operation event, so as to determine the user's knuckle motion for the target screen.
  • the knuckle motion may be double-tapping with a knuckle as shown in FIG. 7c, drawing a closed figure with a knuckle as shown in FIG. 7d, drawing the letter S with a knuckle as shown in FIG. 7e, or double-tapping with two knuckles as shown in FIG. 8b.
  • the first terminal determines the operation instruction corresponding to the operation event based on the motion of the knuckle.
  • the first terminal stores a correspondence between knuckle motions and operation instructions. By matching the recognized knuckle motion against the stored knuckle motions, the operation instruction corresponding to the matched knuckle motion is determined.
  • when the knuckle motion is double-tapping with two knuckles as shown in FIG. 8b, the operation instruction is correspondingly a screen recording instruction.
  • when the knuckle motion is double-tapping with a knuckle as shown in FIG. 7c, the operation instruction is a screenshot instruction.
  • when the knuckle motion is drawing a closed figure with a knuckle as shown in FIG. 7d, the operation instruction is correspondingly a partial screenshot instruction.
  • when the knuckle motion is drawing the letter S with a knuckle as shown in FIG. 7e, the operation instruction is a scrolling screenshot instruction.
  • alternatively, the first terminal stores knuckle motions and screens, and operation instructions corresponding to combinations of knuckle motions and screens. By matching the recognized knuckle motion and the target screen against the stored knuckle motions and screens, the operation instruction corresponding to the matched knuckle motion and screen is determined.
  • the embodiment of the present application does not limit whether the determination of the operation instruction is based on knuckle motion or knuckle motion and screen, which needs to be determined in combination with actual needs.
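  • The stored correspondence between knuckle motions and operation instructions can be pictured as a simple lookup table, as in this illustrative Java sketch; the enum values mirror the examples in the text and are assumptions, not the terminal's actual identifiers.

```java
// Illustrative lookup of the stored knuckle-motion-to-instruction correspondence.
import java.util.Map;

enum KnuckleMotion { DOUBLE_TAP, DOUBLE_TAP_TWO_KNUCKLES, DRAW_CLOSED_FIGURE, DRAW_S }
enum Instruction  { SCREENSHOT, SCREEN_RECORD, PARTIAL_SCREENSHOT, SCROLLING_SCREENSHOT }

class InstructionTable {
    static final Map<KnuckleMotion, Instruction> TABLE = Map.of(
            KnuckleMotion.DOUBLE_TAP,              Instruction.SCREENSHOT,            // FIG. 7c
            KnuckleMotion.DOUBLE_TAP_TWO_KNUCKLES, Instruction.SCREEN_RECORD,         // FIG. 8b
            KnuckleMotion.DRAW_CLOSED_FIGURE,      Instruction.PARTIAL_SCREENSHOT,    // FIG. 7d
            KnuckleMotion.DRAW_S,                  Instruction.SCROLLING_SCREENSHOT); // FIG. 7e

    static Instruction lookup(KnuckleMotion motion) {
        return TABLE.get(motion); // null when no stored motion matches
    }
}
```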
  • the knuckle can replace the pad of the finger to operate on the screen and realize the functions that the pad of the finger can achieve, for example, opening applications such as WeChat and the camera, and operating various controls in an application, for example, clicking the pause button in the video screen.
  • in this case, the operation instruction corresponding to the operation event needs to be determined based not only on the knuckle motion but also on the knuckle operation position information. For example, if the knuckle clicks on the WeChat icon, the knuckle operation position information indicates the position of the WeChat icon.
  • the first terminal recognizes the knuckle operation position information based on the knuckle touch information in the operation event, the layout information of the target screen, and the size information of the target screen, so as to learn the position operated by the user's knuckle, and the operation instruction is determined according to the knuckle motion and the knuckle operation position information.
  • alternatively, the second terminal pre-stores the layout information of the target screen, and when the first terminal recognizes that determining the operation instruction of the operation event requires the knuckle operation position information, it sends a request for the knuckle operation position information to the second terminal.
  • the second terminal determines the user's knuckle operation position information based on the pre-stored layout of the displayed target screen and sends it to the first terminal, so that the first terminal determines the operation instruction corresponding to the operation event.
  • the first terminal executes the operation instruction corresponding to the operation event on the target screen.
  • the first terminal executes an operation instruction corresponding to the operation event on the target screen to generate a screen to be displayed.
  • when the operation instruction is a screenshot instruction, the screen to be displayed is a screenshot preview screen.
  • the corresponding screen to be displayed can be as shown in FIG.
  • alternatively, the screen to be displayed may be the screenshot editing screen as shown in FIG. 7d.
  • both the first terminal and the second terminal can display the screen to be displayed.
  • the screen projection method between the first terminal and the second terminal is heterogeneous screen projection, and the second terminal displays the screen to be displayed.
  • alternatively, the first terminal executes the operation instruction corresponding to the operation event on the target screen and generates a screen that does not need to be displayed. For example, when the target screen is the screenshot editing screen shown in FIG. 7d or FIG. 7e and the user clicks the save button on the screenshot editing screen, if the operation instruction is to save the screenshot, the saved screen will not be displayed on the second terminal.
  • the first terminal executes the operation instruction corresponding to the operation event on the target screen, and does not generate a screen.
  • for example, when the target screen is the screenshot editing screen shown in FIG. 7d or FIG. 7e and the user clicks the delete button, and the operation instruction is to delete the screenshot, the first terminal executes the operation instruction and will not generate a screen to be displayed.
  • the first terminal establishes a screen projection connection with the second terminal.
  • the first terminal sends the target image to the second terminal.
  • the second terminal displays the target screen.
  • the second terminal generates a first input event according to the user's sliding operation on the target screen.
  • the first input event includes event type and finger sliding operation information.
  • the finger sliding operation information may include a sliding direction, a sliding distance, a sliding duration, etc.
  • an event type may be pressing, sliding, lifting, and the like.
  • the first input event may further include the device identifier of the second terminal, so that the first terminal can identify the terminal operated by the user.
  • the first input event may be generated by the kernel space after processing the hardware interrupt information generated by the user's sliding operation on the target screen, or may be generated by the kernel layer after processing the hardware interrupt information that the hardware layer reports for the user's sliding operation on the target screen.
  • for the kernel space, kernel layer, and hardware layer, refer to the above description; details are not repeated here.
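  • The sliding information carried by the first input event (direction, distance, duration) could be derived from the press and lift samples roughly as follows; the field names, top-edge band, and thresholds are illustrative assumptions.

```java
// Illustrative derivation of the sliding information in the first input event.
class SlideInfo {
    final float downY;      // where the press started (y, px)
    final float dx, dy;     // sliding displacement
    final long durationMs;  // sliding duration

    SlideInfo(float downX, float downY, long downTimeMs,
              float upX, float upY, long upTimeMs) {
        this.downY = downY;
        this.dx = upX - downX;
        this.dy = upY - downY;
        this.durationMs = upTimeMs - downTimeMs;
    }

    /** True for a downward swipe from the top edge, which the first terminal
     *  maps to "display drop-down menu" (see FIG. 7b). */
    boolean isSwipeDownFromTop(float minDistancePx) {
        return downY < 50f            // assumed top-edge band, px
                && dy > minDistancePx // moved far enough downward
                && Math.abs(dx) < dy; // mostly vertical
    }
}
```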
  • the second terminal sends the first input event to the first terminal.
  • the first terminal identifies the first input event as displaying a drop-down menu.
  • the first terminal recognizes the first input event, and learns the terminal operated by the user and the operation behavior.
  • the gesture operation recognized by the first terminal may be sliding down from the top as shown in FIG. 7b , and the operation instruction is to display a drop-down menu.
  • the first terminal sends the pull-down menu screen and the layout information of the pull-down menu screen to the second terminal.
  • the layout information of the drop-down menu screen indicates the position information and bound identifiers of the buttons in the drop-down menu. See the above for details; they are not repeated here.
  • the second terminal displays a pull-down menu screen based on the target screen.
  • the second terminal generates a second input event according to the user's click operation on the screen capture button or the area where the screen record button is located in the drop-down menu screen.
  • the second input event includes finger click operation information and event type.
  • the finger click operation information includes finger touch position information and finger touch time information.
  • the time information may be the respective touch moments of the clicked multiple pixels on the touch screen of the second terminal.
  • Event types can be press and lift.
  • the second input event may be generated by the kernel space after processing the hardware interrupt information generated by the user's click operation on the drop-down menu screen, or may be generated by the kernel layer after processing the hardware interrupt information that the hardware layer reports for the user's click operation on the drop-down menu screen. See the above description for the kernel space, kernel layer, and hardware layer; they are not repeated here.
  • the second terminal determines an identifier of the screen capture button or the screen recording button according to the second input event and the layout information of the pull-down menu screen.
  • when the second terminal determines that the second input event is generated by a user click (that is, there is only one press and one lift, and the press duration is relatively short), it compares the finger touch position information in the second input event with the position information of the buttons in the drop-down menu screen, so as to determine that the user has clicked the screen capture button or the screen recording button and to determine the identifier of the clicked button.
  • based on the finger touch time information, the user's touch duration can be known, so as to determine whether the operation is a click operation or a long-press operation.
  • the second terminal encapsulates the identifier of the screen capture button or the screen record button and the second input event into an operation event.
  • the second terminal encapsulates the screen capture button or screen record button identifier, the device identifier of the second terminal, and the second input event into an operation event, so that the first terminal can recognize the terminal and operation behavior operated by the user.
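  • The button identification described above amounts to a hit test of the finger touch position against the stored layout information, as in this hedged sketch; the rectangle-based layout and the button identifiers are assumptions for illustration.

```java
// Sketch of the button hit test against the drop-down menu layout information.
import android.graphics.Rect;
import java.util.Map;

class MenuHitTester {
    // Layout information: button identifier -> area the button occupies.
    private final Map<String, Rect> layout;

    MenuHitTester(Map<String, Rect> layout) { this.layout = layout; }

    /** Returns e.g. "btn_screenshot" or "btn_screen_record", or null on a miss. */
    String buttonAt(int x, int y) {
        for (Map.Entry<String, Rect> e : layout.entrySet()) {
            if (e.getValue().contains(x, y)) return e.getKey();
        }
        return null;
    }
}
```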
  • the user's gesture operation may be clicking the screenshot button in the drop-down menu as shown in FIG. 7b, and correspondingly, the operation event is a screenshot event.
  • the user's gesture operation may be clicking the screen recording button in the drop-down menu as shown in FIG. 8b, and correspondingly, the operation event is a screen recording event.
  • the second terminal sends the operation event to the first terminal.
  • the first terminal identifies the operation event, and determines a screen capture instruction or a screen recording instruction of the target screen corresponding to the operation event.
  • the first terminal identifies the operation event and determines that the user has clicked the screen capture button or the screen recording button in the drop-down menu displayed by the second terminal, so that the first terminal can determine that the operation instruction is a screen capture instruction or a screen recording instruction.
  • the screenshot instruction indicates to take a screenshot of the target screen. For example, if the first terminal recognizes that the user has clicked the screenshot button in the drop-down menu shown in FIG. 7b, the operation instruction is to take a screenshot of the video screen in FIG. 7b.
  • the screen recording instruction indicates to start recording the screen from the target screen. For example, if the first terminal recognizes that the user has clicked the screen recording button in the drop-down menu shown in FIG. 8b, the operation instruction is to start recording the video screen in FIG. 8b.
  • an operation instruction corresponding to an operation event is executed on a target screen to generate a screen to be displayed.
  • when the operation instruction is a screen capture instruction, the screen to be displayed is a screen capture preview screen.
  • for example, when the target screen is the video screen shown in FIG. 7b, the screenshot preview screen shown in FIG. 7b can be obtained after the screen capture instruction is executed.
  • when the operation instruction is a screen recording instruction, the screen to be displayed is a screen recording preview screen.
  • for example, when the target screen is the video screen shown in FIG. 8b, the screen recording preview screen shown in FIG. 8b can be obtained after the screen recording instruction is executed.
  • the first terminal establishes a screen projection connection with the second terminal.
  • the first terminal sends the target image to the second terminal.
  • the second terminal displays the target screen.
  • the second terminal generates an operation event according to the user pressing the screen capture button of the second terminal.
  • the operation event includes a key value and key time information. See the above for details; they are not repeated here.
  • the operation event may further include the device identifier of the second terminal, so that the first terminal can identify the terminal operated by the user.
  • the operation event may be generated by the kernel space after processing the hardware interrupt information generated by the user pressing the screen capture key of the second terminal, or may be generated by the kernel layer after processing the hardware interrupt information that the hardware layer reports for the user pressing the screen capture key of the second terminal.
  • for the kernel space, kernel layer, and hardware layer, refer to the above description; details are not repeated here.
  • the user's gesture operation may be a key combination (power key + volume down key) as shown in FIG. 7a.
  • the second terminal determines that the operation event is not a local event.
  • a local event can be understood as an event that can be directly processed by the second terminal. For example, when the user presses the volume up key of the second terminal, the volume up event is processed by the second terminal, and the volume up event is a local event.
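  • The local-event decision described above can be pictured as a small routing filter, sketched below; the key codes follow Android's KeyEvent, while the routing policy itself is an assumption for illustration, not the actual implementation.

```java
// Illustrative routing filter: volume keys alone are local events, while the
// power + volume-down screenshot combination is forwarded to the first terminal.
import android.view.KeyEvent;

class KeyEventRouter {
    /** True if the second terminal should process the key event itself. */
    static boolean isLocalEvent(int keyCode, boolean powerKeyHeld) {
        if (keyCode == KeyEvent.KEYCODE_VOLUME_DOWN && powerKeyHeld) {
            return false; // screenshot combination: send to the first terminal
        }
        // e.g. KEYCODE_VOLUME_UP or KEYCODE_VOLUME_DOWN alone: adjust volume locally
        return true;      // assumed default policy
    }
}
```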
  • the second terminal sends the operation event to the first terminal.
  • the first terminal identifies the operation event, and determines a screen capture instruction of the target screen.
  • the gesture operation recognized by the first terminal may be the key combination (power key + volume down key) shown in FIG. 7a, and correspondingly, the operation instruction is a screenshot instruction.
  • the first terminal executes a screen capture instruction on the target screen to determine a screen capture preview screen.
  • the first terminal may generate a screenshot preview screen as shown in FIG. 7a.
  • alternatively, the user's gesture operation can be a mid-air grasp performed after the hand icon appears, as shown in FIG. 7f.
  • the mid-air grasp operation performed with respect to the screen generates an operation event, and the operation event indicates a mid-air event after the hand icon appears; after that, the first terminal 110 can recognize that the mid-air action corresponding to the operation event is a grasp, and determine that the operation instruction corresponding to the grasp is a screen capture instruction, which takes a screenshot of the video playing screen shown in FIG. 7f.
  • the second terminal 120 realizes the recognition of the mid-air gesture, and the first terminal 110 recognizes the mid-air action, so as to ensure the response speed and performance of the first terminal 110 and the second terminal 120.
  • when the screen projection mode between the first terminal and the second terminal is heterogeneous screen projection, the virtual screen of the second terminal is determined based on the device identifier of the second terminal, and the operation instruction is executed on the target screen of the virtual screen.
  • a cross-device interaction method provided by an embodiment of this application is introduced below. It can be understood that this method is another way of expressing the cross-device interaction solution described above, and the two may be referred to in combination. This method is proposed based on the cross-device interaction solution described above; for part or all of the content of the method, refer to the above description of the cross-device interaction solution.
  • the method is applied to the above-mentioned screen projection system, and a screen projection connection is established between the first terminal and the second terminal in a wired or wireless manner.
  • Step 101: the second terminal displays the first screen sent by the first terminal;
  • Step 102: the second terminal generates an operation event according to the target operation performed by the user with respect to the first screen, where the target operation is a screen capture operation, a screen recording operation, or a knuckle operation;
  • Step 103: the second terminal sends the operation event to the first terminal;
  • Step 104: the first terminal determines the operation instruction corresponding to the operation event, and executes the operation instruction based on the first screen.
  • the first screen may be the target screen or the pull-down menu screen shown in FIGS. 13a-13c. For the target screen and the pull-down menu screen shown in FIGS. 13a-13c, refer to the above; details are not repeated here.
  • the target operation is performed by the user on the second terminal with respect to the first screen. See above for details; they are not repeated here.
  • Step 102 includes:
  • the second terminal generates a first input event according to the knuckle operation performed by the user with respect to the first screen, where the first input event includes knuckle touch information and knuckle press information; when the second terminal recognizes, based on the knuckle algorithm, that the first input event is generated by the user's knuckle, it determines the knuckle identifier of the first input event; and the second terminal encapsulates the first input event and the knuckle identifier into an operation event.
  • the first input event may be generated by the kernel space or the kernel layer by processing the hardware interrupt information generated by the knuckle operation performed by the user with respect to the first screen. See the above for details; they are not repeated here.
  • the programs required by the second terminal to generate operational events are transplanted from the first terminal.
  • the codes for generating the operation event by the first terminal and the second terminal are the same.
  • the knuckle identifier may be determined by a knuckle algorithm.
  • alternatively, the codes for generating the operation event by the first terminal and the second terminal are different.
  • the knuckle identifier may be recognizable by the first terminal, and indicates an identifier that the first input event is generated by the knuckle.
  • in this case, the second terminal may also request the knuckle identifier from the first terminal. See the above for details; they are not repeated here.
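  • At this level of detail the knuckle algorithm itself is not disclosed; a plausible classifier in its spirit, combining the knuckle touch information with an acceleration spike from the attitude sensor, might look like the following sketch, where all thresholds are assumptions for illustration.

```java
// Hedged sketch of a knuckle classifier: knuckle taps typically show a small
// contact area together with a sharp acceleration spike. Thresholds are assumed.
class KnuckleClassifier {
    private static final float MAX_CONTACT_AREA = 0.12f; // assumed, normalized
    private static final float MIN_IMPACT_G     = 1.8f;  // assumed, in g

    /**
     * @param contactArea touch contact area from the knuckle touch information
     * @param impactG     peak acceleration from the attitude sensor around the tap
     */
    static boolean isKnuckle(float contactArea, float impactG) {
        return contactArea < MAX_CONTACT_AREA && impactG > MIN_IMPACT_G;
    }
}
```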
  • step 104 includes:
  • the first terminal identifies the operation event to determine the knuckle movement made by the user on the first screen; the first terminal determines the operation instruction corresponding to the operation event based on the knuckle movement.
  • the second terminal onto which the screen is projected recognizes the knuckle, while the first terminal that projects the screen only recognizes the knuckle motion, so as to ensure the response speed and performance of the first terminal.
  • the first terminal may also determine the operation instruction based on the stored first screen and knuckle motions, that is, the association between knuckle motions and screens.
  • for example, for the main page and a letter drawn with the knuckle, the operation instruction may be to open a specified application from the main page.
  • alternatively, the first terminal determines the operation instruction based on the knuckle motion and the knuckle touch position information; in this case, the knuckle is equivalent to the pad of the finger. See above for details; they are not repeated here.
  • the second terminal displays a target screen before displaying the first screen; the first screen is a drop-down menu screen; the target operation is a click operation on the area where the screen capture button or the screen recording button is located in the drop-down menu screen; the second terminal stores the layout information of the drop-down menu screen; and the operation instruction is a screenshot instruction or a screen recording instruction for the target screen. In this case, step 102 includes:
  • the second terminal generates a second input event according to the click operation performed by the user on the area where the screen capture button or the screen recording button is located in the drop-down menu screen, where the second input event includes finger click operation information; the second terminal determines the identifier of the screen capture button or the screen recording button according to the layout information of the drop-down menu screen and the second input event; and the second terminal encapsulates the identifier of the screen capture button or the screen recording button and the second input event into an operation event.
  • the second input event may be generated by the kernel space or the kernel layer processing the hardware interrupt information generated by the user's click operation on the screen capture button or the area where the screen record button is located in the drop-down menu screen.
  • before step 102, the second terminal generates an input event according to the user's sliding operation on the target screen and sends the input event to the first terminal; when the first terminal recognizes that the input event indicates displaying the drop-down menu, the drop-down menu screen and the layout information of the drop-down menu screen are sent to the second terminal for storage.
  • the screen capture button or the screen recording button in the drop-down menu screen is identified by the second terminal onto which the screen is projected, so that the first terminal that projects the screen can directly determine the screen capture instruction or the screen recording instruction, ensuring the response speed and performance of the first terminal.
  • in addition, since the second terminal onto which the screen is projected identifies the buttons in the drop-down menu, it is not necessary to consider the difference in screen size between the first terminal and the second terminal, ensuring the accuracy of the operation event.
  • Step 102 includes:
  • the second terminal generates an operation event according to the user pressing the screen capture key of the second terminal, where the operation event includes key time information and a key value; when the second terminal determines that the operation event is not a local event, the operation event is sent to the first terminal.
  • the operation event may be generated by the kernel space or the kernel layer by processing the hardware interrupt information generated by the user pressing the screen capture key of the second terminal.
  • step 104 includes:
  • when the first terminal recognizes that the operation event is a screen capture event, it determines that the operation instruction corresponding to the operation event is a screen capture instruction.
  • the screen projection mode between the first terminal and the second terminal is heterogeneous screen projection; the first terminal is provided with a virtual screen of the second terminal, where the first screen is a screen adapted to the screen size of the virtual screen, and the screen size of the virtual screen is adapted to the screen size of the second terminal; and the operation event carries the device identifier of the second terminal, so that the first terminal can recognize the device identifier of the second terminal to determine the virtual screen.
  • Step 104 includes:
  • when the screen projection mode between the first terminal and the second terminal is mirror projection, the operation instruction is executed on the screen displayed by the first terminal, where the first screen is a mirror image of the screen displayed by the first terminal.
  • the method also includes:
  • the second screen obtained by executing the operation instruction based on the first screen is sent to the second terminal, so that the second terminal displays the second screen.
  • the processor in the embodiments of this application may be a central processing unit (CPU), or may be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or another programmable logic device, a transistor logic device, a hardware component, or any combination thereof.
  • a general-purpose processor can be a microprocessor, or any conventional processor.
  • the method steps in the embodiments of the present application may be implemented by means of hardware, or may be implemented by means of a processor executing software instructions.
  • the software instructions can be composed of corresponding software modules, and the software modules can be stored in a random access memory (RAM), a flash memory, a read-only memory (ROM), a programmable read-only memory (programmable ROM, PROM), an erasable programmable read-only memory (erasable PROM, EPROM), an electrically erasable programmable read-only memory (electrically EPROM, EEPROM), a register, a hard disk, a removable hard disk, a CD-ROM, or any other form of storage medium known in the art.
  • an exemplary storage medium is coupled to the processor, such that the processor can read information from, and write information to, the storage medium.
  • the storage medium may also be a component of the processor.
  • the processor and storage medium can be located in the ASIC.
  • the foregoing embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof.
  • when implemented using software, it may be implemented in whole or in part in the form of a computer program product.
  • the computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on the computer, the processes or functions according to the embodiments of the present application will be generated in whole or in part.
  • the computer can be a general purpose computer, a special purpose computer, a computer network, or other programmable devices.
  • the computer instructions may be stored in or transmitted via a computer-readable storage medium.
  • the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center in a wired (such as coaxial cable, optical fiber, or digital subscriber line (DSL)) or wireless (such as infrared, radio, or microwave) manner.
  • the computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or a data center integrated with one or more available media.
  • the available medium may be a magnetic medium (such as a floppy disk, a hard disk, or a magnetic tape), an optical medium (such as a DVD), or a semiconductor medium (such as a solid state disk (solid state disk, SSD)), etc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

A cross-device interaction method, apparatus, screen projection system, and terminal. In an embodiment, the method is applied to a screen projection system including a first terminal and a second terminal with a screen projection connection between them. The method includes: the second terminal displays a first screen sent by the first terminal; the second terminal generates an operation event according to a target operation performed by a user on the second terminal with respect to the first screen, where the target operation is a screen capture operation, a screen recording operation, or a knuckle operation; and the first terminal determines an operation instruction corresponding to the operation event and executes the operation instruction based on the first screen. Thus, by means of the technical solution provided in the embodiments of this application, the terminal that projects the screen can respond to the user's screen capture operation, screen recording operation, or knuckle operation on the terminal onto which the screen is projected, implementing cross-device user interaction and ensuring user experience.

Description

Cross-device interaction method, apparatus, screen projection system, and terminal
This application claims priority to Chinese Patent Application No. 202111034541.2, filed with the China National Intellectual Property Administration on September 3, 2021 and entitled "Cross-device interaction method, apparatus, screen projection system, and terminal", which is incorporated herein by reference in its entirety.
Technical Field
This application relates to the field of communication technologies, and in particular, to a cross-device interaction method, apparatus, screen projection system, and terminal.
Background
With the development of screen projection technology, the use of screen projection has brought great convenience to users. According to the screen projection technology, content displayed by a projection device having a screen projection function can be projected onto another display device having a display function for display, and the content displayed by the display device may include various types of media information and various operation screens displayed on the projection device. For example, take a mobile phone as the projection device and a television as the display device, where the interface displayed on the screen of the mobile phone is projected onto the television for display. When the user watches a video on the mobile phone or performs live streaming using live streaming software, the display interface of the mobile phone can be projected onto the television for display, and the video or the live content can be watched on the television. After the display interface of the mobile phone is projected onto the television, operations such as screen capture and screen recording are completed on the mobile phone side. However, when the mobile phone and the television are far apart, for example, when the mobile phone is in room A and the user watches the television in room B, the user has to go to room A and operate the mobile phone to interact, resulting in a poor user experience.
Summary
The embodiments of this application provide a cross-device interaction method, apparatus, screen projection system, and terminal, so that the terminal that projects the screen can respond to the user's screen capture operation, screen recording operation, or knuckle operation on the terminal onto which the screen is projected, implementing cross-device user interaction and ensuring user experience.
According to a first aspect, an embodiment of this application provides a cross-device interaction method, applied to a screen projection system, where the screen projection system includes a first terminal and a second terminal, and there is a screen projection connection between the first terminal and the second terminal. The method includes: the second terminal displays a first screen sent by the first terminal; the second terminal generates an operation event according to a target operation performed by a user with respect to the first screen, where the target operation is a screen capture operation, a screen recording operation, or a knuckle operation; and the first terminal determines an operation instruction corresponding to the operation event and executes the operation instruction based on the first screen.
In this solution, the first terminal that projects the screen can respond to the user's screen capture operation, screen recording operation, or knuckle operation on the second terminal onto which the screen is projected, implementing cross-device user interaction and ensuring user experience.
In a possible implementation, the target operation is a knuckle operation; that the second terminal generates an operation event according to the target operation performed by the user with respect to the first screen includes: the second terminal generates a first input event according to the knuckle operation performed by the user with respect to the first screen, where the first input event includes knuckle touch information and knuckle press information; when recognizing, based on a knuckle algorithm, that the first input event is generated by the user's knuckle, the second terminal determines a knuckle identifier of the first input event; and the second terminal encapsulates the first input event and the knuckle identifier into the operation event. That the first terminal determines an operation instruction corresponding to the operation event includes: identifying the operation event to determine a knuckle motion made by the user with respect to the first screen; and determining, based on the knuckle motion, the operation instruction corresponding to the operation event.
In this implementation, the second terminal onto which the screen is projected recognizes the knuckle, while the first terminal that projects the screen only recognizes the knuckle motion, ensuring the response speed and performance of the first terminal.
In addition, the first terminal does not need to perform coordinate conversion when recognizing the knuckle motion, that is, to convert data in the screen coordinate system of the second terminal into the screen coordinate system of the first terminal, further ensuring the response speed and performance of the first terminal.
In a possible implementation, the second terminal displays a target screen before displaying the first screen; the first screen is a drop-down menu screen; the target operation is a click operation on the area where the screen capture button or the screen recording button is located in the drop-down menu screen; the second terminal stores layout information of the drop-down menu screen; and the operation instruction is a screen capture instruction or a screen recording instruction for the target screen. That the second terminal generates an operation event according to the target operation performed by the user with respect to the first screen includes: the second terminal generates a second input event according to the click operation performed by the user on the area where the screen capture button or the screen recording button is located in the drop-down menu screen, where the second input event includes finger click operation information; the second terminal determines an identifier of the screen capture button or the screen recording button according to the layout information of the drop-down menu screen and the second input event; and the second terminal encapsulates the identifier of the screen capture button or the screen recording button and the second input event into the operation event.
In this implementation, the second terminal onto which the screen is projected identifies the screen capture button or the screen recording button in the drop-down menu screen, so that the first terminal that projects the screen can directly determine the screen capture instruction or the screen recording instruction, ensuring the response speed and performance of the first terminal.
In addition, since the second terminal identifies the buttons in the drop-down menu, there is no need to consider the difference in screen size between the first terminal and the second terminal, ensuring the accuracy of the operation event.
In a possible implementation, the target operation is pressing a screen capture key of the second terminal; that the second terminal generates an operation event according to the target operation performed by the user with respect to the first screen includes: the second terminal generates the operation event according to the user pressing the screen capture key of the second terminal, where the operation event includes key time information and a key value; and when determining that the operation event is not a local event, the second terminal sends the operation event to the first terminal. That the first terminal determines an operation instruction corresponding to the operation event includes: when identifying that the operation event is a screen capture event, the first terminal determines that the operation instruction corresponding to the operation event is a screen capture instruction.
In this implementation, the second terminal determines the key value and the key time information of the pressed key, and, upon determining that the event is not a local event, enables the first terminal to identify the screen capture key and determine the corresponding screen capture instruction, ensuring the user experience of cross-device interaction.
In a possible implementation, the screen projection mode between the first terminal and the second terminal is heterogeneous screen projection; the first terminal is provided with a virtual screen of the second terminal, where the first screen is a screen adapted to the screen size of the virtual screen, and the screen size of the virtual screen is adapted to the screen size of the second terminal; and the operation event carries a device identifier of the second terminal, so that the first terminal identifies the device identifier of the second terminal to determine the virtual screen of the second terminal.
In this implementation, the screen displayed by the first terminal that projects the screen and the screen displayed by the second terminal onto which the screen is projected are independent of each other, so that the two terminals can be used separately, meeting different needs of different users.
In a possible implementation, the screen projection mode between the first terminal and the second terminal is mirror projection; and executing the operation instruction based on the first screen includes: executing the operation instruction on the screen displayed by the first terminal, where the first screen is a mirror image of the screen displayed by the first terminal.
In this implementation, the screen displayed by the first terminal is consistent with the screen displayed by the second terminal, so that the first terminal can learn about the operations of the user on the second terminal, implementing reverse control.
In a possible implementation, the method further includes: sending a second screen, obtained by executing the operation instruction based on the first screen, to the second terminal, so that the second terminal displays the second screen.
According to a second aspect, an embodiment of this application provides a cross-device interaction method, applied to a first terminal, including: sending a first screen to a second terminal, so that the second terminal displays the first screen, where there is a screen projection connection between the first terminal and the second terminal; receiving an operation event sent by the second terminal, where the operation event is generated by the second terminal according to a target operation performed by a user with respect to the first screen, and the target operation is a screen capture operation, a screen recording operation, or a knuckle operation; determining an operation instruction corresponding to the operation event; and executing the operation instruction based on the first screen.
In this solution, the first terminal that projects the screen can respond to the user's screen capture operation, screen recording operation, or knuckle operation on the second terminal onto which the screen is projected, implementing cross-device user interaction and ensuring user experience.
In a possible implementation, the target operation is a knuckle operation; the operation event is generated by the second terminal by encapsulating a first input event and a knuckle identifier of the first input event, where the first input event is generated according to the knuckle operation performed by the user with respect to the first screen, and the knuckle identifier is determined when it is recognized, based on a knuckle algorithm, that the first input event is generated by the user's knuckle; and determining the operation instruction corresponding to the operation event includes: identifying the operation event to determine a knuckle motion made by the user with respect to the first screen; and determining, based on the knuckle motion, the operation instruction corresponding to the operation event.
For the beneficial effects of this implementation, refer to the above; details are not repeated here.
In a possible implementation, the second terminal displays a target screen before displaying the first screen; the first screen is a drop-down menu screen; the target operation is a click operation on the area where the screen capture button or the screen recording button is located in the drop-down menu screen; the operation instruction is a screen capture instruction or a screen recording instruction for the target screen; and the operation event includes an identifier of the screen capture button or the screen recording button clicked by the user in the drop-down menu screen. The method further includes: receiving a second input event sent by the second terminal, where the second input event is generated by the second terminal according to a sliding operation performed by the user with respect to the target screen and includes finger sliding operation information; and when identifying that the second input event indicates displaying a drop-down menu, sending an internally stored drop-down menu screen and layout information of the drop-down menu screen to the second terminal, so that the second terminal displays the drop-down menu screen on the basis of the displayed target screen and determines the identifier of the screen capture button or the screen recording button according to the layout information of the drop-down menu screen and a third input event generated by the user's click operation on the area where the screen capture button or the screen recording button is located in the drop-down menu screen, where the third input event includes finger click operation information.
For the beneficial effects of this implementation, refer to the above; details are not repeated here.
In a possible implementation, the target operation is pressing a screen capture key of the second terminal; the operation event includes key time information and a key value, and the operation event is not a local event of the second terminal; and determining the operation instruction corresponding to the operation event includes: when identifying that the operation event is a screen capture event, determining that the operation instruction corresponding to the operation event is a screen capture instruction.
For the beneficial effects of this implementation, refer to the above; details are not repeated here.
In a possible implementation, the screen projection mode between the first terminal and the second terminal is heterogeneous screen projection; the first terminal is provided with a virtual screen of the second terminal, where the first screen is a screen adapted to the screen size of the virtual screen, and the screen size of the virtual screen is adapted to the screen size of the second terminal; and the operation event carries a device identifier of the second terminal, so that the first terminal identifies the device identifier of the second terminal to determine the virtual screen of the second terminal.
For the beneficial effects of this implementation, refer to the above; details are not repeated here.
In a possible implementation, the screen projection mode between the first terminal and the second terminal is mirror projection; and executing the operation instruction based on the first screen includes: executing the operation instruction on the screen displayed by the first terminal, where the first screen is a mirror image of the screen displayed by the first terminal.
For the beneficial effects of this implementation, refer to the above; details are not repeated here.
In a possible implementation, the method further includes: sending a second screen, obtained by executing the operation instruction based on the first screen, to the second terminal, so that the second terminal displays the second screen.
According to a third aspect, an embodiment of this application provides a cross-device interaction method, applied to a second terminal, including: displaying a first screen sent by a first terminal, where there is a screen projection connection between the first terminal and the second terminal; generating an operation event according to a target operation performed by a user on the second terminal with respect to the first screen, where the target operation is a screen capture operation, a screen recording operation, or a knuckle operation; and sending the operation event to the first terminal, so that the first terminal determines an operation instruction corresponding to the operation event and executes the operation instruction based on the first screen.
For the beneficial effects of this embodiment, refer to the above; details are not repeated here.
In a possible implementation, the target operation is a knuckle operation; and generating the operation event according to the target operation performed by the user on the second terminal with respect to the first screen includes: generating a first input event according to the knuckle operation performed by the user on the second terminal with respect to the first screen, where the first input event includes knuckle touch information and knuckle press information; when recognizing, based on a knuckle algorithm, that the first input event is generated by the user's knuckle, determining a knuckle identifier of the first input event; and encapsulating the first input event and the knuckle identifier into the operation event.
For the beneficial effects of this implementation, refer to the above; details are not repeated here.
In a possible implementation, the second terminal displays a target screen before displaying the first screen; the first screen is a drop-down menu screen; the target operation is a click operation on the area where the screen capture button or the screen recording button is located in the drop-down menu screen; and the operation instruction is a screen capture instruction or a screen recording instruction for the target screen. Before generating the operation event according to the target operation performed by the user with respect to the first screen, the method includes: generating a second input event according to a sliding operation performed by the user on the second terminal with respect to the target screen, where the second input event includes finger sliding operation information; sending the second input event to the first terminal, so that the first terminal, when identifying that the second input event indicates displaying a drop-down menu, sends an internally stored drop-down menu screen and layout information of the drop-down menu screen to the second terminal; and displaying the drop-down menu screen on the basis of the target screen and storing the layout information of the drop-down menu screen. That the second terminal generates the operation event according to the target operation performed by the user with respect to the first screen includes: generating a third input event according to the click operation performed by the user on the second terminal on the area where the screen capture button or the screen recording button is located in the drop-down menu screen, where the third input event includes finger click operation information; determining an identifier of the screen capture button or the screen recording button according to the layout information of the drop-down menu screen and the third input event; and encapsulating the identifier of the screen capture button or the screen recording button and the third input event into the operation event.
For the beneficial effects of this implementation, refer to the above; details are not repeated here.
In a possible implementation, the target operation is pressing a screen capture key of the second terminal; the operation event includes key time information and a key value; and that the second terminal generates the operation event according to the target operation performed by the user with respect to the first screen further includes: when determining that the operation event is not a local event, sending the operation event to the first terminal.
For the beneficial effects of this implementation, refer to the above; details are not repeated here.
In a possible implementation, the screen projection mode between the first terminal and the second terminal is heterogeneous screen projection; the first terminal is provided with a virtual screen of the second terminal, where the first screen is a screen adapted to the screen size of the virtual screen, and the screen size of the virtual screen is adapted to the screen size of the second terminal; and the operation event carries a device identifier of the second terminal, so that the first terminal identifies the device identifier of the second terminal to determine the virtual screen of the second terminal.
For the beneficial effects of this implementation, refer to the above; details are not repeated here.
In a possible implementation, the screen projection mode between the first terminal and the second terminal is mirror projection.
For the beneficial effects of this implementation, refer to the above; details are not repeated here.
In a possible implementation, the method further includes: receiving a second screen determined by the first terminal by executing the operation instruction based on the first screen; and displaying the second screen.
According to a fourth aspect, an embodiment of this application provides a screen projection system, including a first terminal and a second terminal, where the first terminal is configured to perform the method according to the second aspect, and the second terminal is configured to perform the method according to the third aspect.
According to a fifth aspect, an embodiment of this application provides a terminal, including: at least one memory configured to store a program; and at least one processor configured to execute the program stored in the memory, where when the program stored in the memory is executed, the processor is configured to perform the method provided in the second aspect or the method provided in the third aspect.
According to a sixth aspect, an embodiment of this application provides a cross-device interaction apparatus, where the apparatus runs computer program instructions to perform the method provided in the second aspect or the method provided in the third aspect. For example, the apparatus may be a chip or a processor.
In an example, the apparatus may include a processor, which may be coupled to a memory, read instructions in the memory, and perform, according to the instructions, the method provided in the second aspect or the method provided in the third aspect. The memory may be integrated in the chip or the processor, or may be independent of the chip or the processor.
According to a seventh aspect, an embodiment of this application provides a computer storage medium, where the computer storage medium stores instructions, and when the instructions are run on a computer, the computer is caused to perform the method provided in the second aspect or the method provided in the third aspect.
According to an eighth aspect, an embodiment of this application provides a computer program product including instructions, where when the instructions are run on a computer, the computer is caused to perform the method provided in the second aspect or the method provided in the third aspect.
Brief Description of Drawings
FIG. 1 is a system architecture diagram of a screen projection system according to an embodiment of this application;
FIG. 2a is a first schematic diagram of interface display of the screen projection system according to an embodiment of this application;
FIG. 2b is a second schematic diagram of interface display of the screen projection system according to an embodiment of this application;
FIG. 2c is a third schematic diagram of interface display of the screen projection system according to an embodiment of this application;
FIG. 2d is a fourth schematic diagram of interface display of the screen projection system according to an embodiment of this application;
FIG. 2e is a fifth schematic diagram of interface display of the screen projection system according to an embodiment of this application;
FIG. 2f is a sixth schematic diagram of interface display of the screen projection system according to an embodiment of this application;
FIG. 3 is a schematic diagram of the response process of a knuckle operation according to an embodiment of this application;
FIG. 4 is a schematic diagram of the screen recording principle according to an embodiment of this application;
FIG. 5 is a first schematic diagram of the screen capture/screen recording principle of the screen projection system provided in FIG. 2b;
FIG. 6 is a second schematic diagram of the screen capture/screen recording principle of the screen projection system provided in FIG. 2b;
FIG. 7a is a first schematic diagram of a screen capture scenario of the screen projection system provided in FIG. 2b;
FIG. 7b is a second schematic diagram of a screen capture scenario of the screen projection system provided in FIG. 2b;
FIG. 7c is a third schematic diagram of a screen capture scenario of the screen projection system provided in FIG. 2b;
FIG. 7d is a fourth schematic diagram of a screen capture scenario of the screen projection system provided in FIG. 2b;
FIG. 7e is a fifth schematic diagram of a screen capture scenario of the screen projection system provided in FIG. 2b;
FIG. 7f is a sixth schematic diagram of a screen capture scenario of the screen projection system provided in FIG. 2b;
FIG. 8a is a first schematic diagram of a screen recording scenario of the screen projection system provided in FIG. 2b;
FIG. 8b is a second schematic diagram of a screen recording scenario of the screen projection system provided in FIG. 2b;
FIG. 9 is a schematic structural diagram of a first terminal according to an embodiment of this application;
FIG. 10a is a schematic diagram of the software structure of a first terminal according to an embodiment of this application;
FIG. 10b is a schematic diagram of the software structure of a second terminal according to an embodiment of this application;
FIG. 11 is a schematic diagram of the software implementation of screen capture according to an embodiment of this application;
FIG. 12 is a schematic flowchart of a cross-device interaction solution according to an embodiment of this application;
FIG. 13a is a schematic flowchart of a cross-device interaction solution for a knuckle operation according to an embodiment of this application;
FIG. 13b is a schematic flowchart of a cross-device interaction solution for a click operation on the screen capture button or the screen recording button in the drop-down menu according to an embodiment of this application;
FIG. 13c is a schematic flowchart of a cross-device interaction solution for pressing the screen capture key of the second terminal according to an embodiment of this application;
FIG. 14 is a schematic flowchart of a cross-device interaction method according to an embodiment of this application.
Detailed Description
To make the objectives, technical solutions, and advantages of the embodiments of this application clearer, the following describes the technical solutions in the embodiments of this application with reference to the accompanying drawings.
In the description of the embodiments of this application, words such as "exemplary", "for example", or "for instance" are used to indicate an example, illustration, or description. Any embodiment or design solution described as "exemplary", "for example", or "for instance" in the embodiments of this application should not be construed as being more preferred or advantageous than other embodiments or design solutions. Rather, the use of words such as "exemplary", "for example", or "for instance" is intended to present related concepts in a specific manner.
In the description of the embodiments of this application, the term "and/or" merely describes an association relationship between associated objects and indicates that three relationships may exist. For example, A and/or B may indicate the following three cases: only A exists, only B exists, and both A and B exist. In addition, unless otherwise specified, the term "multiple" means two or more. For example, multiple systems mean two or more systems, and multiple terminals mean two or more terminals.
In addition, the terms "first" and "second" are merely used for description purposes and shall not be construed as indicating or implying relative importance or implicitly indicating the indicated technical features. Therefore, a feature defined with "first" or "second" may explicitly or implicitly include one or more of the features. The terms "include", "comprise", "have", and their variants all mean "include but are not limited to", unless otherwise specifically emphasized.
The following explains some terms used in the embodiments of this application. It should be noted that these explanations are intended to facilitate understanding by those skilled in the art and do not limit the protection scope claimed by this application.
(1)便携屏
面向办公用途的屏类设备,业务运行能力较弱,依赖手机、平板、电脑等投屏连接使用。
(2)异源投屏
将具有投屏功能的第一终端存储的画面投屏到其他具有显示功能的第二终端,这里,第一终端投屏到第二终端的画面与第一终端显示的画面是独立的。示例地,以手机作为第一终端,便捷屏作为第二终端为例进行说明,手机和便捷屏通过投屏连接后,手机和便捷屏各自运行不同的应用,互不干扰,比如,图2b所示的手机110运行的应用是聊天应用,比如微信,便捷屏120运行的应用是视频播放应用,比如,华为视屏、爱奇艺、腾讯。
(3)镜像投屏
将具有投屏功能的第一终端显示的画面投屏到其他具有显示功能的第二终端。示例地,便捷屏显示的画面是手机显示画面的镜像,比如,图2a所示的手机和便捷屏运行的应用均为视频播放应用。
(4)分布式软总线
分布式软总线为各种终端之间的互联互通提供了统一的分布式通信能力，为设备之间的无感发现和零等待传输创造了条件。具体来说，分布式软总线通过极简通信协议技术以Wi-Fi或者其他无线的方式让终端可以调用到各种功能。其中，极简通信协议技术，包括发现&连接、组网(多跳自组网、多协议混合组网)、传输(极简传输协议：多元化协议与算法、智能感知与决策)，在1+8+N设备间搭建一条"无形"的总线，具备自发现、自组网、高带宽低时延的特点。其中，1指的是手机；8代表车机、音箱、耳机、手表/手环、平板、大屏、PC、AR/VR，N泛指其他物联(IOT)设备。通过分布式软总线，全场景设备间完成设备虚拟化、跨设备服务调用、多屏协同、文件分享等分布式业务。这里，设备虚拟化能够将通过软总线连接在一起的各种终端的功能虚拟成可被共享的文件，基于该文件组装成各种功能。本申请实施例中，第二终端和第一终端之间可以通过分布式软总线连接。
(5)触摸屏
触摸屏由触摸传感器与显示屏组成，也称"触控屏"。另外，显示屏也可以称为屏幕。
为了便于理解本申请,此处先对现有技术进行简要说明:
在相关技术中，根据投屏技术，可以将具有投屏功能的终端(第一终端)的屏幕所显示的内容投屏到其他具有显示功能的终端(第二终端)进行显示。例如，可以将第一终端的会议内容、多媒体文件、游戏、电影或者视频等内容投放到第二终端的屏幕上进行呈现，可以为用户带来更好的使用体验和更多便利。在投屏时，将屏幕较小的具有投屏功能的第一终端所显示的内容，投屏到屏幕相对较大的第二终端中进行显示。通过更大屏幕的终端进行显示，为用户进行互动、娱乐或者观看等提供了更多的方便。例如，将手机显示的画面投屏到便捷屏或电视上进行显示等，用户可以通过屏幕更大的终端查看投屏设备所显示的画面，从而可以提升用户的使用体验。
例如以将手机作为投屏的第一终端，将电视作为被投屏的第二终端，将手机的屏幕显示的画面投屏到电视进行显示为例进行说明。在用户通过手机观看视频或者使用直播软件进行直播等场景时，可以将手机显示的画面投屏到电视进行显示，通过电视进行视频的观看或者直播内容的显示。在将手机的显示界面投屏到电视上之后，通过电视侧并不能对手机显示的画面进行交互操作，需要通过手机端完成截屏、录屏等操作。所以，这样的投屏效果对于用户而言并不是很好，导致用户的使用体验不佳。
为了解决上述技术问题,本申请实施例提供了如下技术方案。
本申请实施例提供的一种跨设备交互的方法，可以应用在图1所示的投屏系统100。该投屏系统100包括第一终端110和第二终端120，第二终端120和第一终端110之间可以通过网络进行交互，以使第一终端110和第二终端120能够进行数据交互。示例地，图2a和图2b示出一个第二终端120和一个第一终端110连接；示例地，一个第一终端110可以和多个第二终端120连接，图2c和图2d示出了一个第一终端110和2个第二终端120连接；示例地，多个第一终端110可以和一个第二终端120连接，图2e示出了两个第一终端110和一个第二终端120连接。其中，网络可以包括电缆网络、有线网络、光纤网络、电信网络、内部网络、互联网、局域网络(LAN)、广域网络(WAN)、无线局域网络(WLAN)、城域网(MAN)、公共交换电话网络(PSTN)、蓝牙网络、紫蜂网络(ZigBee)、近场通信(NFC)、设备内总线、设备内线路、线缆连接等或其任意组合。值得注意的是，第二终端120和第一终端110可以连接在同一局域网中，连接相同的无线局域网，或者通过互联网进行跨局域网的网络连接。
作为一个示例,当使用同一个账号登录该第二终端120和第一终端110时,该第二终端120和第一终端110之间可通过广域网络(WAN)互相通信。
作为另一个示例，可将第二终端120和第一终端110接入同一个路由器上。此时，第二终端120和第一终端110可形成一个局域网络(LAN)，该局域网络(LAN)内的第二终端120和第一终端110之间可以由路由器实现互相通信。
作为又一个示例，第二终端120和第一终端110均加入名称为"XXXXXX"的Wi-Fi网络。该Wi-Fi网络内的第二终端120和第一终端110形成了一个对等网络。比如，可以通过Miracast协议建立第一终端110和第二终端120之间的连接，并通过Wi-Fi网络传输数据。
作为再一个示例,第二终端120和第一终端110之间通过转接设备(例如,USB数据线或 Dock设备)互联,从而实现通信。比如,可以通过高清多媒体接口(high definition multimedia interface,HDMI)建立第二终端120和第一终端110之间的连接,并通过HDMI传输线传输数据。
应当理解,通过上述网络可实现第二终端120和第一终端110之间的分布式软总线连接。
在本申请实施例中,根据投屏技术,第一终端110和第二终端120之间建立投屏连接,换言之,可以将第一终端110所存储的内容投屏到第二终端120中进行显示。在实际应用中,在投屏时,将屏幕较小的第一终端110所存储的内容,投屏到屏幕相对较大的第二终端120中进行显示。示例地,第一终端可以为手机或平板,第二终端可以为便捷屏等业务能力较弱的显示设备。另外,在投屏时,第一终端110发送给第二终端120的画面,与第二终端120的屏幕尺寸是适配的,使得第二终端120仅需实现显示功能即可,降低数据处理量,确保显示速度。其中,第二终端120的屏幕尺寸指示了第二终端120的屏幕的长宽。
在一个例子中，第一终端110和第二终端120之间的投屏方式为镜像投屏，即第一终端110可将其显示的内容投射至第二终端120中进行显示。示例地，请参考图2a，用户通过第一终端110观看视频，可以将第一终端110显示的视频画面投屏到一个第二终端120进行显示，通过一个第二终端120进行视频的观看。示例地，请参考图2c，用户通过第一终端110观看视频，可以将第一终端110显示的视频画面投屏到两个第二终端120上进行显示，通过两个第二终端120进行视频的观看，适合多人观看视频的场景。
在另一个例子中，第一终端110和第二终端120之间的投屏方式为异源投屏，即第一终端110和第二终端120各自显示的画面是独立的，也可以理解为第一终端110和第二终端120各自运行的应用互不干扰。示例地，如图2b所示，第一终端110显示的是微信聊天画面，第二终端120显示的是视频画面。换言之，第二终端120可以显示第一终端110未显示的画面。示例地，请参考图2b，用户通过第一终端110进行微信聊天，同时可以将第一终端110的未显示的视频画面投屏到第二终端120进行显示，通过第二终端120进行视频的观看。示例地，请参考图2d，用户通过第一终端110进行微信聊天，可以将第一终端110的未显示的视频画面投屏到一个第二终端120，同时将第一终端110的未显示的音乐播放画面投屏到另一个第二终端120上进行显示，通过两个第二终端120满足不同用户的需求。示例地，请参考图2e，用户1通过一个第一终端110进行微信聊天，可以将该第一终端110的未显示的视频画面投屏到第二终端120，用户2通过另一个第一终端110进行拍照，可以将该第一终端110的未显示的音乐播放画面投屏到第二终端120；作为一种可能的情况，图2e示出了第二终端120只显示一个第一终端110的未显示的视频画面，之后，通过画面切换的方式，显示另一个第一终端110的未显示的音乐播放画面(图中未示意)。作为另一种可能的情况，图2f示出了第二终端120可以同时显示一个第一终端110的未显示的视频画面和另一个第一终端110的未显示的音乐播放画面。
需要说明的是，本申请实施例并不意图对第一终端110和第二终端120之间的投屏方式进行限定，具体需要结合实际场景确定。下文以一个第一终端110和一个第二终端120之间进行投屏连接为例进行说明。
第一终端110具有处理用户的手势操作的能力。下面以第一终端110和第二终端120均为Android系统为例说明事件处理的过程。示例地，第一终端110设置有硬件层、内核层(Kernel)、系统层、应用架构层和应用层。对于各层的详细介绍参见下文系统架构方面的描述和图10a，此处仅仅是为了便于介绍事件处理的过程。
在一个例子中，硬件层(Hardware)用于根据用户的手势操作产生对应的硬件中断信号；其中，手势操作为用户的手对第一终端110的各种操作。用户可做出的手势操作具体需要结合第一终端110的硬件层所具有的硬件确定。比如，硬件层(Hardware)可以包括但不限于显示屏，压力传感器，距离传感器，加速度传感器，键盘，图3示出的触摸传感器(Touch Sensor)和智能传感集线器(Sensor Hub)等。示例地，第一终端110具有按键和触摸屏，手势操作可以是对按键的操作，比如按开关机键、音量加键、音量减键，以及对触摸屏的操作，比如触摸、点击、滑动、敲击等，本申请实施例对此不做具体限定。
在一个例子中，内核层(Kernel)用于接收和上报硬件层产生的硬件中断信号，并根据硬件中断信号生成输入事件，可以包括驱动层，驱动层将硬件层的输入转化为统一事件形式，换言之，将硬件层产生的硬件中断信号转化成输入事件。示例地，当手势操作为对触摸屏的触摸操作时，输入事件至少可以包括触摸坐标，触摸操作的时间戳，另外，还可以包括事件类型，比如，滑动、点击等；当手势操作为对按键的按压时，输入事件至少可以包括按压的按键的键值，还可以包括事件类型，比如，短按，长按等。驱动层可以包括多个驱动，比如显示屏驱动、音频驱动、摄像头驱动、传感器驱动等。示例地，图3示出了触摸传感器驱动(Touch Driver)和输入集线器驱动(Input Hub Driver)。另外，还可以包括输入核心层和事件处理层。其中，输入核心层负责协调驱动层和事件处理层，使得驱动层和事件处理层之间能够完成数据传递，事件处理层能够将从输入核心层得到的输入事件提供给用户空间。当然，内核层(Kernel)也可以理解为内核空间。
用户空间用于对内核层(Kernel)提供的输入事件进行读取、加工和分发,可以包括系统层和应用程序框架层。本申请实施例主要涉及对触摸屏和按键的操作,内核层(Kernel)对触摸屏的相关事件的支持包括绝对坐标、触摸按下和触摸抬起事件,而现在的用户空间不仅仅只使用这些事件,为了提高触摸屏的用户交互性,经常会利用这些简单的事件,在用户空间实现一些特定的指令。另外,本申请实施例还涉及到了按键的操作,内核层(Kernel)可支持本申请实施例涉及的按键相关的事件,无需在用户空间实现特定的指令。
在一个例子中，系统层，用于对内核层(Kernel)提供的输入事件进行处理和分发。可以为图3示出的本地框架层(Native Framework)。可以包括InputFramework，还可以包括用于对内核层(Kernel)提供的输入事件进行识别的算法，比如，图3示出的指关节算法(可通过由指关节的重力加速度产生的震动频率判断是否符合指关节的用力特征，通过触摸面积判断是否为指关节在对触摸屏进行动作，从而确定是否为指关节动作)，还可以包括应用(执行输入事件对应的操作指令)。其中，InputFramework主要负责用户事件的管理，具体内容如下：1.可以从内核层(Kernel)获取各种原始的事件消息，包括按键、触摸屏、鼠标、轨迹球等事件消息；2.对事件进行预处理，包括两个方面：一方面，将事件转化成系统可以处理的消息事件；另一方面，处理一些特殊的事件，比如主按键、菜单键、电源键等的处理；3.将处理后的事件分发到各个应用进程(系统层、应用框架层或应用层的应用)。在实际应用中，内核层(Kernel)会将输入事件写入设备节点，InputFramework从设备节点中读取输入事件。另外，若在通过InputFramework分发事件之前，系统层得到了新的事件信息，则可将新的事件消息和内核层(Kernel)上报的输入事件重新封装为新的输入事件，然后再传递到InputFramework。
在一个例子中，应用程序框架层可以对系统层提供的输入事件进行读取、处理和分发。可以是图3示出的Java Framework。示例地，应用程序框架层可以包括对系统层提供的输入事件进行识别的算法，比如图3示出的华为手势识别(HwGestureAction)，还可以包括应用(执行输入事件对应的操作指令)。另外，如果应用程序框架层对事件进行了处理得到了新的事件信息，则可将新的事件消息和系统层上报的输入事件重新封装为新的输入事件。示例地，应用程序框架层上报给应用层的事件应当是应用层中的应用可处理的事件，至少包括手势操作，比如指关节双击、指关节双击画S、指关节画闭合图形、在距离触摸屏30cm的地方手朝上停留1s，手在距离触摸屏30cm的地方抓握，从触摸屏顶端下滑3cm等。
在一个例子中,应用层可以对应用程序框架层提供的输入事件进行读取和处理,执行应用程序框架层分发的输入事件对应的操作指令。可以是图3示出的Application。
下文为了便于区分内核层、系统层、应用程序框架层的输入事件,将内核层上报给系统层的输入事件作为第一事件,系统层中进入InputFramework之前的输入事件作为第二事件,应用程序框架层分发给应用层的输入事件作为第三事件。应当理解,当系统层仅仅进行事件分发,则第一事件和第二事件可以理解为同一事件,当应用程序框架层仅仅进行事件分发,则第二事件和第三事件可以理解为同一事件。
对于涉及到的指关节操作，即指关节对触摸屏的操作，值得注意的是，由于人类指关节构造的特殊性，第一终端110中设置有触摸传感器、加速度传感器以及指关节算法。示例地，指关节算法置于系统层，比如图3所示的本地框架(Native Framework)。在指关节敲击屏幕时，触摸传感器可以感知指关节触摸面积，加速度传感器感知指关节的重力加速度所带来的震动频率；然后指关节算法可以确定是否为指关节动作。
下面以指关节双击为例说明第一终端110处理用户的指关节操作的工作流程。具体地，如图3所示，当用户对第一终端110的触摸屏进行指关节双击时，硬件层(Hardware)中的智能传感集线器(Sensor Hub)对基于加速度传感器和压力传感器采集到的指关节双击产生的相关数据(Acc Rawdata)进行处理。之后，将智能传感集线器(Sensor Hub)处理后的数据和触摸传感器(Touch Sensor)采集的数据作为硬件中断信号发送至内核层(Kernel)；通过内核层(Kernel)中的输入集线器驱动(Input Hub Driver)和触摸传感器驱动(Touch Driver)可以将硬件中断信号加工成第一事件，该第一事件可以包括触摸坐标、触摸操作的时间戳、按压压力、敲击产生的重力加速度的振动频率等信息，并将第一事件上报至用户空间；用户空间通过本地框架(Native Framework)中的指关节算法识别第一事件，当确定第一事件由指关节产生时，为第一事件打上指关节标识，即指关节标签，将打上指关节标识的第一事件封装成第二事件，将第二事件发送到InputFramework，InputFramework将第二事件分发给Java Framework中的华为手势识别(HwGestureAction)，华为手势识别(HwGestureAction)可识别指关节的路径轨迹确定指关节动作，将第二事件和指关节动作封装成第三事件，将第三事件分发给Application中的截屏，触发截屏。
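为便于理解上述"基于震动频率和触摸面积判定指关节"的思路，下面给出一个极简的Java示意。其中震动频率阈值、触摸面积阈值均为假设参数，并非本申请实际采用的指关节算法实现：

```java
/**
 * 指关节识别判定逻辑的极简示意（非本申请实际算法）。
 * 假设：VIBRATION_FREQ_MIN、TOUCH_AREA_MAX 等阈值均为示意参数。
 */
public final class KnuckleClassifier {
    private static final float VIBRATION_FREQ_MIN = 150f; // Hz，示意阈值：指关节敲击更"硬"，震动频率更高
    private static final float TOUCH_AREA_MAX = 80f;      // 示意阈值：指关节的触摸面积通常小于指肚

    /**
     * @param vibrationFreq 由加速度传感器数据估计的敲击震动频率
     * @param touchArea     触摸传感器上报的触摸面积
     * @return true 表示判定为指关节操作
     */
    public static boolean isKnuckle(float vibrationFreq, float touchArea) {
        boolean hardTap = vibrationFreq >= VIBRATION_FREQ_MIN; // 用力特征符合指关节
        boolean smallContact = touchArea <= TOUCH_AREA_MAX;    // 接触面积符合指关节
        return hardTap && smallContact;
    }
}
```

只有两项判定同时满足时才打上指关节标识，后续再由手势识别确定具体的指关节动作。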
本申请实施例中，在第一终端110和第二终端120建立投屏连接后，用户对第二终端120进行目标操作，目标操作可以为截屏操作、录屏操作、指关节操作或隔空操作，通过第一终端110响应该目标操作，执行目标操作对应的操作指令，实现跨设备的用户交互，提高了用户的使用体验。其中，指关节操作可以包括指关节画S(操作指令为滑动截屏)，指关节画闭合图形(操作指令为局部截屏)，指关节画横线(操作指令为分屏)，指关节画字母(操作指令为打开应用程序，比如，W打开天气应用，C打开相机，e打开浏览器)，指关节双击(操作指令为全局截屏)，双指关节双击(操作指令为录屏)，隔空操作可以是出现手型图标后隔空抓握(操作指令为全局截屏)、向左挥动(操作指令为向左翻页)、向右挥动(操作指令为向右翻页)、向上挥动(操作指令为向上滑动屏幕)、向下挥动(操作指令为向下滑动屏幕)、按压(操作指令为暂停或继续音乐播放)等。截屏操作包括同时按开关机键和音量减键，指关节画S，指关节画闭合图形，指关节双击，点击下拉菜单中的截屏按钮等，录屏操作包括双指关节双击，点击下拉菜单中的录屏按钮。另外，截屏操作或录屏操作过程中还涉及到其他的操作，这些操作也可以生成对应的操作事件。示例地，对第二终端120进行滑动操作以显示下拉菜单画面。示例地，点击截图编辑画面中的各种按钮，比如，图7d示出的截图编辑画面中的保存、分享、自由图形、心形、矩形、椭圆形等按钮，图7e示出的截图编辑画面中的笔形、颜色、粗细、分享、涂鸦、马赛克、橡皮擦、滚动截屏等按钮。
对于用户可以对第二终端120做出的目标操作,通常为第一终端110所支持的所有目标操作。当然,在实际应用中,第二终端120可实现的目标操作,需要第二终端120的硬件层和内核层(Kernel)的支持。
对于第二终端120来说，第二终端120应当具有第一终端110的硬件层中的部分或全部硬件，并且具有对应的硬件的驱动，从而使得第二终端120可以生成事件，且生成的事件可被第一终端110处理。另外，第二终端120还可以具有第一终端110没有的硬件，以实现其特有的功能。本申请实施例中，用户对第二终端120进行目标操作，第二终端120发送生成的事件至第一终端110，第一终端110确定事件的操作指令，并执行操作指令。为了便于区分，将第二终端120发送给第一终端110的事件称为操作事件，下文以操作事件为例进行描述。其中，操作指令参见上文描述，此处不做过多赘述。考虑第一终端110和第二终端120的性能和反应速度的平衡，需要对事件的整个处理过程进行划分，使得第二终端120能够进行事件的部分处理，降低第一终端110的数据处理量，换言之，操作事件所包含的信息非常关键。若操作事件的信息过多，一方面，第一终端110和第二终端120之间的数据传输量多，降低了数据传输效率；另一方面，第二终端120的数据处理量较大，则会降低第二终端120的数据处理效率。因此，操作事件所包含的信息可对第一终端110和第二终端120的性能和反应速度产生较大的影响。
需要说明的是,当第一终端110和第二终端120之间的投屏方式为异源投屏,第一终端110基于第二终端120显示的画面执行操作事件的操作指令。在实际应用中,第一终端110会建立第二终端120的虚拟屏幕,虚拟屏幕的尺寸和第二终端120的屏幕尺寸适配,对应的,第一终端110基于第二终端120的虚拟屏幕显示的画面执行操作指令。当然,对于第一终端110和第二终端120之间投屏方式为镜像投屏,第一终端110基于自身显示的画面执行操作指令即可。
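对于上述"第一终端为第二终端建立虚拟屏幕"这一步，下面给出一个基于Android公开接口DisplayManager.createVirtualDisplay的Java示意。其中虚拟屏幕的命名方式（以第二终端的设备标识命名，便于后续检索）以及宽高、密度等参数均为示意性假设，实际实现需结合具体投屏框架确定：

```java
import android.content.Context;
import android.hardware.display.DisplayManager;
import android.hardware.display.VirtualDisplay;
import android.view.Surface;

/** 为第二终端创建尺寸适配的虚拟屏幕的示意代码（参数与命名为假设）。 */
public final class VirtualScreenHelper {
    /**
     * @param deviceId 第二终端的设备标识，示意性地用作虚拟屏幕名称
     * @param surface  承接虚拟屏幕图像的Surface（例如编码器的输入Surface）
     */
    public static VirtualDisplay createForDevice(Context context, String deviceId,
            int widthPx, int heightPx, int densityDpi, Surface surface) {
        DisplayManager dm = (DisplayManager) context.getSystemService(Context.DISPLAY_SERVICE);
        // 虚拟屏幕的宽高与第二终端的屏幕尺寸适配
        return dm.createVirtualDisplay(
                "virtual-" + deviceId, widthPx, heightPx, densityDpi, surface,
                DisplayManager.VIRTUAL_DISPLAY_FLAG_PRESENTATION);
    }
}
```

后续第一终端识别操作事件中携带的设备标识，即可定位到对应的虚拟屏幕，并对虚拟屏幕显示的画面执行操作指令。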
值得注意的是,对于用户可对第二终端120进行的目标操作,主要基于第一终端110和第二终端120所共有的硬件。示例地,当第一终端110和第二终端120均具有触摸屏时,用户可对第二终端120进行手势操作包括对触摸屏的点击、按压、拖拽、滑动、敲击、画图形等操作,另外,还可以对触摸屏进行隔空操作。示例地,当第一终端110和第二终端120均具有按键时,用户可对第二终端120进行的操作包括对按键的短按和长按。下文以第一终端110和第二终端120均具有触摸屏和按键为例进行说明。另外,在一些可能的实现方式中,第一终端110和第二终端120还可具有鼠标、键盘等硬件。
考虑到从用户对第二终端120的目标操作到第一终端110响应该目标操作的过程中,关键为对硬件层产生的硬件中断信号的处理,以实现对用户的目标操作的理解。比如,用户按的是哪个按键,用户点击的是哪个画面中的哪个界面元素,用户的手的操作部位(指肚、指关节、手)是什么,从而便于第一终端110快速实现对目标操作的理解,确保第一终端110的反应速度和性能。
示例地，目标操作为对第二终端120的按键的按压操作。操作事件指示了手势操作为用户按压第二终端120的按键，可以包括键值、按键时间信息、第二终端120的设备标识；之后，第一终端110识别操作事件，确定对应的操作指令。其中，按键的键值指示了用户对什么按键进行了按压，可以是主页键、返回键、电源键、音量键等。不同按键具有不同的键值，从而区分不同的按键。按键时间信息可以为按键时长，或者，短按，长按等指示按键时长的描述信息。这里，第二终端120生成的操作事件可以为内核层(Kernel)生成的第一事件。
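按键类操作事件所携带的信息可以用如下的Java数据结构来示意。字段命名与JSON序列化格式均为假设，实际交互协议以第一终端和第二终端之间的约定为准：

```java
import org.json.JSONException;
import org.json.JSONObject;

/** 按键类操作事件的示意数据结构（字段命名为假设）。 */
public final class KeyOperationEvent {
    public final int keyCode;           // 键值，区分不同的按键
    public final long pressDurationMs;  // 按键时间信息（按压时长）
    public final String deviceId;       // 第二终端的设备标识

    public KeyOperationEvent(int keyCode, long pressDurationMs, String deviceId) {
        this.keyCode = keyCode;
        this.pressDurationMs = pressDurationMs;
        this.deviceId = deviceId;
    }

    /** 序列化为JSON，便于通过投屏连接发送至第一终端。 */
    public JSONObject toJson() throws JSONException {
        return new JSONObject()
                .put("keyCode", keyCode)
                .put("pressDurationMs", pressDurationMs)
                .put("deviceId", deviceId);
    }
}
```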
示例地，目标操作为对第二终端120显示的下拉菜单画面中的按钮的点击操作。操作事件指示了手势操作为用户点击第二终端120的触摸屏显示的下拉菜单画面中的截屏按钮或录屏按钮，可以包括用户点击的下拉菜单中的截屏按钮或录屏按钮的标识和第二终端120的设备标识，第一终端110可以直接识别操作事件中的截屏按钮或录屏按钮的标识，确定截屏指令或录屏指令，确保反应速度和性能。可以理解，由第二终端120进行了下拉菜单画面中的按钮的识别，使得第一终端110可直接确定出操作事件指示的按钮对应的操作指令。对应地，第二终端120存储有下拉菜单画面的布局信息，比如下拉菜单中各按钮的位置信息和绑定的标识，从而使得第二终端120能够了解到其显示的下拉菜单画面中的各个按钮的含义。这里，存储的下拉菜单画面的布局信息与第二终端120的屏幕尺寸是适配的，并不是与第一终端110的屏幕尺寸适配。需要说明的是，当用户在第二终端120的触摸屏上对下拉菜单中的截屏按钮或录屏按钮所在区域做出点击操作，若通过第一终端110识别用户点击的按钮的含义，在识别过程中，需要将第二终端120的屏幕坐标系表示的相关数据转换到第一终端110的屏幕坐标系下；若基于第二终端120识别用户点击的按钮的含义，无需进行第二终端120的屏幕坐标系和第一终端110的屏幕坐标系的转换，减少了处理的数据量，可确保数据处理效率。
在一个例子中,在第一终端110和第二终端120建立投屏连接时,第一终端110可以将下拉菜单画面和下拉菜单画面的布局信息发送至第二终端120,由第二终端120进行存储,后续,第一终端110将显示下拉菜单的操作指令发送至第二终端120,由第二终端120调用存储的下拉菜单画面进行显示即可,如后续用户点击了第二终端120显示的下拉菜单画面的截屏按钮或录屏按钮,第二终端120可直接调用下拉菜单画面的布局信息确定截屏按钮或录屏按钮的标识。
在一些可能的情况，操作事件也可以指示用户点击第二终端120的触摸屏显示的截图编辑画面中的按钮，第二终端120生成的操作事件包括用户点击的截图编辑画面中的按钮的标识。对应地，第二终端120存储有截图编辑画面的布局信息，从而使得第二终端120能够了解到其显示的截图编辑画面中的各个按钮的含义。
示例地,目标操作为用户从第二终端120的屏幕的顶部向下滑动的滑动操作。操作事件指示了手势操作为用户对第二终端120的触摸屏的从顶部向下滑动的滑动操作,可以包括滑动方向,滑动距离,滑动时长,第二终端120的设备标识等,使得第一终端110可直接确定出操作事件对应的显示下拉菜单的操作指令。
本领域技术人员可以理解,对于用户点击下拉菜单中的截屏按钮的点击操作以及用户从屏幕顶部滑动的滑动操作,操作事件基于指示手指在屏幕上的操作信息确定,比如,操作信息可以为触摸的第二终端120的触摸屏上的多个像素点各自在屏幕坐标系下的坐标、触摸时刻。其中,屏幕坐标系为第二终端120的触摸屏的坐标系;触摸的像素点可以理解为被按压的像素点。
对于上述按压第二终端120的按键、点击第二终端120显示的下拉菜单画面中的按钮、从第二终端120的屏幕顶部向下滑动的操作,是相对简单的手势操作。对于比较复杂的手势操作,比如,指关节操作,隔空操作,操作事件通常会包括较多的信息,对应的,数据处理量也非常大。对于指关节操作和隔空操作来说,不仅涉及到指关节和隔空的手的识别,还涉及到了指关节动作和手的隔空动作的识别。
在一个例子中,第二终端120用于识别指关节或隔空的手;第一终端110用于识别指关节的动作或手的隔空动作,基于指关节动作或手的隔空动作,确定操作指令。
示例地,对于用户的指关节对第二终端120的触摸屏的操作,比如,指关节双击、指关节画S、指关节画闭合图形,为了平衡第一终端110和第二终端120的性能和反应速度,这里,如图5所示以第一终端110的系统层(Native Framework)可直接进行事件分发作为划分节点。对应的,本申请实施例中,第二终端120发送给第一终端110的操作事件可以包括指关节标识、指关节触摸信息和指关节按压信息,对应的,第一终端110识别操作事件,以确定用户针对第二终端120显示的画面的指关节动作,基于指关节动作,确定操作事件对应的操作指令,比如,截屏、录屏、分屏或打开应用程序。详细内容参见上文。其中,指关节触摸信息可以包括指关节触摸的第二终端120的屏幕上的多个像素点各自在屏幕坐标系下的坐标以及触摸时刻,也可以包括触摸面积,其中,触摸面积可以理解为相同触摸时刻的连续的像素点形成的区域的面积;指关节按压信息可以包括指关节的重力加速度产生的震动频率。示例地,在用户的指关节画S的过程中,记录图形S对应在屏幕上的每个像素点各自的触摸位置和触摸时刻。另外,操作事件可能包括触摸时刻不同的相同像素点,比如,在用户的指关节画闭合图形的过程中,起点和终点的像素点相同,但是触摸时刻并不相同。
需要说明的是,为了确保第一终端110可以识别指关节标识,在一个例子中,第二终端120生成操作事件的代码从第一终端110移植;在一个例子中,第二终端120向第一终端110请求指关节标识;示例地,当第二终端120通过指关节算法识别到内核层生成的第一事件由指关节产生,向第一终端110请求指关节标识。
在一个例子中,第二终端120用于识别指关节动作或手的隔空动作。第一终端110用于基于识别的指关节动作或手的隔空动作,确定操作指令。
示例地，对于用户的手对第二终端120的触摸屏的隔空操作，比如，在距离第二终端120的触摸屏的20-40cm处手朝上停留1s、在距离第二终端120的触摸屏的20-40cm处手朝上抓握等，第二终端120生成的操作事件可以包括隔空动作，从而使得第一终端110直接可以读取操作事件中的隔空动作，确定隔空动作对应的操作指令。
对于第一终端110和第二终端120之间的数据处理的分工,可以考虑第二终端120的处理能力,当第二终端120的处理能力比较强的前提下,可以由第二终端120识别手的隔空动作和/或指关节动作。反之,第二终端120可仅仅识别隔空的手或指关节。
值得注意的是,对于指关节的操作或隔空的手的操作。在一个例子中,该操作对应的操作指令可以为直接基于指关节动作或手的隔空动作确定,无需考虑第二终端120显示的画面的内容。示例地,指关节双击,操作指令为截屏;指关节画S,操作指令为滑动截屏。在另一个例子中,该操作对应的操作指令需要基于指关节动作或手的隔空动作以及第二终端120显示的画面的内容确定。示例地,当第二终端120的显示的画面为音乐播放画面,隔空按压的操作指令为暂停播放。这里,第二终端120可以仅仅识别指关节或手是否隔空,或者,识别指关节动作和手的隔空动作;第一终端110基于指关节或手的隔空动作和第二终端120显示的画面确定操作指令。
当然，上述操作事件仅仅为示例，并不对本申请构成限定，只要能够确保第一终端110和第二终端120的性能和反应速度的平衡即可。应当理解，本申请实施例中，第一终端110对目标操作的识别过程是不需要考虑第一终端110和第二终端120之间的屏幕尺寸的差异的。为了确保第一终端110和第二终端120之间的事件交互的可靠性，优选地，第二终端120中得到操作事件的相关程序是从第一终端110移植的。
另外,需要基于第二终端120所具有的硬件层,确定出用户可以对第二终端120进行的目标操作;移植能够获取该操作对应的操作事件所需的程序,从而使得第二终端120能够获取操作事件,进一步使得第一终端110响应第二终端120的操作事件,执行操作事件对应的操作指令,实现跨设备的屏幕操作,可处理跨设备的使用场景,提高用户体验。
示例地,若第二终端120可对指关节进行识别,请参考图5,第二终端120具有硬件层(Hardware)中的智能传感集线器(Sensor Hub)、触摸传感器(Touch Sensor)、显示屏(图中未示出),移植有内核层(Kernel)中的输入集线器驱动(Input Hub Driver)和触摸传感器驱动(Touch Driver),用户空间中的本地框架(Native Framework)中的指关节算法,前端框架(Java/Js UI Framework)以及Application。
示例地，若第二终端120可对按键进行识别，请参考图6，第二终端120具有Input子系统。其中，Input子系统包括上述驱动层、输入核心层和事件处理层，位于内核层(Kernel)，详细内容参见上文，此处不做过多赘述。
另外,通过降低第一终端110的数据处理量,实现第一终端110和第二终端120之间的数据处理效率的平衡,有利于提高用户的画面的交互操作的处理效率,提高投屏的使用体验。
需要说明的是,为了实现第二终端120和第一终端110的投屏连接,在实际应用中,通常会在第二终端120安装Sink端的协同应用,该应用用于实现投屏连接的第二终端120的处理和与第一终端110的通信。对应的,在第一终端110安装Source端的协同应用,该应用用于实现投屏连接的第一终端110的处理和与第二终端120通信。另外,由于协同应用可以和应用层中的应用进行交互,对应的,第一终端110的协同应用可以处理一些特殊的操作事件,比如,按键事件,点击下拉菜单中的按钮,第一终端110中的协同应用可将操作事件直接分发到对应的应用,该应用响应操作事件。示例地,请参考图6,当用户在第二终端120侧进行按键操作,第二终端120内的Input子系统处理按键事件,识别到截屏按键后,将按键事件投递到顶层窗口,由Sink端的协同应用响应,并传递给第一终端110侧的Source端的协同应用。Source端的协同应用识别到按键事件后,触发对应的截屏服务。另外,对于需要系统层分发的操作事件,比如,指关节的相关事件,如图5所示,第一终端110和第二终端120之间的事件交互可以通过系统层(Native Framework)中的通信模块进行交互,示例地,该通信模块可以理解为物理通信通道。
以截屏为例,对第一终端110响应第二终端120的手势操作进行说明。
在一个实施例中,第一终端110和第二终端120的投屏方式为镜像投屏,用户在第二终端120上进行截屏触发,则第一终端110响应第二终端120的手势操作以实现截屏主要包括以下内容:
第一终端110响应第二终端120的截屏触发，通过安装的截屏应用调用SurfaceControl.screenshot函数族接口，获取第一终端110的display上的显示图片，通过native Framework的接口调用SurfaceFlinger绘图功能，将显示缓存的内容，绘制到Bitmap文件，并返回给截屏应用，从而完成截屏，并在第一终端110和第二终端120显示截图预览画面。
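上文所述的SurfaceControl.screenshot为系统内部接口，此处不展示其调用方式。为便于理解"把当前显示内容绘制到Bitmap"这一思路，下面给出一个使用Android公开接口PixelCopy的Java示意，它只是一个在应用层可实现的类似效果的草图，并非本申请所述的系统级截屏实现：

```java
import android.app.Activity;
import android.graphics.Bitmap;
import android.os.Handler;
import android.os.Looper;
import android.view.PixelCopy;
import android.view.View;

/** "把显示内容绘制到Bitmap"思路的应用层示意（使用公开的PixelCopy接口）。 */
public final class ScreenshotSketch {
    public interface Callback { void onScreenshot(Bitmap bitmap); }

    public static void capture(Activity activity, Callback callback) {
        View decor = activity.getWindow().getDecorView();
        Bitmap bitmap = Bitmap.createBitmap(decor.getWidth(), decor.getHeight(),
                Bitmap.Config.ARGB_8888);
        // 将窗口当前显示的内容异步拷贝到Bitmap中
        PixelCopy.request(activity.getWindow(), bitmap,
                copyResult -> {
                    if (copyResult == PixelCopy.SUCCESS) {
                        callback.onScreenshot(bitmap); // 得到截图后可进一步生成截图预览画面
                    }
                }, new Handler(Looper.getMainLooper()));
    }
}
```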
在一个实施例中,第一终端110和第二终端120的投屏方式为异源投屏,用户在第二终端120上进行截屏触发,则第一终端110响应第二终端120的手势操作实现截屏主要包括以下内容:
(1)、识别触发源displayId。
这里,触发源displayId可以理解为第二终端120的设备标识。
(2)、调用SurfaceControl新增接口获得指定displayId的截屏图片。
这里，第一终端110内部缓存displayId所对应的第二终端120显示的画面。
(3)、在触发源displayId对应的第二终端120显示截屏交互动画或截图交互画面。
这里,第一终端110控制第二终端120显示截屏交互动画和截图交互画面。示例地,截图交互画面可以为图7a、图7b、图7c、图7f所示的截图预览画面,也可以为图7d、图7e所示的截图编辑画面,还可以为图7f所示的手势图标121所在画面,还可以是图8a和图8b所示的录屏预览画面,截屏交互动画可以为图7e所示的屏幕滚动动画。
参考图12,对第一终端110和第二终端120之间的截屏实现进行进一步的说明。
1.通过第一终端110的设备管理实现其和第二终端120的设备连接,以及实现图形控制和第二终端120的显示的连接。
在连接之后,第一终端110可以控制第二终端120的显示。
第一终端110与第二终端120建立投屏连接后,第一终端110存储的画面可以投屏到第二终端120上进行显示,用户可在第二终端120上进行相应的操作。当然本实施例中,第二终端120可支持的触摸屏和按键的相关操作,详细内容参见上文,此处不做过多赘述。
2.用户对第二终端120进行触发截屏的操作,第二终端120的交互识别可识别操作,传递设备ID,触发第一终端110的截屏服务。
这里,设备ID可以理解为上述displayId。在实际应用中,当用户在第二终端120的触摸屏或按键进行触发截屏的目标操作时,第一终端110可确定目标操作对应的截屏指令,触发截屏服务,截屏服务可以基于截屏指令和设备ID对第二终端显示的画面进行截图,得到截屏交互画面,比如截图预览画面、截图编辑画面,另外,多个截图交互画面也可以形成截图交互动画。示例地,图7a、图7b、图7c、图7f、图8a和图8b示出了截图预览画面,图7d和图7e示出了的截图编辑画面,图7e示出的屏幕滑动产生的截图交互动画。
3.第一终端110的截屏服务将截图交互画面发送给第二终端120,以使第二终端进行显示。
这里，当截图交互画面为截图编辑画面时，如果用户点击保存按钮，第一终端110保存截图，不会显示截屏预览画面；如果用户点击删除按钮，第一终端110会删除截图。
以录屏为例,对第一终端110响应第二终端120的手势操作进行进一步的说明。
在一个实施例中，第一终端110和第二终端120的投屏方式为镜像投屏，用户在第二终端120上进行录屏触发，则第一终端110响应第二终端120的录屏操作实现录屏主要包括以下内容：
如图4所示，第一终端110采用MediaProjection接口实现把第一终端110的图像渲染到指定surface上的功能，主要实现步骤包括：通过MediaProjectionManager取得的MediaProjection，创建VirtualDisplay；第一终端110的Display可以"投影"到VirtualDisplay上；VirtualDisplay会将图像渲染到Surface中，而这个Surface是从MediaCodec编码器中创建的，这样第一终端110显示的图像就会自动填充给MediaCodec编码器；最后，MediaMuxer将从MediaCodec得到的图像元数据封装并输出到MP4文件中，从而得到录屏文件。另外，第一终端110和第二终端120均显示相同的录屏画面。
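上述录屏链路（MediaProjection→VirtualDisplay→MediaCodec→MediaMuxer）可以用如下简化的Java示意来理解。其中省略了权限申请、编码输出循环与线程管理，分辨率、码率等参数均为假设值：

```java
import android.hardware.display.DisplayManager;
import android.hardware.display.VirtualDisplay;
import android.media.MediaCodec;
import android.media.MediaCodecInfo;
import android.media.MediaFormat;
import android.media.MediaMuxer;
import android.media.projection.MediaProjection;
import android.view.Surface;

import java.io.IOException;

/** 录屏链路的简化示意：Display →(投影) VirtualDisplay → 编码器Surface → MP4。 */
public final class ScreenRecorderSketch {
    public static void start(MediaProjection projection, String outputPath)
            throws IOException {
        int width = 1080, height = 1920, dpi = 320; // 示意参数

        // 1. 创建视频编码器，并从编码器创建输入Surface
        MediaFormat format = MediaFormat.createVideoFormat(
                MediaFormat.MIMETYPE_VIDEO_AVC, width, height);
        format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
                MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);
        format.setInteger(MediaFormat.KEY_BIT_RATE, 6_000_000);
        format.setInteger(MediaFormat.KEY_FRAME_RATE, 30);
        format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1);
        MediaCodec encoder = MediaCodec.createEncoderByType(MediaFormat.MIMETYPE_VIDEO_AVC);
        encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
        Surface inputSurface = encoder.createInputSurface();
        encoder.start();

        // 2. 创建VirtualDisplay，显示内容被"投影"后自动填充给编码器
        VirtualDisplay display = projection.createVirtualDisplay(
                "record", width, height, dpi,
                DisplayManager.VIRTUAL_DISPLAY_FLAG_AUTO_MIRROR,
                inputSurface, null, null);

        // 3. MediaMuxer把编码后的图像元数据封装输出到MP4文件
        MediaMuxer muxer = new MediaMuxer(outputPath,
                MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);
        // 省略：循环dequeueOutputBuffer、addTrack/writeSampleData，以及stop/release
    }
}
```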
在一个实施例中,第一终端110和第二终端120的投屏方式为异源投屏,用户在第二终端120上进行录屏触发,则第一终端110响应第二终端120的手势操作实现录屏主要包括以下内容:
(1)、通过MediaProjectionManager取得的MediaProjection,创建触发源displayId 的VirtualDisplay;创建触发源displayId的AudioRecord。
这里,触发源displayId可以理解为第二终端120的设备标识。在录屏触发成功后,触发源displayId的AudioRecord会获取第二终端120上的麦克风采集的声音信号。
(2)、MediaMuxer将从MediaCodec得到的displayId的图像元数据封装并输出到MP4文件中,从而得到录屏文件。
这里,第一终端110控制第二终端120显示多个录屏画面,这些录屏画面的录屏图标中显示录屏时间。示例地,可参见图8a和图8b显示的录屏画面,以及,录屏图标122。
另外，上述截屏和录屏仅仅作为操作指令的示例，并不构成具体限定。下文继续以截屏和录屏为例进行说明。
本申请实施例中的第一终端可以为具备投屏发送(Source)能力且能处理输入事件的终端，比如手机、平板电脑、个人数字助理(personal digital assistant,PDA)、台式电脑、可穿戴设备、笔记本等。本申请实施例中的第二终端可以是具有输入事件生成能力，并且至少具备投屏接收(Sink)能力、图像显示能力的终端，比如，便捷屏、平板电脑。另外，第二终端还可以具备声音输出能力和声音采集能力。具体第二终端和第一终端的终端类型此处不予限定，可根据实际的场景确定。例如，当实际场景中是由手机向便捷屏进行投屏时，此时手机为第一终端，便捷屏为第二终端。在实际应用中，第一终端和第二终端搭载的操作系统相同，包括但不限于搭载iOS、android、microsoft或者其他操作系统。
接下来，以手机作为第一终端110，便捷屏作为第二终端120对本申请实施例中的异源投屏的具体场景进行介绍。这里，便捷屏120移植有指关节识别算法，指关节事件由便捷屏120生成。
在一个场景中,如图7a所示,对便捷屏120进行按键,比如开关键和音量减键,便捷屏120将按键事件发送给手机110;手机110识别按键事件为截屏事件后,对内部缓存的便捷屏120显示的视频画面(图中未示意)进行截屏,生成截图预览画面后发送给便捷屏120;便捷屏120显示截图预览画面。
在实际应用中，如图11所示，对于图7a所示的按键的场景，用户对便捷屏120进行组合按键的动作；对于手机110，窗口PhoneWindowManager进行组合按键识别，当识别到截屏事件时，窗口PhoneWindowManager调用截屏助手ScreenShotHelper(Ex)，以使截屏助手ScreenShotHelper(Ex)触发并调用截屏服务TakeScreenShotService，截屏服务TakeScreenShotService调用截屏管理(HW)GlobalScreenshot，截屏管理(HW)GlobalScreenshot调用截屏图片生成接口，调用图形SurfaceControl生成截屏图片并添加预览小图后得到截图预览画面。
在一个场景中,如图7b所示,用户对便捷屏120的触摸屏进行下拉菜单的滑动,便捷屏120将滑动事件发送给手机110;手机110识别滑动事件为下拉菜单事件后,生成下拉菜单画面后发送给便捷屏120;便捷屏120显示下拉菜单画面,点击便捷屏120的下拉菜单画面中截屏按钮,并将点击事件发送给手机110;手机110识别点击事件为点击下拉菜单中的截屏按钮后,对内部缓存的便捷屏120显示的视频画面(图中未示意)进行截屏,生成截图预览画面后发送给便捷屏120;便捷屏120显示截图预览画面。
在实际应用中,如图11所示,对于图7b所示的下拉菜单的场景,用户对便捷屏120的 触摸屏进行下拉菜单的下拉动作;对于手机110,下拉菜单ScreenShotHelper(Ex)进行下拉菜单的下拉动作的识别,当识别到下拉菜单动作时,调用截屏服务TakeScreenShotService,截屏服务TakeScreenShotService调用截屏管理(HW)GlobalScreenshot,截屏管理(HW)GlobalScreenshot调用截屏图片生成接口,调用图形SurfaceControl生成截屏图片,添加预览小图,以得到截图预览画面。
在一个场景中,如图7c所示,对便捷屏120的触摸屏显示的视频画面进行指关节双击,便捷屏120将指关节事件发送给手机110;手机110识别指关节事件,确定用户动作为指关节双击;对内部缓存的便捷屏120显示的视频画面(图中未示意)进行截屏,生成截图预览画面后将其发送给便捷屏120;便捷屏120显示截图预览画面。
在实际应用中，如图11所示，对于图7c所示的指关节双击的场景，用户对便捷屏120的触摸屏进行指关节双击的动作；对于手机110，指关节SystemwideActionListener进行指关节识别，当识别到指关节双击时，调用截屏助手ScreenShotHelper(Ex)触发并调用截屏服务TakeScreenShotService，截屏服务TakeScreenShotService调用截屏管理(HW)GlobalScreenshot，截屏管理(HW)GlobalScreenshot调用截屏图片生成接口，调用图形SurfaceControl生成截屏图片，添加预览小图，以得到截图预览画面。
在一个场景中,如图7d所示,对便捷屏120的触摸屏显示的视频画面进行指关节敲击画闭合图形,便捷屏120将指关节事件发送给手机110;手机110识别指关节事件,确定用户动作为指关节敲击画闭合图形;对内部缓存的便捷屏120显示的视频画面(图中未示意)进行截屏,生成截图编辑画面后将其发送给便捷屏120;便捷屏120显示截图编辑画面,当用户点击截图编辑画面的保存按钮,手机110保存截屏画面(图中未示意)。
在实际应用中，如图11所示，对于图7d所示的指关节划闭合区域的场景，用户对便捷屏120的触摸屏进行指关节划闭合区域的动作；对于手机110，指关节SystemwideActionListener进行指关节识别，当识别到指关节划闭合区域时，启动智能截屏CropActivity，智能截屏CropActivity选择图形，触发截屏编辑PhotoEditorActivity进行编辑，得到截图编辑画面；当用户点击截图编辑画面中的滚动截屏按钮时，触发滚动截屏管理MultiScreenShotService。
在一个场景中,如图7e所示,对便捷屏120的触摸屏显示的天气画面进行指关节敲击画S,便捷屏120将指关节事件发送给手机110;手机110识别指关节事件,确定用户动作为指关节敲击画S;对内部缓存的便捷屏120显示的天气画面(图中未示意)进行滚动截屏,将生成滚动画面和截图编辑画面发送给便捷屏120;便捷屏120显示滚动动画后显示截图编辑画面,当用户点击截图编辑画面的保存按钮,手机110保存截屏画面(图中未示意)。
在实际应用中，如图11所示，对于图7e所示的指关节划S的场景，用户对便捷屏120的触摸屏进行指关节划S的动作；对于手机110，指关节SystemwideActionListener进行指关节识别，当识别到指关节划S时，触发滚动截屏管理MultiScreenShotService；滚动截屏管理MultiScreenShotService基于截屏管理(HW)GlobalScreenshot的下滑手势触发指令进行滚动截取图片后，截屏编辑PhotoEditorActivity基于滚动截屏管理MultiScreenShotService的预览编辑指令，得到截图编辑画面；当用户点击截图编辑画面中的预览编辑的按钮时，截屏管理(HW)GlobalScreenshot发送点击预览编辑的指令，截屏编辑PhotoEditorActivity基于点击预览编辑的指令进行相应处理。
在一个场景中，如图7f所示，用户在距便捷屏120的触摸屏的20-40cm的地方手朝上停留，便捷屏120将隔空事件发送给手机110；手机110识别隔空事件，确定用户动作为在距便捷屏120的触摸屏的20-40cm的地方手朝上停留；在内部缓存的便捷屏120显示的视频画面(图中未示意)上添加手型图标121，生成截屏提示画面后将其发送给便捷屏120；便捷屏120显示截屏提示画面。当用户进行抓握时，便捷屏120将隔空抓握事件发送给手机110；手机110识别隔空抓握事件，对内部缓存的便捷屏120显示的视频画面(图中未示意)进行截屏，生成截图预览画面后发送给便捷屏120；便捷屏120显示截图预览画面。
在一个场景中，如图8a所示，用户对便捷屏120的触摸屏进行双指关节双击的操作，便捷屏120将指关节事件发送给手机110；手机110识别指关节事件，确定用户动作为双指关节双击；对内部缓存的便捷屏120显示的视频画面(图中未示意)进行录屏，生成录屏画面后将其发送给便捷屏120；便捷屏120显示录屏画面。当用户点击录屏图标122时，便捷屏120将点击事件发送给手机110；手机110识别点击事件为点击录屏界面中的录屏图标122，对内部缓存的便捷屏120显示的视频画面(图中未示意)进行截屏，生成截图预览画面后发送给便捷屏120；便捷屏120显示截图预览画面。
在一个场景中,如图8b所示,用户对便捷屏120的触摸屏进行下拉菜单的滑动,便捷屏120将滑动事件发送给手机110;手机110识别滑动事件为下拉菜单事件后,生成下拉菜单画面后发送给便捷屏120;便捷屏120显示下拉菜单画面,点击便捷屏120的下拉菜单画面中录屏按钮,并将点击事件发送给手机110;手机110识别点击事件为点击下拉菜单中的录屏按钮后,对内部缓存的便捷屏120显示的视频画面(图中未示意)进行录屏,生成录屏画面后发送给便捷屏120;便捷屏120显示录屏画面。当用户点击录屏图标122时,便捷屏120将点击事件发送给手机110;手机110识别点击事件为点击录屏界面中的录屏图标122,对内部缓存的便捷屏120显示的视频画面(图中未示意)进行截屏,生成截图预览画面后发送给便捷屏120;便捷屏120显示截图预览画面。
需要说明的是，第二终端120发送给第一终端110的按键事件、点击事件、滑动事件、指关节事件、隔空事件均为上文以及下文所述的操作事件。
示例性的,图9示出了一种第一终端110的结构示意图。
第一终端110可以包括处理器1110,外部存储器接口1120,内部存储器1121,通用串行总线(universal serial bus,USB)接口1130,充电管理模块1140,电源管理模块1141,电池1142,天线1,天线2,移动通信模块1150,无线通信模块1160,音频模块1170,扬声器1170A,受话器1170B,麦克风1170C,传感器模块1180,按键1190,摄像头1191以及显示屏1192等。其中传感器模块1180可以包括压力传感器1180A、姿态传感器1180B、距离传感器1180C和触摸传感器1180D等。
可以理解的是,本申请实施例示意的结构并不构成对第一终端110的具体限定。在另一些实施例中,第一终端110可以包括比图示更多或更少的部件,或者组合某些部件,或者拆分某些部件,或者不同的部件布置。图示的部件可以以硬件,软件或软件和硬件的组合实现。
处理器1110可以包括一个或多个处理单元，例如：处理器1110可以包括应用处理器(application processor,AP)，调制解调处理器，图形处理器(graphics processing unit,GPU)，图像信号处理器(image signal processor,ISP)，控制器，视频编解码器，数字信号处理器(digital signal processor,DSP)，基带处理器，和/或神经网络处理器(neural-network processing unit,NPU)等。其中，不同的处理单元可以是独立的器件，也可以集成在一个或多个处理器中。
其中，控制器可以是第一终端110的神经中枢和指挥中心。控制器可以根据指令操作码和时序信号，产生操作控制信号，完成取指令和执行指令的控制。
处理器1110中还可以设置存储器,用于存储指令和数据。在一些实施例中,处理器1110中的存储器为高速缓冲存储器。该存储器可以保存处理器1110刚用过或循环使用的指令或数据。如果处理器1110需要再次使用该指令或数据,可从所述存储器中直接调用。避免了重复存取,减少了处理器1110的等待时间,因而提高了系统的效率。
在一些实施例中，处理器1110可以包括一个或多个接口。接口可以包括集成电路(inter-integrated circuit,I2C)接口，集成电路内置音频(inter-integrated circuit sound,I2S)接口，脉冲编码调制(pulse code modulation,PCM)接口，通用异步收发传输器(universal asynchronous receiver/transmitter,UART)接口，移动产业处理器接口(mobile industry processor interface,MIPI)，通用输入输出(general-purpose input/output,GPIO)接口，用户标识模块(subscriber identity module,SIM)接口，和/或通用串行总线(universal serial bus,USB)接口等。
I2C接口是一种双向同步串行总线，包括一根串行数据线(serial data line,SDA)和一根串行时钟线(serial clock line,SCL)。在一些实施例中，处理器1110可以包含多组I2C总线。处理器1110可以通过不同的I2C总线接口分别耦合触摸传感器1180D，充电器，闪光灯，摄像头1191等。例如：处理器1110可以通过I2C接口耦合触摸传感器1180D，使处理器1110与触摸传感器1180D通过I2C总线接口通信，实现第一终端110的触摸功能。
I2S接口可以用于音频通信。在一些实施例中,处理器1110可以包含多组I2S总线。处理器1110可以通过I2S总线与音频模块1170耦合,实现处理器1110与音频模块1170之间的通信。在一些实施例中,音频模块1170可以通过I2S接口向无线通信模块1160传递音频信号,实现通过蓝牙耳机接听电话的功能。
PCM接口也可以用于音频通信,将模拟信号抽样,量化和编码。在一些实施例中,音频模块1170与无线通信模块1160可以通过PCM总线接口耦合。在一些实施例中,音频模块1170也可以通过PCM接口向无线通信模块1160传递音频信号,实现通过蓝牙耳机接听电话的功能。所述I2S接口和所述PCM接口都可以用于音频通信。
UART接口是一种通用串行数据总线,用于异步通信。该总线可以为双向通信总线。它将要传输的数据在串行通信与并行通信之间转换。在一些实施例中,UART接口通常被用于连接处理器1110与无线通信模块1160。例如:处理器1110通过UART接口与无线通信模块1160中的蓝牙模块通信,实现蓝牙功能。在一些实施例中,音频模块1170可以通过UART接口向无线通信模块1160传递音频信号,实现通过蓝牙耳机播放音乐的功能。
MIPI接口可以被用于连接处理器1110与显示屏1192，摄像头1191等外围器件。MIPI接口包括摄像头串行接口(camera serial interface,CSI)，显示屏串行接口(display serial interface,DSI)等。在一些实施例中，处理器1110和摄像头1191通过CSI接口通信，实现第一终端110的拍摄功能。处理器1110和显示屏1192通过DSI接口通信，实现第一终端110的显示功能。
GPIO接口可以通过软件配置。GPIO接口可以被配置为控制信号,也可被配置为数据信号。在一些实施例中,GPIO接口可以用于连接处理器1110与摄像头1191,显示屏1192,无线通信模块1160,音频模块1170,传感器模块1180等。GPIO接口还可以被配置为I2C接口,I2S接口,UART接口,MIPI接口等。
USB接口1130是符合USB标准规范的接口，具体可以是Mini USB接口，Micro USB接口，USB Type C接口等。USB接口1130可以用于连接充电器为第一终端110充电，也可以用于第一终端110与外围设备之间传输数据。也可以用于连接耳机，通过耳机播放音频。该接口还可以用于连接其他终端，例如AR设备等。
可以理解的是,本申请实施例示意的各模块间的接口连接关系,只是示意性说明,并不构成对第一终端110的结构限定。在另一些实施例中,第一终端110也可以采用上述实施例中不同的接口连接方式,或多种接口连接方式的组合。
充电管理模块1140用于从充电器接收充电输入。其中,充电器可以是无线充电器,也可以是有线充电器。在一些有线充电的实施例中,充电管理模块1140可以通过USB接口1130接收有线充电器的充电输入。在一些无线充电的实施例中,充电管理模块1140可以通过第一终端110的无线充电线圈接收无线充电输入。充电管理模块1140为电池1142充电的同时,还可以通过电源管理模块1141为第一终端110供电。
电源管理模块1141用于连接电池1142,充电管理模块1140与处理器1110。电源管理模块1141接收电池1142和/或充电管理模块1140的输入,为处理器1110,内部存储器1121,外部存储器1120,显示屏1192,摄像头1191,和无线通信模块1160等供电。电源管理模块1141还可以用于监测电池容量,电池循环次数,电池健康状态(漏电,阻抗)等参数。在其他一些实施例中,电源管理模块1141也可以设置于处理器1110中。在另一些实施例中,电源管理模块1141和充电管理模块1140也可以设置于同一个器件中。
第一终端110的无线通信功能可以通过天线1,天线2,移动通信模块1150,无线通信模块1160,调制解调处理器以及基带处理器等实现。
天线1和天线2用于发射和接收电磁波信号。第一终端110中的每个天线可用于覆盖单个或多个通信频带。不同的天线还可以复用,以提高天线的利用率。例如:可以将天线1复用为无线局域网的分集天线。在另外一些实施例中,天线可以和调谐开关结合使用。
移动通信模块1150可以提供应用在第一终端110上的包括2G/3G/4G/5G等无线通信的解决方案。移动通信模块1150可以包括至少一个滤波器,开关,功率放大器,低噪声放大器(low noise amplifier,LNA)等。移动通信模块1150可以由天线1接收电磁波,并对接收的电磁波进行滤波,放大等处理,传送至调制解调处理器进行解调。移动通信模块1150还可以对经调制解调处理器调制后的信号放大,经天线1转为电磁波辐射出去。在一些实施例中,移动通信模块1150的至少部分功能模块可以被设置于处理器1110中。在一些实施例中,移动通信模块1150的至少部分功能模块可以与处理器1110的至少部分模块被设置在同一个器件中。
调制解调处理器可以包括调制器和解调器。其中,调制器用于将待发送的低频基带信号调制成中高频信号。解调器用于将接收的电磁波信号解调为低频基带信号。随后解调器将解调得到的低频基带信号传送至基带处理器处理。低频基带信号经基带处理器处理后,被传递给应用处理器。应用处理器通过音频设备(不限于扬声器1170A,受话器1170B等)输出声音信号,或通过显示屏1192显示图像或视频。在一些实施例中,调制解调处理器可以是独立的器件。在另一些实施例中,调制解调处理器可以独立于处理器1110,与移动通信模块1150或其他功能模块设置在同一个器件中。
无线通信模块1160可以提供应用在第一终端110上的包括无线局域网(wireless local area networks,WLAN)(如无线保真(wireless fidelity,Wi-Fi)网络)，蓝牙(bluetooth,BT)，全球导航卫星系统(global navigation satellite system,GNSS)，调频(frequency modulation,FM)，近距离无线通信技术(near field communication,NFC)，红外技术(infrared,IR)等无线通信的解决方案。无线通信模块1160可以是集成至少一个通信处理模块的一个或多个器件。无线通信模块1160经由天线2接收电磁波，将电磁波信号调频以及滤波处理，将处理后的信号发送到处理器1110。无线通信模块1160还可以从处理器1110接收待发送的信号，对其进行调频，放大，经天线2转为电磁波辐射出去。
在一些实施例中,第一终端110的天线1和移动通信模块1150耦合,天线2和无线通信模块1160耦合,使得第一终端110可以通过无线通信技术与网络以及其他设备通信。
所述无线通信技术可以包括全球移动通讯系统(global system for mobile communications,GSM)，通用分组无线服务(general packet radio service,GPRS)，码分多址接入(code division multiple access,CDMA)，宽带码分多址(wideband code division multiple access,WCDMA)，时分码分多址(time-division code division multiple access,TD-SCDMA)，长期演进(long term evolution,LTE)，BT，GNSS，WLAN，NFC，FM，和/或IR技术等。所述GNSS可以包括全球卫星定位系统(global positioning system,GPS)，全球导航卫星系统(global navigation satellite system,GLONASS)，北斗卫星导航系统(beidou navigation satellite system,BDS)，准天顶卫星系统(quasi-zenith satellite system,QZSS)和/或星基增强系统(satellite based augmentation systems,SBAS)。
第一终端110通过GPU,显示屏1192,以及应用处理器等实现显示功能。GPU为图像处理的微处理器,连接显示屏1192和应用处理器。GPU用于执行数学和几何计算,用于图形渲染。处理器1110可包括一个或多个GPU,其执行程序指令以生成或改变显示信息。
显示屏1192用于显示图像，视频等。显示屏1192包括显示面板。显示面板可以采用液晶显示屏(liquid crystal display,LCD)，有机发光二极管(organic light-emitting diode,OLED)，有源矩阵有机发光二极体或主动矩阵有机发光二极体(active-matrix organic light emitting diode,AMOLED)，柔性发光二极管(flex light-emitting diode,FLED)，Miniled，MicroLed，Micro-oLed，量子点发光二极管(quantum dot light emitting diodes,QLED)等。在一些实施例中，第一终端110可以包括1个或N个显示屏1192，N为大于1的正整数。在本申请实施例中，显示屏1192可以做挖孔处理，例如，在显示屏1192的左上角或者右上角等位置设置有通孔，摄像头1191可以嵌设于该通孔内。
第一终端110可以通过ISP,摄像头1191,视频编解码器,GPU,显示屏1192以及应用处理器等实现拍摄功能。
ISP用于处理摄像头1191反馈的数据。例如,拍照时,打开快门,光线通过镜头被传递到摄像头感光元件上,光信号转换为电信号,摄像头感光元件将所述电信号传递给ISP处理,转化为肉眼可见的图像。ISP还可以对图像的噪点,亮度,肤色进行算法优化。ISP还可以对拍摄场景的曝光,色温等参数优化。在一些实施例中,ISP可以设置在摄像头1191中。
摄像头1191用于捕获静态图像或视频。物体通过镜头生成光学图像投射到感光元件。感光元件可以是电荷耦合器件(charge coupled device,CCD)或互补金属氧化物半导体(complementary metal-oxide-semiconductor,CMOS)光电晶体管。感光元件把光信号转换成电信号，之后将电信号传递给ISP转换成数字图像信号。ISP将数字图像信号输出到DSP加工处理。DSP将数字图像信号转换成标准的RGB，YUV等格式的图像信号。在一些实施例中，第一终端110可以包括1个或N个摄像头1191，N为大于1的正整数。可选地，摄像头在第一终端110上的位置可以为前置的，也可以为后置的，本申请实施例对此不作限定。可选地，第一终端110可以包括单摄像头、双摄像头或三摄像头等，本申请实施例对此不作限定。例如，手机可以包括三摄像头，其中，一个为主摄像头、一个为广角摄像头、一个为长焦摄像头。可选地，当第一终端110包括多个摄像头时，这多个摄像头可以全部前置，或者可以全部后置，或者可以一部分前置、另一部分后置，本申请实施例对此不作限定。
数字信号处理器用于处理数字信号,除了可以处理数字图像信号,还可以处理其他数字信号。例如,当第一终端110在频点选择时,数字信号处理器用于对频点能量进行傅里叶变换等。
视频编解码器用于对数字视频压缩或解压缩。第一终端110可以支持一种或多种视频编解码器。这样,第一终端110可以播放或录制多种编码格式的视频,例如:动态图像专家组(moving picture experts group,MPEG)1,MPEG2,MPEG3,MPEG4等。
NPU为神经网络(neural-network,NN)计算处理器,通过借鉴生物神经网络结构,例如借鉴人脑神经元之间传递模式,对操作事件快速处理,还可以不断的自学习。通过NPU可以实现第一终端110的智能认知等应用,例如:图像识别,人脸识别,语音识别,文本理解等。
外部存储器接口1120可以用于连接外部存储卡,例如Micro SD卡,实现扩展第一终端110的存储能力。外部存储卡通过外部存储器接口1120与处理器1110通信,实现数据存储功能。例如将音乐,视频等文件保存在外部存储卡中。
内部存储器1121可以用于存储计算机可执行程序代码,所述可执行程序代码包括指令。处理器1110通过运行存储在内部存储器1121的指令,从而执行第一终端110的各种功能应用以及数据处理。
内部存储器1121可以包括存储程序区和存储数据区。其中，存储程序区可存储操作系统，至少一个功能所需的应用程序(比如声音播放功能，图像播放功能等)等。存储数据区可存储第一终端110使用过程中所创建的数据(比如音频数据，电话本等)等。此外，内部存储器1121可以包括高速随机存取存储器，还可以包括非易失性存储器，例如至少一个磁盘存储器件，闪存器件，通用闪存存储器(universal flash storage,UFS)等。
第一终端110可以通过音频模块1170,扬声器1170A,受话器1170B,麦克风1170C,耳机接口(图中未示意),以及应用处理器等实现音频功能。例如音乐播放,录音等。
音频模块1170用于将数字音频信息转换成模拟音频信号输出,也用于将模拟音频输入转换为数字音频信号。音频模块1170还可以用于对音频信号编码和解码。在一些实施例中,音频模块1170可以设置于处理器1110中,或将音频模块1170的部分功能模块设置于处理器1110中。
扬声器1170A,也称“喇叭”,用于将音频电信号转换为声音信号。第一终端110可以通过扬声器1170A收听音乐,或收听免提通话。
受话器1170B,也称“听筒”,用于将音频电信号转换成声音信号。当第一终端110接听电话或语音信息时,可以通过将受话器1170B靠近人耳接听语音。
麦克风1170C,也称“话筒”,“传声器”,用于将声音信号转换为电信号。当拨打电话或发送语音信息时,用户可以通过人嘴靠近麦克风1170C发声,将声音信号输入到麦克风1170C。第一终端110可以设置至少一个麦克风1170C。在另一些实施例中,第一终端110可以设置两个麦克风1170C,除了采集声音信号,还可以实现降噪功能。在另一些实施例中,第一终端110还可以设置三个,四个或更多麦克风1170C,实现采集声音信号,降噪,还可以识别声音来源,实现定向录音功能等。
耳机接口用于连接有线耳机。耳机接口可以是USB接口1130，也可以是3.5mm的开放移动终端平台(open mobile terminal platform,OMTP)标准接口，美国蜂窝电信工业协会(cellular telecommunications industry association of the USA,CTIA)标准接口。
压力传感器1180A用于感受压力信号,可以将压力信号转换成电信号。在一些实施例中, 压力传感器1180A可以设置于显示屏1192。
压力传感器1180A的种类很多，如电阻式压力传感器，电感式压力传感器，电容式压力传感器等。电容式压力传感器可以是包括至少两个具有导电材料的平行板。当有力作用于压力传感器1180A，电极之间的电容改变。第一终端110根据电容的变化确定压力的强度。当有触摸操作作用于显示屏1192，第一终端110根据压力传感器1180A检测所述触摸操作强度。
第一终端110也可以根据压力传感器1180A的检测信号计算触摸的位置。在一些实施例中，作用于相同触摸位置，但不同触摸操作强度的触摸操作，可以对应不同的操作指令。例如：当有触摸操作强度小于第一压力阈值的触摸操作作用于短消息应用图标时，执行查看短消息的指令。当有触摸操作强度大于或等于第一压力阈值的触摸操作作用于短消息应用图标时，执行新建短消息的指令。在一个例子中，第一终端110还可以根据压力传感器1180A的检测信号计算触摸的面积。
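上述"同一触摸位置、不同按压力度对应不同指令"的分发逻辑可示意如下（Java），其中压力阈值数值与指令名称均为假设：

```java
/** 按压力度到指令的示意映射（阈值与指令名均为假设）。 */
public final class PressureDispatcher {
    private static final float FIRST_PRESSURE_THRESHOLD = 0.6f; // 第一压力阈值（示意值）

    /** 轻按返回查看短消息指令，重按返回新建短消息指令。 */
    public static String dispatch(float pressure) {
        return pressure < FIRST_PRESSURE_THRESHOLD ? "VIEW_MESSAGE" : "NEW_MESSAGE";
    }
}
```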
姿态传感器1180B，可以用于确定第一终端110的运动姿态。包含三轴陀螺仪、三轴加速度计，三轴电子罗盘等运动传感器，通过内嵌的低功耗ARM处理器得到经过温度补偿的三维姿态与方位等数据。在一些示例中，可以通过姿态传感器1180B中的陀螺仪传感器确定第一终端110围绕三个轴(即，x，y和z轴)的角速度。在一些示例中，姿态传感器1180B中的加速度传感器可检测第一终端110在各个方向上(一般为三轴)加速度的大小。当第一终端110静止时可检测出重力的大小及方向。还可以用于识别终端的姿态，应用于横竖屏切换，计步器等应用。另外，还可以感知到由于指关节的重力加速度所带来的震动频率。在一些示例中，基于姿态传感器1180B和距离传感器1180C可以实现隔空截屏。
距离传感器1180C,用于测量距离。第一终端110可以通过红外或激光测量距离。在一些实施例中,拍摄场景,第一终端110可以利用距离传感器1180C测距以实现快速对焦。在一些实施例中,截屏场景,第一终端110可以利用距离传感器1180C测量手和第一终端110之间的距离。
触摸传感器1180D,也称“触控面板”。触摸传感器1180D可以设置于显示屏1192,由触摸传感器1180D与显示屏1192组成触摸屏,也称“触控屏”。
触摸传感器1180D用于检测作用于其上或附近的触摸操作。触摸传感器可以将检测到的触摸操作传递给应用处理器，以确定触摸事件类型。可以通过显示屏1192提供与触摸操作相关的视觉输出。在另一些实施例中，触摸传感器1180D也可以设置于第一终端110的表面，与显示屏1192所处的位置不同。
传感器模块1180还可以包括气压传感器、磁传感器，接近光传感器，指纹传感器，温度传感器，环境光传感器，骨传感器等。
按键1190包括开机键,音量键等。按键1190可以是机械按键。也可以是触摸式按键。第一终端110可以接收按键输入,产生与第一终端110的用户设置以及功能控制有关的键信号输入。
另外,第一终端110中还可以包括马达、指示器、SIM接口等。
示例性的，第二终端120可以包括处理器，内部存储器，通用串行总线(universal serial bus,USB)接口，充电管理模块，电源管理模块，电池，天线，移动通信模块，无线通信模块，音频模块，扬声器，受话器，麦克风，传感器模块，按键，摄像头以及显示屏等。其中传感器模块可以包括压力传感器，触摸传感器、姿态传感器，距离传感器等。具体内容可参见上文，此处不做过多赘述。值得注意的是，上述内容仅仅是对第二终端120的结构的一个示例，并不构成对第二终端120的具体限定。在另一些实施例中，第二终端120可以包括比图9所示更多或更少的部件，或者组合某些部件，或者拆分某些部件，或者不同的部件布置。
第一终端110的软件系统可以采用分层架构，事件驱动架构，微核架构，微服务架构，或云架构。本申请实施例以分层架构的Android系统为例，示例性说明第一终端110的软件结构。
图10a是本申请实施例的第一终端110的软件结构框图。
分层架构将软件分成若干个层,每一层都有清晰的角色和分工。层与层之间通过软件接口通信。在一些实施例中,将Android系统分为四层,从上至下分别为应用程序层,应用程序框架层,系统层(包括安卓运行时(Android runtime)和系统库),以及内核层(Kernel)。
应用程序层可以包括一系列应用程序包。
如图10a所示,应用程序包可以包括相机,图库,日历,通话,地图,导航,WLAN,蓝牙,音乐,视频,短信息,截屏,录屏等应用程序。
应用程序框架层为应用程序层的应用程序提供应用编程接口(application programming interface,API)和编程框架。应用程序框架层包括一些预先定义的函数，例如用于接收应用程序层所发送的事件的函数。
如图10a所示,应用程序框架层可以包括窗口管理器,内容提供器,视图系统,电话管理器,资源管理器,通知管理器等。
窗口管理器用于管理窗口程序。窗口管理器可以获取显示屏大小,判断是否有状态栏,锁定屏幕,截取屏幕等。
内容提供器用来存放和获取数据,并使这些数据可以被应用程序访问。所述数据可以包括视频,图像,音频,拨打和接听的电话,浏览历史和书签,电话簿等。
资源管理器为应用程序提供各种资源,比如本地化字符串,图标,图片,布局文件,视频文件等等。
通知管理器使应用程序可以在状态栏中显示通知信息,可以用于传达告知类型的消息,可以短暂停留后自动消失,无需用户交互。比如通知管理器被用于告知下载完成,消息提醒等。
通知管理器还可以是以图表或者滚动条文本形式出现在系统顶部状态栏的通知,例如后台运行的应用程序的通知,还可以是以对话窗口形式出现在屏幕上的通知。例如在状态栏提示文本信息,发出提示音,第一终端110振动,指示灯闪烁等。
应用程序框架层还可以包括:
视图系统包括可视控件,例如显示文字的控件,显示图片的控件等。视图系统可用于构建应用程序。显示界面可以由一个或多个视图组成的。例如,包括短信通知图标的显示界面,可以包括显示文字的视图以及显示图片的视图。
电话管理器用于提供第一终端110的通信功能。例如通话状态的管理(包括接通,挂断等)。
系统库可以包括多个功能模块。例如:表面管理器(surface manager),媒体库(Media Libraries),三维图形处理库(例如:OpenGL ES),2D图形引擎(例如:SGL)等。
表面管理器用于对显示子系统进行管理,并且为多个应用程序提供了2D和3D图层的融合。
媒体库支持多种常用的音频,视频格式回放和录制,以及静态图像文件等。媒体库可以 支持多种音视频编码格式,例如:MPEG4,H.264,MP3,AAC,AMR,JPG,PNG等。
三维图形处理库用于实现三维图形绘图,图像渲染,合成,和图层处理等。
2D图形引擎是2D绘图的绘图引擎。
系统库还可以包括:
传感器服务模块,用于对硬件层各类传感器上传的传感器数据进行监测,确定第一终端110的物理状态;
物理状态识别模块,用于对用户手势、人脸等进行分析和识别,可以包括指关节算法;
Android Runtime包括核心库和虚拟机。Android runtime负责安卓系统的调度和管理。
核心库包含两部分:一部分是java语言需要调用的功能函数,另一部分是安卓的核心库。
应用程序层和应用程序框架层运行在虚拟机中。虚拟机将应用程序层和应用程序框架层的java文件执行为二进制文件。虚拟机用于执行对象生命周期的管理,堆栈管理,线程管理,安全和异常的管理,以及垃圾回收等功能。
内核层(Kernel)是硬件和软件之间的层。内核层(Kernel)至少包含显示驱动，摄像头驱动，音频驱动，传感器驱动，用于驱动硬件层的相关硬件，如显示屏、摄像头、扬声器、传感器等。
第二终端120的软件系统可以采用分层架构，事件驱动架构，微核架构，微服务架构，或云架构。优选地，第二终端120的软件系统和第一终端的软件系统的架构是一致的。本申请实施例以分层架构的Android系统为例，示例性说明第二终端120的软件结构。
图10b是本申请实施例的第二终端120的软件结构框图。
分层架构将软件分成若干个层,每一层都有清晰的角色和分工。层与层之间通过软件接口通信。在一些实施例中,将Android系统分为四层,从上至下分别为应用程序层,应用程序框架层,系统层(包括安卓运行时(Android runtime)和系统库),以及内核层(Kernel)。
应用程序层可以包括一系列应用程序包。
如图10b所示,应用程序包可以包括实现与第一终端110进行投屏连接的协同应用,与第一终端110连接的WLAN,蓝牙等应用程序。
应用程序框架层为应用程序层的应用程序提供应用编程接口(application programming interface,API)和编程框架。如图10b所示，应用程序框架层可以包括内容提供器，视图系统，资源管理器，通知管理器等，详细内容参见上文描述。
系统库可以包括多个功能模块。例如:表面管理器(surface manager)、传感器服务模块和物理状态识别模块等。详细内容参见上文描述。
Android Runtime包括核心库和虚拟机。详细内容参见上文描述。
内核层(Kernel)至少包含显示驱动，触摸传感器驱动，以驱动触控屏。另外还可以包括音频驱动，姿态传感器驱动，以驱动扬声器、麦克风、姿态传感器等硬件。
值得注意的是，图10b是本申请实施例提供的一种软件结构的示意图，对软件结构中的内容不构成任何限制，例如，第二终端120的软件架构中的内容可以为图10a所示的内容，也可以多于或少于图10a所示的内容。
应当理解，第一终端110和第二终端120的软件框架可以相同也可以不同，但是第一终端110和第二终端120之间是能够进行数据交互的。
下面结合图7b截屏的场景，图10a和图10b所示的软件系统，示例性地说明第一终端110和第二终端120的软硬件实现截屏的工作流程。
第二终端120当触摸传感器1180D接收到触摸操作时，相应的硬件中断信息被发给内核层(Kernel)。内核层(Kernel)将硬件中断信息加工成输入事件(包括触摸坐标，触摸操作的时间戳等信息)，并将输入事件分发给系统层，系统层将输入事件分发至应用程序框架层，应用程序框架层将输入事件加工成操作事件(指示了用户点击的控件为下拉通知菜单的截屏控件)。将操作事件发送到第一终端110，获取第一终端110发送的截图预览画面，通过调用内核层(Kernel)启动显示驱动，通过显示屏1192显示相应的截图预览画面。
第一终端110的用户空间获取第二终端120发送的操作事件,读取该操作事件的信息,触发截屏应用调用系统层的接口,截取内部缓存的第二终端120的当前整个屏幕的像素点的像素值,并添加预览小图,得到截图预览画面。将截图预览画面发送至对应的第二终端120。
下面结合图8b录屏的场景，示例性地说明第一终端110和第二终端120的软硬件实现录屏的工作流程。
第二终端120当触摸传感器1180D接收到触摸行为时，相应的硬件中断信息被发给内核层(Kernel)。内核层(Kernel)将触摸操作加工成输入事件(包括触摸坐标，触摸操作的时间戳等信息)，并将输入事件分发给系统层。系统层将输入事件分发至应用程序框架层。应用程序框架层将输入事件加工成操作事件(指示了用户点击的控件为下拉通知菜单的录屏控件)，将操作事件发送到第一终端110。当接收到第一终端110发送的录屏指令时，通过调用内核层(Kernel)启动音频驱动，通过麦克风1170C采集输入第二终端120的声音信号并将其上报至第一终端110。获取第一终端110发送的录屏画面，进而通过调用内核层(Kernel)启动显示驱动，通过显示屏1192显示相应的录屏画面。
第一终端110的用户空间获取第二终端120发送的操作事件,读取该操作事件的信息,触发录屏应用调用系统层的接口,截取内部缓存的第二终端的当前整个屏幕的像素点的像素值,并绘制整个屏幕,得到录屏画面,同时基于第二终端120上报的在录制过程中麦克风采集的声音信号和录屏画面,生成录屏文件。将录屏画面发送至对应的第二终端120。
示例性的,以下实施例中所涉及的技术方案均可以在上述投屏系统中实现。
以上即是对本申请实施例中涉及的投屏系统,以及该投屏系统中各个组成部分的介绍。接下来基于上述描述的投屏系统、图13a所示的跨设备交互方案,对本申请实施例中涉及的指关节操作的应用场景进行详细介绍。
A1.第一终端建立和第二终端的投屏连接。
在一个例子中,第二终端和第一终端通过上述网络连接在一起,实现分布式软总线连接,以将第一终端显示的画面投屏到第二终端进行显示。
在一个例子中,第一终端和第二终端之间的投屏方式为异源投屏,则第一终端能够建立第二终端的虚拟屏幕,虚拟屏幕和第二终端的屏幕尺寸适配,后续,基于第二终端的设备标识即可识别第二终端的虚拟屏幕,对第二终端的虚拟屏幕的画面执行操作指令即可。
另外,若第二终端上携带有麦克风,第一终端能够对第二终端的麦克风进行管理。后续,在录屏或其他需要麦克风采集声音信号的场景下,第二终端将麦克风采集的声音信号上报给第一终端。
A2.第一终端发送目标画面至第二终端。
作为一种可行的实现方式,第一终端和第二终端之间的投屏方式为镜像投屏。在一个例子中,目标画面为第一终端当前显示的画面的镜像。
作为一种可行的实现方式,第一终端和第二终端之间的投屏方式为异源投屏。在一个例子中,目标画面为第一终端未显示的画面。在一个例子中,目标画面为第一终端当前显示的画面的镜像,后续,第一终端和第二终端的显示的画面彼此独立,互不干扰。
需要说明的是,目标画面的尺寸和第二终端的屏幕尺寸适配。
A3.第二终端显示目标画面。
示例地,目标画面可以是如图7a-图7d、图7f、8a、图8b所示的视频画面,也可以是图7d所示的天气画面。
A4.第二终端根据用户针对目标画面做出的指关节操作生成输入事件。
作为一种可行的实现方式,输入事件可以包括指关节触摸信息和指关节按压信息。详细内容参见上文,此处不做过多赘述。
在一个例子中,输入事件可以由内核空间对用户针对目标画面的指关节操作所产生的硬件中断信息进行处理后生成的,也可以为内核层对硬件层上报的用户针对目标画面的指关节操作所产生的硬件中断信息进行处理后生成,内核空间、内核层、硬件层参见上文描述,此处不做过多赘述。
A5.第二终端当基于指关节算法识别到输入事件由用户的指关节产生时,确定输入事件的指关节标识。
指关节算法参见上文,此处不做过多赘述。
在一个例子中,第一终端和第二终端的指关节算法相同,示例地,第二终端的指关节算法从第一终端移植得到。对应的,指关节标识可由指关节算法确定。
在一个例子中,第一终端和第二终端的指关节算法不同。对应的,第二终端的指关节算法识别到输入事件由用户的指关节产生时,将指关节算法确定的指关节标识转换成第一终端可以识别的指关节标识,或者,确定可以由第一终端识别的,可指示输入事件由用户的指关节产生的指关节标识。
A6.第二终端将输入事件和指关节标识封装成操作事件。
需要说明的是,操作事件应当是第一终端可以处理的事件,换言之,操作事件符合第一终端和第二终端之间的数据交互协议。另外为了确保第一终端和第二终端之间的反应速度和处理性能,由第二终端进行指关节的识别,第一终端进行指关节动作的识别。
在一个例子中,第二终端将输入事件、指关节标识、第二终端的设备标识封装成操作事件,以使第一终端可以识别到用户操作的终端和指关节动作。
示例地，用户的手势操作可以为图7c所示的指关节双击、图7d所示的指关节画闭合图形、图7e所示的指关节画S、图8b所示的双指关节双击。对应地，第二终端120生成的操作事件包括指关节标识。
A7.第二终端发送操作事件至第一终端。
A8.第一终端识别操作事件,以确定用户针对目标画面的指关节动作。
第一终端可以基于操作事件中的指关节触摸信息和指关节按压信息进行指关节动作的识别,从而确定用户针对目标画面的指关节动作。
示例地,指关节动作可以为图7c所示的指关节双击,图7d所示的指关节画闭合图形,图7e所示的指关节画S,图8b所示的双指关节双击。
A9.第一终端基于指关节动作,确定操作事件对应的操作指令。
在一种可行的实现方式中,第一终端存储有指关节动作和操作指令的对应关系。基于指关节动作和存储的指关节动作的匹配,以确定匹配的指关节动作对应的操作指令。
示例地,指关节动作为图8b所示的双指关节双击,对应的,操作指令为录屏指令。
示例地,指关节动作为图7c所示的指关节双击,对应的,操作指令为截屏指令。
示例地,指关节动作为图7d所示的指关节画闭合图形,对应的,操作指令为局部截屏指令。
示例地,指关节动作为图7e所示的指关节敲击画S,对应的,操作指令为滚动截屏指令。
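上述"指关节动作与操作指令的对应关系"可以用一张简单的映射表来示意（Java）。表中的动作名与指令名均为示意性假设，并非本申请实际使用的标识：

```java
import java.util.HashMap;
import java.util.Map;

/** 指关节动作→操作指令的示意映射表（键名与指令名均为假设）。 */
public final class KnuckleGestureMap {
    private static final Map<String, String> RULES = new HashMap<>();
    static {
        RULES.put("DOUBLE_TAP", "SCREENSHOT");                 // 指关节双击→截屏
        RULES.put("DOUBLE_KNUCKLE_DOUBLE_TAP", "RECORD");      // 双指关节双击→录屏
        RULES.put("DRAW_S", "SCROLL_SCREENSHOT");              // 指关节画S→滚动截屏
        RULES.put("DRAW_CLOSED_SHAPE", "PARTIAL_SCREENSHOT");  // 画闭合图形→局部截屏
    }

    /** 基于识别出的指关节动作匹配操作指令；未匹配时返回null。 */
    public static String instructionFor(String gesture) {
        return RULES.get(gesture);
    }
}
```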
在一些可行的实现方式中,第一终端存储有指关节动作和画面,以及指关节动作和画面对应的操作指令。基于指关节动作和目标画面与存储的指关节动作和画面的匹配,以确定匹配的指关节动作和画面对应的操作指令。
需要说明的是,本申请实施例并不限定操作指令的确定是基于指关节动作还是基于指关节动作和画面确定,具体需要结合实际需求确定。
在一些可能的实现方式中，指关节可以代替手指肚在屏幕上操作，实现手指肚所能实现的功能，比如，打开微信、相机等应用，对应用中的各种控件进行操作，比如，点击视频画面中的暂停按钮等，此时，确定操作事件对应的操作指令不仅需要基于指关节动作，还需要基于指关节操作位置的信息，比如，指关节点击微信图标，则指关节操作位置的信息为微信图标。
在一个例子中，第一终端基于操作事件中的指关节触摸信息、目标画面的布局信息和目标画面的尺寸信息，识别出用户的指关节操作位置的信息，并根据指关节动作和指关节操作位置的信息确定操作指令。
在一个例子中，第二终端预先存储目标画面的布局信息，则第一终端识别到确定该操作事件的操作指令需要基于指关节操作位置的信息确定时，将指关节操作位置信息请求发送至第二终端，第二终端基于预先存储的目标画面的布局信息确定用户的指关节操作位置的信息后发给第一终端，以使第一终端确定操作事件对应的操作指令。
A10.第一终端对目标画面执行操作事件对应的操作指令。
在一种可行的实现方式中,第一终端对目标画面执行操作事件对应的操作指令,生成待显示的画面。示例地,操作指令为截屏指令,则待显示的画面为截屏预览画面。比如,当用户对第二终端的触摸屏进行如图7c所示的指关节双击,图8a和图8b所示的点击录屏图标122的操作时,对应的,待显示的画面可以为图7c、图8a和图8b所示的截图预览画面。当用户对第二终端的触摸屏进行如图7d所示的指关节画闭合图形时,对应的,待显示的画面可以为图7d所示的截图编辑画面。当用户对第二终端的触摸屏进行如图7e所示的指关节画S时,对应的,待显示的画面有多个,指示了图7e所示的屏幕滑动效果和截图编辑画面。
进一步的,在一个例子中,当第一终端和第二终端之间的投屏方式为镜像投屏时,可选地,第一终端和第二终端均可以显示待显示的画面。
在一个例子中,第一终端和第二终端之间的投屏方式为异源投屏,第二终端显示待显示的画面。
在一个例子中,第一终端对目标画面执行操作事件对应的操作指令,生成无需显示的画面,比如,当目标画面为图7d或图7e所示的截图编辑画面,用户点击截图编辑画面中的保存按钮,操作指令为保存截图,则第二终端并不会显示保存的画面。
在一个例子中,第一终端对目标画面执行操作事件对应的操作指令,不会生成画面,比如,当目标画面为图7d或图7e所示的截图编辑画面,用户点击截图编辑画面中的删除按钮,操作指令为删除截图,则第一终端执行操作指令并不会生成待显示的画面。
以上即是对本申请实施例中涉及的投屏系统,以及该投屏系统中各个组成部分的介绍。接下来基于上述描述的投屏系统、图13b所示的跨设备交互方案,对本申请实施例中涉及的下拉菜单中截屏按钮或录屏按钮所在区域的点击操作的应用场景进行详细介绍。
B1.第一终端建立和第二终端的投屏连接。
详细内容参见上文,此处不做过多赘述。
B2.第一终端发送目标画面至第二终端。
详细内容参见上文,此处不做过多赘述。
B3.第二终端显示目标画面。
详细内容参见上文,此处不做过多赘述。
B4.第二终端根据用户针对目标画面做出的滑动操作生成第一输入事件。
作为一种可行的实现方式,第一输入事件包括事件类型和手指滑动操作信息。其中,手指滑动操作信息可以包括滑动方向,滑动距离,滑动时长等,事件类型可以为按压、滑动、抬起等。
在一个例子中,第一输入事件还可以包括第二终端的设备标识,以便于第一终端识别用户操作的终端。
在一个例子中,第一输入事件可以由内核空间对用户针对目标画面的滑动操作所产生的硬件中断信息进行处理后生成的,也可以为内核层对硬件层上报的用户针对目标画面的滑动操作所产生的硬件中断信息进行处理后生成,内核空间、内核层、硬件层参见上文描述,此处不做过多赘述。
B5.第二终端发送第一输入事件至第一终端。
B6.第一终端识别第一输入事件为显示下拉菜单。
在一个例子中，第一终端识别第一输入事件，了解用户操作的终端和操作行为。
示例地,第一终端识别到的手势操作可以为图7b所示的从顶部向下滑动,则操作指令为显示下拉菜单。
B7.第一终端将下拉菜单画面和下拉菜单画面的布局信息发送至第二终端。
在一个例子中,下拉菜单画面的布局信息指示了下拉菜单中的各种按钮的位置信息和绑定的标识。详细内容参见上文,此处不做过多赘述。
B8.第二终端在目标画面的基础上显示下拉菜单画面。
B9.第二终端根据用户针对下拉菜单画面中的截屏按钮或录屏按钮所在区域做出的点击操作生成第二输入事件。
作为一种可行的实现方式,第二输入事件包括手指点击操作信息和事件类型。其中,手指点击操作信息包括手指触摸位置信息和手指触摸时间信息,示例地,手指触摸位置信息可以为点击的第二终端的触摸屏上的多个像素点各自在屏幕坐标系下的坐标,手指触摸时间信息可以为点击的第二终端的触摸屏上的多个像素点各自的触摸时刻。事件类型可以为按压和抬起。
在一个例子中,第二输入事件可以由内核空间对用户针对下拉菜单画面的点击操作所产生的硬件中断信息进行处理后生成的,也可以为内核层对硬件层上报的用户针对下拉菜单画面的点击操作所产生的硬件中断信息进行处理后生成,内核空间、内核层、硬件层参见上文描述,此处不做过多赘述。
B10.第二终端根据第二输入事件和下拉菜单画面的布局信息,确定截屏按钮或录屏按钮的标识。
作为一种可行的实现方式，第二终端判断第二输入事件是由用户点击产生，即只有一次按下和抬起，且按下的时长较短，基于第二输入事件中手指触摸位置信息和下拉菜单画面中的各种按钮的位置信息的比对，即可确定用户点击了截屏按钮或录屏按钮，并确定截屏按钮或录屏按钮的标识。需要说明的是，基于手指触摸时间信息，可以知晓用户的触摸时长，从而了解是点击操作还是长按操作。
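B10中"将手指触摸位置与下拉菜单布局信息比对以确定按钮标识"的过程可示意如下（Java）。其中按钮区域用矩形表示，标识字符串等命名为假设：

```java
import android.graphics.Rect;

import java.util.Map;

/** 基于布局信息做点击命中测试的示意。 */
public final class LayoutHitTester {
    /**
     * @param buttonRegions 下拉菜单布局信息：按钮标识→按钮所在矩形区域（第二终端屏幕坐标系）
     * @param x 点击位置的横坐标
     * @param y 点击位置的纵坐标
     * @return 命中的按钮标识（如"screenshot"或"record"，为假设的示意标识），未命中返回null
     */
    public static String hitTest(Map<String, Rect> buttonRegions, int x, int y) {
        for (Map.Entry<String, Rect> entry : buttonRegions.entrySet()) {
            if (entry.getValue().contains(x, y)) {
                return entry.getKey();
            }
        }
        return null;
    }
}
```

由于比对直接在第二终端的屏幕坐标系下进行，无需做两个终端之间的坐标系转换。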
B11.第二终端将截屏按钮或录屏按钮的标识和第二输入事件封装成操作事件。
在一个例子中,第二终端将截屏按钮或录屏按钮的标识、第二终端的设备标识和第二输入事件封装成操作事件,以使第一终端可以识别到用户操作的终端和操作行为。
示例地,用户的手势操作可以是图7b所示的点击下拉菜单中的截屏按钮,对应的,第二终端120可以识别到用户点击的是下拉菜单中的截屏按钮,以生成操作事件,该操作事件为截屏事件。
示例地,用户的手势操作可以是图8b所示的点击下拉菜单中的录屏按钮,对应的,第二终端120可以识别到用户点击的是下拉菜单中的录屏按钮,以生成操作事件,该操作事件为录屏事件。
B12.第二终端将操作事件发送至第一终端。
B13.第一终端识别操作事件,确定操作事件对应的目标画面的截屏指令或录屏指令。
作为一种可行的实现方式,第一终端对操作事件进行识别,确定用户在第二终端显示的下拉菜单上点击了截屏按钮或录屏按钮,第一终端可以确定操作指令为截屏指令或录屏指令。
在一个例子中,截屏指令指示了对目标画面进行截屏。示例地,若第一终端识别到了用户点击了图7b所示的下拉菜单中的截屏按钮,则操作指令为对图7b中的视频画面进行截屏。
在一个例子中,录屏指令指示了从目标画面开始进行录屏。示例地,若第一终端识别到了用户点击了图8b所示的下拉菜单中的录屏按钮,则操作指令为对图8b中的视频画面开始进行录屏。
B14.对目标画面执行截屏指令或录屏指令。
作为一种可行的实现方式,对目标画面执行操作事件对应的操作指令,生成待显示的画面。
在一个例子中,操作指令为截屏指令,则待显示的画面为截屏预览画面。示例地,目标画面为图7b示出的视频画面,则执行截屏指令后可以得到图7b示出的截屏预览画面。
在一个例子中,操作指令为录屏指令,则待显示的画面为录屏预览画面。示例地,目标画面为图8b示出的视频画面,则执行录屏指令后可以得到图8b示出的录屏预览画面。
以上即是对本申请实施例中涉及的投屏系统，以及该投屏系统中各个组成部分的介绍。接下来基于上述描述的投屏系统、图13c所示的跨设备交互方案，对本申请实施例中涉及的按压第二终端的截屏按键的应用场景进行详细介绍。
C1.第一终端建立和第二终端的投屏连接。
详细内容参见上文,此处不做过多赘述。
C2.第一终端发送目标画面至第二终端。
详细内容参见上文,此处不做过多赘述。
C3.第二终端显示目标画面。
详细内容参见上文,此处不做过多赘述。
C4.第二终端根据用户按压第二终端的截屏按键生成操作事件。
作为一种可行的实现方式,操作事件包括键值和按键时间信息。详细内容参见上文,此处不做过多赘述。
在一个例子中,操作事件还可以包括第二终端的设备标识,以便于第一终端识别用户操作的终端。
在一个例子中,操作事件可以由内核空间对用户按压第二终端的截屏按键所产生的硬件中断信息进行处理后生成的,也可以为内核层对硬件层上报的用户按压第二终端的截屏按键所产生的硬件中断信息进行处理后生成,内核空间、内核层、硬件层参见上文描述,此处不做过多赘述。
示例地,用户的手势操作可以是图7a所示的按键(开关键+音量减键),对应的,第二终端120生成的操作事件包括按键的键值和按键时间信息,按键时间信息可以为短按。
C5.第二终端判断操作事件不是本地事件。
在一个例子中，本地事件可以理解为第二终端可以直接处理的事件，比如，用户按压第二终端的音量加键，音量增加事件由第二终端处理，该音量增加事件即为本地事件。
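C5中"判断操作事件是否为本地事件"可以用如下Java片段来示意：音量调节等由第二终端本地处理，其余按键事件（如截屏组合键）则转发给第一终端。本地键值集合为示意性假设：

```java
import android.view.KeyEvent;

import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

/** 本地事件判断的示意（本地键值集合为假设）。 */
public final class LocalEventFilter {
    // 示意假设：音量键由第二终端本地处理
    private static final Set<Integer> LOCAL_KEYS = new HashSet<>(Arrays.asList(
            KeyEvent.KEYCODE_VOLUME_UP, KeyEvent.KEYCODE_VOLUME_DOWN));

    /** 返回true表示由第二终端本地处理；否则应将操作事件发送至第一终端。 */
    public static boolean isLocalEvent(int keyCode) {
        return LOCAL_KEYS.contains(keyCode);
    }
}
```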
C6.第二终端发送操作事件至第一终端。
C7.第一终端识别操作事件,确定目标画面的截屏指令。
示例地,第一终端识别到的手势操作可以是图7a所示的按键(开关键+音量减键),对应的,操作指令为截屏指令。
C8.第一终端对目标画面执行截屏指令,以确定截屏预览画面。
示例地,当用户对第二终端进行如图7a所示的按键,对应的,第一终端可以生成图7a所示的截图预览画面。
需要说明的是,上述仅仅作为跨设备交互方案的示例,并不构成任何限定。在一些可行的实现方式中,用户的手势操作可以是图7f所示的出现手型图标后隔空抓握,如图7f所示,第二终端120会根据用户对出现手型图标121的视频画面做出的隔空抓握操作生成操作事件,该操作事件指示了出现手型图标后的隔空事件;之后,第一终端110可以识别操作事件对应的隔空动作为抓握,确定隔空抓握对应的操作指令为截屏指令,对图7f示出的视频播放画面进行截图,生成截图预览画面,并发送至第二终端120进行显示。该实现方式中,通过第二终端120实现隔空的识别,由第一终端110识别隔空动作,确保第一终端110和第二终端120之间的反应速度和性能。
应当理解,当第一终端和第二终端之间的投屏方式为异源投屏时,在第一终端执行操作指令时,基于第二终端的设备标识确定第二终端的虚拟屏幕,基于虚拟屏幕的目标画面执行操作指令。
接下来,基于上文所描述的跨设备交互方案,对本申请实施例提供的一种跨设备交互的方法进行介绍。可以理解的是,该方法是上文所描述的跨设备交互的方案的另一种表达方式,两者是相结合的。该方法是基于上文所描述的跨设备交互的方案提出,该方法中的部分或全部内容可以参见上文对跨设备交互的方案的描述。该方法应用于上述投屏系统,第一终端和第二终端之间通过有线或无线的方式建立投屏连接。
步骤101、第二终端显示第一终端发送的第一画面;
步骤102、第二终端根据用户针对第一画面做出的目标操作生成操作事件;其中,目标操作为截屏操作、录屏操作或指关节操作;
步骤103、第二终端发送操作事件;
步骤104、第一终端确定操作事件对应的操作指令,基于第一画面执行操作指令。
在一个例子中,第一画面可以为图13a-图13c示出的目标画面或下拉菜单画面,详细内容参见上文,此处不做过多赘述。
在一个例子中,目标操作是用户针对第一画面在第二终端做出的。详细内容参见上文,此次不做过多赘述。
作为一种可行的实现方式,目标操作为指关节操作;步骤102,包括:
第二终端根据用户针对第一画面做出的指关节操作生成第一输入事件;其中,第一输入事件包括指关节触摸信息和指关节按压信息;第二终端当基于指关节算法识别到第一输入事件由用户的指关节产生时,确定第一输入事件的指关节标识;第二终端将第一输入事件和指关节标识封装为操作事件。
在一个例子中，第一输入事件可以为上述内核空间或内核层处理用户针对第一画面做出的指关节操作所产生的硬件中断信息后生成。详细内容参见上文，此处不做过多赘述。
在一个例子中,第二终端生成操作事件所需的程序从第一终端移植得到。或者,第一终端和第二终端生成操作事件的代码是相同的。对应的,指关节标识可以由指关节算法确定。
在一个例子中,第一终端和第二终端生成操作事件的代码是不同的。对应的,指关节标识可以为第一终端可识别的,且指示第一输入事件由指关节产生的标识。另外,在一些可能的情况,也可以由第二终端向第一终端请求指关节标识。详细内容参见上文,此处不做过多赘述。
进一步地,步骤104,包括:
第一终端识别操作事件,以确定用户针对第一画面做出的指关节动作;第一终端基于指关节动作,确定操作事件对应的操作指令。
本实现方式,基于被投屏的第二终端识别指关节,而投屏的第一终端仅仅识别指关节动作,确保投屏的第一终端的反应速度和性能。
在一个例子中,第一终端还可以基于存储的第一画面和指关节动作,即指关节动作和画面关联,确定操作指令。比如,主页面和指关节画字母,操作指令可以为可以打开主页面的指定应用。
在一个例子中,第一终端基于指关节动作和指关节触摸位置的信息确定操作指令,此时指关节相当于手指肚。详细内容参见上文,此次不做过多赘述。
作为一种可行的实现方式,第二终端在显示的第一画面之前显示有目标画面;第一画面为下拉菜单画面;目标操作为针对下拉菜单画面中的截屏按钮或录屏按钮所在区域的点击操作;第二终端存储有下拉菜单画面的布局信息;操作指令为目标画面的截屏指令或者录屏指令;步骤102,包括:
第二终端根据用户针对下拉菜单画面中的截屏按钮或录屏按钮所在区域做出的点击操作生成第二输入事件;其中,第二输入事件包括手指点击操作信息;第二终端根据下拉菜单画面的布局信息和第二输入事件,确定截屏按钮或录屏按钮的标识;第二终端将截屏按钮或录屏按钮的标识和第二输入事件封装成操作事件。
在一个例子中，第二输入事件可以为上述内核空间或内核层处理用户针对下拉菜单画面中的截屏按钮或录屏按钮所在区域的点击操作所产生的硬件中断信息后生成。
在一个例子中,在第一终端和第二终端建立投屏连接时,将下拉菜单画面和下拉菜单画面的布局信息发送至第二终端进行存储。
在一个例子中,第二终端根据用户针对目标画面做出的滑动操作生成输入事件,并将输入事件发送至第一终端,第一终端识别输入事件为显示下拉菜单时,将下拉菜单画面和下拉菜单画面的布局信息发送至第二终端进行存储。
本实现方式中,基于被投屏的第二终端识别下拉菜单画面中的截屏按钮或录屏按钮,使得投屏的第一终端可以直接确定截屏指令或录屏指令,确保投屏的终端的反应速度和性能。另外,通过被投屏的第二终端识别下拉菜单中的按钮,无需考虑投屏的第一终端和被投屏的第二终端之间的屏幕尺寸的差异,确保操作事件的准确性。
作为一种可行的实现方式,目标操作为按压第二终端的截屏按键;步骤102,包括:
第二终端根据用户按压第二终端的截屏按键生成操作事件;其中,操作事件包括按键时间信息和键值;第二终端判断操作事件不是本地事件时,将操作事件发送至第一终端。
在一个例子中，操作事件可以为上述内核空间或内核层处理用户按压第二终端的截屏按键所产生的硬件中断信息后生成。
进一步地,步骤104,包括:
第一终端识别操作事件为截屏事件时,确定操作事件对应的操作指令为截屏指令。
作为一种可行的实现方式,第一终端和第二终端之间的投屏方式为异源投屏;第一终端设置有第二终端的虚拟屏幕;其中,第一画面为与虚拟屏幕的屏幕尺寸适配的画面,虚拟屏幕的屏幕尺寸和第二终端的屏幕尺寸适配;操作事件携带第二终端的设备标识,以使第一终端识别第二终端的设备标识以确定第二终端的虚拟屏幕。
作为一种可行的实现方式中,第一终端和第二终端之间的投屏方式为镜像投屏;步骤104,包括:
对第一终端显示的画面执行操作指令,其中,第一画面为第一终端显示的画面的镜像。
作为一种可行的实现方式中,方法还包括:
将基于第一画面执行操作指令得到的第二画面发送至第二终端,以使第二终端显示第二画面。
其中,第二画面参见上文对目标画面执行操作指令得到的待显示的画面。
可以理解的是,本申请的实施例中的处理器可以是中央处理单元(central processing unit,CPU),还可以是其他通用处理器、数字信号处理器(digital signal processor,DSP)、专用集成电路(application specific integrated circuit,ASIC)、现场可编程门阵列(field programmable gate array,FPGA)或者其他可编程逻辑器件、晶体管逻辑器件,硬件部件或者其任意组合。通用处理器可以是微处理器,也可以是任何常规的处理器。
本申请的实施例中的方法步骤可以通过硬件的方式来实现,也可以由处理器执行软件指令的方式来实现。软件指令可以由相应的软件模块组成,软件模块可以被存放于随机存取存储器(random access memory,RAM)、闪存、只读存储器(read-only memory,ROM)、可编程只读存储器(programmable rom,PROM)、可擦除可编程只读存储器(erasable PROM,EPROM)、电可擦除可编程只读存储器(electrically EPROM,EEPROM)、寄存器、硬盘、移动硬盘、CD-ROM或者本领域熟知的任何其它形式的存储介质中。一种示例性的存储介质耦合至处理器,从而使处理器能够从该存储介质读取信息,且可向该存储介质写入信息。当然,存储介质也可以是处理器的组成部分。处理器和存储介质可以位于ASIC中。
在上述实施例中，可以全部或部分地通过软件、硬件、固件或者其任意组合来实现。当使用软件实现时，可以全部或部分地以计算机程序产品的形式实现。所述计算机程序产品包括一个或多个计算机指令。在计算机上加载和执行所述计算机程序指令时，全部或部分地产生按照本申请实施例所述的流程或功能。所述计算机可以是通用计算机、专用计算机、计算机网络、或者其他可编程装置。所述计算机指令可以存储在计算机可读存储介质中，或者通过所述计算机可读存储介质进行传输。所述计算机指令可以从一个网站站点、计算机、服务器或数据中心通过有线(例如同轴电缆、光纤、数字用户线(DSL))或无线(例如红外、无线、微波等)方式向另一个网站站点、计算机、服务器或数据中心进行传输。所述计算机可读存储介质可以是计算机能够存取的任何可用介质或者是包含一个或多个可用介质集成的服务器、数据中心等数据存储设备。所述可用介质可以是磁性介质(例如，软盘、硬盘、磁带)、光介质(例如，DVD)、或者半导体介质(例如固态硬盘(solid state disk,SSD))等。
可以理解的是,在本申请的实施例中涉及的各种数字编号仅为描述方便进行的区分,并不用来限制本申请的实施例的范围。

Claims (25)

  1. 一种跨设备交互的方法,其特征在于,应用于投屏系统,所述投屏系统包括第一终端和第二终端,所述第一终端和所述第二终端之间投屏连接,所述方法包括:
    所述第二终端显示所述第一终端发送的第一画面;
    所述第二终端根据用户针对所述第一画面做出的目标操作生成操作事件;其中,所述目标操作为截屏操作、录屏操作或指关节操作;
    所述第一终端确定所述操作事件对应的操作指令,基于所述第一画面执行所述操作指令。
  2. 根据权利要求1所述的方法,其特征在于,所述目标操作为指关节操作;
    所述第二终端根据用户针对所述第一画面做出的目标操作生成操作事件,包括:
    所述第二终端根据用户针对所述第一画面做出的指关节操作生成第一输入事件;其中,所述第一输入事件包括指关节触摸信息和指关节按压信息;
    所述第二终端当基于指关节算法识别到所述第一输入事件由所述用户的指关节产生时,确定所述第一输入事件的指关节标识;
    所述第二终端将所述第一输入事件和所述指关节标识封装为操作事件;
    所述第一终端确定所述操作事件对应的操作指令,包括:
    所述第一终端识别所述操作事件,以确定所述用户针对所述第一画面做出的指关节动作;
    所述第一终端基于所述指关节动作,确定所述操作事件对应的操作指令。
  3. 根据权利要求1所述的方法,其特征在于,所述第二终端在显示所述第一画面之前显示有目标画面;所述第一画面为下拉菜单画面;所述目标操作为针对所述下拉菜单画面中的截屏按钮或录屏按钮所在区域的点击操作;所述第二终端存储有所述下拉菜单画面的布局信息;所述操作指令为所述目标画面的截屏指令或者录屏指令;
    所述第二终端根据用户针对所述第一画面做出的目标操作生成操作事件,包括:
    所述第二终端根据用户针对所述下拉菜单画面中的截屏按钮或录屏按钮所在区域做出的点击操作生成第二输入事件;其中,所述第二输入事件包括手指点击操作信息;
    所述第二终端根据所述下拉菜单画面的布局信息和所述第二输入事件,确定截屏按钮或录屏按钮的标识;
    所述第二终端将所述截屏按钮或录屏按钮的标识和所述第二输入事件封装成操作事件。
  4. 根据权利要求1所述的方法,其特征在于,所述目标操作为按压所述第二终端的截屏按键;
    所述第二终端根据用户针对所述第一画面做出的目标操作生成操作事件,包括:
    所述第二终端根据用户按压所述第二终端的截屏按键生成操作事件;其中,所述操作事件包括按键时间信息和键值;
    所述第二终端判断所述操作事件不是本地事件时,将所述操作事件发送至所述第一终端;
    所述第一终端确定所述操作事件对应的操作指令,包括:
    所述第一终端识别所述操作事件为截屏事件时,确定所述操作事件对应的操作指令为截屏指令。
  5. 根据权利要求1所述的方法,其特征在于,所述第一终端和所述第二终端之间的投屏方式为异源投屏;
    所述第一终端设置有所述第二终端的虚拟屏幕;其中,所述第一画面为与所述虚拟屏幕的屏幕尺寸适配的画面,所述虚拟屏幕的屏幕尺寸和所述第二终端的屏幕尺寸适配;
    所述操作事件携带所述第二终端的设备标识,以使所述第一终端识别所述第二终端的设备标识以确定所述第二终端的虚拟屏幕。
  6. 根据权利要求1所述的方法,其特征在于,所述第一终端和所述第二终端之间的投屏方式为镜像投屏;
    所述基于所述第一画面执行所述操作指令,包括:
    对所述第一终端显示的画面执行所述操作指令,其中,所述第一画面为所述第一终端显示的画面的镜像。
  7. 根据权利要求1所述的方法,其特征在于,所述方法还包括:
    将基于所述第一画面执行所述操作指令得到的第二画面发送至所述第二终端,以使所述第二终端显示所述第二画面。
  8. 一种跨设备交互的方法,其特征在于,应用于第一终端,所述方法包括:
    将第一画面发送至第二终端，以使所述第二终端显示所述第一画面；其中，所述第一终端和所述第二终端之间投屏连接；
    接收所述第二终端发送的操作事件;其中,所述操作事件为所述第二终端根据用户针对所述第一画面做出的目标操作生成,所述目标操作为截屏操作、录屏操作或指关节操作;
    确定所述操作事件对应的操作指令;
    基于所述第一画面执行所述操作指令。
  9. 根据权利要求8所述的方法,其特征在于,所述目标操作为指关节操作,所述操作事件为所述第二终端封装第一输入事件和第一输入事件的指关节标识生成;其中,所述第一输入事件根据用户针对所述第一画面做出的指关节操作生成,所述指关节标识当基于指关节算法识别到所述第一输入事件由所述用户的指关节产生时确定;
    所述确定所述操作事件对应的操作指令,包括:
    识别所述操作事件,以确定所述用户针对所述第一画面做出的指关节动作;
    基于所述指关节动作,确定所述操作事件对应的操作指令。
  10. 根据权利要求8所述的方法，其特征在于，所述第二终端在显示所述第一画面之前显示有目标画面；所述第一画面为下拉菜单画面；所述目标操作为针对所述下拉菜单画面中的截屏按钮或录屏按钮所在区域在所述第二终端做出的点击操作；所述操作指令为所述目标画面的截屏指令或者录屏指令；所述操作事件包括所述用户点击的所述下拉菜单画面中的截屏按钮或录屏按钮的标识；
    所述方法还包括:
    接收所述第二终端发送的第二输入事件;其中,所述第二输入事件为所述第二终端根据用户针对所述目标画面的滑动操作生成,所述第二输入事件包括手指滑动操作信息;
    当识别所述第二输入事件为显示下拉菜单时,将内部存储的所述下拉菜单画面和所述下拉菜单画面的布局信息发送至所述第二终端,以使所述第二终端在显示的目标画面的基础上显示所述下拉菜单画面,并根据用户针对所述下拉菜单画面中的截屏按钮或录屏按钮所在区域的点击操作生成的第三输入事件和所述下拉菜单画面的布局信息确定所述截屏按钮或录屏按钮的标识;其中,所述第三输入事件包括手指点击操作信息。
  11. 根据权利要求8所述的方法,其特征在于,所述目标操作为按压所述第二终端的截屏按键;所述操作事件包括按键时间信息和键值,所述操作事件不是所述第二终端的本地事件;
    所述确定所述操作事件对应的操作指令,包括:
    当识别所述操作事件为截屏事件时,确定所述操作事件对应的操作指令为截屏指令。
  12. 根据权利要求8所述的方法,其特征在于,所述第一终端和所述第二终端之间的投屏方式为异源投屏;
    所述第一终端设置有所述第二终端的虚拟屏幕;其中,所述第一画面为与所述虚拟屏幕的屏幕尺寸适配的画面,所述虚拟屏幕的屏幕尺寸和所述第二终端的屏幕尺寸适配;
    所述操作事件携带所述第二终端的设备标识,以使所述第一终端识别所述第二终端的设备标识以确定所述第二终端的虚拟屏幕。
  13. 根据权利要求8所述的方法,其特征在于,所述第一终端和所述第二终端之间的投屏方式为镜像投屏;
    所述基于所述第一画面执行所述操作指令,包括:
    对所述第一终端显示的画面执行所述操作指令,其中,所述第一画面为所述第一终端显示的画面的镜像。
  14. 根据权利要求8所述的方法,其特征在于,还包括:
    将基于所述第一画面执行所述操作指令得到的第二画面发送至所述第二终端,以使所述第二终端显示所述第二画面。
  15. 一种跨设备交互的方法,其特征在于,应用于第二终端,所述方法包括:
    显示第一终端发送的第一画面;其中,所述第一终端和所述第二终端之间投屏连接;
    根据用户针对所述第一画面在所述第二终端做出的目标操作生成操作事件;其中,所述目标操作为截屏操作、录屏操作或指关节操作;
    将所述操作事件发送至所述第一终端,以使所述第一终端确定所述操作事件对应的操作指令,基于所述第一画面执行所述操作指令。
  16. 根据权利要求15所述的方法,其特征在于,所述目标操作为指关节操作;
    所述根据用户针对所述第一画面在所述第二终端做出的目标操作生成操作事件，包括：
    根据用户针对所述第一画面在所述第二终端做出的指关节操作生成第一输入事件;其中,所述第一输入事件包括指关节触摸信息和指关节按压信息;
    当基于指关节算法识别到所述第一输入事件由所述用户的指关节产生时,确定所述第一输入事件的指关节标识;
    将所述第一输入事件和所述指关节标识封装为操作事件。
  17. 根据权利要求15所述的方法，其特征在于，所述第二终端在显示所述第一画面之前显示有目标画面；所述第一画面为下拉菜单画面；所述目标操作为针对所述下拉菜单画面中的截屏按钮或录屏按钮所在区域的点击操作；所述操作指令为所述目标画面的截屏指令或者录屏指令；
    所述根据用户针对所述第一画面在所述第二终端做出的目标操作生成操作事件之前,还包括:
    根据用户针对所述目标画面在所述第二终端做出的滑动操作生成第二输入事件;其中,所述第二输入事件包括手指滑动操作信息;
    将所述第二输入事件发送至所述第一终端,以使所述第一终端识别所述第二输入事件为显示下拉菜单时,发送内部存储的下拉菜单画面和所述下拉菜单画面的布局信息至所述第二终端;
    在所述目标画面的基础上显示所述下拉菜单画面,并存储所述下拉菜单画面的布局信息;
    所述第二终端根据用户针对所述第一画面在所述第二终端做出的目标操作生成操作事件,包括:
    根据用户针对所述下拉菜单画面中的截屏按钮或录屏按钮所在区域在所述第二终端做出的点击操作生成第三输入事件;其中,所述第三输入事件包括手指点击操作信息;
    根据所述下拉菜单画面的布局信息和所述第三输入事件,确定截屏按钮或录屏按钮的标识;
    将所述截屏按钮或录屏按钮的标识和所述第三输入事件封装成操作事件。
  18. 根据权利要求15所述的方法,其特征在于,所述目标操作为按压所述第二终端的截屏按键;所述操作事件包括按键时间信息和键值;
    所述第二终端根据用户针对所述第一画面做出的目标操作生成操作事件,还包括:
    判断所述操作事件不是本地事件时,将所述操作事件发送至所述第一终端。
  19. 根据权利要求15所述的方法,其特征在于,所述第一终端和所述第二终端之间的投屏方式为异源投屏;
    所述第一终端设置有所述第二终端的虚拟屏幕;其中,所述第一画面为与所述虚拟屏幕的屏幕尺寸适配的画面,所述虚拟屏幕的屏幕尺寸和所述第二终端的屏幕尺寸适配;
    所述操作事件携带所述第二终端的设备标识,以使所述第一终端识别所述第二终端的设备标识以确定所述第二终端的虚拟屏幕。
  20. 根据权利要求15所述的方法,其特征在于,所述第一终端和所述第二终端之间的投屏方式为镜像投屏。
  21. 根据权利要求15所述的方法,其特征在于,还包括:
    接收所述第一终端基于所述第一画面执行所述操作指令确定的第二画面；
    显示所述第二画面。
  22. 一种投屏系统,其特征在于,包括:第一终端和第二终端,其中,所述第一终端用于执行如权利要求8-14任一所述的方法,所述第二终端用于执行如权利要求15-21任一所述的方法。
  23. 一种跨设备交互的装置,其特征在于,所述装置运行计算机程序指令,以执行如权利要求8-14任一所述的方法,或者执行如权利要求15-21任一所述的方法。
  24. 一种终端,其特征在于,包括:
    至少一个存储器,用于存储程序;
    至少一个处理器,用于执行所述存储器存储的程序,当所述存储器存储的程序被执行时,所述处理器用于执行如权利要求8-14任一所述的方法,或者执行如权利要求15-21任一所述的方法。
  25. 一种计算机存储介质,所述计算机存储介质中存储有指令,当所述指令在计算机上运行时,使得计算机执行如权利要求8-14任一所述的方法,或者执行如权利要求15-21任一所述的方法。
PCT/CN2022/114303 2021-09-03 2022-08-23 跨设备交互的方法、装置、投屏系统及终端 WO2023030099A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111034541.2A CN115756268A (zh) 2021-09-03 2021-09-03 跨设备交互的方法、装置、投屏系统及终端
CN202111034541.2 2021-09-03

Publications (1)

Publication Number Publication Date
WO2023030099A1 true WO2023030099A1 (zh) 2023-03-09

Family

ID=85332717

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/114303 WO2023030099A1 (zh) 2021-09-03 2022-08-23 跨设备交互的方法、装置、投屏系统及终端

Country Status (2)

Country Link
CN (1) CN115756268A (zh)
WO (1) WO2023030099A1 (zh)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117130471B (zh) * 2023-03-31 2024-07-26 荣耀终端有限公司 人机交互方法、电子设备及系统
CN116248657B (zh) * 2023-05-09 2023-08-01 深圳开鸿数字产业发展有限公司 投屏系统的控制方法、装置、计算机设备及存储介质
CN117692694A (zh) * 2023-05-12 2024-03-12 荣耀终端有限公司 一种显示方法及相关设备
CN116301699B (zh) * 2023-05-17 2023-08-22 深圳开鸿数字产业发展有限公司 分布式投屏方法、终端设备、显示屏、投屏系统及介质

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030108856A1 (en) * 2001-11-28 2003-06-12 Sony Corporation Remote operation system for application software projected onto a screen
CN107147769A (zh) * 2016-03-01 2017-09-08 阿里巴巴集团控股有限公司 基于移动终端的设备控制方法、装置和移动终端
CN107483994A (zh) * 2017-07-31 2017-12-15 广州指观网络科技有限公司 一种反向投屏控制系统及方法
CN110377250A (zh) * 2019-06-05 2019-10-25 华为技术有限公司 一种投屏场景下的触控方法及电子设备
CN111221491A (zh) * 2020-01-09 2020-06-02 Oppo(重庆)智能科技有限公司 交互控制方法及装置、电子设备、存储介质
CN112394895A (zh) * 2020-11-16 2021-02-23 Oppo广东移动通信有限公司 画面跨设备显示方法与装置、电子设备
CN113031843A (zh) * 2021-04-25 2021-06-25 歌尔股份有限公司 一种手表控制方法、显示终端及手表

Also Published As

Publication number Publication date
CN115756268A (zh) 2023-03-07

Similar Documents

Publication Publication Date Title
US11567623B2 (en) Displaying interfaces in different display areas based on activities
WO2021013158A1 (zh) 显示方法及相关装置
WO2021000807A1 (zh) 一种应用程序中等待场景的处理方法和装置
US20230046708A1 (en) Application Interface Interaction Method, Electronic Device, and Computer-Readable Storage Medium
WO2023030099A1 (zh) 跨设备交互的方法、装置、投屏系统及终端
WO2021036770A1 (zh) 一种分屏处理方法及终端设备
US20230216990A1 (en) Device Interaction Method and Electronic Device
WO2022143128A1 (zh) 基于虚拟形象的视频通话方法、装置和终端
WO2022100304A1 (zh) 应用内容跨设备流转方法与装置、电子设备
CN114040242B (zh) 投屏方法、电子设备和存储介质
WO2022017393A1 (zh) 显示交互系统、显示方法及设备
WO2020238759A1 (zh) 一种界面显示方法和电子设备
US12058486B2 (en) Method and apparatus for implementing automatic translation by using a plurality of TWS headsets connected in forwarding mode
WO2022105445A1 (zh) 基于浏览器的应用投屏方法及相关装置
WO2021190524A1 (zh) 截屏处理的方法、图形用户接口及终端
CN112130788A (zh) 一种内容分享方法及其装置
WO2022135157A1 (zh) 页面显示的方法、装置、电子设备以及可读存储介质
US20240264882A1 (en) Application running method and related device
WO2022063159A1 (zh) 一种文件传输的方法及相关设备
CN115016697A (zh) 投屏方法、计算机设备、可读存储介质和程序产品
WO2024045801A1 (zh) 用于截屏的方法、电子设备、介质以及程序产品
US20240303024A1 (en) Display method, electronic device, and system
WO2023273543A1 (zh) 一种文件夹管理方法及装置
WO2022002213A1 (zh) 翻译结果显示方法、装置及电子设备
WO2022048453A1 (zh) 解锁方法及电子设备

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 22863238; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 22863238; Country of ref document: EP; Kind code of ref document: A1)