CN115756268A - Cross-device interaction method and device, screen projection system and terminal - Google Patents


Info

Publication number
CN115756268A
CN115756268A (application number CN202111034541.2A)
Authority
CN
China
Prior art keywords
terminal
screen
picture
event
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111034541.2A
Other languages
Chinese (zh)
Inventor
任国锋 (Ren Guofeng)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN202111034541.2A
Priority to PCT/CN2022/114303 (WO2023030099A1)
Publication of CN115756268A
Legal status: Pending (current)

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from the processing unit to the output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484: Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/04842: Selection of displayed objects or displayed text elements
    • G06F 3/04847: Interaction techniques to control parameter settings, e.g. interaction with sliders or dials
    • G06F 3/0485: Scrolling or panning
    • G06F 3/14: Digital output to display device; cooperation and interconnection of the display device with other functional units
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44: Arrangements for executing specific programs
    • G06F 9/451: Execution arrangements for user interfaces

Abstract

The application provides a cross-device interaction method and apparatus, a screen projection system, and a terminal. In an embodiment, the method is applied to a screen projection system that includes a first terminal and a second terminal connected by a screen projection connection, and the method includes the following steps: the second terminal displays a first picture sent by the first terminal; the second terminal generates an operation event according to a target operation performed by a user on the second terminal with respect to the first picture, where the target operation is a screen capture operation, a screen recording operation, or a knuckle operation; and the first terminal determines an operation instruction corresponding to the operation event and executes the operation instruction based on the first picture. With the technical solution provided by the embodiments of the application, the first terminal, which projects the screen, can respond to a screen capture operation, a screen recording operation, or a knuckle operation performed by the user on the second terminal, which receives the projection, so that cross-device user interaction is realized and the user experience is ensured.

Description

Cross-device interaction method and device, screen projection system and terminal
Technical Field
The present application relates to the field of communications technologies, and in particular, to a method and an apparatus for cross-device interaction, a screen projection system, and a terminal.
Background
With the development of screen projection technology, screen projection brings great convenience to users. With screen projection, the content displayed by a screen projection device having the screen projection function can be projected onto another display device having a display function, and the projected content may include various media information, operation data pictures, and other content displayed on the screen projection device. As an example, consider the case in which a mobile phone serves as the screen projection device, a television serves as the display device, and the interface displayed on the screen of the mobile phone is projected onto the television. When a user watches a video on the mobile phone or streams with live-broadcast software, the display interface of the mobile phone can be projected onto the television so that the video or live content is watched on the television. However, after the display interface of the mobile phone is projected onto the television, operations such as screen capture and screen recording still have to be completed on the mobile phone side. When the mobile phone is far away from the television, for example when the mobile phone is in room A while the user watches the television in room B, the user has to go back and operate the mobile phone in room A to complete the interaction, which results in a poor user experience.
Disclosure of Invention
The embodiments of the application provide a cross-device interaction method and apparatus, a screen projection system, and a terminal, so that the terminal projecting the screen can respond to a screen capture operation, a screen recording operation, or a knuckle operation performed by the user on the terminal receiving the projection, thereby realizing cross-device user interaction and ensuring the user experience.
In a first aspect, an embodiment of the application provides a cross-device interaction method applied to a screen projection system, where the screen projection system includes a first terminal and a second terminal connected by a screen projection connection, and the method includes: the second terminal displays a first picture sent by the first terminal; the second terminal generates an operation event according to a target operation performed by a user with respect to the first picture, where the target operation is a screen capture operation, a screen recording operation, or a knuckle operation; and the first terminal determines an operation instruction corresponding to the operation event and executes the operation instruction based on the first picture.
In this solution, the first terminal, which projects the screen, can respond to a screen capture operation, a screen recording operation, or a knuckle operation performed by the user on the second terminal, which receives the projection, so that cross-device user interaction is realized and the user experience is ensured.
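To make the cross-device message concrete, the following is a minimal sketch, in Java, of the kind of operation event the second terminal might send to the first terminal; the class, field names, and serialization are illustrative assumptions and not part of the claimed method.

```java
// Hypothetical sketch of the operation event exchanged across devices.
// All names and fields are assumptions for illustration only.
import java.io.Serializable;

enum TargetOperation { SCREEN_CAPTURE, SCREEN_RECORD, KNUCKLE }

final class OperationEvent implements Serializable {
    final String sourceDeviceId;   // device identifier of the second terminal
    final TargetOperation type;    // which target operation the user performed
    final long timestampMillis;    // when the operation was made on the second terminal
    final byte[] payload;          // e.g. raw input event, button identifier or key value

    OperationEvent(String sourceDeviceId, TargetOperation type,
                   long timestampMillis, byte[] payload) {
        this.sourceDeviceId = sourceDeviceId;
        this.type = type;
        this.timestampMillis = timestampMillis;
        this.payload = payload;
    }
}
```

On the first terminal, a dispatcher would then map the event type and payload to the corresponding screen capture, screen recording, or knuckle-gesture instruction and execute it against the first picture.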
In one possible implementation, the target operation is a knuckle operation. In this case, the second terminal generating the operation event according to the target operation performed by the user with respect to the first picture includes: the second terminal generates a first input event according to the knuckle operation performed by the user with respect to the first picture, where the first input event includes knuckle touch information and knuckle press information; when the second terminal recognizes, based on a knuckle algorithm, that the first input event is generated by a knuckle of the user, it determines a knuckle identifier for the first input event; and the second terminal encapsulates the first input event and the knuckle identifier as the operation event. The first terminal determining the operation instruction corresponding to the operation event includes: recognizing the operation event to determine the knuckle action performed by the user with respect to the first picture; and determining the operation instruction corresponding to the operation event based on the knuckle action.
In this implementation, the knuckle is recognized on the second terminal, which receives the projection, while the first terminal, which projects the screen, only needs to recognize the knuckle action; this ensures the response speed and performance of the first terminal.
In addition, the first terminal does not need to perform coordinate conversion when recognizing the knuckle action, that is, it does not need to convert data expressed in the screen coordinate system of the second terminal into the screen coordinate system of the first terminal, which further ensures the response speed and performance of the first terminal.
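The division of labor described above, in which the second terminal tags the raw event as a knuckle event and the first terminal only maps the recognized knuckle action to an instruction, can be sketched as follows; the types, the path-based recognition rules, and the action-to-instruction mapping beyond the double tap are invented for illustration only.

```java
// Hypothetical sketch of the knuckle flow on the first terminal side.
enum KnuckleAction { DOUBLE_TAP, DRAW_S, DRAW_CLOSED_FIGURE, UNKNOWN }

final class KnuckleOperationEvent {
    final boolean knuckleTag;     // knuckle identifier attached by the second terminal
    final float[] pathX, pathY;   // path track of the knuckle on the projected picture
    KnuckleOperationEvent(boolean tag, float[] xs, float[] ys) {
        this.knuckleTag = tag; this.pathX = xs; this.pathY = ys;
    }
}

final class FirstTerminalKnuckleHandler {
    /** Recognizes the knuckle action from the path track (illustrative rules only). */
    KnuckleAction recognize(KnuckleOperationEvent event) {
        if (!event.knuckleTag) return KnuckleAction.UNKNOWN;
        if (event.pathX.length <= 2) return KnuckleAction.DOUBLE_TAP;      // taps, no path
        if (isClosed(event.pathX, event.pathY)) return KnuckleAction.DRAW_CLOSED_FIGURE;
        return KnuckleAction.DRAW_S;                                        // placeholder rule
    }

    /** Maps the action to an instruction; only the double-tap mapping is stated in the text. */
    String toInstruction(KnuckleAction action) {
        switch (action) {
            case DOUBLE_TAP:         return "SCREEN_CAPTURE";
            case DRAW_CLOSED_FIGURE: return "PARTIAL_SCREEN_CAPTURE";
            case DRAW_S:             return "SCROLLING_SCREEN_CAPTURE";
            default:                 return "NONE";
        }
    }

    private boolean isClosed(float[] xs, float[] ys) {
        float dx = xs[0] - xs[xs.length - 1], dy = ys[0] - ys[ys.length - 1];
        return Math.hypot(dx, dy) < 30;    // start and end points are near each other
    }
}
```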
In one possible implementation, the second terminal displays a target picture before displaying the first picture; the first picture is a pull-down menu picture; the target operation is a click operation on the area in which a screen capture button or a screen recording button is located in the pull-down menu picture; the second terminal stores layout information of the pull-down menu picture; and the operation instruction is a screen capture instruction or a screen recording instruction for the target picture. In this case, the second terminal generating the operation event according to the target operation performed by the user with respect to the first picture includes: the second terminal generates a second input event according to the user's click operation on the area in which the screen capture button or the screen recording button is located in the pull-down menu picture, where the second input event includes finger click operation information; the second terminal determines the identifier of the screen capture button or the screen recording button according to the layout information of the pull-down menu picture and the second input event; and the second terminal encapsulates the identifier of the screen capture button or the screen recording button and the second input event as the operation event.
In this implementation, the screen capture button or the screen recording button in the pull-down menu picture is identified on the second terminal, which receives the projection, so that the first terminal, which projects the screen, can directly determine the screen capture instruction or the screen recording instruction; this ensures the response speed and performance of the first terminal.
In addition, because the buttons in the pull-down menu are recognized by the second terminal, the accuracy of the operation event is ensured without having to account for the difference in screen size between the first terminal and the second terminal.
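A minimal sketch of how the second terminal might resolve a tap to the screen capture or screen recording button using the layout information received from the first terminal is given below; the layout format, button identifiers, and coordinates are assumptions made only for illustration.

```java
// Hypothetical sketch: hit-testing a tap against the pull-down menu layout.
import java.util.LinkedHashMap;
import java.util.Map;

final class PullDownMenuLayout {
    // Button identifier -> bounding box {left, top, right, bottom} in menu-picture pixels.
    final Map<String, int[]> buttonBounds = new LinkedHashMap<>();

    String hitTest(int x, int y) {
        for (Map.Entry<String, int[]> e : buttonBounds.entrySet()) {
            int[] b = e.getValue();
            if (x >= b[0] && y >= b[1] && x <= b[2] && y <= b[3]) return e.getKey();
        }
        return null;   // the tap missed every button
    }
}

final class MenuClickHandler {
    /** Resolves the clicked button identifier that will be carried in the operation event. */
    static String resolveButton(PullDownMenuLayout layout, int tapX, int tapY) {
        return layout.hitTest(tapX, tapY);   // e.g. "screenshot" or "screen_record"
    }

    public static void main(String[] args) {
        PullDownMenuLayout layout = new PullDownMenuLayout();
        layout.buttonBounds.put("screenshot",    new int[]{ 40, 300, 140, 400});
        layout.buttonBounds.put("screen_record", new int[]{180, 300, 280, 400});
        System.out.println(resolveButton(layout, 60, 350));   // prints "screenshot"
    }
}
```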
In one possible implementation, the target operation is pressing a screen capture key of the second terminal. In this case, the second terminal generating the operation event according to the target operation performed by the user with respect to the first picture includes: the second terminal generates the operation event according to the user pressing the screen capture key of the second terminal, where the operation event includes key time information and a key value; and when the second terminal judges that the operation event is not a local event, it sends the operation event to the first terminal. The first terminal determining the operation instruction corresponding to the operation event includes: when the first terminal recognizes that the operation event is a screen capture event, it determines that the operation instruction corresponding to the operation event is a screen capture instruction.
In this implementation, the key value and the key time information are determined on the second terminal, which receives the projection, and when the second terminal determines that the event is not a local event, the first terminal, which projects the screen, can identify the screen capture key and determine the corresponding screen capture instruction, thereby ensuring the cross-device interaction experience.
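The local-versus-forwarded decision for physical keys might look like the following sketch; the key codes and the rule that volume keys are consumed locally are assumptions made only to illustrate the "not a local event" check.

```java
// Hypothetical sketch of the second terminal's key dispatching.
final class KeyOperationEvent {
    final int keyCode;          // key value
    final long downTimeMillis;  // key time information
    KeyOperationEvent(int keyCode, long downTimeMillis) {
        this.keyCode = keyCode; this.downTimeMillis = downTimeMillis;
    }
}

final class SecondTerminalKeyDispatcher {
    static final int KEY_VOLUME_UP = 24, KEY_VOLUME_DOWN = 25, KEY_SCREEN_CAPTURE = 120;

    /** Returns true if the event should be consumed locally on the second terminal. */
    boolean isLocalEvent(KeyOperationEvent e) {
        return e.keyCode == KEY_VOLUME_UP || e.keyCode == KEY_VOLUME_DOWN;
    }

    void dispatch(KeyOperationEvent e) {
        if (isLocalEvent(e)) {
            handleLocally(e);        // e.g. adjust the second terminal's own volume
        } else {
            sendToFirstTerminal(e);  // the first terminal then recognizes the screen capture key
        }
    }

    private void handleLocally(KeyOperationEvent e)       { /* local handling stub */ }
    private void sendToFirstTerminal(KeyOperationEvent e) { /* transport stub */ }
}
```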
In one possible implementation mode, the screen projection mode between the first terminal and the second terminal is heterogeneous screen projection; the first terminal is provided with a virtual screen of the second terminal; the first picture is a picture matched with the screen size of the virtual screen, and the screen size of the virtual screen is matched with the screen size of the second terminal; the operation event carries the equipment identifier of the second terminal, so that the first terminal identifies the equipment identifier of the second terminal to determine the virtual screen of the second terminal.
In this implementation, the picture displayed by the first terminal and the picture displayed by the second terminal are independent of each other, so that the first terminal and the second terminal can be used separately, meeting the different requirements of different users.
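In heterogeneous screen projection, the device identifier carried in the operation event lets the first terminal find the virtual screen it created for that second terminal; a minimal sketch of such a lookup is shown below, with all names assumed for illustration.

```java
// Hypothetical sketch: routing an operation event to the matching virtual screen.
import java.util.HashMap;
import java.util.Map;

final class VirtualScreen {
    final String ownerDeviceId;
    final int widthPx, heightPx;        // matched to the second terminal's screen size
    VirtualScreen(String ownerDeviceId, int widthPx, int heightPx) {
        this.ownerDeviceId = ownerDeviceId; this.widthPx = widthPx; this.heightPx = heightPx;
    }
}

final class VirtualScreenRegistry {
    private final Map<String, VirtualScreen> byDeviceId = new HashMap<>();

    void register(VirtualScreen screen) { byDeviceId.put(screen.ownerDeviceId, screen); }

    /** The operation event carries the second terminal's device identifier. */
    VirtualScreen resolve(String deviceIdFromEvent) {
        return byDeviceId.get(deviceIdFromEvent);   // null if nothing is projected to that device
    }
}
```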
In one possible implementation manner, the screen projection manner between the first terminal and the second terminal is mirror image screen projection; executing an operation instruction based on the first picture, including: and executing an operation instruction on the picture displayed by the first terminal, wherein the first picture is a mirror image of the picture displayed by the first terminal.
In this implementation, the picture displayed by the first terminal is consistent with the picture displayed by the second terminal, so that the first terminal can learn what operations the user has performed on the second terminal, and reverse control is realized.
In one possible implementation, the method further includes: and sending a second picture obtained by executing the operation instruction based on the first picture to the second terminal so as to enable the second terminal to display the second picture.
In a second aspect, an embodiment of the application provides a cross-device interaction method applied to a first terminal, including: sending a first picture to a second terminal so that the second terminal displays the first picture, where the first terminal and the second terminal are connected by a screen projection connection; receiving an operation event sent by the second terminal, where the operation event is generated by the second terminal according to a target operation performed by a user with respect to the first picture, and the target operation is a screen capture operation, a screen recording operation, or a knuckle operation; determining an operation instruction corresponding to the operation event; and executing the operation instruction based on the first picture.
In this solution, the first terminal, which projects the screen, can respond to a screen capture operation, a screen recording operation, or a knuckle operation performed by the user on the second terminal, which receives the projection, so that cross-device user interaction is realized and the user experience is ensured.
In one possible implementation, the target operation is a knuckle operation; the operation event is obtained by the second terminal encapsulating a first input event and a knuckle identifier of the first input event, where the first input event is generated according to the knuckle operation performed by the user with respect to the first picture, and the knuckle identifier is determined when it is recognized, based on a knuckle algorithm, that the first input event is generated by a knuckle of the user. Determining the operation instruction corresponding to the operation event includes: recognizing the operation event to determine the knuckle action performed by the user with respect to the first picture; and determining the operation instruction corresponding to the operation event based on the knuckle action.
The beneficial effects of the implementation mode are as above, and are not described in detail herein.
In one possible implementation, the second terminal displays a target picture before displaying the first picture; the first picture is a pull-down menu picture; the target operation is a click operation on the area in which a screen capture button or a screen recording button is located in the pull-down menu picture; the operation instruction is a screen capture instruction or a screen recording instruction for the target picture; and the operation event includes the identifier of the screen capture button or the screen recording button clicked by the user in the pull-down menu picture. The method further includes: receiving a second input event sent by the second terminal, where the second input event is generated by the second terminal according to a sliding operation performed by the user on the target picture and includes finger sliding operation information; and when it is recognized that the second input event is for displaying a pull-down menu, sending the internally stored pull-down menu picture and the layout information of the pull-down menu picture to the second terminal, so that the second terminal displays the pull-down menu picture on top of the displayed target picture and determines the identifier of the screen capture button or the screen recording button according to the layout information of the pull-down menu picture and a third input event generated by the user's click operation on the area in which the screen capture button or the screen recording button is located in the pull-down menu picture, where the third input event includes finger click operation information.
The beneficial effects of the implementation mode are as above, and are not described in detail herein.
In one possible implementation, the target operation is to press a screen capture key of the second terminal; the operation event comprises key time information and key values, and is not a local event of the second terminal; determining an operation instruction corresponding to the operation event, including: and when the operation event is identified as the screen capturing event, determining that the operation instruction corresponding to the operation event is the screen capturing instruction.
The beneficial effects of the implementation mode are as above, and are not described in detail herein.
In one possible implementation mode, the screen projection mode between the first terminal and the second terminal is heterogeneous screen projection; the first terminal is provided with a virtual screen of the second terminal; the first picture is a picture matched with the screen size of the virtual screen, and the screen size of the virtual screen is matched with the screen size of the second terminal; the operation event carries the equipment identifier of the second terminal, so that the first terminal identifies the equipment identifier of the second terminal to determine the virtual screen of the second terminal.
The beneficial effects of the implementation method are as above, and are not described in detail herein.
In one possible implementation manner, the screen projection manner between the first terminal and the second terminal is mirror image screen projection; executing an operation instruction based on the first picture, including: and executing an operation instruction on the picture displayed by the first terminal, wherein the first picture is a mirror image of the picture displayed by the first terminal.
The beneficial effects of the implementation mode are as above, and are not described in detail herein.
In one possible implementation manner, the method further includes: and sending a second picture obtained by executing the operation instruction based on the first picture to the second terminal so as to enable the second terminal to display the second picture.
In a third aspect, an embodiment of the application provides a cross-device interaction method applied to a second terminal, including: displaying a first picture sent by a first terminal, where the first terminal and the second terminal are connected by a screen projection connection; generating an operation event according to a target operation performed by a user on the second terminal with respect to the first picture, where the target operation is a screen capture operation, a screen recording operation, or a knuckle operation; and sending the operation event to the first terminal so that the first terminal determines an operation instruction corresponding to the operation event and executes the operation instruction based on the first picture.
The beneficial effects of this embodiment are as described above, and are not described herein in detail.
In one possible implementation, the target operation is a knuckle operation, and generating the operation event according to the target operation performed by the user on the second terminal with respect to the first picture includes: generating a first input event according to the knuckle operation performed by the user on the second terminal with respect to the first picture, where the first input event includes knuckle touch information and knuckle press information; when it is recognized, based on a knuckle algorithm, that the first input event is generated by a knuckle of the user, determining a knuckle identifier for the first input event; and encapsulating the first input event and the knuckle identifier as the operation event.
The beneficial effects of the implementation mode are as above, and are not described in detail herein.
In one possible implementation, the second terminal displays a target picture before displaying the first picture; the first picture is a pull-down menu picture; the target operation is a click operation on the area in which a screen capture button or a screen recording button is located in the pull-down menu picture; and the operation instruction is a screen capture instruction or a screen recording instruction for the target picture. Before generating the operation event according to the target operation performed by the user with respect to the first picture, the method includes: generating a second input event according to a sliding operation performed by the user on the second terminal with respect to the target picture, where the second input event includes finger sliding operation information; sending the second input event to the first terminal so that, when the first terminal recognizes that the second input event is for displaying a pull-down menu, the first terminal sends the internally stored pull-down menu picture and the layout information of the pull-down menu picture to the second terminal; and displaying the pull-down menu picture on top of the target picture and storing the layout information of the pull-down menu picture. Generating the operation event according to the target operation performed by the user with respect to the first picture then includes: generating a third input event according to the user's click operation on the second terminal on the area in which the screen capture button or the screen recording button is located in the pull-down menu picture, where the third input event includes finger click operation information; determining the identifier of the screen capture button or the screen recording button according to the layout information of the pull-down menu picture and the third input event; and encapsulating the identifier of the screen capture button or the screen recording button and the third input event as the operation event.
The beneficial effects of the implementation mode are referred to above, and are not described in detail here.
In one possible implementation, the target operation is to press a screen capture key of the second terminal; the operation event comprises key time information and key values; the second terminal generates an operation event according to a target operation made by a user for the first screen, and the method further comprises the following steps: and when the operation event is judged not to be the local event, the operation event is sent to the first terminal.
The beneficial effects of the implementation mode are as above, and are not described in detail herein.
In one possible implementation mode, the screen projection mode between the first terminal and the second terminal is heterogeneous screen projection; the first terminal is provided with a virtual screen of the second terminal; the first picture is a picture matched with the screen size of the virtual screen, and the screen size of the virtual screen is matched with the screen size of the second terminal; the operation event carries the equipment identifier of the second terminal, so that the first terminal identifies the equipment identifier of the second terminal to determine the virtual screen of the second terminal.
The beneficial effects of the implementation mode are as above, and are not described in detail herein.
In one possible implementation manner, the screen projection manner between the first terminal and the second terminal is mirror image screen projection.
The beneficial effects of the implementation mode are as above, and are not described in detail herein.
In one possible implementation, the method further includes: receiving a second picture obtained by the first terminal by executing the operation instruction based on the first picture; and displaying the second picture.
In a fourth aspect, an embodiment of the present application provides a screen projection system, including: a first terminal and a second terminal; wherein the first terminal is adapted to perform the method according to the second aspect and the second terminal is adapted to perform the method according to the third aspect.
In a fifth aspect, an embodiment of the present application provides a terminal, including: at least one memory for storing a program; at least one processor for executing the memory-stored program, the processor being adapted to perform the method provided in the second aspect or to perform the method provided in the third aspect when the memory-stored program is executed.
In a sixth aspect, the present application provides an apparatus for interacting across devices, where the apparatus executes computer program instructions to perform the method provided in the second aspect, or to perform the method provided in the third aspect. Illustratively, the apparatus may be a chip, or a processor.
In one example, the apparatus may include a processor, which may be coupled with a memory, read instructions in the memory and execute the method provided in the second aspect according to the instructions, or execute the method provided in the third aspect. The memory may be integrated in the chip or the processor, or may be independent of the chip or the processor.
In a seventh aspect, an embodiment of the present application provides a computer storage medium, in which instructions are stored, and when the instructions are executed on a computer, the instructions cause the computer to perform the method provided in the second aspect, or perform the method provided in the third aspect.
In an eighth aspect, embodiments of the present application provide a computer program product comprising instructions which, when executed on a computer, cause the computer to perform the method provided in the second aspect, or to perform the method provided in the third aspect.
Drawings
FIG. 1 is a schematic diagram of the system architecture of a screen projection system according to an embodiment of the present application;
FIG. 2a is a first schematic diagram of an interface display of the screen projection system according to an embodiment of the present application;
FIG. 2b is a second schematic diagram of an interface display of the screen projection system according to an embodiment of the present application;
FIG. 2c is a third schematic diagram of an interface display of the screen projection system according to an embodiment of the present application;
FIG. 2d is a fourth schematic diagram of an interface display of the screen projection system according to an embodiment of the present application;
FIG. 2e is a fifth schematic diagram of an interface display of the screen projection system according to an embodiment of the present application;
FIG. 2f is a sixth schematic diagram of an interface display of the screen projection system according to an embodiment of the present application;
FIG. 3 is a schematic diagram of the response process for a knuckle operation according to an embodiment of the present application;
FIG. 4 is a schematic diagram of the screen recording principle according to an embodiment of the present application;
FIG. 5 is a first schematic diagram of the screen capture/screen recording principle of the screen projection system provided in FIG. 2b;
FIG. 6 is a second schematic diagram of the screen capture/screen recording principle of the screen projection system provided in FIG. 2b;
FIG. 7a is a first schematic diagram of a screen capture scenario of the screen projection system provided in FIG. 2b;
FIG. 7b is a second schematic diagram of a screen capture scenario of the screen projection system provided in FIG. 2b;
FIG. 7c is a third schematic diagram of a screen capture scenario of the screen projection system provided in FIG. 2b;
FIG. 7d is a fourth schematic diagram of a screen capture scenario of the screen projection system provided in FIG. 2b;
FIG. 7e is a fifth schematic diagram of a screen capture scenario of the screen projection system provided in FIG. 2b;
FIG. 7f is a sixth schematic diagram of a screen capture scenario of the screen projection system provided in FIG. 2b;
FIG. 8a is a first schematic diagram of a screen recording scenario of the screen projection system provided in FIG. 2b;
FIG. 8b is a second schematic diagram of a screen recording scenario of the screen projection system provided in FIG. 2b;
FIG. 9 is a schematic structural diagram of a first terminal according to an embodiment of the present application;
FIG. 10a is a schematic diagram of the software structure of a first terminal according to an embodiment of the present application;
FIG. 10b is a schematic diagram of the software structure of a second terminal according to an embodiment of the present application;
FIG. 11 is a schematic diagram of a software implementation of a screen capture according to an embodiment of the present application;
FIG. 12 is a schematic flowchart of a cross-device interaction scheme according to an embodiment of the present application;
FIG. 13a is a schematic flowchart of a cross-device interaction scheme for a knuckle operation according to an embodiment of the present application;
FIG. 13b is a schematic flowchart of a cross-device interaction scheme for a click operation on a screen capture button or a screen recording button in a pull-down menu according to an embodiment of the present application;
FIG. 13c is a schematic flowchart of a cross-device interaction scheme for pressing a screen capture key of a second terminal according to an embodiment of the present application;
FIG. 14 is a schematic flowchart of a cross-device interaction method according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions of the embodiments of the present application will be described below with reference to the accompanying drawings.
In the description of the embodiments of the present application, the words "exemplary", "for example", and "for instance" are used to indicate an example, an instance, or an illustration. Any embodiment or design described as "exemplary", "for example", or "for instance" in this application should not be construed as being preferred or more advantageous than other embodiments or designs. Rather, these words are used to present relevant concepts in a concrete fashion.
In the description of the embodiments of the present application, the term "and/or" merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may indicate the following three cases: only A exists, only B exists, or both A and B exist. In addition, unless otherwise specified, the term "plurality" means two or more; for example, a plurality of systems refers to two or more systems, and a plurality of terminals refers to two or more terminals.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and shall not be understood as indicating or implying relative importance or as implicitly indicating the number of the indicated technical features. Thus, a feature defined with "first" or "second" may explicitly or implicitly include one or more of that feature. The terms "comprising", "including", "having", and their variants mean "including but not limited to" unless otherwise expressly specified.
Hereinafter, some terms in the embodiments of the present application will be explained. It should be noted that these explanations are for the convenience of those skilled in the art, and do not limit the scope of protection claimed in the present application.
(1) Portable screen
A screen device, for example for office use, that has weak service processing capability of its own and is used by relying on a screen projection connection to a mobile phone, a tablet, a computer, or the like.
(2) Heterogeneous screen projection (different-source screen projection)
A picture stored on a first terminal having the screen projection function is projected onto a second terminal having a display function, and the picture projected by the first terminal onto the second terminal is independent of the picture displayed by the first terminal itself. For example, taking a mobile phone as the first terminal and a portable screen as the second terminal, after the mobile phone and the portable screen establish a screen projection connection, the mobile phone and the portable screen run different applications without interfering with each other; for example, the application run by the mobile phone 110 shown in fig. 2b is a chat application such as WeChat, and the application run by the portable screen 120 is a video playing application such as Huawei Video, iQIYI, or Tencent Video.
(3) Mirror image screen projection
The picture displayed by a first terminal having the screen projection function is projected onto another second terminal having a display function. In this case, the picture displayed by the portable screen is a mirror image of the picture displayed by the mobile phone; for example, in fig. 2a, the applications run by both the mobile phone and the portable screen are video playing applications.
(4) Distributed soft bus
The distributed soft bus provides a unified distributed communication capability for the interconnection and intercommunication of various terminals and creates the conditions for imperceptible discovery between devices and zero-wait transmission. Specifically, the distributed soft bus allows terminals to invoke various functions over Wi-Fi or other wireless connections through a minimalist communication protocol technology. The minimalist communication protocol technology covers discovery, connection, networking (multi-hop ad hoc networking and multi-protocol hybrid networking), and transmission (a minimalist transmission protocol with diversified protocols and algorithms as well as intelligent perception and decision-making); it builds an intangible bus among the "1 + 8 + N" devices and has the characteristics of self-discovery, ad hoc networking, high bandwidth, and low latency. Here, "1" refers to the mobile phone; "8" refers to the in-vehicle device, smart speaker, earphone, watch/band, tablet, large screen, PC, and AR/VR device; and "N" generally refers to other Internet of Things (IoT) devices. Through the distributed soft bus, distributed services such as device virtualization, cross-device service invocation, multi-screen collaboration, and file sharing are carried out among full-scenario devices. Device virtualization can virtualize the functions of the various terminals connected through the soft bus into a file that can be shared, and the various functions can be assembled based on that file. In the embodiments of the application, the second terminal and the first terminal may be connected through a distributed soft bus.
(5) Touch screen
A touch screen is composed of a touch sensor and a display screen; it may also be referred to as a touch-control screen. In addition, the display screen may also simply be referred to as the screen.
To facilitate an understanding of the present application, a brief description of the prior art is provided herein:
In the related art, according to the screen projection technology, content displayed on the screen of a terminal having the screen projection function (a first terminal) can be projected onto another terminal having a display function (a second terminal) for display. For example, conference content, multimedia files, games, movies, or videos on the first terminal can be projected onto the screen of the second terminal for presentation, which brings the user a better experience and more convenience. During screen projection, the content displayed by the first terminal, which has the screen projection function and a smaller screen, is projected onto the second terminal, which has a larger screen, for display. Displaying on the terminal with the larger screen makes interaction, entertainment, and viewing more convenient for the user. For example, the picture displayed by a mobile phone is projected onto a portable screen or a television for display, so that the user can view the picture of the screen projection device on a terminal with a larger screen, which improves the user experience.
For example, a description will be given taking an example in which a mobile phone is used as a first terminal for screen projection, a television is used as a second terminal to be screen projected, and a screen displayed on a screen of the mobile phone is projected onto the television and displayed. When a user watches videos through a mobile phone or uses live broadcast software to carry out live broadcast and the like, pictures displayed by the mobile phone can be projected to a television to be displayed, and the videos or live broadcast contents are watched through the television. After a display interface of the mobile phone is projected on a television, interactive operation cannot be performed on a displayed picture of the mobile phone through the television side, and operations such as screen capture and screen recording need to be completed through a mobile phone end. Therefore, the screen projection effect is not very good for the user, which results in poor user experience.
In order to solve the above technical problem, embodiments of the present application provide the following technical solutions.
The cross-device interaction method provided by the embodiments of the application can be applied to the screen projection system 100 shown in fig. 1. The screen projection system 100 includes a first terminal 110 and a second terminal 120, and the second terminal 120 and the first terminal 110 can interact through a network, so that the first terminal 110 and the second terminal 120 can exchange data. Exemplarily, fig. 2a and 2b show one second terminal 120 connected with one first terminal 110; one first terminal 110 may also be connected with a plurality of second terminals 120, and fig. 2c and 2d show one first terminal 110 connected with two second terminals 120; likewise, a plurality of first terminals 110 may be connected with one second terminal 120, and fig. 2e shows two first terminals 110 connected with one second terminal 120. The network may include a cable network, a wired network, a fiber-optic network, a telecommunications network, an intranet, the Internet, a local area network (LAN), a wide area network (WAN), a wireless local area network (WLAN), a metropolitan area network (MAN), a public switched telephone network (PSTN), a Bluetooth network, a ZigBee network, near field communication (NFC), an in-device bus, an in-device line, a cable connection, and the like, or any combination thereof. It should be noted that the second terminal 120 and the first terminal 110 may be connected within the same local area network, connected to the same wireless local area network, or connected across local area networks via the Internet.
As an example, when the second terminal 120 and the first terminal 110 are logged in using the same account, the second terminal 120 and the first terminal 110 may communicate with each other through a Wide Area Network (WAN).
As another example, the second terminal 120 and the first terminal 110 may be connected to the same router. In this case, the second terminal 120 and the first terminal 110 form a local area network (LAN), and the devices in the local area network can communicate with each other through the router.
As yet another example, the second terminal 120 and the first terminal 110 both join a Wi-Fi network named "XXXXX", and the devices within the Wi-Fi network form a peer-to-peer network. For example, the connection between the first terminal 110 and the second terminal 120 may be established through the Miracast protocol, and data may be transmitted over the Wi-Fi network.
As still another example, the second terminal 120 and the first terminal 110 are interconnected through a switching device (e.g., a USB data line or a Dock device) to realize communication. For example, the connection between the second terminal 120 and the first terminal 110 may be established through a High Definition Multimedia Interface (HDMI), and data may be transmitted through an HDMI transmission line.
It should be appreciated that a distributed soft bus connection between the second terminal 120 and the first terminal 110 may be implemented through the network described above.
In the embodiment of the present application, according to the screen projection technology, a screen projection connection is established between the first terminal 110 and the second terminal 120, in other words, the content stored in the first terminal 110 can be projected into the second terminal 120 for displaying. In practical applications, when a screen is projected, the content stored in the first terminal 110 with a smaller screen is projected to the second terminal 120 with a relatively larger screen for displaying. For example, the first terminal may be a mobile phone or a tablet, and the second terminal may be a display device with weak service capability, such as a portable screen. In addition, when the screen is projected, the picture sent to the second terminal 120 by the first terminal 110 is adapted to the screen size of the second terminal 120, so that the second terminal 120 only needs to realize the display function, the data processing amount is reduced, and the display speed is ensured. Wherein the screen size of the second terminal 120 indicates the length and width of the screen of the second terminal 120.
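A small sketch of adapting the projected picture to the second terminal's screen size on the first terminal side is given below; the aspect-ratio-preserving policy and the class name are assumptions, since the embodiment only states that the picture sent to the second terminal matches that terminal's screen size.

```java
// Hypothetical sketch of fitting the source frame into the sink screen.
final class ScreenFit {
    /** Returns {scaledWidth, scaledHeight} preserving aspect ratio within the sink screen. */
    static int[] fit(int srcW, int srcH, int sinkW, int sinkH) {
        double scale = Math.min((double) sinkW / srcW, (double) sinkH / srcH);
        return new int[]{ (int) Math.round(srcW * scale), (int) Math.round(srcH * scale) };
    }

    public static void main(String[] args) {
        int[] size = fit(1080, 2340, 1920, 1080);   // a phone frame onto a landscape screen
        System.out.println(size[0] + " x " + size[1]);
    }
}
```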
In one example, the screen projection mode between the first terminal 110 and the second terminal 120 is mirror image screen projection, that is, the first terminal 110 projects the content it displays onto the second terminal 120 for display. For example, referring to fig. 2a, a user watching a video on the first terminal 110 may project the video picture displayed by the first terminal 110 onto the second terminal 120 and watch the video on the second terminal 120. For example, referring to fig. 2c, a user watching a video on the first terminal 110 may project the video picture displayed by the first terminal 110 onto two second terminals 120 and watch the video on the two second terminals 120, which is suitable for a scenario in which multiple people watch the video.
In another example, the screen projection mode between the first terminal 110 and the second terminal 120 is heterogeneous screen projection, that is, the pictures displayed by the first terminal 110 and the second terminal 120 are independent of each other; it can also be understood that the applications run by the first terminal 110 and by the second terminal 120 do not interfere with each other. For example, as shown in fig. 2b, the first terminal 110 displays a WeChat chat picture while the second terminal 120 displays a video picture. In other words, the second terminal 120 may display a picture that the first terminal 110 does not display. For example, referring to fig. 2b, the user chats on WeChat on the first terminal 110 and, at the same time, a video picture not displayed by the first terminal 110 may be projected onto the second terminal 120 so that the video is watched on the second terminal 120. For example, referring to fig. 2d, the user chats on WeChat on the first terminal 110, while a video picture not displayed by the first terminal 110 is projected onto one second terminal 120 and a music playing picture not displayed by the first terminal 110 is projected onto another second terminal 120, so that the requirements of different users are satisfied through the two second terminals 120. For example, referring to fig. 2e, user 1 chats on WeChat on one first terminal 110 and a video picture not displayed by that first terminal 110 may be projected onto the second terminal 120, while user 2 takes photos on another first terminal 110 and a music playing picture not displayed by that first terminal 110 may be projected onto the second terminal 120. As one possible case, fig. 2e shows the second terminal 120 displaying only the video picture of one first terminal 110; the music playing picture of the other first terminal 110 can then be displayed by switching screens (not shown in the figure). As another possible case, fig. 2f shows that the second terminal 120 can simultaneously display the video picture of one first terminal 110 and the music playing picture of the other first terminal 110.
It should be noted that the embodiments of the application are not intended to limit the screen projection mode between the first terminal 110 and the second terminal 120, which needs to be determined in combination with the actual scenario. The following description takes the screen projection connection between one first terminal 110 and one second terminal 120 as an example.
The first terminal 110 has the capability of processing a user's gesture operations. In the following, the process of event handling is described by taking as an example the case in which the first terminal 110 and the second terminal 120 both run the same operating system. Illustratively, the first terminal 110 is provided with a hardware layer, a kernel layer (Kernel), a system layer, an application framework layer, and an application layer. For a detailed description of these layers, reference is made to the later description of the system architecture and fig. 10a; they are introduced here merely for convenience in describing the process of event handling.
In one example, the Hardware layer (Hardware) is used for generating a corresponding Hardware interrupt signal according to the gesture operation of a user; the gesture operation is various operations of the first terminal 110 by the hand of the user. The gesture operation of the user specifically needs to be combined with hardware of a hardware layer of the first terminal 110. For example, the Hardware layer (Hardware) may include, but is not limited to, a display screen, a pressure Sensor, a distance Sensor, an acceleration Sensor, a keyboard, a Touch Sensor (Touch Sensor) and a smart Sensor Hub (Sensor Hub) shown in fig. 3, and the like. For example, the first terminal 110 has a key and a touch screen, and the gesture operation may be an operation on the key, such as pressing a switch key, a volume up key, a volume down key, and an operation on the touch screen, such as touching, clicking, sliding, tapping, and the like, which is not specifically limited in this embodiment of the present application.
In one example, the kernel layer (Kernel) is configured to receive and report the hardware interrupt signal generated by the hardware layer and to generate an input event from the hardware interrupt signal. It may include a driver layer, which converts the input from the hardware layer into a unified event form; in other words, it converts the hardware interrupt signal generated by the hardware layer into an input event. For example, when the gesture operation is a touch operation on the touch screen, the input event may include at least the touch coordinates and a time stamp of the touch operation, and may additionally include an event type such as slide or click; when the gesture operation is pressing a key, the input event may include at least the key value of the pressed key and may also include an event type such as short press or long press. The driver layer may include a plurality of drivers, such as a display driver, an audio driver, a camera driver, and a sensor driver. Exemplarily, fig. 3 shows a touch sensor driver (Touch Driver) and an input hub driver (Input Hub Driver). The kernel layer may also include an input core layer and an event processing layer. The input core layer is responsible for coordinating the driver layer and the event processing layer so that data can be transferred between them, and the event processing layer provides the input events obtained from the input core layer to user space. Of course, the kernel layer (Kernel) can also be understood as kernel space.
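The unified event form produced by the driver layer, as described above, could be pictured roughly as follows; the field names are assumptions and do not correspond to any particular kernel's actual structures.

```java
// Hypothetical sketch of the unified input event generated from a hardware interrupt.
final class RawInputEvent {
    enum Kind { TOUCH, KEY }

    final Kind kind;
    final long timestampMillis;   // time stamp of the operation
    // Touch fields (meaningful when kind == TOUCH)
    final float x, y;             // absolute touch coordinates
    final String touchType;       // e.g. "down", "up", "slide", "click"
    // Key fields (meaningful when kind == KEY)
    final int keyValue;           // key value of the pressed key
    final String pressType;       // e.g. "short press", "long press"

    RawInputEvent(Kind kind, long timestampMillis, float x, float y,
                  String touchType, int keyValue, String pressType) {
        this.kind = kind; this.timestampMillis = timestampMillis;
        this.x = x; this.y = y; this.touchType = touchType;
        this.keyValue = keyValue; this.pressType = pressType;
    }
}
```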
User space is used to read, process, and distribute the input events provided by the kernel layer (Kernel), and may include the system layer and the application framework layer. The embodiments of the application mainly relate to operations on the touch screen and on keys. The kernel layer's support for touch-screen-related events covers absolute coordinates and touch-down and touch-up events; user space currently uses only these events, and in order to improve the interactivity of the touch screen, specific instructions are often implemented in user space on top of these simple events. In addition, the embodiments of the application also relate to key operations, and the kernel layer (Kernel) can support the key-related events involved in the embodiments of the application without specific instructions having to be implemented in user space.
In one example, the system layer is used to process and distribute the input events provided by the kernel layer (Kernel). It may be the native framework layer (Native Framework) shown in fig. 3. It may include the input framework (InputFramework), may further include algorithms for recognizing the input events provided by the kernel layer, for example the knuckle algorithm shown in fig. 3 (which determines whether the force characteristics of a knuckle are met from the vibration frequency produced by the knuckle's gravitational acceleration, and whether a knuckle is acting on the touch screen from the touch area, so as to decide whether the contact is a knuckle action), and may further include applications (which execute the operation instruction corresponding to an input event). The InputFramework is mainly responsible for managing user events, specifically as follows: 1. It acquires various original event messages from the kernel layer (Kernel), including key, touch screen, mouse, and trackball events. 2. It preprocesses the events, which covers two aspects: on the one hand, an event is converted into a message event that the system can process; on the other hand, some special events, such as the home key, menu key, and power key, are handled. 3. It distributes the processed events to the individual application processes (applications in the system layer, the application framework layer, or the application layer). In practice, the kernel layer (Kernel) writes the input event into a device node, and the InputFramework reads the input event from the device node. In addition, if the system layer obtains new event information before distributing an event through the InputFramework, the new event information and the input event reported by the kernel layer (Kernel) can be repackaged into a new input event, which is then passed to the InputFramework.
In one example, the application framework layer may read, process, and distribute the input events provided by the system layer, and may be the Java Framework shown in fig. 3. For example, the application framework layer may include algorithms for recognizing the input events provided by the system layer, such as the gesture recognition (HwGestureAction) shown in fig. 3, and may further include applications (which execute the operation instructions corresponding to the input events). In addition, if the application framework layer processes an event and obtains new event information, the new event information and the input event reported by the system layer can be packaged again into a new input event. Exemplarily, the events reported by the application framework layer to the application layer should be events that can be processed by the applications in the application layer, and at least include gesture operations such as a finger joint double click, a finger joint drawing S, a finger joint drawing a closed graph, the hand hovering for 1 s at a distance of 30 cm from the touch screen, the hand gripping at a distance of 30 cm from the touch screen, sliding down 3 cm from the top of the touch screen, and the like.
In one example, the application layer may read and process the input event provided by the application framework layer, and execute an operation instruction corresponding to the input event distributed by the application framework layer. May be the Application shown in fig. 3.
In order to distinguish the input events of the kernel layer, the system layer and the application framework layer, the input event reported to the system layer by the kernel layer is used as a first event, the input event before entering the InputFramework in the system layer is used as a second event, and the input event distributed to the application layer by the application framework layer is used as a third event. It should be understood that when the system layer performs only event distribution, the first event and the second event may be understood as the same event, and when the application framework layer performs only event distribution, the second event and the third event may be understood as the same event.
For the operations related to the finger joint, i.e. the operations of the finger joint on the touch screen, it is noted that the touch sensor, the acceleration sensor and the finger joint algorithm are arranged in the first terminal 110 due to the particularity of the human finger joint structure. Illustratively, the finger joint algorithm is placed at the system level, such as the Native Framework (Native Framework) shown in fig. 3. When the finger joint knocks the screen, the touch sensor can sense the touch area of the finger joint, and the acceleration sensor senses the vibration frequency caused by the gravity acceleration of the finger joint; the knuckle algorithm may then determine whether it is a knuckle motion.
The workflow of the first terminal 110 for processing a finger joint operation of the user will be described below by taking a finger joint double click as an example. Specifically, as shown in fig. 3, when the user performs a finger joint double click on the touch screen of the first terminal 110, the smart Sensor Hub (Sensor Hub) in the Hardware layer (Hardware) processes the related data (Acc Rawdata), generated by the finger joint double click, that is collected by the acceleration sensor and the pressure sensor. Then, the data processed by the smart Sensor Hub (Sensor Hub) and the data collected by the Touch sensor (Touch Driver) are sent to the Kernel layer (Kernel) as hardware interrupt signals. The hardware interrupt signals are processed into a first event by the Input Hub Driver (Input Hub Driver) and the Touch sensor Driver (Touch Driver) in the Kernel layer (Kernel); the first event may include information such as the touch coordinates, the time stamp of the touch operation, the pressing pressure, and the vibration frequency of the gravitational acceleration generated by the tap, and is reported to the user space. The user space recognizes the first event through the finger joint algorithm in the Native Framework (Native Framework); when it is determined that the first event was generated by a finger joint, a finger joint identifier is attached to the first event, that is, the first event is tagged as a finger joint event, and the first event carrying the finger joint identifier is packaged into a second event, which is sent to the InputFramework. The InputFramework distributes the second event to the gesture recognition (HwGestureAction) in the Java Framework; the gesture recognition (HwGestureAction) can recognize the path track of the finger joint to determine the finger joint motion, packages the second event and the finger joint motion into a third event, and distributes the third event to the screenshot Application to trigger a screenshot.
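For illustration only, the following sketch renders the tag-and-repackage steps described above in simplified Java; all class, field, and method names are assumptions and do not correspond to actual framework code.

```java
// Hypothetical sketch of the first -> second -> third event packaging for a
// finger joint double click.
public final class KnuckleEventPipelineSketch {

    static final class FirstEvent {        // reported by the kernel layer
        float x, y;
        long timestampMs;
        float pressure;
        float vibrationHz;                 // vibration frequency from the finger joint impact
        float touchArea;                   // touch area sensed by the touch sensor
    }

    static final class SecondEvent {       // first event plus the finger joint identifier
        FirstEvent wrapped;
        boolean fingerJointTag;
    }

    static final class ThirdEvent {        // second event plus the recognized motion
        SecondEvent wrapped;
        String motion;                     // e.g. "FINGER_JOINT_DOUBLE_CLICK"
    }

    // Native Framework: the finger joint algorithm decides whether the first event
    // was generated by a finger joint and tags it before passing it to the InputFramework.
    SecondEvent tagIfFingerJoint(FirstEvent first) {
        SecondEvent second = new SecondEvent();
        second.wrapped = first;
        second.fingerJointTag = looksLikeFingerJoint(first.vibrationHz, first.touchArea);
        return second;
    }

    // Java Framework gesture recognition: track the path of the finger joint and
    // package the recognized motion into a third event for the application layer.
    ThirdEvent recognizeMotion(SecondEvent second) {
        ThirdEvent third = new ThirdEvent();
        third.wrapped = second;
        third.motion = second.fingerJointTag ? "FINGER_JOINT_DOUBLE_CLICK" : "UNKNOWN";
        return third;
    }

    private boolean looksLikeFingerJoint(float vibrationHz, float touchArea) {
        // Placeholder check; the actual force/area criteria are not specified here.
        return vibrationHz > 0f && touchArea > 0f;
    }
}
```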
In the embodiment of the application, after the first terminal 110 and the second terminal 120 establish the screen-projection connection, the user performs a target operation on the second terminal 120; the target operation may be a screen capture operation, a screen recording operation, a finger joint operation, or an air operation, and the first terminal 110 responds to the target operation and executes the operation instruction corresponding to the target operation, so that cross-device user interaction is realized and the use experience of the user is improved. The finger joint operation may include a finger joint drawing S (the operation instruction is a scrolling screen capture), a finger joint drawing a closed graph (the operation instruction is a partial screen capture), a finger joint drawing a horizontal line (the operation instruction is splitting the screen), a finger joint drawing a letter (the operation instruction is opening an application program, for example, W opens the weather application, C opens the camera, and e opens the browser), a finger joint double click (the operation instruction is a global screen capture), a double finger joint double click (the operation instruction is a screen recording), and the like. The air operation may be an air grip after a hand-shaped icon appears (the operation instruction is a global screen capture), a left swipe (the operation instruction is turning the page to the left), a right swipe (the operation instruction is turning the page to the right), an upward swipe (the operation instruction is sliding the screen upward), a downward swipe (the operation instruction is sliding the screen downward), an air press (the operation instruction is pausing or continuing music playback), and the like. The screen capture operation includes simultaneously pressing the power key and the volume down key, a finger joint drawing S, a finger joint drawing a closed graph, a finger joint double click, clicking the screen capture button in the pull-down menu, and the like; the screen recording operation includes a double finger joint double click and clicking the screen recording button in the pull-down menu. In addition, other operations are also involved in the screen capture operation or the screen recording operation, and these operations can also generate corresponding operation events. Illustratively, a slide operation is performed on the second terminal 120 to display the pull-down menu screen. Illustratively, various buttons in the screenshot editing screen are clicked, such as the save, share, free graphic, heart, rectangle, and oval buttons in the screenshot editing screen shown in fig. 7d, and the pen, color, thickness, share, scribble, mosaic, eraser, and scrolling screen capture buttons in the screenshot editing screen shown in fig. 7e.
The target operations that the user can make to the second terminal 120 are typically all the target operations supported by the first terminal 110. Of course, in practical applications, the target operation that the second terminal 120 can achieve requires the support of the hardware layer and Kernel layer (Kernel) of the second terminal 120.
For the second terminal 120, the second terminal 120 should have some or all of the hardware in the hardware layer of the first terminal 110 and have the drivers of the corresponding hardware, so that the second terminal 120 can generate events and the generated events can be processed by the first terminal 110. In addition, the second terminal 120 may also have hardware that the first terminal 110 does not have, to implement its own functions. In this embodiment, the user performs a target operation on the second terminal 120, the second terminal 120 sends the generated event to the first terminal 110, and the first terminal 110 determines the operation instruction of the event and executes the operation instruction. For convenience of distinction, an event transmitted from the second terminal 120 to the first terminal 110 is referred to as an operation event, and the operation event is taken as an example in the following description. For the operation instruction, reference is made to the above description, and details are not repeated here. Considering the balance between the performance and the response speed of the first terminal 110 and the second terminal 120, the whole processing procedure of an event needs to be divided, so that the second terminal 120 can perform part of the processing of the event and the data processing amount of the first terminal 110 is reduced; it is therefore critical to control the information contained in the operation event. If the operation event contains too much information, on one hand, the data transmission amount between the first terminal 110 and the second terminal 120 is large, which reduces the data transmission efficiency; on the other hand, the data processing amount of the second terminal 120 is large, which reduces the data processing efficiency of the second terminal 120. Thus, the information contained in the operation event may have a large impact on the performance and the response speed of the first terminal 110 and the second terminal 120.
It should be noted that, when the screen projection mode between the first terminal 110 and the second terminal 120 is different-source screen projection, the first terminal 110 executes the operation instruction of the operation event based on the screen displayed by the second terminal 120. In practical applications, the first terminal 110 may establish a virtual screen for the second terminal 120, the size of the virtual screen being adapted to the size of the screen of the second terminal 120, and correspondingly the first terminal 110 executes the operation instruction based on the picture displayed on the virtual screen of the second terminal 120. Of course, when the screen projection mode between the first terminal 110 and the second terminal 120 is mirror image screen projection, the first terminal 110 may execute the operation instruction based on the screen it displays itself.
It is noted that the target operations that the user can perform on the second terminal 120 are mainly based on hardware common to the first terminal 110 and the second terminal 120. For example, when the first terminal 110 and the second terminal 120 both have touch screens, the user may perform gesture operations on the second terminal 120, including clicking, pressing, dragging, sliding, tapping, drawing graphics, and the like on the touch screen, and may also perform air operations with respect to the touch screen. Illustratively, when the first terminal 110 and the second terminal 120 each have keys, the operations that the user can perform on the second terminal 120 include a short press and a long press on a key. The following description takes as an example that the first terminal 110 and the second terminal 120 each have a touch screen and keys. In addition, in some possible implementations, the first terminal 110 and the second terminal 120 may also have hardware such as a mouse and a keyboard.
Considering the process from the user performing a target operation on the second terminal 120 to the first terminal 110 responding to that target operation, the key is to process the hardware interrupt signal generated by the hardware layer so as to understand the user's target operation, for example which key is pressed by the user, which interface element in which screen is clicked by the user, and which part of the user's hand (finger pad, finger joint, whole hand) performs the operation, so that the first terminal 110 can quickly understand the target operation, and the response speed and performance of the first terminal 110 are ensured.
Illustratively, the target operation is a pressing operation on a key of the second terminal 120. The operation event indicates that the gesture operation is the user pressing a key of the second terminal 120, and may include the key value, key time information, and the device identifier of the second terminal 120; the first terminal 110 then recognizes the operation event and determines the corresponding operation instruction. The key value indicates which key is pressed by the user, and may correspond to a home key, a return key, a power key, a volume key, or the like; different keys have different key values, so that different keys are distinguished. The key time information may be the key press duration, or description information indicating the key press duration such as "short press" or "long press". Here, the operation event generated by the second terminal 120 may be the first event generated by the Kernel layer (Kernel).
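For illustration only, the following is a minimal sketch of the key operation event content described above; the field names and the long-press threshold are assumptions used purely to illustrate what the second terminal might send.

```java
// Hypothetical sketch of a key operation event sent from the second terminal.
public final class KeyOperationEvent {
    public final int keyValue;          // which key was pressed (power, volume, home, ...)
    public final long pressDurationMs;  // key time information (short press vs long press)
    public final String deviceId;       // device identifier of the second terminal

    public KeyOperationEvent(int keyValue, long pressDurationMs, String deviceId) {
        this.keyValue = keyValue;
        this.pressDurationMs = pressDurationMs;
        this.deviceId = deviceId;
    }

    // Example of how the first terminal could map the event to an instruction.
    public String toInstruction() {
        boolean longPress = pressDurationMs > 500;   // assumed threshold
        return (longPress ? "LONG_PRESS_" : "SHORT_PRESS_") + keyValue;
    }
}
```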
Illustratively, the target operation is a click operation on a button in the pull-down menu screen displayed by the second terminal 120. The operation event indicates that the gesture operation is the user clicking the screen capture button or the screen recording button in the pull-down menu screen displayed on the touch screen of the second terminal 120, and may include the identifier of the screen capture button or the screen recording button clicked by the user and the device identifier of the second terminal 120. The first terminal 110 can directly recognize the identifier of the screen capture button or the screen recording button in the operation event and determine the screen capture instruction or the screen recording instruction, so that the response speed and the performance are ensured. It can be understood that the second terminal 120 recognizes the button in the pull-down menu screen, so that the first terminal 110 can directly determine the operation instruction corresponding to the button indicated by the operation event. Correspondingly, the second terminal 120 stores the layout information of the pull-down menu screen, such as the position information of each button in the pull-down menu and the bound identifier, so that the second terminal 120 knows the meaning of each button in the pull-down menu screen it displays. Here, the stored layout information of the pull-down menu screen is adapted to the screen size of the second terminal 120, not to the screen size of the first terminal 110. It should be noted that, when the user performs a click operation in the area where the screen capture button and the screen recording button are located in the pull-down menu on the touch screen of the second terminal 120, if the meaning of the button clicked by the user is recognized by the first terminal 110, the related data expressed in the screen coordinate system of the second terminal 120 needs to be converted into the screen coordinate system of the first terminal 110 during the recognition; if the meaning of the button clicked by the user is recognized by the second terminal 120, no conversion between the screen coordinate system of the second terminal 120 and the screen coordinate system of the first terminal 110 is needed, the amount of data to be processed is reduced, and the data processing efficiency can be ensured.
In one example, when the first terminal 110 and the second terminal 120 establish the screen-projection connection, the first terminal 110 may send the pull-down menu screen and the layout information of the pull-down menu screen to the second terminal 120, and the second terminal 120 stores the layout information. Subsequently, the first terminal 110 sends an operation instruction for displaying the pull-down menu to the second terminal 120, and the second terminal 120 calls the stored pull-down menu screen to display it. If the user then clicks the screen capture button or the screen recording button of the pull-down menu screen displayed by the second terminal 120, the second terminal 120 can directly call the layout information of the pull-down menu screen to determine the identifier of the screen capture button or the screen recording button.
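For illustration only, the following sketch shows how the second terminal could use the stored layout information to map a touch point to a button identifier, so that only the identifier (not raw coordinates) is sent to the first terminal; the class and identifier names are assumptions.

```java
// Hypothetical sketch: hit-testing against the stored pull-down menu layout.
import android.graphics.Rect;
import java.util.LinkedHashMap;
import java.util.Map;

public final class PullDownMenuLayout {
    // Button identifier -> bounds in the second terminal's own screen coordinates.
    private final Map<String, Rect> buttonBounds = new LinkedHashMap<>();

    public void put(String buttonId, Rect bounds) {
        buttonBounds.put(buttonId, bounds);
    }

    // Returns e.g. "screenshot" or "screen_record", or null if no button was hit.
    public String hitTest(int x, int y) {
        for (Map.Entry<String, Rect> entry : buttonBounds.entrySet()) {
            if (entry.getValue().contains(x, y)) {
                return entry.getKey();
            }
        }
        return null;
    }
}
```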
In some possible cases, the operation event may also indicate that the user clicks a button in the screenshot editing screen displayed on the touch screen of the second terminal 120, and the operation event generated by the second terminal 120 includes the identifier of the button clicked by the user in the screenshot editing screen. Correspondingly, the second terminal 120 stores the layout information of the screenshot editing screen, so that the second terminal 120 knows the meaning of each button in the screenshot editing screen it displays.
Illustratively, the target operation is a slide operation in which the user slides down from the top of the screen of the second terminal 120. The operation event indicates that the gesture operation is a slide operation of the user sliding from top to bottom on the touch screen of the second terminal 120, and may include the sliding direction, the sliding distance, the sliding duration, the device identifier of the second terminal 120, and the like, so that the first terminal 110 can directly determine the operation instruction, corresponding to the operation event, of displaying the pull-down menu.
As will be understood by those skilled in the art, for the click operation of the user clicking the screen capture button in the pull-down menu and the slide operation of the user sliding down from the top of the screen, the operation event is determined based on operation information indicating the finger's action on the screen; for example, the operation information may be the coordinates, in the screen coordinate system, of the plurality of touched pixel points on the touch screen of the second terminal 120 and the touch time. The screen coordinate system is the coordinate system of the touch screen of the second terminal 120, and a touched pixel point can be understood as a pressed pixel point.
The operations described above, namely pressing a key of the second terminal 120, clicking a button in the pull-down menu screen displayed by the second terminal 120, and sliding down from the top of the screen of the second terminal 120, are relatively simple gesture operations. For more complicated gesture operations, such as finger joint operations and air operations, the operation events usually contain more information, and correspondingly the data processing amount is also very large. Finger joint operations and air operations involve not only recognition of the finger joint or of the hand in the air, but also recognition of the finger joint motion or the air motion of the hand.
In one example, the second terminal 120 is used to recognize the finger joint or the hand in the air; the first terminal 110 is used to recognize the finger joint motion or the air motion of the hand, and determine the operation instruction based on the finger joint motion or the air motion of the hand.
Illustratively, for operations of the user's finger joints on the touch screen of the second terminal 120, such as a finger joint double click, a finger joint drawing S, and a finger joint drawing a closed graph, in order to balance the performance and the response speed of the first terminal 110 and the second terminal 120, as shown in fig. 5, the system layer (Native Framework) of the first terminal 110 may be taken as the division node and may directly perform event distribution. Correspondingly, in this embodiment of the application, the operation event sent by the second terminal 120 to the first terminal 110 may include the finger joint identifier, finger joint touch information, and finger joint pressing information. Correspondingly, the first terminal 110 recognizes the operation event to determine the finger joint motion of the user with respect to the screen displayed by the second terminal 120, and determines the operation instruction corresponding to the operation event based on the finger joint motion, for example, capturing the screen, recording the screen, splitting the screen, or opening an application; see above for details. The finger joint touch information may include the coordinates, in the screen coordinate system, of the plurality of pixel points on the screen of the second terminal 120 touched by the finger joint and the touch time, and may also include the touch area, where the touch area can be understood as the area of the region formed by contiguous pixel points at the same touch time. The finger joint pressing information may include the vibration frequency generated by the gravitational acceleration of the finger joint. Illustratively, in the process of the user drawing S with a finger joint, the touch position and touch time of each pixel point on the screen corresponding to the graphic S are recorded. In addition, the operation event may include the same pixel point with different touch times; for example, in the process of the user drawing a closed graph with a finger joint, the pixel points of the starting point and the ending point are the same, but the touch times are different.
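For illustration only, the following is a minimal sketch of the finger joint operation event content listed above; the field and class names are assumptions used purely to show what such an event might carry.

```java
// Hypothetical sketch of a finger joint operation event sent from the second terminal.
import java.util.List;

public final class KnuckleOperationEvent {
    public static final class TouchPoint {
        public final int x, y;     // coordinates in the second terminal's screen coordinate system
        public final long timeMs;  // touch time
        public TouchPoint(int x, int y, long timeMs) { this.x = x; this.y = y; this.timeMs = timeMs; }
    }

    public final boolean fingerJointTag;      // finger joint identifier
    public final List<TouchPoint> touchTrack; // finger joint touch information (the path)
    public final float touchArea;             // area of the touched region
    public final float vibrationHz;           // vibration frequency from the finger joint impact
    public final String deviceId;             // device identifier of the second terminal

    public KnuckleOperationEvent(boolean fingerJointTag, List<TouchPoint> touchTrack,
                                 float touchArea, float vibrationHz, String deviceId) {
        this.fingerJointTag = fingerJointTag;
        this.touchTrack = touchTrack;
        this.touchArea = touchArea;
        this.vibrationHz = vibrationHz;
        this.deviceId = deviceId;
    }
}
```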
Note that, in order to ensure that the first terminal 110 can recognize the finger joint identifier, in one example, the code with which the second terminal 120 generates the operation event is ported from the first terminal 110; in another example, the second terminal 120 requests the finger joint identifier from the first terminal 110. Illustratively, when the second terminal 120 recognizes, through the finger joint algorithm, that the first event generated by the kernel layer was generated by a finger joint, it requests the finger joint identifier from the first terminal 110.
In one example, the second terminal 120 is used to recognize the finger joint motion or the air motion of the hand, and the first terminal 110 is used to determine the operation instruction based on the recognized finger joint motion or air motion of the hand.
Illustratively, for an air operation of the user's hand with respect to the touch screen of the second terminal 120, for example the hand hovering for 1 s at a distance of 20-40 cm from the touch screen of the second terminal 120, or the hand gripping at a distance of 20-40 cm from the touch screen of the second terminal 120, the operation event generated by the second terminal 120 may include the air motion, so that the first terminal 110 can directly read the air motion in the operation event and determine the operation instruction corresponding to the air motion.
For the division of the data processing between the first terminal 110 and the second terminal 120, the processing capability of the second terminal 120 may be considered: when the processing capability of the second terminal 120 is relatively strong, the second terminal 120 may recognize the air motion of the hand and/or the finger joint motion; otherwise, the second terminal 120 may only recognize the hand in the air or the finger joint.
Notably, for a finger joint operation or an air operation of the hand, in one example the operation instruction corresponding to the operation may be determined directly based on the finger joint motion or the air motion of the hand, without considering the content of the screen displayed by the second terminal 120. Exemplarily, for a finger joint double click, the operation instruction is a screen capture; for a finger joint drawing S, the operation instruction is a scrolling screen capture. In another example, the operation instruction corresponding to the operation needs to be determined based on the finger joint motion or the air motion of the hand together with the content of the screen displayed by the second terminal 120. Illustratively, when the screen displayed by the second terminal 120 is a music playing screen, the operation instruction of an air press is to pause playback. Here, the second terminal 120 may recognize only whether a finger joint or a hand in the air is present, or may recognize the finger joint motion and the air motion of the hand; the first terminal 110 determines the operation instruction based on the motion of the finger joint or of the hand and the screen displayed by the second terminal 120.
It should be understood that the above-mentioned operation events are only examples and do not limit the present application, as long as the balance between the performance and the response speed of the first terminal 110 and the second terminal 120 can be ensured. It should also be understood that, in the embodiment of the present application, the recognition process of the target operation by the first terminal 110 does not need to consider the difference in screen size between the first terminal 110 and the second terminal 120. In order to ensure the reliability of the event interaction between the first terminal 110 and the second terminal 120, the program related to the operation event in the second terminal 120 is preferably ported from the first terminal 110.
In addition, the target operations that the user can perform on the second terminal 120 need to be determined based on the hardware layer of the second terminal 120, and the program required for obtaining the operation event corresponding to an operation is ported, so that the second terminal 120 can obtain the operation event and the first terminal 110 can then respond to the operation event of the second terminal 120 and execute the operation instruction corresponding to the operation event. In this way, cross-device screen operation is realized, cross-device usage scenarios can be handled, and the user experience is improved.
For example, if the second terminal 120 can identify the finger joint, please refer to fig. 5, the second terminal 120 has a smart Sensor Hub (Sensor Hub), a Touch Sensor (Touch Sensor), a display screen (not shown in the figure) in a Hardware layer (Hardware), an Input Hub Driver (Input Hub Driver) and a Touch Sensor Driver (Touch Driver) in a Kernel layer (Kernel), a finger joint algorithm in a local Framework (Native Framework) in a user space, a front end Framework (Java/Js UI Framework), and Application.
For example, if the second terminal 120 can recognize keys, please refer to fig. 6: the second terminal 120 has an Input subsystem. The Input subsystem includes the above-mentioned driver layer, input core layer, and event processing layer; for the details of the Kernel layer (Kernel), reference is made to the description above, which is not repeated here.
In addition, by reducing the data processing amount of the first terminal 110, the balance of the data processing efficiency between the first terminal 110 and the second terminal 120 is realized, which is beneficial to improving the processing efficiency of the interactive operation of the picture of the user and improving the use experience of screen projection.
It should be noted that, in order to implement the screen-projection connection between the second terminal 120 and the first terminal 110, in practical applications a Sink-side cooperative application is usually installed on the second terminal 120, and this application handles the processing of the second terminal 120 for the screen-projection connection and the communication with the first terminal 110. Correspondingly, a Source-side cooperative application is installed in the first terminal 110, and this application handles the processing of the first terminal 110 for the screen-projection connection and the communication with the second terminal 120. In addition, since a cooperative application can interact with the applications in the application layer, the cooperative application of the first terminal 110 can process some special operation events, for example a key press event or a click on a button in the pull-down menu: the cooperative application in the first terminal 110 can directly distribute the operation event to the corresponding application, and the application responds to the operation event. For example, referring to fig. 6, when the user performs a key operation on the second terminal 120 side, the Input subsystem in the second terminal 120 processes the key event, recognizes the screen capture key, and then posts the key event to the top window; the Sink-side cooperative application responds to the key event and transmits it to the Source-side cooperative application on the first terminal 110 side. After the Source-side cooperative application recognizes the key event, the corresponding screen capture service is triggered. In addition, for operation events that need to be distributed by the system layer, for example finger-joint-related events, as shown in fig. 5, the event interaction between the first terminal 110 and the second terminal 120 may be performed through a communication module in the system layer (Native Framework), which may be understood, for example, as a physical communication channel.
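For illustration only, the following sketch shows one way the Sink-side cooperative application could forward an operation event to the Source-side cooperative application; the message format, host, and port are assumptions, not a defined protocol of the embodiment.

```java
// Hypothetical sketch: forwarding a serialized operation event from the Sink side
// to the Source side over a plain socket.
import java.io.DataOutputStream;
import java.io.IOException;
import java.net.Socket;

public final class SinkEventForwarder {
    private final String sourceHost;
    private final int sourcePort;

    public SinkEventForwarder(String sourceHost, int sourcePort) {
        this.sourceHost = sourceHost;
        this.sourcePort = sourcePort;
    }

    // eventType e.g. "KEY"; payload e.g. "keyValue=25;duration=120;deviceId=display2"
    public void forward(String eventType, String payload) throws IOException {
        try (Socket socket = new Socket(sourceHost, sourcePort);
             DataOutputStream out = new DataOutputStream(socket.getOutputStream())) {
            out.writeUTF(eventType);
            out.writeUTF(payload);
            out.flush();
        }
    }
}
```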
Taking screen capture as an example, the response of the first terminal 110 to the gesture operation of the second terminal 120 is described below.
In one embodiment, the screen projection mode of the first terminal 110 and the second terminal 120 is mirror image screen projection, and the user triggers a screen capture on the second terminal 120; the screen capture implemented by the first terminal 110 in response to the gesture operation on the second terminal 120 mainly includes the following:
the first terminal 110, in response to the screen capture trigger from the second terminal 120, acquires the display picture on the Display of the first terminal 110 by calling the SurfaceControl function family interface through the installed screenshot application, calls the SurfaceFlinger drawing function through a native framework interface, draws the content cached for the Display into a Bitmap file, and returns the Bitmap file to the screenshot application, thereby completing the screen capture; a screenshot preview picture is then displayed on both the first terminal 110 and the second terminal 120.
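For illustration only, the SurfaceControl screenshot interfaces referred to above are internal Android APIs, so the sketch below stands in for them with a hypothetical captureDisplayToBitmap() wrapper; it shows only the shape of the flow, not the actual implementation.

```java
// Conceptual sketch: capture the Display content into a Bitmap and hand it back
// to the screenshot application to build the screenshot preview picture.
import android.graphics.Bitmap;

public final class MirrorScreenshotSketch {

    /** Hypothetical wrapper over the internal SurfaceControl screenshot interface. */
    public interface DisplayCapture {
        Bitmap captureDisplayToBitmap();
    }

    private final DisplayCapture capture;

    public MirrorScreenshotSketch(DisplayCapture capture) {
        this.capture = capture;
    }

    // Triggered when the screen capture event from the second terminal is recognized.
    public Bitmap takeScreenshot() {
        Bitmap screenshot = capture.captureDisplayToBitmap();
        // The Bitmap is returned to the screenshot application, which then generates
        // the screenshot preview picture shown on both terminals.
        return screenshot;
    }
}
```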
In one embodiment, the screen projection mode of the first terminal 110 and the second terminal 120 is different-source screen projection, and the user triggers a screen capture on the second terminal 120; the screen capture implemented by the first terminal 110 in response to the gesture operation on the second terminal 120 mainly includes the following:
(1) Identifying the trigger source displayId.
Here, the trigger source displayId may be understood as a device identification of the second terminal 120.
(2) A newly added SurfaceControl interface is called to obtain the screen capture picture of the specified displayId.
Here, the first terminal 110 internally caches the picture displayed by the second terminal 120 corresponding to the displayId.
(3) The screenshot interaction animation or screenshot interaction picture is displayed on the second terminal 120 corresponding to the trigger source displayId.
Here, the first terminal 110 controls the second terminal 120 to display the screen shot interactive animation and the screen shot interactive screen. For example, the screenshot interaction screen may be the screenshot preview screen shown in fig. 7a, 7b, 7c, and 7f, the screenshot editing screen shown in fig. 7d and 7e, the screen where the gesture icon 121 shown in fig. 7f is located, the screenshot preview screen shown in fig. 8a and 8b, and the screenshot interaction animation may be the screen scrolling animation shown in fig. 7 e.
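For illustration only, the sketch below strings steps (1)-(3) together; captureDisplay() and sendPictureToDisplay() are hypothetical placeholders for the internal SurfaceControl interface and the screen-projection channel, not actual APIs.

```java
// Conceptual sketch of the different-source screen capture flow keyed by displayId.
import android.graphics.Bitmap;

public final class HeterogeneousScreenshotSketch {

    public interface DisplayCapture { Bitmap captureDisplay(int displayId); }
    public interface ProjectionChannel { void sendPictureToDisplay(int displayId, Bitmap picture); }

    private final DisplayCapture capture;
    private final ProjectionChannel channel;

    public HeterogeneousScreenshotSketch(DisplayCapture capture, ProjectionChannel channel) {
        this.capture = capture;
        this.channel = channel;
    }

    public void onScreenshotTriggered(int triggerDisplayId) {
        // (1) the trigger source displayId identifies the second terminal
        // (2) capture the picture cached for that displayId
        Bitmap screenshot = capture.captureDisplay(triggerDisplayId);
        // (3) show the screenshot interaction picture on the second terminal
        channel.sendPictureToDisplay(triggerDisplayId, screenshot);
    }
}
```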
Referring to fig. 12, a screen capture implementation between the first terminal 110 and the second terminal 120 is further illustrated.
1. Through the device management of the first terminal 110, the connection with the second terminal 120 is established, and the graphics control and the display of the second terminal 120 are connected.
After the connection, the first terminal 110 may control the display of the second terminal 120.
After the first terminal 110 establishes the screen-projecting connection with the second terminal 120, the picture stored in the first terminal 110 may be projected onto the second terminal 120 for display, and the user may perform corresponding operations on the second terminal 120. Of course, in this embodiment, the details of the operations related to the touch screen and the keys that can be supported by the second terminal 120 are referred to above, and are not described here in too much detail.
2. The user performs an operation that triggers a screen capture on the second terminal 120; the interaction recognition of the second terminal 120 can recognize the operation and transmit the device ID, which triggers the screen capture service of the first terminal 110.
Here, the device ID may be understood as the above displayId. In practical application, when a user performs a target operation of triggering screen capture on a touch screen or a key of the second terminal 120, the first terminal 110 may determine a screen capture instruction corresponding to the target operation, and trigger a screen capture service, where the screen capture service may capture a screen displayed by the second terminal based on the screen capture instruction and the device ID to obtain a screen capture interactive screen, such as a screen capture preview screen and a screen capture edit screen, and in addition, multiple screen capture interactive screens may also form a screen capture interactive animation. Exemplarily, fig. 7a, 7b, 7c, 7f, 8a, and 8b illustrate screenshot preview screens, fig. 7d and 7e illustrate screenshot edit screens, and fig. 7e illustrates a screenshot interaction animation generated by a screen slide.
3. The screen capture service of the first terminal 110 transmits the screen capture interactive screen to the second terminal 120 so that the second terminal displays the screen capture interactive screen.
Here, when the screenshot interaction picture is a screenshot editing picture, if the user clicks a save button, the first terminal 110 saves the screenshot, and a screenshot preview picture is not displayed; when the user clicks the delete button, the first terminal 110 deletes the screenshot.
Taking screen recording as an example, the response of the first terminal 110 to the gesture operation of the second terminal 120 is further described below.
In one embodiment, the screen projection mode of the first terminal 110 and the second terminal 120 is mirror image screen projection, and the user triggers a screen recording on the second terminal 120; the screen recording implemented by the first terminal 110 in response to the screen recording operation on the second terminal 120 mainly includes the following:
as shown in fig. 4, the first terminal 110 renders its image onto a specified Surface by using the MediaProjection interface, which mainly includes: creating a VirtualDisplay through the MediaProjection obtained from the MediaProjectionManager; the Display of the first terminal 110 can then be "projected" onto the VirtualDisplay; the VirtualDisplay renders the image into a Surface created from the MediaCodec encoder, so that the images displayed by the first terminal 110 are automatically fed to the MediaCodec encoder; finally, MediaMuxer encapsulates the image metadata obtained from MediaCodec and outputs it to an MP4 file, thereby obtaining the screen recording file. In addition, the first terminal 110 and the second terminal 120 both display the same screen recording picture.
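For illustration only, the following condensed sketch shows the MediaProjection → VirtualDisplay → MediaCodec → MediaMuxer chain described above; the encoder parameters are example assumptions, and the encoder drain loop (dequeueOutputBuffer / writeSampleData) and error handling are omitted.

```java
// Sketch of a mirror screen recorder built on public Android media APIs.
import android.hardware.display.DisplayManager;
import android.hardware.display.VirtualDisplay;
import android.media.MediaCodec;
import android.media.MediaCodecInfo;
import android.media.MediaFormat;
import android.media.MediaMuxer;
import android.media.projection.MediaProjection;
import android.view.Surface;
import java.io.IOException;

public final class MirrorScreenRecorderSketch {

    public void start(MediaProjection projection, int width, int height, int dpi,
                      String outputMp4Path) throws IOException {
        MediaFormat format = MediaFormat.createVideoFormat(MediaFormat.MIMETYPE_VIDEO_AVC, width, height);
        format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
                MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);
        format.setInteger(MediaFormat.KEY_BIT_RATE, 4_000_000);   // assumed bit rate
        format.setInteger(MediaFormat.KEY_FRAME_RATE, 30);        // assumed frame rate
        format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1);

        MediaCodec encoder = MediaCodec.createEncoderByType(MediaFormat.MIMETYPE_VIDEO_AVC);
        encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
        Surface inputSurface = encoder.createInputSurface();
        encoder.start();

        // The Display content is "projected" onto the VirtualDisplay; its output
        // surface is the encoder's input surface, so frames are fed automatically.
        VirtualDisplay display = projection.createVirtualDisplay(
                "mirror-recording", width, height, dpi,
                DisplayManager.VIRTUAL_DISPLAY_FLAG_AUTO_MIRROR,
                inputSurface, null, null);

        // MediaMuxer packages the encoded samples drained from the encoder into MP4.
        MediaMuxer muxer = new MediaMuxer(outputMp4Path, MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);
    }
}
```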
In one embodiment, the screen projection mode of the first terminal 110 and the second terminal 120 is different-source screen projection, and the user triggers a screen recording on the second terminal 120; the screen recording implemented by the first terminal 110 in response to the gesture operation on the second terminal 120 mainly includes the following:
(1) A VirtualDisplay of the trigger source displayId is created through the MediaProjection obtained from the MediaProjectionManager, and an AudioRecord of the trigger source displayId is created.
Here, the trigger source displayId may be understood as a device identification of the second terminal 120. After the screen recording is successfully triggered, the AudioRecord of the trigger source displayId acquires the sound signal collected by the microphone on the second terminal 120.
(2) MediaMuxer encapsulates and outputs the image metadata of displayId obtained from MediaCodec to an MP4 file, thereby obtaining a screen-recording file.
Here, the first terminal 110 controls the second terminal 120 to display a plurality of screen recording pictures, in which the screen recording time is displayed in the screen recording icon; for example, see the screen recording pictures and the screen recording icon 122 shown in fig. 8a and 8b.
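For illustration only, the following minimal sketch shows creating the AudioRecord mentioned in step (1); the sample rate and format are assumed values, and how the platform routes the second terminal's microphone to this record for a given displayId is internal and not shown here.

```java
// Sketch: create an AudioRecord for the sound track of the screen recording.
import android.media.AudioFormat;
import android.media.AudioRecord;
import android.media.MediaRecorder;

public final class RecordingAudioSketch {
    public static AudioRecord createAudioRecord() {
        int sampleRate = 44100;                                  // assumed sample rate
        int channel = AudioFormat.CHANNEL_IN_MONO;
        int encoding = AudioFormat.ENCODING_PCM_16BIT;
        int bufferSize = AudioRecord.getMinBufferSize(sampleRate, channel, encoding);
        return new AudioRecord(MediaRecorder.AudioSource.MIC,
                sampleRate, channel, encoding, bufferSize);
    }
}
```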
In addition, screen capture and screen recording are only examples of the operation instruction and do not constitute a limitation. The following description continues to take screen capture and screen recording as examples.
The first terminal in the embodiment of the present application may be a terminal that has screen projection (Source) capability and processes input events, such as a mobile phone, a tablet computer, a Personal Digital Assistant (PDA), a desktop computer, a wearable device, a notebook, and the like. The second terminal in the embodiment of the present application may be a terminal that has the capability of generating input events and at least has screen projection receiving (Sink) capability and image display capability, for example a portable screen or a tablet computer. In addition, the second terminal may also have sound output capability and sound collection capability. The specific types of the first terminal and the second terminal are not limited here and may be determined according to the actual scenario. For example, when a mobile phone projects its screen onto a convenience screen in an actual scene, the mobile phone is the first terminal and the convenience screen is the second terminal. In practical applications, the first terminal and the second terminal have the same operating system, including but not limited to iOS, android, microsoft, or another operating system.
Next, a specific scenario of different-source screen projection in the embodiment of the present application is described with a mobile phone as the first terminal 110 and a convenience screen as the second terminal 120. Here, the convenience screen 120 has ported the finger joint recognition algorithm, so the finger joint events are generated by the convenience screen.
In one scenario, as shown in fig. 7a, the power key and the volume down key of the convenience screen 120 are pressed, and the convenience screen 120 sends a key event to the mobile phone 110; after recognizing that the key event is a screen capture event, the mobile phone 110 captures the video picture (not shown in the figure) displayed by the convenience screen 120 that is cached internally, generates a screenshot preview picture, and sends it to the convenience screen 120; the convenience screen 120 displays the screenshot preview picture.
In practical applications, as shown in fig. 11, for the key scenario shown in fig. 7a, the user performs the combined key action on the convenience screen 120; on the mobile phone 110, the window manager PhoneWindowManager performs combined key recognition, and when a screen capture event is recognized, it calls the screenshot helper ScreenshotHelper(Ex) to trigger the screen capture service TakeScreenshotService; TakeScreenshotService calls the screenshot management (HW)GlobalScreenshot, (HW)GlobalScreenshot calls the screenshot picture generation interface, the graphics SurfaceControl is called to generate the screenshot picture, and a preview thumbnail is added to obtain the screenshot preview picture.
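For illustration only, the call chain above can be summarized by the following simplified, hypothetical sketch; the real framework classes (PhoneWindowManager, TakeScreenshotService, GlobalScreenshot) are internal and their signatures differ from what is shown here.

```java
// Hypothetical sketch: combined-key recognition triggering the screen capture
// service for the display that originated the keys.
public final class CombinedKeyScreenshotSketch {
    interface ScreenshotService { void takeScreenshot(int displayId); }

    private final ScreenshotService screenshotService;
    private boolean powerDown;
    private boolean volumeDownDown;

    CombinedKeyScreenshotSketch(ScreenshotService screenshotService) {
        this.screenshotService = screenshotService;
    }

    void onKeyEvent(String key, boolean down, int displayId) {
        if ("POWER".equals(key)) powerDown = down;
        if ("VOLUME_DOWN".equals(key)) volumeDownDown = down;
        if (powerDown && volumeDownDown) {
            // Combined key recognized as a screen capture event: trigger the
            // screenshot service for the trigger source display (the convenience screen).
            screenshotService.takeScreenshot(displayId);
        }
    }
}
```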
In one scenario, as shown in fig. 7b, the user slides down the pull-down menu on the touch screen of the convenience screen 120, and the convenience screen 120 sends a slide event to the mobile phone 110; after recognizing that the slide event is a pull-down menu event, the mobile phone 110 generates a pull-down menu picture and sends it to the convenience screen 120; the convenience screen 120 displays the pull-down menu picture. The user clicks the screen capture button in the pull-down menu picture of the convenience screen 120, and a click event is sent to the mobile phone 110; after recognizing that the click event is a click on the screen capture button in the pull-down menu, the mobile phone 110 captures the video picture (not shown in the figure) displayed by the convenience screen 120 that is cached internally, generates a screenshot preview picture, and sends it to the convenience screen 120; the convenience screen 120 displays the screenshot preview picture.
In practical applications, as shown in fig. 11, for the pull-down menu scenario shown in fig. 7b, the user performs the pull-down action of the pull-down menu on the touch screen of the convenience screen 120; on the mobile phone 110, the screenshot helper ScreenshotHelper(Ex) of the pull-down menu recognizes the action, and when the screen capture action in the pull-down menu is recognized, the screen capture service TakeScreenshotService is called; TakeScreenshotService calls the screenshot management (HW)GlobalScreenshot, (HW)GlobalScreenshot calls the screenshot picture generation interface, the graphics SurfaceControl is called to generate the screenshot picture, and a preview thumbnail is added to obtain the screenshot preview picture.
In one scenario, as shown in fig. 7c, the user performs a finger joint double click on the video picture displayed on the touch screen of the convenience screen 120, and the convenience screen 120 sends a finger joint event to the mobile phone 110; the mobile phone 110 recognizes the finger joint event and determines that the user motion is a finger joint double click, captures the video picture (not shown in the figure) displayed by the convenience screen 120 that is cached internally, generates a screenshot preview picture, and sends it to the convenience screen 120; the convenience screen 120 displays the screenshot preview picture.
In practical applications, as shown in fig. 11, for the finger joint double-click scenario shown in fig. 7c, the user performs the finger joint double-click action on the touch screen of the convenience screen 120; on the mobile phone 110, the finger joint listener SystemWideActionListener performs finger joint recognition, and when a finger joint double click is recognized, the screenshot helper ScreenshotHelper(Ex) is called to trigger the screen capture service TakeScreenshotService; TakeScreenshotService calls the screenshot management (HW)GlobalScreenshot, (HW)GlobalScreenshot calls the screenshot picture generation interface, the graphics SurfaceControl is called to generate the screenshot picture, and a preview thumbnail is added to obtain the screenshot preview picture.
In one scenario, as shown in fig. 7d, the user draws a closed graph with a finger joint on the video picture displayed on the touch screen of the convenience screen 120, and the convenience screen 120 sends a finger joint event to the mobile phone 110; the mobile phone 110 recognizes the finger joint event and determines that the user motion is a finger joint drawing a closed graph, captures the video picture (not shown in the figure) displayed by the convenience screen 120 that is cached internally, generates a screenshot editing picture, and sends it to the convenience screen 120; the convenience screen 120 displays the screenshot editing picture, and when the user clicks the save button of the screenshot editing picture, the mobile phone 110 saves the screenshot (not shown in the figure).
In practical applications, as shown in fig. 11, for the scenario of a finger joint enclosing a closed region shown in fig. 7d, the user performs the finger-joint closed-region motion on the touch screen of the convenience screen 120; on the mobile phone 110, the finger joint listener SystemWideActionListener performs finger joint recognition, and when a finger joint enclosing a closed region is recognized, the smart screenshot CropActivity is started; the smart screenshot CropActivity selects the graph and triggers the screenshot editing PhotoEditorActivity to perform editing, so as to obtain the screenshot editing picture; when the user clicks the scrolling screen capture button in the screenshot editing picture, the scrolling screen capture management MultiScreenShotService is triggered.
In one scenario, as shown in fig. 7e, the user draws the letter S with a finger joint on the weather picture displayed on the touch screen of the convenience screen 120, and the convenience screen 120 sends a finger joint event to the mobile phone 110; the mobile phone 110 recognizes the finger joint event and determines that the user motion is a finger joint drawing S, performs a scrolling screen capture on the weather picture (not shown in the figure) displayed by the convenience screen 120 that is cached internally, and sends the generated scrolling picture and screenshot editing picture to the convenience screen 120; the convenience screen 120 displays the scrolling animation and then the screenshot editing picture, and when the user clicks the save button of the screenshot editing picture, the mobile phone 110 saves the screenshot picture (not shown in the figure).
In practical applications, as shown in fig. 11, for the finger joint drawing S scenario shown in fig. 7e, the user draws S with a finger joint on the touch screen of the convenience screen 120; on the mobile phone 110, the finger joint listener SystemWideActionListener performs finger joint recognition, and when a finger joint drawing S is recognized, the scrolling screen capture management MultiScreenShotService is triggered to run; the MultiScreenShotService scrolls and captures the picture through a slide-down gesture based on the screenshot management (HW)GlobalScreenshot, and then the screenshot editing PhotoEditorActivity obtains the screenshot editing picture based on the preview editing instruction of the MultiScreenShotService; when the user clicks a preview editing button in the screenshot editing picture, the screenshot management (HW)GlobalScreenshot sends a preview editing click instruction, and the screenshot editing PhotoEditorActivity performs the corresponding processing based on that instruction.
In one scenario, as shown in fig. 7f, the user holds the hand still at a distance of 20-40 cm from the touch screen of the convenience screen 120, and the convenience screen 120 sends an air event to the mobile phone 110; the mobile phone 110 recognizes the air event and determines that the user motion is the hand hovering at a distance of 20-40 cm from the touch screen of the convenience screen 120, adds the hand-shaped icon 121 to the video picture (not shown in the figure) displayed by the convenience screen 120 that is cached internally, generates a screen capture prompt picture, and sends it to the convenience screen 120; the convenience screen 120 displays the screen capture prompt picture. When the user grips, the convenience screen 120 sends an air grip event to the mobile phone 110; the mobile phone 110 recognizes the air grip event, captures the video picture (not shown in the figure) displayed by the convenience screen 120 that is cached internally, generates a screenshot preview picture, and sends it to the convenience screen 120; the convenience screen 120 displays the screenshot preview picture.
In one scenario, as shown in fig. 8a, the user performs a double finger joint double click on the touch screen of the convenience screen 120, and the convenience screen 120 sends a finger joint event to the mobile phone 110; the mobile phone 110 recognizes the finger joint event and determines that the user motion is a double finger joint double click, records the video picture (not shown in the figure) displayed by the convenience screen 120 that is cached internally, generates a screen recording picture, and sends it to the convenience screen 120; the convenience screen 120 displays the screen recording picture. When the user clicks the screen recording icon 122, the convenience screen 120 sends a click event to the mobile phone 110; the mobile phone 110 recognizes that the click event is a click on the screen recording icon 122 in the screen recording interface, captures the video picture (not shown in the figure) displayed by the convenience screen 120 that is cached internally, generates a screenshot preview picture, and sends it to the convenience screen 120; the convenience screen 120 displays the screenshot preview picture.
In one scenario, as shown in fig. 8b, the user slides down the pull-down menu on the touch screen of the convenience screen 120, and the convenience screen 120 sends a slide event to the mobile phone 110; after recognizing that the slide event is a pull-down menu event, the mobile phone 110 generates a pull-down menu picture and sends it to the convenience screen 120; the convenience screen 120 displays the pull-down menu picture. The user clicks the screen recording button in the pull-down menu picture of the convenience screen 120, and a click event is sent to the mobile phone 110; after recognizing that the click event is a click on the screen recording button in the pull-down menu, the mobile phone 110 records the video picture (not shown in the figure) displayed by the convenience screen 120 that is cached internally, generates a screen recording picture, and sends it to the convenience screen 120; the convenience screen 120 displays the screen recording picture. When the user clicks the screen recording icon 122, the convenience screen 120 sends a click event to the mobile phone 110; the mobile phone 110 recognizes that the click event is a click on the screen recording icon 122 in the screen recording interface, captures the video picture (not shown in the figure) displayed by the convenience screen 120 that is cached internally, generates a screenshot preview picture, and sends it to the convenience screen 120; the convenience screen 120 displays the screenshot preview picture.
It should be noted that the key events, click events, slide events, finger joint events, and air events sent by the second terminal 120 to the first terminal 110 are all the operation events described above and below.
Exemplarily, fig. 9 shows a schematic structure of the first terminal 110.
The first terminal 110 may include a processor 1110, an external memory interface 1120, an internal memory 1121, a Universal Serial Bus (USB) interface 1130, a charging management module 1140, a power management module 1141, a battery 1142, an antenna 1, an antenna 2, a mobile communication module 1150, a wireless communication module 1160, an audio module 1170, a speaker 1170A, a receiver 1170B, a microphone 1170C, a sensor module 1180, a button 1190, a camera 1191, a display 1192, and the like. Sensor module 1180 may include pressure sensor 1180A, attitude sensor 1180B, distance sensor 1180C, and touch sensor 1180D, among others.
It is to be understood that the illustrated structure of the embodiment of the present application does not specifically limit the first terminal 110. In other embodiments, first terminal 110 may include more or fewer components than illustrated, or some components may be combined, some components may be split, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Processor 1110 may include one or more processing units, such as: processor 1110 may include an Application Processor (AP), a modem processor, a Graphics Processor (GPU), an Image Signal Processor (ISP), a controller, a video codec, a Digital Signal Processor (DSP), a baseband processor, and/or a neural-Network Processing Unit (NPU), among others. Wherein, the different processing units may be independent devices or may be integrated in one or more processors.
Wherein, the controller may be the neural center and command center of the first terminal 110. The controller can generate an operation control signal according to the instruction operation code and the timing signal, so as to control instruction fetching and instruction execution.
A memory may also be provided in processor 1110 for storing instructions and data. In some embodiments, the memory in processor 1110 is a cache memory. The memory may hold instructions or data that have just been used or recycled by processor 1110. If processor 1110 needs to use the instruction or data again, it can be called directly from the memory. Avoiding repeated accesses reduces the latency of the processor 1110, thereby increasing the efficiency of the system.
In some embodiments, processor 1110 may include one or more interfaces. The interface may include an integrated circuit (I2C) interface, an integrated circuit built-in audio (I2S) interface, a Pulse Code Modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a Mobile Industry Processor Interface (MIPI), a general-purpose input/output (GPIO) interface, a Subscriber Identity Module (SIM) interface, and/or a Universal Serial Bus (USB) interface, etc.
The I2C interface is a bidirectional synchronous serial bus including a serial data line (SDA) and a Serial Clock Line (SCL). In some embodiments, processor 1110 may include multiple sets of I2C buses. The processor 1110 may be coupled to the touch sensor 1180D, the charger, the flash, the camera 1191, and the like through different I2C bus interfaces. For example: the processor 1110 may be coupled to the touch sensor 1180D through an I2C interface, so that the processor 1110 and the touch sensor 1180D communicate through an I2C bus interface to implement a touch function of the first terminal 110.
The I2S interface may be used for audio communication. In some embodiments, processor 1110 may include multiple sets of I2S buses. The processor 1110 may be coupled to the audio module 1170 via an I2S bus to enable communication between the processor 1110 and the audio module 1170. In some embodiments, the audio module 1170 may pass audio signals to the wireless communication module 1160 via the I2S interface for answering a call via a bluetooth headset.
The PCM interface may also be used for audio communication, sampling, quantizing and encoding analog signals. In some embodiments, the audio module 1170 and the wireless communication module 1160 may be coupled by a PCM bus interface. In some embodiments, the audio module 1170 may also pass audio signals to the wireless communication module 1160 via the PCM interface to enable answering a call via a bluetooth headset. Both the I2S interface and the PCM interface may be used for audio communication.
The UART interface is a universal serial data bus used for asynchronous communications. The bus may be a bidirectional communication bus. It converts the data to be transmitted between serial communication and parallel communication. In some embodiments, a UART interface is generally used to connect the processor 1110 with the wireless communication module 1160. For example: the processor 1110 communicates with the bluetooth module in the wireless communication module 1160 through the UART interface to implement the bluetooth function. In some embodiments, the audio module 1170 may pass audio signals to the wireless communication module 1160 via the UART interface to enable music to be played via the bluetooth headset.
The MIPI interface may be used to connect the processor 1110 with peripheral devices such as a display 1192, a camera 1191, and the like. The MIPI interface includes a Camera Serial Interface (CSI), a display screen serial interface (DSI), and the like. In some embodiments, the processor 1110 and the camera 1191 communicate through a CSI interface to implement the photographing function of the first terminal 110. The processor 1110 and the display screen 1192 communicate through the DSI interface to implement the display function of the first terminal 110.
The GPIO interface may be configured by software. The GPIO interface may be configured as a control signal and may also be configured as a data signal. In some embodiments, a GPIO interface may be used to connect the processor 1110 with the camera 1191, the display screen 1192, the wireless communication module 1160, the audio module 1170, the sensor module 1180, and the like. The GPIO interface may also be configured as an I2C interface, I2S interface, UART interface, MIPI interface, and the like.
The USB interface 1130 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type-C interface, or the like. The USB interface 1130 may be used to connect a charger to charge the first terminal 110, and may also be used to transmit data between the first terminal 110 and a peripheral device. It may also be used to connect an earphone and play audio through the earphone. The interface may also be used to connect other terminals, such as AR devices and the like.
It should be understood that the interfacing relationship between the modules illustrated in the embodiment of the present application is only an exemplary illustration, and does not constitute a limitation on the structure of the first terminal 110. In other embodiments, the first terminal 110 may also adopt different interface connection manners or a combination of multiple interface connection manners in the above embodiments.
The charge management module 1140 is used to receive charging input from a charger. The charger may be a wireless charger or a wired charger. In some wired charging embodiments, the charging management module 1140 may receive charging input from a wired charger via the USB interface 1130. In some wireless charging embodiments, the charging management module 1140 may receive a wireless charging input through a wireless charging coil of the first terminal 110. The charging management module 1140 can also supply power to the first terminal 110 through the power management module 1141 while charging the battery 1142.
The power management module 1141 is used to connect the battery 1142, the charging management module 1140 and the processor 1110. The power management module 1141 receives input from the battery 1142 and/or the charging management module 1140, and supplies power to the processor 1110, the internal memory 1121, the external memory 1120, the display 1192, the camera 1191, the wireless communication module 1160, and the like. The power management module 1141 may also be used to monitor parameters such as battery capacity, battery cycle number, battery state of health (leakage, impedance), etc. In other embodiments, the power management module 1141 may be disposed in the processor 1110. In other embodiments, the power management module 1141 and the charging management module 1140 may be disposed in the same device.
The wireless communication function of the first terminal 110 may be implemented by the antenna 1, the antenna 2, the mobile communication module 1150, the wireless communication module 1160, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the first terminal 110 may be used to cover a single or multiple communication bands. Different antennas can also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed as a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 1150 may provide a solution including 2G/3G/4G/5G wireless communication and the like applied on the first terminal 110. The mobile communication module 1150 may include at least one filter, switch, power amplifier, low Noise Amplifier (LNA), and the like. The mobile communication module 1150 may receive electromagnetic waves from the antenna 1, filter, amplify, etc. the received electromagnetic waves, and transmit the processed electromagnetic waves to the modem processor for demodulation. The mobile communication module 1150 may also amplify the signal modulated by the modem processor, and convert the signal into electromagnetic wave through the antenna 1 to radiate the electromagnetic wave. In some embodiments, at least some of the functional modules of the mobile communication module 1150 may be disposed in the processor 1110. In some embodiments, at least some of the functional blocks of the mobile communication module 1150 may be disposed in the same device as at least some of the blocks of the processor 1110.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating a low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then passes the demodulated low frequency baseband signal to a baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then transferred to the application processor. The application processor outputs sound signals through an audio device (not limited to speaker 1170A, receiver 1170B, etc.) or displays images or video through display screen 1192. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be separate from the processor 1110, and may be located in the same device as the mobile communication module 1150 or other functional modules.
The wireless communication module 1160 may provide solutions for wireless communication applied to the first terminal 110, including Wireless Local Area Network (WLAN) (e.g., wireless fidelity (Wi-Fi) network), bluetooth (BT), global Navigation Satellite System (GNSS), frequency Modulation (FM), near Field Communication (NFC), infrared (IR), and the like. The wireless communication module 1160 may be one or more devices that integrate at least one communication processing module. The wireless communication module 1160 receives electromagnetic waves via the antenna 2, performs frequency modulation and filtering on electromagnetic wave signals, and transmits the processed signals to the processor 1110. Wireless communication module 1160 may also receive signals to be transmitted from processor 1110, frequency modulate, amplify, and convert to electromagnetic radiation via antenna 2.
In some embodiments, the antenna 1 of the first terminal 110 is coupled with the mobile communication module 1150 and the antenna 2 is coupled with the wireless communication module 1160, such that the first terminal 110 can communicate with a network and other devices through wireless communication technology.
The wireless communication technology may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technologies, etc. The GNSS may include a global positioning system (GPS), a global navigation satellite system (GLONASS), a BeiDou navigation satellite system (BDS), a quasi-zenith satellite system (QZSS), and/or a satellite based augmentation system (SBAS).
The first terminal 110 implements a display function through the GPU, the display screen 1192, and the application processor. The GPU is a microprocessor for image processing, connected to the display screen 1192 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 1110 may include one or more GPUs that execute program instructions to generate or alter display information.
The display screen 1192 is used to display images, video, and the like. The display screen 1192 includes a display panel. The display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini LED, a Micro LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like. In some embodiments, the first terminal 110 may include 1 or N display screens 1192, N being a positive integer greater than 1. In this embodiment, the display screen 1192 may have a punch-hole design, for example, a through hole is disposed in the upper left corner or the upper right corner of the display screen 1192, and the camera 1191 may be embedded in the through hole.
The first terminal 110 may implement a photographing function through the ISP, the camera 1191, the video codec, the GPU, the display screen 1192, the application processor, and the like.
The ISP is used to process the data fed back by the camera 1191. For example, when a photo is taken, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electrical signal, and the camera photosensitive element transmits the electrical signal to the ISP for processing and converting into an image visible to naked eyes. The ISP can also carry out algorithm optimization on the noise, brightness and skin color of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be provided in camera 1191.
The camera 1191 is used to capture still images or video. An object generates an optical image through the lens, which is projected onto the photosensitive element. The photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal and then transmits the electrical signal to the ISP to be converted into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard format such as RGB or YUV. In some embodiments, the first terminal 110 may include 1 or N cameras 1191, N being a positive integer greater than 1. Optionally, a camera on the first terminal 110 may be front-facing or rear-facing, which is not limited in this embodiment of the present application. Optionally, the first terminal 110 may include a single camera, a dual camera, or a triple camera, which is not limited in this embodiment of the present application. For example, a cell phone may include three cameras: a main camera, a wide-angle camera, and a telephoto camera. Optionally, when the first terminal 110 includes a plurality of cameras, the cameras may all be front-facing, all be rear-facing, or some front-facing and others rear-facing, which is not limited in this embodiment of the present application.
The digital signal processor is used to process digital signals, and can process digital image signals as well as other digital signals. For example, when the first terminal 110 selects a frequency bin, the digital signal processor is used to perform a Fourier transform or the like on the frequency bin energy.
Video codecs are used to compress or decompress digital video. The first terminal 110 may support one or more video codecs. In this way, the first terminal 110 can play or record videos in a plurality of encoding formats, such as: moving Picture Experts Group (MPEG) 1, MPEG2, MPEG3, MPEG4, and the like.
The NPU is a neural-network (NN) computing processor that rapidly processes input information by referring to the structure of biological neural networks, for example the transfer mode between neurons in the human brain, and can also continuously learn by itself. The NPU may implement applications such as intelligent recognition of the first terminal 110, for example: image recognition, face recognition, speech recognition, text understanding, and the like.
The external memory interface 1120 can be used for connecting an external memory card, such as a Micro SD card, to extend the storage capability of the first terminal 110. The external memory card communicates with the processor 1110 through the external memory interface 1120 to implement a data storage function. For example, files such as music, video, etc. are saved in an external memory card.
The internal memory 1121 may be used to store computer-executable program code, including instructions. The processor 1110 executes various functional applications and data processing of the first terminal 110 by executing instructions stored in the internal memory 1121.
The internal memory 1121 may include a program storage area and a data storage area. The program storage area may store an operating system, an application program required by at least one function (such as a sound playing function or an image playing function), and the like. The data storage area may store data (e.g., audio data, a phonebook, etc.) created during use of the first terminal 110, and the like. In addition, the internal memory 1121 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash storage (UFS), and the like.
The first terminal 110 can implement audio functions through an audio module 1170, a speaker 1170A, a receiver 1170B, a microphone 1170C, an earphone interface (not shown), and an application processor. Such as music playing, recording, etc.
The audio module 1170 functions to convert digital audio information into an analog audio signal output and also functions to convert an analog audio input into a digital audio signal. The audio module 1170 may also be used to encode and decode audio signals. In some embodiments, the audio module 1170 may be disposed in the processor 1110, or some of the functional modules of the audio module 1170 may be disposed in the processor 1110.
The speaker 1170A, also referred to as a "horn", is used to convert electrical audio signals into sound signals. The first terminal 110 can listen to music through the speaker 1170A or listen to a handsfree call.
Receiver 1170B, also called "earpiece", is used to convert the electrical audio signal into an acoustic signal. When the first terminal 110 answers a call or voice information, it is possible to answer a voice by bringing the receiver 1170B close to the human ear.
Microphone 1170C, also known as a "microphone," converts sound signals into electrical signals. When making a call or transmitting voice information, a user can input a voice signal to the microphone 1170C by uttering sound near the microphone 1170C through the mouth of the user. The first terminal 110 may be provided with at least one microphone 1170C. In other embodiments, the first terminal 110 may be provided with two microphones 1170C to achieve a noise reduction function in addition to collecting sound signals. In other embodiments, the first terminal 110 may further include three, four or more microphones 1170C to collect sound signals, reduce noise, identify sound sources, perform directional recording, and so on.
The earphone interface is used for connecting a wired earphone. The earphone interface may be the USB interface 1130, or may be a 3.5 mm open mobile terminal platform (OMTP) standard interface or a cellular telecommunications industry association of the USA (CTIA) standard interface.
Pressure sensor 1180A is configured to sense a pressure signal, which may be converted to an electrical signal. In some embodiments, pressure sensor 1180A may be disposed on display screen 1192.
Pressure sensor 1180A may be of a wide variety, such as a resistive pressure sensor, an inductive pressure sensor, a capacitive pressure sensor, or the like. The capacitive pressure sensor may be a sensor comprising at least two parallel plates of electrically conductive material. When a force acts on pressure sensor 1180A, the capacitance between the electrodes changes. The first terminal 110 determines the intensity of the pressure according to the change of the capacitance. When a touch operation acts on the display screen 1192, the first terminal 110 detects the intensity of the touch operation through the pressure sensor 1180A.
The first terminal 110 may also calculate the touched position according to the detection signal of the pressure sensor 1180A. In some embodiments, touch operations applied to the same touch position but with different intensities may correspond to different operation commands. For example, when a touch operation whose intensity is less than a first pressure threshold acts on the short message application icon, an instruction for viewing the short message is executed; when a touch operation whose intensity is greater than or equal to the first pressure threshold acts on the short message application icon, an instruction for creating a new short message is executed. In one example, the first terminal 110 may also calculate the touch area from the detection signal of the pressure sensor 1180A.
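To make the threshold logic above concrete, the following is a minimal, self-contained Java sketch of how a pressure intensity could be mapped to different commands for the same touch position; the threshold value, the control name, and the command names are assumptions for illustration only, not part of this embodiment.

```java
/** Hypothetical sketch: dispatch different commands for the same touch position
 *  depending on the detected pressure intensity. */
public class PressureDispatcher {
    private static final float FIRST_PRESSURE_THRESHOLD = 0.6f; // assumed, device-specific

    public String dispatch(String targetControl, float pressure) {
        if ("sms_app_icon".equals(targetControl)) {
            // Light press: view messages; firm press: create a new message.
            return pressure < FIRST_PRESSURE_THRESHOLD ? "VIEW_SMS" : "NEW_SMS";
        }
        return "DEFAULT_CLICK";
    }

    public static void main(String[] args) {
        PressureDispatcher d = new PressureDispatcher();
        System.out.println(d.dispatch("sms_app_icon", 0.3f)); // VIEW_SMS
        System.out.println(d.dispatch("sms_app_icon", 0.9f)); // NEW_SMS
    }
}
```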
The attitude sensor 1180B may be used to determine the motion attitude of the first terminal 110. It may include motion sensors such as a three-axis gyroscope, a three-axis accelerometer, and a three-axis electronic compass, and obtains temperature-compensated data such as a three-dimensional attitude and an azimuth through an embedded low-power ARM processor. In some examples, the angular velocity of the first terminal 110 about three axes (i.e., the x, y, and z axes) may be determined by the gyroscope in the attitude sensor 1180B. In some examples, the accelerometer in the attitude sensor 1180B may detect the magnitude of the acceleration of the first terminal 110 in various directions (typically along three axes), and the magnitude and direction of gravity can be detected when the first terminal 110 is stationary. It can also be used to recognize the attitude of the terminal, and is applied to landscape/portrait switching, pedometers, and other applications. In addition, the vibration caused by the acceleration of a finger joint tap can be sensed. In some examples, an air screenshot may be implemented based on the attitude sensor 1180B and the distance sensor 1180C.
The distance sensor 1180C is used to measure distance. The first terminal 110 may measure distance by infrared or laser. In some embodiments, in a shooting scene, the first terminal 110 may use the distance sensor 1180C to measure distance for fast focusing. In some embodiments, in a screenshot scenario, the first terminal 110 may use the distance sensor 1180C to measure the distance between the hand and the first terminal 110.
The touch sensor 1180D is also referred to as a "touch panel". The touch sensor 1180D may be disposed on the display screen 1192, and the touch sensor 1180D and the display screen 1192 together form a touch screen, also referred to as a "touchscreen".
The touch sensor 1180D is used to detect touch operations applied on or near it. The touch sensor may pass the detected touch operation to the application processor to determine the type of touch event. Visual output related to the touch operation may be provided through the display screen 1192. In other embodiments, the touch sensor 1180D may be disposed on a surface of the first terminal 110 at a position different from that of the display screen 1192.
The sensor module 1180 may also include a barometric sensor, a magnetic sensor, a proximity light sensor, a fingerprint sensor, a temperature sensor, an ambient light sensor, a bone sensor, and the like.
The keys 1190 include a power key, volume keys, and the like. The keys 1190 may be mechanical keys or touch keys. The first terminal 110 may receive key inputs and generate key signal inputs related to user settings and function control of the first terminal 110.
In addition, a motor, an indicator, a SIM interface, etc. may be further included in the first terminal 110.
Illustratively, the second terminal 120 may include a processor, an internal memory, a Universal Serial Bus (USB) interface, a charging management module, a power management module, a battery, an antenna, a mobile communication module, a wireless communication module, an audio module, a speaker, a receiver, a microphone, a sensor module, a key, a camera, a display screen, and the like. The sensor module may include a pressure sensor, a touch sensor, an attitude sensor, a distance sensor, and the like. The specific contents can be referred to above, and are not described herein in detail. It is to be noted that the above is only one example of the structure of the second terminal 120, and does not constitute a specific limitation to the second terminal 120. In other embodiments, the second terminal 120 may include more or fewer components than shown in fig. 9, or some components may be combined, some components may be split, or a different arrangement of components.
The software system of the first terminal 110 may adopt a layered architecture, an event-driven architecture, a micro-core architecture, a micro-service architecture, or a cloud architecture. In the embodiment of the present application, the software structure of the first terminal 110 is exemplarily described by taking an Android system with a layered architecture as an example.
Fig. 10a is a block diagram of a software structure of the first terminal 110 according to the embodiment of the present application.
The layered architecture divides the software into several layers, each layer having a clear role and division of labor. The layers communicate with each other through a software interface. In some embodiments, the Android system is divided into four layers, which are an application layer, an application framework layer, a system layer (including Android runtime (Android runtime) and system library), and a Kernel layer (Kernel), from top to bottom.
The application layer may include a series of application packages.
As shown in fig. 10a, the application package may include camera, gallery, calendar, call, map, navigation, WLAN, bluetooth, music, video, short message, screen capture, screen recording, etc. applications.
The application framework layer provides an Application Programming Interface (API) and a programming framework for applications of the application layer. The application framework layer includes some predefined functions, such as functions for receiving events sent by the application layer.
As shown in FIG. 10a, the application framework layer may include a window manager, content provider, view system, phone manager, resource manager, notification manager, and the like.
The window manager is used to manage window programs. The window manager can obtain the size of the display screen, determine whether there is a status bar, lock the screen, capture the screen, and the like.
Content providers are used to store and retrieve data and make it accessible to applications. The data may include video, images, audio, calls made and received, browsing history and bookmarks, phone books, etc.
The resource manager provides various resources for the application, such as localized strings, icons, pictures, layout files, video files, and the like.
The notification manager enables the application to display notification information in the status bar, can be used to convey notification-type messages, can disappear automatically after a short dwell, and does not require user interaction. Such as a notification manager used to inform download completion, message alerts, etc.
The notification manager may also be a notification that appears in the form of a chart or scroll bar text at the top status bar of the system, such as a notification of a background running application, or a notification that appears on the screen in the form of a dialog window. For example, prompting a text message in the status bar, sounding a prompt tone, the first terminal 110 vibrating, flashing an indicator light, etc.
The application framework layer may further include:
the view system includes visual controls such as controls to display text, controls to display pictures, and the like. The view system may be used to build applications. The display interface may be composed of one or more views. For example, the display interface including the short message notification icon may include a view for displaying text and a view for displaying pictures.
The phone manager is used to provide a communication function of the first terminal 110. Such as management of call status (including on, off, etc.).
The system library may include a plurality of functional modules. For example: surface managers (surface managers), media Libraries (Media Libraries), three-dimensional graphics processing Libraries (e.g., openGL ES), 2D graphics engines (e.g., SGL), and the like.
The surface manager is used to manage the display subsystem and provide fusion of 2D and 3D layers for multiple applications.
The media library supports a variety of commonly used audio, video format playback and recording, and still image files, among others. The media library may support a variety of audio-video encoding formats, such as: MPEG4, H.264, MP3, AAC, AMR, JPG, PNG, etc.
The three-dimensional graphic processing library is used for realizing three-dimensional graphic drawing, image rendering, synthesis, layer processing and the like.
The 2D graphics engine is a drawing engine for 2D drawing.
The system library may further comprise:
the sensor service module is configured to monitor sensor data uploaded by various sensors in a hardware layer, and determine a physical state of the first terminal 110;
the physical state recognition module is used for analyzing and recognizing user gestures, human faces and the like, and can comprise a finger joint algorithm;
the Android Runtime comprises a core library and a virtual machine. The Android runtime is responsible for scheduling and managing an Android system.
The core library comprises two parts: one part is the functions that the Java language needs to call, and the other part is the core library of Android.
The application layer and the application framework layer run in the virtual machine. The virtual machine executes the Java files of the application layer and the application framework layer as binary files. The virtual machine is used to perform functions such as object life cycle management, stack management, thread management, security and exception management, and garbage collection.
The Kernel layer (Kernel) is a layer between hardware and software. The Kernel layer (Kernel) comprises at least a display driver, a camera driver, an audio driver, and a sensor driver, and is used to drive the related hardware of the hardware layer, such as the display screen, camera, speaker, and sensors.
The software system of the second terminal 120 may adopt a layered architecture, an event-driven architecture, a micro-core architecture, a micro-service architecture, or a cloud architecture. Preferably, the architecture of the software system of the second terminal 120 is identical to that of the software system of the first terminal 110. In the embodiment of the present application, the software structure of the second terminal 120 is exemplarily described by taking an Android system with a layered architecture as an example.
Fig. 10b is a block diagram of the software structure of the second terminal 120 according to the embodiment of the present application.
The layered architecture divides the software into several layers, each layer having a clear role and division of labor. The layers communicate with each other through a software interface. In some embodiments, the Android system is divided into four layers, which are an application layer, an application framework layer, a system layer (including Android runtime (Android runtime) and system library), and a Kernel layer (Kernel), from top to bottom.
The application layer may include a series of application packages.
As shown in fig. 10b, the application package may include a cooperative application for implementing a screen-casting connection with the first terminal 110, and a WLAN, bluetooth, or other applications connected with the first terminal 110.
The application framework layer provides an Application Programming Interface (API) and a programming framework for the application programs of the application layer. As shown in fig. 10b, the application framework layer may include a content provider, a view system, a resource manager, a notification manager, etc., the details of which are described above.
The system library may include a plurality of functional modules. For example: a surface manager, a sensor service module, a physical state identification module, etc. See the description above for details.
The Android Runtime comprises a core library and a virtual machine. See the description above for details.
The Kernel layer (Kernel) comprises at least a display driver and a touch sensor driver to drive the touch screen. It may also include an audio driver, an attitude sensor driver, and the like, to drive hardware such as the speaker, the microphone, and the attitude sensor.
It should be noted that fig. 10b is a schematic diagram of a software structure provided in the embodiment of the present application, and the content in the software structure is not limited at all, for example, the content in the software architecture of the second terminal 120 may be the content shown in fig. 10a, and may also be more or less than the content shown in fig. 10 a.
It should be understood that the software frameworks of the first terminal 110 and the second terminal 120 may be the same or different, but the data interaction between the first terminal 110 and the second terminal 120 is possible.
The software systems shown in fig. 10a and 10b, in conjunction with the screen capture scenario of fig. 7b, exemplarily illustrate the workflow of the software and hardware of the first terminal 110 and the second terminal 120 in implementing a screen capture.
When the touch sensor 1180D receives the touch operation, the second terminal 120 sends corresponding hardware interrupt information to the Kernel layer (Kernel). The Kernel layer (Kernel) processes the hardware interrupt information into an input event (including information such as touch coordinates and the time stamp of the touch operation) and distributes the input event to the system layer, the system layer distributes the input event to the application framework layer, and the application framework layer processes the input event into an operation event (indicating that the control clicked by the user is the screen capture control of the pull-down notification menu). The operation event is sent to the first terminal 110; after the screenshot preview picture sent back by the first terminal 110 is obtained, the Kernel layer (Kernel) is called to start the display driver, and the corresponding screenshot preview picture is displayed through the display screen 1192.
The user space of the first terminal 110 obtains the operation event sent by the second terminal 120, reads the information of the operation event, and triggers the screen capture application to call an interface of the system layer, which captures the internally cached pixel values of the current whole screen of the second terminal 120 and adds a preview thumbnail to obtain a screenshot preview picture. The screenshot preview picture is then sent to the corresponding second terminal 120.
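As a rough illustration of the step above, the following Java sketch turns cached pixel values of the second terminal's virtual screen into a full screenshot plus a preview thumbnail; the class name, the scale factor, and the use of android.graphics.Bitmap are assumptions for illustration rather than the actual implementation.

```java
import android.graphics.Bitmap;

/** Hypothetical helper on the first terminal: turn the cached pixel values of the
 *  second terminal's virtual screen into a screenshot plus a preview thumbnail. */
public class ScreenshotComposer {
    /** pixels: ARGB values cached for the second terminal's virtual screen. */
    public Bitmap[] compose(int[] pixels, int width, int height) {
        // Full-size screenshot of the projected (virtual) screen.
        Bitmap screenshot = Bitmap.createBitmap(pixels, width, height, Bitmap.Config.ARGB_8888);
        // Small thumbnail attached to the screenshot preview picture (scale factor assumed).
        Bitmap thumbnail = Bitmap.createScaledBitmap(screenshot, width / 4, height / 4, true);
        return new Bitmap[] { screenshot, thumbnail };
    }
}
```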
In the following, with reference to fig. 8b, the workflow of the software and hardware of the first terminal 110 and the second terminal 120 in implementing screen recording is exemplarily described.
When the touch sensor 1180D receives the touch operation, the second terminal 120 sends corresponding hardware interrupt information to the Kernel layer (Kernel). The Kernel layer (Kernel) processes the touch operation into an input event (including information such as touch coordinates and the time stamp of the touch operation) and distributes the input event to the system layer. The system layer distributes the input event to the application framework layer. The application framework layer processes the input event into an operation event (indicating that the control clicked by the user is the screen recording control of the pull-down notification menu) and transmits the operation event to the first terminal 110. When a screen recording instruction sent by the first terminal 110 is received, the Kernel layer (Kernel) is called to start the audio driver, and the sound signal input to the second terminal 120 is collected by the microphone 1170C and reported to the first terminal 110. After the screen recording picture sent by the first terminal 110 is obtained, the Kernel layer (Kernel) is called to start the display driver, and the corresponding screen recording picture is displayed through the display screen 1192.
The user space of the first terminal 110 obtains the operation event sent by the second terminal 120, reads the information of the operation event, and triggers the screen recording application to call an interface of the system layer, which captures the internally cached pixel values of the current whole screen of the second terminal and draws the whole screen to obtain a screen recording picture; a screen recording file is then generated based on the screen recording picture and the sound signal collected by the microphone during recording and reported by the second terminal 120. The screen recording picture is also sent to the corresponding second terminal 120.
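The following is a simplified Java sketch, under assumptions, of how the first terminal could encode the second terminal's virtual screen into an MP4 file using Android's MediaRecorder; muxing the audio samples reported by the second terminal is omitted (it would require a separate encoder path such as MediaCodec/MediaMuxer), and the class name, flag choice, and parameters are illustrative only.

```java
import android.hardware.display.DisplayManager;
import android.hardware.display.VirtualDisplay;
import android.media.MediaRecorder;
import android.view.Surface;

/** Hypothetical sketch on the first terminal: render the virtual screen bound to a
 *  second terminal into a video encoder surface and write an MP4 file. */
public class ScreenRecordingSession {
    public VirtualDisplay start(DisplayManager displayManager, String deviceId,
                                int width, int height, int dpi, String outputPath) throws Exception {
        MediaRecorder recorder = new MediaRecorder();
        recorder.setVideoSource(MediaRecorder.VideoSource.SURFACE);
        recorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
        recorder.setVideoEncoder(MediaRecorder.VideoEncoder.H264);
        recorder.setVideoSize(width, height);
        recorder.setOutputFile(outputPath);
        recorder.prepare();
        Surface encoderInput = recorder.getSurface();
        // The virtual screen of this second terminal is identified by its device identifier.
        VirtualDisplay display = displayManager.createVirtualDisplay(
                "record-" + deviceId, width, height, dpi, encoderInput,
                DisplayManager.VIRTUAL_DISPLAY_FLAG_PRESENTATION);
        recorder.start();
        return display;
    }
}
```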
For example, the technical solutions involved in the following embodiments can be implemented in the above-described screen projection system.
The above is a description of the screen projection system and the components of the screen projection system in the embodiments of the present application. Next, based on the above-described screen projection system and the cross-device interaction scheme shown in fig. 13a, a detailed description is given of an application scenario of the finger joint operation in the embodiment of the present application.
A1. The first terminal establishes a screen projection connection with the second terminal.
In one example, the second terminal and the first terminal are connected through a network and establish a distributed soft bus connection, so that the picture displayed by the first terminal is projected onto the second terminal for display.
In one example, the screen projection mode between the first terminal and the second terminal is heterogeneous screen projection. The first terminal can establish a virtual screen of the second terminal, and the virtual screen is adapted to the screen size of the second terminal. Subsequently, the virtual screen of the second terminal can be identified based on the device identifier of the second terminal, and operation instructions are executed on the picture of the virtual screen of the second terminal, as sketched below.
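A minimal Java sketch of such a per-device virtual screen, assuming Android's DisplayManager API on the first terminal; the manager class, the surface source, and the naming scheme are assumptions for illustration.

```java
import android.hardware.display.DisplayManager;
import android.hardware.display.VirtualDisplay;
import android.view.Surface;
import java.util.HashMap;
import java.util.Map;

/** Hypothetical manager on the first terminal that keeps one virtual screen per
 *  projected second terminal, keyed by the second terminal's device identifier. */
public class VirtualScreenManager {
    private final DisplayManager displayManager;
    private final Map<String, VirtualDisplay> displays = new HashMap<>();

    public VirtualScreenManager(DisplayManager displayManager) {
        this.displayManager = displayManager;
    }

    /** width/height/dpi are taken from the second terminal's reported screen size. */
    public VirtualDisplay getOrCreate(String deviceId, int width, int height, int dpi, Surface surface) {
        return displays.computeIfAbsent(deviceId, id ->
                displayManager.createVirtualDisplay("projection-" + id,
                        width, height, dpi, surface, 0 /* flags */));
    }
}
```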
In addition, if the second terminal carries a microphone, the first terminal can manage the microphone of the second terminal. Subsequently, in screen recording or other scenarios in which the microphone needs to collect sound signals, the second terminal reports the sound signal collected by the microphone to the first terminal.
A2. The first terminal sends the target picture to the second terminal.
As a feasible implementation manner, the screen projection manner between the first terminal and the second terminal is mirror image screen projection. In one example, the target screen is a mirror image of a screen currently displayed by the first terminal.
As a possible implementation manner, the screen projection manner between the first terminal and the second terminal is heterogeneous screen projection. In one example, the target picture is a picture not displayed by the first terminal. In one example, the target picture is a mirror image of the picture currently displayed by the first terminal; subsequently, the pictures displayed by the first terminal and the second terminal are independent of each other and do not interfere with each other.
It should be noted that the size of the target picture is adapted to the screen size of the second terminal.
A3. The second terminal displays the target screen.
The target screen may be, for example, a video screen as shown in fig. 7a to 7d, 7f, 8a, 8b, or a weather screen as shown in fig. 7 d.
A4. The second terminal generates an input event according to a finger joint operation made by a user for the target screen.
As one possible implementation, the input event may include knuckle touch information and knuckle press information. For details, refer to the above description; they are not repeated here.
In an example, the input event may be generated by processing, by the kernel space, hardware interrupt information generated by a user for a finger joint operation of a target picture, or may be generated by processing, by the kernel layer, hardware interrupt information generated by the user for the finger joint operation of the target picture and reported by the hardware layer, where the kernel space, the kernel layer, and the hardware layer are described above, and are not described in detail here.
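As a concrete (and purely hypothetical) illustration, the input event could be represented on the second terminal by a small data class like the following; the field names and types are assumptions rather than the actual event layout.

```java
/** Hypothetical representation of the input event produced by the kernel layer /
 *  kernel space from the hardware interrupt of a knuckle touch. */
public class KnuckleInputEvent {
    public final long timestampMillis;     // time stamp of the touch operation
    public final float x;                  // touch coordinate on the second terminal's screen
    public final float y;
    public final float pressure;           // knuckle press information
    public final float touchArea;          // knuckle touch information (contact area)

    public KnuckleInputEvent(long timestampMillis, float x, float y,
                             float pressure, float touchArea) {
        this.timestampMillis = timestampMillis;
        this.x = x;
        this.y = y;
        this.pressure = pressure;
        this.touchArea = touchArea;
    }
}
```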
A5. The second terminal determines a knuckle identification of the input event when it is recognized that the input event is generated by a knuckle of the user based on a knuckle algorithm.
The finger joint algorithm is referred to above, and will not be described in detail herein.
In one example, the finger joint algorithms of the first terminal and the second terminal are the same; illustratively, the finger joint algorithm of the second terminal is transplanted from the first terminal. Correspondingly, the knuckle identifier may be determined by the knuckle algorithm.
In one example, the first terminal and the second terminal have different finger joint algorithms. Correspondingly, when the knuckle algorithm of the second terminal recognizes that the input event is generated by the knuckle of the user, the knuckle identifier determined by the knuckle algorithm is converted into the knuckle identifier recognizable by the first terminal, or the knuckle identifier recognizable by the first terminal and indicating that the input event is generated by the knuckle of the user is determined.
A6. The second terminal encapsulates the input event and the knuckle identification into an operational event.
It should be noted that the operation event should be an event that can be processed by the first terminal, in other words, the operation event conforms to a data interaction protocol between the first terminal and the second terminal. In order to ensure the reaction speed and the processing performance between the first terminal and the second terminal, the second terminal identifies the knuckle and the first terminal identifies the movement of the knuckle.
In one example, the second terminal encapsulates the input event, the knuckle identifier, and the device identifier of the second terminal into an operation event so that the first terminal can recognize the user-operated terminal and the knuckle motion.
Illustratively, when the gesture operation of the user is a finger joint double click as shown in fig. 7c, a finger joint drawing a closed figure as shown in fig. 7d, a finger joint drawing an S as shown in fig. 7e, or a double finger joint double click as shown in fig. 8b, the operation event generated by the second terminal 120 correspondingly includes a knuckle identification.
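A minimal sketch, under assumed field names, of how the second terminal might pack the input event, the knuckle identifier, and its own device identifier into the operation event sent to the first terminal; the JSON layout stands in for whatever data interaction protocol the two terminals actually agree on.

```java
import org.json.JSONException;
import org.json.JSONObject;

/** Hypothetical packing of an operation event on the second terminal. */
public class OperationEventPacker {
    public JSONObject pack(String deviceId, String knuckleId, long timestampMillis,
                           float x, float y, float pressure, float touchArea) throws JSONException {
        JSONObject operationEvent = new JSONObject();
        operationEvent.put("deviceId", deviceId);     // lets the first terminal locate the virtual screen
        operationEvent.put("knuckleId", knuckleId);   // marks the event as generated by a knuckle
        operationEvent.put("timestamp", timestampMillis);
        operationEvent.put("x", x);                   // knuckle touch information
        operationEvent.put("y", y);
        operationEvent.put("pressure", pressure);     // knuckle press information
        operationEvent.put("touchArea", touchArea);
        return operationEvent;
    }
}
```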
A7. The second terminal sends the operation event to the first terminal.
A8. The first terminal identifies the operation event to determine the knuckle motion of the user aiming at the target picture.
The first terminal may perform recognition of the knuckle motion based on the knuckle touch information and the knuckle press information in the operation event, thereby determining the knuckle motion of the user with respect to the target screen.
For example, the finger joint motion may be a double click of the finger joint as shown in fig. 7c, a closed figure drawn by the finger joint as shown in fig. 7d, an S drawn by the finger joint as shown in fig. 7e, or a double click of two finger joints as shown in fig. 8b.
A9. The first terminal determines an operation instruction corresponding to the operation event based on the finger joint action.
In a possible implementation manner, the first terminal stores the correspondence between knuckle motions and operation instructions. The operation instruction corresponding to the matched knuckle motion is determined by matching the recognized knuckle motion against the stored knuckle motions.
Illustratively, the finger joint movement is a double-finger joint double click as shown in fig. 8b, and correspondingly, the operation instruction is a screen recording instruction.
Illustratively, the finger joint movement is a finger joint double click as shown in fig. 7c, and correspondingly, the operation command is a screen capture command.
Illustratively, the knuckle motion is a closed figure drawn by the knuckle shown in fig. 7d, and correspondingly, the operation instruction is a local screen capture instruction.
Illustratively, the finger joint motion is the finger joint drawing an S as shown in fig. 7e, and correspondingly, the operation instruction is a scrolling screenshot instruction.
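A minimal Java sketch of such a correspondence table, covering the four examples above; the motion and instruction identifiers are made-up names standing in for whatever internal encoding the first terminal actually uses.

```java
import java.util.HashMap;
import java.util.Map;

/** Hypothetical correspondence table on the first terminal between recognized
 *  knuckle motions and operation instructions. */
public class KnuckleInstructionTable {
    private final Map<String, String> table = new HashMap<>();

    public KnuckleInstructionTable() {
        table.put("KNUCKLE_DOUBLE_TAP", "FULL_SCREENSHOT");        // fig. 7c
        table.put("KNUCKLE_CLOSED_FIGURE", "PARTIAL_SCREENSHOT");  // fig. 7d
        table.put("KNUCKLE_DRAW_S", "SCROLLING_SCREENSHOT");       // fig. 7e
        table.put("DOUBLE_KNUCKLE_DOUBLE_TAP", "SCREEN_RECORD");   // fig. 8b
    }

    /** Returns the operation instruction for a knuckle motion, or null if unmatched. */
    public String lookup(String knuckleMotion) {
        return table.get(knuckleMotion);
    }
}
```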
In some possible implementation manners, the first terminal stores combinations of knuckle motions and pictures together with the operation instructions corresponding to these combinations. The operation instruction corresponding to the matched combination is determined by matching the recognized knuckle motion and the target picture against the stored knuckle motions and pictures.
It should be noted that this embodiment does not limit whether the operation instruction is determined based on the knuckle motion alone or on the knuckle motion together with the picture; this needs to be decided according to actual requirements.
In some possible implementation manners, the finger joint may be used on the screen in place of the finger pad to implement functions that the finger pad can implement, for example, opening applications such as WeChat and the camera, and operating various controls in an application, for example, clicking a pause button in a video picture.
In one example, the first terminal recognizes the information of the user's finger joint operation position based on the finger joint touch information in the operation event, the layout information of the target picture, and the size information of the target picture, and determines the operation instruction based on the finger joint motion and the information of the finger joint operation position.
In one example, the second terminal stores the layout information of the target picture in advance. When the first terminal recognizes that the operation instruction corresponding to the operation event needs to be determined based on the information of the finger joint operation position, the first terminal sends a finger joint operation position information request to the second terminal; the second terminal determines the information of the user's finger joint operation position based on the pre-stored layout of the displayed target picture and then sends it to the first terminal, so that the first terminal determines the operation instruction corresponding to the operation event.
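The following Java sketch illustrates, under assumptions, how the touch coordinates reported by the second terminal could be mapped back into the target picture's coordinate system and hit-tested against its layout information to find the operated control; the class and field names are illustrative.

```java
/** Hypothetical lookup of which control of the target picture a knuckle touched,
 *  using the touch coordinates, the layout information (control bounds) and the
 *  size of the target picture as displayed on the second terminal. */
public class KnucklePositionResolver {
    public static class ControlBounds {
        public final String controlId;
        public final float left, top, right, bottom; // in the target picture's own coordinates
        public ControlBounds(String controlId, float left, float top, float right, float bottom) {
            this.controlId = controlId;
            this.left = left; this.top = top; this.right = right; this.bottom = bottom;
        }
    }

    public String resolve(float touchX, float touchY,
                          int displayedWidth, int displayedHeight,
                          int pictureWidth, int pictureHeight,
                          ControlBounds[] layout) {
        // Map the touch point from the second terminal's screen back into the
        // coordinate system of the target picture rendered by the first terminal.
        float x = touchX * pictureWidth / (float) displayedWidth;
        float y = touchY * pictureHeight / (float) displayedHeight;
        for (ControlBounds c : layout) {
            if (x >= c.left && x <= c.right && y >= c.top && y <= c.bottom) {
                return c.controlId; // e.g. the pause button of a video picture
            }
        }
        return null;
    }
}
```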
A10. And the first terminal executes an operation instruction corresponding to the operation event on the target picture.
In a feasible implementation manner, the first terminal executes the operation instruction corresponding to the operation event on the target picture to generate a picture to be displayed. For example, if the operation instruction is a screen capture instruction, the picture to be displayed is a screenshot preview picture. For example, when the user performs a finger joint double click on the touch screen of the second terminal as shown in fig. 7c, or clicks the screen capture icon 122 shown in fig. 8a and 8b, the corresponding picture to be displayed may be the screenshot preview picture shown in fig. 7c, 8a, and 8b. When the user draws a closed figure with a finger joint on the touch screen of the second terminal as shown in fig. 7d, the corresponding picture to be displayed may be the screenshot editing picture shown in fig. 7d. When the user draws an S with a finger joint on the touch screen of the second terminal as shown in fig. 7e, the corresponding pictures to be displayed indicate the screen scrolling effect and the screenshot editing picture shown in fig. 7e.
Further, in an example, when the screen projection manner between the first terminal and the second terminal is mirror image screen projection, optionally, both the first terminal and the second terminal may display a picture to be displayed.
In one example, the screen projection mode between the first terminal and the second terminal is heterogeneous screen projection, and the second terminal displays the picture to be displayed.
In an example, the first terminal executes an operation instruction corresponding to the operation event on the target screen to generate a screen that does not need to be displayed, for example, when the target screen is a screenshot editing screen shown in fig. 7d or fig. 7e, the user clicks a save button in the screenshot editing screen, and the operation instruction is to save the screenshot, then the second terminal does not display the saved screen.
In an example, the first terminal executes an operation instruction corresponding to the operation event on the target screen, and no screen is generated, for example, when the target screen is the screenshot editing screen shown in fig. 7d or fig. 7e, the user clicks a delete button in the screenshot editing screen, and the operation instruction is to delete the screenshot, the first terminal executes the operation instruction and does not generate a screen to be displayed.
The above is a description of the screen projection system and the components of the screen projection system in the embodiments of the present application. Next, based on the screen projection system described above and the cross-device interaction scheme shown in fig. 13b, an application scenario of a click operation in an area where a screen capture button or a screen record button is located in a pull-down menu according to the embodiment of the present application is described in detail.
B1. The first terminal establishes a screen projection connection with the second terminal.
For details, refer to the above description; they are not repeated here.
B2. The first terminal sends the target picture to the second terminal.
For details, refer to the above description; they are not repeated here.
B3. The second terminal displays the target screen.
For details, refer to the above description; they are not repeated here.
B4. The second terminal generates a first input event according to a sliding operation of a user on the target screen.
As one possible implementation, the first input event includes an event type and finger slide operation information. The finger sliding operation information may include a sliding direction, a sliding distance, a sliding duration, and the like, and the event type may be pressing, sliding, lifting, and the like.
In one example, the first input event may further include a device identification of the second terminal to facilitate the first terminal to identify the user-operated terminal.
In an example, the first input event may be generated by processing, by the kernel space, hardware interrupt information generated by a sliding operation of a user for a target picture, or may be generated by processing, by the kernel layer, hardware interrupt information generated by a sliding operation of a user for a target picture, which is reported by the hardware layer, by the kernel layer, where the kernel space, the kernel layer, and the hardware layer refer to the above description and are not described in detail here.
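As an illustration only, the sketch below shows one way the second terminal could derive the sliding direction, distance, and duration of the first input event from a press/lift pair; the enum values, field names, and the string encoding are assumptions.

```java
/** Hypothetical construction of the first input event on the second terminal from a
 *  press / slide / lift sequence: event type plus sliding direction, distance and duration. */
public class SlideEventBuilder {
    public enum EventType { PRESS, SLIDE, LIFT }

    private float startX, startY;
    private long startTime;

    public void onPress(float x, float y, long timeMillis) {
        startX = x; startY = y; startTime = timeMillis;
    }

    /** Called on lift; returns a compact description of the finger slide operation. */
    public String onLift(float x, float y, long timeMillis, String deviceId) {
        float dx = x - startX, dy = y - startY;
        String direction = Math.abs(dy) >= Math.abs(dx)
                ? (dy > 0 ? "DOWN" : "UP")
                : (dx > 0 ? "RIGHT" : "LEFT");
        double distance = Math.hypot(dx, dy);
        long duration = timeMillis - startTime;
        return "type=" + EventType.LIFT + " device=" + deviceId
                + " direction=" + direction + " distance=" + distance
                + " durationMs=" + duration;
    }
}
```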
B5. The second terminal sends the first input event to the first terminal.
B6. The first terminal recognizes the first input event as displaying a pull-down menu.
In one example, the first terminal recognizes the first input event, and thus learns which terminal the user operated and what the operation behavior was.
For example, the gesture operation recognized by the first terminal may be sliding from the top to the bottom as shown in fig. 7b, and the operation instruction is displaying a pull-down menu.
B7. The first terminal sends the pull-down menu picture and the layout information of the pull-down menu picture to the second terminal.
In one example, the layout information of the pull-down menu picture indicates the position information of the various buttons in the pull-down menu and the identifiers bound to them. For details, refer to the above description; they are not repeated here.
B8. The second terminal displays a pull-down menu screen on the basis of the target screen.
B9. And the second terminal generates a second input event according to the clicking operation of the user on the area where the screen capture button or the screen recording button is located in the pull-down menu picture.
As a possible implementation, the second input event includes finger click operation information and an event type. The finger click operation information includes finger touch position information and finger touch time information; for example, the finger touch position information may be the coordinates, in the screen coordinate system, of the pixels clicked on the touch screen of the second terminal, and the finger touch time information may be the touch times of those clicked pixels. The event type may be press and lift.
In an example, the second input event may be generated by processing, by the kernel space, hardware interrupt information generated by a user through a click operation on the pull-down menu screen, or may be generated by processing, by the kernel layer, hardware interrupt information generated by a user through a click operation on the pull-down menu screen, which is reported by the hardware layer, and the kernel space, the kernel layer, and the hardware layer refer to the above description, which is not described herein in detail.
B10. And the second terminal determines the identifier of the screen capture button or the screen recording button according to the second input event and the layout information of the pull-down menu picture.
As a feasible implementation manner, the second terminal determines that the second input event is generated by a user click, that is, the second input event contains only one press and one lift and the pressing duration is short. Based on a comparison between the finger touch position information in the second input event and the position information of the various buttons in the pull-down menu picture, it can be determined that the user clicked the screen capture button or the screen recording button, and the identifier of that button is determined. It should be noted that, based on the finger touch time information, the touch duration of the user can be obtained, so that it can be known whether the user performed a click operation or a long-press operation.
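A minimal Java sketch of this hit test, assuming the layout information is delivered as button identifiers bound to rectangular bounds; the long-press threshold and identifier strings are assumptions.

```java
import java.util.LinkedHashMap;
import java.util.Map;

/** Hypothetical mapping from a click in the pull-down menu picture to a button
 *  identifier, using the layout information (button bounds bound to identifiers)
 *  and the finger touch time to separate a click from a long press. */
public class PulldownMenuHitTester {
    private static final long LONG_PRESS_THRESHOLD_MS = 500; // assumed threshold

    /** key: button identifier (e.g. "screenshot", "screen_record"); value: bounds. */
    private final Map<String, float[]> buttonBounds = new LinkedHashMap<>();

    public void addButton(String id, float left, float top, float right, float bottom) {
        buttonBounds.put(id, new float[] { left, top, right, bottom });
    }

    public String hitTest(float x, float y, long pressDurationMs) {
        if (pressDurationMs >= LONG_PRESS_THRESHOLD_MS) {
            return null; // long press, not the click handled here
        }
        for (Map.Entry<String, float[]> e : buttonBounds.entrySet()) {
            float[] b = e.getValue();
            if (x >= b[0] && x <= b[2] && y >= b[1] && y <= b[3]) {
                return e.getKey();
            }
        }
        return null;
    }
}
```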
B11. And the second terminal encapsulates the identification of the screen capture button or the screen recording button and the second input event into an operation event.
In one example, the second terminal encapsulates the identifier of the screen capture button or the screen recording button, the device identifier of the second terminal and the second input event into the operation event, so that the first terminal can recognize the terminal operated by the user and the operation behavior.
For example, the gesture operation of the user may be clicking a screen capture button in the pull-down menu shown in fig. 7b, and correspondingly, the second terminal 120 may recognize that the user clicks the screen capture button in the pull-down menu to generate an operation event, where the operation event is a screen capture event.
For example, the gesture operation of the user may be to click a screen recording button in the pull-down menu shown in fig. 8b, and correspondingly, the second terminal 120 may recognize that the user clicks the screen recording button in the pull-down menu to generate an operation event, where the operation event is a screen recording event.
B12. The second terminal sends the operation event to the first terminal.
B13. The first terminal identifies the operation event and determines a screen capturing instruction or a screen recording instruction of a target picture corresponding to the operation event.
As a feasible implementation manner, the first terminal identifies the operation event, determines that the user clicks the screen capture button or the screen recording button on the pull-down menu displayed by the second terminal, and may determine that the operation instruction is the screen capture instruction or the screen recording instruction.
In one example, the screen capture instruction indicates to capture a target screen. Illustratively, if the first terminal recognizes that the user clicks the screen capture button in the pull-down menu shown in fig. 7b, the operation instruction is to capture the video screen in fig. 7 b.
In one example, the screen recording instruction instructs to start recording from the target screen. Illustratively, if the first terminal recognizes that the user clicks the screen recording button in the pull-down menu shown in fig. 8b, the operation command is to start recording the video screen in fig. 8 b.
B14. The first terminal executes the screen capture instruction or the screen recording instruction on the target picture.
As a feasible implementation manner, the operation instruction corresponding to the operation event is executed on the target picture, and the picture to be displayed is generated.
In one example, if the operation instruction is a screen capture instruction, the screen to be displayed is a screen capture preview screen. Illustratively, the target screen is the video screen shown in fig. 7b, and the screen capture preview screen shown in fig. 7b can be obtained after the screen capture instruction is executed.
In one example, if the operation instruction is a screen recording instruction, the picture to be displayed is a screen recording preview picture. Illustratively, the target screen is a video screen shown in fig. 8b, and a screen recording preview screen shown in fig. 8b can be obtained after executing the screen recording instruction.
The above is a description of the screen projection system and the components of the screen projection system in the embodiments of the present application. Next, based on the screen projection system described above and the cross-device interaction scheme shown in fig. 13b, a detailed description is given of an application scenario related to pressing a screen capture button of the second terminal in the embodiment of the present application.
C1. The first terminal establishes a screen projection connection with the second terminal.
For details, refer to the above description; they are not repeated here.
C2. The first terminal sends the target picture to the second terminal.
For details, refer to the above description; they are not repeated here.
C3. The second terminal displays the target screen.
For details, refer to the above description; they are not repeated here.
C4. The second terminal generates an operation event according to the user pressing the screen capture key of the second terminal.
As a possible implementation, the operation event includes a key value and key press time information. For details, reference is made to the above description and not to be repeated in any way.
In one example, the operation event may further include a device identification of the second terminal, so that the first terminal recognizes the terminal operated by the user.
In an example, the operation event may be generated by processing, by the kernel space, hardware interrupt information generated when the user presses the screen capture key of the second terminal, or may be generated by processing, by the kernel layer, hardware interrupt information generated when the user presses the screen capture key of the second terminal and reported by the hardware layer, where the kernel space, the kernel layer, and the hardware layer refer to the above description and are not described in detail here.
For example, the gesture operation of the user may be a key (on/off key + volume down key) shown in fig. 7a, and correspondingly, the operation event generated by the second terminal 120 includes a key value of the key and key time information, and the key time information may be a short press.
C5. The second terminal determines that the operation event is not a local event.
In one example, a local event may be understood as an event that the second terminal can process directly. For example, when the user presses the volume up key of the second terminal, the resulting volume up event is processed by the second terminal itself, so the volume up event is a local event.
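A small illustrative sketch of the local-event check in steps C4 to C6 follows; the key values and the single-key model of the screen capture combination are assumptions made only for this sketch and are not the key values actually used by either terminal.

```kotlin
// Illustrative sketch of deciding whether a key-derived operation event is local.
data class OperationEvent(val keyValue: Int, val pressDurationMs: Long, val deviceId: String)

object Keys {                         // hypothetical key values
    const val VOLUME_UP = 24
    const val CAPTURE_COMBO = 1001    // stand-in for on/off key + volume down key
}

class SecondTerminalDispatcher(
    private val handleLocally: (OperationEvent) -> Unit,
    private val sendToFirstTerminal: (OperationEvent) -> Unit,
) {
    // Events the second terminal can process by itself are local events.
    private val localKeys = setOf(Keys.VOLUME_UP)

    fun dispatch(event: OperationEvent) =
        if (event.keyValue in localKeys) handleLocally(event)    // e.g. raise local volume
        else sendToFirstTerminal(event)                          // e.g. screen capture combination
}

fun main() {
    val dispatcher = SecondTerminalDispatcher(
        handleLocally = { println("handled locally: $it") },
        sendToFirstTerminal = { println("forwarded to first terminal: $it") },
    )
    dispatcher.dispatch(OperationEvent(Keys.CAPTURE_COMBO, pressDurationMs = 120, deviceId = "tablet-01"))
}
```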
C6. The second terminal sends the operation event to the first terminal.
C7. The first terminal identifies the operation event and determines the screen capture instruction for the target picture.
For example, the operation recognized by the first terminal may be the key combination (on/off key + volume down key) shown in fig. 7a; correspondingly, the operation instruction is a screen capture instruction.
C8. The first terminal executes the screen capture instruction on the target picture to obtain a screen capture preview picture.
Illustratively, when the user presses the keys on the second terminal as shown in fig. 7a, the first terminal may generate the screen capture preview picture shown in fig. 7a.
It should be noted that the above is only an example of a cross-device interaction scheme and does not constitute any limitation. In some possible implementations, the user's gesture operation may be a mid-air grab made after the hand-shaped icon appears, as shown in fig. 7f. As shown in fig. 7f, the second terminal 120 may generate an operation event according to the mid-air grab performed by the user on the video picture where the hand-shaped icon 121 appears, where the operation event indicates a mid-air gesture event occurring after the hand-shaped icon appears; then, the first terminal 110 may recognize the mid-air gesture corresponding to the operation event as a grab, determine that the operation instruction corresponding to the mid-air grab is a screen capture instruction, capture the video playing picture shown in fig. 7f, generate a screen capture preview picture, and send the screen capture preview picture to the second terminal 120 for display. In this implementation, the second terminal 120 detects that a mid-air gesture has occurred, and the first terminal 110 recognizes what the mid-air gesture is, thereby ensuring the response speed and performance of both the first terminal 110 and the second terminal 120.
It should be understood that when the screen projection mode between the first terminal and the second terminal is heterogeneous screen projection, the first terminal, when executing an operation instruction, determines the virtual screen of the second terminal based on the device identifier of the second terminal and executes the operation instruction based on the target picture of that virtual screen.
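The following sketch illustrates, under the stated assumption of hypothetical VirtualScreen and VirtualScreenRegistry types, how a casting terminal might resolve the virtual screen of the second terminal from the device identifier carried in the operation event before executing the instruction in heterogeneous screen projection.

```kotlin
// Illustrative only: one virtual screen is kept per projected-to device and looked
// up by the device identifier carried in the operation event.
data class VirtualScreen(val deviceId: String, val widthPx: Int, val heightPx: Int)

class VirtualScreenRegistry {
    private val screens = mutableMapOf<String, VirtualScreen>()

    // The virtual screen is created with a size matched to the second terminal's screen.
    fun register(deviceId: String, widthPx: Int, heightPx: Int) {
        screens[deviceId] = VirtualScreen(deviceId, widthPx, heightPx)
    }

    // Resolve the virtual screen from the device identifier carried in the operation event.
    fun resolve(deviceId: String): VirtualScreen =
        screens[deviceId] ?: error("no virtual screen registered for device $deviceId")
}

fun main() {
    val registry = VirtualScreenRegistry()
    registry.register(deviceId = "tablet-01", widthPx = 1600, heightPx = 2560)
    val screen = registry.resolve("tablet-01")
    println("execute screen capture on the target picture of ${screen.deviceId} (${screen.widthPx}x${screen.heightPx})")
}
```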
Next, a cross-device interaction method provided in the embodiments of the present application is introduced based on the cross-device interaction scheme described above. It should be understood that the method is another expression of that scheme, and the two may be read in combination; some or all of the contents of the method can therefore be found in the above description of the scheme. The method is applied to the screen projection system, in which a screen projection connection is established between the first terminal and the second terminal in a wired or wireless manner.
Step 101: The second terminal displays a first picture sent by the first terminal.
Step 102: The second terminal generates an operation event according to a target operation made by a user for the first picture, where the target operation is a screen capture operation, a screen recording operation, or a knuckle operation.
Step 103: The second terminal sends the operation event to the first terminal.
Step 104: The first terminal determines an operation instruction corresponding to the operation event and executes the operation instruction based on the first picture.
In one example, the first picture may be the target picture or the pull-down menu picture shown in fig. 13a to fig. 13c; for details, refer to the above description, which is not repeated here.
In one example, the target operation is made by the user on the second terminal for the first picture. For details, refer to the above description, which is not repeated here.
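The sketch below walks through steps 101 to 104 end to end under simplifying assumptions: the two terminals are modeled as in-process objects and the screen projection channel as a direct method call; the type names are placeholders introduced for illustration, not the actual protocol.

```kotlin
// End-to-end sketch of steps 101-104 under simplifying assumptions.
data class Picture(val description: String)
data class OperationEvent(val kind: String)   // e.g. "screen_capture", "screen_record"

class FirstTerminal {
    // Step 101 (sender side): the first picture that is projected to the second terminal.
    fun firstPicture() = Picture("video playback picture")

    // Step 104: determine the operation instruction corresponding to the operation
    // event and execute it based on the first picture.
    fun handle(event: OperationEvent, firstPicture: Picture): Picture {
        val instruction = when (event.kind) {
            "screen_capture" -> "screen capture"
            "screen_record" -> "screen recording"
            else -> "unrecognized"
        }
        return Picture("$instruction preview generated from the ${firstPicture.description}")
    }
}

class SecondTerminal(private val first: FirstTerminal) {
    fun run() {
        val shown = first.firstPicture()                     // step 101: display the first picture
        val event = OperationEvent(kind = "screen_capture")  // step 102: target operation -> operation event
        val result = first.handle(event, shown)              // steps 103-104: send the event; first terminal executes
        println("second terminal now displays: ${result.description}")
    }
}

fun main() = SecondTerminal(FirstTerminal()).run()
```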
As a possible implementation, the target operation is a knuckle operation. In this case, step 102 includes:
the second terminal generates a first input event according to the knuckle operation made by the user for the first picture, where the first input event includes knuckle touch information and knuckle press information; the second terminal determines a knuckle identifier of the first input event when recognizing, based on a knuckle algorithm, that the first input event is generated by a knuckle of the user; and the second terminal encapsulates the first input event and the knuckle identifier as an operation event.
In one example, the first input event may be generated by the kernel space or the kernel layer processing the hardware interrupt information generated by the knuckle operation that the user performs on the first picture. For details, refer to the above description; details are not repeated here.
In one example, the programs that the second terminal needs in order to generate the operation event are ported from the first terminal; in other words, the first terminal and the second terminal use the same code to generate the operation event. Correspondingly, the knuckle identifier may be determined by the knuckle algorithm.
In one example, the first terminal and the second terminal use different code to generate the operation event. Correspondingly, the knuckle identifier may be an identifier that the first terminal can recognize and that indicates that the first input event is generated by a knuckle. In addition, in some possible cases, the knuckle identifier may also be requested by the second terminal from the first terminal. For details, refer to the above description; details are not repeated here.
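The following is an illustrative sketch, not the actual knuckle algorithm, of how the projected-to terminal might turn raw touch data into a first input event, classify it with a stubbed knuckle detector, and encapsulate the input event together with a knuckle identifier into the operation event; the field names and the threshold-based classifier are assumptions made for illustration.

```kotlin
// Sketch of knuckle-event generation and encapsulation on the second terminal.
data class FirstInputEvent(
    val touchX: Float, val touchY: Float,           // knuckle touch information
    val pressure: Float, val contactAreaMm2: Float  // knuckle press information
)

data class KnuckleOperationEvent(val input: FirstInputEvent, val knuckleId: String)

// Stub standing in for the real knuckle algorithm, which would use richer features
// (acceleration, acoustic signature, contact shape) than this simple heuristic.
fun isKnuckleTouch(e: FirstInputEvent): Boolean =
    e.pressure > 0.6f && e.contactAreaMm2 < 40f

// Encapsulate the first input event and the knuckle identifier as the operation event.
fun buildOperationEvent(e: FirstInputEvent): KnuckleOperationEvent? =
    if (isKnuckleTouch(e)) KnuckleOperationEvent(input = e, knuckleId = "KNUCKLE") else null

fun main() {
    val raw = FirstInputEvent(touchX = 320f, touchY = 540f, pressure = 0.8f, contactAreaMm2 = 25f)
    println(buildOperationEvent(raw) ?: "finger-pad touch: no knuckle operation event generated")
}
```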
Further, step 104 includes:
the first terminal identifies the operation event to determine the knuckle action made by the user for the first picture; and the first terminal determines the operation instruction corresponding to the operation event based on the knuckle action.
In this implementation, the knuckle is recognized by the second terminal (the terminal being projected to), and the first terminal (the terminal doing the projecting) only recognizes the knuckle action, which ensures the response speed and performance of the first terminal.
In one example, the first terminal may further determine the operation instruction based on the stored first picture together with the knuckle action, that is, by associating the knuckle action with the picture. For example, if the first picture is the home page and the knuckle draws a letter, the operation instruction may be to open a designated application on the home page.
In one example, the first terminal determines the operation instruction based on the knuckle action and the knuckle touch position information, where the knuckle touch position corresponds to a finger pad touch position. For details, refer to the above description, which is not repeated here.
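A minimal sketch follows, assuming hypothetical action names, of how the casting terminal might map the recognized knuckle action, optionally combined with the picture currently shown, to an operation instruction; the mapping table is illustrative only, not the actual correspondence defined by the system.

```kotlin
// Sketch of mapping a recognized knuckle action (plus picture context) to an instruction.
enum class KnuckleAction { DOUBLE_KNOCK, DRAW_LETTER }
enum class PictureKind { HOME_PAGE, VIDEO_PLAYBACK, OTHER }

fun instructionFor(action: KnuckleAction, picture: PictureKind): String = when {
    // A double knock with the knuckle maps to a screen capture of the current picture.
    action == KnuckleAction.DOUBLE_KNOCK -> "screen capture instruction for the current picture"
    // A letter drawn with the knuckle on the home page opens the designated application.
    action == KnuckleAction.DRAW_LETTER && picture == PictureKind.HOME_PAGE ->
        "open the designated application on the home page"
    else -> "no instruction"
}

fun main() {
    println(instructionFor(KnuckleAction.DRAW_LETTER, PictureKind.HOME_PAGE))
    println(instructionFor(KnuckleAction.DOUBLE_KNOCK, PictureKind.VIDEO_PLAYBACK))
}
```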
As a possible implementation, the second terminal displays a target picture before displaying the first picture; the first picture is a pull-down menu picture; the target operation is a click operation on the area where the screen capture button or the screen recording button is located in the pull-down menu picture; the second terminal stores the layout information of the pull-down menu picture; and the operation instruction is a screen capture instruction or a screen recording instruction for the target picture. In this case, step 102 includes:
the second terminal generates a second input event according to the click operation made by the user on the area where the screen capture button or the screen recording button is located in the pull-down menu picture, where the second input event includes finger click operation information; the second terminal determines the identifier of the screen capture button or the screen recording button according to the layout information of the pull-down menu picture and the second input event; and the second terminal encapsulates the identifier of the screen capture button or the screen recording button and the second input event as an operation event.
In one example, the second input event may be generated by the kernel space or the kernel layer processing the hardware interrupt information generated when the user clicks the area where the screen capture button or the screen recording button is located in the pull-down menu picture.
In one example, when the first terminal and the second terminal establish the screen projection connection, the pull-down menu picture and the layout information of the pull-down menu picture are sent to the second terminal for storage.
In one example, the second terminal generates an input event according to a sliding operation made by the user on the target picture and sends the input event to the first terminal; when the first terminal identifies that the input event indicates displaying a pull-down menu, the first terminal sends the pull-down menu picture and the layout information of the pull-down menu picture to the second terminal for storage.
In this implementation, the screen capture button or the screen recording button in the pull-down menu picture is identified by the second terminal (the terminal being projected to), so the first terminal (the terminal doing the projecting) can directly determine the screen capture instruction or the screen recording instruction, which ensures the response speed and performance of the first terminal. In addition, because the buttons in the pull-down menu are recognized by the second terminal, the accuracy of the operation event is ensured without having to account for the difference in screen size between the first terminal and the second terminal.
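The sketch below illustrates, with assumed field names, how the projected-to terminal might hit-test the click coordinates of the second input event against the stored layout information of the pull-down menu picture to find the button identifier, then package it with the input event as the operation event.

```kotlin
// Illustrative hit-test of a click against the stored pull-down menu layout.
data class ButtonLayout(val id: String, val left: Int, val top: Int, val right: Int, val bottom: Int)
data class SecondInputEvent(val clickX: Int, val clickY: Int)   // finger click operation information
data class MenuOperationEvent(val buttonId: String, val input: SecondInputEvent)

class PullDownMenuLayout(private val buttons: List<ButtonLayout>) {
    // Return the identifier of the button whose area contains the click, if any.
    fun hitTest(e: SecondInputEvent): String? =
        buttons.firstOrNull { e.clickX in it.left..it.right && e.clickY in it.top..it.bottom }?.id
}

// Encapsulate the button identifier and the second input event as the operation event.
fun buildMenuOperationEvent(layout: PullDownMenuLayout, e: SecondInputEvent): MenuOperationEvent? =
    layout.hitTest(e)?.let { MenuOperationEvent(buttonId = it, input = e) }

fun main() {
    val layout = PullDownMenuLayout(listOf(
        ButtonLayout("screen_capture", left = 40, top = 200, right = 240, bottom = 320),
        ButtonLayout("screen_record", left = 280, top = 200, right = 480, bottom = 320),
    ))
    println(buildMenuOperationEvent(layout, SecondInputEvent(clickX = 120, clickY = 260)))
}
```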
As a possible implementation, the target operation is pressing the screen capture key of the second terminal. In this case, step 102 includes:
the second terminal generates an operation event according to the user pressing the screen capture key of the second terminal, where the operation event includes key time information and a key value; and when the second terminal determines that the operation event is not a local event, the second terminal sends the operation event to the first terminal.
In one example, the operation event may be generated by the kernel space or the kernel layer processing the hardware interrupt information generated when the user presses the screen capture key of the second terminal.
Further, step 104 includes:
when the first terminal identifies the operation event as a screen capture event, the first terminal determines that the operation instruction corresponding to the operation event is a screen capture instruction.
As a feasible implementation, the screen projection mode between the first terminal and the second terminal is heterogeneous screen projection; the first terminal is provided with a virtual screen of the second terminal; the first picture is a picture matched to the screen size of the virtual screen, and the screen size of the virtual screen is matched to the screen size of the second terminal; and the operation event carries the device identifier of the second terminal, so that the first terminal identifies the device identifier of the second terminal to determine the virtual screen of the second terminal.
As a feasible implementation, the screen projection mode between the first terminal and the second terminal is mirror image screen projection. In this case, step 104 includes:
executing the operation instruction on the picture displayed by the first terminal, where the first picture is a mirror image of the picture displayed by the first terminal.
As a possible implementation, the method further includes:
sending, to the second terminal, a second picture obtained by executing the operation instruction based on the first picture, so that the second terminal displays the second picture.
The second picture refers to the picture to be displayed that is obtained by executing the operation instruction on the target picture.
It can be understood that the processor in the embodiments of the present application may be a Central Processing Unit (CPU), another general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or another programmable logic device, a transistor logic device, a hardware component, or any combination thereof. The general purpose processor may be a microprocessor or any conventional processor.
The method steps in the embodiments of the present application may be implemented by hardware, or by software instructions executed by a processor. The software instructions may consist of corresponding software modules, which may be stored in random access memory (RAM), flash memory, read-only memory (ROM), programmable ROM (PROM), erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), a register, a hard disk, a removable hard disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. Of course, the storage medium may also be integral to the processor. The processor and the storage medium may reside in an ASIC.
In the above embodiments, the implementation may be realized in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, it may be realized in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the procedures or functions described in the embodiments of the present application are generated in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or another programmable device. The computer instructions may be stored in or transmitted over a computer-readable storage medium. The computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center in a wired (for example, coaxial cable, optical fiber, or digital subscriber line (DSL)) or wireless (for example, infrared, radio, or microwave) manner. The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or a data center that integrates one or more available media. The available medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, a DVD), or a semiconductor medium (for example, a solid state disk (SSD)), among others.
It is to be understood that the various numerical references referred to in the embodiments of the present application are merely for convenience of description and distinction and are not intended to limit the scope of the embodiments of the present application.

Claims (25)

1. A cross-device interaction method, applied to a screen projection system, wherein the screen projection system comprises a first terminal and a second terminal, and a screen projection connection is established between the first terminal and the second terminal, and the method comprises:
the second terminal displays a first picture sent by the first terminal;
the second terminal generates an operation event according to a target operation made by a user for the first picture; wherein the target operation is a screen capture operation, a screen recording operation or a finger joint operation;
and the first terminal determines an operation instruction corresponding to the operation event and executes the operation instruction based on the first picture.
2. The method of claim 1, wherein the target operation is a knuckle operation;
wherein the generating, by the second terminal, of an operation event according to a target operation made by a user for the first picture comprises:
the second terminal generates a first input event according to the finger joint operation of the user on the first picture; wherein the first input event comprises knuckle touch information and knuckle press information;
the second terminal determines a knuckle identification of the first input event when recognizing that the first input event is generated by a knuckle of the user based on a knuckle algorithm;
the second terminal packages the first input event and the knuckle identification as an operation event;
the determining, by the first terminal, an operation instruction corresponding to the operation event includes:
the first terminal identifies the operation event to determine the knuckle action made by the user for the first picture;
and the first terminal determines an operation instruction corresponding to the operation event based on the knuckle action.
3. The method according to claim 1, wherein the second terminal displays a target picture before displaying the first picture; the first picture is a pull-down menu picture; the target operation is a click operation aiming at an area where a screen capture button or a screen recording button is located in the pull-down menu picture; the second terminal stores layout information of the pull-down menu picture; the operation instruction is a screen capturing instruction or a screen recording instruction of the target picture;
wherein the generating, by the second terminal, of an operation event according to a target operation made by a user for the first picture comprises:
the second terminal generates a second input event according to a click operation which is made by a user aiming at an area where a screen capture button or a screen recording button is located in the pull-down menu picture; wherein the second input event comprises finger click operation information;
the second terminal determines the identifier of a screen capture button or a screen recording button according to the layout information of the pull-down menu picture and the second input event;
and the second terminal encapsulates the identifier of the screen capture button or the screen recording button and the second input event into an operation event.
4. The method according to claim 1, wherein the target operation is pressing a screen capture key of the second terminal;
wherein the generating, by the second terminal, of an operation event according to the target operation made by the user for the first picture comprises:
the second terminal generates an operation event according to the user pressing a screen capture key of the second terminal; the operation event comprises key time information and a key value;
when the second terminal judges that the operation event is not a local event, the operation event is sent to the first terminal;
the determining, by the first terminal, an operation instruction corresponding to the operation event includes:
and when the first terminal identifies that the operation event is a screen capture event, determining that an operation instruction corresponding to the operation event is a screen capture instruction.
5. The method of claim 1, wherein the screen projection mode between the first terminal and the second terminal is heterogeneous screen projection;
the first terminal is provided with a virtual screen of the second terminal; the first picture is a picture matched with the screen size of the virtual screen, and the screen size of the virtual screen is matched with the screen size of the second terminal;
the operation event carries the equipment identifier of the second terminal, so that the first terminal identifies the equipment identifier of the second terminal to determine the virtual screen of the second terminal.
6. The method according to claim 1, wherein the screen projection mode between the first terminal and the second terminal is mirror image screen projection;
the executing the operation instruction based on the first picture comprises:
and executing the operation instruction on the picture displayed by the first terminal, wherein the first picture is a mirror image of the picture displayed by the first terminal.
7. The method of claim 1, further comprising:
and sending a second picture obtained by executing the operation instruction based on the first picture to the second terminal so as to enable the second terminal to display the second picture.
8. A cross-device interaction method, applied to a first terminal, wherein the method comprises:
sending a first picture to a second terminal so that the second terminal can display the first picture; the first terminal and the second terminal are connected in a screen projection mode;
receiving an operation event sent by the second terminal; the operation event is generated by the second terminal according to a target operation made by a user for the first picture, wherein the target operation is a screen capture operation, a screen recording operation or a finger joint operation;
determining an operation instruction corresponding to the operation event;
and executing the operation instruction based on the first picture.
9. The method according to claim 8, wherein the target operation is a finger joint operation, and the operation event is generated by the second terminal packaging a first input event and a finger joint identifier of the first input event; wherein the first input event is generated according to a finger joint operation made by a user for the first picture, and the finger joint identifier is determined when the first input event is recognized, based on a finger joint algorithm, to be generated by a finger joint of the user;
the determining the operation instruction corresponding to the operation event includes:
identifying the operation event to determine a knuckle motion made by the user for the first picture;
and determining an operation instruction corresponding to the operation event based on the knuckle motion.
10. The method according to claim 8, wherein the second terminal displays a target picture before displaying the first picture; the first picture is a pull-down menu picture; the target operation is a click operation made on the second terminal aiming at the area where the screen capture button or the screen recording button is located in the pull-down menu picture; the operation instruction is a screen capturing instruction or a screen recording instruction of the target picture; the operation event comprises an identifier of the screen capture button or screen recording button clicked by the user in the pull-down menu picture;
the method further comprises the following steps:
receiving a second input event sent by the second terminal; the second input event is generated by the second terminal according to the sliding operation of the user on the target picture, and the second input event comprises finger sliding operation information;
when the second input event is recognized as indicating display of a pull-down menu, sending an internally stored pull-down menu picture and layout information of the pull-down menu picture to the second terminal, so that the second terminal displays the pull-down menu picture on the basis of the displayed target picture and determines the identifier of the screen capture button or the screen recording button according to the layout information of the pull-down menu picture and a third input event generated from the click operation made by the user on the area where the screen capture button or the screen recording button is located in the pull-down menu picture; wherein the third input event comprises finger click operation information.
11. The method according to claim 8, wherein the target operation is pressing a screen capture key of the second terminal; the operation event comprises key time information and a key value, and is not a local event of the second terminal;
the determining the operation instruction corresponding to the operation event includes:
and when the operation event is identified as the screen capturing event, determining that the operation instruction corresponding to the operation event is the screen capturing instruction.
12. The method of claim 8, wherein the screen projection mode between the first terminal and the second terminal is heterogeneous screen projection;
the first terminal is provided with a virtual screen of the second terminal; the first picture is a picture matched with the screen size of the virtual screen, and the screen size of the virtual screen is matched with the screen size of the second terminal;
the operation event carries the equipment identifier of the second terminal, so that the first terminal identifies the equipment identifier of the second terminal to determine the virtual screen of the second terminal.
13. The method of claim 8, wherein the screen projection mode between the first terminal and the second terminal is mirror image screen projection;
the executing the operation instruction based on the first picture comprises:
and executing the operation instruction on the picture displayed by the first terminal, wherein the first picture is a mirror image of the picture displayed by the first terminal.
14. The method of claim 8, further comprising:
and sending a second picture obtained by executing the operation instruction based on the first picture to the second terminal so as to enable the second terminal to display the second picture.
15. A cross-device interaction method, applied to a second terminal, wherein the method comprises:
displaying a first picture sent by a first terminal; the first terminal and the second terminal are connected in a screen projection mode;
generating an operation event according to a target operation made by a user on the second terminal for the first picture; wherein the target operation is a screen capture operation, a screen recording operation or a finger joint operation;
and sending the operation event to the first terminal so that the first terminal determines an operation instruction corresponding to the operation event and executes the operation instruction based on the first picture.
16. The method of claim 15, wherein the target operation is a knuckle operation;
wherein the generating of the operation event according to the target operation made by the user on the second terminal for the first picture comprises:
generating a first input event according to a finger joint operation which is performed on the second terminal by a user aiming at the first picture; wherein the first input event comprises knuckle touch information and knuckle press information;
when it is recognized that the first input event is generated by a knuckle of the user based on a knuckle algorithm, determining a knuckle identification of the first input event;
encapsulating the first input event and the knuckle identification as an operational event.
17. The method according to claim 15, wherein the second terminal displays a target picture before displaying the first picture; the first picture is a pull-down menu picture; the target operation is a click operation aiming at an area where a screen capture button or a screen recording button is located in the pull-down menu picture; the operation instruction is a screen capturing instruction or a screen recording instruction of the target picture;
before generating an operation event according to a target operation of the user on the first screen at the second terminal, the method further includes:
generating a second input event according to the sliding operation of the user on the second terminal aiming at the target picture; wherein the second input event comprises finger sliding operation information;
sending the second input event to the first terminal, so that when the first terminal identifies that the second input event indicates displaying a pull-down menu, the first terminal sends an internally stored pull-down menu picture and layout information of the pull-down menu picture to the second terminal;
displaying the pull-down menu picture on the basis of the target picture, and storing layout information of the pull-down menu picture;
wherein the generating of an operation event according to a target operation made by the user on the second terminal for the first picture comprises:
generating a third input event according to the click operation of the user on the second terminal aiming at the area where the screen capture button or the screen recording button is located in the pull-down menu picture; wherein the third input event comprises finger click operation information;
determining the identifier of a screen capture button or a screen recording button according to the layout information of the pull-down menu picture and the third input event;
and encapsulating the identification of the screen capture button or the screen recording button and the third input event into an operation event.
18. The method according to claim 15, wherein the target operation is pressing a screen capture key of the second terminal; the operation event comprises key time information and key values;
wherein the generating of an operation event according to the target operation made by the user for the first picture further comprises:
and when the operation event is judged not to be a local event, sending the operation event to the first terminal.
19. The method of claim 15, wherein the screen projection mode between the first terminal and the second terminal is heterogeneous screen projection;
the first terminal is provided with a virtual screen of the second terminal; the first picture is a picture matched with the screen size of the virtual screen, and the screen size of the virtual screen is matched with the screen size of the second terminal;
the operation event carries the equipment identifier of the second terminal, so that the first terminal identifies the equipment identifier of the second terminal to determine the virtual screen of the second terminal.
20. The method of claim 15, wherein the screen projection between the first terminal and the second terminal is a mirror image screen projection.
21. The method of claim 15, further comprising:
receiving a second picture determined by the first terminal by executing the operation instruction based on the first picture;
and displaying the second picture.
22. A screen projection system, comprising: a first terminal for performing the method of any of claims 8-14 and a second terminal for performing the method of any of claims 15-21.
23. An apparatus for interacting across devices, the apparatus running computer program instructions to perform the method of any one of claims 8-14 or to perform the method of any one of claims 15-21.
24. A terminal, comprising:
at least one memory for storing a program;
at least one processor configured to execute the program stored in the memory, wherein when the program stored in the memory is executed, the processor is configured to perform the method of any of claims 8-14 or perform the method of any of claims 15-21.
25. A computer storage medium having stored therein instructions which, when run on a computer, cause the computer to perform the method of any of claims 8-14 or to perform the method of any of claims 15-21.
CN202111034541.2A 2021-09-03 2021-09-03 Cross-device interaction method and device, screen projection system and terminal Pending CN115756268A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202111034541.2A CN115756268A (en) 2021-09-03 2021-09-03 Cross-device interaction method and device, screen projection system and terminal
PCT/CN2022/114303 WO2023030099A1 (en) 2021-09-03 2022-08-23 Cross-device interaction method and apparatus, and screen projection system and terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111034541.2A CN115756268A (en) 2021-09-03 2021-09-03 Cross-device interaction method and device, screen projection system and terminal

Publications (1)

Publication Number Publication Date
CN115756268A true CN115756268A (en) 2023-03-07

Family

ID=85332717

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111034541.2A Pending CN115756268A (en) 2021-09-03 2021-09-03 Cross-device interaction method and device, screen projection system and terminal

Country Status (2)

Country Link
CN (1) CN115756268A (en)
WO (1) WO2023030099A1 (en)


Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003162364A (en) * 2001-11-28 2003-06-06 Sony Corp Remote control system for application software which is projected on screen
CN107147769B (en) * 2016-03-01 2020-09-08 阿里巴巴集团控股有限公司 Equipment control method and device based on mobile terminal and mobile terminal
CN107483994A (en) * 2017-07-31 2017-12-15 广州指观网络科技有限公司 It is a kind of reversely to throw screen control system and method
CN110377250B (en) * 2019-06-05 2021-07-16 华为技术有限公司 Touch method in screen projection scene and electronic equipment
CN111221491A (en) * 2020-01-09 2020-06-02 Oppo(重庆)智能科技有限公司 Interaction control method and device, electronic equipment and storage medium
CN112394895B (en) * 2020-11-16 2023-10-13 Oppo广东移动通信有限公司 Picture cross-device display method and device and electronic device
CN113031843A (en) * 2021-04-25 2021-06-25 歌尔股份有限公司 Watch control method, display terminal and watch

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117130471A (en) * 2023-03-31 2023-11-28 荣耀终端有限公司 Man-machine interaction method, electronic equipment and system
CN116248657A (en) * 2023-05-09 2023-06-09 深圳开鸿数字产业发展有限公司 Control method and device of screen projection system, computer equipment and storage medium
CN116301699A (en) * 2023-05-17 2023-06-23 深圳开鸿数字产业发展有限公司 Distributed screen projection method, terminal equipment, display screen, screen projection system and medium
CN116301699B (en) * 2023-05-17 2023-08-22 深圳开鸿数字产业发展有限公司 Distributed screen projection method, terminal equipment, display screen, screen projection system and medium

Also Published As

Publication number Publication date
WO2023030099A1 (en) 2023-03-09


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination