WO2022042769A2 - Multi-screen interaction system and method, apparatus, and storage medium - Google Patents

Multi-screen interaction system and method, apparatus, and storage medium

Info

Publication number
WO2022042769A2
Authority
WO
WIPO (PCT)
Prior art keywords
screen
display frame
target display
user
response
Prior art date
Application number
PCT/CN2021/125874
Other languages
English (en)
Chinese (zh)
Other versions
WO2022042769A3 (fr)
Inventor
颜忠生
Original Assignee
荣耀终端有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 荣耀终端有限公司
Publication of WO2022042769A2 publication Critical patent/WO2022042769A2/fr
Publication of WO2022042769A3 publication Critical patent/WO2022042769A3/fr


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14Digital output to display device ; Cooperation and interconnection of the display device with other functional units
    • G06F3/1454Digital output to display device ; Cooperation and interconnection of the display device with other functional units involving copying of the display data of a local workstation or window to a remote workstation or window so that an actual copy of the data is displayed simultaneously on two or more displays, e.g. teledisplay
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14Digital output to display device ; Cooperation and interconnection of the display device with other functional units
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14Digital output to display device ; Cooperation and interconnection of the display device with other functional units
    • G06F3/1423Digital output to display device ; Cooperation and interconnection of the display device with other functional units controlling a plurality of local displays, e.g. CRT and flat panel display
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/451Execution arrangements for user interfaces

Definitions

  • Embodiments of the present disclosure relate to the field of device interaction, and more particularly, to a system, method, apparatus, and medium for multi-screen interaction.
  • Multi-screen sharing technology generally refers to sharing screen content between a master device and a slave device via a wired connection or a wireless connection.
  • Screen content marking technology generally means that the second screen marks the screen content from the first screen by means of local screenshots, photographs, and the like, while the screen content from the first screen is displayed synchronously.
  • Current multi-screen interaction technology cannot share various types of information between different screens, and cannot realize further interaction between different screens after the shared screen content has been marked.
  • Embodiments of the present disclosure provide a system, method, apparatus, device, and computer-readable storage medium for multi-screen interaction, enabling various types of information to be shared between different screens and enabling further interaction between different screens based on marked screen content.
  • A system for multi-screen interaction includes a first device including a first screen and a second device including a second screen, wherein: the first device displays a first content on the first screen, the first content including a plurality of display frames; the second device receives a first trigger operation of a user; in response to the received first trigger operation, the second device sends a request for multi-screen interaction to the first device, the request including a request time point; the first device sends a response to the second device according to the request, the response at least including at least one display frame of the first screen near the request time point; the second device receives the response and displays a first interface on the second screen, the first interface including the at least one display frame; the second device receives user input indicating the user's selection and/or editing of a target display frame in the at least one display frame; the second device receives a second trigger operation of the user; and in response to the received second trigger operation, the second device sends the target display frame or the edited target display frame to the first device for display on the first screen.
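  • By way of non-limiting illustration only, the first request, its response, and the subsequent sharing request described above can be modeled as simple data structures, as sketched below; all class and field names are hypothetical and are not part of the disclosed embodiments.

```kotlin
// Illustrative sketch of the data carried between the two devices; names are assumptions.
data class InteractionRequest(
    val requestTimeMillis: Long,                 // the "request time point"
    val requestType: String = "multi-screen interaction"
)

data class InteractionResponse(
    val displayFrames: List<ByteArray>,          // at least one display frame near the request time point
    val recording: ByteArray? = null             // optional recording corresponding to those frames
)

data class ShareRequest(
    val targetTimeMillis: Long,                  // the "target time point" of the selected frame
    val targetFrame: ByteArray,                  // the target display frame, possibly edited
    val questionRecording: ByteArray? = null,    // optional recorded question
    val recordingSegment: ByteArray? = null      // optional recording segment matching the frame
)
```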
  • In response to receiving the target display frame or the edited target display frame, the first device displays on the first screen a prompt asking whether to allow sharing of the target display frame; the first device receives another user input; and in response to the other user input indicating that the user allows the sharing, the first device displays the target display frame or the edited target display frame on the first screen. In this way, screen content sharing between different screens can be performed under user control.
  • The second device receives a control command input by a user; in response to the received control command, the second device sends the control command to the first device for controlling the display of the first screen; and the first device displays the target display frame or the edited target display frame on the first screen according to the control command. In this way, various types of interactions can be implemented between different screens based on the user's control commands.
  • The control command includes any one of the following: a first control command for displaying the edited target display frame on the first screen; a second control command for displaying the target display frame on the first screen; a third control command for playing the user's question about the target display frame while the target display frame is displayed on the first screen; and a fourth control command for playing the recording segment corresponding to the target display frame while the edited target display frame is displayed on the first screen.
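  • Purely as an illustrative sketch, these four candidate control commands could be represented on the second device as an enumeration (the names below are assumptions):

```kotlin
// Hypothetical enumeration of the four control commands described above.
enum class ControlCommand {
    SHOW_EDITED_FRAME,                 // display the edited target display frame on the first screen
    SHOW_ORIGINAL_FRAME,               // display the (unedited) target display frame on the first screen
    SHOW_FRAME_AND_PLAY_QUESTION,      // display the target frame while playing the user's question
    SHOW_EDITED_FRAME_AND_PLAY_AUDIO   // display the edited frame while playing the matching recording segment
}
```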
  • the user input further indicates a user question for the target display frame.
  • The second device sends the question to the first device; and the first device plays the question while displaying the target display frame or the edited target display frame on the first screen. In this way, user questions regarding screen content are allowed to be shared between different devices.
  • The response includes a sound recording corresponding to the at least one display frame; the first interface includes a visual representation of the sound recording; and the user input indicates the user's selection of a recording segment in the sound recording, the recording segment corresponding to the target display frame. In this way, the user is allowed to select the screen content to be shared by selecting a segment of the recording.
  • In response to receiving the second trigger operation, the second device sends the recording segment to the first device; and the first device plays the recording segment while displaying the target display frame or the edited target display frame on the first screen. In this way, audio clips corresponding to screen content are allowed to be shared between different devices.
  • The first device receives a third trigger operation of a user; and in response to the received third trigger operation, the first device stops displaying the target display frame or the edited target display frame on the first screen and redisplays the first content on the first screen. In this way, the user is allowed to take back control of the screen and terminate the multi-screen interaction.
  • Before the second device sends the request to the first device, the first device and the second device establish a connection for multi-screen interaction. In this way, connections for multi-screen interaction can be established between different devices.
  • A method for multi-screen interaction includes: in response to receiving a first trigger operation from a user, the second device sends a request for multi-screen interaction to the first device, where the request includes a request time point; the second device receives a response from the first device, where the response at least includes at least one display frame of the first screen of the first device near the request time point; the second device displays a first interface on the second screen of the second device, the first interface comprising the at least one display frame; the second device receives user input indicating the user's selection and/or editing of a target display frame in the at least one display frame; and in response to receiving a second trigger operation from the user, the second device sends the target display frame or the edited target display frame to the first device for display on the first screen of the first device.
  • the method further comprises: the second device receiving a control command input by a user; and in response to the received control command, the second device sending the control command to the first device , for controlling the display of the first screen.
  • The control command includes any one of the following: a first control command for displaying the edited target display frame on the first screen; a second control command for displaying the target display frame on the first screen; a third control command for playing the user's question about the target display frame while the target display frame is displayed on the first screen; and a fourth control command for playing the recording segment corresponding to the target display frame while the edited target display frame is displayed on the first screen.
  • the user input further indicates a user question for the target display frame
  • The method further includes: in response to receiving the second trigger operation, the second device sends the question to the first device.
  • The response includes a sound recording corresponding to the at least one display frame; the first interface includes a visual representation of the sound recording; and the user input indicates the user's selection of a recording segment in the sound recording, the recording segment corresponding to the target display frame.
  • the method further comprises: in response to receiving the second trigger operation, the second device sending the audio recording segment to the first device.
  • the method further includes: before the second device sends the request to the first device, establishing a connection between the second device and the first device for multi-screen interaction.
  • A method for multi-screen interaction includes: a first device receives a request for multi-screen interaction from a second device, the first device displaying a first content on a first screen, the first content including a plurality of display frames, and the request including a request time point; the first device sends a response to the second device according to the request, the response at least including at least one display frame of the first screen near the request time point; the first device receives a target display frame or an edited target display frame from the second device, the target display frame being selected from the at least one display frame; and the first device displays the target display frame or the edited target display frame on the first screen.
  • Displaying the target display frame or the edited target display frame on the first screen comprises: in response to receiving the target display frame or the edited target display frame, the first device displays on the first screen a prompt asking whether to allow sharing of the target display frame; the first device receives user input; and in response to the user input indicating that the user allows the sharing, the first device displays the target display frame or the edited target display frame on the first screen.
  • Displaying the target display frame or the edited target display frame on the first screen includes: the first device receives a control command from the second device, the control command being used to control the display of the first screen; and the first device displays the target display frame or the edited target display frame on the first screen according to the control command.
  • The control command includes any one of the following: a first control command for displaying the edited target display frame on the first screen; a second control command for displaying the target display frame on the first screen; a third control command for playing the user's question about the target display frame while the target display frame is displayed on the first screen; and a fourth control command for playing the recording segment corresponding to the target display frame while the edited target display frame is displayed on the first screen.
  • The method further comprises: the first device receives a question for the target display frame from the second device; and the first device plays the question while displaying the target display frame or the edited target display frame on the first screen.
  • the response includes a sound recording corresponding to the at least one display frame
  • The method further includes: the first device receives a recording segment from the second device, the recording segment being selected from the sound recording and corresponding to the target display frame; and the first device plays the recording segment while displaying the target display frame or the edited target display frame on the first screen.
  • The method further includes: the first device receives a trigger operation from a user; and in response to the received trigger operation, the first device stops displaying the target display frame or the edited target display frame on the first screen and redisplays the first content on the first screen.
  • the method further includes: before the first device receives the request from the second device, establishing a connection between the first device and the second device for multi-screen interaction.
  • A device for multi-screen interaction includes: a request sending unit configured to send a request for multi-screen interaction to a first device in response to receiving a first trigger operation from a user, where the request includes a request time point; a response receiving unit configured to receive a response from the first device, the response at least including at least one display frame of the first screen of the first device near the request time point; a screen display unit configured to display a first interface on the second screen of the second device, the first interface including the at least one display frame; a user input receiving unit configured to receive user input indicating the user's selection and/or editing of a target display frame in the at least one display frame; and a sending unit configured to, in response to receiving a second trigger operation from the user, send the target display frame or the edited target display frame to the first device for display on the first screen.
  • The apparatus further includes: a control command receiving unit configured to receive a control command input by a user; and a control command sending unit configured to, in response to the received control command, send the control command to the first device for controlling the display of the first screen.
  • The control command includes any one of the following: a first control command for displaying the edited target display frame on the first screen; a second control command for displaying the target display frame on the first screen; a third control command for playing the user's question about the target display frame while the target display frame is displayed on the first screen; and a fourth control command for playing the recording segment corresponding to the target display frame while the edited target display frame is displayed on the first screen.
  • the user input further indicates a question of the user with respect to the target display frame
  • The apparatus further includes: a question sending unit configured to, in response to receiving the second trigger operation, send the question to the first device.
  • The response includes a sound recording corresponding to the at least one display frame; the first interface includes a visual representation of the sound recording; and the user input indicates the user's selection of a recording segment in the sound recording, the recording segment corresponding to the target display frame.
  • the apparatus further includes: a recording segment sending unit, configured to send the recording segment to the first device in response to receiving the second trigger operation.
  • the apparatus further includes: a connection establishing unit configured to establish a connection for multi-screen interaction with the first device before sending the request to the first device.
  • A device for multi-screen interaction includes: a screen display unit configured to display a first content on a first screen, the first content including a plurality of display frames; a request receiving unit configured to receive a request for multi-screen interaction from a second device, the request including a request time point; a response sending unit configured to send a response to the second device according to the request, the response at least including at least one display frame of the first screen near the request time point; and a display frame receiving unit configured to receive a target display frame or an edited target display frame from the second device, the target display frame being selected from the at least one display frame.
  • the screen display unit is further configured to display the target display frame or the edited target display frame on the first screen.
  • The screen display unit includes: a first display unit configured to, in response to receiving the target display frame or the edited target display frame, display on the first screen a prompt asking whether to allow sharing of the target display frame; a user input receiving unit configured to receive user input; and a second display unit configured to, in response to the user input indicating that the user allows the sharing, display the target display frame or the edited target display frame on the first screen.
  • The screen display unit includes: a control command receiving unit configured to receive a control command from the second device, the control command being used to control the display of the first screen; and a third display unit configured to display the target display frame or the edited target display frame on the first screen according to the control command.
  • The control command includes any one of the following: a first control command for displaying the edited target display frame on the first screen; a second control command for displaying the target display frame on the first screen; a third control command for playing the user's question about the target display frame while the target display frame is displayed on the first screen; and a fourth control command for playing the recording segment corresponding to the target display frame while the edited target display frame is displayed on the first screen.
  • The apparatus further includes: a question receiving unit configured to receive a question for the target display frame from the second device; and a question playing unit configured to play the question while the target display frame or the edited target display frame is displayed on the first screen.
  • the response includes a sound recording corresponding to the at least one display frame
  • The apparatus further includes: a recording segment receiving unit configured to receive a recording segment from the second device, the recording segment being selected from the sound recording and corresponding to the target display frame; and a recording segment playing unit configured to play the recording segment while the target display frame or the edited target display frame is displayed on the first screen.
  • The apparatus further includes: an operation receiving unit configured to receive a trigger operation from a user; and a fourth display unit configured to, in response to the received trigger operation, stop displaying the target display frame or the edited target display frame on the first screen and redisplay the first content on the first screen.
  • the apparatus further includes a connection establishing unit configured to establish a connection for multi-screen interaction with the second device before receiving the request from the second device.
  • In a sixth aspect of the present disclosure, an electronic device includes: one or more processors; one or more memories; and one or more computer programs.
  • the one or more computer programs are stored in the one or more memories, the one or more computer programs including instructions.
  • When the instructions are executed by the electronic device, the electronic device is caused to perform the method of the second aspect or the third aspect.
  • a computer-readable storage medium is provided.
  • A computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, it implements the method of the second aspect or the third aspect.
  • FIG. 1A shows a block diagram of an example system according to an embodiment of the present disclosure
  • FIG. 1B shows a software system architecture diagram of an example system according to an embodiment of the present disclosure
  • FIG. 2A shows a block diagram of another example system according to an embodiment of the present disclosure
  • FIG. 2B shows a software system architecture diagram of another example system according to an embodiment of the present disclosure
  • FIG. 3 shows a schematic diagram of establishing a connection between a master device and a slave device according to an embodiment of the present disclosure
  • FIG. 4 shows a signaling interaction diagram for establishing a connection between a master device and a slave device according to an embodiment of the present disclosure
  • FIG. 5 shows a schematic diagram of triggering screen interaction between a master device and a slave device according to an embodiment of the present disclosure
  • FIG. 6 shows a signaling interaction diagram for multi-screen interaction between a master device and a slave device according to an embodiment of the present disclosure
  • FIG. 7 shows a schematic diagram of acquiring screen content related information from a screen buffer and an audio buffer at the master device according to an embodiment of the present disclosure
  • FIGS. 8A-8F illustrate schematic diagrams of example user interfaces for multi-screen interaction according to embodiments of the present disclosure
  • FIG. 9 shows a signaling interaction diagram for performing multi-screen interaction between different screens of the same device according to an embodiment of the present disclosure
  • FIG. 10 shows a flowchart of an example method for multi-screen interaction according to an embodiment of the present disclosure
  • FIG. 11 shows a flowchart of an example method for multi-screen interaction according to an embodiment of the present disclosure
  • FIG. 12 shows a block diagram of an example apparatus for multi-screen interaction according to an embodiment of the present disclosure
  • FIG. 13 shows a block diagram of an example apparatus for multi-screen interaction according to an embodiment of the present disclosure.
  • FIG. 14 illustrates a block diagram of an example device suitable for implementing embodiments of the present disclosure
  • FIG. 15 shows a block diagram of the software architecture of an example device suitable for implementing embodiments of the present disclosure.
  • In some places, a value, process, or device is referred to as "best", "lowest", "highest", "minimum", "maximum", or the like. It should be understood that such descriptions are intended to indicate that a choice may be made among the many functional alternatives that may be used, and that such choices need not be better, smaller, higher, or otherwise preferred over other choices.
  • Current screen content marking technology generally means that the second screen marks the screen content from the first screen by means of local screenshots, photographs, and the like, while synchronously displaying the screen content from the first screen.
  • When the first screen and the second screen come from a master device and a slave device, respectively, the slave device needs to synchronously display the screen content of the first screen on the master device, such as a slideshow or video.
  • If the slave device desires to mark the screen content, it can obtain the content to be marked by taking local screenshots, taking pictures, and the like, and then edit the obtained picture.
  • the slave device can only obtain screenshots, but cannot obtain other types of information, such as audio data, when the master device displays the corresponding screen content.
  • the above operation of marking the screen content is cumbersome, and the master device and the slave device cannot perform further interaction based on the marked screen content.
  • When the first screen and the second screen come from the same device, the two screens may be used to run different applications, respectively.
  • If an application running on the second screen wishes to mark the screen content of the first screen, it can obtain the content to be marked by taking local screenshots, taking pictures, and the like, and then edit the obtained picture.
  • the above screen content marking operation is cumbersome, and further interaction between different applications based on the marked screen content cannot be performed.
  • Embodiments of the present disclosure provide a solution for multi-screen interaction.
  • this solution does not require the slave device to synchronously display the screen content of the master device.
  • This solution can share various types of information, such as pictures, audio, video, etc., between different screens.
  • this scheme enables further interaction between different screens based on the marked screen content.
  • The solution can share various types of information between the different screens, and can implement further interactions between the different screens.
  • FIG. 1A shows a block diagram of an example system 100 according to an embodiment of the present disclosure.
  • system 100 includes a master device 110 and one or more slave devices 120 (only one is shown in FIG. 1 ).
  • An application 111 is run on the main device 110 , and the application 111 can be run using, for example, a screen of the main device 110 .
  • An application 121 is run on the slave device 120 , and the application 121 can be run using the screen of the slave device 120 , for example.
  • the master device 110 and the slave device 120 may be the same type of device or different types of devices.
  • The master device 110 or the slave device 120 may include, but is not limited to, non-portable devices such as personal computers, laptops, projectors, televisions, etc., as well as portable devices such as handheld terminals, smart phones, wireless data cards, tablet computers, wearable devices, etc.
  • Examples of applications 111 may include, but are not limited to, video conferencing applications, video playback applications, office applications (e.g., slideshows, Word applications, etc.), or other presentation applications.
  • the application 121 may be a multi-screen interactive application, which may interact with the application 111 on the main device 110 according to an embodiment of the present disclosure.
  • The application 111 is also referred to as a "first application", and the screen of the main device 110 is also referred to as a "first screen" or "home screen".
  • the application 121 is also referred to as a “second application” and the screen of the slave device 120 is also referred to as a “second screen” or “slave screen”.
  • FIG. 1B shows a software system architecture diagram of an example system 100 according to an embodiment of the present disclosure.
  • the software system architecture of the master device 110 can be divided into an application layer, a framework layer and a driver layer.
  • the application layer may include applications 111 .
  • the framework layer may include media services 112 for supporting the operation of applications 111 (eg, video conferencing applications, video playback applications, office applications, or other presentation applications).
  • the framework layer may further include a multi-screen interaction service 113 for supporting multi-screen interaction between the application 111 and the application 121 .
  • the driver layer may include, for example, a screen driver 114 and a graphics processing unit (GPU) driver 115 for supporting the display of the application 111 on the screen of the host device 110 .
  • the driver layer may further include, for example, a Bluetooth driver 116 , a Wifi driver 117 , a near field communication (NFC) driver (not shown), etc., for establishing a communication connection between the master device 110 and the slave device 120 .
  • the software system architecture of the slave device 120 can be divided into an application layer, a framework layer and a driver layer.
  • the application layer may include applications 121 .
  • the framework layer may also include a multi-screen interaction service 122 for supporting multi-screen interaction between the application 111 and the application 121 .
  • the driver layer may include, for example, a Bluetooth driver 123 , a Wifi driver 124 , a Near Field Communication (NFC) driver (not shown), etc., for establishing a communication connection between the master device 110 and the slave device 120 .
  • the driver layer may further include a screen driver and a GPU driver (not shown), etc., for supporting the display of the application 121 on the screen of the slave device 120 .
  • The software system architecture shown in FIG. 1B is exemplary only and is not related to the operating system of the device. That is, the software system architecture shown in FIG. 1B can be implemented on devices installed with different operating systems, including but not limited to Windows, Android, and iOS operating systems.
  • the software layering in the software system architecture described above is also exemplary and is not intended to limit the scope of the present disclosure.
  • multi-screen interaction service 113 may be integrated in application 111
  • multi-screen interaction service 122 may be integrated in application 121 .
  • FIG. 2A shows a block diagram of another example system 200 in accordance with embodiments of the present disclosure.
  • the system 200 includes a device 210, and the device 210 may include multiple screens or the screen of the device 210 may be divided into multiple areas.
  • Applications 211 and 212 run on the device 210, wherein the application 211 can run using the first screen or the first screen area of the device 210, and the application 212 can run using the second screen or the second screen area of the device 210.
  • Examples of applications 211 may include, but are not limited to, video conferencing applications, video playback applications, office applications (eg, slideshows, Word applications, etc.), or other presentation applications.
  • the application 212 may be a multi-screen interactive application, which may interact with the application 211 according to embodiments of the present disclosure.
  • the application 211 is also referred to as the "first application”
  • the screen or screen area it utilizes is also referred to as the "first screen” or "home screen”.
  • Application 212 is also referred to as a "second application,” and the screen or screen area it utilizes is also referred to as a “second screen” or “secondary screen.”
  • FIG. 2B shows a software system architecture diagram of an example system 200 according to an embodiment of the present disclosure.
  • the software system architecture of the device 210 can be divided into an application layer, a framework layer and a driver layer.
  • the application layer may include applications 211 and 212 .
  • the framework layer may include media services 213 for supporting the operation of applications 211 (eg, video conferencing applications, video playback applications, office applications, or other presentation applications).
  • the framework layer may further include a multi-screen interaction service 214 for supporting multi-screen interaction between the application 211 and the application 212 .
  • the driver layer may include, for example, a screen driver 215 and a GPU driver 216 for supporting the display of applications 211 and 212 on different screens.
  • The software system architecture shown in FIG. 2B is exemplary only and is not related to the operating system of the device. That is, the software system architecture shown in FIG. 2B can be implemented on devices installed with different operating systems, including but not limited to Windows, Android, and iOS operating systems.
  • the software layering in the software system architecture described above is also exemplary and is not intended to limit the scope of the present disclosure.
  • the multi-screen interactive service 214 may be integrated into the applications 211 and 212 .
  • Embodiments of the present disclosure are first described in detail below in conjunction with an example system 100 as shown in FIGS. 1A and 1B .
  • The connection can be established by any means such as Bluetooth, Wifi, NFC, scanning a two-dimensional code, or the like.
  • FIG. 3 shows a schematic diagram of establishing a connection between a master device and a slave device according to an embodiment of the present disclosure.
  • the master device 110 is shown as a laptop computer and the slave device 120 is shown as a mobile phone for the purpose of example.
  • When the application 111 on the master device 110 enables the multi-screen interaction function, it can display a two-dimensional code as shown in FIG. 3 to instruct the slave device to scan the two-dimensional code to establish a multi-screen interactive connection with it.
  • When activated, the multi-screen interactive application 121 on the slave device 120 may display, for example, a user interface 121-1 as shown in FIG. 3, which includes a button "Swipe to pair".
  • the slave device 120 may present a two-dimensional code scanning window to scan the two-dimensional code displayed on the master device 110 .
  • the master device 110 can establish a connection with the slave device 120 for multi-screen interaction.
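  • One plausible, purely illustrative realization of the two-dimensional code pairing is for the master device to encode its connection endpoint in the code, which the slave device parses and connects to after scanning; the URI scheme and field names below are assumptions and not part of the disclosure.

```kotlin
import java.net.Socket
import java.net.URI

// Hypothetical QR payload of the form "msi://<host>:<port>?session=<token>".
data class PairingInfo(val host: String, val port: Int, val session: String)

fun parsePairingQr(payload: String): PairingInfo {
    val uri = URI(payload)
    val token = uri.query?.substringAfter("session=") ?: ""
    return PairingInfo(uri.host, uri.port, token)
}

// After scanning, the slave device could open a plain TCP connection to the master device.
fun connectToMaster(info: PairingInfo): Socket = Socket(info.host, info.port)
```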
  • FIG. 4 shows a signaling diagram for establishing a connection between a master device and a slave device according to an embodiment of the present disclosure.
  • FIG. 4 relates to applications 111 and 121 and multi-screen interactive services 113 and 122 as shown in FIG. 1B .
  • the application 111 may send ( 401 ) a binding service request to the multi-screen interaction service 113 , so that the multi-screen interaction service 113 can provide it with the multi-screen interaction service.
  • the application 111 may display (402) a two-dimensional code on the home screen to instruct the slave device to establish a connection therewith for multi-screen interaction by scanning the two-dimensional code.
  • the application 121 may send (403) a binding service request to the multi-screen interaction service 122, so that the multi-screen interaction service 122 can provide the multi-screen interaction service for it.
  • the application 121 may present a QR code scan window on the secondary screen to scan (404) the QR code displayed on the host device 110.
  • a communication connection can be established between the master device 110 and the slave device 120 .
  • the process of establishing a communication connection as shown in steps 401 to 404 is only exemplary, and is not intended to limit the scope of the present disclosure.
  • the master device 110 and the slave device 120 may establish a communication connection in other ways. Alternatively, if a communication connection has been established between the master device 110 and the slave device 120 in some way, steps 401 to 404 may be omitted.
  • the application 121 may perform a handshake with the application 111 to establish a connection for multi-screen interaction. As shown in FIG. 4 , the application 121 may send ( 405 ) a request for establishing a multi-screen interactive connection to the multi-screen interactive service 122 . The multi-screen interaction service 122 may forward ( 406 ) the request to the multi-screen interaction service 113 , which further forwards ( 407 ) the request to the application 111 . Application 111 may generate (408) an application information packet. In some embodiments, depending on the type of application 111, the content of the generated application information packets may be different.
  • For example, when the application 111 is a video playback application, the generated application information data packet may include the source address of the video being played, the video name, the number of video frames, the playback speed, the playback time point, and the like.
  • When the application 111 is an office or presentation application, the generated application information data packet may include a file address, a file name, the currently playing page number, and the like.
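  • As a non-limiting sketch, the application information data packet could be modeled per application type roughly as follows; all field names are assumptions.

```kotlin
// Hypothetical application information packet generated by application 111 during the handshake.
sealed class AppInfoPacket

data class VideoAppInfo(
    val sourceAddress: String,        // address of the video being played
    val videoName: String,
    val frameCount: Long,
    val playbackSpeed: Float,
    val playbackPositionMillis: Long  // current playback time point
) : AppInfoPacket()

data class DocumentAppInfo(
    val fileAddress: String,          // address of the slideshow or document
    val fileName: String,
    val currentPage: Int              // currently playing page number
) : AppInfoPacket()
```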
  • the application 111 may send ( 409 ) the application information packet to the multi-screen interactive service 113 .
  • the multi-screen interactive service 113 can forward ( 410 ) the application information data packet to the multi-screen interactive service 122 , and the multi-screen interactive service 122 further forwards ( 411 ) the application information data packet to the application 121 .
  • steps 405 to 411 are only exemplary, and is not intended to limit the scope of the present disclosure.
  • steps 405-407 may be omitted. That is, the application 111 can actively generate the application information data packet and send it to the application 121 .
  • steps 408 to 411 may be omitted, that is, when the application 111 receives a request from the application 121 to establish a multi-screen interactive connection, the handshake process is completed.
  • FIG. 5 shows a schematic diagram of triggering screen interaction between a master device and a slave device according to an embodiment of the present disclosure.
  • the application 111 can run normally on the master device 110 .
  • the application 111 can use the screen of the main device 110 to play a video.
  • the application 111 may utilize the screen of the main device 110 to play the slideshow or document.
  • the multi-screen interaction service 113 may establish a screen buffer for caching the display content of the screen of the host device 110 .
  • the multi-screen interaction service 113 may add captured screen display frames to the screen buffer periodically (eg, every 100ms), and the screen display frames may be obtained by, for example, screenshots or other means.
  • a screen buffer can be implemented using a ring buffer. That is, the screen buffer is only used to buffer the most recent fixed number of display frames. When the screen buffer is full, the newly added display frame will overwrite the oldest display frame in the screen buffer.
  • The multi-screen interactive service 113 may start recording when the application 111 is launched, for example recording a speaker's narration of a slideshow or document presentation. The multi-screen interactive service 113 may establish an audio buffer for buffering the audio stream recorded during the running of the application 111.
  • the audio buffer may be implemented using a ring buffer. That is, the audio buffer is only used to buffer the most recent fixed-length recording data.
  • the newly added recording data will overwrite the oldest recording data in the audio buffer.
  • The purpose of establishing the screen buffer and the audio buffer is to deal with the communication delay between the slave device 120 and the master device 110, so that when the master device 110 receives a multi-screen interaction request from the slave device 120, it can find, according to the timestamp carried in the request, the display content and the corresponding recording segments that the slave device 120 desires to interact with.
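  • A minimal ring-buffer sketch along the lines described above is shown below; the capacity, capture period, and element types are assumptions.

```kotlin
// Minimal ring buffer: only the most recent entries are kept, and the oldest
// entry is overwritten once the buffer is full. The same structure can back
// both the screen buffer and the audio buffer.
class RingBuffer<T>(private val capacity: Int) {
    private val items = ArrayDeque<T>(capacity)

    @Synchronized
    fun add(item: T) {
        if (items.size == capacity) items.removeFirst() // drop the oldest entry
        items.addLast(item)
    }

    @Synchronized
    fun snapshot(): List<T> = items.toList()
}

// Hypothetical timestamped elements for the two buffers.
data class TimedFrame(val captureTimeMillis: Long, val png: ByteArray)
data class TimedAudio(val startTimeMillis: Long, val pcm: ByteArray)

val screenBuffer = RingBuffer<TimedFrame>(capacity = 100) // e.g. one frame captured every 100 ms
val audioBuffer = RingBuffer<TimedAudio>(capacity = 600)
```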
  • the user interface of the second application 121 on the slave device 120 is updated from the user interface 121-1 shown in FIG. 3 to the user interface 121-2 shown in FIG. 5 .
  • the button "Swipe to Pair" may be disabled or not displayed, and only the button "Interact with Home Screen” is displayed.
  • the user of the slave device 120 is, for example, in the same space (eg, office or conference room) as the master device 110 and its users.
  • When the user operating the master device 110 is using the master device to play content that the user of the slave device 120 desires to interact with, the user of the slave device 120 can watch the screen of the master device 110 and can trigger interaction with the screen of the master device 110 by clicking the button "Interact with Home Screen". It should be understood that the user of the slave device 120 can also trigger the interaction with the screen of the master device 110 in other ways, including but not limited to gestures such as double-tapping on the screen of the slave device 120, voice commands such as "take a note", or other external devices. The scope of the present disclosure is not limited in this regard.
  • FIG. 6 shows a signaling interaction diagram for multi-screen interaction between a master device and a slave device according to an embodiment of the present disclosure.
  • FIG. 6 relates to applications 111 and 121 and multi-screen interactive services 113 and 122 as shown in FIG. 1B .
  • The application 121 can send (602) to the multi-screen interaction service 122 a request for multi-screen interaction between the master device 110 and the slave device 120 (also referred to herein as a "first request"). The request may indicate a time point at which the user of the slave device 120 requests the interaction (also referred to herein as the "request time point") and/or a request type (e.g., "multi-screen interaction").
  • the multi-screen interaction service 122 may forward ( 603 ) the request to the multi-screen interaction service 113 , and the multi-screen interaction service 113 may further forward ( 604 ) the request to the application 111 .
  • Application 111 may generate (605) an updated application information packet.
  • Compared with the application information packet sent when the connection was established, the updated application information packet may include additional information related to the display content of the home screen at the requested time point.
  • The application 111 may obtain, from the screen buffer established by the multi-screen interactive service 113, at least one display frame of the home screen within a predetermined time period near the request time point, and may obtain, from the audio buffer established by the multi-screen interactive service 113, the recording data corresponding to the at least one display frame.
  • the application 111 may generate an updated application information data packet based on at least one of the acquired at least one display frame and recording data.
  • FIG. 7 shows a schematic diagram of acquiring screen content-related information from a screen buffer and an audio buffer at a host device according to an embodiment of the present disclosure.
  • FIG. 7 shows a screen buffer 730 at the main device 110, which buffers a plurality of display frames 701-710 of the main screen.
  • FIG. 7 also shows an audio buffer 760 at the host device 110 , which buffers audio data 761 recorded during the execution of the application 111 .
  • Assume that the request time point indicated by the first request received by the application 111 is T0. As shown in FIG. 7, the application 111 can obtain, from the screen buffer 730, the display frames 702-706 of the main screen within the predetermined time period T before the request time point T0, and can obtain, from the audio buffer 760, the recording data 762 corresponding to the predetermined time period T.
  • the acquired screen display frame may also be a predetermined number of display frames before and after the request time point T0, or one frame before the request time point T0.
  • the acquired audio recording data may be audio recording data corresponding to the acquired screen display frame.
  • The multi-screen interaction service 113 may determine the recording segment corresponding to a display frame based on the capture time of each display frame. In this way, the recording data corresponding to the acquired screen display frames can be determined based on the start time and end time of the acquired screen display frames.
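  • Using hypothetical timestamped buffer entries like those sketched above (re-declared here so the snippet is self-contained), the timestamp-based lookup described here might look as follows, where T0 is the request time point and T the predetermined period; this is an illustrative sketch only.

```kotlin
data class TimedFrame(val captureTimeMillis: Long, val png: ByteArray)
data class TimedAudio(val startTimeMillis: Long, val pcm: ByteArray)

// Pick the display frames captured within the period T before the request time point T0,
// together with the recording chunks whose start times fall in the same window.
fun selectAroundRequest(
    frames: List<TimedFrame>,
    audio: List<TimedAudio>,
    requestTimeMillis: Long,  // T0
    windowMillis: Long        // T
): Pair<List<TimedFrame>, List<TimedAudio>> {
    val from = requestTimeMillis - windowMillis
    val pickedFrames = frames.filter { it.captureTimeMillis in from..requestTimeMillis }
    val pickedAudio = audio.filter { it.startTimeMillis in from..requestTimeMillis }
    return pickedFrames to pickedAudio
}
```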
  • the application 111 may send ( 606 ) the updated application information packet to the multi-screen interactive service 113 as a response to the first request.
  • the multi-screen interaction service 113 may forward ( 607 ) the application information packet to the multi-screen interaction service 122 , which further forwards ( 608 ) it to the application 121 .
  • the application 121 may display on the screen of the slave device 120 at least one display frame included in the application information packet, and a visual representation of the recording data.
  • FIG. 8A shows a schematic diagram of an example user interface 121 - 3 of application 121 .
  • user interface 121 - 3 may present a visual representation of received display frames 702 - 706 and recording data 762 .
  • The user of the slave device 120 may make a selection among the display frames 702-706 or within the recording 762 to select the display frame with which interaction is desired (also referred to herein as the "target display frame" or "target display content") and its corresponding recording segment.
  • For example, assuming that the user selects the display frame 704, the application 121 can determine the recording segment corresponding to the display frame 704 in the recording data 762 and/or the time point corresponding to the display frame 704 (also referred to herein as the "target time point"). For another example, assuming that the user selects a recording segment in the recording data 762, the application 121 may determine the target display frame corresponding to the recording segment and/or the target time point corresponding to the recording segment.
  • the user interface 121-3 may also provide buttons "Select Confirm” and "Reselect”.
  • When the user clicks the button "Reselect", the application 121 can ignore the user's previous selection and re-receive the user input.
  • the button "Select Confirm” the user interface 121-3 shown in FIG. 8A may be updated to the user interface 121-4 shown in FIG. 8B.
  • the user interface 121 - 4 may present the selected target display frame 704 . Additionally or alternatively, the user interface 121-4 may also present a recording segment corresponding to the target display frame 704 (not shown in Figure 8B). In addition, the user interface 121-4 may also provide buttons "Edit", “Share” and "Ask”. When the user clicks the button "Edit”, the user can edit the target display frame 704, for example, including but not limited to operations such as cropping, modifying, and marking. When the user clicks the button "Ask”, the user may input a question for the target display frame 704 by voice or other means. For example, the application 121 may record and save a recording of the user's question.
  • The application 121 may generate, based on one or more of the determined target time point, the unedited target display content, the edited target display content, the recording segment corresponding to the target display frame, and the recorded question recording, a request to interact with the target display content (also referred to herein as a "second request") to be sent to the application 111.
  • The user of the slave device 120 may also trigger the sharing of the editing results and/or question recordings in other ways, including but not limited to gestures such as swiping up on the screen of the slave device 120, voice commands such as "Share Notes", or other external devices.
  • the scope of the present disclosure is not limited in this regard.
  • The application 121 may generate (609) a second request based on one or more of the determined target time point, the edited target display frame, the recording segment corresponding to the target display frame, and the recorded question recording, and send (610) it to the multi-screen interactive service 122.
  • the second request may only indicate a target point in time and/or a request type (eg, "share notes").
  • One or more of the edited target display frame, the audio clip corresponding to the target display frame, and the recorded question recording may be included in the updated application information packet and sent with the second request.
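  • As a non-limiting sketch, assembling and serializing such a second request on the slave device might look like the following; the field names and the hand-rolled JSON encoding are assumptions.

```kotlin
import java.util.Base64

// Hypothetical "share notes" request assembled by application 121.
data class ShareNotesRequest(
    val targetTimeMillis: Long,             // target time point
    val requestType: String = "share notes",
    val editedFrame: ByteArray? = null,     // edited target display frame
    val recordingSegment: ByteArray? = null,
    val questionRecording: ByteArray? = null
)

// Tiny illustrative encoder, just to show what could be carried over the wire.
fun encode(req: ShareNotesRequest): String {
    fun b64(bytes: ByteArray?): String =
        if (bytes == null) "null" else "\"" + Base64.getEncoder().encodeToString(bytes) + "\""
    return "{\"targetTime\":${req.targetTimeMillis}," +
        "\"type\":\"${req.requestType}\"," +
        "\"frame\":${b64(req.editedFrame)}," +
        "\"audio\":${b64(req.recordingSegment)}," +
        "\"question\":${b64(req.questionRecording)}}"
}
```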
  • the multi-screen interaction service 122 may forward ( 611 ) the second request along with the updated application information packet to the multi-screen interaction service 113 , which further forwards ( 612 ) them to the application 111 .
  • the application 111 may display a prompt regarding the second request on the screen of the main device 110 to ask the user of the main device 110 whether to allow sharing.
  • If the user of the master device 110 allows the sharing, the application 111 may send (614-616) a notification to the application 121 via the multi-screen interactive services 113 and 122 to notify the application 121 that it may send subsequent control commands to the application 111.
  • the application 121 may display a visual representation of the plurality of candidate control commands on the screen of the slave device 120 .
  • the user of the slave device 120 may select a control command from a plurality of candidate control commands.
  • application 121 may send ( 618 - 620 ) the control command to application 111 via multi-screen interactive services 122 and 113 .
  • the application 111 may perform (621) an operation related to the target display content according to the control command.
  • FIGS. 8C-8F show schematic diagrams of an example user interface 121 - 5 of the application 121 and a corresponding user interface of the host device 110 .
  • the application 121 may present the user interface 121- 5.
  • the user interface 121-5 may present an edited target display frame 704' and buttons corresponding to candidate control commands 801-804.
  • the control command 801 instructs the display of the edited target display frame 704' on the main screen.
  • Control command 802 instructs to jump back to target display frame 704 (ie, the unedited target display frame) on the home screen.
  • the control command 803 instructs to jump back to the target display frame 704 on the main screen and play the question recording for the target display frame 704 at the same time.
  • The control command 804 instructs to display the edited target display frame 704' on the main screen while playing the recording segment corresponding to the target display frame 704.
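  • A non-limiting sketch of how the application 111 might dispatch the candidate control commands 801-804 is given below; the handler parameters stand in for whatever display and playback calls the application actually uses.

```kotlin
// Hypothetical dispatch of control commands 801-804 on the master device.
fun handleControlCommand(
    command: Int,
    showEditedFrame: () -> Unit,       // display the edited target display frame
    jumpBackToTargetFrame: () -> Unit, // jump back to the unedited target display frame
    playQuestionRecording: () -> Unit, // play the recorded question for the target frame
    playRecordingSegment: () -> Unit   // play the recording segment matching the target frame
) {
    when (command) {
        801 -> showEditedFrame()
        802 -> jumpBackToTargetFrame()
        803 -> { jumpBackToTargetFrame(); playQuestionRecording() }
        804 -> { showEditedFrame(); playRecordingSegment() }
        else -> println("Unknown control command: $command")
    }
}
```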
  • As shown in FIG. 8C, when the user of the slave device 120 clicks the button corresponding to the control command 801, the application 111 can suspend its normal application operation and display the edited target display frame 704' on the screen of the master device 110.
  • As shown in FIG. 8D, when the user of the slave device 120 clicks the button corresponding to the control command 802, the application 111 may jump back to the target display frame 704 on the screen of the master device 110.
  • For example, when the application 111 is a slideshow application, it can jump back to the slideshow page corresponding to the target time point; when the application 111 is a video playback application, it can make the video playback jump back to the position corresponding to the target time point.
  • As shown in FIG. 8E, when the user of the slave device 120 clicks the button corresponding to the control command 803, the application 111 can redisplay the target display frame 704 on the screen of the master device 110 and simultaneously play the question recording for the target display frame 704.
  • As shown in FIG. 8F, when the user of the slave device 120 clicks the button corresponding to the control command 804, the application 111 can suspend its normal application running, display the edited target display frame 704' on the screen of the master device 110, and simultaneously play the recording segment corresponding to the target display frame 704.
  • the user of the main device 110 can take back control of the main screen through various means such as voice commands, shortcut keys (eg, Ctrl+Alt+M).
  • application 111 may stop ( 622 ) responding to control commands from application 121 .
  • Embodiments of the present disclosure do not require the slave devices to synchronously display the screen content of the master device.
  • Embodiments of the present disclosure can share various types of information, such as pictures, audio, etc., among multiple screens.
  • embodiments of the present disclosure enable further interaction between different screens based on edited screen content.
  • Embodiments of the present disclosure are also applicable to scenarios where multiple screens come from the same device (e.g., a folding-screen device). Such embodiments will be described in detail below in conjunction with the example system 200 shown in FIGS. 2A and 2B.
  • FIG. 9 shows a signaling diagram for multi-screen interaction between different screens of the same device according to an embodiment of the present disclosure.
  • FIG. 9 relates to the applications 211 and 212 and the multi-screen interactive service 214 as shown in FIGS. 2A and 2B .
  • the application 212 can send (902-903), via the multi-screen interaction service 214, a request for multi-screen interaction between the master and slave screens (also referred to herein as a "first request") to the application 211. The request may indicate a request time point and/or a request type (for example, "multi-screen interaction").
  • the application 211 may generate (904) an application information data packet, which may include information related to the display content of the home screen at the requested time point.
  • the application 211 may obtain, from the screen buffer maintained by the multi-screen interactive service 214, at least one display frame of the main screen within a predetermined time period near the request time point, and may obtain, from the audio buffer maintained by the multi-screen interactive service 214, the recording data corresponding to the at least one display frame.
  • the application 211 may generate the application information data packet based on the acquired at least one display frame and/or the recording data.
  • the application 211 may send (905-906) the application information data packet to the application 212 via the multi-screen interactive service 214 as a response to the first request.
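  • As a rough sketch only (the buffer layout, window length, and names below are assumptions rather than the disclosed implementation), selecting the display frames near the request time point from a screen buffer could look like this:

```kotlin
// Hypothetical ring buffer of recently displayed frames; framesNear() returns the
// frames captured within +/- windowMs of the requested time point.
data class BufferedFrame(val timestampMs: Long, val pixels: ByteArray)

class ScreenBuffer(private val maxFrames: Int = 300) {
    private val frames = ArrayDeque<BufferedFrame>()

    fun add(frame: BufferedFrame) {
        frames.addLast(frame)
        if (frames.size > maxFrames) frames.removeFirst()  // keep only recent history
    }

    fun framesNear(requestTimeMs: Long, windowMs: Long = 5_000): List<BufferedFrame> =
        frames.filter { kotlin.math.abs(it.timestampMs - requestTimeMs) <= windowMs }
}
```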
  • the application 212 may display the at least one display frame included in the application information packet, as well as a visual representation of the recording data, on the slave screen for selection by the user.
  • the user interface displayed on the slave screen may be the same as or similar to the user interfaces shown in FIGS. 8A and 8B, for example. After selecting the target display frame and its corresponding recording segment, the user can edit the target display frame, ask a question about the target display frame, and/or trigger sharing of the editing result and/or the question recording.
  • the application 212 may generate (907) a second request based on one or more of the determined target time point, the edited target display frame, the recording segment corresponding to the target display frame, and the recorded question recording.
  • the second request is sent (908-909) to the application 211 via the multi-screen interactive service 214.
  • the second request may only indicate a target time point and/or a request type (e.g., "share notes").
  • One or more of the edited target display frame, the audio clip corresponding to the target display frame, and the recorded question recording may be included in the updated application information packet and sent with the second request.
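  • Purely as an illustrative sketch (every field name here is an assumption), the second request and the updated application information packet described above might carry the following pieces of information:

```kotlin
// Hypothetical payload split: the second request itself carries only the target time
// point and request type, while the optional media travels in the updated packet.
data class SecondRequest(val targetTimeMs: Long, val requestType: String = "share notes")

data class UpdatedAppInfoPacket(
    val editedTargetFrame: ByteArray? = null,     // edited target display frame, if any
    val targetFrameAudioClip: ByteArray? = null,  // recording segment corresponding to the target frame
    val questionRecording: ByteArray? = null      // question recorded by the slave-screen user
)
```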
  • the application 211 may display a prompt about the second request on the home screen to ask the user whether to allow sharing.
  • application 211 may send (911-912) a notification to application 212 via multi-screen interactive service 214 to notify application 212 to send subsequent control commands to application 211.
  • the application 212 may display a visual representation of the plurality of candidate control commands on the slave screen.
  • the user interface displayed on the slave screen may be the same as or similar to the user interface 121-5 shown in FIGS. 8C to 8F, for example.
  • the user can select one control command from a plurality of candidate control commands.
  • application 212 may send (914-915) the control command to application 211 via multi-screen interactive service 214.
  • the application 211 can perform (916) an operation related to the target display content according to the control command.
  • application 211 may stop ( 917 ) responding to control commands from application 212 .
  • embodiments of the present disclosure can share various types of information between different screens, and can enable further interaction between different screens based on the marked screen content.
  • FIG. 10 shows a flowchart of an example method 1000 for multi-screen interaction according to an embodiment of the present disclosure.
  • the method 1000 may be performed, for example, by a first device, such as the master device 110 shown in FIGS. 1A and 1B .
  • the second device is, for example, the slave device 120 as shown in FIGS. 1A and 1B .
  • the method 1000 may also include additional acts that are not shown, and/or some of the acts shown may be omitted. The scope of the present disclosure is not limited in this regard.
  • the first device receives a request for multi-screen interaction from the second device.
  • the first device displays the first content on the first screen, the first content includes a plurality of display frames, and the request includes the request time point.
  • the first device sends a response to the second device in accordance with the request.
  • the response includes at least one display frame of the first screen near the requested time point.
  • the first device receives the target display frame or the edited target display frame from the second device.
  • the target display frame is selected from at least one display frame.
  • the first device displays the target display frame or the edited target display frame on the first screen.
  • in response to receiving the target display frame or the edited target display frame, the first device may display a prompt on the first screen asking whether to allow sharing of the target display frame.
  • the first device may receive the user input and display the target display frame or the edited target display frame on the first screen in response to the user input indicating that the user allows the sharing.
  • the first device may receive a control command from the second device, the control command being used to control the display of the first screen.
  • the first device may display the target display frame or the edited target display frame on the first screen according to the control command.
  • the control command includes any one of the following: a first control command for displaying the edited target display frame on the first screen; a second control command for displaying the target display frame on the first screen; a third control command for displaying the target display frame on the first screen while playing the user's question for the target display frame; and a fourth control command for displaying the edited target display frame on the first screen while playing the audio clip corresponding to the target display frame.
  • the first device may receive a question from the second device for the target display frame.
  • the first device may play the question while displaying the target display frame or the edited target display frame on the first screen.
  • the response may include a recording corresponding to at least one display frame.
  • the first device may receive a sound recording segment from the second device, the sound recording segment being selected from the sound recording and corresponding to the target display frame.
  • the first device may play the recording segment while displaying the target display frame or the edited target display frame on the first screen.
  • the first device may receive a user's trigger operation. In response to the received trigger operation, the first device may stop displaying the target display frame or the edited target display frame on the first screen, and redisplay the first content on the first screen.
  • before receiving the request from the second device, the first device may establish a connection with the second device for multi-screen interaction.
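  • A minimal sketch of how the first device could wire the steps of method 1000 together is shown below; the lambdas stand in for the transport, prompt, and display mechanisms, which are not specified here.

```kotlin
// Hypothetical outline of method 1000 on the first (master) device.
class FirstDeviceFlow(
    private val framesNear: (requestTimeMs: Long) -> List<ByteArray>, // e.g. looked up from a screen buffer
    private val sendResponse: (List<ByteArray>) -> Unit,              // response to the second device
    private val askUserToAllowSharing: () -> Boolean,                 // prompt on the first screen
    private val showOnFirstScreen: (ByteArray) -> Unit
) {
    // A request for multi-screen interaction arrives, carrying a request time point.
    fun onInteractionRequest(requestTimeMs: Long) = sendResponse(framesNear(requestTimeMs))

    // The (possibly edited) target display frame comes back from the second device.
    fun onTargetFrameReceived(frame: ByteArray) {
        if (askUserToAllowSharing()) showOnFirstScreen(frame)
    }
}
```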
  • FIG. 11 shows a flowchart of an example method 1100 for multi-screen interaction according to an embodiment of the present disclosure.
  • the method 1100 may be performed, for example, by a second device, such as the slave device 120 shown in FIGS. 1A and 1B .
  • the first device is, for example, the master device 110 shown in FIGS. 1A and 1B .
  • the method 1100 may also include additional actions that are not shown, and/or some of the actions shown may be omitted. The scope of the present disclosure is not limited in this regard.
  • the second device sends a request for multi-screen interaction to the first device in response to receiving the first triggering operation from the user.
  • the request includes the request time point.
  • the second device receives the response from the first device.
  • the response includes at least one display frame of the first screen of the first device near the requested time point.
  • the second device displays the first interface on the second screen.
  • the first interface includes the at least one display frame.
  • the second device receives user input.
  • the user input indicates user selection and/or editing of a target display frame of the at least one display frame.
  • in response to receiving the second trigger operation from the user, the second device transmits the target display frame or the edited target display frame to the first device for display on the first screen.
  • the second device may receive control commands entered by the user. In response to the received control command, the second device may send the control command to the first device for controlling the display of the first screen.
  • the control command includes any one of the following: a first control command for displaying the edited target display frame on the first screen; a second control command for displaying the target display frame on the first screen; a third control command for displaying the target display frame on the first screen while playing the user's question for the target display frame; and a fourth control command for displaying the edited target display frame on the first screen while playing the audio clip corresponding to the target display frame.
  • the user input also indicates a user question regarding the target display frame.
  • the second device may send the question to the first device.
  • the response includes a recording corresponding to at least one display frame.
  • the first interface includes a visual representation of the recording.
  • the user input indicates the user's selection of a recording segment in the recording, the recording segment corresponding to the target display frame.
  • in response to receiving the second trigger operation, the second device may send the audio recording segment to the first device.
  • the second device may establish a connection with the first device for multi-screen interaction.
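  • Correspondingly, a minimal sketch of the second-device side of method 1100 (again with hypothetical names and placeholder transport) might look like this:

```kotlin
// Hypothetical outline of method 1100 on the second (slave) device.
class SecondDeviceFlow(
    private val sendRequest: (requestTimeMs: Long) -> Unit,      // first trigger -> request
    private val showFirstInterface: (List<ByteArray>) -> Unit,   // show candidate frames on the second screen
    private val sendTargetFrame: (ByteArray) -> Unit             // second trigger -> share to the first screen
) {
    private var selectedFrame: ByteArray? = null

    fun onFirstTrigger(nowMs: Long) = sendRequest(nowMs)

    fun onResponse(frames: List<ByteArray>) = showFirstInterface(frames)

    fun onUserSelectsOrEdits(frame: ByteArray) { selectedFrame = frame }

    fun onSecondTrigger() { selectedFrame?.let(sendTargetFrame) }
}
```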
  • FIG. 12 shows a block diagram of an example apparatus 1200 for multi-screen interaction according to an embodiment of the present disclosure.
  • the apparatus 1200 may be used to implement the host device 110 or a portion of the host device 110 as shown in FIGS. 1A and 1B .
  • the apparatus 1200 includes: a screen display unit 1210 configured to display first content on a first screen, the first content including a plurality of display frames; a request receiving unit 1220 configured to receive a request for multi-screen interaction from the second device, the request including a request time point; a response sending unit 1230 configured to send a response to the second device according to the request, the response at least including at least one display frame of the first screen near the request time point; and a display frame receiving unit 1240 configured to receive a target display frame or an edited target display frame from the second device, the target display frame being selected from the at least one display frame; wherein the screen display unit 1210 is further configured to display the target display frame or the edited target display frame on the first screen.
  • the screen display unit 1210 includes: a first display unit configured to, in response to receiving the target display frame or the edited target display frame, display a prompt on the first screen asking whether to allow sharing of the target display frame; a user input receiving unit configured to receive user input; and a second display unit configured to display the target display frame or the edited target display frame on the first screen in response to the user input indicating that the user allows the sharing.
  • the screen display unit 1210 includes: a control command receiving unit configured to receive a control command from the second device, the control command being used to control the display of the first screen; and a third display unit configured to display the target display frame or the edited target display frame on the first screen according to the control command.
  • the control command includes any one of the following: a first control command for displaying the edited target display frame on the first screen; a second control command for displaying the target display frame on the first screen; a third control command for displaying the target display frame on the first screen while playing the user's question for the target display frame; and a fourth control command for displaying the edited target display frame on the first screen while playing the audio clip corresponding to the target display frame.
  • the apparatus 1200 further includes: a question receiving unit configured to receive a question for the target display frame from the second device; and a question playing unit configured to play the question while displaying the target display frame or the edited target display frame on the first screen.
  • the response includes a sound recording corresponding to the at least one display frame
  • the apparatus 1200 further includes: a sound recording segment receiving unit configured to receive a sound recording segment from the second device, the sound recording segment being selected from the sound recording and corresponding to a target display frame; and a recording segment playing unit configured to play the recording segment while displaying the target display frame or the edited target display frame on the first screen.
  • the apparatus 1200 further includes: an operation receiving unit configured to receive a trigger operation from the user; and a fourth display unit configured to, in response to the received trigger operation, stop displaying the target display frame or the edited target display frame on the first screen and redisplay the first content on the first screen.
  • the apparatus 1200 further includes: a connection establishing unit configured to establish a connection for multi-screen interaction with the second device before receiving the request from the second device.
  • each unit in the apparatus 1200 is configured to implement the corresponding steps of the method performed by the first device in the foregoing embodiments and has the same beneficial effects; for the sake of simplicity, specific details are not repeated here.
  • various sending units and receiving units in the apparatus 1200 can be implemented, for example, by using the wireless communication module 1460 shown in FIG. 14 below.
  • the various display units in the device 1200 may be implemented using the display screen 1494 shown in FIG. 14 below.
  • FIG. 13 shows a block diagram of an example apparatus 1300 for multi-screen interaction according to an embodiment of the present disclosure.
  • the apparatus 1300 may be used to implement the slave device 120 or a portion of the slave device 120 as shown in FIGS. 1A and 1B .
  • the apparatus 1300 includes a request sending unit 1310, configured to, in response to receiving the first trigger operation from the user, send a request for multi-screen interaction to the first device, where the request includes the request time point;
  • the response receiving unit 1320 is configured to receive a response from the first device, the response at least including at least one display frame of the first screen of the first device near the request time point;
  • the screen display unit 1330 is configured to display a first interface on the second screen of the device, the first interface including the at least one display frame;
  • the user input receiving unit 1340 is configured to receive a user input indicating the user's selection and/or editing of a target display frame in the at least one display frame; and
  • a display frame sending unit 1350 configured to, in response to receiving the second trigger operation from the user, send the target display frame or the edited target display frame to the first device for display on the first screen.
  • the apparatus 1300 further includes: a control command receiving unit configured to receive a control command input by the user; and a control command sending unit configured to, in response to the received control command, send the control command to the first device to control the display of the first screen.
  • the control command includes any one of the following: a first control command for displaying the edited target display frame on the first screen; a second control command for displaying the target display frame on the first screen; a third control command for displaying the target display frame on the first screen while playing the user's question for the target display frame; and a fourth control command for displaying the edited target display frame on the first screen while playing the audio clip corresponding to the target display frame.
  • the user input further indicates the user's question for the target display frame
  • the apparatus 1300 further includes: a question sending unit configured to send the question to the first device in response to receiving the second trigger operation.
  • the response includes a sound recording corresponding to at least one display frame; the first interface includes a visual representation of the sound recording; and the user input indicates a user selection for a sound recording segment in the sound recording, the sound recording segment corresponding to the target display frame.
  • the apparatus 1300 further includes: a recording segment sending unit, configured to send the recording segment to the first device in response to receiving the second trigger operation.
  • the apparatus 1300 further includes: a connection establishing unit, configured to establish a connection for multi-screen interaction with the first device before sending the request to the first device.
  • each unit in the apparatus 1300 is configured to implement the corresponding steps of the method performed by the second device in the foregoing embodiments and has the same beneficial effects; for the sake of simplicity, specific details are not repeated here.
  • various sending units and receiving units in the apparatus 1300 can be implemented, for example, by using the wireless communication module 1460 shown in FIG. 14 below.
  • the various display units in the device 1300 may be implemented using the display screen 1494 shown in FIG. 14 below.
  • FIG. 14 shows a schematic structural diagram of an electronic device 1400 .
  • the master device 110 and the slave device 120 shown in FIG. 1A and/or the device 210 shown in FIG. 2A may be implemented by the electronic device 1400 .
  • the electronic device 1400 may include a processor 1410, an external memory interface 1420, an internal memory 1421, a universal serial bus (USB) interface 1430, a charge management module 1440, a power management module 1441, a battery 1442, an antenna 141, an antenna 142, a mobile communication module 1450, a wireless communication module 1460, an audio module 1470, a speaker 1470A, a receiver 1470B, a microphone 1470C, a headphone jack 1470D, a sensor module 1480, buttons 1490, a motor 1491, an indicator 1492, a camera 1493, a display screen 1494, a subscriber identification module (SIM) card interface 1495, and so on.
  • the sensor module 1480 may include a pressure sensor 1480A, a gyroscope sensor 1480B, an air pressure sensor 1480C, a magnetic sensor 1480D, an acceleration sensor 1480E, a distance sensor 1480F, a proximity light sensor 1480G, a fingerprint sensor 1480H, a temperature sensor 1480J, a touch sensor 1480K, an ambient light sensor 1480L, a bone conduction sensor 1480M, and the like.
  • the structures illustrated in the embodiments of the present invention do not constitute a specific limitation on the electronic device 1400 .
  • the electronic device 1400 may include more or fewer components than shown, or some components may be combined, split, or arranged differently.
  • the illustrated components may be implemented in hardware, software, or a combination of software and hardware.
  • the processor 1410 may include one or more processing units, for example, the processor 1410 may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), controller, video codec, digital signal processor (digital signal processor, DSP), baseband processor, and/or neural-network processing unit (neural-network processing unit, NPU), etc. Wherein, different processing units may be independent devices, or may be integrated in one or more processors.
  • the controller can generate an operation control signal according to the instruction operation code and timing signal, and complete the control of fetching and executing instructions.
  • a memory may also be provided in the processor 1410 for storing instructions and data.
  • the memory in processor 1410 is cache memory. This memory may hold instructions or data that have just been used or recycled by the processor 1410 . If the processor 1410 needs to use the instruction or data again, it can be called directly from the memory. Repeated access is avoided, and the waiting time of the processor 1410 is reduced, thereby improving the efficiency of the system.
  • the processor 1410 may include one or more interfaces.
  • the interface may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, etc.
  • the I2C interface is a bidirectional synchronous serial bus that includes a serial data line (SDA) and a serial clock line (SCL).
  • the processor 1410 may contain multiple sets of I2C buses.
  • the processor 1410 can be respectively coupled to the touch sensor 1480K, the charger, the flash, the camera 1493, etc. through different I2C bus interfaces.
  • the processor 1410 can couple the touch sensor 1480K through the I2C interface, so that the processor 1410 and the touch sensor 1480K communicate with each other through the I2C bus interface, so as to realize the touch function of the electronic device 1400.
  • the I2S interface can be used for audio communication.
  • the processor 1410 may contain multiple sets of I2S buses.
  • the processor 1410 may be coupled with the audio module 1470 through an I2S bus to implement communication between the processor 1410 and the audio module 1470.
  • the audio module 1470 can transmit audio signals to the wireless communication module 1460 through the I2S interface, so as to realize the function of answering calls through a Bluetooth headset.
  • the PCM interface can also be used for audio communications, sampling, quantizing and encoding analog signals.
  • the audio module 1470 and the wireless communication module 1460 may be coupled through a PCM bus interface.
  • the audio module 1470 can also transmit audio signals to the wireless communication module 1460 through the PCM interface, so as to realize the function of answering calls through the Bluetooth headset. Both the I2S interface and the PCM interface can be used for audio communication.
  • the UART interface is a universal serial data bus used for asynchronous communication.
  • the bus may be a bidirectional communication bus. It converts the data to be transmitted between serial communication and parallel communication.
  • a UART interface is typically used to connect the processor 1410 with the wireless communication module 1460 .
  • the processor 1410 communicates with the Bluetooth module in the wireless communication module 1460 through the UART interface to implement the Bluetooth function.
  • the audio module 1470 can transmit audio signals to the wireless communication module 1460 through the UART interface, so as to realize the function of playing music through the Bluetooth headset.
  • the MIPI interface can be used to connect the processor 1410 with the display screen 1494, the camera 1493 and other peripheral devices.
  • MIPI interfaces include camera serial interface (CSI), display serial interface (DSI), etc.
  • the processor 1410 communicates with the camera 1493 through a CSI interface to implement the photographing function of the electronic device 1400 .
  • the processor 1410 communicates with the display screen 1494 through the DSI interface to implement the display function of the electronic device 1400 .
  • the GPIO interface can be configured by software.
  • the GPIO interface can be configured as a control signal or as a data signal.
  • the GPIO interface may be used to connect the processor 1410 with the camera 1493, the display screen 1494, the wireless communication module 1460, the audio module 1470, the sensor module 1480, and the like.
  • the GPIO interface can also be configured as I2C interface, I2S interface, UART interface, MIPI interface, etc.
  • the USB interface 1430 is an interface that conforms to the USB standard specification, and can specifically be a Mini USB interface, a Micro USB interface, a USB Type C interface, and the like.
  • the USB interface 1430 can be used to connect a charger to charge the electronic device 1400, and can also be used to transmit data between the electronic device 1400 and peripheral devices. It can also be used to connect headphones to play audio through the headphones.
  • the interface can also be used to connect other electronic devices, such as AR devices.
  • the interface connection relationship between the modules illustrated in the embodiment of the present invention is only a schematic illustration, and does not constitute a structural limitation of the electronic device 1400 .
  • the electronic device 1400 may also adopt an interface connection manner different from those in the foregoing embodiments, or a combination of multiple interface connection manners.
  • the charging management module 1440 is used to receive charging input from the charger.
  • the charger may be a wireless charger or a wired charger.
  • the power management module 1441 is used for connecting the battery 1442 , the charging management module 1440 and the processor 1410 .
  • the power management module 1441 receives input from the battery 1442 and/or the charging management module 1440, and supplies power to the processor 1410, the internal memory 1421, the display screen 1494, the camera 1493, and the wireless communication module 1460.
  • the wireless communication function of the electronic device 1400 can be implemented by the antenna 141, the antenna 142, the mobile communication module 1450, the wireless communication module 1460, the modem processor, the baseband processor, and the like.
  • the antenna 141 and the antenna 142 are used to transmit and receive electromagnetic wave signals.
  • Each antenna in electronic device 1400 may be used to cover a single or multiple communication frequency bands. Different antennas can also be reused to improve antenna utilization.
  • the antenna 141 can be multiplexed as a diversity antenna of the wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
  • the mobile communication module 1450 may provide a wireless communication solution including 2G/3G/4G/5G, etc. applied on the electronic device 1400 .
  • the mobile communication module 1450 may include at least one filter, switch, power amplifier, low noise amplifier (LNA), and the like.
  • the mobile communication module 1450 can receive electromagnetic waves through the antenna 141, filter, amplify, etc. the received electromagnetic waves, and transmit them to the modulation and demodulation processor for demodulation.
  • the mobile communication module 1450 can also amplify the signal modulated by the modulation and demodulation processor, and then convert it into electromagnetic waves for radiation through the antenna 141 .
  • at least part of the functional modules of the mobile communication module 1450 may be provided in the processor 1410 .
  • at least part of the functional modules of the mobile communication module 1450 may be provided in the same device as at least part of the modules of the processor 1410 .
  • the modem processor may include a modulator and a demodulator.
  • the modulator is used to modulate the low frequency baseband signal to be sent into a medium and high frequency signal.
  • the demodulator is used to demodulate the received electromagnetic wave signal into a low frequency baseband signal. Then the demodulator transmits the demodulated low-frequency baseband signal to the baseband processor for processing.
  • the low frequency baseband signal is processed by the baseband processor and passed to the application processor.
  • the application processor outputs sound signals through audio devices (not limited to the speaker 1470A, the receiver 1470B, etc.), or displays images or videos through the display screen 1494 .
  • the modem processor may be a stand-alone device.
  • the modem processor may be independent of the processor 1410, and may be provided in the same device as the mobile communication module 1450 or other functional modules.
  • the wireless communication module 1460 can provide wireless communication solutions applied on the electronic device 1400, including wireless local area network (WLAN) (such as wireless fidelity (Wi-Fi) network), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR) technology, and the like.
  • the wireless communication module 1460 may be one or more devices integrating at least one communication processing module.
  • the wireless communication module 1460 receives electromagnetic waves via the antenna 142 , frequency modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 1410 .
  • the wireless communication module 1460 can also receive the signal to be sent from the processor 1410, perform frequency modulation on it, amplify it, and convert it into electromagnetic waves for radiation through the antenna 142.
  • the wireless communication module 1460 may be used to send and receive various messages (including various requests and responses), data packets (including display frames, recorded data, and/or other data), etc. described above.
  • the wireless communication module 1460 may be used to implement various sending units and receiving units in the apparatus 1200 shown in FIG. 12 and/or the apparatus 1300 shown in FIG. 13 .
  • the antenna 141 of the electronic device 1400 is coupled with the mobile communication module 1450, and the antenna 142 is coupled with the wireless communication module 1460, so that the electronic device 1400 can communicate with the network and other devices through wireless communication technology.
  • the wireless communication technology may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), broadband Code Division Multiple Access (WCDMA), Time Division Code Division Multiple Access (TD-SCDMA), Long Term Evolution (LTE), BT, GNSS, WLAN, NFC , FM, and/or IR technology, etc.
  • the GNSS may include a global positioning system (global positioning system, GPS), a global navigation satellite system (GLONASS), a Beidou navigation satellite system (BDS), a quasi-zenith satellite system (quasi -zenith satellite system, QZSS) and/or satellite based augmentation systems (SBAS).
  • the electronic device 1400 implements a display function through a GPU, a display screen 1494, an application processor, and the like.
  • the GPU is a microprocessor for image processing, and is connected to the display screen 1494 and the application processor.
  • the GPU is used to perform mathematical and geometric calculations for graphics rendering.
  • Processor 1410 may include one or more GPUs that execute program instructions to generate or alter display information.
  • Display screen 1494 is used to display images, videos, and the like. In some embodiments, display screen 1494 may be used for the various interfaces, display frames, etc. described above. For example, the display screen 1494 may be used to implement various display units in the apparatus 1200 shown in FIG. 12 and/or the apparatus 1300 shown in FIG. 13 .
  • Display screen 1494 includes a display panel.
  • the display panel can be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), or the like.
  • the electronic device 1400 may include 1 or N display screens 1494, where N is a positive integer greater than 1.
  • the electronic device 1400 may implement a shooting function through an ISP, a camera 1493, a video codec, a GPU, a display screen 1494, an application processor, and the like.
  • the ISP is used to process the data fed back by the camera 1493. For example, when taking a photo, the shutter is opened, the light is transmitted to the camera photosensitive element through the lens, the light signal is converted into an electrical signal, and the camera photosensitive element transmits the electrical signal to the ISP for processing, and converts it into an image visible to the naked eye. ISP can also perform algorithm optimization on image noise, brightness, and skin tone. ISP can also optimize the exposure, color temperature and other parameters of the shooting scene. In some embodiments, the ISP may be located in the camera 1493.
  • the camera 1493 is used to capture still images or video.
  • the object is projected through the lens to generate an optical image onto the photosensitive element.
  • the photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor.
  • the photosensitive element converts the optical signal into an electrical signal, and then transmits the electrical signal to the ISP to convert it into a digital image signal.
  • the ISP outputs the digital image signal to the DSP for processing.
  • DSP converts digital image signals into standard RGB, YUV and other formats of image signals.
  • the electronic device 1400 may include 1 or N cameras 1493 , where N is a positive integer greater than 1.
  • the digital signal processor is used to process digital signals; in addition to digital image signals, it can also process other digital signals. For example, when the electronic device 1400 selects a frequency point, the digital signal processor is used to perform a Fourier transform on the frequency point energy, and the like.
  • Video codecs are used to compress or decompress digital video.
  • the electronic device 1400 may support one or more video codecs. In this way, the electronic device 1400 can play or record videos in various encoding formats, such as: Moving Picture Experts Group (moving picture experts group, MPEG) 1, MPEG2, MPEG3, MPEG4 and so on.
  • the NPU is a neural-network (NN) computing processor.
  • Applications such as intelligent cognition of the electronic device 1400 can be implemented through the NPU, such as image recognition, face recognition, speech recognition, text understanding, and the like.
  • the external memory interface 1420 can be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the electronic device 1400.
  • the external memory card communicates with the processor 1410 through the external memory interface 1420 to realize the data storage function, for example, to save files such as music and videos in the external memory card.
  • Internal memory 1421 may be used to store computer executable program code, which includes instructions.
  • the internal memory 1421 may include a storage program area and a storage data area.
  • the storage program area can store an operating system, an application program required for at least one function (such as a sound playback function, an image playback function, etc.), and the like.
  • the storage data area may store data (such as audio data, phone book, etc.) created during the use of the electronic device 1400 and the like.
  • the internal memory 1421 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, universal flash storage (UFS), and the like.
  • the processor 1410 executes various functional applications and data processing of the electronic device 1400 by executing instructions stored in the internal memory 1421 and/or instructions stored in a memory provided in the processor.
  • the electronic device 1400 may implement audio functions through an audio module 1470, a speaker 1470A, a receiver 1470B, a microphone 1470C, an earphone interface 1470D, and an application processor. Such as music playback, recording, etc.
  • the audio module 1470 is used for converting digital audio information into analog audio signal output, and also for converting analog audio input into digital audio signal. Audio module 1470 may also be used to encode and decode audio signals. In some embodiments, the audio module 1470 may be provided in the processor 1410 , or some functional modules of the audio module 1470 may be provided in the processor 1410 .
  • The speaker 1470A, also referred to as a "loudspeaker", is used to convert audio electrical signals into sound signals.
  • Electronic device 1400 can listen to music through speaker 1470A, or listen to hands-free calls.
  • The receiver 1470B, also referred to as an "earpiece", is used to convert audio electrical signals into sound signals.
  • the voice can be answered by placing the receiver 1470B close to the human ear.
  • The microphone 1470C, also called a "mic", is used to convert sound signals into electrical signals. When making a call or sending a voice message, the user can speak near the microphone 1470C to input the sound signal into the microphone 1470C.
  • the electronic device 1400 may be provided with at least one microphone 1470C. In other embodiments, the electronic device 1400 may be provided with two microphones 1470C, which can implement a noise reduction function in addition to collecting sound signals. In other embodiments, the electronic device 1400 may further be provided with three, four or more microphones 1470C to collect sound signals, reduce noise, identify sound sources, and implement directional recording functions.
  • the headphone jack 1470D is used to connect wired headphones.
  • the earphone interface 1470D can be a USB interface 1430, or can be a 3.5mm open mobile terminal platform (open mobile terminal platform, OMTP) standard interface, a cellular telecommunications industry association of the USA (CTIA) standard interface.
  • the pressure sensor 1480A is used to sense pressure signals, and can convert the pressure signals into electrical signals.
  • pressure sensor 1480A may be provided on display screen 1494 .
  • the capacitive pressure sensor may be comprised of at least two parallel plates of conductive material. When a force is applied to pressure sensor 1480A, the capacitance between the electrodes changes.
  • the electronic device 1400 determines the intensity of the pressure according to the change in capacitance. When a touch operation acts on the display screen 1494, the electronic device 1400 detects the intensity of the touch operation according to the pressure sensor 1480A.
  • the electronic device 1400 may also calculate the touched position according to the detection signal of the pressure sensor 1480A.
  • touch operations acting on the same touch position but with different touch operation intensities may correspond to different operation instructions. For example, when a touch operation whose intensity is less than the first pressure threshold acts on the short message application icon, the instruction for viewing the short message is executed. When a touch operation with a touch operation intensity greater than or equal to the first pressure threshold acts on the short message application icon, the instruction to create a new short message is executed.
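  • The example above maps two intensity ranges to two different instructions; a toy sketch of that mapping (the threshold value is an arbitrary assumption) is:

```kotlin
// Illustrative only: touch intensity on the short message icon selects the instruction.
fun onMessageIconTouch(pressure: Float, firstPressureThreshold: Float = 0.5f): String =
    if (pressure < firstPressureThreshold) "view short message" else "create new short message"
```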
  • the gyro sensor 1480B can be used to determine the motion attitude of the electronic device 1400 .
  • the angular velocity of electronic device 1400 about three axes may be determined by gyro sensor 1480B.
  • the gyro sensor 1480B can be used for image stabilization. Exemplarily, when the shutter is pressed, the gyro sensor 1480B detects the shaking angle of the electronic device 1400, calculates the distance that the lens module needs to compensate according to the angle, and allows the lens to counteract the shaking of the electronic device 1400 through reverse motion to achieve anti-shake.
  • the gyroscope sensor 1480B can also be used for navigation and somatosensory game scenarios.
  • Air pressure sensor 1480C is used to measure air pressure. In some embodiments, the electronic device 1400 calculates the altitude from the air pressure value measured by the air pressure sensor 1480C to assist in positioning and navigation.
  • Magnetic sensor 1480D includes a Hall sensor.
  • the electronic device 1400 can detect the opening and closing of the flip holster using the magnetic sensor 1480D.
  • the electronic device 1400 can detect the opening and closing of the flip according to the magnetic sensor 1480D. Further, according to the detected opening and closing state of the leather case or the opening and closing state of the flip cover, characteristics such as automatic unlocking of the flip cover are set.
  • the acceleration sensor 1480E can detect the magnitude of the acceleration of the electronic device 1400 in various directions (generally three axes).
  • the magnitude and direction of gravity can be detected when the electronic device 1400 is stationary. It can also be used to identify the posture of electronic devices, and can be used in applications such as horizontal and vertical screen switching, pedometers, etc.
  • The distance sensor 1480F is used to measure distance; the electronic device 1400 can measure distance through infrared or laser. In some embodiments, when shooting a scene, the electronic device 1400 can use the distance sensor 1480F to measure the distance to achieve fast focusing.
  • Proximity light sensor 1480G may include, for example, light emitting diodes (LEDs) and light detectors, such as photodiodes.
  • the light emitting diodes may be infrared light emitting diodes.
  • the electronic device 1400 emits infrared light to the outside through light emitting diodes.
  • Electronic device 1400 uses photodiodes to detect infrared reflected light from nearby objects. When sufficient reflected light is detected, it may be determined that there is an object near the electronic device 1400 . When insufficient reflected light is detected, the electronic device 1400 may determine that there is no object near the electronic device 1400 .
  • the electronic device 1400 can use the proximity light sensor 1480G to detect that the user holds the electronic device 1400 close to the ear to talk, so as to automatically turn off the screen to save power.
  • The proximity light sensor 1480G can also be used in holster mode and pocket mode to automatically unlock and lock the screen.
  • the ambient light sensor 1480L is used to sense ambient light brightness.
  • the electronic device 1400 can adaptively adjust the brightness of the display screen 1494 according to the perceived ambient light brightness.
  • the ambient light sensor 1480L can also be used to automatically adjust the white balance when taking pictures.
  • the ambient light sensor 1480L can also cooperate with the proximity light sensor 1480G to detect whether the electronic device 1400 is in the pocket to prevent accidental touch.
  • the fingerprint sensor 1480H is used to collect fingerprints.
  • the electronic device 1400 can use the collected fingerprint characteristics to realize fingerprint unlocking, accessing application locks, taking photos with fingerprints, answering incoming calls with fingerprints, and the like.
  • the temperature sensor 1480J is used to detect the temperature.
  • the electronic device 1400 utilizes the temperature detected by the temperature sensor 1480J to execute a temperature processing strategy. For example, when the temperature reported by the temperature sensor 1480J exceeds a threshold value, the electronic device 1400 performs performance reduction of the processor located near the temperature sensor 1480J in order to reduce power consumption and implement thermal protection.
  • the electronic device 1400 when the temperature is lower than another threshold, the electronic device 1400 heats the battery 1442 to avoid abnormal shutdown of the electronic device 1400 due to low temperature.
  • the electronic device 1400 boosts the output voltage of the battery 1442 to avoid abnormal shutdown caused by low temperature.
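  • As a simplified sketch of the temperature processing strategy described above (the threshold values and callbacks are assumptions, not disclosed parameters):

```kotlin
// Illustrative thermal policy: throttle when hot, protect the battery when cold.
fun applyThermalPolicy(tempC: Float, throttleProcessor: () -> Unit, heatBattery: () -> Unit, boostBatteryVoltage: () -> Unit) {
    when {
        tempC > 45f -> throttleProcessor()                       // reduce performance near the sensor
        tempC < 0f -> { heatBattery(); boostBatteryVoltage() }   // avoid abnormal low-temperature shutdown
    }
}
```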
  • The touch sensor 1480K is also called a "touch device".
  • the touch sensor 1480K can be disposed on the display screen 1494, and the touch sensor 1480K and the display screen 1494 form a touch screen, also called a "touch screen”.
  • the touch sensor 1480K is used to detect touch operations on or near it.
  • the touch sensor can pass the detected touch operation to the application processor to determine the type of touch event.
  • Visual output related to touch operations may be provided through display screen 1494 .
  • the touch sensor 1480K may also be disposed on the surface of the electronic device 1400 , which is different from the location where the display screen 1494 is located.
  • the bone conduction sensor 1480M can acquire vibration signals.
  • the bone conduction sensor 1480M can acquire the vibration signal of the vibrating bone mass of the human voice.
  • the bone conduction sensor 1480M can also contact the pulse of the human body and receive the blood pressure beating signal.
  • the bone conduction sensor 1480M can also be disposed in the earphone, combined with the bone conduction earphone.
  • the audio module 1470 can parse out the voice signal based on the vibration signal of the vocal vibration bone block obtained by the bone conduction sensor 1480M, so as to realize the voice function.
  • the application processor can analyze the heart rate information based on the blood pressure beat signal obtained by the bone conduction sensor 1480M, and realize the function of heart rate detection.
  • the keys 1490 include a power-on key, a volume key, and the like. The keys 1490 may be mechanical keys or touch keys.
  • the electronic device 1400 may receive key inputs and generate key signal inputs related to user settings and function control of the electronic device 1400 .
  • Motor 1491 can generate vibrating cues.
  • the motor 1491 can be used for vibrating alerts for incoming calls, and can also be used for touch vibration feedback.
  • touch operations acting on different applications can correspond to different vibration feedback effects.
  • the motor 1491 can also correspond to different vibration feedback effects for touch operations in different areas of the display screen 1494 .
  • Different application scenarios (for example, time reminders, receiving messages, alarm clocks, and games) may also correspond to different vibration feedback effects.
  • the touch vibration feedback effect can also support customization.
  • the indicator 1492 can be an indicator light, which can be used to indicate the charging status, the change of power, and can also be used to indicate messages, missed calls, notifications, and the like.
  • the SIM card interface 1495 is used to connect a SIM card.
  • the SIM card can be inserted into the SIM card interface 1495 or pulled out from the SIM card interface 1495 to achieve contact with and separation from the electronic device 1400 .
  • the electronic device 1400 may support 1 or N SIM card interfaces, where N is a positive integer greater than 1.
  • the SIM card interface 1495 can support Nano SIM card, Micro SIM card, SIM card and so on.
  • the same SIM card interface 1495 can insert multiple cards at the same time.
  • the types of the plurality of cards may be the same or different.
  • the SIM card interface 1495 can also be compatible with different types of SIM cards.
  • the SIM card interface 1495 is also compatible with external memory cards.
  • the electronic device 1400 interacts with the network through the SIM card to implement functions such as call and data communication.
  • the electronic device 1400 employs an eSIM, ie: an embedded SIM card.
  • the eSIM card can be embedded in the electronic device 1400 and cannot be separated from the electronic device 1400 .
  • the software system of the electronic device 1400 may adopt a layered architecture, an event-driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture.
  • the embodiment of the present invention takes the Android system with a layered architecture as an example to illustrate the software structure of the electronic device 1400 .
  • FIG. 15 is a block diagram of a software structure of an electronic device 1400 according to an embodiment of the present invention.
  • the software structure shown in Figure 15 can be used to implement the software system architecture shown in Figures 1B and 2B.
  • the layered architecture divides the software into several layers, and each layer has a clear role and division of labor. Layers communicate with each other through software interfaces.
  • the Android system is divided into four layers, which are, from top to bottom, an application layer, an application framework layer, an Android runtime (Android runtime) and a system library, and a kernel layer.
  • the application layer can include a series of application packages.
  • the application package can include applications such as camera, gallery, calendar, call, map, navigation, WLAN, Bluetooth, music, video, short message and so on.
  • the application may also include application 111 and/or application 121 as shown in FIG. 1B , or may include application 211 and/or application 212 as shown in FIG. 2B (for simplicity, in FIG. 15 , not shown).
  • the application framework layer provides an application programming interface (API) and a programming framework for the applications of the application layer.
  • the application framework layer includes some predefined functions.
  • the application framework layer may include a window manager, a content provider, a view system, a telephony manager, a resource manager, a notification manager, a media service, a multi-screen interactive service, and the like.
  • a window manager is used to manage window programs.
  • the window manager can get the size of the display screen, determine whether there is a status bar, lock the screen, take screenshots, etc.
  • Content providers are used to store and retrieve data and make these data accessible to applications.
  • the data may include video, images, audio, calls made and received, browsing history and bookmarks, phone book, etc.
  • the view system includes visual controls, such as controls for displaying text, controls for displaying pictures, and so on. View systems can be used to build applications.
  • a display interface can consist of one or more views.
  • the display interface including the short message notification icon may include a view for displaying text and a view for displaying pictures.
  • the phone manager is used to provide the communication function of the electronic device 1400 .
  • For example, the management of call status (including connecting, hanging up, and the like).
  • the resource manager provides various resources for the application, such as localization strings, icons, pictures, layout files, video files and so on.
  • the notification manager enables applications to display notification information in the status bar, which can be used to convey notification-type messages, and can disappear automatically after a brief pause without user interaction. For example, the notification manager is used to notify download completion, message reminders, etc.
  • the notification manager can also display notifications in the status bar at the top of the system in the form of graphs or scroll bar text, such as notifications of applications running in the background, and notifications on the screen in the form of dialog windows. For example, text information is prompted in the status bar, a prompt sound is issued, the electronic device vibrates, and the indicator light flashes.
  • the media service may be, for example, the media service 112 shown in FIG. 1B, which is used to support the operation of the application 111 (e.g., a video conference application, a video playback application, an office application, or another presentation application), or may be the media service 213 shown in FIG. 2B, which is used to support the operation of the application 211 (e.g., a video conference application, a video playback application, an office application, or another presentation application).
  • the multi-screen interactive service can be, for example, the multi-screen interactive service 113 shown in FIG. 1B, which is used to support the multi-screen interaction between the application 111 and the application 121, or can be the multi-screen interactive service 214 shown in FIG. It is used to support multi-screen interaction between the application 211 and the application 212 .
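  • The disclosure does not define a programming interface for the multi-screen interactive service; the Java sketch below is purely hypothetical, with all names and types invented for illustration, of the operations such a service might expose.

```java
import java.util.List;

// A purely illustrative sketch; the interface name, method names, and types
// are hypothetical and are not defined by this disclosure.
public interface MultiScreenInteractiveService {

    /** Requests display frames captured close to the given request time (in milliseconds). */
    void requestFrames(long requestTimeMillis, FrameCallback callback);

    /** Sends a (possibly edited) target display frame back for display on the first screen. */
    void sendTargetFrame(byte[] encodedFrame);

    /** Callback delivering the frames contained in the peer device's response. */
    interface FrameCallback {
        void onFramesReceived(List<byte[]> encodedFrames);
    }
}
```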
  • Android Runtime includes core libraries and a virtual machine. Android runtime is responsible for scheduling and management of the Android system.
  • the core library consists of two parts: one part is the functions that the Java language needs to call, and the other part is the core library of Android.
  • the application layer and the application framework layer run in virtual machines.
  • the virtual machine executes the Java files of the application layer and the application framework layer as binary files.
  • the virtual machine is used to perform functions such as object lifecycle management, stack management, thread management, safety and exception management, and garbage collection.
  • a system library can include multiple functional modules. For example: surface manager (surface manager), media library (Media Libraries), 3D graphics processing library (eg: OpenGL ES), 2D graphics engine (eg: SGL), etc.
  • the Surface Manager is used to manage the display subsystem and provides a fusion of 2D and 3D layers for multiple applications.
  • the media library supports playback and recording of a variety of commonly used audio and video formats, as well as still image files.
  • the media library can support a variety of audio and video encoding formats, such as: MPEG4, H.264, MP3, AAC, AMR, JPG, PNG, etc.
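  • As a hedged illustration, the Java sketch below enumerates the encoders exposed by the Android media framework using the standard MediaCodecList API (API 21+); the class name is hypothetical.

```java
import android.media.MediaCodecInfo;
import android.media.MediaCodecList;

public final class CodecLister {
    // Prints the MIME types of all encoders exposed by the media framework,
    // e.g. video/avc (H.264), audio/mp4a-latm (AAC), audio/3gpp (AMR).
    public static void printSupportedEncoders() {
        MediaCodecList list = new MediaCodecList(MediaCodecList.ALL_CODECS);
        for (MediaCodecInfo info : list.getCodecInfos()) {
            if (!info.isEncoder()) {
                continue;
            }
            for (String type : info.getSupportedTypes()) {
                System.out.println(info.getName() + " -> " + type);
            }
        }
    }
}
```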
  • the 3D graphics processing library is used to implement 3D graphics drawing, image rendering, compositing, and layer processing.
  • 2D graphics engine is a drawing engine for 2D drawing.
  • the kernel layer is the layer between hardware and software.
  • the kernel layer contains at least display drivers, camera drivers, audio drivers, and sensor drivers.
  • the kernel layer may include the display driver 114, the GPU driver 115, the Bluetooth driver 116 or 123, and the WIFI driver 117 or 124 as shown in FIG. 1B; or may include the display driver 215, GPU driver 216, Bluetooth driver 217, and WIFI driver 218 (not shown in Figure 15 for simplicity).
  • the workflow of the software and hardware of the electronic device 1400 will be exemplarily described below in conjunction with a capturing and photographing scenario.
  • when the touch sensor 1480K receives a touch operation, a corresponding hardware interrupt is sent to the kernel layer.
  • the kernel layer processes touch operations into raw input events (including touch coordinates, timestamps of touch operations, etc.).
  • Raw input events are stored at the kernel layer.
  • the application framework layer obtains the original input event from the kernel layer and identifies the control corresponding to the input event. Taking the touch operation being a tap and the corresponding control being the camera application icon as an example, the camera application calls the interface of the application framework layer to start the camera application, which in turn starts the camera driver by calling the kernel layer.
  • the camera 1493 captures still images or video.
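  • For the application-layer side of this workflow, a minimal illustrative Java sketch is shown below: it reacts to a tap on the camera icon by launching the standard image-capture intent, after which the framework and kernel layers bring up the camera driver; the class and method names are hypothetical.

```java
import android.app.Activity;
import android.content.Intent;
import android.provider.MediaStore;

public final class LauncherClickHandler {
    // Called when the camera application icon is tapped; starting the capture
    // activity ultimately causes the framework and kernel layers to start the
    // camera driver and begin capturing still images or video.
    public static void onCameraIconClick(Activity activity) {
        Intent capture = new Intent(MediaStore.ACTION_IMAGE_CAPTURE);
        if (capture.resolveActivity(activity.getPackageManager()) != null) {
            activity.startActivityForResult(capture, /* requestCode= */ 100);
        }
    }
}
```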
  • the present disclosure may be a method, apparatus, system and/or computer program product.
  • the computer program product may include a computer-readable storage medium having computer-readable program instructions loaded thereon for carrying out various aspects of the present disclosure.
  • a computer-readable storage medium may be a tangible device that can hold and store instructions for use by the instruction execution device.
  • the computer-readable storage medium may be, for example, but not limited to, an electrical storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing.
  • A non-exhaustive list of computer-readable storage media includes: a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disc (DVD), a memory stick, a floppy disk, a mechanically encoded device such as a punch card or a raised structure in a groove having instructions stored thereon, and any suitable combination of the foregoing.
  • Computer-readable storage media are not to be construed as transient signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through waveguides or other transmission media (e.g., light pulses through fiber-optic cables), or electrical signals transmitted through electrical wires.
  • the computer readable program instructions described herein may be downloaded to various computing/processing devices from a computer readable storage medium, or to an external computer or external storage device over a network such as the Internet, a local area network, a wide area network, and/or a wireless network.
  • the network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers.
  • a network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards them for storage in a computer-readable storage medium within the respective computing/processing device.
  • the computer program instructions for carrying out the operations of the present disclosure may be assembly instructions, instruction set architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or source or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk and C++, and conventional procedural programming languages such as the "C" language or similar programming languages.
  • the computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server.
  • the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (e.g., connected through the Internet using an Internet service provider).
  • custom electronic circuits, such as programmable logic circuits, field programmable gate arrays (FPGAs), or programmable logic arrays (PLAs), can be personalized by utilizing state information of the computer-readable program instructions, and such electronic circuits can execute the computer-readable program instructions to implement various aspects of the present disclosure.
  • These computer-readable program instructions may be provided to a processing unit of a general-purpose computer, a special-purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, when executed by the processing unit of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in one or more blocks of the flowchart and/or block diagrams.
  • These computer-readable program instructions may also be stored in a computer-readable storage medium; the instructions cause a computer, programmable data processing apparatus, and/or other equipment to operate in a specific manner, so that the computer-readable medium storing the instructions constitutes an article of manufacture that includes instructions implementing various aspects of the functions/acts specified in one or more blocks of the flowchart and/or block diagrams.
  • Computer-readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other equipment, so that a series of operational steps are performed on the computer, other programmable data processing apparatus, or other equipment to produce a computer-implemented process, such that the instructions executing on the computer, other programmable data processing apparatus, or other device implement the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
  • each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s).
  • the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
  • each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented in dedicated hardware-based systems that perform the specified functions or actions, or in a combination of dedicated hardware and computer instructions.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Embodiments of the present disclosure relate to a multi-screen interaction method and system, a device, and a storage medium. According to the method of the present disclosure, a second device, in response to receiving a first trigger operation from a user, sends a multi-screen interaction request to a first device, the request including a request time. The first device sends a response to the second device according to the request, the response including at least one display frame of a first screen of the first device close to the request time. The second device receives the response and displays a first interface on a second screen, the first interface including the at least one display frame. The second device receives a user input indicating the user's selection and/or editing of a target display frame among the at least one display frame. The second device, in response to receiving a second trigger operation from the user, sends the target display frame or the edited target display frame to the first device for display on the first screen. The present solution enables multiple types of interaction between different screens.
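As a minimal sketch of the message flow described in the abstract, the Java snippet below defines hypothetical request, response, and target-frame types; all class and field names are illustrative only and do not reflect any claimed message format.

```java
import java.util.List;

// Hypothetical message types sketching the interaction described in the abstract;
// names and fields are illustrative only and do not reflect any claimed format.
public final class MultiScreenMessages {

    /** Sent by the second device after the user's first trigger operation. */
    public static final class InteractionRequest {
        public final long requestTimeMillis;   // the "request time" carried by the request
        public InteractionRequest(long requestTimeMillis) {
            this.requestTimeMillis = requestTimeMillis;
        }
    }

    /** Returned by the first device: frames of its first screen close to the request time. */
    public static final class InteractionResponse {
        public final List<byte[]> displayFrames;
        public InteractionResponse(List<byte[]> displayFrames) {
            this.displayFrames = displayFrames;
        }
    }

    /** Sent back after the second trigger operation: the selected and/or edited target frame. */
    public static final class TargetFrame {
        public final byte[] encodedFrame;
        public TargetFrame(byte[] encodedFrame) {
            this.encodedFrame = encodedFrame;
        }
    }
}
```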
PCT/CN2021/125874 2020-08-24 2021-10-22 Système et procédé d'interaction multi-écrans, appareil, et support de stockage WO2022042769A2 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010857539.4 2020-08-24
CN202010857539.4A CN114185503B (zh) 2020-08-24 2020-08-24 多屏交互的系统、方法、装置和介质

Publications (2)

Publication Number Publication Date
WO2022042769A2 true WO2022042769A2 (fr) 2022-03-03
WO2022042769A3 WO2022042769A3 (fr) 2022-04-14

Family

ID=80352715

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/125874 WO2022042769A2 (fr) 2020-08-24 2021-10-22 Système et procédé d'interaction multi-écrans, appareil, et support de stockage

Country Status (2)

Country Link
CN (1) CN114185503B (fr)
WO (1) WO2022042769A2 (fr)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115826898A (zh) * 2023-01-03 2023-03-21 南京芯驰半导体科技有限公司 一种跨屏显示方法、系统、装置、设备及存储介质
EP4387199A1 (fr) * 2022-12-15 2024-06-19 Unify Patente GmbH & Co. KG Procédé de partage d'écran intelligent, application de partage d'écran et système de conférence multimédia et multipartite

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115396706B (zh) * 2022-08-30 2024-06-04 京东方科技集团股份有限公司 多屏交互方法、装置、设备、车载系统及计算机存储介质
CN117129085B (zh) * 2023-02-28 2024-05-31 荣耀终端有限公司 环境光的检测方法、电子设备及可读存储介质

Family Cites Families (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU2001227797A1 (en) * 2000-01-10 2001-07-24 Ic Tech, Inc. Method and system for interacting with a display
JP2014127103A (ja) * 2012-12-27 2014-07-07 Brother Ind Ltd 資料共有プログラム、端末装置、資料共有方法
US20150153777A1 (en) * 2013-12-03 2015-06-04 Nvidia Corporation Electronic device with both inflexible display screen and flexible display screen
CN104104992A (zh) * 2014-07-08 2014-10-15 深圳市同洲电子股份有限公司 一种多屏互动方法、装置及系统
CN104902075B (zh) * 2015-04-29 2017-02-22 努比亚技术有限公司 多屏互动方法及系统
CN105100885A (zh) * 2015-06-23 2015-11-25 深圳市美贝壳科技有限公司 一种浏览播放ppt文件多屏互动方法及其系统
US20170026617A1 (en) * 2015-07-21 2017-01-26 SYLapptech Corporation Method and apparatus for real-time video interaction by transmitting and displaying user interface correpsonding to user input
US20170031947A1 (en) * 2015-07-28 2017-02-02 Promethean Limited Systems and methods for information presentation and collaboration
CN105262974A (zh) * 2015-08-12 2016-01-20 北京恒泰实达科技股份有限公司 一种实现多人屏幕无线共享的方法
US10474412B2 (en) * 2015-10-02 2019-11-12 Polycom, Inc. Digital storyboards using multiple displays for content presentation and collaboration
CN105337998B (zh) * 2015-11-30 2019-02-01 东莞酷派软件技术有限公司 一种多屏互动的系统
CN105760126B (zh) * 2016-02-15 2019-03-26 惠州Tcl移动通信有限公司 一种多屏文件共享方法及系统
CN105812943B (zh) * 2016-03-31 2019-02-22 北京奇艺世纪科技有限公司 一种视频编辑方法及系统
KR20170117843A (ko) * 2016-04-14 2017-10-24 삼성전자주식회사 멀티 스크린 제공 방법 및 그 장치
US10587724B2 (en) * 2016-05-20 2020-03-10 Microsoft Technology Licensing, Llc Content sharing with user and recipient devices
CN106209818A (zh) * 2016-07-06 2016-12-07 上海电机学院 一种无线互动电子白板会议系统
CN108459836B (zh) * 2018-01-19 2019-05-31 广州视源电子科技股份有限公司 批注显示方法、装置、设备及存储介质
CN108509237A (zh) * 2018-01-19 2018-09-07 广州视源电子科技股份有限公司 智能交互平板的操作方法、装置以及智能交互平板
CN108958608B (zh) * 2018-07-10 2022-07-15 广州视源电子科技股份有限公司 电子白板的界面元素操作方法、装置及交互智能设备
CN110896424B (zh) * 2018-09-13 2022-03-29 中兴通讯股份有限公司 一种终端应用的交互方法及装置、终端
CN109634495A (zh) * 2018-11-01 2019-04-16 华为终端有限公司 支付方法、装置和用户设备
CN109857355A (zh) * 2018-12-25 2019-06-07 广州维纳斯家居股份有限公司 升降桌的屏幕共享方法、装置、升降桌和存储介质
CN110377250B (zh) * 2019-06-05 2021-07-16 华为技术有限公司 一种投屏场景下的触控方法及电子设备
CN110471639B (zh) * 2019-07-23 2022-10-18 华为技术有限公司 显示方法及相关装置
CN110708426A (zh) * 2019-09-30 2020-01-17 上海闻泰电子科技有限公司 双屏同步显示方法及装置、服务器及存储介质
CN110928468B (zh) * 2019-10-09 2021-06-25 广州视源电子科技股份有限公司 智能交互平板的页面显示方法、装置、设备和存储介质

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP4387199A1 (fr) * 2022-12-15 2024-06-19 Unify Patente GmbH & Co. KG Procédé de partage d'écran intelligent, application de partage d'écran et système de conférence multimédia et multipartite
CN115826898A (zh) * 2023-01-03 2023-03-21 南京芯驰半导体科技有限公司 一种跨屏显示方法、系统、装置、设备及存储介质
CN115826898B (zh) * 2023-01-03 2023-04-28 南京芯驰半导体科技有限公司 一种跨屏显示方法、系统、装置、设备及存储介质

Also Published As

Publication number Publication date
CN114185503B (zh) 2023-09-08
CN114185503A (zh) 2022-03-15
WO2022042769A3 (fr) 2022-04-14

Similar Documents

Publication Publication Date Title
JP7142783B2 (ja) 音声制御方法及び電子装置
US11922005B2 (en) Screen capture method and related device
US11669242B2 (en) Screenshot method and electronic device
JP7498779B2 (ja) 画面表示方法及び電子デバイス
US11785329B2 (en) Camera switching method for terminal, and terminal
WO2021017889A1 (fr) Procédé d'affichage d'appel vidéo appliqué à un dispositif électronique et appareil associé
JP2022549157A (ja) データ伝送方法及び関連装置
JP7355941B2 (ja) 長焦点シナリオにおける撮影方法および端末
WO2022042769A2 (fr) Système et procédé d'interaction multi-écrans, appareil, et support de stockage
CN111240547A (zh) 跨设备任务处理的交互方法、电子设备及存储介质
WO2021036770A1 (fr) Procédé de traitement d'écran partagé et dispositif terminal
WO2022068819A1 (fr) Procédé d'affichage d'interface et appareil associé
WO2022017393A1 (fr) Système d'interaction d'affichage, procédé d'affichage, et dispositif
WO2022001619A1 (fr) Procédé de capture d'écran et dispositif électronique
WO2022042326A1 (fr) Procédé de commande d'affichage et appareil associé
CN114040242A (zh) 投屏方法和电子设备
US20230353862A1 (en) Image capture method, graphic user interface, and electronic device
WO2021143391A1 (fr) Procédé de partage d'écran sur la base d'un appel vidéo et dispositif mobile
WO2022028537A1 (fr) Procédé de reconnaissance de dispositif et appareil associé
WO2024045801A1 (fr) Procédé de capture d'écran, dispositif électronique, support et produit programme
CN112068907A (zh) 一种界面显示方法和电子设备
WO2021052388A1 (fr) Procédé de communication vidéo et appareil de communication vidéo
WO2021037034A1 (fr) Procédé de commutation de l'état d'une application et dispositif terminal
WO2023045597A1 (fr) Procédé et appareil de commande de transfert entre dispositifs de service de grand écran
CN114079691A (zh) 一种设备识别方法及相关装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21860601

Country of ref document: EP

Kind code of ref document: A2

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21860601

Country of ref document: EP

Kind code of ref document: A2