WO2022042769A2 - Multi-screen interaction system and method, apparatus, and medium - Google Patents


Info

Publication number
WO2022042769A2
Authority
WO
WIPO (PCT)
Prior art keywords
screen
display frame
target display
user
response
Prior art date
Application number
PCT/CN2021/125874
Other languages
French (fr)
Chinese (zh)
Other versions
WO2022042769A3 (en)
Inventor
颜忠生 (Yan Zhongsheng)
Original Assignee
荣耀终端有限公司 (Honor Device Co., Ltd.)
Priority date
Filing date
Publication date
Application filed by 荣耀终端有限公司
Publication of WO2022042769A2 (en)
Publication of WO2022042769A3 (en)

Classifications

    • G: PHYSICS › G06: COMPUTING; CALCULATING OR COUNTING › G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/1454: Digital output to display device; involving copying of the display data of a local workstation or window to a remote workstation or window so that an actual copy of the data is displayed simultaneously on two or more displays, e.g. teledisplay
    • G06F 3/0488: Interaction techniques based on graphical user interfaces [GUI] using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F 3/14: Digital output to display device; cooperation and interconnection of the display device with other functional units
    • G06F 3/1423: Digital output to display device; controlling a plurality of local displays, e.g. CRT and flat panel display
    • G06F 9/451: Execution arrangements for user interfaces

Definitions

  • Embodiments of the present disclosure relate to the field of device interaction, and more particularly, to a system, method, apparatus, and medium for multi-screen interaction.
  • Multi-screen sharing technology generally refers to sharing screen content between a master device and a slave device via a wired connection or a wireless connection.
  • Screen content marking technology generally means that, while synchronously displaying the screen content from the first screen, the second screen marks that content by means of local screenshots, photos, and the like.
  • Current multi-screen interaction technology cannot share various types of information between different screens, nor can it support further interaction between different screens after the shared screen content has been marked.
  • Embodiments of the present disclosure provide a system, method, apparatus, device, and computer-readable storage medium for multi-screen interaction, enabling sharing of various types of information between different screens and enabling further interaction between different screens based on the marked screen content.
  • In a first aspect of the present disclosure, a system for multi-screen interaction is provided. The system includes a first device including a first screen and a second device including a second screen. The first device displays a first content on the first screen, the first content including a plurality of display frames. The second device receives a first trigger operation from a user; in response to the received first trigger operation, the second device sends a request for multi-screen interaction to the first device, the request including a request time point. The first device sends a response to the second device according to the request, the response including at least one display frame of the first screen near the request time point. The second device receives the response and displays a first interface on the second screen, the first interface including the at least one display frame. The second device receives a user input indicating the user's selection and/or editing of a target display frame among the at least one display frame. The second device then receives a second trigger operation from the user; in response to the received second trigger operation, the second device sends the target display frame or the edited target display frame to the first device for display on the first screen.
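The request/response exchange described in this aspect can be sketched in a few lines. This is an illustrative model only, not code from the publication; the message names, the frame-buffer shape, and the one-second window are all assumptions:

```python
from dataclasses import dataclass

@dataclass
class InteractionRequest:
    """Sent by the second device; carries the request time point."""
    request_time: float

@dataclass
class InteractionResponse:
    """Sent by the first device; carries display frames near the time point."""
    frames: list

def handle_request(frame_buffer, req, window=1.0):
    """Select the display frames whose timestamps fall within `window`
    seconds of the requested time point (the window size is a guess)."""
    near = [frame for ts, frame in frame_buffer
            if abs(ts - req.request_time) <= window]
    return InteractionResponse(frames=near)

# Hypothetical frame buffer kept by the first device: (timestamp, frame id).
buffer = [(10.0, "frame-A"), (10.5, "frame-B"), (12.0, "frame-C")]
resp = handle_request(buffer, InteractionRequest(request_time=10.4))
# → frames "frame-A" and "frame-B" fall within the window; "frame-C" does not
```

In a real implementation the frame buffer would be the screen buffer described with FIG. 7, and the exchange would travel over the Bluetooth/Wi-Fi connection established beforehand.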
  • In response to receiving the target display frame or the edited target display frame, the first device displays on the first screen a prompt asking whether to allow sharing of the target display frame. The first device receives another user input; and in response to the other user input indicating that the user allows the sharing, the first device displays the target display frame or the edited target display frame on the first screen. In this way, screen content sharing between different screens can be performed under user control.
  • The second device receives a control command input by the user; in response to the received control command, the second device sends the control command to the first device for controlling the display of the first screen; and the first device displays the target display frame or the edited target display frame on the first screen according to the control command. In this way, various types of interaction can be implemented between different screens based on the user's control commands.
  • The control command includes any one of the following: a first control command for displaying the edited target display frame on the first screen; a second control command for displaying the target display frame on the first screen; a third control command for playing the user's question about the target display frame while the target display frame is displayed on the first screen; and a fourth control command for playing the recording segment corresponding to the target display frame while the edited target display frame is displayed on the first screen.
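The four control commands map naturally onto an enumeration plus a dispatch table. The names below are illustrative, not taken from the publication:

```python
from enum import Enum, auto

class ControlCommand(Enum):
    """The four control commands described above (names are hypothetical)."""
    SHOW_EDITED = auto()                  # first: display the edited target display frame
    SHOW_ORIGINAL = auto()                # second: display the target display frame
    SHOW_WITH_QUESTION = auto()           # third: display frame and play the user's question
    SHOW_EDITED_WITH_RECORDING = auto()   # fourth: display edited frame and play recording segment

def actions_for(cmd: ControlCommand):
    """Map a command to (what to display, what to play) on the first screen."""
    return {
        ControlCommand.SHOW_EDITED: ("edited_frame", None),
        ControlCommand.SHOW_ORIGINAL: ("target_frame", None),
        ControlCommand.SHOW_WITH_QUESTION: ("target_frame", "question"),
        ControlCommand.SHOW_EDITED_WITH_RECORDING: ("edited_frame", "recording_segment"),
    }[cmd]
```

The first device would run such a dispatch when a control command arrives from the second device.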
  • the user input further indicates a user question for the target display frame.
  • The second device sends the question to the first device, and the first device plays the question while displaying the target display frame or the edited target display frame on the first screen. In this way, user questions regarding screen content can be shared between different devices.
  • The response includes a sound recording corresponding to the at least one display frame; the first interface includes a visual representation of the sound recording; and the user input indicates the user's selection of a recording segment in the sound recording, the recording segment corresponding to the target display frame. In this way, the user is allowed to select the screen content to be shared by selecting a segment of the recording.
  • In response to receiving the second trigger operation, the second device sends the recording segment to the first device, and the first device plays the recording segment while displaying the target display frame or the edited target display frame on the first screen. In this way, audio clips corresponding to screen content can be shared between different devices.
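Pairing a recording segment with its target display frame amounts to an interval lookup by timestamp. A minimal sketch, assuming segments are stored as (start, end, clip id) triples, which is not specified by the publication:

```python
def segment_for_frame(frame_ts, segments):
    """Return the recording clip whose [start, end) interval covers the
    frame's timestamp, or None if no segment matches."""
    for start, end, clip in segments:
        if start <= frame_ts < end:
            return clip
    return None

# Hypothetical recording split into two segments by time.
segments = [(0.0, 5.0, "clip-1"), (5.0, 9.0, "clip-2")]
```

The first device can then play the matched clip while the corresponding frame is shown.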
  • The first device receives a third trigger operation from the user; and in response to the received third trigger operation, the first device stops displaying the target display frame or the edited target display frame on the first screen and redisplays the first content on the first screen. In this way, the user is allowed to take back control of the screen and terminate the multi-screen interaction.
  • Before the second device sends the request to the first device, the first device and the second device establish a connection for multi-screen interaction. In this way, a connection for multi-screen interaction can be established between different devices.
  • In a second aspect of the present disclosure, a method for multi-screen interaction is provided. The method includes: in response to receiving a first trigger operation from a user, a second device sending a request for multi-screen interaction to a first device, the request including a request time point; the second device receiving a response from the first device, the response including at least one display frame of the first screen of the first device near the request time point; the second device displaying a first interface on the second screen of the second device, the first interface including the at least one display frame; the second device receiving a user input indicating the user's selection and/or editing of a target display frame among the at least one display frame; and in response to receiving a second trigger operation from the user, the second device sending the target display frame or the edited target display frame to the first device for display on the first screen.
  • The method further comprises: the second device receiving a control command input by a user; and in response to the received control command, the second device sending the control command to the first device for controlling the display of the first screen.
  • The control command includes any one of the following: a first control command for displaying the edited target display frame on the first screen; a second control command for displaying the target display frame on the first screen; a third control command for playing the user's question about the target display frame while the target display frame is displayed on the first screen; and a fourth control command for playing the recording segment corresponding to the target display frame while the edited target display frame is displayed on the first screen.
  • The user input further indicates a user question for the target display frame.
  • The method further includes: in response to receiving the second trigger operation, the second device sending the question to the first device.
  • The response includes a sound recording corresponding to the at least one display frame; the first interface includes a visual representation of the sound recording; and the user input indicates the user's selection of a recording segment in the sound recording, the recording segment corresponding to the target display frame.
  • the method further comprises: in response to receiving the second trigger operation, the second device sending the audio recording segment to the first device.
  • the method further includes: before the second device sends the request to the first device, establishing a connection between the second device and the first device for multi-screen interaction.
  • In a third aspect of the present disclosure, a method for multi-screen interaction is provided. The method includes: a first device receiving a request for multi-screen interaction from a second device, where the first device displays a first content on a first screen, the first content includes a plurality of display frames, and the request includes a request time point; the first device sending a response to the second device according to the request, the response including at least one display frame of the first screen near the request time point; the first device receiving a target display frame or an edited target display frame from the second device, the target display frame being selected from the at least one display frame; and the first device displaying the target display frame or the edited target display frame on the first screen.
  • Displaying the target display frame or the edited target display frame on the first screen comprises: in response to receiving the target display frame or the edited target display frame, the first device displaying on the first screen a prompt asking whether to allow sharing of the target display frame; the first device receiving a user input; and in response to the user input indicating that the user allows the sharing, the first device displaying the target display frame or the edited target display frame on the first screen.
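The consent step above is a simple gate in front of the display call. A minimal sketch, with `ask_user` standing in for the on-screen prompt and the user's input (both names are hypothetical):

```python
def display_shared_frame(first_screen, frame, ask_user):
    """Prompt before displaying a shared frame; display only on consent."""
    allowed = ask_user(f"Allow sharing of {frame}?")
    if allowed:
        first_screen.append(frame)  # stand-in for rendering on the first screen
    return allowed

screen = []
accepted = display_shared_frame(screen, "edited-frame", lambda prompt: True)
rejected = display_shared_frame(screen, "other-frame", lambda prompt: False)
# only the accepted frame reaches the screen
```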
  • Displaying the target display frame or the edited target display frame on the first screen includes: the first device receiving a control command from the second device, the control command being used for controlling the display of the first screen; and the first device displaying the target display frame or the edited target display frame on the first screen according to the control command.
  • The control command includes any one of the following: a first control command for displaying the edited target display frame on the first screen; a second control command for displaying the target display frame on the first screen; a third control command for playing the user's question about the target display frame while the target display frame is displayed on the first screen; and a fourth control command for playing the recording segment corresponding to the target display frame while the edited target display frame is displayed on the first screen.
  • The method further comprises: the first device receiving a question for the target display frame from the second device; and the first device playing the question while displaying the target display frame or the edited target display frame on the first screen.
  • The response includes a sound recording corresponding to the at least one display frame.
  • The method further includes: the first device receiving a recording segment from the second device, the recording segment being selected from the sound recording and corresponding to the target display frame; and the first device playing the recording segment while displaying the target display frame or the edited target display frame on the first screen.
  • The method further includes: the first device receiving a trigger operation from a user; and in response to the received trigger operation, the first device stopping displaying the target display frame or the edited target display frame on the first screen and redisplaying the first content on the first screen.
  • the method further includes: before the first device receives the request from the second device, establishing a connection between the first device and the second device for multi-screen interaction.
  • In a fourth aspect of the present disclosure, an apparatus for multi-screen interaction is provided. The apparatus includes: a request sending unit configured to send a request for multi-screen interaction to a first device in response to receiving a first trigger operation from a user, the request including a request time point; a response receiving unit configured to receive a response from the first device, the response including at least one display frame of the first screen of the first device near the request time point; a screen display unit configured to display a first interface on the second screen of the second device, the first interface including the at least one display frame; a user input receiving unit configured to receive a user input indicating the user's selection and/or editing of a target display frame among the at least one display frame; and a sending unit configured to, in response to receiving a second trigger operation from the user, send the target display frame or the edited target display frame to the first device for display on the first screen.
  • The apparatus further includes: a control command receiving unit configured to receive a control command input by a user; and a control command sending unit configured to, in response to the received control command, send the control command to the first device for controlling the display of the first screen.
  • The control command includes any one of the following: a first control command for displaying the edited target display frame on the first screen; a second control command for displaying the target display frame on the first screen; a third control command for playing the user's question about the target display frame while the target display frame is displayed on the first screen; and a fourth control command for playing the recording segment corresponding to the target display frame while the edited target display frame is displayed on the first screen.
  • The user input further indicates a question of the user with respect to the target display frame.
  • The apparatus further includes: a question sending unit configured to, in response to receiving the second trigger operation, send the question to the first device.
  • The response includes a sound recording corresponding to the at least one display frame; the first interface includes a visual representation of the sound recording; and the user input indicates the user's selection of a recording segment in the sound recording, the recording segment corresponding to the target display frame.
  • the apparatus further includes: a recording segment sending unit, configured to send the recording segment to the first device in response to receiving the second trigger operation.
  • the apparatus further includes: a connection establishing unit configured to establish a connection for multi-screen interaction with the first device before sending the request to the first device.
  • In a fifth aspect of the present disclosure, an apparatus for multi-screen interaction is provided. The apparatus includes: a screen display unit configured to display a first content on a first screen, the first content including a plurality of display frames; a request receiving unit configured to receive a request for multi-screen interaction from a second device, the request including a request time point; a response sending unit configured to send a response to the second device according to the request, the response including at least one display frame of the first screen near the request time point; and a display frame receiving unit configured to receive a target display frame or an edited target display frame from the second device, the target display frame being selected from the at least one display frame. The screen display unit is further configured to display the target display frame or the edited target display frame on the first screen.
  • The screen display unit includes: a first display unit configured to, in response to receiving the target display frame or the edited target display frame, display on the first screen a prompt asking whether to allow sharing of the target display frame; a user input receiving unit configured to receive a user input; and a second display unit configured to, in response to the user input indicating that the user allows the sharing, display the target display frame or the edited target display frame on the first screen.
  • The screen display unit includes: a control command receiving unit configured to receive a control command from the second device, the control command being used to control the display of the first screen; and a third display unit configured to display the target display frame or the edited target display frame on the first screen according to the control command.
  • The control command includes any one of the following: a first control command for displaying the edited target display frame on the first screen; a second control command for displaying the target display frame on the first screen; a third control command for playing the user's question about the target display frame while the target display frame is displayed on the first screen; and a fourth control command for playing the recording segment corresponding to the target display frame while the edited target display frame is displayed on the first screen.
  • The apparatus further includes: a question receiving unit configured to receive a question for the target display frame from the second device; and a question playing unit configured to play the question while the target display frame or the edited target display frame is displayed on the first screen.
  • The response includes a sound recording corresponding to the at least one display frame.
  • The apparatus further includes: a recording segment receiving unit configured to receive a recording segment from the second device, the recording segment being selected from the sound recording and corresponding to the target display frame; and a recording segment playing unit configured to play the recording segment while the target display frame or the edited target display frame is displayed on the first screen.
  • The apparatus further includes: an operation receiving unit configured to receive a trigger operation from a user; and a fourth display unit configured to, in response to the received trigger operation, stop displaying the target display frame or the edited target display frame on the first screen and redisplay the first content on the first screen.
  • the apparatus further includes a connection establishing unit configured to establish a connection for multi-screen interaction with the second device before receiving the request from the second device.
  • In a sixth aspect of the present disclosure, an electronic device is provided. The electronic device includes: one or more processors; one or more memories; and one or more computer programs.
  • the one or more computer programs are stored in the one or more memories, the one or more computer programs including instructions.
  • When the instructions are executed by the one or more processors, the electronic device is caused to perform the method of the second aspect or the third aspect.
  • In a seventh aspect of the present disclosure, a computer-readable storage medium is provided.
  • A computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the method of the second aspect or the third aspect is implemented.
  • FIG. 1A shows a block diagram of an example system according to an embodiment of the present disclosure
  • FIG. 1B shows a software system architecture diagram of an example system according to an embodiment of the present disclosure
  • FIG. 2A shows a block diagram of another example system according to an embodiment of the present disclosure
  • FIG. 2B shows a software system architecture diagram of another example system according to an embodiment of the present disclosure
  • FIG. 3 shows a schematic diagram of establishing a connection between a master device and a slave device according to an embodiment of the present disclosure
  • FIG. 4 shows a signaling interaction diagram for establishing a connection between a master device and a slave device according to an embodiment of the present disclosure
  • FIG. 5 shows a schematic diagram of triggering screen interaction between a master device and a slave device according to an embodiment of the present disclosure
  • FIG. 6 shows a signaling interaction diagram for multi-screen interaction between a master device and a slave device according to an embodiment of the present disclosure
  • FIG. 7 shows a schematic diagram of acquiring screen content related information from a screen buffer and an audio buffer at the master device according to an embodiment of the present disclosure
  • FIGS. 8A-8F illustrate schematic diagrams of example user interfaces for multi-screen interaction according to embodiments of the present disclosure
  • FIG. 9 shows a signaling interaction diagram for performing multi-screen interaction between different screens of the same device according to an embodiment of the present disclosure
  • FIG. 10 shows a flowchart of an example method for multi-screen interaction according to an embodiment of the present disclosure
  • FIG. 11 shows a flowchart of an example method for multi-screen interaction according to an embodiment of the present disclosure
  • FIG. 12 shows a block diagram of an example apparatus for multi-screen interaction according to an embodiment of the present disclosure
  • FIG. 13 shows a block diagram of an example apparatus for multi-screen interaction according to an embodiment of the present disclosure.
  • FIG. 14 illustrates a block diagram of an example device suitable for implementing embodiments of the present disclosure
  • FIG. 15 shows a block diagram of the software architecture of an example device suitable for implementing embodiments of the present disclosure.
  • Herein, a value, process, or device may be referred to as "best", "lowest", "highest", "minimum", "maximum", or the like. It should be understood that such descriptions are intended to indicate that a choice may be made among many functional alternatives, and that such choices need not be better, smaller, higher, or otherwise preferred over other choices.
  • The current screen content marking technology generally means that, while synchronously displaying the screen content from the first screen, the second screen marks that content by means of local screenshots, photos, and the like.
  • When the first screen and the second screen come from the master device and the slave device, respectively, the slave device needs to synchronously display the screen content of the first screen of the master device, such as a slideshow or video.
  • When the slave device desires to mark the screen content, it can obtain the screen content to be marked by taking local screenshots, taking pictures, and so on, and then edit the obtained picture.
  • In this case, the slave device can only obtain screenshots; it cannot obtain other types of information, such as the audio data played while the master device displays the corresponding screen content.
  • the above operation of marking the screen content is cumbersome, and the master device and the slave device cannot perform further interaction based on the marked screen content.
  • When the two screens belong to the same device, the two screens may be used to run different applications, respectively.
  • When an application running on the second screen wishes to mark the screen content of the first screen, it can obtain the screen content to be marked by taking local screenshots, taking pictures, and so on, and then edit the obtained picture.
  • the above screen content marking operation is cumbersome, and further interaction between different applications based on the marked screen content cannot be performed.
  • Embodiments of the present disclosure provide a solution for multi-screen interaction.
  • this solution does not require the slave device to synchronously display the screen content of the master device.
  • This solution can share various types of information, such as pictures, audio, video, etc., between different screens.
  • this scheme enables further interaction between different screens based on the marked screen content.
  • The solution can share various types of information between the different screens, and can implement further interaction between the different screens.
  • FIG. 1A shows a block diagram of an example system 100 according to an embodiment of the present disclosure.
  • The system 100 includes a master device 110 and one or more slave devices 120 (only one is shown in FIG. 1A).
  • An application 111 is run on the master device 110, and the application 111 can be run using, for example, the screen of the master device 110.
  • An application 121 is run on the slave device 120 , and the application 121 can be run using the screen of the slave device 120 , for example.
  • the master device 110 and the slave device 120 may be the same type of device or different types of devices.
  • The master device 110 or the slave device 120 may include, but is not limited to, non-portable devices such as personal computers, laptops, projectors, and televisions, as well as portable devices such as handheld terminals, smart phones, wireless data cards, tablet computers, and wearable devices.
  • Examples of the application 111 may include, but are not limited to, video conferencing applications, video playback applications, office applications (e.g., slideshow applications, Word applications, etc.), or other presentation applications.
  • The application 121 may be a multi-screen interaction application, which may interact with the application 111 on the master device 110 according to an embodiment of the present disclosure.
  • Herein, the application 111 is also referred to as a "first application", and the screen of the master device 110 is also referred to as a "first screen" or "home screen".
  • the application 121 is also referred to as a “second application” and the screen of the slave device 120 is also referred to as a “second screen” or “slave screen”.
  • FIG. 1B shows a software system architecture diagram of an example system 100 according to an embodiment of the present disclosure.
  • the software system architecture of the master device 110 can be divided into an application layer, a framework layer and a driver layer.
  • the application layer may include applications 111 .
  • the framework layer may include media services 112 for supporting the operation of applications 111 (eg, video conferencing applications, video playback applications, office applications, or other presentation applications).
  • the framework layer may further include a multi-screen interaction service 113 for supporting multi-screen interaction between the application 111 and the application 121 .
  • the driver layer may include, for example, a screen driver 114 and a graphics processing unit (GPU) driver 115 for supporting the display of the application 111 on the screen of the host device 110 .
  • the driver layer may further include, for example, a Bluetooth driver 116 , a Wifi driver 117 , a near field communication (NFC) driver (not shown), etc., for establishing a communication connection between the master device 110 and the slave device 120 .
  • the software system architecture of the slave device 120 can be divided into an application layer, a framework layer and a driver layer.
  • the application layer may include applications 121 .
  • the framework layer may also include a multi-screen interaction service 122 for supporting multi-screen interaction between the application 111 and the application 121 .
  • the driver layer may include, for example, a Bluetooth driver 123 , a Wifi driver 124 , a Near Field Communication (NFC) driver (not shown), etc., for establishing a communication connection between the master device 110 and the slave device 120 .
  • the driver layer may further include a screen driver and a GPU driver (not shown), etc., for supporting the display of the application 121 on the screen of the slave device 120 .
  • the software system architecture shown in FIG. 1B is exemplary only and is independent of the device's operating system. That is, the software system architecture shown in FIG. 1B can be implemented on devices running different operating systems, including but not limited to Windows, Android, and iOS.
  • the software layering in the software system architecture described above is also exemplary and is not intended to limit the scope of the present disclosure.
  • multi-screen interaction service 113 may be integrated in application 111
  • multi-screen interaction service 122 may be integrated in application 121 .
  • FIG. 2A shows a block diagram of another example system 200 in accordance with embodiments of the present disclosure.
  • the system 200 includes a device 210, and the device 210 may include multiple screens or the screen of the device 210 may be divided into multiple areas.
  • Applications 211 and 212 run on the device 210 , where the application 211 can use the first screen or the first screen area of the device 210 , and the application 212 can use the second screen or the second screen area of the device 210 .
  • Examples of applications 211 may include, but are not limited to, video conferencing applications, video playback applications, office applications (eg, slideshows, Word applications, etc.), or other presentation applications.
  • the application 212 may be a multi-screen interactive application, which may interact with the application 211 according to embodiments of the present disclosure.
  • the application 211 is also referred to as the "first application", and the screen or screen area it utilizes is also referred to as the "first screen" or "home screen".
  • Application 212 is also referred to as a "second application,” and the screen or screen area it utilizes is also referred to as a “second screen” or “secondary screen.”
  • FIG. 2B shows a software system architecture diagram of an example system 200 according to an embodiment of the present disclosure.
  • the software system architecture of the device 210 can be divided into an application layer, a framework layer and a driver layer.
  • the application layer may include applications 211 and 212 .
  • the framework layer may include media services 213 for supporting the operation of applications 211 (eg, video conferencing applications, video playback applications, office applications, or other presentation applications).
  • the framework layer may further include a multi-screen interaction service 214 for supporting multi-screen interaction between the application 211 and the application 212 .
  • the driver layer may include, for example, a screen driver 215 and a GPU driver 216 for supporting the display of applications 211 and 212 on different screens.
  • the software system architecture shown in FIG. 2B is exemplary only and is independent of the device's operating system. That is, the software system architecture shown in FIG. 2B can be implemented on devices running different operating systems, including but not limited to Windows, Android, and iOS.
  • the software layering in the software system architecture described above is also exemplary and is not intended to limit the scope of the present disclosure.
  • the multi-screen interactive service 214 may be integrated into the applications 211 and 212 .
  • Embodiments of the present disclosure are first described in detail below in conjunction with an example system 100 as shown in FIGS. 1A and 1B .
  • connection can be established by any means such as Bluetooth, Wifi, NFC, scanning a two-dimensional code, or the like.
  • FIG. 3 shows a schematic diagram of establishing a connection between a master device and a slave device according to an embodiment of the present disclosure.
  • the master device 110 is shown as a laptop computer and the slave device 120 is shown as a mobile phone for the purpose of example.
  • when the application 111 on the master device 110 enables the multi-screen interaction function, it can display a two-dimensional code as shown in FIG. 3 to instruct the slave device to scan the two-dimensional code and establish a connection with it for multi-screen interaction.
  • when activated, the multi-screen interactive application 121 on the slave device 120 may display, for example, a user interface 121-1 as shown in FIG. 3, which includes a button "Swipe to Pair".
  • the slave device 120 may present a two-dimensional code scanning window to scan the two-dimensional code displayed on the master device 110 .
  • the master device 110 can establish a connection with the slave device 120 for multi-screen interaction.
  • FIG. 4 shows a signaling diagram for establishing a connection between a master device and a slave device according to an embodiment of the present disclosure.
  • FIG. 4 relates to applications 111 and 121 and multi-screen interactive services 113 and 122 as shown in FIG. 1B .
  • the application 111 may send ( 401 ) a binding service request to the multi-screen interaction service 113 , so that the multi-screen interaction service 113 can provide it with the multi-screen interaction service.
  • the application 111 may display (402) a two-dimensional code on the home screen to instruct the slave device to establish a connection therewith for multi-screen interaction by scanning the two-dimensional code.
  • the application 121 may send (403) a binding service request to the multi-screen interaction service 122, so that the multi-screen interaction service 122 can provide the multi-screen interaction service for it.
  • the application 121 may present a QR code scan window on the secondary screen to scan (404) the QR code displayed on the host device 110.
  • a communication connection can be established between the master device 110 and the slave device 120 .
  • the process of establishing a communication connection as shown in steps 401 to 404 is only exemplary, and is not intended to limit the scope of the present disclosure.
  • the master device 110 and the slave device 120 may establish a communication connection in other ways. Alternatively, if a communication connection has been established between the master device 110 and the slave device 120 in some way, steps 401 to 404 may be omitted.
  • the application 121 may perform a handshake with the application 111 to establish a connection for multi-screen interaction. As shown in FIG. 4 , the application 121 may send ( 405 ) a request for establishing a multi-screen interactive connection to the multi-screen interactive service 122 . The multi-screen interaction service 122 may forward ( 406 ) the request to the multi-screen interaction service 113 , which further forwards ( 407 ) the request to the application 111 . Application 111 may generate (408) an application information packet. In some embodiments, depending on the type of application 111, the content of the generated application information packets may be different.
  • for a video playback application, the generated application information data packet may include the source address of the played video, the video name, the number of video frames, the playback speed, the playback time point, and the like.
  • for a slideshow or document application, the generated application information data packet may include a file address, a file name, the currently playing page number, and the like.
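For illustration only, the two packet layouts described above might be modeled as simple data classes. The field names and the JSON encoding below are assumptions for this sketch, not a format defined by the disclosure:

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class VideoAppInfoPacket:
    # Illustrative fields for a video playback application (names assumed)
    video_source_address: str
    video_name: str
    frame_count: int
    playback_speed: float
    playback_time_point: float  # seconds into the video

@dataclass
class DocumentAppInfoPacket:
    # Illustrative fields for a slideshow/document application (names assumed)
    file_address: str
    file_name: str
    current_page: int

def serialize(packet) -> bytes:
    """Encode a packet as JSON bytes for transport over the interaction service."""
    return json.dumps(asdict(packet)).encode("utf-8")

packet = VideoAppInfoPacket("rtsp://host/video", "demo.mp4", 7200, 1.0, 95.5)
data = serialize(packet)
```

The slave-side application would decode the same JSON to learn what the home screen is currently presenting.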
  • the application 111 may send ( 409 ) the application information packet to the multi-screen interactive service 113 .
  • the multi-screen interactive service 113 can forward ( 410 ) the application information data packet to the multi-screen interactive service 122 , and the multi-screen interactive service 122 further forwards ( 411 ) the application information data packet to the application 121 .
  • steps 405 to 411 are only exemplary, and is not intended to limit the scope of the present disclosure.
  • steps 405-407 may be omitted. That is, the application 111 can actively generate the application information data packet and send it to the application 121 .
  • steps 408 to 411 may be omitted, that is, when the application 111 receives a request from the application 121 to establish a multi-screen interactive connection, the handshake process is completed.
  • FIG. 5 shows a schematic diagram of triggering screen interaction between a master device and a slave device according to an embodiment of the present disclosure.
  • the application 111 can run normally on the master device 110 .
  • the application 111 can use the screen of the main device 110 to play a video.
  • the application 111 may utilize the screen of the main device 110 to play the slideshow or document.
  • the multi-screen interaction service 113 may establish a screen buffer for caching the display content of the screen of the host device 110 .
  • the multi-screen interaction service 113 may add captured screen display frames to the screen buffer periodically (eg, every 100ms), and the screen display frames may be obtained by, for example, screenshots or other means.
  • a screen buffer can be implemented using a ring buffer. That is, the screen buffer is only used to buffer the most recent fixed number of display frames. When the screen buffer is full, the newly added display frame will overwrite the oldest display frame in the screen buffer.
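The overwrite-oldest behavior described above is exactly what a bounded deque provides. A minimal sketch, where the capacity, the 100 ms capture interval, and the timestamp-window lookup are illustrative assumptions:

```python
from collections import deque

class ScreenBuffer:
    """Ring buffer holding the most recent `capacity` display frames.

    When full, appending a new frame silently drops the oldest one,
    matching the overwrite behavior of the screen buffer described above.
    """
    def __init__(self, capacity: int):
        self._frames = deque(maxlen=capacity)

    def add_frame(self, timestamp_ms: int, frame_data: bytes) -> None:
        # Called periodically (e.g. every 100 ms) with a captured screenshot
        self._frames.append((timestamp_ms, frame_data))

    def frames_in_window(self, start_ms: int, end_ms: int):
        """Return the buffered frames captured within [start_ms, end_ms]."""
        return [(t, f) for t, f in self._frames if start_ms <= t <= end_ms]

buf = ScreenBuffer(capacity=5)
for i in range(8):  # 8 captures, 100 ms apart; only the last 5 survive
    buf.add_frame(i * 100, b"frame%d" % i)
```

The same structure works for the audio buffer, with fixed-length chunks of recording data in place of display frames.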
  • the multi-screen interaction service 113 may start recording when the application 111 is launched, for example, recording a speaker's narration of a slideshow or document. The multi-screen interaction service 113 may establish an audio buffer for buffering the audio stream recorded while the application 111 is running.
  • the audio buffer may be implemented using a ring buffer. That is, the audio buffer is only used to buffer the most recent fixed-length recording data.
  • the newly added recording data will overwrite the oldest recording data in the audio buffer.
  • the purpose of establishing the screen buffer and the audio buffer is to compensate for the communication delay between the slave device 120 and the master device 110, so that when the master device 110 receives a multi-screen interaction request from the slave device 120, it can use the timestamp carried in the request to find the display content that the slave device 120 desires to interact with and the corresponding recording segment.
  • the user interface of the second application 121 on the slave device 120 is updated from the user interface 121-1 shown in FIG. 3 to the user interface 121-2 shown in FIG. 5 .
  • the button "Swipe to Pair" may be disabled or not displayed, and only the button "Interact with Home Screen” is displayed.
  • the user of the slave device 120 is, for example, in the same space (eg, office or conference room) as the master device 110 and its users.
  • when the user operating the master device 110 is using the master device to play content that the user of the slave device 120 desires to interact with, the user of the slave device 120 can watch the screen of the master device 110 and can trigger interaction with it by clicking the button "Interact with Home Screen". It should be understood that the user of the slave device 120 can also trigger the interaction with the screen of the master device 110 in other ways, including but not limited to gestures such as double-tapping the screen of the slave device 120, voice commands such as "take a note", or other external devices. The scope of the present disclosure is not limited in this regard.
  • FIG. 6 shows a signaling interaction diagram for multi-screen interaction between a master device and a slave device according to an embodiment of the present disclosure.
  • FIG. 6 relates to applications 111 and 121 and multi-screen interactive services 113 and 122 as shown in FIG. 1B .
  • the application 121 can send (602) to the multi-screen interaction service 122 a request for multi-screen interaction between the master device 110 and the slave device 120 (also referred to herein as a "first request"). The request may indicate the time point at which the user of the slave device 120 requests the interaction (also referred to herein as the "request time point") and/or a request type (eg, "multi-screen interaction").
  • the multi-screen interaction service 122 may forward ( 603 ) the request to the multi-screen interaction service 113 , and the multi-screen interaction service 113 may further forward ( 604 ) the request to the application 111 .
  • Application 111 may generate (605) an updated application information packet.
  • compared with the application information packet sent when the connection was established, the updated application information packet may include additional information related to the display content of the home screen at the request time point.
  • the application 111 may obtain, from the screen buffer established by the multi-screen interaction service 113, at least one display frame of the home screen within a predetermined time period near the request time point, and may obtain, from the audio buffer established by the multi-screen interaction service 113, the recording data corresponding to the at least one display frame.
  • the application 111 may generate an updated application information data packet based on at least one of the acquired at least one display frame and recording data.
  • FIG. 7 shows a schematic diagram of acquiring screen content-related information from a screen buffer and an audio buffer at a host device according to an embodiment of the present disclosure.
  • FIG. 7 shows a screen buffer 730 at the main device 110, which buffers a plurality of display frames 701-710 of the main screen.
  • FIG. 7 also shows an audio buffer 760 at the host device 110 , which buffers audio data 761 recorded during the execution of the application 111 .
  • as shown in FIG. 7, assume that the request time point indicated by the first request received by the application 111 is T0.
  • the application 111 can obtain, from the screen buffer 730, the display frames 702-706 of the main screen within the predetermined time period T before the request time point T0, and can obtain, from the audio buffer 760, the recording data 762 corresponding to the predetermined time period T.
  • the acquired screen display frame may also be a predetermined number of display frames before and after the request time point T0, or one frame before the request time point T0.
  • the acquired audio recording data may be audio recording data corresponding to the acquired screen display frame.
  • the multi-screen interaction service 113 may determine the recording segment corresponding to a display frame based on the capture time of each display frame. In this way, the recording data corresponding to the acquired screen display frames can be determined based on the start time and end time of the acquired screen display frames.
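Assuming each display frame carries its capture timestamp and the recording is a continuous stream with a known start time, the recording segment spanning the acquired frames can be cut roughly as follows. The function name and the sample rate are illustrative assumptions, not part of the disclosure:

```python
def audio_segment_for_frames(frame_times_ms, audio, audio_start_ms, sample_rate=16000):
    """Slice the recording that spans the acquired display frames.

    frame_times_ms: capture timestamps of the selected frames (ms)
    audio:          recording samples as a flat list/array
    audio_start_ms: timestamp of the first audio sample (ms)
    """
    # The segment runs from the earliest to the latest frame capture time
    start_ms, end_ms = min(frame_times_ms), max(frame_times_ms)
    first = (start_ms - audio_start_ms) * sample_rate // 1000
    last = (end_ms - audio_start_ms) * sample_rate // 1000
    return audio[first:last]

# 2 s of recording starting at t=0; frames captured at 500-1000 ms
audio = [0] * 32000
segment = audio_segment_for_frames([500, 600, 1000], audio, 0)
```

This mirrors the timestamp-based lookup: the frames' start and end times select the matching span of the audio buffer.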
  • the application 111 may send ( 606 ) the updated application information packet to the multi-screen interactive service 113 as a response to the first request.
  • the multi-screen interaction service 113 may forward ( 607 ) the application information packet to the multi-screen interaction service 122 , which further forwards ( 608 ) it to the application 121 .
  • the application 121 may display on the screen of the slave device 120 at least one display frame included in the application information packet, and a visual representation of the recording data.
  • FIG. 8A shows a schematic diagram of an example user interface 121 - 3 of application 121 .
  • user interface 121 - 3 may present a visual representation of received display frames 702 - 706 and recording data 762 .
  • the user of the slave device 120 may make a selection among the display frames 702-706 or within the recording 762 to select the display frame with which interaction is desired (also referred to herein as the "target display frame" or "target display content") and its corresponding recording segment.
  • for example, assuming that the user selects the display frame 704, the application 121 can determine the recording segment corresponding to the display frame 704 in the recording data 762 and/or the time point corresponding to the display frame 704 (also referred to herein as the "target time point"). For another example, assuming that the user selects a recording segment in the recording data 762, the application 121 may determine the target display frame corresponding to the recording segment and/or the target time point corresponding to the recording segment.
  • the user interface 121-3 may also provide buttons "Select Confirm” and "Reselect”.
  • when the user clicks the button "Reselect", the application 121 can discard the user's previous selection and receive a new user input.
  • when the user clicks the button "Select Confirm", the user interface 121-3 shown in FIG. 8A may be updated to the user interface 121-4 shown in FIG. 8B.
  • the user interface 121 - 4 may present the selected target display frame 704 . Additionally or alternatively, the user interface 121-4 may also present a recording segment corresponding to the target display frame 704 (not shown in Figure 8B). In addition, the user interface 121-4 may also provide buttons "Edit", “Share” and "Ask”. When the user clicks the button "Edit”, the user can edit the target display frame 704, for example, including but not limited to operations such as cropping, modifying, and marking. When the user clicks the button "Ask”, the user may input a question for the target display frame 704 by voice or other means. For example, the application 121 may record and save a recording of the user's question.
  • when the user clicks the button "Share", the application 121 may generate a request to interact with the target display content (also referred to herein as a "second request") based on one or more of the determined target time point, the unedited target display content, the edited target display content, the recording segment corresponding to the target display frame, and the recorded question recording, and send the request to the application 111 .
  • the user of the slave device 120 may also trigger the sharing of the editing results and/or question recordings in other ways, including but not limited to gestures such as swiping up on the screen of the slave device 120, voice commands such as "Share Notes", or other external devices.
  • the scope of the present disclosure is not limited in this regard.
  • the application 121 may generate (609) a second request based on one or more of the determined target time point, the edited target display frame, the recording segment corresponding to the target display frame, and the recorded question recording, and send (610) it to the multi-screen interaction service 122.
  • the second request may only indicate a target point in time and/or a request type (eg, "share notes").
  • One or more of the edited target display frame, the audio clip corresponding to the target display frame, and the recorded question recording may be included in the updated application information packet and sent with the second request.
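A second request of this shape might be sketched as follows. The field names and the separation into a small request plus optional payloads are assumptions for illustration, not the disclosure's wire format:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SecondRequest:
    # The request itself may carry only the target time point and a type
    target_time_point_ms: int
    request_type: str = "share notes"

@dataclass
class InteractionPayload:
    # Optional payloads that travel in the updated application information packet
    edited_target_frame: Optional[bytes] = None
    audio_clip: Optional[bytes] = None
    question_recording: Optional[bytes] = None

    def attachments(self) -> list:
        """Names of the payloads actually present, e.g. for a sharing prompt."""
        return [name for name, value in vars(self).items() if value is not None]

req = SecondRequest(target_time_point_ms=73_500)
payload = InteractionPayload(edited_target_frame=b"...png...",
                             question_recording=b"...wav...")
```

Keeping the payloads optional matches the description: the second request may indicate only the target time point and request type, with the heavier content riding alongside when present.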
  • the multi-screen interaction service 122 may forward ( 611 ) the second request along with the updated application information packet to the multi-screen interaction service 113 , which further forwards ( 612 ) them to the application 111 .
  • the application 111 may display a prompt regarding the second request on the screen of the main device 110 to ask the user of the main device 110 whether to allow sharing.
  • application 111 may send (614-616) a notification to application 121 via multi-screen interactive services 113 and 122 to notify application 121 to send subsequent control commands to application 111.
  • the application 121 may display a visual representation of the plurality of candidate control commands on the screen of the slave device 120 .
  • the user of the slave device 120 may select a control command from a plurality of candidate control commands.
  • application 121 may send ( 618 - 620 ) the control command to application 111 via multi-screen interactive services 122 and 113 .
  • the application 111 may perform (621) an operation related to the target display content according to the control command.
  • FIGS. 8C-8F show schematic diagrams of an example user interface 121 - 5 of the application 121 and a corresponding user interface of the host device 110 .
  • the application 121 may present the user interface 121-5.
  • the user interface 121-5 may present an edited target display frame 704' and buttons corresponding to candidate control commands 801-804.
  • the control command 801 instructs the display of the edited target display frame 704' on the main screen.
  • Control command 802 instructs to jump back to target display frame 704 (ie, the unedited target display frame) on the home screen.
  • the control command 803 instructs to jump back to the target display frame 704 on the main screen and play the question recording for the target display frame 704 at the same time.
  • the control command 804 instructs to display the edited target display frame 704' on the main screen while playing the audio clip corresponding to the target display frame 704.
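On the master side, the four candidate control commands could be wired to handlers through a simple dispatch table. The numeric codes 801-804 follow the figures, while the handler action names below are illustrative paraphrases of the behaviors described, not the disclosure's implementation:

```python
from enum import IntEnum

class ControlCommand(IntEnum):
    SHOW_EDITED = 801               # display the edited target frame 704'
    JUMP_BACK = 802                 # jump back to the unedited target frame 704
    JUMP_BACK_WITH_QUESTION = 803   # redisplay 704 and play the question recording
    SHOW_EDITED_WITH_AUDIO = 804    # display 704' and play the matching audio clip

def handle_command(cmd: ControlCommand) -> list:
    """Return the list of actions the master application would perform."""
    dispatch = {
        ControlCommand.SHOW_EDITED: ["pause_app", "show_frame(704')"],
        ControlCommand.JUMP_BACK: ["jump_to_target_time"],
        ControlCommand.JUMP_BACK_WITH_QUESTION: ["jump_to_target_time", "play_question"],
        ControlCommand.SHOW_EDITED_WITH_AUDIO: ["pause_app", "show_frame(704')", "play_audio_clip"],
    }
    return dispatch[cmd]
```

A received command code is converted with `ControlCommand(code)` and dispatched; unknown codes raise an error rather than being silently executed.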
  • as shown in FIG. 8C, when the user of the slave device 120 clicks the button corresponding to the control command 801, the application 111 can suspend its normal operation and display the edited target display frame 704' on the screen of the master device 110.
  • as shown in FIG. 8D, when the user of the slave device 120 clicks the button corresponding to the control command 802, the application 111 may jump back to the target display frame 704 on the screen of the master device 110.
  • when the application 111 is a slideshow application, it can jump back to the slideshow page corresponding to the target time point; when the application 111 is a video playback application, it can make the video playback jump back to the position corresponding to the target time point.
  • when the user of the slave device 120 clicks the button corresponding to the control command 803, the application 111 can redisplay the target display frame 704 on the screen of the master device 110 and simultaneously play the question recording for the target display frame 704.
  • when the user of the slave device 120 clicks the button corresponding to the control command 804, the application 111 can suspend its normal operation and display the edited target display frame 704' on the screen of the master device 110 while playing the audio clip corresponding to the target display frame 704.
  • the user of the main device 110 can take back control of the main screen through various means such as voice commands, shortcut keys (eg, Ctrl+Alt+M).
  • application 111 may stop ( 622 ) responding to control commands from application 121 .
  • Embodiments of the present disclosure do not require the slave devices to synchronously display the screen content of the master device.
  • Embodiments of the present disclosure can share various types of information, such as pictures, audio, etc., among multiple screens.
  • embodiments of the present disclosure enable further interaction between different screens based on edited screen content.
  • Embodiments of the present disclosure are also applicable to scenarios where multiple screens come from the same device (eg, a folding screen device). Embodiments of the present disclosure will be described in detail below in conjunction with an example system 200 as shown in Figures 2A and 2B.
  • FIG. 9 shows a signaling diagram for multi-screen interaction between different screens of the same device according to an embodiment of the present disclosure.
  • FIG. 9 relates to the applications 211 and 212 and the multi-screen interactive service 214 as shown in FIGS. 2A and 2B .
  • the application 212 can send (902-903), via the multi-screen interaction service 214, a request for multi-screen interaction between the master screen and the slave screen (also referred to herein as a "first request") to the application 211. The request may indicate a request time point and/or a request type (eg, "multi-screen interaction").
  • the application 211 may generate (904) an application information data packet, which may include information related to the display content of the home screen at the requested time point.
  • the application 211 may obtain, from the screen buffer maintained by the multi-screen interaction service 214, at least one display frame of the main screen within a predetermined time period near the request time point, and may obtain, from the audio buffer maintained by the multi-screen interaction service 214, the recording data corresponding to the at least one display frame.
  • the application 211 may generate the application information data packet based on at least one of the acquired at least one display frame and recording data.
  • the application 211 may send (905-906) the application information data packet to the application 212 via the multi-screen interactive service 214 as a response to the first request.
  • the application 212 may display at least one display frame included in the application information packet, as well as a visual representation of the recorded data, on the slave screen for selection by the user.
  • the user interface displayed on the slave screen may be the same as or similar to the user interfaces shown in FIGS. 8A and 8B, for example. After selecting the target display frame and its corresponding recording segment, the user can edit the target display frame, ask questions about the target display frame, and/or trigger sharing of the editing results and/or question recordings.
  • the application 212 may generate (907) a second request based on one or more of the determined target time point, the edited target display frame, the recording segment corresponding to the target display frame, and the recorded question recording, and send (908-909) it to the application 211 via the multi-screen interaction service 214.
  • the second request may only indicate a target point in time and/or a request type (eg, "share notes").
  • One or more of the edited target display frame, the audio clip corresponding to the target display frame, and the recorded question recording may be included in the updated application information packet and sent with the second request.
  • the application 211 may display a prompt about the second request on the home screen to ask the user whether to allow sharing.
  • application 211 may send (911-912) a notification to application 212 via multi-screen interactive service 214 to notify application 212 to send subsequent control commands to application 211.
  • the application 212 may display a visual representation of the plurality of candidate control commands on the slave screen.
  • the user interface displayed on the slave screen may be the same as or similar to the user interface 121-5 shown in FIGS. 8C to 8F, for example.
  • the user can select one control command from a plurality of candidate control commands.
  • application 212 may send (914-915) the control command to application 211 via multi-screen interactive service 214.
  • the application 211 can perform (916) an operation related to the target display content according to the control command.
  • application 211 may stop ( 917 ) responding to control commands from application 212 .
  • embodiments of the present disclosure can share various types of information between different screens, and can realize further interaction between different screens based on the marked screen content.
  • FIG. 10 shows a flowchart of an example method 1000 for multi-screen interaction according to an embodiment of the present disclosure.
  • the method 1000 may be performed, for example, by a first device, such as the master device 110 shown in FIGS. 1A and 1B.
  • the second device is, for example, the slave device 120 as shown in FIGS. 1A and 1B .
  • the method 1000 may also include additional actions not shown, and/or some of the actions shown may be omitted. The scope of the present disclosure is not limited in this regard.
  • the first device receives a request for multi-screen interaction from the second device.
  • the first device displays the first content on the first screen, the first content includes a plurality of display frames, and the request includes the request time point.
  • the first device sends a response to the second device in accordance with the request.
  • the response includes at least one display frame of the first screen near the requested time point.
  • the first device receives the target display frame or the edited target display frame from the second device.
  • the target display frame is selected from at least one display frame.
  • the first device displays the target display frame or the edited target display frame on the first screen.
  • in response to receiving the target display frame or the edited target display frame, the first device may display a prompt on the first screen asking whether to allow sharing of the target display frame.
  • the first device may receive the user input and display the target display frame or the edited target display frame on the first screen in response to the user input indicating that the user allows the sharing.
  • the first device may receive a control command from the second device, the control command being used to control the display of the first screen.
  • the first device may display the target display frame or the edited target display frame on the first screen according to the control command.
  • the control command includes any one of the following: a first control command for displaying the edited target display frame on the first screen; a second control command for displaying the target display frame on the first screen; a third control command for displaying the target display frame on the first screen while playing the user's question for the target display frame; and a fourth control command for displaying the edited target display frame on the first screen while playing the audio clip corresponding to the target display frame.
  • the first device may receive a question from the second device for the target display frame.
  • the first device may play the question while displaying the target display frame or the edited target display frame on the first screen.
  • the response may include a recording corresponding to at least one display frame.
  • the first device may receive a sound recording segment from the second device, the sound recording segment being selected from the sound recording and corresponding to the target display frame.
  • the first device may play the recording segment while displaying the target display frame or the edited target display frame on the first screen.
  • the first device may receive a user's trigger operation. In response to the received trigger operation, the first device may stop displaying the target display frame or the edited target display frame on the first screen, and redisplay the first content on the first screen.
  • before receiving the request from the second device, the first device may establish a connection with the second device for multi-screen interaction.
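The first-device flow enumerated above can be sketched in a few lines of Python. The class, the method names, and the in-memory frame history below are illustrative assumptions, not part of the disclosed embodiments; a real implementation would capture frames from the first screen and communicate over the established connection.

```python
class FirstDevice:
    """Illustrative sketch of the master-device side of the
    multi-screen interaction flow (all names are assumptions)."""

    def __init__(self):
        # History of (timestamp, frame) pairs captured from the first
        # screen while the first content is being displayed.
        self.frame_history = []
        self.current_display = None

    def show_frame(self, timestamp, frame):
        # Display a frame of the first content and record it.
        self.current_display = frame
        self.frame_history.append((timestamp, frame))

    def handle_request(self, request_time, window=1.0):
        # Build the response: the display frames captured near the
        # requested time point, to be sent to the second device.
        return [f for (t, f) in self.frame_history
                if abs(t - request_time) <= window]

    def receive_target_frame(self, target_frame, user_allows_sharing):
        # Prompt the user before sharing; display the (possibly
        # edited) target frame only if sharing is allowed.
        if user_allows_sharing:
            self.current_display = target_frame
        return self.current_display

device = FirstDevice()
for ts, frame in enumerate(["frame-0", "frame-1", "frame-2", "frame-3"]):
    device.show_frame(float(ts), frame)

# Frames within 1 second of the requested time point are returned.
response = device.handle_request(request_time=2.0, window=1.0)
```

The `window` parameter stands in for whatever notion of "near the request time point" an embodiment chooses; the disclosure does not fix a specific interval.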
  • FIG. 11 shows a flowchart of an example method 1100 for multi-screen interaction according to an embodiment of the present disclosure.
  • the method 1100 may be performed, for example, by a second device, such as the slave device 120 shown in FIGS. 1A and 1B .
  • the first device is, for example, the master device 111 shown in FIGS. 1A and 1B .
  • the method 1100 may also include additional actions not shown, and/or some of the shown actions may be omitted. The scope of the present disclosure is not limited in this regard.
  • the second device sends a request for multi-screen interaction to the first device in response to receiving the first triggering operation from the user.
  • the request includes the request time point.
  • the second device receives the response from the first device.
  • the response includes at least one display frame of the first screen of the first device near the requested time point.
  • the second device displays the first interface on the second screen.
  • the first interface includes the at least one display frame.
  • the second device receives user input.
  • the user input indicates user selection and/or editing of a target display frame of the at least one display frame.
  • the second device in response to receiving the second trigger operation from the user, transmits the target display frame or the edited target display frame to the first device for display on the first screen.
  • the second device may receive control commands entered by the user. In response to the received control command, the second device may send the control command to the first device for controlling the display of the first screen.
  • the control command includes any one of the following: a first control command for displaying the edited target display frame on the first screen; a second control command for displaying the target display frame on the first screen; a third control command for displaying the target display frame on the first screen while playing the user's question regarding the target display frame; and a fourth control command for displaying the edited target display frame on the first screen while playing the audio clip corresponding to the target display frame.
  • the user input also indicates a user question regarding the target display frame.
  • the second device may send the question to the first device.
  • the response includes a recording corresponding to at least one display frame.
  • the first interface includes a visual representation of the recording.
  • the user input indicates the user's selection of a recording segment in the recording, the recording segment corresponding to the target display frame.
  • the second device in response to receiving the second trigger operation, may send the audio recording segment to the first device.
  • the second device may establish a connection with the first device for multi-screen interaction.
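The second-device flow above can likewise be sketched; the transport callbacks below are assumptions standing in for the real connection to the first device, and all names are illustrative rather than drawn from the embodiments.

```python
class SecondDevice:
    """Illustrative sketch of the slave-device side of method 1100
    (names and callbacks are assumptions)."""

    def __init__(self, send_request, send_frame):
        self.send_request = send_request   # request dict -> list of frames
        self.send_frame = send_frame       # frame -> delivered to first device
        self.candidate_frames = []

    def on_first_trigger(self, request_time):
        # First trigger operation: send the request with its time
        # point, then show the returned frames on the first interface.
        self.candidate_frames = self.send_request({"time": request_time})
        return self.candidate_frames

    def on_second_trigger(self, selected_index, edit=None):
        # User input selected (and optionally edited) a target display
        # frame; the second trigger operation sends it back for display.
        target = self.candidate_frames[selected_index]
        if edit is not None:
            target = edit(target)
        self.send_frame(target)
        return target

sent = []
device = SecondDevice(
    send_request=lambda req: ["frame-a", "frame-b", "frame-c"],
    send_frame=sent.append,
)
device.on_first_trigger(request_time=12.5)
result = device.on_second_trigger(1, edit=lambda f: f + "-annotated")
```

The `edit` callback is one way to model the user's annotation of the target display frame before it is transmitted.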
  • FIG. 12 shows a block diagram of an example apparatus 1200 for multi-screen interaction according to an embodiment of the present disclosure.
  • the apparatus 1200 may be used to implement the master device 110, or a portion of the master device 110, as shown in FIGS. 1A and 1B.
  • the apparatus 1200 includes: a screen display unit 1210 configured to display first content on a first screen, the first content including a plurality of display frames; a request receiving unit 1220 configured to receive a request for multi-screen interaction from the second device, the request including a request time point; a response sending unit 1230 configured to send a response to the second device according to the request, the response including at least one display frame of the first screen near the request time point; and a display frame receiving unit 1240 configured to receive a target display frame or an edited target display frame from the second device, the target display frame being selected from the at least one display frame. The screen display unit 1210 is further configured to display the target display frame or the edited target display frame on the first screen.
  • the screen display unit 1210 includes: a first display unit configured to, in response to receiving the target display frame or the edited target display frame, display a prompt on the first screen asking whether to allow sharing of the target display frame; a user input receiving unit configured to receive user input; and a second display unit configured to display the target display frame or the edited target display frame on the first screen in response to the user input indicating that the user allows the sharing.
  • the screen display unit 1210 includes: a control command receiving unit configured to receive a control command from the second device, the control command being used to control the display of the first screen; and a third display unit configured to display the target display frame or the edited target display frame on the first screen according to the control command.
  • the control command includes any one of the following: a first control command for displaying the edited target display frame on the first screen; a second control command for displaying the target display frame on the first screen; a third control command for displaying the target display frame on the first screen while playing the user's question regarding the target display frame; and a fourth control command for displaying the edited target display frame on the first screen while playing the audio clip corresponding to the target display frame.
  • the apparatus 1200 further includes: a question receiving unit configured to receive a question for the target display frame from the second device; and a question playing unit configured to play the question while displaying the target display frame or the edited target display frame on the first screen.
  • the response includes a sound recording corresponding to the at least one display frame.
  • the apparatus 1200 further includes: a sound recording segment receiving unit configured to receive a sound recording segment from the second device, the sound recording segment being selected from the sound recording and corresponding to a target display frame; and a recording segment playing unit configured to play the recording segment while displaying the target display frame or the edited target display frame on the first screen.
  • the apparatus 1200 further includes: an operation receiving unit configured to receive a trigger operation from the user; and a fourth display unit configured to, in response to the received trigger operation, stop displaying the target display frame or the edited target display frame on the first screen and redisplay the first content on the first screen.
  • the apparatus 1200 further includes: a connection establishing unit configured to establish a connection for multi-screen interaction with the second device before receiving the request from the second device.
  • the units in the apparatus 1200 are respectively configured to implement the corresponding steps of the method performed by the first device in the foregoing embodiments, and have the same beneficial effects. For the sake of brevity, specific details are not repeated here.
  • various sending units and receiving units in the apparatus 1200 can be implemented, for example, by using the wireless communication module 1460 shown in FIG. 14 below.
  • the various display units in the device 1200 may be implemented using the display screen 1494 shown in FIG. 14 below.
  • FIG. 13 shows a block diagram of an example apparatus 1300 for multi-screen interaction according to an embodiment of the present disclosure.
  • the apparatus 1300 may be used to implement the slave device 120 or a portion of the slave device 120 as shown in FIGS. 1A and 1B .
  • the apparatus 1300 includes: a request sending unit 1310 configured to, in response to receiving a first trigger operation from the user, send a request for multi-screen interaction to the first device, the request including a request time point;
  • a response receiving unit 1320 configured to receive a response from the first device, the response including at least one display frame of the first screen of the first device near the request time point;
  • a screen display unit 1330 configured to display a first interface on the second screen of the second device, the first interface including the at least one display frame;
  • a user input receiving unit 1340 configured to receive user input indicating the user's selection and/or editing of a target display frame among the at least one display frame; and
  • a display frame sending unit 1350 configured to, in response to receiving a second trigger operation from the user, send the target display frame or the edited target display frame to the first device for display on the first screen.
  • the apparatus 1300 further includes: a control command receiving unit configured to receive a control command input by a user; and a control command sending unit configured to send the control command to the first device in response to the received control command command to control the display of the first screen.
  • the control command includes any one of the following: a first control command for displaying the edited target display frame on the first screen; a second control command for displaying the target display frame on the first screen; a third control command for displaying the target display frame on the first screen while playing the user's question regarding the target display frame; and a fourth control command for displaying the edited target display frame on the first screen while playing the audio clip corresponding to the target display frame.
  • the user input further indicates the user's question for the target display frame
  • the apparatus 1300 further includes: a question sending unit configured to send the question to the first device in response to receiving the second trigger operation.
  • the response includes a sound recording corresponding to at least one display frame; the first interface includes a visual representation of the sound recording; and the user input indicates a user selection for a sound recording segment in the sound recording, the sound recording segment corresponding to the target display frame.
  • the apparatus 1300 further includes: a recording segment sending unit, configured to send the recording segment to the first device in response to receiving the second trigger operation.
  • the apparatus 1300 further includes: a connection establishing unit, configured to establish a connection for multi-screen interaction with the first device before sending the request to the first device.
  • the units in the apparatus 1300 are respectively configured to implement the corresponding steps of the method performed by the second device in the foregoing embodiments, and have the same beneficial effects. For the sake of brevity, specific details are not repeated here.
  • various sending units and receiving units in the apparatus 1300 can be implemented, for example, by using the wireless communication module 1460 shown in FIG. 14 below.
  • the various display units in the device 1300 may be implemented using the display screen 1494 shown in FIG. 14 below.
  • FIG. 14 shows a schematic structural diagram of an electronic device 1400 .
  • the master device 110 and the slave device 120 shown in FIG. 1A, and/or the device 210 shown in FIG. 2A, may be implemented by the electronic device 1400.
  • the electronic device 1400 may include a processor 1410, an external memory interface 1420, an internal memory 1421, a universal serial bus (USB) interface 1430, a charge management module 1440, a power management module 1441, a battery 1442, an antenna 141, an antenna 142, a mobile communication module 1450, a wireless communication module 1460, an audio module 1470, a speaker 1470A, a receiver 1470B, a microphone 1470C, a headphone jack 1470D, a sensor module 1480, buttons 1490, a motor 1491, an indicator 1492, a camera 1493, a display screen 1494, a subscriber identification module (SIM) card interface 1495, and so on.
  • the sensor module 1480 may include a pressure sensor 1480A, a gyroscope sensor 1480B, an air pressure sensor 1480C, a magnetic sensor 1480D, an acceleration sensor 1480E, a distance sensor 1480F, a proximity light sensor 1480G, a fingerprint sensor 1480H, a temperature sensor 1480J, a touch sensor 1480K, an ambient light sensor 1480L, a bone conduction sensor 1480M, and the like.
  • the structures illustrated in the embodiments of the present invention do not constitute a specific limitation on the electronic device 1400 .
  • the electronic device 1400 may include more or fewer components than shown, combine some components, split some components, or use a different arrangement of components.
  • the illustrated components may be implemented in hardware, software, or a combination of software and hardware.
  • the processor 1410 may include one or more processing units. For example, the processor 1410 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc. Different processing units may be independent devices, or may be integrated in one or more processors.
  • the controller can generate an operation control signal according to the instruction operation code and timing signal, and complete the control of fetching and executing instructions.
  • a memory may also be provided in the processor 1410 for storing instructions and data.
  • the memory in the processor 1410 is a cache. This memory may hold instructions or data that the processor 1410 has just used or uses cyclically. If the processor 1410 needs the instructions or data again, they can be fetched directly from this memory. Repeated accesses are avoided and the waiting time of the processor 1410 is reduced, thereby improving the efficiency of the system.
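The caching benefit described above can be illustrated with a minimal memoizing cache: recently used results are kept close at hand so repeated accesses skip the slower backing fetch. The class below is purely illustrative (a simple FIFO-evicting store), not a model of any actual processor cache.

```python
class TinyCache:
    """Minimal illustration of the caching idea described above:
    recently used data is kept in fast storage so repeated accesses
    avoid the slow backing fetch (illustrative, FIFO eviction)."""

    def __init__(self, fetch, capacity=4):
        self.fetch = fetch          # slow backing access (e.g. main memory)
        self.capacity = capacity
        self.store = {}             # address -> data, insertion ordered
        self.misses = 0

    def read(self, address):
        if address in self.store:
            return self.store[address]          # hit: no slow access needed
        self.misses += 1
        data = self.fetch(address)              # miss: go to backing memory
        if len(self.store) >= self.capacity:
            self.store.pop(next(iter(self.store)))  # evict oldest entry
        self.store[address] = data
        return data

cache = TinyCache(fetch=lambda addr: addr * 2)
cache.read(10)   # miss: fetched from the slow path
cache.read(10)   # hit: served from the cache
```

Real processor caches use associativity and replacement policies far beyond this sketch; the point is only the hit/miss distinction the paragraph above relies on.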
  • the processor 1410 may include one or more interfaces.
  • the interface may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, etc.
  • the I2C interface is a bidirectional synchronous serial bus that includes a serial data line (SDA) and a serial clock line (SCL).
  • the processor 1410 may contain multiple sets of I2C buses.
  • the processor 1410 can be respectively coupled to the touch sensor 1480K, the charger, the flash, the camera 1493, etc. through different I2C bus interfaces.
  • the processor 1410 can couple the touch sensor 1480K through the I2C interface, so that the processor 1410 and the touch sensor 1480K communicate with each other through the I2C bus interface, so as to realize the touch function of the electronic device 1400.
  • the I2S interface can be used for audio communication.
  • the processor 1410 may contain multiple sets of I2S buses.
  • the processor 1410 may be coupled with the audio module 1470 through an I2S bus to implement communication between the processor 1410 and the audio module 1470.
  • the audio module 1470 can transmit audio signals to the wireless communication module 1460 through the I2S interface, so as to realize the function of answering calls through a Bluetooth headset.
  • the PCM interface can also be used for audio communication, to sample, quantize, and encode an analog signal.
  • the audio module 1470 and the wireless communication module 1460 may be coupled through a PCM bus interface.
  • the audio module 1470 can also transmit audio signals to the wireless communication module 1460 through the PCM interface, so as to realize the function of answering calls through the Bluetooth headset. Both the I2S interface and the PCM interface can be used for audio communication.
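The sampling, quantizing, and encoding steps that PCM performs can be sketched directly. The sample rate, bit depth, and sine-wave input below are illustrative choices, not parameters from the embodiments.

```python
import math

def pcm_encode(signal, sample_rate, duration, bits=8):
    """Sample, quantize, and encode an analog signal into PCM codes,
    mirroring the PCM steps described above (illustrative sketch)."""
    levels = 2 ** bits
    n_samples = int(sample_rate * duration)
    codes = []
    for n in range(n_samples):
        t = n / sample_rate
        amplitude = signal(t)                          # sampling
        clamped = max(-1.0, min(1.0, amplitude))
        # Map [-1, 1] onto the integer quantization levels.
        code = round((clamped + 1.0) / 2.0 * (levels - 1))  # quantizing
        codes.append(code)                             # encoded PCM value
    return codes

# Encode 1 ms of a 1 kHz sine tone at an 8 kHz sample rate.
codes = pcm_encode(lambda t: math.sin(2 * math.pi * 1000 * t),
                   sample_rate=8000, duration=0.001)
```

With 8 bits the quantizer has 256 levels, so each sample becomes an integer in [0, 255]; real PCM interfaces also frame and clock these codes over the bus, which the sketch omits.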
  • the UART interface is a universal serial data bus used for asynchronous communication.
  • the bus may be a bidirectional communication bus. It converts the data to be transmitted between serial communication and parallel communication.
  • a UART interface is typically used to connect the processor 1410 with the wireless communication module 1460 .
  • the processor 1410 communicates with the Bluetooth module in the wireless communication module 1460 through the UART interface to implement the Bluetooth function.
  • the audio module 1470 can transmit audio signals to the wireless communication module 1460 through the UART interface, so as to realize the function of playing music through the Bluetooth headset.
  • the MIPI interface can be used to connect the processor 1410 with the display screen 1494, the camera 1493 and other peripheral devices.
  • MIPI interfaces include camera serial interface (CSI), display serial interface (DSI), etc.
  • the processor 1410 communicates with the camera 1493 through a CSI interface to implement the photographing function of the electronic device 1400.
  • the processor 1410 communicates with the display screen 1494 through the DSI interface to implement the display function of the electronic device 1400 .
  • the GPIO interface can be configured by software.
  • the GPIO interface can be configured as a control signal or as a data signal.
  • the GPIO interface may be used to connect the processor 1410 with the camera 1493, the display screen 1494, the wireless communication module 1460, the audio module 1470, the sensor module 1480, and the like.
  • the GPIO interface can also be configured as I2C interface, I2S interface, UART interface, MIPI interface, etc.
  • the USB interface 1430 is an interface that conforms to the USB standard specification, and can specifically be a Mini USB interface, a Micro USB interface, a USB Type C interface, and the like.
  • the USB interface 1430 can be used to connect a charger to charge the electronic device 1400, and can also be used to transmit data between the electronic device 1400 and peripheral devices. It can also be used to connect headphones to play audio through the headphones.
  • the interface can also be used to connect other electronic devices, such as AR devices.
  • the interface connection relationship between the modules illustrated in the embodiment of the present invention is only a schematic illustration, and does not constitute a structural limitation of the electronic device 1400 .
  • the electronic device 1400 may also adopt different interface connection manners in the foregoing embodiments, or a combination of multiple interface connection manners.
  • the charging management module 1440 is used to receive charging input from the charger.
  • the charger may be a wireless charger or a wired charger.
  • the power management module 1441 is used for connecting the battery 1442 , the charging management module 1440 and the processor 1410 .
  • the power management module 1441 receives input from the battery 1442 and/or the charging management module 1440, and supplies power to the processor 1410, the internal memory 1421, the display screen 1494, the camera 1493, and the wireless communication module 1460.
  • the wireless communication function of the electronic device 1400 can be implemented by the antenna 141, the antenna 142, the mobile communication module 1450, the wireless communication module 1460, the modem processor, the baseband processor, and the like.
  • the antenna 141 and the antenna 142 are used to transmit and receive electromagnetic wave signals.
  • Each antenna in electronic device 1400 may be used to cover a single or multiple communication frequency bands. Different antennas can also be reused to improve antenna utilization.
  • the antenna 141 can be multiplexed as a diversity antenna of the wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
  • the mobile communication module 1450 may provide a wireless communication solution including 2G/3G/4G/5G, etc. applied on the electronic device 1400 .
  • the mobile communication module 1450 may include at least one filter, switch, power amplifier, low noise amplifier (LNA), and the like.
  • the mobile communication module 1450 can receive electromagnetic waves through the antenna 141, filter, amplify, etc. the received electromagnetic waves, and transmit them to the modulation and demodulation processor for demodulation.
  • the mobile communication module 1450 can also amplify the signal modulated by the modulation and demodulation processor, and then convert it into electromagnetic waves for radiation through the antenna 141 .
  • at least part of the functional modules of the mobile communication module 1450 may be provided in the processor 1410 .
  • at least part of the functional modules of the mobile communication module 1450 may be provided in the same device as at least part of the modules of the processor 1410 .
  • the modem processor may include a modulator and a demodulator.
  • the modulator is used to modulate the low frequency baseband signal to be sent into a medium and high frequency signal.
  • the demodulator is used to demodulate the received electromagnetic wave signal into a low frequency baseband signal. Then the demodulator transmits the demodulated low-frequency baseband signal to the baseband processor for processing.
  • the low frequency baseband signal is processed by the baseband processor and passed to the application processor.
  • the application processor outputs sound signals through audio devices (not limited to the speaker 1470A, the receiver 1470B, etc.), or displays images or videos through the display screen 1494 .
  • the modem processor may be a stand-alone device.
  • the modem processor may be independent of the processor 1410, and may be provided in the same device as the mobile communication module 1450 or other functional modules.
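The modulator/demodulator roles described above can be illustrated with a toy double-sideband AM mix-up and mix-down. The carrier frequency, sample rate, and averaging "low-pass filter" are illustrative assumptions, far simpler than a real modem's signal chain.

```python
import math

def modulate(baseband, carrier_hz, t):
    # Modulator: mix the low-frequency baseband signal up onto a
    # medium/high-frequency carrier (simple DSB-AM for illustration).
    return baseband(t) * math.cos(2 * math.pi * carrier_hz * t)

def demodulate(passband_samples, carrier_hz, sample_rate):
    # Demodulator: mix back down with the same carrier, then
    # low-pass by averaging to recover the baseband level.
    mixed = [2 * s * math.cos(2 * math.pi * carrier_hz * n / sample_rate)
             for n, s in enumerate(passband_samples)]
    return sum(mixed) / len(mixed)

sample_rate, carrier = 1000, 100.0
baseband = lambda t: 0.5            # a constant (DC) baseband level
tx = [modulate(baseband, carrier, n / sample_rate) for n in range(1000)]
recovered = demodulate(tx, carrier, sample_rate)
```

Averaging over a whole number of carrier cycles cancels the double-frequency term from the mixer, which is why the recovered value matches the transmitted baseband level.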
  • the wireless communication module 1460 can provide wireless communication solutions applied on the electronic device 1400, including wireless local area network (WLAN) (such as a wireless fidelity (Wi-Fi) network), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), and infrared (IR) technology.
  • the wireless communication module 1460 may be one or more devices integrating at least one communication processing module.
  • the wireless communication module 1460 receives electromagnetic waves via the antenna 142 , frequency modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 1410 .
  • the wireless communication module 1460 can also receive the signal to be sent from the processor 1410, perform frequency modulation on it, amplify it, and convert it into electromagnetic waves for radiation through the antenna 142.
  • the wireless communication module 1460 may be used to send and receive various messages (including various requests and responses), data packets (including display frames, recorded data, and/or other data), etc. described above.
  • the wireless communication module 1460 may be used to implement various sending units and receiving units in the apparatus 1200 shown in FIG. 12 and/or the apparatus 1300 shown in FIG. 13 .
  • the antenna 141 of the electronic device 1400 is coupled with the mobile communication module 1450, and the antenna 142 is coupled with the wireless communication module 1460, so that the electronic device 1400 can communicate with the network and other devices through wireless communication technology.
  • the wireless communication technology may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division synchronous code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technology, etc.
  • the GNSS may include a global positioning system (GPS), a global navigation satellite system (GLONASS), a BeiDou navigation satellite system (BDS), a quasi-zenith satellite system (QZSS), and/or a satellite based augmentation system (SBAS).
  • the electronic device 1400 implements a display function through a GPU, a display screen 1494, an application processor, and the like.
  • the GPU is a microprocessor for image processing, and is connected to the display screen 1494 and the application processor.
  • the GPU is used to perform mathematical and geometric calculations for graphics rendering.
  • Processor 1410 may include one or more GPUs that execute program instructions to generate or alter display information.
  • Display screen 1494 is used to display images, videos, and the like. In some embodiments, display screen 1494 may be used for the various interfaces, display frames, etc. described above. For example, the display screen 1494 may be used to implement various display units in the apparatus 1200 shown in FIG. 12 and/or the apparatus 1300 shown in FIG. 13 .
  • Display screen 1494 includes a display panel.
  • the display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), or the like.
  • the electronic device 1400 may include 1 or N display screens 1494, where N is a positive integer greater than 1.
  • the electronic device 1400 may implement a shooting function through an ISP, a camera 1493, a video codec, a GPU, a display screen 1494, an application processor, and the like.
  • the ISP is used to process the data fed back by the camera 1493. For example, when taking a photo, the shutter is opened, the light is transmitted to the camera photosensitive element through the lens, the light signal is converted into an electrical signal, and the camera photosensitive element transmits the electrical signal to the ISP for processing, and converts it into an image visible to the naked eye. ISP can also perform algorithm optimization on image noise, brightness, and skin tone. ISP can also optimize the exposure, color temperature and other parameters of the shooting scene. In some embodiments, the ISP may be located in the camera 1493.
  • the camera 1493 is used to capture still images or video.
  • the object is projected through the lens to generate an optical image onto the photosensitive element.
  • the photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor.
  • the photosensitive element converts the optical signal into an electrical signal, and then transmits the electrical signal to the ISP to convert it into a digital image signal.
  • the ISP outputs the digital image signal to the DSP for processing.
  • DSP converts digital image signals into standard RGB, YUV and other formats of image signals.
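The RGB/YUV conversion the DSP performs can be written out with the standard BT.601 coefficients. These coefficients are the conventional published values, not figures taken from the embodiments, and the sketch works on normalized 0..1 components rather than real sensor data.

```python
def rgb_to_yuv(r, g, b):
    """Illustrative BT.601 RGB -> YUV conversion of the kind the
    DSP stage above performs (normalized components)."""
    y = 0.299 * r + 0.587 * g + 0.114 * b   # luma
    u = 0.492 * (b - y)                      # blue-difference chroma
    v = 0.877 * (r - y)                      # red-difference chroma
    return y, u, v

def yuv_to_rgb(y, u, v):
    # Inverse of the mapping above, derived term by term.
    r = y + v / 0.877
    b = y + u / 0.492
    g = (y - 0.299 * r - 0.114 * b) / 0.587
    return r, g, b

# Round-trip a pure-red pixel (normalized 0..1 components).
y, u, v = rgb_to_yuv(1.0, 0.0, 0.0)
r, g, b = yuv_to_rgb(y, u, v)
```

Because the two mappings are exact inverses, the round trip recovers the input up to floating-point error; real pipelines additionally clamp, scale, and subsample the chroma channels.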
  • the electronic device 1400 may include 1 or N cameras 1493 , where N is a positive integer greater than 1.
  • the digital signal processor is used to process digital signals; in addition to digital image signals, it can also process other digital signals. For example, when the electronic device 1400 selects a frequency point, the digital signal processor is used to perform a Fourier transform on the frequency point energy, and the like.
  • Video codecs are used to compress or decompress digital video.
  • the electronic device 1400 may support one or more video codecs. In this way, the electronic device 1400 can play or record videos in various encoding formats, such as Moving Picture Experts Group (MPEG)-1, MPEG-2, MPEG-3, MPEG-4, and so on.
  • the NPU is a neural-network (NN) computing processor.
  • Applications such as intelligent cognition of the electronic device 1400 can be implemented through the NPU, such as image recognition, face recognition, speech recognition, text understanding, and the like.
  • the external memory interface 1420 can be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the electronic device 1400.
  • the external memory card communicates with the processor 1410 through the external memory interface 1420 to implement the data storage function, for example, to save files such as music and videos in the external memory card.
  • Internal memory 1421 may be used to store computer executable program code, which includes instructions.
  • the internal memory 1421 may include a storage program area and a storage data area.
  • the storage program area can store an operating system, an application program required for at least one function (such as a sound playback function, an image playback function, etc.), and the like.
  • the storage data area may store data (such as audio data, phone book, etc.) created during the use of the electronic device 1400 and the like.
  • the internal memory 1421 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, universal flash storage (UFS), and the like.
  • the processor 1410 executes various functional applications and data processing of the electronic device 1400 by executing instructions stored in the internal memory 1421 and/or instructions stored in a memory provided in the processor.
  • The electronic device 1400 may implement audio functions, such as music playback and recording, through the audio module 1470, the speaker 1470A, the receiver 1470B, the microphone 1470C, the earphone interface 1470D, and the application processor.
  • the audio module 1470 is used for converting digital audio information into analog audio signal output, and also for converting analog audio input into digital audio signal. Audio module 1470 may also be used to encode and decode audio signals. In some embodiments, the audio module 1470 may be provided in the processor 1410 , or some functional modules of the audio module 1470 may be provided in the processor 1410 .
  • The speaker 1470A, also referred to as a "loudspeaker", is used to convert audio electrical signals into sound signals.
  • The electronic device 1400 can play music or hands-free call audio through the speaker 1470A.
  • the receiver 1470B also referred to as "earpiece" is used to convert audio electrical signals into sound signals.
  • A call or voice message can be answered by placing the receiver 1470B close to the ear.
  • The microphone 1470C, also called a "mic" or "sound transmitter", is used to convert sound signals into electrical signals. When making a call or sending a voice message, the user can speak close to the microphone 1470C to input a sound signal.
  • the electronic device 1400 may be provided with at least one microphone 1470C. In other embodiments, the electronic device 1400 may be provided with two microphones 1470C, which can implement a noise reduction function in addition to collecting sound signals. In other embodiments, the electronic device 1400 may further be provided with three, four or more microphones 1470C to collect sound signals, reduce noise, identify sound sources, and implement directional recording functions.
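The relationship between microphone count and the audio functions described above can be sketched as a simple capability mapping. The function name and capability labels are hypothetical, invented for illustration:

```python
def mic_capabilities(num_mics: int) -> list:
    """Map the number of microphones to the audio functions they enable."""
    caps = ["sound_collection"]          # one microphone: basic capture
    if num_mics >= 2:
        caps.append("noise_reduction")   # two microphones: noise reduction
    if num_mics >= 3:
        # three or more: source identification and directional recording
        caps += ["sound_source_identification", "directional_recording"]
    return caps

assert mic_capabilities(1) == ["sound_collection"]
assert "noise_reduction" in mic_capabilities(2)
assert "directional_recording" in mic_capabilities(4)
```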
  • the headphone jack 1470D is used to connect wired headphones.
  • The earphone interface 1470D may be the USB interface 1430, or may be a 3.5 mm open mobile terminal platform (OMTP) standard interface or a Cellular Telecommunications Industry Association of the USA (CTIA) standard interface.
  • the pressure sensor 1480A is used to sense pressure signals, and can convert the pressure signals into electrical signals.
  • pressure sensor 1480A may be provided on display screen 1494 .
  • The capacitive pressure sensor may comprise at least two parallel plates of conductive material. When a force is applied to the pressure sensor 1480A, the capacitance between the electrodes changes.
  • the electronic device 1400 determines the intensity of the pressure according to the change in capacitance. When a touch operation acts on the display screen 1494, the electronic device 1400 detects the intensity of the touch operation according to the pressure sensor 1480A.
  • the electronic device 1400 may also calculate the touched position according to the detection signal of the pressure sensor 1480A.
  • touch operations acting on the same touch position but with different touch operation intensities may correspond to different operation instructions. For example, when a touch operation whose intensity is less than the first pressure threshold acts on the short message application icon, the instruction for viewing the short message is executed. When a touch operation with a touch operation intensity greater than or equal to the first pressure threshold acts on the short message application icon, the instruction to create a new short message is executed.
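The threshold-dependent behavior described above (view a message below the first pressure threshold, create a new one at or above it) can be sketched as follows. The threshold value, icon name, and instruction names are illustrative assumptions:

```python
FIRST_PRESSURE_THRESHOLD = 0.5  # illustrative normalized pressure value

def dispatch_touch(icon: str, intensity: float) -> str:
    """Choose an instruction from the touch position plus its pressure."""
    if icon == "short_message":
        if intensity < FIRST_PRESSURE_THRESHOLD:
            return "view_short_message"      # light press: view
        return "create_new_short_message"    # firm press: compose
    return "open_application"

assert dispatch_touch("short_message", 0.2) == "view_short_message"
assert dispatch_touch("short_message", 0.8) == "create_new_short_message"
```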
  • the gyro sensor 1480B can be used to determine the motion attitude of the electronic device 1400 .
  • the angular velocity of electronic device 1400 about three axes may be determined by gyro sensor 1480B.
  • the gyro sensor 1480B can be used for image stabilization. Exemplarily, when the shutter is pressed, the gyro sensor 1480B detects the shaking angle of the electronic device 1400, calculates the distance that the lens module needs to compensate according to the angle, and allows the lens to counteract the shaking of the electronic device 1400 through reverse motion to achieve anti-shake.
  • the gyroscope sensor 1480B can also be used for navigation and somatosensory game scenarios.
  • Air pressure sensor 1480C is used to measure air pressure. In some embodiments, the electronic device 1400 calculates the altitude from the air pressure value measured by the air pressure sensor 1480C to assist in positioning and navigation.
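One common way to compute altitude from a barometric reading is the international barometric formula; the patent does not specify the calculation, so the sketch below is an assumption based on a standard sea-level pressure of 1013.25 hPa:

```python
SEA_LEVEL_PRESSURE_HPA = 1013.25  # standard atmosphere; assumed reference

def altitude_from_pressure(pressure_hpa: float) -> float:
    """Estimate altitude (meters) via the international barometric formula."""
    return 44330.0 * (1.0 - (pressure_hpa / SEA_LEVEL_PRESSURE_HPA) ** (1.0 / 5.255))

assert abs(altitude_from_pressure(1013.25)) < 1e-6     # sea level -> ~0 m
assert abs(altitude_from_pressure(899.0) - 1000) < 50  # roughly 1000 m
```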
  • Magnetic sensor 1480D includes a Hall sensor.
  • the electronic device 1400 can detect the opening and closing of the flip holster using the magnetic sensor 1480D.
  • In some other embodiments, the electronic device 1400 can detect the opening and closing of a flip cover using the magnetic sensor 1480D, and can then set features such as automatic unlocking of the flip cover according to the detected open or closed state of the holster or flip cover.
  • the acceleration sensor 1480E can detect the magnitude of the acceleration of the electronic device 1400 in various directions (generally three axes).
  • When the electronic device 1400 is stationary, the magnitude and direction of gravity can be detected. The acceleration sensor 1480E can also be used to identify the posture of the electronic device, and is applied in scenarios such as landscape/portrait switching and pedometers.
  • The distance sensor 1480F is used to measure distance. The electronic device 1400 can measure distance by infrared or laser. In some embodiments, for example in a shooting scene, the electronic device 1400 can use the distance sensor 1480F to measure distance for fast focusing.
  • Proximity light sensor 1480G may include, for example, light emitting diodes (LEDs) and light detectors, such as photodiodes.
  • the light emitting diodes may be infrared light emitting diodes.
  • the electronic device 1400 emits infrared light to the outside through light emitting diodes.
  • Electronic device 1400 uses photodiodes to detect infrared reflected light from nearby objects. When sufficient reflected light is detected, it may be determined that there is an object near the electronic device 1400 . When insufficient reflected light is detected, the electronic device 1400 may determine that there is no object near the electronic device 1400 .
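The detection logic above amounts to thresholding the reflected infrared light; a minimal sketch, with the threshold and function names invented for illustration:

```python
def object_nearby(reflected_light: float, threshold: float = 0.6) -> bool:
    """Sufficient reflected IR light implies an object near the device."""
    return reflected_light >= threshold

def should_turn_off_screen(in_call: bool, reflected_light: float) -> bool:
    """Auto screen-off when the phone is held to the ear during a call."""
    return in_call and object_nearby(reflected_light)

assert should_turn_off_screen(True, 0.9)        # ear close during a call
assert not should_turn_off_screen(True, 0.1)    # insufficient reflected light
assert not should_turn_off_screen(False, 0.9)   # not in a call
```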
  • the electronic device 1400 can use the proximity light sensor 1480G to detect that the user holds the electronic device 1400 close to the ear to talk, so as to automatically turn off the screen to save power.
  • The proximity light sensor 1480G can also be used in a holster mode or a pocket mode to automatically unlock and lock the screen.
  • the ambient light sensor 1480L is used to sense ambient light brightness.
  • the electronic device 1400 can adaptively adjust the brightness of the display screen 1494 according to the perceived ambient light brightness.
  • the ambient light sensor 1480L can also be used to automatically adjust the white balance when taking pictures.
  • the ambient light sensor 1480L can also cooperate with the proximity light sensor 1480G to detect whether the electronic device 1400 is in the pocket to prevent accidental touch.
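Combining the ambient light sensor with the proximity light sensor for brightness adjustment and pocket detection might look like the sketch below; the lux breakpoints and brightness levels are invented for illustration:

```python
def adjust_brightness(ambient_lux: float) -> float:
    """Map ambient brightness to a display brightness level (0.0-1.0)."""
    if ambient_lux < 10:
        return 0.2       # dim room: low brightness
    if ambient_lux < 1000:
        return 0.5       # typical indoor lighting
    return 1.0           # direct sunlight

def in_pocket(ambient_lux: float, proximity_near: bool) -> bool:
    """Dark environment plus a close object suggests the phone is pocketed."""
    return ambient_lux < 5 and proximity_near

assert adjust_brightness(2000) == 1.0
assert in_pocket(1, True)            # dark and covered: likely in a pocket
assert not in_pocket(500, True)      # bright: not a pocket scenario
```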
  • the fingerprint sensor 1480H is used to collect fingerprints.
  • the electronic device 1400 can use the collected fingerprint characteristics to realize fingerprint unlocking, accessing application locks, taking photos with fingerprints, answering incoming calls with fingerprints, and the like.
  • the temperature sensor 1480J is used to detect the temperature.
  • The electronic device 1400 executes a temperature processing strategy using the temperature detected by the temperature sensor 1480J. For example, when the temperature reported by the temperature sensor 1480J exceeds a threshold, the electronic device 1400 reduces the performance of a processor located near the temperature sensor 1480J, in order to lower power consumption and implement thermal protection.
  • In some other embodiments, when the temperature is lower than another threshold, the electronic device 1400 heats the battery 1442 to avoid an abnormal shutdown caused by low temperature.
  • In some other embodiments, when the temperature is lower than still another threshold, the electronic device 1400 boosts the output voltage of the battery 1442 to avoid an abnormal shutdown caused by low temperature.
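The tiered temperature processing strategy described in the last few bullets can be sketched as a simple policy function; all threshold values and action names below are invented for illustration, not taken from the patent:

```python
HIGH_TEMP_C = 45.0       # illustrative thresholds, not from the patent
LOW_TEMP_C = 0.0
VERY_LOW_TEMP_C = -10.0

def thermal_policy(temp_c: float) -> str:
    """Pick a protective action from the reported temperature."""
    if temp_c > HIGH_TEMP_C:
        return "reduce_processor_performance"  # thermal protection
    if temp_c < VERY_LOW_TEMP_C:
        return "boost_battery_voltage"         # avoid low-temp shutdown
    if temp_c < LOW_TEMP_C:
        return "heat_battery"                  # avoid low-temp shutdown
    return "normal_operation"

assert thermal_policy(50) == "reduce_processor_performance"
assert thermal_policy(-5) == "heat_battery"
assert thermal_policy(-20) == "boost_battery_voltage"
assert thermal_policy(25) == "normal_operation"
```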
  • The touch sensor 1480K is also called a "touch panel".
  • the touch sensor 1480K can be disposed on the display screen 1494, and the touch sensor 1480K and the display screen 1494 form a touch screen, also called a "touch screen”.
  • the touch sensor 1480K is used to detect touch operations on or near it.
  • the touch sensor can pass the detected touch operation to the application processor to determine the type of touch event.
  • Visual output related to touch operations may be provided through display screen 1494 .
  • the touch sensor 1480K may also be disposed on the surface of the electronic device 1400 , which is different from the location where the display screen 1494 is located.
  • the bone conduction sensor 1480M can acquire vibration signals.
  • the bone conduction sensor 1480M can acquire the vibration signal of the vibrating bone mass of the human voice.
  • The bone conduction sensor 1480M can also be in contact with the human pulse and receive a blood-pressure beating signal.
  • The bone conduction sensor 1480M may also be disposed in an earphone to form a bone conduction earphone.
  • the audio module 1470 can parse out the voice signal based on the vibration signal of the vocal vibration bone block obtained by the bone conduction sensor 1480M, so as to realize the voice function.
  • the application processor can analyze the heart rate information based on the blood pressure beat signal obtained by the bone conduction sensor 1480M, and realize the function of heart rate detection.
  • The keys 1490 include a power key, a volume key, and the like. The keys 1490 may be mechanical keys or touch keys.
  • the electronic device 1400 may receive key inputs and generate key signal inputs related to user settings and function control of the electronic device 1400 .
  • Motor 1491 can generate vibrating cues.
  • the motor 1491 can be used for vibrating alerts for incoming calls, and can also be used for touch vibration feedback.
  • touch operations acting on different applications can correspond to different vibration feedback effects.
  • the motor 1491 can also correspond to different vibration feedback effects for touch operations in different areas of the display screen 1494 .
  • Touch operations in different application scenarios (for example, time reminders, receiving messages, alarm clocks, and games) can also correspond to different vibration feedback effects.
  • the touch vibration feedback effect can also support customization.
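The per-scenario vibration effects with user customization could be modeled as a lookup with overrides. The scenario and effect names below are illustrative placeholders:

```python
DEFAULT_EFFECTS = {
    "incoming_call": "long_pulse",
    "time_reminder": "double_tick",
    "alarm_clock": "repeating_pulse",
    "game": "short_tick",
}

def feedback_effect(scenario, custom_effects=None):
    """User-customized effects override the defaults for a scenario."""
    if custom_effects and scenario in custom_effects:
        return custom_effects[scenario]
    return DEFAULT_EFFECTS.get(scenario, "default_click")

assert feedback_effect("alarm_clock") == "repeating_pulse"
assert feedback_effect("alarm_clock", {"alarm_clock": "gentle"}) == "gentle"
assert feedback_effect("unknown_app") == "default_click"
```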
  • The indicator 1492 may be an indicator light, which can be used to indicate the charging status and battery level changes, and can also be used to indicate messages, missed calls, notifications, and the like.
  • the SIM card interface 1495 is used to connect a SIM card.
  • the SIM card can be inserted into the SIM card interface 1495 or pulled out from the SIM card interface 1495 to achieve contact with and separation from the electronic device 1400 .
  • the electronic device 1400 may support 1 or N SIM card interfaces, where N is a positive integer greater than 1.
  • the SIM card interface 1495 can support Nano SIM card, Micro SIM card, SIM card and so on.
  • the same SIM card interface 1495 can insert multiple cards at the same time.
  • the types of the plurality of cards may be the same or different.
  • the SIM card interface 1495 can also be compatible with different types of SIM cards.
  • the SIM card interface 1495 is also compatible with external memory cards.
  • the electronic device 1400 interacts with the network through the SIM card to implement functions such as call and data communication.
  • In some embodiments, the electronic device 1400 employs an eSIM, i.e., an embedded SIM card.
  • The eSIM card can be embedded in the electronic device 1400 and cannot be separated from the electronic device 1400.
  • the software system of the electronic device 1400 may adopt a layered architecture, an event-driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture.
  • The embodiments of the present invention take the Android system with a layered architecture as an example to illustrate the software structure of the electronic device 1400.
  • FIG. 15 is a block diagram of a software structure of an electronic device 1400 according to an embodiment of the present invention.
  • the software structure shown in Figure 15 can be used to implement the software system architecture shown in Figures 1B and 2B.
  • the layered architecture divides the software into several layers, and each layer has a clear role and division of labor. Layers communicate with each other through software interfaces.
  • the Android system is divided into four layers, which are, from top to bottom, an application layer, an application framework layer, an Android runtime (Android runtime) and a system library, and a kernel layer.
  • the application layer can include a series of application packages.
  • the application package can include applications such as camera, gallery, calendar, call, map, navigation, WLAN, Bluetooth, music, video, short message and so on.
  • The applications may also include the application 111 and/or the application 121 shown in FIG. 1B, or may include the application 211 and/or the application 212 shown in FIG. 2B (not shown in FIG. 15 for simplicity).
  • the application framework layer provides an application programming interface (API) and a programming framework for the applications of the application layer.
  • the application framework layer includes some predefined functions.
  • the application framework layer may include a window manager, a content provider, a view system, a telephony manager, a resource manager, a notification manager, a media service, a multi-screen interactive service, and the like.
  • a window manager is used to manage window programs.
  • the window manager can get the size of the display screen, determine whether there is a status bar, lock the screen, take screenshots, etc.
  • Content providers are used to store and retrieve data and make these data accessible to applications.
  • the data may include video, images, audio, calls made and received, browsing history and bookmarks, phone book, etc.
  • the view system includes visual controls, such as controls for displaying text, controls for displaying pictures, and so on. View systems can be used to build applications.
  • a display interface can consist of one or more views.
  • the display interface including the short message notification icon may include a view for displaying text and a view for displaying pictures.
  • the phone manager is used to provide the communication function of the electronic device 1400 .
  • For example, the telephony manager manages call status (including connecting, hanging up, etc.).
  • the resource manager provides various resources for the application, such as localization strings, icons, pictures, layout files, video files and so on.
  • The notification manager enables applications to display notification information in the status bar. It can be used to convey notification-type messages, which may disappear automatically after a brief pause without user interaction. For example, the notification manager is used to notify download completion, message reminders, and the like.
  • The notification manager can also display notifications in the status bar at the top of the system in the form of graphs or scroll-bar text, such as notifications of applications running in the background, or display notifications on the screen in the form of dialog windows. For example, text information is prompted in the status bar, a prompt sound is issued, the electronic device vibrates, or the indicator light flashes.
  • The media service may be, for example, the media service 112 shown in FIG. 1B, used to support the operation of the application 111 (e.g., a video conference application, a video playback application, an office application, or another presentation application), or may be the media service 213 shown in FIG. 2B, used to support the operation of the application 211 (e.g., a video conference application, a video playback application, an office application, or another presentation application).
  • The multi-screen interaction service may be, for example, the multi-screen interaction service 113 shown in FIG. 1B, used to support multi-screen interaction between the application 111 and the application 121, or may be the multi-screen interaction service 214 shown in FIG. 2B, used to support multi-screen interaction between the application 211 and the application 212.
  • Android Runtime includes core libraries and a virtual machine. Android runtime is responsible for scheduling and management of the Android system.
  • The core library consists of two parts: one part is the functions that the Java language needs to call, and the other part is the core libraries of Android.
  • the application layer and the application framework layer run in virtual machines.
  • The virtual machine executes the Java files of the application layer and the application framework layer as binary files.
  • the virtual machine is used to perform functions such as object lifecycle management, stack management, thread management, safety and exception management, and garbage collection.
  • A system library can include multiple functional modules, for example, a surface manager, media libraries, a 3D graphics processing library (e.g., OpenGL ES), and a 2D graphics engine (e.g., SGL).
  • the Surface Manager is used to manage the display subsystem and provides a fusion of 2D and 3D layers for multiple applications.
  • the media library supports playback and recording of a variety of commonly used audio and video formats, as well as still image files.
  • the media library can support a variety of audio and video encoding formats, such as: MPEG4, H.264, MP3, AAC, AMR, JPG, PNG, etc.
  • the 3D graphics processing library is used to implement 3D graphics drawing, image rendering, compositing, and layer processing.
  • 2D graphics engine is a drawing engine for 2D drawing.
  • the kernel layer is the layer between hardware and software.
  • the kernel layer contains at least display drivers, camera drivers, audio drivers, and sensor drivers.
  • The kernel layer may include the display driver 114, the GPU driver 115, the Bluetooth driver 116 or 123, and the WIFI driver 117 or 124 shown in FIG. 1B; or may include the display driver 215, the GPU driver 216, the Bluetooth driver 217, and the WIFI driver 218 (not shown in FIG. 15 for simplicity).
  • the workflow of the software and hardware of the electronic device 1400 will be exemplarily described in conjunction with the capturing and photographing scene.
  • the touch sensor 1480K receives a touch operation
  • the corresponding hardware interrupt is sent to the kernel layer.
  • the kernel layer processes touch operations into raw input events (including touch coordinates, timestamps of touch operations, etc.).
  • Raw input events are stored at the kernel layer.
  • The application framework layer obtains the raw input event from the kernel layer and identifies the control corresponding to the input event. For example, assuming the touch operation is a tap and the corresponding control is the camera application icon, the camera application calls the interface of the application framework layer to start the camera application, which in turn starts the camera driver by calling the kernel layer.
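The kernel-to-framework flow for a tap on the camera icon can be sketched as a tiny pipeline; the screen regions, event shape, and action names are hypothetical placeholders, not the actual Android input stack:

```python
import time

def make_raw_input_event(x, y):
    """Kernel layer: package a touch into a raw input event."""
    return {"x": x, "y": y, "timestamp": time.time()}

ICON_REGIONS = {"camera": (0, 0, 100, 100)}  # illustrative screen layout

def identify_control(event):
    """Framework layer: map touch coordinates to the control that was hit."""
    for name, (x0, y0, x1, y1) in ICON_REGIONS.items():
        if x0 <= event["x"] <= x1 and y0 <= event["y"] <= y1:
            return name
    return None

def handle_tap(event):
    """Start the camera app, which starts the driver via the kernel layer."""
    if identify_control(event) == "camera":
        return ["start_camera_application", "start_camera_driver", "capture"]
    return []

assert handle_tap(make_raw_input_event(50, 50))[0] == "start_camera_application"
assert handle_tap(make_raw_input_event(500, 500)) == []
```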
  • the camera 1493 captures still images or video.
  • the present disclosure may be a method, apparatus, system and/or computer program product.
  • the computer program product may include a computer-readable storage medium having computer-readable program instructions loaded thereon for carrying out various aspects of the present disclosure.
  • a computer-readable storage medium may be a tangible device that can hold and store instructions for use by the instruction execution device.
  • the computer-readable storage medium may be, for example, but not limited to, an electrical storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing.
  • A non-exhaustive list of computer-readable storage media includes: a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), static random access memory (SRAM), portable compact disk read-only memory (CD-ROM), digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as a punched card or an in-groove raised structure having instructions stored thereon, and any suitable combination of the foregoing.
  • Computer-readable storage media are not to be construed as transient signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through waveguides or other transmission media (e.g., light pulses through fiber-optic cables), or electrical signals transmitted through electrical wires.
  • the computer readable program instructions described herein may be downloaded to various computing/processing devices from a computer readable storage medium, or to an external computer or external storage device over a network such as the Internet, a local area network, a wide area network, and/or a wireless network.
  • the network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers.
  • A network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
  • The computer program instructions for carrying out the operations of the present disclosure may be assembly instructions, instruction set architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or source or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk and C++, and conventional procedural programming languages such as the "C" language or similar programming languages.
  • The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server.
  • The remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
  • In some embodiments, custom electronic circuits, such as programmable logic circuits, field programmable gate arrays (FPGAs), or programmable logic arrays (PLAs), can be personalized by utilizing state information of the computer-readable program instructions, and these electronic circuits can execute the computer-readable program instructions to implement various aspects of the present disclosure.
  • These computer-readable program instructions may be provided to a processing unit of a general-purpose computer, a special-purpose computer, or another programmable data processing apparatus to produce a machine, such that the instructions, when executed by the processing unit of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
  • These computer-readable program instructions may also be stored in a computer-readable storage medium. These instructions cause a computer, a programmable data processing apparatus, and/or other equipment to operate in a specific manner, so that the computer-readable medium storing the instructions comprises an article of manufacture that includes instructions implementing various aspects of the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
  • Computer-readable program instructions may also be loaded onto a computer, another programmable data processing apparatus, or other equipment, so that a series of operational steps are performed on the computer, the other programmable data processing apparatus, or the other equipment to produce a computer-implemented process, whereby the instructions executed on the computer, the other programmable data processing apparatus, or the other device implement the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
  • Each block in the flowcharts or block diagrams may represent a module, a program segment, or a portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s).
  • the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
  • Each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or actions, or by a combination of dedicated hardware and computer instructions.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Embodiments of the present disclosure provide a multi-screen interaction system and method, a device, and a medium. In this solution, in response to receiving a first trigger operation of a user, a second device sends a request for multi-screen interaction to a first device, the request comprising a request time point. According to the request, the first device sends a response to the second device, the response comprising at least one display frame of a first screen of the first device near the request time point. The second device receives the response and displays a first interface on a second screen, the first interface comprising the at least one display frame. The second device receives a user input indicating the user's selection and/or editing of a target display frame among the at least one display frame. In response to receiving a second trigger operation of the user, the second device sends the target display frame or the edited target display frame to the first device for display on the first screen. This solution allows multiple types of interaction between different screens.
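The request/response flow summarized in the abstract can be simulated end to end in a short sketch. The classes, frame contents, and selection rule below are hypothetical placeholders, not the claimed implementation:

```python
class FirstDevice:
    """Holds the first screen and the display frames shown on it."""
    def __init__(self, frames):
        self.frames = frames   # time point -> display frame content
        self.screen = None

    def handle_request(self, request_time):
        """Respond with display frames near the requested time point."""
        window = (request_time - 1, request_time, request_time + 1)
        return [self.frames[t] for t in window if t in self.frames]

    def show(self, frame):
        self.screen = frame

class SecondDevice:
    def __init__(self, first_device):
        self.first = first_device

    def interact(self, request_time, select, edit):
        candidates = self.first.handle_request(request_time)  # first trigger
        target = select(candidates)     # user selects a target display frame
        edited = edit(target)           # user edits (e.g., annotates) it
        self.first.show(edited)         # second trigger: back to first screen
        return edited

first = FirstDevice({9: "slide-9", 10: "slide-10", 11: "slide-11"})
second = SecondDevice(first)
result = second.interact(10, lambda c: c[1], lambda f: f + "+annotation")
assert result == "slide-10+annotation"
assert first.screen == "slide-10+annotation"
```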

Description

System, method, apparatus and medium for multi-screen interaction
This application claims priority to the Chinese patent application No. 202010857539.4, entitled "System, Method, Apparatus and Medium for Multi-screen Interaction", filed with the State Intellectual Property Office on August 24, 2020, the entire contents of which are incorporated herein by reference.
Technical Field
Embodiments of the present disclosure relate to the field of device interaction, and more particularly, to a system, method, apparatus, and medium for multi-screen interaction.
Background
With the popularization of multi-screen devices and distributed devices, multi-screen interaction technology is increasingly widely applied. Current multi-screen interaction technologies mainly include multi-screen sharing and screen content marking. Multi-screen sharing generally refers to sharing screen content between a master device and a slave device via a wired or wireless connection. Screen content marking generally means that a second screen, while synchronously displaying screen content from a first screen, marks that content by means of local screenshots, photos, and the like.
However, current multi-screen interaction technologies can neither share multiple types of information between different screens nor enable further interaction between different screens after the shared screen content has been marked.
SUMMARY OF THE INVENTION
In general, embodiments of the present disclosure provide a system, method, apparatus, device, and computer-readable storage medium for multi-screen interaction, so that multiple types of information can be shared between different screens and further interaction between different screens can be implemented based on marked screen content.
In a first aspect of the present disclosure, a system for multi-screen interaction is provided. The system includes a first device comprising a first screen, and a second device comprising a second screen, wherein: the first device displays first content on the first screen, the first content comprising a plurality of display frames; the second device receives a first trigger operation of a user; in response to the received first trigger operation, the second device sends a request for multi-screen interaction to the first device, the request including a request time point; the first device sends a response to the second device according to the request, the response including at least one display frame of the first screen near the request time point; the second device receives the response and displays a first interface on the second screen, the first interface including the at least one display frame; the second device receives a user input indicating the user's selection and/or editing of a target display frame among the at least one display frame; the second device receives a second trigger operation of the user; in response to the received second trigger operation, the second device sends the target display frame or the edited target display frame to the first device; and the first device displays the target display frame or the edited target display frame on the first screen. In this way, screen content selected and/or edited by the user can be shared between different screens.
In some embodiments, in response to receiving the target display frame or the edited target display frame, the first device displays, on the first screen, a prompt asking whether to allow the target display frame to be shared. The first device receives another user input, and in response to the other user input indicating that the user allows the sharing, the first device displays the target display frame or the edited target display frame on the first screen. In this way, screen content can be shared between different screens under user control.
In some embodiments, the second device receives a control command input by a user. In response to the received control command, the second device sends the control command to the first device for controlling the display of the first screen, and the first device displays the target display frame or the edited target display frame on the first screen according to the control command. In this way, multiple types of interaction can be implemented between different screens based on the user's control commands.
In some embodiments, the control command includes any one of the following: a first control command for displaying the edited target display frame on the first screen; a second control command for displaying the target display frame on the first screen; a third control command for playing the user's question about the target display frame while displaying the target display frame on the first screen; and a fourth control command for playing a recording segment corresponding to the target display frame while displaying the edited target display frame on the first screen. In this way, multiple types of interaction can be implemented between different screens based on the user's control commands.
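The four control commands lend themselves to a simple dispatch on the first-device side. The following sketch is illustrative only; the enum names and the (display, audio) return convention are assumptions of this sketch, not part of the disclosure.

```python
from enum import Enum, auto


class ControlCommand(Enum):
    SHOW_EDITED_FRAME = auto()                      # first control command
    SHOW_ORIGINAL_FRAME = auto()                    # second control command
    SHOW_FRAME_AND_PLAY_QUESTION = auto()           # third control command
    SHOW_EDITED_FRAME_AND_PLAY_RECORDING = auto()   # fourth control command


def handle_control_command(command, frame, edited_frame,
                           question=None, recording=None):
    """Return (frame to display on the first screen, audio to play, if any)."""
    if command is ControlCommand.SHOW_EDITED_FRAME:
        return (edited_frame, None)
    if command is ControlCommand.SHOW_ORIGINAL_FRAME:
        return (frame, None)
    if command is ControlCommand.SHOW_FRAME_AND_PLAY_QUESTION:
        return (frame, question)
    if command is ControlCommand.SHOW_EDITED_FRAME_AND_PLAY_RECORDING:
        return (edited_frame, recording)
    raise ValueError(f"unknown control command: {command}")
```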
In some embodiments, the user input further indicates the user's question about the target display frame. In response to the received second trigger operation, the second device sends the question to the first device, and the first device plays the question while displaying the target display frame or the edited target display frame on the first screen. In this way, a user's questions about screen content can be shared between different devices.
In some embodiments, the response includes a recording corresponding to the at least one display frame; the first interface includes a visual representation of the recording; and the user input indicates the user's selection of a recording segment from the recording, the recording segment corresponding to the target display frame. In this way, the user can select the screen content to be shared by selecting a recording segment.
In some embodiments, in response to the received second trigger operation, the second device sends the recording segment to the first device, and the first device plays the recording segment while displaying the target display frame or the edited target display frame on the first screen. In this way, recording segments corresponding to screen content can be shared between different devices.
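Selecting a recording segment and resolving the display frame(s) it corresponds to reduces to filtering both streams by the user-selected time interval. A minimal sketch, assuming timestamped samples and frame timestamps (both names and the data layout are illustrative assumptions):

```python
def select_recording_segment(recording, frame_timestamps, start, end):
    """Return the samples of `recording` within the user-selected [start, end]
    interval, plus the indices of the display frames whose timestamps fall in
    that interval (i.e., the target display frame(s))."""
    # recording: list of (timestamp, sample) pairs
    segment = [sample for (t, sample) in recording if start <= t <= end]
    targets = [i for i, t in enumerate(frame_timestamps) if start <= t <= end]
    return segment, targets
```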
In some embodiments, the first device receives a third trigger operation from the user, and in response to the received third trigger operation, the first device stops displaying the target display frame or the edited target display frame on the first screen and redisplays the first content on the first screen. In this way, the user can take back control of the screen and terminate the multi-screen interaction.
In some embodiments, before the second device sends the request to the first device, the first device establishes a connection for multi-screen interaction with the second device. In this way, a connection for multi-screen interaction can be established between different devices.
In a second aspect of the present disclosure, a method for multi-screen interaction is provided. The method includes: in response to receiving a first trigger operation from a user, a second device sending a request for multi-screen interaction to a first device, the request including a request time point; the second device receiving a response from the first device, the response including at least one display frame of a first screen of the first device near the request time point; the second device displaying a first interface on a second screen of the second device, the first interface including the at least one display frame; the second device receiving a user input indicating the user's selection and/or editing of a target display frame among the at least one display frame; and in response to receiving a second trigger operation from the user, the second device sending the target display frame or the edited target display frame to the first device for display on the first screen.
In some embodiments, the method further includes: the second device receiving a control command input by the user; and in response to the received control command, the second device sending the control command to the first device for controlling the display of the first screen.

In some embodiments, the control command includes any one of the following: a first control command for displaying the edited target display frame on the first screen; a second control command for displaying the target display frame on the first screen; a third control command for playing the user's question about the target display frame while displaying the target display frame on the first screen; and a fourth control command for playing a recording segment corresponding to the target display frame while displaying the edited target display frame on the first screen.

In some embodiments, the user input further indicates the user's question about the target display frame, and the method further includes: in response to receiving the second trigger operation, the second device sending the question to the first device.

In some embodiments, the response includes a recording corresponding to the at least one display frame; the first interface includes a visual representation of the recording; and the user input indicates the user's selection of a recording segment from the recording, the recording segment corresponding to the target display frame.

In some embodiments, the method further includes: in response to receiving the second trigger operation, the second device sending the recording segment to the first device.

In some embodiments, the method further includes: before the second device sends the request to the first device, the second device establishing a connection for multi-screen interaction with the first device.
In a third aspect of the present disclosure, a method for multi-screen interaction is provided. The method includes: a first device receiving a request for multi-screen interaction from a second device, where the first device displays first content on a first screen, the first content includes a plurality of display frames, and the request includes a request time point; the first device sending a response to the second device according to the request, the response including at least one display frame of the first screen near the request time point; the first device receiving, from the second device, a target display frame or an edited target display frame, the target display frame being selected from the at least one display frame; and the first device displaying the target display frame or the edited target display frame on the first screen.
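To serve display frames "near the request time point", the first device must retain recent frames as they are rendered. One way to do this is a fixed-capacity timestamped buffer; the sketch below is an illustrative assumption (the class name, capacity, and window are not specified by the disclosure).

```python
import collections
import time


class FrameBuffer:
    """Fixed-capacity buffer of timestamped display frames kept by the first
    device so that frames near a later request time point can be served."""

    def __init__(self, capacity=120):
        # Oldest frames are evicted automatically once capacity is reached.
        self._frames = collections.deque(maxlen=capacity)

    def capture(self, frame, timestamp=None):
        ts = timestamp if timestamp is not None else time.time()
        self._frames.append((ts, frame))

    def near(self, request_time, window=5.0):
        # All retained frames within `window` seconds of the request time point.
        return [f for (t, f) in self._frames
                if abs(t - request_time) <= window]
```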
In some embodiments, displaying the target display frame or the edited target display frame on the first screen includes: in response to receiving the target display frame or the edited target display frame, the first device displaying, on the first screen, a prompt asking whether to allow the target display frame to be shared; the first device receiving a user input; and in response to the user input indicating that the user allows the sharing, the first device displaying the target display frame or the edited target display frame on the first screen.

In some embodiments, displaying the target display frame or the edited target display frame on the first screen includes: the first device receiving a control command from the second device, the control command being used to control the display of the first screen; and the first device displaying the target display frame or the edited target display frame on the first screen according to the control command.

In some embodiments, the control command includes any one of the following: a first control command for displaying the edited target display frame on the first screen; a second control command for displaying the target display frame on the first screen; a third control command for playing the user's question about the target display frame while displaying the target display frame on the first screen; and a fourth control command for playing a recording segment corresponding to the target display frame while displaying the edited target display frame on the first screen.

In some embodiments, the method further includes: the first device receiving, from the second device, a question about the target display frame; and the first device playing the question while displaying the target display frame or the edited target display frame on the first screen.

In some embodiments, the response includes a recording corresponding to the at least one display frame, and the method further includes: the first device receiving a recording segment from the second device, the recording segment being selected from the recording and corresponding to the target display frame; and the first device playing the recording segment while displaying the target display frame or the edited target display frame on the first screen.

In some embodiments, the method further includes: the first device receiving a trigger operation from a user; and in response to the received trigger operation, the first device stopping displaying the target display frame or the edited target display frame on the first screen and redisplaying the first content on the first screen.

In some embodiments, the method further includes: before the first device receives the request from the second device, the first device establishing a connection for multi-screen interaction with the second device.
In a fourth aspect of the present disclosure, an apparatus for multi-screen interaction is provided. The apparatus includes: a request sending unit configured to send, in response to receiving a first trigger operation from a user, a request for multi-screen interaction to a first device, the request including a request time point; a response receiving unit configured to receive a response from the first device, the response including at least one display frame of a first screen of the first device near the request time point; a screen display unit configured to display a first interface on a second screen of a second device, the first interface including the at least one display frame; a user input receiving unit configured to receive a user input indicating the user's selection and/or editing of a target display frame among the at least one display frame; and a display frame sending unit configured to send, in response to receiving a second trigger operation from the user, the target display frame or the edited target display frame to the first device for display on the first screen.

In some embodiments, the apparatus further includes: a control command receiving unit configured to receive a control command input by the user; and a control command sending unit configured to send, in response to the received control command, the control command to the first device for controlling the display of the first screen.

In some embodiments, the control command includes any one of the following: a first control command for displaying the edited target display frame on the first screen; a second control command for displaying the target display frame on the first screen; a third control command for playing the user's question about the target display frame while displaying the target display frame on the first screen; and a fourth control command for playing a recording segment corresponding to the target display frame while displaying the edited target display frame on the first screen.

In some embodiments, the user input further indicates the user's question about the target display frame, and the apparatus further includes: a question sending unit configured to send the question to the first device in response to receiving the second trigger operation.

In some embodiments, the response includes a recording corresponding to the at least one display frame; the first interface includes a visual representation of the recording; and the user input indicates the user's selection of a recording segment from the recording, the recording segment corresponding to the target display frame.

In some embodiments, the apparatus further includes: a recording segment sending unit configured to send the recording segment to the first device in response to receiving the second trigger operation.

In some embodiments, the apparatus further includes: a connection establishing unit configured to establish a connection for multi-screen interaction with the first device before the request is sent to the first device.
In a fifth aspect of the present disclosure, an apparatus for multi-screen interaction is provided. The apparatus includes: a screen display unit configured to display first content on a first screen, the first content including a plurality of display frames; a request receiving unit configured to receive a request for multi-screen interaction from a second device, the request including a request time point; a response sending unit configured to send a response to the second device according to the request, the response including at least one display frame of the first screen near the request time point; and a display frame receiving unit configured to receive, from the second device, a target display frame or an edited target display frame, the target display frame being selected from the at least one display frame. The screen display unit is further configured to display the target display frame or the edited target display frame on the first screen.

In some embodiments, the screen display unit includes: a first display unit configured to display, on the first screen, in response to receiving the target display frame or the edited target display frame, a prompt asking whether to allow the target display frame to be shared; a user input receiving unit configured to receive a user input; and a second display unit configured to display the target display frame or the edited target display frame on the first screen in response to the user input indicating that the user allows the sharing.

In some embodiments, the screen display unit includes: a control command receiving unit configured to receive a control command from the second device, the control command being used to control the display of the first screen; and a third display unit configured to display the target display frame or the edited target display frame on the first screen according to the control command.

In some embodiments, the control command includes any one of the following: a first control command for displaying the edited target display frame on the first screen; a second control command for displaying the target display frame on the first screen; a third control command for playing the user's question about the target display frame while displaying the target display frame on the first screen; and a fourth control command for playing a recording segment corresponding to the target display frame while displaying the edited target display frame on the first screen.

In some embodiments, the apparatus further includes: a question receiving unit configured to receive, from the second device, a question about the target display frame; and a question playing unit configured to play the question while the target display frame or the edited target display frame is displayed on the first screen.

In some embodiments, the response includes a recording corresponding to the at least one display frame, and the apparatus further includes: a recording segment receiving unit configured to receive a recording segment from the second device, the recording segment being selected from the recording and corresponding to the target display frame; and a recording segment playing unit configured to play the recording segment while the target display frame or the edited target display frame is displayed on the first screen.

In some embodiments, the apparatus further includes: an operation receiving unit configured to receive a trigger operation from a user; and a fourth display unit configured to, in response to the received trigger operation, stop displaying the target display frame or the edited target display frame on the first screen and redisplay the first content on the first screen.

In some embodiments, the apparatus further includes: a connection establishing unit configured to establish a connection for multi-screen interaction with the second device before the request is received from the second device.
In a sixth aspect of the present disclosure, an electronic device is provided. The electronic device includes: one or more processors; one or more memories; and one or more computer programs. The one or more computer programs are stored in the one or more memories and include instructions that, when executed by the electronic device, cause the electronic device to perform the method of the second aspect or the third aspect.

In a seventh aspect of the present disclosure, a computer-readable storage medium is provided. The computer-readable storage medium stores a computer program that, when executed by a processor, implements the method of the second aspect or the third aspect.

It should be understood that the content described in this Summary is not intended to identify key or essential features of the embodiments of the present disclosure, nor is it intended to limit the scope of the present disclosure. Other features of the present disclosure will become readily understood from the following description.
Description of Drawings
FIG. 1A shows a block diagram of an example system according to an embodiment of the present disclosure;

FIG. 1B shows a software system architecture diagram of an example system according to an embodiment of the present disclosure;

FIG. 2A shows a block diagram of another example system according to an embodiment of the present disclosure;

FIG. 2B shows a software system architecture diagram of another example system according to an embodiment of the present disclosure;

FIG. 3 shows a schematic diagram of establishing a connection between a master device and a slave device according to an embodiment of the present disclosure;

FIG. 4 shows a signaling interaction diagram of establishing a connection between a master device and a slave device according to an embodiment of the present disclosure;

FIG. 5 shows a schematic diagram of triggering screen interaction between a master device and a slave device according to an embodiment of the present disclosure;

FIG. 6 shows a signaling interaction diagram of multi-screen interaction between a master device and a slave device according to an embodiment of the present disclosure;

FIG. 7 shows a schematic diagram of acquiring screen-content-related information from a screen buffer and an audio buffer at a master device according to an embodiment of the present disclosure;

FIGS. 8A to 8F show schematic diagrams of example user interfaces for multi-screen interaction according to embodiments of the present disclosure;

FIG. 9 shows a signaling interaction diagram of multi-screen interaction between different screens of the same device according to an embodiment of the present disclosure;

FIG. 10 shows a flowchart of an example method for multi-screen interaction according to an embodiment of the present disclosure;

FIG. 11 shows a flowchart of an example method for multi-screen interaction according to an embodiment of the present disclosure;

FIG. 12 shows a block diagram of an example apparatus for multi-screen interaction according to an embodiment of the present disclosure;

FIG. 13 shows a block diagram of an example apparatus for multi-screen interaction according to an embodiment of the present disclosure;

FIG. 14 shows a block diagram of an example device suitable for implementing embodiments of the present disclosure; and

FIG. 15 shows a block diagram of a software structure of an example device suitable for implementing embodiments of the present disclosure.
Throughout the drawings, the same or corresponding reference numerals denote the same or corresponding parts.
Detailed Description
The principles of the present disclosure will now be described with reference to some example embodiments. It should be understood that these embodiments are described for illustrative purposes only and to help those skilled in the art understand and implement the present disclosure, without implying any limitation on the scope of the present disclosure. The disclosure described herein can be implemented in various ways other than those described below.

In the following description and claims, unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs.

As used herein, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. The term "include" and its variants should be read as open-ended terms meaning "including but not limited to". The term "based on" should be read as "based at least in part on". The terms "one embodiment" and "an embodiment" should be read as "at least one embodiment". The term "another embodiment" should be read as "at least one other embodiment". The terms "first", "second", and the like may refer to different or the same objects. Other definitions, explicit and implicit, may be included below.

In some examples, a value, process, or apparatus is referred to as "best", "lowest", "highest", "minimum", "maximum", or the like. It should be understood that such descriptions are intended to indicate that a choice may be made among many functional alternatives in use, and that such a choice need not be better, smaller, higher, or otherwise preferable to other choices.
如上所述,目前的屏幕内容标记技术通常是指第二屏幕在同步显示来自第一屏幕的屏幕内容的情况下,通过本地截图、拍照等方式对来自第一屏幕的屏幕内容进行标记。As mentioned above, the current screen content marking technology generally refers to that the second screen marks the screen content from the first screen by means of local screenshots, photos, etc. under the condition of synchronously displaying the screen content from the first screen.
当第一屏幕和第二屏幕分别来自主设备和从设备时,从设备需要同步显示主设备上的第一屏幕的屏幕内容,例如幻灯片或者视频等。当从设备期望对屏幕内容进行标记时,其可以通过本地截图、拍照等方式来获取要标记的屏幕内容,然后针对所获取的图片进行编辑。在上述过程中,从设备仅能获得屏幕截图,而无法获得主设备显示对应屏幕内容时的其他类型的信息,例如音频数据。此外,上述屏幕内容标记的操作较为繁琐,并且主设备和从设备无法基于经标记的屏幕内容进行进一步交互。When the first screen and the second screen come from the master device and the slave device, respectively, the slave device needs to synchronously display the screen content of the first screen on the master device, such as a slideshow or video. When the slave device desires to mark the screen content, it can obtain the screen content to be marked by taking local screenshots, taking pictures, etc., and then edit the obtained picture. In the above process, the slave device can only obtain screenshots, but cannot obtain other types of information, such as audio data, when the master device displays the corresponding screen content. In addition, the above operation of marking the screen content is cumbersome, and the master device and the slave device cannot perform further interaction based on the marked screen content.
当第一屏幕和第二屏幕是来自同一设备(例如,折叠屏设备)的不同屏幕时,两个屏幕可以分别用于运行不同应用。当利用第二屏幕运行的应用期望对第一屏幕的屏幕内容进行标记时,其可以通过本地截图、拍照等方式来获取要标记的屏幕内容,然后针对所获取的图片进行编辑。上述屏幕内容标记的操作较为繁琐,并且不同应用之间无法基于经标记的屏幕内容进行进一步交互。When the first screen and the second screen are different screens from the same device (eg, a folding screen device), the two screens may be used to run different applications, respectively. When an application running on the second screen wishes to mark the screen content of the first screen, it can obtain the screen content to be marked by taking local screenshots, taking pictures, etc., and then edit the obtained picture. The above screen content marking operation is cumbersome, and further interaction between different applications based on the marked screen content cannot be performed.
Embodiments of the present disclosure provide a solution for multi-screen interaction. For the scenario in which the first screen and the second screen belong to a master device and a slave device, respectively, the solution does not require the slave device to synchronously display the screen content of the master device. The solution enables multiple types of information, such as pictures, audio, and video, to be shared between different screens, and enables further interaction between the screens based on the marked screen content. For the scenario in which the first screen and the second screen are different screens of the same device (for example, a folding-screen device), the solution likewise enables multiple types of information to be shared between the screens and enables further interaction between them based on the marked screen content.
FIG. 1A shows a block diagram of an example system 100 according to an embodiment of the present disclosure. As shown in FIG. 1A, the system 100 includes a master device 110 and one or more slave devices 120 (only one is shown in FIG. 1). An application 111 runs on the master device 110, for example using the screen of the master device 110. An application 121 runs on the slave device 120, for example using the screen of the slave device 120. The master device 110 and the slave device 120 may be devices of the same type or of different types. Examples of the master device 110 or the slave device 120 include, but are not limited to, non-portable devices such as personal computers, laptop computers, projectors, and televisions, and portable devices such as handheld terminals, smartphones, wireless data cards, tablet computers, and wearable devices. Examples of the application 111 include, but are not limited to, video conferencing applications, video playback applications, office applications (for example, slideshow or word-processing applications), and other presentation applications. The application 121 may be a multi-screen interaction application that can interact with the application 111 on the master device 110 according to embodiments of the present disclosure. Herein, the application 111 is also referred to as the "first application", and the screen of the master device 110 is also referred to as the "first screen" or "master screen". The application 121 is also referred to as the "second application", and the screen of the slave device 120 is also referred to as the "second screen" or "slave screen".
FIG. 1B shows a software system architecture diagram of the example system 100 according to an embodiment of the present disclosure. As shown in FIG. 1B, the software system architecture of the master device 110 can be divided into an application layer, a framework layer, and a driver layer. The application layer may include the application 111. The framework layer may include a media service 112 for supporting the operation of the application 111 (for example, a video conferencing application, a video playback application, an office application, or another presentation application). The framework layer may further include a multi-screen interaction service 113 for supporting multi-screen interaction between the application 111 and the application 121. The driver layer may include, for example, a screen driver 114 and a graphics processing unit (GPU) driver 115 for supporting the display of the application 111 on the screen of the master device 110. The driver layer may further include, for example, a Bluetooth driver 116, a Wi-Fi driver 117, and a near field communication (NFC) driver (not shown) for establishing a communication connection between the master device 110 and the slave device 120.
As shown in FIG. 1B, the software system architecture of the slave device 120 can likewise be divided into an application layer, a framework layer, and a driver layer. The application layer may include the application 121. The framework layer may include a multi-screen interaction service 122 for supporting multi-screen interaction between the application 111 and the application 121. The driver layer may include, for example, a Bluetooth driver 123, a Wi-Fi driver 124, and a near field communication (NFC) driver (not shown) for establishing a communication connection between the master device 110 and the slave device 120. The driver layer may further include, for example, a screen driver and a GPU driver (not shown) for supporting the display of the application 121 on the screen of the slave device 120.
It should be understood that the software system architecture shown in FIG. 1B is merely exemplary and is independent of the device's operating system. That is, the software system architecture shown in FIG. 1B can be implemented on devices running different operating systems, including but not limited to the Windows, Android, and iOS operating systems. The software layering in the architecture described above is also exemplary and is not intended to limit the scope of the present disclosure. For example, in some embodiments, the multi-screen interaction service 113 may be integrated into the application 111, and the multi-screen interaction service 122 may be integrated into the application 121.
FIG. 2A shows a block diagram of another example system 200 according to an embodiment of the present disclosure. As shown in FIG. 2A, the system 200 includes a device 210; the device 210 may include multiple screens, or the screen of the device 210 may be divided into multiple areas. Applications 211 and 212 run on the device 210, where the application 211 may run using a first screen or first screen area of the device 210, and the application 212 may run using a second screen or second screen area of the device 210. Examples of the application 211 include, but are not limited to, video conferencing applications, video playback applications, office applications (for example, slideshow or word-processing applications), and other presentation applications. The application 212 may be a multi-screen interaction application that can interact with the application 211 according to embodiments of the present disclosure. Herein, the application 211 is also referred to as the "first application", and the screen or screen area it uses is also referred to as the "first screen" or "master screen". The application 212 is also referred to as the "second application", and the screen or screen area it uses is also referred to as the "second screen" or "slave screen".
FIG. 2B shows a software system architecture diagram of the example system 200 according to an embodiment of the present disclosure. As shown in FIG. 2B, the software system architecture of the device 210 can be divided into an application layer, a framework layer, and a driver layer. The application layer may include the applications 211 and 212. The framework layer may include a media service 213 for supporting the operation of the application 211 (for example, a video conferencing application, a video playback application, an office application, or another presentation application). The framework layer may further include a multi-screen interaction service 214 for supporting multi-screen interaction between the application 211 and the application 212. The driver layer may include, for example, a screen driver 215 and a GPU driver 216 for supporting the display of the applications 211 and 212 on different screens.
It should be understood that the software system architecture shown in FIG. 2B is merely exemplary and is independent of the device's operating system. That is, the software system architecture shown in FIG. 2B can be implemented on devices running different operating systems, including but not limited to the Windows, Android, and iOS operating systems. The software layering described above is also exemplary and is not intended to limit the scope of the present disclosure. For example, in some embodiments, the multi-screen interaction service 214 may be integrated into the applications 211 and 212.
Embodiments of the present disclosure are first described in detail below in connection with the example system 100 shown in FIGS. 1A and 1B.
To achieve multi-screen interaction between the master device 110 and the slave device 120, a connection for multi-screen interaction first needs to be established between the master device 110 and the slave device 120. The connection may be established by any means such as Bluetooth, Wi-Fi, NFC, or two-dimensional (QR) code scanning. QR code scanning is used as an example in the following description.
FIG. 3 shows a schematic diagram of establishing a connection between the master device and the slave device according to an embodiment of the present disclosure. In FIG. 3, for purposes of illustration, the master device 110 is shown as a laptop computer and the slave device 120 as a mobile phone. In some embodiments, when the application 111 on the master device 110 enables the multi-screen interaction function, it can display a QR code as shown in FIG. 3, instructing the slave device to scan the code to establish a connection for multi-screen interaction. When launched, the multi-screen interaction application 121 on the slave device 120 may display a user interface 121-1 as shown in FIG. 3, which includes a "Scan to pair" button. When the user taps the "Scan to pair" button, the slave device 120 can present a QR code scanning window to scan the QR code displayed on the master device 110. By scanning the QR code, the master device 110 can establish a connection with the slave device 120 for multi-screen interaction.
FIG. 4 shows a signaling diagram of establishing a connection between the master device and the slave device according to an embodiment of the present disclosure. FIG. 4 involves the applications 111 and 121 and the multi-screen interaction services 113 and 122 shown in FIG. 1B.
As shown in FIG. 4, when the application 111 enables the multi-screen interaction function, the application 111 may send (401) a binding service request to the multi-screen interaction service 113 so that the multi-screen interaction service 113 can provide it with the multi-screen interaction service. The application 111 may display (402) a QR code on the master screen, instructing the slave device to scan the code to establish a connection for multi-screen interaction. Similarly, when the application 121 is launched, the application 121 may send (403) a binding service request to the multi-screen interaction service 122 so that the multi-screen interaction service 122 can provide it with the multi-screen interaction service. The application 121 may present a QR code scanning window on the slave screen to scan (404) the QR code displayed on the master device 110. In this way, a communication connection can be established between the master device 110 and the slave device 120. It should be understood that the connection establishment process shown in steps 401-404 is merely exemplary and is not intended to limit the scope of the present disclosure. In some embodiments, the master device 110 and the slave device 120 may establish a communication connection in other ways. Alternatively, if a communication connection has already been established between the master device 110 and the slave device 120 by some means, steps 401-404 may be omitted.
Once the communication connection between the master device 110 and the slave device 120 has been established, the application 121 can perform a handshake with the application 111 to establish a connection for multi-screen interaction. As shown in FIG. 4, the application 121 may send (405) a request for establishing a multi-screen interaction connection to the multi-screen interaction service 122. The multi-screen interaction service 122 may forward (406) the request to the multi-screen interaction service 113, which further forwards (407) it to the application 111. The application 111 may generate (408) an application information packet. In some embodiments, the content of the generated application information packet may differ depending on the type of the application 111. For a video playback application, for example, the packet may include the source address of the video being played, the video name, the number of video frames, the playback speed, and the current playback time point. For a slideshow or document editing application, for example, the packet may include the file address, the file name, and the currently displayed page number. The application 111 may send (409) the application information packet to the multi-screen interaction service 113. The multi-screen interaction service 113 may forward (410) the packet to the multi-screen interaction service 122, which further forwards (411) it to the application 121. In this way, the handshake between the application 121 and the application 111 is completed and a connection for multi-screen interaction is established. It should be understood that the process of establishing the connection for multi-screen interaction shown in steps 405-411 is merely exemplary and is not intended to limit the scope of the present disclosure. In some embodiments, steps 405-407 may be omitted; that is, the application 111 may actively generate the application information packet and send it to the application 121. Alternatively, in other embodiments, steps 408-411 may be omitted; that is, the handshake is completed when the application 111 receives the request for establishing the multi-screen interaction connection from the application 121.
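As a concrete illustration of the two kinds of application information packet described above, the following sketch builds them as plain dictionaries. All field and function names here are assumptions for clarity; the disclosure only enumerates the kinds of information each packet may carry.

```python
# Illustrative sketch of the application information packets (step 408).
# Field names are assumptions, not part of the disclosure.

def make_video_info_packet(source_url, name, frame_count, speed, position):
    """Packet a video playback application (such as application 111) might
    generate: source address, video name, frame count, speed, time point."""
    return {
        "app_type": "video",
        "source_url": source_url,    # source address of the video being played
        "video_name": name,
        "frame_count": frame_count,  # number of video frames
        "playback_speed": speed,
        "position": position,        # current playback time point (seconds)
    }

def make_document_info_packet(file_path, file_name, page):
    """Packet a slideshow or document editing application might generate:
    file address, file name, currently displayed page number."""
    return {
        "app_type": "document",
        "file_path": file_path,      # file address
        "file_name": file_name,
        "page": page,                # currently displayed page number
    }
```

During the handshake, the application 111 would serialize such a packet and hand it to the multi-screen interaction service 113 for forwarding (steps 409-411).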
Once the connection for multi-screen interaction has been established, screen interaction between the master device and the slave device can be triggered. FIG. 5 shows a schematic diagram of triggering screen interaction between the master device and the slave device according to an embodiment of the present disclosure.
As shown in FIG. 5, after the multi-screen interaction connection between the master device 110 and the slave device 120 has been established, the application 111 can run normally on the master device 110. A video playback application, for example, can use the screen of the master device 110 to play a video; a slideshow or document editing application can use the screen to present a slideshow or a document. In some embodiments, the multi-screen interaction service 113 may establish a screen buffer for caching the display content of the screen of the master device 110. For example, the multi-screen interaction service 113 may periodically (for example, every 100 ms) add a captured screen display frame to the screen buffer; a display frame may be obtained by taking a screenshot or by other means. The screen buffer may be implemented, for example, as a ring buffer; that is, it caches only the most recent fixed number of display frames, and when the buffer is full, each newly added frame overwrites the oldest frame in the buffer. Additionally or alternatively, in some embodiments, the multi-screen interaction service 113 may start recording audio when the application 111 is launched, for example recording the speech of a presenter while a slideshow or document is being shown. The multi-screen interaction service 113 may establish an audio buffer for caching the audio stream recorded while the application 111 is running. In some embodiments, the audio buffer may likewise be implemented as a ring buffer; that is, it caches only the most recent fixed length of recorded audio, and when the buffer is full, newly added audio data overwrites the oldest data in the buffer. The purpose of the screen buffer and the audio buffer is to absorb the communication delay between the slave device 120 and the master device 110, so that when the master device 110 receives a multi-screen interaction request from the slave device 120, it can use the timestamp carried in the request to locate the target display content with which the slave device 120 wishes to interact, together with the corresponding recording segment.
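The overwrite-oldest behaviour of the screen and audio ring buffers described above can be sketched in a few lines; in Python, `collections.deque` with a `maxlen` gives exactly this semantics. The class and method names below are illustrative assumptions, not the disclosure's.

```python
from collections import deque

class RingBuffer:
    """Fixed-capacity buffer of (timestamp, data) entries. When full, each
    newly added entry overwrites the oldest one, matching the screen/audio
    buffer behaviour described above."""

    def __init__(self, capacity):
        # A deque with maxlen silently drops the oldest entry when full.
        self._entries = deque(maxlen=capacity)

    def add(self, timestamp, data):
        self._entries.append((timestamp, data))

    def window(self, start, end):
        """Return the entries captured with start <= timestamp <= end."""
        return [(t, d) for (t, d) in self._entries if start <= t <= end]

# A screen buffer holding the 3 most recent frames, captured every 100 ms:
frames = RingBuffer(capacity=3)
for t in (0.0, 0.1, 0.2, 0.3):
    frames.add(t, f"frame@{t}")
# The frame captured at t=0.0 has been overwritten by the frame at t=0.3.
```

The timestamp stored with each entry is what later lets the master device map a request time point back to the buffered frames and audio.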
As shown in FIG. 5, the user interface of the second application 121 on the slave device 120 is updated from the user interface 121-1 shown in FIG. 3 to the user interface 121-2 shown in FIG. 5. For example, in the user interface 121-2, the "Scan to pair" button may be disabled or hidden, and only an "Interact with master screen" button is displayed. The user of the slave device 120 may be, for example, in the same space (for example, an office or conference room) as the master device 110 and its user. While the user operating the master device 110 is using it to present the content to be interacted with, the user of the slave device 120 can watch the screen of the master device 110 and can trigger interaction with that screen by tapping the "Interact with master screen" button. It should be understood that the user of the slave device 120 may also trigger interaction with the screen of the master device 110 in other ways, including but not limited to a gesture such as a double tap on the screen of the slave device 120, a voice command such as "take a note", or another external device. The scope of the present disclosure is not limited in this regard.
FIG. 6 shows a signaling diagram of multi-screen interaction between the master device and the slave device according to an embodiment of the present disclosure. FIG. 6 involves the applications 111 and 121 and the multi-screen interaction services 113 and 122 shown in FIG. 1B.
As shown in FIG. 6, when the user of the slave device 120 triggers interaction with the master screen by any means such as a gesture, a voice command, or an external device, in response to receiving (601) the trigger signal, the application 121 may send (602) to the multi-screen interaction service 122 a request for multi-screen interaction between the master device 110 and the slave device 120 (also referred to herein as the "first request"). The request may indicate the time point at which the user of the slave device 120 requested the interaction (also referred to herein as the "request time point") and/or the request type (for example, "multi-screen interaction"). The multi-screen interaction service 122 may forward (603) the request to the multi-screen interaction service 113, which may further forward (604) it to the application 111.
The application 111 may generate (605) an updated application information packet. Compared with the application information packet sent when the connection was established, the updated packet may include additional information relating to the content displayed on the master screen at the request time point. In some embodiments, in accordance with the first request, the application 111 may obtain, from the screen buffer established by the multi-screen interaction service 113, at least one display frame of the master screen within a predetermined time period around the request time point, and may obtain, from the audio buffer established by the multi-screen interaction service 113, the recording data corresponding to the at least one display frame. The application 111 may generate the updated application information packet based on at least one of the obtained display frame(s) and recording data.
FIG. 7 shows a schematic diagram of obtaining screen-content-related information from the screen buffer and the audio buffer at the master device according to an embodiment of the present disclosure. FIG. 7 shows a screen buffer 730 at the master device 110, which caches a plurality of display frames 701-710 of the master screen. FIG. 7 also shows an audio buffer 760 at the master device 110, which caches audio data 761 recorded while the application 111 is running. Suppose, for example, that the request time point indicated by the first request received by the application 111 is T0. As shown in FIG. 7, the application 111 can obtain, from the screen buffer 730, the display frames 702-706 of the master screen within a predetermined time period T before the request time point T0, and can obtain, from the audio buffer 760, the recording data 762 corresponding to the predetermined time period T. It should be understood that, in other embodiments, the obtained display frames may instead be a predetermined number of frames around the request time point T0, or the single frame immediately preceding T0. The obtained recording data is the recording data corresponding to the obtained display frames. For example, when recording the display frames and the audio data, the multi-screen interaction service 113 may determine the recording segment corresponding to each display frame based on the time at which that frame was captured. In this way, the recording data corresponding to the obtained display frames can be determined from the start time and end time of those frames.
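The lookup described above, selecting the frames captured in the window [T0 - T, T0] and the slice of the recording covering the same window, can be sketched as follows. The function and parameter names are illustrative assumptions; the recording is modeled as a flat list of samples starting at a known time with a fixed sample rate.

```python
def interaction_context(frames, rec_start, samples, rate, t0, period):
    """Return the display frames captured in the window [t0 - period, t0]
    and the slice of recorded audio covering the same window.

    frames    -- list of (timestamp, frame) pairs from the screen buffer
    rec_start -- time at which the buffered recording begins
    samples   -- recorded audio samples, `rate` samples per second
    t0        -- request time point; period -- predetermined time period T
    """
    win_start = t0 - period
    # Frames whose capture time falls inside the window.
    selected = [(t, f) for (t, f) in frames if win_start <= t <= t0]
    # Convert the window bounds to sample indices into the recording.
    lo = max(0, int((win_start - rec_start) * rate))
    hi = max(lo, int((t0 - rec_start) * rate))
    return selected, samples[lo:hi]
```

Because each frame carries its capture timestamp, the same window bounds locate both the target display frames and the matching recording segment, which is exactly why the buffers store timestamped entries.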
Referring back to FIG. 6, the application 111 may send (606) the updated application information packet to the multi-screen interaction service 113 as a response to the first request. The multi-screen interaction service 113 may forward (607) the packet to the multi-screen interaction service 122, which further forwards (608) it to the application 121.
In response to receiving the application information packet, the application 121 may display, on the screen of the slave device 120, the at least one display frame included in the packet together with a visual representation of the recording data.
FIG. 8A shows a schematic diagram of an example user interface 121-3 of the application 121. As shown in FIG. 8A, the user interface 121-3 may present the received display frames 702-706 and a visual representation of the recording data 762. The user of the slave device 120 can make a selection among the display frames 702-706 or within the recording 762, so as to choose the display frame with which interaction is desired (also referred to herein as the "target display frame" or "target display content") and its corresponding recording segment. For example, if the user selects the display frame 704 as the target display content, the application 121 can determine the segment of the recording data 762 corresponding to the display frame 704 and/or the time point corresponding to the display frame 704 (also referred to herein as the "target time point"). As another example, if the user selects a segment of the recording data 762, the application 121 can determine the target display frame corresponding to that segment and/or the target time point corresponding to it. In some embodiments, as shown in FIG. 8A, the user interface 121-3 may also provide "Confirm selection" and "Reselect" buttons. When the user taps "Reselect", the application 121 can discard the user's previous selection and accept new user input. When the user taps "Confirm selection", the user interface 121-3 shown in FIG. 8A can be updated to the user interface 121-4 shown in FIG. 8B.
As shown in FIG. 8B, the user interface 121-4 may present the selected target display frame 704. Additionally or alternatively, the user interface 121-4 may also present the recording segment corresponding to the target display frame 704 (not shown in FIG. 8B). The user interface 121-4 may further provide "Edit", "Share", and "Ask" buttons. When the user taps "Edit", the user can edit the target display frame 704, including but not limited to cropping, modifying, and marking it. When the user taps "Ask", the user can input a question about the target display frame 704 by voice or other means; for example, the application 121 may record and save the user's spoken question. When the user taps "Share", the application 121 can generate a request for interacting with the target display content (also referred to herein as the "second request") based on one or more of the determined target time point, the unedited target display content, the edited target display content, the recording segment corresponding to the target display frame, and the recorded question, and send it to the application 111. It should be understood that the user of the slave device 120 may also trigger sharing of the editing result and/or the question recording in other ways, including but not limited to a gesture such as swiping up on the screen of the slave device 120, a voice command such as "share notes", or another external device. The scope of the present disclosure is not limited in this regard.
Referring back to FIG. 6, in response to the user triggering sharing by clicking the "Share" button, the application 121 may generate (609) the second request based on one or more of the determined target time point, the edited target display frame, the recording segment corresponding to the target display frame, and the recorded question, and send (610) it to the multi-screen interaction service 122. In some embodiments, the second request may indicate only the target time point and/or the request type (for example, "share notes"). One or more of the edited target display frame, the recording segment corresponding to the target display frame, and the recorded question may be included in an updated application information packet and sent together with the second request. The multi-screen interaction service 122 may forward (611) the second request together with the updated application information packet to the multi-screen interaction service 113, which further forwards (612) them to the application 111.
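The split between the lightweight second request and the updated application information packet that travels alongside it can be illustrated with a minimal sketch; the field names and the "share_notes" type string are assumptions for illustration only:

```python
def build_second_request(target_time_ms, edited_frame=None,
                         recording_segment=None, question_audio=None):
    """The second request itself carries only the target time point and the
    request type; any optional payloads (edited frame, recording segment,
    question recording) travel in an updated application information packet
    sent alongside it."""
    request = {"type": "share_notes", "target_time_ms": target_time_ms}
    packet = {key: value for key, value in {
        "edited_frame": edited_frame,
        "recording_segment": recording_segment,
        "question_audio": question_audio,
    }.items() if value is not None}
    return request, packet

request, packet = build_second_request(12_000, edited_frame=b"frame-704-edited")
```

Keeping the request small lets the intermediate multi-screen interaction services forward it cheaply while the bulkier media items ride in the packet.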
In response to receiving the second request, the application 111 may display a prompt regarding the second request on the screen of the master device 110 to ask the user of the master device 110 whether to allow the sharing. In response to receiving (613) user input indicating that the sharing is allowed, the application 111 may send (614-616) a notification to the application 121 via the multi-screen interaction services 113 and 122, so as to notify the application 121 to send subsequent control commands to the application 111.

In response to receiving the notification, the application 121 may display visual representations of a plurality of candidate control commands on the screen of the slave device 120. The user of the slave device 120 may select one control command from the plurality of candidate control commands. In response to receiving (617) user input indicating the selected control command, the application 121 may send (618-620) the control command to the application 111 via the multi-screen interaction services 122 and 113. The application 111 may perform (621) an operation related to the target display content according to the control command.
FIGS. 8C to 8F show schematic diagrams of an example user interface 121-5 of the application 121 and the corresponding user interfaces of the master device 110. For example, when the user of the slave device 120 clicks the "Edit" button in the user interface 121-4 to edit the target display frame 704 and then clicks the "Share" button, the application 121 may present the user interface 121-5 on the screen of the slave device 120. In FIGS. 8C to 8F, the user interface 121-5 may present the edited target display frame 704' and buttons corresponding to candidate control commands 801-804. For example, the control command 801 instructs displaying the edited target display frame 704' on the master screen. The control command 802 instructs jumping back to the target display frame 704 (that is, the unedited target display frame) on the master screen. The control command 803 instructs jumping back to the target display frame 704 on the master screen while playing the recorded question concerning the target display frame 704. The control command 804 instructs displaying the edited target display frame 704' on the master screen while playing the recording segment corresponding to the target display frame 704. As shown in FIG. 8C, when the user of the slave device 120 clicks the button corresponding to the control command 801, the application 111 may suspend its normal operation and display the edited target display frame 704' on the screen of the master device 110. As shown in FIG. 8D, when the user of the slave device 120 clicks the button corresponding to the control command 802, the application 111 may jump back to the target display frame 704 on the screen of the master device 110. For example, when the application 111 is a slideshow application, it may jump back to the slide page corresponding to the target time point; when the application 111 is a video playback application, it may make the video playback jump back to the position corresponding to the target time point. As shown in FIG. 8E, when the user of the slave device 120 clicks the button corresponding to the control command 803, the application 111 may redisplay the target display frame 704 on the screen of the master device 110 while playing the recorded question concerning the target display frame 704. As shown in FIG. 8F, when the user of the slave device 120 clicks the button corresponding to the control command 804, the application 111 may suspend its normal operation and display the edited target display frame 704' on the screen of the master device 110 while playing the recording segment corresponding to the target display frame 704.
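The four candidate control commands 801-804 can be summarized as a small dispatch table on the master side; the numeric values mirror the reference numerals, while the state strings are illustrative assumptions:

```python
from enum import Enum

class ControlCommand(Enum):
    SHOW_EDITED = 801              # display edited frame 704' on the master screen
    JUMP_BACK = 802                # jump back to the unedited frame 704
    JUMP_BACK_WITH_QUESTION = 803  # jump back and play the question recording
    SHOW_EDITED_WITH_AUDIO = 804   # show edited frame and play the recording segment

def handle_command(command):
    """Return (what the master screen shows, which audio plays)."""
    return {
        ControlCommand.SHOW_EDITED: ("edited_frame", None),
        ControlCommand.JUMP_BACK: ("original_frame", None),
        ControlCommand.JUMP_BACK_WITH_QUESTION: ("original_frame", "question_recording"),
        ControlCommand.SHOW_EDITED_WITH_AUDIO: ("edited_frame", "recording_segment"),
    }[command]
```

Each command thus pairs one of the two frame variants (original or edited) with at most one audio item, which matches the four combinations shown in FIGS. 8C to 8F.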
Referring back to FIG. 6, the user of the master device 110 may take back control of the master screen in various ways, such as a voice command or a shortcut key (for example, Ctrl+Alt+M). In response to receiving a user command to take back control of the master screen, the application 111 may stop (622) responding to control commands from the application 121.

In this way, for scenarios in which multiple screens belong to different devices, embodiments of the present disclosure do not require the slave device to synchronously display the screen content of the master device. Embodiments of the present disclosure enable multiple types of information, such as pictures and audio, to be shared among multiple screens. Furthermore, embodiments of the present disclosure enable further interaction between different screens based on edited screen content.

Embodiments of the present disclosure are also applicable to scenarios in which multiple screens belong to the same device (for example, a foldable-screen device). Embodiments of the present disclosure will be described in detail below in conjunction with the example system 200 shown in FIGS. 2A and 2B.

FIG. 9 shows a signaling diagram of multi-screen interaction between different screens of the same device according to an embodiment of the present disclosure. FIG. 9 involves the applications 211 and 212 and the multi-screen interaction service 214 shown in FIGS. 2A and 2B.
As shown in FIG. 9, when the user of the device 210 triggers interaction between the master and slave screens in any manner, such as a gesture, a voice command or an external device, in response to receiving (901) the trigger signal, the application 212 may send (902-903), via the multi-screen interaction service 214, a request for multi-screen interaction between the master and slave screens (also referred to herein as the "first request") to the application 211. The request may indicate a request time point and/or a request type (for example, "multi-screen interaction").

The application 211 may generate (904) an application information packet, which may include information related to the content displayed on the master screen at the request time point. In some embodiments, according to the first request, the application 211 may obtain, from a screen buffer maintained by the multi-screen interaction service 214, at least one display frame of the master screen within a predetermined time period around the request time point, and may obtain, from an audio buffer maintained by the multi-screen interaction service 214, the recording data corresponding to the at least one display frame. The application 211 may generate the application information packet based on at least one of the obtained at least one display frame and the recording data.
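The retrieval of display frames "within a predetermined time period around the request time point" from the screen buffer can be sketched as a simple timestamp filter; the window size, the buffer layout and all names are assumptions for illustration:

```python
def frames_in_window(screen_buffer, request_time_ms, window_ms=2_000):
    """Return the buffered (timestamp, frame) pairs whose capture time
    falls within +/- window_ms of the request time point."""
    return [(ts, frame) for ts, frame in screen_buffer
            if abs(ts - request_time_ms) <= window_ms]

# A screen buffer as the multi-screen interaction service might maintain it.
buffer = [(900, "f1"), (2_500, "f2"), (4_000, "f3"), (9_000, "f4")]
selected = frames_in_window(buffer, 3_000)
```

The matching audio could be cut from the audio buffer with the same timestamp window so that frames and recording data stay aligned.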
The application 211 may send (905-906) the application information packet to the application 212 via the multi-screen interaction service 214 as a response to the first request. In response to receiving the application information packet, the application 212 may display, on the slave screen, the at least one display frame included in the application information packet as well as a visual representation of the recording data, for the user to select from. The user interface displayed on the slave screen may, for example, be the same as or similar to the user interfaces shown in FIGS. 8A and 8B. After selecting the target display frame and its corresponding recording segment, the user may edit the target display frame, ask a question concerning the target display frame, and/or trigger the sharing of the editing result and/or the question recording.
In response to the user triggering the sharing, the application 212 may generate (907) the second request based on one or more of the determined target time point, the edited target display frame, the recording segment corresponding to the target display frame, and the recorded question, and send (908-909) it to the application 211 via the multi-screen interaction service 214. In some embodiments, the second request may indicate only the target time point and/or the request type (for example, "share notes"). One or more of the edited target display frame, the recording segment corresponding to the target display frame, and the recorded question may be included in an updated application information packet and sent together with the second request.

In response to receiving the second request, the application 211 may display a prompt regarding the second request on the master screen to ask the user whether to allow the sharing. In response to receiving (910) user input indicating that the sharing is allowed, the application 211 may send (911-912) a notification to the application 212 via the multi-screen interaction service 214, so as to notify the application 212 to send subsequent control commands to the application 211.

In response to receiving the notification, the application 212 may display visual representations of a plurality of candidate control commands on the slave screen. The user interface displayed on the slave screen may, for example, be the same as or similar to the user interface 121-5 shown in FIGS. 8C to 8F. The user may select one control command from the plurality of candidate control commands. In response to receiving (913) user input indicating the selected control command, the application 212 may send (914-915) the control command to the application 211 via the multi-screen interaction service 214. The application 211 may perform (916) an operation related to the target display content according to the control command.

After the sharing is completed, the user may take back control of the master screen in various ways, such as a voice command or a shortcut key (for example, Ctrl+Alt+M). In response to receiving a user command to take back control of the master screen, the application 211 may stop (917) responding to control commands from the application 212.

In this way, for scenarios in which different screens belong to the same device (for example, a foldable-screen device), embodiments of the present disclosure enable multiple types of information to be shared between different screens, and enable further interaction between different screens based on annotated screen content.
FIG. 10 shows a flowchart of an example method 1000 for multi-screen interaction according to an embodiment of the present disclosure. The method 1000 may be performed, for example, by a first device, such as the master device 110 shown in FIGS. 1A and 1B. In addition, the second device is, for example, the slave device 120 shown in FIGS. 1A and 1B. It should be understood that the method 1000 may further include additional actions not shown and/or may omit actions that are shown. The scope of the present disclosure is not limited in this regard.
At block 1010, the first device receives a request for multi-screen interaction from the second device. The first device displays first content on a first screen, the first content includes a plurality of display frames, and the request includes a request time point.

At block 1020, the first device sends a response to the second device according to the request. The response includes at least one display frame of the first screen around the request time point.

At block 1030, the first device receives a target display frame or an edited target display frame from the second device. The target display frame is selected from the at least one display frame.

At block 1040, the first device displays the target display frame or the edited target display frame on the first screen.
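Blocks 1010-1040 above can be sketched as a minimal first-device handler; the ±1 s window, dictionary transport and class names are illustrative assumptions rather than the method's prescribed form:

```python
class FirstDevice:
    """Blocks 1010-1040: receive the request, answer with nearby frames,
    then display the (possibly edited) target frame that comes back."""
    def __init__(self, frames_by_time):
        self.frames_by_time = frames_by_time  # capture time (ms) -> frame
        self.shown = None

    def on_request(self, request):  # blocks 1010 and 1020
        t = request["request_time_ms"]
        near = {ts: f for ts, f in self.frames_by_time.items()
                if abs(ts - t) <= 1_000}
        return {"frames": near}

    def on_target_frame(self, frame):  # blocks 1030 and 1040
        self.shown = frame

device = FirstDevice({1_000: "frame-a", 2_000: "frame-b", 5_000: "frame-c"})
response = device.on_request({"request_time_ms": 1_500})
device.on_target_frame("frame-b-edited")
```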
In some embodiments, in response to receiving the target display frame or the edited target display frame, the first device may display, on the first screen, a prompt asking whether to allow the sharing of the target display frame. The first device may receive user input and, in response to the user input indicating that the user allows the sharing, display the target display frame or the edited target display frame on the first screen.

In some embodiments, the first device may receive a control command from the second device, the control command being used to control the display of the first screen. The first device may display the target display frame or the edited target display frame on the first screen according to the control command.

In some embodiments, the control command includes any one of the following: a first control command for displaying the edited target display frame on the first screen; a second control command for displaying the target display frame on the first screen; a third control command for displaying the target display frame on the first screen while playing the user's question concerning the target display frame; and a fourth control command for displaying the edited target display frame on the first screen while playing the recording segment corresponding to the target display frame.

In some embodiments, the first device may receive, from the second device, a question concerning the target display frame. The first device may play the question while displaying the target display frame or the edited target display frame on the first screen.

In some embodiments, the response may include a recording corresponding to the at least one display frame. The first device may receive a recording segment from the second device, the recording segment being selected from the recording and corresponding to the target display frame. The first device may play the recording segment while displaying the target display frame or the edited target display frame on the first screen.

In some embodiments, the first device may receive a trigger operation from the user. In response to the received trigger operation, the first device may stop displaying the target display frame or the edited target display frame on the first screen and redisplay the first content on the first screen.

In some embodiments, before receiving the request from the second device, the first device may establish a connection for multi-screen interaction with the second device.
FIG. 11 shows a flowchart of an example method 1100 for multi-screen interaction according to an embodiment of the present disclosure. The method 1100 may be performed, for example, by a second device, such as the slave device 120 shown in FIGS. 1A and 1B. In addition, the first device is, for example, the master device 110 shown in FIGS. 1A and 1B. It should be understood that the method 1100 may further include additional actions not shown and/or may omit actions that are shown. The scope of the present disclosure is not limited in this regard.
At block 1110, in response to receiving a first trigger operation from the user, the second device sends a request for multi-screen interaction to the first device. The request includes a request time point.

At block 1120, the second device receives a response from the first device. The response includes at least one display frame of a first screen of the first device around the request time point.

At block 1130, the second device displays a first interface on a second screen. The first interface includes the at least one display frame.

At block 1140, the second device receives user input. The user input indicates the user's selection and/or editing of a target display frame among the at least one display frame.

At block 1150, in response to receiving a second trigger operation from the user, the second device sends the target display frame or the edited target display frame to the first device for display on the first screen.
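Blocks 1110-1150 above can likewise be sketched as a minimal second-device flow; the transport callback and message fields are illustrative assumptions:

```python
class SecondDevice:
    """Blocks 1110-1150: send the request, present the returned frames,
    then send back the selected (possibly edited) target frame."""
    def __init__(self, send):
        self.send = send          # transport callback (hypothetical)
        self.candidates = []

    def on_first_trigger(self, request_time_ms):  # block 1110
        self.send({"type": "multi_screen_interaction",
                   "request_time_ms": request_time_ms})

    def on_response(self, response):  # blocks 1120 and 1130
        self.candidates = response["frames"]  # shown in the first interface

    def on_second_trigger(self, target, edited=None):  # blocks 1140 and 1150
        assert target in self.candidates
        self.send({"type": "share_notes", "frame": edited or target})

sent = []
device = SecondDevice(sent.append)
device.on_first_trigger(1_500)
device.on_response({"frames": ["frame-a", "frame-b"]})
device.on_second_trigger("frame-b", edited="frame-b-edited")
```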
In some embodiments, the second device may receive a control command input by the user. In response to the received control command, the second device may send the control command to the first device for controlling the display of the first screen.

In some embodiments, the control command includes any one of the following: a first control command for displaying the edited target display frame on the first screen; a second control command for displaying the target display frame on the first screen; a third control command for displaying the target display frame on the first screen while playing the user's question concerning the target display frame; and a fourth control command for displaying the edited target display frame on the first screen while playing the recording segment corresponding to the target display frame.

In some embodiments, the user input further indicates the user's question concerning the target display frame. In response to receiving the second trigger operation, the second device may send the question to the first device.

In some embodiments, the response includes a recording corresponding to the at least one display frame. The first interface includes a visual representation of the recording. The user input indicates the user's selection of a recording segment in the recording, the recording segment corresponding to the target display frame.

In some embodiments, in response to receiving the second trigger operation, the second device may send the recording segment to the first device.

In some embodiments, before the second device sends the request to the first device, the second device may establish a connection for multi-screen interaction with the first device.
FIG. 12 shows a block diagram of an example apparatus 1200 for multi-screen interaction according to an embodiment of the present disclosure. For example, the apparatus 1200 may be used to implement the master device 110, or a part thereof, shown in FIGS. 1A and 1B.

As shown in FIG. 12, the apparatus 1200 includes: a screen display unit 1210 configured to display first content on a first screen, the first content including a plurality of display frames; a request receiving unit 1220 configured to receive a request for multi-screen interaction from a second device, the request including a request time point; a response sending unit 1230 configured to send a response to the second device according to the request, the response including at least one display frame of the first screen around the request time point; and a display frame receiving unit 1240 configured to receive a target display frame or an edited target display frame from the second device, the target display frame being selected from the at least one display frame. The screen display unit 1210 is further configured to display the target display frame or the edited target display frame on the first screen.
In some embodiments, the screen display unit 1210 includes: a first display unit configured to, in response to receiving the target display frame or the edited target display frame, display on the first screen a prompt asking whether to allow the sharing of the target display frame; a user input receiving unit configured to receive user input; and a second display unit configured to display the target display frame or the edited target display frame on the first screen in response to the user input indicating that the user allows the sharing.

In some embodiments, the screen display unit 1210 includes: a control command receiving unit configured to receive a control command from the second device, the control command being used to control the display of the first screen; and a third display unit configured to display the target display frame or the edited target display frame on the first screen according to the control command.

In some embodiments, the control command includes any one of the following: a first control command for displaying the edited target display frame on the first screen; a second control command for displaying the target display frame on the first screen; a third control command for displaying the target display frame on the first screen while playing the user's question concerning the target display frame; and a fourth control command for displaying the edited target display frame on the first screen while playing the recording segment corresponding to the target display frame.

In some embodiments, the apparatus 1200 further includes: a question receiving unit configured to receive, from the second device, a question concerning the target display frame; and a question playing unit configured to play the question while the target display frame or the edited target display frame is displayed on the first screen.

In some embodiments, the response includes a recording corresponding to the at least one display frame, and the apparatus 1200 further includes: a recording segment receiving unit configured to receive a recording segment from the second device, the recording segment being selected from the recording and corresponding to the target display frame; and a recording segment playing unit configured to play the recording segment while the target display frame or the edited target display frame is displayed on the first screen.

In some embodiments, the apparatus 1200 further includes: an operation receiving unit configured to receive a trigger operation from the user; and a fourth display unit configured to, in response to the received trigger operation, stop displaying the target display frame or the edited target display frame on the first screen and redisplay the first content on the first screen.

In some embodiments, the apparatus 1200 further includes: a connection establishing unit configured to establish a connection for multi-screen interaction with the second device before the request from the second device is received.

It should be understood that the operations and features of the respective units in the apparatus 1200 are intended to implement the corresponding steps of the method performed by the first device in the foregoing embodiments, and have the same beneficial effects. For the sake of brevity, the specific details are not repeated here. In addition, the various sending units and receiving units in the apparatus 1200 may be implemented, for example, by the wireless communication module 1460 shown in FIG. 14 below, and the various display units in the apparatus 1200 may be implemented by the display screen 1494 shown in FIG. 14 below.
FIG. 13 shows a block diagram of an example apparatus 1300 for multi-screen interaction according to an embodiment of the present disclosure. For example, the apparatus 1300 may be used to implement the slave device 120, or a part thereof, shown in FIGS. 1A and 1B.

As shown in FIG. 13, the apparatus 1300 includes: a request sending unit 1310 configured to, in response to receiving a first trigger operation from the user, send a request for multi-screen interaction to a first device, the request including a request time point; a response receiving unit 1320 configured to receive a response from the first device, the response including at least one display frame of a first screen of the first device around the request time point; a screen display unit 1330 configured to display a first interface on a second screen of the second device, the first interface including the at least one display frame; a user input receiving unit 1340 configured to receive user input indicating the user's selection and/or editing of a target display frame among the at least one display frame; and a display frame sending unit 1350 configured to, in response to receiving a second trigger operation from the user, send the target display frame or the edited target display frame to the first device for display on the first screen.
在一些实施例中,装置1300还包括:控制命令接收单元,被配置为接收用户输入的控制命令;以及控制命令发送单元,被配置为响应于接收到的控制命令,向第一设备发送该控制命令,以用于控制第一屏幕的显示。In some embodiments, the apparatus 1300 further includes: a control command receiving unit configured to receive a control command input by a user; and a control command sending unit configured to send the control command to the first device in response to the received control command command to control the display of the first screen.
在一些实施例中,该控制命令包括以下任一项:第一控制命令,用于在第一屏幕上显示经编辑的目标显示帧;第二控制命令,用于在第一屏幕上显示目标显示帧;第三控制命令,用于在第一屏幕上显示目标显示帧的同时,播放用户针对目标显示帧的提问;以及第四控制命令,用于在第一屏幕上显示经编辑的目标显示帧的同时,播放与目标显示帧对应的录音片段。In some embodiments, the control command includes any one of the following: a first control command for displaying the edited target display frame on the first screen; a second control command for displaying the target display on the first screen frame; a third control command for displaying the target display frame on the first screen while playing the user's question for the target display frame; and a fourth control command for displaying the edited target display frame on the first screen At the same time, the audio clip corresponding to the target display frame is played.
在一些实施例中,用户输入还指示用户针对目标显示帧的提问,并且装置1300还包括:提问发送单元,被配置为响应于接收到第二触发操作,向第一设备发送该提问。In some embodiments, the user input further indicates the user's question for the target display frame, and the apparatus 1300 further includes: a question sending unit configured to send the question to the first device in response to receiving the second trigger operation.
在一些实施例中,该响应包括与至少一个显示帧对应的录音;第一界面包括该录音的可视表示;并且用户输入指示用户针对该录音中的录音片段的选择,录音片段对应于目标显示帧。In some embodiments, the response includes a sound recording corresponding to at least one display frame; the first interface includes a visual representation of the sound recording; and the user input indicates a user selection for a sound recording segment in the sound recording, the sound recording segment corresponding to the target display frame.
在一些实施例中,装置1300还包括:录音片段发送单元,被配置为响应于接收到第二触发操作,向第一设备发送该录音片段。In some embodiments, the apparatus 1300 further includes: a recording segment sending unit, configured to send the recording segment to the first device in response to receiving the second trigger operation.
在一些实施例中,装置1300还包括:连接建立单元,被配置为在向第一设备发送请求之前,与第一设备建立用于多屏交互的连接。In some embodiments, the apparatus 1300 further includes: a connection establishing unit, configured to establish a connection for multi-screen interaction with the first device before sending the request to the first device.
应当理解,装置1300中的各个单元的操作和特征分别为了实现前述实施例中由第二设备执行的方法的相应步骤,并且具有同样的有益效果。出于简化的目的,具体细节不再赘述。此外,装置1300中的各种发送单元和接收单元例如可以利用以下图14所示的无线通信模块1460来实现。装置1300中的各种显示单元可以利用以下图14所示的显示屏1494来实现。It should be understood that the operations and features of each unit in the apparatus 1300 are respectively to implement the corresponding steps of the method performed by the second device in the foregoing embodiment, and have the same beneficial effects. For the sake of simplicity, specific details will not be repeated. In addition, various sending units and receiving units in the apparatus 1300 can be implemented, for example, by using the wireless communication module 1460 shown in FIG. 14 below. The various display units in the device 1300 may be implemented using the display screen 1494 shown in FIG. 14 below.
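The request/response flow carried by these units can be illustrated with a minimal sketch. The class, method, and message names below are illustrative only; the embodiment does not prescribe a concrete implementation, data format, or transport:

```python
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class InteractionRequest:
    # Sent by the request sending unit 1310; carries the request time point.
    request_time: float


@dataclass
class InteractionResponse:
    # Received by the response receiving unit 1320; carries at least one
    # display frame of the first screen near the request time point.
    frames: List[bytes]


@dataclass
class SecondDeviceApparatus:
    """Sketch of apparatus 1300 on the second device (names are hypothetical)."""
    received_frames: List[bytes] = field(default_factory=list)
    target_frame: Optional[bytes] = None

    def send_request(self, request_time: float) -> InteractionRequest:
        # Unit 1310: triggered by the user's first trigger operation.
        return InteractionRequest(request_time=request_time)

    def receive_response(self, response: InteractionResponse) -> None:
        # Unit 1320: store the frames; unit 1330 would then render them
        # in the first interface on the second screen.
        self.received_frames = list(response.frames)

    def select_frame(self, index: int, edit: Optional[bytes] = None) -> None:
        # Unit 1340: the user selects (and optionally edits) a target frame.
        frame = self.received_frames[index]
        self.target_frame = edit if edit is not None else frame

    def send_target_frame(self) -> bytes:
        # Unit 1350: on the second trigger operation, the (edited) target
        # frame is sent back to the first device for display.
        assert self.target_frame is not None
        return self.target_frame


# Usage: request at t=12.5s, receive two frames, pick the second, send it back.
app = SecondDeviceApparatus()
req = app.send_request(request_time=12.5)
app.receive_response(InteractionResponse(frames=[b"frame0", b"frame1"]))
app.select_frame(1)
print(app.send_target_frame())
```

In a real system the send/receive methods would go over the wireless connection established beforehand (the connection establishing unit); here they simply return or store values so that the roles of the units are visible.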
FIG. 14 shows a schematic structural diagram of an electronic device 1400. For example, the master device 110 and the slave device 120 shown in FIG. 1A and/or the device 210 shown in FIG. 2A may be implemented by the electronic device 1400.
The electronic device 1400 may include a processor 1410, an external memory interface 1420, an internal memory 1421, a universal serial bus (USB) interface 1430, a charging management module 1440, a power management module 1441, a battery 1442, an antenna 141, an antenna 142, a mobile communication module 1450, a wireless communication module 1460, an audio module 1470, a speaker 1470A, a receiver 1470B, a microphone 1470C, a headset jack 1470D, a sensor module 1480, buttons 1490, a motor 1491, an indicator 1492, a camera 1493, a display screen 1494, a subscriber identification module (SIM) card interface 1495, and the like. The sensor module 1480 may include a pressure sensor 1480A, a gyroscope sensor 1480B, a barometric pressure sensor 1480C, a magnetic sensor 1480D, an acceleration sensor 1480E, a distance sensor 1480F, a proximity light sensor 1480G, a fingerprint sensor 1480H, a temperature sensor 1480J, a touch sensor 1480K, an ambient light sensor 1480L, a bone conduction sensor 1480M, and the like.
It can be understood that the structure illustrated in this embodiment of the present invention does not constitute a specific limitation on the electronic device 1400. In other embodiments of the present application, the electronic device 1400 may include more or fewer components than shown, combine some components, split some components, or use a different component arrangement. The illustrated components may be implemented in hardware, in software, or in a combination of software and hardware.
The processor 1410 may include one or more processing units. For example, the processor 1410 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU). Different processing units may be independent devices or may be integrated into one or more processors.
The controller can generate an operation control signal according to an instruction operation code and a timing signal, and controls instruction fetching and instruction execution.
A memory may further be provided in the processor 1410 for storing instructions and data. In some embodiments, the memory in the processor 1410 is a cache. The memory may hold instructions or data that the processor 1410 has just used or uses repeatedly. If the processor 1410 needs the instruction or data again, it can be fetched directly from the memory. This avoids repeated accesses and reduces the waiting time of the processor 1410, thereby improving system efficiency.
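The benefit described here, serving repeated requests from fast local memory instead of recomputing or refetching, is the same principle as memoization in software. A generic illustration (not part of the patent; the counter only shows how many times the expensive path runs):

```python
from functools import lru_cache

calls = 0  # counts how often the expensive lookup actually executes


@lru_cache(maxsize=None)
def slow_lookup(key: int) -> int:
    # Stands in for an access that is expensive the first time
    # (like a processor fetching data from main memory).
    global calls
    calls += 1
    return key * 2


slow_lookup(7)  # first access: computed and stored in the cache
slow_lookup(7)  # repeat access: served from the cache, no recomputation
print(calls)
```

The second call returns immediately from the cache, so `calls` remains 1, mirroring how a processor cache removes the wait for repeated accesses.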
In some embodiments, the processor 1410 may include one or more interfaces. The interfaces may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, among others.
The I2C interface is a bidirectional synchronous serial bus that includes a serial data line (SDA) and a serial clock line (SCL). In some embodiments, the processor 1410 may contain multiple sets of I2C buses. The processor 1410 may be separately coupled to the touch sensor 1480K, a charger, a flash, the camera 1493, and the like through different I2C bus interfaces. For example, the processor 1410 may be coupled to the touch sensor 1480K through the I2C interface, so that the processor 1410 and the touch sensor 1480K communicate through the I2C bus interface to implement the touch function of the electronic device 1400.
The I2S interface can be used for audio communication. In some embodiments, the processor 1410 may contain multiple sets of I2S buses. The processor 1410 may be coupled to the audio module 1470 through an I2S bus to implement communication between the processor 1410 and the audio module 1470. In some embodiments, the audio module 1470 may transmit audio signals to the wireless communication module 1460 through the I2S interface to implement the function of answering calls through a Bluetooth headset.
The PCM interface can also be used for audio communication: it samples, quantizes, and encodes analog signals. In some embodiments, the audio module 1470 and the wireless communication module 1460 may be coupled through a PCM bus interface. In some embodiments, the audio module 1470 may also transmit audio signals to the wireless communication module 1460 through the PCM interface to implement the function of answering calls through a Bluetooth headset. Both the I2S interface and the PCM interface can be used for audio communication.
The UART interface is a universal serial data bus used for asynchronous communication. The bus may be a bidirectional communication bus. It converts the data to be transmitted between serial and parallel form. In some embodiments, a UART interface is typically used to connect the processor 1410 with the wireless communication module 1460. For example, the processor 1410 communicates with the Bluetooth module in the wireless communication module 1460 through the UART interface to implement the Bluetooth function. In some embodiments, the audio module 1470 may transmit audio signals to the wireless communication module 1460 through the UART interface to implement the function of playing music through a Bluetooth headset.
The MIPI interface can be used to connect the processor 1410 with peripheral devices such as the display screen 1494 and the camera 1493. MIPI interfaces include the camera serial interface (CSI), the display serial interface (DSI), and others. In some embodiments, the processor 1410 communicates with the camera 1493 through a CSI interface to implement the photographing function of the electronic device 1400, and the processor 1410 communicates with the display screen 1494 through a DSI interface to implement the display function of the electronic device 1400.
The GPIO interface can be configured by software, either as a control signal or as a data signal. In some embodiments, the GPIO interface may be used to connect the processor 1410 with the camera 1493, the display screen 1494, the wireless communication module 1460, the audio module 1470, the sensor module 1480, and the like. The GPIO interface can also be configured as an I2C interface, an I2S interface, a UART interface, a MIPI interface, and so on.
The USB interface 1430 is an interface that conforms to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type-C interface, or the like. The USB interface 1430 can be used to connect a charger to charge the electronic device 1400, or to transfer data between the electronic device 1400 and peripheral devices. It can also be used to connect headphones and play audio through them. The interface can further be used to connect other electronic devices, such as AR devices.
It can be understood that the interface connection relationships between the modules illustrated in this embodiment of the present invention are only schematic and do not constitute a structural limitation on the electronic device 1400. In other embodiments of the present application, the electronic device 1400 may also adopt interface connection manners different from those in the foregoing embodiment, or a combination of multiple interface connection manners.
The charging management module 1440 is used to receive charging input from a charger. The charger may be a wireless charger or a wired charger. The power management module 1441 is used to connect the battery 1442 and the charging management module 1440 to the processor 1410. The power management module 1441 receives input from the battery 1442 and/or the charging management module 1440 and supplies power to the processor 1410, the internal memory 1421, the display screen 1494, the camera 1493, the wireless communication module 1460, and the like.
The wireless communication function of the electronic device 1400 may be implemented by the antenna 141, the antenna 142, the mobile communication module 1450, the wireless communication module 1460, the modem processor, the baseband processor, and the like.
The antenna 141 and the antenna 142 are used to transmit and receive electromagnetic wave signals. Each antenna in the electronic device 1400 may be used to cover a single communication frequency band or multiple communication frequency bands. Different antennas may also be multiplexed to improve antenna utilization. For example, the antenna 141 may be multiplexed as a diversity antenna of the wireless local area network. In other embodiments, an antenna may be used in combination with a tuning switch.
The mobile communication module 1450 may provide wireless communication solutions applied on the electronic device 1400, including 2G/3G/4G/5G. The mobile communication module 1450 may include at least one filter, a switch, a power amplifier, a low noise amplifier (LNA), and the like. The mobile communication module 1450 may receive electromagnetic waves through the antenna 141, filter and amplify the received electromagnetic waves, and transmit them to the modem processor for demodulation. The mobile communication module 1450 may also amplify a signal modulated by the modem processor and convert it into electromagnetic waves for radiation through the antenna 141. In some embodiments, at least some of the functional modules of the mobile communication module 1450 may be provided in the processor 1410. In some embodiments, at least some of the functional modules of the mobile communication module 1450 and at least some of the modules of the processor 1410 may be provided in the same device.
The modem processor may include a modulator and a demodulator. The modulator is used to modulate a low-frequency baseband signal to be sent into a medium- or high-frequency signal. The demodulator is used to demodulate a received electromagnetic wave signal into a low-frequency baseband signal, and then transmits the demodulated low-frequency baseband signal to the baseband processor for processing. After being processed by the baseband processor, the low-frequency baseband signal is passed to the application processor. The application processor outputs sound signals through audio devices (not limited to the speaker 1470A, the receiver 1470B, and the like), or displays images or videos through the display screen 1494. In some embodiments, the modem processor may be an independent device. In other embodiments, the modem processor may be independent of the processor 1410 and provided in the same device as the mobile communication module 1450 or other functional modules.
The wireless communication module 1460 may provide wireless communication solutions applied on the electronic device 1400, including wireless local area networks (WLAN) (such as wireless fidelity (Wi-Fi) networks), Bluetooth (BT), the global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), and infrared (IR) technology. The wireless communication module 1460 may be one or more devices integrating at least one communication processing module. The wireless communication module 1460 receives electromagnetic waves via the antenna 142, performs frequency modulation and filtering on the electromagnetic wave signals, and sends the processed signals to the processor 1410. The wireless communication module 1460 may also receive signals to be sent from the processor 1410, perform frequency modulation and amplification on them, and convert them into electromagnetic waves for radiation through the antenna 142. In some embodiments, the wireless communication module 1460 may be used to send and receive the various messages (including the various requests and responses), data packets (including display frames, recording data, and/or other data), and the like described above. The wireless communication module 1460 may, for example, be used to implement the various sending units and receiving units in the apparatus 1200 shown in FIG. 12 and/or the apparatus 1300 shown in FIG. 13.
In some embodiments, the antenna 141 of the electronic device 1400 is coupled to the mobile communication module 1450, and the antenna 142 is coupled to the wireless communication module 1460, so that the electronic device 1400 can communicate with networks and other devices through wireless communication technologies. The wireless communication technologies may include the global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technology. The GNSS may include the global positioning system (GPS), the global navigation satellite system (GLONASS), the BeiDou navigation satellite system (BDS), the quasi-zenith satellite system (QZSS), and/or satellite-based augmentation systems (SBAS).
The electronic device 1400 implements the display function through the GPU, the display screen 1494, the application processor, and the like. The GPU is a microprocessor for image processing, connecting the display screen 1494 and the application processor, and performs mathematical and geometric calculations for graphics rendering. The processor 1410 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 1494 is used to display images, videos, and the like. In some embodiments, the display screen 1494 may be used for the various interfaces, display frames, and so on described above. For example, the display screen 1494 may be used to implement the various display units in the apparatus 1200 shown in FIG. 12 and/or the apparatus 1300 shown in FIG. 13. The display screen 1494 includes a display panel. The display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a MiniLED, a MicroLED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like. In some embodiments, the electronic device 1400 may include one or N display screens 1494, where N is a positive integer greater than 1.
The electronic device 1400 may implement the photographing function through the ISP, the camera 1493, the video codec, the GPU, the display screen 1494, the application processor, and the like.
The ISP is used to process data fed back by the camera 1493. For example, when a photo is taken, the shutter opens, light is transmitted through the lens to the camera's photosensitive element, the optical signal is converted into an electrical signal, and the photosensitive element passes the electrical signal to the ISP for processing, converting it into an image visible to the naked eye. The ISP can also perform algorithmic optimization on the noise, brightness, and skin tone of the image, and can optimize parameters of the shooting scene such as exposure and color temperature. In some embodiments, the ISP may be provided in the camera 1493.
The camera 1493 is used to capture still images or video. An object is projected through the lens to generate an optical image on the photosensitive element. The photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal and passes it to the ISP, which converts it into a digital image signal. The ISP outputs the digital image signal to the DSP for processing, and the DSP converts the digital image signal into an image signal in a standard format such as RGB or YUV. In some embodiments, the electronic device 1400 may include one or N cameras 1493, where N is a positive integer greater than 1.
The digital signal processor is used to process digital signals; in addition to digital image signals, it can process other digital signals. For example, when the electronic device 1400 selects a frequency point, the digital signal processor is used to perform a Fourier transform on the frequency point energy, and so on.
Video codecs are used to compress or decompress digital video. The electronic device 1400 may support one or more video codecs, so that it can play or record videos in multiple encoding formats, such as moving picture experts group (MPEG)-1, MPEG-2, MPEG-3, and MPEG-4.
The NPU is a neural-network (NN) computing processor. By drawing on the structure of biological neural networks, for example the signaling between neurons in the human brain, it processes input information quickly and can also learn continuously by itself. Applications such as intelligent cognition of the electronic device 1400, for example image recognition, face recognition, speech recognition, and text understanding, can be implemented through the NPU.
The external memory interface 1420 can be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the electronic device 1400. The external memory card communicates with the processor 1410 through the external memory interface 1420 to implement the data storage function, for example saving files such as music and videos on the external memory card.
The internal memory 1421 may be used to store computer-executable program code, where the executable program code includes instructions. The internal memory 1421 may include a program storage area and a data storage area. The program storage area may store the operating system, applications required for at least one function (such as a sound playback function or an image playback function), and the like. The data storage area may store data created during use of the electronic device 1400 (such as audio data and a phone book). In addition, the internal memory 1421 may include a high-speed random access memory and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or a universal flash storage (UFS). The processor 1410 executes the various functional applications and data processing of the electronic device 1400 by running the instructions stored in the internal memory 1421 and/or the instructions stored in the memory provided in the processor.
The electronic device 1400 may implement audio functions, such as music playback and recording, through the audio module 1470, the speaker 1470A, the receiver 1470B, the microphone 1470C, the headset jack 1470D, the application processor, and the like.
The audio module 1470 is used to convert digital audio information into an analog audio signal for output, and also to convert an analog audio input into a digital audio signal. The audio module 1470 may also be used to encode and decode audio signals. In some embodiments, the audio module 1470 may be provided in the processor 1410, or some functional modules of the audio module 1470 may be provided in the processor 1410.
The speaker 1470A, also called a "loudspeaker", is used to convert an audio electrical signal into a sound signal. The electronic device 1400 can play music or take a hands-free call through the speaker 1470A.
The receiver 1470B, also called an "earpiece", is used to convert an audio electrical signal into a sound signal. When the electronic device 1400 answers a call or plays a voice message, the voice can be heard by placing the receiver 1470B close to the ear.
The microphone 1470C, also called a "mic", is used to convert a sound signal into an electrical signal. When making a call or sending a voice message, the user can speak with the mouth close to the microphone 1470C to input the sound signal into it. The electronic device 1400 may be provided with at least one microphone 1470C. In other embodiments, the electronic device 1400 may be provided with two microphones 1470C, which can implement a noise reduction function in addition to collecting sound signals. In still other embodiments, the electronic device 1400 may be provided with three, four, or more microphones 1470C to collect sound signals, reduce noise, identify sound sources, implement a directional recording function, and so on.
The headset jack 1470D is used to connect wired headphones. The headset jack 1470D may be the USB interface 1430, a 3.5 mm open mobile terminal platform (OMTP) standard interface, or a cellular telecommunications industry association of the USA (CTIA) standard interface.
The pressure sensor 1480A is used to sense pressure signals and can convert a pressure signal into an electrical signal. In some embodiments, the pressure sensor 1480A may be provided on the display screen 1494. There are many types of pressure sensor 1480A, such as resistive, inductive, and capacitive pressure sensors. A capacitive pressure sensor may include at least two parallel plates of conductive material. When a force acts on the pressure sensor 1480A, the capacitance between the electrodes changes, and the electronic device 1400 determines the intensity of the pressure according to the change in capacitance. When a touch operation acts on the display screen 1494, the electronic device 1400 detects the intensity of the touch operation through the pressure sensor 1480A. The electronic device 1400 may also calculate the touch position according to the detection signal of the pressure sensor 1480A. In some embodiments, touch operations that act on the same touch position but with different touch operation intensities may correspond to different operation instructions. For example, when a touch operation whose intensity is less than a first pressure threshold acts on the short message application icon, an instruction to view the short message is executed; when a touch operation whose intensity is greater than or equal to the first pressure threshold acts on the short message application icon, an instruction to create a new short message is executed.
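The threshold behaviour described for the short message icon amounts to a simple dispatch on touch intensity. In this sketch the threshold value, units, and action names are illustrative only; the patent specifies just the comparison against a first pressure threshold:

```python
FIRST_PRESSURE_THRESHOLD = 0.5  # illustrative value in arbitrary units


def handle_sms_icon_touch(intensity: float) -> str:
    """Map a touch on the short message icon to an operation instruction."""
    # Below the first pressure threshold: execute "view short message".
    # At or above the threshold: execute "create new short message".
    if intensity < FIRST_PRESSURE_THRESHOLD:
        return "view_short_message"
    return "new_short_message"


print(handle_sms_icon_touch(0.2))  # light press
print(handle_sms_icon_touch(0.8))  # firm press
```

A real implementation would read the intensity from the pressure sensor 1480A's capacitance change rather than take it as a parameter; the branch structure is the point here.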
陀螺仪传感器1480B可以用于确定电子设备1400的运动姿态。在一些实施例中,可以通过陀螺仪传感器1480B确定电子设备1400围绕三个轴(即,x,y和z轴)的角速度。陀螺仪传感器1480B可以用于拍摄防抖。示例性的,当按下快门,陀螺仪传感器1480B检测电子设备1400抖动的角度,根据角度计算出镜头模组需要补偿的距离,让镜头通过反向运动抵消电子设备1400的抖动,实现防抖。陀螺仪传感器1480B还可以用于导航,体感游戏场景。The gyro sensor 1480B can be used to determine the motion attitude of the electronic device 1400 . In some embodiments, the angular velocity of electronic device 1400 about three axes (ie, x, y, and z axes) may be determined by gyro sensor 1480B. The gyro sensor 1480B can be used for image stabilization. Exemplarily, when the shutter is pressed, the gyro sensor 1480B detects the shaking angle of the electronic device 1400, calculates the distance that the lens module needs to compensate according to the angle, and allows the lens to counteract the shaking of the electronic device 1400 through reverse motion to achieve anti-shake. The gyroscope sensor 1480B can also be used for navigation and somatosensory game scenarios.
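The stabilization step above (computing a compensation distance from the detected shake angle) might be modeled with a simple lens-shift formula. The formula and names here are assumptions for illustration; the disclosure does not give the actual compensation calculation.

```python
import math

# Assumed model: a rotational shake of angle theta shifts the image by
# roughly focal_length * tan(theta); the lens moves by the opposite amount.

def compensation_distance_mm(shake_angle_deg, focal_length_mm):
    """Lens shift needed to counteract a rotational shake of the device."""
    return focal_length_mm * math.tan(math.radians(shake_angle_deg))
```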
气压传感器1480C用于测量气压。在一些实施例中,电子设备1400通过气压传感器1480C测得的气压值计算海拔高度,辅助定位和导航。Air pressure sensor 1480C is used to measure air pressure. In some embodiments, the electronic device 1400 calculates the altitude from the air pressure value measured by the air pressure sensor 1480C to assist in positioning and navigation.
磁传感器1480D包括霍尔传感器。电子设备1400可以利用磁传感器1480D检测翻盖皮套的开合。在一些实施例中,当电子设备1400是翻盖机时,电子设备1400可以根据磁传感器1480D检测翻盖的开合。进而根据检测到的皮套的开合状态或翻盖的开合状态,设置翻盖自动解锁等特性。The magnetic sensor 1480D includes a Hall sensor. The electronic device 1400 can detect the opening and closing of a flip holster by using the magnetic sensor 1480D. In some embodiments, when the electronic device 1400 is a flip phone, the electronic device 1400 can detect the opening and closing of the flip cover according to the magnetic sensor 1480D. Features such as automatic unlocking on flip-open are then set according to the detected opening/closing state of the holster or of the flip cover.
加速度传感器1480E可检测电子设备1400在各个方向上(一般为三轴)加速度的大小。当电子设备1400静止时可检测出重力的大小及方向。还可以用于识别电子设备姿态,应用于横竖屏切换,计步器等应用。The acceleration sensor 1480E can detect the magnitude of the acceleration of the electronic device 1400 in various directions (generally three axes). The magnitude and direction of gravity can be detected when the electronic device 1400 is stationary. It can also be used to identify the posture of electronic devices, and can be used in applications such as horizontal and vertical screen switching, pedometers, etc.
距离传感器1480F,用于测量距离。电子设备1400可以通过红外或激光测量距离。在一些实施例中,拍摄场景,电子设备1400可以利用距离传感器1480F测距以实现快速对焦。Distance sensor 1480F, used to measure distance. The electronic device 1400 can measure the distance through infrared or laser. In some embodiments, when shooting a scene, the electronic device 1400 can use the distance sensor 1480F to measure the distance to achieve fast focusing.
接近光传感器1480G可以包括例如发光二极管(LED)和光检测器,例如光电二极管。发光二极管可以是红外发光二极管。电子设备1400通过发光二极管向外发射红外光。电子设备1400使用光电二极管检测来自附近物体的红外反射光。当检测到充分的反射光时,可以确定电子设备1400附近有物体。当检测到不充分的反射光时,电子设备1400可以确定电子设备1400附近没有物体。电子设备1400可以利用接近光传感器1480G检测用户手持电子设备1400贴近耳朵通话,以便自动熄灭屏幕达到省电的目的。接近光传感器1480G也可用于皮套模式,口袋模式自动解锁与锁屏。Proximity light sensor 1480G may include, for example, light emitting diodes (LEDs) and light detectors, such as photodiodes. The light emitting diodes may be infrared light emitting diodes. The electronic device 1400 emits infrared light to the outside through light emitting diodes. Electronic device 1400 uses photodiodes to detect infrared reflected light from nearby objects. When sufficient reflected light is detected, it may be determined that there is an object near the electronic device 1400 . When insufficient reflected light is detected, the electronic device 1400 may determine that there is no object near the electronic device 1400 . The electronic device 1400 can use the proximity light sensor 1480G to detect that the user holds the electronic device 1400 close to the ear to talk, so as to automatically turn off the screen to save power. Proximity light sensor 1480G can also be used in holster mode, pocket mode automatically unlocks and locks the screen.
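The reflected-light decision described above reduces to a threshold test. The sketch below is a hypothetical illustration; the threshold value and function names are not from the disclosure, and real devices calibrate such thresholds per hardware.

```python
REFLECTANCE_THRESHOLD = 0.3  # assumed normalized reflected-IR level

def object_nearby(reflected_ir_level):
    """Sufficient reflected IR means an object (e.g. the user's ear) is near."""
    return reflected_ir_level >= REFLECTANCE_THRESHOLD

def should_turn_off_screen(in_call, reflected_ir_level):
    # The screen is blanked only while a call is active and an object is close,
    # matching the power-saving behavior described above.
    return in_call and object_nearby(reflected_ir_level)
```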
环境光传感器1480L用于感知环境光亮度。电子设备1400可以根据感知的环境光 亮度自适应调节显示屏1494亮度。环境光传感器1480L也可用于拍照时自动调节白平衡。环境光传感器1480L还可以与接近光传感器1480G配合,检测电子设备1400是否在口袋里,以防误触。The ambient light sensor 1480L is used to sense ambient light brightness. The electronic device 1400 can adaptively adjust the brightness of the display screen 1494 according to the perceived ambient light brightness. The ambient light sensor 1480L can also be used to automatically adjust the white balance when taking pictures. The ambient light sensor 1480L can also cooperate with the proximity light sensor 1480G to detect whether the electronic device 1400 is in the pocket to prevent accidental touch.
指纹传感器1480H用于采集指纹。电子设备1400可以利用采集的指纹特性实现指纹解锁,访问应用锁,指纹拍照,指纹接听来电等。The fingerprint sensor 1480H is used to collect fingerprints. The electronic device 1400 can use the collected fingerprint characteristics to realize fingerprint unlocking, accessing application locks, taking photos with fingerprints, answering incoming calls with fingerprints, and the like.
温度传感器1480J用于检测温度。在一些实施例中,电子设备1400利用温度传感器1480J检测的温度,执行温度处理策略。例如,当温度传感器1480J上报的温度超过阈值,电子设备1400执行降低位于温度传感器1480J附近的处理器的性能,以便降低功耗实施热保护。在另一些实施例中,当温度低于另一阈值时,电子设备1400对电池1442加热,以避免低温导致电子设备1400异常关机。在其他一些实施例中,当温度低于又一阈值时,电子设备1400对电池1442的输出电压执行升压,以避免低温导致的异常关机。The temperature sensor 1480J is used to detect the temperature. In some embodiments, the electronic device 1400 utilizes the temperature detected by the temperature sensor 1480J to execute a temperature processing strategy. For example, when the temperature reported by the temperature sensor 1480J exceeds a threshold value, the electronic device 1400 performs performance reduction of the processor located near the temperature sensor 1480J in order to reduce power consumption and implement thermal protection. In other embodiments, when the temperature is lower than another threshold, the electronic device 1400 heats the battery 1442 to avoid abnormal shutdown of the electronic device 1400 due to low temperature. In some other embodiments, when the temperature is lower than another threshold, the electronic device 1400 boosts the output voltage of the battery 1442 to avoid abnormal shutdown caused by low temperature.
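The three temperature policies described above (throttling on overheat, battery heating at a low temperature, voltage boosting at a still lower temperature) can be sketched as a single policy function. All threshold values and action names below are assumed for illustration; the disclosure specifies no concrete numbers.

```python
HIGH_TEMP_C = 45.0        # above this: throttle the nearby processor
LOW_TEMP_HEAT_C = 0.0     # below this: heat the battery
LOW_TEMP_BOOST_C = -10.0  # below this: also boost the battery output voltage

def temperature_policy(temp_c):
    """Return the mitigation actions for a reported temperature."""
    actions = []
    if temp_c > HIGH_TEMP_C:
        actions.append("throttle_nearby_processor")  # thermal protection
    if temp_c < LOW_TEMP_HEAT_C:
        actions.append("heat_battery")               # avoid cold shutdown
    if temp_c < LOW_TEMP_BOOST_C:
        actions.append("boost_battery_voltage")      # avoid cold shutdown
    return actions
```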
触摸传感器1480K,也称“触控器件”。触摸传感器1480K可以设置于显示屏1494,由触摸传感器1480K与显示屏1494组成触摸屏,也称“触控屏”。触摸传感器1480K用于检测作用于其上或附近的触摸操作。触摸传感器可以将检测到的触摸操作传递给应用处理器,以确定触摸事件类型。可以通过显示屏1494提供与触摸操作相关的视觉输出。在另一些实施例中,触摸传感器1480K也可以设置于电子设备1400的表面,与显示屏1494所处的位置不同。Touch sensor 1480K, also called "touch device". The touch sensor 1480K can be disposed on the display screen 1494, and the touch sensor 1480K and the display screen 1494 form a touch screen, also called a "touch screen". The touch sensor 1480K is used to detect touch operations on or near it. The touch sensor can pass the detected touch operation to the application processor to determine the type of touch event. Visual output related to touch operations may be provided through display screen 1494 . In other embodiments, the touch sensor 1480K may also be disposed on the surface of the electronic device 1400 , which is different from the location where the display screen 1494 is located.
骨传导传感器1480M可以获取振动信号。在一些实施例中,骨传导传感器1480M可以获取人体声部振动骨块的振动信号。骨传导传感器1480M也可以接触人体脉搏,接收血压跳动信号。在一些实施例中,骨传导传感器1480M也可以设置于耳机中,结合成骨传导耳机。音频模块1470可以基于所述骨传导传感器1480M获取的声部振动骨块的振动信号,解析出语音信号,实现语音功能。应用处理器可以基于所述骨传导传感器1480M获取的血压跳动信号解析心率信息,实现心率检测功能。The bone conduction sensor 1480M can acquire vibration signals. In some embodiments, the bone conduction sensor 1480M can acquire the vibration signal of the vibrating bone mass of the human voice. The bone conduction sensor 1480M can also contact the pulse of the human body and receive the blood pressure beating signal. In some embodiments, the bone conduction sensor 1480M can also be disposed in the earphone, combined with the bone conduction earphone. The audio module 1470 can parse out the voice signal based on the vibration signal of the vocal vibration bone block obtained by the bone conduction sensor 1480M, so as to realize the voice function. The application processor can analyze the heart rate information based on the blood pressure beat signal obtained by the bone conduction sensor 1480M, and realize the function of heart rate detection.
按键1490包括开机键,音量键等。按键1490可以是机械按键。也可以是触摸式按键。电子设备1400可以接收按键输入,产生与电子设备1400的用户设置以及功能控制有关的键信号输入。The keys 1490 include a power-on key, a volume key, and the like. Keys 1490 may be mechanical keys. It can also be a touch key. The electronic device 1400 may receive key inputs and generate key signal inputs related to user settings and function control of the electronic device 1400 .
马达1491可以产生振动提示。马达1491可以用于来电振动提示,也可以用于触摸振动反馈。例如,作用于不同应用(例如拍照,音频播放等)的触摸操作,可以对应不同的振动反馈效果。作用于显示屏1494不同区域的触摸操作,马达1491也可对应不同的振动反馈效果。不同的应用场景(例如:时间提醒,接收信息,闹钟,游戏等)也可以对应不同的振动反馈效果。触摸振动反馈效果还可以支持自定义。Motor 1491 can generate vibrating cues. The motor 1491 can be used for vibrating alerts for incoming calls, and can also be used for touch vibration feedback. For example, touch operations acting on different applications (such as taking pictures, playing audio, etc.) can correspond to different vibration feedback effects. The motor 1491 can also correspond to different vibration feedback effects for touch operations in different areas of the display screen 1494 . Different application scenarios (for example: time reminder, receiving information, alarm clock, games, etc.) can also correspond to different vibration feedback effects. The touch vibration feedback effect can also support customization.
指示器1492可以是指示灯,可以用于指示充电状态,电量变化,也可以用于指示消息,未接来电,通知等。The indicator 1492 can be an indicator light, which can be used to indicate the charging status, the change of power, and can also be used to indicate messages, missed calls, notifications, and the like.
SIM卡接口1495用于连接SIM卡。SIM卡可以通过插入SIM卡接口1495,或从SIM卡接口1495拔出,实现和电子设备1400的接触和分离。电子设备1400可以支持1个或N个SIM卡接口,N为大于1的正整数。SIM卡接口1495可以支持Nano SIM卡,Micro SIM卡,SIM卡等。同一个SIM卡接口1495可以同时插入多张卡。所述多张卡的类型可以相同,也可以不同。SIM卡接口1495也可以兼容不同类型的SIM卡。 SIM卡接口1495也可以兼容外部存储卡。电子设备1400通过SIM卡和网络交互,实现通话以及数据通信等功能。在一些实施例中,电子设备1400采用eSIM,即:嵌入式SIM卡。eSIM卡可以嵌在电子设备1400中,不能和电子设备1400分离。The SIM card interface 1495 is used to connect a SIM card. The SIM card can be inserted into the SIM card interface 1495 or pulled out from the SIM card interface 1495 to achieve contact with and separation from the electronic device 1400 . The electronic device 1400 may support 1 or N SIM card interfaces, where N is a positive integer greater than 1. The SIM card interface 1495 can support Nano SIM card, Micro SIM card, SIM card and so on. The same SIM card interface 1495 can insert multiple cards at the same time. The types of the plurality of cards may be the same or different. The SIM card interface 1495 can also be compatible with different types of SIM cards. The SIM card interface 1495 is also compatible with external memory cards. The electronic device 1400 interacts with the network through the SIM card to implement functions such as call and data communication. In some embodiments, the electronic device 1400 employs an eSIM, ie: an embedded SIM card. The eSIM card can be embedded in the electronic device 1400 and cannot be separated from the electronic device 1400 .
电子设备1400的软件系统可以采用分层架构,事件驱动架构,微核架构,微服务架构,或云架构。本发明实施例以分层架构的Android系统为例,示例性说明电子设备1400的软件结构。The software system of the electronic device 1400 may adopt a layered architecture, an event-driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture. The embodiments of the present invention take the Android system with a layered architecture as an example to describe the software structure of the electronic device 1400.
图15是本发明实施例的电子设备1400的软件结构框图。如图15所示的软件结构可以用于实现如图1B和2B所示的软件系统架构。FIG. 15 is a block diagram of a software structure of an electronic device 1400 according to an embodiment of the present invention. The software structure shown in Figure 15 can be used to implement the software system architecture shown in Figures 1B and 2B.
分层架构将软件分成若干个层,每一层都有清晰的角色和分工。层与层之间通过软件接口通信。在一些实施例中,将Android系统分为四层,从上至下分别为应用程序层,应用程序框架层,安卓运行时(Android runtime)和系统库,以及内核层。应用程序层可以包括一系列应用程序包。The layered architecture divides the software into several layers, and each layer has a clear role and division of labor. Layers communicate with each other through software interfaces. In some embodiments, the Android system is divided into four layers, which are, from top to bottom, an application layer, an application framework layer, an Android runtime (Android runtime) and a system library, and a kernel layer. The application layer can include a series of application packages.
如图15所示,应用程序包可以包括相机,图库,日历,通话,地图,导航,WLAN,蓝牙,音乐,视频,短信息等应用程序。在一些实施例中,应用程序还可以包括如图1B所示的应用111和/或应用121,或者可以包括如图2B所示的应用211和/或应用212(出于简化目的,图15中未示出)。As shown in FIG. 15, the application packages may include applications such as camera, gallery, calendar, call, map, navigation, WLAN, Bluetooth, music, video, and short message. In some embodiments, the applications may also include the application 111 and/or the application 121 as shown in FIG. 1B, or may include the application 211 and/or the application 212 as shown in FIG. 2B (not shown in FIG. 15 for simplicity).
应用程序框架层为应用程序层的应用程序提供应用编程接口(application programming interface,API)和编程框架。应用程序框架层包括一些预先定义的函数。The application framework layer provides an application programming interface (API) and a programming framework for the applications of the application layer. The application framework layer includes some predefined functions.
如图15所示,应用程序框架层可以包括窗口管理器,内容提供器,视图系统,电话管理器,资源管理器,通知管理器、媒体服务、多屏交互服务等。As shown in FIG. 15 , the application framework layer may include a window manager, a content provider, a view system, a telephony manager, a resource manager, a notification manager, a media service, a multi-screen interactive service, and the like.
窗口管理器用于管理窗口程序。窗口管理器可以获取显示屏大小,判断是否有状态栏,锁定屏幕,截取屏幕等。A window manager is used to manage window programs. The window manager can get the size of the display screen, determine whether there is a status bar, lock the screen, take screenshots, etc.
内容提供器用来存放和获取数据,并使这些数据可以被应用程序访问。所述数据可以包括视频,图像,音频,拨打和接听的电话,浏览历史和书签,电话簿等。Content providers are used to store and retrieve data and make these data accessible to applications. The data may include video, images, audio, calls made and received, browsing history and bookmarks, phone book, etc.
视图系统包括可视控件,例如显示文字的控件,显示图片的控件等。视图系统可用于构建应用程序。显示界面可以由一个或多个视图组成的。例如,包括短信通知图标的显示界面,可以包括显示文字的视图以及显示图片的视图。The view system includes visual controls, such as controls for displaying text, controls for displaying pictures, and so on. View systems can be used to build applications. A display interface can consist of one or more views. For example, the display interface including the short message notification icon may include a view for displaying text and a view for displaying pictures.
电话管理器用于提供电子设备1400的通信功能。例如通话状态的管理(包括接通,挂断等)。The phone manager is used to provide the communication function of the electronic device 1400 . For example, the management of call status (including connecting, hanging up, etc.).
资源管理器为应用程序提供各种资源,比如本地化字符串,图标,图片,布局文件,视频文件等等。The resource manager provides various resources for the application, such as localization strings, icons, pictures, layout files, video files and so on.
通知管理器使应用程序可以在状态栏中显示通知信息,可以用于传达告知类型的消息,可以短暂停留后自动消失,无需用户交互。比如通知管理器被用于告知下载完成,消息提醒等。通知管理器还可以是以图表或者滚动条文本形式出现在系统顶部状态栏的通知,例如后台运行的应用程序的通知,还可以是以对话窗口形式出现在屏幕上的通知。例如在状态栏提示文本信息,发出提示音,电子设备振动,指示灯闪烁等。The notification manager enables applications to display notification information in the status bar, which can be used to convey notification-type messages, and can disappear automatically after a brief pause without user interaction. For example, the notification manager is used to notify download completion, message reminders, etc. The notification manager can also display notifications in the status bar at the top of the system in the form of graphs or scroll bar text, such as notifications of applications running in the background, and notifications on the screen in the form of dialog windows. For example, text information is prompted in the status bar, a prompt sound is issued, the electronic device vibrates, and the indicator light flashes.
媒体服务例如可以是如图1B所示的媒体服务112,用于支持应用111(例如,视频会议应用、视频播放应用、办公应用或者其他展示应用)的运行,或者可以是如图2B所示的媒体服务213,用于支持应用211(例如,视频会议应用、视频播放应用、办公应用或者其他展示应用)的运行。多屏交互服务例如可以是如图1B所示的多屏交互服务113,用于支持应用111与应用121之间的多屏交互,或者可以是如图2B所示的多屏交互服务214,用于支持应用211与应用212之间的多屏交互。The media service may be, for example, the media service 112 shown in FIG. 1B, which is used to support the running of the application 111 (for example, a video conference application, a video playback application, an office application, or another presentation application), or may be the media service 213 shown in FIG. 2B, which is used to support the running of the application 211 (for example, a video conference application, a video playback application, an office application, or another presentation application). The multi-screen interaction service may be, for example, the multi-screen interaction service 113 shown in FIG. 1B, which is used to support multi-screen interaction between the application 111 and the application 121, or may be the multi-screen interaction service 214 shown in FIG. 2B, which is used to support multi-screen interaction between the application 211 and the application 212.
Android Runtime包括核心库和虚拟机。Android runtime负责安卓系统的调度和管理。Android Runtime includes core libraries and a virtual machine. Android runtime is responsible for scheduling and management of the Android system.
核心库包含两部分:一部分是java语言需要调用的功能函数,另一部分是安卓的核心库。The core library consists of two parts: one part is the functions that the Java language needs to call, and the other part is the Android core library.
应用程序层和应用程序框架层运行在虚拟机中。虚拟机将应用程序层和应用程序框架层的java文件执行为二进制文件。虚拟机用于执行对象生命周期的管理,堆栈管理,线程管理,安全和异常的管理,以及垃圾回收等功能。The application layer and the application framework layer run in virtual machines. The virtual machine executes the java files of the application layer and the application framework layer as binary files. The virtual machine is used to perform functions such as object lifecycle management, stack management, thread management, safety and exception management, and garbage collection.
系统库可以包括多个功能模块。例如:表面管理器(surface manager),媒体库(Media Libraries),三维图形处理库(例如:OpenGL ES),2D图形引擎(例如:SGL)等。A system library can include multiple functional modules. For example: surface manager (surface manager), media library (Media Libraries), 3D graphics processing library (eg: OpenGL ES), 2D graphics engine (eg: SGL), etc.
表面管理器用于对显示子系统进行管理,并且为多个应用程序提供了2D和3D图层的融合。The Surface Manager is used to manage the display subsystem and provides a fusion of 2D and 3D layers for multiple applications.
媒体库支持多种常用的音频,视频格式回放和录制,以及静态图像文件等。媒体库可以支持多种音视频编码格式,例如:MPEG4,H.264,MP3,AAC,AMR,JPG,PNG等。The media library supports playback and recording of a variety of commonly used audio and video formats, as well as still image files. The media library can support a variety of audio and video encoding formats, such as: MPEG4, H.264, MP3, AAC, AMR, JPG, PNG, etc.
三维图形处理库用于实现三维图形绘图,图像渲染,合成,和图层处理等。The 3D graphics processing library is used to implement 3D graphics drawing, image rendering, compositing, and layer processing.
2D图形引擎是2D绘图的绘图引擎。2D graphics engine is a drawing engine for 2D drawing.
内核层是硬件和软件之间的层。内核层至少包含显示驱动,摄像头驱动,音频驱动,传感器驱动。在一些实施例中,内核层可以包括如图1B所示的显示器驱动114、GPU驱动115、蓝牙驱动116或123、以及WIFI驱动117或124;或者可以包括如图2B所示的显示器驱动215、GPU驱动216、蓝牙驱动217、以及WIFI驱动218(出于简化目的,图15中未示出)。The kernel layer is the layer between hardware and software. The kernel layer contains at least the display driver, the camera driver, the audio driver, and the sensor driver. In some embodiments, the kernel layer may include the display driver 114, the GPU driver 115, the Bluetooth driver 116 or 123, and the WIFI driver 117 or 124 as shown in FIG. 1B; or may include the display driver 215, the GPU driver 216, the Bluetooth driver 217, and the WIFI driver 218 as shown in FIG. 2B (not shown in FIG. 15 for simplicity).
下面结合捕获拍照场景,示例性说明电子设备1400软件以及硬件的工作流程。当触摸传感器1480K接收到触摸操作,相应的硬件中断被发给内核层。内核层将触摸操作加工成原始输入事件(包括触摸坐标,触摸操作的时间戳等信息)。原始输入事件被存储在内核层。应用程序框架层从内核层获取原始输入事件,识别该输入事件所对应的控件。以该触摸操作是触摸单击操作,该单击操作所对应的控件为相机应用图标的控件为例,相机应用调用应用框架层的接口,启动相机应用,进而通过调用内核层启动摄像头驱动,通过摄像头1493捕获静态图像或视频。The following exemplarily describes the working procedure of the software and hardware of the electronic device 1400 with reference to a photo-capturing scenario. When the touch sensor 1480K receives a touch operation, a corresponding hardware interrupt is sent to the kernel layer. The kernel layer processes the touch operation into a raw input event (including information such as the touch coordinates and the timestamp of the touch operation). The raw input event is stored at the kernel layer. The application framework layer obtains the raw input event from the kernel layer and identifies the control corresponding to the input event. For example, assuming that the touch operation is a single-tap operation and the control corresponding to the tap is the camera application icon, the camera application calls the interface of the application framework layer to start the camera application, then starts the camera driver by calling the kernel layer, and captures a still image or video through the camera 1493.
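The touch-to-application flow described above (hardware interrupt, raw input event at the kernel layer, control identification at the framework layer) can be sketched in simplified form. All class and attribute names below are hypothetical; Android's real input pipeline is far more involved.

```python
from dataclasses import dataclass
import time

@dataclass
class RawInputEvent:
    x: int            # touch coordinates recorded by the kernel layer
    y: int
    timestamp: float  # when the touch operation occurred

class KernelLayer:
    """Packages a hardware touch interrupt into a raw input event and stores it."""
    def __init__(self):
        self.events = []

    def on_touch_interrupt(self, x, y):
        self.events.append(RawInputEvent(x, y, time.time()))
        return self.events[-1]

class ApplicationFrameworkLayer:
    """Fetches raw input events and identifies the control that was touched."""
    def __init__(self, controls):
        # controls: {control_name: (x0, y0, x1, y1) bounding box}
        self.controls = controls

    def identify_control(self, event):
        for name, (x0, y0, x1, y1) in self.controls.items():
            if x0 <= event.x <= x1 and y0 <= event.y <= y1:
                return name
        return None
```

In the scenario above, identifying the tapped control as the camera application icon would then lead the framework to start the camera application, which in turn starts the camera driver through the kernel layer.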
本公开可以是方法、装置、系统和/或计算机程序产品。计算机程序产品可以包括计算机可读存储介质,其上载有用于执行本公开的各个方面的计算机可读程序指令。The present disclosure may be a method, apparatus, system and/or computer program product. The computer program product may include a computer-readable storage medium having computer-readable program instructions loaded thereon for carrying out various aspects of the present disclosure.
计算机可读存储介质可以是可以保持和存储由指令执行设备使用的指令的有形设备。计算机可读存储介质例如可以是——但不限于——电存储设备、磁存储设备、光存储设备、电磁存储设备、半导体存储设备或者上述的任意合适的组合。计算机可读存储介质的更具体的例子(非穷举的列表)包括:便携式计算机盘、硬盘、随机存取存储器(RAM)、只读存储器(ROM)、可擦式可编程只读存储器(EPROM或闪存)、静态随机存取存储器(SRAM)、便携式压缩盘只读存储器(CD-ROM)、数字多功能盘(DVD)、记忆棒、软盘、机械编码设备、例如其上存储有指令的打孔卡或凹槽内凸起结构、以及上述的任意合适的组合。这里所使用的计算机可读存储介质不被解释为瞬时信号本身,诸如无线电波或者其他自由传播的电磁波、通过波导或其他传输媒介传播的电磁波(例如,通过光纤电缆的光脉冲)、或者通过电线传输的电信号。A computer-readable storage medium may be a tangible device that can hold and store instructions for use by an instruction execution device. The computer-readable storage medium may be, for example, but is not limited to, an electrical storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of computer-readable storage media include: a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disc (DVD), a memory stick, a floppy disk, a mechanically encoded device such as a punch card or an in-groove raised structure on which instructions are stored, and any suitable combination of the foregoing. A computer-readable storage medium, as used herein, is not to be construed as a transient signal per se, such as a radio wave or other freely propagating electromagnetic wave, an electromagnetic wave propagating through a waveguide or other transmission medium (for example, a light pulse passing through a fiber-optic cable), or an electrical signal transmitted through a wire.
这里所描述的计算机可读程序指令可以从计算机可读存储介质下载到各个计算/处理设备,或者通过网络、例如因特网、局域网、广域网和/或无线网下载到外部计算机或外部存储设备。网络可以包括铜传输电缆、光纤传输、无线传输、路由器、防火墙、交换机、网关计算机和/或边缘服务器。每个计算/处理设备中的网络适配卡或者网络接口从网络接收计算机可读程序指令,并转发该计算机可读程序指令,以供存储在各个计算/处理设备中的计算机可读存储介质中。The computer readable program instructions described herein may be downloaded to various computing/processing devices from a computer readable storage medium, or to an external computer or external storage device over a network such as the Internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer-readable program instructions from a network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in each computing/processing device .
用于执行本公开操作的计算机程序指令可以是汇编指令、指令集架构(ISA)指令、机器指令、机器相关指令、微代码、固件指令、状态设置数据、或者以一种或多种编程语言的任意组合编写的源代码或目标代码,所述编程语言包括面向对象的编程语言—诸如Smalltalk、C++等,以及常规的过程式编程语言—诸如“C”语言或类似的编程语言。计算机可读程序指令可以完全地在用户计算机上执行、部分地在用户计算机上执行、作为一个独立的软件包执行、部分在用户计算机上部分在远程计算机上执行、或者完全在远程计算机或服务器上执行。在涉及远程计算机的情形中,远程计算机可以通过任意种类的网络—包括局域网(LAN)或广域网(WAN)—连接到用户计算机,或者,可以连接到外部计算机(例如利用因特网服务提供商来通过因特网连接)。在一些实施例中,通过利用计算机可读程序指令的状态信息来个性化定制电子电路,例如可编程逻辑电路、现场可编程门阵列(FPGA)或可编程逻辑阵列(PLA),该电子电路可以执行计算机可读程序指令,从而实现本公开的各个方面。The computer program instructions for carrying out the operations of the present disclosure may be assembly instructions, instruction set architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or source code or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk and C++, and conventional procedural programming languages such as the "C" language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet by using an Internet service provider). In some embodiments, an electronic circuit, such as a programmable logic circuit, a field-programmable gate array (FPGA), or a programmable logic array (PLA), is personalized by utilizing state information of the computer-readable program instructions, and the electronic circuit can execute the computer-readable program instructions, thereby implementing various aspects of the present disclosure.
这里参照根据本公开实施例的方法、装置(系统)和计算机程序产品的流程图和/或框图描述了本公开的各个方面。应当理解,流程图和/或框图的每个方框以及流程图和/或框图中各方框的组合,都可以由计算机可读程序指令实现。Aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions.
这些计算机可读程序指令可以提供给通用计算机、专用计算机或其它可编程数据处理装置的处理单元,从而生产出一种机器,使得这些指令在通过计算机或其它可编程数据处理装置的处理单元执行时,产生了实现流程图和/或框图中的一个或多个方框中规定的功能/动作的装置。也可以把这些计算机可读程序指令存储在计算机可读存储介质中,这些指令使得计算机、可编程数据处理装置和/或其他设备以特定方式工作,从而,存储有指令的计算机可读介质则包括一个制造品,其包括实现流程图和/或框图中的一个或多个方框中规定的功能/动作的各个方面的指令。These computer-readable program instructions may be provided to a processing unit of a general-purpose computer, a special-purpose computer, or another programmable data processing apparatus to produce a machine, such that the instructions, when executed by the processing unit of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams. These computer-readable program instructions may also be stored in a computer-readable storage medium, and the instructions cause a computer, a programmable data processing apparatus, and/or other devices to operate in a specific manner, so that the computer-readable medium storing the instructions includes an article of manufacture that includes instructions implementing various aspects of the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
也可以把计算机可读程序指令加载到计算机、其它可编程数据处理装置、或其它设备上,使得在计算机、其它可编程数据处理装置或其它设备上执行一系列操作步骤,以产生计算机实现的过程,从而使得在计算机、其它可编程数据处理装置、或其它设备上执行的指令实现流程图和/或框图中的一个或多个方框中规定的功能/动作。The computer-readable program instructions may also be loaded onto a computer, another programmable data processing apparatus, or another device, so that a series of operational steps are performed on the computer, the other programmable data processing apparatus, or the other device to produce a computer-implemented process, thereby causing the instructions executed on the computer, the other programmable data processing apparatus, or the other device to implement the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
附图中的流程图和框图显示了根据本公开的多个实施例的系统、方法和计算机程序产品的可能实现的体系架构、功能和操作。在这点上,流程图或框图中的每个方框可以代表一个模块、程序段或指令的一部分,所述模块、程序段或指令的一部分包含一个或多个用于实现规定的逻辑功能的可执行指令。在有些作为替换的实现中,方框中所标注的功能也可以以不同于附图中所标注的顺序发生。例如,两个连续的方框实际上可以基本并行地执行,它们有时也可以按相反的顺序执行,这依所涉及的功能而定。也要注意的是,框图和/或流程图中的每个方框、以及框图和/或流程图中的方框的组合,可以用执行规定的功能或动作的专用的基于硬件的系统来实现,或者可以用专用硬件与计算机指令的组合来实现。The flowcharts and block diagrams in the accompanying drawings illustrate the possible architecture, functionality, and operation of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowcharts or block diagrams may represent a module, a program segment, or a portion of instructions, which contains one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the blocks may occur out of the order noted in the drawings. For example, two successive blocks may, in fact, be executed substantially in parallel, and they may sometimes be executed in the reverse order, depending on the functions involved. It should also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or actions, or by a combination of dedicated hardware and computer instructions.
以上已经描述了本公开的各实施例,上述说明是示例性的,并非穷尽性的,并且也不限于所披露的各实施例。在不偏离所说明的各实施例的范围和精神的情况下,对于本技术领域的普通技术人员来说许多修改和变更都是显而易见的。本文中所用术语的选择,旨在最好地解释各实施例的原理、实际应用或对市场中的技术的改进,或者使本技术领域的其它普通技术人员能理解本文披露的各实施例。Various embodiments of the present disclosure have been described above, and the foregoing descriptions are exemplary, not exhaustive, and not limiting of the disclosed embodiments. Numerous modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the various embodiments, the practical application or improvement over the technology in the marketplace, or to enable others of ordinary skill in the art to understand the various embodiments disclosed herein.

Claims (18)

  1. 一种多屏交互的系统,其特征在于,包括:A system for multi-screen interaction, comprising:
    第一设备,包括第一屏幕;以及a first device, including a first screen; and
    第二设备,包括第二屏幕,其中:A second device, including a second screen, wherein:
    所述第一设备在所述第一屏幕上显示第一内容,所述第一内容包括多个显示帧;the first device displays first content on the first screen, the first content including a plurality of display frames;
    所述第二设备接收用户的第一触发操作;the second device receives the first trigger operation of the user;
    响应于接收到的所述第一触发操作,所述第二设备向所述第一设备发送用于多屏交互的请求,所述请求中包括请求时间点;In response to the received first trigger operation, the second device sends a request for multi-screen interaction to the first device, where the request includes a request time point;
    所述第一设备根据所述请求,向所述第二设备发送响应,所述响应至少包括所述第一屏幕在所述请求时间点附近的至少一个显示帧;The first device sends a response to the second device according to the request, where the response at least includes at least one display frame of the first screen near the request time point;
    所述第二设备接收所述响应,在所述第二屏幕上显示第一界面,所述第一界面包括所述至少一个显示帧;receiving the response, the second device displays a first interface on the second screen, the first interface including the at least one display frame;
    所述第二设备接收用户输入,所述用户输入指示用户针对所述至少一个显示帧中的目标显示帧的选择和/或编辑;the second device receives user input indicating user selection and/or editing of a target display frame of the at least one display frame;
    所述第二设备接收用户的第二触发操作;the second device receives a second trigger operation of the user;
    响应于接收到的所述第二触发操作,所述第二设备向所述第一设备发送所述目标显示帧或者经编辑的所述目标显示帧;并且In response to receiving the second trigger operation, the second device transmits the target display frame or the edited target display frame to the first device; and
    所述第一设备在所述第一屏幕上显示所述目标显示帧或者经编辑的所述目标显示帧。The first device displays the target display frame or the edited target display frame on the first screen.
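By way of illustration only, the request/response exchange recited in claim 1 may be sketched as follows. This is a minimal in-process simulation, not an implementation of the claimed system; all class, method, and parameter names (including the `window` of frames returned "near" the request time point) are hypothetical and do not appear in the claims:

```python
import bisect


class FirstDevice:
    """Keeps a timestamped history of display frames shown on the first screen."""

    def __init__(self):
        self.history = []    # (timestamp, frame) tuples, appended in time order
        self.current = None  # what the first screen currently shows

    def show(self, timestamp, frame):
        self.history.append((timestamp, frame))
        self.current = frame

    def handle_request(self, request_time, window=2):
        """Respond with the display frames near the requested time point."""
        times = [t for t, _ in self.history]
        i = bisect.bisect_left(times, request_time)
        lo, hi = max(0, i - window), min(len(self.history), i + window)
        return [frame for _, frame in self.history[lo:hi]]

    def handle_share(self, target_frame):
        """Display the (possibly edited) target frame sent by the second device."""
        self.current = target_frame


class SecondDevice:
    def __init__(self, first_device):
        self.first = first_device
        self.candidates = []

    def trigger_interaction(self, request_time):
        # First trigger operation: request frames near the given time point.
        self.candidates = self.first.handle_request(request_time)
        return self.candidates

    def share(self, index, edit=None):
        # Second trigger operation: send the selected, optionally edited, frame back.
        frame = self.candidates[index]
        if edit is not None:
            frame = edit(frame)
        self.first.handle_share(frame)
        return frame
```

A usage run mirrors the claim: the first device displays content frame by frame, the second device requests frames around a time point, selects and edits one, and the first device then displays the edited frame.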
  2. 根据权利要求1所述的系统,其特征在于,其中:The system of claim 1, wherein:
    响应于接收到所述目标显示帧或者经编辑的所述目标显示帧,所述第一设备在所述第一屏幕上显示是否允许所述目标显示帧的分享的提示;In response to receiving the target display frame or the edited target display frame, the first device displays on the first screen a prompt whether to allow sharing of the target display frame;
    所述第一设备接收另一用户输入;并且the first device receives another user input; and
    响应于所述另一用户输入指示用户允许所述分享,所述第一设备在所述第一屏幕上显示所述目标显示帧或者经编辑的所述目标显示帧。In response to the further user input indicating that the user allows the sharing, the first device displays the target display frame or the edited target display frame on the first screen.
  3. 根据权利要求1或2所述的系统,其特征在于,其中:The system of claim 1 or 2, wherein:
    所述第二设备接收用户输入的控制命令;the second device receives a control command input by a user;
    响应于接收到的所述控制命令,所述第二设备向所述第一设备发送所述控制命令,以用于控制所述第一屏幕的显示;并且In response to the received control command, the second device sends the control command to the first device for controlling the display of the first screen; and
    所述第一设备根据所述控制命令,在所述第一屏幕上显示所述目标显示帧或者经编辑的所述目标显示帧。The first device displays the target display frame or the edited target display frame on the first screen according to the control command.
  4. 根据权利要求3所述的系统,其特征在于,所述控制命令包括以下任一项:The system according to claim 3, wherein the control command comprises any one of the following:
    第一控制命令,用于在所述第一屏幕上显示经编辑的所述目标显示帧;a first control command for displaying the edited target display frame on the first screen;
    第二控制命令,用于在所述第一屏幕上显示所述目标显示帧;a second control command for displaying the target display frame on the first screen;
    第三控制命令,用于在所述第一屏幕上显示所述目标显示帧的同时,播放用户针对所述目标显示帧的提问;以及a third control command, used to play the user's question on the target display frame while displaying the target display frame on the first screen; and
    第四控制命令，用于在所述第一屏幕上显示经编辑的所述目标显示帧的同时，播放与所述目标显示帧对应的录音片段。a fourth control command for playing a recording segment corresponding to the target display frame while displaying the edited target display frame on the first screen.
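For illustration only, the four alternative control commands of claim 4 can be modeled as an enumeration with a dispatcher on the first device. The enum members, the `dispatch` function, and its `(frame to display, audio to play)` return convention are assumptions made for this sketch, not part of the claims:

```python
from enum import Enum, auto


class ControlCommand(Enum):
    SHOW_EDITED_FRAME = auto()           # first: display the edited target frame
    SHOW_ORIGINAL_FRAME = auto()         # second: display the unedited target frame
    SHOW_FRAME_WITH_QUESTION = auto()    # third: display the frame, play the user's question
    SHOW_EDITED_WITH_RECORDING = auto()  # fourth: display the edited frame, play the recording segment


def dispatch(command, frame, edited_frame=None, question=None, recording=None):
    """Return (frame to display on the first screen, audio to play, if any)."""
    if command is ControlCommand.SHOW_EDITED_FRAME:
        return edited_frame, None
    if command is ControlCommand.SHOW_ORIGINAL_FRAME:
        return frame, None
    if command is ControlCommand.SHOW_FRAME_WITH_QUESTION:
        return frame, question
    if command is ControlCommand.SHOW_EDITED_WITH_RECORDING:
        return edited_frame, recording
    raise ValueError(f"unknown control command: {command}")
```

The dispatcher makes the pairing explicit: the third and fourth commands each couple a displayed frame with an audio payload, while the first two display a frame silently.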
  5. 根据权利要求1至4中任一项所述的系统,其特征在于,其中所述用户输入还 指示用户针对所述目标显示帧的提问,并且其中:The system of any one of claims 1 to 4, wherein the user input further indicates a user question for the target display frame, and wherein:
    响应于接收到的所述第二触发操作,所述第二设备向所述第一设备发送所述提问;并且In response to receiving the second trigger operation, the second device sends the question to the first device; and
    所述第一设备在所述第一屏幕上显示所述目标显示帧或者经编辑的所述目标显示帧的同时,播放所述提问。The first device plays the question while displaying the target display frame or the edited target display frame on the first screen.
  6. 根据权利要求1至5中任一项所述的系统,其特征在于,其中:The system of any one of claims 1 to 5, wherein:
    所述响应包括与所述至少一个显示帧对应的录音;the response includes a recording corresponding to the at least one display frame;
    所述第一界面包括所述录音的可视表示;并且the first interface includes a visual representation of the recording; and
    所述用户输入指示用户针对所述录音中的录音片段的选择,所述录音片段对应于所述目标显示帧。The user input indicates a user selection of an audio recording segment in the audio recording, the audio recording segment corresponding to the target display frame.
  7. 根据权利要求6所述的系统,其特征在于,其中:The system of claim 6, wherein:
    响应于接收到的所述第二触发操作,所述第二设备向所述第一设备发送所述录音片段;并且In response to receiving the second trigger operation, the second device sends the audio recording segment to the first device; and
    所述第一设备在所述第一屏幕上显示所述目标显示帧或者经编辑的所述目标显示帧的同时,播放所述录音片段。The first device plays the recording segment while displaying the target display frame or the edited target display frame on the first screen.
  8. 根据权利要求1至7中任一项所述的系统,其特征在于,其中:The system according to any one of claims 1 to 7, wherein:
    所述第一设备接收用户的第三触发操作;并且the first device receives a third trigger operation from the user; and
    响应于接收到的所述第三触发操作，所述第一设备停止在所述第一屏幕上显示所述目标显示帧或者经编辑的所述目标显示帧，并且重新在所述第一屏幕上显示所述第一内容。In response to receiving the third trigger operation, the first device stops displaying the target display frame or the edited target display frame on the first screen, and resumes displaying the first content on the first screen.
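The stop-and-restore behavior of claim 8 amounts to the first screen keeping a reference to its original content so the shared frame can be replaced by it again. A minimal sketch (class and attribute names are hypothetical):

```python
class FirstScreen:
    """Tracks what the first screen shows, so the first content can be restored."""

    def __init__(self, first_content):
        self.first_content = first_content  # the first content of claim 1
        self.showing = first_content        # what is currently displayed

    def show_shared_frame(self, frame):
        # Display the target frame (edited or not) received from the second device.
        self.showing = frame

    def stop_sharing(self):
        # Third trigger operation (claim 8): restore the original first content.
        self.showing = self.first_content
```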
  9. 根据权利要求1至8中任一项所述的系统,其特征在于,其中:The system of any one of claims 1 to 8, wherein:
    在所述第二设备向所述第一设备发送所述请求之前,所述第一设备与所述第二设备建立用于多屏交互的连接。Before the second device sends the request to the first device, the first device and the second device establish a connection for multi-screen interaction.
  10. 一种多屏交互的方法,其特征在于,包括:A method for multi-screen interaction, comprising:
    第二设备响应于接收到来自用户的第一触发操作,向第一设备发送用于多屏交互的请求,所述请求中包括请求时间点;In response to receiving the first trigger operation from the user, the second device sends a request for multi-screen interaction to the first device, where the request includes the request time point;
    所述第二设备接收来自所述第一设备的响应,所述响应至少包括所述第一设备的第一屏幕在所述请求时间点附近的至少一个显示帧;The second device receives a response from the first device, the response at least including at least one display frame of the first screen of the first device near the requested time point;
    所述第二设备在所述第二设备的第二屏幕上显示第一界面,所述第一界面包括所述至少一个显示帧;the second device displays a first interface on a second screen of the second device, the first interface includes the at least one display frame;
    所述第二设备接收用户输入,所述用户输入指示用户针对所述至少一个显示帧中的目标显示帧的选择和/或编辑;以及the second device receives user input indicating user selection and/or editing of a target display frame of the at least one display frame; and
    所述第二设备响应于接收到来自用户的第二触发操作，向所述第一设备发送所述目标显示帧或者经编辑的所述目标显示帧，以用于在所述第一屏幕上显示。The second device, in response to receiving a second trigger operation from a user, sends the target display frame or the edited target display frame to the first device for display on the first screen.
  11. 根据权利要求10所述的方法,其特征在于,还包括:The method of claim 10, further comprising:
    所述第二设备接收用户输入的控制命令;以及the second device receives a control command input by a user; and
    响应于接收到的所述控制命令,所述第二设备向所述第一设备发送所述控制命令,以用于控制所述第一屏幕的显示。In response to the received control command, the second device sends the control command to the first device for controlling the display of the first screen.
  12. 根据权利要求11所述的方法,其特征在于,所述控制命令包括以下任一项:The method according to claim 11, wherein the control command comprises any one of the following:
    第一控制命令,用于在所述第一屏幕上显示经编辑的所述目标显示帧;a first control command for displaying the edited target display frame on the first screen;
    第二控制命令,用于在所述第一屏幕上显示所述目标显示帧;a second control command for displaying the target display frame on the first screen;
    第三控制命令,用于在所述第一屏幕上显示所述目标显示帧的同时,播放用户针对所述目标显示帧的提问;以及a third control command, used to play the user's question on the target display frame while displaying the target display frame on the first screen; and
    第四控制命令，用于在所述第一屏幕上显示经编辑的所述目标显示帧的同时，播放与所述目标显示帧对应的录音片段。a fourth control command for playing a recording segment corresponding to the target display frame while displaying the edited target display frame on the first screen.
  13. 根据权利要求10至12中任一项所述的方法，其特征在于，其中所述用户输入还指示用户针对所述目标显示帧的提问，并且所述方法还包括：The method of any one of claims 10 to 12, wherein the user input further indicates a user question for the target display frame, and wherein the method further comprises:
    响应于接收到所述第二触发操作,所述第二设备向所述第一设备发送所述提问。In response to receiving the second trigger operation, the second device sends the question to the first device.
  14. 根据权利要求10至13中任一项所述的方法,其特征在于,其中:The method according to any one of claims 10 to 13, wherein:
    所述响应包括与所述至少一个显示帧对应的录音;the response includes a recording corresponding to the at least one display frame;
    所述第一界面包括所述录音的可视表示;并且the first interface includes a visual representation of the recording; and
    所述用户输入指示用户针对所述录音中的录音片段的选择,所述录音片段对应于所述目标显示帧。The user input indicates a user selection of an audio recording segment in the audio recording, the audio recording segment corresponding to the target display frame.
  15. 根据权利要求14所述的方法,其特征在于,还包括:The method of claim 14, further comprising:
    响应于接收到所述第二触发操作,所述第二设备向所述第一设备发送所述录音片段。In response to receiving the second trigger operation, the second device sends the audio recording segment to the first device.
  16. 根据权利要求10至15中任一项所述的方法,其特征在于,还包括:The method according to any one of claims 10 to 15, further comprising:
    在所述第二设备向所述第一设备发送所述请求之前,所述第二设备与所述第一设备建立用于多屏交互的连接。Before the second device sends the request to the first device, the second device establishes a connection for multi-screen interaction with the first device.
  17. 一种电子设备,包括:An electronic device comprising:
    一个或多个处理器;one or more processors;
    一个或多个存储器;以及one or more memories; and
    一个或多个计算机程序，其中所述一个或多个计算机程序被存储在所述一个或多个存储器中，所述一个或多个计算机程序包括指令，当所述指令被所述电子设备执行时，使得所述电子设备执行根据权利要求10至16中任一项所述的方法。One or more computer programs, wherein the one or more computer programs are stored in the one or more memories and comprise instructions which, when executed by the electronic device, cause the electronic device to perform the method according to any one of claims 10 to 16.
  18. 一种计算机可读存储介质，所述计算机可读存储介质上存储有计算机程序，所述计算机程序被处理器执行时实现根据权利要求1至16中任一项的方法所述的操作。A computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the operations of the method according to any one of claims 1 to 16.
PCT/CN2021/125874 2020-08-24 2021-10-22 Multi-screen interaction system and method, apparatus, and medium WO2022042769A2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010857539.4A CN114185503B (en) 2020-08-24 2020-08-24 Multi-screen interaction system, method, device and medium
CN202010857539.4 2020-08-24

Publications (2)

Publication Number Publication Date
WO2022042769A2 true WO2022042769A2 (en) 2022-03-03
WO2022042769A3 WO2022042769A3 (en) 2022-04-14

Family

ID=80352715

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/125874 WO2022042769A2 (en) 2020-08-24 2021-10-22 Multi-screen interaction system and method, apparatus, and medium

Country Status (2)

Country Link
CN (1) CN114185503B (en)
WO (1) WO2022042769A2 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115826898A (en) * 2023-01-03 2023-03-21 南京芯驰半导体科技有限公司 Cross-screen display method, system, device, equipment and storage medium
EP4387199A1 (en) * 2022-12-15 2024-06-19 Unify Patente GmbH & Co. KG Method for intelligent screen sharing, screen sharing application and system for multi-party and multi-media conferencing

Families Citing this family (2)

Publication number Priority date Publication date Assignee Title
CN115396706B (en) * 2022-08-30 2024-06-04 京东方科技集团股份有限公司 Multi-screen interaction method, device, equipment, vehicle-mounted system and computer storage medium
CN117129085B (en) * 2023-02-28 2024-05-31 荣耀终端有限公司 Ambient light detection method, electronic device and readable storage medium

Family Cites Families (26)

Publication number Priority date Publication date Assignee Title
WO2001052230A1 (en) * 2000-01-10 2001-07-19 Ic Tech, Inc. Method and system for interacting with a display
JP2014127103A (en) * 2012-12-27 2014-07-07 Brother Ind Ltd Material sharing program, terminal device, and material sharing method
US20150153777A1 (en) * 2013-12-03 2015-06-04 Nvidia Corporation Electronic device with both inflexible display screen and flexible display screen
CN104104992A (en) * 2014-07-08 2014-10-15 深圳市同洲电子股份有限公司 Multi-screen interaction method, device and system
CN104902075B (en) * 2015-04-29 2017-02-22 努比亚技术有限公司 Multi-screen interaction method and system
CN105100885A (en) * 2015-06-23 2015-11-25 深圳市美贝壳科技有限公司 Multi-screen interaction method and system for browsing and playing power point (PPT) files
US20170026617A1 (en) * 2015-07-21 2017-01-26 SYLapptech Corporation Method and apparatus for real-time video interaction by transmitting and displaying user interface corresponding to user input
US20170031947A1 (en) * 2015-07-28 2017-02-02 Promethean Limited Systems and methods for information presentation and collaboration
CN105262974A (en) * 2015-08-12 2016-01-20 北京恒泰实达科技股份有限公司 Method for realizing wireless screen sharing of multiple users
US10474412B2 (en) * 2015-10-02 2019-11-12 Polycom, Inc. Digital storyboards using multiple displays for content presentation and collaboration
CN105337998B (en) * 2015-11-30 2019-02-01 东莞酷派软件技术有限公司 A kind of system of multi-screen interactive
CN105760126B (en) * 2016-02-15 2019-03-26 惠州Tcl移动通信有限公司 A kind of multi-screen file sharing method and system
CN105812943B (en) * 2016-03-31 2019-02-22 北京奇艺世纪科技有限公司 A kind of video editing method and system
KR20170117843A (en) * 2016-04-14 2017-10-24 삼성전자주식회사 Multi screen providing method and apparatus thereof
US10587724B2 (en) * 2016-05-20 2020-03-10 Microsoft Technology Licensing, Llc Content sharing with user and recipient devices
CN106209818A (en) * 2016-07-06 2016-12-07 上海电机学院 A kind of wireless interactive electronic whiteboard conference system
CN108459836B (en) * 2018-01-19 2019-05-31 广州视源电子科技股份有限公司 Annotate display methods, device, equipment and storage medium
CN108509237A (en) * 2018-01-19 2018-09-07 广州视源电子科技股份有限公司 Operating method, device and the intelligent interaction tablet of intelligent interaction tablet
CN108958608B (en) * 2018-07-10 2022-07-15 广州视源电子科技股份有限公司 Interface element operation method and device of electronic whiteboard and interactive intelligent equipment
CN110896424B (en) * 2018-09-13 2022-03-29 中兴通讯股份有限公司 Interaction method and device for terminal application and terminal
CN109634495A (en) * 2018-11-01 2019-04-16 华为终端有限公司 Method of payment, device and user equipment
CN109857355A (en) * 2018-12-25 2019-06-07 广州维纳斯家居股份有限公司 Screen sharing method, device, same table and the storage medium of same table
CN110377250B (en) * 2019-06-05 2021-07-16 华为技术有限公司 Touch method in screen projection scene and electronic equipment
CN115629730A (en) * 2019-07-23 2023-01-20 华为技术有限公司 Display method and related device
CN110708426A (en) * 2019-09-30 2020-01-17 上海闻泰电子科技有限公司 Double-screen synchronous display method and device, server and storage medium
CN110928468B (en) * 2019-10-09 2021-06-25 广州视源电子科技股份有限公司 Page display method, device, equipment and storage medium of intelligent interactive tablet

Cited By (3)

Publication number Priority date Publication date Assignee Title
EP4387199A1 (en) * 2022-12-15 2024-06-19 Unify Patente GmbH & Co. KG Method for intelligent screen sharing, screen sharing application and system for multi-party and multi-media conferencing
CN115826898A (en) * 2023-01-03 2023-03-21 南京芯驰半导体科技有限公司 Cross-screen display method, system, device, equipment and storage medium
CN115826898B (en) * 2023-01-03 2023-04-28 南京芯驰半导体科技有限公司 Cross-screen display method, system, device, equipment and storage medium

Also Published As

Publication number Publication date
CN114185503B (en) 2023-09-08
WO2022042769A3 (en) 2022-04-14
CN114185503A (en) 2022-03-15

Similar Documents

Publication Publication Date Title
JP7142783B2 (en) Voice control method and electronic device
US11922005B2 (en) Screen capture method and related device
US11785329B2 (en) Camera switching method for terminal, and terminal
JP7498779B2 (en) Screen display method and electronic device
US11669242B2 (en) Screenshot method and electronic device
WO2021017889A1 (en) Display method of video call appliced to electronic device and related apparatus
JP2022549157A (en) DATA TRANSMISSION METHOD AND RELATED DEVICE
JP7355941B2 (en) Shooting method and device in long focus scenario
WO2021036770A1 (en) Split-screen processing method and terminal device
WO2022042769A2 (en) Multi-screen interaction system and method, apparatus, and medium
CN111240547A (en) Interactive method for cross-device task processing, electronic device and storage medium
WO2022068819A1 (en) Interface display method and related apparatus
WO2022017393A1 (en) Display interaction system, display method, and device
US20230353862A1 (en) Image capture method, graphic user interface, and electronic device
WO2022001619A1 (en) Screenshot method and electronic device
CN114040242A (en) Screen projection method and electronic equipment
WO2021143391A1 (en) Video call-based screen sharing method and mobile device
WO2022042326A1 (en) Display control method and related apparatus
WO2022028537A1 (en) Device recognition method and related apparatus
WO2024045801A1 (en) Method for screenshotting, and electronic device, medium and program product
CN112068907A (en) Interface display method and electronic equipment
WO2021052388A1 (en) Video communication method and video communication apparatus
WO2021037034A1 (en) Method for switching state of application, and terminal device
WO2023045597A1 (en) Cross-device transfer control method and apparatus for large-screen service
CN114079691A (en) Equipment identification method and related device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21860601

Country of ref document: EP

Kind code of ref document: A2

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21860601

Country of ref document: EP

Kind code of ref document: A2