CN115428413A - Notification processing method, electronic equipment and system - Google Patents


Info

Publication number
CN115428413A
Authority
CN
China
Prior art keywords
message
user
electronic device
content
prompt
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202080100013.XA
Other languages
Chinese (zh)
Inventor
刘敏
余平
杜仲
张莉容
党茂昌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from CN202010102903.6A external-priority patent/CN111404802A/en
Priority claimed from CN202010473781.1A external-priority patent/CN113746718B/en
Priority claimed from CN202010844642.5A external-priority patent/CN114173000B/en
Priority claimed from CN202010844414.8A external-priority patent/CN114173204B/en
Priority claimed from CN202010844313.0A external-priority patent/CN114157756A/en
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Publication of CN115428413A publication Critical patent/CN115428413A/en
Pending legal-status Critical Current

Classifications

    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04L — TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L51/00 — User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
    • H04L51/21 — Monitoring or handling of messages
    • H04L51/224 — Monitoring or handling of messages providing notification on incoming messages, e.g. pushed notifications of received messages

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • User Interface Of Digital Computer (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The present application provides a notification processing system, a notification processing method, and an electronic device. The notification processing method includes the following steps: a first device generates a notification and sends a first message, where the first message is used to indicate that the first device has generated the notification; a second device receives the first message and generates a prompt for executing, on a third device, a task corresponding to the notification; and, in response to receiving user input corresponding to the prompt, the second device sends a second message to request that the third device execute the task corresponding to the notification. Embodiments of the present application increase the coordination capability among devices, so that multiple devices can cooperatively process the task in a notification, improving the information processing capability across the devices. The user can also learn of the notification in time, which facilitates subsequent operations.

Description

Notification processing method, electronic equipment and system
The present application claims priority to: Chinese patent application No. 202010102903.6, entitled "notification processing system, method and electronic device", filed on February 19, 2020; Chinese patent application No. 202010473781.1, entitled "a method for sharing contents"; Chinese patent application No. 202010844414.8, entitled "a method for prompting message, electronic device and system", filed on August 20, 2020; Chinese patent application No. 202010844642.5, entitled "a method for replying message, electronic device and system", filed on August 20, 2020; and Chinese patent application No. 202010844313.0, entitled "task processing method and related electronic device", filed on August 20, 2020. The contents of these applications are incorporated herein by reference.
Technical Field
The present application relates to the field of terminals, and in particular, to a notification processing method, an electronic device, and a system.
Background
Currently, users own more and more devices, those devices carry more and more applications and services, and those applications and services send more and more notification messages; users are being inundated with a "flood" of notifications. At the same time, precisely because users own multiple devices, they may fail to receive and process notification messages in time, causing them to miss important notifications and degrading the user experience.
Disclosure of Invention
The present application provides a notification processing system, a notification processing method, and an electronic device, which can increase the coordination capability among devices, enable multiple devices to cooperatively process the task in a notification, and improve the information processing capability across the devices. The user can also learn of the notification in time, which facilitates subsequent operations.
In a first aspect, a notification processing system is provided. The system includes a first device and a second device. The first device is configured to: obtain a notification, and send a first message to the second device when it is determined that the focus of the owner of the first device is on the second device. The second device is configured to: receive the first message and generate a prompt for executing, on a third device, a task corresponding to the notification; and, in response to receiving user input corresponding to the prompt, send a second message to request that the third device execute the task corresponding to the notification.
In this way, when the first device generates a notification, the second device prompts the user that the first device has received the notification and that the corresponding task can be performed on the third device. When the second device detects the user's input on the prompt, it sends the second message so that the task is executed on the third device. The first device and the second device thus have the capability to handle notifications cooperatively, which improves the efficiency of processing notifications among multiple devices.
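The first-aspect flow described above can be sketched as follows. This is an illustrative model only, not code from the patent: all class and method names (`FirstDevice`, `on_first_message`, and so on) are hypothetical, and direct method calls stand in for a real inter-device transport such as Bluetooth or Wi-Fi.

```python
class ThirdDevice:
    """Executes the task corresponding to the notification."""
    def __init__(self):
        self.executed_tasks = []

    def on_second_message(self, task):
        self.executed_tasks.append(task)


class SecondDevice:
    """The device the owner is focused on; it generates the prompt."""
    def __init__(self, third_device):
        self.third_device = third_device
        self.pending_task = None

    def on_first_message(self, task):
        # First message received: remember the task and prompt the user
        # to run it on the third device.
        self.pending_task = task

    def on_user_confirm(self):
        # User input corresponding to the prompt: send the second message.
        self.third_device.on_second_message(self.pending_task)


class FirstDevice:
    """Generates the notification."""
    def __init__(self, second_device):
        self.second_device = second_device

    def generate_notification(self, task):
        # The owner's focus is on the second device, so the first message
        # goes there instead of the notification being shown locally.
        self.second_device.on_first_message(task)


third = ThirdDevice()
second = SecondDevice(third)
first = FirstDevice(second)

first.generate_notification("continue the incoming video call")
second.on_user_confirm()
assert third.executed_tasks == ["continue the incoming video call"]
```

The key design point the sketch captures is that the prompt and the execution happen on different devices, with the second device acting only as the user-facing decision point.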
With reference to the first aspect, in some possible implementations of the first aspect, the third device is configured to execute a task corresponding to the notification. A system formed by the first device, the second device and the third device can work cooperatively, and tasks corresponding to the notifications are processed among the devices.
With reference to the first aspect, in some possible implementation manners of the first aspect, the system further includes the third device, where the second device is specifically configured to: sending the second message to the third device; the third device is to perform the task in response to receiving the second message.
In the embodiment of the application, the second message can be sent to the third device when the second device detects the input of the user, so that the third device can execute the task, and the efficiency of processing the notification among multiple devices is improved.
In some possible implementations, the second message includes content of the task.
With reference to the first aspect, in some possible implementation manners of the first aspect, the system further includes the third device, where the second device is specifically configured to: sending a second message to the first device; the first device is further configured to: in response to receiving the second message, sending a third message to the third device; the third device to perform the task in response to receiving the third message.
In the embodiment of the application, the second message can be sent to the first device when the second device detects the input of the user, so that the first device can send the third message to the third device, the third device can execute the task, and the efficiency of processing notifications among multiple devices is improved.
In some possible implementations, the third message includes the content of the task.
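The relay variant above, in which the second message returns to the first device and the first device forwards a third message carrying the task content, can be sketched in the same hypothetical style (all names are illustrative, not from the patent):

```python
class ThirdDevice:
    def __init__(self):
        self.executed = []

    def on_third_message(self, task):
        # The third message carries the content of the task.
        self.executed.append(task)


class FirstDevice:
    def __init__(self, third_device):
        self.third_device = third_device

    def on_second_message(self, task):
        # In this variant the first device, not the second device,
        # delivers the task to the third device.
        self.third_device.on_third_message(task)


class SecondDevice:
    def __init__(self, first_device):
        self.first_device = first_device

    def on_user_confirm(self, task):
        # Send the second message back to the first device rather than
        # directly to the third device.
        self.first_device.on_second_message(task)


third = ThirdDevice()
first = FirstDevice(third)
second = SecondDevice(first)
second.on_user_confirm("continue video playback")
assert third.executed == ["continue video playback"]
```

Routing through the first device is useful when only the first device holds the task content or a connection to the third device.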
In some possible implementations, the first device is further configured to detect devices within its communication range that can be used to prompt the notification; and, after detecting that the second device can be used to prompt the notification, send the first message to the second device, where the first message includes notification information used to indicate that the first device has generated the notification.
In some possible implementations, the first device is further configured to detect a device within a communication range that can be used to perform the task corresponding to the notification; and after detecting that the third device can be used for executing the task corresponding to the notification, sending the first message to the second device, wherein the first message further comprises third device information, and the third device information is used for generating the prompt at the second device.
In some possible implementations, the second device is further configured to detect a device within a communication range that can be used to perform the task corresponding to the notification; and generating a prompt for executing the task corresponding to the notification in the third device after detecting that the third device can be used for executing the task corresponding to the notification.
In some possible implementations, the second device is further configured to display a first interface in response to the first message. The first interface includes interface elements in one-to-one correspondence with N devices, where the N devices can be used to execute the task corresponding to the notification, the N devices include the third device, and N ≥ 1. Receiving user input corresponding to the prompt specifically includes: receiving the user's trigger operation on the interface element corresponding to the third device.
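The first interface with one element per capable device can be sketched as below. The function names and the dict representation of an interface element are hypothetical placeholders for a real UI toolkit:

```python
def build_first_interface(capable_devices):
    """One interface element per device that can execute the task (N >= 1)."""
    return [{"device": name, "label": f"Run on {name}"}
            for name in capable_devices]


def on_element_triggered(element, send_second_message):
    # The trigger operation on a device's interface element sends the
    # second message naming that device as the executor.
    send_second_message(element["device"])


sent_messages = []
elements = build_first_interface(["tablet", "smart TV", "laptop"])
# The user taps the element corresponding to the third device (the smart TV).
on_element_triggered(elements[1], sent_messages.append)
assert sent_messages == ["smart TV"]
```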
In some possible implementations, the third device is specifically configured to: and running the service corresponding to the notification.
In some possible implementations, the third device is specifically configured to: send a screen projection request to the first device; the first device is further configured to send display data to the third device in response to the screen projection request, where the display data is used to display an interface of the first device on the third device.
With reference to the first aspect, in some possible implementation manners of the first aspect, the first message includes description information of the task, and the second device is specifically configured to: in response to receiving the first message, displaying a first prompt window, the first prompt window including a first control and descriptive information of the task, the first control being associated with the third device; the second message is sent in response to user input to the first control.
In this embodiment of the application, the first message sent by the first device to the second device can carry the description information of the task; the second device displays that description, and the user selects the third device to execute the task content, thereby achieving cross-device task processing. Because the second device can conveniently trigger the third device to process the task through the first prompt window, the user experience is improved.
With reference to the first aspect, in some possible implementations of the first aspect, the second device is further configured to: after sending the second message, displaying a second control; and receiving the input of the user for the second control, and sending a fourth message to request the third device to stop executing the task.
In this embodiment of the application, a stop control is provided on the prompting device (the second device): while the third device is executing the task content, the second device can display the stop control (the second control), and through it the user controls the third device to stop executing the task content. The user can thus pause execution of the task content at any time, which improves the efficiency of processing notifications among multiple devices and thereby the user experience.
With reference to the first aspect, in some possible implementations of the first aspect, the system further includes a fourth device, where the second device is further configured to: after sending the second message, displaying a third control, the third control associated with the fourth device; in response to receiving user input for the third control, a fifth message is sent to request the third device to stop performing the task and to request the fourth device to perform the task.
In this embodiment of the application, a switching control is provided on the prompting device (the second device): while the third device is executing the task content, the second device may display the switching control (the third control), and through it the user controls the third device to stop executing the task content and instructs another device (the fourth device) to execute it instead. This achieves real-time switching: the user can change the device executing the task content at any time, which improves the efficiency of processing notifications among multiple devices and thereby the user experience.
In some possible implementations, the second device is specifically configured to: and in response to receiving the input of the user for the third control, sending first indication information to the third device and sending second indication information to the fourth device, wherein the first indication information is used for indicating the third device to stop executing the task, and the second indication information is used for indicating the fourth device to execute the task.
In some possible implementations, the second device is specifically configured to: in response to receiving user input for the third control, sending the fifth message to the first device; the first device is configured to send, in response to receiving the fifth message, first indication information to the third device and second indication information to the fourth device, where the first indication information is used to instruct the third device to stop executing the task, and the second indication information is used to instruct the fourth device to execute the task.
With reference to the first aspect, in some possible implementations of the first aspect, the first prompt window further includes a fourth control, the fourth control is associated with one or more controls, and each of the one or more controls is associated with an available device other than the third device; the second device is further configured to display the one or more controls upon detecting input directed to the fourth control after displaying the first prompt window.
In this embodiment of the application, a device-list control is provided on the prompting device (the second device): the second device may display a control (the fourth control) that expands a list of candidate continuing devices, through which the user can view the one or more available devices and then autonomously select the device to execute the task content. The user can thus choose the executing device from among multiple devices, which improves the efficiency of processing notifications among multiple devices and thereby the user experience.
In some possible implementations, the first message includes device information of one or more devices associated with one or more controls; the second device, in response to receiving the device information for the one or more devices, displays the fourth control.
With reference to the first aspect, in some possible implementations of the first aspect, the one or more controls include a fifth control, and the fifth control is associated with a fifth device; the second device is further used for deleting the fifth control when the fifth device is no longer included in one or more devices within the communication range of the second device after the first prompt window is displayed; the second device is further configured to detect an input directed to the fourth control, and display one or more controls that do not include the fifth control.
In this embodiment of the application, the controls in the first prompt window may change as device states change. At a first time, the fifth device is available, and the prompting device (the second device) displays the fifth control associated with it; at a second time, the state of the fifth device changes (it is no longer within the communication range of the second device), and the second device deletes the fifth control. Similarly, if the state of the fifth device changes again at a third time (it re-enters the communication range of the second device), the second device displays the fifth control again. Updating the controls according to device state in this way improves timeliness: the user is shown the currently available devices in real time, which improves the user experience.
In some possible implementations, at the second time, if the state of the fifth device changes (is not within the communication range of the first device), the first device instructs the second device to delete the fifth control associated with the fifth device, so that the second device deletes the fifth control associated with the fifth device. Similarly, if the state of the fifth device changes again (within the communication range of the first device) at the third time, the first device instructs the second device to output the fifth control associated with the fifth device, so that the second device outputs the fifth control associated with the fifth device.
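The availability-driven control list described above can be sketched as a pure function of which devices are currently in range. This is an illustrative simplification; a real implementation would react to discovery events rather than recompute the list:

```python
def visible_controls(devices_in_range, executing_device):
    """One control per available device other than the one already executing."""
    return [d for d in devices_in_range if d != executing_device]


# First time: the fifth device is within range, so its control is shown.
assert visible_controls(["speaker", "fifth device"], "third device") == \
    ["speaker", "fifth device"]

# Second time: the fifth device leaves the communication range,
# so its control is deleted from the expanded list.
assert visible_controls(["speaker"], "third device") == ["speaker"]

# Third time: the fifth device re-enters range and its control reappears.
assert visible_controls(["speaker", "fifth device"], "third device") == \
    ["speaker", "fifth device"]
```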
With reference to the first aspect, in some possible implementation manners of the first aspect, the notification is message content in the first application program, where the third device is configured to receive the message content and display a message alert box, where the message alert box includes the message content and a reply control; when the operation that the user replies to the message content is detected, the reply content is sent to the first equipment; the first device is further configured to reply to the message content according to the reply content.
In this embodiment of the present application, after seeing the prompt for the message on the prompting device (the second device), the user may reply to the message content on the continuing device (the third device), and the actual reply is finally completed on the first device. This helps the user receive the message reminder in time and complete the reply, improves the efficiency of processing notifications among multiple devices, spares the user from having to reply to the message on the first device, prevents the user from missing important messages, and improves the user experience.
In some possible implementations, the third device may be a keyboard-equipped device (e.g., a laptop).
With reference to the first aspect, in some possible implementation manners of the first aspect, the third device is further configured to receive indication information, where the indication information is used to instruct the third device to add the reply control to the message content.
With reference to the first aspect, in some possible implementation manners of the first aspect, the message content in the first application includes a first message content and a second message content, where the third device is specifically configured to: receiving the first message content identified by the first identification information and the second message content identified by the second identification information; in response to detecting a first reply content replied by the user to the first message content and detecting a second reply content replied by the user to the second message content, sending the first reply content identified by the first identification information and the second reply content identified by the second identification information to the first device; the first device is specifically configured to: and replying the first message content according to the first reply content identified by the first identification information, and replying the second message content according to the second reply content identified by the second identification information.
In the embodiment of the application, the third device receives the message identified by the identification information, so that when the third device acquires the reply content of the user, the third device can identify the reply content by using the identification information, and the first device can determine the reply content of which message the reply content is directed to. Under the condition that the first equipment receives a plurality of messages, the accuracy of the first equipment in message reply is improved.
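The identification-information mechanism above, in which each forwarded message carries an identifier that the reply echoes back so the first device can match reply to message, can be sketched as follows (class and field names are hypothetical):

```python
import itertools


class MessageRelay:
    """Stands in for the first device's message bookkeeping."""
    def __init__(self):
        self._ids = itertools.count(1)
        self.outbox = {}   # identification info -> original message content
        self.replies = {}  # identification info -> reply content

    def forward(self, content):
        # Tag the message content with identification information before
        # sending it to the third device.
        msg_id = next(self._ids)
        self.outbox[msg_id] = content
        return {"id": msg_id, "content": content}

    def on_reply(self, tagged_reply):
        # The reply carries the same identification information, so the
        # first device knows which message it answers.
        self.replies[tagged_reply["id"]] = tagged_reply["content"]


first = MessageRelay()
m1 = first.forward("Are you free tonight?")
m2 = first.forward("Meeting moved to 3pm")
# The third device replies to both, in any order, echoing each id.
first.on_reply({"id": m2["id"], "content": "Got it"})
first.on_reply({"id": m1["id"], "content": "Yes"})
assert first.replies[m1["id"]] == "Yes"
assert first.replies[m2["id"]] == "Got it"
```

Without the identifiers, two out-of-order replies would be indistinguishable, which is exactly the accuracy problem the embodiment addresses.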
With reference to the first aspect, in some possible implementations of the first aspect, the third device is a device in which the first application is not installed.
With reference to the first aspect, in some possible implementation manners of the first aspect, the second device is specifically configured to: and generating a prompt for executing the task corresponding to the notification in the third equipment according to the equipment information of the second equipment.
In this embodiment of the application, different second devices have different device information and can therefore present prompts to the user in different ways, which helps improve the user's experience when viewing prompts on the prompting device.
With reference to the first aspect, in some possible implementation manners of the first aspect, the device information includes an indication that the second device has a display screen, and the second device is specifically configured to: when the second device detects that it is currently receiving user input, display a first prompt window on the display screen without positioning the cursor in the first prompt window, where the first prompt window includes prompt information used to prompt execution of the task on the third device; or, when the second device detects that it is not currently receiving user input, display the first prompt window on the display screen and position the cursor in the first prompt window.
In this embodiment of the application, for a second device with a display screen, if the second device is receiving user input (that is, the second device is interacting with the user), the second device may refrain from positioning the cursor in the first prompt window, thereby avoiding disturbing the user's current operation and improving the user experience.
With reference to the first aspect, in some possible implementations of the first aspect, the second device is further configured to: when the second device detects that the input of the user is being received, prompt information is displayed through the display screen, and the prompt information is used for reminding the user to position the cursor to the first prompt window through first operation.
In this embodiment of the application, if the second device is receiving user input (that is, interacting with the user), the second device may further prompt the user to move the cursor to the message alert box through a preset operation. In this way, the user can quickly jump to the first prompt window via the prompt information while the user's current operation remains undisturbed, which helps improve the user experience.
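The cursor policy of the two preceding implementations can be sketched as a small decision function. The dict fields and the hint wording are illustrative, not from the patent:

```python
def show_prompt_window(user_is_typing):
    """Decide how to present the first prompt window on a device with a screen."""
    if user_is_typing:
        # The user is mid-input: show the window but leave the cursor
        # where it is, and display a hint about the first operation
        # that jumps to the window.
        return {"window_shown": True, "cursor_in_window": False,
                "hint": "Perform the first operation to jump to the prompt"}
    # The user is idle: place the cursor in the prompt window directly.
    return {"window_shown": True, "cursor_in_window": True, "hint": None}


busy = show_prompt_window(user_is_typing=True)
idle = show_prompt_window(user_is_typing=False)
assert busy["cursor_in_window"] is False and busy["hint"] is not None
assert idle["cursor_in_window"] is True and idle["hint"] is None
```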
With reference to the first aspect, in some possible implementations of the first aspect, the second device is further configured to: determining that the second device is in a non-do-not-disturb mode, or determining that the second device is not currently running a preset application, prior to generating a prompt to perform a task corresponding to the notification in the third device.
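The pre-prompt checks above (not in do-not-disturb mode, not running a preset application) amount to a simple gate, sketched here with hypothetical parameter names:

```python
def should_prompt(do_not_disturb, running_app, preset_apps):
    """Only generate the prompt when neither suppression condition holds."""
    return (not do_not_disturb) and (running_app not in preset_apps)


# Normal case: prompt is generated.
assert should_prompt(False, "browser", {"game", "video player"}) is True
# Do-not-disturb mode suppresses the prompt.
assert should_prompt(True, "browser", {"game", "video player"}) is False
# A preset (e.g. full-screen) application suppresses the prompt.
assert should_prompt(False, "game", {"game", "video player"}) is False
```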
In a second aspect, an electronic device is provided, the electronic device comprising: the device comprises a processor, a memory, a display screen and a communication module; the processor, the communication module, the display screen, and the memory coupled to the processor, the memory for storing computer program code, the computer program code including computer instructions that, when executed by the electronic device, cause the electronic device to perform operations comprising: receiving a first message sent by first equipment; in response to receiving the first message, generating a prompt to execute a task corresponding to the notification in the third device; and sending a second message to request the third device to execute the task corresponding to the notification in response to receiving the input of the user corresponding to the prompt.
In a third aspect, a notification processing method is provided, where the method includes: the second equipment receives a first message sent by the first equipment; the second device generates a prompt for executing a task corresponding to the notification in a third device in response to receiving the first message; the second device sends a second message to request the third device to execute a task corresponding to the notification in response to receiving an input corresponding to the prompt from the user.
With reference to the third aspect, in some possible implementations of the third aspect, the sending the second message includes: and sending the second message to the third device, wherein the second message comprises the content of the task.
With reference to the third aspect, in some possible implementations of the third aspect, the sending the second message includes: and sending the second message to the first device, so that the first device sends a third message to the third device according to the second message, wherein the third message comprises the content of the task.
With reference to the third aspect, in some possible implementation manners of the third aspect, the first message includes description information of the task, and the generating, by the second device, a prompt for executing the task corresponding to the notification in the third device includes: in response to receiving the first message, displaying a first prompt window, the first prompt window including a first control and descriptive information of the task, the first control being associated with the third device; the second message is sent in response to user input to the first control.
With reference to the third aspect, in some possible implementations of the third aspect, the method includes: after sending the second message, displaying a second control; in response to receiving user input for the second control, a fourth message is sent to request that the third device stop performing the task.
With reference to the third aspect, in some possible implementations of the third aspect, the method includes: after sending the second message, displaying a third control, the third control associated with the fourth device; in response to receiving user input for the third control, a fifth message is sent to request the third device to stop performing the task and to request the fourth device to perform the task.
With reference to the third aspect, in some possible implementations of the third aspect, the first prompt window further includes a fourth control, the fourth control is associated with one or more controls, and each of the one or more controls is respectively associated with an available device other than the third device; wherein, the method comprises the following steps: after displaying the first prompt window, when input directed to the fourth control is detected, the one or more controls are displayed.
With reference to the third aspect, in some possible implementations of the third aspect, a fifth control is included in the one or more controls, and the fifth control is associated with a fifth device; wherein, the method also comprises: after the first prompt window is displayed, deleting the fifth control when the fifth device is no longer included in the one or more devices within the communication range of the second device; detecting input directed to the fourth control, displaying one or more controls, the one or more controls not including the fifth control.
With reference to the third aspect, in some possible implementations of the third aspect, the notification is message content in the first application, and the method further includes: and the second device sends indication information to the third device, wherein the indication information is used for indicating the third device to add a reply control to the message content.
With reference to the third aspect, in some possible implementation manners of the third aspect, the generating, by the second device, a prompt for executing a task corresponding to the notification in the third device includes: and generating a prompt for executing the task corresponding to the notification in the third equipment according to the equipment information of the second equipment.
With reference to the third aspect, in some possible implementation manners of the third aspect, the generating, by the second device, a prompt for executing a task corresponding to the notification in the third device includes: when the second device detects that the input of a user is being received, displaying a first prompt window through the display screen without positioning a cursor in the first prompt window, wherein the first prompt window comprises prompt information which is used for prompting the execution of the task in the third device; or when the second device detects that the input of the user is not received, displaying the first prompt window through the display screen and positioning the cursor in the first prompt window.
With reference to the third aspect, in some possible implementations of the third aspect, the method further includes: when the second device detects that the input of the user is being received, prompt information is displayed through the display screen, and the prompt information is used for reminding the user to position the cursor to the first prompt window through a first operation.
With reference to the third aspect, in some possible implementation manners of the third aspect, the method further includes: determining that the second device is in a non-do-not-disturb mode, or determining that the second device is not currently running a preset application, prior to generating a prompt to perform a task corresponding to the notification in the third device.
In a fourth aspect, an electronic device is provided, including: a processor, a memory, a display screen, a communication module, and a notification decision manager; the communication module, the display screen, and the memory are coupled to the processor, and the memory is configured to store computer program code including computer instructions that, when executed by the electronic device, cause the electronic device to perform operations including: generating a notification; detecting, within communication range, devices that can be used to prompt the notification and devices that can be used to execute the task corresponding to the notification, where the second device can be used to prompt the notification and the third device can be used to execute the task corresponding to the notification; and sending a first message to the second device, where the first message includes notification information and third-device information, the notification information is used to prompt that the first device has generated the notification, and the third-device information is used to generate the prompt at the second device.
In a fifth aspect, a notification processing method is provided, and the method includes: the first device acquires a notification; and when the first device determines that the device focused by the owner of the first device is the second device, the first device sends a first message to the second device, so that the second device generates a prompt for executing the task corresponding to the notification on the third device.
In some possible implementations, the first device detects devices within communication range that can be used to prompt the notification, and detects devices within communication range that can be used to execute the task corresponding to the notification; the first device sends the first message to the second device, where the first message includes notification information and third-device information, the notification information is used to prompt that the first device has generated the notification, and the third-device information is used to generate the prompt at the second device.
With reference to the fifth aspect, in some possible implementations of the fifth aspect, the method further includes: the first device receives a second message sent by the second device; and in response to receiving the second message, the first device sends a third message to the third device, the third message including content of the task, to cause the third device to perform the task.
With reference to the fifth aspect, in some possible implementations of the fifth aspect, the method further includes: the first device receives a fourth message sent by the second device; and in response to receiving the fourth message, the first device sends a fifth message to the third device, the fifth message instructing the third device to stop performing the task.
With reference to the fifth aspect, in some possible implementations of the fifth aspect, the method further includes: in response to receiving the fourth message, the first device sends a sixth message to the fourth device, the sixth message instructing the fourth device to perform the task.
With reference to the fifth aspect, in some possible implementations of the fifth aspect, the notification is message content in the first application, the third message includes the message content, and the method further includes: the first device receives reply content, for the message content, sent by the third device; and the first device replies to the message content according to the reply content.
With reference to the fifth aspect, in some possible implementations of the fifth aspect, the method further includes: in response to receiving the second message, the first device sends indication information to the third device, where the indication information is used to instruct the third device to add a reply control to the message content.
With reference to the fifth aspect, in some possible implementations of the fifth aspect, the message content in the first application includes first message content and second message content, and the third message includes the first message content identified by first identification information and the second message content identified by second identification information; the method further includes: receiving, from the third device, first reply content identified by the first identification information and second reply content identified by the second identification information; and replying to the first message content according to the first reply content identified by the first identification information, and replying to the second message content according to the second reply content identified by the second identification information.
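The identification scheme above can be sketched in a few lines of Python. This is an illustrative model only (the function and field names are assumptions, not defined by the patent): each piece of message content forwarded in the third message carries identification information, and a reply echoing the same identifier can be matched back to the message it answers, even if replies arrive out of order.

```python
def build_third_message(messages):
    """Tag each piece of message content with identification information."""
    return [{"id": i, "content": c} for i, c in enumerate(messages)]

def match_replies(third_message, replies):
    """Pair each tagged reply with the message content it replies to."""
    by_id = {item["id"]: item["content"] for item in third_message}
    return [(by_id[r["id"]], r["content"]) for r in replies if r["id"] in by_id]

# Two messages forwarded to the third device; replies come back out of order.
outgoing = build_third_message(["Are you free tonight?", "Send me the report"])
incoming = [{"id": 1, "content": "Attached"}, {"id": 0, "content": "Yes"}]
print(match_replies(outgoing, incoming))
```

Matching by identifier rather than by arrival order is what lets the first device reply to the first message content and the second message content independently.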
In a sixth aspect, there is provided an apparatus including: a receiving unit configured to receive a first message sent by a first device; a generating unit configured to generate a prompt for executing the task corresponding to the notification in a third device; a detection unit configured to receive a user input corresponding to the prompt; and a sending unit configured to send, in response to the input, a second message requesting the third device to execute the task corresponding to the notification.
In a seventh aspect, an apparatus is provided, including: an acquisition unit configured to acquire a notification; a detection unit configured to detect that the device currently focused on by the owner of the first device is a second device; and a sending unit configured to send the first message to the second device, so that the second device generates a prompt for executing a task corresponding to the notification on a third device.
In an eighth aspect, an embodiment of the present application provides a computer storage medium including computer instructions that, when executed on an electronic device, cause the electronic device to execute the notification processing method in any possible implementation of any one of the foregoing aspects.
In a ninth aspect, the present application provides a chip system, which is applied to an electronic device including a memory, a display screen, and a sensor; the chip system comprises: one or more interface circuits and one or more processors; the interface circuit and the processor are interconnected through a line; the interface circuit is used for receiving signals from the memory and sending signals to the processor, and the signals comprise computer instructions stored in the memory; when the processor executes the computer instructions, the electronic device executes the notification processing method in any one of the possible implementation manners of the third aspect and the third aspect; alternatively, when the processor executes the computer instructions, the electronic device executes the notification processing method in any one of the possible implementations of the fifth aspect and the fifth aspect.
In a tenth aspect, an embodiment of the present application provides a computer program product, which, when running on a computer, causes the computer to execute the notification processing method in any one of the possible implementations of the third aspect; alternatively, the computer program product may be configured to, when run on a computer, cause the computer to perform the notification processing method in any of the possible implementations of the fifth aspect.
In an eleventh aspect, an embodiment of the present application provides a task processing method, including: a first device acquires a first message, where the first message includes task content and description information of the task content; the first device displays a first prompt window on a first interface, where the first prompt window includes a first control and the description information of the task content, and the first control is associated with a second device; the first device receives a first input of a user for the first control; and in response to the first input, the first device sends a first instruction to the second device, where the first instruction instructs the second device to execute the task content.
The first device acquires the first message, which may be notification information provided by a third-party server, notification information provided by a system application of the first device, or content originating from another electronic device; for example, another electronic device sharing data content with the first device sends the first device a first message carrying that data content. The first message includes information such as text content, the task content, and a message source, where the text content briefly describes the first message, and the task content includes the data content and an indication to view the data content. Content of the first message other than the task content, such as the text content and the message source, may be referred to as the description information of the task content.
After the first device acquires the first message, the first device determines, from nearby devices, the available devices that support executing the task content of the first message, and displays a first prompt window on the first interface, where the first prompt window prompts the user to select a device to execute the task content. The first prompt window includes a first control and the description information of the task content, and the first control indicates the second device. When the first device detects a user operation on the first control, the first device sends the task content to the second device and instructs the second device to execute it. By implementing the method provided in the eleventh aspect, the first device displays the description information of the task content, and the user selects the second device to execute the task content, thereby realizing cross-device task processing. While the first device is handling one task, the user can conveniently trigger the second device, through the first prompt window, to handle another task, which improves user experience.
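The shape of the first message described above can be made concrete with a small sketch. The field names here are illustrative assumptions (the patent does not fix a wire format): the task content is what a selected device will execute, and the remaining fields serve as the description information shown in the first prompt window.

```python
from dataclasses import dataclass

@dataclass
class FirstMessage:
    text_content: str      # brief explanation of the message, shown to the user
    message_source: str    # e.g. a third-party server or a system application
    task_content: dict     # data content plus an indication to view it

def prompt_window_text(msg: FirstMessage, second_device: str) -> str:
    """Compose the description information and the first control's label."""
    return f"{msg.message_source}: {msg.text_content} [Open on {second_device}]"

msg = FirstMessage("New episode available", "Video app", {"uri": "video://123"})
print(prompt_window_text(msg, "Tablet"))
```

Tapping the rendered control would then trigger sending `msg.task_content` to the second device.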
With reference to the eleventh aspect, in a possible implementation, after the first device sends the first instruction to the second device, the method further includes: the first device displays a second control; the first device receives a second input of the user for the second control; and in response to the second input, the first device sends a second instruction to the second device, where the second instruction instructs the second device to stop executing the task content.
A stop control (the second control) is provided here: while the second device is executing the task content, the first device may output the second control, through which the user controls the second device to stop executing the task content. The user can stop a device from executing the task content at any time, which improves user experience.
With reference to the eleventh aspect, in a possible implementation, after the first device sends the first instruction to the second device, the method further includes: the first device displays a third control, where the third control is associated with a third device; the first device receives a third input of the user for the third control; in response to the third input, the first device sends a second instruction to the second device, where the second instruction instructs the second device to stop executing the task content; and in response to the third input, the first device sends a third instruction to the third device, where the third instruction instructs the third device to execute the task content.
A switching control (the third control) is provided here: while the second device is executing the task content, the first device may output the third control, through which the user controls the second device to stop executing the task content and instructs another device to execute it instead. This achieves real-time switching; the user can switch the device executing the task content at any time, which improves user experience.
Optionally, the third device may restart executing the task content from the beginning, or continue executing it from the execution progress of the second device. For example, while the second device is executing the task content by playing a video, the first device receives a user operation for switching playback to the third device; the first device then instructs the second device to stop playing the video and, at the same time, instructs the third device to play it, where the third device may play from the beginning or continue from the point at which the second device stopped.
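The switch-with-handoff described above can be sketched as follows. This is a minimal illustrative model (class and method names are assumptions): the coordinating device stops the second device, reads back its playback progress, and starts the third device either from that progress or from zero.

```python
class Player:
    """Toy stand-in for a device that can play the task content."""
    def __init__(self, progress=0):
        self.progress = progress   # seconds already played
        self.playing = False
    def stop(self):
        self.playing = False
        return self.progress       # report progress so another device can resume
    def play(self, start_at):
        self.progress = start_at
        self.playing = True

def switch_playback(second, third, resume_from_progress=True):
    """Stop the second device and hand the task to the third device."""
    progress = second.stop()
    start_at = progress if resume_from_progress else 0
    third.play(start_at)
    return start_at

tv, tablet = Player(progress=42), Player()
print(switch_playback(tv, tablet))           # continue from the tv's progress
print(switch_playback(tv, Player(), False))  # or restart from the beginning
```

Whether to resume or restart corresponds to the two options the paragraph above leaves open.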
With reference to the eleventh aspect, in a possible implementation manner, the first interface is a screen locking interface; in response to the first input, the first device sends a first instruction to the second device, which specifically includes: in response to the first input, when the first device detects an unlocking operation for the screen locking interface and the unlocking is successful, the first device sends a first instruction to the second device.
In the scenario where the first prompt window is output on the screen locking interface, when the first device detects the first input, it sends the instruction to execute the task content to the second device only after the first device is unlocked.
With reference to the eleventh aspect, in a possible implementation, the first message includes a task type of the task content, and before the first device displays the first prompt window on the first interface, the method further includes: the first device acquires device information of one or more devices within its communication range; and the first device determines, based on that device information, one or more available devices that support executing the task type of the task content, the available devices including the second device. The second device is a device within the communication range of the first device that supports executing the task type of the task content. The task type may include video tasks, audio tasks, text tasks, and the like; correspondingly, a device supporting video tasks needs display and audio functions, a device supporting audio tasks needs an audio function, and a device supporting text tasks needs a display function.
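The capability matching just described reduces to a set-containment check. The sketch below is illustrative (the capability names and the `available_devices` helper are assumptions): a device is available for a task type only if it supports every function that type requires.

```python
# Functions each task type requires, per the mapping described above.
REQUIRED = {
    "video": {"display", "audio"},
    "audio": {"audio"},
    "text":  {"display"},
}

def available_devices(task_type, devices):
    """devices: mapping of device name -> set of supported functions."""
    needed = REQUIRED[task_type]
    return [name for name, funcs in devices.items() if needed <= funcs]

nearby = {
    "tv":      {"display", "audio"},
    "speaker": {"audio"},
    "e-ink":   {"display"},
}
print(available_devices("video", nearby))  # only the tv has both functions
print(available_devices("audio", nearby))
```

The resulting list is what the first prompt window's device controls would be built from.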
In a possible implementation, the first message includes a list of devices that support executing the task content, and before the first device displays the first prompt window on the first interface, the method further includes: the first device acquires device information of one or more devices within its communication range; and the first device determines, based on the device list, one or more available devices from that device information, the available devices including the second device. The device list in the first message may be a list of device types, for example, computers or tablets; it may be a list of device attributes, for example, devices having display and audio functions; or it may be a list of specific device identifiers, each identifier representing one device.
With reference to the eleventh aspect, in a possible implementation manner, the method further includes: the first device determines an available device with a highest priority among the one or more available devices as the second device. The only available equipment is selected through the priority, the most suitable equipment for executing the task content is provided for the user, and the selection operation of the user is reduced. Wherein the priority information may be set by a user, may be a system default of the first device, may be set by a third party application, may be automatically determined by the first device according to device attributes, and the like.
With reference to the eleventh aspect, in a possible implementation manner, the method further includes: the first device determines an available device having a smallest physical distance from the first device among the one or more available devices as the second device. The only available equipment is selected according to the distance of the physical distance, the most suitable equipment for executing the task content is provided for the user, and the selection operation of the user is reduced.
With reference to the eleventh aspect, in a possible implementation manner, the first prompt window further includes a fourth control, the fourth control is associated with one or more controls, and each of the one or more controls is associated with an available device other than the second device; after the first device displays the first prompt window on the first interface, the method further includes: when the first device detects a fourth input directed to a fourth control, the first device displays one or more controls.
A device-list control is provided here: after the first device determines one or more available devices, it may output a device-list control (the fourth control), through which the user can view the one or more available devices and then independently select the device to execute the task content. The user can choose the executing device from multiple devices, which improves user experience.
With reference to the eleventh aspect, in a possible implementation, a fifth control is included in the one or more controls, the fifth control is associated with a fifth device, and after the first device displays the first prompt window on the first interface, the method further includes: when the fifth device is no longer among the one or more devices within the communication range of the first device, the first device deletes the fifth control; and when the first device detects a fourth input directed to the fourth control, the first device displays the one or more controls, the fifth control not being included among them.
This describes how the controls in the first prompt window may change as device states change. At a first moment, the fifth device is an available device, and the first device outputs the fifth control associated with it; at a second moment, the state of the fifth device changes (it is no longer within communication range of the first device), and the first device deletes the fifth control. Similarly, if the state of the fifth device changes again at a third moment (it is back within communication range), the first device outputs the fifth control again. Updating the output controls according to device state in this way improves timeliness, provides the user with the currently available devices in real time, and improves user experience.
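The behavior above amounts to recomputing the control list from the current in-range devices. A minimal sketch (names are illustrative): one control per available device other than the second device, so a control appears or disappears as its device enters or leaves communication range.

```python
def refresh_controls(in_range_devices, second_device):
    """One control per available device other than the second device."""
    return [f"control:{d}" for d in in_range_devices if d != second_device]

# First moment: the "watch" (fifth device) is in range, so its control exists.
print(refresh_controls(["tablet", "tv", "watch"], "tablet"))
# Second moment: the "watch" leaves communication range; its control is removed.
print(refresh_controls(["tablet", "tv"], "tablet"))
```

Re-deriving the list on every scan, rather than editing it incrementally, also handles the third moment (the device returning to range) for free.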
With reference to the eleventh aspect, in one possible implementation manner, the second device and the first device are the same device. That is, the user may select the first device to perform the task content.
With reference to the eleventh aspect, in one possible implementation manner, the first device and the second device log in to the same account or an associated account of the same account.
With reference to the eleventh aspect, in one possible implementation manner, the first message includes: mail notification information, video application notification information, instant messaging message notification information, and video call notification information.
With reference to the eleventh aspect, in one possible implementation manner, the first device is a mobile phone or a watch, and the second device is a computer or a tablet or a television.
In a twelfth aspect, an embodiment of the present application provides an electronic device, which may be the first device, including one or more processors, one or more memories, and a touch screen. The one or more memories are coupled to the one or more processors and are configured to store computer program code, the computer program code including computer instructions that, when executed by the one or more processors, cause the electronic device to perform: acquiring a first message, where the first message includes task content and description information of the task content; displaying a first prompt window on a first interface, where the first prompt window includes a first control and the description information of the task content, and the first control is associated with a second device; receiving a first input of a user for the first control; and in response to the first input, sending a first instruction to the second device, where the first instruction instructs the second device to execute the task content.
The electronic device acquires the first message, which may be notification information provided by a third-party server, notification information provided by a system application of the electronic device, or content originating from another electronic device; for example, another electronic device sharing data content with the electronic device sends it a first message carrying that data content. The first message includes information such as text content, the task content, and a message source, where the text content briefly describes the first message, and the task content includes the data content and an indication to view the data content. Content of the first message other than the task content, such as the text content and the message source, may be referred to as the description information of the task content.
After the electronic device acquires the first message, it determines, from nearby devices, the available devices that support executing the task content of the first message, and displays a first prompt window on the first interface, where the first prompt window prompts the user to select a device to execute the task content. The first prompt window includes a first control and the description information of the task content, and the first control indicates the second device. When the electronic device detects a user operation on the first control, it sends the task content to the second device and instructs the second device to execute it. The electronic device displays the description information of the task content, and the user selects the second device to execute the task content, thereby realizing cross-device task processing. While the electronic device is handling one task, the user can conveniently trigger the second device, through the first prompt window, to handle another task, which improves user experience.
With reference to the twelfth aspect, in a possible implementation, after sending the first instruction to the second device, the electronic device further performs: displaying a second control; receiving a second input of the user for the second control; and in response to the second input, sending a second instruction to the second device, where the second instruction instructs the second device to stop executing the task content.
A stop control (the second control) is provided here: while the second device is executing the task content, the electronic device may output the second control, through which the user controls the second device to stop executing the task content. The user can stop a device from executing the task content at any time, which improves user experience.
With reference to the twelfth aspect, in a possible implementation, after sending the first instruction to the second device, the electronic device further performs: displaying a third control, where the third control is associated with a third device; receiving a third input of the user for the third control; in response to the third input, sending a second instruction to the second device, where the second instruction instructs the second device to stop executing the task content; and in response to the third input, sending a third instruction to the third device, where the third instruction instructs the third device to execute the task content.
A switching control (the third control) is provided here: while the second device is executing the task content, the electronic device may output the third control, through which the user controls the second device to stop executing the task content and instructs another device to execute it instead. This achieves real-time switching; the user can switch the device executing the task content at any time, which improves user experience.
Optionally, the third device may restart executing the task content from the beginning, or continue executing it from the execution progress of the second device. For example, while the second device is executing the task content by playing a video, the first device receives a user operation for switching playback to the third device; the first device then instructs the second device to stop playing the video and, at the same time, instructs the third device to play it, where the third device may play from the beginning or continue from the point at which the second device stopped.
With reference to the twelfth aspect, in a possible implementation, the first interface is a screen locking interface, and the sending, in response to the first input, a first instruction to the second device specifically includes: in response to the first input, sending the first instruction to the second device after an unlocking operation for the screen locking interface is detected and unlocking succeeds.
In the scenario where the first prompt window is output on the screen locking interface, when the electronic device detects the first input, it sends the instruction to execute the task content to the second device only after the electronic device is unlocked.
With reference to the twelfth aspect, in a possible implementation, the first message includes a task type of the task content, and before displaying the first prompt window on the first interface, the electronic device further performs: acquiring device information of one or more devices within the communication range of the electronic device; and determining, based on that device information, one or more available devices that support executing the task type of the task content, the available devices including the second device. The second device is a device within the communication range of the electronic device that supports executing the task type of the task content. The task type may include video tasks, audio tasks, text tasks, and the like; correspondingly, a device supporting video tasks needs display and audio functions, a device supporting audio tasks needs an audio function, and a device supporting text tasks needs a display function.
In a possible implementation, the first message includes a list of devices that support executing the task content, and before displaying the first prompt window on the first interface, the electronic device further performs: acquiring device information of one or more devices within the communication range of the electronic device; and determining, based on the device list, one or more available devices from that device information, the available devices including the second device. The device list in the first message may be a list of device types, for example, computers or tablets; it may be a list of device attributes, for example, devices having display and audio functions; or it may be a list of specific device identifiers, each identifier representing one device.
With reference to the twelfth aspect, in a possible implementation, the electronic device further performs: determining the available device with the highest priority among the one or more available devices as the second device. Selecting a single available device by priority provides the user with the device most suitable for executing the task content and reduces the user's selection operations. The priority information may be set by the user, may be a system default of the electronic device, may be set by a third-party application, may be automatically determined by the electronic device according to device attributes, and the like.
With reference to the twelfth aspect, in a possible implementation, the electronic device further performs: determining the available device with the smallest physical distance from the first device among the one or more available devices as the second device. Selecting a single available device by physical distance provides the user with the device most suitable for executing the task content and reduces the user's selection operations.
With reference to the twelfth aspect, in a possible implementation manner, the first prompt window further includes a fourth control, the fourth control is associated with one or more controls, and each of the one or more controls is associated with an available device other than the second device; after the first prompt window is displayed on the first interface, the electronic equipment further executes: when a fourth input is detected for a fourth control, one or more controls are displayed.
Here, a control for selecting from a device list is provided. After the electronic device determines one or more available devices, it may output a device-list selection control (the fourth control), through which the user may view the one or more available devices and then autonomously select the device for executing the task content. This achieves autonomous selection: the user can choose the executing device from multiple devices, improving user experience.
With reference to the twelfth aspect, in a possible implementation manner, the one or more controls include a fifth control, the fifth control is associated with a fifth device, and after the first prompt window is displayed on the first interface, the electronic device further performs: deleting the fifth control when the fifth device is no longer included in the one or more devices within the communication range of the electronic device; a fourth input directed to a fourth control is detected, and one or more controls are displayed, the fifth control not being included in the one or more controls.
It is described herein that the controls in the first prompt window may change as the state of the devices changes. At a first moment, the fifth device is available, and the electronic device outputs the fifth control associated with the fifth device; at a second moment, the state of the fifth device changes (it is no longer within the communication range of the electronic device), and the electronic device deletes the fifth control associated with the fifth device. Similarly, if the state of the fifth device changes again at a third moment (it returns to the communication range of the electronic device), the electronic device outputs the fifth control again. In this way, the controls that are output change with the device state, improving timeliness: the latest available devices can be provided to the user in real time, improving user experience.
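The three-moment behavior above can be sketched as a window that reconciles its controls against the set of devices currently in communication range. The class and method names are illustrative assumptions, not from the application.

```python
class PromptWindow:
    """Minimal sketch: device-selection controls track device state."""

    def __init__(self):
        self.controls = {}          # device id -> control label

    def refresh(self, in_range_ids):
        # Delete controls for devices no longer in communication range.
        for dev_id in list(self.controls):
            if dev_id not in in_range_ids:
                del self.controls[dev_id]
        # Output controls for devices that are (or are again) in range.
        for dev_id in in_range_ids:
            self.controls.setdefault(dev_id, f"Open on {dev_id}")


win = PromptWindow()
win.refresh({"tablet", "tv"})      # first moment: both devices available
win.refresh({"tv"})                # second moment: tablet leaves range
print(sorted(win.controls))        # ['tv']
win.refresh({"tablet", "tv"})      # third moment: tablet returns
print(sorted(win.controls))        # ['tablet', 'tv']
```

A production version would be driven by discovery callbacks rather than explicit `refresh` calls, but the reconcile step is the same.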
With reference to the twelfth aspect, in a possible implementation manner, the second device and the electronic device are the same device. That is, the user may select the electronic device to perform the task content.
With reference to the twelfth aspect, in one possible implementation manner, the electronic device and the second device log in to the same account or an associated account of the same account.
With reference to the twelfth aspect, in one possible implementation manner, the first message includes: mail notification information, video application notification information, instant messaging message notification information, and video call notification information.
With reference to the twelfth aspect, in one possible implementation manner, the electronic device is a mobile phone or a watch, and the second device is a computer or a tablet or a television.
In a thirteenth aspect, an embodiment of the present application provides a task processing system, including a first device and a second device, where the first device is configured to obtain a first message, and the first message includes task content and description information of the task content; the first device is also used for displaying a first prompt window on the first interface, the first prompt window comprises a first control and description information of task content, and the first control is associated with the second device; the first device is also used for receiving a first input of a user for the first control; the first device is also used for responding to the first input and sending a first instruction to the second device; and the second equipment is used for executing the task content based on the received first instruction.
The first device obtains a first message, where the first message may be notification information provided by a third-party server, notification information provided by a system application of the first device, or may originate from another electronic device; for example, when another electronic device shares data content with the first device, it sends the first device a first message carrying the data content. The first message includes information such as text content, task content, and message source, where the text content briefly describes the first message, and the task content includes the data content and is used to indicate viewing of the data content. The content of the first message other than the task content may be referred to as description information of the task content, such as the text content and the message source.
After the first device acquires the first message, the first device determines, from the peripheral devices, available devices that support executing the task content of the first message. A first prompt window is displayed on the first interface, and the first prompt window is used to prompt the user to select a device to execute the task content. The first prompt window includes a first control and the description information of the task content, the first control indicates the second device, and when the first device detects a user operation on the first control, the first device sends the task content to the second device and instructs the second device to execute the task content. The first device displays the description information of the task content, and the user selects the second device to execute the task content, realizing cross-device task processing. When the first device is processing one task, the second device can conveniently be triggered through the first prompt window to process another task, improving user experience.
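The thirteenth-aspect flow (first message in, first input on the first control, first instruction out) can be sketched end to end. The message structure, instruction format, and transport stand-in below are all illustrative assumptions; the application does not define a wire format.

```python
import json

# A first message: task content plus its description information.
first_message = {
    "text": "New video from your subscription",          # description info
    "source": "VideoApp",                                 # description info
    "task": {"action": "play", "uri": "video://12345"},   # task content
}


def on_first_input(send_to_second_device):
    # The user tapped the first control in the first prompt window:
    # build the first instruction and send it to the second device.
    instruction = {"cmd": "execute_task", "task": first_message["task"]}
    send_to_second_device(json.dumps(instruction))


def second_device_receive(payload):
    # Stand-in for the second device: parse the first instruction and
    # execute the task content it carries.
    instruction = json.loads(payload)
    print("second device executes:", instruction["task"]["action"])


on_first_input(second_device_receive)   # second device executes: play
```

The first prompt window itself would show only the description information (`text`, `source`); the task content travels in the instruction.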
With reference to the thirteenth aspect, in a possible implementation manner, the first device is further configured to: after the first instruction is sent to the second device, displaying a second control; the first device is also used for receiving second input of the user for the second control; the first device is also used for responding to the second input and sending a second instruction to the second device; and the second device is also used for stopping executing the task content based on the received second instruction.
With reference to the thirteenth aspect, in a possible implementation manner, the system further includes a third device; the first device is further configured to display a third control after sending the first instruction to the second device, where the third control is associated with the third device; the first device is further configured to receive a third input of the user for the third control; the first device is further configured to send a second instruction to the second device in response to the third input; the second device is further configured to stop executing the task content based on the received second instruction; the first device is further configured to send a third instruction to the third device in response to the third input; and the third device is configured to execute the task content based on the received third instruction.
With reference to the thirteenth aspect, in a possible implementation manner, the first interface is a screen locking interface; and the first device is used for responding to the first input, and sending a first instruction to the second device after the unlocking operation aiming at the screen locking interface is detected and the unlocking is successful.
With reference to the thirteenth aspect, in a possible implementation manner, the first message includes a task type of the task content; the first device is further used for acquiring device information of one or more devices within the communication range of the first device before the first prompt window is displayed on the first interface; the first device is further used for determining one or more available devices supporting the task type of the task content to be executed based on the device information of one or more devices within the communication range of the first device, and the available devices comprise the second device.
With reference to the thirteenth aspect, in a possible implementation manner, the first message includes a list of devices that support execution of task content; the first device is further used for acquiring device information of one or more devices within the communication range of the first device before the first prompt window is displayed on the first interface; the first device is further configured to determine one or more available devices from the device information of the one or more devices within the communication range of the first device based on the device list, where the available devices include the second device.
With reference to the thirteenth aspect, in a possible implementation manner, the first device is further configured to determine, as the second device, an available device with a highest priority in the one or more available devices.
With reference to the thirteenth aspect, in a possible implementation manner, the first device is further configured to determine, as the second device, an available device with a smallest physical distance from the first device among the one or more available devices.
With reference to the thirteenth aspect, in a possible implementation manner, the first prompt window further includes a fourth control, the fourth control is associated with one or more controls, and each of the one or more controls is associated with an available device other than the second device; the first device is further configured to display one or more controls upon detecting a fourth input directed to a fourth control after displaying the first prompt window on the first interface.
With reference to the thirteenth aspect, in one possible implementation manner, a fifth control is included in the one or more controls, and the fifth control is associated with a fifth device; the first device is further used for deleting the fifth control when the fifth device is no longer included in the one or more devices within the communication range of the first device after the first prompt window is displayed on the first interface; the first device is further configured to detect a fourth input directed to a fourth control, and display one or more controls that do not include the fifth control.
With reference to the thirteenth aspect, in a possible implementation manner, the second device and the first device are the same device.
With reference to the thirteenth aspect, in a possible implementation manner, the first device and the second device log in to the same account or an associated account of the same account.
With reference to the thirteenth aspect, in a possible implementation manner, the first message includes: mail notification information, video application notification information, instant messaging message notification information, and video call notification information.
With reference to the thirteenth aspect, in a possible implementation manner, the first device is a mobile phone or a watch, and the second device is a computer, a tablet, or a television.
In a fourteenth aspect, an embodiment of the present application provides a computer storage medium including computer instructions, where when the computer instructions are run on an electronic device, the electronic device is caused to perform the task processing method in the eleventh aspect or any possible implementation manner of the eleventh aspect.

In a fifteenth aspect, the present application provides a chip system applied to an electronic device including a memory, a display screen, and a sensor; the chip system includes one or more interface circuits and one or more processors; the interface circuits and the processors are interconnected through lines; the interface circuits are configured to receive signals from the memory and send the signals to the processors, where the signals include the computer instructions stored in the memory; and when the processors execute the computer instructions, the electronic device performs the task processing method in the eleventh aspect or any possible implementation manner of the eleventh aspect.

In a sixteenth aspect, the present application provides a computer program product, which, when run on a computer, causes the computer to perform the task processing method in the eleventh aspect or any possible implementation manner of the eleventh aspect.
In a seventeenth aspect, there is provided an apparatus comprising: the device comprises an acquisition unit, a processing unit and a processing unit, wherein the acquisition unit is used for acquiring a first message, and the first message comprises task content and description information of the task content; the display unit is used for displaying a first prompt window on the first interface, the first prompt window comprises a first control and description information of task content, and the first control is associated with the second equipment; the detection unit is used for receiving a first input of a user for the first control; and the sending unit is used for responding to the first input and sending a first instruction to the second equipment, wherein the first instruction is used for instructing the second equipment to execute the task content.
In an eighteenth aspect, a system is provided that includes a first electronic device to receive a message and a second electronic device; the first electronic device is further configured to send the message and indication information to the second electronic device when it is determined that the device focused by the owner of the first electronic device is the second electronic device, where the indication information is used to indicate the second electronic device to add a reply control to the message; the second electronic device is used for displaying a message reminding frame, and the message reminding frame comprises the message and the reply control; the second electronic device is further used for sending reply content to the first electronic device when the operation of replying the message by the user is detected; the first electronic device is further configured to reply to the message according to the reply content.
In this embodiment of the application, when it is determined that the device focused on by the owner of the first electronic device is not the first electronic device but the second electronic device, the first electronic device may send the message and the indication information to the second electronic device, so that the second electronic device can present to the user a message reminding frame including the message and a reply control. In this way, the owner of the first electronic device can conveniently complete a virtual reply to the message on the second electronic device. The second electronic device may send the reply content to the first electronic device so that the first electronic device completes the true reply to the message. Therefore, the user can receive the message prompt in time and complete the reply without returning to the first electronic device, which helps prevent the user from missing important messages.
In some possible implementations, the second electronic device may be a device in a device list in the first electronic device; alternatively, the second electronic device may be a device surrounding the first electronic device.
In some possible implementation manners, the first electronic device is specifically configured to reply to the message according to the reply content and an API provided by an application program corresponding to the message; or replying the message according to the reply content and the drag event.
In some possible implementations, the indication information may be a shortcut reply flag attribute.
With reference to the eighteenth aspect, in some implementations of the eighteenth aspect, the first electronic device is specifically configured to: sending the message identified by the identification information to the second electronic equipment; the second electronic device is specifically configured to: and sending the reply content identified by the identification information to the first electronic equipment.
In the embodiment of the application, the first electronic device may send the message identified by the identification information to the second electronic device, so that when the second electronic device obtains the user's reply content, it can identify the reply content with the same identification information, allowing the first electronic device to determine which message the reply content is for. When the first electronic device has received multiple messages, this improves the accuracy of the first electronic device's replies.
In some possible implementations, the identification information may be a notification ID attribute and/or a notification channel attribute.
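The correlation scheme can be sketched as a pending-message table keyed by the identification information (here a bare notification ID; the real attribute set may also include a notification channel). All names are illustrative assumptions.

```python
pending = {}   # notification id -> original message awaiting a reply


def forward(notification_id, message):
    """First device: remember the message, then send it tagged with its ID."""
    pending[notification_id] = message
    return {"id": notification_id, "message": message}   # payload to 2nd device


def on_reply(reply):
    """First device: use the ID carried back to find the right message."""
    original = pending.pop(reply["id"])
    return f'replying to "{original}" with "{reply["content"]}"'


forward(101, "Lunch today?")
forward(102, "Meeting moved to 3pm")
# The reply carries ID 102, so it is matched to the second message even
# though two messages are pending.
print(on_reply({"id": 102, "content": "Got it"}))
```

Without the identification information, a first device holding several pending messages could not tell which one the reply content answers.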
With reference to the eighteenth aspect, in some implementations of the eighteenth aspect, the first electronic device is further configured to display a first interface before receiving the message; after replying to the message, the first interface is displayed.
In the embodiment of the application, after the first electronic device completes the true reply to the message, it can restore the interface displayed before the message was received, so that the first electronic device's true reply process is imperceptible to the user, which helps improve user experience.
With reference to the eighteenth aspect, in some implementations of the eighteenth aspect, the content of the message is text information, the second electronic device includes a voice function, and the second electronic device is further configured to: after receiving the message, the user is prompted for the text information by voice.
In the embodiment of the application, for a second electronic device with a voice function, when the message received from the first electronic device is a text message, the second electronic device may prompt the user by voice that a message has been received, or prompt the user of the content of the message. Through the voice prompt, the user obtains the prompt of the message without checking the screen of the second electronic device, which helps prevent the user from missing important messages and improves user experience.
In some possible implementations, the second electronic device is a car machine or a smart speaker.
With reference to the eighteenth aspect, in some implementations of the eighteenth aspect, the second electronic device is specifically configured to: before sending the reply content to the first electronic device, collect voice information of the user's reply; and send the voice information to the first electronic device, or send text information corresponding to the voice information to the first electronic device.
In some possible implementation manners, if the second electronic device sends the collected voice information to the first electronic device, the first electronic device may reply to the message with the voice information, or with text information corresponding to the voice information.
With reference to the eighteenth aspect, in some implementations of the eighteenth aspect, the reply content includes a file.
In the embodiment of the application, the user can reply on the second electronic device with a multimedia file (such as a picture or a video) or an office file (such as a Word or Excel document), which helps increase the diversity of the user's reply content and improves the user's experience when replying to a message.
With reference to the eighteenth aspect, in some implementations of the eighteenth aspect, the first electronic device is further configured to: before the message and the indication information are sent to the second electronic equipment, determining the type of the message as a message type set by a user; or, before the message and the indication information are sent to the second electronic device, determining that the message is an Instant Messaging (IM) type message.
In the embodiment of the application, before forwarding the message to the second electronic device, the first electronic device may determine that the message is of a message type set by the user or is an IM-type message, which helps avoid forwarding unimportant messages to the second electronic device, thereby avoiding their interference with the user and improving user experience.
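The pre-forwarding filter can be sketched as a single predicate over the message type. The type names and the shape of the user's setting are illustrative assumptions.

```python
# Types the user has chosen to forward (assumed example setting).
USER_SET_TYPES = {"mail"}


def should_forward(message):
    """Forward only user-set types or instant messaging (IM) messages."""
    return message["type"] in USER_SET_TYPES or message["type"] == "im"


print(should_forward({"type": "im",    "text": "hi"}))     # True
print(should_forward({"type": "mail",  "text": "report"})) # True
print(should_forward({"type": "promo", "text": "sale!"}))  # False
```

Messages failing the predicate stay on the first electronic device, so promotional and other unimportant notifications never interrupt the user on the second device.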
With reference to the eighteenth aspect, in some implementations of the eighteenth aspect, the first electronic device is further configured to: before sending the message and the indication information to the second electronic device, determine that the second electronic device is a device set by the user to receive forwarded messages; or, before sending the message and the indication information to the second electronic device, determine that the account logged in on the first electronic device is associated with the account logged in on the second electronic device.
In the embodiment of the application, before forwarding the message to the second electronic device, the first electronic device may first determine that the second electronic device is a device set by the user to receive forwarded messages, or that the account logged in on the first electronic device is associated with the account logged in on the second electronic device. This helps prevent the message from being forwarded to a device not authorized by the user, improves the security of message forwarding, prevents the user's privacy from being leaked, and helps improve user experience.
With reference to the eighteenth aspect, in some implementations of the eighteenth aspect, the first electronic device is specifically configured to: when determining that the device which is currently focused by the owner of the first electronic device is not the first electronic device, sending request information to the second electronic device, wherein the request information is used for requesting the second electronic device to determine whether the owner of the first electronic device focuses on the second electronic device; the second electronic device is further configured to send response information to the first electronic device according to the request information, where the response information is used to indicate that the currently focused device of the owner of the first electronic device is the second electronic device; the first electronic device is specifically configured to: in response to receiving the response information, the message is sent to the second electronic device.
In this embodiment of the application, the second electronic device may be a device in a message forwarding list of the first electronic device, and the first electronic device and the second electronic device may be devices under the same account (or may store information of the same user). After receiving the request information, the second electronic device may collect user features and match them against the user features preset in the second electronic device; if the matching succeeds, it determines that the device currently focused on by the owner of the first electronic device is the second electronic device, and sends the response information to the first electronic device. This improves the security of message forwarding, prevents users other than the owner of the first electronic device from receiving the prompt of the message, and improves user experience.
In some possible implementation manners, after receiving the request message, the second electronic device may first determine whether the first electronic device and the second electronic device are devices under the same account. If the second electronic device determines that the first electronic device and the second electronic device are devices under the same account, the second electronic device can acquire feature information of a user focusing on the second electronic device through the user feature acquisition device, the acquired user feature information is matched with user feature information preset in the second electronic device, and if the matching is successful, the response information is sent to the first electronic device.
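The focus check described above (same-account check, then feature collection and matching, then a response) can be sketched as follows. The equality match is a deliberate stand-in for real biometric matching, and every name is an illustrative assumption.

```python
# Assumed state on the second electronic device.
PRESET_USER_FEATURE = "owner-face-template"
SAME_ACCOUNT = True


def collect_user_feature():
    # Stand-in for a user feature collection apparatus (e.g. a camera)
    # sensing whoever is currently focused on the second device.
    return "owner-face-template"


def handle_focus_request():
    """Second device: answer the first device's focus request.

    Respond only if both devices are under the same account and the
    collected feature matches the preset feature; otherwise stay silent.
    """
    if not SAME_ACCOUNT:
        return None                      # not a same-account device
    if collect_user_feature() == PRESET_USER_FEATURE:
        return {"focused": True}         # response information
    return None                          # someone else is at this device


print(handle_focus_request())            # {'focused': True}
```

On receiving this response information, the first electronic device would proceed to forward the message; a `None` result leaves the message (and its prompt) on the first device only.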
With reference to the eighteenth aspect, in some implementations of the eighteenth aspect, the second electronic device includes a user characteristic acquisition apparatus, and the first electronic device is specifically configured to: when determining that the device which is currently focused by the owner of the first electronic device is not the first electronic device, sending request information to the second electronic device, wherein the request information is used for requesting the second electronic device to determine whether the owner of the first electronic device focuses on the second electronic device; the second electronic device is further configured to: acquiring user characteristics through the user characteristic acquisition device according to the request information and sending the user characteristics to the first electronic equipment; the first electronic device is specifically configured to: determining that the user characteristic matches a preset user characteristic in the first electronic device before sending the message to the second electronic device.
In this embodiment of the present application, the second electronic device may be a device in a message forwarding list of the first electronic device, and the first electronic device and the second electronic device may not be devices under the same account (or may store information of different users); alternatively, the second electronic device may be a device surrounding the first electronic device. Thus, after receiving the request information, the second electronic device can collect, through the user feature collection apparatus, the feature information of the user focusing on the second electronic device and send it to the first electronic device. The first electronic device matches the feature information against the user feature information preset in the first electronic device, and if the matching succeeds, forwards the message to the second electronic device. This improves the security of message forwarding, prevents users other than the owner of the first electronic device from receiving the prompt of the message, and improves user experience.
With reference to the eighteenth aspect, in some implementations of the eighteenth aspect, the message is a message of a first application, and the second electronic device is a device without the first application installed.
In this embodiment of the application, if the second electronic device is a device without the first application program installed, the second electronic device may still add a reply control to the message reminding box after receiving the indication information, so that the second electronic device can obtain the user's reply content through the content the user inputs and the detected operation of tapping the reply control, and send the reply content to the first electronic device, thereby completing the true reply to the message on the first electronic device. This helps the user receive the message prompt in time, helps prevent the user from missing important messages, and helps improve user experience.
In a nineteenth aspect, a method for replying to a message is provided, and the method is applied to a first electronic device, and includes: the first electronic equipment receives a message; when the device focused by the owner of the first electronic device is determined to be a second electronic device, the first electronic device sends the message and indication information to the second electronic device, wherein the indication information is used for indicating the second electronic device to add a reply control to the message; the first electronic equipment receives reply content of the message sent by the second electronic equipment; and the first electronic equipment replies the message according to the reply content.
In this embodiment of the application, when it is determined that the device focused on by the owner of the first electronic device is not the first electronic device but the second electronic device, the first electronic device may send the message and the indication information to the second electronic device, so that the second electronic device can present to the user a message reminding frame including the message and a reply control. In this way, the owner of the first electronic device can conveniently complete a virtual reply to the message on the second electronic device. The second electronic device may send the reply content to the first electronic device so that the first electronic device completes the true reply to the message. Therefore, the user can receive the message prompt in time and complete the reply without returning to the first electronic device, which helps prevent the user from missing important messages.
With reference to the nineteenth aspect, in some implementations of the nineteenth aspect, the sending, by the first electronic device, the message to the second electronic device includes: the first electronic equipment sends the message identified by the identification information to the second electronic equipment; wherein, the receiving, by the first electronic device, the reply content to the message sent by the second electronic device includes: and the first electronic equipment receives the reply content which is sent by the second electronic equipment and identified by the identification information.
In the embodiment of the application, the first electronic device may send the message identified by the identification information to the second electronic device, so that when the second electronic device obtains the user's reply content, it can identify the reply content with the same identification information, allowing the first electronic device to determine which message the reply content is for. When the first electronic device has received multiple messages, this improves the accuracy of the first electronic device's replies.
In some possible implementations, the identification information may be a notification ID attribute and/or a notification channel attribute.
With reference to the nineteenth aspect, in certain implementations of the nineteenth aspect, the method further comprises: before receiving the message, the first electronic device displays a first interface; after replying to the message, the first electronic device displays the first interface.
In the embodiment of the application, after the first electronic device completes the true reply to the message, it can restore the interface displayed before the message was received, so that the first electronic device's true reply process is imperceptible to the user, which helps improve user experience.
With reference to the nineteenth aspect, in some implementations of the nineteenth aspect, the reply content is voice information, or the reply content is text information corresponding to the voice information; the voice information is the user's reply as collected by the second electronic device.
In this embodiment of the application, for a second electronic device with a voice function, when the message received from the first electronic device is a text message, the second electronic device may prompt the user by voice that a message has been received, or prompt the user by voice with the content of the message. The voice prompt spares the user from having to check the screen of the second electronic device and helps prevent the user from missing important messages, thereby improving the user experience.
In some possible implementations, the second electronic device is an in-vehicle head unit or a smart speaker.
With reference to the nineteenth aspect, in some implementations of the nineteenth aspect, the reply content is a file.
In this embodiment of the application, the user can reply on the second electronic device with a multimedia file (such as a picture or a video) or an office file (such as a Word or Excel document), which helps increase the diversity of the user's reply content and improves the user's experience when replying to a message.
With reference to the nineteenth aspect, in certain implementations of the nineteenth aspect, the method further comprises: before sending the message and the indication information to the second electronic device, the first electronic device determines that the type of the message is a message type set by a user; or, before sending the message and the indication information to the second electronic device, the first electronic device determines that the message is an Instant Messaging (IM) type message.
In this embodiment of the application, before forwarding the message to the second electronic device, the first electronic device may determine that the message is of a type set by the user or is an IM-type message. This helps prevent unimportant messages from also being forwarded to the second electronic device, avoiding interference to the user and improving the user experience.
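The filtering step described above can be sketched as a simple predicate evaluated before forwarding. This is an illustrative assumption: the type labels and the settings structure are invented for the example and are not taken from the patent.

```python
# Illustrative sketch only: forward a message to the second device only if
# its type is one the user selected in settings, or it is an instant
# messaging (IM) message. Type names here are hypothetical labels.

USER_SELECTED_TYPES = {"IM", "SMS"}  # stand-in for the user's settings

def should_forward(message_type: str, user_types=USER_SELECTED_TYPES) -> bool:
    """Return True if the message should be forwarded to the second device."""
    return message_type == "IM" or message_type in user_types

print(should_forward("IM"))         # True: IM messages are always forwarded
print(should_forward("SMS"))        # True: a type the user selected
print(should_forward("PROMOTION"))  # False: unimportant, kept on the first device
```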
With reference to the nineteenth aspect, in certain implementations of the nineteenth aspect, the method further comprises: before sending the message and the indication information to the second electronic device, determining, by the first electronic device, that the second electronic device is a device set by the user to receive forwarded messages; or, before sending the message and the indication information to the second electronic device, determining, by the first electronic device, that the account logged in on the first electronic device is associated with the account logged in on the second electronic device.
In this embodiment of the application, before forwarding the message to the second electronic device, the first electronic device may first determine that the second electronic device is a device set by the user to receive forwarded messages, or determine that the account logged in on the first electronic device is associated with the account logged in on the second electronic device. This helps prevent the message from being forwarded to a device not authorized by the user, improves the security of message forwarding, avoids disclosure of the user's privacy, and helps improve the user experience.
With reference to the nineteenth aspect, in some implementations of the nineteenth aspect, before the first electronic device sends the message and the indication information to the second electronic device, the method further includes: when the first electronic device determines that the device currently focused on by the owner of the first electronic device is not the first electronic device, sending, by the first electronic device, request information to the second electronic device, where the request information is used to request the second electronic device to determine whether the owner of the first electronic device is focusing on the second electronic device; and receiving, by the first electronic device, response information sent by the second electronic device, where the response information is used to indicate that the device currently focused on by the owner of the first electronic device is the second electronic device.
In this embodiment of the application, the second electronic device may be a device in a message forwarding list of the first electronic device, and may be a device under the same account as the first electronic device (or the two devices may store information of the same user). After receiving the request information, the second electronic device may collect user characteristics and match them against the user characteristics preset in the second electronic device; if the matching succeeds, it determines that the device currently focused on by the owner of the first electronic device is the second electronic device, and sends the response information to the first electronic device. This improves the security of message forwarding and prevents users other than the owner of the first electronic device from receiving the prompt of the message, thereby improving the user experience.
In some possible implementations, after receiving the request information, the second electronic device may first determine whether the first electronic device and the second electronic device are devices under the same account. If so, the second electronic device may collect, through a user characteristic collection apparatus, the characteristic information of the user focusing on the second electronic device, and match the collected information against the user characteristic information preset in the second electronic device; if the matching succeeds, the second electronic device sends the response information to the first electronic device.
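The two checks described above (same-account verification, then user-characteristic matching) can be sketched as one decision function. This is an illustrative assumption: the account strings and the feature "matching" (a plain equality test here, standing in for face or voiceprint comparison) are invented for the example.

```python
# Illustrative sketch only: the second device answers the focus request
# affirmatively only if (1) it shares an account with the first device and
# (2) the features collected from the user currently focusing on it match
# the preset owner template. Equality stands in for real biometric matching.

def handle_focus_request(request_account: str,
                         local_account: str,
                         collected_features: str,
                         preset_features: str) -> bool:
    """Return True (i.e. send response information) only when both checks pass."""
    if request_account != local_account:
        return False  # not a same-account device: do not respond
    return collected_features == preset_features  # stand-in for feature matching

# Owner of the first device is looking at the second device:
print(handle_focus_request("user@home", "user@home", "owner_face", "owner_face"))
# A different user is looking at it, so no response is sent:
print(handle_focus_request("user@home", "user@home", "guest_face", "owner_face"))
```

Only when both conditions hold does the second device confirm focus, which is what prevents the message prompt from reaching users other than the owner.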
With reference to the nineteenth aspect, in some implementations of the nineteenth aspect, the second electronic device includes a user characteristic collection apparatus, and before the first electronic device sends the message and the indication information to the second electronic device, the method further includes: when the first electronic device determines that the device currently focused on by the owner of the first electronic device is not the first electronic device, sending, by the first electronic device, request information to the second electronic device, where the request information is used to request the second electronic device to determine whether the owner of the first electronic device is focusing on the second electronic device; receiving, by the first electronic device, the user characteristics sent by the second electronic device, where the user characteristics are collected by the second electronic device through the user characteristic collection apparatus; and determining, by the first electronic device, that the user characteristics match the user characteristics preset in the first electronic device.
In this embodiment of the application, the second electronic device may be a device in a message forwarding list of the first electronic device that is not under the same account as the first electronic device (or the two devices may store information of different users); alternatively, the second electronic device may be a device in the vicinity of the first electronic device. After receiving the request information, the second electronic device can collect, through the user characteristic collection apparatus, the characteristic information of the user focusing on the second electronic device and send it to the first electronic device. The first electronic device then matches this characteristic information against the user characteristic information preset in the first electronic device, and forwards the message to the second electronic device only if the matching succeeds. This improves the security of message forwarding and prevents users other than the owner of the first electronic device from receiving the prompt of the message, thereby improving the user experience.
In a twentieth aspect, a method for replying to a message is provided, and the method is applied to a second electronic device, and includes: the second electronic equipment receives a message and indication information sent by the first electronic equipment, wherein the indication information is used for indicating the second electronic equipment to add a reply control to the message; the second electronic equipment displays a message reminding frame, wherein the message reminding frame comprises the message and the reply control; and when the operation that the user replies to the message is detected, the second electronic equipment sends reply content to the first electronic equipment.
In this embodiment of the application, when it is determined that the device focused on by the owner of the first electronic device is not the first electronic device but the second electronic device, the first electronic device may send the message and the indication information to the second electronic device, so that the second electronic device can present to the user a message reminding frame that includes the message and a reply control. This makes it convenient for the owner of the first electronic device to complete a virtual reply to the message on the second electronic device. The second electronic device may then send the reply content to the first electronic device, so that the first electronic device completes the real reply to the message. In this way, the user can receive the prompt of the message in time and complete the reply without returning to the first electronic device, and is less likely to miss important messages.
With reference to the twentieth aspect, in certain implementations of the twentieth aspect, the method further includes: receiving, by the second electronic device, the message that is sent by the first electronic device and identified by identification information; and the sending, by the second electronic device, of the reply content to the first electronic device includes: sending, by the second electronic device, the reply content identified by the identification information to the first electronic device.
In this embodiment of the application, the first electronic device may send the message identified by the identification information to the second electronic device, so that when the second electronic device obtains the user's reply content, it can tag the reply content with the same identification information, and the first electronic device can determine which message the reply content is for. When the first electronic device has received a plurality of messages, this improves the accuracy with which the first electronic device replies to each message.
In some possible implementations, the identification information may be a notification ID attribute and/or a notification channel attribute.
With reference to the twentieth aspect, in some implementations of the twentieth aspect, the content of the message is text information, the second electronic device has a voice function, and the method further includes: after receiving the message, prompting, by the second electronic device, the text information to the user by voice.
In this embodiment of the application, for a second electronic device with a voice function, when the message received from the first electronic device is a text message, the second electronic device may prompt the user by voice that a message has been received, or prompt the user by voice with the content of the message. The voice prompt spares the user from having to check the screen of the second electronic device and helps prevent the user from missing important messages, thereby improving the user experience.
In some possible implementations, the second electronic device is an in-vehicle head unit or a smart speaker.
With reference to the twentieth aspect, in some implementations of the twentieth aspect, the sending, by the second electronic device, of the reply content to the first electronic device includes: collecting, by the second electronic device, the voice information of the user's reply; and sending, by the second electronic device, the voice information to the first electronic device, or sending text information corresponding to the voice information to the first electronic device.
In some possible implementations, if the second electronic device sends the collected voice information to the first electronic device, the first electronic device may reply to the message with the voice information, or with the text information corresponding to the voice information.
With reference to the twentieth aspect, in some implementations of the twentieth aspect, the reply content includes a file.
In this embodiment of the application, the user can reply on the second electronic device with a multimedia file (such as a picture or a video) or an office file (such as a Word or Excel document), which helps increase the diversity of the user's reply content and improves the user's experience when replying to a message.
With reference to the twentieth aspect, in some implementations of the twentieth aspect, before the second electronic device receives the message and the indication information sent by the first electronic device, the method further includes: the second electronic device receives request information sent by the first electronic device, wherein the request information is used for requesting the second electronic device to determine whether the owner of the first electronic device focuses on the second electronic device; and the second electronic equipment sends response information to the first electronic equipment according to the request information, wherein the response information is used for indicating that the equipment currently focused by the owner of the first electronic equipment is the second electronic equipment.
In this embodiment of the application, the second electronic device may be a device in a message forwarding list of the first electronic device, and may be a device under the same account as the first electronic device (or the two devices may store information of the same user). After receiving the request information, the second electronic device may collect user characteristics and match them against the user characteristics preset in the second electronic device; if the matching succeeds, it determines that the device currently focused on by the owner of the first electronic device is the second electronic device, and sends the response information to the first electronic device. This improves the security of message forwarding and prevents users other than the owner of the first electronic device from receiving the prompt of the message, thereby improving the user experience.
With reference to the twentieth aspect, in some implementations of the twentieth aspect, the second electronic device includes a user characteristic collection apparatus, and before the second electronic device receives the message and the indication information sent by the first electronic device, the method further includes: receiving, by the second electronic device, request information sent by the first electronic device, where the request information is used to request the second electronic device to determine whether the owner of the first electronic device is focusing on the second electronic device; and collecting, by the second electronic device, the user characteristics through the user characteristic collection apparatus according to the request information, and sending the user characteristics to the first electronic device.
In this embodiment of the application, the second electronic device may be a device in a message forwarding list of the first electronic device that is not under the same account as the first electronic device (or the two devices may store information of different users); alternatively, the second electronic device may be a device in the vicinity of the first electronic device. After receiving the request information, the second electronic device can collect, through the user characteristic collection apparatus, the characteristic information of the user focusing on the second electronic device and send it to the first electronic device. The first electronic device then matches this characteristic information against the user characteristic information preset in the first electronic device, and forwards the message to the second electronic device only if the matching succeeds. This improves the security of message forwarding and prevents users other than the owner of the first electronic device from receiving the prompt of the message, thereby improving the user experience.
With reference to the twentieth aspect, in some implementations of the twentieth aspect, the message is a message of a first application, and the second electronic device is a device in which the first application is not installed.
In this embodiment of the application, even if the second electronic device does not have the first application installed, it can still add a reply control to the message reminding frame after receiving the indication information. The second electronic device can then obtain the user's reply content through the input provided by the user and the detected operation of tapping the reply control, and send the reply content to the first electronic device, which completes the real reply to the message. This helps the user receive the prompt of the message in time and complete the reply, and prevents the user from missing important messages, thereby improving the user experience.
In a twenty-first aspect, an apparatus for replying to a message is provided, where the apparatus is disposed at a first electronic device, and the apparatus includes: a first receiving unit for receiving a message; a sending unit, configured to send the message and indication information to a second electronic device when it is determined that an apparatus focused by an owner of the first electronic device is the second electronic device, where the indication information is used to indicate the second electronic device to add a reply control to the message; a second receiving unit, configured to receive reply content to the message sent by the second electronic device; and the reply unit is used for replying the message according to the reply content.
In a twenty-second aspect, an apparatus for replying to a message is provided, where the apparatus is disposed on a second electronic device, and the apparatus includes: the receiving unit is used for receiving a message and indication information sent by first electronic equipment, wherein the indication information is used for indicating the second electronic equipment to add a reply control to the message; the display unit is used for displaying a message reminding frame, and the message reminding frame comprises the message and the reply control; and the sending unit is used for sending the reply content to the first electronic equipment when the operation of replying the message by the user is detected.
In a twenty-third aspect, there is provided an electronic device comprising: one or more processors; a memory; and one or more computer programs. Wherein the one or more computer programs are stored in the memory, the one or more computer programs comprising instructions. The instructions, when executed by the electronic device, cause the electronic device to perform the method of replying to a message in any one of the possible implementations of the nineteenth aspect.
A twenty-fourth aspect provides an electronic device comprising: one or more processors; a memory; and one or more computer programs. Wherein the one or more computer programs are stored in the memory, the one or more computer programs comprising instructions. The instructions, when executed by the electronic device, cause the electronic device to perform the method of replying to a message in any one of the possible implementations of the twentieth aspect.
A twenty-fifth aspect provides a computer program product comprising instructions which, when run on a first electronic device, cause the first electronic device to perform the method of replying to a message of the nineteenth aspect; alternatively, when the computer program product is run on a second electronic device, the second electronic device is caused to execute the method for replying to a message according to the twentieth aspect.
A twenty-sixth aspect provides a computer-readable storage medium, comprising instructions which, when run on a first electronic device, cause the first electronic device to perform the method of replying to a message of the nineteenth aspect; alternatively, when the instructions are executed on a second electronic device, the second electronic device is caused to execute the method for replying to a message according to the twentieth aspect.
A twenty-seventh aspect provides a chip for executing instructions, wherein when the chip runs, the chip executes the method for replying to a message according to the nineteenth aspect; or, when the chip is running, the chip executes the method for replying to the message according to the twentieth aspect.
In a twenty-eighth aspect, a system is provided, including a first electronic device and a second electronic device. The first electronic device is configured to receive a message; the first electronic device is further configured to send the message to the second electronic device when it is determined that the device focused on by the owner of the first electronic device is the second electronic device; and the second electronic device is configured to prompt the message to the user according to the device information of the second electronic device.
In this embodiment of the application, when the first electronic device determines that the device focused on by the user is not the first electronic device but the second electronic device, it can forward the message to the second electronic device, and the second electronic device can prompt the message to the user according to its own device information. This helps the user receive the prompt of the message in time and prevents the user from missing important messages; meanwhile, because different second electronic devices have different device information, prompts can be presented to the user in different ways, improving the experience of receiving messages.
In some possible implementations, the device information of the second electronic device may include, but is not limited to: hardware capability information of the second electronic device (for example, whether the second electronic device is a large-screen device or an in-vehicle head unit), and the current state of the device (for example, whether the device is currently interacting with the user, or whether the device is in an immersive state).
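The device-information-driven prompting above can be sketched as a dispatch over capabilities and state. This is an illustrative assumption: the field names (`immersive`, `voice_only`, `has_display`, `interacting`) and the prompt-mode labels are invented for the example.

```python
# Illustrative sketch only: choose how the second device prompts the message
# based on its device information (hardware capabilities and current state).

def choose_prompt(device_info: dict) -> str:
    """Map a device-info dict to a prompt mode (labels are hypothetical)."""
    if device_info.get("immersive"):
        return "suppress"      # do not disturb a device in an immersive state
    if device_info.get("voice_only"):
        return "voice"         # e.g. a smart speaker or in-vehicle head unit
    if device_info.get("has_display"):
        # If the user is interacting, show the box without grabbing the cursor.
        return "box_no_focus" if device_info.get("interacting") else "box_focus"
    return "none"

print(choose_prompt({"voice_only": True}))                        # voice
print(choose_prompt({"has_display": True, "interacting": True}))  # box_no_focus
print(choose_prompt({"has_display": True}))                       # box_focus
```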
In some possible implementations, the second electronic device may be a device in a device list in the first electronic device; alternatively, the second electronic device may be a device surrounding the first electronic device.
With reference to the twenty-eighth aspect, in some implementations of the twenty-eighth aspect, the device information includes that the second electronic device is provided with a display screen, and the second electronic device is specifically configured to display a message reminding frame through the display screen, where the message reminding frame includes the message but does not include a reply control.
In this embodiment of the application, for a second electronic device with a display screen, the second electronic device does not display a reply control when prompting the message to the user; if the user wishes to reply, the user can view the message in the notification center of the second electronic device. This helps the user receive the prompt of the message in time and prevents the user from missing important messages, thereby improving the user experience.
With reference to the twenty-eighth aspect, in some implementations of the twenty-eighth aspect, the device information includes that the second electronic device has a display screen, and the second electronic device is specifically configured to: when detecting that input from the user is being received, display a message reminding frame through the display screen without positioning the cursor in the message reminding frame, where the message reminding frame includes the message; or, when detecting that no input from the user is being received, display the message reminding frame through the display screen with the cursor positioned in the message reminding frame.
In this embodiment of the application, for a second electronic device with a display screen, if the second electronic device is receiving input from the user (or is interacting with the user) when it receives the message forwarded by the first electronic device, the second electronic device may refrain from positioning the cursor in the message reminding frame, so as not to disrupt the user's current operation, which helps improve the user experience.
With reference to the twenty-eighth aspect, in some implementations of the twenty-eighth aspect, the second electronic device is further configured to: when detecting that input from the user is being received, display prompt information through the display screen, where the prompt information is used to prompt the user to position the cursor in the message reminding frame through a first operation.
In this embodiment of the application, if the second electronic device is receiving input from the user (or is interacting with the user) when it receives the message forwarded by the first electronic device, the second electronic device may further prompt the user to position the cursor in the message reminding frame through a preset operation. In this way, the user's current operation is not disrupted, yet the user can quickly move the cursor to the message reminding frame by following the prompt information, making it convenient to reply to the message and helping improve the user experience.
In some possible implementations, that the prompt information is used to prompt the user to position the cursor in the message reminding frame through a first operation includes: the prompt information is used to prompt the user to position the cursor on the reply control in the message reminding frame through the first operation.
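The cursor behavior and the hint above can be sketched together. This is an illustrative assumption: the return structure and the specific shortcut named in the hint are invented for the example (the patent only speaks of an unspecified "first operation").

```python
# Illustrative sketch only: if the user is typing, show the reminding frame
# without stealing the cursor and display a hint naming the operation that
# moves the cursor to the reply control; otherwise, focus the frame directly.

def show_alert(receiving_input: bool) -> dict:
    """Decide cursor placement and hint for the message reminding frame."""
    if receiving_input:
        return {"box_shown": True, "cursor_in_box": False,
                "hint": "Press Ctrl+R to move the cursor to the reply control"}
    return {"box_shown": True, "cursor_in_box": True, "hint": None}

print(show_alert(True))   # user is typing: frame shown, cursor untouched, hint shown
print(show_alert(False))  # user is idle: frame shown with cursor inside it
```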
With reference to the twenty-eighth aspect, in some implementations of the twenty-eighth aspect, the device information includes that the second electronic device has a voice function, and the second electronic device is specifically configured to: prompt the user by voice that the message has been received; or prompt the user by voice with the content of the message.
In this embodiment of the application, for a second electronic device with a voice function, when the message received from the first electronic device is a text message, the second electronic device may prompt the user by voice that a message has been received, or prompt the user by voice with the content of the message. The voice prompt spares the user from having to check the screen of the second electronic device and helps prevent the user from missing important messages, thereby improving the user experience.
In some possible implementations, the second electronic device is an in-vehicle head unit or a smart speaker.
With reference to the twenty-eighth aspect, in some implementations of the twenty-eighth aspect, the first electronic device is further configured to send, to the second electronic device, indication information for instructing the second electronic device to add a reply control to the message; the second electronic device is specifically configured to display a message reminding frame according to the device information and the indication information, where the message reminding frame includes the message and the reply control; the second electronic equipment is also used for sending reply content to the first electronic equipment when the operation of replying the message by the user is detected; the first electronic device is further configured to reply to the message according to the reply content.
In this embodiment of the application, when it is determined that the device focused on by the owner of the first electronic device is not the first electronic device but the second electronic device, the first electronic device may send the message and the indication information to the second electronic device, so that the second electronic device can present to the user a message reminding frame that includes the message and a reply control. This makes it convenient for the owner of the first electronic device to complete a virtual reply to the message on the second electronic device. The second electronic device may then send the reply content to the first electronic device, so that the first electronic device completes the real reply to the message. In this way, the user can receive the prompt of the message in time and complete the reply without returning to the first electronic device, and is less likely to miss important messages.
In some possible implementations, the second electronic device is a large-screen device. If the second electronic device is receiving input from the user when it receives the indication information, it may display the message reminding frame without positioning the cursor on the reply control in the message reminding frame; if it is not receiving input from the user, it may display the message reminding frame with the cursor positioned on the reply control in the message reminding frame.
With reference to the twenty-eighth aspect, in some implementations of the twenty-eighth aspect, the message is a message of a first application, and the second electronic device is a device that does not have the first application installed.
In this embodiment of this application, even if the second electronic device does not have the first application installed, the second electronic device can still add the reply control to the message reminder box after receiving the indication information, obtain the reply content entered by the user when it detects the user's operation of tapping the reply control, and send the reply content to the first electronic device, so that the true reply to the message is completed on the first electronic device. Therefore, the user can receive the message in time and is less likely to miss important messages, which improves user experience.
With reference to the twenty-eighth aspect, in some implementations of the twenty-eighth aspect, the device information indicates that the second electronic device is provided with a camera, and the second electronic device is further configured to: when the second electronic device detects, through the camera, that only the owner of the first electronic device is focusing on the second electronic device, prompt the content of the message; or when the second electronic device detects, through the camera, that a plurality of users including the owner of the first electronic device are focusing on the second electronic device, prompt that the message has been received.
In this embodiment of this application, for a second electronic device equipped with a camera, if the second electronic device detects through the camera that only the owner of the first electronic device is focusing on it, the second electronic device may prompt the user with the content of the message; if a plurality of users including the owner of the first electronic device are focusing on it, the second electronic device may prompt only that a message has been received. This helps avoid disclosing the user's privacy and ensures the security of message forwarding, thereby improving user experience.
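The privacy rule above can be sketched as a small decision function; the function name and the exact prompt strings are assumptions for illustration, not from the patent:

```python
def prompt_for(focused_users, owner, content):
    """Decide what a camera-equipped device should display.

    Rule from the description: show the message content only when the
    owner is the sole user focusing on the device; show a generic
    notice when the owner focuses together with others; show nothing
    when the owner is not focusing at all.
    """
    if focused_users == [owner]:
        return content
    if owner in focused_users:
        return "You have received a message"
    return None
```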
With reference to the twenty-eighth aspect, in some implementations of the twenty-eighth aspect, the second electronic device is further configured to: before prompting the message to the user, determining that the second electronic device is in a non-do-not-disturb mode, or determining that the second electronic device is not currently running a preset application.
In an embodiment of this application, the second electronic device may determine that it is in a non-immersive state before prompting the user with the message. The immersive state may be understood as a notification-disabled state: for example, the user has disabled notifications or turned on do-not-disturb mode, or the application running in the foreground of the second electronic device is a preset application (for example, a video app or a game app). Skipping the prompt in these cases helps avoid disturbing the user, thereby improving user experience.
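As a rough sketch, the non-immersive-state check could look like the following; the set of preset apps and the function name are illustrative assumptions:

```python
PRESET_IMMERSIVE_APPS = {"video_app", "game_app"}  # hypothetical examples


def may_prompt(do_not_disturb: bool, foreground_app: str) -> bool:
    """Prompt only in a non-immersive state: do-not-disturb is off and
    no preset (immersive) application is running in the foreground."""
    return not do_not_disturb and foreground_app not in PRESET_IMMERSIVE_APPS
```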
With reference to the twenty-eighth aspect, in some implementations of the twenty-eighth aspect, the first electronic device is further configured to: before sending the message to the second electronic device, determine that the type of the message is a message type set by the user; or, before sending the message to the second electronic device, determine that the message is an instant messaging (IM) message.
In this embodiment of this application, before forwarding the message to the second electronic device, the first electronic device may first determine that the message is of a type set by the user, or that it is an IM message. This helps avoid forwarding unimportant messages to the second electronic device, thereby sparing the user interference from unimportant messages and improving user experience.
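The filtering step might be expressed as follows; this is a sketch, and the type labels are assumed:

```python
def should_forward(message_type: str, user_selected_types: set) -> bool:
    """Forward a message only if its type was selected by the user,
    or if it is an instant messaging (IM) message."""
    return message_type in user_selected_types or message_type == "IM"
```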
With reference to the twenty-eighth aspect, in some implementations of the twenty-eighth aspect, the first electronic device is further configured to: before sending the message to the second electronic device, determine that the second electronic device is a device set by the user to receive forwarded messages; or, before sending the message to the second electronic device, determine that the account logged in on the first electronic device is associated with the account logged in on the second electronic device.
In this embodiment of this application, before forwarding the message to the second electronic device, the first electronic device may first determine that the second electronic device is a device set by the user to receive forwarded messages, or that the account logged in on the first electronic device is associated with the account logged in on the second electronic device. This helps prevent the message from being forwarded to a device not authorized by the user, improves the security of message forwarding, protects the user's privacy, and improves user experience.
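A sketch of the authorization check, assuming the forwarding list and account-association records are plain in-memory sets (all names hypothetical):

```python
def may_forward_to(peer_device: str, forwarding_list: set,
                   local_account: str, peer_account: str,
                   associated_accounts: set) -> bool:
    """Allow forwarding only to a device the user registered to receive
    forwarded messages, or to a device whose logged-in account is the
    same as, or associated with, the local account."""
    if peer_device in forwarding_list:
        return True
    if peer_account == local_account:
        return True
    return (local_account, peer_account) in associated_accounts
```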
With reference to the twenty-eighth aspect, in some implementations of the twenty-eighth aspect, the first electronic device is specifically configured to: when determining that the device which is currently focused by the owner of the first electronic device is not the first electronic device, sending request information to a second electronic device, wherein the request information is used for requesting the second electronic device to determine whether the owner of the first electronic device focuses on the second electronic device; the second electronic device is further configured to send response information to the first electronic device according to the request information, where the response information is used to indicate that the currently focused device of the owner of the first electronic device is the second electronic device; the first electronic device is specifically configured to: in response to receiving the response information, the message is sent to the second electronic device.
In this embodiment of this application, the second electronic device may be a device in a message forwarding list of the first electronic device, and the first electronic device and the second electronic device may be devices under the same account (or may store information of the same user). Therefore, after receiving the request information, the second electronic device may collect user characteristics and match them against the characteristics of the owner of the first electronic device preset in the second electronic device. If the matching succeeds, the second electronic device determines that the device currently focused on by the owner of the first electronic device is the second electronic device, and sends the response information to the first electronic device. This improves the security of message forwarding, prevents users other than the owner of the first electronic device from receiving the message prompt, and improves user experience.
In some possible implementations, after receiving the request information, the second electronic device may first determine whether the first electronic device and the second electronic device are devices under the same account. If so, the second electronic device may collect, through its user characteristic acquisition apparatus, characteristic information of the user focusing on the second electronic device, match the collected characteristic information against the characteristic information of the owner of the first electronic device preset in the second electronic device, and, if the matching succeeds, send the response information to the first electronic device.
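Under the same-account assumption, the second device's handling of the focus request reduces to a feature-matching step. The function below is a sketch with hypothetical inputs; a string stands in for collected biometric features:

```python
def answer_focus_request(same_account: bool,
                         captured_features: str,
                         preset_owner_features: str) -> bool:
    """Second device's side of the focus request: respond that the owner
    is focusing here only when both devices share an account and the
    captured user features match the preset owner features."""
    if not same_account:
        return False
    return captured_features == preset_owner_features
```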
With reference to the twenty-eighth aspect, in some implementations of the twenty-eighth aspect, the second electronic device includes a user characteristic acquisition apparatus, and the first electronic device is specifically configured to: when determining that the device currently focused on by the owner of the first electronic device is not the first electronic device, send request information to the second electronic device, where the request information is used to request the second electronic device to determine whether the owner of the first electronic device is focusing on the second electronic device; the second electronic device is further configured to: collect user characteristics through the user characteristic acquisition apparatus according to the request information and send the user characteristics to the first electronic device; and the first electronic device is specifically configured to: before sending the message to the second electronic device, determine that the user characteristics match user characteristics preset in the first electronic device.
In this embodiment of this application, the second electronic device may be a device in a message forwarding list of the first electronic device, and the two devices may not be under the same account (or may store information of different users); alternatively, the second electronic device may be a device near the first electronic device. Therefore, after receiving the request information, the second electronic device may collect, through the user characteristic acquisition apparatus, the characteristic information of the user focusing on the second electronic device and send it to the first electronic device. The first electronic device matches this characteristic information against the user characteristic information preset in the first electronic device, and forwards the message to the second electronic device only if the matching succeeds. This improves the security of message forwarding, prevents users other than the owner of the first electronic device from receiving the message prompt, and improves user experience.
In a twenty-ninth aspect, a method for prompting a message is provided, applied to an electronic device, and includes: the electronic device receives a message sent by another electronic device, where the message is obtained by the other electronic device from a server; and the electronic device prompts the user with the message according to device information of the electronic device.
In this embodiment of this application, when the first electronic device determines that the device the user is focusing on is not the first electronic device but the second electronic device, the first electronic device may forward the message to the second electronic device, and the second electronic device may prompt the user with the message according to its own device information. This helps the user receive the message prompt in time and avoids missing important messages. Meanwhile, because different second electronic devices have different device information, prompts can be presented to the user in different ways, improving the user's experience of receiving messages.
In some possible implementations, the device information of the second electronic device may include, but is not limited to: hardware capability information of the second electronic device (for example, whether it is a large-screen device or an in-vehicle head unit), the current state of the device (for example, whether it is currently interacting with the user, or whether it is in an immersive state), and so on.
With reference to the twenty-ninth aspect, in some implementations of the twenty-ninth aspect, the device information indicates that the electronic device has a display screen, and the prompting, by the electronic device, the user with the message according to the device information of the electronic device includes: displaying a message reminder box on the display screen, where the message reminder box includes the message but does not include a control.
In this embodiment of this application, an electronic device with a display screen may omit controls when prompting the user with the message; if the user wishes to reply, the user can view the message in the notification center of the electronic device. This helps the user receive the message prompt in time and avoid missing important messages, thereby improving user experience.
With reference to the twenty-ninth aspect, in some implementations of the twenty-ninth aspect, the device information indicates that the electronic device has a display screen, and the prompting, by the electronic device, the user with the message according to the device information of the electronic device includes: when the electronic device detects that input from the user is being received, displaying a message reminder box on the display screen without positioning a cursor in the message reminder box, where the message reminder box includes the message; or when the electronic device detects that no input from the user is being received, displaying the message reminder box on the display screen and positioning the cursor in the message reminder box.
In this embodiment of this application, for an electronic device with a display screen, if the electronic device is receiving input from the user (or is otherwise interacting with the user) when it receives the message forwarded by the other electronic device, the electronic device may refrain from positioning the cursor in the message reminder box, so as not to disturb the user's current operation, thereby improving user experience.
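The display decision — never steal the cursor while the user is typing — can be sketched as follows; the dictionary fields and the hint text are assumptions:

```python
def render_reminder(receiving_input: bool) -> dict:
    """Build the reminder-box display state. The cursor is positioned in
    the box only when the user is not currently providing input, so an
    ongoing operation is never disturbed."""
    state = {"box_shown": True, "cursor_in_box": not receiving_input}
    if receiving_input:
        # Optionally, a hint tells the user which operation (a
        # hypothetical shortcut) moves the cursor into the box.
        state["hint"] = "Use the shortcut to jump to the reply box"
    return state
```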
With reference to the twenty-ninth aspect, in certain implementations of the twenty-ninth aspect, the method further includes: when the electronic device detects that input from the user is being received, displaying prompt information on the display screen, where the prompt information reminds the user that the cursor can be positioned in the message reminder box through a first operation.
In this embodiment of this application, if the electronic device is receiving input from the user (or is interacting with the user) when it receives the message forwarded by the other electronic device, the electronic device may additionally prompt the user that a preset operation positions the cursor in the message reminder box. In this way, the user's current operation is not disturbed, and the user can still quickly jump to the message reminder box by following the prompt information, which makes replying to the message convenient and improves user experience.
In some possible implementations, the prompt information reminding the user to position the cursor in the message reminder box through the first operation includes: the prompt information reminding the user to position the cursor on a reply control in the message reminder box through the first operation.
With reference to the twenty-ninth aspect, in some implementations of the twenty-ninth aspect, the device information indicates that the electronic device has a voice function, and the prompting, by the electronic device, the user with the message according to the device information of the electronic device includes: the electronic device announcing by voice that the message has been received; or the electronic device announcing the content of the message by voice.
In this embodiment of this application, for an electronic device with a voice function, when the message received from the other electronic device is a text message, the electronic device may announce by voice that a message has been received, or announce the content of the message. The voice prompt lets the user learn of the message without having to check the screen of the electronic device, which helps prevent the user from missing important messages and improves user experience.
In some possible implementations, the electronic device is an in-vehicle head unit or a smart speaker.
With reference to the twenty-ninth aspect, in some implementations of the twenty-ninth aspect, before the electronic device prompts the user with the message, the method further includes: the electronic device receiving indication information sent by the other electronic device, where the indication information instructs the electronic device to add a reply control to the message; the prompting, by the electronic device, the user with the message according to the device information includes: the electronic device displaying a message reminder box according to the device information and the indication information, where the message reminder box includes the message and the reply control; and after the electronic device prompts the user with the message, the method further includes: when an operation of the user replying to the message is detected, sending the reply content to the other electronic device.
In this embodiment of this application, the electronic device can present, to the user, a message reminder box including the message and the reply control according to its device information and the indication information. This makes it convenient for the user to complete a virtual reply to the message on the electronic device. The electronic device may send the reply content to the other electronic device so that the other electronic device completes the true reply to the message. Therefore, the user can receive the message prompt in time and reply to the message without returning to the other electronic device, which helps prevent the user from missing important messages.
With reference to the twenty-ninth aspect, in certain implementations of the twenty-ninth aspect, the message is a message of a first application, and the electronic device is a device that does not have the first application installed.
In this embodiment of this application, even if the electronic device does not have the first application installed, the electronic device can still add the reply control to the message reminder box after receiving the indication information, obtain the reply content entered by the user when it detects the user's operation of tapping the reply control, and send the reply content to the other electronic device, so that the true reply to the message is completed on the other electronic device. Therefore, the user can receive the message in time and is less likely to miss important messages, which improves user experience.
With reference to the twenty-ninth aspect, in some implementations of the twenty-ninth aspect, the device information indicates that the electronic device has a camera, and the prompting, by the electronic device, the user with the message according to the device information of the electronic device includes: when the electronic device detects, through the camera, that only the owner of the other electronic device is focusing on the electronic device, prompting the content of the message; or when the electronic device detects, through the camera, that a plurality of users including the owner of the other electronic device are focusing on the electronic device, prompting the user that the message has been received.
In this embodiment of this application, for an electronic device equipped with a camera, if the electronic device detects through the camera that only the owner of the other electronic device is focusing on it, the electronic device may prompt the user with the content of the message; if a plurality of users including the owner of the other electronic device are focusing on it, the electronic device may prompt only that a message has been received. This helps avoid disclosing the user's privacy and ensures the security of message forwarding, thereby improving user experience.
With reference to the twenty-ninth aspect, in certain implementations of the twenty-ninth aspect, the method further includes: before prompting the user with the message, determining that the electronic device is in a non-do-not-disturb mode, or that the electronic device is not currently running a preset application.
In an embodiment of this application, the electronic device may determine that it is in a non-immersive state before prompting the user with the message. The immersive state may be understood as a notification-disabled state: for example, the user has disabled notifications or turned on do-not-disturb mode, or the application running in the foreground of the electronic device is a preset application (for example, a video app or a game app). Skipping the prompt in these cases helps avoid disturbing the user, thereby improving user experience.
With reference to the twenty-ninth aspect, in certain implementations of the twenty-ninth aspect, the method further comprises: before receiving a message sent by the other electronic device, the electronic device receives request information sent by the other electronic device, wherein the request information is used for requesting the electronic device to determine whether an owner of the other electronic device focuses on the electronic device; and the electronic equipment sends response information to the other electronic equipment according to the request information, wherein the response information is used for indicating that the equipment currently focused by the owner of the other electronic equipment is the electronic equipment.
In this embodiment of this application, the electronic device may be a device in a message forwarding list of the other electronic device, and the electronic device and the other electronic device may be devices under the same account (or may store information of the same user). Therefore, after receiving the request information, the electronic device may collect user characteristics and match them against the user characteristics preset in the electronic device. If the matching succeeds, the electronic device determines that the device currently focused on by the owner of the other electronic device is the electronic device, and sends the response information to the other electronic device. This improves the security of message forwarding, prevents users other than the owner of the other electronic device from receiving the message prompt, and improves user experience.
In some possible implementations, after receiving the request information, the electronic device may first determine whether the other electronic device and the electronic device are devices under the same account. If so, the electronic device may collect, through its user characteristic acquisition apparatus, characteristic information of the user focusing on the electronic device, match the collected characteristic information against the characteristic information of the owner of the other electronic device preset in the electronic device, and, if the matching succeeds, send the response information to the other electronic device.
With reference to the twenty-ninth aspect, in some implementations of the twenty-ninth aspect, the electronic device includes a user characteristic acquisition apparatus, and the method further includes: before receiving a message sent by the other electronic device, the electronic device receives request information sent by the other electronic device, wherein the request information is used for requesting the electronic device to determine whether an owner of the other electronic device focuses on the electronic device; and the electronic equipment acquires the user characteristics through the user characteristic acquisition device according to the request information and sends the user characteristics to the other electronic equipment.
In this embodiment of this application, the electronic device may be a device in a message forwarding list of the other electronic device, and the two devices may not be under the same account (or may store information of different users); alternatively, the electronic device may be a device near the other electronic device. Therefore, after receiving the request information, the electronic device may collect, through the user characteristic acquisition apparatus, the characteristic information of the user focusing on the electronic device and send it to the other electronic device. The other electronic device matches this characteristic information against the user characteristic information preset in it, and forwards the message to the electronic device only if the matching succeeds. This improves the security of message forwarding, prevents users other than the owner of the other electronic device from receiving the message prompt, and improves user experience.
In a thirtieth aspect, an apparatus for prompting a message is provided, where the apparatus is disposed in an electronic device, and the apparatus includes: a receiving unit, configured to receive a message sent by another electronic device; and a prompting unit, configured to prompt the user with the message according to device information of the apparatus.
In a thirty-first aspect, there is provided an electronic device comprising: one or more processors; a memory; and one or more computer programs. Wherein the one or more computer programs are stored in the memory, the one or more computer programs comprising instructions. The instructions, when executed by the electronic device, cause the electronic device to perform the method of prompting for a message in any possible implementation of the twenty-ninth aspect described above.
A thirty-second aspect provides a computer program product comprising instructions which, when run on an electronic device, cause the electronic device to perform the method of prompting messages of the twenty-ninth aspect described above.
A thirty-third aspect provides a computer-readable storage medium comprising instructions that, when executed on an electronic device, cause the electronic device to perform the method for prompting messages of the twenty-ninth aspect.
In a thirty-fourth aspect, a chip is provided, configured to execute instructions; when the chip runs, it performs the method for prompting a message according to the twenty-ninth aspect.
The embodiments of this application further provide a content sharing method, apparatus, and system, which can simplify the steps a user performs to share content (such as pictures, videos, and documents), make content sharing more targeted, and provide convenience for the user.
In a thirty-fifth aspect, a content sharing method is provided, and is applied to a first terminal, where the method includes: the first terminal detects the first operation and generates first content. And the first terminal responds to the generated first content and sends a first message to a second terminal, wherein the first terminal and the second terminal are in near field connection, and the first terminal and the second terminal have a preset association relationship. The first message is used for enabling the second terminal to display a first prompt when detecting a second operation, wherein the first prompt is a prompt of the first content. And the first terminal responds to a second message, and sends the first content to the second terminal, wherein the second message is sent to the first terminal after the second terminal detects a third operation on the first prompt.
With this solution, throughout the content sharing process, the first terminal can share the content with the nearby device that needs it without any operation other than the one that generates the content. This provides convenience for the user, makes content sharing more targeted, and enhances user experience.
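The three-message handshake (first message → first prompt → second message → content transfer) might be modeled as the following trace; the event strings are purely illustrative assumptions:

```python
def share_flow(content: str, user_accepts: bool) -> list:
    """Trace the content-sharing handshake between two nearby terminals:
    the first terminal announces new content (first message), the second
    shows a prompt, and only if the user acts on the prompt (third
    operation, carried by the second message) is the content sent."""
    trace = ["msg1: content available"]         # first terminal -> second
    trace.append("prompt: " + content)          # second terminal shows first prompt
    if user_accepts:                            # third operation on the prompt
        trace.append("msg2: request content")   # second terminal -> first
        trace.append("transfer: " + content)    # first terminal sends content
    return trace
```

Note that if the user never acts on the prompt, no content crosses the link — only the lightweight announcement and prompt.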
In one embodiment, the first content is a picture from a camera application, a scanning application, or a screenshot application. In this way, the first terminal can conveniently share a newly generated picture with a nearby device that needs it, improving picture sharing efficiency.
In one embodiment, the first message includes a thumbnail of the picture, and the first prompt includes a picture thumbnail notification box containing the thumbnail. Sending the thumbnail to the other device lets that device visually present the picture to be received, which makes the user's subsequent operations easier.
In one embodiment, the third operation includes a click on the thumbnail in the picture thumbnail notification box, a click on a download button included in the notification box, or a drag of the thumbnail in the picture thumbnail notification box. Offering several alternative operations improves the user's experience.
In one embodiment, the second message is a message requesting the picture. In this way, the user can obtain the original, full-size picture.
In an embodiment, the first message is a to-be-received notice for the picture, and the first message causing the second terminal to display the first prompt when detecting the second operation includes: the first message causing the second terminal to send a picture thumbnail request message to the first terminal when detecting the second operation; the first terminal sending the thumbnail of the picture to the second terminal in response to the picture thumbnail request message; and the second terminal displaying the first prompt including the thumbnail notification box. In this way, the first terminal does not push the thumbnail to other devices directly after obtaining the picture, but sends it only when another device requests it, which saves network resources and avoids pushing the thumbnail to devices that do not need the picture.
In one embodiment, the picture thumbnail request message is sent by the second terminal to the first terminal only when the second operation is detected within a first limited time. Adding this time limit to sending the picture thumbnail request message enhances user experience.
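The time-limited request could be sketched with plain timestamps; the parameter names are assumptions:

```python
def may_request_thumbnail(first_msg_time: float,
                          second_op_time: float,
                          first_limited_time: float) -> bool:
    """The second terminal sends the thumbnail request only when the
    second operation happens within the first limited time after the
    first message arrives."""
    elapsed = second_op_time - first_msg_time
    return 0 <= elapsed <= first_limited_time
```

The same pattern applies to the second and third limited times mentioned below, with different windows.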
In one embodiment, the near field connection includes a Bluetooth connection or a Wi-Fi connection, and the association relationship includes: the device account of the first terminal is the same as that of the second terminal, or the device account of the second terminal is in a sharing device list of the first terminal, where the sharing device list is a set of device accounts. Transmitting the data over Bluetooth or Wi-Fi improves transmission efficiency, and the user can share content with designated devices according to the user's own settings, making content sharing more targeted.
In one embodiment, the first prompt is displayed only when the second terminal detects the second operation within a second limited time. Adding this time limit to displaying the first prompt enhances user experience.
In one embodiment, the second message is sent by the second terminal to the first terminal only after the third operation on the first prompt is detected within a third limited time. Adding this time limit to the third operation on the first prompt enhances user experience.
In one embodiment, before the first terminal detects the first operation and generates the first content, the method further includes: and the first terminal detects the first operation and generates second content. The first terminal sends a third message to the second terminal in response to the generated second content.
In one embodiment, the first content is a first picture; the second content is a second picture; the first message is a message to be received of the first picture; the third message is a message to be received of the second picture. The first message causing the second terminal to display a first prompt when detecting a second operation includes: the first message causes the second terminal to send a plurality of thumbnail request messages to the first terminal when detecting the second operation, the first terminal sends the thumbnail of the first picture and the number of pictures to the second terminal in response to the plurality of thumbnail request messages, and the second terminal displays the first prompt including a thumbnail notification box, where the thumbnail notification box includes the thumbnail of the first picture and the number of pictures.
In one embodiment, the third operation includes a click operation on a download button, where the thumbnail notification box includes the download button, or a drag operation on the thumbnail of the first picture. The first terminal sending the first content to the second terminal in response to a second message includes: the first terminal sends the first picture and the second picture to the second terminal in response to the second message.
In one embodiment, the third operation comprises a click operation on a thumbnail of the first picture. The first terminal sending the first content to the second terminal in response to a second message includes: the first terminal sends a fourth message to the second terminal in response to the second message, where the fourth message includes the thumbnail of the second picture, and the fourth message causes the second terminal to display a second prompt, where the second prompt includes the thumbnail of the first picture and the thumbnail of the second picture. The first terminal sends the first picture to the second terminal in response to a fifth message, where the fifth message is sent to the first terminal when the second terminal receives a fifth operation on the thumbnail of the first picture. Therefore, the user can click the thumbnail in the thumbnail notification box to expand and display the thumbnails, and can then operate on one of the thumbnails to select the desired picture, which enhances the use experience of the user.
In one embodiment, the first content comprises a video or text or a file. Therefore, the user can purposefully share the contents such as videos, texts or downloaded files and the like to other nearby devices, and the use experience of the user is enhanced.
A thirty-sixth aspect provides a first terminal comprising: one or more processors, one or more memories storing one or more computer programs, the one or more computer programs comprising instructions, which when executed by the one or more processors, cause the first terminal to perform the steps of: detecting a first operation, generating a first content, responding to the generated first content, and sending a first message to a second terminal, wherein the first terminal and the second terminal are in a near field connection state, the first terminal and the second terminal have a preset association relation, and the first message enables the second terminal to display a first prompt when detecting a second operation, wherein the first prompt is a prompt for the first content; and the first terminal responds to a second message, and sends the first content to the second terminal, wherein the second message is sent to the first terminal after the second terminal detects a third operation on the first prompt.
By adopting the terminal of the scheme, in the whole content sharing process, the content can be shared to the required nearby required equipment without any other operation except the operation of generating the content, convenience is provided for a user, the content sharing purpose is improved, and the user experience is enhanced.
In one embodiment, the first content is a picture, wherein the picture is from a shooting application or a scanning application or a screen capture application. Therefore, the terminal can conveniently and quickly share the generated picture to nearby required equipment, and the picture sharing efficiency is improved.
In one embodiment, the first message includes a thumbnail of a picture and the first prompt includes a picture thumbnail notification box, wherein the thumbnail notification box includes a thumbnail of the picture. Therefore, the picture thumbnail is sent to other equipment, and the information of the picture to be received can be visually displayed by the other equipment, so that the user can conveniently perform subsequent operation.
In one embodiment, the third operation includes a detected click operation on the thumbnail in the picture thumbnail notification box, a detected click operation on a download button, where the thumbnail notification box includes the download button, or a drag operation on the thumbnail in the picture thumbnail notification box. Therefore, a plurality of optional operations are provided for the user, which increases the use experience of the user.
In one embodiment, the second message is a message requesting the picture. In this way, the user can obtain the original of the picture.
In one embodiment, the first message is a message to be received of the picture. The first message enables the second terminal to display a first prompt when detecting a second operation, and the first prompt comprises: the first message enables the second terminal to send a picture thumbnail request message to the first terminal when detecting a second operation; and the first terminal responds to the picture thumbnail request message and sends the thumbnail of the picture to the second terminal, and the second terminal displays the first prompt comprising the thumbnail notification box. Therefore, the first terminal does not directly send the thumbnail to other equipment after obtaining the picture, but waits for the other equipment to request to send the thumbnail to the other equipment, so that network resources can be saved, and the thumbnail is prevented from being pushed to the other equipment which does not need the picture.
In one embodiment, the picture thumbnail request message includes: the second terminal detects the second operation within a first limited time, and sends the picture thumbnail request message to the first terminal. Therefore, timeliness is added in detecting the picture thumbnail request message sent by the second operation, and user experience is enhanced.
In one embodiment, the near field connection comprises a Bluetooth connection or a Wi-Fi connection. The association relationship comprises: the device account numbers of the first terminal and the second terminal are the same, or the device account number of the second terminal is in a sharing device list of the first terminal, wherein the sharing device list is a set of device account numbers. Therefore, the data is transmitted through Bluetooth or Wi-Fi, and the transmission efficiency is improved. The user can share the content to the appointed equipment according to the setting of the user, and the content sharing purpose is improved.
In one embodiment, the first prompt includes: the second terminal detects the first prompt displayed when the second operation is detected within a second limited time. Therefore, timeliness is added to the fact that the second operation is detected to display the first prompt, and user experience is enhanced.
In one embodiment, the second message comprises: and the second terminal sends a message to the first terminal after detecting the third operation on the first prompt within a third limited time. Therefore, timeliness is increased in the aspect of detecting the third operation of the first prompt, and user experience is enhanced.
In one embodiment, the one or more processors are further configured to: before the first terminal detects the first operation and generates the first content, detect the first operation and generate a second content; the first terminal sends a third message to the second terminal in response to the generated second content.
In one embodiment, the first content is a first picture; the second content is a second picture; the first message is a message to be received of the first picture; the third message is a message to be received of the second picture. The first message causing the second terminal to display a first prompt when detecting a second operation includes: the first message causes the second terminal to send a plurality of thumbnail request messages to the first terminal when detecting the second operation, the first terminal sends the thumbnail of the first picture and the number of pictures to the second terminal in response to the plurality of thumbnail request messages, and the second terminal displays the first prompt including a thumbnail notification box, where the thumbnail notification box includes the thumbnail of the first picture and the number of pictures.
In one embodiment, the third operation includes a click operation on a download button, where the thumbnail notification box includes the download button, or a drag operation on the thumbnail of the first picture. The first terminal sending the first content to the second terminal in response to a second message includes: the first terminal sends the first picture and the second picture to the second terminal in response to the second message.
In one embodiment, the third operation includes a click operation on a thumbnail of the first picture. The first terminal sending the first content to the second terminal in response to a second message, including: the first terminal sends a fourth message to the second terminal in response to the second message, wherein the fourth message comprises a thumbnail of the second picture, and the fourth message causes the second terminal to display a second prompt, wherein the second prompt comprises the thumbnail of the first picture and the thumbnail of the second picture. And the first terminal responds to a fifth message, and sends the first picture to the second terminal, wherein the fifth message is a message sent to the first terminal when the second terminal receives a fifth operation on the thumbnail of the first picture. Therefore, the user can click the thumbnails in the thumbnail notification boxes to expand and display the thumbnails, and further the user can operate one of the thumbnails to select a desired picture, so that the use experience of the user is enhanced.
In one embodiment, the first content comprises a video or text or a file. Therefore, the user can purposefully share the contents such as videos, texts or downloaded files and the like to other nearby devices, and the use experience of the user is enhanced.
A thirty-seventh aspect provides a content sharing system, including a first terminal and a second terminal, where the first terminal and the second terminal are in a near field connection state, and the first terminal and the second terminal have a predetermined association relationship; the first terminal detects a first operation and generates first content; the first terminal sends a first message to the second terminal in response to the generated first content; the second terminal detects a second operation and displays a first prompt, wherein the first prompt is a prompt of the first content; and the second terminal detects a third operation on the first prompt and sends a second message to the first terminal. The first terminal sends the first content to the second terminal in response to the second message.
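The end-to-end exchange of the content sharing system above can be sketched as a single walk through its five steps. The function name, event names, and transcript strings are assumptions introduced for this sketch only:

```python
# Minimal transcript of the sharing flow: the first terminal generates content
# and announces it; the prompt, the request, and the transfer each happen only
# if the corresponding user operation on the second terminal occurs.

def share_flow(events):
    """Walk the exchange for a given set of detected operations."""
    log = []
    log.append("first: detect first operation, generate first content")
    log.append("first -> second: first message (content pending)")
    if "second_operation" in events:
        # Second terminal detects the second operation and shows the prompt.
        log.append("second: display first prompt")
        if "third_operation" in events:
            # Third operation on the prompt triggers the second message,
            # and the first terminal replies with the first content.
            log.append("second -> first: second message (request content)")
            log.append("first -> second: first content")
    return log
```

Note that if the second terminal never performs any operation, the flow stops after the first message, so content is only ever delivered to a device that asked for it.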
According to the content sharing system adopting the scheme, in the whole content sharing process, the first terminal can share the content to the required equipment nearby without any other operation except the content generating operation, and the second terminal responds to the received content according to the self requirement, so that convenience is provided for the user, the content sharing purpose is improved, and the user experience is enhanced.
In one embodiment, the first content is a picture, wherein the picture is from a shooting application or a scanning application or a screen capture application. Therefore, the terminal can conveniently and quickly share the generated picture to nearby required equipment, and the picture sharing efficiency is improved.
In one embodiment, the first message includes a thumbnail of a picture and the first prompt includes a picture thumbnail notification box, wherein the thumbnail notification box includes a thumbnail of the picture. Therefore, the picture thumbnail is sent to other equipment, and the information of the picture to be received can be visually displayed by the other equipment, so that the user can conveniently perform subsequent operation.
In one embodiment, the third operation includes a detected click operation on the thumbnail in the picture thumbnail notification box, a detected click operation on a download button, where the thumbnail notification box includes the download button, or a drag operation on the thumbnail in the picture thumbnail notification box. Therefore, a plurality of optional operations are provided for the user, which increases the use experience of the user.
In one embodiment, the second message is a message requesting the picture. In this way, the user can obtain the original of the picture.
In one embodiment, the first message is a message to be received of the picture. The first message causes the second terminal to display a first prompt when detecting a second operation, including: the first message enables the second terminal to send a picture thumbnail request message to the first terminal when detecting a second operation; and the first terminal responds to the picture thumbnail request message and sends the thumbnail of the picture to the second terminal, and the second terminal displays the first prompt comprising the thumbnail notification box. Therefore, the first terminal does not directly send the thumbnail to other equipment after obtaining the picture, but waits for the other equipment to request to send the thumbnail to the other equipment, so that network resources can be saved, and the thumbnail is prevented from being pushed to the other equipment which does not need the picture.
In one embodiment, the picture thumbnail request message includes: the second terminal detects the second operation within a first limited time, and sends the picture thumbnail request message to the first terminal. Therefore, timeliness is added in the process of detecting the second operation sending picture thumbnail request message, and user experience is enhanced.
In one embodiment, the near field connection comprises a Bluetooth connection or a Wi-Fi connection. The association relationship comprises: the device account numbers of the first terminal and the second terminal are the same, or the device account number of the second terminal is in a sharing device list of the first terminal, wherein the sharing device list is a set of device account numbers. Therefore, the data is transmitted through Bluetooth or Wi-Fi, and the transmission efficiency is improved. The user can share the content to the appointed equipment according to the setting of the user, and the content sharing purpose is improved.
In one embodiment, the first prompt includes: the second terminal detects the first prompt displayed when the second operation is detected within a second limited time. Therefore, timeliness is added to the detection of the second operation display first prompt, and user experience is enhanced.
In one embodiment, the second message comprises: and the second terminal sends a message to the first terminal after detecting the third operation on the first prompt within a third limited time. Therefore, timeliness is increased in the aspect of detecting the third operation of the first prompt, and user experience is enhanced.
In one embodiment, before the first terminal detects the first operation and generates the first content, the first terminal detects the first operation and generates a second content; the first terminal sends a third message to the second terminal in response to the generated second content.
In one embodiment, the first content is a first picture; the second content is a second picture; the first message is a message to be received of the first picture; the third message is a message to be received of the second picture. The first message enabling the second terminal to display a first prompt when detecting a second operation includes: the first message causes the second terminal to send a plurality of thumbnail request messages to the first terminal when detecting the second operation, the first terminal sends the thumbnail of the first picture and the number of pictures to the second terminal in response to the plurality of thumbnail request messages, and the second terminal displays the first prompt including a thumbnail notification box, where the thumbnail notification box includes the thumbnail of the first picture and the number of pictures.
In one embodiment, the third operation includes a click operation on a download button, where the thumbnail notification box includes the download button, or a drag operation on the thumbnail of the first picture. The first terminal sending the first content to the second terminal in response to a second message includes: the first terminal sends the first picture and the second picture to the second terminal in response to the second message.
In one embodiment, the third operation includes a click operation on a thumbnail of the first picture. The first terminal sending the first content to the second terminal in response to a second message includes: the first terminal sends a fourth message to the second terminal in response to the second message, where the fourth message includes the thumbnail of the second picture, and the fourth message causes the second terminal to display a second prompt, where the second prompt includes the thumbnail of the first picture and the thumbnail of the second picture. The first terminal sends the first picture to the second terminal in response to a fifth message, where the fifth message is a message sent to the first terminal when the second terminal receives a fifth operation on the thumbnail of the first picture. Therefore, the user can click the thumbnail in the thumbnail notification box to expand and display the thumbnails, and can then operate on one of the thumbnails to select the desired picture, which enhances the use experience of the user.
In one embodiment, the first content comprises a video or text or file. Therefore, the user can purposefully share the contents such as videos, texts or downloaded files and the like to other nearby devices, and the use experience of the user is enhanced.
In a thirty-eighth aspect, there is provided a computer program product comprising instructions which, when run on a first terminal, cause the first terminal to perform the method of the thirty-fifth aspect described above.
A thirty-ninth aspect provides a computer-readable storage medium comprising instructions which, when run on a first terminal, cause the first terminal to perform the method of the thirty-fifth aspect described above.
In a fortieth aspect, there is provided a chip for executing instructions, the chip performing the method of the thirty-fifth aspect when the chip is running.
In a forty-first aspect, a content sharing apparatus is provided, where the content sharing apparatus is disposed in a first terminal, and the content sharing apparatus includes: a generating unit, configured to detect a first operation and generate first content; a sending unit, configured to send a first message to a second terminal in response to the generated first content, where the first terminal and the second terminal are in a near field connection state, the first terminal and the second terminal have a predetermined association relationship, and the first message enables the second terminal to display a first prompt when detecting a second operation, where the first prompt is a prompt for the first content; the sending unit is further configured to send the first content to the second terminal in response to a second message, where the second message is a message sent to the first terminal by the second terminal after the second terminal detects a third operation on the first prompt.
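The two-unit decomposition of the content sharing apparatus above can be sketched as follows. The class and method names are hypothetical; the aspect names only the generating unit and the sending unit, not their interfaces:

```python
# Illustrative split of the content sharing apparatus into a generating unit
# (detects the first operation, produces the first content) and a sending unit
# (sends the first message, then the content itself on request).

class GeneratingUnit:
    def generate(self, operation):
        # Detect the first operation and produce the first content.
        return f"content-for-{operation}"


class SendingUnit:
    def __init__(self):
        self.sent = []

    def send_first_message(self, peer, notice):
        # Announce the pending content to the second terminal.
        self.sent.append((peer, "first_message", notice))

    def send_content(self, peer, content):
        # Called in response to the second message, i.e. after the second
        # terminal detects the third operation on the first prompt.
        self.sent.append((peer, "content", content))
```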
Drawings
Fig. 1 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present application.
Fig. 2 is a block diagram of a software structure provided in an embodiment of the present application.
Fig. 3 is a schematic diagram of a network architecture provided in an embodiment of the present application.
Fig. 4 illustrates a notification processing method provided in an embodiment of the present application.
FIG. 5 is a set of graphical user interfaces provided by embodiments of the present application.
Fig. 6 is another set of graphical user interfaces provided by embodiments of the present application.
Fig. 7 is another set of graphical user interfaces provided by embodiments of the present application.
Fig. 8 is another set of graphical user interfaces provided by embodiments of the present application.
Fig. 9 is another set of graphical user interfaces provided by embodiments of the present application.
FIG. 10 is another set of graphical user interfaces provided by embodiments of the present application.
FIG. 11 is another set of graphical user interfaces provided by embodiments of the present application.
FIG. 12 is another set of graphical user interfaces provided by embodiments of the present application.
FIG. 13 is another set of graphical user interfaces provided by embodiments of the present application.
FIG. 14 is another set of graphical user interfaces provided by embodiments of the present application.
FIG. 15 is another set of graphical user interfaces provided by embodiments of the present application.
FIG. 16 is another set of graphical user interfaces provided by embodiments of the present application.
Fig. 17 is another set of graphical user interfaces provided by embodiments of the present application.
FIG. 18 is another set of graphical user interfaces provided by embodiments of the present application.
Fig. 19 is another set of graphical user interfaces provided by embodiments of the present application.
FIG. 20 is another set of graphical user interfaces provided by embodiments of the present application.
FIG. 21 is another set of graphical user interfaces provided by embodiments of the present application.
FIG. 22 is another set of graphical user interfaces provided by embodiments of the present application.
FIG. 23 is another set of graphical user interfaces provided by embodiments of the present application.
FIG. 24 is another set of graphical user interfaces provided by embodiments of the present application.
FIG. 25 is another set of graphical user interfaces provided by embodiments of the present application.
FIG. 26 is another set of graphical user interfaces provided by embodiments of the present application.
FIG. 27 is another set of graphical user interfaces provided by embodiments of the present application.
FIG. 28 is another set of graphical user interfaces provided by embodiments of the present application.
FIG. 29 is another set of graphical user interfaces provided by embodiments of the present application.
FIG. 30 is another set of graphical user interfaces provided by embodiments of the present application.
FIG. 31 is another set of graphical user interfaces provided by embodiments of the present application.
FIG. 32 is another set of graphical user interfaces provided by embodiments of the present application.
FIG. 33 is another set of graphical user interfaces provided by embodiments of the present application.
FIG. 34 is another set of graphical user interfaces provided by embodiments of the present application.
FIG. 35 is another set of graphical user interfaces provided by embodiments of the present application.
FIG. 36 is another set of graphical user interfaces provided by embodiments of the present application.
FIG. 37 is another set of graphical user interfaces provided by embodiments of the present application.
FIG. 38 is another set of graphical user interfaces provided by embodiments of the present application.
FIG. 39 is another set of graphical user interfaces provided by embodiments of the present application.
FIG. 40 is another set of graphical user interfaces provided by embodiments of the present application.
FIG. 41 is another set of graphical user interfaces provided by embodiments of the present application.
FIG. 42 is another set of graphical user interfaces provided by embodiments of the present application.
FIG. 43 is another set of graphical user interfaces provided by embodiments of the present application.
FIG. 44 is another set of graphical user interfaces provided by embodiments of the present application.
Fig. 45 is a schematic block diagram of a transmitting end and a receiving end according to an embodiment of the present application.
Fig. 46 is a schematic flow chart of a method for prompting a message according to an embodiment of the present application.
Fig. 47 is a process of implementing a transmitting end and a receiving end according to the embodiment of the present application.
FIG. 48 is another set of graphical user interfaces provided by embodiments of the present application.
Fig. 49 is a process of reproducing a sender interface at a receiver for audio and video capture according to the embodiment of the present application.
Fig. 50 is a schematic flowchart of a method for quick reply according to an embodiment of the present application.
Fig. 51 is a block diagram of another software structure provided in the embodiment of the present application.
Fig. 52 is a flowchart of interaction between devices provided by an embodiment of the present application.
FIG. 53 is a schematic block diagram of a notification decision manager provided in an embodiment of the present application.
Fig. 54 is a flowchart of a notification processing method according to an embodiment of the present application.
Fig. 55 is a schematic block diagram of a method for transmitting a picture across devices according to an embodiment of the present application.
Fig. 56 is a schematic flowchart of a picture sharing method according to an embodiment of the present application.
Fig. 57 is another schematic flowchart of a picture sharing method according to an embodiment of the present disclosure.
Fig. 58 is another schematic flowchart of a picture sharing method according to an embodiment of the present disclosure.
Fig. 59 is another schematic flowchart of a picture sharing method according to an embodiment of the present disclosure.
Fig. 60 is another schematic flowchart of a picture sharing method according to an embodiment of the present disclosure.
Fig. 61 is a system module architecture for picture sharing according to an embodiment of the present disclosure.
Fig. 62 is an interactive message flow of main modules in the mobile phone terminal and the personal computer terminal according to the embodiment of the present application.
Fig. 63 is a flowchart illustrating a task processing method according to an embodiment of the present application.
Fig. 64 is a schematic frame diagram of a task processing system provided in an embodiment of the present application.
FIG. 65 is another set of graphical user interfaces provided by embodiments of the present application.
FIG. 66 is another set of graphical user interfaces provided by embodiments of the present application.
FIG. 67 is another set of graphical user interfaces provided by embodiments of the present application.
FIG. 68 is another set of graphical user interfaces provided by embodiments of the present application.
Fig. 69 is a schematic flowchart of a notification processing method provided in an embodiment of the present application.
Fig. 70 is a schematic block diagram of a notification processing apparatus provided in an embodiment of the present application.
Fig. 71 is another schematic block diagram of a notification processing apparatus provided in an embodiment of the present application.
Fig. 72 is a schematic structural diagram of a presentation electronic device provided in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the drawings in the embodiments of the present application. In the description of the embodiments of the present application, "/" indicates an "or" relationship; for example, A/B may indicate A or B. "And/or" herein merely describes an association relationship between associated objects, and indicates that three relationships may exist; for example, A and/or B may mean that A exists alone, A and B exist simultaneously, or B exists alone. In addition, in the description of the embodiments of the present application, "a plurality" means two or more.
In the following, the terms "first", "second" are used for descriptive purposes only and are not to be understood as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the present embodiment, "a plurality" means two or more unless otherwise specified.
The method provided by the embodiment of the application can be applied to electronic devices such as a mobile phone, a tablet computer, a wearable device, an on-board device, an Augmented Reality (AR)/Virtual Reality (VR) device, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a Personal Digital Assistant (PDA), and the like, and the embodiment of the application does not limit the specific types of the electronic devices.
Fig. 1 shows a schematic structural diagram of an electronic device 100. The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a Universal Serial Bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, a key 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a Subscriber Identity Module (SIM) card interface 195, and the like. The sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
It is to be understood that the illustrated structure of the embodiment of the present application does not specifically limit the electronic device 100. In other embodiments of the present application, the electronic device 100 may include more or fewer components than shown, or combine certain components, or split certain components, or arrange different components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Processor 110 may include one or more processing units, such as: the processor 110 may include an Application Processor (AP), a modem processor, a Graphics Processing Unit (GPU), an Image Signal Processor (ISP), a controller, a memory, a video codec, a Digital Signal Processor (DSP), a baseband processor, and/or a neural-Network Processing Unit (NPU), etc. The different processing units may be separate devices or may be integrated into one or more processors.
The controller may be the neural center and command center of the electronic device 100. The controller can generate an operation control signal based on an instruction operation code and a timing signal, to control instruction fetching and instruction execution.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache. The memory may hold instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs to use the instructions or data again, it can invoke them directly from the memory. This avoids repeated accesses and reduces the waiting time of the processor 110, thereby improving system efficiency.
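The caching behaviour described here can be sketched in a few lines. The wrapper below is purely illustrative (no real cache controller exposes such an interface), but it shows why a repeated access never reaches the slower backing memory a second time:

```python
def make_cached_fetch(slow_fetch):
    """Wrap a slow memory fetch with a small cache so that repeated
    requests for the same address are served without touching the
    backing memory again.  Illustrative sketch only."""
    cache = {}
    stats = {"hits": 0, "misses": 0}

    def fetch(address):
        if address in cache:
            stats["hits"] += 1          # served from the cache
        else:
            stats["misses"] += 1        # must go to the slow backing store
            cache[address] = slow_fetch(address)
        return cache[address]

    return fetch, stats

# Stand-in for main memory: here, "memory" just doubles the address.
fetch, stats = make_cached_fetch(lambda addr: addr * 2)
fetch(0x10)   # miss: goes to backing memory
fetch(0x10)   # hit: served from the cache
fetch(0x20)   # miss: a new address
```

The second access to `0x10` is a cache hit, so only two slow fetches occur for three requests.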
In some embodiments, the processor 110 may include one or more interfaces. The interfaces may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, among others.
The I2C interface is a bidirectional synchronous serial bus comprising a serial data line (SDA) and a Serial Clock Line (SCL). In some embodiments, processor 110 may include multiple sets of I2C buses. The processor 110 may be coupled to the touch sensor 180K, a charger, a flash, a camera 193, etc. through different I2C bus interfaces, respectively. For example: the processor 110 may be coupled to the touch sensor 180K through an I2C interface, so that the processor 110 and the touch sensor 180K communicate through an I2C bus interface to implement a touch function of the electronic device 100.
The I2S interface may be used for audio communication. In some embodiments, processor 110 may include multiple sets of I2S buses. The processor 110 may be coupled to the audio module 170 through an I2S bus, enabling communication between the processor 110 and the audio module 170. In some embodiments, the audio module 170 may transmit an audio signal to the wireless communication module 160 through the I2S interface, so as to implement a function of answering a call through a bluetooth headset.
The PCM interface may also be used for audio communication, sampling, quantizing and encoding analog signals. In some embodiments, audio module 170 and wireless communication module 160 may be coupled by a PCM bus interface. In some embodiments, the audio module 170 may also transmit audio signals to the wireless communication module 160 through the PCM interface, so as to implement a function of answering a call through a bluetooth headset. Both the I2S interface and the PCM interface may be used for audio communication.
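As a rough illustration of what "sampling, quantizing and encoding" an analog signal means for PCM, the sketch below samples a waveform on a uniform time grid and quantizes each sample to a signed 16-bit code. The function name and parameter values are illustrative, not part of any real audio driver:

```python
import math

def pcm_encode(signal, sample_rate, duration_s, bits=16):
    """Sample a continuous signal and quantize each sample to a signed
    integer PCM code.

    signal: a function mapping time in seconds to an amplitude in [-1.0, 1.0].
    Returns a list of signed integer codes.
    """
    max_code = 2 ** (bits - 1) - 1           # e.g. 32767 for 16-bit PCM
    n_samples = int(sample_rate * duration_s)
    codes = []
    for n in range(n_samples):
        t = n / sample_rate                  # sampling: uniform time grid
        amplitude = max(-1.0, min(1.0, signal(t)))
        codes.append(round(amplitude * max_code))  # quantization + encoding
    return codes

# Encode 1 ms of a 1 kHz sine tone at the 8 kHz rate used for telephony audio.
codes = pcm_encode(lambda t: math.sin(2 * math.pi * 1000 * t), 8000, 0.001)
```

At 8 kHz, 1 ms of audio yields 8 samples, each a 16-bit code between -32768 and 32767.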
The UART interface is a universal serial data bus used for asynchronous communications. The bus may be a bidirectional communication bus. It converts the data to be transmitted between serial communication and parallel communication. In some embodiments, a UART interface is generally used to connect the processor 110 with the wireless communication module 160. For example: the processor 110 communicates with a bluetooth module in the wireless communication module 160 through a UART interface to implement a bluetooth function. In some embodiments, the audio module 170 may transmit the audio signal to the wireless communication module 160 through a UART interface, so as to realize the function of playing music through a bluetooth headset.
MIPI interfaces may be used to connect processor 110 with peripheral devices such as display screen 194, camera 193, and the like. The MIPI interface includes a Camera Serial Interface (CSI), a Display Serial Interface (DSI), and the like. In some embodiments, processor 110 and camera 193 communicate through a CSI interface to implement the capture functionality of electronic device 100. Processor 110 and display screen 194 communicate via a DSI interface to implement display functions of electronic device 100.
The GPIO interface may be configured by software. The GPIO interface may be configured as a control signal and may also be configured as a data signal. In some embodiments, a GPIO interface may be used to connect the processor 110 with the camera 193, the display 194, the wireless communication module 160, the audio module 170, the sensor module 180, and the like. The GPIO interface may also be configured as an I2C interface, I2S interface, UART interface, MIPI interface, and the like.
The USB interface 130 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type-C interface, or the like. The USB interface 130 may be used to connect a charger to charge the electronic device 100, to transmit data between the electronic device 100 and a peripheral device, or to connect a headset and play audio through the headset. The interface may also be used to connect other electronic devices, such as an AR device.
It should be understood that the interface connection relationship between the modules illustrated in the embodiments of the present application is only an illustration, and does not limit the structure of the electronic device 100. In other embodiments of the present application, the electronic device 100 may also adopt different interface connection manners or a combination of multiple interface connection manners in the above embodiments.
The charging management module 140 is configured to receive a charging input from a charger. The charger may be a wireless charger or a wired charger. In some wired charging embodiments, the charging management module 140 may receive charging input from a wired charger via the USB interface 130. In some wireless charging embodiments, the charging management module 140 may receive a wireless charging input through a wireless charging coil of the electronic device 100. The charging management module 140 may also supply power to the electronic device through the power management module 141 while charging the battery 142.
The power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charge management module 140 and provides power to the processor 110, the internal memory 121, the external memory, the display 194, the camera 193, the wireless communication module 160, and the like. The power management module 141 may also be used to monitor parameters such as battery capacity, battery cycle count, battery state of health (leakage, impedance), etc. In other embodiments, the power management module 141 may be disposed in the processor 110. In other embodiments, the power management module 141 and the charging management module 140 may be disposed in the same device.
The wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the electronic device 100 may be used to cover a single or multiple communication bands. Different antennas can also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed as a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 150 may provide a solution for wireless communication including 2G/3G/4G/5G applied to the electronic device 100. The mobile communication module 150 may include at least one filter, a switch, a power amplifier, a low noise amplifier (LNA), and the like. The mobile communication module 150 may receive electromagnetic waves from the antenna 1, perform filtering, amplification, and other processing on the received electromagnetic waves, and transmit them to the modem processor for demodulation. The mobile communication module 150 may also amplify a signal modulated by the modem processor and convert it into an electromagnetic wave through the antenna 1 for radiation. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the processor 110. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the same device as at least some of the modules of the processor 110.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating a low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then passes the demodulated low frequency baseband signal to a baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then passed to the application processor. The application processor outputs a sound signal through an audio device (not limited to the speaker 170A, the receiver 170B, etc.) or displays an image or video through the display screen 194. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be provided in the same device as the mobile communication module 150 or other functional modules, independent of the processor 110.
The wireless communication module 160 may provide a solution for wireless communication applied to the electronic device 100, including wireless local area networks (WLANs) (e.g., a wireless fidelity (Wi-Fi) network), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR), and the like. The wireless communication module 160 may be one or more devices integrating at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, performs frequency modulation and filtering on the electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 160 may also receive a to-be-transmitted signal from the processor 110, perform frequency modulation and amplification on the signal, and convert the signal into an electromagnetic wave via the antenna 2 for radiation.
In some embodiments, the antenna 1 of the electronic device 100 is coupled to the mobile communication module 150, and the antenna 2 is coupled to the wireless communication module 160, so that the electronic device 100 can communicate with networks and other devices through wireless communication technologies. The wireless communication technologies may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division synchronous code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technologies, and the like. The GNSS may include a global positioning system (GPS), a global navigation satellite system (GLONASS), a BeiDou navigation satellite system (BDS), a quasi-zenith satellite system (QZSS), and/or a satellite based augmentation system (SBAS).
The electronic device 100 implements display functions via the GPU, the display screen 194, and the application processor. The GPU is a microprocessor for image processing, and is connected to the display screen 194 and an application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
The display screen 194 is used to display images, video, and the like. The display screen 194 includes a display panel. The display panel may adopt a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini LED, a Micro LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like. In some embodiments, the electronic device 100 may include 1 or N display screens 194, where N is a positive integer greater than 1.
The electronic device 100 may implement a shooting function through the ISP, the camera 193, the video codec, the GPU, the display 194, the application processor, and the like.
The ISP is used to process the data fed back by the camera 193. For example, when a user takes a picture, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, an optical signal is converted into an electric signal, and the camera photosensitive element transmits the electric signal to the ISP for processing and converting into an image visible to the naked eye. The ISP can also carry out algorithm optimization on noise, brightness and skin color of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be provided in camera 193.
The camera 193 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image to the photosensitive element. The photosensitive element may be a Charge Coupled Device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The light sensing element converts the optical signal into an electrical signal, which is then passed to the ISP where it is converted into a digital image signal. And the ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into image signal in standard RGB, YUV and other formats. In some embodiments, electronic device 100 may include 1 or N cameras 193, N being a positive integer greater than 1.
The digital signal processor is used to process digital signals; in addition to digital image signals, it can process other digital signals. For example, when the electronic device 100 selects a frequency bin, the digital signal processor is used to perform a Fourier transform or the like on the frequency-bin energy.
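The Fourier transform the DSP applies to obtain per-bin energy can be illustrated with a naive discrete Fourier transform in pure Python. A real DSP uses a hardware FFT; this O(n²) version is for exposition only:

```python
import math

def dft_energy(samples):
    """Return the energy (squared magnitude) of each DFT frequency bin."""
    n = len(samples)
    energies = []
    for k in range(n):  # one output per frequency bin
        re = sum(x * math.cos(2 * math.pi * k * i / n)
                 for i, x in enumerate(samples))
        im = -sum(x * math.sin(2 * math.pi * k * i / n)
                  for i, x in enumerate(samples))
        energies.append(re * re + im * im)
    return energies

# A pure cosine tone that completes one cycle over 8 samples concentrates
# its energy in bins 1 and n-1; all other bins are (numerically) zero.
tone = [math.cos(2 * math.pi * i / 8) for i in range(8)]
energy = dft_energy(tone)
```

Selecting the bin with the largest energy is then a simple `max` over the result.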
Video codecs are used to compress or decompress digital video. The electronic device 100 may support one or more video codecs. In this way, the electronic device 100 may play or record video in a variety of encoding formats, such as: moving Picture Experts Group (MPEG) 1, MPEG2, MPEG3, MPEG4, and the like.
The NPU is a neural-network (NN) computing processor that processes input information quickly by using a biological neural network structure, for example, by using a transfer mode between neurons of a human brain, and can also learn by itself continuously. Applications such as intelligent recognition of the electronic device 100 can be realized through the NPU, for example: image recognition, face recognition, speech recognition, text understanding, and the like.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to extend the memory capability of the electronic device 100. The external memory card communicates with the processor 110 through the external memory interface 120 to implement a data storage function. For example, files such as music, video, etc. are saved in an external memory card.
The internal memory 121 may be used to store computer-executable program code, which includes instructions. The processor 110 executes various functional applications of the electronic device 100 and data processing by executing instructions stored in the internal memory 121. The internal memory 121 may include a program storage area and a data storage area. The storage program area may store an operating system, an application program (such as a sound playing function, an image playing function, and the like) required by at least one function, and the like. The storage data area may store data (such as audio data, phone book, etc.) created during use of the electronic device 100, and the like. In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (UFS), and the like.
The electronic device 100 may implement audio functions via the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the headphone interface 170D, and the application processor. Such as music playing, recording, etc.
The audio module 170 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal. The audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be disposed in the processor 110, or some functional modules of the audio module 170 may be disposed in the processor 110.
The speaker 170A, also called a "horn", is used to convert the audio electrical signal into an acoustic signal. The electronic apparatus 100 can listen to music through the speaker 170A or listen to a hands-free call.
The receiver 170B, also called "earpiece", is used to convert the electrical audio signal into an acoustic signal. When the electronic apparatus 100 receives a call or voice information, it is possible to receive voice by placing the receiver 170B close to the human ear.
The microphone 170C, also called a "mic" or a "sound transmitter", is used to convert sound signals into electrical signals. When making a call or transmitting voice information, a user can input a sound signal to the microphone 170C by speaking close to the microphone 170C. The electronic device 100 may be provided with at least one microphone 170C. In other embodiments, the electronic device 100 may be provided with two microphones 170C, which can implement a noise reduction function in addition to collecting sound signals. In other embodiments, the electronic device 100 may further be provided with three, four, or more microphones 170C, to collect sound signals, reduce noise, identify sound sources, implement directional recording, and so on.
The earphone interface 170D is used to connect a wired earphone. The headset interface 170D may be the USB interface 130, or may be a 3.5mm open mobile electronic device platform (OMTP) standard interface, a cellular telecommunications industry association (cellular telecommunications industry association of the USA, CTIA) standard interface.
The pressure sensor 180A is used to sense a pressure signal and can convert the pressure signal into an electrical signal. In some embodiments, the pressure sensor 180A may be disposed on the display screen 194. There are many types of pressure sensors 180A, such as a resistive pressure sensor, an inductive pressure sensor, and a capacitive pressure sensor. A capacitive pressure sensor may comprise at least two parallel plates of electrically conductive material. When a force acts on the pressure sensor 180A, the capacitance between the electrodes changes, and the electronic device 100 determines the strength of the pressure from the change in capacitance. When a touch operation is applied to the display screen 194, the electronic device 100 detects the intensity of the touch operation using the pressure sensor 180A, and may also calculate the touched position from the detection signal of the pressure sensor 180A. In some embodiments, touch operations applied to the same touch position but with different intensities may correspond to different operation instructions. For example, when a touch operation with an intensity less than a first pressure threshold acts on the short message application icon, an instruction to view the short message is executed; when a touch operation with an intensity greater than or equal to the first pressure threshold acts on the short message application icon, an instruction to create a new short message is executed.
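The two-threshold behaviour on the short message icon can be sketched as a simple dispatch function. The threshold value and instruction names below are illustrative placeholders, not real system constants:

```python
FIRST_PRESSURE_THRESHOLD = 0.5  # illustrative value; real firmware calibrates this

def dispatch_touch_on_sms_icon(touch_intensity):
    """Map the intensity of a touch on the SMS icon to an instruction,
    mirroring the two-threshold behaviour described above."""
    if touch_intensity < FIRST_PRESSURE_THRESHOLD:
        return "view_sms"   # light press: view the short message
    return "new_sms"        # firm press: create a new short message
```

A light touch and a firm touch on the same icon thus trigger different instructions.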
The gyro sensor 180B may be used to determine the motion attitude of the electronic device 100. In some embodiments, the angular velocity of the electronic device 100 about three axes (i.e., the x, y, and z axes) may be determined by the gyro sensor 180B. The gyro sensor 180B may be used for photographing anti-shake. For example, when the shutter is pressed, the gyro sensor 180B detects the shake angle of the electronic device 100, calculates the distance that the lens module needs to compensate for according to the shake angle, and allows the lens to counteract the shake of the electronic device 100 through reverse movement, thereby achieving anti-shake. The gyro sensor 180B may also be used for navigation and somatosensory gaming scenarios.
The air pressure sensor 180C is used to measure air pressure. In some embodiments, electronic device 100 calculates altitude from barometric pressure values measured by barometric pressure sensor 180C to assist in positioning and navigation.
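The altitude calculation can use the international standard atmosphere (barometric) formula; a sketch, assuming standard sea-level pressure of 1013.25 hPa:

```python
def altitude_from_pressure(pressure_hpa, sea_level_hpa=1013.25):
    """Estimate altitude in metres from barometric pressure using the
    international standard atmosphere formula (valid in the troposphere)."""
    return 44330.0 * (1.0 - (pressure_hpa / sea_level_hpa) ** (1.0 / 5.255))

# Lower measured pressure corresponds to a higher estimated altitude.
altitude_900 = altitude_from_pressure(900.0)   # roughly 1 km up
altitude_800 = altitude_from_pressure(800.0)   # higher still
```

The device would feed the live reading from the barometric pressure sensor into such a formula to assist positioning and navigation.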
The magnetic sensor 180D includes a Hall sensor. The electronic device 100 may detect the opening and closing of a flip holster using the magnetic sensor 180D. In some embodiments, when the electronic device 100 is a flip phone, the electronic device 100 may detect the opening and closing of the flip cover according to the magnetic sensor 180D. Features such as automatic unlocking upon flip opening can then be set based on the detected open or closed state of the holster or of the flip cover.
The acceleration sensor 180E may detect the magnitude of acceleration of the electronic device 100 in various directions (typically along three axes), and may detect the magnitude and direction of gravity when the electronic device 100 is stationary. It may also be used to recognize the posture of the electronic device, and is applied to landscape/portrait switching, pedometers, and other applications.
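Landscape/portrait switching from the stationary gravity vector reduces to comparing the gravity components along the screen axes. The sketch below is deliberately simplified (real systems add hysteresis and distinguish four or more orientations):

```python
def screen_orientation(ax, ay):
    """Infer portrait vs. landscape from the accelerometer's gravity
    components along the screen's x (short) and y (long) axes, in m/s^2.
    Simplified illustrative sketch."""
    return "portrait" if abs(ay) >= abs(ax) else "landscape"

# Device held upright: gravity pulls mostly along the long (y) axis.
upright = screen_orientation(0.3, 9.7)
# Device turned on its side: gravity pulls mostly along the short (x) axis.
sideways = screen_orientation(9.8, 0.5)
```

When the dominant gravity component moves from one axis to the other, the system rotates the UI accordingly.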
The distance sensor 180F is used to measure a distance. The electronic device 100 may measure the distance by infrared or laser. In some embodiments, in a shooting scene, the electronic device 100 may use the distance sensor 180F to measure the distance for fast focusing.
The proximity light sensor 180G may include, for example, a light emitting diode (LED) and a light detector, such as a photodiode. The light emitting diode may be an infrared light emitting diode. The electronic device 100 emits infrared light to the outside through the light emitting diode, and uses the photodiode to detect infrared light reflected from nearby objects. When sufficient reflected light is detected, it can be determined that there is an object near the electronic device 100; when insufficient reflected light is detected, the electronic device 100 may determine that there is no object nearby. The electronic device 100 can use the proximity light sensor 180G to detect that the user is holding the electronic device 100 close to the ear for a call, so as to automatically turn off the screen to save power. The proximity light sensor 180G may also be used in holster mode and pocket mode to automatically unlock and lock the screen.
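The reflected-light decision described above is essentially a threshold test. The sketch below is illustrative, with a made-up ADC threshold; real drivers calibrate such values per device:

```python
REFLECTED_LIGHT_THRESHOLD = 40  # illustrative ADC counts, not a real constant

def object_nearby(photodiode_reading):
    """Decide whether an object is near the device: the LED's infrared light
    only reflects back strongly when something is close to the sensor."""
    return photodiode_reading >= REFLECTED_LIGHT_THRESHOLD

def should_turn_off_screen(photodiode_reading, in_call):
    """Turn the screen off during a call when the device is held to the ear."""
    return in_call and object_nearby(photodiode_reading)
```

The same nearby/not-nearby decision feeds both the in-call screen-off feature and pocket mode.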
The ambient light sensor 180L is used to sense the ambient light level. Electronic device 100 may adaptively adjust the brightness of display screen 194 based on the perceived ambient light level. The ambient light sensor 180L may also be used to automatically adjust the white balance when taking a picture. The ambient light sensor 180L may also cooperate with the proximity light sensor 180G to detect whether the electronic device 100 is in a pocket to prevent accidental touches.
The fingerprint sensor 180H is used to collect a fingerprint. The electronic device 100 can utilize the collected fingerprint characteristics to unlock the fingerprint, access the application lock, photograph the fingerprint, answer an incoming call with the fingerprint, and so on.
The temperature sensor 180J is used to detect temperature. In some embodiments, the electronic device 100 implements a temperature processing strategy using the temperature detected by the temperature sensor 180J. For example, when the temperature reported by the temperature sensor 180J exceeds a threshold, the electronic device 100 reduces the performance of a processor located near the temperature sensor 180J, so as to reduce power consumption and implement thermal protection. In other embodiments, the electronic device 100 heats the battery 142 when the temperature is below another threshold, to avoid an abnormal shutdown of the electronic device 100 caused by low temperature. In other embodiments, when the temperature is below a further threshold, the electronic device 100 boosts the output voltage of the battery 142 to avoid an abnormal shutdown caused by low temperature.
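The three-threshold temperature strategy can be sketched as a policy function. The numeric thresholds and action names are illustrative assumptions, since the text does not give concrete values:

```python
def thermal_action(temp_c):
    """Pick a protection action from the reported temperature.
    Thresholds (45, 10, 0 degrees C) are illustrative assumptions."""
    if temp_c > 45:
        return "throttle_cpu"            # reduce nearby processor performance
    if temp_c < 0:
        return "boost_battery_voltage"   # coldest case: avoid abnormal shutdown
    if temp_c < 10:
        return "heat_battery"            # cold case: warm the battery
    return "normal"
```

Checking the coldest condition before the merely-cold one ensures each temperature maps to exactly one action.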
The touch sensor 180K is also referred to as a "touch panel". The touch sensor 180K may be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 form a touch screen, which is also called a "touch screen". The touch sensor 180K is used to detect a touch operation applied thereto or nearby. The touch sensor can communicate the detected touch operation to the application processor to determine the touch event type. Visual output associated with the touch operation may be provided through the display screen 194. In other embodiments, the touch sensor 180K may be disposed on a surface of the electronic device 100, different from the position of the display screen 194.
The bone conduction sensor 180M may acquire a vibration signal. In some embodiments, the bone conduction sensor 180M may acquire a vibration signal of the human vocal part vibrating the bone mass. The bone conduction sensor 180M may also contact the human body pulse to receive the blood pressure pulsation signal. In some embodiments, bone conduction sensor 180M may also be provided in a headset, integrated into a bone conduction headset. The audio module 170 may analyze a voice signal based on the vibration signal of the bone mass vibrated by the sound part acquired by the bone conduction sensor 180M, so as to implement a voice function. The application processor can analyze heart rate information based on the blood pressure beating signal acquired by the bone conduction sensor 180M, so that the heart rate detection function is realized.
The keys 190 include a power key, a volume key, and the like. The keys 190 may be mechanical keys or touch keys. The electronic device 100 may receive a key input and generate a key signal input related to user settings and function control of the electronic device 100.
The motor 191 may generate a vibration prompt. The motor 191 may be used for incoming-call vibration prompts as well as for touch vibration feedback. For example, touch operations applied to different applications (e.g., photographing, audio playing, etc.) may correspond to different vibration feedback effects. The motor 191 may also produce different vibration feedback effects for touch operations applied to different areas of the display screen 194. Different application scenarios (such as time reminders, received messages, alarm clocks, and games) may also correspond to different vibration feedback effects. The touch vibration feedback effect may also support customization.
Indicator 192 may be an indicator light that may be used to indicate a state of charge, a change in charge, or a message, missed call, notification, etc.
The SIM card interface 195 is used to connect a SIM card. The SIM card can be brought into and out of contact with the electronic device 100 by being inserted into or pulled out of the SIM card interface 195. The electronic device 100 may support 1 or N SIM card interfaces, where N is a positive integer greater than 1. The SIM card interface 195 may support a Nano SIM card, a Micro SIM card, a SIM card, and the like. Multiple cards may be inserted into the same SIM card interface 195 at the same time; the types of the cards may be the same or different. The SIM card interface 195 is also compatible with different types of SIM cards and with external memory cards. The electronic device 100 interacts with the network through the SIM card to implement functions such as calls and data communication. In some embodiments, the electronic device 100 employs an eSIM, namely an embedded SIM card. The eSIM card can be embedded in the electronic device 100 and cannot be separated from the electronic device 100.
The software system of the electronic device 100 may employ a hierarchical architecture, an event-driven architecture, a micro-core architecture, a micro-service architecture, or a cloud architecture. The embodiment of the present application takes an Android system with a layered architecture as an example, and exemplarily illustrates a software structure of the electronic device 100.
Fig. 2 is a block diagram of a software structure of the electronic device 100 according to the embodiment of the present application. The layered architecture divides the software into several layers, each layer having a clear role and division of labor. The layers communicate with each other through a software interface. In some embodiments, the Android system is divided into four layers, an application layer, an application framework layer, an Android runtime (Android runtime) and system library, and a kernel layer from top to bottom. The application layer may include a series of application packages.
As shown in fig. 2, the application package may include applications such as camera, gallery, calendar, phone call, map, navigation, WLAN, bluetooth, music, video, short message, etc.
The application framework layer provides an Application Programming Interface (API) and a programming framework for the application program of the application layer. The application framework layer includes a number of predefined functions.
As shown in FIG. 2, the application framework layers may include a window manager, content provider, view system, phone manager, resource manager, notification manager, and the like.
The window manager is used to manage window programs. The window manager can obtain the size of the display screen, determine whether there is a status bar, lock the screen, capture the screen, and the like.
The content provider is used to store and retrieve data and make it accessible to applications. The data may include video, images, audio, calls made and received, browsing history and bookmarks, phone books, etc.
The view system includes visual controls such as controls to display text, controls to display pictures, and the like. The view system may be used to build applications. The display interface may be composed of one or more views. For example, the display interface including the short message notification icon may include a view for displaying text and a view for displaying pictures.
The phone manager is used to provide communication functions of the electronic device 100, such as management of call status (including connected, hung up, and the like).
The resource manager provides various resources for the application, such as localized strings, icons, pictures, layout files, video files, and the like.
The notification manager enables an application to display notification information in the status bar. It can be used to convey notification-type messages, which can automatically disappear after a short stay without user interaction. For example, the notification manager is used to notify of download completion, provide message reminders, and the like. The notification manager may also present notifications that appear in the system status bar at the top of the screen in the form of a chart or scroll-bar text, such as notifications from applications running in the background, or notifications that appear on the screen in the form of a dialog window. For example, it may show text information in the status bar, play a prompt tone, vibrate the electronic device, or flash the indicator light.
The Android runtime comprises a core library and a virtual machine. The Android runtime is responsible for scheduling and managing the Android system.
The core library comprises two parts: one part is the functionality that the Java language needs to call, and the other part is the Android core library.
The application layer and the application framework layer run in the virtual machine. The virtual machine executes the Java files of the application layer and the application framework layer as binary files. The virtual machine is used to perform functions such as object life-cycle management, stack management, thread management, security and exception management, and garbage collection.
The system library may include a plurality of functional modules. For example: surface managers (surface managers), media libraries (media libraries), three-dimensional graphics processing libraries (e.g., openGL ES), 2D graphics engines (e.g., SGL), and the like.
The surface manager is used to manage the display subsystem and provide fusion of 2D and 3D layers for multiple applications.
The media library supports playback and recording of a variety of commonly used audio and video formats, as well as still image files and the like. The media library may support a variety of audio and video encoding formats, such as MPEG-4, H.264, MP3, AAC, AMR, JPG, and PNG.
The three-dimensional graphic processing library is used for realizing three-dimensional graphic drawing, image rendering, synthesis, layer processing and the like.
The 2D graphics engine is a drawing engine for 2D drawing.
The kernel layer is the layer between hardware and software. The kernel layer contains at least a display driver, a camera driver, an audio driver, and a sensor driver.
It should be understood that the embodiments of this application may be applicable to Android, iOS, or HarmonyOS (Hongmeng) systems.
The following describes an exemplary workflow of the software and hardware of the electronic device with reference to a photo-capturing scene.
When the touch sensor 180K receives a touch operation, a corresponding hardware interrupt is issued to the kernel layer. The kernel layer processes the touch operation into a raw input event (including information such as the touch coordinates and a timestamp of the touch operation). The raw input event is stored at the kernel layer. The application framework layer acquires the raw input event from the kernel layer and identifies the control corresponding to the input event. Taking as an example that the touch operation is a tap and the control corresponding to the tap is the camera application icon: the camera application calls an interface of the application framework layer to start the camera application, which in turn starts the camera driver by calling the kernel layer and captures a still image or video through the camera 193.
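The input pipeline described above — the kernel wrapping a touch into a raw input event, and the framework mapping its coordinates to a control — can be sketched in Python. This is purely illustrative; the names (`make_raw_input_event`, `dispatch`, the control rectangles) are assumptions, not the actual Android framework API.

```python
# Illustrative sketch of the input pipeline described above. All names
# are hypothetical; the real implementation lives in the Android framework.
import time

def make_raw_input_event(x, y):
    # Kernel layer: package the touch into a raw input event
    # (touch coordinates plus a timestamp).
    return {"x": x, "y": y, "timestamp": time.time()}

def dispatch(event, controls):
    # Framework layer: find which control's rectangle the coordinates
    # fall inside; rectangles are (left, top, right, bottom).
    for name, (left, top, right, bottom) in controls.items():
        if left <= event["x"] < right and top <= event["y"] < bottom:
            return name
    return None  # the tap hit no control

# Hypothetical layout: two icons on the home screen.
controls = {"camera_icon": (0, 0, 100, 100), "gallery_icon": (100, 0, 200, 100)}
```

A tap at (50, 50) would resolve to `camera_icon`, after which the framework would start the camera application as described in the text.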
Before introducing the technical solution of the embodiment of the present application, a network architecture provided by the embodiment of the present application and a notification method provided by the embodiment of the present application are first introduced through fig. 3 and fig. 4.
As shown in fig. 3 (a), fig. 3 (a) is a schematic diagram of a network architecture 200 provided in an embodiment of this application. The network architecture 200 includes a plurality of electronic devices, which may include a mobile phone 201, a watch 202, a smart speaker 203, a personal computer (or notebook) 204, a smart television 205, a tablet computer 206, and the like; this is not limited in this embodiment. All of the electronic devices can communicate with each other. The plurality of electronic devices can be connected to a local area network (LAN) through a wired or Wi-Fi connection, or communicate via a mobile network or the Internet.
For example, when the network architecture 200 is located in an environment such as a home, a plurality of electronic devices may be located in the same local area network. As shown in fig. 3 (a), the network architecture 200 may further include a router 207, and the router 207 may be configured to serve as an Access Point (AP) for providing a signal source of the network. Furthermore, each electronic device in the network architecture 200 may access the router 207 as a Station (STA). The router 207 may communicate with each electronic device in a wired network manner or a wireless network manner. For example, a Wi-Fi link is established between the electronic devices through a Wi-Fi protocol to implement communication between the devices, and a specific implementation may be that a peer-to-peer (P2P) connection (or referred to as Wi-Fi Direct) is established between the electronic devices, or each electronic device accesses the same router 207 to implement communication between the devices.
Optionally, a bluetooth link may be established between the devices by using a bluetooth protocol, and communication between the devices is realized based on the bluetooth link; or, the electronic devices may be interconnected through a cellular network; or, the electronic devices may be interconnected through a switching device (for example, a USB data line or a Dock device), so as to implement a communication function between the electronic devices, which is not limited in this embodiment of the present application.
In one possible implementation, the network architecture 200 also includes a third-party server 208. The third-party server 208 may be a server of third-party application software and is connected to the electronic devices through a network. The third-party server 208 may send notification information to the electronic devices. The number of third-party servers 208 is not limited to one; there may be more than one, which is not limited herein.
As shown in fig. 3 (b), fig. 3 (b) is a schematic diagram of another network architecture 300 provided in an embodiment of this application. The network architecture 300 includes a plurality of electronic devices, which may include a smart phone 201, a smart watch 202, a smart speaker 203, a personal computer 204, a smart television 205, a tablet computer 206, and the like; this is not limited in this embodiment of this application. Among the electronic devices there is a central device, such as the smart phone 201. A Wi-Fi P2P group owner (GO) is established on the central device, and the other devices connect to the GO as P2P group clients (GCs), forming a one-to-many network in which the devices can communicate with each other.
In one possible implementation, the network architecture further includes a third-party server 208. The third-party server 208 may be a server of third-party application software and is connected to the smartphone 201 through a network. The third-party server 208 sends notification information to the smartphone 201. The number of third-party servers 208 is not limited to one; there may be more than one, which is not limited herein.
As shown in fig. 3 (c), fig. 3 (c) is a schematic diagram of another network architecture 400 provided in an embodiment of this application. The network architecture 400 includes a plurality of electronic devices, which may include a smart phone 201, a smart watch 202, a smart speaker 203, a personal computer 204, a smart television 205, a tablet computer 206, and the like; this is not limited in this embodiment of this application. The smart phone 201, the smart watch 202, the smart speaker 203, the personal computer 204, the smart television 205, and the tablet computer 206 may be devices under the same account (for example, a Huawei account), and the devices under that account may establish a connection through the cloud server 209.
The following describes a notification processing method provided in an embodiment of the present application with reference to fig. 4, where the notification processing method includes:
S401, the first device judges the trigger condition.
After obtaining content (e.g., messages, multimedia files, notifications), the first device determines under what circumstances a cross-device notification to the user is triggered, and what content needs to be notified to the user across devices.
For example, in this embodiment of this application, the first device may determine its own state after acquiring the content. Taking a mobile phone as an example, if the mobile phone determines that it is in the lock-screen state after acquiring the content, the mobile phone may determine that the content needs to be notified to the user across devices, thereby preventing the user from missing important or valuable content.
For example, if the first device includes a camera, the first device may start the camera to acquire face information of the user after acquiring the content. If the first device does not acquire the face information of the user, the first device may determine that the current focus of the user is not on the first device, and the first device may notify the user across devices.
For example, the first device may determine that a visual prompt cannot reach the user (e.g., content that would only be notified on a locked screen) or that only a sound/vibration alert is available, and may then notify the user across devices.
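The trigger checks of S401 can be sketched as a small predicate. This is a minimal sketch under stated assumptions: the field names (`screen_locked`, `user_face_detected`, `audio_only_alert`) are illustrative, not any device's actual state API.

```python
# Hypothetical sketch of the S401 trigger condition described above.
from dataclasses import dataclass

@dataclass
class DeviceState:
    screen_locked: bool       # device is in the lock-screen state
    user_face_detected: bool  # camera found the user's face
    audio_only_alert: bool    # only a sound/vibration alert could reach the user

def should_forward(state: DeviceState) -> bool:
    """Return True when the content should be notified across devices."""
    if state.screen_locked:
        return True   # user may miss important or valuable content
    if not state.user_face_detected:
        return True   # user's current focus is not on the first device
    if state.audio_only_alert:
        return True   # a visual prompt cannot reach the user
    return False
```

For example, a locked phone (first branch) or a phone whose camera sees no face (second branch) would forward the notification.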
S402, the first device selects the second device.
In this embodiment of the application, the criterion for the first device to select the second device may be to ensure that the user receives the reminder in time.
Illustratively, the first device may determine the second device according to the priority of the user's visual focus device, on-body device, and interaction device, and the specific determination process may refer to the following specific embodiments.
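The S402 priority described above — visual-focus device first, then an on-body (wearable) device, then a recently interacted device — can be sketched as follows. The dictionary keys are assumptions for illustration only; the specific determination process is given in the embodiments below.

```python
# Illustrative sketch of the S402 device-selection priority. Field names
# ("visual_focus", "on_body", "interacting") are hypothetical.
def select_second_device(candidates):
    """candidates: list of dicts like
    {"name": "smart_tv", "visual_focus": True, "on_body": False, ...}"""
    for key in ("visual_focus", "on_body", "interacting"):
        for dev in candidates:
            if dev.get(key):
                return dev["name"]
    return None  # no suitable device; keep the notification on the first device

# Example candidate set: the TV has the user's visual focus, so it wins
# even though the watch is on the user's body.
devices = [
    {"name": "watch", "on_body": True},
    {"name": "smart_tv", "visual_focus": True},
    {"name": "tablet", "interacting": True},
]
```

With the example list above, `select_second_device(devices)` picks the smart TV, matching the scenarios in fig. 5.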
And S403, the second equipment prompts the user.
In this embodiment of this application, the considerations when the second device prompts the user include, but are not limited to, the following:
(1) The second device selects a notification form
Illustratively, the second device may select the notification form based on its device status and capabilities. For example, a device with a screen may prompt the user with a banner in the unlocked state and with a lock-screen notification in the locked state; a screenless device may prompt the user by voice.
(2) Presentation style of second device
For example, to protect user privacy, the second device may prompt the user with the content (e.g., the content of a message received by the first device) only when it determines that the user alone is currently focusing on the second device; when the second device determines that several people, including the user, are focusing on the second device, it only prompts that content has been received, without showing the detailed content.
(3) Second device optimized interaction
For example, for some weakly interactive devices (e.g., smart televisions), upon receiving content sent by the first device, the content may be displayed without interactive controls, and the user can view and process the content in the notification center.
As for the car machine, although it also has a display screen, it may preferentially use voice broadcast in consideration of safety while the user is driving.
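Considerations (1)-(3) above can be combined into one small sketch. All capability flags and form names here are hypothetical, chosen only to mirror the examples in the text (banner vs. lock-screen notification, voice for screenless devices, voice broadcast while driving, and content hiding when several people are watching).

```python
# Hedged sketch of the S403 presentation choices; names are assumptions.
def choose_prompt(device):
    """Consideration (1)/(3): pick a notification form from device state."""
    if device.get("is_driving_head_unit"):   # safety first in the car
        return "voice_broadcast"
    if not device.get("has_screen", False):  # screenless device, e.g. speaker
        return "voice"
    if device.get("screen_locked"):
        return "lock_screen_notification"
    return "banner"                          # screen device, unlocked

def prompt_content(full_content, faces_in_front):
    """Consideration (2): show details only when the user is alone."""
    if faces_in_front == 1:
        return full_content
    return "You have a new message."
```

For instance, an unlocked tablet gets a banner, a smart speaker gets voice, and a TV watched by two people shows only "You have a new message."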
When the second device prompts the user, it may also do so only after detecting a preset operation by the user. For example, as shown in fig. 32, after the mobile phone takes a picture (or captures a screenshot), it may send a thumbnail of the picture to the notebook computer; after receiving the thumbnail, in order to reduce interference to the user, the notebook computer may prompt the user with the thumbnail when it detects the user moving the mouse.
S404, the second device receives the processing operation of the user.
After the user receives the prompt on the second device, the corresponding content may be processed. Illustratively, if the content is a message, the user may reply to the message on the second device.
For example, if the content is a thumbnail of a picture, the user may obtain the original picture from the first device by tapping the thumbnail or tapping a download control on the second device.
Illustratively, as shown in fig. 23, the second device (e.g., a watch) may also prompt the user to view the mail content through a third device (e.g., a laptop).
The following specifically describes an embodiment of the present application with reference to Graphical User Interfaces (GUIs) shown in fig. 5 to 44.
FIG. 5 is a set of GUIs provided by embodiments of the present application.
The user is watching the smart television in the living room; the mobile phone is on a table beside the user and is in the lock-screen state.
Referring to the GUI shown in (a) of fig. 5, the smart TV is playing a video. The mobile phone receives an App1 message sent by Lihua (the message content is "There is a meeting at 9 am"). The mobile phone determines that the user's current focus is on the smart TV rather than on the mobile phone, so it may send the App1 message content to the smart TV, and the TV reminds the user of the received App1 message. A message reminder box 501 may be displayed on the smart TV; the message reminder box 501 shows that the message is from App1, that the sender is Lihua, and that the content is "There is a meeting at 9 am".
Referring to the GUI shown in (b) of fig. 5, the smart TV is performing a channel-switching operation. The mobile phone receives an App1 message sent by user Lihua (the message content is "There is a meeting at 9 am"). The mobile phone determines that the user's current focus is on the smart TV rather than on the mobile phone, so it may send the App1 message content to the smart TV, and the TV reminds the user of the received App1 message. A message reminder box 502 may be displayed on the smart TV; the message reminder box 502 shows that the message is from App1, that the sender is Lihua, and that the content is "There is a meeting at 9 am".
In one embodiment, the smart television can automatically position a cursor of the smart television at the message reminding frame when the message reminding frame is displayed. When the smart television detects that the user clicks the confirmation button through the remote controller, the smart television can automatically jump to the notification center. The user can complete a reply to the message at the notification center.
In one embodiment, if the smart television does not receive any operation of the user within the preset time length of displaying the message reminding frame, the smart television can hide the message reminding frame.
Referring to the GUI shown in (c) of fig. 5, after the user finishes watching the video or completes the channel-switching operation, the user can enter the notification center of the smart TV through an operation. The notification center displayed on the smart TV includes the previously received App1 message reminder box 503, which includes a reply control 504 and an ignore control 505, and shows that the message is from App1, that it comes from user Lily's P40 mobile phone, that the sender is Lihua, and that the content is "There is a meeting at 9 am". When the smart TV detects that the user clicks the reply control 504, the smart TV may display the GUI shown in (d) of fig. 5.
Referring to the GUI shown in (d) of fig. 5, when the smart TV detects that the user clicks the control 504, the smart TV may display its input method, and the user can then reply to the App1 message through the input method.
It should be understood that, in the GUIs shown in fig. 5 (a) and 5 (b), the mobile phone may not light up its display even though it receives the App1 message, because it has determined that the user's focus is not on the mobile phone. Instead, having determined that the device the user is currently focusing on is the smart TV, it sends the content of the notification message to the smart TV so that the smart TV prompts the user. When the mobile phone detects that the user presses the power key, the mobile phone enters the locked-but-lit state, and it may then display the App1 message prompt on the lock-screen interface.
It should be further understood that the smart TV shown in (a) of fig. 5 is playing a video and the smart TV shown in (b) of fig. 5 is performing a channel-switching operation, so when the smart TV displays the App1 message reminder, interactive controls may be omitted from the message reminder box 501 (for example, no reply control or ignore control is added), so as to avoid confusion caused by accidental clicks.
FIG. 6 illustrates another set of GUIs provided by an embodiment of the present application.
Referring to the GUI shown in (a) of fig. 6, this GUI is similar to the GUI shown in (a) of fig. 5. The difference is that a reply control and an ignore control are added to the message reminder box displayed on the smart TV.
In one embodiment, after the smart television receives the message of App1 sent by the mobile phone, the smart television can display a message reminding frame, and at the moment, the smart television can automatically position a cursor to the reply control. When the smart television detects that the user clicks the determination button through the remote controller, the user can complete the reply to the message of App1 in the process of playing the video.
In one embodiment, if the smart television does not detect that the user replies to the App1 message within a preset time, the smart television can automatically hide the message reminding frame. The user can check the message content in the notification center of the intelligent television and reply.
Referring to the GUI shown in (b) of fig. 6, this GUI is similar to the GUI shown in (a) of fig. 6. The difference is that when the user is operating the smart TV and the smart TV receives the notification message sent by the mobile phone, the smart TV displays the message reminder box but does not automatically position the cursor on the reply control, so as not to disturb the user's current operation.
Because the user is operating the smart TV to change channels at this moment, the smart TV does not automatically position the cursor, so that the user's operation is not disturbed. Further, the GUI shown in (b) of fig. 6 includes a text prompt "press the menu button to process" near the message reminder window, so that although the user is changing channels with the remote control (pressing the up, down, left, and right buttons), the user can press the menu button after seeing the text prompt. When the smart TV detects that the user presses the menu key, it can automatically position the cursor on the reply control, so that the reply to the App1 message can be completed.
As shown in (b) in fig. 6, when the smart television detects that the user has clicked the menu key, the smart television may automatically position the cursor to the reply control.
In an embodiment, the message reminding frame shown in (a) in fig. 6 may also not include a control, and when the smart television detects that the user does not interact with the smart television, the smart television may position a cursor to the message reminding frame. When the user clicks the determination button by using the remote controller, the smart television can jump to a notification center, wherein the notification center comprises the notification message and a corresponding reply control. The user can complete the reply to the message in the notification center.
FIG. 7 illustrates another set of GUIs provided by embodiments of the present application.
The user is driving, and navigation information is displayed on the display screen of the car machine. The mobile phone has been placed by the user in a storage slot in the vehicle and is in the lock-screen state.
The mobile phone receives an App1 message sent by Lihua (the message content is "There is a meeting at 9 am"). The mobile phone determines that the user's current focus is on the car machine rather than on the mobile phone, so it may send the App1 message content to the display screen of the car machine, which reminds the user of the received App1 message. A message reminder box 701 may be displayed on the display screen of the car machine; the message reminder box 701 shows that the message is from App1, that the sender is Lihua, and that the content is "There is a meeting at 9 am".
The car machine may remind the user by voice that "there is a meeting at 9 am", or that "user Lihua reminds you that there is a meeting at 9 am". The car machine may collect the user's voice input through a microphone; for example, the collected voice input may be "OK, I know". The car machine may send this voice message to user Lihua, or convert it into text and send the text to user Lihua.
In one embodiment, the display screen of the car machine may further include a voice broadcast control, and when the car machine detects that the user clicks the voice broadcast control, the car machine may remind the user that a conference is available at 9 am in a voice reminding manner.
In one embodiment, a digital signal processor (DSP) may be included in the vehicle. The DSP can process the voice information and parse out the corresponding text content.
In one embodiment, an automatic speech recognition (ASR) module may be included in the vehicle; the ASR module mainly serves to recognize the user's speech as text.
In an embodiment, the message reminder box displayed on the display screen of the car machine may also include a send control; after the user clicks the send control, the car machine may send the user's reply to the mobile phone. Alternatively, the message reminder box may not include a send control: the car machine collects the user's voice through the microphone, and after the user replies "I know" to Lihua by voice, the car machine may send the reply content to the mobile phone.
In one embodiment, after detecting that the user clicks the sending control, the car machine may also send the preset quick reply content to the mobile phone.
In this embodiment of this application, for the scenario in which the user is driving, the car machine can convert text information into voice to prompt the user. This prevents the user from being distracted by reading message content on the display screen, and improves safety while driving.
FIG. 8 illustrates another set of GUIs provided by embodiments of the present application.
The user is sitting on the sofa; the mobile phone and the smart speaker are both on a table beside the user.
The mobile phone receives a mail from user Lihua (the subject of the mail is "Meeting notice"). The mobile phone determines that the user's current focus is not on the mobile phone, so it may send the mail prompt to the smart speaker, which prompts the user by voice: "You have received a mail; please check it on your mobile phone."
It should be understood that, although the user's focus is not on the smart speaker at this moment, the smart speaker can remind the user by voice, which attracts the user's attention more effectively and prevents important mails from being missed.
FIG. 9 illustrates another set of GUIs provided by an embodiment of the present application.
The user is wearing a wearable device and sitting on a sofa; the notebook computer is in a sleep (or dormant) state and is placed on a table beside the user.
The user has set a schedule on the notebook computer: the time is 9 to 10 am, and the event is a meeting. When the notebook computer determines that the schedule event is 5 minutes away, it cannot prompt the user because it is in the sleep state (when the notebook computer is opened from the closed state and detects that it has entered the working state, it can prompt the user with the schedule information). The notebook computer can therefore send the schedule reminder to the wearable device.
Referring to the GUI shown in fig. 9, the wearable device prompts the information of the schedule through the display screen.
In this embodiment of this application, multiple devices of a user can be connected to and aware of each other. When device A receives a notification message, it may determine the focus the user is currently paying attention to. If the user's current focus is not on device A, notification forwarding is triggered. When the focus device is determined to be device B, device A may forward the notification message to device B; device A then does not issue the notification while device B does, so that the user is not disturbed repeatedly.
After the notification is forwarded to device B, device B may also optimize the notification presentation and interaction according to the capabilities of device B.
FIG. 10 illustrates another set of GUIs provided by embodiments of the present application.
The user is using the tablet on the sofa; the mobile phone is on a table beside the user and is in the lock-screen state.
The mobile phone receives a message of App1 sent by Lihua (the message content is '9 am with a meeting'). At this time, the mobile phone judges that the current focus of the user is on the tablet and not on the mobile phone, and then the mobile phone can send the message content of App1 to the tablet. The tablet can judge whether to remind the user of the App1 message according to the state of the current tablet.
Referring to the GUI shown in (a) of fig. 10, if the tablet is in a state of playing video at this time, the tablet may not prompt the user for the App1 message.
In this embodiment, if the tablet is in an immersive state, the tablet may not prompt the App1 message. The immersive state may also be understood as a notification-disabled state: the user may have disabled notifications or turned on a do-not-disturb mode. For example, when a user is in a teleconference, especially when sharing the desktop, a popped-up notification not only disturbs the user but may also be seen by the other participants, which affects the user experience and may reveal the user's privacy.
Referring to the GUI shown in (b) of fig. 10, if the tablet is displaying the desktop at this time, the tablet may display a message alert box 1001. The message reminding box 1001 includes a message that the message is App1, a user sending the message is li hua, the content of the message is "9 am conference", a reply control, and an ignore control.
In this embodiment of the application, when the focus device receives the notification message, the focus device may first determine whether the focus device is in the immersion state. If the focal device is in an immersive state, the focal device may not alert the user; if the focus device is not in an immersive state, the focus device may alert the user.
In one embodiment, the immersion state includes, but is not limited to:
(1) The focus device is in a do-not-disturb mode;
(2) The focus device is in a game, video, etc. scene.
It should be understood that the focus device may issue notification reminders according to the user's settings. For example, the user may set that messages are not prompted in a game scene but are prompted in a video scene.
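The immersion-state check described above can be sketched as follows. This is a hedged sketch under stated assumptions: the flag names and the `muted_scenes` preference are illustrative, modeling only that do-not-disturb always suppresses the prompt while scene-based muting follows the user's settings.

```python
# Illustrative sketch of the immersion-state decision; names are assumptions.
def in_immersive_state(device, user_prefs):
    if device.get("do_not_disturb"):
        return True                         # do-not-disturb mode always mutes
    scene = device.get("scene")             # e.g. "game", "video", "desktop"
    # Per the text, a user may mute games yet still allow video reminders.
    return scene in user_prefs.get("muted_scenes", ())

def should_prompt(device, user_prefs):
    """The focus device prompts only when it is not in an immersive state."""
    return not in_immersive_state(device, user_prefs)
```

With `muted_scenes` set to `("game",)`, a tablet playing a game stays silent while one playing a video shows the reminder, matching fig. 10.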
FIG. 11 illustrates another set of GUIs provided by embodiments of the present application.
The user is watching the smart television in the living room, the mobile phone is placed on a desk beside the user, and the mobile phone is in a screen locking state.
Referring to the GUI shown in (a) in fig. 11, the smart tv is playing a video at this time. The mobile phone receives a message of App1 sent by Lihua (the message content is '9 am with a meeting'). At this time, the mobile phone judges that the current focus of the user is on the smart television but not on the mobile phone, then the mobile phone can send the message content of App1 to the smart television, and the television reminds the user of the content of the received message of App 1.
Before the smart television prompts the user for the App1 message, the smart television can automatically start a camera to acquire the face information of the user. If the smart television judges that the user is the user himself through the acquired face information, a message reminding box 1101 can be displayed on the smart television, wherein the message reminding box 1101 comprises that the message is App1, the user sending the message is Lihua, and the content of the message is '9 am with meeting'.
Referring to the GUI shown in (b) of fig. 11, if the face information collected by the camera of the smart TV contains multiple faces, the smart TV determines that someone other than the user is also watching. The smart TV may then display a message reminder box 1102, which indicates only that the message belongs to App1 and prompts the user "You have a message, from P40 of Lily". If the user wants to view the App1 message, the user may view it in the notification center of the smart TV, or return to the mobile phone to view it.
In the embodiment of the application, when the focus device prompts the user with a notification message, it may collect user information such as face information or iris information to determine whether the viewer is the user himself or herself. If so, the focus device may prompt the user with the notification message; if not, or if people other than the user are present, the focus device may prompt only that a notification exists, without displaying its content. This avoids disclosure of the user's privacy and improves security in the message forwarding process.
In one embodiment, after receiving the App1 message sent by the mobile phone, the smart TV may first determine whether the message is a privacy message. If it is, the smart TV then determines whether the viewer is the user alone or multiple users; if it is not a privacy message, the content may be displayed directly without this check.
In one embodiment, the privacy message may be set by the user, or may be a default Instant Messaging (IM) type message.
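The privacy-gated display logic of the two preceding paragraphs can be sketched as follows. This is an illustrative assumption-laden sketch; the return labels and the representation of face information are hypothetical, and real face matching would use a recognition model rather than string equality.

```python
# Sketch of the privacy gate: a non-privacy message is shown directly;
# a privacy message is shown in full only when the collected face
# information corresponds to the owner alone. Face values are stand-in
# strings for illustration only.

def render_notification(is_privacy: bool, faces: list, owner_face: str) -> str:
    if not is_privacy:
        return "show_content"       # no check needed
    if faces == [owner_face]:       # the owner is the only viewer
        return "show_content"
    return "show_hint_only"         # e.g. "You have a message, from P40 of Lily"

print(render_notification(False, [], "lily"))               # show_content
print(render_notification(True, ["lily"], "lily"))          # show_content
print(render_notification(True, ["lily", "guest"], "lily")) # show_hint_only
```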
In the several sets of GUIs illustrated above in fig. 5 to fig. 11, when device A receives a notification message, device A may determine where the user's focus currently is. If the user's current focus is not on device A, notification forwarding is triggered. When device B is determined to be the focus device, device A may forward the notification message to device B; in this case, device A does not perform the notification and device B does. In another embodiment of the present application, after device A determines that the focus device is device B, device A may instead send the notification message to device B and to the user's other devices, with only the focus device performing the notification and the other devices staying silent.
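The second routing variant described above can be sketched as follows; the device names and action labels are illustrative assumptions, not part of the embodiment.

```python
# Sketch: device A distributes the message to all of the user's devices,
# but only the focus device performs the reminder; the rest stay silent.

def route_notification(devices: list, focus_device: str) -> dict:
    """Map each receiving device to the action it should take."""
    return {dev: ("remind" if dev == focus_device else "silent")
            for dev in devices}

print(route_notification(["phone", "tablet", "wearable"], "wearable"))
# {'phone': 'silent', 'tablet': 'silent', 'wearable': 'remind'}
```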
FIG. 12 illustrates another set of GUIs provided by an embodiment of the present application.
The user is sitting on a sofa wearing a wearable device; the mobile phone is placed on a table beside the user, and the tablet lies on the sofa next to the mobile phone.
The mobile phone receives a mail sent by Lihua (the mail subject is "Meeting notice"). The mobile phone determines that the user's current focus is not on the mobile phone, so it may send the mail content to the tablet and the wearable device. Because the wearable device determines that the user is currently wearing it, the wearable device serves as the focus device and performs the notification. As shown in fig. 12, the wearable device may remind the user of the mail by vibrating. As for the mobile phone and the tablet, since they are not the focus device, the mail content may be prompted to the user on the lock-screen interface by lighting up the screen.
FIG. 13 illustrates another set of GUIs provided by an embodiment of the present application.
Referring to the GUI shown in (a) of fig. 13, the GUI is the desktop of a mobile phone. The GUI includes a plurality of application icons (App1, App2, App3, etc.), including a Settings icon 1301. When the mobile phone detects that the user clicks the Settings icon 1301 on the desktop, the related functions of the mobile phone may be set, and the GUI shown in (b) of fig. 13 is displayed.
Referring to the GUI shown in (b) of fig. 13, the GUI is the settings interface of the mobile phone. The GUI includes a plurality of function options, such as wireless and network, device connection, desktop and wallpaper, sound, notification center, and application and notification. Through the application-and-notification option, the user can set functions such as rights management, default applications, and notification management. When the mobile phone detects an operation of the user clicking the application and notification function 1302, the GUI shown in (c) of fig. 13 is displayed.
Referring to the GUI shown in (c) of fig. 13, the GUI is the application-and-notification settings interface of the mobile phone. The GUI includes multiple function options, including rights management, default applications, application affiliation, and cross-device notification. When the mobile phone detects an operation of the user clicking the cross-device notification function 1303, the GUI shown in (d) of fig. 13 is displayed.
Referring to the GUI shown in (d) of fig. 13, the GUI is the cross-device notification settings interface of the mobile phone. The GUI includes a plurality of function options under cross-device notification, including a switch control for cross-device notification, notification type, device management, and device authorization. Under the switch control is a description of the function: "When a notification arrives, if you are using another device, the notification is forwarded to that device." When the mobile phone detects an operation of the user clicking the notification type 1304, the mobile phone may display the GUI shown in (e) of fig. 13.
Referring to the GUI shown in (e) of fig. 13, the GUI is the notification type settings interface of the mobile phone. The user can select the notification types that he or she wishes to forward from those shown in (e) of fig. 13. Illustratively, the user may select the messages of App5, App6, and App7 as the types to be forwarded.
In the embodiment of the application, the user can select, through the settings option, the types of notifications to be forwarded, which avoids disturbing the user by forwarding unnecessary notification messages.
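The filtering that results from these settings can be sketched as follows; the App names follow the figure, but the data structures and function name are illustrative assumptions.

```python
# Sketch of the per-type forwarding filter: a message is forwarded only
# when the cross-device switch is on and its type was selected by the user.

def should_forward(app: str, cross_device_on: bool, selected_types: set) -> bool:
    return cross_device_on and app in selected_types

selected = {"App5", "App6", "App7"}   # the types ticked in FIG. 13 (e)
print(should_forward("App5", True, selected))   # True
print(should_forward("App1", True, selected))   # False
print(should_forward("App5", False, selected))  # False
```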
The process of selecting a notification type by a handset is described above in connection with the GUI of fig. 13. The process of selecting a receiving device by a handset will now be described with reference to fig. 14 and 15.
When the cellular phone detects that the user has clicked on the device management 1401 as shown in (a) in fig. 14, the cellular phone may display the GUI as shown in (b) in fig. 14.
Referring to the GUI shown in (b) of fig. 14, the GUI is the display interface of the device management function, which includes "my devices" and "other devices". "My devices" are devices logged in to the same Huawei ID as the mobile phone. Illustratively, "my devices" includes a tablet "HUAWEI MatePad Pro", a smart watch "HUAWEI WATCH GT 2", another mobile phone "HUAWEI P40 Pro", a notebook computer "HUAWEI MateBook X", and a smart speaker "HUAWEI Sound X". When all these devices are selected on the mobile phone, the mobile phone may forward notification messages to them.
The user may select the devices under the same Huawei ID as the forwarding objects of the notification messages, or may select the devices under different Huawei IDs as the forwarding objects.
When the handset detects that the user has clicked device authorization 1501 shown in (a) of fig. 15, the handset may display the GUI shown in (b) of fig. 15.
Referring to the GUI shown in (b) of fig. 15, the GUI is the display interface of the device authorization function, which includes an add-device option and a nearby-device option. The description of the add-device option states that devices under other Huawei IDs can be added to "other devices". For example, the user may add the Huawei IDs of other family members in a family group, and the devices under those Huawei IDs may then appear in "other devices" as shown in (b) of fig. 14. When the mobile phone detects an operation of the user opening the nearby-device control, the mobile phone may search for nearby devices and display the GUI shown in (c) of fig. 15.
Referring to the GUI shown in (c) of fig. 15, the GUI is another display interface of the device authorization function. It displays the nearby devices found by the mobile phone, including iPhone 11 Pro, iPad Pro 11, and Mi 10 Pro. When the mobile phone detects that the user clicks to add iPad Pro 11 and Mi 10 Pro, the mobile phone may add the two devices to "other devices" in device management.
Referring to the GUI shown in (d) of fig. 15, the GUI is another display interface of device management. The two nearby devices, iPad Pro 11 and Mi 10 Pro, have been added to "other devices" on this interface.
In the embodiment of the application, the mobile phone may search for surrounding devices through short-range communication such as Bluetooth and Wi-Fi. For example, after detecting that the user clicks to add the iPad Pro 11 and the Mi 10 Pro, the mobile phone may send a Bluetooth low energy (BLE) packet to the two devices, where the BLE packet may carry request information requesting that the device be added as a target for notification-message forwarding. If the mobile phone receives responses (e.g., ACK messages) from both devices, it can add both devices to "other devices". Further, the iPad Pro 11 and the Mi 10 Pro may each prompt the user through a reminder box to ask whether the user agrees to the mobile phone's add request. If the iPad Pro 11 and the Mi 10 Pro detect that the user agrees to the request, the two devices may send responses to the mobile phone.
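The request/ACK handshake described above can be sketched as follows. The BLE transport is simulated here; the function names and the dictionary of remote-user decisions are hypothetical stand-ins for the actual prompt boxes on the remote devices.

```python
# Sketch of the add-device handshake: the phone sends an add request to
# each candidate device; a device is added to "other devices" only after
# the remote user agrees in a reminder box and an ACK comes back.

def request_add_devices(candidates: list, remote_user_agrees: dict) -> list:
    added = []
    for dev in candidates:
        # Simulated BLE request packet; the remote device shows a
        # reminder box and the user either agrees (ACK) or not.
        if remote_user_agrees.get(dev, False):
            added.append(dev)
    return added

decisions = {"iPad Pro 11": True, "Mi 10 Pro": True, "iPhone 11 Pro": False}
print(request_add_devices(["iPad Pro 11", "Mi 10 Pro", "iPhone 11 Pro"], decisions))
# ['iPad Pro 11', 'Mi 10 Pro']
```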
FIG. 16 is another set of GUIs provided by embodiments of the present application.
The user Lily is currently watching a video using a tablet, and her mobile phone is placed on a desk beside her.
Referring to the GUI shown in (a) of fig. 16, the handset receives a social App message (e.g., "Happy Birthday!") from the user Tom. The phone may then forward the message to the tablet that the user is focusing on, and the tablet may display a message reminder box 1601. The message reminder box 1601 includes the source of the message (e.g., the name of the social App, and that the message comes from the P40 of the user Lily), the user information of the sender (e.g., "Tom" and the avatar of the user Tom), the content of the message (e.g., "Happy Birthday!"), and a control 1602.
It should be understood that the tablet may be logged in to the same social App account as the phone; or the tablet may have the social App installed but not logged in with the same account as the phone; or the tablet may be a device on which the social App is not installed at all.
In the embodiment of the application, after the mobile phone receives a notification, if the mobile phone determines that the user's focus device is the tablet, the mobile phone may carry the content of the message, the information of the user who sent the message, and a "quick reply" attribute in the information sent to the tablet. Furthermore, the mobile phone may also carry its own device information in that information. The tablet can thus display the reminder box 1601 to the user according to the information sent by the mobile phone.
It should be understood that the notification message forwarded by the mobile phone in the embodiment of the present application may be of a notification type set by the user in advance; for example, in the system settings the user may set notification messages of Apps such as WeChat, QQ, and DingTalk as forwardable. Alternatively, the forwarded notification message may by default be an IM type message (such as an SMS message or a WeChat message).
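The forwarded information described two paragraphs above can be sketched as a payload builder; the field names and the example values are illustrative assumptions, not a wire format defined by the embodiment.

```python
# Sketch of the information the phone sends to the focus device: message
# content, sender information, the phone's own device information, and a
# "quick reply" attribute that lets the tablet show a reply control.

def build_forward_payload(app: str, sender: str, content: str,
                          source_device: str, quick_reply: bool = True) -> dict:
    return {
        "app": app,                 # e.g. the social App's name
        "sender": sender,           # e.g. "Tom" plus avatar in practice
        "content": content,
        "source": source_device,    # e.g. "P40 of Lily"
        "quick_reply": quick_reply, # enables the reply control 1602
    }

payload = build_forward_payload("SocialApp", "Tom", "Happy Birthday!", "P40 of Lily")
print(payload["quick_reply"])  # True
```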
When the tablet detects an operation of the user clicking on control 1602, a GUI as shown in (b) of fig. 16 may be displayed.
Some time after the message is received, it is automatically hidden. The user may then process the message by entering the notification center or through the pull-down menu.
When the tablet detects that the user clicks control 1602, a text entry box 1603 may be displayed in the message reminder box, with a send control 1604 inside text entry box 1603. Meanwhile, the tablet may also bring up the system input method. The user may edit the reply content (e.g., "Thank you!") via the input method, and that content may be displayed in text entry box 1603.
When the tablet detects that the user clicks control 1604, the tablet may send the reply content to the cell phone. Meanwhile, the input method and the message reminder box on the tablet may be automatically hidden. The resulting changes of the GUI on the mobile phone are shown in (c) to (f) of fig. 16.
Referring to the GUI shown in (c) of fig. 16, when the mobile phone receives the user's reply content sent by the tablet, the mobile phone may automatically start the social App. The display interface of the social App may display the historical chat records of user Tom, user Sam, user Lucy, and user Mike in order of the time the messages were received.
Referring to the GUI shown in (d) of fig. 16, the handset can automatically open the chat interface of the user Tom. When the chat interface is opened, a message (Happy Birthday |) sent by user Tom to user Lily may be displayed on the chat interface.
Referring to the GUI shown in (e) of fig. 16, the cell phone may automatically reply to the user Tom with the content that the user entered on the tablet. When the handset completes the reply, it can return to the state it was in before receiving the message: see (f) of fig. 16, where the mobile phone automatically returns from the chat interface with user Tom to the desktop.
It should be understood that in the embodiment of the present application, when the tablet detects the user clicking control 1604, the reply is not actually sent from the tablet; the real reply process is completed on the handset. For example, when the handset displays the interface shown in (e) of fig. 16, the handset completes the real reply to the message.
It should also be understood that the handset may complete the reply internally: after receiving the reply content sent by the tablet, the handset may automatically complete the reply in the background. That is, the process by which the handset completes the true reply may be imperceptible to the user.
In the embodiment of the application, the user can reply to the message directly on the tablet without returning to the mobile phone. This prevents the user from missing important information and spares the user the trip back to the mobile phone, thereby improving user experience.
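The background-reply sequence on the handset (open the chat, send the reply, restore the previous state) can be sketched as follows; the state representation and log are illustrative assumptions.

```python
# Sketch of the phone-side background reply: the phone receives the
# tablet's reply content, opens the sender's chat interface, sends the
# real reply, then returns to the state it was in before the message
# arrived -- all potentially invisible to the user.

def complete_reply_in_background(saved_state: str, contact: str,
                                 reply_text: str, log: list) -> str:
    log.append(f"open {contact}'s chat interface")
    log.append(f"send: {reply_text}")
    return saved_state   # restore the pre-message state (e.g. the desktop)

log = []
state = complete_reply_in_background("desktop", "Tom", "Thank you!", log)
print(state)  # desktop
print(log)
```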
FIG. 17 is another set of GUIs provided by embodiments of the present application.
The user is working on a notebook computer in one room; the mobile phone is in a screen-locked state and has been left in another room.
Referring to the GUI shown in (a) of fig. 17, the handset receives a social App message from user Tom (e.g., "Happy Birthday!") and a social App message from user Amy (e.g., "Happy Birthday!"). The mobile phone may send the messages of user Tom and user Amy to the notebook computer that the user is focusing on, and the notebook computer may display message reminder boxes 1701 and 1702. The first message reminder box 1701 contains the source of the message (e.g., that the message belongs to the social App and comes from the P40 of the user Lily), the user information of the sender (e.g., "Tom" and the avatar of user Tom), the content of the message (e.g., "Happy Birthday!"), and a control 1703. The second message reminder box 1702 contains the source of the message, the user information of the sender (e.g., "Amy" and the avatar of user Amy), the content of the message, and a reply control.
When the notebook computer detects an operation of the user clicking on the control 1703, a GUI as shown in (b) of fig. 17 may be displayed.
The laptop may display a text entry box 1704 in the message reminder box, with a send control 1705 inside text entry box 1704. Meanwhile, the notebook computer may detect the content the user types on the keyboard (e.g., "Thank you!"), and that content may be displayed in text entry box 1704.
When the notebook computer detects that the user clicks the control 1705, the notebook computer can send the reply content to the mobile phone. Meanwhile, the message alert box 1701 on the notebook computer may be automatically hidden. The change of GUI on the mobile phone can be seen in (c) to (f) of fig. 17.
Referring to (c) in fig. 17, the mobile phone may directly pull up the display interface of the social application App on the lock screen interface. The display interface can display the historical chat records of the user Tom, the user Amy, the user Sam, the user Lucy and the user Mike in sequence according to the sequence of the time when the message is received.
It should be understood that in some social Apps, the answering interface for a friend's video call or voice call can be displayed directly on the lock-screen interface of the mobile phone. In the embodiment of the application, by adding a flags attribute in the social App that supports loading over the lock screen, the mobile phone can directly pull up the display interface of the social App on the lock-screen interface.
It should be understood that (d) to (e) in fig. 17 can refer to the description of (d) to (e) in fig. 16, and are not repeated herein for brevity.
Referring to (f) in fig. 17, the mobile phone enters a screen-locked state after completing a true reply to the message.
In one embodiment, when the laptop detects that the user has clicked control 1705, the message reminder box 1701 may disappear, while the message reminder box 1702 may continue to be displayed. After a period of time, if the user has not replied to user Amy on the laptop, message reminder box 1702 may be hidden automatically.
In one embodiment, after the mobile phone completes the real reply to user Tom, the prompt for user Tom's message on the lock-screen interface may be hidden automatically, while the prompt for user Amy's message may continue to be displayed.
In one embodiment, while detecting the content the user types on the keyboard, the notebook computer may automatically start its camera to collect the user's face information. When sending the user's reply content to the mobile phone, the notebook computer may also send the collected face information. When the mobile phone determines that the face information collected by the notebook computer matches the face information preset in the mobile phone, the mobile phone may complete the real reply to the message. At the same time, because the face information matches, the mobile phone may automatically unlock and leave the screen-locked state, thereby enabling the reply process shown in (c) to (e) of fig. 17.
In the embodiment of the application, when the notebook computer sends the user's reply content to the mobile phone, it may also send the face information it has collected. The mobile phone can judge, from the face information, whether it is the owner who is replying. If so, the mobile phone completes the real reply to the notification message; otherwise, the mobile phone may decline to reply. This helps the mobile phone protect the user's privacy and improves user experience.
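The face-gated reply check can be sketched as follows. Face information is represented by plain values for illustration; a real implementation would compare biometric templates, and all names here are hypothetical.

```python
# Sketch of the phone-side check: the real reply (and the automatic
# unlock) happen only when the face collected by the notebook computer
# matches the face information preset in the phone.

def handle_remote_reply(reply_text: str, collected_face, preset_face) -> dict:
    if collected_face == preset_face:
        return {"replied": True, "unlocked": True, "sent": reply_text}
    return {"replied": False, "unlocked": False, "sent": None}

print(handle_remote_reply("Thank you!", "lily_face", "lily_face"))
# {'replied': True, 'unlocked': True, 'sent': 'Thank you!'}
```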
FIG. 18 is another set of GUIs provided by embodiments of the present application.
The user is driving, and the display screen of the in-vehicle device shows that the user is using navigation. The mobile phone has been placed in a storage compartment in the car.
The handset receives a social App message (e.g., happy Birthday |) from the user Tom. At this time, the mobile phone may send the message to the car machine of the user, and the car machine displays the message reminding frame 1801 through the display screen. The message reminder box 1801 may include, among other things, the source of the message (e.g., the message belongs to the social App message and the message is from P40 of the user Lily), the user information (e.g., "Tom" and the avatar of the user Tom) to send the message, and the content of the message (e.g., happy Birthday |). Meanwhile, the car machine can also prompt the user in a voice reminding mode to prompt that the Tom sends a Happy Birthday to you through the social App! ".
Compared with the GUIs shown in fig. 16 and fig. 17, the message reminder box displayed on the screen of the in-vehicle device may not include a send control. After prompting the user by voice, the in-vehicle device may collect the user's reply through a microphone. As shown, the user replies "Thank you!". The in-vehicle device may then display the reply content in message reminder box 1801.
In one embodiment, a digital signal processor (DSP) may be included in the in-vehicle device. The DSP can process the voice information and parse out the corresponding text content.
In one embodiment, an automatic speech recognition (ASR) module may be included in the in-vehicle device, the ASR module being mainly used to recognize the user's voice information as text content.
In one embodiment, the message reminder box displayed on the screen of the in-vehicle device may also include a send control; after the user clicks the send control, the in-vehicle device may send the user's reply content to the mobile phone. Alternatively, the message reminder box may not include a send control; after collecting the user's voice reply to Tom ("Thank you!") through the microphone, the in-vehicle device may send the reply content to the mobile phone.
In one embodiment, the reply key on the message reminder box may be mapped to a button on the steering wheel, so that the user can reply by voice simply by pressing the steering-wheel button, which further improves driving safety.
In the embodiment of the application, the content that the in-vehicle device sends back to the mobile phone may be voice information or text content.
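The voice-reply path described above can be sketched as follows; the ASR module is stubbed with a lambda, and the function names and payload shape are illustrative assumptions.

```python
# Sketch of the in-vehicle reply path: the spoken reply is recognized as
# text (ASR), then forwarded to the phone either as text or as the
# original voice clip, matching the two options in the text above.

def forward_reply(audio: bytes, recognize, send_as: str = "text") -> dict:
    text = recognize(audio)   # ASR: speech -> text
    if send_as == "text":
        return {"type": "text", "payload": text}
    return {"type": "voice", "payload": audio, "transcript": text}

recognize_stub = lambda audio: "Thank you!"   # stand-in for a real ASR module
print(forward_reply(b"<pcm>", recognize_stub))
# {'type': 'text', 'payload': 'Thank you!'}
```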
Referring to (a) of fig. 18, after receiving the reply content sent by the in-vehicle device, the mobile phone may directly pull up the display interface of the social App on the lock-screen interface. The display interface may display the historical chat records of user Tom, user Sam, user Lucy, and user Mike in order of the time the messages were received.
Referring to the GUI shown in (b) of fig. 18, the handset can automatically open the chat interface of the user Tom. When the chat interface is opened, a message (Happy Birthday!) sent by user Tom to user Lily may be displayed on the chat interface.
Referring to the GUI shown in (c) of fig. 18, the mobile phone may automatically reply to the user Tom with text corresponding to the voice message. Alternatively, referring to the GUI shown in (d) in fig. 18, the handset may reply the voice message to the user Tom.
In the embodiment of the application, a text-to-speech function can be added to the receiving-end device. This improves the safety and convenience of message-notification forwarding in the in-vehicle scene and helps improve user experience.
Fig. 19 is another set of GUIs provided in embodiments of the present application.
The user works with the notebook computer in a room, and the mobile phone is in a screen locking state and is placed in another room by the user.
Referring to the GUI shown in fig. 19 (a), the handset receives a social App message from the user Tom (e.g., please send me the project plan). At this time, the mobile phone may send the message to the laptop that the user is focusing on, and the laptop may display the message alert box 1901. The message reminder box 1901 includes a source of the message (e.g., the message belongs to the social App and is from P40 of the user Lily), user information for sending the message (e.g., "Tom" and the avatar of the user Tom), content of the message (e.g., please send the project plan to me), and a control 1902.
Referring to the GUI shown in (b) of fig. 19, when the notebook computer detects an operation of clicking the control 1902 by the user, the notebook computer may display an input box 1903 and a send control 1904. After the notebook computer detects that the user drags the project plan (word document) on the desktop of the notebook computer to the input box 1903, the notebook computer may send the project plan (word document) to the mobile phone. Or, after the notebook computer detects that the user drags the project plan (word document) on the desktop of the notebook computer to the input box 1903 and the user clicks the control 1904, the notebook computer may send the project plan (word document) to the mobile phone.
The display interface of the mobile phone after receiving the project plan (word document) sent by the notebook computer can be referred to as (c) to (f) in fig. 19.
Referring to the GUI shown in (c) of fig. 19, the mobile phone may directly pull up the display interface of the social App on the lock-screen interface. The display interface may display the historical chat records of user Tom, user Sam, user Lucy, and user Mike in order of the time the messages were received.
Referring to the GUI shown in (d) of fig. 19, the handset can automatically open the chat interface of the user Tom. When the chat interface is opened, a message sent by user Tom to user Lily (please send the project plan to me) may be displayed on the chat interface.
Referring to the GUI shown in (e) of fig. 19, the handset may send the project plan (Word document) to user Tom. When the handset completes the reply, it can return to the state before the notification message was received.
See (f) in fig. 19. After the mobile phone completes the real reply to the message, the mobile phone enters a screen locking state.
In this embodiment, a receiving-end (sink) device may send an attachment to the sending-end (source) device, where the attachment may be a file of a type such as picture, video, audio, or document. The user can thus complete email-like interactions directly in the notification, which improves the convenience of message-notification forwarding and the user experience.
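The attachment reply can be sketched as a small wrapper that classifies the dragged-in file before the sink device sends it to the source device; the extension-to-type mapping and function name are illustrative assumptions.

```python
# Sketch: wrap a dragged-in file (e.g. the project plan from FIG. 19) as
# an attachment reply, tagging it as picture/video/audio/document.

def build_attachment_reply(path: str) -> dict:
    kinds = {"doc": "document", "docx": "document", "jpg": "picture",
             "png": "picture", "mp4": "video", "mp3": "audio"}
    ext = path.rsplit(".", 1)[-1].lower()
    return {"attachment": path, "type": kinds.get(ext, "file")}

print(build_attachment_reply("project_plan.docx"))
# {'attachment': 'project_plan.docx', 'type': 'document'}
```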
FIG. 20 is another set of GUIs provided by embodiments of the present application.
Referring to the GUI shown in (a) of fig. 20, the handset receives a social App message from user Tom (e.g., "Happy Birthday!"). The mobile phone may send the message to the laptop that the user is focusing on, and the laptop may display a message reminder box 2001. When the notebook computer detects that the user clicks the message reminder box 2001, the GUI shown in (b) of fig. 20 may be displayed.
Referring to the GUI shown in (b) of fig. 20, when the notebook computer detects that the user has clicked a position in the message reminder box, the notebook computer may display another message reminder box 2002. Compared with the message reminder box 2001, the message reminder box 2002 better matches the display style of the social App on the PC side. After detecting that the user enters the reply content (e.g., "Thank you!") in the input box of the message reminder box 2002 and clicks the send control, the notebook computer can send the reply content to the mobile phone.
Referring to the GUI shown in (c) of fig. 20, after receiving the user's reply content, the handset can automatically pull up the chat interface with user Tom and thus automatically complete the real reply. Meanwhile, the notebook computer may continue to display the message reminder box 2003, which may contain the content the user has just replied (e.g., "Thank you!").
It should be understood that, in fig. 17, after the user clicks the reply control 1703, the notebook computer may detect the reply content entered by the user and, upon detecting the user clicking the send control 1705, send the reply content to the cellular phone. The message reminder box 1701 may automatically disappear once the content is sent to the handset; when the mobile phone next receives a message, the notebook computer displays a new reminder box. By contrast, the message reminder box in the GUI shown in (b) of fig. 20 may remain after the user replies, and the notebook computer closes the message reminder box 2002 only when it detects that the user closes it.
In one embodiment, referring to the GUI shown in (d) of fig. 20, the message reminder box 2002 further includes a scroll bar 2004. When the laptop detects that the user slides the scroll bar 2004 upward with the mouse, the laptop may request the history of chats with user Tom from the phone, so that more chat content can be displayed in the message reminder box.
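The scroll-triggered history request can be sketched as follows; the record shape, timestamps, and the stubbed phone-side service are illustrative assumptions, since the embodiment does not specify the laptop-phone protocol.

```python
# Sketch: when the scroll bar is dragged up, the laptop asks the phone
# for chat records older than the oldest one currently displayed, and
# prepends them to the reminder box.

def on_scroll_up(displayed: list, fetch_history, limit: int = 5) -> list:
    oldest = displayed[0]["ts"]
    older = fetch_history(oldest, limit)   # request to the phone
    return older + displayed

# Stub of the phone-side history service.
history = [{"ts": t, "text": f"msg{t}"} for t in range(10)]
fetch = lambda before, limit: [m for m in history if m["ts"] < before][-limit:]

shown = [{"ts": 9, "text": "msg9"}]
print([m["ts"] for m in on_scroll_up(shown, fetch)])  # [4, 5, 6, 7, 8, 9]
```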
FIG. 21 is another set of GUIs provided by embodiments of the present application.
The user Lily is currently using a notebook computer, and her cell phone is placed aside.
Referring to the GUI shown in (a) of fig. 21, the handset receives a video call request from user Tom. The mobile phone may forward the request to the laptop that the user is focusing on, and the laptop may display a message reminder box 2101. The message reminder box 2101 includes the source of the message (e.g., the name of the social App, and that the message comes from the P40 of the user Lily), the user information of the caller (e.g., "Tom" and the avatar of Tom), prompt information (e.g., "Tom invites you to a video call"), and controls 2102 and 2103.
Referring to the GUI shown in (b) of fig. 21, when the notebook computer detects that the user clicks control 2103, it may start its camera to collect the user's face information and send the collected face information (the face information of the user Lily) together with indication information to the mobile phone, where the indication information instructs the mobile phone to answer the video call. In response to the received indication information, the mobile phone may answer the video call and display a video call interface. The face information (that of user Lily) displayed in the small window 2104 of the video call interface is the face information collected by the camera of the notebook computer.
The mobile phone may also send the image information it receives from user Tom's device to the notebook computer, and the notebook computer may display the window 2105 upon receiving the face information collected by user Tom's device. The window 2105 may display both the face information collected by user Tom's device and sent by the mobile phone, and the face information collected by the camera of the notebook computer. The window 2105 also includes a control 2106.
In one embodiment, after receiving the indication information, the mobile phone may also connect the video call without displaying the video call interface. When the mobile phone receives the face information collected by user Tom's device, it can send the face information to the notebook computer.
In one embodiment, when the notebook computer detects that the user clicks the control 2106, it may send indication information to the mobile phone, where the indication information instructs the mobile phone to end the video call with user Tom. In response to receiving the indication information, the mobile phone can hang up the video call.
FIG. 22 is another set of GUIs provided by embodiments of the present application.
Referring to the GUI shown in fig. 22 (a), the mobile phone receives a voice call request from the user Tom. At this time, the mobile phone may forward the message to the notebook computer on which the user is focusing, and the notebook computer may display a message reminder box 2201. The message reminder box 2201 includes the message source (e.g., the name of the social App, and that the message is from user Lily's P40), the user information of the message sender (e.g., "Tom" and Tom's avatar), prompt information (e.g., "Tom invites you to talk over voice"), a control 2202, and a control 2203.
Referring to the GUI shown in fig. 22 (b), when the notebook computer detects that the user clicks the control 2203, it may collect the audio of user Lily through its microphone and send the collected audio information, together with indication information instructing the mobile phone to connect the voice call, to the mobile phone. In response to receiving the indication information sent by the notebook computer, the mobile phone may connect the voice call and display a voice call interface. The voice call interface includes the prompt information "In voice call with Tom" and the duration of the voice call.
The mobile phone may also send the audio information of user Tom that it receives to the notebook computer, and the notebook computer may display the window 2204 upon receiving the audio information of user Tom. The audio information of user Tom sent by the mobile phone may be output through the window 2204. The window 2204 also includes a control 2205.
In one embodiment, after receiving the indication information, the mobile phone may also connect the voice call without displaying the voice call interface. After the mobile phone receives the audio information of user Tom, it can send the audio information to the notebook computer.
In one embodiment, when the notebook computer detects that the user clicks the control 2205, it may send indication information to the mobile phone, where the indication information instructs the mobile phone to end the voice call with user Tom. In response to receiving the indication information, the mobile phone can hang up the voice call.
It should be understood that the mobile phone may also determine the notification type of the message before sending the message to the notebook computer. If the notification type of the message is one that is allowed to be forwarded, the mobile phone can forward the notification message to the notebook computer; and/or, before sending the message to the notebook computer, the mobile phone may determine the user's focus device. If the user's focus device is in the list of devices to which forwarding is allowed, the mobile phone can forward the notification message to the notebook computer.
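The two checks described above can be sketched as a simple gating function. The concrete type names and the allowlists below are illustrative assumptions; the patent only states that both a notification-type check and a focus-device check may gate forwarding.

```python
# Assumed policy tables -- the patent does not enumerate the actual entries.
FORWARDABLE_TYPES = {"instant_message", "video_call", "voice_call"}
ALLOWED_DEVICES = {"laptop", "tablet"}

def should_forward(notification_type: str, focus_device: str) -> bool:
    """Forward a notification only if its type is allowed for forwarding
    AND the user's current focus device is in the forwarding allowlist."""
    return notification_type in FORWARDABLE_TYPES and focus_device in ALLOWED_DEVICES
```

On a real handset these tables would be populated from the user's settings (cf. the notification-type and receiving-device selection described for fig. 13 to fig. 15).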
It should also be understood that, for the process of selecting a notification type by the mobile phone, reference may be made to the description in the embodiment corresponding to fig. 13, and for the process of selecting a receiving device by the mobile phone, reference may be made to the descriptions in the embodiments corresponding to fig. 14 and fig. 15; for brevity, details are not repeated here.
FIG. 23 is another set of GUIs provided by embodiments of the present application.
Referring to fig. 23 (a), when the mail application of the mobile phone 201 receives a new e-mail, the mobile phone 201 pops up a window 2311 on the display 2310 to remind the user of the new e-mail. If the mobile phone 201 is in a screen-off, unused state, or the mobile phone 201 is not in front of the user, the user cannot perceive the notification of the mobile phone 201 in time and may miss important e-mails. The mobile phone 201 can determine, according to the type of the notification (e.g., a screen-off notification) and the type of the notification service (e.g., mail), that the notification needs to be prompted on other devices, which increases the interaction and coordination capabilities between devices and prevents the user from missing important e-mails.
For example, the mobile phone 201 may select a suitable device (as a prompt device) from the devices networked with the mobile phone 201 to issue the prompt, and may select a suitable device (as a continuation device) from the devices networked with the mobile phone 201 to complete the notified task. Selecting the prompt device and the continuation device from the devices in the network ensures the security of data access and information transmission between the devices.
In an embodiment, the devices networked with the mobile phone may include the tablet computer 206, the watch 202, the smart television 205, and the notebook 204. The process of networking the mobile phone with other devices may refer to the process of establishing a connection between the mobile phone and other devices in fig. 3 (a) to (c), which is not described here again for brevity.
The mobile phone 201 can learn that the watch 202 is worn by the user; the watch 202 can prompt the user and display brief notification information, but its interaction capability is limited, so it is not suitable for replying to e-mails. The mobile phone can transfer the important notification to the watch 202 for prompting, which increases the interaction and coordination capabilities between devices, prevents the user from missing the important notification, and lets the user quickly and conveniently obtain the latest notification.
The mobile phone 201 may also learn that the notebook 204 is the device closest to the watch 202 (and may therefore be considered closest to the user), and that the notebook 204 has better display and interaction capabilities and is suitable for processing e-mails. Therefore, the mobile phone 201 can select the notebook 204 as the continuation device to perform the notified task (i.e., processing the e-mail).
It should be understood that the process by which the mobile phone 201 learns that the watch 202 is worn by the user and the process by which the mobile phone 201 learns that the notebook 204 is closest to the watch 202 may refer to the specific implementation processes in the following embodiments.
The mobile phone 201 may send a prompt message to the watch 202, which is used to prompt, on the watch 202, that the mobile phone 201 has received the e-mail, and to indicate that the notebook 204, as the continuation device, is suitable for processing the e-mail. The prompt message may include the notification information from the mobile phone 201 and the information of the notebook 204.
Referring to fig. 23 (b), after the watch 202 receives the prompt message, it may display the prompt text 2341 on the display screen 2340, where the prompt text 2341 prompts the user that the mobile phone 201 has received an e-mail. The display screen 2340 of the watch 202 may further display an interface element, such as a shortcut entry 2342, with the corresponding wording "notebook viewing details". When the watch 202 detects that the user clicks the shortcut entry 2342, the watch 202 may, based on the continuation device information in the prompt message, send an execution message to the notebook 204 to trigger the notebook 204 to process the e-mail. The execution message may include the information the notebook needs to perform the task, for example, one or more of the intent (Intent) of the service, the service name, and the service data, so that after the notebook 204 receives the execution message, it can automatically pull up the corresponding service or run the corresponding application or program, thereby improving the cooperative processing capability between devices and the efficiency of information processing.
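The message contents above can be sketched as plain dictionaries. The patent only says the prompt message carries the notification information plus the continuation device, and that the execution message may carry a service intent, service name, and service data; every concrete field name, intent string, and address below is a hypothetical placeholder.

```python
# Prompt message sent by the phone to the watch (assumed field names).
prompt_message = {
    "notification": {"app": "Email", "summary": "New e-mail received"},
    "continuation_device": {"name": "notebook-204", "addr": "192.168.1.24"},
}

def build_execution_message(prompt: dict) -> dict:
    """Built by the watch when the user taps the shortcut entry, and sent to
    the continuation device named in the prompt message. The intent string
    is an invented example, not a real Android action."""
    return {
        "intent": "com.example.ACTION_VIEW_EMAIL",        # service intent
        "service_name": prompt["notification"]["app"],    # which service to pull up
        "service_data": prompt["notification"]["summary"],
        "target": prompt["continuation_device"]["addr"],  # where to send it
    }

exec_msg = build_execution_message(prompt_message)
```

On receipt, the continuation device would dispatch on `intent`/`service_name` to launch the matching application, mirroring how the notebook 204 pulls up the e-mail service without the mail application having been running.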
Referring to fig. 23 (c), after the notebook 204 receives the execution message sent by the watch 202, it may display the corresponding e-mail on the display screen 2370 without the user opening it from the e-mail application entry, and the e-mail application does not need to be in a running state (including running in the background) before the execution message is received. Through a single interaction with the watch 202, the user can directly process the e-mail on the notebook 204, which saves operations, improves display efficiency, and improves the user experience.
FIG. 24 is another set of GUIs provided by an embodiment of the present application.
As shown in fig. 24 (a), when the notebook 204 receives a notification of a new e-mail, the display screen 2370 of the notebook 204 may pop up a window 2371 to prompt the user that the notebook 204 has received the e-mail. At this time, the notebook 204 may determine whether the user's focus is on the notebook 204 by turning on its camera. For example, the notebook 204 may start the camera to collect the user's face information; if the camera collects no face information, the notebook 204 may select a prompt device and a continuation device. Illustratively, the notebook 204 may select the watch 202 as the prompt device and itself as the continuation device.
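A minimal sketch of this decision, assuming a policy modelled on the notebook/watch example above: if no face is detected locally, promote a worn wearable to prompt device and keep the current device as continuation device. The device names and the `worn` attribute are illustrative assumptions.

```python
def pick_devices(face_detected_locally: bool, devices: list) -> tuple:
    """Return (prompt_device, continuation_device). If the user's face is in
    front of this device, prompt locally and no hand-off is needed."""
    if face_detected_locally:
        return None, None
    worn = [d for d in devices if d.get("worn")]      # prefer a worn wearable
    prompt_device = worn[0]["name"] if worn else None
    return prompt_device, "notebook-204"              # this device continues the task

devices = [{"name": "watch-202", "worn": True}, {"name": "tv-205", "worn": False}]
prompt_dev, cont_dev = pick_devices(False, devices)
```

A production system would rank candidates by richer signals (proximity, display size, current activity), as the surrounding embodiments describe; the sketch only captures the branch on local face detection.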
As shown in fig. 24 (b), when the user wants to process the e-mail through the notebook 204, the user may click the shortcut entry 2342. In response to the user's click, the watch 202 sends an execution message to the notebook 204. As shown in fig. 24 (c), after the notebook 204 receives the execution message, it may pull up the e-mail service and open the corresponding e-mail, and the user can directly process and reply to the e-mail. Through a single interaction with the watch 202, the user can directly process the e-mail on the notebook 204.
FIG. 25 is another set of GUIs provided by an embodiment of the present application.
As shown in fig. 25 (a), when the video application of the mobile phone 201 receives, in the background, a notification of a video update, the mobile phone 201 may pop up a window 2312 on the display 2310 to remind the user that the basketball game has been updated. If the user cannot perceive the notification from the mobile phone 201 in time, the latest basketball game may be missed. The mobile phone 201 can determine, according to the type of the notification (e.g., a screen-off notification) and the type of the notification service (e.g., a video application), that the notification needs to be prompted on other devices, which increases the interaction and coordination capabilities between devices and prevents the user from missing the latest basketball game.
The mobile phone 201 may learn that the tablet 206 is being used by the user, and that the tablet 206 can display a notification and prompt the user; therefore, the mobile phone 201 may select the tablet 206 as the prompt device.
The mobile phone 201 can also learn that the smart television 205 and the notebook 204 are both close to the tablet computer 206, and that both have good display capabilities and are suitable for playing videos. Therefore, the mobile phone 201 can present the smart TV 205 and the notebook 204 to the user as options, and the user selects which one serves as the continuation device to perform the notified task (i.e., playing the basketball game).
It should be appreciated that the process by which the mobile phone 201 learns that the tablet 206 is being used by the user and the process by which the mobile phone 201 learns that the smart TV 205 and the notebook 204 are both close to the tablet 206 may refer to the following implementation processes.
As shown in fig. 25 (b), after the tablet computer 206 receives the prompt message, it pops up a window 2321 on the display screen 2320. The window 2321 prompts the user that the mobile phone 201 has received the notification of the basketball game update, and may be displayed in the status bar or as a banner notification. The window 2321 may also display two shortcut entries: the wording corresponding to the shortcut entry 2322 is "smart screen view", and the wording corresponding to the shortcut entry 2323 is "notebook view". When the user wants to watch the basketball game through the smart TV 205, the shortcut entry 2322 may be clicked; when the user wants to watch the basketball game through the notebook 204, the shortcut entry 2323 may be clicked.
For example, the user wants to watch the basketball game through the smart TV 205 and clicks the shortcut entry 2322. In response to the user's click operation (i.e., the confirmation operation), the tablet computer 206 sends an execution message to the smart TV 205 to trigger the smart TV 205 to play the basketball game.
It should be understood that the content of the execution message sent by the tablet computer 206 to the smart television 205 may refer to the description in the above embodiment.
As shown in fig. 25 (c), after the smart television 205 receives the execution message, it performs the notified task: it opens the video application, invokes the corresponding Activity, and plays the basketball game. The video service is pulled up and the basketball game is played directly, without the user having to open the video application installed on the smart television 205, which saves the user's operations and improves the user experience.
FIG. 26 is another set of GUIs provided by embodiments of the present application.
As shown in fig. 26 (a), the display 2310 of the mobile phone 201 displays a window 2313 and a shortcut entry 2314, with the wording "smart screen view" corresponding to the shortcut entry 2314. When the user wants to play the basketball game through the smart TV 205, the shortcut entry 2314 can be clicked. In response to the user's click, the mobile phone 201 sends an execution message to the smart TV 205, and after receiving the execution message, the smart TV 205 performs the notified task.
As shown in fig. 26 (b), the smart TV 205 plays the basketball game. Through a single interaction with the mobile phone 201, the user can have the basketball game played directly on the smart television 205.
FIG. 27 is another set of GUIs provided by an embodiment of the present application.
As shown in fig. 27 (a), the mobile phone 201 may pop up a window 2312 on the display 2310 to remind the user that the basketball game has been updated. If the tablet 206 is being used by the user at this time and there is no device near the user with a larger display screen than the tablet 206, the mobile phone 201 may select the tablet 206 as the prompt device and send a prompt message to the tablet 206.
As shown in fig. 27 (b), after the tablet computer 206 receives the prompt message, the display 2320 of the tablet computer 206 pops up a window 2324. The window 2324 may also display a shortcut entry 2325, with the wording "tablet view" corresponding to the shortcut entry 2325. When the user wants to play the basketball game through the tablet computer 206, the shortcut entry 2325 may be clicked. In response to the user's click, the tablet computer 206 pulls up the corresponding video service and plays the basketball game. Through a single interaction with the tablet 206, the user can play the basketball game directly on the tablet 206.
FIG. 28 is another set of GUIs provided by embodiments of the present application.
As shown in fig. 28 (a), the mobile phone 201 is performing an operation. For example, in one embodiment, the operation is a camera operation, and the mobile phone 201 may display a camera window 2315 on the display 2310. In other embodiments, the operation may also be a photographing operation, a calling operation, a video playing operation, a music playing operation, or the like.
Illustratively, both the watch 202 and the smart TV 205 are networked with the mobile phone 201. The mobile phone 201 learns that the watch 202 is worn by the user and that the watch 202 can display a notification and promptly alert the user; therefore, the mobile phone 201 can select the watch 202 as the prompt device. The mobile phone 201 also learns that the smart TV 205 is close to the watch 202 and has better display capability, making it suitable for a video call.
The video call application of the mobile phone 201 receives, in the background, a notification of a video call request. As shown in fig. 28 (b), the mobile phone 201 may pop up a window 2316 on the display 2310 to remind the user of the new video call request. If the video call were answered on the mobile phone 201, the video call application would need to invoke the camera of the mobile phone 201, interrupting the ongoing camera operation. The mobile phone 201 can determine, according to the type of the notification (e.g., a screen-on notification) and the type of the notification service (e.g., a video call application), that the notification needs to be prompted on other devices, which increases the interaction and coordination capabilities between devices and prevents the camera operation of the mobile phone 201 from being interrupted by the video call. The mobile phone may send a prompt message to the watch 202.
As shown in fig. 28 (c), the watch 202 receives the prompt message and, through the display screen 2340, notifies the user that the mobile phone 201 has received the notification of the video call request. The display screen 2340 can also show the shortcut entry 2343, with its corresponding wording "answer on smart screen". When the user wants to take the video call on the smart TV 205, the shortcut entry 2343 may be clicked.
As shown in fig. 28 (d), after the smart TV 205 receives the execution message, the smart TV 205 pulls up the corresponding video call service, invokes the corresponding Activity, and calls its own camera and audio module to perform the notified task, that is, to conduct the video call.
FIG. 29 is another set of GUIs provided by an embodiment of the present application.
Referring to fig. 29 (a), when the mail application of the mobile phone 201 receives a new e-mail, the mobile phone 201 pops up a window 2311 on the display 2310 to remind the user of the new e-mail. If the mobile phone 201 is in a screen-off, unused state, or the mobile phone 201 is not in front of the user, the user cannot perceive the notification of the mobile phone 201 in time and may miss important e-mails. The mobile phone 201 can determine, according to the type of the notification (e.g., a screen-off notification) and the type of the notification service (e.g., mail), that the notification needs to be prompted on other devices, which increases the interaction and coordination capabilities between devices and prevents the user from missing important e-mails. The mobile phone may select the watch 202 as the prompt device and the notebook 204 as the continuation device, and the mobile phone 201 may send a prompt message to the watch 202.
It should be understood that the processes by which the mobile phone 201 selects the watch 202 as the prompt device and the notebook 204 as the continuation device may refer to the descriptions in the above embodiments.
Referring to fig. 29 (b), after the watch 202 receives the prompt message sent by the mobile phone 201, it may display the prompt text 2341 on the display screen 2340, where the prompt text 2341 prompts the user that Lily's mobile phone (P40) has received an e-mail. The display screen 2340 of the watch 202 may further display an interface element, such as the shortcut entry 2342, with the corresponding wording "notebook viewing details".
After the mobile phone 201 sends the prompt message to the watch 202, if the mobile phone detects that the user has picked it up, the mobile phone 201 may send another prompt message to the watch 202, where the other prompt message may carry the information of a new continuation device; for example, the new continuation device may be the mobile phone 201 itself.
Referring to fig. 29 (c), the watch 202 may update the information of the shortcut entry displayed on the display 2340 after receiving the other prompt message sent by the mobile phone 201. As shown in fig. 29 (c), the shortcut entry 2342 may be updated to the shortcut entry 2343, with the wording "cell phone view details" corresponding to the shortcut entry 2343.
Referring to fig. 29 (d), after receiving the other prompt message sent by the mobile phone 201, the watch 202 may instead add another interface element, such as the shortcut entry 2343, with the corresponding wording "cell phone viewing details", on the display 2340.
In this embodiment of the application, when the mobile phone first determines that the notebook is the device closest to the user, the information of the notebook can be carried in the prompt message sent to the watch. When the mobile phone detects that the user picks up the mobile phone, it can determine that the mobile phone itself is now the device closest to the user, and it can then send another prompt message to the watch carrying the information of the mobile phone. In this way the watch can dynamically update the continuation device information, making it convenient for the user to check the mail content on the device closest to them.
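The watch-side handling of a follow-up prompt message can be sketched as below, covering both variants shown in fig. 29: replacing the existing shortcut entry (c) or adding a second one (d). The dictionary shape and labels are illustrative assumptions.

```python
def update_shortcuts(shortcuts: dict, new_prompt: dict, replace: bool = True) -> dict:
    """Watch-side update when a new prompt message names a different
    continuation device: either replace the existing shortcut entry
    (fig. 29 (c)) or keep it and add a second entry (fig. 29 (d))."""
    entry = {new_prompt["continuation_device"]: new_prompt["label"]}
    return entry if replace else {**shortcuts, **entry}

# The first prompt message named the notebook as the continuation device...
current = {"notebook-204": "Notebook: view details"}
# ...then the user picks up the phone, which names itself instead.
new_prompt = {"continuation_device": "phone-201", "label": "Cell phone: view details"}
replaced = update_shortcuts(current, new_prompt, replace=True)
added = update_shortcuts(current, new_prompt, replace=False)
```

Whether a follow-up prompt replaces or augments the shortcut list is a UI policy choice; the patent presents both as alternative embodiments.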
The following describes a content sharing process provided by an embodiment of the present application with reference to the GUIs of fig. 30 to 36. The background art to which this application relates is first described in further detail:
The GUI shown in fig. 30 introduces the process by which a terminal device takes and shares a picture:
1. The user opens the camera application on the terminal to take a picture.
The GUI shown in fig. 30 (a) is a main interface 3010 displayed on a terminal such as a smartphone. The interface 3010 displays a page on which application icons are placed, including a plurality of icons (for example, a clock application icon, a calendar application icon, a gallery application icon, a memo application icon, a file management application icon, a browser application icon, a camera application icon, and the like). A page indicator 3011 is also included below the application icons to indicate the positional relationship of the currently displayed page with other pages. A plurality of tray icons are arranged below the page indicator, and the tray icons remain displayed when the page is switched. In some embodiments, the page may include a plurality of application icons and a page indicator; the page indicator may not be part of the page and may exist separately, and the application icons are also optional, which is not limited in this embodiment of the present application. A status bar 3012 is displayed in the upper area of the interface 3010, and the status bar 3012 may include one or more signal strength indicators for mobile communication signals (also known as cellular signals), a battery status indicator, a time indicator, and the like. When the terminal enables the Bluetooth function, a Bluetooth indicator may also be displayed in the status bar 3012. The user may operate the terminal through preset gestures, such as returning to the main interface or displaying the opened applications; the specific operation gestures are not limited in this application. The terminal is not limited to gesture operation and may also perform the related operations through virtual keys or physical keys. The user clicks the camera application icon 3013 to enter the camera application interface.
The GUI shown in fig. 30 (b) is an interface 3020 of the camera application. The interface 3020 includes a preview area 3021 for the photographic subject, a shooting button 3022, a thumbnail button 3023 for the latest photo in the gallery, a front/rear camera switching button 3024, and the like. The camera application interface 3020 further includes a camera focus adjustment button 3025; the user can adjust the focus of the camera by dragging the button 3025.
The user can click the shooting button 3022 to take a picture. The terminal captures a picture in response to this operation and then displays the camera interface shown in fig. 30 (c), at which time a thumbnail of the picture that has just been taken is displayed in the thumbnail button 3023 of the latest gallery photo.
2. The user selects the most recently taken picture.
The user can click the thumbnail button 3023 of the latest gallery photo in fig. 30 (c), and the terminal displays the picture display interface 3040 shown in fig. 30 (d) in response to this operation.
As shown in fig. 30 (d), a picture 3041 and a menu 3042 are displayed in the picture display interface 3040. The menu 3042 includes a share button 3043, a favorites button 3044, an edit button 3045, a delete button 3046, and a more button 3047. The share button 3043 may be used to open the picture sharing interface. The favorites button 3044 may add the picture 3041 to a favorites folder. The edit button 3045 may be used to trigger editing functions such as rotating, cropping, adding filters, and blurring the picture 3041. The delete button 3046 may be used to trigger deletion of the picture 3041. The more button 3047 may be used to open more picture-related functions.
3. The user clicks the share button in the picture display interface to open the picture sharing interface.
The terminal may receive an input operation (e.g., a click) by the user on the share button 3043 in fig. 30 (d), and in response to the input operation, the terminal may display the file sharing interface 3050 illustrated in fig. 30 (e).
As shown in fig. 30 (e), the file sharing interface 3050 includes an area 3051, an area 3057, and an area 3059. Wherein:
The area 3051 may be used to display one or more pictures or videos in the gallery, which may include a picture or video selected by the user, such as the selected picture 3052. A mark 3054 may be displayed on the selected picture 3052 to indicate that the corresponding picture 3052 has been selected by the terminal. A control 3055 and a control 3056 are also displayed in the area 3051; they can be used to switch and update the pictures displayed in the area 3051. The pictures or video frames displayed in the area 3051 may be thumbnails.
The area 3057 can be used to display the nearby device options discovered by the terminal, as well as one or more user options. The user options correspond to the nearby devices discovered by the terminal. While searching for nearby devices, the terminal may display a search prompt 3058 in the area 3057.
One or more service options may be displayed in the area 3059. The applications corresponding to the service options can support sharing the picture selected by the user with a contact or a server, and the user can share data through the application corresponding to a service option: for example, sharing the selected picture with one or more WeChat contacts, or sharing the selected picture to the dynamic publishing platform (i.e., a server) of WeChat.
4. After the terminal finds nearby devices, the user can click a nearby device option in the picture sharing interface to share the selected picture with the nearby device.
As shown in fig. 30 (f), after finding a nearby device, the terminal may display the nearby device option or a user option corresponding to the nearby device, such as the user option 3061, in the area 3057.
The terminal may receive an input (e.g., a click) from the user on the user option 3061; in response, the terminal may establish a communication connection with the device corresponding to the user option 3061 and then transmit the selected picture to that device via the communication connection.
Referring to fig. 31, a system architecture diagram is provided for an embodiment of the present application. As shown in fig. 31, the system 31 includes a first device, mobile phone 310, and nearby second devices. The second devices include a mobile phone 311, a personal computer (PC) 312, and the like. Fig. 31 schematically shows some application scenarios of the embodiment of the present application, in which the mobile phone 311 and the personal computer (PC) 312 are on the right side of the mobile phone 310. The three devices in fig. 31 are in a near-field connection state: a lightning icon 313 and a lightning icon 314 respectively indicate that the mobile phone 310 and the mobile phone 311 are in a near-field connection state with the personal computer (PC) 312.
In the embodiment of the present application, the first device may be a mobile phone, a personal computer (PC), a tablet computer (PAD), or the like, and the second device may likewise be a mobile phone, a personal computer (PC), a tablet computer (PAD), or the like. Fig. 31 only exemplarily illustrates the presence of two second devices around the first device, which should not be construed as limiting.
The following describes an image sharing method according to an embodiment of the present application in further detail with reference to specific embodiments and the accompanying drawings. In the GUI embodiments exemplarily shown in fig. 32 (a) to (h), the user may open the camera application on the mobile phone 310, click the shooting button, and send a thumbnail of the captured picture to other devices that are in a near-field connection state with the mobile phone 310 and are logged in to the same ID.
Fig. 32 (a) includes the mobile phone 310 and the personal computer (PC) 312, in which an interface 3210 of the camera application of the mobile phone 310 is exemplarily shown. The interface 3210 is the same as the interface 3020 shown in fig. 30 (b), so the above description of the interface 3020 in fig. 30 (b) also applies to the interface 3210 of the mobile phone 310 in fig. 32 (a) and is not repeated here. The user can click the shooting button 3211 to take a picture, at which time a thumbnail of the picture that has just been taken is displayed in the thumbnail button 3212 of the latest gallery photo.
After the mobile phone 310 takes the picture, it sends the thumbnail information of the picture to the other devices that are in a near-field connection state with the mobile phone 310 and logged in to the same ID, through a near-field communication mode (such as Bluetooth or Wi-Fi). If the personal computer (PC) 312, which is in a connected state and logged in to the same ID, detects an interaction event (e.g., a mouse movement), it displays a notification box with the photo thumbnail on the desktop, as shown in fig. 32 (a): a notification box 3216 appears in the lower right corner of the desktop interface 3215 of the personal computer (PC) 312, and a partially enlarged view 3218 of the notification box 3216 is also shown. A thumbnail of the latest photo just taken by the mobile phone 310 is displayed in the notification box 3216, and a download button 3217 is displayed in its upper right corner.
As shown in (b) of fig. 32, the Personal Computer (PC) 312 may receive a user's click operation on the thumbnail in the notification frame 3216, for example, the user places a cursor 3221 on the thumbnail in the notification frame 3216 using a mouse and performs a left-click operation; a partially enlarged view 3218 of the notification frame 3216 is shown. At this time, the Personal Computer (PC) 312 sends an original photograph request message to the mobile phone 310, and the mobile phone 310 sends the original photograph of the latest photograph just taken to the Personal Computer (PC) 312 in response to the original photograph request message. Having received the original photograph, the Personal Computer (PC) 312 calls the photo viewer to display the photograph just taken by the mobile phone 310, as shown in (c) of fig. 32.
In some embodiments, when the mobile phone 310 sends a thumbnail of a photo to the Personal Computer (PC) 312 after taking a picture, the Personal Computer (PC) 312 currently has a document editing program (e.g., a Word program) open and in an editing state. As shown in fig. 32 (d), the editing cursor 3241 is at a certain position in the Word document; when the user slides the mouse (without pressing the left or right mouse button), a thumbnail notification box 3242 appears near the editing cursor 3241. If the user clicks the thumbnail in the notification box 3242, the Personal Computer (PC) 312 sends an original photo request to the mobile phone 310. The mobile phone 310 sends the original photograph of the latest photograph just taken to the Personal Computer (PC) 312 in response to the original photo request message. Upon receiving the original photograph, the Personal Computer (PC) 312 inserts it into the document at the position of the editing cursor 3241. After the thumbnail notification box 3242 is displayed at the editing cursor 3241, if the user does not want to use the picture, the user may click the left mouse button at another position, and the thumbnail notification box 3242 disappears.
In some embodiments, the Personal Computer (PC) 312 detects a user click operation on the thumbnail in the thumbnail notification box and requests the original photo from the mobile phone 310. The mobile phone 310 sends the original photo to the Personal Computer (PC) 312 and, at the same time, sends a message to the mobile phone 311, which did not request the original photo, notifying it to delete the photo thumbnail; subsequently, the mobile phone 311 does not display a notification message of the photo thumbnail to the user even if an interaction event is detected. This makes picture transmission more targeted and avoids interference with other devices.
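The first-requester-wins behavior in this embodiment, where the original photo goes to the requesting device and the other notified devices are told to drop the thumbnail notification, could be dispatched roughly as follows; the message types and the `send` callback are illustrative assumptions.

```python
def handle_original_request(requester, original_bytes, notified_devices, send):
    """On the first original-photo request, send the original photo to the
    requester and a 'delete thumbnail notification' message to every other
    device that was notified, so those devices stop showing the thumbnail
    even if a later interaction event is detected. `send(device, message)`
    is a hypothetical near-field transport call."""
    for device in notified_devices:
        if device == requester:
            send(device, {"type": "original_photo", "data": original_bytes})
        else:
            send(device, {"type": "delete_thumbnail_notification"})
```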
In some embodiments, after the Personal Computer (PC) 312 displays the notification frame 3216, a drag operation performed by the user on the thumbnail in the notification frame 3216 may be received. As shown in fig. 32 (e), the user may place the mouse cursor on the thumbnail in the notification box 3216, press and hold the left mouse button, and drag the thumbnail to the desktop (or another folder) along the dotted-line trajectory 3251. When the user drags the thumbnail to the desktop and releases the left mouse button, the Personal Computer (PC) 312 sends an original photo request message to the mobile phone 310, and the mobile phone 310 sends the original photo of the latest photo just taken to the Personal Computer (PC) 312 in response to the original photo request message. The Personal Computer (PC) 312 displays the thumbnail information of the photograph and the name information 3261 of the photograph on the desktop interface 3215, as shown in (f) of fig. 32. The user can view the photo by double-clicking it or through similar operations.
In some embodiments, after the Personal Computer (PC) 312 displays the notification box 3216, the user may drag the thumbnail in the notification box 3216 to a target application, such as a document editing program. As shown in (g) in fig. 32, the Personal Computer (PC) 312 shows that the document editing program is open, and a thumbnail notification frame 3216 is displayed on the desktop. The user may place the mouse cursor over the thumbnail in the notification box 3216, press and hold the left mouse button, and drag the thumbnail along the dashed-line trajectory 3271 into the open document editing program. After the user drags the thumbnail to a suitable position in the document editing program and releases the left mouse button, the Personal Computer (PC) 312 sends an original photo request to the mobile phone 310. The mobile phone 310 sends the original photograph of the latest photograph just taken to the Personal Computer (PC) 312 in response to the original photo request message. The Personal Computer (PC) 312 inserts the original photograph into the document editing program. The user may subsequently perform further operations on the photograph.
In some embodiments, after the Personal Computer (PC) 312 displays the notification frame 3216, the user may click on a download button 3217 (shown in fig. 32 (a)) of the thumbnail displayed in the notification frame 3216, and at this time, the Personal Computer (PC) 312 sends the original photo request to the mobile phone 310. The cellular phone 310 transmits an original photograph of the latest photograph just taken to a Personal Computer (PC) 312 in response to the original photograph request message. The Personal Computer (PC) 312 will save the original photograph in a default folder or a folder designated by the user.
In some embodiments, after the Personal Computer (PC) 312 displays the notification frame 3216, if no user operation on the notification frame 3216 is received within a period of time (e.g., 10 s), the notification frame 3216 is automatically hidden. The user may then find the thumbnail notification of the photograph in the notification bar of the Personal Computer (PC) 312 and proceed with the above-described clicking, dragging, downloading, and other operations. As shown in fig. 32 (h), taking the Microsoft Windows operating system as an example, the user can set notifications that need not be displayed on the desktop to be hidden. A notification bar expansion button 3281 is provided in the bottom right corner of the desktop of the Personal Computer (PC) 312; clicking the notification bar expansion button 3281 causes a notification bar expansion window 3282 to appear, in which the currently hidden notification information is displayed. A partially enlarged view 3284 of the notification bar expansion window 3282 is shown. The user can find the thumbnail notification 3283 of the photo in the notification bar expansion window 3282 and continue with operations such as clicking and downloading.
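The auto-hide behavior described above, where the notification frame disappears after roughly 10 s without user action and then remains reachable from the notification bar, can be sketched with a simple timeout check; the class and method names here are hypothetical.

```python
import time

class NotificationBox:
    """Desktop notification that hides itself if the user does nothing
    within `timeout` seconds; after hiding, it would remain reachable from
    the notification-bar expansion window (illustrative sketch). The `now`
    parameter allows injecting a clock for testing."""
    def __init__(self, timeout=10.0, now=time.monotonic):
        self._now = now
        self.timeout = timeout
        self.shown_at = now()
        self.hidden = False

    def on_user_action(self):
        # Any click/drag on the box keeps it visible and restarts the timer.
        self.shown_at = self._now()

    def tick(self):
        """Called periodically by the UI loop; returns the hidden state."""
        if not self.hidden and self._now() - self.shown_at >= self.timeout:
            self.hidden = True  # moved to the notification bar
        return self.hidden
```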
The above (a) to (h) in fig. 32 describe the picture sharing method in which the terminal device shares a picture obtained by photographing; the embodiment of the present application is also applicable to picture sharing in operations such as scanning and screen capturing. The specific GUI embodiments are similar to those shown in fig. 32 and are not described here again.
In the GUI embodiments exemplarily shown in fig. 33 (a) to (j), the user may trigger the screen capture function of the mobile phone 310 to obtain a screen capture picture, and the mobile phone 310 sends a picture-to-be-received message to the other devices, where the other devices are in a near field connection state with the mobile phone 310 and log in the same ID. After receiving the picture-to-be-received message, an other device sends a thumbnail request message to the mobile phone 310 if a user interaction event is detected. The mobile phone 310, in response to the thumbnail request message, sends the thumbnail of the picture obtained by screen capture to the requesting device.
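The receiver side of this two-step flow, which holds the picture-to-be-received message silently and only requests the thumbnail once a user interaction event is detected, might look like the following sketch; the class and message layout are assumptions.

```python
class PictureReceiver:
    """Receiver-side sketch of the two-step flow in this embodiment: a
    'picture to be received' message is held without showing any UI, and
    only when a user interaction event (e.g., a mouse slide) is detected
    does the device send a thumbnail request back to the phone.
    `request(msg)` stands in for the near-field send call."""
    def __init__(self, request):
        self._request = request
        self._pending = []

    def on_picture_to_be_received(self, picture_id):
        self._pending.append(picture_id)  # no notification shown yet

    def on_interaction_event(self):
        # Request a thumbnail for every pending picture, then clear the queue.
        for picture_id in self._pending:
            self._request({"type": "thumbnail_request",
                           "picture_id": picture_id})
        requested, self._pending = self._pending, []
        return requested
```

Deferring the thumbnail request until an interaction event avoids pushing image data to a device nobody is using.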
As shown in (a) of fig. 33, the cell phone 310 displays a weather application interface 3310. The weather interface 3310 includes current location information, time information, and weather information.
As shown in (b) of fig. 33, the mobile phone 310 may receive a screen capture operation of the user (which may be a gesture operation, a physical key operation, or the like), and after the screen capture, display a thumbnail 3321 of the screen capture picture at the lower left of the terminal interface 3310. At this time, the mobile phone 310 sends a picture-to-be-received message, through a near field communication mode (e.g., Bluetooth or Wi-Fi), to other devices that are in a near field connection state with the mobile phone 310 and log in the same ID. After receiving the picture-to-be-received message, the Personal Computer (PC) 312 sends a thumbnail request message to the mobile phone 310 upon detecting a mouse sliding operation of the user. The mobile phone 310 sends the thumbnail information of the picture just obtained by screen capture to the Personal Computer (PC) 312 in response to the thumbnail request message.
As shown in fig. 33 (c), a notification box 3331 appears in the lower right corner of the desktop interface 3330 of the Personal Computer (PC) 312, and a partial enlarged view 3333 of the notification box 3331 is shown. The notification frame 3331 displays a thumbnail of the latest photo obtained by the mobile phone 310 just after screen capture, and the upper right corner of the notification frame 3331 displays a download button 3332.
As shown in (d) of fig. 33, the Personal Computer (PC) 312 may receive a user's click operation on the thumbnail in the notification box 3331, for example, the user places a cursor 3341 on the thumbnail in the notification box 3331 using a mouse and performs a left-click operation; a partially enlarged view 3333 of the notification box 3331 is shown. At this time, the Personal Computer (PC) 312 sends an original photograph request message to the mobile phone 310, and the mobile phone 310 sends the original photograph of the picture just obtained by screen capture to the Personal Computer (PC) 312 in response to the original photograph request message. The Personal Computer (PC) 312 calls the photo viewer to display the picture that the mobile phone 310 has just captured, as shown in fig. 33 (e).
In some embodiments, when the mobile phone 310 sends a thumbnail to the Personal Computer (PC) 312 in response to the thumbnail request message, the Personal Computer (PC) 312 currently has a document editing program (e.g., a Word program) open and in an editing state.
As shown in fig. 33 (f), when the editing cursor 3361 is at a certain position in the Word document and the user slides the mouse (without pressing the left or right mouse button), a thumbnail notification box 3362 appears near the editing cursor 3361. If the user clicks the thumbnail in the notification box 3362, the Personal Computer (PC) 312 sends an original photo request to the mobile phone 310. The mobile phone 310 sends the original photograph of the picture just obtained by screen capture to the Personal Computer (PC) 312 in response to the original photo request message. After receiving the original photograph, the Personal Computer (PC) 312 inserts it into the document at the position of the editing cursor 3361. After the thumbnail notification box 3362 is displayed at the editing cursor 3361, if the user does not want to use the picture, the user can click the left mouse button at another position, and the thumbnail notification box 3362 disappears.
In some embodiments, the Personal Computer (PC) 312 detects a user click on the thumbnail in the thumbnail notification box and requests the original photo from the mobile phone 310. The mobile phone 310 sends the original photo to the Personal Computer (PC) 312 and, at the same time, sends a message to the mobile phone 311, which did not request the original photo, notifying it to delete the photo thumbnail; subsequently, the mobile phone 311 does not display a notification message of the photo thumbnail to the user even if an interaction event is detected. This makes picture transmission more targeted and avoids interference with other devices.
In some embodiments, after the Personal Computer (PC) 312 displays the notification frame 3331, a drag operation performed by the user on the thumbnail in the notification frame 3331 may be received. As shown in fig. 33 (g), the user may place the mouse cursor on the thumbnail in the notification box 3331, press and hold the left mouse button, and drag the thumbnail to the desktop (or another folder) along the dotted-line trajectory 3361. When the user drags the thumbnail to the desktop and releases the left mouse button, the Personal Computer (PC) 312 sends an original photo request message to the mobile phone 310, and the mobile phone 310 sends the original photo of the picture just obtained by screen capture to the Personal Computer (PC) 312 in response to the original photo request message. The Personal Computer (PC) 312 displays a thumbnail of the picture and the name information 3371 of the picture on the desktop interface 3330, as shown in (h) of fig. 33. The user can view the photo by double-clicking it or through similar operations.
In some embodiments, after the Personal Computer (PC) 312 displays the notification box 3331, the user can drag the thumbnail in the notification box 3331 to a target application, such as a document editing program. As shown in (i) in fig. 33, the Personal Computer (PC) 312 shows that the document editing program is open, and a thumbnail notification frame 3331 is displayed on the desktop. The user may place the mouse cursor over the thumbnail in the notification box 3331, press and hold the left mouse button, and drag the thumbnail along the dashed-line trajectory 3381 into the open document editing program. After the user drags the thumbnail to a suitable position in the document editing program and releases the left mouse button, the Personal Computer (PC) 312 sends an original photo request to the mobile phone 310. The mobile phone 310, in response to the original photo request message, sends the original photograph of the picture just obtained by screen capture to the Personal Computer (PC) 312. The Personal Computer (PC) 312 inserts the original photograph into the document editing program. The user may subsequently perform further operations on the photograph.
In some embodiments, after the Personal Computer (PC) 312 displays the notification box 3331, the user can click on the download button 3332 (as shown in fig. 33 (c)) of the thumbnail displayed in the notification box 3331, and the Personal Computer (PC) 312 sends the original photo request to the mobile phone 310. The cellular phone 310, in response to the original photograph request message, sends the original photograph of the latest photograph just captured on screen to a Personal Computer (PC) 312. The Personal Computer (PC) 312 will save the original photograph in a default folder or a folder designated by the user.
In some embodiments, after the notification box 3331 is displayed on the Personal Computer (PC) 312, the notification box is automatically hidden if the user performs no operation on the thumbnail notification box 3331 within a period of time (e.g., 10 s). The user may subsequently find the photo thumbnail notification in the notification bar of the Personal Computer (PC) 312 and proceed with the above-described clicking, dragging, downloading, and other operations. As shown in (j) of fig. 33, taking Microsoft Windows as an example, the user can set notifications that need not be displayed on the desktop to be hidden. A notification bar expansion button 3391 is provided at the bottom right corner of the desktop of the Personal Computer (PC) 312; clicking the notification bar expansion button 3391 causes a notification bar expansion window 3392 to appear, in which the currently hidden notification information is displayed. A partially enlarged view 3394 of the notification bar expansion window 3392 is shown. The user can find the thumbnail 3393 of the photo in the notification bar expansion window 3392 and continue with operations such as clicking and downloading.
The above (a) to (j) in fig. 33 describe the picture sharing method in which the terminal device shares a screen capture picture; the embodiment of the present application is also applicable to picture sharing in operations such as photographing and scanning. The specific GUI embodiments are similar to those shown in fig. 33 and are not described here again.
In the GUI embodiments exemplarily shown in (a) to (i) in fig. 34, the user may invoke the camera function of the mobile phone 310 to take a plurality of pictures in succession over a period of time, and each time the camera takes a picture, the mobile phone 310 sends a picture-to-be-received message to the other devices, where the other devices are in a near field connection state with the mobile phone 310 and log in the same ID. After receiving the picture-to-be-received messages of the multiple photos, an other device sends multiple thumbnail request messages to the mobile phone 310 if a user interaction event is detected. The mobile phone 310, in response to the multiple thumbnail request messages, sends the thumbnail of the last photo of the multiple photos taken and the total number of photos to the requesting device.
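The coalescing behavior for burst shooting, where the phone replies with the thumbnail of the last photo plus the total count rather than one notification per photo, can be sketched as below; the message format is an assumption.

```python
def respond_to_thumbnail_requests(taken_photos, request_count):
    """When several photos were taken in a row and the receiving device has
    sent one thumbnail request per picture-to-be-received message, reply
    once with the thumbnail of the *last* photo plus the total number of
    photos, which the receiver renders as a single notification box with a
    photo-count badge (sketch; the message fields are illustrative)."""
    if not taken_photos or request_count == 0:
        return None
    last = taken_photos[-1]
    return {
        "type": "thumbnail_batch",
        "thumbnail": last["thumbnail"],
        "total": len(taken_photos),
    }
```

Collapsing the burst into one notification keeps the receiver's desktop from filling with one box per photo.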
Fig. 34 (a) includes a mobile phone 310 and a Personal Computer (PC) 312, in which an interface 3410 of a camera application of the mobile phone 310 is exemplarily shown. The interface 3410 is the same as the interface 3220 shown in fig. 32 (b), so the above description of the interface 3220 in fig. 32 (b) is also applicable to the interface 3410 of the mobile phone 310 in fig. 34 (a) and is not repeated here. The mobile phone 310 can take pictures consecutively in response to operations (e.g., single clicks) of the user on the shooting button 3411; the thumbnail button 3412 of the latest picture in the gallery then shows a thumbnail of the last picture of the plurality of pictures just taken in succession.
After the mobile phone 310 takes the pictures, the mobile phone 310 sends a picture-to-be-received message, through a near field communication mode (such as Bluetooth or Wi-Fi), to other devices that are in a near field connection state with the mobile phone 310 and log in the same ID. After receiving the picture-to-be-received messages of the multiple photos, the Personal Computer (PC) 312 sends multiple thumbnail request messages to the mobile phone 310 upon detecting a mouse sliding operation of the user. The mobile phone 310 sends the thumbnail of the last photo of the plurality of photos just taken and the total number of photos to the Personal Computer (PC) 312 in response to the multiple thumbnail request messages. As shown in fig. 34 (a), a notification box 3416 appears in the lower right corner of the desktop of the Personal Computer (PC) 312, and a partially enlarged view 3419 of the notification box 3416 is shown. A thumbnail of the last photo of the plurality of photos that the mobile phone 310 has just taken in succession is displayed in the notification frame 3416 together with a photo-count icon 3418 in the upper right corner, and a download button 3417 is also displayed in the upper right corner of the notification frame 3416.
As shown in (b) of fig. 34, the Personal Computer (PC) 312 may receive a user's click operation on the thumbnail in the notification frame 3416, for example, the user places the cursor 3421 on the thumbnail in the notification frame 3416 using a mouse and performs a left-click operation; a partially enlarged view 3419 of the notification frame 3416 is shown. At this time, the Personal Computer (PC) 312 sends a request message for the thumbnails of the pictures other than the last picture to the mobile phone 310, and the mobile phone 310 sends the thumbnails of the pictures other than the last picture to the Personal Computer (PC) 312 in response to the request message.
As shown in fig. 34 (c), after receiving the thumbnails of the pictures other than the last picture, the Personal Computer (PC) 312 expands and displays the multiple thumbnails 3431 together with the previously received thumbnail of the last picture. The icon 3432 is a progress bar, and the user can drag the progress bar left and right to view the thumbnails that are not displayed. The user can click one of the thumbnails; for example, the user can use a mouse to place the cursor 3433 on a thumbnail and perform a left-click operation. In response to the user clicking a thumbnail, the Personal Computer (PC) 312 sends an original photo request corresponding to that thumbnail to the mobile phone 310.
The mobile phone 310 receives the original photo request and sends the original photo to the Personal Computer (PC) 312. The Personal Computer (PC) 312, upon receiving the original photo, invokes the photo viewer to open it. As shown in (d) in fig. 34, the thumbnail of the photo that was just clicked has been deleted from the expanded thumbnail notification box 3431, and the Personal Computer (PC) 312 opens the folder in which the original photo is saved; the thumbnail of the photo and the photo file name icon 3441 are shown in the folder.
In some embodiments, after the Personal Computer (PC) 312 expands the thumbnails of the received photos, the user may drag one of the thumbnails into a destination folder. As shown in fig. 34 (e), the user may place the mouse cursor on a thumbnail, press and hold the left mouse button, and drag the thumbnail to the desktop (or another folder) along the dashed-line trajectory 3451. When the user drags the thumbnail to the desktop and releases the left mouse button, the Personal Computer (PC) 312 sends an original photo request to the mobile phone 310, and the mobile phone 310 sends the original photo corresponding to the thumbnail to the Personal Computer (PC) 312 in response to the original photo request message. The Personal Computer (PC) 312 displays the thumbnail information and the file name information 3461 of the photo on the desktop; as shown in (f) of fig. 34, the thumbnail of the photo that was just dragged is deleted from the expanded thumbnail notification box 3431. The user can view the photo by double-clicking it or through similar operations.
In some embodiments, after the Personal Computer (PC) 312 expands the thumbnails of the received photos, the user can drag one of the thumbnails to a target application, such as a document editing program. As shown in (g) in fig. 34, the Personal Computer (PC) 312 shows that the document editing program is open; the user can place the mouse cursor 3472 on a thumbnail, press and hold the left mouse button, and drag the thumbnail along the dashed-line trajectory 3471 into the document editing program. After the user drags the thumbnail to a specific position in the document editing program and releases the left mouse button, the Personal Computer (PC) 312 sends an original photo request to the mobile phone 310. The mobile phone 310, in response to the original photo request message, sends the original photograph corresponding to the thumbnail to the Personal Computer (PC) 312. The Personal Computer (PC) 312 displays the original photograph in the document editing program. The user may subsequently perform further operations on the photograph.
In some embodiments, after the Personal Computer (PC) 312 expands the thumbnails of the received photos, the user can click on the download button in the upper right corner of one of the thumbnails. As shown in (h) of fig. 34, a partial enlarged view 3483 of one of the thumbnails is shown, and a download button 3482 is provided in the upper right corner of the thumbnail. The user may click on the download button 3482 in the upper right corner of one of the thumbnails using a mouse cursor, at which point the Personal Computer (PC) 312 sends an original photograph request to the cell phone 310. The cellular phone 310, in response to the original photograph request message, sends the corresponding original photograph to a Personal Computer (PC) 312. The Personal Computer (PC) 312 will save the original photograph in a default folder or a folder designated by the user.
In some embodiments, after the Personal Computer (PC) 312 displays on the desktop the notification frame 3416 containing the thumbnail of the last photo of the plurality of photos and the total number of photos, the user can directly drag the thumbnail in the notification frame 3416. As shown in fig. 34 (i), the user may place the mouse cursor 3421 on the thumbnail in the notification box 3416, press and hold the left mouse button, and drag the thumbnail to the desktop (or another folder) along the dashed-line trajectory 3491; at this time, the Personal Computer (PC) 312 sends an original photo request for all the photos to the mobile phone 310. The mobile phone 310 sends the multiple original photos to the Personal Computer (PC) 312 in response to the original photo request message for all the photos. The Personal Computer (PC) 312 automatically creates a folder on the desktop to store the received original photos.
In some embodiments, after the Personal Computer (PC) 312 displays on the desktop the notification frame 3416 containing the thumbnail of the last photo of the plurality of photos and the total number of photos, the user may directly click the download button 3417 in the upper right corner of the notification frame 3416 (as shown in fig. 34 (a)) using the mouse cursor; at this time, the Personal Computer (PC) 312 sends an original photo request for all the photos to the mobile phone 310. The mobile phone 310 sends the multiple original photos to the Personal Computer (PC) 312 in response to the original photo request message for all the photos. The Personal Computer (PC) 312 may store the received original photos in a designated folder or a default folder.
The embodiment of fig. 34 is illustrated with photographing as an example; the successive transmission of multiple pictures is also applicable to application scenarios such as screen capture and scanning. The operations are similar to those in the above embodiments and are not further described here.
The embodiments of the present application are described by taking as an example the mobile phone 310, the mobile phone 311, and the Personal Computer (PC) 312 logging in the same ID; the embodiments of the present application also support image transmission between devices that do not log in the same ID (for example, the same Huawei account ID). The mobile phone 310 may be configured with a picture sharing device list for the picture transmission in the embodiment of the present application; for example, the ID of the mobile phone 311 and the ID of the Personal Computer (PC) 312 are set in the picture sharing device list of the mobile phone 310. Fig. 35 shows another GUI provided in this embodiment of the present application. The GUI is a picture sharing device list setting interface 3510 of the mobile phone 310, and the interface 3510 includes a return button 3511, picture sharing target device ID information 3512, picture sharing target device ID information 3513, and a picture sharing device add button 3514. The return button 3511 may trigger a return to the upper-level setting menu interface, and the picture sharing device add button 3514 may trigger the addition of a picture sharing device. The media library monitoring module in the mobile phone 310 may obtain the picture sharing device list information through an interface provided by a settings module in the mobile phone. Thus, when the mobile phone 311 and the Personal Computer (PC) 312 are in a near field connection with the mobile phone 310, the mobile phone 310 can share pictures obtained by photographing, scanning, and screen capturing with the mobile phone 311 and the Personal Computer (PC) 312. In this way, the user can customize the picture sharing device list and share pictures with designated devices, which improves the user experience.
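The configurable picture sharing device list described above could be modeled roughly as follows, with the media library monitoring module filtering near-field connected devices against the user-maintained list; the class and method names are illustrative, not the actual module interfaces.

```python
class PictureShareSettings:
    """Sketch of the user-configurable picture sharing device list: the
    media library monitoring module asks the settings module for the list
    and shares newly obtained pictures only with devices on it, whether or
    not they log in the same account ID."""
    def __init__(self):
        self._device_ids = []

    def add_device(self, device_id):
        # Corresponds to the picture sharing device add button in fig. 35.
        if device_id not in self._device_ids:
            self._device_ids.append(device_id)

    def sharing_targets(self, near_field_connected):
        """Return the connected devices that the user put on the list."""
        return [d for d in near_field_connected if d in self._device_ids]
```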
The embodiment of the application may also include other implementations combined with picture recognition technologies: after the first device obtains a picture, key information such as a mailbox address and a website link can be extracted using picture recognition technologies, and the key information is then shared with the second device. Fig. 36 shows another GUI set provided in the embodiment of the present application. The first device scans a poster with a scanning program to obtain a picture; as shown in (a) in fig. 36, the picture includes a portrait, a mailbox address, and a personal blog address, where the mailbox address is joanna@163.com and the personal blog address is www. After the first device obtains the picture, the mailbox address and the personal blog address are extracted using a picture recognition technology, and the thumbnail of the picture, the mailbox address text, and the personal blog address text are packaged into a message and sent to the second device. After the second device detects an interaction event, a notification frame is displayed to the user; for example, as shown in fig. 36 (b), a thumbnail notification frame 3611 is displayed in the lower right corner of a desktop interface 3610 of the Personal Computer (PC) 312. Fig. 36 (b) shows a partially enlarged view 3616 of the thumbnail notification frame 3611; the thumbnail notification frame 3611 contains a thumbnail 3615, a personal blog address card 3613, a mailbox address card 3614, and a download button 3612. The user can click the personal blog address card 3613 to conveniently access the personal blog, and can click the mailbox address card 3614 to send mail and perform other operations. In this way, the key information in the picture is extracted and sent to the other devices that need it, which facilitates operations on those devices and can enhance the user experience.
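The key-information extraction step, which pulls mailbox addresses and web links out of the recognized picture text and bundles them with the thumbnail into one message, can be sketched as follows. The regular expressions and message layout are illustrative assumptions; the actual picture recognition technology is not specified here.

```python
import re

# Deliberately simple patterns for illustration; real extraction would be
# more robust.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
URL_RE = re.compile(r"(?:https?://|www\.)[\w./-]+")

def package_picture_message(recognized_text, thumbnail_bytes):
    """Extract key information (mailbox addresses, web links) from the text
    produced by the image-recognition step, and bundle it with the picture
    thumbnail into the message sent to the second device, which renders the
    items as clickable cards."""
    emails = EMAIL_RE.findall(recognized_text)
    links = [u for u in URL_RE.findall(recognized_text) if "@" not in u]
    return {
        "type": "photo_thumbnail",
        "thumbnail": thumbnail_bytes.hex(),
        "emails": emails,
        "links": links,
    }
```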
FIG. 37 illustrates another set of GUIs provided by embodiments of the present application.
The GUI shown in (a) of fig. 37 shows a lock screen interface 370. The lock screen interface 370 includes a status bar 3701 and a notification bar 3702.
The status bar 3701 may include: one or more signal strength indicators 3705 for mobile communication signals (which may also be referred to as cellular signals), one or more signal strength indicators 3706 for Wi-Fi signals, a Bluetooth indicator 3707, and a battery status indicator 3708. When the Bluetooth module of the electronic device is in an on state (i.e., the electronic device is powering the Bluetooth module), the Bluetooth indicator 3707 is displayed on the display interface of the electronic device.
The notification bar 3702 displays a notification provided by an embodiment of the present application. The notification content describes the source of the notification (e.g., Huawei Video) and the brief content of the notification (a new video has come online, for example, episode 9 of season 2 of TV series 1). The notification in the notification bar 3702 also includes a control 3703 and a control 3704; illustratively, the control 3703 is "view now" and the control 3704 is "view on selected device". When the electronic device detects a user operation on the control 3703, the operation described by the control 3703 is executed, that is, after the electronic device is unlocked, the electronic device immediately jumps to the video playing interface and immediately plays episode 9 of season 2 of TV series 1.
As shown in (b) in fig. 37, when the electronic device detects a user operation on the control 3704, the operation described by the control 3704 is performed and the electronic device displays a window 37041. The window 37041 includes selectable devices, such as a living room TV, a bedroom TV, a stereo, and a computer. The user selects a device for playing the video in the window 37041, such as the living room TV; the living room TV then jumps to a video playing interface and immediately plays episode 9 of season 2 of TV series 1. The devices in the window 37041 may be arranged by priority, by signal strength, or randomly, which is not limited in this application.
The GUI shown in (c) in fig. 37 shows a user interface 371 after unlocking. A plurality of application icons, such as settings, mail, video, and gallery, are displayed in the user interface 371. A notification bar 3702 is displayed at the top of the user interface 371, and the electronic device displays a notification provided by an embodiment of the application in the notification bar 3702. When the electronic device detects a user operation on the control 3703, the operation described by the control 3703 is performed, i.e., the electronic device immediately jumps to the video playing interface.
When the electronic device detects a user operation on the control 3704, as shown in (d) in fig. 37, the operation described by the control 3704 is performed and the electronic device displays a window 37041. In (d) in fig. 37, the window 37041 includes selectable devices, such as a living room TV, a bedroom TV, a stereo, and a computer; the user selects a device for playing the video in the window 37041, and that device jumps to a video playing interface.
In some embodiments, the notification in notification bar 3702 may also include different controls in lock screen interface 370. FIG. 38 illustrates another set of GUIs provided by an embodiment of the present application.
As shown in (a) in fig. 38, the notification in the notification bar 3702 may also include a control 3705. In contrast to the control 3704 displayed in (a) in fig. 37, the control 3705 recommends only one device to the user, e.g., "view on living room TV". When the electronic device detects a user operation on the control 3705, the operation described by the control 3705 is performed: a request message is sent to the living room TV, and the living room TV plays the relevant content in the notification. The device indicated in the control 3705 may be the device with the highest priority, the device closest to the electronic device, or the device with the highest resolution (the clearest image quality) among the device attributes, so that the user does not need to select among multiple devices, which improves the user experience.
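One way to realize such a single-device recommendation is to rank the candidate devices by the attributes the text lists: priority first, then distance to the electronic device, then screen resolution. The ranking function below is a sketch under those assumptions; the field names and sample values are invented for illustration:

```python
def recommend_device(devices):
    """Recommend one device: lowest priority number wins; ties are broken by
    shortest distance to the user's device, then by highest resolution."""
    online = [d for d in devices if d["online"]]
    if not online:
        return None
    best = min(online,
               key=lambda d: (d["priority"], d["distance_m"], -d["resolution_px"]))
    return best["name"]
```

A single recommended control (like control 3705) would then be labeled with the returned device name.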
Similarly, as shown in fig. 38 (b), in the unlocked user interface 371, the notification in the notification bar 3702 further includes a control 3705. When the electronic device detects a user operation for control 3705, a request message is sent to the living room TV, which plays the relevant content in the notification.
Illustratively, as shown in fig. 39, if the user selects the living room TV in the window 37041, or the user triggers the control 3705, the electronic device sends a request message to the living room TV in response to the user operation, and the living room TV plays the relevant content in the notification, i.e., episode 9 of season 2 of "TV series 1", in response to the request message.
A control bar 3901 may also be included in fig. 39. The control bar 3901 includes a rewind control 3902, a play control 3903, a fast-forward control 3904, a progress bar 3905, and a time progress value 3906. The progress bar 3905 indicates the video playing progress: the longer the progress bar 3905, the larger the time progress value 3906. The rewind control 3902 is used to move the playing progress backward; when the living room TV detects a user operation triggering the rewind control 3902, the length of the progress bar 3905 is shortened. The fast-forward control 3904 is used to move the playing progress forward; when the living room TV detects a user operation triggering the fast-forward control 3904, the length of the progress bar 3905 is increased. The play control 3903 is used to start and stop the playing of the video.
Optionally, the notification in notification bar 3702 may also include controls such as "view later" or "view later on the living room TV". When the electronic device detects a user operation on the "view later" or "view later on the living room TV" control, the electronic device outputs the notification on the display screen again after a preset time. Optionally, if the electronic device recognizes that it has no network connection with the living room TV, it may display a "view later on the living room TV" control to the user in the notification; when the electronic device detects a user operation on that control and subsequently establishes a network connection with the living room TV, it sends a request message to the living room TV, and the living room TV plays the relevant content in the notification.
Optionally, controls may be associated with time information; for example, the notification in notification bar 3702 may also include controls such as "view after five minutes" or "view on living room TV after five minutes". When the electronic device detects a user operation on the "view after five minutes" control, the electronic device plays the relevant content in the notification five minutes later. When the electronic device detects a user operation on the "view on living room TV after five minutes" control, the electronic device sends a request message to the living room TV five minutes later, and the living room TV plays the relevant content in the notification; optionally, when the electronic device detects a user operation on the "view on living room TV after five minutes" control, the electronic device sends a request message to the living room TV immediately, and the living room TV plays the relevant content in the notification five minutes later.
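The two optional semantics of a time-associated control — delay sending the request message, or send it immediately and let the target device delay playback — could be modeled as follows. The control labels and five-minute delay follow the text; the function and its return shape are a hypothetical sketch:

```python
from datetime import datetime, timedelta

def plan_view_later(control, now, delay=timedelta(minutes=5), delay_on_target=False):
    """Return (target, send_request_at, play_at) for a time-associated control."""
    if control == "view after five minutes":
        # local playback, delayed on this device
        return ("self", now, now + delay)
    if control == "living room TV view after five minutes":
        if delay_on_target:
            # variant B: send the request now; the TV delays playback
            return ("living room TV", now, now + delay)
        # variant A: delay sending the request message itself
        return ("living room TV", now + delay, now + delay)
    raise ValueError(f"unknown control: {control}")
```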
In the user interface 371 shown in fig. 37 and fig. 38, if the user does not process the notification in a timely manner, the notification bar 3702 may automatically disappear or hide after staying in the user interface 371 for a short time without user interaction. When the user wants to process the notification again, the user can enter the notification center interface to view the unprocessed notification.
FIG. 40 illustrates another set of GUIs provided by an embodiment of the present application.
The GUI shown in (a) in fig. 40 shows a notification center interface including an area 4001 and an area 4002. The area 4001 displays a plurality of switch controls, such as Bluetooth, flashlight, and airplane mode. The area 4002 displays a number of notification bars; illustratively, the notification bar 4003 includes the source of the notification (video software), the time of the notification (2 minutes ago), and the brief content of the notification (episode 9 of season 2 of TV series 1 is online). The notification bar 4003 further includes an icon 40031; when the electronic device detects a user operation on the icon 40031, it expands the notification bar 4003 and displays one or more controls.
Illustratively, as shown in (b) in fig. 40, a "view now" control 40032, a "view on living room TV" control 40033, and a "select device to view" control 40034 are included. For the description of the controls 40032 to 40034, refer to the related descriptions of the controls 3703 to 3705. The icon 40031 is optional, and the notification bar 4003 may directly display all the controls, as shown in (b) in fig. 40, without triggering the icon 40031. The device indicated by the control 40033 may be the device with the highest priority, the device closest to the electronic device, or the device with the highest resolution (the clearest image quality) among the device attributes, so that the user does not need to select among multiple devices, which improves the user experience. The control 40034 is associated with one or more selectable electronic devices that the user can freely choose from, which also enhances the user experience.
In the notification center interface, when the electronic device detects a user operation on the control 40033, the operation described by the control 40033 is performed, that is, the electronic device sends a request message to the living room TV, and the living room TV plays the related content in the notification, i.e., episode 9 of season 2 of TV series 1, in response to the request message. At this point, the notification bar 4003 does not disappear, and the user can continue to process the notification on the electronic device. As shown in (c) in fig. 40, the notification bar 4003 displays the status information "viewing on living room TV", indicating that the notification in the notification bar 4003 is currently being processed on the living room TV.
When the electronic device detects a user operation directed to the icon 40031, the notification bar 4003 is expanded to display one or more controls.
Illustratively, as shown in (d) in fig. 40, a "stop viewing on living room TV" control 4101 and a "switch device to view" control 4102 are included. When the electronic device detects a user operation on the control 4101, the operation described by the control 4101 is performed, i.e., the electronic device sends a request message to the living room TV, and the living room TV stops playing the relevant content in the notification.
As shown in (e) in fig. 40, when the electronic device detects a user operation on the control 4102, the operation described by the control 4102 is performed, and the electronic device displays a list of switchable devices, such as a computer and a pad. As shown in (f) in fig. 40, the electronic device displays a "switch to pad" control 4103 and a "switch to computer" control 4104. The user selects a device to switch to from the list; for example, when the electronic device detects a user operation on the control 4104 and executes the operation described by the control 4104, the electronic device sends a request message to the living room TV, the living room TV stops playing the related content in the notification, the electronic device sends a request message to the computer, and the computer starts playing the related content in the notification.
In some embodiments, (c) in fig. 40 is optional, and in the notification center interface shown in (b) in fig. 40, when the electronic device detects a user operation for processing a notification in the notification bar 4003, the interface shown in (d) in fig. 40 is displayed.
In some embodiments, in the notification center interface shown in fig. 40 (d) and 40 (e), when the electronic device detects a user operation on the control 40031, the interface shown in fig. 40 (f) is displayed.
In some embodiments, (c) in fig. 40, (d) in fig. 40, and (e) in fig. 40 are optional, and in the notification center interface shown in (b) in fig. 40, when the electronic device detects a user operation for processing a notification in the notification bar 4003, an interface shown in (f) in fig. 40 is displayed.
In some embodiments, the controls in the notification may change as the device state changes. The electronic device outputs notification information that includes a control; the control is associated with a target device within the communication range of the electronic device and is used to trigger the target device to execute the task in the notification information. If, before the electronic device detects a user operation on the control, the electronic device detects that the target device is no longer within its communication range (for example, the target device is powered off or disconnected), the electronic device changes the control in the notification information so that the control is associated with another device and triggers that device to execute the task in the notification information.
FIG. 41 illustrates another set of GUIs provided by an embodiment of the present application.
For example, as shown in (a) in fig. 41, the notification bar 4003 outputs notification information at time T1 (08:08). For the description of (a) in fig. 41, refer to the related description of (b) in fig. 40, which is not repeated here. As shown in (b) in fig. 41, when the electronic device detects a user operation on the "select device to view" control 40034, the notification bar 4003 expands to display a "view on pad" control 40035 and a "view on computer" control 40036, prompting the user to choose between the pad and the computer.
As shown in (c) in fig. 41, at time T2 (08:18), when the electronic device detects that the pad is not within its communication range, the electronic device updates the notification bar 4003; at this time, the control 40032, the control 40033, and the control 40034 are included in the notification bar 4003. When the electronic device detects a user operation on the "select device to view" control 40034, as shown in (d) in fig. 41, the notification bar 4003 expands to display only the "view on computer" control 40036, prompting the user to select the computer.
As shown in (e) in fig. 41, at time T3 (08:32), when the electronic device detects that the pad is again within its communication range, the electronic device updates the notification bar 4003; at this time, the control 40032, the control 40033, and the control 40034 are included in the notification bar 4003. When the electronic device detects a user operation on the "select device to view" control 40034, as shown in (f) in fig. 41, the notification bar 4003 expands to display the "view on pad" control 40035 and the "view on computer" control 40036, prompting the user to choose between the pad and the computer again.
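The control rebuilding shown across (a) to (f) in fig. 41 can be sketched as recomputing the notification's controls from the set of currently reachable devices. The control labels follow the figures; the function itself and its data shapes are hypothetical:

```python
def build_controls(candidate_devices, reachable):
    """Rebuild a notification's controls so that every device control refers
    to a device currently within communication range."""
    controls = ["view now"]
    # single recommended device: the first reachable candidate, if any
    recommended = next((d for d in candidate_devices if d in reachable), None)
    if recommended:
        controls.append(f"view on {recommended}")
    # "select device to view" expands to the reachable candidates only
    selectable = [d for d in candidate_devices if d in reachable]
    if selectable:
        controls.append(("select device to view", selectable))
    return controls
```

Re-running this whenever a device joins or leaves communication range reproduces the updates at T2 and T3.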
In some embodiments, a specific application scenario in the embodiments of the present application is described below by taking a smart watch as an example. FIG. 42 illustrates another GUI provided by embodiments of the present application.
As shown in fig. 42, the computer 204 is in a power-off state (no network connection) or a standby state, and the smart watch 202 does not have the function or authority to view mail. When the smart watch 202 receives a notification message from the mail application, since the computer is in the power-off or standby state at this time, a prompt message 420 is displayed in the display interface of the smart watch 202. The prompt message 420 includes a new-mail notification reminder and a control 4201, where the control 4201 is "view later on the computer". The smart watch 202 detects a user operation on the control 4201 and performs the operation described by the control 4201: when the smart watch 202 detects that the computer is in an on state, it sends an execution message to the computer; after the computer receives the execution message, it pops up an email prompt box or directly opens the corresponding email.
FIG. 43 illustrates another GUI provided by embodiments of the present application.
The GUI shown in (a) in fig. 43 is the user interface after the computer receives the execution message, and includes a mail prompt box 4301. When the computer detects a user operation on the prompt box 4301, the corresponding e-mail is opened. Referring to (b) in fig. 43, (b) in fig. 43 exemplarily shows the application interface of the opened email. Optionally, after receiving the execution message, the computer directly opens the corresponding email, that is, it displays the application interface shown in (b) in fig. 43.
Optionally, before the smart watch 202 detects the user operation on the control 4201, the smart watch 202 continues to monitor the peripheral devices; when the smart watch 202 subsequently detects that the computer is in an on state, the prompt message is output again to prompt the user to check the new mail. Optionally, the control in the prompt message may be "view on the computer" or "view later on the computer".
FIG. 44 illustrates another GUI provided by embodiments of the present application.
Referring to fig. 44 (a), the user is wearing a watch and is navigating using a cell phone. In response to the cell phone displaying the navigation interface, the cell phone may query surrounding wearable devices. If the cell phone determines that the user is wearing the watch, the cell phone may send information of the navigation interface to the watch.
In one embodiment, the cell phone and the wearable device may be devices under the same account.
Referring to (b) in fig. 44, after the watch receives the information of the navigation interface sent by the mobile phone, the watch may display the mobile phone's navigation interface, so that the user can view the navigation information without holding the mobile phone all the time.
The GUI provided by the embodiment of the present application is described above with reference to the accompanying drawings, and the following describes a specific interaction process between devices and an implementation process of the devices in the embodiment of the present application with reference to the accompanying drawings.
Fig. 45 and 46 show the implementation of the GUI shown in fig. 5 to 12 and the specific interaction process between the devices.
Fig. 45 shows a schematic block diagram of a sending end (source end) and a receiving end (sink end) provided in the embodiment of the present application. The source end includes a notification listening service 4510, a device management module 4520, a message encapsulation module 4530, a notification forwarding management module 4540, and so on; the sink end includes a message parsing module 4550, a system adaptation module 4560, and so on.
The notification listening service 4510 is configured to register a notification listening service with the system at the source end and obtain system notification messages in real time. When a third-party App sends a notification message body, the notification message body includes a packet header, a notification ID attribute, a notification channel attribute, message content, and the like. The packet header is used to determine which App the notification message belongs to, and the notification ID attribute and the notification channel attribute can be used to find the corresponding notification message.
The device management module 4520 is configured to manage the sink side. Illustratively, the device management module 4520 is configured to store a device name, a device type, device status information, address information, and the like of the sink side, where the device status information is used to indicate whether the device is online or offline. For another example, the device management module 4520 is further configured to store the hardware attribute information of the sink side (e.g., whether there is a screen, whether there is a speaker, etc.).
The sink terminal in the embodiment of the application may be a device under the same account as the source terminal. Then, when sending the notification message body to the sink, the source end can send the notification message body to the sink end through a short-distance communication mode, or send the notification message body to the sink end through a server.
In this embodiment, the device management module 4520 may control the source end to periodically send BLE packets to the surrounding devices, where the purpose of the BLE packets is to query whether the surrounding devices are online. If the source end receives a response, the state information stored in the device management module 4520 is recorded as online; otherwise, the state information stored in the device management module 4520 is recorded as offline. When the source end has a notification message to forward, the device management module may first determine whether the sink end is online, and for an offline device, the source end may choose not to forward the notification message to that sink end.
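The online/offline bookkeeping described above can be sketched as a timestamp table: a sink counts as online if it answered the most recent BLE query within a timeout. The class, field names, and timeout value below are illustrative, not specified by the patent:

```python
class DeviceManager:
    """Track whether sink devices are online based on BLE query responses."""

    def __init__(self, timeout_s=30.0):
        self.timeout_s = timeout_s
        self._last_seen = {}  # device name -> time of last BLE response

    def on_ble_response(self, name, ts):
        """Record that a sink answered the periodic BLE query at time ts."""
        self._last_seen[name] = ts

    def is_online(self, name, now):
        ts = self._last_seen.get(name)
        return ts is not None and (now - ts) <= self.timeout_s

    def forward_targets(self, names, now):
        """Devices the source should forward a notification to (offline skipped)."""
        return [n for n in names if self.is_online(n, now)]
```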
The message encapsulation module 4530 may define a notification message body that is common across devices, or a notification message body that devices running different operating systems can parse. The message encapsulation module may re-encapsulate the content of a notification message according to the format of this notification message body, so as to obtain the encapsulated notification message body.
For example, the encapsulated notification message body may include a header of the message, identification information (e.g., a notification ID attribute and a notification channel attribute), content of the notification, user information of the notification message, and device information of the source device, and the like. The packet header of the notification message is used for determining which App the notification message belongs to; the identification information is used for finding out the corresponding notification message; after detecting the reply of the user, the sink end can send the content, the packet header and the identification information replied by the user to the source end, so that the source end can determine that the notification content is the reply of a certain notification message of a certain App.
For example, in (a) in fig. 5, the notification message body sent by the mobile phone to the smart television may include: the notification message belongs to App1, the user sending the message is Li Hua, and the content of the notification message is "There is a meeting at 9 am".
For another example, as shown in fig. 9, the notification message body sent by the laptop to the wearable device may include that the notification belongs to the schedule and that the notification content includes the time (9:00 a.m. to 10:00 a.m.).
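As a concrete but non-normative sketch, the encapsulated body could be serialized as JSON with exactly the fields listed above. The patent does not fix a wire format, so the key names and example values here are invented:

```python
import json

def encapsulate_notification(app, notif_id, channel, content, user, source_device,
                             quick_reply=False):
    """Pack a notification into a cross-device message body."""
    body = {
        "header": {"app": app},                            # which App the message belongs to
        "identity": {"id": notif_id, "channel": channel},  # locates the original message
        "content": content,
        "user": user,
        "source_device": source_device,
    }
    if quick_reply:
        body["flags"] = ["quick_reply"]  # tells the sink to add a reply control
    return json.dumps(body)
```

On a reply, the sink would send the reply content back together with the header and identity fields, letting the source match it to the original notification.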
In one embodiment, the message encapsulation module 4530 may further carry indication information in the notification message body, where the indication information is used to instruct the sink end to add a reply control to the notification message. For example, the indication information may be a "quick reply" flag attribute. When the sink end determines that the notification message body carries the "quick reply" flag attribute, it can add a reply control in the message alert box. Therefore, even if the sink-end device does not have the application program corresponding to the notification message installed, the sink-end device can still prompt the user with the notification message, and the user can complete a reply to the notification message through the reply control; the sink-end device then sends the reply content to the source end, and the source end completes the actual reply to the notification message. It should be understood that, for the process of the source end actually replying to the notification message, reference may be made to the description in the following embodiments; for brevity, details are not described here.
The notification forwarding management module 4540 may be configured to determine whether the user's focus is on the current source end. If the user's focus is not on the current device, the notification forwarding management module 4540 is further configured to determine the user's focus device.
Illustratively, the source end may automatically turn on the camera after receiving the notification message from the server, and if the face information of the user is not collected by the camera, the source end may determine that the device currently focused by the user is not on the source end.
It should be understood that, in the embodiment of the present application, a source of the notification message received by the source is not particularly limited, and the notification message may be a notification message received by the source from a server, or may also be a notification message received by the source from another device, for example, the source receives the notification message from the other device through Wi-Fi P2P.
Illustratively, the source end may detect the user's pupil through a pupil detection sensor (e.g., an image sensor). If the pupil detection sensor can detect the user's pupil, then the source end determines that the user is controlling the device; otherwise, the source end determines that the user is not currently focused on the source end.
In an embodiment, if the notification forwarding management module determines that the current focus of the user is not on the source end, the source end device may further send request information to one or more sink ends stored in the device management module, where the request information is used to request the sink end to determine whether the focus of the user is on the device.
In one embodiment, the notification forwarding management module 4540 may save the priority of the focus device determination. Illustratively, the prioritization may be:
(1) A visual focus;
(2) An on-body device;
(3) The focus of the interaction.
It should be understood that the visual focus device may be a camera-equipped device, e.g., a smart television, a tablet computer, a laptop computer, etc.; the on-body device may be a wearable device of the user, e.g., VR/AR glasses, smart watches, smart bracelets, and so forth; the interactive device may be a device with input manipulation, e.g. a device with keyboard, mouse, touch, remote control, voice input, etc.
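The three-level priority could be applied as a simple sort key when several candidate focus devices respond. The mapping below follows the order given in the list, with lower numbers ranked first; it is a sketch, not the patent's implementation:

```python
# priority per the text: visual focus > on-body device > interaction focus
FOCUS_PRIORITY = {"visual": 1, "on_body": 2, "interaction": 3}

def order_by_focus_priority(devices):
    """Order candidate focus devices by the saved priority of focus types."""
    return sorted(devices, key=lambda d: FOCUS_PRIORITY[d["focus_type"]])
```

The source end would then query (or forward to) the candidates in the returned order.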
If the source end stores the device list for forwarding the notification message, the source end device may send request information to the devices in the device list. The source side may send the request information to the devices in the device list according to the priority of the devices.
For example, if the devices in the device list of the mobile phone include a smart television and a smart watch, the mobile phone may first send request information to the smart television, where the request information is used to request the smart television to determine whether the current focus of the user is on the smart television. After receiving the request information, the smart television can judge whether the focus of the user is on it by turning on the camera. If the smart television determines, through the face information collected by the camera, that the user's current focus is on the smart television, the smart television can send a response to the mobile phone indicating that the user's current focus is on the smart television. After receiving the response, the mobile phone may not send the request information to other devices in the device list. If the mobile phone does not receive a response from the smart television within a preset time (or the mobile phone receives a response from the smart television indicating that the user's current focus is not on the smart television), the mobile phone can continue to send the request information to the smart watch. After receiving the request information sent by the mobile phone, the smart watch may detect whether the user is wearing it based on a sensor. For example, a photoplethysmography (PPG) sensor in the smart watch may determine whether the user is wearing the smart watch by detecting the user's heart rate. If the smart watch judges that the user is wearing it, the smart watch can send a response to the mobile phone indicating that the user is currently wearing the smart watch.
When the source side sends the request information to the devices in the device list, the source side may also send the request information to a plurality of devices in the device list at the same time.
For example, if the devices in the device list of the mobile phone include a smart TV and a smart watch, the mobile phone may send the request information to the smart TV and the smart watch at the same time. If the mobile phone receives only the response of the smart TV, the mobile phone can forward the notification message to the smart TV; if the mobile phone receives only the response of the smart watch, the mobile phone may forward the notification message to the smart watch; and if the mobile phone receives the responses of both the smart TV and the smart watch, the mobile phone may forward the notification message to the smart TV according to the priority of the devices.
If the source end does not store the device list for forwarding the notification message, the source end can send request information to devices around the source end in a broadcast manner. If the mobile phone only receives the response of a certain surrounding device, the mobile phone can forward the notification message to the device; alternatively, if the handset receives responses from multiple surrounding devices, the handset may forward the notification message to a device with a higher priority according to the priority of the device.
It should be understood that, in this embodiment of the application, if a device list for forwarding the notification message is stored in the first electronic device, the first electronic device sends indication information to the second electronic device, where the indication information may be used to instruct the second electronic device to determine whether an owner of the first electronic device focuses on the second electronic device; alternatively, the indication information may be used to instruct the second electronic device to determine whether a user (who may be the owner of the first electronic device, or may be another user) focuses on the second electronic device.
In one embodiment, the source end and the sink end are located in the same system, and the system may further be connected to a smart camera. The smart camera can detect the position of the user through face recognition, infrared detection, or the like, and send the user's position to the source end. The source end then determines which device in the system is closest to the user according to the user's position and the positions of the devices in the system (the device positions may be obtained through satellite positioning), and takes the device closest to the user as the sink end.
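The closest-device selection can be sketched as a Euclidean-distance comparison between the user position reported by the smart camera and each device's stored position. The coordinates, names, and 2-D plane assumption are hypothetical:

```python
import math

def pick_sink_by_position(user_pos, devices):
    """Return the name of the device closest to the user's reported position."""
    def distance(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    return min(devices, key=lambda d: distance(user_pos, d["pos"]))["name"]
```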
When the source end determines that the current focus of the user is not on the source end, the notification forwarding management module of the source end may send the encapsulated notification message body to the focus device.
For example, as shown in fig. 45, the source end may send the notification message body to the sink-end device through a network channel (e.g., Bluetooth/Wi-Fi). For example, the source end may send the notification message body in a BLE packet. A BLE packet includes a Protocol Data Unit (PDU), and the notification message body may be carried in the service data field (service data) of the PDU, or in the vendor-specific data field (vendor specific data) of the PDU. For example, the payload of the service data field may include a plurality of bits, among which are extensible bits. The sink end and the source end can agree on the content of certain extensible bits. The source end can encode the notification message body using encodings such as GBK or ISO 8859-1 and carry the encoded information on one or more extensible bits.
The message parsing module 4550 of the sink end is configured to parse the notification message body sent by the source end, so as to obtain the content of the notification message body. For example, if the source end carries the notification message body in a BLE data packet and sends it to the sink end, the sink end may, after receiving the BLE data packet, decode it in the corresponding decoding manner to obtain the notification message body.
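The encode/decode round trip described above can be sketched as follows. This is a minimal illustration in Python; the one-byte length/AD-type layout, the `0x16` AD type value, and the helper names are assumptions for illustration, not the actual packet format used by the devices:

```python
# Sketch: carry a GBK-encoded notification message body in the service data
# field of a BLE PDU. The source end encodes the body and places the bytes
# on the agreed extensible bits; the sink end reverses the process.

SERVICE_DATA_AD_TYPE = 0x16  # assumed AD type for the service data field

def pack_notification_body(body: str, encoding: str = "gbk") -> bytes:
    payload = body.encode(encoding)          # GBK or ISO8859-1, as agreed
    # the length byte covers the AD type byte plus the payload
    return bytes([len(payload) + 1, SERVICE_DATA_AD_TYPE]) + payload

def unpack_notification_body(packet: bytes, encoding: str = "gbk") -> str:
    length, ad_type = packet[0], packet[1]
    assert ad_type == SERVICE_DATA_AD_TYPE
    return packet[2:1 + length].decode(encoding)

body = '{"app":"App1","content":"hello"}'
assert unpack_notification_body(pack_notification_body(body)) == body
```

Because both ends agree on the bit layout and the encoding in advance, the sink end can decode the payload without any out-of-band negotiation at message time.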
The system adaptation module 4560 may optimize the interaction and presentation mode of the forwarded notification for the sink end according to the notification style rules local to the sink end, or according to a preset correspondence table between devices and notification optimization modes.
In one embodiment, if the notification message is a message of a first application, the sink end device may be a device on which the first application is not installed. The notification message body sent by the source end to the sink end carries indication information (for example, a "quick reply" flag attribute used for indicating that the sink end adds a reply control in the message reminding box). After the message parsing module at the sink end identifies the "quick reply" flag attribute, the system adaptation module adapts to the system interfaces of the respective operating systems of different sink ends, supporting a "quick reply" input box, UI presentation, user interaction response, and the like.
Illustratively, if the sink end is a device of an Android system, the sink end may provide a notification manager, so as to implement drawing of a message reminding box, where the message reminding box includes notification content in a notification message body, a reply control drawn by the notification manager, and the like. For example, as shown in (a) in fig. 6, after receiving a message body sent by a mobile phone, the smart television may display the content of the message and a reply control in a message alert box.
When the sink end detects that the user intends to reply, the sink end can display a text input box and a sending control. When the sink end detects that the user inputs reply content in the text input box and clicks the sending control, the content input by the user can be obtained through the callback interface getResultsFromIntent(). getResultsFromIntent() may establish an association between the sending control and the text input box: when the sink end detects that the user clicks the sending control, the sink end is triggered to acquire the content in the text input box. In the embodiment of the application, this callback interface can be registered on different systems, so that when the sink end detects the reply content of the user, the reply content can be obtained through the callback interface. After detecting that the user inputs the reply content in the text input box and clicks the sending control, the sink end can be triggered to send the reply content of the user to the source end. Illustratively, Table 1 shows a correspondence table between devices and notification optimization modes.
TABLE 1
(Table 1 is provided as an image in the original publication: Figure PCTCN2020142600-APPB-000001.)
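The send-control/input-box association just described can be modeled in a platform-neutral way. On Android the real mechanism is RemoteInput and getResultsFromIntent(); the class and method names below are illustrative stand-ins, not that API:

```python
# Sketch: a callback registered on the send control reads the text input box
# when the user clicks send, mirroring the getResultsFromIntent() pattern.

class TextInputBox:
    def __init__(self):
        self.text = ""

class SendControl:
    def __init__(self):
        self._callback = None
    def register_callback(self, cb):
        self._callback = cb
    def click(self):
        # clicking the send control triggers retrieval of the input content
        if self._callback:
            self._callback()

replies = []
box = TextInputBox()
send = SendControl()
# Associate the send control with the text input box via the callback.
send.register_callback(lambda: replies.append(box.text))

box.text = "On my way"
send.click()
assert replies == ["On my way"]
```

Registering the same callback shape on each operating system is what lets the sink end obtain the reply content uniformly across heterogeneous devices.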
Illustratively, as shown in (a) and (b) of fig. 5, when a notification arrives, the smart television displays only the notification content without displaying an action control, and the user can subsequently view and reply to the notification message in the notification center. Thus, the user experience is consistent, and the user is not disturbed.
As shown in (a) and (b) of fig. 6, when a notification arrives, the smart TV may first determine whether it is interacting with the user. If there is no interaction with the user (e.g., the smart television is playing a video), the smart television may automatically focus the cursor on the reply control of the message reminder box and display the action control in the message reminder box; if the smart television is interacting with the user (e.g., the user is changing channels with the remote control), the smart television displays the action control in the message reminder box but does not focus the cursor on the reply control, instead prompting the user to click a menu key to position the cursor there. This helps to improve ease of interaction.
For example, as shown in fig. 7, when the mobile phone sends the notification message to the car machine, the car machine may prompt the user of the content of the notification message in a voice broadcast manner.
For example, as shown in fig. 9, when the smart watch receives a notification message sent by the notebook computer, the smart watch may remind the user through voice, vibration, display on a display screen, and the like.
When detecting that the user replies a certain notification message, the sink terminal can package the action response event and the content replied by the user together into a notification message body. The notification message body can adopt the same packaging format as the notification message sent from the source end to the sink end, so that the source end can analyze the notification message conveniently.
Illustratively, the sink end may send the notification message body to the source end in a BLE packet. The BLE data packet includes a PDU, and the notification message body may be carried in a service data field in the PDU or may also be carried in a vendor specific data field in the PDU. For example, the payload of the service data field may include a plurality of bits, wherein the plurality of bits includes an extensible bit. The sink side and the source side can agree on the content of a certain extensible bit. The sink end can adopt coding modes such as GBK, ISO8859-1 and the like to code the notification message body, and carry the coded information on one or more extensible bits.
In an embodiment, if the notification message body sent by the source end to the sink end carries a notification ID attribute and a notification channel attribute, the sink end may carry the notification ID attribute, the notification channel attribute, the action response event, and the content replied by the user in the notification message body sent back to the source end. If the notification message body sent by the source end to the sink end does not carry the notification ID attribute and the notification channel attribute, the notification message body sent by the sink end to the source end may likewise not carry them.
In the embodiment of the application, after the sink terminal detects a click event or a delete event of the notification message by the user, the corresponding event can be sent to the source terminal in real time. The source end can make a real-time response, so that the closed-loop processing of the message is completed, and the consistency of the experience of the user on different devices is achieved.
For example, as shown in fig. 5 (c), after the user completes the reply to the App1 message on the smart television, the smart television may send an action response event and reply content to the mobile phone, where the action response event is used to indicate that the smart television has completed the reply to the message. After receiving the action response event and the reply content, the mobile phone can automatically pull up the chat interface, and realize the real reply of the message in a dragging mode.
The source end parses the notification message body sent by the sink end to obtain the packet header, the notification ID attribute, the notification channel attribute, the action response event, and the user's reply content, and then completes the real reply to the notification message through the following process. The source end can confirm, through the action response event, that the sink end has replied to a notification message; determine, through the packet header, which App's notification message is replied to; and determine, through the notification ID attribute and the notification channel attribute, which notification message is replied to.
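The pack/parse of the reply message body can be sketched as a round trip. The JSON layout and field names below are assumptions for illustration; the embodiment only requires that the sink end use the same encapsulation format as the source end:

```python
# Sketch: the sink end packs the packet header (identifying the App), the
# notification ID and channel attributes, the action response event, and the
# user's reply content into one message body; the source end parses it back.
import json

def pack_reply(app, notif_id, channel, action, content):
    return json.dumps({
        "header": {"app": app},
        "notification_id": notif_id,
        "channel": channel,
        "action": action,          # e.g. "reply", "click", "delete"
        "content": content,
    })

def parse_reply(body):
    msg = json.loads(body)
    return (msg["header"]["app"], msg["notification_id"],
            msg["channel"], msg["action"], msg["content"])

body = pack_reply("App1", 17, "chat", "reply", "See you at 8")
assert parse_reply(body) == ("App1", 17, "chat", "reply", "See you at 8")
```

Using the same format in both directions is what lets each end reuse its existing parsing module for replies.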
(1) Pull-up social App display interface
If the source end is in the lock screen state, the process can be performed in the following ways.
Mode one
The source end may first determine, according to the packet header, which App the reply is for. If the App provides an API for replying to messages, the source end can directly complete the real reply of the notification message through the API provided by the App after acquiring the content replied by the user. For example, a short message application provides the API sendTextMessage, which can be used to complete the real reply to a message. After the source end obtains the reply content, it may determine, through the packet header of the reply content, that the reply content is for the short message application. Because the short message application provides an API for replying to notification messages, the source end can determine, through the notification ID attribute and the notification channel attribute, which notification message is replied to, and then directly complete the real reply of the notification message through sendTextMessage.
It should be understood that, in the reply manner of mode one, the process of message reply is implemented at the source end, and the process is imperceptible to the user. If the App does not provide an API for replying to messages, mode two or mode three can be adopted, and the reply is completed through a drag event.
Mode two
If the App supports the lock screen loaded flags attribute, the source end can directly load the corresponding display interface on the lock screen interface. For example, a voice call request or a video call request of WeChat is displayed directly on the lock screen interface. Then proceed to step (2), and complete the message reply through a drag event.
Mode three
The reply manner of mode three may also be called a cross-device unlocking scheme. In mode three, the sink end can complete the collection of the user's identity authentication information; for example, the sink end can acquire the password information used by the user for unlocking the source end, such as a digital password, a fingerprint, iris information, facial features, or other identity authentication information. The sink end can then send the password information to the source end, and the source end judges whether to unlock the device. If the source end determines, according to the password information, to unlock, it proceeds to step (2) and completes the message reply through a drag event.
(2) Completing a reply by a drag event
The source end responds to the action response event, and can determine that the sink end device detects the operation of the user on the notification message reply. The source end can determine a corresponding App according to the packet header of the notification message body, start the App to the foreground, and find the corresponding notification message through the notification ID attribute and the notification channel attribute. The view system at the source end can send Drag and Drop events and reply content to the corresponding App. And the App responds to the Drag and Drop event, and pulls up the chat interface of a specific contact through the PendingIntent in the notification message, so as to reply to the notification message. At this point the true reply to the notification message is complete. The view system, while sending the Drag and Drop event and the reply content to the corresponding App, may also send a content attribute of the reply content to the corresponding App, which may be used to indicate the type of the reply content (e.g., text, picture, voice, video, file, etc.). The App can inform a three-dimensional graphics processing library (OpenGL ES) to draw different display interfaces according to the type of the reply content. For example, for text content, after pulling up the chat interface of a specific contact, the text content can be directly displayed at the position of the reply. For another example, for voice information, after a chat interface of a specific contact is pulled up, a duration of the voice and a control for answering the voice may be displayed (when the source terminal detects that the user clicks the control, the content of the voice may be played). 
For another example, for video information, after the chat interface of a certain contact is pulled up, the first frame image of the video may be displayed together with a play control (after the source end detects that the user clicks the play control, the content of the video may be played).
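The content-attribute dispatch described in step (2) can be sketched as a simple mapping from reply type to rendering behavior. The render descriptions below are illustrative placeholders, not actual OpenGL ES drawing calls:

```python
# Sketch: after the chat interface is pulled up, the App chooses how to
# render the reply according to its content attribute (text, voice, video...).

def render_reply(content_type: str) -> str:
    if content_type == "text":
        return "show text at the reply position"
    if content_type == "voice":
        return "show voice duration and a playback control"
    if content_type == "video":
        return "show first frame and a play control"
    return "show generic attachment"

assert render_reply("voice") == "show voice duration and a playback control"
assert render_reply("video") == "show first frame and a play control"
```

Sending the content attribute alongside the reply content is what allows the App to pick the correct display interface before the reply is drawn.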
(3) Restoring the original state of the source end
After the source end finishes the reply, it hides the interface of the App and returns to the display interface shown before the App was started. If the source end device was originally in the lock screen state in step (1), the device is locked again. By restoring the original state of the device, the quick reply function is imperceptible to the user, which can further improve the user experience.
For another example, when the smart television detects that the user ignores the message of App1, the smart television may send a notification message body to the mobile phone, where the notification message body may carry an action response event, and the action response event is used to indicate that the smart television has detected that the user ignored the message reminder box. After receiving the notification message body, the mobile phone may determine App1 according to the packet header of the notification message body. In response to the action response event, the view system of the application framework layer can send a hide event to App1 of the application layer, and App1 may notify the three-dimensional graphics processing library (OpenGL ES) to hide the notification message reminder box on the lock screen interface of the mobile phone.
Fig. 46 shows a schematic flowchart of a method 4600 of prompting a message provided by an embodiment of the present application. As shown in fig. 46, the method may be performed by a first electronic device and a second electronic device, where the first electronic device may be a source terminal as shown in fig. 45, and the second electronic device may be a sink terminal as shown in fig. 45, and the method 4600 includes:
S4601, the first electronic device receives the message.
Exemplarily, as shown in fig. 5 (a), the user's mobile phone receives a message of App1 sent by user Li.
It should be understood that the message in the embodiment of the present application may be a message sent by a server of a different application, and the type of the message includes, but is not limited to, text, voice, audio, video, link, share (e.g., location share), invite (e.g., invite to join a group), and the like.
S4602, when determining that the device focused by the owner of the first electronic device is the second electronic device, the first electronic device sends the message to the second electronic device.
If the first electronic device stores a device list for message forwarding, the second electronic device may be a device in the device list, and at this time, the first electronic device may send request information to the second electronic device through short-range wireless communication or through a server, where the request information is used to request the second electronic device to determine whether the owner of the first electronic device focuses on the second electronic device; if the first electronic device does not store the device list for message forwarding, the second electronic device may be a device around the first electronic device, and the first electronic device may send the request information to the surrounding devices in a broadcast manner.
Optionally, if the first electronic device determines that the device focused by the user is not the first electronic device, the first electronic device does not prompt the message.
For example, as shown in fig. 5 (a), when the mobile phone receives the message of App1, if the mobile phone determines that the device focused by the user is not the mobile phone, the mobile phone may refrain from lighting up the screen to notify the user of the App1 message.
In the embodiment of the present application, the manner in which the first electronic device determines that the focused device is not the first electronic device is not specifically limited.
In one embodiment, the second electronic device is an electronic device in the device list of the first electronic device, and user characteristics of the owner of the first electronic device are stored in the second electronic device (for example, the second electronic device and the first electronic device are two devices under the same user, i.e., logged in with the same user ID, and the same user characteristics, such as fingerprint information, face information, voiceprint information, and iris information, may be stored in both devices). In this case, the first electronic device may send the request information to the second electronic device through a server or by short-distance communication. After receiving the request information sent by the first electronic device, the second electronic device can collect user characteristics and judge whether they match the preset user characteristics of the owner of the first electronic device. If so, the second electronic device sends response information to the first electronic device, where the response information is used to indicate that the owner of the first electronic device focuses on the second electronic device. The first electronic device may send the message to the second electronic device after receiving the response information.
For example, if the first electronic device includes a camera, when the first electronic device receives the message, it may turn on the camera to collect the face information of the user. If the camera does not collect the face information of the user, the first electronic device may determine that the device on which the user is currently focused is not the first electronic device, and may send the request information to the second electronic device. After receiving the request information, the second electronic device may collect the face information of the user by turning on its camera, and if the collected face information matches the face information of the owner of the first electronic device preset in the second electronic device, the second electronic device may send the response information to the first electronic device.
For another example, after receiving the request information, the second electronic device may collect the voice information of the user by turning on a microphone. The second electronic device may extract the voiceprint information in the voice information, and if the voiceprint information matches the voiceprint information of the owner of the first electronic device preset in the second electronic device, the second electronic device may send the response information to the first electronic device.
In this embodiment, the first electronic device may send the request information to the second electronic device through the server, or may send the request information to the second electronic device through short-range communication. For example, the request information may be a field in the BLE data packet for requesting the second electronic device to determine whether the user is focused on the second electronic device.
Illustratively, the BLE data packet includes a PDU, and the request information may be carried in a service data field in the PDU or may be carried in a vendor specific data field in the PDU. For example, the payload of the service data field may include a plurality of bits, wherein the plurality of bits includes an extensible bit. The first electronic device and the second electronic device may agree in advance on a bit for transmission of the request message. When this extendable bit is a preset value (e.g., 1), the second electronic device may learn that the first electronic device requests it to determine whether the owner of the first electronic device is focused on the device.
And when the second electronic equipment determines that the user focuses on the second electronic equipment, sending response information to the first electronic equipment, wherein the response information is used for indicating that the equipment focused by the user is the second electronic equipment.
For example, the second electronic device may send the service data field or vendor specific data field of the PDU in the BLE packet to the first electronic device upon determining that the user focus is on the second electronic device. For example, the response information may be carried on scalable bits in the payload of the service data field. The first electronic device and the second electronic device may agree in advance on a certain scalable bit for transmission of the response message. When the extendable bit is a preset value (e.g., 1), it indicates that the second electronic device determines that the device focused by the owner of the first electronic device is the second electronic device.
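The agreed-bit convention for the focus request and response can be sketched as follows. The bit positions chosen here are assumptions; the embodiment only requires that the two devices agree on them in advance:

```python
# Sketch: the two devices agree in advance that particular extensible bits
# in the service data payload carry the focus request and response.

REQUEST_BIT = 0   # agreed bit: "determine whether the owner focuses on you"
RESPONSE_BIT = 1  # agreed bit: "the owner is focusing on me"

def set_bit(payload: int, bit: int) -> int:
    return payload | (1 << bit)

def bit_is_set(payload: int, bit: int) -> bool:
    return bool(payload & (1 << bit))

# The first electronic device sends a request ...
request = set_bit(0, REQUEST_BIT)
assert bit_is_set(request, REQUEST_BIT)

# ... and the second electronic device confirms focus in its response.
response = set_bit(0, RESPONSE_BIT)
assert bit_is_set(response, RESPONSE_BIT)
assert not bit_is_set(response, REQUEST_BIT)
```

Because the meaning of each bit is fixed ahead of time, a single preset value (e.g., 1) on the agreed bit is enough to convey the request or the confirmation.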
It should be understood that, in this embodiment of the present application, the second electronic device may determine whether the user focuses on the second electronic device by using other existing manners of determining a focus device, which is not limited in this embodiment of the present application.
Optionally, the first electronic device and the second electronic device are devices under the same account, and then the first electronic device may further send request information to the second electronic device through the server. Likewise, the second electronic device may also transmit response information to the first electronic device through the server if it is determined that the user is focusing on the second electronic device.
For example, the user wearing the smart watch goes out and the mobile phone is placed at home for charging, and when the mobile phone receives a message, the mobile phone may send a request message to the smart watch through the server. The smart watch may send the response message to the cell phone through the server.
In an embodiment, if the second electronic device is an electronic device in the device list of the first electronic device and the user feature of the owner of the first electronic device is not stored in the second electronic device, or the device list is not stored in the first electronic device, the second electronic device is a device around the first electronic device. The first electronic device may send the request information to the second electronic device upon determining that the owner of the first electronic device is not focused on the first electronic device; after receiving the request information, the second electronic device may collect the user characteristics through the user characteristic collection device, and send the collected user characteristics to the first electronic device. The first electronic device may send the message to the second electronic device upon determining that the user characteristics gathered by the second electronic device match user characteristics pre-set in the first electronic device.
It should be understood that, for the case that the second electronic device is an electronic device in the device list of the first electronic device and the user characteristics of the owner of the first electronic device are stored in the second electronic device, the second electronic device may also collect the characteristics of the user and send the characteristics of the user to the first electronic device after receiving the request message, so that the first electronic device determines whether the owner of the first electronic device focuses on the second electronic device according to the characteristics of the user collected by the second electronic device and the preset user characteristics.
For example, the user feature sent by the second electronic device to the first electronic device may be carried in a service data field or a vendor specific field in the PDU.
For example, the second electronic device includes a camera. After receiving the request information, the second electronic device can acquire face information through the camera and send the face information to the first electronic device. The first electronic device judges, according to the face information, whether the user is the owner. For example, the first electronic device may match the face information acquired by the second electronic device with the face information preset in the first electronic device, and if the matching succeeds, it is determined that the owner of the first electronic device is currently focusing on the second electronic device. The face information collected by the second electronic device may also be face information of multiple persons; in that case, the second electronic device may send the face information of the multiple persons to the first electronic device. The first electronic device may match the face information of each of the multiple persons with the face information preset in the first electronic device, and if any matching succeeds, it is determined that the second electronic device is currently focused on by the user. The first electronic device may then send the message to the second electronic device, and the second electronic device may prompt the user that a message has arrived (or that a message from the XX device has arrived) without prompting the content of the message.
For another example, the second electronic device includes a microphone, and after receiving the request information, the second electronic device may collect, through the microphone, voice information uttered by the user and send the voice information to the first electronic device. The first electronic device judges, according to the voice information, whether the user is the owner. For example, the first electronic device may extract the voiceprint information from the voice information acquired by the second electronic device, match the extracted voiceprint information with the voiceprint information preset in the first electronic device, and if the matching succeeds, determine that the second electronic device is currently focused on by the user.
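The matching step performed by the first electronic device can be sketched as follows. Real systems use face or voiceprint recognition models; here a cosine-similarity threshold over feature vectors stands in for that matching, and the threshold value is an assumption:

```python
# Sketch: match user features collected by the second device against the
# owner's preset features; any one match among several persons suffices.

def match_feature(collected, preset, threshold=0.9):
    # cosine similarity between two equal-length feature vectors
    dot = sum(a * b for a, b in zip(collected, preset))
    norm = (sum(a * a for a in collected) ** 0.5) * \
           (sum(b * b for b in preset) ** 0.5)
    return norm > 0 and dot / norm >= threshold

def owner_is_focused(collected_features, preset):
    # the second device may send features of several persons
    return any(match_feature(f, preset) for f in collected_features)

preset = [0.9, 0.1, 0.4]
assert owner_is_focused([[0.0, 1.0, 0.0], [0.9, 0.1, 0.4]], preset)
assert not owner_is_focused([[0.0, 1.0, 0.0]], preset)
```

Keeping the preset features and the matching decision on the first electronic device is what the embodiment relies on to protect the user's privacy during forwarding.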
In the embodiment of the application, when the first electronic device requests the second electronic device whether the owner of the first electronic device focuses on the second electronic device, the second electronic device may send the collected user characteristics to the first electronic device, so that the first electronic device may determine whether the second electronic device is a device focused by the owner of the first electronic device according to the received user characteristics and the preset user characteristics. Therefore, the safety of message forwarding is improved, privacy disclosure of users is avoided, and user experience is improved.
Alternatively, if the first electronic device does not receive the response information sent by any device within the preset time, the first electronic device may determine that the user is not focused on devices around the first electronic device. The first electronic device may alert the user of the message by a bright screen or vibration.
Optionally, the first electronic device may send the content of the message to the second electronic device by carrying the content in BLE data packets; alternatively, the first electronic device may send the message to the second electronic device through the server.
Illustratively, the BLE packet includes a PDU, and the message may be carried in a service data field in the PDU or may be carried in a vendor specific data field in the PDU. For example, the payload of the service data field may include a plurality of bits, wherein the plurality of bits includes an extensible bit. The first electronic device and the second electronic device may agree in advance on one or more extensible bits for transmission of the message. The first electronic device may encode the content of the message in a GBK, ISO8859-1, or other encoding manner, and carry the encoded information on one or more scalable bits. After receiving the BLE data packet transmitted by the first electronic device, the second electronic device may decode information on a corresponding bit, thereby obtaining the content of the message.
S4603, the second electronic device prompts the message to the user according to the device information of the second electronic device.
In this embodiment, the device information of the second electronic device may include, but is not limited to: hardware capability information of the second electronic device (e.g., whether the second electronic device is a large screen device or the second electronic device is a car machine), a current state of the device (e.g., whether the device is currently interacting with a user, or whether the device is in an immersive state), and so forth.
Optionally, the device information of the second electronic device includes that the second electronic device has a display screen, and the second electronic device displays a message reminding frame through the display screen, where the message reminding frame includes the message and the message reminding frame does not include a control.
For example, as shown in (a) and (b) in fig. 5, after receiving a message sent by a mobile phone, a smart television may prompt the message through a message alert box, where the message alert box does not include a control. After seeing the message, the user can view the message in a notification center of the intelligent television.
Optionally, the device information of the second electronic device includes that the second electronic device has a display screen, and when the second electronic device detects that it is receiving user input, it displays a message reminding frame through the display screen without positioning a cursor in the message reminding frame, where the message reminding frame includes the message.
Illustratively, as shown in fig. 6 (b), when the smart television receives a message sent by a mobile phone, the smart television and the user have interaction, or the smart television detects that the user is operating the smart television. The smart tv may display a message alert box and the smart tv does not position a cursor in the message alert box.
In the embodiment of the application, if the second electronic device is receiving the input of the user when receiving the message, in order to avoid causing trouble to the user, the second electronic device may display the message reminding frame, but does not position the cursor in the message reminding frame. Therefore, the influence on the current interaction caused by the mistaken clicking of the user is avoided.
Optionally, when the second electronic device detects that the input of the user is being received, prompt information is also displayed through the display screen, and the prompt information is used for prompting the user to position the cursor to the message prompting frame through the first operation.
Illustratively, as shown in fig. 6 (b), the prompt message is "click menu key process". When the intelligent television detects that the user clicks a menu key, the cursor can be automatically positioned in the message reminding box. For example, the smart television may position a cursor at a reply control in the message alert box.
In this embodiment of the application, by displaying the prompt information, the user can position the cursor in the message reminding box through the first operation and then process the message. This avoids disturbing the user's current interaction while allowing the cursor on the smart television to be quickly and conveniently positioned in the message reminding box, making it easy for the user to process the message.
Optionally, the device information of the second electronic device indicates that the second electronic device has a display screen. When the second electronic device detects that it is not receiving user input, it displays the message reminding box on the display screen and positions the cursor in the box.
Illustratively, as shown in fig. 6 (a), when the smart television receives a message sent by a mobile phone while there is no interaction between the smart television and the user, or the smart television detects that the user is not operating it, the smart television may display a message reminding box and position a cursor in the box. For example, the smart television positions the cursor at a reply control in the message reminding box.
In this embodiment of the application, if the smart television is not receiving user input when the message arrives, it may display the message reminding box and position the cursor in it, making it convenient for the user to process the message.
Optionally, the device information of the second electronic device indicates that the second electronic device has a voice function, and the second electronic device may remind the user by voice that a message has been received; alternatively, the second electronic device may announce the content of the message by voice.
For example, as shown in fig. 7, the car machine has a voice function, so that after receiving a message sent by a mobile phone, the car machine can convert the content of the message into voice and play it to the user; alternatively, the car machine can prompt the user by voice that a message has been received.
In one embodiment, if the second electronic device has both a display screen and a voice reminding function, the second electronic device may further determine whether it is a large-screen device (for example, a smart television); if so, it may prompt the user through the display screen. Likewise, if the second electronic device is a tablet, a PC, or the like, it may prompt the user through the display screen. If the second electronic device is an intelligent voice device with a small display screen, such as a smart speaker, it may prompt the user by voice.
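The decision between screen and voice prompting described above can be sketched as follows; the `DeviceInfo` fields and the helper name are hypothetical, introduced only for illustration:

```python
from dataclasses import dataclass

@dataclass
class DeviceInfo:
    has_display: bool
    has_voice: bool
    is_large_screen: bool   # e.g. a smart television
    is_tablet_or_pc: bool

def choose_prompt_mode(info: DeviceInfo) -> str:
    """Pick a reminding mode following the priority described in the text."""
    if info.has_display and info.has_voice:
        # Large-screen devices, tablets and PCs prompt through the display;
        # small-screen voice devices (e.g. smart speakers) prompt by voice.
        if info.is_large_screen or info.is_tablet_or_pc:
            return "screen"
        return "voice"
    if info.has_display:
        return "screen"
    if info.has_voice:
        return "voice"
    return "none"
```

For a smart television, `choose_prompt_mode` returns "screen"; for a smart speaker with a small display, it returns "voice".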
Optionally, before the second electronic device prompts the user with the message, the method further includes: the first electronic device sends indication information to the second electronic device, where the indication information instructs the second electronic device to add a reply control to the message. That the second electronic device prompts the user with the message according to the device information includes: the second electronic device displays a message reminding box according to the device information and the indication information, where the message reminding box includes the message and the reply control. After the second electronic device prompts the user with the message, the method further includes: when detecting that the user replies to the message, the second electronic device sends the reply content to the first electronic device; and the first electronic device replies to the message according to the reply content.
Illustratively, as shown in fig. 6 (a), the smart television may receive a message and indication information sent by a mobile phone. If the smart television is not currently receiving user input, it may display a message reminding box that includes the content of the message and a reply control, and position the cursor on the reply control. When the smart television detects that the user clicks the reply control, it can pull up the input method so that the user can conveniently reply to the message. The smart television can then send the reply content to the mobile phone, so that the mobile phone completes the real reply to the message by calling an API or through a drag event.
Illustratively, as shown in (b) of fig. 6, the smart television may receive a message and indication information sent by a mobile phone. If the smart television is currently receiving user input (for example, a channel-switching (zapping) operation), it may display a message reminding box that includes the content of the message and a reply control, without positioning the cursor at the reply control. The smart television may also display prompt information prompting the user to press the menu key to reply to the message. When the smart television detects that the user presses the menu key, it positions the cursor at the reply control. When the smart television detects that the user clicks the reply control, it can pull up the input method so that the user can conveniently reply to the message. The smart television can then send the reply content to the mobile phone, so that the mobile phone completes the real reply to the message by calling an API or through a drag event.
Optionally, the message is a message of a first application, and the second electronic device may be a device on which the first application is not installed. In this way, the user can complete the reply on the device being focused on, avoiding the need to reply on the first electronic device, which helps improve user experience.
Optionally, the device information of the second electronic device indicates that the second electronic device has a camera. When the second electronic device detects through the camera that only the owner of the first electronic device is focusing on the second electronic device, it prompts the user with the content of the message.
For example, as shown in fig. 11 (a), when the smart television detects through the camera that only the owner of the mobile phone is focusing on the smart television, the smart television may display a message reminding box 1101 on the display screen, where the message reminding box 1101 includes the content of the message (for example, "there is a meeting at 9 am").
Alternatively, when the second electronic device detects through the camera that a plurality of users, including the owner of the first electronic device, are focusing on the second electronic device, the second electronic device prompts the user that a message has been received.
Illustratively, as shown in fig. 11 (b), when the smart television detects through the camera that a plurality of users including the owner of the mobile phone are focusing on the smart television, the smart television may display a message reminding box 1102 on the display screen, where the message reminding box 1102 prompts the user: "you have a message, P40 from Lily".
Optionally, the second electronic device determines that it is in a non-immersive state before prompting the user with the message. Illustratively, the immersive state may include a state in which the user is watching a video; as shown in (b) of fig. 10, if the user is watching a video at this time, the tablet may not prompt the user with the message. As shown in fig. 10 (a), if the tablet is displaying the desktop at this time, the tablet may prompt the user with the message.
It should be understood that, in this embodiment of the application, if the second electronic device is in the immersive state, it may not prompt the message. The immersive state may also be understood as a notification-disabled state, since the immersive state may mean that the user has disabled notifications or turned on the do-not-disturb mode. For example, when a user is in a teleconference, especially when sharing a desktop, a pushed notification may not only disturb and distract the user but may also be seen by other participants, which may affect user experience and may reveal the user's privacy. Alternatively, the immersive state may be a state in which the second electronic device is running a preset application. For example, the immersive state may be that the second electronic device is currently running a video App or a game App.
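As a minimal sketch of the immersive-state check, assuming a preset application list and simple boolean state flags (all names hypothetical):

```python
from typing import Optional

# Hypothetical preset applications that put the device in the immersive state.
PRESET_IMMERSIVE_APPS = {"video_app", "game_app"}

def is_immersive(foreground_app: str, notifications_disabled: bool,
                 do_not_disturb: bool, sharing_desktop: bool) -> bool:
    # Disabled notifications, do-not-disturb mode, or desktop sharing all
    # count as the immersive (notification-disabled) state.
    if notifications_disabled or do_not_disturb or sharing_desktop:
        return True
    return foreground_app in PRESET_IMMERSIVE_APPS

def maybe_prompt(message: str, **state) -> Optional[str]:
    # The second electronic device prompts the message only when not immersive.
    return None if is_immersive(**state) else message
```

With the desktop in the foreground and no restrictions active, the message is prompted; with a video App in the foreground or do-not-disturb on, it is suppressed.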
Optionally, the method 4600 further includes: before sending the notification message to the second electronic device, the first electronic device determines that the type of the message is a message type set by the user.
For example, as shown in (e) in fig. 13, the user may set in advance the types of messages that can be forwarded as notifications, such as messages of App5, App6, and App7. When the first electronic device receives a message, it determines whether the type of the message is a message type set by the user. If so, the first electronic device may perform S4602.
Optionally, the method 4600 further includes: before sending the notification message to the second electronic device, the first electronic device determines that the message is an Instant Messaging (IM) type message.
Optionally, the method 4600 further includes: before sending the message to the second electronic device, the first electronic device determines that the second electronic device is a device that the user has set to receive forwarded messages.
For example, as shown in fig. 14 (b), the user may set in advance, on the mobile phone, the devices that can receive forwarded messages. When the first electronic device receives a message and determines that the device the user is currently focusing on is not the first electronic device, it may determine the device the user is currently focusing on from among the devices set to receive forwarded messages, and send the message to that device.
Optionally, before sending the message to the second electronic device, the first electronic device determines that the account logged in on the first electronic device is associated with the account logged in on the second electronic device.
Optionally, the account logged in on the electronic device and the account logged in on the wearable device may be the same account; or the account number logged in on the electronic equipment and the account number logged in on the wearable equipment are account numbers in the same family group; alternatively, the account logged into the wearable device may be an account authorized by the account logged into the electronic device.
In this embodiment of the application, when the first electronic device determines that the device the user is focusing on is not the first electronic device but the second electronic device, the first electronic device can forward the message to the second electronic device, and the second electronic device can prompt the user with the message according to its device information. This helps prompt the user in time that a message has been received and prevents the user from missing important messages; meanwhile, the second electronic device presents different reminding modes to the user according to its device information, improving the user's experience when receiving messages.
Fig. 47, 49 and 50 show the implementation of the GUI shown in fig. 16 to 22 and the specific interaction process between the devices.
The implementation process of the sending (source) end and the receiving (sink) end in the embodiment of the present application is described below with reference to fig. 47.
The notification service is used for receiving messages sent by the social application server. For example, when the user Tom uses the device a to send a message to the user Lily, the device a first sends the message content and the identification information of the social application account of the user Lily to the social application server. The social application server stores therein device information (for example, device B of Lily) of the social application account to which the user Lily is logged. The social application server may send a corresponding message to the device B of the user Lily according to the identification information of the social application account of the user Lily. The notification service in device B may be used to receive messages sent by the server and information of user Tom.
The notification listening module registers with the system to listen to the notification service and obtains, in real time, the messages received by the notification service. When a social application server sends a notification message, the notification listener can obtain the notification message body, which includes a packet header, a notification ID attribute, a notification channel attribute, message content, and the like. The packet header is used to determine which App the notification message belongs to, the notification ID attribute and the notification channel attribute can be used to find the corresponding notification message, and the message content may include the text of the message (e.g., "Happy Birthday!" shown in fig. 16 (a)).
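The fields of the notification message body named above can be modeled with a small sketch; the field names and example values are illustrative, not the actual on-device format:

```python
from dataclasses import dataclass

@dataclass
class NotificationBody:
    packet_header: str         # identifies which App the notification belongs to
    notification_id: str       # distinguishes different senders/contacts
    notification_channel: str  # distinguishes different message types
    content: str               # text of the message, e.g. "Happy Birthday!"

def dispatch(body: NotificationBody):
    """Return the App key and the (ID, channel) key used to locate the message."""
    return body.packet_header, (body.notification_id, body.notification_channel)
```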
The notification processing module of the source end: when the source-end device determines that the user is not currently focusing on the source-end device and that the message is one that can be forwarded to other electronic devices, it attaches a quick-reply attribute to the message.
The source end may maintain an application whitelist and perform cross-device forwarding only for messages received from applications in the whitelist. For example, a third-party App supporting drag event responses may be added to the whitelist. The source end may test an application to determine whether it supports drag event responses: while the test runs, the view system in the application framework layer sends a Drag and Drop event and randomly generated content to the application in the application layer. After receiving them, the application responds to the Drag and Drop event, starts into the foreground, opens the chat interface of a random contact, and writes the randomly generated content to the corresponding position of the chat interface as a reply. If the reply succeeds, the application supports the system drag event response; if it fails, the application does not.
In one embodiment, the source end may also determine whether to forward a message across devices according to the user's settings. For example, as shown in (e) in fig. 13, the user may set in advance the Apps whose messages can be forwarded across devices. For example, the user can set notification messages of App5 and App7 to be forwarded to other devices, while notification messages of App4, App6, App8, and App9 are not forwarded.
In this embodiment of the application, the notification message body may include the notification ID attribute and the notification channel attribute of the notification message. The notification ID attribute can be used to distinguish notifications sent by different users. For example, if a user has multiple contacts in a third-party App, the contacts may be distinguished by different notification ID attributes. Table 2 shows one way of doing so.
Table 2 Correspondence between contacts and their notification ID attributes

Contact      Notification ID attribute
Contact 1    ID1
Contact 2    ID2
Contact 3    ID3
The character string in the notification channel attribute can be used to distinguish different message types. For example, table 3 shows the correspondence between message types and the character string in the notification channel attribute.

Table 3 Correspondence between message types and character strings in the notification channel attribute

Message type   Character string in notification channel attribute
Text           Character string 1
Voice          Character string 2
Video          Character string 3
File           Character string 4
The corresponding notification message can be determined by the notification ID attribute and the notification channel attribute. For example, in the GUI shown in fig. 17, the two notification messages sent by the user Tom and the user Amy, respectively, can be distinguished by the notification ID attribute. For another example, two notification messages of different message types sent by Tom can be distinguished by the notification channel attribute. Different notification messages of the same message type sent by the same user may not be distinguished, or may be distinguished by other identification information.
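A minimal sketch of the lookup implied by Tables 2 and 3, keying messages by the (notification ID, notification channel) pair; the stored values are illustrative:

```python
# Messages from different users share a channel string but differ in ID;
# messages of different types from one user differ in the channel string.
messages = {
    ("ID1", "string1"): "text message from Tom",
    ("ID2", "string1"): "text message from Amy",
    ("ID1", "string2"): "voice message from Tom",
}

def find_message(notification_id: str, channel: str):
    """Locate the notification message by its (ID, channel) key, if any."""
    return messages.get((notification_id, channel))
```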
It should be understood that the description of the notification ID attribute and the notification channel attribute may be different for different applications, and the above description is only made for the notification ID attribute and the notification channel attribute defined in some applications, and the embodiment of the present application is not limited thereto.
The notification processing module at the source end can add a "quick reply" flag attribute to the message body. When the sink end determines that the notification message body carries the "quick reply" flag attribute, it can add a reply control to the message reminding box; when the sink end detects that the user clicks the reply control, it can display a text input box and a send control. When the source end sends the notification message body to the sink end, it can carry the message content and the "quick reply" flag attribute in the notification message body.
In one embodiment, the notification processing module at the source end may not add the "quick reply" flag attribute to the notification message body. When the sink end determines that the notification message body includes message content, it can add a reply control to the message reminding box.
In one embodiment, when the sink end determines that the notification message body includes message content and that the content belongs to a preset application (for example, an IM-class application), a reply control may be added to the message reminding box.
In one embodiment, the source end may also carry the notification ID attribute and/or the notification channel attribute in the notification message body. The purpose is that, after the sink end obtains the user's reply to the notification message, it sends the reply content together with the notification ID attribute and the notification channel attribute to the source end, so that the source end can determine which notification message the reply is for.
For example, the source end may send the notification message body to the sink end in a BLE data packet. The BLE data packet includes a PDU, and the notification message body may be carried in the service data field of the PDU or in the vendor-specific data field of the PDU. For example, the payload of the service data field may include a plurality of bits, among which some are extensible bits. The sink end and the source end can agree on the content of certain extensible bits. The source end can encode the notification message body using an encoding such as GBK or ISO8859-1 and carry the encoded information on one or more extensible bits.
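A hedged sketch of the encode-and-carry step: the two-byte length prefix and the payload layout below are assumptions for illustration and do not reflect an actual BLE PDU format.

```python
def pack_payload(body: str, encoding: str = "gbk") -> bytes:
    """Encode the notification message body (e.g. GBK) with a length prefix."""
    encoded = body.encode(encoding)
    return len(encoded).to_bytes(2, "big") + encoded

def unpack_payload(payload: bytes, encoding: str = "gbk") -> str:
    """Decode with the encoding agreed upon between source end and sink end."""
    length = int.from_bytes(payload[:2], "big")
    return payload[2:2 + length].decode(encoding)
```

The receiving end decodes in the corresponding manner, so the round trip preserves the message body, including Chinese text under GBK.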
The notification processing module of the sink end: after receiving the notification message body, the notification processing module at the sink end can parse it to obtain the content of the notification message and the "quick reply" flag attribute. For example, if the source end carries the notification message body in a BLE data packet and sends it to the sink end, the sink end, after receiving the packet, decodes it in the corresponding decoding manner to obtain the notification message body.
The notification UI adaptation module: after the notification processing module at the sink end identifies the "quick reply" flag attribute, the notification UI adaptation module adapts the system interfaces of the various operating systems that sink ends may run, supporting the "quick reply" input box, UI presentation, interactive user responses, and the like.
Illustratively, if the sink end is an Android device, it may provide a notification manager to draw the message reminding box, where the box includes the notification content in the notification message body, a reply control drawn by the notification manager, and the like. Illustratively, as shown in fig. 16 (a), the tablet, upon receiving the message body sent by the mobile phone, may display a reply control 1602 in the message reminding box. As shown in fig. 16 (b), upon detecting that the user clicks control 1602, the tablet may display an input box 1603 and a send control 1604.
When the sink end detects that the user has entered reply content and clicked the send control, the content entered by the user can be obtained through the callback interface getResultsFromIntent(). getResultsFromIntent() can establish the association between the send control and the text input box, so that when the sink end detects the user clicking the send control, it is triggered to obtain the content in the text input box. In this embodiment of the application, callback interfaces can be registered on different systems, so that when the sink end detects the user's reply content, the notification processing module obtains it through the callback interface. After detecting that the user has entered reply content in the input box and clicked the send control, the sink end can trigger sending the user's reply content to the source end.
The sink-end event processing module: upon recognizing that the user has replied to a notification message, it packages the action response event and the user's reply content into a notification message body. The notification message body can use the same packaging format as the notification message sent from the source end to the sink end, making it easy for the source end to parse.
Illustratively, the sink end may send the notification message body to the source end in a BLE data packet. The BLE data packet includes a PDU, and the notification message body may be carried in the service data field of the PDU or in the vendor-specific data field of the PDU. For example, the payload of the service data field may include a plurality of bits, among which some are extensible bits. The sink end and the source end can agree on the content of certain extensible bits. The sink end can encode the notification message body using an encoding such as GBK or ISO8859-1 and carry the encoded information on one or more extensible bits.
In one embodiment, if the notification message body sent by the source end to the sink end carries the notification ID attribute and the notification channel attribute, the sink end may carry the notification ID attribute, the notification channel attribute, the action response event, and the user's reply content in the notification message body it returns. If the notification message body sent by the source end does not carry the notification ID attribute and the notification channel attribute, the notification message body sent by the sink end to the source end may likewise omit them.
The event processing module of the source end: parses the notification message body sent by the sink end to obtain the action response event, the notification ID attribute, the notification channel attribute, and the user's reply content. Through the action response event, the source end can determine that the sink end has replied to a notification message; through the notification ID attribute and the notification channel attribute, it determines which notification message was replied to. The real reply to the notification message is then completed through the following procedure.
(1) Pulling up the social App display interface
If the source end is in the lock screen state, the process can be performed in the following ways.
Mode one
The source end may first determine, according to the packet header of the notification message body, which App the reply content is for. If that App provides an API for replying to messages, then after obtaining the user's reply content, the source end can directly complete the real reply to the notification message through the API. For example, the SMS application provides the API sendTextMessage, which can be used to complete the real reply to a notification message. After the source end obtains the reply content, it can determine through the packet header of the notification message body that the reply is for the SMS application. Since the SMS application provides an API for replying to notification messages, the source end can determine which notification message was replied to through the notification ID attribute and the notification channel attribute, and then directly complete the real reply through sendTextMessage.
It should be understood that, in mode one, the message reply is completed at the source end and the process is imperceptible to the user. If the App does not provide an API for replying to messages, mode two or mode three can be adopted, and the reply is completed through a drag event.
Mode two
If the App supports the lock-screen-loaded flags attribute, the source end can load the corresponding display interface directly on the lock screen interface. For example, a voice call request or a video call request of WeChat is displayed directly on the lock screen interface. The procedure then goes to step (2), and the message reply is completed through the drag event.
Mode three
The reply method of mode three may also be called a cross-device unlocking scheme. In mode three, the sink end can collect the user's identity authentication information; for example, the sink end can obtain the password information the user uses to unlock the source end, such as a digital password, a fingerprint, iris information, or facial features. The sink end can then carry this password information in the notification message body and send it to the source end, so that the source end determines whether to unlock the device. If the source end determines, according to the password information, that the device should be unlocked, it first unlocks and enters the non-lock-screen state, then goes to step (2), where the reply is completed through the drag event.
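The choice among the three modes can be sketched as a simple dispatch; the capability flags and return labels are hypothetical:

```python
def pick_reply_mode(has_reply_api: bool, source_locked: bool,
                    supports_lockscreen_flags: bool, auth_ok: bool) -> str:
    """Select how the source end completes the real reply."""
    if has_reply_api:
        # Mode one: reply directly through the App's own API (e.g. an SMS API).
        return "api_reply"
    if not source_locked:
        # No lock screen: complete the reply directly through the drag event.
        return "drag_event"
    if supports_lockscreen_flags:
        # Mode two: load the interface on the lock screen, then the drag event.
        return "lockscreen_drag_event"
    if auth_ok:
        # Mode three: unlock with the authentication info collected by the sink.
        return "unlock_then_drag_event"
    return "cannot_reply"
```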
(2) Completing the reply through a drag event
In response to the action response event, the source end can determine that the sink end has detected the user's operation of replying to the notification message. The source end can determine the corresponding App according to the packet header of the notification message body, start the App into the foreground, and find the corresponding notification message through the notification ID attribute and the notification channel attribute. The source-end view system can send a Drag and Drop event and the reply content to the corresponding App. The App responds to the Drag and Drop event and pulls up the chat interface of the specific contact through the PendingIntent in the notification message, thereby replying to the notification message. At this point the real reply to the notification message is complete. While sending the Drag and Drop event and the reply content to the corresponding App, the view system may also send a content attribute of the reply content, which indicates the type of the reply content (e.g., text, picture, voice, video, or file). According to the type of the reply content, the App can instruct the three-dimensional graphics processing library (OpenGL ES) to draw different display interfaces. For example, for text content, after the chat interface of the specific contact is pulled up, the text can be displayed directly at the reply position. For voice information, after the chat interface is pulled up, the duration of the voice and a control for playing it may be displayed (when the source end detects that the user clicks the control, the voice can be played). For video information, after the chat interface is pulled up, the first frame of the video and a play control may be displayed (when the source end detects that the user clicks the play control, the video can be played).
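The content-type-dependent rendering described above can be sketched as follows; the type names follow the text, and the returned labels are illustrative:

```python
def render_reply(content_type: str) -> str:
    """Choose how the chat interface presents the reply, by content type."""
    if content_type == "text":
        return "show the text at the reply position"
    if content_type == "voice":
        return "show the voice duration and a play control"
    if content_type == "video":
        return "show the first frame and a play control"
    # Other types (picture, file, ...) fall back to a generic entry.
    return "show a generic attachment entry"
```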
(3) Restoring the original state of the source end
After the source end finishes the reply, it hides the interface of the App and can return to the display interface shown before the social App was started. If the source end was originally in the lock screen state, the device is locked again. By restoring the original state of the device, the quick reply function remains imperceptible to the user, which can further improve user experience.
In one embodiment, as shown in fig. 47, in the implementation of the source end and the sink end, the sink end may further include a voice module. For example, as shown in the GUI of fig. 18, a voice module may be included on the car machine. After the mobile phone sends the received message to the car machine, the car machine can convert the text of the message into voice through the voice module and remind the user of the message by voice.
Meanwhile, the microphone at the sink end can pick up the content of the user's voice reply. The sink end can package the content of the voice reply into the notification message body and send it to the source end. The voice module may convert the voice reply into text and send the text to the source end; alternatively, it may carry the voice reply directly in the notification message body without conversion, leaving the source end to determine whether to convert the voice reply into text.
At present, many car central control units have large screens and run intelligent systems, providing rich entertainment functions and driving enjoyment. Smart devices can connect well to the car machine through various interconnection technologies, and notifications on a smart device can be synchronized to the car machine in real time. Converting notification content into a voice broadcast makes the interaction more user-friendly and better meets the safety requirements of the car machine; meanwhile, supporting voice replies avoids the need for the user to reply to messages on the car machine's display screen, improving user experience.
In one embodiment, the sink side may also support the function of attachment reply.
For example, in the GUI shown in fig. 19, the user can send an attachment (project plan) to the user Lily on a notebook computer.
When the sink end sends the notification message body to the source end, the notification message body can carry the information of the attachment. Or, the sink end may also carry the path information of the attachment at the sink end in the notification message body. When the user clicks on the attachment at the source end, the source end can request the attachment under the path from the sink end. The sink end then sends the attachment to the source end via the network transmission channel.
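The path-based variant above (carry only the attachment's path; the source end fetches the file on demand) can be sketched as below. The field names and helper functions are hypothetical, for illustration only.

```python
import json
from pathlib import Path

def make_attachment_notice(message_id: str, attachment_path: str) -> str:
    # Sink end: carry only the attachment's local path in the
    # notification message body, not the file bytes themselves.
    return json.dumps({"message_id": message_id,
                       "attachment_path": attachment_path})

def serve_attachment(notice_json: str, sink_root: str) -> bytes:
    # Sink end: when the source end requests the attachment under the
    # advertised path (e.g., after the user clicks it), read the file
    # and return its bytes for transfer over the network channel.
    rel = json.loads(notice_json)["attachment_path"]
    return (Path(sink_root) / rel).read_bytes()
```

Carrying only the path keeps the notification message body small; the (potentially large) file moves over the network transmission channel only when actually requested.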
It should be understood that BLE data packets may also be used when the sink sends the content or the file of the user voice reply to the source, and the specific sending manner may refer to the description of the foregoing embodiment, and is not described herein again for brevity.
The following describes in detail the implementation of the GUI shown in fig. 20.
As shown in the GUI of fig. 20 (a), when the notebook computer detects an operation of clicking the message alert box 2001 by the user, the notebook computer may send a request to the mobile phone, where the request is for configuration information of the interface element.
Illustratively, the request may be sent to the handset in BLE packets. The BLE data packet includes a PDU, and the request may be carried in a service data field in the PDU or may be carried in a vendor specific data field in the PDU. For example, the payload of the service data field may include a plurality of bits, wherein the plurality of bits includes an extensible bit. The mobile phone and the notebook computer can agree on the content of a certain expandable bit. Illustratively, when an extensible bit is "1", it indicates that configuration information of an interface element is requested.
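The agreed-upon extensible bit in the service data field's payload can be manipulated as in the sketch below. The concrete bit index is an assumption; the real agreement between the two devices is implementation-specific.

```python
# Bit index the two devices are assumed to have agreed on for
# "request configuration information of the interface element".
REQUEST_UI_CONFIG_BIT = 5

def set_flag(payload: bytearray, bit: int) -> bytearray:
    """Sender side: set one extensible bit in the payload to 1."""
    payload[bit // 8] |= 1 << (bit % 8)
    return payload

def flag_is_set(payload: bytes, bit: int) -> bool:
    """Receiver side: check whether the agreed bit is 1."""
    return bool(payload[bit // 8] & (1 << (bit % 8)))
```

The notebook computer would set the bit before sending the BLE data packet; on receipt, the mobile phone checks the same bit to learn that configuration information is being requested.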
A source side (e.g., a mobile phone) may determine one or more UI interface elements by using an Integrated Development Environment (IDE) tool, and generate an extensible markup language (XML) file.
The XML file includes: the UI interface background, the source-side UI elements, the size of each UI element (for example, a UI element displayed at the sink end may be 2 times larger than at the source end), and the position of each UI element, where the position may include the directional relationship between the UI element and the controls added by the sink end, such as the text input box.
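A file with these contents might be generated as in the following sketch. The element and attribute names (`ui`, `element`, `control`, `scale`, `position`) are illustrative assumptions, not the actual schema produced by the IDE tool.

```python
import xml.etree.ElementTree as ET

def build_ui_config() -> str:
    """Source end: generate an XML description of the UI to restore
    at the sink end: background, UI element, size, and the positions
    of the sink-added controls relative to the element."""
    root = ET.Element("ui", background="white")
    # scale="2": the sink draws this element 2x the source-side size.
    elem = ET.SubElement(root, "element", id="4801", scale="2")
    ET.SubElement(elem, "control", type="text_input",
                  position="below_center")
    ET.SubElement(elem, "control", type="send_button",
                  position="below_right")
    return ET.tostring(root, encoding="unicode")
```

The sink end would parse this file and draw the message alert box from the background, element size, and control positions, combining them with its local controls.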
Illustratively, (a) in fig. 48 shows a chat interface displayed on the source end. The source end may capture UI element 4801 from the chat interface, and may also determine the background (e.g., white) of the chat interface to be drawn on the sink end, the controls to be added by the sink end (e.g., a send control), and the positions of the text input box and the UI element. For example, the text input box may be centered directly below the UI element, and the send control may be located at the bottom right of the UI element.
For example, the source end may send the XML file to the sink end in BLE packets. The request may be carried in a service data field in the PDU or, alternatively, may be carried in a vendor specific data field in the PDU. For example, the payload of the service data field may include a plurality of bits, wherein the plurality of bits includes an extensible bit. The source end can encode the XML file by using a corresponding encoding technology, and carry the encoded information on one or more extensible bits. The sink terminal can obtain the XML file by decoding the information on the corresponding bit.
The sink end can restore the interface according to the XML file. The sink end may draw the UI interface according to the UI interface background and the size and position of the UI element included in the XML file, as shown in (b) of fig. 20, and the notebook computer may draw the message alert box 2002.
When the notebook computer detects that the user clicks the send control, it can send the reply content to the mobile phone, and the mobile phone completes the real reply to the message. At this time, the notebook computer may request a new XML file from the mobile phone, and the mobile phone may send the new XML file, which may include UI element 4802, so that the notebook computer can draw the message alert box shown in (c) of fig. 20 according to UI element 4802. It should be understood that the specific processes of the notebook computer sending the reply content to the mobile phone, requesting the new XML file, and the mobile phone sending the new XML file may refer to the description in the above embodiments, and for brevity are not described herein again.
In one embodiment, when the laptop detects the user slider 2004, the laptop may request more UI elements from the cell phone. Illustratively, after receiving the request, the mobile phone sends the UI element 4803 to the notebook computer in an XML file. The notebook computer may draw a message alert box as shown in (d) of fig. 20 according to the XML file.
In the embodiment of the application, the sink end can restore the UI elements by means of a control for this purpose, which can be combined with local controls of the sink end (such as a button control, a text control, and a list control) to draw the message reminding window.
Another implementation of the GUI shown in fig. 21 to 22 will be described in detail below.
Fig. 49 shows a process of reproducing the source-end interface at the sink end through audio-video capture. The source end can sample its entire display interface at a certain sampling frequency (for example, 60 Hz), encode the sampled data, and send it together with the position information of the chat interface on the source-end interface to the sink end. The sink end first decodes the data sent by the source end, then crops the decoded audio-video data according to the position information of the chat interface and renders the image, so that a mirror image of the source-end chat interface is obtained at the sink end. After cropping and rendering the image, the sink end can also add a button control (for example, a send control) and a text input box.
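The cropping step (keep only the chat-interface region of the decoded frame) can be sketched as below; the frame is abstracted to a nested list of pixels, and the `(x, y, w, h)` region format is an assumption.

```python
def crop_frame(frame, region):
    """Sink end: cut a decoded frame down to the chat-interface
    region reported by the source end.

    `frame` is a height x width grid of pixels (nested lists here);
    `region` is (x, y, w, h) in source-interface coordinates.
    """
    x, y, w, h = region
    return [row[x:x + w] for row in frame[y:y + h]]
```

For example, on a 6x8 frame, `crop_frame(frame, (2, 1, 3, 2))` keeps the 3-pixel-wide, 2-pixel-high window starting at column 2, row 1; the sink end then renders only this mirror of the chat interface and draws its own controls around it.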
[ 04.01.2021 is corrected according to rules 91 ] as shown in fig. 21 (b), after the mobile phone receives the instruction information sent by the notebook computer and the face information of the user Lily collected by the notebook computer, the mobile phone can connect the video call and display a video call interface. As shown in fig. 48 (b), the mobile phone may sample the image information in the dashed line frame 4804 of the mobile phone at a certain sampling frequency (e.g., 60 Hz), and video-encode the sampled data and transmit the data to the notebook computer. The notebook computer, upon receiving the encoded data, may decode the data to obtain video data, and display the decoded video data in a window 2105 as shown in fig. 21. In one embodiment, the cell phone may also indicate added controls on the notebook computer (e.g., hang up controls) and the positional relationship of the controls to the UI elements of dashed box 4804. For example, the hang-up control may be located in a position centered directly below the UI element of dashed box 4804.
In one embodiment, after detecting that the user clicks the control 2103, the notebook computer may send the instruction information and the face information of the user Lily acquired by the notebook computer to the mobile phone, and after receiving the instruction information, the mobile phone may connect the video call and send the face information of the user Lily acquired by the notebook computer to the device of the user Tom (through the server), so that the user Tom may see the face information of the user Lily through the device. At this time, the mobile phone may not display the video call interface, for example, the mobile phone turns off the screen after the video call is connected.
After receiving the face information of the user Tom (through the server), the mobile phone can send the face information to the notebook computer. Therefore, the notebook computer displays the face information of the user Tom sent to the notebook computer by the mobile phone and the face information of the user Lily acquired by the camera of the notebook computer in the window 2104.
As shown in fig. 22 (b), when the notebook computer detects that the user clicks the control 2203, the notebook computer may send instruction information to the mobile phone, where the instruction information is used to instruct the mobile phone to connect the voice call. If the notebook computer detects the voice content output by the Lily user, the notebook computer can also send the voice content to the mobile phone. After receiving the indication information sent by the notebook computer and the voice information collected by the notebook computer and output by the user Lily, the mobile phone can put through the voice call and display a voice call interface. The mobile phone may send the voice content output by the user Lily to the device of the user Tom (through the server), so that the user Tom may hear the voice content of the user Lily through its own device.
The mobile phone can perform audio coding on the voice content sent by the user Tom and then send the voice content to the notebook computer. The notebook computer can decode the encoded data after receiving the data, so as to obtain the voice content sent by the user Tom. The notebook computer may output the voice content uttered by the user Tom through the speaker, so that the user Lily hears the voice content of the user Tom.
Fig. 50 shows a schematic flow chart of a method 5000 of quick reply provided by an embodiment of the present application. As shown in fig. 50, the method 5000 may be performed by a first electronic device and a second electronic device, the first electronic device may be the source terminal in fig. 47 and the second electronic device may be the sink terminal in fig. 47, and the method 5000 includes:
s5001, the first electronic device receives the message.
Illustratively, as shown in (a) of fig. 16, the first electronic device may be a mobile phone, and the mobile phone receives a message sent by the user Tom.
Illustratively, as shown in (a) of fig. 19, the first electronic device may be a mobile phone, and the mobile phone receives a message sent by the user Tom.
It should be understood that the message in the embodiment of the present application may be a message sent by a server of a different application, and the type of the message includes, but is not limited to, text, voice, audio, video, link, share (e.g., location share), invite (e.g., invite to join a group), and the like.
S5002, when determining that the device focused by the owner of the first electronic device is the second electronic device, the first electronic device sends the message and indication information to the second electronic device, where the indication information is used to indicate that the second electronic device adds a reply control to the message.
Alternatively, the indication information may be a shortcut reply flag attribute.
It should be understood that, for the process of the first electronic device sending the message and the indication information to the second electronic device, reference may be made to the description in the foregoing embodiments, and details are not described herein for brevity.
In one embodiment, when determining that the device on which the owner of the first electronic device is focused is the second electronic device, the first electronic device sends the message to the second electronic device without sending the indication information. The second electronic device, in response to receiving the message, can add a reply control to the message.
In one embodiment, the second electronic device can add a reply control to the message in response to receiving the message and determining that the message is a message in an IM-class application.
In one embodiment, the first electronic device, upon determining that the focus of its owner is not on the first electronic device, may send request information to a peripheral device, the request information being used to request the peripheral device to determine whether the owner's focus is on the peripheral device. Alternatively, the first electronic device may store a device list for message forwarding, and when it determines that its owner's focus is not on the first electronic device, it may send the request information to the devices in the device list.
In one embodiment, if the second electronic device is an electronic device in the device list of the first electronic device and user characteristics of the owner of the first electronic device are stored in the second electronic device (for example, the second electronic device and the first electronic device are two devices under the same user (logged in with the same user account or user ID), and the same user characteristics (for example, fingerprint information, face information, voiceprint information, iris information, and the like) may be stored in both devices), the first electronic device may send the request information to the second electronic device through a server or by short-distance communication. After receiving the request information, the second electronic device can collect the user characteristics and judge whether they match the preset user characteristics of the owner of the first electronic device. If they match, it sends response information to the first electronic device, the response information indicating that the owner of the first electronic device is focused on the second electronic device. The first electronic device may send the message to the second electronic device after receiving the response information.
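The check performed by the second electronic device can be sketched as below. Real face or voiceprint matching is a similarity comparison against a stored template; exact byte equality stands in for it here, and the function name is an assumption.

```python
import hmac

def focus_check(collected_feature: bytes, owner_feature: bytes) -> bool:
    """Second device: on receiving a focus request, compare the freshly
    collected user feature (face/voiceprint template, abstracted to
    bytes) with the stored feature of the first device's owner, and
    answer whether the owner is focused on this device.
    """
    # compare_digest avoids timing leaks when matching sensitive
    # feature data; a real system would use a similarity score.
    return hmac.compare_digest(collected_feature, owner_feature)
```

Only when `focus_check` succeeds does the second device send the response information, after which the first device forwards the message; this keeps the message from being forwarded to a device some other person is using.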
Illustratively, the request information may be carried in BLE packets. The BLE data packet includes a PDU, and the request information may be carried in a service data field in the PDU or may be carried in a vendor specific data field in the PDU. For example, a payload (payload) of the service data field may include a plurality of bits, wherein the plurality of bits includes an extensible bit. The first electronic device and the second electronic device may agree in advance on a bit for transmission of the request message. When this extendable bit is a preset value (e.g., 1), the second electronic device can learn that the first electronic device requests it to determine whether the user is focused on the device.
In one embodiment, the first electronic device may store priorities for determining the focus device. Illustratively, the prioritization may be:
(1) A visual focus;
(2) An on-body device;
(3) The focus of the interaction.
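The priority list above can be applied as in the following sketch: among the devices that currently satisfy some focus criterion, pick the one satisfying the highest-priority criterion. The dictionary-based interface is an assumption for illustration.

```python
# Priority order from the embodiment: visual focus first, then the
# on-body (worn) device, then the device the user last interacted with.
FOCUS_PRIORITY = ["visual_focus", "on_body_device", "interaction_focus"]

def pick_focus_device(candidates):
    """candidates maps a focus type to the device currently satisfying
    it (absent types have no qualifying device). Returns the device
    chosen as the user's focus device, or None if there is none."""
    for focus_type in FOCUS_PRIORITY:
        if focus_type in candidates:
            return candidates[focus_type]
    return None
```

For example, if a smartwatch is on the user's body and a PC was merely the last device interacted with, the watch wins because the on-body criterion outranks the interaction criterion.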
It should be understood that, for the process of the first electronic device determining the focus device of the main focus of the first electronic device, reference may be made to the description in the process shown in fig. 45, and for brevity, no detailed description is given here.
Illustratively, the response information may be carried in a service data field or a vendor specific data field of the PDU in the BLE data packet. For example, the response information may be carried on scalable bits in the payload of the service data field. The first electronic device and the second electronic device may agree in advance on a certain scalable bit for transmission of the response message. When the extendable bit is a preset value (e.g., 1), it indicates that the second electronic device determines that the device focused by the user is the second electronic device.
In an embodiment, if the second electronic device is an electronic device in the device list of the first electronic device but the user characteristics of the owner of the first electronic device are not stored in it, or no device list is stored in the first electronic device, the second electronic device is a device around the first electronic device. The first electronic device may send the request information to the second electronic device upon determining that its owner is not focused on the first electronic device. After receiving the request information, the second electronic device may collect the user characteristics through its user characteristic collection device and send them to the first electronic device. The first electronic device may send the message to the second electronic device upon determining that the user characteristics collected by the second electronic device match the user characteristics preset in the first electronic device.
It should be understood that, in the case where the second electronic device is in the device list of the first electronic device and stores the user characteristics of the owner of the first electronic device, the second electronic device may also, after receiving the request information, collect the user's characteristics and send them to the first electronic device, so that the first electronic device determines whether its owner is focused on the second electronic device according to the collected characteristics and the preset user characteristics.
For example, the user feature sent by the second electronic device to the first electronic device may be carried in a service data field or a vendor specific field in the PDU.
For example, the second electronic device includes a camera. After receiving the request information, the second electronic device can collect face information through the camera and send it to the first electronic device. The first electronic device judges, according to the face information, whether the user is the owner. For example, the first electronic device may match the face information collected by the second electronic device with face information preset in the first electronic device; if the matching succeeds, it is determined that the owner of the first electronic device is currently focused on the second electronic device.
For another example, the second electronic device includes a microphone. After receiving the request information, the second electronic device may collect voice information uttered by the user through the microphone and send it to the first electronic device. The first electronic device judges, according to the voice information, whether the user is the owner. For example, the first electronic device may extract voiceprint information from the voice information collected by the second electronic device and match it with voiceprint information preset in the first electronic device; if the matching succeeds, it is determined that the user is currently focused on the second electronic device.
In the embodiment of the application, when the first electronic device requests the second electronic device to determine whether the owner of the first electronic device is focused on the second electronic device, the second electronic device may send the collected user characteristics to the first electronic device, so that the first electronic device can determine, according to the received user characteristics and the preset user characteristics, whether the second electronic device is the device on which its owner is focused. This improves the security of message forwarding, avoids disclosure of the user's privacy, and improves the user experience.
S5003, the second electronic device displays a message reminding frame, wherein the message reminding frame comprises the message and the reply control.
Illustratively, as shown in fig. 16 (a), the second electronic device may be a tablet. After the tablet receives the message and the indication information sent by the mobile phone, it can display a message reminding frame 1601, where the message reminding frame 1601 includes the message "Happy Birthday!" and a reply control 1602.
Illustratively, as shown in fig. 19 (a), the second electronic device may be a notebook computer. After the notebook computer receives the message and the indication information sent by the mobile phone, a message reminding box 1901 may be displayed, where the message reminding box 1901 includes a message "please send the project plan to me" and a reply control 1902.
S5004, when detecting an operation of the user replying to the message, the second electronic device sends the reply content to the first electronic device.
For example, as shown in fig. 16 (b), when the tablet detects an operation of clicking the control 1604 by the user, the tablet may send the reply content in the text input box 1603 to the mobile phone.
For example, as shown in (b) of fig. 17, when the notebook computer detects that the user clicks the control 1705, the notebook computer may send the reply content in the text input box 1704 to the mobile phone.
S5005, the first electronic device replies to the message according to the reply content.
In this embodiment of the application, a manner of replying to the message by the first electronic device may refer to a process of actually replying to the message by the event processing module at the source end in fig. 47, and for brevity, no further description is given here.
For example, as shown in fig. 16 (c) to (e), or as shown in fig. 17 (c) to (e), the mobile phone may first open the social App, then open the chat interface with the user Tom, and finally drag the reply content "Thank you!" to the reply position of the message "Happy Birthday!".
Optionally, the method 5000 further comprises: the first electronic device sends the identification information of the message to the second electronic device; and when detecting that the user replies to the message, the second electronic device sends reply content identified by the identification information to the first electronic device. For example, a message body sent by the first electronic device to the second electronic device may carry the content, the indication information, and the identification information of the message.
Optionally, the identification information may be a notification ID attribute and/or a notification channel attribute of the message.
Illustratively, as shown in fig. 17 (a), the handset receives two messages (message 1 from Tom and message 2 from Amy). At this time, the mobile phone sends the message 1 identified by the identification information 1, the message 2 identified by the identification information 2, and the indication information to the notebook computer.
Thus, after receiving the information, the notebook computer can display the message reminding frame 1701 and the message reminding frame 1702. When the notebook computer detects an operation of the user replying to message 1, it may send the reply content (e.g., "Thank you!") identified by identification information 1 to the mobile phone. By identifying the message, the mobile phone can determine which message the reply content sent by the notebook computer is replying to.
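The routing of replies back to their original messages via identification information can be sketched as below; the registry structure and function names are assumptions for illustration.

```python
# First device: remember, per identification information, which
# received message it labels, so a later reply can be matched to it.
pending = {}

def register(ident, message):
    pending[ident] = message

def route_reply(ident, reply_text):
    """First device: match the reply content the second device sent
    back (tagged with the identification information) to the message
    it answers, so several outstanding messages are replied to
    correctly."""
    original = pending.pop(ident)
    return f"replying to {original!r} with {reply_text!r}"
```

With `register("id1", "message 1 from Tom")` and `register("id2", "message 2 from Amy")`, a reply tagged `"id1"` is applied to Tom's message even if Amy's reply arrives first.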
In the embodiment of the application, when the first electronic device receives a plurality of messages, the first electronic device can identify the plurality of messages, so that the first electronic device can distinguish reply contents sent by the second electronic device, and the first electronic device can accurately reply the plurality of messages.
Optionally, the method further comprises: the first electronic equipment displays a first interface before receiving the message; and the first electronic equipment displays the first interface after replying to the message.
In the embodiment of the application, after the first electronic device receives the reply content sent by the second electronic device, the first electronic device can complete the real reply to the message. After the reply is complete, the display may continue with the state prior to receipt of the message. By restoring the original state of the first electronic device, the user can not sense the quick reply function, and the user experience can be further improved.
Optionally, the content of the message is text information, the second electronic device includes a voice function, and the second electronic device prompts the text information to the user through voice after receiving the message.
For example, as shown in the GUI in fig. 18, the car machine includes a voice function, and after receiving the message sent by the mobile phone, the car machine may also prompt the user with the text message by voice. For example, the car machine prompts the user by voice: "Tom sends 'Happy Birthday!' to you through the social App".
In the embodiment of the application, in a scenario where the second electronic device is a car machine, the car machine can prompt the user with the content of the message by voice broadcast, which prevents the user from being distracted by checking the message and improves driving safety.
Optionally, the second electronic device collects the voice information of the user's reply before sending the reply content to the first electronic device; the second electronic device then sends the voice information, or the text information corresponding to the voice information, to the first electronic device.
Illustratively, as shown in the GUI of fig. 18, the user replies with the voice message "Thank you!". The car machine can send the voice information to the mobile phone, or it can send the text information corresponding to the voice information to the mobile phone.
In one embodiment, the second electronic device may send the voice message to the first electronic device, and the first electronic device can reply to the message according to the voice message; alternatively, the first electronic device may reply according to the text message corresponding to the voice message.
Optionally, the reply content is a file.
For example, as shown in fig. 19 (b), after the notebook computer detects that the user drags a word file (project plan.doc) to the text input box, the notebook computer may send the word file to the mobile phone.
Alternatively, the file may also be a multimedia file, such as a photograph, video, audio, and so forth.
In the embodiment of the application, the user can directly complete the reply to the message on the second electronic device without returning to the first electronic device to process the message. Therefore, the user can be prevented from missing important information, and the process that the user returns to the first electronic device to process the message is also avoided, so that the user experience is improved.
Fig. 51 to 54 show the implementation of the GUI shown in fig. 23 to 29 and a specific interaction process between devices.
In a non-limiting embodiment, compared with fig. 2, the system library shown in fig. 51 may further include a networking manager configured to discover devices in the system (or the system 200, the system 300, or the system 400), and a discovered device may be selected as the prompt device and/or the successor device. In some embodiments, when each device in the system is connected to the same AP to form a local area network, each device in the system is a peripheral device, or each device in the system is a device near the user, the networking manager of one device in the system may discover all other online devices under the same AP, and all online devices under the same AP (including the one device and the other devices in the system) complete networking. In some embodiments, the networking manager of one device discovers that, among the other online devices in the local area network of the same AP, the account logged in on some other device is the account of the one device, or is an account associated with the account of the one device; that other device is then a device trusted by the one device, or all devices logged in to the account or an associated account are devices trusted by each other. Networking completed among devices logged in to the same account or associated accounts under the same AP can guarantee the security and privacy of notifications. In some embodiments, when the devices in the system communicate through the mobile network or the internet, the networking manager can discover through the mobile network or the internet that the accounts logged in on some of the devices are the same account or associated accounts; when those devices are located near the user, they complete networking, which can likewise guarantee the security and privacy of notifications.
The associated accounts may be accounts authorized by the same account. Specifically, the distance between devices and the distance between the user and each device may be determined by bluetooth Received Signal Strength Indication (RSSI) ranging or satellite positioning. The networking manager can find the devices in the system near the user and increase the interaction and coordination capability among the devices, so that the user is spared from manually searching for and screening devices in the system, which reduces user operations and improves efficiency.
Fig. 52 shows a flowchart of interaction between devices according to an embodiment of the present application. As shown in fig. 52, when each device in the system is connected to the same AP to form a local area network, the process of networking among the devices may include steps S5201 to S5204, where any device in the system may be the device 521 in fig. 52, and other devices in the system may be the device 522A, the device 522B, or the device 522C. For example, when the mobile phone 201 serves as the device 521, any one of the smart watch 202, the smart speaker 203, the personal computer 204, the smart television 205, and the tablet computer 206 may serve as the device 522A, the device 522B, or the device 522C. The networking process is described in detail as follows:
Step S5201: the networking manager of the device 521 reads the public key corresponding to the current account (the account of the device 521) from the trusted area (e.g., a server), and encrypts the current account using the public key to form an account ciphertext. For example, the account of the device 521 may be a Huawei account; the device 521 may first obtain the public key corresponding to the Huawei account from a Huawei cloud server, then encrypt the Huawei account using the public key to generate the account ciphertext.
Steps S5202 to S5204: the device 521 starts broadcasting the account ciphertext. Illustratively, the device 521 may broadcast by way of bluetooth or Wi-Fi P2P; fig. 52 shows three devices receiving the broadcast, namely the device 522A, the device 522B, and the device 522C. The device 521 starts timing after sending the broadcast, until a statistical time expires (the time may be set by the user or preset, e.g., 3 seconds).
Steps S5205 to S5207: the networking managers of the device 522A, the device 522B, and the device 522C decrypt the account ciphertext using their respective private keys. Specifically, the device 522A reads the private key corresponding to its current account (the account of the device 522A) from the trusted area (e.g., the server), the device 522B reads the private key corresponding to its current account (the account of the device 522B) from the trusted area, and the device 522C reads the private key corresponding to its current account (the account of the device 522C) from the trusted area; each then decrypts the account ciphertext using its private key. The account described in the embodiments of the present application may be an account provided by a cloud service provider for a user, such as a Xiaomi account, a Huawei account, or an Apple account (Apple ID), and may also be an account used for logging in to an application, such as a WeChat account or a Google email account. For example, if the accounts of the device 522A, the device 522B, and the device 522C are all Huawei accounts, the device 522A, the device 522B, and the device 522C may obtain the private key corresponding to the Huawei account of their own device from the Huawei cloud server in advance, and store the private key in the trusted storage area.
Steps S5208 to S5209: in an embodiment, the device 522A and the device 522B decrypt successfully; the decryption result of the device 522A is consistent with its current account (the account of the device 522A) or an associated account, and the decryption result of the device 522B is consistent with its current account (the account of the device 522B) or an associated account.
Step S5210: in one embodiment, the device 522C fails to decrypt; the decryption result of the device 522C is inconsistent with its current account (the account of the device 522C) or an associated account, or the private key of the device 522C cannot decrypt the account ciphertext. The device 522C may either not reply to the device 521 at all, or reply to the device 521 with a decryption-failure result.
Steps S5211 to S5212: the device 522A and the device 522B reply the result of successful decryption to the device 521. After the device 521 receives the replies from the device 522A and the device 522B, the networking manager of the device 521 learns that the device 522A, the device 522B, and the device 521 are on the same account, and establishes network connections among the device 521, the device 522A, and the device 522B, thereby completing networking. In addition, the device 522A and the device 522B may also reply their own device states to the device 521, where the device state includes whether the device is powered on or off, whether the screen is off or lit, and whether the device is worn on a person. When a device is a wearable device, it can determine whether it is worn on the user's body by means such as infrared detection or heart-rate monitoring.
In a possible embodiment, the method further includes step S5213: when the statistical time expires, the devices on the same account and/or associated accounts in the local area network are counted. In the embodiment shown in fig. 52, two devices in the local area network are on the same account and/or associated accounts as the device 521, namely the device 522A and the device 522B; the device 521, the device 522A, and the device 522B establish network connections and complete networking, and the networked devices can send messages to one another.
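As an illustration only, the broadcast-and-decrypt handshake of steps S5201 to S5213 can be sketched as below. This is a minimal sketch, not the patent's implementation: a per-account XOR keystream stands in for the real public/private key pair fetched from the trusted area, and the `Device` class, `derive_key`, and the account strings are all assumed names.

```python
import hashlib

def derive_key(account: str) -> bytes:
    # Stand-in for fetching the account's key material from the cloud trust
    # store; devices on the same account derive the same key.
    return hashlib.sha256(account.encode()).digest()

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Toy symmetric stand-in for the public/private key pair (XOR is its
    # own inverse, so the same routine encrypts and decrypts).
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

class Device:
    def __init__(self, name: str, account: str):
        self.name, self.account = name, account
        self.key = derive_key(account)

    def broadcast_ciphertext(self) -> bytes:
        # S5201-S5204: encrypt the current account and broadcast it.
        return xor_cipher(self.account.encode(), self.key)

    def try_decrypt(self, ciphertext: bytes) -> bool:
        # S5205-S5210: decrypt with the own key; reply success only when
        # the result matches the current (or an associated) account.
        result = xor_cipher(ciphertext, self.key)
        try:
            return result.decode() == self.account
        except UnicodeDecodeError:
            return False

source = Device("phone", "user@huawei")
peers = [Device("watch", "user@huawei"),
         Device("speaker", "user@huawei"),
         Device("guest-tv", "other@huawei")]

ct = source.broadcast_ciphertext()
# S5211-S5213: devices that decrypt successfully reply and join the network.
trusted = [d.name for d in peers if d.try_decrypt(ct)]
print(trusted)  # → ['watch', 'speaker']
```

The guest TV on a different account cannot recover the broadcast account, so it never replies, mirroring step S5210.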
In this way, the other devices decrypt the account ciphertext using the private key corresponding to their own account; if decryption succeeds, the device and the device that sent the account ciphertext trust each other. The networking manager of a device can authenticate other devices according to this method, so that a notification on one of the user's devices can be prompted across devices and transferred to a more appropriate device for processing, while ensuring the security of data access and information transmission between devices.
The system library may further include a notification decision manager 310, which decides whether a background service notification received by the current device needs to be prompted on another device, which device to prompt, and which device should continue the subsequent task. The notification decision manager 310 can increase the interoperability between devices so that tasks in notifications can be handled cooperatively among multiple devices. In one embodiment, as shown in fig. 53, the notification decision manager 310 includes a status manager 311, a prompt device manager 312, and a continuation device manager 313.
The status manager 311 may determine whether a notification of the source device (the device generating the notification, i.e. the first device) needs to be prompted on another device (the prompting device, i.e. the second device), which can increase the interaction capability and the coordination capability between devices and prevent the user from missing important notifications. The status manager 311 may make the determination according to first reference information, which may include: the type of the notification, the type of the notification service (Service), the state of the device (the source device), and whether the user is handling the notification in a timely manner. The types of notifications may include negative-one-screen notifications (notifications on a display screen page), banner notifications, status bar notifications, lock screen notifications, and so on. The types of notification services may include music services, video playing services, video call services, mail services, and so forth. The type of the notification and the type of the notification service may be set by the system, or may be set according to the user's selection; for example, the user may set which notifications are negative-one-screen notifications, banner notifications, status bar notifications, or lock screen notifications, and may also set that notification services of certain types, such as the mail service and the video playing service, need to be prompted on other devices. The state of the device may include whether the device is being used; specifically, whether the device is being used may be determined from whether the device is powered on or off, whether the screen is off or lit, or whether the device is worn on the user.
In one embodiment, if the type of the notification service is an important type such as the video call service or the mail service, and the user does not process the notification within a preset time (or the source device is in a screen-off, unused state), the status manager 311 determines that the notification needs to be prompted on another device (a prompting device). In one embodiment, when the source device is executing a first task and receives a notification for executing a second task, and executing the second task may affect the execution of the first task, the status manager 311 determines that the notification needs to be transferred to another device (a continuation device, i.e. a third device, which executes the task corresponding to the notification) for execution.
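A minimal sketch of the status manager's decision described above, assuming illustrative field names and an assumed 30-second timeliness threshold (the patent only says "a preset time"):

```python
# Service types treated as important is a configurable set in this sketch.
IMPORTANT_SERVICES = {"video_call", "mail"}

def needs_remote_prompt(service_type: str, screen_off: bool,
                        seconds_unhandled: int,
                        unhandled_threshold: int = 30) -> bool:
    # Prompt another device only for important services, and only when the
    # source is screen-off/unused or the user has not handled it in time.
    if service_type not in IMPORTANT_SERVICES:
        return False
    return screen_off or seconds_unhandled >= unhandled_threshold

print(needs_remote_prompt("mail", screen_off=True, seconds_unhandled=0))    # → True
print(needs_remote_prompt("music", screen_off=True, seconds_unhandled=99))  # → False
print(needs_remote_prompt("mail", screen_off=False, seconds_unhandled=10))  # → False
```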
The prompt device manager 312 determines which device in the system is suitable as the prompting device according to second reference information of the devices in the system, which can increase the interaction capability and the coordination capability between devices and prevent the user from missing important notifications. The second reference information includes but is not limited to: the usage state of the device, which may include whether the device is powered on or off, whether the screen is off or lit, whether the device is worn on the user, the distance between the device and the user, and so on, and the physical characteristics of the device. The physical characteristics of the device may include the privacy attributes of the device, the display capability of the device, the audio capability of the device, the interaction capability of the device, and the like, without limitation; for the physical characteristics of several devices and the types of notification services they are suited to perform, see Table 4. For example, in one embodiment, the mobile phone 201 generates a notification, and the status manager 311 determines that the mobile phone 201 is in the screen-off state and the notification needs to be sent to another device for prompting. The prompt device manager 312 learns that the watch 202 in the system is worn on the user, the watch 202 has good privacy, and the watch 202 can display the notification text and has interaction capability; therefore, the watch 202 can serve as the prompting device to prompt the notification generated by the source device. In some embodiments, the prompt device manager 312 may also determine the manner of prompting based on the interaction capability of the prompting device. For example, when the prompting device is the watch 202, the prompting manner may be a text prompt and a vibration prompt; when the prompting device is a speaker, the prompting manner may be a voice prompt.
In some embodiments, the prompt device manager 312 may select a suitable device among the other devices in the networking as the prompting device.
TABLE 4
(Table 4, listing the physical characteristics of several common devices and the types of notification services they are suited to perform, is provided as images in the original publication.)
The continuation device manager 313 determines which device in the system is suitable as the continuation device (the device that executes the task corresponding to the notification received by the source device) according to third reference information of the devices in the system, which can spare the user the operations of searching for and screening the continuation device. The third reference information includes, but is not limited to: the usage state of the device, which may include whether the device is powered on or off, whether the screen is off or lit, whether the device is worn on the user, the distance between the device and the user, and so on, and the physical characteristics of the device. The physical characteristics of the device may include the privacy attributes of the device, the display capability of the device, the audio capability of the device, the interaction capability of the device, and so forth, without limitation; for the physical characteristics of several common devices and the types of notification services they are suited to perform, see Table 4. For example, in an embodiment, the mobile phone 201 generates a notification whose corresponding service type is video playing; the continuation device manager 313 learns that the smart television 205 in the system is turned on, and the smart television 205 has a larger screen than the mobile phone 201 and is more suitable for playing video, so the continuation device manager 313 may use the smart television 205 as the continuation device to execute the service corresponding to the notification. Further, when multiple devices are all suitable as the continuation device, the continuation device manager 313 may take the multiple devices as candidate continuation devices and let the user select which device serves as the continuation device. This can further increase the coordination capability between devices, so that tasks in notifications can be processed cooperatively among the devices.
In some embodiments, the continuation device manager 313 may select a suitable device among the other devices in the networking as the continuation device.
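The prompt-device and continuation-device choices described above can both be viewed as scoring candidates on the second/third reference information. The sketch below is an assumption-laden illustration: the attribute names, weights, and the `pick_device` helper are invented for clarity, not taken from the patent.

```python
def pick_device(devices, weights):
    # Only powered-on devices are candidates; the rest are ranked by a
    # weighted sum over the relevant reference-information attributes.
    powered = [d for d in devices if d["on"]]
    if not powered:
        return None
    return max(powered, key=lambda d: sum(w * d[k] for k, w in weights.items()))

devices = [
    {"name": "watch",    "on": True,  "worn": 1, "near_user": 1,
     "privacy": 1, "display": 0, "interaction": 0},
    {"name": "notebook", "on": True,  "worn": 0, "near_user": 1,
     "privacy": 0, "display": 1, "interaction": 1},
    {"name": "tv",       "on": False, "worn": 0, "near_user": 0,
     "privacy": 0, "display": 1, "interaction": 0},
]

# Prompting favours worn, nearby, private devices (second reference info);
# continuation favours display and interaction capability (third reference info).
prompt_dev = pick_device(devices, {"worn": 3, "near_user": 2, "privacy": 1})
cont_dev   = pick_device(devices, {"display": 3, "interaction": 2})
print(prompt_dev["name"], cont_dev["name"])  # → watch notebook
```

The powered-off TV is filtered out even though its display score is high, matching the managers' use of device state before physical characteristics.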
In the embodiment shown in fig. 51, the notification decision manager 310 is located in the system library; thus, no matter what type a notification is, it can decide whether the background service notification received by the current device needs to be prompted on another device, which device to prompt, and which device should continue the subsequent task. It is understood that in other embodiments, the notification decision manager 310 may be encapsulated in a specific application to decide whether notifications of that specific application need to be prompted on other devices, which device to prompt, and which device should continue the subsequent task.
Fig. 54 shows a flowchart of a notification processing method in one embodiment, and the method shown in fig. 54 includes steps S5410 to S5460. The notification processing method shown in fig. 54 is explained below with reference to an application scenario.
FIG. 23 is a diagram illustrating the devices in an application scenario; in an embodiment, the scenario illustrated in fig. 23 uses the method illustrated in fig. 54. In fig. 23, the mobile phone 201 is the source device 31 in fig. 54, the watch 202 is the prompting device 32 in fig. 54, and the notebook 204 is the continuation device 33 in fig. 54.
In step S5410, as shown in (a) of fig. 23, the mobile phone 201 may generate a notification according to the received background service notification. Illustratively, the email application of the mobile phone 201 receives a new email in the background and may pull up the notification service, running the window manager and the notification manager to pop up a window 2311 on the display screen 2310 to remind the user that a new email has been received. If the mobile phone 201 is in the screen-off, unused state, or the mobile phone 201 is not in front of the user, the user cannot perceive the notification of the mobile phone 201 in time, and important emails may be missed. The status manager 311 of the mobile phone 201 may determine that the notification needs to be sent to another device for prompting according to the type of the notification (in this embodiment, a screen-off notification) and the type of the notification service (in this embodiment, the mail service), which increases the interaction capability and the coordination capability between devices and prevents the user from missing an important email.
Step S5420: a suitable device is selected for prompting, and a suitable device is selected to complete the task of the notification. For example, the prompt device manager 312 of the mobile phone 201 may select a suitable device (as the prompting device) from the devices networked with the mobile phone 201 to perform the prompting, and the continuation device manager 313 of the mobile phone 201 may select a suitable device (as the continuation device) from the devices networked with the mobile phone 201 to execute the task of the notification. Selecting the prompting device and the continuation device from the networked devices helps ensure the security of data access and information transmission between devices.
In one embodiment, the system includes the tablet 206, the watch 202, and the notebook 204. The prompt device manager 312 may learn through the networking manager that the watch 202 is worn on the user; the watch 202 has interaction capability and can display brief information of a notification, but its input capability is limited, so it is not suitable for replying to emails. Further, the watch can instantly alert the user by sound and vibration; therefore, the prompt device manager 312 selects the watch 202 as the prompting device for prompting the user that the mobile phone 201 has received a mail. The prompt device manager 312 may also determine, based on the physical characteristics of the watch 202 (its limited display and input capabilities), that the watch 202 displays the brief information of the notification as text. The prompt device manager 312 determines which device is most suitable for prompting according to the device conditions near the user, and transfers the important notification to a device such as the watch 202 for prompting, which can increase the interaction capability and the coordination capability between devices, prevent the user from missing important notifications, and let the user obtain the latest notification quickly and conveniently.
The continuation device manager 313 learns that the notebook 204 is closest to the watch 202 (the notebook 204 may thus be considered closest to the user), and the notebook 204 has better display and interaction capabilities, suitable for handling email. Therefore, the continuation device manager 313 selects the notebook 204 as the continuation device to perform the task of the notification (i.e., processing the email). For example, the watch 202 may determine the distance to each device in the system by Bluetooth RSSI ranging. In an embodiment, a smart camera may further be connected to the local area network of the system; the smart camera may detect the location of the user through face recognition or infrared detection and send the location of the user to the prompt device manager 312, and the prompt device manager 312 determines, according to the location of the user and the locations of the devices in the system (which may be obtained through satellite positioning), which device in the system is closer to the user and thus more suitable as the prompting device. When a device closer to the user is selected as the prompting device, the user is more likely to notice the prompt.
In step S5430, the mobile phone 201 sends a prompt message (i.e. a first message) to the watch 202 according to the decision of the notification decision manager 310 (i.e., prompt on the watch 202 that the mobile phone 201 has received the email, and process the email on the notebook 204). The prompt message is used for prompting, on the watch 202, that the mobile phone 201 has received the email, and for indicating that the notebook 204 is suitable as the continuation device to process the email; the prompt message may include the information of the notification on the mobile phone 201 and the information of the notebook 204. Illustratively, the prompt message may include one or more of: information of the continuation device (i.e., third device information, such as information for identifying the continuation device, including the name of the continuation device, the media access control (MAC) address of the continuation device, and the like), the intention of the service (Intent: describing the action of an operation in the application, action-related data, and additional data used for communication between components of the application), the name of the service, the service data, the manner of prompting, and so on, for prompting on the watch 202. The mobile phone 201 may send the prompt message to the watch 202 through a wireless connection established by networking, for example, a Bluetooth connection or a P2P connection.
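A hypothetical shape for the prompt message (first message) carrying the fields listed above; the dataclass and its field names are assumptions for illustration, not the patent's wire format.

```python
from dataclasses import dataclass, field, asdict

@dataclass
class PromptMessage:
    continuation_device: dict              # e.g. name and MAC of the notebook
    intent: str                            # action to perform, e.g. "view"
    service_name: str                      # e.g. "mail"
    service_data: dict = field(default_factory=dict)
    prompt_style: tuple = ("text", "vibration")  # manner of prompting on the watch

# The phone fills the message in per the notification decision (S5430).
msg = PromptMessage(
    continuation_device={"name": "notebook-204", "mac": "AA:BB:CC:DD:EE:FF"},
    intent="view",
    service_name="mail",
    service_data={"sender": "Lucy", "subject": "subscription content"},
)
print(asdict(msg)["service_name"])  # → mail
```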
When the mobile phone 201 receives the notification of the new mail, the mobile phone 201 can select the prompting device and the continuation device through the notification decision manager 310 without running the mail application, and the mobile phone 201 sends the prompt message to the prompting device through the wireless connection between the devices according to the decision result of the notification decision manager 310.
In step S5440, after the watch 202 receives the prompt message, it prompts that the mobile phone 201 has received a notification. In a particular embodiment, as shown in fig. 23 (b), after the watch 202 receives the prompt message, the watch 202 runs a notification manager (which may include a system program or a service program). For example, the wireless communication module of the watch 202 monitors whether the watch 202 has received a prompt message; once the watch 202 receives the prompt message, the watch 202 runs the display program, and runs the window manager and the notification manager to display a prompt text 2341 on the display screen 2340. The prompt text 2341 is used to prompt the user that the mobile phone 201 has received an email, and the display screen 2340 of the watch 202 may also display an interface element, such as a shortcut entry 2342, with the corresponding text "view details on notebook". When the watch 202 receives the notification of a new email, it may, through interaction with the user, send an execution message directly to the notebook 204 without running a mail application.
In addition, the watch 202 runs a vibration program to drive a motor in the watch 202, and the watch 202 prompts the user to view its display screen through the vibration of the motor. When the user wants to process the email through the notebook 204, the user may click the shortcut entry 2342.
Step S5450: in response to the user's click operation (i.e., an input), the watch 202 sends an execution message (i.e., a second message) to the notebook 204 to trigger the notebook 204 to process the email. The execution message may include the information required by the notebook 204 to execute the task; for example, the execution message may include one or more of the intention (Intent) of the service, the service name, the service data, and so on, so that after the notebook 204 receives the execution message, it can automatically pull up the corresponding service or run the corresponding application or program, improving the inter-device cooperative processing capability and the efficiency of information processing. In one embodiment, when the user clicks the shortcut entry 2342, the watch 202 may, based on the information of the continuation device in the prompt message, send the execution message directly to the notebook 204 through a wireless connection established by networking, such as a Bluetooth connection or a P2P connection.
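Step S5450 can be sketched as the watch deriving the execution message (second message) from the prompt message it already holds: it keeps only what the continuation device needs to pull up the service. Field names follow the hypothetical prompt-message shape and are assumptions.

```python
def build_execution_message(prompt_msg: dict) -> dict:
    # The watch forwards only the task-execution fields; the continuation
    # device's address (also in the prompt message) is used as the send target.
    return {k: prompt_msg[k] for k in ("intent", "service_name", "service_data")}

prompt_msg = {
    "continuation_device": {"name": "notebook-204"},
    "intent": "view",
    "service_name": "mail",
    "service_data": {"subject": "subscription content"},
    "prompt_style": ("text", "vibration"),
}

# On the user's tap of the shortcut entry, the watch sends this to the notebook.
exec_msg = build_execution_message(prompt_msg)
print(sorted(exec_msg))  # → ['intent', 'service_data', 'service_name']
```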
Step S5460: after the notebook 204 receives the execution message, the notebook 204 executes the task of the notification, popping up an email prompt box or directly opening the corresponding email. As shown in (c) of fig. 23, the corresponding email is directly displayed on the display screen 2370; the email does not need to be opened from the mail application's entry, and the mail application does not need to be in a running state (including background running) before the execution message is received. The user can directly process the email on the notebook 204 through a single interaction with the watch 202, which saves operations, improves display efficiency, and improves the user experience.
Specifically, after receiving the execution message, the notebook 204 opens the corresponding service according to the service name, and executes the corresponding operation according to the description of the Intent and the service data. In one embodiment, the mobile phone 201 receives a "subscription content" mail from Lucy; the service name is used for opening the mail service, and may be, for example, "mail". The Intent tells the operating system of the notebook 204 to perform a "view" action, and the service data tells the operating system of the notebook 204 that the object of the action is the "subscription content" mail; the corresponding Activity is then invoked, the corresponding operation is executed, and the "subscription content" mail sent by Lucy is opened, so that the "subscription content" email can be directly displayed on the display screen 2370 of the notebook 204.
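A sketch of how the continuation device might dispatch the execution message: look up the service by its name, then run the action described by the Intent on the service data. The `SERVICES` registry and the `open_mail` handler are illustrative assumptions, not the patent's implementation.

```python
def open_mail(action: str, data: dict) -> str:
    # Handler for the "mail" service: only the "view" action is modelled here.
    if action != "view":
        raise ValueError(f"unsupported mail action: {action}")
    return f"opened mail '{data['subject']}' from {data['sender']}"

SERVICES = {"mail": open_mail}  # service name → handler (stand-in for Activities)

def dispatch(exec_msg: dict) -> str:
    # S5460: the service name selects the service; the Intent and service
    # data describe the operation to execute.
    handler = SERVICES[exec_msg["service_name"]]
    return handler(exec_msg["intent"], exec_msg["service_data"])

result = dispatch({"service_name": "mail", "intent": "view",
                   "service_data": {"sender": "Lucy",
                                    "subject": "subscription content"}})
print(result)  # → opened mail 'subscription content' from Lucy
```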
If the notebook 204 is in the unlocked state when it receives the execution message, the mail service is automatically pulled up and the corresponding email is opened, and the user can directly process and reply to the email. If the notebook 204 is in the screen-locked state when it receives the execution message, the mail service is automatically pulled up and the corresponding email is opened after the user unlocks the notebook 204, and the user can then directly process and reply to the email. Alternatively, if the notebook 204 is in the screen-locked state when it receives the execution message, the operating system of the notebook may automatically pull up the mail service and open the corresponding email before the user unlocks the notebook 204, and after the user unlocks the notebook 204, the user may directly process and reply to the email.
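The three lock-screen behaviours above can be encoded as a small policy function; the function name and its return strings are illustrative assumptions.

```python
def mail_open_policy(locked: bool, open_before_unlock: bool) -> str:
    # Unlocked: pull up the mail service and open the email immediately.
    if not locked:
        return "open immediately"
    # Locked: either open in the background now and show after unlock,
    # or defer opening until the user unlocks.
    return ("open now, show after unlock" if open_before_unlock
            else "open after unlock")

print(mail_open_policy(locked=False, open_before_unlock=False))  # → open immediately
print(mail_open_policy(locked=True, open_before_unlock=False))   # → open after unlock
print(mail_open_policy(locked=True, open_before_unlock=True))    # → open now, show after unlock
```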
In one embodiment, if no mail application is installed on the notebook 204, then when the notebook 204 receives the execution message, the notebook 204 may send a screen-casting request to the mobile phone 201 through a wireless connection established by networking, where the screen-casting request is used to invite the mobile phone 201 to execute the service corresponding to the notification and to project the display interface of its own screen onto the notebook 204. The screen-casting request may carry an identifier of the notebook 204 (such as the name of the notebook and/or the MAC address of the notebook), so that the invited device, the mobile phone 201, sends the display data of its display interface to the notebook 204 according to the identifier of the notebook 204. In a specific scenario, when the notebook 204 sends the screen-casting request to the mobile phone 201, the mobile phone 201 wakes up the screen-casting service process, which may further wake up the operating system of the mobile phone 201, so that the operating system of the mobile phone 201 generates the corresponding display data and stores it in the graphics memory of the mobile phone 201; meanwhile, the mobile phone 201 sends the display data to the notebook 204 through the wireless connection, and after the notebook 204 receives the display data, the standby interface or the current interface of the mobile phone 201 is displayed on the display screen 2370 of the notebook 204. In another specific scenario, when the notebook 204 sends the screen-casting request to the mobile phone 201, the screen-casting request further includes the execution message; the mobile phone 201 wakes up the screen-casting service process, which may further wake up the operating system of the mobile phone 201, and the operating system of the mobile phone 201 then generates the corresponding display data and stores it in the graphics memory of the mobile phone 201.
Meanwhile, the mobile phone 201 runs the mail application according to the execution message, invokes the corresponding Activity, and opens the "subscription content" mail sent by Lucy. In addition, the mobile phone 201 sends the display data to the notebook 204 through the wireless connection, and after the notebook 204 receives the display data, the mobile phone 201 casts the "subscription content" email directly onto the display screen 2370 of the notebook 204. Subsequently, the screen-casting service process may continue to transmit the display data to the notebook 204 in real time.
It should be noted that when the screen-casting service process wakes up the operating system of the mobile phone 201, the screen of the mobile phone 201 may be woken up at the same time, or the screen-casting service may be executed with the screen remaining off, which is not limited in the embodiments of the present application.
In one embodiment, step S5450 may further include: in response to the user's click operation (i.e., a confirmation operation), the watch 202 sends a confirmation message to the mobile phone 201 through the wireless connection, informing the mobile phone 201 that the user has confirmed viewing the mail on the notebook 204. After receiving the confirmation message, the mobile phone 201 sends the execution message to the notebook 204 through the wireless connection to trigger the notebook 204 to process the mail, where the execution message is used to open the mail application on the notebook 204 or to cast the mail application on the mobile phone 201 onto the notebook 204. In a specific scenario, after the notebook 204 receives the execution message, the notebook 204 executes the task of the notification, popping up an email prompt box or directly opening the corresponding email. In another specific scenario, the notebook 204 sends a screen-casting request to the mobile phone 201, where the screen-casting request is used to invite the mobile phone 201 to execute the service corresponding to the notification and to project the display interface of its own screen onto the notebook 204. In another specific scenario, after receiving the confirmation message sent by the watch 202, the mobile phone 201 runs the mail application, invokes the corresponding Activity and Intent, and opens the "subscription content" mail sent by Lucy. In addition, the mobile phone 201 sends an execution message to the notebook 204, where the execution message is used to cast the "subscription content" mail on the mobile phone 201 onto the display screen 2370 of the notebook 204. After receiving the execution message, the notebook 204 establishes a screen-casting connection with the mobile phone 201, the mobile phone 201 casts the "subscription content" mail onto the display screen 2370 of the notebook 204, and the user can view the mail on the notebook 204.
In one embodiment, the continuation device manager 313 of the watch 202 may be used to select which device in the system serves as the continuation device. Specifically, in step S5420, the notification decision manager 310 of the mobile phone 201 selects the watch 202 as the prompting device, and the mobile phone 201 may not select a continuation device; in step S5430, the mobile phone 201 sends the prompt message to the watch 202, where the prompt message is used to prompt on the watch 202 that the mobile phone 201 has received an email, and the prompt message may not include information of a continuation device; in step S5440, the watch 202 prompts that the mobile phone 201 has received the email, and meanwhile the continuation device manager 313 of the watch 202 selects the notebook 204 as the continuation device according to the states of the devices in the system obtained through networking, and displays the shortcut entry for the notebook 204; in step S5450, the watch 202 sends the execution message to the notebook 204 in response to the user's operation of clicking the shortcut entry; in step S5460, after receiving the execution message, the notebook 204 executes the task of the notification.
In one embodiment, each device in the system may establish a wireless connection and learn the states of the other devices through the wireless connection. In a specific embodiment, after the mobile phone 201 generates the notification, the mobile phone 201 sends a request for establishing a connection, and after receiving the response messages from the tablet, the television, the speaker, the notebook computer, and so on, the mobile phone 201 may first establish an untrusted connection with each device. Based on the established untrusted connections, the information required for trusted authentication may be transmitted between the mobile phone 201 and each device. Illustratively, the mobile phone 201 may learn the state of each device in the system in either of the following ways:
Mode 1: each device in the system reports its state to the mobile phone 201 through the established wireless connection.
Mode 2: the mobile phone 201 obtains the states of the devices in the system from a cloud server. The server may be a device that provides smart home services for the mobile phone 201 and various home devices, such as the speaker, the television, and the tablet.
A specific implementation in which the mobile phone 201 determines whether there is a device that can be used to prompt the notification generated by the mobile phone 201 or to execute the task of the notification may be: before the mobile phone 201 generates the notification, the mobile phone 201 and the devices in the system have already completed networking, and after the mobile phone 201 generates the notification, the mobile phone 201 selects the prompting device and the continuation device from the networked devices.
In some embodiments, verification of the trust relationship between the mobile phone 201 and each device may then be accomplished through central authentication or distributed authentication, so that the mobile phone 201 determines which of the tablet, the television, the speaker, the notebook, and the like are trusted devices, or which of them and the mobile phone 201 trust each other. For example, the mobile phone 201 may determine that the tablet, the television, and the sound box are trusted devices (that is, the mobile phone 201 and the tablet trust each other, the mobile phone 201 and the television trust each other, and the mobile phone 201 and the sound box trust each other), while the notebook is not a trusted device, or the mobile phone 201 and the notebook do not trust each other.
In one embodiment, the smart speaker 203 may be the reminder device 32 of FIG. 54. When the mobile phone 201 receives a notification of a new e-mail, the reminder device manager 312 may select the smart speaker 203 as the reminder device if the smart speaker 203 is closer to the user. The mobile phone 201 may send a prompt message to a speaker server (e.g., a cloud server) via a router, where the speaker server performs voice interaction with the user via the smart speaker 203. After receiving the prompt message, the speaker server may control the smart speaker 203 to play a voice prompt telling the user that the e-mail has been received; for example, the smart speaker 203 may announce "the mobile phone has received a new e-mail". In addition, the smart speaker 203 may also play a voice prompt asking the user to confirm transfer to the notebook for processing; for example, the smart speaker 203 may ask "transfer to the notebook for processing?". When the user wants to process the e-mail through the notebook 204, the user replies with "confirm", "agree", or the like, and the smart speaker 203 forwards the user's reply to the speaker server. In response to the user's reply, the speaker server may send an execution message to the notebook 204 through the router. The user may thus process the e-mail directly on the notebook 204 through a single interaction with the smart speaker 203.
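The speaker server's confirmation step can be sketched as a small decision function: announce the notification, ask whether to transfer, and send an execution message only on an affirmative reply. The set of affirmative phrases and the callback name are illustrative assumptions.

```python
# Phrases the patent gives as examples ("confirm", "agree") plus assumed variants.
AFFIRMATIVE_REPLIES = {"confirm", "agree", "yes", "ok"}

def handle_user_reply(reply, send_execution_message):
    """Return True if the task was handed off to the notebook.

    send_execution_message is a hypothetical callback standing in for the
    speaker server routing an execution message through the router.
    """
    if reply.strip().lower() in AFFIRMATIVE_REPLIES:
        send_execution_message("notebook_204")
        return True
    # Any other reply: keep the e-mail on the source phone.
    return False

sent_to = []
handed_off = handle_user_reply("Confirm", sent_to.append)
```

A real speaker server would also handle speech recognition errors and timeouts; the sketch only covers the yes/no branching described above.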
In one embodiment, the source device 31 and the continuation device 33 may be the same device; for example, as shown in FIG. 24, the source device 31 and the continuation device 33 are both the notebook 204. As shown in (a) of fig. 24, when the notebook 204 receives a notification of a new e-mail, the display 2370 of the notebook 204 may pop up a window 2371 to prompt the user that the notebook 204 has received the e-mail. If the user is wearing the watch 202, the reminder device manager 312 may select the watch 202 as the reminder device. After the watch 202 receives the prompt message, the display 230 of the watch 202 displays prompt text 2341 and a shortcut entry 2342, where the wording corresponding to the shortcut entry 2342 is "notebook view details". As shown in (b) of fig. 24, when the user wants to process the e-mail through the notebook 204, the shortcut entry 2342 may be clicked. In response to the user's click, the watch 202 sends an execution message to the notebook 204. As shown in (c) of fig. 24, after the notebook 204 receives the execution message, the notebook 204 may pull up the e-mail service and open the corresponding e-mail, and the user may directly read and reply to the e-mail. The user may thus process the e-mail directly on the notebook 204 through a single interaction with the watch 202.
In one embodiment, one or more of the notification decision manager 310, the reminder device manager 312, and the connection device manager 313 may be located on a decision server. For example, when the mobile phone 201 generates a notification, the mobile phone 201 reports the notification to the decision server, and the decision server decides whether the notification received by the mobile phone 201 needs to be prompted on another device, on which device to prompt it, and on which device to continue the task of the notification; after making the decision, the decision server sends the prompt message to the reminder device. In another embodiment, one or more of the notification decision manager 310, the reminder device manager 312, and the connection device manager 313 may be located on different devices. For example, the notification decision manager 310 and the reminder device manager 312 in the mobile phone 201 decide whether the notification received by the mobile phone 201 needs to be prompted on another device and on which device to prompt it, and the connection device manager 313 in the reminder device decides on which device to continue the task of the notification.
FIG. 25 is a diagram illustrating devices in an application scenario in one embodiment, the scenario illustrated in FIG. 25 using the method illustrated in FIG. 54. In fig. 25, the handset 201 is the source device 31 in fig. 54, the tablet 206 is the reminder device 32 in fig. 54, and the smart tv 205 is the continuation device 33 in fig. 54. In different application scenarios, the same process in the notification processing method is not described again.
In step S5410, as shown in (a) of fig. 25, when the video application background of the mobile phone 201 receives a notification of video playing update, the mobile phone 201 may pop up a window 2312 on the display 2310 to remind the user that the basketball game has been updated. If the user cannot timely perceive the notification from the cellular phone 201, the latest basketball game may be missed. The state manager 311 of the handset 201 may determine that the notification needs to be sent to another device for prompt according to the type of the notification (in this embodiment, a screen-off notification) and the type of the notification service (in this embodiment, a video application), and may increase the interaction capability and the coordination capability between the devices to avoid the user missing the latest basketball game.
Step S5420, exemplarily, the tablet computer 206, the smart television 205, and the notebook 204 are all networked with the mobile phone 201. The reminder device manager 312 learns that the tablet computer 206 is in use, and the tablet computer 206 can display the notification and prompt the user immediately, so the reminder device manager 312 selects the tablet computer 206 as the reminder device. The reminder device manager 312 may also determine that the tablet computer 206 is to display the notification as text, which can increase the interaction and coordination capabilities between devices and prevent the user from missing the latest basketball game. In one embodiment, if the screen of the tablet computer 206 is on and an application is running, the tablet computer 206 may send its screen-on state and the running state of the application to the mobile phone 201 through the Bluetooth connection or the P2P connection, and the mobile phone 201 may determine from these that the tablet computer 206 is in use.
The connection device manager 313 of the mobile phone 201 learns that the smart television 205 and the notebook 204 are both close to the tablet computer 206, and that both have good display capabilities and are suitable for playing video. Therefore, the connection device manager 313 may push the smart television 205 and the notebook 204 to the user, and the user selects one of them as the connection device to execute the notified task (i.e., play the basketball game).
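The shortlisting just described (nearby devices with adequate display capability, best display pushed first) can be sketched as a filter-and-rank step. The distance threshold and the numeric display score are invented for illustration; the patent does not specify a scoring scheme.

```python
def shortlist_connection_devices(devices, near_m=5.0, min_display=2):
    """Return candidate connection devices, best display first.

    devices: list of dicts with assumed keys "name", "distance_m" (distance
    to the reminder device) and "display_score" (hypothetical capability rank).
    """
    nearby = [d for d in devices
              if d["distance_m"] <= near_m and d["display_score"] >= min_display]
    # Best display first; the user picks one of the pushed candidates.
    return sorted(nearby, key=lambda d: -d["display_score"])

candidates = shortlist_connection_devices([
    {"name": "smart_tv_205", "distance_m": 3.0, "display_score": 3},
    {"name": "notebook_204", "distance_m": 2.0, "display_score": 2},
    {"name": "watch_202",    "distance_m": 0.1, "display_score": 1},  # screen too small
])
```

With these invented scores the manager would push the smart television and the notebook, matching the scenario above.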
Step S5430, the mobile phone 201 sends a prompt message to the tablet computer 206 according to the decision of the notification decision manager 310, where the prompt message may include one or more of: information of the two connection devices (information of the smart television 205 and the notebook 204), the intent of the service, the service name, service data, a prompting mode, and the like.
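The kinds of information listed for the prompt message could be carried in a structure like the following, serialized for the wireless link. All field names and the JSON encoding are assumptions; the patent only enumerates the categories of information.

```python
import json

# Hypothetical shape of the prompt message from step S5430.
prompt_message = {
    "connection_devices": [
        {"id": "smart_tv_205", "entry_label": "smart screen view"},
        {"id": "notebook_204", "entry_label": "notebook view"},
    ],
    "service_intent": "play_video",            # the intent of the service
    "service_name": "video_app_update",        # the service name
    "service_data": {"program": "basketball_game"},  # service data payload
    "prompt_mode": "text",                     # e.g. text / icon-and-text / voice
}

wire = json.dumps(prompt_message)   # serialized for transport
decoded = json.loads(wire)          # what the reminder device would parse
```

The reminder device would render one shortcut entry per `connection_devices` item, which is exactly the two-entry window described in step S5440 below.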
In step S5440, after the tablet computer 206 receives the prompt message, it prompts that the mobile phone 201 has received a notification. As shown in (b) of fig. 25, after the tablet computer 206 receives the prompt message, the tablet computer 206 runs the window manager and the notification manager to pop up a window 2321 on the display screen 2320, where the window 2321 is used to prompt the user that the mobile phone 201 has received a notification of the basketball game update, and the window 2321 may be displayed in the status bar or as a banner notification. The window 2321 may also display two shortcut entries: the wording corresponding to the shortcut entry 2322 is "smart screen view", and the wording corresponding to the shortcut entry 2323 is "notebook view". When the user wants to watch the basketball game through the smart television 205, the shortcut entry 2322 may be clicked. When the user wants to watch the basketball game through the notebook 204, the shortcut entry 2323 may be clicked.
Step S5450, in an embodiment, the user wants to watch the basketball game through the smart television 205 and clicks the shortcut entry 2322. In response to the user's click operation (i.e., the confirmation operation), the tablet computer 206 sends an execution message to the smart television 205 to trigger the smart television 205 to play the basketball game. By prompting the user to select a connection device in an interactive manner and responding to the user's confirmation, the tablet computer 206 automatically continues the notified task on the optimal smart television 205 for processing, which spares the user the repeated steps of going back to the source device (the mobile phone 201) to search and operate, improves task-processing efficiency, and makes the experience more natural.
Step S5460, as shown in (c) of fig. 25, after the smart television 205 receives the execution message, the smart television 205 executes the notified task: it opens the video application, invokes the corresponding activity, and plays the basketball game. The video service is pulled up and the basketball game is played directly, without the user having to open the corresponding video application installed on the smart television 205, which saves user operations and improves the user experience.
If the smart television 205 receives the execution message but does not have the corresponding video application installed, the smart television 205 sends a screen-casting request to the mobile phone 201, where the screen-casting request is used to invite the mobile phone 201 to execute the service corresponding to the notification and project its display interface onto the screen of the smart television 205. The identifier of the smart television 205 carried in the screen-casting request enables the invited mobile phone 201 to send the display data of its display interface to the smart television 205 according to that identifier. When the smart television 205 sends the screen-casting request to the mobile phone 201, the mobile phone 201 wakes up a screen-casting service process, which may in turn wake up the operating system of the mobile phone 201; the operating system of the mobile phone 201 then generates the corresponding display data and stores it in the display card of the mobile phone 201. Meanwhile, the mobile phone 201 runs the video application according to the execution message, invokes the corresponding activity, and plays the basketball game. In addition, the mobile phone 201 sends the display data to the smart television 205 through the wireless connection; after the smart television 205 receives the display data, the mobile phone 201 directly projects the basketball game onto the display screen 2360 of the smart television 205. Subsequently, the screen-casting service process may continue to transmit display data to the smart television 205 in real time. It should be noted that, when the screen-casting service process wakes up the operating system of the mobile phone 201, the screen of the mobile phone 201 may be woken up at the same time, or the screen-casting service may be executed with the screen off; this is not limited in this embodiment of the present invention.
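The fallback above, where a continuation device lacking the application invites the source phone to cast instead, can be sketched as a branch on the installed-app set. The message fields and state flags are illustrative assumptions.

```python
def handle_execution_message(installed_apps, msg, phone_state):
    """Hypothetical continuation-device handler for an execution message.

    installed_apps: set of app names present on the TV.
    msg: assumed shape {"app": ..., "tv_id": ...}.
    phone_state: mutable dict standing in for the source phone's side effects.
    """
    if msg["app"] in installed_apps:
        # Normal path (step S5460): run the app locally on the TV.
        return "tv_runs_locally"
    # App missing: send a screen-casting request carrying the TV's identifier;
    # the phone wakes its casting service and streams display data back.
    phone_state["casting_process_awake"] = True
    phone_state["cast_target"] = msg["tv_id"]
    return "phone_casts_to_tv"

phone_state = {"casting_process_awake": False}
outcome = handle_execution_message(set(), {"app": "video_app", "tv_id": "tv_205"},
                                   phone_state)
```

Note that either branch yields the same user-visible result, the game on the TV screen; only where the application runs differs.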
In one embodiment, the source device 31 and the reminder device 32 may be the same device; for example, as shown in FIG. 26, the source device 31 and the reminder device 32 are both the mobile phone 201. When the mobile phone 201 receives a notification that a basketball game is live, the reminder device manager 312 may select the mobile phone 201 as the reminder device if the mobile phone 201 is in use. If the smart television 205 is near the user, the connection device manager 313 may select the smart television 205 as the connection device. As shown in (a) of fig. 26, the display 2310 of the mobile phone 201 displays a window 2313 and a shortcut entry 2314, where the wording corresponding to the shortcut entry 2314 is "smart screen view". When the user wants to play the basketball game through the smart television 205, the shortcut entry 2314 may be clicked. In response to the user's click, the mobile phone 201 sends an execution message to the smart television 205; after receiving the execution message, the smart television 205 executes the notified task and, as shown in (b) of fig. 26, plays the basketball game. The user can thus have the basketball game played on the smart television 205 through a single interaction with the mobile phone 201.
In one embodiment, as shown in fig. 27, the reminder device 32 and the continuation device 33 may be the same device; for example, both are the tablet computer 206. When the mobile phone 201 receives a notification that the basketball game is live, as shown in (a) of fig. 27, the mobile phone 201 may pop up a window 2312 on the display screen 2310 to remind the user that the basketball game has been updated. If the tablet computer 206 is in use at this time and the user has no device with a display larger than that of the tablet computer 206, the reminder device manager 312 may select the tablet computer 206 as the reminder device. As shown in (b) of fig. 27, after the tablet computer 206 receives the prompt message, the display 2320 of the tablet computer 206 pops up a window 2324, and the window 2324 may also display a shortcut entry 2325, where the wording corresponding to the shortcut entry 2325 is "tablet view". When the user wants to play the basketball game through the tablet computer 206, the shortcut entry 2325 may be clicked. In response to the user's click, the tablet computer 206 pulls up the corresponding video service and plays the basketball game. The user may thus play the basketball game directly on the tablet computer 206 through a single interaction with the tablet computer 206.
FIG. 28 is a diagram illustrating devices in an application scenario in accordance with one embodiment, the scenario illustrated in FIG. 28 employing the method illustrated in FIG. 54. In fig. 28, the handset 201 is the source device 31 in fig. 54, the watch 202 is the reminder device 32 in fig. 54, and the smart tv 205 is the continuation device 33 in fig. 54. In different application scenarios, the same process in the notification processing method is not described again.
In step S5410, as shown in (a) of fig. 28, the cellular phone 201 is performing an operation. For example, in one embodiment, the operation is a camera operation, and the cell phone 201 may display a camera window 2315 on the display 2310. In another embodiment, the operation may also be a photographing operation, a calling operation, a video playing operation, a music playing operation, and the like.
The video call application in the background of the mobile phone 201 receives a notification of a video call request; as shown in (b) of fig. 28, the mobile phone 201 may pop up a window 2316 on the display 2310 to remind the user of the new video call request. If the video call request were answered on the mobile phone 201, the video call application would call the camera of the mobile phone 201, and the recording of the mobile phone 201 would be interrupted. The state manager 311 of the mobile phone 201 may determine that the notification needs to be prompted on another device according to the type of the notification (in this embodiment, a bright-screen notification) and the type of the notification's service (in this embodiment, a video call application), which can increase the interaction and coordination capabilities between devices and prevent the mobile phone 201 from interrupting its recording to conduct the video call.
Step S5420, exemplarily, the watch 202 and the smart television 205 are both networked with the mobile phone 201. The reminder device manager 312 learns that the watch 202 is being worn by the user, and the watch 202 can display the notification and prompt the user immediately, so the reminder device manager 312 selects the watch 202 as the reminder device. The reminder device manager 312 may also determine that the watch 202 is to display the notification's information as an icon and text, which can increase the interaction and coordination capabilities between devices and prevent the mobile phone 201 from interrupting its recording to conduct the video call.
The connection device manager 313 of the mobile phone 201 learns that the smart television 205 is close to the watch 202, and the smart television 205 has good display capability and is suitable for a video call. Thus, the connection device manager 313 may push the smart television 205 to the user as the device to execute the notified task (i.e., conduct the video call).
Step S5430, the mobile phone 201 sends a prompt message to the watch 202 according to the decision of the notification decision manager 310, where the prompt message may include information of a connection device (information of the smart tv 205), an intention of a service, a service name, service data, a prompting mode, and the like.
In step S5440, after the watch 202 receives the prompt message, it prompts that the mobile phone 201 has received the notification. As shown in (c) of fig. 28, upon receiving the prompt message, the watch 202 runs the display program, the window manager, and the notification manager to prompt the user on the display screen 2340 that the mobile phone 201 has received a notification of a video call request. The display 2340 may also display a shortcut entry 2343, where the wording corresponding to the shortcut entry 2343 is "answer on smart screen". When the user wants to take the video call through the smart television 205, the shortcut entry 2343 may be clicked.
Step S5450, in an embodiment, the user wants to conduct the video call through the smart television 205 and clicks the shortcut entry 2343. In response to the user's click operation (i.e., the confirmation operation), the watch 202 sends an execution message to the smart television 205 to trigger the smart television 205 to execute the video call task. By prompting the user to select a connection device in an interactive manner and responding to the user's confirmation, the watch 202 automatically continues the notified task on the optimal smart television 205 for processing, realizing cooperative work among multiple devices so that the devices in the system can handle, in a linked manner, a notification received by one device.
Step S5460, as shown in (d) of fig. 28, after the smart television 205 receives the execution message, the smart television 205 pulls up the corresponding video call service, invokes the corresponding activity, and invokes its camera and audio module to execute the notified task, i.e., conduct the video call. The video call service is pulled up and the call is conducted automatically and directly, without the user having to enter the video call application from its entry on the smart television 205. Through a single interaction with the watch 202, cooperative work among multiple devices is realized, user operations are saved, and the user experience is improved.
Fig. 55 to 62 show the implementation of the GUI shown in fig. 32 to 36 and the specific interaction process between the devices.
Fig. 55 shows an overall flowchart of a method for transmitting pictures across devices according to an embodiment of the present application:
s5501: the first device may obtain the picture to be transmitted through a first operation (e.g., taking, scanning, screen capturing, etc.).
S5502: after obtaining the picture to be transmitted, the first device pushes a first message to the second device in a near field connection state (for example, wi-Fi P2P connection or bluetooth connection, etc.), where the first message is picture notification information.
S5503: the second device displays a first prompt to the user if the second operation is detected, wherein the second operation is an interaction event (such as sliding of a mouse), wherein the first prompt comprises a picture thumbnail notification box.
S5504: the second device sends a second message to the first device in response to a third operation, namely the original picture is requested, wherein the third operation comprises the operation (such as clicking, dragging and the like) of the picture thumbnail notification box by the user.
S5505: and the first equipment responds to the original picture request and sends a third message to the second equipment, wherein the third message comprises the original picture.
The first device and the second device are in a near field connection state, and they may log in to the same account (ID) or be associated in a user-defined manner. In this way, the operation steps for the user to transmit pictures can be simplified, the efficiency of transmitting pictures is improved, and convenience is provided for the user.
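Steps S5501 to S5505 above can be sketched as a message exchange between the two devices. The classes, method names, and message shapes below are assumptions for illustration; a real implementation would run over the near field transport (Wi-Fi P2P or Bluetooth).

```python
class FirstDevice:
    def __init__(self):
        self.picture = None

    def capture(self, data):
        # S5501: obtain the picture (photograph / scan / screen capture).
        # S5502: push the picture notification (here: a thumbnail) to peers.
        self.picture = data
        return {"type": "picture_notice", "thumbnail": data[:8]}

    def on_original_request(self, msg):
        # S5505: reply to the original-picture request with the original.
        return {"type": "original", "data": self.picture}

class SecondDevice:
    def on_notice(self, notice, interaction_detected):
        # S5503: show the thumbnail notification box only if an interaction
        # event (e.g., mouse movement) was detected.
        # S5504: a click/drag on the box requests the original picture.
        if interaction_detected:
            return {"type": "original_request"}
        return None

first, second = FirstDevice(), SecondDevice()
notice = first.capture("RAW_PICTURE_BYTES")
request = second.on_notice(notice, interaction_detected=True)
original = first.on_original_request(request)
```

The sketch corresponds to the first of the two scenarios described below (thumbnail pushed up front); the second scenario inserts a thumbnail-request round trip before the notification box is shown.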
In one embodiment, the second device may display a first prompt to the user if the second operation is detected after receiving the picture notification information; or, after detecting the second operation, the second device may display the first prompt to the user if receiving the picture notification information.
In one embodiment, the second device may detect the second operation within a preset time period after receiving the picture notification information, and display the first prompt to the user.
In one embodiment, the second device may receive the picture notification message within a preset time period after detecting the second operation, and display a first prompt to the user.
In the method, the picture notification information pushed by the first device to the second device covers two scenarios. In one, the picture notification information pushed to the second device is the thumbnail itself, and the second device displays the thumbnail notification box to the user after detecting an interaction event. In the other, the picture notification information pushed to the second device is a picture-to-be-received message; the second device requests the thumbnail from the first device after detecting an interaction event, the first device then sends the thumbnail to the second device, and finally the second device displays the thumbnail notification box to the user. The two scenarios differ in the underlying network interaction, but the user does not perceive the difference, as described in detail in the following embodiments.
Fig. 56 is a first flowchart of a picture sharing method according to an embodiment of the present application, where the first flowchart illustrates an interaction flow between the devices in the GUI shown in fig. 32. The following describes a picture sharing method according to an embodiment of the present application in detail with reference to the above system architecture diagram 31 and fig. 56. As shown in fig. 56, the method includes:
s5601, the mobile phone 310, the mobile phone 311, and the Personal Computer (PC) 312 establish a near field connection (e.g., bluetooth connection), wherein the mobile phone 310, the mobile phone 311, and the Personal Computer (PC) 312 log in to the same account (ID), and belong to different devices of the same user.
S5602, the mobile phone 310 invokes a photo/screen capture/scan program to obtain a picture.
S5603, the mobile phone 310 transmits the thumbnail of the picture just obtained to the other devices, the mobile phone 311 and the Personal Computer (PC) 312.
S5604, the Personal Computer (PC) 312 detects a sliding operation of the mouse after receiving the thumbnail, generates a thumbnail notification box, and displays the thumbnail notification box on the desktop.
S5605, the user may click, download, or drag the thumbnail notification box displayed on the desktop of the Personal Computer (PC) 312.
S5606, the Personal Computer (PC) 312 sends an original picture request message to the mobile phone 310 in response to the user clicking, downloading, dragging, and the like, the thumbnail notification box.
S5607, the mobile phone 310 sends the original picture of the picture just obtained to the Personal Computer (PC) 312 in response to the original-picture request message.
S5608, the Personal Computer (PC) 312 opens the received original photograph with a photograph viewer, or saves it in a destination folder, or inserts it into a destination application.
It can be seen that the picture-sharing method provided in this embodiment of the present application includes: the user obtains a picture by photographing, scanning, or screen capturing with the first device, and the first device may send a thumbnail of the picture to the second device, where the first device and the second device are in a near field connection state and log in to the same ID. The second device displays thumbnail notification information of the picture to the user when an interaction event (e.g., a movement of the mouse) is detected. The user may click, download, or drag the thumbnail notification information, triggering the first device to send the original picture to the second device. Thus, the second device can inform the user of a receivable picture as soon as an interaction event is detected, and the user can obtain the picture through a click or similar operation. Meanwhile, the first device automatically sends the thumbnail to the second device once the picture is obtained by photographing, scanning, or screen capturing, and further sends the original picture in response to the second device's original-picture request message. This simplifies the user's steps for sharing pictures obtained by photographing, scanning, and screen capturing, improves the efficiency of picture sharing, and enhances the user experience.
Optionally, S5603 and S5604 in the above process may be exchanged: after the first device photographs, scans, or captures the screen, it sends the thumbnail to the second device only once the second device has detected a user interaction event and sent a thumbnail request message to the first device. This variant requires the second device to continuously report interaction events to the first device, and the first device sends the thumbnail to the second device in response to the second device's active request.
Fig. 57 is a second flowchart of a picture sharing method according to an embodiment of the present application, where the second flowchart illustrates an interaction flow between devices in the GUI shown in fig. 33. Another picture sharing method provided by an embodiment of the present application is described in detail below with reference to the above system architecture fig. 31 and fig. 57. As shown in fig. 57, the method includes:
s5701, the mobile phone 310, the mobile phone 311 and the Personal Computer (PC) 312 establish a near field connection state (for example, bluetooth connection), wherein the mobile phone 310, the mobile phone 311 and the Personal Computer (PC) 312 log in the same ID and belong to different devices of the same user.
S5702, the mobile phone 310 performs photographing/scanning/screen capturing operations to obtain a picture.
S5703, the mobile phone 310 broadcasts a message to notify the mobile phone 311 and the Personal Computer (PC) 312 that the picture is to be received, that is, notifies the operating systems of the mobile phone 311 and the Personal Computer (PC) 312 that the picture is to be received.
S5704, after the picture is obtained, the mobile phone 310 starts a timer A of a certain duration (for example, 1 minute), and waits for thumbnail request messages from other devices while timer A runs.
S5705, the mobile phone 311 and the Personal Computer (PC) 312 start a timer B for monitoring the user interaction event for a certain time (for example, 1 minute) after receiving the message to be received.
S5706, the Personal Computer (PC) 312 receives the user interaction event (such as a keyboard and mouse operation) during the operation of the timer B for monitoring the user interaction event.
S5707, the Personal Computer (PC) 312 transmits a thumbnail request message to the cell phone 310 in response to the user' S interactive event.
S5708, the cell phone 310 sends the thumbnail of the picture just acquired to the Personal Computer (PC) 312 in response to the thumbnail request message.
S5709, the Personal Computer (PC) 312 generates a thumbnail notification and displays the thumbnail notification on the desktop after receiving the thumbnail.
S5710, the user may click, download, drag, and the like on the thumbnail displayed on the desktop of the Personal Computer (PC) 312.
S5711, the Personal Computer (PC) 312 sends an original picture request message to the mobile phone 310 in response to the user clicking, downloading, dragging, and the like on the thumbnail.
S5712, the mobile phone 310 sends the original picture of the picture just acquired to the Personal Computer (PC) 312 in response to the original picture request message.
S5713, the Personal Computer (PC) 312 opens the received original picture with a photo viewer, or saves it in a target folder, or inserts it into a target application.
It can be seen that the method for picture sharing provided by an embodiment of the present application includes: after a user uses the first device to perform shooting, scanning and screen capturing operations, the first device pushes a message that a picture is to be received to the second device, that is, an operating system of the second device is notified that the picture is to be received. The first device and the second device are in a near field connection state, and log in the same ID. After the second device receives the picture to-be-received message, if a user interaction event (such as a keyboard and mouse operation) is detected, the second device sends a thumbnail request message to the first device. The first device sends the thumbnail of the picture obtained by shooting, scanning and screen capturing to the second device in response to the thumbnail request message. The second device can click, download, drag and the like the thumbnail of the picture to obtain the original picture. Therefore, the operation steps of the user for transmitting the pictures obtained by shooting, scanning and screen capturing can be simplified, and the efficiency of transmitting the pictures to other equipment is improved.
Compared with the picture-sharing method in fig. 56, in the method shown in fig. 57 the first device does not send the thumbnail directly to the other devices after obtaining the picture, but waits for another device to request it. This saves network resources and avoids pushing thumbnails to devices that do not need the picture.
Optionally, in the above step S5705, the mobile phone 311 starts the timer B for monitoring user interaction events after receiving the picture-to-be-received message. If a user interaction event is received while the timer B is running, the mobile phone 311 sends a thumbnail request message to the mobile phone 310 to request a thumbnail. The mobile phone 310 sends the thumbnail to the mobile phone 311 in response to the thumbnail request message.
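The deferred-request behavior described above can be sketched as a small state machine on the receiving device. This is an illustrative sketch only, not the patent's implementation; the one-minute timer B duration comes from the embodiment, while the `THUMBNAIL_REQUEST` message name and the `ReceiverState` class are hypothetical:

```python
import time

TIMER_B_SECONDS = 60  # the 1-minute timer B window described in the embodiment

class ReceiverState:
    """Tracks whether a pending-picture notice is still within timer B."""

    def __init__(self, now=time.monotonic):
        self.now = now          # injectable clock, eases testing
        self.deadline = None    # timer B expiry, None when not running

    def on_picture_pending(self):
        # A picture-to-be-received message arrived: start timer B.
        self.deadline = self.now() + TIMER_B_SECONDS

    def on_user_interaction(self):
        # A keyboard/mouse (or screen) interaction: request the thumbnail
        # only while timer B is still running.
        if self.deadline is not None and self.now() <= self.deadline:
            self.deadline = None
            return "THUMBNAIL_REQUEST"  # hypothetical message name
        return None
```

In this sketch an interaction after timer B has expired simply returns nothing, matching the idea that the thumbnail is only fetched while the notice is fresh.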
Fig. 58 to fig. 60 are a third flowchart of a picture sharing method according to an embodiment of the present application, which illustrates the interaction flow between the devices in the GUI illustrated in fig. 34. Another picture sharing method provided by an embodiment of the present application will be described in detail below with reference to the above system architecture in fig. 31 and to fig. 58 to 60. As shown in fig. 58, the method includes:
S5801, the mobile phone 310, the mobile phone 311 and the Personal Computer (PC) 312 establish a near field connection (for example, a Bluetooth connection), where the mobile phone 310, the mobile phone 311 and the Personal Computer (PC) 312 log in with the same ID and are different devices of the same user.
S5802, the mobile phone 310 takes/scans/captures the screen to obtain the first photo.
S5803, the mobile phone 310 broadcasts a message notifying the mobile phone 311 and the Personal Computer (PC) 312 that a picture is to be received, that is, it notifies the operating systems of the mobile phone 311 and the Personal Computer (PC) 312 that a picture is to be received.
S5804, after the photo/scan/screen capture, the mobile phone 310 starts a timer A with a certain duration (e.g., 1 minute), and waits for thumbnail request messages from other devices while the timer A is running.
S5805, the mobile phone 311 and the Personal Computer (PC) 312 start a timer B with a certain duration (for example, 1 minute) for monitoring user interaction events after receiving the picture-to-be-received message.
S5806, the mobile phone 310 continues to take/scan/capture a plurality of photos while the timer A is running.
S5807, the mobile phone 310 broadcasts a message notifying the mobile phone 311 and the Personal Computer (PC) 312 that a picture is to be received every time a picture is acquired in S5806.
S5808, the Personal Computer (PC) 312 receives the user's input operation (such as a keyboard or mouse operation) while the timer B for monitoring user interaction events is running.
S5809, the Personal Computer (PC) 312 sends a multiple-picture thumbnail request message to the mobile phone 310 in response to the user's interaction event.
S5810, the mobile phone 310 sends the thumbnail of the last photo among the photos just taken/scanned/captured, together with the total photo count, to the requesting Personal Computer (PC) 312 in response to the multiple-picture thumbnail request message. The total photo count is the total number of photos obtained since step S5802 at the time the mobile phone 310 receives the request. Taking the android system as an example, the mobile phone 310 may package the thumbnail of the last photo and the total photo count into one message according to a certain message structure, and call a network module interface to send the packaged message to the Personal Computer (PC) 312.
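The patent only says the thumbnail and count are packaged "according to a certain message structure"; as one possible sketch (field names and the JSON-over-bytes encoding are assumptions, not the patent's wire format), the packaging and unpackaging could look like:

```python
import base64
import json

def pack_thumbnail_message(last_thumbnail: bytes, total_count: int) -> bytes:
    """Pack the last photo's thumbnail and the total photo count into one message."""
    payload = {
        "type": "multi_thumbnail",  # hypothetical message type tag
        "total": total_count,
        # binary thumbnail data is base64-encoded so it can ride in JSON
        "thumbnail": base64.b64encode(last_thumbnail).decode("ascii"),
    }
    return json.dumps(payload).encode("utf-8")

def unpack_thumbnail_message(raw: bytes):
    """Recover (thumbnail bytes, total count) on the receiving side."""
    payload = json.loads(raw.decode("utf-8"))
    return base64.b64decode(payload["thumbnail"]), payload["total"]
```

The receiving device can then render the thumbnail and show "total" as the photo count in the notification box.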
S5811, after receiving the thumbnail of the last photo and the total photo count, the Personal Computer (PC) 312 displays a multi-picture thumbnail notification box on the desktop, where the thumbnail of the last photo and the total number of photos are shown.
After the Personal Computer (PC) 312 displays the multi-picture thumbnail notification box on the desktop, the subsequent flow differs according to the user's specific operation on the notification box.
When the user drags the thumbnail in the multi-picture thumbnail notification box or clicks the download button on the notification box, the interaction flow is as shown in fig. 59:
S5901, the Personal Computer (PC) 312 receives the user's drag of the thumbnail in the multi-picture thumbnail notification box or a click on the download button therein.
S5902, the Personal Computer (PC) 312 sends a request message for the original photos of the plurality of photos to the mobile phone 310 in response to the user's drag or download operation.
S5903, the mobile phone 310 sends the original photos of the plurality of photos to the Personal Computer (PC) 312 in response to the request message of the original photos of the plurality of photos.
S5904, the Personal Computer (PC) 312 stores the received plurality of photos in a destination folder or inserts the photos into a destination application.
When the user clicks the thumbnail in the multi-picture thumbnail notification box, the interaction flow is as shown in fig. 60:
S6001, the Personal Computer (PC) 312 receives the user's click on the thumbnail in the multi-picture thumbnail notification box.
S6002, the Personal Computer (PC) 312 sends a request message for thumbnail images of other pictures except the last picture to the mobile phone 310 in response to the click operation of the user.
S6003, the mobile phone 310 sends the thumbnail of the picture other than the last picture to the Personal Computer (PC) 312 in response to the received request message of the thumbnail of the picture other than the last picture.
S6004, the Personal Computer (PC) 312 receives thumbnails of the other pictures except the last picture, and then displays thumbnails of all the pictures in combination with the received thumbnail of the last picture.
S6005, the Personal Computer (PC) 312 receives the user's click, download, or drag operation on a certain thumbnail.
S6006, the Personal Computer (PC) 312 sends an original photo request for the thumbnail to the mobile phone 310 in response to the user's click, download, or drag operation on the thumbnail.
S6007, the mobile phone 310 sends the corresponding original photo to the Personal Computer (PC) 312 in response to the original photo request.
S6008, the Personal Computer (PC) 312 receives the original photograph, and stores the original photograph in the destination folder or inserts the original photograph into the destination application.
It can be seen that the picture sharing method provided in an embodiment of the present application includes: a user uses the first device to continuously take, scan, or screen-capture a plurality of photos within a period of time, and the first device sends a picture-to-be-received message to the second device each time it obtains a photo, that is, the second device is informed that a photo is to be received. The first device and the second device are in a near field connection state and are logged in with the same ID. After receiving the picture-to-be-received messages for the plurality of photos, the second device sends a multiple-picture thumbnail request message to the first device if a user interaction event (such as a keyboard or mouse operation) is detected. In response, the first device sends the thumbnail of the last of the plurality of pictures obtained by shooting, scanning, or screen capturing, together with the total number of pictures, to the second device. The second device may then perform operations such as expanding, downloading, and dragging on the received thumbnail. Therefore, the operation steps for the user to transmit multiple pictures obtained by continuous shooting, scanning, or screen capturing can be simplified, and the efficiency of transmitting the multiple pictures to other devices is improved.
Optionally, in step S5810, after receiving the multiple-picture thumbnail request message from the second device, the first device may directly send the thumbnails of all the pictures and the total number of pictures to the second device. The first device then does not need to wait for the second device to subsequently request the thumbnails of the pictures other than the last one before sending them, which reduces the delay caused by network interaction during user operation.
Optionally, in the above step S5803, the mobile phone 310 may send the thumbnail of the picture to the mobile phone 311 and the Personal Computer (PC) 312 each time it acquires a photo. After detecting an interaction event, the mobile phone 311 and the Personal Computer (PC) 312 can then directly display the thumbnails of all pictures to the user, reducing the delay caused by network interaction during user operation.
Based on the above description of the embodiments, the following introduces the underlying implementation of the embodiments of the present application with reference to the system module architecture for picture sharing and the message interaction between the main modules, as shown in fig. 61, taking a mobile phone as the sending end of a picture and a Personal Computer (PC) as the receiving end as an example.
The mobile phone mainly comprises a photographing system, a screenshot system, a scanning system, a media library, a media library monitoring module, a network module, and the like. The photographing system mainly calls the camera of the mobile phone to capture a static image or a video: the photosensitive element in the camera converts the optical signal into an electrical signal, which is transmitted to an Image Signal Processor (ISP) and converted into a digital image signal; the ISP finally generates a picture from the digital image signal and stores it in the memory of the mobile phone. The screenshot system mainly relies on the application processor, which, in response to a user operation, captures the content of the mobile phone's display screen, generates a picture, and stores it in the memory of the mobile phone. The scanning system mainly calls the camera of the mobile phone to capture images of paper documents, and a software algorithm in the mobile phone can crop or correct the captured images. The media library is mainly used for managing files such as pictures and videos in the mobile phone; it can display pictures and play videos for the user, and can also share files such as pictures and videos in combination with other software. The network module mainly completes communication with external devices, such as file transmission, through hardware such as Bluetooth and Wi-Fi in the mobile phone. The media library monitoring module mainly completes the functions in the embodiments of the present application, such as automatically notifying other devices of photos newly added to the media library and sending the photos.
The photographing system, the screenshot system, the scanning system, the media library, and the network module come with the mobile phone operating system, while the media library monitoring module is a new module in the technical solution of the embodiments of the present application.
The Personal Computer (PC) mainly comprises a resource manager, third-party applications, a notification box module, a network module, and the like. The resource manager is used for managing the various file resources stored in the Personal Computer (PC), showing specific file information to the user in a list. The third-party applications are the various application programs installed in the Personal Computer (PC), such as document editing programs and instant messaging programs. The network module mainly completes communication with external devices, such as file transmission, through hardware such as Bluetooth and Wi-Fi in the PC. The notification box module mainly completes the picture-receiving functions in the embodiments of the present application, such as drawing the notification box, responding to the user's mouse click and drag operations, and requesting photos. The resource manager, the third-party applications, and the network module come with the PC operating system, while the notification box module is a new module in the technical solution of the embodiments of the present application.
In order to implement the functions of the embodiments of the present application, the mobile phone needs to implement a media library monitoring function. Specifically, the mobile phone registers a callback function through a media library interface, and the callback is invoked when a new file is added to the media library. The callback function identifies whether the file newly added to the media library comes from the photographing system, the screenshot system, or the scanning system, for example by checking whether its file name contains a specific keyword. For example, in the android system, the file name of a photographed picture contains a Camera field, the file name of a screenshot contains a Screenshots field, and the file name of a scanned picture contains a Scanning field. After determining that the file is newly added from the photographing system, the screenshot system, or the scanning system, the mobile phone sends a message to inform the other devices meeting certain conditions that a photo is to be received. Other ways may also be used to determine whether the newly added file in the media library comes from the photographing system, the screenshot system, or the scanning system, which is not limited in this application.
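The keyword check described above can be sketched as a small classification function. This is an illustrative sketch, not Android code; the case-insensitive matching and the path examples are assumptions, while the Camera/Screenshots/Scanning keywords come from the text:

```python
# Keyword fields described for the Android media library example
SOURCE_KEYWORDS = {
    "Camera": "photographing system",
    "Screenshots": "screenshot system",
    "Scanning": "scanning system",
}

def classify_new_media_file(path: str):
    """Return the source system of a newly added media file, or None.

    Mirrors the callback check described above: the file name (or its
    directory path) is matched against the keyword fields.
    """
    lowered = path.lower()
    for keyword, source in SOURCE_KEYWORDS.items():
        if keyword.lower() in lowered:
            return source
    return None
```

A `None` result means the new file (e.g. an ordinary download) should not trigger a picture-to-be-received notification.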
In order to implement functions such as mouse clicking and dragging on the Personal Computer (PC) notification box in the embodiments of the present application, the notification box function needs to be implemented. Specifically, the Personal Computer (PC) notification box is an application window for displaying thumbnails, which may include operation buttons (e.g., open, download, etc.); functions such as opening and downloading the original picture are implemented by listening for click events on the buttons and drag actions on the window. Different operating systems use different system functions and interfaces; for example, on the Windows operating system, the CButton interface of the MFC (Microsoft Foundation Classes) and the Windows DoDragDrop method can be used to listen for button click events and window drag events.
Fig. 62 shows the interaction message flow between the main modules of the mobile phone and the Personal Computer (PC), which specifically includes:
S6201, the media library monitoring module of the mobile phone registers a monitoring task with the media library, so that the media library monitoring module is notified for processing when the media library detects a newly added file. Taking the android system as an example, the onChange interface of the media library interface ContentObserver can implement the callback when a file is newly added to the media library.
S6202, the mobile phone calls a photographing/screen capturing/scanning program to obtain a picture, and the picture is stored in a media library.
S6203, after the media library of the mobile phone detects that a new photo has been added, the corresponding callback function is called for processing. Taking the android system as an example, the onChange interface is called for processing, and the processing function of the embodiments of the present application is added in onChange.
S6204, the media library monitoring module of the mobile phone judges whether the new photo comes from the photographing system, the screenshot system, or the scanning system; the source of the new photo can be determined by rule matching on the file name. Taking the android system as an example, the specific file name is obtained through the file handle passed to the onChange interface, and then whether the file name contains the Camera, Screenshots, or Scanning field determines whether the newly added file comes from the photographing system, the screenshot system, or the scanning system.
S6205, if the mobile phone determines that the newly added picture comes from the photographing system, the screenshot system, or the scanning system, it calls the interfaces of other modules to obtain information about the devices that are in the near field connection state and logged in with the same ID. Taking how the android system obtains the device information for the same ID in a near field connection (for example, a Wi-Fi connection) state as an example, the mobile phone may obtain the public key corresponding to the current account, encrypt the current account with the public key, and then call an interface of the network module to send the encrypted account ciphertext to the near-field-connected devices. A near-field-connected device decrypts the received account ciphertext, and if the decryption succeeds, returns a response message to the mobile phone indicating that it uses the same account. Other methods may also be used to obtain the device information for the same ID, which is not specifically limited.
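The same-account check can be illustrated with a challenge/verify pair. Note this is a deliberately simplified stand-in: the patent describes asymmetric encryption with an account-bound public key, while this sketch uses a shared account secret and an HMAC purely to show the request/response shape; all function and parameter names are hypothetical:

```python
import hashlib
import hmac
import os

def make_account_challenge(account_id: str, account_secret: bytes):
    """Sender side: bind a random nonce to the current account (stand-in
    for the account ciphertext sent to near-field-connected devices)."""
    nonce = os.urandom(16)
    tag = hmac.new(account_secret, nonce + account_id.encode(), hashlib.sha256).digest()
    return nonce, tag

def verify_account_challenge(account_id: str, account_secret: bytes,
                             nonce: bytes, tag: bytes) -> bool:
    """Peer side: succeeds only if the peer holds the same account material,
    i.e. it is logged in with the same ID."""
    expected = hmac.new(account_secret, nonce + account_id.encode(), hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)
```

A peer for which verification succeeds would return the positive response message described in S6205.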
S6206, the media library monitoring module of the mobile phone calls an interface provided by the network module to send the thumbnail of the latest photo to the other devices, where the other devices are the just-obtained devices that are in the near field connection state and logged in with the same ID.
S6207, the network module of the mobile phone sends the thumbnail of the photo to the network module of a Personal Computer (PC) through a universal network Socket interface.
S6208, after receiving the thumbnail of the photo, the network module of the Personal Computer (PC) sends it to the notification box module. After the notification box module detects a user interaction event through the interaction event response interface registered with the operating system, it draws the thumbnail notification box, for example using the MessageBox interface of the MFC (Microsoft Foundation Classes) on the Windows operating system to display the notification box.
S6209, after detecting the user's operation on the drawn notification box, the operating system of the Personal Computer (PC) calls the corresponding processing function of the notification box module; for example, when the user clicks a thumbnail in the notification box, the corresponding processing function calls an interface of the network module to send an original photo request message to the mobile phone. Taking the Windows operating system as an example, this can be implemented by registering handler functions through the CButton interface of the MFC (Microsoft Foundation Classes) and the Windows DoDragDrop method.
S6210, the network module of the Personal Computer (PC) sends the original photo request message to the network module of the mobile phone through the universal network Socket interface. The original photo request message encapsulates a specific handle of the thumbnail, and the mobile phone receives the original photo request message and can find the corresponding original picture through the thumbnail handle in the message.
S6211, the network module of the mobile phone sends the original photo request message to the media library monitoring module.
S6212, the media library monitoring module of the mobile phone parses the original photo request message, extracts the thumbnail handle from it, obtains the corresponding original photo from the media library through the thumbnail handle, and then calls the network module interface to send the original photo to the Personal Computer (PC).
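The handle-based lookup in steps S6210 to S6212 can be sketched as a mapping from thumbnail handles to original pictures. This is an illustrative sketch; the `thumbnail_handle` field name, the `MediaLibrary` class, and the in-memory dictionary are assumptions standing in for the phone's media library:

```python
class MediaLibrary:
    """Minimal stand-in for the phone-side handle resolution."""

    def __init__(self):
        self._by_handle = {}  # thumbnail handle -> original picture bytes

    def add_photo(self, handle: str, original: bytes):
        self._by_handle[handle] = original

    def original_for_request(self, request: dict):
        # The original photo request message encapsulates the thumbnail
        # handle; the phone extracts it and resolves the original picture.
        handle = request["thumbnail_handle"]  # hypothetical field name
        return self._by_handle.get(handle)
```

A `None` result would correspond to a request whose handle no longer matches any stored picture (e.g. the photo was deleted in the meantime).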
S6213, the network module of the mobile phone sends the original photo to the network module of the Personal Computer (PC) through the universal network Socket interface.
S6214, after receiving the original photo, the network module of the Personal Computer (PC) sends the original photo to the notification box module.
S6215, the notification box module of the Personal Computer (PC) inserts the original photo into a third-party application according to the user's operation, or calls a photo viewer to open and display it.
Optionally, in step S6207 of the module interaction message flow, the mobile phone may instead send a picture-to-be-received message to the Personal Computer (PC) through the Socket interface of the network module. After the notification box module of the Personal Computer (PC) detects a user interaction event through the interaction event response interface registered with the operating system, it calls the network module's Socket interface to send a thumbnail request to the mobile phone; after receiving the thumbnail request, the media library monitoring module of the mobile phone sends the thumbnail to the Personal Computer (PC). The subsequent flow is identical to the interaction message flow described above.
In the embodiments of the present application, a Personal Computer (PC) is used as the picture receiving end, but other types of devices, such as a mobile phone, a Television (TV), or a tablet computer (PAD), may also serve as the picture receiving end. Taking android mobile phones as an example, with the mobile phone 311 as the receiving end of the picture and the mobile phone 310 as the sending end: after taking a picture, the mobile phone 310 sends thumbnail information to the mobile phone 311. When the mobile phone 311 detects a screen-on operation or a screen-sliding operation, the notification box module of the mobile phone 311 calls the notification module interface of the android system to show the user a notification message about receiving a new picture, and the notification message can show the thumbnail information to the user. After the user clicks the notification message, the mobile phone 311 sends an original picture request message to the mobile phone 310, and the mobile phone 310 sends the original picture to the mobile phone 311 in response. After receiving the original picture, the mobile phone 311 calls the album program to open it. If the user does not click the new-picture notification message within a period of time, the notification message is automatically hidden, and the user can later find it in the notification center of the mobile phone 311 to continue the operation.
In the embodiments of the present application, a mobile phone is used as the sending end of the picture, but other devices, such as a Personal Computer (PC), a Television (TV), or a tablet computer (PAD), may also serve as the sending end. Taking a Personal Computer (PC) 312 running a Microsoft operating system as the sending end and a mobile phone 310 running the android system as the receiving end as an example: the Personal Computer (PC) 312 obtains a screenshot picture in response to the user's screenshot operation, and the media library monitoring module in the Personal Computer (PC) 312 registers a callback function through the MFC (Microsoft Foundation Classes). When the resource manager detects that a picture file has been added to a certain folder, it calls the callback function to send a thumbnail of the picture to the mobile phone 310. When the mobile phone 310 detects a screen-on operation or a screen-sliding operation, the notification box module of the mobile phone 310 calls the notification module interface of the android system to show the user a notification message about receiving a new picture, and the notification message can show the thumbnail information to the user. After the user clicks the notification message, the mobile phone 310 sends an original picture request message to the Personal Computer (PC) 312, and the Personal Computer (PC) 312 sends the original picture to the mobile phone 310 in response. After receiving the original picture, the mobile phone 310 calls the album program to open it.
The embodiments of the application may also include other implementations; for example, the first device may directly send the original picture to the second device after obtaining it by photographing, scanning, or screen capturing, and the second device may also configure default operations to perform after receiving the picture, such as directly opening and displaying it.
The embodiments of the present application can not only share pictures, but also include other implementations; for example, they can be applied to sharing videos and short videos. Take the mobile phone 310 running the android system as the video sending end and the Personal Computer (PC) 312 running the Microsoft operating system as the video receiving end as an example. The mobile phone 310 captures a video using the camera, and the media library monitoring module of the mobile phone 310 registers a callback function through the media library interface. When a file is newly added to the media library, the callback function is called to judge whether it is a video file, which can be done by checking whether the file name of the newly added file contains a keyword. If the newly added file is judged to be a video file, a thumbnail of the corresponding video is sent to the Personal Computer (PC) 312. The Personal Computer (PC) 312 detects a user interaction event and displays a thumbnail notification of the video to the user. The Personal Computer (PC) 312 sends a video file request message to the mobile phone 310 in response to the user clicking the thumbnail of the video. The mobile phone 310 sends the video file to the Personal Computer (PC) 312 in response to the video file request message.
The embodiments of the application may also include other implementations in combination with picture recognition technologies: after the first device obtains the picture, key information such as a mailbox address or a website link can be extracted using picture recognition, and the key information is then shared with the second device. For the specific implementation process, reference may be made to the description corresponding to the GUI in fig. 36, and details are not repeated here for brevity.
The embodiments of the application may also include other implementations, such as sharing the address of a web page being browsed with other devices. For example, the first device is browsing a web page using a browser, and in response to a specific user gesture (e.g., tapping the screen three times in succession), the first device extracts the web address from the browser. The web address is then sent to other devices through the network module interface; after detecting an interaction event, the other device draws a notification card whose content is the web address link, and the user can continue browsing the web page on the other device. Quickly sharing the browsed web address with the other devices that need it allows web browsing to continue on those devices and enhances the user experience.
The embodiments of the application may also include other implementations, such as sharing a file just downloaded on a terminal device with other devices. For example, the first device is downloading an e-book file using a browser, and the media library monitoring module registers a callback function with the resource manager to listen for changes to a folder. When a file is newly added to the folder, the callback function is triggered to send the file name to other devices through the network module; after detecting an interaction event, the other device draws a notification card whose content is the file name, and the user can click the notification card to request the file from the first device. In this way, a device can quickly share just-downloaded content with other devices where it is needed, enhancing the user experience.
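The folder-change detection for the download-sharing example can be sketched as a polling pass that diffs directory listings. This is an illustrative sketch, assuming polling rather than the resource-manager callback the text describes; the `list_dir` injection point is an assumption added for testability:

```python
import os

def detect_new_files(folder: str, known: set, list_dir=os.listdir):
    """One polling pass over a watched download folder.

    Returns the file names that appeared since the previous pass and
    updates `known` in place; each returned name would be sent to the
    other devices through the network module.
    """
    current = set(list_dir(folder))
    added = sorted(current - known)
    known |= current
    return added
```

In a real implementation the same diff would run inside the registered callback instead of a polling loop.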
According to the descriptions of the above embodiments, the solution of the embodiments of the present application can be used for content sharing, such as sharing when a new picture is obtained by shooting, scanning, or screen capturing, sharing when a new file is downloaded, and sharing key information in combination with picture recognition technology. In this sharing method the sending device requires no additional user operations, and the receiving device shows the content notification as soon as an interaction event is detected, which improves the convenience and efficiency of content sharing and enhances the user experience.
Fig. 63 to 64 show the implementation of the GUI shown in fig. 37 to 43 and the specific interactive process between the devices.
Referring to fig. 63, fig. 63 is a flowchart illustrating a task processing method according to an embodiment of the present disclosure. The first device is an electronic device that displays a notification, such as the smartphone described above, and the second device is an electronic device that executes the content in the notification, such as the living room television, tablet, and the like described above. As shown in fig. 63, the task processing method may include the following steps.
Step S6301: the first device obtains a first message.
The first message may be notification information provided by a third-party server, and the first device obtains the notification information provided by the third-party server, where the third-party server includes the backend server, cloud server, or the like of an application (e.g., some video software). The first device has installed the application software corresponding to the third-party server, and when the application software meets set conditions, the server corresponding to the application software generates notification information and sends it to the first device. The notification information includes information such as the notification content and the notification source. The notification information may further include a notification type; optionally, the first device may determine the notification type based on the notification content and the notification source.
The notification content includes text content (e.g., title, notification text), and task content (e.g., playing video, audio, web page jump, etc.). The text content is used for briefly explaining the notification; the task content is used to indicate a task to be executed, and includes resource information for executing the task to be executed, for example, the task content is a playing video, the resource information in the task content includes a Uniform Resource Locator (URL) of the video, and the resource information may be used to initiate a request to a network to obtain resource data and jump to a corresponding application interface. The notification types include audio type notifications, video type notifications, text-picture type notifications, web page type notifications, mail type notifications, and so on.
In the embodiments of the present application, the above text content may be referred to as description information of the task content. Optionally, the content of the first message other than the task content, such as the text content, the notification source, and the notification type, may be referred to as description information of the task content.
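The structure of the first message described above can be sketched as a pair of data classes. This is an illustrative sketch; the class names, field names, and the example action strings are assumptions, while the split between task content (action plus resource URL) and description information (text, source, type) follows the text:

```python
from dataclasses import dataclass

@dataclass
class TaskContent:
    action: str        # task to be executed, e.g. "play_video", "open_url"
    resource_url: str  # resource information (URL) used to fetch resource data

@dataclass
class FirstMessage:
    notification_type: str  # e.g. "video", "web_page", "audio"
    source: str             # application that produced the notification
    text: str               # brief text content shown to the user
    task: TaskContent

    def description_info(self) -> dict:
        # Everything other than the task content counts as description info.
        return {
            "type": self.notification_type,
            "source": self.source,
            "text": self.text,
        }
```

The video notification example below then maps directly onto `FirstMessage` with a `play_video` task.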
Taking a video-type notification as an example, the first device acquires notification information sent by the third-party server. The notification information includes: the notification type is a video notification; the notification source is certain video software; the text content of the notification content is "some video software: some video has been updated"; and the task content includes the resource address of the video and indicates that the video is to be played.
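The notification structure described above can be sketched as a simple data object. This is a minimal illustrative sketch, not part of the patent; all field and class names are assumptions.

```python
from dataclasses import dataclass

@dataclass
class TaskContent:
    action: str    # e.g. "play_video", "open_webpage" (illustrative action names)
    resource: str  # URL or local path of the resource

@dataclass
class Notification:
    source: str      # notification source, e.g. the video software's name
    notif_type: str  # "video", "audio", "text_picture", "webpage", "mail"
    text: str        # text content briefly explaining the notification
    task: TaskContent

# The video-type example from the text, with a hypothetical URL:
n = Notification(
    source="SomeVideoApp",
    notif_type="video",
    text="SomeVideoApp: some video has been updated",
    task=TaskContent(action="play_video", resource="https://example.com/video.mp4"),
)
```

The description information (source, type, text) travels alongside the task content, so a receiving device can decide how to present the notification before fetching any resource.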
Taking the web page type notification as an example, the text content of the notification may be "view the latest news today", and the task content includes the web page address of "view the latest news today", indicating to jump to the web page address.
In some embodiments, the notification information may also include a list of devices supported by the notification type of the notification information, and the third-party server can determine this device list according to the notification type. For example, when the application software satisfies the preset condition, the server corresponding to the application software generates notification information; for instance, when a video in certain video software is updated, the server of the video software generates notification information about the video update. The server stores a correspondence between notification types and devices, and determines the devices supported by the video notification according to this correspondence. For example, a web-page notification supports electronic devices with display screens, and its device list may include electronic devices such as a computer, a tablet, and a mobile phone; an audio notification supports electronic devices with an audio output function, and its device list may include electronic devices such as a speaker, an earphone, a mobile phone, and a tablet; a video notification supports electronic devices with both a display screen and an audio output function, and its device list may include electronic devices such as a mobile phone, a television, and a computer.
Optionally, the notification information may include the device attributes supported by the notification type of the notification information. For example, for a video-type notification, the supported device attributes are a display screen and an audio output capability.
In some embodiments, the notification information may also include a device priority supported by the notification type of the notification information, and the third-party server stores a correspondence between notification types and device priorities. For example, the devices supporting a video notification are electronic devices with a display screen and an audio output function, such as a mobile phone, a television, and a computer; for the video notification, the priority order is television > computer > mobile phone.
For another example, the devices supporting an audio notification are electronic devices with an audio output function, such as a speaker, a mobile phone, a television, and a computer; for the audio notification, the priority order is speaker > mobile phone > computer > television.
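The correspondence between notification type and device priority can be sketched as a lookup table. This is an illustrative sketch only; the device names and the function are assumptions, not the patent's implementation.

```python
# Hypothetical correspondence between notification type and supported devices,
# ordered from highest to lowest priority, matching the two examples above.
DEVICE_PRIORITY = {
    "video": ["television", "computer", "mobile_phone"],
    "audio": ["speaker", "mobile_phone", "computer", "television"],
}

def supported_devices(notif_type: str) -> list[str]:
    """Return the devices supporting this notification type,
    highest priority first; unknown types yield an empty list."""
    return DEVICE_PRIORITY.get(notif_type, [])
```

As the text notes, different application software may ship different correspondence tables, so in practice the mapping would be keyed per application as well.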
Optionally, the correspondence between the notification type and the device priority may be different for different application software.
In some embodiments, the first message may be notification information provided by a system application of the first device, and the first device acquires the notification information provided by the system application, for example, an alarm clock, a memo, and the like. The notification information includes information such as notification content, notification source, notification type, and the like.
The notification content includes text content (e.g., a title and notification text) and task content (e.g., a video playback path, a picture viewing path). The text content briefly explains the notification; the task content provides resource information for executing the notification, and the resource information can be used to jump to the corresponding playing or viewing interface. The notification types include audio-type notifications, video-type notifications, text-picture-type notifications, web-page-type notifications, mail-type notifications, and so on.
Taking a video notification as an example, the first device obtains notification information sent by the system application. The notification information includes: the notification type is a video-type notification; the notification source is certain video software; the text content of the notification is that a certain video has been downloaded; and the task content is the playing path of the video, indicating that the video is to be played.
Taking a photo-type notification as an example, the text content of the notification may be "view the highlights album from this day last year", and the task content is the photo data from this day last year, with an instruction to output the photo data.
The first message is not limited to the examples in the above embodiments; the first message may also originate from other electronic devices. In some embodiments, the first message may be a message forwarded or sent by another electronic device. For example, another electronic device shares data content with the first device (for example, sends pictures, documents, and the like by sharing), sending a first message carrying the data content to the first device. The first device receives the first message, which includes information such as the text content, the task content, and the message source; the text content briefly explains the first message, and the task content includes the data content and indicates that the data content is to be viewed.
Step S6302: and determining available equipment which supports the execution of the task content in the first message from the peripheral equipment according to the first message, wherein the available equipment at least comprises second equipment.
After receiving the first message, the first device parses the first message and obtains information such as the text content, the task content, the message source, and the task type of the task content. According to the task type, the first device determines, from the peripheral devices, available devices that support executing the task content of the first message.
Specifically, the first device obtains device information of the peripheral devices, where a peripheral device is an electronic device included in a device management module of the first device. The device information includes a device identifier, device attributes (e.g., the display capability, audio capability, and interaction capability of the device), the current state of the device (e.g., whether the device is powered on or off, whether its screen is on or off, whether the device is worn by the user, the distance between the device and the first device, etc.), and priority information of the device.
The device management module of the first device periodically acquires device information of the peripheral devices. The peripheral devices include devices within communication range of the first device, or devices that are trusted devices with respect to the first device. The first device broadcasts a probe request at preset intervals, where the probe request carries the device identifier of the first device. The first device receives probe responses based on the probe request, each carrying the device identifier of another device, and the first device acquires and updates the device information of the devices within its communication range. For example, a probe response returned by a certain device at time T1 indicates that the device is a computer and that its device attributes include support for display output, audio output, and so on; the first device thereby obtains the device information (the device is a computer, supports display output, supports audio output, etc.) and can determine, based on the probe response, information such as the physical distance between the device and the first device and the device's current power-on state. When the device does not return a probe response at time T2, it may be determined that the state of the device at time T2 is powered off or disconnected from the network; the device information has changed, and the first device updates the current state in that device's information to powered off or disconnected. When the device returns a probe response again at time T3, the first device may update the device information of the device again based on the device identifier.
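The probe-and-refresh bookkeeping described above can be sketched as a small device table. This is a minimal sketch under stated assumptions; class and field names are illustrative, not from the patent.

```python
import time

class DeviceManager:
    """Keeps a table of peripheral devices, refreshed from probe responses."""

    def __init__(self):
        self.devices = {}  # device_id -> info dict

    def on_probe_response(self, device_id, attrs, distance_m):
        # A probe response marks the device online and refreshes its attributes
        # and measured distance (e.g. the T1 and T3 cases in the text).
        self.devices[device_id] = {
            "attrs": attrs,          # e.g. {"display", "audio_out"}
            "state": "online",
            "distance_m": distance_m,
            "last_seen": time.time(),
        }

    def mark_missing(self, timeout_s=30.0):
        # Devices that stopped answering probes (the T2 case) are marked
        # offline: powered off or disconnected from the network.
        now = time.time()
        for info in self.devices.values():
            if now - info["last_seen"] > timeout_s:
                info["state"] = "offline"
```

A periodic task would call `mark_missing` after each probe round so the table tracks the current states used later to filter available devices.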
Illustratively, the device information is shown in Table 5.
TABLE 5
Figure PCTCN2020142600-APPB-000004
After receiving the first message, the first device parses the first message and obtains, from Table 5, the device identifiers of the devices that support outputting the task content of the first message. For example, if the first message indicates that the task type is a video notification, the available devices are electronic devices that have a display screen and can play video. For another example, the first message includes: the task source is certain video software; the text content is "some video software: some video has been updated"; and the task content is the resource address of the video, with an instruction to play the video. The available devices are then electronic devices with display screens that can play video.
The first device determines the available devices according to the current states of these candidate devices. For example, if the television is currently powered off while the tablet and the computer are powered on, the first device determines that the available devices are the tablet and the computer.
The number of available devices may be preset to one or more. When the electronic devices supporting output of the task content include two or more devices, in some embodiments the first device determines only one available device according to a preset condition. For example:
Preset condition 1: select the device with the highest priority as the available device. For example, the first device stores a correspondence between notification types and device priorities, as shown in Table 6 below.
TABLE 6
Figure PCTCN2020142600-APPB-000005
For video-type notifications, devices that can output video include televisions, computers, tablets, mobile phones, and the like. As can be seen from Table 6, for video notifications the priority order is television > computer > tablet > mobile phone. The first device determines that the only available device is the television.
Preset condition 2: select the device closest to the first device as the available device. The first device can determine the distance to other devices through Bluetooth RSSI ranging or satellite positioning. For video-type notifications, devices that can output video include televisions, computers, tablets, and the like. If the first device determines that its current distance to the television is greater than its distance to the computer, which in turn is greater than its distance to the tablet, the first device determines that the available device is the tablet.
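The two preset conditions can be sketched as simple selection functions. Function names and data shapes are illustrative assumptions, not the patent's implementation.

```python
def pick_by_priority(candidates, priority_order):
    """Preset condition 1: among the candidate (online) devices,
    return the first one in the priority order."""
    for dev in priority_order:
        if dev in candidates:
            return dev
    return None

def pick_by_distance(candidates, distances):
    """Preset condition 2: return the candidate device nearest to
    the first device, given measured distances in meters."""
    return min(candidates, key=lambda d: distances[d], default=None)
```

For example, with the television powered off, preset condition 1 over the remaining candidates picks the computer for a video notification, while preset condition 2 picks whichever candidate is physically nearest.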
It will be appreciated that the first device may determine two (or three, four, …) available devices by the preset conditions described above.
In some embodiments, the first message includes a list of devices that support performing the task content of the first message. The first device determines available devices according to the device list and in combination with the electronic devices in table 5. For example, the list of devices indicated as supported in the first message includes a television, a computer, a tablet, and a vehicle-mounted display device. Since there is no in-vehicle display device in table 5, the first device determines that the available devices include a television, a computer, and a tablet.
Further, the first device determines the available devices according to the current states of these devices. For example, if the television is currently powered off while the tablet and the computer are powered on, the first device determines that the available devices are the tablet and the computer.
Alternatively, the list of devices supporting execution of the task content of the first message may be a list of device attributes. The first device determines available devices according to the device attributes, in combination with the electronic devices in Table 5. For example, for a video-type notification the supported device attributes are a display screen and an audio output capability, so electronic devices having a display screen and an audio output function are selected from Table 5 as the available devices.
Optionally, the first message includes both a device list supporting execution of the task content of the first message and a device priority. The first device determines available devices according to the device list and the device priority, in combination with the electronic devices in Table 5. For example, the device list indicated as supported in the first message includes a television, a computer, a tablet, and a vehicle-mounted display device. Since there is no vehicle-mounted display device in Table 5, the first device determines that the available devices include the television, the computer, and the tablet. The number of available devices may be preset to one or more; the first message indicates the device priority television > computer > tablet. When one available device is preset, the available device is the television; when two available devices are preset, the available devices are the television and the computer.
Step S6303: and displaying prompt information, wherein the prompt information is used for prompting a user to determine a second device for executing the task content from the available devices.
After the first device determines the available devices, it displays prompt information on the display screen. The prompt information includes the text content of the notification content and one or more controls. Each of the one or more controls is associated with a device identifier and with the task content in the notification content. When the first device detects a user operation on one of the controls, the first device sends the task content associated with that control to the device corresponding to the control and instructs that device to execute the task content. In this embodiment, the device corresponding to the control is the second device.
The display forms of the prompt information include a pull-down interface notification (refer to 4003 in fig. 40), a status-bar notification (refer to 3702 in (b) of fig. 38), a lock-screen notification (refer to 3702 in (a) of fig. 38), and the like. The display form of the prompt information may depend on the current state of the first device. For example, if the first device is in the lock-screen state, the prompt information is displayed as shown at 3702 in (a) of fig. 38 (or 3702 in (a) of fig. 37); if the first device is in the screen-on, unlocked state, the prompt information is displayed as shown at 3702 in (b) of fig. 38 (or 3702 in (b) of fig. 37); if the first device is in the screen-on, unlocked state and no user operation processing the prompt information is received within a preset time, the prompt information is displayed as shown at 4003 in fig. 40.
Specifically, taking (b) in fig. 38 as an example, the prompt information includes two controls, "view immediately" and "view on living room TV". The "view immediately" control is associated with the first device, and the "view on living room TV" control is associated with the device identifier of the living room TV.
In some embodiments, the prompt information includes the text content of the notification content and one or more controls. A control among the one or more controls may be associated with a device identifier and the task content in the notification content; a control may instead be associated with one or more further controls, each of which is associated with a device identifier and the task content in the notification content.
Taking (b) and (d) in fig. 37 as examples, the prompt information includes two controls, a "view immediately" control and a "select device to view" control. The "view immediately" control is associated with the first device, and the "select device to view" control is associated with one or more controls, each of which is associated with the device identifier of an electronic device.
In some embodiments, the controls in the prompt information may change as the device information changes. For example, the prompt information includes a control associated with the device identifier of the second device, and the control may trigger the second device to execute the task content. Before the first device detects a user operation on the control, the first device detects that the device information of the second device has changed and that the second device is no longer within the communication range of the first device (for example, the second device is powered off or disconnected from the network). The second device is then no longer an available device, and the first device changes the control associated with the device identifier of the second device in the prompt information, either by deleting the control or by associating the control with another device so as to trigger that device to execute the task content. Taking (a) to (d) in fig. 41 as an example, between time T1 and time T2 the controls in the same prompt information output by the first device may change.
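The binding of controls to device identifiers and task content, and the dynamic update when a device leaves communication range, can be sketched as follows. All names and the dictionary shape are illustrative assumptions.

```python
def build_controls(text, available_devices, task):
    """Build prompt information: each control binds a device identifier
    to the task content; 'local' stands for the first device itself."""
    controls = [{"label": "View immediately", "device": "local", "task": task}]
    for dev in available_devices:
        controls.append({"label": f"View on {dev}", "device": dev, "task": task})
    return {"text": text, "controls": controls}

def refresh_controls(prompt, still_available):
    """Drop controls whose device left the communication range
    (powered off, disconnected, etc.); the local control always stays."""
    prompt["controls"] = [
        c for c in prompt["controls"]
        if c["device"] == "local" or c["device"] in still_available
    ]
    return prompt
```

Triggering a control would then look up its `device` and `task` fields to address the execution message of step S6304.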
Step S6304: and responding to the user operation, and sending an execution message to the second device, wherein the execution message comprises information required by the second device for executing the task content.
The first device displays the prompt information; when the user triggers one of the controls in the prompt information and the device identifier associated with that control indicates the second device, the first device responds to the user operation by sending an execution message to the second device. The execution message includes the information required by the second device to execute the task content, including resource information, the application to execute, and the like. Specifically:
For video-type notifications, the second device may be a television, a computer, a tablet, etc. The execution message may include the URL of the video, instructing the second device to initiate a request to the network via the URL to play the video; the task content may instead include the video data itself, in which case the first device sends the video data to the second device through a protocol such as Digital Living Network Alliance (DLNA) or Miracast and instructs the second device to play the video. Text-picture notifications can also be handled with this scheme.
For audio-type notifications, the second device may be a speaker, a mobile phone, etc. The execution message may include audio data, which the first device transmits to the second device via a Bluetooth protocol, instructing the second device to play the audio; the task content may instead include a URL, instructing the second device to initiate an audio playback request to the network via the URL.
For text-picture notifications, most of the content is text, which the first device can transmit directly to the second device. Optionally, the second device extracts keywords (such as the title) from the text content to perform a search-engine query, so as to further recommend reading material to the user and improve the user's reading experience.
It should be noted that the task content acquired by the first device in step S6301 may include a URL, or may include video playing data, picture data, and the like. When the acquired resource content is a URL, the first device may download the data resource corresponding to the URL (e.g., video playing data, audio playing data, or picture data) as a local file and, in step S6304, send the path of that local data resource to the second device.
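The two forms of execution message (a URL for the second device to fetch itself, or a path to a locally downloaded resource) can be sketched as a small constructor. Field names are illustrative assumptions.

```python
def build_execution_message(task, downloaded_path=None):
    """If the first device already downloaded the resource as a local file,
    send its path; otherwise send the URL so the second device can
    request the resource from the network itself."""
    if downloaded_path is not None:
        return {"app": task["app"], "action": task["action"],
                "resource": downloaded_path, "kind": "local_file"}
    return {"app": task["app"], "action": task["action"],
            "resource": task["url"], "kind": "url"}
```

The `kind` field lets the second device choose between opening the application with a network request (URL case) and establishing a screen-projection connection (local-file case), matching the two branches described in step S6305.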
In some embodiments, the control may be associated with the time at which the first device sends the execution message, and the first device sends the execution message to the second device after a preset time in response to the user operation. For example, the first device displays the prompt information; the user triggers one of the controls, whose associated device identifier indicates the second device and which indicates that the execution message is to be sent five minutes after the control is triggered. In response to the user operation, the first device sends the execution message to the second device after five minutes.
Step S6305: the second device executes the task content.
The second device receives the execution message sent by the first device and executes the task content. The execution message includes the information required by the second device to execute the task content, including resource information, the application to execute, and the like. After receiving the execution message, the second device can automatically run the corresponding application or program to execute the task content. Specifically, when the task content includes a URL, the second device opens the corresponding application software according to the indicated application, sends a request to the network according to the URL, acquires the data resource corresponding to the URL, and outputs it in the application software; when the resource content is video playing data, audio playing data, picture data, or the like in a local file, the second device establishes a screen-projection connection with the first device and directly outputs the resource content.
For example, some video software sends notification information to the first device, where the notification information includes the URL of a video to be played, and the first device sends the URL to the second device. After the second device receives the URL, it opens the video software (or a client sharing the same server as the video software) and requests the playing data of the video from the video software's server according to the URL. The second device then plays the video through the video software. As shown in fig. 39, the living room TV directly opens the video software for playing; the video software does not need to already be running (including running in the background) before the resource content is received. The user can watch the video directly on the living room TV through a single interaction with the prompt information, saving operations and improving the user experience.
In some embodiments, the second device has no client of the video software and therefore cannot play the video to be played. After receiving the execution message sent by the first device, the second device sends a screen-projection request to the first device through the wireless connection established by the networking; the screen-projection request may carry the device identifier of the second device. After the first device receives the screen-projection request, it wakes up the screen-projection service process, which can further wake up the operating system of the first device; the operating system then generates the corresponding display data (the playing data of the video to be played) and stores it in the first device's graphics memory. Meanwhile, the first device sends the display data to the second device through a protocol such as DLNA or Miracast, and the second device displays the display data on its display screen after receiving it. Subsequently, the screen-projection service process continues to transmit the display data to the second device in real time.
In some embodiments, the control may be associated with the time at which the second device executes the task content. In response to the user operation on the control, the first device sends the second device an execution message that also carries the execution time of the task, so that the second device executes the task content according to that execution time. For example, if the second device receives the execution message sent by the first device together with an instruction to execute the task content after five minutes, the second device uses a timer to count five minutes and then executes the task content.
In some embodiments, the second device may control the progress of executing the task content. Referring to fig. 39, the second device is a living room TV, and the living room TV outputs the video resource after receiving it from the first device. While outputting the video resource, the living room TV can control the playback progress through, for example, a rewind control 3902, a play control 3903, and a fast-forward control 3904.
In this embodiment, the second device and the first device may be the same device. For example, in (b) of fig. 38, when the first device detects a user operation on the "view immediately" control, the electronic device executing the task content is the first device. The first device directly outputs the resource information in the task content without performing step S6304 above.
In the embodiment of this application, the first device acquires the first message and, according to the first message, determines from the peripheral devices the available devices that support executing the task content. The first device displays prompt information based on the available devices; the prompt information associates the task content with the device identifiers of the available devices and prompts the user to determine, among the available devices, a second device to execute the task content. The first device sends an execution message to the second device in response to a user operation, and the second device executes the task content in response to the execution message. In this embodiment, the task content is associated with the available devices, so that different available devices are provided for different task contents; and with the prompt information as a carrier, the user can achieve cross-device content output from within the prompt information and process the prompt information in real time, improving the user experience.
With reference to the foregoing embodiments, in some possible implementations, after the first device sends the execution message to the second device in response to the user operation in step S6304, the first device outputs other function controls in the prompt information.
Specifically, the other function controls include a stop control and a switch control. The stop control is associated with the device identifier of the second device and is used for instructing the second device to stop viewing; the switch control is associated with the device identifiers of the other available devices and is used for instructing another electronic device to view. Referring to (d) in fig. 40, "Viewing on living room TV" is displayed in the prompt information, indicating that the prompt information is currently being processed on the living room TV. The prompt information includes a "stop living room TV viewing" control and a "switch device to view" control; the "stop living room TV viewing" control is associated with the device identifier of the living room TV, and the "switch device to view" control is associated with one or more controls, each of which is associated with the device identifier of one electronic device.
When the first device detects a user operation on the "stop living room TV viewing" control, the first device sends a request to stop playing to the living room TV, and the second device stops executing the task content in response to the request. When the first device detects a user operation on the "switch device to view" control, the first device outputs one or more controls for the switchable devices, such as a tablet, a computer, and a mobile phone. If the user selects switching to the tablet, the first device, in response to the user operation, sends a request to stop playing to the living room TV and sends an execution message to the tablet, and the tablet executes the task content. Optionally, playback on the tablet may start from the beginning or may continue from the progress at which the living room TV stopped.
In combination with the above embodiments, in some possible implementations, the prompt information may further include a control for viewing later. Specifically, the first device receives a first message that includes a list of devices supporting execution of the task content of the first message. If no electronic device in the device list matches a peripheral device of the first device, the first device determines from the peripheral devices, according to the first message, that there are no available devices supporting execution of the task content. The prompt information output by the first device then includes a view-later control, which is associated with the device identifier of an electronic device in the device list supported by the first message. After the first device detects a user operation on this control, the first device responds to the user operation; when the first device later detects that the electronic device associated with the control is among its peripheral devices, the first device sends an execution message to that electronic device.
For example, an email application sends notification information to the mobile phone indicating that a new email has been received. The notification information indicates that the only device it supports is a computer. However, the mobile phone is not connected to the computer through the network at this time, and the available devices acquired by the mobile phone do not include the computer. The mobile phone therefore displays prompt information that includes the text content of the new-mail prompt and a "view on computer later" control.
When the mobile phone detects a user operation on the "view on computer later" control, the mobile phone continuously monitors the peripheral devices. When the mobile phone subsequently detects that a network connection with the computer has been established (i.e., the computer's device information appears among the peripheral devices), the mobile phone sends an execution message to the computer, which is used to open the mail application on the computer or to project the mail application on the mobile phone onto the computer. Specifically, after the computer receives the execution message, it pops up an email prompt box (see (a) in fig. 43) or directly opens the corresponding email (see (b) in fig. 43). In another specific scenario, after receiving the execution message, the computer sends a screen-projection request to the mobile phone, and after receiving the screen-projection request, the mobile phone sends the display data of the mail application (the text content of the new email) to the computer through a protocol such as DLNA or Miracast. The computer displays the text content of the new email in the mail application based on the display data.
When the mobile phone does not detect any user operation directed at the prompt information within a preset time, the mobile phone continuously monitors its peripheral devices. When the mobile phone subsequently detects that a network connection with the computer has been established (the computer's device information is acquired among the peripheral devices), the mobile phone displays the prompt information again, and the control in the prompt information is now a "view on computer" control. Because the computer is now within the communication range of the mobile phone, the "view on computer later" control in the prompt information changes to a "view on computer" control. That is, the controls in the prompt information may change dynamically as the device information changes.
Optionally, when the mobile phone does not detect any user operation directed at the prompt information within the preset time, the mobile phone continuously monitors its peripheral devices. When the mobile phone subsequently detects that a network connection with the computer has been established (the computer's device information is acquired among the peripheral devices), the mobile phone measures the distance between itself and the computer; if that distance is smaller than a threshold, the mobile phone outputs the prompt information again to prompt the user to view the new mail. The control in the prompt information is a "view on computer" control.
The distance between the mobile phone and the computer may be measured by, for example, Bluetooth RSSI ranging or satellite positioning. In Bluetooth RSSI ranging, the Bluetooth module of the mobile phone sends a broadcast signal and the computer sends a response signal; based on the strength of the signal received from the computer, the mobile phone calculates the distance using a positioning algorithm that converts signal strength into a distance estimate through a mathematical relationship.
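A common way to convert an RSSI reading into an approximate distance is the log-distance path-loss model. The patent does not name the algorithm it uses, so the sketch below is a standard textbook formula shown purely for illustration; the calibration constants are assumed values.

```python
def rssi_to_distance(rssi_dbm, tx_power_dbm=-59, path_loss_exponent=2.0):
    """Estimate distance in metres from received signal strength using the
    log-distance path-loss model: d = 10 ** ((P_tx - RSSI) / (10 * n)).
    tx_power_dbm is the calibrated RSSI at 1 m (a typical Bluetooth value
    is around -59 dBm); path_loss_exponent n is ~2 in free space and
    larger indoors due to walls and interference."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exponent))

# At the 1 m calibration power the estimate is exactly 1 m;
# a reading 20 dB weaker corresponds to ~10 m in free space.
d_near = rssi_to_distance(-59)
d_far = rssi_to_distance(-79)
```

The result can then be compared against the threshold mentioned above to decide whether to re-display the prompt information.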
The distance between the mobile phone and the computer may also be measured with a wearable device. For example, the mobile phone and a smart watch are connected through Bluetooth; when the mobile phone receives a new-mail prompt sent by the mail application, the mobile phone displays the prompt information and synchronizes it to the smart watch through Bluetooth. The mobile phone and the computer are in the same local area network. The smart watch measures the distance between itself and the computer; if that distance is smaller than a threshold, the smart watch sends an indication message to the mobile phone. In response to the indication message, the mobile phone sends the prompt information to the computer; the computer receives the prompt information and outputs an email prompt box on its display screen (see (a) in fig. 43), prompting the user to view the new mail.
In some application scenarios, when the mail application receives a new mail, it sends notification information to the mobile phone; after receiving the notification information, the mobile phone determines, based on the notification information, the available devices among its peripheral devices that support executing the task content. If the devices supported in the notification information are a computer and a mobile phone, and no available device other than the mobile phone itself is found, the mobile phone displays prompt information that includes a "view now" control and a "view on computer later" control.
Based on this application scenario, in a home scenario, each electronic device in the home can automatically join the same local area network. Suppose the user wears a smart watch and sits working at a computer desk, and the user's mobile phone is not nearby. When the mail application receives a new mail, the mobile phone displays prompt information that includes a "view now" control and a "view on computer later" control. The mobile phone synchronizes the prompt information to the smart watch through Bluetooth; because the smart watch lacks the capability or permission to view the mail, only the "view on computer later" control is displayed on the smart watch. The user can then trigger the "view on computer later" control on the smart watch, turn on the computer, and connect it to the network. After detecting the user operation, the smart watch sends an instruction to the mobile phone; in response to the instruction, when the mobile phone detects that a network connection with the computer has been established, the mobile phone sends an execution message to the computer, where the execution message is used to open the mail application on the computer or to project the mobile phone's mail application onto the computer.
In some embodiments, the first device receives notification information that includes a list of devices supported by the notification type of the notification information. If exactly one electronic device in the device list matches a peripheral device of the first device, that device is the available device. If the available device is in a standby state (screen off), the prompt information output by the first device includes a view-later control, and the control is associated with the device identifier of the available device. After the first device detects a user operation directed at the control, the first device responds to the user operation by sending an execution message to the available device. When the available device is in the on state, the available device executes the task content.
The task processing method provided in this embodiment of the application may include the following steps:
s6311, the first device acquires a first message, and the first message comprises task content and description information of the task content.
Step S6311 can refer to the description of step S6301 in the illustrated embodiment of fig. 63, and is not described herein again.
S6312, the first device displays a first prompt window on the first interface, the first prompt window comprises a first control and description information of task content, and the first control is associated with the second device.
Step S6312 may refer to the description of step S6303 in the embodiment shown in fig. 63, and details are not repeated here. The first prompt window may be the prompt information in step S6303. The first prompt window may be, for example, those shown in (a) to (d) of fig. 37 and the notification bar 3702 in (a) and (b) of fig. 38; the first control may be, for example, the control 3705 in (b) of fig. 38, and the second device may be, for example, the living room TV shown in fig. 39.
S6313, the first device receives a first input from the user directed at the first control. The first input may be, without limitation, a click operation, a touch operation, a voice operation, or the like.
S6314, in response to the first input, the first device sends a first instruction to the second device, where the first instruction is used to instruct the second device to execute the task content.
Step S6314 may refer to the description of step S6304 in the embodiment shown in fig. 63, and is not described herein again. The first instruction may be an execution message in step S6304.
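The flow of steps S6311 through S6314 on the first device can be sketched as follows. This is an illustrative model only: the class name, callback shapes, and message fields are assumptions, since the patent leaves the message format and UI layer open.

```python
class FirstDevice:
    """Minimal sketch of the first device's role in S6311-S6314.
    send_fn delivers messages to peer devices; display_fn stands in
    for the UI layer that renders the first prompt window."""

    def __init__(self, send_fn, display_fn):
        self.send = send_fn
        self.display = display_fn
        self.pending = None

    def on_first_message(self, message):            # S6311: acquire first message
        self.pending = message
        # S6312: display the first prompt window with the description of
        # the task content and a first control associated with the second device.
        self.display({"text": message["description"],
                      "control": message["second_device"]})

    def on_first_input(self, control_device_id):    # S6313: receive first input
        # S6314: send the first instruction telling that device
        # to execute the task content.
        self.send(control_device_id,
                  {"instruction": "execute", "task": self.pending["task"]})

sent, shown = [], []
dev = FirstDevice(lambda to, m: sent.append((to, m)), shown.append)
dev.on_first_message({"task": "play_video", "description": "New video",
                      "second_device": "tv-01"})
dev.on_first_input("tv-01")
```

The two callbacks let the same control logic be exercised against a real transport and UI or, as here, against plain lists for testing.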
In some embodiments, after the first device sends the first instruction to the second device, the method further includes: the first device displays a second control; the first device receives a second input from the user directed at the second control; and in response to the second input, the first device sends a second instruction to the second device, where the second instruction is used to instruct the second device to stop executing the task content. A stop control (the second control) is thus provided: the first device may output it while the second device is executing the task content, and the user controls the second device to stop executing the task content through it. The user can stop the device's execution of the task content at any time, which improves user experience.
Here, the second control may be, for example, control 4101 in (d) of fig. 40. The electronic device receives a user operation (second input) directed to the control 4101, instructing the living room TV (second device) to stop executing the task content.
In some embodiments, after the first device sends the first instruction to the second device, the method further includes: the first device displays a third control, where the third control is associated with a third device; the first device receives a third input from the user directed at the third control; in response to the third input, the first device sends a second instruction to the second device, where the second instruction is used to instruct the second device to stop executing the task content; and in response to the third input, the first device sends a third instruction to the third device, where the third instruction is used to instruct the third device to execute the task content. A switching control (the third control) is thus provided: while the second device is executing the task content, the first device may output the switching control, and through it the user stops the second device and instructs another device to execute the task content. This achieves real-time switching, so the user can change the device that executes the task content at any time, which improves user experience.
Optionally, the third device may execute the task content from the beginning, or continue executing it from the second device's progress. For example, the second device is executing the task content by playing a video; the first device then receives a user operation to switch playback to the third device, and the first device instructs the second device to stop playing the video while instructing the third device to play it. The third device may play from the beginning or continue from the point at which the second device stopped.
The third control may be, for example, the control 4103 or the control 4104 in (f) of fig. 40, where the control 4103 and the control 4104 are associated with a pad and a computer, respectively. The electronic device receives a user operation (third input) directed at the control 4103 and instructs the pad (third device) to execute the task content; alternatively, the electronic device receives a user operation (third input) directed at the control 4104 and instructs the computer (third device) to execute the task content.
Optionally, the third control may also be, for example, the control 4102 in (d) of fig. 40, where the control 4102 is associated with two controls, the control 4103 and the control 4104, which are in turn associated with a pad and a computer, respectively. The electronic device receives a user operation (third input) directed at the control 4102, and by selecting the control 4103 or the control 4104 the first device can instruct the corresponding device (third device) to execute the task content.
In some embodiments, the first interface is a lock screen interface. In this case, the first device sends the first instruction to the second device as follows: in response to the first input, when the first device detects an unlocking operation on the lock screen interface and the unlocking succeeds, the first device sends the first instruction to the second device. In other words, in a scenario where the first prompt window is output on the lock screen interface, when the electronic device detects the first input, the device must first be unlocked before the instruction to execute the task content is sent to the second device.
The lock screen interface may be, for example, a user interface shown in fig. 37 (a) or fig. 38 (a).
In some embodiments, the first message includes a task type of the task content, and before the first device displays the first prompt window on the first interface, the method further includes: the first device acquires device information of one or more devices within its communication range; the first device determines, based on that device information, one or more available devices that support executing the task type of the task content, where the available devices include the second device. The second device is a device within the communication range of the first device that supports the task type of the task content. The task type may include a video-type task, an audio-type task, a text-type task, and the like; correspondingly, a device supporting a video-type task needs a display function and an audio function, a device supporting an audio-type task needs an audio function, and a device supporting a text-type task needs a display function, and so on.
In some embodiments, the first message includes a list of devices that support executing the task content, and before the first device displays the first prompt window on the first interface, the method further includes: the first device acquires device information of one or more devices within its communication range; the first device determines one or more available devices from that device information based on the device list, where the available devices include the second device. The device list in the first message may be a list of device types, for example, computer or tablet; it may be a list of device attributes, for example, devices having a display function and an audio function; or it may be a list of specific device identifiers, each identifier representing one device, and so on.
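The availability check described in the two embodiments above can be sketched as a capability match: each peripheral device advertises a set of capabilities, and a device is available if its capabilities cover what the task requires. The capability names and data shapes below are assumptions for illustration; the patent also allows matching by device type or by concrete device identifier, which would follow the same pattern.

```python
def available_devices(peripherals, required_capabilities):
    """peripherals: {device_id: set of capability strings};
    return the ids of devices whose capabilities cover the requirement,
    i.e. the 'available devices' for this task."""
    return [dev_id for dev_id, caps in peripherals.items()
            if required_capabilities <= caps]

peripherals = {
    "tv-01": {"display", "audio"},
    "speaker-01": {"audio"},
}
# A video-type task requires both a display function and an audio function,
# so only the TV qualifies here.
avail = available_devices(peripherals, {"display", "audio"})
```

An audio-type task (`{"audio"}`) would instead match both devices, mirroring the per-task-type requirements listed above.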
In some embodiments, the method further includes: the first device determines the available device with the highest priority among the one or more available devices as the second device. Selecting a unique available device by priority provides the user with the device most suitable for executing the task content and reduces the user's selection operations. The priority information may be set by the user, may be a system default of the first device, may be set by a third-party application, may be automatically determined by the first device according to device attributes, and so on.
In some embodiments, the method further includes: the first device determines the available device with the smallest physical distance from the first device among the one or more available devices as the second device. Selecting a unique available device by physical distance provides the user with the device most suitable for executing the task content and reduces the user's selection operations.
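The two selection policies above (highest priority, smallest physical distance) reduce to a one-line pick over the available devices. The priority values and distances below are assumed inputs; in practice they would come from user settings or the ranging methods described earlier.

```python
def pick_by_priority(available, priority):
    """available: list of device ids; priority: {id: int}, larger wins.
    Devices missing from the priority map default to 0."""
    return max(available, key=lambda d: priority.get(d, 0))

def pick_by_distance(available, distance_m):
    """distance_m: {id: metres from the first device}; the nearest
    available device becomes the second device."""
    return min(available, key=lambda d: distance_m[d])

best_by_priority = pick_by_priority(
    ["tv-01", "pad-01"], {"tv-01": 2, "pad-01": 5})
best_by_distance = pick_by_distance(
    ["tv-01", "pad-01"], {"tv-01": 4.0, "pad-01": 1.5})
```

Either policy yields a unique second device, so the prompt window can show a single first control without asking the user to choose.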
In some embodiments, the first prompt window further includes a fourth control, the fourth control is associated with one or more controls, and each of the one or more controls is associated with an available device other than the second device. After the first device displays the first prompt window on the first interface, the method further includes: when the first device detects a fourth input directed at the fourth control, the first device displays the one or more controls. A device-list selection control (the fourth control) is thus provided: after the first device determines one or more available devices, it may output the fourth control, through which the user can view the one or more available devices and then choose the device that executes the task content. This achieves autonomous selection, so the user can pick the executing device from multiple devices, which improves user experience.
The fourth control may be, for example, the control 3704 in (a) to (d) of fig. 37, or the control 40034 in (a), (c), and (e) of fig. 41. Illustratively, the electronic device receives a user operation (fourth input) directed at the control 40034 in (a) of fig. 41, and the first device outputs the control 40035 and the control 40036. The control 40035 is associated with a pad, the control 40036 is associated with a computer, and both the pad and the computer are available devices.
In some embodiments, the one or more controls include a fifth control associated with a fifth device, and after the first device displays the first prompt window on the first interface, the method further includes: when the fifth device is no longer among the one or more devices within the communication range of the first device, the first device deletes the fifth control; when the first device then detects a fourth input directed at the fourth control, the first device displays the one or more controls, and the fifth control is not included among them. That is, the controls in the first prompt window may change as device states change. At a first time, the fifth device is an available device, and the first device outputs the fifth control associated with it; at a second time, the state of the fifth device changes (it is no longer within the communication range of the first device), and the first device deletes the fifth control. Similarly, if the state of the fifth device changes again at a third time (it is again within the communication range of the first device), the first device outputs the fifth control again. Changing the output controls according to device state in this way improves timeliness: the latest available devices can be presented to the user in real time, which improves user experience.
The fifth control may be, for example, the control 40035 in (b) of fig. 41, in which case the fifth device is the device associated with the control 40035. The first time is T1, and (b) of fig. 41 includes the control 40035; the second time is T2, at which point the state of the pad corresponding to the control 40035 changes (it is not within the communication range of the first device), so the control 40035 is not included in (d) of fig. 41; the third time is T3, at which point the state of the pad changes again (it is within the communication range of the first device), so the control 40035 is included in (f) of fig. 41.
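The dynamic control list at T1, T2, and T3 can be sketched by rebuilding the controls from the current peripheral set on each refresh, so a control disappears when its device leaves communication range and reappears when it returns. The device ids and control-label format are illustrative assumptions.

```python
def refresh_controls(candidate_devices, in_range):
    """Rebuild the prompt window's device controls from the set of
    devices currently within communication range: only candidates that
    are in range get a control."""
    return [f"view-on-{d}" for d in candidate_devices if d in in_range]

candidates = ["pad-01", "computer-01"]
t1 = refresh_controls(candidates, {"pad-01", "computer-01"})  # both in range
t2 = refresh_controls(candidates, {"computer-01"})            # pad left range
t3 = refresh_controls(candidates, {"pad-01", "computer-01"})  # pad returned
```

Because the list is recomputed rather than patched, the fifth control's deletion at T2 and reappearance at T3 fall out automatically.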
In some embodiments, the second device and the first device are the same device.
In some embodiments, the first device and the second device are logged into the same account or associated accounts of the same account.
In some embodiments, the first message comprises: mail notification information, video application notification information, instant messaging message notification information, and video call notification information.
In some embodiments, the first device is a cell phone or watch and the second device is a computer or tablet or television.
A schematic block diagram of a task processing system provided in an embodiment of the present application is described below.
As shown in fig. 64, the task processing system includes a first device and a second device. The first device includes a notification processing module 6401, a device management module 6402, and a notification presentation module 6403; the second device includes a notification processing module 6404 and a notification service module 6405.
1. A third-party application or a system application sends notification information, including text content and task content, to the notification processing module 6401 of the first device. The notification information may be referred to as the first message.
2. The notification processing module 6401 requests device information of peripheral devices from the device management module 6402.
3. The notification processing module 6401 determines, according to the type of the task content, the available devices among the peripheral devices that support executing the task content; it associates the device identifiers of the available devices with controls and instructs the notification presentation module 6403 to present prompt information that includes the controls of the available devices. The prompt information may be referred to as the first prompt window, and the controls in the prompt information include the first control.
4. The notification presentation module 6403 receives a user operation (first input) directed at a control (the first control) in the prompt information and feeds the user operation back to the notification processing module 6401.
5. The notification processing module 6401 sends an execution message to a device (second device) associated with the control, the execution message including task content.
6. After receiving the execution message, the notification processing module 6404 of the second device parses the execution message, obtains the task content, and instructs the notification service module 6405 to execute the task content.
For parts that are not described in detail in the embodiments of the present application, reference may be made to the embodiment shown in fig. 63, which is not described herein again.
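The six numbered interactions above can be condensed into a single end-to-end sketch, with the four modules reduced to plain functions. The message shapes and capability labels are assumptions; the point is only the ordering: gather peripherals, build the prompt, and dispatch on user input.

```python
def run_notification_flow(notification, peripherals, user_taps_control=True):
    """notification: the first message from a third-party or system app;
    peripherals: {device_id: set of supported task types}, as would be
    returned by the device management module (steps 1-2)."""
    # Step 3: determine available devices for the task type and build
    # the prompt information (first prompt window) with their controls.
    available = [d for d, caps in peripherals.items()
                 if notification["task_type"] in caps]
    prompt = {"text": notification["text"], "controls": available}
    executed = []
    # Steps 4-5: on the first input, send the execution message
    # (here to the first available device, standing in for the one
    # whose control was tapped).
    if user_taps_control and available:
        target = available[0]
        # Step 6: the second device parses the message and executes
        # the task content via its notification service module.
        executed.append((target, notification["task"]))
    return prompt, executed

prompt, executed = run_notification_flow(
    {"text": "New mail", "task": "open_mail", "task_type": "text"},
    {"computer-01": {"text", "video"}, "speaker-01": {"audio"}})
```

In the real system steps 5 and 6 cross a device boundary; here they are collapsed into one process for readability.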
In some embodiments, the first device is further configured to: after sending the first instruction to the second device, display a second control; receive a second input from the user directed at the second control; and, in response to the second input, send a second instruction to the second device. The second device is further configured to stop executing the task content based on the received second instruction. A stop control (the second control) is thus provided: the first device may output it while the second device is executing the task content, and the user controls the second device to stop executing the task content through it. The user can stop the device's execution of the task content at any time, which improves user experience.
The second control may be, for example, the control 4101 in (d) of fig. 40. The electronic device receives a user operation (second input) directed at the control 4101, instructing the living room TV (second device) to stop executing the task content.
In some embodiments, the system further includes a third device. The first device is further configured to: after sending the first instruction to the second device, display a third control associated with the third device; receive a third input from the user directed at the third control; in response to the third input, send a second instruction to the second device; and, in response to the third input, send a third instruction to the third device. The second device is further configured to stop executing the task content based on the received second instruction, and the third device is configured to execute the task content based on the received third instruction. A switching control (the third control) is thus provided: while the second device is executing the task content, the first device may output the switching control, and through it the user stops the second device and instructs another device to execute the task content. This achieves real-time switching, so the user can change the device that executes the task content at any time, which improves user experience.
Optionally, the third device may execute the task content from the beginning, or continue executing it from the second device's progress. For example, the second device is executing the task content by playing a video; the first device then receives a user operation to switch playback to the third device, and the first device instructs the second device to stop playing the video while instructing the third device to play it. The third device may play from the beginning or continue from the point at which the second device stopped.
The third control may be, for example, the control 4103 or the control 4104 in (f) of fig. 40, where the control 4103 and the control 4104 are associated with a pad and a computer, respectively. The electronic device receives a user operation (third input) directed at the control 4103 and instructs the pad (third device) to execute the task content; alternatively, the electronic device receives a user operation (third input) directed at the control 4104 and instructs the computer (third device) to execute the task content.
Optionally, the third control may also be, for example, the control 4102 in (d) of fig. 40, where the control 4102 is associated with two controls, the control 4103 and the control 4104, which are in turn associated with a pad and a computer, respectively. The electronic device receives a user operation (third input) directed at the control 4102, and by selecting the control 4103 or the control 4104 the first device can instruct the corresponding device (third device) to execute the task content.
In some embodiments, the first interface is a lock screen interface. The first device is configured to, in response to the first input, send the first instruction to the second device after detecting an unlocking operation on the lock screen interface and after the unlocking succeeds. In a scenario where the first prompt window is output on the lock screen interface, when the electronic device detects the first input, the device must first be unlocked before the instruction to execute the task content is sent to the second device.
The lock screen interface may be, for example, a user interface shown in fig. 37 (a) or fig. 38 (a).
In some embodiments, the first message includes a task type of the task content. The first device is further configured to acquire device information of one or more devices within its communication range before displaying the first prompt window on the first interface, and to determine, based on that device information, one or more available devices that support executing the task type of the task content, where the available devices include the second device. The second device is a device within the communication range of the first device that supports the task type of the task content. The task type may include a video-type task, an audio-type task, a text-type task, and the like; correspondingly, a device supporting a video-type task needs a display function and an audio function, a device supporting an audio-type task needs an audio function, and a device supporting a text-type task needs a display function, and so on.
In some embodiments, the first message includes a list of devices that support executing the task content. The first device is further configured to acquire device information of one or more devices within its communication range before displaying the first prompt window on the first interface, and to determine one or more available devices from that device information based on the device list, where the available devices include the second device. The device list in the first message may be a list of device types, for example, computer or tablet; it may be a list of device attributes, for example, devices having a display function and an audio function; or it may be a list of specific device identifiers, each identifier representing one device, and so on.
In some embodiments, the first device is further configured to determine the available device with the highest priority among the one or more available devices as the second device. Selecting a unique available device by priority provides the user with the device most suitable for executing the task content and reduces the user's selection operations. The priority information may be set by the user, may be a system default of the first device, may be set by a third-party application, may be automatically determined by the first device according to device attributes, and so on.
In some embodiments, the first device is further configured to determine the available device with the smallest physical distance from the first device among the one or more available devices as the second device. Selecting a unique available device by physical distance provides the user with the device most suitable for executing the task content and reduces the user's selection operations.
In some embodiments, the first prompt window further includes a fourth control, the fourth control is associated with one or more controls, and each of the one or more controls is associated with an available device other than the second device. The first device is further configured to display the one or more controls upon detecting a fourth input directed at the fourth control after displaying the first prompt window on the first interface. A device-list selection control (the fourth control) is thus provided: after the first device determines one or more available devices, it may output the fourth control, through which the user can view the one or more available devices and then choose the device that executes the task content. This achieves autonomous selection, so the user can pick the executing device from multiple devices, which improves user experience.
The fourth control may be, for example, the control 3704 in (a) to (d) of fig. 37, or the control 40034 in (a), (c), and (e) of fig. 41. Illustratively, the electronic device receives a user operation (fourth input) directed at the control 40034 in (a) of fig. 41, and the first device outputs the control 40035 and the control 40036. The control 40035 is associated with a pad, the control 40036 is associated with a computer, and both the pad and the computer are available devices.
In some embodiments, the one or more controls include a fifth control associated with a fifth device. The first device is further configured to delete the fifth control when, after the first prompt window is displayed on the first interface, the fifth device is no longer among the one or more devices within the communication range of the first device; the first device is further configured to detect a fourth input directed at the fourth control and display the one or more controls, which do not include the fifth control. That is, the controls in the first prompt window may change as device states change. At a first time, the fifth device is an available device, and the first device outputs the fifth control associated with it; at a second time, the state of the fifth device changes (it is no longer within the communication range of the first device), and the first device deletes the fifth control. Similarly, if the state of the fifth device changes again at a third time (it is again within the communication range of the first device), the first device outputs the fifth control again. Changing the output controls according to device state in this way improves timeliness: the latest available devices can be presented to the user in real time, which improves user experience.
The fifth control may be, for example, control 40035 in (b) of fig. 41, in which case the fifth device is the device associated with control 40035. The first moment is T1, and (b) of fig. 41 includes control 40035; the second moment is T2, at which the state of the pad corresponding to control 40035 changes (it is not within the communication range of the first device), so control 40035 is not included in (d) of fig. 41; the third moment is T3, at which the state of the pad changes again (it is within the communication range of the first device), so control 40035 is included in (f) of fig. 41.
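The availability-driven control updates at T1, T2, and T3 described above can be sketched as follows. This is a minimal illustration only; the class and method names are assumptions, not part of the embodiment.

```python
# Hypothetical sketch of a prompt window whose device controls track which
# devices are currently within communication range.
class PromptWindow:
    def __init__(self):
        self.controls = []  # one control (represented by a device name) per available device

    def refresh(self, devices_in_range):
        """Rebuild the control list from the devices currently in range."""
        self.controls = sorted(devices_in_range)

window = PromptWindow()

# T1: the pad is within communication range, so its control is output.
window.refresh({"pad", "computer"})
assert "pad" in window.controls

# T2: the pad leaves communication range; its control is deleted.
window.refresh({"computer"})
assert "pad" not in window.controls

# T3: the pad returns to communication range; its control is output again.
window.refresh({"pad", "computer"})
assert "pad" in window.controls
```

The key design point is that the control list is derived from device state rather than stored independently, so the prompt window always reflects the latest available devices.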
In some embodiments, the second device and the first device are the same device.
In some embodiments, the first device and the second device are logged into the same account or associated accounts of the same account.
In some embodiments, the first message comprises: mail notification information, video application notification information, instant messaging message notification information, and video call notification information.
In some embodiments, the first device is a cell phone or watch and the second device is a computer or tablet or television.
FIG. 65 illustrates another set of GUIs provided by an embodiment of the present application.
As shown in fig. 65, the handset receives a social App message from the user Tom (e.g., the message content is "Happy Birthday!"), and a message reminder box 6501 may be displayed. At this time, the mobile phone may forward the message to the smart watch the user is focusing on, and the smart watch may display a message alert box 6502. The message alert box 6502 includes the source of the message (e.g., the name of the social App, and that the message is from the P40 of the user Lily), reminder information ("a message received"), and a control 6502.
In one embodiment, the mobile phone may determine that the prompting device is the smart watch and the continuation device is the notebook computer, so that the mobile phone may forward the message to the prompting device and indicate information of the continuation device to the prompting device.
It should be understood that the process of the mobile phone determining the prompting device and the continuation device may refer to the description in the method embodiment shown in fig. 54, and for brevity is not described again here.
In one embodiment, the mobile phone may forward the message to the smart watch after determining that the prompting device is the smart watch. After receiving the message, the smart watch may determine the continuation device, thereby generating the control 6502 for replying to the message on the continuation device.
It should be understood that the process of the smart watch determining that the continuation device is the notebook computer may refer to the description of the method embodiment shown in fig. 54, and for brevity is not described again here.
In one embodiment, the message forwarded by the mobile phone to the smart watch may also carry the message content (for example, "Happy Birthday!"), and the smart watch may also display the message content. In response to detecting the user's operation of clicking the control 6502, the smart watch may forward the message to the notebook computer. In response to receiving this message, the notebook computer may display a reminder box 6503, where the message reminder box 6503 includes the source of the message (e.g., the name of the social App, and that the message is from the P40 of the user Lily), the user information of the message sender (e.g., "Tom" and the avatar of the user Tom), the content of the message (e.g., "Happy Birthday!"), and a reply control.
It should be understood that, in the embodiment of the present application, a message sent by the mobile phone to the smart watch may carry the content of the message, the source of the message, and the user information of the message sender. Because the screen of the watch is relatively small, the watch may select which information to display. As shown in fig. 65, the watch may choose to display the source of the message (e.g., the name of the social App, and that the message is from the P40 of the user Lily).
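The forwarded message fields and the small-screen selection described above can be sketched as follows. The field names and the small-screen rule are illustrative assumptions for this sketch, not the embodiment's actual message format.

```python
# Hypothetical sketch of a forwarded-message payload and of a small-screen
# device (e.g. a watch) selecting which fields to display.
from dataclasses import dataclass

@dataclass
class ForwardedMessage:
    source: str   # e.g. the social App name and source device
    sender: str   # user information of the message sender, e.g. "Tom"
    content: str  # e.g. "Happy Birthday!"

def fields_to_display(msg, small_screen):
    # A watch with a small screen may show only the message source plus a
    # generic reminder, while a larger device also shows sender and content.
    if small_screen:
        return {"source": msg.source, "reminder": "a message received"}
    return {"source": msg.source, "sender": msg.sender, "content": msg.content}

msg = ForwardedMessage("social App via P40", "Tom", "Happy Birthday!")
assert "content" not in fields_to_display(msg, small_screen=True)
assert fields_to_display(msg, small_screen=False)["sender"] == "Tom"
```

In other words, the full payload travels with the message; only the rendering differs per device.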
In one embodiment, in response to detecting the operation of the user clicking the control 6502, the smart watch may further send an indication message to the notebook computer, where the indication message is used to instruct the notebook computer to add a reply control to the message.
It should be understood that the process of the notebook computer detecting that the user replies to the message may refer to the GUIs shown in fig. 16 to fig. 20, and for brevity is not described again here.
In one embodiment, the message forwarded by the cell phone to the smart watch may not carry the message content (e.g., "Happy Birthday!"). In response to detecting the user's operation of clicking the control 6502, the smart watch may indicate to the cell phone that the user selects to reply on the notebook computer. After receiving the indication from the smart watch, the mobile phone may forward the message to the notebook computer, where the message forwarded by the mobile phone to the notebook computer may carry the message content.
In one embodiment, the mobile phone may further send an indication message to the notebook computer, where the indication message is used to instruct the notebook computer to add a reply control to the message.
It should be understood that the smart watch may be logged in to the same social App account as the cell phone; or the smart watch may have the social App installed but the user does not log in to the social App using the same account as on the mobile phone; alternatively, the smart watch may be a device on which the social App is not installed.
In one embodiment, the notebook computer may send the reply content to the smart watch in response to detecting the content replied by the user. After receiving the reply content, the smart watch may send it to the mobile phone, and the mobile phone replies to the message according to the reply content.
In one embodiment, the smart watch may also indicate information of the source device (the cell phone) to the notebook computer. Then, when the notebook computer detects the user's reply content, it may send the reply content directly to the mobile phone, and the mobile phone replies to the message according to the reply content.
It should be understood that, for the specific implementation processes of the notebook computer sending the reply content to the smart watch, the smart watch sending the reply content to the mobile phone, and the notebook computer sending the reply content to the mobile phone, reference may be made to the implementation process shown in fig. 47, and for brevity they are not described in detail here.
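The two reply routes described in the preceding embodiments can be sketched as follows: relaying via the prompting device (notebook → watch → phone), or sending directly to the source device once the watch has indicated it (notebook → phone). The device names and function are illustrative.

```python
# Minimal sketch of the two reply-relay routes for getting the user's reply
# content back to the source device (the phone), which performs the real reply.
def route_reply(via_prompting_device):
    """Return the hop sequence the reply content travels along."""
    if via_prompting_device:
        # The notebook sends the reply to the watch, which relays it to the phone.
        return ["notebook", "watch", "phone"]
    # The watch indicated the source device to the notebook, so the
    # notebook sends the reply content directly to the phone.
    return ["notebook", "phone"]

assert route_reply(via_prompting_device=True) == ["notebook", "watch", "phone"]
assert route_reply(via_prompting_device=False) == ["notebook", "phone"]
```

Either way, the final hop is always the phone, since only the source device can actually reply within the social App.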
FIG. 66 illustrates another set of GUIs provided by an embodiment of the present application.
As shown in fig. 66, the handset receives a social App message from the user Tom (e.g., the message content is "Happy Birthday!"), and a message reminder box 6601 may be displayed. At this point, the mobile phone may forward the message to the tablet computer the user is focusing on, and the tablet computer may display a message alert box 6602. The message reminder box 6602 includes the source of the message (e.g., the name of the social App, and that the message is from the P40 of the user Lily), a reminder ("a message received"), a control 6603, and a control 6604.
In one embodiment, the mobile phone may determine that the prompting device is the tablet computer and the continuation device is the notebook computer, so that the mobile phone may forward the message to the prompting device and indicate information of the continuation device to the prompting device.
It should be understood that the process of the mobile phone determining the prompting device and the continuation device may refer to the description in the method embodiment shown in fig. 54, and for brevity is not described again here.
In one embodiment, the mobile phone may forward the message to the tablet computer after determining that the prompting device is the tablet computer. After receiving the message, the tablet computer may determine the continuation device, thereby generating the control 6604 for replying to the message on the continuation device.
In one embodiment, the mobile phone may further carry indication information in the message forwarded to the tablet computer, where the indication information is used to instruct the tablet computer to add a reply control to the message. The tablet computer may display a control 6603 in response to receiving the indication information.
It should be understood that the display process after the tablet computer detects the user's operation of clicking the control 6603 may refer to the GUIs shown in fig. 16 to fig. 20, and for brevity is not described again here.
In one embodiment, the message forwarded by the mobile phone to the tablet computer may carry the message content (e.g., "Happy Birthday!"). After receiving the message content, the tablet computer may display it in the message alert box 6602. In response to detecting the user's operation of clicking the control 6604, the tablet computer may forward the message to the notebook computer. In response to receiving the message, the notebook computer may display a reminder box 6605, where the message reminder box 6605 includes the source of the message (e.g., the name of the social App, and that the message is from the P40 of the user Lily), the user information of the message sender (e.g., "Tom" and the avatar of the user Tom), the content of the message (e.g., "Happy Birthday!"), and a reply control.
In one embodiment, in response to detecting the user's operation of clicking the control 6604, the tablet computer may further send an indication message to the notebook computer, where the indication message is used to instruct the notebook computer to add a reply control to the message.
It should be understood that the process of the notebook computer detecting that the user replies to the message may refer to the above-mentioned GUIs shown in fig. 16 to 20, and for brevity, will not be described again.
In one embodiment, the message forwarded by the mobile phone to the tablet computer may not carry the message content (e.g., "Happy Birthday!"). In response to detecting the user's operation of clicking the control 6604, the tablet computer may indicate to the cell phone that the user selects to reply on the notebook computer. After receiving the indication from the tablet computer, the mobile phone may forward the message to the notebook computer, where the message forwarded by the mobile phone to the notebook computer may carry the message content.
In one embodiment, the mobile phone may further send an indication message to the notebook computer, where the indication message is used to instruct the notebook computer to add a reply control to the message.
It should be understood that the tablet computer may be logged in to the same social App account as the mobile phone; or the tablet computer may have the social App installed but the user does not log in to the social App using the same account as on the mobile phone; alternatively, the tablet computer may be a device on which the social App is not installed.
FIG. 67 illustrates another set of GUIs provided by an embodiment of the present application.
As shown in fig. 67, the handset receives a social App message from the user Tom (e.g., the message content is "Happy Birthday!"). At this point, the mobile phone may forward the message to the tablet computer the user is focusing on, and the tablet computer may display a message alert box 6701. The message reminder box 6701 includes the source of the message (e.g., the name of the social App, and that the message is from the P40 of the user Lily), a reminder ("a message received"), an immediate reply control, a notebook reply control, and a control 6702. The control 6702 is associated with one or more controls, and each of the one or more controls is associated with an available device other than the notebook computer. Illustratively, when the tablet computer detects the user's operation of clicking the control 6702, the tablet computer may display a device list 6703, where the device list 6703 includes controls corresponding to a plurality of devices (e.g., a living room TV and a speaker).
In response to detecting the user's operation of clicking the control corresponding to the living room TV, the tablet computer may forward the message to the living room TV; or, in response to detecting that operation, the tablet computer may indicate to the mobile phone that the user selects the living room TV as the device for replying to the message, and the mobile phone may forward the message to the living room TV after receiving the indication from the tablet computer. The living room TV may display a message alert box 6704 after receiving the message, and the user may complete a virtual reply to the message on the living room TV. When detecting the user's reply content, the living room TV may send the reply content to the mobile phone, thereby completing the real reply to the message on the mobile phone.
It should be understood that, for the process that the tablet forwards the message to the living room TV, or the tablet triggers the mobile phone to forward the message to the living room TV, reference may be made to the description in the foregoing embodiment, and for brevity, no further description is given here.
In one embodiment, the message alert box 6701 also includes an immediate reply control, which is associated with the tablet computer. When the tablet computer detects the user's operation of clicking the immediate reply control, the tablet computer may display the GUI shown in (b) of fig. 16. In one embodiment, the devices in the device list 6703 may be devices detected by the tablet computer as being within its communication range.
In one embodiment, the devices in the device list 6703 may also be devices detected by the handset as being within the handset's communication range. The cell phone may send information of multiple devices (e.g., a living room TV and a speaker) to the tablet computer, causing the tablet computer to display the multiple devices in the device list 6703.
In one embodiment, if the devices in the device list 6703 are devices detected by the tablet computer as being within its communication range, the tablet computer may delete the information of the living room TV from the device list 6703 when the devices within its communication range no longer include the living room TV.
In one embodiment, if the devices in the device list 6703 are devices detected by the cell phone as being within the cell phone's communication range, then when the devices within the cell phone's communication range no longer include the living room TV, the cell phone may instruct the tablet computer to delete the information of the living room TV from the device list 6703.
FIG. 68 illustrates another set of GUIs provided by an embodiment of the present application.
(a) of fig. 68 shows a GUI that is a notification center interface of the tablet computer. The notification center interface displays a number of switch controls, such as Bluetooth, flashlight, airplane mode, and the like. The notification center interface further includes notification management, where the notification management includes a reminder box 6801 of the message, and the reminder box 6801 includes the current state of the message (being replied to on the notebook computer), a control 6802, and a control 6803.
In response to detecting the user's operation of clicking the control 6802, the tablet computer may send indication information to the notebook computer, where the indication information is used to instruct the notebook computer to stop replying to the message. Illustratively, upon receiving the indication, the notebook computer may hide the reminder box of the message (e.g., the reminder box 6503).
Alternatively, in response to detecting the user's operation of clicking the control 6802, the tablet computer may send indication information to the mobile phone, where the indication information is used to indicate that the user selects to stop replying to the message on the notebook computer. After receiving the indication from the tablet computer, the mobile phone may instruct the notebook computer to stop replying to the message. Illustratively, upon receiving the indication, the notebook computer may hide the reminder box of the message (e.g., the reminder box 6503).
(b) of fig. 68 shows a GUI that is another notification center interface of the tablet computer. In response to detecting the user's operation of clicking the control 6803, the tablet computer may display a control 6804 and a control 6805, where the control 6804 is a control for switching to replying on the living room TV, and the control 6805 is a control for switching to replying on the speaker.
When the tablet computer detects the user's operation of clicking the control 6804, the tablet computer may instruct the notebook computer to stop replying to the message and forward the message to the living room TV. Upon receiving the indication, the notebook computer may hide the reminder box of the message (e.g., the reminder box 6503). After receiving the message forwarded by the tablet computer, the living room TV may display a message reminder box.
It should be understood that the process of the living room TV displaying the message reminder box can refer to the GUI shown in fig. 6, and the description is omitted here for brevity.
In one embodiment, the device indicated by the control 6804 (the living room TV) and the device indicated by the control 6805 (the speaker) may be devices within the communication range of the tablet computer. When the tablet computer detects that a device (e.g., the speaker) is no longer within its communication range, the tablet computer may delete the control 6805. Alternatively, when the tablet computer detects that a new device (e.g., a bedroom TV) has entered its communication range, the tablet computer may add a new control for instructing the bedroom TV to reply to the message.
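The switching flow above (stop the current replying device, hand the message to the newly selected one) can be sketched as follows. The class and device names are illustrative assumptions, not part of the embodiment.

```python
# Hypothetical sketch of switching the device that replies to a message:
# the previously selected device is instructed to stop (and hides its
# reminder box), and the message is forwarded to the newly selected device.
class ReplySession:
    def __init__(self, message, current_device):
        self.message = message
        self.current_device = current_device
        self.stopped = []  # devices that were instructed to stop replying

    def switch_to(self, new_device):
        """Stop the current device and forward the message to new_device."""
        self.stopped.append(self.current_device)
        self.current_device = new_device

session = ReplySession("Happy Birthday!", current_device="notebook")
session.switch_to("living room TV")
assert session.current_device == "living room TV"
assert "notebook" in session.stopped
```

Only one device at a time is the active replying device, which matches the reminder-box hiding behavior described above.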
Fig. 69 is a schematic flowchart of a notification processing method 6900 provided in an embodiment of the present application, where the notification processing method 6900 includes:
s6901, the first device obtains the notification.
The notification in the embodiment of the present application may be a notification obtained by the first device from a server. Illustratively, the first device obtains a mail from a server of a mail application; illustratively, the first device obtains a notification message from a server of a social application; illustratively, the first device obtains a video update message from a server of a video application. The notification may also be a notification local to the first device; for example, the user may set an alarm on the first device for a meeting at 10 am, and at 10 am the first device may generate a notification prompting the user to attend the meeting. The specific form of the notification is not particularly limited in the embodiments of the present application.
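The two notification origins described above, pushed from an application server or generated locally (e.g., an alarm), can be sketched as follows. The structure and field names are assumptions made for illustration.

```python
# Hypothetical sketch of the two notification origins: server-pushed
# (mail, social, video applications) and locally generated (an alarm).
from dataclasses import dataclass

@dataclass
class Notification:
    origin: str  # "server" or "local"
    app: str
    text: str

def from_server(app, text):
    """Notification obtained by the first device from an application server."""
    return Notification("server", app, text)

def from_local_alarm(text):
    """Notification generated locally by the first device, e.g. an alarm."""
    return Notification("local", "alarm", text)

mail = from_server("mail", "new e-mail received")
meeting = from_local_alarm("meeting at 10 am")
assert mail.origin == "server" and meeting.origin == "local"
```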
S6902, when the first device determines that the focus of the owner of the first device is on the second device, the first device sends a first message to the second device.
It should be understood that, for the process of the first device determining that the focus of the owner of the first device is not on the first device but on the second device, reference may be made to the implementation process shown in fig. 45, which is not described here again for brevity.
For example, as shown in fig. 23 (a), the handset may generate a notification according to the received background service notification. For example, when the mail application background of the mobile phone receives a new e-mail, the mobile phone may pop up a window 2311 through the display 2310 to remind the user of receiving the new e-mail. When the mobile phone determines that the focus of the user is not on the mobile phone and the focus of the user is on the smart watch, the mobile phone may send the first message to the smart watch.
For example, as shown in fig. 25 (a), when the video application background of the cell phone receives a notification of video playing update, the cell phone 201 may pop up a window 2312 on the display 2310 to remind the user that the basketball game has been updated. When the mobile phone determines that the focus of the user is not on the mobile phone and the focus of the user is on the smart watch, the mobile phone may send the first message to the smart watch.
Illustratively, as shown in FIG. 65, the handset receives a message from the server of the social App (the message content is Happy Birthday!). When the mobile phone determines that the focus of the user is not on the mobile phone and the focus of the user is on the smart watch, the mobile phone may send the first message to the smart watch.
Illustratively, as shown in FIG. 66, the handset receives a message from the server of the social App (the message content is Happy Birthday!). When the mobile phone determines that the focus of the user is not on the mobile phone and the focus of the user is on the tablet computer, the mobile phone may send the first message to the tablet computer.
It should be understood that, for the process of the mobile phone selecting the smart watch or the tablet computer for prompting, reference may be made to the notification processing method shown in fig. 54, or to the method shown in fig. 50, and for brevity no further description is provided here.
In one embodiment, the first message includes notification information and information of a third device, where the notification information is used to prompt that the first device has acquired the notification, and the information of the third device is used by the second device to generate a prompt for executing, in the third device, the task corresponding to the notification.
S6902, the second device receives the first message and generates a prompt for executing, in the third device, the task corresponding to the notification.
Illustratively, as shown in (b) of fig. 23, after the watch receives the first message, the display 2340 displays a prompt text 2341, where the prompt text 2341 is used to prompt the user that the mobile phone 201 has received an e-mail, and the display 2340 of the watch 202 may also display an interface element, such as a shortcut entrance 2342 and the corresponding text "view details on notebook". When the watch 202 receives the notification of a new e-mail, a prompt message may be sent directly to the notebook computer 204 through interaction with the user, without running an e-mail application.
For example, as shown in fig. 65, after receiving the first message sent by the mobile phone, the watch displays a message reminder box 6502 through its display screen. The message reminder box 6502 includes the message source (for example, the name of the social App, and that the message comes from the P40 of the user Lily), a prompt message ("a message received"), and a control 6502, where the control 6502 indicates the notebook computer.
Optionally, the first message includes description information of the task, and the second device is specifically configured to: in response to receiving the first message, display a first prompt window, the first prompt window including a first control and the description information of the task, the first control being associated with the third device; and send the second message in response to user input directed to the first control.
In the embodiment of the application, the first message sent by the first device to the second device may carry the description information of the task, the second device displays the description information of the task content, and the user selects the third device to execute the task content, thereby realizing cross-device task processing. The second device can conveniently trigger the third device to process the task through the first prompt window, which improves user experience.
Illustratively, as shown in fig. 65, the first message may include description information about a message received by the mobile phone, and the description information may include a message source (for example, a name of a social App, and the message is from P40 of the user Lily).
Optionally, the method further includes: the second device displays a second control after sending the second message; and in response to receiving user input directed to the second control, sends a fourth message to request the third device to stop executing the task.
In the embodiment of the application, a stop control is provided on the prompting device (the second device): the second device may output the stop control (the second control) while the third device is executing the task content, and the user controls, through the second control, the third device to stop executing the task content. The user can pause the execution of the task content at any time, which improves the efficiency of processing notifications among devices and thus improves user experience.
Illustratively, as shown in (a) of fig. 68, the tablet computer may display the control 6802 after sending the second message to the notebook computer. When the tablet computer detects the user's operation of clicking the control 6802, it may trigger the notebook computer to stop replying to the message.
In one embodiment, the tablet may directly instruct the laptop to stop replying to the message. After receiving the instruction, the notebook computer can hide the message reminding frame.
In one embodiment, the tablet computer may first indicate to the phone that the user selects to stop replying to the message on the notebook computer. After receiving the indication from the tablet computer, the mobile phone may instruct the notebook computer to stop replying to the message. After receiving the instruction, the notebook computer may hide the message reminder box.
Optionally, the method 6900 further includes: the second device displays a third control after sending the second message, the third control being associated with a fourth device; and in response to receiving user input directed to the third control, sends a fifth message to request the third device to stop executing the task and to request the fourth device to execute the task.
In the embodiment of the application, a switching control is provided on the prompting device (the second device): while the third device is executing the task content, the second device may output the switching control (the third control), and through the third control the user controls the third device to stop executing the task content and instructs another device (the fourth device) to execute the task content. This realizes real-time switching: the user can switch the device executing the task content at any time, which improves the efficiency of processing notifications among multiple devices and thus improves user experience.
For example, as shown in (b) of fig. 68, when the tablet computer detects the user's operation of clicking the control 6804, the tablet computer may instruct the notebook computer to stop replying to the message and instruct the living room TV to reply to the message (by sending the message content to the living room TV). After receiving the instruction, the notebook computer may hide the message reminder box, and the living room TV may display a message reminder box upon receiving the indication. It should be understood that the process of the living room TV displaying the message reminder box may refer to the GUI shown in fig. 5 or fig. 6, and for brevity is not described again here.
In some possible implementations, in response to receiving the input of the user for the third control, the second device sends first indication information to the third device and second indication information to the fourth device, where the first indication information is used to instruct the third device to stop executing the task, and the second indication information is used to instruct the fourth device to execute the task.
In some possible implementations, the second device sends the fifth message to the first device in response to receiving the user input for the third control; the first device sends first indication information to the third device and second indication information to the fourth device in response to receiving the fifth message, wherein the first indication information is used for indicating the third device to stop executing the task, and the second indication information is used for indicating the fourth device to execute the task.
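The two implementations of the fifth message described above differ only in whether the indications travel directly from the second device or are relayed through the first device. A minimal sketch, with purely illustrative role names:

```python
# Hypothetical sketch of the two dispatch paths for the fifth message:
# either the second device sends the stop/execute indications itself, or it
# sends the fifth message to the first device, which sends the indications.
def dispatch_fifth_message(via_first_device):
    """Return the list of (sender, receiver, indication) hops."""
    if via_first_device:
        return [
            ("second device", "first device", "fifth message"),
            ("first device", "third device", "stop task"),      # first indication
            ("first device", "fourth device", "execute task"),  # second indication
        ]
    return [
        ("second device", "third device", "stop task"),      # first indication
        ("second device", "fourth device", "execute task"),  # second indication
    ]

assert dispatch_fifth_message(via_first_device=False)[0][0] == "second device"
assert dispatch_fifth_message(via_first_device=True)[1][0] == "first device"
```

In both paths the end state is the same: the third device stops the task and the fourth device executes it.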
Optionally, the first prompt window further includes a fourth control, the fourth control is associated with one or more controls, and each of the one or more controls is associated with an available device other than the third device; the second device is further configured to display the one or more controls upon detecting input directed to the fourth control after displaying the first prompt window.
In the embodiment of the application, a control for opening a device list is provided on the prompting device (the second device): the second device may output a control (the fourth control) for selecting the continuation device from a list, and through the fourth control the user may view the one or more available devices and then autonomously select the device for executing the task content. This achieves the effect of autonomous selection: the user can choose, from multiple devices, the device that executes the task content, which improves the efficiency of processing notifications among multiple devices and thus improves user experience.
Illustratively, as shown in fig. 67, the tablet computer may output, in the reminder box 6701, an immediate reply control (associated with the tablet computer), a notebook reply control (associated with the notebook computer), and a control 6702, where the control 6702 is associated with a plurality of controls, and each of the plurality of controls is associated with an available device other than the notebook computer. When the tablet computer detects user input directed to the control 6702, the tablet computer may display a device list 6703 that includes the plurality of controls. As shown in fig. 67, the plurality of controls are associated with the living room TV and the speaker, respectively.
In some possible implementations, the first message includes device information of one or more devices associated with one or more controls; the second device, in response to receiving the device information for the one or more devices, displays the fourth control.
Optionally, a fifth control is included in the one or more controls, and the fifth control is associated with a fifth device; after the second device displays the first prompt window, deleting the fifth control when the fifth device is no longer included in one or more devices within the communication range of the second device; the second device detects input directed to the fourth control, and displays one or more controls that do not include the fifth control.
In the embodiment of the application, the controls in the first prompt window may change as the device state changes. At a first moment, the fifth device is available, and the prompting device (the second device) outputs the fifth control associated with it; at a second moment, the state of the fifth device changes (it is no longer within the communication range of the second device), and the second device deletes the fifth control. Similarly, if the state of the fifth device changes again at a third moment (it is within the communication range of the second device), the second device outputs the fifth control again. Therefore, the manner of outputting controls changes with the device state, which improves timeliness, provides the user with the latest available devices in real time, and improves user experience.
For example, the plurality of devices associated with the control in the device list 6703 may be a plurality of devices within communication range of the tablet computer. When one of the devices (e.g., the living room TV) is not within communication range of the tablet, the tablet may delete the control associated with the living room TV in the device list 6703.
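The refresh behavior described above can be sketched as follows. This is a minimal illustration only, assuming a `scan_reachable` callback that returns the device names currently within communication range (the class and method names are not from the patent):

```python
# Hypothetical sketch of the dynamic control list described above: the prompt
# window's per-device controls are rebuilt whenever the set of devices within
# communication range changes. All names here are illustrative assumptions.

class DeviceList:
    """Controls in the prompt window, one per reachable device."""

    def __init__(self, scan_reachable):
        # scan_reachable() returns the set of device names currently within
        # communication range (e.g. discovered over Bluetooth / Wi-Fi).
        self.scan_reachable = scan_reachable
        self.controls = set()

    def refresh(self):
        reachable = set(self.scan_reachable())
        # Delete controls for devices that left communication range ...
        self.controls -= (self.controls - reachable)
        # ... and add controls for devices that (re)entered range.
        self.controls |= reachable
        return sorted(self.controls)


# Two successive scans: the living room TV leaves communication range.
scans = iter([{"living room TV", "speaker"}, {"speaker"}])
ui = DeviceList(lambda: next(scans))
print(ui.refresh())  # ['living room TV', 'speaker']
print(ui.refresh())  # ['speaker']  (control for the living room TV deleted)
```

The same `refresh` step covers the re-appearance case: when a device returns to range, its control is added back on the next refresh.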
In some possible implementations, at the second time, if the state of the fifth device changes (is not within the communication range of the first device), the first device instructs the second device to delete the fifth control associated with the fifth device, so that the second device deletes the fifth control associated with the fifth device. Similarly, if the state of the fifth device changes again (within the communication range of the first device) at the third time, the first device instructs the second device to output the fifth control associated with the fifth device, so that the second device outputs the fifth control associated with the fifth device.
For example, the plurality of devices associated with the controls in the device list 6703 can be a plurality of devices within communication range of a mobile phone. When one of the devices (e.g., the living room TV) is not within communication range of the mobile phone, the mobile phone can instruct the tablet computer to delete the control associated with the living room TV from the device list 6703.
S6903, in response to receiving the input corresponding to the prompt from the user, the second device sends a second message to request the third device to execute the task corresponding to the notification.
When the first device generates a notification, the second device prompts the user that the first device has received the notification and that the corresponding task can be executed on the third device. When the second device detects the user's input for the prompt, it sends a second message so that the task is executed on the third device. The first device and the second device thus cooperate in handling notifications, which improves the efficiency of processing notifications among the devices.
Optionally, the method 6900 further includes: the third device executes the task corresponding to the notification. A system formed by the first device, the second device and the third device can thus work cooperatively, so that the task corresponding to the notification is processed across the multiple devices.
Optionally, the sending the second message comprises: sending the second message to the third device; wherein the method 6900 further comprises: the third device performs the task in response to receiving the second message.
In the embodiment of the application, the second message can be sent to the third device when the second device detects the input of the user, so that the third device can execute the task, and the efficiency of processing the notification among multiple devices is improved.
In some possible implementations, the second message includes content of the task.
Illustratively, as shown in fig. 65, upon detecting the user's operation of clicking the control 6502, the watch may send a second message to the notebook computer, the second message including the message content (e.g., "Happy Birthday!"). When the notebook computer receives the second message, it may display a reminder box 6503 that includes the message content and a reply control.
It should be appreciated that, upon receiving the second message, the third device may add a reply control to the reminder box 6503 if it determines that the second message includes message content.
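The decision just described can be sketched as follows; this is a hedged illustration, not the patent's implementation, and the field name `message_content` and the dictionary shapes are assumptions:

```python
# Sketch of the continuing device's behavior on receiving the second message:
# show a reminder box, and add a reply control only when the message actually
# carries message content. Field and key names are illustrative assumptions.

def build_reminder_box(second_message: dict) -> dict:
    box = {"controls": []}
    content = second_message.get("message_content")
    if content is not None:
        box["content"] = content
        box["controls"].append("reply")  # reply control added only with content
    return box


print(build_reminder_box({"message_content": "Happy Birthday!"}))
# {'controls': ['reply'], 'content': 'Happy Birthday!'}
print(build_reminder_box({}))
# {'controls': []}
```

A second message that carries no task content would thus yield a reminder box without a reply control.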
Optionally, the sending the second message comprises: the second device sends a second message to the first device; wherein the method 6900 further comprises: the first device sending a third message to the third device in response to receiving the second message; the third device performs the task in response to receiving the third message.
In the embodiment of the application, the second message can be sent to the first device when the second device detects the input of the user, so that the first device can send the third message to the third device, the third device can execute the task, and the efficiency of processing notifications among multiple devices is improved.
In some possible implementations, the third message includes the content of the task.
For example, as shown in fig. 65, when detecting that the user clicks the control 6502, the watch may send a second message to the mobile phone, where the second message indicates that the user has selected the notebook computer to reply to the message. In response to receiving the second message, the mobile phone may send a third message to the notebook computer, the third message including the message content. When the notebook computer receives the third message, it may display a reminder box 6503 that includes the message content and a reply control.
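The relay in this example can be sketched as three message handlers. This is a minimal illustration under assumed message shapes (`type`, `chosen_device`, `content` are invented field names), not the protocol's actual wire format:

```python
# Sketch of the relay in fig. 65: the second device (watch) sends a second
# message to the first device (phone), which forwards a third message carrying
# the task content to the third device (laptop). Names are assumptions.

def second_device_on_click(send):
    # Second message: the user chose the laptop to reply on.
    send("phone", {"type": "second", "chosen_device": "laptop"})

def first_device_on_message(msg, send, message_content):
    if msg["type"] == "second":
        # Third message carries the content of the task.
        send(msg["chosen_device"], {"type": "third", "content": message_content})


log = []

def send(dest, msg):
    # Toy transport: record each hop; deliver to the phone's handler.
    log.append((dest, msg))
    if dest == "phone":
        first_device_on_message(msg, send, "Happy Birthday!")

second_device_on_click(send)
print(log[-1])
# ('laptop', {'type': 'third', 'content': 'Happy Birthday!'})
```

The third device would then display the reminder box with the received content and a reply control, as in the figure.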
Optionally, the notification is message content in the first application, wherein the third device receives the message content and displays a message reminder box, and the message reminder box includes the message content and a reply control; when the third device detects an operation of the user replying to the message content, it sends the reply content to the first device; the first device is further configured to reply to the message content according to the reply content.
In the embodiment of the present application, after seeing the prompt for the message on the prompting device (the second device), the user may reply to the message content on the continuing device (the third device), and the actual reply to the message is finally completed on the first device. This helps the user receive the message reminder in time and complete the reply, improves the efficiency of processing notifications among multiple devices, spares the user the process of replying to the message on the first device, prevents the user from missing important messages, and improves the user experience.
In one embodiment, if the third device determines that the received message content is the content of an IM-class message, the third device may display the message content and the reply control in the reminder box.
In some possible implementations, the third device may be a keyboard-equipped device (e.g., a laptop).
Optionally, the third device receives indication information, where the indication information is used to instruct the third device to add the reply control to the message content.
In one embodiment, the second device may further send the indication information to the third device when sending the message content to the third device, where the indication information is used to instruct the third device to add a reply control to the message content.
In one embodiment, the second device sends the second message to the first device; and the first device sends the message content and indication information to the third device in response to receiving the second message, wherein the indication information is used for indicating the third device to add the reply control to the message content.
It should be understood that, for the process of sending the message content and the indication information to the third device by the second device, or sending the message content and the indication information to the third device by the first device, reference may be made to the implementation process shown in fig. 47, and for brevity, details are not described here again.
Optionally, the message content in the first application includes a first message content and a second message content, where the third device receives the first message content identified by the first identification information and the second message content identified by the second identification information; in response to detecting a first reply content replied by the user to the first message content and detecting a second reply content replied by the user to the second message content, sending the first reply content identified by the first identification information and the second reply content identified by the second identification information to the first device; the first device replies to the first message content according to the first reply content identified by the first identification information, and replies to the second message content according to the second reply content identified by the second identification information.
In the embodiment of the application, the third device receives the message identified by the identification information, so that when the third device obtains the user's reply content, it can identify the reply content with the same identification information, and the first device can determine which message each reply content is directed to. When the first device has received a plurality of messages, this improves the accuracy with which the first device replies to them.
It should be understood that, in this embodiment of the present application, the first device may identify the first message content and the second message content, or the second device may identify the first message content and the second message content.
In one embodiment, the third device may receive the first message content identified by the first identification information and the second message content identified by the second identification information, which are sent by the first device. After detecting the first reply content and the second reply content, the third device identifies the first reply content with the first identification information and the second reply content with the second identification information, and then sends the identified reply contents to the first device. This enables the first device to determine that the first reply content is a reply to the first message content and that the second reply content is a reply to the second message content.
In an embodiment, the first message sent by the first device to the second device may carry the first message content identified by the first identification information and the second message content identified by the second identification information. When the second device detects the user's input for the prompt, it may send a second message to the third device, where the second message may carry the first message content identified by the first identification information and the second message content identified by the second identification information. After detecting the first reply content and the second reply content input by the user, the third device may identify the first reply content with the first identification information and the second reply content with the second identification information, and then send the identified reply contents to the second device. After receiving the identified reply contents, the second device may forward them to the first device.
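The identification-information scheme above can be sketched as tagging each message with an ID and reusing that ID on the corresponding reply, so the first device can match replies to messages. The function names and record shapes below are illustrative assumptions, not the patent's actual data format:

```python
# Sketch of the identification-information scheme: each message content
# carries an ID; the reply is tagged with the same ID so the first device
# can determine which message each reply is directed to.

def tag_messages(contents):
    # First device (or second device) identifies each message content.
    return [{"id": i, "content": c} for i, c in enumerate(contents, start=1)]

def tag_replies(tagged_messages, replies):
    # Third device reuses each message's ID for the corresponding reply.
    return [{"id": m["id"], "reply": r} for m, r in zip(tagged_messages, replies)]

def match_replies(tagged_messages, tagged_replies):
    # First device matches replies back to the original messages by ID.
    by_id = {m["id"]: m["content"] for m in tagged_messages}
    return {by_id[r["id"]]: r["reply"] for r in tagged_replies}


msgs = tag_messages(["Where are you?", "Call me back"])
replies = tag_replies(msgs, ["On my way", "Will do"])
print(match_replies(msgs, replies))
# {'Where are you?': 'On my way', 'Call me back': 'Will do'}
```

With the IDs in place, the order in which the user writes the replies on the third device no longer matters for correct delivery.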
Optionally, the third device is a device on which the first application is not installed.
Optionally, the second device generates, according to the device information of the second device, a prompt for executing the task corresponding to the notification in the third device.
In the embodiment of the application, different second devices have different device information and can therefore present prompts to the user in different manners, which helps improve the user's experience when viewing the prompt on the prompting device.
It should be understood that, for the process of the second device generating, according to the device information of the second device, a prompt for executing the task corresponding to the notification in the third device, reference may be made to the implementation processes shown in fig. 45 to fig. 46, and for brevity, details are not described here again.
Optionally, the device information includes that the second device has a display screen. When the second device detects that it is receiving input from the user, the second device displays a first prompt window on the display screen without positioning the cursor in the first prompt window, where the first prompt window includes prompt information for prompting execution of the task on the third device; or, when the second device detects that it is not receiving input from the user, the second device displays the first prompt window on the display screen and positions the cursor in the first prompt window.
In the embodiment of the application, for a second device with a display screen, if the second device is receiving input from the user (that is, the second device is interacting with the user), the second device may refrain from positioning the cursor in the first prompt window, so as not to disturb the user's current operation, which improves the user experience.
Optionally, when the second device detects that it is receiving input from the user, the second device also displays prompt information on the display screen, where the prompt information prompts the user to position the cursor in the first prompt window through a first operation.
In this embodiment of the application, if the second device is receiving input from the user (that is, the second device is interacting with the user), the second device may also prompt the user to position the cursor in the message reminder box through a preset operation. In this way, the user's current operation is not disturbed, and the user can still quickly move to the first prompt window by following the prompt information, which helps improve the user experience.
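The focus behavior above reduces to one branch on whether the device is currently receiving user input. The sketch below is a hedged illustration (the function name, keys, and hint text are invented for this example):

```python
# Sketch of the prompt-window focus decision: when the prompting device is
# receiving user input, show the window WITHOUT moving the cursor into it and
# display a hint about the first operation; otherwise, place the cursor in
# the window directly. All names and strings are illustrative assumptions.

def show_prompt_window(receiving_user_input: bool) -> dict:
    window = {"visible": True, "cursor_in_window": False, "hint": None}
    if receiving_user_input:
        # Do not disturb the current operation; offer a shortcut instead.
        window["hint"] = "Perform the first operation to jump to the prompt window"
    else:
        window["cursor_in_window"] = True
    return window


print(show_prompt_window(receiving_user_input=True)["cursor_in_window"])   # False
print(show_prompt_window(receiving_user_input=False)["cursor_in_window"])  # True
```

The hint is only attached in the busy case, matching the optional behavior described in the two preceding paragraphs.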
Fig. 70 is a schematic block diagram of a notification processing apparatus 7000 provided in an embodiment of the present application. The apparatus 7000 may be provided in the first device shown in fig. 69, and the apparatus 7000 includes: an obtaining unit 7010 configured to obtain a notification; a detecting unit 7020, configured to detect that a currently focused device of the first device is a second device; a sending unit 7030, configured to send the first message to the second device, so that the second device generates a prompt for executing a task corresponding to the notification on the third device.
Fig. 71 is a schematic block diagram of a notification processing apparatus 7100 provided in an embodiment of the present application. The apparatus 7100 may be disposed in a second device shown in fig. 69, wherein the apparatus 7100 comprises: a receiving unit 7110, configured to receive a first message sent by a first device; a generating unit 7120 for generating a prompt to execute the task corresponding to the notification in the third device; a detecting unit 7130, configured to receive an input corresponding to the prompt from the user; a sending unit 7140, configured to send a second message in response to the input, to request the third device to perform a task corresponding to the notification.
Fig. 72 shows a schematic structural diagram of an electronic device 7200 provided in an embodiment of the present application. As shown in fig. 72, the electronic device includes: one or more processors 7210 and one or more memories 7220, the one or more memories 7220 storing one or more computer programs, the one or more computer programs comprising instructions. When the instructions are executed by the one or more processors 7210, the first electronic device is caused to execute the technical solutions in the above embodiments; alternatively, when the instructions are executed by the one or more processors 7210, the second electronic device is caused to execute the technical solutions in the above embodiments.
The embodiment of the application provides a system for processing a notification, which includes a first device and a second device and is used to execute the technical solutions in the above embodiments. The implementation principle and technical effect are similar to those of the method embodiments, and are not described herein again.
The embodiment of the present application provides a computer program product, which, when the computer program product runs on a first device, enables the first device to execute the technical solution in the above embodiment; or, when the computer program product runs on the second device, the second device is caused to execute the technical solution in the above embodiment. The implementation principle and technical effect are similar to those of the embodiments related to the method, and are not described herein again.
The embodiment of the present application provides a readable storage medium, where the readable storage medium contains instructions, and when the instructions are executed on a first device, the first device is caused to execute the technical solution of the above embodiment; or, when the instruction runs on the second device, the second device is caused to execute the technical solution of the above embodiment. The implementation principle and the technical effect are similar, and the detailed description is omitted here.
The embodiment of the application provides a chip, wherein the chip is used for executing instructions, and when the chip runs, the technical scheme in the embodiment is executed. The realization principle and the technical effect are similar, and the description is omitted here.
It can be clearly understood by those skilled in the art that, for convenience and simplicity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or other various media capable of storing program codes.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (39)

  1. A system, characterized in that the system comprises a first device and a second device, wherein,
    the first device is used for acquiring a notification;
    the first device is further configured to send a first message to the second device when it is determined that the device focused by the owner of the first device is the second device;
    the second device is used for responding to the first message and generating a prompt for executing the task corresponding to the notification in a third device;
    the second device is further configured to send a second message to request the third device to execute a task corresponding to the notification in response to receiving an input corresponding to the prompt from the user.
  2. The system of claim 1, further comprising the third device, wherein,
    the second device is specifically configured to: sending the second message to the third device;
    the third device is configured to execute the task in response to receiving the second message.
  3. The system of claim 1, further comprising the third device, wherein,
    the second device is specifically configured to: sending a second message to the first device;
    The first device is further configured to: in response to receiving the second message, sending a third message to the third device;
    the third device to execute the task in response to receiving the third message.
  4. The system according to any one of claims 1 to 3, wherein the first message includes description information of the task, and the second device is specifically configured to:
    in response to receiving the first message, displaying a first prompt window, the first prompt window including a first control and descriptive information of the task, the first control being associated with the third device;
    sending the second message in response to user input to the first control.
  5. The system of claim 4, wherein the second device is further configured to:
    after sending the second message, displaying a second control;
    and in response to receiving the user's input for the second control, sending a fourth message to request the third device to stop executing the task.
  6. The system of claim 4, further comprising a fourth device, the second device further configured to:
    after sending the second message, displaying a third control, the third control associated with the fourth device;
    In response to receiving user input for the third control, sending a fifth message to request the third device to stop performing the task and to request the fourth device to perform the task.
  7. The system of claim 4, wherein the first prompt window further comprises a fourth control, the fourth control associated with one or more controls, each of the one or more controls associated with a respective available device other than the third device;
    the second device is further configured to display the one or more controls upon detecting input directed to the fourth control after displaying the first prompt window.
  8. The system of claim 7, wherein a fifth control is included in the one or more controls, the fifth control associated with a fifth device;
    the second device is further configured to delete the fifth control when the fifth device is no longer included in the one or more devices within the communication range of the second device after the first prompt window is displayed;
    the second device is further configured to detect an input directed to the fourth control, and display one or more controls, the one or more controls not including the fifth control.
  9. The system according to any one of claims 1 to 8, wherein the notification is message content in a first application, wherein,
    the third device is configured to receive the message content and display a message reminding frame, where the message reminding frame includes the message content and a reply control;
    when the operation that a user replies to the message content is detected, sending reply content to the first equipment;
    the first device is further configured to reply to the message content according to the reply content.
  10. The system of claim 9,
    the third device is further configured to receive indication information, where the indication information is used to indicate the third device to add the reply control.
  11. The system according to claim 9 or 10, wherein the message content in the first application comprises a first message content and a second message content, and wherein the third device is specifically configured to:
    receiving the first message content identified by the first identification information and the second message content identified by the second identification information;
    in response to detecting a first reply content replied by the user to the first message content and detecting a second reply content replied by the user to the second message content, sending the first reply content identified by the first identification information and the second reply content identified by the second identification information to the first device;
    The first device is specifically configured to: and replying the first message content according to the first reply content identified by the first identification information, and replying the second message content according to the second reply content identified by the second identification information.
  12. The system according to any one of claims 9 to 11, wherein the third device is a device on which the first application is not installed.
  13. The system according to any one of claims 1 to 12, characterized in that the second device is specifically configured to:
    and generating, according to the device information of the second device, a prompt for executing the task corresponding to the notification in the third device.
  14. The system of claim 13, wherein the device information includes that the second device has a display screen, and the second device is specifically configured to:
    when the second device detects that it is receiving input from a user, displaying a first prompt window through the display screen without positioning a cursor in the first prompt window, wherein the first prompt window comprises prompt information for prompting execution of the task in the third device; or,
    when the second device detects that it is not receiving input from the user, displaying the first prompt window through the display screen and positioning the cursor in the first prompt window.
  15. The system of claim 14, wherein the second device is further configured to:
    when the second device detects that it is receiving input from the user, displaying prompt information through the display screen, the prompt information being used to remind the user to position the cursor in the first prompt window through a first operation.
  16. The system of any of claims 13 to 15, wherein the second device is further configured to:
    determining that the second device is in a non-do-not-disturb mode or that the second device is not currently running a preset application before generating a prompt to execute the task corresponding to the notification in the third device.
  17. A method of notification processing, the method being applied to a first device, the method comprising:
    the first device acquires a notification;
    when the first device determines that the device focused by the owner of the first device is a second device, the first device sends a first message to the second device, so that the second device generates a prompt for executing the task corresponding to the notification on a third device.
  18. The method of claim 17, further comprising:
    the first device receives a second message sent by the second device;
    in response to receiving the second message, the first device sends a third message to the third device, the third message including content of the task, to cause the third device to perform the task.
  19. The method of claim 18, further comprising:
    the first device receives a fourth message sent by the second device;
    in response to receiving the fourth message, the first device sends a fifth message to the third device, where the fifth message is used to instruct the third device to stop executing the task.
  20. The method of claim 19, further comprising:
    in response to receiving the fourth message, the first device sends a sixth message to the fourth device, the sixth message being used to instruct the fourth device to perform the task.
  21. The method of any of claims 18 to 20, wherein the notification is message content in a first application, and wherein the message content is included in the third message, the method further comprising:
    the first device receives the reply content to the message content sent by the third device;
    and the first device replies to the message content according to the reply content.
  22. The method of claim 21, further comprising:
    in response to receiving the second message, the first device sends indication information to the third device, where the indication information is used to instruct the third device to add a reply control to the message content.
  23. The method according to claim 21 or 22, wherein the message content in the first application comprises a first message content and a second message content, and the third message comprises the first message content identified by the first identification information and the second message content identified by the second identification information;
    wherein the method further comprises:
    receiving the first reply content identified by the first identification information and the second reply content identified by the second identification information, which are sent by the third device;
    and replying the first message content according to the first reply content identified by the first identification information, and replying the second message content according to the second reply content identified by the second identification information.
  24. A method for notification processing, the method being applied to a second device, the method comprising:
    the second device receives a first message sent by the first device;
    the second device generates a prompt for executing a task corresponding to the notification in a third device in response to receiving the first message;
    and the second device, in response to receiving the user's input corresponding to the prompt, sends a second message to request the third device to execute the task corresponding to the notification.
  25. The method of claim 24, wherein sending the second message comprises:
    and sending the second message to the third device, wherein the second message comprises the content of the task.
  26. The method of claim 24, wherein sending the second message comprises:
    and sending the second message to the first device, so that the first device sends a third message to the third device according to the second message, wherein the third message comprises the content of the task.
  27. The method according to any one of claims 24 to 26, wherein the first message includes description information of the task, and the second device generates a prompt for executing the task corresponding to the notification in the third device, including:
    In response to receiving the first message, displaying a first prompt window, the first prompt window including a first control and descriptive information of the task, the first control being associated with the third device;
    sending the second message in response to user input to the first control.
  28. The method of claim 27, further comprising:
    after sending the second message, displaying a second control; and
    in response to receiving user input directed to the second control, sending a fourth message to request that the third device stop executing the task.
  29. The method of claim 27, further comprising:
    after sending the second message, displaying a third control, wherein the third control is associated with a fourth device; and
    in response to receiving user input directed to the third control, sending a fifth message to request that the third device stop executing the task and that the fourth device execute the task.
  30. The method of claim 27, wherein the first prompt window further comprises a fourth control, the fourth control is associated with one or more controls, and each of the one or more controls is associated with a respective available device other than the third device;
    wherein the method further comprises:
    after displaying the first prompt window, displaying the one or more controls when input directed to the fourth control is detected.
  31. The method of claim 30, wherein the one or more controls comprise a fifth control, and the fifth control is associated with a fifth device;
    wherein the method further comprises:
    after displaying the first prompt window, deleting the fifth control when the fifth device is no longer among the one or more devices within communication range of the second device; and
    upon detecting input directed to the fourth control, displaying the one or more controls, wherein the one or more controls no longer comprise the fifth control.
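Claims 30 and 31 describe a device-picker behavior: the fourth control expands into one control per available device, and a device's control is deleted once that device leaves the second device's communication range. A minimal in-memory sketch of that behavior, where the class, method, and control names are all illustrative assumptions:

```python
class DevicePicker:
    def __init__(self, devices_in_range):
        # One control per available device other than the third device (claim 30).
        self.controls = {name: f"control:{name}" for name in devices_in_range}

    def device_left_range(self, name):
        # Delete the control for a device no longer in range (claim 31).
        self.controls.pop(name, None)

    def expand(self):
        # Activating the fourth control reveals the remaining device controls.
        return sorted(self.controls.values())

picker = DevicePicker(["tablet", "tv", "speaker"])
picker.device_left_range("speaker")
print(picker.expand())
```

The key design point is that the control list is rebuilt from current reachability, so a user can never dispatch a task to a device that has already dropped off the network.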
  32. The method according to any one of claims 24 to 31, wherein the notification is message content in a first application, and the method further comprises:
    sending, by the second device, indication information to the third device, wherein the indication information instructs the third device to add a reply control to the message content.
  33. The method of claim 32, wherein generating, by the second device, the prompt for executing the task corresponding to the notification on the third device comprises:
    generating the prompt for executing the task corresponding to the notification on the third device according to device information of the second device.
  34. The method of claim 33, wherein the device information indicates that the second device is provided with a display screen, and generating, by the second device, the prompt for executing the task corresponding to the notification on the third device comprises:
    when the second device detects that user input is being received, displaying a first prompt window on the display screen without positioning a cursor in the first prompt window, wherein the first prompt window comprises prompt information prompting execution of the task on the third device; or
    when the second device detects that no user input is being received, displaying the first prompt window on the display screen and positioning the cursor in the first prompt window.
  35. The method of claim 34, further comprising:
    when the second device detects that user input is being received, displaying prompt information on the display screen, wherein the prompt information reminds the user to position the cursor in the first prompt window through a first operation.
  36. The method of any one of claims 33 to 35, further comprising:
    before generating the prompt for executing the task corresponding to the notification on the third device, determining that the second device is in a non-do-not-disturb mode or that the second device is not currently running a preset application.
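Claims 33 to 36 together describe a focus-aware prompt policy: the prompt is suppressed in do-not-disturb mode or while a preset application runs, and when the user is actively typing, the prompt window appears without stealing the cursor and instead shows a hint about how to focus it. The sketch below models that decision logic only; every dictionary field and return value is an assumption for illustration, not the patent's actual API.

```python
def generate_prompt(device, task_description):
    # Claim 36: suppress the prompt entirely in these device states.
    if device.get("do_not_disturb") or device.get("running_preset_app"):
        return None
    # Claim 34 presupposes device information indicating a display screen.
    if not device.get("has_display"):
        return None
    # Claims 34-35: while the user is typing, show the window but leave the
    # cursor where the user is working, plus a hint on how to focus the window.
    if device.get("receiving_user_input"):
        return {"window": task_description,
                "cursor_in_window": False,
                "hint": "Perform the first operation to focus this prompt"}
    # Otherwise the cursor may be moved into the prompt window directly.
    return {"window": task_description, "cursor_in_window": True, "hint": None}

idle = {"has_display": True, "receiving_user_input": False}
typing = {"has_display": True, "receiving_user_input": True}
print(generate_prompt(idle, "Continue this call on the TV?"))
print(generate_prompt(typing, "Continue this call on the TV?"))
```

The design choice worth noting is that focus-stealing, not display, is what gets gated on typing: the user still sees the prompt immediately but never loses keystrokes to it.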
  37. An electronic device, comprising one or more processors and one or more memories, wherein the one or more memories store one or more computer programs comprising instructions which, when executed by the one or more processors, cause the electronic device to perform the notification processing method of any one of claims 17 to 23.
  38. An electronic device, comprising one or more processors and one or more memories, wherein the one or more memories store one or more computer programs comprising instructions which, when executed by the one or more processors, cause the electronic device to perform the notification processing method of any one of claims 24 to 36.
  39. A computer-readable storage medium comprising computer instructions which, when run on an electronic device, cause the electronic device to perform the notification processing method of any one of claims 17 to 23 or the notification processing method of any one of claims 24 to 36.
CN202080100013.XA 2020-02-19 2020-12-31 Notification processing method, electronic equipment and system Pending CN115428413A (en)

Applications Claiming Priority (11)

Application Number Priority Date Filing Date Title
CN202010102903.6A CN111404802A (en) 2020-02-19 2020-02-19 Notification processing system and method and electronic equipment
CN2020101029036 2020-02-19
CN2020104737811 2020-05-29
CN202010473781.1A CN113746718B (en) 2020-05-29 2020-05-29 Content sharing method, device and system
CN2020108443130 2020-08-20
CN2020108446425 2020-08-20
CN202010844642.5A CN114173000B (en) 2020-08-20 2020-08-20 Method, electronic equipment and system for replying message and storage medium
CN2020108444148 2020-08-20
CN202010844414.8A CN114173204B (en) 2020-08-20 2020-08-20 Message prompting method, electronic equipment and system
CN202010844313.0A CN114157756A (en) 2020-08-20 2020-08-20 Task processing method and related electronic equipment
PCT/CN2020/142600 WO2021164445A1 (en) 2020-02-19 2020-12-31 Notification processing method, electronic apparatus, and system

Publications (1)

Publication Number Publication Date
CN115428413A true CN115428413A (en) 2022-12-02

Family

ID=77390406

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202080100013.XA Pending CN115428413A (en) 2020-02-19 2020-12-31 Notification processing method, electronic equipment and system

Country Status (2)

Country Link
CN (1) CN115428413A (en)
WO (1) WO2021164445A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117135385A (en) * 2023-03-23 2023-11-28 荣耀终端有限公司 Multi-device collaborative screen recording and sharing method, electronic device and communication system
WO2024199472A1 (en) * 2023-03-30 2024-10-03 Oppo广东移动通信有限公司 Notification message control processing method and apparatus, electronic device and storage medium

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113746718B (en) * 2020-05-29 2022-10-28 华为技术有限公司 Content sharing method, device and system
CN115842930A (en) * 2021-09-18 2023-03-24 华为技术有限公司 Cross-device information display method and electronic device
CN116027692A (en) * 2021-10-25 2023-04-28 华为技术有限公司 Automatic control method, electronic equipment and system based on human body perception
CN114063968A (en) * 2021-11-11 2022-02-18 北京字节跳动网络技术有限公司 Audio device selection method and device and electronic device
CN116257201A (en) * 2021-12-10 2023-06-13 华为技术有限公司 Content collaboration method, electronic device, and computer-readable storage medium
CN114390333B (en) * 2022-02-09 2024-04-02 百果园技术(新加坡)有限公司 Interface content display method, device, equipment and storage medium
CN114844983A (en) * 2022-03-28 2022-08-02 海信视像科技股份有限公司 Display device, communication device and screen projection control method
CN116107467B (en) * 2022-06-15 2023-11-21 荣耀终端有限公司 Method and device for automatically triggering shortcut instruction
CN117914992A (en) * 2022-08-31 2024-04-19 荣耀终端有限公司 Notification display method and terminal device
CN116048832B (en) * 2022-08-31 2023-11-03 荣耀终端有限公司 Batch clearing method and device for notification
CN117908814A (en) * 2022-10-19 2024-04-19 荣耀终端有限公司 Screen throwing system and electronic equipment

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103037319A (en) * 2011-09-30 2013-04-10 联想(北京)有限公司 Communication transferring method, mobile terminal and server
CN106294798A (en) * 2016-08-15 2017-01-04 华为技术有限公司 A kind of images share method based on thumbnail and terminal
US20190158725A1 (en) * 2017-11-21 2019-05-23 Olympus Corporation Information apparatus, control method, and computer readable recording medium
CN110224920A (en) * 2019-04-23 2019-09-10 维沃移动通信有限公司 A kind of sharing method and terminal device

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9356687B2 (en) * 2012-12-03 2016-05-31 Samsung Electronics Co., Ltd. Information providing method and mobile terminal therefor
WO2016018057A1 (en) * 2014-07-31 2016-02-04 Samsung Electronics Co., Ltd. Method and device for providing function of mobile terminal
CN105049329A (en) * 2015-07-08 2015-11-11 杭州木梢科技有限公司 Portable message pushing method in home environment and equipment control system

Also Published As

Publication number Publication date
WO2021164445A1 (en) 2021-08-26

Similar Documents

Publication Publication Date Title
WO2021164445A1 (en) Notification processing method, electronic apparatus, and system
CN113885759B (en) Notification message processing method, device, system and computer readable storage medium
CN114173204B (en) Message prompting method, electronic equipment and system
EP4113415A1 (en) Service recommending method, electronic device, and system
CN114173000B (en) Method, electronic equipment and system for replying message and storage medium
CN113067940B (en) Method for presenting video when electronic equipment is in call and electronic equipment
CN113835649B (en) Screen projection method and terminal
CN113170279B (en) Communication method based on low-power Bluetooth and related device
WO2022037480A1 (en) Task processing method and related electronic device
US11949805B2 (en) Call method and apparatus
CN114115770B (en) Display control method and related device
CN114185503B (en) Multi-screen interaction system, method, device and medium
WO2021233161A1 (en) Family schedule fusion method and apparatus
US12028300B2 (en) Method, apparatus, and system for sending pictures after thumbnail selections
CN116489268A (en) Equipment identification method and related device
US20240184749A1 (en) File management method, electronic device, and computer-readable storage medium
CN114356195B (en) File transmission method and related equipment
CN114489876A (en) Text input method, electronic equipment and system
WO2022052706A1 (en) Service sharing method, system and electronic device
WO2023142941A1 (en) Playing record display method and related device
EP4202618A1 (en) Method for controlling device, electronic device, and system
WO2022022435A1 (en) Method for quickly joining conference and related device
CN115145665A (en) Display method, electronic equipment and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20221202