WO2022135199A1 - Information processing method, electronic device and system - Google Patents

Information processing method, electronic device and system

Info

Publication number
WO2022135199A1
Authority
WIPO (PCT)
Prior art keywords
information, notification, target operation, perform, receiving
Application number
PCT/CN2021/137325
Other languages
English (en), French (fr)
Inventor
林楠
Original Assignee
华为技术有限公司
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority to EP21909196.4A (published as EP4250696A4)
Priority to US18/258,473 (published as US20240040343A1)
Publication of WO2022135199A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W4/00 Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/12 Messaging; Mailboxes; Announcements
    • H04W4/14 Short messaging services, e.g. short message services [SMS] or unstructured supplementary service data [USSD]
    • G PHYSICS
    • G04 HOROLOGY
    • G04G ELECTRONIC TIME-PIECES
    • G04G21/00 Input or output devices integrated in time-pieces
    • G04G21/04 Input or output devices integrated in time-pieces using radio waves
    • G04G21/08 Touch switches specially adapted for time-pieces
    • G04G9/00 Visual time or date indication means
    • G04G9/0064 Visual time or date indication means in which functions not related to time can be displayed
    • G04G9/007 Visual time or date indication means in which functions not related to time can be displayed combined with a calculator or computing means
    • H04M TELEPHONIC COMMUNICATION
    • H04M1/00 Substation equipment, e.g. for use by subscribers
    • H04M1/72 Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724 User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72403 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
    • H04M1/72409 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality by interfacing with external accessories
    • H04M1/724094 Interfacing with a device worn on the user's body to provide access to telephonic functionalities, e.g. accepting a call, reading or composing a message
    • H04M1/724095 Worn on the wrist, hand or arm
    • H04M1/72412 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality by interfacing with external accessories using two-way short-range wireless interfaces
    • H04M1/7243 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages
    • H04M1/72436 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages for text messaging, e.g. short messaging services [SMS] or e-mails
    • H04M1/72448 User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions
    • H04M1/72454 User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions according to context-related or environment-related conditions
    • H04M1/72484 User interfaces specially adapted for cordless or mobile telephones wherein functions are triggered by incoming communication events
    • H04W68/00 User notification, e.g. alerting and paging, for incoming communication, change of service or the like

Definitions

  • the present application relates to the technical field of electronic devices, and in particular, to an information processing method, electronic device and system.
  • Wearable electronic devices such as smart bracelets can be connected to mobile terminals such as mobile phones through communication methods such as Bluetooth and wireless fidelity (Wi-Fi). After the connection, the user can quickly obtain information such as incoming calls, short messages, and instant messaging messages received by the mobile terminal through the wearable electronic device without taking out the mobile terminal.
  • the smart bracelet can display the short message received by the mobile phone: "You navigate to location A, I am waiting for you here".
  • the user can perform the following operations according to the short message: take out and open the mobile phone, start the navigation application on the mobile phone, enter the destination "Location A" in the navigation application and search, and finally get the route information.
  • the user may also perform the above-mentioned specific operations on the wearable electronic device itself, but wearable electronic devices are usually small, so performing such operations (especially entering text) is more difficult; the operation is inconvenient for the user and the experience is poor.
  • the embodiments of the present application disclose an information processing method, electronic device and system, which can quickly execute the target operation indicated by the first information, simplify the steps of manual operation by the user, and improve the convenience of operation.
  • an embodiment of the present application provides an information processing method, which is applied to a system including a first device and a second device, where the first device and the second device are connected. The method includes: the second device receives first information and sends the first information to the first device, where the first information is used to instruct execution of a target operation; after receiving the first information, the first device outputs the first information; the first device receives a first operation; in response to the first operation, the first device sends a first notification to the second device; and after receiving the first notification, the second device executes the target operation.
  • the second device may directly execute the target operation indicated by the first information. That is to say, after obtaining the first information, the user can directly view the execution result of the target operation through the first operation, without manually performing the target operation, which achieves a "one-step direct access" effect and improves the convenience of operation and the user experience.
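The "one-step direct access" exchange described above can be sketched as a toy model. This is an illustrative sketch only, not the patent's implementation: the class and method names (`FirstDevice`, `SecondDevice`, `on_first_operation`, and so on) are hypothetical, and the target operation is reduced to a plain string.

```python
class FirstDevice:
    """Models the wearable: displays the first information and forwards the user's confirmation."""

    def __init__(self, peer):
        self.peer = peer        # the connected second device
        self.displayed = None

    def receive_first_information(self, info):
        # Output the first information to the user (e.g. on the watch screen).
        self.displayed = info

    def on_first_operation(self):
        # The first operation (e.g. a tap on the displayed message) triggers
        # the first notification to the second device.
        self.peer.receive_first_notification()


class SecondDevice:
    """Models the mobile terminal: receives the information and performs the target operation."""

    def __init__(self):
        self.performed = []
        self.pending = None
        self.first_device = None

    def connect(self, first_device):
        self.first_device = first_device

    def receive_first_information(self, info, target_operation):
        # Remember the operation the information instructs, then forward
        # the information to the wearable for display.
        self.pending = target_operation
        self.first_device.receive_first_information(info)

    def receive_first_notification(self):
        # The first notification means the user confirmed: execute the target operation.
        self.performed.append(self.pending)


# One round of the protocol:
phone = SecondDevice()
watch = FirstDevice(phone)
phone.connect(watch)
phone.receive_first_information("You can open the app store and search browser",
                                target_operation="start_app:app_store?query=browser")
watch.on_first_operation()   # user confirms on the wearable
```

After the confirmation, `phone.performed` holds the pending target operation; the user never repeats the steps manually on the phone.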
  • the method further includes: after the second device receives the first information, recognizing that the first information is used to instruct execution of a target operation; before the first device receives the first operation, the method further includes: the second device sends a second notification to the first device; and the first device sending a first notification to the second device in response to the first operation includes: after receiving the second notification, the first device sends the first notification to the second device in response to the received first operation.
  • in this way, even if the first device cannot identify the content indicated by the first information, it can work according to the notification from the second device, allowing the user to directly view the execution result of the target operation through the first operation after obtaining the first information, without manually performing the target operation. That is to say, a first device with weaker capabilities can also be used to implement the present application, so the application scenarios are more extensive.
  • before the first device receives the first operation, the method further includes: after the first device receives the first information, recognizing that the first information is used to instruct execution of a target operation; the first device sends a third notification to the second device, where the third notification includes indication information of the target operation; and after receiving the third notification, the second device sends a fourth notification to the first device. The first device sending the first notification to the second device in response to the first operation includes: after the first device receives the fourth notification, sending the first notification to the second device in response to the received first operation.
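The third/fourth-notification variant, in which the first device itself recognizes the target operation, can be illustrated as a short message log. The payload format and function names here are hypothetical; real devices would exchange these notifications over the Bluetooth or Wi-Fi link described earlier.

```python
log = []  # (sender, message name, payload) tuples, in transmission order

def first_device_recognize(first_information):
    # The first device parses the information and announces the target
    # operation to the second device via the third notification.
    target = "start_app:navigation?destination=Location A"  # hypothetical parse result
    log.append(("first->second", "third_notification", target))
    return target

def second_device_ack():
    # The second device confirms (e.g. after checking it can perform the
    # operation) with the fourth notification.
    log.append(("second->first", "fourth_notification", None))

def first_device_confirm(target):
    # Only after the fourth notification does the received first operation
    # cause the first notification to be sent.
    log.append(("first->second", "first_notification", target))

target = first_device_recognize("You navigate to location A, I am waiting for you here")
second_device_ack()
first_device_confirm(target)
```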
  • before the first device receives the first operation, the method further includes: after the first device receives the first information, recognizing that the first information is used to instruct execution of a target operation; the first notification includes indication information of the target operation.
  • in this way, the content indicated by the first information can also be identified by the first device; the implementation manner is more flexible, and the application scenarios are more extensive.
  • the second notification or the fourth notification is used to instruct the first device to enable the function of receiving the first operation.
  • the second device may instruct the first device to enable the function of receiving a specific first operation; only when the specific first operation is received does the first device send the first notification to the second device, so that the second device performs the target operation.
  • this avoids the situation where the second device performs the target operation when the user touches the first device by mistake, and improves the user experience.
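The arming behaviour, where only a specific first operation received after the enabling notification triggers anything, can be sketched as follows; the `Wearable` class and the gesture names are hypothetical.

```python
class Wearable:
    def __init__(self):
        self.armed = False   # becomes True after the second/fourth notification
        self.sent = []

    def receive_enable_notification(self):
        # The second (or fourth) notification enables reception of the
        # specific first operation.
        self.armed = True

    def on_touch(self, gesture):
        # Ordinary or accidental touches must not trigger the target operation.
        if self.armed and gesture == "double_click":
            self.sent.append("first_notification")
            self.armed = False   # one-shot: disarm after confirming

w = Wearable()
w.on_touch("double_click")       # accidental touch before arming: ignored
w.receive_enable_notification()
w.on_touch("tap")                # wrong gesture: still ignored
w.on_touch("double_click")       # the specific first operation: notification sent
```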
  • the first device sending a third notification to the second device includes: when the first device determines that the second device can perform the target operation, sending the third notification to the second device.
  • the second device sending a fourth notification to the first device after receiving the third notification includes: after the second device receives the third notification, determining whether the target operation can be executed; when the second device determines that the target operation can be executed, sending the fourth notification to the first device.
  • the first device or the second device may first determine whether the second device can perform the target operation. When it is determined that the second device can perform the target operation, the first device or the second device continues to perform the relevant operation, which avoids unnecessary power consumption and provides higher usability.
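A capability check of this kind might look like the sketch below. The `kind:argument` encoding of a target operation and the `INSTALLED_APPS` set are illustrative assumptions, not details from the patent.

```python
INSTALLED_APPS = {"app_store", "navigation"}   # hypothetical device state

def can_perform(target_operation):
    """Return True if the second device can execute the indicated target operation."""
    kind, _, arg = target_operation.partition(":")
    if kind == "start_app":
        # Starting an application requires it to be installed.
        return arg.split("?")[0] in INSTALLED_APPS
    if kind in ("display_interface", "toggle_function"):
        return True
    return False
```

The second device would run such a check after receiving the third notification and send the fourth notification only on success, which is the power-saving behaviour described above.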
  • the second device executing the target operation after receiving the first notification includes: after the second device receives the first notification, performing user identity authentication; when the user identity authentication is passed, the second device performs the target operation.
  • the second device may perform user identity authentication first, and when the authentication is passed, the second device performs the target operation, which improves security and reliability.
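The authentication gate can be reduced to a small helper. `authenticate` stands in for whatever identity check the second device uses (the patent does not fix one, so fingerprint, face, or a lock-screen password would all fit), and all names are hypothetical.

```python
def execute_with_auth(authenticate, perform, target_operation):
    """Execute the target operation only if user identity authentication passes."""
    if not authenticate():
        return "authentication_failed"   # the target operation is not performed
    perform(target_operation)
    return "executed"

# Demo with stubbed authentication results:
performed = []
result_ok = execute_with_auth(lambda: True, performed.append, "start_app:navigation")
result_fail = execute_with_auth(lambda: False, performed.append, "toggle_function:wifi")
```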
  • the first device is a wearable electronic device
  • the second device is a mobile terminal device.
  • the target operation includes at least one of the following: starting the first application, displaying the first interface, and enabling or disabling the first function.
  • the first information is instant messaging information or short information.
  • an embodiment of the present application provides an information processing system, the system includes a first device and a second device, and the first device and the second device are connected, wherein: the second device is configured to receive first information and send the first information to the first device, where the first information is used to instruct execution of a target operation; the first device is configured to output the first information after receiving it, receive a first operation, and, in response to the first operation, send a first notification to the second device; and the second device is configured to execute the target operation after receiving the first notification.
  • the second device may directly execute the target operation indicated by the first information. That is to say, after obtaining the first information, the user can directly view the execution result of the target operation through the first operation, without manually performing the target operation, which achieves a "one-step direct access" effect and improves the convenience of operation and the user experience.
  • the second device is further configured to: after receiving the first information, identify that the first information is used to instruct execution of a target operation; before the first device receives the first operation, the second device is further configured to: send a second notification to the first device; and when sending the first notification to the second device in response to the first operation, the first device is specifically configured to: after receiving the second notification, send the first notification to the second device in response to the received first operation.
  • in this way, even if the first device cannot identify the content indicated by the first information, it can work according to the notification from the second device, allowing the user to directly view the execution result of the target operation through the first operation after obtaining the first information, without manually performing the target operation. That is to say, a first device with weaker capabilities can also be used to implement the present application, so the application scenarios are more extensive.
  • before the first device receives the first operation, the first device is further configured to: after receiving the first information, identify that the first information is used to instruct execution of a target operation; and send a third notification to the second device, where the third notification includes indication information of the target operation. The second device is further configured to: after receiving the third notification, send a fourth notification to the first device. When sending the first notification to the second device in response to the first operation, the first device is specifically configured to: after receiving the fourth notification, send the first notification to the second device in response to the received first operation.
  • before the first device receives the first operation, the first device is further configured to: after receiving the first information, identify that the first information is used to instruct execution of a target operation; the first notification includes indication information of the target operation.
  • in this way, the content indicated by the first information can also be identified by the first device; the implementation manner is more flexible, and the application scenarios are more extensive.
  • the second notification or the fourth notification is used to instruct the first device to enable the function of receiving the first operation.
  • the second device may instruct the first device to enable the function of receiving a specific first operation; only when the specific first operation is received does the first device send the first notification to the second device, so that the second device performs the target operation.
  • this avoids the situation where the second device performs the target operation when the user touches the first device by mistake, and improves the user experience.
  • when sending the third notification to the second device, the first device is specifically configured to: when it is determined that the second device can perform the target operation, send the third notification to the second device.
  • when sending a fourth notification to the first device after receiving the third notification, the second device is specifically configured to: after receiving the third notification, determine whether the target operation can be executed; when it is determined that the target operation can be executed, send the fourth notification to the first device.
  • the first device or the second device may first determine whether the second device can perform the target operation. When it is determined that the second device can perform the target operation, the first device or the second device continues to perform the relevant operation, which avoids unnecessary power consumption and provides higher usability.
  • when executing the target operation after receiving the first notification, the second device is specifically configured to: after receiving the first notification, perform user identity authentication; when the user identity authentication is passed, perform the target operation.
  • the second device may perform user identity authentication first, and when the authentication is passed, the second device performs the target operation, which improves security and reliability.
  • the first device is a wearable electronic device
  • the second device is a mobile terminal device.
  • the target operation includes at least one of the following: starting the first application, displaying the first interface, and enabling or disabling the first function.
  • the first information is instant messaging information or short information.
  • an embodiment of the present application provides an electronic device, the electronic device includes at least one memory and at least one processor, the at least one memory is coupled to the at least one processor, and the at least one memory is used to store a computer program; the at least one processor is used to call the computer program, and the computer program includes instructions; when the instructions are executed by the at least one processor, the electronic device is caused to perform the information processing method provided by the first aspect and any one of the implementations of the first aspect in the embodiments of the present application.
  • the embodiments of the present application provide a computer storage medium, including computer instructions, which, when executed on an electronic device, cause the electronic device to execute the information processing method provided by the first aspect and any one of the implementations of the first aspect in the embodiments of the present application.
  • the embodiments of the present application provide a computer program product that, when run on an electronic device, enables the electronic device to perform the information processing method provided by the first aspect and any one of the implementations of the first aspect in the embodiments of the present application.
  • an embodiment of the present application provides a chip, where the chip includes at least one processor, an interface circuit, and a memory; the memory, the interface circuit, and the at least one processor are interconnected through a line, and a computer program is stored in the memory; when the computer program is executed by the at least one processor, the information processing method provided by the first aspect and any one of the implementations of the first aspect in the embodiments of the present application is implemented.
  • the electronic device provided in the third aspect, the computer storage medium provided in the fourth aspect, the computer program product provided in the fifth aspect, and the chip provided in the sixth aspect are all used to execute the information processing method provided by the first aspect and its implementations.
  • FIG. 1 is a schematic diagram of the architecture of an information processing system provided by an embodiment of the present application.
  • FIG. 2 is a schematic diagram of a hardware structure of an electronic device provided by an embodiment of the present application.
  • FIG. 3 is a schematic diagram of a software architecture of an electronic device provided by an embodiment of the present application.
  • FIG. 4 to FIG. 6 are schematic diagrams of some human-computer interactions provided by the embodiments of the present application.
  • FIG. 1 is a schematic structural diagram of an information processing system provided by an embodiment of the present application.
  • the information processing system may include a first device 200 and a second device 300 .
  • the first device 200 and the second device 300 may be directly connected and communicated in a wired or wireless manner.
  • the first device 200 and the second device 300 may also be connected to the Internet in a wired or wireless manner, and communicate via the Internet.
  • the wired method includes, for example, universal serial bus (USB), twisted pair, coaxial cable, and optical fiber, etc.
  • the wireless method includes, for example, wireless fidelity (Wi-Fi), Bluetooth, cellular communication, etc.
  • the Internet includes at least one server, and the server may be a hardware server or a cloud server.
  • the first device 200 may display a user interface 400 , where the user interface 400 may include text 401 and information 402 , and the text 401 is the nickname of the user who sends the information 402 .
  • the information 402 may be instant messaging information or a short message received by the first device 200 from the Internet or other devices, or may be instant messaging information or a short message received by the second device 300 from the Internet or other devices and then forwarded to the first device 200.
  • the message 402 is the text: "You can directly open the application store and search the browser to see it".
  • the second device 300 may display a user interface 500, and the user interface 500 may include a status bar 510, an application icon 520, and a switch page option 530, wherein:
  • the status bar 510 may include the name of the connected mobile network, the Wi-Fi icon, the signal strength, and the current remaining battery.
  • the mobile network accessed is a fifth-generation mobile communication technology (5G) network with four signal bars displayed (that is, the best signal strength).
  • Application icons 520 may include, for example, a gallery icon 521, a music icon 522, an application store icon 523, a weather icon 524, a navigation icon 525, a camera icon 526, a phone icon 527, a contacts icon 528, a short message icon 529, and the like, and may also include icons of other applications, which are not limited in this embodiment of the present application.
  • the icon of any application can be used to respond to a user's operation, such as a touch operation, so that the second device 300 starts the application corresponding to the icon.
  • Switch page options 530 may include a first page option 531 , a second page option 532 and a third page option 533 .
  • the first page option 531 in the user interface 500 is selected, indicating that the user interface 500 is the first interface of the desktop displayed by the second device 300 .
  • the second device 300 may receive a swipe operation (for example, a swipe from right to left) performed by the user on a blank area of the user interface 500 or on the switch page options 530, and in response to the swipe operation, the second device 300 may switch the displayed interface to the second interface or the third interface of the desktop.
  • when the second device 300 displays the second interface of the desktop, the second page option 532 is selected; when the second device 300 displays the third interface of the desktop, the third page option 533 is selected.
  • the user can view the information 402 through the first device 200, and then perform the following operations on the second device 300 according to the information 402: click the icon 523 of the application store in the user interface 500 to start the application store, click the search bar in the user interface of the application store (for example, the search bar 601 in the user interface 600 shown in (C) of FIG. 4 below), enter "browser" in the search bar and trigger a search (for example, click the search control 602 in the user interface 600 shown in (C) of FIG. 4 below), and finally obtain the corresponding search result (for example, the list 603 in the user interface 600 shown in (C) of FIG. 4 below).
  • the present application provides an information processing method, which can be applied to the information processing system shown in FIG. 1 .
  • the first information displayed by the first device 200 may be identified as being used to instruct to perform the target operation.
  • the first device 200 may receive a first operation (eg, a double-click operation, a long-press operation, a touch operation acting on a specific control, etc.), and in response to the first operation, the first device 200 or the second device 300 may perform a target operation.
  • the target operation includes, for example, starting the first application, displaying the first interface, enabling or disabling the first function, and the like.
  • the present application can directly execute the target operation upon receiving the first operation, which simplifies the user's manual operation steps and improves the convenience of operation and the user experience.
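Recognizing that the first information instructs a target operation could be as simple as pattern matching over the message text. The patterns below are purely hypothetical examples built around the two messages used in this description; the patent does not specify how recognition is performed (keyword rules or an on-device model would both fit).

```python
import re

# Hypothetical recognition rules mapping message text to a target operation.
PATTERNS = [
    (re.compile(r"open the application store and search (?:the )?(\w+)"),
     lambda m: f"start_app:app_store?query={m.group(1)}"),
    (re.compile(r"navigate to (location \w+)", re.IGNORECASE),
     lambda m: f"start_app:navigation?destination={m.group(1)}"),
]

def recognize_target_operation(first_information):
    """Return the indicated target operation, or None for ordinary chat text."""
    for pattern, build in PATTERNS:
        match = pattern.search(first_information)
        if match:
            return build(match)
    return None
```

Either device could run such a recognizer, matching the two implementation variants described above (recognition on the second device, or on the first device followed by the third/fourth notifications).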
  • the electronic devices involved in this application may be smart screens, smart TVs, mobile phones, tablet computers, desktop computers, laptop computers, ultra-mobile personal computers (UMPCs), handheld computers, netbooks, personal digital assistants (PDAs), wearable electronic devices (such as smart bracelets and smart glasses), and other devices.
  • the first device 200 and the second device 300 may be any of the devices listed above, wherein the first device 200 and the second device 300 may be the same or different.
  • the first device 200 is a smart watch
  • the second device 300 is a mobile phone.
  • FIG. 2 exemplarily shows a schematic structural diagram of an electronic device 100 .
  • the electronic device 100 may be the first device 200 or the second device 300 in the information processing system shown in FIG. 1 .
  • the electronic device 100 may include a processor 1010, a memory 1020, and a transceiver 1030, and the processor 1010, the memory 1020, and the transceiver 1030 are connected to each other through a bus.
  • the processor 1010 may be one or more central processing units (central processing units, CPUs). When the processor 1010 is a CPU, the CPU may be a single-core CPU or a multi-core CPU.
  • the memory 1020 includes, but is not limited to, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM), or compact disc read-only memory (CD-ROM); the memory 1020 is used to store related computer programs and data.
  • the transceiver 1030 is used to receive and transmit data.
  • the transceiver 1030 may provide a wireless communication solution including 2G/3G/4G/5G etc. applied on the electronic device 100 .
  • the transceiver 1030 may provide wireless communication solutions applied on the electronic device 100, including wireless local area networks (WLAN) (such as wireless fidelity (Wi-Fi) networks), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), and infrared (IR) technology.
  • the electronic device 100 may communicate with the network and other devices through the transceiver 1030 using wireless communication technology.
  • Wireless communication technologies may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division synchronous code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technology, etc.
  • GNSS may include a global positioning system (GPS), a global navigation satellite system (GLONASS), a BeiDou navigation satellite system (BDS), a quasi-zenith satellite system (QZSS), and the like.
  • the electronic device 100 is a second device, and the first device is a wearable device.
  • the second device may collect the connection status of the first device through the transceiver 1030, that is, whether the first device is connected to the second device.
  • the second device may also receive the user's biometric information (eg, pulse information, heart rate information) sent by the first device through the transceiver 1030 .
  • the processor 1010 may determine whether the user wears the first device according to the above-mentioned biometric information.
  • the electronic device 100 may determine that the user's identity verification is passed.
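The wearing-state inference described above can be sketched as a simple rule. This is a minimal illustration only: the connectivity flag, the 30-220 bpm plausibility bounds, and the class and method names below are assumptions for the example, not details from this application.

```java
// Sketch: infer the wearing state of the first device from the connection
// status and a reported biometric reading. Bounds are illustrative only.
public class WearCheck {
    /**
     * Treat the device as worn if it is connected to the second device and
     * the reported heart rate lies in a physiologically plausible range.
     */
    public static boolean isWorn(boolean connected, int heartRateBpm) {
        if (!connected) {
            return false; // not connected to the second device at all
        }
        // A reading of 0 (or an implausible value) suggests the sensor is
        // not against the user's skin, i.e. the device is not being worn.
        return heartRateBpm >= 30 && heartRateBpm <= 220;
    }
}
```

A real implementation would combine several biometric signals (pulse, heart rate, touch behavior) rather than a single threshold test.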
  • the electronic device 100 may further include a speaker, also referred to as a "speaker", for converting audio electrical signals into sound signals.
  • the electronic device 100 may play music through a speaker, or play the first information.
  • the electronic device 100 may further include an earphone interface, and the earphone interface is used to connect a wired earphone.
  • the headphone interface can be a USB interface, a 3.5mm open mobile terminal platform (OMTP) standard interface, or a cellular telecommunications industry association of the USA (CTIA) standard interface.
  • the electronic device 100 may be connected to a wireless headset, such as a Bluetooth headset, through the transceiver 1030 .
  • the electronic device 100 may play music or play the first information through wired or wireless headphones.
  • the electronic device 100 may also include a display screen.
  • the display screen is used to display images, videos, text, etc.
  • the display includes a display panel.
  • the display panel can be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a MiniLED, a MicroLED, a Micro-OLED, or a quantum dot light-emitting diode (QLED).
  • the electronic device 100 may include 1 or N display screens, where N is a positive integer greater than 1.
  • the electronic device 100 may also include one or more sensors, such as, but not limited to, a touch sensor, a pressure sensor, a pulse sensor, a heart rate sensor, and the like. Specifically:
  • the pressure sensor is used to sense the pressure signal and can convert the pressure signal into an electrical signal.
  • pressure sensors such as resistive pressure sensors, inductive pressure sensors, capacitive pressure sensors, etc.
  • a pressure sensor may be provided in the display screen.
  • A touch sensor is also known as a "touch device".
  • the touch sensor may be disposed on the display screen, and the touch sensor and the display screen form a touch screen, also referred to as a "touchscreen".
  • the electronic device 100 can detect the intensity, position, etc. of a touch operation through the pressure sensor and/or the touch sensor, and transmit the detected touch operation to the processor 1010 to determine the type of the touch event.
  • the electronic device 100 may also provide visual output related to the touch operation through the display screen.
  • the pressure sensor and/or the touch sensor may also be disposed on the surface of the electronic device 100, which is different from the position where the display screen is located.
  • the electronic device 100 may collect the user's touch screen behavior information (eg, the position and area of the touch area, the time stamp of occurrence, the number of touches, and the pressure level) through the pressure sensor and/or the touch sensor.
  • the pulse sensor can detect the pulse signal.
  • a pulse sensor can detect changes in pressure created by arterial pulses and convert them into electrical signals.
  • pulse sensors such as piezoelectric pulse sensors, piezoresistive pulse sensors, and photoelectric pulse sensors.
  • the piezoelectric pulse sensor and the piezoresistive pulse sensor can convert the pressure process of pulse beating into signal output through micro-pressure-type materials (such as piezoelectric sheets, bridges, etc.).
  • the photoelectric pulse sensor can convert the change of the light transmittance of the blood vessel during the pulse beating process into a signal output by means of reflection or transmission, that is, to obtain the pulse signal by photoplethysmogram (PPG).
  • the electronic device 100 may collect the pulse information of the user through the pulse sensor.
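As one hedged illustration of how a PPG-style waveform might be turned into a pulse figure, the sketch below counts local maxima above a threshold and converts the count into beats per minute. The threshold, sampling rate, and peak rule are assumptions for this example only; a real photoelectric pulse sensor pipeline would filter and condition the signal first.

```java
// Sketch: estimate a pulse rate from a sampled PPG-style waveform by
// counting local maxima above a threshold. Purely illustrative.
public class PpgPulse {
    /**
     * @param samples       sampled waveform values
     * @param threshold     minimum value for a sample to count as a peak
     * @param sampleRateHz  sampling rate of the waveform
     * @return estimated pulse in beats per minute
     */
    public static int pulseBpm(double[] samples, double threshold,
                               double sampleRateHz) {
        int peaks = 0;
        for (int i = 1; i < samples.length - 1; i++) {
            // A peak is a sample above the threshold that is higher than its
            // left neighbor and not lower than its right neighbor.
            if (samples[i] > threshold
                    && samples[i] > samples[i - 1]
                    && samples[i] >= samples[i + 1]) {
                peaks++;
            }
        }
        double seconds = samples.length / sampleRateHz;
        return (int) Math.round(peaks * 60.0 / seconds);
    }
}
```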
  • Heart rate sensors can detect heart rate signals.
  • the heart rate sensor may acquire the heart rate signal by photoplethysmogram (PPG).
  • Heart rate sensors can convert changes in vascular dynamics, such as changes in blood pulse rate (heart rate) or blood volume (cardiac output), into signal output by means of reflection or transmission.
  • the heart rate sensor may measure signals of electrical activity induced in cardiac tissue through electrodes attached to the human skin, ie, heart rate signals are acquired by electrocardiography (ECG).
  • the electronic device 100 may collect the user's heart rate information through the heart rate sensor.
  • the electronic device 100 may include 1 or N cameras, where N is a positive integer greater than 1. Cameras are used to capture still images or video.
  • the N cameras may be front cameras, rear cameras, lift cameras, detachable cameras, etc.
  • the embodiments of the present application do not limit the connection method and mechanical mechanism of the N cameras and the electronic device 100 .
  • the electronic device 100 can obtain the user's face information through a camera, and implement functions such as face unlocking, access application lock, etc. based on the face information.
  • the electronic device 100 is the first device.
  • the electronic device 100 may display the first information through the display screen.
  • the electronic device 100 may detect the first operation (eg, a double-click operation, a long-press operation, a touch operation acting on a specific control, etc.) through a pressure sensor and/or a touch sensor.
  • the processor 1010 may perform a target operation, wherein the first information may be identified as indicating the execution of the target operation, such as starting the first application, displaying the first interface, enabling or disabling the first function, etc.
  • the electronic device 100 may send a notification to the second device through the transceiver 1030 to cause the second device to perform the target operation.
  • Suppose the electronic device 100 is used to perform a target operation, and the target operation can be performed only when the user's identity verification is passed. Then, when the electronic device 100 detects the above-mentioned first operation, in response to the first operation, the electronic device 100 may first collect the biometric information of the user. For example, the electronic device 100 collects the connection state and/or wearing state of the wearable device through the transceiver 1030, collects the user's touch-screen behavior information through the pressure sensor and/or the touch sensor, collects the user's pulse information through the pulse sensor, collects the user's heart rate information through the heart rate sensor, and collects the user's face information through the camera.
  • the processor 1010 performs user identity verification based on the collected biometric information.
  • When the verification is passed, the processor 1010 executes the target operation; when the verification fails, the processor 1010 temporarily does not execute the target operation until the verification is passed, and then executes the target operation.
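The verify-then-execute behavior described above can be sketched roughly as follows. The class and callback names are hypothetical, and a real implementation would sit behind the device's biometric framework; this only illustrates the control flow of executing immediately on a passed check and deferring on a failed one.

```java
import java.util.function.Supplier;

// Sketch: gate a target operation on user identity verification.
public class GatedExecutor {
    private final Supplier<Boolean> identityCheck; // e.g. face/heart-rate match
    private Runnable pending;                      // deferred target operation

    public GatedExecutor(Supplier<Boolean> identityCheck) {
        this.identityCheck = identityCheck;
    }

    /** Called when the first operation (e.g. a double-click) is detected. */
    public void onFirstOperation(Runnable targetOperation) {
        if (identityCheck.get()) {
            targetOperation.run();     // verification passed: execute now
        } else {
            pending = targetOperation; // defer until verification passes
        }
    }

    /** Called when a later verification attempt succeeds. */
    public void onVerificationPassed() {
        if (pending != null) {
            pending.run();
            pending = null;
        }
    }
}
```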
  • the electronic device 100 may include a SIM card interface for connecting a SIM card.
  • the SIM card can be contacted and separated from the electronic device 100 by inserting into the SIM card interface or pulling out from the SIM card interface.
  • the electronic device 100 may support 1 or N SIM card interfaces, where N is a positive integer greater than 1.
  • the SIM card interface can support Nano SIM card, Micro SIM card, SIM card, etc.
  • Multiple cards can be inserted into the same SIM card interface at the same time. The multiple cards may be of the same type or different types.
  • the SIM card interface can also be compatible with different types of SIM cards.
  • the SIM card interface is also compatible with external memory cards.
  • the electronic device 100 interacts with the network through the SIM card to implement functions such as call and data communication.
  • the electronic device 100 employs an eSIM, ie: an embedded SIM card.
  • the eSIM card can be embedded in the electronic device 100 and cannot be separated from the electronic device 100 .
  • the electronic device 100 is the first device.
  • the electronic device 100 may only be able to communicate with the connected second device through the transceiver 1030; for example, the electronic device 100 is a wearable device without a SIM card and without an Internet connection.
  • the electronic device 100 can not only communicate with the connected second device through the transceiver, but also communicate with the Internet and other devices.
  • the electronic device 100 is a wearable device or a terminal connected with a SIM card and connected to the Internet.
  • the processor 1010 in the electronic device 100 can be used to read the computer program and data stored in the memory 1020, and execute the information processing methods shown in FIGS. 7-15.
  • the software system of the electronic device 100 may adopt a layered architecture, an event-driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture.
  • the embodiments of the present application take an Android system with a layered architecture as an example to exemplarily describe the software structure of the electronic device 100 .
  • FIG. 3 is a block diagram of a software structure of an electronic device 100 according to an embodiment of the present invention.
  • the layered architecture divides the software into several layers, and each layer has a clear role and division of labor. Layers communicate with each other through software interfaces.
  • The Android system is divided into four layers: from top to bottom, the application layer, the application framework layer, the Android runtime and system library, and the kernel layer.
  • the software framework shown in FIG. 3 is only an example, and the system of the electronic device 100 may also be other operating systems, such as Huawei mobile services (huawei mobile services, HMS), etc.
  • the application layer can include a series of application packages.
  • the application package can include applications such as camera, gallery, navigation, music, call, short message, calendar, Bluetooth, application store and so on.
  • the application framework layer provides an application programming interface (API) and a programming framework for applications in the application layer.
  • the application framework layer includes some predefined functions.
  • the application framework layer may include window managers, content providers, view systems, telephony managers, resource managers, notification managers, and the like.
  • a window manager is used to manage window programs.
  • the window manager can get the size of the display screen, determine whether there is a status bar, lock the screen, take screenshots, etc.
  • Content providers are used to store and retrieve data and make these data accessible to applications.
  • the data may include video, images, audio, calls made and received, browsing history and bookmarks, phone book, etc.
  • the view system includes visual controls, such as controls for displaying text, controls for displaying pictures, and so on. View systems can be used to build applications.
  • a display interface can consist of one or more views.
  • the display interface including the short message notification icon may include a view for displaying text and a view for displaying pictures.
  • the phone manager is used to provide the communication function of the electronic device 100 .
  • For example, the telephony manager manages call status (including connecting, hanging up, etc.).
  • the resource manager provides various resources for the application, such as localization strings, icons, pictures, layout files, video files and so on.
  • the notification manager enables applications to display notification information in the status bar, which can be used to convey notification-type information, and can disappear automatically after a brief pause without user interaction. For example, the notification manager is used to notify download completion, information reminders, etc.
  • the notification manager can also display notifications in the status bar at the top of the system in the form of graphs or scroll bar text, such as notifications of applications running in the background, and notifications on the screen in the form of dialog windows. For example, text information is prompted in the status bar, a prompt sound is issued, the electronic device vibrates, and the indicator light flashes.
  • Android Runtime includes core libraries and a virtual machine. Android runtime is responsible for scheduling and management of the Android system.
  • the core library consists of two parts: one part is the functions that the Java language needs to call, and the other part is the core library of Android.
  • the application layer and the application framework layer run in virtual machines.
  • the virtual machine executes the java files of the application layer and the application framework layer as binary files.
  • the virtual machine is used to perform functions such as object lifecycle management, stack management, thread management, safety and exception management, and garbage collection.
  • the system library can include multiple functional modules, for example: a surface manager, media libraries, a 3D graphics processing library (e.g., OpenGL ES), and a 2D graphics engine (e.g., SGL).
  • the Surface Manager is used to manage the display subsystem and provides a fusion of 2D and 3D layers for multiple applications.
  • the media library supports playback and recording of a variety of commonly used audio and video formats, as well as still image files.
  • the media library can support a variety of audio and video encoding formats, such as: MPEG4, H.264, MP3, AAC, AMR, JPG, PNG, etc.
  • the 3D graphics processing library is used to implement 3D graphics drawing, image rendering, compositing, and layer processing.
  • 2D graphics engine is a drawing engine for 2D drawing.
  • the kernel layer is the layer between hardware and software.
  • the kernel layer contains at least display drivers, camera drivers, audio drivers, and sensor drivers.
  • the display driver may be used to drive and control the display screen in the hardware, such as the display screen shown in FIG. 2 .
  • the camera driver may be used to drive and control the camera in the hardware, such as the camera shown in FIG. 2 .
  • the sensor driver may be used to drive and control the multiple sensors in the hardware, such as the pressure sensor, the touch sensor, and the other sensors shown in FIG. 2 .
  • the workflow of the software and hardware of the electronic device 100 is exemplarily described below in conjunction with the scene of viewing information by touch.
  • When the touch sensor receives a touch operation, the corresponding hardware interrupt is sent to the kernel layer.
  • the kernel layer processes touch operations into raw input events (including touch coordinates, timestamps of touch operations, etc.). Raw input events are stored at the kernel layer.
  • the application framework layer obtains the original input event from the kernel layer and identifies the control corresponding to the input event. For example, assume the touch operation is a click operation and the control corresponding to the click operation is the prompt box of a short message.
  • the short message application calls the interface of the application framework layer, which in turn calls the kernel layer to start the display driver, and the details corresponding to the prompt box are displayed through the display screen.
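The kernel-to-framework touch workflow above can be approximated by a toy dispatcher: a raw input event carrying coordinates and a timestamp is matched against registered control regions to identify the control it corresponds to. All names, the rectangle registry, and the hit-test rule are simplifications invented for illustration, not the actual Android dispatch mechanism.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: a toy model of raw-input-event dispatch from the kernel layer
// to a control registered in the application framework layer.
public class TouchPipeline {
    /** Raw input event produced by the kernel layer from a hardware interrupt. */
    public static class RawInputEvent {
        public final int x, y;
        public final long timestampMs;
        public RawInputEvent(int x, int y, long timestampMs) {
            this.x = x; this.y = y; this.timestampMs = timestampMs;
        }
    }

    // Framework-layer registry: maps a hit rectangle {left, top, right, bottom}
    // to a control name.
    private final Map<int[], String> controls = new HashMap<>();

    public void registerControl(String name, int left, int top,
                                int right, int bottom) {
        controls.put(new int[]{left, top, right, bottom}, name);
    }

    /** Identify which control, if any, the raw event falls on. */
    public String dispatch(RawInputEvent e) {
        for (Map.Entry<int[], String> c : controls.entrySet()) {
            int[] r = c.getKey();
            if (e.x >= r[0] && e.y >= r[1] && e.x < r[2] && e.y < r[3]) {
                return c.getValue(); // e.g. the short-message prompt box
            }
        }
        return null; // event falls outside every registered control
    }
}
```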
  • the hardware and software architectures shown in FIG. 2 and FIG. 3 are applicable to the first device, and may also be applicable to the second device.
  • the first device is a smart watch and the second device is a smart phone as an example for description.
  • FIG. 4 exemplarily shows a schematic diagram of human-computer interaction.
  • the first device displays a user interface 400
  • the user interface 400 may include text 401 and information 402
  • the text 401 is the nickname of the user who sends the information 402 .
  • the information 402 may be received by the second device and forwarded to the first device for display.
  • the message 402 is the text: "You can directly open the application store and search the browser to see it".
  • the second device may perform semantic analysis on the information 402 to identify that the information 402 is used to instruct the execution of the target operation, and obtain a result of the semantic analysis (that is, the information of the target operation represented by the information 402 ).
  • the result of the semantic parsing may include the identifier of the application to be started, the identifier of the user interface of the application to be displayed, the parameters to be input, and the like.
  • the second device may send first indication information to the first device, where the first indication information is used to instruct the first device to send the second indication information to the second device when the first operation is received.
  • the first operation is, for example, but not limited to, a double-click operation acting on the display screen of the first device, a long-press operation shown in FIG. 5 below, and a touch operation acting on a specific control shown in FIG. 6 below.
  • After receiving the first indication information sent by the second device, the first device detects a double-click operation acting on the display screen, and in response to the double-click operation, the first device may send the second indication information to the second device.
  • the second indication information represents that the first device has received the first operation.
  • In response to the second indication information, the second device performs the target operation according to the result of the above-mentioned semantic analysis, and displays the user interface 600 .
  • the information of the target operation represented by the information 402 may specifically include: the identifier of the application mall, the identifier of the search page of the application mall, and the parameter "browser" to be input.
  • the target operation performed by the second device includes: starting the application identified by the above-mentioned application mall identifier, displaying the user interface 600 identified by the above-mentioned search page identifier, entering the above-mentioned parameter to be input, "browser", in the search bar 601 of the user interface 600, performing a search, and finally obtaining the search result: the list 603.
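A minimal sketch of the semantic-parsing result (application identifier, interface identifier, parameters to be input) and a toy keyword parser for the example message is shown below. The application identifier, page identifier, and keyword rule are invented for illustration; a real implementation would use a proper natural-language understanding model rather than substring matching.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: the result of semantic parsing and a toy parser producing it.
public class SemanticParse {
    /** Result of semantic parsing: which app/page to open and what to type. */
    public static class ParsedOperation {
        public final String appId;
        public final String pageId;
        public final List<String> inputParams;
        public ParsedOperation(String appId, String pageId,
                               List<String> inputParams) {
            this.appId = appId; this.pageId = pageId;
            this.inputParams = inputParams;
        }
    }

    /**
     * Toy keyword-based parser for messages like "You can directly open the
     * application store and search the browser to see it".
     */
    public static ParsedOperation parse(String message) {
        int i = message.indexOf("search the ");
        if (message.contains("application store") && i >= 0) {
            // Take the word following "search the " as the search term.
            String term = message.substring(i + "search the ".length())
                                 .split("[ ,.]")[0];
            List<String> params = new ArrayList<>();
            params.add(term);
            return new ParsedOperation("com.example.appstore",
                                       "search_page", params);
        }
        return null; // message does not indicate a target operation
    }
}
```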
  • the control 602 in the user interface 600 is the control that the user would click when performing a manual "search".
  • the information 402 may also be an instant message or short message received by the first device from the Internet or other devices, and after receiving the information 402, the first device may forward it to the second device, so that the second device Information 402 is semantically parsed.
  • the first device may also perform semantic analysis on the information 402 . After obtaining the result of semantic parsing, the first device may directly send the result of semantic parsing to the second device, and then the second device sends the first indication information to the first device according to the result of semantic parsing. Alternatively, after the first device obtains the result of semantic parsing, if the first operation is detected, the first device may send third indication information to the second device. The third indication information includes the result of semantic parsing, and is used to instruct the second device to perform the target operation. Optionally, if the ability of the first device to receive the first operation is disabled, the first device may first enable the ability to receive the first operation after obtaining the result of the semantic parsing.
  • the first indication information is further used to instruct the first device to enable the ability to receive the first operation.
  • the first indication information is further used to indicate a specific type of the first operation; for example, the first indication information is used to instruct the first device to enable the ability to receive a double-click operation.
  • the second device may first determine whether the target operation can be executed according to the result of the semantic analysis. For example, the second device determines whether to install the application mall according to the identification of the application mall. If the application mall is installed, it is determined that the second device can perform the target operation, and if the application mall is not installed, it is determined that the second device cannot perform the target operation.
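The installed-application check above might look like the following sketch. The decision enum and the idea of reporting an unsupported operation back to the first device are assumptions for illustration; on Android the check would typically go through the package manager.

```java
import java.util.Set;

// Sketch: decide whether the second device can perform the target operation.
public class CapabilityCheck {
    public enum Decision { PERFORM, REPORT_UNSUPPORTED }

    /**
     * The operation can be performed only if the application named in the
     * semantic-parsing result is installed; otherwise report back rather
     * than failing silently.
     */
    public static Decision decide(Set<String> installedApps,
                                  String requiredAppId) {
        return installedApps.contains(requiredAppId)
                ? Decision.PERFORM
                : Decision.REPORT_UNSUPPORTED;
    }
}
```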
  • Before sending the third indication information to the second device, the first device may first determine, according to the result of the semantic analysis, whether the second device can perform the target operation. When it is determined that the second device can perform the target operation, the first device then sends the third indication information to the second device.
  • Suppose the second device can perform the target operation only if the user identity verification is passed; for example, the second device is in a locked-screen state before the target operation is performed. Then, when the second device receives the second indication information or the third indication information, it can acquire the biometric information of the user by itself or through the first device, and then verify the user's identity based on the biometric information. For example, the second device collects the user's face information through the camera of the first device, and performs identity verification on the face information. If the user identity verification is passed, the second device may perform the target operation; otherwise, the second device may not perform the target operation, or may perform the target operation only after the user identity verification is passed.
  • After performing the target operation, the second device may send second information to the first device for display.
  • the second information may include the execution status of the target operation, such as whether the execution is successful, and result information obtained by executing the target operation (e.g., the list 603 shown in (C) of FIG. 4 ). That is, the user not only does not need to perform the target operation manually, but can also quickly obtain the execution status of the target operation through the first device, which greatly facilitates the user's use. A specific example is shown in FIG. 5 below.
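The second information (execution status plus result information) could be modeled as a small message like the sketch below. The field names and the one-line encoding are assumptions for illustration, not a format defined by this application.

```java
import java.util.List;

// Sketch: a message carrying the execution status of the target operation
// and the result information, sent from the second device to the first.
public class SecondInformation {
    public final boolean success;      // whether the target operation succeeded
    public final List<String> results; // e.g. search results for the watch UI

    public SecondInformation(boolean success, List<String> results) {
        this.success = success;
        this.results = results;
    }

    /** Compact one-line form suitable for sending to the first device. */
    public String encode() {
        return (success ? "OK" : "FAIL") + ";" + String.join("|", results);
    }
}
```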
  • FIG. 5 exemplarily shows another schematic diagram of human-computer interaction.
  • FIG. 5 takes the second device performing semantic analysis and executing the target operation as an example for description.
  • the first device displays a user interface 400
  • the user interface 400 may include text 401 and information 403
  • the text 401 is the nickname of the user who sent the information 403 .
  • the information 403 may be received by the second device and forwarded to the first device for display.
  • the message 403 is the text: "Try to turn on the Bluetooth of the mobile phone".
  • the second device may perform semantic analysis on the information 403 to recognize that the information 403 is used to indicate the execution of the target operation, and obtain information of the target operation represented by the information 403: the identifier of the Bluetooth application. Then, the second device may send the first indication information to the first device.
  • After receiving the first indication information sent by the second device, the first device detects a long-press operation acting on the display screen, and in response to the long-press operation, the first device may send the second indication information to the second device.
  • In response to the second indication information, the second device performs the target operation according to the result of the above semantic analysis: starting the application identified by the above Bluetooth application identifier, and enabling the Bluetooth function.
  • the control 611 in the user interface 610 is the control that would be clicked when the user manually enables Bluetooth.
  • Other devices, such as previously connected devices, can be connected automatically through Bluetooth.
  • the device list 612 in the user interface 610 is used to display the device name connected to the second device via Bluetooth. After the second device performs the target operation, it can send the second information to the first device.
  • the second information is used to indicate that the Bluetooth function has been enabled, and optionally, it may also include a device name to which the second device is connected through Bluetooth.
  • the first device may display a user interface 410 according to the second information, and the user interface 410 includes prompt information 411 and a device list 412 .
  • the prompt information 411 indicates that the Bluetooth function is enabled, and the device list 412 includes the device name of the second device connected via Bluetooth (ie, the device name included in the device list 612 in the user interface 610).
  • Optionally, the second device may not display the user interface 610, but directly send the second information to the first device, so as to provide the user with the information of the target operation through the user interface 410 displayed by the first device.
  • In this case, the user does not need to pay attention to the process of executing the target operation. After performing the first operation (e.g., the above-mentioned long-press operation acting on the display screen), the user can directly view the execution result of the target operation through the user interface of the first device, without needing to turn on the second device.
  • the situation shown in Figure 5 can also be applied to a shopping scenario.
  • the information displayed by the first device is: "help me buy a toy on a shopping application".
  • the second device can perform the target operation: open the shopping application, search for the name of the item to be purchased (i.e., a toy) in the user interface of the shopping application, select a search result to purchase according to preset rules, and finally obtain the purchase result.
  • the second device may send the above information of the purchase result as the second information to the first device, so that the first device displays the information of the purchase result.
  • the second device needs to verify the user's identity when making a purchase, and the purchase can be made only when the verification is successful.
  • the biometric information used for user identity verification may be obtained by the second device through a camera, sensor, transceiver, etc. of the first device or the second device, such as face information, heart rate information, pulse information, and the like. The user does not need to perform the identity verification process manually, which makes use more convenient and fast.
  • the situation shown in Figure 5 can also be applied to navigation scenarios.
  • the information displayed by the first device is: "You navigate to location A, I am waiting for you here".
  • the second device may perform the target operation: open the navigation application, search the destination name "Location A" in the user interface of the navigation application, and finally obtain at least one route information. Then, the second device may send the at least one piece of route information as the second information to the first device, so that the first device displays any one or more pieces of route information, so that the user can quickly navigate.
  • the first device may also perform the target operation, and a specific example is shown in FIG. 6 below.
  • FIG. 6 exemplarily shows yet another schematic diagram of human-computer interaction.
  • the first device displays a user interface 400 .
  • the user interface 400 may include text 401 , information 404 and a control 405 , and the text 401 is the nickname of the user who sent the information 404 .
  • the information 404 may be an instant messaging message or a short message that is received by the second device and forwarded to the first device.
  • the information 404 is the text: "I recommend you to listen to song A".
  • the first device may perform semantic analysis on the information 404 to recognize that the information 404 is used to indicate the execution of the target operation, and obtain information of the target operation represented by the information 404: the identification of the music application, the song name (ie, song A).
  • the first device can detect a touch operation (eg, a click operation) acting on the control 405. In response to the touch operation, the first device can perform the target operation according to the information of the target operation obtained from the information 404: start the application identified by the above music application identifier, search for the song named song A, and play it.
  • the first device may display the user interface 420 shown in (C) of FIG. 6 , and the user interface 420 represents that the first device is playing the song A through the music application. Therefore, the user can obtain the execution status of the target operation through the user interface 420 , that is, the target operation is successfully executed.
  • the second device may also perform semantic analysis on the information 404 .
  • the second device may directly send the result of semantic parsing to the first device.
  • the first device performs the target operation according to the result of the semantic analysis.
  • the information 404 may also be received by the first device from the Internet or other devices. After receiving the information 404, the first device can forward it to the second device, so that the second device can perform semantic analysis on the information 404.
  • the first device may enable the ability to receive the first operation, so as to receive the first operation subsequently.
  • the ability of the first device to receive the first operation can be disabled by default. After the first device receives the semantic-parsing result sent by the second device, it can first enable this ability, so as to receive the first operation subsequently.
  • the first device may first determine whether the target operation can be performed according to the result of the semantic analysis. For example, the first device determines, according to the identification of the music application, whether the music application is installed. If the music application is installed, it is determined that the first device can perform the target operation; if not, it is determined that the first device cannot perform the target operation.
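This installed-application check can be sketched as below; it is an illustrative example under the assumption that the device exposes its installed applications as a set of identifiers (the names are hypothetical).

```python
def can_perform_target_operation(target_info, installed_apps):
    """Return True when the application named in the semantic-parsing
    result is installed, ie, the device can perform the target operation."""
    return target_info.get("app_id") in installed_apps

# Hypothetical installed-application list of the first device.
installed = {"com.example.music", "com.example.browser"}

ok = can_perform_target_operation({"app_id": "com.example.music"}, installed)      # True
missing = can_perform_target_operation({"app_id": "com.example.maps"}, installed)  # False
```

When the check fails, the device can skip enabling the first operation entirely rather than failing later at execution time.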
  • if the second device is used to perform semantic analysis on the information 404, and the second device has obtained a list of applications installed on the first device, a list of executable functions, and the like, the second device may determine, before sending the semantic-parsing result to the first device, whether the first device can perform the target operation according to that result. When it is determined that the first device can perform the target operation, the second device sends the result of semantic parsing to the first device.
  • the first device can perform the target operation only when the user identity verification is passed. Then, when the first device detects the first operation, it can obtain the user's biometric information by itself or through the second device, and then verify the user's identity based on the biometric information. For example, the first device collects the user's pulse information through a pulse sensor of the first device, and performs identity verification based on the pulse information. If the user identity verification is passed, the first device may perform the target operation; otherwise, the first device may not perform the target operation, or perform the target operation only after the user identity verification is passed.
  • the information processing method provided by the present application is introduced below under different application scenarios, and the method can be implemented based on the information processing system shown in FIG. 1 .
  • FIG. 7 exemplarily shows a schematic flowchart of an information processing method.
  • FIG. 7 takes, as an example, the case in which the second device receives the first information and performs the target operation. The method includes but is not limited to the following steps:
  • S1001 The second device receives the first information.
  • S1002 The second device sends the first information to the first device.
  • S1003 After receiving the first information, the first device outputs the first information.
  • the second device may receive the first information from the Internet or other devices, and then forward it to the first device for display.
  • the first information is, for example, but not limited to, instant messaging information, short messages, and the like.
  • the first information includes, for example, but not limited to, text, pictures, positioning information, files, and the like.
  • for the first information, please refer to the information 402 in FIG. 1 and FIG. 4 , the information 403 in FIG. 5 , and the information 404 in FIG. 6 .
  • the first device may, but is not limited to, display the first information through a display screen, and play the first information through a speaker, an earphone, or the like.
  • the first device or the second device may perform semantic analysis on the first information to recognize that the first information is used to instruct to perform the target operation.
  • the target operation is, for example, but not limited to, starting the first application, displaying the first interface, enabling or disabling the first function, and the like.
  • for the description of the semantic analysis of the first information by the second device, please refer to the embodiment shown in FIG. 8 below; for the description of the semantic analysis of the first information by the first device, refer to the embodiments described below with reference to FIG. 9 and the subsequent figures.
  • S1004 The first device receives the first operation.
  • the first operation is, for example, but not limited to, a touch operation (such as a click operation), a double-click operation, a long-press operation, a sliding operation, or the like acting on the display screen or keys.
  • for example, refer to the double-click operation, the long-press operation shown in FIG. 5 , and the touch operation acting on a specific control shown in FIG. 6 .
  • the first device sends a first notification to the second device in response to the first operation.
  • the first notification represents that the first device has received the first operation.
  • the first notification includes the indication information of the target operation (referred to as the information of the target operation, ie, the result of semantic parsing).
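As an illustrative sketch only (not part of the claimed method), the first notification could be serialized as a small structured payload that both signals receipt of the first operation and carries the target-operation indication information; every field name and identifier below is hypothetical.

```python
import json

# Hypothetical first-notification payload carrying the semantic-parsing result.
first_notification = {
    "type": "first_notification",
    "operation_received": True,       # the first device received the first operation
    "target_operation": {
        "app_id": "com.example.music",
        "action": "play_song",
        "song_name": "A",
    },
}

wire = json.dumps(first_notification)  # serialized and sent to the second device
received = json.loads(wire)            # decoded by the second device
```

Carrying the parsing result inside the notification means the second device can act immediately, without a separate round trip to fetch the target-operation information.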
  • FIG. 8 exemplarily shows a schematic flowchart of yet another information processing method.
  • FIG. 8 takes, as an example, the case in which the second device receives the first information, performs semantic analysis on the first information, and performs the target operation.
  • the method includes but is not limited to the following steps:
  • S101 The second device receives the first information.
  • S102 The second device sends the first information to the first device.
  • S101-S103 are the same as S1001-S1003 shown in FIG. 7 , and details are not repeated here.
  • S104 The second device performs semantic analysis on the first information to obtain information of the target operation.
  • the second device may perform semantic analysis on the first information to recognize that the first information is used to instruct the execution of the target operation, and obtain the information of the target operation represented by the first information (also referred to as the result of semantic analysis).
  • the information of the target operation may include at least one of the following items: an identifier of an application to be activated, an identifier of a user interface to be displayed, a parameter to be input, a function identifier, and the like.
  • the information of the target operation obtained by semantic analysis may include: an identifier of an application for viewing the file (eg, an identifier of a reading application).
  • the information of the target operation obtained by semantic analysis may include: the identification of the navigation application, the identification of the interface used to input the destination in the navigation application, and the name of the destination to be input (that is, the name of the location represented by the positioning information). For other examples, see the results of semantic parsing shown in FIG. 4 to FIG. 6 .
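A possible container for this target-operation information is sketched below. It is an illustrative structure only; the application identifier, interface identifier, and parameter names are hypothetical placeholders for whatever a real navigation application would define.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class TargetOperationInfo:
    """Illustrative container for the result of semantic analysis."""
    app_id: str                                 # identifier of the application to start
    interface_id: Optional[str] = None          # identifier of the user interface to display
    params: dict = field(default_factory=dict)  # parameters to be input

# Parsing positioning information for "Location A" might yield:
nav_op = TargetOperationInfo(
    app_id="com.example.maps",            # hypothetical navigation-app identifier
    interface_id="destination_input",     # hypothetical interface identifier
    params={"destination": "Location A"},
)
```

Keeping the interface identifier and parameters separate from the application identifier lets the executing device start the application, open the right page, and pre-fill the input in three distinct steps.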
  • the sequence of S102-S103 and S104 is not limited.
  • S105 The second device sends a second notification to the first device.
  • the second notification may be used to instruct the first device to send a notification to the second device when the first operation is received.
  • the second notification may also be used to instruct the first device to enable the ability to receive the first operation.
  • the second notification may also be used to indicate a specific type of the first operation; for example, the second notification is used to instruct the first device to enable the ability to receive a double-click operation.
  • for the second notification, reference may be made to the first indication information shown in FIG. 4 to FIG. 5 .
  • S106 The first device receives the first operation.
  • the first device sends a first notification to the second device.
  • in response to the second notification, when the first operation is received, the first device sends the first notification to the second device.
  • the first notification characterizes that the first device has received the first operation.
  • the method may further include: the second device determines whether the target operation can be performed according to the result of the semantic analysis. For example, the second device determines whether the above application to be activated is installed, or determines whether the list of executable functions includes the function corresponding to the above-mentioned function identifier. In the case that it is determined that the second device can perform the target operation, the second device then performs S105.
  • the second device may first obtain the user's biometric information, such as, but not limited to, obtaining the user's touch-screen behavior information through a pressure sensor and/or a touch sensor, obtaining the user's face information through a camera, obtaining the user's pulse information through a pulse sensor, and obtaining the user's heart rate information through a heart rate sensor. Then, the second device performs user identity verification based on the acquired biometric information. When the user identity verification is passed, the second device may perform S108; otherwise, the second device may not perform the target operation, or perform the target operation only after the user identity verification is passed.
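The verification gate described above can be sketched as a small wrapper; this is an illustrative example only, where `verify` and `perform` are hypothetical stand-ins for the device's actual biometric matcher (pulse, face, heart rate, ...) and its operation-execution routine.

```python
def execute_if_verified(target_op, sample, verify, perform):
    """Perform the target operation only when user identity verification
    based on the biometric sample passes.

    `verify` and `perform` are hypothetical placeholders for
    device-specific routines.
    """
    if verify(sample):
        return perform(target_op)
    return None  # verification failed: the target operation is not performed

performed = []
status = execute_if_verified(
    {"app_id": "com.example.music"},
    sample="enrolled-pulse-pattern",
    verify=lambda s: s == "enrolled-pulse-pattern",     # toy matcher
    perform=lambda op: performed.append(op) or "executed",
)
# status == "executed"; a non-matching sample would return None instead
```

The same gate applies unchanged whether the first device or the second device is the one executing, which is why the embodiments below reuse this pattern on both sides.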
  • FIG. 9 exemplarily shows a schematic flowchart of yet another information processing method.
  • FIG. 9 illustrates an example in which the first device is used to perform semantic analysis on the first information, and the second device is used to receive the first information and perform a target operation.
  • the method includes but is not limited to the following steps:
  • S201 The second device receives the first information.
  • S202 The second device sends the first information to the first device.
  • S201-S203 are the same as S1001-S1003 in FIG. 7 , and details are not repeated here.
  • S204 The first device performs semantic analysis on the first information to obtain information of the target operation.
  • S204 is similar to S104 in FIG. 8 , except that S104 in FIG. 8 is performed by the second device, and S204 in FIG. 9 is performed by the first device.
  • S205 The first device receives the first operation.
  • S206 The first device sends a third notification to the second device.
  • in response to the first operation, the first device may send a third notification to the second device.
  • the third notification may include the result of the semantic parsing (ie information of the target operation), and the third notification may also be used to instruct the second device to perform the target operation.
  • the third notification reference may be made to the third indication information shown in FIG. 4 .
  • the first device may first enable the ability to receive the first operation, so as to receive the first operation subsequently.
  • the method may further include: the second device determines whether the target operation can be performed according to the result of the semantic parsing in the third notification. In the case that it is determined that the second device can perform the target operation, the second device then performs S207.
  • the method may further include: the first device determines, according to the result of semantic analysis, whether the second device can perform the target operation. In the case that it is determined that the second device can perform the target operation, the first device then performs S205-S206.
  • the second device can perform the target operation only when the user identity verification is passed; for example, the second device is in a locked-screen state before S207. Therefore, the second device may first obtain the biometric information of the user, and then perform user identity verification based on the obtained biometric information. When the user identity verification is passed, the second device may perform S207; otherwise, the second device may not perform the target operation, or perform the target operation only after the user identity verification is passed.
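The FIG. 9 flow (first device parses and notifies; second device checks and executes) can be modeled end to end as a toy simulation. This is a sketch under stated assumptions only: the class names, the keyword-based "parser", and the application identifier are all hypothetical and stand in for the real devices' components.

```python
class SecondDevice:
    """Toy model of the second device; all identifiers are illustrative."""

    def __init__(self):
        self.installed_apps = {"com.example.music"}
        self.performed = []

    def on_third_notification(self, target_op, identity_ok=True):
        # S207: perform the target operation only when it is supported
        # and user identity verification passes.
        if target_op["app_id"] in self.installed_apps and identity_ok:
            self.performed.append(target_op)
            return "executed"
        return "rejected"


class FirstDevice:
    def __init__(self, peer):
        self.peer = peer
        self.target_op = None

    def on_first_information(self, text):
        # S204: semantic analysis (a trivial keyword match here).
        if "song" in text:
            self.target_op = {"app_id": "com.example.music",
                              "song": text.split()[-1]}

    def on_first_operation(self):
        # S205-S206: on the user's confirmation, send the third
        # notification carrying the semantic-parsing result.
        if self.target_op is None:
            return "no pending operation"
        return self.peer.on_third_notification(self.target_op)


second = SecondDevice()
first = FirstDevice(second)
first.on_first_information("I recommend you to listen to song A")
status = first.on_first_operation()  # "executed"
```

The simulation makes the division of labor concrete: the first device never executes anything itself; it only parses and forwards, while the gating logic lives entirely on the second device.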
  • FIG. 10 exemplarily shows a schematic flowchart of yet another information processing method.
  • FIG. 10 illustrates an example in which the first device is used to perform semantic analysis on the first information, and the second device is used to receive the first information and perform a target operation.
  • the method includes but is not limited to the following steps:
  • S301 The second device receives the first information.
  • S302 The second device sends the first information to the first device.
  • S304 The first device performs semantic analysis on the first information to obtain information of the target operation.
  • S301-S304 are consistent with S201-S204 in FIG. 9 , and details are not repeated here.
  • S305 The first device sends the information of the target operation (ie, the result of semantic analysis) to the second device.
  • S306 The second device sends a second notification to the first device.
  • S306 is the same as S105 in FIG. 8 , and details are not repeated here.
  • the second device may determine whether the target operation can be performed according to the result of the semantic analysis. In the case that it is determined that the target operation can be performed, the second device may send a second notification to the first device (ie, perform S306).
  • S307 The first device receives the first operation.
  • S308 The first device sends a first notification to the second device.
  • S307-S309 are consistent with S106-S108 in FIG. 8 , and details are not repeated here.
  • the method may further include: the first device determines, according to the result of semantic analysis, whether the second device can perform the target operation. In the case that it is determined that the second device can perform the target operation, the first device then performs S305.
  • the second device can perform the target operation only when the user identity verification is passed; for example, the second device is in a locked-screen state before S309. Therefore, the second device may first obtain the biometric information of the user, and then perform user identity verification based on the obtained biometric information. When the user identity verification is passed, the second device may perform S309; otherwise, the second device may not perform the target operation, or perform the target operation only after the user identity verification is passed.
  • the first information may also be received by the first device itself, for example, when the first device is a wearable device equipped with a SIM card, as shown in FIG. 11 to FIG. 13 below.
  • FIG. 11 exemplarily shows a schematic flowchart of yet another information processing method.
  • FIG. 11 takes, as an example, the case in which the first device receives the first information, and the second device performs semantic analysis on the first information and performs the target operation.
  • the method includes but is not limited to the following steps:
  • S401 The first device receives the first information.
  • S402 The first device outputs the first information.
  • S403 The first device sends the first information to the second device.
  • the first device may receive the first information from the Internet or other devices, and output the first information. Moreover, the first device may send the first information to the second device for semantic analysis. For the description of the first information and outputting the first information, reference may be made to the descriptions of S1001 to S1003 shown in FIG. 7 , which will not be repeated.
  • S404 The second device performs semantic analysis on the first information to obtain information of the target operation.
  • S404 is the same as S104 in FIG. 8 , and details are not repeated here.
  • S405 The second device sends a second notification to the first device.
  • S406 The first device receives the first operation.
  • the first device sends a first notification to the second device.
  • S405-S408 are the same as S105-S108 in FIG. 8 , and details are not repeated here.
  • the method may further include: the second device determines whether the target operation can be performed according to the result of the semantic analysis. In the case that it is determined that the second device can perform the target operation, the second device then performs S405.
  • the second device can perform the target operation only when the user identity verification is passed; for example, the second device is in a locked-screen state before S408. Therefore, the second device may first obtain the biometric information of the user, and then perform user identity verification based on the obtained biometric information. When the user identity verification is passed, the second device may perform S408; otherwise, the second device may not perform the target operation, or perform the target operation only after the user identity verification is passed.
  • FIG. 12 exemplarily shows a schematic flowchart of yet another information processing method.
  • FIG. 12 takes, as an example, the case in which the first device receives the first information and performs semantic analysis on the first information, and the second device performs the target operation.
  • the method includes but is not limited to the following steps:
  • S501 The first device receives the first information.
  • S502 The first device outputs the first information.
  • S501-S502 are the same as S401-S402 in FIG. 11 , and details are not repeated here.
  • S503 The first device performs semantic analysis on the first information to obtain information of the target operation.
  • S503 is similar to S104 in FIG. 8 , except that S104 in FIG. 8 is performed by the second device, and S503 in FIG. 12 is performed by the first device.
  • S504 The first device receives the first operation.
  • S505 The first device sends a third notification to the second device.
  • S504-S506 are the same as S205-S207 in FIG. 9, and are not repeated here.
  • the first device may first enable the ability to receive the first operation, so as to receive the first operation subsequently.
  • the method may further include: the second device determines whether the target operation can be performed according to the result of the semantic parsing in the third notification. In the case that it is determined that the second device can perform the target operation, the second device then performs S506.
  • the method further includes: the first device determines whether the second device can perform the target operation. In the case that it is determined that the second device can perform the target operation, the first device then performs S504-S505.
  • the second device can perform the target operation only when the user identity verification is passed; for example, the second device is in a locked-screen state before S506. Therefore, the second device may first acquire the biometric information of the user, and then perform user identity verification based on the acquired biometric information. When the user identity verification is passed, the second device may perform S506; otherwise, the second device may not perform the target operation, or perform the target operation only after the user identity verification is passed.
  • FIG. 13 exemplarily shows a schematic flowchart of yet another information processing method.
  • FIG. 13 takes, as an example, the case in which the first device receives the first information and performs semantic analysis on the first information, and the second device performs the target operation.
  • the method includes but is not limited to the following steps:
  • S601 The first device receives the first information.
  • S602 The first device outputs the first information.
  • S601-S602 are the same as S401-S402 in FIG. 11 , and details are not repeated here.
  • S603 The first device performs semantic analysis on the first information to obtain information of the target operation.
  • S603 is similar to S104 in FIG. 8 , except that S104 in FIG. 8 is performed by the second device, and S603 in FIG. 13 is performed by the first device.
  • S604 The first device sends the information of the target operation (ie, the result of semantic parsing) to the second device.
  • S605 The second device sends a second notification to the first device.
  • S606 The first device receives the first operation.
  • the first device sends a first notification to the second device.
  • S605-S608 are the same as S105-S108 in FIG. 8 , and details are not repeated here.
  • the second device may determine whether the target operation can be performed according to the information of the target operation. In the case that it is determined that the second device can perform the target operation, the second device may send the second notification to the first device (ie, perform S605).
  • the method further includes: the first device determines whether the second device can perform the target operation. In the case that it is determined that the second device can perform the target operation, the first device then performs S604.
  • the second device can perform the target operation only when the user identity verification is passed; for example, the second device is in a locked-screen state before S608. Therefore, the second device may first acquire the biometric information of the user, and then perform user identity verification based on the acquired biometric information. When the user identity verification is passed, the second device may perform S608; otherwise, the second device may not perform the target operation, or perform the target operation only after the user identity verification is passed.
  • the information processing method shown in FIGS. 7-13 may further include: the second device sends the execution status of the target operation to the first device.
  • the first device may display the execution status of the target operation for the user to view, and a specific example is shown in FIG. 5 above. That is to say, the user can obtain the execution status of the target operation through the first device without taking out the second device, which greatly facilitates the use of the user.
  • the second device can directly execute the target operation. That is, after acquiring the first information, the user can directly view the execution result of the target operation indicated to be executed by the first information through the first operation.
  • the execution result is, for example, the user interface 600 shown in FIG. 4 , the user interface 610 or the user interface 410 shown in FIG. 5 , and the user interface 420 shown in FIG. 6 . Therefore, there is no need for the user to manually perform the target operation, the effect of "one-step direct access" is realized, and the convenience of operation and the user experience are improved.
  • the first device may also perform the target operation, as shown in FIGS. 14-16 for details.
  • FIG. 14 exemplarily shows a schematic flowchart of yet another information processing method.
  • FIG. 14 takes the example that the first device is used to perform the target operation, and the second device is used to receive the first information and perform semantic analysis on the first information.
  • the method includes but is not limited to the following steps:
  • S701 The second device receives the first information.
  • S702 The second device sends the first information to the first device.
  • S704 The second device performs semantic analysis on the first information to obtain information of the target operation.
  • S701-S704 are the same as S101-S104 in FIG. 8 , and details are not repeated here. It should be noted that the sequence of S702-S703 and S704 is not limited.
  • S705 The second device sends the information of the target operation to the first device.
  • S706 The first device receives the first operation.
  • S707 The first device may perform the target operation according to the information of the target operation sent by the second device.
  • the first device may first enable the ability to receive the first operation so as to receive the first operation subsequently.
  • the method may further include: the first device determines whether the target operation can be performed according to the information of the above target operation. In the case that it is determined that the first device can perform the target operation, the first device then performs S707.
  • the method may further include: the second device determines, according to the information of the above target operation, whether the first device can perform the target operation. In the case that it is determined that the first device can perform the target operation, the second device then performs S705.
  • the first device may first obtain the biometric information of the user, and then perform user identity verification based on the obtained biometric information.
  • if the user identity verification is passed, the first device may perform S707; otherwise, the first device may not perform the target operation, or perform the target operation only after the user identity verification is passed.
  • FIG. 15 exemplarily shows a schematic flowchart of yet another information processing method.
  • FIG. 15 takes, as an example, the case in which the first device performs the target operation and the semantic analysis of the first information, and the second device receives the first information.
  • the method includes but is not limited to the following steps:
  • S801 The second device receives the first information.
  • S802 The second device sends the first information to the first device.
  • S801-S803 are the same as S101-S103 in FIG. 8 , and details are not repeated here.
  • S804 The first device performs semantic analysis on the first information to obtain information of the target operation.
  • S804 is similar to S104 in FIG. 8 , except that S104 in FIG. 8 is performed by the second device, and S804 in FIG. 15 is performed by the first device.
  • the sequence of S803 and S804 is not limited.
  • S805 The first device receives the first operation.
  • S806 The first device may perform the target operation according to the obtained information of the target operation.
  • the first device may first enable the ability to receive the first operation, so as to receive the first operation subsequently.
  • the method may further include: the first device determines whether the target operation can be performed according to the obtained information of the target operation. In the case that it is determined that the first device can perform the target operation, the first device then performs S806.
  • the first device may first obtain the biometric information of the user, and then perform user identity verification based on the obtained biometric information.
  • if the user identity verification is passed, the first device may perform S806; otherwise, the first device may not perform the target operation, or perform the target operation only after the user identity verification is passed.
  • FIG. 16 exemplarily shows a schematic flowchart of yet another information processing method.
  • FIG. 16 takes, as an example, the case in which the first device receives the first information and performs the target operation, and the second device performs semantic analysis on the first information.
  • the method includes but is not limited to the following steps:
  • S901 The first device receives the first information.
  • S902 The first device outputs the first information.
  • S903 The first device sends the first information to the second device.
  • S904 The second device performs semantic analysis on the first information to obtain information of the target operation.
  • S901-S904 are the same as S401-S404 in FIG. 11 , and details are not repeated here.
  • S905 The second device sends the information of the target operation to the first device.
  • S906 The first device receives the first operation.
  • S907 The first device may perform the target operation according to the information of the target operation sent by the second device.
  • the first device may first enable the ability to receive the first operation so as to receive the first operation subsequently.
  • the method may further include: the first device determines whether the target operation can be performed according to the information of the above target operation. In the case that it is determined that the first device can perform the target operation, the first device then performs S907.
  • the method may further include: the second device determines, according to the information of the above target operation, whether the first device can perform the target operation. In the case that it is determined that the first device can perform the target operation, the second device then performs S905.
  • the first device can perform the target operation only when the user identity verification is passed; for example, the first device is in a locked-screen state before S907. Therefore, the first device may first obtain the biometric information of the user, and then perform user identity verification based on the obtained biometric information. When the user identity verification is passed, the first device may perform S907; otherwise, the first device may not perform the target operation, or perform the target operation only after the user identity verification is passed.
  • In the methods shown in FIG. 14-FIG. 16, after receiving the first operation, the first device may directly execute the target operation. That is, after obtaining the first information, the user can directly view, through the first device, the execution result of the target operation that the first information indicates is to be performed.
  • The execution result is, for example, the user interface 600 shown in FIG. 4, the user interface 610 or the user interface 410 shown in FIG. 5, or the user interface 420 shown in FIG. 6. Therefore, the steps the user must perform manually on the first device are reduced, a "one-step direct access" effect is achieved, and both the convenience of operation and the user experience are improved.
  • The above first information may also identify the device used to perform the target operation; that is, the result of the above semantic analysis (i.e., the information of the target operation) includes information about the device used to perform the target operation, such as a device name. Therefore, after obtaining the result of the semantic analysis, the first device and the second device can determine from that result which device is to perform the target operation.
  • For example, the first information is the text: "You can open the shopping application on your mobile phone to see if there are suitable clothes."
  • In this case, the information of the target operation may include: the name of the device used to perform the target operation (i.e., the mobile phone), the identifier of the shopping application to be launched, the identifier of the shopping application's search page to be displayed, and the parameter to be input (i.e., clothes).
  • Assuming the first device is a smart bracelet and the second device is a smartphone, the first device and the second device can determine from the result of the semantic analysis that the device to perform the target operation is the second device.
  • The first device and the second device may also preset the device used to perform the target operation. For example, because the processing capability of the first device is relatively weak, the device preset by the first device and the second device to perform the target operation is the second device.
  • Alternatively, the first device and the second device may determine the device used to perform the target operation in response to a user-defined setting operation. For example, the user may set the second device as the device that conveniently acts on the information content, on pages such as the settings interface of the first device or the second device, or the user interface of an installed sports-health application.
  • In these cases, the second device performs the target operation by default; the specific implementations are shown in FIG. 7-FIG. 13 above.
  • Not limited to this, the first device and the second device may also preset the priority of the devices used to perform the target operation, or determine that priority in response to a user-defined setting operation. For example, if it is determined that, among the devices used to perform the target operation, the first device has a higher priority than the second device, then after the result of the semantic analysis is obtained, it is first determined whether the first device can perform the target operation.
  • When the first device can perform the target operation, the specific implementations are shown in FIG. 14-FIG. 16 above.
  • When the first device cannot perform the target operation, it is then determined whether the second device can perform the target operation; when the second device can perform it, the specific implementations are shown in FIG. 7-FIG. 13 above.
  • All or part of the above embodiments may be implemented by software, hardware, firmware, or any combination thereof.
  • When implemented in software, they may be realized in whole or in part in the form of a computer program product.
  • The computer program product includes one or more computer instructions.
  • When the computer program instructions are loaded and executed on a computer, the procedures or functions described in this application are produced in whole or in part.
  • The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus.
  • The computer instructions may be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from a website, computer, server, or data center to another website, computer, server, or data center by wired means (e.g., coaxial cable, optical fiber, digital subscriber line) or wireless means (e.g., infrared, radio, microwave).
  • The computer-readable storage medium may be any available medium accessible by a computer, or a data storage device, such as a server or data center, integrating one or more available media.
  • The available media may be magnetic media (e.g., floppy disks, hard disks, magnetic tapes), optical media, or semiconductor media (e.g., solid-state drives).
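The S901-S907 exchange summarized above can be sketched as cooperation between two objects. The following Python toy (all class, method, and field names are invented for illustration and are not part of the patent) shows one way the flow of FIG. 16 could be wired up, assuming the second device performs the semantic analysis and the first device executes the target operation:

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class TargetOperation:
    """Result of semantic analysis (S904): what the first information asks for."""
    device: str                                 # device expected to perform it
    app_id: str                                 # application to launch
    page_id: str = ""                           # user interface to display
    params: Dict[str, str] = field(default_factory=dict)  # parameters to input

class SecondDevice:
    def analyze(self, text: str) -> TargetOperation:
        # S904: stand-in for real semantic analysis of the first information.
        if "shopping" in text and "phone" in text:
            return TargetOperation(device="phone", app_id="shopping",
                                   page_id="search", params={"query": "clothes"})
        raise ValueError("no target operation recognized")

class FirstDevice:
    def __init__(self, peer: SecondDevice):
        self.peer = peer
        self.pending: Optional[TargetOperation] = None
        self.executed: List[TargetOperation] = []

    def receive_first_information(self, text: str) -> None:
        # S901/S902: receive and output the first information;
        # S903/S905: have the peer analyze it and store the result.
        print(text)
        self.pending = self.peer.analyze(text)

    def on_first_operation(self) -> None:
        # S906/S907: the first operation (e.g. a double tap) triggers
        # the stored target operation.
        if self.pending is not None:
            self.executed.append(self.pending)
            self.pending = None

first = FirstDevice(SecondDevice())
first.receive_first_information("open the shopping application on your phone")
first.on_first_operation()
```

A real implementation would replace the keyword match with proper natural-language understanding and carry the notifications over Bluetooth or Wi-Fi, but the division of roles is the same.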

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • General Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • General Business, Economics & Management (AREA)
  • Environmental & Geological Engineering (AREA)
  • Mathematical Physics (AREA)
  • Theoretical Computer Science (AREA)
  • Telephone Function (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

An embodiment of this application provides an information processing method applied to a system including a first device and a second device, where the first device and the second device are connected. The method includes: the second device receives first information and sends the first information to the first device, the first information being used to indicate that a target operation is to be performed; after receiving the first information, the first device outputs the first information; the first device receives a first operation; in response to the first operation, the first device sends a first notification to the second device; and after receiving the first notification, the second device performs the target operation. With the embodiments of this application, the target operation indicated by the first information can be performed quickly, which simplifies the steps of manual user operation and improves the convenience of operation.

Description

Information processing method, electronic device, and system
This application claims priority to Chinese Patent Application No. 202011534011.X, filed with the China National Intellectual Property Administration on December 21, 2020 and entitled "Information processing method, electronic device, and system", which is incorporated herein by reference in its entirety.
Technical Field
This application relates to the field of electronic device technologies, and in particular, to an information processing method, an electronic device, and a system.
Background
Wearable electronic devices such as smart bracelets can connect to mobile terminals such as mobile phones through communication means such as Bluetooth and wireless fidelity (Wi-Fi). After the connection, the user can quickly obtain, through the wearable electronic device, information received by the mobile terminal, such as incoming calls, short messages, and instant messages, without taking out the mobile terminal.
The user may perform a specific operation based on such information. For example, a smart bracelet may display a short message received by a mobile phone: "Navigate to place A; I'll wait for you here." Based on this message, the user may perform the following operations: take out and unlock the phone, launch the navigation application on the phone, enter the destination "place A" in the navigation application and search, and finally obtain route information. Such an operation process is rather cumbersome. Moreover, the user may also perform the specific operation on the wearable electronic device, which is usually small, so performing the operation (especially entering text) is even more difficult, inconvenient, and provides a poor experience.
Summary
Embodiments of this application disclose an information processing method, an electronic device, and a system, which can quickly perform a target operation indicated by first information, simplifying the steps of manual user operation and improving the convenience of operation.
According to a first aspect, an embodiment of this application provides an information processing method applied to a system including a first device and a second device, where the first device and the second device are connected. The method includes: the second device receives first information and sends the first information to the first device, the first information being used to indicate that a target operation is to be performed; after receiving the first information, the first device outputs the first information; the first device receives a first operation; in response to the first operation, the first device sends a first notification to the second device; and after receiving the first notification, the second device performs the target operation.
In this application, after the first device receives the first operation, the second device can directly perform the target operation indicated by the first information. That is, after obtaining the first information, the user can view the execution result of the target operation directly through the first operation, without performing the target operation manually, achieving a "one-step direct access" effect and improving the convenience of operation and the user experience.
在一种可能的实现方式中,该方法还包括:所述第二设备接收到所述第一信息后,识别到所述第一信息用于指示执行目标操作;所述第一设备接收第一操作之前,所述方法还包括:所述第二设备向所述第一设备发送第二通知;所述第一设备响应于所述第一操作,向所述第二设备发送第一通知,包括:所述第一设备接收到所述第二通知后,响应于接收到的所述第一操作,向所述第二设备发送所述第一通知。
本申请中,即使第一设备无法识别第一信息指示的内容,也可以根据第二设备的通知工作,让用户获取第一信息后直接通过第一操作查看目标操作的执行结果,而无需用户手动执 行目标操作。也就是说,性能较差的第一设备也可以用于实现本申请,应用场景更为广泛。
在一种可能的实现方式中,所述第一设备接收第一操作之前,该方法还包括:所述第一设备接收到所述第一信息后,识别到所述第一信息用于指示执行目标操作;所述第一设备向所述第二设备发送第三通知,所述第三通知包括所述目标操作的指示信息;所述第二设备接收到所述第三通知后,向所述第一设备发送第四通知;所述第一设备响应于所述第一操作,向所述第二设备发送第一通知,包括:所述第一设备接收到所述第四通知后,响应于接收到的所述第一操作,向所述第二设备发送所述第一通知。
在一种可能的实现方式中,所述第一设备接收第一操作之前,该方法还包括:所述第一设备接收到所述第一信息后,识别到所述第一信息用于指示执行目标操作;所述第一通知包括所述目标操作的指示信息。
本申请中,也可以通过第一设备识别第一信息指示的内容,实现方式较为灵活,应用场景更为广泛。
在一种可能的实现方式中,所述第二通知或所述第四通知用于指示所述第一设备开启接收所述第一操作的功能。
本申请中,第二设备可以指示第一设备开启接收特定的第一操作的功能,只有接收到特定的第一操作时,第一设备才会给第二设备发送第一通知,以使第二设备执行目标操作。避免了用户误触时第二设备也执行目标操作的情况,提升用户体验。
在一种可能的实现方式中,所述第一设备向所述第二设备发送第三通知,包括:所述第一设备确定所述第二设备能执行所述目标操作时,向所述第二设备发送第三通知。
在一种可能的实现方式中,所述第二设备接收到所述第三通知后,向所述第一设备发送第四通知,包括:所述第二设备接收到所述第三通知后,判断是否能执行所述目标操作;所述第二设备确定能执行所述目标操作时,向所述第一设备发送所述第四通知。
本申请中,在第二设备执行目标操作之前,第一设备或第二设备可以先判断第二设备是否能执行目标操作。确定第二设备能执行目标操作时,第一设备或第二设备才继续进行相关操作,避免了不必要的功耗,可用性更高。
在一种可能的实现方式中,所述第二设备接收到所述第一通知后,执行所述目标操作,包括:所述第二设备接收到所述第一通知后,进行用户身份的认证;当所述用户身份的认证通过时,所述第二设备执行所述目标操作。
本申请中,在第二设备执行目标操作之前,第二设备可以先进行用户身份的认证,当认证通过时,第二设备才执行目标操作,提高了安全性和可靠性。
在一种可能的实现方式中,所述第一设备为可穿戴电子设备,所述第二设备为移动终端设备。
在一种可能的实现方式中,所述目标操作包括以下至少一项:启动第一应用,显示第一界面,启动或关闭第一功能。
在一种可能的实现方式中,所述第一信息是即时通讯信息或短信息。
第二方面,本申请实施例提供了一种信息处理系统,该系统包括第一设备和第二设备,所述第一设备和所述第二设备连接,其中:所述第二设备,用于接收第一信息,并向所述第一设备发送所述第一信息;所述第一信息用于指示执行目标操作;所述第一设备,用于接收到所述第一信息后,输出所述第一信息;接收第一操作;响应于所述第一操作,向所述第二设备发送第一通知;所述第二设备,用于接收到所述第一通知后,执行所述目标操作。
本申请中,第一设备接收第一操作后,第二设备可以直接执行第一信息指示的目标操作。也就是说,用户获取第一信息后,可以直接通过第一操作查看目标操作的执行结果,而无需用户手动执行目标操作,实现了“一步直达”的效果,提升了操作的便利性和用户体验感。
在一种可能的实现方式中,所述第二设备还用于:接收到所述第一信息后,识别到所述第一信息用于指示执行目标操作;所述第一设备接收第一操作之前,所述第二设备还用于:向所述第一设备发送第二通知;所述第一设备响应于所述第一操作,向所述第二设备发送第一通知时,所述第一设备具体用于:接收到所述第二通知后,响应于接收到的所述第一操作,向所述第二设备发送所述第一通知。
本申请中,即使第一设备无法识别第一信息指示的内容,也可以根据第二设备的通知工作,让用户获取第一信息后直接通过第一操作查看目标操作的执行结果,而无需用户手动执行目标操作。也就是说,性能较差的第一设备也可以用于实现本申请,应用场景更为广泛。
在一种可能的实现方式中,所述第一设备接收第一操作之前,所述第一设备还用于:接收到所述第一信息后,识别到所述第一信息用于指示执行目标操作;向所述第二设备发送第三通知,所述第三通知包括所述目标操作的指示信息;所述第二设备还用于:接收到所述第三通知后,向所述第一设备发送第四通知;所述第一设备响应于所述第一操作,向所述第二设备发送第一通知时,所述第一设备具体用于:接收到所述第四通知后,响应于接收到的所述第一操作,向所述第二设备发送所述第一通知。
在一种可能的实现方式中,所述第一设备接收第一操作之前,所述第一设备还用于:接收到所述第一信息后,识别到所述第一信息用于指示执行目标操作;所述第一通知包括所述目标操作的指示信息。
本申请中,也可以通过第一设备识别第一信息指示的内容,实现方式较为灵活,应用场景更为广泛。
在一种可能的实现方式中,所述第二通知或所述第四通知用于指示所述第一设备开启接收所述第一操作的功能。
本申请中,第二设备可以指示第一设备开启接收特定的第一操作的功能,只有接收到特定的第一操作时,第一设备才会给第二设备发送第一通知,以使第二设备执行目标操作。避免了用户误触时第二设备也执行目标操作的情况,提升用户体验。
在一种可能的实现方式中,所述第一设备向所述第二设备发送第三通知时,所述第一设备具体用于:确定所述第二设备能执行所述目标操作时,向所述第二设备发送第三通知。
在一种可能的实现方式中,所述第二设备接收到所述第三通知后,向所述第一设备发送第四通知时,所述第二设备具体用于:接收到所述第三通知后,判断是否能执行所述目标操作;确定能执行所述目标操作时,向所述第一设备发送所述第四通知。
本申请中,在第二设备执行目标操作之前,第一设备或第二设备可以先判断第二设备是否能执行目标操作。确定第二设备能执行目标操作时,第一设备或第二设备才继续进行相关操作,避免了不必要的功耗,可用性更高。
在一种可能的实现方式中,所述第二设备接收到所述第一通知后,执行所述目标操作时,所述第二设备具体用于:接收到所述第一通知后,进行用户身份的认证;当所述用户身份的认证通过时,所述第二设备执行所述目标操作。
本申请中,在第二设备执行目标操作之前,第二设备可以先进行用户身份的认证,当认证通过时,第二设备才执行目标操作,提高了安全性和可靠性。
在一种可能的实现方式中,所述第一设备为可穿戴电子设备,所述第二设备为移动终端 设备。
在一种可能的实现方式中,所述目标操作包括以下至少一项:启动第一应用,显示第一界面,启动或关闭第一功能。
在一种可能的实现方式中,所述第一信息是即时通讯信息或短信息。
第三方面,本申请实施例提供了一种电子设备,上述电子设备包括至少一个存储器、至少一个处理器,上述至少一个存储器与上述至少一个处理器耦合,上述至少一个存储器用于存储计算机程序,上述至少一个处理器用于调用上述计算机程序,上述计算机程序包括指令,当上述指令被上述至少一个处理器执行时,使得上述电子设备执行本申请实施例中第一方面、第一方面的任意一种实现方式提供的信息处理方法。
第四方面,本申请实施例提供了一种计算机存储介质,包括计算机指令,当上述计算机指令在电子设备上运行时,使得上述电子设备执行本申请实施例中第一方面、第一方面的任意一种实现方式提供的信息处理方法。
第五方面,本申请实施例提供了一种计算机程序产品,当该计算机程序产品在电子设备上运行时,使得该电子设备执行本申请实施例中第一方面、第一方面的任意一种实现方式提供的信息处理方法。
第六方面,本申请实施例提供了一种芯片,上述芯片包括至少一个处理器、接口电路、存储器,上述存储器、上述接口电路和上述至少一个处理器通过线路互联,上述存储器中存储有计算机程序,上述计算机程序被上述至少一个处理器执行时实现本申请实施例中第一方面、第一方面的任意一种实现方式提供的信息处理方法。
可以理解地,上述第三方面提供的电子设备、上述第四方面提供的计算机存储介质、第五方面提供的计算机程序产品以及第六方面提供的芯片均用于执行第一方面、第一方面的任意一种实现方式提供的信息处理方法。因此,其所能达到的有益效果可参考第一方面所提供的信息处理方法中的有益效果,不再赘述。
附图说明
以下对本申请实施例用到的附图进行介绍。
图1是本申请实施例提供的一种信息处理系统的架构示意图;
图2是本申请实施例提供的一种电子设备的硬件结构示意图;
图3是本申请实施例提供的一种电子设备的软件架构示意图;
图4-图6是本申请实施例提供的一些人机交互示意图;
图7-图16是本申请实施例提供的一些信息处理方法的流程示意图。
具体实施方式
本申请以下实施例中所使用的术语只是为了描述特定实施例的目的,而并非旨在作为对本申请的限制。如在本申请的说明书和所附权利要求书中所使用的那样,单数表达形式“一个”、“一种”、“所述”、“上述”、“该”和“这一”旨在也包括复数表达形式,除非其上下文中明确地有相反指示。还应当理解,本申请中使用的术语“和/或”是指并包含一个或多个所列出项目的任何或所有可能组合。
请参见图1,图1是本申请实施例提供的一种信息处理系统的架构示意图。该信息处理系统可以包括第一设备200和第二设备300。第一设备200和第二设备300可以通过有线或 无线方式直接连接和通信。不限于此,第一设备200和第二设备300也可以通过有线或无线方式连接至互联网,并通过互联网进行通信。其中,有线方式例如包括通用串行总线(universal serial bus,USB)、双绞线、同轴电缆和光纤等,无线方式例如包括无线保真(wireless fidelity,Wi-Fi)、蓝牙、蜂窝通信等。互联网例如包括至少一个服务器,服务器可以是硬件服务器,也可以是云端服务器。
如图1所示,第一设备200可以显示用户界面400,其中用户界面400可以包括文字401和信息402,文字401为发送信息402的用户的昵称。信息402可以是第一设备200从互联网、其他设备处接收的即时通讯信息或短信息,也可以是第二设备300从互联网、其他设备处接收后转发给第一设备200的即时通讯信息或短信息。信息402为文字:“你直接打开应用商城,搜索浏览器看看呢”。
如图1所示,第二设备300可以显示用户界面500,用户界面500可以包括状态栏510、应用程序图标520和切换页面选项530,其中:
状态栏510中可以包括接入的移动网络名称，WI-FI图标、信号强度和当前剩余电量。其中，接入的移动网络是信号格数为4格（即信号强度最好）的第五代移动通信技术（5th generation mobile networks，5G）网络。
应用程序图标520可以包括例如图库的图标521、音乐的图标522、应用商城的图标523、天气的图标524、导航的图标525、相机的图标526、电话的图标527、通讯录的图标528、短信息的图标529等,还可以包含其他应用的图标,本申请实施例对此不作限定。任一个应用的图标可用于响应用户的操作,例如触摸操作,使得第二设备300启动图标对应的应用。
切换页面选项530可以包括第一页面选项531、第二页面选项532和第三页面选项533。用户界面500中第一页面选项531为选中状态,表示用户界面500是第二设备300显示的桌面的第一界面。第二设备300可以接收用户作用于用户界面500的空白区域或切换页面选项530的滑动操作(例如,从右往左滑动),响应于该滑动操作,第二设备300可以切换显示界面为桌面的第二界面或第三界面。第二设备300显示桌面的第二界面时,第二页面选项532为选中状态,第二设备300显示桌面的第三界面时,第三页面选项533为选中状态。
用户可以通过第一设备200查看信息402,然后根据信息402在第二设备300上执行以下操作:点击用户界面500中应用商城的图标523以启动应用商城,在应用商城的用户界面中点击搜索栏(例如下图4的(C)所示的用户界面600中的搜索栏601),在搜索栏中输入浏览器并触发搜索(例如点击下图4的(C)所示的用户界面600中的控件602),最后得到对应的搜索结果(例如下图4的(C)所示的用户界面600中的列表603)。但这样的操作较为繁琐,用户使用起来不方便。
本申请提供了一种信息处理方法,可以应用于图1所示的信息处理系统。第一设备200显示的第一信息可以被识别为用于指示执行目标操作。第一设备200可以接收第一操作(例如双击操作、长按操作、作用于特定控件的触摸操作等),响应于第一操作,第一设备200或第二设备300可以执行目标操作。目标操作例如包括启动第一应用,显示第一界面,启动或关闭第一功能等。本申请可以在接收到第一操作的情况下直接执行目标操作,简化了用户手动操作的步骤,提升了操作的便利性和用户体验感。
本申请涉及的电子设备可以是智慧屏、智能电视、手机、平板电脑、桌面型、膝上型、笔记本电脑、超级移动个人计算机(Ultra-mobile Personal Computer,UMPC)、手持计算机、上网本、个人数字助理(Personal Digital Assistant,PDA)、可穿戴电子设备(如智能手环、智能眼镜)等设备。第一设备200和第二设备300可以是上述列举的任意一种设备,其中, 第一设备200和第二设备300可以相同,也可以不同。例如,第一设备200为智能手表,第二设备300为手机。
接下来介绍本申请实施例中提供的示例性的电子设备。
请参见图2,图2示例性示出了一种电子设备100的结构示意图。电子设备100可以是图1所示的信息处理系统中的第一设备200或第二设备300。电子设备100可以包括处理器1010、存储器1020和收发器1030,处理器1010、存储器1020和收发器1030通过总线相互连接。
处理器1010可以是一个或多个中央处理器(central processing unit,CPU),在处理器1010是一个CPU的情况下,该CPU可以是单核CPU,也可以是多核CPU。存储器1020包括但不限于是随机存储记忆体(random access memory,RAM)、只读存储器(read-only memory,ROM)、可擦除可编程只读存储器(erasable programmable read only memory,EPROM)、或便携式只读存储器(compact disc read-only memory,CD-ROM),存储器1020用于存储相关计算机程序及数据。
收发器1030用于接收和发送数据。在一些实施例中,收发器1030可以提供应用在电子设备100上的包括2G/3G/4G/5G等无线通信的解决方案。在一些实施例中,收发器1030可以提供应用在电子设备100上的包括无线局域网(wireless local area networks,WLAN)(如无线保真(wireless fidelity,Wi-Fi)网络),蓝牙(bluetooth,BT),全球导航卫星系统(global navigation satellite system,GNSS),调频(frequency modulation,FM),近距离无线通信技术(near field communication,NFC),红外技术(infrared,IR)等无线通信的解决方案。电子设备100可以通过收发器1030采用无线通信技术与网络以及其他设备通信。无线通信技术可以包括全球移动通讯系统(global system for mobile communications,GSM),通用分组无线服务(general packet radio service,GPRS),码分多址接入(code division multiple access,CDMA),宽带码分多址(wideband code division multiple access,WCDMA),时分码分多址(time-division code division multiple access,TD-SCDMA),长期演进(long term evolution,LTE),BT,GNSS,WLAN,NFC,FM,和/或IR技术等。所述GNSS可以包括全球卫星定位系统(global positioning system,GPS),全球导航卫星系统(global navigation satellite system,GLONASS),北斗卫星导航系统(beidou navigation satellite system,BDS),准天顶卫星系统(quasi-zenith satellite system,QZSS)和/或星基增强系统(satellite based augmentation systems,SBAS)。
示例性地,电子设备100为第二设备,第一设备为可穿戴设备。第二设备可以通过收发器1030采集第一设备的连接状态,即第一设备是否和第二设备连接。在第一设备和第二设备连接的情况下,第二设备还可以通过收发器1030接收第一设备发送的用户的生物特征信息(如脉搏信息、心率信息)。处理器1010可以根据上述生物特征信息判断用户是否佩戴第一设备。假设第一设备被第二设备识别为可信的设备(例如已连接过的可穿戴设备、已通过密码验证的设备等),则当第二设备确定已连接第一设备和/或用户已佩戴第一设备的情况下,电子设备100可以确定用户的身份验证通过。
在一些实施例中,电子设备100还可以包括扬声器,也称“喇叭”,用于将音频电信号转换为声音信号。电子设备100可以通过扬声器播放音乐,或播放第一信息。
在一些实施例中,电子设备100还可以包括耳机接口,耳机接口用于连接有线耳机。耳机接口可以是USB接口,也可以是3.5mm的开放移动电子设备平台(open mobile terminal platform,OMTP)标准接口,美国蜂窝电信工业协会(cellular telecommunications industry  association of the USA,CTIA)标准接口。在一些实施例中,电子设备100可以通过收发器1030连接无线耳机,例如蓝牙耳机。电子设备100可以通过有线耳机或无线耳机播放音乐,或播放第一信息。
在一些实施例中,电子设备100还可以包括显示屏。显示屏用于显示图像、视频、文字等。显示屏包括显示面板。显示面板可以采用液晶显示屏(liquid crystal display,LCD),有机发光二极管(organic light-emitting diode,OLED),有源矩阵有机发光二极体或主动矩阵有机发光二极体(active-matrix organic light emitting diode的,AMOLED),柔性发光二极管(flex light-emitting diode,FLED),Miniled,MicroLed,Micro-oLed,量子点发光二极管(quantum dot light emitting diodes,QLED)等。可选地,电子设备100可以包括1个或N个显示屏,N为大于1的正整数。
在一些实施例中,电子设备100还可以包括一个或多个传感器,例如但不限于触摸传感器、压力传感器、脉搏传感器、心率传感器等。其中:
压力传感器用于感受压力信号,可以将压力信号转换成电信号。压力传感器的种类很多,如电阻式压力传感器,电感式压力传感器,电容式压力传感器等。可选地,压力传感器可以设置于显示屏中。触摸传感器,也称“触控器件”。可选地,触摸传感器可以设置于显示屏,由触摸传感器与显示屏组成触摸屏,也称“触控屏”。当有触摸操作作用于显示屏时,电子设备100可以通过压力传感器和/或触摸传感器检测该触摸操作的强度、位置等,并将检测到的触摸操作传递给处理器1010,以确定触摸事件的类型。可选地,电子设备100还可以通过显示屏提供与触摸操作相关的视觉输出。不限于此,压力传感器和/或触摸传感器也可以设置于电子设备100的表面,与显示屏所处的位置不同。电子设备100可以通过压力传感器和/或触摸传感器采集用户的触屏行为信息(例如触摸区域的位置、面积,发生时间戳,触摸次数、压力大小)。
脉搏传感器可以检测脉搏信号。在一些实施例中,脉搏传感器可以检测动脉搏动时产生的压力变化,并将其转换为电信号。脉搏传感器的种类很多,例如压电式脉搏传感器、压阻式脉搏传感器、光电式脉搏传感器等。其中,压电式脉搏传感器和压阻式脉搏传感器可以通过微压力型的材料(如压电片、电桥等)将脉搏跳动的压力过程转换为信号输出。光电式脉搏传感器可以通过反射或透射等方式,将血管在脉搏跳动过程中透光率的变化转换为信号输出,即通过光电体积描记法(photoplethysmogram,PPG)获取脉搏信号。电子设备100可以通过脉搏传感器采集用户的脉搏信息。
心率传感器可以检测心率信号。在一些实施例中,心率传感器可以通过光电体积描记法(photoplethysmogram,PPG)获取心率信号。心率传感器可以通过反射或透射等方式,将血管动力发生的变化,例如血脉搏率(心率)或血容积(心输出量)发生的变化,转换为信号输出。在一些实施例中,心率传感器可以通过连接到人体皮肤上的电极来测量心脏组织中所引发电气活动的信号,即通过心电描记法(electrocardiography,ECG)获取心率信号。电子设备100可以通过心率传感器采集用户的心率信息。
在一些实施例中,电子设备100可以包括1个或N个摄像头,N为大于1的正整数。摄像头用于捕获静态图像或视频。可选地,这N个摄像头可以是前置摄像头、后置摄像头、升降式摄像头、可拆卸式摄像头等,本申请实施例对N个摄像头和电子设备100的连接方式以及机械机构没有限定。本申请中,电子设备100可以通过摄像头获取用户的人脸信息,并基于该人脸信息实现人脸解锁、访问应用锁等功能。
示例性地,电子设备100为第一设备。电子设备100可以通过显示屏显示第一信息。电 子设备100可以通过压力传感器和/或触摸传感器检测第一操作(例如双击操作、长按操作、作用于特定控件的触摸操作等)。响应于该第一操作,处理器1010可以执行目标操作,其中第一信息可以被识别为用于指示执行目标操作,目标操作例如启动第一应用、显示第一界面、启动或关闭第一功能等。或者,响应于该第一操作,电子设备100可以通过收发器1030向第二设备发送通知,以使第二设备执行目标操作。
在一些实施例中,电子设备100用于执行目标操作,并且目标操作是在用户身份验证通过的情况下才能被执行的。则当电子设备100检测到上述第一操作时,响应于该第一操作,电子设备100可以先采集用户的生物特征信息。例如,电子设备100通过收发器1030采集可穿戴设备的连接状态和/或佩戴状态,通过压力传感器和/或触摸传感器采集用户的触屏行为信息,通过脉搏传感器采集用户的脉搏信息,通过心率传感器采集用户的心率信息,通过摄像头采集用户的人脸信息等。然后,处理器1010基于采集的生物特征信息进行用户身份的验证。验证通过时,处理器1010执行目标操作,验证不通过时,处理器1010暂时不执行目标操作,直到验证通过时再执行目标操作。
在一些实施例中,电子设备100可以包括SIM卡接口,SIM卡接口用于连接SIM卡。SIM卡可以通过插入SIM卡接口,或从SIM卡接口拔出,实现和电子设备100的接触和分离。电子设备100可以支持1个或N个SIM卡接口,N为大于1的正整数。SIM卡接口可以支持Nano SIM卡,Micro SIM卡,SIM卡等。同一个SIM卡接口可以同时插入多张卡。这多张卡的类型可以相同,也可以不同。SIM卡接口也可以兼容不同类型的SIM卡。SIM卡接口也可以兼容外部存储卡。电子设备100通过SIM卡和网络交互,实现通话以及数据通信等功能。在一些实施例中,电子设备100采用eSIM,即:嵌入式SIM卡。eSIM卡可以嵌在电子设备100中,不能和电子设备100分离。
在一些实施例中,电子设备100为第一设备。可选地,电子设备100仅可以通过收发器1030和连接的第二设备通信,例如电子设备100为未连接SIM卡、未连接互联网的可穿戴设备。可选地,电子设备100不仅可以通过收发器和连接的第二设备通信,而且可以和互联网、其他设备通信。例如电子设备100为已连接SIM卡、已连接互联网的可穿戴设备或终端。
电子设备100中的处理器1010可以用于读取存储器1020中存储的计算机程序及数据,执行图7-图15所示的信息处理方法。
电子设备100的软件系统可以采用分层架构,事件驱动架构,微核架构,微服务架构,或云架构。本申请实施例以分层架构的Android系统为例,示例性说明电子设备100的软件结构。
图3是本发明实施例的电子设备100的软件结构框图。
分层架构将软件分成若干个层，每一层都有清晰的角色和分工。层与层之间通过软件接口通信。在一些实施例中，将Android系统分为四层，从上至下分别为应用程序层，应用程序框架层，安卓运行时（Android runtime）和系统库，以及内核层。在本申请中，图3所示的软件框架仅仅是一个示例，电子设备100的系统还可以是其他操作系统，诸如华为移动服务（huawei mobile services，HMS）等。
应用程序层可以包括一系列应用程序包。
如图3所示,应用程序包可以包括相机,图库,导航,音乐,通话,短信息,日历,蓝牙,应用商城等应用程序。
应用程序框架层为应用程序层的应用程序提供应用编程接口(application programming  interface,API)和编程框架。应用程序框架层包括一些预先定义的函数。
如图3所示,应用程序框架层可以包括窗口管理器,内容提供器,视图系统,电话管理器,资源管理器,通知管理器等。
窗口管理器用于管理窗口程序。窗口管理器可以获取显示屏大小,判断是否有状态栏,锁定屏幕,截取屏幕等。
内容提供器用来存放和获取数据,并使这些数据可以被应用程序访问。所述数据可以包括视频,图像,音频,拨打和接听的电话,浏览历史和书签,电话簿等。
视图系统包括可视控件,例如显示文字的控件,显示图片的控件等。视图系统可用于构建应用程序。显示界面可以由一个或多个视图组成的。例如,包括短信通知图标的显示界面,可以包括显示文字的视图以及显示图片的视图。
电话管理器用于提供电子设备100的通信功能。例如通话状态的管理(包括接通,挂断等)。
资源管理器为应用程序提供各种资源,比如本地化字符串,图标,图片,布局文件,视频文件等等。
通知管理器使应用程序可以在状态栏中显示通知信息,可以用于传达告知类型的信息,可以短暂停留后自动消失,无需用户交互。比如通知管理器被用于告知下载完成,信息提醒等。通知管理器还可以是以图表或者滚动条文本形式出现在系统顶部状态栏的通知,例如后台运行的应用程序的通知,还可以是以对话窗口形式出现在屏幕上的通知。例如在状态栏提示文本信息,发出提示音,电子设备振动,指示灯闪烁等。
Android Runtime包括核心库和虚拟机。Android runtime负责安卓系统的调度和管理。
核心库包含两部分:一部分是java语言需要调用的功能函数,另一部分是安卓的核心库。
应用程序层和应用程序框架层运行在虚拟机中。虚拟机将应用程序层和应用程序框架层的java文件执行为二进制文件。虚拟机用于执行对象生命周期的管理,堆栈管理,线程管理,安全和异常的管理,以及垃圾回收等功能。
系统库可以包括多个功能模块。例如:表面管理器(surface manager),媒体库(Media Libraries),三维图形处理库(例如:OpenGL ES),2D图形引擎(例如:SGL)等。
表面管理器用于对显示子系统进行管理,并且为多个应用程序提供了2D和3D图层的融合。
媒体库支持多种常用的音频,视频格式回放和录制,以及静态图像文件等。媒体库可以支持多种音视频编码格式,例如:MPEG4,H.264,MP3,AAC,AMR,JPG,PNG等。
三维图形处理库用于实现三维图形绘图,图像渲染,合成,和图层处理等。
2D图形引擎是2D绘图的绘图引擎。
内核层是硬件和软件之间的层。内核层至少包含显示驱动,摄像头驱动,音频驱动,传感器驱动。其中,显示驱动可以用于驱动控制硬件中的显示屏,例如图2所示的显示屏。摄像头驱动可以用于驱动控制硬件中的摄像头,例如图2所示的摄像头。传感器驱动可以用于驱动控制硬件中的多个传感器,例如图2所示的压力传感器、触摸传感器等传感器。
下面结合触摸查看信息的场景,示例性说明电子设备100软件以及硬件的工作流程。
当触摸传感器接收到触摸操作,相应的硬件中断被发给内核层。内核层将触摸操作加工成原始输入事件(包括触摸坐标,触摸操作的时间戳等信息)。原始输入事件被存储在内核层。应用程序框架层从内核层获取原始输入事件,识别该输入事件所对应的控件。以该触摸操作是触摸单击操作,该单击操作所对应的控件为短信息的提示框为例,短信息调用应用框架层的接口,进而调用内核层控制显示驱动,通过显示屏显示提示框所对应的详细信息。其中, 在本申请中,图2和图3所示的硬件和软件架构适用于第一设备,也可以适用于第二设备。
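The touch pipeline just described (hardware interrupt → kernel-layer raw input event with coordinates and timestamp → framework layer hit-testing which control was touched) can be illustrated with a toy dispatcher. This is only a sketch of the idea, not Android's actual implementation, and every name below is invented:

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class RawInputEvent:
    """Produced by the kernel layer from a touch interrupt."""
    x: int
    y: int
    timestamp_ms: int

@dataclass
class Control:
    """A view registered with the framework layer."""
    name: str
    rect: Tuple[int, int, int, int]   # left, top, right, bottom
    on_tap: Callable[[], str]

    def contains(self, x: int, y: int) -> bool:
        left, top, right, bottom = self.rect
        return left <= x < right and top <= y < bottom

def dispatch(event: RawInputEvent, controls: List[Control]) -> str:
    # Framework layer: hit-test the registered controls and invoke the
    # handler of the one containing the touch coordinates.
    for control in controls:
        if control.contains(event.x, event.y):
            return control.on_tap()   # e.g. the SMS app shows message detail
    return "no control hit"

controls = [Control("sms_banner", (0, 0, 100, 40),
                    lambda: "show message detail")]
result = dispatch(RawInputEvent(x=50, y=20, timestamp_ms=1000), controls)
```

In the real system the raw event is queued in the kernel and read by the framework, and the handler ultimately drives the display driver; the hit-test-then-dispatch step is the part modeled here.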
下面介绍本申请实施例涉及的应用场景以及该场景下的人机交互示意图,其中以第一设备为智能手表,第二设备为智能手机为例进行说明。
请参见图4,图4示例性示出一种人机交互示意图。
如图4的(A)所示,第一设备显示用户界面400,用户界面400可以包括文字401和信息402,文字401为发送信息402的用户的昵称。信息402可以是第二设备接收后转发给第一设备显示的。信息402为文字:“你直接打开应用商城,搜索浏览器看看呢”。第二设备可以对信息402进行语义解析,以识别出信息402用于指示执行目标操作,并得到语义解析的结果(即信息402所表征的目标操作的信息)。语义解析的结果可以包括需启动的应用程序的标识、需显示的应用程序的用户界面的标识、需输入的参数等。然后,第二设备可以向第一设备发送第一指示信息,第一指示信息用于指示第一设备接收到第一操作时向第二设备发送第二指示信息。第一操作例如但不限于为作用于第一设备显示屏的双击操作、下图5所示的长按操作、下图6所示的作用于特定控件的触摸操作等。
如图4的(B)所示,第一设备接收到第二设备发送的第一指示信息后,检测到作用于显示屏的双击操作,响应于该双击操作,第一设备可以向第二设备发送第二指示信息。第二指示信息表征第一设备已接收到第一操作。
如图4的(C)所示,响应于第二指示信息,第二设备根据上述语义解析的结果执行目标操作,并显示用户界面600。信息402所表征的目标操作的信息可以具体包括:应用商城的标识、应用商城的搜索页面的标识、需输入的参数“浏览器”。则第二设备执行的目标操作包括:启动标识为上述应用商城的标识的应用,显示标识为上述搜索页面的标识的用户界面600,在用户界面600的搜索栏601中输入上述需输入的参数“浏览器”,进行搜索,最后得到搜索结果:列表603。其中,用户手动“进行搜索”时,可以点击用户界面600中的控件602。
在一些实施例中,信息402也可以是第一设备从互联网、其他设备处接收的即时通讯信息或短信息,则第一设备接收到信息402后可以转发给第二设备,以使第二设备对信息402进行语义解析。
在一些实施例中,也可以是第一设备对信息402进行语义解析。第一设备得到语义解析的结果后,可以将语义解析的结果直接发送给第二设备,然后第二设备再根据语义解析的结果向第一设备发送第一指示信息。或者,第一设备得到语义解析的结果后,若检测到第一操作,则第一设备可以向第二设备发送第三指示信息。第三指示信息包括语义解析的结果,并用于指示第二设备执行目标操作。可选地,若第一设备接收第一操作的能力为关闭状态,则第一设备得到语义解析的结果后,可以先开启接收第一操作的能力。
在一些实施例中,若第一设备接收第一操作的能力默认为关闭状态,第一指示信息还用于指示第一设备开启接收第一操作的能力。可选地,第一指示信息还用于指示第一操作的具体类型,例如第一信息用于指示第一设备开启接收双击操作的能力。
在一些实施例中,执行目标操作之前(例如发送第一指示信息之前),第二设备可以先根据语义解析的结果判断是否能执行目标操作。例如,第二设备根据应用商城的标识判断是否安装应用商城,若已安装应用商城,则确定第二设备能够执行目标操作,若未安装应用商城,则确定第二设备无法执行目标操作。
在一些实施例中,若第一设备获取到了第二设备安装的应用列表、可执行的功能列表等, 则第一设备向第二设备发送第三指示信息之前,可以先根据语义解析的结果判断第二设备是否能执行目标操作。当确定第二设备能够执行目标操作时,第一设备再向第二设备发送第三指示信息。
在一些实施例中,假设用户身份验证通过的情况下第二设备才能执行目标操作,例如执行目标操作之前第二设备为锁屏状态。则第二设备接收到第二指示信息或第二指示信息时,可以自行获取或通过第一设备获取用户的生物特征信息,然后基于该生物特征信息进行用户身份的验证。例如,第二设备通过第一设备的摄像头采集用户的人脸信息,并对该人脸信息进行身份验证。若用户身份验证通过,则第二设备可以执行目标操作,否则,第二设备可以不执行目标操作或者直到用户身份验证通过再执行目标操作。
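The identity-verification gate described above (perform the target operation only when biometric verification passes, otherwise defer it until verification succeeds) can be sketched as follows; the matcher and all names are hypothetical placeholders for a real face/pulse/heart-rate verifier:

```python
from typing import Callable, List

class VerificationGate:
    """Run an operation only after user identity verification succeeds."""

    def __init__(self, verify: Callable[[bytes], bool]):
        self.verify = verify                       # biometric matcher stand-in
        self.deferred: List[Callable[[], None]] = []

    def request(self, operation: Callable[[], None], biometric: bytes) -> bool:
        if self.verify(biometric):
            operation()                            # verified: perform at once
            return True
        self.deferred.append(operation)            # otherwise defer, don't run
        return False

    def on_verified(self) -> None:
        # A later successful verification releases the deferred operations.
        for op in self.deferred:
            op()
        self.deferred.clear()

log: List[str] = []
gate = VerificationGate(verify=lambda sample: sample == b"enrolled-face")
gate.request(lambda: log.append("open bluetooth"), b"stranger")  # deferred
gate.on_verified()                                               # now executed
```

The same shape applies whether the verifying device is the first or the second device; only the source of the biometric sample changes.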
在一些实施例中,第二设备执行目标操作后,可以发送第二信息给第一设备进行显示。第二信息可以包括目标操作的执行情况,例如是否执行成功、执行目标操作得到的结果信息(如图4的(C)所示的列表603)。也就是说,用户不仅无需手动执行目标操作,而且可以通过第一设备快速获取目标操作的执行情况,大大方便了用户的使用,具体示例如下图5所示。
请参见图5,图5示例性示出又一种人机交互示意图。图5以第二设备进行语义解析,以及执行目标操作为例进行说明。
如图5的(A)所示,第一设备显示用户界面400,用户界面400可以包括文字401和信息403,文字401为发送信息403的用户的昵称。信息403可以是第二设备接收后转发给第一设备显示的。信息403为文字:“你试试开下手机的蓝牙呢”。第二设备可以对信息403进行语义解析,以识别到信息403用于指示执行目标操作,并得到信息403所表征的目标操作的信息:蓝牙应用的标识。然后,第二设备可以向第一设备发送第一指示信息。
如图5的(B)所示,第一设备接收到第二设备发送的第一指示信息后,检测到作用于显示屏的长按操作,响应于该长按操作,第一设备可以向第二设备发送第二指示信息。
如图5的(C)所示,响应于第二指示信息,第二设备根据上述语义解析的结果执行目标操作:启动标识为上述蓝牙应用的标识的应用,开启蓝牙功能。其中,若用户手动开启蓝牙功能,则可以在用户界面610中点击控件611。第二设备的蓝牙功能开启后,可以通过蓝牙方式自动连接其他设备,例如之前连接过的设备。用户界面610中的设备列表612用于显示和第二设备通过蓝牙连接的设备名称。第二设备执行目标操作后,可以向第一设备发送第二信息。第二信息用于指示蓝牙功能已开启,可选地,还可以包括第二设备通过蓝牙方式连接的设备名称。第一设备可以根据第二信息显示用户界面410,用户界面410包括提示信息411和设备列表412。其中,提示信息411表征蓝牙功能已开启,设备列表412包括第二设备通过蓝牙方式连接的设备名称(即用户界面610中的设备列表612包括的设备名称)。
在一些实施例中,第二设备执行目标操作后也可以不显示用户界面610,而是直接发送第二信息给第一设备,以此通过第一设备显示的用户界面410为用户提供目标操作的执行情况。
因此,用户对于执行目标操作的过程是无感的,执行第一操作(包括上述作用于显示屏的长按操作)后,可以直接通过第一设备的用户界面查看目标操作的执行结果,而无需打开第二设备。
图5所示情况也可以应用于购物场景。示例性地,第一设备显示的信息为:“帮我在购物应用上买个玩具”。第二设备可以执行目标操作:打开购物应用,在购物应用的用户界面中搜索需购买的物品名称(即玩具),按照预设规则选择一个搜索结果进行购买,最后得到购买结 果。然后,第二设备可以将上述购买结果的信息作为第二信息发送给第一设备,以使第一设备显示购买结果的信息。可选地,第二设备进行购买时需对用户的身份进行验证,验证成功时才能购买。其中,用于进行用户身份验证的生物特征信息可以是第二设备通过第一设备或第二设备的摄像头、传感器、收发器等获取的,例如人脸信息、心率信息、脉搏信息等。用户无需手动进行身份验证的过程,使用起来更加方便快捷。
图5所示情况也可以应用于导航场景。示例性地,第一设备显示的信息为:“你导航到地点A,我在这等你”。第二设备可以执行目标操作:打开导航应用,在导航应用的用户界面中搜索目的地名称“地点A”,最后得到至少一个路线信息。然后,第二设备可以将上述至少一个路线信息作为第二信息发送给第一设备,以使第一设备显示任意一个或多个路线信息,便于用户快速进行导航。
在一些实施例中,也可以是第一设备执行目标操作,具体示例如下图6所示。
请参见图6,图6示例性示出又一种人机交互示意图。
如图6的(A)所示,第一设备显示用户界面400,用户界面400可以包括文字401、信息404和控件405,文字401为发送信息404的用户的昵称。信息404可以是第二设备接收后转发给第一设备的即时通讯信息或短信息。信息404为文字:“推荐你听歌曲A”。第一设备可以对信息404进行语义解析,以识别到信息404用于指示执行目标操作,并得到信息404所表征的目标操作的信息:音乐应用的标识,歌曲名称(即歌曲A)。
如图6的(B)所示,第一设备可以检测作用于控件405的触摸操作(例如点击操作),响应于该触摸操作,第一设备可以根据上述信息404所表征的目标操作的信息执行目标操作:启动标识为上述音乐应用的标识的应用,搜索名称为歌曲A的歌曲并播放该歌曲。此时,第一设备可以显示图6的(C)所示的用户界面420,用户界面420表征第一设备正在通过音乐应用播放歌曲A。因此,用户可以通过用户界面420获取到目标操作的执行情况,即目标操作执行成功。
在一些实施例中，也可以是第二设备对信息404进行语义解析。第二设备得到语义解析的结果后，可以将语义解析的结果直接发送给第一设备。然后，第一设备再在检测到第一操作（包括上述作用于控件405的触摸操作）时，根据语义解析的结果执行目标操作。可选地，信息404也可以是第一设备从互联网、其他设备处接收的。第一设备接收到信息404后可以转发给第二设备，以使第二设备对信息404进行语义解析。
在一些实施例中,若第一设备接收第一操作的能力默认为关闭状态,则第一设备得到语义解析的结果之前,可以先开启接收第一操作的能力,以便后续接收第一操作。
在一些实施例中,若第二设备用于对信息404进行语义解析,并且第一设备接收第一操作的能力默认为关闭状态,则第一设备接收到第二设备发送的语义解析的结果后,可以先开启接收第一操作的能力,以便后续接收第一操作。
在一些实施例中,执行目标操作之前,第一设备可以先根据语义解析的结果判断是否能够执行目标操作。例如,第一设备根据音乐应用的标识判断是否安装音乐应用,若已安装音乐应用,则确定第一设备能够执行目标操作,若未安装音乐应用,则确定第一设备无法执行目标操作。
在一些实施例中,若第二设备用于对信息404进行语义解析,并且第二设备获取到了第一设备安装的应用列表、可执行的功能列表等,则第二设备向第一设备发送语义解析的结果之前,可以先根据语义解析的结果判断第一设备是否能够执行目标操作。当确定第一设备能 够执行目标操作时,第二设备再向第一设备发送语义解析的结果。
在一些实施例中,用户身份验证通过的情况下,第一设备才能执行目标操作。则第一设备检测到第一操作时,可以自行获取或通过第二设备获取用户的生物特征信息,然后基于该生物特征信息进行用户身份的验证。例如,第一设备通过第一设备的脉搏传感器采集用户的脉搏信息,并对该脉搏信息进行身份验证。若用户身份验证通过,则第一设备可以执行目标操作,否则,第一设备可以不执行目标操作或者直到用户身份验证通过再执行目标操作。
接下来在不同的应用场景下介绍本申请提供的信息处理方法,该方法可以基于图1所示的信息处理系统实现。
请参见图7,图7示例性示出了一种信息处理方法的流程示意图。图7以第二设备用于接收第一信息和执行目标操作为例进行说明。该方法包括但不限于如下步骤:
S1001:第二设备接收第一信息。
S1002:第二设备向第一设备发送第一信息。
S1003:第一设备接收到第一信息后,输出第一信息。
具体地,第二设备可以从互联网、其他设备处接收第一信息,然后转发给第一设备进行显示。第一信息例如但不限于为即时通讯信息、短信息等。第一信息例如但不限于包括文字、图片、定位信息、文件等。第一信息的示例可参见图1和图4的信息402、图5的信息403、图6的信息404。第一设备可以但不限于通过显示屏显示第一信息,通过扬声器、耳机等播放第一信息。
第一设备或第二设备可以对第一信息进行语义解析,以识别到第一信息用于指示执行目标操作。目标操作例如但不限于为启动第一应用,显示第一界面,启动或关闭第一功能等。第二设备对第一信息进行语义解析的说明可参见下图8所示实施例,第一设备对第一信息进行语义解析的说明可参见下图9-图10所示实施例,暂不详述。
S1004:第一设备接收第一操作。
具体地,第一操作例如但不限于为作用于显示屏、按键的触摸操作(例如点击操作)、双击操作、长按操作、滑动操作等等,具体示例可参见图4所示的双击操作、图5所示的长按操作、图6所示的作用于特定控件的触摸操作。
S1005:第一设备响应于第一操作,向第二设备发送第一通知。
在一些实施例中,第一通知表征第一设备接收到第一操作,具体可参见下图8和图10中的第一通知。在另一些实施例中,第一通知包括目标操作的指示信息(简称目标操作的信息)(即语义解析的结果),具体可参见下图9中的第三通知。
S1006:第二设备接收到第一通知后,执行目标操作。
请参见图8,图8示例性示出了又一种信息处理方法的流程示意图。图8以第二设备用于接收第一信息、对第一信息进行语义解析、执行目标操作为例进行说明。该方法包括但不限于如下步骤:
S101:第二设备接收第一信息。
S102:第二设备向第一设备发送第一信息。
S103:第一设备接收到第一信息后,输出第一信息。
具体地,S101-S103和图7所示的S1001-S1003一致,不再赘述。
S104:第二设备对第一信息进行语义解析,得到目标操作的信息。
具体地,第二设备可以对第一信息进行语义解析,以识别到第一信息用于指示执行目标操作,并得到第一信息所表征的目标操作的信息(也可以称为语义解析的结果)。目标操作的信息可以包括下述至少一项:需启动的应用的标识、需显示的用户界面的标识、需输入的参数、功能标识等。
例如,第一信息包括文件,则语义解析得到的目标操作的信息可以包括:用于查看该文件的应用的标识(如阅读应用的标识)。或者,第一信息包括定位信息,则语义解析得到的目标操作的信息可以包括:导航应用的标识、导航应用中用于输入目的地的界面的标识、需输入的目的地名称(即定位信息所表征的地点)。其他示例可参见图4-图6所示的语义解析的结果。
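The examples above (a file maps to a reader application; a location maps to the navigation application plus a destination parameter; text may map to a function switch) suggest how S104 could be prototyped with simple rules. A real system would use natural-language understanding; the rules and identifiers below are invented for illustration:

```python
from typing import Dict, Optional

def parse_first_information(msg: Dict[str, object]) -> Optional[Dict[str, str]]:
    """Toy semantic analysis: map first information to target-operation info."""
    if "file" in msg:                     # information carrying a file
        return {"app_id": "reader"}       # app used to view the file
    if "location" in msg:                 # information carrying a location
        return {"app_id": "navigation",
                "page_id": "destination_input",
                "param": str(msg["location"])}
    text = str(msg.get("text", ""))
    if "蓝牙" in text or "bluetooth" in text.lower():
        return {"function_id": "bluetooth_on"}    # function to enable
    return None                           # no target operation recognized
```

Returning `None` corresponds to the case where the first information does not indicate any target operation, so no notification needs to be sent.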
需要说明的是,S102-S103和S104的顺序不作限定。
S105:第二设备向第一设备发送第二通知。
具体地,第二通知可以用于指示第一设备接收到第一操作时向第二设备发送通知。可选地,若第一设备接收第一操作的能力默认为关闭状态,第二通知还可以用于指示第一设备开启接收第一操作的能力。可选地,第二通知还可以用于指示第一操作的具体类型,例如第一信息用于指示第一设备开启接收双击操作的能力。第二通知的示例可参见图4-图5所示的第一指示信息。
S106:第一设备接收第一操作。
S107:第一设备向第二设备发送第一通知。
具体地,响应于第二通知,当接收到第一操作时,第一设备向第二设备发送第一通知。第一通知表征第一设备已接收到第一操作。第一通知的示例可参见图4-图5所示的第二指示信息。
S108:响应于第一通知,第二设备执行目标操作。
在一些实施例中,S105之前,该方法还可以包括:第二设备根据语义解析的结果判断是否能执行目标操作。例如,第二设备判断是否安装上述需启动的应用。第二设备判断可执行的功能列表中是否包含上述功能标识对应的功能。在确定第二设备能执行目标操作的情况下,第二设备再执行S105。
在一些实施例中,假设用户身份验证通过的情况下第二设备才能执行目标操作,例如S108之前第二设备为锁屏状态。因此,第二设备可以先获取用户的生物特征信息,例如但不限于通过压力传感器和/或触摸传感器获取用户的触屏行为信息、通过摄像头获取用户的人脸信息、通过脉搏传感器获取用户的脉搏信息、通过心率传感器获取用户的心率信息等。然后,第二设备再基于获取的生物特征信息进行用户身份的验证。当用户身份验证通过时,第二设备可以执行S108,否则,第二设备可以不执行目标操作或者直到用户身份验证通过再执行目标操作。
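The capability check mentioned before S105 (the executing device verifies that it has the required application installed, or that the required function appears in its supported-function list, before proceeding) can be sketched as below; the field names are assumptions, not part of the patent:

```python
from dataclasses import dataclass, field
from typing import Dict, Set

@dataclass
class DeviceCapabilities:
    """What a device reports: installed apps and supported functions."""
    installed_apps: Set[str] = field(default_factory=set)
    functions: Set[str] = field(default_factory=set)

def can_perform(cap: DeviceCapabilities, op: Dict[str, str]) -> bool:
    # The required application must be installed ...
    app = op.get("app_id")
    if app is not None and app not in cap.installed_apps:
        return False
    # ... and the required function must be supported.
    fn = op.get("function_id")
    if fn is not None and fn not in cap.functions:
        return False
    return True

phone = DeviceCapabilities(installed_apps={"app_store", "navigation"},
                           functions={"bluetooth_on", "bluetooth_off"})
```

Either device can run this check, on its own capabilities or on a capability list obtained from its peer, which is why the patent allows the check to happen on the first or the second device.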
请参见图9,图9示例性示出了又一种信息处理方法的流程示意图。图9以第一设备用于对第一信息进行语义解析,第二设备用于接收第一信息、执行目标操作为例进行说明。该方法包括但不限于如下步骤:
S201:第二设备接收第一信息。
S202:第二设备向第一设备发送第一信息。
S203:第一设备接收到第一信息后,输出第一信息。
具体地,S201-S203和图7的S1001-S1003一致,不再赘述。
S204:第一设备对第一信息进行语义解析,得到目标操作的信息。
具体地,S204和图8的S104类似,只是图8的S104由第二设备执行,图9的S204由第一设备执行,具体可参见图8的S104的说明。
需要说明的是,S203和S204的顺序不作限定。
S205:第一设备接收第一操作。
S206:第一设备向第二设备发送第三通知。
具体地,当接收到第一操作时,第一设备可以向第二设备发送第三通知。第三通知可以包括语义解析的结果(即目标操作的信息),第三通知还可以用于指示第二设备执行目标操作。第三通知的示例可参见图4所示的第三指示信息。
在一些实施例中,若第一设备接收第一操作的能力为关闭状态,则S204之后,第一设备可以先开启接收第一操作的能力,以便后续接收第一操作。
S207:响应于第三通知,第二设备执行目标操作。
在一些实施例中,S207之前,该方法还可以包括:第二设备根据第三通知中语义解析的结果判断是否能执行目标操作。
在一些实施例中,若第一设备获取到了第二设备安装的应用列表、可执行的功能列表等,则S204之后,该方法还可以包括:第一设备根据语义解析的结果判断第二设备是否能执行目标操作。在确定第二设备能执行目标操作的情况下,第一设备再执行S205-S206。
在一些实施例中,假设用户身份验证通过的情况下第二设备才能执行目标操作,例如S207之前第二设备为锁屏状态。因此,第二设备可以先获取用户的生物特征信息,再基于获取的生物特征信息进行用户身份的验证。当用户身份验证通过时,第二设备可以执行S207,否则,第二设备可以不执行目标操作或者直到用户身份验证通过再执行目标操作。
请参见图10,图10示例性示出了又一种信息处理方法的流程示意图。图10以第一设备用于对第一信息进行语义解析,第二设备用于接收第一信息、执行目标操作为例进行说明。该方法包括但不限于如下步骤:
S301:第二设备接收第一信息。
S302:第二设备向第一设备发送第一信息。
S303:第一设备接收到第一信息后,输出第一信息。
S304:第一设备对第一信息进行语义解析,得到目标操作的信息。
具体地,S301-S304和图9的S201-S204一致,不再赘述。
需要说明的是,S303和S304的顺序不作限定。
S305:第一设备向第二设备发送目标操作的信息(即语义解析的结果)。
S306:第二设备向第一设备发送第二通知。
具体地,S306和图8的S105一致,不再赘述。
在一些实施例中,第二设备接收到目标操作的信息后,可以根据语义解析的结果判断是否能执行目标操作。在确定能执行目标操作的情况下,第二设备可以向第一设备发送第二通知(即执行S306)。
S307:第一设备接收第一操作。
S308:第一设备向第二设备发送第一通知。
S309:响应于第一通知,第二设备执行目标操作。
具体地,S307-S309和图8的S106-S108一致,不再赘述。
在一些实施例中,若第一设备获取到了第二设备安装的应用列表、可执行的功能列表等,则S305之前,该方法还可以包括:第一设备根据语义解析的结果判断第二设备是否能执行目 标操作。在确定第二设备能执行目标操作的情况下,第一设备再执行S305。
在一些实施例中,假设用户身份验证通过的情况下第二设备才能执行目标操作,例如S309之前第二设备为锁屏状态。因此,第二设备可以先获取用户的生物特征信息,再基于获取的生物特征信息进行用户身份的验证。当用户身份验证通过时,第二设备可以执行S309,否则,第二设备可以不执行目标操作或者直到用户身份验证通过再执行目标操作。
在一些实施例中,第一信息也可以是第一设备自行接收的,例如第一设备为连接有SIM卡的可穿戴设备,具体可参见下图11-13所示。
请参见图11,图11示例性示出了又一种信息处理方法的流程示意图。图11以第一设备用于接收第一信息,第二设备用于对第一信息进行语义解析、执行目标操作为例进行说明。该方法包括但不限于如下步骤:
S401:第一设备接收第一信息。
S402:第一设备输出第一信息。
S403:第一设备向第二设备发送第一信息。
具体地,第一设备可以从互联网、其他设备处接收第一信息,并输出第一信息。并且,第一设备可以将第一信息发送给第二设备进行语义解析。第一信息和输出第一信息的说明可参见图7所示的S1001-S1003的说明,不再赘述。
需要说明的是,S402和S403的顺序不作限定。
S404:第二设备对第一信息进行语义解析,得到目标操作的信息。
具体地,S404和图8的S104一致,不再赘述。
S405:第二设备向第一设备发送第二通知。
S406:第一设备接收第一操作。
S407:第一设备向第二设备发送第一通知。
S408:响应于第一通知,第二设备执行目标操作。
具体地,S405-S408和图8的S105-S108一致,不再赘述。
在一些实施例中,S405之前,该方法还可以包括:第二设备根据语义解析的结果判断是否能执行目标操作。在确定第二设备能执行目标操作的情况下,第二设备再执行S405。
在一些实施例中,假设用户身份验证通过的情况下第二设备才能执行目标操作,例如S408之前第二设备为锁屏状态。因此,第二设备可以先获取用户的生物特征信息,再基于获取的生物特征信息进行用户身份的验证。当用户身份验证通过时,第二设备可以执行S408,否则,第二设备可以不执行目标操作或者直到用户身份验证通过再执行目标操作。
请参见图12,图12示例性示出了又一种信息处理方法的流程示意图。图12以第一设备用于接收第一信息、对第一信息进行语义解析,第二设备用于执行目标操作为例进行说明。该方法包括但不限于如下步骤:
S501:第一设备接收第一信息。
S502:第一设备输出第一信息。
具体地,S501-S502和图11的S401-S402一致,不再赘述。
S503:第一设备对第一信息进行语义解析,得到目标操作的信息。
具体地,S503和图8的S104类似,只是图8的S104由第二设备执行,图12的S503由第一设备执行,具体可参见图8的S104的说明。
需要说明的是,S502和S503的顺序不作限定。
S504:第一设备接收第一操作。
S505:第一设备向第二设备发送第三通知。
S506:响应于第三通知,第二设备执行目标操作。
具体地,S504-S506和图9的S205-S207一致,不再赘述。
在一些实施例中,若第一设备接收第一操作的能力为关闭状态,则S503之后,第一设备可以先开启接收第一操作的能力,以便后续接收第一操作。
在一些实施例中,S506之前,该方法还可以包括:第二设备根据第三通知中语义解析的结果判断是否能执行目标操作。在确定第二设备能执行目标操作的情况下,第二设备再执行S506。
在一些实施例中,若第一设备获取到了第二设备安装的应用列表、可执行的功能列表等,则S503之后,该方法还包括:第一设备根据语义解析的结果判断第二设备是否能执行目标操作。在确定第二设备能执行目标操作的情况下,第一设备再执行S504-S505。
在一些实施例中,假设用户身份验证通过的情况下第二设备才能执行目标操作,例如S506之前第二设备为锁屏状态。因此,第二设备可以先获取用户的生物特征信息,再基于获取的生物特征信息进行用户身份的验证。当用户身份验证通过时,第二设备可以执行S506,否则,第二设备可以不执行目标操作或者直到用户身份验证通过再执行目标操作。
请参见图13,图13示例性示出了又一种信息处理方法的流程示意图。图13以第一设备用于接收第一信息、对第一信息进行语义解析,第二设备用于执行目标操作为例进行说明。该方法包括但不限于如下步骤:
S601:第一设备接收第一信息。
S602:第一设备输出第一信息。
具体地,S601-S602和图11的S401-S402一致,不再赘述。
S603:第一设备对第一信息进行语义解析,得到目标操作的信息。
具体地,S603和图8的S104类似,只是图8的S104由第二设备执行,图13的S603由第一设备执行,具体可参见图8的S104的说明。
需要说明的是,S602和S603的顺序不作限定。
S604：第一设备向第二设备发送目标操作的信息（即语义解析的结果）。
S605:第二设备向第一设备发送第二通知。
S606:第一设备接收第一操作。
S607:第一设备向第二设备发送第一通知。
S608:响应于第一通知,第二设备执行目标操作。
具体地,S605-S608和图8的S105-S108一致,不再赘述。
在一些实施例中，第二设备接收到目标操作的信息后，可以根据目标操作的信息判断是否能执行目标操作。在确定第二设备能执行目标操作的情况下，第二设备可以向第一设备发送第二通知（即执行S605）。
在一些实施例中,若第一设备获取到了第二设备安装的应用列表、可执行的功能列表等,则S604之前,该方法还包括:第一设备根据语义解析的结果判断第二设备是否能执行目标操作。在确定第二设备能执行目标操作的情况下,第一设备再执行S604。
在一些实施例中,假设用户身份验证通过的情况下第二设备才能执行目标操作,例如S608之前第二设备为锁屏状态。因此,第二设备可以先获取用户的生物特征信息,再基于获取的生物特征信息进行用户身份的验证。当用户身份验证通过时,第二设备可以执行S608,否则,第二设备可以不执行目标操作或者直到用户身份验证通过再执行目标操作。
在一些实施例中,图7-图13所示的信息处理方法还可以包括:第二设备向第一设备发送目标操作的执行情况。第一设备可以显示目标操作的执行情况,以便用户进行查看,具体示例如上图5所示。也就是说,用户可以无需拿出第二设备,就能通过第一设备获取到目标操作的执行情况,大大方便了用户的使用。
在图7-图13所示的方法中,第一设备接收第一操作后,第二设备可以直接执行目标操作。也就是说,用户获取第一信息后,可以直接通过第一操作查看第一信息指示执行的目标操作的执行结果。执行结果例如为图4所示的用户界面600,图5所示的用户界面610或用户界面410,图6所示的用户界面420。因此无需用户手动执行目标操作,实现了“一步直达”的效果,提升了操作的便利性和用户体验感。
在一些实施例中,也可以是第一设备执行目标操作,具体可参见下图14-图16所示。
请参见图14,图14示例性示出了又一种信息处理方法的流程示意图。图14以第一设备用于执行目标操作,第二设备用于接收第一信息、对第一信息进行语义解析为例进行说明。该方法包括但不限于如下步骤:
S701:第二设备接收第一信息。
S702:第二设备向第一设备发送第一信息。
S703:第一设备接收到第一信息后,输出第一信息。
S704:第二设备对第一信息进行语义解析,得到目标操作的信息。
具体地,S701-S704和图8的S101-S104一致,不再赘述。需要说明的是,S702-S703和S704的顺序不作限定。
S705:第二设备向第一设备发送目标操作的信息。
S706:第一设备接收第一操作。
S707:响应于第一操作,第一设备执行目标操作。
具体地,当接收到第一操作时,第一设备可以根据第二设备发送的目标操作的信息执行目标操作。
在一些实施例中,若第一设备接收第一操作的能力为关闭状态,则第一设备接收到目标操作的信息后,可以先开启接收第一操作的能力,以便后续接收第一操作。
在一些实施例中,S707之前,该方法还可以包括:第一设备根据上述目标操作的信息判断是否能执行目标操作。在确定第一设备能执行目标操作的情况下,第一设备再执行S707。
在一些实施例中,若第二设备获取到了第一设备安装的应用列表、可执行的功能列表等,则S705之前,该方法还可以包括:第二设备根据上述目标操作的信息判断第一设备是否能执行目标操作。在确定第一设备能执行目标操作的情况下,第二设备再执行S705。
在一些实施例中,假设用户身份验证通过的情况下第一设备才能执行目标操作,例如S707之前第一设备为锁屏状态。因此,第一设备可以先获取用户的生物特征信息,再基于获取的生物特征信息进行用户身份的验证。当用户身份验证通过时,第一设备可以执行S707,否则,第一设备可以不执行目标操作或者直到用户身份验证通过再执行目标操作。
请参见图15,图15示例性示出了又一种信息处理方法的流程示意图。图15以第一设备用于执行目标操作、对第一信息进行语义解析,第二设备用于接收第一信息为例进行说明。该方法包括但不限于如下步骤:
S801:第二设备接收第一信息。
S802:第二设备向第一设备发送第一信息。
S803:第一设备接收到第一信息后,输出第一信息。
具体地,S801-S803和图8的S101-S103一致,不再赘述。
S804:第一设备对第一信息进行语义解析,得到目标操作的信息。
具体地,S804和图8的S104类似,只是图8的S104由第二设备执行,图15的S804由第一设备执行,具体可参见图8的S104的说明。
需要说明的是,S803和S804的顺序不作限定。
S805:第一设备接收第一操作。
S806:响应于第一操作,第一设备执行目标操作。
具体地,当接收到第一操作时,第一设备可以根据得到的目标操作的信息执行目标操作。
在一些实施例中,若第一设备接收第一操作的能力为关闭状态,则第一设备得到目标操作的信息后,可以先开启接收第一操作的能力,以便后续接收第一操作。
在一些实施例中,S806之前,该方法还可以包括:第一设备根据得到的目标操作的信息判断是否能执行目标操作。在确定第一设备能执行目标操作的情况下,第一设备再执行S806。
在一些实施例中,假设用户身份验证通过的情况下第一设备才能执行目标操作,例如S806之前第一设备为锁屏状态。因此,第一设备可以先获取用户的生物特征信息,再基于获取的生物特征信息进行用户身份的验证。当用户身份验证通过时,第一设备可以执行S806,否则,第一设备可以不执行目标操作或者直到用户身份验证通过再执行目标操作。
Refer to FIG. 16. FIG. 16 exemplarily shows a schematic flowchart of still another information processing method. FIG. 16 is described by using an example in which the first device is configured to receive the first information and perform the target operation, and the second device is configured to perform semantic parsing on the first information. The method includes but is not limited to the following steps:
S901: The first device receives the first information.
S902: The first device outputs the first information.
S903: The first device sends the first information to the second device.
S904: The second device performs semantic parsing on the first information to obtain information about the target operation.
Specifically, S901 to S904 are consistent with S401 to S404 in FIG. 11, and details are not described herein again.
S905: The second device sends the information about the target operation to the first device.
S906: The first device receives the first operation.
S907: In response to the first operation, the first device performs the target operation.
Specifically, when receiving the first operation, the first device may perform the target operation based on the information about the target operation sent by the second device.
In some embodiments, if the first device's capability of receiving the first operation is disabled, the first device may first enable this capability after receiving the information about the target operation, so that the first operation can be received subsequently.
In some embodiments, before S907, the method may further include: the first device determines, based on the foregoing information about the target operation, whether it can perform the target operation. The first device performs S907 only after determining that it can perform the target operation.
In some embodiments, if the second device has obtained a list of applications installed on the first device, a list of functions that the first device can perform, and the like, before S905 the method may further include: the second device determines, based on the foregoing information about the target operation, whether the first device can perform the target operation. The second device performs S905 only after determining that the first device can perform the target operation.
In some embodiments, it is assumed that the first device can perform the target operation only after user identity verification succeeds, for example, the first device is in a screen-locked state before S907. Therefore, the first device may first obtain biometric information of the user, and then verify the user identity based on the obtained biometric information. When the user identity verification succeeds, the first device may perform S907; otherwise, the first device may skip performing the target operation, or may perform the target operation only after the user identity verification succeeds.
In the methods shown in FIG. 14 to FIG. 16, after receiving the first operation, the first device can directly perform the target operation. In other words, after obtaining the first information, the user can directly view, on the first device, an execution result of the target operation indicated by the first information. The execution result is, for example, the user interface 600 shown in FIG. 4, the user interface 610 or the user interface 410 shown in FIG. 5, or the user interface 420 shown in FIG. 6. This reduces the steps of manually operating the first device, achieves a "one-step direct access" effect, and improves operation convenience and user experience.
In this application, the device for performing the target operation may also be identified from the first information, that is, the result of the semantic parsing (that is, the information about the target operation) includes information about the device for performing the target operation, for example, a device name. Therefore, after the semantic parsing result is obtained, the first device and the second device may determine, based on the result, the device for performing the target operation. For example, the first information is the text "You can open the shopping application on your phone to see whether there are suitable clothes". In this case, the information about the target operation may include: the name of the device for performing the target operation (that is, the mobile phone), an identifier of the shopping application to be started, an identifier of the search page of the shopping application to be displayed, and a parameter to be entered (that is, "clothes"). Assuming that the first device is a smart band and the second device is a smartphone, the first device and the second device may determine, based on the semantic parsing result, that the device for performing the target operation is the second device.
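The shopping-message example above can be expressed, purely as an illustration, as the following hypothetical semantic parsing result. The field names and the selection helper are assumptions; this application does not prescribe a concrete data format:

```python
first_information = ("You can open the shopping application on your phone "
                     "to see whether there are suitable clothes")

# Hypothetical semantic parsing result (information about the target operation)
target_operation_info = {
    "device": "mobile phone",       # device for performing the target operation
    "app": "shopping application",  # application to be started
    "page": "search page",          # interface to be displayed
    "query": "clothes",             # parameter to be entered
}

def select_executor(op_info, devices):
    """Matches the device name in the parsing result against the role of
    each connected device and returns the one that should execute."""
    for name, role in devices.items():
        if role == op_info["device"]:
            return name
    return None

# first device: smart band; second device: smartphone
executor = select_executor(target_operation_info,
                           {"first device": "smart band",
                            "second device": "mobile phone"})
print(executor)  # -> second device
```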
In some embodiments, the first device and the second device may alternatively preset the device for performing the target operation. For example, if the processing capability of the first device is relatively weak, the first device and the second device may preset the second device as the device for performing the target operation. Alternatively, the first device and the second device may determine the device for performing the target operation in response to a user-defined setting operation. For example, on a settings screen of the first device or the second device, or on a user interface of an installed sports-and-health application, the user may set the second device as the device that conveniently implements the information content. In this case, the second device performs the target operation by default; for specific implementations, refer to FIG. 7 to FIG. 13 above.
Without being limited thereto, the first device and the second device may also preset priorities of the devices for performing the target operation, or determine these priorities in response to a user-defined setting operation. For example, if, among the devices for performing the target operation, the first device has a higher priority than the second device, then after the semantic parsing result is obtained, whether the first device can perform the target operation may be determined first. When the first device can perform the target operation, the specific implementations are as shown in FIG. 14 to FIG. 16 above. When the first device cannot perform the target operation, whether the second device can perform the target operation is then determined. When the second device can perform the target operation, the specific implementations are as shown in FIG. 7 to FIG. 13 above.
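The priority-based selection described above can be sketched as follows. The classes and the capability model are hypothetical; a real implementation would consult each device's actual application and function lists:

```python
class Device:
    """Hypothetical device with a set of actions it can run."""
    def __init__(self, name, runnable_actions):
        self.name = name
        self.runnable_actions = set(runnable_actions)

    def can_perform(self, op_info):
        return op_info["action"] in self.runnable_actions

def choose_device(op_info, devices_by_priority):
    """Walks the priority-ordered device list and returns the first device
    that can perform the target operation, or None if no device can."""
    for device in devices_by_priority:
        if device.can_perform(op_info):
            return device
    return None

band = Device("first device", ["show_notification"])           # limited
phone = Device("second device", ["show_notification", "open_app"])

# The first device has higher priority but cannot open an app,
# so the second device is selected for this operation.
chosen = choose_device({"action": "open_app"}, [band, phone])
print(chosen.name)  # -> second device
```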
All or some of the foregoing embodiments may be implemented by using software, hardware, firmware, or any combination thereof. When software is used for implementation, all or some of the embodiments may be implemented in a form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the procedures or functions according to this application are all or partially generated. The computer may be a general-purpose computer, a dedicated computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another computer-readable storage medium. For example, the computer instructions may be transmitted from a website, computer, server, or data center to another website, computer, server, or data center in a wired (for example, coaxial cable, optical fiber, or digital subscriber line) or wireless (for example, infrared, radio, or microwave) manner. The computer-readable storage medium may be any usable medium accessible to the computer, or a data storage device, such as a server or a data center, integrating one or more usable media. The usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium, a semiconductor medium (for example, a solid-state drive), or the like. In conclusion, the foregoing is merely embodiments of the technical solutions of the present invention, and is not intended to limit the protection scope of the present invention. Any modification, equivalent replacement, or improvement made according to the disclosure of the present invention shall fall within the protection scope of the present invention. Apparently, a person skilled in the art can make various modifications and variations to the present invention without departing from the spirit and scope of the present invention. The present invention is intended to cover these modifications and variations provided that they fall within the scope of the claims of the present invention and their equivalent technologies.

Claims (20)

  1. An information processing method, applied to a system comprising a first device and a second device, wherein the first device is connected to the second device, and the method comprises:
    receiving, by the second device, first information, and sending the first information to the first device, wherein the first information is used to indicate that a target operation is to be performed;
    outputting, by the first device, the first information after receiving the first information;
    receiving, by the first device, a first operation;
    sending, by the first device, a first notification to the second device in response to the first operation; and
    performing, by the second device, the target operation after receiving the first notification.
  2. The method according to claim 1, wherein the method further comprises: identifying, by the second device after receiving the first information, that the first information is used to indicate that the target operation is to be performed;
    before the receiving, by the first device, a first operation, the method further comprises: sending, by the second device, a second notification to the first device; and
    the sending, by the first device, a first notification to the second device in response to the first operation comprises: sending, by the first device after receiving the second notification, the first notification to the second device in response to the received first operation.
  3. The method according to claim 1, wherein before the receiving, by the first device, a first operation, the method further comprises:
    identifying, by the first device after receiving the first information, that the first information is used to indicate that the target operation is to be performed;
    sending, by the first device, a third notification to the second device, wherein the third notification comprises indication information of the target operation; and
    sending, by the second device, a fourth notification to the first device after receiving the third notification; and
    the sending, by the first device, a first notification to the second device in response to the first operation comprises: sending, by the first device after receiving the fourth notification, the first notification to the second device in response to the received first operation.
  4. The method according to claim 1, wherein before the receiving, by the first device, a first operation, the method further comprises:
    identifying, by the first device after receiving the first information, that the first information is used to indicate that the target operation is to be performed; and
    the first notification comprises indication information of the target operation.
  5. The method according to claim 2 or 3, wherein the second notification or the fourth notification is used to instruct the first device to enable a function of receiving the first operation.
  6. The method according to claim 3, wherein the sending, by the first device, a third notification to the second device comprises:
    sending, by the first device, the third notification to the second device when the first device determines that the second device can perform the target operation.
  7. The method according to claim 3, wherein the sending, by the second device, a fourth notification to the first device after receiving the third notification comprises:
    determining, by the second device after receiving the third notification, whether the second device can perform the target operation; and
    sending, by the second device, the fourth notification to the first device when the second device determines that it can perform the target operation.
  8. The method according to claim 1, wherein the performing, by the second device, the target operation after receiving the first notification comprises:
    performing, by the second device, user identity authentication after receiving the first notification; and
    performing, by the second device, the target operation when the user identity authentication succeeds.
  9. The method according to claim 1, wherein the first device is a wearable electronic device, and the second device is a mobile terminal device.
  10. The method according to claim 1, wherein the target operation comprises at least one of the following: starting a first application, displaying a first interface, or enabling or disabling a first function.
  11. The method according to claim 1, wherein the first information is an instant messaging message or a short message.
  12. An information processing system, wherein the system comprises a first device and a second device, the first device is connected to the second device, and wherein:
    the second device is configured to receive first information and send the first information to the first device, wherein the first information is used to indicate that a target operation is to be performed;
    the first device is configured to: output the first information after receiving the first information; receive a first operation; and send a first notification to the second device in response to the first operation; and
    the second device is further configured to perform the target operation after receiving the first notification.
  13. The system according to claim 12, wherein the second device is further configured to: after receiving the first information, identify that the first information is used to indicate that the target operation is to be performed;
    before the first device receives the first operation, the second device is further configured to send a second notification to the first device; and
    when sending the first notification to the second device in response to the first operation, the first device is specifically configured to: after receiving the second notification, send the first notification to the second device in response to the received first operation.
  14. The system according to claim 12, wherein before the first device receives the first operation, the first device is further configured to: after receiving the first information, identify that the first information is used to indicate that the target operation is to be performed; and send a third notification to the second device, wherein the third notification comprises indication information of the target operation; the second device is further configured to: after receiving the third notification, send a fourth notification to the first device; and
    when sending the first notification to the second device in response to the first operation, the first device is specifically configured to: after receiving the fourth notification, send the first notification to the second device in response to the received first operation.
  15. The system according to claim 12, wherein before the first device receives the first operation, the first device is further configured to: after receiving the first information, identify that the first information is used to indicate that the target operation is to be performed; and the first notification comprises indication information of the target operation.
  16. The system according to claim 14, wherein when sending the third notification to the second device, the first device is specifically configured to: send the third notification to the second device when determining that the second device can perform the target operation.
  17. The system according to claim 14, wherein when sending the fourth notification to the first device after receiving the third notification, the second device is specifically configured to: after receiving the third notification, determine whether the second device can perform the target operation; and send the fourth notification to the first device when determining that the second device can perform the target operation.
  18. The system according to claim 12, wherein when performing the target operation after receiving the first notification, the second device is specifically configured to: perform user identity authentication after receiving the first notification; and perform the target operation when the user identity authentication succeeds.
  19. A computer program product, wherein when the computer program product runs on an electronic device, the electronic device is enabled to perform the method according to any one of claims 1 to 11.
  20. A computer storage medium, comprising computer instructions, wherein when the computer instructions are run on an electronic device, the electronic device is enabled to perform the method according to any one of claims 1 to 11.
PCT/CN2021/137325 2020-12-21 2021-12-13 Information processing method, electronic device and system WO2022135199A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP21909196.4A EP4250696A4 (en) 2020-12-21 2021-12-13 INFORMATION PROCESSING METHOD, ELECTRONIC DEVICE AND SYSTEM
US18/258,473 US20240040343A1 (en) 2020-12-21 2021-12-13 Information Processing Method, Electronic Device, and System

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011534011.XA CN114650332B (zh) 2020-12-21 2020-12-21 Information processing method and system, and computer storage medium
CN202011534011.X 2020-12-21

Publications (1)

Publication Number Publication Date
WO2022135199A1 true WO2022135199A1 (zh) 2022-06-30

Family

ID=81992262

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/137325 WO2022135199A1 (zh) 2020-12-21 2021-12-13 信息处理方法、电子设备及系统

Country Status (4)

Country Link
US (1) US20240040343A1 (zh)
EP (1) EP4250696A4 (zh)
CN (1) CN114650332B (zh)
WO (1) WO2022135199A1 (zh)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104023128A (zh) * 2014-05-16 2014-09-03 Beijing Kingsoft Internet Security Software Co., Ltd. Method for communication between devices and wearable device
CN104270826A (zh) * 2014-09-23 2015-01-07 Lenovo (Beijing) Co., Ltd. Information processing method and electronic device
CN105515951A (zh) * 2015-12-14 2016-04-20 Alibaba Group Holding Limited Wearable device-based instant messaging method and apparatus

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2014143776A2 (en) * 2013-03-15 2014-09-18 Bodhi Technology Ventures Llc Providing remote interactions with host device using a wireless device
US10965622B2 (en) * 2015-04-16 2021-03-30 Samsung Electronics Co., Ltd. Method and apparatus for recommending reply message
CN110602309A (zh) * 2019-08-02 2019-12-20 Huawei Technologies Co., Ltd. Device unlocking method and system, and related device
CN111404802A (zh) * 2020-02-19 2020-07-10 Huawei Technologies Co., Ltd. Notification processing system and method, and electronic device
CN111599358A (zh) * 2020-04-09 2020-08-28 Huawei Technologies Co., Ltd. Voice interaction method and electronic device


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP4250696A4

Also Published As

Publication number Publication date
EP4250696A4 (en) 2024-05-01
US20240040343A1 (en) 2024-02-01
EP4250696A1 (en) 2023-09-27
CN114650332A (zh) 2022-06-21
CN114650332B (zh) 2023-06-02


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21909196

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 18258473

Country of ref document: US

ENP Entry into the national phase

Ref document number: 2021909196

Country of ref document: EP

Effective date: 20230620

NENP Non-entry into the national phase

Ref country code: DE