WO2022089259A1 - Device communication method, system, and apparatus - Google Patents

Device communication method, system, and apparatus

Info

Publication number
WO2022089259A1
WO2022089259A1
Authority
WO
WIPO (PCT)
Prior art keywords
mobile phone
interface
input
large screen
screen
Prior art date
Application number
PCT/CN2021/124800
Other languages
English (en)
French (fr)
Inventor
饶凯浩
魏万军
毕晟
徐辉
朱振宗
鲍思源
陈刚
杨云帆
朱爽
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from CN202110267000.8A (patent CN114527909A)
Application filed by Huawei Technologies Co., Ltd.
Priority to EP21884987.5A (patent EP4221239A4)
Priority to US18/251,127 (patent US20230403421A1)
Publication of WO2022089259A1

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41Structure of client; Structure of client peripherals
    • H04N21/4104Peripherals receiving signals from specially adapted client devices
    • H04N21/4126The peripheral being portable, e.g. PDAs or mobile phones
    • H04N21/41265The peripheral being portable, e.g. PDAs or mobile phones having a remote control device for bidirectional communication between the remote control device and client device
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0489Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using dedicated keyboard keys or combinations thereof
    • G06F3/04895Guidance during keyboard input operation, e.g. prompting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14Digital output to display device ; Cooperation and interconnection of the display device with other functional units
    • GPHYSICS
    • G08SIGNALLING
    • G08CTRANSMISSION SYSTEMS FOR MEASURED VALUES, CONTROL OR SIMILAR SIGNALS
    • G08C17/00Arrangements for transmitting signals characterised by the use of a wireless electrical link
    • G08C17/02Arrangements for transmitting signals characterised by the use of a wireless electrical link using a radio link
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M1/00Substation equipment, e.g. for use by subscribers
    • H04M1/72Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72403User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
    • H04M1/72409User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality by interfacing with external accessories
    • H04M1/72412User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality by interfacing with external accessories using two-way short-range wireless interfaces
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41Structure of client; Structure of client peripherals
    • H04N21/422Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
    • H04N21/42204User interfaces specially adapted for controlling a client device through a remote control device; Remote control devices therefor
    • H04N21/42206User interfaces specially adapted for controlling a client device through a remote control device; Remote control devices therefor characterized by hardware details
    • H04N21/42208Display device provided on the remote control
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41Structure of client; Structure of client peripherals
    • H04N21/422Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
    • H04N21/42204User interfaces specially adapted for controlling a client device through a remote control device; Remote control devices therefor
    • H04N21/42206User interfaces specially adapted for controlling a client device through a remote control device; Remote control devices therefor characterized by hardware details
    • H04N21/42212Specific keyboard arrangements
    • H04N21/42213Specific keyboard arrangements for facilitating data entry
    • H04N21/42214Specific keyboard arrangements for facilitating data entry using alphanumerical characters
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/4302Content synchronisation processes, e.g. decoder synchronisation
    • H04N21/4307Synchronising the rendering of multiple content streams or additional data on devices, e.g. synchronisation of audio on a mobile phone with the video output on the TV screen
    • H04N21/43076Synchronising the rendering of multiple content streams or additional data on devices, e.g. synchronisation of audio on a mobile phone with the video output on the TV screen of the same content streams on multiple devices, e.g. when family members are watching the same movie on different devices
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/436Interfacing a local distribution network, e.g. communicating with another STB or one or more peripheral devices inside the home
    • H04N21/4363Adapting the video or multiplex stream to a specific local network, e.g. a IEEE 1394 or Bluetooth® network
    • H04N21/43637Adapting the video or multiplex stream to a specific local network, e.g. a IEEE 1394 or Bluetooth® network involving a wireless protocol, e.g. Bluetooth, RF or wireless LAN [IEEE 802.11]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/45Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N21/458Scheduling content for creating a personalised stream, e.g. by combining a locally stored advertisement with an incoming stream; Updating operations, e.g. for OS modules ; time-related management operations
    • H04N21/4583Automatically resolving scheduling conflicts, e.g. when a recording by reservation has been programmed for two programs in the same time slot
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/478Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N21/4786Supplemental services, e.g. displaying phone caller identification, shopping application e-mailing
    • GPHYSICS
    • G08SIGNALLING
    • G08CTRANSMISSION SYSTEMS FOR MEASURED VALUES, CONTROL OR SIMILAR SIGNALS
    • G08C2201/00Transmission systems of control signals via wireless link
    • G08C2201/30User interface
    • GPHYSICS
    • G08SIGNALLING
    • G08CTRANSMISSION SYSTEMS FOR MEASURED VALUES, CONTROL OR SIMILAR SIGNALS
    • G08C2201/00Transmission systems of control signals via wireless link
    • G08C2201/90Additional features
    • G08C2201/93Remote control using other portable devices, e.g. mobile phone, PDA, laptop
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2354/00Aspects of interface with display user
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2370/00Aspects of data communication
    • G09G2370/04Exchange of auxiliary data, i.e. other than image data, between monitor and graphics controller
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2370/00Aspects of data communication
    • G09G2370/16Use of wireless transmission of display information
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/10Protocols in which an application is distributed across nodes in the network
    • H04L67/1095Replication or mirroring of data, e.g. scheduling or transport for data synchronisation between network nodes
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M2250/00Details of telephonic subscriber devices
    • H04M2250/70Details of telephonic subscriber devices methods for entering alphabetical characters, e.g. multi-tap or dictionary disambiguation

Definitions

  • the present application relates to the field of communication technologies, and in particular, to a device communication method, system, and apparatus.
  • TVs can provide better video images based on their large screens, but when searching for programs on a TV, the user needs to use the remote control to select pinyin letters one by one for text input, which is inefficient and inconvenient; mobile phones can provide convenient and efficient text input based on an input method framework, but the screen of a mobile phone is usually small, which is not conducive to watching videos or images.
  • the embodiments of the present application provide a device communication method, system, and apparatus, so that different electronic devices can work together, leverage their respective advantages, and provide users with convenient and comfortable services.
  • a first aspect of the embodiments of the present application provides a device communication method, which is applied to a system including a first device, a second device, and a third device.
  • the method includes: the first device displays a first interface including a first edit box; the first device sends an instruction message to the second device and the third device; the second device displays a second interface according to the instruction message, the second interface including a second edit box; when there is an editing state in the second edit box, the first device synchronizes the editing state to the first edit box; the third device sends a preemption message to the first device; the third device displays a third interface including a third edit box; and the third edit box is synchronized with the editing state of the first edit box.
  • in this way, the third device can preempt the auxiliary input, making the manner of assisting the input of the first device more flexible.
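The invite-then-preempt flow above can be sketched as a small in-memory simulation. All class, method, and device names below (`Device`, `send_instruction`, `preempt`, and so on) are illustrative assumptions for the sketch, not part of the claimed method.

```python
# Sketch: a first device invites assistants; the first to accept assists
# input, and a third device can later preempt the auxiliary-input role.

class Device:
    def __init__(self, name):
        self.name = name
        self.edit_box = ""     # local edit box content
        self.assisting = None  # which device currently assists this one

    def send_instruction(self, *assistants):
        # First device sends the instruction message to candidate assistants.
        for dev in assistants:
            dev.on_instruction(self)

    def on_instruction(self, first_device):
        # An invited device takes the assistant role if it is still free.
        first_device.assisting = first_device.assisting or self

    def type_text(self, first_device, text):
        # Editing state in the assistant's box is mirrored to the first device.
        if first_device.assisting is self:
            self.edit_box = text
            first_device.edit_box = text

    def preempt(self, first_device):
        # A third device sends a preemption message and takes over,
        # inheriting the current editing state.
        first_device.assisting = self
        self.edit_box = first_device.edit_box


tv, phone, tablet = Device("tv"), Device("phone"), Device("tablet")
tv.send_instruction(phone, tablet)   # phone accepts first and assists
phone.type_text(tv, "kung fu")       # phone's edits are mirrored to the TV
tablet.preempt(tv)                   # tablet preempts, inheriting "kung fu"
tablet.type_text(tv, "kung fu panda")
print(tv.edit_box)                   # -> kung fu panda
```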
  • the second device includes an interface service, and the interface service is used for synchronizing editing states between the first device and the second device. In this way, based on the interface service, the first device can synchronize with any editing state of the second device.
  • the editing state includes one or more of the following: text content, a cursor or a highlight mark of the text content.
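A minimal sketch of an editing-state record carrying the components listed above (text content, cursor, highlight mark). The field and method names are assumptions for illustration only.

```python
# Sketch: an editing state (text, cursor, highlighted span) that can be
# synchronized from one device's edit box to another's.
from dataclasses import dataclass
from typing import Optional, Tuple


@dataclass
class EditingState:
    text: str = ""
    cursor: int = 0                               # caret index within text
    highlight: Optional[Tuple[int, int]] = None   # (start, end) of selection

    def apply_to(self, other: "EditingState") -> None:
        # Synchronize this editing state into another edit box's state.
        other.text = self.text
        other.cursor = self.cursor
        other.highlight = self.highlight


phone_box = EditingState(text="kung fu", cursor=7, highlight=(0, 4))
tv_box = EditingState()
phone_box.apply_to(tv_box)          # mirror the phone's edits onto the TV
print(tv_box.text, tv_box.cursor)   # -> kung fu 7
```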
  • the second device displaying the second interface according to the instruction message includes: the second device displays a first notification interface in response to the instruction message, the first notification interface including an option for confirming the auxiliary input; and in response to a triggering operation on the option, the second device displays the second interface.
  • the second interface further includes all or part of the content of the first interface. In this way, the user can see, on the second device, what is happening on the first device, which helps the user follow the state of the assisted first device.
  • the second edit box and all or part of the content of the first interface are displayed in separate layers, with the second edit box displayed on the upper layer, above all or part of the content of the first interface.
  • the method further includes: in response to a triggering operation on the second edit box, the second device displays a virtual keyboard; and according to an input operation received via the virtual keyboard and/or in the second edit box, the second device displays the editing state in the second edit box.
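As a concrete illustration of how a virtual-keyboard input operation could update the editing state (text plus cursor) that is then shown in the second edit box, here is a minimal sketch; the function name and the `"BACKSPACE"` key encoding are assumptions.

```python
# Sketch: apply one key event to an editing state represented as
# (text, cursor_position).

def apply_key(text: str, cursor: int, key: str):
    """Insert a character, or handle backspace, at the cursor position."""
    if key == "BACKSPACE":
        if cursor > 0:
            return text[:cursor - 1] + text[cursor:], cursor - 1
        return text, cursor
    return text[:cursor] + key + text[cursor:], cursor + 1


state = ("", 0)
for key in ["k", "u", "n", "g", "BACKSPACE", "g"]:
    state = apply_key(*state, key)
print(state)  # -> ('kung', 4)
```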
  • the first device includes any of the following: a TV, a large screen, or a wearable device;
  • the second device or the third device includes any of the following: a mobile phone, a tablet, or a wearable device.
  • the method further includes: when input content is received in the third edit box, the first device synchronizes the input content to the first edit box.
  • the third device sending a preemption message to the first device includes: the third device receiving a preemption request from the second device; and the third device sending a preemption message to the first device based on the preemption request.
  • the third device sending the preemption message to the first device includes: the third device displays a second notification interface according to a user operation, the second notification interface including an option for confirming preemption; and in response to a triggering operation on the option for confirming preemption, the third device sends the preemption message to the first device.
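The confirmation gate above can be sketched as a single decision: the preemption message is only sent, and the assistant role only transferred, once the user triggers the confirm-preemption option. The function and parameter names are illustrative assumptions.

```python
# Sketch: preemption only takes effect after the user confirms it on the
# third device; otherwise no preemption message is sent.

def request_preemption(user_confirms: bool, current_assistant: str,
                       new_assistant: str) -> str:
    """Return which device assists after a (possibly declined) preemption."""
    if user_confirms:
        # The "confirm preemption" option was triggered: the preemption
        # message is sent and the auxiliary-input role is transferred.
        return new_assistant
    # Without confirmation, nothing is sent and nothing changes.
    return current_assistant


print(request_preemption(True, "phone", "tablet"))   # -> tablet
print(request_preemption(False, "phone", "tablet"))  # -> phone
```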
  • a second aspect of the embodiments of the present application provides a device communication method, which is applied to a system including a first device, a second device, and a third device.
  • the method includes: the second device displays a fourth interface including an option of the first device; in response to a selection operation on the option of the first device, the second device sends an instruction message to the first device; the first device displays a first interface including a first edit box; the second device displays a second interface including a second edit box; when there is an editing state in the second edit box, the first device synchronizes the editing state to the first edit box; the third device sends a preemption message to the first device; the third device displays a third interface including a third edit box; and the third edit box is synchronized with the editing state of the first edit box.
  • a third aspect of the embodiments of the present application provides a device communication method, which is applied to a first device.
  • the method includes: the first device displays a first interface including a first edit box; the first device sends an instruction message to the second device and the third device, where the instruction message is used to instruct the second device to display a second interface including a second edit box; when there is an editing state in the second edit box, the first device synchronizes the editing state to the first edit box; and the first device receives a preemption message from the third device.
  • a fourth aspect of the embodiments of the present application provides a device communication method, which is applied to a second device.
  • the method includes: the second device displays a fourth interface including an option of the first device; in response to a selection operation on the option of the first device, the second device sends an instruction message to the first device; the first device displays a first interface including a first edit box; the second device displays a second interface including a second edit box; when there is an editing state in the second edit box, the second device synchronizes the editing state to the first edit box; and the second device receives a preemption message from the third device.
  • a fifth aspect of the embodiments of the present application provides a device communication system, including a first device, a second device, and a third device, where the first device is configured to perform the steps performed by the first device in any one of the first aspect to the fourth aspect, the second device is configured to perform the steps performed by the second device in any one of the first aspect to the fourth aspect, and the third device is configured to perform the steps performed by the third device in any one of the first aspect to the fourth aspect.
  • a sixth aspect of the embodiments of the present application provides a first device, including: at least one memory and at least one processor; the memory is configured to store program instructions; and the processor is configured to call the program instructions in the memory to cause the first device to perform the steps performed by the first device in any one of the first aspect to the fourth aspect.
  • a seventh aspect of the embodiments of the present application provides a second device, including: at least one memory and at least one processor; the memory is configured to store program instructions; and the processor is configured to call the program instructions in the memory to cause the second device to perform the steps performed by the second device in any one of the first aspect to the fourth aspect.
  • an eighth aspect of the embodiments of the present application provides a third device, including: at least one memory and at least one processor; the memory is configured to store program instructions; and the processor is configured to call the program instructions in the memory to cause the third device to perform the steps performed by the third device in any one of the first aspect to the fourth aspect.
  • a ninth aspect of the embodiments of the present application provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor of the first device, the steps performed by the first device in any one of the first aspect to the fourth aspect are implemented; when the computer program is executed by a processor of the second device, the steps performed by the second device in any one of the first aspect to the fourth aspect are implemented; and when the computer program is executed by a processor of the third device, the steps performed by the third device in any one of the first aspect to the fourth aspect are implemented.
  • in the foregoing, the interaction of the first device, the second device, and the third device is used as an example to illustrate the device communication method; taking any one of the first device, the second device, or the third device as the executing device, the steps performed by that device in any of the above embodiments can be selected to obtain a single-sided implementation for the first device, the second device, or the third device, which will not be repeated here.
  • the functions of the second device and the third device are similar, and any step performed by the second device can also be applied to the third device, provided it does not conflict with the steps of the third device.
  • the display screen of each device may be used to implement the display steps, and the terms first interface, second interface, third interface, fourth interface, and the like are used to distinguish the different display interfaces of the devices. These interfaces may correspond to specific interfaces in the embodiments provided herein, in combination with the specific content of each embodiment, and the specific interfaces will not be repeated here.
  • a first aspect of the second embodiment of the present application provides a device communication method, which is applied to a system including a first device, a second device, and a third device.
  • the method includes: the first device displays a first interface including a first edit box; in response to a selection operation on the first edit box, the first device determines that the second device and the third device have joined the distributed networking; the first device displays a second interface including a first option corresponding to the second device and a second option corresponding to the third device; in response to a triggering operation on the first option, the first device sends an instruction message to the second device; the second device displays a third interface according to the instruction message, the third interface including a second edit box; and when there is an editing state in the second edit box, the editing state is synchronized to the first edit box.
  • in this way, the first device may provide a selection interface for choosing the second device or the third device and, upon receiving a selection of the second device, may send an instruction message to the second device instructing it to assist with input. The first device may thus refrain from sending an instruction message to the third device, so as to avoid disturbing the third device.
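The selection step above can be sketched as a small helper: only the device the user picks from the distributed networking receives the instruction message, and the others are left undisturbed. The function and device names are illustrative assumptions.

```python
# Sketch: choose which networked device receives the instruction message.

def devices_to_notify(networked, chosen):
    """Return the devices that should receive an instruction message."""
    if chosen not in networked:
        raise ValueError(f"{chosen!r} has not joined the distributed networking")
    return [chosen]  # only the selected assistant is contacted


networked = ["phone", "tablet"]
print(devices_to_notify(networked, "phone"))  # -> ['phone']
```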
  • the second device includes an interface service, and the interface service is used for synchronizing editing states between the first device and the second device. In this way, based on the interface service, the first device can synchronize with any editing state of the second device.
  • the editing state includes one or more of the following: text content, a cursor or a highlight mark of the text content.
  • the second device displaying the third interface according to the instruction message includes: the second device displays a notification interface in response to the instruction message, the notification interface including a third option for confirming the auxiliary input; and in response to a triggering operation on the third option, the second device displays the third interface.
  • the third interface further includes all or part of the content of the first interface. In this way, the user can see, on the second device, what is happening on the first device, which helps the user follow the state of the assisted first device.
  • the second edit box and all or part of the content of the first interface are displayed in separate layers, with the second edit box displayed on the upper layer, above all or part of the content of the first interface.
  • the method further includes: in response to a triggering operation on the second edit box, the second device displays a virtual keyboard; and according to an input operation received via the virtual keyboard and/or in the second edit box, the second device displays the editing state in the second edit box.
  • the first device includes any of the following: a TV, a large screen, or a wearable device;
  • the second device or the third device includes any of the following: a mobile phone, a tablet, or a wearable device.
  • a second aspect of the second embodiment of the present application provides a device communication method, which is applied to a system including a first device, a second device, and a third device.
  • the method includes: the first device displays a first interface including a first edit box; in response to a selection operation on the first edit box, the first device determines that the second device and the third device have joined the distributed networking; the first device determines that the second device is the auxiliary input device; the first device sends an instruction message to the second device; the second device displays a third interface according to the instruction message, the third interface including a second edit box; and when there is an editing state in the second edit box, the editing state is synchronized to the first edit box.
  • a third aspect of the second embodiment of the present application provides a device communication method, which is applied to a system including a first device, a second device, and a third device.
  • the method includes: the second device displays a fourth interface including an option of the first device; in response to a selection operation on the option of the first device, the second device sends an instruction message to the first device; the first device displays a first interface including a first edit box; the second device displays a third interface including a second edit box; and when there is an editing state in the second edit box, the editing state is synchronized to the first edit box.
  • a fourth aspect of the second embodiment of the present application provides a device communication method, which is applied to a first device.
  • the method includes: the first device displays a first interface including a first edit box; in response to a selection operation on the first edit box, the first device determines that the second device and the third device have joined the distributed networking; the first device displays a second interface including a first option corresponding to the second device and a second option corresponding to the third device; the first device sends an instruction message to the second device, where the instruction message is used to instruct the second device to display a third interface including a second edit box; and when there is an editing state in the second edit box, the editing state is synchronized to the first edit box.
  • a fifth aspect of the second embodiment of the present application provides a device communication method, which is applied to a second device.
  • the method includes: the second device displays a fourth interface including an option of the first device; in response to a selection operation on the option of the first device, the second device sends an instruction message to the first device, where the instruction message is used to instruct the first device to display a first interface including a first edit box; the second device displays a third interface including a second edit box; and when there is an editing state in the second edit box, the editing state is synchronized to the first edit box.
  • a sixth aspect of the second embodiment of the present application provides a device communication system, including a first device, a second device, and a third device, where the first device is configured to perform the steps performed by the first device in any one of the first aspect to the fifth aspect of the second embodiment, the second device is configured to perform the steps performed by the second device in any one of those aspects, and the third device is configured to perform the steps performed by the third device in any one of those aspects.
  • a twenty-seventh aspect of an embodiment of the present application provides a first device, including: at least one memory and at least one processor; the memory is used to store program instructions; the processor is used to call the program instructions in the memory to cause the first device to execute the steps performed by any first device according to the twenty-first aspect to the twenty-fifth aspect.
  • a twenty-eighth aspect of an embodiment of the present application provides a second device, including: at least one memory and at least one processor; the memory is used to store program instructions; the processor is used to call the program instructions in the memory to cause the second device to execute the steps performed by any second device according to the twenty-first aspect to the twenty-fifth aspect.
  • a twenty-ninth aspect of an embodiment of the present application provides a computer-readable storage medium, on which a computer program is stored, so that when the computer program is executed by a processor of a first device, the steps performed by any first device according to the twenty-first aspect to the twenty-fifth aspect are implemented; or, when the computer program is executed by a processor of a second device, the steps performed by any second device according to the twenty-first aspect to the twenty-fifth aspect are implemented; or, when the computer program is executed by a processor of a third device, the steps performed by any third device according to the twenty-first aspect to the twenty-fifth aspect are implemented.
  • the interaction of the first device, the second device, and the third device is used as an example to illustrate the specific device communication method; with any one of the first device, the second device, or the third device as the execution subject, the steps it performs in any of the above embodiments can be selected to obtain a single-sided implementation of the first device, the second device, or the third device, which will not be repeated here.
  • the functions of the second device and the third device are similar, and any steps executed by the second device can also be applied to the third device, provided they do not conflict with the steps of the third device.
  • the display screen of each device may be used to realize the display steps, and the first interface, the second interface, the third interface, the fourth interface, and the like are merely differentiated descriptions of the different display interfaces of each device.
  • the first interface, the second interface, the third interface, or the fourth interface may be mapped to the specific interfaces of the embodiments provided herein in combination with the specific content of each embodiment; the specific interfaces will not be repeated here.
  • a thirty-first aspect of the embodiments of the present application provides a device communication method, which is applied to a system including a first device and a second device.
  • the method includes: the first device displays a first interface including a first edit box; the first device sends an instruction message to the second device; the second device displays a second interface according to the instruction message, and the second interface includes a second edit box; if a keyword exists in the second edit box, the first device synchronizes the keyword to the first edit box; the first device determines the candidate words corresponding to the keyword; the second device obtains the candidate words and displays a third interface, where the third interface includes the candidate words.
  • the keywords input on the second device can be synchronized to the first device, and the candidate words associated by the first device based on the keywords can be synchronized to the second device, so that an operation on the second device selecting a candidate word of the first device realizes convenient and efficient auxiliary input to the first device.
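The keyword-to-candidate round trip can be sketched like this. The candidate dictionary and the function names are invented for the example; in the patent's setting the lookup runs on the first device and the result is synchronized back to the second device over the networking.

```python
# Illustrative round trip: the second device syncs a keyword to the first
# device, which resolves candidate words and returns them for display.

SCREEN_CANDIDATES = {  # invented thesaurus held by the first device
    "hel": ["hello", "help", "helmet"],
}

def first_device_lookup(keyword):
    """Runs on the first device: determine candidates for a synced keyword."""
    return SCREEN_CANDIDATES.get(keyword, [])

def second_device_input(keyword):
    """Runs on the second device: sync the keyword, then fetch candidates."""
    candidates = first_device_lookup(keyword)  # stands in for the network sync
    return candidates

candidates = second_device_input("hel")  # → ["hello", "help", "helmet"]
```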
  • the second device includes an interface service, and the interface service is used for synchronizing editing states between the first device and the second device. In this way, based on the interface service, any editing state of the second device can be synchronized to the first device.
  • the editing state includes one or more of the following: text content, a cursor or a highlight mark of the text content.
  • the second device displays the second interface according to the instruction message, including: the second device displays a notification interface in response to the instruction message; the notification interface includes an option for confirming the auxiliary input; in response to a triggering operation on the option, the second device displays the second interface.
  • the third interface further includes: all or part of the content of the first interface. In this way, the user can see on the second device what is happening on the first device, which helps the user follow the status of the first device being assisted.
  • the second edit box and all or part of the content of the first interface are displayed in separate layers, with the second edit box displayed on the upper layer of all or part of the content of the first interface.
  • the method further includes: in response to an operation triggering the second edit box, the second device displays a virtual keyboard; according to input operations received through the virtual keyboard and/or in the second edit box, the second device displays the edit state in the second edit box.
  • the first device includes any of the following: a TV, a large screen, or a wearable device;
  • the second device includes any of the following: a mobile phone, a tablet, or a wearable device.
  • the third interface also includes local candidate words associated by the second device based on the keywords, and the display modes of the candidate words and the local candidate words on the third interface include any of the following: the candidate words and the local candidate words are displayed in separate columns in the third interface; the candidate words are displayed before the local candidate words in the third interface; the candidate words are displayed after the local candidate words in the third interface; the candidate words and the local candidate words are displayed mixed in the third interface; the candidate words and the local candidate words are distinguished by different identifiers in the third interface.
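The combination of the first device's candidates with the second device's local candidates can be sketched as below. The mode names are invented labels for the display modes listed above; the column layout is a pure UI concern and is left out.

```python
# Sketch of combining the first device's candidates ("remote") with the
# second device's own candidates ("local") under the listed display modes.

def merge_candidates(remote, local, mode):
    if mode == "remote_first":   # remote candidates shown before local ones
        return remote + [w for w in local if w not in remote]
    if mode == "local_first":    # remote candidates shown after local ones
        return local + [w for w in remote if w not in local]
    if mode == "mixed":          # interleave the two lists
        merged = []
        for pair in zip(remote, local):
            merged.extend(pair)
        merged.extend(remote[len(local):] or local[len(remote):])
        return merged
    if mode == "tagged":         # distinguish the sources by identifiers
        return [("remote", w) for w in remote] + [("local", w) for w in local]
    raise ValueError(f"unknown mode: {mode}")
```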
  • the ranking of the candidate words is related to historical user behaviors in the first device.
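One simple way to realize history-dependent ranking is a frequency heuristic: words the user has chosen more often on the first device rank earlier. The history list below is invented sample data, and the heuristic itself is only one possible interpretation of "related to historical user behaviors".

```python
# Sketch of ordering candidates by historical user behavior on the first
# device: words chosen more often in the past rank earlier.

from collections import Counter

def rank_candidates(candidates, history):
    freq = Counter(history)  # how often each word was chosen before
    # sorted() is stable, so equally frequent words keep their original order
    return sorted(candidates, key=lambda w: -freq[w])

history = ["hello", "hello", "helmet"]
ranked = rank_candidates(["help", "hello", "helmet"], history)
# → ["hello", "helmet", "help"]
```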
  • the method further includes: in response to a user triggering any candidate word, the second device displays any candidate word in the second edit box.
  • a thirty-second aspect of the embodiments of the present application provides a device communication method, which is applied to a system including a first device and a second device.
  • the method includes: the second device displays a fourth interface including options of the first device; in response to a selection operation on an option of the first device, the second device sends an instruction message to the first device; the first device displays a first interface including a first edit box; the second device displays a second interface, and the second interface includes a second edit box;
  • if a keyword exists in the second edit box, the first device synchronizes the keyword to the first edit box; the first device determines the candidate words corresponding to the keyword; the second device obtains the candidate words and displays a third interface, where the third interface includes the candidate words.
  • a thirty-third aspect of the embodiments of the present application provides a device communication method, which is applied to a first device.
  • the method includes: the first device displays a first interface including a first edit box; the first device sends an instruction message to the second device; the instruction message is used to instruct the second device to display a second interface, and the second interface includes a second edit box; if a keyword exists in the second edit box, the first device synchronizes the keyword to the first edit box; the first device determines the candidate words corresponding to the keyword; the first device synchronizes the candidate words to the second device.
  • a thirty-fourth aspect of the embodiments of the present application provides a device communication method, which is applied to a second device.
  • the method includes: the second device receives an instruction message from the first device, where the first device displays a first interface including a first edit box; the second device displays a second interface according to the instruction message, and the second interface includes a second edit box; if a keyword exists in the second edit box, the second device synchronizes the keyword to the first edit box for the first device to determine the candidate words corresponding to the keyword; the second device acquires the candidate words and displays a third interface, where the third interface includes the candidate words.
  • a thirty-fifth aspect of an embodiment of the present application provides a device communication method, which is applied to a second device.
  • the method includes: the second device displays a fourth interface including options of the first device; in response to a selection operation on the options of the first device, the second device sends an instruction message to the first device; the instruction message is used to instruct the first device to display the first interface including the first edit box; the second device displays the second interface, and the second interface includes the second edit box; if a keyword exists in the second edit box, the second device synchronizes the keyword to the first edit box for the first device to determine the candidate words corresponding to the keyword; the second device obtains the candidate words and displays the third interface, where the third interface includes the candidate words.
  • a thirty-sixth aspect of an embodiment of the present application provides a device communication system, including a first device and a second device, where the first device is configured to perform the steps of any first device according to the thirty-first aspect to the thirty-fifth aspect, and the second device is configured to perform the steps of any second device according to the thirty-first aspect to the thirty-fifth aspect.
  • a thirty-seventh aspect of an embodiment of the present application provides a first device, including: at least one memory and at least one processor; the memory is used to store program instructions; the processor is used to call the program instructions in the memory to cause the first device to execute the steps performed by any first device according to the thirty-first aspect to the thirty-fifth aspect.
  • a thirty-eighth aspect of an embodiment of the present application provides a second device, including: at least one memory and at least one processor; the memory is used to store program instructions; the processor is used to call the program instructions in the memory to cause the second device to execute the steps performed by any second device according to the thirty-first aspect to the thirty-fifth aspect.
  • a thirty-ninth aspect of an embodiment of the present application provides a computer-readable storage medium, on which a computer program is stored, so that when the computer program is executed by a processor of a first device, the steps performed by any first device according to the thirty-first aspect to the thirty-fifth aspect are implemented; or, when the computer program is executed by a processor of a second device, the steps performed by any second device according to the thirty-first aspect to the thirty-fifth aspect are implemented; or, when the computer program is executed by a processor of a third device, the steps performed by any third device according to the thirty-first aspect to the thirty-fifth aspect are implemented.
  • the interaction between the first device and the second device is used as an example to illustrate the specific device communication method; with the first device or the second device as the execution subject, the steps it performs can be selected to obtain a single-sided implementation of the first device or the second device, which will not be repeated here.
  • the display screen of each device may be used to realize the display steps, and the first interface, the second interface, the third interface, the fourth interface, and the like are merely differentiated descriptions of the different display interfaces of each device.
  • the first interface, the second interface, the third interface, or the fourth interface may be mapped to the specific interfaces of the embodiments provided herein in combination with the specific content of each embodiment; the specific interfaces will not be repeated here.
  • a forty-first aspect of the embodiments of the present application provides a device communication method, which is applied to a system including a first device, a second device, and a third device.
  • the method includes: the first device, the second device, and the third device access a distributed networking; the second device obtains target candidate words, where the target candidate words belong neither to the candidate thesaurus of the first device nor to the candidate thesaurus of the third device; the first device receives a keyword related to the target candidate words input by the user, and the first device displays the target candidate words; and/or, the third device receives a keyword related to the target candidate words input by the user, and the third device displays the target candidate words.
  • the first device, the second device, and the third device can access the distributed networking and synchronize their respective candidate word banks with each other, so that efficient and convenient input can be implemented based on the synchronized candidate word banks.
  • the method further includes: synchronizing the respective candidate word libraries among the first device, the second device and the third device.
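The thesaurus synchronization can be sketched as a union across the networked devices: after syncing, every device also holds the other devices' candidate words. The device names and vocabulary below are invented sample data.

```python
# Sketch of candidate-thesaurus synchronization inside the distributed
# networking: every device ends up with the union of all thesauruses.

def synchronize(devices):
    """`devices` maps a device name to its candidate thesaurus (a set)."""
    union = set().union(*devices.values())
    return {name: set(union) for name in devices}

network = {
    "large_screen": {"hello"},
    "phone": {"holiday"},
    "tablet": {"helmet"},
}
synced = synchronize(network)
# every device can now offer "hello", "holiday" and "helmet" as candidates
```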
  • the method further includes: when the first device, the second device, or the third device exits the distributed networking, displaying a prompt interface of whether to delete the candidate thesaurus synchronized from the distributed networking; the prompt interface includes an option for indicating deletion and an option for indicating no deletion; in response to a triggering operation on the option indicating deletion, the first device, the second device, or the third device deletes the candidate thesaurus synchronized from the other devices; or, in response to a triggering operation on the option indicating no deletion, the first device, the second device, or the third device retains the candidate thesaurus synchronized from the distributed networking.
  • the method further includes: the first device, the second device, or the third device determines its respective access type; when the first device, the second device, or the third device exits the distributed networking, the first device, the second device, or the third device determines, according to its respective access type, whether to delete the candidate thesaurus synchronized from the distributed networking.
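The access-type-dependent exit policy can be sketched as below. The labels "trusted" and "temporary" are invented for the example; the text only says the decision depends on the device's access type.

```python
# Sketch of the exit policy: a device decides, based on its access type,
# whether to keep the words it synchronized from the networking when it
# leaves. The access-type labels are invented for the example.

def on_exit(own_words, synced_words, access_type):
    """Return the thesaurus a device keeps after leaving the networking."""
    if access_type == "temporary":  # e.g. a guest device: drop synced words
        return set(own_words)
    return set(own_words) | set(synced_words)  # e.g. a trusted device: keep them
```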
  • the method further includes: the first device displays a first interface including the first edit box; the first device sends an instruction message to the second device; the second device displays a second interface according to the instruction message, and the second interface includes a second edit box; if an edit state exists in the second edit box, the edit state is synchronized to the first edit box.
  • the second device includes an interface service, and the interface service is used for synchronizing editing states between the first device and the second device. In this way, based on the interface service, any editing state of the second device can be synchronized to the first device.
  • the editing state includes one or more of the following: text content, a cursor or a highlight mark of the text content.
  • the second device displays the second interface according to the instruction message, including: the second device displays a notification interface in response to the instruction message; the notification interface includes an option for confirming the auxiliary input; in response to a triggering operation on the option, the second device displays the second interface.
  • the second interface further includes: all or part of the content of the first interface.
  • the second edit box and all or part of the content of the first interface are displayed in separate layers, with the second edit box displayed on the upper layer of all or part of the content of the first interface.
  • the method further includes: in response to an operation triggering the second edit box, the second device displays a virtual keyboard; according to input operations received through the virtual keyboard and/or in the second edit box, the second device displays the edit state in the second edit box.
  • the first device includes any of the following: a TV, a large screen, or a wearable device;
  • the second device or the third device includes any of the following: a mobile phone, a tablet, or a wearable device.
  • the second device displays a fourth interface including options of the first device; in response to a selection operation on the options of the first device, the second device sends an instruction message to the first device; the first device displays a first interface including a first edit box; the second device displays a second interface, and the second interface includes a second edit box; if an edit state exists in the second edit box, the edit state is synchronized to the first edit box.
  • a forty-second aspect of an embodiment of the present application provides a device communication method, which is applied to a system including a first device, a second device, and a third device.
  • the method includes: the first device, the second device, and the third device access a distributed networking; the first device, the second device, and the third device synchronize their respective candidate thesauruses with each other to obtain a candidate thesaurus set; when the first device, the second device, or the third device performs text editing, the first device, the second device, or the third device displays candidate words according to the candidate thesaurus set.
  • a forty-third aspect of the embodiments of the present application provides a device communication method, which is applied to a first device, including: the first device accesses a distributed networking, to which other devices are also connected; the first device synchronizes the candidate thesauruses of the other devices through the distributed networking to obtain a candidate thesaurus set; when the first device performs text editing, the first device displays candidate words according to the candidate thesaurus set.
  • a forty-fourth aspect of an embodiment of the present application provides a device communication system, including a first device, a second device, and a third device, where the first device is configured to perform the steps of any first device according to the forty-first aspect to the forty-third aspect, the second device is configured to perform the steps of any second device according to the forty-first aspect to the forty-third aspect, and the third device is configured to perform the steps of any third device according to the forty-first aspect to the forty-third aspect.
  • a forty-fifth aspect of an embodiment of the present application provides a first device, including: at least one memory and at least one processor; the memory is used to store program instructions; the processor is used to call the program instructions in the memory to cause the first device to execute the steps performed by any first device according to the forty-first aspect to the forty-third aspect.
  • a forty-sixth aspect of the embodiments of the present application provides a second device, including: at least one memory and at least one processor; the memory is used to store program instructions; the processor is used to call the program instructions in the memory to cause the second device to execute the steps performed by any second device according to the forty-first aspect to the forty-third aspect.
  • a forty-seventh aspect of the embodiments of the present application provides a computer-readable storage medium, on which a computer program is stored, so that when the computer program is executed by a processor of a first device, the steps performed by any first device according to the forty-first aspect to the forty-third aspect are implemented; or, when the computer program is executed by a processor of a second device, the steps performed by any second device according to the forty-first aspect to the forty-third aspect are implemented; or, when the computer program is executed by a processor of a third device, the steps performed by any third device according to the forty-first aspect to the forty-third aspect are implemented.
  • the interaction of the first device, the second device, and the third device is used as an example to illustrate the specific device communication method; with any one of the first device, the second device, or the third device as the execution subject, the steps it performs in any of the foregoing embodiments can be selected to obtain a single-sided implementation of the first device, the second device, or the third device, which will not be repeated here.
  • the functions of the second device and the third device are similar, and any steps executed by the second device can also be applied to the third device, provided they do not conflict with the steps of the third device.
  • the display screen of each device may be used to realize the display steps, and the first interface, the second interface, the third interface, the fourth interface, and the like are merely differentiated descriptions of the different display interfaces of each device.
  • the first interface, the second interface, the third interface, or the fourth interface may be mapped to the specific interfaces of the embodiments provided herein in combination with the specific content of each embodiment; the specific interfaces will not be repeated here.
  • a fifty-first aspect of the embodiments of the present application provides a device communication method, which is applied to a system including a first device, a second device, and a third device.
  • the method includes: the first device displays a first interface including a first edit box; the first device sends an instruction message to the second device and the third device; the second device displays a second interface according to the instruction message, and the second interface includes a second edit box; the third device displays a third interface according to the instruction message, and the third interface includes a third edit box; when an edit state exists in the second edit box, the first device synchronizes the edit state to the first edit box, and the third device synchronizes the edit state to the third edit box; or, when an edit state exists in the third edit box, the first device synchronizes the edit state to the first edit box, and the second device synchronizes the edit state to the second edit box; or, when an edit state exists in the first edit box, the second device synchronizes the edit state to the second edit box, and the third device synchronizes the edit state to the third edit box.
  • the second device and the third device may jointly assist the input of the first device, thereby realizing convenient and efficient input.
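The three-way joint assistance can be sketched as a group in which an edit in any one box is broadcast to the other two, so all boxes always show the same state. `SyncGroup` is an invented name for the example.

```python
# Sketch of the three-way case: an edit in any one of the three edit boxes
# is synchronized to the other two boxes in the group.

class SyncGroup:
    def __init__(self, names):
        self.boxes = {name: "" for name in names}

    def edit(self, name, text):
        """Apply an edit in box `name`, then broadcast it to every box."""
        if name not in self.boxes:
            raise KeyError(name)
        for box in self.boxes:
            self.boxes[box] = text

group = SyncGroup(["first_edit_box", "second_edit_box", "third_edit_box"])
group.edit("second_edit_box", "hello from the phone")
# the first and third edit boxes now also contain "hello from the phone"
```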
  • the second device includes an interface service, and the interface service is used for synchronizing editing states between the first device and the second device. In this way, based on the interface service, any editing state of the second device can be synchronized to the first device.
  • the editing state includes one or more of the following: text content, a cursor or a highlight mark of the text content.
  • the second device displays the second interface according to the instruction message, including: the second device displays a notification interface in response to the instruction message; the notification interface includes an option for confirming the auxiliary input; in response to a triggering operation on the option, the second device displays the second interface.
  • the second interface further includes: all or part of the content of the first interface. In this way, the user can see on the second device what is happening on the first device, which helps the user follow the status of the first device being assisted.
  • the second edit box and all or part of the content of the first interface are displayed in separate layers, with the second edit box displayed on the upper layer of all or part of the content of the first interface.
  • the method further includes: in response to an operation triggering the second edit box, the second device displays a virtual keyboard; according to input operations received through the virtual keyboard and/or in the second edit box, the second device displays the edit state in the second edit box.
  • the first device includes any of the following: a TV, a large screen, or a wearable device;
  • the second device or the third device includes any of the following: a mobile phone, a tablet, or a wearable device.
  • the editing state in the first editing box includes the identification of the first device
  • the editing state in the second editing box includes the identification of the second device
  • the editing state in the third editing box includes the identification of the third device.
  • the first device decides the input content of the second edit box and the display mode of the third edit box.
  • a fifty-second aspect of an embodiment of the present application provides a device communication method, which is applied to a system including a first device, a second device, and a third device.
  • the method includes: the first device displays a first interface including a first edit box; the first device sends an instruction message to the second device; the second device displays a second interface according to the instruction message, and the second interface includes a second edit box; the second device sends an auxiliary input request to the third device; the third device displays a third interface according to the auxiliary input request, and the third interface includes a third edit box; when an edit state exists in the second edit box, the first device synchronizes the edit state to the first edit box, and the third device synchronizes the edit state to the third edit box; or, when an edit state exists in the third edit box, the first device synchronizes the edit state to the first edit box, and the second device synchronizes the edit state to the second edit box; or, when an edit state exists in the first edit box, the second device synchronizes the edit state to the second edit box, and the third device synchronizes the edit state to the third edit box.
  • a fifty-third aspect of an embodiment of the present application provides a device communication method, which is applied to a system including a first device, a second device, and a third device.
  • the method includes: the second device displays a fourth interface including options of the first device; in response to a selection operation on the option of the first device, the second device sends an instruction message to the first device; the first device displays a first interface including a first edit box; the second device displays a second interface, and the second interface includes a second edit box; the second device sends an auxiliary input request to the third device; the third device displays a third interface according to the auxiliary input request, and the third interface includes a third edit box; when an edit state exists in the second edit box, the first device synchronizes the edit state to the first edit box, and the third device synchronizes the edit state to the third edit box; or, when an edit state exists in the third edit box, the first device synchronizes the edit state to the first edit box, and the second device synchronizes the edit state to the second edit box; or, when an edit state exists in the first edit box, the second device synchronizes the edit state to the second edit box, and the third device synchronizes the edit state to the third edit box.
  • a fifty-fourth aspect of an embodiment of the present application provides a device communication method, which is applied to a first device.
  • the method includes: the first device displays a first interface including a first edit box; the first device sends an instruction message to the second device and the third device, for the second device to display a second interface according to the instruction message, where the second interface includes a second edit box, and for the third device to display a third interface according to the instruction message, where the third interface includes a third edit box; when an edit state exists in the second edit box, the first device synchronizes the edit state to the first edit box; or, when an edit state exists in the third edit box, the first device synchronizes the edit state to the first edit box.
  • a fifty-fifth aspect of an embodiment of the present application provides a device communication method, which is applied to a second device.
  • the method includes: the second device displays a fourth interface including options of the first device; in response to a selection operation on the options of the first device, the second device sends an instruction message to the first device, for the first device to display the first interface including the first edit box; the second device displays a second interface, and the second interface includes a second edit box; the second device sends an auxiliary input request to the third device, for the third device to display a third interface according to the auxiliary input request, where the third interface includes a third edit box; when an edit state exists in the third edit box, the second device synchronizes the edit state to the second edit box; or, when an edit state exists in the first edit box, the second device synchronizes the edit state to the second edit box.
  • a fifty-sixth aspect of an embodiment of the present application provides a device communication system, including a first device, a second device, and a third device, where the first device is configured to perform the steps of any first device according to the fifty-first aspect to the fifty-fifth aspect, the second device is configured to perform the steps of any second device according to the fifty-first aspect to the fifty-fifth aspect, and the third device is configured to perform the steps of any third device according to the fifty-first aspect to the fifty-fifth aspect.
  • a fifty-seventh aspect of an embodiment of the present application provides a first device, including: at least one memory and at least one processor; the memory is used to store program instructions; the processor is used to call the program instructions in the memory to cause the first device to execute the steps performed by any first device according to the fifty-first aspect to the fifty-fifth aspect.
  • a fifty-eighth aspect of an embodiment of the present application provides a second device, including: at least one memory and at least one processor; the memory is used to store program instructions; the processor is used to call the program instructions in the memory to cause the second device to execute the steps performed by any second device according to the fifty-first aspect to the fifty-fifth aspect.
  • a fifty-ninth aspect of the embodiments of the present application provides a computer-readable storage medium, on which a computer program is stored, so that when the computer program is executed by a processor of a first device, the steps performed by any first device according to the fifty-first aspect to the fifty-fifth aspect are implemented; or, when the computer program is executed by a processor of a second device, the steps performed by any second device according to the fifty-first aspect to the fifty-fifth aspect are implemented; or, when the computer program is executed by a processor of a third device, the steps performed by any third device according to the fifty-first aspect to the fifty-fifth aspect are implemented.
• the interaction among the first device, the second device and the third device is used as an example to illustrate the specific device communication method; with any one of the first device, the second device or the third device as the execution subject, the steps executed by that device in any of the foregoing embodiments may be selected to obtain a single-device implementation of the first device, the second device, or the third device, which will not be repeated here.
• the functions of the second device and the third device are similar, and any step executed by the second device can also be applied to the third device, provided that it does not conflict with the steps of the third device.
• the display screen of each device may be used to implement the display steps, and the first interface, the second interface, the third interface and the fourth interface described in the above embodiments are merely differentiated descriptions of the different display interfaces of the respective devices.
• the first interface, the second interface, the third interface or the fourth interface can be mapped to the specific interfaces in the embodiments provided herein in combination with the specific content of each embodiment; the specific interfaces will not be repeated here.
• FIG. 1 shows a schematic diagram of the architecture of a communication system provided by an embodiment of the present application.
• FIG. 2 shows a schematic diagram of the architecture of another communication system provided by an embodiment of the present application.
• FIG. 3 shows a schematic diagram of the architecture of another communication system provided by an embodiment of the present application.
• FIG. 4 shows a functional schematic block diagram of a first device provided by an embodiment of the present application.
• FIG. 5 shows a functional schematic block diagram of a second device provided by an embodiment of the present application.
• FIG. 6 shows a schematic diagram of the software architecture of a first device and a second device provided by an embodiment of the present application.
  • FIG. 7 shows a schematic diagram of a system architecture of a device communication method provided by an embodiment of the present application.
  • FIG. 8 shows a schematic diagram of a specific system architecture of a device communication method provided by an embodiment of the present application.
  • FIG. 9 shows a schematic diagram of a large-screen user interface provided by an embodiment of the present application.
  • FIG. 10 shows a schematic diagram of a user interface of a mobile phone provided by an embodiment of the present application.
  • FIG. 11 shows a schematic diagram of another mobile phone user interface provided by an embodiment of the present application.
  • FIG. 12 shows a schematic diagram of a user interface provided by an embodiment of the present application.
  • FIG. 13 shows another schematic diagram of a mobile phone user interface provided by an embodiment of the present application.
  • FIG. 14 shows another schematic diagram of a large-screen user interface provided by an embodiment of the present application.
  • FIG. 15 shows another schematic diagram of a mobile phone user interface provided by an embodiment of the present application.
  • FIG. 16 shows another schematic diagram of a large-screen user interface provided by an embodiment of the present application.
  • FIG. 17 shows another schematic diagram of a mobile phone user interface provided by an embodiment of the present application.
  • FIG. 18 shows a schematic diagram of another mobile phone user interface provided by an embodiment of the present application.
  • FIG. 19 shows another schematic diagram of a mobile phone user interface provided by an embodiment of the present application.
  • FIG. 20 shows a schematic diagram of another mobile phone user interface provided by an embodiment of the present application.
  • FIG. 21 shows another schematic diagram of a user interface provided by an embodiment of the present application.
  • FIG. 22 shows another schematic diagram of a user interface provided by an embodiment of the present application.
• FIG. 23 shows a schematic flowchart of a specific flow of communication between a mobile phone and a large screen provided by an embodiment of the present application.
  • FIG. 24 is a schematic structural diagram of a device provided by an embodiment of the application.
• FIG. 25 is a schematic structural diagram of another device according to an embodiment of the application.
  • FIG. 26 shows a schematic diagram of a system architecture of another device communication method provided by an embodiment of the present application.
  • FIG. 27 shows a schematic diagram of a user interface of a large screen provided by an embodiment of the present application.
  • FIG. 28 shows a schematic diagram of a user interface of a mobile phone provided by an embodiment of the present application.
  • FIG. 29 shows a schematic diagram of a user interface of a large screen provided by an embodiment of the present application.
  • FIG. 30 shows a schematic diagram of a user interface of another mobile phone provided by an embodiment of the present application.
  • FIG. 31 shows a schematic diagram of a user interface of another mobile phone provided by an embodiment of the present application.
• FIG. 32 shows a schematic diagram of a mobile phone auxiliary large-screen input interface provided by an embodiment of the present application.
  • FIG. 33 shows a schematic diagram of another mobile phone auxiliary large-screen input interface provided by an embodiment of the present application.
  • FIG. 34 shows a schematic diagram of a mobile phone auxiliary large-screen input interface provided by an embodiment of the present application.
  • FIG. 35 shows a schematic diagram of a mobile phone interface provided by an embodiment of the present application.
• FIG. 36 is a schematic structural diagram of a device provided by an embodiment of the application.
  • FIG. 37 is a schematic structural diagram of another device according to an embodiment of the application.
• FIG. 38 shows a schematic diagram of a user interface for communication between a mobile phone and a large screen provided by an embodiment of the present application.
  • FIG. 39 shows a schematic diagram of a specific system architecture of a device communication method provided by an embodiment of the present application.
• FIG. 40 shows a schematic flowchart of communication between a mobile phone and a large screen provided by an embodiment of the present application.
  • FIG. 41 shows a schematic diagram of a large-screen user interface provided by an embodiment of the present application.
  • FIG. 42 shows a schematic diagram of a user interface of a mobile phone provided by an embodiment of the present application.
  • FIG. 43 shows another schematic diagram of a mobile phone user interface provided by an embodiment of the present application.
  • FIG. 44 shows another schematic diagram of a large-screen user interface provided by an embodiment of the present application.
  • FIG. 45 shows a schematic diagram of a user interface of a mobile phone provided by an embodiment of the present application.
  • FIG. 46 shows a schematic diagram of another mobile phone user interface provided by an embodiment of the present application.
  • FIG. 47 shows another schematic diagram of a mobile phone user interface provided by an embodiment of the present application.
  • FIG. 48 shows another schematic diagram of a mobile phone user interface provided by an embodiment of the present application.
  • FIG. 49 shows another schematic diagram of a mobile phone user interface provided by an embodiment of the present application.
  • FIG. 50 shows another schematic diagram of a user interface of a mobile phone provided by an embodiment of the present application.
• FIG. 51 shows a schematic flowchart of a specific flow of communication between a mobile phone and a large screen provided by an embodiment of the present application.
  • FIG. 52 is a schematic structural diagram of a device provided by an embodiment of the application.
• FIG. 53 is a schematic structural diagram of another device according to an embodiment of the application.
  • FIG. 54 shows a schematic diagram of a specific application scenario of an embodiment of the present application.
  • FIG. 55 shows a schematic diagram of a specific system architecture of a device communication method provided by an embodiment of the present application.
  • FIG. 56 shows a schematic diagram of a user interface of a mobile phone provided by an embodiment of the present application.
  • FIG. 57 shows a schematic diagram of another mobile phone user interface provided by an embodiment of the present application.
  • FIG. 58 shows a schematic diagram of a large-screen user interface provided by an embodiment of the present application.
  • FIG. 59 shows another schematic diagram of a mobile phone user interface provided by an embodiment of the present application.
• FIG. 60 shows a schematic diagram of a user interface for communication between a mobile phone and a large screen provided by an embodiment of the present application.
  • FIG. 61 shows another schematic diagram of a large-screen user interface provided by an embodiment of the present application.
  • FIG. 62 shows a schematic diagram of a user interface of a mobile phone provided by an embodiment of the present application.
  • FIG. 63 shows a schematic diagram of a user interface of a mobile phone provided by an embodiment of the present application.
  • FIG. 64 shows another schematic diagram of a user interface of a mobile phone provided by an embodiment of the present application.
  • FIG. 65 shows another schematic diagram of a user interface of a mobile phone provided by an embodiment of the present application.
• FIG. 66 is a schematic structural diagram of a device provided by an embodiment of the application.
  • FIG. 67 is a schematic structural diagram of another device according to an embodiment of the application.
  • FIG. 68 shows a schematic diagram of a system architecture of another device communication method provided by an embodiment of the present application.
  • FIG. 69 shows a schematic diagram of a large-screen interface provided by an embodiment of the present application.
  • FIG. 70 shows a schematic diagram of a mobile phone interface provided by an embodiment of the present application.
  • FIG. 71 shows another schematic diagram of a mobile phone interface provided by an embodiment of the present application.
  • FIG. 72 shows a schematic diagram of a mobile phone interface provided by an embodiment of the present application.
  • FIG. 73 shows another schematic diagram of a mobile phone interface provided by an embodiment of the present application.
  • FIG. 74 shows a schematic diagram of another mobile phone interface provided by an embodiment of the present application.
  • FIG. 75 shows a schematic diagram of a system architecture of another device communication method provided by an embodiment of the present application.
  • FIG. 76 shows a schematic diagram of a mobile phone auxiliary large-screen input interface provided by an embodiment of the present application.
  • FIG. 77 shows a schematic diagram of another mobile phone auxiliary large-screen input interface provided by an embodiment of the present application.
  • FIG. 78 shows a schematic diagram of a mobile phone auxiliary large-screen input interface provided by an embodiment of the present application.
  • FIG. 79 shows a schematic diagram of a mobile phone auxiliary large-screen input interface provided by an embodiment of the present application.
  • FIG. 80 shows a schematic diagram of a mobile phone auxiliary large-screen input interface provided by an embodiment of the present application.
  • FIG. 81 shows a schematic diagram of a system architecture of another device communication method provided by an embodiment of the present application.
  • FIG. 82 shows a schematic diagram of a mobile phone auxiliary large-screen input interface provided by an embodiment of the present application.
  • FIG. 83 shows a schematic diagram of generating a circular chain provided by an embodiment of the present application.
  • FIG. 84 shows a schematic diagram of a system architecture of another device communication method provided by an embodiment of the present application.
  • FIG. 85 shows a schematic diagram of a mobile phone interface provided by an embodiment of the present application.
  • FIG. 86 is a schematic structural diagram of a device provided by an embodiment of the application.
  • FIG. 87 is a schematic structural diagram of still another device according to an embodiment of the present application.
  • words such as “first” and “second” are used to distinguish the same or similar items with basically the same function and effect.
  • the first device and the second device are only used to distinguish different devices, and the sequence of the first device and the second device is not limited.
  • the words “first”, “second” and the like do not limit the quantity and execution order, and the words “first”, “second” and the like are not necessarily different.
• “At least one” means one or more.
• “Plural” means two or more.
• “And/or” describes an association relationship between associated objects and indicates that three relationships may exist; for example, “A and/or B” may indicate the following three cases: A exists alone, both A and B exist, and B exists alone, where A and B may be singular or plural.
• the character “/” generally indicates that the associated objects are in an “or” relationship.
• “at least one item(s) of the following” or similar expressions thereof refer to any combination of these items, including any combination of a single item or a plurality of items.
• for example, “at least one of a, b, or c” can represent: a, b, c, a-b, a-c, b-c, or a-b-c, where a, b, and c may be single or multiple.
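The seven listed combinations are exactly the non-empty subsets of {a, b, c}; a short Python check makes this concrete:

```python
from itertools import combinations

items = ["a", "b", "c"]
# Every non-empty combination of a, b, and c, in the order listed above.
subsets = ["-".join(c) for n in range(1, len(items) + 1)
           for c in combinations(items, n)]
print(subsets)
# ['a', 'b', 'c', 'a-b', 'a-c', 'b-c', 'a-b-c']
```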
  • FIG. 1 shows a schematic diagram of the architecture of a communication system provided by an embodiment of the present application.
  • the communication system may include: a first device 101 and a second device 102 .
• the first device 101 may be a device on which it is relatively inconvenient for the user to edit content such as text, or may be understood as an assisted device with weak input capability, such as a TV, a smart screen (or large screen), a smart watch, and the like.
  • the first device 101 may further include a camera (not shown in the figure), etc., and the first device 101 is not specifically limited in this embodiment of the present application.
• when inputting content such as text on the first device 101, the user needs to use the remote control 103 to select pinyin letters in sequence and press the confirmation key, which is cumbersome and inefficient.
• the second device 102 may be a device on which it is more convenient for the user to edit text and other content, or may be understood as an auxiliary device with strong input capability, such as a mobile phone, a tablet, or a computer.
  • the specific type of the second device 102 is not limited.
• the embodiments of the present application take a large screen as an example of the first device 101 and a mobile phone as an example of the second device 102 for illustration.
  • the mobile phone and the large screen can be wired or wirelessly connected.
  • the wireless connection may include a wireless fidelity (wireless fidelity, Wi-Fi) connection, a Bluetooth connection, or a ZigBee connection, etc., which is not limited in this embodiment of the present application.
• the user can assist input on the large screen through the mobile phone.
• when the user moves the selection with the remote control of the large screen and selects the edit box of the large screen, a dialog box for auxiliary input can pop up on the mobile phone that communicates with the large screen.
  • the content such as text input on the keyboard can be displayed on the large screen.
  • functions such as searching for programs on the large screen can be realized according to the content entered by the user in the mobile phone.
  • FIG. 2 shows a schematic diagram of the architecture of another communication system provided by an embodiment of the present application.
  • the communication system may include: a large screen 201 , a first mobile phone 202 and a second mobile phone 203 .
  • the large screen 201 , the first mobile phone 202 and the second mobile phone 203 are in a distributed network, and the distributed network can support the large screen 201 , the first mobile phone 202 and the second mobile phone 203 to realize communication connection.
  • one client can connect to multiple servers at the same time for distributed input, and one server can also be connected by multiple clients at the same time.
• the large screen 201 with weak input capability can serve as a distributed input client, and the first mobile phone 202 and the second mobile phone 203 with strong input capability can serve as distributed input servers.
  • the large screen 201 , the first mobile phone 202 and the second mobile phone 203 can mutually implement one or more functions such as device discovery, device connection or data transmission.
• after the first mobile phone 202 and the second mobile phone 203 join the distributed networking, mutual device discovery and device connection can be implemented. Furthermore, the first mobile phone 202 and the second mobile phone 203 can simultaneously assist the large screen 201 to input content such as text. Alternatively, the first mobile phone 202 and the second mobile phone 203 can respectively assist the large screen 201 to input content such as text. Alternatively, when one of the first mobile phone 202 or the second mobile phone 203 assists input on the large screen 201, the other mobile phone may preempt the input. Alternatively, the large screen 201 can select the first mobile phone 202 or the second mobile phone 203 for auxiliary input, and so on. The specific processes of auxiliary input, preemptive input and the like will be described in detail in subsequent embodiments, and will not be repeated here.
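The client/server roles and preemptive input described above can be sketched as follows; the `DistributedInputClient` class, its method names, and the preemption rule (the most recent assisting server becomes active) are illustrative assumptions, not the actual distributed networking implementation.

```python
class DistributedInputClient:
    """Sketch of a device with weak input capability (e.g. the large
    screen) that accepts auxiliary input from several servers."""

    def __init__(self, name: str):
        self.name = name
        self.servers = []        # connected auxiliary-input devices
        self.active = None       # server currently assisting input
        self.text = ""

    def connect(self, server: str) -> None:
        # One client may be connected by multiple servers at the same time.
        if server not in self.servers:
            self.servers.append(server)

    def assist(self, server: str, text: str) -> None:
        # A connected server starts (or preempts) auxiliary input.
        if server in self.servers:
            self.active = server
            self.text += text

screen = DistributedInputClient("large screen 201")
screen.connect("first mobile phone 202")
screen.connect("second mobile phone 203")
screen.assist("first mobile phone 202", "holi")
# The second phone preempts input from the first.
screen.assist("second mobile phone 203", "day")
```

After the preemption, the large screen holds the combined text "holiday" and the second phone is the active assisting device.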
  • FIG. 3 shows a schematic structural diagram of another communication system provided by an embodiment of the present application.
  • the communication system may include: a first large screen 301 , a second large screen 302 , a first mobile phone 303 and a second mobile phone 304 .
  • the first large screen 301, the second large screen 302, the first mobile phone 303 and the second mobile phone 304 are in a distributed network.
• the first large screen 301, the second large screen 302, the first mobile phone 303 and the second mobile phone 304 can mutually implement functions such as device discovery, device connection or data transmission.
• after the first mobile phone 303 and the second mobile phone 304 join the distributed networking, mutual device discovery and device connection can be implemented. Furthermore, the first mobile phone 303 and the second mobile phone 304 can simultaneously assist the first large screen 301 and/or the second large screen 302 to input content such as text. Alternatively, the first mobile phone 303 and the second mobile phone 304 can respectively assist the first large screen 301 and/or the second large screen 302 to input content such as text. Alternatively, when one of the first mobile phone 303 or the second mobile phone 304 assists the first large screen 301 and/or the second large screen 302 with input, the other can preempt the input.
  • the first large screen 301 and/or the second large screen 302 may select the first mobile phone 303 or the second mobile phone 304 for auxiliary input, and the like.
  • the specific auxiliary input or preemptive input and other processes will be described in detail in subsequent embodiments, and will not be repeated here.
  • FIG. 4 shows a functional block diagram of a first device provided by an embodiment of the present application.
• the first device 400 may include: a processor 401, a memory 402, a communication interface 403, a speaker 404, a display 405, and the like, and these components may communicate through one or more communication buses or signal lines (not shown in the figure).
• the processor 401 is the control center of the first device 400 and uses various interfaces and lines to connect the various parts of the first device 400; by running or executing the application programs stored in the memory 402 and calling the data stored in the memory 402, the processor 401 executes various functions of the first device 400 and processes data.
• the processor 401 may include one or more processing units; for example, the processor 401 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc. Different processing units may be independent devices, or may be integrated in one or more processors.
  • the controller may be the nerve center and command center of the first device 400 . The controller can generate an operation control signal according to the instruction operation code and timing signal, and complete the control of fetching and executing instructions.
  • a memory may also be provided in the processor 401 for storing instructions and data.
  • the memory in processor 401 is a cache memory.
  • the memory may hold instructions or data that have just been used or recycled by the processor 401 . If the processor 401 needs to use the instruction or data again, it can be directly called from the memory, which avoids repeated access, reduces the waiting time of the processor 401, and thus improves the efficiency of the system.
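The cache behavior described here — serving recently used instructions or data from fast memory so that repeated accesses to the slow backing store are avoided — can be illustrated with a toy cache; the class and its fetch callback are hypothetical illustrations, not the processor's actual cache design.

```python
class InstructionCache:
    """Toy cache: serve recently used data from fast memory instead of
    re-fetching it from the (slow) backing store on every access."""

    def __init__(self, fetch):
        self._fetch = fetch     # slow path, e.g. a main-memory access
        self._cache = {}
        self.misses = 0

    def load(self, addr):
        if addr not in self._cache:      # miss: go to the backing store
            self._cache[addr] = self._fetch(addr)
            self.misses += 1
        return self._cache[addr]         # hit: no repeated access

cache = InstructionCache(fetch=lambda addr: f"data@{addr}")
cache.load(0x10)
cache.load(0x10)   # the second use is served from the cache
```

Only the first `load` reaches the backing store; the repeat is a cache hit, which is exactly the waiting-time reduction the passage describes.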
  • the processor 401 may run the software codes/modules of the device communication method provided by some embodiments of the present application, so as to realize the function of controlling the first device 400 .
  • the memory 402 is used to store application programs and data, and the processor 401 executes various functions of the first device 400 and data processing by running the application programs and data stored in the memory 402 .
  • the memory 402 mainly includes a stored program area and a stored data area, wherein the stored program area can store an operating system (operating system, OS), an application program required for at least one function (such as a device discovery function, a video search function, a video playback function, etc. ); the storage data area may store data (such as audio and video data, etc.) created according to the use of the first device.
• the memory 402 may include high-speed random access memory (RAM), and may also include non-volatile memory, such as magnetic disk storage devices, flash memory devices, or other non-volatile solid-state storage devices. In some embodiments, the memory 402 may store various operating systems.
  • the above-mentioned memory 402 may be independent and connected to the processor 401 through the above-mentioned communication bus; the memory 402 may also be integrated with the processor 401 .
  • the communication interface 403 may be a wired interface (eg, an Ethernet interface) or a wireless interface (eg, a cellular network interface or an interface using a wireless local area network), for example, the communication interface 403 may be specifically used to communicate with one or more second devices, and the like.
• the speaker 404, also referred to as a “loudspeaker”, is used to convert an audio electrical signal into a sound signal.
  • the first device 400 may play the sound signal through the speaker 404 .
  • the display 405 (or referred to as a display screen, a screen, etc.) can be used to display a display interface of an application, such as an interface for searching for a video or a currently playing video picture.
  • Display 405 may include a display panel.
• the display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), or an active-matrix organic light-emitting diode (AMOLED), or the like.
  • a touch sensor may be provided in the display 405 to form a touch screen, which is not limited in this application.
  • a touch sensor is used to detect touches on or near it.
  • the touch sensor can communicate the detected touch operation to the processor 401 to determine the type of touch event.
  • Processor 401 may provide visual output related to touch operations via display 405 .
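A toy version of this dispatch, in which the processor determines the type of touch event from the raw touch reported by the sensor, might look as follows; the event types and the 500 ms long-press threshold are illustrative assumptions, not values from any real input stack.

```python
def classify_touch(press_ms: int, moved: bool) -> str:
    """The touch sensor reports a raw touch (duration, movement);
    the processor decides the type of touch event."""
    if moved:
        return "swipe"
    return "long-press" if press_ms >= 500 else "tap"

# Three raw touches reported by the sensor: (duration in ms, moved?).
events = [(120, False), (800, False), (200, True)]
print([classify_touch(ms, moved) for ms, moved in events])
# ['tap', 'long-press', 'swipe']
```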
  • the first device 400 may further include a power supply device 406 (such as a battery and a power management chip) for supplying power to various components, and the battery may be logically connected to the processor 401 through the power management chip, so that the power supply device 406 can manage charging, discharging, and power management functions.
  • the first device 400 may further include a sensor module (not shown in the figure), and the sensor module may include an air pressure sensor, a temperature sensor, and the like.
  • the first device 400 may also include more or fewer sensors, or use other sensors with the same or similar functions to replace the above-listed sensors, etc., which is not limited in this application.
  • the device structure shown in FIG. 4 does not constitute a specific limitation on the first device.
  • the first device may include more or fewer components than shown, or some components may be combined, or some components may be split, or a different arrangement of components.
  • the illustrated components may be implemented in hardware, software, or a combination of software and hardware.
  • FIG. 5 shows a functional block diagram of a second device 500 provided by an embodiment of the present application.
  • the second device 500 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, antenna 1, antenna 2, mobile communication module 150, wireless communication module 160, audio module 170, speaker 170A, receiver 170B, microphone 170C, headphone jack 170D, sensor module 180, key 190, motor 191, indicator 192, camera 193, a display screen 194, and a subscriber identification module (subscriber identification module, SIM) card interface 195 and the like.
• the sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, etc.
  • the structures illustrated in the embodiments of the present application do not constitute a specific limitation on the second device 500 .
  • the second device 500 may include more or less components than shown, or combine some components, or separate some components, or arrange different components.
  • the illustrated components may be implemented in hardware, software, or a combination of software and hardware.
  • the processor 110 may include one or more processing units, for example, the processor 110 may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), controller, video codec, digital signal processor (digital signal processor, DSP), baseband processor, and/or neural-network processing unit (neural-network processing unit, NPU), etc. Wherein, different processing units may be independent devices, or may be integrated in one or more processors.
  • the controller can generate an operation control signal according to the instruction operation code and timing signal, and complete the control of fetching and executing instructions.
  • a memory may also be provided in the processor 110 for storing instructions and data.
  • the memory in processor 110 is cache memory. This memory may hold instructions or data that have just been used or recycled by the processor 110 . If the processor 110 needs to use the instruction or data again, it can be called from memory. Repeated accesses are avoided and the latency of the processor 110 is reduced, thereby increasing the efficiency of the system.
  • the processor 110 may include one or more interfaces.
• the interface may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, etc.
  • the I2C interface is a bidirectional synchronous serial bus that includes a serial data line (SDA) and a serial clock line (SCL).
  • the processor 110 may contain multiple sets of I2C buses.
  • the processor 110 can be respectively coupled to the touch sensor 180K, the charger, the flash, the camera 193 and the like through different I2C bus interfaces.
  • the processor 110 may couple the touch sensor 180K through an I2C interface, so that the processor 110 communicates with the touch sensor 180K through an I2C bus interface, so as to implement the touch function of the second device 500 .
  • the I2S interface can be used for audio communication.
  • the processor 110 may contain multiple sets of I2S buses.
  • the processor 110 may be coupled with the audio module 170 through an I2S bus to implement communication between the processor 110 and the audio module 170 .
  • the audio module 170 can transmit audio signals to the wireless communication module 160 through the I2S interface, so as to realize the function of answering calls through a Bluetooth headset.
  • the PCM interface can also be used for audio communication, to sample, quantize, and encode an analog signal.
  • the audio module 170 and the wireless communication module 160 may be coupled through a PCM bus interface.
  • the audio module 170 can also transmit audio signals to the wireless communication module 160 through the PCM interface, so as to realize the function of answering calls through the Bluetooth headset. Both the I2S interface and the PCM interface can be used for audio communication.
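The sampling, quantizing, and encoding steps that PCM performs can be sketched in a few lines; the 8-bit linear quantizer below is a minimal illustration of the principle, not the codec any particular audio module uses.

```python
import math

def pcm_encode(signal, sample_rate, duration, bits=8):
    """Sample an analog signal (given as a function of time), quantize
    each sample to `bits` bits, and return the encoded integer codes."""
    n_samples = int(sample_rate * duration)
    levels = 2 ** bits
    codes = []
    for n in range(n_samples):
        t = n / sample_rate
        x = signal(t)                               # sampling
        x = max(-1.0, min(1.0, x))                  # clip to [-1, 1]
        codes.append(int((x + 1.0) / 2.0 * (levels - 1)))  # quantize + encode
    return codes

# A 1 kHz sine sampled at 8 kHz for one millisecond -> 8 samples.
codes = pcm_encode(lambda t: math.sin(2 * math.pi * 1000 * t), 8000, 0.001)
```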
  • the UART interface is a universal serial data bus used for asynchronous communication.
  • the bus may be a bidirectional communication bus. It converts the data to be transmitted between serial communication and parallel communication.
  • a UART interface is typically used to connect the processor 110 with the wireless communication module 160 .
  • the processor 110 communicates with the Bluetooth module in the wireless communication module 160 through the UART interface to implement the Bluetooth function.
  • the audio module 170 can transmit audio signals to the wireless communication module 160 through the UART interface, so as to realize the function of playing music through the Bluetooth headset.
  • the MIPI interface can be used to connect the processor 110 with peripheral devices such as the display screen 194 and the camera 193 .
  • MIPI interfaces include camera serial interface (CSI), display serial interface (DSI), etc.
  • the processor 110 communicates with the camera 193 through a CSI interface to implement the shooting function of the second device 500 .
  • the processor 110 communicates with the display screen 194 through the DSI interface to implement the display function of the second device 500 .
  • the GPIO interface can be configured by software.
  • the GPIO interface can be configured as a control signal or as a data signal.
  • the GPIO interface may be used to connect the processor 110 with the camera 193, the display screen 194, the wireless communication module 160, the audio module 170, the sensor module 180, and the like.
  • the GPIO interface can also be configured as I2C interface, I2S interface, UART interface, MIPI interface, etc.
  • the USB interface 130 is an interface that conforms to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type C interface, and the like.
  • the USB interface 130 can be used to connect a charger to charge the second device 500, and can also be used to transmit data between the second device 500 and peripheral devices. It can also be used to connect headphones to play audio through the headphones.
  • the interface can also be used to connect other electronic devices, such as AR devices.
  • the interface connection relationship between the modules illustrated in the embodiments of the present application is a schematic illustration, and does not constitute a structural limitation of the second device 500 .
  • the second device 500 may also adopt different interface connection manners in the foregoing embodiments, or a combination of multiple interface connection manners.
  • the charging management module 140 is used to receive charging input from the charger.
  • the charger may be a wireless charger or a wired charger.
  • the charging management module 140 may receive charging input from the wired charger through the USB interface 130 .
  • the charging management module 140 may receive wireless charging input through the wireless charging coil of the second device 500 . While the charging management module 140 charges the battery 142 , it can also supply power to the terminal device through the power management module 141 .
  • the power management module 141 is used for connecting the battery 142 , the charging management module 140 and the processor 110 .
  • the power management module 141 receives input from the battery 142 and/or the charge management module 140, and supplies power to the processor 110, the internal memory 121, the display screen 194, the camera 193, and the wireless communication module 160, etc.
  • the power management module 141 can also be used to monitor parameters such as battery capacity, battery cycle times, battery health status (leakage, impedance).
  • the power management module 141 may also be provided in the processor 110 .
  • the power management module 141 and the charging management module 140 may also be provided in the same device.
  • the wireless communication function of the second device 500 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modulation and demodulation processor, the baseband processor, and the like.
  • Antenna 1 and Antenna 2 are used to transmit and receive electromagnetic wave signals.
  • the antennas in the second device 500 may be used to cover a single or multiple communication frequency bands. Different antennas can also be reused to improve antenna utilization.
  • the antenna 1 can be multiplexed as a diversity antenna of the wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
  • the mobile communication module 150 may provide a wireless communication solution including 2G/3G/4G/5G etc. applied on the second device 500 .
  • the mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (LNA) and the like.
  • the mobile communication module 150 can receive electromagnetic waves from the antenna 1, filter and amplify the received electromagnetic waves, and transmit them to the modulation and demodulation processor for demodulation.
  • the mobile communication module 150 can also amplify the signal modulated by the modulation and demodulation processor, and then turn it into an electromagnetic wave for radiation through the antenna 1 .
  • at least part of the functional modules of the mobile communication module 150 may be provided in the processor 110 .
  • at least part of the functional modules of the mobile communication module 150 may be provided in the same device as at least part of the modules of the processor 110 .
  • the modem processor may include a modulator and a demodulator.
  • the modulator is used to modulate the low frequency baseband signal to be sent into a medium and high frequency signal.
  • the demodulator is used to demodulate the received electromagnetic wave signal into a low frequency baseband signal. Then the demodulator transmits the demodulated low-frequency baseband signal to the baseband processor for processing.
  • the low frequency baseband signal is processed by the baseband processor and passed to the application processor.
  • the application processor outputs sound signals through audio devices (not limited to the speaker 170A, the receiver 170B, etc.), or displays images or videos through the display screen 194 .
  • the modem processor may be a stand-alone device.
  • the modem processor may be independent of the processor 110, and may be provided in the same device as the mobile communication module 150 or other functional modules.
  • the wireless communication module 160 may provide wireless communication solutions applied on the second device 500, including wireless local area network (WLAN) (such as a wireless fidelity (Wi-Fi) network), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), and infrared (IR) technology.
  • the wireless communication module 160 may be one or more devices integrating at least one communication processing module.
  • the wireless communication module 160 receives electromagnetic waves via the antenna 2 , frequency modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 110 .
  • the wireless communication module 160 can also receive the signal to be sent from the processor 110 , perform frequency modulation on it, amplify it, and convert it into electromagnetic waves for radiation through the antenna 2 .
  • the antenna 1 of the second device 500 is coupled with the mobile communication module 150, and the antenna 2 is coupled with the wireless communication module 160, so that the second device 500 can communicate with the network and other devices through wireless communication technology.
  • the wireless communication technologies may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technology, etc.
  • the GNSS may include the global positioning system (GPS), the global navigation satellite system (GLONASS), the BeiDou navigation satellite system (BDS), the quasi-zenith satellite system (QZSS), and/or the satellite based augmentation systems (SBAS).
  • the second device 500 implements a display function through a GPU, a display screen 194, an application processor, and the like.
  • the GPU is a microprocessor for image processing, and is connected to the display screen 194 and the application processor.
  • the GPU is used to perform mathematical and geometric calculations for graphics rendering.
  • Processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
  • Display screen 194 is used to display images, videos, and the like.
  • Display screen 194 includes a display panel.
  • the display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a MiniLED, a MicroLED, a Micro-OLED, a quantum dot light-emitting diode (QLED), and so on.
  • the second device 500 may include 1 or N display screens 194 , where N is a positive integer greater than 1.
  • the second device 500 may implement a shooting function through an ISP, a camera 193, a video codec, a GPU, a display screen 194, an application processor, and the like.
  • the ISP is used to process the data fed back by the camera 193 .
  • when the shutter is opened, light is transmitted to the photosensitive element of the camera through the lens; the light signal is converted into an electrical signal, and the photosensitive element transmits the electrical signal to the ISP for processing, which converts it into an image visible to the naked eye.
  • ISP can also perform algorithm optimization on image noise, brightness, and skin tone.
  • ISP can also optimize the exposure, color temperature and other parameters of the shooting scene.
  • the ISP may be provided in the camera 193 .
  • Camera 193 is used to capture still images or video.
  • an optical image of the object is generated through the lens and projected onto the photosensitive element.
  • the photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor.
  • the photosensitive element converts the optical signal into an electrical signal, and then transmits the electrical signal to the ISP to convert it into a digital image signal.
  • the ISP outputs the digital image signal to the DSP for processing.
  • the DSP converts the digital image signal into an image signal in a standard format such as RGB or YUV.
  • the second device 500 may include 1 or N cameras 193 , where N is a positive integer greater than 1.
  • a digital signal processor is used to process digital signals; in addition to digital image signals, it can also process other digital signals. For example, when the second device 500 selects a frequency point, the digital signal processor is used to perform a Fourier transform on the frequency-point energy, and the like.
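The Fourier transform on frequency-point energy mentioned above can be illustrated in software with a single-bin DFT; a hardware DSP would do the equivalent far more efficiently. This is a sketch of the principle, not the device's actual algorithm.

```python
import cmath
import math

def frequency_point_energy(samples, sample_rate, freq):
    """Energy of `samples` at a single frequency, computed as one DFT bin
    (a software stand-in for what the DSP performs in hardware)."""
    n = len(samples)
    acc = 0j
    for k, x in enumerate(samples):
        acc += x * cmath.exp(-2j * math.pi * freq * k / sample_rate)
    return abs(acc) ** 2 / n

# A pure 1 kHz tone sampled at 8 kHz: all of its energy sits at 1 kHz.
rate = 8000
tone = [math.sin(2 * math.pi * 1000 * k / rate) for k in range(rate // 10)]
```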
  • Video codecs are used to compress or decompress digital video.
  • the second device 500 may support one or more video codecs.
  • the second device 500 can play or record videos in various encoding formats, for example, moving picture experts group (moving picture experts group, MPEG) 1, MPEG2, MPEG3, MPEG4, and so on.
  • the NPU is a neural-network (NN) computing processor.
  • Applications such as intelligent cognition of the second device 500 can be implemented through the NPU, for example: image recognition, face recognition, speech recognition, text understanding, and the like.
  • the external memory interface 120 can be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the second device 500.
  • the external memory card communicates with the processor 110 through the external memory interface 120 to implement the data storage function, for example, to save files such as music and videos in the external memory card.
  • Internal memory 121 may be used to store computer executable program code, which includes instructions.
  • the internal memory 121 may include a storage program area and a storage data area.
  • the storage program area can store an operating system, an application program required for at least one function (such as a sound playback function, an image playback function, etc.), and the like.
  • the storage data area may store data (such as audio data, phone book, etc.) created during the use of the second device 500 and the like.
  • the internal memory 121 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, universal flash storage (UFS), and the like.
  • the processor 110 executes various functional applications and data processing of the second device 500 by executing instructions stored in the internal memory 121 and/or instructions stored in a memory provided in the processor.
  • the second device 500 may implement audio functions through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the headphone interface 170D, and the application processor. Such as music playback, recording, etc.
  • the audio module 170 is used for converting digital audio information into analog audio signal output, and also for converting analog audio input into digital audio signal. Audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be provided in the processor 110 , or some functional modules of the audio module 170 may be provided in the processor 110 .
  • the speaker 170A, also referred to as a "loudspeaker", is used to convert audio electrical signals into sound signals.
  • the second device 500 can listen to music through the speaker 170A, or listen to a hands-free call.
  • the receiver 170B, also referred to as an "earpiece", is used to convert audio electrical signals into sound signals.
  • voice can be heard by placing the receiver 170B close to the human ear.
  • the microphone 170C, also called a "mic", is used to convert sound signals into electrical signals.
  • when making a call or sending a voice message, the user can input a sound signal into the microphone 170C by speaking close to it.
  • the second device 500 may be provided with at least one microphone 170C.
  • the second device 500 may be provided with two microphones 170C, which may implement a noise reduction function in addition to collecting sound signals.
  • the second device 500 may further be provided with three, four or more microphones 170C to collect sound signals, reduce noise, identify sound sources, and implement directional recording functions.
  • the earphone jack 170D is used to connect wired earphones.
  • the earphone interface 170D can be the USB interface 130, or can be a 3.5 mm open mobile terminal platform (OMTP) standard interface or a cellular telecommunications industry association of the USA (CTIA) standard interface.
  • the pressure sensor 180A is used to sense pressure signals, and can convert the pressure signals into electrical signals.
  • the pressure sensor 180A may be provided on the display screen 194 .
  • the capacitive pressure sensor may be comprised of at least two parallel plates of conductive material. When a force is applied to the pressure sensor 180A, the capacitance between the electrodes changes.
  • the second device 500 determines the intensity of the pressure according to the change in capacitance. When a touch operation acts on the display screen 194, the second device 500 detects the intensity of the touch operation according to the pressure sensor 180A.
  • the second device 500 may also calculate the touched position according to the detection signal of the pressure sensor 180A. In some embodiments, touch operations acting on the same touch position but with different touch operation intensities may correspond to different operation instructions.
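The mapping from capacitance change to operation instruction described above can be sketched as a simple threshold classifier. The thresholds and press categories below are illustrative assumptions, not values from the embodiments.

```python
# Relative capacitance-change thresholds (illustrative, not from the patent).
LIGHT_PRESS_THRESHOLD = 0.05
DEEP_PRESS_THRESHOLD = 0.20

def classify_press(c_rest, c_pressed):
    """Map the relative change in capacitance between the parallel plates
    of the pressure sensor 180A to an operation instruction."""
    delta = (c_pressed - c_rest) / c_rest
    if delta < LIGHT_PRESS_THRESHOLD:
        return "no_press"
    if delta < DEEP_PRESS_THRESHOLD:
        return "light_press"   # e.g. preview a message
    return "deep_press"        # e.g. open a new-message action
```

This is how the same touch position can yield different operation instructions: the pressure intensity, not the coordinates, selects the branch.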
  • the gyro sensor 180B may be used to determine the motion attitude of the second device 500 .
  • the angular velocity of the second device 500 about three axes (i.e., the x, y, and z axes) may be determined through the gyro sensor 180B.
  • the gyro sensor 180B can be used for image stabilization.
  • the gyro sensor 180B detects the shaking angle of the second device 500, calculates the distance that the lens module needs to compensate for according to the angle, and allows the lens to counteract the shaking of the second device 500 through reverse motion, thereby achieving image stabilization.
  • the gyro sensor 180B can also be used for navigation and somatosensory game scenarios.
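The compensation distance computed from the shake angle can be approximated from the lens focal length: the image shifts by roughly f·tan(θ) on the sensor, so the lens is driven the same distance in the opposite direction. The small model below is a sketch under that assumption, not the device's actual stabilization algorithm.

```python
import math

def lens_compensation_mm(shake_angle_deg, focal_length_mm):
    """Distance the lens module must move to cancel a measured shake angle.

    A shake of angle theta shifts the image by about f * tan(theta), so
    moving the lens that far in the reverse direction holds the image still.
    """
    return focal_length_mm * math.tan(math.radians(shake_angle_deg))
```

For a typical ~4 mm phone lens, a 1-degree shake calls for roughly 0.07 mm of lens travel, which is within the range of a voice-coil actuator.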
  • the air pressure sensor 180C is used to measure air pressure.
  • the second device 500 calculates the altitude through the air pressure value measured by the air pressure sensor 180C to assist in positioning and navigation.
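Converting a measured air pressure into an altitude typically uses the international barometric formula; the sketch below assumes standard sea-level pressure (1013.25 hPa) as the reference, which the embodiments do not specify.

```python
def altitude_m(pressure_hpa, sea_level_hpa=1013.25):
    """Altitude estimate from a barometric pressure reading, using the
    international barometric formula; this is how an air-pressure value
    is commonly turned into an altitude for assisted positioning."""
    return 44330.0 * (1.0 - (pressure_hpa / sea_level_hpa) ** (1.0 / 5.255))
```

Near sea level, each hPa of pressure drop corresponds to roughly 8 m of altitude gain, so even a coarse sensor can distinguish building floors.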
  • the magnetic sensor 180D includes a Hall sensor.
  • the second device 500 can detect the opening and closing of the flip holster using the magnetic sensor 180D.
  • the second device 500 may detect the opening and closing of the flip cover according to the magnetic sensor 180D, and further set characteristics such as automatic unlocking upon flip opening according to the detected opening and closing state of the holster or the flip cover.
  • the acceleration sensor 180E can detect the magnitude of the acceleration of the second device 500 in various directions (generally three axes).
  • the magnitude and direction of gravity can be detected when the second device 500 is stationary. The acceleration sensor can also be used to identify the posture of the terminal device, and can be applied in scenarios such as landscape/portrait switching and pedometers.
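A minimal sketch of how the gravity components reported by the acceleration sensor could drive landscape/portrait switching: gravity is only a reliable reference when total acceleration is close to 1 g, and the axis carrying most of it indicates the orientation. The stationarity tolerance is an illustrative assumption.

```python
import math

GRAVITY = 9.81  # m/s^2

def is_stationary(ax, ay, az, tolerance=0.5):
    """Gravity is only a usable reference when the total acceleration
    magnitude is close to 1 g (tolerance is illustrative)."""
    return abs(math.sqrt(ax * ax + ay * ay + az * az) - GRAVITY) < tolerance

def screen_orientation(ax, ay):
    """Portrait when gravity lies mostly along the device's y axis,
    landscape when it lies mostly along the x axis."""
    return "landscape" if abs(ax) > abs(ay) else "portrait"
```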
  • the distance sensor 180F is used to measure distance; the second device 500 may measure the distance through infrared or laser. In some embodiments, in a shooting scenario, the second device 500 can use the distance sensor 180F to measure the distance to achieve fast focusing.
  • Proximity light sensor 180G may include, for example, light emitting diodes (LEDs) and light detectors, such as photodiodes.
  • the light emitting diodes may be infrared light emitting diodes.
  • the second device 500 emits infrared light to the outside through light emitting diodes.
  • the second device 500 detects infrared reflected light from nearby objects using photodiodes. When sufficient reflected light is detected, it may be determined that there is an object near the second device 500 . When insufficient reflected light is detected, the second device 500 may determine that there is no object near the second device 500 .
  • the second device 500 can use the proximity light sensor 180G to detect that the user holds the second device 500 close to the ear to talk, so as to automatically turn off the screen to save power.
  • the proximity light sensor 180G can also be used in holster mode and pocket mode to automatically unlock and lock the screen.
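The "sufficient reflected light" decision described above is essentially a threshold test; using different near/far thresholds (hysteresis) avoids the screen flickering when the reading hovers near the boundary. The threshold values below are illustrative assumptions.

```python
# Reflected-light thresholds in arbitrary photodiode units (illustrative).
NEAR_THRESHOLD = 0.8   # enter the "object near" state above this
FAR_THRESHOLD = 0.3    # leave it only below this (hysteresis)

def update_proximity(reflected_light, currently_near):
    """Return True if an object is judged near the second device 500,
    based on the photodiode reading and the previous state."""
    if currently_near:
        return reflected_light >= FAR_THRESHOLD
    return reflected_light >= NEAR_THRESHOLD
```

During a call, the screen would be turned off whenever this returns True, matching the ear-proximity behavior described above.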
  • the ambient light sensor 180L is used to sense ambient light brightness.
  • the second device 500 can adaptively adjust the brightness of the display screen 194 according to the perceived ambient light brightness.
  • the ambient light sensor 180L can also be used to automatically adjust the white balance when taking pictures.
  • the ambient light sensor 180L can also cooperate with the proximity light sensor 180G to detect whether the second device 500 is in the pocket, so as to prevent accidental touch.
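Adaptive brightness can be sketched as a mapping from sensed illuminance to a backlight level. The logarithmic curve and the limits below are illustrative assumptions (perceived brightness is roughly logarithmic in luminance), not the device's actual tuning.

```python
import math

def display_brightness(lux, min_level=10, max_level=255, max_lux=1000.0):
    """Map an ambient-light reading (lux) to a backlight level in
    [min_level, max_level], using an illustrative logarithmic curve."""
    lux = max(1.0, min(lux, max_lux))            # clamp to the useful range
    fraction = math.log(lux) / math.log(max_lux) # 0.0 in the dark, 1.0 in bright light
    return int(min_level + fraction * (max_level - min_level))
```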
  • the fingerprint sensor 180H is used to collect fingerprints.
  • the second device 500 can use the collected fingerprint characteristics to unlock the fingerprint, access the application lock, take a picture with the fingerprint, answer the incoming call with the fingerprint, and the like.
  • the temperature sensor 180J is used to detect the temperature.
  • the second device 500 uses the temperature detected by the temperature sensor 180J to execute the temperature processing strategy. For example, when the temperature reported by the temperature sensor 180J exceeds a threshold, the second device 500 performs performance reduction of the processor located near the temperature sensor 180J, so as to reduce power consumption and implement thermal protection.
  • the second device 500 when the temperature is lower than another threshold, the second device 500 heats the battery 142 to avoid abnormal shutdown of the second device 500 caused by the low temperature.
  • the second device 500 boosts the output voltage of the battery 142 to avoid abnormal shutdown caused by low temperature.
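The temperature-processing strategy described in the three bullets above can be sketched as a simple rule table; the threshold temperatures are illustrative assumptions, as the embodiments do not specify values.

```python
# Illustrative thresholds in degrees Celsius (not specified by the patent).
THROTTLE_C = 45.0        # above this: reduce nearby processor performance
HEAT_BATTERY_C = 0.0     # below this: heat the battery 142
BOOST_VOLTAGE_C = -10.0  # below this: also boost the battery output voltage

def thermal_actions(temp_c):
    """Return the list of actions the temperature-processing strategy
    would take for a temperature reported by the temperature sensor 180J."""
    actions = []
    if temp_c > THROTTLE_C:
        actions.append("reduce_cpu_performance")
    if temp_c < HEAT_BATTERY_C:
        actions.append("heat_battery")
    if temp_c < BOOST_VOLTAGE_C:
        actions.append("boost_battery_voltage")
    return actions
```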
  • the touch sensor 180K is also called a "touch device".
  • the touch sensor 180K may be disposed on the display screen 194 , and the touch sensor 180K and the display screen 194 form a touch screen, also called a “touch screen”.
  • the touch sensor 180K is used to detect a touch operation on or near it.
  • the touch sensor can pass the detected touch operation to the application processor to determine the type of touch event.
  • Visual output related to touch operations may be provided through display screen 194 .
  • the touch sensor 180K may also be disposed on the surface of the second device 500 , which is different from the location where the display screen 194 is located.
  • the bone conduction sensor 180M can acquire vibration signals. In some embodiments, the bone conduction sensor 180M can acquire the vibration signal of the vibrating bone mass of the human voice. The bone conduction sensor 180M can also contact the pulse of the human body and receive the blood pressure beating signal. In some embodiments, the bone conduction sensor 180M can also be disposed in the earphone, combined with the bone conduction earphone.
  • the audio module 170 can analyze the voice signal based on the vibration signal of the voice part vibrating bone mass obtained by the bone conduction sensor 180M, so as to realize the voice function.
  • the application processor can analyze the heart rate information based on the blood pressure beat signal obtained by the bone conduction sensor 180M, and realize the function of heart rate detection.
  • the keys 190 include a power key, a volume key, and the like. The keys 190 may be mechanical keys or touch keys.
  • the second device 500 may receive key inputs, and generate key signal inputs related to user settings and function control of the second device 500 .
  • Motor 191 can generate vibrating cues.
  • the motor 191 can be used for vibrating alerts for incoming calls, and can also be used for touch vibration feedback.
  • touch operations acting on different applications can correspond to different vibration feedback effects.
  • the motor 191 can also correspond to different vibration feedback effects for touch operations on different areas of the display screen 194 .
  • Different application scenarios for example: time reminder, receiving information, alarm clock, games, etc.
  • the touch vibration feedback effect can also support customization.
  • the indicator 192 can be an indicator light, which can be used to indicate the charging state, the change of the power, and can also be used to indicate a message, a missed call, a notification, and the like.
  • the SIM card interface 195 is used to connect a SIM card.
  • the SIM card can be brought into contact with and separated from the second device 500 by inserting it into the SIM card interface 195 or pulling it out of the SIM card interface 195.
  • the second device 500 may support 1 or N SIM card interfaces, where N is a positive integer greater than 1.
  • the SIM card interface 195 can support Nano SIM card, Micro SIM card, SIM card and so on. Multiple cards can be inserted into the same SIM card interface 195 at the same time. Multiple cards can be of the same type or different.
  • the SIM card interface 195 can also be compatible with different types of SIM cards.
  • the SIM card interface 195 is also compatible with external memory cards.
  • the second device 500 interacts with the network through the SIM card to implement functions such as call and data communication.
  • the second device 500 employs an eSIM, i.e., an embedded SIM card.
  • the eSIM card can be embedded in the second device 500 and cannot be separated from the second device 500 .
  • the software systems of the first device 400 and the second device 500 can both adopt a layered architecture, an event-driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture, and the like.
  • the embodiments of the present application take an Android system with a layered architecture as an example to exemplarily describe the software structures of the first device 400 and the second device 500 .
  • the left diagram of FIG. 6 shows a software architecture block diagram of a first device provided by an embodiment of the present application.
  • the software system of the first device may adopt a layered architecture, an event-driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture, or the like.
  • the embodiments of the present application take the operating system of the first device being an Android system as an example. As shown in FIG. 6, the Android system is divided into four layers, from top to bottom: the application layer, the application framework layer, the Android runtime and system library, and the kernel layer.
  • the application layer can include a series of application packages.
  • the application layer may include one or more applications such as gallery, calendar, music, video, on-demand, smart home or device control.
  • an input box may be provided in any of the above-mentioned application programs, so that the user can input keywords in the input box to realize operations such as searching in the application program.
  • Smart home applications can be used to control or manage connected home devices.
  • household equipment can include lights, air conditioners, security door locks, speakers, robot vacuums, sockets, body fat scales, desk lamps, air purifiers, refrigerators, washing machines, water heaters, microwave ovens, rice cookers, curtains, fans, televisions, set-top boxes, doors and windows, etc.
  • the device control application is used to control or manage a single device (eg, the first device).
  • the application layer may also include system applications such as a control center and/or a notification center.
  • the control center is a pull-down message notification bar of the first device, such as a user interface displayed by the first device when the user performs a downward operation on the first device.
  • the notification center is a pull-up message notification bar of the first device, that is, a user interface displayed by the first device when the user performs an upward operation on the first device.
  • the application framework layer provides an application programming interface (API) and a programming framework for the applications in the application layer.
  • the application framework layer includes some predefined functions.
  • the application framework layer may include one or more of a window manager, a content provider, a resource manager, a view system, a notification manager, a distributed networking framework, a remote input service, an input method framework, or the like.
  • a window manager is used to manage window programs.
  • the window manager can get the size of the display screen, determine whether there is a status bar, lock the screen, touch the screen, drag the screen, take a screenshot, etc.
  • Content providers are used to store and retrieve data and make these data accessible to applications.
  • Data can include videos, images, audio, browsing history and bookmarks, etc.
  • the view system includes visual controls, such as controls for displaying text, controls for displaying pictures, and so on. View systems can be used to build applications.
  • a display interface can consist of one or more views.
  • the display interface including the short message notification icon may include a view for displaying text and a view for displaying pictures.
  • the resource manager provides various resources for the application, such as localization strings, icons, pictures, layout files, video files and so on.
  • the notification manager enables applications to display notification information in the status bar. It can be used to convey notification-type messages, which can disappear automatically after a short stay without user interaction. For example, the notification manager is used to notify of download completion, message reminders, and the like.
  • the notification manager can also display notifications in the status bar at the top of the system in the form of charts or scroll-bar text, such as notifications of applications running in the background, or display notifications on the screen in the form of dialog windows. For example, text information is prompted in the status bar, a prompt tone is issued, or the indicator light flashes.
  • the distributed networking framework enables the first device to discover other devices in the same distributed networking, and then establish a communication connection with other devices.
  • a remote input service (which may also be referred to as a remote input atomic ability (AA)) enables a first device to receive remote input from other devices.
  • the input method framework may support the first device to input content in the input box.
  • the Android runtime includes core libraries and a virtual machine. Android runtime is responsible for scheduling and management of the Android system.
  • the core library consists of two parts: one part contains the functions that the Java language needs to call, and the other part is the core library of Android.
  • the application layer and the application framework layer run in virtual machines.
  • the virtual machine executes the java files of the application layer and the application framework layer as binary files.
  • the virtual machine is used to perform functions such as object lifecycle management, stack management, thread management, safety and exception management, and garbage collection.
  • a system library can include multiple functional modules. For example: surface manager (surface manager), media library (Media Libraries), 3D graphics processing library (eg: OpenGL ES), 2D graphics engine (eg: SGL), etc.
  • the Surface Manager is used to manage the display subsystem and provides a fusion of 2D and 3D layers for multiple applications.
  • the media library supports playback and recording of a variety of commonly used audio and video formats, as well as still image files.
  • the media library can support a variety of audio and video encoding formats, such as: MPEG4, H.264, MP3, AAC, AMR, JPG, PNG, etc.
  • the 3D graphics processing library is used to implement 3D graphics drawing, image rendering, compositing, and layer processing.
  • 2D graphics engine is a drawing engine for 2D drawing.
  • the kernel layer is the layer between hardware and software.
  • the kernel layer contains at least display drivers, camera drivers, audio drivers, and sensor drivers.
  • FIG. 6 shows a software architecture block diagram of a second device provided by an embodiment of the present application.
  • the Android system is divided into four layers, which are, from top to bottom, the application layer, the application framework layer, the Android runtime and system libraries, and the kernel layer.
  • the application layer can include a series of application packages.
  • the application package can include applications such as camera, calendar, phone, map, music, settings, mailbox, video, social, etc.
  • the application framework layer provides an application programming interface (application programming interface, API) and a programming framework for applications in the application layer.
  • the application framework layer includes some predefined functions.
  • the application framework layer may include one or more of a window manager, a content provider, a resource manager, a view system, a notification manager, a distributed networking framework, an input method framework, or an interface service.
  • a window manager is used to manage window programs.
  • the window manager can get the size of the display screen, determine whether there is a status bar, lock the screen, touch the screen, drag the screen, take a screenshot, etc.
  • Content providers are used to store and retrieve data and make these data accessible to applications.
  • Data can include videos, images, audio, calls made and received, browsing history and bookmarks, phone book, etc.
  • the view system includes visual controls, such as controls for displaying text, controls for displaying pictures, and so on. View systems can be used to build applications.
  • a display interface can consist of one or more views.
  • the display interface including the short message notification icon may include a view for displaying text and a view for displaying pictures.
  • the resource manager provides various resources for the application, such as localization strings, icons, pictures, layout files, video files and so on.
  • the notification manager enables applications to display notification information in the status bar. It can be used to convey notification-type messages and can disappear automatically after a short stay without user interaction. For example, the notification manager is used to notify of download completion, message reminders, etc.
  • the notification manager can also display notifications in the status bar at the top of the system in the form of charts or scroll-bar text, such as notifications of applications running in the background, or display notifications on the screen in the form of dialog windows. For example, text information is prompted in the status bar, a prompt sound is issued, the terminal device vibrates, and an indicator light flashes.
  • the distributed networking framework enables the second device to discover other devices in the same distributed networking, and then establish a communication connection with other devices.
  • the input method framework can support the second device to input content in the input box.
  • the interface service can define an interface between the second device and other devices, so that the second device and other devices can implement data transmission based on the interface defined by the interface service.
  • the interface service may include an auxiliary AA, where an atomic ability (AA) is a program entity that is developed by a developer and implements a single function, and it may have no user interface (UI).
  • the Android runtime includes core libraries and a virtual machine. Android runtime is responsible for scheduling and management of the Android system.
  • the core library consists of two parts: one part contains the functions that the Java language needs to call, and the other part is the core library of Android.
  • the application layer and the application framework layer run in virtual machines.
  • the virtual machine executes the java files of the application layer and the application framework layer as binary files.
  • the virtual machine is used to perform functions such as object lifecycle management, stack management, thread management, safety and exception management, and garbage collection.
  • a system library can include multiple functional modules. For example: surface manager (surface manager), media library (Media Libraries), 3D graphics processing library (eg: OpenGL ES), 2D graphics engine (eg: SGL), etc.
  • the Surface Manager is used to manage the display subsystem and provides a fusion of 2D and 3D layers for multiple applications.
  • the media library supports playback and recording of a variety of commonly used audio and video formats, as well as still image files.
  • the media library can support a variety of audio and video encoding formats, such as: MPEG4, H.264, MP3, AAC, AMR, JPG, PNG, etc.
  • the 3D graphics processing library is used to implement 3D graphics drawing, image rendering, compositing, and layer processing.
  • 2D graphics engine is a drawing engine for 2D drawing.
  • the kernel layer is the layer between hardware and software.
  • the kernel layer contains at least display drivers, camera drivers, audio drivers, and sensor drivers.
  • the first device and the second device may use their respective distributed networking frameworks to implement access to the distributed networking, device discovery, data transmission, and other communication services. For example, after the first device and the second device are connected to the distributed networking, the first device can pull up the interface service of the second device based on the remote input service, and then use the interface service of the second device to call the input method framework of the second device, so that the input method framework of the second device assists the input of the first device.
  • the remote input service of the first device may also send the content input based on the input method framework of the first device to the second device through the interface service of the second device, etc.
  • an interface service (such as an auxiliary AA) is set in the framework layer of the second device. The interface service builds a bridge between the input method frameworks of the first device and the second device, so that the first device can pull up the input method framework of the second device, and content input using the input method framework of the second device can be displayed synchronously in the input box of the first device (for example, the cursor, highlighting, and other states of the input box of the second device can be displayed in the input box of the first device), so that the second device can be used to assist the input of the first device.
  • the interface service (for example, the auxiliary AA) may also be implemented in the form of an application program. After the application program is loaded, the functions of the above-mentioned interface service in the embodiments of the present application are realized based on the application program.
  • the application may have an application icon displayed in the user interface (that is, the user can perceive the application), or the application may have no application icon displayed in the user interface (that is, the user does not perceive the application); the specific implementation of the interface service is not limited in the embodiments of the present application.
  • the interface service is used as the auxiliary AA for schematic description in the following.
  • FIG. 7 shows a system architecture diagram of a device communication method provided by an embodiment of the present application.
  • An application edit box (or a search box), a database, and a remote input method framework service (or remote input service) can be set in the client (large screen).
  • An auxiliary AA, a notification manager (or simply, notification), a window manager (or simply, window), a database, and an input method framework can be set in the server (mobile phone).
  • the application editing box in the large screen can be provided by the input method framework of the large screen, and the application editing box on the large screen can receive input from the remote control, etc.
  • When the remote control selects the application edit box, the subsequent auxiliary input can be triggered. Alternatively, when the remote control selects the application edit box and selects content on the soft keyboard (also called the virtual keyboard) corresponding to the application edit box, the subsequent auxiliary input can be triggered.
  • the database in the large screen can store the relationship between keywords and programs. For example, after the large screen obtains keywords in the application editing box, it can search for programs based on the relationship between keywords and programs in the database.
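The keyword-to-program lookup described above can be sketched as follows; the class, method names, and sample programs are illustrative assumptions, not part of the patent.

```python
# Hypothetical sketch of the large screen's keyword-to-program database:
# keywords found in the edit-box text are matched against an index of programs.
class ProgramDatabase:
    def __init__(self):
        self._index = {}  # keyword -> list of program titles

    def add_program(self, title, keywords):
        for kw in keywords:
            self._index.setdefault(kw.lower(), []).append(title)

    def search(self, text):
        """Return programs whose keywords appear in the edit-box text."""
        hits = []
        for kw, titles in self._index.items():
            if kw in text.lower():
                for t in titles:
                    if t not in hits:
                        hits.append(t)
        return hits

db = ProgramDatabase()
db.add_program("Nature Documentary", ["nature", "wildlife"])
db.add_program("Cooking Show", ["cooking", "food"])
print(db.search("wildlife nature films"))  # -> ['Nature Documentary']
```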
  • the remote input method framework service in the large screen can enable the large screen to receive remote input.
  • the remote input method framework may include a large-screen local input method framework and a remote input service AA.
  • an interface between the large screen and an external device can be defined, so that the large screen can receive remote input from an external device through the interface.
  • the remote input service AA on the large screen (which may also be referred to as the remote input service AA interface) may include one or more of the following interfaces: an external interface for setting text on the large screen, an external interface for switching focus on the large screen, an interface for externally registering callbacks with the large screen, or an interface provided to a small keyboard, etc.
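As a rough illustration of that interface set, the following sketch models the set-text, focus-switch, and callback-registration interfaces as plain methods. All names are assumptions; a real implementation would expose these across process and device boundaries.

```python
# Illustrative in-process stand-in for the remote input service AA interfaces.
class RemoteInputServiceAA:
    def __init__(self):
        self._text = ""
        self._focused = False
        self._callbacks = []

    def set_text(self, text):          # external interface for setting text
        self._text = text
        for cb in self._callbacks:     # notify externally registered callbacks
            cb(text)

    def switch_focus(self, focused):   # external interface for focus switching
        self._focused = focused

    def register_callback(self, cb):   # externally registered callback
        self._callbacks.append(cb)

received = []
svc = RemoteInputServiceAA()
svc.register_callback(received.append)
svc.switch_focus(True)
svc.set_text("Hello,")
print(received)  # -> ['Hello,']
```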
  • the auxiliary AA in the mobile phone can define the interface between the mobile phone and other devices, so that the mobile phone and other devices can realize data transmission based on the interface defined by the interface service.
  • the mutual call between the mobile phone local input method framework and the large-screen remote input method framework can be established based on the auxiliary AA, so that any content input in the mobile phone local input method framework can be synchronously displayed in the large-screen edit box.
  • implementing mutual calls between the local input method framework of the mobile phone and the remote input method framework of the large screen based on the auxiliary AA includes: the remote input method framework of the large screen and the auxiliary AA hold each other's remote procedure call (RPC) objects. Then, when the large screen and the mobile phone interact, each side can call the other side's device process according to the other side's RPC object, and notify that device process to call its local interface to perform the corresponding operation.
  • the notification manager in the mobile phone can display the notification content in the mobile phone interface based on the operation of obtaining the focus of the edit box on the large screen, and prompt the mobile phone user to perform auxiliary input on the large screen.
  • a window manager in a mobile phone can display a user interface, such as a notification interface, an auxiliary input interface, and the like.
  • the database in the mobile phone can store the relationship between the keywords and the candidate content. For example, after the keywords are obtained in the input method edit box of the mobile phone, the candidate content can be displayed based on the relationship between the keywords and the candidate content in the database.
  • the input method framework in the mobile phone can provide convenient input method implementation.
  • both large screens and mobile phones can be added to the distributed networking, and device discovery, communication connection establishment, and data transmission can be implemented in the distributed networking. Since joining the distributed networking is a relatively common technology, it will not be repeated here.
  • in a possible implementation, the database of the large screen (such as its candidate vocabulary) and the database of the mobile phone (such as its candidate vocabulary) can be synchronized, enabling the large screen and the mobile phone to share each other's databases, so that the user can select a more convenient candidate word based on the candidate-word databases of both the large screen and the mobile phone.
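The database synchronization described above can be pictured as merging the two candidate-word tables. This is a minimal sketch with made-up vocabularies, not the patent's actual synchronization mechanism.

```python
# Merge two keyword -> candidate-word maps so either device can offer
# candidates from both vocabularies, de-duplicating repeated candidates.
def merge_candidates(screen_db, phone_db):
    merged = {}
    for db in (screen_db, phone_db):
        for kw, words in db.items():
            bucket = merged.setdefault(kw, [])
            for w in words:
                if w not in bucket:
                    bucket.append(w)
    return merged

screen_db = {"he": ["hello", "help"]}
phone_db = {"he": ["hello", "hey"], "wo": ["world"]}
merged = merge_candidates(screen_db, phone_db)
print(merged["he"])  # -> ['hello', 'help', 'hey']
print(merged["wo"])  # -> ['world']
```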
  • during implementation, the large screen can start a local input method channel through the remote input method framework, and the remote input method framework (input method framework, IMF) queries the distributed networking framework for the auxiliary AA in the current distributed networking (it should be understood that the auxiliary AA is used here as an example; in practice, it can be any application process in the mobile phone that carries the relevant capability).
  • the mobile phone returns the RPC object of the auxiliary AA to the large screen, and then the large screen calls the interface to pass the RPC object of the input channel of the large screen to the mobile phone. Subsequently, the mobile phone can synchronize editing status information to the large screen through the RPC object of the input channel of the large screen, and the large screen can synchronize editing status information to the mobile phone through the RPC object of the input channel of the mobile phone.
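The mutual holding of RPC objects can be illustrated with an in-process stand-in, where each side keeps a handle to the other's input channel and pushes edit-state updates through it. Real RPC, device discovery, and AA processes are out of scope; all names here are illustrative.

```python
# Each InputChannel holds the peer's "RPC object" (here, a plain reference)
# and synchronizes its edit state to the peer on every local edit.
class InputChannel:
    def __init__(self, name):
        self.name = name
        self.state = {"text": "", "cursor": 0}
        self.peer = None  # the other side's input-channel handle

    def bind(self, peer):
        # exchange handles, as in the RPC-object exchange described above
        self.peer, peer.peer = peer, self

    def local_edit(self, text, cursor):
        self.state = {"text": text, "cursor": cursor}
        if self.peer:  # synchronize editing status to the peer
            self.peer.state = dict(self.state)

screen = InputChannel("large screen")
phone = InputChannel("mobile phone")
screen.bind(phone)
phone.local_edit("Hello,", 6)  # typing on the phone...
print(screen.state)            # -> {'text': 'Hello,', 'cursor': 6}
```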
  • the auxiliary AA in the mobile phone can further instruct the notification manager to pop up the notification.
  • after the mobile phone receives the user's click confirming the notification, it can further pop up an input box in the window of the mobile phone, pull up the local input method in the local input method framework of the mobile phone, and then synchronize the content input by the user using the local input method to the remote input method framework service of the large screen, so that the edit box on the large screen can display the content of the mobile phone's input box synchronously.
  • the keywords and other information in the edit box of the mobile phone can be synchronized to the edit box of the large screen, which can improve the efficiency of input on the large screen.
  • in some scenarios, the auxiliary input of mobile phone A may be interrupted, and mobile phone A cannot continue to assist the large-screen input.
  • the user who uses the mobile phone A does not want to continue to use the mobile phone A to assist the input on the large screen.
  • the user may want to switch to another auxiliary device to assist the large-screen input. For example, to switch to mobile phone B to assist the large-screen input, the user needs to use the large-screen remote control again, click the large-screen edit box again, and re-trigger the connection between the large screen and mobile phone B, making the process more complicated.
  • an embodiment of the present application provides a device communication method.
  • other auxiliary devices in the same distributed network as the large screen and the mobile phone can preempt the auxiliary input of the mobile phone and, based on the content already input by the mobile phone, continue to assist the large-screen input. During this process, the user does not need to use the remote control or another device to select the large-screen edit box again, thereby realizing convenient and efficient auxiliary large-screen input.
  • FIG. 8 shows a schematic diagram of a specific system architecture of a device communication method in which multiple devices preempt input according to an embodiment of the present application.
  • a distributed network including a large screen, mobile phone A, and mobile phone B is taken as an example to exemplarily illustrate the process of mobile phone B preempting the auxiliary input while mobile phone A is performing auxiliary input.
  • the large screen in the embodiment of the present application may have the capability of requesting remote input, and both mobile phone A and mobile phone B may have a distributed input method auxiliary AA.
  • the device communication method in this embodiment of the present application may include a pull-up process and a preemption process.
  • the large screen can establish a connection with mobile phone A and mobile phone B, and mobile phone A confirms the auxiliary large screen input.
  • mobile phone B can preempt mobile phone A to use mobile phone B to assist large-screen input.
  • during the pull-up process, the user can click the edit box of the large screen through the large-screen remote control; the edit box of the large screen requests a remote input method from the input method framework of the large screen, and the large screen discovers mobile phone A and mobile phone B in the distributed networking.
  • the large screen can connect to the auxiliary AA of mobile phone A and the auxiliary AA of mobile phone B respectively, and the data channel interface of the large screen can be passed to the auxiliary AA of mobile phone A and the auxiliary AA of mobile phone B respectively.
  • Both mobile phone A and mobile phone B can pop up a notification, and the notification can be used to indicate that the large screen requests auxiliary input.
  • the user can confirm in the notification of mobile phone A that mobile phone A is used to assist the large-screen input, and notify the large screen that the current preempting device is mobile phone A.
  • An edit box for assisting large-screen input can pop up in mobile phone A, and the user can pull up the local input method of the mobile phone in the editing box of mobile phone A to assist large-screen input.
  • the user may input "Hello,” in the edit box of the mobile phone A, and "Hello,” may be simultaneously displayed on the large screen.
  • the user may confirm in the notification of mobile phone B that mobile phone B is used to assist the large-screen input, and notify the large screen that the current preempting device is mobile phone B.
  • the large screen can broadcast to mobile phone A and mobile phone B in the distributed network that the current preempting device is mobile phone B. If mobile phone A does not perform the preemption step again, the edit box used for auxiliary input in mobile phone A can be hidden.
  • in mobile phone B, an edit box for assisting large-screen input pops up, and the user can pull up the local input method of mobile phone B in the edit box to assist the large-screen input.
  • the content previously synchronized from mobile phone A to the large screen can also be displayed in the edit box of mobile phone B. For example, if the edit box of the large screen has been synchronized with "Hello," from the edit box of mobile phone A, the "Hello," can be synchronized to the edit box popped up on mobile phone B.
  • the notification on mobile phone B can be hidden first; during the preemption process, the user can trigger the display of the hidden notification in mobile phone B and realize the preemption through the notification. For example, after the user clicks the notification on mobile phone A to confirm the selection, the notification on mobile phone B can be hidden in the notification bar. When the user wants to use mobile phone B for preemptive input, the user can pull down the notification bar of mobile phone B so that the notification is displayed in mobile phone B, and click the notification of mobile phone B to realize the preemption by mobile phone B.
  • the mobile phone A can preempt the auxiliary large-screen input again based on the above-mentioned similar process of the mobile phone B, which will not be repeated here.
  • mobile phone A may be in a state in which it cannot assist the large-screen input, such as being on a call, or mobile phone A may be in a state in which it can assist the large-screen input.
  • the mobile phone B can initiate preemption at any time, and the embodiment of the present application does not limit the time when the preemption occurs.
  • mobile phone A may also request mobile phone B to preempt.
  • for example, an elderly user holding mobile phone A assists the large-screen input but may input at a slower speed. If the elderly user wishes to request a younger user holding mobile phone B to assist the large-screen input, mobile phone A can send a request to mobile phone B requesting mobile phone B to preempt the auxiliary input, and mobile phone B can preempt based on the request of mobile phone A.
  • the large-screen can also initiate preemption.
  • the user clicks the edit box on the large-screen with the remote control, and the large-screen can broadcast to all auxiliary devices in the distributed network that the current preemptive device ID is the large-screen ID.
  • the devices in the distributed network judge the preempting device ID: the large screen judges that the current preempting device is itself, pulls up the local input method of the large screen, and the user can use the remote control to input in the edit box of the large screen; the other auxiliary devices judge that they are not the current preempting device and can hide their input methods.
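The preemption rule described above, in which the current preempting device ID is broadcast and each device shows its input method only if the ID matches its own, can be sketched as follows; device names and IDs are illustrative.

```python
# Each device compares the broadcast preempting-device ID with its own ID:
# match -> pull up the local input method; mismatch -> hide it.
class Device:
    def __init__(self, device_id):
        self.device_id = device_id
        self.keyboard_visible = False

    def on_preempt_broadcast(self, preempting_id):
        self.keyboard_visible = (preempting_id == self.device_id)

devices = [Device("large screen"), Device("mobile phone A"), Device("mobile phone B")]

def broadcast(preempting_id):
    for d in devices:
        d.on_preempt_broadcast(preempting_id)

broadcast("mobile phone A")  # phone A confirms auxiliary input
broadcast("mobile phone B")  # phone B preempts
print([d.keyboard_visible for d in devices])  # -> [False, False, True]
```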
  • the embodiment corresponding to FIG. 8 is a possible implementation manner of the embodiment of the present application.
  • for example, the user may select the virtual keyboard under an edit box provided by an application on the large screen to trigger the subsequent auxiliary large-screen input process, or the user may trigger the auxiliary large-screen input process on the mobile phone; this is not specifically limited in this embodiment of the present application.
  • the following is an exemplary description of the user interface for the interaction between the large screen and the mobile phone.
  • FIGS. 9-10 show schematic diagrams of user interfaces in which a user triggers an auxiliary input.
  • FIG. 9 shows a user interface diagram of a large screen.
  • the user can use the remote controller 901 to select the edit box 902 in the large screen, and then can trigger the execution of the subsequent process of the mobile phone assisting the large screen input in the embodiment of the present application.
  • the user can use the remote controller 901 to select any content 902 in the virtual keyboard in the large screen, and then can trigger the execution of the subsequent process of the mobile phone assisting the large screen input in the embodiment of the present application.
  • the specific manner in which the mobile phone assists the large-screen input will be described in the subsequent embodiments, and will not be repeated here.
  • FIG. 9 shows a schematic diagram of setting an edit box in a user interface diagram of a large screen.
  • the user interface of the large screen may include multiple edit boxes, and the user can trigger any edit box to trigger the subsequent process of mobile phone-assisted large-screen input in the embodiment of the present application, which is not specifically limited in the embodiment of the present application.
  • FIG. 10 shows a user interface diagram of a mobile phone.
  • the user can display the user interface shown in figure a of FIG. 10 by pulling down on the main screen of the mobile phone, etc. The user interface shown in figure a of FIG. 10 may include one or more of the following functions of the mobile phone: WLAN, Bluetooth, Torch, Mute, Airplane Mode, Mobile Data, Wireless Screencasting, Screenshot, or Auxiliary Input 1001.
  • the auxiliary input 1001 may be the function of assisting the large-screen input of the mobile phone in the embodiment of the application.
  • the mobile phone can search for devices such as large screens in the same distributed network, obtain the search box in the large screen, and establish a communication connection with the large screen.
  • the mobile phone can further display the user interface shown in figure c of FIG. 10, in which an edit box for assisting large-screen input can be displayed, and the user can assist the large-screen input based on the edit box.
  • the mobile phone can also display the user interface shown in b of Figure 10.
  • the user interface shown in the figure can display multiple large-screen identifiers, and a large-screen identifier can be the device number, user name, or nickname of the large screen.
  • the user can select a large screen (for example, click large screen A or large screen B) in the user interface shown in figure b of FIG. 10 and enter the user interface shown in figure c of FIG. 10; this is not specifically limited in the embodiment of the present application.
  • the large screen can search for an auxiliary device (such as a mobile phone) with auxiliary input capability in the distributed network, and automatically determine the mobile phone used for auxiliary input, or Send notifications to all mobile phones found in the distributed network.
  • the large screen can automatically select the auxiliary input device as the mobile phone.
  • the large screen can automatically select the mobile phone set as the default auxiliary input as the auxiliary input device.
  • if the large screen finds that there are multiple mobile phones in the distributed networking, but among the multiple mobile phones there is a mobile phone that the user selected for auxiliary input the last time the user performed auxiliary input, the large screen can automatically select the mobile phone selected during the last auxiliary input as the auxiliary input device.
  • if the large screen finds that there are multiple mobile phones in the distributed network, and the large screen obtains the mobile phone most frequently selected by the user for auxiliary input, the large screen can automatically select that mobile phone as the auxiliary input device.
  • if the large screen finds that there are multiple mobile phones in the distributed networking, but there is a mobile phone logged in with the same user account as the large screen, the large screen can automatically select the mobile phone with the same user account as the auxiliary input device.
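The automatic selection rules above can be combined into one policy. The priority order used here (single device, last used, most frequent, same account) and all names are illustrative assumptions; the patent lists the rules without fixing an order.

```python
# Sketch of the large screen's automatic auxiliary-device selection policy.
def pick_auxiliary(phones, screen_account, last_used=None, usage_count=None):
    if len(phones) == 1:                       # only one phone in the network
        return phones[0]["id"]
    if last_used in [p["id"] for p in phones]:  # phone used last time
        return last_used
    if usage_count:                            # most frequently selected phone
        candidates = [p["id"] for p in phones if p["id"] in usage_count]
        if candidates:
            return max(candidates, key=lambda pid: usage_count[pid])
    for p in phones:                           # same account as the large screen
        if p.get("account") == screen_account:
            return p["id"]
    return None  # fall back to notifying all phones

phones = [{"id": "A", "account": "user1"}, {"id": "B", "account": "user2"}]
print(pick_auxiliary(phones, "user2"))                    # -> 'B' (same account)
print(pick_auxiliary(phones, "x", last_used="A"))         # -> 'A' (last used)
print(pick_auxiliary(phones, "x", usage_count={"B": 5}))  # -> 'B' (most frequent)
```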
  • One or more mobile phones can be connected to the distributed network.
  • the mobile phone can assist the large screen for input.
  • the distributed networking may include multiple mobile phones, and the multiple mobile phones may implement the preemption process described in the procedure embodiment of the present application.
  • FIGS. 11-23 show the process of a device preempting to assist the large-screen input, taking a large screen, mobile phone A, and mobile phone B in a distributed network as an example.
  • FIG. 11 shows a schematic diagram of a user interface of a large screen.
  • the large screen can be connected to the auxiliary AA of mobile phone A and the auxiliary AA of mobile phone B, and request auxiliary input from mobile phone A and mobile phone B.
  • a notification can pop up in both mobile phone A and mobile phone B. The notification is used to prompt the large screen to request auxiliary input.
  • FIG. 11 shows a user interface diagram of the notification received in mobile phone A or mobile phone B.
  • FIG. 12 shows a schematic diagram of an interface for determining an auxiliary large-screen input by mobile phone A.
  • if the user selects mobile phone A for auxiliary input, the user can click the OK button in the notification of mobile phone A.
  • the large screen can prompt that the current preempting device is mobile phone A. If no other device preempts within a period of time, it can be confirmed that mobile phone A assists the large-screen input. It can be understood that, in another possible implementation, in the process of mobile phone A confirming the auxiliary input, the preemption may not be prompted on the large screen, and users watching the large screen may not perceive the preemption process.
  • the user interface corresponding to FIG. 11 and FIG. 12 may not be displayed, and the large screen can directly confirm that mobile phone A assists the large-screen input.
  • FIG. 13 shows a schematic diagram of an interface for using mobile phone A to assist large-screen input.
  • an editing box for auxiliary input pops up in the mobile phone A, and then the user can assist the large-screen input in the editing box.
  • the user can enter "Hello," in the edit box of mobile phone A, and correspondingly, as shown in FIG. 14, the "Hello," in the edit box of mobile phone A can be displayed synchronously in the edit box of the large screen.
  • the edit box on the large screen as shown in FIG. 14 can simultaneously display states such as deletion, highlight selection or cursor movement performed in the edit box of mobile phone A.
  • the auxiliary input of mobile phone A may be interrupted for some reasons.
  • for example, mobile phone A receives a phone call during the auxiliary input process, or mobile phone A receives a video or voice call during the auxiliary input process.
  • the user wants to switch the device to input to the large screen during the auxiliary input process, it involves the process of preempting the input.
  • The preempting device can be mobile phone B, or the preempting device can be the large screen itself.
  • Figures 15-17 show schematic interface diagrams of mobile phone B performing preemption input on a large screen.
  • FIG. 15 shows a schematic interface diagram of mobile phone B pulling down the notification bar to preempt.
  • The user can pull down the notification bar of mobile phone B, and the notification requesting auxiliary input that mobile phone B received from the large screen can be displayed in the notification bar.
  • The user can trigger the notification prompting that the large screen requests auxiliary input to initiate preemption.
  • FIG. 16 shows a schematic diagram of a user interface of the large screen.
  • The auxiliary AA of mobile phone B notifies the large screen that the preempting device ID is the ID of mobile phone B.
  • The user interface of the large screen can pop up a notification indicating that the ID of the preempting device, mobile phone B, is mobile phone B00.
  • The large screen can broadcast the ID of the current preempting device, mobile phone B, to mobile phone A and mobile phone B.
  • The user interface of mobile phone A can display a notification indicating that the ID of the current preempting device, mobile phone B, is mobile phone B00.
  • Mobile phone B determines that the current preempting device is itself, so mobile phone B can pull up its local input method keyboard; mobile phone A determines that the current preempting device is not itself, so mobile phone A can hide its local input method keyboard; the large screen determines that the current preempting device is not itself, so the large screen hides its local input method keyboard.
  • Alternatively, the large screen can broadcast the current preempting device, mobile phone B, to mobile phone A and mobile phone B without popping up a notification indicating that the ID of mobile phone B is mobile phone B00 on the user interface.
  • In that case, mobile phone B determines that the current preempting device is itself and pulls up its local input method keyboard; mobile phone A determines that the current preempting device is not itself and hides its local input method keyboard; and the large screen determines that the current preempting device is not itself and hides its local input method keyboard.
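The preemption handling described above, in which each device compares the broadcast preempting-device ID with its own ID and then pulls up or hides its local input method keyboard, can be sketched as follows (the class name and ID strings are illustrative assumptions):

```python
class Device:
    """One participant in the distributed network: the large screen,
    mobile phone A, or mobile phone B."""
    def __init__(self, device_id):
        self.device_id = device_id
        self.keyboard_visible = False

    def on_preempt_broadcast(self, preempting_id):
        # Compare the broadcast preempting-device ID with this device's ID.
        if preempting_id == self.device_id:
            self.keyboard_visible = True   # pull up the local IME keyboard
        else:
            self.keyboard_visible = False  # hide the local IME keyboard

devices = [Device("large-screen"),
           Device("mobile-phone-A00"),
           Device("mobile-phone-B00")]

# The large screen broadcasts that mobile phone B is now preempting.
for d in devices:
    d.on_preempt_broadcast("mobile-phone-B00")
```

The same comparison also covers the case where the large screen preempts for itself: broadcasting the large-screen ID causes both phones to hide their keyboards while the large screen pulls up its own.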
  • The preemption by mobile phone B may also not be prompted on the large screen, and users watching the large screen may not perceive the preemption process of mobile phone B.
  • The user can also request mobile phone B to preempt through mobile phone A.
  • As shown in FIG. 18, mobile phone A may display an interface for requesting mobile phone B to assist input, and the user can request mobile phone B to assist the large-screen input by clicking the OK option in mobile phone A.
  • Mobile phone B can then be notified of mobile phone A's request, and the user can accept the request from mobile phone A in mobile phone B to realize the preemption of the auxiliary large-screen input.
  • When the large screen and mobile phone A are connected to the distributed networking and mobile phone A is assisting the large-screen input, if mobile phone B joins the distributed networking, mobile phone B can display an interface prompting the user whether to preempt the auxiliary input.
  • When the interface prompting the user whether to preempt the auxiliary input is displayed on mobile phone B, the user can click the OK option on mobile phone B to realize the preemption of the auxiliary large-screen input.
  • FIG. 20 shows a schematic diagram of a user interface of the mobile phone B.
  • the content "Hello,” in the edit box of mobile phone B can be synchronized to the edit box of the large screen.
  • the user can continue to input in the editing box of mobile phone B.
  • the editing box of mobile phone B can display "Hello, friend”.
  • the large-screen edit box can simultaneously display "Hello, friend” in the mobile phone B edit box.
  • The large screen itself can also preempt.
  • For example, mobile phone A is assisting the large-screen input and has entered "Hello," in the edit box of mobile phone A and the edit box of the large screen, and the user wishes to input directly on the large screen.
  • the user can select the edit box on the large-screen by using a remote control or the like.
  • The large screen can broadcast, to the large screen, mobile phone A, and mobile phone B in the distributed network, that the ID of the current preempting device is the large-screen ID.
  • The user interface shown in the right figure of Figure 18 can pop up on mobile phone A or mobile phone B, that is, a user interface prompting that the current preempting device is the large screen can be displayed.
  • Mobile phone A and mobile phone B may also not display a notification for prompting that the current preempting device is the large screen, which is not specifically limited in this embodiment of the present application.
  • The large screen, mobile phone A and mobile phone B can judge the preempting device ID: mobile phone A and mobile phone B determine that the current preempting device is not itself and hide their local input method keyboards; the large screen determines that the current preempting device is itself and pulls up its local input method keyboard, and the user can use the remote control to continue inputting in the edit box of the large screen.
  • The above user interface diagrams for when the mobile phone assists the large-screen input are all exemplary descriptions.
  • Part or all of the content of the large screen may also be synchronized to the mobile phone, which enables the mobile phone user to know the status of the large screen from the mobile phone interface.
  • FIG. 22 shows a user interface of a mobile phone.
  • When the user uses the mobile phone to assist the input on the large screen, all or part of the content of the large screen can be projected to the mobile phone.
  • The edit box of the mobile phone is displayed on the upper layer, so that when the user inputs in the edit box of the mobile phone, the state of the large-screen edit box can be seen simultaneously in the user interface of the mobile phone, and the user does not need to look up at the large screen to check the input state while assisting the input.
  • The above embodiments take the user assisting the large screen in inputting Chinese characters as an example.
  • The user may also assist the large screen in inputting English phrases or other forms of text.
  • the specific content of the input is not limited.
  • FIG. 23 shows a schematic flowchart of a specific mobile phone-assisted large-screen input process according to an embodiment of the present application.
  • The mobile phone-assisted large-screen input may include: a remote input method pull-up process and a remote input method preemption process.
  • In the process of pulling up the remote input method, a distributed network can be formed, and a large screen, two mobile phones (such as mobile phone A and mobile phone B), and a tablet can be connected to the distributed network.
  • the user can use a remote control or other device to click on the edit box on the large screen, so that the edit box on the large screen gets the focus.
  • The large screen can query all the auxiliary devices with auxiliary AA in the distributed network.
  • The large screen connects to the auxiliary AA of each auxiliary device, and transmits the data channel interface to each auxiliary AA.
  • Each auxiliary AA pops up a notification, and the notification is used to remind the user that the large screen requires auxiliary input.
  • Taking the case where the auxiliary devices queried by the large screen include mobile phone A and mobile phone B as an example, a notification can pop up in mobile phone A and mobile phone B and wait for the user to select and confirm.
  • Mobile phone A can pop up an edit box, the edit box pulls up the input method keyboard, and the user can input in the edit box of mobile phone A.
  • If the user wants to switch to another auxiliary device for input, such as mobile phone B, the remote input method preemption process is entered.
  • Mobile phone B determines that the current preempting device is itself, so mobile phone B pulls up its local input method keyboard and, at the same time, synchronizes the content of the large-screen edit box to the edit box of mobile phone B through the data channel interface, and the user can continue to input there.
  • Mobile phone A determines that the current preempting device is not itself, and mobile phone A can determine whether its input method keyboard has been pulled up. If mobile phone A has pulled up the input method keyboard, mobile phone A can hide the local input method keyboard.
  • In this way, any auxiliary device in the distributed networking can conveniently initiate preemption at any time, and after the preemption succeeds, it can assist the large screen with input.
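The remote input method pull-up flow above (the large screen queries the auxiliary devices that have auxiliary AA, connects each auxiliary AA, and hands over a data channel interface used to synchronize edit-box content back to the large screen) can be sketched roughly as follows. The device names, the `has_auxiliary_aa` flag, and the callback shape of the data channel are assumptions for illustration:

```python
class AuxiliaryDevice:
    """A candidate helper in the distributed network (phone or tablet)."""
    def __init__(self, name, has_auxiliary_aa):
        self.name = name
        self.has_auxiliary_aa = has_auxiliary_aa
        self.data_channel = None
        self.notified = False

    def connect(self, data_channel):
        # The large screen connects this device's auxiliary AA and hands
        # over the data channel interface used to sync edit-box content.
        self.data_channel = data_channel
        self.notified = True  # the auxiliary AA pops up a notification

class LargeScreen:
    def __init__(self, network):
        self.network = network
        self.edit_box = ""

    def on_edit_box_focused(self):
        # Query all auxiliary devices with auxiliary AA and connect them.
        candidates = [d for d in self.network if d.has_auxiliary_aa]
        for d in candidates:
            d.connect(self.receive_text)
        return candidates

    def receive_text(self, text):
        # Invoked through the data channel by whichever device is typing.
        self.edit_box = text

network = [AuxiliaryDevice("mobile phone A", True),
           AuxiliaryDevice("mobile phone B", True),
           AuxiliaryDevice("tablet", False)]   # assumed to lack auxiliary AA
screen = LargeScreen(network)
selected = screen.on_edit_box_focused()

# The user confirms on mobile phone A and types into its edit box.
selected[0].data_channel("Hello,")
```

Devices without auxiliary AA are never connected or notified, which matches the query step of the flow: only capable helpers take part in the pull-up process.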
  • When each functional module is divided according to each function, FIG. 24 shows a possible schematic structural diagram of a first device, a second device or a third device provided by an embodiment of the present application.
  • The first device, the second device or the third device includes: a display screen 2401 and a processing unit 2402.
  • the display screen 2401 is used to support the first device, the second device, or the third device to perform the display steps in the foregoing embodiments, or other processes of the technologies described in the embodiments of this application.
  • the display screen 2401 may be a touch screen or other hardware or a combination of hardware and software.
  • the processing unit 2402 is configured to support the first device, the second device, or the third device to perform the processing steps in the foregoing method embodiments, or other processes of the technologies described in the embodiments of this application.
  • the electronic device includes but is not limited to the unit modules listed above.
  • the specific functions that can be implemented by the above functional units also include but are not limited to the functions corresponding to the method steps described in the above examples.
  • For the detailed description of the other units of the electronic device, please refer to the detailed description of the corresponding method steps, which is not repeated here in this embodiment of the present application.
  • the first device, the second device or the third device involved in the above embodiments may include: a processing module, a storage module and a display screen.
  • the processing module is used to control and manage the actions of the first device, the second device or the third device.
  • the display screen is used to display content according to the instructions of the processing module.
  • the storage module is used for saving program codes and data of the first device, the second device or the third device.
  • The first device, the second device or the third device may also include an input module and a communication module, and the communication module is used to support communication between the first device, the second device or the third device and other network entities, so as to realize functions such as calls, data interaction and Internet access of the first device, the second device or the third device.
  • the processing module may be a processor or a controller.
  • the communication module may be a transceiver, an RF circuit or a communication interface or the like.
  • the storage module may be a memory.
  • the display module can be a screen or a display.
  • the input module can be a touch screen, a voice input device, or a fingerprint sensor.
  • the above-mentioned communication module may include an RF circuit, and may also include a wireless fidelity (Wi-Fi) module, a near field communication (NFC) module, and a Bluetooth module.
  • Communication modules such as the RF circuit, the NFC module, the Wi-Fi module, and the Bluetooth module may be collectively referred to as a communication interface.
  • the above-mentioned processor, RF circuit, display screen and memory can be coupled together through a bus.
  • FIG. 25 shows another possible structural schematic diagram of the first device, the second device, or the third device provided by the embodiment of the present application, including: one or more processors 2501, a memory 2502, a camera 2504 and a display screen 2503; the above devices may communicate through one or more communication buses 2506.
  • One or more computer programs 2505 are stored in the memory 2502 and are configured to be executed by the one or more processors 2501; the one or more computer programs 2505 include instructions for performing any of the above method steps.
  • the electronic device includes but is not limited to the above-mentioned devices.
  • the above-mentioned electronic device may also include a radio frequency circuit, a positioning device, a sensor, and the like.
  • Embodiment 21 A device communication method, applied to a system including a first device, a second device, and a third device, the method comprising:
  • the first device displays a first interface including a first edit box
  • the first device determines that the second device and the third device join the distributed networking
  • the first device displays a second interface, and the second interface includes a first option corresponding to the second device and a second option corresponding to the third device;
  • in response to the triggering operation on the first option, the first device sends an indication message to the second device;
  • the second device displays a third interface according to the instruction message, and the third interface includes a second edit box;
  • the editing state is synchronized to the first editing box.
  • Embodiment 22 The method of Embodiment 21, the second device comprising an interface service for synchronizing edit states between the first device and the second device.
  • Embodiment 23 The method of Embodiment 21 or 22, wherein the editing state includes one or more of the following: text content, cursor, or highlighting of text content.
  • Embodiment 24 The method according to any one of Embodiments 21-23, wherein the second device displays a third interface according to the instruction message, comprising:
  • the second device displays a notification interface in response to the instruction message;
  • the notification interface includes a third option for confirming an auxiliary input;
  • in response to the triggering operation on the third option, the second device displays the third interface.
  • Embodiment 25 The method according to any one of Embodiments 21-24, wherein the third interface further comprises: all or part of the content of the first interface.
  • Embodiment 26 The method according to Embodiment 25, wherein the second edit box is displayed in layers with all or part of the content of the first interface, and the second edit box is displayed on the upper layer of all or part of the content of the first interface.
  • Embodiment 27 The method according to any one of Embodiments 21-26, after the second device displays the third interface according to the instruction message, the method further includes:
  • in response to the triggering of the second edit box, the second device displays a virtual keyboard;
  • the second device displays the editing state in the second editing box according to the input operation received by the virtual keyboard and/or the second editing box.
  • Embodiment 28 The method according to any one of Embodiments 21-27, wherein the first device includes any one of the following: a TV, a large screen, or a wearable device; the second device or the third device includes any one of the following: a mobile phone, a tablet, or a wearable device.
  • Embodiment 29 A device communication method, applied to a system including a first device, a second device, and a third device, the method comprising:
  • the first device displays a first interface including a first edit box
  • the first device determines that the second device and the third device join the distributed networking
  • the first device determines that the second device is an auxiliary input device
  • the second device displays a third interface according to the instruction message, and the third interface includes a second edit box;
  • the editing state is synchronized to the first editing box.
  • Embodiment 210 A device communication method, applied to a system including a first device, a second device, and a third device, the method comprising:
  • the second device displays a fourth interface including options of the first device
  • in response to the selection operation on the option of the first device, the second device sends an indication message to the first device;
  • the first device displays a first interface including a first edit box
  • the second device displays a third interface, and the third interface includes a second edit box;
  • the editing state is synchronized to the first editing box.
  • Embodiment 211 A device communication method, applied to a first device, the method comprising:
  • the first device displays a first interface including a first edit box
  • the first device determines that the second device and the third device join the distributed networking
  • the first device displays a second interface, and the second interface includes a first option corresponding to the second device and a second option corresponding to the third device;
  • the first device sends an instruction message to the second device; the instruction message is used to instruct the second device to display a third interface, where the third interface includes The second edit box;
  • the editing state is synchronized to the first editing box.
  • Embodiment 212 A device communication method, applied to a second device, the method comprising:
  • the second device displays a fourth interface including options of the first device
  • in response to a selection operation on an option of the first device, the second device sends an instruction message to the first device; the instruction message is used to instruct the first device to display a first interface including a first edit box;
  • the second device displays a third interface, and the third interface includes a second edit box;
  • the editing state is synchronized to the first editing box.
  • Embodiment 213 A device communication system, comprising a first device, a second device, and a third device, where the first device is configured to perform the steps of the first device according to any one of Embodiments 21-29 and 210-212, the second device is configured to perform the steps of the second device according to any one of Embodiments 21-29 and 210-212, and the third device is configured to perform the steps of the third device according to any one of Embodiments 21-29 and 210-212.
  • Embodiment 214 A first device, comprising: at least one memory and at least one processor;
  • the memory is used to store program instructions
  • the processor is configured to invoke program instructions in the memory to cause the first device to perform the steps performed by the first device described in any one of Embodiments 21-29 and 210-212.
  • Embodiment 215 A second device, comprising: at least one memory and at least one processor;
  • the memory is used to store program instructions
  • the processor is configured to invoke program instructions in the memory to cause the second device to perform the steps performed by the second device described in any one of Embodiments 21-29 and 210-212.
  • Embodiment 216 A computer-readable storage medium having a computer program stored thereon, such that when the computer program is executed by a processor of a first device, the steps performed by the first device according to any one of Embodiments 21-29 and 210-212 are implemented;
  • or, when the computer program is executed by the processor of the second device, the steps performed by the second device according to any one of Embodiments 21-29 and 210-212 are implemented; or, when the computer program is executed by the processor of the third device, the steps performed by the third device according to any one of Embodiments 21-29 and 210-212 are implemented.
  • For Embodiments 21-29 and Embodiments 210-216, reference may be made to the descriptions in FIGS. 26-37.
  • When the large screen finds auxiliary devices with an auxiliary input function in the distributed network, the large screen can send a broadcast to all auxiliary devices with the auxiliary input function in the distributed network, informing all auxiliary devices that the large screen requires auxiliary input.
  • However, not all auxiliary devices may participate in assisting the large-screen input, and the large screen broadcasting to all auxiliary devices in the distributed network will cause certain interference to those auxiliary devices in the distributed network that do not participate in the auxiliary large-screen input.
  • the embodiments of the present application provide a device communication method, in which a user can select a target auxiliary device on a large screen, and then can send a notification to the target auxiliary device but not other auxiliary devices, thereby avoiding interference to other devices.
  • the user can select the target auxiliary device on the large screen, establish a communication connection with the target auxiliary device, and pop up an edit box for assisting input on the large screen in the target auxiliary device.
  • In this case, the notification interface may not be displayed in the target auxiliary device, and there is also no need for the user to trigger the notification.
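The targeted-notification improvement described above can be sketched as a small selection step: instead of broadcasting the auxiliary-input request to every capable device, the large screen notifies only the target device the user selected. The dictionary fields used here are illustrative assumptions:

```python
def notify_auxiliary_devices(devices, target=None):
    """Return the devices that receive the auxiliary-input request.

    With target=None this models the broadcast behavior (all capable
    devices are notified); with a target name it models the improved
    behavior where only the user-selected device is notified, avoiding
    interference to the others."""
    capable = [d for d in devices if d["has_auxiliary_aa"]]
    if target is None:
        return capable                                   # broadcast to all
    return [d for d in capable if d["name"] == target]   # targeted notify

devices = [{"name": "mobile phone A", "has_auxiliary_aa": True},
           {"name": "mobile phone B", "has_auxiliary_aa": True},
           {"name": "tablet", "has_auxiliary_aa": False}]

# The user selected mobile phone B on the large screen.
notified = notify_auxiliary_devices(devices, target="mobile phone B")
```

Only the selected device receives the request, so mobile phone A and the tablet are not disturbed by a notification they would never act on.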
  • FIG. 26 shows a schematic diagram of a specific system architecture of a device communication method according to an embodiment of the present application.
  • The large screen can search for devices with auxiliary input capability (such as a mobile phone with auxiliary AA) in the distributed network.
  • an interface including the identification of the device with auxiliary input capability may pop up on the large screen.
  • the identifier of the device with auxiliary input capability may be the device number, user name or nickname of the device with auxiliary input capability.
  • The large screen can transmit the input data interface to the input method management framework (IMF) of the large screen, and the input method management framework of the large screen can establish a connection with the auxiliary AA of the mobile phone.
  • The auxiliary AA of the mobile phone can simulate a click event to pull up the local input method of the mobile phone, and an input window including an edit box is displayed in the mobile phone.
  • the auxiliary AA can return a cross-process interface to the input method management framework of the large screen, and the input method management framework of the large screen can wrap the input data interface of the large screen across the process through the cross-process interface and pass it to the auxiliary AA of the mobile phone.
  • The auxiliary AA can synchronize the content in the edit box of the mobile phone to the large screen based on the input data interface of the large screen.
  • That is, the auxiliary AA of the mobile phone can call the input data interface of the large screen to synchronize the content in the mobile phone edit box to the edit box on the large screen.
  • the large screen can use the local input method of the large screen for input.
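The interaction above, in which the large screen's input method management framework wraps its input data interface and passes it across the process boundary to the mobile phone's auxiliary AA, which then calls that interface to synchronize edit-box content, can be sketched as follows. In-process callbacks stand in for the real cross-process interface, and the class names are assumptions:

```python
class InputMethodManagementFramework:
    """Sketch of the large screen's IMF: it owns the input data
    interface that writes into the large-screen edit box."""
    def __init__(self, screen_edit_box):
        self.screen_edit_box = screen_edit_box

    def input_data_interface(self, text):
        # Invoked (remotely, in the real system) by the phone's
        # auxiliary AA to update the large-screen edit box.
        self.screen_edit_box["content"] = text

class AuxiliaryAA:
    """Sketch of the phone-side auxiliary ability."""
    def __init__(self):
        self.remote_input = None
        self.phone_edit_box = {"content": ""}

    def cross_process_interface(self, input_data_interface):
        # The IMF hands over the wrapped input data interface here.
        self.remote_input = input_data_interface

    def on_user_input(self, text):
        self.phone_edit_box["content"] = text
        if self.remote_input:
            # Sync the phone's edit-box content to the large screen.
            self.remote_input(text)

screen_edit_box = {"content": ""}
imf = InputMethodManagementFramework(screen_edit_box)
aa = AuxiliaryAA()
aa.cross_process_interface(imf.input_data_interface)
aa.on_user_input("Hello,")
```

The key design point the sketch illustrates is the direction of the handshake: the large screen first hands its input data interface to the auxiliary AA, after which all synchronization flows phone-to-screen through that single interface.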
  • FIG. 26 is a possible implementation manner of the embodiment of the present application.
  • For example, the user may select the virtual keyboard under the edit box provided by an application on the large screen to trigger the subsequent auxiliary large-screen input process, or the user may trigger the auxiliary large-screen input process on the mobile phone, which is not specifically limited in this embodiment of the present application.
  • the following is an exemplary description of the user interface for the interaction between the large screen and the mobile phone.
  • FIGS. 27-28 show schematic diagrams of user interfaces in which a user triggers an auxiliary input.
  • FIG. 27 shows a user interface diagram of a large screen.
  • the user can use the remote control 2701 to select the edit box 2702 in the large screen, and then can trigger the execution of the subsequent process of the mobile phone-assisted large screen input in the embodiment of the present application.
  • Alternatively, the user can use the remote controller 2701 to select any content in the virtual keyboard on the large screen, and then trigger the execution of the subsequent process of mobile phone-assisted large-screen input in the embodiment of the present application.
  • the specific manner in which the mobile phone assists the large-screen input will be described in the subsequent embodiments, and will not be repeated here.
  • FIG. 27 shows a schematic diagram of setting an edit box in a user interface diagram of a large screen.
  • The user interface of the large screen may include multiple edit boxes, and the user can trigger any edit box to trigger the subsequent process of mobile phone-assisted large-screen input, which is not specifically limited in this embodiment of the present application.
  • Figure 28 shows a user interface diagram of a cell phone.
  • The user can display the user interface shown in Figure a of Figure 28 by pulling down on the main screen of the mobile phone, etc.
  • The user interface shown in Figure a of Figure 28 may include one or more of the following functions of the mobile phone: WLAN, Bluetooth, Torch, Mute, Airplane Mode, Mobile Data, Wireless Screencasting, Screenshot or Auxiliary Input 2801.
  • The auxiliary input 2801 may be the function of mobile phone-assisted large-screen input in this embodiment of the application.
  • the mobile phone can search for devices such as large screens in the same distributed network, obtain the search box in the large screen, and establish a communication connection with the large screen.
  • The mobile phone can further display the user interface shown in Figure c of Figure 28, in which an edit box for assisting large-screen input can be displayed, and the user can assist the large-screen input based on the edit box.
  • the mobile phone can also display the user interface shown in b of Figure 28.
  • The user interface shown in Figure b of Figure 28 can display multiple large-screen identifiers, and a large-screen identifier can be the device number, user name or nickname of the large screen.
  • The user can select a large screen (for example, click on large screen A or large screen B) in the user interface shown in Figure b of Figure 28, and enter the user interface shown in Figure c of Figure 28, which is not specifically limited in this embodiment of the present application.
  • FIGS. 29-34 show schematic diagrams of user interfaces for assisting the large-screen input on the mobile phone.
  • Figure 29 shows a user interface diagram of a large screen.
  • the user can use the method corresponding to Figure 27 to trigger the entry into the auxiliary input scene.
  • The large screen can search for auxiliary devices with auxiliary input capability in the distributed network, and display the found auxiliary devices on the large screen.
  • the identification of the auxiliary device may be displayed in any possible form on the large screen, for example, the identification of the auxiliary device may be displayed in a list, a picture, or a number.
  • The user can use a device such as a remote control to select "mobile phone B" on the large screen, and the large screen can subsequently interact with mobile phone B so that mobile phone B assists the large-screen input.
  • The large screen can also automatically determine the device used for auxiliary input.
  • For example, if the large screen finds only one mobile phone in the distributed networking, the large screen can automatically select that mobile phone as the auxiliary input device and not display the user interface shown in FIG. 29.
  • Otherwise, the large screen may display the user interface shown in Figure 29.
  • Alternatively, the large screen can automatically select the mobile phone set as the default auxiliary input as the auxiliary input device, and not display the user interface shown in Figure 29.
  • If the large screen finds that there are multiple mobile phones in the distributed networking, but among the multiple mobile phones there is a mobile phone that the user selected for auxiliary input the last time auxiliary input was performed, the large screen can automatically select the mobile phone selected during the last auxiliary input as the auxiliary input device, and not display the user interface shown in FIG. 29.
  • If the large screen finds that there are multiple mobile phones in the distributed network, and the large screen obtains the mobile phone most frequently selected by the user for auxiliary input, the large screen can automatically select that mobile phone as the auxiliary input device and not display the user interface shown in FIG. 29.
  • If the large screen finds that there are multiple mobile phones in the distributed networking, but there is a mobile phone logged in with the same user account as the large screen, the large screen can automatically select the mobile phone with the same user account as the auxiliary input device, and not display the user interface shown in Figure 29.
  • the large-screen user interface shown in FIG. 29 is not necessary, and the user interface shown in FIG. 29 may not be displayed.
  • This embodiment of the present application does not limit the specific form of the user interface shown in FIG. 29 and the manner of triggering the display of the user interface shown in FIG. 29 .
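The automatic selection rules above can be sketched as a simplified priority function covering the single-candidate, same-account, last-used and most-frequent rules. The priority order shown here is one plausible choice, as the embodiment does not fix one, and the field names are assumptions:

```python
def choose_auxiliary_device(phones, screen_account=None,
                            last_used=None, usage_count=None):
    """Pick an auxiliary input device automatically, or return None
    to fall back to showing the selection interface of FIG. 29.

    phones:         list of dicts, e.g. {"name": ..., "account": ...}
    screen_account: user account logged in on the large screen
    last_used:      name of the phone selected last time
    usage_count:    map from phone name to how often it was selected
    """
    if len(phones) == 1:
        return phones[0]["name"]           # only one candidate
    same = [p for p in phones
            if screen_account and p.get("account") == screen_account]
    if same:
        return same[0]["name"]             # same account as the screen
    names = [p["name"] for p in phones]
    if last_used in names:
        return last_used                   # device used last time
    if usage_count:
        return max(names, key=lambda n: usage_count.get(n, 0))
    return None                            # no rule applies: ask the user

selected = choose_auxiliary_device(
    [{"name": "mobile phone A", "account": "user1"},
     {"name": "mobile phone B", "account": "user2"}],
    screen_account="user2")
# selected == "mobile phone B"
```

Returning None rather than guessing keeps the fallback explicit: the large screen displays the device list of FIG. 29 only when no automatic rule produces a candidate.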
  • a notification for prompting the large screen to request auxiliary input may pop up in mobile phone B, and the user may trigger the notification in mobile phone B to confirm the auxiliary large-screen input, and further, As shown in the user interface shown in the middle diagram of Figure 30, an edit box for assisting large-screen input can pop up in the mobile phone B. Further, the user can trigger the editing box shown in the middle diagram of Figure 30 by clicking, etc., and the mobile phone B can display As shown in the user interface on the far right of FIG. 30 , the virtual keyboard (or soft keyboard) of the mobile phone can be displayed in the user interface, and the user can use the virtual keyboard of the mobile phone B to assist the large-screen input later.
  • mobile phone B may not receive a notification; instead, an edit box for assisting input on the large screen, as shown in the left diagram of FIG. 31, pops up directly. The user can trigger the edit box shown in the left diagram of FIG. 31 by clicking or the like, and mobile phone B can display the user interface shown in the right diagram of FIG. 31, in which the virtual keyboard (or soft keyboard) of the mobile phone can be displayed; the user can use the virtual keyboard of mobile phone B to assist the input on the large screen.
  • FIG. 32 shows a schematic diagram of a user interface in which the user assists the large-screen input in the editing box of the mobile phone B.
  • the user can input "lion" in the edit box of mobile phone B, and the cursor can be displayed after "lion" in the edit box of mobile phone B; as shown in the right diagram of FIG. 32, the large screen can synchronize "lion" and the cursor into its own edit box.
  • FIG. 33 shows a schematic diagram of a user interface in which the user can move the cursor in the edit box of the mobile phone B.
  • the user can move the cursor in the edit box of mobile phone B to the position before "lion" and add "old" in front of the cursor; as shown in the right diagram of FIG. 33, the large screen can synchronize the cursor position before "lion" and the "old" before the cursor into its own edit box.
  • FIG. 34 shows a schematic diagram of the user interface in which the user highlights a target word in the edit box of mobile phone B.
  • the user can highlight "Old" in the editing box of mobile phone B.
  • as shown in the right diagram of FIG. 34, the large screen can synchronize the highlighted "old" into its own edit box.
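The edit-state synchronization described above (text, cursor position, and highlighted selection in FIGS. 32-34) can be pictured as a small shared state exchanged between the two edit boxes. The patent discloses no implementation; the classes and field names below are illustrative assumptions only.

```python
from dataclasses import dataclass, replace
from typing import Optional, Tuple

# Hypothetical edit state shared by the phone's and the large screen's
# edit boxes: text content, cursor index, and an optional highlighted
# (selected) range, as in FIGS. 32-34. All names are illustrative.
@dataclass(frozen=True)
class EditState:
    text: str = ""
    cursor: int = 0                              # cursor sits before text[cursor]
    selection: Optional[Tuple[int, int]] = None  # highlighted span

class EditBox:
    """A minimal edit box that applies and re-broadcasts edit states."""
    def __init__(self):
        self.state = EditState()
        self.peers = []                        # edit boxes kept in sync

    def bind(self, other):
        self.peers.append(other)
        other.peers.append(self)

    def apply(self, state, _origin=None):
        self.state = state
        for peer in self.peers:                # propagate to the other device
            if peer is not _origin:
                peer.apply(state, _origin=self)

    def insert(self, s):                       # type text at the cursor
        st = self.state
        text = st.text[:st.cursor] + s + st.text[st.cursor:]
        self.apply(EditState(text, st.cursor + len(s)))

    def move_cursor(self, pos):
        self.apply(replace(self.state, cursor=pos, selection=None))

    def highlight(self, start, end):
        self.apply(replace(self.state, selection=(start, end)))

phone, screen = EditBox(), EditBox()
phone.bind(screen)
phone.insert("lion")      # FIG. 32: "lion" and the cursor appear on the screen
phone.move_cursor(0)      # FIG. 33: move the cursor before "lion"...
phone.insert("old ")      # ...and add "old" in front of it
phone.highlight(0, 3)     # FIG. 34: "old" is highlighted on both devices
```

Because the same immutable state object is applied on both sides, either device can originate a change and the other simply mirrors it.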
  • the user interface of mobile phone A may be similar to the user interface of mobile phone B, which is not repeated here.
  • the above user interface diagrams when the mobile phone assists the large-screen input are all exemplary descriptions.
  • part or all of the content in the large screen may also be synchronized. This enables mobile phone users to know the status of the large screen based on the mobile phone interface.
  • FIG. 35 shows a user interface of a mobile phone (such as mobile phone A or mobile phone B above).
  • all or part of the content of the large screen can be projected to the mobile phone, for example, content related to the edit box of the large screen is displayed on the mobile phone, and the edit box of the mobile phone is displayed on the upper layer of the large-screen content, so that when the user inputs in the edit box of the mobile phone, the user can simultaneously see the state of the edit box on the large screen in the user interface of the mobile phone.
  • the user assists in inputting Chinese characters on the large screen as an example.
  • the user may assist the large screen in inputting English phrases or other forms of text input.
  • the specific content of the input is not limited.
  • when each functional module is divided according to each function, FIG. 36 shows a possible schematic structural diagram of a first device, a second device or a third device provided by an embodiment of the present application.
  • the first device, the second device or the third device includes: a display screen 3601 and a processing unit 3602.
  • the display screen 3601 is used to support the first device, the second device, or the third device to perform the display steps in the foregoing embodiments, or other processes of the technologies described in the embodiments of this application.
  • the display screen 3601 may be a touch screen or other hardware or a combination of hardware and software.
  • the processing unit 3602 is configured to support the first device, the second device, or the third device to perform the processing steps in the foregoing method embodiments, or other processes of the technologies described in the embodiments of this application.
  • the electronic device includes but is not limited to the unit modules listed above.
  • the specific functions that can be realized by the above functional units also include but are not limited to the functions corresponding to the method steps described in the above examples.
  • for the detailed description of other units of the electronic device, refer to the detailed description of the corresponding method steps; details are not repeated here in this embodiment of the present application.
  • the first device, the second device or the third device involved in the above embodiments may include: a processing module, a storage module and a display screen.
  • the processing module is used to control and manage the actions of the first device, the second device or the third device.
  • the display screen is used to display content according to the instructions of the processing module.
  • the storage module is used for saving program codes and data of the first device, the second device or the third device.
  • the first device, the second device or the third device may also include an input module and a communication module, and the communication module is used to support communication between the first device, the second device or the third device and other network entities, to implement functions such as calls, data interaction and Internet access of the first device, the second device or the third device.
  • the processing module may be a processor or a controller.
  • the communication module may be a transceiver, an RF circuit or a communication interface or the like.
  • the storage module may be a memory.
  • the display module can be a screen or a display.
  • the input module can be a touch screen, a voice input device, or a fingerprint sensor.
  • the above-mentioned communication module may include an RF circuit, and may also include a wireless fidelity (Wi-Fi) module, a near field communication (NFC) module, and a Bluetooth module.
  • communication modules such as the RF circuit, the NFC module, the Wi-Fi module, and the Bluetooth module may be collectively referred to as a communication interface.
  • the above-mentioned processor, RF circuit, display screen and memory can be coupled together through a bus.
  • FIG. 37 shows another possible schematic structural diagram of the first device, the second device, or the third device provided by the embodiment of the present application, including: one or more processors 3701, a memory 3702, a camera 3704 and a display screen 3703; the above devices may communicate through one or more communication buses 3706.
  • one or more computer programs 3705 are stored in the memory 3702 and are configured to be executed by the one or more processors 3701; the one or more computer programs 3705 include instructions for performing the display method of any of the above steps.
  • the electronic device includes but is not limited to the above-mentioned devices.
  • the above-mentioned electronic device may also include a radio frequency circuit, a positioning device, a sensor, and the like.
  • Embodiment 31 A device communication method, applied to a system comprising a first device and a second device, the method comprising:
  • the first device displays a first interface including a first edit box
  • the second device displays a second interface according to the instruction message, the second interface includes a second edit box;
  • the first device synchronizes the keyword to the first edit box
  • the first device determines a candidate word corresponding to the keyword
  • the second device acquires the candidate word, and displays a third interface, where the third interface includes the candidate word.
  • Embodiment 32 The method of Embodiment 31, the second device comprising an interface service for synchronizing edit states between the first device and the second device.
  • Embodiment 33 The method of Embodiment 32, the editing state comprising one or more of the following: textual content, a cursor, or highlighting of textual content.
  • Embodiment 34 The method according to any one of Embodiments 31-33, wherein the second device displays a second interface according to the instruction message, comprising:
  • the second device displays a notification interface in response to the instruction message;
  • the notification interface includes an option to confirm the auxiliary input;
  • in response to a triggering operation on the option, the second device displays the second interface.
  • Embodiment 35 The method of any one of Embodiments 31-34, wherein the second interface further comprises: all or part of the content of the first interface.
  • Embodiment 36 The method according to Embodiment 35, wherein the second edit box and all or part of the content of the first interface are displayed in layers, and the second edit box is displayed on the upper layer of all or part of the content of the first interface.
  • Embodiment 37 The method according to any one of Embodiments 31-36, after the second device displays the second interface according to the instruction message, the method further includes:
  • in response to the triggering of the second edit box, the second device displays a virtual keyboard
  • the second device displays the editing state in the second editing box according to the input operation received by the virtual keyboard and/or the second editing box.
  • Embodiment 38 The method according to any one of Embodiments 31-37, wherein the first device includes any one of the following: a television, a large screen, or a wearable device; the second device includes any one of the following: a mobile phone, a tablet or a wearable device.
  • Embodiment 39 The method according to any one of Embodiments 31-38, wherein the third interface further comprises a local candidate word associated by the second device based on the keyword, and the display mode of the candidate word and the local candidate word on the third interface includes any of the following:
  • the candidate word and the local candidate word are displayed in columns in the third interface
  • the candidate word is displayed in front of the local candidate word in the third interface
  • the candidate word is displayed behind the local candidate word in the third interface
  • the candidate word and the local candidate word are mixed and displayed in the third interface
  • the candidate words and the local candidate words are distinguished by different identifiers in the third interface.
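The display modes listed above amount to different ways of merging the large screen's candidate list with the phone's local candidate list. A minimal sketch under assumed names (the mode strings are illustrative, not from the disclosure):

```python
# Hypothetical merge of the large screen's candidate words ("remote")
# with the phone's local candidate words, mirroring the display modes of
# Embodiment 39. Mode names are illustrative assumptions.
def merge_candidates(remote, local, mode="remote_first"):
    if mode == "remote_first":   # screen candidates displayed in front
        return remote + [w for w in local if w not in remote]
    if mode == "local_first":    # screen candidates displayed behind
        return local + [w for w in remote if w not in local]
    if mode == "mixed":          # the two lists interleaved, deduplicated
        out = []
        for r, l in zip(remote, local):
            out += [r, l]
        out += remote[len(local):] + local[len(remote):]
        return [w for i, w in enumerate(out) if w not in out[:i]]
    if mode == "tagged":         # distinguished by a source identifier
        return [(w, "screen") for w in remote] + [(w, "phone") for w in local]
    raise ValueError(mode)
```

The column display mode would simply render the two lists side by side without merging them at all.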
  • Embodiment 310 The method of any one of Embodiments 31-39, wherein the ranking of the candidate words is related to historical user behavior in the first device.
  • Embodiment 311 The method of any one of Embodiments 31-39, 310, further comprising:
  • the second device displays any one of the candidate words in the second edit box in response to a user triggering any one of the candidate words.
  • Embodiment 312 A device communication method, applied to a system including a first device and a second device, the method comprising:
  • the second device displays a fourth interface including options of the first device
  • in response to the selection operation of the option of the first device, the second device sends an indication message to the first device;
  • the first device displays a first interface including a first edit box
  • the second device displays a second interface, the second interface includes a second edit box
  • the first device synchronizes the keyword to the first edit box
  • the first device determines a candidate word corresponding to the keyword
  • the second device acquires the candidate word, and displays a third interface, where the third interface includes the candidate word.
  • Embodiment 313 A device communication method, applied to a first device, the method comprising:
  • the first device displays a first interface including a first edit box
  • the first device sends an instruction message to the second device;
  • the instruction message is used to instruct the second device to display a second interface, and the second interface includes a second edit box;
  • the first device synchronizes the keyword to the first edit box
  • the first device determines a candidate word corresponding to the keyword
  • the first device synchronizes the candidate words to the second device.
  • Embodiment 314 A device communication method, applied to a second device, the method comprising:
  • the second device receives the instruction message from the first device; the first device displays a first interface including a first edit box;
  • the second device displays a second interface according to the instruction message, the second interface includes a second edit box;
  • the second device synchronizes the keyword to the first edit box, so that the first device determines a candidate word corresponding to the keyword;
  • the second device acquires the candidate word, and displays a third interface, where the third interface includes the candidate word.
  • Embodiment 315 A device communication method, applied to a second device, the method comprising:
  • the second device displays a fourth interface including options of the first device
  • in response to a selection operation of an option of the first device, the second device sends an instruction message to the first device; the instruction message is used to instruct the first device to display a first interface including a first edit box;
  • the second device displays a second interface, and the second interface includes a second edit box;
  • the second device synchronizes the keyword to the first edit box, so that the first device determines a candidate word corresponding to the keyword;
  • the second device acquires the candidate word, and displays a third interface, where the third interface includes the candidate word.
  • Embodiment 316 A device communication system, comprising a first device and a second device, the first device being configured to perform the steps performed by the first device described in any one of Embodiments 31-39 and 310-315, and the second device being configured to perform the steps performed by the second device described in any one of Embodiments 31-39 and 310-315.
  • Embodiment 317 A first device, comprising: at least one memory and at least one processor;
  • the memory is used to store program instructions
  • the processor is configured to invoke the program instructions in the memory to cause the first device to perform the steps performed by the first device described in any one of Embodiments 31-39 and 310-315.
  • Embodiment 318 A second device, comprising: at least one memory and at least one processor
  • the memory is used to store program instructions
  • the processor is configured to invoke the program instructions in the memory to cause the second device to perform the steps performed by the second device described in any one of Embodiments 31-39 and 310-315.
  • Embodiment 319 A computer-readable storage medium having a computer program stored thereon, wherein when the computer program is executed by a processor of a first device, the steps performed by the first device described in any one of Embodiments 31-39 and 310-315 are implemented; or, when the computer program is executed by a processor of a second device, the steps performed by the second device described in any one of Embodiments 31-39 and 310-315 are implemented.
  • for Embodiment 31 to Embodiment 39 and Embodiment 310 to Embodiment 319, reference may be made to the descriptions in FIGS. 38 to 53.
  • the text content in the mobile phone edit box can be synchronized to the large-screen side in real time to realize quick input, thereby improving user input efficiency.
  • the text content in the edit box of the mobile phone can be synchronized to the large screen, but the content in the large screen cannot be synchronized to the mobile phone.
  • a keyword may be, for example, part of a movie name, a music name, or a contact, etc.
  • the keyword can be synchronized into the large-screen edit box, and the large screen can use the keyword to obtain the target entry (such as the complete movie name, complete music name, or complete contact). At this time, because the target entry on the large screen cannot be synchronized to the mobile phone, and the candidate thesaurus of the mobile phone itself usually differs from the program content on the large screen, the candidate word usually selected by the user on the mobile phone differs from the candidate word usually selected by the user on the large screen. For example, the candidate word usually selected on the large screen may be a complete program name, while the candidate words usually selected on mobile phones may be general words such as "Western". Therefore, the user needs to input the complete target entry on the mobile phone, or manually select the target entry on the large screen by means of another hardware device (e.g., a remote control), and the input efficiency is low.
  • FIG. 38 shows a schematic diagram of a user interface of a mobile phone assisting large-screen input.
  • the schematic diagram of the interface of the mobile phone is shown in the left picture of Figure 38.
  • when the user wants to search for the TV series "I love this land" on the big screen, the user enters the keyword "I love" in the edit box of the mobile phone, and as shown in FIG. 38, the keyword "I love" in the edit box of the mobile phone can be displayed synchronously on the large screen.
  • the large screen associates the candidate word "I love this land" with the keyword "I love", but because the mobile phone cannot synchronize the candidate word "I love this land", the user still needs to enter the complete text "I love this land" on the mobile phone and click Finish, or use the remote control to select the candidate word "I love this land" on the large screen, before "I love this land" can be searched; the input efficiency is low.
  • an embodiment of the present application provides a device communication method.
  • a user uses a mobile phone to assist input on a large screen
  • the user enters text in an inputable edit box (such as a search edit box, drop-down box, or combo box, etc.) of the mobile phone.
  • the large screen can synchronize the text and, according to the specific input scene, combined with the user habits collected by the large screen, the candidate word bank or dictionary of the large screen, and the like, associate candidate words corresponding to the text, and synchronize the candidate words that the large screen associates with the text to the mobile phone, so that the user can select, on the mobile phone, the candidate words associated by the large screen to realize convenient input.
  • FIG. 39 shows a schematic diagram of a specific system architecture of a device communication method provided by an embodiment of the present application.
  • the embodiment of the present application uses a distributed networking including a large screen (or a large screen device) and a mobile phone (or an auxiliary device) as an example to describe the process of the mobile phone assisting the large screen input.
  • the auxiliary AA, notification manager, window manager and input method framework can be set in the auxiliary device (mobile phone).
  • the edit box in the large screen can be used to trigger auxiliary input, receive text input from a remote control, or receive auxiliary input from a mobile phone.
  • Candidate words may be stored in the local or cloud thesaurus of the large screen, and the candidate words may include, for example, a program name and/or an application name in the large screen, and the like.
  • the input method framework in the large screen, the auxiliary AA in the mobile phone, the notification in the mobile phone, the window in the mobile phone, and the input method framework in the mobile phone can refer to the above records, and will not be repeated here.
  • the user can use the remote control and other devices to select the edit box of the large screen, and the large screen can request the input method framework to connect to the auxiliary AA in the mobile phone.
  • the auxiliary AA in the mobile phone can instruct the notification manager to pop up a notification.
  • when the mobile phone receives the user's click on the notification to confirm the auxiliary input, it can further pop up an edit box in the window of the mobile phone and pull up the input method framework of the mobile phone.
  • the user can input data in the edit box provided by the input method framework of the mobile phone. For example, the user can enter the word "I love" on the mobile phone, the word "I love" can be synchronized to the input method framework of the large screen, and the edit box of the large screen displays "I love" synchronously.
  • the large screen monitors text changes in the large-screen edit box, can obtain the word "I love" in the edit box, match related entries in the large-screen thesaurus according to "I love", and fill the related entries into the candidate word list of the search box; for example, the related entries may include "I love this land".
  • the matching rules may be determined according to the actual application scenario. For example, the matching rules include but are not limited to regular string matching, synonym matching, near-synonym matching, exact matching or fuzzy matching.
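The matching rules named above can be illustrated with a small matcher. The thesaurus contents, function name, and rule strings below are assumptions for the sketch; the disclosure does not specify how matching is implemented.

```python
import re

# Illustrative large-screen thesaurus and three of the matching rules
# named in the text (exact, plain string, and fuzzy matching).
THESAURUS = ["I love this land", "I love my family", "Iron love"]

def match(keyword, entries, rule="string"):
    if rule == "exact":
        return [e for e in entries if e == keyword]
    if rule == "string":   # the keyword appears verbatim in the entry
        return [e for e in entries if keyword in e]
    if rule == "fuzzy":    # the keyword's words appear in order, with gaps
        pat = ".*".join(map(re.escape, keyword.split()))
        return [e for e in entries if re.search(pat, e)]
    raise ValueError(rule)
```

For the keyword "I love", string matching returns the two "I love ..." entries, while the looser fuzzy rule also admits "Iron love", showing how the chosen rule changes the candidate list.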
  • the large screen can synchronize the content of its candidate word list to the input method framework of the mobile phone, and the input method framework of the mobile phone can display the content of the large screen's candidate word list in the mobile phone interface. For example, "I love this land" can be displayed on the mobile phone interface as a candidate word; when the user clicks the candidate word "I love this land" on the mobile phone interface, "I love this land" is filled into the edit box of the mobile phone and synchronized to the edit box on the big screen, enabling convenient and efficient input.
  • FIG. 40 shows a flow chart of candidate words matched synchronously by the interaction between the large screen and the mobile phone.
  • the user can enter a keyword in the edit box of the auxiliary device (such as a mobile phone) for assisting large-screen input, and the keyword can be synchronized to the large-screen device; the large-screen device obtains candidate words matching the keyword and synchronizes the candidate words to the mobile phone.
  • if the candidate words synchronized from the large screen to the mobile phone include the target entry that the user wants to select, the user can select the target entry by clicking or the like, and the mobile phone synchronizes the target entry to the large screen to complete the auxiliary input.
  • otherwise, the user can continue to input keywords on the mobile phone, and the above steps are repeated until the candidate words synchronized from the large screen include the target entry that the user wants to select; the user can then select the target entry by clicking or the like, and the mobile phone synchronizes the target entry to the large screen to complete the auxiliary input.
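The loop of FIG. 40 — keyword synchronized up, candidates synchronized down, repeated until the target entry appears — can be sketched as follows. Device behavior here is simulated in-process; the class and function names are assumptions, not the disclosed implementation.

```python
# Sketch of the FIG. 40 loop: the phone syncs each keystroke up as a
# growing keyword, the large screen returns matching candidates, and the
# loop ends when the user's target entry appears among them.
class LargeScreen:
    def __init__(self, thesaurus):
        self.thesaurus = thesaurus
        self.edit_box = ""

    def on_keyword(self, keyword):       # keyword synced from the phone
        self.edit_box = keyword
        return [e for e in self.thesaurus if keyword in e]

def assist_input(screen, keystrokes, target):
    keyword = ""
    for ch in keystrokes:                # the user types on the phone
        keyword += ch
        candidates = screen.on_keyword(keyword)
        if target in candidates:         # the target entry has appeared:
            screen.edit_box = target     # the user taps it, and the phone
            return target                # syncs it back to the screen
    return None                          # the target was never matched

screen = LargeScreen(["I love this land", "I love my family"])
assist_input(screen, "I love this", "I love this land")
```

The point of the loop is that the user never has to type the full entry: as soon as a partial keyword is specific enough for the large screen's thesaurus, one tap finishes the input.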
  • FIG. 39 or FIG. 40 is a possible implementation manner of the embodiment of the present application.
  • for example, the user may select the virtual keyboard under the edit box provided by an application on the large screen to trigger the subsequent auxiliary large-screen input process, or the user may trigger the auxiliary large-screen input on the mobile phone; the manner of triggering the auxiliary large-screen input is not specifically limited in this embodiment of the present application.
  • the following is an exemplary description of the user interface for the interaction between the large screen and the mobile phone.
  • Figures 41-42 show schematic diagrams of user interfaces in which a user triggers an auxiliary input.
  • Figure 41 shows a user interface diagram of a large screen.
  • the user can use the remote control 4101 to select the edit box 4102 in the large screen, and then can trigger the execution of the subsequent process of the mobile phone-assisted large screen input in the embodiment of the present application.
  • the user can use the remote controller 4101 to select any content 4102 in the virtual keyboard in the large screen, and then can trigger the execution of the subsequent process of the mobile phone assisting the large screen input in the embodiment of the present application.
  • the specific manner in which the mobile phone assists the large-screen input will be described in the subsequent embodiments, and will not be repeated here.
  • FIG. 41 shows a schematic diagram of setting an edit box in a user interface diagram of a large screen.
  • the user interface of the large screen may include multiple edit boxes, and the user may trigger any edit box to trigger the subsequent process of the mobile phone-assisted large-screen input in the embodiment of the present application, which is not specifically limited in this embodiment of the present application.
  • Figure 42 shows a user interface diagram of a cell phone.
  • the user can display the user interface as shown in Figure a in Figure 42 by pulling down on the main screen of the mobile phone, etc.
  • the user interface shown in diagram a of FIG. 42 may include one or more of the following functions of the mobile phone: WLAN, Bluetooth, Torch, Mute, Airplane Mode, Mobile Data, Wireless Screencasting, Screenshot or Auxiliary Input 4201.
  • the auxiliary input 4201 may be the function of assisting the large-screen input of the mobile phone in this embodiment of the application.
  • the mobile phone can search for devices such as large screens in the same distributed network, obtain the search box in the large screen, and establish a communication connection with the large screen.
  • the mobile phone can further display the user interface shown in diagram c of FIG. 42; in the user interface shown in diagram c of FIG. 42, an edit box for assisting large-screen input can be displayed, and the user can assist the large-screen input based on the edit box.
  • the mobile phone can also display the user interface shown in diagram b of FIG. 42; the user interface shown in diagram b of FIG. 42 can display multiple large-screen identifiers, and a large-screen identifier can be the device number, user name or nickname of the large screen.
  • the user can select the large screen (for example, click large screen A or large screen B) in the user interface shown in diagram b of FIG. 42, and enter the user interface shown in diagram c of FIG. 42; this is not specifically limited in this embodiment of the present application.
  • the large screen can search the distributed network for an auxiliary device (such as a mobile phone) with auxiliary input capability, and automatically determine the mobile phone used for auxiliary input, or send notifications to all mobile phones found in the distributed network.
  • the large screen can automatically select the mobile phone as the auxiliary input device.
  • the large screen can automatically select the mobile phone configured as the default auxiliary input as the auxiliary input device.
  • if the large screen finds multiple mobile phones in the distributed network, but among them there is the mobile phone that the user selected for auxiliary input the last time auxiliary input was performed, the large screen can automatically select that mobile phone as the auxiliary input device.
  • if the large screen finds multiple mobile phones in the distributed network, the large screen can obtain the mobile phone most frequently selected by the user for auxiliary input and automatically select that mobile phone as the auxiliary input device.
  • if the large screen finds multiple mobile phones in the distributed network, but one of them is logged in with the same user account as the large screen, the large screen can automatically select that mobile phone as the auxiliary input device.
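The selection rules above (single phone, default device, last-used device, same account, most frequently used) can be read as a device-selection policy. The record fields and the priority ordering below are assumptions — the text presents the rules as alternative options, not a fixed order.

```python
from dataclasses import dataclass

# Hypothetical record for each mobile phone discovered in the
# distributed network; all field names are illustrative.
@dataclass
class Phone:
    name: str
    is_default: bool = False     # configured as the default auxiliary input
    last_used: bool = False      # selected the last time input was assisted
    use_count: int = 0           # how often the user picked it before
    account: str = ""            # logged-in user account

def pick_auxiliary(phones, screen_account=""):
    if len(phones) == 1:                       # only one phone found
        return phones[0]
    for p in phones:                           # rule: default device
        if p.is_default:
            return p
    for p in phones:                           # rule: used last time
        if p.last_used:
            return p
    for p in phones:                           # rule: same account as screen
        if screen_account and p.account == screen_account:
            return p
    return max(phones, key=lambda p: p.use_count)  # rule: most frequent
```

If none of the rules picks a device, a real system would fall back to notifying all discovered phones, as the text describes.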
  • Figures 43-49 illustrate the process of the mobile phone assisting the large screen input by using candidate words synchronized from the large screen.
  • FIG. 43 shows a schematic diagram of a user interface for determining an auxiliary large-screen input by a mobile phone.
  • a notification can pop up on the mobile phone prompting that the large screen requests auxiliary input, and the user can trigger the notification on the mobile phone to confirm the auxiliary large-screen input. Further, as shown in the user interface in the middle of FIG. 43, an edit box for assisting large-screen input can pop up on the mobile phone; the user can trigger the edit box shown in the middle of FIG. 43 by clicking or the like, and the mobile phone can display the user interface shown in the rightmost diagram of FIG. 43, in which the virtual keyboard (or soft keyboard) of the mobile phone can be displayed; the user can subsequently use the virtual keyboard of the mobile phone to assist the large-screen input.
  • the user can enter the keyword "auxiliary" in the edit box of the mobile phone shown in the rightmost diagram of FIG. 43.
  • the keyword "auxiliary" in the edit box of the mobile phone can be synchronized to the edit box on the large-screen side, and the large screen searches its local or cloud thesaurus for candidate words matching "auxiliary" and displays them in the candidate word list.
  • the candidate word list on the large screen can include content related to "auxiliary" in multiple categories.
  • the application category can include "auxiliary applications and voice input", "auxiliary applications", and the like.
  • the function category can include "accessibility", "auxiliary functions", and the like.
  • when the large screen matches candidate words for a keyword in the large-screen edit box, the matching may be related to the function to be implemented by the large screen (or the scene in which the edit box is located). In other words, even if the keywords entered by the user in different edit boxes are the same, because the interfaces where the edit boxes are located differ and the functions implemented differ, the candidate words associated with the keyword in the large-screen edit boxes can be the same or different.
• For example, in a movie search scenario, the large screen can match the keyword against a movie library to obtain related movie names.
• In a TV series search scenario, the large screen can match the keyword against a TV series library to obtain related TV series names.
• In a music search scenario, the large screen can match the keyword against a music library to obtain related music names.
• In a function search scenario, the large screen can match the keyword against a function library to obtain related function names.
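The scene-dependent matching described above can be sketched as follows. This is only an illustrative sketch: the scene names, the content libraries, and the `match_candidates` helper are hypothetical, not part of the embodiment.

```python
# Hypothetical sketch: the same keyword yields different candidate words
# depending on which content library the current interface searches.
LIBRARIES = {
    "movie": ["Auxiliary Mission", "Space Odyssey"],
    "tv_series": ["Auxiliary Story"],
    "music": ["Auxiliary Song", "Morning Tune"],
    "function": ["Auxiliary input", "Accessibility settings"],
}

def match_candidates(scene: str, keyword: str) -> list[str]:
    """Return entries of the scene's library containing the keyword."""
    library = LIBRARIES.get(scene, [])
    return [entry for entry in library if keyword.lower() in entry.lower()]

# The keyword is identical, but the candidates differ by scene.
print(match_candidates("movie", "auxiliary"))     # ['Auxiliary Mission']
print(match_candidates("function", "auxiliary"))  # ['Auxiliary input']
```

A real large screen would query a local or cloud thesaurus rather than an in-memory table, but the scene-to-library dispatch is the point being illustrated.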
  • the ranking of candidate words displayed on the large screen is related to the user's historical behavior.
• Even if the keywords the user enters in the edit box are the same, if the candidate words the user previously selected for that keyword differ, the order of the candidate words the large screen displays for the keyword can change.
• For example, if the candidate word the user has selected most frequently for the keyword on the large screen is candidate word A, the large screen can display candidate word A at the top of the ranking.
• If the candidate word the user has selected most frequently for the keyword on the large screen later becomes candidate word B, the large screen can display candidate word B at the top of the ranking.
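The history-based reordering just described can be sketched with a simple frequency count; the list-of-selections log format and the function name here are hypothetical.

```python
from collections import Counter

def rank_candidates(candidates: list[str], selections: list[str]) -> list[str]:
    """Order candidates so the most frequently selected come first.

    `selections` is a (hypothetical) log of candidate words the user
    previously picked for this keyword on the large screen.
    """
    freq = Counter(selections)
    # Stable sort: candidates with equal counts keep their original order.
    return sorted(candidates, key=lambda word: -freq[word])

candidates = ["candidate A", "candidate B", "candidate C"]
# After the user has mostly chosen candidate B, it moves to the top.
print(rank_candidates(candidates, ["candidate B", "candidate B", "candidate A"]))
```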
• The candidate words in the large screen's candidate word list can be further synchronized to the input interface of the mobile phone, and the way those candidate words are displayed on the mobile phone can be set according to the actual application scenario.
• The sorting of the candidate words may also be synchronized to the display interface of the mobile phone, so that candidate words conforming to the user's habits on the large screen are recommended to the user on the mobile phone; this is not specifically limited in this embodiment of the present application.
• The candidate words that the mobile phone associates with the keyword based on its local input method may be the same as or different from the candidate words that the large screen associates with the keyword.
• Even when the mobile phone's locally associated candidate words are the same as the large screen's associated candidate words, the sorting of the two sets of candidate words may be the same or may be different.
  • Figures 45-48 show several interface schematic diagrams of the mobile phone when synchronizing the candidate words in the large-screen candidate word list to the mobile phone.
• The candidate words that the mobile phone obtains by synchronization from the large screen can be displayed similarly to the candidate words of the mobile phone's local input method; for example, the candidate words obtained by synchronization from the large screen can be displayed in the input interface of the mobile phone in the form of a list.
• In this case the user may not perceive whether the candidate words provided by the mobile phone are local to the mobile phone or synchronized from the large screen. However, because the mobile phone in this embodiment of the present application provides candidate words obtained by synchronization from the large screen, the candidate words offered to the user on the mobile phone are closer to the content of the large screen, which is more conducive to helping the user achieve quick input.
  • the candidate words in the candidate word list on the large screen can be displayed in separate columns from the candidate words of the local input method of the mobile phone.
• For example, the candidate words matching "auxiliary" obtained by synchronization from the large screen can be listed in one column (for example, a large-screen candidate word column), and the candidate words obtained by the mobile phone's local input method associating "auxiliary" can be displayed in another column (for example, a mobile phone candidate word column).
• When the candidate words in the large screen's candidate word list and the candidate words of the mobile phone's local input method are displayed in separate columns, even if the two sets of candidate words are the same, their orders can be different.
• For example, the candidate words in the large screen's candidate word list can be ranked in front of the candidate words of the mobile phone's local input method, with a horizontal line or similar marker dividing the large screen's candidate words from the mobile phone's local candidate words.
• Alternatively, the candidate words in the large screen's candidate word list can be arranged behind the candidate words of the mobile phone's local input method, again with a horizontal line dividing the large screen's candidate words from the mobile phone's local candidate words (not shown in FIG. 48).
• The candidate words in the large screen's candidate word list can also be distinguished from the candidate words of the mobile phone's local input method by an identifier.
• That is, the identifiers of the candidate words in the large screen's candidate word list differ from the identifiers of the mobile phone's local input method candidate words.
• For example, the candidate words matching "auxiliary" obtained by synchronization from the large screen can be marked with an arrow pointing to the lower right, and the candidate words obtained by the mobile phone's local input method associating "auxiliary" can be marked with an arrow pointing to the upper left, so that the user can tell the source of each candidate word from its identifier.
  • the embodiment of the present application does not limit the specific display order of the candidate words in the candidate word list of the large screen and the candidate words of the local input method of the mobile phone.
• For example, the candidate words in the large screen's candidate word list and the candidate words of the mobile phone's local input method may be sorted in descending order of historical usage counts, based on the user's historical search behavior.
• Alternatively, the popularity of the candidate words in the large screen's candidate word list and of the mobile phone's local input method candidate words can be taken into account, and the candidate words sorted from high to low popularity.
  • the candidate words in the candidate word list of the large screen may be ranked in front of the candidate words of the local input method of the mobile phone.
  • the candidate words in the candidate word list of the large screen may be cross-sorted with the candidate words of the local input method of the mobile phone.
  • the candidate words in the candidate word list of the large screen and the candidate words of the local input method of the mobile phone may be randomly ordered.
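The column and ordering options above amount to a small merge policy. The sketch below shows one hedged way to combine the two candidate sources while remembering where each word came from (for the divider line or the arrow identifier); the function and strategy names are hypothetical.

```python
def merge_candidates(screen_words, phone_words, strategy="screen_first"):
    """Combine large-screen and local candidates, deduplicated, with each
    word tagged by its source so the UI can draw a divider or identifier."""
    if strategy == "screen_first":
        ordered = [(w, "screen") for w in screen_words] + \
                  [(w, "phone") for w in phone_words]
    else:  # "phone_first"
        ordered = [(w, "phone") for w in phone_words] + \
                  [(w, "screen") for w in screen_words]
    merged, seen = [], set()
    for word, source in ordered:
        if word not in seen:   # keep the first occurrence of duplicates
            seen.add(word)
            merged.append((word, source))
    return merged

print(merge_candidates(["auxiliary input", "auxiliary app"],
                       ["auxiliary", "auxiliary input"]))
```

Popularity-based or cross-sorted orderings would only change how `ordered` is built; the source tag is what lets the interface distinguish the two lists.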
• The user can click the desired target candidate word to fill it into the edit box of the mobile phone, and a search based on the target candidate word is then performed on the large screen.
• The above synchronization of the candidate words in the large screen's candidate word list to the mobile phone may be technically implemented as follows: the candidate words in the large screen's candidate word list are read based on the input method framework of the large screen, and sent through the distributed network to the input method framework of the mobile phone.
  • the embodiments of the present application do not limit the technical implementation of synchronizing the candidate words in the candidate word list on the large screen to the mobile phone.
  • the above user interface diagrams when the mobile phone assists the large-screen input are all exemplary descriptions.
• Part or all of the content on the large screen may also be synchronized to the mobile phone, enabling the mobile phone user to know the status of the large screen from the mobile phone interface.
  • FIG. 50 shows a user interface of a mobile phone.
• When the user uses the mobile phone to assist input on the large screen, all or part of the content of the large screen can be projected to the mobile phone, with the edit box of the mobile phone displayed on the upper layer. In this way, while the user inputs in the mobile phone's edit box, the state of the large screen's edit box can be seen simultaneously in the mobile phone's user interface, and the user does not need to look up at the large screen to see the input state while assisting input.
• The above description takes the user assisting the large screen to input Chinese characters as an example.
• The user may also assist the large screen to input English phrases or other forms of text.
• The specific content of the input is not limited.
  • FIG. 51 shows a specific flowchart of a mobile phone assisting a large screen to perform input.
  • the mobile phone-assisted large-screen input may include: near-field device discovery, identity authentication and remote data channel establishment, and synchronization of the large-screen candidate entries to the mobile phone.
• In the near-field device discovery process, after the large screen's remote input service is turned on, the large screen can enable the near-field auxiliary device discovery function. When the search box of the large screen obtains focus (for example, the user selects the search box with the remote control), the large screen can send a broadcast to query distributed auxiliary input devices (such as mobile phones) with auxiliary input capability.
• After a distributed auxiliary input device receives the query, a notification can pop up on its screen to remind the user that the large screen requires auxiliary input.
  • the large screen can use Bluetooth or LAN broadcast to query the near-field devices, and distributed auxiliary input devices with auxiliary input capabilities will receive notifications.
• The user can click the notification message in the distributed auxiliary input device to trigger identity authentication between the large screen and the distributed auxiliary input device (the following description takes the distributed auxiliary input device being a mobile phone as an example), for example, verifying the legitimacy of both parties' identities.
• After identity authentication, a remote data channel can be established between the large screen side and the mobile phone side, and data transfer between the large screen and the mobile phone can then be realized over this remote data channel.
• On the mobile phone side, an auxiliary input box (or edit box) can be loaded and displayed, and the user can enter keywords in this input box.
• Whether to perform identity authentication can be decided according to the actual application scenario. For example, in some scenarios (such as scenarios with low security requirements), identity authentication may not be performed between the large screen and the mobile phone; instead, after the user on the mobile phone side triggers the notification, a remote data channel can be established directly between the large screen side and the mobile phone side.
• After the user enters a keyword or key phrase, the mobile phone can synchronize it to the large screen side through the remote data channel, and the large screen side passes it on through its local data channel. The large screen side can display the keyword or key phrase entered on the mobile phone side, the large screen's search box finds matching candidate words according to the keyword or key phrase, and the matching candidate words are filled into the large screen's search box.
  • the large screen synchronizes the candidate word list to the mobile phone side through the remote data channel.
• The mobile phone displays the candidate word list in the candidate word list interface of the mobile phone through the local data channel.
• If the target word the user wants does not exist in the candidate word list, the user can continue to input keywords or key phrases on the mobile phone; if the target word the user wants exists in the candidate word list, the user clicks the target word, and the mobile phone synchronizes the target word through the remote data channel to the input box on the large screen side, realizing auxiliary input.
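The flow above (keyword up to the large screen, candidate list back down, chosen target word up again) can be sketched as two cooperating objects; all class and method names are hypothetical, and the remote data channel is reduced to direct method calls for brevity.

```python
class LargeScreen:
    """Stands in for the large-screen side of the remote data channel."""
    def __init__(self, thesaurus):
        self.thesaurus = thesaurus   # local or cloud word library
        self.input_box = ""

    def on_keyword(self, keyword):
        """Receive a keyword and return the matching candidate list."""
        self.input_box = keyword
        return [w for w in self.thesaurus if keyword in w]

    def on_target_word(self, word):
        """Fill the user's chosen target word into the input box."""
        self.input_box = word

class Phone:
    """Stands in for the mobile-phone side of the channel."""
    def __init__(self, screen):
        self.screen = screen

    def type_keyword(self, keyword):
        return self.screen.on_keyword(keyword)   # candidate list synced back

    def pick(self, word):
        self.screen.on_target_word(word)         # auxiliary input done

screen = LargeScreen(["auxiliary input", "auxiliary app", "settings"])
phone = Phone(screen)
candidates = phone.type_keyword("auxiliary")
phone.pick(candidates[0])
print(screen.input_box)   # the target word now fills the large screen
```

In the embodiment, each method call would instead traverse the remote data channel established after device discovery and identity authentication.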
  • the above user interface diagrams when the mobile phone assists the large-screen input are all exemplary descriptions.
• Part or all of the content on the large screen may also be synchronized to the mobile phone, enabling the mobile phone user to know the status of the large screen from the mobile phone interface.
• FIG. 52 shows a schematic structural diagram of a first device or a second device provided by an embodiment of the present application.
• The first device or the second device includes: a display screen 5201 and a processing unit 5202.
  • the display screen 5201 is used to support the first device or the second device to perform the display steps in the foregoing embodiments, or other processes of the technologies described in the embodiments of this application.
  • the display screen 5201 may be a touch screen or other hardware or a combination of hardware and software.
  • the processing unit 5202 is configured to support the first device or the second device to perform the processing steps in the foregoing method embodiments, or other processes of the technologies described in the embodiments of this application.
  • the electronic device includes but is not limited to the unit modules listed above.
  • the specific functions that can be implemented by the above functional units also include but are not limited to the functions corresponding to the method steps described in the above examples.
• For the detailed description of the other units of the electronic device, refer to the detailed description of the corresponding method steps; details are not repeated here in this embodiment of the present application.
  • the first device or the second device involved in the above embodiments may include: a processing module, a storage module, and a display screen.
  • the processing module is used to control and manage the actions of the first device or the second device.
  • the display screen is used to display content according to the instructions of the processing module.
  • the storage module is used to save the program codes and data of the first device or the second device.
• The first device or the second device may also include an input module and a communication module; the communication module is used to support communication between the first device or the second device and other network entities, so as to realize functions such as calling, data interaction, and Internet access for the first device or the second device.
  • the processing module may be a processor or a controller.
  • the communication module may be a transceiver, an RF circuit or a communication interface or the like.
  • the storage module may be a memory.
  • the display module can be a screen or a display.
  • the input module can be a touch screen, a voice input device, or a fingerprint sensor.
  • the above-mentioned communication module may include an RF circuit, and may further include a wireless fidelity (Wi-Fi) module, a near field communication (NFC) module, and a Bluetooth module.
  • Communication modules such as the RF circuit, the NFC module, the WI-FI module, and the Bluetooth module may be collectively referred to as a communication interface.
  • the above-mentioned processor, RF circuit, display screen and memory can be coupled together through a bus.
• FIG. 53 shows another possible schematic structural diagram of the first device or the second device provided by this embodiment of the present application, including: one or more processors 5301, a memory 5302, a camera 5304, and a display screen 5303;
  • the above devices can communicate through one or more communication buses 5306.
• One or more computer programs 5305 are stored in the memory 5302 and configured to be executed by the one or more processors 5301; the one or more computer programs 5305 include instructions for performing the display method in any of the above steps.
  • the electronic device includes but is not limited to the above-mentioned devices.
  • the above-mentioned electronic device may also include a radio frequency circuit, a positioning device, a sensor, and the like.
  • Embodiment 41 A device communication method, applied to a system including a first device, a second device, and a third device, the method comprising:
  • the first device, the second device, and the third device are connected to a distributed networking
  • the second device acquires a target candidate word, the target candidate word does not belong to the candidate word database of the first device, and the target candidate word does not belong to the candidate word database of the third device;
  • the first device receives a keyword related to the target candidate word input by the user, and the first device displays the target candidate word;
  • the third device receives a keyword related to the target candidate word input by the user, and the third device displays the target candidate word.
  • Embodiment 42 The method of Embodiment 41, further comprising:
  • the respective candidate lexicons of the first device, the second device and the third device are synchronized with each other.
  • Embodiment 43 The method of embodiment 41 or 42, further comprising:
• When the first device, the second device, or the third device exits the distributed networking, the first device, the second device, or the third device displays a prompt of whether to delete synchronized information.
• In response to a triggering operation on an option indicating deletion, the first device, the second device, or the third device deletes the candidate lexicon it synchronized from the other devices;
• in response to a triggering operation on an option indicating no deletion, the first device, the second device, or the third device retains the candidate lexicon synchronized from the distributed networking.
  • Embodiment 44 The method of embodiment 41 or 42, further comprising:
• The first device, the second device, or the third device respectively determines its own access type; and the first device, the second device, or the third device determines, according to its access type, whether to delete the candidate lexicon synchronized from the distributed networking.
  • Embodiment 45 The method of any one of Embodiments 41-44, further comprising:
• the first device displays a first interface, the first interface including a first edit box, and the first device sends an instruction message to the second device;
  • the second device displays a second interface according to the instruction message, the second interface includes a second edit box;
• the editing state of the second edit box is synchronized to the first edit box.
  • Embodiment 46 The method of Embodiment 45, the second device comprising an interface service for synchronizing edit states between the first device and the second device.
  • Embodiment 47 The method of Embodiment 45 or 46, wherein the editing state includes one or more of the following: text content, cursor, or highlighting of text content.
  • Embodiment 48 The method according to any one of Embodiments 45-47, wherein the second device displays a second interface according to the instruction message, comprising:
  • the second device displays a notification interface in response to the instruction message;
  • the notification interface includes a third option for confirming an auxiliary input;
• in response to the triggering operation on the third option, the second device displays the second interface.
  • Embodiment 49 The method of any one of Embodiments 45-48, wherein the second interface further comprises: all or part of the content of the first interface.
• Embodiment 410 The method according to Embodiment 49, wherein the second edit box is displayed in layers with all or part of the content of the first interface, and the second edit box is displayed on the upper layer of all or part of the content of the first interface.
  • Embodiment 411 The method according to any one of Embodiments 45-49 and 410, after the second device displays the second interface according to the instruction message, the method further includes:
• in response to the triggering of the second edit box, the second device displays a virtual keyboard;
  • the second device displays the editing state in the second editing box according to the input operation received by the virtual keyboard and/or the second editing box.
• Embodiment 412 The method according to any one of Embodiments 41-49 and 410-411, wherein the first device includes any one of the following: a TV, a large screen, or a wearable device; and the second device or the third device includes any one of the following: a mobile phone, a tablet, or a wearable device.
  • Embodiment 413 The method of any one of Embodiments 41-49, 410-412, further comprising:
  • the second device displays a fourth interface including options of the first device
• in response to the selection operation on the option of the first device, the second device sends an indication message to the first device;
  • the first device displays a first interface including a first edit box
  • the second device displays a second interface, the second interface includes a second edit box
• the editing state of the second edit box is synchronized to the first edit box.
  • Embodiment 414 A device communication method, applied to a system including a first device, a second device, and a third device, the method comprising:
  • the first device, the second device, and the third device are connected to a distributed networking
  • the first device, the second device and the third device synchronize their respective candidate thesaurus with each other to obtain a candidate thesaurus set;
• when the first device, the second device, or the third device performs text editing, the first device, the second device, or the third device displays candidate words according to the candidate lexicon set.
  • Embodiment 415 A device communication method, applied to a first device, comprising:
  • the first device is connected to a distributed networking; other devices are also connected to the distributed networking;
  • the first device synchronizes the candidate lexicons of the other devices based on the distributed networking to obtain a candidate lexicon set
• when the first device performs text editing, the first device displays candidate words according to the candidate lexicon set.
• Embodiment 416 A device communication system, comprising a first device, a second device, and a third device, where the first device is configured to perform the steps of the first device in any one of Embodiments 41-49 and 410-415, the second device is configured to perform the steps of the second device in any one of Embodiments 41-49 and 410-415, and the third device is configured to perform the steps of the third device in any one of Embodiments 41-49 and 410-415.
• Embodiment 417 A first device, comprising: at least one memory and at least one processor;
  • the memory is used to store program instructions
  • the processor is configured to invoke program instructions in the memory to cause the first device to perform the steps performed by the first device described in any one of Embodiments 41-49 and 410-415.
• Embodiment 418 A second device, comprising: at least one memory and at least one processor;
  • the memory is used to store program instructions
  • the processor is configured to invoke the program instructions in the memory to cause the second device to perform the steps performed by the second device described in any one of Embodiments 41-49 and 410-415.
• Embodiment 419 A computer-readable storage medium having a computer program stored thereon, wherein when the computer program is executed by a processor of a first device, the steps performed by the first device in any one of Embodiments 41-49 and 410-415 are implemented; or, when the computer program is executed by a processor of a second device, the steps performed by the second device in any one of Embodiments 41-49 and 410-415 are implemented; or, when the computer program is executed by a processor of a third device, the steps performed by the third device in any one of Embodiments 41-49 and 410-415 are implemented.
• When the user inputs a keyword, the mobile phone can associate the keyword with the content of its own candidate lexicon (which can be understood as the input method candidate lexicon corresponding to the input method the user uses) and display recommended candidate words; the user can click a candidate word to realize quick input, thereby improving the user's input efficiency.
• However, the candidate lexicon of the mobile phone itself is usually not related to the program content on the large screen, so users still need to select words verbatim, and input efficiency is low.
• In a possible implementation, some products may use a user account to synchronize the candidate lexicon among a user's multiple devices (such as a mobile phone and a large screen), for example synchronizing specific candidate words such as words the user has selected verbatim. In this way, the candidate lexicon of the large screen may also include the candidate lexicon of the mobile phone, and the candidate lexicon of the mobile phone may also include the candidate lexicon of the large screen.
• In this implementation, however, synchronization of the candidate lexicon depends entirely on the user account of the input method: if the user does not log in to the user account on a certain device, synchronization of the candidate lexicon cannot be realized. Also, because the user account is usually tied to the input method of a particular company, if a certain device does not support that company's input method, or the user changes input methods, synchronization of the candidate lexicon likewise cannot be achieved. Moreover, if the user changes devices, the user account must be switched or logged in again, which is cumbersome. In actual use, few users register an input method account, and even fewer log in to a user account when inputting, so this implementation cannot play its full role.
• In view of this, the embodiments of the present application provide a device communication method that synchronizes the candidate lexicons in a distributed networking to a device after the device joins the distributed networking. In this way, sharing of candidate lexicons among multiple devices can be conveniently realized without relying on an input method user account, better providing input services for users.
  • FIG. 54 shows a schematic diagram of a specific application scenario of an embodiment of the present application.
  • a large screen, tablet, mobile phone A, and mobile phone B are connected in the distributed network.
• The large screen, tablet, mobile phone A, and mobile phone B can each synchronize the candidate lexicons of the other devices based on the distributed networking, so that each of them obtains a candidate lexicon set, which can be understood as the union of the candidate lexicon of the large screen, the candidate lexicon of the tablet, the candidate lexicon of mobile phone A, and the candidate lexicon of mobile phone B.
  • the subsequent large screen, tablet, mobile phone A, and mobile phone B can use the candidate vocabulary set to realize convenient candidate word recommendation and improve user input efficiency.
• For example, the large screen, the tablet, mobile phone A, and mobile phone B can be connected to the same Wi-Fi network to establish the distributed networking.
  • the large screen, tablet, mobile phone A, and mobile phone B may be added to the distributed networking in any possible form, which is not specifically limited in this embodiment of the present application.
  • the type and quantity of devices to be specifically accessed in the distributed networking may be determined according to actual application scenarios, and the embodiments of the present application do not specifically limit the devices to be accessed in the distributed networking.
• When each device joins the distributed networking, it can push the content of its candidate lexicon to the candidate lexicon path of the other devices in the distributed networking, based on the distributed database synchronization capability provided by the FWK layer of each device, thereby realizing synchronization of each device's candidate lexicon. It can be understood that if a device in the distributed networking performs an input step after joining and generates a new candidate word, the new candidate word can likewise be synchronized to the candidate lexicon of each device.
  • the candidate lexicon paths of each device in the distributed networking may be the same.
• For example, the candidate lexicon path of each device may be set to the distributed candidate lexicon system path "data/inputmethod/candicateWords", so that each device in the distributed networking can conveniently push its own candidate lexicon based on the same path.
• In addition, when each device in the distributed networking synchronizes its own candidate lexicon to the other devices, it can attach its own device information to the candidate words in its candidate lexicon, so that the candidate words can subsequently be flexibly managed based on the device information attached to them. For example, when a certain device exits the distributed networking, the candidate words carrying that device's information can be deleted from the candidate lexicons of the other devices in the distributed networking, and so on.
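The device-tagged synchronization and exit-time cleanup just described can be sketched as below; the class, the `(word, device_id)` record shape, and the method names are hypothetical, with only the shared path taken from the text.

```python
class DistributedLexicon:
    """Hypothetical shared candidate lexicon: every word carries the
    device information of the device that pushed it."""

    PATH = "data/inputmethod/candicateWords"   # shared path from the text

    def __init__(self):
        self.words = []   # list of (word, device_id) records

    def push(self, device_id, words):
        """A joining device pushes its candidate words, tagged."""
        for word in words:
            self.words.append((word, device_id))

    def remove_device(self, device_id):
        """When a device exits, drop the words carrying its tag."""
        self.words = [(w, d) for (w, d) in self.words if d != device_id]

    def union(self):
        """The candidate lexicon set every device can use."""
        return {w for (w, _d) in self.words}

lexicon = DistributedLexicon()
lexicon.push("phoneA", ["hello", "world"])
lexicon.push("tablet", ["world", "tablet word"])
lexicon.remove_device("phoneA")
print(sorted(lexicon.union()))   # only the tablet's words remain
```

In the embodiment, this bookkeeping would be handled by the FWK layer's distributed database rather than a single in-memory object.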
• When a device joins the distributed networking for the first time, the user can set the access type of the device (which can be understood as a permission), or each device can automatically determine its own access type, and the adaptation steps are then performed according to the access type of the device.
• When the device joins the distributed networking again (that is, not for the first time), the access type previously set for the device can be automatically identified, and the adaptation steps are then performed according to the access type of the device.
  • the access type of the device may include a common device, a temporary visitor or a blacklisted device, and the like.
  • a commonly used device can indicate that the security level of the device is relatively high.
• The commonly used device can be allowed to synchronize the candidate lexicons of other devices in the distributed networking, and to synchronize its own candidate lexicon to other devices in the distributed networking.
  • after the commonly used device exits the distributed networking, the candidate lexicon it synchronized from the distributed networking can be retained, so that the commonly used device can continue to use the synchronized candidate lexicon to provide rich candidate word recommendation after exiting.
  • the candidate lexicon that the commonly used device synchronized to the distributed networking can also be retained in the distributed networking, so that other devices in the distributed networking can continue to use it to provide rich candidate word recommendation. It can be understood that the specific authority of the commonly used device may also be set according to the actual application scenario, which is not specifically limited in this embodiment of the present application.
  • a temporary visitor can indicate that the security level of the device is average.
  • the temporary visitor can be allowed to synchronize the candidate lexicons of other devices in the distributed networking, and to synchronize its own candidate lexicon to the other devices in the distributed networking.
  • after the temporary visitor exits the distributed networking, the candidate lexicon it synchronized from the distributed networking can be deleted, and the candidate lexicon it synchronized to the distributed networking can also be deleted from the other devices in the distributed networking.
  • the specific authority of the temporary visitor may also be set according to an actual application scenario, which is not specifically limited in this embodiment of the present application.
  • a blacklisted device can indicate that the security level of the device is low.
  • the blacklisted device can be prohibited from synchronizing the candidate lexicons of other devices in the distributed networking, and prohibited from synchronizing its own candidate lexicon to the other devices in the distributed networking.
  • the specific authority of the blacklisted device may also be set according to the actual application scenario, which is not specifically limited in this embodiment of the present application.
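The three access types and the permissions described above can be summarized in a small permission table; the tuple layout `(may_pull, may_push, retain_after_exit)` and the helper below are illustrative assumptions, not part of this application:

```python
from enum import Enum

class AccessType(Enum):
    COMMON = "common"            # commonly used device: relatively high security level
    TEMPORARY = "temporary"      # temporary visitor: average security level
    BLACKLISTED = "blacklisted"  # blacklisted device: low security level

# (may_pull, may_push, retain_after_exit), per the descriptions above:
# common devices sync both ways and retain words after exit;
# temporary visitors sync both ways but their words are deleted on exit;
# blacklisted devices are prohibited from syncing in either direction.
PERMISSIONS = {
    AccessType.COMMON: (True, True, True),
    AccessType.TEMPORARY: (True, True, False),
    AccessType.BLACKLISTED: (False, False, False),
}

def may_sync(access: AccessType) -> bool:
    """True if the device may both pull and push candidate lexicons."""
    pull, push, _ = PERMISSIONS[access]
    return pull and push
```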
  • one possible implementation of setting the access type of the device is: setting an administrator device in the distributed networking (for example, joining one or more devices with administrator functions to the distributed networking). The FWK layer of the administrator device can monitor the status of the distributed networking, and obtain the information of the distributed networking and of the devices in it.
  • when the administrator device monitors that an access device (such as the large screen, tablet, mobile phone A, or mobile phone B) joins the distributed networking, the administrator device can set the access type of that access device.
  • the administrator device can also subsequently modify the access type of each device in the distributed networking as required.
  • for example, a modification interface for modifying the access type of a device can be provided in the administrator device, and the administrator can modify the access type of each device in the distributed networking in that interface as appropriate.
  • alternatively, each device can send a request to modify its access type to the administrator device, and the administrator device can modify the device's access type based on the request, and so on.
  • This embodiment of the present application does not limit the specific manner of modifying the access type of the device.
  • another possible implementation of setting the access type of the device is: when any device accesses the distributed networking, a function for setting the access type of the device is provided, and the user can set the access type for the device as required.
  • mobile phone C can also synchronize the candidate lexicon with the large screen, tablet, mobile phone A, and mobile phone B, similar to the above description of synchronizing the candidate lexicon among the large screen, tablet, mobile phone A, and mobile phone B, which will not be repeated here.
  • the candidate words may be used by a device such as the large screen, tablet, mobile phone A, mobile phone B, or mobile phone C, which is not specifically limited in the embodiments of the present application.
  • FIG. 55 shows a schematic diagram of a specific system architecture of a device communication method according to an embodiment of the present application.
  • the embodiment of the present application takes a distributed networking including a large screen, mobile phone A, and mobile phone C as an example, and schematically illustrates the processes in which the large screen, mobile phone A, and mobile phone C access the distributed networking; the large screen, mobile phone A, and mobile phone C synchronize the candidate lexicon; mobile phone A uses the synchronized candidate lexicon to assist input on the large screen; and the large screen, mobile phone A, or mobile phone C leaves the distributed networking.
  • the large screen and mobile phone A are commonly used devices, and mobile phone C is a temporary visitor.
  • the large screen, mobile phone A, and mobile phone C can all be provided with a distributed networking framework, a distributed database (which may also be called a database), and an input method framework (which may also be called a remote input method framework service).
  • mobile phone C can, through a display interface, a voice prompt, or the like, ask the user operating mobile phone C to select an access type (also referred to as a device type). After the user selects the appropriate access type, mobile phone C can be triggered to synchronize the candidate lexicons of other devices in the distributed networking.
  • the steps for connecting the large screen and mobile phone A to the distributed networking and synchronizing the candidate lexicon are similar to those for mobile phone C, and are not repeated here. Exemplarily, if the candidate lexicon of mobile phone C contains the candidate word "Pozhimei", both the large screen and mobile phone A can synchronize the candidate word "Pozhimei".
  • the user can click the input method edit box on the large screen, and the large screen can pull up the input method of mobile phone A, so that the user can enter content in the input box of mobile phone A to achieve the effect of assisting large-screen input.
  • when the user enters "pozhimei" or "PZM" etc. in the input box, based on the candidate lexicon, which includes the candidate word "Pozhimei" synchronized to mobile phone A from mobile phone C, "Pozhimei" can be displayed on the interface of mobile phone A as a candidate word. The user can trigger the candidate word "Pozhimei" by clicking or other methods, and "Pozhimei" is displayed in the input box of the large screen.
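Matching a candidate word from its full pinyin ("pozhimei") or its initial-letter abbreviation ("PZM") might be sketched as follows, assuming the lexicon stores per-character pinyin syllables for each candidate word; this representation and the `match_candidates` helper are hypothetical:

```python
def match_candidates(query: str, lexicon: dict[str, list[str]]) -> list[str]:
    """Return candidate words whose concatenated pinyin starts with `query`
    or whose initial-letter abbreviation equals `query` (case-insensitive).
    `lexicon` maps a candidate word to its per-character pinyin syllables."""
    q = query.lower()
    hits = []
    for word, syllables in lexicon.items():
        full = "".join(syllables)              # e.g. "pozhimei"
        abbrev = "".join(s[0] for s in syllables)  # e.g. "pzm"
        if full.startswith(q) or abbrev == q:
            hits.append(word)
    return hits
```

Lowercasing the query is what lets an uppercase abbreviation like "PZM" hit the same entry as the full pinyin.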
  • for the large screen, mobile phone A, and mobile phone C exiting (or disconnecting from) the distributed networking, take the case where the distributed networking framework of mobile phone C monitors that mobile phone C has disconnected from the distributed networking as an example. Because mobile phone C is a temporary visitor, the candidate words synchronized from the large screen and mobile phone A can be deleted from the candidate lexicon of mobile phone C. Where appropriate, the candidate words that mobile phone A obtained from mobile phone C (or those obtained from mobile phone C and not used) can also be deleted.
  • the candidate words synchronized by mobile phone A or the large screen from other devices in the distributed networking can be retained, and the candidate words synchronized from mobile phone A or the large screen can also be retained in the other devices in the distributed networking.
  • mobile phone C can be restored to the candidate lexicon it had before accessing the distributed networking; the candidate lexicon of mobile phone A can include the candidate lexicon of mobile phone A before it accessed the distributed networking and the candidate lexicon of the large screen before it accessed the distributed networking; and the candidate lexicon of the large screen can likewise include the candidate lexicon of mobile phone A before it accessed the distributed networking and the candidate lexicon of the large screen before it accessed the distributed networking.
  • the processing of new candidate words can also be adapted according to the access types of the large screen, mobile phone A, and mobile phone C. For example, if a new candidate word is generated by the input behavior of mobile phone C, it can be deleted from the candidate lexicons of mobile phone A and the large screen when mobile phone C disconnects from the distributed networking. If a new candidate word is generated by the input behavior of mobile phone A or the large screen, after mobile phone A or the large screen disconnects from the distributed networking, the new candidate word can be retained in the candidate lexicons of mobile phone A and the large screen.
  • if a candidate word in the candidate lexicon of mobile phone C is used while the large screen, mobile phone A, and mobile phone C are connected in the distributed networking (for example, the candidate word "Pozhimei" from mobile phone C is used when mobile phone A assists the large-screen input as above), then after mobile phone C disconnects from the distributed networking, the used candidate word "Pozhimei" can be retained in the candidate lexicons of mobile phone A and the large screen.
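The retention rule above (words from a temporary visitor are cleared on its disconnect unless they were actually used) can be sketched with a hypothetical shared store that tracks each word's originating device and whether it has been used:

```python
class SharedCandidateStore:
    """Tracks, for each candidate word, the device whose input produced it
    and whether any device has used it since synchronization."""

    def __init__(self) -> None:
        self.origin: dict[str, str] = {}  # word -> originating device
        self.used: set[str] = set()       # words actually selected by some device

    def add(self, word: str, device_id: str) -> None:
        self.origin[word] = device_id

    def mark_used(self, word: str) -> None:
        self.used.add(word)

    def on_disconnect(self, device_id: str, is_temporary: bool) -> None:
        if not is_temporary:
            return  # words from commonly used devices are retained
        for word, origin in list(self.origin.items()):
            if origin == device_id and word not in self.used:
                del self.origin[word]  # unused words from the visitor are cleared
```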
  • the synchronization of the candidate lexicon is used as an example for description.
  • the method of the embodiment of the present application is also applicable to any data sharing scenario.
  • for example, the distributed networking can be used to realize the synchronization of files, music, videos, and/or pictures among multiple devices. It can be understood that, because the candidate lexicon is usually text, the space it occupies is usually relatively small, so the implementation of candidate lexicon synchronization need not pay particular attention to the selection of storage space. If the data to be synchronized is large, in practical applications, an appropriate storage space can also be selected for the data to be synchronized in combination with its size.
  • the user interfaces for the large screen, mobile phone A, and mobile phone C synchronizing the candidate lexicon, for mobile phone A using the synchronized candidate lexicon to assist input on the large screen, and for the large screen, mobile phone A, or mobile phone C leaving the distributed networking are described below as examples.
  • FIG. 56 shows a schematic diagram of a user interface for selecting a device type.
  • when mobile phone C accesses the distributed networking, mobile phone C can display the user interface shown in FIG. 56.
  • the user interface may include a common device control 5601 and a temporary visitor control 5602 for setting the access type of the mobile phone C.
  • the user can set the mobile phone C as a temporary visitor by clicking the button of the temporary visitor control 5602 .
  • similarly, the user can set mobile phone A and the large screen as commonly used devices, which will not be repeated here.
  • the large screen, mobile phone A or mobile phone C can also determine their own access types according to the frequency, duration and/or number of times they join the distributed networking.
  • for any one of the large screen, mobile phone A, or mobile phone C: if the frequency of joining the distributed networking is higher than a certain threshold, it can be determined as a commonly used device; or, if the frequency of joining the distributed networking is lower than a certain threshold, it can be determined as a temporary visitor; or, if the duration of joining the distributed networking is longer than a certain threshold, it can be determined as a commonly used device; or, if the duration is shorter than a certain threshold, it can be determined as a temporary visitor; or, if the number of times of joining the distributed networking is higher than a certain threshold, it can be determined as a commonly used device; or, if the number of times is lower than a certain threshold, it can be determined as a temporary visitor; or, if the number of times of joining the distributed networking is higher than a certain threshold and the duration is longer than a certain threshold, it can be determined as a commonly used device; or, if the number of times of joining the distributed networking is lower than a certain threshold and the duration is shorter than a certain threshold, it can be determined as a temporary visitor.
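One possible reading of the threshold rules above can be sketched as follows. The concrete thresholds and the `classify_access_type` helper are illustrative assumptions, not values from this application:

```python
def classify_access_type(freq: float, duration: float, count: int,
                         freq_thresh: float = 5.0,
                         dur_thresh: float = 3600.0,
                         count_thresh: int = 10) -> str:
    """Classify a device as a commonly used device or a temporary visitor.

    A device counts as 'common' when any of its joining frequency,
    joining duration (seconds), or joining count clears its threshold;
    otherwise it is treated as a 'temporary' visitor.
    """
    if freq > freq_thresh or duration > dur_thresh or count > count_thresh:
        return "common"
    return "temporary"
```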
  • the access type of the device set by the user last time can be automatically used as the access type of the device, and the user interface shown in FIG. 56 is not displayed.
  • multiple devices logged in with the same user account can be automatically determined as commonly used devices, and the user interface shown in FIG. 56 is not displayed.
  • alternatively, the access type of the large screen, mobile phone A, or mobile phone C may not be set, and the large screen, mobile phone A, and mobile phone C share common rights, which is not specifically limited in this embodiment of the present application.
  • in this case, the user interface shown in FIG. 56 may not be displayed.
  • if an access type is set on the large screen, mobile phone A, or mobile phone C, the corresponding candidate lexicon addition or deletion steps may be performed subsequently based on the access type. If no access type is set on the large screen, mobile phone A, or mobile phone C, the candidate lexicon addition or deletion steps corresponding to any of the above-mentioned access types may be performed subsequently, which is not limited in this embodiment of the present application.
  • FIG. 57 shows a schematic diagram of an interface for generating candidate words in mobile phone C.
  • the user can select candidate words one by one to obtain "Pozhimei", which can be stored as a candidate word in the candidate lexicon of mobile phone C.
  • the user can also enter the English words "apple", "banana", and "meat" in the edit box of mobile phone C, and select the above words one by one to obtain the candidate English phrase "apple banana meat", which can be stored as a candidate word in the candidate lexicon of mobile phone C.
  • after mobile phone C is connected to the distributed networking, the large screen and mobile phone A can synchronize candidate words such as "Pozhimei".
  • the process of synchronizing the candidate lexicon may not have a user interface, and the user may not perceive the process of synchronizing the candidate lexicon.
  • when mobile phone A subsequently assists the large-screen input, it can realize quick input based on the candidate word "Pozhimei".
  • when mobile phone A itself is inputting, it can also realize quick input based on the candidate word "Pozhimei".
  • FIGS. 58-60 show the process in which mobile phone A uses the candidate word "Pozhimei" to assist input on the large screen.
  • Fig. 58 shows a schematic diagram of a user interface of a large screen.
  • the user can select the input method edit box on the large screen through a remote control or other device, and the input method edit box control of the large screen can request the input method framework (IMF) of the large screen to start the local input method, and transmit the data channel to the IMF.
  • the IMF queries the servers with distributed capabilities through the distributed networking.
  • the servers can include mobile phone A; the large screen can then be connected to the auxiliary AA of mobile phone A and request auxiliary input from mobile phone A, or pop up an input box in mobile phone A, etc.
  • FIG. 59 shows a schematic diagram of a user interface for mobile phone A to determine the auxiliary large-screen input.
  • a notification prompting that the large screen requests auxiliary input can pop up in mobile phone A, and the user can trigger the notification in mobile phone A to confirm assisting the large-screen input.
  • an edit box for assisting large-screen input can pop up in mobile phone A.
  • the user can trigger the edit box shown in the middle figure of FIG. 59 by clicking, etc., and mobile phone A can display the user interface shown in the rightmost figure, in which the virtual keyboard (or soft keyboard) of the mobile phone can be displayed; the user can then use the virtual keyboard of mobile phone A to assist input on the large screen.
  • FIG. 60 shows a schematic diagram of a user interface in which mobile phone A uses the candidate word "Pozhimei" synchronized from mobile phone C to assist large-screen input.
  • the input method of mobile phone A can display the candidate word "Pozhimei" based on the synchronized candidate lexicon; when the user clicks the candidate word "Pozhimei", the input box of mobile phone A can display "Pozhimei"; when the user clicks Finish, the user interface of the large screen shown in the right figure of FIG. 60 can be entered, in which the "Pozhimei" in the input box of mobile phone A can be displayed synchronously.
  • the content of the input box can be displayed synchronously in the edit box of the large screen as shown in the right figure of FIG. 60, and editing operations in the input box of mobile phone A, such as deletion, highlight selection, or cursor movement, can also be synchronized to the edit box of the large screen.
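The synchronization of the editing state (text content, cursor position, and highlight selection, as described above) between edit boxes can be sketched as follows; the `EditState` and `EditBox` names are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class EditState:
    text: str = ""
    cursor: int = 0
    highlight: tuple = (0, 0)  # (start, end) of highlighted text content

class EditBox:
    """A minimal edit box that applies a synchronized edit state, so that
    deletion, highlight selection, or cursor movement on one device is
    mirrored on the others."""

    def __init__(self) -> None:
        self.state = EditState()

    def apply(self, state: EditState) -> None:
        self.state = state

def sync(source: EditBox, targets: list[EditBox]) -> None:
    """Push the source box's full edit state to every target box."""
    for t in targets:
        t.apply(source.state)
```

Sending the whole state on every change keeps the sketch simple; a real implementation could send only the delta over the data channel.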
  • FIG. 61 shows a schematic diagram of a user interface in which the large screen uses the candidate word "Pozhimei" to realize quick input.
  • the large screen can display the candidate word "Pozhimei" in the user interface based on the candidate word "Pozhimei" synchronized from mobile phone C, and the user can select the candidate word "Pozhimei" to realize convenient input.
  • FIG. 62 shows a schematic diagram of a user interface in which mobile phone A uses the candidate word "Pozhimei" to implement quick input.
  • when "pozhimei" is entered, mobile phone A can display the candidate word "Pozhimei" synchronized from mobile phone C in the user interface, and the user can select the candidate word "Pozhimei" to realize convenient input.
  • the large screen, mobile phone A, and mobile phone C can all use the candidate lexicons synchronized from each other to achieve convenient input, which will not be repeated here.
  • after exiting the distributed networking, the large screen and mobile phone A can also retain the candidate lexicons synchronized from each other, which facilitates convenient input.
  • because mobile phone C is a temporary visitor, when mobile phone C disconnects from the distributed networking, the content of the candidate lexicon that mobile phone C synchronized through the distributed networking can be deleted from mobile phone C, and the content of the candidate lexicon that mobile phone C had before accessing the distributed networking can be deleted from the other devices.
  • FIG. 63 shows a possible user interface diagram of mobile phone C. As shown in FIG. 63, the user can be prompted whether to delete the synchronized candidate lexicon, and options of "Yes" and "No" can be provided; the user can select the appropriate option as required to delete or retain the synchronized candidate lexicon.
  • FIG. 64 shows a user interface of mobile phone A when mobile phone C disconnects from the distributed networking and the candidate lexicon of mobile phone C that was synchronized through the distributed networking is deleted.
  • because the candidate word "Pozhimei" has been deleted from the candidate lexicon of mobile phone A, when "pozhimei" is input in mobile phone A, "Pozhimei" is no longer among the candidate words recommended by mobile phone A.
  • a candidate word that has been used can continue to be used by other devices in the distributed networking after mobile phone C disconnects from the distributed networking. After mobile phone C disconnects, mobile phone C can delete the content of the candidate lexicon that it synchronized through the distributed networking, and the other devices in the distributed networking can delete the content of the candidate lexicon that mobile phone C had before accessing the distributed networking. For example, the candidate lexicon of mobile phone C also includes "Crimson". Because mobile phone A used "Pozhimei" when assisting input on the large screen, after mobile phone C disconnects from the distributed networking, "Pozhimei" can remain a candidate word of mobile phone A and continue to be used by mobile phone A and the large screen. However, because "Crimson" was never used, it is cleared when mobile phone C disconnects from the distributed networking, and cannot continue to be used by mobile phone A and the large screen.
  • the above user interface diagrams for the mobile phone assisting large-screen input are all exemplary descriptions.
  • part or all of the content of the large screen may also be synchronized to the mobile phone, which enables the mobile phone user to know the status of the large screen based on the mobile phone interface.
  • FIG. 65 shows a user interface of a mobile phone.
  • when the user uses the mobile phone to assist input on the large screen, all or part of the content of the large screen can be projected to the mobile phone, and the edit box of the mobile phone is displayed on the upper layer, so that when the user inputs using the edit box of the mobile phone, the state of the large-screen edit box can be seen synchronously in the user interface of the mobile phone, and the user does not need to look up at the input state on the large screen when assisting input.
  • the above takes the user assisting in inputting Chinese characters on the large screen as an example; in practical applications, the user may assist the large screen in inputting English phrases or other forms of text, and the specific content of the input is not limited.
  • in the case where each functional module is divided according to each function, FIG. 66 shows a possible schematic structural diagram of the first device, the second device, or the third device provided by an embodiment of the present application.
  • the first device, the second device, or the third device includes: a display screen 6601 and a processing unit 6602.
  • the display screen 6601 is used to support the first device, the second device, or the third device to perform the display steps in the foregoing embodiments, or other processes of the technologies described in the embodiments of this application.
  • the display screen 6601 may be a touch screen or other hardware or a combination of hardware and software.
  • the processing unit 6602 is configured to support the first device, the second device, or the third device to perform the processing steps in the foregoing method embodiments, or other processes of the technologies described in the embodiments of this application.
  • the electronic device includes but is not limited to the unit modules listed above.
  • the specific functions that can be implemented by the above functional units also include but are not limited to the functions corresponding to the method steps described in the above examples.
  • for the detailed description of other units of the electronic device, please refer to the detailed description of the corresponding method steps, which is not repeated here in this embodiment of the present application.
  • the first device, the second device or the third device involved in the above embodiments may include: a processing module, a storage module and a display screen.
  • the processing module is used to control and manage the actions of the first device, the second device or the third device.
  • the display screen is used to display content according to the instructions of the processing module.
  • the storage module is used for saving program codes and data of the first device, the second device or the third device.
  • the first device, the second device, or the third device may also include an input module and a communication module; the communication module is used to support communication between the first device, the second device, or the third device and other network entities, to realize functions such as calls, data interaction, and Internet access of the first device, the second device, or the third device.
  • the processing module may be a processor or a controller.
  • the communication module may be a transceiver, an RF circuit or a communication interface or the like.
  • the storage module may be a memory.
  • the display module can be a screen or a display.
  • the input module can be a touch screen, a voice input device, or a fingerprint sensor.
  • the above-mentioned communication module may include an RF circuit, and may also include a wireless fidelity (Wi-Fi) module, a near field communication (NFC) module, and a Bluetooth module.
  • Communication modules such as the RF circuit, the NFC module, the Wi-Fi module, and the Bluetooth module may be collectively referred to as a communication interface.
  • the above-mentioned processor, RF circuit, display screen and memory can be coupled together through a bus.
  • FIG. 67 shows another possible structural schematic diagram of the first device, the second device, or the third device provided by the embodiment of the present application, including: one or more processors 6701, a memory 6702, a camera 6704, and a display screen 6703; the above devices may communicate through one or more communication buses 6706.
  • one or more computer programs 6705 are stored in the memory 6702 and are configured to be executed by the one or more processors 6701; the one or more computer programs 6705 include instructions for performing the steps of any of the above methods.
  • the electronic device includes but is not limited to the above-mentioned devices.
  • the above-mentioned electronic device may also include a radio frequency circuit, a positioning device, a sensor, and the like.
  • Embodiment 51 A device communication method, applied to a system including a first device, a second device, and a third device, the method comprising:
  • the first device displays a first interface including a first edit box
  • the second device displays a second interface according to the instruction message, the second interface includes a second edit box;
  • the third device displays a third interface according to the instruction message, and the third interface includes a third edit box;
  • the first device synchronizes the edit state to the first edit box
  • the third device synchronizes the edit state to the third edit box
  • the first device synchronizes the editing state to the first editing box, and the second device synchronizes the editing state to the second edit box;
  • the second device synchronizes the editing state to the second editing box
  • the third device synchronizes the editing state to the third edit box.
  • Embodiment 52 The method of Embodiment 51, the second device comprising an interface service for synchronizing edit states between the first device and the second device.
  • Embodiment 53 The method of Embodiment 51 or 52, wherein the editing state includes one or more of the following: text content, cursor, or highlighting of text content.
  • Embodiment 54 The method according to any one of Embodiments 51-53, wherein the second device displays a second interface according to the instruction message, comprising:
  • the second device displays a notification interface in response to the instruction message;
  • the notification interface includes an option to confirm the auxiliary input;
  • in response to a triggering operation on the option, the second device displays the second interface.
  • Embodiment 55 The method of any one of Embodiments 51-54, wherein the second interface further comprises: all or part of the content of the first interface.
  • Embodiment 56 The method according to Embodiment 55, wherein the second edit box is displayed in layers with all or part of the content of the first interface, and the second edit box is displayed on the upper layer of all or part of the content of the first interface.
  • Embodiment 57 The method according to any one of Embodiments 51-56, after the second device displays the second interface according to the instruction message, the method further includes:
  • in response to the triggering of the second edit box, the second device displays a virtual keyboard
  • the second device displays the editing state in the second editing box according to the input operation received by the virtual keyboard and/or the second editing box.
  • Embodiment 58 The method according to any one of Embodiments 51-57, wherein the first device comprises any one of the following: a television, a large screen, or a wearable device; the second device or the third device Include any of the following: phone, tablet, or wearable.
  • Embodiment 59 The method according to any one of Embodiments 51-58, wherein the edit state in the first edit box includes the identifier of the first device, and/or the edit state in the second edit box includes the identifier of the second device, and/or the edit state in the third edit box includes the identifier of the third device.
  • Embodiment 510 The method according to any one of Embodiments 51-59, wherein when input content is received in the second edit box and the third edit box at the same time, the first device arbitrates the input content and the display mode of the second edit box and the third edit box.
  • Embodiment 511 A device communication method, applied to a system including a first device, a second device, and a third device, the method comprising:
  • the first device displays a first interface including a first edit box
  • the second device displays a second interface according to the instruction message, the second interface includes a second edit box;
  • the second device sends an auxiliary input request to the third device
  • the third device displays a third interface according to the auxiliary input request, the third interface includes a third edit box;
  • the first device synchronizes the edit state to the first edit box
  • the third device synchronizes the edit state to the third edit box
  • the first device synchronizes the editing state to the first editing box, and the second device synchronizes the editing state to the second edit box;
  • the second device synchronizes the editing state to the second editing box
  • the third device synchronizes the editing state to the third edit box.
  • Embodiment 512 A device communication method, applied to a system including a first device, a second device, and a third device, the method comprising:
  • the second device displays a fourth interface including options of the first device
  • in response to the selection operation of the option of the first device, the second device sends an indication message to the first device;
  • the first device displays a first interface including a first edit box
  • the second device displays a second interface, the second interface includes a second edit box
  • the second device sends an auxiliary input request to the third device
  • the third device displays a third interface according to the auxiliary input request, the third interface includes a third edit box;
  • when the editing state in the second edit box is updated, the first device synchronizes the editing state to the first edit box, and the third device synchronizes the editing state to the third edit box;
  • when the editing state in the third edit box is updated, the first device synchronizes the editing state to the first edit box, and the second device synchronizes the editing state to the second edit box;
  • when the editing state in the first edit box is updated, the second device synchronizes the editing state to the second edit box, and the third device synchronizes the editing state to the third edit box.
  • Embodiment 513 A device communication method, applied to a first device, the method comprising:
  • the first device displays a first interface including a first edit box
  • the first device synchronizes the editing state to the first editing box
  • Embodiment 514 A device communication method, applied to a second device, the method comprising:
  • the second device displays a fourth interface including options of the first device
  • in response to a selection operation on the option of the first device, the second device sends an instruction message to the first device, for the first device to display a first interface including a first edit box;
  • the second device displays a second interface, the second interface includes a second edit box
  • the second device synchronizes the editing state to the second editing box
  • Embodiment 515 A device communication system, comprising a first device, a second device, and a third device, where the first device is configured to perform the steps of the first device according to any one of Embodiments 51-59 and 510-514, the second device is configured to perform the steps of the second device according to any one of Embodiments 51-59 and 510-514, and the third device is configured to perform the steps of the third device according to any one of Embodiments 51-59 and 510-514.
  • Embodiment 516 A first device, comprising: at least one memory and at least one processor;
  • the memory is used to store program instructions
  • the processor is configured to invoke program instructions in the memory to cause the first device to perform the steps performed by the first device described in any one of Embodiments 51-59 and 510-514.
  • Embodiment 517 A second device, comprising: at least one memory and at least one processor;
  • the memory is used to store program instructions
  • the processor is configured to invoke program instructions in the memory to cause the second device to perform the steps performed by the second device described in any one of Embodiments 51-59 and 510-514.
  • Embodiment 518 A computer-readable storage medium having a computer program stored thereon, so that when the computer program is executed by a processor of a first device, the steps performed by the first device described in any one of Embodiments 51-59 and 510-514 are implemented; or, when the computer program is executed by a processor of a second device, the steps performed by the second device described in any one of Embodiments 51-59 and 510-514 are implemented; or, when the computer program is executed by a processor of a third device, the steps performed by the third device described in any one of Embodiments 51-59 and 510-514 are implemented.
  • For implementations of Embodiment 51 to Embodiment 59 and Embodiment 510 to Embodiment 519, reference may be made to the descriptions in FIGS. 68 to 87.
  • In some scenarios, the cursor on the mobile phone side moves, but the cursor on the large screen side is not displayed, or is displayed but does not move, so that the deletion or insertion of text in the large-screen edit box does not conform to the usual editing and display process and affects the user's viewing experience.
  • the embodiment of the present application proposes the system framework described in the above-mentioned FIG. 7 .
  • The framework makes it possible to invoke any process between the mobile phone side and the large screen side. Therefore, cursor position display or highlighted area display can be implemented using the above framework of the embodiments of the present application.
  • any editing state in the mobile phone editing box can be synchronized in the large-screen editing box.
  • the editing state may refer to a state that can be changed when editing in the editing box of the mobile phone, for example, including the text content in the editing box, the cursor position in the editing box, and/or the highlighted area in the editing box.
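The editing state described above can be modeled as a small data structure. The following is a minimal Python sketch; the patent does not define a concrete structure, so all field names here are illustrative assumptions:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class EditState:
    """Editing state of an edit box: text content, cursor position,
    and an optional highlighted range (all names are illustrative)."""
    text: str = ""                               # text content in the edit box
    cursor: int = 0                              # cursor position (character index)
    highlight: Optional[Tuple[int, int]] = None  # highlighted range [start, end)

# Example: "old lion" with the cursor after "old " and "old" highlighted
state = EditState(text="old lion", cursor=4, highlight=(0, 3))
```

Synchronizing the editing state between devices then amounts to transmitting and applying such a structure.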
  • FIG. 68 shows a schematic diagram of a specific system architecture of an embodiment of the present application.
  • A distributed networking including a large screen (client) and a mobile phone (server) is used as an example to schematically illustrate the process of synchronizing the editing states in the edit boxes of both the large screen and the mobile phone.
  • When the user selects, via the remote control or the like, the edit box provided by an application (APP) on the large screen, the local input method of the large screen can be activated, the data channel interface can be transmitted to the IMF of the large screen, and the IMF of the large screen can query the distributed networking for a device with remote auxiliary input capability and connect to the auxiliary AA of a mobile phone having remote auxiliary input capability.
  • the auxiliary AA of the mobile phone can pull up the local input method application of the mobile phone, for example, an edit box for assisting input on the large screen will pop up in the mobile phone.
  • the auxiliary AA of the mobile phone can return the RPC object of the auxiliary AA to the large screen through the distributed networking, and the large screen can send the RPC object related to the large screen input channel to the mobile phone.
  • Subsequently, the mobile phone can synchronize the editing state to the edit box of the large screen according to the RPC object related to the input channel of the large screen, and the large screen can obtain the editing state of the edit box of the mobile phone from the mobile phone according to the RPC object of the auxiliary AA of the mobile phone.
  • When the user changes the editing state in the mobile phone based on the input method APP of the mobile phone, or edits the edit box in the mobile phone based on the auxiliary AA, the auxiliary AA can traverse the RPC objects related to the input channel of the large screen that it holds, and use those RPC objects to synchronize the update of the editing state to the large screen.
  • the update of the editing state of the mobile phone may include one or more of the following: addition or deletion of text content in the mobile phone editing box, cursor movement in the mobile phone editing box, highlighting of a certain text in the mobile phone editing box, and the like.
  • After the large screen receives the updated editing state from the mobile phone, it can call the local interface of the large screen to update the editing state in the edit box of the large screen.
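The push-style synchronization just described (the phone's auxiliary AA captures a change, traverses the RPC objects it holds, and the large screen applies the change through its local interface) can be sketched in Python. All class and method names are illustrative assumptions, and the real RPC transport is replaced here by in-process calls:

```python
class LargeScreenEditBox:
    """Stands in for the large screen's local interface for changing the
    editing state (names are illustrative)."""
    def __init__(self):
        self.text, self.cursor = "", 0

    def apply_edit_state(self, text, cursor):
        self.text, self.cursor = text, cursor


class RpcProxy:
    """Wraps the RPC object of the large screen's input data channel;
    in this sketch it simply forwards the call in-process."""
    def __init__(self, target):
        self._target = target

    def sync_edit_state(self, text, cursor):
        self._target.apply_edit_state(text, cursor)


class PhoneAuxiliaryAA:
    """Auxiliary AA on the phone: captures edit-state changes and pushes
    them through every RPC proxy it holds."""
    def __init__(self):
        self.remote_proxies = []
        self.text, self.cursor = "", 0

    def on_user_edit(self, text, cursor):
        self.text, self.cursor = text, cursor
        for proxy in self.remote_proxies:   # traverse the held RPC objects
            proxy.sync_edit_state(text, cursor)


screen = LargeScreenEditBox()
phone = PhoneAuxiliaryAA()
phone.remote_proxies.append(RpcProxy(screen))
phone.on_user_edit("lion", 4)               # user types "lion" on the phone
```

After the call, the large screen's edit box holds the same text and cursor position as the phone's.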
  • The large screen can use the above-described method of synchronizing editing-state updates between the large screen and the mobile phone to synchronize, on the large screen, editing-state updates from another device; the mobile phone can likewise use this method to synchronize, on the mobile phone, editing-state updates from another device.
  • When the editing state of the edit box on the large screen changes, for example, when the user uses a remote control to perform editing operations in the edit box on the large screen while the mobile phone and/or other devices are assisting the input on the large screen, the change of the editing state in the large-screen edit box can also be synchronized to the mobile phone and/or other devices through the distributed networking and the RPC objects of the mobile phone and/or other devices, and the local interfaces of the mobile phone and/or other devices can be called to update the editing states of the mobile phone and/or other devices.
  • the embodiment corresponding to FIG. 68 is a possible implementation manner of the embodiment of the present application.
  • The user may select the virtual keyboard under the edit box provided by an application on the large screen to trigger the subsequent process of assisting large-screen input, or the user may trigger the process of assisting large-screen input on the mobile phone; this is not specifically limited in this embodiment of the present application.
  • the following is an exemplary description of the user interface for the interaction between the large screen and the mobile phone.
  • Figures 69-70 show schematic diagrams of user interfaces in which a user triggers an auxiliary input.
  • Figure 69 shows a user interface diagram of a large screen.
  • the user can use the remote control 6901 to select the edit box 6902 in the large screen, and then can trigger the execution of the subsequent process of the mobile phone-assisted large-screen input in the embodiment of the present application.
  • The user can use the remote control 6901 to select any content 6902 in the virtual keyboard in the large screen, and then the subsequent process of the mobile phone-assisted large-screen input in the embodiment of the present application can be triggered.
  • the specific manner in which the mobile phone assists the large-screen input will be described in the subsequent embodiments, and will not be repeated here.
  • FIG. 69 shows a schematic diagram of setting an edit box in a user interface diagram of a large screen.
  • The user interface of the large screen may include multiple edit boxes, and the user may trigger any edit box to trigger the subsequent process of the mobile phone-assisted large-screen input, which is not specifically limited in the embodiment of the present application.
  • Figure 70 shows a user interface diagram of a cell phone.
  • the user can display the user interface shown in Figure a in Figure 70 by pulling down on the main screen of the mobile phone, etc.
  • The user interface shown in figure a of Figure 70 may include one or more of the following functions of the mobile phone: WLAN, Bluetooth, Torch, Mute, Airplane Mode, Mobile Data, Wireless Screencasting, Screenshot or Auxiliary Input 7001.
  • the auxiliary input 7001 may be the function of assisting the large-screen input of the mobile phone in this embodiment of the application.
  • the mobile phone can search for devices such as large screens in the same distributed network, obtain the search box in the large screen, and establish a communication connection with the large screen.
  • The mobile phone may further display the user interface shown in figure c of Figure 70, in which an edit box for assisting large-screen input can be displayed, and the user can assist large-screen input based on the edit box.
  • the mobile phone can also display the user interface shown in b of Figure 70.
  • The user interface shown in figure b of Figure 70 can display identifiers of multiple large screens, and a large screen identifier can be the device number, user name or nickname of the large screen.
  • The user can select a large screen (for example, click on large screen A or large screen B) in the user interface shown in figure b of Figure 70, and enter the user interface shown in figure c of Figure 70; this is not specifically limited in this embodiment of the present application.
  • the large screen can search for an auxiliary device (such as a mobile phone) with auxiliary input capability in the distributed network, and automatically determine the mobile phone used for auxiliary input, or Send notifications to all mobile phones found in the distributed network.
  • In one case, the large screen can automatically select a mobile phone as the auxiliary input device. For example, the large screen can automatically select the mobile phone set as the default for auxiliary input as the auxiliary input device.
  • If the large screen finds that there are multiple mobile phones in the distributed networking, and among them there is a mobile phone that the user selected for auxiliary input last time, the large screen can automatically select that mobile phone as the auxiliary input device.
  • If the large screen finds that there are multiple mobile phones in the distributed networking and obtains the mobile phone most frequently selected by the user for auxiliary input, the large screen can automatically select that mobile phone as the auxiliary input device.
  • If the large screen finds that there are multiple mobile phones in the distributed networking, and there is a mobile phone logged in with the same user account as the large screen, the large screen can automatically select that mobile phone as the auxiliary input device.
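The device-selection rules above can be sketched as a selection function. This is a hypothetical Python sketch; the rule priority and all key names are assumptions, since the embodiments list the cases without a strict order:

```python
def pick_auxiliary_device(phones, screen_account=None):
    """Pick an auxiliary-input phone.

    phones: list of dicts with illustrative keys:
      {"id", "is_default", "last_used", "use_count", "account"}.
    Returns the chosen phone id, or None (meaning: notify all phones).
    """
    if len(phones) == 1:                      # only one phone in the networking
        return phones[0]["id"]
    for p in phones:                          # a phone marked as default
        if p.get("is_default"):
            return p["id"]
    for p in phones:                          # phone chosen last time
        if p.get("last_used"):
            return p["id"]
    used = [p for p in phones if p.get("use_count", 0) > 0]
    if used:                                  # most frequently chosen phone
        return max(used, key=lambda p: p["use_count"])["id"]
    for p in phones:                          # phone logged in with same account
        if screen_account and p.get("account") == screen_account:
            return p["id"]
    return None                               # fall back to notifying all phones

phones = [
    {"id": "A", "use_count": 5, "account": "user1"},
    {"id": "B", "use_count": 2, "account": "user2"},
]
chosen = pick_auxiliary_device(phones, screen_account="user2")
```

With the assumed priority, `chosen` is the most frequently used phone; if no usage history existed, the account match would decide instead.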
  • The following takes the large screen sending a notification to a mobile phone in the distributed networking as an example.
  • the following is an exemplary description of the user interface of the synchronous editing state of the large screen and the mobile phone.
  • One or more mobile phones can be connected to the distributed network.
  • the mobile phone can assist the large screen for input.
  • The distributed networking can include multiple mobile phones; in the case where multiple mobile phones access the distributed networking, the multiple mobile phones can jointly assist the large screen with input.
  • For example, an elderly user holds mobile phone A to assist large-screen input but may input slowly; a young user holding mobile phone B can then assist the large-screen input together with mobile phone A. The content in the edit boxes of mobile phone A, mobile phone B, and the large screen can be synchronized with one another, so the elderly user can also learn from mobile phone A what the young user inputs based on mobile phone B.
  • Alternatively, the elderly user can request the young user holding mobile phone B to assist input on the large screen: mobile phone A can send a request to mobile phone B, and mobile phone B can jointly assist the large-screen input based on the request from mobile phone A.
  • For example, the initial editing state of the large screen is synchronized to mobile phone A and mobile phone B, and an update of the editing state of mobile phone A is synchronized to the large screen and mobile phone B.
  • the large screen, mobile phone A, and mobile phone B have been connected to the distributed networking.
  • The large screen can connect the auxiliary AA of mobile phone A and the auxiliary AA of mobile phone B, request auxiliary input from mobile phone A or pop up an input box in mobile phone A, and request auxiliary input from mobile phone B or pop up an input box in mobile phone B.
  • FIG. 71 shows a schematic diagram of the user interface in which mobile phone A determines to assist large-screen input.
  • a notification for prompting the large screen to request auxiliary input can pop up in mobile phone A, and the user can trigger the notification in mobile phone A to confirm the auxiliary large screen input.
  • an edit box for assisting large-screen input can pop up in mobile phone A.
  • The user can trigger the edit box shown in the middle figure of Figure 71 by clicking or the like, and mobile phone A can display the user interface shown in the right figure of Figure 71, in which the virtual keyboard (or soft keyboard) of the mobile phone can be displayed; the user can then use the virtual keyboard of mobile phone A to assist input on the large screen.
  • Alternatively, mobile phone A may not receive a notification; instead, an edit box for assisting input on the large screen, as shown in the left figure of Figure 72, pops up. Further, the user can trigger the edit box shown in the left figure of Figure 72 by clicking or the like, and mobile phone A can display the user interface shown in the right figure of Figure 72, which can display the virtual keyboard (or soft keyboard) of the mobile phone; the user can then use the virtual keyboard of mobile phone A to assist large-screen input.
  • the schematic diagram of the user interface of the mobile phone B for determining the auxiliary large-screen input is similar to that of the mobile phone A, and details are not repeated here.
  • the large screen, mobile phone A, and mobile phone B have been connected to the distributed networking.
  • the large screen can be connected to the auxiliary AA of mobile phone A, and request auxiliary input from mobile phone A or pop up an input box in mobile phone A; after that, mobile phone A requests mobile phone B to jointly assist the large-screen input.
  • an interface for requesting auxiliary input from mobile phone B can be displayed in mobile phone A, and the user can request mobile phone B for auxiliary large-screen input by clicking the OK option in mobile phone A.
  • After that, a notification that mobile phone A requests auxiliary large-screen input can be displayed in mobile phone B. The user can accept mobile phone A's request in mobile phone B, and the edit box interface shown in Figure 72 is displayed in mobile phone B, so that mobile phone B jointly assists the large-screen input.
  • In another example, the large screen and mobile phone A are connected to the distributed networking, and mobile phone A assists the large-screen input. As shown in FIG. 74, an interface prompting the user whether to jointly assist the input can be displayed in mobile phone B. The user can click the OK option in mobile phone B, and the edit box interface shown in FIG. 72 is displayed in mobile phone B, so that mobile phone B jointly assists the large-screen input with mobile phone A.
  • steps such as identity authentication or authentication can also be performed between the mobile phone A, the mobile phone B, and the large screen to improve communication security.
  • This is not specifically limited in the embodiment of the present application.
  • The input content in the edit box of the large screen can be synchronized to the edit box of mobile phone A and/or the edit box of mobile phone B.
  • FIG. 75 shows a schematic diagram of a framework for synchronizing the input content in the edit box of the large screen to the edit box of the mobile phone A or the edit box of the mobile phone B.
  • After the large screen connects, through the distributed networking, the servers with distributed input capability, the dialog edit boxes of the auxiliary AAs for auxiliary input pop up on mobile phone A and mobile phone B, and the respective input method soft keyboards are pulled up (as shown in Figure 71); the large screen will hold the RPC objects of the auxiliary AAs of mobile phone A and mobile phone B in the callback after the connection is established.
  • Meanwhile, the RPC object wrapping the input data channel of the large screen is passed to the auxiliary AA sides of mobile phone A and mobile phone B.
  • Mobile phone A and mobile phone B can obtain the initial editing status from the large screen side through the RPC object related to the input data channel transmitted from the large screen. Then the mobile phone A and the mobile phone B call the local interface to update the initial editing state. In this way, the complete editing state in the large screen can be synchronized to the mobile phone, so that the user does not need to repeatedly input the initial input content in the editing box of the large screen in the mobile phone.
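The initial-state pull at connection time can be sketched as follows. This is a minimal Python model under illustrative names; the RPC channel is again replaced by a direct call:

```python
class LargeScreen:
    """Server-side stub: returns the current editing state of its edit box
    when a newly connected phone asks for it (names are illustrative)."""
    def __init__(self, text="", cursor=0):
        self.text, self.cursor = text, cursor

    def get_edit_state(self):
        return {"text": self.text, "cursor": self.cursor}


class Phone:
    def __init__(self):
        self.text, self.cursor = "", 0

    def connect(self, screen_channel):
        # Pull the initial editing state over the input-data-channel RPC
        # object, then apply it through the phone's local interface.
        initial = screen_channel.get_edit_state()
        self.text, self.cursor = initial["text"], initial["cursor"]


screen = LargeScreen(text="old lion", cursor=8)
phone_a, phone_b = Phone(), Phone()
phone_a.connect(screen)
phone_b.connect(screen)
```

Both phones start with the large screen's existing content, so the user never re-types what was already in the large-screen edit box.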
  • Figures 71 and 72 show schematic diagrams of the user interface of mobile phone A or mobile phone B when there is no initial input content in the edit box of the large screen. If there is initial input content in the edit box of the large screen, the edit box of mobile phone A or mobile phone B will be synchronized with the initial input content of the large screen. For ease of description, the following describes the process of the mobile phone assisting input on the large screen by taking as an example that there is no initial input content in the edit box of the large screen.
  • FIG. 76 shows a schematic diagram of the user interface in which the user assists the large-screen input in the editing box of the mobile phone B.
  • As shown in the left figure of Figure 76, the user can input "lion" in the edit box of mobile phone B, and the cursor can be displayed after "lion" in the edit box of mobile phone B; as shown in the user interface diagram of the large screen on the right of Figure 76, "lion" and the cursor can be synchronized to the edit box of the large screen.
  • FIG. 77 shows a schematic diagram of the user interface in which the user moves the cursor in the edit box of mobile phone B.
  • The user can move the cursor before "lion" in the edit box of mobile phone B and add "old" before the cursor; as shown in the user interface diagram of the large screen in the right figure of Figure 77, the cursor before "lion" and the "old" before the cursor can be synchronized to the edit box of the large screen.
  • FIG. 78 shows a schematic diagram of the user interface in which the user highlights a target word in the edit box of mobile phone B.
  • the user can highlight "Old" in the editing box of mobile phone B, and the user interface diagram of the large screen shown in the right picture of Figure 78 can be displayed in The edit box of the large screen and the edit box of mobile phone A are synchronized to the highlighted "Old".
  • the user interface of the mobile phone A may be similar to the user interface of the mobile phone B, which is not repeated here.
  • In another example, both mobile phone A and mobile phone B receive a request from the large screen for auxiliary input. The user of mobile phone A can confirm assisting large-screen input by clicking the control for agreeing to the auxiliary input, and the user of mobile phone B can likewise confirm assisting large-screen input by clicking the control for agreeing to the auxiliary input. Then the editing state in the edit box of mobile phone A can be synchronized to the edit box of the large screen and the edit box of mobile phone B; the editing state in the edit box of mobile phone B can be synchronized to the edit box of the large screen and the edit box of mobile phone A; and the editing state of the edit box of the large screen can be synchronized to the edit box of mobile phone A and the edit box of mobile phone B.
  • FIG. 79 shows a schematic diagram of the user interface in which the user uses mobile phone A and mobile phone B to assist the input on the large screen.
  • The user can enter "old lion" in the edit box of mobile phone A, move the cursor between "old" and "lion", and highlight "old".
  • the editing status displayed in the editing box of the large screen is the same as the editing status displayed in the editing box of the mobile phone A.
  • the editing status displayed in the editing box of mobile phone B is the same as the editing status displayed in the editing box of mobile phone A.
  • In a possible scenario, "Old" may be input on mobile phone A and selected in the virtual keyboard; therefore, "Old" can be displayed in the edit boxes of mobile phone A and mobile phone B. At the same time, "big" may be entered on mobile phone B but not yet selected in the virtual keyboard of mobile phone B; then "big" is not displayed in the edit boxes of mobile phone A and mobile phone B. In other words, when mobile phone A and mobile phone B jointly assist the large-screen input, the content in the edit boxes of mobile phone A and mobile phone B is the same, while the content of mobile phone A and mobile phone B outside the edit boxes can be displayed the same or differently.
  • If input from both phones is received at almost the same time, the large screen can arbitrate whether "Old" is displayed before "big" or "big" is displayed before "Old". The basis for the arbitration may be the order in which the "Old" of mobile phone A and the "big" of mobile phone B are received, or the auxiliary input frequencies of mobile phone A and mobile phone B, or a random ruling, which is not specifically limited in this embodiment of the application.
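The arbitration bases mentioned above can be sketched as a small decision function. This is a hypothetical Python sketch; the policy names and data shapes are assumptions, not the patent's implementation:

```python
import random

def arbitrate(candidates, policy="arrival", freq=None):
    """Decide display order when several phones commit input at nearly the
    same time.

    candidates: list of (device_id, text, arrival_time) tuples.
    policy: "arrival" (order of reception), "frequency" (more frequent
    assistant first), or anything else for a random ruling.
    """
    if policy == "arrival":
        ranked = sorted(candidates, key=lambda c: c[2])
    elif policy == "frequency":
        freq = freq or {}
        ranked = sorted(candidates, key=lambda c: -freq.get(c[0], 0))
    else:
        ranked = random.sample(candidates, len(candidates))
    return "".join(text for _, text, _ in ranked)

# Phone A's "Old" arrived before phone B's "big"
merged = arbitrate([("A", "Old", 1.0), ("B", "big", 1.2)], policy="arrival")
```

Under the arrival policy the earlier input is displayed first; under the frequency policy the more frequently used assistant's input wins the leading position.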
  • FIG. 80 shows another schematic diagram of the user interface in which the user uses mobile phone A and mobile phone B to assist the input on the large screen.
  • The user moves the cursor after "old lion" in the edit box of mobile phone B, and then enters "king" after "old lion".
  • the editing status displayed in the editing box of the large screen is the same as the editing status displayed in the editing box of the mobile phone B.
  • the editing status displayed in the editing box of mobile phone A is the same as the editing status displayed in the editing box of mobile phone B.
  • As another example, in the user interface of the large screen shown in Figure 80, on the basis of Figure 79, the user moves the cursor after "Old Lion" in the edit box of the large screen, and then enters "King" after "Old Lion".
  • the editing status displayed in the editing box of mobile phone A is the same as the editing status displayed in the editing box of the large screen.
  • the editing status displayed in the editing box of mobile phone B is the same as the editing status displayed in the editing box of the large screen.
  • FIG. 81 shows a schematic diagram of processing logic when a mobile phone assists input on a large screen.
  • The updating of the editing state may include: the user uses the input method of mobile phone A to input or delete text, move the cursor in the text edit box, or select and highlight a certain passage of text in the edit box, etc. The auxiliary AA of mobile phone A captures the change of the editing state, queries the RPC object of the input data channel of the large screen that it already holds, and synchronizes the editing state to the large screen side through the proxy that wraps the RPC object. The large screen side invokes its local related interface for changing the editing state to synchronize the editing state updated by mobile phone A.
  • When the large screen is updating the editing state, it will ask the IMF whether it holds the RPC objects of the auxiliary AAs of other servers.
  • If so, the large screen informs mobile phone B through the RPC object to synchronize the editing state, and passes the synchronization factor.
  • Mobile phone B invokes its local related interface for changing the editing state to synchronize the editing state passed from the large screen side.
  • Since the source of this update is mobile phone A, which is a distributed input server in the networking, the update is not continued to the clients in the networking, and this update is completed.
  • the distributed networking may also include multiple large screens.
  • FIG. 82 shows a schematic diagram of a user interface for mutual auxiliary input between multiple devices when the distributed networking includes large screen A, large screen B, mobile phone A, and mobile phone B.
  • The user can use mobile phone A, mobile phone B and/or large screen A to edit "Old Lion King", and large screen B can also be synchronized to "Old Lion King" in the edit box of large screen B.
  • Figure 83 shows a schematic diagram of a synchronous circular chain.
  • When the user operates mobile phone A to update the editing state, mobile phone A synchronizes the update to large screen A and large screen B. Large screen A detects that there is still mobile phone B in the current distributed networking, so it synchronizes the update to mobile phone B. Mobile phone B finds that there is still large screen B in the distributed networking, so it synchronizes with large screen B, and large screen B in turn synchronizes with mobile phone A; thus a synchronization loop chain is generated.
  • the embodiments of the present application introduce a synchronization factor into the distributed networking input synchronization technology in order to suppress the generation of the synchronization loop chain.
  • the synchronization factor records the factors of each update initiation.
  • The synchronization factor may include information such as the device ID and/or the terminal type (server or client) of the update initiator. Every time the editing state is updated, the synchronization factor is transmitted, and a device detects the synchronization factor when updating the editing state. If the update is initiated by a server, the synchronization factor records the source server of the update operation; the update is then propagated to the other servers and is not propagated to the clients again.
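The loop-suppression rule can be sketched as follows. This is a hypothetical Python model, not the patent's implementation: the synchronization factor records the update's source, the update travels from the source's role to the opposite role and then to the remaining devices of the source's role, and stops there:

```python
class Device:
    """Node in the distributed networking. role is "server" (mobile phone)
    or "client" (large screen). An update makes at most two hops:
    source role -> opposite role -> same role as source, then stops,
    so no synchronization loop chain is generated."""
    def __init__(self, name, role):
        self.name, self.role, self.peers = name, role, []
        self.text = ""

    def local_update(self, text):
        self.text = text
        factor = {"source": self.name, "source_role": self.role}
        for p in self.peers:
            if p.role != self.role:            # first hop: opposite role only
                p.sync(text, factor, hops=1)

    def sync(self, text, factor, hops):
        self.text = text
        if hops >= 2:                          # same role as source reached: stop
            return
        for p in self.peers:
            if p.role != self.role and p.name != factor["source"]:
                p.sync(text, factor, hops + 1)


phone_a = Device("phoneA", "server")
phone_b = Device("phoneB", "server")
screen_a = Device("screenA", "client")
screen_b = Device("screenB", "client")
devices = [phone_a, phone_b, screen_a, screen_b]
for d in devices:
    d.peers = [p for p in devices if p is not d]

phone_a.local_update("old lion")   # phones -> screens -> remaining phone, then stop
```

All four devices end up with the same editing state, and the recursion terminates instead of cycling phone A → screen A → phone B → screen B → phone A.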
  • FIG. 84 shows a schematic diagram of processing logic when the mobile phone of FIG. 82 assists input on a large screen.
  • The edit box of the application (APP) on large screen A captures the change of the editing state, and queries the IMF to find the RPC objects returned by the auxiliary AAs of mobile phone A and mobile phone B; it synchronizes the editing state to mobile phone A and mobile phone B through the proxies wrapping the RPC objects, and passes the synchronization factor.
  • The auxiliary AA sides of mobile phone A and mobile phone B call their local related interfaces that can change the editing state to synchronize the editing state updated by large screen A.
  • When mobile phone A or mobile phone B synchronizes the editing state, it will query the IMF whether it holds the RPC objects of the data channels sent by other clients.
  • If the RPC object of the input data channel of large screen B is found, the mobile phone informs large screen B through the RPC object to synchronize the editing state and passes the synchronization factor; large screen B then calls its local related interface that can change the editing state to synchronize the editing state passed by mobile phone A.
  • Since the source of this update is large screen A, which is a distributed input client in the networking, the update is not continued to the servers in the networking, and this update is completed.
  • the above user-interface diagrams of the mobile phone assisting large-screen input are all exemplary descriptions.
  • part or all of the content on the large screen may also be synchronized to the mobile phone, which enables the mobile-phone user to know the status of the large screen from the mobile-phone interface.
  • FIG. 85 shows a user interface of a cell phone.
  • when the user uses the mobile phone to assist input on the large screen, all or part of the content of the large screen can be projected to the mobile phone.
  • the mobile phone's edit box is displayed on the upper layer, so that while the user types in the mobile phone's edit box, the state of the large screen's edit box can be seen at the same time in the mobile phone's user interface, and the user does not need to look up at the large screen to check the input state while assisting input.
  • the user assisting in inputting Chinese characters on the large screen is taken as an example.
  • the user may also assist the large screen in inputting English phrases or other forms of text.
  • the specific content of the input is not limited.
  • when each functional module is divided according to each function, FIG. 86 shows a possible schematic structural diagram of the first device, the second device, or the third device provided by an embodiment of this application.
  • the first device, the second device, or the third device includes: a display screen 8601 and a processing unit 8602.
  • the display screen 8601 is used to support the first device, the second device, or the third device to perform the display steps in the foregoing embodiments, or other processes of the technologies described in the embodiments of this application.
  • the display screen 8601 may be a touch screen or other hardware or a combination of hardware and software.
  • the processing unit 8602 is configured to support the first device, the second device, or the third device to perform the processing steps in the foregoing method embodiments, or other processes of the technologies described in the embodiments of this application.
  • the electronic device includes but is not limited to the unit modules listed above.
  • the specific functions that can be implemented by the above functional units also include but are not limited to the functions corresponding to the method steps described in the above examples.
  • for the detailed description of the other units of the electronic device, refer to the detailed description of the corresponding method steps; details are not repeated here in the embodiments of this application.
  • the first device, the second device or the third device involved in the above embodiments may include: a processing module, a storage module and a display screen.
  • the processing module is used to control and manage the actions of the first device, the second device or the third device.
  • the display screen is used to display content according to the instructions of the processing module.
  • the storage module is used for saving program codes and data of the first device, the second device or the third device.
  • the first device, the second device, or the third device may also include an input module and a communication module; the communication module is used to support communication between the first, second, or third device and other network entities, so as to implement functions of the first, second, or third device such as calls, data exchange, and Internet access.
  • the processing module may be a processor or a controller.
  • the communication module may be a transceiver, an RF circuit or a communication interface or the like.
  • the storage module may be a memory.
  • the display module can be a screen or a display.
  • the input module can be a touch screen, a voice input device, or a fingerprint sensor.
  • the above-mentioned communication module may include an RF circuit, and may also include a wireless fidelity (Wi-Fi) module, a near field communication (NFC) module, and a Bluetooth module.
  • Communication modules such as the RF circuit, the NFC module, the Wi-Fi module, and the Bluetooth module may be collectively referred to as a communication interface.
  • the above-mentioned processor, RF circuit, display screen and memory can be coupled together through a bus.
  • FIG. 87 shows another possible schematic structural diagram of the first device, the second device, or the third device provided by this embodiment of the present application, including: one or more processors 8701, a memory 8702, a camera 8704, and a display screen 8703; the above components may communicate through one or more communication buses 8706.
  • one or more computer programs 8705 are stored in the memory 8702 and are configured to be executed by the one or more processors 8701; the one or more computer programs 8705 include instructions for performing the display method of any of the above steps.
  • the electronic device includes but is not limited to the above-mentioned devices.
  • the above-mentioned electronic device may also include a radio frequency circuit, a positioning device, a sensor, and the like.
  • Embodiments of the present application further provide a computer storage medium, including computer instructions, when the computer instructions are executed on the electronic device, the electronic device is made to execute the display method of any of the above steps.
  • Embodiments of the present application further provide a computer program product, which, when the computer program product runs on a computer, causes the computer to execute the display method as described in any of the above steps.
  • An embodiment of the present application further provides an apparatus, the apparatus having the function of implementing the behavior of the electronic device in each of the above display methods.
  • the above functions can be implemented by hardware, or by executing corresponding software by hardware.
  • the hardware or software includes one or more modules corresponding to the above functions.
  • the electronic device, computer storage medium, computer program product, or apparatus provided in the embodiments of the present application are all used to execute the corresponding methods provided above. Therefore, for the beneficial effects they can achieve, refer to the beneficial effects of the corresponding methods provided above; details are not repeated here.
  • the disclosed system, apparatus and method may be implemented in other manners.
  • the device embodiments described above are only illustrative.
  • the division of the modules or units is only a logical function division; in actual implementation, there may be other division methods.
  • multiple units or components may be combined or integrated into another system, or some features may be omitted or not implemented.
  • the shown or discussed mutual coupling, direct coupling, or communication connection may be indirect coupling or communication connection through some interfaces, apparatuses, or units, and may be in electrical, mechanical, or other forms.
  • the units described as separate components may or may not be physically separated, and components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution in this embodiment.
  • each functional unit in each embodiment of the embodiments of the present application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
  • the above-mentioned integrated units may be implemented in the form of hardware, or may be implemented in the form of software functional units.
  • the integrated unit if implemented in the form of a software functional unit and sold or used as an independent product, may be stored in a computer-readable storage medium.
  • a computer-readable storage medium includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) or a processor to execute all or part of the steps of the methods described in the various embodiments of the present application.
  • the aforementioned storage medium includes: flash memory, removable hard disk, read-only memory, random access memory, magnetic disk or optical disk and other media that can store program codes.

Abstract

Embodiments of this application provide a device communication method, system, and apparatus, applied to the field of communication technologies. In the embodiments of this application, while a user is using a mobile phone to assist input on a large screen, another assisting device in the same distributed network as the large screen and that mobile phone can preempt the mobile phone's assisted input and, further, can continue assisting the large-screen input on the basis of the content already entered on that mobile phone. During this process, the user does not need to use a remote control or other device to select the large screen's edit box again, so convenient and efficient assisted large-screen input is achieved.

Description

Device communication method, system, and apparatus

This application claims priority to five Chinese patent applications filed with the China National Intellectual Property Administration on October 31, 2020, each entitled "Device communication method, system, and apparatus": Chinese patent application No. 202011197035.0, Chinese patent application No. 202011197048.8, Chinese patent application No. 202011197030.8, Chinese patent application No. 202011198861.7, and Chinese patent application No. 202011198863.6. This application also claims priority to Chinese patent application No. 202110267000.8, filed with the China National Intellectual Property Administration on March 11, 2021 and entitled "Device communication method, system, and apparatus". All six Chinese patent applications are incorporated herein by reference in their entirety.

Technical Field

This application relates to the field of communication technologies, and in particular, to a device communication method, system, and apparatus.

Background

With the continuous development of intelligent terminal technologies, more and more electronic devices have been developed. However, different electronic devices often have different strengths and weaknesses, and a single electronic device often cannot provide the user with a good overall service.

Taking a television and a mobile phone as an example: the television can present high-quality video on its large screen, but when searching for a program on the television, the user has to use a remote control to select pinyin letters one by one to enter text, which is inefficient and inconvenient. The mobile phone can provide convenient and efficient text input based on an input method framework and the like, but its screen is usually small, which is not ideal for watching videos or images.

Summary

Embodiments of this application provide a device communication method, system, and apparatus, so that different electronic devices can work together, each playing to its own strengths, to provide the user with convenient and comfortable services.
Aspect 1-1 of the embodiments of this application provides a device communication method applied to a system including a first device, a second device, and a third device. The method includes: the first device displays a first interface including a first edit box; the first device sends an indication message to the second device and the third device; the second device displays a second interface according to the indication message, where the second interface includes a second edit box; when an editing state exists in the second edit box, the first device synchronizes the editing state into the first edit box; the third device sends a preemption message to the first device; the third device displays a third interface including a third edit box; and the editing state in the first edit box is synchronized into the third edit box.

In this way, while the second device is assisting input on the first device, the third device can preempt it, making the way of assisting the first device's input more flexible.
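The preemption flow of this aspect can be illustrated with a small sketch. All names here (`AssistedInput`, `preempt`, and so on) are hypothetical and not from the embodiments; the sketch only shows how a preempting device can take over the edit box while inheriting the current editing state, so the user can continue typing where the previous device left off.

```python
class AssistedInput:
    """Tracks which assisting device currently holds the first device's edit box."""

    def __init__(self):
        self.active_device = None
        self.edit_state = ""

    def start(self, device_id):
        # The first assisting device begins assisted input.
        self.active_device = device_id

    def update(self, device_id, text):
        # Only the currently active assisting device may change the editing state.
        if device_id != self.active_device:
            raise PermissionError("only the active assisting device may input")
        self.edit_state = text

    def preempt(self, device_id):
        # A preemption message transfers control to the preempting device and
        # returns the current editing state, so input continues seamlessly
        # without re-selecting the edit box on the first device.
        self.active_device = device_id
        return self.edit_state
```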
In a possible implementation, the second device includes an interface service, and the interface service is used for synchronizing the editing state between the first device and the second device. In this way, based on the interface service, any editing state on the second device can be synchronized to the first device.

In a possible implementation, the editing state includes one or more of the following: text content, a cursor, or a highlight mark on text content.

In a possible implementation, that the second device displays the second interface according to the indication message includes: the second device displays a first notification interface in response to the indication message, where the first notification interface includes an option for confirming assisted input; and in response to a trigger operation on the option, the second device displays the second interface.

In a possible implementation, the second interface further includes all or part of the content of the first interface. In this way, the user can see on the second device what is happening on the first device, which helps the user follow the progress of assisting the first device.

In a possible implementation, the second edit box and all or part of the content of the first interface are displayed in layers, and the second edit box is displayed on the upper layer above all or part of the content of the first interface.

In a possible implementation, after the second device displays the second interface according to the indication message, the method further includes: in response to a trigger on the second edit box, the second device displays a virtual keyboard; and the second device displays the editing state in the second edit box according to an input operation received via the virtual keyboard and/or the second edit box.

In a possible implementation, the first device includes any of the following: a television, a large screen, or a wearable device; and the second device or the third device includes any of the following: a mobile phone, a tablet, or a wearable device.

In a possible implementation, the method further includes: when input content is received in the third edit box, the first device synchronizes the input content into the first edit box.

In a possible implementation, that the third device sends the preemption message to the first device includes: the third device receives a preemption request from the second device; and the third device sends the preemption message to the first device based on the preemption request.

In a possible implementation, that the third device sends the preemption message to the first device includes: the third device displays a second notification interface according to a user operation, where the second notification interface includes an option for confirming preemption; and in response to a trigger operation on the option for confirming preemption, the third device sends the preemption message to the first device.

Aspect 1-2 of the embodiments of this application provides a device communication method applied to a system including a first device, a second device, and a third device. The method includes: the second device displays a fourth interface including an option for the first device; in response to a selection operation on the option for the first device, the second device sends an indication message to the first device; the first device displays a first interface including a first edit box; the second device displays a second interface including a second edit box; when an editing state exists in the second edit box, the first device synchronizes the editing state into the first edit box; the third device sends a preemption message to the first device; the third device displays a third interface including a third edit box; and the editing state in the first edit box is synchronized into the third edit box.

It should be noted that any possible implementation of aspect 1-1, provided it does not conflict with the method of aspect 1-2, can be used to limit the method of aspect 1-2; details are not repeated here.

Aspect 1-3 of the embodiments of this application provides a device communication method applied to a first device. The method includes: the first device displays a first interface including a first edit box; the first device sends an indication message to a second device and a third device, where the indication message instructs the second device to display a second interface including a second edit box; when an editing state exists in the second edit box, the first device synchronizes the editing state into the first edit box; and the first device receives a preemption message from the third device.

It should be noted that any possible implementation of aspect 1-1, provided it does not conflict with the method of aspect 1-3, can be used to limit the method of aspect 1-3; details are not repeated here.

Aspect 1-4 of the embodiments of this application provides a device communication method applied to a second device. The method includes: the second device displays a fourth interface including an option for the first device; in response to a selection operation on the option for the first device, the second device sends an indication message to the first device; the first device displays a first interface including a first edit box; the second device displays a second interface including a second edit box; when an editing state exists in the second edit box, the second device synchronizes the editing state into the first edit box; and the second device receives a preemption message from the third device.

It should be noted that any possible implementation of aspect 1-1, provided it does not conflict with the method of aspect 1-4, can be used to limit the method of aspect 1-4; details are not repeated here.

Aspect 1-5 of the embodiments of this application provides a device communication system including a first device, a second device, and a third device, where the first device is configured to perform the steps of the first device in any of aspects 1-1 to 1-4, the second device is configured to perform the steps of the second device in any of aspects 1-1 to 1-4, and the third device is configured to perform the steps of the third device in any of aspects 1-1 to 1-4.

Aspect 1-6 of the embodiments of this application provides a first device including at least one memory and at least one processor, where the memory is configured to store program instructions, and the processor is configured to invoke the program instructions in the memory so that the first device performs the steps performed by the first device in any of aspects 1-1 to 1-4.

Aspect 1-7 of the embodiments of this application provides a second device including at least one memory and at least one processor, where the memory is configured to store program instructions, and the processor is configured to invoke the program instructions in the memory so that the second device performs the steps performed by the second device in any of aspects 1-1 to 1-4.

Aspect 1-8 of the embodiments of this application provides a third device including at least one memory and at least one processor, where the memory is configured to store program instructions, and the processor is configured to invoke the program instructions in the memory so that the third device performs the steps performed by the third device in any of aspects 1-1 to 1-4.

Aspect 1-9 of the embodiments of this application provides a computer-readable storage medium storing a computer program that, when executed by a processor of the first device, implements the steps performed by the first device in any of aspects 1-1 to 1-4; or, when executed by a processor of the second device, implements the steps performed by the second device in any of aspects 1-1 to 1-4; or, when executed by a processor of the third device, implements the steps performed by the third device in any of aspects 1-1 to 1-4.

It should be noted that in the embodiments of this application, the specific device communication method is described by taking the interaction among the first device, the second device, and the third device as an example. When one of the first, second, or third device acts as the executing entity, the steps performed by that device can be selected from any of the above embodiments to obtain a single-sided implementation for the first, second, or third device; details are not repeated here. The second device and the third device have similar functions, and any step performed by the second device can also be applied to the third device, provided it does not conflict with the third device's steps.

It should be noted that in the above embodiments, the display steps may be implemented by each device's display screen. The first interface, second interface, third interface, fourth interface, and so on described above are distinguishing descriptions of the different display interfaces of the devices. In subsequent specific embodiments, combined with the specific content of each embodiment, the first, second, third, or fourth interface can be mapped in the text to the specific interfaces provided by those embodiments; details are not repeated here.

It should be understood that aspects 1-2 to 1-9 of the embodiments of this application correspond to the technical solution of aspect 1-1, and the beneficial effects achieved by these aspects and their corresponding feasible implementations are similar; details are not repeated here.
Aspect 2-1 of the embodiments of this application provides a device communication method applied to a system including a first device, a second device, and a third device. The method includes: the first device displays a first interface including a first edit box; in response to a selection operation on the first edit box, the first device determines that the second device and the third device have joined a distributed network; the first device displays a second interface including a first option corresponding to the second device and a second option corresponding to the third device; in response to a trigger operation on the first option, the first device sends an indication message to the second device; the second device displays a third interface including a second edit box according to the indication message; and when an editing state exists in the second edit box, the editing state is synchronized into the first edit box.

In the embodiments of this application, the first device can provide a selection interface for choosing the second device or the third device. Upon receiving a selection of the second device, it can send an indication message to the second device instructing the second device to assist the first device. In this way, no indication message needs to be sent to the third device, avoiding disturbing the third device.

In a possible implementation, the second device includes an interface service, and the interface service is used for synchronizing the editing state between the first device and the second device. In this way, based on the interface service, any editing state on the second device can be synchronized to the first device.

In a possible implementation, the editing state includes one or more of the following: text content, a cursor, or a highlight mark on text content.

In a possible implementation, that the second device displays the third interface according to the indication message includes: the second device displays a notification interface in response to the indication message, where the notification interface includes a third option for confirming assisted input; and in response to a trigger operation on the third option, the second device displays the third interface.

In a possible implementation, the third interface further includes all or part of the content of the first interface. In this way, the user can see on the second device what is happening on the first device, which helps the user follow the progress of assisting the first device.

In a possible implementation, the second edit box and all or part of the content of the first interface are displayed in layers, and the second edit box is displayed on the upper layer above all or part of the content of the first interface.

In a possible implementation, after the second device displays the third interface according to the indication message, the method further includes: in response to a trigger on the second edit box, the second device displays a virtual keyboard; and the second device displays the editing state in the second edit box according to an input operation received via the virtual keyboard and/or the second edit box.

In a possible implementation, the first device includes any of the following: a television, a large screen, or a wearable device; and the second device or the third device includes any of the following: a mobile phone, a tablet, or a wearable device.

Aspect 2-2 of the embodiments of this application provides a device communication method applied to a system including a first device, a second device, and a third device. The method includes: the first device displays a first interface including a first edit box; in response to a selection operation on the first edit box, the first device determines that the second device and the third device have joined a distributed network; the first device determines the second device as the assisting input device; the first device sends an indication message to the second device; the second device displays a third interface including a second edit box according to the indication message; and when an editing state exists in the second edit box, the editing state is synchronized into the first edit box.

It should be noted that any possible implementation of aspect 2-1, provided it does not conflict with the method of aspect 2-2, can be used to limit the method of aspect 2-2; details are not repeated here.

Aspect 2-3 of the embodiments of this application provides a device communication method applied to a system including a first device, a second device, and a third device. The method includes: the second device displays a fourth interface including an option for the first device; in response to a selection operation on the option for the first device, the second device sends an indication message to the first device; the first device displays a first interface including a first edit box; the second device displays a third interface including a second edit box; and when an editing state exists in the second edit box, the editing state is synchronized into the first edit box.

It should be noted that any possible implementation of aspect 2-1, provided it does not conflict with the method of aspect 2-3, can be used to limit the method of aspect 2-3; details are not repeated here.

Aspect 2-4 of the embodiments of this application provides a device communication method applied to a first device. The method includes: the first device displays a first interface including a first edit box; in response to a selection operation on the first edit box, the first device determines that a second device and a third device have joined a distributed network; the first device displays a second interface including a first option corresponding to the second device and a second option corresponding to the third device; in response to a trigger operation on the first option, the first device sends an indication message to the second device, where the indication message instructs the second device to display a third interface including a second edit box; and when an editing state exists in the second edit box, the editing state is synchronized into the first edit box.

It should be noted that any possible implementation of aspect 2-1, provided it does not conflict with the method of aspect 2-4, can be used to limit the method of aspect 2-4; details are not repeated here.

Aspect 2-5 of the embodiments of this application provides a device communication method applied to a second device. The method includes: the second device displays a fourth interface including an option for the first device; in response to a selection operation on the option for the first device, the second device sends an indication message to the first device, where the indication message instructs the first device to display a first interface including a first edit box; the second device displays a third interface including a second edit box; and when an editing state exists in the second edit box, the editing state is synchronized into the first edit box.

It should be noted that any possible implementation of aspect 2-1, provided it does not conflict with the method of aspect 2-5, can be used to limit the method of aspect 2-5; details are not repeated here.

Aspect 2-6 of the embodiments of this application provides a device communication system including a first device, a second device, and a third device, where the first device is configured to perform the steps of the first device in any of aspects 2-1 to 2-5, the second device is configured to perform the steps of the second device in any of aspects 2-1 to 2-5, and the third device is configured to perform the steps of the third device in any of aspects 2-1 to 2-5.

Aspect 2-7 of the embodiments of this application provides a first device including at least one memory and at least one processor, where the memory is configured to store program instructions, and the processor is configured to invoke the program instructions in the memory so that the first device performs the steps performed by the first device in any of aspects 2-1 to 2-5.

Aspect 2-8 of the embodiments of this application provides a second device including at least one memory and at least one processor, where the memory is configured to store program instructions, and the processor is configured to invoke the program instructions in the memory so that the second device performs the steps performed by the second device in any of aspects 2-1 to 2-5.

Aspect 2-9 of the embodiments of this application provides a computer-readable storage medium storing a computer program that, when executed by a processor of the first device, implements the steps performed by the first device in any of aspects 2-1 to 2-5; or, when executed by a processor of the second device, implements the steps performed by the second device in any of aspects 2-1 to 2-5; or, when executed by a processor of the third device, implements the steps performed by the third device in any of aspects 2-1 to 2-5.

It should be noted that in the embodiments of this application, the specific device communication method is described by taking the interaction among the first device, the second device, and the third device as an example. When one of the first, second, or third device acts as the executing entity, the steps performed by that device can be selected from any of the above embodiments to obtain a single-sided implementation for the first, second, or third device; details are not repeated here. The second device and the third device have similar functions, and any step performed by the second device can also be applied to the third device, provided it does not conflict with the third device's steps.

It should be noted that in the above embodiments, the display steps may be implemented by each device's display screen. The first interface, second interface, third interface, fourth interface, and so on described above are distinguishing descriptions of the different display interfaces of the devices. In subsequent specific embodiments, combined with the specific content of each embodiment, the first, second, third, or fourth interface can be mapped in the text to the specific interfaces provided by those embodiments; details are not repeated here.

It should be understood that aspects 2-2 to 2-9 of the embodiments of this application correspond to the technical solution of aspect 2-1, and the beneficial effects achieved by these aspects and their corresponding feasible implementations are similar; details are not repeated here.
Aspect 3-1 of the embodiments of this application provides a device communication method applied to a system including a first device and a second device. The method includes: the first device displays a first interface including a first edit box; the first device sends an indication message to the second device; the second device displays a second interface including a second edit box according to the indication message; when a keyword exists in the second edit box, the first device synchronizes the keyword into the first edit box; the first device determines candidate words corresponding to the keyword; and the second device obtains the candidate words and displays a third interface including the candidate words.

In the embodiments of this application, a keyword entered on the second device can be synchronized to the first device, and candidate words predicted by the first device from the keyword can be synchronized back to the second device, so that the second device can conveniently and efficiently assist the first device's input through operations of selecting among the first device's candidate words.

In a possible implementation, the second device includes an interface service, and the interface service is used for synchronizing the editing state between the first device and the second device. In this way, based on the interface service, any editing state on the second device can be synchronized to the first device.

In a possible implementation, the editing state includes one or more of the following: text content, a cursor, or a highlight mark on text content.

In a possible implementation, that the second device displays the second interface according to the indication message includes: the second device displays a notification interface in response to the indication message, where the notification interface includes an option for confirming assisted input; and in response to a trigger operation on the option, the second device displays the second interface.

In a possible implementation, the third interface further includes all or part of the content of the first interface. In this way, the user can see on the second device what is happening on the first device, which helps the user follow the progress of assisting the first device.

In a possible implementation, the second edit box and all or part of the content of the first interface are displayed in layers, and the second edit box is displayed on the upper layer above all or part of the content of the first interface.

In a possible implementation, after the second device displays the second interface according to the indication message, the method further includes: in response to a trigger on the second edit box, the second device displays a virtual keyboard; and the second device displays the editing state in the second edit box according to an input operation received via the virtual keyboard and/or the second edit box.

In a possible implementation, the first device includes any of the following: a television, a large screen, or a wearable device; and the second device includes any of the following: a mobile phone, a tablet, or a wearable device.

In a possible implementation, the third interface further includes local candidate words predicted by the second device from the keyword, and the candidate words and the local candidate words are displayed on the third interface in any of the following ways: the candidate words and the local candidate words are displayed in separate columns on the third interface; the candidate words are displayed before the local candidate words; the candidate words are displayed after the local candidate words; the candidate words and the local candidate words are displayed intermixed; or the candidate words and the local candidate words are distinguished by different identifiers.

In a possible implementation, the ordering of the candidate words is related to historical user behavior on the first device.

In a possible implementation, the method further includes: in response to the user triggering any one of the candidate words, the second device displays that candidate word in the second edit box.
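The display options for the first device's candidate words and the second device's local candidate words listed above can be illustrated with a small merging helper. This is an assumption-laden sketch, not code from the embodiments: `merge_candidates` and its mode names are invented for illustration, and a real input method framework would handle ranking and deduplication with far more nuance.

```python
from itertools import zip_longest

def merge_candidates(remote, local, mode="remote_first"):
    """Combine the first device's candidates (remote) with the second
    device's locally predicted candidates (local) for display."""
    # Drop local duplicates of remote candidates before combining.
    local_only = [w for w in local if w not in remote]
    if mode == "remote_first":        # remote candidates shown before local ones
        return remote + local_only
    if mode == "local_first":         # local candidates shown before remote ones
        return local_only + remote
    if mode == "mixed":               # alternate remote and local candidates
        mixed = []
        for r, l in zip_longest(remote, local_only):
            if r is not None:
                mixed.append(r)
            if l is not None:
                mixed.append(l)
        return mixed
    raise ValueError(f"unknown display mode: {mode}")
```

A separate-column or different-identifier presentation would keep the two lists apart in the UI instead of merging them, but the deduplication step above would still apply.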
Aspect 3-2 of the embodiments of this application provides a device communication method applied to a system including a first device and a second device. The method includes: the second device displays a fourth interface including an option for the first device; in response to a selection operation on the option for the first device, the second device sends an indication message to the first device; the first device displays a first interface including a first edit box; the second device displays a second interface including a second edit box; when a keyword exists in the second edit box, the first device synchronizes the keyword into the first edit box; the first device determines candidate words corresponding to the keyword; and the second device obtains the candidate words and displays a third interface including the candidate words.

It should be noted that any possible implementation of aspect 3-1, provided it does not conflict with the method of aspect 3-2, can be used to limit the method of aspect 3-2; details are not repeated here.

Aspect 3-3 of the embodiments of this application provides a device communication method applied to a first device. The method includes: the first device displays a first interface including a first edit box; the first device sends an indication message to a second device, where the indication message instructs the second device to display a second interface including a second edit box; when a keyword exists in the second edit box, the first device synchronizes the keyword into the first edit box; the first device determines candidate words corresponding to the keyword; and the first device synchronizes the candidate words to the second device.

It should be noted that any possible implementation of aspect 3-1, provided it does not conflict with the method of aspect 3-3, can be used to limit the method of aspect 3-3; details are not repeated here.

Aspect 3-4 of the embodiments of this application provides a device communication method. The method includes: the second device receives an indication message from the first device, where a first interface including a first edit box is displayed on the first device; the second device displays a second interface including a second edit box according to the indication message; when a keyword exists in the second edit box, the second device synchronizes the keyword into the first edit box, for the first device to determine candidate words corresponding to the keyword; and the second device obtains the candidate words and displays a third interface including the candidate words.

It should be noted that any possible implementation of aspect 3-1, provided it does not conflict with the method of aspect 3-4, can be used to limit the method of aspect 3-4; details are not repeated here.

Aspect 3-5 of the embodiments of this application provides a device communication method applied to a second device. The method includes: the second device displays a fourth interface including an option for the first device; in response to a selection operation on the option for the first device, the second device sends an indication message to the first device, where the indication message instructs the first device to display a first interface including a first edit box; the second device displays a second interface including a second edit box; when a keyword exists in the second edit box, the second device synchronizes the keyword into the first edit box, for the first device to determine candidate words corresponding to the keyword; and the second device obtains the candidate words and displays a third interface including the candidate words.

It should be noted that any possible implementation of aspect 3-1, provided it does not conflict with the method of aspect 3-5, can be used to limit the method of aspect 3-5; details are not repeated here.

Aspect 3-6 of the embodiments of this application provides a device communication system including a first device and a second device, where the first device is configured to perform the steps of the first device in any of aspects 3-1 to 3-5, and the second device is configured to perform the steps of the second device in any of aspects 3-1 to 3-5.

Aspect 3-7 of the embodiments of this application provides a first device including at least one memory and at least one processor, where the memory is configured to store program instructions, and the processor is configured to invoke the program instructions in the memory so that the first device performs the steps performed by the first device in any of aspects 3-1 to 3-5.

Aspect 3-8 of the embodiments of this application provides a second device including at least one memory and at least one processor, where the memory is configured to store program instructions, and the processor is configured to invoke the program instructions in the memory so that the second device performs the steps performed by the second device in any of aspects 3-1 to 3-5.

Aspect 3-9 of the embodiments of this application provides a computer-readable storage medium storing a computer program that, when executed by a processor of the first device, implements the steps performed by the first device in any of aspects 3-1 to 3-5; or, when executed by a processor of the second device, implements the steps performed by the second device in any of aspects 3-1 to 3-5; or, when executed by a processor of the third device, implements the steps performed by the third device in any of aspects 3-1 to 3-5.

It should be noted that in the embodiments of this application, the specific device communication method is described by taking the interaction between the first device and the second device as an example. When one of the first device or the second device acts as the executing entity, the steps performed by that device can be selected from any of the above embodiments to obtain a single-sided implementation for the first device or the second device; details are not repeated here.

It should be noted that in the above embodiments, the display steps may be implemented by each device's display screen. The first interface, second interface, third interface, fourth interface, and so on described above are distinguishing descriptions of the different display interfaces of the devices. In subsequent specific embodiments, combined with the specific content of each embodiment, the first, second, third, or fourth interface can be mapped in the text to the specific interfaces provided by those embodiments; details are not repeated here.

It should be understood that aspects 3-2 to 3-9 of the embodiments of this application correspond to the technical solution of aspect 3-1, and the beneficial effects achieved by these aspects and their corresponding feasible implementations are similar; details are not repeated here.
Aspect 4-1 of the embodiments of this application provides a device communication method applied to a system including a first device, a second device, and a third device. The method includes: the first device, the second device, and the third device join a distributed network; the second device obtains a target candidate word, where the target candidate word belongs neither to the first device's candidate word library nor to the third device's candidate word library; the first device receives a user-entered keyword related to the target candidate word, and the first device displays the target candidate word; and/or, the third device receives a user-entered keyword related to the target candidate word, and the third device displays the target candidate word.

In the embodiments of this application, the first device, the second device, and the third device can join a distributed network and synchronize their candidate word libraries with one another, so that efficient and convenient input can be achieved based on the synchronized candidate word libraries.

In a possible implementation, the method further includes: the first device, the second device, and the third device synchronize their respective candidate word libraries with one another.

In a possible implementation, the method further includes: when the first, second, or third device leaves the distributed network, a prompt interface asking whether to delete the synchronized candidate word libraries is displayed on that device, where the prompt interface includes an option indicating deletion and an option indicating no deletion; in response to a trigger operation on the option indicating deletion, the first, second, or third device deletes the candidate word libraries it synchronized from the other devices; or, in response to a trigger operation on the option indicating no deletion, the first, second, or third device retains the candidate word libraries synchronized from the distributed network.

In a possible implementation, the method further includes: the first, second, or third device determines its own access type; and when the first, second, or third device leaves the distributed network, it determines, according to its own access type, whether to delete the candidate word libraries synchronized from the distributed network.
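The handling of synchronized candidate word libraries on network exit described above can be sketched as follows. This is a hypothetical illustration only: the class name, the access-type labels (`"temporary"` and `"trusted"`), and the `keep` parameter are invented stand-ins for the prompt-based or access-type-based decision in these implementations.

```python
class LexiconMember:
    """A device's view of the candidate word libraries in a distributed network."""

    def __init__(self, own_words, access_type="temporary"):
        self.own = set(own_words)          # the device's own candidate word library
        self.synced = set()                # words synchronized from other members
        self.access_type = access_type     # hypothetical label: "temporary" or "trusted"

    def sync_from(self, other):
        # Mutual synchronization merges the peer's library into this device's set.
        self.synced |= other.own

    def lexicon(self):
        # Candidates available for prediction: own words plus synchronized words.
        return self.own | self.synced

    def leave_network(self, keep=None):
        # keep=None: decide from the access type, standing in for the
        # delete/retain prompt the device may show on exit;
        # keep=True/False: the user's explicit choice.
        if keep is None:
            keep = self.access_type == "trusted"
        if not keep:
            self.synced.clear()
```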
In a possible implementation, the method further includes: the first device displays a first interface including a first edit box; the first device sends an indication message to the second device; the second device displays a second interface including a second edit box according to the indication message; and when an editing state exists in the second edit box, the editing state is synchronized into the first edit box.

In a possible implementation, the second device includes an interface service, and the interface service is used for synchronizing the editing state between the first device and the second device. In this way, based on the interface service, any editing state on the second device can be synchronized to the first device.

In a possible implementation, the editing state includes one or more of the following: text content, a cursor, or a highlight mark on text content.

In a possible implementation, that the second device displays the second interface according to the indication message includes: the second device displays a notification interface in response to the indication message, where the notification interface includes a third option for confirming assisted input; and in response to a trigger operation on the third option, the second device displays the second interface.

In a possible implementation, the second interface further includes all or part of the content of the first interface.

In a possible implementation, the second edit box and all or part of the content of the first interface are displayed in layers, and the second edit box is displayed on the upper layer above all or part of the content of the first interface.

In a possible implementation, after the second device displays the second interface according to the indication message, the method further includes: in response to a trigger on the second edit box, the second device displays a virtual keyboard; and the second device displays the editing state in the second edit box according to an input operation received via the virtual keyboard and/or the second edit box.

In a possible implementation, the first device includes any of the following: a television, a large screen, or a wearable device; and the second device or the third device includes any of the following: a mobile phone, a tablet, or a wearable device.

In a possible implementation, the method further includes: the second device displays a fourth interface including an option for the first device; in response to a selection operation on the option for the first device, the second device sends an indication message to the first device; the first device displays a first interface including a first edit box; the second device displays a second interface including a second edit box; and when an editing state exists in the second edit box, the editing state is synchronized into the first edit box.

Aspect 4-2 of the embodiments of this application provides a device communication method applied to a system including a first device, a second device, and a third device. The method includes: the first device, the second device, and the third device join a distributed network; the first device, the second device, and the third device synchronize their respective candidate word libraries with one another to obtain a candidate word library set; and when the first, second, or third device performs text editing, it displays candidate words according to the candidate word library set.

It should be noted that any possible implementation of aspect 4-1, provided it does not conflict with the method of aspect 4-2, can be used to limit the method of aspect 4-2; details are not repeated here.

Aspect 4-3 of the embodiments of this application provides a device communication method applied to a first device, including: the first device joins a distributed network, where other devices have also joined the distributed network; the first device synchronizes the other devices' candidate word libraries based on the distributed network to obtain a candidate word library set; and when the first device performs text editing, the first device displays candidate words according to the candidate word library set.

It should be noted that any possible implementation of aspect 4-1, provided it does not conflict with the method of aspect 4-3, can be used to limit the method of aspect 4-3; details are not repeated here.

Aspect 4-4 of the embodiments of this application provides a device communication system including a first device, a second device, and a third device, where the first device is configured to perform the steps of the first device in any of aspects 4-1 to 4-3, the second device is configured to perform the steps of the second device in any of aspects 4-1 to 4-3, and the third device is configured to perform the steps of the third device in any of aspects 4-1 to 4-3.

Aspect 4-5 of the embodiments of this application provides a first device including at least one memory and at least one processor, where the memory is configured to store program instructions, and the processor is configured to invoke the program instructions in the memory so that the first device performs the steps performed by the first device in any of aspects 4-1 to 4-3.

Aspect 4-6 of the embodiments of this application provides a second device including at least one memory and at least one processor, where the memory is configured to store program instructions, and the processor is configured to invoke the program instructions in the memory so that the second device performs the steps performed by the second device in any of aspects 4-1 to 4-3.

Aspect 4-7 of the embodiments of this application provides a computer-readable storage medium storing a computer program that, when executed by a processor of the first device, implements the steps performed by the first device in any of aspects 4-1 to 4-3; or, when executed by a processor of the second device, implements the steps performed by the second device in any of aspects 4-1 to 4-3; or, when executed by a processor of the third device, implements the steps performed by the third device in any of aspects 4-1 to 4-3.

It should be noted that in the embodiments of this application, the specific device communication method is described by taking the interaction among the first device, the second device, and the third device as an example. When one of the first, second, or third device acts as the executing entity, the steps performed by that device can be selected from any of the above embodiments to obtain a single-sided implementation for the first, second, or third device; details are not repeated here. The second device and the third device have similar functions, and any step performed by the second device can also be applied to the third device, provided it does not conflict with the third device's steps.

It should be noted that in the above embodiments, the display steps may be implemented by each device's display screen. The first interface, second interface, third interface, fourth interface, and so on described above are distinguishing descriptions of the different display interfaces of the devices. In subsequent specific embodiments, combined with the specific content of each embodiment, the first, second, third, or fourth interface can be mapped in the text to the specific interfaces provided by those embodiments; details are not repeated here.

It should be understood that aspects 4-2 to 4-7 of the embodiments of this application correspond to the technical solution of aspect 4-1, and the beneficial effects achieved by these aspects and their corresponding feasible implementations are similar; details are not repeated here.
Aspect 5-1 of the embodiments of this application provides a device communication method applied to a system including a first device, a second device, and a third device. The method includes: the first device displays a first interface including a first edit box; the first device sends an indication message to the second device and the third device; the second device displays a second interface including a second edit box according to the indication message; the third device displays a third interface including a third edit box according to the indication message; when an editing state exists in the second edit box, the first device synchronizes the editing state into the first edit box and the third device synchronizes the editing state into the third edit box; or, when an editing state exists in the third edit box, the first device synchronizes the editing state into the first edit box and the second device synchronizes the editing state into the second edit box; or, when an editing state exists in the first edit box, the second device synchronizes the editing state into the second edit box and the third device synchronizes the editing state into the third edit box.

In the embodiments of this application, the second device and the third device can jointly assist the first device's input, achieving convenient and efficient input.

In a possible implementation, the second device includes an interface service, and the interface service is used for synchronizing the editing state between the first device and the second device. In this way, based on the interface service, any editing state on the second device can be synchronized to the first device.

In a possible implementation, the editing state includes one or more of the following: text content, a cursor, or a highlight mark on text content.

In a possible implementation, that the second device displays the second interface according to the indication message includes: the second device displays a notification interface in response to the indication message, where the notification interface includes an option for confirming assisted input; and in response to a trigger operation on the option, the second device displays the second interface.

In a possible implementation, the second interface further includes all or part of the content of the first interface. In this way, the user can see on the second device what is happening on the first device, which helps the user follow the progress of assisting the first device.

In a possible implementation, the second edit box and all or part of the content of the first interface are displayed in layers, and the second edit box is displayed on the upper layer above all or part of the content of the first interface.

In a possible implementation, after the second device displays the second interface according to the indication message, the method further includes: in response to a trigger on the second edit box, the second device displays a virtual keyboard; and the second device displays the editing state in the second edit box according to an input operation received via the virtual keyboard and/or the second edit box.

In a possible implementation, the first device includes any of the following: a television, a large screen, or a wearable device; and the second device or the third device includes any of the following: a mobile phone, a tablet, or a wearable device.

In a possible implementation, the editing state in the first edit box includes an identifier of the first device, and/or the editing state in the second edit box includes an identifier of the second device, and/or the editing state in the third edit box includes an identifier of the third device.

In a possible implementation, when input content is received in the second edit box and the third edit box at the same time, the first device arbitrates how the input content of the second edit box and the input content of the third edit box are displayed.
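One way the first device could arbitrate simultaneous input is to serialize the concurrent fragments by arrival time and tag each with its source device's identifier, consistent with the per-device identifiers mentioned above. This is only an illustrative sketch; these implementations do not prescribe a specific arbitration rule, and the field names below are invented.

```python
def arbitrate(updates):
    """First-device arbitration sketch: order concurrent input fragments by
    arrival time and keep track of which device produced each fragment."""
    ordered = sorted(updates, key=lambda u: u["timestamp"])
    merged_text = "".join(u["text"] for u in ordered)
    sources = [u["device"] for u in ordered]   # identifiers for display alongside the text
    return merged_text, sources
```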
Aspect 5-2 of the embodiments of this application provides a device communication method applied to a system including a first device, a second device, and a third device. The method includes: the first device displays a first interface including a first edit box; the first device sends an indication message to the second device; the second device displays a second interface including a second edit box according to the indication message; the second device sends an assisted-input request to the third device; the third device displays a third interface including a third edit box according to the assisted-input request; when an editing state exists in the second edit box, the first device synchronizes the editing state into the first edit box and the third device synchronizes the editing state into the third edit box; or, when an editing state exists in the third edit box, the first device synchronizes the editing state into the first edit box and the second device synchronizes the editing state into the second edit box; or, when an editing state exists in the first edit box, the second device synchronizes the editing state into the second edit box and the third device synchronizes the editing state into the third edit box.

It should be noted that any possible implementation of aspect 5-1, provided it does not conflict with the method of aspect 5-2, can be used to limit the method of aspect 5-2; details are not repeated here.

Aspect 5-3 of the embodiments of this application provides a device communication method applied to a system including a first device, a second device, and a third device. The method includes: the second device displays a fourth interface including an option for the first device; in response to a selection operation on the option for the first device, the second device sends an indication message to the first device; the first device displays a first interface including a first edit box; the second device displays a second interface including a second edit box; the second device sends an assisted-input request to the third device; the third device displays a third interface including a third edit box according to the assisted-input request; when an editing state exists in the second edit box, the first device synchronizes the editing state into the first edit box and the third device synchronizes the editing state into the third edit box; or, when an editing state exists in the third edit box, the first device synchronizes the editing state into the first edit box and the second device synchronizes the editing state into the second edit box; or, when an editing state exists in the first edit box, the second device synchronizes the editing state into the second edit box and the third device synchronizes the editing state into the third edit box.

It should be noted that any possible implementation of aspect 5-1, provided it does not conflict with the method of aspect 5-3, can be used to limit the method of aspect 5-3; details are not repeated here.

Aspect 5-4 of the embodiments of this application provides a device communication method applied to a first device. The method includes: the first device displays a first interface including a first edit box; the first device sends an indication message to a second device and a third device, for the second device to display a second interface including a second edit box according to the indication message and for the third device to display a third interface including a third edit box according to the indication message; when an editing state exists in the second edit box, the first device synchronizes the editing state into the first edit box; or, when an editing state exists in the third edit box, the first device synchronizes the editing state into the first edit box.

It should be noted that any possible implementation of aspect 5-1, provided it does not conflict with the method of aspect 5-4, can be used to limit the method of aspect 5-4; details are not repeated here.

Aspect 5-5 of the embodiments of this application provides a device communication method applied to a second device. The method includes: the second device displays a fourth interface including an option for the first device; in response to a selection operation on the option for the first device, the second device sends an indication message to the first device, for the first device to display a first interface including a first edit box; the second device displays a second interface including a second edit box; the second device sends an assisted-input request to the third device, for the third device to display a third interface including a third edit box according to the assisted-input request; when an editing state exists in the third edit box, the second device synchronizes the editing state into the second edit box; or, when an editing state exists in the first edit box, the second device synchronizes the editing state into the second edit box.

It should be noted that any possible implementation of aspect 5-1, provided it does not conflict with the method of aspect 5-5, can be used to limit the method of aspect 5-5; details are not repeated here.

Aspect 5-6 of the embodiments of this application provides a device communication system including a first device, a second device, and a third device, where the first device is configured to perform the steps of the first device in any of aspects 5-1 to 5-5, the second device is configured to perform the steps of the second device in any of aspects 5-1 to 5-5, and the third device is configured to perform the steps of the third device in any of aspects 5-1 to 5-5.

Aspect 5-7 of the embodiments of this application provides a first device including at least one memory and at least one processor, where the memory is configured to store program instructions, and the processor is configured to invoke the program instructions in the memory so that the first device performs the steps performed by the first device in any of aspects 5-1 to 5-5.

Aspect 5-8 of the embodiments of this application provides a second device including at least one memory and at least one processor, where the memory is configured to store program instructions, and the processor is configured to invoke the program instructions in the memory so that the second device performs the steps performed by the second device in any of aspects 5-1 to 5-5.

Aspect 5-9 of the embodiments of this application provides a computer-readable storage medium storing a computer program that, when executed by a processor of the first device, implements the steps performed by the first device in any of aspects 5-1 to 5-5; or, when executed by a processor of the second device, implements the steps performed by the second device in any of aspects 5-1 to 5-5; or, when executed by a processor of the third device, implements the steps performed by the third device in any of aspects 5-1 to 5-5.

It should be noted that in the embodiments of this application, the specific device communication method is described by taking the interaction among the first device, the second device, and the third device as an example. When one of the first, second, or third device acts as the executing entity, the steps performed by that device can be selected from any of the above embodiments to obtain a single-sided implementation for the first, second, or third device; details are not repeated here. The second device and the third device have similar functions, and any step performed by the second device can also be applied to the third device, provided it does not conflict with the third device's steps.

It should be noted that in the above embodiments, the display steps may be implemented by each device's display screen. The first interface, second interface, third interface, fourth interface, and so on described above are distinguishing descriptions of the different display interfaces of the devices. In subsequent specific embodiments, combined with the specific content of each embodiment, the first, second, third, or fourth interface can be mapped in the text to the specific interfaces provided by those embodiments; details are not repeated here.

It should be understood that aspects 5-2 to 5-9 of the embodiments of this application correspond to the technical solution of aspect 5-1, and the beneficial effects achieved by these aspects and their corresponding feasible implementations are similar; details are not repeated here.
Brief Description of Drawings

FIG. 1 is a schematic architectural diagram of a communication system according to an embodiment of this application;

FIG. 2 is a schematic architectural diagram of another communication system according to an embodiment of this application;

FIG. 3 is a schematic architectural diagram of yet another communication system according to an embodiment of this application;

FIG. 4 is a schematic functional block diagram of a first device according to an embodiment of this application;

FIG. 5 is a schematic functional block diagram of a second device according to an embodiment of this application;

FIG. 6 is a schematic diagram of the software architecture of a first device and a second device according to an embodiment of this application;

FIG. 7 is a schematic system-architecture diagram of a device communication method according to an embodiment of this application;

FIG. 8 is a schematic diagram of a specific system architecture of a device communication method according to an embodiment of this application;

FIG. 9 is a schematic diagram of a large-screen user interface according to an embodiment of this application;

FIG. 10 is a schematic diagram of a mobile-phone user interface according to an embodiment of this application;

FIG. 11 is a schematic diagram of another mobile-phone user interface according to an embodiment of this application;

FIG. 12 is a schematic diagram of a user interface according to an embodiment of this application;

FIG. 13 is a schematic diagram of another mobile-phone user interface according to an embodiment of this application;

FIG. 14 is a schematic diagram of another large-screen user interface according to an embodiment of this application;

FIG. 15 is a schematic diagram of another mobile-phone user interface according to an embodiment of this application;

FIG. 16 is a schematic diagram of another large-screen user interface according to an embodiment of this application;

FIG. 17 is a schematic diagram of another mobile-phone user interface according to an embodiment of this application;

FIG. 18 is a schematic diagram of another mobile-phone user interface according to an embodiment of this application;

FIG. 19 is a schematic diagram of another mobile-phone user interface according to an embodiment of this application;

FIG. 20 is a schematic diagram of another mobile-phone user interface according to an embodiment of this application;

FIG. 21 is a schematic diagram of another user interface according to an embodiment of this application;

FIG. 22 is a schematic diagram of yet another user interface according to an embodiment of this application;

FIG. 23 is a schematic diagram of a specific flow of communication between a mobile phone and a large screen according to an embodiment of this application;

FIG. 24 is a schematic structural diagram of a device according to an embodiment of this application;

FIG. 25 is a schematic structural diagram of yet another device according to an embodiment of this application;

FIG. 26 is a schematic system-architecture diagram of another device communication method according to an embodiment of this application;

FIG. 27 is a schematic diagram of a large-screen user interface according to an embodiment of this application;

FIG. 28 is a schematic diagram of a mobile-phone user interface according to an embodiment of this application;

FIG. 29 is a schematic diagram of a large-screen user interface according to an embodiment of this application;

FIG. 30 is a schematic diagram of another mobile-phone user interface according to an embodiment of this application;

FIG. 31 is a schematic diagram of another mobile-phone user interface according to an embodiment of this application;

FIG. 32 is a schematic diagram of an interface in which a mobile phone assists large-screen input according to an embodiment of this application;

FIG. 33 is a schematic diagram of another interface in which a mobile phone assists large-screen input according to an embodiment of this application;

FIG. 34 is a schematic diagram of an interface in which a mobile phone assists large-screen input according to an embodiment of this application;

FIG. 35 is a schematic diagram of a mobile-phone interface according to an embodiment of this application;

FIG. 36 is a schematic structural diagram of a device according to an embodiment of this application;

FIG. 37 is a schematic structural diagram of yet another device according to an embodiment of this application;

FIG. 38 is a schematic diagram of a user interface of communication between a mobile phone and a large screen according to an embodiment of this application;

FIG. 39 is a schematic diagram of a specific system architecture of a device communication method according to an embodiment of this application;

FIG. 40 is a schematic flowchart of communication between a mobile phone and a large screen according to an embodiment of this application;

FIG. 41 is a schematic diagram of a large-screen user interface according to an embodiment of this application;

FIG. 42 is a schematic diagram of a mobile-phone user interface according to an embodiment of this application;

FIG. 43 is a schematic diagram of another mobile-phone user interface according to an embodiment of this application;

FIG. 44 is a schematic diagram of another large-screen user interface according to an embodiment of this application;

FIG. 45 is a schematic diagram of a mobile-phone user interface according to an embodiment of this application;

FIG. 46 is a schematic diagram of another mobile-phone user interface according to an embodiment of this application;

FIG. 47 is a schematic diagram of another mobile-phone user interface according to an embodiment of this application;

FIG. 48 is a schematic diagram of another mobile-phone user interface according to an embodiment of this application;

FIG. 49 is a schematic diagram of yet another mobile-phone user interface according to an embodiment of this application;

FIG. 50 is a schematic diagram of yet another mobile-phone user interface according to an embodiment of this application;

FIG. 51 is a schematic diagram of a specific flow of communication between a mobile phone and a large screen according to an embodiment of this application;

FIG. 52 is a schematic structural diagram of a device according to an embodiment of this application;

FIG. 53 is a schematic structural diagram of yet another device according to an embodiment of this application;

FIG. 54 is a schematic diagram of a specific application scenario of an embodiment of this application;

FIG. 55 is a schematic diagram of a specific system architecture of a device communication method according to an embodiment of this application;

FIG. 56 is a schematic diagram of a mobile-phone user interface according to an embodiment of this application;

FIG. 57 is a schematic diagram of another mobile-phone user interface according to an embodiment of this application;

FIG. 58 is a schematic diagram of a large-screen user interface according to an embodiment of this application;

FIG. 59 is a schematic diagram of another mobile-phone user interface according to an embodiment of this application;

FIG. 60 is a schematic diagram of a user interface of communication between a mobile phone and a large screen according to an embodiment of this application;

FIG. 61 is a schematic diagram of another large-screen user interface according to an embodiment of this application;

FIG. 62 is a schematic diagram of a mobile-phone user interface according to an embodiment of this application;

FIG. 63 is a schematic diagram of a mobile-phone user interface according to an embodiment of this application;

FIG. 64 is a schematic diagram of another mobile-phone user interface according to an embodiment of this application;

FIG. 65 is a schematic diagram of yet another mobile-phone user interface according to an embodiment of this application;

FIG. 66 is a schematic structural diagram of a device according to an embodiment of this application;

FIG. 67 is a schematic structural diagram of yet another device according to an embodiment of this application;

FIG. 68 is a schematic system-architecture diagram of another device communication method according to an embodiment of this application;

FIG. 69 is a schematic diagram of a large-screen interface according to an embodiment of this application;

FIG. 70 is a schematic diagram of a mobile-phone interface according to an embodiment of this application;

FIG. 71 is a schematic diagram of another mobile-phone interface according to an embodiment of this application;

FIG. 72 is a schematic diagram of a mobile-phone interface according to an embodiment of this application;

FIG. 73 is a schematic diagram of another mobile-phone interface according to an embodiment of this application;

FIG. 74 is a schematic diagram of another mobile-phone interface according to an embodiment of this application;

FIG. 75 is a schematic system-architecture diagram of another device communication method according to an embodiment of this application;

FIG. 76 is a schematic diagram of an interface in which a mobile phone assists large-screen input according to an embodiment of this application;

FIG. 77 is a schematic diagram of another interface in which a mobile phone assists large-screen input according to an embodiment of this application;

FIG. 78 is a schematic diagram of an interface in which a mobile phone assists large-screen input according to an embodiment of this application;

FIG. 79 is a schematic diagram of an interface in which a mobile phone assists large-screen input according to an embodiment of this application;

FIG. 80 is a schematic diagram of an interface in which a mobile phone assists large-screen input according to an embodiment of this application;

FIG. 81 is a schematic system-architecture diagram of another device communication method according to an embodiment of this application;

FIG. 82 is a schematic diagram of an interface in which a mobile phone assists large-screen input according to an embodiment of this application;

FIG. 83 is a schematic diagram of how a loop chain is produced according to an embodiment of this application;

FIG. 84 is a schematic system-architecture diagram of another device communication method according to an embodiment of this application;

FIG. 85 is a schematic diagram of a mobile-phone interface according to an embodiment of this application;

FIG. 86 is a schematic structural diagram of a device according to an embodiment of this application;

FIG. 87 is a schematic structural diagram of yet another device according to an embodiment of this application.
Description of Embodiments

To clearly describe the technical solutions of the embodiments of this application, words such as "first" and "second" are used in the embodiments of this application to distinguish between identical or similar items whose functions and effects are basically the same. For example, the first device and the second device are so named merely to distinguish different devices, without limiting their order. A person skilled in the art can understand that words such as "first" and "second" do not limit quantity or execution order, and that "first", "second", and the like do not necessarily indicate a difference.

It should be noted that in this application, words such as "exemplary" or "for example" are used to indicate an example, illustration, or explanation. Any embodiment or design described as "exemplary" or "for example" in this application should not be construed as being more preferred or advantageous than other embodiments or designs. Rather, the use of words such as "exemplary" or "for example" is intended to present the relevant concepts in a concrete manner.

In this application, "at least one" means one or more, and "a plurality of" means two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may mean: A alone, both A and B, or B alone, where A and B may be singular or plural. The character "/" generally indicates an "or" relationship between the associated objects. "At least one of the following" or a similar expression refers to any combination of these items, including any combination of single items or plural items. For example, at least one of a, b, or c may represent: a; b; c; a and b; a and c; b and c; or a, b, and c, where each of a, b, and c may be singular or plural.
FIG. 1 is a schematic architectural diagram of a communication system according to an embodiment of this application. As shown in FIG. 1, the communication system may include a first device 101 and a second device 102.

The first device 101 may be a device on which editing text and other content is relatively inconvenient for the user, or may be understood as an assisted device with weak input capability, and may include, for example, a television, a smart screen (also called a large screen), or a smart watch. In a possible implementation, the first device 101 may further include a camera (not shown) and the like; this embodiment of this application does not specifically limit the first device 101. Typically, when entering text or other content on the first device 101, the user needs to use a remote control 103 to select pinyin letters one by one and press a confirmation key, which makes input cumbersome and inefficient.

The second device 102 may be a device on which editing text and other content is relatively convenient for the user, or may be understood as an assisting device with strong input capability, and may include, for example, a mobile phone, a tablet, or a computer. This embodiment of this application does not limit the specific types of the first device 101 and the second device 102. For ease of description, the embodiments of this application are illustrated with the first device 101 being a large screen and the second device 102 being a mobile phone.

In a possible implementation, the mobile phone and the large screen may be connected by wire or wirelessly. For example, the wireless connection may include a wireless fidelity (Wi-Fi) connection, a Bluetooth connection, or a ZigBee connection, which is not limited in this embodiment of this application. Based on the methods of the subsequent embodiments of this application, the user can then use the mobile phone to assist input on the large screen.

For example, the user moves the focus on the large screen with the remote control and selects the large screen's edit box; a dialog box for assisted input may then pop up on the mobile phone communicating with the large screen; the user enters text or other content in the edit box of the mobile phone's dialog box using the mobile phone's input-method soft keyboard; the content can be displayed on the large screen; and after the user confirms on the mobile phone that input is complete, functions such as searching for programs on the large screen can be performed according to the content the user entered on the mobile phone.

In some embodiments, there may be a plurality of mobile phones. For example, FIG. 2 is a schematic architectural diagram of another communication system according to an embodiment of this application. As shown in FIG. 2, the communication system may include a large screen 201, a first mobile phone 202, and a second mobile phone 203.

In a possible embodiment, the large screen 201, the first mobile phone 202, and the second mobile phone 203 are in one distributed network, and the distributed network can support communication connections among the large screen 201, the first mobile phone 202, and the second mobile phone 203. Within the same distributed network, one client can connect to multiple servers simultaneously for distributed input, and one server can also be connected by multiple clients simultaneously. For example, in the same distributed network, the large screen 201 with weak input capability can act as a distributed-input client, and the first mobile phone 202 and the second mobile phone 203 with strong input capability can act as distributed-input servers.

Based on the distributed-network architecture, the large screen 201, the first mobile phone 202, and the second mobile phone 203 can implement one or more functions with one another, such as device discovery, device connection, or data transmission.

For example, after the large screen 201, the first mobile phone 202, and the second mobile phone 203 join the distributed network, they can discover and connect to one another. The first mobile phone 202 and the second mobile phone 203 can then simultaneously assist the large screen 201 in entering text and other content; or the first mobile phone 202 and the second mobile phone 203 can each separately assist the large screen 201 in entering text and other content; or while one of the first mobile phone 202 and the second mobile phone 203 is assisting the large screen 201's input, the other can preempt the input; or the large screen 201 can choose the first mobile phone 202 or the second mobile phone 203 to assist its input; and so on. The specific processes of assisted input, preemptive input, and the like are described in detail in subsequent embodiments and are not repeated here.

In some embodiments, there may be a plurality of mobile phones and a plurality of large screens. For example, FIG. 3 is a schematic architectural diagram of yet another communication system according to an embodiment of this application. As shown in FIG. 3, the communication system may include a first large screen 301, a second large screen 302, a first mobile phone 303, and a second mobile phone 304.

In a possible embodiment, the first large screen 301, the second large screen 302, the first mobile phone 303, and the second mobile phone 304 are in one distributed network. Based on the distributed-network architecture, they can implement functions with one another such as device discovery, device connection, or data transmission.

For example, after the first large screen 301, the second large screen 302, the first mobile phone 303, and the second mobile phone 304 join the distributed network, they can discover and connect to one another. The first mobile phone 303 and the second mobile phone 304 can then simultaneously assist the first large screen 301 and/or the second large screen 302 in entering text and other content; or the first mobile phone 303 and the second mobile phone 304 can each separately assist the first large screen 301 and/or the second large screen 302 in entering text and other content; or while one of the first mobile phone 303 and the second mobile phone 304 is assisting the first large screen 301 and/or the second large screen 302, the other can preempt the input; or the first large screen 301 and/or the second large screen 302 can choose the first mobile phone 303 or the second mobile phone 304 to assist its input; and so on. The specific processes of assisted input, preemptive input, and the like are described in detail in subsequent embodiments and are not repeated here.
FIG. 4 is a schematic functional block diagram of a first device according to an embodiment of this application. In a possible implementation, as shown in FIG. 4, the first device 400 may include a processor 401, a memory 402, a communication interface 403, a speaker 404, a display 405, and the like; these components may communicate through one or more communication buses or signal lines (not shown).

The components of the first device 400 are described in detail below with reference to FIG. 4:

The processor 401 is the control center of the first device 400. It connects the parts of the first device 400 through various interfaces and lines, and performs the various functions of the first device 400 and processes data by running or executing application programs stored in the memory 402 and invoking data stored in the memory 402.

In some embodiments, the processor 401 may include one or more processing units. For example, the processor 401 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU). Different processing units may be independent components or may be integrated into one or more processors. The controller may be the nerve center and command center of the first device 400; it can generate operation control signals according to instruction opcodes and timing signals, and control instruction fetching and execution. In other embodiments, a memory may further be provided in the processor 401 for storing instructions and data. In some embodiments, the memory in the processor 401 is a cache. This memory can store instructions or data that the processor 401 has just used or uses cyclically. If the processor 401 needs to use the instructions or data again, it can call them directly from this memory, avoiding repeated access and reducing the processor 401's waiting time, thereby improving system efficiency. The processor 401 can run the software code/modules of the device communication method provided by some embodiments of this application to implement the function of controlling the first device 400.

The memory 402 is configured to store application programs and data. The processor 401 performs the various functions and data processing of the first device 400 by running the application programs and data stored in the memory 402. The memory 402 mainly includes a program storage area and a data storage area. The program storage area can store an operating system (OS) and application programs required by at least one function (such as a device discovery function, a video search function, or a video playback function); the data storage area can store data created during use of the first device (such as audio and video data). In addition, the memory 402 may include high-speed random access memory (RAM) and may also include nonvolatile memory, such as a magnetic disk storage device, a flash memory device, or another solid-state storage device. In some embodiments, the memory 402 can store various operating systems. The memory 402 may be independent and connected to the processor 401 through the above communication bus, or the memory 402 may be integrated with the processor 401.

The communication interface 403 may be a wired interface (for example, an Ethernet interface) or a wireless interface (for example, a cellular-network interface or a wireless local area network interface). For example, the communication interface 403 may be specifically used to communicate with one or more second devices.

The speaker 404, also called a "loudspeaker", is configured to convert an audio electrical signal into a sound signal. The first device 400 can play sound signals through the speaker 404.

The display 405 (also called a display screen or screen) may be used to display the display interface of an application, such as a video-search interface or the currently playing video picture. The display 405 may include a display panel. The display panel may use a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like. In some embodiments, a touch sensor may be provided in the display 405 to form a touchscreen, which is not limited in this application. The touch sensor is configured to detect a touch operation on or near it. The touch sensor can pass the detected touch operation to the processor 401 to determine the type of the touch event. The processor 401 can provide visual output related to the touch operation through the display 405.

In addition, the first device 400 may further include a power supply apparatus 406 (such as a battery and a power management chip) that supplies power to the components. The battery may be logically connected to the processor 401 through the power management chip, so that functions such as charge management, discharge management, and power-consumption management are implemented through the power supply apparatus 406.

In addition, the first device 400 may further include a sensor module (not shown), and the sensor module may include a barometric pressure sensor, a temperature sensor, and the like. In practical applications, the first device 400 may further include more or fewer sensors, or replace the sensors listed above with other sensors having the same or similar functions, which is not limited in this application.

It can be understood that the device structure shown in FIG. 4 does not constitute a specific limitation on the first device. In other embodiments, the first device may include more or fewer components than shown, or combine some components, or split some components, or have a different component arrangement. The components shown may be implemented in hardware, software, or a combination of software and hardware.
图5示出了本申请实施例提供的一种第二设备500的功能框图。如图5所示,第二设备500可以包括处理器110,外部存储器接口120,内部存储器121,通用串行总线(universal serial bus,USB)接口130,充电管理模块140,电源管理模块141,电池142,天线1,天线2,移动通信模块150,无线通信模块160,音频模块170,扬声器170A,受话器170B,麦克风170C,耳机接口170D,传感器模块180,按键190,马达191,指示器192,摄像头193,显示屏194,以及用户标识模块(subscriber identification module,SIM)卡接口195等。其中传感器模块180可以包括压力传感器180A,陀螺仪传感器180B,气压传感器180C,磁传感器180D,加速度传感器180E,距离传感器180F,接近光传感器180G,指纹传感器180H,温度传感器180J,触摸传感器180K,环境光传感器180L,骨传导传感器180M等。
可以理解的是,本申请实施例示意的结构并不构成对第二设备500的具体限定。在本申请另一些实施例中,第二设备500可以包括比图示更多或更少的部件,或者组合某些部件,或者拆分某些部件,或者不同的部件布置。图示的部件可以以硬件,软件或软件和硬件的组合实现。
处理器110可以包括一个或多个处理单元,例如:处理器110可以包括应用处理器(application processor,AP),调制解调处理器,图形处理器(graphics processing unit,GPU),图像信号处理器(image signal processor,ISP),控制器,视频编解码器,数字信号处理器(digital signal processor,DSP),基带处理器,和/或神经网络处理器(neural-network processing unit,NPU)等。其中,不同的处理单元可以是独立的器件,也可以集成在一个或多个处理器中。
控制器可以根据指令操作码和时序信号,产生操作控制信号,完成取指令和执行指令的控制。
处理器110中还可以设置存储器,用于存储指令和数据。在一些实施例中,处理器110中的存储器为高速缓冲存储器。该存储器可以保存处理器110刚用过或循环使用的指令或数据。如果处理器110需要再次使用该指令或数据,可从存储器中调用。避免了重复存取,减少了处理器110的等待时间,因而提高了系统的效率。
在一些实施例中,处理器110可以包括一个或多个接口。接口可以包括集成电路(inter-integrated circuit,I2C)接口,集成电路内置音频(inter-integrated circuit sound,I2S)接口,脉冲编码调制(pulse code modulation,PCM)接口,通用异步收发传输器(universal asynchronous receiver/transmitter,UART)接口,移动产业处理器接口(mobile industry processor interface,MIPI),通用输入输出(general-purpose input/output,GPIO)接口,用户标识模块(subscriber identity module,SIM)接口,和/或通用串行总线(universal serial bus,USB)接口等。
I2C接口是一种双向同步串行总线,包括一根串行数据线(serial data line,SDA)和一根串行时钟线(serial clock line,SCL)。在一些实施例中,处理器110可以包含多组I2C总线。处理器110可以通过不同的I2C总线接口分别耦合触摸传感器180K,充电器,闪光灯,摄像头193等。例如:处理器110可以通过I2C接口耦合触摸传感器180K,使处理器110与触摸传感器180K通过I2C总线接口通信,实现第二设备500的触摸功能。
I2S接口可以用于音频通信。在一些实施例中,处理器110可以包含多组I2S总线。处理器110可以通过I2S总线与音频模块170耦合,实现处理器110与音频模块170之间的通信。在一些实施例中,音频模块170可以通过I2S接口向无线通信模块160传递音频信号,实现通过蓝牙耳机接听电话的功能。
PCM接口也可以用于音频通信,将模拟信号抽样,量化和编码。在一些实施例中,音频模块170与无线通信模块160可以通过PCM总线接口耦合。在一些实施例中,音频模块170也可以通过PCM接口向无线通信模块160传递音频信号,实现通过蓝牙耳机接听电话的功能。I2S接口和PCM接口都可以用于音频通信。
UART接口是一种通用串行数据总线,用于异步通信。该总线可以为双向通信总线。它将要传输的数据在串行通信与并行通信之间转换。在一些实施例中,UART接口通常被用于连接处理器110与无线通信模块160。例如:处理器110通过UART接口与无线通信模块160中的蓝牙模块通信,实现蓝牙功能。在一些实施例中,音频模块170可以通过UART接口向无线通信模块160传递音频信号,实现通过蓝牙耳机播放音乐的功能。
MIPI接口可以被用于连接处理器110与显示屏194,摄像头193等外围器件。MIPI接口包括摄像头串行接口(camera serial interface,CSI),显示屏串行接口(display serial interface,DSI)等。在一些实施例中,处理器110和摄像头193通过CSI接口通信,实现第二设备500的拍摄功能。处理器110和显示屏194通过DSI接口通信,实现第二设备500的显示功能。
GPIO接口可以通过软件配置。GPIO接口可以被配置为控制信号,也可被配置为数据信号。在一些实施例中,GPIO接口可以用于连接处理器110与摄像头193,显示屏194,无线通信模块160,音频模块170,传感器模块180等。GPIO接口还可以被配置为I2C接口,I2S接口,UART接口,MIPI接口等。
USB接口130是符合USB标准规范的接口,具体可以是Mini USB接口,Micro USB接口,USB Type C接口等。USB接口130可以用于连接充电器为第二设备500充电,也可以用于第二设备500与外围设备之间传输数据。也可以用于连接耳机,通过耳机播放音频。该接口还可以用于连接其他电子设备,例如AR设备等。
可以理解的是,本申请实施例示意的各模块间的接口连接关系,是示意性说明,并不构成对第二设备500的结构限定。在本申请另一些实施例中,第二设备500也可以采用上述实施例中不同的接口连接方式,或多种接口连接方式的组合。
充电管理模块140用于从充电器接收充电输入。其中,充电器可以是无线充电器,也可以是有线充电器。在一些有线充电的实施例中,充电管理模块140可以通过USB接口130接收有线充电器的充电输入。在一些无线充电的实施例中,充电管理模块140可以通过第二设备500的无线充电线圈接收无线充电输入。充电管理模块140为电池142充电的同时,还可以通过电源管理模块141为终端设备供电。
电源管理模块141用于连接电池142,充电管理模块140与处理器110。电源管理模块141接收电池142和/或充电管理模块140的输入,为处理器110,内部存储器121,显示屏194,摄像头193,和无线通信模块160等供电。电源管理模块141还可以用于监测电池容量,电池循环次数,电池健康状态(漏电,阻抗)等参数。在其他一些实施例中,电源管理模块141也可以设置于处理器110中。在另一些实施例中,电源管理模块141和充电管理模块140也可以设置于同一个器件中。
第二设备500的无线通信功能可以通过天线1,天线2,移动通信模块150,无线通信模块160,调制解调处理器以及基带处理器等实现。
天线1和天线2用于发射和接收电磁波信号。第二设备500中的天线可用于覆盖单个或多个通信频带。不同的天线还可以复用,以提高天线的利用率。例如:可以将天线1复用为无线局域网的分集天线。在另外一些实施例中,天线可以和调谐开关结合使用。
移动通信模块150可以提供应用在第二设备500上的包括2G/3G/4G/5G等无线通信的解决方案。移动通信模块150可以包括至少一个滤波器,开关,功率放大器,低噪声放大器(low noise amplifier,LNA)等。移动通信模块150可以由天线1接收电磁波,并对接收的电磁波进行滤波,放大等处理,传送至调制解调处理器进行解调。移动通信模块150还可以对经调制解调处理器调制后的信号放大,经天线1转为电磁波辐射出去。在一些实施例中,移动通信模块150的至少部分功能模块可以被设置于处理器110中。在一些实施例中,移动通信模块150的至少部分功能模块可以与处理器110的至少部分模块被设置在同一个器件中。
调制解调处理器可以包括调制器和解调器。其中,调制器用于将待发送的低频基带信号调制成中高频信号。解调器用于将接收的电磁波信号解调为低频基带信号。随后解调器将解调得到的低频基带信号传送至基带处理器处理。低频基带信号经基带处理器处理后,被传递给应用处理器。应用处理器通过音频设备(不限于扬声器170A,受话器170B等)输出声音信号,或通过显示屏194显示图像或视频。在一些实施例中,调制解调处理器可以是独立的器件。在另一些实施例中,调制解调处理器可以独立于处理器110,与移动通信模块150或其他功能模块设置在同一个器件中。
无线通信模块160可以提供应用在第二设备500上的包括无线局域网(wireless local area networks,WLAN)(如无线保真(wireless fidelity,Wi-Fi)网络),蓝牙(bluetooth,BT),全球导航卫星系统(global navigation satellite system,GNSS),调频(frequency modulation,FM),近距离无线通信技术(near field communication,NFC),红外技术(infrared,IR)等无线通信的解决方案。无线通信模块160可以是集成至少一个通信处理模块的一个或多个器件。无线通信模块160经由天线2接收电磁波,将电磁波信号调频以及滤波处理,将处理后的信号发送到处理器110。无线通信模块160还可以从处理器110接收待发送的信号,对其进行调频,放大,经天线2转为电磁波辐射出去。
在一些实施例中,第二设备500的天线1和移动通信模块150耦合,天线2和无线通信模块160耦合,使得第二设备500可以通过无线通信技术与网络以及其他设备通信。无线通信技术可以包括全球移动通讯系统(global system for mobile communications,GSM),通用分组无线服务(general packet radio service,GPRS),码分多址接入(code division multiple access,CDMA),宽带码分多址(wideband code division multiple access,WCDMA),时分码分多址(time-division code division multiple access,TD-SCDMA),长期演进(long term evolution,LTE),BT,GNSS,WLAN,NFC,FM,和/或IR技术等。GNSS可以包括全球卫星定位系统(global positioning system,GPS),全球导航卫星系统(global navigation satellite system,GLONASS),北斗卫星导航系统(beidou navigation satellite system,BDS),准天顶卫星系统(quasi-zenith satellite system,QZSS)和/或星基增强系统(satellite based augmentation systems,SBAS)。
第二设备500通过GPU,显示屏194,以及应用处理器等实现显示功能。GPU为图像处理的微处理器,连接显示屏194和应用处理器。GPU用于执行数学和几何计算,用于图形渲染。处理器110可包括一个或多个GPU,其执行程序指令以生成或改变显示信息。
显示屏194用于显示图像,视频等。显示屏194包括显示面板。显示面板可以采用液晶显示屏(liquid crystal display,LCD),有机发光二极管(organic light-emitting diode,OLED),有源矩阵有机发光二极体或主动矩阵有机发光二极体(active-matrix organic light emitting diode,AMOLED),柔性发光二极管(flex light-emitting diode,FLED),Miniled,MicroLed,Micro-oLed,量子点发光二极管(quantum dot light emitting diodes,QLED)等。在一些实施例中,第二设备500可以包括1个或N个显示屏194,N为大于1的正整数。
第二设备500可以通过ISP,摄像头193,视频编解码器,GPU,显示屏194以及应用处理器等实现拍摄功能。
ISP用于处理摄像头193反馈的数据。例如,拍照时,打开快门,光线通过镜头被传递到摄像头感光元件上,光信号转换为电信号,摄像头感光元件将电信号传递给ISP处理,转化为肉眼可见的图像。ISP还可以对图像的噪点,亮度,肤色进行算法优化。ISP还可以对拍摄场景的曝光,色温等参数优化。在一些实施例中,ISP可以设置在摄像头193中。
摄像头193用于捕获静态图像或视频。物体通过镜头生成光学图像投射到感光元件。感光元件可以是电荷耦合器件(charge coupled device,CCD)或互补金属氧化物半导体(complementary metal-oxide-semiconductor,CMOS)光电晶体管。感光元件把光信号转换成电信号,之后将电信号传递给ISP转换成数字图像信号。ISP将数字图像信号输出到DSP加工处理。DSP将数字图像信号转换成标准的RGB,YUV等格式的图像信号。在一些实施例中,第二设备500可以包括1个或N个摄像头193,N为大于1的正整数。
数字信号处理器用于处理数字信号,除了可以处理数字图像信号,还可以处理其他数字信号。例如,当第二设备500在频点选择时,数字信号处理器用于对频点能量进行傅里叶变换等。
视频编解码器用于对数字视频压缩或解压缩。第二设备500可以支持一种或多种视频编解码器。这样,第二设备500可以播放或录制多种编码格式的视频,例如:动态图像专家组(moving picture experts group,MPEG)1,MPEG2,MPEG3,MPEG4等。
NPU为神经网络(neural-network,NN)计算处理器,通过借鉴生物神经网络结构,例如借鉴人脑神经元之间传递模式,对输入信息快速处理,还可以不断的自学习。通过NPU可以实现第二设备500的智能认知等应用,例如:图像识别,人脸识别,语音识别,文本理解等。
外部存储器接口120可以用于连接外部存储卡,例如Micro SD卡,实现扩展第二设备500的存储能力。外部存储卡通过外部存储器接口120与处理器110通信,实现数据存储功能。例如将音乐,视频等文件保存在外部存储卡中。
内部存储器121可以用于存储计算机可执行程序代码,可执行程序代码包括指令。内部存储器121可以包括存储程序区和存储数据区。其中,存储程序区可存储操作系统,至少一个功能所需的应用程序(比如声音播放功能,图像播放功能等)等。存储数据区可存储第二设备500使用过程中所创建的数据(比如音频数据,电话本等)等。此外,内部存储器121可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件,闪存器件,通用闪存存储器(universal flash storage,UFS)等。处理器110通过运行存储在内部存储器121的指令,和/或存储在设置于处理器中的存储器的指令,执行第二设备500的各种功能应用以及数据处理。
第二设备500可以通过音频模块170,扬声器170A,受话器170B,麦克风170C,耳机接口170D,以及应用处理器等实现音频功能。例如音乐播放,录音等。
音频模块170用于将数字音频信息转换成模拟音频信号输出,也用于将模拟音频输入转换为数字音频信号。音频模块170还可以用于对音频信号编码和解码。在一些实施例中,音频模块170可以设置于处理器110中,或将音频模块170的部分功能模块设置于处理器110中。
扬声器170A,也称“喇叭”,用于将音频电信号转换为声音信号。第二设备500可以通过扬声器170A收听音乐,或收听免提通话。
受话器170B,也称“听筒”,用于将音频电信号转换成声音信号。当第二设备500接听电话或语音信息时,可以通过将受话器170B靠近人耳接听语音。
麦克风170C,也称“话筒”,“传声器”,用于将声音信号转换为电信号。当拨打电话或发送语音信息时,用户可以通过人嘴靠近麦克风170C发声,将声音信号输入到麦克风170C。第二设备500可以设置至少一个麦克风170C。在另一些实施例中,第二设备500可以设置两个麦克风170C,除了采集声音信号,还可以实现降噪功能。在另一些实施例中,第二设备500还可以设置三个,四个或更多麦克风170C,实现采集声音信号,降噪,还可以识别声音来源,实现定向录音功能等。
耳机接口170D用于连接有线耳机。耳机接口170D可以是USB接口130,也可以是3.5mm的开放移动电子设备平台(open mobile terminal platform,OMTP)标准接口,美国蜂窝电信工业协会(cellular telecommunications industry association of the USA,CTIA)标准接口。
压力传感器180A用于感受压力信号,可以将压力信号转换成电信号。在一些实施例中,压力传感器180A可以设置于显示屏194。压力传感器180A的种类很多,如电阻式压力传感器,电感式压力传感器,电容式压力传感器等。电容式压力传感器可以是包括至少两个具有导电材料的平行板。当有力作用于压力传感器180A,电极之间的电容改变。第二设备500根据电容的变化确定压力的强度。当有触摸操作作用于显示屏194,第二设备500根据压力传感器180A检测触摸操作强度。第二设备500也可以根据压力传感器180A的检测信号计算触摸的位置。在一些实施例中,作用于相同触摸位置,但不同触摸操作强度的触摸操作,可以对应不同的操作指令。
陀螺仪传感器180B可以用于确定第二设备500的运动姿态。在一些实施例中,可以通过陀螺仪传感器180B确定第二设备500围绕三个轴(即,x,y和z轴)的角速度。陀螺仪传感器180B可以用于拍摄防抖。示例性的,当按下快门,陀螺仪传感器180B检测第二设备500抖动的角度,根据角度计算出镜头模组需要补偿的距离,让镜头通过反向运动抵消第二设备500的抖动,实现防抖。陀螺仪传感器180B还可以用于导航,体感游戏场景。
气压传感器180C用于测量气压。在一些实施例中,第二设备500通过气压传感器180C测得的气压值计算海拔高度,辅助定位和导航。
磁传感器180D包括霍尔传感器。第二设备500可以利用磁传感器180D检测翻盖皮套的开合。在一些实施例中,当第二设备500是翻盖机时,第二设备500可以根据磁传感器180D检测翻盖的开合。进而根据检测到的皮套的开合状态或翻盖的开合状态,设置翻盖自动解锁等特性。
加速度传感器180E可检测第二设备500在各个方向上(一般为三轴)加速度的大小。当第二设备500静止时可检测出重力的大小及方向。还可以用于识别终端设备姿态,应用于横竖屏切换,计步器等应用程序。
距离传感器180F,用于测量距离。第二设备500可以通过红外或激光测量距离。在一些实施例中,拍摄场景,第二设备500可以利用距离传感器180F测距以实现快速对焦。
接近光传感器180G可以包括例如发光二极管(LED)和光检测器,例如光电二极管。发光二极管可以是红外发光二极管。第二设备500通过发光二极管向外发射红外光。第二设备500使用光电二极管检测来自附近物体的红外反射光。当检测到充分的反射光时,可以确定第二设备500附近有物体。当检测到不充分的反射光时,第二设备500可以确定第二设备500附近没有物体。第二设备500可以利用接近光传感器180G检测用户手持第二设备500贴近耳朵通话,以便自动熄灭屏幕达到省电的目的。接近光传感器180G也可用于皮套模式,口袋模式自动解锁与锁屏。
环境光传感器180L用于感知环境光亮度。第二设备500可以根据感知的环境光亮度自适应调节显示屏194亮度。环境光传感器180L也可用于拍照时自动调节白平衡。环境光传感器180L还可以与接近光传感器180G配合,检测第二设备500是否在口袋里,以防误触。
指纹传感器180H用于采集指纹。第二设备500可以利用采集的指纹特性实现指纹解锁,访问应用锁,指纹拍照,指纹接听来电等。
温度传感器180J用于检测温度。在一些实施例中,第二设备500利用温度传感器180J检测的温度,执行温度处理策略。例如,当温度传感器180J上报的温度超过阈值,第二设备500执行降低位于温度传感器180J附近的处理器的性能,以便降低功耗实施热保护。在另一些实施例中,当温度低于另一阈值时,第二设备500对电池142加热,以避免低温导致第二设备500异常关机。在其他一些实施例中,当温度低于又一阈值时,第二设备500对电池142的输出电压执行升压,以避免低温导致的异常关机。
触摸传感器180K,也称“触控器件”。触摸传感器180K可以设置于显示屏194,由触摸传感器180K与显示屏194组成触摸屏,也称“触控屏”。触摸传感器180K用于检测作用于其上或附近的触摸操作。触摸传感器可以将检测到的触摸操作传递给应用处理器,以确定触摸事件类型。可以通过显示屏194提供与触摸操作相关的视觉输出。在另一些实施例中,触摸传感器180K也可以设置于第二设备500的表面,与显示屏194所处的位置不同。
骨传导传感器180M可以获取振动信号。在一些实施例中,骨传导传感器180M可以获取人体声部振动骨块的振动信号。骨传导传感器180M也可以接触人体脉搏,接收血压跳动信号。在一些实施例中,骨传导传感器180M也可以设置于耳机中,结合成骨传导耳机。音频模块170可以基于骨传导传感器180M获取的声部振动骨块的振动信号,解析出语音信号,实现语音功能。应用处理器可以基于骨传导传感器180M获取的血压跳动信号解析心率信息,实现心率检测功能。
按键190包括开机键,音量键等。按键190可以是机械按键。也可以是触摸式按键。第二设备500可以接收按键输入,产生与第二设备500的用户设置以及功能控制有关的键信号输入。
马达191可以产生振动提示。马达191可以用于来电振动提示,也可以用于触摸振动反馈。例如,作用于不同应用程序(例如拍照,音频播放等)的触摸操作,可以对应不同的振动反馈效果。作用于显示屏194不同区域的触摸操作,马达191也可对应不同的振动反馈效果。不同的应用场景(例如:时间提醒,接收信息,闹钟,游戏等)也可以对应不同的振动反馈效果。触摸振动反馈效果还可以支持自定义。
指示器192可以是指示灯,可以用于指示充电状态,电量变化,也可以用于指示消息,未接来电,通知等。
SIM卡接口195用于连接SIM卡。SIM卡可以通过插入SIM卡接口195,或从SIM卡接口195拔出,实现和第二设备500的接触和分离。第二设备500可以支持1个或N个SIM卡接口,N为大于1的正整数。SIM卡接口195可以支持Nano SIM卡,Micro SIM卡,SIM卡等。同一个SIM卡接口195可以同时插入多张卡。多张卡的类型可以相同,也可以不同。SIM卡接口195也可以兼容不同类型的SIM卡。SIM卡接口195也可以兼容外部存储卡。第二设备500通过SIM卡和网络交互,实现通话以及数据通信等功能。在一些实施例中,第二设备500采用eSIM,即:嵌入式SIM卡。eSIM卡可以嵌在第二设备500中,不能和第二设备500分离。
第一设备400和第二设备500的软件系统均可以采用分层架构,事件驱动架构,微核架构,微服务架构,或云架构,等。本申请实施例以分层架构的Android系统为例,示例性说明第一设备400和第二设备500的软件结构。
图6左图示出了本申请实施例提供的一种第一设备的软件架构框图。第一设备的软件系统可以采用分层架构、事件驱动架构、微核架构、微服务架构或云架构等。本申请实施例以第一设备的操作系统为Android系统为例示例性说明。如图6所示,将Android系统分为四层,从上至下分别为应用程序层,应用程序框架层,安卓运行时(Android runtime)和系统库,以及内核层。
应用程序层可以包括一系列应用程序包。
如图6所示,应用程序层可以包括图库、日历、音乐、视频、点播、智能家居或设备控制等的一种或多种应用程序。
其中,上述任一种应用程序中均可以提供输入框,使得用户可以在输入框中输入关键字等实现在该应用程序中的搜索等操作。
智能家居应用可用于对具有联网功能的家居设备进行控制或管理。例如,家居设备可以包括电灯、空调、防盗门锁、音箱、扫地机器人、插座、体脂秤、台灯、空气净化器、电冰箱、洗衣机、热水器、微波炉、电饭锅、窗帘、风扇、电视、机顶盒、门窗等。
设备控制应用用于对单一设备(例如第一设备)进行控制或者管理。
另外,应用程序层还可以包括:控制中心和/或通知中心等系统应用程序。
其中,控制中心为第一设备的下拉消息通知栏,即当用户在第一设备上进行向下操作时第一设备所显示出的用户界面。通知中心为第一设备的上拉消息通知栏,即当用户在第一设备上进行向上操作时第一设备所显示出的用户界面。
应用程序框架层(framework)为应用程序层的应用程序提供应用编程接口(application programming interface,API)和编程框架。应用程序框架层包括一些预先定义的函数。
如图6所示,应用程序框架层可以包括窗口管理器,内容提供器,资源管理器,视图系统,通知管理器,分布式组网框架,远程输入服务或输入法框架等的一种或多种。
窗口管理器用于管理窗口程序。窗口管理器可以获取显示屏大小,判断是否有状态栏,锁定屏幕,触摸屏幕,拖拽屏幕,截取屏幕等。
内容提供器用来存放和获取数据,并使这些数据可以被应用程序访问。数据可以包括视频,图像,音频,浏览历史和书签等。
视图系统包括可视控件,例如显示文字的控件,显示图片的控件等。视图系统可用于构建应用程序。显示界面可以由一个或多个视图组成的。例如,包括短信通知图标的显示界面,可以包括显示文字的视图以及显示图片的视图。
资源管理器为应用程序提供各种资源,比如本地化字符串,图标,图片,布局文件,视频文件等等。
通知管理器使应用程序可以在状态栏中显示通知信息,可以用于传达告知类型的消息,可以短暂停留后自动消失,无需用户交互。比如通知管理器被用于告知下载完成,消息提醒等。通知管理器还可以是以图表或者滚动条文本形式出现在系统顶部状态栏的通知,例如后台运行的应用程序的通知,还可以是以对话窗口形式出现在屏幕上的通知。例如在状态栏提示文本信息,发出提示音,指示灯闪烁等。
分布式组网框架使得第一设备可以发现处于同一分布式组网中的其他设备,进而建立与其他设备的通信连接。
远程输入服务(也可能称为远程输入元能力(atomic ability,AA))使得第一设备可以接收其他设备的远程输入。
输入法框架可以支持第一设备在输入框中进行内容输入。
Android runtime包括核心库和虚拟机。Android runtime负责安卓系统的调度和管理。
核心库包含两部分:一部分是java语言需要调用的功能函数,另一部分是安卓的核心库。
应用程序层和应用程序框架层运行在虚拟机中。虚拟机将应用程序层和应用程序框架层的java文件执行为二进制文件。虚拟机用于执行对象生命周期的管理,堆栈管理,线程管理,安全和异常的管理,以及垃圾回收等功能。
系统库可以包括多个功能模块。例如:表面管理器(surface manager),媒体库(Media Libraries),三维图形处理库(例如:OpenGL ES),2D图形引擎(例如:SGL)等。
表面管理器用于对显示子系统进行管理,并且为多个应用程序提供了2D和3D图层的融合。
媒体库支持多种常用的音频,视频格式回放和录制,以及静态图像文件等。媒体库可以支持多种音视频编码格式,例如:MPEG4,H.264,MP3,AAC,AMR,JPG,PNG等。
三维图形处理库用于实现三维图形绘图,图像渲染,合成,和图层处理等。
2D图形引擎是2D绘图的绘图引擎。
内核层是硬件和软件之间的层。内核层至少包含显示驱动,摄像头驱动,音频驱动,传感器驱动。
图6右图示出了本申请实施例提供的一种第二设备的软件架构框图。示例性的,如图6所示,将Android系统分为四层,从上至下分别为应用程序层,应用程序框架层,安卓运行时(Android runtime)和系统库,以及内核层。
应用程序层可以包括一系列应用程序包。
如图6所示,应用程序包可以包括相机,日历,电话,地图,电话,音乐,设置,邮箱,视频,社交等应用程序。
应用程序框架层为应用程序层的应用程序提供应用编程接口(application programming interface,API)和编程框架。应用程序框架层包括一些预先定义的函数。
如图6所示,应用程序框架层可以包括窗口管理器,内容提供器,资源管理器,视图系统,通知管理器,分布式组网框架,输入法框架,或接口服务等的一种或多种。
窗口管理器用于管理窗口程序。窗口管理器可以获取显示屏大小,判断是否有状态栏,锁定屏幕,触摸屏幕,拖拽屏幕,截取屏幕等。
内容提供器用来存放和获取数据,并使这些数据可以被应用程序访问。数据可以包括视频,图像,音频,拨打和接听的电话,浏览历史和书签,电话簿等。
视图系统包括可视控件,例如显示文字的控件,显示图片的控件等。视图系统可用于构建应用程序。显示界面可以由一个或多个视图组成的。例如,包括短信通知图标的显示界面,可以包括显示文字的视图以及显示图片的视图。
资源管理器为应用程序提供各种资源,比如本地化字符串,图标,图片,布局文件,视频文件等等。
通知管理器使应用程序可以在状态栏中显示通知信息,可以用于传达告知类型的消息,可以短暂停留后自动消失,无需用户交互。比如通知管理器被用于告知下载完成,消息提醒等。通知管理器还可以是以图表或者滚动条文本形式出现在系统顶部状态栏的通知,例如后台运行的应用程序的通知,还可以是以对话窗口形式出现在屏幕上的通知。例如在状态栏提示文本信息,发出提示音,终端设备振动,指示灯闪烁等。
分布式组网框架使得第二设备可以发现处于同一分布式组网中的其他设备,进而建立与其他设备的通信连接。
输入法框架可以支持第二设备在输入框中进行内容输入。
接口服务可以定义第二设备与其他设备之间的接口,使得第二设备与其他设备可以基于接口服务定义的接口实现数据传输。可能的实现中,接口服务可以包括:辅助AA,其中,元能力(atomic ability,AA)由开发人员开发,是实现单一功能的程序实体,可以无用户界面(user interface,UI)。
Android runtime包括核心库和虚拟机。Android runtime负责安卓系统的调度和管理。
核心库包含两部分:一部分是java语言需要调用的功能函数,另一部分是安卓的核心库。
应用程序层和应用程序框架层运行在虚拟机中。虚拟机将应用程序层和应用程序框架层的java文件执行为二进制文件。虚拟机用于执行对象生命周期的管理,堆栈管理,线程管理,安全和异常的管理,以及垃圾回收等功能。
系统库可以包括多个功能模块。例如:表面管理器(surface manager),媒体库(Media Libraries),三维图形处理库(例如:OpenGL ES),2D图形引擎(例如:SGL)等。
表面管理器用于对显示子系统进行管理,并且为多个应用程序提供了2D和3D图层的融合。
媒体库支持多种常用的音频,视频格式回放和录制,以及静态图像文件等。媒体库可以支持多种音视频编码格式,例如:MPEG4,H.264,MP3,AAC,AMR,JPG,PNG等。
三维图形处理库用于实现三维图形绘图,图像渲染,合成,和图层处理等。
2D图形引擎是2D绘图的绘图引擎。
内核层是硬件和软件之间的层。内核层至少包含显示驱动,摄像头驱动,音频驱动,传感器驱动。
基于图6所示的第一设备和第二设备的软件架构,本申请实施例中,第一设备和第二设备可以利用各自的分布式组网框架,实现接入分布式组网、设备发现或数据发送等通信业务。例如,在第一设备和第二设备接入分布式组网后,第一设备可以基于远程输入服务拉起第二设备的接口服务,进而利用第二设备的接口服务调用第二设备的输入法框架,实现利用第二设备的输入法框架辅助第一设备输入。第一设备的远程输入服务也可以将基于第一设备的输入法框架输入的内容,通过第二设备的接口服务等,发送给第二设备。
可能的理解方式中,图6对应的实施例是在第二设备的framework层中设置了接口服务(例如辅助AA),通过接口服务,类似于在第一设备与第二设备的输入法框架之间搭建桥梁,使得第一设备可以拉起第二设备的输入法框架,则利用第二设备的输入法框架输入的内容可以在第一设备的输入框中等同显示(例如第二设备输入框中的光标、高亮显示等内容均可以在第一设备的输入框显示),实现利用第二设备辅助第一设备输入。
应理解,可能的实现中,接口服务(例如辅助AA)也可以以应用程序的形式实现,例如,可以开发用于实现本申请实施例接口服务的应用程序(application,APP),进而在手机中加载该应用程序,以基于该应用程序实现本申请实施例的上述接口服务的功能。可能的实现方式中,该应用程序可以具备显示在用户界面中的应用程序图标(或者可以理解为用户可以感知该应用程序),该应用程序也可以不具备显示在用户界面中的应用程序图标(或者可以理解为用户不感知该应用程序),本申请实施例对接口服务的具体实现不作限定。为了便于描述,后续以接口服务为辅助AA示意说明。
下面,结合图7,对手机辅助大屏输入的具体实现过程进行举例。
图7示出了本申请实施例提供的一种设备通信方法的系统架构图。如图7所示,客户端(大屏)中可以设置应用程序编辑框(或称为搜索框)、数据库和远程输入法框架服务(或称为远程输入服务)。服务端(手机)中可以设置辅助AA、通知管理器(或者简称为通知)、窗口管理器(或者简称为窗口)、数据库和输入法框架。
大屏中的应用程序编辑框可以是大屏的输入法框架提供的,大屏的应用程序编辑框可以接收遥控器等的输入,在遥控器选定应用程序编辑框时,可以触发后续的辅助输入的实现。或者,在遥控器选定应用程序编辑框,并在对应于应用程序编辑框的软键盘(或称为虚拟键盘)中选择内容时,可以触发后续的辅助输入的实现。
大屏中的数据库可以存储关键词与节目的关联关系等,例如,大屏在应用程序编辑框中获取关键词后,可以基于数据库中关键词与节目的关联关系搜索节目。
大屏中的远程输入法框架服务可以使得大屏接收远程输入。示例性的,远程输入法框架可以包括大屏的本地输入法框架和远程输入服务AA,基于远程输入服务AA可以定义大屏与外部设备的接口,使得大屏可以通过接口接收外部设备的远程输入。例如,大屏上的远程输入服务AA(或可以称为远程输入服务AA接口)可以包括下述接口的一种或多种:外部向大屏设置文本的接口、外部向大屏申请焦点切换的接口、外部向大屏注册的回调或提供给小键盘使用的接口等。
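可能的理解方式中,上述远程输入服务AA的接口可以草拟为如下Java示意代码。其中RemoteInputSketch、setText、requestFocusSwitch等类名与方法名均为本示例的假设命名,并非实际框架的接口定义:

```java
import java.util.ArrayList;
import java.util.List;

/** 示意性草图:类名、方法名均为示例假设,并非文中框架的真实API */
public class RemoteInputSketch {
    /** 大屏侧远程输入服务接口的一种可能定义 */
    public interface RemoteInputService {
        void setText(String text);                   // 外部向大屏设置文本
        void requestFocusSwitch(String boxId);       // 外部向大屏申请焦点切换
        void registerCallback(EditStateCallback cb); // 外部向大屏注册回调
    }

    public interface EditStateCallback {
        void onEditStateChanged(String text, int cursorPos);
    }

    /** 大屏侧编辑框的一个最小实现:收到文本后通知所有已注册回调 */
    public static class LargeScreenEditBox implements RemoteInputService {
        private final List<EditStateCallback> callbacks = new ArrayList<>();
        private String text = "";

        @Override public void setText(String text) {
            this.text = text;
            for (EditStateCallback cb : callbacks) {
                cb.onEditStateChanged(text, text.length());
            }
        }
        @Override public void requestFocusSwitch(String boxId) { /* 焦点切换,略 */ }
        @Override public void registerCallback(EditStateCallback cb) { callbacks.add(cb); }

        public String getText() { return text; }
    }
}
```

示例中以一个本地对象模拟大屏侧编辑框,外部设备调用setText后,编辑框内容更新并通知已注册的回调,对应上文"外部向大屏设置文本"与"外部向大屏注册的回调"两类接口。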
手机中的辅助AA可以定义手机与其他设备之间的接口,使得手机与其他设备可以基于接口服务定义的接口实现数据传输。例如,可以基于辅助AA建立手机本地输入法框架与大屏远程输入法框架的互相调用,实现将手机本地输入法框架中输入的任意内容在大屏的编辑框中同步显示。
可能的实现中,基于辅助AA建立手机本地输入法框架与大屏远程输入法框架的互相调用的实现包括:大屏远程输入法框架和辅助AA互相持有对方的远程过程调用(remote procedure call,RPC)对象。则后续大屏和手机进行数据交互时,可以根据对方的RPC对象调用对方的设备进程,通知对方设备进程调用对方的本地接口执行适应操作。
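可能的理解方式中,上述"互相持有对方的RPC对象"的调用关系可以用如下Java示意代码草拟,这里用本地引用模拟跨进程的RPC对象,Endpoint、syncToPeer等命名均为示例假设,并非实际实现:

```java
/** 示意性草图:用两个本地对象模拟"互持RPC对象"的双向同步,命名均为示例假设 */
public class RpcPairSketch {
    /** 任一端对外暴露的跨进程接口 */
    public interface RpcEndpoint {
        void onRemoteEditState(String state); // 收到对端同步来的编辑状态
    }

    public static class Endpoint implements RpcEndpoint {
        private final String name;
        private RpcEndpoint peer;   // 持有对方的RPC对象(此处以本地引用模拟)
        private String lastState = "";

        public Endpoint(String name) { this.name = name; }
        public void bindPeer(RpcEndpoint peer) { this.peer = peer; }

        /** 本端编辑状态变化时,通过对方的RPC对象通知对端进程 */
        public void syncToPeer(String state) {
            if (peer != null) peer.onRemoteEditState(state);
        }
        @Override public void onRemoteEditState(String state) { lastState = state; }
        public String getLastState() { return lastState; }
    }
}
```

使用时,大屏端与手机端各自持有对方的Endpoint引用,任一端调用syncToPeer即可把编辑状态同步到对端,对应上文"根据对方的RPC对象调用对方的设备进程"。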
手机中的通知管理器可以基于大屏的获取编辑框焦点的操作,在手机界面中显示通知内容,提示手机用户进行大屏辅助输入。
手机中的窗口管理器可以显示用户界面,例如显示通知界面、辅助输入界面等。
手机中的数据库可以存储关键词和候选内容的关联关系,例如,手机的输入法编辑框中获取关键词后,可以基于数据库中关键词与候选内容的关联关系显示候选内容。
手机中的输入法框架可以提供便捷的输入法实现。
其中,大屏和手机均可以加入分布式组网,在分布式组网中实现设备发现、通信连接建立和数据传输等,由于加入分布式组网是较为通用的技术,在此不再赘述。
在如图7所示的设备通信方法的系统架构图中,在大屏和手机加入分布式组网时,可以将大屏的数据库(例如候选词库)和手机的数据库(例如候选词库)同步,使得大屏和手机可以共享彼此的数据库,从而用户可以基于大屏和手机中的候选词库,实现较为便捷的候选词选定。
例如,用户采用遥控等设备选定大屏的编辑框后,大屏可以通过远程输入法框架启动本地的输入法通道,并传递给远程输入法框架,远程输入法框架(input method framework,IMF)向分布式组网框架查询当前分布式组网内的辅助AA(需理解,这里以辅助AA为例,实际可以是手机中任意能承载相关能力的应用进程)。手机将辅助AA的RPC对象返回给大屏,继而大屏调用接口将大屏的输入通道的RPC对象传递给手机。则后续手机可以通过大屏的输入通道的RPC对象向大屏同步编辑状态信息,大屏也可以通过手机的输入通道的RPC对象向手机同步编辑状态信息。
手机中的辅助AA可以进一步指示通知管理器弹出通知,在手机接收到用户点击通知确认时,可以进一步在手机的窗口弹出输入框,拉起手机本地输入法框架中的本地输入法,进而将手机用户利用本地输入法输入的内容同步到大屏的远程输入法框架服务,实现在大屏中的编辑框同步显示手机输入框中的内容。
在利用手机辅助大屏输入的可能实现中,用户在手机的编辑框中输入关键字后,手机编辑框中的关键字等信息可以同步到大屏的编辑框中,从而可以提升在大屏中输入的效率。
通常的实现中,在用户使用手机A辅助大屏输入的场景下,如果手机A在辅助大屏输入的过程中被打断,例如,手机A在辅助大屏输入的过程中收到来电,则手机A的辅助输入可能会打断,手机A无法继续辅助大屏继续输入。或者,使用手机A的用户不希望继续使用手机A辅助大屏输入。用户可能希望切换其他辅助设备辅助大屏输入,例如切换到手机B辅助大屏输入,则用户需要再次使用大屏遥控器,重新点击大屏编辑框,重新触发大屏与手机B的连接,过程较为繁琐。
基于此,本申请实施例提供了一种设备通信方法,在用户利用手机辅助大屏输入的过程中,与大屏和该手机处于同一分布式组网中的其他辅助设备可以抢占该手机的辅助输入,进一步的,还可以在该手机的输入内容的基础上,继续辅助大屏输入,期间用户不需要利用遥控器等设备再次选择大屏的编辑框,实现便捷高效的辅助大屏输入。
示例性的,图8示出了本申请实施例的多个设备抢占输入的设备通信方法的具体系统架构示意图。
如图8所示,本申请实施例以分布式组网中包括大屏、手机A和手机B为例,示例性说明在手机A的辅助输入时,手机B抢占辅助输入的过程。可以理解,本申请实施例的大屏可以具备请求远程输入的能力,手机A和手机B均可以具有分布式输入法辅助AA。
如图8所示,本申请实施例的设备通信方法可以包括拉起过程和抢占过程。拉起过程中,大屏可以与手机A和手机B建立连接,手机A确认辅助大屏输入。抢占过程中,手机B可以抢占手机A实现利用手机B辅助大屏输入。
示例性的,在拉起过程中,用户可以通过大屏遥控器点击大屏的编辑框,大屏的编辑框向大屏的输入法框架请求远程输入法,大屏查找到分布式组网中的手机A和手机B,大屏可以分别与手机A的辅助AA和手机B的辅助AA建立连接,将大屏的数据通道接口分别传递给手机A的辅助AA和手机B的辅助AA。手机A和手机B中均可以弹出通知,该通知可以用于表示大屏请求辅助输入。用户可以在手机A的通知中确认利用手机A辅助大屏输入,并通知大屏当前抢占的设备是手机A。手机A中可以弹出用于辅助大屏输入的编辑框,用户可以在手机A的编辑框中拉起手机的本地输入法,辅助大屏输入。示例性的,用户可以在手机A的编辑框中输入“你好啊,”,大屏上可以同步显示“你好啊,”。
示例性的,在抢占过程中,用户可以在手机B的通知中确认利用手机B辅助大屏输入,并通知大屏当前抢占的设备是手机B。大屏可以向分布式组网中的手机A和手机B广播当前抢占设备是手机B,如果手机A没有再次执行抢占步骤,可以隐藏手机A中用于辅助输入的编辑框,在手机B中可以弹出用于辅助大屏输入的编辑框,用户可以在手机B的编辑框中拉起手机的本地输入法,辅助大屏输入。可能的实现中,手机B实现抢占后,在手机B的编辑框中,可以同步显示大屏从手机A中同步的内容。例如,大屏的编辑框中已经从手机A的编辑框中同步到“你好啊,”,则在手机B弹出的编辑框中,可以同步到该“你好啊,”。
可能的实现方式中,在上述的拉起过程中,手机B上的通知可以先被隐藏,抢占过程中,用户可以触发显示手机B中隐藏的通知,并在通知中实现抢占。例如,在用户点击手机A上的通知进行选择确认后,手机B上的通知可以隐藏在通知栏中,当用户想要使用手机B进行抢占输入时,用户可以下拉手机B的通知栏,在手机B中显示该通知,点击手机B的通知,实现手机B的抢占。
可以理解,在手机B抢占成功,辅助大屏输入的过程中,手机A可以基于上述手机B相似的过程,再次抢占辅助大屏输入,在此不再赘述。
需要说明的是,在手机B抢占手机A的辅助输入的实现中,手机A可以处于通话等不能辅助大屏输入的状态,手机A也可以处于能够辅助大屏输入的状态。或者可以理解为,手机B可以在任意时机发起抢占,本申请实施例对抢占发生的时机不作限定。
可能的实现方式中,手机A也可以请求手机B抢占。例如,在家庭场景中,老年人持有手机A辅助大屏输入,但是老年人可能输入速度较慢,希望请求持有手机B的年轻人辅助大屏输入,则手机A可以向手机B发送请求,请求手机B抢占辅助输入,手机B可以基于手机A的请求实现抢占。
可能的实现方式中,在手机A辅助大屏输入的过程中,大屏也可以发起抢占。例如,在手机A辅助大屏输入的过程中,用户利用遥控器点击大屏上的编辑框,大屏可以向分布式组网内所有辅助设备广播当前抢占设备ID是大屏ID。分布式组网内的设备收到当前抢占设备ID的广播后,会对该抢占设备ID进行判断:对于大屏,判断是当前抢占设备,拉起大屏的本地输入法,用户可以使用遥控器在大屏的编辑框中继续进行输入;对于其他辅助设备,判断非当前抢占设备,其他辅助设备可以隐藏其他辅助设备的输入法。
需要说明的是,图8对应的实施例是本申请实施例的一种可能实现方式。在其他可能的实现方式中,可以是用户通过遥控器选定大屏上某应用提供的编辑框下的虚拟键盘触发后续的辅助大屏输入的过程,或者,可以是用户在手机中触发辅助大屏输入的过程,本申请实施例对此不作具体限定。
结合上述的描述,下面对大屏和手机交互的用户界面进行示例性说明。
示例性的,图9-10示出了用户触发进行辅助输入的用户界面示意图。
图9示出了大屏的一种用户界面图。如图9所示,用户可以利用遥控器901选定大屏中的编辑框902,则可以触发执行本申请实施例后续的手机辅助大屏输入的过程。或者,用户可以利用遥控器901选定大屏中的虚拟键盘中任意内容902,则可以触发执行本申请实施例后续的手机辅助大屏输入的过程。具体的手机辅助大屏输入的方式将在后续实施例中说明,在此不再赘述。
需要说明的是,图9示出了大屏的用户界面图中设置一个编辑框的示意图。可能的实现方式中,大屏的用户界面中可以包括多个编辑框,用户触发任一个编辑框均可以触发本申请实施例后续的手机辅助大屏输入的过程,本申请实施例对此不作具体限定。
图10示出了手机的一种用户界面图。例如,用户可以通过在手机的主屏幕下拉等方式,显示如图10的a图所示用户界面,在如图10的a图所示用户界面中,可以包括手机的一项或多项下述功能:WLAN、蓝牙、手电筒、静音、飞行模式、移动数据、无线投屏、截屏或辅助输入1001。其中辅助输入1001可以为本申请实施例的手机辅助大屏输入的功能。
可能的实现方式中,在用户点击辅助输入1001后,手机可以查找处于同一分布式组网中的大屏等设备,并获取大屏中的搜索框,建立与大屏之间的通信连接,在手机中可以进一步显示如图10的c图所示用户界面,在如图10的c图所示用户界面中,可以显示用于辅助大屏输入的编辑框,用户可以基于该编辑框辅助大屏进行输入。
可能的实现方式中,如果手机查到处于同一分布式组网中的大屏等设备的数量为多个,手机中还可以显示如图10的b图所示用户界面,在如图10的b图所示用户界面,可以显示多个大屏的标识,大屏的标识可以是该大屏的设备号、用户名或昵称等。用户可以在如图10的b图所示用户界面中选择希望辅助输入的大屏(例如点击大屏A或大屏B),并进入如图10的c图所示用户界面,本申请实施例对此不作具体限定。
在用户通过上述任意方式触发进行大屏输入后,示例性的,大屏可以查找分布式组网中的具有辅助输入能力的辅助设备(例如手机),并自动确定用于辅助输入的手机,或者向分布式组网中查找到的全部手机发送通知。
例如,如果大屏查找到分布式组网中存在一个手机,则大屏可以自动选择辅助输入的设备为该手机。
例如,如果大屏查找到分布式组网中存在多个手机,但是该多个手机中,存在用户设置的默认辅助输入的手机,则大屏可以自动选择该默认辅助输入的手机为辅助输入的设备。
例如,如果大屏查找到分布式组网中存在多个手机,但是该多个手机中,存在用户上次进行辅助输入时选择的辅助输入的手机,则大屏可以自动选择该用户上次进行辅助输入时选择的辅助输入的手机为辅助输入的设备。
例如,如果大屏查找到分布式组网中存在多个手机,大屏获取该多个手机中,被用户选择为辅助输入的频次最高的手机,则大屏可以自动选择该被用户选择为辅助输入的频次最高的手机为辅助输入的设备。
例如,如果大屏查找到分布式组网中存在多个手机,但是该多个手机中,存在与大屏所登录的用户账号相同的手机,则大屏可以自动选择该与大屏所登录的用户账号相同的手机为辅助输入的设备。
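可能的理解方式中,上述几种自动确定辅助输入设备的策略可以组合为一个优先级链。如下Java示意代码仅为一种可能的组合方式,Device结构及其字段名均为示例假设,并非实际实现:

```java
import java.util.Comparator;
import java.util.List;

/** 示意性草图:按文中列举的策略自动选定辅助输入设备,命名均为示例假设 */
public class HelperSelection {
    public static class Device {
        public final String id;
        public final boolean isDefault;  // 用户设置的默认辅助输入设备
        public final boolean lastUsed;   // 上次被选为辅助输入的设备
        public final int usageCount;     // 被选为辅助输入的历史频次
        public final String account;     // 设备登录的用户账号
        public Device(String id, boolean isDefault, boolean lastUsed,
                      int usageCount, String account) {
            this.id = id; this.isDefault = isDefault; this.lastUsed = lastUsed;
            this.usageCount = usageCount; this.account = account;
        }
    }

    /** 依次尝试:唯一设备 → 默认设备 → 上次使用 → 频次最高 → 与大屏同账号 */
    public static Device select(List<Device> devices, String screenAccount) {
        if (devices.isEmpty()) return null;
        if (devices.size() == 1) return devices.get(0);
        for (Device d : devices) if (d.isDefault) return d;
        for (Device d : devices) if (d.lastUsed) return d;
        Device byCount = devices.stream()
                .max(Comparator.comparingInt(d -> d.usageCount)).orElse(null);
        if (byCount != null && byCount.usageCount > 0) return byCount;
        for (Device d : devices) if (screenAccount.equals(d.account)) return d;
        return null; // 无法自动确定,可进一步由用户选择
    }
}
```

需注意,文中各策略是并列的举例,此处将其串成固定优先级只是便于说明的一种组合,实际优先顺序可以根据应用场景设定。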
示例性的,以大屏向分布式组网中的手机发送通知为例,下面对大屏和手机进行抢占输入的用户界面进行示例性说明。
分布式组网中可以接入有一个或多个手机,在分布式组网中接入一个手机的情况下,该一个手机可以辅助大屏进行输入,后续如果有其他手机也接入该分布式组网,则分布式组网中可以包括多个手机,多个手机可以实现如本申请后续实施例所描述的抢占过程。在分布式组网中接入多个手机的情况下,多个手机可以实现如本申请后续实施例所描述的抢占过程。
示例性的,图11-23示出了抢占设备辅助大屏进行输入的过程。以分布式组网中有大屏、手机A和手机B为例进行说明。
图11示出了大屏的一种用户界面示意图。大屏可以连接手机A的辅助AA和手机B的辅助AA,向手机A和手机B请求辅助输入,手机A和手机B中均可以弹出通知,该通知用于提示大屏请求辅助输入,示例性的,图11示出了手机A或手机B中收到通知的用户界面图。
图12示出了手机A确定辅助大屏输入的界面示意图。如图12的左图所示手机的用户界面图,用户选择手机A进行辅助输入,用户可以点击手机A通知中的确定按钮。可能的实现方式中,如图12的右图所示大屏的用户界面图,大屏中可以提示当前抢占的设备为手机A,如果一段时间没有其他设备抢占,则可以确认手机A辅助大屏输入。可以理解,另一种可能的实现方式中,手机A确认辅助输入的过程,大屏中可以不提示手机A抢占,观看大屏的用户对手机A抢占过程可以无感知。
可能的实现方式中,如图10所示的实施例中,如果用户从手机A发起辅助大屏输入的过程,则图11和图12对应的用户界面可以不显示,大屏可以确认手机A辅助大屏输入。
图13示出了利用手机A辅助大屏输入的界面示意图。如图13的左图所示的手机A的用户界面图,手机A中弹出辅助输入的编辑框,进而用户可以在该编辑框中辅助大屏输入。例如,如图13的右图所示的用户界面图,用户可以在手机A的编辑框中输入“你好啊,”,相应的,如图14所示,在大屏的编辑框中,可以同步显示手机A的编辑框中的“你好啊,”。
可能的实现方式中,用户在如图13的右图所示的手机A的编辑框中输入时,如果用户在如图13的右图所示手机A的编辑框中进行删除、高亮选定或光标移动等操作时,如图14所示的大屏的编辑框中可以同步显示如手机A的编辑框中进行的删除、高亮选定或光标移动等状态。
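可能的理解方式中,需要同步的编辑状态(文本内容、光标位置、高亮选区等)可以用如下Java示意代码中的简单结构描述,EditState、MirrorEditBox等命名及字段均为示例假设,并非实际实现:

```java
/** 示意性草图:描述需同步的编辑状态,字段名为示例假设 */
public class EditStateSketch {
    public static class EditState {
        public final String text;           // 文本内容
        public final int cursor;            // 光标位置
        public final int selStart, selEnd;  // 高亮选区,无选区时 selStart == selEnd
        public EditState(String text, int cursor, int selStart, int selEnd) {
            this.text = text; this.cursor = cursor;
            this.selStart = selStart; this.selEnd = selEnd;
        }
    }

    /** 大屏侧编辑框:收到同步消息后直接以对端的编辑状态为准 */
    public static class MirrorEditBox {
        private EditState state = new EditState("", 0, 0, 0);
        public void onSync(EditState remote) { state = remote; }
        public EditState getState() { return state; }
    }
}
```

这样,手机侧每次删除、移动光标或高亮选定时,只需把最新的EditState整体发给大屏,大屏编辑框即可呈现与手机编辑框一致的状态。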
在用户使用手机A辅助大屏进行输入时,可能由于某些原因导致手机A的辅助输入被中断,例如,手机A在辅助输入的过程中接到手机来电,或者,手机A在辅助输入的过程中接到视频或语音通话。又或者,用户在辅助输入的过程中想要切换设备对大屏进行输入的情况,则涉及到抢占输入的过程,抢占设备可以是手机B,抢占设备也可以是大屏本身。
示例性的,图15-17示出了手机B对大屏进行抢占输入的界面示意图。
一种可能的实现方式中,手机B的用户可以通过触发通知栏进行抢占。示例性的,图15示出了手机B下拉通知栏进行抢占的界面示意图。如图15所示,在手机B进行抢占输入时,用户可以下拉手机B的通知栏,通知栏中可以显示手机B之前从大屏收到的用于请求辅助输入的通知,用户通过手机B之前弹出的提示大屏请求辅助输入的通知发起抢占,例如用户点击该通知中的确定辅助大屏输入的控件,手机B可以连接大屏进行抢占输入。
可能的实现方式中,图16示出了大屏的一种用户界面示意图,如图16所示,用户在手机B上确认抢占后,手机B的辅助设备AA通知大屏抢占设备手机B的ID,大屏用户界面可以弹出抢占设备手机B的ID为手机B00的通知。
可选的,大屏可以对手机A和手机B广播当前抢占设备手机B的ID,如图17所示,手机A的用户界面中可以显示当前抢占设备手机B的ID为手机B00的通知。手机B判断出当前抢占设备为本设备,手机B可以拉起本地输入法键盘,手机A判断出当前抢占设备不是本设备,手机A可以隐藏本地输入法键盘,大屏判断出当前抢占设备不是本设备,大屏隐藏本地输入法键盘。
可能的实现方式中,用户在手机B中确认抢占后,大屏可以不用在用户界面弹出抢占设备手机B的ID为手机B00的通知,大屏会对手机A和手机B广播当前抢占设备手机B的ID,手机B判断出当前抢占设备为本设备,手机B拉起本地输入法键盘,手机A判断出当前抢占设备不是本设备,手机A隐藏本地输入法键盘,大屏判断出当前抢占设备不是本设备,大屏隐藏本地输入法键盘。或者可以理解为,手机B发起抢占的过程,大屏中可以不提示手机B抢占,观看大屏的用户对手机B抢占过程可以无感知。
另一种可能的实现方式中,手机B的用户可以通过手机A的请求进行抢占。示例性的,如图18所示,图18中手机A中可以显示请求手机B辅助输入的界面,用户可以通过点击手机A中的确定选项,向手机B请求辅助大屏输入。图18中手机B可以显示手机A请求辅助大屏输入的通知,用户可以在手机B中接受手机A的请求,实现辅助大屏输入的抢占。
另一种可能的实现方式中,初始时,大屏和手机A接入分布式组网,手机A辅助大屏输入,之后,手机B接入分布式组网,手机B中可以显示用于提示用户是否抢占辅助输入的界面。示例性的,如图19所示,手机B中显示提示用户是否抢占辅助输入的界面,用户可以在手机B中点击确定选项,实现辅助大屏输入的抢占。
可以理解,手机B实现抢占的方式还可以根据实际应用场景设定,本申请实施例对此不作具体限定。
示例性的,图20示出了手机B的一种用户界面示意图。如图20的左图所示,在手机B抢占成功后,手机B的编辑框中可以同步到大屏的编辑框中的内容“你好啊,”。用户可以在手机B的编辑框中继续输入,例如,用户在手机B的编辑框内继续输入“朋友”,如图20的右图所示,手机B的编辑框中可以显示“你好啊,朋友”。相应的,大屏编辑框内可以同步显示手机B编辑框中的“你好啊,朋友”。
可以理解,在手机B辅助大屏输入的过程中,手机A可以再次进行抢占,抢占过程类似于手机B的上述抢占过程,在此不再赘述。
可能的实现方式中,在手机A或手机B辅助大屏输入的过程中,大屏也可以进行抢占。
示例性的,手机A正在辅助大屏输入,且已在手机A的编辑框和大屏的编辑框中输入了“你好啊,”,用户希望采用大屏输入。
如图21的左图的大屏用户界面图,用户可以利用遥控器等选中大屏上的编辑框。大屏可以向分布式组网内的大屏、手机A和手机B广播当前抢占设备的ID是大屏ID。手机A和手机B接收到该广播后,可以在手机A或手机B中弹出如图21右图所示的用户界面,该图21右图所示的用户界面中,可以显示用于提示当前抢占的设备为大屏。可能的实现方式中,手机A和手机B中也可以不显示用于提示当前抢占设备为大屏的通知,本申请实施例对此不作具体限定。
进一步的,大屏、手机A和手机B可以对该抢占设备ID进行判断,手机A和手机B判断出当前抢占设备不是本设备,手机A和手机B隐藏本地输入法键盘,大屏判断出当前抢占设备为本设备,大屏拉起本地输入法键盘,用户可以使用遥控器在大屏的编辑框中继续输入。
需要说明的是,上述手机辅助大屏输入时的用户界面图均是示例性说明,可能的实现方式中,手机辅助大屏输入时的界面中,也可以同步大屏中的部分或全部内容,使得手机用户可以基于手机界面了解大屏的状态。
示例性的,图22示出了一种手机的用户界面。如图22所示,用户在利用手机辅助大屏输入时,可以将大屏的全部或部分内容投屏到手机中,例如在手机中显示大屏的编辑框相关的内容,并在大屏内容的上层显示手机的编辑框,这样用户在利用手机的编辑框中输入时,在手机的用户界面中可以同步看到大屏编辑框中的状态,用户在辅助输入时,不需要抬头看大屏中的输入状态。
需要说明的是,上述实施例中,以用户辅助大屏输入汉字为例进行示例,可能的实现方式中,用户可以辅助大屏进行英文词组输入或其他形式的文本输入,本申请实施例对辅助输入的具体内容不做限定。
对应于上述的框架和用户界面示例,示例性的,图23示出了本申请实施例一种具体的手机辅助大屏进行输入的流程示意图。
如图23所示,手机辅助大屏输入可以包括:远程输入法拉起过程和远程输入法抢占过程。
示例性的,在远程输入法拉起过程中,可以组建分布式组网,分布式组网内可以接入一个大屏,两部手机(例如手机A和手机B),以及一部平板等。
用户可以利用遥控器等设备点击大屏的编辑框,使得大屏的编辑框中获取焦点。
大屏可以查询分布式组网内所有具有辅助AA的辅助设备,在查询到辅助设备的情况下,大屏连接各辅助设备的辅助AA,同时传递数据通道接口给各辅助AA,各辅助设备的辅助AA弹出通知,该通知用于提示大屏需要辅助输入。以大屏查询到的辅助设备包括手机A和手机B为例,手机A和手机B中可以弹出通知等待用户选择确认。
用户点击手机A的通知确认利用手机A辅助大屏输入,手机A可以弹出编辑框,同时编辑框拉起输入法键盘,用户可以在手机A的编辑框中进行输入。
用户在手机A输入的过程中,由于某些原因,用户想要切换其他辅助设备进行输入,比如手机B,进入远程输入法抢占过程。
在远程输入法抢占过程中,用户点击手机B的通知(或编辑框),手机B的辅助AA会通知大屏当前抢占设备ID是手机B的ID,同时,大屏会广播当前抢占设备ID,分布式组网内所有辅助设备(例如手机A和手机B)会根据大屏广播的抢占设备ID判断本设备是否是当前的抢占设备。
例如,手机B判断出当前的抢占设备是本设备,手机B会拉起本地输入法键盘,同时通过数据通道接口同步大屏编辑框内容到手机B的编辑框,用户可以在手机B的编辑框内进行输入。手机A判断出当前的抢占设备不是本设备,手机A可以判断是否已经拉起输入法键盘,如果手机A拉起输入法键盘,手机A可以隐藏本地输入法键盘。
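可能的理解方式中,辅助设备收到大屏广播的抢占设备ID后的判断流程可以草拟为如下Java示意代码,HelperDevice、onPreemptBroadcast等命名均为示例假设,并非实际实现:

```java
/** 示意性草图:辅助设备处理"当前抢占设备ID"广播的判断逻辑,命名为示例假设 */
public class PreemptSketch {
    public static class HelperDevice {
        public final String id;
        public boolean keyboardShown = false; // 本地输入法键盘是否已拉起
        public String editBoxContent = "";    // 本地编辑框内容

        public HelperDevice(String id) { this.id = id; }

        /** 本设备抢占成功则拉起键盘并同步大屏编辑框已有内容,否则隐藏已拉起的键盘 */
        public void onPreemptBroadcast(String preemptId, String screenContent) {
            if (id.equals(preemptId)) {
                keyboardShown = true;            // 拉起本地输入法键盘
                editBoxContent = screenContent;  // 同步大屏编辑框已有内容
            } else if (keyboardShown) {
                keyboardShown = false;           // 非当前抢占设备,隐藏键盘
            }
        }
    }
}
```

例如,大屏广播抢占设备ID为手机B并附带编辑框内容“你好啊,”时,手机B拉起键盘并同步该内容,手机A判断自身不是抢占设备,隐藏已拉起的键盘。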
这样,基于本申请实施例的上述方法,分布式组网中的任一辅助设备可以随时发起便捷的抢占,抢占成功后,可以辅助大屏进行输入。
在采用对应各个功能划分各个功能模块的情况下,如图24所示,示出了本申请实施例提供一种第一设备、第二设备或第三设备的一种可能的结构示意图,该第一设备、第二设备或第三设备包括:显示屏幕2401和处理单元2402。
其中,显示屏幕2401,用于支持第一设备、第二设备或第三设备执行上述实施例中的显示步骤,或者本申请实施例所描述的技术的其他过程。显示屏幕2401可以是触摸屏或其他硬件或硬件与软件的综合体。
处理单元2402,用于支持第一设备、第二设备或第三设备执行上述方法实施例中的处理步骤,或者本申请实施例所描述的技术的其他过程。
其中,上述方法实施例涉及的各步骤的所有相关内容均可以援引到对应功能模块的功能描述,在此不再赘述。
当然,电子设备包括但不限于上述所列举的单元模块。并且,上述功能单元的具体所能够实现的功能也包括但不限于上述实例所述的方法步骤对应的功能,电子设备的其他单元的详细描述可以参考其所对应方法步骤的详细描述,本申请实施例这里不予赘述。
在采用集成的单元的情况下,上述实施例中所涉及的第一设备、第二设备或第三设备可以包括:处理模块、存储模块和显示屏幕。处理模块用于对第一设备、第二设备或第三设备的动作进行控制管理。显示屏幕用于根据处理模块的指示进行内容显示。存储模块,用于保存第一设备、第二设备或第三设备的程序代码和数据。进一步的,该第一设备、第二设备或第三设备还可以包括输入模块,通信模块,该通信模块用于支持第一设备、第二设备或第三设备与其他网络实体的通信,以实现第一设备、第二设备或第三设备的通话,数据交互,Internet访问等功能。
其中,处理模块可以是处理器或控制器。通信模块可以是收发器、RF电路或通信接口等。存储模块可以是存储器。显示模块可以是屏幕或显示器。输入模块可以是触摸屏,语音输入装置,或指纹传感器等。
其中,上述通信模块可以包括RF电路,还可以包括无线保真(wireless fidelity,Wi-Fi)模块、近距离无线通信技术(near field communication,NFC)模块和蓝牙模块。RF电路、NFC模块、WI-FI模块和蓝牙模块等通信模块可以统称为通信接口。其中,上述处理器、RF电路、和显示屏幕和存储器可以通过总线耦合在一起。
如图25所示,示出了本申请实施例提供的第一设备、第二设备或第三设备的又一种可能的结构示意图,包括:一个或多个处理器2501、存储器2502、摄像头2504和显示屏幕2503;上述各器件可以通过一个或多个通信总线2506通信。
其中,一个或多个计算机程序2505被存储在存储器2502中,并被配置为被一个或多个处理器2501执行;一个或多个计算机程序2505包括指令,指令用于执行上述任意步骤的显示方法。当然,电子设备包括但不限于上述所列举的器件,例如,上述电子设备还可以包括射频电路、定位装置、传感器等等。
本申请还提供以下实施例。需要说明的是,以下实施例的编号并不一定需要遵从前面实施例的编号顺序。
实施例21.一种设备通信方法,应用于包括第一设备、第二设备和第三设备的系统,所述方法包括:
所述第一设备显示包括第一编辑框的第一界面;
响应于对所述第一编辑框的选择操作,所述第一设备确定所述第二设备和所述第三设备加入分布式组网;
所述第一设备显示第二界面,所述第二界面包括对应所述第二设备的第一选项和对应所述第三设备的第二选项;
响应于对所述第一选项的触发操作,所述第一设备向所述第二设备发送指示消息;
所述第二设备根据所述指示消息显示第三界面,所述第三界面包括第二编辑框;
在所述第二编辑框中存在编辑状态的情况下,将所述编辑状态同步到所述第一编辑框中。
实施例22.根据实施例21所述的方法,所述第二设备包括接口服务,所述接口服务用于所述第一设备与所述第二设备之间的编辑状态的同步。
实施例23.根据实施例21或22所述的方法,所述编辑状态包括下述一项或多项:文本内容、光标或文字内容的高亮标记。
实施例24.根据实施例21-23任一项所述的方法,所述第二设备根据所述指示消息显示第三界面,包括:
所述第二设备响应于所述指示消息显示通知界面;所述通知界面包括确认辅助输入的第三选项;
响应于对所述第三选项的触发操作,所述第二设备显示所述第三界面。
实施例25.根据实施例21-24任一项所述的方法,所述第三界面还包括:所述第一界面的全部或部分内容。
实施例26.根据实施例25所述的方法,所述第二编辑框与所述第一界面的全部或部分内容分层显示,且所述第二编辑框显示在所述第一界面的全部或部分内容的上层。
实施例27.根据实施例21-26任一项所述的方法,所述第二设备根据所述指示消息显示第三界面之后,所述方法还包括:
响应于对所述第二编辑框的触发,所述第二设备显示虚拟键盘;
所述第二设备根据所述虚拟键盘和/或所述第二编辑框中接收的输入操作,在所述第二编辑框中显示所述编辑状态。
实施例28.根据实施例21-27任一项所述的方法,所述第一设备包括下述任一项:电视、大屏或可穿戴设备;所述第二设备或所述第三设备包括下述任一项:手机、平板或可穿戴设备。
实施例29.一种设备通信方法,应用于包括第一设备、第二设备和第三设备的系统,所述方法包括:
所述第一设备显示包括第一编辑框的第一界面;
响应于对所述第一编辑框的选择操作,所述第一设备确定所述第二设备和所述第三设备加入分布式组网;
所述第一设备确定所述第二设备为辅助输入设备;
所述第一设备向所述第二设备发送指示消息;
所述第二设备根据所述指示消息显示第三界面,所述第三界面包括第二编辑框;
在所述第二编辑框中存在编辑状态的情况下,将所述编辑状态同步到所述第一编辑框中。
实施例210.一种设备通信方法,应用于包括第一设备、第二设备和第三设备的系统,所述方法包括:
所述第二设备显示包括所述第一设备的选项的第四界面;
响应于对所述第一设备的选项的选择操作,所述第二设备向所述第一设备发送指示消息;
所述第一设备显示包括第一编辑框的第一界面;
所述第二设备显示第三界面,所述第三界面包括第二编辑框;
在所述第二编辑框中存在编辑状态的情况下,将所述编辑状态同步到所述第一编辑框中。
实施例211.一种设备通信方法,应用于第一设备,所述方法包括:
所述第一设备显示包括第一编辑框的第一界面;
响应于对所述第一编辑框的选择操作,所述第一设备确定所述第二设备和所述第三设备加入分布式组网;
所述第一设备显示第二界面,所述第二界面包括对应所述第二设备的第一选项和对应所述第三设备的第二选项;
响应于对所述第一选项的触发操作,所述第一设备向所述第二设备发送指示消息;所述指示消息用于指示所述第二设备显示第三界面,所述第三界面包括第二编辑框;
在所述第二编辑框中存在编辑状态的情况下,将所述编辑状态同步到所述第一编辑框中。
实施例212.一种设备通信方法,应用于第二设备,所述方法包括:
所述第二设备显示包括第一设备的选项的第四界面;
响应于对所述第一设备的选项的选择操作,所述第二设备向所述第一设备发送指示消息;所述指示消息用于指示所述第一设备显示包括第一编辑框的第一界面;
所述第二设备显示第三界面,所述第三界面包括第二编辑框;
在所述第二编辑框中存在编辑状态的情况下,将所述编辑状态同步到所述第一编辑框中。
实施例213.一种设备通信系统,包括第一设备、第二设备和第三设备,所述第一设备用于执行如实施例21-29、210-212任一项所述的第一设备的步骤,所述第二设备用于执行如实施例21-29、210-212任一项所述的第二设备的步骤,所述第三设备用于执行如实施例21-29、210-212任一项所述的第三设备的步骤。
实施例214.一种第一设备,包括:至少一个存储器和至少一个处理器;
所述存储器用于存储程序指令;
所述处理器用于调用所述存储器中的程序指令使得所述第一设备执行实施例21-29、210-212任一项所述的第一设备执行的步骤。
实施例215.一种第二设备,包括:至少一个存储器和至少一个处理器;
所述存储器用于存储程序指令;
所述处理器用于调用所述存储器中的程序指令使得所述第二设备执行实施例21-29、210-212任一项所述的第二设备执行的步骤。
实施例216.一种计算机可读存储介质,其上存储有计算机程序,使得所述计算机程序被第一设备的处理器执行时实现实施例21-29、210-212任一项所述的所述第一设备执行的步骤;或者,使得所述计算机程序被第二设备的处理器执行时实现实施例21-29、210-212任一项所述的所述第二设备执行的步骤;或者,使得所述计算机程序被第三设备的处理器执行时实现实施例21-29、210-212任一项所述的所述第三设备执行的步骤。
上述实施例21-29、实施例210-实施例216的具体实现可以参照如图26-37的说明。
在利用手机辅助大屏输入的可能实现中,大屏在分布式组网内查找到具有辅助输入功能的辅助设备时,大屏可以向分布式组网内所有具有辅助输入功能的辅助设备发送广播,通知该所有辅助设备大屏需要辅助输入。
然而,可能只有一个或部分辅助设备参与辅助大屏输入,而大屏向分布式组网内所有辅助设备发送广播,会对分布式组网内其他不参与辅助大屏输入的辅助设备造成一定的干扰。
基于此,本申请实施例提供一种设备通信方法,用户可以在大屏中选择目标辅助设备,进而可以向目标辅助设备发送通知,不向其他辅助设备发送通知,可以避免对其他设备的干扰。或者,用户可以在大屏中选择目标辅助设备,建立与目标辅助设备的通信连接,在目标辅助设备中弹出用于辅助大屏输入的编辑框,期间,目标辅助设备中可以不显示通知界面,也不需要用户触发通知。
示例性的,图26示出了本申请实施例的设备通信方法的具体系统架构示意图。
用户通过遥控器点击大屏上某应用提供的编辑框,大屏可以在分布式组网内查找具有辅助输入能力的设备(例如设置有辅助AA的手机),大屏查找到具有辅助输入能力的设备后,大屏上可以弹出包括具有辅助输入能力的设备的标识的界面。其中,具有辅助输入能力的设备的标识可以是该具有辅助输入能力的设备的设备号、用户名或昵称等。
如果用户在具有辅助输入能力的设备中选择了目标设备,以目标设备为手机为例,大屏可传递输入数据接口给大屏的输入法管理框架IMF,大屏的输入法管理框架可以与手机的辅助AA建立连接,手机的辅助AA可以模拟点击事件拉起手机的本地输入法,在手机中弹出包含编辑框的输入窗口,后续用户可以在编辑框中采用手机的本地输入法输入内容,手机的辅助AA可以向大屏的输入法管理框架返回跨进程接口,大屏的输入法管理框架可以通过该跨进程接口将大屏的输入数据接口跨进程包装后传递给手机的辅助AA,后续手机的辅助AA可以基于大屏的输入数据接口将手机编辑框中的内容向大屏同步。
例如,用户在手机的编辑框中进行文本输入、文本删除、高亮选定文本或移动光标等操作时,手机的辅助AA可以调用大屏的输入数据接口将手机编辑框中的内容同步到大屏的编辑框中。
可以理解,如果大屏不具备远程输入的能力,或大屏在分布式组网中没有查找到辅助设备,则大屏可以利用大屏的本地输入法进行输入。
需要说明的是,图26是本申请实施例的一种可能实现方式。在其他可能的实现方式中,可以是用户通过遥控器选定大屏上某应用提供的编辑框下的虚拟键盘触发后续的辅助大屏输入的过程,或者,可以是用户在手机中触发辅助大屏输入的过程,本申请实施例对此不作具体限定。
结合上述的描述,下面对大屏和手机交互的用户界面进行示例性说明。
示例性的,图27-28示出了用户触发进行辅助输入的用户界面示意图。
图27示出了大屏的一种用户界面图。如图27所示,用户可以利用遥控器2701选定大屏中的编辑框2702,则可以触发执行本申请实施例后续的手机辅助大屏输入的过程。或者,用户可以利用遥控器2701选定大屏中的虚拟键盘中任意内容2702,则可以触发执行本申请实施例后续的手机辅助大屏输入的过程。具体的手机辅助大屏输入的方式将在后续实施例中说明,在此不再赘述。
需要说明的是,图27示出了大屏的用户界面图中设置一个编辑框的示意图。可能的实现方式中,大屏的用户界面中可以包括多个编辑框,用户触发任一个编辑框均可以触发本申请实施例后续的手机辅助大屏输入的过程,本申请实施例对此不作具体限定。
图28示出了手机的一种用户界面图。例如,用户可以通过在手机的主屏幕下拉等方式,显示如图28的a图所示用户界面,在如图28的a图所示用户界面中,可以包括手机的一项或多项下述功能:WLAN、蓝牙、手电筒、静音、飞行模式、移动数据、无线投屏、截屏或辅助输入2801。其中辅助输入2801可以为本申请实施例的手机辅助大屏输入的功能。
可能的实现方式中,在用户点击辅助输入2801后,手机可以查找处于同一分布式组网中的大屏等设备,并获取大屏中的搜索框,建立与大屏之间的通信连接,在手机中可以进一步显示如图28的c图所示用户界面,在如图28的c图所示用户界面中,可以显示用于辅助大屏输入的编辑框,用户可以基于该编辑框辅助大屏进行输入。
可能的实现方式中,如果手机查到处于同一分布式组网中的大屏等设备的数量为多个,手机中还可以显示如图28的b图所示用户界面,在如图28的b图所示用户界面,可以显示多个大屏的标识,大屏的标识可以是该大屏的设备号、用户名或昵称等。用户可以在如图28的b图所示用户界面中选择希望辅助输入的大屏(例如点击大屏A或大屏B),并进入如图28的c图所示用户界面,本申请实施例对此不作具体限定。
在用户通过上述任意方式触发进行大屏输入后,示例性的,图29-34示出了手机辅助大屏输入的用户界面示意图。
图29示出了大屏的一种用户界面图。如图29所示,用户可以采用如图27对应的方式触发进入辅助输入的场景,大屏可以在分布式组网内查找具有辅助输入能力的辅助设备,并在大屏上显示查找到的辅助设备“手机A”和“手机B”。可以理解,大屏中可以采用任一可能的形式显示辅助设备的标识,例如可以以列表、图片或数字等。
用户可以利用遥控器等设备,在大屏中选择“手机B”,则后续大屏可以与手机B交互,以利用手机B辅助大屏输入。
可能的实现方式中,大屏也可以自动确定用于辅助输入的设备。
例如,如果大屏查找到分布式组网中存在一个手机,则大屏可以自动选择辅助输入的设备为该手机,且不显示如图29所示的用户界面。可选的,在后续如果大屏发现除该手机外的其他手机接入分布式组网,大屏中可以显示如图29所示的用户界面。
例如,如果大屏查找到分布式组网中存在多个手机,但是该多个手机中,存在用户设置的默认辅助输入的手机,则大屏可以自动选择该默认辅助输入的手机为辅助输入的设备,且不显示如图29所示的用户界面。
例如,如果大屏查找到分布式组网中存在多个手机,但是该多个手机中,存在用户上次进行辅助输入时选择的辅助输入的手机,则大屏可以自动选择该用户上次进行辅助输入时选择的辅助输入的手机为辅助输入的设备,且不显示如图29所示的用户界面。
例如,如果大屏查找到分布式组网中存在多个手机,大屏获取该多个手机中,被用户选择为辅助输入的频次最高的手机,则大屏可以自动选择该被用户选择为辅助输入的频次最高的手机为辅助输入的设备,且不显示如图29所示的用户界面。
例如,如果大屏查找到分布式组网中存在多个手机,但是该多个手机中,存在与大屏所登录的用户账号相同的手机,则大屏可以自动选择该与大屏所登录的用户账号相同的手机为辅助输入的设备,且不显示如图29所示的用户界面。
也就是说,图29所示的大屏的用户界面不是必要的,也可以不显示如图29所示的用户界面。本申请实施例对图29所示的用户界面的具体形式,以及触发显示图29所示的用户界面的方式不作限定。
以用户利用遥控器等设备,在大屏中选择“手机B”作为辅助输入设备为例,用户在大屏中选择手机B后,手机B中可以弹出通知,提示用户大屏请求辅助输入。
示例性的,如图30最左图所示的用户界面,手机B中可以弹出用于提示大屏请求辅助输入的通知,用户可以触发手机B中的通知以确认辅助大屏输入,进一步的,如图30中间图所示的用户界面,手机B中可以弹出用于辅助大屏输入的编辑框,进一步的,用户可以通过点击等触发如图30中间图所示的编辑框,手机B可以显示如图30最右图所示的用户界面,该用户界面中可以显示手机的虚拟键盘(或称为软键盘),用户后续可以利用手机B的虚拟键盘辅助大屏输入。
另一种可能的实现中,用户在大屏中选择手机B后,手机B中可以不接收到通知,而是弹出如图31左图所示的用于辅助大屏输入的编辑框,进一步的,用户可以通过点击等触发如图31左图所示的编辑框,手机B可以显示如图31右图所示的用户界面,该用户界面中可以显示手机的虚拟键盘(或称为软键盘),用户后续可以利用手机B的虚拟键盘辅助大屏输入。
可以理解,用户在大屏中没有选择“手机A”,所以大屏可以不与手机A交互,对手机A的用户不会造成打扰。
需要说明的是,如果本申请实施例采用如图28对应的方式触发进行辅助大屏输入,则省略如图29-31所示的用户界面图,且在辅助输入时,除触发进入辅助大屏输入的手机,其他手机可以无感知,不会对其他手机用户造成干扰。
示例性的,图32示出了用户在手机B的编辑框中辅助大屏输入的用户界面示意图。示例性的,如图32左图所示的手机B的用户界面图,用户可以在手机B的编辑框中输入“狮子”,手机B的编辑框中还可以在“狮子”后面显示光标,如图32右图所示大屏的用户界面图,可以在大屏的编辑框中同步到该“狮子”和光标。
图33示出了用户可以在手机B的编辑框中执行移动光标的用户界面示意图。示例性的,如图33左图所示的手机B的用户界面图,用户可以在手机B的编辑框中将光标移动至“狮子”之前,在光标前并添加“老”,如图33右图所示大屏的用户界面图,可以在大屏的编辑框中同步到该“狮子”之前的光标以及光标之前的“老”。
图34示出了用户可以在手机B的编辑框中执行高亮选中目标词的用户界面示意图。示例性的,如图34左图所示的手机B的用户界面图,用户可以在手机B的编辑框高亮选中“老”,如图34右图所示大屏的用户界面图,可以在大屏的编辑框中同步到该高亮显示的“老”。
可以理解,如果用户在大屏中选择了手机A,用户在手机A中辅助大屏输入的实现中,手机A的用户界面可以与手机B的用户界面相似,在此不再赘述。
需要说明的是,上述手机辅助大屏输入时的用户界面图均是示例性说明,可能的实现方式中,手机辅助大屏输入时的界面中,也可以同步大屏中的部分或全部内容,使得手机用户可以基于手机界面了解大屏的状态。
示例性的,图35示出了一种手机的用户界面。如图35所示,用户在利用手机(如上述的手机A或手机B)辅助大屏输入时,可以将大屏的全部或部分内容投屏到手机中,例如在手机中显示大屏的编辑框相关的内容,并在大屏内容的上层显示手机的编辑框,这样用户在利用手机的编辑框中输入时,在手机的用户界面中可以同步看到大屏编辑框中的状态,用户在辅助输入时,不需要抬头看大屏中的输入状态。
需要说明的是,上述实施例中,以用户辅助大屏输入汉字为例进行示例,可能的实现方式中,用户可以辅助大屏进行英文词组输入或其他形式的文本输入,本申请实施例对辅助输入的具体内容不做限定。
在采用对应各个功能划分各个功能模块的情况下,如图36所示,示出了本申请实施例提供一种第一设备、第二设备或第三设备的一种可能的结构示意图,该第一设备、第二设备或第三设备包括:显示屏幕3601和处理单元3602。
其中,显示屏幕3601,用于支持第一设备、第二设备或第三设备执行上述实施例中的显示步骤,或者本申请实施例所描述的技术的其他过程。显示屏幕3601可以是触摸屏或其他硬件或硬件与软件的综合体。
处理单元3602,用于支持第一设备、第二设备或第三设备执行上述方法实施例中的处理步骤,或者本申请实施例所描述的技术的其他过程。
其中,上述方法实施例涉及的各步骤的所有相关内容均可以援引到对应功能模块的功能描述,在此不再赘述。
当然,电子设备包括但不限于上述所列举的单元模块。并且,上述功能单元的具体所能够实现的功能也包括但不限于上述实例所述的方法步骤对应的功能,电子设备的其他单元的详细描述可以参考其所对应方法步骤的详细描述,本申请实施例这里不予赘述。
在采用集成的单元的情况下,上述实施例中所涉及的第一设备、第二设备或第三设备可以包括:处理模块、存储模块和显示屏幕。处理模块用于对第一设备、第二设备或第三设备的动作进行控制管理。显示屏幕用于根据处理模块的指示进行内容显示。存储模块,用于保存第一设备、第二设备或第三设备的程序代码和数据。进一步的,该第一设备、第二设备或第三设备还可以包括输入模块,通信模块,该通信模块用于支持第一设备、第二设备或第三设备与其他网络实体的通信,以实现第一设备、第二设备或第三设备的通话,数据交互,Internet访问等功能。
其中,处理模块可以是处理器或控制器。通信模块可以是收发器、RF电路或通信接口等。存储模块可以是存储器。显示模块可以是屏幕或显示器。输入模块可以是触摸屏,语音输入装置,或指纹传感器等。
其中,上述通信模块可以包括RF电路,还可以包括无线保真(wireless fidelity,Wi-Fi)模块、近距离无线通信技术(near field communication,NFC)模块和蓝牙模块。RF电路、NFC模块、WI-FI模块和蓝牙模块等通信模块可以统称为通信接口。其中,上述处理器、RF电路、和显示屏幕和存储器可以通过总线耦合在一起。
如图37所示,示出了本申请实施例提供的第一设备、第二设备或第三设备的又一种可能的结构示意图,包括:一个或多个处理器3701、存储器3702、摄像头3704和显示屏幕3703;上述各器件可以通过一个或多个通信总线3706通信。
其中,一个或多个计算机程序被3705存储在存储器3702中,并被配置为被一个或多个处理器3701执行;一个或多个计算机程序3705包括指令,指令用于执行上述任意步骤的显示方法。当然,电子设备包括但不限于上述所列举的器件,例如,上述电子设备还可以包括射频电路、定位装置、传感器等等。
本申请还提供以下实施例。需要说明的是,以下实施例的编号并不一定需要遵从前面实施例的编号顺序。
实施例31.一种设备通信方法,应用于包括第一设备和第二设备的系统,所述方法包括:
所述第一设备显示包括第一编辑框的第一界面;
所述第一设备向所述第二设备发送指示消息;
所述第二设备根据所述指示消息显示第二界面,所述第二界面包括第二编辑框;
在所述第二编辑框中存在关键字的情况下,所述第一设备将所述关键字同步到所述第一编辑框中;
所述第一设备确定所述关键字对应的候选词;
所述第二设备获取所述候选词,并显示第三界面,所述第三界面包括所述候选词。
实施例32.根据实施例31所述的方法,所述第二设备包括接口服务,所述接口服务用于所述第一设备与所述第二设备之间的编辑状态的同步。
实施例33.根据实施例32所述的方法,所述编辑状态包括下述一项或多项:文本内容、光标或文字内容的高亮标记。
实施例34.根据实施例31-33任一项所述的方法,所述第二设备根据所述指示消息显示第二界面,包括:
所述第二设备响应于所述指示消息显示通知界面;所述通知界面包括确认辅助输入的选项;
响应于对所述选项的触发操作,所述第二设备显示所述第二界面。
实施例35.根据实施例31-34任一项所述的方法,所述第二界面还包括:所述第一界面的全部或部分内容。
实施例36.根据实施例35所述的方法,所述第二编辑框与所述第一界面的全部或部分内容分层显示,且所述第二编辑框显示在所述第一界面的全部或部分内容的上层。
实施例37.根据实施例31-36任一项所述的方法,所述第二设备根据所述指示消息显示第二界面之后,所述方法还包括:
响应于对所述第二编辑框的触发,所述第二设备显示虚拟键盘;
所述第二设备根据所述虚拟键盘和/或所述第二编辑框中接收的输入操作,在所述第二编辑框中显示所述编辑状态。
实施例38.根据实施例31-37任一项所述的方法,所述第一设备包括下述任一项:电视、大屏或可穿戴设备;所述第二设备包括下述任一项:手机、平板或可穿戴设备。
实施例39.根据实施例31-38任一项所述的方法,所述第三界面还包括所述第二设备基于所述关键字联想的本地候选词,所述候选词和所述本地候选词在所述第三界面的显示方式包括下述任一种:
所述候选词和所述本地候选词在所述第三界面中分栏显示;
所述候选词在所述第三界面中显示在所述本地候选词的前面;
所述候选词在所述第三界面中显示在所述本地候选词的后面;
所述候选词和所述本地候选词在所述第三界面中混合显示;
所述候选词和所述本地候选词在所述第三界面中采用不同标识区分。
实施例310.根据实施例31-39任一项所述的方法,所述候选词的排序与所述第一设备中的历史用户行为相关。
实施例311.根据实施例31-39、310任一项所述的方法,还包括:
所述第二设备响应于用户对任一项所述候选词的触发,在所述第二编辑框中显示所述任一项候选词。
实施例312.一种设备通信方法,应用于包括第一设备和第二设备的系统,所述方法包括:
所述第二设备显示包括所述第一设备的选项的第四界面;
响应于对所述第一设备的选项的选择操作,所述第二设备向所述第一设备发送指示消息;
所述第一设备显示包括第一编辑框的第一界面;
所述第二设备显示第二界面,所述第二界面包括第二编辑框;
在所述第二编辑框中存在关键字的情况下,所述第一设备将所述关键字同步到所述第一编辑框中;
所述第一设备确定所述关键字对应的候选词;
所述第二设备获取所述候选词,并显示第三界面,所述第三界面包括所述候选词。
实施例313.一种设备通信方法,应用于第一设备,所述方法包括:
所述第一设备显示包括第一编辑框的第一界面;
所述第一设备向所述第二设备发送指示消息;所述指示消息用于指示所述第二设备显示第二界面,所述第二界面包括第二编辑框;
在所述第二编辑框中存在关键字的情况下,所述第一设备将所述关键字同步到所述第一编辑框中;
所述第一设备确定所述关键字对应的候选词;
所述第一设备将所述候选词同步到所述第二设备。
实施例314.一种设备通信方法,应用于第二设备,所述方法包括:
所述第二设备接收来自所述第一设备的指示消息;所述第一设备中显示有包括第一编辑框的第一界面;
所述第二设备根据所述指示消息显示第二界面,所述第二界面包括第二编辑框;
在所述第二编辑框中存在关键字的情况下,所述第二设备将所述关键字同步到所述第一编辑框中,用于所述第一设备确定所述关键字对应的候选词;
所述第二设备获取所述候选词,并显示第三界面,所述第三界面包括所述候选词。
实施例315.一种设备通信方法,应用于第二设备,所述方法包括:
所述第二设备显示包括第一设备的选项的第四界面;
响应于对所述第一设备的选项的选择操作,所述第二设备向所述第一设备发送指示消息;所述指示消息用于指示所述第一设备显示包括第一编辑框的第一界面;
所述第二设备显示第二界面,所述第二界面包括第二编辑框;
在所述第二编辑框中存在关键字的情况下,所述第二设备将所述关键字同步到所述第一编辑框中,用于所述第一设备确定所述关键字对应的候选词;
所述第二设备获取所述候选词,并显示第三界面,所述第三界面包括所述候选词。
实施例316.一种设备通信系统,包括第一设备和第二设备,所述第一设备用于执行如
实施例31-39、310-315任一项所述的第一设备的步骤,所述第二设备用于执行如实施例31-39、310-315任一项所述的第二设备的步骤。
实施例317.一种第一设备,包括:至少一个存储器和至少一个处理器;
所述存储器用于存储程序指令;
所述处理器用于调用所述存储器中的程序指令使得所述第一设备执行实施例31-39、310-315任一项所述的第一设备执行的步骤。
实施例318.一种第二设备,包括:至少一个存储器和至少一个处理器;
所述存储器用于存储程序指令;
所述处理器用于调用所述存储器中的程序指令使得所述第二设备执行实施例31-39、310-315任一项所述的第二设备执行的步骤。
实施例319.一种计算机可读存储介质,其上存储有计算机程序,使得所述计算机程序被第一设备的处理器执行时实现实施例31-39、310-315任一项所述的所述第一设备执行的步骤;或者,使得所述计算机程序被第二设备的处理器执行时实现实施例31-39、310-315任一项所述的所述第二设备执行的步骤。
上述实施例31-实施例39、实施例310-实施例319的具体实现可以参照如图38-53的说明。
在利用手机辅助大屏输入的过程中,在手机中输入关键字或关键词后,手机编辑框中的文本内容可以实时同步到大屏侧,以实现快捷输入,从而达到提升用户输入效率的目的。
然而,通常的实现中,手机编辑框中的文本内容能够同步到大屏,大屏中的内容不能同步到手机。例如,当用户在手机的编辑框中输入关键字(例如部分电影名、部分音乐名或部分联系人等)后,在大屏编辑框中可以同步该关键字,大屏可以利用该关键字得到目标词条(例如完整电影名、完整音乐名或完整联系人等),此时由于大屏的目标词条不能同步到手机,且手机本身的候选词库与大屏中的节目内容等通常不相关,在利用手机的候选词库对关键字进行联想时,往往不能联想到与大屏节目相关的内容。或者,对于同一关键字,用户在手机中通常选择的候选词与用户在大屏中通常选择的候选词不同,例如,对关键字“西”,大屏中通常选择的候选词可能是与西相关的影视名词,手机中通常选择的候选词可能是“西方”等通用词汇。因此导致用户需要将目标词条在手机中完整输入,或者,用户需要借助其他硬件设备(例如,遥控器等)在大屏中进行手动选择,才能搜索目标词条,输入效率较低。
示例性的,图38示出了手机辅助大屏输入的用户界面示意图。如图38的左图所示手机的界面示意图,用户想要在大屏上搜索电视剧“我爱这片土地”时,用户在手机的编辑框中输入关键词“我爱”,如图38的右图所示的大屏的编辑框中,可以同步显示手机编辑框中的关键词“我爱”,大屏根据关键词“我爱”联想到了候选词“我爱这片土地”,但手机无法同步该候选词“我爱这片土地”,用户依然需要在手机中输入完整文本“我爱这片土地”,并点击完成,或者,用户使用遥控器在大屏上选中候选词“我爱这片土地”,才可以搜索“我爱这片土地”,输入效率较低。
因此,在手机辅助大屏输入的过程中,让手机同步到大屏上匹配的候选词是提升用户输入效率的可行方式。
基于此,本申请实施例提供了一种设备通信方法,当用户利用手机辅助大屏输入时,用户在手机的可输入编辑框(例如搜索编辑框、下拉框或组合框等)内输入文本时,大屏可以同步该文本,并根据具体的输入场景结合大屏统计的用户习惯、大屏的候选词库或词典等联想该文本对应的候选词,将大屏依据该文本联想的候选词同步至手机,从而用户可以在手机中选择大屏联想的候选词,实现便捷的输入。
示例性的,图39示出了本申请实施例提供的一种设备通信方法的具体系统架构示意图。
如图39所示,本申请实施例以分布式组网中包括大屏(或称为大屏设备)和手机(或称为辅助设备)为例,说明手机辅助大屏输入的过程。大屏中可以设置本地或云端词库、编辑框和输入法框架。辅助设备(手机)中可以设置辅助AA、通知管理器、窗口管理器和输入法框架。
其中,大屏中的编辑框可以用于触发辅助输入、接收遥控器文字输入或接收手机辅助输入等。大屏的本地或云端词库中可以存储候选词,候选词例如可以包括节目名称和/或大屏中的应用名称等。大屏中的输入法框架、手机中的辅助AA、手机中的通知、手机中的窗口和手机中的输入法框架可以参照上述记载,在此不再赘述。
如图39所示,在大屏和手机接入同一分布式组网后,用户可以使用遥控器等设备选定大屏的编辑框,大屏可以请求输入法框架连接手机中的辅助AA,手机中的辅助AA可以指示通知管理器弹出通知,在手机接收到用户点击通知确认辅助输入时,可以进一步在手机的窗口弹出编辑框,拉起手机的输入法框架。用户可以在手机的输入法框架提供的编辑框中输入数据,例如,用户可以在手机中输入词汇“我爱”,该词汇“我爱”可以同步到大屏的输入法框架,在大屏中的编辑框同步显示该“我爱”。
大屏监听到大屏编辑框的文本变化,可以获取编辑框中的词汇“我爱”,根据“我爱”在大屏的词库中匹配相关词条,并将该相关词条填充到搜索框的候选词列表,例如相关词条可以包括“我爱这片土地”。其中,匹配规则可以根据实际应用场景确定,例如匹配规则包括但不限于字符串正则匹配、近义词匹配、同义词匹配、精准匹配或模糊匹配等。
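上述“根据关键字在大屏词库中匹配相关词条”的过程,可以用如下 Python 片段示意。其中的函数名、近义词映射以及各匹配规则的组合方式均为示例假设,并非本申请的具体实现:

```python
import re

def match_candidates(keyword, word_library, synonyms=None):
    """在大屏词库中为关键字匹配候选词的简化示意。

    依次应用几种匹配规则(精准匹配、模糊子串匹配、
    字符串正则匹配、近义词匹配),返回去重后的候选词列表。
    synonyms 为假设的近义词映射,实际规则可按场景裁剪。
    """
    synonyms = synonyms or {}
    results = []

    def add(word):
        if word not in results:
            results.append(word)

    for word in word_library:              # 精准匹配
        if word == keyword:
            add(word)
    for word in word_library:              # 模糊(子串)匹配
        if keyword in word:
            add(word)
    pattern = re.compile(".*".join(map(re.escape, keyword)))
    for word in word_library:              # 正则匹配:关键字各字符按序出现
        if pattern.search(word):
            add(word)
    for alias in synonyms.get(keyword, []):  # 近义词匹配
        for word in word_library:
            if alias in word:
                add(word)
    return results
```

例如 `match_candidates("我爱", ["我爱这片土地", "大地"])` 可以匹配出“我爱这片土地”,再填充到大屏搜索框的候选词列表。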
进一步的,大屏可以将大屏的候选词列表中的内容同步到手机的输入法框架,手机的输入法框架可以将大屏的候选词列表中的内容在手机的界面中显示,例如,手机可以将“我爱这片土地”作为候选词在手机界面中显示,用户可以通过在手机界面点击该候选词“我爱这片土地”,将“我爱这片土地”填充到手机的编辑框中,该“我爱这片土地”可以同步到大屏的编辑框中,从而实现便捷高效的输入。
为了更清楚的说明上述步骤,图40示出了大屏与手机的交互同步匹配的候选词的流程图。
如图40所示,用户可以在辅助设备(例如手机)的用于辅助大屏输入的编辑框中输入关键字或关键词,该关键字或关键词可以同步至大屏设备,大屏设备根据该关键字或关键词得到与该关键字或关键词相匹配的候选词,并将该候选词同步至手机。
可能的实现方式中,大屏同步至手机的候选词有用户想要选择的目标词条,用户可以通过点击等选择目标词条,手机将目标词条同步至大屏,完成辅助输入。
可能的实现方式中,大屏同步至手机的候选词中没有用户想要选择的目标词条,用户可以继续在手机中输入关键字或关键词,重复上述步骤,直到从大屏同步的候选词中存在用户想要选择的目标词条,用户可以通过点击等选择目标词条,手机将目标词条同步至大屏,完成辅助输入。
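上述手机与大屏之间“输入关键字—同步到大屏—匹配候选词—候选词回传—选定目标词条”的交互闭环,可以用如下简化的 Python 片段示意(LargeScreen、Phone 等类名为示例假设,实际通过分布式组网的远程数据通道而非本地调用实现):

```python
class LargeScreen:
    """大屏侧:接收同步来的关键字,返回匹配的候选词列表(简化示意)。"""
    def __init__(self, word_library):
        self.word_library = word_library
        self.edit_box = ""

    def on_keyword_synced(self, keyword):
        self.edit_box = keyword          # 在大屏编辑框中同步显示关键字
        return [w for w in self.word_library if keyword in w]

class Phone:
    """手机侧:输入关键字 -> 同步大屏 -> 展示大屏候选词 -> 选定目标词条。"""
    def __init__(self, screen):
        self.screen = screen
        self.candidates = []

    def type_keyword(self, keyword):
        self.candidates = self.screen.on_keyword_synced(keyword)
        return self.candidates           # 大屏候选词同步回手机界面

    def pick(self, word):
        assert word in self.candidates
        self.screen.edit_box = word      # 目标词条同步回大屏编辑框
        return self.screen.edit_box
```

若候选词中没有目标词条,用户继续调用 `type_keyword` 补充关键字即可,对应上文重复输入的流程。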
需要说明的是,图39或图40是本申请实施例的一种可能实现方式。在其他可能的实现方式中,可以是用户通过遥控器选定大屏上某应用提供的编辑框下的虚拟键盘触发后续的辅助大屏输入的过程,或者,可以是用户在手机中触发辅助大屏输入的过程,本申请实施例对此不作具体限定。
结合上述的描述,下面对大屏和手机交互的用户界面进行示例性说明。
示例性的,图41-42示出了用户触发进行辅助输入的用户界面示意图。
图41示出了大屏的一种用户界面图。如图41所示,用户可以利用遥控器4101选定大屏中的编辑框4102,则可以触发执行本申请实施例后续的手机辅助大屏输入的过程。或者,用户可以利用遥控器4101选定大屏中的虚拟键盘中任意内容4102,则可以触发执行本申请实施例后续的手机辅助大屏输入的过程。具体的手机辅助大屏输入的方式将在后续实施例中说明,在此不再赘述。
需要说明的是,图41示出了大屏的用户界面图中设置一个编辑框的示意图。可能的实现方式中,大屏的用户界面中可以包括多个编辑框,用户触发任一个编辑框均可以触发本申请实施例后续的手机辅助大屏输入的过程,本申请实施例对此不作具体限定。
图42示出了手机的一种用户界面图。例如,用户可以通过在手机的主屏幕下拉等方式,显示如图42的a图所示用户界面,在如图42的a图所示用户界面中,可以包括手机的一项或多项下述功能:WLAN、蓝牙、手电筒、静音、飞行模式、移动数据、无线投屏、截屏或辅助输入4201。其中辅助输入4201可以为本申请实施例的手机辅助大屏输入的功能。
可能的实现方式中,在用户点击辅助输入4201后,手机可以查找处于同一分布式组网中的大屏等设备,并获取大屏中的搜索框,建立与大屏之间的通信连接,在手机中可以进一步显示如图42的c图所示用户界面,在如图42的c图所示用户界面中,可以显示用于辅助大屏输入的编辑框,用户可以基于该编辑框辅助大屏进行输入。
可能的实现方式中,如果手机查到处于同一分布式组网中的大屏等设备的数量为多个,手机中还可以显示如图42的b图所示用户界面,在如图42的b图所示用户界面,可以显示多个大屏的标识,大屏的标识可以是该大屏的设备号、用户名或昵称等。用户可以在如图42的b图所示用户界面中选择希望辅助输入的大屏(例如点击大屏A或大屏B),并进入如图42的c图所示用户界面,本申请实施例对此不作具体限定。

在用户通过上述任意方式触发进行大屏输入后,示例性的,大屏可以查找分布式组网中的具有辅助输入能力的辅助设备(例如手机),并自动确定用于辅助输入的手机,或者向分布式组网中查找到的全部手机发送通知。
例如,如果大屏查找到分布式组网中存在一个手机,则大屏可以自动选择辅助输入的设备为该手机。
例如,如果大屏查找到分布式组网中存在多个手机,但是该多个手机中,存在用户设置的默认辅助输入的手机,则大屏可以自动选择该默认辅助输入的手机为辅助输入的设备。
例如,如果大屏查找到分布式组网中存在多个手机,但是该多个手机中,存在用户上次进行辅助输入时选择的辅助输入的手机,则大屏可以自动选择该用户上次进行辅助输入时选择的辅助输入的手机为辅助输入的设备。
例如,如果大屏查找到分布式组网中存在多个手机,大屏获取该多个手机中,被用户选择为辅助输入的频次最高的手机,则大屏可以自动选择该被用户选择为辅助输入的频次最高的手机为辅助输入的设备。
例如,如果大屏查找到分布式组网中存在多个手机,但是该多个手机中,存在与大屏所登录的用户账号相同的手机,则大屏可以自动选择该与大屏所登录的用户账号相同的手机为辅助输入的设备。
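上述自动确定辅助输入手机的几条规则,可以按如下优先级用 Python 片段示意(函数名、参数与优先级顺序均为示例假设,实际顺序可按场景调整):

```python
def select_assist_device(devices, default=None, last_used=None,
                         usage_count=None, accounts=None, screen_account=None):
    """按上文列举的规则自动确定辅助输入的手机(示意)。

    devices: 组网内发现的手机列表;其余参数均为假设的辅助信息。
    优先级: 唯一设备 > 默认设备 > 上次使用 > 使用频次最高 > 同账号。
    未命中任何规则时返回 None(由大屏改为向全部手机发送通知)。
    """
    usage_count = usage_count or {}
    accounts = accounts or {}
    if len(devices) == 1:                    # 组网中只有一个手机
        return devices[0]
    if default in devices:                   # 用户设置的默认辅助输入手机
        return default
    if last_used in devices:                 # 上次辅助输入时选择的手机
        return last_used
    counted = [d for d in devices if d in usage_count]
    if counted:                              # 被选择为辅助输入频次最高的手机
        return max(counted, key=usage_count.get)
    for d in devices:                        # 与大屏登录账号相同的手机
        if screen_account and accounts.get(d) == screen_account:
            return d
    return None
```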
示例性的,以大屏向分布式组网中的手机发送通知为例,图43-49示出了手机利用从大屏同步的候选词辅助大屏输入的过程。
示例性的,图43示出了手机确定辅助大屏输入的用户界面示意图。如图43的最左图所示的用户界面图,在手机收到大屏设备的通知的情况下,手机中可以弹出通知,提示大屏请求辅助输入,用户可以触发手机中的通知以确认辅助大屏输入,进一步的,如图43中间图所示的用户界面,手机中可以弹出用于辅助大屏输入的编辑框,进一步的,用户可以通过点击等触发如图43中间图所示的编辑框,手机可以显示如图43最右图所示的用户界面,该用户界面中可以显示手机的虚拟键盘(或称为软键盘),用户后续可以利用手机的虚拟键盘辅助大屏输入。
例如,如果用户想要改变大屏中的辅助应用的设置,但由于大屏的设置页面中应用的选项较多,查找困难,用户可以在如图43的最右图所示的手机的编辑框中输入关键字“辅”。
如图44所示的大屏用户界面中,手机编辑框中的关键字“辅”可以同步至大屏侧编辑框,大屏根据大屏本地或云端词库搜索出与“辅”匹配的候选词,显示在候选词列表中,例如大屏的候选词列表中可以包括多个类别中与“辅”相关的内容,比如应用类别中可以包括“辅助应用和语音输入,以及辅助应用”等,辅助功能类别中可以包括“辅助功能和无障碍”等。
需要说明的是,大屏为大屏编辑框中的关键字匹配候选词时,可以与大屏要实现的功能(或称为所处的场景)相关。或者可以理解为,用户在编辑框中输入的关键字相同,但是因为各编辑框所在的界面不同,实现的功能不同,大屏基于各编辑框中的关键字联想的候选词可以相同或不同。
例如,如果大屏当前利用手机辅助搜索电影,则大屏可以结合电影库为关键字匹配相关的电影名称。
例如,如果大屏当前利用手机辅助搜索电视剧,则大屏可以结合电视剧库为关键字匹配相关的电视剧名称。
例如,如果大屏当前利用手机辅助搜索音乐,则大屏可以结合音乐库为关键字匹配相关的音乐名称。
例如,如果大屏当前利用手机辅助搜索大屏中的功能,则大屏可以结合功能库为关键字匹配相关的功能名称。
可能的实现方式中,大屏中显示的候选词的排序与用户历史行为相关。或者可以理解为,用户在编辑框中输入的关键字相同,但是用户之前对该关键字对应的候选词的选择不同,大屏中针对该关键字的候选词的排序可以发生改变。
例如,对于某一关键词,用户上次在大屏中利用该关键字选择的候选词是候选词A,则大屏中可以将候选词A显示在排序靠前的位置。
例如,对于某一关键词,用户在大屏中利用该关键字选择的候选词频次最高或较高的是候选词B,则大屏中可以将候选词B显示在排序靠前的位置。

大屏的候选词列表中的候选词可以进一步同步至手机的输入界面中,大屏的候选词列表中的候选词在手机中的显示方式可以根据实际应用场景设定,例如,大屏中的候选词的排序也可以同步到手机的显示界面中,以在手机中向用户推荐符合用户在大屏中习惯的候选词,本申请实施例对此不作具体限定。
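上述“候选词排序与用户历史行为相关”的逻辑,可以用如下 Python 片段示意(history 中的候选词与选中次数均为假设数据):

```python
def rank_candidates(candidates, history):
    """按大屏统计的历史用户行为对候选词排序(示意)。

    history: {候选词: 历史被选中次数}。被选中频次高的候选词排前,
    频次相同或无历史记录时保持词库原有顺序(Python 排序是稳定的)。
    """
    return sorted(candidates, key=lambda w: -history.get(w, 0))
```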
需要说明的是,手机本地输入法基于关键字联想的手机候选词,可以与大屏基于关键字联想的大屏候选词相同或不同。可能的实现方式中,手机本地输入法基于关键字联想的手机候选词与大屏基于关键字联想的大屏候选词相同,但是手机本地输入法基于关键字联想的手机候选词的排序与大屏基于关键字联想的大屏候选词的排序可以不同。
示例性的,图45-48示出了几种将大屏候选词列表中的候选词同步在手机时手机的界面示意图。
如图45所示的手机显示界面中,用户在手机的编辑框中输入“辅”时,手机从大屏同步得到的候选词可以类似于手机本地输入法的候选词显示,例如,从大屏同步得到的候选词可以以列表的形式,在手机的输入界面中显示。可能的理解方式中,如图45所示的手机显示界面中,用户可以不感知手机提供的候选词具体是手机本地的还是从大屏同步的,但是因为本申请实施例中在手机提供的候选词中提供了从大屏同步得到的候选词,使得手机中为用户提供的候选词与大屏的内容更加接近,更有利于辅助用户实现快捷输入。
如图46所示的手机用户界面中,大屏的候选词列表中的候选词可以与手机本地输入法的候选词分栏显示。例如,如图46所示,用户在手机的编辑框中输入“辅”时,在手机的用户界面中,可以将从大屏同步得到的“辅”相匹配的候选词在一栏(例如大屏候选搜索词栏)显示,将手机本地输入法利用“辅”联想得到的候选词在一栏(例如手机候选词栏)中显示。
可能的实现方式中,如图47所示,大屏的候选词列表中的候选词与手机本地输入法的候选词分栏显示,虽然大屏的候选词列表中的候选词与手机本地输入法的候选词相同,但是大屏的候选词列表中的候选词与手机本地输入法的候选词的排序可以不同。
如图48所示的手机用户界面中,可以将大屏的候选词列表中的候选词排在手机本地输入法的候选词的前面,并采用横线等标识,将大屏的候选词列表中的候选词与手机本地输入法的候选词划分。或者,可以将大屏的候选词列表中的候选词排在手机本地输入法的候选词的后面,并采用横线等标识,将大屏的候选词列表中的候选词与手机本地输入法的候选词划分(图48未示出)。
如图49所示的手机用户界面中,大屏的候选词列表中的候选词可以与手机本地输入法的候选词采用标识区分。其中,大屏的候选词列表中的候选词的标识与手机本地输入法的候选词的不同,标识的具体形式可以包括:颜色、文字和/或图像等,本申请实施例不做限定。
例如,如图49所示,用户在手机的编辑框中输入“辅”时,在手机的用户界面中,可以将从大屏同步得到的“辅”相匹配的候选词添加朝向右下的箭头作为标识,将手机本地输入法利用“辅”联想得到的候选词添加朝向左上的箭头作为标识,使得用户可以基于各候选词的标识知晓候选词的来源。
本申请实施例对大屏的候选词列表中的候选词与手机本地输入法的候选词的具体显示顺序不作限定。可能的实现方式中,可以结合用户历史搜索情况,将大屏的候选词列表中的候选词与手机本地输入法的候选词按照历史使用次数从高到低的顺序排序。可能的实现方式中,可以结合大屏的候选词列表中的候选词与手机本地输入法的候选词的热度,按照热度从高到低的顺序排序。可能的实现方式中,可以将大屏的候选词列表中的候选词排在手机本地输入法的候选词的前面。可能的实现方式中,可以将大屏的候选词列表中的候选词与手机本地输入法的候选词交叉排序。可能的实现方式中,可以将大屏的候选词列表中的候选词与手机本地输入法的候选词随机排序。
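上述几种大屏候选词与手机本地候选词的合并排序方式,可以用如下 Python 片段示意(函数名与 mode 取值均为示例假设):

```python
import random

def merge_candidates(screen_words, local_words, mode="screen_first"):
    """大屏候选词与手机本地候选词的几种合并/排序方式(示意)。"""
    dedup_local = [w for w in local_words if w not in screen_words]
    if mode == "screen_first":       # 大屏候选词排在前面
        return screen_words + dedup_local
    if mode == "local_first":        # 大屏候选词排在后面
        return dedup_local + screen_words
    if mode == "interleave":         # 交叉排序
        merged = []
        for pair in zip(screen_words, dedup_local):
            merged.extend(pair)
        longer = screen_words if len(screen_words) > len(dedup_local) else dedup_local
        merged.extend(longer[min(len(screen_words), len(dedup_local)):])
        return merged
    if mode == "random":             # 随机排序
        merged = screen_words + dedup_local
        random.shuffle(merged)
        return merged
    raise ValueError(mode)
```

分栏显示或添加来源标识的方式,则可在界面层依据候选词来自 `screen_words` 还是 `local_words` 区分渲染。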
这样,在图45-49任一所示的手机用户界面中,用户可以点击需要的目标候选词,将目标候选词填充在手机的编辑框中,进而基于该目标候选词实现在大屏中的搜索。
可能的实现方式中,上述的将大屏的候选词列表中的候选词同步到手机的技术实现可以包括:基于大屏的输入法框架读取大屏的候选词列表中的候选词,通过分布式组网,将大屏的候选词列表中的候选词发送给手机的输入法框架。本申请实施例对将大屏的候选词列表中的候选词同步到手机的技术实现不作限定。
需要说明的是,上述手机辅助大屏输入时的用户界面图均是示例性说明,可能的实现方式中,手机辅助大屏输入时的界面中,也可以同步大屏中的部分或全部内容,使得手机用户可以基于手机界面了解大屏的状态。
示例性的,图50示出了一种手机的用户界面。如图50所示,用户在利用手机辅助大屏输入时,可以将大屏的全部或部分内容投屏到手机中,例如在手机中显示大屏的编辑框相关的内容,并在大屏内容的上层显示手机的编辑框,这样用户在利用手机的编辑框中输入时,在手机的用户界面中可以同步看到大屏编辑框中的状态,用户在辅助输入时,不需要抬头看大屏中的输入状态。
需要说明的是,上述实施例中,以用户辅助大屏输入汉字为例进行示例,可能的实现方式中,用户可以辅助大屏进行英文词组输入或其他形式的文本输入,本申请实施例对辅助输入的具体内容不做限定。
示例性的,图51示出了一种具体的手机辅助大屏进行输入的流程示意图。
如图51所示,手机辅助大屏输入可以包括:近场设备发现、身份认证和远程数据通道建立、以及大屏候选词条同步至手机。
示例性的,在近场设备发现的过程中,大屏远程输入服务开机启动,大屏可以开启近场辅助设备发现的功能,在大屏的搜索框中获取焦点(例如用户用遥控器选定到的搜索框)的情况下,大屏可以发送广播,以查询具有辅助输入能力的分布式辅助输入设备(例如手机),分布式辅助输入设备收到大屏的广播后,分布式辅助输入设备上可以弹出通知,该通知用于提示大屏需要辅助输入。
可能的实现方式中,在近场设备发现的过程中,大屏可以利用蓝牙或局域网广播等对近场设备进行查询,具备辅助输入能力的分布式辅助输入设备均会收到通知。
在身份认证和远程数据通道的建立的过程中,在分布式辅助输入设备接收到该通知后,用户可以在分布式辅助输入设备中点击通知消息,触发大屏与分布式辅助输入设备(后续以分布式辅助输入设备为手机为例进行示例说明)双方进行身份认证,例如验证双方身份的合法性等。认证完成后大屏侧与手机侧可以建立远程数据通道,后续可以根据远程数据通道实现大屏与手机之间的数据传递,例如,手机侧收到远程数据通道后,可以加载显示辅助输入标记框(或称为编辑框),用户可以在输入标记框中输入关键字。
可以理解,身份认证的步骤可以根据实际应用场景适应选择,例如,在一些场景(例如安全性要求不高的场景等)中,可以不在大屏和手机间进行身份认证,在手机侧的用户触发通知后,可以在大屏侧与手机侧可以建立远程数据通道。
在将大屏侧的侯选词列表同步至手机的过程中,手机可以通过远程数据通道将输入的关键字或关键词同步至大屏侧,大屏侧通过本地数据通道将关键字或关键词提交至编辑框,大屏侧可以显示出手机侧输入的关键字或关键词,大屏侧搜索框根据关键字或关键词找出匹配的候选词,匹配的候选词被填充到大屏搜索框的候选词列表中,大屏通过远程数据通道将候选词列表同步至手机侧,手机接收到候选词列表后,手机通过本地数据通道将候选词列表显示至手机的候选词列表界面中,如果候选词列表不存在用户想要的目标词汇时,用户可以在手机中继续输入关键字或关键词;如果候选词列表中存在用户想要的目标词汇,用户点击目标词汇,手机通过远程数据通道将目标词汇同步至大屏侧的输入框中,实现辅助输入。
需要说明的是,上述手机辅助大屏输入时的用户界面图均是示例性说明,可能的实现方式中,手机辅助大屏输入时的界面中,也可以同步大屏中的部分或全部内容,使得手机用户可以基于手机界面了解大屏的状态。
在采用对应各个功能划分各个功能模块的情况下,如图52所示,示出了本申请实施例提供一种第一设备或第二设备的一种可能的结构示意图,该第一设备或第二设备包括:显示屏幕5201和处理单元5202。
其中,显示屏幕5201,用于支持第一设备或第二设备执行上述实施例中的显示步骤,或者本申请实施例所描述的技术的其他过程。显示屏幕5201可以是触摸屏或其他硬件或硬件与软件的综合体。
处理单元5202,用于支持第一设备或第二设备执行上述方法实施例中的处理步骤,或者本申请实施例所描述的技术的其他过程。
其中,上述方法实施例涉及的各步骤的所有相关内容均可以援引到对应功能模块的功能描述,在此不再赘述。
当然,电子设备包括但不限于上述所列举的单元模块。并且,上述功能单元的具体所能够实现的功能也包括但不限于上述实例所述的方法步骤对应的功能,电子设备的其他单元的详细描述可以参考其所对应方法步骤的详细描述,本申请实施例这里不予赘述。
在采用集成的单元的情况下,上述实施例中所涉及的第一设备或第二设备可以包括:处理模块、存储模块和显示屏幕。处理模块用于对第一设备或第二设备的动作进行控制管理。显示屏幕用于根据处理模块的指示进行内容显示。存储模块,用于保存第一设备或第二设备的程序代码和数据。进一步的,该第一设备或第二设备还可以包括输入模块,通信模块,该通信模块用于支持第一设备或第二设备与其他网络实体的通信,以实现第一设备或第二设备的通话,数据交互,Internet访问等功能。
其中,处理模块可以是处理器或控制器。通信模块可以是收发器、RF电路或通信接口等。存储模块可以是存储器。显示模块可以是屏幕或显示器。输入模块可以是触摸屏,语音输入装置,或指纹传感器等。
其中,上述通信模块可以包括RF电路,还可以包括无线保真(wireless fidelity,Wi-Fi) 模块、近距离无线通信技术(near field communication,NFC)模块和蓝牙模块。RF电路、NFC模块、WI-FI模块和蓝牙模块等通信模块可以统称为通信接口。其中,上述处理器、RF电路、和显示屏幕和存储器可以通过总线耦合在一起。
如图53所示,示出了本申请实施例提供的第一设备或第二设备的又一种可能的结构示意图,包括:一个或多个处理器5301、存储器5302、摄像头5304和显示屏幕5303;上述各器件可以通过一个或多个通信总线5306通信。
其中,一个或多个计算机程序5305被存储在存储器5302中,并被配置为被一个或多个处理器5301执行;一个或多个计算机程序5305包括指令,指令用于执行上述任意步骤的显示方法。当然,电子设备包括但不限于上述所列举的器件,例如,上述电子设备还可以包括射频电路、定位装置、传感器等等。
本申请还提供以下实施例。需要说明的是,以下实施例的编号并不一定需要遵从前面实施例的编号顺序。
实施例41.一种设备通信方法,应用于包括第一设备、第二设备和第三设备的系统,所述方法包括:
所述第一设备、所述第二设备和所述第三设备接入分布式组网;
所述第二设备获取目标候选词,所述目标候选词不属于所述第一设备的候选词库,所述目标候选词不属于所述第三设备的候选词库;
所述第一设备接收用户输入的与所述目标候选词相关的关键字,所述第一设备显示所述目标候选词;
和/或,所述第三设备接收用户输入的与所述目标候选词相关的关键字,所述第三设备显示所述目标候选词。
实施例42.根据实施例41所述的方法,还包括:
所述第一设备、所述第二设备和所述第三设备之间互相同步各自的候选词库。
实施例43.根据实施例41或42所述的方法,还包括:
所述第一设备、所述第二设备或所述第三设备退出所述分布式组网时,在所述第一设备、所述第二设备或所述第三设备中显示是否删除同步的候选词库的提示界面;所述提示界面中包括用于表示删除的选项和用于表示不删除的选项;
响应于对所述表示删除的选项的触发操作,所述第一设备、所述第二设备或所述第三设备删除各自从其他设备同步的候选词库;
或者,响应于对所述表示不删除的选项的触发操作,所述第一设备、所述第二设备或所述第三设备保留从所述分布式组网同步的候选词库。
实施例44.根据实施例41或42所述的方法,还包括:
所述第一设备、所述第二设备或所述第三设备分别确定各自的访问类型;
在所述第一设备、所述第二设备或所述第三设备退出所述分布式组网时,所述第一设备、所述第二设备或所述第三设备根据各自的访问类型确定是否删除从所述分布式组网同步的候选词库。
实施例45.根据实施例41-44任一项所述的方法,还包括:
所述第一设备显示包括第一编辑框的第一界面;
所述第一设备向所述第二设备发送指示消息;
所述第二设备根据所述指示消息显示第二界面,所述第二界面包括第二编辑框;
在所述第二编辑框中存在编辑状态的情况下,将所述编辑状态同步到所述第一编辑框中。
实施例46.根据实施例45所述的方法,所述第二设备包括接口服务,所述接口服务用于所述第一设备与所述第二设备之间的编辑状态的同步。
实施例47.根据实施例45或46所述的方法,所述编辑状态包括下述一项或多项:文本内容、光标或文字内容的高亮标记。
实施例48.根据实施例45-47任一项所述的方法,所述第二设备根据所述指示消息显示第二界面,包括:
所述第二设备响应于所述指示消息显示通知界面;所述通知界面包括确认辅助输入的第三选项;
响应于对所述第三选项的触发操作,所述第二设备显示所述第二界面。
实施例49.根据实施例45-48任一项所述的方法,所述第二界面还包括:所述第一界面的全部或部分内容。
实施例410.根据实施例49所述的方法,所述第二编辑框与所述第一界面的全部或部分内容分层显示,且所述第二编辑框显示在所述第一界面的全部或部分内容的上层。
实施例411.根据实施例45-49、410任一项所述的方法,所述第二设备根据所述指示消息显示第二界面之后,所述方法还包括:
响应于对所述第二编辑框的触发,所述第二设备显示虚拟键盘;
所述第二设备根据所述虚拟键盘和/或所述第二编辑框中接收的输入操作,在所述第二编辑框中显示所述编辑状态。
实施例412.根据实施例41-49、410-411任一项所述的方法,所述第一设备包括下述任一项:电视、大屏或可穿戴设备;所述第二设备或所述第三设备包括下述任一项:手机、平板或可穿戴设备。
实施例413.根据实施例41-49、410-412任一项所述的方法,还包括:
所述第二设备显示包括所述第一设备的选项的第四界面;
响应于对所述第一设备的选项的选择操作,所述第二设备向所述第一设备发送指示消息;
所述第一设备显示包括第一编辑框的第一界面;
所述第二设备显示第二界面,所述第二界面包括第二编辑框;
在所述第二编辑框中存在编辑状态的情况下,将所述编辑状态同步到所述第一编辑框中。
实施例414.一种设备通信方法,应用于包括第一设备、第二设备和第三设备的系统,所述方法包括:
所述第一设备、所述第二设备和所述第三设备接入分布式组网;
所述第一设备、所述第二设备和所述第三设备之间互相同步各自的候选词库,得到候选词库集;
在所述第一设备、所述第二设备或所述第三设备进行文字编辑时,所述第一设备、所述第二设备或所述第三设备根据所述候选词库集显示候选词。
实施例415.一种设备通信方法,应用于第一设备,包括:
所述第一设备接入分布式组网;所述分布式组网中还接入有其他设备;
所述第一设备基于所述分布式组网同步所述其他设备的候选词库,得到候选词库集;
在所述第一设备进行文字编辑时,所述第一设备根据所述候选词库集显示候选词。
实施例416.一种设备通信系统,包括第一设备、第二设备和第三设备,所述第一设备用于执行如实施例41-49、410-415任一项所述的第一设备的步骤,所述第二设备用于执行如实施例41-49、410-415任一项所述的第二设备的步骤,所述第三设备用于执行如实施例41-49、410-415任一项所述的第三设备的步骤。
实施例417.一种第一设备,包括:至少一个存储器和至少一个处理器;
所述存储器用于存储程序指令;
所述处理器用于调用所述存储器中的程序指令使得所述第一设备执行实施例41-49、410-415任一项所述的第一设备执行的步骤。
实施例418.一种第二设备,包括:至少一个存储器和至少一个处理器;
所述存储器用于存储程序指令;
所述处理器用于调用所述存储器中的程序指令使得所述第二设备执行实施例41-49、410-415任一项所述的第二设备执行的步骤。
实施例419.一种计算机可读存储介质,其上存储有计算机程序,使得所述计算机程序被第一设备的处理器执行时实现实施例41-49、410-415任一项所述的所述第一设备执行的步骤;或者,使得所述计算机程序被第二设备的处理器执行时实现实施例41-49、410-415任一项所述的所述第二设备执行的步骤;或者,使得所述计算机程序被第三设备的处理器执行时实现实施例41-49、410-415任一项所述的所述第三设备执行的步骤。
上述实施例41-实施例49、实施例410-实施例419的具体实现可以参照如图54-67的说明。
在利用手机辅助大屏输入的可能实现中,在手机中输入关键字后,可以依据手机本身的候选词库(或者可以理解为用户所使用的输入法对应的输入法候选词库)的内容对关键字进行联想,显示推荐的候选词,用户可以通过点击候选词实现快捷输入,从而达到提升用户输入效率的目的。
然而,手机本身的候选词库与大屏中的节目内容等通常不相关,在利用手机的候选词库对关键字进行联想时,往往不能联想到与大屏节目相关的内容,导致用户依然需要逐字选择,输入效率较低。
因此,让手机得到具有丰富内容的候选词库是提升用户输入效率的可行方式。
在可能的实现方式中,一些产品可能利用用户账号的方式,实现用户的多个设备(例如手机和大屏)之间的候选词库的同步。例如,在用户注册输入法的用户账号后,无论用户将用户账号登录在任何设备,在用户利用输入法输入过程中产生的特定候选词(例如用户逐字选择得到的词语),都可以保存到与该用户账号对应的候选词库中。后续如果用户在大屏上登录用户账号,大屏的候选词库中也可以包括手机的候选词库,如果用户在手机中登录用户账号,手机的候选词库中也可以包括大屏的候选词库。
然而,该实现方式中,候选词库的同步完全依赖于输入法的用户账户,如果用户在某一设备中没有登录用户账户,则无法实现候选词库的同步。或者,因为用户账户通常是针对某一公司的输入法的,如果某一设备不支持该公司的输入法,或用户更换输入法的种类,则无法实现候选词库的同步。且,如果用户换设备等,需要切换或登录用户账户,操作较为繁琐。且,实际使用中,注册输入法账户的用户并不多,利用输入法输入时登录用户账户的情况更为少见,导致该实现方式不能充分发挥作用。
基于此,本申请实施例提供了一种设备通信方法,可以在设备加入分布式组网后,将分布式组网中的候选词库同步到该设备中,从而不需要依赖输入法的用户账户,也能让多个设备之间便捷的实现候选词库共享,能较好的为用户提供输入服务。
示例性,图54示出了一种本申请实施例的具体应用场景示意图。
如图54所示,分布式组网中接入有大屏、平板、手机A和手机B。大屏、平板、手机A和手机B可以分别基于分布式组网同步其他设备的候选词库,使得大屏、平板、手机A和手机B均可以得到候选词库集,该候选词库集可以理解为大屏的候选词库、平板的候选词库、手机A的候选词库和手机B的候选词库的并集。则后续大屏、平板、手机A和手机B均可以利用该候选词库集,实现便捷的候选词推荐,提升用户输入效率。
示例性的,大屏、平板、手机A和手机B可以连接到同一个WIFI中,实现分布式组网的组建。其中,大屏、平板、手机A和手机B可以以任意可能的形式加入到该分布式组网中,本申请实施例对此不作具体限定。
可以理解,分布式组网中具体接入的设备类型和数量可以根据实际应用场景确定,本申请实施例对分布式组网中接入的设备不作具体限定。
在各设备加入该分布式组网时,各设备可以基于各设备FWK层提供的分布式数据库同步能力,将各自候选词库中的内容推送到分布式组网中其他设备的候选词库路径下,实现各设备的候选词库同步。可以理解,如果分布式组网中的各设备在加入分布式组网后,执行了输入步骤,并产生了新的候选词,则也可以适应将该新的候选词同步到各设备的候选词库。
可能的实现方式中,分布式组网中各设备的候选词库路径可以相同,例如,各设备的候选词库路径可以均设置为分布式候选词库系统路径“data/inputmethod/candicateWords”,则分布式组网中各设备可以基于该相同的路径便捷的推送各自的候选词库。
可能的实现方式中,分布式组网中各设备向其他设备同步自己的候选词库时,可以将自己的候选词库中的候选词附加自己的设备信息,使得后续可以基于附加于该候选词的设备信息,实现对候选词的灵活管理。例如,可以在某一设备退出分布式组网时,将具备该某一设备的设备信息的候选词从分布式组网中的其他设备的候选词库中删除,等。
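上述“候选词附加来源设备信息、退网时按设备删除”的词库管理方式,可以用如下 Python 片段示意(类名与数据结构为示例假设,路径常量仅沿用上文的示例路径):

```python
class SharedLexicon:
    """分布式候选词库的简化示意:候选词附加来源设备信息,
    便于在某设备退出组网时按设备删除其同步来的候选词。"""

    PATH = "data/inputmethod/candicateWords"   # 上文的示例系统路径,仅作标识

    def __init__(self):
        self.entries = []                      # [(候选词, 来源设备)]

    def push(self, device_id, words):
        """某设备将自己的候选词推送到词库路径下,并附加设备信息。"""
        for w in words:
            if (w, device_id) not in self.entries:
                self.entries.append((w, device_id))

    def words(self):
        """输入时可见的候选词(按推送顺序去重)。"""
        seen = []
        for w, _ in self.entries:
            if w not in seen:
                seen.append(w)
        return seen

    def remove_device(self, device_id):
        """某设备退出组网时,删除具备该设备信息的候选词。"""
        self.entries = [(w, d) for w, d in self.entries if d != device_id]
```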
可能的实现方式中,在设备初次加入该分布式组网时,用户可以设置设备的访问类型(或者可以理解为权限),或者各设备可以自动确定设备的访问类型,进而依据设备的访问类型执行适应的步骤。设备再次加入该分布式组网(或者可以理解为该设备非初次加入该分布式组网)时,可以自动识别该设备之前设置的访问类型,进而依据设备的访问类型执行适应的步骤。示例性的,设备的访问类型可以包括常用设备、临时访客或黑名单设备等。
例如,常用设备可以表示该设备的安全等级较高,则常用设备加入分布式组网时,可以允许常用设备同步分布式组网中其他设备的候选词库,以及将该常用设备本身的候选词库同步给分布式组网中的其他设备。在常用设备退出分布式组网时,常用设备从该分布式组网中同步到的候选词库可以保留,使得该常用设备能够在退出分布式组网后继续利用在分布式组网中同步到的候选词库实现丰富的候选词推荐。在常用设备退出分布式组网时,常用设备同步到分布式组网中的自身的候选词库也可以在分布式组网中保留,使得分布式组网中的其他设备后续可以继续利用该常用设备的候选词库实现丰富的候选词推荐。可以理解,常用设备的具体权限还可以根据实际的应用场景设定,本申请实施例对此不作具体限定。
例如,临时访客可以表示该设备的安全等级一般,则临时访客加入分布式组网时,可以允许临时访客同步分布式组网中其他设备的候选词库,以及将该临时访客本身的候选词库同步给分布式组网中的其他设备。在临时访客退出分布式组网时,临时访客从该分布式组网中同步到的候选词库可以删除,临时访客同步到分布式组网中的自身的候选词库也可以在分布式组网中删除。可以理解,临时访客的具体权限还可以根据实际的应用场景设定,本申请实施例对此不作具体限定。
例如,黑名单设备可以表示该设备的安全等级较低,则黑名单设备加入分布式组网时,可以禁止黑名单设备同步分布式组网中的其他设备的候选词库,以及禁止黑名单设备将本身的候选词库同步给分布式组网中的其他设备。可以理解,黑名单设备的具体权限还可以根据实际的应用场景设定,本申请实施例对此不作具体限定。
这样,通过区分设备的访问类型,在方便用户输入的同时,可以起到数据保护的作用。
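上述三种访问类型对应的同步与删除策略,可以用如下策略表用 Python 示意(策略项为按上文归纳的示例假设,具体权限可按场景调整):

```python
POLICY = {
    # 访问类型 -> {入网时是否同步候选词库, 退网时是否删除双方同步的候选词}
    "常用设备":  {"sync_on_join": True,  "delete_on_leave": False},
    "临时访客":  {"sync_on_join": True,  "delete_on_leave": True},
    "黑名单设备": {"sync_on_join": False, "delete_on_leave": True},
}

def can_sync_on_join(access_type):
    """入网时是否允许双向同步候选词库。"""
    return POLICY[access_type]["sync_on_join"]

def should_delete_on_leave(access_type):
    """退网时是否删除双方互相同步的候选词。"""
    return POLICY[access_type]["delete_on_leave"]
```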
示例性的,设置设备的访问类型的一种可能实现为:分布式组网中设置管理员设备(例如加入该分布式组网中的其中一个或多个具备管理员作用的设备),管理员设备的FWK层可以监听分布式组网的状态,以及获取分布式组网的信息和分布式组网内的设备的信息。在管理员设备监听到某个接入设备(例如大屏、平板、手机A或手机B)加入该分布式组网时,管理员设备可以设置该接入设备的访问类型。
可以理解,后续也可以根据需要,由管理员设备对分布式组网中各设备的访问类型进行修改。例如,可以在管理员设备中提供用于修改设备访问类型的修改界面,管理员可以在修改界面中适应修改分布式组网中各设备的访问类型。或者,例如,各设备可以向管理员设备发送用于修改访问类型的请求,管理员设备可以基于该请求,修改该设备的访问类型,等。本申请实施例对具体的修改设备的访问类型的方式不作限定。
示例性的,设置设备的访问类型的一种可能实现为:任意接入分布式组网中的设备,在接入分布式组网时,提供用于设置该设备的访问类型的功能,用户可以根据需求设置该设备的访问类型。
可能的实现方式中,在后续如果有新增设备(例如手机C)接入该分布式组网,则手机C也可以与大屏、平板、手机A和手机B同步候选词库,类似于上述大屏、平板、手机A和手机B同步候选词库的描述,在此不再赘述。
可能的实现方式中,如果有已加入分布式组网的设备(例如大屏、平板、手机A、手机B或手机C)退出分布式组网,可以根据该退出的设备的访问类型,在该退出的设备的候选词库中删除该退出的设备在分布式组网中同步到的候选词,也可以在该退出的设备的候选词库中保留该退出的设备在分布式组网中同步到的候选词,本申请实施例对此不作具体限定。
示例性的,图55示出了本申请实施例的设备通信方法的具体系统架构示意图。
如图55所示,本申请实施例以分布式组网中包括大屏、手机A和手机C为例,示意性说明大屏、手机A和手机C接入分布式组网,大屏、手机A和手机C同步候选词库,手机A利用同步的候选词库辅助大屏进行输入,以及大屏、手机A或手机C离开分布式组网的过程。
其中,大屏和手机A为常用设备,手机C为临时访客。大屏、手机A和手机C均可以设置分布式组网框架、分布式数据库(也可能称为数据库)和输入法框架(也可能称为远程输入法框架服务)。
在大屏、手机A和手机C接入分布式组网的过程中,以手机C的分布式组网框架监听到手机C接入到分布式组网为例,手机C可以采用显示界面、语音提示等询问操作该手机C的用户所选择的访问类型(也可以称为设备类型),用户选择适应的访问类型后,可以触发手机C同步分布式组网中其他设备的候选词库。大屏和手机A接入分布式组网和同步候选词库的步骤类似手机C,不再赘述。示例性的,如果手机C的候选词库中,存在“坡止咩”的候选词,则大屏和手机A均可以同步到该“坡止咩”的候选词。
在手机A辅助大屏B输入的过程中,用户可以点击大屏中的输入法编辑框,大屏可以拉起手机A的输入法,例如在手机A中弹出输入框,则用户可以在手机A的输入框中输入内容,达到辅助大屏输入的效果。示例性的,如果用户在输入框中输入“pozhimie”或者“PZM”等,基于手机A从手机C同步的包括“坡止咩”候选词的候选词库,可以在手机A的界面中显示“坡止咩”的候选词,用户可以通过点击等方式触发候选词“坡止咩”,并将“坡止咩”显示在大屏的输入框中。本申请实施例中,手机A的用户,因为从手机C同步到“坡止咩”的候选词,所以在用户A输入“pozhimie”或者“PZM”等时,可以不需要逐个选择希望输入的字,从而可以提升输入效率。
在大屏、手机A和手机C退出(或称为断开)分布式组网的过程中,以手机C的分布式组网框架监听到手机C断开分布式组网为例,因为手机C是临时访客,因此可以删除手机C候选词库中从大屏和手机A同步到的候选词,适应的,也可以删除手机A从手机C获取的候选词(或者手机A从手机C获取且未使用过的候选词)。对于手机A和大屏,因为是常用设备,如果手机A或大屏断开分布式组网,可以保留手机A或大屏从分布式组网中其他设备同步到的候选词,也可以在分布式组网其他设备中保留从手机A或大屏同步的候选词。
示例性的,以大屏、手机A和手机C均断开分布式组网为例,断开分布式组网后,手机C可以恢复到接入分布式组网前的候选词库,手机A的候选词库可以包括手机A接入到分布式组网前的候选词库和大屏接入到分布式组网前的候选词库,大屏的候选词库可以包括手机A接入到分布式组网前的候选词库和大屏接入到分布式组网前的候选词库。
可以理解,如果在大屏、手机A和手机C连接在分布式组网的过程中,由于大屏、手机A或手机C等的输入行为,产生新的候选词,该新的候选词的处理方式也可以根据大屏、手机A和手机C的访问类型适应调整。例如,如果该新的候选词是由于手机C的输入行为产生,则可以随着手机C断开分布式组网,从手机A和大屏的候选词库中删除。如果该新的候选词是由于手机A或大屏的输入行为产生,则手机A或大屏断开分布式组网后,该新的候选词可以保留在手机A和大屏的候选词库中。
可能的实现方式中,如果手机C的候选词库中的候选词,在大屏、手机A和手机C连接在分布式组网的过程中被使用过,例如,上述手机A辅助大屏输入时,使用了手机C中的候选词“坡止咩”,则手机C断开分布式组网后,该被使用过的“坡止咩”候选词可以保留在手机A和大屏的候选词库中。
需要说明的是,本申请上述实施例中,均以同步候选词库为例进行说明,可能的实现中,本申请实施例的方法也适用于任何数据共享的场景,例如,可以采用上述与同步候选词库相似的方式,利用分布式组网,实现多个设备间的文件、音乐、视频和/或图片等的同步。可以理解,候选词库由于通常都是文字,占用空间通常比较小,在同步候选词库的实现中,可以不关注存储空间的选择,如果同步的数据较大,在实际应用中,还可以结合需要同步的数据的空间占用大小,为需要同步的数据选择适应的存储空间。
对应于图55描述的过程,下面对大屏、手机A和手机C同步候选词库,手机A利用同步的候选词库辅助大屏进行输入,以及大屏、手机A或手机C离开分布式组网的用户界面进行示例说明。
示例性的,图56示出了一种用于选择设备类型的用户界面示意图,以手机C的分布式组网框架监听到手机C接入到分布式组网为例,手机C可以显示如图56所示的用户界面。
如图56所示,用户界面中可以包括用于设置手机C的访问类型的常用设备控件5601和临时访客控件5602,用户可以通过点击临时访客控件5602按钮,将手机C设置为临时访客。类似的方法,用户可以将手机A和大屏设置为常用设备,在此不再赘述。
可能的实现方式中,如果大屏、手机A或手机C之前在该分布式组网中设置过访问类型,则大屏、手机A或手机C再次加入该分布式组网时,可以保留之前设置过的访问类型,不提示如图56所示的用户界面。
可能的实现方式中,大屏、手机A或手机C也可以根据自身加入该分布式组网的频率、时长和/或次数等,自行确定各自的访问类型。
例如,对于大屏、手机A或手机C中的任一个:如果加入该分布式组网的频率高于一定阈值,可以确定为常用设备;或者,如果加入该分布式组网的频率低于一定阈值,可以确定为临时访客;或者,如果加入该分布式组网的时长高于一定阈值,可以确定为常用设备;或者,如果加入该分布式组网的时长低于一定阈值,可以确定为临时访客;或者,如果加入该分布式组网的次数高于一定阈值,可以确定为常用设备;或者,如果加入该分布式组网的次数低于一定阈值,可以确定为临时访客;或者,如果加入该分布式组网的次数高于一定阈值且时长高于一定阈值,可以确定为常用设备;或者,如果加入该分布式组网的次数低于一定阈值且时长低于一定阈值,可以确定为临时访客;等,本申请实施例对此不作具体限定。该方式中也可以不提示如图56所示的用户界面。
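上述根据入网次数、时长等自动判定访问类型的方式,可以用如下 Python 片段示意(其中的阈值数值均为示例假设):

```python
def decide_access_type(join_count, total_hours,
                       count_threshold=10, hours_threshold=24):
    """根据入网次数与累计时长自动判定访问类型(示意,阈值为假设值)。

    次数与时长均高于阈值判定为常用设备,均低于阈值判定为临时访客,
    其余情况返回 None,可交由用户手动选择访问类型。
    """
    if join_count >= count_threshold and total_hours >= hours_threshold:
        return "常用设备"
    if join_count < count_threshold and total_hours < hours_threshold:
        return "临时访客"
    return None
```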
可能的实现方式中,如果用户上次已设置了设备的访问类型,则可以自动将该用户上次设置的设备的访问类型作为该设备的访问类型,且不显示如图56所示的用户界面。
可能的实现方式中,如果多个设备中,存在与登录的用户账号相同的多个,则可以自动将该登录的用户账号相同的多个确定为常用设备,且不显示如图56所示的用户界面。
可能的实现方式中,也可以不设置大屏、手机A或手机C的访问类型,大屏、手机A或手机C享有共同的权限,本申请实施例对此不作具体限定。该方式中也可以不提示如图56所示的用户界面。

可以理解,如果大屏、手机A或手机C设定了访问类型,后续可以基于访问类型执行相应的候选词库添加或删除步骤。如果大屏、手机A或手机C没有设定访问类型,后续可以执行与上述任一种访问类型对应的候选词库添加或删除步骤,本申请实施例对此不作限定。
图57示出了手机C中产生候选词的界面示意图。如图57所示,用户可以在输入框中输入“pozhimie”后,逐个选择候选字,得到“坡止咩”,该“坡止咩”可以作为候选词存储在手机C的候选词库中。
或者,用户也可以在手机C的编辑框中输入英文词组“apple”“banana”和“meat”,并逐个选择上述词组,得到候选英文词组“apple banana meat”,该“apple banana meat”可以作为候选词存储在手机C的候选词库中。
之后,手机C接入到分布式组网中,大屏和手机A可以同步到该“坡止咩”等的候选词。可以理解,同步候选词库的过程,可以没有用户界面,用户对同步候选词库的过程可以无感知。在后续手机A辅助大屏输入时,可以基于该“坡止咩”的候选词,实现快捷输入。大屏本身输入时,可以基于该“坡止咩”的候选词,实现快捷输入。手机本身输入时,可以基于该“坡止咩”的候选词,实现快捷输入。
示例性的,图58-60,示出了手机A利用“坡止咩”候选词辅助大屏输入的过程。
图58示出了大屏的一种用户界面示意图。如图58所示,用户可以通过遥控器等设备在大屏中选定输入法编辑框,大屏的输入法编辑框控件可以向大屏的输入法框架(input method framework,IMF)请求启动本地输入法,并传输数据通道到IMF,IMF通过分布式组网查询具有分布式能力的服务端,例如服务端可以包括手机A,则大屏可以连接手机A的辅助AA,向手机A请求辅助输入或者在手机A中弹出输入框等。
图59示出了手机A确定辅助大屏输入的用户界面示意图。如图59最左图所示的用户界面,手机A中可以弹出用于提示大屏请求辅助输入的通知,用户可以触发手机A中的通知以确认辅助大屏输入,进一步的,如图59中间图所示的用户界面,手机A中可以弹出用于辅助大屏输入的编辑框,进一步的,用户可以通过点击等触发如图59中间图所示的编辑框,手机A可以显示如图59最右图所示的用户界面,该用户界面中可以显示手机的虚拟键盘(或称为软键盘),用户后续可以利用手机A的虚拟键盘辅助大屏输入。
图60示出了手机A利用从手机C同步的“坡止咩”候选词辅助大屏输入的用户界面示意图。如图60的左图所示手机A的用户界面图,用户在手机A的输入框中输入“pozhimie”后,手机A的输入法可以基于同步的候选词库,显示“坡止咩”候选词,用户点击“坡止咩”候选词,则手机A的输入框中可以显示“坡止咩”,用户点击完成,可以进入图60的右图所示的大屏的用户界面图,在大屏的编辑框中,可以同步显示手机A输入框中的“坡止咩”。
可能的实现方式中,用户在如图60的左图所示手机A的输入框中输入时,输入框中的内容可以同步显示在如图60的右图所示的大屏的编辑框中,例如,用户在如图60的左图所示手机A的输入框中进行删除、高亮选定或光标移动等操作时,如图60的右图所示的大屏的编辑框中可以同步显示如手机A的输入框中进行的删除、高亮选定或光标移动等状态。
示例性的,图61示出了大屏利用“坡止咩”候选词实现快捷输入的用户界面示意图。如图61所示,用户在大屏的编辑框中通过遥控器等设备输入“pozhimie”或者其简写形式时,大屏可以基于从手机C同步的候选词“坡止咩”,在用户界面中显示候选词“坡止咩”,用户可以选定该候选词“坡止咩”,实现便捷输入。
示例性的,图62示出了手机A利用“坡止咩”候选词实现快捷输入的用户界面示意图。如图62所示,用户在手机A的本地输入法的输入框中输入“pozhimie”或者其简写形式时,手机A可以基于从手机C同步的候选词“坡止咩”,在用户界面中显示候选词“坡止咩”,用户可以选定该候选词“坡止咩”,实现便捷输入。
可以理解,在大屏、手机A和手机C连接在分布式组网的过程中,大屏、手机A和手机C均可以利用互相同步的候选词库,实现便捷的输入,在此不再赘述。
在大屏和手机A断开分布式组网后,因为大屏和手机A是常用设备,所以如图56对应的描述,大屏和手机A还可以保留从对方同步的候选词库,实现便捷的输入。
手机C是临时访客,一种可能的实现中,手机C断开分布式组网时,可以在手机C中删除手机C利用分布式组网同步的候选词库内容,以及在分布式组网的其他设备中删除手机C在接入分布式组网之前的候选词库的内容。
可能的实现方式中,在上述任意设备退出分布式组网时,可以在该设备中提示是否删除同步的候选词库,则用户可以灵活的选择删除还是保留同步的候选词库。示例性的,图63示出了一种可能的手机C的用户界面图,如图63所示,可以提示用户是否删除同步候选词库,并提供“是”和“否”的选项,用户可以结合需求选择适应的选项,实现删除或保留同步的候选词库。
示例性的,图64示出了手机C断开分布式组网,并删除手机C在分布式组网中同步的手机C本身的候选词库时,手机A的一种用户界面。如图64所示,因为候选词“坡止咩”从手机A的候选词库中删除,所以在手机A中输入“pozhimie”时,手机A推荐的候选词中,没有“坡止咩”的推荐。
另一种可能的实现中,如果手机C接入分布式组网前的候选词库中,在手机C接入分布式组网后被使用过部分候选词,则该部分被使用过的候选词可以在手机C断开分布式组网后,继续被分布式组网中的其他设备使用。在手机C断开分布式组网后,手机C可以在手机C中删除手机C利用分布式组网同步的候选词库内容,以及在分布式组网的其他设备中删除手机C在接入分布式组网之前没有被使用的候选词。
例如,手机C的候选词库中除了上述的“坡止咩”,还包括“绯里红”。因为手机A在辅助大屏输入时,使用了“坡止咩”,在手机C断开分布式组网后,该“坡止咩”可以作为手机A的候选词,继续被手机A和大屏使用。而“绯里红”因为没有被使用过,在手机C断开分布式组网时被清除,不能继续被手机A和大屏使用。
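上述“临时访客退网时删除未被使用过的候选词、保留被使用过的候选词”的清理逻辑,可以用如下 Python 片段示意(函数名与参数均为示例假设):

```python
def cleanup_on_guest_leave(lexicon, guest_words, used_words):
    """临时访客退网时,组网内其他设备的候选词清理(示意)。

    lexicon: 某留网设备当前的候选词列表;
    guest_words: 该访客入网前词库同步来的候选词;
    used_words: 组网期间被其他设备使用过的候选词,予以保留。
    返回清理后的候选词列表。
    """
    return [w for w in lexicon
            if w not in guest_words or w in used_words]
```

例如上文中“坡止咩”被手机A使用过而保留,“绯里红”未被使用过而被清除。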
需要说明的是,上述手机辅助大屏输入时的用户界面图均是示例性说明,可能的实现方式中,手机辅助大屏输入时的界面中,也可以同步大屏中的部分或全部内容,使得手机用户可以基于手机界面了解大屏的状态。
示例性的,图65示出了一种手机的用户界面。如图65所示,用户在利用手机辅助大屏输入时,可以将大屏的全部或部分内容投屏到手机中,例如在手机中显示大屏的编辑框相关的内容,并在大屏内容的上层显示手机的编辑框,这样用户在利用手机的编辑框中输入时,在手机的用户界面中可以同步看到大屏编辑框中的状态,用户在辅助输入时,不需要抬头看大屏中的输入状态。
需要说明的是,上述实施例中,以用户辅助大屏输入汉字为例进行示例,可能的实现方式中,用户可以辅助大屏进行英文词组输入或其他形式的文本输入,本申请实施例对辅助输入的具体内容不做限定。
在采用对应各个功能划分各个功能模块的情况下,如图66所示,示出了本申请实施例提供一种第一设备、第二设备或第三设备的一种可能的结构示意图,该第一设备、第二设备或第三设备包括:显示屏幕6601和处理单元6602。
其中,显示屏幕6601,用于支持第一设备、第二设备或第三设备执行上述实施例中的显示步骤,或者本申请实施例所描述的技术的其他过程。显示屏幕6601可以是触摸屏或其他硬件或硬件与软件的综合体。
处理单元6602,用于支持第一设备、第二设备或第三设备执行上述方法实施例中的处理步骤,或者本申请实施例所描述的技术的其他过程。
其中,上述方法实施例涉及的各步骤的所有相关内容均可以援引到对应功能模块的功能描述,在此不再赘述。
当然,电子设备包括但不限于上述所列举的单元模块。并且,上述功能单元的具体所能够实现的功能也包括但不限于上述实例所述的方法步骤对应的功能,电子设备的其他单元的详细描述可以参考其所对应方法步骤的详细描述,本申请实施例这里不予赘述。
在采用集成的单元的情况下,上述实施例中所涉及的第一设备、第二设备或第三设备可以包括:处理模块、存储模块和显示屏幕。处理模块用于对第一设备、第二设备或第三设备的动作进行控制管理。显示屏幕用于根据处理模块的指示进行内容显示。存储模块,用于保存第一设备、第二设备或第三设备的程序代码和数据。进一步的,该第一设备、第二设备或第三设备还可以包括输入模块,通信模块,该通信模块用于支持第一设备、第二设备或第三设备与其他网络实体的通信,以实现第一设备、第二设备或第三设备的通话,数据交互,Internet访问等功能。
其中,处理模块可以是处理器或控制器。通信模块可以是收发器、RF电路或通信接口等。存储模块可以是存储器。显示模块可以是屏幕或显示器。输入模块可以是触摸屏,语音输入装置,或指纹传感器等。
其中,上述通信模块可以包括RF电路,还可以包括无线保真(wireless fidelity,Wi-Fi)模块、近距离无线通信技术(near field communication,NFC)模块和蓝牙模块。RF电路、NFC模块、WI-FI模块和蓝牙模块等通信模块可以统称为通信接口。其中,上述处理器、RF电路、显示屏幕和存储器可以通过总线耦合在一起。
如图67所示,示出了本申请实施例提供的第一设备、第二设备或第三设备的又一种可能的结构示意图,包括:一个或多个处理器6701、存储器6702、摄像头6704和显示屏幕6703;上述各器件可以通过一个或多个通信总线6706通信。
其中,一个或多个计算机程序6705被存储在存储器6702中,并被配置为被一个或多个处理器6701执行;一个或多个计算机程序6705包括指令,指令用于执行上述任意步骤的显示方法。当然,电子设备包括但不限于上述所列举的器件,例如,上述电子设备还可以包括射频电路、定位装置、传感器等等。
本申请还提供以下实施例。需要说明的是,以下实施例的编号并不一定需要遵从前面实施例的编号顺序。
实施例51.一种设备通信方法,应用于包括第一设备、第二设备和第三设备的系统,所述方法包括:
所述第一设备显示包括第一编辑框的第一界面;
所述第一设备向所述第二设备和所述第三设备发送指示消息;
所述第二设备根据所述指示消息显示第二界面,所述第二界面包括第二编辑框;
所述第三设备根据所述指示消息显示第三界面,所述第三界面包括第三编辑框;
在所述第二编辑框中存在编辑状态的情况下,所述第一设备将所述编辑状态同步到所述第一编辑框中,以及所述第三设备将所述编辑状态同步到所述第三编辑框中;
或者,在所述第三编辑框中存在编辑状态的情况下,所述第一设备将所述编辑状态同步到所述第一编辑框中,以及所述第二设备将所述编辑状态同步到所述第二编辑框中;
或者,在所述第一编辑框中存在编辑状态的情况下,所述第二设备将所述编辑状态同步到所述第二编辑框中,以及所述第三设备将所述编辑状态同步到所述第三编辑框中。
实施例52.根据实施例51所述的方法,所述第二设备包括接口服务,所述接口服务用于所述第一设备与所述第二设备之间的编辑状态的同步。
实施例53.根据实施例51或52所述的方法,所述编辑状态包括下述一项或多项:文本内容、光标或文字内容的高亮标记。
实施例54.根据实施例51-53任一项所述的方法,所述第二设备根据所述指示消息显示第二界面,包括:
所述第二设备响应于所述指示消息显示通知界面;所述通知界面包括确认辅助输入的选项;
响应于对所述选项的触发操作,所述第二设备显示所述第二界面。
实施例55.根据实施例51-54任一项所述的方法,所述第二界面还包括:所述第一界面的全部或部分内容。
实施例56.根据实施例55所述的方法,所述第二编辑框与所述第一界面的全部或部分内容分层显示,且所述第二编辑框显示在所述第一界面的全部或部分内容的上层。
实施例57.根据实施例51-56任一项所述的方法,所述第二设备根据所述指示消息显示第二界面之后,所述方法还包括:
响应于对所述第二编辑框的触发,所述第二设备显示虚拟键盘;
所述第二设备根据所述虚拟键盘和/或所述第二编辑框中接收的输入操作,在所述第二编辑框中显示所述编辑状态。
实施例58.根据实施例51-57任一项所述的方法,所述第一设备包括下述任一项:电视、大屏或可穿戴设备;所述第二设备或所述第三设备包括下述任一项:手机、平板或可穿戴设备。
实施例59.根据实施例51-58任一项所述的方法,所述第一编辑框中编辑状态中包括所述第一设备的标识,和/或,所述第二编辑框中编辑状态中包括所述第二设备的标识,和/或,所述第三编辑框中编辑状态中包括所述第三设备的标识。
实施例510.根据实施例51-59任一项所述的方法,在所述第二编辑框和所述第三编辑框中同时接收到输入内容时,第一设备裁定所述第二编辑框的输入内容和所述第三编辑框的输入内容的显示方式。
实施例511.一种设备通信方法,应用于包括第一设备、第二设备和第三设备的系统,所述方法包括:
所述第一设备显示包括第一编辑框的第一界面;
所述第一设备向所述第二设备发送指示消息;
所述第二设备根据所述指示消息显示第二界面,所述第二界面包括第二编辑框;
所述第二设备向所述第三设备发送辅助输入请求;
所述第三设备根据所述辅助输入请求显示第三界面,所述第三界面包括第三编辑框;
在所述第二编辑框中存在编辑状态的情况下,所述第一设备将所述编辑状态同步到所述第一编辑框中,以及所述第三设备将所述编辑状态同步到所述第三编辑框中;
或者,在所述第三编辑框中存在编辑状态的情况下,所述第一设备将所述编辑状态同步到所述第一编辑框中,以及所述第二设备将所述编辑状态同步到所述第二编辑框中;
或者,在所述第一编辑框中存在编辑状态的情况下,所述第二设备将所述编辑状态同步到所述第二编辑框中,以及所述第三设备将所述编辑状态同步到所述第三编辑框中。
实施例512.一种设备通信方法,应用于包括第一设备、第二设备和第三设备的系统,所述方法包括:
所述第二设备显示包括所述第一设备的选项的第四界面;
响应于对所述第一设备的选项的选择操作,所述第二设备向所述第一设备发送指示消息;
所述第一设备显示包括第一编辑框的第一界面;
所述第二设备显示第二界面,所述第二界面包括第二编辑框;
所述第二设备向所述第三设备发送辅助输入请求;
所述第三设备根据所述辅助输入请求显示第三界面,所述第三界面包括第三编辑框;
在所述第二编辑框中存在编辑状态的情况下,所述第一设备将所述编辑状态同步到所述第一编辑框中,以及所述第三设备将所述编辑状态同步到所述第三编辑框中;
或者,在所述第三编辑框中存在编辑状态的情况下,所述第一设备将所述编辑状态同步到所述第一编辑框中,以及所述第二设备将所述编辑状态同步到所述第二编辑框中;
或者,在所述第一编辑框中存在编辑状态的情况下,所述第二设备将所述编辑状态同步到所述第二编辑框中,以及所述第三设备将所述编辑状态同步到所述第三编辑框中。
实施例513.一种设备通信方法,应用于第一设备,所述方法包括:
所述第一设备显示包括第一编辑框的第一界面;
所述第一设备向所述第二设备和所述第三设备发送指示消息;用于所述第二设备根据所述指示消息显示第二界面,所述第二界面包括第二编辑框,以及用于所述第三设备根据所述指示消息显示第三界面,所述第三界面包括第三编辑框;
在所述第二编辑框中存在编辑状态的情况下,所述第一设备将所述编辑状态同步到所述第一编辑框中;
或者,在所述第三编辑框中存在编辑状态的情况下,所述第一设备将所述编辑状态同步到所述第一编辑框中。
实施例514.一种设备通信方法,应用于第二设备,所述方法包括:
所述第二设备显示包括所述第一设备的选项的第四界面;
响应于对所述第一设备的选项的选择操作,所述第二设备向所述第一设备发送指示消息;用于所述第一设备显示包括第一编辑框的第一界面;
所述第二设备显示第二界面,所述第二界面包括第二编辑框;
所述第二设备向所述第三设备发送辅助输入请求;用于所述第三设备根据所述辅助输入请求显示第三界面,所述第三界面包括第三编辑框;
在所述第三编辑框中存在编辑状态的情况下,所述第二设备将所述编辑状态同步到所述第二编辑框中;
或者,在所述第一编辑框中存在编辑状态的情况下,所述第二设备将所述编辑状态同步到所述第二编辑框中。
实施例515.一种设备通信系统,包括第一设备、第二设备和第三设备,所述第一设备用于执行如实施例51-59、510-514任一项所述的第一设备的步骤,所述第二设备用于执行如实施例51-59、510-514任一项所述的第二设备的步骤,所述第三设备用于执行如实施例51-59、510-514任一项所述的第三设备的步骤。
实施例516.一种第一设备,包括:至少一个存储器和至少一个处理器;
所述存储器用于存储程序指令;
所述处理器用于调用所述存储器中的程序指令使得所述第一设备执行实施例51-59、510-514任一项所述的第一设备执行的步骤。
实施例517.一种第二设备,包括:至少一个存储器和至少一个处理器;
所述存储器用于存储程序指令;
所述处理器用于调用所述存储器中的程序指令使得所述第二设备执行实施例51-59、510-514任一项所述的第二设备执行的步骤。
实施例518.一种计算机可读存储介质,其上存储有计算机程序,使得所述计算机程序被第一设备的处理器执行时实现实施例51-59、510-514任一项所述的所述第一设备执行的步骤;或者,使得所述计算机程序被第二设备的处理器执行时实现实施例51-59、510-514任一项所述的所述第二设备执行的步骤;或者,使得所述计算机程序被第三设备的处理器执行时实现实施例51-59、510-514任一项所述的所述第三设备执行的步骤。
上述实施例51-实施例59、实施例510-实施例518的具体实现可以参照如图68-87的说明。
在利用手机辅助大屏输入的可能实现中,用户在手机的编辑框中输入关键字后,通常大屏中只能同步到手机编辑框中的文本内容。
这是因为,在通常的手机辅助大屏输入的实现中,只是简单的在大屏和手机间定义文本复制接口,因此只能将手机编辑框中的文本内容复制到大屏的编辑框中。
这样,如果用户在手机的编辑框中执行删除或插入文字操作时,手机侧光标移动,而大屏侧光标不显示或虽然显示但没有移动,使得大屏编辑框中文字的删除或插入过程不符合通常的编辑显示过程,影响用户观看体验。
基于此,本申请实施例提出了上述图7对应描述的系统框架,该框架具备了手机和大屏侧之间调用任意进程的可能,因此,光标位置显示或高亮区域显示可以利用本申请实施例的上述框架实现。
可以理解,本申请实施例可以应用于图1-3任一应用场景中,实现在手机辅助大屏输入时手机编辑框中的任意编辑状态均可以在大屏的编辑框中同步。其中,编辑状态可以指在手机编辑框中编辑时所能改变的状态,例如包括编辑框中的文本内容、编辑框中的光标位置、和/或编辑框中的高亮区域等。
示例性,图68示出了本申请实施例的具体系统架构示意图。
如图68所示,本申请实施例中以分布式组网中包括大屏(客户端)和手机(服务端)为例,示意性说明大屏和手机之间同步双方编辑框中的编辑状态的过程。
用户可以通过遥控器等点击大屏中的应用(application,APP)提供的编辑框,大屏中可以启动大屏的本地输入法,并传递数据通道接口给大屏的IMF,大屏的IMF可以在分布式组网中查询具有远程辅助输入能力的设备,并连接具有远程辅助输入能力的手机的辅助AA。
手机的辅助AA可以拉起手机的本地输入法应用,例如在手机中弹出用于辅助大屏输入的编辑框。另外,手机的辅助AA可以通过分布式组网返回辅助AA的RPC对象给大屏,大屏可以将大屏输入通道相关的RPC对象给手机。则后续手机可以根据大屏输入通道相关的RPC对象同步大屏编辑框中的编辑状态,大屏可以根据手机辅助AA的RPC对象向手机获取手机的编辑框中的编辑状态。
示例性的,用户在手机中,基于手机的输入法APP或点击手机中基于辅助AA拉起的编辑框改变编辑状态时,可以遍历辅助AA所持有的大屏的输入通道相关的RPC对象,利用大屏的输入通道相关的RPC对象向大屏同步编辑状态的更新。其中,手机编辑状态的更新可以包括下述的一种或多种:手机编辑框中的文本内容添加或删除、手机编辑框中的光标移动、手机编辑框中某段文字的高亮标记等。
大屏在同步到手机中的更新的编辑状态后,可以调用大屏本地接口更新大屏的编辑框中的编辑状态。
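上述编辑状态(文本内容、光标位置、高亮区间)在大屏与手机编辑框之间的同步,可以用如下 Python 片段示意。其中以本地对象引用代替 RPC 对象,类名与字段名均为示例假设:

```python
from dataclasses import dataclass

@dataclass
class EditState:
    """编辑状态:文本内容、光标位置、高亮选中区间(简化示意)。"""
    text: str = ""
    cursor: int = 0
    selection: tuple = None      # (起始, 结束),None 表示无高亮

class EditBox:
    """大屏或手机侧的编辑框,持有对端对象以同步编辑状态。"""
    def __init__(self):
        self.state = EditState()
        self.peers = []          # 以本地引用代替所持有的对端 RPC 对象

    def bind(self, other):
        """建立双向同步关系(对应双方互相持有 RPC 对象)。"""
        self.peers.append(other)
        other.peers.append(self)

    def update(self, **changes):
        """本端编辑状态改变后,遍历对端对象同步更新。"""
        for k, v in changes.items():
            setattr(self.state, k, v)
        for peer in self.peers:
            peer.state = EditState(**vars(self.state))
```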
可能的实现方式中,如果分布式组网中还有其他与大屏连接的其他设备,且大屏与该其他设备也互相持有各自的RPC对象,则大屏可以采用上述大屏与手机同步编辑状态的更新的方式,在大屏中同步该其他设备中的编辑状态的更新;手机也可以采用上述大屏与手机同步编辑状态的更新的方式,在手机中同步该其他设备中的编辑状态的更新。
可能的实现中,如果大屏的编辑框中编辑状态发生改变,例如,用户在手机和/或其他设备辅助大屏输入的过程中,用遥控在大屏的编辑框中进行编辑操作,则大屏编辑框中编辑状态的改变也可以通过分布式组网以及手机和/或其他设备的RPC对象同步到手机和/或其他设备中,该手机和/或其他设备同步到大屏的编辑状态后,可以调用该手机和/或其他设备的本地接口,以更新该手机和/或其他设备的编辑状态。
需要说明的是,图68对应的实施例是本申请实施例的一种可能实现方式。在其他可能的实现方式中,可以是用户通过遥控器选定大屏上某应用提供的编辑框下的虚拟键盘触发后续的辅助大屏输入的过程,或者,可以是用户在手机中触发辅助大屏输入的过程,本申请实施例对此不作具体限定。
结合上述的描述,下面对大屏和手机交互的用户界面进行示例性说明。
示例性的,图69-70示出了用户触发进行辅助输入的用户界面示意图。
图69示出了大屏的一种用户界面图。如图69所示,用户可以利用遥控器6901选定大屏中的编辑框6902,则可以触发执行本申请实施例后续的手机辅助大屏输入的过程。或者,用户可以利用遥控器6901选定大屏中的虚拟键盘中任意内容6902,则可以触发执行本申请实施例后续的手机辅助大屏输入的过程。具体的手机辅助大屏输入的方式将在后续实施例中说明,在此不再赘述。
需要说明的是,图69示出了大屏的用户界面图中设置一个编辑框的示意图。可能的实现方式中,大屏的用户界面中可以包括多个编辑框,用户触发任一个编辑框均可以触发本申请实施例后续的手机辅助大屏输入的过程,本申请实施例对此不作具体限定。
图70示出了手机的一种用户界面图。例如,用户可以通过在手机的主屏幕下拉等方式,显示如图70的a图所示用户界面,在如图70的a图所示用户界面中,可以包括手机的一项或多项下述功能:WLAN、蓝牙、手电筒、静音、飞行模式、移动数据、无线投屏、截屏或辅助输入7001。其中辅助输入7001可以为本申请实施例的手机辅助大屏输入的功能。
可能的实现方式中,在用户点击辅助输入7001后,手机可以查找处于同一分布式组网中的大屏等设备,并获取大屏中的搜索框,建立与大屏之间的通信连接,在手机中可以进一步显示如图70的c图所示用户界面,在如图70的c图所示用户界面中,可以显示用于辅助大屏输入的编辑框,用户可以基于该编辑框辅助大屏进行输入。
可能的实现方式中,如果手机查到处于同一分布式组网中的大屏等设备的数量为多个,手机中还可以显示如图70的b图所示用户界面,在如图70的b图所示用户界面,可以显示多个大屏的标识,大屏的标识可以是该大屏的设备号、用户名或昵称等。用户可以在如图70的b图所示用户界面中选择希望辅助输入的大屏(例如点击大屏A或大屏B),并进入如图70的c图所示用户界面,本申请实施例对此不作具体限定。
在用户通过上述任意方式触发进行大屏输入后,示例性的,大屏可以查找分布式组网中的具有辅助输入能力的辅助设备(例如手机),并自动确定用于辅助输入的手机,或者向分布式组网中查找到的全部手机发送通知。
例如,如果大屏查找到分布式组网中存在一个手机,则大屏可以自动选择辅助输入的设备为该手机。
例如,如果大屏查找到分布式组网中存在多个手机,但是该多个手机中,存在用户设置的默认辅助输入的手机,则大屏可以自动选择该默认辅助输入的手机为辅助输入的设备。
例如,如果大屏查找到分布式组网中存在多个手机,但是该多个手机中,存在用户上次进行辅助输入时选择的辅助输入的手机,则大屏可以自动选择该用户上次进行辅助输入时选择的辅助输入的手机为辅助输入的设备。
例如,如果大屏查找到分布式组网中存在多个手机,大屏获取该多个手机中,被用户选择为辅助输入的频次最高的手机,则大屏可以自动选择该被用户选择为辅助输入的频次最高的手机为辅助输入的设备。
例如,如果大屏查找到分布式组网中存在多个手机,但是该多个手机中,存在与大屏所登录的用户账号相同的手机,则大屏可以自动选择该与大屏所登录的用户账号相同的手机为辅助输入的设备。
示例性的,以大屏向分布式组网中的手机发送通知为例,下面对大屏和手机同步编辑状态的用户界面进行示例性说明。
分布式组网中可以接入有一个或多个手机,在分布式组网中接入一个手机的情况下,该一个手机可以辅助大屏进行输入,后续如果有其他手机也接入该分布式组网,则分布式组网中可以包括多个手机,多个手机可以共同辅助大屏进行输入。在分布式组网中接入多个手机的情况下,多个手机可以共同辅助大屏进行输入。
例如,在家庭中,老年人持有手机A辅助大屏输入,但是老年人可能输入速度较慢,则持有手机B的年轻人可以与手机A一起共同辅助大屏输入,手机A、手机B和大屏的编辑框中的内容可以互相同步,老年人也可以基于手机A了解到年轻人基于手机B的输入情况。或者,老年人持有手机A辅助大屏输入,但是老年人可能输入速度较慢,老年人可以请求持有手机B的年轻人共同辅助大屏输入,则手机A可以向手机B发送请求,请求手机B辅助输入,手机B可以基于手机A的请求共同辅助大屏输入。
示例性的,下面以分布式组网中包括大屏、手机A和手机B为例,对手机A和手机B同步大屏的初始编辑状态,以及大屏和手机B同步更新手机A的编辑状态进行示例说明。
一种可能的实现方式中,大屏、手机A和手机B已经接入分布式组网。大屏可以连接手机A的辅助AA和手机B的辅助AA,向手机A请求辅助输入或者在手机A中弹出输入框,以及向手机B请求辅助输入或者在手机B中弹出输入框等。
示例性的,图71示出了手机A确定辅助大屏输入的用户界面示意图。如图71最左图所示的用户界面,手机A中可以弹出用于提示大屏请求辅助输入的通知,用户可以触发手机A中的通知以确认辅助大屏输入,进一步的,如图71中间图所示的用户界面,手机A中可以弹出用于辅助大屏输入的编辑框,进一步的,用户可以通过点击等触发如图71中间图所示的编辑框,手机A可以显示如图71最右图所示的用户界面,该用户界面中可以显示手机的虚拟键盘(或称为软键盘),用户后续可以利用手机A的虚拟键盘辅助大屏输入。
或者,用户在大屏中选择手机A后,手机A中可以不接收到通知,而是弹出如图72左图所示的用于辅助大屏输入的编辑框,进一步的,用户可以通过点击等触发如图72左图所示的编辑框,手机A可以显示如图72右图所示的用户界面,该用户界面中可以显示手机的虚拟键盘(或称为软键盘),用户后续可以利用手机A的虚拟键盘辅助大屏输入。
手机B确定辅助大屏输入的用户界面示意图与手机A类似,在此不再赘述。
需要说明的是,如果本申请实施例中手机A和手机B采用如图70对应的方式触发进行辅助大屏输入,则省略如图71-72所示的用户界面图。
另一种可能的实现方式中,大屏、手机A和手机B已经接入分布式组网。大屏可以连接手机A的辅助AA,向手机A请求辅助输入或者在手机A中弹出输入框;之后,手机A请求手机B共同辅助大屏输入。
示例性的,如图73所示,图73中手机A中可以显示请求手机B辅助输入的界面,用户可以通过点击手机A中的确定选项,向手机B请求辅助大屏输入。图73中手机B中可以通知手机A请求辅助大屏输入,用户可以在手机B中接受手机A的请求,在手机B中显示如图72所示的编辑框界面,实现手机B与手机A共同辅助大屏输入。
又一种可能的实现方式中,初始时,大屏和手机A接入分布式组网,手机A辅助大屏输入,之后,手机B接入分布式组网,手机B中可以显示用于提示用户是否共同辅助输入的界面。示例性的,如图74所示,手机B中显示提示用户是否共同辅助输入的界面,用户可以在手机B中点击确定选项,在手机B中显示如图72所示的编辑框界面,实现与手机A共同辅助大屏输入。
可以理解,手机B实现与手机A共同辅助大屏输入的方式还可以根据实际应用场景设定,本申请实施例对此不作具体限定。
可能的实现方式中,在手机A或手机B确定辅助大屏输入时,还可以在手机A、手机B和大屏之间执行身份认证或鉴权等步骤,以提升通信安全,本申请实施例对此不作具体限定。
In a possible implementation, if input content already exists in the edit box of the large screen when the large screen connects to mobile phone A or mobile phone B, that input content may be synchronized to the edit box of mobile phone A or mobile phone B.
For example, FIG. 75 is a schematic framework diagram of synchronizing the input content in the edit box of the large screen to the edit box of mobile phone A or mobile phone B.
When the user uses a remote-control button to click the edit box of an application on the large screen, the edit-box control requests the IMF to start the local input method and passes an input data channel to the IMF. The IMF queries the distributed network for servers with distributed-input capability; upon finding that mobile phone A and mobile phone B in the network can provide assisted-input capability for the large screen, it connects to the distributed-input auxiliary AAs of mobile phone A and mobile phone B. After the connection is established, some preprocessing operations may be performed (for example, notifying the user to confirm, or entering an authentication code), and the auxiliary-AA dialog edit boxes for assisted input pop up on mobile phone A and mobile phone B together with their respective input-method soft keyboards (as shown in FIG. 71). After the connection is established, the large screen holds, in its callback, the RPC objects of the auxiliary AAs of mobile phone A and mobile phone B, and the RPC object wrapping the large screen's input data channel is passed to the auxiliary-AA sides of mobile phone A and mobile phone B. Through the RPC object associated with the input data channel passed from the large screen, mobile phone A and mobile phone B can obtain the initial editing state from the large-screen side, and then call local interfaces to apply that initial editing state. In this way, the complete editing state of the large screen can be synchronized to the mobile phones, so that the user does not need to re-enter, on a mobile phone, the initial input content already present in the edit box of the large screen.
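As a rough model of this handover, the sketch below uses plain Python objects in place of the IMF, the auxiliary AAs, and the RPC objects; all class and method names are assumptions made for illustration, not the embodiment's actual interfaces:

```python
class InputDataChannel:
    """Stands in for the RPC object that wraps the large screen's input data channel."""
    def __init__(self, screen):
        self._screen = screen

    def get_edit_state(self):
        # the phone side pulls the initial editing state from the screen side
        return dict(self._screen.edit_state)

class LargeScreen:
    def __init__(self, text=""):
        self.edit_state = {"text": text, "cursor": len(text)}
        self.assist_phones = []              # stands in for held auxiliary-AA RPC objects

    def connect(self, phone):
        # after the connection is established, pass the wrapped data channel over
        self.assist_phones.append(phone)
        phone.on_connected(InputDataChannel(self))

class Phone:
    def __init__(self, name):
        self.name = name
        self.edit_state = {"text": "", "cursor": 0}

    def on_connected(self, channel):
        # local interface call: update the pop-up edit box with the initial state
        self.edit_state = channel.get_edit_state()

screen = LargeScreen(text="hello")
phone_a, phone_b = Phone("A"), Phone("B")
screen.connect(phone_a)
screen.connect(phone_b)
# both phones now start from the screen's pre-existing input
```

The point of the pull-on-connect design is that a phone joining late still sees the full editing state rather than only subsequent deltas.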
It can be understood that FIG. 71 and FIG. 72 show the user interfaces of mobile phone A or mobile phone B for the case in which the edit box of the large screen contains no initial input content. If the edit box of the large screen does contain initial input content, that content is synchronized into the edit box of mobile phone A or mobile phone B. For ease of description, the following describes the process of a mobile phone assisting the large screen with input using the case in which the edit box of the large screen contains no initial input content.
For example, FIG. 76 is a schematic diagram of the user interfaces in which the user assists the large screen with input through the edit box of mobile phone B. As shown in the mobile phone B interface on the left of FIG. 76, the user may enter "狮子" (lion) in the edit box of mobile phone B, and a cursor may be displayed after "狮子"; as shown in the large-screen interface on the right of FIG. 76, "狮子" and the cursor may be synchronized into the edit box of the large screen.
FIG. 77 is a schematic diagram of the user interfaces in which the user moves the cursor in the edit box of mobile phone B. As shown in the mobile phone B interface on the left of FIG. 77, the user may move the cursor to before "狮子" in the edit box of mobile phone B and insert "老" (old) at the cursor; as shown in the large-screen interface on the right of FIG. 77, the cursor before "狮子" and the "老" before the cursor may be synchronized into the edit box of the large screen.
FIG. 78 is a schematic diagram of the user interfaces in which the user highlight-selects a target word in the edit box of mobile phone B. As shown in the mobile phone B interface on the left of FIG. 78, the user may highlight-select "老" in the edit box of mobile phone B; as shown in the large-screen interface on the right of FIG. 78, the highlighted "老" may be synchronized into the edit box of the large screen and the edit box of mobile phone A.
It can be understood that when the user assists the large screen with input on mobile phone A, the user interfaces of mobile phone A may be similar to those of mobile phone B and are not described again herein.
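The editing state exchanged in FIG. 76 to FIG. 78 has three parts: text content, a cursor position, and a highlighted selection. A minimal model of those operations follows; the field and method names are assumptions for illustration:

```python
from dataclasses import dataclass

@dataclass
class EditState:
    text: str = ""
    cursor: int = 0              # index the cursor sits at within `text`
    selection: tuple = (0, 0)    # highlighted range [start, end)

    def insert(self, s):
        """Insert text at the cursor and advance the cursor past it."""
        self.text = self.text[:self.cursor] + s + self.text[self.cursor:]
        self.cursor += len(s)

    def select(self, start, end):
        """Highlight-select the characters in [start, end)."""
        self.selection = (start, end)

# mobile phone B types "狮子", moves the cursor to the front, inserts "老",
# then highlight-selects "老"; the resulting state is what every edit box mirrors
state = EditState()
state.insert("狮子")
state.cursor = 0
state.insert("老")
state.select(0, 1)
```

Synchronizing this one structure, rather than raw keystrokes, is what lets cursor moves and highlights replicate across devices exactly like text.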
In a possible implementation, when both mobile phone A and mobile phone B receive the large screen's request for assisted input, the user of mobile phone A may confirm assisting the large screen with input, for example by tapping a control for agreeing to assist, and the user of mobile phone B may do likewise. Subsequently, the editing state in the edit box of mobile phone A can be synchronized to the edit boxes of the large screen and mobile phone B, the editing state in the edit box of mobile phone B can be synchronized to the edit boxes of the large screen and mobile phone A, and the editing state in the edit box of the large screen can be synchronized to the edit boxes of mobile phone A and mobile phone B.
FIG. 79 is a schematic diagram of the user interfaces in which the user assists the large screen with input using mobile phone A and mobile phone B. For example, in the mobile phone A interface shown in FIG. 79, the user may enter "老狮子" in the edit box of mobile phone A, move the cursor between "老" and "狮子", and highlight-select "老". In the large-screen interface shown in FIG. 79, the edit box of the large screen displays the same editing state as the edit box of mobile phone A; in the mobile phone B interface shown in FIG. 79, the edit box of mobile phone B likewise displays the same editing state as the edit box of mobile phone A.
In a possible implementation, as shown in the mobile phone A and mobile phone B interfaces in FIG. 79, mobile phone A may have entered "老" and selected "老" on its virtual keyboard, so "老" is displayed in the edit boxes of both mobile phone A and mobile phone B. Meanwhile, "大" may have been entered on mobile phone B without being selected on the virtual keyboard, so "大" is not displayed in the edit box of either phone. In other words, when mobile phone A and mobile phone B jointly assist the large screen with input, the content in their edit boxes is identical, while the content outside the edit boxes may be the same or different.
In a possible implementation, if mobile phone A enters "老" and selects "老" on its virtual keyboard while mobile phone B enters "大" and selects "大" on its virtual keyboard, the large screen may arbitrate whether "老" is displayed before "大" or "大" before "老". The arbitration may be based on the order in which "老" from mobile phone A and "大" from mobile phone B were received, on the assisted-input frequencies of mobile phone A and mobile phone B, or may be random; this is not specifically limited in the embodiments of this application.
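The arbitration described above could be sketched as follows; treating receive time as the primary key with assisted-input frequency as a tie-break is one assumed interpretation of the options listed, and the data shape is illustrative:

```python
def arbitrate(commits):
    """Decide the display order of commits received concurrently from several phones.

    Each commit is a dict such as {"text": "老", "recv_time": 1.0, "freq": 5}:
    commits received earlier are placed first, and equal receive times are
    broken by assisted-input frequency (more frequent helpers first).
    """
    ordered = sorted(commits, key=lambda c: (c["recv_time"], -c["freq"]))
    return "".join(c["text"] for c in ordered)
```

A random tie-break, which the text also allows, would simply replace the frequency key.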
FIG. 80 is a schematic diagram of another set of user interfaces in which the user assists the large screen with input using mobile phone A and mobile phone B.
In one example, in the mobile phone B interface of FIG. 80, building on FIG. 79, the user moves the cursor in the edit box of mobile phone B to after "老狮子" and enters "王" after "老狮子". In the large-screen interface shown in FIG. 80, the edit box of the large screen displays the same editing state as the edit box of mobile phone B; in the mobile phone A interface shown in FIG. 80, the edit box of mobile phone A likewise displays the same editing state as the edit box of mobile phone B.
In another example, in the large-screen interface of FIG. 80, building on FIG. 79, the user moves the cursor in the edit box of the large screen to after "老狮子" and enters "王" after "老狮子". In the mobile phone A interface shown in FIG. 80, the edit box of mobile phone A displays the same editing state as the edit box of the large screen; in the mobile phone B interface shown in FIG. 80, the edit box of mobile phone B likewise displays the same editing state as the edit box of the large screen.
For example, FIG. 81 is a schematic diagram of a processing logic used when a mobile phone assists the large screen with input.
As shown in FIG. 81, when the user updates the editing state by operating the auxiliary AA of mobile phone A (an editing-state update may include the user entering or deleting text with the input method of mobile phone A, moving the cursor in the text edit box, or highlight-selecting a segment of text in the edit box), the edit box of mobile phone A's auxiliary AA captures the change in the editing state, finds that it already holds the RPC object of the large screen's input data channel, and synchronizes the editing state to the large-screen side through a proxy wrapping the RPC object. The large-screen side calls its local interfaces for changing the editing state to synchronize the editing state updated by mobile phone A.
In a possible implementation, when updating the editing state, the large screen queries the IMF as to whether it holds RPC objects of the auxiliary AAs of other servers. Here it finds that it holds the RPC object of mobile phone B's auxiliary AA, so it notifies mobile phone B through that RPC object to synchronize the editing state and passes along the synchronization factor. Mobile phone B calls its local interfaces for changing the editing state to synchronize the editing state passed from the large screen. By checking the synchronization factor, mobile phone B finds that the source of this update is mobile phone A, which is likewise a distributed-input server in the network; it therefore does not continue to propagate the update to the clients in the network, and the update is complete.
In this way, when the user operates any device in the distributed network to update that device's editing state, every other device in the distributed network can synchronize the updated editing state.
In a possible implementation, the distributed network may further include multiple large screens. For example, FIG. 82 is a schematic diagram of the user interfaces in which multiple devices assist one another with input when the distributed network includes large screen A, large screen B, mobile phone A, and mobile phone B.
As shown in FIG. 82, the user may edit "老狮子王" using mobile phone A, mobile phone B, and/or large screen A, and large screen B can then also synchronize "老狮子王" into its edit box.
It should be noted that when the distributed network includes multiple large screens, a synchronization loop chain may arise. For example, FIG. 83 is a schematic diagram of such a synchronization loop chain. As shown in FIG. 83, when the user operates mobile phone A to update the editing state, mobile phone A synchronizes to large screen A and large screen B; large screen A detects that mobile phone B also exists in the distributed network and therefore synchronizes to mobile phone B; mobile phone B finds that large screen B also exists in the distributed network and therefore synchronizes to large screen B; large screen B in turn synchronizes back to mobile phone A, producing a synchronization loop chain.
Based on this, to suppress synchronization loop chains, the embodiments of this application introduce a synchronization factor into the input-synchronization technique for the distributed network. The synchronization factor records the origin of each update; for example, it may include information such as the device ID of the update initiator and/or its end type (server or client). The synchronization factor is passed along with every editing-state update, and a device checks it when applying an update: if the update was initiated by a server, the synchronization factor records that the update originated from a server, so once the update has been propagated to the other servers it is not propagated further.
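The loop-suppression rule can be checked with a small simulation. In the sketch below, phones are the "server" end and large screens the "client" end; an update flows only between opposite ends, and a device whose end type matches the origin recorded in the synchronization factor applies the update but does not forward it. The function and data shapes are illustrative assumptions:

```python
def propagate(roles, links, origin):
    """Flood one editing-state update from `origin` through the network.

    roles: {device: "server" or "client"}; links: {device: [peers]}.
    The synchronization factor carries the origin's end type; a device of the
    same end type as the origin stops forwarding, which breaks the loop chain.
    Returns every device that applied the update (each exactly once).
    """
    sync_factor = {"device": origin, "end": roles[origin]}
    updated = [origin]
    queue = [origin]
    while queue:
        current = queue.pop()
        # same end type as the origin: apply the update but do not forward it
        if current != origin and roles[current] == sync_factor["end"]:
            continue
        for peer in links[current]:
            if roles[peer] == roles[current]:
                continue                     # synchronization flows between ends only
            if peer not in updated:
                updated.append(peer)
                queue.append(peer)
    return updated

# the FIG. 83 topology: two phones (servers) fully connected to two screens (clients)
roles = {"phoneA": "server", "phoneB": "server",
         "screenA": "client", "screenB": "client"}
links = {"phoneA": ["screenA", "screenB"], "phoneB": ["screenA", "screenB"],
         "screenA": ["phoneA", "phoneB"], "screenB": ["phoneA", "phoneB"]}
```

Starting from phoneA, every device applies the update exactly once and the chain terminates at phoneB instead of cycling back through the screens.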
For example, FIG. 84 is a schematic diagram of a processing logic for the mobile phones assisting the large screens with input in FIG. 82.
As shown in FIG. 84, when the user operates large screen A to update the editing state of large screen A on its own (the update may include the user entering or deleting text with the input method of large screen A, moving the cursor in the APP's text edit box, or highlight-selecting a segment of text in the edit box), the edit box of the application APP on large screen A captures the change in the editing state and learns from the IMF that it already holds the RPC objects returned by the auxiliary AAs of mobile phone A and mobile phone B. Through proxies wrapping those RPC objects, it synchronizes the editing state to mobile phone A and mobile phone B and passes along the synchronization factor. The auxiliary-AA sides of mobile phone A and mobile phone B call their local interfaces for changing the editing state to synchronize the editing state updated by large screen A. When synchronizing the editing state, mobile phone A or mobile phone B queries the IMF as to whether it holds RPC objects of data channels passed from other clients; here it finds that it holds the RPC object of the input data channel of large screen B, so it notifies large screen B through that RPC object to synchronize the editing state and passes along the synchronization factor. Large screen B calls its local interfaces for changing the editing state to synchronize the editing state passed from the mobile phone. By checking the synchronization factor, large screen B finds that the source of this update is large screen A, which is likewise a distributed-input client in the network; it therefore does not continue to propagate the update to the servers in the network, and the update is complete.
In this way, when the user operates any device in the distributed network to update that device's editing state, every other device in the distributed network can synchronize the updated editing state; and because the synchronization factor is passed along whenever a device in the distributed network updates the editing state, synchronization loop chains are avoided.
It should be noted that the foregoing user-interface diagrams of a mobile phone assisting the large screen with input are all illustrative. In a possible implementation, the interface of the mobile phone assisting the large screen with input may also synchronize some or all of the content of the large screen, so that the mobile phone user can learn the state of the large screen from the mobile phone interface.
For example, FIG. 85 shows such a user interface of a mobile phone. As shown in FIG. 85, when using the mobile phone to assist the large screen with input, all or part of the content of the large screen may be projected onto the mobile phone; for example, the content related to the large screen's edit box is displayed on the mobile phone, with the mobile phone's edit box displayed in a layer above it. In this way, while entering text in the mobile phone's edit box, the user can simultaneously see the state of the large screen's edit box in the mobile phone interface and does not need to look up at the large screen to check the input state.
It should be noted that the foregoing embodiments use the user assisting the large screen with Chinese-character input as an example. In a possible implementation, the user may assist the large screen with English phrase input or other forms of text input; the embodiments of this application do not limit the specific content of the assisted input.
In the case where functional modules are divided by function, FIG. 86 is a schematic diagram of a possible structure of the first device, second device, or third device provided in the embodiments of this application. The first device, second device, or third device includes a display screen 8601 and a processing unit 8602.
The display screen 8601 is configured to support the first device, second device, or third device in performing the display steps in the foregoing embodiments, or other processes of the techniques described in the embodiments of this application. The display screen 8601 may be a touchscreen, other hardware, or a combination of hardware and software.
The processing unit 8602 is configured to support the first device, second device, or third device in performing the processing steps in the foregoing method embodiments, or other processes of the techniques described in the embodiments of this application.
All relevant content of the steps in the foregoing method embodiments may be cited in the functional descriptions of the corresponding functional modules and is not described again herein.
Of course, the electronic device includes, but is not limited to, the unit modules listed above. Moreover, the functions that the foregoing functional units can implement include, but are not limited to, the functions corresponding to the method steps described in the foregoing examples. For detailed descriptions of other units of the electronic device, refer to the detailed descriptions of their corresponding method steps; they are not described again in the embodiments of this application.
In the case where an integrated unit is used, the first device, second device, or third device in the foregoing embodiments may include a processing module, a storage module, and a display screen. The processing module is configured to control and manage the actions of the first device, second device, or third device; the display screen is configured to display content according to the instructions of the processing module; and the storage module is configured to store the program code and data of the first device, second device, or third device. Further, the first device, second device, or third device may also include an input module and a communication module, where the communication module is configured to support communication between the first device, second device, or third device and other network entities, so as to implement functions such as calling, data exchange, and Internet access.
The processing module may be a processor or a controller. The communication module may be a transceiver, an RF circuit, a communication interface, or the like. The storage module may be a memory. The display module may be a screen or a display. The input module may be a touchscreen, a voice input apparatus, a fingerprint sensor, or the like.
The communication module may include an RF circuit, and may further include a wireless fidelity (Wi-Fi) module, a near field communication (NFC) module, and a Bluetooth module. Communication modules such as the RF circuit, NFC module, Wi-Fi module, and Bluetooth module may collectively be referred to as a communication interface. The processor, RF circuit, display screen, and memory may be coupled together through a bus.
FIG. 87 is a schematic diagram of another possible structure of the first device, second device, or third device provided in the embodiments of this application, including one or more processors 8701, a memory 8702, a camera 8704, and a display screen 8703; these components may communicate through one or more communication buses 8706.
One or more computer programs 8705 are stored in the memory 8702 and configured to be executed by the one or more processors 8701; the one or more computer programs 8705 include instructions for performing the display method of any of the foregoing steps. Of course, the electronic device includes, but is not limited to, the components listed above; for example, the electronic device may further include a radio frequency circuit, a positioning apparatus, sensors, and the like.
An embodiment of this application further provides a computer storage medium including computer instructions that, when run on an electronic device, cause the electronic device to perform the display method of any of the foregoing steps.
An embodiment of this application further provides a computer program product that, when run on a computer, causes the computer to perform the display method of any of the foregoing steps.
An embodiment of this application further provides an apparatus having functions for implementing the behavior of the electronic device in each of the foregoing display methods. The functions may be implemented by hardware, or by hardware executing corresponding software; the hardware or software includes one or more modules corresponding to the foregoing functions.
The electronic device, computer storage medium, computer program product, and apparatus provided in the embodiments of this application are all configured to perform the corresponding methods provided above; therefore, for the beneficial effects they can achieve, refer to the beneficial effects of the corresponding methods provided above, which are not described again herein.
From the description of the foregoing implementations, a person skilled in the art can clearly understand that, for convenience and brevity of description, only the division of the foregoing functional modules is used as an example. In practical applications, the foregoing functions may be allocated to different functional modules as required; that is, the internal structure of the apparatus may be divided into different functional modules to complete all or some of the functions described above. For the specific working processes of the systems, apparatuses, and units described above, refer to the corresponding processes in the foregoing method embodiments; they are not described again herein.
In the several embodiments provided in this application, it should be understood that the disclosed systems, apparatuses, and methods may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative: the division into modules or units is merely a division by logical function, and other divisions are possible in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Furthermore, the mutual couplings, direct couplings, or communication connections shown or discussed may be implemented through indirect couplings or communication connections between interfaces, apparatuses, or units, and may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of this application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions of the embodiments of this application, in essence, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to perform all or some of the steps of the methods described in the embodiments of this application. The foregoing storage medium includes any medium that can store program code, such as a flash memory, a removable hard disk, a read-only memory, a random access memory, a magnetic disk, or an optical disc.
The foregoing descriptions are merely specific implementations of the embodiments of this application, but the protection scope of the embodiments of this application is not limited thereto. Any variation or replacement within the technical scope disclosed in the embodiments of this application shall fall within the protection scope of the embodiments of this application. Therefore, the protection scope of the embodiments of this application shall be subject to the protection scope of the claims.

Claims (20)

  1. A device communication method, applied to a system including a first device, a second device, and a third device, the method comprising:
    the first device displaying a first interface including a first edit box;
    the first device sending an indication message to the second device and the third device;
    the second device displaying a second interface according to the indication message, the second interface including a second edit box;
    in a case where an editing state exists in the second edit box, the first device synchronizing the editing state into the first edit box;
    the third device sending a preemption message to the first device; and
    the third device displaying a third interface including a third edit box, the third edit box being synchronized with the editing state in the first edit box.
  2. The method according to claim 1, wherein the second device includes an interface service, and the interface service is used for synchronization of the editing state between the first device and the second device.
  3. The method according to claim 1 or 2, wherein the editing state includes one or more of the following: text content, a cursor, or a highlight mark on text content.
  4. The method according to any one of claims 1 to 3, wherein the second device displaying a second interface according to the indication message comprises:
    the second device displaying a first notification interface in response to the indication message, the first notification interface including an option for confirming assisted input; and
    in response to a trigger operation on the option, the second device displaying the second interface.
  5. The method according to any one of claims 1 to 4, wherein the second interface further includes all or part of the content of the first interface.
  6. The method according to claim 5, wherein the second edit box and the all or part of the content of the first interface are displayed in layers, and the second edit box is displayed in a layer above the all or part of the content of the first interface.
  7. The method according to any one of claims 1 to 6, wherein after the second device displays the second interface according to the indication message, the method further comprises:
    in response to a trigger on the second edit box, the second device displaying a virtual keyboard; and
    the second device displaying the editing state in the second edit box according to an input operation received through the virtual keyboard and/or the second edit box.
  8. The method according to any one of claims 1 to 7, wherein the first device includes any one of the following: a television, a large screen, or a wearable device; and the second device or the third device includes any one of the following: a mobile phone, a tablet, or a wearable device.
  9. The method according to any one of claims 1 to 8, further comprising:
    in a case where input content is received in the third edit box, the first device synchronizing the input content into the first edit box.
  10. The method according to any one of claims 1 to 9, wherein the third device sending a preemption message to the first device comprises:
    the third device receiving a preemption request from the second device; and
    the third device sending the preemption message to the first device based on the preemption request.
  11. The method according to any one of claims 1 to 9, wherein the third device sending a preemption message to the first device comprises:
    the third device displaying a second notification interface according to a user operation, the second notification interface including an option for confirming preemption; and
    in response to a trigger operation on the option for confirming preemption, the third device sending the preemption message to the first device.
  12. A device communication method, applied to a system including a first device, a second device, and a third device, the method comprising:
    the second device displaying a fourth interface including an option for the first device;
    in response to a selection operation on the option for the first device, the second device sending an indication message to the first device;
    the first device displaying a first interface including a first edit box;
    the second device displaying a second interface, the second interface including a second edit box;
    in a case where an editing state exists in the second edit box, the first device synchronizing the editing state into the first edit box;
    the third device sending a preemption message to the first device; and
    the third device displaying a third interface including a third edit box, the third edit box being synchronized with the editing state in the first edit box.
  13. A device communication method, applied to a first device, the method comprising:
    the first device displaying a first interface including a first edit box;
    the first device sending an indication message to a second device and a third device, the indication message instructing the second device to display a second interface including a second edit box;
    in a case where an editing state exists in the second edit box, the first device synchronizing the editing state into the first edit box; and
    the first device receiving a preemption message from the third device.
  14. A device communication method, applied to a second device, the method comprising:
    the second device displaying a fourth interface including an option for a first device;
    in response to a selection operation on the option for the first device, the second device sending an indication message to the first device;
    the first device displaying a first interface including a first edit box;
    the second device displaying a second interface, the second interface including a second edit box;
    in a case where an editing state exists in the second edit box, the second device synchronizing the editing state into the first edit box; and
    the second device receiving a preemption message from a third device.
  15. A device communication system, comprising a first device, a second device, and a third device, wherein the first device is configured to perform the steps of the first device according to any one of claims 1 to 11, the second device is configured to perform the steps of the second device according to any one of claims 1 to 11, and the third device is configured to perform the steps of the third device according to any one of claims 1 to 11.
  16. A first device, comprising at least one memory and at least one processor, wherein
    the memory is configured to store program instructions; and
    the processor is configured to invoke the program instructions in the memory to cause the first device to perform the steps performed by the first device according to any one of claims 1 to 11.
  17. A second device, comprising at least one memory and at least one processor, wherein
    the memory is configured to store program instructions; and
    the processor is configured to invoke the program instructions in the memory to cause the second device to perform the steps performed by the second device according to any one of claims 1 to 11.
  18. A third device, comprising at least one memory and at least one processor, wherein
    the memory is configured to store program instructions; and
    the processor is configured to invoke the program instructions in the memory to cause the third device to perform the steps performed by the third device according to any one of claims 1 to 11.
  19. A computer-readable storage medium having a computer program stored thereon, wherein, when the computer program is executed by a processor of a first device, the steps performed by the first device according to any one of claims 1 to 11 are implemented; or, when the computer program is executed by a processor of a second device, the steps performed by the second device according to any one of claims 1 to 11 are implemented; or, when the computer program is executed by a processor of a third device, the steps performed by the third device according to any one of claims 1 to 11 are implemented.
  20. A program product, comprising a computer program stored in a readable storage medium, wherein at least one processor of a communication apparatus can read the computer program from the readable storage medium, and the at least one processor executes the computer program to cause the communication apparatus to implement the method according to any one of claims 1 to 11, or the method according to claim 12, or the method according to claim 13, or the method according to claim 14.
PCT/CN2021/124800 2020-10-31 2021-10-19 Device communication method, system and apparatus WO2022089259A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP21884987.5A EP4221239A4 (en) 2020-10-31 2021-10-19 METHOD, SYSTEM AND DEVICE FOR DEVICE COMMUNICATION
US18/251,127 US20230403421A1 (en) 2020-10-31 2021-10-19 Device Communication Method and System, and Apparatus

Applications Claiming Priority (12)

Application Number Priority Date Filing Date Title
CN202011197048 2020-10-31
CN202011198863 2020-10-31
CN202011197030 2020-10-31
CN202011198861 2020-10-31
CN202011197030.8 2020-10-31
CN202011197048.8 2020-10-31
CN202011197035 2020-10-31
CN202011198863.6 2020-10-31
CN202011197035.0 2020-10-31
CN202011198861.7 2020-10-31
CN202110267000.8 2021-03-11
CN202110267000.8A CN114527909A (zh) 2020-10-31 2021-03-11 Device communication method, system and apparatus

Publications (1)

Publication Number Publication Date
WO2022089259A1 true WO2022089259A1 (zh) 2022-05-05

Family

ID=81381936

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/124800 WO2022089259A1 (zh) 2020-10-31 2021-10-19 Device communication method, system and apparatus

Country Status (3)

Country Link
US (1) US20230403421A1 (zh)
EP (1) EP4221239A4 (zh)
WO (1) WO2022089259A1 (zh)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103150014A (zh) * 2013-01-25 2013-06-12 东莞宇龙通信科技有限公司 Controlled terminal and control method
US20130314302A1 (en) * 2012-05-25 2013-11-28 Samsung Electronics Co., Ltd. Multiple display method with multiple communication terminals, machine-readable storage medium and communication terminal
CN103607779A (zh) * 2013-11-13 2014-02-26 四川长虹电器股份有限公司 Multi-screen collaborative intelligent input system and implementation method thereof
CN105072246A (zh) * 2015-07-01 2015-11-18 小米科技有限责任公司 Information synchronization method, apparatus and terminal
CN106162364A (zh) * 2015-03-30 2016-11-23 腾讯科技(深圳)有限公司 Input method and apparatus for a smart TV system, and terminal-assisted input method and apparatus
CN108476339A (zh) * 2016-12-30 2018-08-31 华为技术有限公司 Remote control method and terminal
CN109067641A (zh) * 2018-09-11 2018-12-21 孙培严 Method and server for synchronizing edit-box messages between a mobile terminal and a PC
CN110417992A (zh) * 2019-06-20 2019-11-05 华为技术有限公司 Input method, electronic device, and screen projection system
CN111866571A (zh) * 2020-06-30 2020-10-30 北京小米移动软件有限公司 Method, apparatus, and storage medium for editing content on a smart TV

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101642111B1 (ko) * 2009-08-18 2016-07-22 삼성전자주식회사 Broadcast receiving apparatus, mobile device, service providing method, and method of controlling a broadcast receiving apparatus
KR101058525B1 (ko) * 2009-10-09 2011-08-23 삼성전자주식회사 Text input method and display apparatus applying the same
CN103593213B (zh) * 2013-11-04 2017-04-05 华为技术有限公司 Text information input method and apparatus
KR102621236B1 (ko) * 2018-11-07 2024-01-08 삼성전자주식회사 Display apparatus and control method thereof


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP4221239A4

Also Published As

Publication number Publication date
EP4221239A1 (en) 2023-08-02
EP4221239A4 (en) 2024-03-13
US20230403421A1 (en) 2023-12-14


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21884987

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2021884987

Country of ref document: EP

Effective date: 20230424

NENP Non-entry into the national phase

Ref country code: DE