CN115705128A - Cross-device input method, device and system - Google Patents

Cross-device input method, device and system

Info

Publication number
CN115705128A
CN115705128A
Authority
CN
China
Prior art keywords
information
input
target device
edit
interface
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110888389.8A
Other languages
Chinese (zh)
Inventor
陈刚
卞超
陈才龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN202110888389.8A
Priority to PCT/CN2022/109475 (published as WO2023011418A1)
Publication of CN115705128A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/0482Interaction with lists of selectable items, e.g. menus
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04847Interaction techniques to control parameter settings, e.g. interaction with sliders or dials
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/54Interprogram communication
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/10Protocols in which an application is distributed across nodes in the network
    • H04L67/104Peer-to-peer [P2P] networks
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/133Protocols for remote procedure calls [RPC]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Software Systems (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application discloses a cross-device input method, device, and system in the field of electronic technology that can improve the convenience and efficiency of cross-device input. With this approach, a single device (e.g., a first device) can initiate remote input to a target device (e.g., a large-screen device) without interfering with other devices that have strong input capabilities. Further, when the display interface of the target device includes multiple edit boxes, the method allows the focus box to be switched to any edit box from the single device; the operation is simple and the user experience is good.

Description

Cross-device input method, device and system
Technical Field
The embodiment of the application relates to the technical field of electronics, in particular to a cross-device input method, device and system.
Background
More and more electronic devices are connected to a network and accept user input, for example for searching. A television, for instance, may be connected to a home network, receive a program name that the user enters in a search box, search for the corresponding program over the home network, and play it. The program name is usually entered into the television's search box by the user with a remote control.
In some examples, as shown in fig. 1, the television 01 may receive a program name entered in the search box F as the user moves a cursor over an on-screen keyboard (E in fig. 1) using the direction keys (A, B, C, and D in fig. 1) of the remote controller 02. In other examples, the television 01 may receive a program name entered in the search box F through a physical keyboard (G in fig. 1) on the remote controller 02. However, entering a program name in a search box with a remote controller in either of these ways is cumbersome and inconvenient.
Disclosure of Invention
The application provides a cross-device input method, device, and system that allow the focus box on a target device to be switched to any edit box from a single device during cross-device input, with simple operation.
In order to achieve the above purpose, the embodiment of the present application adopts the following technical solutions:
In a first aspect, a cross-device input method is provided. The method includes: in response to receiving a user operation for starting a remote input function, the first device determines a target device according to interface information of one or more second devices; the first device establishes a wireless connection with the target device, the wireless connection being used to transmit the user-input information that the first device sends to the target device; and the first device displays a remote input interface that includes an input box and at least one option for adjusting a focus box on the target device. The target device is one of the one or more second devices, and the interface of the target device includes one or more edit boxes.
The solution provided in the first aspect allows a single device (e.g., the first device) to initiate remote input to a target device (e.g., a large-screen device) without interfering with other devices that have strong input capabilities. Further, when the display interface of the target device includes multiple edit boxes, the method allows the focus box to be switched to any edit box from the single device; the operation is simple and the user experience is good.
In a possible implementation, the first device determining the target device according to the interface information of the one or more second devices includes: the first device displays device information according to the interface information of the plurality of second devices, where the device information includes identification information of the one or more second devices whose interfaces contain an edit box; and the first device determines the target device according to the user's selection of identification information from the device information. As one implementation, the first device may present the identification information of the plurality of second devices to the user so that the user can select the target device. Through this user interface (UI) interaction, a more user-friendly service can be provided and the user experience improved.
In a possible implementation, only one of the one or more second devices has an edit box on its interface, and the first device determines that this device is the target device. As one implementation, the first device may automatically determine the target device according to whether the second devices have edit boxes on their interfaces. For example, when only one second device has an edit box on its interface, that device is directly determined to be the target device. This scheme simplifies the operation of cross-device input, improves its efficiency, and improves the user experience.
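The target-device determination described above can be sketched as follows. This is a hypothetical illustration, not the patent's implementation: the data layout (`id`, `edit_boxes` fields) and the `choose` callback are assumptions.

```python
# Hypothetical sketch: a first device picking the target device from the
# second devices' interface information, auto-selecting when exactly one
# device has an edit box and otherwise asking the user to choose.
def determine_target(second_devices, choose=None):
    """second_devices: list of dicts like {"id": "TV-01", "edit_boxes": [...]}.

    Returns the selected device, or None if no device has an edit box.
    `choose` is a callback that asks the user to pick among candidates.
    """
    candidates = [d for d in second_devices if d.get("edit_boxes")]
    if not candidates:
        return None                      # nothing to remotely input into
    if len(candidates) == 1:
        return candidates[0]             # auto-select the only candidate
    return choose(candidates)            # let the user pick from a list

devices = [
    {"id": "speaker", "edit_boxes": []},
    {"id": "large-screen TV", "edit_boxes": ["search"]},
]
print(determine_target(devices)["id"])   # auto-selected: only one candidate
```

With several candidates, `choose` would present the device identification information (the "device information" UI described above) and return the user's pick.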
In a possible implementation, the method further includes: the first device receives information input by the user in the input box; and the first device sends the information input by the user in the input box to the target device, where the information is used to fill the default focus box of the target device. As one possibility, the first device may input the information that the user enters in the input box for cross-device input directly into the default focus box of the target device. This scheme improves the user experience of cross-device input.
In a possible implementation, the method further includes: in response to receiving an operation in which the user selects any one of the at least one option, the first device determines the switched-to focus box and sends focus box switching information to the target device, where the focus box switching information includes the identification information of the switched-to focus box. As one possibility, the first device performs focus box switching according to the user's operation of switching the focus box on the target device. This scheme simplifies the flow and operation of switching focus boxes during cross-device input, improves the efficiency of cross-device input, and improves the user experience.
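The switching information in this implementation carries the identifier of the edit box to focus next. A minimal sketch, under assumed message fields and option names (`"next"`, `"previous"`, or a direct edit-box id; none of these names come from the patent):

```python
# Hypothetical sketch: mapping the remote-input interface's options to
# focus-box switching information sent to the target device.
def build_switch_message(edit_box_ids, current_id, option):
    """edit_box_ids: ids of the target's edit boxes, in switching order."""
    i = edit_box_ids.index(current_id)
    if option == "next":
        i = (i + 1) % len(edit_box_ids)
    elif option == "previous":
        i = (i - 1) % len(edit_box_ids)
    else:
        i = edit_box_ids.index(option)   # switch across boxes to a given id
    return {"type": "focus_switch", "focus_box_id": edit_box_ids[i]}

boxes = ["account", "password", "captcha"]
print(build_switch_message(boxes, "account", "next"))
# {'type': 'focus_switch', 'focus_box_id': 'password'}
```

The target device only needs the resulting `focus_box_id` to move its focus box, which keeps the wire message small.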
In a possible implementation, the first device determining the switched-to focus box includes: the first device determines the switched-to focus box according to edit box information, where the edit box information includes the priorities of the plurality of edit boxes. As one implementation, focus box switching may be performed based on edit box priority, so that cross-device input goes first to the more important edit boxes, improving the user experience.
In a possible implementation, the interface of the target device includes a plurality of edit boxes, and the method further includes: the first device receives position information of the plurality of edit boxes from the target device; and the first device determines the priorities of the plurality of edit boxes according to their position information. As one implementation, focus box switching may be performed based on the position information of the edit boxes. This matches the user's habits, for example the habit of filling in edit boxes from top to bottom and from left to right, improving the user experience.
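Deriving priorities from position information can be sketched as a sort in reading order. The coordinate layout here is an assumption for illustration (top-left origin, smaller `y` means higher on the screen):

```python
# Hypothetical sketch: ordering edit boxes top-to-bottom, left-to-right,
# as in the user habit described above, to obtain their priorities.
def prioritize_by_position(edit_boxes):
    """edit_boxes: list of dicts with an id and (x, y) top-left coordinates.

    Returns edit-box ids ordered from highest to lowest priority.
    """
    ordered = sorted(edit_boxes, key=lambda b: (b["y"], b["x"]))
    return [b["id"] for b in ordered]

boxes = [
    {"id": "password", "x": 100, "y": 200},
    {"id": "account",  "x": 100, "y": 120},
    {"id": "captcha",  "x": 300, "y": 200},
]
print(prioritize_by_position(boxes))  # ['account', 'password', 'captcha']
```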
In a possible implementation, the method further includes: the first device receives edit box information from the target device. As one implementation, the first device may obtain edit box information, such as edit box priorities or position information, from the target device to improve the accuracy of the information on which focus box switching is based.
In a possible implementation, the at least one option includes at least one of an option for switching to the next edit box or an option for switching to the previous edit box. As an implementation, the first device may provide options for switching to the next and/or previous focus box so that the user can switch the focus box as needed, improving the user experience.
In one possible implementation, the at least one option includes an option for switching across edit boxes. As an implementation, the first device may provide an option for switching across edit boxes so that the user can switch the focus box as needed, improving the user experience.
In one possible implementation, the wireless connection is a peer-to-peer (P2P) connection. As a possibility, the method provided by the present application is applicable to a P2P network architecture, and can improve the applicability and compatibility of the method to different network architectures.
In one possible implementation, the wireless connection is based on a Remote Procedure Call (RPC) protocol. As a possibility, the method provided by the application is suitable for the wireless connection based on the RPC protocol, and the applicability and the compatibility of the method to different communication protocols can be improved.
In a second aspect, a cross-device input method applied to a target device is provided. The method includes: the target device determines a default focus box after establishing a wireless connection with a first device, the wireless connection being used by the first device to send user-input information to the target device; and the target device receives, through the wireless connection, the information sent by the first device and fills it into the default focus box. It should be understood that the target device is one of the one or more second devices.
In the second aspect, the target device (e.g., a large-screen device) may establish a wireless connection with the first device for cross-device input and receive the first device's remote input through that connection. The connection-establishment process does not interfere with other devices that have strong input capabilities; the operation is simple and the user experience is good.
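The target-device side of the second aspect can be sketched as a small receiver. The class and method names are assumptions for illustration only:

```python
# Hypothetical sketch: after the wireless connection is established, the
# target device determines a default focus box and fills it with each piece
# of information received from the first device.
class TargetDevice:
    def __init__(self, edit_box_ids):
        self.contents = {box_id: "" for box_id in edit_box_ids}
        # Default focus box: here simply the first edit box on the interface.
        self.focus = edit_box_ids[0]

    def on_receive(self, text):
        """Called for each message received over the wireless connection."""
        self.contents[self.focus] += text

tv = TargetDevice(["search"])
tv.on_receive("nature documentary")
print(tv.contents["search"])  # nature documentary
```

A real implementation would also handle the focus box switching messages described below by reassigning `self.focus`.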
In a possible implementation, the method further includes: the target device receives focus box switching information from the first device, the focus box switching information including identification information of a focus box; and the target device switches the focus box to the edit box corresponding to the identification information. As an implementation, for example when the display interface of the target device includes multiple edit boxes, the focus box can be switched to any edit box from a single device; the operation is simple and the user experience is good.
In a possible implementation, the default focus box is the first edit box on the interface of the target device; alternatively, the default focus box is the highest-priority edit box on the interface of the target device. As one implementation, the default focus box may be determined based on edit box priority, so that cross-device input goes first to the more important edit box, improving the user experience. As another implementation, the default focus box may be determined based on the position information of the edit boxes. This matches the user's habits, for example the habit of filling in edit boxes from top to bottom and from left to right, improving the user experience.
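The two alternatives above can be sketched in a few lines. The field names and the convention that a lower number means higher priority are assumptions:

```python
# Hypothetical sketch: choosing the default focus box either as the first
# edit box on the interface or as the highest-priority edit box.
def default_focus_box(edit_boxes, by_priority=False):
    if by_priority:
        return min(edit_boxes, key=lambda b: b["priority"])["id"]
    return edit_boxes[0]["id"]           # first edit box on the interface

boxes = [
    {"id": "captcha", "priority": 3},
    {"id": "account", "priority": 1},
]
print(default_focus_box(boxes))                     # captcha
print(default_focus_box(boxes, by_priority=True))   # account
```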
In a possible implementation, the interface of the target device includes a plurality of edit boxes, and the method further includes: the target device sends the position information of the plurality of edit boxes to the first device; alternatively, the target device sends the priorities of the plurality of edit boxes to the first device. As one implementation, the first device may obtain edit box information, such as edit box priorities or position information, from the target device to improve the accuracy of the information on which focus box switching is based.
In one possible implementation, the wireless connection is a P2P connection. As a possibility, the method provided by the present application is applicable to a P2P network architecture, and can improve the applicability and compatibility of the method to different network architectures.
In one possible implementation, the wireless connection is based on the RPC protocol. As a possibility, the method provided by the application is suitable for the wireless connection based on the RPC protocol, and the applicability and the compatibility of the method to different communication protocols can be improved.
In a third aspect, a first device is provided. The first device includes a processing unit configured to: in response to receiving a user operation for starting a remote input function, determine a target device according to interface information of one or more second devices; and establish a wireless connection with the target device for transmitting the user-input information that the first device sends to the target device. The first device also includes a display unit configured to display a remote input interface that includes an input box and at least one option for adjusting a focus box on the target device. The target device is one of the one or more second devices, and the interface of the target device includes one or more edit boxes.
The solution provided in the third aspect allows a single device (e.g., the first device) to initiate remote input to a target device (e.g., a large-screen device) without interfering with other devices that have strong input capabilities. Further, when the display interface of the target device includes multiple edit boxes, the focus box can be switched to any edit box from the single device; the operation is simple and the user experience is good.
In a possible implementation, the processing unit is specifically configured to: display device information according to the interface information of the plurality of second devices, where the device information includes identification information of the one or more second devices whose interfaces contain an edit box; and determine the target device according to the user's selection of identification information from the device information. As one implementation, the first device may present the identification information of the plurality of second devices to the user so that the user can select the target device. Through this UI interaction, a more user-friendly service can be provided and the user experience improved.
In a possible implementation, only one of the one or more second devices has an edit box on its interface, and the processing unit determines that this device is the target device. As one implementation, the first device may automatically determine the target device according to whether the second devices have edit boxes on their interfaces. For example, when only one second device has an edit box on its interface, that device is directly determined to be the target device. This scheme simplifies the operation of cross-device input, improves its efficiency, and improves the user experience.
In a possible implementation, the processing unit is further configured to: receive information input by the user in the input box; and send the information input by the user in the input box to the target device, where the information is used to fill the default focus box of the target device. As one possibility, the first device may input the information that the user enters in the input box for cross-device input directly into the default focus box of the target device. This scheme improves the user experience of cross-device input.
In a possible implementation, the processing unit is further configured to: in response to receiving an operation in which the user selects any one of the at least one option, determine the switched-to focus box; and send focus box switching information, including the identification information of the switched-to focus box, to the target device. As one possibility, the first device performs focus box switching according to the user's operation of switching the focus box on the target device. This scheme simplifies the flow and operation of switching focus boxes during cross-device input, improves the efficiency of cross-device input, and improves the user experience.
In a possible implementation, the processing unit is specifically configured to determine the switched-to focus box according to edit box information, where the edit box information includes the priorities of the plurality of edit boxes. As one implementation, focus box switching may be performed based on edit box priority, so that cross-device input goes first to the more important edit boxes, improving the user experience.
In a possible implementation, the interface of the target device includes a plurality of edit boxes, and the processing unit is further configured to: receive position information of the plurality of edit boxes from the target device; and determine the priorities of the plurality of edit boxes according to their position information. As one implementation, focus box switching may be performed based on the position information of the edit boxes. This matches the user's habits, for example the habit of filling in edit boxes from top to bottom and from left to right, improving the user experience.
In a possible implementation, the first device further includes a transceiver unit configured to receive edit box information from the target device. As one implementation, the first device may obtain edit box information, such as edit box priorities or position information, from the target device to improve the accuracy of the information on which focus box switching is based.
In a possible implementation, the at least one option includes at least one of an option for switching to the next edit box or an option for switching to the previous edit box. As an implementation, the first device may provide options for switching to the next and/or previous focus box so that the user can switch the focus box as needed, improving the user experience.
In one possible implementation, the at least one option includes an option for switching across edit boxes. As an implementation, the first device may provide an option for switching across edit boxes so that the user can switch the focus box as needed, improving the user experience.
In one possible implementation, the wireless connection is a P2P connection. As a possibility, the method provided by the present application is applicable to a P2P network architecture, and can improve the applicability and compatibility of the method to different network architectures.
In one possible implementation, the wireless connection is based on the RPC protocol. As a possibility, the method provided by the application is suitable for the wireless connection based on the RPC protocol, and the applicability and the compatibility of the method to different communication protocols can be improved.
In a fourth aspect, a target device is provided. The target device includes: a processing unit configured to establish a wireless connection with a first device and determine a default focus box, where the wireless connection is used by the first device to send user-input information to the target device; and a transceiver unit configured to receive, through the wireless connection, the information sent by the first device and fill it into the default focus box. It should be understood that the target device is one of the one or more second devices.
In the solution provided in the fourth aspect, the target device (e.g., a large-screen device) may establish a wireless connection with the first device for cross-device input and receive the first device's remote input through that connection. The connection-establishment process does not interfere with other devices that have strong input capabilities; the operation is simple and the user experience is good.
In a possible implementation, the transceiver unit is further configured to receive focus box switching information from the first device, the focus box switching information including identification information of a focus box; and the processing unit is further configured to switch the focus box to the edit box corresponding to the identification information. As an implementation, for example when the display interface of the target device includes multiple edit boxes, the focus box can be switched to any edit box from a single device; the operation is simple and the user experience is good.
In a possible implementation, the default focus box is the first edit box on the interface of the target device; alternatively, the default focus box is the highest-priority edit box on the interface of the target device. As one implementation, the default focus box may be determined based on edit box priority, so that cross-device input goes first to the more important edit box, improving the user experience. As another implementation, the default focus box may be determined based on the position information of the edit boxes. This matches the user's habits, for example the habit of filling in edit boxes from top to bottom and from left to right, improving the user experience.
In a possible implementation, the interface of the target device includes a plurality of edit boxes, and the transceiver unit is further configured to: send the position information of the plurality of edit boxes to the first device; or send the priorities of the plurality of edit boxes to the first device. As one implementation, the first device may obtain edit box information, such as edit box priorities or position information, from the target device to improve the accuracy of the information on which focus box switching is based.
In one possible implementation, the wireless connection is a P2P connection. As a possibility, the method provided by the present application is applicable to a P2P network architecture, and can improve the applicability and compatibility of the method to different network architectures.
In one possible implementation, the wireless connection is based on the RPC protocol. As a possibility, the method provided by the application is suitable for the wireless connection based on the RPC protocol, and the applicability and compatibility of the method to different communication protocols can be improved.
In a fifth aspect, a first device is provided. The first device includes: a display screen; one or more processors; and one or more memories. The memories store one or more programs comprising instructions which, when executed by the one or more processors, cause the first device to perform the method of the first aspect or any possible implementation of the first aspect.
In a sixth aspect, a target device is provided. The target device includes: a display screen; one or more processors; and one or more memories. The memories store one or more programs comprising instructions which, when executed by the one or more processors, cause the target device to perform the method of the second aspect or any possible implementation of the second aspect. It should be understood that the target device is one of the one or more second devices.
In a seventh aspect, a cross-device input method is provided. The method includes: in response to receiving a user operation for starting a remote input function, the first device determines a target device according to interface information of one or more second devices, where the target device is one of the one or more second devices and the interface of the target device includes one or more edit boxes; the first device establishes a wireless connection with the target device, the wireless connection being used to transmit the user-input information that the first device sends to the target device; the target device determines a default focus box; and the first device displays a remote input interface that includes an input box and at least one option for adjusting a focus box on the target device.
The seventh aspect provides a solution that allows a single device (e.g., the first device) to initiate remote input to a target device (e.g., a large-screen device) without interfering with other devices that have strong input capabilities. Further, when the display interface of the target device includes multiple edit boxes, the focus box can be switched to any edit box from the single device; the operation is simple and the user experience is good.
In a possible implementation manner, the method further includes: the method comprises the steps that a first device receives information input by a user in an input box; the method comprises the steps that information input in an input box by a user is sent to a target device by a first device, and the information input in the input box by the user is used for filling a default focus box of the target device; the target device populates the default focus box with the information. As a possibility, the first device may directly input corresponding information into the default focus frame of the target device according to information input by the user in the input frame for cross-device input. Through the scheme, the experience degree of the user in cross-device input can be improved.
In a possible implementation, the method further includes: in response to receiving an operation by the user of selecting any one of the at least one option, the first device determines the switched focus box; the first device sends focus box switching information to the target device, where the switching information includes identification information of the switched focus box; and the target device switches the focus box to the edit box corresponding to the identification information. As one possibility, the first device performs focus box switching according to the user's operation of switching the focus box of the target device. This solution simplifies the flow and operations of switching focus boxes during cross-device input, improves cross-device input efficiency, and improves the user experience.
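A minimal sketch of the switching information and its handling on the target side follows. The `SwitchInfo` message and its field name are assumptions; the patent only specifies that the switching information carries identification information of the switched focus box:

```java
import java.util.Set;

// Hypothetical sketch of focus-box switching: the first device sends switching
// information carrying the identification of the edit box chosen by the user,
// and the target device moves its focus box accordingly.
public class FocusSwitch {
    // Focus-box switching information (field name is illustrative).
    static class SwitchInfo {
        final String switchedFocusBoxId;
        SwitchInfo(String id) { this.switchedFocusBoxId = id; }
    }

    private final Set<String> editBoxIds;
    private String focusBoxId;

    FocusSwitch(String defaultFocus, Set<String> editBoxIds) {
        this.editBoxIds = editBoxIds;
        this.focusBoxId = defaultFocus;
    }

    // Target-device side: switch focus to the edit box named in the message,
    // ignoring identifiers that do not match any edit box on the interface.
    boolean onSwitchInfo(SwitchInfo info) {
        if (!editBoxIds.contains(info.switchedFocusBoxId)) return false;
        focusBoxId = info.switchedFocusBoxId;
        return true;
    }

    String currentFocus() { return focusBoxId; }
}
```

Rejecting unknown identifiers is a defensive choice of the sketch, not a requirement stated in the aspect.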
In a possible implementation, the interface of the target device includes multiple edit boxes, and the method further includes: the target device sends position information of the multiple edit boxes to the first device; or the target device sends priorities of the multiple edit boxes to the first device. As one implementation, the first device may acquire edit box information, such as the priority or position of each edit box, from the target device to improve the accuracy of the information on which focus box switching is based.
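The edit box information described here can be modeled as a small record, with the first device ordering its focus-box options by priority when priorities are reported, or by on-screen position otherwise. The field and method names are illustrative assumptions:

```java
import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;

// Hypothetical sketch of the edit box information the target device reports.
public class EditBoxInfo {
    final String id;
    final int priority; // smaller value = higher priority (assumed convention)
    final int top;      // vertical position on the target interface, in pixels

    EditBoxInfo(String id, int priority, int top) {
        this.id = id; this.priority = priority; this.top = top;
    }

    // First-device side: order the focus-box options it displays, either by
    // reported priority or top-to-bottom by reported position.
    static List<String> orderOptions(List<EditBoxInfo> boxes, boolean byPriority) {
        Comparator<EditBoxInfo> cmp = byPriority
                ? Comparator.comparingInt((EditBoxInfo b) -> b.priority)
                : Comparator.comparingInt((EditBoxInfo b) -> b.top);
        return boxes.stream().sorted(cmp).map(b -> b.id).collect(Collectors.toList());
    }
}
```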
In an eighth aspect, a communication system is provided, including: the first device according to any one of the possible implementations of the third aspect or the fifth aspect, and the target device according to any one of the possible implementations of the fourth aspect or the sixth aspect. It should be understood that the target device is one of the one or more second devices.
In a ninth aspect, a computer-readable storage medium is provided, having computer program code stored thereon which, when executed by a processor, causes the processor to implement the method according to any one of the possible implementations of the first aspect or the second aspect.
In a tenth aspect, a chip system is provided, including a processor and a memory, where the memory stores computer program code; the computer program code, when executed by the processor, causes the processor to implement the method according to any one of the possible implementations of the first aspect or the second aspect. The chip system may consist of a chip, or may include a chip and other discrete devices.
In an eleventh aspect, a computer program product is provided that includes computer instructions. The computer instructions, when executed on a computer, cause the computer to implement a method as in any one of the possible implementations of the first aspect or the second aspect.
Drawings
FIG. 1 is an exemplary diagram of a cross-device input method;
FIG. 2 is a schematic diagram of a hardware structure of a large-screen device according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a hardware structure of a mobile phone according to an embodiment of the present application;
FIG. 4 is a diagram illustrating an example of a cross-device input method according to an embodiment of the present application;
FIG. 5 is an exemplary diagram of a cross-device input scenario according to an embodiment of the present application;
FIG. 6 is a block diagram of a mobile phone and a large-screen device according to an embodiment of the present application;
FIG. 7 is a flowchart of a cross-device input method according to an embodiment of the present application;
FIG. 8 is a first diagram of a cross-device input example according to an embodiment of the present application;
FIG. 9 is a flowchart of a cross-device input method according to an embodiment of the present application;
FIG. 10 is a flowchart of a cross-device input method according to an embodiment of the present application;
FIG. 11 is a second diagram of a cross-device input example according to an embodiment of the present application;
FIG. 12 is a third diagram of a cross-device input example according to an embodiment of the present application;
FIG. 13 is a block diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the drawings. In the description of the embodiments, "/" means "or" unless otherwise specified; for example, A/B may mean A or B. "And/or" herein merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may mean: A exists alone, both A and B exist, or B exists alone. In addition, in the description of the embodiments of the present application, "a plurality of" means two or more.
In the following, the terms "first" and "second" are used for descriptive purposes only and should not be understood as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the embodiments, "a plurality of" means two or more unless otherwise specified.
The embodiment of the application provides a cross-device input method, applied to a scenario in which a device with weak input capability, such as a large-screen device, accepts input from a device with strong input capability, such as a mobile phone.
In the embodiment of the present application, weak input capability and strong input capability are relative concepts. For example, a television is a weak-input-capability device relative to a mobile phone; a mobile phone is a strong-input-capability device relative to a smart watch. In the following, the cross-device input method provided by the embodiment of the present application is introduced by taking a large-screen device as the weak-input-capability device and a mobile phone as the strong-input-capability device.
A weak-input-capability device in the present application includes one or more display screens. For example, the device may be a television, a smart camera, a personal digital assistant (PDA), a portable multimedia player (PMP), an augmented reality (AR)/virtual reality (VR) device, an ultra-mobile personal computer (UMPC), a smart bracelet, a smart watch, and the like. Alternatively, the device may be another type or configuration of electronic device including one or more display screens, which is not limited in this application.
Referring to fig. 2, fig. 2 is a schematic diagram of a hardware structure of a large-screen device, taking a television as an example. As shown in fig. 2, the large-screen device 200 may include: processor 210, external memory interface 220, internal memory 221, universal Serial Bus (USB) interface 230, power management module 240, antenna, wireless communication module 260, audio module 270, speaker 270A, microphone 270C, speaker interface 270B, sensor module 280, buttons 290, indicator 291, camera 293, and display 292, among others.
It is to be understood that the structure illustrated in the present embodiment does not constitute a specific limitation on the large-screen device 200. In other embodiments, the large-screen device 200 may include more or fewer components than illustrated, some components may be combined or split, or a different arrangement of components may be used. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Processor 210 may include one or more processing units, such as: the processor 210 may include an Application Processor (AP), a modem processor, a Graphics Processing Unit (GPU), an Image Signal Processor (ISP), a controller, a memory, a video codec, a Digital Signal Processor (DSP), a baseband processor, and/or a neural-Network Processing Unit (NPU), etc. The different processing units may be separate devices or may be integrated into one or more processors.
In the embodiment of the present application, the processor 210 may include an auxiliary module (an auxiliary module 2 shown in fig. 6 (b)). The auxiliary module may be used to send interface information to a strong input device (e.g., a cell phone), bind with a strong input device (e.g., a cell phone), request remote input from a strong input device (e.g., a cell phone), receive input from a strong input device (e.g., a cell phone), process focus frame switching information from a strong input device (e.g., a cell phone), and the like.
The controller may be the neural center and command center of the large-screen device 200. The controller can fetch an instruction, generate an operation control signal, and then control execution of the instruction. In the embodiment of the application, the controller can generate an operation control signal according to an instruction included in control information from the strong input device (such as a mobile phone), thereby completing input in an edit box (such as a search box) on the display screen of the large-screen device.
A memory may also be provided in the processor 210 for storing instructions and data. In some embodiments, the memory in the processor 210 is a cache memory, which may hold instructions or data that the processor 210 has just used or cycled. If the processor 210 needs to use the instructions or data again, it can call them directly from the memory. Avoiding repeated accesses reduces the latency of the processor 210, thereby increasing the efficiency of the system. In some embodiments, the processor 210 may include one or more interfaces, such as an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, and/or a USB interface.
It should be understood that the interface connection relationship between the modules illustrated in the present embodiment is only an exemplary illustration, and does not constitute a structural limitation on the large-screen device 200. In other embodiments, the large-screen device 200 may also adopt different interface connection modes or a combination of multiple interface connection modes in the above embodiments.
The power management module 240 is used to connect to a power source. The power management module 240 may also be connected to the processor 210, the internal memory 221, the display 292, the camera 293, the wireless communication module 260, and the like, and receives power input to supply power to them. In some embodiments, the power management module 240 may also be disposed in the processor 210.
The wireless communication function of the large screen device 200 may be implemented by the antenna and the wireless communication module 260, and the like. The wireless communication module 260 may provide a solution for wireless communication applied to the large-screen device 200, including Wireless Local Area Networks (WLANs) (e.g., wireless fidelity (Wi-Fi) networks), bluetooth (bluetooth, BT), global Navigation Satellite System (GNSS), frequency Modulation (FM), near Field Communication (NFC), infrared (IR), and the like. In this embodiment, the large-screen device 200 may receive input information from a strong input device (e.g., a mobile phone) through the antenna and the wireless communication module 260, and then complete input in the edit box on the display screen of the large-screen device according to the received input information.
The wireless communication module 260 may be one or more devices integrating at least one communication processing module. The wireless communication module 260 receives electromagnetic waves via an antenna, performs frequency modulation and filtering processing on electromagnetic wave signals, and transmits the processed signals to the processor 210. The wireless communication module 260 may also receive a signal to be transmitted from the processor 210, frequency modulate it, amplify it, and convert it into electromagnetic waves via an antenna for radiation. In some embodiments, the antenna of large-screen device 200 is coupled with wireless communication module 260 so that large-screen device 200 can communicate with a network and other devices through wireless communication techniques.
The large-screen device 200 implements a display function by a GPU, a display 292, and an application processor, etc. The GPU is a microprocessor for image processing, and is connected to a display 292 and an application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 210 may include one or more GPUs that execute program instructions to generate or alter display information. The display screen 292 is used to display images, videos, etc., and the display screen 292 includes a display panel.
The digital signal processor is used to process digital signals; in addition to digital image signals, it can process other digital signals. For example, when the large-screen device 200 selects a frequency point, the digital signal processor is used to perform a Fourier transform or the like on the frequency point energy. Video codecs are used to compress or decompress digital video. The large-screen device 200 may support one or more video codecs, so that it can play or record video in a variety of encoding formats, such as moving picture experts group (MPEG) 1, MPEG2, MPEG3, and MPEG4.
The external memory interface 220 may be used to connect an external memory card, such as a Micro SD card, to extend the storage capability of the large-screen device 200. The external memory card communicates with the processor 210 through the external memory interface 220 to implement a data storage function, for example, saving files such as music and video in the external memory card.
Internal memory 221 may be used to store computer-executable program code, including instructions. The processor 210 executes various functional applications of the large screen device 200 and data processing by executing instructions stored in the internal memory 221. For example, in the present embodiment, the processor 210 may execute instructions stored in the internal memory 221, and the internal memory 221 may include a program storage area and a data storage area.
The storage program area may store an operating system, an application program (such as a sound playing function, an image playing function, and the like) required by at least one function, and the like. The storage data area may store data (such as audio data, a phonebook, etc.) created during use of the large-screen device 200, and the like. In addition, the internal memory 221 may include a high speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, a Universal Flash Storage (UFS), and the like.
The large screen device 200 may implement audio functions through the audio module 270, the speaker 270A, the microphone 270C, the speaker interface 270B, and the application processor. Such as music playing, recording, etc.
It will be appreciated that the configuration illustrated in fig. 2 does not constitute a specific limitation on the large-screen device. It may have more or fewer components than shown in fig. 2, may combine two or more components, or may have a different component configuration. For example, the large-screen device may further include a speaker, a remote controller, and the like. The various components shown in fig. 2 may be implemented in hardware, software, or a combination of hardware and software, including one or more signal processing or application-specific integrated circuits.
For example, the strong input device in the present application may be a smart phone, a netbook, a tablet computer, or the like. Alternatively, the strong input device may be an electronic device of other types or configurations, and the application is not limited thereto.
Referring to fig. 3, fig. 3 illustrates a hardware structure diagram of a strong input device, taking a smart phone (hereinafter referred to as a mobile phone) as an example. As shown in fig. 3, the mobile phone 300 may include a processor 310, a memory (including an external memory interface 320 and an internal memory 321), a Universal Serial Bus (USB) interface 330, a charging management module 340, a power management module 341, a battery 342, an antenna 1, an antenna 2, a mobile communication module 350, a wireless communication module 360, an audio module 370, a speaker 370A, a receiver 370B, a microphone 370C, a headset interface 370D, a sensor module 380, a button 390, a motor 391, an indicator 392, a camera 393, a display 394, and a Subscriber Identity Module (SIM) card interface 395. Wherein the sensor module 380 may include a pressure sensor, a gyroscope sensor, an air pressure sensor, a magnetic sensor, an acceleration sensor, a distance sensor, a proximity light sensor, a fingerprint sensor, a temperature sensor, a touch sensor, an ambient light sensor, a bone conduction sensor, etc.
It is to be understood that the structure illustrated in this embodiment does not constitute a specific limitation on the mobile phone. In other embodiments of the present application, the mobile phone may include more or fewer components than shown, combine certain components, split certain components, or use a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 310 may include one or more processing units. For example, the processor 310 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc. The different processing units may be separate devices or may be integrated into one or more processors.
A memory may also be provided in the processor 310 for storing instructions and data. In some embodiments, the memory in the processor 310 is a cache memory. The memory may hold instructions or data that have just been used or recycled by the processor 310. If the processor 310 needs to reuse the instruction or data, it can be called directly from the memory. Avoiding repeated accesses reduces the latency of the processor 310, thereby increasing the efficiency of the system.
In the embodiment of the present application, the processor 310 may include an auxiliary module (an auxiliary module 1 shown in fig. 6 (a)). The auxiliary module may be configured to obtain interface information from the large-screen device, perform target device selection, bind with the target device, receive an input request from the target device, control the display 394 to display a remote input interface, input to the target device according to an input of a user on the remote input interface, instruct a switch to the target device according to an operation of the user to change a focus frame on the remote input interface, and the like.
In some embodiments, processor 310 may include one or more interfaces. The interface may include an integrated circuit I2C interface, an integrated circuit built-in audio I2S interface, a pulse code modulation PCM interface, a universal asynchronous receiver transmitter UART interface, a mobile industry processor interface MIPI, a general purpose input output GPIO interface, a subscriber identity module SIM interface, and/or a universal serial bus USB interface, etc.
The wireless communication function of the mobile phone can be realized by the antenna 1, the antenna 2, the mobile communication module 350, the wireless communication module 360, the modem processor, the baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the handset may be used to cover a single or multiple communications bands. Different antennas can also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed as a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 350 may provide a solution including wireless communication of 2G/3G/4G/5G/6G, etc. applied to a mobile phone. The mobile communication module 350 may include at least one filter, a switch, a power amplifier, a Low Noise Amplifier (LNA), and the like. The mobile communication module 350 may receive the electromagnetic wave from the antenna 1, filter, amplify, etc. the received electromagnetic wave, and transmit the filtered electromagnetic wave to the modem processor for demodulation. The mobile communication module 350 may also amplify the signal modulated by the modem processor, and convert the signal into electromagnetic wave through the antenna 1 to radiate the electromagnetic wave. In some embodiments, at least some of the functional modules of the mobile communication module 350 may be provided in the processor 310. In some embodiments, at least some of the functional modules of the mobile communication module 350 may be disposed in the same device as at least some of the modules of the processor 310.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating a low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then passes the demodulated low frequency baseband signal to a baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then transferred to the application processor. The application processor outputs sound signals through an audio device (not limited to the speaker 370A, the receiver 370B, etc.) or displays images or video through the display 394. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be separate from the processor 310 and may be disposed in the same device as the mobile communication module 350 or other functional modules.
The wireless communication module 360 may provide solutions for wireless communication applied to the mobile phone, including WLAN (such as Wi-Fi network), bluetooth BT, global navigation satellite system GNSS, frequency modulation FM, near field communication technology NFC, infrared technology IR, and the like. The wireless communication module 360 may be one or more devices integrating at least one communication processing module. The wireless communication module 360 receives electromagnetic waves via the antenna 2, performs frequency modulation and filtering processing on electromagnetic wave signals, and transmits the processed signals to the processor 310. The wireless communication module 360 may also receive a signal to be transmitted from the processor 310, frequency-modulate and amplify the signal, and convert the signal into electromagnetic waves via the antenna 2 to radiate the electromagnetic waves.
In some embodiments, the handset antenna 1 is coupled to the mobile communication module 350 and the handset antenna 2 is coupled to the wireless communication module 360 so that the handset can communicate with the network and other devices via wireless communication techniques.
The mobile phone realizes the display function through the GPU, the display screen 394, the application processor and the like. The GPU is an image processing microprocessor coupled to a display 394 and an application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 310 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 394 is used to display images, video, and the like. The display screen 394 includes a display panel. In some embodiments, the cell phone may include 1 or N display screens 394, N being a positive integer greater than 1.
The mobile phone may implement a camera function via the ISP, camera module 393, video codec, GPU, display 394 and application processor.
The external memory interface 320 may be used to connect an external memory card, such as a Micro SD card, to extend the storage capability of the mobile phone. The external memory card communicates with the processor 310 through the external memory interface 320 to implement a data storage function. For example, files such as music, video, etc. are saved in an external memory card.
The internal memory 321 may be used to store computer-executable program code, which includes instructions. The internal memory 321 may include a program storage area and a data storage area. The storage program area may store an operating system, an application program (such as a sound playing function, an image playing function, etc.) required by at least one function, and the like. The data storage area can store data (such as audio data, a phone book and the like) created in the use process of the mobile phone. In addition, the internal memory 321 may include a high-speed random access memory, and may further include a nonvolatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (UFS), and the like. The processor 310 executes various functional applications of the cellular phone and data processing by executing instructions stored in the internal memory 321 and/or instructions stored in a memory provided in the processor.
The mobile phone can realize an audio function through the audio module 370, the speaker 370A, the receiver 370B, the microphone 370C, the application processor, and the like. Such as music playing, recording, etc. As to the specific operation principle and action of the audio module 370, the speaker 370A, the receiver 370B and the microphone 370C, reference may be made to the description in the conventional art.
The keys 390 include a power-on key (also referred to as a power key), a volume key, and the like. The keys 390 may be mechanical keys or touch keys. The mobile phone may receive key input and generate key signal input related to user settings and function control of the mobile phone. For example, in the embodiment of the present application, the mobile phone may detect that the power-on key and the volume key are pressed simultaneously, and accordingly instruct the target device to perform focus box switching.
For specific working principles and functions of the charging management module 340, the power management module 341, the motor 391, the indicator 392, and the SIM card interface 395, reference may be made to descriptions in the conventional technology, and details are not repeated in the embodiments of the present application.
It should be noted that the hardware modules included in the mobile phone shown in fig. 3 are only exemplary descriptions, and do not limit the specific structure of the mobile phone. For example, the mobile phone may also include other functional modules.
As an example, in the embodiment of the present application, the large screen device and the mobile phone may be connected to the same network, for example, a home Wi-Fi network. The large-screen device can receive input information from the mobile phone through Wi-Fi, and then input in an edit box (such as a search box) on a display screen of the large-screen device is completed according to the received input information.
Illustratively, a search box F as shown in fig. 4 may be displayed on the large-screen device 01. The large-screen device 01 may trigger remote input upon receiving an operation of the user moving an input cursor to the search box F using a remote controller. The large-screen device 01 sends broadcast information to one or more devices with strong input capability, so that each of them displays a remote input confirmation interface and the user can select the corresponding remote input device. In some examples, after receiving the broadcast information from the large-screen device 01, a device with strong input capability may pop up the remote input confirmation interface for the user to select whether to use that device for remote input.
In some examples, when performing cross-device input based on, for example, the method shown in fig. 4, the one or more devices with strong input capability that receive the broadcast information may include a device that has established a wireless connection for remote input with the large-screen device 01 within a preset time period, and/or a device that has established distributed networking with the large-screen device 01, and the like, which is not limited in the present application.
Further, when a device with strong input capability (such as the mobile phone 04 shown in fig. 4) detects an operation by which the user confirms remote input using that device, the mobile phone 04 may invoke the remote input service to display the input interface H shown in fig. 4. The large-screen device 01 may then receive the text "hello" edited by the user on the input interface H, input "hello" in the search box F of the large-screen device 01, and search for programs whose names include "hello" and/or are related to "hello".
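The search step in this example can be sketched as a simple name filter. This is a hypothetical stand-in for however the large-screen device 01 actually matches programs; only case-insensitive substring matching on program names is modeled:

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

// Hypothetical sketch of the search step in fig. 4: after the text "hello"
// entered on the phone is filled into the search box, the large-screen device
// filters programs whose names contain the entered text.
public class ProgramSearch {
    static List<String> search(List<String> programNames, String query) {
        String q = query.toLowerCase();
        return programNames.stream()
                .filter(name -> name.toLowerCase().contains(q))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> programs = Arrays.asList("Hello World", "News", "Say Hello");
        System.out.println(search(programs, "hello")); // keeps the two "Hello" programs
    }
}
```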
In the example shown in fig. 4, the mobile phone 04 can share its input method capability with the large-screen device 01, thereby breaking the hardware limitation. However, when performing cross-device input in this example, the user needs both a remote controller and a mobile phone to jointly implement remote input on the large-screen device; many devices are involved and the operation process is complicated.
In addition, when the display interface of the large-screen device includes multiple edit boxes and cross-device input is performed by, for example, the method shown in fig. 4, replacing the edit box requires the user to first use the remote controller to switch the focus box, so as to re-trigger the one or more devices with strong input capability to display the remote input confirmation interface. Here, the focus box refers to the currently active edit box. The user then needs to confirm again, on the remote input confirmation interface of a device with strong input capability (such as a mobile phone), the operation of using that device for remote input, before the device can input information into the switched edit box. Devices are frequently swapped and the operation process is complicated.
For example, in the cross-device input scenario of account registration shown in fig. 5, assume that 4 edit boxes are displayed on the large-screen device 01: edit box 1, edit box 2, edit box 3, and edit box 4. When the user finishes inputting the phone number in edit box 3 using the mobile phone 04 (as shown in (a) of fig. 5), modifying the user name in edit box 1 requires the following operations (A) to (D): (A) put down the mobile phone 04 → (B) pick up the remote controller → (C) switch the focus box to edit box 1 with the remote controller → (D) put down the remote controller, pick up the mobile phone 04, and confirm using this device for remote input. Only after the mobile phone 04 displays the input interface can the user name in edit box 1 be modified (as shown in (b) of fig. 5).
Further, when cross-device input is performed based on the method shown in fig. 4 or fig. 5, if the large-screen device 01 sends broadcast information to all devices with strong input capability, serious interference may be caused to devices other than the one the user intends to use. Moreover, when a device pops up the remote input confirmation interface, a pop-up that does not disappear for a long time may seriously disturb the user, especially on a device that is currently in use.
To solve the above problems, embodiments of the present application provide a cross-device input method that can initiate remote input to a large-screen device (e.g., a target device) based on a single device (e.g., a first device) without interfering with other devices with strong input capability. In addition, for the case where the display interface of the large-screen device includes multiple edit boxes, the method enables arbitrary switching of the focus frame through the single device, and is simple to operate.
As shown in fig. 6, a strong input device (such as a mobile phone) and a large-screen device in a cross-device input process according to an embodiment of the present application may include an auxiliary module. Illustratively, as shown in fig. 6 (a), the handset may include an auxiliary module 1. As shown in (b) of fig. 6, the large-screen apparatus may include the auxiliary module 2.
For example, taking a mobile phone and a large-screen device that both run an Android system with a hierarchical architecture as an example, the operating system of each device may include an application layer, an application framework layer, a system library, an Android runtime layer, and a kernel layer. The auxiliary module 1 may be located at the application framework layer of the mobile phone operating system, and the auxiliary module 2 may be located at the application framework layer of the large-screen device operating system. The application framework layer provides an Application Programming Interface (API) and a programming framework for applications of the application layer. For a specific introduction to the operating system and its layers (such as the application layer, application framework layer, system library, Android runtime layer, and kernel layer of the Android system), reference may be made to explanations in the conventional technology, which are not repeated herein.
A cross-device input method provided in the embodiments of the present application will be specifically described below with reference to the accompanying drawings by taking a strong input device as a mobile phone as an example.
It can be understood that, in the embodiment of the present application, assuming that a user wishes to perform remote input into an edit box on a large-screen device (i.e., a second device) by using a mobile phone (i.e., a first device), the user may initiate the remote input on the mobile phone and thereby initiate a connection between the mobile phone and the large-screen device. For example, the user may first turn on the remote input function on the mobile phone and then establish, on the mobile phone, a wireless connection with the large-screen device for remote input. Specifically, in this embodiment of the present application, the wireless connection is used to transmit the information input by the user from the mobile phone (i.e., the first device) to the large-screen device (i.e., the second device).
As shown in fig. 7, a cross-device input method provided in an embodiment of the present application may include the following steps S701 to S706:
S701, the mobile phone receives an operation by a user for turning on the remote input function.
For example, the operation for turning on the remote input function may include, but is not limited to: the user turns on the operation of the remote input switch.
For example, assuming that an application program for remote input (hereinafter referred to as "remote input APP") is installed on a mobile phone, the operation for turning on the remote input function may be an operation for turning on the remote input APP by a user.
For another example, assuming that an application interface of the remote input APP installed on the mobile phone includes a remote input switch, the operation for turning on the remote input function may be an operation for turning on the remote input switch by a user.
For another example, assuming that a setting function of remote input is integrated on a mobile phone, the operation for turning on the remote input function may be an operation for turning on a remote input switch on a setting interface of remote input by a user.
For another example, assuming that a remote input switch is disposed in a pull-down menu bar of the mobile phone, the operation for turning on the remote input function may be an operation for turning on the remote input switch in the pull-down menu bar by the user.
It is to be understood that the above-described operation for turning on the remote input function is only an example, and the specific form of the operation is not limited in the present application.
S702, in response to receiving the operation for starting the remote input function, the mobile phone acquires interface information of one or more large-screen devices (such as one or more second devices).
The interface information may include, but is not limited to, information on whether an edit box is present on the interface and/or the number of edit boxes present on the interface.
In some embodiments, the one or more large-screen devices are large-screen devices in a distributed networking. The distributed networking refers to a distributed device cluster which is composed of a plurality of devices and can perform peer-to-peer (P2P) communication.
For example, as shown in fig. 7, assuming that the distributed networking includes a large-screen device 1, a large-screen device 2, and a large-screen device 3, in response to receiving the operation for starting the remote input function, the mobile phone acquires interface information of the large-screen device 1, interface information of the large-screen device 2, and interface information of the large-screen device 3.
In other embodiments, the one or more large-screen devices are large-screen devices that have established a wireless connection with the handset for remote input for a predetermined period of time (e.g., within 24 hours, within a week, within a month, or within a half year).
For example, assuming that the preset time period is one week, in response to receiving the operation for starting the remote input function, the mobile phone acquires interface information of the large-screen devices that have established a wireless connection for remote input with the mobile phone within the past week.
In other embodiments, the one or more large-screen devices are N large-screen devices that have recently established a wireless connection with the handset for remote input; n is a positive integer.
For example, assuming that N is a preset value of 3, in response to receiving the operation for starting the remote input function, the mobile phone obtains interface information of 3 large-screen devices that have recently established wireless connection with the mobile phone for remote input.
It should be noted that the above rules for the mobile phone to acquire interface information of large-screen devices are only examples, and the specific rule is not limited in the embodiment of the present application. For example, the rule may also be a combination of multiple conditions: in response to receiving the operation for turning on the remote input function, the mobile phone acquires interface information of N (e.g., N = 3) large-screen devices that have established a wireless connection for remote input with the mobile phone within a preset time period (e.g., within one week). For another example, in response to receiving the operation for starting the remote input function, the mobile phone acquires interface information of the three large-screen devices with which it has most frequently established wireless connections for remote input within a preset time period (e.g., within one week).
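As an illustration only, the selection rules discussed above (a recency window combined with a top-N limit) might be sketched as follows. The history record shape, window length, and value of N are assumptions for this sketch, not details of the embodiment:

```python
from datetime import datetime, timedelta

def candidate_devices(history, now, window=timedelta(weeks=1), n=3):
    """Return up to n large-screen devices that established a wireless
    connection for remote input with the phone within the preset time
    period, most recent first. `history` holds (device_id, last_connect_time)
    pairs; this record shape is assumed for illustration."""
    recent = [(dev, t) for dev, t in history if now - t <= window]
    recent.sort(key=lambda item: item[1], reverse=True)  # most recent first
    return [dev for dev, _ in recent[:n]]

now = datetime(2022, 8, 1)
history = [
    ("large-screen-1", now - timedelta(days=1)),
    ("large-screen-2", now - timedelta(days=3)),
    ("large-screen-3", now - timedelta(days=10)),  # outside the one-week window
    ("large-screen-4", now - timedelta(days=2)),
]
print(candidate_devices(history, now))  # ['large-screen-1', 'large-screen-4', 'large-screen-2']
```

Under such a rule, the mobile phone would request interface information only from the devices the filter returns, rather than broadcasting to every device with strong input capability.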
As an implementation manner, the mobile phone may obtain interface information from one or more large-screen devices through the auxiliary module 1.
As another implementation, the handset may obtain interface information from one or more large-screen devices through a module for distributed connectivity (e.g., a device discovery module). The specific module for information acquisition is not limited in the application.
S703, the mobile phone determines the target device.
Wherein the target device is one of the one or more large-screen devices.
In some embodiments, assuming that the interface information of one or more large-screen devices acquired by the mobile phone indicates that only one large-screen device (such as the large-screen device 1 shown in fig. 7) has an edit box on the interface, the mobile phone may determine that the large-screen device is the target device.
In other embodiments, assuming that the interface information of the one or more large-screen devices acquired by the mobile phone indicates that only one large-screen device (such as the large-screen device 1 shown in fig. 7) has an edit box on its interface, the mobile phone may further display identification information of that device to the user, either to notify the user that the large-screen device is the target device or to let the user confirm the large-screen device as the target device.
In other embodiments, the handset may receive a user selection and determine the target device. For example, a cell phone may present device information to a user, including identification information of a large screen device having an edit box on an interface. The device information is used for showing a large screen device which can be remotely input by the mobile phone to a user so that the user can select a target device from the large screen device. The mobile phone may determine the target device according to an operation of the user selecting a certain large-screen device (such as large-screen device 1 shown in fig. 7) from the device information.
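The decision logic of S703 described above might be sketched as follows; the mapping of device IDs to edit box counts is an assumed representation of the acquired interface information, used only for illustration:

```python
def determine_target(interface_infos):
    """S703 sketch: if exactly one large-screen device has an edit box on
    its interface, that device becomes the target automatically; otherwise
    the device list is shown so the user can select the target.
    `interface_infos` maps device_id -> edit box count (assumed shape)."""
    with_edit_box = [dev for dev, count in interface_infos.items() if count > 0]
    if len(with_edit_box) == 1:
        return with_edit_box[0]  # determined without user interaction
    return None                  # present device information and let the user choose

infos = {"large-screen-1": 4, "large-screen-2": 0, "large-screen-3": 0}
print(determine_target(infos))  # large-screen-1
```

When `None` is returned, the mobile phone would fall back to presenting the device information so that the user selects the target device, as in the embodiment above.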
S704, wireless connection for remote input is established between the mobile phone and the target device.
As an example, the handset and the target device may establish a wireless connection for remote input through respective auxiliary modules. Taking a mobile phone having a structure shown in fig. 6 (a) and a target device having a structure shown in fig. 6 (b) as an example, the mobile phone establishes a wireless connection for remote input with an auxiliary module of the target device through its own auxiliary module.
For example, in the case of distributed networking between the mobile phone and the target device, the accessory module 1 of the mobile phone may establish a wireless connection with the accessory module 2 of the target device through a Remote Procedure Call (RPC). Namely, the mobile phone and the target equipment establish wireless connection based on RPC protocol. For the introduction of the RPC, reference may be made to explanations and descriptions in the conventional technology, and no further description is given to the embodiments of the present application.
S705, the target device sends request information to the mobile phone, and the request information is used for requesting the mobile phone to remotely input to a default focus frame of the target device.
Taking a mobile phone having the structure shown in fig. 6 (a) and a target device having the structure shown in fig. 6 (b) as an example, the target device may transmit the request information to the auxiliary module 1 of the mobile phone through the auxiliary module 2. For example, in the case of distributed networking between the mobile phone and the target device, the auxiliary module 2 of the target device may send the request information to the auxiliary module 1 of the mobile phone through RPC.
The request information includes identification information (e.g., edit box Identification (ID)) of a default focus box of the target device and a data channel interface of the default focus box. Wherein the data channel interface is used for remote input.
In some embodiments, assuming that the interface of the target device includes only one edit box, the default focus box is the edit box. For this case, before step S705, the auxiliary module 2 of the target device is further configured to focus the edit box, i.e., the auxiliary module 2 is further configured to set the edit box as a default focus box.
In other embodiments, assuming that the interface of the target device includes multiple edit boxes, then as one case, the default focus box may be the first edit box on the interface of the target device. For example, the target device may determine the first edit box according to the coordinate values of the multiple edit boxes in a preset coordinate system. For example, the origin of the preset coordinate system may be the lower left corner of the screen of the target device, the x-axis may be the lower edge of the screen, and the y-axis may be the left edge of the screen. The first edit box may then be the edit box that has both the minimum x-coordinate value and the maximum y-coordinate value among the multiple edit boxes. For this case, before step S705, the auxiliary module 2 of the target device is further configured to focus the first edit box, that is, to set the first edit box as the default focus box.
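The first-edit-box rule above can be sketched in a few lines. The (id, x, y) tuple shape and the lexicographic tie-breaking (topmost first, then leftmost) are assumptions used only for this sketch:

```python
def default_focus_box(edit_boxes):
    """Pick the 'first' edit box as the default focus box. With the origin
    at the lower-left corner of the target device's screen, the first box
    is taken as the one with the largest y (highest on screen), breaking
    ties by the smallest x (leftmost). `edit_boxes` holds (box_id, x, y)
    tuples; this shape is assumed for the sketch."""
    return max(edit_boxes, key=lambda box: (box[2], -box[1]))[0]

# Four vertically stacked edit boxes, as in a registration interface.
boxes = [("edit-box-1", 100, 900), ("edit-box-2", 100, 700),
         ("edit-box-3", 100, 500), ("edit-box-4", 100, 300)]
print(default_focus_box(boxes))  # edit-box-1 (the topmost box)
```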
As another case, the default focus box may also be the highest priority edit box on the interface of the target device. For example, the priority may be determined by the auxiliary module 2 of the target device based on at least one of position information of a plurality of edit boxes, or a history input frequency, or default setting information.
Alternatively, the default focus frame may also be determined based on other rules or principles, and the application is not limited to the specific setting rule of the initial focus frame (i.e., the default focus frame) after the wireless connection for remote input is established between the mobile phone and the target device.
It should be noted that, in the embodiment of the present application, the default focus box may be visible to the user, so that the user knows at any time to which edit box to input information. For example, the default focus frame may be visible to the user in the form of an edit box highlight, a cursor displayed within the edit box, an edit box bolder (as shown in FIG. 8), and so forth.
S706, the mobile phone displays a remote input interface.
The remote input interface is used for inputting information to the default focus frame through editing by a user. Illustratively, the remote input interface includes an input box and an input method window.
The embodiment of the application provides a cross-device input method, which can initiate remote input to a large-screen device based on a single device (such as a mobile phone 04), does not interfere with other devices with strong input capacity, and has good user experience.
As shown in fig. 7, after the mobile phone performs step S706, for the case where the user directly inputs information to the default focus frame of the target device through the mobile phone, the mobile phone performs the following step S707-1:
S707-1, in response to receiving the information input by the user in the input box, the mobile phone sends the information input by the user in the input box to the target device, where the information is used for filling the default focus box of the target device.
Taking the mobile phone having the structure shown in (a) of fig. 6 and the target device having the structure shown in (b) of fig. 6 as examples, the mobile phone may transmit information input by the user in the input box to the auxiliary module 2 of the target device through the auxiliary module 1, so that the auxiliary module 2 inputs corresponding information to the default focus box of the target device.
Referring to fig. 8, fig. 8 shows an example of cross-device input by taking an example that an interface of a target device includes a plurality of edit boxes. As shown in (a) in fig. 8, the default focus frame is an edit box 1, and the remote input interface includes an input box 801 shown in (a) in fig. 8 and an input method window 802. As shown in fig. 8 (b), the mobile phone 04 can input a user name (shown as "sun" in fig. 8 (b)) to the edit box 1 according to the text edited in the input box 801 by the user operating in the input method window 802.
In some embodiments, the remote input interface may further include at least one option. The at least one option is used for user to switch focus frames, i.e. for user to adjust focus frames on the target device. For example, in a case where the interface of the target device includes a plurality of edit boxes, the remote input interface may also be used for the user to arbitrarily switch the focus box among the plurality of edit boxes.
In some embodiments, the at least one option includes an option for switching to the next edit box and an option for switching to the previous edit box. As shown in fig. 8, the remote input interface on the cell phone 04 includes "previous" and "next" buttons for the user to switch the focus frame, where the "previous" button switches to the edit box of the previous priority and the "next" button switches to the edit box of the next priority. This avoids the complicated process of switching the focus frame with a remote controller and then confirming again on the mobile phone side.
As another example, after the user inputs a user name into edit box 1 on the remote input interface shown in fig. 8, assuming that the user wants to input information into other edit boxes, then after the mobile phone performs step S707-1, as shown in fig. 9, the cross-device input method provided by the embodiment of the present application further includes the following steps S708-1, S709-1, and S710-1:
S708-1, in response to the user's operation of changing the focus frame, the mobile phone sends focus frame switching information to the target device.
Wherein the focus frame switching information is used to instruct switching of the focus frame. The focus frame switching information includes identification information (e.g., ID) of the switched focus frame.
In some embodiments, the handset side may have edit box information saved. Wherein the edit box information includes an ID of each edit box and a priority of each edit box on the target device interface. In response to the operation of replacing the focus frame by the user, the mobile phone may determine the ID of the switched focus frame according to the edit frame information, and send the ID to the target device through the focus frame switching information.
For example, taking the target device interface shown in fig. 8 as an example, the edit box information saved on the mobile phone side may be as shown in table 1 below:
TABLE 1

| Edit box ID      | Priority level |
| ---------------- | -------------- |
| ID of edit box 1 | 1              |
| ID of edit box 2 | 2              |
| ID of edit box 3 | 3              |
| ID of edit box 4 | 4              |
For example, when the target device interface and the mobile phone remote input interface are as shown in fig. 8, if the user's operation of changing the focus frame is clicking the "next" button on the remote input interface, the mobile phone determines, according to the edit box information stored on the mobile phone side, that the edit box of the next priority after the current focus frame (i.e., edit box 1) is edit box 2, and the mobile phone may then determine that the ID of the switched focus frame is the ID of edit box 2.
For another example, assuming that the current focus frame of the target device shown in fig. 8 is edit box 3, and the user's operation of changing the focus frame on the remote input interface of the mobile phone shown in fig. 8 is clicking the "previous" button, the mobile phone determines, according to the edit box information stored on the mobile phone side, that the edit box of the previous priority before the current focus frame (i.e., edit box 3) is edit box 2, and the mobile phone may then determine that the ID of the switched focus frame is the ID of edit box 2.
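The determination of the switched focus frame ID from the stored edit box information (Table 1) might be sketched as follows; the concrete ID strings and dictionary shape are assumptions for illustration:

```python
# Mirrors Table 1: edit box ID -> priority (1 = highest priority).
EDIT_BOX_INFO = {"edit-box-1": 1, "edit-box-2": 2, "edit-box-3": 3, "edit-box-4": 4}

def switched_focus_id(current_id, direction):
    """Given the current focus frame and a 'next'/'previous' button press,
    return the edit box ID to carry in the focus frame switching information."""
    ordered = sorted(EDIT_BOX_INFO, key=EDIT_BOX_INFO.get)  # IDs by priority
    i = ordered.index(current_id)
    j = i + 1 if direction == "next" else i - 1
    if 0 <= j < len(ordered):
        return ordered[j]
    return current_id  # no edit box in that direction; keep the current focus

print(switched_focus_id("edit-box-1", "next"))      # edit-box-2
print(switched_focus_id("edit-box-3", "previous"))  # edit-box-2
```

The returned ID is what the mobile phone would place in the focus frame switching information sent to the target device in S708-1.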
The edit box information stored on the mobile phone side may be calculated by the mobile phone itself, may be specified by the target device, or may be preset in the mobile phone by the developer, which is not limited in the present application.
For example, after the handset establishes a wireless connection with the target device for remote input in step S704, the target device may send identification information (e.g., edit box IDs) of a plurality of edit boxes on the interface of the target device and location information of the plurality of edit boxes to the handset, so that the handset may determine priorities of the plurality of edit boxes according to the location information of the plurality of edit boxes. For example, the position information of the edit box may be coordinate values of the edit box in a preset coordinate system.
Taking as an example a preset coordinate system whose origin is the lower left corner of the screen of the target device, whose x-axis is the lower edge of the screen, and whose y-axis is the left edge of the screen, the larger the y-axis coordinate value of an edit box and the smaller its x-axis coordinate value, the higher the priority of the edit box.
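The positional priority rule just described might be sketched as follows; the tuple shape and the strict lexicographic ordering (y first, then x) are illustrative assumptions:

```python
def priorities_from_position(edit_boxes):
    """Assign priorities from position: with the origin at the lower-left
    corner of the target device's screen, a larger y-coordinate (higher on
    screen) and a smaller x-coordinate (further left) mean a higher
    priority. `edit_boxes` holds (box_id, x, y) tuples (assumed shape)."""
    ordered = sorted(edit_boxes, key=lambda box: (-box[2], box[1]))
    return {box_id: rank for rank, (box_id, _x, _y) in enumerate(ordered, start=1)}

boxes = [("edit-box-2", 100, 700), ("edit-box-1", 100, 900), ("edit-box-3", 400, 700)]
print(priorities_from_position(boxes))
# {'edit-box-1': 1, 'edit-box-2': 2, 'edit-box-3': 3}
```

With such a mapping, the mobile phone can build the edit box information of Table 1 from the identification and position information sent by the target device.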
For another example, after the handset establishes a wireless connection with the target device for remote input in step S704, the target device may send, to the handset, identification information (e.g., edit box IDs) of a plurality of edit boxes on the interface of the target device and historical input data of the plurality of edit boxes, so that the handset determines priorities of the plurality of edit boxes according to the historical input data of the plurality of edit boxes. For example, the historical input data of the plurality of edit boxes may include, but is not limited to, one or more of the input frequency of the plurality of edit boxes counted in the background within a preset time period, and default setting information. For example, the higher the input frequency of the background statistics in the preset time period, the higher the priority of the edit box.
As another example, after the handset establishes a wireless connection for remote input with the target device at step S704, the target device may transmit the edit box information to the handset.
As one implementation, the edit box information is calculated and determined by the target device. For example, the edit box information may be determined by the target device based on the position information of the plurality of edit boxes, or based on historical input data of the plurality of edit boxes.
As another implementation, the edit box information may also be preset in the target device by the developer.
In this embodiment, when switching the focus frame, the mobile phone 04 may instruct the target device to switch the focus frame according to the edit box information. By switching the focus frame based on priority in this way, an edit box with a higher priority can be edited preferentially, which improves user experience.
S709-1, the target device switches the focus frame according to the focus frame switching information.
Taking the target device with the structure shown in fig. 6 (b) as an example, after receiving the focus frame switching information from the mobile phone, the auxiliary module 2 of the target device may switch the focus frame according to the identification information (e.g., ID) of the switched focus frame carried in the focus frame switching information.
As shown in (b) in fig. 8, the default focus frame is edit frame 1, and assuming that the edit frame of the next priority of the edit frame 1 is edit frame 2, the cell phone 04 can switch the focus frame to edit frame 2 shown in (c) in fig. 8 according to the operation of the user of single-clicking the "next" button in the input frame 801. Here, the input box 801 shown in fig. 8 (c) is used to input a password into the switched focus box (i.e., the edit box 2).
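On the target device side, the handling in S709-1 might be sketched as follows; the class name, method names, and in-memory state are assumptions made for this sketch and do not appear in the embodiment:

```python
class TargetFocusManager:
    """Sketch of the role of auxiliary module 2: track the edit boxes on the
    interface and move focus to the box whose ID is carried in the focus
    frame switching information received from the mobile phone."""

    def __init__(self, edit_box_ids, default_focus):
        self.edit_box_ids = set(edit_box_ids)
        self.focus_box = default_focus

    def on_focus_switch(self, switched_id):
        """Apply focus frame switching information carrying `switched_id`."""
        if switched_id in self.edit_box_ids:
            self.focus_box = switched_id  # focus the requested edit box
            return True
        return False  # unknown ID: keep the current focus frame unchanged

aux = TargetFocusManager(["edit-box-1", "edit-box-2", "edit-box-3"], "edit-box-1")
aux.on_focus_switch("edit-box-2")
print(aux.focus_box)  # edit-box-2
```

A real implementation would also make the new focus frame visible to the user, for example by highlighting or bolding the edit box as described above.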
S710-1, in response to the user's input operation on the remote input interface, the mobile phone inputs information into the switched focus frame.
For example, assuming that the user inputs "123456" in the input box 801 shown in fig. 8 (c), the cell phone 04 inputs the password 123456 in the focus box after switching (i.e., the edit box 2) in response to the input operation by the user.
As another example, for a case where the user abandons inputting information into the default focus frame and replaces the focus frame, as shown in fig. 10, after the target device performs step S706, the cross-device input method provided by the embodiment of the present application further includes the following steps S707-2, S708-2, and S709-2:
S707-2, in response to the user's operation of changing the focus frame, the mobile phone sends focus frame switching information to the target device.
The focus frame switching information is used for indicating switching of the focus frame. The focus frame switching information includes identification information (e.g., ID) of the switched focus frame.
S708-2, the target device switches the focus frame according to the focus frame switching information.
For example, as shown in (a) of fig. 11, the current focus frame is edit box 1. Assume that the user abandons editing text in the input box 801 shown in (a) of fig. 11 and instead clicks the "next" button, and that the edit box of the next priority after edit box 1 is edit box 2. The cell phone 04 can then switch the focus frame to edit box 2 shown in (b) of fig. 11 according to the user's operation of clicking the "next" button. Here, the input box 801 shown in (b) of fig. 11 is used to input a password into the switched focus frame (i.e., edit box 2).
S709-2, in response to the user's input operation on the remote input interface, the mobile phone inputs information into the switched focus frame.
For example, assuming that the user inputs "123456" in the input box 801 shown in fig. 11 (b), the cell phone 04 inputs the password 123456 in the switched focus box (i.e., the edit box 2) in response to the input operation by the user.
For specific descriptions of the steps S707-2, S708-2, and S709-2, reference may be made to the steps S708-1, S709-1, and S710-1 in this embodiment, which are not described herein again.
According to the cross-device input method provided by the embodiments of the present application, in the case where the display interface of the large-screen device includes multiple edit boxes, the focus frame can be switched arbitrarily based on a single device, the operation is simple, and the user experience is good.
It should be noted that, in the embodiment of the present application, clicking the "previous" and "next" buttons shown in fig. 8 is only used as an example of a manner for replacing the focus frame, and the present application does not limit a specific manner for replacing the focus frame.
For example, the focus frame may be replaced by pressing a physical key (e.g., pressing a power key and a volume key at the same time), a preset slide gesture (e.g., a leftward slide gesture), or the like.
For another example, the focus frame may be replaced by double-clicking the "previous" or "next" button shown in fig. 8, long-pressing the "previous" or "next" button shown in fig. 8, and so on. In response to the user double-clicking, long-pressing the "previous" or "next" button, and the like, how to change the focus frame specifically may be determined according to specific settings.
For example, in the embodiment of the present application, the at least one option may further include an option for switching across edit boxes.
It should be noted that, in the embodiment of the present application, the at least one option may take the form of a menu, a button, or a preset operation, which is not limited in the present application. For example, by double-clicking or long-pressing the "previous" button, the user can switch across edit boxes, enabling the focus to jump over edit boxes, e.g., jumping forward two edit boxes or returning to the first edit box on the interface.
As shown in (a) of fig. 12, the current focus frame is edit box 3. Assume that the user wants to return to edit box 1 to modify the user name after completing the remote input to edit box 3, and that an operation of long-pressing the "previous" button is used to return to the first edit box on the interface (or to jump forward two edit boxes). As shown in (b) of fig. 12, the cell phone 04 instructs the large-screen device 01 to switch the focus frame to the first edit box on the interface (i.e., edit box 1) in response to the user's operation of long-pressing the "previous" button. Here, the input box 801 shown in (b) of fig. 12 is used to edit the user name in the switched focus frame (i.e., edit box 1). Based on the interface shown in (b) of fig. 12, the user can use the cell phone 04 to remotely modify the user name, for example, to change "sun" to "moon".
Continuing with the example shown in fig. 12, it can be understood that, assuming the user wants to return to edit box 1 to modify the user name after completing the remote input to edit box 3, the user could also focus edit box 1 by clicking the "previous" button twice. However, for cross-device input, long-pressing the "previous" button reduces processing on the cell phone 04 and interaction between the cell phone 04 and the large-screen device 01 compared with clicking the "previous" button twice. Specifically, for the operation of clicking the "previous" button twice, the first click triggers one edit box ID determination and one transmission of focus frame switching information by the cell phone 04, and the second click triggers them again. For the operation of long-pressing the "previous" button, the edit box ID determination and the transmission of focus frame switching information are triggered only once, which reduces operation complexity, improves operation efficiency, and enables jump switching with improved switching efficiency.
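The message-count saving described above can be illustrated with a small sketch; the gesture names and the long-press-returns-to-first rule are the example assumptions from this paragraph, not fixed behavior of the embodiment:

```python
def jump_focus_id(current_id, ordered_ids, gesture):
    """Return the switched focus frame ID for one gesture. A long press of
    'previous' jumps straight to the first edit box with a single focus
    frame switching message; a single press moves back one priority step."""
    i = ordered_ids.index(current_id)
    if gesture == "long-press-previous":
        return ordered_ids[0]              # one message: jump to the first box
    if gesture == "single-press-previous":
        return ordered_ids[max(i - 1, 0)]  # one step back per message
    return current_id

ordered = ["edit-box-1", "edit-box-2", "edit-box-3", "edit-box-4"]
# One long press replaces two single presses when returning from edit box 3.
print(jump_focus_id("edit-box-3", ordered, "long-press-previous"))  # edit-box-1
```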
It should be noted that the above operation for switching across edit boxes is only an example, and the embodiment of the present application does not limit a specific trigger form.
It should be understood that the various aspects of the embodiments of the present application can be reasonably combined and explained, and the explanation or explanation of the various terms appearing in the embodiments can be mutually referred to or explained in the various embodiments, which is not limited.
It should also be understood that, in the various embodiments of the present application, the sequence numbers of the above-mentioned processes do not imply any order of execution, and the order of execution of the processes should be determined by their functions and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
It is to be understood that, in order to realize the functions of any of the above embodiments, the electronic device (e.g., the first device or the second device) includes corresponding hardware structures and/or software modules for performing the respective functions. Those skilled in the art will readily appreciate that the various illustrative units and algorithm steps described in connection with the embodiments disclosed herein may be implemented as hardware or as a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends upon the particular application and the design constraints of the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments of the present application, functional modules may be divided for an electronic device (such as the first device or the second device); for example, each functional module may be divided corresponding to each function, or two or more functions may be integrated into one processing module. The integrated module may be implemented in the form of hardware or in the form of a software functional module. It should be noted that, in the embodiments of the present application, the division of modules is schematic and is only one kind of logical function division; in actual implementation, there may be other division manners.
For example, in the case where each functional module is divided in an integrated manner, fig. 13 shows a block diagram of an electronic device provided in an embodiment of the present application. For example, the electronic device may be a first device or a second device (e.g., a target device). As shown in fig. 13, the electronic device may include a transceiving unit 1310, a processing unit 1320, a storage unit 1330, and a display unit 1340.
When the electronic device is a first device, the transceiving unit 1310 is configured to support the first device in performing steps S704, S705, S707-1, S707-2, S708-1, S709-2, and S710-1 above, and/or other processes related to the embodiments of the present application. The processing unit 1320 is configured to support the first device in performing steps S701, S702, S703, S704, S709-1, and S708-2 above, and/or other processes related to the embodiments of the present application. The display unit 1340 is configured to support the first device in performing step S706 above and/or displaying other interfaces related to the embodiments of the present application.
When the electronic device is a second device (e.g., a target device), the transceiving unit 1310 is configured to support the target device in establishing a wireless connection with the first device, receiving, through the wireless connection, information sent by the first device, receiving focus frame switching information from the first device, sending position information of a plurality of edit boxes to the first device, sending priorities of the plurality of edit boxes to the first device, and/or performing other processes related to the embodiments of the present application. The processing unit 1320 is configured to support the second device (e.g., the target device) in determining a default focus frame after the transceiving unit 1310 establishes the wireless connection with the first device, filling the default focus frame with the information sent by the first device through the wireless connection, switching the focus frame to the edit box corresponding to the identification information, and/or performing other processes related to the embodiments of the present application. The display unit 1340 is configured to support the second device (e.g., the target device) in displaying an interface including at least one edit box and/or other interfaces related to the embodiments of the present application.
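The target-device behavior supported by these units can be summarized in a minimal Python sketch. All names (`TargetDevice`, `on_message`, the message dictionaries) are illustrative assumptions; the patent does not specify an API or message format:

```python
# Hypothetical sketch of the target-device side: determine a default focus
# frame after the wireless connection is established, fill it with text
# received from the first device, and switch focus on request.

class TargetDevice:
    def __init__(self, edit_boxes):
        # edit_boxes: list of (box_id, priority); a lower number means a
        # higher priority. The default focus frame is the highest-priority
        # edit box (alternatively, the first edit box on the interface).
        self.edit_boxes = dict(edit_boxes)
        self.contents = {box_id: "" for box_id, _ in edit_boxes}
        self.focus = min(self.edit_boxes, key=self.edit_boxes.get)

    def on_message(self, msg):
        # Messages arrive over the established wireless connection.
        if msg["type"] == "input":
            # Fill the current focus frame with text from the first device.
            self.contents[self.focus] += msg["text"]
        elif msg["type"] == "switch_focus":
            # Focus frame switching information carries the identification
            # information of the edit box to switch to.
            if msg["edit_box_id"] in self.edit_boxes:
                self.focus = msg["edit_box_id"]
```

For example, after the connection is established the highest-priority edit box receives the first remote input without any explicit selection, and a later switch message redirects subsequent input to another edit box.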
The storage unit 1330 is configured to store computer programs, as well as processing data and/or processing results in the methods provided by the embodiments of the present application.
It should be noted that the transceiving unit 1310 may include a radio frequency circuit. Specifically, the electronic device (e.g., the first device or the second device) may receive and transmit wireless signals through the radio frequency circuit. Typically, the radio frequency circuit includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low-noise amplifier, a duplexer, and the like. In addition, the radio frequency circuit may also communicate with other devices via wireless communication. The wireless communication may use any communication standard or protocol, including, but not limited to, the Global System for Mobile Communications, General Packet Radio Service, Code Division Multiple Access, Wideband Code Division Multiple Access, Long Term Evolution, email, Short Message Service, and the like.
It should be understood that the modules in the electronic device may be implemented in software and/or hardware, and are not particularly limited thereto. In other words, the electronic device is presented in the form of a functional module. As used herein, a "module" may refer to an Application Specific Integrated Circuit (ASIC), an electronic circuit, a processor and memory that execute one or more software or firmware programs, an integrated logic circuit, and/or other devices that may provide the described functionality.
In an alternative implementation, when the above functions are implemented using software, they may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the present application are implemented in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center in a wired manner (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or a wireless manner (e.g., infrared, radio, microwave).
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied in hardware or in software instructions executed by a processor. The software instructions may consist of corresponding software modules, which may be stored in RAM, flash memory, ROM, EPROM, EEPROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. Of course, the storage medium may also be integral to the processor. The processor and the storage medium may reside in an ASIC. Additionally, the ASIC may reside in an electronic device. Of course, the processor and the storage medium may also reside as discrete components in an electronic device.
Through the description of the foregoing embodiments, it will be clear to those skilled in the art that, for convenience and simplicity of description, only the division of the functional modules is illustrated, and in practical applications, the above function distribution may be completed by different functional modules as needed, that is, the internal structure of the apparatus may be divided into different functional modules to complete all or part of the above described functions.

Claims (22)

1. A cross-device input method, the method comprising:
in response to receiving an operation of a user for starting a remote input function, the first device determines a target device according to interface information of one or more second devices, wherein the target device is one of the one or more second devices, and one or more edit boxes are included on an interface of the target device;
the first device establishes a wireless connection with the target device, wherein the wireless connection is used for transmitting information that is input by a user and sent by the first device to the target device;
the first device displays a remote input interface including an input box and at least one option for adjusting a focus frame on the target device.
2. The method of claim 1, wherein the first device determining the target device according to interface information of one or more second devices comprises:
the first device displays device information according to interface information of a plurality of second devices, wherein the interfaces of the plurality of second devices are provided with edit boxes, and the device information comprises identification information of one or more second devices of which the interfaces are provided with edit boxes;
and the first equipment determines the target equipment according to the selection operation of the user on the identification information in the equipment information.
3. The method of claim 1, wherein only one of the one or more second devices has an edit box on the interface, and the first device determines that the device having the edit box on the interface is the target device.
4. The method according to any one of claims 1-3, further comprising:
the first device receives information input by a user in the input box;
and the first device sends information input in the input box by the user to the target device, wherein the information input in the input box by the user is used for filling a default focus frame of the target device.
5. The method according to any one of claims 1-4, further comprising:
in response to receiving an operation of a user selecting any one of the at least one option, the first device determines a switched focus frame;
and the first device sends focus frame switching information to the target device, wherein the focus frame switching information comprises identification information of the switched focus frame.
6. The method of claim 5, wherein the first device determining the switched focus frame comprises:
the first device determines the switched focus frame according to the edit frame information; the edit box information includes priorities of a plurality of edit boxes.
7. The method of claim 6, wherein the interface of the target device comprises a plurality of edit boxes; the method further comprises the following steps:
the first device receiving location information of the plurality of edit boxes from the target device;
the first device determines priorities of the edit boxes according to the position information of the edit boxes.
8. The method of claim 6, further comprising:
the first device receives the edit box information from the target device.
9. The method of any of claims 1-8, wherein the at least one option comprises at least one of an option to switch to a next edit box or an option to switch to a previous edit box.
10. The method of any of claims 1-8, wherein the at least one option comprises an option for switching across edit boxes.
11. A cross-device input method, applied to a target device, wherein one or more edit boxes are included on an interface of the target device; the method comprising:
the target device determines a default focus frame after establishing a wireless connection with a first device; the wireless connection is used for the first device to send information input by a user to the target device;
and the target device receives, through the wireless connection, the information sent by the first device and fills the information into the default focus frame.
12. The method of claim 11, further comprising:
the target device receiving focus frame switching information from the first device, the focus frame switching information including identification information of a focus frame;
and the target device switches the focus frame to the edit box corresponding to the identification information.
13. The method according to claim 11 or 12, wherein
the default focus frame is a first edit frame on an interface of the target device; or,
the default focus frame is an edit frame with the highest priority on the interface of the target device.
14. The method according to any one of claims 11-13, wherein a plurality of edit boxes are included on the interface of the target device; the method further comprises the following steps:
the target device sends the position information of the edit boxes to the first device; or,
the target device sends the priorities of the plurality of edit boxes to the first device.
15. A cross-device input method, the method comprising:
in response to receiving an operation of a user for starting a remote input function, the first device determines a target device according to interface information of one or more second devices, wherein the target device is one of the one or more second devices, and one or more edit boxes are included on an interface of the target device;
the first device establishes a wireless connection with the target device, wherein the wireless connection is used for transmitting information that is input by a user and sent by the first device to the target device;
the target device determines a default focus frame;
the first device displays a remote input interface including an input box and at least one option for adjusting a focus frame on the target device.
16. The method of claim 15, further comprising:
the first device receives information input by a user in the input box;
the first device sends information input by a user in the input box to the target device, wherein the information input by the user in the input box is used for filling a default focus frame of the target device;
the target device populates the information to a default focus box.
17. The method according to claim 15 or 16, further comprising:
in response to receiving an operation of a user selecting any one of the at least one option, the first device determines a switched focus frame;
the first device sends focus frame switching information to the target device, wherein the focus frame switching information comprises identification information of the switched focus frame;
and the target device switches the focus frame to the edit box corresponding to the identification information.
18. The method according to any one of claims 15-17, wherein a plurality of edit boxes are included on the interface of the target device; the method further comprises the following steps:
the target device sends the position information of the edit boxes to the first device; or,
the target device sends the priorities of the edit boxes to the first device.
19. A first device, characterized in that the first device comprises:
a display screen;
one or more processors;
one or more memories;
wherein the one or more memories store one or more programs that, when executed by the one or more processors, cause the first device to perform the method of any one of claims 1-10.
20. A target device, the target device comprising:
a display screen;
one or more processors;
one or more memories;
wherein the one or more memories store one or more programs that, when executed by the one or more processors, cause the target device to perform the method of any one of claims 11-14.
21. A communication system, characterized in that the communication system comprises:
the first device of claim 19; and
the target device of claim 20.
22. A computer-readable storage medium, having computer program code stored thereon, which, when executed by a processing circuit, implements the method of any of claims 1-10 or claims 11-14.
CN202110888389.8A 2021-08-03 2021-08-03 Cross-device input method, device and system Pending CN115705128A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202110888389.8A CN115705128A (en) 2021-08-03 2021-08-03 Cross-device input method, device and system
PCT/CN2022/109475 WO2023011418A1 (en) 2021-08-03 2022-08-01 Cross-device input method, devices and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110888389.8A CN115705128A (en) 2021-08-03 2021-08-03 Cross-device input method, device and system

Publications (1)

Publication Number Publication Date
CN115705128A true CN115705128A (en) 2023-02-17

Family

ID=85154439

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110888389.8A Pending CN115705128A (en) 2021-08-03 2021-08-03 Cross-device input method, device and system

Country Status (2)

Country Link
CN (1) CN115705128A (en)
WO (1) WO2023011418A1 (en)


Also Published As

Publication number Publication date
WO2023011418A1 (en) 2023-02-09


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination