CN114489876A - Text input method, electronic equipment and system - Google Patents


Publication number
CN114489876A
CN202011240756.5A (application) · CN114489876A (publication)
Authority
CN
China
Prior art keywords
user
electronic device
input
text
content
Prior art date
Legal status
Pending
Application number
CN202011240756.5A
Other languages
Chinese (zh)
Inventor
胡凯
卞苏成
周星辰
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd
Priority to CN202011240756.5A (CN114489876A)
Priority to EP20949470.7A (EP4187876A4)
Priority to PCT/CN2020/142564 (WO2022032979A1)
Priority to CN202080104076.2A (CN116171568A)
Priority to US18/041,196 (US20230305680A1)
Priority to PCT/CN2021/127888 (WO2022095820A1)
Publication of CN114489876A

Classifications

    • G06F9/451 Arrangements for executing specific programs — execution arrangements for user interfaces
    • G06F3/04817 Interaction techniques based on graphical user interfaces [GUI] using icons
    • H04M1/72415 User interfaces specially adapted for cordless or mobile telephones, for remote control of appliances
    • H04M1/72436 User interfaces specially adapted for cordless or mobile telephones, for text messaging, e.g. short messaging services [SMS] or e-mails
    • H04N21/422 Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
    • H04N21/4222 Remote control device emulator integrated into a non-television apparatus, e.g. a PDA, media center or smart toy

Abstract

The application provides a text input method, an electronic device, and a system. The method includes: a first electronic device displays a text input interface through a display screen, the text input interface including a text input box; in response to displaying the text input interface, the first electronic device sends a first message indicating that the first electronic device needs text input; a second electronic device detects a preset operation of the user and listens for the first message; in response to detecting the preset operation and receiving the first message, the second electronic device detects content input by the user; in response to detecting an operation in which the user inputs first content, the second electronic device sends the first content to the first electronic device; and the first electronic device displays the text content corresponding to the first content in the text input box. The method improves the convenience of text input on a device and reduces interference to the user.

Description

Text input method, electronic equipment and system
Technical Field
The present application relates to the field of terminals, and more particularly, to a method, an electronic device, and a system for text input.
Background
Input on current smart large-screen devices (for example, smart TVs) is mainly performed through the handheld remote control supplied with the device. Limited by the size of the remote control, however, it has only a few physical buttons, such as channel switching, volume control, and up/down/left/right direction keys, which makes it extremely inconvenient to input text content such as characters, account names, and passwords.
With the development of intelligent devices, more and more devices require text input; for example, when fresh food is stored in a smart refrigerator, the user may need to set the shelf life of the food or even an automatic purchasing plan. Using a dedicated remote control for each smart device is cumbersome.
Disclosure of Invention
The application provides a text input method, an electronic device, and a text input system, which improve the convenience of text input on a device and reduce interference to the user.
In a first aspect, a system is provided that includes a first electronic device and a second electronic device. The first electronic device is configured to display a text input interface through a display screen, the text input interface including a text input box; the first electronic device is further configured to send, in response to displaying the text input interface, a first message indicating that the first electronic device needs text input. The second electronic device is configured to detect a preset operation of the user and listen for the first message; the second electronic device is further configured to detect, in response to detecting the preset operation of the user and receiving the first message, content input by the user; the second electronic device is further configured to send, in response to detecting an operation in which the user inputs first content, the first content to the first electronic device. The first electronic device is further configured to display text content corresponding to the first content in the text input box.
In this embodiment of the application, when the first electronic device needs text input, the user can pick up any device at hand (for example, a mobile phone or a tablet) to perform the input, which improves the convenience of text input and thus the user experience. Meanwhile, before the user performs the preset operation on the second electronic device and the first message is received, the second electronic device generates no prompt that might disturb the user, which avoids interference and also helps improve the user experience.
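The interaction described in the first aspect can be sketched as follows. This is a minimal illustration only, not the patented implementation: the broadcast medium is simulated with an in-process queue, and all class and method names (`FirstDevice`, `SecondDevice`, `poll`, etc.) are hypothetical.

```python
import queue


class FirstDevice:
    """Device showing a text input box (e.g. a smart TV)."""

    def __init__(self, channel):
        self.channel = channel  # simulated broadcast medium
        self.text_box = ""      # content of the text input box

    def show_text_input_interface(self):
        # In response to displaying the interface, send the first
        # message indicating that text input is needed.
        self.channel.put({"type": "need_text_input"})

    def receive_content(self, content):
        # Display the received text in the text input box.
        self.text_box = content


class SecondDevice:
    """Nearby device the user picks up (e.g. a phone)."""

    def __init__(self, channel, first_device):
        self.channel = channel
        self.first_device = first_device
        self.preset_operation_detected = False

    def on_preset_operation(self):
        # E.g. the user opens a remote-control application.
        self.preset_operation_detected = True

    def poll(self, user_input):
        # Forward user input only when BOTH the preset operation was
        # detected AND the first message was received; otherwise the
        # device stays silent and does not disturb the user.
        if not self.preset_operation_detected:
            return False
        try:
            msg = self.channel.get_nowait()
        except queue.Empty:
            return False
        if msg.get("type") == "need_text_input":
            self.first_device.receive_content(user_input)
            return True
        return False


channel = queue.Queue()
tv = FirstDevice(channel)
phone = SecondDevice(channel, tv)

tv.show_text_input_interface()  # first message sent
phone.on_preset_operation()     # user performs the preset operation
phone.poll("my password")       # user types; content is forwarded
print(tv.text_box)              # -> my password
```

Note that a phone on which `on_preset_operation` was never called returns `False` from `poll` and produces no prompt, which is the non-interference property the embodiment emphasizes.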
In some possible implementations, the first electronic device is specifically configured to: send a plurality of first messages within a preset duration in response to displaying the text input interface.
In some possible implementations, the second electronic device is specifically configured to: start listening for the first message in response to detecting the preset operation of the user; and detect the content input by the user in response to receiving the first message.
In this embodiment of the application, the second electronic device starts listening for the first message only after detecting the preset operation of the user, so a device on which no preset operation is detected will not prompt the user for input and thus will not disturb the user.
In some possible implementations, the second electronic device is specifically configured to: detect the preset operation of the user in response to receiving the first message; and detect the content input by the user in response to detecting the preset operation of the user.
In this embodiment of the application, the second electronic device may listen for the first message continuously and start detecting the preset operation of the user only after receiving the first message, so other electronic devices that receive the first message but detect no preset operation will not prompt the user for input and will not cause interference.
In some possible implementations, the second electronic device is specifically configured to: detect the preset operation of the user in response to receiving the first message; and detect the content input by the user when the preset operation of the user is detected and the interval between receiving the first message and detecting the preset operation is smaller than a preset interval.
In this embodiment of the application, another electronic device may detect no user operation within a period of time after receiving the first message, which indicates that the user does not intend to use that device for input. When such a device later detects the preset operation, it may ignore the stale first message and refrain from prompting the user for input.
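The freshness check in this implementation can be expressed as a small predicate. The function name and parameters below are illustrative only; the patent does not fix a concrete interval.

```python
def should_accept_input(msg_received_at, preset_op_at, max_interval):
    """Accept user input only if the preset operation follows the
    first message within the preset interval; a stale first message
    is ignored so the user is not prompted unnecessarily.
    Times are in seconds on a common clock."""
    interval = preset_op_at - msg_received_at
    return 0 <= interval < max_interval


# The first message arrived at t=10.0 s; the user acts at t=12.5 s.
print(should_accept_input(10.0, 12.5, 5.0))   # -> True
# The user acts 30 s later: the message is stale and is ignored.
print(should_accept_input(10.0, 40.0, 5.0))   # -> False
```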
In some possible implementations, the second electronic device is specifically configured to: detect the content input by the user in response to detecting the preset operation of the user and receiving the first message, when the first electronic device is within a preset angle range of the second electronic device (for example, the device directly facing the second electronic device is the first electronic device).
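The angle-range condition can be sketched as below, assuming the second electronic device can estimate the bearing of a nearby device relative to the direction it is facing (for example, via an angle-of-arrival measurement); the function name and the default range are illustrative assumptions, not taken from the patent.

```python
def is_within_angle_range(bearing_deg, half_range_deg=30.0):
    """Return True if the candidate device lies within the preset
    angle range centred on the direction the second device is facing
    (bearing 0 degrees means directly opposite/facing)."""
    # Normalise the bearing into (-180, 180].
    bearing = (bearing_deg + 180.0) % 360.0 - 180.0
    return abs(bearing) <= half_range_deg


print(is_within_angle_range(10.0))   # roughly facing the device -> True
print(is_within_angle_range(95.0))   # off to the side -> False
```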
With reference to the first aspect, in some implementations of the first aspect, the second electronic device is specifically configured to: display an input method in response to detecting the preset operation of the user and receiving the first message; and detect the text content input by the user through the input method.
In this embodiment of the application, when the second electronic device detects the preset operation of the user and receives the first message, it can display an input method, which improves the convenience of text input and thus the user experience. Meanwhile, before the preset operation is performed and the first message is received, the second electronic device generates no prompt that might disturb the user, which also helps improve the user experience.
With reference to the first aspect, in some implementations of the first aspect, the second electronic device is specifically configured to: detect voice content input by the user in response to detecting the preset operation of the user and receiving the first message; and send first voice content to the first electronic device in response to detecting that the user inputs the first voice content. The first electronic device is specifically configured to: determine the text content corresponding to the first voice content, and display the text content in the text input box.
In this embodiment of the application, when the second electronic device detects the preset operation of the user and receives the first message, it can listen for voice content input by the user, which improves the convenience of text input and thus the user experience. Meanwhile, before the preset operation is performed and the first message is received, the second electronic device generates no prompt that might disturb the user.
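The voice variant can be sketched as follows. Speech recognition on the first device is stubbed out with a lookup table, since the patent does not specify an ASR engine; all names (`transcribe`, `FirstDeviceVoice`, the `"voice:hello"` payload) are hypothetical.

```python
def transcribe(voice_content):
    """Stand-in for the first device's speech-to-text step; a real
    implementation would call an ASR engine here."""
    known = {"voice:hello": "hello"}
    return known.get(voice_content, "")


class FirstDeviceVoice:
    """First device: determines the text corresponding to received
    voice content and displays it in the text input box."""

    def __init__(self):
        self.text_box = ""

    def receive_voice(self, voice_content):
        self.text_box = transcribe(voice_content)


tv = FirstDeviceVoice()
# The second device detected the preset operation, received the
# first message, and captured the user's speech:
tv.receive_voice("voice:hello")
print(tv.text_box)   # -> hello
```

Placing the speech-to-text step on the first device, as the claim describes, means the second device only forwards raw voice content and needs no recognizer of its own.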
In some possible implementations, the second electronic device is specifically configured to: prompt the user to select text input or voice input in response to detecting the preset operation of the user and receiving the first message; display an input method when an operation of the user selecting text input is detected; and detect the voice content input by the user when an operation of the user selecting voice input is detected.
With reference to the first aspect, in certain implementations of the first aspect, the second electronic device is further configured to: before the content input by the user is detected, displaying first prompt information, wherein the first prompt information is used for prompting that the second electronic equipment is equipment capable of inputting to the first electronic equipment.
In this embodiment of the application, when the second electronic device detects the preset operation of the user and receives the first message, it can prompt the user through a prompt box that text input is possible, helping the user identify the second electronic device as a device that can be used for input to the first electronic device.
With reference to the first aspect, in certain implementations of the first aspect, the first electronic device is further configured to: display, before the text content is displayed in the text input box, second prompt information through the display screen, the second prompt information being used to prompt the user to input to the first electronic device through the second electronic device.
In this embodiment of the application, before the first electronic device receives the user's input from the second electronic device, the first electronic device can prompt the user through the display screen to input through the second electronic device, helping the user identify the second electronic device as a device that can be used for input.
With reference to the first aspect, in some implementations of the first aspect, the second electronic device is specifically configured to: start listening for the first message when an operation of the user starting a first application program is detected.
With reference to the first aspect, in certain implementations of the first aspect, the first application is a remote control application.
With reference to the first aspect, in certain implementations of the first aspect, the first electronic device is a smart television.
In a second aspect, a method for text input is provided, applied to an electronic device. The method includes: the electronic device detects a preset operation of a user and listens for a first message, the first message indicating that another electronic device needs text input; in response to detecting the preset operation of the user and receiving the first message, the electronic device detects content input by the user; and in response to detecting an operation in which the user inputs first content, the electronic device sends the first content to the other electronic device.
In some possible implementations, the detecting, by the electronic device, content input by the user in response to detecting the preset operation of the user and receiving the first message includes: starting to listen for the first message in response to detecting the preset operation of the user; and detecting the content input by the user in response to receiving the first message.
In some possible implementations, the detecting, by the electronic device, content input by the user in response to detecting the preset operation of the user and receiving the first message includes: detecting the preset operation of the user in response to receiving the first message; and detecting the content input by the user in response to detecting the preset operation of the user.
In some possible implementations, the detecting, by the electronic device, content input by the user in response to detecting the preset operation of the user and receiving the first message includes: detecting the preset operation of the user in response to receiving the first message; and detecting the content input by the user when the preset operation of the user is detected and the interval between receiving the first message and detecting the preset operation is smaller than a preset interval.
In some possible implementations, the detecting, by the electronic device, content input by the user in response to detecting the preset operation of the user and receiving the first message includes: detecting the content input by the user in response to detecting the preset operation of the user and receiving the first message, when the other electronic device is within a preset angle range of the electronic device (for example, the device directly facing the electronic device is the other electronic device).
With reference to the second aspect, in some implementations of the second aspect, the detecting, by the electronic device, an input of the user in response to detecting the preset operation of the user and receiving the first message includes: displaying, by the electronic device, an input method in response to detecting the preset operation of the user and receiving the first message; and detecting, by the electronic device, the text content input by the user through the input method.
With reference to the second aspect, in some implementations of the second aspect, the detecting, by the electronic device, an input of the user in response to detecting the preset operation of the user and receiving the first message includes: in response to detecting the preset operation of the user and receiving the first message, the electronic equipment detects the voice content input by the user.
In some possible implementations, the detecting, by the electronic device, an input of the user in response to detecting the preset operation of the user and receiving the first message includes: prompting the user to select text input or voice input in response to detecting the preset operation of the user and receiving the first message; displaying an input method when an operation of the user selecting text input is detected; and detecting the voice content input by the user when an operation of the user selecting voice input is detected.
With reference to the second aspect, in some implementations of the second aspect, before the electronic device detects the content input by the user, the method further includes: displaying, by the electronic device, prompt information used to prompt the user that the electronic device is a device that can input to the other electronic device.
With reference to the second aspect, in some implementations of the second aspect, the detecting, by the electronic device, a preset operation of a user includes: the electronic device detects an operation of a user to start a first application program.
With reference to the second aspect, in some implementations of the second aspect, the first application is a remote control application.
With reference to the second aspect, in some implementations of the second aspect, the other electronic device is a smart TV.
In a third aspect, an apparatus for text input is provided, the apparatus comprising: the first detection unit is used for detecting the preset operation of a user; the receiving unit is used for monitoring a first message, and the first message is used for indicating that another electronic device needs to perform text input; the second detection unit is used for responding to the first detection unit detecting the preset operation of the user and the receiving unit receiving the first message, and detecting the content input by the user; a transmitting unit configured to transmit the first content to the other electronic device in response to the second detecting unit detecting an operation of the user to input the first content.
In a fourth aspect, an electronic device is provided, comprising: one or more processors; a memory; and one or more computer programs. Wherein the one or more computer programs are stored in the memory, the one or more computer programs comprising instructions. The instructions, when executed by the electronic device, cause the electronic device to perform the method of text input in any one of the possible implementations of the second aspect described above.
In a fifth aspect, a computer program product comprising instructions is provided, which, when run on an electronic device, causes the electronic device to perform the method of text input according to the second aspect.
In a sixth aspect, there is provided a computer-readable storage medium comprising instructions which, when run on an electronic device, cause the electronic device to perform the method of text input of the second aspect described above.
In a seventh aspect, a chip is provided for executing instructions, and when the chip runs, the chip executes the text input method according to the second aspect.
Drawings
Fig. 1 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present application.
Fig. 2 is a block diagram of a software structure provided in an embodiment of the present application.
FIG. 3 is a set of graphical user interfaces provided by embodiments of the present application.
Fig. 4 is another set of graphical user interfaces provided by embodiments of the present application.
Fig. 5 is another set of graphical user interfaces provided by embodiments of the present application.
Fig. 6 is another set of graphical user interfaces provided by embodiments of the present application.
Fig. 7 is another set of graphical user interfaces provided by embodiments of the present application.
Fig. 8 is a system architecture diagram provided in an embodiment of the present application.
Fig. 9 is a schematic flow chart of a method of text input provided by an embodiment of the present application.
Fig. 10 is a schematic block diagram of a text input device provided by an embodiment of the present application.
Fig. 11 is a schematic structural diagram of an electronic device provided in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the drawings. In the description of these embodiments, "/" means "or" unless otherwise specified; for example, A/B may mean A or B. "And/or" merely describes an association between objects and indicates that three relationships are possible; for example, "A and/or B" may mean: A alone, both A and B, or B alone. In addition, "a plurality of" means two or more.
In the following, the terms "first", "second" are used for descriptive purposes only and are not to be understood as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the present embodiment, "a plurality" means two or more unless otherwise specified.
The method provided by the embodiment of the application can be applied to electronic devices such as a mobile phone, a tablet personal computer, a wearable device, an in-vehicle device, an Augmented Reality (AR)/Virtual Reality (VR) device, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a Personal Digital Assistant (PDA), and the like, and the embodiment of the application does not limit the specific types of the electronic devices at all.
Fig. 1 shows a schematic structural diagram of an electronic device 100. The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a Universal Serial Bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, a key 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a Subscriber Identification Module (SIM) card interface 195, and the like. The sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
It is to be understood that the illustrated structure of the embodiment of the present application does not specifically limit the electronic device 100. In other embodiments of the present application, electronic device 100 may include more or fewer components than shown, or some components may be combined, some components may be split, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Processor 110 may include one or more processing units, such as: the processor 110 may include an Application Processor (AP), a modem processor, a Graphics Processing Unit (GPU), an Image Signal Processor (ISP), a controller, a memory, a video codec, a Digital Signal Processor (DSP), a baseband processor, and/or a neural-Network Processing Unit (NPU), etc. The different processing units may be separate devices or may be integrated into one or more processors.
The controller may be, among other things, a neural center and a command center of the electronic device 100. The controller can generate an operation control signal according to the instruction operation code and the timing signal to complete the control of instruction fetching and instruction execution.
A memory may also be provided in processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that have just been used or recycled by the processor 110. If the processor 110 needs to reuse the instruction or data, it can be called directly from the memory. Avoiding repeated accesses reduces the latency of the processor 110, thereby increasing the efficiency of the system.
In some embodiments, processor 110 may include one or more interfaces. The interface may include an integrated circuit (I2C) interface, an integrated circuit built-in audio (I2S) interface, a Pulse Code Modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a Mobile Industry Processor Interface (MIPI), a general-purpose input/output (GPIO) interface, a Subscriber Identity Module (SIM) interface, and/or a Universal Serial Bus (USB) interface, etc.
The I2C interface is a bi-directional synchronous serial bus that includes a serial data line (SDA) and a serial clock line (SCL). In some embodiments, processor 110 may include multiple sets of I2C buses. The processor 110 may be coupled to the touch sensor 180K, the charger, the flash, the camera 193, etc. through different I2C bus interfaces, respectively. For example: the processor 110 may be coupled to the touch sensor 180K via an I2C interface, such that the processor 110 and the touch sensor 180K communicate via an I2C bus interface to implement the touch functionality of the electronic device 100.
The I2S interface may be used for audio communication. In some embodiments, processor 110 may include multiple sets of I2S buses. The processor 110 may be coupled to the audio module 170 via an I2S bus to enable communication between the processor 110 and the audio module 170. In some embodiments, the audio module 170 may communicate audio signals to the wireless communication module 160 via the I2S interface, enabling answering of calls via a bluetooth headset.
The PCM interface may also be used for audio communication, sampling, quantizing and encoding analog signals. In some embodiments, the audio module 170 and the wireless communication module 160 may be coupled by a PCM bus interface. In some embodiments, the audio module 170 may also transmit audio signals to the wireless communication module 160 through the PCM interface, so as to implement a function of answering a call through a bluetooth headset. Both the I2S interface and the PCM interface may be used for audio communication.
The UART interface is a universal serial data bus used for asynchronous communication. The bus may be a bidirectional communication bus, and it converts the data to be transmitted between serial and parallel forms. In some embodiments, a UART interface is generally used to connect the processor 110 with the wireless communication module 160. For example: the processor 110 communicates with a bluetooth module in the wireless communication module 160 through a UART interface to implement a bluetooth function. In some embodiments, the audio module 170 may transmit an audio signal to the wireless communication module 160 through the UART interface, so as to realize the function of playing music through a bluetooth headset.
MIPI interfaces may be used to connect processor 110 with peripheral devices such as display screen 194, camera 193, and the like. The MIPI interface includes a Camera Serial Interface (CSI), a Display Serial Interface (DSI), and the like. In some embodiments, processor 110 and camera 193 communicate through a CSI interface to implement the capture functionality of electronic device 100. The processor 110 and the display screen 194 communicate through the DSI interface to implement the display function of the electronic device 100.
The GPIO interface may be configured by software. The GPIO interface may be configured to carry control signals or data signals. In some embodiments, a GPIO interface may be used to connect the processor 110 with the camera 193, the display 194, the wireless communication module 160, the audio module 170, the sensor module 180, and the like. The GPIO interface may also be configured as an I2C interface, an I2S interface, a UART interface, a MIPI interface, and the like.
The USB interface 130 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type-C interface, or the like. The USB interface 130 may be used to connect a charger to charge the electronic device 100, and may also be used to transmit data between the electronic device 100 and a peripheral device. It may also be used to connect a headset and play audio through the headset. The interface may also be used to connect other electronic devices, such as AR devices.
It should be understood that the interface connection relationship between the modules illustrated in the embodiments of the present application is only an illustration, and does not limit the structure of the electronic device 100. In other embodiments of the present application, the electronic device 100 may also adopt different interface connection manners or a combination of multiple interface connection manners in the above embodiments.
The charging management module 140 is configured to receive charging input from a charger. The charger may be a wireless charger or a wired charger. In some wired charging embodiments, the charging management module 140 may receive charging input from a wired charger via the USB interface 130. In some wireless charging embodiments, the charging management module 140 may receive a wireless charging input through a wireless charging coil of the electronic device 100. The charging management module 140 may also supply power to the electronic device through the power management module 141 while charging the battery 142.
The power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charge management module 140 and provides power to the processor 110, the internal memory 121, the external memory, the display 194, the camera 193, the wireless communication module 160, and the like. The power management module 141 may also be used to monitor parameters such as battery capacity, battery cycle count, battery state of health (leakage, impedance), etc. In some other embodiments, the power management module 141 may also be disposed in the processor 110. In other embodiments, the power management module 141 and the charging management module 140 may be disposed in the same device.
The wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the electronic device 100 may be used to cover a single or multiple communication bands. Different antennas can also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed as a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 150 may provide a solution including 2G/3G/4G/5G wireless communication applied to the electronic device 100. The mobile communication module 150 may include at least one filter, a switch, a power amplifier, a Low Noise Amplifier (LNA), and the like. The mobile communication module 150 may receive the electromagnetic wave from the antenna 1, filter, amplify, etc. the received electromagnetic wave, and transmit the electromagnetic wave to the modem processor for demodulation. The mobile communication module 150 may also amplify the signal modulated by the modem processor, and convert the signal into electromagnetic wave through the antenna 1 to radiate the electromagnetic wave. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the processor 110. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the same device as at least some of the modules of the processor 110.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating a low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then passes the demodulated low frequency baseband signal to a baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then transferred to the application processor. The application processor outputs a sound signal through an audio device (not limited to the speaker 170A, the receiver 170B, etc.) or displays an image or video through the display screen 194. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be provided in the same device as the mobile communication module 150 or other functional modules, independent of the processor 110.
The wireless communication module 160 may provide solutions for wireless communication applied to the electronic device 100, including Wireless Local Area Networks (WLANs) (e.g., wireless fidelity (Wi-Fi) networks), bluetooth (bluetooth, BT), Global Navigation Satellite System (GNSS), Frequency Modulation (FM), Near Field Communication (NFC), Infrared (IR), and the like. The wireless communication module 160 may be one or more devices integrating at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, performs frequency modulation and filtering processing on electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 160 may also receive a signal to be transmitted from the processor 110, perform frequency modulation and amplification on the signal, and convert the signal into electromagnetic waves through the antenna 2 to radiate the electromagnetic waves.
In some embodiments, antenna 1 of electronic device 100 is coupled to mobile communication module 150 and antenna 2 is coupled to wireless communication module 160 so that electronic device 100 can communicate with networks and other devices through wireless communication techniques. The wireless communication technology may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division synchronous code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technologies, etc. The GNSS may include a Global Positioning System (GPS), a global navigation satellite system (GLONASS), a beidou navigation satellite system (BDS), a quasi-zenith satellite system (QZSS), and/or a Satellite Based Augmentation System (SBAS).
The electronic device 100 implements display functions via the GPU, the display screen 194, and the application processor. The GPU is a microprocessor for image processing, and is connected to the display screen 194 and an application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
The display screen 194 is used to display images, video, and the like. The display screen 194 includes a display panel. The display panel may adopt a Liquid Crystal Display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), and the like. In some embodiments, the electronic device 100 may include 1 or N display screens 194, with N being a positive integer greater than 1.
The electronic device 100 may implement a shooting function through the ISP, the camera 193, the video codec, the GPU, the display 194, the application processor, and the like.
The ISP is used to process the data fed back by the camera 193. For example, when a photo is taken, the shutter is opened, light is transmitted through the lens to the camera's photosensitive element, where the optical signal is converted into an electrical signal; the photosensitive element transmits the electrical signal to the ISP, which processes it and converts it into an image visible to the naked eye. The ISP can also perform algorithm optimization on image noise, brightness, and skin tone. The ISP can also optimize parameters such as the exposure and color temperature of a shooting scene. In some embodiments, the ISP may be provided in camera 193.
The camera 193 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image to the photosensitive element. The photosensitive element may be a Charge Coupled Device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The light sensing element converts the optical signal into an electrical signal, which is then passed to the ISP where it is converted into a digital image signal. And the ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into image signal in standard RGB, YUV and other formats. In some embodiments, the electronic device 100 may include 1 or N cameras 193, N being a positive integer greater than 1.
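As one illustration of the format conversion the DSP performs, the following Python sketch converts a single YUV sample to RGB. The BT.601 full-range coefficients used here are an assumption for illustration only; the patent does not specify which color matrix the DSP applies.

```python
def yuv_to_rgb(y, u, v):
    """Convert one full-range 8-bit YUV sample to 8-bit RGB.

    The BT.601 conversion matrix is assumed here for illustration;
    the actual DSP may use a different color standard (e.g. BT.709).
    """
    r = y + 1.402 * (v - 128)
    g = y - 0.344136 * (u - 128) - 0.714136 * (v - 128)
    b = y + 1.772 * (u - 128)
    # Clamp each channel to the valid 8-bit range.
    clamp = lambda x: max(0, min(255, int(round(x))))
    return clamp(r), clamp(g), clamp(b)
```

For a neutral gray sample (Y=128, U=V=128) all three RGB channels come out equal, which is a quick sanity check on the matrix.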
The digital signal processor is used to process digital signals; in addition to digital image signals, it can process other digital signals. For example, when the electronic device 100 selects a frequency point, the digital signal processor is used to perform a Fourier transform or the like on the frequency-point energy.
Video codecs are used to compress or decompress digital video. The electronic device 100 may support one or more video codecs. In this way, the electronic device 100 may play or record video in a variety of encoding formats, such as: Moving Picture Experts Group (MPEG)-1, MPEG-2, MPEG-3, MPEG-4, and the like.
The NPU is a neural-network (NN) computing processor. By drawing on the structure of biological neural networks, for example the transfer mode between neurons in the human brain, it processes input information rapidly and can also continuously self-learn. Applications such as intelligent cognition of the electronic device 100 can be implemented through the NPU, for example: image recognition, face recognition, speech recognition, text understanding, and the like.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to extend the memory capability of the electronic device 100. The external memory card communicates with the processor 110 through the external memory interface 120 to implement a data storage function. For example, files such as music, video, etc. are saved in an external memory card.
The internal memory 121 may be used to store computer-executable program code, which includes instructions. The processor 110 executes various functional applications of the electronic device 100 and data processing by executing instructions stored in the internal memory 121. The internal memory 121 may include a program storage area and a data storage area. The program storage area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like. The data storage area may store data (such as audio data, a phone book, etc.) created during use of the electronic device 100, and the like. In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash storage (UFS), and the like.
The electronic device 100 may implement audio functions via the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the headphone interface 170D, and the application processor. Such as music playing, recording, etc.
The audio module 170 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal. The audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be disposed in the processor 110, or some functional modules of the audio module 170 may be disposed in the processor 110.
The speaker 170A, also called a "horn", is used to convert the audio electrical signal into an acoustic signal. The electronic apparatus 100 can listen to music through the speaker 170A or listen to a handsfree call.
The receiver 170B, also called "earpiece", is used to convert the electrical audio signal into an acoustic signal. When the electronic apparatus 100 receives a call or voice information, it can receive voice by placing the receiver 170B close to the ear of the person.
The microphone 170C, also referred to as a "mike", is used to convert sound signals into electrical signals. When making a call or transmitting voice information, the user can input a sound signal to the microphone 170C by speaking with the mouth close to the microphone 170C. The electronic device 100 may be provided with at least one microphone 170C. In other embodiments, the electronic device 100 may be provided with two microphones 170C to achieve a noise reduction function in addition to collecting sound signals. In other embodiments, the electronic device 100 may further include three, four, or more microphones 170C to collect sound signals, reduce noise, identify sound sources, perform directional recording, and so on.
The headphone interface 170D is used to connect a wired headphone. The headphone interface 170D may be the USB interface 130, or may be a 3.5 mm Open Mobile Terminal Platform (OMTP) standard interface or a Cellular Telecommunications Industry Association of the USA (CTIA) standard interface.
The pressure sensor 180A is used to sense a pressure signal and convert the pressure signal into an electrical signal. In some embodiments, the pressure sensor 180A may be disposed on the display screen 194. There are many types of pressure sensors 180A, such as resistive pressure sensors, inductive pressure sensors, and capacitive pressure sensors. A capacitive pressure sensor may comprise at least two parallel plates of electrically conductive material. When a force acts on the pressure sensor 180A, the capacitance between the electrodes changes, and the electronic device 100 determines the strength of the pressure from the change in capacitance. When a touch operation is applied to the display screen 194, the electronic device 100 detects the intensity of the touch operation by means of the pressure sensor 180A. The electronic device 100 may also calculate the touched position from the detection signal of the pressure sensor 180A. In some embodiments, touch operations that are applied to the same touch position but have different intensities may correspond to different operation instructions. For example: when a touch operation whose intensity is less than a first pressure threshold acts on the short message application icon, an instruction to view the short message is executed; when a touch operation whose intensity is greater than or equal to the first pressure threshold acts on the short message application icon, an instruction to create a new short message is executed.
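The intensity-dependent dispatch in the short message example can be sketched as below. The threshold value and instruction names are hypothetical, since the patent specifies only that operations below and at-or-above a first pressure threshold map to different instructions.

```python
# Hypothetical first pressure threshold (normalized units);
# the patent does not give a concrete value.
FIRST_PRESSURE_THRESHOLD = 0.5

def dispatch_touch_on_sms_icon(touch_intensity):
    """Map a touch on the short message application icon to an
    operation instruction based on the detected touch intensity."""
    if touch_intensity < FIRST_PRESSURE_THRESHOLD:
        return "view_sms"   # light press: view the short message
    return "new_sms"        # firm press: create a new short message
```

The comparison direction mirrors the text: strictly less than the threshold views the message; greater than or equal to it creates a new one.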
The gyro sensor 180B may be used to determine the motion attitude of the electronic device 100. In some embodiments, the angular velocity of electronic device 100 about three axes (i.e., the x, y, and z axes) may be determined by gyro sensor 180B. The gyro sensor 180B may be used for photographing anti-shake. For example, when the shutter is pressed, the gyro sensor 180B detects the shake angle of the electronic device 100, calculates the distance the lens module needs to compensate according to the shake angle, and allows the lens to counteract the shake of the electronic device 100 through a reverse movement, thereby achieving anti-shake. The gyro sensor 180B may also be used for navigation and motion-sensing game scenarios.
The air pressure sensor 180C is used to measure air pressure. In some embodiments, electronic device 100 calculates altitude, aiding in positioning and navigation, from barometric pressure values measured by barometric pressure sensor 180C.
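The altitude calculation from barometric pressure can be illustrated with the international barometric formula; the formula choice and the standard sea-level reference pressure are assumptions for illustration, since the patent only states that altitude is calculated from the measured pressure value.

```python
def altitude_m(pressure_hpa, sea_level_hpa=1013.25):
    """Estimate altitude (metres) from barometric pressure using the
    international barometric formula. The formula and the sea-level
    reference pressure are illustrative assumptions; the patent does
    not specify how electronic device 100 performs this calculation."""
    return 44330.0 * (1.0 - (pressure_hpa / sea_level_hpa) ** (1.0 / 5.255))
```

At the reference sea-level pressure this returns 0 m, and at 900 hPa it returns roughly 1 km, consistent with typical barometric altimetry.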
The magnetic sensor 180D includes a hall sensor. The electronic device 100 may detect the opening and closing of a flip holster using the magnetic sensor 180D. In some embodiments, when the electronic device 100 is a flip phone, the electronic device 100 may detect the opening and closing of the flip cover according to the magnetic sensor 180D. Features such as automatic unlocking upon flipping open can then be set according to the detected open or closed state of the holster or flip cover.
The acceleration sensor 180E may detect the magnitude of acceleration of the electronic device 100 in various directions (typically along three axes), and can detect the magnitude and direction of gravity when the electronic device 100 is stationary. It can also be used to recognize the posture of the electronic device, and is applied in landscape/portrait switching, pedometers, and similar applications.
The distance sensor 180F is used to measure distance. The electronic device 100 may measure distance by infrared or laser. In some embodiments, in a shooting scenario, the electronic device 100 may use the distance sensor 180F to measure distance for fast focusing.
The proximity light sensor 180G may include, for example, a light-emitting diode (LED) and a light detector, such as a photodiode. The light-emitting diode may be an infrared light-emitting diode. The electronic device 100 emits infrared light outward through the light-emitting diode, and uses the photodiode to detect infrared light reflected from nearby objects. When sufficient reflected light is detected, the electronic device 100 can determine that there is an object nearby; when insufficient reflected light is detected, it can determine that there is no object nearby. The electronic device 100 can use the proximity light sensor 180G to detect that the user is holding the electronic device 100 close to the ear for a call, so as to automatically turn off the screen to save power. The proximity light sensor 180G can also be used in holster mode and pocket mode to automatically unlock and lock the screen.
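The "sufficient reflected light" decision and the screen-off-during-call behavior can be sketched as follows. The threshold value and function names are illustrative assumptions; the patent does not quantify what counts as sufficient reflection.

```python
# Hypothetical detection threshold for "sufficient reflected light";
# real devices calibrate this per sensor.
REFLECTED_LIGHT_THRESHOLD = 100

def object_nearby(reflected_light):
    """Return True when enough infrared reflection is detected,
    i.e. an object is near the electronic device."""
    return reflected_light >= REFLECTED_LIGHT_THRESHOLD

def screen_action(reflected_light, in_call):
    """Turn the screen off while the user holds the device to the
    ear during a call, to save power; otherwise keep it on."""
    if in_call and object_nearby(reflected_light):
        return "screen_off"
    return "screen_on"
```

Pocket mode would use the same proximity decision, possibly combined with the ambient light reading described for sensor 180L below.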
The ambient light sensor 180L is used to sense the ambient light level. Electronic device 100 may adaptively adjust the brightness of display screen 194 based on the perceived ambient light level. The ambient light sensor 180L may also be used to automatically adjust the white balance when taking a picture. The ambient light sensor 180L may also cooperate with the proximity light sensor 180G to detect whether the electronic device 100 is in a pocket to prevent accidental touches.
The fingerprint sensor 180H is used to collect a fingerprint. The electronic device 100 can utilize the collected fingerprint characteristics to unlock the fingerprint, access the application lock, photograph the fingerprint, answer an incoming call with the fingerprint, and so on.
The temperature sensor 180J is used to detect temperature. In some embodiments, the electronic device 100 implements a temperature processing strategy using the temperature detected by the temperature sensor 180J. For example, when the temperature reported by the temperature sensor 180J exceeds a threshold, the electronic device 100 reduces the performance of a processor located near the temperature sensor 180J, so as to reduce power consumption and implement thermal protection. In other embodiments, the electronic device 100 heats the battery 142 when the temperature is below another threshold, to avoid an abnormal shutdown caused by low temperature. In still other embodiments, when the temperature is below a further threshold, the electronic device 100 boosts the output voltage of the battery 142 to avoid an abnormal shutdown caused by low temperature.
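The three temperature strategies above amount to a threshold-based policy, sketched below. The threshold values and action names are hypothetical, as the patent does not specify them.

```python
# Hypothetical thresholds in degrees Celsius; the patent names three
# thresholds but gives no values.
T_HOT = 45.0        # above this: thermal protection
T_COLD = 0.0        # below this: heat the battery
T_VERY_COLD = -10.0 # below this: boost battery output voltage

def thermal_policy(temp_c):
    """Pick a protective action from the reported temperature,
    mirroring the three strategies described above."""
    if temp_c > T_HOT:
        return "reduce_processor_performance"
    if temp_c < T_VERY_COLD:
        return "boost_battery_voltage"
    if temp_c < T_COLD:
        return "heat_battery"
    return "normal"
```

Checking the most severe condition (very cold) before the milder one (cold) keeps the three ranges disjoint.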
The touch sensor 180K is also referred to as a "touch panel". The touch sensor 180K may be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 form a touch screen, which is also called a "touch screen". The touch sensor 180K is used to detect a touch operation applied thereto or nearby. The touch sensor can communicate the detected touch operation to the application processor to determine the touch event type. Visual output associated with the touch operation may be provided through the display screen 194. In other embodiments, the touch sensor 180K may be disposed on a surface of the electronic device 100, different from the position of the display screen 194.
The bone conduction sensor 180M may acquire a vibration signal. In some embodiments, the bone conduction sensor 180M may acquire the vibration signal of a bone vibrated by the human vocal part. The bone conduction sensor 180M may also contact the human pulse to receive a blood pressure pulsation signal. In some embodiments, the bone conduction sensor 180M may also be disposed in a headset, integrated into a bone conduction headset. The audio module 170 may parse out a voice signal based on the vibration signal of the vocal-part bone acquired by the bone conduction sensor 180M, thereby implementing a voice function. The application processor may parse out heart rate information based on the blood pressure pulsation signal acquired by the bone conduction sensor 180M, thereby implementing a heart rate detection function.
The keys 190 include a power-on key, a volume key, and the like. The keys 190 may be mechanical keys or touch keys. The electronic device 100 may receive a key input, and generate a key signal input related to user settings and function control of the electronic device 100.
The motor 191 may generate a vibration cue. The motor 191 may be used for incoming call vibration cues, as well as for touch vibration feedback. For example, touch operations applied to different applications (e.g., photographing, audio playing, etc.) may correspond to different vibration feedback effects. The motor 191 may also produce different vibration feedback effects for touch operations applied to different areas of the display screen 194. Different application scenarios (such as time reminding, receiving information, alarm clock, game, etc.) may also correspond to different vibration feedback effects. The touch vibration feedback effect may also support customization.
Indicator 192 may be an indicator light that may be used to indicate a state of charge, a change in charge, or a message, missed call, notification, etc.
The SIM card interface 195 is used to connect a SIM card. The SIM card can be brought into and out of contact with the electronic device 100 by being inserted into the SIM card interface 195 or being pulled out of the SIM card interface 195. The electronic device 100 may support 1 or N SIM card interfaces, N being a positive integer greater than 1. The SIM card interface 195 may support a Nano SIM card, a Micro SIM card, a SIM card, etc. Multiple cards may be inserted into the same SIM card interface 195 at the same time; the types of the cards may be the same or different. The SIM card interface 195 may also be compatible with different types of SIM cards, as well as with external memory cards. The electronic device 100 interacts with the network through the SIM card to implement functions such as calls and data communication. In some embodiments, the electronic device 100 employs an eSIM, i.e., an embedded SIM card. The eSIM card can be embedded in the electronic device 100 and cannot be separated from it.
The software system of the electronic device 100 may employ a layered architecture, an event-driven architecture, a micro-core architecture, a micro-service architecture, or a cloud architecture. The embodiment of the present application takes an Android system with a layered architecture as an example, and exemplarily illustrates a software structure of the electronic device 100.
Fig. 2 is a block diagram of a software structure of the electronic device 100 according to the embodiment of the present application. The layered architecture divides the software into several layers, each layer having a clear role and division of labor. The layers communicate with each other through a software interface. In some embodiments, the Android system is divided into four layers, an application layer, an application framework layer, an Android runtime (Android runtime) and system library, and a kernel layer from top to bottom. The application layer may include a series of application packages.
As shown in fig. 2, the application package may include applications such as camera, gallery, calendar, phone call, map, navigation, WLAN, bluetooth, music, video, short message, etc.
The application framework layer provides an Application Programming Interface (API) and a programming framework for the application program of the application layer. The application framework layer includes a number of predefined functions.
As shown in FIG. 2, the application framework layers may include a window manager, content provider, view system, phone manager, resource manager, notification manager, and the like.
The window manager is used to manage window programs. The window manager can obtain the size of the display screen, judge whether there is a status bar, lock the screen, take screenshots, and the like.
The content provider is used to store and retrieve data and make it accessible to applications. The data may include video, images, audio, calls made and received, browsing history and bookmarks, phone books, etc.
The view system includes visual controls such as controls to display text, controls to display pictures, and the like. The view system may be used to build applications. The display interface may be composed of one or more views. For example, the display interface including the short message notification icon may include a view for displaying text and a view for displaying pictures.
The phone manager is used to provide communication functions of the electronic device 100. Such as management of call status (including on, off, etc.).
The resource manager provides various resources for the application, such as localized strings, icons, pictures, layout files, video files, and the like.
The notification manager enables an application to display notification information in the status bar. It can be used to convey notification-type messages, which can disappear automatically after a short stay without user interaction. For example, the notification manager is used to notify of download completion, message alerts, and so on. The notification manager may also present notifications that appear in the top status bar of the system in the form of a chart or scroll-bar text, such as notifications of applications running in the background, or notifications that appear on the screen in the form of a dialog window. For example, text information is prompted in the status bar, a prompt tone sounds, the electronic device vibrates, or an indicator light flashes.
The Android Runtime comprises a core library and a virtual machine. The Android runtime is responsible for scheduling and managing an Android system.
The core library comprises two parts: one part is the functions that the Java language needs to call, and the other part is the core library of Android.
The application layer and the application framework layer run in the virtual machine. The virtual machine executes the Java files of the application layer and the application framework layer as binary files. The virtual machine is used to perform functions such as object lifecycle management, stack management, thread management, security and exception management, and garbage collection.
The system library may include a plurality of functional modules. For example: surface managers (surface managers), media libraries (media libraries), three-dimensional graphics processing libraries (e.g., OpenGL ES), 2D graphics engines (e.g., SGL), and the like.
The surface manager is used to manage the display subsystem and provide fusion of 2D and 3D layers for multiple applications.
The media library supports playback and recording of a variety of commonly used audio and video formats, as well as still image files, among others. The media library may support a variety of audio and video encoding formats, such as MPEG4, H.264, MP3, AAC, AMR, JPG, PNG, and the like.
The three-dimensional graphic processing library is used for realizing three-dimensional graphic drawing, image rendering, synthesis, layer processing and the like.
The 2D graphics engine is a drawing engine for 2D drawing.
The kernel layer is a layer between hardware and software. The kernel layer comprises at least a display driver, a camera driver, an audio driver, and a sensor driver.
It should be understood that the embodiments of the present application may be applicable to Android, iOS, or Hongmeng (HarmonyOS) systems.
Fig. 3 is a set of Graphical User Interfaces (GUIs) provided in embodiments of the present application.
Referring to (a) in fig. 3, a user is conducting a movie search using a television remote controller, a movie search display interface is displayed on the smart television, and the cursor of the smart television is currently located in a text input box 301. When the smart television detects that the user moves the cursor to the text input box 301, it may send a broadcast message to surrounding devices, where the broadcast message may be used to indicate that the smart television needs to perform text input.
In one embodiment, the smart tv may also send the broadcast message to surrounding devices upon detecting that the user moved the cursor to a key (e.g., the "ABC" key) in the input method displayed by the smart tv.
It should be understood that the movie search display interface of the smart tv display as shown in (a) of fig. 3 may also be referred to as a text input interface. The text input interface in the embodiment of the present application may include a text input box thereon, or the text input interface may include a text input box and an input method.
In an embodiment, the broadcast message may also carry a communication address (e.g., an Internet Protocol (IP) address, a port number, or a bluetooth address) of the smart tv.
Referring to (b) in fig. 3, a reminder box 302 appears on the mobile phone, and the user may choose to perform text input with the mobile phone instead of the television remote controller. The reminder box 302 may prompt the user: "Detected that the smart TV needs to perform text input. Open the remote control application for input?". When the mobile phone detects an operation of the user clicking the control 303, a GUI as shown in (c) of fig. 3 may be displayed.
In one embodiment, when the mobile phone detects that the user clicks the control 303, the mobile phone may establish a connection with the smart television through the communication address carried in the broadcast message.
In one embodiment, after the mobile phone and the smart tv are connected, the mobile phone may further send device information of the mobile phone (for example, a device name "P40" of the mobile phone and a user name "Tom" of using the mobile phone) to the smart tv. After receiving the device information sent by the mobile phone, the smart television may display a prompt message on a display screen of the smart television, for example, the prompt message is "please enter text on P40 of Tom".
Referring to the GUI shown in (c) of fig. 3, the GUI is a display interface of a remote controller application on a mobile phone. The display interface includes a plurality of function controls, such as switching the television on or off, muting, input, menu, expanded keys, changing channels, turning the volume up or down, and returning. When the mobile phone detects an operation of the user clicking the input control 304, the mobile phone may display a GUI as shown in (d) of fig. 3.
In one embodiment, if the mobile phone is on the screen locking interface when receiving the broadcast message, then when the mobile phone detects that the user clicks the control 303, the mobile phone may start the camera to collect the user's face information. If the face information collected by the camera matches the face information preset in the mobile phone, the mobile phone may first be unlocked to enter the non-screen-locking interface, and the remote controller application is automatically opened on the non-screen-locking interface. Alternatively, when the mobile phone detects that the user clicks the control 303, the mobile phone may collect the user's fingerprint information. If the collected fingerprint information matches the fingerprint information preset in the mobile phone, the mobile phone may first be unlocked to enter the non-screen-locking interface, and the remote controller application is automatically opened on the non-screen-locking interface.
Referring to the GUI shown in (d) of fig. 3, the GUI is another display interface of the remote controller application on the mobile phone. When the mobile phone detects that the user clicks the control 304, the mobile phone may call up the input method of the mobile phone. The user can input the text content by the input method called out by the mobile phone. When the mobile phone detects that the user inputs the text content "movie 1" in the text input box 306 and clicks the control 305, the mobile phone may send the text content to the smart television.
In one embodiment, when the mobile phone detects the operation of clicking the control 303 by the user, the mobile phone may also directly display the GUI as shown in (d) of fig. 3. That is to say, after the mobile phone detects that the user clicks the control 303, the mobile phone may perform an unlocking operation through the collected face information or fingerprint information, so as to enter a non-screen-locking interface. The mobile phone automatically opens the remote controller application and calls up the input method of the mobile phone under the non-screen-locking interface, thereby displaying the GUI as shown in (d) of fig. 3.
In one embodiment, if the mobile phone is in the non-screen-lock interface when receiving the broadcast message, the mobile phone may directly call the input method of the mobile phone on the non-screen-lock interface without opening the remote control application. The user can input text by the input method called out by the mobile phone. When the mobile phone detects that the user inputs the text content "movie 1" in the text input box 306 through the input method and clicks the control 305, the mobile phone may send the text content to the smart television.
In one embodiment, the mobile phone may also transmit the text content input by the user to the smart television in real time. For example, when the mobile phone detects that the user has input the text content "mov" in the text input box 306, the mobile phone may send that text to the smart television, so that the text content "mov" is displayed in the text input box 301 of the smart television. When the mobile phone detects that the user then inputs "ie" in the text input box 306, the mobile phone may continue to send "ie" to the smart television, so that the text content "movie" is displayed in the text input box 301 of the smart television. When the mobile phone detects that the user then inputs "1" in the text input box 306, the mobile phone may continue to send "1" to the smart television, so that the text content "movie 1" is displayed in the text input box 301 of the smart television.
It should be understood that if the mobile phone detects that the user deletes the text content in the text input box 306, the mobile phone may indicate the deleted text content to the smart tv in real time, so that the text content in the text input box 301 of the smart tv is synchronized with the text content in the text input box 306 of the mobile phone.
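The real-time synchronization described above can be sketched as follows. This is a minimal illustration only; the class and callback names are assumptions and do not come from the patent, and a real implementation might send deltas rather than the full text on every edit.

```python
# Hypothetical sketch of real-time text synchronization between the phone
# (input side) and the smart TV's text input box. Names are illustrative.

class RemoteTextSync:
    def __init__(self, send_to_tv):
        self.send_to_tv = send_to_tv  # callback that transmits to the TV
        self.last_sent = ""

    def on_text_changed(self, current_text: str):
        # Send the full current content on every edit, so both insertions
        # and deletions in the phone's text box are mirrored on the TV.
        if current_text != self.last_sent:
            self.last_sent = current_text
            self.send_to_tv(current_text)
```

Sending the complete current content (rather than individual keystrokes) makes deletion handling trivial: the TV simply replaces its box content with whatever it last received.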
Referring to (e) in fig. 3, after receiving the text content sent by the mobile phone, the smart tv may display the text content (e.g., "movie 1") in a text input box 301 of the smart tv. Also, the smart tv may display information (e.g., genre, director, etc.) corresponding to movie 1.
In the embodiment of the application, the mobile phone can provide text input for the smart television after receiving the broadcast message sent by the smart television, which helps improve the user's convenience when performing text input. Moreover, the mobile phone and the smart television do not need to be devices under the same account; as long as the mobile phone is near the smart television, it can provide text content input for the smart television, which improves the user experience.
FIG. 4 is another set of GUIs provided by embodiments of the present application.
Referring to the GUI shown in (a) of fig. 4, when the mobile phone receives a broadcast message transmitted by the smart tv, the mobile phone may display a text input icon 401 on the lock screen interface. When the cell phone detects an operation of the user clicking the icon 401, a GUI as shown in (b) of fig. 4 may be displayed.
In an embodiment, when the mobile phone detects that the user clicks the icon 401, the mobile phone may establish a connection with the smart tv according to the communication address of the smart tv carried in the broadcast message.
Referring to the GUI shown in (b) of fig. 4, the GUI is a display interface after the mobile phone detects that the user has clicked the icon 401. The mobile phone may display the input method on the lock screen interface, and the user may input the text content in the text input box 403. When the mobile phone detects that the user inputs text content (for example, "movie 1") in the text input box 403 and clicks the control 402, the mobile phone may transmit the text content to the smart television. The smart tv may display a GUI as shown in (e) of fig. 3.
In one embodiment, the mobile phone may send the content in the text input box 403 to the smart tv in real time, so that the text content in the text input box 301 of the smart tv is synchronized with the content in the text input box 403 of the mobile phone.
In the embodiment of the application, after the mobile phone receives the broadcast message sent by the smart television, the icon can prompt the user to assist the smart television in inputting text content through the mobile phone, so that the text content can be input in the screen-locked state without the mobile phone entering the non-screen-locking state and opening the remote controller application. Since the mobile phone can provide text input for the smart television from the lock screen interface, the user's convenience in performing text input for a large-screen device is improved, and the user experience is improved. Moreover, the mobile phone and the smart television do not need to be devices under the same account; as long as the mobile phone is near the smart television, it can provide text content input for the smart television, which improves the user experience.
Referring to the GUI shown in (c) of fig. 4, the GUI is a display interface after the mobile phone detects that the user has clicked the icon 401. When the mobile phone detects that the user clicks the icon 401, the mobile phone may start a camera to collect the user's face information. If the face information collected by the camera matches the face information preset in the mobile phone, the mobile phone may first be unlocked to enter the non-screen-locking interface, and the input method of the mobile phone is automatically called out on the non-screen-locking interface. Alternatively, when the mobile phone detects that the user clicks the icon 401, the mobile phone may collect the user's fingerprint information. If the collected fingerprint information matches the fingerprint information preset in the mobile phone, the mobile phone may first be unlocked to enter the non-screen-locking interface, and the input method of the mobile phone is automatically called out on the non-screen-locking interface.
In the embodiment of the application, after the mobile phone receives the broadcast message sent by the smart television, the icon can prompt the user to assist the smart television in inputting text content through the mobile phone, and the mobile phone can automatically call out the input method after entering the non-screen-locking interface without starting the remote controller application. Since the mobile phone can provide text input for the smart television without starting an application program on the non-screen-locking interface, the user's convenience in performing text input for a large-screen device is improved, and the user experience is improved.

FIG. 5 is another set of GUIs provided by an embodiment of the present application.
Referring to fig. 5 (a), a user is conducting a movie search using a television remote controller, a movie search display interface is displayed on the smart television, and a cursor of the smart television is located at a text input box 501 at this time. When the smart tv detects that the user moves the cursor to the input box 501, a broadcast message may be sent to surrounding devices, and the broadcast message may be used to indicate that the smart tv needs to perform text input.
In one embodiment, the broadcast message may also carry a communication address (e.g., an IP address, a port number, or a bluetooth address, etc.) of the smart television.
Referring to fig. 5 (b), the user may pick up the mobile phone, at which time the mobile phone may be in an unlocked state and the mobile phone displays the mobile phone desktop.
Referring to the GUI shown in (c) of fig. 5, the GUI is a desktop of a mobile phone. When the mobile phone detects that the user clicks the icon 502 of the remote controller application on the desktop, the mobile phone can open the remote controller application. The handset may start to listen to broadcast messages sent by surrounding devices when detecting that the user opens the remote control application. When the mobile phone receives the broadcast message sent by the smart tv, the mobile phone may display a GUI as shown in (d) of fig. 5.
Referring to the GUI shown in (d) of fig. 5, the GUI is a display interface of the remote controller application of the mobile phone. After receiving the broadcast message sent by the smart television, the mobile phone may display a reminder box 503, where the reminder box 503 includes the prompt message "Detected that the smart TV needs to perform text input. Use the input function to enter text?". When the mobile phone detects an operation of the user clicking the control 504, the mobile phone may display a GUI as shown in (e) of fig. 5.
In one embodiment, when the mobile phone detects that the user clicks the control 504, the mobile phone may establish a connection with the smart television according to the communication address of the smart television carried in the broadcast message.
Referring to the GUI shown in (e) of fig. 5, the GUI is another display interface of the remote controller application of the cellular phone. When the mobile phone detects that the user clicks the control 504, the mobile phone may call up the input method of the mobile phone. When the mobile phone detects that the user inputs the text content "movie 1" in the text input box 506 and clicks the control 505, the mobile phone may send the text content to the smart television.
Referring to the GUI shown in (f) of fig. 5, the GUI is another display interface of the remote controller application of the cellular phone. When the mobile phone detects that the user clicks the control 504, the mobile phone may detect the voice content input by the user. As shown in (f) in fig. 5, the user utters the voice content "movie 1". After the mobile phone detects the voice content input by the user, the text content 'movie 1' corresponding to the voice content can be determined, so that the mobile phone sends the text content to the smart television.
In the embodiment of the application, the mobile phone may include an Automatic Speech Recognition (ASR) module, which is mainly used to recognize the user's speech content as text content.
In one embodiment, the mobile phone can also send the voice content to the smart television after detecting the voice content input by the user. The smart tv may convert the speech content into text content for display in text entry box 501.
In one embodiment, when the mobile phone detects that the user has clicked on control 504, the mobile phone may further prompt the user to select text input or voice input. If the user selects text input, the mobile phone can call out the input method, so that the user can input text contents through the input method; alternatively, if the user selects voice input, the handset may begin to detect the voice content of the user input.
In one embodiment, the handset may start listening for broadcast messages when it detects a user operation to open the remote control application. After the mobile phone receives the broadcast message sent by the smart television, the mobile phone may not display the reminding box 503, and the mobile phone may directly call the input method to detect the text content input by the user, or the mobile phone may start to monitor the voice content input by the user.
In one embodiment, the mobile phone may send the content in the text input box 506 to the smart tv in real time, so that the text content in the text input box 501 of the smart tv is synchronized with the content in the text input box 506 of the mobile phone.
Referring to (g) in fig. 5, after receiving the text content sent by the mobile phone, the smart tv may display the text content (e.g., "movie 1") in an input box 501 of the smart tv. Also, the smart tv may display information (e.g., genre, director, etc.) corresponding to movie 1.
In the embodiment of the application, the mobile phone and the smart television do not need to be connected or bound in advance; instead, an input relationship is temporarily established through dynamic matching when the smart television needs input. When the smart television needs to perform input, the user can pick up any nearby device (for example, a mobile phone or a Pad) to perform input, which helps improve the user's convenience in text input and improves the user experience. Meanwhile, the mobile phone starts monitoring only after the remote controller application is started, and prompts the user through the reminder box after the broadcast message is detected, so that the user can clearly determine that the mobile phone can be used as an input device. Before the user initiates an input operation on the mobile phone, the mobile phone does not generate any prompt information that might disturb the user, which avoids interference and thereby improves the user experience.
FIG. 6 is another set of GUIs provided by embodiments of the present application.
Referring to the GUI shown in (a) of fig. 6, the GUI is a desktop of a mobile phone. When the handset detects an operation of the user clicking an icon of the remote controller application on the desktop, the handset may start listening to a broadcast message transmitted by the surrounding devices and displaying a GUI as shown in (b) of fig. 6.
Referring to the GUI shown in (b) of fig. 6, the GUI is a display interface of a remote controller application of a mobile phone. When the mobile phone does not receive the broadcast message sent by the smart television, the color of the keys on the display interface is gray (the gray indicates that the control is not available).
Referring to the GUI shown in (c) of fig. 6, the GUI is another display interface of the remote controller application of the cellular phone. When the mobile phone receives a broadcast message sent by the smart television, the color of the keys on the display interface changes from grey to black (the black indicates that the control can be used). At this time, the user can perform input of text content using an input function on the remote controller. When the cell phone detects an operation of the user clicking the input control 601, the cell phone may display a GUI as shown in (d) of fig. 6.
In one embodiment, when the mobile phone receives a broadcast message sent by the smart television, the mobile phone may establish a connection with the smart television according to the communication address of the smart television carried in the broadcast message.
Referring to the GUI shown in (d) of fig. 6, the GUI is another display interface of the remote controller application of the mobile phone. When the mobile phone detects that the user clicks the input control 601, the mobile phone may call out the input method of the mobile phone. When the mobile phone detects that the user inputs the text content "movie 1" in the text input box 602 and clicks the control 603, the mobile phone may send the text content to the smart television.
It should be understood that, in the embodiment of the present application, when the mobile phone receives the broadcast message sent by the smart television, the mobile phone may directly display the GUI as shown in (d) in fig. 6. That is to say, when the mobile phone receives the broadcast message sent by the smart television, the color of the keys on the display interface changes from gray to black, and the mobile phone can automatically call the input method of the mobile phone.
Referring to the GUI shown in (e) of fig. 6, the GUI is another display interface of the remote controller application of the cellular phone. After the mobile phone receives the broadcast message, the mobile phone can detect the voice content input by the user. As shown in (e) in fig. 6, the user utters the voice content "movie 1". After the mobile phone detects the voice content of the user, the text content 'movie 1' corresponding to the voice content can be determined, so that the mobile phone sends the text content to the smart television.
In one embodiment, after the mobile phone receives the broadcast message sent by the smart television, the color of the keys on the display interface of the remote controller application changes from gray to black, and meanwhile, the mobile phone can prompt the user to select text input or voice input. If the user selects text input, the mobile phone can call an input method, so that the user can input text contents through the input method; alternatively, if the user selects voice input, the handset may begin to detect the voice content of the user input.
In one embodiment, the mobile phone may send the content in the text entry box 602 to the smart tv in real time, so that the text content in the text entry box of the smart tv is synchronized with the content in the text entry box 602 of the mobile phone.
After receiving the text content sent by the mobile phone, the smart television can display the text content (e.g., "movie 1") in an input box of the smart television. Also, the smart tv may display information (e.g., genre, director, etc.) corresponding to movie 1.
In the embodiment of the application, the mobile phone and the smart television do not need to be connected or bound in advance; instead, an input relationship is temporarily established through dynamic matching when the smart television needs input. When the smart television needs to perform input, the user can pick up any nearby device (for example, a mobile phone or a Pad) to perform input, which improves the user's convenience in text input and improves the user experience. Meanwhile, the mobile phone starts monitoring only after the remote controller application is started, and after the broadcast message is detected, the change in the color of the controls reminds the user that the mobile phone can be used as an input device. Before the user initiates an input operation on the mobile phone, the mobile phone does not generate any prompt information that might disturb the user, which avoids interference and thereby improves the user experience.
FIG. 7 is another set of GUIs provided by embodiments of the present application.
Referring to the GUI shown in (a) of fig. 7, the GUI is a screen lock interface of a mobile phone. When the mobile phone detects a preset operation of the user on the screen locking interface (for example, the mobile phone detects that the user draws an "S" on the screen locking interface), the mobile phone may start to monitor a broadcast message sent by the peripheral device. When the handset detects a broadcast message transmitted by the smart tv, the handset may display a GUI as shown in (b) of fig. 7.
In one embodiment, the trigger condition for the mobile phone to start monitoring broadcast messages sent by peripheral devices may be that the mobile phone detects the user drawing a pattern of a preset shape on the currently displayed interface; or that the mobile phone detects an air gesture on the current interface; or that the mobile phone detects an operation of the user pressing a physical key (e.g., a volume key or the power key) of the mobile phone; or that the mobile phone detects both a preset gesture of the user on the current interface and an operation of pressing a physical key.
Referring to the GUI shown in (b) of fig. 7, the GUI is another lock screen interface of the mobile phone. In response to receiving a broadcast message sent by the smart tv, the handset may display a text entry icon 701 on the lock screen interface. When the mobile phone detects an operation of the user clicking the icon 701, a GUI as shown in (c) in fig. 7 may be displayed, or a GUI as shown in (d) in fig. 7 may be displayed.
Referring to the GUI shown in (c) of fig. 7, the GUI is a display interface after the mobile phone detects that the user has clicked the icon 701. The mobile phone may display the input method on the lock screen interface, and the user may input the text content in the text input box 703. When the mobile phone detects that the user inputs text content (for example, "movie 1") in the text input box 703 and clicks the control 702, the mobile phone may transmit the text content to the smart television. The smart tv may display a GUI as shown in (e) of fig. 3.
Referring to the GUI shown in (d) of fig. 7, the GUI is another display interface after the mobile phone detects that the user has clicked the icon 701. When the operation of the user clicking the icon 701 is detected, the mobile phone may start the camera to collect the user's face information. If the face information collected by the camera matches the face information preset in the mobile phone, the mobile phone may first be unlocked to enter the non-screen-locking interface, and the input method of the mobile phone is automatically called out on the non-screen-locking interface. Alternatively, when the mobile phone detects that the user clicks the icon 701, the mobile phone may collect the user's fingerprint information. If the collected fingerprint information matches the fingerprint information preset in the mobile phone, the mobile phone may first be unlocked to enter the non-screen-locking interface, and the input method of the mobile phone is automatically called out on the non-screen-locking interface. When the mobile phone detects that the user inputs text content (for example, "movie 1") in the text input box 703 and clicks the control 702, the mobile phone may send the text content to the smart television. The smart television may display a GUI as shown in (e) of fig. 3.

In one embodiment, after detecting the preset operation of the user, if a broadcast message sent by the smart television is received, the mobile phone may directly display a GUI as shown in (c) or (d) of fig. 7. That is to say, in response to receiving the broadcast message sent by the smart television, the mobile phone may directly call out the input method on the screen locking interface, or the mobile phone may call out the input method after entering the non-screen-locking interface.
In one embodiment, the mobile phone may also transmit the text content input by the user to the smart television in real time. For example, when the mobile phone detects that the user has input the text content "mov" in the text input box 703, the mobile phone may send that text to the smart television, so that the text content "mov" is displayed in the text input box of the smart television. When the mobile phone detects that the user then inputs "ie" in the text input box 703, the mobile phone may continue to send "ie" to the smart television, so that the text content "movie" is displayed in the text input box of the smart television. When the mobile phone detects that the user then inputs "1" in the text input box 703, the mobile phone may continue to send "1" to the smart television, so that the text content "movie 1" is displayed in the text input box of the smart television.
Fig. 8 shows a system architecture diagram provided in an embodiment of the present application. The system comprises a device A and a device B, where device A is the input device (for example, the mobile phone in figs. 3 to 7) and device B is the device on which text is to be input (for example, the smart television in figs. 3 and 5).
Device B detects that the text entry box of device B gets focus, at which time device B enters an input state.
After receiving the information that device B has entered the input state, the input management module 810 of device B notifies the input state sending module 820 of device B to send a broadcast message to surrounding devices, where the broadcast message is used to indicate that device B needs to perform text input; at the same time, it notifies the input content receiving module 830 of device B to enter the input-content receiving state.
The input state transmission module 820 of the device B transmits the broadcast message to the peripheral devices after receiving the above instruction. The input content receiving module 830 of the device B starts to monitor the message containing the input content sent by the peripheral device after receiving the instruction to enter the input content receiving state.
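The flow in which device B's input management module coordinates the sending and receiving modules might be sketched as follows. The module numbers follow the text, but the method names, message format, and control flow are assumptions for illustration only.

```python
# Illustrative sketch of device B's reaction when its text input box
# gains focus (module names follow the text; the logic is assumed).

class DeviceB:
    def __init__(self, input_state_sender):
        self.input_state_sender = input_state_sender  # stands in for module 820
        self.receiving = False                        # state of module 830

    def on_text_box_focused(self):
        # Input management module (810): notify module 820 to broadcast,
        # and put module 830 into the input-content receiving state.
        self.input_state_sender(b"NEEDS_TEXT_INPUT")
        self.receiving = True

    def on_input_content(self, text):
        # Module 830 only accepts input content while in the receiving state.
        return text if self.receiving else None
```

The point of the sketch is the ordering: module 830 starts listening for input content only after module 810 has been told that the text box gained focus.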
Illustratively, the broadcast message may be a Bluetooth Low Energy (BLE) packet, and the BLE packet may carry indication information for indicating that device B needs to perform text input. The BLE packet includes a Protocol Data Unit (PDU), and the indication information may be carried in a service data field (service data) in the PDU, or may be carried in a vendor-specific data field (vendor specific data) in the PDU. For example, the payload (payload) of the service data field may include a plurality of bits, among which there are extensible bits. Device A and device B may agree in advance on the meaning of a certain extensible bit. When that extensible bit is 1, device A can learn that device B needs to perform text input.
In an embodiment, the broadcast message may also carry a Media Access Control (MAC) address of the device B. For example, if the broadcast message is a BLE packet, the MAC address of the device B may be carried in an access address (access address) field in the BLE packet.
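As a rough illustration of the extensible-bit scheme described above, the following sketch encodes and decodes a "needs text input" flag inside a BLE service-data AD structure. The 16-bit service UUID, the byte offsets, and the choice of bit are all assumptions; the patent only specifies that the two devices agree on the meaning of some extensible bit in advance.

```python
# Hedged sketch: the "needs text input" indication as an agreed-upon bit
# in a BLE advertisement's service-data payload. UUID and layout assumed.

NEEDS_TEXT_INPUT_BIT = 0x01   # assumed agreed-upon extensible bit
SERVICE_DATA_AD_TYPE = 0x16   # BLE AD type "Service Data - 16-bit UUID"
ASSUMED_UUID = 0xFDEE         # placeholder 16-bit service UUID

def build_service_data(needs_input: bool) -> bytes:
    """Device B: build one AD structure [len][type][UUID lo][UUID hi][flags]."""
    flags = NEEDS_TEXT_INPUT_BIT if needs_input else 0x00
    body = (bytes([SERVICE_DATA_AD_TYPE])
            + ASSUMED_UUID.to_bytes(2, "little")
            + bytes([flags]))
    return bytes([len(body)]) + body

def needs_text_input(ad: bytes) -> bool:
    """Device A: test the agreed bit in a received service-data structure."""
    if len(ad) < 5 or ad[1] != SERVICE_DATA_AD_TYPE:
        return False
    return bool(ad[4] & NEEDS_TEXT_INPUT_BIT)
```

In a real Android implementation the payload would be handed to the platform's BLE advertising APIs rather than assembled byte by byte, but the agreed-bit idea is the same.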
For example, the broadcast message may alternatively be a User Datagram Protocol (UDP) packet, where the UDP packet may carry the indication information used to indicate that device B needs to perform text input. The UDP packet includes the data portion of an IP datagram, and the data portion of the IP datagram may include extensible bits. Device A and device B may agree in advance on the meaning of a certain extensible bit. When that extensible bit is set to 1, device A knows that device B needs to perform text input.
In one embodiment, the UDP packet may carry an IP address and a port number of device B (including a source port number and a destination port number, where the source port number refers to the port number used by device B when sending data, and the destination port number refers to the port number used by device B when receiving data), and the IP address and port number of device B may be carried in the UDP header in the data portion of the IP datagram. Alternatively, the UDP packet may carry only an IP address and no port number.
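A minimal sketch of such a UDP broadcast (the discovery port number and the JSON payload layout are illustrative assumptions; the embodiment only requires that agreed bits carry the indication and, optionally, the address information):

```python
import json
import socket

DISCOVERY_PORT = 50505  # hypothetical discovery port agreed between devices

def build_broadcast_payload(ip, recv_port=None):
    """Device B side: the indication plus its IP, and optionally its port."""
    msg = {"needs_text_input": True, "ip": ip}
    if recv_port is not None:
        msg["port"] = recv_port  # omitted in the IP-address-only variant
    return json.dumps(msg).encode("utf-8")

def send_broadcast(payload):
    """Send the payload to the local broadcast address."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(payload, ("255.255.255.255", DISCOVERY_PORT))
```

Whether device A later replies over TCP or UDP depends on whether the `port` field was present, as described in the following paragraphs.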
The input state receiving module 850 of device A may be in a broadcast message listening state at all times. When the input state receiving module 850 receives the broadcast message sent by the input state sending module 820 of device B, it notifies the input management module 840 of device A of the event that device B needs text input. The input management module 840 of device A notifies the display screen to display a reminder box (as shown in (b) of fig. 3), or displays a text input icon to prompt the user (as shown in (a) of fig. 4).
After detecting that the user clicks the control 303, device A opens the remote controller application. When device A detects that the user clicks the input control 304, the input method is displayed. Alternatively, after device A detects the user clicking the icon 401 on the lock screen interface, the input method is displayed on the lock screen interface.
The input management module 840 of the device a may transmit the text content to the input content transmitting module 860 of the device a after acquiring the text content input by the user in the text input box.
For example, if the BLE data packet sent by device B to device A carries the MAC address of device B, device A may establish a Bluetooth connection with device B after obtaining the MAC address. Device A may then send the text content to device B via BLE data packets. The text content may be carried in the service data field or the vendor specific data field in the PDU. For example, the payload of the service data field may include a plurality of bits, among which there are extensible bits. Device A may encode the user-input text content it detects using GBK, ISO 8859-1, or a Unicode encoding (e.g., UTF-8 or UTF-16), and carry the encoded information on one or more extensible bits. After receiving the BLE data packet sent by device A, device B may decode the information on the corresponding bits, thereby obtaining the text content input by the user on device A.
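The encode-and-carry step can be sketched as follows; splitting into 20-byte chunks is an assumption added here to reflect the small per-packet budget of BLE payloads (real MTUs vary), not something this embodiment prescribes:

```python
CHUNK = 20  # illustrative per-packet byte budget for the extensible bits

def pack_text(text, encoding="utf-8"):
    """Device A side: encode the user's text (GBK or UTF-16 would also work)
    and split it across as many packets as needed."""
    raw = text.encode(encoding)
    return [raw[i:i + CHUNK] for i in range(0, len(raw), CHUNK)]

def unpack_text(chunks, encoding="utf-8"):
    """Device B side: reassemble the chunks and decode the text."""
    return b"".join(chunks).decode(encoding)
```

Both sides must, of course, agree on the encoding in advance, just as they agree on which extensible bits carry the payload.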
Illustratively, if the UDP packet carries the IP address and the destination port number of device B, device A may establish a Transmission Control Protocol (TCP) connection with device B via the IP address and the destination port number. Device A may then send the user-input text content it detects to the destination port over the TCP connection.
For example, if the UDP packet carries the IP address of device B but not the destination port number, device A does not establish a TCP connection with device B after acquiring the IP address of device B. Instead, device A may send a UDP packet to device B, where the UDP packet carries the text content input by the user as detected on device A. Illustratively, the text content may be carried in the data portion of the IP datagram in the UDP packet. The data portion includes extensible fields, and device A and device B may agree on certain extensible bits to carry the text content. Device A may encode the user-input text content it detects using an encoding such as GBK, ISO 8859-1, or Unicode, and carry the encoded information on one or more extensible bits. After receiving the UDP packet sent by device A, device B may decode the information on the corresponding bits, thereby obtaining the text content input by the user on device A.
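The two delivery paths above can be sketched in one helper (the fallback UDP port is a hypothetical value; in the embodiment, the UDP case simply reuses whatever addressing the devices agreed on):

```python
import socket

FALLBACK_UDP_PORT = 50506  # hypothetical; used only when no port was broadcast

def send_text(text, ip, port=None):
    """Device A side: use TCP when the destination port number is known,
    otherwise send a single UDP datagram carrying the encoded text."""
    raw = text.encode("utf-8")
    if port is not None:
        with socket.create_connection((ip, port), timeout=5) as conn:
            conn.sendall(raw)  # TCP path: IP and destination port both known
    else:
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.sendto(raw, (ip, FALLBACK_UDP_PORT))  # UDP path: IP only
```

On device B, the input content receiving module would hold the matching listening socket and hand the decoded text to the input management module 810.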
The input content receiving module 830 in the input content receiving state receives the text content sent by the input content sending module 860, and then sends the text content to the input management module 810 of the device B. The input management module 810 of device B displays the received text content in the text input box.
The above describes the internal implementation procedure for the GUIs shown in fig. 3 and 4, in which the input state receiving module 850 of device A may be in the broadcast message listening state at all times. The internal implementation process for the GUIs shown in fig. 5 and 6 is described below. It differs from that of fig. 3 and 4 in that the input state receiving module 850 of device A starts listening for broadcast messages only when device A detects a preset operation of the user.
Device A may detect that the user opens the remote controller application, whereupon the input management module 840 notifies the input state receiving module 850 to enter the broadcast message listening state and start listening for broadcast messages. Alternatively, when device A detects that the user clicks the input control, the input method is displayed; upon detecting that the mobile phone displays the input method, the input management module 840 notifies the input state receiving module 850 to enter the broadcast message listening state and start listening for broadcast messages.
It should be understood that device A entering the broadcast message listening state after detecting that the user opens the remote controller application is merely an example, and the embodiment of the present application is not limited thereto. Device A may also enter the broadcast message listening state upon detecting that another application (e.g., App1) is opened. For example, after detecting that the user clicks the icon of App1, App1 at the application layer sends a label corresponding to App1 (e.g., a process identifier (PID)) and the process name corresponding to App1 to a system service at the application framework layer, and the system service may determine that App1 has started according to the label and the process name. Upon determining that App1 has started, the system service may trigger the input state receiving module 850 (e.g., the wireless communication module in fig. 1) of device A to enter the broadcast message listening state.
In one embodiment, the device a may also enter the broadcast message listening state after detecting a preset operation of the user. Illustratively, the preset operation may be an operation of double-clicking, long-pressing, folding or unfolding a screen, or the like. The input state receiving module 850 may be triggered to enter a broadcast message listening state when the device a detects a preset operation of the user. When the device a receives the broadcast message transmitted by the surrounding devices, the input method may be automatically displayed.
The input state receiving module 850 of the device a receives the broadcast message transmitted by the input state transmitting module 820 of the device B, and knows that the device B needs text input. It should be understood that, the manner in which the device B sends the broadcast message may refer to the description in the foregoing embodiments, and for brevity, the description is omitted here.
The input state receiving module 850 of the device a notifies the input management module 840 of the device a of an event that the device B requires text input; an input management module 840 of the device A acquires text content input by a user through an input method service; the input management module 840 of device a calls the input content transmission module 860 of device a to transmit the text content input by the user to device B. It should be understood that, the manner in which the device a sends the user input content to the device B may refer to the description in the above embodiments, and for brevity, the description is omitted here.
The input content receiving module 830 of the device B in the input content receiving state receives the text content transmitted by the input content transmitting module 860 of the device a, and then transmits the text content to the input management module 810 of the device B. The input management module 810 of device B displays the received text content in the text input box.
Through the above flow, cross-device text input to device B through device A is completed.
In the above process, the communication between device A and device B may use Bluetooth communication or local area network communication as required. When device B sends the broadcast message, it may select either Bluetooth or the local area network, or send the broadcast message in both ways.
When device A sends the text content input by the user to device B, it may select the fastest available mode according to whether it has been Bluetooth paired with device B or is in the same local area network as device B.
For example, if the BLE data packet received by device A from device B includes the MAC address of device B, device A may determine, according to that MAC address, whether it has previously been Bluetooth paired with device B. If device A and device B have been Bluetooth paired, they may establish a Bluetooth connection directly. After establishing the Bluetooth connection, device A may send BLE data packets to device B, where the BLE data packets carry the text content input by the user.
For example, if the broadcast message received by device A includes the IP address of device B, device A may determine, according to that IP address, whether device A and device B are in the same local area network. If device A determines that they are in the same local area network and the UDP packet also carries the destination port number of device B, device A may establish a TCP connection with device B. After establishing the TCP connection, device A may send the text content to device B over the TCP connection. Alternatively, if the UDP packet carries only the IP address of device B and not the destination port number, device A may send a UDP packet to device B, where the UDP packet carries the text content input by the user.
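The selection logic above amounts to a small decision function; the preference order used here (same-LAN TCP, then same-LAN UDP, then Bluetooth) is an assumption about which mode is fastest, since the embodiment only says the fastest mode is chosen:

```python
import ipaddress

def choose_transport(paired_macs, local_network, b_mac=None, b_ip=None, b_port=None):
    """Pick how device A reaches device B from what the broadcast carried."""
    same_lan = b_ip is not None and ipaddress.ip_address(b_ip) in local_network
    if same_lan and b_port is not None:
        return "tcp"        # IP and destination port known: TCP connection
    if same_lan:
        return "udp"        # IP only: send a UDP packet directly
    if b_mac is not None and b_mac in paired_macs:
        return "bluetooth"  # previously paired: BLE data packets
    return None             # no usable path to device B
```

`paired_macs` would come from device A's pairing history and `local_network` from its own network configuration.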
In one embodiment, to ensure that the user-input text content sent by device A to device B is not leaked, device A may encrypt the text content with an encryption key of device B before sending it. For example, device B may store a public key and a private key, and device B may carry its public key in the broadcast message. Illustratively, the public key of device B may be carried in the service data field or vendor specific field in the BLE data packet. When sending the text content to device B, device A may encrypt it using the public key of device B. Illustratively, if device A and device B have established a TCP connection, device A may send the public-key-encrypted text content to device B through the TCP connection; or, if device A has not established a TCP connection with device B, device A may send a UDP data packet to device B, where the data portion of the IP datagram in the UDP data packet carries the public-key-encrypted text content; alternatively, device A may send BLE data packets to device B, where the service data field or vendor specific field in the BLE data packets carries the public-key-encrypted text content. After receiving the public-key-encrypted text content, device B may decrypt it with its private key, thereby obtaining the text content sent by device A. Other devices may also intercept the public-key-encrypted text content, but cannot decrypt it because they do not hold the private key of device B. This ensures that the text content sent by device A to device B is not leaked.
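The public/private key flow can be illustrated with a deliberately tiny textbook RSA (toy primes, no padding — purely to show who encrypts with which key; it is not usable as real security, and a production device would use a vetted cryptography library):

```python
# Device B's key pair, generated from toy primes (never do this for real).
P, Q = 61, 53
N = P * Q                           # modulus 3233, part of both keys
E = 17                              # public exponent: (E, N) is broadcast by device B
D = pow(E, -1, (P - 1) * (Q - 1))   # private exponent: (D, N) stays on device B

def encrypt_text(text):
    """Device A side: encrypt each UTF-8 byte with device B's public key."""
    return [pow(b, E, N) for b in text.encode("utf-8")]

def decrypt_text(cipher):
    """Device B side: recover the text with its private key."""
    return bytes(pow(c, D, N) for c in cipher).decode("utf-8")
```

An eavesdropper that captured the broadcast learns only (E, N) and the ciphertext, which matches the last sentences of the embodiment above.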
In the embodiment of the application, device A and device B do not need to be actively paired in advance or to establish a connection through a network beforehand; when device B needs text input, device A can dynamically acquire the relevant information and assist device B in completing text content input.
Fig. 9 shows a schematic flow chart of a method 900 of text input provided by an embodiment of the present application. The method 900 may be performed by a first electronic device and a second electronic device, the method 900 comprising:
s901, the first electronic device displays a text input interface through a display screen, wherein the text input interface comprises a text input box.
Illustratively, the display interface of the smart tv shown in fig. 3 (a) is a text input interface, and the text input interface includes a text input box 301.
S902, the first electronic device sends a first message in response to the text input interface being displayed, wherein the first message is used for indicating that the first electronic device needs to perform text input.
In one embodiment, the first electronic device responding to displaying the text input interface includes: the first electronic device responding to its current focus being in a text input box of the text input interface; alternatively, the first electronic device responding to its current focus being on a key of an input method displayed in the text input interface.
Illustratively, the display interface of the smart tv shown in fig. 3 (a) is a text input interface, and the current focus of the smart tv is a text input box 301. In response to the current focus of the smart tv being the text entry box 301, the smart tv sends a first message.
In one embodiment, the first electronic device, in response to displaying the text input interface, sends a first message comprising: the first electronic device sends the first message to one or more devices in response to displaying the text input interface.
Illustratively, the one or more devices and the first electronic device are devices under the same account (e.g., hua is account); or the one or more devices and the first electronic device are accounts in the same family group.
For example, the one or more devices may be devices that have completed bluetooth pairing with the first electronic device; alternatively, the one or more devices may be devices that are under the same Wi-Fi as the first electronic device.
In one embodiment, the first electronic device, in response to displaying the text input interface, sends a first message comprising: the first electronic device sends a broadcast message to surrounding devices in response to displaying the text input interface, wherein the broadcast message is used for indicating that the first electronic device needs to perform text input.
It should be understood that, for brevity, details of the manner in which the first electronic device sends the broadcast message may refer to the description in the foregoing embodiments.
S903, the second electronic device detects a preset operation of the user and listens for the first message.
It should be understood that, in the embodiment of the present application, there is no particular limitation on the sequence of detecting the operation of the user by the second electronic device and monitoring the first message, and the second electronic device may detect the operation of the user first and then receive the first message; or, the second electronic device may also receive the first message first and then detect a preset operation of the user.
In one embodiment, the preset operation may be an operation of the user opening an application; alternatively, the preset operation may be a preset gesture of the user (for example, the user drawing a preset pattern on the display screen of the second electronic device, or an air gesture of the user); alternatively, the preset operation may be the user pressing a certain physical key; alternatively, the preset operation may be a combination of a preset gesture and pressing a physical key.
In one embodiment, the preset operation may also be an operation of the user to pick up the second electronic device.
Illustratively, a gyro sensor (e.g., the gyro sensor 180B in fig. 1) is included in the second electronic device, and the second electronic device may detect whether the user picks up the second electronic device through the gyro sensor.
In an embodiment, the preset operation may also be an operation of a user to unlock the second electronic device.
In one embodiment, the second electronic device may detect whether the first electronic device is within a preset angle range of the second electronic device while detecting a preset operation of the user and monitoring the first message.
Illustratively, the second electronic device may be an AOA (angle of arrival) enabled device. For example, the second electronic device may include a compass, a Bluetooth/Wi-Fi antenna array, and the like. The Bluetooth/Wi-Fi antenna array of the second electronic device may receive the wireless signal of the first electronic device and calculate the position of the first electronic device according to equations (1) and (2):

Δφ = (2π·d·cos θ)/λ (1)

θ = arccos(λ·Δφ/(2π·d)) (2)

where d is the spacing between adjacent antennas in the Bluetooth/Wi-Fi antenna array of the second electronic device, λ is the wavelength of the Bluetooth signal (e.g., the first message) transmitted by the first electronic device, Δφ is the phase difference between the signals received by adjacent antennas of the array, and θ is the angle of arrival.
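A numerical sketch of the angle-of-arrival relation θ = arccos(λ·Δφ/(2π·d)); the half-wavelength antenna spacing is an illustrative assumption, not something this embodiment fixes:

```python
import math

def angle_of_arrival(delta_phi, spacing, wavelength):
    """theta = arccos(lambda * delta_phi / (2 * pi * d)), with delta_phi the
    inter-antenna phase difference in radians, spacing the antenna spacing d,
    and wavelength lambda, both in meters."""
    return math.acos(wavelength * delta_phi / (2 * math.pi * spacing))

# 2.4 GHz Bluetooth: wavelength ~= 0.125 m; assume half-wavelength spacing.
SPACING, WAVELENGTH = 0.0625, 0.125
```

With this relation, a zero phase difference corresponds to a signal arriving broadside to the array (θ = 90°), which the second electronic device can compare against its preset angle range.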
In one embodiment, the detecting, by the second electronic device, the preset operation of the user and the listening to the first message includes: the second electronic equipment detects preset operation of a user; and responding to the detection of the preset operation of the user, and the second electronic equipment starts to monitor the first message.
Illustratively, as shown in fig. 5 (c), when the handset detects that the user opens the remote control application, the handset starts to listen to the first message.
Illustratively, as shown in fig. 7 (a), when the mobile phone detects an operation of drawing "S" on the display interface by the user, the mobile phone starts to listen to the first message.
It should be appreciated that, considering that there may be a certain time interval between the first electronic device displaying the text input interface and the second electronic device detecting the preset gesture of the user, the first electronic device may send a plurality of first messages within a first preset duration (e.g., 1 minute) while displaying the text input interface. This ensures that the second electronic device can receive a first message after detecting the preset operation of the user.
In one embodiment, the detecting, by the second electronic device, the preset operation of the user and the listening to the first message includes: the second electronic equipment monitors the first message; in response to receiving the first message, the second electronic device detects a preset operation of a user.
Illustratively, as shown in fig. 5 (c), the handset receives the first message before the handset detects that the user opens the remote control application. But since the handset has not detected the user's preset operation, the handset may first save the first message without any prompt for text input to the user. As shown in (d) in fig. 5, when the mobile phone detects an operation of opening the remote controller application by the user, the mobile phone may directly display the reminding box 503, so as to prompt the user to use the mobile phone to perform text input on the smart television; alternatively, when the mobile phone detects an operation of the user to open the remote controller application, the mobile phone may directly call out the input method, displaying a GUI as shown in (e) in fig. 5.
Illustratively, as shown in fig. 7 (a), the mobile phone receives the first message before the mobile phone detects an operation of drawing "S" on the lock screen interface by the user. However, since the mobile phone has not detected the preset operation of the user, the mobile phone may first save the first message without any prompt for text input to the user. When the mobile phone detects an operation of drawing "S" by the user, the mobile phone may directly display a GUI as shown in (b) of fig. 7; alternatively, the handset may directly display the GUI as shown in (c) of fig. 7; alternatively, the mobile phone displays a GUI as shown in (d) of fig. 7 after entering the lock screen interface.
It should be understood that the second electronic device may listen for the first message at all times, considering that there may be a certain time interval from the first electronic device displaying the text input interface to the second electronic device detecting the preset gesture of the user, and the second electronic device may detect the preset gesture only after receiving the first message. For this case, the first electronic device may send the plurality of first messages within a second preset time period (e.g., 5 seconds) upon detecting that the text input interface is displayed. The second electronic device may start detecting the preset operation of the user after receiving the first message.
S904, in response to detecting the preset operation of the user and receiving the first message, the second electronic device detects the content input by the user.
It should be understood that, as introduced above, the second electronic device may start listening for the first message in response to detecting the preset operation of the user; alternatively, the second electronic device may start detecting the preset operation of the user in response to receiving the first message. In this embodiment of the application, there may also be no association between the second electronic device detecting the preset operation and listening for the first message: as long as the second electronic device detects the preset operation of the user and receives the first message, it may detect the input of the user.
In one embodiment, in response to detecting the preset operation of the user and receiving the first message, the second electronic device detecting the content input by the user includes: in response to detecting the preset operation of the user and receiving the first message, where the time interval between the two is less than a preset time interval, the second electronic device detects the content input by the user.
In one embodiment, in response to detecting the preset operation of the user and receiving the first message, the second electronic device detects content input by the user, and the method includes: and in response to detecting the preset operation of the user, receiving the first message and determining that the first electronic equipment is within the preset angle range of the second electronic equipment, the second electronic equipment detects the content input by the user.
In one embodiment, in response to detecting the preset operation of the user and receiving the first message, the second electronic device detects content input by the user, and includes: and if the second electronic equipment detects the preset operation of the user within a third preset time after the first message is received, the second electronic equipment detects the input of the user.
In this embodiment, if the second electronic device does not detect the preset operation of the user for a long time after receiving the first message, the user likely does not intend to use the second electronic device to perform text input on the first electronic device. The second electronic device may therefore ignore the first message after the third preset duration. That is, if the second electronic device detects the preset operation of the user after the third preset duration, it does not give the user any prompt for text input, nor does it call up the input method. This avoids interfering with the user when the user performs the preset operation on the second electronic device without intending to use it for text input.
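The gating described in S903–S904 and in this embodiment reduces to a small predicate; the 60-second default below stands in for the unspecified third preset duration:

```python
def should_start_input(msg_time=None, op_time=None, window=60.0):
    """Call up the input method only when both events occurred and, when the
    first message arrived first, the preset operation followed within the
    third preset duration (`window` seconds)."""
    if msg_time is None or op_time is None:
        return False          # one of the two triggers is still missing
    if op_time <= msg_time:
        return True           # operation first, then message: always accepted
    return op_time - msg_time <= window  # message first: must not be stale
```

The variant with a preset time interval in either direction would simply replace the last two branches with `abs(op_time - msg_time) <= window`.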
Illustratively, as shown in fig. 5 (e), when the mobile phone detects that the user opens the remote control application and receives the first message, the mobile phone may call up the input method, so that the mobile phone starts to detect the content input by the user.
For example, as shown in (c) in fig. 7, when the mobile phone detects that the user draws an "S" on the lock screen interface and receives the first message, the mobile phone may call up the input method, so that the mobile phone starts to detect the content input by the user.
In one embodiment, the first message sent by the first electronic device may be received by a plurality of electronic devices (for example, the plurality of electronic devices includes a second electronic device, a third electronic device, and a fourth electronic device). If the second electronic device and the third electronic device request to establish a connection with the first electronic device within a fourth preset time period after the first electronic device sends the first message, the first electronic device may establish connections with both, so that the second electronic device and the third electronic device can call up the input method and detect the content input by the user.
After a fourth preset duration, if the fourth electronic device also receives the first message, the fourth electronic device requests to establish a connection with the first electronic device. At this point, the first electronic device may reject the request from the fourth electronic device, so that no text input prompt is displayed on the fourth electronic device or the fourth electronic device does not call up the input method. This also helps to avoid interference to the user from the prompt or input method displayed on the electronic device that receives the first message after a period of time.
In one embodiment, the first electronic device may also establish a connection only with the first device that requests to establish a connection (e.g., the second electronic device), and ignore the requests of other electronic devices. This helps avoid interference when input could come through multiple devices.
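Both acceptance policies above can be sketched as a gate kept by the first electronic device; the 30-second window stands in for the unspecified fourth preset time period:

```python
class ConnectionGate:
    """First electronic device side: accept connection requests that arrive
    within `window` seconds of sending the first message; with first_only
    set, only the earliest requester is accepted."""
    def __init__(self, broadcast_time, window=30.0, first_only=False):
        self.t0 = broadcast_time
        self.window = window
        self.first_only = first_only
        self.accepted = []

    def request(self, device_id, now):
        if now - self.t0 > self.window:
            return False  # late requester (e.g. the fourth electronic device)
        if self.first_only and self.accepted:
            return False  # single-device variant: first requester wins
        self.accepted.append(device_id)
        return True
```

A rejected device shows no text input prompt and does not call up the input method, matching the behavior described above.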
S905, in response to detecting that the user inputs first content, the second electronic device sends the first content to the first electronic device.
Illustratively, as shown in fig. 5 (e), when the mobile phone detects an operation of the user inputting the text content "movie 1", the mobile phone may send the text content to the smart TV.
Illustratively, as shown in fig. 5 (f), when the mobile phone detects an operation of the user inputting the voice content "movie 1", the mobile phone may send the voice content to the smart TV.
Illustratively, as shown in fig. 6 (e), when the mobile phone detects an operation of the user inputting the voice content "movie 1", the mobile phone may send the voice content to the smart TV.
For example, as shown in fig. 7 (c), when the mobile phone detects an operation of the user inputting the text content "movie 1", the mobile phone may send the text content to the smart TV.
It should be understood that, in the embodiment of the present application, the mobile phone may send the detected content to the smart television in real time, and for a specific process, reference may be made to the description in the foregoing embodiment, and for brevity, no further description is provided here.
S906, the first electronic equipment displays the text content corresponding to the first content in the text input box.
For example, as shown in (e) in fig. 3, after receiving the text content "movie 1" sent by the mobile phone, the smart television may display the text content in a text input box.
For example, if the smart tv receives the voice content sent by the mobile phone, the smart tv may convert the voice content into text content, and then display the text content in the text input box 301.
In one embodiment, if the second electronic device detects that the content input by the user is voice content, the second electronic device may also convert the voice content into text content and send the text content to the first electronic device, so that the first electronic device displays the corresponding text content in the text input box 301.
In the embodiment of the application, when the first electronic device needs to perform text input, the user can pick up any nearby device (for example, a mobile phone or a Pad) for inputting, which helps improve the convenience of text input and thereby the user experience. Meanwhile, the second electronic device can prompt the user for text input through the reminder box when it detects the preset operation of the user and receives the first message, helping the user clearly understand that the second electronic device can be used as an input device. Before the user performs the preset operation on the second electronic device and the first message is received, the second electronic device generates no prompt message that might disturb the user, thereby avoiding interference and helping improve user experience.
Fig. 10 shows a schematic block diagram of a device 1000 for text input provided by an embodiment of the present application. The apparatus 1000 may be disposed in the second electronic device in fig. 9, where the apparatus 1000 includes: a first detection unit 1010 for detecting a preset operation by a user; a receiving unit 1020, configured to listen to a first message, where the first message is used to indicate that another electronic device needs to perform text input; the second detecting unit 1030, configured to detect content input by the user when the first detecting unit 1010 detects a preset operation of the user and the receiving unit 1020 receives the first message; a sending unit 1040, configured to send the first content to the another electronic device in response to the second detecting unit 1030 detecting an operation of inputting the first content by the user.
Fig. 11 shows a schematic structural diagram of an electronic device 1100 provided by an embodiment of this application. As shown in fig. 11, the electronic device 1100 includes one or more processors 1110 and one or more memories 1120. The one or more memories 1120 store one or more computer programs, and the one or more computer programs include instructions. When the instructions are executed by the one or more processors 1110, the second electronic device (or the mobile phone in the foregoing embodiments) is caused to perform the technical solutions in the foregoing embodiments.
An embodiment of this application provides a text input system, including a first electronic device and a second electronic device, configured to execute the technical solutions of text input in the foregoing embodiments. The implementation principles and technical effects are similar to those of the related method embodiments and are not described again here.
An embodiment of this application provides a computer program product which, when run on a second electronic device, causes the second electronic device (or the mobile phone in the foregoing embodiments) to execute the technical solutions in the foregoing embodiments. The implementation principles and technical effects are similar to those of the related method embodiments and are not described again here.
An embodiment of this application provides a readable storage medium including instructions which, when run on a second electronic device (or the mobile phone in the foregoing embodiments), cause the second electronic device to execute the technical solutions of the foregoing embodiments. The implementation principles and technical effects are similar and are not described again here.
An embodiment of this application provides a chip configured to execute instructions; when the chip runs, it executes the technical solutions in the foregoing embodiments. The implementation principles and technical effects are similar and are not described again here.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a standalone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solutions of this application essentially, or the part contributing to the prior art, may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods in the embodiments of this application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (17)

1. A system, comprising a first electronic device and a second electronic device, wherein
the first electronic device is configured to display a text input interface through a display screen, wherein the text input interface comprises a text input box;
the first electronic device is further configured to send a first message in response to displaying the text input interface, wherein the first message is used to indicate that the first electronic device needs to perform text input;
the second electronic device is configured to detect a preset operation of a user and to listen for the first message;
the second electronic device is further configured to detect content input by the user in response to detecting the preset operation and receiving the first message;
the second electronic device is further configured to send first content to the first electronic device in response to detecting an operation of the user inputting the first content; and
the first electronic device is further configured to display text content corresponding to the first content in the text input box.
2. The system of claim 1, wherein the second electronic device is specifically configured to:
display an input method in response to detecting the preset operation and receiving the first message; and
detect text content input by the user through the input method.
3. The system of claim 1, wherein the second electronic device is specifically configured to:
detect voice content input by the user in response to detecting the preset operation and receiving the first message; and
send first voice content to the first electronic device in response to detecting that the user inputs the first voice content;
wherein the first electronic device is specifically configured to:
determine the text content corresponding to the first voice content; and
display the text content in the text input box.
4. The system of any one of claims 1 to 3, wherein the second electronic device is further configured to:
display first prompt information before detecting the content input by the user, wherein the first prompt information is used to prompt that the second electronic device is a device capable of inputting to the first electronic device.
5. The system of any one of claims 1 to 4, wherein the first electronic device is further configured to:
display second prompt information through the display screen before displaying the text content in the text input box, wherein the second prompt information is used to prompt the user to input to the first electronic device through the second electronic device.
6. The system of any one of claims 1 to 5, wherein the second electronic device is specifically configured to:
detect the content input by the user when detecting an operation of the user starting a first application program and receiving the first message.
7. The system of claim 6, wherein the first application is a remote control application.
8. The system according to any one of claims 1 to 7, wherein the first electronic device is a smart television.
9. A method for text input, applied to an electronic device, the method comprising:
detecting, by the electronic device, a preset operation of a user and listening for a first message, wherein the first message is used to indicate that another electronic device needs to perform text input;
detecting, by the electronic device, content input by the user in response to detecting the preset operation of the user and receiving the first message; and
sending, by the electronic device, first content to the other electronic device in response to detecting an operation of the user inputting the first content.
10. The method of claim 9, wherein the detecting, by the electronic device, content input by the user in response to detecting the preset operation of the user and receiving the first message comprises:
displaying, by the electronic device, an input method in response to detecting the preset operation of the user and receiving the first message; and
detecting, by the electronic device, text content input by the user through the input method.
11. The method of claim 9, wherein the detecting, by the electronic device, content input by the user in response to detecting the preset operation of the user and receiving the first message comprises:
detecting, by the electronic device, voice content input by the user in response to detecting the preset operation of the user and receiving the first message.
12. The method of any one of claims 9 to 11, wherein before the electronic device detects the content input by the user, the method further comprises:
displaying, by the electronic device, prompt information, wherein the prompt information is used to prompt the user that the electronic device is a device capable of inputting to the other electronic device.
13. The method of any one of claims 9 to 12, wherein the detecting a preset operation of a user comprises:
detecting, by the electronic device, an operation of the user starting a first application program.
14. The method of claim 13, wherein the first application is a remote control application.
15. The method according to any one of claims 9 to 14, wherein the other electronic device is a smart television.
16. An electronic device, comprising: one or more processors; and one or more memories, wherein the one or more memories store one or more computer programs, the one or more computer programs comprise instructions, and the instructions, when executed by the one or more processors, cause the electronic device to perform the method of text input of any one of claims 9 to 15.
17. A computer readable storage medium comprising computer instructions which, when run on an electronic device, cause the electronic device to perform a method of text input as claimed in any of claims 9 to 15.
CN202011240756.5A 2020-08-13 2020-11-09 Text input method, electronic equipment and system Pending CN114489876A (en)

Priority Applications (6)

Application Number Priority Date Filing Date Title
CN202011240756.5A CN114489876A (en) 2020-11-09 2020-11-09 Text input method, electronic equipment and system
EP20949470.7A EP4187876A4 (en) 2020-08-13 2020-12-31 Method for invoking capabilities of other devices, electronic device, and system
PCT/CN2020/142564 WO2022032979A1 (en) 2020-08-13 2020-12-31 Method for invoking capabilities of other devices, electronic device, and system
CN202080104076.2A CN116171568A (en) 2020-08-13 2020-12-31 Method for calling capabilities of other equipment, electronic equipment and system
US18/041,196 US20230305680A1 (en) 2020-08-13 2020-12-31 Method for invoking capability of another device, electronic device, and system
PCT/CN2021/127888 WO2022095820A1 (en) 2020-11-09 2021-11-01 Text input method, electronic device, and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011240756.5A CN114489876A (en) 2020-11-09 2020-11-09 Text input method, electronic equipment and system

Publications (1)

Publication Number Publication Date
CN114489876A true CN114489876A (en) 2022-05-13

Family

ID=81457576

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011240756.5A Pending CN114489876A (en) 2020-08-13 2020-11-09 Text input method, electronic equipment and system

Country Status (2)

Country Link
CN (1) CN114489876A (en)
WO (1) WO2022095820A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023040848A1 (en) * 2021-09-16 2023-03-23 荣耀终端有限公司 Device control method and apparatus

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101841637B (en) * 2009-03-19 2014-01-01 华为技术有限公司 Method for interacting with set-top box and corresponding device
ES2558759T3 (en) * 2013-04-29 2016-02-08 Swisscom Ag Method; electronic device and system for entering text remotely
CN103634640A (en) * 2013-11-29 2014-03-12 乐视致新电子科技(天津)有限公司 Method and system for controlling voice input of smart television terminal by using mobile terminal equipment
CN109698969A (en) * 2018-12-20 2019-04-30 北京四达时代软件技术股份有限公司 The text entry method and device of TV


Also Published As

Publication number Publication date
WO2022095820A1 (en) 2022-05-12

Similar Documents

Publication Publication Date Title
CN110381197B (en) Method, device and system for processing audio data in many-to-one screen projection
CN113542839B (en) Screen projection method of electronic equipment and electronic equipment
CN111602379B (en) Voice communication method, electronic equipment and system
CN111666119A (en) UI component display method and electronic equipment
CN111316598A (en) Multi-screen interaction method and equipment
CN115599566A (en) Notification message processing method, device, system and computer readable storage medium
WO2022100610A1 (en) Screen projection method and apparatus, and electronic device and computer-readable storage medium
CN114125130B (en) Method for controlling communication service state, terminal device and readable storage medium
CN113496426A (en) Service recommendation method, electronic device and system
CN114173204A (en) Message prompting method, electronic equipment and system
CN114173000B (en) Method, electronic equipment and system for replying message and storage medium
CN112543447A (en) Device discovery method based on address list, audio and video communication method and electronic device
CN114827581A (en) Synchronization delay measuring method, content synchronization method, terminal device, and storage medium
CN114115770A (en) Display control method and related device
CN114722377A (en) Method, electronic device and system for authorization by using other devices
US20240098354A1 (en) Connection establishment method and electronic device
CN114500901A (en) Double-scene video recording method and device and electronic equipment
CN114338913B (en) Fault diagnosis method, electronic device and readable storage medium
CN114006712A (en) Method, electronic equipment and system for acquiring verification code
WO2022095820A1 (en) Text input method, electronic device, and system
EP4293997A1 (en) Display method, electronic device, and system
WO2022152174A9 (en) Screen projection method and electronic device
WO2022152167A1 (en) Network selection method and device
WO2022062902A1 (en) File transfer method and electronic device
CN114254334A (en) Data processing method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination