CN114666441B - Method for calling capabilities of other devices, electronic device, system and storage medium - Google Patents


Info

Publication number
CN114666441B
Authority
CN
China
Prior art keywords
content
electronic device
user
function
request information
Prior art date
Legal status
Active
Application number
CN202011527018.9A
Other languages
Chinese (zh)
Other versions
CN114666441A (en)
Inventor
刘敏
杜仲
丁宁
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN202011527018.9A (CN114666441B)
Priority to US18/041,196 (US20230305680A1)
Priority to EP20949470.7A (EP4187876A4)
Priority to CN202080104076.2A (CN116171568A)
Priority to PCT/CN2020/142564 (WO2022032979A1)
Publication of CN114666441A
Application granted
Publication of CN114666441B


Abstract

The application provides a method, an electronic device, and a system for invoking the capabilities of other devices. The method includes: the source device requests capability information from the destination device; the destination device sends the capability information to the source device; when the source device detects a first operation of a user, it sends first content and first request information to the destination device, where the first request information requests the destination device to process the first content using a first function; the destination device processes the first content using the first function according to the first request information and sends the processing result to the source device; the source device prompts the user with the processing result. In the embodiments of this application, a user can use the functions of another device on one device, which makes the electronic device more intelligent and improves user experience.

Description

Method for calling capabilities of other devices, electronic device, system and storage medium
Technical Field
The present application relates to the field of terminals, and more particularly, to a method, an electronic device, and a system for invoking capabilities of other devices.
Background
Users now own more and more devices, linkage between devices is increasingly common, and technologies such as screen casting and multi-screen interaction are emerging one after another. However, most existing inter-device linkage technologies are limited to interface fusion and file transfer. Users often need to perform tasks that are difficult for a single device; because the capabilities of an individual device are limited, this is inconvenient for the user.
Disclosure of Invention
The application provides a method, an electronic device, and a system for invoking the capabilities of other devices, with which a user can use the functions of another device on one device, making the electronic device more intelligent and improving user experience.
In a first aspect, a system is provided that includes a first electronic device and a second electronic device. The first electronic device is configured to request capability information of the second electronic device; the second electronic device is configured to send the capability information to the first electronic device, where the capability information includes one or more functions, and the one or more functions include a first function; the first electronic device is further configured to, when a first operation of a user is detected, send first content and first request information to the second electronic device, where the first request information is used to request the second electronic device to process the first content using the first function; the second electronic device is further configured to process the first content using the first function according to the first request information and send a processing result of the first content to the first electronic device; and the first electronic device is further configured to prompt the user with the processing result.
In the embodiments of this application, the user can use a function of the second electronic device on the first electronic device, which extends the capability boundary of the first electronic device, so that tasks that are difficult for the first electronic device alone can be completed conveniently and efficiently, improving user experience.
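As a minimal sketch (not the patent's actual interfaces) of the interaction the first aspect describes, the following Kotlin fragment models the capability exchange and the remote invocation; all type and function names (CapabilityInfo, ProcessRequest, SecondDevice, and so on) are illustrative assumptions.

```kotlin
// Hypothetical message types for the capability exchange and invocation described above.
data class CapabilityInfo(val functions: List<String>)               // e.g. ["translate", "imageRecognition"]
data class ProcessRequest(val functionName: String, val content: String)
data class ProcessResult(val functionName: String, val result: String)

// Sketch of the second (destination) electronic device: it advertises its functions
// and processes content on request, without its own interface having to change.
class SecondDevice(private val handlers: Map<String, (String) -> String>) {
    fun capabilities() = CapabilityInfo(handlers.keys.toList())
    fun process(req: ProcessRequest): ProcessResult {
        val handler = handlers[req.functionName]
            ?: error("function ${req.functionName} was not advertised")
        return ProcessResult(req.functionName, handler(req.content))
    }
}

// Sketch of the first (source) electronic device.
class FirstDevice(private val peer: SecondDevice) {
    private var capabilities = CapabilityInfo(emptyList())

    fun requestCapabilities() { capabilities = peer.capabilities() }

    // On the user's first operation: send the first content plus the first request
    // information, then prompt the user with the processing result.
    fun invokeRemote(functionName: String, content: String): String {
        require(functionName in capabilities.functions) { "unknown function $functionName" }
        return peer.process(ProcessRequest(functionName, content)).result
    }
}

fun main() {
    val phone = SecondDevice(mapOf("translate" to { text -> "translated($text)" }))
    val pc = FirstDevice(phone)
    pc.requestCapabilities()
    println(pc.invokeRemote("translate", "你好"))   // -> translated(你好)
}
```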
In some possible implementations, the interface of the second electronic device may not change during a period from receiving the first content and the first request information to sending a result of processing the first content to the first electronic device.
With reference to the first aspect, in certain implementations of the first aspect, the first electronic device is specifically configured to: display a function list upon detecting an operation of the user selecting the first content, where the function list includes the first function; and send the first content and the first request information to the second electronic device upon detecting an operation of the user selecting the first function.
In the embodiments of this application, when the first electronic device detects the operation of the user selecting the first content, it can display a function list that includes the first function of the second electronic device, making it convenient for the user to process the first content using the first function and improving user experience.
With reference to the first aspect, in certain implementations of the first aspect, the first electronic device is specifically configured to: display the function list according to the type of the first content.
In the embodiments of this application, the first electronic device can display the function list according to the type of the first content, which avoids confusing the user with too many functions in the list and improves user experience.
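A sketch of that type-based filtering, assuming a hypothetical ContentType tag on the selected content and a per-function declaration of supported types:

```kotlin
enum class ContentType { TEXT, PICTURE }

// Hypothetical registry entry: each function of the second device declares the
// content types it can process.
data class RemoteFunction(val name: String, val supports: Set<ContentType>)

// Build the function list shown to the user from the type of the first content, so a
// text selection surfaces e.g. word lookup/translation while a picture surfaces
// image recognition/shopping.
fun functionListFor(type: ContentType, all: List<RemoteFunction>): List<RemoteFunction> =
    all.filter { type in it.supports }

fun main() {
    val all = listOf(
        RemoteFunction("wordLookup", setOf(ContentType.TEXT)),
        RemoteFunction("translate", setOf(ContentType.TEXT)),
        RemoteFunction("imageRecognition", setOf(ContentType.PICTURE)),
    )
    println(functionListFor(ContentType.TEXT, all).map { it.name })  // [wordLookup, translate]
}
```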
In some possible implementations, the first electronic device is specifically configured to: display the function list upon detecting an operation of the user selecting the first content and an operation of the user right-clicking the mouse.
In some possible implementations, when the first content is of a text type, the functions in the function list may include word lookup and translation; when the first content is of a picture type, the functions in the function list may include image recognition and shopping.
With reference to the first aspect, in certain implementation manners of the first aspect, the first electronic device is specifically configured to: in response to receiving the capability information, displaying a list of functions, the list of functions including the one or more functions; responsive to detecting a user selection of the first function from the one or more functions, beginning to detect user-selected content; and in response to detecting the operation of selecting the first content by the user, sending the first content and the first request information to the second electronic device.
In this embodiment of the present application, after receiving the capability information sent by the second electronic device, the first electronic device may display a function list, where one or more functions of the second electronic device may be displayed in the function list. The user can select the content to be processed after selecting the first function, so that the user can conveniently process the first content by using the first function, and the user experience is improved.
With reference to the first aspect, in certain implementations of the first aspect, the first electronic device is specifically configured to: send the first content and the first request information to the second electronic device in response to detecting that the user has selected the first content and that no operation of selecting other content is detected within a preset time period after the user selects the first content.
In the embodiments of this application, the first electronic device sends the first content and the first request information to the second electronic device only when it detects that the user has selected the first content and detects no operation of selecting other content within the preset time period, which improves the accuracy with which the first electronic device detects the content selected by the user and improves user experience.
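This "send only if no other content is selected within a preset period" behavior is essentially a debounce. A minimal coroutine-based sketch; the 500 ms window and the send callback are assumptions, not values from the source:

```kotlin
import kotlinx.coroutines.CoroutineScope
import kotlinx.coroutines.Job
import kotlinx.coroutines.delay
import kotlinx.coroutines.launch

// Debounced sender: each new selection cancels the pending one, so the first content
// and the first request information are sent only when no other selection arrives
// within the preset time period.
class SelectionDebouncer(
    private val scope: CoroutineScope,
    private val presetMillis: Long = 500,       // assumed window; the patent leaves it open
    private val send: (String) -> Unit          // sends content + request info to the second device
) {
    private var pending: Job? = null

    fun onContentSelected(content: String) {
        pending?.cancel()
        pending = scope.launch {
            delay(presetMillis)
            send(content)
        }
    }
}
```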
With reference to the first aspect, in certain implementations of the first aspect, the first electronic device is further configured to: and in response to detecting the operation of selecting the second content by the user, sending the second content and second request information to the second electronic device, wherein the second request information is used for requesting the second electronic device to process the second content by using the first function.
In the embodiments of this application, after the user finishes selecting the first content, if the first electronic device then detects an operation of the user selecting second content, it can directly send the second content and the second request information to the second electronic device without the user selecting the first function again, which makes it more convenient for the user to process the second content using the first function and improves user experience.
With reference to the first aspect, in certain implementation manners of the first aspect, the first electronic device is specifically configured to: and transmitting the first content and the first request information to the second electronic device in response to the operation of selecting the first content and clicking the first key by the user, wherein the first key is associated with the first function.
In the embodiments of this application, when the first electronic device detects that the user has selected the first content and clicked the shortcut key, it sends the first content and the first request information to the second electronic device, making it convenient for the user to process the first content using the first function and improving user experience.
In some possible implementations, the first electronic device is further configured to: before the first content and the first request information are sent to the second electronic device, an operation of associating the first function with the first key by the user is detected.
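A sketch of associating the first key with the first function; the key codes are plain strings here and the whole binding table is an assumption:

```kotlin
// Hypothetical binding table: the user's "associate" operation records a mapping from
// a shortcut key to a function name; a later "select content + press key" operation
// resolves the key to the function to request from the second device.
class ShortcutBindings {
    private val bindings = mutableMapOf<String, String>()   // keyCode -> functionName

    fun associate(keyCode: String, functionName: String) {
        bindings[keyCode] = functionName
    }

    // Returns the function to invoke on the selected content, or null if unbound.
    fun functionFor(keyCode: String): String? = bindings[keyCode]
}
```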
With reference to the first aspect, in certain implementations of the first aspect, the account logged in on the first electronic device is associated with the account logged in on the second electronic device.
In a second aspect, there is provided a method of invoking capabilities of other devices, the method being applied to a first electronic device, the method comprising: the first electronic device requests the capability information of the second electronic device; the first electronic equipment receives the capability information sent by the second electronic equipment, wherein the capability information comprises one or more functions, and the one or more functions comprise a first function; when the first electronic equipment detects a first operation of a user, first content and first request information are sent to the second electronic equipment, wherein the first request information is used for requesting the second electronic equipment to process the first content by using the first function; the first electronic equipment receives a processing result of the first content by the second electronic equipment; the first electronic device prompts the processing result to a user.
In the embodiment of the application, the user can use the function of the second electronic device on the first electronic device, and the capability boundary of the first electronic device is expanded, so that the task which is difficult for the first electronic device is conveniently and efficiently completed, and the experience of the user is improved.
With reference to the second aspect, in some implementations of the second aspect, the sending, by the first electronic device, of the first content and the first request information to the second electronic device upon detecting the first operation of the user includes: displaying, by the first electronic device, a function list upon detecting that the user selects the first content, where the function list includes the first function; and sending, by the first electronic device, the first content and the first request information to the second electronic device upon detecting that the user selects the first function.
In the embodiments of this application, when the first electronic device detects the operation of the user selecting the first content, it can display a function list that includes the first function of the second electronic device, making it convenient for the user to process the first content using the first function and improving user experience.
With reference to the second aspect, in certain implementations of the second aspect, the first electronic device displays a list of functions, including: the first electronic device displays the function list according to the type of the first content.
In the embodiments of this application, the first electronic device can display the function list according to the type of the first content, which avoids confusing the user with too many functions in the list and improves user experience.
In some possible implementations, the first electronic device is specifically configured to: display the function list upon detecting an operation of the user selecting the first content and an operation of the user right-clicking the mouse.
In some possible implementations, when the first content is of a text type, the functions in the function list may include word lookup and translation; when the first content is of a picture type, the functions in the function list may include image recognition and shopping.
With reference to the second aspect, in some implementations of the second aspect, the sending, by the first electronic device, the first content and the first request information to the second electronic device when detecting the first operation of the user includes: in response to receiving the capability information, the first electronic device displays a list of functions, the list of functions including the one or more functions; responsive to detecting a user selection of the first function from the one or more functions, the first electronic device begins detecting user-selected content; in response to detecting a user selection of the first content, the first electronic device transmits the first content and the first request information to the second electronic device.
In this embodiment of the present application, after receiving the capability information sent by the second electronic device, the first electronic device may display a function list, where one or more functions of the second electronic device may be displayed in the function list. The user can select the content to be processed after selecting the first function, so that the user can conveniently process the first content by using the first function, and the user experience is improved.
With reference to the second aspect, in some implementations of the second aspect, the sending, by the first electronic device, of the first content and the first request information to the second electronic device in response to detecting that the user selects the first content includes: sending, by the first electronic device, the first content and the first request information to the second electronic device in response to detecting that the user has selected the first content and that no operation of selecting other content is detected within a preset time period after the user selects the first content.
In the embodiments of this application, the first electronic device sends the first content and the first request information to the second electronic device only when it detects that the user has selected the first content and detects no operation of selecting other content within the preset time period, which improves the accuracy with which the first electronic device detects the content selected by the user and improves user experience.
With reference to the second aspect, in certain implementations of the second aspect, the method further includes: in response to detecting operation of selecting a second content by a user, the first electronic device transmits the second content and second request information to the second electronic device, the second request information being for requesting the second electronic device to process the second content using the first function.
In the embodiments of this application, after the user finishes selecting the first content, if the first electronic device then detects an operation of the user selecting the second content, it can directly send the second content and the second request information to the second electronic device without the user selecting the first function again, which makes it more convenient for the user to process the second content using the first function and improves user experience.
With reference to the second aspect, in some implementations of the second aspect, the sending, by the first electronic device, the first content and the first request information to the second electronic device when detecting the first operation of the user includes: in response to a user selecting a first content and clicking a first key, the first electronic device sends the first content and the first request information to the second electronic device, wherein the first key is associated with the first function.
In the embodiments of this application, when the first electronic device detects that the user has selected the first content and clicked the shortcut key, it sends the first content and the first request information to the second electronic device, making it convenient for the user to process the first content using the first function and improving user experience.
With reference to the second aspect, in some implementations of the second aspect, the account logged in on the first electronic device is associated with the account logged in on the second electronic device.
In some possible implementations, the account logged in on the first electronic device and the account logged in on the second electronic device are the same; or the account logged in on the first electronic device and the account logged in on the second electronic device belong to the same family group.
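The association rule in these implementations can be read as "same account, or both accounts in one family group". A hedged sketch, with FamilyGroupDirectory as an assumed lookup structure:

```kotlin
// Assumed directory mapping each account to its family group (absent if none).
class FamilyGroupDirectory(private val groupOf: Map<String, String>) {
    fun sameFamily(a: String, b: String): Boolean {
        val ga = groupOf[a] ?: return false
        return ga == groupOf[b]
    }
}

// The devices may invoke each other's capabilities when their logged-in accounts are
// associated: either identical, or members of the same family group.
fun accountsAssociated(a: String, b: String, dir: FamilyGroupDirectory): Boolean =
    a == b || dir.sameFamily(a, b)
```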
In a third aspect, a method for invoking the capabilities of other devices is provided, applied to a second electronic device. The method includes: the second electronic device receives first request information sent by a first electronic device, where the first request information is used to request the capability information of the second electronic device; the second electronic device sends the capability information to the first electronic device, where the capability information includes one or more functions, and the one or more functions include a first function; the second electronic device receives first content and second request information sent by the first electronic device, where the second request information is used to request the second electronic device to process the first content using the first function; and the second electronic device processes the first content using the first function according to the second request information and sends a processing result of the first content to the first electronic device.
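From the second device's perspective, the third aspect amounts to handling two kinds of requests: a capability query and a content-processing request. A sketch under the same illustrative naming as the earlier fragment:

```kotlin
// The two request kinds the second electronic device handles in the third aspect.
sealed interface Request
object CapabilityQuery : Request
data class ContentRequest(val functionName: String, val content: String) : Request

// Answer a capability query with the advertised function names, or process the content
// with the requested function and return the result to the first device.
fun handle(req: Request, handlers: Map<String, (String) -> String>): Any = when (req) {
    is CapabilityQuery -> handlers.keys.toList()
    is ContentRequest -> handlers.getValue(req.functionName)(req.content)
}
```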
With reference to the third aspect, in some implementations of the third aspect, the account logged in on the first electronic device is associated with the account logged in on the second electronic device.
In a fourth aspect, there is provided an apparatus comprising: a sending unit, configured to request capability information of the second electronic device; a receiving unit, configured to receive the capability information sent by the second electronic device, where the capability information includes one or more functions, and the one or more functions include a first function; a detection unit configured to detect a first operation by a user; the sending unit is further used for responding to the first operation and sending first content and first request information to the second electronic equipment, wherein the first request information is used for requesting the second electronic equipment to process the first content by using the first function; the receiving unit is also used for receiving the processing result of the second electronic equipment on the first content; and the prompting unit is used for prompting the processing result to the user.
In a fifth aspect, there is provided an apparatus comprising: a receiving unit, configured to receive first request information sent by a first electronic device, where the first request information is used to request capability information of the apparatus; a sending unit, configured to send the capability information to the first electronic device, where the capability information includes one or more functions, and the one or more functions include a first function; the receiving unit is further configured to receive first content and second request information sent by the first electronic device, where the second request information is used to request the apparatus to process the first content using the first function; a processing unit, configured to process the first content using the first function according to the second request information; and the sending unit is further configured to send a processing result of the first content to the first electronic device.
In a sixth aspect, there is provided an electronic device comprising: one or more processors; a memory; and one or more computer programs. Wherein one or more computer programs are stored in the memory, the one or more computer programs comprising instructions. The instructions, when executed by an electronic device, cause the electronic device to perform the method in any of the possible implementations of the second aspect described above.
In a seventh aspect, there is provided an electronic device comprising: one or more processors; a memory; and one or more computer programs. Wherein one or more computer programs are stored in the memory, the one or more computer programs comprising instructions. The instructions, when executed by an electronic device, cause the electronic device to perform the method in any one of the possible implementations of the third aspect described above.
In an eighth aspect, there is provided a computer program product comprising instructions which, when run on a first electronic device, cause the electronic device to perform the method of the second aspect above; alternatively, the computer program product, when run on a second electronic device, causes the electronic device to perform the method of the third aspect described above.
In a ninth aspect, there is provided a computer readable storage medium comprising instructions that when run on a first electronic device cause the electronic device to perform the method of the second aspect above; alternatively, the instructions, when executed on the second electronic device, cause the electronic device to perform the method according to the third aspect above.
In a tenth aspect, there is provided a chip for executing instructions, which when executed performs the method of the second aspect above; alternatively, the chip performs the method of the third aspect.
Drawings
Fig. 1 is a schematic hardware structure of an electronic device according to an embodiment of the present application.
Fig. 2 is a block diagram of a software structure provided in an embodiment of the present application.
FIG. 3 is a set of graphical user interfaces provided by embodiments of the present application.
FIG. 4 is another set of graphical user interfaces provided by embodiments of the present application.
FIG. 5 is another set of graphical user interfaces provided by embodiments of the present application.
FIG. 6 is another set of graphical user interfaces provided by embodiments of the present application.
FIG. 7 is another set of graphical user interfaces provided by embodiments of the present application.
FIG. 8 is another set of graphical user interfaces provided by embodiments of the present application.
Fig. 9 is a schematic diagram of a system architecture provided in an embodiment of the present application.
Fig. 10 is a schematic flowchart of a method for invoking device capabilities provided in an embodiment of the present application.
Fig. 11 is a schematic structural diagram of an apparatus provided in an embodiment of the present application.
Fig. 12 is another schematic structural view of an apparatus provided in an embodiment of the present application.
Fig. 13 is another schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of this application will be described below with reference to the accompanying drawings. In the description of the embodiments of this application, unless otherwise indicated, "/" means "or"; for example, A/B may represent A or B. "And/or" herein merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, A and/or B may mean: A exists alone, both A and B exist, or B exists alone. In addition, in the description of the embodiments of this application, "plural" or "plurality" means two or more.
The terms "first" and "second" are used below for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include one or more such feature. In the description of the present embodiment, unless otherwise specified, the meaning of "plurality" is two or more.
The method provided by the embodiment of the application can be applied to electronic devices such as mobile phones, tablet computers, wearable devices, vehicle-mounted devices, augmented reality (augmented reality, AR)/Virtual Reality (VR) devices, notebook computers, ultra-mobile personal computer (UMPC), netbooks, personal digital assistants (personal digital assistant, PDA) and the like, and the embodiment of the application does not limit the specific types of the electronic devices.
By way of example, fig. 1 shows a schematic diagram of an electronic device 100. The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (universal serial bus, USB) interface 130, a charge management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, keys 190, a motor 191, an indicator 192, a camera 193, a display 194, and a subscriber identity module (subscriber identification module, SIM) card interface 195, etc. The sensor module 180 may include a pressure sensor 180A, a gyro sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
It is to be understood that the structure illustrated in the embodiments of the present application does not constitute a specific limitation on the electronic device 100. In other embodiments of the present application, electronic device 100 may include more or fewer components than shown, or certain components may be combined, or certain components may be split, or different arrangements of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units, such as: the processor 110 may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a memory, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, and/or a neural network processor (neural-network processing unit, NPU), etc. Wherein the different processing units may be separate devices or may be integrated in one or more processors.
The controller may be the nerve center and command center of the electronic device 100. The controller may generate operation control signals according to instruction operation codes and timing signals, to complete the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache. The memory may hold instructions or data that the processor 110 has just used or used cyclically. If the processor 110 needs to use the instructions or data again, it can call them directly from the memory. This avoids repeated accesses, reduces the waiting time of the processor 110, and thereby improves system efficiency.
In some embodiments, the processor 110 may include one or more interfaces. The interfaces may include an integrated circuit (inter-integrated circuit, I2C) interface, an integrated circuit built-in audio (inter-integrated circuit sound, I2S) interface, a pulse code modulation (pulse code modulation, PCM) interface, a universal asynchronous receiver transmitter (universal asynchronous receiver/transmitter, UART) interface, a mobile industry processor interface (mobile industry processor interface, MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (subscriber identity module, SIM) interface, and/or a universal serial bus (universal serial bus, USB) interface, among others.
The I2C interface is a bi-directional synchronous serial bus comprising a serial data line (SDA) and a serial clock line (SCL). In some embodiments, the processor 110 may contain multiple sets of I2C buses. The processor 110 may be coupled to the touch sensor 180K, charger, flash, camera 193, etc., respectively, through different I2C bus interfaces. For example: the processor 110 may be coupled to the touch sensor 180K through an I2C interface, such that the processor 110 communicates with the touch sensor 180K through an I2C bus interface to implement the touch function of the electronic device 100.
The I2S interface may be used for audio communication. In some embodiments, the processor 110 may contain multiple sets of I2S buses. The processor 110 may be coupled to the audio module 170 via an I2S bus to enable communication between the processor 110 and the audio module 170. In some embodiments, the audio module 170 may transmit an audio signal to the wireless communication module 160 through the I2S interface, to implement a function of answering a call through the bluetooth headset.
PCM interfaces may also be used for audio communication to sample, quantize and encode analog signals. In some embodiments, the audio module 170 and the wireless communication module 160 may be coupled through a PCM bus interface. In some embodiments, the audio module 170 may also transmit audio signals to the wireless communication module 160 through the PCM interface to implement a function of answering a call through the bluetooth headset. Both the I2S interface and the PCM interface may be used for audio communication.
The UART interface is a universal serial data bus for asynchronous communications. The bus may be a bi-directional communication bus. It converts the data to be transmitted between serial communication and parallel communication. In some embodiments, a UART interface is typically used to connect the processor 110 with the wireless communication module 160. For example: the processor 110 communicates with a bluetooth module in the wireless communication module 160 through a UART interface to implement a bluetooth function. In some embodiments, the audio module 170 may transmit an audio signal to the wireless communication module 160 through a UART interface, to implement a function of playing music through a bluetooth headset.
The MIPI interface may be used to connect the processor 110 to peripheral devices such as a display 194, a camera 193, and the like. The MIPI interfaces include camera serial interfaces (camera serial interface, CSI), display serial interfaces (display serial interface, DSI), and the like. In some embodiments, processor 110 and camera 193 communicate through a CSI interface to implement the photographing functions of electronic device 100. The processor 110 and the display 194 communicate via a DSI interface to implement the display functionality of the electronic device 100.
The GPIO interface may be configured by software. The GPIO interface may be configured as a control signal or as a data signal. In some embodiments, a GPIO interface may be used to connect the processor 110 with the camera 193, the display 194, the wireless communication module 160, the audio module 170, the sensor module 180, and the like. The GPIO interface may also be configured as an I2C interface, an I2S interface, a UART interface, an MIPI interface, etc.
The USB interface 130 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type C interface, or the like. The USB interface 130 may be used to connect a charger to charge the electronic device 100, and may also be used to transfer data between the electronic device 100 and a peripheral device. And can also be used for connecting with a headset, and playing audio through the headset. The interface may also be used to connect other electronic devices, such as AR devices, etc.
It should be understood that the interfacing relationship between the modules illustrated in the embodiments of the present application is only illustrative, and does not limit the structure of the electronic device 100. In other embodiments of the present application, the electronic device 100 may also use different interfacing manners, or a combination of multiple interfacing manners in the foregoing embodiments.
The charge management module 140 is configured to receive a charge input from a charger. The charger can be a wireless charger or a wired charger. In some wired charging embodiments, the charge management module 140 may receive a charging input of a wired charger through the USB interface 130. In some wireless charging embodiments, the charge management module 140 may receive wireless charging input through a wireless charging coil of the electronic device 100. The charging management module 140 may also supply power to the electronic device through the power management module 141 while charging the battery 142.
The power management module 141 is used for connecting the battery 142, and the charge management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charge management module 140 and provides power to the processor 110, the internal memory 121, the external memory, the display 194, the camera 193, the wireless communication module 160, and the like. The power management module 141 may also be configured to monitor battery capacity, battery cycle number, battery health (leakage, impedance) and other parameters. In other embodiments, the power management module 141 may also be provided in the processor 110. In other embodiments, the power management module 141 and the charge management module 140 may be disposed in the same device.
The wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the electronic device 100 may be used to cover a single or multiple communication bands. Different antennas may also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed into a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 150 may provide a solution for wireless communication including 2G/3G/4G/5G, etc., applied to the electronic device 100. The mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (low noise amplifier, LNA), etc. The mobile communication module 150 may receive electromagnetic waves from the antenna 1, perform processes such as filtering, amplifying, and the like on the received electromagnetic waves, and transmit the processed electromagnetic waves to the modem processor for demodulation. The mobile communication module 150 can amplify the signal modulated by the modem processor, and convert the signal into electromagnetic waves through the antenna 1 to radiate. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the processor 110. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be provided in the same device as at least some of the modules of the processor 110.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating the low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then transmits the demodulated low frequency baseband signal to the baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then transferred to the application processor. The application processor outputs sound signals through an audio device (not limited to the speaker 170A, the receiver 170B, etc.), or displays images or video through the display screen 194. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be provided in the same device as the mobile communication module 150 or other functional module, independent of the processor 110.
The wireless communication module 160 may provide solutions for wireless communication including wireless local area network (wireless local area networks, WLAN) (e.g., wireless fidelity (wireless fidelity, wi-Fi) network), bluetooth (BT), global navigation satellite system (global navigation satellite system, GNSS), frequency modulation (frequency modulation, FM), near field wireless communication technology (near field communication, NFC), infrared technology (IR), etc., as applied to the electronic device 100. The wireless communication module 160 may be one or more devices that integrate at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, modulates the electromagnetic wave signals, filters the electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 160 may also receive a signal to be transmitted from the processor 110, frequency modulate it, amplify it, and convert it to electromagnetic waves for radiation via the antenna 2.
In some embodiments, antenna 1 and mobile communication module 150 of electronic device 100 are coupled, and antenna 2 and wireless communication module 160 are coupled, such that electronic device 100 may communicate with a network and other devices through wireless communication techniques. The wireless communication techniques may include the Global System for Mobile communications (global system for mobile communications, GSM), general packet radio service (general packet radio service, GPRS), code division multiple access (code division multiple access, CDMA), wideband code division multiple access (wideband code division multiple access, WCDMA), time division code division multiple access (time-division code division multiple access, TD-SCDMA), long term evolution (long term evolution, LTE), BT, GNSS, WLAN, NFC, FM, and/or IR techniques, among others. The GNSS may include a global satellite positioning system (global positioning system, GPS), a global navigation satellite system (global navigation satellite system, GLONASS), a beidou satellite navigation system (beidou navigation satellite system, BDS), a quasi zenith satellite system (quasi-zenith satellite system, QZSS) and/or a satellite based augmentation system (satellite based augmentation systems, SBAS).
The electronic device 100 implements display functions through a GPU, a display screen 194, an application processor, and the like. The GPU is a microprocessor for image processing, and is connected to the display 194 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 194 is used to display images, videos, and the like. The display 194 includes a display panel. The display panel may be a liquid crystal display (liquid crystal display, LCD), an organic light-emitting diode (organic light-emitting diode, OLED), an active-matrix organic light-emitting diode (active-matrix organic light emitting diode, AMOLED), a flexible light-emitting diode (flex light-emitting diode, FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (quantum dot light emitting diodes, QLED), or the like. In some embodiments, the electronic device 100 may include 1 or N display screens 194, where N is a positive integer greater than 1.
The electronic device 100 may implement photographing functions through an ISP, a camera 193, a video codec, a GPU, a display screen 194, an application processor, and the like.
The ISP is used to process data fed back by the camera 193. For example, when photographing, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electric signal, and the camera photosensitive element transmits the electric signal to the ISP for processing and is converted into an image visible to naked eyes. ISP can also optimize the noise, brightness and skin color of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be provided in the camera 193.
The camera 193 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image onto the photosensitive element. The photosensitive element may be a charge coupled device (charge coupled device, CCD) or a Complementary Metal Oxide Semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal, which is then transferred to the ISP to be converted into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard RGB, YUV, or the like format. In some embodiments, electronic device 100 may include 1 or N cameras 193, N being a positive integer greater than 1.
The digital signal processor is used to process digital signals, and can process other digital signals in addition to digital image signals. For example, when the electronic device 100 selects a frequency bin, the digital signal processor is used to perform a Fourier transform on the frequency bin energy, and the like.
Video codecs are used to compress or decompress digital video. The electronic device 100 may support one or more video codecs. In this way, the electronic device 100 may play or record video in a variety of encoding formats, such as: moving picture experts group (moving picture experts group, MPEG)-1, MPEG-2, MPEG-3, MPEG-4, and the like.
The NPU is a neural-network (NN) computing processor, and can rapidly process input information by referencing a biological neural network structure, for example, referencing a transmission mode between human brain neurons, and can also continuously perform self-learning. Applications such as intelligent awareness of the electronic device 100 may be implemented through the NPU, for example: image recognition, face recognition, speech recognition, text understanding, etc.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to enable expansion of the memory capabilities of the electronic device 100. The external memory card communicates with the processor 110 through an external memory interface 120 to implement data storage functions. For example, files such as music, video, etc. are stored in an external memory card.
The internal memory 121 may be used to store computer executable program code including instructions. The processor 110 executes various functional applications of the electronic device 100 and data processing by executing instructions stored in the internal memory 121. The internal memory 121 may include a storage program area and a storage data area. The storage program area may store an application program (such as a sound playing function, an image playing function, etc.) required for at least one function of the operating system, etc. The storage data area may store data created during use of the electronic device 100 (e.g., audio data, phonebook, etc.), and so on. In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (universal flash storage, UFS), and the like.
The electronic device 100 may implement audio functions through an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, an application processor, and the like. Such as music playing, recording, etc.
The audio module 170 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal. The audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be disposed in the processor 110, or a portion of the functional modules of the audio module 170 may be disposed in the processor 110.
The speaker 170A, also referred to as a "horn," is used to convert audio electrical signals into sound signals. The electronic device 100 may listen to music, or to hands-free conversations, through the speaker 170A.
The receiver 170B, also referred to as an "earpiece", is used to convert the audio electrical signal into a sound signal. When the electronic device 100 is answering a telephone call or a voice message, the voice can be received by placing the receiver 170B close to the human ear.
The microphone 170C, also referred to as a "mike" or a "mic", is used to convert sound signals into electrical signals. When making a call or sending voice information, the user can speak with the mouth close to the microphone 170C to input a sound signal into the microphone 170C. The electronic device 100 may be provided with at least one microphone 170C. In other embodiments, the electronic device 100 may be provided with two microphones 170C, which can implement a noise reduction function in addition to collecting sound signals. In other embodiments, the electronic device 100 may also be provided with three, four, or more microphones 170C to implement sound signal collection, noise reduction, sound source identification, directional recording, and the like.
The earphone interface 170D is used to connect a wired earphone. The earphone interface 170D may be the USB interface 130, a 3.5 mm open mobile terminal platform (open mobile terminal platform, OMTP) standard interface, or a Cellular Telecommunications Industry Association of the USA (CTIA) standard interface.
The pressure sensor 180A is used to sense a pressure signal and may convert the pressure signal into an electrical signal. In some embodiments, the pressure sensor 180A may be disposed on the display screen 194. There are many types of pressure sensors 180A, such as resistive pressure sensors, inductive pressure sensors, and capacitive pressure sensors. A capacitive pressure sensor may comprise at least two parallel plates with conductive material. When a force is applied to the pressure sensor 180A, the capacitance between the electrodes changes, and the electronic device 100 determines the strength of the pressure from the change in capacitance. When a touch operation acts on the display screen 194, the electronic device 100 detects the intensity of the touch operation according to the pressure sensor 180A. The electronic device 100 may also calculate the touch position based on the detection signal of the pressure sensor 180A. In some embodiments, touch operations that act on the same touch position but with different touch operation intensities may correspond to different operation instructions. For example: when a touch operation whose intensity is less than a first pressure threshold acts on the Messages application icon, an instruction to view the message is executed; when a touch operation whose intensity is greater than or equal to the first pressure threshold acts on the Messages application icon, an instruction to create a new message is executed.
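The pressure-dependent behavior in the Messages example reduces to a threshold comparison; a sketch in which the threshold value and the action names are assumptions:

```kotlin
// Hypothetical dispatch on touch intensity at the Messages application icon:
// below the first pressure threshold -> view the message; at or above it -> create one.
fun onMessageIconTouch(pressure: Float, firstThreshold: Float = 0.5f): String =
    if (pressure < firstThreshold) "viewMessage" else "createMessage"
```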
The gyro sensor 180B may be used to determine the motion posture of the electronic device 100. In some embodiments, the angular velocities of the electronic device 100 about three axes (i.e., the x, y, and z axes) may be determined by the gyro sensor 180B. The gyro sensor 180B may be used for photographing anti-shake. For example, when the shutter is pressed, the gyro sensor 180B detects the shake angle of the electronic device 100, calculates the distance the lens module needs to compensate according to the angle, and lets the lens counteract the shake of the electronic device 100 through reverse motion to achieve anti-shake. The gyro sensor 180B may also be used in navigation and somatosensory gaming scenarios.
The air pressure sensor 180C is used to measure air pressure. In some embodiments, electronic device 100 calculates altitude from barometric pressure values measured by barometric pressure sensor 180C, aiding in positioning and navigation.
The magnetic sensor 180D includes a Hall sensor. The electronic device 100 may detect the opening and closing of a flip cover using the magnetic sensor 180D. In some embodiments, when the electronic device 100 is a flip phone, the electronic device 100 may detect the opening and closing of the flip according to the magnetic sensor 180D. Features such as automatic unlocking upon flip opening are then set according to the detected opening/closing state of the leather case or of the flip.
The acceleration sensor 180E may detect the magnitude of the acceleration of the electronic device 100 in various directions (typically three axes), and may detect the magnitude and direction of gravity when the electronic device 100 is stationary. It may also be used to recognize the posture of the electronic device, and is applied to landscape/portrait switching, pedometers, and other applications.
A distance sensor 180F for measuring a distance. The electronic device 100 may measure the distance by infrared or laser. In some embodiments, the electronic device 100 may range using the distance sensor 180F to achieve quick focus.
The proximity light sensor 180G may include, for example, a light-emitting diode (LED) and a light detector, such as a photodiode. The light-emitting diode may be an infrared light-emitting diode. The electronic device 100 emits infrared light outward through the light-emitting diode and uses the photodiode to detect infrared light reflected from nearby objects. When sufficient reflected light is detected, it may be determined that there is an object near the electronic device 100; when insufficient reflected light is detected, the electronic device 100 may determine that there is no object nearby. The electronic device 100 can use the proximity light sensor 180G to detect that the user is holding the electronic device 100 close to the ear for a call, so as to automatically turn off the screen to save power. The proximity light sensor 180G may also be used in leather case mode and pocket mode to automatically unlock and lock the screen.
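The proximity decision itself is a comparison of the detected reflected light against a threshold; a small sketch with assumed values:

```kotlin
// Assumed reflected-light threshold: sufficient reflection means an object (e.g. an ear)
// is near, so during a call the screen can be turned off to save power.
fun objectNearby(reflectedLight: Float, threshold: Float = 0.8f): Boolean =
    reflectedLight >= threshold

fun shouldTurnScreenOff(inCall: Boolean, reflectedLight: Float): Boolean =
    inCall && objectNearby(reflectedLight)
```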
The ambient light sensor 180L is used to sense ambient light level. The electronic device 100 may adaptively adjust the brightness of the display 194 based on the perceived ambient light level. The ambient light sensor 180L may also be used to automatically adjust white balance when taking a photograph. Ambient light sensor 180L may also cooperate with proximity light sensor 180G to detect whether electronic device 100 is in a pocket to prevent false touches.
The fingerprint sensor 180H is used to collect a fingerprint. The electronic device 100 may use the collected fingerprint features to implement fingerprint unlocking, application lock access, fingerprint photographing, fingerprint-based call answering, and the like.
The temperature sensor 180J is used to detect temperature. In some embodiments, the electronic device 100 executes a temperature processing strategy using the temperature detected by the temperature sensor 180J. For example, when the temperature reported by the temperature sensor 180J exceeds a threshold, the electronic device 100 reduces the performance of a processor located near the temperature sensor 180J in order to reduce power consumption and implement thermal protection. In other embodiments, when the temperature is below another threshold, the electronic device 100 heats the battery 142 to avoid an abnormal shutdown caused by low temperature. In other embodiments, when the temperature is below a further threshold, the electronic device 100 boosts the output voltage of the battery 142 to avoid an abnormal shutdown caused by low temperature.
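The temperature processing strategy amounts to three thresholded actions; a sketch in which every threshold value is an assumption:

```kotlin
// Hypothetical thermal policy mirroring the description: throttle the nearby processor
// when hot, heat the battery when cold, boost battery output voltage when very cold.
fun thermalAction(tempCelsius: Float): String = when {
    tempCelsius > 45f -> "throttleNearbyProcessor"    // thermal protection, lower power
    tempCelsius < 0f  -> "boostBatteryOutputVoltage"  // avoid abnormal shutdown, very cold
    tempCelsius < 10f -> "heatBattery"                // avoid abnormal shutdown, cold
    else              -> "none"
}
```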
The touch sensor 180K is also referred to as a "touch panel." The touch sensor 180K may be disposed on the display screen 194, and together they form a touch screen, also called a "touchscreen." The touch sensor 180K is used to detect a touch operation acting on or near it, and may pass the detected touch operation to the application processor to determine the type of touch event. Visual output related to the touch operation may be provided through the display 194. In other embodiments, the touch sensor 180K may also be disposed on the surface of the electronic device 100 at a position different from that of the display 194.
The bone conduction sensor 180M may acquire vibration signals. In some embodiments, the bone conduction sensor 180M may acquire the vibration signal of the vibrating bone mass of the human vocal part. The bone conduction sensor 180M may also contact the human pulse to receive the blood pressure beat signal. In some embodiments, the bone conduction sensor 180M may also be provided in a headset, combined into a bone conduction headset. The audio module 170 may parse out a voice signal based on the vibration signal of the vocal-part vibrating bone mass obtained by the bone conduction sensor 180M, so as to implement a voice function. The application processor may parse heart rate information based on the blood pressure beat signal obtained by the bone conduction sensor 180M, so as to implement a heart rate detection function.
The keys 190 include a power-on key, a volume key, etc. The keys 190 may be mechanical keys. Or may be a touch key. The electronic device 100 may receive key inputs, generating key signal inputs related to user settings and function controls of the electronic device 100.
The motor 191 may generate a vibration cue. The motor 191 may be used for incoming call vibration alerting as well as for touch vibration feedback. For example, touch operations acting on different applications (e.g., photographing, audio playing, etc.) may correspond to different vibration feedback effects. The motor 191 may also correspond to different vibration feedback effects by touching different areas of the display screen 194. Different application scenarios (such as time reminding, receiving information, alarm clock, game, etc.) can also correspond to different vibration feedback effects. The touch vibration feedback effect may also support customization.
The indicator 192 may be an indicator light and may be used to indicate charging status and battery level changes, as well as messages, missed calls, notifications, and the like.
The SIM card interface 195 is used to connect a SIM card. A SIM card may be inserted into the SIM card interface 195 or removed from it, so as to make contact with or be separated from the electronic device 100. The electronic device 100 may support 1 or N SIM card interfaces, where N is a positive integer greater than 1. The SIM card interface 195 may support Nano SIM cards, Micro SIM cards, and the like. Multiple cards may be inserted into the same SIM card interface 195 simultaneously, and the types of the cards may be the same or different. The SIM card interface 195 may also be compatible with different types of SIM cards, as well as with external memory cards. The electronic device 100 interacts with the network through the SIM card to implement functions such as calls and data communication. In some embodiments, the electronic device 100 employs an eSIM, that is, an embedded SIM card; the eSIM card can be embedded in the electronic device 100 and cannot be separated from it.
It should be understood that the phone cards in the embodiments of the present application include, but are not limited to, SIM cards, eSIM cards, universal subscriber identity cards (universal subscriber identity module, USIM), universal integrated phone cards (universal integrated circuit card, UICC), and the like.
The software system of the electronic device 100 may employ a layered architecture, an event driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture. In this embodiment, taking an Android system with a layered architecture as an example, a software structure of the electronic device 100 is illustrated.
Fig. 2 is a software structure block diagram of the electronic device 100 according to the embodiment of the present application. The layered architecture divides the software into several layers, each with a clear role and division of labor. The layers communicate with each other through software interfaces. In some embodiments, the Android system is divided into four layers: from top to bottom, an application layer, an application framework layer, the Android runtime (Android runtime) and system libraries, and a kernel layer. The application layer may include a series of application packages.
As shown in fig. 2, the application package may include applications for cameras, gallery, calendar, phone calls, maps, navigation, WLAN, bluetooth, music, video, short messages, etc.
The application framework layer provides an application programming interface (application programming interface, API) and programming framework for application programs of the application layer. The application framework layer includes a number of predefined functions.
As shown in FIG. 2, the application framework layer may include a window manager, a content provider, a view system, a telephony manager, a resource manager, a notification manager, and the like.
The window manager is used for managing window programs. The window manager can obtain the size of the display screen, determine whether there is a status bar, lock the screen, capture the screen, and the like.
The content provider is used to store and retrieve data and make such data accessible to applications. The data may include video, images, audio, calls made and received, browsing history and bookmarks, phonebooks, etc.
The view system includes visual controls, such as controls to display text, controls to display pictures, and the like. The view system may be used to build applications. The display interface may be composed of one or more views. For example, a display interface including a text message notification icon may include a view displaying text and a view displaying a picture.
The telephony manager is used to provide the communication functions of the electronic device 100. Such as the management of call status (including on, hung-up, etc.).
The resource manager provides various resources for the application program, such as localization strings, icons, pictures, layout files, video files, and the like.
The notification manager allows an application to display notification information in the status bar; it can be used to convey notification-type messages, which can disappear automatically after a short stay without user interaction. For example, the notification manager is used to notify of download completion, message alerts, and the like. The notification manager may also present notifications in the form of a chart or scroll-bar text in the status bar at the top of the system, such as notifications from applications running in the background, or notifications in the form of a dialog window on the screen. For example, text information is prompted in the status bar, a prompt tone is emitted, the electronic device vibrates, or an indicator light blinks.
The Android runtime includes a core library and a virtual machine. The Android runtime is responsible for scheduling and management of the Android system.
The core library consists of two parts: one part is the functions that the Java language needs to call, and the other part is the core library of Android.
The application layer and the application framework layer run in a virtual machine. The virtual machine executes java files of the application program layer and the application program framework layer as binary files. The virtual machine is used for executing the functions of object life cycle management, stack management, thread management, security and exception management, garbage collection and the like.
The system library may include a plurality of functional modules. For example: surface manager (surface manager), media library (media library), three-dimensional graphics processing library (e.g., openGL ES), 2D graphics engine (e.g., SGL), etc.
The surface manager is used to manage the display subsystem and provides a fusion of 2D and 3D layers for multiple applications.
The media library supports playback and recording of a variety of commonly used audio and video formats, as well as still image files, and the like. The media library may support a variety of audio and video encoding formats, such as MPEG-4, H.264, MP3, AAC, AMR, JPG, and PNG.
The three-dimensional graphic processing library is used for realizing three-dimensional graphic drawing, image rendering, synthesis, layer processing and the like.
The 2D graphics engine is a drawing engine for 2D drawing.
The kernel layer is a layer between hardware and software. The kernel layer at least includes a display driver, a camera driver, an audio driver, and a sensor driver.
It should be understood that the technical solutions in the embodiments of the present application may be used in Android, iOS, HarmonyOS, and other systems.
Fig. 3 is a set of graphical user interfaces (graphical user interface, GUI) provided by embodiments of the present application.
Referring to the GUI shown in fig. 3 (a), the notebook computer displays a passage of English text on its display screen, while the mobile phone displays its desktop. When the notebook computer detects that the user selects the original content "Today is … first" and detects that the user clicks the right mouse button, it displays the GUI shown in (b) of fig. 3.
In one embodiment, a wireless connection (e.g., Bluetooth, Wi-Fi, or NFC) may be established between the notebook computer and the mobile phone; or the notebook computer and the mobile phone are connected through a wire.
In one embodiment, the same account (e.g., a Huawei ID) is logged in on the notebook computer and the mobile phone; or the Huawei IDs logged in on the notebook computer and the mobile phone belong to the same family group; or the mobile phone has authorized the notebook computer to access the capabilities of the mobile phone.
Referring to the GUI shown in (b) of fig. 3, in response to detecting that the user selects the english content and detecting that the user clicks the right mouse button, the notebook computer may display the function list 301. Among these, the function list 301 includes cut, copy, paste, word-taking, and translation functions. The functions of cutting, copying and pasting are the functions of a notebook computer, and the functions of word taking and translation are the functions of a mobile phone. When the notebook computer detects an operation of the user selecting the translation function 302, a GUI as shown in (c) of fig. 3 is displayed.
In one embodiment, the notebook computer may request the capability information of the mobile phone from the mobile phone when establishing the wireless connection (or wired connection) with it. After receiving the request, the mobile phone may send its own capability information (e.g., translation, object recognition, word taking, smart assistant, etc.) to the notebook computer. Thus, when the notebook computer detects that the user selects the English content and detects that the user clicks the right mouse button, the functions of the mobile phone (such as the word taking and translation functions shown in the function list in fig. 3 (b)) can be displayed in the function list.
In one embodiment, if the mobile phone and the notebook computer are logged in to the same account (for example, the same Huawei ID), the notebook computer may also request the capability information of other devices under the account from the cloud server. After receiving the request, the cloud server may send a request to the other devices under the account (which may include the mobile phone), where the request is used to request the capability information of those devices. After receiving the capability information of the other devices, the cloud server can send it to the notebook computer. Thus, when the notebook computer detects that the user selects the English content and detects that the user clicks the right mouse button, the functions of the mobile phone (such as the word taking and translation functions shown in the function list in fig. 3 (b)) can be displayed in the function list.
In one embodiment, the word taking and translation functions shown in the function list 301 in (b) of fig. 3 may come from different devices; for example, the notebook computer determines that the capability information of mobile phone A includes the word taking function and the capability information of mobile phone B includes the translation function. Thus, when the notebook computer detects that the user selects the English content and detects that the user clicks the right mouse button, it can display the function from mobile phone A (the word taking function shown in the function list in (b) of fig. 3) and the function from mobile phone B (the translation function shown in the function list in (b) of fig. 3) in the function list.
In one embodiment, when the notebook computer detects the operation of selecting the english content by the user, the notebook computer can display the function list 301 without detecting the operation of clicking the right button of the mouse by the user.
Referring to the GUI shown in fig. 3 (c), in response to detecting the operation of the user selecting the translation function 302, the notebook computer sends the original text content and request information to the mobile phone, the request information being used to request the mobile phone to translate the original text content. After receiving the request information and the original text content, the mobile phone can translate the original text content to obtain the corresponding translated content, and can send the translated content to the notebook computer. When the notebook computer receives the translated content, a prompt box 303 may be displayed, where the prompt box 303 includes the translated content.
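A minimal source-side sketch of this flow (send the selected original text with a translate request, then display the returned translation); all type and method names below are hypothetical, as the embodiment does not define a concrete API:

```java
import java.util.function.Consumer;

/**
 * Illustrative sketch of the source-side flow in fig. 3 (c).
 * DeviceChannel and CapabilityRequest are hypothetical placeholders
 * for whatever transport the devices actually use (BLE/Wi-Fi/wired).
 */
public class TranslateRequester {
    /** Hypothetical transport wrapper over the inter-device link. */
    interface DeviceChannel {
        void send(CapabilityRequest request, Consumer<String> onResult);
    }

    /** First content plus first request information, bundled together. */
    record CapabilityRequest(String function, String content) {}

    private final DeviceChannel channel;

    public TranslateRequester(DeviceChannel channel) {
        this.channel = channel;
    }

    public void translateSelection(String originalText) {
        // Ask the destination device to translate the selected text.
        channel.send(new CapabilityRequest("translate", originalText),
                translated -> showPromptBox(translated));
    }

    private void showPromptBox(String translatedText) {
        // Display prompt box 303 with the translated content.
        System.out.println("Translation: " + translatedText);
    }
}
```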
In one embodiment, in response to detecting the operation of the user selecting the translation function 302, the notebook computer may prompt the user to choose which language (e.g., Chinese, Japanese, Korean, Spanish, etc.) to translate the original text into. When the notebook computer detects the user's operation of choosing to translate the original text into Chinese, the notebook computer may send the original text content and request information to the mobile phone, where the request information is used to request the mobile phone to translate the original text content into Chinese.
In one embodiment, the prompt box 303 can be dragged, and the length and width of the prompt box 303 can be adjusted, so that the user can compare the original text with the translated text conveniently.
Referring to the GUI shown in (d) of fig. 3, the GUI is another interface on which the notebook computer displays the translation result. The notebook computer may overlay the currently selected original text with a prompt box 304, where the prompt box 304 includes the translated content (for example, the original text automatically translated into Chinese by the mobile phone), and the prompt box 304 further includes controls such as copy, save locally, and view the original text.
In one embodiment, after receiving the original text and the request information, the mobile phone may translate the original text. For example, if the default language of the mobile phone is Chinese, the mobile phone may translate the original text into Chinese; if the original text is itself Chinese, the mobile phone may translate it into English by default.
In the embodiment of the present application, the user can use the functions of another device on one device, expanding the capability boundary of the device while allowing difficult tasks to be completed conveniently and efficiently. For the GUI shown in fig. 3, the notebook computer can directly display the translation function from the mobile phone on the display interface of the original text, without the user logging in to a translation website or opening a translation application (which would require switching back and forth between the translation software and the original text), so that the mobile phone can be used to translate the original text in real time. This improves the user's efficiency in translating the original text, avoids excessive user operations during translation, and improves the user experience.
Fig. 4 is another set of GUIs provided by an embodiment of the present application.
Referring to the GUI shown in fig. 4 (a), the notebook computer displays a picture 401 through the display screen, and the mobile phone displays the desktop of the mobile phone. When the notebook computer detects that the user clicks the right mouse button on the picture, a GUI shown in fig. 4 (b) is displayed.
In one embodiment, a wireless connection (e.g., Bluetooth, Wi-Fi, or NFC) may be established between the notebook computer and the mobile phone; or the notebook computer and the mobile phone are connected through a wire.
In one embodiment, the same account (e.g., a Huawei ID) is logged in on the notebook computer and the mobile phone; or the Huawei IDs logged in on the notebook computer and the mobile phone belong to the same family group; or the mobile phone has authorized the notebook computer to access the capabilities of the mobile phone.
Referring to the GUI shown in (b) of fig. 4, in response to detecting the user's click of the right mouse button on the picture, the notebook computer may display the function list 402. The function list 402 includes functions such as sending the picture to the mobile phone, saving the picture, copying the picture, viewing it full screen, object recognition, shopping, translation, and word taking. Sending the picture to the mobile phone, saving the picture, copying the picture, and viewing it full screen are functions of the notebook computer, while object recognition, shopping, translation, and word taking are functions of the mobile phone. When the notebook computer detects the operation of the user selecting the object recognition function 403, the GUI shown in (c) of fig. 4 is displayed.
It should be understood that the process of displaying the recognition, shopping, translation and word taking functions from the mobile phone on the notebook computer may be described with reference to the embodiment of fig. 3, and will not be described herein for brevity.
Referring to the GUI shown in (c) of fig. 4, in response to detecting that the user selects the recognition function 403, the notebook computer transmits the picture to the mobile phone and request information for requesting the mobile phone to recognize the content of the picture. After receiving the picture and the request information, the mobile phone can identify the content on the picture. After the mobile phone finishes identifying the content on the picture, the mobile phone can send the identification result to the notebook computer. In response to receiving the recognition result of the picture by the mobile phone, the notebook computer may display a prompt box 404, where the prompt box 404 includes prompt information "find the following similar content for you," information source (e.g., xx website), name of the object on the picture (e.g., football), and multiple shopping links of the object (e.g., shopping link 1, shopping link 2, and shopping link 3).
In the embodiment of the application, the user does not need to log in the recognition software or send the picture to the recognition website to recognize the object, and the notebook computer can directly display the recognition function from the mobile phone on the display interface of the picture, so that the content in the picture can be recognized by using the mobile phone. Therefore, the efficiency of the user in recognizing the object in the picture can be improved, and the experience of the user can be improved.
Fig. 5 is another set of GUIs provided by an embodiment of the present application.
Referring to the GUI shown in fig. 5 (a), the notebook computer displays a picture 501 through the display screen, and at this time, the mobile phone displays the desktop of the mobile phone. When the notebook computer detects that the user clicks the right mouse button on this picture 501, a GUI shown in fig. 5 (b) is displayed.
In one embodiment, a wireless connection (e.g., Bluetooth, Wi-Fi, or NFC) may be established between the notebook computer and the mobile phone; or the notebook computer and the mobile phone are connected through a wire.
In one embodiment, the same account (e.g., a Huawei ID) is logged in on the notebook computer and the mobile phone; or the Huawei IDs logged in on the notebook computer and the mobile phone belong to the same family group; or the mobile phone has authorized the notebook computer to access the capabilities of the mobile phone.
Referring to the GUI shown in (b) of fig. 5, in response to detecting the user's click of the right mouse button on the picture 501, the notebook computer may display the function list 502. The function list 502 includes functions such as sending the picture to the mobile phone, saving the picture, copying the picture, viewing it full screen, object recognition, shopping, translation, and word taking. Sending the picture to the mobile phone, saving the picture, copying the picture, and viewing it full screen are functions of the notebook computer, while object recognition, shopping, translation, and word taking are functions of the mobile phone. When the notebook computer detects the operation of the user selecting the word taking function 503, the GUI shown in (c) of fig. 5 is displayed.
It should be understood that the process of displaying the recognition, shopping, translation and word capturing functions from the mobile phone on the notebook computer may refer to the description in the above embodiments, and will not be repeated herein for brevity.
Referring to the GUI shown in fig. 5 (c), in response to detecting the operation of the user selecting the word taking function 503, the notebook computer sends the picture and request information to the mobile phone, the request information being used to request the mobile phone to take words from the content of the picture. After receiving the picture and the request information, the mobile phone can recognize the characters in the picture. For example, the mobile phone may recognize text in the picture using optical character recognition (optical character recognition, OCR).
In one embodiment, after recognizing the text in the picture, the mobile phone may further perform word segmentation processing on the text.
The mobile phone can perform word segmentation on the recognized text using word segmentation technology from natural language processing (natural language processing, NLP). Word segmentation technology is a basic module of NLP. For Latin-script languages such as English, words can generally be extracted simply and accurately because the spaces between words serve as word boundaries. In languages such as Chinese and Japanese, however, the characters are closely connected except for punctuation marks and there is no obvious word boundary, so words are difficult to extract. Currently, word segmentation may be performed on text content in several ways. For example, in the dictionary-based (i.e., character string matching) approach, a text fragment of the string is matched against an existing dictionary, and if a match is found the fragment can be taken as a word; for another example, word segmentation may be performed by the forward maximum matching method, the reverse maximum matching method, or the bidirectional maximum matching method. For example, for the text content "no difficulty or dilemma can block our advancing step", the electronic device performs word segmentation on the text content to obtain 10 segmented words, namely "any", "difficulty", "dilemma", "all", "cannot", "block", "we", "advance", "of", and "step".
It should be understood that, in the embodiment of the present application, the word segmentation processing manner for text content may refer to a word segmentation manner in the prior art, and for brevity, will not be described herein again.
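As an illustration, the forward maximum matching method mentioned above can be sketched as follows; the tiny dictionary and the maximum word length are assumptions for the example, and a real segmenter would use a full lexicon:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

/** A minimal sketch of dictionary-based forward maximum matching. */
public class ForwardMaxMatch {
    public static List<String> segment(String text, Set<String> dict, int maxWordLen) {
        List<String> words = new ArrayList<>();
        int i = 0;
        while (i < text.length()) {
            // Try the longest candidate first, then shrink the window.
            int end = Math.min(i + maxWordLen, text.length());
            String match = null;
            for (int j = end; j > i; j--) {
                String candidate = text.substring(i, j);
                if (dict.contains(candidate)) {
                    match = candidate;
                    break;
                }
            }
            if (match == null) {
                // No dictionary hit: emit a single character and move on.
                match = text.substring(i, i + 1);
            }
            words.add(match);
            i += match.length();
        }
        return words;
    }

    public static void main(String[] args) {
        // An assumed toy dictionary for illustration only.
        Set<String> dict = Set.of("任何", "困难", "阻挡", "我们", "前进", "脚步");
        System.out.println(segment("任何困难都不能阻挡我们前进的脚步", dict, 4));
        // -> [任何, 困难, 都, 不, 能, 阻挡, 我们, 前进, 的, 脚步]
    }
}
```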
After the mobile phone finishes word recognition and word segmentation processing on the picture, the mobile phone can send word-taking results to the notebook computer. In response to receiving the word capturing result, the notebook computer may display a prompt box 504, where the prompt box 504 includes the word capturing result for the text in the picture and the word segmentation result for the identified text.
When the notebook computer detects that the user selects the content of the word taking result and clicks the right mouse button, the notebook computer may display a function list 505 as shown in (c) of fig. 5, where the function list 505 includes copy and translation functions. The copy function 506 is a function of the notebook computer, and the translation function is a function of the mobile phone. When the notebook computer detects the operation of the user selecting the copy function, the notebook computer can copy the word taking result.
According to the embodiment of the present application, the user does not need to manually type the corresponding characters while referring to the content of the picture; the notebook computer can directly display the word taking function from the mobile phone on the display interface of the picture, so that the mobile phone can perform word taking and word segmentation on the content of the picture. This improves the efficiency of converting the characters on a picture into a character string and improves the user experience.
FIG. 6 is another set of GUIs provided in an embodiment of the present application.
Referring to the GUI shown in fig. 6 (a), the notebook computer displays the desktop of the notebook computer through the display screen, and the mobile phone displays the desktop of the mobile phone at this time. Wherein, the desktop of the notebook computer includes a document 1, and when the notebook computer detects that the user clicks the right mouse button on the document 1, a GUI as shown in (b) of fig. 6 is displayed.
In one embodiment, a wireless connection (e.g., Bluetooth, Wi-Fi, or NFC) may be established between the notebook computer and the mobile phone; or the notebook computer and the mobile phone are connected through a wire.
In one embodiment, the same account (e.g., a Huawei ID) is logged in on the notebook computer and the mobile phone; or the Huawei IDs logged in on the notebook computer and the mobile phone belong to the same family group; or the mobile phone has authorized the notebook computer to access the capabilities of the mobile phone.
Referring to the GUI shown in (b) of fig. 6, in response to detecting that the user clicks the right mouse button on the document 1, the notebook computer may display a function list 601. The function list 601 includes functions such as opening, copying, cutting, printing, translating, and word taking. The functions of opening, copying, cutting and printing are those of a notebook computer, and the functions of translating and fetching words are those of a mobile phone. When the notebook computer detects an operation of the user selection translation function 602, a GUI as shown in (c) of fig. 6 is displayed.
Referring to the GUI shown in fig. 6 (c), in response to detecting the operation of the user selecting the translation function 602, the notebook computer sends document 1 and request information to the mobile phone, the request information being used to request the mobile phone to translate the content in document 1. After receiving the request information and document 1, the mobile phone can translate the content in document 1 to obtain the corresponding translated content, and can send the translated content to the notebook computer. When the notebook computer receives the translated content, a prompt box 603 may be displayed, where the prompt box 603 includes the translated content.
In one embodiment, in response to detecting the operation of the user selecting the translation function 602, the notebook computer may prompt the user to choose which language (e.g., Chinese, Japanese, Korean, Spanish, etc.) to translate the original text into. When the notebook computer detects the user's operation of choosing to translate the original text into Chinese, the notebook computer may send document 1 and request information to the mobile phone, where the request information is used to request the mobile phone to translate the content in document 1 into Chinese.
According to the embodiment of the present application, the user does not need to open the document and copy its text into a translation application or a translation website; after detecting the user's right-click on the document, the notebook computer can directly display the translation function from the mobile phone, so that the content of the document can be translated by the mobile phone. This improves the user's efficiency in translating the original text, avoids excessive user operations during translation, and improves the user experience.
FIG. 7 is another set of GUIs provided in an embodiment of the present application.
Referring to the GUI shown in fig. 7 (a), the notebook computer displays the desktop of the notebook computer through the display screen, and the mobile phone displays the desktop of the mobile phone. The desktop of the notebook computer comprises a function list 701, wherein the function list comprises intelligent voice, shopping, translation, word taking and object recognizing functions. The functions in the function list 701 are from the handset.
In one embodiment, after the wireless connection is established between the notebook computer and the mobile phone, the notebook computer may request the capability information of the mobile phone from the mobile phone. After receiving the request, the handset may send its own capability information (e.g., smart voice, shopping, translation, word and object capturing functions, etc.) to the notebook computer. So that the notebook computer can display the function list 701 on the desktop.
In one embodiment, if the mobile phone and the notebook computer are logged in to the same account (for example, the same Huawei ID), the notebook computer may also request the capability information of other devices under the account from the cloud server. After receiving the request, the cloud server may send a request to the other devices under the account (which may include the mobile phone), where the request is used to request the capability information of those devices. After receiving the capability information of the other devices, the cloud server can send it to the notebook computer, so that the notebook computer can display the function list 701 on the desktop.
In one embodiment, the smart voice, shopping, translation, word taking, and object recognition functions shown in the function list 701 in fig. 7 (a) may come from different devices; for example, the notebook computer determines that the capability information of mobile phone A includes the smart voice function and the capability information of mobile phone B includes the shopping, translation, word taking, and object recognition functions. Thus, the notebook computer can display on the desktop the function from mobile phone A (the smart voice function shown in the function list in fig. 7 (a)) and the functions from mobile phone B (the shopping, translation, word taking, and object recognition functions shown in the function list in fig. 7 (a)).
Referring to the GUI shown in fig. 7 (b), photo 2 is displayed on the notebook computer. When the user wishes to view shopping links for a certain commodity in the photo, the user can use the shopping function. When the notebook computer detects the operation of the user clicking the shopping function 702, the notebook computer may display the GUI shown in (c) of fig. 7.
Referring to the GUI shown in (c) of fig. 7, in response to the notebook computer detecting an operation of clicking the shopping function 702 by the user, the notebook computer may display a window 703. Wherein the size of the window 703 may change with user operation (e.g., the user drags the window 703 to the left or right on the side of the window 703 using a cursor) and the position of the window 703 may change with user operation (e.g., the user drags the window 703 to other display areas using a cursor).
In one embodiment, when the notebook computer detects that the window 703 has remained unchanged for a first preset period, the notebook computer may acquire the image information of the window 703 and send the image information and first request information to the mobile phone, where the first request information is used to request the mobile phone to recognize the image information and to query shopping links for the object. In response to receiving the image information and the first request information, the mobile phone recognizes the image information (for example, the mobile phone may recognize that the object in the image information is a smart TV) and queries shopping links for the recognized object through a server (e.g., a server of a shopping App). The mobile phone may then send the queried thumbnail of the smart TV and the shopping links (e.g., shopping link 1, shopping link 2, shopping link 3, and shopping link 4) to the notebook computer.
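The "window unchanged for a first preset period" trigger can be sketched as a simple debounce; the timing value and all type names below are assumptions for illustration:

```java
import java.util.Timer;
import java.util.TimerTask;

/**
 * A sketch of the trigger in fig. 7 (c): once the user stops moving or
 * resizing window 703, capture its image and send it with the first
 * request information. Period length and types are assumed.
 */
public class WindowStableTrigger {
    private static final long FIRST_PRESET_PERIOD_MS = 800; // assumed value
    private final Timer timer = new Timer("window-stable", true);
    private TimerTask pending;

    /** Called whenever the user drags or resizes the capture window. */
    public synchronized void onWindowChanged(CaptureWindow window) {
        if (pending != null) {
            pending.cancel(); // the window moved again: restart the countdown
        }
        pending = new TimerTask() {
            @Override
            public void run() {
                // The window stayed unchanged for the preset period:
                // grab its image and ask the phone to recognize the
                // object and query shopping links for it.
                byte[] image = window.captureImage();
                sendToPhone(image, "recognize-and-shop");
            }
        };
        timer.schedule(pending, FIRST_PRESET_PERIOD_MS);
    }

    private void sendToPhone(byte[] image, String requestType) { /* transport */ }

    interface CaptureWindow { byte[] captureImage(); }
}
```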
Referring to the GUI shown in fig. 7 (d), in response to receiving the thumbnail of the object and the shopping links, the notebook computer may display a prompt box 704. The prompt box 704 includes the prompt information "the following similar content has been found for you," a thumbnail of the object, and the shopping links.
In one embodiment, the notebook computer may also respond to the operation of clicking the shopping link 1 by the user, and view the website corresponding to the shopping link through the browser application on the notebook computer, so that the user browses the commodity to be purchased.
Referring to the GUI shown in (e) of fig. 7, in response to the user's operation of adjusting the size and position of the window 703, another object in photo 2 is displayed in the window 703 currently displayed by the notebook computer. In response to detecting that the window 703 has remained unchanged for the first preset period, the notebook computer may acquire another piece of image information from the window 703 and send it together with second request information to the mobile phone, where the second request information is used to request the mobile phone to recognize this image information and to query shopping links for the object. In response to receiving the image information and the second request information, the mobile phone recognizes the image information (for example, the mobile phone may recognize that the object in the image information is a smart speaker) and queries shopping links for the recognized object (e.g., shopping link 5, shopping link 6, shopping link 7, and shopping link 8) through a server (e.g., a server of a shopping App). The mobile phone may send the queried thumbnail of the object and the shopping links to the notebook computer.
Referring to the GUI shown in (e) of fig. 7, in response to receiving the thumbnail of the object and the shopping links, the notebook computer may update the thumbnail and shopping link information displayed in the prompt box 704. The prompt box 704 includes the prompt information "the following similar content has been found for you," a thumbnail of the smart speaker, and the shopping links (e.g., shopping link 5, shopping link 6, shopping link 7, and shopping link 8).
In the embodiment of the application, the user does not need to log in the recognition software or send the picture to the recognition website to recognize the object, and the notebook computer can directly display the recognition function from the mobile phone on the display interface of the picture, so that the content in the picture can be recognized by using the mobile phone. Therefore, the efficiency of the user in recognizing the object in the picture can be improved, and the experience of the user can be improved. Meanwhile, the user can acquire shopping links of objects corresponding to the images in the window in real time only by updating the position of the window on the notebook computer, so that the user experience in shopping is improved.
FIG. 8 is another set of GUIs provided in an embodiment of the present application.
Referring to the GUI shown in fig. 8 (a), the notebook computer displays the desktop of the notebook computer through the display screen, and the mobile phone displays the desktop of the mobile phone at this time. The desktop of the notebook computer comprises a function list 801, wherein the function list comprises intelligent voice, shopping, translation, word taking and object recognizing functions. The functions in the function list 801 are from the handset.
It should be understood that the process of displaying the function list 801 by the notebook computer may refer to the description in the above embodiment, and will not be repeated herein for brevity.
When the notebook computer detects the operation of the user clicking the smart voice function 702, the notebook computer may start to detect voice commands input by the user. For example, as shown in (a) of fig. 8, in response to receiving the user's voice command "how is the weather today", the notebook computer may send the voice command to the mobile phone together with request information, the request information being used to request the mobile phone to recognize the user's intention in the voice command. In response to receiving the voice command and the request information sent by the notebook computer, the mobile phone can analyze the voice command. The speech recognition (automatic speech recognition, ASR) module of the mobile phone may first convert the voice information into text information and then analyze the text information. The mobile phone can recognize the slot information in the text information and the user's intention through a semantic understanding (natural language understanding, NLU) module.
For example, table 1 shows user intent and slot information determined by the handset.
TABLE 1
Intent (intent):  "query weather"
Slot (slot):      time = "today"
It should be understood that the process of analyzing the voice command by the mobile phone may refer to the prior art, and will not be described herein for brevity.
After the mobile phone acquires the slot information and the user intention from the text information, it can send them to the intent processing module of the mobile phone. The intent processing module can determine that the user's intention is "query weather" and that the slot information associated with this intention is "today," and can therefore query today's weather for the user. After querying the weather information for the day, the mobile phone can send the weather information to the notebook computer.
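A minimal sketch of this intent processing step, assuming hypothetical types for the intent and slot information of Table 1:

```java
import java.util.Map;

/**
 * Illustrative sketch of the intent processing module: the NLU module
 * yields an intent and slot information (Table 1), and the processor
 * routes it to a handler. The interface below is an assumption.
 */
public class IntentProcessor {
    public String handle(String intent, Map<String, String> slots) {
        if ("query weather".equals(intent)) {
            // Slot "time" = "today" -> query today's weather, then
            // return the result so it can be sent to the notebook.
            return queryWeather(slots.getOrDefault("time", "today"));
        }
        return "Sorry, I did not understand that.";
    }

    private String queryWeather(String time) {
        // Placeholder for the real weather query.
        return "Cloudy turning sunny " + time + ", 10\u00b0C to 22\u00b0C.";
    }
}
```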
Referring to the GUI shown in (b) of fig. 8, in response to receiving the weather information, the notebook computer may prompt the user by voice: "It is cloudy turning sunny today, with a temperature of 10°C to 22°C."
In one embodiment, after the mobile phone obtains the weather information by inquiry, the text information corresponding to the weather information can be sent to the notebook computer, and the notebook computer can convert the text information into voice information through the ASR module, so that the voice information is prompted to a user.
In another embodiment, after the mobile phone obtains the weather information through inquiry, the ASR module of the mobile phone can convert the text information corresponding to the weather information into voice information, so that the voice information is sent to the notebook computer. In response to receiving the voice information, the notebook computer prompts the user for the voice information.
In the embodiment of the present application, when the user uses the smart voice function of the mobile phone on the notebook computer, the user does not need to switch to the mobile phone to issue the voice command; instead, the voice command is sent to the mobile phone through the notebook computer, which improves the convenience of using smart voice. Most notebook computers have smart voice capability, but their voice assistant may differ from that of the mobile phone; for example, the voice assistant of a notebook computer running the Windows system is Cortana, the voice assistant of the mobile phone is Celia (Xiaoyi), Apple's voice assistant is Siri, and so on. Therefore, the user does not need to switch wake-up words and usage habits when using the voice assistant, which improves the user experience. In addition, the data support behind the mobile phone is richer than that of the notebook computer, which helps ensure the accuracy of the data obtained by the user.
Figs. 3 to 6 above describe displaying capability information from other devices in a function list that pops up when the user clicks the right mouse button at a certain position on the display screen, and figs. 7 and 8 describe displaying capability information from other devices by adding function lists 701 and 801 on the notebook computer side. In the embodiments of the present application, the manner in which the user invokes the capability information of other devices is not limited. Illustratively, the user may customize a shortcut key on the notebook computer to invoke the capability information of the mobile phone. Alternatively, the user may customize a list of functions from other devices, so that the user may select and use a function from the other devices in the function list. For example, after the notebook computer detects that the user selects a text segment and presses the Tab key and the T key on the keyboard, the translation function of the other device may be invoked, so that the notebook computer can send the text segment and request information to the other device, where the request information is used to request translation of the text segment.
Fig. 9 shows a schematic diagram of a system architecture of an embodiment of the present application. The system architecture includes a source end device 910 (e.g., the notebook computer in the above embodiments) and a sink end device 920 (e.g., the mobile phone in the above embodiments). The source end device includes an application layer and a proxy module, where the application layer includes a picture application 911, a document application 912, and the like; the proxy module includes a network connection module 913, an event processing module 914, and a user interface (user interface, UI) display module 915. The network connection module 913 is configured to establish a wireless connection (or a wired connection) with the network connection module 921 of the sink end device; the event processing module 914 is configured to generate a corresponding event and to receive, from the network connection module 913, the sink end device's processing result for the event; the UI display module 915 is configured to draw windows, so as to display the sink end device's processing result for the event.
The sink end device comprises a capability center and an agent module, wherein capability information (such as translation, object recognition, word taking, shopping, intelligent voice and the like) of the sink end device is stored in the capability center. The proxy module includes a network connection module 921 and an event processing module 922, where the network connection module 921 is configured to establish a wireless connection (or a wired connection) with the network connection module 913 of the source device; the event processing module 922 is used for calling the interface of the corresponding capability in the capability center, and performing corresponding processing on the event content sent by the source terminal device.
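The division of labor among these modules can be summarized with the following illustrative interfaces; the patent describes the modules functionally, so every name below is an assumption:

```java
/**
 * Structural sketch of the proxy modules in Fig. 9. These interfaces
 * are illustrative only; the embodiment does not define a concrete API.
 */
public final class ProxyModules {
    /** Network connection module (913/921): the link between devices. */
    interface NetworkConnectionModule {
        void connect(String peerAddress);  // wireless or wired
        void send(byte[] message);
        void setReceiver(EventProcessingModule receiver);
    }

    /** Event processing module (914/922): generates events on the source
     *  side, or calls the matching capability interface on the sink side. */
    interface EventProcessingModule {
        void onMessage(byte[] message);
    }

    /** UI display module (915): draws a window to present the result. */
    interface UiDisplayModule {
        void showResult(String capability, String result);
    }

    /** Capability center on the sink side: maps a capability name
     *  (translation, object recognition, word taking, ...) to its interface. */
    interface CapabilityCenter {
        Object getCapabilityInterface(String capabilityName);
    }

    private ProxyModules() {}
}
```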
Fig. 10 shows a schematic flowchart of a method 1000 for invoking device capabilities provided in an embodiment of the present application. As shown in fig. 10, the method 1000 may be performed by a source end device and sink end device shown in fig. 9, the method 1000 comprising:
s1001, the source end device and sink end device establish connection.
In one embodiment, the source end device and sink end device may establish a wireless connection (e.g., a Bluetooth, wi-Fi, or NFC connection) through respective network connection modules.
In one embodiment, if the source end device and sink end device do not establish a connection, the source end device may send a broadcast message to surrounding devices and carry its own communication address in the broadcast message.
Illustratively, the broadcast message may be a bluetooth low energy (Bluetooth low energy, BLE) packet, and the source end device may carry a media access control (media access control, MAC) address of the source end device in an access address (access address) field in the BLE packet. After receiving the broadcast message, the network connection module 921 of the sink end device may establish bluetooth connection with the source end device according to the MAC address carried in the broadcast message.
Illustratively, the broadcast message may be a user datagram protocol (user datagram protocol, UDP) packet, which may carry an internet protocol (internet protocol, IP) address and port numbers of the source end device (including a source port number and a destination port number, where the source port number is the port number the source end device uses to send data, and the destination port number is the port the source end device uses to receive data). The IP address and port numbers of the source end device may be carried in the UDP header in the data portion of the IP datagram. After receiving the broadcast message, the network connection module 921 of the sink end device may establish a transmission control protocol (transmission control protocol, TCP) connection with the source end device according to the IP address and port number carried therein.
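A minimal sketch of such a UDP discovery broadcast; the payload layout and the broadcast port are assumptions for illustration:

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;

/**
 * Sketch: the source end device announces its IP address and port so
 * the sink end device can establish a TCP connection back to it.
 */
public class DiscoveryBroadcast {
    public static void announce(String selfIp, int tcpPort) throws Exception {
        try (DatagramSocket socket = new DatagramSocket()) {
            socket.setBroadcast(true);
            // Carry the source end device's IP address and service port
            // in the UDP payload (the data portion of the IP datagram).
            byte[] payload = (selfIp + ":" + tcpPort)
                    .getBytes(StandardCharsets.UTF_8);
            DatagramPacket packet = new DatagramPacket(
                    payload, payload.length,
                    InetAddress.getByName("255.255.255.255"), 52550); // assumed port
            socket.send(packet);
        }
    }
}
```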
S1002, the source end device requests the capability information of the sink end device.
In one embodiment, before the source end device requests the capability information of the sink end device, the source end device may determine whether the source end device and the sink end device log in the same account, or the source end device may determine whether the source end device and the sink end device are in the same home group.
Illustratively, the account number registered by the source device is denoted as ID1. After the source end device and sink end device establish connection, information of the device name of the sink end device can be obtained. The source device may request the cloud server to determine whether the device corresponding to the device name is a device under ID1. If the cloud server determines that the sink end device is the device under the ID1, the source end device requests the capability information of the sink end device.
Illustratively, the account logged in on the source end device is denoted Huawei ID 1, and the account logged in on the sink end device is denoted Huawei ID 2. After the source end device and the sink end device establish a connection, the source end device can obtain the device name of the sink end device. The source end device may ask the cloud server to determine whether the Huawei ID logged in on the device corresponding to that device name is in the same family group as Huawei ID 1. If the cloud server determines that the Huawei ID logged in on that device (e.g., Huawei ID 2) and Huawei ID 1 are in the same family group, the source end device requests the capability information of the sink end device. It should be understood that, in the embodiment of the present application, a user may use the account logged in on a certain device (e.g., Huawei ID 1) to invite the accounts of other family members, so as to form a family group from the user's own account and the accounts of the other family members. After the family group is formed, the user's account and the accounts of other family members can share information; for example, the user's account can obtain information such as device names, device types, and addresses from the accounts of other family members; for another example, if the user purchases a membership of a certain application, other family members may obtain the user's membership rights; for another example, members of the same family group may share the storage space of the cloud server.
In one embodiment, the source end device requests capability information of sink end device, including: the source end device sends first request information to the sink end device, wherein the first request information is used for requesting to acquire the capability information of the sink end device.
Illustratively, the source end device establishes a Bluetooth connection to the sink end device. The source end device sends a BLE data packet to the sink end device, where the BLE data packet may carry first request information used to request the capability information of the sink end device. The BLE packet includes a protocol data unit (protocol data unit, PDU), and the first request information may be carried in the service data (service data) field in the PDU, or in the vendor-specific data (manufacturer specific data) field in the PDU. For example, the payload (payload) of the service data field may include a plurality of bits, among which are extensible bits. The source end device and the sink end device may agree on the meaning of a certain extensible bit: when that extensible bit is 1, the sink end device can learn that the source end device is requesting its capability information. After receiving the BLE data packet, the network connection module 921 of the sink end device may send it to the event processing module 922. The event processing module 922 determines from the first request information in the BLE data packet that the source end device wants to acquire its capability information, and the sink end device may then inform the source end device of the capability information in its capability center.
If the capability center of the sink end device includes translation, object recognition, word taking, and smart voice capabilities, the event processing module 922 of the sink end device may carry the capability information in a BLE data packet; this capability information may be carried in the service data field in the PDU, or in the vendor-specific data field in the PDU. For example, the payload of the service data field may include a plurality of bits, among which are extensible bits. The source end device and the sink end device may agree on the meaning of several extensible bits, for example, on 4 bits: when the first bit is 1, the sink end device has the translation function (when it is 0, it does not); when the second bit is 1, the sink end device has the object recognition function (when it is 0, it does not); when the third bit is 1, the sink end device has the word taking function (when it is 0, it does not); when the fourth bit is 1, the sink end device has the smart voice function (when it is 0, it does not). After receiving the BLE data packet, the network connection module 913 of the source end device may forward it to the event processing module 914, so that the event processing module 914 determines the capability information of the sink end device. After determining the capability information of the sink end device, the event processing module 914 may inform the UI display module 915 of that capability information.
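The agreed-upon capability bits can be sketched as follows; the bit positions follow the 4-bit example above, while the surrounding byte layout is an assumption:

```java
/**
 * Sketch of the agreed capability bits: one bit per function in the
 * extensible part of the packet payload.
 */
public class CapabilityBits {
    public static final int TRANSLATION = 1 << 0; // first bit
    public static final int RECOGNITION = 1 << 1; // second bit
    public static final int WORD_TAKING = 1 << 2; // third bit
    public static final int SMART_VOICE = 1 << 3; // fourth bit

    /** Sink side: encode the capability center's functions into one byte. */
    public static byte encode(boolean translate, boolean recognize,
                              boolean takeWords, boolean smartVoice) {
        int bits = 0;
        if (translate)  bits |= TRANSLATION;
        if (recognize)  bits |= RECOGNITION;
        if (takeWords)  bits |= WORD_TAKING;
        if (smartVoice) bits |= SMART_VOICE;
        return (byte) bits;
    }

    /** Source side: test a single capability bit from the payload. */
    public static boolean has(byte payload, int capability) {
        return (payload & capability) != 0;
    }
}
```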
In this embodiment of the present application, after receiving the first request information, the sink end device may look up the package name information of the applications installed at the application layer; for example, the sink end device finds package name 1 of application 1, package name 2 of application 2, and package name 3 of application 3. After finding the package names of all installed applications, the sink end device can query a list of applications that support the sharing function. Illustratively, Table 2 shows the sink end device's list of applications supporting the sharing function.
TABLE 2
Package name of application supporting sharing    Function corresponding to the application
Package name 1                                    Translation
Package name 2                                    Object recognition
After querying Table 2, the sink end device learns that the applications it currently supports sharing are those corresponding to package name 1 and package name 2, whose functions are translation and object recognition, respectively. The sink end device may send the function information it supports sharing to the source end device. As for application 3, although it is installed on the sink end device, the sink end device does not support sharing it, so its corresponding function may not be shared with the source end device.
It should be understood that table 2 shown above is merely illustrative, and is not limiting in this application.
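The lookup against this sharing list can be sketched as follows; the package names, map contents, and method names are illustrative placeholders, not part of the embodiment:

```java
import java.util.LinkedHashMap;
import java.util.Map;

/**
 * Sketch of the package-name lookup against the sharing list (Table 2):
 * only applications whose package names appear in the list have their
 * functions shared with the source end device.
 */
public class SharingListLookup {
    // Table 2: package name -> function corresponding to the application.
    private static final Map<String, String> SHARING_LIST = Map.of(
            "package.name.1", "translation",
            "package.name.2", "object recognition");

    /** @param installedPackages package names found at the application layer */
    public static Map<String, String> sharableFunctions(Iterable<String> installedPackages) {
        Map<String, String> result = new LinkedHashMap<>();
        for (String pkg : installedPackages) {
            String function = SHARING_LIST.get(pkg);
            if (function != null) {
                result.put(pkg, function); // e.g. application 3 is skipped
            }
        }
        return result;
    }
}
```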
Illustratively, the source end device establishes a TCP connection to the sink end device. The source end device sends a TCP data packet to the sink end device, where the TCP data packet may carry first request information used to request the capability information of the sink end device. The TCP data packet includes a TCP header and a TCP data portion, and the first request information may be carried in the TCP data portion. For example, the TCP data portion may include a plurality of bits, among which are extensible bits. The source end device and the sink end device may agree on the meaning of a certain extensible bit: when that extensible bit is 1, the sink end device can learn that the source end device is requesting its capability information. After receiving the TCP packet, the network connection module 921 of the sink end device may send the TCP packet to the event processing module 922. The event processing module 922 determines from the first request information in the TCP packet that the source end device wants to acquire its capability information, and the sink end device may then inform the source end device of the capability information in its capability center.
If the capability center of the sink end device includes the translation, object recognition, word taking, and smart voice capabilities, the event processing module 922 of the sink end device may carry the capability information in a TCP data packet, with the indication information carried in the TCP data portion. For example, the TCP data portion may include a plurality of bits, among which are extensible bits. The source end device and the sink end device may agree on the content of a plurality of extensible bits, for example, on 4 bits interpreted in the same way as in the BLE case above: the first bit indicates the translation function, the second the object recognition function, the third the word taking function, and the fourth the smart voice function, where 1 means the sink end device has the function and 0 means it does not. After receiving the TCP data packet, the network connection module 913 of the source end device may forward it to the event processing module 914, so that the event processing module 914 determines the capability information of the sink end device. After determining the capability information, the event processing module 914 may inform the UI display module 915 of it.
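A rough sketch of the capability request over TCP is shown below. The one-byte request flag and one-byte bitmap reply are illustrative assumptions; the real layout of the TCP data portion is whatever the two devices agree on.

```python
# A sketch of the capability request over TCP. The one-byte request flag and
# one-byte bitmap reply are illustrative assumptions, not the real layout.
import socket

REQUEST_CAPABILITIES = 0b0000_0001  # agreed extensible bit set to 1

def request_capabilities(host: str, port: int) -> int:
    """Source end device side: ask the sink end device for its capabilities."""
    with socket.create_connection((host, port)) as sock:
        sock.sendall(bytes([REQUEST_CAPABILITIES]))  # first request information
        reply = sock.recv(1)                         # capability bitmap
    return reply[0]

# A reply of 0b1111 would mean translation, object recognition, word taking,
# and smart voice are all available on the sink end device.
```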
In one embodiment, the UI display module 915 may display a function list through the display of the source end device, thereby presenting the capability information of the sink end device in the function list. For example, as shown in fig. 7 (a), the UI display module 915 may draw a function list 701, where various capabilities of the mobile phone (e.g., the smart voice, shopping, translation, word taking, and knowledge functions) are included in the function list 701.
In one embodiment, the UI display module 915 may also display the capability information of the sink end device to the user after the source end device detects a preset operation of the user. For example, as shown in fig. 4 (b), when the notebook computer detects a right-click operation by the user on the picture 401, the UI display module 915 may draw a function list 402, where various capabilities of the mobile phone (e.g., object recognition, shopping, translation, and word taking) are included in the function list 402.
In one embodiment, the source end device may establish a correspondence between the content type selected by the user, the interaction mode, and the displayed sink end device capability information. For example, Table 3 shows such a correspondence.
TABLE 3
The source end device may display different capability information depending on the content selected by the user. For example, in the GUI shown in (b) of fig. 3, when the notebook computer detects that the user selects the original text content and clicks the right mouse button, the word taking and translation functions may be displayed in the function list 301, without displaying the shopping and object recognition functions. For another example, in the GUI shown in (b) of fig. 4, when the notebook computer detects that the user clicks the right mouse button on the picture 401, the object recognition, shopping, translation, and word taking functions may be displayed in the function list 402.
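The correspondence of Table 3 can be sketched as a simple lookup keyed by content type. The mapping below is reconstructed from the examples in the text and is illustrative only.

```python
# A sketch of the Table 3 correspondence: the function list shown to the user
# depends on the type of the selected content. This mapping is reconstructed
# from the examples in the text and is illustrative only.

FUNCTIONS_BY_CONTENT_TYPE = {
    "text": ["word taking", "translation"],
    "image": ["object recognition", "shopping", "translation", "word taking"],
    "voice": ["smart voice"],
}

def functions_for_selection(content_type: str) -> list:
    """Return the sink end device capabilities to display for this selection."""
    return FUNCTIONS_BY_CONTENT_TYPE.get(content_type, [])

print(functions_for_selection("text"))   # ['word taking', 'translation']
print(functions_for_selection("image"))  # ['object recognition', 'shopping', ...]
```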
S1003, the source end device detects a first operation of a user, and sends first content and second request information to the sink end device, where the second request information is used to instruct the sink end device to perform corresponding processing on the first content.
In one embodiment, the source end device detects a first operation of a user, and sends first content and second request information to the sink end device, including:
when the source end device detects the operation of selecting the first content by the user, a function list is displayed, wherein the function list comprises one or more functions, and the one or more functions are capability information acquired by the source end device from the sink end device.
In response to detecting a user selecting a first function from the one or more functions, the source end device sends the first content and second request information to the sink end device, the second request information being used to request the sink end device to process the first content using the first function.
For example, as shown in fig. 3 (b), after the notebook computer detects that the user selects a piece of English text (for example, "Today is a … first"), a function list 301 may be displayed, where the translation and word taking functions in the function list 301 are capability information acquired by the notebook computer from the mobile phone. When the notebook computer detects that the user selects the translation function, the notebook computer can send the English content and request information to the mobile phone, where the request information is used to request the mobile phone to translate the English content.
In one embodiment, before the source end device detects the first operation of the user and sends the first content and the second request information to the sink end device, the method further includes: the source end device displays one or more functions, wherein the one or more functions are capability information acquired by the source end device from the sink end device, and the one or more functions comprise a first function;
The source end device detecting a first operation of a user and sending first content and second request information to the sink end device includes:
in response to the user selecting a first function from the one or more functions, the source end device starts detecting content selected by the user; and
in response to the operation of the user selecting the first content, the source end device sends the first content and the second request information to the sink end device, where the second request information is used to request the sink end device to process the first content using the first function.
For example, as shown in fig. 7 (a), after the notebook computer obtains the capability information of the mobile phone (including, for example, the smart voice, shopping, translation, word taking, and knowledge functions), the notebook computer may display a function list 701, where the function list 701 includes the capability information of the mobile phone. As shown in (b) of fig. 7, in response to detecting the operation of the user selecting the shopping function 702 from the function list 701, the notebook computer may start to detect the content selected by the user. As shown in (c) of fig. 7, when the notebook computer detects that the user selects the content in the window 703, the notebook computer may send the content in the window 703 and request information to the mobile phone, where the request information is used to request the mobile phone to query a shopping link of the object corresponding to the image information in the window 703.
In one embodiment, before the source end device detects the first operation of the user and sends the first content and the second request information to the sink end device, the method further includes: the source end device displays one or more functions, wherein the one or more functions are capability information acquired by the source end device from the sink end device, and the one or more functions comprise a first function;
the source end device detecting a first operation of a user and sending first content and second request information to the sink end device includes:
in response to the operation of the user selecting the first content and selecting the first function, the source end device sends the first content and the second request information to the sink end device, where the second request information is used to request the sink end device to process the first content using the first function.
For example, as shown in fig. 7 (a), after the notebook computer obtains the capability information of the mobile phone (including, for example, the smart voice, shopping, translation, word taking, and knowledge functions), the notebook computer may display a function list 701, where the function list 701 includes the capability information of the mobile phone. When the notebook computer detects that the user selects a piece of original text content and clicks the translation function in the function list 701, the notebook computer can send the selected English content and request information to the mobile phone, where the request information is used to request the mobile phone to translate the English content.
In one embodiment, the capability information acquired by the source end device from the sink end device includes one or more functions, where the one or more functions include a first function, the source end device detects a first operation of a user, and sends first content and second request information to the sink end device, where the method includes:
in response to detecting that a user selects first content and detecting that the user clicks a first key, the source end device sends first content and second request information to the sink end device, where the second request information is used to request the sink end device to process the first content using a first function, and the first key is associated with the first function.
For example, the user may set a mapping relationship between the first function and the first key. For instance, the user may associate the translation function with the Ctrl+T key combination on the keyboard.
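A sketch of such a key binding on the source end device is shown below; send_to_sink is a hypothetical stand-in for the packet-sending path described in this application, and the Ctrl+T-to-translation mapping follows the example above.

```python
# A sketch of binding a key combination to a shared function on the source
# end device. send_to_sink is a hypothetical stand-in for the packet-sending
# path described in the text; Ctrl+T -> translation follows the example above.

def send_to_sink(content: str, function: str) -> None:
    print(f"request sink device to apply {function!r} to {content!r}")

key_bindings = {("ctrl", "t"): "translation"}

def on_key_press(keys, selected_content):
    function = key_bindings.get(tuple(keys))
    if function is not None:
        send_to_sink(selected_content, function)

on_key_press(["ctrl", "t"], "Today is a ...")  # triggers a translation request
```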
S1004, the sink end device processes the first content and sends a processing result of the first content to the source end device in response to receiving the first content and the second request information.
The following describes the specific implementation in which the source end sends the first content and the second request information, taking the source end device being a notebook computer and the sink end device being a mobile phone as an example and with reference to the GUIs.
For the GUI shown in FIG. 3
When the notebook computer detects that the English content is selected and detects that the user clicks the right mouse button, the UI display module 915 in the notebook computer may draw the function list 301. When the notebook computer detects the operation of the user selecting the translation function 302, the event processing module 914 of the notebook computer may generate a TCP data packet, where the TCP data portion of the TCP data packet may include the original text content and type information (e.g., text or picture) of the original text content. In this embodiment of the present application, the function of the second request information may be implemented by the type information of the original text content. For example, after the mobile phone learns that the type information of the original text content is text, the mobile phone can learn that the notebook computer wants the original text content translated or word-taken. Alternatively, the TCP data packet may carry only the original text content; after obtaining the original text content, the mobile phone may determine its type information, and thereby determine through the type information (e.g., text) that the notebook computer wants the original text content translated or word-taken.
In one embodiment, the event processing module 914 may further carry indication information in the TCP data portion of the TCP data packet, where the indication information is used to indicate that the original text content is to be translated or word-taken. For example, the TCP data portion may include a plurality of bits, among which are extensible bits. The notebook computer and the mobile phone may agree on the content of a certain extensible bit: when the extensible bit is 1, the mobile phone learns that the notebook computer needs the original text content translated; when the extensible bit is 0, the mobile phone learns that the notebook computer needs words taken from the original text content.
The event processing module 914 may encode the content selected by the user using GBK, ISO8859-1, Unicode, or another encoding mode, and carry the encoded information on one or more extensible bits in the TCP data portion. After receiving the TCP data packet, the network connection module 921 may send it to the event processing module 922, so that the event processing module 922 decodes the original text content and the type information of the original text content. For example, after obtaining the original text content (e.g., "Today is a … first"), the type information of the original text content (e.g., text), and the indication information indicating that the mobile phone translates the original text content (the extensible bit being 1), the event processing module 922 of the mobile phone may call the interface of the translation function in the capability center to translate the original text content.
After obtaining the corresponding translated content, the event processing module 922 may generate a TCP data packet and carry the translated content in the TCP data portion, encoding it using GBK, ISO8859-1, Unicode, or another encoding mode and carrying the encoded information on one or more extensible bits; the packet is then transmitted to the notebook computer by the network connection module 921. After receiving the TCP data packet, the network connection module 913 of the notebook computer may send it to the event processing module 914, and the event processing module 914 may decode it using the corresponding decoding technique, thereby obtaining the translated content.
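The packing and unpacking of the selected text, its type information, and the translate/word-take indication bit might be sketched as follows. The one-byte header layout is an assumption made for illustration; GBK is one of the encoding modes mentioned above.

```python
# A sketch of packing the selected text, its type, and the translate/word-take
# indication bit into the TCP data portion. The one-byte header layout is an
# assumption; GBK is one of the encoding modes mentioned above.

TYPE_TEXT = 0x01       # hypothetical type tag for text content
FLAG_TRANSLATE = 0x80  # agreed extensible bit: 1 = translate, 0 = take words

def pack_text_request(text: str, translate: bool) -> bytes:
    header = TYPE_TEXT | (FLAG_TRANSLATE if translate else 0)
    return bytes([header]) + text.encode("gbk")

def unpack_text_request(data: bytes):
    header, body = data[0], data[1:]
    return body.decode("gbk"), bool(header & FLAG_TRANSLATE)

packet = pack_text_request("Today is a ...", translate=True)
text, translate = unpack_text_request(packet)
assert translate and text == "Today is a ..."
```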
It should be understood that, the process of sending the first content and the second request information to the sink end device by the source end device may be implemented by a TCP packet; or may be implemented by BLE packets. The implementation process of the BLE packet may be combined with the description in the above embodiment, and will not be repeated here for brevity.
For the GUI shown in FIG. 4
When the notebook computer detects that the user clicks the right mouse button on the picture 401, the UI display module 915 in the notebook computer may draw the function list 402. When the notebook computer detects the operation of the user selecting the recognition function in the function list 402, the event processing module 914 of the notebook computer may generate a TCP data packet, where the TCP data portion of the TCP data packet may include the image content of the picture 401 and type information of the image content. In this embodiment of the present application, the function of the second request information may be implemented by the type information of the first content. For example, after the mobile phone learns that the type information of the first content is image, the mobile phone can learn that the notebook computer wants object recognition or shopping performed on the image. Alternatively, the TCP data packet may carry only the image content of the picture 401; after obtaining the image content, the mobile phone may determine the type information of the first content, and thereby determine through the type information (e.g., image) that the notebook computer wants the first content recognized, shopped, translated, or word-taken.
In one embodiment, the event processing module 914 may also carry indication information in the TCP data portion of the TCP data packet, where the indication information indicates recognizing, shopping, translating, or taking words from the image content of the picture 401. For example, the TCP data portion may include a plurality of bits, among which are extensible bits. The notebook computer and the mobile phone may agree on the content of 2 extensible bits: when the 2 extensible bits are 00, the mobile phone learns that the notebook computer needs the image content of the picture 401 recognized; when they are 01, the mobile phone learns that the notebook computer needs a shopping link queried for the object in the image content of the picture 401; when they are 10, the mobile phone learns that the notebook computer requests translation of the image content of the picture 401; and when they are 11, the mobile phone learns that the notebook computer requests words taken from the image content of the picture 401.
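The 2-bit operation code agreed above decodes as in the following sketch; the surrounding byte layout is illustrative.

```python
# A sketch of decoding the 2 agreed extensible bits for image content:
# 00 = recognize, 01 = query shopping link, 10 = translate, 11 = take words.
# The surrounding byte layout is illustrative.

IMAGE_OPS = {
    0b00: "recognize",
    0b01: "query shopping link",
    0b10: "translate",
    0b11: "take words",
}

def decode_image_op(header: int) -> str:
    """Read the two agreed extensible bits from a hypothetical header byte."""
    return IMAGE_OPS[header & 0b11]

assert decode_image_op(0b00) == "recognize"
assert decode_image_op(0b11) == "take words"
```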
The event processing module 914 may encode the image content of the picture 401 using an image encoding technique and carry the encoded information on one or more extensible bits in the TCP data portion. After receiving the TCP data packet, the network connection module 921 may send it to the event processing module 922, so that the event processing module 922 decodes the image content of the picture 401 through an image decoding technique. For example, after obtaining the image content of the picture 401, the type information (e.g., image) of the image content, and the indication information (with the 2 extensible bits being 00) indicating that the mobile phone recognizes the image content, the event processing module 922 of the mobile phone may call the interface of the object recognition function in the capability center to recognize the image content.
After obtaining a recognition result (e.g., including a textual description of the object in the image, a thumbnail of the object, and a shopping link for the object), the event processing module 922 may generate a TCP data packet and carry the recognition content in the TCP data portion. The event processing module 922 may encode the textual description and the shopping link information using GBK, ISO8859-1, Unicode, or another encoding mode, encode the thumbnail of the object using an image encoding technique, and carry the encoded information on one or more extensible bits in the TCP data portion; the packet is then transmitted to the notebook computer by the network connection module 921. After receiving the TCP data packet, the network connection module 913 of the notebook computer may send it to the event processing module 914, and the event processing module 914 may decode it using the corresponding decoding techniques, thereby obtaining the object recognition result.
For the GUI shown in FIG. 5
The process in which the event processing module 914 sends photo 1 may refer to the description of the above embodiments and, for brevity, is not repeated here.
Unlike the internal implementation shown in fig. 4, the event processing module 914 in fig. 4 sets the 2 extensible bits of the TCP data portion to 00 (indicating that the mobile phone is requested to recognize the image content), whereas the event processing module 914 in fig. 5 sets them to 11 (indicating that the mobile phone is requested to take words from the image content).
The event processing module 922 of the mobile phone decodes the image content of photo 1 through an image decoding technique. For example, after obtaining the image content of photo 1, the type information (e.g., image) of the image content, and the indication information (with the 2 extensible bits being 11) indicating that the mobile phone takes words from the image content, the event processing module 922 of the mobile phone may call the interface of the word taking function in the capability center to take words from the image content. For brevity, the specific word taking process may refer to the description in the above embodiments and is not repeated here.
It should also be understood that the internal implementation process of the GUI shown in fig. 6 is similar to that of fig. 3, except that the first content sent to the mobile phone by the notebook computer in fig. 3 is the original text content selected on the notebook computer, where the translation result returned by the mobile phone may include the translated text content corresponding to the selected original text content; in fig. 6, the first content sent to the mobile phone by the notebook computer is the original text content of the whole document, where the result returned by the mobile phone may include the translated text content corresponding to the original text content of the whole document.
It should also be appreciated that the internal implementation of the GUI shown in fig. 7 is similar to that of fig. 4. The difference is that the first content sent to the mobile phone by the notebook computer in fig. 4 is the picture 401 at the position of the notebook computer's cursor, and the second indication information is used to instruct the mobile phone to recognize the first content; whereas the first content sent to the mobile phone by the notebook computer in fig. 7 is the image content displayed in the window 703 on the notebook computer, and the second indication information is used to instruct a query for a shopping link of the object corresponding to the image content. In addition, as shown in (b) of fig. 4, the notebook computer detects that the user selects the picture 401 and clicks the right mouse button, and then displays the function list 402; when the notebook computer detects that the user selects the recognition function, the notebook computer may send the picture 401 and request information to the mobile phone, where the request information is used to request the mobile phone to recognize the picture 401. In (b) to (c) of fig. 7, the notebook computer displays the function list 701 before the content is selected; when detecting that the user selects the shopping function 702 from the function list 701, the notebook computer starts detecting the content selected by the user. When the notebook computer detects that the user selects the image information in the window 703, the notebook computer may send the image information in the window 703 and request information to the mobile phone, where the request information is used to request the mobile phone to query the shopping link of the object corresponding to the image information.
For the GUI shown in FIG. 8
When the notebook computer detects the operation of the user selecting the smart voice function 703, the notebook computer may receive, through the microphone, a voice command input by the user, and may generate a TCP data packet through the event processing module 914, where the TCP data portion of the TCP data packet may include the voice command and type information of the voice command. In this embodiment of the present application, the function of the second request information may be implemented by the type information of the first content. For example, after the mobile phone learns that the type information of the first content is voice, the mobile phone can learn that the notebook computer wants the user intention corresponding to the voice processed. Alternatively, the TCP data packet may carry only the voice command; after obtaining the voice command, the mobile phone may determine the type information of the first content, and thereby determine through the type information (e.g., voice) that the notebook computer wants the mobile phone to process the user intention corresponding to the voice.
In one embodiment, the event processing module 914 may also carry indication information in the TCP data portion of the TCP data packet, where the indication information is used to indicate processing of the user intention corresponding to the voice command. For example, the TCP data portion may include a plurality of bits, among which are extensible bits. The notebook computer and the mobile phone may agree on the content of a certain extensible bit: when the extensible bit is 1, the mobile phone learns that the notebook computer wants the mobile phone to process the user intention corresponding to the voice command.
The event processing module 914 may encode the voice command using an audio encoding technique and carry the encoded information on one or more extensible bits in the TCP data portion. After receiving the TCP data packet, the network connection module 921 of the mobile phone may send it to the event processing module 922, so that the event processing module 922 decodes the voice command through an audio decoding technique. For example, after obtaining the voice command, the type information (e.g., voice) of the voice command, and the indication to process the user intention of the voice command (the extensible bit being 1), the event processing module 922 of the mobile phone may call the interface of the smart voice function in the capability center to process the user intention corresponding to the voice command.
The event processing module 922 may generate a TCP packet after acquiring a processing result for the user's intention, and carry the processing result in a TCP data portion of the TCP packet.
For example, if the processing result is text, the event processing module 922 may encode the text using GBK, ISO8859-1, Unicode, or another encoding mode, carry the encoded information on one or more extensible bits in the TCP data portion, and transmit the packet to the notebook computer through the network connection module 921. After receiving the TCP data packet, the network connection module 913 of the notebook computer may send it to the event processing module 914, and the event processing module 914 may decode it using the corresponding decoding technique, thereby obtaining the processing result. The notebook computer can then convert the text into voice content through a text-to-speech module, so as to prompt the voice content to the user.
For another example, if the processing result is speech, the event processing module 922 may encode the speech using an audio encoding mode, carry the encoded information on one or more extensible bits in the TCP data portion, and transmit the packet to the notebook computer through the network connection module 921. After receiving the TCP data packet, the network connection module 913 of the notebook computer may send it to the event processing module 914, and the event processing module 914 may decode it using the corresponding decoding technique, thereby obtaining the processing result. The notebook computer may then prompt the voice content to the user.
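The branch between text results and voice results on the source end device might be sketched as follows; text_to_speech and play_audio are hypothetical placeholders for the real conversion and playback modules.

```python
# A sketch of how the source end device might branch on the type of the
# returned processing result: text is converted to speech before playback,
# while a voice result is played directly. text_to_speech and play_audio are
# hypothetical placeholders for the real conversion and playback modules.

def text_to_speech(text: str) -> bytes:
    return text.encode("utf-8")  # stand-in for a real TTS engine

def play_audio(audio: bytes) -> None:
    print(f"playing {len(audio)} bytes of audio")

def handle_result(result_type: str, payload: bytes) -> None:
    if result_type == "text":
        play_audio(text_to_speech(payload.decode("utf-8")))
    elif result_type == "voice":
        play_audio(payload)  # already audio, play as-is

handle_result("text", "Cloudy turning sunny today, 10℃ to 22℃.".encode("utf-8"))
```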
S1005, the source terminal device prompts the processing result of the first content to the user.
For example, as shown in fig. 3 (c), the UI display module 915 of the notebook computer may draw a window 303 and display the translated content through the window 303.
For example, as shown in fig. 3 (d), the UI display module 915 of the notebook computer may also draw a window on the original text content, thereby displaying the translated text content in the window.
For example, as shown in fig. 4 (c), the UI display module 915 of the notebook computer may draw the window 404 and display information of an object, thumbnail information of the object, and a corresponding shopping link in the window 404.
For example, as shown in fig. 5 (c), the UI display module 915 of the notebook computer may draw the window 504 and present the word taking result of the content on photo 1 in the window 504.
Illustratively, as shown in (b) of fig. 8, the notebook computer may prompt the user "Cloudy turning sunny today, temperature 10℃ to 22℃" through a speaker.
In the embodiment of the application, the user can use the functions of the second electronic device on the first electronic device, which expands the capability boundary of the first electronic device, so that tasks that are difficult for the first electronic device alone can be completed conveniently and efficiently, improving the user's experience.
Fig. 11 shows a schematic block diagram of an apparatus 1100 provided by an embodiment of the present application. The apparatus 1100 may be disposed in the first electronic device in fig. 10, where the apparatus 1100 includes: a transmitting unit 1110 for requesting capability information of the second electronic device; a receiving unit 1120, configured to receive the capability information sent by the second electronic device, where the capability information includes one or more functions, and the one or more functions include a first function; a detection unit 1130 for detecting a first operation by a user; the sending unit 1110 is further configured to send, in response to the first operation, first content and first request information to the second electronic device, where the first request information is used to request the second electronic device to process the first content using the first function; the receiving unit 1120 is further configured to receive a processing result of the first content by the second electronic device; a prompting unit 1140 for prompting the processing result to the user.
Fig. 12 shows a schematic block diagram of an apparatus 1200 provided by an embodiment of the present application. The apparatus 1200 may be disposed in the second electronic device in fig. 10, where the apparatus 1200 includes: a receiving unit 1210, configured to receive first request information sent by the first electronic device, where the first request information is used to request capability information of the second electronic device; a transmitting unit 1220 configured to transmit the capability information to the first electronic device, where the capability information includes one or more functions, and the one or more functions include a first function; the receiving unit 1210 is further configured to receive first content and second request information sent by the first electronic device, where the second request information is used for the second electronic device to process the first content using the first function; a processing unit 1230 for processing the first content using the first function according to the second request information; the sending unit 1220 is further configured to send a processing result of the first content to the first electronic device.
Fig. 13 shows a schematic block diagram of an electronic device 1300 provided in an embodiment of the present application. As shown in fig. 13, the electronic device includes: one or more processors 1310 and one or more memories 1320, where the one or more memories 1320 store one or more computer programs, and the one or more computer programs include instructions. The instructions, when executed by the one or more processors 1310, cause the first electronic device or the second electronic device to perform the technical solutions of the above embodiments.
The embodiment of the application provides a system, which comprises a first electronic device and a second electronic device, and is used for executing the technical scheme in the embodiment. The implementation principle and technical effects are similar to those of the related embodiments of the method, and are not repeated here.
An embodiment of the present application provides a computer program product, which when executed on a first electronic device (or a notebook computer in the foregoing embodiment) causes the first electronic device to execute the technical solution in the foregoing embodiment. The implementation principle and technical effects are similar to those of the related embodiments of the method, and are not repeated here.
An embodiment of the present application provides a computer program product, which when the computer program product runs on a second electronic device (or a mobile phone in the foregoing embodiment), causes the second electronic device to execute the technical solution in the foregoing embodiment. The implementation principle and technical effects are similar to those of the related embodiments of the method, and are not repeated here.
An embodiment of the present application provides a readable storage medium, where the readable storage medium includes instructions, when the instructions are executed on a first electronic device (or a notebook computer in the foregoing embodiment), cause the first electronic device to execute the technical solution of the foregoing embodiment. The implementation principle and technical effect are similar, and are not repeated here.
An embodiment of the present application provides a readable storage medium, where the readable storage medium includes instructions, when the instructions are executed on a second electronic device (or a mobile phone in the foregoing embodiment), cause the second electronic device to execute the technical solution of the foregoing embodiment. The implementation principle and technical effect are similar, and are not repeated here.
The embodiment of the application provides a chip for executing instructions, and when the chip runs, the technical scheme in the embodiment is executed. The implementation principle and technical effect are similar, and are not repeated here.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, and are not repeated herein.
In the several embodiments provided in this application, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application may be embodied essentially, or in the part contributing to the prior art, or in part, in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The foregoing is merely specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily think about changes or substitutions within the technical scope of the present application, and the changes and substitutions are intended to be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (21)

1. A system comprising a first electronic device and a second electronic device, characterized in that,
the first electronic device is used for requesting the capability information of the second electronic device;
the second electronic device is configured to send the capability information to the first electronic device, where the capability information includes one or more functions, the one or more functions include a first function, and the first electronic device has limited capability of using the first function;
the first electronic device is further configured to send, when a first operation of a user is detected, first content and first request information to the second electronic device, where the first request information is used to request the second electronic device to process the first content using the first function;
the second electronic device is further configured to process the first content using the first function according to the first request information and send a processing result of the first content to the first electronic device;
the first electronic device is further configured to prompt the user for the processing result;
the type of the first content is text, and the second electronic device is specifically configured to take words from the first content and/or translate the first content; or,
The type of the first content is a picture, and the second electronic device is specifically configured to use at least one of the following processing modes: identifying objects according to the first content, inquiring shopping links according to the first content, extracting words from texts in the first content and translating the texts in the first content; or,
the first content is of a voice instruction type, and the second electronic device is specifically configured to identify a user intention according to the first content and perform intention processing.
2. The system of claim 1, wherein the first electronic device is specifically configured to:
displaying a function list when detecting the operation of selecting the first content by a user, wherein the function list comprises the first function;
and detecting the operation of selecting the first function by a user, and sending the first content and the first request information to the second electronic device.
3. The system of claim 2, wherein the first electronic device is specifically configured to:
and displaying the function list according to the type of the first content.
4. The system of claim 1, wherein the first electronic device is specifically configured to:
In response to receiving the capability information, displaying a list of functions, the list of functions including the one or more functions;
responsive to detecting a user selection of the first function from the one or more functions, beginning to detect user-selected content;
and in response to detecting the operation of selecting the first content by the user, sending the first content and the first request information to the second electronic device.
5. The system of claim 4, wherein the first electronic device is specifically configured to:
and sending the first content and the first request information to the second electronic device in response to detecting that the user selects the first content and detecting that the operation of selecting other content by the user is not performed within a preset time period from the time when the user selects the first content.
6. The system of claim 4 or 5, wherein the first electronic device is further configured to: in response to detecting the operation of selecting second content by the user, send the second content and second request information to the second electronic device, wherein the second request information is used for requesting the second electronic device to process the second content by using the first function.
7. The system of claim 1, wherein the first electronic device is specifically configured to:
and in response to the operation of a user selecting first content and clicking a first key, sending the first content and the first request information to the second electronic device, wherein the first key is associated with the first function.
8. The system of any one of claims 1 to 5, or claim 7, wherein an account registered on the first electronic device is associated with an account registered on the second electronic device.
9. A method of invoking capabilities of other devices, the method being applied to a first electronic device, the method comprising:
the first electronic device requests the capability information of the second electronic device;
the first electronic device receives the capability information sent by the second electronic device, wherein the capability information comprises one or more functions, the one or more functions comprise a first function, and the first electronic device has limited capability of using the first function;
when the first electronic device detects a first operation of a user, first content and first request information are sent to the second electronic device, wherein the first request information is used for requesting the second electronic device to process the first content by using the first function;
the first electronic device receives a processing result of the first content by the second electronic device;
the first electronic device prompts the processing result to the user;
the type of the first content is text, and the processing result of the second electronic device on the first content is obtained by using the following processing mode: word taking is carried out on the first content, and/or translation is carried out on the first content; or,
the type of the first content is a picture, and the processing result of the second electronic device on the first content is obtained by using at least one of the following processing modes: identifying objects according to the first content, inquiring shopping links according to the first content, extracting words from texts in the first content and translating the texts in the first content; or,
the type of the first content is a voice instruction, and the processing result of the second electronic device on the first content is obtained by using the following processing mode: and identifying the user intention according to the first content and carrying out intention processing.
10. The method of claim 9, wherein the first electronic device sending the first content and the first request information to the second electronic device when detecting the first operation of the user comprises:
When detecting the operation of selecting the first content by a user, the first electronic device displays a function list, wherein the function list comprises the first function;
and detecting the operation of selecting the first function by a user, and sending the first content and the first request information to the second electronic device by the first electronic device.
11. The method of claim 10, wherein the first electronic device displaying a list of functions comprises:
and the first electronic device displays the function list according to the type of the first content.
12. The method of claim 9, wherein the first electronic device sending the first content and the first request information to the second electronic device when detecting the first operation of the user comprises:
in response to receiving the capability information, the first electronic device displays a list of functions, the list of functions including the one or more functions;
responsive to detecting a user selection of the first function from the one or more functions, the first electronic device begins detecting user-selected content;
in response to detecting a user selection of the first content, the first electronic device transmits the first content and the first request information to the second electronic device.
13. The method of claim 12, wherein the first electronic device sending the first content and the first request information to the second electronic device in response to detecting a user selection of the first content comprises:
and the first electronic device sends the first content and the first request information to the second electronic device in response to detecting that the user selects the first content and detecting that the operation of selecting other content by the user is not performed within a preset time period from the time when the user selects the first content.
14. The method according to claim 12 or 13, characterized in that the method further comprises:
in response to detecting operation of selecting second content by a user, the first electronic device sends the second content and second request information to the second electronic device, wherein the second request information is used for requesting the second electronic device to process the second content by using the first function.
15. The method of claim 9, wherein the first electronic device sending the first content and the first request information to the second electronic device when detecting the first operation of the user comprises:
In response to a user selecting first content and clicking a first key, the first electronic device sends the first content and the first request information to the second electronic device, wherein the first key is associated with the first function.
16. The method of any one of claims 9 to 13, or claim 15, wherein an account registered on the first electronic device is associated with an account registered on the second electronic device.
17. A method of invoking capabilities of a further device, the method being applied to a second electronic device, the method comprising:
the second electronic device receives first request information sent by a first electronic device, wherein the first request information is used for requesting capability information of the second electronic device;
the second electronic device sends the capability information to the first electronic device, wherein the capability information comprises one or more functions, the one or more functions comprise a first function, and the first electronic device has limited capability of using the first function;
the second electronic device receives first content and second request information sent by the first electronic device, wherein the second request information is used for the second electronic device to process the first content using the first function;
the second electronic device processes the first content by using the first function according to the second request information and sends a processing result of the first content to the first electronic device;
the type of the first content is text, and the second electronic device processing the first content by using the first function according to the second request information comprises: the second electronic device takes words from the first content and/or translates the first content by using the first function according to the second request information; or,
the type of the first content is a picture, and the second electronic device processing the first content by using the first function according to the second request information comprises at least one of the following processing modes: the second electronic device, by using the first function according to the second request information, identifies objects according to the first content, inquires shopping links according to the first content, takes words from texts in the first content, and translates the texts in the first content; or,
the type of the first content is a voice instruction, and the second electronic device, by using the first function according to the second request information, identifies the user intention according to the first content and performs intention processing.
18. The method of claim 17, wherein the account number registered on the first electronic device is associated with the account number registered on the second electronic device.
19. An electronic device, comprising:
one or more processors;
one or more memories;
the one or more memories store one or more computer programs comprising instructions that, when executed by the one or more processors, cause the electronic device to perform the method of any of claims 9-16.
20. An electronic device, comprising:
one or more processors;
one or more memories;
the one or more memories store one or more computer programs comprising instructions that, when executed by the one or more processors, cause the electronic device to perform the method of claim 17 or 18.
21. A computer readable storage medium comprising computer instructions which, when run on an electronic device, cause the electronic device to perform the method of any one of claims 9 to 16; or,
The computer instructions, when run on an electronic device, cause the electronic device to perform the method as claimed in claim 17 or 18.
CN202011527018.9A 2020-08-13 2020-12-22 Method for calling capabilities of other devices, electronic device, system and storage medium Active CN114666441B (en)

Priority Applications (5)

Application Number Priority Date Filing Date Title
CN202011527018.9A CN114666441B (en) 2020-12-22 2020-12-22 Method for calling capabilities of other devices, electronic device, system and storage medium
US18/041,196 US20230305680A1 (en) 2020-08-13 2020-12-31 Method for invoking capability of another device, electronic device, and system
EP20949470.7A EP4187876A4 (en) 2020-08-13 2020-12-31 Method for invoking capabilities of other devices, electronic device, and system
CN202080104076.2A CN116171568A (en) 2020-08-13 2020-12-31 Method for calling capabilities of other equipment, electronic equipment and system
PCT/CN2020/142564 WO2022032979A1 (en) 2020-08-13 2020-12-31 Method for invoking capabilities of other devices, electronic device, and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011527018.9A CN114666441B (en) 2020-12-22 2020-12-22 Method for calling capabilities of other devices, electronic device, system and storage medium

Publications (2)

Publication Number Publication Date
CN114666441A CN114666441A (en) 2022-06-24
CN114666441B true CN114666441B (en) 2024-02-09

Family

ID=82024136

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011527018.9A Active CN114666441B (en) 2020-08-13 2020-12-22 Method for calling capabilities of other devices, electronic device, system and storage medium

Country Status (1)

Country Link
CN (1) CN114666441B (en)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106940662A (en) * 2017-03-17 2017-07-11 上海传英信息技术有限公司 A kind of multi-task planning method of mobile terminal
CN109101329A (en) * 2018-07-25 2018-12-28 陕西师范大学 The finegrained tasks distribution method and system of data are acquired by multiple mobile terminals
WO2020034227A1 (en) * 2018-08-17 2020-02-20 华为技术有限公司 Multimedia content synchronization method and electronic device
CN109660842A (en) * 2018-11-14 2019-04-19 华为技术有限公司 A kind of method and electronic equipment playing multi-medium data
CN111371849A (en) * 2019-02-22 2020-07-03 华为技术有限公司 Data processing method and electronic equipment

Also Published As

Publication number Publication date
CN114666441A (en) 2022-06-24

Similar Documents

Publication Publication Date Title
EP3872807B1 (en) Voice control method and electronic device
US11567623B2 (en) Displaying interfaces in different display areas based on activities
CN110910872B (en) Voice interaction method and device
WO2020000448A1 (en) Flexible screen display method and terminal
CN113885759A (en) Notification message processing method, device, system and computer readable storage medium
CN114327666B (en) Application starting method and device and electronic equipment
US20220358089A1 (en) Learning-Based Keyword Search Method and Electronic Device
CN116360725B (en) Display interaction system, display method and device
CN111881315A (en) Image information input method, electronic device, and computer-readable storage medium
CN113641271B (en) Application window management method, terminal device and computer readable storage medium
WO2023273543A1 (en) Folder management method and apparatus
WO2020062014A1 (en) Method for inputting information into input box and electronic device
US20210385187A1 (en) Method and device for performing domain name resolution by sending key value to grs server
WO2023029916A1 (en) Annotation display method and apparatus, terminal device, and readable storage medium
CN112416984A (en) Data processing method and device
WO2022062902A1 (en) File transfer method and electronic device
CN114666441B (en) Method for calling capabilities of other devices, electronic device, system and storage medium
CN114003319B (en) Method for off-screen display and electronic equipment
CN113497835B (en) Multi-screen interaction method, electronic equipment and computer readable storage medium
WO2022206762A1 (en) Display method, electronic device and system
EP4287014A1 (en) Display method, electronic device, and system
WO2023124829A1 (en) Collaborative voice input method, electronic device, and computer-readable storage medium
WO2022135273A1 (en) Method for invoking capabilities of other devices, electronic device, and system
CN111566631B (en) Information display method and device
CN116301510A (en) Control positioning method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant