CN115048193A - Multi-device distributed scheduling method and related devices - Google Patents

Multi-device distributed scheduling method and related devices

Info

Publication number
CN115048193A
Authority
CN
China
Prior art keywords
network
equipment
request
service
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110255387.5A
Other languages
Chinese (zh)
Inventor
成翔
李加润
季冬
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN202110255387.5A
Publication of CN115048193A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806Task transfer initiation or dispatching
    • G06F9/4843Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/448Execution paradigms, e.g. implementations of programming paradigms
    • G06F9/4488Object-oriented
    • G06F9/449Object-oriented method invocation or resolution
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5083Techniques for rebalancing the load in a distributed system
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Abstract

The application provides a multi-device distributed scheduling method and related devices. The method comprises the following steps: a first device receives first request call information sent by a second device, where the first request call information is used to request the first device to communicate with a third device; the second device is a device in a first network, where the first network is a network in which the first device establishes a communication connection with at least one device through a first communication mode; the third device is a device in a second network, where the second network is a network in which the first device establishes a communication connection with at least one device through a second communication mode; and the first device sends an instruction to execute a target call service to the third device based on the first request call information. The method can realize function expansion by using other devices without improving the hardware capability of the device, while enhancing user experience.

Description

Multi-device distributed scheduling method and related devices
Technical Field
The present application relates to distributed technologies, and in particular, to a multi-device distributed scheduling method and related devices.
Background
At present, the memory of Internet of Things (IOT) products is mostly at the KB level, and the performance of these products is limited by this small memory. For example, a camera may support face acquisition but not face recognition, so another device with a face recognition function usually has to be added to complete face recognition, which increases the user's cost. For another example, a lightweight smart watch has no photographing function, so photographing through the watch cannot be achieved; adding a photographing function to the watch would increase the burden on the lightweight device, affect the overall capability of the device, and also increase cost.
Therefore, how to expand the functions of an IOT device without improving its hardware capability is a problem to be solved urgently.
Disclosure of Invention
The application provides a multi-device distributed scheduling method and related devices, which can use other existing devices to expand the functions of IOT devices, realize multi-device distributed scheduling and improve user experience.
In a first aspect, the present application provides a multi-device distributed scheduling method applied to a first device. The method may include: the first device receives first request call information sent by a second device, where the first request call information is used to request the first device to communicate with a third device; the second device is a device in a first network, where the first network is a network in which the first device establishes a communication connection with at least one device through a first communication mode; the third device is a device in a second network, where the second network is a network in which the first device establishes a communication connection with at least one device through a second communication mode; and the first device sends an instruction to execute a target call service to the third device based on the first request call information.
In the scheme provided by the application, the second device cannot directly call the third device. The first device is connected to the second device through the first communication mode and to the third device through the second communication mode, so the second device can call the third device through the first device acting as a proxy. That is, without disconnecting the connection between the first device and the second device, and without increasing the hardware capability of the second device, the second device can call the third device; the functions of the second device are thus expanded, and user experience is enhanced.
With reference to the first aspect, in a possible implementation manner of the first aspect, the sending, by the first device, of an instruction to execute a target call service to the third device based on the first request call information includes: the first device deserializes the first request call information to obtain a device identifier of the third device and target call service information; the first device establishes a service call channel based on the device identifier of the third device; and the first device sends the instruction to execute the target call service to the third device through the service call channel, where the instruction includes the target call service information.
In the scheme provided by the application, the first device deserializes the first request call information to obtain the device identifier of the third device and the target call service information, thereby calling the third device. The second device can call the third device without disconnecting from the first device and without increasing its own hardware capability; the functions of the second device are expanded, and user experience is enhanced.
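The deserialize-and-relay flow described above can be sketched as follows. This is a minimal illustration in Python; the patent does not specify a wire format or any class names, so the JSON encoding, field names, and `open_channel`/`send` transport methods here are all hypothetical:

```python
import json


class ProxyDevice:
    """First device: deserializes a request and relays it over a service call channel."""

    def __init__(self, second_network):
        self.second_network = second_network  # transport toward the third device
        self.channels = {}                    # device identifier -> established channel

    def handle_request(self, first_request_info: bytes) -> None:
        # Deserialize the first request call information.
        request = json.loads(first_request_info.decode("utf-8"))
        device_id = request["device_id"]   # identifier of the third device
        service_info = request["service"]  # target call service information

        # Establish (or reuse) a service call channel based on the device identifier.
        channel = self.channels.get(device_id)
        if channel is None:
            channel = self.second_network.open_channel(device_id)
            self.channels[device_id] = channel

        # Send the instruction to execute the target call service.
        channel.send(json.dumps({"execute": service_info}).encode("utf-8"))
```

The channel is cached per device identifier, so repeated calls from the second device to the same third device reuse one service call channel.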
With reference to the first aspect, in a possible implementation manner of the first aspect, the sending, by the first device, of an instruction to execute a target call service to the third device based on the first request call information includes: the first device serializes the first request call information to obtain second request call information; and the first device sends the second request call information to the third device, where the second request call information is used to trigger the third device to execute the target call service.
In the scheme provided by the application, the first device serializes the first request call information to obtain the second request call information, and calls the third device by sending the second request call information to it. The call can be completed without disconnecting the first device from the second device and without increasing the hardware capability of the second device; the functions of the second device are expanded by using an existing device, and user experience is also enhanced.
With reference to the first aspect, in a possible implementation manner of the first aspect, after the first device sends the second request call information to the third device, the method further includes: the first device invokes, by reflection, the target call service of the third device based on the second request call information.
In the scheme provided by the application, the third device is called through the second request call information and a reflection mechanism, so the service does not need to be bound first. Compared with a traditional remote call, this calling mode is simpler, and the complexity of the first device calling the third device as a proxy for the second device is reduced to a certain extent.
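The patent relies on the Java reflection mechanism for this step; the same idea can be illustrated in Python with `getattr`, where the serialized request carries a service name, a method name, and arguments, and the callee resolves the method by name at runtime instead of binding the service in advance. The `CameraService` class, the registry, and all names below are hypothetical, not taken from the patent:

```python
import json


class CameraService:
    """Example service assumed to exist on the third device (hypothetical)."""

    def take_photo(self, resolution: str) -> str:
        return f"photo captured at {resolution}"


SERVICES = {"camera": CameraService()}  # service registry on the third device


def invoke_by_reflection(second_request_info: str):
    """Dynamically invoke the target call service named in the request."""
    request = json.loads(second_request_info)
    service = SERVICES[request["service"]]
    # Reflection-style lookup: the method is resolved by name at call time,
    # so no interface has to be bound ahead of the call.
    method = getattr(service, request["method"])
    return method(*request.get("args", []))
```

Because the method is resolved by name at call time, the caller needs no pre-compiled stub for the service, which is the simplification over traditional remote calls that the passage describes.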
In a second aspect, the present application provides a multi-device distributed scheduling method applied to a second device. The method may include: the second device sends first request call information to a first device, where the first request call information is used to request the first device to communicate with a third device; the second device is a device in a first network, where the first network is a network in which the first device establishes a communication connection with at least one device through a first communication mode; and the third device is a device in a second network, where the second network is a network in which the first device establishes a communication connection with at least one device through a second communication mode.
In the scheme provided by the application, by sending the first request call information to the first device, the second device requests the first device to call the third device on its behalf. The third device can be called without disconnecting the connection between the first device and the second device and without increasing the hardware capability of the second device, so the functions of the second device are expanded and user experience is enhanced.
With reference to the second aspect, in a possible implementation manner of the second aspect, the method further includes: the second device receives an instruction, sent by the first device, to execute a target call service.
In the scheme provided by the application, the second device can call other devices through the first device, and similarly, other devices can call the second device through the first device. Multi-device distributed scheduling is thereby realized, the functions of the devices are expanded by using the other existing devices in the network, and user experience is enhanced.
In a third aspect, the present application provides a multi-device distributed scheduling method applied to a third device. The method may include: the third device receives an instruction, sent by a first device, to execute a target call service; the third device is a device in a second network, where the second network is a network in which the first device establishes a communication connection with at least one device through a second communication mode.
In the scheme provided by the application, the third device receives the instruction to execute the target call service sent by the first device and executes the target call service, so the second device succeeds in calling the third device. In this process, the second device does not disconnect from the first device and establish a connection with the third device to make the call, nor is the function of the third device obtained by enhancing the hardware capability of the second device; instead, the existing first device is used to call the third device. The functions of the second device are expanded, and user experience is also enhanced.
With reference to the third aspect, in a possible implementation manner of the third aspect, the receiving, by the third device, of the instruction sent by the first device to execute the target call service includes: the third device receives second request call information sent by the first device, where the second request call information is used to trigger the third device to execute the target call service.
In the scheme provided by the application, the third device triggers execution of the target call service by receiving the second request call information, completing the process of being called by the second device; the functions of the second device are expanded, and user experience is enhanced.
With reference to the third aspect, in a possible implementation manner of the third aspect, the method further includes: the third device sends third request call information to the first device, where the third request call information is used to request the first device to communicate with a fourth device; the fourth device is a device in a first network, where the first network is a network in which the first device establishes a communication connection with at least one device through a first communication mode.
In the scheme provided by the application, the third device can be called by other devices through the first device, and similarly, the third device can also call other devices through the first device. Multi-device distributed scheduling is thereby realized, the functions of the devices are expanded by using the other existing devices in the network, and user experience is enhanced.
In a fourth aspect, the present application provides an electronic device comprising a processor, a memory, and a communication interface, where the memory, the processor, and the communication interface are coupled, the memory is configured to store computer program code, the computer program code comprises computer instructions, and the processor is configured to execute the multi-device distributed scheduling method provided in the first aspect and any implementation manner of the first aspect.
In a fifth aspect, the present application provides an electronic device comprising a processor, a memory, and a communication interface, where the memory, the processor, and the communication interface are coupled, the memory is configured to store computer program code, the computer program code comprises computer instructions, and the processor is configured to execute the multi-device distributed scheduling method provided in the second aspect and any implementation manner of the second aspect.
In a sixth aspect, the present application provides an electronic device comprising a processor, a memory, and a communication interface, where the memory, the processor, and the communication interface are coupled, the memory is configured to store computer program code, the computer program code comprises computer instructions, and the processor is configured to execute, through the program code in the memory, the multi-device distributed scheduling method provided in the third aspect and any implementation manner of the third aspect.
In a seventh aspect, the present application provides a computer-readable storage medium storing a computer program; when the computer program is executed by a processor, the processor performs the multi-device distributed scheduling method provided in the first aspect and any implementation manner of the first aspect.
In an eighth aspect, the present application provides a computer-readable storage medium storing a computer program; when the computer program is executed by a processor, the processor performs the multi-device distributed scheduling method provided in the second aspect and any implementation manner of the second aspect.
In a ninth aspect, the present application provides a computer-readable storage medium storing a computer program; when the computer program is executed by a processor, the processor performs the multi-device distributed scheduling method provided in the third aspect and any implementation manner of the third aspect.
In a tenth aspect, the present application provides a computer program product comprising instructions that, when executed by a computer, enable the computer to perform the flow of the multi-device distributed scheduling method provided in the first aspect and any implementation manner of the first aspect.
In an eleventh aspect, the present application provides a computer program product comprising instructions that, when executed by a computer, enable the computer to perform the flow of the multi-device distributed scheduling method provided in the second aspect and any implementation manner of the second aspect.
In a twelfth aspect, the present application provides a computer program product comprising instructions that, when executed by a computer, enable the computer to perform the flow of the multi-device distributed scheduling method provided in the third aspect and any implementation manner of the third aspect.
It can be understood that the electronic device provided in the fourth aspect, the computer-readable storage medium provided in the seventh aspect, and the computer program product provided in the tenth aspect are all configured to execute the multi-device distributed scheduling method provided in the first aspect; the electronic device of the fifth aspect, the storage medium of the eighth aspect, and the program product of the eleventh aspect are all configured to execute the method of the second aspect; and the electronic device of the sixth aspect, the storage medium of the ninth aspect, and the program product of the twelfth aspect are all configured to execute the method of the third aspect. For the beneficial effects that can be achieved, reference may be made to the beneficial effects of the multi-device distributed scheduling methods provided in the first, second, and third aspects, which are not described again here.
Drawings
Fig. 1 is a schematic diagram of a multi-device connection manner provided in an embodiment of the present application;
fig. 2 is a schematic diagram of another multi-device connection method provided in the embodiment of the present application;
fig. 3 is a schematic hardware structure diagram of a terminal according to an embodiment of the present application;
fig. 4 is a schematic diagram of a software structure of a terminal according to an embodiment of the present application;
fig. 5 is a schematic diagram of a multi-device distributed scheduling system architecture according to an embodiment of the present application;
fig. 6 is a schematic flowchart of a multi-device distributed scheduling method according to an embodiment of the present application;
fig. 7 is a field structure diagram of a lightweight service description protocol according to an embodiment of the present application;
fig. 8 is a schematic diagram of a multi-device distributed scheduling scenario provided in an embodiment of the present application;
fig. 9 is a schematic diagram of another multi-device distributed scheduling scenario provided in an embodiment of the present application;
fig. 10 is a schematic interface diagram of a smart watch according to an embodiment of the present application;
fig. 11 is an operation schematic diagram of a smart watch according to an embodiment of the present application;
fig. 12 is a schematic diagram of another multi-device distributed scheduling scenario provided in an embodiment of the present application;
fig. 13 is a schematic diagram of another multi-device distributed scheduling scenario provided in an embodiment of the present application;
fig. 14 is a schematic interface diagram of a smart screen according to an embodiment of the present application;
fig. 15 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 16 is a schematic structural diagram of another electronic device provided in the embodiment of the present application;
fig. 17 is a schematic structural diagram of another electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application are described below clearly and completely with reference to the accompanying drawings, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
First, a part of words and related technologies referred to in the present application will be explained to facilitate understanding by those skilled in the art.
Wi-Fi is a wireless communication technology that provides networking over radio waves. A common example is a wireless router: devices within the effective range covered by the router's radio signal can connect to the network via Wi-Fi, and if the router is connected to an ADSL line or another Internet access line, it is also called a hotspot.
HarmonyOS (the "Hongmeng" system) is a "future-oriented" distributed operating system designed for full-scenario use (mobile office, sports and health, social communication, media entertainment, etc.). On the basis of traditional single-device system capability, HarmonyOS proposes a distributed approach in which one set of system capabilities adapts to multiple terminal forms, and it can support terminal devices such as mobile phones, tablets, smart wearables, smart screens, and in-vehicle systems.
A System Ability (SA) is a basic software or hardware service provided by the OS.
A Feature Ability (FA) represents an Ability with a user interface, used for interacting with users.
A Particle Ability (PA) represents an Ability without a user interface; it mainly provides support for FAs, for example, providing computing power as a background service or providing data access capability as a data warehouse.
An Ability is an important component of an application and an abstraction of the application's capabilities. Abilities are classified into two types: FA and PA.
Serialization is the process of converting the state information of an object into a form that can be stored or transmitted. During serialization, the object writes its current state to temporary or persistent storage; later, the object can be recreated by reading, or deserializing, that state from storage. Accordingly, deserialization is the process of converting the stored or transmitted form of an object back into its state information.
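For example, a round trip between an object's state and a transmissible form can look like the following Python sketch (JSON is used purely as an illustration; the patent does not fix a serialization format, and the `CallRequest` class is hypothetical):

```python
import json


class CallRequest:
    """Object state to be transferred between devices (hypothetical)."""

    def __init__(self, device_id: str, service: str):
        self.device_id = device_id
        self.service = service

    def serialize(self) -> str:
        # Serialization: object state -> storable/transmittable form.
        return json.dumps({"device_id": self.device_id, "service": self.service})

    @classmethod
    def deserialize(cls, data: str) -> "CallRequest":
        # Deserialization: stored/transmitted form -> recreated object state.
        state = json.loads(data)
        return cls(state["device_id"], state["service"])
```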
Remote Procedure Call (RPC) is a technique for requesting a service from a program on a remote computer over a network without needing to understand the underlying network technology. The RPC protocol assumes the existence of some transport protocol, such as the Transmission Control Protocol (TCP) or the User Datagram Protocol (UDP), to carry data between the communicating programs. In the Open Systems Interconnection (OSI) network communication model, RPC spans the transport layer and the application layer. RPC makes it easier to develop applications, including distributed multi-program network applications.
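A minimal RPC round trip over TCP can be sketched as follows. This is a toy illustration of the concept, not the protocol used by the patent: the client sends a serialized request naming a procedure, and the server runs it and returns the serialized result, so the caller never sees how the computation is performed.

```python
import json
import socket
import threading

PROCEDURES = {"add": lambda a, b: a + b}  # procedures the server exposes


def serve_once(server_sock: socket.socket) -> None:
    """Server side: handle a single RPC request (receive, dispatch, reply)."""
    conn, _ = server_sock.accept()
    with conn:
        request = json.loads(conn.recv(4096).decode("utf-8"))
        result = PROCEDURES[request["proc"]](*request["args"])
        conn.sendall(json.dumps({"result": result}).encode("utf-8"))


def rpc_call(port: int, proc: str, *args):
    """Client side: the caller needs no knowledge of how the server computes."""
    with socket.create_connection(("127.0.0.1", port)) as sock:
        sock.sendall(json.dumps({"proc": proc, "args": list(args)}).encode("utf-8"))
        return json.loads(sock.recv(4096).decode("utf-8"))["result"]
```

Real RPC frameworks add interface definitions, framing for large messages, and error handling; the skeleton above only shows the request/response shape.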
The package name is the identity of an application in the Android system. A package name uniquely identifies an application: one package name represents one application, and no two applications use the same package name. In addition, the package name is used mainly by the system to identify applications and is hardly ever directly visible to the user.
Bluetooth Low Energy (BLE) is a Bluetooth communication mode with lower cost and lower power consumption. BLE uses the same Adaptive Frequency Hopping (AFH) technology as conventional Bluetooth, which allows it to maintain stable transmission in the "noisy" radio-frequency environments of residential, industrial, and medical applications. To minimize the cost and power consumption of using AFH, BLE reduces the number of channels from the 79 channels of 1 MHz width used by conventional Bluetooth to 40 channels of 2 MHz width. A new function in BLE compared with conventional Bluetooth is broadcasting: through this function, a slave device can announce that it has data to send to the master device.
A Lite application is a lightweight version of an application.
The Java reflection mechanism means that, at runtime, all the attributes and methods of any class can be obtained, and any method or property of any object can be invoked. That is, the Java reflection mechanism provides the ability to dynamically acquire information and dynamically invoke object methods.
At present, multiple devices can be interconnected through different connection modes. In one case, as shown in fig. 1 (a schematic diagram of a multi-device connection mode provided in an embodiment of the present application), a smart phone is connected to a tablet computer through Wi-Fi and to a smart speaker through Bluetooth. It can be understood that the smart speaker in fig. 1 may also be another IOT device, the smart phone in fig. 1 may also be another device that can connect to the smart speaker through Bluetooth, and the third-party device in fig. 1 may also be another device that can connect to the smart phone through Wi-Fi.
In another case, as shown in fig. 2 (a schematic diagram of another multi-device connection mode provided in an embodiment of the present application), a smart phone, a smart screen, and a tablet computer are directly connected through Wi-Fi, while a smart watch cannot connect to the other devices through Wi-Fi because it lacks the capability to access Wi-Fi. Devices running HarmonyOS can use a distributed network based on Wi-Fi direct connection to realize calls between devices; however, for most IOT products (such as smart watches), running HarmonyOS places high demands on device hardware, and in addition most IOT products do not support protocols such as Wi-Fi and cannot connect directly to other devices.
Based on the above, the present application provides a multi-device distributed scheduling method and related devices. When a second device has already established a communication connection with a first device and can no longer establish a communication connection with a third-party device, the second device may invoke a service of the third-party device through the first device; similarly, the third-party device may invoke a service of the second device through the first device. Function expansion of a device is thus achieved by using existing devices, and user experience is enhanced.
In order to better understand a multi-device distributed scheduling method and related devices disclosed in the embodiments of the present application, a description is first given below to a terminal related to the embodiments of the present application. Referring to fig. 3, fig. 3 is a schematic diagram of a hardware structure of a terminal according to an embodiment of the present disclosure.
The following describes an embodiment specifically by taking the terminal shown in fig. 3 as an example. It should be understood that the terminal shown in fig. 3 may have more or fewer components than shown, may combine two or more components, or may have a different configuration of components. The various components shown in fig. 3 may be implemented in hardware, software, or a combination of hardware and software, including one or more signal processing and/or application specific integrated circuits.
The terminal may include a processor 110, an external memory interface 120, an internal memory 121, a Universal Serial Bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, a button 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a Subscriber Identity Module (SIM) card interface 195, and the like. The sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
It is understood that the illustrated structure of the embodiments of the present application does not constitute a specific limitation to the terminal. In other embodiments of the present application, the terminal may include more or fewer components than shown, or some components may be combined, some components may be split, or a different arrangement of components may be used. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Processor 110 may include one or more processing units. For example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), among others. The different processing units may be independent devices or may be integrated in one or more processors.
The controller may serve as the neural center and command center of the terminal. The controller can generate an operation control signal according to the instruction operation code and the timing signal, thereby controlling instruction fetching and instruction execution.
In the embodiments provided in the present application, the terminal may execute the multi-device distributed scheduling method through the processor 110. A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs to use the instruction or data again, it can be fetched directly from this memory. Repeated accesses are thereby avoided, reducing the waiting time of the processor 110 and improving system efficiency.
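The reuse behavior described above can be illustrated with a minimal sketch. The following Java example is not part of the disclosed method; the class name, the cache capacity, and the stand-in memory read are illustrative assumptions. It shows how a small least-recently-used cache in front of a slower memory lets repeated accesses skip the slow path:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class InstructionCache {
    private final Map<Integer, String> cache;
    public int slowFetches = 0; // counts accesses that had to go to slower memory

    public InstructionCache(int capacity) {
        // accessOrder=true gives least-recently-used eviction order
        this.cache = new LinkedHashMap<Integer, String>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<Integer, String> eldest) {
                return size() > capacity;
            }
        };
    }

    public String fetch(int address) {
        String data = cache.get(address);
        if (data == null) {            // cache miss: take the slow path
            slowFetches++;
            data = "data@" + address;  // stand-in for a real memory read
            cache.put(address, data);
        }
        return data;                   // cache hit: no slow access needed
    }
}
```

Here the `slowFetches` counter stands in for the latency the processor avoids on every cache hit.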
In some embodiments, processor 110 may include one or more interfaces. The interfaces may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, among others.
The I2C interface is a bi-directional synchronous Serial bus that includes a Serial Data Line (SDA) and a Serial Clock Line (SCL). In some embodiments, processor 110 may include multiple sets of I2C buses. The processor 110 may be coupled to the touch sensor 180K, the charger, the flash, the camera 193, etc. through different I2C bus interfaces, respectively. For example: the processor 110 may be coupled to the touch sensor 180K through an I2C interface, so that the processor 110 and the touch sensor 180K communicate through an I2C bus interface to implement the touch function of the terminal.
The I2S interface may be used for audio communication. In some embodiments, processor 110 may include multiple sets of I2S buses. The processor 110 may be coupled to the audio module 170 via an I2S bus to enable communication between the processor 110 and the audio module 170. In some embodiments, the audio module 170 can transmit audio signals to the wireless communication module 160 through the I2S interface, so as to receive phone calls through the bluetooth headset.
The PCM interface may also be used for audio communication, sampling, quantizing and encoding analog signals. In some embodiments, the audio module 170 and the wireless communication module 160 may be coupled by a PCM bus interface. In some embodiments, the audio module 170 may also transmit audio signals to the wireless communication module 160 through the PCM interface, so as to implement a function of answering a call through a bluetooth headset. Both the I2S interface and the PCM interface may be used for audio communication.
The UART interface is a universal serial data bus used for asynchronous communication. The bus may be a bidirectional communication bus, converting the data to be transmitted between serial and parallel forms. In some embodiments, a UART interface is generally used to connect the processor 110 and the wireless communication module 160. For example, the processor 110 communicates with a Bluetooth module in the wireless communication module 160 through a UART interface to implement a Bluetooth function. In some embodiments, the audio module 170 may transmit an audio signal to the wireless communication module 160 through the UART interface, so as to implement the function of playing music through a Bluetooth headset.
MIPI interfaces may be used to connect processor 110 with peripheral devices such as display screen 194, camera 193, and the like. The MIPI Interface includes a Camera Serial Interface (CSI), a Display Serial Interface (DSI), and the like. In some embodiments, the processor 110 and the camera 193 communicate through a CSI interface to implement the terminal's shooting function. The processor 110 and the display screen 194 communicate through the DSI interface to implement the display function of the terminal.
The GPIO interface may be configured by software. The GPIO interface may be configured as a control signal and may also be configured as a data signal. In some embodiments, a GPIO interface may be used to connect the processor 110 with the camera 193, the display 194, the wireless communication module 160, the audio module 170, the sensor module 180, and the like. The GPIO interface may also be configured as an I2C interface, an I2S interface, a UART interface, a MIPI interface, and the like.
The SIM interface may be used to communicate with the SIM card interface 195, implementing functions to transfer data to or read data from the SIM card.
The USB interface 130 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type-C interface, or the like. The USB interface 130 may be used to connect a charger to charge the terminal, and may also be used to transmit data between the terminal and a peripheral device. It may further be used to connect a headset and play audio through the headset. The interface may also be used to connect other electronic devices, such as an AR device.
It should be understood that the connection relationship between the modules illustrated in the embodiment of the present application is only an exemplary illustration, and does not form a limitation on the structure of the terminal. In other embodiments of the present application, the terminal may also adopt different interface connection manners or a combination of multiple interface connection manners in the foregoing embodiments.
The charging management module 140 is configured to receive a charging input from a charger. The charger may be a wireless charger or a wired charger.
The power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charge management module 140 and provides power to the processor 110, the internal memory 121, the external memory, the display 194, the camera 193, the wireless communication module 160, and the like.
The wireless communication function of the terminal may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modem processor, the baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in a terminal may be used to cover a single or multiple communication bands. Different antennas can also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed as a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 150 may provide solutions for wireless communication, including 2G/3G/4G/5G, applied on the terminal. The mobile communication module 150 may include at least one filter, a switch, a power amplifier, a low noise amplifier (LNA), and the like. The mobile communication module 150 may receive electromagnetic waves through the antenna 1, perform processing such as filtering and amplification on the received electromagnetic waves, and transmit them to the modem processor for demodulation. The mobile communication module 150 may also amplify a signal modulated by the modem processor and convert it into electromagnetic waves for radiation through the antenna 1. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the processor 110. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the same device as at least some of the modules of the processor 110.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating a low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then passes the demodulated low frequency baseband signal to a baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then passed to the application processor. The application processor outputs a sound signal through an audio device (not limited to the speaker 170A, the receiver 170B, etc.) or displays an image or video through the display screen 194. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be provided in the same device as the mobile communication module 150 or other functional modules, independent of the processor 110.
The Wireless Communication module 160 may provide solutions for Wireless Communication applied to a terminal, including Wireless Local Area Networks (WLANs), such as Wireless Fidelity (Wi-Fi) Networks, Bluetooth (BT), Global Navigation Satellite System (GNSS), Frequency Modulation (FM), Near Field Communication (NFC), Infrared (IR), and the like. The wireless communication module 160 may be one or more devices integrating at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, performs frequency modulation and filtering processing on electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 160 may also receive a signal to be transmitted from the processor 110, perform frequency modulation and amplification on the signal, and convert the signal into electromagnetic waves via the antenna 2 to radiate the electromagnetic waves.
In some embodiments, the antenna 1 of the terminal is coupled to the mobile communication module 150 and the antenna 2 is coupled to the wireless communication module 160 so that the terminal can communicate with the network and other devices through wireless communication technology. The wireless communication technology may include Global System for Mobile Communications (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Time-Division Code Division Multiple Access (TD-SCDMA), Long Term Evolution (Long Term Evolution, LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technologies, etc. The GNSS may include a Global Positioning System (GPS), a Global Navigation Satellite System (GLONASS), a BeiDou Navigation Satellite System (BDS), a Quasi-Zenith Satellite System (QZSS), and/or a Satellite Based Augmentation System (SBAS).
The terminal implements the display function through the GPU, the display screen 194, and the application processor, etc. The GPU is a microprocessor for image processing, and is connected to the display screen 194 and an application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
The display screen 194 is used to display images, videos, and the like. The display screen 194 includes a display panel. The display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini LED, a Micro LED, a Micro OLED, a quantum dot light-emitting diode (QLED), or the like. In some embodiments, the terminal may include 1 or N display screens 194, N being a positive integer greater than 1.
The terminal can implement a photographing function through the ISP, the camera 193, the video codec, the GPU, the display screen 194, and the application processor, etc.
The ISP is used to process the data fed back by the camera 193. For example, when a user takes a picture, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, an optical signal is converted into an electric signal, and the camera photosensitive element transmits the electric signal to the ISP for processing and converting into an image visible to the naked eye. The ISP can also carry out algorithm optimization on the noise, brightness and skin color of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be provided in camera 193.
The camera 193 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image to the photosensitive element. The photosensitive element may be a Charge Coupled Device (CCD) or a Complementary Metal-Oxide-Semiconductor (CMOS) phototransistor. The light sensing element converts the optical signal into an electrical signal, which is then passed to the ISP where it is converted into a digital image signal. And the ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into image signal in standard RGB, YUV and other formats. In some embodiments, the terminal may include 1 or N cameras 193, N being a positive integer greater than 1.
The digital signal processor is used for processing digital signals; in addition to digital image signals, it can process other digital signals. For example, when the terminal selects a frequency point, the digital signal processor is used to perform a Fourier transform or the like on the frequency point energy.
Video codecs are used to compress or decompress digital video. The terminal may support one or more video codecs. In this way, the terminal can play or record video in a plurality of coding formats, such as: moving Picture Experts Group (MPEG) 1, MPEG2, MPEG3, MPEG4, and the like.
The NPU is a neural-network (NN) computing processor. By drawing on the structure of biological neural networks, for example the transfer mode between neurons of the human brain, it processes input information rapidly and can also continuously self-learn. Applications such as intelligent cognition of the terminal, for example image recognition, face recognition, speech recognition, and text understanding, can be implemented through the NPU.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to extend the storage capability of the terminal. The external memory card communicates with the processor 110 through the external memory interface 120 to implement a data storage function. For example, files such as music, video, etc. are saved in an external memory card.
The internal memory 121 may be used to store computer-executable program code, which includes instructions. The processor 110 executes various functional applications of the terminal and data processing by executing instructions stored in the internal memory 121. The internal memory 121 may include a program storage area and a data storage area. The storage program area may store an operating system, an application (such as a face recognition function, a fingerprint recognition function, a mobile payment function, and the like) required by at least one function, and the like. The storage data area can store data (such as face information template data, fingerprint information template and the like) created in the use process of the terminal and the like. In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory, such as at least one magnetic disk Storage device, a Flash memory device, a Universal Flash Storage (UFS), and the like.
The terminal can implement an audio function through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the earphone interface 170D, and the application processor. Such as music playing, recording, etc.
The audio module 170 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal. The audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be disposed in the processor 110, or some functional modules of the audio module 170 may be disposed in the processor 110.
The speaker 170A, also called a "horn", is used to convert the audio electrical signal into a sound signal. The terminal can listen to music through the speaker 170A or listen to a hands-free call.
The receiver 170B, also called "earpiece", is used to convert the electrical audio signal into an acoustic signal. When the terminal answers a call or voice information, it can answer a voice by placing the receiver 170B close to the human ear.
The microphone 170C, also referred to as a "mic", is used to convert a sound signal into an electrical signal. When making a call or sending voice information, the user can input a sound signal to the microphone 170C by speaking with the mouth close to the microphone 170C. The terminal may be provided with at least one microphone 170C. In other embodiments, the terminal may be provided with two microphones 170C, which can implement a noise reduction function in addition to collecting sound signals. In other embodiments, the terminal may further be provided with three, four, or more microphones 170C to collect sound signals, reduce noise, identify sound sources, implement a directional recording function, and the like.
The headphone interface 170D is used to connect a wired headphone. The headset interface 170D may be the USB interface 130, or may be an Open Mobile Terminal Platform (OMTP) standard interface of 3.5mm, or a Cellular Telecommunications Industry Association (Cellular Telecommunications Industry Association of the USA, CTIA) standard interface.
The pressure sensor 180A is used to sense a pressure signal and can convert the pressure signal into an electrical signal. In some embodiments, the pressure sensor 180A may be disposed on the display screen 194. There are many types of pressure sensors 180A, such as resistive pressure sensors, inductive pressure sensors, and capacitive pressure sensors. A capacitive pressure sensor may include at least two parallel plates having a conductive material. When a force acts on the pressure sensor 180A, the capacitance between the electrodes changes, and the terminal determines the strength of the pressure from the change in capacitance. When a touch operation acts on the display screen 194, the terminal detects the intensity of the touch operation according to the pressure sensor 180A. The terminal may also calculate the touched position based on the detection signal of the pressure sensor 180A. In some embodiments, touch operations that act on the same touch position but with different touch operation intensities may correspond to different operation instructions. For example, when a touch operation whose intensity is less than a first pressure threshold acts on the short message application icon, an instruction for viewing the short message is executed; when a touch operation whose intensity is greater than or equal to the first pressure threshold acts on the short message application icon, an instruction for creating a new short message is executed.
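As a minimal sketch of the threshold logic just described, the mapping from touch intensity to operation instruction may look like the following. The threshold value, class name, and instruction strings are illustrative assumptions, not part of the disclosure:

```java
public class PressureDispatch {
    // Assumed first pressure threshold (arbitrary normalized units).
    static final double FIRST_PRESSURE_THRESHOLD = 0.5;

    // Instruction executed for a touch on the short message application icon.
    public static String onMessageIconTouch(double pressure) {
        if (pressure < FIRST_PRESSURE_THRESHOLD) {
            return "VIEW_MESSAGE"; // lighter press: view the short message
        }
        return "NEW_MESSAGE";      // firmer press: create a new short message
    }
}
```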
The gyroscope sensor 180B may be used to determine the motion attitude of the terminal. In some embodiments, the angular velocity of the terminal about three axes (i.e., the x, y, and z axes) may be determined by the gyroscope sensor 180B. The gyroscope sensor 180B may be used for image stabilization during shooting. Illustratively, when the shutter is pressed, the gyroscope sensor 180B detects the shake angle of the terminal, calculates the distance that the lens module needs to compensate according to the shake angle, and allows the lens to counteract the shake of the terminal through a reverse motion, thereby achieving image stabilization. The gyroscope sensor 180B may also be used for navigation and somatosensory gaming scenarios.
The air pressure sensor 180C is used to measure air pressure. In some embodiments, the terminal calculates altitude from the barometric pressure measured by barometric pressure sensor 180C to assist in positioning and navigation.
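The altitude calculation can be sketched with the standard international barometric formula. The sea-level reference and the formula itself are a commonly used approximation, not necessarily what the terminal implements:

```java
public class BarometricAltitude {
    // Standard sea-level reference pressure in hPa (assumed reference).
    static final double SEA_LEVEL_HPA = 1013.25;

    // International barometric formula: altitude from measured pressure.
    public static double altitudeMeters(double pressureHpa) {
        return 44330.0 * (1.0 - Math.pow(pressureHpa / SEA_LEVEL_HPA, 1.0 / 5.255));
    }
}
```

A reading of about 900 hPa, for example, corresponds to roughly 1000 m of altitude, which can then assist positioning and navigation.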
The magnetic sensor 180D includes a Hall sensor. The terminal may detect the opening and closing of a flip leather holster using the magnetic sensor 180D. In some embodiments, when the terminal is a clamshell device, the terminal may detect the opening and closing of the clamshell according to the magnetic sensor 180D. Features such as automatic unlocking upon flip opening may then be set according to the detected open or closed state of the leather holster or of the flip cover.
The acceleration sensor 180E can detect the magnitude of the terminal's acceleration in various directions (typically along three axes). When the terminal is stationary, the magnitude and direction of gravity can be detected. The acceleration sensor 180E can also be used to recognize the attitude of the electronic device and is applied to landscape/portrait switching, pedometers, and other applications.
The distance sensor 180F is used to measure distance. The terminal may measure distance by infrared or laser. In some embodiments, in a shooting scenario, the terminal may measure distance using the distance sensor 180F to achieve fast focusing.
The proximity light sensor 180G may include, for example, a light-emitting diode (LED) and a light detector, such as a photodiode. The light-emitting diode may be an infrared light-emitting diode. The terminal emits infrared light outward through the light-emitting diode and uses the photodiode to detect infrared light reflected from nearby objects. When sufficient reflected light is detected, the terminal can determine that there is an object nearby; when insufficient reflected light is detected, the terminal can determine that there is no object nearby. The terminal can use the proximity light sensor 180G to detect that the user is holding the terminal close to the ear during a call, so that the screen is automatically turned off to save power. The proximity light sensor 180G may also be used in holster mode and pocket mode to automatically unlock and lock the screen.
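The reflected-light decision can be summarized in a short sketch; the threshold, units, and method names are illustrative assumptions and not part of the disclosure:

```java
public class ProximityDetector {
    // Assumed detector reading above which an object is considered nearby.
    static final int REFLECTION_THRESHOLD = 100;

    // Sufficient reflected infrared light implies an object near the terminal.
    public static boolean objectNearby(int reflectedLight) {
        return reflectedLight >= REFLECTION_THRESHOLD;
    }

    // During a call, a nearby object (the user's ear) turns the screen off.
    public static boolean shouldScreenOff(boolean inCall, int reflectedLight) {
        return inCall && objectNearby(reflectedLight);
    }
}
```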
The ambient light sensor 180L is used to sense ambient light brightness. The terminal may adaptively adjust the brightness of the display screen 194 based on the perceived ambient light level. The ambient light sensor 180L may also be used to automatically adjust the white balance when taking a picture. The ambient light sensor 180L may also cooperate with the proximity light sensor 180G to detect whether the terminal is in a pocket, to prevent accidental touches.
The fingerprint sensor 180H is used to collect a fingerprint. The terminal can utilize the collected fingerprint characteristics to realize fingerprint unlocking, access to an application lock, fingerprint photographing, fingerprint incoming call answering and the like.
The temperature sensor 180J is used to detect temperature. In some embodiments, the terminal implements a temperature processing strategy using the temperature detected by the temperature sensor 180J. For example, when the temperature reported by the temperature sensor 180J exceeds a threshold, the terminal reduces the performance of a processor located near the temperature sensor 180J, so as to reduce power consumption and implement thermal protection. In other embodiments, when the temperature is below another threshold, the terminal heats the battery 142 to avoid an abnormal shutdown caused by low temperature. In still other embodiments, when the temperature is below yet another threshold, the terminal boosts the output voltage of the battery 142 to avoid an abnormal shutdown caused by low temperature.
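The three-threshold strategy above can be sketched as follows; the numeric thresholds and action names are illustrative assumptions only:

```java
public class ThermalPolicy {
    // Illustrative thresholds in degrees Celsius (assumed values).
    static final double HIGH_TEMP = 45.0;      // throttle the nearby processor
    static final double LOW_TEMP = 0.0;        // heat the battery
    static final double CRITICAL_LOW = -10.0;  // boost the battery output voltage

    public static String action(double temperatureCelsius) {
        if (temperatureCelsius > HIGH_TEMP) return "REDUCE_PERFORMANCE";
        if (temperatureCelsius < CRITICAL_LOW) return "BOOST_BATTERY_VOLTAGE";
        if (temperatureCelsius < LOW_TEMP) return "HEAT_BATTERY";
        return "NORMAL";
    }
}
```

Note that the most severe condition (the lowest threshold) is checked before the milder one, so each temperature maps to exactly one action.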
The touch sensor 180K is also referred to as a "touch panel". The touch sensor 180K may be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 form a touch screen, which is also called a "touch screen". The touch sensor 180K is used to detect a touch operation applied thereto or nearby. The touch sensor can communicate the detected touch operation to the application processor to determine the touch event type. Visual output associated with the touch operation may be provided through the display screen 194. In other embodiments, the touch sensor 180K may be disposed on the surface of the terminal at a different position than the display screen 194.
The keys 190 include a power-on key, a volume key, and the like. The keys 190 may be mechanical keys. Or may be touch keys. The terminal may receive a key input, and generate a key signal input related to user setting and function control of the terminal.
The motor 191 may generate a vibration cue. The motor 191 may be used for incoming call vibration cues, as well as for touch vibration feedback. For example, touch operations applied to different applications (e.g., photographing, audio playing, etc.) may correspond to different vibration feedback effects. The motor 191 may also respond to different vibration feedback effects for touch operations applied to different areas of the display screen 194. Different application scenes (such as time reminding, receiving information, alarm clock, game and the like) can also correspond to different vibration feedback effects. The touch vibration feedback effect may also support customization.
Indicator 192 may be an indicator light that may be used to indicate a state of charge, a change in charge, or a message, missed call, notification, etc.
The SIM card interface 195 is used to connect a SIM card. A SIM card can be brought into and out of contact with the terminal by being inserted into or pulled out of the SIM card interface 195. The terminal can support 1 or N SIM card interfaces, where N is a positive integer greater than 1. The SIM card interface 195 may support a Nano-SIM card, a Micro-SIM card, a SIM card, and the like. Multiple cards may be inserted into the same SIM card interface 195 at the same time. The types of the multiple cards may be the same or different. The SIM card interface 195 may also be compatible with different types of SIM cards, and may further be compatible with external memory cards. The terminal interacts with the network through the SIM card to implement functions such as calls and data communication.
Fig. 4 is a schematic diagram of a software structure of a terminal according to an embodiment of the present application.
The layered architecture divides the software into several layers, each layer having a clear role and division of labor. The layers communicate with each other through a software interface. In some embodiments, the system is divided into four layers, an application layer, an application framework layer, a Runtime (Runtime) and system library, and a kernel layer, from top to bottom.
The application layer may include a series of application packages.
As shown in fig. 4, the application package may include applications (also referred to as applications) such as camera, gallery, calendar, phone call, map, navigation, WLAN, bluetooth, music, video, short message, etc.
It can be understood that the application layer may also include Lite applications, which are reduced, more lightweight versions of full applications. In one embodiment of the present application, a Lite application may be a HarmonyOS application running on a lightweight smart wearable device; for example, the Lite application may be included in a smart watch running HarmonyOS.
The Application framework layer provides an Application Programming Interface (API) and a Programming framework for the Application program of the Application layer. The application framework layer includes a number of predefined functions.
As shown in FIG. 4, the application framework layers may include a window manager, content provider, view system, phone manager, resource manager, notification manager, and the like.
The window manager is used to manage window programs. The window manager can obtain the size of the display screen, determine whether there is a status bar, lock the screen, take screenshots, and the like.
The content provider is used to store and retrieve data and make it accessible to applications. The data may include video, images, audio, calls made and received, browsing history and bookmarks, phone books, etc.
The view system includes visual controls such as controls to display text, controls to display pictures, and the like. The view system may be used to build applications. The display interface may be composed of one or more views. For example, the display interface including the short message notification icon may include a view for displaying text and a view for displaying pictures.
The telephone manager is used for providing a communication function of the terminal. Such as management of call status (including on, off, etc.).
The resource manager provides various resources for the application, such as localized strings, icons, pictures, layout files, video files, and the like.
The notification manager enables an application to display notification information in the status bar. It can be used to convey notification-type messages, which can disappear automatically after a short stay without requiring user interaction. For example, the notification manager is used to notify of download completion, message alerts, and the like. The notification manager may also present notifications that appear in the form of a chart or scroll-bar text in the top status bar of the system, such as notifications of applications running in the background, or notifications that appear on the screen in the form of a dialog interface. For example, text information may be prompted in the status bar, a prompt tone may be sounded, the electronic device may vibrate, or an indicator light may flash.
The Runtime (Runtime) includes a core library and a virtual machine. Runtime is responsible for scheduling and management of the system.
The core library comprises two parts: one part is the functions that the programming language (for example, the Java language) needs to invoke, and the other part is the core library of the system.
The application layer and the application framework layer run in a virtual machine. The virtual machine executes programming files (e.g., java files) of the application layer and the application framework layer as binary files. The virtual machine is used for performing the functions of object life cycle management, stack management, thread management, safety and exception management, garbage collection and the like.
The system library may include a plurality of functional modules. For example: surface Manager (Surface Manager), Media Libraries (Media Libraries), three-dimensional graphics processing Libraries (e.g., OpenGL ES), two-dimensional graphics engines (e.g., SGL), and the like.
The surface manager is used to manage the display subsystem and provides a fusion of two-Dimensional (2-Dimensional, 2D) and three-Dimensional (3-Dimensional, 3D) layers for multiple applications.
The media library supports playback and recording of a variety of commonly used audio and video formats, as well as still image files and the like. The media library may support a variety of audio and video encoding formats, such as MPEG-4, H.264, MP3, AAC, AMR, JPG, and PNG.
The three-dimensional graphic processing library is used for realizing 3D graphic drawing, image rendering, synthesis, layer processing and the like.
The 2D graphics engine is a drawing engine for 2D drawing.
The kernel layer is a layer between hardware and software. The kernel layer at least comprises a display driver, a camera driver, an audio driver, a sensor driver and a virtual card driver.
The following describes an exemplary workflow of the terminal software and hardware in conjunction with capturing a photo scene.
When the touch sensor 180K receives a touch operation, a corresponding hardware interrupt is issued to the kernel layer. The kernel layer processes the touch operation into a raw input event (including information such as the touch coordinates and the timestamp of the touch operation). The raw input event is stored at the kernel layer. The application framework layer acquires the raw input event from the kernel layer and identifies the control corresponding to the input event. Taking the touch operation being a click operation and the control corresponding to the click operation being the control of the camera application icon as an example, the camera application calls an interface of the application framework layer to start the camera application, which in turn starts the camera driver by calling the kernel layer and captures a still image or a video through the camera 193.
An embodiment of the present application provides a multi-device distributed scheduling system architecture, as shown in fig. 5, the multi-device distributed scheduling system architecture includes a first device, a second device, and a third device, where the second device is a device in a first network, the first network is a network in which the first device establishes a communication connection with at least one device through a first communication manner, the third device is a device in a second network, and the second network is a network in which the first device establishes a communication connection with at least one device through a second communication manner. The second device may invoke the service of the third device through the first device, and the third device may also invoke the service of the second device through the first device.
It should be noted that the second device is a terminal having a function of receiving a user instruction, the first device is a terminal capable of interacting with the second device, and the third device is a terminal capable of interacting with the first device. For example, the first device may be any one of terminal devices such as a smartphone, a tablet computer, and a personal computer (PC); the second device may be any one of terminal devices such as a smart watch, a tablet computer, a smart speaker, and a smart screen; and the third device may likewise be any one of terminal devices such as a smart watch, a tablet computer, a smart speaker, and a smart screen.
It is understood that the first communication manner includes, but is not limited to, Wi-Fi, Bluetooth, and the like, and the second communication manner likewise includes, but is not limited to, Wi-Fi, Bluetooth, and the like. In one embodiment of the present application, when the first communication manner is Wi-Fi, the second communication manner is Bluetooth; and when the first communication manner is Bluetooth, the second communication manner is Wi-Fi.
It should be further noted that the first device, the second device, and the third device are not limited to the terminals shown in fig. 5, and may also include other devices not shown in fig. 5, which are not enumerated here. In addition, for the specific hardware structures and software structures of the first device, the second device, and the third device shown in fig. 5, refer to the descriptions of fig. 3 and fig. 4, respectively, which are not repeated here.
It should be further noted that the multi-device distributed scheduling system architecture shown in fig. 5 is only an exemplary implementation manner in the embodiment of the present application, and the multi-device distributed scheduling system architecture in the embodiment of the present application includes, but is not limited to, the above structures.
The following describes, with reference to the flowchart of the multi-device distributed scheduling method shown in fig. 6, a process of implementing multi-device distributed scheduling by a first device, a second device, and a third device in the multi-device distributed scheduling system architecture shown in fig. 5, where the process may include the following steps:
S610: the second device sends the first request call information to the first device.
Specifically, the second device detects that a user triggers a certain call request, and in response to this operation the second device generates a call instruction and sends the call instruction to the first device; alternatively, the second device performs field filling and serialization processing on the call instruction according to a lightweight service description protocol to obtain a processed call instruction, and sends the processed call instruction to the first device. It is to be understood that the first request call information may be the call instruction or the processed call instruction described above; in other embodiments of the present application, the first request call information may also be information of other content and/or form, which is not limited in this application.
For example, when the second device is the terminal shown in fig. 3 and fig. 4, the pressure sensor 180A may detect a touch operation of the user, and the kernel layer may process the touch operation into a raw input event (including information such as the touch coordinates and the timestamp of the touch operation). The raw input event is stored at the kernel layer. The application framework layer acquires the raw input event from the kernel layer and identifies the control corresponding to the input event. In response to this operation, the second device generates a call instruction, and the wireless communication module 160 sends the call instruction to the first device; alternatively, the processor 110 performs field filling and serialization processing on the call instruction according to the lightweight service description protocol, and then the wireless communication module 160 sends the processed call instruction to the first device.
As shown in fig. 7, fig. 7 is a field structure diagram of the lightweight service description protocol provided in this embodiment. The lightweight service description protocol includes an instruction (Command ID), a device identifier (Device ID), a target calling application package name (Called Bundle Name), a target ability name (Called Ability Name), a signature (caller fingerprint information), and valid data (Data PayLoad); the contents of each field are shown in table 1 below:
TABLE 1 (contents of each field of the lightweight service description protocol; rendered as an image in the original publication)
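The field filling and serialization described above can be sketched in Java as follows. This is a minimal illustration only: the patent does not specify the wire format, so the length-prefixed layout, the class name, and the field representations here are assumptions.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.UncheckedIOException;

// One message of the lightweight service description protocol of fig. 7.
class LiteServiceMessage {
    String commandId;        // instruction (Command ID)
    String deviceId;         // Device ID of the target call device
    String calledBundleName; // target calling application package name
    String calledAbilityName;// target ability (service) name
    String signature;        // caller fingerprint information
    byte[] dataPayload;      // valid data carrying the call parameters

    // Field filling is done by assigning the fields above; serialization
    // writes them in field order as length-prefixed strings plus raw bytes.
    byte[] serialize() {
        try {
            ByteArrayOutputStream buf = new ByteArrayOutputStream();
            DataOutputStream out = new DataOutputStream(buf);
            out.writeUTF(commandId);
            out.writeUTF(deviceId);
            out.writeUTF(calledBundleName);
            out.writeUTF(calledAbilityName);
            out.writeUTF(signature);
            out.writeInt(dataPayload.length);
            out.write(dataPayload);
            return buf.toByteArray();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    // Deserialization reads the fields back in the same order.
    static LiteServiceMessage deserialize(byte[] bytes) {
        try {
            DataInputStream in = new DataInputStream(new ByteArrayInputStream(bytes));
            LiteServiceMessage m = new LiteServiceMessage();
            m.commandId = in.readUTF();
            m.deviceId = in.readUTF();
            m.calledBundleName = in.readUTF();
            m.calledAbilityName = in.readUTF();
            m.signature = in.readUTF();
            m.dataPayload = new byte[in.readInt()];
            in.readFully(m.dataPayload);
            return m;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```

A round trip through serialize and deserialize recovers the device identifier and payload that the first device needs for routing.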
S620: the first device receives first request calling information sent by the second device.
Specifically, the first device receives the call instruction sent by the second device, or the first device receives the processed call instruction sent by the second device. It is to be understood that the first request invoking information may be the invoking instruction in the foregoing content or the processed invoking instruction, and in other embodiments of the present application, the first request invoking information may also be information of other different content and/or form, which is not limited in this application.
It should be noted that, after the first device receives the first request invocation information sent by the second device, the first device processes the first request invocation information. When the first request calling information is the calling instruction, the first equipment carries out field filling and serialization processing on the calling instruction according to a lightweight service description protocol to obtain second request calling information, and the second request calling information is used for triggering the third equipment to execute the target calling service; when the first request calling information is the processed calling instruction, the first device performs deserialization processing on the processed calling instruction to obtain a device identifier of a third device and target calling service information, the first device establishes a service calling channel based on the device identifier of the third device, and the service calling channel is used for sending the instruction for executing the target calling service to the third device.
Illustratively, when the first device is the terminal shown in fig. 3, the wireless communication module 160 receives the call instruction or the processed call instruction sent by the second device. If the wireless communication module 160 receives the call instruction, the processor 110 performs field filling and serialization processing on the call instruction according to the lightweight service description protocol to obtain second request call information; if the processed call instruction is received by the wireless communication module 160, the processor 110 performs deserialization on the processed call instruction to obtain the device identifier and the target call service information of the third device.
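The two processing branches at the first device described above (serializing a raw call instruction, versus deserializing an already-processed one and routing it over a service call channel) can be sketched as follows. The Codec and ChannelFactory interfaces and the two-element routing result are illustrative assumptions, not the patent's actual API.

```java
// Sketch of the first device's DMS handling of the first request call
// information in step S620; all names here are hypothetical.
class FirstDeviceDms {
    interface Codec {
        byte[] serialize(Object rawInstruction);      // field filling + serialization
        String[] deserialize(byte[] bytes);           // -> {target device ID, payload}
    }
    interface ChannelFactory {
        void sendTo(String deviceId, String payload); // send over a service call channel
    }

    private final Codec codec;
    private final ChannelFactory channels;

    FirstDeviceDms(Codec codec, ChannelFactory channels) {
        this.codec = codec;
        this.channels = channels;
    }

    // Branch 1: the incoming information is a raw call instruction; serialize
    // it to obtain the second request call information for the third device.
    byte[] onRawInstruction(Object rawInstruction) {
        return codec.serialize(rawInstruction);
    }

    // Branch 2: the incoming information is already serialized; deserialize it
    // to recover the target device identifier, then forward the payload over a
    // channel established for that device.
    void onProcessedInstruction(byte[] bytes) {
        String[] fields = codec.deserialize(bytes);
        channels.sendTo(fields[0], fields[1]);
    }
}
```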
In an embodiment of the present application, the service call channel may be generated through a specific specification. It is understood that the specific specification may include, but is not limited to, the Android Interface Definition Language (AIDL), and the service call channel may be specifically implemented by code.
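Because AIDL stubs can only be generated in an Android build, the following plain-Java sketch shows the same pattern under stated assumptions: the caller programs against an interface, and a proxy forwards each call over whatever link the binding established. The interface, class, and transport names are hypothetical.

```java
import java.util.function.BiConsumer;

// The channel contract the caller sees (the role an AIDL interface plays).
interface IServiceCallChannel {
    void executeTargetService(String abilityName, byte[] payload);
}

// Client-side proxy: marshals the call onto the underlying transport, the
// way a generated AIDL proxy would marshal arguments into a Parcel.
class ServiceCallChannelProxy implements IServiceCallChannel {
    private final String targetDeviceId;
    private final BiConsumer<String, byte[]> transport; // e.g. a Wi-Fi/Bluetooth sender

    ServiceCallChannelProxy(String targetDeviceId, BiConsumer<String, byte[]> transport) {
        this.targetDeviceId = targetDeviceId;
        this.transport = transport;
    }

    @Override
    public void executeTargetService(String abilityName, byte[] payload) {
        // Route the instruction for executing the target call service to the
        // device this channel was bound to.
        transport.accept(targetDeviceId + "/" + abilityName, payload);
    }
}
```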
S630: the first device sends an instruction to the third device to execute the target call service.
Specifically, the first device sends an instruction for executing the target call service to the third device, and it is understood that the instruction for executing the target call service may be the second request call information, or the first device sends the instruction for executing the target call service to the third device through the service call channel, where the instruction for executing the target call service includes the target call service information.
Illustratively, when the first device is the terminal shown in fig. 3, the wireless communication module 160 transmits an instruction to execute the target call service to the third device.
S640: and the third equipment receives the instruction which is sent by the first equipment and used for executing the target calling service.
Specifically, the third device receives an instruction for executing the target call service sent by the first device, and it is understood that the instruction for executing the target call service may be the second request call information.
Illustratively, when the third device is the terminal shown in fig. 3, the wireless communication module 160 receives an instruction to execute the target invocation service sent by the first device.
It is understood that the third device executes the target call service after receiving the instruction for executing the target call service. When the instruction for executing the target call service is the second request call information, the first device may invoke the target call service of the third device by reflection, and the third device executes the target call service, where the reflection may be the Java reflection mechanism.
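A reflection-based invocation of this kind can be sketched with the standard Java reflection API as follows: the class and method names would be supplied by the request call information. The simplifications are noted in the comments.

```java
import java.lang.reflect.Method;

class ReflectiveInvoker {
    // Locates the target call service by class and method name and invokes it.
    // Simplification: parameter types are taken from the concrete argument
    // classes, which does not cover primitive or interface-typed parameters.
    static Object invokeService(String className, String methodName, Object... args) {
        try {
            Class<?> clazz = Class.forName(className);
            Class<?>[] types = new Class<?>[args.length];
            for (int i = 0; i < args.length; i++) {
                types[i] = args[i].getClass();
            }
            Object target = clazz.getDeclaredConstructor().newInstance();
            Method method = clazz.getMethod(methodName, types);
            return method.invoke(target, args);
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException("reflective service call failed", e);
        }
    }
}
```

For example, invoking a method purely from its textual description, as the DMS would after deserializing a request: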
It should be noted that other devices may invoke the second device through the first device. In this case, the second device may receive the instruction for executing the target call service sent by the first device; for the specific implementation manner, see steps S610 to S640 above, which are not described again here.
In addition, the third device may also invoke other devices through the first device, for example, the third device may send third request invocation information to the first device, where the third request invocation information is used to request to invoke the first device to communicate with the fourth device; the fourth device is a device in a first network, and the first network is a network in which the first device establishes a communication connection with at least one device through a first communication manner, where the specific implementation manner is referred to in steps S610 to S640, and details are not described here again.
The embodiment of the present application provides a multi-device distributed scheduling scenario, where the scenario is based on the system architecture shown in fig. 5, and the method shown in fig. 6 may be used in the scenario, which is described below with reference to fig. 8 and 12.
Example one: as shown in fig. 8, the multi-device distributed scheduling scenario includes a first device 810, a second device 820, and a third device 830, where the first communication manner adopted between the first device 810 and the second device 820 is Bluetooth, and the second communication manner adopted between the first device 810 and the third device 830 is Wi-Fi.
It is understood that the first device, the second device, and the third device shown in fig. 8 are only an exemplary division manner, and the units in the first device, the second device, and the third device may be added, reduced, or combined as needed, which is not limited in this application.
As shown in fig. 8, the second device 820 includes a Lite application 8210, an Invoke AA interface 8220, and a Distributed Management Service (DMS) module 8230; it is understood that in an embodiment of the present application, the second device 820 may further include a Bluetooth communication module 8240. The Lite application 8210 may be a Lite application of the application layer shown in fig. 4. The Invoke AA interface 8220 may be an interface in the processor 110 shown in fig. 3, and it may be understood that the Invoke AA interface 8220 may also be an interface in the application framework layer shown in fig. 4. The DMS module 8230 may be one processing unit or a set of several processing units in the processor 110 shown in fig. 3, and the Bluetooth communication module 8240 may be one communication module or a set of several communication modules in the wireless communication module 160 shown in fig. 3.
Specifically, the second device 820 detects that the user triggers the Lite application 8210, and in response to this operation the second device 820 generates a call instruction. The second device 820 then calls the Invoke AA interface 8220 to use the DMS module 8230, and the DMS module 8230 performs field filling and serialization on the call instruction according to the lightweight service description protocol shown in fig. 7 and table 1 to obtain first request call information. The second device 820 sends the first request call information to the first device 810 through the Bluetooth communication module 8240.
The Lite application 8210 is a lightweight application that is more compact than other application programs. In particular, the Lite application 8210 may include one or more Lite applications having the same or different functions; in one embodiment of the present application, the Lite applications may be HarmonyOS applications on a Lite Wearable (lightweight smart wearable) device.
The Invoke AA interface 8220 is an interface opened to a third party developer to use the DMS, and optionally, the Invoke AA interface 8220 may also be another interface opened to the third party developer to use the DMS.
The DMS module 8230 is configured to process, send, and receive the request invocation information, for example, perform deserialization on the request invocation information, or process the request invocation information according to a lightweight service description protocol, or send the request invocation information to the first device, so as to invoke a third device through the first device, or receive the request invocation information sent by the first device and invoke a local service.
The Bluetooth communication module 8240 is configured to enable Bluetooth communication between the second device 820 and other devices.
As shown in fig. 8, the first device 810 includes an Invoke AA interface 8110, a DMS module 8120, a distributed database 8130, and a proxy module 8140, where the proxy module 8140 may include a scheduling service module 81410, a binding service module 81420, and a calling service module 81430. It is understood that in one embodiment of the present application, the first device 810 may also include a Bluetooth communication module 8150. Here, the Invoke AA interface 8110 may be an interface in the processor 110 shown in fig. 3, and it is understood that the Invoke AA interface 8110 may also be an interface in the application framework layer shown in fig. 4. The DMS module 8120 may be one processing unit or a set of several processing units in the processor 110 shown in fig. 3, the distributed database 8130 may be a module in the data storage area of the internal memory 121 shown in fig. 3, the proxy module 8140 may be one processing unit or a set of several processing units in the processor 110 shown in fig. 3, and the Bluetooth communication module 8150 may be one communication module or a set of several communication modules in the wireless communication module 160 shown in fig. 3.
Specifically, the first device 810 receives the first request call information sent by the second device 820 through the Bluetooth communication module 8150. The agent module 8140 performs deserialization processing on the first request call information to obtain the device identifier and the target call service information of the third device 830, where the device identifier includes, but is not limited to, the Device ID shown in table 1, and the target call service information includes, but is not limited to, the Data PayLoad shown in table 1. The scheduling service module 81410 searches the distributed database 8130 for the target call device (the third device 830) and the target call service according to the obtained device identifier. After the search is completed, the binding service module 81420 establishes a service call channel between the first device 810 and the third device 830, and the first device 810 sends an instruction for executing the target call service to the third device 830 through the service call channel, where the instruction for executing the target call service includes the target call service information.
Accordingly, the third device 830 receives the instruction to execute the target call service and executes the target call service.
The distributed database 8130 stores information of all devices in the first network and/or the second network, including but not limited to device Identification (ID), device address, service information of the device, and the like, and the device information stored in the distributed database 8130 is updated synchronously when the device in the first network and/or the second network goes online or offline.
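A minimal in-memory sketch of such a device database might look as follows. The field names and the online/offline callbacks are assumptions, and a real distributed database would additionally replicate these updates across devices rather than hold them in one process.

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of the distributed database 8130: device ID -> (address, offered
// services), updated synchronously as devices go online or offline.
class DeviceRegistry {
    static class DeviceInfo {
        final String deviceId;
        final String address;
        final List<String> services;

        DeviceInfo(String deviceId, String address, List<String> services) {
            this.deviceId = deviceId;
            this.address = address;
            this.services = services;
        }
    }

    private final Map<String, DeviceInfo> devices = new ConcurrentHashMap<>();

    // Called when a device in the first and/or second network comes online.
    void onDeviceOnline(DeviceInfo info) {
        devices.put(info.deviceId, info);
    }

    // Called when a device goes offline; its entry is removed.
    void onDeviceOffline(String deviceId) {
        devices.remove(deviceId);
    }

    // Scheduling-service lookup: find the target call device by identifier
    // and confirm it offers the requested target call service.
    DeviceInfo findTarget(String deviceId, String service) {
        DeviceInfo info = devices.get(deviceId);
        return (info != null && info.services.contains(service)) ? info : null;
    }
}
```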
The agent module 8140 is configured to implement cross-device service invocation. The scheduling service module 81410 may search the distributed database 8130 for the target call device and the target call service according to the acquired information. The binding service module 81420 may establish a service call channel between the first device 810 and another device to enable calling the service of that device; for example, as described above, the binding service module 81420 may establish a service call channel between the first device 810 and the third device 830, so that the first device 810 acts as a proxy for the second device 820 to call the function of the third device 830. The calling service module 81430 may invoke a service directly by function name, class name, and the like through reflection; in one embodiment of the present application, the calling service module 81430 may invoke the service of the third device 830 by reflection, thereby enabling the proxy of the second device 820 to call the function of the third device 830.
For the functions of the Invoke AA interface 8110, the DMS module 8120, and the Bluetooth communication module 8150, refer to the functions of the Invoke AA interface 8220, the DMS module 8230, and the Bluetooth communication module 8240 of the second device 820, which are not described again here.
For example, as shown in fig. 9, a user wants to use a smart watch to call the camera of a smart screen to shoot a family photo. In this case, the first device 810 is a smartphone, the second device 820 is a smart watch, the third device 830 is a smart screen, the first communication manner is Bluetooth, and the second communication manner is Wi-Fi; the smart watch is connected to the smartphone through Bluetooth, and the smartphone is connected to the smart screen, a tablet computer, and other devices through Wi-Fi. Fig. 10 exemplarily shows an interface of the Lite application 8210 of the smart watch, which includes a "smart screen shot" control. The smart watch detects that the user clicks the "smart screen shot" control in fig. 10 (as shown in fig. 11). In response to this operation, the smart watch generates an instruction for calling the smart screen camera, and the smart watch calls the Invoke AA interface 8220 to use the DMS module 8230. The DMS module 8230 performs field filling on the instruction for calling the smart screen camera according to the lightweight service description protocol shown in fig. 7 and table 1 to obtain the call information shown in table 2 below, and then serializes the call information to obtain first request call information. The smart watch sends the first request call information, which carries the lightweight service description protocol (as shown in fig. 7 and table 1), to the smartphone through the Bluetooth communication module 8240. After the smartphone receives the first request call information through the Bluetooth communication module 8150, the agent module 8140 performs deserialization processing on the first request call information to obtain the Device ID (as shown in table 2) and the target call service information (Data PayLoad, as shown in table 2) of the smart screen. The binding service module 81420 establishes a service call channel between the smartphone and the smart screen, and the smartphone sends an instruction for executing the target call service to the smart screen through the service call channel, where the instruction for executing the target call service includes the Data PayLoad shown in table 2, so as to call the camera on the smart screen to shoot.
TABLE 2 (call information fields for this example; rendered as an image in the original publication)
Example two: as shown in fig. 12, the multi-device distributed scheduling scenario includes a first device 810, a second device 820, and a third device 830, where a first communication method adopted between the first device 810 and the second device 820 is Wi-Fi, and a second communication method adopted between the first device 810 and the third device 830 is bluetooth.
Specifically, when the second device 820 detects that a user triggers a certain call request, in response to this operation the second device 820 generates a call instruction and sends the call instruction to the first device 810, where the call instruction is the first request call information. After the first device 810 receives the call instruction, the DMS module 8120 performs field filling and serialization processing on the call instruction according to the lightweight service description protocol to obtain second request call information, where the second request call information is the instruction for executing the target call service, and the first device 810 sends the second request call information to the third device 830 through the Bluetooth communication module 8150.
As shown in fig. 12, the third device 830 includes a DMS module 8310 and an Ability Management Service (AMS) module 8320. It can be understood that in one embodiment of the present application, the third device 830 may further include a Bluetooth communication module 8330. The DMS module 8310 may be one processing unit or a set of several processing units in the processor 110 shown in fig. 3, and the AMS module 8320 may likewise be one processing unit or a set of several processing units in the processor 110 shown in fig. 3; it is understood that the AMS module 8320 may also be one or several modules in the application framework layer and the kernel layer shown in fig. 4. The Bluetooth communication module 8330 may be one communication module or a set of several communication modules in the wireless communication module 160 shown in fig. 3.
Accordingly, the third device 830 receives the second request call information through the bluetooth communication module 8330, the DMS module 8310 deserializes the second request call information, and then the AMS module 8320 pulls up the target call application, and the third device executes the target call service, thereby completing the call of the target call service.
The AMS module 8320 is used to manage applications on the third device 830, including managing the launching, foreground, background, and exit of applications.
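The pull-up behavior can be sketched as a registry of ability entry points keyed by bundle name and ability name; when an instruction arrives, the matching entry point is launched with the payload. The registration API shown here is an assumption for illustration, not HarmonyOS's actual AMS interface.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Consumer;

// Sketch of the callee side: after the DMS deserializes the second request
// call information, the AMS "pulls up" the target call application by its
// bundle/ability names and hands it the payload.
class AbilityManagerSketch {
    private final Map<String, Consumer<byte[]>> abilities = new HashMap<>();

    // An installed application registers the entry point of one of its abilities.
    void registerAbility(String bundleName, String abilityName, Consumer<byte[]> entryPoint) {
        abilities.put(bundleName + "/" + abilityName, entryPoint);
    }

    // Launch (or foreground) the target ability and execute the target call service.
    boolean startAbility(String bundleName, String abilityName, byte[] payload) {
        Consumer<byte[]> entry = abilities.get(bundleName + "/" + abilityName);
        if (entry == null) {
            return false; // target call application is not installed
        }
        entry.accept(payload);
        return true;
    }
}
```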
For the functions of the DMS module 8310 and the Bluetooth communication module 8330, refer to the functions of the DMS module 8230 and the Bluetooth communication module 8240 of the second device 820 shown in fig. 8, which are not described in detail here.
For example, as shown in fig. 13, a user wants to use a smart speaker to play the audio of a smart screen. In this case, the first device 810 is a tablet computer, the second device 820 is the smart screen, the third device 830 is the smart speaker, the first communication manner is Wi-Fi, and the second communication manner is Bluetooth; the smart screen is connected to the tablet computer, a smartphone, and other devices through Wi-Fi, and the smart speaker is connected to the tablet computer through Bluetooth. Fig. 14 exemplarily shows a user interface of the smart screen, which includes a "smart speaker plays audio" control. The smart screen detects that the user clicks (for example, by a touch-screen click, a gesture trigger, or a remote-controller click) the "smart speaker plays audio" control, generates an instruction for calling the smart speaker to play audio (the first request call information), and sends the instruction to the tablet computer. After the tablet computer receives the instruction, the DMS module 8120 performs field filling on the instruction for calling the smart speaker to play audio according to the lightweight service description protocol shown in fig. 7 and table 1 to obtain call information, and then serializes the call information to obtain second request call information. The tablet computer sends the second request call information to the smart speaker through the Bluetooth communication module 8150. After the smart speaker receives the second request call information through the Bluetooth communication module 8330, the DMS module 8310 deserializes it, and then the AMS module 8320 pulls up the target call service, that is, executes the target call service of playing the smart screen audio.
Referring to fig. 15, fig. 15 is a schematic structural diagram of an electronic device according to an embodiment of the present application. As shown in fig. 15, the electronic apparatus 1500 includes: a processor 1510, a communication interface 1520, and a memory 1530, the processor 1510, the communication interface 1520, and the memory 1530 being interconnected via an internal bus 1540. Wherein, the processor 1510 may include the processor 110 in fig. 3, the communication interface 1520 may include the mobile communication module 150 and/or the wireless communication module 160 in fig. 3, and the memory 1530 may include the internal memory 121 in fig. 3, and the functions of these components/units/modules may refer to fig. 3 and are not described herein again.
Bus 1540 can be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus 1540 may be divided into an address bus, a data bus, a control bus, and the like. Bus 1540 may be implemented within processor 110 as depicted in fig. 3, with only one thick line being depicted in fig. 15 for ease of illustration, and not to indicate that there is only one bus or type of bus.
The electronic device 1500 may be the first device in fig. 5 and fig. 6, or may be the first device 810 in fig. 8 and fig. 12, and functions performed by the first device in fig. 5 and fig. 6, and the first device 810 in fig. 8 and fig. 12 are actually performed by the processor 1510 of the electronic device 1500. The memory 1530 is configured to store a program code for executing the foregoing embodiment of the multi-device distributed scheduling method, and in an implementation, the memory 1530 may further buffer other data and be controlled by the processor 1510 to execute the program code, so as to implement the functional unit shown in the first device, or to implement the method step in which the first device is used as an execution subject in the embodiment of the method shown in fig. 6. The method comprises the following specific steps: the processor 1510 controls the communication interface 1520 to receive first request call information sent by a second device, where the first request call information is used to request to call the electronic device 1500 to communicate with a third device; the second device is a device in a first network, and the first network is a network in which the electronic device 1500 establishes a communication connection with at least one device through a first communication manner; the third device is a device in a second network, and the second network is a network in which the electronic device 1500 establishes a communication connection with at least one device through a second communication manner;
the processor 1510 controls the communication interface 1520 to send an instruction to execute a target call service to the third device based on the first request call information.
In one implementation, the processor 1510 controlling the communication interface 1520 to send an instruction for executing the target call service to the third device based on the first request call information includes: the processor 1510 performs deserialization processing on the first request call information to obtain the device identifier and the target call service information of the third device; the processor 1510 establishes a service call channel based on the device identifier of the third device; and the processor 1510 controls the communication interface 1520 to send the instruction for executing the target call service to the third device through the service call channel, where the instruction for executing the target call service includes the target call service information.
In one implementation, the processor 1510 controls the communication interface 1520 to send an instruction to the third device to execute the target call service based on the first request call information, including: the processor 1510 performs serialization processing on the first request call information to obtain second request call information; the processor 1510 controls the communication interface 1520 to send the second request call information to the third device, where the second request call information is used to trigger the third device to execute the target call service.
In one implementation, after the processor 1510 controls the communication interface 1520 to send the second request call information to the third device, the method further includes: the processor 1510 reflectively invokes the target call service of the third device based on the second request call information.
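A reflective invocation of the kind just described can be sketched with Python's `getattr`. The `MediaService` class, its `play` method, and the request fields are hypothetical names used only for illustration; the patent does not specify a concrete service interface.

```python
class MediaService:
    """Stand-in for a target call service hosted on the third device (hypothetical)."""
    def play(self, track: str) -> str:
        return f"playing {track}"

def reflective_invoke(target, request: dict):
    """Look up the named service method by reflection and invoke it."""
    method = getattr(target, request["method"])   # reflective lookup by method name
    return method(*request.get("args", []))       # invoke with deserialized arguments

svc = MediaService()
print(reflective_invoke(svc, {"method": "play", "args": ["song.mp3"]}))  # prints "playing song.mp3"
```

The same pattern is what makes serialized call information sufficient to trigger a service: the callee resolves the method at runtime from the name carried in the request, rather than from a compile-time binding.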
Referring to fig. 16, fig. 16 is a schematic structural diagram of another electronic device provided in the embodiment of the present application. As shown in fig. 16, the electronic device 1600 includes: a processor 1610, a communication interface 1620, and a memory 1630, which are connected to each other via an internal bus 1640. The processor 1610 may include the processor 110 in fig. 3, the communication interface 1620 may include the mobile communication module 150 and/or the wireless communication module 160 in fig. 3, the memory 1630 may include the internal memory 121 in fig. 3, and the functions of these components/units/modules may refer to fig. 3 and are not described herein again.
The bus 1640 may be a Peripheral Component Interconnect (PCI) bus or an Extended Industry Standard Architecture (EISA) bus, for example. The bus 1640 may be divided into an address bus, a data bus, a control bus, and the like. The bus 1640 may be provided in the processor 110 shown in fig. 3. For ease of illustration, the bus is represented by a single thick line in fig. 16, but this does not mean that there is only one bus or only one type of bus.
The electronic device 1600 may be the second device in fig. 5 and fig. 6, or the second device 820 in fig. 8 and fig. 12; the functions performed by those second devices are in practice performed by the processor 1610 of the electronic device 1600. The memory 1630 is configured to store the program code for executing the foregoing embodiments of the multi-device distributed scheduling method. In an implementation, the memory 1630 may also cache other data, and the processor 1610 controls execution of the program code to implement the functional units shown for the second device, or to implement the method steps of the embodiment shown in fig. 6 in which the second device is the execution subject. The specific steps are as follows:
the processor 1610 controls the communication interface 1620 to send, to the first device, first request call information used to request that the first device be called to communicate with a third device; the electronic device 1600 is a device in a first network, the first network being a network in which the first device establishes a communication connection with at least one device through a first communication manner; the third device is a device in a second network, the second network being a network in which the first device establishes a communication connection with at least one device through a second communication manner.
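On the sending side, the first request call information might be serialized for transport as sketched below. The JSON encoding and the field names are assumptions chosen to mirror the earlier receive-side sketch; the patent does not fix a wire format.

```python
import json

def build_first_request(third_device_id: str, service_name: str, args: list) -> bytes:
    """Serialize first request call information on the second device (sketch)."""
    payload = {
        "device_id": third_device_id,                    # device to reach in the second network
        "service": {"name": service_name, "args": args}, # target call service information
    }
    return json.dumps(payload).encode("utf-8")           # bytes handed to the communication interface

wire = build_first_request("dev-3", "play", ["song.mp3"])
```

The second device never addresses the third device directly; it only names it in the payload and relies on the first device, which sits in both networks, to relay the call.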
In one implementation, the method further includes: the processor 1610 controls the communication interface 1620 to receive an instruction to execute a target call service transmitted by the first device.
Referring to fig. 17, fig. 17 is a schematic structural diagram of another electronic device provided in the embodiment of the present application. As shown in fig. 17, the electronic apparatus 1700 includes: a processor 1710, a communication interface 1720, and a memory 1730, the processor 1710, the communication interface 1720, and the memory 1730 being interconnected via an internal bus 1740. The processor 1710 may include the processor 110 in fig. 3, the communication interface 1720 may include the mobile communication module 150 and/or the wireless communication module 160 in fig. 3, the memory 1730 may include the internal memory 121 in fig. 3, and the functions of these components/units/modules may refer to fig. 3 and are not described herein again.
The bus 1740 may be a Peripheral Component Interconnect (PCI) bus or an Extended Industry Standard Architecture (EISA) bus, for example. The bus 1740 may be divided into an address bus, a data bus, a control bus, and the like. The bus 1740 may be provided in the processor 110 shown in fig. 3. For ease of illustration, the bus is represented by a single thick line in fig. 17, but this does not mean that there is only one bus or only one type of bus.
The electronic device 1700 may be the third device in fig. 5 and fig. 6, or the third device 830 in fig. 8 and fig. 12; the functions performed by those third devices are in practice performed by the processor 1710 of the electronic device 1700. The memory 1730 is configured to store the program code for executing the foregoing embodiments of the multi-device distributed scheduling method. In an implementation, the memory 1730 may also cache other data, and the processor 1710 controls execution of the program code to implement the functional units shown for the third device, or to implement the method steps of the embodiment shown in fig. 6 in which the third device is the execution subject. The specific steps are as follows:
the processor 1710 controls the communication interface 1720 to receive an instruction sent by the first device to execute a target call service; the electronic device 1700 is a device in a second network, and the second network is a network in which the first device establishes a communication connection with at least one device through a second communication method.
In one implementation, the processor 1710 controls the communication interface 1720 to receive an instruction sent by the first device to execute a target call service, including: the processor 1710 controls the communication interface 1720 to receive second request call information sent by the first device, where the second request call information is used to trigger the electronic device 1700 to execute the target call service.
In one implementation, the method further includes: the processor 1710 controls the communication interface 1720 to send third request call information to the first device, the third request call information being for requesting to call the first device to communicate with a fourth device; the fourth device is a device in a first network, and the first network is a network in which the first device establishes a communication connection with at least one device through a first communication mode.
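The third request call information above shows that the relay is symmetric: a device in either network can ask the first device to reach a device in the other network. A minimal routing table for such a bridge device can be sketched as follows; the class, method, and network names are hypothetical.

```python
class BridgeDevice:
    """First device: routes call requests between two networks (sketch)."""
    def __init__(self):
        # network name -> set of device identifiers reachable in that network
        self.networks = {"first": set(), "second": set()}

    def register(self, network: str, device_id: str) -> None:
        """Record that a device is connected in the given network."""
        self.networks[network].add(device_id)

    def route(self, target_id: str) -> str:
        """Return the network in which the target device can be reached."""
        for name, members in self.networks.items():
            if target_id in members:
                return name
        raise KeyError(f"unknown device {target_id}")

bridge = BridgeDevice()
bridge.register("first", "dev-2")   # second device, reached via the first communication manner
bridge.register("second", "dev-3")  # third device, reached via the second communication manner
print(bridge.route("dev-3"))        # prints "second"
```

With such a table, a request from dev-2 naming dev-3 (claims 1 to 4) and a request from dev-3 naming a fourth device in the first network (claim 9) follow the same lookup-then-forward path.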
In the above embodiments, each embodiment is described with its own emphasis; for parts not described in detail in one embodiment, reference may be made to the related descriptions of the other embodiments.
It should be understood that the terms "first", "second", "third", "fourth", and the various other numerical designations herein are merely for convenience of description and are not intended to limit the scope of the present application.
It should be understood that the term "and/or" herein merely describes an association relationship between associated objects and indicates that three relationships may exist. For example, "A and/or B" may mean: A exists alone, both A and B exist, or B exists alone. In addition, the character "/" herein generally indicates an "or" relationship between the preceding and following objects.
It should also be understood that, in the various embodiments of the present application, the sequence numbers of the above-mentioned processes do not mean the execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one position, or may be distributed on multiple network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions of the present application, or the portions thereof that contribute to the prior art, may be embodied in the form of a software product. The software product is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods according to the embodiments of the present application. The aforementioned storage medium includes any medium capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disc.
The steps in the method of the embodiment of the application can be sequentially adjusted, combined and deleted according to actual needs.
The modules in the device can be merged, divided and deleted according to actual needs.
The above embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present application.

Claims (15)

1. A method for multi-device distributed scheduling, the method comprising:
the method comprises the steps that first equipment receives first request calling information sent by second equipment, wherein the first request calling information is used for requesting to call the first equipment to communicate with third equipment; the second device is a device in a first network, and the first network is a network in which the first device establishes communication connection with at least one device through a first communication mode; the third device is a device in a second network, and the second network is a network in which the first device establishes communication connection with at least one device through a second communication mode;
and the first equipment sends an instruction for executing the target calling service to the third equipment based on the first request calling information.
2. The method of claim 1, wherein the first device sending an instruction to the third device to execute a target call service based on the first request call information comprises:
the first device carries out deserialization processing on the first request calling information to obtain a device identifier and target calling service information of the third device;
the first equipment establishes a service calling channel based on the equipment identification of the third equipment;
and the first equipment sends the instruction for executing the target calling service to the third equipment through the service calling channel, wherein the instruction for executing the target calling service comprises the target calling service information.
3. The method of claim 1, wherein the first device sending an instruction to the third device to execute a target call service based on the first request call information comprises:
the first equipment carries out serialization processing on the first request calling information to obtain second request calling information;
and the first equipment sends the second request calling information to the third equipment, wherein the second request calling information is used for triggering the third equipment to execute the target calling service.
4. The method of claim 3, wherein after the first device sends the second request invocation information to the third device, the method further comprises:
and the first equipment reflects and calls the target calling service of the third equipment based on the second request calling information.
5. A method for multi-device distributed scheduling, the method comprising:
the method comprises the steps that a second device sends first request calling information to a first device, wherein the first request calling information is used for requesting to call the first device to communicate with a third device; the second device is a device in a first network, and the first network is a network in which the first device establishes communication connection with at least one device through a first communication mode; the third device is a device in a second network, and the second network is a network in which the first device establishes a communication connection with at least one device through a second communication mode.
6. The method of claim 5, wherein the method further comprises: and the second equipment receives an instruction which is sent by the first equipment and used for executing the target calling service.
7. A method for multi-device distributed scheduling, the method comprising: the third equipment receives an instruction for executing the target calling service sent by the first equipment; the third device is a device in a second network, and the second network is a network in which the first device establishes a communication connection with at least one device through a second communication mode.
8. The method of claim 7, wherein the third device receiving an instruction sent by the first device to perform a target call service comprises:
and the third equipment receives second request calling information sent by the first equipment, wherein the second request calling information is used for triggering the third equipment to execute the target calling service.
9. The method of claim 7, wherein the method further comprises:
the third device sends third request calling information to the first device, wherein the third request calling information is used for requesting to call the first device to communicate with a fourth device; the fourth device is a device in a first network, and the first network is a network in which the first device establishes a communication connection with at least one device through a first communication mode.
10. An electronic device comprising a processor, a memory, and a communication interface, the memory, the processor coupled with the communication interface, the memory to store computer program code, the computer program code comprising computer instructions, the processor to invoke the computer instructions to implement the method implemented by the first device as claimed in any of claims 1-4.
11. An electronic device comprising a processor, a memory, and a communication interface, the memory, the processor coupled with the communication interface, the memory to store computer program code, the computer program code comprising computer instructions, the processor to invoke the computer instructions to implement the method implemented by the second device of any of claims 5-6.
12. An electronic device comprising a processor, a memory, and a communication interface, the memory, the processor coupled with the communication interface, the memory to store computer program code, the computer program code comprising computer instructions, the processor to invoke the computer instructions to implement the method implemented by the third device of any of claims 7-9.
13. A computer-readable storage medium comprising instructions that, when executed on a first device, cause the first device to perform the method of any of claims 1-4.
14. A computer-readable storage medium comprising instructions that, when executed on a second device, cause the second device to perform the method of any of claims 5-6.
15. A computer-readable storage medium comprising instructions that, when executed on a third device, cause the third device to perform the method of any of claims 7-9.
CN202110255387.5A 2021-03-09 2021-03-09 Multi-device distributed scheduling method and related devices Pending CN115048193A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110255387.5A CN115048193A (en) 2021-03-09 2021-03-09 Multi-device distributed scheduling method and related devices


Publications (1)

Publication Number Publication Date
CN115048193A true CN115048193A (en) 2022-09-13

Family

ID=83156496



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination