Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
Fig. 1 is a schematic diagram of a cloud device allocation method according to an embodiment of the present disclosure. The embodiment is applicable to the case where an optimal cloud device is allocated to a client according to a cloud device application request submitted by the client. The method may be executed by an edge node server included in a cloud server cluster, or by a cloud device allocation apparatus, which may be implemented in software and/or hardware and integrated into an edge node server. In this embodiment, the edge node server may be an ARM (Advanced RISC Machines) cloud server, a server of a distributed system, a server combined with a blockchain, or the like. Specifically, referring to fig. 1, the method includes the following steps:
S110, determining a target cloud device among a plurality of cloud devices with which a communication connection is established, according to a cloud device application request of a client.
The client may be an application installed in a mobile terminal such as a mobile phone, a tablet computer, or a smart watch, and may send a cloud device application request to any edge node server in the cloud server cluster, so that the target cloud application selected by the user can be used on the allocated cloud device.
In this embodiment, the cloud device application request may be used to request use of a target cloud application on a cloud device; the target cloud application may be a large-scale game application, a video playback application, a live streaming application, or the like, which is not limited in this embodiment.
It should be noted that, in this embodiment, the cloud server cluster may include a plurality of edge node servers, and each edge node server may simultaneously maintain communication connections with a plurality of cloud devices, for example 80 or 90 cloud devices, which is not limited in this embodiment; meanwhile, each edge node server may include one storage server, and each storage server stores all the cloud applications.
In an optional implementation manner of this embodiment, when the cloud server cluster receives a cloud device application request sent by the client, any edge node server in the cluster may be selected to respond to the request; for example, the first edge node server in the cluster may be selected. Further, the selected edge node server may choose one cloud device from the plurality of cloud devices in communication connection with it as the target cloud device according to the cloud device application request.
In a specific example of this embodiment, when a client needs to use a large-scale game cloud application, it may send a cloud device application request to the cloud server cluster. The cloud server cluster may select a relatively idle edge node server to respond to the request according to the busy degree of each edge node server; for example, the cluster may count the number of cloud devices each edge node server is currently serving and select the edge node server serving the fewest, which improves the response efficiency to the cloud device application request and allows the target cloud device to be determined quickly. Alternatively, the cloud server cluster may randomly select an edge node server to respond to the request. Further, the edge node server may randomly select one of the cloud devices in communication connection with it as the target cloud device, so that the target cloud device provides the client with the large-scale game cloud application.
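The least-busy selection described above can be sketched as follows. This is a minimal illustration, not the disclosed implementation; the dictionary layout and function name are assumptions made for the example.

```python
# Illustrative sketch: select the edge node server currently serving the
# fewest cloud devices, as in the example above.

def select_edge_node_server(working_counts):
    """Return the server id with the smallest count of working cloud devices.

    working_counts: dict mapping edge-node-server id -> number of cloud
    devices that server is currently serving (an assumed data layout).
    """
    return min(working_counts, key=working_counts.get)

# "edge-B" is serving the fewest devices, so it responds to the request.
counts = {"edge-A": 72, "edge-B": 15, "edge-C": 48}
print(select_edge_node_server(counts))  # edge-B
```

A cluster could equally pick a random server, as the paragraph notes; the count-based choice simply trades a little bookkeeping for faster responses.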
S120, if it is determined that the target cloud application is not previously mounted in the target cloud device, forming a mount link corresponding to the target cloud application according to the storage address of the target cloud application.
The target cloud application is the cloud application carried in the cloud device application request, that is, the cloud application the client wants to use, for example a large-scale game application, a live streaming application, or a video playback application, which is not limited in this embodiment.
The storage address of the target cloud application is its storage location in the storage server of the edge node server.
In an optional implementation manner of this embodiment, after the edge node server determines a target cloud device among the plurality of cloud devices with which it has established communication connections according to the cloud device application request, it may further determine whether the target cloud application is previously mounted in the target cloud device, that is, whether the target cloud application is cached in the target cloud device. If it is determined that the target cloud application is not cached in the target cloud device, a mount link corresponding to the target cloud application can be generated according to the storage address of the target cloud application.
It should be noted that, in this embodiment, the mount link corresponding to the target cloud application may be used to copy the files of the target cloud application stored in the storage server to the target cloud device, after which the target cloud application can be used on the target cloud device.
In another optional implementation manner of this embodiment, if it is determined that the target cloud application is already cached in the target cloud device, the identifier of the target cloud device may be fed back to the client directly, which saves the time of establishing a mount link between the target cloud device and the target cloud application, allows the client's cloud device application request to be responded to quickly, and saves algorithm execution time.
S130, caching the target cloud application to the target cloud device through the mount link, and feeding back the identifier of the target cloud device to the client.
The identifier of the target cloud device may be a unique identification code such as an ID (Identity document) or an IP Address (Internet Protocol Address) of the target cloud device, which is not limited in this embodiment.
In an optional implementation manner of this embodiment, after the mount link between the target cloud device and the target cloud application is formed, the target cloud application may be cached from the storage server to the target cloud device through the formed mount link, and then the identifier of the target cloud device may be fed back to the client, so that the client can use the target cloud application through the target cloud device.
In a specific example of this embodiment, if the target cloud application is a large-scale game A, the edge node server determines that cloud device A is the target cloud device among the 90 cloud devices in communication connection with it. Further, if it is determined that cloud device A has not previously cached game A, a mount link between cloud device A and game A may be formed according to the storage address of game A in the storage server of the edge node server. Game A is then cached to cloud device A through the mount link, and the identifier of cloud device A is fed back to the client, so that the client can run game A cached in cloud device A through the identifier of cloud device A.
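The S110-S130 flow can be condensed into a short sketch. Everything here is an illustrative assumption (the `CloudDevice` class, the `mount://` link format, the field names); the disclosure itself does not specify these details.

```python
# Hedged sketch of the S110-S130 flow: check the device cache, form a mount
# link only when the app is not already cached, then return the identifier.

class CloudDevice:
    def __init__(self, identifier):
        self.identifier = identifier
        self.cached_apps = {}  # app name -> mount link it was cached through

def allocate(target_app, device, storage_address):
    """Allocate `device` for `target_app`, caching the app only when needed."""
    if target_app not in device.cached_apps:
        # S120: form a mount link from the app's address in the storage server
        mount_link = f"mount://{storage_address}/{target_app}"
        # S130: cache the application to the device through the mount link
        device.cached_apps[target_app] = mount_link
    # Feed the device identifier back to the client
    return device.identifier

dev = CloudDevice("device-A")
print(allocate("game-A", dev, "storage-server-1"))  # device-A
```

On a second request for the same application, the cached branch is skipped entirely, which is the fast path L15 describes.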
According to the scheme of this embodiment, any edge node server in the cloud server cluster determines a target cloud device among a plurality of cloud devices with which a communication connection is established, according to a cloud device application request of a client; if it is determined that the target cloud application is not previously mounted in the target cloud device, a mount link corresponding to the target cloud application is formed according to the storage address of the target cloud application; the target cloud application is cached to the target cloud device through the mount link, and the identifier of the target cloud device is fed back to the client, so that accurate and efficient allocation of cloud devices is realized and the utilization rate of cloud devices is improved.
Fig. 2 is a schematic diagram of another cloud device allocation method according to an embodiment of the present disclosure, where this embodiment further refines the foregoing technical solution, and the technical solution in this embodiment may be combined with various alternatives in one or more of the foregoing embodiments. As shown in fig. 2, the cloud device allocation method includes the following steps:
S210, acquiring idle cloud devices from a plurality of cloud devices with which a communication connection is established; each cloud device establishes a communication connection with the edge node server through a preset network card.
An idle cloud device is a cloud device not occupied by any client. It can be understood that, in this embodiment, each edge node server may include a plurality of network cards, and the communication connection between each cloud device and the edge node server may be established through one of these network cards. For example, the edge node server may include three network cards: network card A, network card B, and network card C; meanwhile, 25 cloud devices establish communication connections with the edge node server through network card A, 26 through network card B, and 30 through network card C.
In an optional implementation manner of this embodiment, after receiving a cloud device application request of a client, the edge node server may obtain idle cloud devices from the plurality of cloud devices with which it has established communication connections through its network cards.
It should be noted that, in this embodiment, the number and identifiers of the acquired idle cloud devices are not fixed; they change as the operating condition of each cloud device changes.
S220, counting the number of idle cloud devices corresponding to each network card, and acquiring a target network card that meets a preset number threshold condition.
The preset number threshold may be 20, 25, 30, or the like, which is not limited in this embodiment. It can be understood that, in this embodiment, the preset number threshold should be less than or equal to the number of cloud devices corresponding to each network card; for example, if 30 cloud devices establish communication connections with the edge node server through the target network card, the preset number threshold should be less than or equal to 30.
In an optional implementation manner of this embodiment, after the idle cloud devices are obtained, the number of idle cloud devices corresponding to each network card may be counted to determine a target network card that meets the preset number threshold condition.
For example, in the above example, if the preset number threshold is 20, the number of idle cloud devices corresponding to network card A is 10, the number corresponding to network card B is 12, and the number corresponding to network card C is 25 (greater than the preset threshold of 20), network card C may be determined as the target network card.
In another optional implementation manner of this embodiment, after the idle cloud devices are obtained, the number of idle cloud devices corresponding to each network card may be counted and the counts sorted, and the network card with the largest number of idle cloud devices may be determined as the target network card; in this way the target network card can be determined quickly, saving algorithm execution time.
For example, in the above example, if the number of idle cloud devices corresponding to network card A is 10, the number corresponding to network card B is 12, and the number corresponding to network card C is 25, network card C (the network card with the largest number of idle cloud devices) may be determined as the target network card.
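Both variants of S220 can be sketched in a few lines. The dictionary layout and function name are illustrative assumptions; the disclosure only describes the selection rules themselves.

```python
# Illustrative sketch of S220: pick a target network card either by a preset
# idle-count threshold or, when no threshold is given, by the largest count.

def select_target_nic(idle_counts, threshold=None):
    """idle_counts: dict of network card name -> number of idle cloud devices.

    With `threshold`, return the first NIC whose idle count exceeds it
    (None if no NIC qualifies); without it, return the NIC with the most
    idle cloud devices."""
    if threshold is not None:
        for nic, count in idle_counts.items():
            if count > threshold:
                return nic
        return None
    return max(idle_counts, key=idle_counts.get)

counts = {"nic-A": 10, "nic-B": 12, "nic-C": 25}
print(select_target_nic(counts, threshold=20))  # nic-C
print(select_target_nic(counts))                # nic-C
```

Here both rules happen to agree; with a lower threshold the first variant could return a different, merely "good enough" network card.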
S230, acquiring the target cloud device from the idle cloud devices corresponding to the target network card.
In an optional implementation manner of this embodiment, after the target network card is determined, the target cloud device may be obtained from the idle cloud devices corresponding to the target network card. In this embodiment, this may include: judging whether a target idle cloud device that has cached the target cloud application exists among the idle cloud devices corresponding to the target network card; if so, determining that target idle cloud device as the target cloud device; if not, randomly acquiring the target cloud device from the idle cloud devices corresponding to the target network card.
In this embodiment, after the target network card is determined, the cloud applications cached in each idle cloud device under the target network card may be further determined, and whether the target cloud application exists among them may be judged. When it is determined that a reference cloud application consistent with the target cloud application exists among these cloud applications, the idle cloud device corresponding to the reference cloud application may be determined as the target cloud device; if no such reference cloud application exists, one idle cloud device may be randomly determined as the target cloud device among the idle cloud devices corresponding to the target network card.
The advantage of this arrangement is that an idle cloud device that has already cached the target cloud application can be determined as the target cloud device, so the target cloud application does not need to be cached again, which speeds up loading of the cloud application; meanwhile, because the target cloud application does not need to be fetched from the storage server, no traffic passes through the target network card, so the network performance of the edge node server is not affected.
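The cache-first device selection of S230 might look like the following sketch. The pairing of device identifiers with cached-application sets is an assumption made for illustration.

```python
import random

# Hedged sketch of S230: prefer an idle cloud device that already caches the
# target cloud application; otherwise pick one of the idle devices at random.

def select_target_device(idle_devices, target_app, rng=random):
    """idle_devices: list of (device_id, set of cached app names) pairs."""
    for device_id, cached in idle_devices:
        if target_app in cached:
            # Cached copy found: no re-caching, no traffic on the target NIC
            return device_id
    device_id, _ = rng.choice(idle_devices)
    return device_id

devices = [("device-1", set()), ("device-2", {"game-A"})]
print(select_target_device(devices, "game-A"))  # device-2
```

When no device has the application cached, `rng.choice` gives the random fallback the text describes; passing a seeded `random.Random` would make that fallback reproducible for testing.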
S240, if it is determined that the target cloud application is not previously mounted in the target cloud device, forming a mount link corresponding to the target cloud application according to the storage address of the target cloud application.
S250, caching the target cloud application to the target cloud device through the mount link, and feeding back the identifier of the target cloud device to the client.
In the scheme of this embodiment, the edge node server may obtain idle cloud devices from the plurality of cloud devices with which communication connections are established; count the number of idle cloud devices corresponding to each network card and acquire a target network card that meets the preset number threshold condition; and acquire the target cloud device from the idle cloud devices corresponding to the target network card. In this way the target cloud device can be obtained quickly, and the problems that too many cloud devices under the same network card load at the same time, the upper limit of the network card is reached, subsequent cloud device mounting fails, and the cloud application cannot be cached can be avoided.
Fig. 3 is a schematic diagram of another cloud device allocation method according to an embodiment of the present disclosure, where this embodiment further refines the foregoing technical solution, and the technical solution in this embodiment may be combined with various alternatives in one or more of the foregoing embodiments. As shown in fig. 3, the cloud device allocation method includes the following steps:
S310, determining a target cloud device among the plurality of cloud devices with which a communication connection is established, according to the cloud device application request of the client.
S320, determining that the target cloud application is not previously mounted in the target cloud device.
S330, detecting whether the target cloud application is stored in the storage server of the edge node server; if so, forming a mount link corresponding to the target cloud application according to the storage address of the target cloud application in the storage server; otherwise, acquiring a mirror image of the target cloud application from a reference storage server of another edge node, storing the mirror image of the target cloud application in the storage server, and forming a mount link corresponding to the target cloud application according to the storage address of the mirror image of the target cloud application in the storage server.
In an optional implementation manner of this embodiment, when the edge node server determines a target cloud device among a plurality of cloud devices that establish communication connection with the edge node server, and determines that a target cloud application is not cached in the target cloud device, it may be further detected whether the storage server of the edge node server stores the target cloud application; if the storage server of the edge node server stores the target cloud application, a mount link corresponding to the target cloud application can be formed directly according to the storage address of the target cloud application in the storage server.
In another optional implementation manner of this embodiment, in the process of detecting whether the storage server of the edge node server stores the target cloud application, if it is determined that the storage server of the edge node server does not store the target cloud application, the mirror image of the target cloud application may be obtained from the reference storage servers of other edge nodes in the cloud service cluster, and the mirror image of the target cloud application is copied to the storage server; and forming a mounting link corresponding to the target cloud application according to the storage address of the mirror image of the target cloud application in the storage server.
In a specific example of this embodiment, if edge node server A determines that cloud device B is the target cloud device among the 90 cloud devices in communication connection with it, and determines that the target cloud application (cloud application D) is not cached in cloud device B, it may continue to detect whether storage server C of edge node server A stores cloud application D. If storage server C stores cloud application D, a mount link X corresponding to the target cloud application may be formed directly according to the storage address F of cloud application D in storage server C. If it is determined that storage server C does not store cloud application D, a mirror image of cloud application D may be obtained from a reference storage server of another edge node in the cloud server cluster (for example, reference storage server N of edge node server M) and copied to storage server C, and a mount link Y corresponding to the target cloud application may be formed according to the storage address H of the mirror image of cloud application D in storage server C.
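The local-first, mirror-fallback logic of S330 can be sketched as below. The dictionary-based storage model and the `mount://` link format are assumptions for illustration only.

```python
# Hedged sketch of S330: form a mount link from the local storage server,
# fetching and saving a mirror image from a reference store when missing.

def form_mount_link(target_app, local_store, reference_stores):
    """Return a mount link for `target_app`.

    local_store: dict of app name -> storage address on this edge node's
    storage server. reference_stores: list of such dicts held by the
    reference storage servers of other edge nodes.
    """
    if target_app not in local_store:
        # Fetch the mirror image from a reference storage server and
        # store it in the local storage server
        for store in reference_stores:
            if target_app in store:
                local_store[target_app] = store[target_app]
                break
        else:
            raise LookupError(f"{target_app} not found on any edge node")
    return f"mount://{local_store[target_app]}"
```

A local hit corresponds to mount link X in the example above; a miss that copies the mirror corresponds to mount link Y.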
S340, caching the target cloud application to the target cloud device through the mount link, and feeding back the identifier of the target cloud device to the client.
According to the scheme of this embodiment, after it is determined that the target cloud application is not previously mounted in the target cloud device, whether the target cloud application is stored in the storage server of the edge node server may be further detected; if so, a mount link corresponding to the target cloud application is formed according to the storage address of the target cloud application in the storage server; otherwise, a mirror image of the target cloud application is acquired from a reference storage server of another edge node and stored in the storage server, and a mount link corresponding to the target cloud application is formed according to the storage address of the mirror image in the storage server. In this way, the cloud applications in the storage servers of different edge node servers can be shared, and the applications of all edge node servers in the cloud server cluster can be updated by updating the cloud application in one storage server, which saves cloud application update time and provides a basis for subsequent rapid allocation of cloud devices.
Fig. 4 is a schematic diagram of yet another cloud device allocation method according to an embodiment of the present disclosure, where this embodiment further refines the foregoing technical solution, and the technical solution in this embodiment may be combined with various alternatives in one or more of the foregoing embodiments. As shown in fig. 4, the cloud device allocation method includes the following steps:
S410, determining a target cloud device among the plurality of cloud devices with which a communication connection is established, according to the cloud device application request of the client.
S420, if it is determined that the target cloud application is not previously mounted in the target cloud device, forming a mount link corresponding to the target cloud application according to the storage address of the target cloud application.
S430, caching the target cloud application to the target cloud device through the mount link, and feeding back the identifier of the target cloud device to the client.
S440, in response to a cloud device release instruction of the client for the target cloud device, disconnecting the mount link between the target cloud device and the target cloud application; detecting whether other cloud applications besides the target cloud application are cached in the storage area of the target cloud device; and if so, deleting the other cloud applications cached in the storage area.
In an optional implementation manner of this embodiment, after the identifier of the target cloud device is fed back to the client, if a cloud device release instruction of the client for the target cloud device is received, that is, when the client stops using the target cloud device, the mount link between the target cloud device and the target cloud application may be disconnected; whether other cloud applications besides the target cloud application are cached in the storage area of the target cloud device may then be detected, and if so, those other cloud applications may be deleted.
In a specific example of this embodiment, after the identifier of cloud device A is fed back to the client, when the client stops using cloud device A, the mount link between cloud device A and cloud application B may be disconnected; whether other cloud applications besides cloud application B are cached in the storage area of cloud device A may then be detected, and if cloud application C is cached in the storage area of cloud device A, cloud application C may be deleted.
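The release step S440 can be sketched as follows; modeling the device's storage area as a dict of mount links is an assumption made only for this example.

```python
# Hedged sketch of S440: on release, disconnect the mount link for the
# target application and evict every other cached cloud application, so
# only the target application remains in the device's storage area.

def release_device(device_cache, target_app):
    """device_cache: dict of app name -> mount link on the target cloud device.

    Returns the set of evicted application names."""
    # Disconnect the mount link between the device and the target application,
    # keeping the cached copy so the next allocation loads from memory
    device_cache[target_app] = None
    # Delete the other cloud applications cached in the storage area
    evicted = {app for app in device_cache if app != target_app}
    for app in evicted:
        del device_cache[app]
    return evicted

cache = {"app-B": "mount://x", "app-C": "mount://y"}
print(release_device(cache, "app-B"))  # {'app-C'}
```

After release only the target application's cache entry survives, which is what lets the next allocation skip re-caching, as the summary below notes.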
According to the scheme of this embodiment, after the identifier of the target cloud device is fed back to the client, the mount link between the target cloud device and the target cloud application can be disconnected in response to a cloud device release instruction of the client for the target cloud device; whether other cloud applications besides the target cloud application are cached in the storage area of the target cloud device is detected, and if so, those other cloud applications are deleted. This ensures that only the target cloud application is cached in the target cloud device, so that the next time a client applies to use the target cloud application the request can be responded to quickly, and the performance of the target cloud device is not degraded by caching too many cloud applications.
To help those skilled in the art better understand the cloud device allocation method of the present disclosure, the following describes the disclosure with a specific example, which mainly includes the following:
(1) Network storage server: a distributed file system based on Ceph is built for storing cloud applications.
(2) Remote mounting by cloud devices: when the user opens an application on a cloud device, the application on the network storage server is opened through a remote mount (mount).
In this embodiment, by using the network storage scheme, applications do not need to be installed on every cloud device. When edge node servers are deployed, a Ceph mirror image only needs to be made on one edge node server and synchronized to the others, and the cloud devices under these edge node servers can open applications by remote mounting; if an application is updated, only the mirror image on each edge node server needs to be updated. Meanwhile, multiple hard disks can be installed on the network storage server, greatly increasing the number of applications that can be stored, so the applications available to a cloud device are no longer limited by its internal storage space.
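As a rough illustration of the remote-mount step, a CephFS share holding the application files might be mounted with a command like the one assembled below. The monitor address, remote path, mount point, and credential name are placeholder assumptions; the disclosure does not specify the deployment details.

```python
# Illustrative only: build the mount(8) command a cloud device might use to
# remotely mount an application directory from a Ceph-backed storage server.

def mount_app_share(monitor, remote_path, mount_point, user="admin"):
    """Return the argv for a read-only CephFS kernel mount."""
    return [
        "mount", "-t", "ceph",
        f"{monitor}:{remote_path}", mount_point,
        "-o", f"name={user},ro",
    ]

# A real deployment would execute this, e.g. subprocess.run(cmd, check=True).
cmd = mount_app_share("10.0.0.5:6789", "/apps/game-A", "/mnt/game-A")
print(" ".join(cmd))
```

The command is returned rather than executed so the sketch stays side-effect free; credentials in practice come from a keyring or `secretfile` option rather than the bare `name=` shown here.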
It should be noted that although a cloud device can remotely mount an application on the network storage server, the requirement on network speed is very high from the start of loading to its completion; when too many cloud devices on the same server load at the same time, the upper limit of the server's network card is reached, and the remote mount of the application on the cloud device fails.
In this embodiment, to solve this problem, a balanced loading strategy is adopted: the number of concurrently loading cloud device instances under the same network card is controlled; meanwhile, the network card each cloud device is connected to is identified, and when cloud devices are allocated, the number of cloud devices loading on each network card is sorted and a cloud device on the network card with the fewest loading devices is always selected.
It should be further noted that, at first load, the application's files need to be loaded into memory, and network loading is slower than local loading, which causes the slow-loading problem of the network storage scheme.
In this embodiment, a cache scheme is used to solve the slow application loading problem: after a cloud application is loaded for the first time, the cache on the cloud device is retained when the mirror image is unmounted, and the cache is recorded in the system. At the next allocation, a cloud device that has cached the application is preferentially allocated to the client, so that the application can be loaded directly from memory, which is faster; moreover, when the cloud application is loaded this way, no traffic passes through the network card, reducing the impact on the server's network performance.
Fig. 5 is a schematic diagram of a cloud device allocation apparatus according to an embodiment of the present disclosure; the apparatus may perform the cloud device allocation method related in any embodiment of the present disclosure. Referring to fig. 5, the cloud device allocation apparatus 500 includes: a target cloud device determining module 510, a mount link forming module 520, and a target cloud device identifier feedback module 530.
A target cloud device determining module 510, configured to determine a target cloud device among a plurality of cloud devices that establish communication connection according to a cloud device application request of a client; wherein the cloud device application request is for requesting use of a target cloud application on a cloud device;
a mount link forming module 520, configured to form a mount link corresponding to the target cloud application according to a storage address of the target cloud application if it is determined that the target cloud application is not previously mounted in the target cloud device;
a target cloud device identifier feedback module 530, configured to cache the target cloud application to the target cloud device through the mount link, and feed back the identifier of the target cloud device to the client.
According to the scheme of this embodiment, the target cloud device is determined among the plurality of cloud devices with established communication connections by the target cloud device determining module 510; if it is determined that the target cloud application is not previously mounted in the target cloud device, a mount link corresponding to the target cloud application is formed according to the storage address of the target cloud application by the mount link forming module 520; and the target cloud application is cached to the target cloud device through the mount link by the target cloud device identifier feedback module 530, with the identifier of the target cloud device fed back to the client, so that accurate and efficient allocation of cloud devices is realized and the utilization rate of cloud devices is improved.
In an optional implementation manner of this embodiment, the target cloud device determining module 510 includes: an idle cloud device acquisition submodule, a target network card acquisition submodule, and a target cloud device acquisition submodule;
the idle cloud equipment acquisition sub-module is used for acquiring idle cloud equipment from a plurality of cloud equipment which establish communication connection; each cloud device establishes communication connection with an edge node server through a preset network card;
the target network card obtaining submodule is used for counting the quantity value of each idle cloud device corresponding to each network card and obtaining a target network card meeting the preset quantity value threshold condition;
and the target cloud equipment acquisition submodule is used for acquiring the target cloud equipment from each idle cloud equipment corresponding to the target network card.
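The network card selection step can be sketched as below. The function name, the `network_card` field, and the specific threshold condition (here: the card with the most idle devices, provided that count is at or above the threshold) are illustrative assumptions, since the patent leaves the exact condition open.

```python
from collections import Counter

def select_target_network_card(idle_devices, threshold):
    """Count idle cloud devices per network card and pick a target card
    whose idle-device count satisfies the preset threshold condition."""
    counts = Counter(d["network_card"] for d in idle_devices)
    candidates = {card: n for card, n in counts.items() if n >= threshold}
    if not candidates:
        return None  # no card satisfies the threshold condition
    # Choose the card currently serving the most idle devices.
    return max(candidates, key=candidates.get)
```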
In an optional implementation manner of this embodiment, the target cloud device acquisition submodule is specifically configured to determine whether a target idle cloud device that caches the target cloud application exists among the idle cloud devices corresponding to the target network card;
if so, determine the target idle cloud device as the target cloud device;
and if not, randomly acquire the target cloud device from the idle cloud devices corresponding to the target network card.
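A minimal sketch of this cache-first, random-fallback selection follows; the function name and the `cached_apps` field are assumptions introduced for illustration.

```python
import random

def acquire_target_device(idle_devices, target_app, rng=random):
    """Prefer an idle cloud device that already caches the target cloud
    application; otherwise pick one of the idle devices at random."""
    cached = [d for d in idle_devices if target_app in d["cached_apps"]]
    if cached:
        return cached[0]            # reuse an already-cached device
    return rng.choice(idle_devices)  # random fallback
```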
In an optional implementation manner of this embodiment, the mount link forming module 520 is specifically configured to detect whether the storage server of the edge node server stores the target cloud application;
if so, form a mount link corresponding to the target cloud application according to the storage address of the target cloud application in the storage server;
otherwise, acquire a mirror image of the target cloud application from the storage server of another edge node server, store the mirror image of the target cloud application in the storage server, and form the mount link corresponding to the target cloud application according to the storage address of the mirror image of the target cloud application in the storage server.
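The local-first mount-link formation can be sketched as follows. The `mount://` scheme and the dictionary representation of storage servers are assumptions, not the patent's actual protocol; the point is the fallback order: local storage server first, then a mirror fetched from another edge node and stored locally.

```python
def form_mount_link(app, local_store, other_edge_stores):
    """Form a mount link for a cloud application.

    local_store / other_edge_stores map app name -> storage address.
    """
    if app in local_store:
        return f"mount://{local_store[app]}"
    # Not stored locally: fetch the application's mirror image from another
    # edge node's storage server, keep a copy locally, then mount from
    # the local storage address.
    for store in other_edge_stores:
        if app in store:
            local_store[app] = store[app]  # store the mirror image locally
            return f"mount://{local_store[app]}"
    raise LookupError(f"{app} not found on any edge node storage server")
```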
In an optional implementation manner of this embodiment, the apparatus further includes: a mount link disconnecting module, configured to, in response to a cloud device release instruction of the client for the target cloud device, disconnect the mount link between the target cloud device and the target cloud application;
detect whether cloud applications other than the target cloud application are cached in the storage area of the target cloud device;
and if so, delete the other cloud applications cached in the storage area.
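The release flow above can be sketched as below; the device's dictionary layout (`mount_links`, `storage_area`) is an assumption. Note that the target cloud application itself stays cached, which is what allows a later request to reuse it per the cache-first selection described earlier.

```python
def release_device(device, target_app):
    """Handle a cloud device release instruction: disconnect the target
    application's mount link, then delete any other cloud applications
    cached in the device's storage area."""
    device["mount_links"].pop(target_app, None)  # disconnect the mount link
    others = [a for a in device["storage_area"] if a != target_app]
    for app in others:
        device["storage_area"].remove(app)       # delete other cached apps
    return device
```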
The allocation apparatus for cloud devices can execute the allocation method for cloud devices provided by any embodiment of the present disclosure, and has functional modules and beneficial effects corresponding to the executed method. For technical details not described in detail in this embodiment, reference may be made to the allocation method for cloud devices provided in any embodiment of the present disclosure.
Fig. 6 is a schematic diagram of a cloud server cluster system provided in accordance with an embodiment of the present disclosure; any edge node server in the cloud server cluster system may execute the cloud device allocation method described in the foregoing embodiments; referring to fig. 6, a cloud server cluster system 600 may include: at least one edge node server (edge node server 610, edge node server 620, and edge node server 630; it is understood that fig. 6 illustrates only three edge node servers, which is not a limitation of the present embodiment); a cloud device management server 611 and a storage server 612 are deployed in each edge node server;
each cloud device management server 611 establishes communication connection with a plurality of cloud devices, and one or more cloud applications are stored in the storage server 612;
the edge node server is used for realizing the distribution method of the cloud equipment according to any embodiment of the disclosure; it is understood that the edge node server referred to in this embodiment may be the electronic device referred to in this disclosure.
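The cluster layout of fig. 6 can be represented by a small data model; the class and field names below are illustrative assumptions, not part of the disclosure.

```python
from dataclasses import dataclass, field

@dataclass
class EdgeNodeServer:
    """One edge node server: deploys a cloud device management server
    (holding connected cloud devices) and a storage server (holding
    one or more cloud applications)."""
    management_server_devices: list = field(default_factory=list)  # connected cloud devices
    storage_server_apps: dict = field(default_factory=dict)        # app name -> address

@dataclass
class CloudServerCluster:
    """A cluster of at least one edge node server, any of which can
    execute the allocation method."""
    edge_nodes: list = field(default_factory=list)
```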
Any edge node server in the cloud server cluster system can execute the allocation method for cloud devices provided by any embodiment of the present disclosure, and has functional modules and beneficial effects corresponding to the executed method. For technical details not described in detail in this embodiment, reference may be made to the allocation method for cloud devices provided in any embodiment of the present disclosure.
In the technical solution of the present disclosure, the acquisition, storage, and application of the personal information of the users involved all comply with the provisions of relevant laws and regulations, and do not violate public order and good customs.
According to embodiments of the present disclosure, the present disclosure also provides an electronic device, a readable storage medium, and a computer program product.
FIG. 7 illustrates a schematic block diagram of an example electronic device 700 that can be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 7, the device 700 includes a computing unit 701, which can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 702 or a computer program loaded from a storage unit 708 into a Random Access Memory (RAM) 703. In the RAM 703, various programs and data required for the operation of the device 700 can also be stored. The computing unit 701, the ROM 702, and the RAM 703 are connected to each other by a bus 704. An input/output (I/O) interface 705 is also connected to the bus 704.
Various components in the device 700 are connected to the I/O interface 705, including: an input unit 706 such as a keyboard, a mouse, or the like; an output unit 707 such as various types of displays, speakers, and the like; a storage unit 708 such as a magnetic disk, an optical disk, or the like; and a communication unit 709 such as a network card, a modem, a wireless communication transceiver, and the like. The communication unit 709 allows the device 700 to exchange information/data with other devices via a computer network such as the Internet and/or various telecommunication networks.
The computing unit 701 may be a variety of general purpose and/or special purpose processing components with processing and computing capabilities. Some examples of the computing unit 701 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and so forth. The computing unit 701 executes the respective methods and processes described above, such as the allocation method for cloud devices. For example, in some embodiments, the allocation method for cloud devices may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as the storage unit 708. In some embodiments, part or all of the computer program may be loaded onto and/or installed onto the device 700 via the ROM 702 and/or the communication unit 709. When the computer program is loaded into the RAM 703 and executed by the computing unit 701, one or more steps of the cloud device allocation method described above may be performed. Alternatively, in other embodiments, the computing unit 701 may be configured by any other suitable means (e.g., by means of firmware) to perform the cloud device allocation method.
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on a Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: being implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server combined with a blockchain.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel, sequentially, or in different orders, as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved, and the present disclosure is not limited herein.
The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the scope of protection of the present disclosure.