Disclosure of Invention
In view of this, embodiments of the present invention provide a method and an apparatus for model training based on federated learning, an electronic device, and a storage medium, so as to ensure the security of industrial data while maintaining the effectiveness of model training.
Additional features and advantages of embodiments of the invention will be set forth in the detailed description which follows, or in part will be obvious from the description, or may be learned by practice of embodiments of the invention.
In a first aspect of the present disclosure, an embodiment of the present invention provides a model training method based on federated learning, which is executed by each private cloud server and includes:
training a local model based on local data, and sending the trained local model algorithm parameters to a public cloud server, so that the public cloud server verifies whether the received algorithm parameters need to be adopted to update the joint model algorithm parameters;
receiving updated joint model algorithm parameters pushed by the public cloud server;
and if verification determines that the local model needs to be updated with the received joint model algorithm parameters, updating the local model algorithm parameters to the received joint model algorithm parameters.
In one embodiment, verifying whether the local model needs to be updated with the received joint model algorithm parameters includes:
calculating an effect index of the local model using a prior data set to obtain a first index value;
after replacing the local model algorithm parameters with the received joint model algorithm parameters, calculating the effect index of the local model using the prior data set to obtain a second index value;
and determining whether the local model needs to be updated with the received joint model algorithm parameters by comparing the first index value and the second index value.
In an embodiment, the verifying, by the public cloud server, whether the received algorithm parameters need to be adopted to update the joint model algorithm parameters includes:
calculating an effect index of the joint model using a prior data set to obtain a third index value;
after replacing the joint model algorithm parameters with the received local model algorithm parameters, calculating the effect index of the joint model using the prior data set to obtain a fourth index value;
and determining whether the joint model needs to be updated with the received local model algorithm parameters by comparing the third index value and the fourth index value.
In one embodiment, the effect index includes accuracy and/or recall.
In one embodiment, before training the local model based on the local data, the method further includes receiving initial model algorithm parameters sent by the public cloud server;
and training the local model based on the local data includes: training the local model based on the initial model algorithm parameters and the local data.
In a second aspect of the present disclosure, an embodiment of the present invention further provides a federated learning-based model training apparatus configured in each private cloud server, where the apparatus includes:
a local training and parameter uploading unit, configured to train a local model based on local data and send the trained local model algorithm parameters to a public cloud server, so that the public cloud server verifies whether the received algorithm parameters need to be adopted to update the joint model algorithm parameters;
a joint model parameter receiving unit, configured to receive updated joint model algorithm parameters pushed by the public cloud server;
and a verification and updating unit, configured to update the local model algorithm parameters to the received joint model algorithm parameters if verification determines that the local model needs to be updated with the received joint model algorithm parameters.
In an embodiment, the verification and updating unit verifying whether the local model needs to be updated with the received joint model algorithm parameters includes:
calculating an effect index of the local model using a prior data set to obtain a first index value;
after replacing the local model algorithm parameters with the received joint model algorithm parameters, calculating the effect index of the local model using the prior data set to obtain a second index value;
and determining whether the local model needs to be updated with the received joint model algorithm parameters by comparing the first index value and the second index value.
In an embodiment, the verifying, by the public cloud server to which the local training and parameter uploading unit uploads, whether the received algorithm parameters need to be adopted to update the joint model algorithm parameters includes:
calculating an effect index of the joint model using a prior data set to obtain a third index value;
after replacing the joint model algorithm parameters with the received local model algorithm parameters, calculating the effect index of the joint model using the prior data set to obtain a fourth index value;
and determining whether the joint model needs to be updated with the received local model algorithm parameters by comparing the third index value and the fourth index value.
In one embodiment, the effect index includes accuracy and/or recall.
In an embodiment, the apparatus further includes an initial model parameter receiving unit, configured to receive initial model algorithm parameters sent by the public cloud server before the local model is trained based on the local data;
and the local training and parameter uploading unit is configured to train the local model based on the initial model algorithm parameters and the local data.
In a third aspect of the present disclosure, a model training system based on federated learning is provided, including a public cloud server and a plurality of private cloud servers;
each private cloud server trains a local model based on local data, and sends the trained algorithm parameters of the local model to the public cloud server;
the public cloud server verifies whether the received parameters need to be adopted to update the joint model algorithm parameters; if so, it updates the joint model algorithm parameters with the received parameters and pushes the updated joint model algorithm parameters to each private cloud server;
and when a private cloud server receives the joint model algorithm parameters pushed by the public cloud server, it verifies whether the local model needs to be updated with the received joint model algorithm parameters and, if so, updates the local model algorithm parameters to the received joint model algorithm parameters.
In an embodiment, before each private cloud server trains the local model based on the local data, the method further includes:
the public cloud server sends the initial model algorithm parameters to each private cloud server, and each private cloud server trains a local model based on the initial model algorithm parameters and local data.
In a fourth aspect of the present disclosure, an electronic device is provided. The electronic device includes: a processor; and a memory for storing executable instructions that, when executed by the processor, cause the electronic device to perform the method of the first aspect.
In a fifth aspect of the present disclosure, a computer-readable storage medium is provided, on which a computer program is stored, which computer program, when being executed by a processor, carries out the method of the first aspect.
The technical solutions provided by the embodiments of the present invention have the following beneficial effects:
each private cloud server trains a local model based on local data and sends the trained local model algorithm parameters to a public cloud server, so that the public cloud server verifies whether the received algorithm parameters need to be adopted to update the joint model algorithm parameters; the private cloud server receives updated joint model algorithm parameters pushed by the public cloud server; and if verification determines that the local model needs to be updated with the received joint model algorithm parameters, the local model algorithm parameters are updated to the received joint model algorithm parameters. In this way, the security of industrial data can be guaranteed while the effectiveness of model training is maintained.
Detailed Description
To make the technical problems solved, the technical solutions adopted, and the technical effects achieved by the embodiments of the present invention clearer, the technical solutions of the embodiments of the present invention are described in further detail below with reference to the accompanying drawings. The described embodiments are only some, but not all, embodiments of the present invention. All other embodiments obtained by a person skilled in the art based on the embodiments of the present invention without inventive effort fall within the scope of protection of the present invention.
It should be noted that the terms "system" and "network" are often used interchangeably herein in embodiments of the present invention. Reference to "and/or" in embodiments of the invention is intended to include any and all combinations of one or more of the associated listed items. The terms "first", "second", and the like in the description and claims of the present disclosure and in the drawings are used for distinguishing between different objects and not for limiting a particular order.
It should be further noted that, in the embodiments of the present invention, each of the following embodiments may be executed alone, or may be executed in combination with each other, and the embodiments of the present invention are not limited in this respect.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
The technical solutions of the embodiments of the present invention are further described by the following detailed description with reference to the accompanying drawings.
Fig. 1 shows a flow diagram of a model training method based on federated learning according to an embodiment of the present invention, where this embodiment is applicable to a case where a plurality of private cloud servers train a model through federated learning, and the method may be executed by a model training device based on federated learning configured on each private cloud server, as shown in fig. 1, the model training method based on federated learning according to this embodiment includes:
in step S110, a local model is trained based on local data, and the trained local model algorithm parameters are sent to a public cloud server, so that the public cloud server verifies whether the received algorithm parameters need to be used to update the joint model algorithm parameters.
When the public cloud server verifies whether the received algorithm parameters need to be adopted to update the joint model algorithm parameters, it may verify using a prior data set, to determine whether the federated training effect would be better (for example, whether the model accuracy would be higher) if the joint model adopted the local model algorithm parameters trained by the private cloud server. Specifically, the prior data set may be used to calculate an effect index of the joint model to obtain a third index value; the joint model algorithm parameters are then replaced with the received local model algorithm parameters, and the effect index of the joint model with the replaced parameters is calculated using the prior data set to obtain a fourth index value; whether the joint model needs to be updated with the received local model algorithm parameters is determined according to the third index value and the fourth index value.
In step S120, updated joint model algorithm parameters pushed by the public cloud server are received.
In step S130, it is verified whether the local model needs to be updated by using the received parameters of the joint model algorithm, if yes, step S140 is executed, otherwise, step S110 is returned to.
For example, the prior data set may be used to calculate an effect index of the local model to obtain a first index value; after the local model algorithm parameters are replaced with the received joint model algorithm parameters, the effect index of the local model with the replaced parameters is calculated using the prior data set to obtain a second index value; whether the local model needs to be updated with the received joint model algorithm parameters is then determined according to the first index value and the second index value.
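The parameter-swap verification described above can be sketched as follows. Here `evaluate` stands in for the effect-index computation on the prior data set, and all function names are illustrative assumptions rather than the disclosure's actual implementation; the same comparison serves both the local-model check (first/second index values) and the joint-model check (third/fourth index values).

```python
# Hypothetical sketch of the verification step: score the current
# parameters and the candidate parameters on the prior data set, and
# adopt the candidate only if its effect index is strictly higher.

def should_adopt(current_params, candidate_params, evaluate):
    """Return True if the candidate parameters improve the effect index."""
    first_index = evaluate(current_params)     # e.g. first/third index value
    second_index = evaluate(candidate_params)  # e.g. second/fourth index value
    return second_index > first_index

def verify_and_update(local_params, joint_params, evaluate):
    """Replace the local parameters with the joint parameters only if verified."""
    if should_adopt(local_params, joint_params, evaluate):
        return joint_params
    return local_params
```

A model that fails verification simply keeps its current parameters, which matches the "otherwise, return to local training" branch of the flow.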
In step S140, the local model algorithm parameters are updated to the received joint model algorithm parameters.
According to one or more embodiments of the present disclosure, the effect index may include various indicators, including but not limited to the accuracy, recall, etc. of the model's predictions.
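As a concrete illustration of such effect indices, the following sketch computes accuracy and recall on a prior (held-out) data set for a binary classifier. The function name, the data-set representation, and the binary-classification setting are assumptions made for illustration, not part of the disclosure.

```python
# Illustrative sketch: computing an effect index (accuracy or recall)
# of a model on a prior data set of (sample, true_label) pairs.

def effect_index(model_predict, prior_dataset, metric="accuracy"):
    """Evaluate a model on a prior data set.

    model_predict: callable mapping a sample to a predicted label.
    prior_dataset: iterable of (sample, true_label) pairs.
    metric: "accuracy" or "recall" (recall treats label 1 as positive).
    """
    tp = fn = correct = total = 0
    for sample, label in prior_dataset:
        pred = model_predict(sample)
        total += 1
        if pred == label:
            correct += 1
        if label == 1:          # positive class for recall
            if pred == 1:
                tp += 1
            else:
                fn += 1
    if metric == "accuracy":
        return correct / total if total else 0.0
    if metric == "recall":
        return tp / (tp + fn) if (tp + fn) else 0.0
    raise ValueError(f"unknown metric: {metric}")
```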
According to one or more embodiments of the present disclosure, each private cloud server may further receive initial model algorithm parameters issued by the public cloud server before training the local model based on the local data, and train the local model based on the initial model algorithm parameters and the local data, so as to synchronize the initial state of the local models across the private cloud servers.
In this embodiment, each private cloud server trains a local model based on local data and sends the trained local model algorithm parameters to a public cloud server, so that the public cloud server verifies whether the received algorithm parameters need to be adopted to update the joint model algorithm parameters; the private cloud server receives updated joint model algorithm parameters pushed by the public cloud server; and if verification determines that the local model needs to be updated with the received joint model algorithm parameters, the local model algorithm parameters are updated to the received joint model algorithm parameters. In this way, the security of industrial data can be guaranteed while the effectiveness of model training is maintained.
Fig. 2 is a schematic flow chart of a method performed by a model training system based on federated learning according to an embodiment of the present invention, where the system includes a public cloud server and a plurality of private cloud servers. As shown in fig. 2, the model training method based on federated learning according to this embodiment includes:
in step S210, each private cloud server trains a local model based on local data, and sends trained local model algorithm parameters to the public cloud server;
in step S220, the public cloud server verifies whether the received parameters are required to update the joint model algorithm parameters, and if so, updates the joint model algorithm parameters with the received parameters, and pushes the updated joint model algorithm parameters to each private cloud server;
in step S230, if each private cloud server receives the joint model algorithm parameter pushed by the public cloud server, step S240 is executed.
In step S240, it is verified whether the local model needs to be updated by using the received parameters of the joint model algorithm, if yes, step S250 is executed, otherwise, step S210 is returned to.
In step S250, the local model algorithm parameters are updated to the received joint model algorithm parameters.
According to one or more embodiments of the present disclosure, before step S210, the public cloud server may further send initial model algorithm parameters to each private cloud server, and each private cloud server trains a local model based on the initial model algorithm parameters and local data.
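One round of the system flow above (steps S210 to S250) might be sketched as follows. The model representation (a flat parameter list), the training callbacks, and the "adopt only if strictly better" verification rule are all assumptions of this sketch, not the disclosure's concrete implementation.

```python
# Minimal sketch of one training round of the federated system:
# private clouds train and upload, the public cloud verifies and
# updates the joint model, then pushes it back for local verification.

def training_round(joint_params, private_clouds, evaluate):
    """Run one round and return the possibly-updated joint parameters.

    private_clouds: list of dicts with keys "params" (current local
    parameters) and "train" (callable producing new local parameters).
    evaluate: effect-index computation on the prior data set.
    """
    # S210: each private cloud trains locally and uploads its parameters.
    uploads = [cloud["train"](cloud["params"]) for cloud in private_clouds]

    # S220: the public cloud verifies each upload against the joint model.
    for candidate in uploads:
        if evaluate(candidate) > evaluate(joint_params):
            joint_params = candidate

    # S230-S250: each private cloud verifies the pushed joint parameters
    # and updates only if they outperform its current local model.
    for cloud in private_clouds:
        if evaluate(joint_params) > evaluate(cloud["params"]):
            cloud["params"] = joint_params
    return joint_params
```

In practice this loop would repeat over many rounds, with only parameters, never raw data, crossing the private-cloud boundary.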
The above technical solution guarantees the security of industrial data while enabling continuous optimization of industrial data models without opening up the data, thereby improving the effectiveness of AI technology in industrial applications.
Fig. 3A is a schematic diagram of another method performed by a model training system based on federated learning according to an embodiment of the present invention. It is a secure data-model training scheme that performs local model training in a private cloud, integrates and optimizes the local models into a joint model, and then feeds the joint model back to each site.
As shown in fig. 3A, this embodiment mainly adopts technologies such as federated learning, distributed computation, and integration and optimization of algorithm models. Federated learning is a paradigm for collaboratively training machine learning models in a distributed manner: models can be trained cooperatively on a large number of edge devices (clients) without centralizing the training data. It is characterized in that a large number of decentralized devices are linked to a centralized server, the participants have zero trust in each other, and each participant can only access its own local training data. Distributed computation divides a large computation task into many small tasks distributed across multiple machines, after which the results are summarized; in the federated learning process, distributed computation means computing data models at each site and then uploading the models for result summarization. Integration of algorithm models merges the locally computed models into a joint model through federated learning and distributed computation, on the basis of a common parameter definition. In a federated machine learning setting, a global model is initialized and maintained by a central parameter server and shared by the server with the edge devices. To train the global model in a distributed, cooperative manner, each client computes a model update using its local private data and then uploads the updated model to the server, while privacy-sensitive training data remain on the client's own device; through multiple iterations of distributed secure aggregation, the federated learning system trains an integrated model.
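The aggregation step mentioned above is commonly realized with federated averaging, in which the server combines client updates weighted by each client's local data size. The following is a generic FedAvg-style sketch offered for illustration; it is not the disclosure's specific aggregation rule, which verifies uploads against a prior data set instead.

```python
# Generic federated-averaging sketch: weighted average of client
# parameter vectors, weighted by the number of local training samples.

def federated_average(client_updates):
    """Combine client parameter vectors into one global vector.

    client_updates: list of (params, num_samples) pairs, where params
    is a list of floats of the same length for every client.
    """
    total = sum(n for _, n in client_updates)
    dim = len(client_updates[0][0])
    averaged = [0.0] * dim
    for params, n in client_updates:
        weight = n / total          # clients with more data count more
        for i, value in enumerate(params):
            averaged[i] += weight * value
    return averaged
```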
Fig. 3B is a flowchart illustrating a further method for model training based on federated learning according to an embodiment of the present invention; this embodiment is an improved optimization implemented on the basis of the foregoing embodiments. As shown in fig. 3B, the model training method based on federated learning according to this embodiment includes:
in step S301, the factory private cloud starts model training.
In step S302, the trained model algorithm parameters are shared with the joint model.
In step S303, the joint model side verifies whether updating is required; if so, step S305 is performed, otherwise step S304 is performed.
In step S304, the existing model is not updated, and step S306 is executed.
In step S305, the existing joint model is updated.
In step S306, the joint model side pushes the updated model algorithm parameters to the factory.
In step S307, the factory private cloud determines whether the pushed joint model is superior; if so, step S310 is performed, otherwise step S308 is performed.
In step S308, the existing model is not updated, and step S309 is executed.
In step S309, the local training is continued, and the process ends.
In step S310, the local model is iteratively updated.
In this flow, the factory private cloud starts local algorithm model training and shares the trained model with the joint model. The joint model side performs model verification before accepting a factory-shared model: if verification shows that the existing model needs to be updated, the existing model is optimized and updated using the factory-shared information; when multiple factory ends share models, the shared information of the multiple factories can be used for model optimization. The joint model can be pushed periodically to the factory end(s), and the factory end performs model verification upon receiving a push: if the pushed model is superior to the existing local model, the local model is iteratively updated; otherwise, the existing local model is retained and local training continues. In the pushing process, each local model continuously learns and pushes its updated parameters to the joint model in real time; after updating according to these real-time parameters, the joint model pushes the updated model parameters back to each factory end. The parameters follow a unified definition rule, and the algorithm model at the factory's local end is continuously and iteratively updated as the joint model is updated.
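The factory-end decision on a pushed joint model (steps S307 to S310) can be sketched as a small handler. The function name and the `evaluate` callback, which stands in for verification on the prior data set, are hypothetical names introduced for this sketch.

```python
# Sketch of the factory-end handling of pushed joint-model parameters:
# adopt them only if they outperform the current local model.

def on_push(local_params, pushed_params, evaluate):
    """Return (params_to_use, updated) for a pushed joint model.

    evaluate: effect-index computation on the factory's prior data set.
    """
    if evaluate(pushed_params) > evaluate(local_params):
        return pushed_params, True    # S310: iteratively update the local model
    return local_params, False        # S308/S309: keep local model, continue training
```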
The technical solution of this embodiment mainly adopts federated learning, distributed computing, and joint integration of algorithm models. The joint model can be placed on a public cloud; the individual sites do not need to share their data, only their data models, and the public cloud continuously optimizes the joint model, which in turn continuously optimizes each site's model. Throughout the process the data never leave the factory, and sharing the training results of the local data models improves the effectiveness of industrial AI technology. The security of industrial data is thus guaranteed while continuous optimization of the industrial data model is achieved without opening up the data, improving the effectiveness of AI technology in industrial applications.
As an implementation of the methods shown in the above figures, the present application provides an embodiment of a model training apparatus based on federated learning. Fig. 4 shows a schematic structural diagram of the apparatus provided in this embodiment; the apparatus embodiment corresponds to the method embodiment shown in fig. 1, and the apparatus may be applied to various electronic devices in each private cloud server. As shown in fig. 4, the model training apparatus based on federated learning according to this embodiment includes a local training and parameter uploading unit 410, a joint model parameter receiving unit 420, and a verification and update unit 430.
The local training and parameter uploading unit 410 is configured to train a local model based on local data, and send trained local model algorithm parameters to a public cloud server, so that the public cloud server verifies whether the received algorithm parameters need to be used for updating the joint model algorithm parameters.
The federated model parameter receiving unit 420 is configured to receive updated federated model algorithm parameters pushed by the public cloud server.
The verification and update unit 430 is configured to update the local model algorithm parameters to the received joint model algorithm parameters if verification requires updating the local model with the received joint model algorithm parameters.
According to one or more embodiments of the present disclosure, the verifying and updating unit 430 is configured to further:
calculating an effect index of the local model using a prior data set to obtain a first index value;
after replacing the local model algorithm parameters with the received joint model algorithm parameters, calculating the effect index of the local model using the prior data set to obtain a second index value;
and determining whether the local model needs to be updated with the received joint model algorithm parameters by comparing the first index value and the second index value.
According to one or more embodiments of the present disclosure, the verifying, by the public cloud server to which the local training and parameter uploading unit 410 uploads, whether the received algorithm parameters need to be adopted to update the joint model algorithm parameters includes:
calculating an effect index of the joint model using a prior data set to obtain a third index value;
after replacing the joint model algorithm parameters with the received local model algorithm parameters, calculating the effect index of the joint model using the prior data set to obtain a fourth index value;
and determining whether the joint model needs to be updated with the received local model algorithm parameters by comparing the third index value and the fourth index value.
According to one or more embodiments of the present disclosure, the effect index includes accuracy and/or recall.
According to one or more embodiments of the present disclosure, the apparatus further includes an initial model parameter receiving unit, configured to receive initial model algorithm parameters sent by the public cloud server before the local model is trained based on the local data;
and the local training and parameter uploading unit is configured to train the local model based on the initial model algorithm parameters and the local data.
The model training apparatus based on federated learning provided by this embodiment can execute the model training method based on federated learning provided by the method embodiments of the present disclosure, and has the corresponding functional modules and the beneficial effects of executing the method.
Fig. 5 is a schematic structural diagram of another federated learning-based model training apparatus according to an embodiment of the present invention. As shown in fig. 5, the federated learning-based model training apparatus according to this embodiment includes an initial model parameter receiving unit 510, a local training and parameter uploading unit 520, a joint model parameter receiving unit 530, and a verification and update unit 540.
The initial model parameter receiving unit 510 is configured to receive initial model algorithm parameters sent by a public cloud server.
The local training and parameter uploading unit 520 is configured to train a local model based on the initial model algorithm parameters and local data, and to send the trained local model algorithm parameters to the public cloud server, so that the public cloud server verifies whether the received algorithm parameters need to be adopted to update the joint model algorithm parameters.
The federated model parameter receiving unit 530 is configured to receive updated federated model algorithm parameters pushed by the public cloud server.
The verification and update unit 540 is configured to update the local model algorithm parameters to the received joint model algorithm parameters if verification requires updating the local model with the received joint model algorithm parameters.
According to one or more embodiments of the present disclosure, the verification and update unit 540, in verifying whether the local model needs to be updated with the received joint model algorithm parameters, is configured to:
calculate an effect index of the local model using a prior data set to obtain a first index value;
after replacing the local model algorithm parameters with the received joint model algorithm parameters, calculate the effect index of the local model using the prior data set to obtain a second index value;
and determine whether the local model needs to be updated with the received joint model algorithm parameters by comparing the first index value and the second index value.
In accordance with one or more embodiments of the present disclosure, the verifying, by the public cloud server to which the local training and parameter uploading unit 520 uploads, whether the received algorithm parameters need to be adopted to update the joint model algorithm parameters includes:
calculating an effect index of the joint model using a prior data set to obtain a third index value;
after replacing the joint model algorithm parameters with the received local model algorithm parameters, calculating the effect index of the joint model using the prior data set to obtain a fourth index value;
and determining whether the joint model needs to be updated with the received local model algorithm parameters by comparing the third index value and the fourth index value.
According to one or more embodiments of the present disclosure, the effect index includes accuracy and/or recall.
The model training apparatus based on federated learning provided by this embodiment can execute the model training method based on federated learning provided by the method embodiments of the present disclosure, and has the corresponding functional modules and the beneficial effects of executing the method.
Referring now to FIG. 6, a block diagram of an electronic device 600 suitable for implementing embodiments of the present invention is shown. The terminal device in the embodiments of the present invention may be, for example, a mobile device, a computer, a vehicle-mounted device built into a floating car, or any combination thereof. In some embodiments, the mobile device may include, for example, a cell phone, a smart home device, a wearable device, a smart mobile device, a virtual reality device, or the like, or any combination thereof. The electronic device shown in fig. 6 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present invention.
As shown in fig. 6, the electronic device 600 may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 601 that may perform various appropriate actions and processes in accordance with a program stored in a read-only memory (ROM) 602 or a program loaded from a storage means 608 into a random access memory (RAM) 603. Various programs and data necessary for the operation of the electronic device 600 are also stored in the RAM 603. The processing means 601, the ROM 602, and the RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
Generally, the following devices may be connected to the I/O interface 605: input devices 606 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 607 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 608 including, for example, tape, hard disk, etc.; and a communication device 609. The communication means 609 may allow the electronic device 600 to communicate with other devices wirelessly or by wire to exchange data. While fig. 6 illustrates an electronic device 600 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, according to an embodiment of the present invention, the processes described above with reference to the flowcharts may be implemented as a computer software program. For example, embodiments of the invention include a computer program product comprising a computer program embodied on a computer-readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication means 609, or may be installed from the storage means 608, or may be installed from the ROM 602. The computer program, when executed by the processing means 601, performs the above-described functions defined in the method of an embodiment of the invention.
It should be noted that the computer readable medium mentioned above may be a computer readable signal medium, a computer readable storage medium, or any combination of the two. A computer readable storage medium may be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In embodiments of the invention, a computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. In yet another embodiment of the invention, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electromagnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
The computer readable medium may be embodied in the above electronic device, or may exist separately without being incorporated into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: train a local model based on local data, and send the trained local model algorithm parameters to a public cloud server, so that the public cloud server verifies whether the received algorithm parameters need to be adopted to update the algorithm parameters of the joint model; receive updated joint model algorithm parameters pushed by the public cloud server; and if verification determines that the received joint model algorithm parameters need to be adopted to update the local model, update the local model algorithm parameters to the received joint model algorithm parameters.
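The client-side update check described above, in which the private cloud server compares an effect index of the current local model against the same index computed after substituting the pushed joint-model parameters, can be sketched as follows. This is a minimal illustrative sketch only: the linear model, the mean-squared-error effect index, and all function and variable names are assumptions for illustration, not the claimed implementation.

```python
# Hypothetical sketch of the private-cloud-side verification step: adopt the
# joint-model parameters pushed by the public cloud server only if they score
# at least as well as the current local parameters on a prior data set.

def evaluate(params, prior_dataset):
    """Effect index (assumed here to be mean squared error, lower is better)
    of a simple linear model y = w*x + b on the prior data set."""
    w, b = params
    return sum((w * x + b - y) ** 2 for x, y in prior_dataset) / len(prior_dataset)

def maybe_update_local_model(local_params, joint_params, prior_dataset):
    """Compare the first index value (current local model) with the second
    index value (after substituting the joint-model parameters) and decide
    whether to update the local model."""
    first_index = evaluate(local_params, prior_dataset)   # before substitution
    second_index = evaluate(joint_params, prior_dataset)  # after substitution
    if second_index <= first_index:
        return joint_params, True   # adopt the pushed joint-model parameters
    return local_params, False      # keep the existing local parameters

# Example with prior data drawn from y = 2x + 1 (illustrative values).
prior = [(x, 2 * x + 1) for x in range(5)]
local = (1.8, 0.9)   # locally trained parameters (assumed)
joint = (2.0, 1.0)   # parameters pushed by the public cloud server (assumed)
new_params, updated = maybe_update_local_model(local, joint, prior)
```

In this sketch the joint parameters fit the prior data exactly, so the second index value is lower and the local model is updated; had the local model scored better, the pushed parameters would be discarded and the local parameters retained.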
Computer program code for carrying out operations for embodiments of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present invention may be implemented by software or hardware. In some cases, the name of a unit does not constitute a limitation of the unit itself; for example, the first retrieving unit may also be described as a "unit for retrieving at least two internet protocol addresses".
The foregoing description is only a preferred embodiment of the invention and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the disclosure in the embodiments of the present invention is not limited to the specific combinations of the above-described features, but also encompasses other embodiments formed by any combination of the above-described features or their equivalents without departing from the spirit of the disclosure. For example, technical solutions formed by interchanging the above-described features with (but not limited to) features having similar functions disclosed in the embodiments of the present invention are also encompassed.