Disclosure of Invention
In view of this, embodiments of the present invention provide a method and an apparatus for model training based on federated learning, an electronic device, and a storage medium, so as to avoid leakage of large amounts of industrial data while ensuring the effectiveness of model training.
Additional features and advantages of embodiments of the invention will be set forth in the detailed description which follows, or in part will be obvious from the description, or may be learned by practice of embodiments of the invention.
In a first aspect of the present disclosure, an embodiment of the present invention provides a model training method based on federated learning, which is executed by each private cloud server and includes:
acquiring local data, identifying the acquired local data with a local model, generating a sample set according to the identification result, and sharing the sample set with a public cloud server, so that the public cloud server trains a joint model using the sample set and shares the joint model, wherein the data volume of the sample set is less than that of the local data; and
downloading the joint model, and replacing the local model with the downloaded joint model.
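The client-side steps of the first aspect can be sketched as follows (an illustrative Python sketch only; the model interface and all names are hypothetical assumptions, not part of the claimed method):

```python
# Hypothetical sketch: keep only records the local model cannot identify or
# misidentifies, so the shared sample set is smaller than the full local data.

def build_sample_set(local_data, local_model):
    sample_set = []
    for record, true_label in local_data:
        predicted = local_model(record)
        # Unidentifiable (None) or misidentified records are shared upstream.
        if predicted is None or predicted != true_label:
            sample_set.append((record, true_label))
    return sample_set

# Toy local model: only "recognizes" records it has memorized.
known = {"a": 1, "b": 2}
local_model = lambda r: known.get(r)

data = [("a", 1), ("b", 2), ("c", 3), ("d", 4)]
shared = build_sample_set(data, local_model)
print(shared)  # → [('c', 3), ('d', 4)] — smaller than the local data
```

In a real deployment the local model would be an actual classifier and the resulting sample set would be encrypted before being shared, as described in the encryption embodiments of this aspect.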
In one embodiment, generating the sample set according to the identification result and sharing the sample set with the public cloud server includes:
generating a sample set from the data that the local model cannot identify, and sharing the sample set with the public cloud server; and/or
generating a sample set from the data that the local model misidentifies, and sharing the sample set with the public cloud server.
In one embodiment, after acquiring the local data, the method further includes: training the local model based on the local data, and sending the trained local model's algorithm parameters to the public cloud server, so that the public cloud server verifies whether the received algorithm parameters should be used to update the joint model's algorithm parameters.
In one embodiment, the verifying, by the public cloud server, whether the received algorithm parameters should be used to update the joint model's algorithm parameters includes:
calculating a performance indicator of the joint model on an a priori data set to obtain a first indicator value;
replacing the joint model's algorithm parameters with the received local model algorithm parameters, and calculating the performance indicator of the joint model with the replaced parameters on the a priori data set to obtain a second indicator value; and
determining, by comparing the first indicator value with the second indicator value, whether the received local model algorithm parameters should be used to update the joint model.
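This server-side verification can be sketched as follows (an illustrative sketch only; the accuracy-style indicator and the lookup-table "model" are simplifying assumptions standing in for a real model and metric):

```python
# Hypothetical sketch of the parameter-update verification step.

def evaluate(params, prior_data):
    """Indicator value: fraction of the a priori data set identified
    correctly by a model parameterized here as a simple lookup table."""
    correct = sum(1 for x, y in prior_data if params.get(x) == y)
    return correct / len(prior_data)

def should_update(joint_params, local_params, prior_data):
    first = evaluate(joint_params, prior_data)   # first indicator value
    second = evaluate(local_params, prior_data)  # second indicator value
    return second > first                        # adopt only if it improves

prior = [("a", 1), ("b", 2), ("c", 3)]
joint = {"a": 1, "b": 2}           # joint model identifies 2 of 3
local = {"a": 1, "b": 2, "c": 3}   # uploaded parameters identify all 3
print(should_update(joint, local, prior))  # → True
```

The comparison direction (strictly greater) is one reasonable policy; a tolerance or tie-breaking rule could equally be used.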
In one embodiment, the performance indicator includes accuracy and/or recall.
In one embodiment, generating the sample set according to the identification result and sharing the sample set with the public cloud server includes:
generating a sample set according to the identification result, encrypting the sample set, and uploading the encrypted sample set to a blockchain for storage, so as to share the encrypted sample set with the public cloud server.
In a second aspect of the present disclosure, an embodiment of the present invention further provides a federated learning-based model training apparatus configured in each private cloud server, where the apparatus includes:
a training sample uploading unit, configured to acquire local data, identify the acquired local data with a local model, generate a sample set according to the identification result, and share the sample set with a public cloud server, so that the public cloud server trains a joint model using the sample set and shares the joint model, wherein the data volume of the sample set is smaller than that of the local data; and
a model downloading unit, configured to download the joint model and replace the local model with the downloaded joint model.
In one embodiment, the training sample uploading unit being configured to generate the sample set according to the identification result and share the sample set with the public cloud server includes:
generating a sample set from the data that the local model cannot identify, and sharing the sample set with the public cloud server; and/or
generating a sample set from the data that the local model misidentifies, and sharing the sample set with the public cloud server.
In one embodiment, the training sample uploading unit is further configured to, after acquiring the local data, train the local model based on the local data and send the trained local model's algorithm parameters to the public cloud server, so that the public cloud server verifies whether the received algorithm parameters should be used to update the joint model's algorithm parameters.
In one embodiment, in the training sample uploading unit, the verifying, by the public cloud server, whether the received algorithm parameters should be used to update the joint model's algorithm parameters includes:
calculating a performance indicator of the joint model on an a priori data set to obtain a first indicator value;
replacing the joint model's algorithm parameters with the received local model algorithm parameters, and calculating the performance indicator of the joint model with the replaced parameters on the a priori data set to obtain a second indicator value; and
determining, by comparing the first indicator value with the second indicator value, whether the received local model algorithm parameters should be used to update the joint model.
In one embodiment, the performance indicator includes accuracy and/or recall.
In one embodiment, the training sample uploading unit being configured to generate the sample set according to the identification result and share the sample set with the public cloud server includes:
generating a sample set according to the identification result, encrypting the sample set, and storing the encrypted sample set on a blockchain, so as to share the encrypted sample set with the public cloud server.
In a third aspect of the present disclosure, an embodiment of the present invention further provides a model training system based on federated learning, including a public cloud server and a plurality of private cloud servers, wherein:
each private cloud server acquires local data, identifies the acquired local data with a local model, generates a sample set according to the identification result, and shares the sample set with the public cloud server, wherein the data volume of the sample set is smaller than that of the local data;
the public cloud server trains a joint model using the sample set and shares the joint model; and
each private cloud server downloads the joint model from the public cloud server and replaces the local model with the downloaded joint model.
In one embodiment, before the public cloud server trains the joint model using the sample set, the method further includes determining whether training on the sample set is needed; and
if it is determined that training on the sample set is needed, training the joint model using the sample set.
In one embodiment, generating the sample set according to the identification result and sharing the sample set with the public cloud server includes:
generating a sample set according to the identification result, encrypting the sample set, and uploading the encrypted sample set to a blockchain for storage, so as to share the encrypted sample set with the public cloud server.
In a fourth aspect of the present disclosure, an electronic device is provided. The electronic device includes: a processor; and a memory for storing executable instructions that, when executed by the processor, cause the electronic device to perform the method of the first aspect.
In a fifth aspect of the present disclosure, a computer-readable storage medium is provided, on which a computer program is stored, and the computer program, when executed by a processor, implements the method of the first aspect.
The technical solutions provided by the embodiments of the present invention have the following beneficial technical effects:
According to the embodiments of the present invention, each private cloud server acquires local data, identifies the acquired local data with a local model, generates a sample set according to the identification result, and shares the sample set with a public cloud server, so that the public cloud server trains a joint model using the sample set and shares the joint model, wherein the data volume of the sample set is less than that of the local data; each private cloud server then downloads the joint model and replaces the local model with the downloaded joint model. The technical solutions of the embodiments of the present invention can avoid leakage of large amounts of industrial data while ensuring the effectiveness of model training.
Detailed Description
In order to make the technical problems solved, the technical solutions adopted, and the technical effects achieved by the embodiments of the present invention clearer, the technical solutions of the embodiments of the present invention are described in further detail below with reference to the accompanying drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by a person skilled in the art based on the embodiments of the present invention without inventive effort fall within the scope of protection of the embodiments of the present invention.
It should be noted that the terms "system" and "network" are often used interchangeably herein in embodiments of the present invention. Reference to "and/or" in embodiments of the invention is intended to include any and all combinations of one or more of the associated listed items. The terms "first", "second", and the like in the description and claims of the present disclosure and in the drawings are used for distinguishing between different objects and not for limiting a particular order.
It should be further noted that, in the embodiments of the present invention, each of the following embodiments may be executed alone, or may be executed in combination with each other, and the embodiments of the present invention are not limited in this respect.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
The technical solutions of the embodiments of the present invention are further described by the following detailed description with reference to the accompanying drawings.
Fig. 1 is a schematic flowchart of a method of a model training system based on federated learning according to an embodiment of the present invention, which is optimized and improved on the basis of the foregoing embodiments. As shown in Fig. 1, the method of the model training system based on federated learning according to this embodiment includes:
In step S110, each private cloud server acquires local data, identifies the acquired local data with a local model, and generates a sample set according to the identification result to share with the public cloud server, where the data volume of the sample set is smaller than that of the local data.
When generating the sample set according to the identification result and sharing it with the public cloud server, each private cloud server may generate the sample set from the data that the local model cannot identify, from the data that the local model misidentifies, or from both, and share it with the public cloud server. This reduces the volume of data each private cloud server uploads to the public cloud server and thus helps prevent leakage of large amounts of industrial data.
According to one or more embodiments of the present disclosure, when the sample set is generated according to the identification result and shared with the public cloud server, the sample set may be encrypted and then uploaded to a blockchain for storage so as to be shared with the public cloud server, which further improves the security and traceability of the data uploaded by each private cloud server.
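The "encrypt, then chain" step can be sketched as follows. This is a deliberately simplified illustration: the XOR cipher and block layout below are toy assumptions and are NOT cryptographically secure; a real system would use a vetted cipher and an actual blockchain platform.

```python
import hashlib

# Toy sketch of "encrypt the sample set, then store it on a blockchain".

def toy_encrypt(data: bytes, key: bytes) -> bytes:
    # XOR keystream — for illustration only, not secure.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def append_block(chain, payload: bytes):
    # Each block commits to the previous block's hash, giving traceability.
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    block_hash = hashlib.sha256(prev_hash.encode() + payload).hexdigest()
    chain.append({"prev": prev_hash, "payload": payload, "hash": block_hash})
    return chain

key = b"secret"
sample_set = b"unrecognized industrial samples"
chain = append_block([], toy_encrypt(sample_set, key))

# A party holding the shared key can recover the sample set; the hash chain
# lets anyone verify that stored blocks have not been tampered with.
assert toy_encrypt(chain[0]["payload"], key) == sample_set
```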
In step S120, the public cloud server trains a joint model using the sample set and shares the joint model.
The local model on each private cloud server is obtained by downloading the joint model from the public cloud server, so the accuracy of each local model is not higher than that of the joint model on the public cloud server. Any data that a local model can correctly identify can also be identified by the joint model; such data contributes little to training the joint model and can hardly enhance its identification capability. Therefore, the data that each local model can already identify can be excluded, which reduces the data volume uploaded to the public cloud server, lowers bandwidth occupation, and reduces communication overhead.
In step S130, each private cloud server downloads the joint model from the public cloud server and replaces the local model with the downloaded joint model.
After the joint model is trained with the data uploaded by each private cloud server, it acquires a stronger identification capability. When each private cloud server updates its local model to this joint model, the identification capability for local data improves, so the amount of unidentifiable or misidentified data decreases accordingly, as does the data uploaded to the public cloud server, further reducing bandwidth occupation and communication overhead.
According to one or more embodiments of the present disclosure, before the public cloud server trains the joint model using the sample set, it may further determine whether training on the sample set is needed, and if so, train the joint model using the sample set.
According to one or more embodiments of the present disclosure, when the sample set is generated according to the identification result and shared with the public cloud server, the sample set may be encrypted and then uploaded to a blockchain for storage so as to be shared with the public cloud server.
According to the embodiments of the present invention, each private cloud server acquires local data, identifies the acquired local data with a local model, generates a sample set according to the identification result, and shares the sample set with the public cloud server; the public cloud server trains a joint model using the sample set and shares the joint model; and each private cloud server downloads the joint model and replaces the local model with it, thereby ensuring the effectiveness of model training while avoiding leakage of large amounts of industrial data.
Fig. 2 is an interaction diagram of a federated learning-based model training system according to an embodiment of the present invention. In this system, each participating factory encrypts its own uncertain data and provides it to a virtual model for training; the optimized result is then fed back to each participant for sharing. As shown in Fig. 2, the technical solution of this embodiment mainly adopts technologies such as federated learning and blockchain: each participating factory provides only a part of its own uncertain data, in encrypted form, for virtual model training and optimization, and the optimized model is fed back to each participant for sharing. Because each factory shares only part of its uncertain data and encrypts it, data privacy can be protected to the greatest extent, and a machine learning model meeting the performance requirements can be obtained with low communication overhead.
Fig. 3 is a schematic flowchart of a method of a model training system based on federated learning according to an embodiment of the present invention. As shown in Fig. 3, the method of the model training system based on federated learning according to this embodiment includes:
In step S301, each factory collects data on a daily basis.
In step S302, the factory determines whether the data needs to be uploaded, if so, step S304 is executed, otherwise, step S303 is executed.
In step S303, the process returns to step S301 without uploading data.
In step S304, data is uploaded to the federated model.
In step S305, it is determined on the joint model side whether the collected data needs to be used for training; if so, step S306 is executed; otherwise, the process ends.
In step S306, the joint model is trained, updated, and optimized.
In step S307, each participating factory downloads the optimized model.
In the technical solution of this embodiment, each participating factory first trains a local algorithm model; the trained models are shared to form a joint model, and each factory sends the uncertain data in its data set that its own model cannot identify to the joint model. The joint model thus collects the uncertain data sets of all participating factories. These data sets are effective data that help train and optimize the model, and the joint model uses them for training and optimization to obtain a model better than the one before training. The optimized joint model can be directly downloaded by each participating factory, and it can identify not only the data sets that each factory's own model previously could not identify, but also the data that the other participating factories could not identify. For example, suppose factory A previously could not identify the data set [a, b, c], factory B could not identify [d, e, f, g], and factory C could not identify [h, i, j, k, l, m, n]; after the joint model collects these data and is trained and optimized, it can identify [a, b, c, d, e, f, g, h, i, j, k, l, m, n]. After each factory downloads the joint model and replaces its local model with it, factory A can identify not only its own previously unidentifiable data set [a, b, c] but also [d, e, f, g, h, i, j, k, l, m, n], which factories B and C could not identify; similarly, factories B and C can also identify all the data sets that any participating factory previously could not identify.
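The worked example above can be restated compactly with sets: after joint training, every factory inherits the union of all previously unrecognizable data.

```python
# The three factories' previously unrecognizable data sets from the example.
factory_a = {"a", "b", "c"}
factory_b = {"d", "e", "f", "g"}
factory_c = {"h", "i", "j", "k", "l", "m", "n"}

# After training on all uploaded uncertain data, the joint model's coverage
# is the union of the three sets.
joint_model_coverage = factory_a | factory_b | factory_c
print(sorted(joint_model_coverage))
# → ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n']
```

Downloading the joint model gives each factory this full coverage, including data it never saw locally.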
The above is the first round of model training. From the second round onward, each participating factory only needs to send its unidentifiable, uncertain data sets to the joint model in real time; the models themselves no longer need to be shared. After collecting new data, the joint model determines whether the data needs to be used for training, removes data that duplicates its existing data set, and performs model optimization using only the new data that requires training. The joint model is thus iteratively updated with the data sets uploaded by the participants, and each participating factory can download the optimized model.
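The incremental deduplication step described above can be sketched as follows (function and variable names are illustrative assumptions):

```python
# Sketch: the joint model side filters newly uploaded samples against the
# data it has already trained on, and trains only on genuinely new data.

def select_new_samples(uploaded, already_trained):
    seen = set(already_trained)
    return [s for s in uploaded if s not in seen]

already_trained = ["a", "b", "c"]
uploaded = ["b", "c", "x", "y"]  # factories upload only their uncertain data
to_train = select_new_samples(uploaded, already_trained)
print(to_train)  # → ['x', 'y'] — overlapping samples are removed
```

Filtering before training avoids repeated uploads wasting bandwidth and avoids re-optimizing on data the joint model already covers.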
In this process, each participating factory only needs to upload part of its uncertain data and encrypts it via blockchain chaining, so data privacy can be protected to the greatest extent and industrial data security is maintained; only a small bandwidth is occupied, reducing communication overhead, while effective data is contributed with maximum efficiency, facilitating rapid model optimization.
In the above technical solution, each participant uploads only part of its uncertain data, in encrypted form, which guarantees industrial data security while minimizing communication overhead. The virtual model can collect all of the high-quality, effective data and is continuously trained to obtain an optimized model, which is fed back to every factory so that all share the optimal model. Because little data is transmitted, nothing is uploaded repeatedly, and the occupied bandwidth is small, the communication cost is low.
Fig. 4 shows a flowchart of a model training method based on federated learning according to an embodiment of the present invention. This embodiment is applicable to the case where each private cloud server performs model training based on federated learning, and the method may be executed by a federated learning-based model training apparatus configured in each private cloud server. As shown in Fig. 4, the model training method based on federated learning according to this embodiment includes:
In step S410, local data is acquired; after the acquired local data is identified by the local model, a sample set is generated according to the identification result and shared with the public cloud server, so that the public cloud server trains the joint model using the sample set and shares the joint model, wherein the data volume of the sample set is smaller than that of the local data.
The sample set may be generated from the data that the local model cannot identify and/or the data that the local model misidentifies, and shared with the public cloud server.
The specific sharing method may take multiple forms; for example, the sample set may be generated according to the identification result, encrypted, and stored on a blockchain so as to be shared with the public cloud server.
In step S420, the joint model is downloaded, and the local model is replaced with the downloaded joint model.
In this embodiment, each private cloud server acquires local data, identifies it with a local model, generates a sample set according to the identification result, and shares the sample set with a public cloud server, so that the public cloud server trains a joint model using the sample set and shares it, wherein the data volume of the sample set is less than that of the local data; the joint model is then downloaded and replaces the local model. The technical solution of this embodiment can avoid leakage of large amounts of industrial data while ensuring the effectiveness of model training.
Fig. 5 is a flowchart of another federated learning-based model training method according to an embodiment of the present invention, which is optimized and improved on the basis of the foregoing embodiments. As shown in Fig. 5, the model training method based on federated learning according to this embodiment includes:
in step S510, local data is acquired. Local incremental data may be periodically acquired for model training.
In step S520, the local model is trained based on the local data, and the trained local model's algorithm parameters are sent to the public cloud server, so that the public cloud server verifies whether the received algorithm parameters should be used to update the joint model's algorithm parameters.
The public cloud server may verify this in multiple ways. For example, it may calculate a performance indicator of the joint model on an a priori data set to obtain a first indicator value; replace the joint model's algorithm parameters with the received local model algorithm parameters and calculate the performance indicator of the joint model with the replaced parameters on the a priori data set to obtain a second indicator value; and then determine, according to the first and second indicator values, whether the received local model algorithm parameters should be used to update the joint model.
The performance indicator measures the predictive capability of the model. It may be measured from various angles, which this embodiment does not limit; it may include one or more indicators such as accuracy and recall.
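The two indicators named here can be sketched as follows (an illustrative sketch computed from prediction/label pairs; the toy data is an assumption for demonstration):

```python
# Minimal sketches of the accuracy and recall indicators.

def accuracy(preds, labels):
    # Fraction of all predictions that match the label.
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

def recall(preds, labels, positive=1):
    # Fraction of actual positives that the model found.
    true_pos = sum(p == positive and y == positive
                   for p, y in zip(preds, labels))
    actual_pos = sum(y == positive for y in labels)
    return true_pos / actual_pos if actual_pos else 0.0

preds  = [1, 0, 1, 1, 0]
labels = [1, 1, 1, 0, 0]
print(accuracy(preds, labels))  # → 0.6 (3 of 5 correct)
print(recall(preds, labels))    # 2 of 3 positives found
```

Either indicator (or a combination) can serve as the first and second indicator values in the verification step above.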
In step S530, after the acquired local data is identified by the local model, a sample set is generated according to the identification result and shared with the public cloud server, so that the public cloud server trains the joint model using the sample set and shares the joint model, wherein the data volume of the sample set is smaller than that of the local data.
The sample set may be generated from the data that the local model cannot identify and/or the data that the local model misidentifies, and shared with the public cloud server.
The specific sharing method may take multiple forms; for example, the sample set may be generated according to the identification result, encrypted, and stored on a blockchain so as to be shared with the public cloud server.
In step S540, the joint model is downloaded, the local model is replaced with the downloaded joint model, and the process returns to step S510.
In this embodiment, each private cloud server acquires local data, identifies it with a local model, generates a sample set according to the identification result, and shares the sample set with a public cloud server, so that the public cloud server trains a joint model using the sample set and shares it, wherein the data volume of the sample set is less than that of the local data; the joint model is then downloaded and replaces the local model, thereby ensuring the effectiveness of model training while avoiding leakage of large amounts of industrial data.
As an implementation of the methods shown in the above figures, the present application provides an embodiment of a model training apparatus based on federated learning. Fig. 6 shows a schematic structural diagram of the federated learning-based model training apparatus provided in this embodiment. This apparatus embodiment corresponds to the method embodiments shown in Fig. 4 and Fig. 5, and the apparatus may be applied to various electronic devices. As shown in Fig. 6, the model training apparatus based on federated learning according to this embodiment includes a training sample uploading unit 610 and a model downloading unit 620.
The training sample uploading unit 610 is configured to acquire local data, identify the acquired local data with a local model, generate a sample set according to the identification result, and share the sample set with a public cloud server, so that the public cloud server trains a joint model using the sample set and shares the joint model, wherein the data volume of the sample set is smaller than that of the local data.
The model downloading unit 620 is configured to download the joint model and replace the local model with the downloaded joint model.
According to one or more embodiments of the present disclosure, the training sample uploading unit 610 being configured to generate the sample set according to the identification result and share it with the public cloud server includes: generating a sample set from the data that the local model cannot identify and sharing it with the public cloud server; and/or generating a sample set from the data that the local model misidentifies and sharing it with the public cloud server.
According to one or more embodiments of the present disclosure, the training sample uploading unit 610 is further configured to, after acquiring the local data, train the local model based on the local data and send the trained local model's algorithm parameters to the public cloud server, so that the public cloud server verifies whether the received algorithm parameters should be used to update the joint model's algorithm parameters.
In the training sample uploading unit 610, the verifying, by the public cloud server, whether the received algorithm parameters should be used to update the joint model's algorithm parameters includes: calculating a performance indicator of the joint model on an a priori data set to obtain a first indicator value; replacing the joint model's algorithm parameters with the received local model algorithm parameters, and calculating the performance indicator of the joint model with the replaced parameters on the a priori data set to obtain a second indicator value; and
determining, by comparing the first indicator value with the second indicator value, whether the received local model algorithm parameters should be used to update the joint model.
According to one or more embodiments of the present disclosure, the performance indicator includes accuracy and/or recall.
According to one or more embodiments of the present disclosure, the training sample uploading unit 610 being configured to generate the sample set according to the identification result and share it with the public cloud server includes: generating a sample set according to the identification result, encrypting the sample set, and storing the encrypted sample set on a blockchain so as to share it with the public cloud server.
The federated learning-based model training apparatus provided in this embodiment can execute the federated learning-based model training method provided in the method embodiments of the present disclosure, and has the corresponding functional modules and beneficial effects for executing that method.
Referring now to FIG. 7, shown is a schematic diagram of an electronic device 700 suitable for implementing embodiments of the present invention. The terminal device in the embodiments of the present invention may be, for example, a mobile device, a computer, or a vehicle-mounted device built into a floating car, or any combination thereof. In some embodiments, the mobile device may include, for example, a cell phone, a smart home device, a wearable device, a smart mobile device, a virtual reality device, and the like, or any combination thereof. The electronic device shown in fig. 7 is only an example, and should not impose any limitation on the functions and the scope of use of the embodiments of the present invention.
As shown in fig. 7, the electronic device 700 may include a processing device (e.g., a central processing unit, a graphics processor, etc.) 701 that may perform various appropriate actions and processes according to a program stored in a Read-Only Memory (ROM) 702 or a program loaded from a storage device 708 into a Random Access Memory (RAM) 703. The RAM 703 also stores various programs and data necessary for the operation of the electronic device 700. The processing device 701, the ROM 702, and the RAM 703 are connected to each other by a bus 704. An input/output (I/O) interface 705 is also connected to the bus 704.
Generally, the following devices may be connected to the I/O interface 705: an input device 706 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 707 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; a storage device 708 including, for example, a magnetic tape, a hard disk, etc.; and a communication device 709. The communication device 709 may allow the electronic device 700 to communicate wirelessly or by wire with other devices to exchange data. While FIG. 7 illustrates an electronic device 700 having various devices, it is to be understood that not all illustrated devices are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, according to an embodiment of the present invention, the processes described above with reference to the flowcharts may be implemented as a computer software program. For example, embodiments of the invention include a computer program product comprising a computer program embodied on a computer-readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such embodiments, the computer program may be downloaded and installed from a network via the communication means 709, or may be installed from the storage means 708, or may be installed from the ROM 702. The computer program, when executed by the processing device 701, performs the above-described functions defined in the methods of embodiments of the present invention.
It should be noted that the computer readable medium mentioned above can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In embodiments of the invention, a computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. In yet another embodiment of the invention, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electromagnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquire local data; identify the acquired local data with a local model; generate a sample set according to the identification result; share the sample set to a public cloud server, so that the public cloud server trains a combined model using the sample set and shares the combined model, wherein the data volume of the sample set is less than that of the local data; and download the combined model and replace the local model with the downloaded combined model.
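The per-device steps listed above can be sketched end to end with toy stand-ins; the dictionary-lookup "model", the merge-based "training" on the public cloud side, and every name below are illustrative assumptions for exposition, not the patented algorithm itself.

```python
# Toy "model": a lookup table; None means the input is not recognized.
def predict(model, x):
    return model.get(x)

local_model = {1: "ok", 2: "ok"}                      # current local model
local_data = [(1, "ok"), (2, "defect"), (3, "ok")]    # (input, ground truth)

# Build the sample set from data the local model cannot recognize or
# recognizes incorrectly; only this (smaller) set leaves the private cloud.
sample_set = [(x, y) for x, y in local_data
              if predict(local_model, x) is None
              or predict(local_model, x) != y]
assert len(sample_set) < len(local_data)

# Public cloud side: "train" the combined model from the shared samples
# (here simply merged into the table; a real system trains a joint model).
combined_model = dict(local_model)
combined_model.update(dict(sample_set))

# Private cloud side: download the combined model and replace the local one.
local_model = combined_model
```

After the replacement, the local model recognizes all of the local data correctly, even though only the two problematic samples, not the full data set, were shared.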
Computer program code for carrying out operations for embodiments of the present invention may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java, Smalltalk, or C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present invention may be implemented by software or hardware. In some cases, the name of a unit does not constitute a limitation of the unit itself; for example, the first retrieving unit may also be described as a "unit for retrieving at least two internet protocol addresses".
The foregoing description is only a preferred embodiment of the invention and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the disclosure in the embodiments of the present invention is not limited to the specific combinations of the above-described features, and also encompasses other technical solutions formed by any combination of the above-described features or their equivalents without departing from the spirit of the disclosure, for example, technical solutions formed by mutually replacing the above features with (but not limited to) features having similar functions disclosed in the embodiments of the present invention.