CN116204325B - Algorithm training platform based on AIGC - Google Patents
- Publication number
- CN116204325B (application CN202310491432.6A)
- Authority
- CN
- China
- Prior art keywords
- module
- algorithm
- trained
- training
- control module
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5061—Partitioning or combining of resources
- G06F9/5072—Grid computing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2209/00—Indexing scheme relating to G06F9/00
- G06F2209/50—Indexing scheme relating to G06F9/50
- G06F2209/502—Proximity
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Software Systems (AREA)
- Mathematical Physics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Biomedical Technology (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- Health & Medical Sciences (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- General Health & Medical Sciences (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Manipulator (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses an AIGC-based algorithm training platform comprising a cloud computing module, an edge computing module, an evaluation module, an input module, a data expansion module, and a control module. The platform obtains a quantitative evaluation index of the computing resources required by the algorithm to be trained. When the index is greater than or equal to a predetermined threshold, the cloud computing module executes the training step for the algorithm to be trained, so that the strong computing power of the cloud computing module is fully exploited and training efficiency is improved; when the index is below the threshold, the edge computing module executes the training step, reducing the communication resources occupied by data interaction before the training step is executed.
Description
Technical Field
The invention relates to the technical field of intelligent systems, in particular to an AIGC-based algorithm training platform.
Background
Patent CN108665072A discloses a whole-process training method and system for machine learning algorithms based on a cloud architecture. The method comprises: uploading a training data set and a data set of problems to be solved to a cloud data server through a web application, and selecting a pre-established machine learning algorithm; the cloud computing server obtains the training data set through the cloud data server and performs model training with it to obtain a trained model; the cloud computing server then obtains the problem data set through the cloud data server, derives a solution using the trained model, and returns the solution to the cloud data server, which returns it to the web application. This cloud-based training method fully exploits the strong computing power of the cloud computing server and improves training efficiency. However, data interaction between the local server and the cloud computing server occupies a large amount of communication resources, and for training tasks with relatively small computation, the strong computing power of the cloud computing server cannot be fully utilized.
Therefore, how to reasonably allocate computing resources (computing power) and communication resources so as to further improve the efficiency of algorithm training is a technical problem to be solved.
Disclosure of Invention
The technical problem to be solved by the invention is to provide an AIGC-based algorithm training platform that reasonably allocates computing resources (computing power) and communication resources so as to further improve the efficiency of algorithm training.
In order to solve the above technical problem, the invention discloses an AIGC-based algorithm training platform comprising a cloud computing module, an edge computing module, an evaluation module, an input module, a data expansion module, and a control module. The cloud computing module communicates with the control module over a public network, the edge computing module communicates with the control module over a local area network, and the evaluation module, the input module, and the data expansion module are each electrically connected with the control module. The steps executed by the control module include:
the control module acquires, from the input module, the original sample data set required by the algorithm to be trained during training and the number of loss-function iterations in the training process;
the control module controls the data expansion module to perform a data expansion operation based on AIGC technology, taking the original sample data set as the base data set, to generate an expanded sample data set;
the control module controls the evaluation module to calculate a computing resource quantitative evaluation index for the algorithm to be trained according to the data capacity value of the expanded sample data set and the number of loss-function iterations in the training process;
the control module judges whether the computing resource quantitative evaluation index is greater than or equal to a predetermined computing resource quantitative evaluation index threshold; if yes, the control module sends a first training request for the algorithm to be trained to the cloud computing module, so that the cloud computing module executes the training step for the algorithm to be trained; if not, the control module sends a second training request for the algorithm to be trained to the edge computing module, so that the edge computing module executes the training step for the algorithm to be trained.
According to the AIGC-based algorithm training platform disclosed by the invention, the computing resource quantitative evaluation index for the algorithm to be trained is obtained from the data capacity value of the expanded sample data set and the number of loss-function iterations in the training process. When the index is judged to be greater than or equal to a predetermined threshold, the cloud computing module executes the training step, so that the strong computing power of the cloud computing module is fully exploited and training efficiency is improved; when the index is below the threshold, the edge computing module executes the training step, which reduces the communication resources occupied by data interaction before the training step is executed. The AIGC-based algorithm training platform disclosed by the invention therefore helps to allocate computing resources and communication resources reasonably, further improving the efficiency of algorithm training.
As an optional implementation, when the algorithm to be trained is a deep convolutional neural network algorithm, the training attribute information includes the number of neurons in the convolutional layer, the pooling layer, and the fully connected layer of the deep convolutional neural network.
In an alternative embodiment, the formula adopted by the evaluation module to obtain the computing resource quantitative evaluation index for the algorithm to be trained is as follows:
wherein M is the computing resource quantitative evaluation index of the deep convolutional neural network to be trained, R is the capacity value, in terabytes, of the expanded sample data set required by the deep convolutional neural network to be trained during training, D is the number of loss-function iterations of the deep convolutional neural network to be trained in the training process, N₁ is the number of neurons in the convolutional layer of the deep convolutional neural network to be trained, N₂ the number of neurons in its pooling layer, and N₃ the number of neurons in its fully connected layer.
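The formula itself appears as an image in the original patent and is not reproduced in this text. A form consistent with the variable definitions — assuming, purely as an illustration, that the index scales with data volume, iteration count, and total neuron count — would be:

```latex
M = R \cdot D \cdot \left( N_1 + N_2 + N_3 \right)
```

where $N_1$, $N_2$, $N_3$ denote the neuron counts of the convolutional, pooling, and fully connected layers; the actual weighting used in the patent may differ.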
In an alternative embodiment, the algorithm training platform further comprises a timing module and an output module electrically connected with the control module,
after the control module sends a first training request for the algorithm to be trained to the cloud computing module so that the cloud computing module executes the training step, the control module controls the timing module to record a first duration used by the cloud computing module to execute the training step; when the first duration falls outside a predetermined first duration threshold range, the control module controls the output module to output first result information indicating that the time spent by the cloud computing module on the training step is abnormal;
or, after the control module sends a second training request for the algorithm to be trained to the edge computing module so that the edge computing module executes the training step, the control module controls the timing module to record a second duration used by the edge computing module to execute the training step; when the second duration falls outside a predetermined second duration threshold range, the control module controls the output module to output second result information indicating that the time spent by the edge computing module on the training step is abnormal.
In an alternative embodiment, the algorithm training platform further comprises a node information transceiver module with two communication ends: a first communication end connected between the control module and the cloud computing module, and a second communication end connected between the control module and the edge computing module.
After the control module sends a first training request for the algorithm to be trained to the cloud computing module so that the cloud computing module executes the training step, the control module controls the node information transceiver module to send first node verification information to the cloud computing module; the cloud computing module returns first node feedback information matching the first node verification information to the node information transceiver module, and the control module determines the training step currently executed by the cloud computing module from the first node feedback information;
or, after the control module sends a second training request for the algorithm to be trained to the edge computing module so that the edge computing module executes the training step, the control module controls the node information transceiver module to send second node verification information to the edge computing module; the edge computing module returns second node feedback information matching the second node verification information to the node information transceiver module, and the control module determines the training step currently executed by the edge computing module from the second node feedback information.
As an optional implementation, the control module outputs, through the output module, first training information indicating the training step currently executed by the cloud computing module, or second training information indicating the training step currently executed by the edge computing module.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for describing the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and those of ordinary skill in the art can obtain other drawings from them without inventive effort.
FIG. 1 is a schematic structural diagram of an AIGC-based algorithm training platform according to an embodiment of the present invention;
FIG. 2 is a schematic flowchart of steps executed by the control module according to an embodiment of the present invention;
FIG. 3 is another schematic flowchart of steps executed by the control module according to an embodiment of the present invention;
FIG. 4 is a further schematic flowchart of steps executed by the control module according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of another AIGC-based algorithm training platform according to an embodiment of the invention.
Detailed Description
In order that those skilled in the art may better understand the present invention, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some embodiments of the present invention, not all of them. All other embodiments obtained by those skilled in the art based on the embodiments of the invention without inventive effort fall within the scope of the invention.
The terms "first", "second", and the like in the description and claims are used to distinguish different objects, not to describe a particular order. Furthermore, the terms "comprise" and "have", and any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, apparatus, article, or device that comprises a list of steps or elements is not limited to those steps or elements, but may include other steps or elements not expressly listed or inherent to such process, method, article, or device.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the invention. The appearances of this phrase in various places in the specification do not necessarily all refer to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those skilled in the art will understand, explicitly and implicitly, that the embodiments described herein may be combined with other embodiments.
AIGC (Artificial Intelligence Generated Content) is expected to become a new engine for the innovation and development of digital content. Its main advantages include: 1) AIGC can take over basic, mechanical labor such as information mining, material retrieval, and reproduction editing with manufacturing capability and knowledge superior to humans, meeting massive personalized demand at low marginal cost and high efficiency; 2) by supporting multidimensional interaction, fusion, and permeation between digital content and other industries, AIGC can incubate new business forms and models; 3) AIGC assists the development of the metaverse by accelerating the replication of the physical world for unlimited content creation, achieving spontaneous organic growth.
Embodiment one: the invention discloses an AIGC-based algorithm training platform comprising a cloud computing module, an edge computing module, an evaluation module, an input module, a data expansion module, and a control module. The cloud computing module communicates with the control module over a public network, the edge computing module communicates with the control module over a local area network, and the evaluation module, the input module, and the data expansion module are each electrically connected with the control module.
Wherein, as shown in fig. 2, the steps executed by the control module include:
s101, the control module acquires a sample data set required to be used in the training process of the algorithm to be trained and the iteration times of the loss function in the training process from the input module.
S102, the control module controls the data expansion module to execute data expansion operation based on the AIGC technology and based on the original sample data set as a basic data set, and an expanded sample data set is generated. The extended sample data set may include an original sample data set and a gain data set obtained after the data augmentation operation.
S103, the control module controls the evaluation module to calculate the computing resource quantitative evaluation index for the algorithm to be trained according to the data capacity value of the expanded sample data set and the number of loss-function iterations in the training process.
S104, the control module judges whether the computing resource quantitative evaluation index is greater than or equal to the predetermined computing resource quantitative evaluation index threshold; if yes, step S105a is executed, and if not, step S105b is executed.
S105a, the control module sends a first training request about the algorithm to be trained to the cloud computing module, so that the cloud computing module executes a training step about the algorithm to be trained.
S105b, the control module sends a second training request about the algorithm to be trained to the edge computing module, so that the edge computing module executes a training step about the algorithm to be trained.
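The routing logic of steps S101–S105b can be sketched as follows. The threshold value, the `expand_dataset` and `evaluate_index` stand-ins, and the module interfaces are all hypothetical placeholders; only the comparison in step S104 mirrors the text:

```python
INDEX_THRESHOLD = 100.0  # predetermined evaluation-index threshold (illustrative value)

def expand_dataset(original: list) -> list:
    """Stand-in for the AIGC data-expansion step (S102): the expanded set is
    the original samples plus a gain set of generated samples."""
    gain = [f"generated_from_{sample}" for sample in original]
    return original + gain

def evaluate_index(data_capacity: float, iterations: int) -> float:
    """Stand-in for the evaluation module (S103); the patent's real formula
    also involves the network's neuron counts."""
    return data_capacity * iterations

def dispatch_training(original: list, iterations: int) -> str:
    """Steps S101-S105: expand the data, score the training job, then route it
    to the cloud (heavy jobs) or the edge (light jobs)."""
    expanded = expand_dataset(original)                        # S102
    index = evaluate_index(float(len(expanded)), iterations)   # S103
    if index >= INDEX_THRESHOLD:                               # S104
        return "cloud"  # S105a: first training request -> cloud computing module
    return "edge"       # S105b: second training request -> edge computing module

print(dispatch_training(["img1", "img2"], 50))  # 4 samples x 50 iterations = 200 -> cloud
print(dispatch_training(["img1"], 10))          # 2 x 10 = 20 -> edge
```

The key design point is that the dispatch decision is made once, before any data is transmitted, so light jobs never pay the public-network transfer cost.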
Before the cloud computing module or the edge computing module performs the training step for the algorithm to be trained, the computer-readable program of the algorithm to be trained, the sample data set required during training, the loss function used in training, and the computer-readable program of the steps required for training are transmitted to the execution subject (the cloud computing module or the edge computing module) that performs the training step. Before the training step is performed, a data augmentation operation based on AIGC technology may be executed with the original sample data set obtained by the input module as the base data, generating the expanded sample data set, so that the data capacity of the sample data set obtained by the cloud computing module or the edge computing module is sufficiently large, which helps to enhance the robustness of the trained algorithm. Optionally, when the original data set is an image data set, the data expansion module may pit two neural networks against each other based on a generative adversarial network: one neural network acts as the generator and the other as the discriminator. The generator takes image data from the original data set (the base data) as input and generates "evolved data", while the discriminator judges whether its input is evolved data or base data. Through this adversarial process, the generator gradually develops a strong ability to synthesize images, and the synthesized images can serve as sample images in the gain data set of the expanded sample data set.
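A minimal skeleton of the adversarial expansion loop described above, with stub `Generator` and `Discriminator` classes standing in for real neural networks (the stubs and their interfaces are illustrative assumptions, not the patent's implementation):

```python
import random

class Generator:
    """Stub generator: perturbs a base sample to produce 'evolved data'."""
    def __init__(self, noise: float = 0.1):
        self.noise = noise

    def generate(self, base_sample: list) -> list:
        return [x + random.uniform(-self.noise, self.noise) for x in base_sample]

class Discriminator:
    """Stub discriminator: judges whether a sample is base data or evolved data."""
    def is_base(self, sample: list, base_set: list) -> bool:
        return sample in base_set

def expand_image_dataset(base_set: list, rounds: int, seed: int = 0) -> list:
    """Grow a gain set adversarially: each round the generator synthesizes a
    candidate from a base sample; candidates the discriminator cannot match to
    the base set join the gain set. Returns the expanded sample data set."""
    random.seed(seed)
    gen, disc = Generator(), Discriminator()
    gain = []
    for i in range(rounds):
        candidate = gen.generate(base_set[i % len(base_set)])
        if not disc.is_base(candidate, base_set):
            gain.append(candidate)
    return base_set + gain  # original samples followed by the gain set

expanded = expand_image_dataset([[0.1, 0.2], [0.3, 0.4]], rounds=4)
print(len(expanded))
```

In a real GAN both networks would be trained against each other; here the stubs only illustrate how the gain data set is assembled alongside the base data.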
According to the AIGC-based algorithm training platform disclosed by the invention, the computing resource quantitative evaluation index for the algorithm to be trained is obtained from the data capacity value of the expanded sample data set and the number of loss-function iterations in the training process; when the index is judged to be greater than or equal to the predetermined threshold, the cloud computing module executes the training step, so that its strong computing power is fully exploited and training efficiency is improved, and when the index is below the threshold, the edge computing module executes the training step, reducing the communication resources occupied by data interaction before the training step is executed. The platform therefore helps to allocate computing resources and communication resources reasonably, further improving the efficiency of algorithm training.
Embodiment two: the deep convolutional neural network algorithm is widely applied, with application scenarios including automatic driving and face recognition. Optionally, when the algorithm to be trained is a deep convolutional neural network algorithm, the training attribute information includes the number of neurons in the convolutional layer, the pooling layer, and the fully connected layer of the deep convolutional neural network, which helps to improve the validity of the computing resource quantitative evaluation index obtained for the algorithm to be trained.
In order to improve the efficiency of calculating the computing resource quantitative evaluation index for the algorithm to be trained, further optionally, the control module controls the evaluation module to calculate the index for the deep convolutional neural network to be trained according to the training attribute information, using the following formula:
wherein M is the computing resource quantitative evaluation index of the deep convolutional neural network to be trained, R is the capacity value, in terabytes, of the expanded sample data set required by the deep convolutional neural network to be trained during training, D is the number of loss-function iterations of the deep convolutional neural network to be trained in the training process, N₁ is the number of neurons in its convolutional layer, N₂ the number of neurons in its pooling layer, and N₃ the number of neurons in its fully connected layer.
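Since the patent's formula image is not reproduced in this text, the evaluator below assumes the simple product form M = R · D · (N₁ + N₂ + N₃) purely for illustration; the actual weighting may differ:

```python
def resource_index(r_tb: float, d_iters: int, n_conv: int, n_pool: int, n_fc: int) -> float:
    """Quantitative computing-resource index for a deep CNN to be trained.

    Assumes M = R * D * (N1 + N2 + N3): R is the expanded data-set capacity in
    terabytes, D the loss-function iteration count, and N1/N2/N3 the neuron
    counts of the convolutional, pooling, and fully connected layers. The
    patent's real formula is an image that did not survive extraction.
    """
    return r_tb * d_iters * (n_conv + n_pool + n_fc)

# Example: 0.5 TB of data, 100 iterations, 1024 + 256 + 128 neurons
m = resource_index(0.5, 100, 1024, 256, 128)
print(m)  # 0.5 * 100 * 1408 = 70400.0
```

Whatever the exact formula, the index grows with both data volume and network size, which is what lets the threshold comparison in step S104 separate heavy jobs from light ones.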
In order to prevent an excessively long training step from occupying the platform for a long time and reducing its operating efficiency, as shown in fig. 1, the algorithm training platform may be provided with a timing module and an output module electrically connected with the control module, so that abnormality information is output in time when the training step takes too long. Specifically, after the control module sends a first training request for the algorithm to be trained to the cloud computing module so that the cloud computing module executes the training step, the control module controls the timing module to record a first duration used by the cloud computing module to execute the training step; when the first duration falls outside a predetermined first duration threshold range, the control module controls the output module to output first result information indicating that the time spent by the cloud computing module on the training step is abnormal. Alternatively, after the control module sends a second training request for the algorithm to be trained to the edge computing module so that the edge computing module executes the training step, the control module controls the timing module to record a second duration used by the edge computing module to execute the training step; when the second duration falls outside a predetermined second duration threshold range, the control module controls the output module to output second result information indicating that the time spent by the edge computing module on the training step is abnormal.
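The timing check can be sketched as follows; the threshold range, the callable interface, and the return strings are illustrative assumptions:

```python
import time

def monitor_training(run_training, min_seconds: float, max_seconds: float) -> str:
    """Time a training step and report whether its duration falls outside the
    predetermined threshold range, as the timing module does for the cloud or
    edge computing module."""
    start = time.monotonic()
    run_training()
    elapsed = time.monotonic() - start
    if elapsed < min_seconds or elapsed > max_seconds:
        return "abnormal: training duration outside threshold range"
    return "normal"

# A training step that finishes almost instantly violates a [1 s, 60 s] range:
print(monitor_training(lambda: None, 1.0, 60.0))
```

Note that the range has a lower bound as well as an upper bound, so a suspiciously fast run (e.g. a training step that failed immediately) is also flagged.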
Embodiment three: in order to enable the user to know which training step is being executed at the current stage, the algorithm training platform may determine the currently executed training step from node information. Specifically, as shown in fig. 5, the algorithm training platform further includes a node information transceiver module with two communication ends: a first communication end connected between the control module and the cloud computing module, and a second communication end connected between the control module and the edge computing module.
Further, as shown in fig. 3, after the control module sends the first training request for the algorithm to be trained to the cloud computing module so that the cloud computing module executes the training step (i.e. after step S105a), the steps executed by the control module further include:
S106a, the control module controls the node information transceiver module to send the first node verification information to the cloud computing module, so that the cloud computing module sends first node feedback information matching the first node verification information back to the node information transceiver module. By obtaining feedback information that matches the verification information, the control module can confirm, on the one hand, that the training step is being executed and, on the other hand, that the executing subject is the intended one (i.e. the cloud computing module).
S107a, the control module determines the training step currently executed by the cloud computing module from the first node feedback information. Optionally, in step S107a, the control module may determine the currently executed training step according to a pre-constructed correspondence between first node feedback information and training steps.
Alternatively, as shown in fig. 4, after the control module sends a second training request for the algorithm to be trained to the edge computing module so that the edge computing module executes the training step (i.e. after step S105b), the steps executed by the control module further include:
S106b, the control module controls the node information transceiver module to send the second node verification information to the edge computing module, so that the edge computing module sends second node feedback information matching the second node verification information back to the node information transceiver module. By obtaining feedback information that matches the verification information, the control module can confirm, on the one hand, that the training step is being executed and, on the other hand, that the executing subject is the intended one (i.e. the edge computing module).
S107b, the control module determines the training step currently executed by the edge computing module from the second node feedback information. Optionally, in step S107b, the control module may determine the currently executed training step according to a pre-constructed correspondence between second node feedback information and training steps.
Alternatively, the first node verification information may specifically be 24 hours, minutes, and seconds corresponding to the time when the cloud computing module performs the current step, for example, 20 hours, 40 minutes, and 30 seconds. Further optionally, in order to conveniently determine the first node feedback information that matches the first node verification information, the mapping relationship between the first node verification information and the first node feedback information may be determined based on the following formula:
where F represents the first node feedback information, H is the value representing "hour" in the first node verification information, M is the value representing "minute" in the first node verification information, and S is the value representing "second" in the first node verification information. The first node feedback information matching the aforementioned first node verification information of "20 hours 40 minutes 30 seconds" is then obtained by substituting these values into the formula.
Alternatively, the second node verification information mentioned above may specifically be the hour, minute, and second, in 24-hour format, of the time at which the edge computing module performs the current step, for example, 20 hours 40 minutes 30 seconds. Further optionally, to conveniently determine the second node feedback information that matches the second node verification information, the mapping relationship between the two may be determined based on the following formula:
where F represents the second node feedback information, H is the value representing "hour" in the second node verification information, M is the value representing "minute" in the second node verification information, and S is the value representing "second" in the second node verification information. The second node feedback information matching the aforementioned second node verification information of "20 hours 40 minutes 30 seconds" is then obtained by substituting these values into the formula.
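Since the mapping formulas themselves appear only as images in the granted publication, the sketch below substitutes an assumed encoding (F = 3600·H + 60·M + S, i.e., seconds since midnight) purely to illustrate how a time-based verification value can be mapped deterministically to feedback information; the function name and the formula are not from the patent:

```python
def node_feedback(h: int, m: int, s: int) -> int:
    """Illustrative mapping from node verification time (hour, minute,
    second) to a feedback value F. The patent's actual formula is not
    reproduced in the extracted text; F = 3600*H + 60*M + S is an
    assumed monotone encoding used here for demonstration only."""
    if not (0 <= h < 24 and 0 <= m < 60 and 0 <= s < 60):
        raise ValueError("verification time out of range")
    return 3600 * h + 60 * m + s
```

For the example verification time of 20 hours 40 minutes 30 seconds, this assumed encoding yields `node_feedback(20, 40, 30)` = 74430.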
Optionally, to enable the user to learn the currently performed training step directly from the algorithm training platform, the control module may further output, through the output module, first training information representing the training step currently performed by the cloud computing module, or second training information representing the training step currently performed by the edge computing module.
Finally, it should be noted that the embodiments disclosed above are only preferred embodiments of the invention, used to illustrate rather than limit its technical scheme. Although the invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical schemes described in the foregoing embodiments can still be modified, or some of their technical features replaced by equivalents, and that such modifications and substitutions do not depart from the spirit and scope of the corresponding technical schemes.
Claims (4)
1. An AIGC-based algorithm training platform, characterized by comprising a cloud computing module, an edge computing module, an evaluation module, an input module, a data expansion module, and a control module, wherein the cloud computing module is communicatively connected with the control module via a public network, the edge computing module is communicatively connected with the control module via a local area network (LAN), and the evaluation module, the input module, and the data expansion module are each electrically connected with the control module, and wherein the steps performed by the control module include:
the control module acquires, from the input module, the original sample data set required by the algorithm to be trained in the training process and the number of iterations of the loss function in the training process;
the control module controls the data expansion module to execute a data expansion operation, based on AIGC technology and taking the original sample data set as a base data set, to generate an extended sample data set;
the control module controls the evaluation module to calculate a computing resource quantitative evaluation index related to the algorithm to be trained according to the data capacity value of the extended sample data set and the iteration number of the loss function in the training process;
the control module judges whether the computing resource quantitative evaluation index is greater than or equal to a predetermined computing resource quantitative evaluation index threshold; if yes, the control module sends a first training request about the algorithm to be trained to the cloud computing module so that the cloud computing module executes a training step about the algorithm to be trained, and if not, the control module sends a second training request about the algorithm to be trained to the edge computing module so that the edge computing module executes the training step about the algorithm to be trained; when the algorithm to be trained is a deep convolutional neural network algorithm, its training attribute information comprises the numbers of neurons in the convolutional layer, the pooling layer, and the fully connected layer of the deep convolutional neural network;
the evaluation module obtains a formula adopted by a calculation resource quantization evaluation index of the algorithm to be trained as follows:
where Q is the computing resource quantitative evaluation index of the deep convolutional neural network to be trained, V is the capacity value, in bytes, of the extended sample data set required by the deep convolutional neural network to be trained in the training process, N is the number of loss-function iterations of the deep convolutional neural network to be trained in the training process, and C1, C2, and C3 are respectively the numbers of neurons in the convolutional layer, the pooling layer, and the fully connected layer of the deep convolutional neural network to be trained.
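The routing logic of claim 1 (compute an evaluation index from data capacity, iteration count, and layer widths, then dispatch training to the cloud or the edge by threshold comparison) can be sketched as below. The granted formula is not reproduced in this text, so `resource_index` uses an assumed product-form proxy; all names and the formula itself are illustrative assumptions, not the patent's:

```python
from dataclasses import dataclass

@dataclass
class TrainingJob:
    data_capacity_bytes: float  # capacity of the extended sample data set
    loss_iterations: int        # loss-function iterations during training
    conv_neurons: int           # convolutional-layer neurons
    pool_neurons: int           # pooling-layer neurons
    fc_neurons: int             # fully connected-layer neurons

def resource_index(job: TrainingJob) -> float:
    """Hypothetical evaluation index: a proxy proportional to data volume,
    iteration count, and total network width (the patent's formula is an
    image that is not reproduced here)."""
    width = job.conv_neurons + job.pool_neurons + job.fc_neurons
    return job.data_capacity_bytes * job.loss_iterations * width

def route(job: TrainingJob, threshold: float) -> str:
    """Send heavy jobs to the cloud computing module, light ones to the
    edge computing module, mirroring the threshold comparison in claim 1."""
    return "cloud" if resource_index(job) >= threshold else "edge"
```

The design intent matches the claim: jobs whose estimated resource demand meets or exceeds the threshold go to the (more powerful, public-network) cloud module; smaller jobs stay on the (lower-latency, LAN-connected) edge module.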
2. The AIGC based algorithm training platform of claim 1, further comprising a timing module and an output module electrically connected to the control module,
after the control module sends a first training request about the algorithm to be trained to the cloud computing module so that the cloud computing module executes a training step about the algorithm to be trained, the control module controls the timing module to acquire a first time length used by the cloud computing module to execute the training step about the algorithm to be trained, and when the first time length exceeds a predetermined first time length threshold range, the control module controls the output module to output first result information representing that the time spent by the cloud computing module to execute the training step about the algorithm to be trained is abnormal;
or after the control module sends a second training request about the algorithm to be trained to the edge computing module, so that the edge computing module executes a training step about the algorithm to be trained, the control module controls the timing module to acquire a second time length used by the edge computing module to execute the training step about the algorithm to be trained, and when the second time length exceeds a predetermined second time length threshold range, the control module controls the output module to output second result information representing that the time spent by the edge computing module to execute the training step about the algorithm to be trained is abnormal.
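The timing behaviour of claim 2, i.e. measuring the duration of a training step and flagging an abnormality when it exceeds a predetermined threshold, can be sketched as follows; the function and module names are illustrative assumptions:

```python
import time

def monitor_training(run_step, duration_threshold: float, module_name: str) -> str:
    """Time one training step (claim 2's timing module, sketched) and
    return result information; flags an abnormality when the elapsed
    time exceeds the predetermined threshold."""
    start = time.monotonic()
    run_step()
    elapsed = time.monotonic() - start
    if elapsed > duration_threshold:
        return f"{module_name}: training duration abnormal ({elapsed:.2f}s)"
    return f"{module_name}: training completed within threshold"
```

In the platform, the control module would invoke such a monitor once per dispatched training request, using the first threshold for the cloud computing module and the second threshold for the edge computing module.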
3. The AIGC-based algorithm training platform of claim 2, further comprising a node information transceiver module having two communication terminals, wherein a first communication terminal is connected between the control module and the cloud computing module, a second communication terminal is connected between the control module and the edge computing module,
after the control module sends a first training request about the algorithm to be trained to the cloud computing module so that the cloud computing module executes a training step about the algorithm to be trained, the control module controls the node information transceiver module to send first node verification information to the cloud computing module so that the cloud computing module sends first node feedback information matched with the first node verification information to the node information transceiver module, and the control module determines a training step currently executed by the cloud computing module according to the first node feedback information;
or after the control module sends a second training request about the algorithm to be trained to the edge computing module, so that the edge computing module executes a training step about the algorithm to be trained, the control module controls the node information transceiving module to send second node verification information to the edge computing module, so that the edge computing module sends second node feedback information matched with the second node verification information to the node information transceiving module, and the control module determines the training step currently executed by the edge computing module according to the second node feedback information.
4. The AIGC-based algorithm training platform of claim 3, wherein the control module outputs, through the output module, first training information representative of a training step currently performed by the cloud computing module or second training information representative of a training step currently performed by the edge computing module.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310491432.6A CN116204325B (en) | 2023-05-05 | 2023-05-05 | Algorithm training platform based on AIGC |
Publications (2)
Publication Number | Publication Date |
---|---|
CN116204325A CN116204325A (en) | 2023-06-02 |
CN116204325B true CN116204325B (en) | 2023-06-30 |
Family
ID=86517647
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117472550B (en) * | 2023-12-27 | 2024-03-01 | 环球数科集团有限公司 | Computing power sharing system based on AIGC |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2020122778A1 (en) * | 2018-12-13 | 2020-06-18 | Telefonaktiebolaget Lm Ericsson (Publ) | Method and machine learning agent for executing machine learning in an edge cloud |
CN116010054A (en) * | 2022-12-28 | 2023-04-25 | 哈尔滨工业大学 | Heterogeneous edge cloud AI system task scheduling frame based on reinforcement learning |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP3667512A1 (en) * | 2018-12-11 | 2020-06-17 | Siemens Aktiengesellschaft | A cloud platform and method for efficient processing of pooled data |
CN110865878B (en) * | 2019-11-11 | 2023-04-28 | 广东石油化工学院 | Intelligent scheduling method based on task multi-constraint in edge cloud cooperative environment |
CN113269718B (en) * | 2021-04-15 | 2022-09-16 | 安徽大学 | Concrete prefabricated part crack detection method based on deep learning |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||