CN112039992B - Model management method and system based on cloud computing architecture - Google Patents

Info

Publication number
CN112039992B
CN112039992B (Application CN202010905581.9A)
Authority
CN
China
Prior art keywords
computing
node
calculation
model
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010905581.9A
Other languages
Chinese (zh)
Other versions
CN112039992A (en)
Inventor
王昊
高寒冰
罗水权
刘剑
李燕婷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Asset Management Co Ltd
Original Assignee
Ping An Asset Management Co Ltd
Priority date
Filing date
Publication date
Application filed by Ping An Asset Management Co Ltd filed Critical Ping An Asset Management Co Ltd
Priority to CN202010905581.9A
Publication of CN112039992A
Application granted
Publication of CN112039992B
Active legal status
Anticipated expiration

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/10 - Protocols in which an application is distributed across nodes in the network
    • H04L 41/082 - Configuration setting characterised by the condition triggering a change of settings being updates or upgrades of network functionality
    • H04L 63/20 - Network architectures or network communication protocols for managing network security; network security policies in general

Abstract

The invention relates to the field of big data and provides a model management method based on a cloud computing architecture. The method comprises: issuing a computing model of a first computing node in the cloud computing architecture to a plurality of second computing nodes; receiving selection instructions sent by the second computing nodes according to their corresponding computing models, and generating a plurality of pieces of computing selection data based on the selection instructions; receiving a user request instruction through a second computing node; when the computing selection data in the second computing node is first selection data, executing the user request instruction through the computing model in the second computing node to generate a computing result; and when the computing selection data in the second computing node is second selection data, sending the user request instruction through the second computing node to the first computing node, which executes the instruction and generates the computing result. The first computing node thus sinks computing authority to the second computing nodes, and node resources in the network are dynamically scheduled according to the computing selection data of the second computing nodes, improving computing efficiency.

Description

Model management method and system based on cloud computing architecture
Technical Field
The embodiment of the invention relates to the field of big data, in particular to a model management method and system based on a cloud computing architecture.
Background
As an emerging shared infrastructure, the cloud computing service architecture can pool vast system resources together to provide various IT services, and it is widely applied in commercial scenarios. However, existing cloud computing service architectures usually perform computation at a cloud center node or at edge nodes, while the data newly produced by the network grows at an exponential rate each year. This places computing pressure on the cloud computing service architecture, resulting in low computing efficiency and poor user experience.
Disclosure of Invention
In view of this, embodiments of the present invention provide a model management method, system, computer device and computer-readable storage medium based on a cloud computing architecture, which are used to solve the problem of low computing efficiency when an existing cloud computing service architecture computes massive data.
The embodiment of the invention solves the technical problems through the following technical scheme:
a model management method based on a cloud computing architecture comprises the following steps:
issuing a computing model of a first computing node in the cloud computing architecture to a plurality of second computing nodes;
receiving selection instructions sent by the second computing nodes according to the corresponding computing models of the second computing nodes, and generating a plurality of user forms corresponding to the second computing nodes based on the selection instructions, wherein the user forms comprise a plurality of fields, and the fields are used for recording computing selection data of the second computing nodes;
receiving, by the plurality of second computing nodes, user request instructions;
when the calculation selection data in the second calculation node is the first selection data, executing the user request instruction through a calculation model in the second calculation node to generate a calculation result;
and when the calculation selection data in the second calculation node is second selection data, the user request instruction is sent to the first calculation node through the second calculation node so as to execute the user request instruction and generate a calculation result.
Further, the method further comprises:
acquiring a preset rule in the calculation model;
acquiring first updating data of a first computing node;
and adjusting the calculation model in the first calculation node based on the preset rule and the first updating data to obtain an updated calculation model in the first calculation node.
Further, after the obtaining the first update data of the first computing node, the method further includes:
acquiring second updating data of each second computing node;
and adjusting the corresponding calculation model in the second calculation node based on the preset rule and the first updating data and/or the second updating data to obtain the updated calculation model in each second calculation node.
Further, when the calculation selection data in the second calculation node is the first selection data, executing the user request instruction by the calculation model in the second calculation node to generate the calculation result includes:
when the computing selection data in the second computing node is detected to be the first selection data, user preference data are obtained through the second computing node;
inputting the user preference data into a computational model of the second computational node to output conclusion data to the second computational node via the computational model.
Further, when the calculation selection data in the second calculation node is second selection data, sending the user request instruction to the first calculation node through the second calculation node to execute the user request instruction and generate a calculation result further includes:
monitoring a plurality of second computing nodes through the first computing node, and defining the second computing nodes carrying scheduling labels as second candidate computing nodes;
when the computing selection data of the second computing node is detected to be second selection data, acquiring load data of the second candidate computing node and the first computing node;
determining a target compute node based on the second candidate compute node and the load data of the first compute node;
sending the user request instruction to the target computing node through the first computing node, so that the computing model in the target computing node executes the user request instruction to generate a computing result.
Further, the method further comprises:
receiving, through the first computing node within a preset time, a model update request sent by the second computing node, wherein the model update request carries the computing model corresponding to the second computing node and user data;
fusing the calculation model of the first calculation node and the calculation models corresponding to the plurality of second calculation nodes through a first distillation algorithm to generate a first target calculation model;
verifying the first target calculation model through user data to generate a model verification result;
when the model verification result is that the verification is passed, replacing the calculation model of the first calculation node with a first target calculation model so as to store the first target calculation model in the first calculation node;
and issuing the first target calculation model to the second calculation node through the first calculation node so that the second calculation node fuses the first target calculation model and the corresponding calculation model through a second distillation algorithm to obtain a corresponding second target calculation model.
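The "first distillation algorithm" is not detailed in this section; one conventional reading is classical knowledge distillation, in which each node model's outputs are softened with a temperature and averaged to form soft targets that supervise the fused first target computing model. A minimal sketch under that assumption (the temperature value and all function names are ours, not the patent's):

```python
import math

def softened(logits, temperature=2.0):
    """Temperature-softened softmax of one node model's output logits."""
    m = max(logits)  # subtract max for numerical stability
    exps = [math.exp((x - m) / temperature) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def fuse_soft_targets(per_node_logits, temperature=2.0):
    """Average the softened outputs of the first node's model and the
    second nodes' models; the fused distribution would then supervise
    training of the first target computing model."""
    dists = [softened(l, temperature) for l in per_node_logits]
    n = len(dists)
    return [sum(d[i] for d in dists) / n for i in range(len(dists[0]))]
```

The fused distribution stays a valid probability vector, so it can be used directly as a soft label in a cross-entropy distillation loss.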
Further, the second compute node includes a plurality of edge nodes, the method further comprising:
storing the model parameters of the calculation model of the first calculation node and first updating data in a preset central database, and storing the model parameters of the edge nodes and second updating data corresponding to the edge nodes in an edge database;
and storing the central database and the edge database in a block chain.
In order to achieve the above object, an embodiment of the present invention further provides a model management system based on a cloud computing architecture, including:
the distribution module is used for issuing the computing model of the first computing node in the cloud computing architecture to a plurality of second computing nodes;
the recording module is used for receiving selection instructions sent by the second computing nodes according to the corresponding computing models of the second computing nodes, and generating a plurality of user forms corresponding to the second computing nodes based on the selection instructions, wherein the user forms comprise a plurality of fields, and the fields are used for recording computing selection data of the second computing nodes;
a receiving module, configured to receive, through the plurality of second computing nodes, a user request instruction;
the first execution module is used for executing the user request instruction through the calculation model in the second calculation node to generate a calculation result when the calculation selection data in the second calculation node is the first selection data;
and the second execution module is used for sending the user request instruction to the first computing node through the second computing node so as to execute the user request instruction and generate a computing result when the computing selection data in the second computing node is the second selection data.
In order to achieve the above object, an embodiment of the present invention further provides a computer device, where the computer device includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and the processor, when executing the computer program, implements the steps of the cloud computing architecture-based model management method as described above.
To achieve the above object, an embodiment of the present invention further provides a computer-readable storage medium, in which a computer program is stored, where the computer program is executable by at least one processor to cause the at least one processor to execute the steps of the cloud computing architecture-based model management method as described above.
According to the model management method, system, computer device and computer-readable storage medium based on the cloud computing architecture, the computing model is issued by the first computing node to the plurality of second computing nodes, and the computing model that executes a user request instruction is chosen according to the computing selection data in the second computing node. The first computing node thus sinks computing authority to the second computing nodes, and node resources in the network are dynamically scheduled through the computing selection data of the second computing nodes, improving computing efficiency.
The invention is described in detail below with reference to the drawings and specific examples, but the invention is not limited thereto.
Drawings
FIG. 1 is a flowchart illustrating a method for managing a model based on a cloud computing architecture according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating steps of updating a computing model of a first computing node in a cloud computing architecture based model management method according to an embodiment of the present invention;
FIG. 3 is a flowchart illustrating a step of updating a computing model of a second computing node in the cloud computing architecture based model management method according to the first embodiment of the present invention;
FIG. 4 is a flowchart illustrating a step of executing a user request instruction by a second node in the cloud computing architecture based model management method according to the first embodiment of the present invention;
FIG. 5 is a flowchart illustrating steps of dynamically scheduling computing nodes in a model management method based on a cloud computing architecture according to an embodiment of the present invention;
FIG. 6 is a flowchart illustrating steps of re-initializing an updated model in a cloud computing architecture-based model management method according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of program modules of a cloud computing architecture-based model management system according to a second embodiment of the present invention;
FIG. 8 is a schematic hardware structure diagram of a computer device according to a third embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. All other embodiments, which can be obtained by a person skilled in the art without making any creative effort based on the embodiments in the present invention, belong to the protection scope of the present invention.
Technical solutions between the embodiments may be combined with each other, but must be based on the realization of the technical solutions by those skilled in the art, and when the technical solutions are contradictory or cannot be realized, such a combination should not be considered to exist, and is not within the protection scope of the present invention.
Example one
Referring to fig. 1, a flowchart illustrating steps of a model management method based on a cloud computing architecture according to an embodiment of the present invention is shown. It is to be understood that the flow charts in the embodiments of the present method are not intended to limit the order in which the steps are performed. The following description is given by taking a computer device as an execution subject, specifically as follows:
as shown in fig. 1, the model management method based on the cloud computing architecture may include steps S100 to S500, where:
step S100, issuing the computing model of the first computing node in the cloud computing architecture to a plurality of second computing nodes.
Step S200, receiving a selection instruction sent by the second computing nodes according to the corresponding computing models thereof, and generating a plurality of user forms corresponding to the second computing nodes based on the selection instruction, where the user forms include a plurality of fields, and the fields are used to record computing selection data of the second computing nodes.
In an exemplary embodiment, the selection instructions include, but are not limited to: selecting a self-updating instruction, selecting an updating instruction of other computing nodes, selecting a self-calculating instruction, selecting a calculating instruction of other computing nodes and the like.
Wherein the computing selection data of the second computing node comprises first selection data and second selection data. The first selection data is used for representing self-updating or self-computing of the corresponding second computing node. The second selection data is used to indicate that the corresponding second computing node requests other computing nodes to perform an update or computation.
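As a concrete illustration of how a user-form field carrying the selection data might route requests, here is a minimal sketch; the type name, constants, and routing function are ours, not the patent's:

```python
from dataclasses import dataclass

# First selection data: the node updates/computes itself.
FIRST_SELECTION = "self"
# Second selection data: the node asks other computing nodes to update/compute.
SECOND_SELECTION = "remote"

@dataclass
class UserForm:
    """One form per second computing node; its field records the node's
    computing selection data."""
    node_id: str
    selection: str = FIRST_SELECTION

def route(form: UserForm) -> str:
    """Decide where a user request instruction is executed."""
    return "local" if form.selection == FIRST_SELECTION else "forward"
```

With this shape, step S400 corresponds to the `"local"` branch and step S500 to the `"forward"` branch.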
Step S300, receiving user request instructions through the plurality of second computing nodes.
In an exemplary embodiment, the user request instruction is received by a user node of the plurality of second computing nodes. The user request instruction received by a certain user node is used for requesting to execute the task carried in the user request instruction, and the task carried in the user request instruction can be executed through the computing model corresponding to the user node or other computing nodes.
Step S400, when the calculation selection data in the second calculation node is the first selection data, executing a user request instruction through the calculation model in the second calculation node to generate a calculation result.
In an exemplary embodiment, when the computing selection data in the user node is the first selection data, the user's private data is used directly on the user node. The user no longer uploads private data; instead, the user request instruction is executed locally on the corresponding user node, which ensures the security of the user's private data. Because the user node can support the computation itself, the first computing node need not provide a request-execution service for the user request instruction received by the user node; this decoupling resolves the conflict between the first computing node and the user node over private data.
Step S500, when the calculation selection data in the second calculation node is second selection data, sending the user request instruction to the first calculation node through the second calculation node, so as to execute the user request instruction and generate a calculation result.
In an exemplary embodiment, the first computing node in the cloud computing architecture is a cloud center node and the plurality of second computing nodes includes a plurality of user nodes and a plurality of edge nodes. The plurality of user nodes can be understood as nodes formed by combining the sensors of the internet of things and the intelligent client. Such as intelligent air conditioner, intelligent TV, intelligent sound box, intelligent mobile phone, etc.
Further, initial training of the computing model is performed in the first computing node. Taking a financial news recommendation service as an example, the first computing node acquires data such as historical data, internal employee data and research data. The required data can be acquired from external databases, including but not limited to a financial news content database, the Wangde financial information database and the like, or from an internally constructed financial information database. The historical data includes, but is not limited to, the user's news browsing history, the financial categories the user follows, the companies the user follows, and the user's news click behavior. An initialized computing model is obtained by training on this historical training data, for example with a recommendation model such as a wide & deep model (a shallow linear network combined with a deep neural network). The accuracy of the initialized model is relatively low, but it is pre-configured with preset rules related to the model update function, such as model usage rules, update rules and scheduling rules. The initialized computing model is stored in the cloud center node, that is, in the first computing node, and is issued to the plurality of edge nodes to provide model support for edge-node computation.
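The preset rules pre-configured in the initialized model (usage, update and scheduling rules) could be bundled as a small configuration shipped alongside the model. A minimal sketch; all field names and values are hypothetical, not from the patent:

```python
# Hypothetical preset-rule bundle issued with the initialized computing model.
PRESET_RULES = {
    "usage":    {"trigger_time": "08:00", "top_k": 3},   # when the model runs, items pushed
    "update":   {"mode": "incremental", "alpha": 1.0},   # update-rule hyper-parameter
    "schedule": {"allow_delegation": True},              # may delegate to other nodes
}

def validate_rules(rules: dict) -> bool:
    """Check that the three preset rule groups the model relies on are present."""
    return {"usage", "update", "schedule"} <= rules.keys()
```

A node receiving the issued model could call `validate_rules` before accepting it, so a model without its update and scheduling rules is rejected early.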
Illustratively, when a user node requests to download the computing model of the first computing node, the computing model is issued to the router of that user node, but the historical training data is not issued to the second computing node. After the user nodes receive the computing model, they can store it in their respective routers, each of which is connected to its own intelligent clients and Internet-of-Things sensors.
In an exemplary embodiment, as shown in fig. 2, the method further comprises:
step S601, obtaining a preset rule in the calculation model.
Step S602, first update data of the first computing node is obtained.
Step S603, adjusting the computation model in the first computation node based on the preset rule and the first update data, so as to obtain an updated computation model in the first computation node.
Specifically, taking a service recommended by financial news as an example, the first update data is the latest daily financial news information acquired by the first computing node. The computational model update of the first computational node is dependent on first update data collected by the first computational node.
In an exemplary embodiment, as shown in fig. 3, after the obtaining the first update data of the first computing node, the method further includes:
step S611, obtain second update data of each second computing node.
Step S612, adjusting the computation model in the corresponding second computation node based on the preset rule and the first update data and/or the second update data, so as to obtain an updated computation model in each second computation node.
For example, the user node may request to acquire the latest financial news information acquired by the first computing node every day, and then update the computing model of the user node with the acquired data. Or the user node can also request to download the latest financial news information from other nodes in the network architecture in a p2p (peer-to-peer) mode, or can request other channels such as an external website, a database, an intelligent sensor and the like through the internet to acquire the latest financial news information, and then the acquired data is used for updating the calculation model of the user node.
Illustratively, the second update data is personalized data of the user corresponding to the user node, and the user node may also choose to update and evolve its computing model with this personalized data. The training data used when each user node updates its computing model therefore differs, and the training data each user node can acquire is richer.
Specifically, the second update data may be data content newly added between the current time T2 and the last update time T1, organized by rows, for example: [timestamp t1 (year, month, day, hour, minute, second), news content recommended by the model, category recommended by the model, companies involved, user browsing duration, user feedback score, …], [timestamp t2, financial information recommended by the model, user browsing duration, user feedback score, …], …, [timestamp tn, …], where t1, t2, …, tn lie between T1 and T2 and each row is one piece of second update data. If the user has given feedback on the recommendation corresponding to a piece of second update data, that piece includes first tag data: the user feedback score out of 5, derived from the feedback data the user returns to the corresponding user node; the user may help the model update by scoring a pushed news/financial event from 0 to 5, or may give no feedback. If the user has given no feedback on the recommendation corresponding to a piece of second update data, that piece includes second tag data: 1 - exp(-alpha × user browsing duration), where exp is the exponential function with natural base e, and alpha is a rule hyper-parameter preset to a positive value (for example 1.0) at initialization. Incremental supervised updating of the user node's computing model can be performed with this second update data.
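The two tag rules above can be written directly as code. A minimal sketch; the function names are ours, and alpha defaults to 1.0 as in the text:

```python
import math

ALPHA = 1.0  # preset positive rule hyper-parameter from initialization

def implicit_label(browse_seconds: float, alpha: float = ALPHA) -> float:
    """Second tag data (no explicit feedback): 1 - exp(-alpha * browsing
    duration); monotonically increasing in duration and bounded in [0, 1)."""
    return 1.0 - math.exp(-alpha * browse_seconds)

def explicit_label(feedback_score: int) -> float:
    """First tag data (explicit feedback): the user's 0-5 score, scaled
    to [0, 1] so both tag types share one label range."""
    return feedback_score / 5.0
```

Scaling the explicit score to [0, 1] is our choice for comparability; the patent only specifies the 0-5 range and the exponential form.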
For example, the user node may update and evolve the computing model of the user node according to the latest financial news information acquired from the first computing node and the personalized data of the user corresponding to the user node.
In an exemplary embodiment, the computing models of the first computing node and of the user nodes among the second computing nodes are not updated uniformly. The computing models of the user nodes thereby escape the uniformity imposed by the first computing node's public training data, and each second computing node's updated model can provide more personalized, better-fitting services to its node. Different second computing nodes may evolve their models in different directions, eliminating homogeneity. For example, suppose the computing model of the first computing node is a recommendation model and a user node has requested it; if the corresponding user's service requirements involve no recommendations other than music services, the model may keep updating at that user node and evolve into a model that only recommends music.
In addition, the user node can only collect corresponding user data locally to update the model, the private data of the user does not need to be uploaded to the cloud, namely the private data of the user does not need to be uploaded to the first computing node, the user has complete ownership and use right for the private data of the user, and the problem of leakage of the private information of the user is solved to a great extent.
In an exemplary embodiment, when a user node with weak computing power needs an update, the first computing node may dynamically invoke resources on the network architecture to update that node's computing model. Each time an update is triggered, the user node sends its private data and the model parameters of its computing model to the first computing node; the first computing node dynamically schedules the work to itself, to an idle edge node, or to another idle user node, and the updated gradient is returned after the model parameters are updated. Only an incremental gradient is computed during the update, so the user's private computing model is not disclosed, and the security of both the private data and the model is guaranteed. Each small-batch update requests other computing-node resources through the cloud center, and successive updates are generally not placed on the same computing node, which reduces the chance of user data leakage.
In an exemplary embodiment, the method further comprises: storing the model parameters of the calculation model of the first calculation node and first updating data in a preset central database, and storing the model parameters of the edge nodes and second updating data corresponding to the edge nodes in an edge database; and storing the central database and the edge database in a block chain.
The blockchain is essentially a decentralized database: a chain of data blocks linked by cryptographic methods, where each block contains a batch of network transaction records used to verify the validity (tamper resistance) of its information and to generate the next block. A blockchain may include a blockchain underlying platform, a platform product service layer, an application service layer, and the like.
In an exemplary embodiment, as shown in fig. 4, when the calculation selection data in the second calculation node is the first selection data, the generating a calculation result by executing the user request instruction through the calculation model in the second calculation node includes:
step S401, when it is detected that the calculation selection data in the second calculation node is the first selection data, obtaining, by the second calculation node, user preference data.
Step S402, inputting the user preference data into a calculation model of the second calculation node, so as to output conclusion data to the second calculation node through the calculation model.
Illustratively, when the computing selection data is the first selection data, the router of the user node responds after receiving a user request instruction. Its computing model may, for example, be triggered once at eight o'clock every morning: the latest news and financial information collected up to that time serve as the candidate content, the user's browsing and click history serve as the features, and both are input to the router-node model to obtain an accurate output, such as selecting from 1000 candidate news items the 3 the user is most likely to click and popping them up to the user.
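The pop-up step above, selecting from 1000 candidates the 3 items the user is most likely to click, is a plain top-k over the model's click scores. A minimal sketch; the function name and the score format are ours:

```python
import heapq

def recommend_top_k(click_probs: dict, k: int = 3) -> list:
    """Return the k candidate item ids the model scores as most likely clicks.
    click_probs maps item id -> model-predicted click probability."""
    return heapq.nlargest(k, click_probs, key=click_probs.get)
```

`heapq.nlargest` runs in O(n log k), which matters little at 1000 candidates but keeps the router-side selection cheap if the candidate pool grows.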
In an exemplary embodiment, as shown in fig. 5, when the computing selection data in the second computing node is second selection data, sending, by the second computing node, the user request instruction to the first computing node to execute the user request instruction to generate a computing result further includes:
step S501, monitoring a plurality of second computing nodes through the first computing node, and defining the second computing nodes carrying scheduling labels as second candidate computing nodes;
step S502, when the calculation selection data of the second calculation node is detected to be second selection data, acquiring load data of the second candidate calculation node and the first calculation node;
step S503, determining a target computing node based on the second candidate computing node and the load data of the first computing node;
step S504, sending a user request instruction to the target computing node through the first computing node, so that the computing model in the target computing node executes the user request instruction to generate a computing result.
In an exemplary embodiment, a second computing node carrying a scheduling tag may be understood as a user node with strong computing power that actively offers part of its own computing resources to help other nodes with weaker computing power update or compute.
Specifically, when the computing selection data is the second selection data, it can be understood that the second computing node needs to request other computing nodes to perform the computation, and scheduling is performed through the first computing node, that is, through the cloud center.
Further, when the first computing node dynamically calls resources on the network architecture to execute the user request instruction received by a user node, that user node sends its privacy data and the model parameters of its calculation model to the first computing node, and the first computing node determines the target computing node according to the load data of the other computing nodes, completing the dynamic scheduling of resources. The first computing node may schedule itself, an idle edge node, or another idle user node to execute the user request instruction received by the user node; after a result is output, the output result (which may be pushed data) is returned to the user node.
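Steps S501 to S504 amount to a least-loaded scheduling decision, which can be sketched as follows (node names, the load metric, and the `pick_target_node` helper are assumptions for illustration):

```python
from dataclasses import dataclass

@dataclass
class ComputeNode:
    name: str
    load: float                  # fraction of capacity currently in use
    has_schedule_tag: bool = False

def pick_target_node(first_node, second_nodes):
    """Among the cloud-center node and the user nodes that volunteered spare
    capacity (carrying a scheduling tag), choose the least-loaded node as the
    target for the user request instruction."""
    candidates = [n for n in second_nodes if n.has_schedule_tag]
    candidates.append(first_node)        # the cloud center is always eligible
    return min(candidates, key=lambda n: n.load)

cloud = ComputeNode("cloud-center", load=0.7)
routers = [ComputeNode("router-a", 0.9, True),
           ComputeNode("router-b", 0.2, True),
           ComputeNode("router-c", 0.1, False)]   # idle but not volunteering
target = pick_target_node(cloud, routers)
```

Note that "router-c" is the least loaded overall but is never chosen, because only nodes that opted in via the scheduling tag (plus the cloud center itself) are candidates.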
In an exemplary embodiment, as shown in fig. 6, the method further comprises:
Step S701, receiving, by the first computing node within a preset time, a model re-update request sent by the second computing node, where the model update request carries the calculation model corresponding to the second computing node and user data.
Step S702, fusing the calculation model of the first calculation node and the calculation models corresponding to the plurality of second calculation nodes through a first distillation algorithm to generate a first target calculation model.
And S703, verifying the first target calculation model through user data to generate a model verification result.
Step S704, when the model verification result is that the verification passes, replacing the calculation model of the first calculation node with a first target calculation model, so that the first target calculation model is stored in the first calculation node.
Step S705, issuing the first target calculation model to the second calculation node through the first calculation node, so that the second calculation node fuses the first target calculation model and the corresponding calculation model through a second distillation algorithm to obtain a corresponding second target calculation model.
In an exemplary embodiment, the preset time may be half a month or a month, set according to actual requirements; the preset time is not limited in the embodiments of the present invention. At intervals, the first computing node collects the model parameters and privacy data of user nodes that need to be reinitialized and updated or are willing to provide data, gathers this data together, updates the calculation model through a tournament algorithm to obtain a first target calculation model, and determines, according to the performance of the first target calculation model, whether to use it as the reinitialization model in the first computing node.
It can be understood that the first computing node may, over a period of time, collect 10000 pieces of private data and 100 models sent by 100 user nodes willing to share data. For each piece of data, every one of the 100 models produces a result, and the cross-entropy loss is calculated as L = -[Y_t log(Y_p) + (1 - Y_t) log(1 - Y_p)], where L is the cross-entropy loss, Y_t is the true label, and Y_p is the result output by the calculation model; based on this cross-entropy loss, the one or more best-performing models are selected. The best-performing model or models, called the teacher model, produce not only the last-layer score corresponding to the result but also a series of intermediate-layer outputs: the output of the network layer just before the final result is denoted Y_{t-k}, and the output of the corresponding layer of the first computing node's calculation model is denoted Y_{p-k}. By adding the weighted squared loss, L_total = L + α (Y_{t-k} - Y_{p-k})^2, where α is the weight parameter of the teacher model, the model weights can be better updated to obtain the first target calculation model.
After the first target calculation model is obtained, it is run again on the 10000 data items to evaluate whether the accuracy of its output improves on that of the previous calculation model in the first computing node; if so, the first target calculation model can replace it as the reinitialized calculation model. Alternatively, nearby computing nodes may verify whether the accuracy of the first target calculation model has improved, and if it has, the first target calculation model is conditionally synchronized to the whole network.
Further, after reinitialization, the first computing node can notify all computing nodes that an incremental update is available; any remaining computing node willing to update downloads the first target calculation model from the first computing node and updates it through the second distillation algorithm, while the others do not update, completing the reinitialization update of the whole network.
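The teacher selection and distillation update described above can be sketched as follows (a hedged sketch: the exact form of the combined loss and the α weighting are inferred from the description, and all function names are illustrative):

```python
import numpy as np

def cross_entropy(y_true, y_pred, eps=1e-12):
    """Binary cross-entropy L = -[Y_t log(Y_p) + (1 - Y_t) log(1 - Y_p)]."""
    p = np.clip(y_pred, eps, 1 - eps)
    return -(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))

def pick_teacher(per_model_losses):
    """Select the best-performing (teacher) model: the one with lowest mean loss."""
    return int(np.argmin([np.mean(l) for l in per_model_losses]))

def distill_loss(y_true, y_student, h_teacher, h_student, alpha=0.5):
    """Combined objective: cross-entropy on the final output plus the
    alpha-weighted squared loss between the teacher's and the student's
    pre-output-layer activations (Y_{t-k} vs Y_{p-k})."""
    ce = float(np.mean(cross_entropy(y_true, y_student)))
    hidden = float(np.mean((h_teacher - h_student) ** 2))
    return ce + alpha * hidden
```

Gradient steps on `distill_loss` would then pull the first node's model toward the teacher, yielding the first target calculation model.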
The invention has at least the following beneficial effects: by combining the blockchain idea with a cloud computing service architecture, the computing authority of the cloud center sinks to the terminal nodes, that is, to the user nodes, reducing the cost for an enterprise to build a cloud center; each user node can choose whether or not to upload its private data, greatly safeguarding the security of users' private data; the update of the calculation model in each user node can differ from person to person, with personalized model updates performed on each node's own private data, bringing model updating closer to the user's actual scenario; the model can be customized for different applications and different scenarios, adapting to deployment of the calculation model under various conditions; the advantages of the various sensors of the Internet of Things can be comprehensively utilized, making model updates more accurate and improving the user experience; the added value of home router nodes in the Internet of Things era is increased, promoting the adoption of intelligent interconnected devices; the pressure on the backbone communication network and the information transmission delay are both reduced; and the model evolves automatically through preset rules or personal preferences, with the network updated automatically and synchronously.
Example two
Continuing to refer to fig. 7, a schematic diagram of the program modules of the cloud computing architecture-based model management system of the present invention is shown. In this embodiment, the model management system 20 based on the cloud computing architecture may include, or be divided into, one or more program modules, which are stored in a storage medium and executed by one or more processors to implement the present invention and the above model management method based on the cloud computing architecture. The program modules referred to in the embodiments of the present invention are a series of computer program instruction segments capable of performing specific functions, better suited than the program itself to describing the execution of the model management system 20 based on the cloud computing architecture in the storage medium. The following description specifically introduces the functions of the program modules of this embodiment:
a distribution module 800, configured to issue a computing model of a first computing node in the cloud computing architecture to a plurality of second computing nodes;
a recording module 810, configured to receive a selection instruction sent by the second computing nodes according to the corresponding computing models thereof, and generate a plurality of user forms corresponding to the second computing nodes based on the selection instruction, where the user forms include a plurality of fields, and the fields are used to record computing selection data of the second computing nodes;
a receiving module 820, configured to receive a user request instruction through the plurality of second computing nodes;
a first executing module 830, configured to execute a user request instruction through a computing model in the second computing node to generate a computing result when the computing selection data in the second computing node is the first selection data;
a second executing module 840, configured to send, by the second computing node, the user request instruction to the first computing node when the computing selection data in the second computing node is second selection data, so as to execute the user request instruction and generate a computing result.
In the exemplary embodiment, the system further includes an update module 850, the update module 850 configured to: acquiring a preset rule in the calculation model; acquiring first updating data of a first computing node; and adjusting the calculation model in the first calculation node based on the preset rule and the first updating data to obtain an updated calculation model in the first calculation node.
In an exemplary embodiment, the update module 850 is further configured to: acquiring second updating data of each second computing node; and adjusting the corresponding calculation model in the second calculation node based on the preset rule and the first updating data and/or the second updating data to obtain the updated calculation model in each second calculation node.
In an exemplary embodiment, the first execution module 830 is further configured to: when the computing selection data in the second computing node is detected to be the first selection data, user preference data are obtained through the second computing node; inputting the user preference data into a computational model of the second computational node to output conclusion data to the second computational node via the computational model.
In an exemplary embodiment, the second execution module 840 is further configured to: monitoring a plurality of second computing nodes through the first computing node, and defining the second computing nodes carrying scheduling labels as second candidate computing nodes; when the computing selection data of the second computing node is detected to be second selection data, acquiring load data of the second candidate computing node and the first computing node; determining a target compute node based on the second candidate compute node and the load data of the first compute node; sending a user request instruction to the target computing node through the first computing node so that the computing model in the target computing node executes the user request instruction to generate a computing result.
In an exemplary embodiment, the update module 850 is further configured to: when a model re-updating request sent by the second computing node is received through the first computing node within a preset time, the model updating request carries a computing model corresponding to the second computing node and user data; fusing the calculation model of the first calculation node and the calculation models corresponding to the plurality of second calculation nodes through a first distillation algorithm to generate a first target calculation model; verifying the first target calculation model through user data to generate a model verification result; when the model verification result is that the verification is passed, replacing the calculation model of the first calculation node with a first target calculation model so as to store the first target calculation model in the first calculation node; and issuing the first target calculation model to the second calculation node through the first calculation node, so that the second calculation node fuses the first target calculation model and the corresponding calculation model through a second distillation algorithm to obtain a corresponding second target calculation model.
EXAMPLE III
Fig. 8 is a schematic diagram of a hardware architecture of a computer device according to a third embodiment of the present invention. In this embodiment, the computer device 2 is a device capable of automatically performing numerical calculation and/or information processing according to a preset or stored instruction. The computer device 2 may be a rack server, a blade server, a tower server or a cabinet server (including an independent server or a server cluster composed of a plurality of servers), and the like. As shown in fig. 8, the computer device 2 includes, but is not limited to, at least a memory 21, a processor 22, a network interface 23, and a model management system 20 based on a cloud computing architecture, which are communicatively connected to each other through a system bus. Wherein:
in this embodiment, the memory 21 includes at least one type of computer-readable storage medium including a flash memory, a hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory, etc.), a Random Access Memory (RAM), a Static Random Access Memory (SRAM), a read-only memory (ROM), an electrically erasable programmable read-only memory (EEPROM), a programmable read-only memory (PROM), a magnetic memory, a magnetic disk, an optical disk, and the like. In some embodiments, the storage 21 may be an internal storage unit of the computer device 2, such as a hard disk or a memory of the computer device 2. In other embodiments, the memory 21 may also be an external storage device of the computer device 2, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), or the like provided on the computer device 2. Of course, the memory 21 may also comprise both internal and external memory units of the computer device 2. In this embodiment, the memory 21 is generally used for storing an operating system and various application software installed in the computer device 2, such as the program code of the model management system 20 based on the cloud computing architecture in the above embodiment. Further, the memory 21 may also be used to temporarily store various types of data that have been output or are to be output.
Processor 22 may be a Central Processing Unit (CPU), controller, microcontroller, microprocessor, or other data Processing chip in some embodiments. The processor 22 is typically used to control the overall operation of the computer device 2. In this embodiment, the processor 22 is configured to run the program code stored in the memory 21 or process data, for example, run the model management system 20 based on the cloud computing architecture, so as to implement the model management method based on the cloud computing architecture of the above embodiment.
The network interface 23 may comprise a wireless network interface or a wired network interface, and is generally used for establishing a communication connection between the computer device 2 and other electronic apparatuses. For example, the network interface 23 is used to connect the computer device 2 to an external terminal through a network and to establish a data transmission channel and communication connection between the computer device 2 and the external terminal. The network may be an Intranet, the Internet, a Global System for Mobile communications (GSM) network, a Wideband Code Division Multiple Access (WCDMA) network, a 4G network, a 5G network, Bluetooth, Wi-Fi, or another wireless or wired network.
It is noted that fig. 8 only shows the computer device 2 with components 20-23, but it is to be understood that not all shown components are required to be implemented, and that more or less components may be implemented instead.
In this embodiment, the cloud computing architecture-based model management system 20 stored in the memory 21 may also be divided into one or more program modules, and the one or more program modules are stored in the memory 21 and executed by one or more processors (in this embodiment, the processor 22) to complete the present invention.
For example, fig. 7 is a schematic diagram of program modules of a second embodiment for implementing the cloud computing architecture-based model management system 20, and in this embodiment, the cloud computing architecture-based model management system 20 may be divided into a distribution module 800, a recording module 810, a receiving module 820, a first execution module 830, and a second execution module 840. The program module referred to in the present invention refers to a series of computer program instruction segments capable of performing specific functions, and is more suitable than a program for describing the execution process of the cloud computing architecture based model management system 20 in the computer device 2. The specific functions of the program modules 800-840 have been described in detail in the second embodiment, and are not described herein again.
Example four
The present embodiment also provides a computer-readable storage medium, such as a flash memory, a hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory, etc.), a Random Access Memory (RAM), a Static Random Access Memory (SRAM), a read-only memory (ROM), an electrically erasable programmable read-only memory (EEPROM), a programmable read-only memory (PROM), a magnetic memory, a magnetic disk, an optical disk, a server, an App application store, etc., on which a computer program is stored, which when executed by a processor implements corresponding functions. The computer-readable storage medium of the embodiment is used for storing the model management system 20 based on the cloud computing architecture, and when being executed by the processor, the computer-readable storage medium implements the model management method based on the cloud computing architecture of the embodiment.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (10)

1. A model management method based on a cloud computing architecture is characterized by comprising the following steps:
issuing a computing model of a first computing node in the cloud computing architecture to a plurality of second computing nodes;
receiving selection instructions sent by the second computing nodes according to the corresponding computing models of the second computing nodes, and generating a plurality of user forms corresponding to the second computing nodes based on the selection instructions, wherein the user forms comprise a plurality of fields, the fields are used for recording computing selection data of the second computing nodes, the computing selection data comprise first selection data and second selection data, and the first selection data represent private use of privacy data of users;
receiving, by the plurality of second computing nodes, user request instructions;
when the calculation selection data in the second calculation node is the first selection data, executing the user request instruction through a calculation model in the second calculation node to generate a calculation result;
and when the calculation selection data in the second calculation node is second selection data, the user request instruction is sent to the first calculation node through the second calculation node so as to execute the user request instruction and generate a calculation result.
2. The cloud computing architecture based model management method of claim 1, the method further comprising:
acquiring a preset rule in the calculation model;
acquiring first updating data of a first computing node;
and adjusting the calculation model in the first calculation node based on the preset rule and the first updating data to obtain an updated calculation model in the first calculation node.
3. The cloud computing architecture based model management method of claim 2, wherein obtaining the first update data for the first computing node further comprises:
acquiring second updating data of each second computing node;
and adjusting the calculation model in the corresponding second calculation node based on the preset rule and the first updating data and/or the second updating data to obtain the updated calculation model in each second calculation node.
4. The cloud computing architecture-based model management method of claim 2, wherein the generating of the computation result by the computing model in the second computing node executing the user request instruction when the computing selection data in the second computing node is the first selection data comprises:
when the computing selection data in the second computing node is detected to be the first selection data, user preference data are obtained through the second computing node;
inputting the user preference data into a computational model of the second computational node to output conclusion data to the second computational node via the computational model.
5. The cloud computing architecture-based model management method according to claim 2, wherein the sending, by the second computing node, the user request instruction to the first computing node to execute the user request instruction to generate the computing result when the computing selection data in the second computing node is the second selection data further comprises:
monitoring a plurality of second computing nodes through the first computing node, and defining the second computing nodes carrying scheduling labels as second candidate computing nodes;
when the computing selection data of the second computing node is detected to be second selection data, acquiring load data of the second candidate computing node and the first computing node;
determining a target compute node based on the second candidate compute node and the load data of the first compute node;
sending, by a first computing node, a user request instruction to the target computing node, so that a computation model in the target computing node executes the user request instruction to generate a computation result.
6. The cloud computing architecture based model management method of claim 1, the method further comprising:
when a model updating request sent by the second computing node is received by the first computing node within a preset time, the model updating request carries a computing model corresponding to the second computing node and user data;
fusing the calculation model of the first calculation node and the calculation models corresponding to the plurality of second calculation nodes through a first distillation algorithm to generate a first target calculation model;
verifying the first target calculation model through user data to generate a model verification result;
when the model verification result is that the verification is passed, replacing the calculation model of the first calculation node with a first target calculation model so as to store the first target calculation model in the first calculation node;
and issuing the first target calculation model to the second calculation node through the first calculation node so that the second calculation node fuses the first target calculation model and the corresponding calculation model through a second distillation algorithm to obtain a corresponding second target calculation model.
7. The cloud computing architecture based model management method of claim 1, wherein the second computing node comprises a plurality of edge nodes, the method further comprising:
storing the model parameters of the calculation model of the first calculation node and first updating data in a preset central database, and storing the model parameters of the edge nodes and second updating data corresponding to the edge nodes in an edge database;
and storing the central database and the edge database in a block chain.
8. A model management system based on a cloud computing architecture is characterized by comprising:
the distribution module is used for issuing a computing model of a first computing node in the cloud computing architecture to a plurality of second computing nodes;
the recording module is used for receiving selection instructions sent by the second computing nodes according to the corresponding computing models of the second computing nodes, and generating a plurality of user forms corresponding to the second computing nodes based on the selection instructions, wherein the user forms comprise a plurality of fields, the fields are used for recording computing selection data of the second computing nodes, the computing selection data comprise first selection data and second selection data, and the first selection data represent private use of private data of users;
a receiving module, configured to receive, through the plurality of second computing nodes, a user request instruction;
the first execution module is used for executing the user request instruction through a calculation model in the second calculation node to generate a calculation result when the calculation selection data in the second calculation node is the first selection data;
and the second execution module is used for sending the user request instruction to the first computing node through the second computing node so as to execute the user request instruction and generate a computing result when the computing selection data in the second computing node is the second selection data.
9. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor when executing the computer program implements the steps of the cloud computing architecture based model management method of any one of claims 1 to 7.
10. A computer-readable storage medium, having stored therein a computer program executable by at least one processor to cause the at least one processor to perform the steps of the cloud computing architecture based model management method of any one of claims 1 to 7.
CN202010905581.9A 2020-09-01 2020-09-01 Model management method and system based on cloud computing architecture Active CN112039992B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010905581.9A CN112039992B (en) 2020-09-01 2020-09-01 Model management method and system based on cloud computing architecture


Publications (2)

Publication Number Publication Date
CN112039992A CN112039992A (en) 2020-12-04
CN112039992B true CN112039992B (en) 2022-10-28

Family

ID=73590906

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010905581.9A Active CN112039992B (en) 2020-09-01 2020-09-01 Model management method and system based on cloud computing architecture

Country Status (1)

Country Link
CN (1) CN112039992B (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107959708A (en) * 2017-10-24 2018-04-24 北京邮电大学 Internet-of-Vehicles service collaborative computing method and system based on cloud, edge and vehicle ends

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9781205B2 (en) * 2011-09-12 2017-10-03 Microsoft Technology Licensing, Llc Coordination engine for cloud selection
US10536356B2 (en) * 2015-09-21 2020-01-14 Splunk Inc. Generating and displaying topology map time-lapses of cloud computing resources
US10382466B2 (en) * 2017-03-03 2019-08-13 Hitachi, Ltd. Cooperative cloud-edge vehicle anomaly detection
CN107087019B (en) * 2017-03-14 2020-07-07 西安电子科技大学 Task scheduling method and device based on end cloud cooperative computing architecture
CN110162018B (en) * 2019-05-31 2020-11-24 天津开发区精诺瀚海数据科技有限公司 Incremental equipment fault diagnosis method based on knowledge distillation and hidden layer sharing
CN111585916B (en) * 2019-12-26 2023-08-01 国网辽宁省电力有限公司电力科学研究院 LTE power wireless private network task unloading and resource allocation method based on cloud edge cooperation
CN111262906B (en) * 2020-01-08 2021-05-25 中山大学 Method for unloading mobile user terminal task under distributed edge computing service system
CN111240821B (en) * 2020-01-14 2022-04-22 华南理工大学 Collaborative cloud computing migration method based on Internet of vehicles application security grading

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107959708A (en) * 2017-10-24 2018-04-24 北京邮电大学 Internet-of-Vehicles service collaborative computing method and system based on cloud, edge and vehicle ends

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
A low-coupling method for sensor-cloud based on edge computing; Liang Yuzhu et al.; Journal of Computer Research and Development (《计算机研究与发展》); 2020-03-15 (No. 03); full text *
Research on edge computing resource collaboration based on comprehensive trust; Deng Xiaoheng et al.; Journal of Computer Research and Development (《计算机研究与发展》); 2018-03-15 (No. 03); full text *

Also Published As

Publication number Publication date
CN112039992A (en) 2020-12-04

Similar Documents

Publication Publication Date Title
CN109003028B (en) Method and device for dividing logistics area
KR102358604B1 (en) Convergence data processing method and information recommendation system
CN111915019A (en) Federal learning method, system, computer device, and storage medium
JP2021508395A (en) Client, server, and client-server systems adapted to generate personalized recommendations
US11194869B2 (en) Method and apparatus for enriching metadata via a network
CN112039992B (en) Model management method and system based on cloud computing architecture
CN115249082A (en) User interest prediction method, device, storage medium and electronic equipment
CN110347973B (en) Method and device for generating information
US11651271B1 (en) Artificial intelligence system incorporating automatic model updates based on change point detection using likelihood ratios
US11636377B1 (en) Artificial intelligence system incorporating automatic model updates based on change point detection using time series decomposing and clustering
CN111221517A (en) Model creating method and device, computer equipment and readable storage medium
CN113448876B (en) Service testing method, device, computer equipment and storage medium
CN115705578A (en) Method and device for determining delivery area and storage medium
CN111598390B (en) Method, device, equipment and readable storage medium for evaluating high availability of server
CN114925275A (en) Product recommendation method and device, computer equipment and storage medium
CN114510627A (en) Object pushing method and device, electronic equipment and storage medium
CN114912627A (en) Recommendation model training method, system, computer device and storage medium
CN112579246B (en) Virtual machine migration processing method and device
CN114219663A (en) Product recommendation method and device, computer equipment and storage medium
CN113221016A (en) Resource recommendation method and device, computer equipment and medium
CN112559221A (en) Intelligent list processing method, system, equipment and storage medium
CN115545248A (en) Target object prediction method, device, equipment and medium
CN113326333A (en) Data processing method, system, computer device and computer storage medium
CN111309993A (en) Method and system for generating enterprise asset data portrait
CN114547434B (en) Object recommendation method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant