CN113590336A - Algorithm management method and device of edge computing equipment - Google Patents


Info

Publication number
CN113590336A
CN113590336A · Application CN202110920263.4A
Authority
CN
China
Prior art keywords
algorithm
deployment
data
edge computing
computing device
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110920263.4A
Other languages
Chinese (zh)
Inventor
朱彦祺
陶泽沛
李新桥
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Rentong Electronic Technology Co ltd
Original Assignee
Shanghai Rentong Electronic Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Rentong Electronic Technology Co ltd filed Critical Shanghai Rentong Electronic Technology Co ltd
Priority to CN202110920263.4A priority Critical patent/CN113590336A/en
Publication of CN113590336A publication Critical patent/CN113590336A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5061Partitioning or combining of resources
    • G06F9/5072Grid computing

Abstract

The embodiment of the invention discloses an algorithm management method and apparatus for an edge computing device, applied to the edge computing device. The method comprises the following steps: receiving a packaged algorithm file, wherein the packaged algorithm file is obtained by packaging at least the main entry file and the auxiliary files required to run the algorithm to be deployed; and receiving parameter configuration data for the algorithm to be deployed, completing the parameter configuration of the algorithm to be deployed, and obtaining a deployment algorithm that can be scheduled for use. In this scheme there is no coupling between the deployment algorithm and the edge computing device, and the algorithm form of a main entry file paired with a compressed package of auxiliary files can accommodate a wide variety of algorithms, improving the flexibility of algorithm deployment and reducing operational difficulty.

Description

Algorithm management method and device of edge computing equipment
Technical Field
The invention relates to industrial automation technology, and in particular to an algorithm management method and apparatus for an edge computing device.
Background
Currently, edge computing technology is widely applied in fields such as industrial automation and Internet of Things applications. Most existing edge computing products fall into two types. In the first, edge signal acquisition equipment is arranged at the edge and the acquired signals are transmitted to a ground terminal for algorithm analysis; this approach places low performance requirements on the edge device, but depends on signal transmission quality and offers poor real-time performance. In the second, both signal acquisition and data analysis are completed at the edge; this approach offers fast response and small data transmission volume, but places higher demands on the edge device and its algorithms.
On the whole, existing edge computing products mainly provide signal acquisition and primary signal processing, and only a small number of devices also implement functions such as data analysis and algorithm reasoning. In current edge computing devices, however, a high degree of coupling exists between the device and the algorithm, so algorithms are complex to deploy and difficult to maintain, and it is increasingly hard to meet industry demands for intelligent edge computing and algorithm diversity.
Disclosure of Invention
In view of this, the present invention provides the following technical solutions:
An algorithm management method for an edge computing device, applied to the edge computing device and comprising the following steps:
receiving a packaged algorithm file, wherein the packaged algorithm file is obtained by packaging at least the main entry file and the auxiliary files required to run the algorithm to be deployed;
and receiving parameter configuration data for the algorithm to be deployed, completing the parameter configuration of the algorithm to be deployed, and obtaining a deployment algorithm that can be scheduled for use.
Optionally, the parameter configuration data includes algorithm basic parameters, algorithm identification parameters, algorithm enabling parameters, algorithm input parameters, and algorithm internal output parameters.
Optionally, after receiving the parameter configuration data of the algorithm to be deployed and completing its parameter configuration, the method further includes:
and calling the deployment algorithm to carry out data reasoning to obtain a data reasoning result.
Optionally, the invoking the deployment algorithm to perform data inference to obtain a data inference result includes:
the algorithm management end receives an algorithm task and controls and starts the deployment algorithm, and the execution of the algorithm task is realized at least based on the deployment algorithm;
the algorithm management terminal sends the original data to the running process of the deployment algorithm so that the running process of the deployment algorithm carries out data reasoning based on the original data;
and the algorithm management terminal receives and returns the data reasoning result sent by the running process of the deployment algorithm.
Optionally, before the algorithm management end sends the original data to the running process of the deployment algorithm, the method further includes:
and the algorithm management terminal determines a binding port of the deployment algorithm, and the binding port is used for data transmission between the running process of the deployment algorithm and the algorithm management terminal.
Optionally, the determining, by the algorithm management end, the bound port of the deployment algorithm includes:
and the algorithm management terminal receives an algorithm information message sent by the running process of the deployment algorithm, wherein the algorithm information message comprises a binding port and an algorithm ID of the deployment algorithm.
Optionally, after the algorithm management end determines the binding port of the deployment algorithm, the method further includes:
and the algorithm management terminal sends an information confirmation message to the running process of the deployment algorithm.
Optionally, the binding port is a communication port conforming to a user datagram protocol.
Optionally, the method further includes:
and the algorithm management terminal sends a calling termination message to the running process of the deployment algorithm, so that the running process of the deployment algorithm is automatically closed after receiving the calling termination message.
An algorithm management apparatus for an edge computing device, applied to the edge computing device and comprising:
an algorithm receiving module, configured to receive a packaged algorithm file, the packaged algorithm file being obtained by packaging at least the main entry file and the auxiliary files required to run the algorithm to be deployed;
and a parameter configuration module, configured to receive parameter configuration data for the algorithm to be deployed, complete the parameter configuration of the algorithm to be deployed, and obtain a deployment algorithm that can be scheduled for use.
As can be seen from the foregoing technical solutions, compared with the prior art, the embodiment of the present invention discloses an algorithm management method and apparatus for an edge computing device, applied to the edge computing device. The method comprises: receiving a packaged algorithm file, wherein the packaged algorithm file is obtained by packaging at least the main entry file and the auxiliary files required to run the algorithm to be deployed; and receiving parameter configuration data for the algorithm to be deployed, completing the parameter configuration of the algorithm to be deployed, and obtaining a deployment algorithm that can be scheduled for use. In this scheme there is no coupling between the deployment algorithm and the edge computing device, and the algorithm form of a main entry file paired with a compressed package of auxiliary files can accommodate a wide variety of algorithms, improving the flexibility of algorithm deployment and reducing operational difficulty.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings required in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description are merely embodiments of the present invention, and that those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a flowchart of an algorithm management method of an edge computing device according to an embodiment of the present invention;
FIG. 2 is a block diagram of an implementation of the algorithm management method of the edge computing device according to the embodiment of the present invention;
FIG. 3 is a schematic diagram illustrating an interaction principle between a common algorithm side and an algorithm management side according to an embodiment of the present invention;
FIG. 4 is a flowchart of another algorithm management method of an edge computing device according to an embodiment of the present disclosure;
FIG. 5 is a flowchart illustrating an implementation of invoking a deployment algorithm for data inference, according to an embodiment of the present disclosure;
FIG. 6 is a schematic diagram illustrating a data management interaction flow of a multi-level algorithm according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of an algorithm scheduling process disclosed in the embodiment of the present invention;
FIG. 8 is a flowchart illustrating the processes of the algorithm end according to the embodiment of the present invention;
fig. 9 is a schematic structural diagram of an algorithm management apparatus of an edge computing device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings in the embodiments. It is obvious that the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person skilled in the art from these embodiments without creative effort fall within the protection scope of the present invention.
Fig. 1 is a flowchart of an algorithm management method of an edge computing device according to an embodiment of the present disclosure, where the method shown in fig. 1 is applied to an edge computing device. Referring to fig. 1, an algorithm management method of an edge computing device may include:
step 101: and receiving a packing algorithm file, wherein the packing algorithm file is obtained by packing at least a main entry file and an auxiliary file operated by the algorithm to be deployed.
In current edge computing devices, the degree of coupling between the device and the algorithm is high, so algorithm deployment is complex and maintenance is difficult. To overcome this problem, in the present application the deployment algorithm is configured independently in the edge computing device, so that it can run in the edge computing device as an independent process, thereby decoupling the deployment algorithm from the edge computing device.
Specifically, in the embodiment of the present application, when an algorithm is deployed on an edge computing device, the main entry file and the auxiliary files required to run the algorithm to be deployed are packaged into a compressed package and uploaded to the algorithm platform of the edge computing device. The auxiliary files may be, but are not limited to, any one or more of the library files, model files, and algorithm parameter files on which the algorithm to be deployed depends. Because the algorithm to be deployed enters the edge computing device in the form of a compressed package, no coupling arises between the deployment algorithm and the edge computing device, making deployment more flexible and later maintenance simpler.
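As a concrete illustration, packaging the main entry file together with its auxiliary files (dependent libraries, model files, parameter files) might look like the following sketch; the function name and archive layout are illustrative, not mandated by the patent:

```python
import zipfile
from pathlib import Path

def package_algorithm(entry_file: str, aux_files: list[str], out_path: str) -> str:
    """Bundle the main entry file and its auxiliary files (libraries,
    models, parameter files) into one compressed package for upload."""
    with zipfile.ZipFile(out_path, "w", zipfile.ZIP_DEFLATED) as zf:
        zf.write(entry_file, arcname=Path(entry_file).name)  # main entry file
        for aux in aux_files:                                # auxiliary files
            zf.write(aux, arcname=Path(aux).name)
    return out_path
```

The resulting archive is what the text calls the packaged algorithm file; the edge device only needs to unpack it and call the entry file.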
Step 102: receiving parameter configuration data for the algorithm to be deployed, completing the parameter configuration of the algorithm to be deployed, and obtaining a deployment algorithm that can be scheduled for use.
After the packaged algorithm file is received, the algorithm parameters of the algorithm to be deployed must be configured to ensure that the deployment algorithm can run normally on the edge computing device. In this implementation, the algorithm configuration file can be filled in according to a required format; once the parameter configuration of the algorithm to be deployed is complete, a deployment algorithm that can be scheduled for use is obtained.
The existing algorithms on current edge computing devices each target only a specific part of a target object; algorithm parameters, input data, and the form of returned results differ between algorithms, so the algorithm operating platform cannot be unified, operation is complex and difficult, and the algorithm maintenance cost of the edge device rises. In the present application, to overcome the problem that configuration parameters cannot be managed uniformly on the algorithm platform because algorithm input and output data take different forms, a mode of uniformly configuring algorithm input and output parameters is designed, realized through uniform parameter configuration data. Because the input and output parameters of every algorithm are configured in a uniform manner, the difficulty of managing and maintaining algorithms on the edge computing device can be greatly reduced.
According to the algorithm management method of the edge computing device, there is no coupling between the deployment algorithm and the edge computing device, and the algorithm form of a main entry file paired with a compressed package of auxiliary files can accommodate a wide variety of algorithms, improving the flexibility of algorithm deployment and reducing operational difficulty.
In a specific implementation, the parameter configuration data of the algorithm to be deployed may be implemented in the form of table 1.
Table 1. Algorithm input configuration table
[Table 1 is reproduced only as an image (Figure BDA0003207145660000051) in the original publication.]
As shown in Table 1, the parameter configuration data may include, but is not limited to, the following five categories:
1. Algorithm basic parameters (the algorithm basic information in Table 1): parameters by which the platform identifies the algorithm. The algorithm ID is the unique identifying attribute of each algorithm on the platform, and the computation level configures the interaction relationships between algorithms, which facilitates multi-level algorithm scheduling by the algorithm management end.
2. Algorithm identification parameters (the algorithm description in Table 1): descriptive parameters such as the algorithm version number and the algorithm's target component, configured according to the actual information of the algorithm.
3. Algorithm enable parameters (the algorithm control in Table 1): function switches for the algorithm on the platform, such as whether the algorithm is enabled and whether algorithm input data is recorded together with the algorithm output result, configured according to the platform's actual management requirements.
4. Algorithm input parameters: determine the form of the data message that the algorithm management end transmits to the algorithm, establishing a mapping between the platform input variables and the algorithm variables. The mapping includes the data name and the data type; the data type determines the number of bytes the data occupies in the binary UDP message.
5. Algorithm internal output parameters (the algorithm basic information in Table 1): describe the form of the algorithm result data message sent by the algorithm to the algorithm management end, including parameter names, offsets within the message, data types, and descriptions.
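The five categories above could be captured in a configuration structure along the following lines. Every field name here is an assumption made for illustration, since Table 1 itself is available only as an image:

```python
# Hypothetical parameter configuration mirroring the five categories in the
# text; field names and values are illustrative, not the patent's schema.
algorithm_config = {
    "basic":  {"algorithm_id": 7, "compute_level": 1},       # platform identity, scheduling level
    "ident":  {"version": "1.0.0", "target_component": "bearing"},
    "enable": {"enabled": True, "record_inputs": False},     # platform function switches
    "inputs": [  # mapping from platform variables to algorithm variables
        {"name": "temperature", "dtype": "float32", "length": 4},
        {"name": "vibration",   "dtype": "float32", "length": 4},
    ],
    "outputs": [  # layout of the result message returned by the algorithm
        {"name": "health_score", "offset": 0, "dtype": "float32",
         "description": "condition indicator in [0, 1]"},
    ],
}

def input_message_length(config: dict) -> int:
    """Total payload bytes per inference call, derived from the declared
    input lengths (the data type fixes each variable's byte count)."""
    return sum(v["length"] for v in config["inputs"])
```

Because the input and output declarations carry explicit byte lengths and offsets, the management platform can build and parse the binary UDP messages without any algorithm-specific code.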
In a specific implementation of the present application, in order to solve the problem of the high degree of coupling between the edge computing device and the algorithm, the edge computing function may be split into two ends, an algorithm layer and an algorithm management layer, each running in the edge computing device as an independent process. The overall architecture of the edge computing function in the edge computing device under this implementation is shown in Fig. 2. The algorithm end and the algorithm management end may interact via UDP (User Datagram Protocol) communication over an internal port of the edge computing device (an Ethernet port on the backplane), and the communication content may be, but is not limited to, a binary byte string.
The UDP port number of the algorithm management end is fixed, while each deployment algorithm binds its own UDP port, which avoids port collisions when multiple deployment algorithms run simultaneously. Once an algorithm's UDP port is bound, information messages can be delivered to the designated deployment algorithm by the corresponding port number, avoiding mis-delivery or duplicate delivery. For example, if message 1 needs to be sent to algorithm 1, it is sent only to the UDP port previously bound by algorithm 1 rather than to multiple UDP ports, and algorithm 2 never receives data from the port bound by algorithm 1, so port collisions are avoided. The interaction between the algorithm end and the algorithm management end is shown in Fig. 3.
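The port scheme described above can be sketched on the loopback interface: each algorithm process binds its own UDP port, and the management end routes messages purely by destination port, so algorithm 2 never sees algorithm 1's traffic. Ports here are OS-assigned (bind to port 0) only to keep the sketch runnable:

```python
import socket

# One management-end socket; each "algorithm" binds its own UDP port.
mgmt = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
mgmt.bind(("127.0.0.1", 0))

algo1 = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
algo1.bind(("127.0.0.1", 0))
algo1.settimeout(2.0)

algo2 = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
algo2.bind(("127.0.0.1", 0))
algo2.settimeout(0.2)

# "Message 1" is sent only to algorithm 1's bound port ...
mgmt.sendto(b"\x01message-1", algo1.getsockname())
received, _ = algo1.recvfrom(1024)

# ... so algorithm 2's receive simply times out: no collision, no duplicates.
try:
    algo2.recvfrom(1024)
    algo2_got_data = True
except socket.timeout:
    algo2_got_data = False

for s in (mgmt, algo1, algo2):
    s.close()
```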
The algorithm management method of the edge computing device unifies the form of interaction between the algorithm and the management platform, which can reduce the configuration and maintenance cost of edge computing algorithms and improve the reliability and stability of their operation.
Fig. 4 is a flowchart of another algorithm management method of an edge computing device according to an embodiment of the present invention, and referring to fig. 4, the algorithm management method of the edge computing device may include:
step 401: and receiving a packing algorithm file, wherein the packing algorithm file is obtained by packing at least a main entry file and an auxiliary file operated by the algorithm to be deployed.
Step 402: and receiving the parameter configuration data of the algorithm to be deployed, completing the parameter configuration work of the algorithm to be deployed, and obtaining the deployment algorithm which can be scheduled for use.
Step 403: and calling the deployment algorithm to carry out data reasoning to obtain a data reasoning result.
In this implementation, step 401 and step 402 belong to an algorithm deployment phase, and after the algorithm deployment is completed in step 401 and step 402, an algorithm scheduling phase may be entered. In the algorithm scheduling phase, the deployment algorithm can perform corresponding data processing.
The implementation process of invoking the deployment algorithm to perform data inference can be shown in fig. 5, where fig. 5 is a flow chart for implementing invoking the deployment algorithm to perform data inference disclosed in the embodiment of the present invention, and as shown in fig. 5, the invoking the deployment algorithm to perform data inference to obtain a data inference result may include:
step 501: and the algorithm management terminal receives an algorithm task and controls and starts the deployment algorithm, and the execution of the algorithm task is realized at least based on the deployment algorithm.
After receiving the algorithm task at the algorithm management end, the related deployment algorithm can be called to process data according to the task requirement. In the implementation, the algorithm management terminal needs to control the starting of the deployment algorithm at first, and after the starting and some preparation work are finished, the deployment algorithm can start data reasoning and other work.
Step 502: the algorithm management end sends the raw data to the running process of the deployment algorithm, so that the running process of the deployment algorithm performs data reasoning based on the raw data.
The raw data is the data required for the deployment algorithm's actual operation; it may be AD sensor sampling data (voltage, current, temperature, vibration) or data transmitted from other information sources (system operating parameters, picture data, etc.). For example, if the algorithm task is to average a certain measurement over one day, the corresponding raw data may be all measurements collected during that day.
Step 503: the algorithm management end receives and returns the data reasoning result sent by the running process of the deployment algorithm.
After the deployment algorithm performs data reasoning on the raw data, the running process of the deployment algorithm sends the resulting data reasoning result to the algorithm management end, which can then carry out follow-up processing: for example, uploading the result to an upper computer, or passing it to other deployment algorithms as their raw data. A deployment algorithm whose data reasoning result serves as the raw data of other algorithms is a low-level algorithm; one that takes the data reasoning results of other algorithms as its raw data is a high-level algorithm.
When data interaction exists among multi-level deployment algorithms in the edge computing device, the output of the low-level algorithm serves as the input of the high-level algorithm: the algorithm management end encapsulates the low-level algorithm's output data and transmits it to the high-level algorithm as input. The multi-level algorithm data management interaction flow is shown in Fig. 6; it is one specific application of the algorithm management end sending raw data to the running process of a deployment algorithm.
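The multi-level data flow can be imitated with plain functions standing in for the deployed processes; the averaging and trend logic below are invented solely for illustration:

```python
def low_level_algorithm(samples: list[float]) -> float:
    """Stand-in for a low-level deployment algorithm: reduce a window of
    raw measurements to a single value (here, their mean)."""
    return sum(samples) / len(samples)

def high_level_algorithm(daily_means: list[float]) -> str:
    """Stand-in for a high-level deployment algorithm: its 'raw data' is
    the sequence of low-level results."""
    return "rising" if daily_means[-1] > daily_means[0] else "stable"

def management_end(raw_windows: list[list[float]]) -> str:
    """The management end feeds raw data to the low-level algorithm, then
    wraps its outputs as the input of the high-level algorithm."""
    means = [low_level_algorithm(w) for w in raw_windows]  # low-level outputs
    return high_level_algorithm(means)                     # become high-level input
```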
In another implementation, before the algorithm management end sends the raw data to the running process of the deployment algorithm, the method may further include: the algorithm management end determines the bound port of the deployment algorithm, the bound port being used for data transmission between the running process of the deployment algorithm and the algorithm management end. The bound port may be a communication port conforming to the User Datagram Protocol, i.e., a UDP port. The algorithm management end's determination of the bound port may include, but is not limited to: the algorithm management end receiving an algorithm information message sent by the running process of the deployment algorithm, the message including the deployment algorithm's bound port and algorithm ID.
Further, after the algorithm management end determines the bound port of the deployment algorithm, the method may further include: the algorithm management end sends an information confirmation message to the running process of the deployment algorithm, so that, once its bound port has been confirmed by the algorithm management end, the running process of the deployment algorithm enters the next stage, namely waiting to receive the raw data sent by the algorithm management end.
In another implementation, in the algorithm scheduling stage, on the basis of the foregoing, the method may further include: and the algorithm management terminal sends a call termination message to the running process of the deployment algorithm, so that the running process of the deployment algorithm is automatically closed after receiving the call termination message, and the algorithm scheduling task is ended.
Based on the above, it can be understood that the algorithm scheduling link refers to the algorithm management end performing scheduling operations such as starting, calling, running, stopping and the like on the uploaded algorithm according to the algorithm configuration file. In order to better understand the implementation of the algorithm scheduling phase, the flow will be described in detail below with reference to fig. 7, where fig. 7 is a schematic diagram of an algorithm scheduling flow disclosed in an embodiment of the present invention, and as shown in fig. 7, the algorithm scheduling flow may include:
Step 1: the algorithm management end starts the algorithm by calling the main entry file of the algorithm;
Step 2: after startup, the algorithm automatically acquires its own operating information and sends an algorithm information message to the algorithm management end;
The operating information the algorithm acquires may include: the port number of a currently idle system port (a UDP port) automatically bound after startup, the algorithm's identification ID (user-defined), and the process number of the running algorithm (used by the operating system to monitor the state of the algorithm's process). This information is arranged into an algorithm information message in the form of Table 1 and sent to the algorithm management end.
Step 3: after receiving the algorithm information message, the algorithm management end replies with an information confirmation message to the algorithm end;
the algorithm can automatically bind the UDP ports which are randomly idle after running, thereby reducing the port management work of all algorithms during the configuration of the algorithm and avoiding the occurrence of repeated ports.
Step 4: the algorithm end enters the algorithm reasoning stage and waits for the input of algorithm data messages;
After the algorithm end completes confirmation of the port information, the process used for port-information interaction ends, and the two processes for data reception and data processing move from a waiting state into their working loop.
Step 5: the algorithm management end sends the algorithm data message corresponding to the algorithm to the UDP port bound by the algorithm;
the algorithm data packet may include a general packet header portion and data of a predetermined format that needs to be processed.
The algorithm management end transmits the raw data to the algorithm end in a fixed UDP message format: for each parameter required as algorithm input (the input variable parameters required for the algorithm's operation are completed on the algorithm parameter configuration interface of the algorithm management end), the variable is placed into the message according to its position among the algorithm input parameters, its length (the number of bytes occupied), and its value. A transmitted variable may be a numerical value, a character string, or a picture. The message format is shown in Table 2, where index is the variable's ID in the algorithm configuration and length is the number of binary bytes the variable occupies according to the algorithm configuration; the raw data is the data input by an external signal source or signal acquisition device.
Table 2. Algorithm input raw-data message format
[Table 2 is reproduced only as an image (Figure BDA0003207145660000091) in the original publication.]
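A hedged sketch of the Table 2 layout: each variable is emitted as index, length, then value. The field widths (two-byte little-endian index and length) are assumptions, because the table itself appears only as an image:

```python
import struct

def pack_input_message(variables) -> bytes:
    """Serialize (index, value) pairs into an index/length/value byte
    stream. Floats are packed as 4-byte float32; byte strings (e.g. a
    serialized picture) are passed through as-is."""
    msg = b""
    for index, value in variables:
        payload = struct.pack("<f", value) if isinstance(value, float) \
                  else bytes(value)
        # index and payload length as little-endian uint16 (assumed widths)
        msg += struct.pack("<HH", index, len(payload)) + payload
    return msg
```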
Step 6: the algorithm end receives the algorithm data message, parses the message data, and completes data reasoning to obtain a data reasoning result;
The algorithm end's reception of the algorithm data message may be called the data receiving process. The message type is identified from the function flag bit of the data message. If it is a data message, a corresponding reception timestamp is added and the message is passed on to the next process (the data reasoning process); within this step, data messages over a period of time may be buffered according to the needs of the data processing algorithm in the next process. If it is an abort message, the flow proceeds to step 9.
After the data reasoning process obtains the algorithm data message, it parses the data portion byte by byte, following the algorithm data message form in the algorithm configuration, until the end of the message; the parsed data is arranged into the data types required by the data reasoning algorithm or model, and the algorithm reasoning stage is entered to obtain a reasoning result.
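The receive-and-dispatch logic of step 6 might look like the following; the flag values and field widths are invented for illustration, since the real formats are given only in image-form tables:

```python
import struct

FLAG_DATA, FLAG_ABORT = 0x01, 0x09   # illustrative function flag bits

def handle_message(msg: bytes):
    """Dispatch on the function flag bit, then parse index/length/value
    records until the end of the payload (field widths are assumptions)."""
    flag, payload = msg[0], msg[1:]
    if flag == FLAG_ABORT:
        return "abort", []
    records, off = [], 0
    while off < len(payload):
        index, length = struct.unpack_from("<HH", payload, off)
        off += 4
        records.append((index, payload[off:off + length]))  # raw value bytes
        off += length
    return "data", records
```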
Step 7: the algorithm end arranges the data reasoning result into result-message form and returns it to the algorithm management end;
The data reasoning process arranges the reasoning results into an algorithm result message, according to the algorithm internal output form in the algorithm configuration file, and sends it to the algorithm management end.
Step 8: the algorithm management end sends a termination command message to the algorithm end;
Step 9: the algorithm end receives the termination command message, returns a termination confirmation message, and ends all processes;
After the algorithm end receives the termination message from the algorithm management end, there are two ways to terminate the algorithm: in the first, the data receiving process closes the data inference process and then sends a termination confirmation message to the algorithm management end; in the second, the data receiving process passes a termination signal to the data inference process, which closes itself, after which the data inference process sends the termination confirmation message to the algorithm management end.
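The two termination modes above can be illustrated with Python's multiprocessing; this is a structural sketch (the sentinel value None is an assumption), not the platform's implementation:

```python
import multiprocessing as mp

def inference_worker(task_queue):
    """Data inference process: runs until told to stop.

    In the second termination mode it closes itself when the data
    receiving process forwards a sentinel (here: None).
    """
    while True:
        item = task_queue.get()
        if item is None:      # termination signal from the receiving process
            break             # the inference process exits on its own
        # ... perform data inference on `item` here ...

def stop_mode_1(worker):
    """Mode 1: the data receiving process closes the inference process."""
    worker.terminate()
    worker.join()

def stop_mode_2(task_queue, worker):
    """Mode 2: forward a termination signal; the worker shuts itself down."""
    task_queue.put(None)
    worker.join()
```

In both modes the data receiving process then sends the termination confirmation message back to the algorithm management end.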
The messages in steps 1 to 9 are all communication messages based on the UDP protocol, each consisting of a header and a data section. The header format is fixed and carries information such as the source, type, length, and count of the message; the format of the data section is determined by the message function and by the actual content to be transmitted, as shown in Table 3. Distinguished by function, the messages exchanged between the algorithm management end and the algorithm end take 6 basic forms, each carrying a function flag that is fixed for the current platform, as shown in Table 4.
Table 3 Message header format
(The table itself is published as an image in the original document.)
TABLE 4 Basic message forms
Serial number | Message type | Communication direction
1 | Algorithm basic information | Algorithm -> Algorithm management terminal
2 | Algorithm basic information confirmation | Algorithm management terminal -> Algorithm
3 | Algorithm input data | Algorithm management terminal -> Algorithm
4 | Algorithm result | Algorithm -> Algorithm management terminal
5 | Algorithm termination notification | Algorithm management terminal -> Algorithm
6 | Algorithm termination result | Algorithm -> Algorithm management terminal
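The fixed header (source, type, length, count) plus a per-message function flag can be sketched as follows; the field widths and flag values are assumptions, since Tables 3 and 4 publish only the field names:

```python
import struct

# Function flags for the six basic message types of Table 4 (values assumed).
(ALGO_INFO, INFO_CONFIRM, INPUT_DATA,
 ALGO_RESULT, TERMINATE, TERMINATE_RESULT) = range(1, 7)

HEADER_FMT = ">BBIH"   # source, function flag (type), data length, count

def make_message(source, flag, count, payload):
    """Prepend the fixed-format header to a message's data section."""
    return struct.pack(HEADER_FMT, source, flag, len(payload), count) + payload

def parse_message(message):
    """Split a received message into header fields and its data section."""
    source, flag, length, count = struct.unpack_from(HEADER_FMT, message)
    payload = message[struct.calcsize(HEADER_FMT):]
    assert len(payload) == length, "truncated message"
    return source, flag, count, payload
```

The receiver dispatches on the function flag, e.g. caching INPUT_DATA messages and answering TERMINATE with a TERMINATE_RESULT message.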
Based on the above, the deployment algorithm may comprise at least 4 sub-processes while running, as follows:
Process 1: sends the algorithm information message, which may include, but is not limited to, a general header, algorithm port information, the algorithm ID, and the process number;
Process 2: receives the information confirmation message from the algorithm management end;
Process 3: receives input data messages from the algorithm management end and parses the data;
Process 4: receives the data parsed by process 3, completes data inference, and sends the result message to the algorithm management end;
Here, process 1 sends the algorithm information message in a loop until process 2 receives the information confirmation message. After receiving that confirmation, process 2 sends a stop message to process 1 through the inter-process message queue and start messages to processes 3 and 4, then exits; process 1 leaves its loop and terminates once it receives the stop message. Processes 3 and 4 enter the working state after receiving the start message: process 3 continuously receives data messages from the algorithm management end, packages them, and passes them to process 4 through the inter-process message queue; process 4 receives and parses the data messages, completes data inference, and returns the inference result to the algorithm management end. The flow of each process is shown in fig. 8. Depending on data processing requirements, process 4 may itself comprise multiple processes or threads.
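The handshake and data flow among the four sub-processes can be sketched as follows, with threads and in-memory queues standing in for the real processes and UDP messages (the message strings "confirm", "data:...", and "terminate" are illustrative assumptions):

```python
import queue
import threading

def run_algorithm_end(send_to_manager, from_manager):
    """Structural sketch of the four sub-processes of a deployment algorithm."""
    stop1 = queue.Queue()    # inter-process queue: process 2 -> process 1
    parsed = queue.Queue()   # inter-process queue: process 3 -> process 4

    def proc1():             # loop: announce until told to stop
        while stop1.empty():
            send_to_manager("algorithm info")   # ID, process number, port ...
        stop1.get()

    def proc2():             # wait for the information confirmation message
        while from_manager.get() != "confirm":
            pass
        stop1.put("stop")    # stop process 1, then exit

    def proc3():             # receive data messages, package, forward
        while True:
            msg = from_manager.get()
            if msg == "terminate":
                parsed.put(None)
                break
            if msg.startswith("data:"):
                parsed.put(msg[len("data:"):])

    def proc4():             # inference on forwarded data, return results
        while True:
            item = parsed.get()
            if item is None:
                break
            send_to_manager("result:" + item.upper())  # stand-in inference

    # Handshake phase: processes 1 and 2 run first and exit.
    t1, t2 = threading.Thread(target=proc1), threading.Thread(target=proc2)
    t1.start(); t2.start(); t2.join(); t1.join()
    # Working phase: processes 3 and 4 handle data until terminated.
    t3, t4 = threading.Thread(target=proc3), threading.Thread(target=proc4)
    t3.start(); t4.start()
    return t3, t4
```

The two-phase structure mirrors the text: the announcing pair stops once the management end confirms, and only then does the receive/infer pair take over the bound channel.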
In a specific implementation, taking a fault diagnosis algorithm based on edge sensor signals as an example, the whole algorithm management flow is divided into two stages: algorithm deployment and algorithm scheduling.
First, the algorithm management end is started to complete algorithm deployment, which comprises the following steps:
1. Package the main file, the data feature extraction code file, the algorithm internal parameter file, and the algorithm model file of the fault diagnosis algorithm into a compressed file;
2. Upload the compressed package to the algorithm management end;
3. Fill in the algorithm input parameter configuration at the algorithm management end: determine the algorithm's basic information, version number, label, and type; determine the list of sensor signals the algorithm requires, with their data types and lengths; and finally determine the position, form, and length of each variable in the algorithm's output result.
Then the algorithm scheduling stage is entered to start the algorithm, comprising the following steps:
1. Start the algorithm main file at the algorithm management end;
2. After the algorithm starts, 4 sub-processes are invoked inside it: process 1 automatically binds an idle UDP port, encapsulates the algorithm ID, the algorithm process number, and the port number of the bound UDP port into a message carrying the algorithm information function flag, and sends it to the algorithm management end, while process 2 keeps waiting for the port confirmation message from the algorithm management end;
3. After receiving the algorithm port information message, the algorithm management end binds the algorithm ID to the UDP port number and sends a message carrying the port confirmation flag to that port;
4. After process 2 receives the port confirmation message, it sends a confirmation signal to process 1, starts processes 3 and 4, and then finishes; once process 1 receives the confirmation signal, it stops cyclically sending the algorithm information message and ends. After starting, process 3 waits for the algorithm management end to send data messages; after starting, process 4 passes its own process number to process 3 (the sub-process numbers of a multi-process algorithm are used to monitor the algorithm's operation and to end the algorithm), loads the data processing algorithm and the model file, and waits for process 3 to transmit data;
5. The algorithm management end receives sensor data transmitted from outside, parses out the sensor signal data the algorithm requires, encapsulates it into messages carrying the data function flag (which identifies the type of data the current message carries), and continuously sends them to the port bound by the algorithm;
6. Process 3 receives the UDP messages sent by the algorithm management end on the UDP port; each message identified as a data message is cached in memory. In this algorithm, the message data received within every 5 s is cached and then passed on to the data processing process;
7. Process 4 receives the 5 s of cached data passed by process 3, parses the data in the messages according to the configuration file, arranges it into the form required by the data processing algorithm, enters the data processing step, and calls the existing model file to obtain a data inference result; the result is then arranged and encapsulated, according to the configuration file, into a message carrying the result flag and sent to the algorithm management end;
8. While process 4 is processing the data inference result, process 3 keeps receiving and caching data from the algorithm management end;
9. To stop the algorithm, the algorithm management end sends a message carrying the termination command to the algorithm end;
10. Process 3 at the algorithm end receives the message carrying the termination command, parses out the termination signal, and immediately ends the data processing process 4 by closing it; it then generates a message carrying the termination confirmation flag, in the output format defined in the algorithm configuration file, sends it to the algorithm management end, and finally ends itself.
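The 5 s caching behaviour of process 3 (steps 6–8 above) can be sketched as a time-window buffer; the injectable clock is a testing convenience, not part of the described platform:

```python
import time

class WindowBuffer:
    """Cache incoming data messages and flush them as one batch per window.

    Mirrors process 3's behaviour of caching 5 s of messages before
    handing them to the data processing process.
    """

    def __init__(self, window=5.0, clock=time.monotonic):
        self.window = window
        self.clock = clock
        self.start = clock()
        self.messages = []

    def add(self, message):
        """Buffer one message; return the cached batch once the window elapses."""
        self.messages.append(message)
        if self.clock() - self.start >= self.window:
            batch, self.messages = self.messages, []
            self.start = self.clock()
            return batch        # pass this batch to the processing process
        return None
```

Process 3 would call `add` for every data message it receives and forward each non-None batch to process 4, so inference and reception overlap as described in step 8.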
This completes the deployment and management of one independent algorithm; for multiple independent or related algorithms, deployment and management can be completed by similar steps.
According to the algorithm management method of the edge computing device, the algorithm and the edge computing platform are decoupled. An algorithm form in which a main entry (main) file is paired with a compressed package of model and library files is designed, so the method can be adapted to a variety of algorithms. Specifically, the algorithm management end does not need to know the implementation language or logic of a deployed algorithm, nor the library files it calls, and the whole communication interaction follows a fixed format. For example, a fast Fourier transform algorithm may be implemented in Python or C++, and an implementation in either language can be used in this system. Likewise, video processing requires a deep learning algorithm library, and different detection targets may call different deep learning libraries; the algorithm management end does not need to care which library the algorithm calls, since it interacts with the algorithm only through the algorithm's main entry file.
In this implementation, unifying the form of the algorithm configuration parameters and the form of the information exchanged between the algorithm end and the algorithm management end overcomes the influence that differing algorithm input and output forms would otherwise have on algorithm management, reduces the cost of configuring, managing, and maintaining edge computing algorithms, and improves the reliability and stability of their operation. Meanwhile, standardizing the multi-process interaction inside the algorithm lets the algorithm's input and output match the algorithm management end, so data can be processed effectively and the management end's scheduling commands can be answered.
While, for purposes of simplicity of explanation, the foregoing method embodiments have been described as a series of acts or combination of acts, it will be appreciated by those skilled in the art that the present invention is not limited by the illustrated ordering of acts, as some steps may occur in other orders or concurrently with other steps in accordance with the invention. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required by the invention.
The method is described in detail in the embodiments disclosed above. Since the method of the present invention can be implemented by apparatuses of various types, the present invention also discloses an apparatus, described below through specific embodiments.
Fig. 9 is a schematic structural diagram of an algorithm management apparatus of an edge computing device according to an embodiment of the present invention, and the apparatus shown in fig. 9 is applied to the edge computing device. Referring to fig. 9, the algorithm management device 90 of the edge computing apparatus may include:
and the algorithm receiving module is used for receiving a packing algorithm file, and the packing algorithm file is obtained by packing at least a main entry file and an auxiliary file which are operated by the algorithm to be deployed.
And the parameter configuration module 902 is configured to receive the parameter configuration data of the algorithm to be deployed, complete the parameter configuration work of the algorithm to be deployed, and obtain a deployment algorithm that can be scheduled for use.
The algorithm management device of the edge computing equipment has no coupling relation between the deployment algorithm and the edge computing equipment, adopts the algorithm form that the main entry file is matched with the auxiliary file compression package form, can adapt to various algorithms, increases the flexibility of algorithm deployment and reduces the operation difficulty.
For specific implementation and other possible extended implementations of each module in the algorithm management apparatus of the edge computing device, reference may be made to content descriptions of corresponding parts in the method embodiment, and details are not repeated here.
In any of the above embodiments, the algorithm management device of the edge computing device includes a processor and a memory; the algorithm receiving module, the parameter configuration module, and the like described above are all stored in the memory as program modules, and the processor executes these program modules stored in the memory to implement the corresponding functions.
The processor comprises a kernel, and the kernel calls the corresponding program module from the memory. One or more kernels may be provided, and processing of the relevant data is implemented by adjusting kernel parameters.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or nonvolatile memory, such as read-only memory (ROM) or flash memory (flash RAM); the memory includes at least one memory chip.
An embodiment of the present invention provides a storage medium on which a program is stored, where the program, when executed by a processor, implements the algorithm management method of the edge computing device described in the above embodiments.
An embodiment of the present invention provides a processor, where the processor is configured to execute a program, where the program executes the algorithm management method of the edge computing device in the foregoing embodiment when running.
Further, the present embodiment provides an electronic device, which includes a processor and a memory. Wherein the memory is used for storing executable instructions of the processor, and the processor is configured to execute the algorithm management method of the edge computing device described in the above embodiments via executing the executable instructions.
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. The device disclosed by the embodiment corresponds to the method disclosed by the embodiment, so that the description is simple, and the relevant points can be referred to the method part for description.
It is further noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in Random Access Memory (RAM), memory, Read Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. An algorithm management method of an edge computing device, applied to the edge computing device, characterized by comprising:
receiving a packaged algorithm file, wherein the packaged algorithm file is obtained by packaging at least a main entry file and auxiliary files required for running the algorithm to be deployed;
and receiving parameter configuration data of the algorithm to be deployed and completing the parameter configuration of the algorithm to be deployed, to obtain a deployment algorithm that can be scheduled for use.
2. The algorithm management method of the edge computing device according to claim 1, wherein the parameter configuration data includes an algorithm basic parameter, an algorithm identification parameter, an algorithm enable parameter, an algorithm input parameter, and an algorithm internal output parameter.
3. The algorithm management method of the edge computing device according to claim 1, wherein after the receiving of the parameter configuration data of the algorithm to be deployed and the completion of the parameter configuration of the algorithm to be deployed, the method further comprises:
calling the deployment algorithm to perform data inference to obtain a data inference result.
4. The algorithm management method of the edge computing device according to claim 3, wherein the calling of the deployment algorithm to perform data inference to obtain a data inference result comprises:
the algorithm management end receives an algorithm task and controls starting of the deployment algorithm, wherein execution of the algorithm task is realized at least based on the deployment algorithm;
the algorithm management end sends the original data to the running process of the deployment algorithm, so that the running process of the deployment algorithm performs data inference based on the original data;
and the algorithm management end receives and returns the data inference result sent by the running process of the deployment algorithm.
5. The method for managing the algorithm of the edge computing device according to claim 4, wherein before the algorithm management end sends the original data to the running process of the deployment algorithm, the method further comprises:
and the algorithm management terminal determines a binding port of the deployment algorithm, and the binding port is used for data transmission between the running process of the deployment algorithm and the algorithm management terminal.
6. The algorithm management method of the edge computing device according to claim 5, wherein the determining, by the algorithm management side, the bound port of the deployment algorithm includes:
and the algorithm management terminal receives an algorithm information message sent by the running process of the deployment algorithm, wherein the algorithm information message comprises a binding port and an algorithm ID of the deployment algorithm.
7. The algorithm management method of the edge computing device according to claim 5, wherein after the algorithm management side determines the binding port of the deployment algorithm, the method further comprises:
and the algorithm management terminal sends an information confirmation message to the running process of the deployment algorithm.
8. The algorithm management method of the edge computing device according to claim 5, wherein the binding port is a communication port conforming to a user datagram protocol.
9. The algorithm management method of the edge computing device according to any one of claims 3 to 8, further comprising:
and the algorithm management terminal sends a calling termination message to the running process of the deployment algorithm, so that the running process of the deployment algorithm is automatically closed after receiving the calling termination message.
10. An algorithm management device of an edge computing device, applied to the edge computing device, comprising:
the algorithm receiving module is configured to receive a packaged algorithm file, the packaged algorithm file being obtained by packaging at least a main entry file and auxiliary files required for running the algorithm to be deployed;
and the parameter configuration module is configured to receive parameter configuration data of the algorithm to be deployed and complete the parameter configuration of the algorithm to be deployed, to obtain a deployment algorithm that can be scheduled for use.
CN202110920263.4A 2021-08-11 2021-08-11 Algorithm management method and device of edge computing equipment Pending CN113590336A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110920263.4A CN113590336A (en) 2021-08-11 2021-08-11 Algorithm management method and device of edge computing equipment

Publications (1)

Publication Number Publication Date
CN113590336A true CN113590336A (en) 2021-11-02

Family

ID=78257262

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110920263.4A Pending CN113590336A (en) 2021-08-11 2021-08-11 Algorithm management method and device of edge computing equipment

Country Status (1)

Country Link
CN (1) CN113590336A (en)


Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090144730A1 (en) * 2007-11-30 2009-06-04 Huawei Technologies Co., Ltd. Software deployment method and system, software deployment server and user server
US20110248995A1 (en) * 2010-04-09 2011-10-13 Fuji Xerox Co., Ltd. System and methods for creating interactive virtual content based on machine analysis of freeform physical markup
US20190123959A1 (en) * 2017-10-24 2019-04-25 Honeywell International Inc. Systems and methods for adaptive industrial internet of things (iiot) edge platform
US20190222652A1 (en) * 2019-03-28 2019-07-18 Intel Corporation Sensor network configuration mechanisms
US20190340521A1 (en) * 2016-12-27 2019-11-07 Huawei Technologies Co., Ltd. Intelligent Recommendation Method and Terminal
CN112464672A (en) * 2020-11-25 2021-03-09 重庆邮电大学 Optimization method for building semantic model in Internet of things edge equipment
CN112671582A (en) * 2020-12-25 2021-04-16 苏州浪潮智能科技有限公司 Artificial intelligence reasoning method and system based on edge reasoning cluster
CN112910723A (en) * 2021-01-15 2021-06-04 广州穗能通能源科技有限责任公司 Edge terminal management method, device, equipment and storage medium


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114708643A (en) * 2022-06-02 2022-07-05 杭州智诺科技股份有限公司 Computing power improving method for edge video analysis device and edge video analysis device
CN114708643B (en) * 2022-06-02 2022-09-13 杭州智诺科技股份有限公司 Computing power improving method for edge video analysis device and edge video analysis device


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination