CN113297590A - Artificial intelligence algorithm source code transplanting method and system

Artificial intelligence algorithm source code transplanting method and system

Info

Publication number
CN113297590A
Authority
CN
China
Prior art keywords
algorithm
file
module
artificial intelligence
source code
Prior art date
Legal status
Pending
Application number
CN202110464539.2A
Other languages
Chinese (zh)
Inventor
宁琨
莫堃
曾一鸣
徐娜
张坤
张权耀
Current Assignee
Dongfang Electric Wind Power Co Ltd
Original Assignee
Dongfang Electric Wind Power Co Ltd
Priority date
Filing date
Publication date
Application filed by Dongfang Electric Wind Power Co Ltd filed Critical Dongfang Electric Wind Power Co Ltd
Priority to CN202110464539.2A
Publication of CN113297590A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00 - Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60 - Protecting data
    • G06F21/602 - Providing cryptographic facilities or services
    • G06F21/62 - Protecting access to data via a platform, e.g. using keys or access control rules
    • G06F21/6209 - Protecting access to data via a platform, e.g. using keys or access control rules, to a single file or object, e.g. in a secure envelope, encrypted and accessed using a key, or with access control rules appended to the object itself
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F9/00 - Arrangements for program control, e.g. control units
    • G06F9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 - Multiprogramming arrangements
    • G06F9/48 - Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806 - Task transfer initiation or dispatching
    • G06F9/4812 - Task transfer initiation or dispatching by interrupt, e.g. masked
    • G06F9/4831 - Task transfer initiation or dispatching by interrupt, e.g. masked, with variable priority

Abstract

The invention discloses a method and system for transplanting artificial intelligence algorithm source code. The method comprises the following steps: S1, encrypting and packaging the algorithm source code file and its dependency files to generate an encrypted algorithm data packet; S2, storing the encrypted algorithm data packet and building a dedicated virtual operating environment in which the data packet is decrypted, run and trained; S3, classifying, encrypting and storing the running and training results of the algorithm file and transmitting them to the user-side device. The invention addresses the difficulty, in the prior art, of encrypting an artificial intelligence algorithm file and transplanting it to the user side for use.

Description

Artificial intelligence algorithm source code transplanting method and system
Technical Field
The invention relates to the technical field of artificial intelligence, and in particular to a method and system for transplanting the source code of an artificial intelligence algorithm.
Background
With the rapid development of artificial intelligence technology, its reach is growing ever wider and it is now applied across many industries. As the economic and social benefits it brings continue to increase, the security, confidentiality, stability and ease of application of artificial intelligence algorithms are also widely regarded as the core of the technology.
The security, confidentiality, stable operation and convenient application of artificial intelligence algorithms play an important role in popularizing the technology. At present, despite the wide use of artificial intelligence algorithms in engineering projects, the security and confidentiality of their source code are rarely addressed in such applications, and few concrete protective measures or methods are provided for them. Likewise, engineering applications lack a system that treats the artificial intelligence algorithm source code as a whole and manages its online operation and training. As a result, many excellent artificial intelligence algorithms remain at the laboratory stage and are difficult to put into practical use. A system that protects the source code of an artificial intelligence algorithm with encryption and that runs and trains it is therefore necessary.
With the rapid development of the artificial intelligence industry, the number of algorithms under development or already developed keeps growing, as does the demand for algorithms that can actually be deployed in the field. A system that encrypts algorithm source code and supports online operation and training of the algorithms is therefore of great significance for converting this growing body of artificial intelligence algorithms into substantial economic and social benefits.
This is especially true in the electric power field, and wind power in particular: the algorithms run at production sites where many personnel can access the equipment host, so confidentiality is weak, and grid network-security requirements mean there is no network connection and the system runs offline. Because data at a production site is not secure, the algorithm source code needs encryption protection.
How to encrypt an artificial intelligence algorithm file and transplant it to the user side for use has therefore become an urgent technical problem.
Disclosure of Invention
To overcome the shortcomings of the prior art, the invention provides a method and system for transplanting artificial intelligence algorithm source code, which solve the problem that, in the prior art, it is inconvenient to encrypt an artificial intelligence algorithm file and transplant it to the user side for use.
The technical solution adopted by the invention to solve this problem is as follows:
An artificial intelligence algorithm source code transplanting method comprises the following steps:
S1, encrypting and packaging the algorithm source code file and its dependency files to generate an encrypted algorithm data packet;
S2, storing the encrypted algorithm data packet, and building a dedicated virtual operating environment in which the data packet is decrypted, run and trained;
S3, classifying, encrypting and storing the running and training results of the algorithm file, and transmitting the results to the user-side device.
Because the algorithm data packet is generated by encryption and packaging, and a dedicated virtual operating environment is built for decrypting, running and training it, the artificial intelligence algorithm file is decrypted, run and trained only inside an isolated, dedicated virtual operating environment, so confidentiality is preserved throughout transplantation. Field algorithm results are used in many application scenarios, and the running and training results of the algorithm files must be supplied to different task flows or device controls through a variety of output formats and interfaces; classifying, encrypting and storing these results and transmitting them to the user-side device screens the output configuration of each algorithm file and manages the output results by category, which ensures that the algorithm file runs successfully on the user-side device.
As a preferred technical solution, the method further comprises the following steps:
S4, performing time-sharing task management and control on each algorithm file, and scheduling in real time according to task priority, required memory and CPU resource occupation.
Because of data-timeliness requirements, running time cannot simply be divided evenly among the algorithm files. When several algorithm files run at the same time, each is placed under time-sharing task management and control, and real-time scheduling according to task priority, required memory and CPU resource occupation guarantees that the data-timeliness requirements are met.
As a preferred technical solution, the method further comprises the following steps:
S5, when the algorithm file is run and trained, exchanging data between the algorithm file and the outside asynchronously through a message queue plus cache.
Because the data-source flow is unstable and the data-volume pressure fluctuates sharply, the algorithm file exchanges data with the outside asynchronously through a message queue plus cache. Short-term data is buffered and diverted without consuming extra system hardware resources or adding cost, which reduces the impact of short bursts of large data volumes on the system database.
As a preferred technical solution, the method further comprises the following steps:
S6, compressing and packaging the complete set of dependency files of the algorithm, and completing one-click installation and configuration on the user-side device.
In this way the algorithm file can be transplanted quickly and conveniently to a new operating environment, improving working efficiency.
As a preferred technical solution, the step S1 includes the following steps:
S11, obtaining an algorithm import interface and importing the algorithm file;
S12, labelling and distinguishing the version information of the imported algorithm file;
S13, setting the algorithm interface and parameter thresholds of the imported algorithm file;
S14, importing the independent algorithm data packet and independent dependency files required for running the algorithm;
S15, setting the volume of data required for running the algorithm;
S16, after the algorithm settings are complete, encrypting and packaging the algorithm file to generate an encrypted algorithm file and a corresponding encrypted configuration information file.
This realizes the confidentiality of the algorithm file during transplantation in finer detail and further ensures that the algorithm file runs successfully on the user-side device.
An artificial intelligence algorithm source code transplanting system comprises an algorithm encryption module, a resource management module and an output processing module;
the algorithm encryption module is used for encrypting and packaging the algorithm source code file and its dependency files to generate an encrypted algorithm data packet;
the resource management module is used for storing the encrypted algorithm data packet and building a dedicated virtual operating environment in which the data packet is decrypted, run and trained;
the output processing module is used for classifying, encrypting and storing the running and training results of the algorithm file and transmitting them to the user-side device.
Because the algorithm data packet is generated by encryption and packaging, and a dedicated virtual operating environment is built for decrypting, running and training it, the artificial intelligence algorithm file is decrypted, run and trained only inside an isolated, dedicated virtual operating environment, so confidentiality is preserved throughout transplantation. Field algorithm results are used in many application scenarios, and the running and training results of the algorithm files must be supplied to different task flows or device controls through a variety of output formats and interfaces; classifying, encrypting and storing these results and transmitting them to the user-side device screens the output configuration of each algorithm file and manages the output results by category, which ensures that the algorithm file runs successfully on the user-side device.
As a preferred technical solution, the system further comprises a task scheduling module, which is used for performing time-sharing task management and control on each algorithm file and for scheduling in real time according to task priority, required memory and CPU resource occupation.
Because of data-timeliness requirements, running time cannot simply be divided evenly among the algorithm files. When several algorithm files run at the same time, each is placed under time-sharing task management and control, and real-time scheduling according to task priority, required memory and CPU resource occupation guarantees that the data-timeliness requirements are met.
As a preferred technical solution, the system further comprises an asynchronous data module, which is used for performing asynchronous data interaction between the algorithm file and the outside through a message queue plus cache while the algorithm file is run and trained.
Because the data-source flow is unstable and the data-volume pressure fluctuates sharply, the algorithm file exchanges data with the outside asynchronously through a message queue plus cache. Short-term data is buffered and diverted without consuming extra system hardware resources or adding cost, which reduces the impact of short bursts of large data volumes on the system database.
As a preferred technical solution, the system further comprises a rapid deployment module, which is used for compressing and packaging the complete set of dependency files of the algorithm and completing one-click installation and configuration on the user-side device.
In this way the algorithm file can be transplanted quickly and conveniently to a new operating environment, improving working efficiency.
As a preferred technical solution, the algorithm encryption module comprises:
an algorithm file import module, which is used for obtaining an algorithm import interface and importing the algorithm file;
a version information distinguishing module, which is used for labelling and distinguishing the version information of the imported algorithm file;
an interface and parameter threshold setting module, which is used for setting the algorithm interface and parameter thresholds of the imported algorithm file;
an independent data import module, which is used for importing the independent algorithm data packet and independent dependency files required for running the algorithm;
a data volume setting module, which is used for setting the volume of data required for running the algorithm;
a file generation module, which is used for encrypting and packaging the algorithm file after the algorithm settings are complete, to generate an encrypted algorithm file and a corresponding encrypted configuration information file.
This realizes the confidentiality of the algorithm file during transplantation in finer detail and further ensures that the algorithm file runs successfully on the user-side device.
Compared with the prior art, the invention has the following beneficial effects:
(1) Because the algorithm data packet is generated by encryption and packaging, and a dedicated virtual operating environment is built for decrypting, running and training it, the artificial intelligence algorithm file is decrypted, run and trained only inside an isolated, dedicated virtual operating environment, so confidentiality is preserved throughout transplantation; by classifying, encrypting and storing the running and training results and transmitting them to the user-side device, the output configuration of each algorithm file is screened, the output results are managed by category, and the algorithm file is ensured to run successfully on the user-side device;
(2) each algorithm file is placed under time-sharing task management and control, and real-time scheduling according to task priority, required memory and CPU resource occupation guarantees that data-timeliness requirements are met;
(3) the algorithm file exchanges data with the outside asynchronously through a message queue plus cache, so that short-term data is buffered and diverted without consuming extra system hardware resources or adding cost, reducing the impact of short bursts of large data volumes on the system database;
(4) the complete set of dependency files of the algorithm is compressed and packaged, and one-click installation and configuration is completed on the user-side device, so that the algorithm file can be transplanted quickly and conveniently to a new operating environment and working efficiency is improved;
(5) the confidentiality of the algorithm file during transplantation is realized in finer detail, and the algorithm file is ensured to run successfully on the user-side device;
(6) compressing and packaging the complete set of dependency files of the algorithm and completing one-click installation and configuration on the user-side device allows the algorithm file to be transplanted quickly and conveniently to a new operating environment, improving working efficiency.
Drawings
FIG. 1 is a block diagram of the transplanting system of the present invention;
FIG. 2 is a block diagram of an embodiment of the method for generating the encrypted algorithm data packet according to the present invention;
FIG. 3 is a schematic flow chart of running and training an algorithm file according to the present invention.
Detailed Description
The present invention will be described in further detail with reference to examples and drawings, but the present invention is not limited to these examples.
Example 1
As shown in FIGS. 1 to 3, an artificial intelligence algorithm source code transplanting method comprises the following steps:
S1, encrypting and packaging the algorithm source code file and its dependency files to generate an encrypted algorithm data packet;
S2, storing the encrypted algorithm data packet, and building a dedicated virtual operating environment in which the data packet is decrypted, run and trained;
S3, classifying, encrypting and storing the running and training results of the algorithm file, and transmitting the results to the user-side device.
Because the algorithm data packet is generated by encryption and packaging, and a dedicated virtual operating environment is built for decrypting, running and training it, the artificial intelligence algorithm file is decrypted, run and trained only inside an isolated, dedicated virtual operating environment, so confidentiality is preserved throughout transplantation. Field algorithm results are used in many application scenarios, and the running and training results of the algorithm files must be supplied to different task flows or device controls through a variety of output formats and interfaces; classifying, encrypting and storing these results and transmitting them to the user-side device screens the output configuration of each algorithm file and manages the output results by category, which ensures that the algorithm file runs successfully on the user-side device.
As a preferred technical solution, the method further comprises the following steps:
S4, performing time-sharing task management and control on each algorithm file, and scheduling in real time according to task priority, required memory and CPU resource occupation.
Because of data-timeliness requirements, running time cannot simply be divided evenly among the algorithm files. When several algorithm files run at the same time, each is placed under time-sharing task management and control, and real-time scheduling according to task priority, required memory and CPU resource occupation guarantees that the data-timeliness requirements are met.
As a preferred technical solution, the method further comprises the following steps:
S5, when the algorithm file is run and trained, exchanging data between the algorithm file and the outside asynchronously through a message queue plus cache.
Because the data-source flow is unstable and the data-volume pressure fluctuates sharply, the algorithm file exchanges data with the outside asynchronously through a message queue plus cache. Short-term data is buffered and diverted without consuming extra system hardware resources or adding cost, which reduces the impact of short bursts of large data volumes on the system database.
As a preferred technical solution, the method further comprises the following steps:
S6, compressing and packaging the complete set of dependency files of the algorithm, and completing one-click installation and configuration on the user-side device.
In this way the algorithm file can be transplanted quickly and conveniently to a new operating environment, improving working efficiency.
As a preferred technical solution, the step S1 includes the following steps:
S11, obtaining an algorithm import interface and importing the algorithm file;
S12, labelling and distinguishing the version information of the imported algorithm file;
S13, setting the algorithm interface and parameter thresholds of the imported algorithm file;
S14, importing the independent algorithm data packet and independent dependency files required for running the algorithm;
S15, setting the volume of data required for running the algorithm;
S16, after the algorithm settings are complete, encrypting and packaging the algorithm file to generate an encrypted algorithm file and a corresponding encrypted configuration information file.
This realizes the confidentiality of the algorithm file during transplantation in finer detail and further ensures that the algorithm file runs successfully on the user-side device.
Example 2
As shown in FIGS. 1 to 3, an artificial intelligence algorithm source code transplanting system comprises an algorithm encryption module, a resource management module and an output processing module;
the algorithm encryption module is used for encrypting and packaging the algorithm source code file and its dependency files to generate an encrypted algorithm data packet;
the resource management module is used for storing the encrypted algorithm data packet and building a dedicated virtual operating environment in which the data packet is decrypted, run and trained;
the output processing module is used for classifying, encrypting and storing the running and training results of the algorithm file and transmitting them to the user-side device.
Because the algorithm data packet is generated by encryption and packaging, and a dedicated virtual operating environment is built for decrypting, running and training it, the artificial intelligence algorithm file is decrypted, run and trained only inside an isolated, dedicated virtual operating environment, so confidentiality is preserved throughout transplantation. Field algorithm results are used in many application scenarios, and the running and training results of the algorithm files must be supplied to different task flows or device controls through a variety of output formats and interfaces; classifying, encrypting and storing these results and transmitting them to the user-side device screens the output configuration of each algorithm file and manages the output results by category, which ensures that the algorithm file runs successfully on the user-side device.
As a preferred technical solution, the system further comprises a task scheduling module, which is used for performing time-sharing task management and control on each algorithm file and for scheduling in real time according to task priority, required memory and CPU resource occupation.
Because of data-timeliness requirements, running time cannot simply be divided evenly among the algorithm files. When several algorithm files run at the same time, each is placed under time-sharing task management and control, and real-time scheduling according to task priority, required memory and CPU resource occupation guarantees that the data-timeliness requirements are met.
As a preferred technical solution, the system further comprises an asynchronous data module, which is used for performing asynchronous data interaction between the algorithm file and the outside through a message queue plus cache while the algorithm file is run and trained.
Because the data-source flow is unstable and the data-volume pressure fluctuates sharply, the algorithm file exchanges data with the outside asynchronously through a message queue plus cache. Short-term data is buffered and diverted without consuming extra system hardware resources or adding cost, which reduces the impact of short bursts of large data volumes on the system database.
As a preferred technical solution, the system further comprises a rapid deployment module, which is used for compressing and packaging the complete set of dependency files of the algorithm and completing one-click installation and configuration on the user-side device.
In this way the algorithm file can be transplanted quickly and conveniently to a new operating environment, improving working efficiency.
As a preferred technical solution, the algorithm encryption module comprises:
an algorithm file import module, which is used for obtaining an algorithm import interface and importing the algorithm file;
a version information distinguishing module, which is used for labelling and distinguishing the version information of the imported algorithm file;
an interface and parameter threshold setting module, which is used for setting the algorithm interface and parameter thresholds of the imported algorithm file;
an independent data import module, which is used for importing the independent algorithm data packet and independent dependency files required for running the algorithm;
a data volume setting module, which is used for setting the volume of data required for running the algorithm;
a file generation module, which is used for encrypting and packaging the algorithm file after the algorithm settings are complete, to generate an encrypted algorithm file and a corresponding encrypted configuration information file.
This realizes the confidentiality of the algorithm file during transplantation in finer detail and further ensures that the algorithm file runs successfully on the user-side device.
Example 3
As shown in FIGS. 1 to 3, this embodiment, as a further optimization of Embodiments 1 and 2, includes all the technical features of Embodiments 1 and 2 and, in addition, the following technical features:
preferably, the configuration information file in S16 is mainly used to record the current operation history settings in the above steps S12, S13, S14, and S15, and the configuration information is loaded by automatic identification and manual identification of the configuration file. Meanwhile, the one-key zero clearing function is matched, so that the same algorithm file can be more conveniently encrypted and packaged. Further, the one-key zero clearing function is mainly used for clearing the existing configuration information.
Preferably, the resource management module decrypts and identifies the stored encrypted algorithm packet through a dedicated virtual machine, and provides a dedicated operating environment for training and running the algorithm file.
Preferably, the dedicated operating environment is an independent operating system inside the algorithm online operation and training system.
Preferably, when the algorithm file is decrypted and identified, part of the configuration information file can be decrypted, identified and displayed so that an algorithm platform operator can perform online maintenance, and the independent algorithm packet and independent dependency files can be imported again.
Preferably, the system has an online maintenance function: the thresholds of the parameters displayed from the configuration information file can be modified, while non-displayable information remains hidden and is decrypted and used only inside the dedicated virtual machine.
Preferably, the asynchronous data module exchanges data between the dedicated virtual machine and the outside through the algorithm interface of the algorithm file, thereby handling both the input data of the algorithm file and the output of the online operation result data, for example in the message-queue-plus-cache manner sketched below.
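A minimal Python sketch of the message-queue-plus-cache interaction described earlier follows. The in-process queue, batch size and flush interval are illustrative assumptions, not part of the patent.

```python
# Minimal sketch of "message queue plus cache": buffer bursts before they reach the database.
import queue
import threading
import time

message_queue = queue.Queue(maxsize=10000)   # in-memory message queue between the algorithm interface and storage

def produce(sample):
    """Called through the algorithm interface whenever new field data or a result arrives."""
    message_queue.put(sample)

def consume(write_batch, batch_size=500, flush_seconds=1.0):
    """Drain the queue into a local cache and write to the database in batches,
    so short bursts of data do not hit the system database directly."""
    cache = []
    last_flush = time.monotonic()
    while True:
        try:
            cache.append(message_queue.get(timeout=flush_seconds))
        except queue.Empty:
            pass
        overdue = cache and time.monotonic() - last_flush >= flush_seconds
        if len(cache) >= batch_size or overdue:
            write_batch(list(cache))   # e.g. one bulk insert instead of many small writes
            cache.clear()
            last_flush = time.monotonic()

# Example wiring (hypothetical sink function):
# threading.Thread(target=consume, args=(bulk_insert_into_database,), daemon=True).start()
```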
Preferably, the output processing module binds the running result of a training algorithm file to the training or online-operation algorithm file that requires that training result, and stores the training result file in encrypted form. The association is established by methods such as keyword association, a designated storage location, or serial or parallel running of chained algorithm files.
Preferably, the encrypted storage keeps the training result together with the bound algorithm file, and it can be decrypted and loaded in the dedicated virtual machine, for example as sketched below.
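A small sketch of binding an encrypted training result to the algorithm file that needs it, assuming a keyword-plus-designated-storage-location scheme; the directory layout and names are hypothetical.

```python
# Hedged illustration of result binding; the keyword scheme and storage layout are assumptions.
from pathlib import Path

RESULTS_ROOT = Path("encrypted_results")   # hypothetical designated storage location

def bind_training_result(algorithm_keyword, encrypted_result: bytes):
    """Store the encrypted training result under the keyword of the algorithm file
    that needs it, so the dedicated virtual machine can locate and load it later."""
    target_dir = RESULTS_ROOT / algorithm_keyword
    target_dir.mkdir(parents=True, exist_ok=True)
    path = target_dir / "training_result.bin"
    path.write_bytes(encrypted_result)
    return path
```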
Preferably, the output processing module examines the operation-result state code of an online-running algorithm file and manages and processes the result differently according to the state code. Further, the output processing module distinguishes between training and online-operation algorithm files and manages and processes their results by category, for example as sketched below.
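The state-code dispatch could look like the following sketch. The patent does not enumerate its state codes, so the codes and handler behaviour shown here are placeholders.

```python
# Hedged illustration of classifying operation results by state code; codes are assumed.
def handle_online_result(result):
    """Assumed normal completion of an online run: classify, encrypt, store and forward."""
    return ("store_and_forward", result)

def handle_training_result(result):
    """Assumed completion of a training run: bind to the algorithm file and store encrypted."""
    return ("bind_and_store", result)

def handle_error(result):
    """Assumed abnormal exit: log and keep the previous result."""
    return ("log_only", result)

STATE_HANDLERS = {
    0: handle_online_result,
    1: handle_training_result,
    2: handle_error,
}

def dispatch(state_code, result):
    # Unknown codes fall back to the error handler.
    return STATE_HANDLERS.get(state_code, handle_error)(result)
```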
Preferably, the rapid deployment module packages the whole algorithm management and control system together with its related configuration, such as its dependency files, in the existing operating environment, and then completes installation and deployment in the new operating environment through one-key compression packaging and one-key installation and deployment, as sketched below.
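A minimal sketch of the one-key compression packaging and one-key installation idea follows; the directory names, archive format and install entry point are assumptions made for illustration.

```python
# Illustrative sketch of "one-key" packaging and deployment; paths and script name are assumptions.
import shutil
import subprocess
from pathlib import Path

def pack_environment(source_dir="algorithm_runtime", archive_name="algorithm_runtime_pkg"):
    """Compress the management system and all dependency files in the current environment."""
    return shutil.make_archive(archive_name, "gztar", root_dir=source_dir)

def deploy(archive_path, target_dir="/opt/algorithm_runtime", install_script="install.sh"):
    """Unpack on the user-side device and run a single installation script."""
    shutil.unpack_archive(archive_path, target_dir)
    script = Path(target_dir) / install_script
    subprocess.run(["bash", str(script)], check=True)   # hypothetical one-key installer
```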
Preferably, the following data is included in the encrypted data: the input data configuration, output interface configuration and operating parameter configuration, the files and directories, the main algorithm file, the entry function name, and the binary data of each file. The encrypted data is AES encrypted, as sketched below.
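The patent states only that the packaged data is AES encrypted; the following sketch uses AES-GCM from the Python cryptography package, and the field names, hex encoding of file contents and key handling are illustrative assumptions.

```python
# Sketch of AES encryption of the packaged algorithm data; layout and key handling are assumptions.
import json
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def build_package(main_file, entry_function, files, input_cfg, output_cfg, run_cfg):
    """Collect the items listed above into one serializable structure."""
    return {
        "input_data_configuration": input_cfg,
        "output_interface_configuration": output_cfg,
        "operation_parameter_configuration": run_cfg,
        "main_algorithm_file": main_file,
        "entry_function_name": entry_function,
        "files": {path: open(path, "rb").read().hex() for path in files},  # binary data of each file
    }

def encrypt_package(package, key):
    nonce = os.urandom(12)
    ciphertext = AESGCM(key).encrypt(nonce, json.dumps(package).encode(), None)
    return nonce + ciphertext

def decrypt_package(blob, key):
    nonce, ciphertext = blob[:12], blob[12:]
    return json.loads(AESGCM(key).decrypt(nonce, ciphertext, None))

# key = AESGCM.generate_key(bit_length=256)  # in practice the key would stay inside the dedicated virtual machine
```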
Preferably, the operation management method is as follows (a code sketch of the load-and-call step follows these steps):
A virtual operating environment is built; when the algorithm runs, the encrypted data is read into memory and identified, the run directory and file information are rebuilt inside the virtual environment, and the algorithm is invoked according to the main file name and the entry function name.
The input data configuration is read; on the first run the input data is added to the data centralized-processing set, and on subsequent runs the data input is obtained directly from that set.
The output interface configuration is read, and the operation result of the algorithm is transmitted to the specified location or device according to the output interface.
Memory use, CPU resource conditions and task priority are recorded, and when several tasks are later invoked in parallel, the running order is arranged reasonably and running resources are managed according to this recorded information.
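The load-and-call step above might look like the following sketch, reusing the hypothetical package layout from the previous encryption sketch; the working-directory handling and helper names are assumptions.

```python
# Hedged sketch: materialise the decrypted package inside the virtual environment's
# working directory and call the recorded entry function of the main algorithm file.
import importlib.util
from pathlib import Path

def run_algorithm(package, workdir):
    """Rebuild the run directory from the decrypted package and call the entry function."""
    workdir = Path(workdir)
    for relative_path, hex_bytes in package["files"].items():
        target = workdir / relative_path
        target.parent.mkdir(parents=True, exist_ok=True)
        target.write_bytes(bytes.fromhex(hex_bytes))        # reconstruct directory and file information

    main_path = workdir / package["main_algorithm_file"]
    spec = importlib.util.spec_from_file_location("algorithm_main", main_path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)                          # load the main algorithm file

    entry = getattr(module, package["entry_function_name"])  # look up the recorded entry function name
    return entry(package["input_data_configuration"])
```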
Preferably, for an offline environment, the data configuration and execution conditions required by the algorithm file are written into the package at encryption time, and the field system identifies them automatically and performs running and training accordingly.
Preferably, the system is designed for industrial data scenarios, which are characterized by strict data-timeliness requirements, many data sources, unequal data step lengths, unstable data-source flow and extreme fluctuations in data-volume pressure. Because of the data-timeliness requirements, running time cannot simply be divided evenly among the algorithm files; when several algorithm files run at the same time, the task scheduling module must schedule them in real time according to task priority, required memory and CPU resource occupation, for example as sketched below.
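A minimal sketch of scheduling by priority, required memory and CPU occupation is given below; tracking a fixed memory and CPU budget inside the scheduler is an illustrative simplification rather than the patent's mechanism.

```python
# Minimal sketch of priority-plus-resource scheduling; the resource model is a simplification.
import heapq

class Scheduler:
    def __init__(self, total_memory_mb, total_cpu_cores):
        self.free_memory = total_memory_mb
        self.free_cpu = total_cpu_cores
        self._queue = []            # (priority, sequence, task, memory, cpu); lower number = higher priority
        self._seq = 0

    def submit(self, task, priority, memory_mb, cpu_cores):
        heapq.heappush(self._queue, (priority, self._seq, task, memory_mb, cpu_cores))
        self._seq += 1

    def dispatch(self):
        """Start as many waiting algorithm tasks as current memory/CPU allow, highest priority first."""
        started, deferred = [], []
        while self._queue:
            priority, seq, task, mem, cpu = heapq.heappop(self._queue)
            if mem <= self.free_memory and cpu <= self.free_cpu:
                self.free_memory -= mem
                self.free_cpu -= cpu
                started.append(task)
            else:
                deferred.append((priority, seq, task, mem, cpu))
        for item in deferred:       # tasks that could not start wait for the next scheduling round
            heapq.heappush(self._queue, item)
        return started

    def release(self, memory_mb, cpu_cores):
        """Return resources when an algorithm task finishes."""
        self.free_memory += memory_mb
        self.free_cpu += cpu_cores
```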
The following is a scenario in which the wind (yaw) deviation of a turbine unit needs to be identified and corrected, handled by an algorithm A and an algorithm B.
Algorithm A: used to identify whether the unit has wind deviation.
Input: data at 10 ms intervals from the unit's main control system over the last 10 minutes.
Operating condition: runs in real time.
Output: the wind deviation state of the corresponding unit, stored in the system relational database.
Algorithm B: calculates and corrects the wind deviation angle.
Input: data at 1 s intervals from the unit's main control system over the last month, and the daily average of 1 s interval data from the wind farm's meteorological mast.
Operating condition: algorithm A has reported wind deviation for a given unit.
Output: the wind deviation angle, sent to the unit through a communication protocol.
The operating conditions, input data sources, time intervals and preprocessing modes of these algorithms differ, as do the application channels and transmission modes of their output data; nevertheless, the system can configure the data, run, train and classify the output of each model through the configuration information written into the algorithm file when it is encrypted and packaged, so both functions can be realized. A possible shape for such configuration information is sketched below.
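By way of illustration, the configuration information written into the encrypted packages of algorithm A and algorithm B might take a shape like the following; the keys, table names and protocol placeholders are assumptions, since the patent does not fix a schema.

```python
# Illustrative configuration only; field names and values are assumptions.
WIND_DEVIATION_CONFIG = {
    "algorithm_A": {
        "purpose": "identify whether the turbine unit has wind (yaw) deviation",
        "input": {"source": "unit_main_control_system", "interval": "10ms", "window": "10min"},
        "run_condition": "real_time",
        "output": {"target": "system_relational_database", "content": "unit_wind_deviation_state"},
    },
    "algorithm_B": {
        "purpose": "calculate and correct the wind deviation angle",
        "input": [
            {"source": "unit_main_control_system", "interval": "1s", "window": "1month"},
            {"source": "wind_farm_met_mast", "interval": "1s_daily_average", "window": "1day"},
        ],
        "run_condition": "algorithm_A_reports_deviation_for_unit",
        "output": {"target": "turbine_unit", "transport": "communication_protocol", "content": "wind_deviation_angle"},
    },
}
```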
As described above, the present invention can be preferably realized.
All features disclosed in all embodiments in this specification, or all methods or process steps implicitly disclosed, may be combined and/or expanded, or substituted, in any way, except for mutually exclusive features and/or steps.
The foregoing is only a preferred embodiment of the present invention, and the present invention is not limited thereto in any way, and any simple modification, equivalent replacement and improvement made to the above embodiment within the spirit and principle of the present invention still fall within the protection scope of the present invention.

Claims (10)

1. An artificial intelligence algorithm source code transplanting method, characterized by comprising the following steps:
S1, encrypting and packaging the algorithm source code file and its dependency files to generate an encrypted algorithm data packet;
S2, storing the encrypted algorithm data packet, and building a dedicated virtual operating environment in which the data packet is decrypted, run and trained;
S3, classifying, encrypting and storing the running and training results of the algorithm file, and transmitting the results to the user-side device.
2. The artificial intelligence algorithm source code transplanting method according to claim 1, further comprising the following step:
S4, performing time-sharing task management and control on each algorithm file, and scheduling in real time according to task priority, required memory and CPU resource occupation.
3. The artificial intelligence algorithm source code transplanting method according to claim 1, further comprising the following step:
S5, when the algorithm file is run and trained, exchanging data between the algorithm file and the outside asynchronously through a message queue plus cache.
4. The artificial intelligence algorithm source code transplanting method according to claim 1, further comprising the following step:
S6, compressing and packaging the complete set of dependency files of the algorithm, and completing one-click installation and configuration on the user-side device.
5. The artificial intelligence algorithm source code transplanting method according to any one of claims 1 to 4, wherein step S1 comprises the following steps:
S11, obtaining an algorithm import interface and importing the algorithm file;
S12, labelling and distinguishing the version information of the imported algorithm file;
S13, setting the algorithm interface and parameter thresholds of the imported algorithm file;
S14, importing the independent algorithm data packet and independent dependency files required for running the algorithm;
S15, setting the volume of data required for running the algorithm;
S16, after the algorithm settings are complete, encrypting and packaging the algorithm file to generate an encrypted algorithm file and a corresponding encrypted configuration information file.
6. An artificial intelligence algorithm source code transplanting system, characterized by comprising an algorithm encryption module, a resource management module and an output processing module;
the algorithm encryption module is used for encrypting and packaging the algorithm source code file and its dependency files to generate an encrypted algorithm data packet;
the resource management module is used for storing the encrypted algorithm data packet and building a dedicated virtual operating environment in which the data packet is decrypted, run and trained;
the output processing module is used for classifying, encrypting and storing the running and training results of the algorithm file and transmitting them to the user-side device.
7. The artificial intelligence algorithm source code transplanting system according to claim 6, further comprising a task scheduling module, wherein the task scheduling module is used for performing time-sharing task management and control on each algorithm file and for scheduling in real time according to task priority, required memory and CPU resource occupation.
8. The artificial intelligence algorithm source code transplanting system according to claim 7, further comprising an asynchronous data module, wherein the asynchronous data module is used for performing asynchronous data interaction between the algorithm file and the outside through a message queue plus cache while the algorithm file is run and trained.
9. The artificial intelligence algorithm source code transplanting system according to claim 8, further comprising a rapid deployment module, wherein the rapid deployment module is used for compressing and packaging the complete set of dependency files of the algorithm and completing one-click installation and configuration on the user-side device.
10. The artificial intelligence algorithm source code transplanting system according to any one of claims 6 to 9, wherein the algorithm encryption module comprises:
an algorithm file import module, which is used for obtaining an algorithm import interface and importing the algorithm file;
a version information distinguishing module, which is used for labelling and distinguishing the version information of the imported algorithm file;
an interface and parameter threshold setting module, which is used for setting the algorithm interface and parameter thresholds of the imported algorithm file;
an independent data import module, which is used for importing the independent algorithm data packet and independent dependency files required for running the algorithm;
a data volume setting module, which is used for setting the volume of data required for running the algorithm;
a file generation module, which is used for encrypting and packaging the algorithm file after the algorithm settings are complete, to generate an encrypted algorithm file and a corresponding encrypted configuration information file.
CN202110464539.2A 2021-04-28 2021-04-28 Artificial intelligence algorithm source code transplanting method and system Pending CN113297590A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110464539.2A CN113297590A (en) 2021-04-28 2021-04-28 Artificial intelligence algorithm source code transplanting method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110464539.2A CN113297590A (en) 2021-04-28 2021-04-28 Artificial intelligence algorithm source code transplanting method and system

Publications (1)

Publication Number Publication Date
CN113297590A true CN113297590A (en) 2021-08-24

Family

ID=77320565

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110464539.2A Pending CN113297590A (en) 2021-04-28 2021-04-28 Artificial intelligence algorithm source code transplanting method and system

Country Status (1)

Country Link
CN (1) CN113297590A (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110347389A (en) * 2019-07-19 2019-10-18 中国工商银行股份有限公司 Processing method, the device and system of algorithm file
CN110704135A (en) * 2019-09-26 2020-01-17 北京智能工场科技有限公司 Competition data processing system and method based on virtual environment
CN111028226A (en) * 2019-12-16 2020-04-17 北京百度网讯科技有限公司 Method and device for algorithm transplantation
CN111612132A (en) * 2020-05-20 2020-09-01 广东电网有限责任公司 Artificial intelligence algorithm development system, training method, device and medium
CN111897731A (en) * 2020-08-03 2020-11-06 中关村科学城城市大脑股份有限公司 Artificial intelligence model evaluating and publishing system and method applied to urban brain
CN112132198A (en) * 2020-09-16 2020-12-25 建信金融科技有限责任公司 Data processing method, device and system and server
CN112287377A (en) * 2020-11-25 2021-01-29 南京星环智能科技有限公司 Model training method based on federal learning, computer equipment and storage medium
CN112633501A (en) * 2020-12-25 2021-04-09 深圳晶泰科技有限公司 Development method and system of machine learning model framework based on containerization technology



Legal Events

PB01 - Publication
SE01 - Entry into force of request for substantive examination
CB02 - Change of applicant information
  Address after: 618099 No. 99, Zhujiang East Road, Jingyang District, Deyang City, Sichuan Province
  Applicant after: Dongfang Electric Wind Power Co., Ltd.
  Address before: 618099 No. 99, Zhujiang East Road, Jingyang District, Deyang City, Sichuan Province
  Applicant before: DONGFANG ELECTRIC WIND POWER Co., Ltd.
RJ01 - Rejection of invention patent application after publication (application publication date: 20210824)