CN111930419B - Code packet generation method and system based on deep learning model - Google Patents
Code packet generation method and system based on deep learning model

Info
- Publication number
- CN111930419B (application CN202010749289.2A)
- Authority
- CN
- China
- Prior art keywords
- interface
- packaging
- code
- model
- deep learning
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F8/00—Arrangements for software engineering
- G06F8/70—Software maintenance or management
- G06F8/71—Version control; Configuration management
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F8/00—Arrangements for software engineering
- G06F8/30—Creation or generation of source code
- G06F8/31—Programming languages or programming paradigms
- G06F8/315—Object-oriented languages
Landscapes
- Engineering & Computer Science (AREA)
- Software Systems (AREA)
- General Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Computing Systems (AREA)
- Computer Security & Cryptography (AREA)
- Stored Programmes (AREA)
Abstract
The application relates to a code package generation method and system based on a deep learning model. The method comprises the following steps: acquiring configuration parameter information and a model file of a deep learning model, wherein the configuration parameter information comprises a configuration platform identifier; performing layered packaging on the deep learning model according to the model file to obtain a packaging code template; inputting the configuration parameter information into the packaging code template to generate a packaging code; and sending the packaging code to a hosting server, so that when the hosting server receives the packaging code it triggers a construction server to call a construction script, and the construction server constructs an application program corresponding to the configuration platform identifier according to the construction script and the packaging code and packages the application program into a code package. By adopting the method, code package generation efficiency can be improved.
Description
Technical Field
The present application relates to the field of computer technologies, and in particular, to a code package generation method and system based on a deep learning model, a computer device, and a storage medium.
Background
A code package refers to a collection of development tools used to build application software for a particular software package, software framework, hardware platform, operating system, or the like. Developers use a code package for application program development: they can quickly create application software from the code package, saving the work of writing low-level hardware support and a basic code framework. For example, the code package may be an SDK (Software Development Kit). In the traditional approach, the model encapsulation code is developed manually and the various compilation and build tasks are also handled manually. Because code package generation involves many application program types, platform types, model types and the like, the workload on developers is heavy and code packages are generated inefficiently.
Disclosure of Invention
In view of the above, it is desirable to provide a deep learning model-based code package generation method, system, computer device and storage medium that can improve code package generation efficiency.
A deep learning model-based code package generation method, the method comprising:
acquiring configuration parameter information and a model file of a deep learning model, wherein the configuration parameter information comprises a configuration platform identifier;
performing layered packaging on the deep learning model according to the model file to obtain a packaging code template;
inputting the configuration parameter information into the packaging code template to generate a packaging code;
and sending the packaging code to a hosting server so as to trigger a construction server to call a construction script when the hosting server receives the packaging code, constructing an application program corresponding to the configuration platform identifier according to the construction script and the packaging code through the construction server, and packaging the application program into a code package.
In one embodiment, the performing hierarchical encapsulation on the deep learning model according to the model file to obtain an encapsulated code template includes:
performing model layer packaging on the deep learning model according to the model file to obtain a first interface;
performing interface layer packaging on the first interface to obtain a second interface;
and performing application layer packaging on the second interface to obtain a packaging code template.
In one embodiment, the performing model layer encapsulation on the deep learning model according to the model file to obtain a first interface includes:
acquiring an operation interface and a model operation strategy corresponding to the deep learning model according to the model file;
and packaging the operation interface and the model operation strategy to obtain a first interface.
In one embodiment, the interface layer packaging the first interface to obtain the second interface includes:
acquiring an object of the first interface;
and calling an interface calling function corresponding to the object according to a preset format, and performing interface layer packaging on the first interface and the interface calling function corresponding to the object to obtain a second interface.
In one embodiment, the performing application layer encapsulation on the second interface to obtain an encapsulated code template includes:
acquiring a preset interface corresponding to an application layer and a mapping file;
establishing a mapping relation between the data type of the second interface and the data type of the preset interface according to the mapping file;
and packaging the second interface establishing the mapping relation to obtain a packaging code template.
A deep learning model-based code package generation system, the system comprising:
the terminal is used for acquiring configuration parameter information and a model file of the deep learning model, wherein the configuration parameter information comprises a configuration platform identifier; packaging the deep learning model in a layering mode according to the model file to obtain a packaging code template, inputting the configuration parameter information into the packaging code template, and generating a packaging code; sending the encapsulated code to a hosting server;
the hosting server is used for triggering the construction server to call the construction script when the packaging code is received;
and the construction server is used for constructing the application program corresponding to the configuration platform identifier according to the construction script and the packaging code and packaging the application program into a code package.
In one embodiment, the terminal is further configured to perform model layer encapsulation on the deep learning model according to the model file to obtain a first interface; performing interface layer packaging on the first interface to obtain a second interface; and performing application layer packaging on the second interface to obtain a packaging code template.
In one embodiment, the terminal is further configured to obtain an operation interface and a model operation policy corresponding to the deep learning model according to the model file; and packaging the operation interface and the model operation strategy to obtain a first interface.
A computer device comprising a memory and a processor, the memory storing a computer program operable on the processor, the processor implementing the steps in the various method embodiments described above when executing the computer program.
A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the respective method embodiment described above.
According to the code package generation method, system, computer device and storage medium based on the deep learning model, the configuration parameter information and the model file of the deep learning model are obtained, the deep learning model is packaged in layers according to the model file to obtain a packaging code template, and the configuration parameter information is input into the packaging code template to generate a packaging code. The packaging code is sent to a hosting server, so that when the hosting server receives the packaging code it triggers the construction server to call the construction script, and the construction server builds the application program corresponding to the configuration platform identifier in the configuration parameter information according to the construction script and the packaging code, and packages the application program into a code package. The packaging code template is obtained by generalizing over different application program types, platform types, model types and the like, so the packaging code needed during model packaging can be generated flexibly and automatically from the template. This avoids manually developing the model packaging code and manually handling the various compilation and build processes, greatly improves model packaging efficiency, and thus improves code package generation efficiency. The construction server builds the application program corresponding to the configuration platform identifier in the configuration parameter information according to the construction script and the packaging code, and then packages the application program, yielding a unified, standardized multi-platform code package that can be conveniently applied to a variety of platforms and systems. Meanwhile, the layered packaging of the deep learning model hides a large number of technical details, so a code package can be generated quickly simply by uploading the configuration parameter information and the model file.
Drawings
FIG. 1 is a diagram of an application environment of a deep learning model-based code package generation method in one embodiment;
FIG. 2 is a schematic flow chart illustrating a deep learning model-based code package generation method according to an embodiment;
FIG. 3 is a block diagram of a deep learning model-based code package generation system in one embodiment;
FIG. 4 is a flowchart illustrating the steps of performing hierarchical encapsulation on a deep learning model according to a model file to obtain an encapsulated code template in one embodiment;
FIG. 5 is a block diagram of a deep learning model-based code package generation system in one embodiment;
FIG. 6 is a diagram illustrating an internal structure of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The code package generation method based on the deep learning model provided by the application can be applied to the application environment shown in fig. 1. The terminal 102 communicates with the hosting server 104 through a network, and the hosting server 104 communicates with the construction server 106 through a network. The terminal 102 runs a code package generation system based on a deep learning model, which may be referred to as the packaging system for short. The user logs in to the packaging system through the terminal 102 and uploads the configuration parameter information and the model file of the deep learning model on a configuration page of the packaging system. The configuration parameter information includes a configuration platform identifier. The terminal 102 performs layered packaging on the deep learning model according to the model file to obtain a packaging code template, and then inputs the configuration parameter information into the packaging code template to generate a packaging code. The terminal 102 sends the packaging code to the hosting server 104; when the hosting server 104 receives the packaging code, the construction server 106 is triggered to call the construction script, and the construction server 106 builds the application program corresponding to the configuration platform identifier according to the construction script and the packaging code and packages the application program into a code package. The terminal 102 may be, but is not limited to, various personal computers, notebook computers, smart phones, tablet computers, and portable wearable devices. The hosting server 104 and the construction server 106 may each be implemented as an independent server or as a server cluster composed of multiple servers.
In one embodiment, as shown in fig. 2, a deep learning model-based code package generation method is provided. The method is described by taking its application to the terminal in fig. 1 as an example, and includes the following steps:

Step 202, acquiring configuration parameter information and a model file of the deep learning model, where the configuration parameter information includes a configuration platform identifier.

The configuration parameter information refers to the configuration parameters required for generating the packaging code and building the application program. The model file is obtained by statically storing the deep learning model in the form of a file, and it is the object that is packaged for the user. A deep learning model is built using deep learning technology on top of a deep learning framework such as MXNet or TensorFlow: a deep neural network structure is established, its parameters are optimized and evaluated, and the resulting network structure and parameters serve a practical need such as target detection or image classification.
A packaging system runs on the terminal. When a user logs in to the packaging system through the terminal for the first time, user information needs to be uploaded for registration. The user information may include a login account and a login password. After registration succeeds, the terminal generates a personal repository for the user, which may be used to store the user's information. When the user needs to generate a code package, the user logs in to the packaging system through the terminal; the terminal receives a parameter configuration operation on a configuration page of the packaging system and obtains the configuration parameter information and the model file of the deep learning model according to that operation. For example, the code package may be an SDK (Software Development Kit). The configuration parameter information may include: running platform information, code package version information, the deep learning framework type on which the model file depends, model input data and output layer data, and so on. The running platform information may include a configuration platform identifier, an operating system, a CPU (Central Processing Unit) type, and the like. The configuration platform identifier may be a unique identifier used to mark the running platform. A deep learning framework packages the code that would otherwise be rewritten at the start of every deep learning project into a framework that all users can reuse; examples include MXNet, TensorFlow, PaddlePaddle (PArallel Distributed Deep LEarning) and Theano. The model input data may include an input data ID (Identification), the numerical type of the input data, the data size, and the like. The output layer data may include an output layer ID, the data value type and data size of the output layer, and the like. The model file may include a deep neural network structure and the network parameter values of that structure, stored in a particular format.
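For concreteness, the configuration parameter information described above might be represented as follows; this is only an illustrative sketch, and every field name in it is an assumption rather than a schema defined by the application.

```python
# Hypothetical example of the configuration parameter information; the field
# names are illustrative only and are not prescribed by the application.
config_params = {
    "platform_id": "linux-x86_64-centos7",   # configuration platform identifier
    "os": "CentOS 7",
    "cpu_type": "x86_64",
    "sdk_version": "1.0.3",                   # code package version information
    "framework": "TensorFlow",                # deep learning framework the model file depends on
    "target_language": "Java",                # target interface language of the application layer
    "inputs": [
        {"id": "image", "dtype": "float32", "shape": [1, 224, 224, 3]},
    ],
    "outputs": [
        {"id": "probabilities", "dtype": "float32", "shape": [1, 1000]},
    ],
}
model_file = "face_detector.params"           # statically stored network structure and parameters
```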
Step 204, performing layered packaging on the deep learning model according to the model file to obtain a packaging code template.

Step 206, inputting the configuration parameter information into the packaging code template to generate a packaging code.
In the process of generating the code package, the terminal first needs to automatically package the deep learning model to obtain the packaging code. Encapsulation hides the complex technical details of the lower layers, exposes only an interface externally, and controls the access level of attribute read and modification operations in an application program; the abstracted data and behaviors (or functions) are combined into an organic whole, that is, the data and the source code that operates on the data are combined into a class, whose members may include data and functions. Encapsulation thus hides the underlying complexity and provides a directly usable interface to the application layer. For example, suppose an Android developer needs to add a deep-learning-based face detection function to an Android application. The packaging process packages a deep learning model with the face detection function into a code package, which may include a Java interface, interface documentation and a binary library that the Android application can call directly, so that Android development engineers can quickly integrate and use the code package in their development environment.
Specifically, the terminal performs layered packaging on the deep learning model according to the model file, that is, the packaged code in the deep learning model packaging process is abstracted into three levels, for example, the three levels may include model layer packaging, interface layer packaging, and application layer packaging. Each layer of package after abstraction does not depend on a concrete data format and a deep learning frame used by the deep learning model any more, so that the deep learning model is packaged in a layered mode, and a uniform and standardized packaging code template can be constructed. The packaging code template can be obtained by combining a plurality of levels of packaging code templates.
The packaging code template can include a plurality of parameter configuration items, such as the deep learning framework, the running platform and the like. The packaging code template is not directly usable code; the corresponding parameters in the template need to be replaced according to the configuration parameter information, which activates the template into usable packaging code. For example, a model layer packaging code template is obtained by taking the model layer packaging code of a plurality of deep learning models, generalizing the variable information in that code, and expressing the variable information with specific parameter names; the variable information may include the names of classes and functions in the code, the numerical type of the output layer data, the data size and the like. The invariant information in the model layer packaging code template is the code that applies to different deep learning models regardless of the specific model, such as model loading, the initialization steps, and the deep learning interfaces that need to be called.
The configuration parameter information may include the deep learning framework type on which the model file depends, the running platform information, the target interface language, and so on. The terminal reads the configuration parameter items in the packaging code template through a parser and identifies the configuration parameters in the configuration parameter information that correspond to those items, so that the configuration parameters are filled into the corresponding configuration parameter items, replacing the initial parameters and yielding the packaging code.
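As a rough illustration of this substitution step, a parser could fill placeholder parameter items in a packaging code template as sketched below; the `${...}` placeholder syntax, the function name and the template fragment are assumptions, not the application's actual format.

```python
import re

def render_template(template_text: str, config_params: dict) -> str:
    """Replace every ${item} placeholder in the packaging code template with the
    matching configuration parameter, producing usable packaging code."""
    def substitute(match):
        item = match.group(1)
        if item not in config_params:
            raise KeyError(f"missing configuration parameter: {item}")
        return str(config_params[item])
    return re.sub(r"\$\{(\w+)\}", substitute, template_text)

# Fragment of a hypothetical model layer packaging code template.
template = """
class ${class_name} {
    // output layer "${output_id}" holds ${output_dtype} values
};
"""
code = render_template(template, {
    "class_name": "FaceDetectorModel",
    "output_id": "probabilities",
    "output_dtype": "float32",
})
print(code)
```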
Step 208, sending the packaging code to the hosting server, so that when the hosting server receives the packaging code, the construction server is triggered to call the construction script, the construction server constructs the application program corresponding to the configuration platform identifier according to the construction script and the packaging code, and the application program is packaged into a code package.
When the terminal has generated the packaging code, the automatic construction stage of the application program begins. Specifically, the terminal sends the generated packaging code to the hosting server, and the packaging code is managed through a code repository on the hosting server. For example, the hosting server may be Gogs, GitHub, etc. When the hosting server receives the packaging code sent by the terminal, the construction server is automatically triggered to schedule and execute the build process. For example, the construction server may be a CI (Continuous Integration)/CD (Continuous Delivery/Deployment) server. A CI server frequently integrates code into the trunk, e.g., code integration may be performed several times a day; the purpose of continuous integration is fast iteration of the application. Continuous delivery means that new versions of the application program are frequently delivered to reviewers for review; when the review passes, the corresponding code may enter the production phase. Continuous delivery can be viewed as the next step after continuous integration: the application is deliverable at any time, no matter how it was updated. Continuous deployment is the next step after continuous delivery and refers to automatically deploying code to the production environment once it passes review; its purpose is to keep the code deployable at any time so that it can enter the production phase.
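One common way to realize the trigger from the hosting server to the construction server is a repository webhook. The sketch below is only an assumed illustration of that idea using Flask; the endpoint path, payload fields and build command are hypothetical and not specified by the application.

```python
# Sketch of a build trigger: the hosting server (e.g. Gogs/GitHub) posts a
# webhook when packaging code is pushed, and the construction server starts
# the construction script. Endpoint, fields and command are assumptions.
import subprocess
from flask import Flask, request

app = Flask(__name__)

@app.route("/hooks/packaging-code", methods=["POST"])
def on_push():
    payload = request.get_json(force=True)
    repo_url = payload["repository"]["clone_url"]   # typical push-webhook field; verify for your host
    branch = payload.get("ref", "refs/heads/master")
    # Hand off to the construction script, which clones the repo and builds
    # the application for the configured platform.
    subprocess.Popen(["python", "construction_script.py", repo_url, branch])
    return "", 204

if __name__ == "__main__":
    app.run(port=8080)
```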
The construction server calls the construction script, builds the application program corresponding to the configuration platform identifier from the packaging code according to that script, and packages the application program into an SDK code package. The construction script is a script program for building the application program and packaging the code package. The construction server pulls the environment image corresponding to the configuration platform identifier from the image repository of the Docker image environment storage server, creates the platform environment from that environment image, pulls the packaging code from the code repository, and executes the application build. Using the isolation provided by Docker containers, different platform environments are created, for example different Linux distribution platforms such as CentOS (Community Enterprise Operating System) and Ubuntu, so as to ensure that the distributed application libraries are consistent with the target environment corresponding to the configuration platform identifier. The construction server then packages the application program into a code package and sends it to the code package storage server for management. For example, when the hardware platform of the deep learning model-based code package generation system is an x86_64 hardware platform and the application build platform is a Linux or Windows system platform, the architecture of the system may be as shown in fig. 3, where each server runs in an independent Docker container environment: the code hosting server corresponds to the hosting server, the SDK storage server to the code package storage server, and the CI (Continuous Integration)/CD (Continuous Delivery/Deployment) server to the construction server, and the packaging code is compiled and built in an independent build environment.
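A construction script along the lines described above might, for example, pull the environment image that matches the configuration platform identifier and run the build inside an isolated Docker container, roughly as sketched below; the image name, mount paths and in-container build command are assumptions.

```python
import subprocess

def build_code_package(platform_id: str, repo_dir: str, output_dir: str) -> None:
    """Build the application for one platform inside an isolated Docker container
    and package it into an SDK code package (sketch; names/paths are assumed)."""
    image = f"registry.example.com/build-env:{platform_id}"   # environment image per platform
    subprocess.run(["docker", "pull", image], check=True)
    # Mount the packaging code and run the platform's build inside the container.
    subprocess.run([
        "docker", "run", "--rm",
        "-v", f"{repo_dir}:/src",
        "-v", f"{output_dir}:/out",
        image,
        "bash", "-c", "cd /src && make sdk && cp build/sdk-*.tar.gz /out/",
    ], check=True)

# Example invocation (hypothetical paths and platform identifier).
build_code_package("linux-x86_64-centos7", "/srv/repos/face_sdk", "/srv/sdk-packages")
```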
In one embodiment, the terminal may send the parameter configuration information and the model file of the deep learning model to the hosting server, and the hosting server may create a code warehouse in which the parameter configuration information and the model file of the deep learning model are hosted. The code repository may be used to store the generated encapsulated code for subsequent pulling of the encapsulated code in the code repository for application building.
In this embodiment, the configuration parameter information and the model file of the deep learning model are obtained, the deep learning model is packaged in layers according to the model file to obtain a packaging code template, and the configuration parameter information is input into the packaging code template to generate a packaging code. The packaging code is sent to the hosting server, so that when the hosting server receives the packaging code it triggers the construction server to call the construction script, and the construction server builds the application program corresponding to the configuration platform identifier in the configuration parameter information according to the construction script and the packaging code, and packages the application program into a code package. The packaging code template is obtained by generalizing over different application program types, platform types, model types and the like, so the packaging code needed during model packaging can be generated flexibly and automatically from the template. This avoids manually developing the model packaging code and manually handling the various compilation and build processes, greatly improves model packaging efficiency, and thus improves code package generation efficiency. The construction server builds the application program corresponding to the configuration platform identifier in the configuration parameter information according to the construction script and the packaging code, and then packages the application program, yielding a unified, standardized multi-platform code package that can be conveniently applied to a variety of platforms and systems. Meanwhile, the layered packaging of the deep learning model hides a large number of technical details, so a code package can be generated quickly simply by uploading the configuration parameter information and the model file.
In one embodiment, before the deep learning model is packaged in layers according to the model file, the configuration parameter information and the model file can also be checked. Specifically, the terminal calls a related interface of the deep learning framework to perform the check; for example, the related interface may be a Python interface. A corresponding checking program is run directly by calling that interface: if the program returns normally, or the returned result matches the preset result, the configuration parameter information and the model file are valid; otherwise they are invalid. The check may cover whether the deep learning model in the model file can be loaded and initialized normally, whether the input data in the configuration parameter information matches the model's preset input data, and whether the output data in the configuration parameter information matches the model's preset output data. By checking the configuration parameter information and the model file in this way, invalid user input can be intercepted one step in advance.
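A minimal sketch of such a check is given below, assuming a framework-specific `load_model` callable (for example one built on the framework's Python interface) and assuming the loaded model exposes `input_names`/`output_names`; both assumptions are illustrative only.

```python
def validate_upload(config_params: dict, model_file: str, load_model) -> list:
    """Check the uploaded configuration parameter information and model file
    before packaging. `load_model` is a framework-specific callable returning
    an object with declared input/output names (an assumption of this sketch)."""
    errors = []
    try:
        model = load_model(model_file)      # can the model be loaded and initialized?
    except Exception as exc:                # any load failure invalidates the upload
        return [f"model file could not be loaded: {exc}"]

    declared_inputs = {i["id"] for i in config_params.get("inputs", [])}
    declared_outputs = {o["id"] for o in config_params.get("outputs", [])}
    if declared_inputs != set(model.input_names):
        errors.append(f"input mismatch: config {declared_inputs} vs model {set(model.input_names)}")
    if declared_outputs != set(model.output_names):
        errors.append(f"output mismatch: config {declared_outputs} vs model {set(model.output_names)}")
    return errors   # empty list means the upload is valid
```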
In one embodiment, the terminal may also perform automated testing on the generated code package. Specifically, the terminal can call a test script and a test program, and the code package is tested through them. The test scripts and test programs may be developed in advance. Execution of the test program depends on the configuration parameter information, such as the format of the interface input data. The test content may include the stability of the code package, system resource occupation, the running performance of the code package, and the like. For example, stability may cover whether there is a memory leak or a bug that crashes the system; system resource occupation may cover computer system memory and GPU video memory usage; and running performance may cover the running speed of the program. The terminal obtains the test results output by the test program, stores them as text in a log file, and displays them on the terminal.
Because the interface of each code package is produced and defined according to the same program standard, one set of test programs can test all types of code packages. Compared with the traditional automated testing approach, in which test programs are separately written, developed and maintained for each application type or for code packages built from different models, this reduces development cost.
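A sketch of such a generic test program is given below; it assumes the code package is reachable through a standardized `predict` interface and only measures stability and latency, leaving GPU memory and other resource checks aside.

```python
import json
import time

def run_package_tests(sdk, sample_input, iterations: int = 100) -> dict:
    """Exercise a generated code package through its standardized interface.
    `sdk` is assumed to expose predict(data); because every package follows the
    same interface standard, one test program can cover all package types."""
    results = {"errors": 0, "latencies_ms": []}
    for _ in range(iterations):
        start = time.perf_counter()
        try:
            sdk.predict(sample_input)                 # stability: repeated calls should not crash
        except Exception:
            results["errors"] += 1
        results["latencies_ms"].append((time.perf_counter() - start) * 1000)
    lat = results["latencies_ms"]
    results["avg_latency_ms"] = sum(lat) / len(lat)   # running performance
    with open("sdk_test.log", "w") as fh:             # store the result as a text log file
        json.dump({k: v for k, v in results.items() if k != "latencies_ms"}, fh, indent=2)
    return results
```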
In one embodiment, the terminal can also generate an intermediate log according to the program running state during code package generation. The intermediate log may include the feedback information output by the program at the various phases of automated execution, which can be used to identify the program's execution state. For example, in the compilation stage of the packaging code, that is, while building the application program from the packaging code, the compiler outputs echoed information, warnings, errors, and the like as it compiles the source code. By generating the intermediate log, the user can learn the execution state of the program in time and handle errors promptly when the program fails.
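For illustration, the build stage could capture the compiler's output into such an intermediate log roughly as follows; the build command and log format are assumptions.

```python
import logging
import subprocess

logging.basicConfig(filename="build_intermediate.log", level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(message)s")

def compile_packaging_code(build_cmd: list) -> bool:
    """Run one compilation step and record the compiler's echoed, warning and
    error output as the intermediate log (the command is an assumption)."""
    proc = subprocess.run(build_cmd, capture_output=True, text=True)
    for line in proc.stdout.splitlines():
        logging.info("compiler: %s", line)
    for line in proc.stderr.splitlines():
        # Compilers commonly emit warnings and errors on stderr.
        logging.warning("compiler: %s", line)
    if proc.returncode != 0:
        logging.error("build step failed with exit code %d", proc.returncode)
    return proc.returncode == 0

if __name__ == "__main__":
    compile_packaging_code(["make", "sdk"])   # hypothetical build command
```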
In one embodiment, as shown in fig. 4, the step of hierarchically encapsulating the deep learning model according to the model file to obtain an encapsulated code template includes:
and step 402, performing model layer packaging on the deep learning model according to the model file to obtain a first interface.
And step 404, performing interface layer packaging on the first interface to obtain a second interface.
And 406, performing application layer packaging on the second interface to obtain a packaging code template.
The model file refers to a file obtained by statically storing the deep learning model in the form of a file; it may include a deep neural network structure and the network parameter values of that structure, stored in a particular format. When the deep learning model is packaged automatically, the packaging process can be divided into three levels according to the technical characteristics of packaging a deep learning model. The deep learning model is therefore packaged in layers, from the inside out: model layer packaging, interface layer packaging, and application layer packaging.

Specifically, the terminal performs model layer packaging on the deep neural network structure in the model file, namely the deep learning model, to obtain a first interface. Model layer packaging means packaging the operation interface and the model operation strategy corresponding to the deep learning model; the operation interface is the low-level interface for loading and running the deep learning model, and the model operation strategy is the usage flow of the deep learning model. The terminal then performs interface layer packaging on the first interface to obtain a second interface. For example, the interface layer may be a C language interface layer, in which case the second interface obtained after interface layer packaging is a C language interface. Interface layer packaging means secondary packaging of the first interface obtained from the model layer packaging, wrapping it with a layer of uniform external interface. Finally, the terminal performs application layer packaging on the second interface. Application layer packaging focuses on cross-language programming, namely having the high-level language of the application layer, such as Java, Python or C#, call the second interface. After the application layer packaging is completed, a uniform, standardized packaging code template is obtained.
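The following Python sketch only mirrors this layering conceptually (in the description the layers are concretely C++, C and a high-level application language such as Java); every class, function and attribute name below is illustrative.

```python
# Conceptual mirror of the three packaging layers; all names are illustrative.

class ModelLayer:
    """Model layer ("first interface"): wraps the framework's operation
    interface and the model operation strategy."""

    def __init__(self, model_file: str, framework):
        # Operation interface: load the model file and initialize the model.
        self._model = framework.load(model_file)

    def run(self, raw_data):
        # Model operation strategy: the usage flow of the deep learning model.
        return self._model.predict(raw_data)


# Interface layer ("second interface"): flat, C-style functions over handles.
_instances = {}

def sdk_create(model_file: str, framework) -> int:
    handle = len(_instances) + 1
    _instances[handle] = ModelLayer(model_file, framework)
    return handle

def sdk_run(handle: int, raw_data):
    return _instances[handle].run(raw_data)


class ApplicationLayer:
    """Application layer: what the high-level application language ultimately
    calls, going through the interface layer rather than the model directly."""

    def __init__(self, model_file: str, framework):
        self._handle = sdk_create(model_file, framework)

    def detect(self, image):
        return sdk_run(self._handle, image)
```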
In the traditional mode, the code levels and the packaging flow are not clearly divided: the model packaging code is developed manually and the various compilation and build tasks are handled manually, so the implementation details of the packaging process differ from person to person. This is especially costly when code packages must be produced for different types of application programs. For example, to provide code packages with the same function for an Android application and a Web application, a separate code package has to be developed and maintained for each application type, because different application layers are developed with different programming languages and technologies. In this embodiment, the deep learning model is packaged at the model layer according to the model file, the first interface obtained from model layer packaging is packaged at the interface layer, and the second interface obtained from interface layer packaging is packaged at the application layer to obtain the packaging code template. The technologies and the packaging flow used in packaging a deep learning model are thus sorted out, and the handling of the model is separated from, and simplified with respect to, the calling logic of the application layer. When code packages are generated for different types of application programs, the code produced by the model layer packaging and the interface layer packaging is reused and only the application layer packaging code needs to be modified, which raises code reuse and improves code package generation efficiency.
In one embodiment, performing model layer encapsulation on the deep learning model according to the model file to obtain the first interface includes: acquiring an operation interface and a model operation strategy corresponding to the deep learning model according to the model file; and encapsulating the operation interface and the model operation strategy to obtain the first interface.
The model file can include the deep neural network structure and the network parameter values of that structure, stored in a certain format. When performing model layer packaging on the deep learning model, the terminal can obtain the operation interface and the model operation strategy of the deep learning model from the deep neural network structure in the model file and its network parameter values. The operation interface is the low-level interface for loading and running the model; for example, it may be a C++ program interface. The model operation strategy is the operation flow of the model.
The model file depends on a deep learning framework, which provides a set of low-level interfaces for building a model, training it and running inference; in particular, the framework provides interfaces for loading and initializing the model, through which the functions of loading and running the model can be realized. In actual use, the terminal can load the deep learning framework on which the model file depends into computer memory through these interfaces, parse it, construct the deep neural network structure defined in the model file in memory, and assign the network parameter values from the model file to that structure, thereby loading and initializing the model. After model initialization is completed, the terminal can operate the deep learning model according to the model operation strategy by calling the interfaces. Specifically, the data to be analyzed, input by the user, is acquired through the interface; for example, the data to be analyzed may be an image or speech. The data to be analyzed is then preprocessed through the interface. Preprocessing may include normalization, data compression, data encoding and the like, and converts the data to be analyzed into the form required by the input layer of the deep learning model. The preprocessed data is then fed to the input layer of the deep learning model, the model is made to run a prediction on it, the output layer data of the model is extracted, and the extracted output layer data is post-processed to obtain a processing result, which is output through the interface. Post-processing means performing further computation on the output layer data, such as data decoding and removing redundant data.
The terminal packages the operation interface and the model operation strategy to obtain the first interface, so as to provide a simpler, more direct interface externally. The first interface may be an independent C++ class, that is, an organic whole combining data with the operations on that data. For example, if the deep learning model implements a face detection function, an interface that accepts input data and returns the processing result is provided externally.
In this embodiment, the operation interface and the model operation policy corresponding to the deep learning model are encapsulated to obtain the first interface, and after a user inputs data through the first interface, a processing result can be returned, so that a simpler and more direct interface is provided, and a corresponding function of the model can be realized.
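As an illustration of the model operation strategy described above (preprocess, predict, extract the output layer, post-process), a sketch for an image-classification model is given below; the normalization, the `model.predict` call and the decoding step are assumptions made for the example.

```python
import numpy as np

def run_model(model, image: np.ndarray, labels: list) -> dict:
    """Model operation strategy sketch: preprocess -> predict -> extract the
    output layer -> post-process. `model.predict` is an assumed framework call."""
    # Preprocessing: convert the data to the form the input layer expects
    # (values normalized to [0, 1], batch axis added).
    x = image.astype(np.float32) / 255.0
    x = np.expand_dims(x, axis=0)

    # Prediction: run the deep learning model on the preprocessed data.
    output_layer = model.predict(x)

    # Post-processing: decode the output layer data into a usable result
    # (remove the batch axis, keep the top-scoring class).
    scores = np.squeeze(output_layer, axis=0)
    best = int(np.argmax(scores))
    return {"label": labels[best], "score": float(scores[best])}
```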
In one embodiment, the interface layer packaging the first interface to obtain the second interface includes: acquiring an object of a first interface; and calling an interface calling function corresponding to the object according to a preset format, and performing interface layer packaging on the first interface and the interface calling function corresponding to the object to obtain a second interface.
Interface layer packaging refers to secondary packaging of the first interface obtained from the model layer packaging, exposing externally a function interface in the programming language of the interface layer. For example, the programming language of the interface layer may be C, in which case interface layer packaging wraps the first interface, i.e. the C++ class, into a function interface in C form. Because the C language has no concept of classes, the terminal can instantiate the first interface, that is, the C++ class, according to a preset format to obtain an object of the first interface. For example, the preset format may be a function form.
The terminal calls the interface calling function corresponding to the object according to the preset format; for example, an interface calling function may be a method of the object. The first interface and the interface calling function corresponding to the object are then packaged at the interface layer to obtain the second interface, so that the first interface is wrapped into a function interface in the programming language of the interface layer. Packaging the first interface at the interface layer converts its format, which facilitates subsequent calls from the application layer.
In one embodiment, the application layer encapsulating the second interface to obtain an encapsulated code template includes: acquiring a preset interface corresponding to an application layer and a mapping file; establishing a mapping relation between the data type of the second interface and the data type of the preset interface according to the mapping file; and acquiring an interface calling function of the second interface according to the mapping relation, and encapsulating the second interface and the interface calling function of the second interface to obtain an encapsulated code template.
After obtaining the second interface produced by the interface layer packaging, the terminal can perform application layer packaging on it. Specifically, the application layer has a set of interfaces substantially consistent with the interface layer: the function names are the same, but the data type names in the interfaces may differ. For example, a parameter that has the unsigned char type in the interface layer has the byte type in Java at the application layer.
The terminal acquires the preset interface corresponding to the application layer and a mapping file. The preset interface may be a high-level language interface of the application layer, for example a Java, C# or Python interface. The mapping file may record the mapping between the data types of the application layer's preset interface and the data types of the interface-layer interface. According to the mapping file, the terminal establishes the mapping between the data types of the second interface and the data types of the preset interface, and the application layer can then call the corresponding second interface through that mapping. The terminal packages the second interface for which the mapping has been established, yielding the packaging code template. The packaging code template may comprise a model layer packaging code template, an interface layer packaging code template and an application layer packaging code template.
In this embodiment, a mapping relationship is established between the data type of the second interface and the data type of the preset interface according to the mapping file, the second interface with the mapping relationship established is encapsulated, and the corresponding second interface is called according to the established mapping relationship through a high-level language of the application layer, so that subsequent development work is performed.
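When the application layer language is Python, the data-type mapping between the second interface and the application-layer interface can be illustrated with ctypes roughly as below; the library name, the exported function names and their signatures are assumptions about what the interface layer might export.

```python
import ctypes

# Load the interface-layer library (the name is an assumption).
lib = ctypes.CDLL("./libface_sdk.so")

# Data-type mapping: declare how application-layer types correspond to the
# C-style types of the second interface (the role played by the mapping file).
lib.sdk_create.restype = ctypes.c_void_p
lib.sdk_create.argtypes = [ctypes.c_char_p]                 # model file path
lib.sdk_run.restype = ctypes.c_int
lib.sdk_run.argtypes = [ctypes.c_void_p,                    # handle returned by sdk_create
                        ctypes.POINTER(ctypes.c_ubyte),     # unsigned char* image buffer
                        ctypes.c_int]                       # buffer length

def detect(handle, image_bytes: bytes) -> int:
    """Application-layer call: Python bytes are mapped onto the unsigned char*
    expected by the second interface."""
    buf = (ctypes.c_ubyte * len(image_bytes)).from_buffer_copy(image_bytes)
    return lib.sdk_run(handle, buf, len(image_bytes))

handle = lib.sdk_create(b"face_detector.params")            # hypothetical model file
```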
It should be understood that, although the steps in the flowcharts of fig. 2 and 4 are shown in the sequence indicated by the arrows, they are not necessarily performed in that sequence. Unless explicitly stated otherwise herein, these steps are not strictly limited to the order shown and may be performed in other orders. Moreover, at least some of the steps in fig. 2 and 4 may include multiple sub-steps or multiple stages, which are not necessarily performed at the same time but may be performed at different times, and are not necessarily performed in sequence but may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 5, there is provided a deep learning model-based code package generation system, including a terminal 502, a hosting server 504, and a construction server 506, wherein:

the terminal 502 is configured to obtain configuration parameter information and a model file of the deep learning model, where the configuration parameter information includes a configuration platform identifier; package the deep learning model in layers according to the model file to obtain a packaging code template, and input the configuration parameter information into the packaging code template to generate a packaging code; and send the packaging code to the hosting server;

the hosting server 504 is configured to trigger the construction server to call the construction script when the packaging code is received; and

the construction server 506 is configured to build the application program corresponding to the configuration platform identifier according to the construction script and the packaging code, and to package the application program into a code package.
In one embodiment, the terminal 502 is further configured to perform model layer encapsulation on the deep learning model according to the model file to obtain a first interface; performing interface layer packaging on the first interface to obtain a second interface; and performing application layer packaging on the second interface to obtain a packaging code template.
In one embodiment, the terminal 502 is further configured to obtain an operation interface and a model operation policy corresponding to the deep learning model according to the model file; and encapsulating the operation interface and the model operation strategy to obtain a first interface.
In one embodiment, the terminal 502 is further configured to obtain an object of the first interface; and calling an interface calling function corresponding to the object according to a preset format, and performing interface layer packaging on the first interface and the interface calling function corresponding to the object to obtain a second interface.
In an embodiment, the terminal 502 is further configured to obtain a preset interface corresponding to the application layer, and a mapping file; establishing a mapping relation between the data type of the second interface and the data type of the preset interface according to the mapping file; and packaging the second interface establishing the mapping relation to obtain a packaging code template.
In one embodiment, a computer device is provided, which may be a terminal, and its internal structure diagram may be as shown in fig. 6. The computer device includes a processor, a memory, a communication interface, a display screen, and an input device connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The communication interface of the computer device is used for carrying out wired or wireless communication with an external terminal, and the wireless communication can be realized through WIFI, an operator network, NFC (near field communication) or other technologies. The computer program is executed by a processor to implement a deep learning model-based code package generation method. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, a key, a track ball or a touch pad arranged on the shell of the computer equipment, an external keyboard, a touch pad or a mouse and the like.
Those skilled in the art will appreciate that the architecture shown in fig. 6 is merely a block diagram of some of the structures associated with the disclosed aspects and is not intended to limit the computing devices to which the disclosed aspects apply, as particular computing devices may include more or less components than those shown, or may combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided, comprising a memory storing a computer program and a processor implementing the steps of the various embodiments described above when the processor executes the computer program.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the respective embodiments described above.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing relevant hardware. The computer program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), Programmable ROM (PROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus Direct RAM (RDRAM), Direct Rambus Dynamic RAM (DRDRAM), and Rambus Dynamic RAM (RDRAM).
The technical features of the above embodiments can be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the above embodiments are not described, but should be considered as the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above-mentioned embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and improvements without departing from the concept of the present application, and these fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.
Claims (10)
1. A code packet generation method based on a deep learning model is characterized by comprising the following steps:
acquiring configuration parameter information and a model file of a deep learning model, wherein the configuration parameter information comprises a configuration platform identifier;
performing model layer packaging on the deep learning model according to the model file to obtain a first interface;
performing interface layer packaging on the first interface to obtain a second interface;
performing application layer packaging on the second interface to obtain a packaging code template;
inputting the configuration parameter information into the packaging code template to generate a packaging code;
and sending the packaging code to a hosting server so as to trigger a construction server to call a construction script when the hosting server receives the packaging code, constructing an application program corresponding to the configuration platform identifier according to the construction script and the packaging code through the construction server, and packaging the application program into a code package.
2. The method of claim 1, wherein model-level packaging the deep learning model according to the model file to obtain a first interface comprises:
acquiring an operation interface and a model operation strategy corresponding to the deep learning model according to the model file;
and packaging the operation interface and the model operation strategy to obtain a first interface.
3. The method of claim 1, wherein the interface layer encapsulating the first interface to obtain a second interface comprises:
acquiring an object of the first interface;
and calling an interface calling function corresponding to the object according to a preset format, and performing interface layer packaging on the first interface and the interface calling function corresponding to the object to obtain a second interface.
4. The method of claim 1, wherein the application-layer encapsulating the second interface to obtain an encapsulated code template comprises:
acquiring a preset interface corresponding to an application layer and a mapping file;
establishing a mapping relation between the data type of the second interface and the data type of the preset interface according to the mapping file;
and packaging the second interface establishing the mapping relation to obtain a packaging code template.
5. A deep learning model-based code package generation system, the system comprising:
the terminal is used for acquiring configuration parameter information and a model file of the deep learning model, wherein the configuration parameter information comprises a configuration platform identifier; performing model layer packaging on the deep learning model according to the model file to obtain a first interface; performing interface layer packaging on the first interface to obtain a second interface; performing application layer packaging on the second interface to obtain a packaging code template; inputting the configuration parameter information into the packaging code template to generate a packaging code; sending the encapsulated code to a hosting server;
the hosting server is used for triggering the construction server to call the construction script when the packaging code is received;
and the construction server is used for constructing the application program corresponding to the configuration platform identifier according to the construction script and the packaging code and packaging the application program into a code package.
6. The system according to claim 5, wherein the terminal is further configured to obtain an operation interface and a model operation policy corresponding to the deep learning model according to the model file; and packaging the operation interface and the model operation strategy to obtain a first interface.
7. The system of claim 5, wherein the terminal is further configured to obtain an object of the first interface; and calling an interface calling function corresponding to the object according to a preset format, and performing interface layer packaging on the first interface and the interface calling function corresponding to the object to obtain a second interface.
8. The system according to claim 5, wherein the terminal is further configured to obtain a preset interface corresponding to the application layer and a mapping file; establishing a mapping relation between the data type of the second interface and the data type of the preset interface according to the mapping file; and packaging the second interface establishing the mapping relation to obtain a packaging code template.
9. A computer device comprising a memory and a processor, the memory storing a computer program operable on the processor, wherein the processor implements the steps of the method of any one of claims 1 to 4 when executing the computer program.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 4.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010749289.2A CN111930419B (en) | 2020-07-30 | 2020-07-30 | Code packet generation method and system based on deep learning model |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010749289.2A CN111930419B (en) | 2020-07-30 | 2020-07-30 | Code packet generation method and system based on deep learning model |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111930419A CN111930419A (en) | 2020-11-13 |
CN111930419B true CN111930419B (en) | 2021-08-10 |
Family
ID=73315371
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010749289.2A Active CN111930419B (en) | 2020-07-30 | 2020-07-30 | Code packet generation method and system based on deep learning model |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111930419B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112925507A (en) * | 2021-02-22 | 2021-06-08 | 北京智通云联科技有限公司 | Visual identification method based on python deep learning algorithm |
CN113448545B (en) * | 2021-06-23 | 2023-08-08 | 北京百度网讯科技有限公司 | Method, apparatus, storage medium and program product for machine learning model servitization |
- 2020-07-30: Application CN202010749289.2A filed in China (patent CN111930419B, legal status: Active)
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103713896A (en) * | 2013-12-17 | 2014-04-09 | 北京京东尚科信息技术有限公司 | Software development kit generation method and device used for accessing server |
WO2015147656A2 (en) * | 2014-03-26 | 2015-10-01 | Auckland Uniservices Limited | Automatic process and system for software development kit for application programming interface |
CN107766052A (en) * | 2017-09-18 | 2018-03-06 | 网宿科技股份有限公司 | A kind of method and apparatus for building mirror image |
Also Published As
Publication number | Publication date |
---|---|
CN111930419A (en) | 2020-11-13 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||