WO2023087764A1 - Packaging method and apparatus for an algorithm application element, device, storage medium and computer program product - Google Patents

Packaging method and apparatus for an algorithm application element, device, storage medium and computer program product

Info

Publication number
WO2023087764A1
WO2023087764A1 (PCT/CN2022/107167)
Authority
WO
WIPO (PCT)
Prior art keywords
algorithm
target algorithm
target
information
application element
Prior art date
Application number
PCT/CN2022/107167
Other languages
English (en)
French (fr)
Inventor
屈秋竹
罗春能
陈宇恒
高俊杰
胡武林
舒杰
Original Assignee
上海商汤智能科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 上海商汤智能科技有限公司
Publication of WO2023087764A1 publication Critical patent/WO2023087764A1/zh

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 8/00 Arrangements for software engineering
    • G06F 8/60 Software deployment
    • G06F 8/65 Updates
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 8/00 Arrangements for software engineering
    • G06F 8/30 Creation or generation of source code
    • G06F 8/31 Programming languages or programming paradigms
    • G06F 8/315 Object-oriented languages
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 8/00 Arrangements for software engineering
    • G06F 8/70 Software maintenance or management
    • G06F 8/71 Version control; Configuration management
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/445 Program loading or initiating
    • G06F 9/44505 Configuring for program initiating, e.g. using registry, configuration files

Definitions

  • The embodiments of the present application relate to computer technology, and relate to, but are not limited to, a packaging method and apparatus for an algorithm application element, as well as a device, a storage medium and a computer program product.
  • AI: Artificial Intelligence.
  • The embodiments of the present disclosure provide a packaging method and apparatus for algorithm application elements, as well as a device, a storage medium and a computer program product.
  • an embodiment of the present disclosure provides a method for packaging algorithm application elements, the method including:
  • The algorithm warehouse is a system capable of running the target algorithm application element, and the target algorithm application element is decoupled from the algorithm warehouse.
  • The target algorithm can be packaged into an algorithm application element loosely coupled with the algorithm warehouse system, so that the target algorithm also gains the advantage of flexible deployment and iteration; the algorithm application element corresponding to the target algorithm can then be uploaded directly to the algorithm warehouse system and started without installation.
  • an embodiment of the present disclosure provides a device for packaging algorithm application elements, the device comprising:
  • an acquisition unit configured to acquire a target algorithm
  • a packaging unit configured to package the content corresponding to the target algorithm to obtain the application element of the target algorithm
  • The deployment unit is configured to deploy the target algorithm application element in a specific algorithm warehouse, so that the function corresponding to the target algorithm can be realized when the target algorithm application element is started in the algorithm warehouse; wherein the algorithm warehouse is a system capable of running the target algorithm application element, and the target algorithm application element is decoupled from the algorithm warehouse.
  • An embodiment of the present disclosure provides an electronic device, including a memory and a processor, the memory storing a computer program that can run on the processor, and the processor implementing the steps in the above method when executing the program.
  • an embodiment of the present disclosure provides a computer-readable storage medium, on which a computer program is stored, and when the computer program is executed by a processor, the steps in the above method are implemented.
  • an embodiment of the present disclosure provides a computer program product, the computer program product includes a computer program, and the computer program is operable to cause a computer to execute the steps in the method described in the first aspect above.
  • the computer program product may be a software installation package.
  • Embodiments of the present disclosure provide a packaging method, apparatus, device, storage medium, and computer program product for an algorithm application element: a target algorithm is obtained; the content corresponding to the target algorithm is packaged to obtain the target algorithm application element; and the target algorithm application element is deployed in an algorithm warehouse, so that the function corresponding to the target algorithm can be realized when the target algorithm application element is started in the algorithm warehouse. The algorithm warehouse is a system that can run the target algorithm application element, and the target algorithm application element is decoupled from the algorithm warehouse. In this way, the target algorithm can be packaged into an algorithm application element loosely coupled with the algorithm warehouse system, so that the target algorithm also has the advantage of flexible deployment and iteration, and the algorithm application element corresponding to the target algorithm can be uploaded directly to the algorithm warehouse system and started without installation.
  • FIG. 1 is a first schematic diagram of the implementation process of the packaging method of the algorithm application element in the embodiment of the present disclosure
  • FIG. 2 is a second schematic diagram of the implementation process of the packaging method of the algorithm application element in the embodiment of the present disclosure
  • FIG. 3 is a schematic diagram of the third implementation flow of the packaging method of the algorithm application element in the embodiment of the present disclosure
  • FIG. 4 is a schematic diagram 4 of the implementation flow of the packaging method of the algorithm application element in the embodiment of the present disclosure
  • FIG. 5 is a schematic diagram of the fifth implementation flow of the packaging method of the algorithm application element in the embodiment of the present disclosure.
  • FIG. 6 is a schematic diagram of the composition and structure of the packaging device of the algorithm application element in the embodiment of the present disclosure
  • FIG. 7 is a schematic diagram of a hardware entity of an electronic device according to an embodiment of the present disclosure.
  • The terms "first/second/third" in the embodiments of the present disclosure are only used to distinguish similar objects and do not represent a specific ordering of objects. Understandably, where permitted, "first/second/third" may be interchanged in a specific order or sequence, so that the embodiments of the disclosure described herein can be practiced in sequences other than those illustrated or described herein.
  • an embodiment of the present disclosure provides a method for packaging algorithm application elements.
  • the method is applied to an electronic device.
  • the functions implemented by the method can be implemented by calling the program code from the processor of the electronic device.
  • the program code can be stored in the storage medium of the electronic device.
  • Fig. 1 is a schematic diagram of the implementation flow of the packaging method of the algorithm application element in the embodiment of the present disclosure. As shown in Fig. 1, the method includes:
  • Step S101 obtaining the target algorithm
  • the electronic devices may be various types of devices with information processing capabilities, such as navigators, smart phones, tablet computers, wearable devices, laptop computers, all-in-one and desktop computers, server clusters, and the like.
  • the target algorithm may be any type of algorithm.
  • An algorithm is an accurate and complete description of a problem-solving scheme, i.e., a series of clear instructions for solving a problem; an algorithm represents a systematic method for describing the strategy and mechanism of solving a problem.
  • the target algorithm may be a related algorithm in the field of artificial intelligence, for example, an intelligent video analysis algorithm, a face recognition algorithm, a fingerprint recognition algorithm, a human body detection and tracking algorithm, etc. in the field of artificial intelligence.
  • the target algorithm may also be a related algorithm in other fields, and usually the target algorithm can realize a specific business function.
  • The iterative process of an algorithm refers to the complete process from the completion of development to the completion of testing, and then from the completion of testing to deployment and going online for users to use.
  • Step S102 packaging the content corresponding to the target algorithm to obtain the target algorithm application element
  • The target algorithm application element is the Applet corresponding to the target algorithm.
  • The Applet is a kind of algorithm application element, which can be regarded as an application package (that is, a small program taking a function as the unit); what is packaged inside are the algorithm's models, code, configuration items, etc., that is, a package of algorithm logic.
  • The content corresponding to the target algorithm may be packaged with a ZIP tool to obtain the target algorithm application element.
  • The ZIP tool may be, for example, the Linux zip command used directly, or an internal packaging tool.
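  • As a minimal sketch of this packaging step (the directory layout, file names, and function name below are illustrative assumptions, not specified by the disclosure), the content corresponding to the target algorithm can be archived into an application element with Python's zipfile module:

```python
import zipfile
from pathlib import Path

def package_applet(content_dir: str, output_path: str) -> str:
    """Package every file under content_dir into a ZIP archive
    representing the algorithm application element (Applet)."""
    content = Path(content_dir)
    with zipfile.ZipFile(output_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for path in sorted(content.rglob("*")):
            if path.is_file():
                # Store paths relative to the content root so the
                # archive unpacks with the same internal layout.
                zf.write(path, path.relative_to(content))
    return output_path
```

  • The same result could be obtained with the Linux zip command mentioned above; the sketch only shows the shape of the operation.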
  • Step S103, deploying the target algorithm application element in a specific algorithm warehouse, so that the function corresponding to the target algorithm can be realized when the target algorithm application element is started in the algorithm warehouse; wherein the algorithm warehouse is a system that can run the target algorithm application element, and the target algorithm application element is decoupled from the algorithm warehouse.
  • the packaged target algorithm application element can be deployed in a specific algorithm warehouse, and the algorithm application element management service in the algorithm warehouse will add signature information to the target algorithm application element, and the signed target algorithm application element can be distributed to user use.
  • The target algorithm application element can be started and run directly. When running, it first checks whether its signature is correct; if correct, it runs according to its packaged content, thereby realizing the function corresponding to the target algorithm.
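  • The disclosure does not specify the signature scheme. As an illustrative assumption only, the management service's signing step and the pre-run signature check could be sketched as an HMAC over the package bytes:

```python
import hashlib
import hmac

def sign_applet(package_bytes: bytes, key: bytes) -> bytes:
    """Management-service side: produce the signature information
    attached to the target algorithm application element."""
    return hmac.new(key, package_bytes, hashlib.sha256).digest()

def verify_applet(package_bytes: bytes, signature: bytes, key: bytes) -> bool:
    """Warehouse side: check the signature before running the applet."""
    expected = hmac.new(key, package_bytes, hashlib.sha256).digest()
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(expected, signature)
```

  • A real deployment might instead use asymmetric signatures so the warehouse holds no secret key; this sketch only shows the sign-then-verify flow.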
  • The specific algorithm warehouse is a general system capable of running any algorithm application element; that is, the algorithm warehouse is a specific algorithm system, and the algorithm warehouse is decoupled from any algorithm application element deployed in it.
  • In the conventional approach, the algorithm logic is packaged into the image, so the algorithm logic and the system are coupled together.
  • Docker is an open-source application container engine. Developers can package an application and its dependencies into a portable container and then publish it to any popular Linux machine. A container is derived from a Docker image; an image can be built by the user or generated by committing a running container. After an image is generated, it can be pushed to an image registry for storage, or pulled from the registry to the local machine to run a container.
  • A Docker image is a standardized package of an application and its operating environment. If the algorithm needs to be updated, the entire base service image needs to be updated; the algorithm cannot be released separately, resulting in low efficiency and slow iteration of the algorithm.
  • In contrast, the target algorithm is defined as an independent and portable algorithm application element, which requires no installation and is more flexible. That is to say, in the embodiment of the present disclosure, the target algorithm is packaged into an algorithm application element loosely coupled with the algorithm warehouse system, so that the target algorithm also has the advantage of flexible deployment and iteration, and the algorithm application element corresponding to the target algorithm can then be started directly in the algorithm warehouse system without installation.
  • this embodiment of the present disclosure further provides a method for packaging algorithm application elements, and the method is applied to electronic devices.
  • As shown in FIG. 2, the method includes:
  • Step S201 acquiring configuration information and script information corresponding to the target algorithm; wherein, the configuration information is used to configure operating parameters for the packaged target algorithm, and the script information is a script program corresponding to the target algorithm;
  • the packaged target algorithm is the target algorithm application element obtained after the target algorithm is packaged.
  • The configuration information is used to configure operating parameters for the target algorithm application element, and the operating parameters may be information describing how the target algorithm application element is composed, or information describing how the target algorithm application element operates.
  • the configuration information may include an entry file of the target algorithm application element, and the entry file describes the running entry of the target algorithm application element.
  • The configuration information may include the operating environment file of the target algorithm application element, and the operating environment file describes the operating software environment, the operating hardware environment (such as supported graphics card types), performance (such as how many channels are supported), compatibility information, etc.
  • the configuration information may include a template file, and the template file describes information about using the target algorithm application element to generate other similar algorithm application elements.
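  • The following sketch illustrates what such configuration information might look like; every field name here (entry, runtime, template, and so on) is a hypothetical example, not a format defined by the disclosure:

```python
import json

# Hypothetical configuration for a target algorithm application element.
# Field names are illustrative assumptions, not taken from the disclosure.
applet_config = {
    "entry": "script.lua",            # running entry of the applet
    "runtime": {
        "os": "linux",                # operating software environment
        "gpu": ["nv_p4", "nv_t4"],    # supported graphics card types
        "max_channels": 16,           # performance: channels supported
    },
    "template": {                     # info for generating similar applets
        "base": "object-detection",
    },
}

# Serialize the configuration so it can be packaged into the applet.
config_json = json.dumps(applet_config, indent=2)
```

  • The entry file, operating environment file, and template file could each be a separate file inside the package; a single JSON document is used here only to keep the sketch compact.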
  • the script information is a script program corresponding to the target algorithm, and a path of the script program may be specified in a file included in the script information.
  • the script program can be a Lua script, and the function of the script program is to execute the algorithm in the target algorithm application element.
  • Step S202 packaging the content corresponding to the configuration information and the script information to obtain the target algorithm application element
  • the configuration information and the script information are mandatory contents for packaging the target algorithm. That is to say, the target algorithm application element must include configuration information corresponding to the target algorithm and script information corresponding to the target algorithm.
  • the target algorithm can be packaged into the target algorithm application element based on the configuration information and script information.
  • Step S203, deploying the target algorithm application element in a specific algorithm warehouse, so that the function corresponding to the target algorithm can be realized when the target algorithm application element is started in the algorithm warehouse; wherein the algorithm warehouse is a system that can run the target algorithm application element, and the target algorithm application element is decoupled from the algorithm warehouse.
  • an embodiment of the present disclosure further provides a method for packaging algorithm application elements, the method is applied to electronic devices, and the method includes:
  • Step S211 obtaining configuration information, script information and model information corresponding to the target algorithm; wherein, the configuration information is used to configure operating parameters for the packaged target algorithm, and the script information is a script program corresponding to the target algorithm , the model information is an algorithm model corresponding to the target algorithm;
  • The model information is optional content when packaging the target algorithm. That is to say, the target algorithm application element may or may not include the model information corresponding to the target algorithm; whether to package the model information can be chosen as required.
  • the model information is the parameters of some algorithms, such as the algorithm model corresponding to the neural network algorithm.
  • The model information in the embodiments of the present disclosure may be organized by hardware, for example by CPU (Central Processing Unit) or graphics card. If there are many kinds of hardware, corresponding model information exists for each hardware platform. Therefore, different model information corresponding to different hardware platforms may be specified in the file included in the model information.
  • the model information in the embodiments of the present disclosure can also be organized according to different operation stages.
  • For example, the operation stage of the business to which the target algorithm belongs includes four stages: the first feature extraction stage, the face detection stage, the second feature extraction stage, and the face comparison stage. Correspondingly, the first stage corresponds to the algorithm model for extracting detection features, the second stage corresponds to the algorithm model for face detection, the third stage corresponds to the algorithm model for extracting comparison features, and the fourth stage corresponds to the algorithm model for face comparison. Therefore, different model information corresponding to different operation stages may be specified in the file included in the model information.
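  • The per-hardware and per-stage organization of model information described above can be sketched as a manifest; the platform names, stage names, and file paths below are assumptions for illustration only:

```python
# Illustrative model manifest for the face recognition business above:
# one model file per hardware platform and per operation stage.
model_manifest = {
    "nv_p4": {
        "detect_feature": "models/nv_p4/detect_feature.bin",
        "face_detect": "models/nv_p4/face_detect.bin",
        "compare_feature": "models/nv_p4/compare_feature.bin",
        "face_compare": "models/nv_p4/face_compare.bin",
    },
    "cpu": {
        "detect_feature": "models/cpu/detect_feature.bin",
        "face_detect": "models/cpu/face_detect.bin",
        "compare_feature": "models/cpu/compare_feature.bin",
        "face_compare": "models/cpu/face_compare.bin",
    },
}

def model_for(platform: str, stage: str) -> str:
    """Look up the model file for one hardware platform and one stage."""
    return model_manifest[platform][stage]
```

  • Organizing the manifest with the hardware platform as the outer key and the operation stage as the inner key lets the warehouse load exactly one model set for the hardware it runs on.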
  • Step S212 packaging the content corresponding to the model information, the configuration information and the script information to obtain the target algorithm application element;
  • In this way, the target algorithm can be packaged into the target algorithm application element, the packaging parameters can be flexibly configured, and different packaging options can be selected according to actual needs to realize the packaging of the target algorithm.
  • Step S213, deploying the target algorithm application element in a specific algorithm warehouse, so that the function corresponding to the target algorithm can be realized when the target algorithm application element is started in the algorithm warehouse; wherein the algorithm warehouse is a system that can run the target algorithm application element, and the target algorithm application element is decoupled from the algorithm warehouse.
  • an embodiment of the present disclosure further provides a method for packaging algorithm application elements, the method is applied to electronic devices, and the method includes:
  • Step S221 obtaining the task type corresponding to the target algorithm
  • the task type corresponding to the target algorithm may be a classification of functions corresponding to the task to which the target algorithm belongs. For example, if the function corresponding to the task to which the target algorithm belongs is face comparison alarm, then the task type corresponding to the target algorithm is object recognition.
  • Step S222 determine the operation pipeline corresponding to the target algorithm
  • The running pipeline (i.e., Pipeline) refers to the running phases of the task to which the target algorithm belongs, that is, several processes in a certain order included in the task to which the target algorithm belongs, or several execution steps in a certain order included in that task.
  • Since the algorithm is itself a pipeline, the algorithm also has its corresponding running pipeline. For example, the garbage detection and alarm algorithm first detects objects, then examines the attributes of the detected objects, and finally raises corresponding alarms based on the attributes of the objects.
  • Step S223, determining the operation pipeline as configuration information corresponding to the target algorithm; wherein, the configuration information is used to configure operating parameters for the packaged target algorithm;
  • determining the running pipeline as the configuration information corresponding to the target algorithm means that the configuration information in the target algorithm application element includes the running pipeline corresponding to the task to which the target algorithm belongs.
  • Step S224 determining the algorithm model corresponding to each operation stage in the operation pipeline
  • Since the algorithm is itself a pipeline, the algorithm has its corresponding operation pipeline, and the operation pipeline includes multiple operation stages; each operation stage corresponds to its own algorithm model.
  • For example, the operation pipeline corresponding to the garbage detection and alarm algorithm includes three operation stages: the first operation stage is the object detection stage, whose corresponding algorithm model is the detection algorithm model; the second operation stage is the object attribute recognition stage, whose corresponding algorithm model is the recognition algorithm model; the third operation stage is the object alarm stage, whose corresponding algorithm model is the alarm algorithm model.
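  • The three-stage running pipeline above can be sketched as an ordered list of stages, each bound to its algorithm model; the stage and model identifiers are illustrative assumptions:

```python
# Illustrative running pipeline for the garbage detection and alarm
# algorithm described above; stage and model names are assumptions.
pipeline = [
    {"stage": "object_detection", "model": "detect_model"},
    {"stage": "attribute_recognition", "model": "recognize_model"},
    {"stage": "alarm", "model": "alarm_model"},
]

def run_pipeline(frame, stage_impls):
    """Feed the input through each operation stage in pipeline order;
    stage_impls maps a stage name to a callable implementing it."""
    result = frame
    for step in pipeline:
        result = stage_impls[step["stage"]](result)
    return result
```

  • Because the pipeline is just ordered configuration data, it can be stored as part of the configuration information and interpreted at run time by the warehouse.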
  • Step S225 determining the algorithm model corresponding to each operation stage as the model information corresponding to the target algorithm; wherein, the model information is the algorithm model corresponding to the target algorithm;
  • Determining the algorithm model corresponding to each operation stage as the model information corresponding to the target algorithm means that the model information in the target algorithm application element includes the algorithm model corresponding to each operation stage in the operation pipeline of the target algorithm. For example, the model information corresponding to the aforementioned garbage detection and alarm algorithm includes a detection algorithm model, a recognition algorithm model, and an alarm algorithm model.
  • the content of the configuration information can be determined based on the operation pipeline of the service corresponding to the target algorithm, and the content of the model information can be determined according to the algorithm model corresponding to each operation stage in the operation pipeline.
  • Step S226, acquiring script information corresponding to the target algorithm; wherein, the script information is a script program corresponding to the target algorithm;
  • Step S227 packaging the content corresponding to the model information, the configuration information and the script information to obtain the target algorithm application element;
  • Step S228, deploying the target algorithm application element in a specific algorithm warehouse, so that the function corresponding to the target algorithm can be realized when the target algorithm application element is started in the algorithm warehouse; wherein the algorithm warehouse is a system that can run the target algorithm application element, and the target algorithm application element is decoupled from the algorithm warehouse.
  • an embodiment of the present disclosure further provides a method for packaging algorithm application elements, the method is applied to electronic devices, and the method includes:
  • Step S231 determining the running environment of the packaged target algorithm, and the performance information of the packaged target algorithm in the running environment;
  • the operating environment may include both software and hardware.
  • the software may include an operating system, such as a Windows operating system or a Linux operating system.
  • Hardware may include the configuration of electronic equipment, such as CPU, memory, graphics card, hard disk, etc.
  • Step S232 determining the operating environment and the performance information as configuration information corresponding to the target algorithm; wherein the configuration information is used to configure operating parameters for the packaged target algorithm;
  • For example, the target algorithm application element supports running N channels of data in a first operating environment, and supports running M channels of data in a second operating environment; N and M may be the same or different. The supported N channels and M channels of data constitute the performance information of the target algorithm application element under the different operating environments.
  • Step S233 If there are multiple operating environments, determine the model information corresponding to each of the operating environments in the multiple operating environments;
  • The model information corresponding to the target algorithm differs across operating environments, so the model information in the target algorithm application element includes the model information for multiple operating environments.
  • Step S234 determining the model information corresponding to each of the operating environments as the model information corresponding to the target algorithm; wherein the model information is the algorithm model corresponding to the target algorithm;
  • For example, the graphics card of model nv_p4 corresponds to the trt2_1 model and the trt5_1 model, and the graphics card of model nv_t4 corresponds to the trt2_2 model and the trt5_2 model.
  • In this way, the content of the configuration information can be determined based on the operating environment of the target algorithm application element and its performance in that environment, and the model content of the package can be determined according to the model information in each operating environment.
  • Step S235 acquiring script information corresponding to the target algorithm; wherein, the script information is a script program corresponding to the target algorithm;
  • Step S236 packaging the content corresponding to the model information, the configuration information and the script information to obtain the target algorithm application element;
  • the content of the model information corresponding to the target algorithm, the content of the configuration information corresponding to the target algorithm, and the content of the script information corresponding to the target algorithm are packaged to obtain the target algorithm application element.
  • Step S237, deploying the target algorithm application element in a specific algorithm warehouse, so that the function corresponding to the target algorithm can be realized when the target algorithm application element is started in the algorithm warehouse; wherein the algorithm warehouse is a system that can run the target algorithm application element, and the target algorithm application element is decoupled from the algorithm warehouse.
  • an embodiment of the present disclosure further provides a method for packaging algorithm application elements, the method is applied to electronic devices, and the method includes:
  • Step S241 obtaining the task type corresponding to the target algorithm
  • Step S242 according to the task type, determine the operation pipeline corresponding to the target algorithm
  • Step S243 determining the running environment of the packaged target algorithm, and the performance information of the packaged target algorithm in the running environment;
  • Step S244 when the packaged target algorithm is used as a template for other algorithm application elements, acquire template information corresponding to the running pipeline, the running environment and the performance information;
  • using the packaged target algorithm as a template for other algorithm application elements refers to using the target algorithm application element to generate other similar algorithm application elements.
  • In this way, the template information corresponding to the running pipeline of the target algorithm, the template information corresponding to the operating environment of the target algorithm application element, and the template information corresponding to the performance information of the target algorithm application element can all be used as packaged content of the target algorithm application element.
  • Step S245, determining the operation pipeline, the operating environment, the performance information and the template information as the configuration information corresponding to the target algorithm; wherein the configuration information is used to configure operating parameters for the packaged target algorithm;
  • In this way, template information can be obtained based on the operation pipeline of the service corresponding to the target algorithm, the operating environment of the target algorithm application element, and the performance of the target algorithm application element in that environment, and the template information can be determined as part of the configuration information, so that the target algorithm application element can be used to generate other similar algorithm application elements.
  • Step S246 determining the algorithm model corresponding to each operation stage in the operation pipeline
  • Step S247 If there are multiple operating environments, determine the model information corresponding to each of the operating environments in the multiple operating environments;
  • Step S248, determining the algorithm model corresponding to each operation stage and the model information corresponding to each operating environment as the model information corresponding to the target algorithm; wherein the model information is the algorithm model corresponding to the target algorithm;
  • The algorithm models corresponding to different operation stages and the model information corresponding to different operating environments are divided from different angles, so an algorithm model may belong both to a certain operating environment and to a certain operation stage.
  • Step S249 acquiring script information corresponding to the target algorithm; wherein, the script information is a script program corresponding to the target algorithm;
  • Step S250 packaging the content corresponding to the model information, the configuration information and the script information to obtain the target algorithm application element;
  • Step S251, deploying the target algorithm application element in a specific algorithm warehouse, so that the function corresponding to the target algorithm can be realized when the target algorithm application element is started in the algorithm warehouse; wherein the algorithm warehouse is a system that can run the target algorithm application element, and the target algorithm application element is decoupled from the algorithm warehouse.
  • This embodiment of the disclosure further provides a method for packaging algorithm application elements. The method is applied to electronic devices, and as shown in FIG. 3, the method includes:
  • Step S301: acquiring configuration information, script information, model information, and dynamic library information corresponding to the target algorithm; where the configuration information is used to configure operating parameters for the packaged target algorithm, the script information is the script program corresponding to the target algorithm, the model information is the algorithm model corresponding to the target algorithm, and the dynamic library information is used to be called by the code corresponding to the script information so that the packaged target algorithm is applicable to different platforms;
  • the dynamic library information may be a user dynamic link library
  • the user dynamic link library includes functions that can be called by script programs in the script information to complete certain tasks.
  • Step S302 packaging the content corresponding to the model information, the configuration information and the script information to obtain the target algorithm application element;
  • Step S303: deploying the target algorithm application element in a specific algorithm warehouse, so that the function corresponding to the target algorithm can be realized when the target algorithm application element is started in the algorithm warehouse; where the algorithm warehouse is a system capable of running the target algorithm application element, and the target algorithm application element is decoupled from the algorithm warehouse.
  • an embodiment of the present disclosure further provides a method for packaging algorithm application elements, the method is applied to electronic devices, and the method includes:
  • Step S311: acquiring configuration information, script information, and dynamic library information corresponding to the target algorithm; where the configuration information is used to configure operating parameters for the packaged target algorithm, the script information is the script program corresponding to the target algorithm, and the dynamic library information is used to be called by the code corresponding to the script information so that the packaged target algorithm is applicable to different platforms;
  • Step S312 packaging the content corresponding to the configuration information, the script information and the dynamic library information to obtain the target algorithm application element;
  • The target algorithm can thus be packaged into the target algorithm application element with flexibly configurable packaging parameters, and different packaging options can be selected according to actual needs to accomplish the packaging of the target algorithm.
  • Step S313: deploying the target algorithm application element in a specific algorithm warehouse, so that the function corresponding to the target algorithm can be realized when the target algorithm application element is started in the algorithm warehouse; where the algorithm warehouse is a system capable of running the target algorithm application element, and the target algorithm application element is decoupled from the algorithm warehouse.
  • an embodiment of the present disclosure further provides a method for packaging algorithm application elements, the method is applied to electronic devices, and the method includes:
  • Step S321: acquiring configuration information, script information, user documents, and format information corresponding to the target algorithm; where the configuration information is used to configure operating parameters for the packaged target algorithm, the script information is a script program corresponding to the target algorithm, the user documents are used to describe the usage information of the packaged target algorithm, and the format information is used to standardize the format of the input and output data of the packaged target algorithm;
  • the user document may include an icon of the target algorithm application unit, and user instructions for the target algorithm application unit.
  • The format information is used to standardize the format of the input data and the format of the output data of the target algorithm application element; that is, it defines the format of the input and output data and is used to verify them, with data being accepted only if it conforms to the specification. It serves as both a specification and a description.
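  • As an illustration of how such format information can both define and verify input/output data, here is a minimal validator; the schema keys (`type`, `required`) and the sample fields are invented for this sketch and are not taken from the disclosure.

```python
def validate(data: dict, schema: dict) -> list:
    """Return a list of violations; an empty list means the data conforms."""
    errors = []
    for field, spec in schema.items():
        if field not in data:
            if spec.get("required", True):
                errors.append(f"missing field: {field}")
            continue
        if not isinstance(data[field], spec["type"]):
            errors.append(f"bad type for {field}")
    return errors

# Hypothetical input format for a detection applet.
input_schema = {
    "image_url": {"type": str},
    "threshold": {"type": float, "required": False},
}

print(validate({"image_url": "rtsp://cam/1"}, input_schema))  # [] -> accepted
print(validate({"threshold": "high"}, input_schema))          # two violations
```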
  • Step S322 packaging the content corresponding to the configuration information, the script information, the user document and the format information to obtain the target algorithm application element;
  • The target algorithm can thus be packaged into the target algorithm application element with flexibly configurable packaging parameters, and different packaging options can be selected according to actual needs to accomplish the packaging of the target algorithm.
  • Step S323: deploying the target algorithm application element in a specific algorithm warehouse, so that the function corresponding to the target algorithm can be realized when the target algorithm application element is started in the algorithm warehouse; where the algorithm warehouse is a system capable of running the target algorithm application element, and the target algorithm application element is decoupled from the algorithm warehouse.
  • model information, dynamic library information, user documents and format information are all optional contents for packaging the target algorithm into the application element of the target algorithm.
  • Packaging parameters can be flexibly configured, and different packaging contents can be selected according to actual needs to accomplish the packaging of the target algorithm.
  • The present disclosure further provides a method for packaging algorithm application elements, which is applied to electronic devices.
  • As shown in FIG. 4, the method includes:
  • Step S401 obtaining the target algorithm
  • Step S402: packaging the content corresponding to the target algorithm through a dynamic language to obtain the target algorithm application element;
  • Because the target algorithm application element is written in a dynamic scripting language, it can be used directly after decompression; that is, after the target algorithm application element is deployed in the algorithm warehouse, it can be decompressed and run.
  • the algorithm warehouse system may be a cloud-native algorithm warehouse system.
  • the target algorithm can be packaged using a dynamic language.
  • the packaged algorithm application element and the image are independent of each other, so that the image can be released first, and then the algorithm can be updated to achieve flexible management of the algorithm.
  • Step S403: deploying the target algorithm application element in a specific algorithm warehouse, so that the function corresponding to the target algorithm can be realized when the target algorithm application element is started in the algorithm warehouse; where the algorithm warehouse is a system capable of running the target algorithm application element, the target algorithm application element is decoupled from the algorithm warehouse, and the target algorithm application element is independent of any image associated with the algorithm warehouse.
  • the algorithm logic is packaged into the image and cannot be separated. If there is an algorithm update in the future, the entire image needs to be updated.
  • the algorithm logic and the image are separated through this packaging method, and there is no need to distribute the package and the environment together in the form of an image.
  • the image and the algorithm are independent of each other. Algorithms can run on different images. In other words, the algorithm and the image are separated, so the image can be distributed first, and the algorithm can be updated later to achieve flexible algorithm management.
  • An algorithm application element authorization function is also provided; for example, a video-stream analysis task will limit the maximum number of channels it can process, where the maximum number of channels can be determined through purchase.
  • The present disclosure further provides a method for packaging algorithm application elements, which is applied to electronic devices. As shown in FIG. 5, the method includes:
  • Step S501 obtaining the target algorithm
  • Step S502 packaging the content corresponding to the target algorithm to obtain the target algorithm application element
  • Step S503: deploying the target algorithm application element in a specific algorithm warehouse, so that the function corresponding to the target algorithm can be realized when the target algorithm application element is started in the algorithm warehouse;
  • where the algorithm warehouse is a system capable of running the target algorithm application element, and the target algorithm application element is decoupled from the algorithm warehouse;
  • Step S504 receiving an update request; wherein, the content of the update request is a request to update the target algorithm application element from the first version to the second version;
  • the first version may be a historical version
  • the second version may be a current version
  • Step S505 in response to the update request, stop using the first version of the target algorithm application element in the algorithm warehouse;
  • Step S506 deploying the target algorithm application element of the second version in the algorithm warehouse.
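  • Steps S504 to S506 describe a stop-then-deploy swap. Below is a minimal sketch of that update flow; the `Warehouse` class and its method names are hypothetical, not the disclosed implementation.

```python
class Warehouse:
    """Toy model of an algorithm warehouse holding one applet version per name."""

    def __init__(self):
        self.active = {}  # applet name -> active version

    def deploy(self, name: str, version: str):
        self.active[name] = version

    def stop(self, name: str):
        self.active.pop(name, None)

    def update(self, name: str, new_version: str):
        # Respond to an update request: stop using the first (old) version,
        # then deploy the second (new) version.
        self.stop(name)
        self.deploy(name, new_version)

wh = Warehouse()
wh.deploy("garbage-detection", "2015")
wh.update("garbage-detection", "2016")
print(wh.active)  # {'garbage-detection': '2016'}
```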
  • For example, the target algorithm is a garbage detection algorithm.
  • The garbage detection algorithm application element deployed in the algorithm warehouse is the 2015 version.
  • The 2015 version of the garbage detection algorithm application element has shortcomings in garbage detection precision and recall.
  • To solve the above problems, developers updated the code of the 2015 version of the garbage detection algorithm and obtained the 2016 version of the garbage detection algorithm.
  • The 2016 version of the garbage detection algorithm has improved precision and recall.
  • The content corresponding to the 2016 version of the garbage detection algorithm can be packaged to obtain the 2016 version of the garbage detection algorithm application element; the 2015 version of the application element is then disabled, and the 2016 version is uploaded and activated.
  • The new version of the algorithm can be packaged into an algorithm application element and then deployed to the algorithm warehouse system and enabled, realizing iterative updating of the algorithm and thereby addressing the prior-art problem of slow, costly algorithm iteration.
  • embodiments of the present disclosure provide an algorithm warehouse mode.
  • AI algorithms are packaged, through dynamic languages, into algorithm application elements (i.e., Applets) decoupled from the runtime system.
  • The embodiments of the present disclosure also provide a cloud-native, highly available, and easily scalable AI algorithm application element operation management system (i.e., the algorithm warehouse system).
  • When the algorithm application element management service is running in the algorithm warehouse system, it implements a general mechanism that can run any algorithm application element.
  • The embodiments of the present disclosure mainly provide a method for packaging algorithm application elements in the algorithm warehouse system, which realizes the packaging function of artificial intelligence algorithm application elements based on configuration information, model information, script information, dynamic library information, user documents, and format information.
  • the configuration information is a configuration file of an algorithm application element, for example, including a service operation pipeline (Pipeline) and the like.
  • The configuration file may include the following parts: the entry file of the algorithm application element, the template file of the algorithm application element, and the operating environment and performance information of the algorithm application element (such as the content defined by AlgoAppSpec, which is unrelated to the runtime and serves as render defaults), etc.
  • the model information is the model content corresponding to the algorithm.
  • For example, the graphics card whose model is nv_p4 corresponds to the trt2_1 model and the trt5_1 model,
  • and the graphics card whose model is nv_t4 corresponds to the trt2_2 model and the trt5_2 model.
  • model information may include model information under different hardware and model information under different running periods.
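  • The hardware-to-model mapping described above can be held as a simple lookup table that the runtime consults at start-up. The sketch below reuses the nv_p4/nv_t4 example; the underscored model names and the `models_for` helper are assumptions made for illustration.

```python
# Model information keyed by hardware model, as in the nv_p4 / nv_t4 example.
MODELS_BY_GPU = {
    "nv_p4": ["trt2_1", "trt5_1"],
    "nv_t4": ["trt2_2", "trt5_2"],
}

def models_for(gpu: str) -> list:
    """Pick the model set matching the detected graphics card."""
    try:
        return MODELS_BY_GPU[gpu]
    except KeyError:
        raise ValueError(f"no packaged models for hardware: {gpu}")

print(models_for("nv_t4"))  # ['trt2_2', 'trt5_2']
```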
  • The script information is a script program corresponding to an algorithm, and different algorithms correspond to different implementation code, for example, the Lua script corresponding to the algorithm.
  • the content of the dynamic library information can be called by the code corresponding to the script information, so that the algorithm application element can be applied to different platforms.
  • The dynamic library information may be a dynamic link library corresponding to the algorithm, for example, a Go standard library. With it, the script does not need to consider the underlying software platform, such as the NVIDIA platform or the Huawei Ascend platform.
  • the user document records the information describing the application element of the algorithm.
  • the user document may include a description document of the algorithm application element, and may also include an icon of the algorithm application element.
  • the format information defines the input and output format of the algorithm application element, which defines the field format and content, and is used to verify the input and output data.
  • The algorithm warehouse system provided in the embodiments of the present disclosure is based on a cloud-native architecture and is implemented by extending kubernetes custom interfaces; it can make use of a distributed cloud platform, scale automatically, and is friendly to operation, maintenance, and deployment.
  • the algorithm warehouse system in the embodiment of the present disclosure defines the intelligent algorithm as an independent and portable algorithm application unit, which does not need to be installed and is more flexible.
  • The algorithm warehouse system can manage the operation life cycle of algorithm application elements; management is based on kubernetes state synchronization, maintains a simple internal state machine, and management operations can be performed asynchronously; packaging parameters can be flexibly selected and confirmed when algorithm application elements are packaged.
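  • As a sketch of the "simple internal state machine" mentioned here, the following encodes a handful of lifecycle states and allowed transitions; the state names are invented for illustration, since the disclosure does not enumerate them.

```python
# Hypothetical applet lifecycle states and allowed transitions.
TRANSITIONS = {
    "uploaded": {"deployed"},
    "deployed": {"running", "disabled"},
    "running": {"disabled"},
    "disabled": {"deployed"},  # e.g. re-enable after an update
}

def step(state: str, target: str) -> str:
    """Advance the state machine, rejecting transitions that are not allowed."""
    if target not in TRANSITIONS.get(state, set()):
        raise ValueError(f"illegal transition {state} -> {target}")
    return target

s = "uploaded"
for t in ("deployed", "running", "disabled"):
    s = step(s, t)
print(s)  # disabled
```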
  • The business layer can issue processing tasks or pictures for algorithm analysis through the interface (that is, the system interface, which can run various algorithm application elements). If the algorithm needs to be iterated in the future, the old version of the algorithm application element is stopped, and the new version is uploaded and started. For example, a historical version of the garbage detection algorithm had shortcomings in garbage detection precision and recall.
  • The algorithm application element corresponding to the old version of the garbage detection algorithm is disabled, and the algorithm application element corresponding to the new version of the garbage detection algorithm is uploaded.
  • The application element of the new version of the garbage detection algorithm may be the content obtained after packaging the new version of the garbage detection algorithm using configuration information, model information, scripts, dynamic libraries, user documents, and format specifications.
  • The following technical effects can be achieved: 1) based on the decoupled system, individual packaging of a single algorithm application element is realized; 2) the intelligent video analysis algorithm is defined as an algorithm application element loosely coupled with the system, so the intelligent algorithm also gains the advantages of flexible deployment and iteration; the defined algorithm application element requires no installation, and its packaging parameters can be flexibly configured; 3) the system that manages the released algorithm application elements can manage their operation life cycle, synchronizes state based on kubernetes, maintains a simple internal state machine, and performs management operations asynchronously.
  • The present disclosure provides an apparatus for packaging algorithm application elements, which includes each unit it comprises, each subunit and module included in each unit, and each component included in each module.
  • These can be realized by a processor in an electronic device, or of course by specific logic circuits; in implementation, the processor can be a CPU (Central Processing Unit), an MPU (Microprocessor Unit), a DSP (Digital Signal Processor), an FPGA (Field Programmable Gate Array), or the like.
  • FIG. 6 is a schematic diagram of the composition and structure of the packaging device of the algorithm application element in the embodiment of the present disclosure. As shown in FIG. 6, the device 600 includes:
  • An acquisition unit 601 configured to acquire a target algorithm
  • the packaging unit 602 is configured to package the content corresponding to the target algorithm to obtain the application element of the target algorithm;
  • The deployment unit 603 is configured to deploy the target algorithm application element in a specific algorithm warehouse, so that the function corresponding to the target algorithm can be realized when the target algorithm application element is started in the algorithm warehouse; where the algorithm warehouse is a system capable of running the target algorithm application element, and the target algorithm application element is decoupled from the algorithm warehouse.
  • The acquisition unit 601 includes: a first acquisition module configured to acquire configuration information and script information corresponding to the target algorithm, where the configuration information is used to configure operating parameters for the target algorithm application element and the script information is a script program corresponding to the target algorithm. The packaging unit 602 includes: a first packaging module configured to package the content corresponding to the configuration information and the script information to obtain the target algorithm application element.
  • The acquisition unit 601 further includes: a second acquisition module configured to acquire model information corresponding to the target algorithm, where the model information is an algorithm model corresponding to the target algorithm. The packaging unit 602 includes: a second packaging module configured to package the content corresponding to the model information, the configuration information, and the script information to obtain the target algorithm application element.
  • The first acquisition module includes: a task acquisition component configured to acquire a task type corresponding to the target algorithm; and a pipeline determination component configured to determine, according to the task type, an operation pipeline corresponding to the target algorithm;
  • a first configuration determination component configured to determine the operation pipeline as configuration information corresponding to the target algorithm;
  • The second acquisition module includes: an algorithm model determination component configured to determine the algorithm model corresponding to each operation stage in the operation pipeline; and a first model determination component configured to determine the algorithm model corresponding to each operation stage as the model information corresponding to the target algorithm.
  • The first acquisition module further includes: a specification determination component configured to determine the operating environment of the target algorithm application element and the performance information of the target algorithm application element in the operating environment;
  • a second configuration determination component configured to determine the operating environment and the performance information as configuration information corresponding to the target algorithm. If there are multiple operating environments, the second acquisition module further includes:
  • an environment model determination component configured to determine the model information corresponding to each of the multiple operating environments; and a second model determination component configured to determine the model information corresponding to each operating environment as the model information corresponding to the target algorithm.
  • When the target algorithm application element is used as a template for other algorithm application elements, the first acquisition module further includes: a template acquisition component configured to acquire template information corresponding to the operation pipeline, the operating environment, and the performance information; and a third configuration determination component configured to determine the template information as configuration information corresponding to the target algorithm.
  • The acquisition unit 601 further includes: a third acquisition module configured to acquire dynamic library information corresponding to the target algorithm, where the dynamic library information is used to be called by the code corresponding to the script information so that the target algorithm application element is applicable to different platforms;
  • The packaging unit 602 includes: a third packaging module configured to package the content corresponding to the configuration information, the script information, and the dynamic library information to obtain the target algorithm application element.
  • The acquisition unit 601 further includes: a fourth acquisition module configured to acquire user documents and format information corresponding to the target algorithm, where the user documents are used to describe the usage information of the target algorithm application element and the format information is used to standardize the format of the input and output data of the target algorithm application element. The packaging unit 602 includes: a fourth packaging module configured to package the content corresponding to the configuration information, the script information, the user documents, and the format information to obtain the target algorithm application element.
  • The packaging unit 602 includes: a packaging subunit configured to package the content corresponding to the target algorithm through a dynamic language to obtain the target algorithm application element, where the target algorithm application element and any image associated with the algorithm warehouse are independent of each other.
  • The device further includes: a request receiving unit configured to receive an update request, where the content of the update request is a request to update the target algorithm application element from a first version to a second version; a request response unit configured to, in response to the update request, stop using the first version of the target algorithm application element in the algorithm warehouse; and an update unit configured to deploy the second version of the target algorithm application element in the algorithm warehouse.
  • If the above method for packaging algorithm application elements is implemented in the form of software function modules and sold or used as an independent product, it can also be stored in a computer-readable storage medium.
  • Such a software product is stored in a storage medium and includes several instructions for causing an electronic device (which may be a personal computer, a server, etc.) to execute all or part of the methods described in the various embodiments of the present disclosure.
  • The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a ROM (Read-Only Memory), a magnetic disk, or an optical disk.
  • embodiments of the present disclosure are not limited to any specific combination of hardware and software.
  • An embodiment of the present disclosure provides an electronic device, including a memory and a processor, where the memory stores a computer program that can run on the processor, and the processor, when executing the program, implements the steps of the method for packaging algorithm application elements provided in the above embodiments.
  • an embodiment of the present disclosure provides a readable storage medium on which a computer program is stored, and when the computer program is executed by a processor, the steps in the above method for packaging algorithm application elements are implemented.
  • an embodiment of the present disclosure provides a computer program product or computer program, where the computer program product or computer program includes computer instructions, and the computer instructions are stored in a computer-readable storage medium.
  • the processor of the electronic device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the electronic device executes the method for packaging algorithm application elements described above in the embodiments of the present disclosure.
  • the computer program product provided by the embodiments of the present application includes a computer-readable storage medium storing program codes, and the instructions included in the program codes can be used to execute the method for packaging algorithm application elements described in the above method embodiments. step.
  • FIG. 7 is a schematic diagram of a hardware entity of an electronic device according to an embodiment of the present disclosure.
  • The hardware entity of the electronic device 700 includes: a processor 701, a communication interface 702, a memory 703, and a bus 704, where
  • the processor 701 generally controls the overall operation of the electronic device 700 .
  • the communication interface 702 can enable the electronic device 700 to communicate with other servers or electronic devices or platforms through a network.
  • The memory 703 is configured to store instructions and applications executable by the processor 701, and can also cache data to be processed or already processed by the processor 701 and by the various modules in the electronic device 700 (for example, image data, audio data, voice communication data, and video communication data); it can be realized by FLASH (flash memory) or RAM (Random Access Memory);
  • bus 704 is used to realize connection and communication between these hardware entities.
  • the disclosed devices and methods may be implemented in other ways.
  • the device embodiments described above are only illustrative.
  • The division of the units is only a logical function division, and there may be other division manners in actual implementation.
  • The coupling, direct coupling, or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection between devices or units may be electrical, mechanical, or in other forms.
  • The units described above as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • Each functional unit in each embodiment of the present disclosure can be fully integrated into one processing module, or each unit can serve as a single unit, or two or more units can be integrated into one unit; the above-mentioned integrated unit can be realized in the form of hardware or in the form of hardware plus software functional units.
  • Those of ordinary skill in the art can understand that all or part of the steps to realize the above method embodiments can be completed by hardware related to program instructions, and the aforementioned program can be stored in a computer-readable storage medium. When the program is executed, the It includes the steps of the above method embodiments; and the aforementioned storage medium includes: various media that can store program codes such as removable storage devices, ROM, RAM, magnetic disks or optical disks.


Abstract

Embodiments of the present disclosure disclose a method and apparatus for packaging algorithm application elements, a device, a storage medium, and a computer program product. The method includes: acquiring a target algorithm; packaging the content corresponding to the target algorithm to obtain a target algorithm application element; and deploying the target algorithm application element in a specific algorithm warehouse, so that the function corresponding to the target algorithm can be realized when the target algorithm application element is started in the algorithm warehouse, where the algorithm warehouse is a system capable of running the target algorithm application element and the target algorithm application element is decoupled from the algorithm warehouse.

Description

Method and apparatus for packaging algorithm application elements, device, storage medium, and computer program product
Cross-reference to related applications
This application is filed on the basis of Chinese patent application No. 202111364206.9, filed on November 17, 2021 and entitled "Method and apparatus for packaging algorithm application elements, device, and storage medium", and claims priority to that Chinese patent application, the entire contents of which are incorporated herein by reference.
Technical field
Embodiments of the present application relate to computer technology, and relate to, but are not limited to, a method and apparatus for packaging algorithm application elements, a device, a storage medium, and a computer program product.
Background
At present, with the development of the artificial intelligence field, more and more AI (Artificial Intelligence) algorithms are developed and applied to all walks of life. Algorithms need to be iterated continuously during application to improve their precision and performance.
However, at present most algorithms have long iteration cycles, and the development, testing, and deployment of an algorithm involve complex processes, high costs, and high professional requirements. Therefore, how to solve the above problems has become a research focus of those skilled in the art.
Summary of the invention
In view of this, embodiments of the present disclosure provide a method and apparatus for packaging algorithm application elements, a device, a storage medium, and a computer program product.
The technical solutions of the embodiments of the present disclosure are implemented as follows:
In a first aspect, an embodiment of the present disclosure provides a method for packaging algorithm application elements, the method including:
acquiring a target algorithm;
packaging the content corresponding to the target algorithm to obtain a target algorithm application element;
deploying the target algorithm application element in a specific algorithm warehouse, so that the function corresponding to the target algorithm can be realized when the target algorithm application element is started in the algorithm warehouse; where the algorithm warehouse is a system capable of running the target algorithm application element, and the target algorithm application element is decoupled from the algorithm warehouse.
In this way, the target algorithm can be packaged into an algorithm application element loosely coupled with the algorithm warehouse system, giving the target algorithm the advantage of flexible deployment and iteration; the algorithm application element corresponding to the target algorithm can then be uploaded directly to the algorithm warehouse system and started without installation.
In a second aspect, an embodiment of the present disclosure provides an apparatus for packaging algorithm application elements, the apparatus including:
an acquisition unit configured to acquire a target algorithm;
a packaging unit configured to package the content corresponding to the target algorithm to obtain a target algorithm application element;
a deployment unit configured to deploy the target algorithm application element in a specific algorithm warehouse, so that the function corresponding to the target algorithm can be realized when the target algorithm application element is started in the algorithm warehouse; where the algorithm warehouse is a system capable of running the target algorithm application element, and the target algorithm application element is decoupled from the algorithm warehouse.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including a memory and a processor, where the memory stores a computer program executable on the processor, and the processor implements the steps of the above method when executing the program.
In a fourth aspect, an embodiment of the present disclosure provides a computer-readable storage medium on which a computer program is stored, and the computer program, when executed by a processor, implements the steps of the above method.
In a fifth aspect, an embodiment of the present disclosure provides a computer program product, the computer program product including a computer program operable to cause a computer to execute the steps of the method of the first aspect. The computer program product may be a software installation package.
Embodiments of the present disclosure provide a method and apparatus for packaging algorithm application elements, a device, a storage medium, and a computer program product: a target algorithm is acquired; the content corresponding to the target algorithm is packaged to obtain a target algorithm application element; the target algorithm application element is deployed in a specific algorithm warehouse, so that the function corresponding to the target algorithm can be realized when the target algorithm application element is started in the algorithm warehouse, where the algorithm warehouse is a system capable of running the target algorithm application element and the target algorithm application element is decoupled from the algorithm warehouse. In this way, the target algorithm can be packaged into an algorithm application element loosely coupled with the algorithm warehouse system, giving the target algorithm the advantage of flexible deployment and iteration, so that the algorithm application element corresponding to the target algorithm can be uploaded directly to the algorithm warehouse system and started without installation.
Brief description of the drawings
FIG. 1 is a first schematic flowchart of the implementation of a method for packaging algorithm application elements according to an embodiment of the present disclosure;
FIG. 2 is a second schematic flowchart of the implementation of a method for packaging algorithm application elements according to an embodiment of the present disclosure;
FIG. 3 is a third schematic flowchart of the implementation of a method for packaging algorithm application elements according to an embodiment of the present disclosure;
FIG. 4 is a fourth schematic flowchart of the implementation of a method for packaging algorithm application elements according to an embodiment of the present disclosure;
FIG. 5 is a fifth schematic flowchart of the implementation of a method for packaging algorithm application elements according to an embodiment of the present disclosure;
FIG. 6 is a schematic diagram of the composition and structure of an apparatus for packaging algorithm application elements according to an embodiment of the present disclosure;
FIG. 7 is a schematic diagram of a hardware entity of an electronic device according to an embodiment of the present disclosure.
Detailed description
The technical solutions of the present disclosure are further elaborated below with reference to the drawings and embodiments. Obviously, the described embodiments are only some, not all, of the embodiments of the present disclosure. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present disclosure without creative effort fall within the protection scope of the present disclosure.
In the following description, reference to "some embodiments" describes a subset of all possible embodiments. It can be understood that "some embodiments" may be the same subset or different subsets of all possible embodiments, and may be combined with each other without conflict.
In the following description, suffixes such as "module", "component", or "unit" used to denote elements are merely intended to facilitate the description of the present disclosure and have no specific meaning in themselves. Therefore, "module", "component", and "unit" may be used interchangeably.
It should be noted that the terms "first/second/third" in the embodiments of the present disclosure merely distinguish similar objects and do not denote a particular ordering of objects. It can be understood that, where permitted, "first/second/third" may be interchanged in a specific order or sequence, so that the embodiments of the present disclosure described herein can be implemented in an order other than that illustrated or described herein.
基于此,本公开实施例提供一种算法应用元的打包方法,所述方法应用于电子设备,该方法所实现的功能可以通过所述电子设备的处理器调用程序代码来实现,当然程序代码可以保存在所述电子设备的存储介质中。图1为本公开实施例算法应用元的打包方法的实现流程示意图一,如图1所示,所述方法包括:
步骤S101、获取目标算法;
这里,所述电子设备可以为各种类型的具有信息处理能力的设备,例如导航仪、智能手机、平板电脑、可穿戴设备、膝上型便携计算机、一体机和台式计算机、服务器集群等。
需要说明的是,本公开实施例中对所述目标算法的类型并不做限制,所述目标算法可以为任一类型的算法。这里,所述算法是指解题方案的准确而完整的描述,是一系列解决问题的清晰指令,算法代表着用系统的方法描述解决问题的策略机制。所述目标算法可以为人工智能领域的相关算法,例如,人工智能领域的智能视频分析算法、人脸识别算法、指纹识别算法、人体检测跟踪算法等。当然,所述目标算法也可以为其他领域的相关算法,并且通常情况下所述目标算法能够实现特定的业务功能。
在实际使用时,算法在应用中是需要不断迭代的,以提升算法的精度和算法的性能。这个迭代过程,指的就是算法从开发完成到测试完成、再从测试完成到部署上线供用户使用这一整套的流程。
步骤S102、将所述目标算法对应的内容进行打包,得到目标算法应用元;
这里,所述目标算法应用元即所述目标算法对应的Applet,所述Applet是一种算法应用元,可以看作应用程序包(即以功能为单位的一个小程序),里面打包的是算法的模型、代码、配置项等内容,即一个算法逻辑包。
本公开实施例中,可以将所述目标算法对应的内容通过ZIP工具打包,得到目标算法应用元。例如,直接通过Linux Zip命令打包,或者通过内部打包工具进行打包。
步骤S103、在特定的算法仓中部署所述目标算法应用元,使得在所述算法仓中启动所述目标算法应用元的情况下能够实现所述目标算法对应的功能;其中,所述算法仓为可以运行所述目标算法应用元的系统,且所述目标算法应用元与所述算法仓解耦。
这里,可以在特定的算法仓中部署打包得到的目标算法应用元,算法仓中的算法应用元管理服务会在该目标算法应用元中添加签名信息,经过签名的目标算法应用元就可以分发给用户使用。该目标算法应用元可以被直接启动运行,在运行时会首先校验其签名是否正确,如果正确则按其打包内容运行,以实现所述目标算法对应的功能。
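上述“添加签名信息、运行前先校验签名”的流程可以粗略示意如下(此处假设采用 HMAC-SHA256 作为签名方式且签名附在包尾,密钥与具体签名算法均为假设,并非本方案限定的实现):

```python
import hashlib
import hmac

SECRET_KEY = b"algo-warehouse-demo-key"  # 假设的管理服务签名密钥,仅为示意

def sign_applet(package: bytes) -> bytes:
    """算法应用元管理服务为打包内容附加签名信息(签名附在包尾)。"""
    return package + hmac.new(SECRET_KEY, package, hashlib.sha256).digest()

def verify_and_load(signed: bytes) -> bytes:
    """启动运行时首先校验签名是否正确,校验通过才返回可运行的打包内容。"""
    package, sig = signed[:-32], signed[-32:]
    expected = hmac.new(SECRET_KEY, package, hashlib.sha256).digest()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("签名校验失败,拒绝运行")
    return package

signed = sign_applet(b"applet-bytes")  # b"applet-bytes" 代指打包后的应用元内容
```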
本公开实施例中,所述特定的算法仓为一种通用的能够运行任何算法应用元的系统,即所述算法仓为一个特定的算法系统,且所述算法仓与部署在所述算法仓中的任一算法应用元解耦。现有技术中,是将算法逻辑打包到镜像中,因此算法逻辑与系统是耦合在一起的。例如,Docker是一个开源的应用容器引擎,开发者可以打包应用以及依赖包到一个可移植的容器中,然后发布到任何流行的Linux机器上,容器来源于Docker镜像,而镜像可以由用户自制或由运行中的容器提交来生成,镜像生成后可以推送到镜像仓库中进行保存,也可以从镜像仓库拉取到本地以运行容器。也就是说,Docker镜像是一种对应用程序及其运行环境的标准化封装,这样如果需要更新算法,则需要更新整个底座服务镜像,无法单独发布算法,导致算法上线效率低,迭代慢。
而相比于上述现有技术中的方法,本公开实施例中是将目标算法定义为独立便携的算法应用元,无需安装,更加灵活。也就是说,本公开实施例中将目标算法打包成与算法仓系统松耦合的算法应用元,使得目标算法也具备了灵活的部署迭代优势,进而能够在所述算法仓系统中直接部署所述目标算法对应的算法应用元并启动运行,无需安装。
基于前述的实施例,本公开实施例再提供一种算法应用元的打包方法,所述方法应用于电子设备,图2为本公开实施例算法应用元的打包方法的实现流程示意图二,如图2所示,所述方法包括:
步骤S201、获取目标算法对应的配置信息和脚本信息;其中,所述配置信息用于为打包后的所述目标算法配置运行参数,所述脚本信息为所述目标算法对应的脚本程序;
本公开实施例中,打包后的所述目标算法,即对所述目标算法进行打包后得到的目标算法应用元。
这里,配置信息用于为目标算法应用元配置运行参数,所述运行参数可以是描述该目标算法应用元是如何组成的信息,也可以是描述该目标算法应用元是如何运行的信息。例如,所述配置信息可以包括该目标算法应用元的入口文件,所述入口文件说明了该目标算法应用元的运行入口。又如,所述配置信息可以包括该目标算法应用元的运行环境文件,所述运行环境文件描述了该目标算法应用元的运行软件环境、运行硬件环境(如所支持的显卡类型)、性能(如支持跑多少路)、兼容信息等。又如,所述配置信息可以包括模板文件,所述模板文件说明了使用该目标算法应用元来生成其他类似的算法应用元的相关信息。
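基于上面的描述,配置信息大致可以整理成如下结构(字段名与取值均为示意,并非实际规范):

```python
# 一个算法应用元的配置信息示意(字段名均为假设)
applet_config = {
    "entry": "scripts/main.lua",    # 入口文件:说明该应用元的运行入口
    "runtime": {                    # 运行环境文件所描述的内容
        "os": "linux",              # 运行软件环境
        "gpu": ["nv_p4", "nv_t4"],  # 运行硬件环境:所支持的显卡类型
        "max_streams": 16,          # 性能:支持跑多少路
    },
    "template": {                   # 模板文件:用于生成其他类似应用元的相关信息
        "base": "face-compare",
    },
}
```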
这里,所述脚本信息为目标算法对应的脚本程序,所述脚本信息包括的文件中可以指定所述脚本程序的路径。通常情况下,所述脚本程序可以为Lua脚本,脚本程序的作用就是执行该目标算法应用元中的算法。
步骤S202、将所述配置信息和所述脚本信息对应的内容进行打包,得到目标算法应用元;
本公开实施例中,所述配置信息和所述脚本信息为对所述目标算法进行打包的必选内容。也就是说,所述目标算法应用元中必须包括所述目标算法对应的配置信息和所述目标算法对应的脚本信息。
通过上述方式,能够基于配置信息和脚本信息,将目标算法打包成目标算法应用元。
步骤S203、在特定的算法仓中部署所述目标算法应用元,使得在所述算法仓中启动所述目标算法应用元的情况下能够实现所述目标算法对应的功能;其中,所述算法仓为可以运行所述目标算法应用元的系统,且所述目标算法应用元与所述算法仓解耦。
基于前述的实施例,本公开实施例再提供一种算法应用元的打包方法,所述方法应用于电子设备,所述方法包括:
步骤S211、获取目标算法对应的配置信息、脚本信息和模型信息;其中,所述配置信息用于为打包后的所述目标算法配置运行参数,所述脚本信息为所述目标算法对应的脚本程序,所述模型信息为所述目标算法对应的算法模型;
本公开实施例中,所述模型信息为对所述目标算法进行打包的可选内容。也就是说,所述目标算法应用元中可以包括所述目标算法对应的模型信息,所述目标算法应用元中也可以不包括所述目标算法对应的模型信息,本领域技术人员可以根据实际使用需求选择是否将所述模型信息进行打包。
这里,模型信息就是一些算法的参数,例如神经网络算法对应的算法模型。本公开实施例中的模型信息可以按不同的硬件来组织,比如按CPU(Central Processing Unit,中央处理器)/显卡来组织。如果存在很多种硬件,则每一硬件平台上都存在对应的模型信息。因此,所述模型信息包括的文件中可以指定不同硬件平台下对应的不同模型信息。本公开实施例中的模型信息还可以按不同的运行阶段来组织,例如所述目标算法为人脸比对算法,则所述目标算法所属业务的运行阶段包括四个阶段:第一特征提取阶段、人脸检测阶段、第二特征提取阶段、人脸比对阶段,对应地,第一阶段对应提取检测特征的算法模型,第二阶段对应检测人脸的算法模型,第三阶段对应提取比对特征的算法模型,第四阶段对应比对人脸的算法模型。因此,所述模型信息包括的文件中可以指定不同运行阶段下对应的不同模型信息。
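按硬件平台与运行阶段两个维度组织模型信息,可以示意如下(以上述人脸比对算法的四个运行阶段为例,硬件型号沿用文中的 nv_p4/nv_t4,模型文件名均为假设):

```python
# 模型信息按硬件平台组织,每一硬件平台下再按运行阶段给出对应的算法模型
model_info = {
    "nv_p4": {
        "detect_feature":  "models/nv_p4/detect_feature.trt",   # 第一特征提取阶段
        "face_detect":     "models/nv_p4/face_detect.trt",      # 人脸检测阶段
        "compare_feature": "models/nv_p4/compare_feature.trt",  # 第二特征提取阶段
        "face_compare":    "models/nv_p4/face_compare.trt",     # 人脸比对阶段
    },
    "nv_t4": {
        "detect_feature":  "models/nv_t4/detect_feature.trt",
        "face_detect":     "models/nv_t4/face_detect.trt",
        "compare_feature": "models/nv_t4/compare_feature.trt",
        "face_compare":    "models/nv_t4/face_compare.trt",
    },
}

def models_for(hardware: str) -> dict:
    """部署时根据实际硬件平台,取出该平台对应的一组模型。"""
    return model_info[hardware]
```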
步骤S212、将所述模型信息、所述配置信息和所述脚本信息对应的内容进行打包,得到目标算法应用元;
通过上述方式,能够基于配置信息、脚本信息和模型信息,将目标算法打包成目标算法应用元,从而灵活地配置打包参数,根据实际需要选择不同的打包选项实现对所述目标算法的打包。
步骤S213、在特定的算法仓中部署所述目标算法应用元,使得在所述算法仓中启动所述目标算法应用元的情况下能够实现所述目标算法对应的功能;其中,所述算法仓为可以运行所述目标算法应用元的系统,且所述目标算法应用元与所述算法仓解耦。
基于前述的实施例,本公开实施例再提供一种算法应用元的打包方法,所述方法应用于电子设备,所述方法包括:
步骤S221、获取目标算法对应的任务类型;
这里,所述目标算法对应的任务类型可以是所述目标算法所属任务对应的功能的分类。例如,所述目标算法所属任务对应的功能是人脸比对告警,则所述目标算法对应的任务类型就是对象识别。
步骤S222、根据所述任务类型,确定所述目标算法对应的运行管道;
这里,所述目标算法对应的运行管道(即Pipeline),指的是所述目标算法所属任务的运行阶段,即所述目标算法所属任务中包括的具有一定顺序的若干个过程,或者所述目标算法所属任务中包括的具有一定顺序的若干个执行步骤。因为算法也是一个流水线,因此算法也存在其对应的运行管道。例如,垃圾检测告警算法,先进行对象检测,然后再看检测出的对象的属性等,最后再根据对象的属性做相应的告警。
步骤S223、将所述运行管道确定为所述目标算法对应的配置信息;其中,所述配置信息用于为打包后的所述目标算法配置运行参数;
这里,将所述运行管道确定为所述目标算法对应的配置信息,指的是目标算法应用元中的配置信息包括所述目标算法所属任务对应的运行管道。
步骤S224、确定所述运行管道中每一运行阶段对应的算法模型;
本公开实施例中,由于算法也是一个流水线,算法也存在其对应的运行管道,所述运行管道中又包括多个运行阶段,因此每一运行阶段均对应各自的算法模型。例如,垃圾检测告警算法对应的运行管道包括三个运行阶段,第一运行阶段为对象检测阶段,对应的算法模型为检测算法模型;第二运行阶段为对象属性识别阶段,对应的算法模型为识别算法模型;第三运行阶段为对象告警阶段,对应的算法模型为告警算法模型。
步骤S225、将所述每一运行阶段对应的算法模型,确定为所述目标算法对应的模型信息;其中,所述模型信息为所述目标算法对应的算法模型;
这里,将所述每一运行阶段对应的算法模型,确定为所述目标算法对应的模型信息,指的是目标算法应用元中的模型信息包括所述目标算法的运行管道中每一运行阶段对应的算法模型。例如,上述垃圾检测告警算法对应的模型信息,就包括检测算法模型、识别算法模型和告警算法模型。
通过上述方式,能够基于目标算法对应的业务的运行管道确定配置信息的内容,并根据所述运行管道中每一运行阶段对应的算法模型确定模型信息的内容。
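以上述垃圾检测告警算法为例,“将运行管道确定为配置信息、将每一运行阶段对应的算法模型确定为模型信息”可以示意如下(阶段名称与模型名称均为假设):

```python
# 垃圾检测告警算法的运行管道:每一运行阶段对应各自的算法模型
pipeline = [
    {"stage": "object_detect",       "model": "detect_model"},     # 对象检测阶段
    {"stage": "attribute_recognize", "model": "recognize_model"},  # 对象属性识别阶段
    {"stage": "alarm",               "model": "alarm_model"},      # 对象告警阶段
]

# 运行管道进入配置信息,各阶段对应的算法模型进入模型信息
config_info = {"pipeline": [s["stage"] for s in pipeline]}
model_info = {s["stage"]: s["model"] for s in pipeline}
```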
步骤S226、获取目标算法对应的脚本信息;其中,所述脚本信息为所述目标算法对应的脚本程序;
步骤S227、将所述模型信息、所述配置信息和所述脚本信息对应的内容进行打包,得到目标算法应用元;
步骤S228、在特定的算法仓中部署所述目标算法应用元,使得在所述算法仓中启动所述目标算法应用元的情况下能够实现所述目标算法对应的功能;其中,所述算法仓为可以运行所述目标算法应用元的系统,且所述目标算法应用元与所述算法仓解耦。
基于前述的实施例,本公开实施例再提供一种算法应用元的打包方法,所述方法应用于电子设备,所述方法包括:
步骤S231、确定打包后的目标算法的运行环境,以及打包后的所述目标算法在所述运行环境下的性能信息;
这里,所述运行环境可以包括软件和硬件两个方面。软件可以包括操作系统,比如Windows操作系统或者Linux操作系统等。硬件可以包括电子设备的配置,比如CPU、内存、显卡、硬盘等。
步骤S232、将所述运行环境和所述性能信息确定为所述目标算法对应的配置信息;其中,所述配置信息用于为打包后的所述目标算法配置运行参数;
举例来说,目标算法应用元在第一运行环境下支持运行N路的数据,目标算法应用元在第二运行环境下支持运行M路的数据,N和M可以相同也可以不同,则支持运行N路的数据、支持运行M路的数据就是所述目标算法应用元在不同运行环境下的性能信息。
步骤S233、在存在多个所述运行环境的情况下,确定多个所述运行环境中每一所述运行环境对应的模型信息;
也就是说,目标算法对应的模型信息在不同的运行环境下是不同的,所述目标算法应用元中的模型信息包括多个运行环境下的模型信息。
步骤S234、将所述每一所述运行环境对应的模型信息,确定为所述目标算法对应的模型信息;其中,所述模型信息为所述目标算法对应的算法模型;
例如,型号为nv_p4的显卡对应的模型为trt2_1模型和trt5_1模型,型号为nv_t4的显卡对应的模型为trt2_2模型和trt5_2模型。
通过上述方式,能够基于目标算法应用元的运行环境,以及所述运行环境下所述目标算法应用元的性能,来确定配置信息的内容,并根据每一运行环境下的模型信息,来确定打包的模型内容。
步骤S235、获取目标算法对应的脚本信息;其中,所述脚本信息为所述目标算法对应的脚本程序;
步骤S236、将所述模型信息、所述配置信息和所述脚本信息对应的内容进行打包,得到目标算法应用元;
这里,将目标算法对应的模型信息的内容、目标算法对应的配置信息的内容和目标算法对应的脚本信息的内容进行打包,得到目标算法应用元。
步骤S237、在特定的算法仓中部署所述目标算法应用元,使得在所述算法仓中启动所述目标算法应用元的情况下能够实现所述目标算法对应的功能;其中,所述算法仓为可以运行所述目标算法应用元的系统,且所述目标算法应用元与所述算法仓解耦。
基于前述的实施例,本公开实施例再提供一种算法应用元的打包方法,所述方法应用于电子设备,所述方法包括:
步骤S241、获取目标算法对应的任务类型;
步骤S242、根据所述任务类型,确定所述目标算法对应的运行管道;
步骤S243、确定打包后的所述目标算法的运行环境,以及打包后的所述目标算法在所述运行环境下的性能信息;
步骤S244、在将打包后的所述目标算法作为其他算法应用元的模板的情况下,获取所述运行管道、所述运行环境和所述性能信息对应的模板信息;
这里,将打包后的所述目标算法作为其他算法应用元的模板,指的是用目标算法应用元来生成其他类似的算法应用元。
本公开实施例中,如果需要利用目标算法应用元来生成其他类似的算法应用元,则可以将所述目标算法的运行管道对应的模板信息,所述目标算法应用元的运行环境对应的模板信息和所述目标算法应用元的性能信息对应的模板信息,作为所述目标算法应用元的打包内容。
步骤S245、将所述运行管道、所述运行环境、所述性能信息和所述模板信息,确定为所述目标算法对应的配置信息;其中,所述配置信息用于为打包后的所述目标算法配置运行参数;
通过上述方式,能够基于目标算法对应的业务的运行管道、目标算法应用元的运行环境,以及该目标算法应用元在该运行环境下的性能,得到模板信息,并将所述模板信息确定为配置信息的内容,从而可以用所述目标算法应用元来生成其他类似的算法应用元。
步骤S246、确定所述运行管道中每一运行阶段对应的算法模型;
步骤S247、在存在多个所述运行环境的情况下,确定多个所述运行环境中每一所述运行环境对应的模型信息;
步骤S248、将所述每一运行阶段对应的算法模型和所述每一所述运行环境对应的模型信息,确定为所述目标算法对应的模型信息;其中,所述模型信息为所述目标算法对应的算法模型;
这里,不同运行阶段对应的算法模型和不同运行环境对应的模型信息,是从不同的角度去划分的,因此一算法模型在属于某一运行阶段的同时可能也属于某一运行环境。
步骤S249、获取目标算法对应的脚本信息;其中,所述脚本信息为所述目标算法对应的脚本程序;
步骤S250、将所述模型信息、所述配置信息和所述脚本信息对应的内容进行打包,得到目标算法应用元;
步骤S251、在特定的算法仓中部署所述目标算法应用元,使得在所述算法仓中启动所述目标算法应用元的情况下能够实现所述目标算法对应的功能;其中,所述算法仓为可以运行所述目标算法应用元的系统,且所述目标算法应用元与所述算法仓解耦。
基于前述的实施例,本公开实施例再提供一种算法应用元的打包方法,所述方法应用于电子设备,图3为本公开实施例算法应用元的打包方法的实现流程示意图三,如图3所示,所述方法包括:
步骤S301、获取目标算法对应的配置信息、脚本信息、模型信息和动态库信息;其中,所述配置信息用于为打包后的所述目标算法配置运行参数,所述脚本信息为所述目标算法对应的脚本程序,所述模型信息为所述目标算法对应的算法模型,所述动态库信息用于被所述脚本信息对应的代码调用以使打包后的所述目标算法适用于不同的平台;
这里,所述动态库信息可以为用户动态链接库,所述用户动态链接库中包含能被脚本信息中的脚本程序调用来完成某些工作的函数。
步骤S302、将所述模型信息、所述配置信息、所述脚本信息和所述动态库信息对应的内容进行打包,得到目标算法应用元;
步骤S303、在特定的算法仓中部署所述目标算法应用元,使得在所述算法仓中启动所述目标算法应用元的情况下能够实现所述目标算法对应的功能;其中,所述算法仓为可以运行所述目标算法应用元的系统,且所述目标算法应用元与所述算法仓解耦。
基于前述的实施例,本公开实施例再提供一种算法应用元的打包方法,所述方法应用于电子设备,所述方法包括:
步骤S311、获取目标算法对应的配置信息、脚本信息和动态库信息;其中,所述配置信息用于为打包后的所述目标算法配置运行参数,所述脚本信息为所述目标算法对应的脚本程序,所述动态库信息用于被所述脚本信息对应的代码调用以使打包后的所述目标算法适用于不同的平台;
步骤S312、将所述配置信息、所述脚本信息和所述动态库信息对应的内容进行打包,得到目标算法应用元;
通过上述方式,能够基于配置信息、脚本信息,以及与所述脚本信息相关的动态库信息,将目标算法打包成目标算法应用元,从而灵活地配置打包参数,根据实际需要选择不同的打包选项实现对所述目标算法的打包。
步骤S313、在特定的算法仓中部署所述目标算法应用元,使得在所述算法仓中启动所述目标算法应用元的情况下能够实现所述目标算法对应的功能;其中,所述算法仓为可以运行所述目标算法应用元的系统,且所述目标算法应用元与所述算法仓解耦。
基于前述的实施例,本公开实施例再提供一种算法应用元的打包方法,所述方法应用于电子设备,所述方法包括:
步骤S321、获取目标算法对应的配置信息、脚本信息、用户文档和格式信息;其中,所述配置信息用于为打包后的所述目标算法配置运行参数,所述脚本信息为所述目标算法对应的脚本程序,所述用户文档用于描述打包后的所述目标算法的使用信息,所述格式信息用于规范打包后的所述目标算法的输入输出数据的格式;
这里,所述用户文档可以包括目标算法应用元的图标,以及所述目标算法应用元的用户使用说明。所述格式信息用于规范目标算法应用元的输入数据的格式和输出数据的格式,即定义输入输出数据的格式,并校验输入输出数据的格式,符合规范则接收数据,其既是一种规范,同时也是一种说明。
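格式信息“定义字段格式并据此校验输入输出数据”的思路,可以用如下最小示意来说明(字段定义仅为假设,并非本方案限定的格式规范):

```python
# 格式信息示意:定义输入数据的字段及其类型(字段均为假设)
input_schema = {"image_url": str, "camera_id": int}

def validate(data: dict, schema: dict) -> bool:
    """按格式信息校验数据:字段齐全且类型符合规范则接收,否则拒绝。"""
    return set(data) == set(schema) and all(
        isinstance(data[k], t) for k, t in schema.items()
    )

ok = validate({"image_url": "rtsp://example", "camera_id": 3}, input_schema)
```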
步骤S322、将所述配置信息、所述脚本信息、所述用户文档和所述格式信息对应的内容进行打包,得到目标算法应用元;
通过上述方式,能够基于配置信息、脚本信息,以及用户文档和格式信息,将目标算法打包成目标算法应用元,从而灵活地配置打包参数,根据实际需要选择不同的打包选项实现对所述目标算法的打包。
步骤S323、在特定的算法仓中部署所述目标算法应用元,使得在所述算法仓中启动所述目标算法应用元的情况下能够实现所述目标算法对应的功能;其中,所述算法仓为可以运行所述目标算法应用元的系统,且所述目标算法应用元与所述算法仓解耦。
本公开实施例中,模型信息、动态库信息、用户文档和格式信息均为将目标算法打包成目标算法应用元的可选内容,如此,能够灵活配置打包参数,根据实际需要选择不同的打包内容实现对所述目标算法的打包。
基于前述的实施例,本公开实施例再提供一种算法应用元的打包方法,所述方法应用于电子设备,图4为本公开实施例算法应用元的打包方法的实现流程示意图四,如图4所示,所述方法包括:
步骤S401、获取目标算法;
步骤S402、通过动态语言将所述目标算法对应的内容进行打包,得到目标算法应用元;
这里,由于所述目标算法应用元是基于动态脚本语言实现的,因此解压后可以直接使用,也就是说,在算法仓中部署所述目标算法应用元后即可解压运行使用。在一些实施例中,算法仓系统可以是基于云原生的算法仓系统。
通过上述方式,能够使用动态语言对目标算法进行打包,同时打包后的算法应用元与镜像之间相互独立,从而可以先发布镜像,再更新算法,实现对算法的灵活管理。
步骤S403、在特定的算法仓中部署所述目标算法应用元,使得在所述算法仓中启动所述目标算法应用元的情况下能够实现所述目标算法对应的功能;其中,所述算法仓为可以运行所述目标算法应用元的系统,所述目标算法应用元与所述算法仓解耦,所述目标算法应用元与所述算法仓所关联的任一镜像之间相互独立。
相关技术中,算法逻辑是打包到镜像中的不能分离,如果后续有算法方面的更新则需要更新整个镜像。而本公开实施例中,通过这种打包的方式就把算法逻辑和镜像分离了,不用通过将打包和环境一起执行一个镜像的形式去分发,本公开实施例中镜像和算法是相互独立的,算法可以运行在不同的镜像上面。也就是说,算法和镜像是分离的,因此可以先分发镜像,后面再更新算法,实现灵活的算法管理。
在一些实施例中,还提供算法应用元授权功能,例如分析视频流任务会限制其处理的最大路数,其中,最大路数可以通过购买确定。
基于前述的实施例,本公开实施例再提供一种算法应用元的打包方法,所述方法应用于电子设备,图5为本公开实施例算法应用元的打包方法的实现流程示意图五,如图5所示,所述方法包括:
步骤S501、获取目标算法;
步骤S502、将所述目标算法对应的内容进行打包,得到目标算法应用元;
步骤S503、在特定的算法仓中部署所述目标算法应用元,使得在所述算法仓中启动所述目标算法应用元的情况下能够实现所述目标算法对应的功能;其中,所述算法仓为可以运行所述目标算法应用元的系统,且所述目标算法应用元与所述算法仓解耦;
步骤S504、接收更新请求;其中,所述更新请求的内容为请求将所述目标算法应用元从第一版本更新为第二版本;
这里,所述第一版本可以为历史版本,所述第二版本可以为当前版本。
步骤S505、响应于所述更新请求,停止使用所述算法仓中第一版本的目标算法应用元;
步骤S506、在所述算法仓中部署所述第二版本的目标算法应用元。
举例来说,所述目标算法为垃圾检测算法,部署在算法仓中的垃圾检测算法应用元为2015版本,2015版本的垃圾检测算法应用元在垃圾检测的精度和召回方面存在不足。2016年开发人员为了解决上述问题对2015版本的垃圾检测算法进行了代码更新等操作,得到2016版本的垃圾检测算法,2016版本的垃圾检测算法对精度和召回都有所提升。进而,可以对2016版本的垃圾检测算法对应的内容进行打包得到2016版本的垃圾检测算法应用元,并停用2015版本的垃圾检测算法应用元,上传、启动2016版本的垃圾检测算法应用元。
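上述“停用第一版本、部署并启动第二版本”的更新流程,可以用如下最小示意来勾勒(类名、方法名均为假设,仅用于说明更新的先后次序):

```python
class AlgoWarehouse:
    """算法仓版本更新流程的最小示意:先停用旧版本应用元,再部署并启动新版本。"""

    def __init__(self):
        self.running = {}  # 应用元名称 -> 当前运行版本

    def deploy(self, name: str, version: str):
        """在算法仓中部署并启动某一版本的算法应用元。"""
        self.running[name] = version

    def update(self, name: str, new_version: str):
        """响应更新请求:停止使用第一版本,部署第二版本。"""
        self.running.pop(name, None)   # 停用旧版本
        self.deploy(name, new_version)  # 上传、启动新版本

wh = AlgoWarehouse()
wh.deploy("garbage-detect", "2015")
wh.update("garbage-detect", "2016")
```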
本公开实施例中,通过上述步骤S501至步骤S506中的方法,能够将新版本的算法打包成算法应用元,然后部署至算法仓系统并启用来实现对算法的迭代更新,从而解决现有技术中更新系统中的算法需要更新整个镜像或者更新整个系统的问题。
基于前述的实施例,本公开实施例提供一种算法仓模式,在所述算法仓模式下,AI算法通过一些动态语言包装成与运行时系统解耦的算法应用元(即Applet),这些算法应用元打包算法模型、业务逻辑脚本代码和配置,并提供开发者说明和示例供调用者阅读,然后在视觉开发等平台运行,以实现各种复杂的AI算法。
在一些实施例中,为了高效、便捷地在开放视觉平台管理这些算法应用元的存储和运行周期,本公开实施例还提供了一种基于云原生的、具备高可用的、易伸缩的AI算法应用元运行管理系统(即算法仓系统)及对应的算法应用元管理服务,所述算法应用元管理服务在所述算法仓系统中运行时实现一种通用的能够运行任何算法应用元的方式。
本公开实施例主要提供一种算法仓系统中算法应用元的打包方法,为基于配置信息、模型信息、脚本信息、动态库信息、用户文档和格式信息的内容,实现对人工智能算法的应用元的打包功能。
下面对上述打包功能的内容进行详细的说明:
(1)配置信息
本公开实施例中,所述配置信息是一个算法应用元的配置文件,例如,包括一个业务的运行管道(Pipeline)等。
举例来说,所述配置文件可以包括如下几部分:算法应用元的入口文件、算法应用元的模板文件、算法应用元的运行环境和性能信息(如AlgoAppSpec定义的内容,与运行时无关,作为渲染默认值)等。
(2)模型信息
本公开实施例中,所述模型信息是算法对应的模型内容。例如,型号为nv_p4的显卡对应的模型为trt2_1模型和trt5_1模型,型号为nv_t4的显卡对应的模型为trt2_2模型和trt5_2模型。
这里,所述模型信息可以包括不同硬件下的模型信息以及不同运行时段下的模型信息。
(3)脚本信息
本公开实施例中,所述脚本信息是算法对应的脚本程序,不同的算法对应不同的实现代码。例如,算法对应的Lua脚本。
(4)动态库信息
本公开实施例中,所述动态库信息的内容可以被上述脚本信息对应的代码调用,以使算法应用元适用不同的平台,例如英伟达平台或华为昇腾平台,而无需针对各平台的软件版本差异单独适配。所述动态库信息可为算法对应的动态链接库,例如,Go标准库。
(5)用户文档
本公开实施例中,所述用户文档记录了描述算法应用元的信息。例如,用户文档可以包括对算法应用元的描述文档,还可以包括算法应用元的图标。
(6)格式信息
本公开实施例中,所述格式信息可以由本领域技术人员根据实际使用情况进行设置。所述格式信息定义了算法应用元的输入输出格式,即定义了字段格式和内容,用于对输入输出数据做校验。
在以往技术中算法上线效率低、迭代慢,上线运行新的算法需要更新整个底座服务镜像,无法单独发布算法。本公开实施例中提供的算法仓系统是基于云原生架构、通过扩展kubernetes自定义接口实现的,具有能够发挥云平台分布式、自动伸缩等优势且运维部署友好的特点。同时,与以往与系统耦合的算法运行管理系统相比,本公开实施例中的算法仓系统将智能算法定义为独立便携的算法应用元,无需安装,更加灵活。并且,所述算法仓系统可以管理算法应用元的运行生命周期,管理基于kubernetes的状态同步,维护简单的内部状态机,管理操作可以异步进行,在算法应用元打包时可以灵活选择和确认打包参数。
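“维护简单的内部状态机”来管理算法应用元的运行生命周期,可以示意如下(状态名与操作名均为假设,并非本方案限定的状态定义):

```python
# 算法应用元运行生命周期的简单内部状态机示意(状态与操作均为假设)
TRANSITIONS = {
    ("uploaded", "start"):  "running",  # 上传后启动运行
    ("running",  "stop"):   "stopped",  # 停用
    ("stopped",  "start"):  "running",  # 重新启动
    ("stopped",  "delete"): "deleted",  # 删除
}

def step(state: str, action: str) -> str:
    """执行一次管理操作,返回应用元的下一状态;非法迁移则报错。"""
    nxt = TRANSITIONS.get((state, action))
    if nxt is None:
        raise ValueError(f"不允许的状态迁移: {state} --{action}-->")
    return nxt

state = step("uploaded", "start")
```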
本公开实施例中,当用户需要上线一种新的智能视频和图像分析算法时,不需要更新整个服务的版本,只需要上传新发布的算法应用元,然后启动运行,启动后,业务层可以通过接口(即系统接口,该接口可以运行各种算法应用元)下发处理任务或者图片进行算法分析。如果后续需要更新迭代算法,则停止老版本的算法应用元,上传、启动新版本的算法应用元。例如,历史版本的垃圾检测算法在垃圾检测的精度和召回方面存在不足,新版本的垃圾检测算法对精度和召回都有提升后,停用老版本的垃圾检测算法对应的算法应用元,并上传新版本的垃圾检测算法对应的算法应用元。所述新版本的垃圾检测算法应用元可以为使用配置信息、模型信息、脚本、动态库、用户文档、规范格式的内容对新版本的垃圾检测算法进行打包后的内容。
通过上述的打包方法,可以达到如下技术效果:1)基于解耦系统,实现对单个算法应用元的单独打包;2)将智能视频分析算法定义成与系统松耦合的算法应用元,使得智能算法也具备了灵活的部署迭代优势,并且定义的算法应用元无需安装,灵活配置打包参数;3)对发布的算法应用元进行管理的系统可以管理算法应用元的运行生命周期,管理基于kubernetes的状态同步,维护简单的内部状态机,并且管理操作可以异步进行。
基于前述的实施例,本公开实施例提供一种算法应用元的打包装置,该装置所包括的各单元、各单元所包括的各子单元和各模块、以及各模块所包括的各子模块和各部件,均可以通过电子设备中的处理器来实现;当然也可通过具体的逻辑电路实现;在实施的过程中,处理器可以为CPU(Central Processing Unit,中央处理器)、MPU(Microprocessor Unit,微处理器)、DSP(Digital Signal Processor,数字信号处理器)或FPGA(Field Programmable Gate Array,现场可编程门阵列)等。
图6为本公开实施例算法应用元的打包装置的组成结构示意图,如图6所示,所述装置600包括:
获取单元601,配置为获取目标算法;
打包单元602,配置为将所述目标算法对应的内容进行打包,得到目标算法应用元;
部署单元603,配置为在特定的算法仓中部署所述目标算法应用元,使得在所述算法仓中启动所述目标算法应用元的情况下能够实现所述目标算法对应的功能;其中,所述算法仓为可以运行所述目标算法应用元的系统,且所述目标算法应用元与所述算法仓解耦。
在一些实施例中,所述获取单元601,包括:第一获取模块,配置为获取目标算法对应的配置信息和脚本信息;其中,所述配置信息用于为所述目标算法应用元配置运行参数,所述脚本信息为所述目标算法对应的脚本程序;所述打包单元602,包括:第一打包模块,配置为将所述配置信息和所述脚本信息对应的内容进行打包,得到目标算法应用元。
在一些实施例中,所述获取单元601,还包括:第二获取模块,配置为获取目标算法对应的模型信息;其中,所述模型信息为所述目标算法对应的算法模型;所述打包单元602,包括:第二打包模块,配置为将所述模型信息、所述配置信息和所述脚本信息对应的内容进行打包,得到目标算法应用元。
在一些实施例中,所述第一获取模块,包括:任务获取部件,配置为获取目标算法对应的任务类型;管道确定部件,配置为根据所述任务类型,确定所述目标算法对应的运行管道;第一配置确定部件,配置为将所述运行管道确定为所述目标算法对应的配置信息;所述第二获取模块,包括:算法模型确定部件,配置为确定所述运行管道中每一运行阶段对应的算法模型;第一模型确定部件,配置为将所述每一运行阶段对应的算法模型,确定为所述目标算法对应的模型信息。
在一些实施例中,所述第一获取模块,还包括:规格确定部件,配置为确定所述目标算法应用元的运行环境,以及所述目标算法应用元在所述运行环境下的性能信息;第二配置确定部件,配置为将所述运行环境和所述性能信息确定为所述目标算法对应的配置信息;在存在多个所述运行环境的情况下,所述第二获取模块,还包括:环境模型确定部件,配置为确定多个所述运行环境中每一所述运行环境对应的模型信息;第二模型确定部件,配置为将所述每一所述运行环境对应的模型信息,确定为所述目标算法对应的模型信息。
在一些实施例中,在将所述目标算法应用元作为其他算法应用元的模板的情况下,所述第一获取模块,还包括:模板获取部件,配置为获取所述运行管道、所述运行环境和所述性能信息对应的模板信息;第三配置确定部件,配置为将所述模板信息确定为所述目标算法对应的配置信息。
在一些实施例中,所述获取单元601,还包括:第三获取模块,配置为获取目标算法对应的动态库信息;其中,所述动态库信息用于被所述脚本信息对应的代码调用以使所述目标算法应用元适用于不同的平台;所述打包单元602,包括:第三打包模块,配置为将所述配置信息、所述脚本信息和所述动态库信息对应的内容进行打包,得到目标算法应用元。
在一些实施例中,所述获取单元601,还包括:第四获取模块,配置为获取目标算法对应的用户文档和格式信息;其中,所述用户文档用于描述所述目标算法应用元的使用信息,所述格式信息用于规范所述目标算法应用元的输入输出数据的格式;所述打包单元602,包括:第四打包模块,配置为将所述配置信息、所述脚本信息、所述用户文档和所述格式信息对应的内容进行打包,得到目标算法应用元。
在一些实施例中,所述打包单元602,包括:打包子单元,配置为通过动态语言将所述目标算法对应的内容进行打包,得到目标算法应用元;其中,所述目标算法应用元与所述算法仓所关联的任一镜像之间相互独立。
在一些实施例中,所述装置还包括:请求接收单元,配置为接收更新请求;其中,所述更新请求的内容为请求将所述目标算法应用元从第一版本更新为第二版本;请求响应单元,配置为响应于所述更新请求,停止使用所述算法仓中第一版本的目标算法应用元;更新单元,配置为在所述算法仓中部署所述第二版本的目标算法应用元。
以上装置实施例的描述,与上述方法实施例的描述是类似的,具有同方法实施例相似的有益效果。对于本公开装置实施例中未披露的技术细节,请参照本公开方法实施例的描述而理解。
需要说明的是,本公开实施例中,如果以软件功能模块的形式实现上述的算法应用元的打包方法,并作为独立的产品销售或使用时,也可以存储在一个计算机可读取存储介质中。基于这样的理解,本公开实施例的技术方案本质上或者说对现有技术做出贡献的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台电子设备(可以是个人计算机、服务器等)执行本公开各个实施例所述方法的全部或部分。而前述的存储介质包括:U盘、移动硬盘、ROM(Read Only Memory,只读存储器)、磁碟或者光盘等各种可以存储程序代码的介质。这样,本公开实施例不限制于任何特定的硬件和软件结合。
对应地,本公开实施例提供一种电子设备,包括存储器和处理器,所述存储器存储有可在处理器上运行的计算机程序,所述处理器执行所述程序时实现上述实施例中提供的算法应用元的打包方法中的步骤。
对应地,本公开实施例提供一种可读存储介质,其上存储有计算机程序,该计算机程序被处理器执行时实现上述算法应用元的打包方法中的步骤。
对应地,本公开实施例提供一种计算机程序产品或计算机程序,该计算机程序产品或计算机程序包括计算机指令,该计算机指令存储在计算机可读存储介质中。电子设备的处理器从计算机可读存储介质读取该计算机指令,处理器执行该计算机指令,使得该电子设备执行本公开实施例上述的算法应用元的打包方法。
即,本申请实施例所提供的计算机程序产品,包括存储了程序代码的计算机可读存储介质,所述程序代码包括的指令可用于执行上述方法实施例中所述的算法应用元的打包方法的步骤。
这里需要指出的是:以上设备、存储介质、程序产品、程序实施例的描述,与上述方法实施例的描述是类似的,具有同方法实施例相似的有益效果。对于本公开设备、存储介质、程序产品、程序实施例中未披露的技术细节,请参照本公开方法实施例的描述而理解。
需要说明的是,图7为本公开实施例电子设备的一种硬件实体示意图,如图7所示,该电子设备700的硬件实体包括:处理器701、通信接口702和存储器703,其中
处理器701通常控制电子设备700的总体操作。
通信接口702可以使电子设备700通过网络与其他服务器或电子设备或平台通信。
存储器703配置为存储由处理器701可执行的指令和应用,还可以缓存待处理器701以及电子设备700中各模块待处理或已经处理的数据(例如,图像数据、音频数据、语音通信数据和视频通信数据),可以通过FLASH(闪存)或RAM(Random Access Memory,随机访问存储器)实现;
其中,电子设备700中的各个硬件实体通过总线704耦合在一起。可理解,总线704用于实现这些硬件实体之间的连接通信。
在本公开所提供的几个实施例中,应该理解到,所揭露的设备和方法,可以通过其它的方式实现。以上所描述的设备实施例仅仅是示意性的,例如,所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,如:多个单元或组件可以结合,或可以集成到另一个系统,或一些特征可以忽略,或不执行。另外,所显示或讨论的各组成部分相互之间的耦合、或直接耦合、或通信连接可以是通过一些接口,设备或单元的间接耦合或通信连接,可以是电性的、机械的或其它形式的。
上述作为分离部件说明的单元可以是、或也可以不是物理上分开的,作为单元显示的部件可以是、或也可以不是物理单元,即可以位于一个地方,也可以分布到多个网络单元上;可以根据实际的需要选择其中的部分或全部单元来实现本实施例方案的目的。
另外,在本公开各实施例中的各功能单元可以全部集成在一个处理模块中,也可以是各单元分别单独作为一个单元,也可以两个或两个以上单元集成在一个单元中;上述集成的单元既可以采用硬件的形式实现,也可以采用硬件加软件功能单元的形式实现。本领域普通技术人员可以理解:实现上述方法实施例的全部或部分步骤可以通过程序指令相关的硬件来完成,前述的程序可以存储于一计算机可读取存储介质中,该程序在执行时,执行包括上述方法实施例的步骤;而前述的存储介质包括:移动存储设备、ROM、RAM、磁碟或者光盘等各种可以存储程序代码的介质。
本公开所提供的几个方法实施例中所揭露的方法,在不冲突的情况下可以任意组合,得到新的方法实施例。本公开所提供的几个产品实施例中所揭露的特征,在不冲突的情况下可以任意组合,得到新的产品实施例。本公开所提供的几个方法或设备实施例中所揭露的特征,在不冲突的情况下可以任意组合,得到新的方法实施例或设备实施例。
以上所述,仅为本公开的具体实施方式,但本公开的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本公开揭露的技术范围内,可轻易想到变化或替换,都应涵盖在本公开的保护范围之内。因此,本公开的保护范围应以所述权利要求的保护范围为准。

Claims (20)

  1. 一种算法应用元的打包方法,其中,所述方法包括:
    获取目标算法;
    将所述目标算法对应的内容进行打包,得到目标算法应用元;
    在特定的算法仓中部署所述目标算法应用元,使得在所述算法仓中启动所述目标算法应用元的情况下能够实现所述目标算法对应的功能;其中,所述算法仓为可以运行所述目标算法应用元的系统,且所述目标算法应用元与所述算法仓解耦。
  2. 根据权利要求1所述的方法,其中,所述获取目标算法,包括:
    获取目标算法对应的配置信息和脚本信息;其中,所述配置信息用于为所述目标算法应用元配置运行参数,所述脚本信息为所述目标算法对应的脚本程序;
    所述将所述目标算法对应的内容进行打包,得到目标算法应用元,包括:
    将所述配置信息和所述脚本信息对应的内容进行打包,得到目标算法应用元。
  3. 根据权利要求2所述的方法,其中,所述获取目标算法,还包括:
    获取目标算法对应的模型信息;其中,所述模型信息为所述目标算法对应的算法模型;
    所述将所述目标算法对应的内容进行打包,得到目标算法应用元,包括:
    将所述模型信息、所述配置信息和所述脚本信息对应的内容进行打包,得到目标算法应用元。
  4. 根据权利要求3所述的方法,其中,所述获取目标算法对应的配置信息,包括:
    获取目标算法对应的任务类型;
    根据所述任务类型,确定所述目标算法对应的运行管道;
    将所述运行管道确定为所述目标算法对应的配置信息;
    所述获取目标算法对应的模型信息,包括:
    确定所述运行管道中每一运行阶段对应的算法模型;
    将所述每一运行阶段对应的算法模型,确定为所述目标算法对应的模型信息。
  5. 根据权利要求4所述的方法,其中,所述获取目标算法对应的配置信息,还包括:
    确定所述目标算法应用元的运行环境,以及所述目标算法应用元在所述运行环境下的性能信息;
    将所述运行环境和所述性能信息确定为所述目标算法对应的配置信息;
    在存在多个所述运行环境的情况下,所述获取目标算法对应的模型信息,还包括:
    确定多个所述运行环境中每一所述运行环境对应的模型信息;
    将所述每一所述运行环境对应的模型信息,确定为所述目标算法对应的模型信息。
  6. 根据权利要求5所述的方法,其中,在将所述目标算法应用元作为其他算法应用元的模板的情况下,所述获取目标算法对应的配置信息,还包括:
    获取所述运行管道、所述运行环境和所述性能信息对应的模板信息;
    将所述模板信息确定为所述目标算法对应的配置信息。
  7. 根据权利要求2至6任一项所述的方法,其中,所述获取目标算法,还包括:
    获取目标算法对应的动态库信息;其中,所述动态库信息用于被所述脚本信息对应的代码调用以使所述目标算法应用元适用于不同的平台;
    所述将所述目标算法对应的内容进行打包,得到目标算法应用元,包括:
    将所述配置信息、所述脚本信息和所述动态库信息对应的内容进行打包,得到目标算法应用元。
  8. 根据权利要求2至7任一项所述的方法,其中,所述获取目标算法,还包括:
    获取目标算法对应的用户文档和格式信息;其中,所述用户文档用于描述所述目标算法应用元的使用信息,所述格式信息用于规范所述目标算法应用元的输入输出数据的格式;
    所述将所述目标算法对应的内容进行打包,得到目标算法应用元,包括:
    将所述配置信息、所述脚本信息、所述用户文档和所述格式信息对应的内容进行打包,得到目标算法应用元。
  9. 根据权利要求1至8任一项所述的方法,其中,所述将所述目标算法对应的内容进行打包,得到目标算法应用元,包括:
    通过动态语言将所述目标算法对应的内容进行打包,得到目标算法应用元;
    其中,所述目标算法应用元与所述算法仓所关联的任一镜像之间相互独立。
  10. 根据权利要求1至9任一项所述的方法,其中,所述方法还包括:
    接收更新请求;其中,所述更新请求的内容为请求将所述目标算法应用元从第一版本更新为第二版本;
    响应于所述更新请求,停止使用所述算法仓中第一版本的目标算法应用元;
    在所述算法仓中部署所述第二版本的目标算法应用元。
  11. 一种算法应用元的打包装置,其中,所述装置包括:
    获取单元,配置为获取目标算法;
    打包单元,配置为将所述目标算法对应的内容进行打包,得到目标算法应用元;
    部署单元,配置为在特定的算法仓中部署所述目标算法应用元,使得在所述算法仓中启动所述目标算法应用元的情况下能够实现所述目标算法对应的功能;其中,所述算法仓为可以运行所述目标算法应用元的系统,且所述目标算法应用元与所述算法仓解耦。
  12. 根据权利要求11所述的装置,其中,所述获取单元,包括:
    第一获取模块,配置为获取目标算法对应的配置信息和脚本信息;其中,所述配置信息用于为所述目标算法应用元配置运行参数,所述脚本信息为所述目标算法对应的脚本程序;
    所述打包单元,包括:
    第一打包模块,配置为将所述配置信息和所述脚本信息对应的内容进行打包,得到目标算法应用元。
  13. 根据权利要求12所述的装置,其中,所述获取单元,还包括:
    第二获取模块,配置为获取目标算法对应的模型信息;其中,所述模型信息为所述目标算法对应的算法模型;
    所述打包单元,包括:
    第二打包模块,配置为将所述模型信息、所述配置信息和所述脚本信息对应的内容进行打包,得到目标算法应用元。
  14. 根据权利要求13所述的装置,其中,所述第一获取模块,包括:
    任务获取部件,配置为获取目标算法对应的任务类型;
    管道确定部件,配置为根据所述任务类型,确定所述目标算法对应的运行管道;
    第一配置确定部件,配置为将所述运行管道确定为所述目标算法对应的配置信息;
    所述第二获取模块,包括:
    算法模型确定部件,配置为确定所述运行管道中每一运行阶段对应的算法模型;
    第一模型确定部件,配置为将所述每一运行阶段对应的算法模型,确定为所述目标算法对应的模型信息。
  15. 根据权利要求14所述的装置,其中,所述第一获取模块,还包括:
    规格确定部件,配置为确定所述目标算法应用元的运行环境,以及所述目标算法应用元在所述运行环境下的性能信息;
    第二配置确定部件,配置为将所述运行环境和所述性能信息确定为所述目标算法对应的配置信息;
    在存在多个所述运行环境的情况下,所述第二获取模块,还包括:
    环境模型确定部件,配置为确定多个所述运行环境中每一所述运行环境对应的模型信息;
    第二模型确定部件,配置为将所述每一所述运行环境对应的模型信息,确定为所述目标算法对应的模型信息。
  16. 根据权利要求15所述的装置,其中,在将所述目标算法应用元作为其他算法应用元的模板的情况下,所述第一获取模块,还包括:
    模板获取部件,配置为获取所述运行管道、所述运行环境和所述性能信息对应的模板信息;
    第三配置确定部件,配置为将所述模板信息确定为所述目标算法对应的配置信息。
  17. 根据权利要求12至16任一项所述的装置,其中,所述获取单元,还包括:
    第三获取模块,配置为获取目标算法对应的动态库信息;其中,所述动态库信息用于被所述脚本信息对应的代码调用以使所述目标算法应用元适用于不同的平台;
    所述打包单元,包括:
    第三打包模块,配置为将所述配置信息、所述脚本信息和所述动态库信息对应的内容进行打包,得到目标算法应用元。
  18. 一种电子设备,包括存储器和处理器,所述存储器存储有可在处理器上运行的计算机程序,所述处理器执行所述程序时实现权利要求1至10任一项所述方法中的步骤。
  19. 一种计算机可读存储介质,其上存储有计算机程序,该计算机程序被处理器执行时实现权利要求1至10任一项所述方法中的步骤。
  20. 一种计算机程序产品,所述计算机程序产品包括存储了计算机程序的非瞬时性计算机可读存储介质,所述计算机程序被计算机读取并执行时实现权利要求1至10任一项所述方法中的步骤。
PCT/CN2022/107167 2021-11-17 2022-07-21 算法应用元的打包方法及装置、设备、存储介质和计算机程序产品 WO2023087764A1 (zh)
