CN112667303B - Method and device for processing artificial intelligence task


Info

Publication number
CN112667303B
Authority
CN
China
Prior art keywords
artificial intelligence
model file
intelligence engine
algorithm
bound
Legal status
Active
Application number
CN201910925505.1A
Other languages
Chinese (zh)
Other versions
CN112667303A (en)
Inventor
蔡博振
Current Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Original Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Application filed by Hangzhou Hikvision Digital Technology Co Ltd
Priority to CN201910925505.1A
Publication of CN112667303A
Application granted
Publication of CN112667303B

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The invention provides a method and a device for processing an artificial intelligence task. The method is applied to an artificial intelligence engine and, when the artificial intelligence engine is bound to a model file associated with the artificial intelligence task, comprises the following steps: acquiring the model file information bound to the artificial intelligence engine; obtaining the model file according to the model file information and loading the algorithm corresponding to the model file; and, when the model file corresponds to at least two different algorithms, acquiring the dependency relationship among the different algorithms according to the model file information or the model file, calling the algorithms corresponding to the model file according to the dependency relationship, and analyzing the object to be identified of the artificial intelligence task. Because the artificial intelligence engine is bound to a model file, the algorithm is loaded according to the model file, and the algorithms are called directly through the dependency relationship among them, different algorithms can be loaded, called, and switched flexibly.

Description

Method and device for processing artificial intelligence task
Technical Field
The invention relates to the technical field of computers, in particular to a method and a device for processing an artificial intelligence task.
Background
Currently, Artificial Intelligence (AI) is widely used in the security field: for example, surveillance video or pictures are intelligently analyzed by intelligent algorithms to obtain structured data, achieving face recognition and comparison, human body recognition, vehicle recognition, and the like. Intelligent IPC (IP camera) and intelligent NVR (network video recorder) products can realize such AI functions.
Intelligent algorithms are diverse and each has its own strengths, so hybrid intelligent devices that can switch among intelligent algorithms meet the requirements of multi-scene intelligent analysis applications and have become a current development hotspot. However, in the current processing method, a designated intelligent algorithm is switched by a loading and calling program with a specified algorithm type: when a new intelligent algorithm needs to be added or a designated intelligent algorithm needs to be replaced, the algorithm type must be reset and the loading and calling program must be modified, which is time-consuming and labor-intensive and makes intelligent algorithm switching inflexible.
Disclosure of Invention
In view of this, the present invention provides a method and an apparatus for processing an artificial intelligence task, so as to solve the problem of how an artificial intelligence engine can flexibly switch algorithms.
In one embodiment, a processing method for an artificial intelligence task is provided, applied to an artificial intelligence engine; when the artificial intelligence engine is bound to a model file associated with the artificial intelligence task, the method comprises the following steps:
acquiring the model file information bound to the artificial intelligence engine;
obtaining the model file according to the model file information, and loading the algorithm corresponding to the model file;
when the model file corresponds to at least two different algorithms, acquiring the dependency relationship among the different algorithms according to the model file information or the model file, calling the algorithms corresponding to the model file according to the dependency relationship, and analyzing the object to be identified of the artificial intelligence task.
In another embodiment, a method for processing an artificial intelligence task is provided, applied to a CPU, and comprises:
when the artificial intelligence engine is bound to a model file associated with the artificial intelligence task, obtaining the model file information bound to the artificial intelligence engine;
starting or restarting the artificial intelligence engine and sending the model file information to the artificial intelligence engine, so that the artificial intelligence engine obtains the model file according to the model file information and loads the algorithm corresponding to the model file;
when the model file corresponds to at least two different algorithms, enabling the artificial intelligence engine to obtain the dependency relationship among the different algorithms according to the model file information or the model file, call the algorithms corresponding to the model file according to the dependency relationship, and analyze the object to be identified of the artificial intelligence task.
In another embodiment, the invention also provides a non-transitory computer-readable storage medium storing instructions that, when executed by the artificial intelligence engine, cause the artificial intelligence engine to perform the steps in the processing method for an artificial intelligence task described above; or that, when executed by the CPU, cause the CPU to perform the steps in the processing method for an artificial intelligence task described above.
In another embodiment, the present invention also provides an image processing apparatus including an artificial intelligence engine and the above-described non-transitory computer-readable storage medium, or including a CPU and the above-described non-transitory computer-readable storage medium.
According to the processing method provided by the invention, the algorithm information bound to the artificial intelligence engine is stored in a model file and the algorithm is loaded according to that model file, so loading does not depend on a program and is more flexible, and different algorithms can be switched flexibly by adjusting the model file bound to the artificial intelligence engine.
Secondly, the method does not call algorithms through a program but directly through the dependency relationship among the algorithms, which overcomes the inflexibility of program-based calling and switching, allows flexible loading, calling, and switching among different model files, and makes it easy to replace an algorithm.
Drawings
The following drawings are only illustrative and explanatory of the invention and do not limit the scope of the invention:
FIG. 1 is a first structural diagram of an electronic device in an embodiment of the invention;
FIG. 2 is a second structural diagram of an electronic device in an embodiment of the invention;
FIG. 3 is a third structural diagram of an electronic device in an embodiment of the invention;
FIG. 4 is a first flowchart of a processing method for an artificial intelligence task in an embodiment of the invention;
FIG. 5 is a second flowchart of a processing method for an artificial intelligence task in an embodiment of the invention;
FIG. 6 is a first structural diagram of a processing device for an artificial intelligence task in an embodiment of the invention;
FIG. 7 is a second structural diagram of a processing device for an artificial intelligence task in an embodiment of the invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and examples.
To enable the artificial intelligence engine to switch flexibly among different (AI) algorithms, the present invention proposes the electronic device architectures shown in FIG. 1 to FIG. 3.
As shown in FIG. 1, the electronic device includes four artificial intelligence engines, a CPU, and a memory, which communicate with one another through a connecting line. The memory stores a first preset file; four model files named MPID1, MPID2, MPID3, and MPID4; and the corresponding second preset files (configuration files) MPID1.cfg of MPID1, MPID2.cfg of MPID2, MPID3.cfg of MPID3, and MPID4.cfg of MPID4. Each second preset file contains information about the algorithm corresponding to its model file.
The model file contains one or more trained (AI) algorithms for performing an AI task. Different model files are used to perform different AI tasks, such as "MPID1" for face recognition and "MPID2" for vehicle recognition.
The first preset file stores the model file information bound (or allocated) to all the artificial intelligence engines. The model file information is a model file name, or distinguishing information within the model file name; for example, if artificial intelligence engine 1 is bound to MPID1, the model file information corresponding to artificial intelligence engine 1 is "MPID1" or "1".
An example of the first preset file is: "artificial intelligence engine 1: MPID1 or 1; artificial intelligence engine 2: MPID3 or 3; artificial intelligence engine 3: MPID4 or 4; artificial intelligence engine 4: ". Here "artificial intelligence engine 1: MPID1 or 1" indicates that artificial intelligence engine 1 is bound to MPID1, while the binding entry of artificial intelligence engine 4 is empty, indicating that artificial intelligence engine 4 is not bound to any model file.
One model file can be bound to multiple artificial intelligence engines at the same time, but one artificial intelligence engine is bound to at most one model file. Flexible switching among different (AI) algorithms is realized by changing the model file bound (or allocated) to an artificial intelligence engine: for example, the user may directly modify the content of the first preset file, or change the model file bound to the artificial intelligence engine through an interface, with the first preset file updated after the modification succeeds, as in the sketch below.
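For illustration only, the following Python sketch shows how such a first preset file could be read and rewritten; the file name engine_bindings.txt, the line syntax, and the function names are assumptions, since the patent does not fix a concrete file format.

```python
FIRST_PRESET_FILE = "engine_bindings.txt"  # hypothetical file name

def read_bindings(path=FIRST_PRESET_FILE):
    """Parse lines such as 'artificial intelligence engine 1: MPID1' into a dict.
    An empty right-hand side means that engine is not bound to any model file."""
    bindings = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            if not line.strip():
                continue
            engine, _, model = line.partition(":")
            bindings[engine.strip()] = model.strip() or None
    return bindings

def rebind(engine_name, model_file_name, path=FIRST_PRESET_FILE):
    """Switch algorithms by changing the model file bound to an engine and
    rewriting the first preset file (one engine is bound to at most one model file)."""
    bindings = read_bindings(path)
    bindings[engine_name] = model_file_name  # several engines may share one model file
    with open(path, "w", encoding="utf-8") as f:
        for engine, model in bindings.items():
            f.write(f"{engine}: {model or ''}\n")
```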
The electronic device architecture of fig. 2 is substantially the same as that of fig. 1, except that the model file in fig. 2 contains configuration parameters for one or more (AI) algorithms.
When different (AI) algorithms are generated from the same general algorithm without configured parameters by using different training sets, a trained (AI) algorithm can be decomposed into the configured parameters and the general algorithm without configured parameters: loading the configured parameters into the general algorithm without configured parameters yields the trained (AI) algorithm. The model file may therefore also contain only the configuration parameters of one or more (AI) algorithms.
Compared with FIG. 1, the arrangement of FIG. 2 occupies less storage space, which is beneficial for saving memory. This composition is sketched below.
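The decomposition into a general algorithm plus configuration parameters can be pictured with the following minimal Python sketch; the class name, the JSON serialization of the parameters, and the placeholder run() body are assumptions for illustration, not part of the patent.

```python
import json

class GenericAlgorithm:
    """General algorithm shell without configured parameters; it becomes a
    trained algorithm once the configuration parameters from a model file
    are loaded into it."""
    def __init__(self):
        self.params = None

    def load_parameters(self, params):
        self.params = params

    def run(self, data):
        if self.params is None:
            raise RuntimeError("configuration parameters not loaded")
        # placeholder for inference using self.params on data
        return {"input": data, "num_params": len(self.params)}

def build_trained_algorithm(model_file_path):
    """FIG. 2 variant: the model file stores only configuration parameters
    (assumed here to be JSON-serialized) rather than the algorithm itself."""
    with open(model_file_path, encoding="utf-8") as f:
        params = json.load(f)
    algo = GenericAlgorithm()
    algo.load_parameters(params)
    return algo
```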
The electronic device architecture of FIG. 3 is substantially the same as that of FIG. 1, except that the model file in FIG. 3 contains the names and/or storage locations of one or more (AI) algorithms. All algorithms are stored in one directory or in different directories of the memory, and the algorithms corresponding to a model file can be obtained according to their names and/or storage location information.
It should be noted that the numbers of artificial intelligence engines, CPUs, model files, and configuration files in FIGS. 1 to 3 are only illustrative; the invention does not limit these numbers, and the user can set them as required.
Embodiment 1
Based on the electronic device architectures of FIGS. 1 to 3, the present invention provides a processing method for an artificial intelligence task, applied to any artificial intelligence engine. When the artificial intelligence engine is bound to a model file associated with any artificial intelligence task and is started to execute the AI task, as shown in FIG. 4, the method includes:
S101: acquiring the model file information bound to the artificial intelligence engine;
Each artificial intelligence engine may or may not be bound to a model file. When the artificial intelligence engine has no binding relationship, the user is prompted to establish one, and the method shown in FIG. 4 is executed after the binding relationship is established.
Specifically, when the first preset file uniformly stores the model file information bound to all artificial intelligence engines of the electronic device, the ways of obtaining the model file information bound to the artificial intelligence engine include:
(1) the artificial intelligence engine requests its bound model file information from the CPU and receives the model file information sent by the CPU, the model file information being read by the CPU from a cache of the first preset file;
(2) the artificial intelligence engine receives its bound model file information sent by the CPU, the model file information being read by the CPU from a cache of the first preset file;
(3) the artificial intelligence engine requests its bound model file information from the CPU and receives the model file information sent by the CPU, the model file information being obtained by the CPU from the first preset file;
(4) the artificial intelligence engine receives its bound model file information sent by the CPU, the model file information being obtained by the CPU from the first preset file;
(5) the artificial intelligence engine directly acquires its bound model file information from the first preset file.
In ways (2) and (4), the CPU provides the model file information to the artificial intelligence engine without being requested; for example, the CPU automatically triggers the sending of the model file information after detecting that the artificial intelligence engine has started.
Assuming that the artificial intelligence engine started in FIG. 4 is "artificial intelligence engine 1" and the contents of the first preset file are "artificial intelligence engine 1: MPID1 or 1; artificial intelligence engine 2: MPID3 or 3; artificial intelligence engine 3: MPID4 or 4; artificial intelligence engine 4: ", then in S101 the model file information bound to "artificial intelligence engine 1", obtained from the first preset file, is "MPID1" or "1". A sketch of this acquisition step follows.
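A minimal sketch of S101 under the assumptions of the earlier binding-table sketch; the cpu_channel.request_model_info() call and the preset-file path are hypothetical stand-ins for whatever inter-processor channel ("connecting line") and file layout the device actually uses.

```python
def get_bound_model_info(engine_name, cpu_channel=None,
                         preset_path="engine_bindings.txt"):
    """S101: either ask the CPU for the bound model file information (ways (1)-(4);
    from the engine's side the reply looks the same whether the CPU answered from
    its cache or from the file), or read the first preset file directly (way (5))."""
    if cpu_channel is not None:
        return cpu_channel.request_model_info(engine_name)   # hypothetical interface
    with open(preset_path, encoding="utf-8") as f:
        for line in f:
            engine, _, model = line.partition(":")
            if engine.strip() == engine_name:
                info = model.strip()
                if not info:
                    raise RuntimeError(
                        "no binding; prompt the user to bind a model file first")
                return info                                   # e.g. "MPID1" or "1"
    raise KeyError(engine_name)
```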
S102: obtaining a model file according to the model file information, and loading an algorithm corresponding to the model file;
According to the model file information "MPID1" or "1" obtained in S101, the name of the model file bound to the artificial intelligence engine is determined to be "MPID1", and the file is acquired from the memory according to that name.
When the model file is a compressed or encrypted file, it is first decompressed or decrypted and then used to load the corresponding algorithm.
When the model file contains one or more algorithms, as shown in FIG. 1, the algorithms are loaded directly from the model file;
when the model file contains the configuration parameters of one or more algorithms, as shown in FIG. 2, a general algorithm without configured parameters is acquired and the configuration parameters from the model file are loaded into it, one general algorithm being configured with the configuration parameters of one algorithm to form a trained algorithm;
when the model file contains the names and/or storage location information of one or more algorithms, as shown in FIG. 3, each algorithm corresponding to the model file is loaded according to its name or storage location. A sketch of the three cases follows.
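The three loading paths can be summarized in the following Python sketch; it assumes the (decompressed or decrypted) model file has already been parsed into a dictionary, and the keys "algorithms", "parameters", and "index" as well as the loader callbacks are illustrative assumptions only.

```python
def load_algorithms(model, load_blob, build_from_params, load_by_name):
    """S102: load the algorithm(s) corresponding to a model file for the three
    variants of FIGS. 1-3 (hypothetical dictionary layout)."""
    if "algorithms" in model:        # FIG. 1: trained algorithms stored directly
        return [load_blob(blob) for blob in model["algorithms"]]
    if "parameters" in model:        # FIG. 2: configuration parameters only;
        # one general algorithm is configured with one parameter set
        return [build_from_params(params) for params in model["parameters"]]
    # FIG. 3: only names and/or storage locations; fetch from the memory directories
    return [load_by_name(entry["name"], entry.get("location"))
            for entry in model["index"]]
```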
After the algorithm is loaded, the artificial intelligence engine can be used to execute the AI task.
S103: when the model file corresponds to at least 2 different algorithms, acquiring the dependency relationship among the different algorithms according to the model file information or the model file, calling the algorithm corresponding to the model file according to the dependency relationship, and analyzing the object to be identified of the artificial intelligence task;
the dependency relationship between different algorithms in the model file may be stored in a second preset file corresponding to the model file, the second preset file corresponds to the model file one to one, as shown in fig. 1, fig. 2, or fig. 3, the second preset file of the model file "MPID1" is "MPID1.Cfg", the name of the second preset file may be determined according to the name of the model file or the model file information, the file is obtained from the memory according to the name of the second preset file, and the dependency relationship between different algorithms in the model file is obtained by analyzing the file content.
The second preset file can also save: algorithm number, algorithm name, algorithm type, and other information.
The method for mapping the model file and the second preset file is not limited, and other methods such as key value mapping in a dictionary can be used for the method.
The dependency relationship between different algorithms in the model file may also be stored in a header of the model file, for example, the document header of the model file in fig. 3, and the document header is parsed to obtain the dependency relationship between different algorithms corresponding to the model file.
The dependency relationship is the calling relationship of the algorithms corresponding to the model file. For example, if the model file "MPID1" includes five algorithms and the dependency relationship is Algorithm 1 → Algorithm 3 → Algorithm 2 → Algorithm 5 → Algorithm 4, then the image processor inputs the object to be identified (an image or a video data stream) into Algorithm 1, the output of Algorithm 1 is input into Algorithm 3, the output of Algorithm 3 is input into Algorithm 2, the output of Algorithm 2 is input into Algorithm 5, the output of Algorithm 5 is input into Algorithm 4, and Algorithm 4 outputs the analysis result of the object to be identified, as in the sketch below.
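The following sketch shows how the dependency could be parsed from a second preset file and used to chain the algorithm calls; the "dependency = algorithm1 -> algorithm3 -> ..." key/value syntax is an assumption, since the patent only requires that the call order be recoverable from the second preset file or the model file header.

```python
def parse_dependency(cfg_path):
    """Read an ordered algorithm list from a hypothetical .cfg line such as
    'dependency = algorithm1 -> algorithm3 -> algorithm2 -> algorithm5 -> algorithm4'."""
    with open(cfg_path, encoding="utf-8") as f:
        for line in f:
            key, _, value = line.partition("=")
            if key.strip() == "dependency":
                return [name.strip() for name in value.split("->")]
    return []

def analyze(object_to_identify, algorithms, order):
    """Call the algorithms along the dependency chain: the output of each algorithm
    is fed as input to the next; the last output is the analysis result."""
    data = object_to_identify              # image or video data stream
    for name in order:
        data = algorithms[name](data)      # algorithms: dict of name -> callable
    return data
```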
S104: when the model file corresponds to a single algorithm, calling the algorithm corresponding to the model file and analyzing the object to be identified of the artificial intelligence task.
The principle of S104 is the same as S103, and is not described again.
In addition, when the model file corresponds to a single algorithm, the artificial intelligence engine can also be bound directly to the algorithm; in that case the model file information is the name of the algorithm, the configuration parameters of the algorithm, or the storage location information of the algorithm, and the AI task is executed directly after the algorithm is loaded according to the model file information.
Optionally, during execution of the AI task, when it is detected that the model file bound to the artificial intelligence engine has been changed, for example the artificial intelligence engine was originally bound to "MPID2" and is now bound to "MPID3", the artificial intelligence engine is restarted and the process returns to S101;
during execution of the AI task, when it is detected that the artificial intelligence engine has been unbound from its model file, for example the artificial intelligence engine was originally bound to MPID1, that binding is released, and the engine is no longer bound to any model file, the artificial intelligence engine is shut down.
Alternatively, when the AI task stops executing, the artificial intelligence engine is shut down. This supervision is sketched below.
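A sketch of this supervision logic is given below; poll_binding(), task_running(), and the engine control calls are hypothetical stand-ins, and only the reactions (restart on rebinding, shutdown on unbinding or when the AI task stops) come from the description above.

```python
def supervise(engine, engine_name, current_model, poll_binding, task_running):
    """Restart on rebinding, shut down on unbinding or when the AI task stops
    (hypothetical interfaces)."""
    while task_running():
        bound = poll_binding(engine_name)   # e.g. re-read the first preset file
        if bound is None:                   # unbound from all model files
            engine.shutdown()
            return
        if bound != current_model:          # e.g. rebound from "MPID2" to "MPID3"
            engine.restart()                # the restarted engine returns to S101
            return
    engine.shutdown()                       # the AI task stopped executing
```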
The method of FIG. 4 may be built into a host program of the artificial intelligence engine, whose start-up triggers automatic execution of the method of FIG. 4; alternatively, the method of FIG. 4 is executed by a CPU controlling the artificial intelligence engine.
The invention has the technical effects that:
(1) Compared with the prior art, in which a program loads the algorithm, the method and the device load the algorithm according to the model file without depending on a program, so loading is more flexible and the program does not need to be modified when the algorithm is changed.
(2) The algorithms are called directly through the dependency relationship among them, so calling is more flexible and the program does not need to be modified when the algorithm is changed.
(3) The first preset file uniformly manages the binding relationships between all artificial intelligence engines and model files, and the algorithms are loaded according to the model file and called directly through the dependency relationship among them without depending on a program, so a user can flexibly adjust the model file bound to an artificial intelligence engine by modifying the first preset file, realizing flexible switching among different algorithms.
Moreover, the model file and the second preset file can be exported from an (intelligent) algorithm training platform and can be flexibly produced or replaced, so the disclosed method can switch flexibly among different model files and is easy to implement.
Embodiment 2
In another embodiment, a method for processing an artificial intelligence task is provided, which is applied to a CPU, and as shown in fig. 5, includes:
S201: when the artificial intelligence engine is bound to a model file associated with the artificial intelligence task, obtaining the model file information bound to the artificial intelligence engine;
When the artificial intelligence engine needs to execute an AI task, the CPU detects whether the artificial intelligence engine has a binding relationship. If so, S201 is executed; if not, the user is prompted to establish the binding relationship, and S201 is executed after it is established.
When the first preset file uniformly stores the model file information bound to different artificial intelligence engines, the ways of acquiring the model file information bound to the artificial intelligence engine include:
(1) the CPU reads the first preset file into a cache in advance, and in S201 directly reads the model file information bound to the artificial intelligence engine from the cache of the first preset file;
(2) the CPU directly reads the model file information bound to the artificial intelligence engine from the first preset file.
The advantage of reading from the cache is the higher reading speed.
S202: starting or restarting the artificial intelligence engine, sending the model file information to the artificial intelligence engine, enabling the artificial intelligence engine to obtain the model file according to the model file information, and loading an algorithm corresponding to the model file;
When the model file contains the algorithm, "loading the algorithm corresponding to the model file" in S202 is implemented by loading the algorithm contained in the model file;
when the model file contains the name and/or storage location information of the algorithm, "loading the algorithm corresponding to the model file" in S202 is implemented by loading each algorithm corresponding to the model file according to its name and/or storage location;
when the model file contains the configuration parameters of the algorithm, "loading the algorithm corresponding to the model file" in S202 is implemented by acquiring a general algorithm without configured parameters and loading the configuration parameters of the model file into it, one general algorithm being configured with the configuration parameters of one algorithm to form a trained algorithm.
S203: when the model file corresponds to at least 2 different algorithms, enabling the artificial intelligence engine to obtain the dependency relationship among the different algorithms according to the model file information or the model file, calling the algorithm corresponding to the model file according to the dependency relationship, and analyzing the object to be identified of the artificial intelligence task;
The dependency relationship among the different algorithms in the model file in S203 may be stored in a second preset file corresponding to the model file; the second preset file is determined according to the model file information or the model file, and the dependency relationship among the different algorithms corresponding to the model file is obtained by parsing the second preset file.
Optionally, the method in FIG. 5 further includes S204: when the model file corresponds to a single algorithm, enabling the artificial intelligence engine to call the algorithm corresponding to the model file and analyze the object to be identified of the artificial intelligence task.
Optionally, when the CPU detects that the model file bound to the artificial intelligence engine has been replaced, the artificial intelligence engine is restarted and the process returns to "obtaining the model file information bound to the artificial intelligence engine" in S201;
when the CPU detects that the artificial intelligence engine has been unbound from the model file, the artificial intelligence engine is shut down;
alternatively, when the AI task stops executing, the artificial intelligence engine is shut down. The CPU-side flow is sketched below.
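A minimal sketch of the CPU-side flow of FIG. 5 (S201 to S203/S204) under the same assumptions as the engine-side sketches; the engine interface names are hypothetical.

```python
def cpu_run_ai_task(engine, engine_name, read_binding_cached):
    """CPU-side flow of FIG. 5 (hypothetical interfaces)."""
    info = read_binding_cached(engine_name)      # S201: from the cached first preset file
    if info is None:
        raise RuntimeError("no binding; prompt the user to bind a model file, then retry")
    engine.start_or_restart()                    # S202: start (or restart) the engine
    engine.send_model_file_info(info)            # ... and push the model file information
    # S203/S204 then run inside the engine: it loads the algorithm(s) for the model
    # file, resolves any dependency among them, and analyzes the object to be
    # identified of the AI task.
```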
The principle of the second embodiment is the same as that of the first embodiment, and the relevant points can be referred to each other.
Embodiment 3
In another embodiment, the present invention also provides a non-transitory computer readable storage medium storing instructions that, when executed by an artificial intelligence engine, cause the artificial intelligence engine to perform the steps in the method of processing an artificial intelligence task in the first embodiment; or the instructions, when executed by the CPU, cause the CPU to perform the steps in the processing method for the artificial intelligence task in the second embodiment.
In another embodiment, the present invention also provides an apparatus for processing an artificial intelligence task, comprising an artificial intelligence engine and the non-transitory computer-readable storage medium described above, or comprising a CPU and the non-transitory computer-readable storage medium described above.
Specifically, when the processing device for the artificial intelligence task is located in the artificial intelligence engine and the artificial intelligence engine is bound to a model file associated with the artificial intelligence task, as shown in FIG. 6, the processing device comprises:
an acquisition module, configured to acquire the model file information bound to the artificial intelligence engine;
a loading module, configured to obtain the model file according to the model file information and load the algorithm corresponding to the model file;
an identification module 1, configured to, when the model file corresponds to at least 2 different algorithms, acquire the dependency relationship among the different algorithms according to the model file information or the model file, call the algorithms corresponding to the model file according to the dependency relationship, and analyze the object to be identified of the artificial intelligence task.
Optionally, the device further comprises an identification module 2, configured to, when the model file corresponds to a single algorithm, call the algorithm corresponding to the model file and analyze the object to be identified of the artificial intelligence task.
Alternatively, the control device for the artificial intelligence engine is located in the CPU, and as shown in fig. 7, includes:
an acquisition module, configured to obtain the model file information bound to the artificial intelligence engine when the artificial intelligence engine is bound to a model file associated with the artificial intelligence task;
a loading module, configured to start or restart the artificial intelligence engine and send the model file information to the artificial intelligence engine, so that the artificial intelligence engine obtains the model file according to the model file information and loads the algorithm corresponding to the model file;
an identification module 1, configured to, when the model file corresponds to at least 2 different algorithms, enable the artificial intelligence engine to obtain the dependency relationship among the different algorithms according to the model file information or the model file, call the algorithms corresponding to the model file according to the dependency relationship, and analyze the object to be identified of the artificial intelligence task.
Optionally, the device further comprises an identification module 2, configured to, when the model file corresponds to a single algorithm, enable the artificial intelligence engine to call the algorithm corresponding to the model file and analyze the object to be identified of the artificial intelligence task.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (12)

1. A method of processing an artificial intelligence task for application to an artificial intelligence engine, the method comprising, when the artificial intelligence engine is bound to a model file associated with the artificial intelligence task:
acquiring the model file information bound to the artificial intelligence engine; wherein different model files are used for executing different artificial intelligence tasks; one model file is bound to one or more artificial intelligence engines, and one artificial intelligence engine is bound to one model file or to none;
obtaining the model file according to the model file information, and loading an algorithm corresponding to the model file;
when the model file corresponds to at least 2 different algorithms, acquiring the dependency relationship among the different algorithms according to the model file information or the model file, calling the algorithm corresponding to the model file according to the dependency relationship, and analyzing the object to be identified of the artificial intelligence task.
2. The method of claim 1, wherein the model file contains an algorithm, or wherein the model file contains configuration parameters for an algorithm;
loading the algorithm corresponding to the model file comprises the following steps: when the model file contains the algorithm, loading the algorithm contained in the model file; and when the model file contains the configuration parameters of the algorithm, acquiring a general algorithm without configured parameters and loading the configuration parameters into the general algorithm without configured parameters.
3. The method of claim 1, wherein obtaining model file information bound by the artificial intelligence engine comprises:
the artificial intelligence engine receives the model file information bound to the artificial intelligence engine and sent by the CPU, wherein the model file information is read by the CPU from a cache of a first preset file, or the model file information is obtained by the CPU from the first preset file;
or the artificial intelligence engine acquires the model file information bound by the artificial intelligence engine from a first preset file;
the first preset file contains model file information bound by different artificial intelligence engines.
4. The method of claim 1, wherein obtaining dependencies between different algorithms in the model file comprises: and determining a second preset file, and analyzing the second preset file to obtain the dependency relationship among different algorithms in the model file.
5. The method according to any one of claims 1 to 4, wherein:
when detecting that the model file bound to the artificial intelligence engine has been replaced, restarting the artificial intelligence engine, and returning to execute the step of acquiring the model file information bound to the artificial intelligence engine;
or when detecting that the artificial intelligence engine has been unbound from the model file, shutting down the artificial intelligence engine.
6. A processing method of an artificial intelligence task is applied to a CPU and comprises the following steps:
when an artificial intelligence engine is bound to a model file associated with an artificial intelligence task, obtaining the model file information bound to the artificial intelligence engine; wherein different model files are used for executing different artificial intelligence tasks; one model file is bound to one or more artificial intelligence engines, and one artificial intelligence engine is bound to one model file or to none;
starting or restarting an artificial intelligence engine, sending the model file information to the artificial intelligence engine, enabling the artificial intelligence engine to obtain the model file according to the model file information, and loading an algorithm corresponding to the model file;
and when the model file corresponds to at least 2 different algorithms, enabling the artificial intelligence engine to obtain the dependency relationship among the different algorithms according to the model file information or the model file, calling the algorithm corresponding to the model file according to the dependency relationship, and analyzing the object to be identified of the artificial intelligence task.
7. The method of claim 6, wherein
the model file contains at least 2 algorithms, or the model file contains configuration parameters of the algorithms;
loading the algorithm corresponding to the model file comprises the following steps: when the model file contains the algorithm, loading the algorithm contained in the model file; and when the model file contains the configuration parameters of the algorithm, acquiring a general algorithm without configured parameters and loading the configuration parameters into the general algorithm without configured parameters.
8. The method of claim 6, wherein obtaining model file information bound by the artificial intelligence engine comprises:
and reading the model file information bound by the artificial intelligence engine from the cache of a first preset file, or acquiring the model file information bound by the artificial intelligence engine from the first preset file, wherein the first preset file contains the model file information bound by different artificial intelligence engines.
9. The method of claim 6, wherein obtaining dependencies between different algorithms in the model file comprises: and determining a second preset file, and analyzing the second preset file to obtain the dependency relationship among different algorithms in the model file.
10. The method according to any one of claims 6 to 9, wherein:
when detecting that the model file bound to the artificial intelligence engine has been changed, restarting the artificial intelligence engine, and returning to execute the step of acquiring the model file information bound to the artificial intelligence engine;
or when detecting that the artificial intelligence engine has been unbound from the model file, shutting down the artificial intelligence engine.
11. A non-transitory computer readable storage medium storing instructions, wherein,
the instructions, when executed by an artificial intelligence engine, cause the artificial intelligence engine to perform the steps in the method of processing an artificial intelligence task of any of claims 1 to 5;
or, when executed by a CPU, cause the CPU to carry out the steps in the method of processing an artificial intelligence task according to any one of claims 6 to 10.
12. An apparatus for processing an artificial intelligence task, comprising an artificial intelligence engine and the non-transitory computer-readable storage medium of claim 11, or comprising a CPU and the non-transitory computer-readable storage medium of claim 11.
CN201910925505.1A 2019-09-27 2019-09-27 Method and device for processing artificial intelligence task Active CN112667303B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910925505.1A CN112667303B (en) 2019-09-27 2019-09-27 Method and device for processing artificial intelligence task

Publications (2)

Publication Number Publication Date
CN112667303A CN112667303A (en) 2021-04-16
CN112667303B (en) 2023-04-07

Family

ID=75399778

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910925505.1A Active CN112667303B (en) 2019-09-27 2019-09-27 Method and device for processing artificial intelligence task

Country Status (1)

Country Link
CN (1) CN112667303B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018236674A1 (en) * 2017-06-23 2018-12-27 Bonsai Al, Inc. For hiearchical decomposition deep reinforcement learning for an artificial intelligence model
CN109754011A (en) * 2018-12-29 2019-05-14 北京中科寒武纪科技有限公司 Data processing method, device and Related product based on Caffe

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104133667B (en) * 2013-11-29 2017-08-01 腾讯科技(成都)有限公司 Realize method, device and the artificial intelligence editing machine of artificial intelligence behavior
US11836650B2 (en) * 2016-01-27 2023-12-05 Microsoft Technology Licensing, Llc Artificial intelligence engine for mixing and enhancing features from one or more trained pre-existing machine-learning models
CN108229686B (en) * 2016-12-14 2022-07-05 阿里巴巴集团控股有限公司 Model training and predicting method and device, electronic equipment and machine learning platform
CN108280091B (en) * 2017-01-06 2022-05-17 阿里巴巴集团控股有限公司 Task request execution method and device
CN107168743B (en) * 2017-05-22 2019-04-16 哈尔滨工程大学 Algorithm reconstructs device and method
CN109857475B (en) * 2018-12-27 2020-06-16 深圳云天励飞技术有限公司 Framework management method and device


Also Published As

Publication number Publication date
CN112667303A (en) 2021-04-16


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant