CN115271053A - AI processor operator overflow optimization method and system under CANN computing architecture - Google Patents

AI processor operator overflow optimization method and system under CANN computing architecture

Info

Publication number
CN115271053A
Authority
CN
China
Prior art keywords
operator
overflow
data
cann
processor
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210635859.4A
Other languages
Chinese (zh)
Other versions
CN115271053B (en)
Inventor
孙亚楠
欧玉威
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sichuan University
Original Assignee
Sichuan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sichuan University filed Critical Sichuan University
Priority to CN202210635859.4A priority Critical patent/CN115271053B/en
Publication of CN115271053A publication Critical patent/CN115271053A/en
Application granted granted Critical
Publication of CN115271053B publication Critical patent/CN115271053B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/06Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N3/063Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The invention relates to an AI processor operator overflow optimization method and system under the CANN computing architecture, belonging to the technical field of AI processors. The method finds the operators causing the overflow problem and sets them to compute with 32-bit floating-point numbers, thereby fundamentally avoiding operator overflow. By recognizing the overflow signatures of Ascend AI processor operators, overflowing operators are found and their data recorded. A method for analyzing NPU operator overflow data is provided: a conservative strategy finds the root operators causing the overflow problem, gradually resolving the precision problems caused by operator overflow while preserving NPU computing performance to the greatest extent. The built-in optimization strategy of the CANN computing architecture for NPU operators is applied first; on that basis, operators that still overflow are added to a blacklist and forced to compute with 32-bit floating-point numbers, which improves the efficiency of the overall overflow-resolution process.

Description

AI processor operator overflow optimization method and system under CANN computing architecture
Technical Field
The invention belongs to the technical field of AI processors, and particularly relates to an AI processor operator overflow optimization method and system under a CANN computing architecture.
Background
CANN (Compute Architecture for Neural Networks) is a heterogeneous computing architecture proposed by Huawei for AI scenarios; by providing multi-level programming interfaces, it supports users in quickly building AI applications and services on the Ascend platform. Among its capabilities, model development is one of the important basic functions provided by CANN. CANN supports deep neural network model training with the TensorFlow framework on the Ascend AI processor (NPU).
During deep neural network model training on the NPU, operators are the basic units with which the NPU supports neural network computation, including convolution, pooling, and so on; they underpin neural network training and inference acceleration. Due to limitations of the hardware itself, data is typically stored as 16-bit or 32-bit floating-point numbers, so each operator must define its computation data type in advance, i.e., 16-bit or 32-bit floating point. Computing with 16-bit floating-point numbers speeds up the processor, but when the stored data is too large or too small to be represented as a 16-bit floating-point number, data overflow inevitably occurs, which degrades the precision of model training and may even destroy the training process entirely. Computing with 32-bit floating-point numbers guarantees training precision but inevitably increases the NPU's computational overhead and reduces computing performance.
To avoid the operator data overflow problem as much as possible while preserving NPU computing performance, two solutions have been proposed under the Ascend CANN computing architecture:
1. Using mixed precision.
2. Enabling loss scaling.
Mixed precision accelerates the deep neural network training process by mixing the 16-bit and 32-bit floating-point data types, reducing memory usage and memory traffic; this allows larger neural networks to be trained while essentially maintaining the network accuracy achievable by training with 32-bit floating-point numbers.
Enabling loss scaling means multiplying the computed loss by a loss scale factor during the forward pass of the deep neural network model, which amplifies the gradients during the backward pass; this avoids, as far as possible, the overflow caused by gradient values too small to be represented as 16-bit floating-point numbers. After the parameter gradients are aggregated, and before the optimizer updates the parameters, the aggregated gradient values are divided by the loss scale factor to restore them.
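As a numerical illustration of the loss-scaling mechanism just described (a minimal sketch in NumPy, not the CANN or TensorFlow API; `grad_fp16` is a hypothetical stand-in for a backward pass that produces float16 gradients):

```python
import numpy as np

# Sketch of loss scaling: the loss is multiplied by a scale factor before
# backpropagation so that small gradients remain representable in float16,
# and the gradients are divided by the same factor in float32 before the
# parameter update.
SCALE = 1024.0

def grad_fp16(loss):
    # Hypothetical backward pass: pretend the gradient equals the loss,
    # stored as a 16-bit floating-point number (as on the NPU).
    return np.float16(loss)

true_grad = 1e-8                        # below float16's smallest subnormal (~6e-8)
naive_grad = grad_fp16(true_grad)       # underflows to 0.0 in float16
scaled_grad = grad_fp16(true_grad * SCALE)          # representable after scaling
recovered = float(np.float32(scaled_grad) / SCALE)  # unscale in float32
```

Without scaling the gradient vanishes entirely; with a scale factor of 1024 it survives the float16 round trip and is restored to within a fraction of a percent.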
Although both schemes help preserve NPU computing performance while avoiding the operator data overflow problem as much as possible, neither fundamentally solves the data overflow caused by the data storage type, so the overflow problem persists. In practice, users often cannot know in advance which operators will overflow when stored as 16-bit floating point, and whether a built-in optimization strategy is used or overflow-prone operators are assigned to 16-bit floating-point computation, precision problems in model training may result. Moreover, enabling loss scaling can largely avoid the overflow caused by gradient values too small to be represented as 16-bit floating-point numbers, but it cannot solve the overflow caused by gradient values too large to be represented. A method is therefore needed to solve the operator data overflow problem of the Ascend AI processor under the CANN computing architecture.
Disclosure of Invention
The present invention provides a method and a system for optimizing AI processor operator overflow under the CANN computing architecture, which solve the above technical problems in the prior art, i.e., fundamentally solve the operator data overflow problem of the Ascend AI processor under the CANN computing architecture.
In order to realize the purpose, the technical scheme of the invention is as follows:
the AI processor operator overflow optimization method under the CANN computing architecture comprises the following steps:
s1: finding out operators with overflow problems through overflow detection;
s2: performing overflow data analysis on the operator found in the step S1, and judging whether the operator is a root operator of data abnormality or not;
s3: modifying the operator black, white and gray list on the basis of the step S2, so as to adjust the operator optimization strategy;
wherein, the steps S1-S3 are iterated until the overflow of the operator data of the promotion AI processor is completely resolved.
Further, step S1 is specifically as follows:
when data is stored in the NPU as a 16-bit floating-point number, data overflow involves two cases, namely:
65504 appears in an input or output value; since 65504 is the largest number representable as a 16-bit floating-point number, this means the data is too large to be represented in 16 bits and has overflowed;
NaN appears in an input or output value; this is caused by uncomputable conditions such as zero divided by zero, infinity divided by infinity, infinity minus infinity, and infinity multiplied by zero; in essence, zeros and infinities arise during computation because data has overflowed;
whether an operator has data overflow can be judged by detecting whether 65504 or NaN appears in the input or output data of each operator during model training; if overflow exists, the overflowing operator data is recorded and the output is passed to the next operator; if there is no overflow, the operator data is not recorded and the output is passed to the next operator.
Further, step S2 is specifically as follows:
first, the input data values of the overflow data are checked; if the input data contains 65504 or NaN, the current operator's overflow data may have been passed in from a forward operator, and it cannot then be determined whether the current operator caused the data overflow, so the forward operator must be analyzed instead; if the input data contains no 65504 or NaN, the output data values are checked for further analysis; if the output data contains 65504 or NaN, the current operator caused the data overflow and is recorded; otherwise it is not recorded.
Further, step S3 is specifically as follows:
when the NPU is used for deep neural network training under the CANN computing architecture, the operator optimization strategy is configured through a blacklist, a whitelist, and a graylist; wherein:
the blacklist contains operators whose current 32-bit floating-point type is not allowed to be reduced to 16-bit floating point;
the whitelist contains operators whose current 32-bit floating-point type is allowed to be reduced to 16-bit floating point;
the graylist means that the current operator's mixed-precision handling follows the previous operator, i.e., if the previous operator supports precision reduction, so does the current operator; if the previous operator does not allow precision reduction, neither does the current operator;
and according to the operator overflow analysis results, the operator optimization strategy is adjusted by modifying the operator blacklist, and operators causing the overflow problem are set to compute with 32-bit floating-point numbers, thereby fundamentally solving the operator data overflow problem of the AI processor under the CANN computing architecture.
Further, the AI processor under the CANN computing architecture is specifically the Huawei Ascend AI processor under the CANN computing architecture.
The AI processor operator overflow optimization system under the CANN computing architecture is used for realizing the AI processor operator overflow optimization method under the CANN computing architecture.
Compared with the prior art, the invention has the following beneficial effects:
One beneficial effect of this scheme is that a method is provided for solving the NPU operator data overflow problem under the CANN computing architecture: the operators causing the overflow problem are found and set to compute with 32-bit floating-point numbers, fundamentally avoiding operator overflow. A method for detecting NPU operator overflow under the CANN computing architecture is provided, which finds overflowing operators and records their data by recognizing the overflow signatures of Ascend AI processor operators. A method for analyzing NPU operator overflow data is provided: a conservative strategy finds the root operators causing the overflow problem, gradually resolving the precision problems caused by operator overflow while preserving NPU computing performance to the greatest extent. The built-in optimization strategy of the CANN computing architecture for NPU operators is applied first; on that basis, operators that still overflow are added to a blacklist and forced to compute with 32-bit floating-point numbers, which improves the efficiency of the overall overflow-resolution process.
Drawings
Fig. 1 is a schematic overall flow chart of the method according to the embodiment of the present application.
Fig. 2 is a schematic diagram of overflow detection according to an embodiment of the present application.
Fig. 3 is a schematic diagram of an overflow data analysis process according to an embodiment of the present application.
Fig. 4 is a schematic flow chart of an adjustment operator optimization strategy according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the detailed description and specific examples, while indicating embodiments of the invention, are given by way of illustration only, not by way of limitation, i.e., the embodiments described are intended as a selection of the best mode contemplated for carrying out the invention, not as a full mode. The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present invention, as presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present invention without making any creative effort, shall fall within the protection scope of the present invention. It is noted that relational terms such as "first" and "second," and the like, may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions.
Also, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of additional like elements in the process, method, article, or apparatus that comprises the element.
The features and properties of the present invention are described in further detail below with reference to examples.
Example (b):
an AI processor operator overflow optimization method under a CANN computing architecture is provided, which comprises the following steps:
s1: finding out operators with overflow problems through overflow detection;
s2: performing overflow data analysis on the operator found in the step S1, and judging whether the operator is a root operator of data abnormity;
s3: modifying the operator black, white and gray list on the basis of the step S2, so as to adjust the operator optimization strategy;
wherein, the steps S1-S3 are iterated until the overflow of the operator data of the promotion AI processor is completely resolved.
In step S1:
in order to solve the problem of overflow of operator data of the shangteng AI processor under the Hua CaNN computing architecture, we first design an overflow detection method to find out the operator with overflow problem. The object of overflow detection is the output of each operator in the neural network model, and the schematic diagram is shown in fig. 2 (a).
To detect overflow in an operator's output, we must first clarify what forms operator data overflow takes. When data is stored in the NPU as a 16-bit floating-point number, data overflow includes two cases:
1. 65504 appears in an input or output value; since 65504 is the largest number representable as a 16-bit floating-point number, this means the data is too large to be represented in 16 bits and has overflowed.
2. NaN appears in an input or output value; this is mainly caused by uncomputable conditions such as zero divided by zero, infinity divided by infinity, infinity minus infinity, and infinity multiplied by zero; in essence, zeros and infinities arise during computation because data has overflowed. Therefore, whether an operator has data overflow can be judged by detecting whether 65504 or NaN appears in the input or output data of each operator during model training. If overflow exists, the overflowing operator data is recorded and the output is passed to the next operator; if there is no overflow, the operator data is not recorded and the output is passed to the next operator. The overflow detection flow chart is shown in Fig. 2(b).
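The two overflow signatures above can be sketched as a simple check (an illustrative NumPy sketch; the function name `has_overflow` is an assumption, not the CANN overflow-detection interface):

```python
import numpy as np

FP16_MAX = 65504.0  # largest finite value representable as a 16-bit float

def has_overflow(tensor):
    # An operator tensor is flagged when it contains the float16
    # saturation value 65504 (or beyond, in magnitude) or NaN,
    # matching the two overflow signatures described above.
    t = np.asarray(tensor, dtype=np.float32)
    return bool(np.any(np.abs(t) >= FP16_MAX) or np.any(np.isnan(t)))

normal_out = has_overflow([1.5, -2.0, 300.0])    # no overflow signature
saturated  = has_overflow([12.0, 65504.0])       # float16 maximum reached
nan_out    = has_overflow([0.0, float("nan")])   # NaN produced mid-computation
```

During training, such a check would run on every operator's input and output tensors, recording only the operators for which it returns true.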
In step S2:
although we have obtained overflow data, knowing which operators have data overflow, note that not all overflow data is the result of current operator data type missetting, since overflow data is likely to have been passed from previous operators. In order to find out the root operators which really cause the data overflow problem, a set of overflow data analysis scheme is designed, and a flow chart is shown in FIG. 3.
Firstly, looking up an input data value of overflow data, if the input data has 65504 or Nan, indicating that the overflow data of the current operator is most likely transmitted by a forward operator, and being incapable of determining whether the current operator causes data overflow, and analyzing the forward operator; if the input data does not exist 65504 or Nan, the output data value is reviewed for further analysis. If 65504 or Nan exists in the output data value at this time, which indicates that the current operator causes data overflow, the operator is recorded, otherwise, the operator is not recorded. It is worth noting that when the method processes the condition that the input data value overflows, a conservative strategy is adopted, namely when it is uncertain whether the overflow condition is caused by the current operator or transmitted by the forward operator, only the forward operator is analyzed, but the current operator is not analyzed. This is also the reason why the method needs to be iterated until the overflow problem of the operator data of the soar AI processor is completely solved.
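The conservative decision rule above can be written out as a small function (an illustrative sketch; the function name and its boolean arguments are assumptions, not part of the described tooling):

```python
def classify_operator(inputs_overflow, outputs_overflow):
    # Conservative strategy from the analysis scheme above:
    # - overflowing inputs: the fault may lie upstream, so defer to the
    #   forward operator rather than blaming the current one;
    # - clean inputs but overflowing outputs: the current operator is a
    #   root cause and is recorded;
    # - otherwise: no overflow at this operator.
    if inputs_overflow:
        return "analyze_forward_operator"
    if outputs_overflow:
        return "record_root_operator"
    return "no_overflow"
```

Because an operator with overflowing inputs is never recorded on the current pass, some root operators only surface after their upstream culprits are fixed, which is why steps S1-S3 iterate.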
In step S3:
when the NPU is used for deep neural network training under the CANN computing architecture, the optimization strategy of the operator is mainly configured through a black list, a white list and a gray list. The blacklist refers to that operators of the current 32-bit floating point number type are not allowed to be reduced to 16-bit floating point numbers; the white list refers to an operator which allows the current 32-bit floating point number type to be reduced to 16-bit floating point numbers; the grey list means that a mixed precision processing mechanism of the current operator is consistent with the previous operator, namely, if the previous operator supports precision reduction processing, the current operator also supports precision reduction; if the former operator does not allow precision reduction, the current operator does not support precision reduction.
According to the operator overflow analysis results, the operator optimization strategy is adjusted by modifying the operator blacklist, and operators causing the overflow problem are set to compute with 32-bit floating-point numbers, thereby fundamentally solving the operator data overflow problem of the Ascend AI processor under the Huawei CANN computing architecture; the flow chart is shown in Fig. 4.
Because NPU operators are of many types and reconfiguring all of them on every strategy adjustment is inefficient, an optimization strategy built into Huawei CANN, such as the automatic mixed-precision mode, is adopted to automatically reduce some 32-bit floating-point operators to 16-bit floating point; on this basis, the optimization strategies of individual operators are adjusted according to the overflow data analysis results. If the analysis shows that an operator causes the data overflow problem, that operator is added to the blacklist and forced to compute with 32-bit floating-point numbers, avoiding data overflow on the Ascend AI processor under the CANN computing architecture.
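The blacklist/whitelist/graylist semantics and the iterative blacklist adjustment can be sketched as follows (hypothetical operator names and function; this is not the CANN configuration format):

```python
def resolve_precisions(op_sequence, blacklist, whitelist):
    # Sketch of the list semantics described above: blacklisted operators
    # must stay float32, whitelisted operators may be reduced to float16,
    # and graylisted operators (everything else) follow the decision made
    # for the preceding operator.
    decisions = {}
    prev_allows_fp16 = True  # assume the mixed-precision default permits reduction
    for op in op_sequence:
        if op in blacklist:
            prev_allows_fp16 = False
        elif op in whitelist:
            prev_allows_fp16 = True
        decisions[op] = "float16" if prev_allows_fp16 else "float32"
    return decisions

# After overflow analysis flags "softmax", it is added to the blacklist
# and forced back to 32-bit floating point on the next iteration.
plan = resolve_precisions(
    ["conv", "gelu", "softmax", "matmul"],
    blacklist={"softmax"},
    whitelist={"conv"},
)
```

Graylisted operators such as `gelu` and `matmul` simply inherit the decision of the listed operator that precedes them, which is why blacklisting one root operator can pull its graylisted successors back to 32-bit as well.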
The AI processor operator overflow optimization system under the CANN computing architecture is further provided, and is used for realizing the AI processor operator overflow optimization method under the CANN computing architecture.
The above are preferred embodiments of the present invention; all changes made according to the technical scheme of the present invention that produce functional effects without exceeding the scope of the technical scheme of the present invention belong to the protection scope of the present invention.

Claims (6)

  1. An AI processor operator overflow optimization method under a CANN computing architecture, which is characterized by comprising the following steps:
    s1: finding out operators with overflow problems through overflow detection;
    s2: performing overflow data analysis on the operators found in step S1 to judge whether each is a root operator of the data abnormality;
    s3: modifying the operator blacklist, whitelist, and graylist on the basis of step S2, thereby adjusting the operator optimization strategy;
    wherein steps S1-S3 are iterated until the operator data overflow of the Ascend AI processor is completely resolved.
  2. The AI processor operator overflow optimization method under the CANN computing architecture of claim 1, wherein step S1 is specifically as follows:
    when data is stored in the NPU as a 16-bit floating-point number, data overflow involves two cases, namely:
    65504 appears in an input or output value; since 65504 is the largest number representable as a 16-bit floating-point number, this means the data is too large to be represented in 16 bits and has overflowed;
    NaN appears in an input or output value; this is caused by uncomputable conditions such as zero divided by zero, infinity divided by infinity, infinity minus infinity, and infinity multiplied by zero; in essence, zeros and infinities arise during computation because data has overflowed;
    whether an operator has data overflow can be judged by detecting whether 65504 or NaN appears in the input or output data of each operator during model training; if overflow exists, the overflowing operator data is recorded and the output is passed to the next operator; if there is no overflow, the operator data is not recorded and the output is passed to the next operator.
  3. The AI processor operator overflow optimization method under the CANN computing architecture of claim 2, wherein step S2 is specifically as follows:
    first, the input data values of the overflow data are checked; if the input data contains 65504 or NaN, the current operator's overflow data may have been passed in from a forward operator, and it cannot then be determined whether the current operator caused the data overflow, so the forward operator must be analyzed instead; if the input data contains no 65504 or NaN, the output data values are checked for further analysis; if the output data contains 65504 or NaN, the current operator caused the data overflow and is recorded; otherwise it is not recorded.
  4. The AI processor operator overflow optimization method under the CANN computing architecture of claim 3, wherein step S3 is specifically as follows:
    when the NPU is used for deep neural network training under the CANN computing architecture, the operator optimization strategy is configured through a blacklist, a whitelist, and a graylist; wherein:
    the blacklist contains operators whose current 32-bit floating-point type is not allowed to be reduced to 16-bit floating point;
    the whitelist contains operators whose current 32-bit floating-point type is allowed to be reduced to 16-bit floating point;
    the graylist means that the current operator's mixed-precision handling follows the previous operator, namely, if the previous operator supports precision reduction, so does the current operator; if the previous operator does not allow precision reduction, neither does the current operator;
    according to the operator overflow analysis results, the operator optimization strategy is adjusted by modifying the operator blacklist, and operators causing the overflow problem are set to compute with 32-bit floating-point numbers, thereby fundamentally solving the operator data overflow problem of the AI processor under the CANN computing architecture.
  5. The method of claim 4, wherein the AI processor under the CANN computing architecture is specifically the Huawei Ascend AI processor under the CANN computing architecture.
  6. An AI processor operator overflow optimization system under a CANN computing architecture, for implementing the AI processor operator overflow optimization method under a CANN computing architecture according to any of claims 1 to 5.
CN202210635859.4A 2022-06-07 2022-06-07 AI processor operator overflow optimization method and system under CANN computing architecture Active CN115271053B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210635859.4A CN115271053B (en) 2022-06-07 2022-06-07 AI processor operator overflow optimization method and system under CANN computing architecture

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210635859.4A CN115271053B (en) 2022-06-07 2022-06-07 AI processor operator overflow optimization method and system under CANN computing architecture

Publications (2)

Publication Number Publication Date
CN115271053A true CN115271053A (en) 2022-11-01
CN115271053B (en) 2023-05-23

Family

ID=83760063

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210635859.4A Active CN115271053B (en) 2022-06-07 2022-06-07 AI processor operator overflow optimization method and system under CANN computing architecture

Country Status (1)

Country Link
CN (1) CN115271053B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106445783A (en) * 2016-09-27 2017-02-22 北京金山安全软件有限公司 Method and device for detecting jamming of electronic equipment and electronic equipment
CN106569734A (en) * 2015-10-12 2017-04-19 北京国双科技有限公司 Method and device for repairing memory overflow during data shuffling
US20200043087A1 (en) * 2018-08-01 2020-02-06 Dynasty Marketplace, Inc. Artificial intelligence based digital leasing assistant
CN110868425A (en) * 2019-11-27 2020-03-06 上海三零卫士信息安全有限公司 Industrial control information safety monitoring system adopting black and white list for analysis
CN111353582A (en) * 2020-02-19 2020-06-30 四川大学 Particle swarm algorithm-based distributed deep learning parameter updating method
CN114461186A (en) * 2021-12-15 2022-05-10 中山大学 Method for automatically compiling and running C/C++ code for Huawei Ascend accelerator card


Also Published As

Publication number Publication date
CN115271053B (en) 2023-05-23

Similar Documents

Publication Publication Date Title
CN106557695B (en) A kind of malicious application detection method and system
CN113038302B (en) Flow prediction method and device and computer storage medium
CN112102959B (en) Server, data processing method, data processing device and readable storage medium
US11625583B2 (en) Quality monitoring and hidden quantization in artificial neural network computations
CN116361801B (en) Malicious software detection method and system based on semantic information of application program interface
CN112446869A (en) Unsupervised industrial product defect detection method and device based on deep learning
CN116166967B (en) Data processing method, equipment and storage medium based on meta learning and residual error network
CN113674322A (en) Motion state detection method and related device
CN115271053A (en) AI processor operator overflow optimization method and system under CANN computing architecture
CN111045912B (en) AI application performance evaluation method, device and related equipment
CN110751400B (en) Risk assessment method and device
CN116662904A (en) Method, device, computer equipment and medium for detecting variation of data type
CN116385278A (en) Low-light image visual characteristic self-supervision representation method and system
CN112333155B (en) Abnormal flow detection method and system, electronic equipment and storage medium
CN115019235B (en) Scene division and content detection method and system
CN110825855B (en) Response method and device based on artificial intelligence, computer equipment and storage medium
CN115048487B (en) Public opinion analysis method, device, computer equipment and medium based on artificial intelligence
CN116011593B (en) Method and device for determining energy consumption of network model
TWI762193B (en) Image defect detection method, image defect detection device, electronic device and storage media
CN117536709A (en) DPF regeneration control method, device and equipment based on machine learning
CN115967609A (en) Content delivery network fault detection method and equipment
CN115510077A (en) Method, device, equipment and medium for updating graph data based on message passing
CN112598118A (en) Method, device, storage medium and equipment for processing abnormal labeling in supervised learning
CN115629942A (en) Operation and maintenance data anomaly detection processing method and device based on big data and machine learning in trusted environment, processor and storage medium
CN117150882A (en) Engine oil consumption prediction method, system, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant