CN115271053B - AI processor operator overflow optimization method and system under CANN computing architecture - Google Patents

AI processor operator overflow optimization method and system under CANN computing architecture

Info

Publication number
CN115271053B
CN115271053B
Authority
CN
China
Prior art keywords
operator
overflow
data
cann
floating point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210635859.4A
Other languages
Chinese (zh)
Other versions
CN115271053A (en)
Inventor
孙亚楠
欧玉威
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sichuan University
Original Assignee
Sichuan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sichuan University
Priority to CN202210635859.4A
Publication of CN115271053A
Application granted
Publication of CN115271053B
Legal status: Active
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/06Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N3/063Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The invention relates to an AI processor operator overflow optimization method and system under the CANN computing architecture, belongs to the technical field of AI processors, and aims to find the operators causing overflow problems and set them to compute with 32-bit floating point numbers, so that operator overflow is fundamentally avoided. By identifying the overflow representations of Ascend AI processor operators, overflow operators are found and their data recorded. A method for analyzing NPU operator overflow data is provided: a conservative strategy finds the root operators causing the overflow problem, so that the precision problems caused by operator overflow are gradually resolved while the NPU's computing performance is preserved to the greatest extent. A method for adjusting the operator optimization strategy is also provided: the built-in optimization strategy of the CANN computing architecture for NPU operators is applied first, operators that still overflow are then added to the blacklist and forced to compute with 32-bit floating point numbers, which improves the efficiency of the overall overflow-resolution workflow.

Description

AI processor operator overflow optimization method and system under CANN computing architecture
Technical Field
The invention belongs to the technical field of AI processors, and particularly relates to an AI processor operator overflow optimization method and system under a CANN computing architecture.
Background
CANN (Compute Architecture for Neural Networks) is a heterogeneous computing architecture proposed by Huawei for AI scenarios; by providing multi-level programming interfaces, it supports users in quickly building AI applications and services on the Ascend platform. Model development is one of the important basic functions provided by CANN, which supports training deep neural network models on an Ascend AI processor (NPU) using the TensorFlow framework.
In the process of training a deep neural network model on the NPU, operators are the basic units with which the NPU supports neural network computation; they include convolution, pooling and other operations used to accelerate training and inference. Due to hardware limitations, data is typically stored as 16-bit or 32-bit floating point numbers, so each operator must define its computation data type in advance. Computing with 16-bit floating point numbers speeds up the processor, but when a value is too large or too small to be stored as a 16-bit floating point number, data overflow inevitably occurs, which degrades model training accuracy and may even destroy the training process entirely. Computing with 32-bit floating point numbers guarantees training accuracy, but inevitably increases the NPU's computational cost and reduces its performance.
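The float16 range limits described above are easy to demonstrate directly (a small NumPy sketch for illustration only; it is not part of the patented method):

```python
import numpy as np

# 65504 is the largest finite value a 16-bit float can hold.
x = np.float16(65504.0)

# Doubling it exceeds the float16 range, so the product overflows to inf.
print(x * np.float16(2.0))        # inf

# Values below the smallest float16 subnormal (~5.96e-8) underflow to zero.
print(np.float16(1e-8))           # 0.0

# The same arithmetic is stored without trouble as a 32-bit float.
print(np.float32(65504.0) * 2.0)  # 131008.0
```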
To avoid operator data overflow as far as possible while preserving NPU computing performance, two solutions exist under the CANN computing architecture:
1. Using mixed precision
2. Enabling loss scaling.
Mixed precision accelerates deep neural network training by mixing 16-bit and 32-bit floating point data types, reducing memory use and memory traffic; this allows larger networks to be trained while essentially maintaining the accuracy achieved by training entirely with 32-bit floating point numbers.
Enabling loss scaling means multiplying the computed loss by a loss-scale coefficient during the forward pass of the model, which amplifies the gradients during the backward pass; this avoids, as far as possible, the overflow problem in which small gradient values cannot be represented by 16-bit floating point numbers. After the parameter gradients are aggregated, and before the optimizer updates the parameters, the aggregated gradient values are divided by the loss-scale coefficient to restore them.
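The loss-scaling mechanism can be sketched as follows (a minimal static-scaling illustration; `grads_fn` is a hypothetical stand-in for a framework's backward pass, not a CANN API):

```python
def scaled_backward(loss, grads_fn, loss_scale=1024.0):
    """Illustrative static loss scaling.

    The loss is multiplied by `loss_scale` before the backward pass so that
    small gradients stay representable in float16; after aggregation the
    gradients are divided by the same factor to restore their true magnitude.
    """
    scaled_loss = loss * loss_scale
    scaled_grads = grads_fn(scaled_loss)           # gradients of the scaled loss
    return [g / loss_scale for g in scaled_grads]  # unscale before the update
```

In practice the scale factor is chosen (or adapted dynamically) so that the scaled gradients neither underflow toward zero nor overflow past the float16 maximum.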
While both of the above schemes help avoid operator data overflow while preserving NPU computing performance, neither fundamentally solves the overflow caused by the data storage type; the problem still exists. Mixed precision requires specifying the data type used by each operator, yet in practice users usually cannot know in advance which operators will overflow when stored as 16-bit floating point numbers; whether through the built-in optimization strategy or manual specification, an overflow-causing operator may still be assigned 16-bit floating point computation, leading to precision problems in model training. Loss scaling maximally avoids the overflow in which small gradient values cannot be represented by 16-bit floating point numbers, but it cannot solve the overflow in which large values exceed the 16-bit range. A method is therefore needed that fundamentally solves the Ascend AI processor operator data overflow problem under the CANN computing architecture.
Disclosure of Invention
The invention aims to provide an AI processor operator overflow optimization method and system under the CANN computing architecture, which solve the technical problem of the prior art, namely fundamentally solving Ascend AI processor operator data overflow under the CANN computing architecture.
In order to achieve the above purpose, the technical scheme of the invention is as follows:
the AI processor operator overflow optimization method under the CANN computing architecture comprises the following steps:
s1: finding out operators with overflow problems through overflow detection;
s2: carrying out overflow data analysis on the operator found in the step S1, and judging whether the operator is a root operator of data abnormality or not;
s3: modifying the operator black-white gray list on the basis of the step S2, thereby adjusting an operator optimization strategy;
wherein steps S1-S3 are iterated until the Ascend AI processor operator data overflow is completely resolved.
Further, the step S1 specifically includes:
when data is stored as 16-bit floating point numbers in the NPU, data overflow includes two cases, namely:
65504 appears in the input or output values; since 65504 is the largest number representable by a 16-bit floating point number, this means the data is too large to be represented and overflow has occurred;
NaN appears in the input or output values; this is produced by uncomputable operations such as zero divided by zero, infinity divided by infinity, infinity minus infinity, and zero multiplied by infinity, and in essence indicates that data overflow has caused zeros and infinities to appear during the computation;
by detecting whether 65504 or NaN appears in the input or output data of each operator during model training, it can be judged whether that operator has data overflow; if overflow exists, the overflowing operator data is recorded and the output is passed to the next operator; if no overflow exists, no operator data is recorded and the output is passed to the next operator.
Further, the step S2 specifically includes:
first, the input data values of the overflow data are checked; if 65504 or NaN exists in the input data, the overflow data of the current operator may have been passed down by a forward (upstream) operator, so it cannot be determined whether the current operator caused the data overflow, and the forward operator is analyzed instead; if 65504 or NaN does not exist in the input data, the output data values are examined for further analysis; if 65504 or NaN then exists in the output data values, the current operator caused the data overflow and is recorded; otherwise, the operator is not recorded.
Further, the step S3 specifically includes:
when the NPU is used for deep neural network training under the CANN computing architecture, the operator optimization strategy is configured through a blacklist, a whitelist and a graylist; wherein:
the blacklist refers to operators whose current 32-bit floating point type is not allowed to be reduced in precision to 16-bit floating point;
the whitelist refers to operators whose current 32-bit floating point type is allowed to be reduced in precision to 16-bit floating point;
the graylist means the mixed-precision handling of the current operator follows the previous operator: if the previous operator supports precision reduction, the current operator also supports it; if the previous operator does not allow precision reduction, neither does the current operator;
according to the operator overflow analysis results, the operator optimization strategy is adjusted by modifying the operator blacklist, and the operators causing the overflow problem are set to compute with 32-bit floating point numbers, thereby fundamentally solving the AI processor operator data overflow problem under the CANN computing architecture.
Further, the AI processor under the CANN computing architecture is specifically an Ascend AI processor under the CANN computing architecture.
An AI processor operator overflow optimization system under the CANN computing architecture is used to realize the above AI processor operator overflow optimization method under the CANN computing architecture.
Compared with the prior art, the invention has the following beneficial effects:
one of the beneficial effects of the scheme is that a method for solving the problem of NPU operator data overflow under the condition that the data is the CANN computing architecture is provided, operators causing the overflow problem are found out and set to be calculated by using 32-bit floating point numbers, and operator overflow is fundamentally avoided. A method for detecting overflow of NPU operator under CANN computing architecture is disclosed, which features that the overflow operator is found out and its data is recorded by recognizing the overflow expression form of operator in Al-rising processor. The method for analyzing the operator overflow data of the NPU is provided, a root operator causing the overflow problem is found out by a conservative strategy, and the accuracy problem caused by operator overflow is gradually solved while the calculation performance of the NPU is maintained to the maximum extent. The method for adjusting the operator optimization strategy is provided, firstly, the built-in optimization strategy of the NPU operator by using the CANN computing architecture is applied, on the basis, the operator still having the overflow problem is added into the blacklist, and the operator is forcedly appointed to use the 32-bit floating point number for operation, so that the efficiency of the whole flow for solving the overflow problem can be improved.
Drawings
Fig. 1 is a schematic general flow chart of a method according to an embodiment of the present application.
Fig. 2 is a schematic diagram of overflow detection according to an embodiment of the present application.
Fig. 3 is a schematic diagram of overflow data analysis flow according to an embodiment of the present application.
Fig. 4 is a schematic flow chart of an adjustment operator optimization strategy according to an embodiment of the present application.
Detailed Description
For the purpose of making the technical solution and advantages of the present invention more apparent, the present invention will be described in further detail with reference to the accompanying drawings and examples. It should be understood that the particular embodiments described herein are illustrative only and are not intended to limit the invention, i.e., the embodiments described are merely some, but not all, of the embodiments of the invention. The components of the embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the invention, as presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be made by a person skilled in the art without making any inventive effort, are intended to be within the scope of the present invention. It is noted that relational terms such as "first" and "second", and the like, are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions.
Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article or apparatus that comprises the element.
The features and capabilities of the present invention are described in further detail below in connection with the examples.
Examples:
the AI processor operator overflow optimization method under the CANN computing architecture comprises the following steps:
s1: finding out operators with overflow problems through overflow detection;
s2: carrying out overflow data analysis on the operator found in the step S1, and judging whether the operator is a root operator of data abnormality or not;
s3: modifying the operator black-white gray list on the basis of the step S2, thereby adjusting an operator optimization strategy;
wherein steps S1-S3 are iterated until the Ascend AI processor operator data overflow is completely resolved.
In step S1:
in order to solve the Ascend AI processor operator data overflow problem under the CANN computing architecture, we first design an overflow detection method to find the operators with overflow problems. The object of overflow detection is the output of each operator in the neural network model, as shown schematically in fig. 2(a).
To perform overflow detection on an operator's output, we must first specify the representations of operator data overflow. When data is stored as 16-bit floating point numbers in the NPU, data overflow includes two cases:
1. 65504 appears in the input or output values; since 65504 is the largest number representable by a 16-bit floating point number, this means the data is too large to be represented and overflow has occurred.
2. NaN appears in the input or output values, mainly produced by uncomputable operations such as zero divided by zero, infinity divided by infinity, infinity minus infinity, and infinity multiplied by zero; in essence, data overflow has caused zeros and infinities to appear during the computation. Thus, by detecting whether 65504 or NaN appears in the input or output data of each operator during model training, it can be judged whether that operator has data overflow. If overflow exists, the overflowing operator data is recorded and the output is passed to the next operator; if no overflow exists, no operator data is recorded and the output is passed to the next operator. The overflow detection flow chart is shown in fig. 2(b).
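The detection rule above can be sketched in Python (an illustrative sketch; the function names and the record format are assumptions, not the CANN profiling interface):

```python
import numpy as np

FP16_MAX = 65504.0  # largest finite number representable by a 16-bit float

def has_overflow(tensor):
    """Return True if a tensor shows either overflow signature:
    a magnitude at or beyond the float16 maximum 65504, or NaN."""
    t = np.asarray(tensor, dtype=np.float32)
    return bool(np.any(np.abs(t) >= FP16_MAX) or np.any(np.isnan(t)))

def detect_operator_overflow(op_name, inputs, outputs, overflow_log):
    """Record an operator whose input or output data overflows; the
    output is passed on to the next operator either way."""
    if any(has_overflow(x) for x in inputs) or any(has_overflow(y) for y in outputs):
        overflow_log.append({"op": op_name, "inputs": inputs, "outputs": outputs})
    return outputs
```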
In step S2:
although we have acquired the overflow data and know which operators overflowed, note that not all overflow data results from the current operator's data type being set improperly, since overflow data may well have been passed down from previous operators. In order to find the root operators that truly cause the data overflow problem, we design an overflow data analysis scheme, whose flow chart is shown in fig. 3.
First, the input data values of the overflow data are checked. If 65504 or NaN exists in the input data, the overflow data of the current operator may have been passed down by a forward operator; whether the current operator caused the overflow cannot yet be determined, so the forward operator is analyzed instead. If 65504 or NaN does not exist in the input data, the output data values are examined for further analysis. If 65504 or NaN exists in the output data values, the current operator caused the data overflow and is recorded; otherwise, it is not recorded. Note that a conservative strategy is adopted when the input data values overflow: when it cannot be determined whether the overflow was caused by the current operator or passed down by a forward operator, only the forward operator is analyzed, and the current operator is not. This is why the method must iterate until the Ascend AI processor operator data overflow problem is completely solved.
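The conservative analysis rule can be sketched as follows (illustrative only; each record in `overflow_records` is assumed to hold an operator name together with its input and output values, and `has_overflow` is the overflow predicate from the detection step):

```python
def find_root_operators(overflow_records, has_overflow):
    """Conservative root-cause analysis of overflow records.

    If overflow already appears in an operator's inputs, the abnormal values
    may merely have been propagated from an upstream operator, so the record
    is skipped (the upstream record is analysed instead). Only an operator
    whose inputs are clean but whose outputs overflow is a root operator.
    """
    roots = []
    for rec in overflow_records:
        if any(has_overflow(x) for x in rec["inputs"]):
            continue  # possibly inherited: defer to the forward operator
        if any(has_overflow(y) for y in rec["outputs"]):
            roots.append(rec["op"])
    return roots
```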
In step S3:
when the NPU is used for deep neural network training under the CANN computing architecture, the operator optimization strategy is mainly configured through a blacklist, a whitelist and a graylist. The blacklist refers to operators whose current 32-bit floating point type is not allowed to be reduced in precision to 16-bit floating point; the whitelist refers to operators whose current 32-bit floating point type is allowed to be reduced in precision to 16-bit floating point; the graylist means the mixed-precision handling of the current operator follows the previous operator: if the previous operator supports precision reduction, the current operator also supports it; if the previous operator does not allow precision reduction, neither does the current operator.
According to the operator overflow analysis results, the operator optimization strategy is adjusted by modifying the operator blacklist, and the operators causing the overflow problem are set to compute with 32-bit floating point numbers, thereby fundamentally solving the Ascend AI processor operator data overflow problem under the CANN computing architecture; the flow chart is shown in fig. 4.
Because NPU operators are numerous, reconfiguring every operator at each strategy adjustment would be inefficient. We therefore first adopt CANN's built-in optimization strategy, for example the automatic mixed-precision mode, which automatically reduces some operators from 32-bit to 16-bit floating point precision; on this basis, the strategies of individual operators are adjusted according to the overflow data analysis results. If the analysis shows that an operator causes data overflow, that operator is added to the blacklist and forced to compute with 32-bit floating point numbers, avoiding the Ascend AI processor operator data overflow problem under the CANN computing architecture.
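This adjustment step can be sketched as follows (illustrative only; the JSON layout and file name are assumptions — the actual blacklist configuration format depends on the CANN version in use):

```python
import json

def adjust_optimization_strategy(root_ops, mixlist_path="ops_mixlist.json"):
    """Add root overflow operators to the blacklist so they are forced to
    compute in 32-bit floats, while all other operators keep the built-in
    automatic mixed-precision strategy."""
    config = {"black-list": {"to-add": sorted(set(root_ops))}}
    with open(mixlist_path, "w") as f:
        json.dump(config, f, indent=2)
    return config
```

Only the root operators identified by the overflow analysis are reconfigured, which is what keeps each iteration of the workflow cheap.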
An AI processor operator overflow optimization system under the CANN computing architecture is also provided, for realizing the above AI processor operator overflow optimization method under the CANN computing architecture.
The above is a preferred embodiment of the present invention; all changes made according to the technical solution of the present invention, whose functional effects do not exceed the scope of the technical solution, belong to the protection scope of the present invention.

Claims (5)

  1. An AI processor operator overflow optimization method under a CANN computing architecture is characterized by comprising the following steps:
    s1: finding out operators with overflow problems through overflow detection;
    s2: carrying out overflow data analysis on the operator found in the step S1, and judging whether the operator is a root operator of data abnormality or not;
    s3: modifying the operator black-white gray list on the basis of the step S2, thereby adjusting an operator optimization strategy;
    wherein steps S1-S3 are iterated until the Ascend AI processor operator data overflow is completely resolved;
    the step S1 is specifically as follows:
    when data is stored as 16-bit floating point numbers in the NPU, data overflow includes two cases, namely:
    65504 appears in the input or output values; since 65504 is the largest number representable by a 16-bit floating point number, this means the data is too large to be represented and overflow has occurred;
    NaN appears in the input or output values; this is produced by uncomputable operations such as zero divided by zero, infinity divided by infinity, infinity minus infinity, and zero multiplied by infinity, and in essence indicates that data overflow has caused zeros and infinities to appear during the computation;
    by detecting whether 65504 or NaN appears in the input or output data of each operator during model training, it can be judged whether that operator has data overflow; if overflow exists, the overflowing operator data is recorded and the output is passed to the next operator; if no overflow exists, no operator data is recorded and the output is passed to the next operator.
  2. The method for optimizing AI processor operator overflow under the CANN computing architecture of claim 1, wherein step S2 is specifically as follows:
    first, the input data values of the overflow data are checked; if 65504 or NaN exists in the input data, the overflow data of the current operator may have been passed down by a forward operator, so it cannot be determined whether the current operator caused the data overflow, and the forward operator is analyzed instead; if 65504 or NaN does not exist in the input data, the output data values are examined for further analysis; if 65504 or NaN then exists in the output data values, the current operator caused the data overflow and is recorded; otherwise, the operator is not recorded.
  3. The method for optimizing AI processor operator overflow under the CANN computing architecture of claim 2, wherein step S3 is specifically as follows:
    when the NPU is used for deep neural network training under the CANN computing architecture, the operator optimization strategy is configured through a blacklist, a whitelist and a graylist; wherein:
    the blacklist refers to operators whose current 32-bit floating point type is not allowed to be reduced in precision to 16-bit floating point;
    the whitelist refers to operators whose current 32-bit floating point type is allowed to be reduced in precision to 16-bit floating point;
    the graylist means the mixed-precision handling of the current operator follows the previous operator: if the previous operator supports precision reduction, the current operator also supports it; if the previous operator does not allow precision reduction, neither does the current operator;
    according to the operator overflow analysis results, the operator optimization strategy is adjusted by modifying the operator blacklist, and the operators causing the overflow problem are set to compute with 32-bit floating point numbers, thereby fundamentally solving the AI processor operator data overflow problem under the CANN computing architecture.
  4. The method of claim 3, wherein the AI processor under the CANN computing architecture is specifically an Ascend AI processor under the CANN computing architecture.
  5. An AI processor operator overflow optimization system under a CANN computing architecture, configured to implement the AI processor operator overflow optimization method under a CANN computing architecture as claimed in any one of claims 1-4.
CN202210635859.4A 2022-06-07 2022-06-07 AI processor operator overflow optimization method and system under CANN computing architecture Active CN115271053B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210635859.4A CN115271053B (en) 2022-06-07 2022-06-07 AI processor operator overflow optimization method and system under CANN computing architecture

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210635859.4A CN115271053B (en) 2022-06-07 2022-06-07 AI processor operator overflow optimization method and system under CANN computing architecture

Publications (2)

Publication Number Publication Date
CN115271053A CN115271053A (en) 2022-11-01
CN115271053B true CN115271053B (en) 2023-05-23

Family

ID=83760063

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210635859.4A Active CN115271053B (en) 2022-06-07 2022-06-07 AI processor operator overflow optimization method and system under CANN computing architecture

Country Status (1)

Country Link
CN (1) CN115271053B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200043087A1 (en) * 2018-08-01 2020-02-06 Dynasty Marketplace, Inc. Artificial intelligence based digital leasing assistant
CN114461186A (en) * 2021-12-15 2022-05-10 中山大学 Method for automatically compiling and running C/C++ code for Huawei Ascend accelerator card

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106569734B (en) * 2015-10-12 2019-04-09 北京国双科技有限公司 The restorative procedure and device that memory overflows when data are shuffled
CN106445783A (en) * 2016-09-27 2017-02-22 北京金山安全软件有限公司 Method and device for detecting jamming of electronic equipment and electronic equipment
CN110868425A (en) * 2019-11-27 2020-03-06 上海三零卫士信息安全有限公司 Industrial control information safety monitoring system adopting black and white list for analysis
CN111353582B (en) * 2020-02-19 2022-11-29 四川大学 Particle swarm algorithm-based distributed deep learning parameter updating method


Also Published As

Publication number Publication date
CN115271053A (en) 2022-11-01

Similar Documents

Publication Publication Date Title
CN113038302B (en) Flow prediction method and device and computer storage medium
CN107945099B (en) OpenGL-oriented attribute configuration optimization method
CN115271053B (en) AI processor operator overflow optimization method and system under CANN computing architecture
CN107085532A (en) Task monitor method and device
CN114244681B (en) Equipment connection fault early warning method and device, storage medium and electronic equipment
CN109032853B (en) Method and device for controlling FPGA card group
CN110796115A (en) Image detection method and device, electronic equipment and readable storage medium
CN107844327B (en) Detection system and detection method for realizing context consistency
CN111401020A (en) Interface loading method and system and computing equipment
CN117349734B (en) Water meter equipment identification method and device, electronic equipment and storage medium
CN110633742A (en) Method for acquiring characteristic information and computer storage medium
CN115967609A (en) Content delivery network fault detection method and equipment
CN111598185B (en) Training data balancing method, device and system based on deep learning
CN116011593B (en) Method and device for determining energy consumption of network model
CN116227568A (en) Deep learning model optimization method and system based on infrared image
CN116467325A (en) Method, device, electronic equipment and medium for shortening ordering time
CN113297473A (en) Data pushing method and device based on cloud computing and cloud server
CN113870142A (en) Method, apparatus and computer program product for enhancing image contrast
CN115423719A (en) Depth image estimation method, depth image estimation device, electrical equipment and storage medium
CN117746174A (en) Model training method, device, computer equipment and storage medium
CN116095198A (en) Intelligent construction method and system for laboratory data center
CN116129239A (en) Small target detection method, device, equipment and storage medium
CN110554910A (en) Method and apparatus for optimizing distributed computing performance
CN115374936A (en) Neural network model clipping method, device, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant