CN111767981B - Approximate calculation method of Mish activation function - Google Patents

Approximate calculation method of Mish activation function

Info

Publication number
CN111767981B
CN111767981B CN202010431133.XA
Authority
CN
China
Prior art keywords
function
neural network
convolutional neural
hard
activation function
Prior art date
Legal status
Active
Application number
CN202010431133.XA
Other languages
Chinese (zh)
Other versions
CN111767981A (en
Inventor
胡炳然 (Hu Bingran)
Current Assignee
Unisound Intelligent Technology Co Ltd
Xiamen Yunzhixin Intelligent Technology Co Ltd
Original Assignee
Unisound Intelligent Technology Co Ltd
Xiamen Yunzhixin Intelligent Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Unisound Intelligent Technology Co Ltd, Xiamen Yunzhixin Intelligent Technology Co Ltd filed Critical Unisound Intelligent Technology Co Ltd
Priority to CN202010431133.XA priority Critical patent/CN111767981B/en
Publication of CN111767981A publication Critical patent/CN111767981A/en
Application granted granted Critical
Publication of CN111767981B publication Critical patent/CN111767981B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/048 Activation functions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Molecular Biology (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)
  • Complex Calculations (AREA)

Abstract

The invention provides an approximate calculation method for the Mish activation function. A piecewise-approximation approach is used to construct a hard-Mish piecewise function of simpler form, whose computational complexity is far lower than that of the Mish activation function, so the time consumed by function evaluation is effectively reduced. In addition, the hard-Mish piecewise function reduces the number of accesses to system memory and the memory occupancy during operation, and because the error between the hard-Mish piecewise function and the Mish activation function in the calculation result is small, the applicability of the Mish activation function is effectively improved.

Description

Approximate calculation method of Mish activation function
Technical Field
The invention relates to the technical field of computer data processing, in particular to an approximate calculation method of a Mish activation function.
Background
The Mish activation function is a recently proposed activation function whose mathematical form is Mish(x) = x·tanh(ln(1+e^x)). Compared with the earlier sigmoid and swish activation functions, the Mish activation function delivers better performance in the detection and recognition of many computer vision tasks. However, because its calculation formula is complex, the Mish activation function occupies more system memory and takes longer to evaluate, which severely restricts its applicability. The prior art therefore lacks a function approximation calculation method that can approximate the Mish activation function with only a small calculation error.
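For reference, the Mish function named above can be evaluated directly from its definition. The following sketch is ours, not part of the patent; the large-input shortcut in softplus is an added numerical-stability assumption. It uses only the Python standard library:

```python
import math

def softplus(x: float) -> float:
    # ln(1 + e^x), computed stably: for large x, ln(1 + e^x) ~ x
    if x > 20.0:
        return x
    return math.log1p(math.exp(x))

def mish(x: float) -> float:
    # Mish(x) = x * tanh(ln(1 + e^x))
    return x * math.tanh(softplus(x))

print(mish(0.0))             # 0.0
print(round(mish(1.0), 4))   # 0.8651
```

This exact form requires an exponential, a logarithm, and a hyperbolic tangent per input, which is the cost the patent's piecewise approximation is meant to avoid.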
Disclosure of Invention
Aiming at the defects in the prior art, the invention provides an approximate calculation method of the Mish activation function, which comprises the following steps: step S1, constructing a convolutional neural network; step S2, constructing a hard-Mish function according to a piecewise approximation principle applied to the Mish activation function, and taking the hard-Mish function as a new activation function; step S3, training the convolutional neural network according to the hard-Mish function, so as to update the weight parameters of the convolutional neural network to a convergence state; step S4, outputting the convolutional neural network with the weight parameters in a convergence state. The approximate calculation method thus uses piecewise approximation to construct a hard-Mish piecewise function of simpler form, whose computational complexity is far lower than that of the Mish activation function, so the time consumed by function evaluation is effectively reduced; in addition, the hard-Mish piecewise function reduces the number of accesses to system memory and the memory occupancy during operation, and because the error between the hard-Mish piecewise function and the Mish activation function in the calculation result is small, the applicability of the Mish activation function is effectively improved.
The invention provides an approximate calculation method of a Mish activation function, which is characterized by comprising the following steps of:
step S1, constructing a convolutional neural network;
step S2, constructing a hard-Mish function according to a piecewise approximation principle applied to the Mish activation function, and taking the hard-Mish function as a new activation function;
step S3, training the convolutional neural network according to the hard-Mish function, so as to update the weight parameters of the convolutional neural network to a convergence state;
step S4, outputting the convolutional neural network with the weight parameters in a convergence state;
further, in the step S1, constructing a convolutional neural network specifically includes,
constructing the convolutional neural network model according to the data calculation scene and/or calculation data type of the Mish activation function;
further, in the step S2, constructing a hard-Mish function according to the piecewise approximation principle applied to the Mish activation function specifically includes,
step S201, interval segmentation is carried out on the Mish activation function, so that function curve parameters of the Mish activation function in a plurality of different intervals are obtained;
step S202, performing infinite approximation calculation processing on the function curve parameters of each interval, and constructing therefrom a hard-Mish function as shown in the following formula (1);
further, in the step S201, performing interval segmentation on the Mish activation function so as to obtain function curve parameters of the Mish activation function in several different intervals specifically includes,
step S2011, setting a mathematical formula of the Mish activation function as shown in the following formula (2)
Mish(x) = x·tanh(ln(1+e^x)) (2);
step S2012, segmenting the variable x in the Mish activation function according to the intervals (-∞, -3], (-3, 3) and [3, +∞);
step S2013, calculating the function calculation result offset of the function curve corresponding to each of the three intervals of the Mish activation function according to the three interval segmentation results of the variable x, and taking the function calculation result offset as the function curve parameter;
further, in the step S202, performing infinite approximation calculation processing on the function curve parameters of each interval, and constructing therefrom the hard-Mish function shown in formula (1), specifically includes,
step S2021, fitting a linear function or a nonlinear function to the function calculation result offset of the function curve corresponding to each interval, so as to correspondingly obtain three sub-functions;
step S2022, performing a function connection smoothing process on the three sub-functions at the interval connection points x = -3 and x = 3, so as to construct and form a hard-Mish function as shown in the formula (1);
further, in the step S3, training the convolutional neural network according to the hard-Mish function so as to update the weight parameters of the convolutional neural network to a convergence state specifically includes,
step S301, performing iterative training on the convolutional neural network for a predetermined number of times according to the hard-Mish function, so as to obtain the weight parameters of the convolutional neural network;
step S302, judging whether the current weight parameter of the convolutional neural network is in a convergence interval range;
step S303, if the current weight parameter is determined to be in the convergence interval range, stopping the iterative training of the convolutional neural network;
step S304, if the current weight parameter is determined not to be in the convergence interval range, training the convolutional neural network one iteration at a time according to the hard-Mish function until the weight parameter obtained after training is in the convergence interval range;
further, in the step S4, outputting the convolutional neural network with the weight parameters in a converged state specifically includes,
and outputting the convolutional neural network with the weight parameters in the convergence state to a corresponding visual computing task, so as to execute corresponding visual computing analysis.
Compared with the prior art, the approximate calculation method of the Mish activation function comprises the following steps: step S1, constructing a convolutional neural network; step S2, constructing a hard-Mish function according to a piecewise approximation principle applied to the Mish activation function, and taking the hard-Mish function as a new activation function; step S3, training the convolutional neural network according to the hard-Mish function, so as to update the weight parameters of the convolutional neural network to a convergence state; step S4, outputting the convolutional neural network with the weight parameters in a convergence state. The method thus uses piecewise approximation to construct a hard-Mish piecewise function of simpler form, whose computational complexity is far lower than that of the Mish activation function, so the time consumed by function evaluation is effectively reduced; in addition, the hard-Mish piecewise function reduces the number of accesses to system memory and the memory occupancy during operation, and because the error between the hard-Mish piecewise function and the Mish activation function in the calculation result is small, the applicability of the Mish activation function is effectively improved.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims thereof as well as the appended drawings.
The technical scheme of the invention is further described in detail through the drawings and the embodiments.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, it being obvious that the drawings in the following description are only some embodiments of the invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic overall flow chart of the approximate calculation method of the Mish activation function provided by the invention.
Fig. 2 is a schematic diagram of the refinement flow of step S2 in the approximate calculation method of the Mish activation function provided by the present invention.
Fig. 3 is a schematic diagram of the refinement flow of step S3 in the approximate calculation method of the Mish activation function provided by the present invention.
Fig. 4 is an actual error diagram between the hard-Mish function obtained by the approximate calculation method of the Mish activation function provided by the invention and the Mish activation function.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Referring to fig. 1, an overall flow chart of the approximate calculation method of the Mish activation function according to an embodiment of the present invention is shown. The approximate calculation method of the Mish activation function comprises the following steps:
step S1, constructing a convolutional neural network;
step S2, constructing a hard-Mish function according to a piecewise approximation principle applied to the Mish activation function, and taking the hard-Mish function as a new activation function;
step S3, training the convolutional neural network according to the hard-Mish function, so as to update the weight parameters of the convolutional neural network to a convergence state;
and S4, outputting the convolutional neural network with the weight parameters in a convergence state.
Preferably, in this step S1, constructing a convolutional neural network specifically includes,
and constructing the convolutional neural network model according to the data calculation scene and/or calculation data type of the Mish activation function.
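Step S1 only states that a convolutional neural network is built to match the data calculation scene and data type; the patent does not fix any architecture. As a purely hypothetical illustration, the sketch below runs one 3x3 convolution followed by a Mish activation in plain NumPy (the layer shape, input size, and function names are our assumptions):

```python
import numpy as np

def mish(x):
    # Mish(x) = x * tanh(ln(1 + e^x)), vectorized over an array
    return x * np.tanh(np.log1p(np.exp(x)))

def conv2d_valid(image, kernel):
    # Naive single-channel 'valid' convolution (cross-correlation,
    # as is conventional in CNN frameworks); fine for small inputs.
    h, w = image.shape
    kh, kw = kernel.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

rng = np.random.default_rng(0)
image = rng.standard_normal((8, 8))     # toy single-channel input
kernel = rng.standard_normal((3, 3))    # one learnable filter
feature_map = mish(conv2d_valid(image, kernel))
print(feature_map.shape)  # (6, 6)
```

In a real network, the hard-Mish function of step S2 would replace `mish` here so that every activation avoids the exponential, logarithm, and tanh evaluations.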
Preferably, in the step S4, outputting the convolutional neural network with the weight parameters in a converged state specifically includes,
and outputting the convolutional neural network with the weight parameters in the convergence state to a corresponding visual computing task, so as to execute corresponding visual computing analysis.
Referring to fig. 2, a detailed flowchart of step S2 in the approximate calculation method of the Mish activation function according to the embodiment of the present invention is shown. In this step S2, constructing a hard-Mish function according to the piecewise approximation principle applied to the Mish activation function specifically includes,
step S201, the Mish activation function is segmented in intervals, so that function curve parameters of the Mish activation function in a plurality of different intervals are obtained;
step S202, performing infinite approximation calculation processing on the function curve parameters of each interval, and constructing therefrom a hard-Mish function as shown in the following formula (1);
Preferably, in the step S201, performing interval segmentation on the Mish activation function so as to obtain function curve parameters of the Mish activation function in several different intervals specifically includes,
step S2011, setting a mathematical formula of the Mish activation function as shown in the following formula (2)
Mish(x) = x·tanh(ln(1+e^x)) (2);
step S2012, segmenting the variable x in the Mish activation function according to the intervals (-∞, -3], (-3, 3) and [3, +∞);
step S2013, calculating the function calculation result offset of the function curve corresponding to each of the three intervals of the Mish activation function according to the three interval segmentation results of the variable x, and taking the function calculation result offset as the function curve parameter.
Preferably, in this step S202, performing infinite approximation calculation processing on the function curve parameters of each interval, and constructing therefrom a hard-Mish function as shown in formula (1), specifically includes,
step S2021, fitting a linear function or a nonlinear function to the function calculation result offset of the function curve corresponding to each interval, so as to correspondingly obtain three sub-functions;
in step S2022, at the interval connection points x = -3 and x = 3, the three sub-functions are subjected to a function connection smoothing process, so as to construct and form a hard-Mish function as shown in the formula (1).
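The text above gives the three intervals and the connection points but does not reproduce formula (1) itself, so the exact piecewise definition is unknown from this extract. The sketch below is a hypothetical stand-in of ours that matches the described structure: zero on (-∞, -3] (where Mish tends to 0), the identity on [3, +∞) (where Mish is nearly linear), and a quadratic middle segment chosen so the pieces meet continuously at x = -3 and x = 3. The patent's actual formula (1) may differ:

```python
def hard_mish_sketch(x: float) -> float:
    # Hypothetical piecewise stand-in for the patent's formula (1),
    # which is not reproduced in this text. Three intervals:
    # (-inf, -3], (-3, 3), [3, +inf), continuous at x = -3 and x = 3.
    if x <= -3.0:
        return 0.0                   # Mish(x) -> 0 as x -> -inf
    if x >= 3.0:
        return x                     # Mish(x) ~ x for large x
    return x * (x + 3.0) / 6.0       # quadratic segment joining the ends

# Continuity at the connection points:
print(hard_mish_sketch(-3.0), hard_mish_sketch(3.0))  # 0.0 3.0
```

Only comparisons, one multiply, one add, and one divide are needed per input, which illustrates why such a piecewise form is far cheaper than the exponential/log/tanh chain of the exact Mish.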
Referring to fig. 3, a detailed flowchart of step S3 in the approximate calculation method of the Mish activation function according to the embodiment of the present invention is shown. In the step S3, training the convolutional neural network according to the hard-Mish function so as to update the weight parameters of the convolutional neural network to a convergence state specifically includes,
step S301, performing iterative training on the convolutional neural network for a predetermined number of times according to the hard-Mish function, so as to obtain the weight parameters of the convolutional neural network;
step S302, judging whether the current weight parameter of the convolutional neural network is in a convergence interval range;
step S303, if the current weight parameter is determined to be in the convergence interval range, stopping the iterative training on the convolutional neural network;
step S304, if it is determined that the current weight parameter is not in the convergence interval range, training the convolutional neural network one iteration at a time according to the hard-Mish function until the weight parameter obtained after training is in the convergence interval range.
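The control flow of steps S301 to S304 (a predetermined block of training iterations, then single iterations until the weights land in a convergence range) can be sketched on a toy one-parameter least-squares problem. The model, learning rate, thresholds, and the use of a numerical gradient below are all our assumptions, not the patent's:

```python
import numpy as np

def mish(x):
    return x * np.tanh(np.log1p(np.exp(x)))

def train_until_converged(x, y, lr=0.01, warmup_iters=50, tol=1e-4, max_steps=10000):
    # Fit y ~ mish(w * x) by gradient descent on squared error,
    # using a central-difference gradient to keep the sketch short.
    w = 0.5

    def loss(w):
        return float(np.mean((mish(w * x) - y) ** 2))

    def step(w):
        eps = 1e-6
        grad = (loss(w + eps) - loss(w - eps)) / (2 * eps)
        return w - lr * grad

    # Step S301: a predetermined number of training iterations.
    for _ in range(warmup_iters):
        w = step(w)
    # Steps S302-S304: single iterations until the update falls inside
    # the "convergence interval" (here: |delta w| < tol).
    for _ in range(max_steps):
        w_new = step(w)
        if abs(w_new - w) < tol:
            return w_new, True
        w = w_new
    return w, False

rng = np.random.default_rng(1)
x = rng.standard_normal(64)
y = mish(2.0 * x)                      # synthetic target, true weight 2.0
w, converged = train_until_converged(x, y)
print(converged)
```

The "convergence interval" test here is a simple update-magnitude threshold; the patent leaves the precise convergence criterion unspecified.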
Referring to fig. 4, an actual error diagram between the hard-Mish function obtained by the approximate calculation method of the Mish activation function according to the embodiment of the present invention and the Mish activation function is shown. As can be seen from fig. 4, the dashed line represents the curve of the Mish activation function and the solid line represents the curve of the hard-Mish function. The two function curves show a certain deviation over the interval (-5, -1), but the deviation stays within (-0.3, +0.3), while the deviation of the two function curves over the interval [-1, +∞) is substantially negligible; in particular, over the interval [3, +∞) the two function curves can be considered substantially coincident.
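The deviation bounds described for fig. 4 can be checked numerically. Since formula (1) is not reproduced in this text, the check below substitutes our hypothetical piecewise stand-in with breakpoints at x = -3 and x = 3; with it, the (-0.3, +0.3) bound on (-5, -1) and near-coincidence on [3, +∞) both hold, though the patent's actual formula (1) presumably tracks Mish even more closely elsewhere:

```python
import numpy as np

def mish(x):
    return x * np.tanh(np.log1p(np.exp(x)))

def hard_mish_sketch(x):
    # Hypothetical stand-in for the patent's formula (1) (not given in
    # this text), continuous at the breakpoints x = -3 and x = 3.
    return np.where(x <= -3.0, 0.0,
           np.where(x >= 3.0, x, x * (x + 3.0) / 6.0))

xs_left = np.linspace(-5.0, -1.0, 2001)
dev_left = np.max(np.abs(mish(xs_left) - hard_mish_sketch(xs_left)))

xs_right = np.linspace(3.0, 10.0, 2001)
dev_right = np.max(np.abs(mish(xs_right) - hard_mish_sketch(xs_right)))

print(dev_left < 0.3, dev_right < 0.05)  # True True
```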
As can be seen from the foregoing embodiment, the approximate calculation method of the Mish activation function uses piecewise approximation to construct a hard-Mish piecewise function of simpler form, whose computational complexity is far lower than that of the Mish activation function, so the time consumed by function evaluation is effectively reduced; in addition, the hard-Mish piecewise function reduces the number of accesses to system memory and the memory occupancy during operation, and because the error between the hard-Mish piecewise function and the Mish activation function in the calculation result is small, the applicability of the Mish activation function is effectively improved.
It will be apparent to those skilled in the art that various modifications and variations can be made to the present invention without departing from the spirit or scope of the invention. Thus, it is intended that the present invention also include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.

Claims (4)

  1. The approximate calculation method of the Mish activation function is characterized by comprising the following steps of:
    step S1, constructing a convolutional neural network;
    step S2, constructing a hard-Mish function according to a piecewise approximation principle applied to the Mish activation function, and taking the hard-Mish function as a new activation function;
    step S3, training the convolutional neural network according to the hard-Mish function, so as to update the weight parameters of the convolutional neural network to a convergence state;
    step S4, outputting the convolutional neural network with the weight parameters in a convergence state;
    in the step S2, constructing a hard-Mish function according to the piecewise approximation principle applied to the Mish activation function specifically includes,
    step S201, interval segmentation is carried out on the Mish activation function, so that function curve parameters of the Mish activation function in a plurality of different intervals are obtained;
    step S202, performing infinite approximation calculation processing on the function curve parameters of each interval, and constructing therefrom a hard-Mish function as shown in the following formula (1)
    (1);
    in the step S201, performing interval segmentation on the Mish activation function so as to obtain function curve parameters of the Mish activation function in several different intervals specifically includes,
    step S2011, setting a mathematical formula of the Mish activation function as shown in the following formula (2)
    Mish(x) = x·tanh(ln(1+e^x)) (2);
    step S2012, segmenting the variable x in the Mish activation function according to the intervals (-∞, -3], (-3, 3) and [3, +∞);
    step S2013, according to the three interval segmentation results of the variable x, calculating the function calculation result offset of the function curve corresponding to each of the three intervals of the Mish activation function, and taking the function calculation result offset as the function curve parameter;
    in the step S4, outputting the convolutional neural network with the weight parameters in a converged state specifically includes,
    outputting the convolutional neural network with the weight parameters in a convergence state to a corresponding visual computing task, so as to execute corresponding visual computing analysis;
    the calculation complexity of the hard-Mish piecewise function is lower than that of the Mish activation function, so that the time consumed by function evaluation is effectively reduced;
    the hard-Mish piecewise function can effectively reduce the number of accesses to the system memory and the memory occupancy rate during operation.
  2. The approximate calculation method of the Mish activation function according to claim 1, wherein:
    in the step S1, constructing a convolutional neural network specifically includes,
    and constructing the convolutional neural network model according to the data calculation scene and/or calculation data type of the Mish activation function.
  3. The approximate calculation method of the Mish activation function according to claim 1, wherein:
    in the step S202, performing infinite approximation calculation processing on the function curve parameters of each interval, and constructing therefrom the hard-Mish function shown in formula (1), specifically includes,
    step S2021, calculating the result offset of the function curve corresponding to each interval, and performing simulation of a linear function or a nonlinear function to correspondingly obtain three sub-functions;
    step S2022, performing a function connection smoothing process on the three sub-functions at the interval connection points x = -3 and x = 3, so as to construct and form a hard-Mish function as shown in the formula (1).
  4. The approximate calculation method of the Mish activation function according to claim 1, wherein:
    in the step S3, training the convolutional neural network according to the hard-Mish function so as to update the weight parameters of the convolutional neural network to a convergence state specifically includes,
    step S301, performing iterative training on the convolutional neural network for a predetermined number of times according to the hard-Mish function, so as to obtain the weight parameters of the convolutional neural network;
    step S302, judging whether the current weight parameter of the convolutional neural network is in a convergence interval range;
    step S303, if the current weight parameter is determined to be in the convergence interval range, stopping the iterative training of the convolutional neural network;
    step S304, if the current weight parameter is determined not to be in the convergence interval range, training the convolutional neural network one iteration at a time according to the hard-Mish function until the weight parameter obtained after training is in the convergence interval range.
CN202010431133.XA 2020-05-20 2020-05-20 Approximate calculation method of Mish activation function Active CN111767981B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010431133.XA CN111767981B (en) 2020-05-20 2020-05-20 Approximate calculation method of Mish activation function

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010431133.XA CN111767981B (en) 2020-05-20 2020-05-20 Approximate calculation method of Mish activation function

Publications (2)

Publication Number Publication Date
CN111767981A CN111767981A (en) 2020-10-13
CN111767981B true CN111767981B (en) 2023-12-05

Family

ID=72719522

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010431133.XA Active CN111767981B (en) 2020-05-20 2020-05-20 Approximate calculation method of Mish activation function

Country Status (1)

Country Link
CN (1) CN111767981B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114861859A (en) * 2021-01-20 2022-08-05 华为技术有限公司 Training method of neural network model, data processing method and device

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101404071A (en) * 2008-11-07 2009-04-08 湖南大学 Electronic circuit fault diagnosis neural network method based on grouping particle swarm algorithm
CN104408302A (en) * 2014-11-19 2015-03-11 北京航空航天大学 Bearing variable-condition fault diagnosis method based on LMD-SVD (Local Mean Decomposition-Singular Value Decomposition) and extreme learning machine
CN110569816A (en) * 2019-09-12 2019-12-13 中国石油大学(华东) Traffic sign classification algorithm based on new activation function

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11775805B2 (en) * 2018-06-29 2023-10-03 Intel Coroporation Deep neural network architecture using piecewise linear approximation

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101404071A (en) * 2008-11-07 2009-04-08 湖南大学 Electronic circuit fault diagnosis neural network method based on grouping particle swarm algorithm
CN104408302A (en) * 2014-11-19 2015-03-11 北京航空航天大学 Bearing variable-condition fault diagnosis method based on LMD-SVD (Local Mean Decomposition-Singular Value Decomposition) and extreme learning machine
CN110569816A (en) * 2019-09-12 2019-12-13 中国石油大学(华东) Traffic sign classification algorithm based on new activation function

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
"FPGA Implementation of a BPSK 1D-CNN Demodulator"; Yan Liu et al.; Applied Sciences; full text *
"FPGA-based Hardware Implementation and Improvement of BP Neural Network" (基于FPGA的BP神经网络硬件实现及改进); Yang Jingming et al.; Computer Engineering and Design (计算机工程与设计); full text *

Also Published As

Publication number Publication date
CN111767981A (en) 2020-10-13

Similar Documents

Publication Publication Date Title
CN112101530B (en) Neural network training method, device, equipment and storage medium
CN111160531B (en) Distributed training method and device for neural network model and electronic equipment
CN112099345B (en) Fuzzy tracking control method, system and medium based on input hysteresis
CN112818588B (en) Optimal power flow calculation method, device and storage medium of power system
WO2022110640A1 (en) Model optimization method and apparatus, computer device and storage medium
CN111767981B (en) Approximate calculation method of Mish activation function
CN115562037B (en) Nonlinear multi-intelligent system control method, device, equipment and application
CN113419424B (en) Modeling reinforcement learning robot control method and system for reducing overestimation
CN110765843A (en) Face verification method and device, computer equipment and storage medium
CN116702335B (en) Optimal arrangement method and switching method for hydrogen concentration sensor of fuel cell automobile
CN111898752A (en) Apparatus and method for performing LSTM neural network operations
CN110275895B (en) Filling equipment, device and method for missing traffic data
CN117014507A (en) Training method of task unloading model, task unloading method and device
CN111931553A (en) Remote sensing data enhanced generation countermeasure network method, system, storage medium and application
CN114614797B (en) Adaptive filtering method and system based on generalized maximum asymmetric correlation entropy criterion
CN116259328A (en) Post-training quantization method, apparatus and storage medium for audio noise reduction
CN110533158B (en) Model construction method, system and non-volatile computer readable recording medium
CN114065913A (en) Model quantization method and device and terminal equipment
CN112929006A (en) Variable step size selection updating kernel minimum mean square adaptive filter
CN106846341B (en) Method and device for determining point cloud area growth threshold of complex outer plate of ship body
CN111582473B (en) Method and device for generating countermeasure sample
CN114580578B (en) Method and device for training distributed random optimization model with constraints and terminal
CN113298256B (en) Adaptive curve learning method and device, computer equipment and storage medium
CN117706539A (en) Target tracking method, device and equipment for self-adaptive adjustment and storage medium
CN117115551A (en) Image detection model training method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant