WO2020082595A1 - Image classification method, terminal device, and computer-readable non-volatile storage medium - Google Patents

Image classification method, terminal device, and computer-readable non-volatile storage medium

Info

Publication number
WO2020082595A1
WO2020082595A1 (PCT/CN2018/124630)
Authority
WO
WIPO (PCT)
Prior art keywords
image classification
preset
classification model
trained
value
Prior art date
Application number
PCT/CN2018/124630
Other languages
English (en)
Chinese (zh)
Inventor
金戈
徐亮
肖京
Original Assignee
平安科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司
Publication of WO2020082595A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/24 - Classification techniques
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting

Definitions

  • The present application belongs to the field of computer technology, and in particular relates to an image classification method, a terminal device, and a computer-readable non-volatile storage medium.
  • Image classification models based on deep learning or other machine learning techniques must be trained before they can perform a specific image classification function, such as ethnicity classification.
  • Training the image classification model is in essence the process of optimizing the parameters in the image classification model, that is, of finding the optimal parameters of the model.
  • Once the optimal parameters have been found, the image classification model can be used to perform the corresponding image classification function.
  • Common gradient-based optimization algorithms, such as stochastic gradient descent, can generally be used to update the parameters in the image classification model in order to find the optimal parameters.
  • Specifically, stochastic gradient descent decides whether the model has found the optimal parameters by checking whether the loss function of the image classification model has reached its global minimum.
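  • As a rough illustration of the parameter-update process just described (this sketch is not taken from the patent; the model, loss_fn, and learning-rate names are placeholders), one plain stochastic gradient descent step in PyTorch might look like this:

```python
import torch

def sgd_step(model, loss_fn, images, labels, lr=0.01):
    """One plain stochastic gradient descent update: compute the loss on a
    batch, backpropagate, then move every parameter against its gradient."""
    loss = loss_fn(model(images), labels)
    model.zero_grad()
    loss.backward()
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is not None:
                p -= lr * p.grad
    return loss.item()
```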
  • However, saddle points in the loss function may prevent the optimization from converging to the global extremum point, in which case the optimal parameters of the image classification model cannot be determined.
  • Since the image classification model analyzes the characteristics of an input image based on the optimal parameters in the model, an image classification model whose optimal parameters cannot be determined suffers a drop in classification accuracy.
  • An embodiment of the present application provides an image classification method, a terminal device, and a computer-readable non-volatile storage medium, to solve the problem of low classification accuracy of image classification models in the prior art.
  • A first aspect of the embodiments of the present application provides an image classification method, including:
  • acquiring a target image to be classified;
  • performing feature extraction on the target image based on the optimal parameters in an image classification model to obtain image features, and performing classification prediction on the image features to obtain an image classification result, wherein the optimal parameters are obtained based on a preset noise value when the 2-norm (L2 norm) of the loss function of the image classification model is less than a first preset value, the preset noise value being used to make the model parameters determined by the trained image classification model avoid saddle points during iterative optimization; and
  • outputting the image classification result.
  • A second aspect of the embodiments of the present application provides a terminal device.
  • The terminal device includes a memory, a processor, and computer-readable instructions stored in the memory and executable on the processor.
  • The processor implements the following steps when executing the computer-readable instructions:
  • acquiring a target image to be classified;
  • performing feature extraction on the target image based on the optimal parameters in an image classification model to obtain image features, and performing classification prediction on the image features to obtain an image classification result, wherein the optimal parameters are obtained based on a preset noise value when the 2-norm of the loss function of the image classification model is less than a first preset value, the preset noise value being used to make the model parameters determined by the trained image classification model avoid saddle points during iterative optimization; and
  • outputting the image classification result.
  • A third aspect of the embodiments of the present application provides a terminal device, including:
  • an obtaining unit, used to obtain the target image to be classified;
  • an execution unit, configured to perform feature extraction on the target image based on the optimal parameters in the image classification model to obtain image features, and to perform classification prediction on the image features to obtain an image classification result, wherein the optimal parameters are obtained based on a preset noise value when the 2-norm of the loss function of the image classification model is less than a first preset value, and the preset noise value is used to make the model parameters determined by the trained image classification model avoid saddle points during iterative optimization; and
  • an output unit, used to output the image classification result.
  • A fourth aspect of the embodiments of the present application provides a computer-readable non-volatile storage medium storing computer-readable instructions which, when executed by a processor, implement the following steps:
  • acquiring a target image to be classified;
  • performing feature extraction on the target image based on the optimal parameters in an image classification model to obtain image features, and performing classification prediction on the image features to obtain an image classification result, wherein the optimal parameters are obtained based on a preset noise value when the 2-norm of the loss function of the image classification model is less than a first preset value, the preset noise value being used to make the model parameters determined by the trained image classification model avoid saddle points during iterative optimization; and
  • outputting the image classification result.
  • In the embodiments of the present application, the terminal device acquires the target image to be classified, performs feature extraction on the target image based on the optimal parameters in the image classification model to obtain image features, and performs classification prediction on the image features to obtain an image classification result.
  • The optimal parameters are obtained based on a preset noise value when the 2-norm of the loss function of the image classification model is less than a first preset value, and the preset noise value is used to make the model parameters determined by the trained image classification model avoid saddle points during iterative optimization.
  • As a result, the terminal device can extract the image features corresponding to the target image more accurately based on the optimal parameters in the image classification model;
  • the predicted image classification result will therefore also be more accurate.
  • FIG. 3 is a schematic diagram of a terminal device according to a third embodiment of the present application.
  • FIG. 4 is a schematic diagram of a terminal device according to a fourth embodiment of the present application.
  • FIG. 1 is a flowchart of an image classification method in the first embodiment of the present application.
  • the execution subject of the image classification method in this embodiment is a terminal device.
  • the image classification method as shown in the figure may include the following steps:
  • When a user needs to classify a target image through the terminal device, the user may input the target image to be classified into the terminal device, and the terminal device acquires the target image. The terminal device then classifies the target image based on the image classification model pre-stored in the terminal device.
  • The image classification model may specifically be a classification model that implements an ethnicity classification function; the set of classification results that the image classification model can predict includes at least two classes, although it is of course not limited to this.
  • S102: Perform feature extraction on the target image based on the optimal parameters in the image classification model to obtain image features, and perform classification prediction on the image features to obtain an image classification result, where the optimal parameters are obtained based on a preset noise value when the 2-norm of the loss function of the image classification model is less than the first preset value, and the preset noise value is used to make the model parameters determined by the trained image classification model avoid saddle points during iterative optimization.
  • the terminal device performs feature extraction on the target image based on the optimal parameters in the image classification model to obtain image features, and performs classification prediction processing on the image features to obtain image classification results.
  • The classification result predicted by the image classification model is generally a single class.
  • Because the preset noise value is used to make the model parameters determined by the trained image classification model avoid saddle points during iterative optimization, the optimal parameters are the parameters determined when the image classification model converges to the global extremum during training.
  • Based on these optimal parameters, the terminal device extracts the image features of the target image and then performs classification prediction on the image features to obtain the image classification result.
  • The predicted image classification result will therefore also be more accurate.
  • the image classification model may include a convolutional layer and a fully connected layer.
  • the model parameters may specifically be parameters in the convolutional layer and the fully connected layer.
  • The terminal device performs convolution calculations on the target image based on the parameters corresponding to the convolutional layer in the image classification model, so as to extract the image features corresponding to the target image; the terminal device then performs calculations on the image features based on the parameters corresponding to the fully connected layer, and predicts the image classification result corresponding to the image features.
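  • The patent does not specify a concrete network, but the convolutional-layer-plus-fully-connected-layer structure described above can be sketched roughly as follows; this is a minimal PyTorch example, and the layer sizes, input resolution, and ten-class output are illustrative assumptions rather than details from the filing:

```python
import torch
import torch.nn as nn

class SimpleClassifier(nn.Module):
    """Toy image classification model: convolutional layers extract image
    features, a fully connected layer performs the classification prediction."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(       # convolutional layers
            nn.Conv2d(3, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        self.classifier = nn.Linear(32 * 4 * 4, num_classes)  # fully connected layer

    def forward(self, x):
        feats = self.features(x)              # feature extraction
        feats = feats.flatten(1)
        return self.classifier(feats)         # classification prediction

# Usage sketch:
# logits = SimpleClassifier()(torch.randn(1, 3, 64, 64))
# predicted_class = logits.argmax(dim=1)
```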
  • the terminal device outputs the image classification result predicted by the image classification model, so that the user can obtain the corresponding image classification result.
  • In this embodiment, the terminal device acquires the target image to be classified, performs feature extraction on the target image based on the optimal parameters in the image classification model to obtain image features, and performs classification prediction on the image features to obtain an image classification result.
  • The optimal parameters are obtained based on a preset noise value when the 2-norm of the loss function of the image classification model is less than a first preset value, and the preset noise value is used to make the model parameters determined by the trained image classification model avoid saddle points during iterative optimization.
  • The terminal device can therefore extract the image features corresponding to the target image more accurately based on the optimal parameters in the image classification model;
  • the predicted image classification result will also be more accurate.
  • FIG. 2 is an implementation flowchart of the image classification method provided by the second embodiment of the present application.
  • S2011-S2014 are further included.
  • S201-S204 are the same as S101-S104 in the first embodiment.
  • S2011 to S2014 are as follows:
  • S2011: Determine the first gradient corresponding to the first loss function value according to the first loss function value corresponding to the image classification model trained in the current iteration, and determine the 2-norm corresponding to the first gradient according to the first gradient.
  • The image classification model needs to be trained before it can perform the image classification function, and training the image classification model is the process of iteratively optimizing its model parameters so that those parameters become optimal.
  • During training, the terminal device determines, from the first loss function value corresponding to the image classification model at the current iteration, the first gradient corresponding to that loss function value.
  • The first loss function value is the value of the loss function computed at the current iteration, and the gradient represents the direction in parameter space along which the loss function changes fastest at the current iteration; the terminal device also determines the 2-norm corresponding to the first gradient from the first gradient.
  • S2012: Determine whether the 2-norm is less than the first preset value.
  • Because the loss function has saddle points, and a saddle point presents itself like a local minimum of the loss function, in the prior art the terminal device cannot distinguish whether the loss function has reached a local minimum or the global minimum, which can leave the image classification model unable to converge to the global extremum point.
  • The terminal device therefore determines whether the 2-norm corresponding to the first gradient is less than the first preset value in order to determine whether the loss function has reached a saddle point, where the first preset value is a preset threshold.
  • When the 2-norm corresponding to the first gradient is less than the first preset value, the loss function has reached a saddle point; when the 2-norm corresponding to the first gradient is greater than or equal to the first preset value, the loss function has not reached a saddle point.
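  • A minimal sketch of this check (assuming a PyTorch model on which loss.backward() has just been called; the helper names and the threshold g_thres are illustrative, since the patent derives its first preset value from its own formula):

```python
import torch

def gradient_two_norm(model):
    """2-norm of the gradient of the loss with respect to all model
    parameters, treated as one long vector (call after loss.backward())."""
    grads = [p.grad.flatten() for p in model.parameters() if p.grad is not None]
    return torch.cat(grads).norm(p=2).item()

def near_saddle_point(model, g_thres):
    """S2012-style check: a gradient 2-norm below the first preset value
    suggests the optimisation is near a saddle point (or a minimum)."""
    return gradient_two_norm(model) < g_thres
```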
  • If the 2-norm is less than the first preset value, the preset noise value is added to the first model parameter determined by the image classification model trained in the current iteration.
  • The preset noise value is used to introduce a disturbance into the model parameters determined by the trained image classification model during iterative optimization, so that those model parameters can avoid the saddle point.
  • The preset noise value is obtained by random sampling from a sample library of model parameters while the model parameters of the image classification model are being iteratively optimized. Adding the noise value to the model parameters determined by the image classification model prevents the iterative optimization from stopping at a saddle point, and thus prevents the terminal device from directly taking the model parameters obtained at a local minimum as the optimal parameters of the image classification model.
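  • A sketch of this noise-injection step, under the assumption that the perturbation is drawn uniformly from a small interval around zero (the patent instead samples the preset noise value from a sample library of model parameters; the radius argument here is purely illustrative):

```python
import torch

def perturb_parameters(model, radius=1e-2):
    """Add a small random perturbation to every parameter so that the
    iterative optimisation can move away from the saddle point."""
    with torch.no_grad():
        for p in model.parameters():
            p.add_(torch.empty_like(p).uniform_(-radius, radius))
```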
  • The terminal device then determines whether the difference between the second loss function value, corresponding to the image classification model trained in a target iteration after the current iteration, and the first loss function value, corresponding to the image classification model trained in the current iteration, is less than the second preset value.
  • If this difference is less than the second preset value, the terminal device determines that the image classification model has converged to the global extremum point during training, and outputs the second model parameters determined in the target iteration as the optimal parameters of the trained image classification model.
  • In this way the terminal device takes the model parameters obtained when the image classification model converges to the global minimum as the optimal parameters, so that it can extract the image features corresponding to the target image more accurately based on the optimal parameters in the image classification model; when the terminal device performs classification prediction on the image features based on the optimal parameters, the predicted image classification result is also more accurate.
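  • Tying the pieces together, the training flow of this embodiment can be sketched as the loop below, which reuses the sgd_step, near_saddle_point, and perturb_parameters helpers sketched earlier; every threshold (g_thres, eps_loss, max_iters) is an illustrative placeholder rather than a value from the patent:

```python
from itertools import cycle

def train_with_saddle_escape(model, loss_fn, data_loader,
                             lr=0.01, g_thres=1e-3, eps_loss=1e-5,
                             max_iters=10_000):
    """Illustrative training flow: plain SGD steps, a noise perturbation
    whenever the gradient 2-norm drops below g_thres, and a stop once the
    loss change between iterations falls below eps_loss."""
    prev_loss = float("inf")
    for _, (images, labels) in zip(range(max_iters), cycle(data_loader)):
        loss = sgd_step(model, loss_fn, images, labels, lr)

        # gradient 2-norm below the first preset value -> perturb the
        # parameters so the optimisation can escape the saddle point
        if near_saddle_point(model, g_thres):
            perturb_parameters(model)

        # loss decrease below the second preset value -> treat the model as
        # converged to the global extremum and stop
        if abs(prev_loss - loss) < eps_loss:
            break
        prev_loss = loss

    # the parameters at this point are returned as the optimal parameters
    return {name: p.detach().clone() for name, p in model.named_parameters()}
```

  • In this sketch the convergence test compares consecutive iterations; the patent compares the current iteration with a later target iteration, but the idea of stopping once the loss change drops below the second preset value is the same.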
  • Optionally, the terminal device calculates the first preset value according to a preset calculation formula, in which:
  • g is the first preset value,
  • d is the number of model parameters in the trained image classification model,
  • c and the other constants in the formula are preset constants,
  • l is a Lipschitz constant, and
  • ∇f is the gradient function corresponding to the loss function of the trained image classification model.
  • Optionally, when the 2-norm is less than the first preset value, adding the preset noise value to the first model parameter determined by the image classification model trained in the current iteration includes the following.
  • Before adding the preset noise value, the terminal device also determines whether the number of iterations before the current iteration in which no preset noise value was added to the model parameters determined by the trained image classification model has reached a third preset value, where the third preset value is a positive integer. Only if that number of iterations has reached the third preset value, and the corresponding 2-norm is less than the first preset value, is the preset noise value added to the first model parameter determined by the image classification model trained in the current iteration; this allows the terminal device to determine accurately whether the loss function has reached a saddle point.
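  • The additional condition described above, namely that noise is only re-injected once at least a third-preset-value number of iterations have passed since the last injection, can be layered onto the earlier sketch as follows (k_thres stands in for the third preset value and is an illustrative placeholder):

```python
def maybe_perturb(model, g_thres, k_thres, iters_since_noise, radius=1e-2):
    """Only re-inject noise when the gradient 2-norm is below the first preset
    value AND at least k_thres iterations have passed since the last noise
    injection; returns the updated iteration counter."""
    if iters_since_noise >= k_thres and near_saddle_point(model, g_thres):
        perturb_parameters(model, radius)
        return 0          # counter resets after a perturbation
    return iters_since_noise + 1
```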
  • Optionally, the method for calculating the third preset value includes:
  • calculating the third preset value according to a preset calculation formula, where k is the third preset value, d is the number of model parameters in the trained image classification model, c and the other constants in the formula are preset constants, l is a Lipschitz constant, and ∇f is the gradient function corresponding to the loss function of the trained image classification model. It should be noted that when the third preset value k is not a positive integer, the terminal device rounds k to the positive integer nearest to it.
  • FIG. 3 is a schematic diagram of a terminal device according to a third embodiment of the present application.
  • Each unit included in the terminal device is used to execute each step in the embodiment corresponding to FIG. 1 or FIG. 2.
  • the terminal equipment includes:
  • the obtaining unit 101 is used to obtain a target image to be classified.
  • The execution unit 102 is configured to perform feature extraction on the target image based on the optimal parameters in the image classification model to obtain image features, and to perform classification prediction on the image features to obtain an image classification result, where the optimal parameters are obtained based on a preset noise value when the 2-norm of the loss function of the image classification model is less than the first preset value, and the preset noise value is used to make the model parameters determined by the trained image classification model avoid saddle points during iterative optimization.
  • the output unit 103 is configured to output the image classification result.
  • the terminal device further includes:
  • A determining unit, configured to determine the first gradient corresponding to the first loss function value according to the first loss function value corresponding to the image classification model trained in the current iteration, and to determine the 2-norm corresponding to the first gradient according to the first gradient.
  • A judging unit, used to judge whether the 2-norm is less than the first preset value.
  • An adding unit, configured to add the preset noise value to the first model parameter determined by the image classification model trained in the current iteration if the 2-norm is less than the first preset value, the preset noise value being used to make the model parameters determined by the trained image classification model avoid saddle points during iterative optimization.
  • The determining unit is further configured to determine, if the difference between the second loss function value corresponding to the image classification model trained in a target iteration after the current iteration and the first loss function value corresponding to the image classification model trained in the current iteration is less than the second preset value, that the image classification model has converged to the global extremum point during training, and to output the second model parameters determined in the target iteration as the optimal parameters of the trained image classification model.
  • the determining unit is also used to:
  • calculate the first preset value according to a preset calculation formula, where g is the first preset value, d is the number of model parameters in the trained image classification model, c and the other constants in the formula are preset constants, l is a Lipschitz constant, and ∇f is the gradient function corresponding to the loss function of the trained image classification model.
  • the terminal device further includes:
  • The judging unit is further configured to judge whether the number of iterations before the current iteration in which no preset noise value was added to the model parameters determined by the trained image classification model has reached the third preset value;
  • the adding unit is specifically configured to add the preset noise value to the first model parameter determined by the image classification model trained in the current iteration if that number of iterations has reached the third preset value and the 2-norm is less than the first preset value.
  • the determining unit is also used to:
  • calculate the third preset value according to a preset calculation formula, where k is the third preset value, d is the number of model parameters in the trained image classification model, c and the other constants in the formula are preset constants, l is a Lipschitz constant, and ∇f is the gradient function corresponding to the loss function of the trained image classification model.
  • In this embodiment, the terminal device acquires the target image to be classified, performs feature extraction on the target image based on the optimal parameters in the image classification model to obtain image features, and performs classification prediction on the image features to obtain an image classification result.
  • The optimal parameters are obtained based on a preset noise value when the 2-norm of the loss function of the image classification model is less than a first preset value, and the preset noise value is used to make the model parameters determined by the trained image classification model avoid saddle points during iterative optimization.
  • The terminal device can therefore extract the image features corresponding to the target image more accurately based on the optimal parameters in the image classification model;
  • the predicted image classification result will also be more accurate.
  • FIG. 4 is a schematic diagram of a terminal device according to a fourth embodiment of the present application.
  • The terminal device 4 of this embodiment includes a processor 40, a memory 41, and computer-readable instructions 42 stored in the memory 41 and executable on the processor 40, such as a control program of the terminal device.
  • When the processor 40 executes the computer-readable instructions 42, the steps in the above embodiments of the image classification method are implemented, for example, S101 to S103 shown in FIG. 1.
  • Alternatively, when the processor 40 executes the computer-readable instructions 42, the functions of the units in the foregoing device embodiments are realized, for example, the functions of units 101 to 103 shown in FIG. 3.
  • the computer-readable instructions 42 may be divided into one or more units, and the one or more units are stored in the memory 41 and executed by the processor 40 to complete the application .
  • the one or more units may be an instruction segment of a series of computer-readable instructions capable of performing a specific function.
  • the instruction segment is used to describe the execution process of the computer-readable instruction 42 in the terminal device 4.
  • the computer-readable instructions 42 may be divided into an acquisition unit, an execution unit, and an output unit, and the specific functions of each unit are as described above.
  • the terminal device may include, but is not limited to, the processor 40 and the memory 41.
  • FIG. 4 is only an example of the terminal device 4 and does not constitute a limitation on the terminal device 4, and may include more or fewer components than the illustration, or a combination of certain components, or different components.
  • the terminal device may further include an input and output device, a network access device, a bus, and the like.
  • The so-called processor 40 may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like.
  • the general-purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
  • the memory 41 may be an internal storage unit of the terminal device 4, such as a hard disk or a memory of the terminal device 4.
  • The memory 41 may also be an external storage device of the terminal device 4, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash card equipped on the terminal device 4.
  • Further, the memory 41 may include both an internal storage unit of the terminal device 4 and an external storage device.
  • the memory 41 is used to store the computer-readable instructions and other programs and data required by the terminal device.
  • the memory 41 can also be used to temporarily store data that has been or will be output.

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The present invention relates to an image classification method, a terminal device, and a computer-readable non-volatile storage medium, belonging to the field of computer technology. The method comprises the steps of: obtaining a target image to be classified (S101); on the basis of optimal parameters in an image classification model, performing feature extraction on the target image to obtain image features, and performing classification prediction processing on the image features to obtain an image classification result (S102), wherein the optimal parameters are obtained on the basis of a preset noise value when the 2-norm of the loss function of the image classification model is less than a first preset value, and the preset noise value is used to allow model parameters determined by the trained image classification model to avoid saddle points during iterative optimization; and outputting the image classification result (S103). The image classification method of the present invention makes it possible to analyze the image features of an input image on the basis of the optimal parameters in the model, thereby improving the classification accuracy of the image classification model.
PCT/CN2018/124630 2018-10-26 2018-12-28 Image classification method, terminal device, and computer-readable non-volatile storage medium WO2020082595A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811255779.6A CN109522939B (zh) 2018-10-26 2018-10-26 Image classification method, terminal device, and computer-readable storage medium
CN201811255779.6 2018-10-26

Publications (1)

Publication Number Publication Date
WO2020082595A1 true WO2020082595A1 (fr) 2020-04-30

Family

ID=65773935

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/124630 WO2020082595A1 (fr) 2018-10-26 2018-12-28 Image classification method, terminal device, and computer-readable non-volatile storage medium

Country Status (2)

Country Link
CN (1) CN109522939B (fr)
WO (1) WO2020082595A1 (fr)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111368792B (zh) * 2020-03-18 2024-05-14 北京奇艺世纪科技有限公司 Feature point labeling model training method and apparatus, electronic device, and storage medium
CN113628759A (zh) * 2021-07-22 2021-11-09 中国科学院重庆绿色智能技术研究院 Big-data-based method for predicting safe areas in an infectious disease epidemic
CN115035353B (zh) * 2022-08-11 2022-12-23 粤港澳大湾区数字经济研究院(福田) Image classification method, image classification model, intelligent terminal, and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106779073A (zh) * 2016-12-27 2017-05-31 西安石油大学 Media information classification method and apparatus based on a deep neural network
US20180068463A1 (en) * 2016-09-02 2018-03-08 Artomatix Ltd. Systems and Methods for Providing Convolutional Neural Network Based Image Synthesis Using Stable and Controllable Parametric Models, a Multiscale Synthesis Framework and Novel Network Architectures
CN108229543A (zh) * 2017-12-22 2018-06-29 中国科学院深圳先进技术研究院 Image classification model design method and apparatus
CN108229298A (zh) * 2017-09-30 2018-06-29 北京市商汤科技开发有限公司 Neural network training and face recognition method and apparatus, device, and storage medium
CN108268855A (zh) * 2018-02-05 2018-07-10 北京信息科技大学 Method and apparatus for optimizing a function model for pedestrian re-identification

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105160444B (zh) * 2015-10-22 2017-02-15 广东电网有限责任公司电力调度控制中心 Method and system for determining the failure rate of power equipment
CN107133626B (zh) * 2017-05-10 2020-03-17 安徽大学 Medical image classification method based on a partially averaged stochastic optimization model
CN107688823B (zh) * 2017-07-20 2018-12-04 北京三快在线科技有限公司 Image feature acquisition method and apparatus, and electronic device
CN108229379A (zh) * 2017-12-29 2018-06-29 广东欧珀移动通信有限公司 Image recognition method and apparatus, computer device, and storage medium


Also Published As

Publication number Publication date
CN109522939A (zh) 2019-03-26
CN109522939B (zh) 2024-05-07

Similar Documents

Publication Publication Date Title
WO2020082595A1 (fr) 2020-04-30 Image classification method, terminal device, and computer-readable non-volatile storage medium
Wang et al. Optimal computational and statistical rates of convergence for sparse nonconvex learning problems
WO2021089013A1 (fr) Procédé de formation de réseau de convolution de graphe spatial, dispositif électronique et support de stockage
WO2021208079A1 (fr) Procédé et appareil pour l'obtention de données de durée de vie de batterie d'alimentation, dispositif informatique et support
WO2016151618A1 (fr) Système de mise à jour de modèle prédictif, procédé de mise à jour de modèle prédictif, et programme de mise à jour de modèle prédictif
KR101828215B1 (ko) Long Short Term Memory 기반 순환형 상태 전이 모델의 학습 방법 및 장치
WO2021051556A1 (fr) Procédé et système de mise à jour de pondération d'apprentissage profond, dispositif informatique et support de stockage
Yuan et al. Design and performance analysis of deterministic learning of sampled-data nonlinear systems
CN110458875B (zh) 异常点对的检测方法、图像拼接方法、相应装置及设备
CN110969100B (zh) 一种人体关键点识别方法、装置及电子设备
US8589852B1 (en) Statistical corner extraction using worst-case distance
CN111652371A (zh) 一种离线强化学习网络训练方法、装置、系统及存储介质
JP2013097467A (ja) 画像処理装置及びその制御方法
JP2014160456A (ja) 疎変数最適化装置、疎変数最適化方法および疎変数最適化プログラム
CN114998679A (zh) 深度学习模型的在线训练方法、装置、设备及存储介质
Lesser et al. Approximate safety verification and control of partially observable stochastic hybrid systems
WO2020107264A1 (fr) Procédé et appareil de recherche d'architecture de réseau neuronal
He et al. Transfer learning in high‐dimensional semiparametric graphical models with application to brain connectivity analysis
WO2019174392A1 (fr) Traitement de vecteur pour informations de rpc
KR102430989B1 (ko) 인공지능 기반 콘텐츠 카테고리 예측 방법, 장치 및 시스템
WO2022143224A1 (fr) Procédé et dispositif d'estimation d'amplitude de circuit quantique, support de stockage et dispositif électronique
CN111026879B (zh) 多维度价值导向的针对意图的面向对象数值计算方法
CN111582456B (zh) 用于生成网络模型信息的方法、装置、设备和介质
WO2021146977A1 (fr) Procédé et appareil de recherche d'architecture neuronale
WO2020224118A1 (fr) Procédé et appareil de détermination de lésion sur la base d'une conversion d'images, et dispositif informatique

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18937780

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18937780

Country of ref document: EP

Kind code of ref document: A1