CN111368874B - Image category incremental learning method based on single classification technology - Google Patents


Info

Publication number
CN111368874B
CN111368874B (application CN202010076321.5A)
Authority
CN
China
Prior art keywords
image
neural network
data set
class
category
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010076321.5A
Other languages
Chinese (zh)
Other versions
CN111368874A (en)
Inventor
侯春萍
丁杰轩
杨阳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin University
Original Assignee
Tianjin University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin University filed Critical Tianjin University
Priority to CN202010076321.5A priority Critical patent/CN111368874B/en
Publication of CN111368874A publication Critical patent/CN111368874A/en
Application granted granted Critical
Publication of CN111368874B publication Critical patent/CN111368874B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F 18/2411 — Classification techniques relating to the classification model, based on the proximity to a decision surface, e.g. support vector machines
    • G06F 18/2431 — Classification techniques relating to the number of classes: multiple classes
    • G06N 3/045 — Neural network architectures: combinations of networks
    • G06N 3/08 — Neural network learning methods


Abstract

The invention relates to an image category incremental learning method based on a single (one-class) classification technology, comprising the following steps. Step 1: produce the data set. Step 2: build the initial classification neural network. Step 3: build the class-incremental learning neural network by modifying the initial network of step 2 so that it gains the class-incremental learning function: adjust the output-class and class-count parameters, place the data set of new-class images in a designated folder, and, when training the class-incremental network, update the model parameters with a knowledge distillation algorithm, i.e. fuse the distillation loss and the cross-entropy loss into a new loss function; in addition, add a bias-correction structure to the network so that it can adapt to large-scale image category incremental learning. Step 4: build the image category incremental learning model based on the single classification technology.

Description

Image category incremental learning method based on single classification technology
Technical Field
The invention belongs to the field of computer image classification and relates to an incremental learning method, based on a single (one-class) classification technique, for improving the accuracy of image category recognition.
Background
In the field of image classification, a well-trained neural network model can distinguish objects of different classes in an image and achieve a high recognition rate. However, most current neural networks perform poorly on untrained objects: a model can only identify objects of known classes, and it will wrongly assign an untrained object to one of those known classes, reducing recognition accuracy.
The purpose of single (one-class) classification [1] is to generate a description of a certain class of data, i.e. a region in sample space. If a sample does not fall within that region, it is deemed not to belong to the class.
The purpose of incremental learning is to let a computer learn tasks incrementally, as a human does, so that the computer can keep learning throughout its lifetime. The idea of incremental learning is mainly embodied in two aspects:
(1) In a practical task, the data volume may grow gradually; when new data arrives, the incremental learning method can modify the trained model so that the computer learns the knowledge implied in the new data.
(2) For a trained model, the time cost of modifying it is lower than the cost of retraining a model from scratch.
A good incremental learning model should satisfy the following three criteria [2,3]:
(1) The model remains trainable whenever new-class images appear at different times.
(2) At any time, it maintains good classification performance on the image classes learned so far.
(3) Its computational cost and memory footprint remain fixed, or grow slowly, as the number of classes increases.
However, existing deep-learning-based incremental learning models suffer from catastrophic forgetting, and how to effectively alleviate this problem is an urgent open question at the present stage.
[1]Lukas Ruff,Robert A.Vandermeulen,et al.Deep One-Class Classification[J].2018.
[2]Rebuffi S A,Kolesnikov A,Sperl G,et al.iCaRL:Incremental Classifier and Representation Learning[J].2016.
[3]Wu Y,Chen Y,Wang L,et al.Large Scale Incremental Learning[J].2019.
Disclosure of Invention
The invention provides an image category incremental learning method that can effectively alleviate catastrophic forgetting and improve model recognition accuracy. Based on an incremental learning method combined with a single (one-class) classifier, the invention designs an image class-incremental model that can effectively improve the performance of computer image classification, and provides a new idea for subsequent incremental-learning algorithm research and engineering applications. The invention is illustrated with computer image classification in engineering applications, but can also be applied to other classification problems. The technical scheme is as follows:
an image category incremental learning method based on a single classification technology comprises the following steps:
step 1: data set production
Resize the collected images of the different classes to a fixed size, divide them into a training set, a validation set and a test set in a certain proportion, and then label the images. Produce a binary file containing the data, labels and list; this completes the data set.
Step 2: initial classification neural network construction
Build the initial classification neural network: select a deep residual network (ResNet); set the initial training and test parameters, including the image classes and their number, the size of the validation set and the number of iterations; place the data set of old-class images in a designated folder; and train the classification network to obtain the initial classification neural network.
Step 3: Class increment learning neural network construction
Modify the initial classification neural network of step 2 so that it gains the class-incremental learning function: adjust the output-class and class-count parameters and place the data set of new-class images in a designated folder. When training the class-incremental learning network, update the model parameters with a knowledge distillation algorithm, i.e. fuse the distillation loss and the cross-entropy loss into a new loss function; in addition, add a bias-correction structure to the network so that it can adapt to large-scale image category incremental learning.
Step 4: Image category incremental learning model building based on single classification technology
In the training stage: (1) If only the data set of old-class images is available, no single classifier is trained; only the initial classification neural network is trained. (2) When a data set of new-class images is added for the first time, it is used to train the class-incremental learning network and, at the same time, to train the single classifier, so that the single classifier can effectively distinguish the old-class data set from the new-class data set. (3) When a data set of new-class images is added for the second time, the images added the second time are regarded as the new classes and the data set from (2) is regarded as the old-class data set; the class-incremental network is trained with the new-class data set, the single classifier is retrained with it, and the single classifier trained in (2) is discarded, so that the new single classifier can effectively distinguish the old-class and new-class data sets. The class-incremental network is then saved and recorded as the network after the latest increment, while the network from (2) is saved and recorded as the network kept from the previous increment. Subsequent additions of new-class images follow the same training process.
In the test stage, each picture in the test data set first passes through the single classifier, which judges whether it belongs to the old classes or the new classes. If the test image is of an old class, it enters the trained class-incremental learning network kept from the previous increment for prediction, and the prediction result is output; if it is of a new class, it enters the network from the latest increment for prediction, and the prediction result is output.
Drawings
FIG. 1 is a diagram of an initial classification neural network configuration in the method of the present invention
FIG. 2 is a flow chart of an incremental learning algorithm in the method of the present invention
FIG. 3 is a schematic diagram of the incremental learning model Bias Correction position in the method of the present invention
FIG. 4 is a schematic diagram of an incremental learning model of image category based on single classification technology in the method of the present invention
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, embodiments of the invention are described in further detail below with reference to the accompanying drawings.
Step 1: data set production
The collected color images of the different classes are first resized to a fixed size of 32 × 32. The data set is divided into a training set, a validation set and a test set in the proportion 9/12 : 1/12 : 2/12, and the images are then labelled. After specific processing of the images, a binary file containing the data, labels and list is produced, completing the data set.
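The 9/12 : 1/12 : 2/12 split of step 1 can be sketched as follows; the function name and the `(image, label)` pair representation are illustrative assumptions, not part of the patent.

```python
import random

def split_dataset(samples, seed=0):
    """Split labelled samples into train/validation/test sets in the
    9/12 : 1/12 : 2/12 proportion described in step 1.
    `samples` is a list of (image, label) pairs (hypothetical format)."""
    rng = random.Random(seed)
    shuffled = list(samples)
    rng.shuffle(shuffled)
    n = len(shuffled)
    n_train = n * 9 // 12   # 9/12 of the data for training
    n_val = n * 1 // 12     # 1/12 for validation, the rest for testing
    return (shuffled[:n_train],
            shuffled[n_train:n_train + n_val],
            shuffled[n_train + n_val:])
```

With 120 images per class, this yields the 90/10/20 split used in the embodiment.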
Step 2: initial classification neural network construction
In the invention, the initial classification neural network adopts a 34-layer ResNet structure. A deep residual network can, to some extent, alleviate the loss and degradation of information during transmission: by passing the input information directly to the output through shortcut connections, the integrity of the information is protected. Two residual modules are used in such network structures: the first connects two 3×3 convolutional layers in series as a residual module; the other connects three convolutional layers (1×1, 3×3, 1×1) in series. The initial classification network adopts the first residual module; a deeper ResNet structure could use the second. The ResNet structure of the invention is shown in Fig. 1; this structure is easier to optimize and can improve accuracy by increasing depth. The data set of old-class images is placed in a designated folder, and the classification network is trained with the training set of that data set to obtain a good initial classification network.
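The identity-shortcut idea of the first residual module can be sketched abstractly; here `f1` and `f2` stand in for the two serial 3×3 convolution (plus batch-norm) layers, which is a simplifying assumption.

```python
import numpy as np

def relu(x):
    """Element-wise rectified linear unit."""
    return np.maximum(x, 0.0)

def basic_block(x, f1, f2):
    """Residual module of the first kind: y = relu(f2(relu(f1(x))) + x).
    The `+ x` term is the shortcut that passes the input directly to the
    output; f1 and f2 are hypothetical callables whose output shape
    matches x."""
    return relu(f2(relu(f1(x))) + x)
```

Even if both convolutions output zero, the block still passes the (rectified) input through, which is what protects information integrity.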
Step 3: Class increment learning neural network construction
The flow of the incremental learning algorithm is shown in Fig. 2. The invention applies an incremental learning algorithm to the initial classification network: for a data set with n old classes (n old image classes) and m new classes (m new image classes), knowledge distillation from the initial network, which classifies the n old classes, is used to learn a new network that classifies all n + m classes. Specifically, the model parameters are updated with a loss function formed by fusing the distillation loss and the cross-entropy loss.
For the distillation loss function, see: Geoffrey Hinton, Oriol Vinyals, et al. Distilling the Knowledge in a Neural Network [J]. 2015. An example is given below:
Denote the samples of the new classes as $X^m = \{(x_i, y_i) \mid 1 \le i \le M,\ y_i \in [n+1, \ldots, n+m]\}$, where $M$ is the number of new samples and $x_i$ and $y_i$ are an image and its label. The exemplars selected from the old classes are denoted
$$\hat{X}^n = \{(x_i, y_i) \mid 1 \le i \le N_s,\ y_i \in [1, \ldots, n]\}$$
where $N_s$ is the number of old images; in practical applications $N_s$ may be much smaller than $M$, since more and more data sets may be added. The logits outputs of the old classifier and of the new classifier are written
$$\hat{o}(x) = (\hat{o}_1(x), \ldots, \hat{o}_n(x)) \quad \text{and} \quad o(x) = (o_1(x), \ldots, o_{n+m}(x)).$$
The parameters of the network are updated through the loss function $L(\theta) = \lambda L_d(\theta) + (1 - \lambda) L_c(\theta)$, where the distillation loss is
$$L_d = -\sum_{x} \sum_{k=1}^{n} \hat{\pi}_k(x) \log \pi_k(x), \qquad \hat{\pi}_k(x) = \frac{e^{\hat{o}_k(x)/T}}{\sum_{j=1}^{n} e^{\hat{o}_j(x)/T}}, \quad \pi_k(x) = \frac{e^{o_k(x)/T}}{\sum_{j=1}^{n} e^{o_j(x)/T}}$$
with temperature $T$, and the cross-entropy classification loss is
$$L_c = -\sum_{x} \sum_{k=1}^{n+m} \delta_{y=k} \log p_k(x)$$
where $p_k(x)$ is the output probability of the $k$-th of the $n + m$ old and new categories.
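The fused loss $L = \lambda L_d + (1-\lambda)L_c$ of this step can be sketched for a single sample; the function names, the scalar interface, and the default values $\lambda = 0.5$, $T = 2$ are illustrative assumptions.

```python
import numpy as np

def softmax(z, T=1.0):
    """Numerically stable softmax with temperature T."""
    z = np.asarray(z, dtype=float) / T
    e = np.exp(z - z.max())
    return e / e.sum()

def incremental_loss(old_logits, new_logits, label, n_old, lam=0.5, T=2.0):
    """Single-sample sketch of L = lam * L_d + (1 - lam) * L_c.
    old_logits: the n_old logits o_hat(x) from the frozen old network;
    new_logits: the n_old + m logits o(x) from the network being trained."""
    pi_hat = softmax(old_logits, T)                    # soft targets from old net
    pi = softmax(np.asarray(new_logits)[:n_old], T)    # old-class part of new net
    L_d = -np.sum(pi_hat * np.log(pi))                 # distillation loss
    p = softmax(new_logits)                            # probabilities over n+m classes
    L_c = -np.log(p[label])                            # cross-entropy loss
    return lam * L_d + (1 - lam) * L_c
```

In training, this quantity would be averaged over the batch of old exemplars and new-class samples before back-propagation.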
To guarantee incremental learning on large-scale data, after the model parameters have been updated the model bias is corrected. Specifically, a bias-correction layer is added to the neural network: the logits of the old classes $(1, \ldots, n)$ are kept unchanged, and a linear model is applied to correct the bias of the logits of the new classes $(n+1, \ldots, n+m)$:
$$q_k = \begin{cases} o_k(x), & 1 \le k \le n \\ \alpha\, o_k(x) + \beta, & n+1 \le k \le n+m \end{cases}$$
where $\alpha$ and $\beta$ are the bias parameters of the new classes. Since all new classes share $\alpha$ and $\beta$, they can be estimated on a small validation set. While the bias parameters are optimized, the convolutional layers and fully-connected layers are frozen, and the classification loss
$$L_b = -\sum_{x} \sum_{k=1}^{n+m} \delta_{y=k} \log \operatorname{softmax}(q_k)$$
is used to optimize the bias parameters. This method effectively corrects the bias introduced by the fully-connected layer. A schematic diagram of the position of the bias-correction layer is shown in Fig. 3. This completes the construction of the class-incremental learning neural network.
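The linear bias correction is a one-line transform of the logits vector; a minimal sketch (function name is illustrative):

```python
import numpy as np

def bias_correct(logits, n_old, alpha, beta):
    """Apply the linear bias-correction model: old-class logits
    (indices 0 .. n_old-1) pass through unchanged, new-class logits
    become alpha * o_k + beta. In the method, alpha and beta are shared
    by all new classes and fitted on a small validation set while the
    convolutional and fully-connected layers are frozen."""
    q = np.asarray(logits, dtype=float).copy()
    q[n_old:] = alpha * q[n_old:] + beta
    return q
```

Because only the two scalars α and β are learned here, a small validation set suffices to estimate them.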
Step 4: Image category incremental learning model building based on single classification technology
The single-class classifier is implemented with the support vector data description (SVDD) scheme. When the data set of new-class images enters the single classifier, the classifier first trains a minimal hypersphere enclosing all of these image data. When a new image is to be identified, if its data point falls inside the hypersphere it belongs to the new classes; otherwise it does not.
First, the minimal sphere with center $a$ and radius $R$ is found by solving
$$\min_{R, a}\ R^2 + C \sum_i \xi_i$$
subject to $(x_i - a)^T (x_i - a) \le R^2 + \xi_i$ and $\xi_i \ge 0$,
where $C$ is a constant and the $\xi_i$ are slack variables.
After solving the Lagrangian dual, whether a new data point $z$ is enclosed can be judged: if the distance from $z$ to the center is less than or equal to the radius $R$, the image is a new-class image; if $z$ lies outside the hypersphere, it is an old-class image. This completes the training of the single classifier.
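The accept/reject rule can be illustrated with a deliberately simplified stand-in for SVDD: instead of solving the soft-margin dual with slack variables, the sketch below centres the sphere on the training mean and takes the largest training distance as the radius. This is an assumption for illustration only, not the patent's solver.

```python
import numpy as np

class CentroidSVDD:
    """Toy stand-in for SVDD (hypothetical class name): sphere centre a is
    the training mean, radius R is the largest training distance. Only the
    membership test — inside the hypersphere means new-class — matches the
    method described in the text."""

    def fit(self, X):
        X = np.asarray(X, dtype=float)
        self.a = X.mean(axis=0)                             # sphere centre a
        self.R = np.linalg.norm(X - self.a, axis=1).max()   # radius R
        return self

    def is_new_class(self, z):
        """True if z falls inside the hypersphere (distance to centre <= R)."""
        d = np.linalg.norm(np.asarray(z, dtype=float) - self.a)
        return d <= self.R
```

A real implementation would solve the constrained problem above (e.g. via its Lagrangian dual, possibly with a kernel) rather than use the mean and max distance.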
The process of building the image category incremental learning model based on the single classification technology is divided into a training phase and a testing phase, and is shown in fig. 4.
In the training stage: (1) If only the data set of old-class images is available, no single classifier is trained; only the initial classification neural network is trained. (2) When a data set of new-class images is added for the first time, it is used to train the class-incremental learning network and, at the same time, to train the single classifier, so that the single classifier can effectively distinguish the old-class data set from the new-class data set. (3) When a data set of new-class images (different from that in (2)) is added for the second time, the data set from (2) automatically becomes an old-class data set; the class-incremental network is trained with the newly added data set, the single classifier is retrained with it, and the classifier trained in (2) is discarded, so that the new single classifier can effectively distinguish the old-class and new-class data sets. The class-incremental network is saved and recorded as the network after the latest increment, while the network from (2) is saved and recorded as the network kept from the previous increment. (4) When a data set of new-class images is added for the n-th time (n > 2), the training process is the same as in (3).
In the test stage, each picture in the test data set first passes through the single classifier, which judges whether it belongs to the old classes or the new classes. If the test image is of an old class, it enters the trained class-incremental learning network kept from the previous increment for prediction, and the prediction result is output; if it is of a new class, it enters the network from the latest increment for prediction, and the prediction result is output.
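The test-stage routing can be sketched as a small dispatch function; all argument names are illustrative, and the models and one-class classifier are assumed to be callables.

```python
def route_prediction(image, is_new_class, latest_model, previous_model):
    """Test-stage routing of step 4: the single (one-class) classifier first
    decides whether the image looks like a new-class sample, then the
    matching class-incremental network predicts its label."""
    if is_new_class(image):
        return latest_model(image)     # network from the latest increment
    return previous_model(image)       # network kept from the previous increment
```

The key design point is that the one-class classifier only gates which incremental network runs; it never produces the final class label itself.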
Step 5: Experimental data analysis
The experimental results of the model without step 4 are compared with those of the model with step 4; the results are given in the embodiment. Analysis of the implementation shows that the model can effectively improve the recognition accuracy of incremental learning, providing a new idea for subsequent incremental-learning algorithm research and engineering applications.
Effects of embodiment
The data set consists of 1200 color images of size 32 × 32 in 10 classes, 120 images per class: 90 training images, 10 validation images and 20 test images per class.
The model first learns two classes of pictures; in the incremental learning process, two classes are added at each step. The experimental results are shown in the following table:
(Table of experimental results: reproduced as an image in the original filing; the numerical values are not recoverable as text.)
It can be seen that the method of step 4 effectively improves the recognition accuracy of the incremental learning model.

Claims (1)

1. An image category incremental learning method based on a single classification technology comprises the following steps:
step 1: data set production
resizing the collected images of the different classes to a fixed size, dividing them into a training set, a validation set and a test set in a certain proportion, and then labelling the images; producing a binary file containing the data, labels and list, completing the data set;
step 2: initial classification neural network construction
constructing the initial classification neural network: selecting a deep residual network (ResNet); setting the initial training and test parameters, including the image classes and their number, the size of the validation set and the number of iterations; placing the data set of old-class images in a designated folder; and training the classification network to obtain the initial classification neural network;
step 3: class increment learning neural network construction
modifying the initial classification neural network of step 2 so that it gains the class-incremental learning function: adjusting the output-class and class-count parameters and placing the data set of new-class images in a designated folder; when training the class-incremental learning network, updating the model parameters with a knowledge distillation algorithm, i.e. fusing the distillation loss and the cross-entropy loss into a new loss function, and adding a bias-correction structure to the network so that it can adapt to large-scale image category incremental learning;
and 4, step 4: image category incremental learning model building based on single classification technology
in the training stage: (1) if only the data set of old-class images is available, no single classifier is trained, and only the initial classification neural network is trained; (2) when a data set of new-class images is added for the first time, it is used to train the class-incremental learning neural network and, at the same time, to train the single classifier, so that the single classifier can effectively distinguish the old-class data set from the new-class data set; (3) when a data set of new-class images is added for the second time, the images added the second time are regarded as the new classes and the data set from (2) as the old-class data set; the class-incremental network is trained with the new-class data set, the single classifier is retrained with it, and the classifier trained in (2) is discarded, so that the new single classifier can effectively distinguish the old-class and new-class data sets; the class-incremental network is saved and recorded as the network after the latest increment, while the network from (2) is saved and recorded as the network kept from the previous increment; subsequent additions of new-class images follow the same training process;
in the test stage, each picture in the test data set first passes through the single classifier, which judges whether it belongs to the old classes; if the test image is of an old class, it enters the trained class-incremental network kept from the previous increment for prediction, and the prediction result is output; otherwise, it enters the network from the latest increment for prediction, and the prediction result is output; the single classifier is implemented with the support vector data description (SVDD) scheme: when the data set of new-class images enters the single classifier, the classifier first trains a minimal hypersphere enclosing all of these image data, and when a new image is identified, if its data point falls inside the hypersphere it belongs to the new classes, otherwise it does not.
CN202010076321.5A 2020-01-23 2020-01-23 Image category incremental learning method based on single classification technology Active CN111368874B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010076321.5A CN111368874B (en) 2020-01-23 2020-01-23 Image category incremental learning method based on single classification technology


Publications (2)

Publication Number Publication Date
CN111368874A CN111368874A (en) 2020-07-03
CN111368874B true CN111368874B (en) 2022-11-15

Family

ID=71207910

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010076321.5A Active CN111368874B (en) 2020-01-23 2020-01-23 Image category incremental learning method based on single classification technology

Country Status (1)

Country Link
CN (1) CN111368874B (en)

Families Citing this family (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111597374B (en) * 2020-07-24 2020-10-27 腾讯科技(深圳)有限公司 Image classification method and device and electronic equipment
CN112200260B (en) * 2020-10-19 2022-06-14 厦门大学 Figure attribute identification method based on discarding loss function
CN112488209B (en) * 2020-11-25 2024-02-20 南京大学 Incremental picture classification method based on semi-supervised learning
CN113762304B (en) * 2020-11-26 2024-02-06 北京京东乾石科技有限公司 Image processing method, image processing device and electronic equipment
CN112766380B (en) * 2021-01-21 2023-01-03 西安电子科技大学 Image classification method and system based on feature gain matrix incremental learning
CN112990280B (en) * 2021-03-01 2023-08-25 华南理工大学 Class increment classification method, system, device and medium for image big data
CN113177630B (en) * 2021-04-13 2024-02-13 中国科学院信息工程研究所 Data memory elimination method and device for deep learning model
CN113259331B (en) * 2021-04-29 2022-10-11 上海电力大学 Unknown abnormal flow online detection method and system based on incremental learning
CN113361719B (en) * 2021-06-04 2024-08-06 深圳市晶帆光电科技有限公司 Incremental learning method and image processing method based on image processing model
CN113515656B (en) * 2021-07-06 2022-10-11 天津大学 Multi-view target identification and retrieval method and device based on incremental learning
CN113469090B (en) * 2021-07-09 2023-07-14 王晓东 Water pollution early warning method, device and storage medium
CN113626827A (en) * 2021-07-29 2021-11-09 西安电子科技大学 Intelligent contract vulnerability detection method, system, equipment, medium and terminal
CN113837156A (en) * 2021-11-26 2021-12-24 北京中超伟业信息安全技术股份有限公司 Intelligent warehousing sorting method and system based on incremental learning
CN114359619A (en) * 2021-12-01 2022-04-15 南方电网科学研究院有限责任公司 Incremental learning-based power grid defect detection method, device, equipment and medium
CN114492745B (en) * 2022-01-18 2024-09-27 天津大学 Knowledge distillation mechanism-based individual identification method for similar incremental radiation source
CN114612721A (en) * 2022-03-15 2022-06-10 南京大学 Image classification method based on multilevel adaptive feature fusion type increment learning
CN114693993A (en) * 2022-03-23 2022-07-01 腾讯科技(深圳)有限公司 Image processing and image classification method, device, equipment and storage medium
CN114677547B (en) * 2022-04-07 2024-03-29 中国科学技术大学 Image classification method based on self-holding characterization expansion type incremental learning
WO2024119422A1 (en) * 2022-12-08 2024-06-13 上海成电福智科技有限公司 Deep-neural-network-based class-incremental learning method for mobile phone radiation source spectrogram
CN116229388B (en) * 2023-03-27 2023-09-12 哈尔滨市科佳通用机电股份有限公司 Method, system and equipment for detecting motor car foreign matters based on target detection network

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104463191A (en) * 2014-10-30 2015-03-25 华南理工大学 Robot visual processing method based on attention mechanism
CN105809184A (en) * 2015-10-30 2016-07-27 哈尔滨工程大学 Vehicle real-time identification tracking and parking space occupancy determining method suitable for gas station
CN108121998A (en) * 2017-12-05 2018-06-05 北京寄云鼎城科技有限公司 A kind of training method of support vector machine based on Spark frames
CN109492765A (en) * 2018-11-01 2019-03-19 浙江工业大学 A kind of image Increment Learning Algorithm based on migration models

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7565370B2 (en) * 2003-08-29 2009-07-21 Oracle International Corporation Support Vector Machines in a relational database management system
US9646226B2 (en) * 2013-04-16 2017-05-09 The Penn State Research Foundation Instance-weighted mixture modeling to enhance training collections for image annotation
US20190095400A1 (en) * 2017-09-28 2019-03-28 Sas Institute Inc. Analytic system to incrementally update a support vector data description for outlier identification

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Credit scoring using incremental learning algorithm for SVDD; Yongquan Cai et al.; 2016 International Conference on Computer, Information and Telecommunication Systems (CITS); 2016-08-18; full text *
An SVDD incremental learning elimination algorithm; Kong Xiangxin et al.; Computer Engineering and Applications; 2018-09-30; full text *

Also Published As

Publication number Publication date
CN111368874A (en) 2020-07-03

Similar Documents

Publication Publication Date Title
CN111368874B (en) Image category incremental learning method based on single classification technology
CN114241282B (en) Knowledge distillation-based edge device scene recognition method and device
Tong et al. An evidential classifier based on Dempster-Shafer theory and deep learning
CN111985601A (en) Data identification method for incremental learning
CN112308211B (en) Domain increment method based on meta learning
CN114170461B (en) Noisy-label image classification method based on feature space reorganization for a teacher-student architecture
CN110555459A (en) Score prediction method based on fuzzy clustering and support vector regression
CN113095229B (en) Self-adaptive pedestrian re-identification system and method for unsupervised domain
CN115048870A (en) Target track identification method based on residual error network and attention mechanism
CN114357221A (en) Self-supervised active learning method based on image classification
CN114495114B (en) Text sequence recognition model calibration method based on CTC decoder
JP6701467B2 (en) Learning device and learning method
CN113961727B (en) Cross-media Hash retrieval method, device, terminal and storage medium
Louati et al. Embedding channel pruning within the CNN architecture design using a bi-level evolutionary approach
CN116011514A (en) Electronic nose domain adaptive migration learning method based on DANN-OS-ELM
CN115082955A (en) Deep learning global optimization method, recognition method, device and medium
CN112633344B (en) Quality inspection model training method, device, equipment and readable storage medium
Lemghari et al. Handling noisy annotations in deep supervised learning
WO2024000566A9 (en) Method and apparatus for auxiliary learning with joint task and data scheduling
CN118278520B (en) Knowledge tracking method and system for topic and concept double-attention coding
Wan Efficiently Designing Efficient Deep Neural Networks
CN113537290B (en) Image matching method based on ultra-high dimensional data element clustering
WO2022198477A1 (en) Method and apparatus for implementing incremental learning on classification model, and electronic device and medium
Doan Ensemble Learning for Multiple Data Mining Problems
CN113642665A (en) Few-shot classification method and system based on relation networks

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant