CN111639718A - Classifier application method and device - Google Patents


Info

Publication number
CN111639718A
CN111639718A
Authority
CN
China
Prior art keywords
classifier
training
sample set
training sample
fake
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010505890.7A
Other languages
Chinese (zh)
Other versions
CN111639718B (en)
Inventor
童楚婕
彭勃
栾英英
严洁
徐晓健
李福洋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Bank of China Ltd
Original Assignee
Bank of China Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Bank of China Ltd filed Critical Bank of China Ltd
Priority to CN202010505890.7A priority Critical patent/CN111639718B/en
Publication of CN111639718A publication Critical patent/CN111639718A/en
Application granted granted Critical
Publication of CN111639718B publication Critical patent/CN111639718B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/24 - Classification techniques
    • G06F 18/241 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a classifier application method and a classifier application device. The method comprises: receiving an image to be identified; inputting the image to be identified into a classifier to obtain a classification result, wherein the classifier is trained on a real training sample set together with a fake training sample set output by a noise generator, and the noise generator randomly adds noise to the real training samples to obtain fake training samples containing image damage; and outputting the classification result. Because the classifier is trained with training samples beyond the real training sample set, the trained classifier can identify images beyond the real training sample set, which improves its robustness.

Description

Classifier application method and device
Technical Field
The present application relates to the field of communications technologies, and in particular, to a classifier application method and apparatus.
Background
Deep learning has achieved many breakthroughs and applications in fields such as computer vision and speech recognition. However, it is well known that if the training sample set and the test sample set follow different distributions, the accuracy of the resulting model decreases. That is, because a training sample set is limited in size and variety, a model will lack robustness if the test sample set contains sample types that the training sample set does not.
In image recognition, the training sample set and the test sample set are often distributed differently. For example, suppose the training sample set contains no images corrupted by a certain damage type, and a classifier is trained on that set. If the test sample set does contain images corrupted by that type (e.g., extra creases or lines on an otherwise clean bill), the accuracy of the classifier on those corrupted images can drop sharply.
To avoid this, noise is usually added to the training sample set through data augmentation, improving the robustness of the classifier subsequently trained on that set.
Disclosure of Invention
In the course of the applicant's research, the following was found:
A generative adversarial network (GAN) can be used to improve classifier accuracy. Its data-augmentation process is as follows: the generator network adds noise to a real image to construct a forged image that resembles the real image as closely as possible, and the real and forged images are then used to train the classifier so that it can accurately distinguish real images from forged ones, improving its accuracy.
Although the generator in a GAN adds noise when constructing forged images, its objective is to iterate the noise distribution toward the real distribution of the training set, so that the forged images it finally produces are as close to the real images as possible and the discriminator cannot tell whether a sample is generated or real.
In other words, although a GAN adds noise in its generator, because the purpose of that noise is to bring forged images as close as possible to real images, image damage absent from the real images is never added to the forged images. That is, the generator of a GAN does not provide image damage types beyond the training sample set.
In view of this, the present application provides a classifier application method and apparatus in which the classifier is trained on a real training sample set together with a forged sample set output by a noise generator. The noise generator outputs forged samples whose image damage types differ as much as possible from those of the samples in the training sample set, thereby improving the robustness of the classifier.
In order to achieve the above object, the present invention provides the following technical features:
a classifier application method, comprising:
receiving an image to be identified;
inputting the image to be identified into a classifier to obtain a classification result of the classifier, wherein the classifier is trained on a real training sample set together with a fake training sample set containing image damage output by a noise generator, and the noise generator randomly adds noise to the real training samples to obtain the fake training samples;
and outputting the classification result.
Optionally, before receiving the image to be recognized, the method further includes:
iteratively training the classifier and the noise generator until the classifier reaches an end-of-training condition;
validating the classifier using a set of test samples;
if the accuracy of the classifier is greater than the preset accuracy, the classifier is saved;
and if the accuracy of the classifier is not greater than the preset accuracy, returning to the step of iteratively training the classifier and the noise generator until the classifier reaches the training end condition.
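The optional flow above (train iteratively, verify against a test sample set, then save or retrain) can be sketched as follows. The function names, the stand-in training and evaluation logic, and the 0.95 preset accuracy are illustrative assumptions, not part of this disclosure.

```python
def train_one_round(state):
    # Stand-in for one round of jointly training the classifier
    # and the noise generator (hypothetical).
    return state + 1

def evaluate(state):
    # Stand-in for verifying the classifier with the test sample set;
    # here accuracy simply improves with each training round.
    return min(1.0, 0.5 + 0.125 * state)

def fit_until_accurate(preset_accuracy=0.95, max_rounds=100):
    state, accuracy = 0, 0.0
    for _ in range(max_rounds):
        state = train_one_round(state)      # iterative training step
        accuracy = evaluate(state)          # verify with test samples
        if accuracy > preset_accuracy:      # passed: save and stop
            break
    return state, accuracy

rounds, acc = fit_until_accurate()
```

The loop mirrors steps S201 through S204 below: training continues until verification accuracy exceeds the preset threshold.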
Optionally, the iteratively training the classifier and the noise generator until the classifier reaches a training end condition includes:
constructing a noise generator based on the set of real training samples and an activation function;
inputting the real training sample set to the noise generator, wherein the noise generator adds random noise to a plurality of pixel points of a plurality of real training samples to generate an image-damaged forged training sample set;
training a classifier by using the real training sample set and the fake training sample set until the classifier reaches a training end condition.
Optionally, if the accuracy of the classifier is not greater than a preset accuracy, before entering the step of iteratively training the classifier and the noise generator until the classifier reaches a training end condition, the method further includes: and adjusting the parameters of the activation function to obtain an updated activation function.
Optionally, training a classifier using the real training sample set and the fake training sample set until the classifier reaches a training end condition, further comprising:
performing screening operation on the fake training set to obtain fake training samples different from the real training samples by more than a threshold value;
and recombining the fake training samples which are different from the real training samples by more than a threshold value into the fake training sample set.
A classifier application apparatus comprising:
the receiving unit is used for receiving the image to be identified;
the input unit is used for inputting the image to be recognized to a classifier to obtain a classification result of the classifier; the classifier is obtained after a real training sample set and a fake training sample set output by a noise generator are trained, and the noise generator randomly adds noise to the real training sample to obtain a fake training sample containing image damage;
and the output unit is used for outputting the classification result.
Optionally, the apparatus further includes, operating before the receiving unit:
a training unit for iteratively training the classifier and the noise generator until the classifier reaches a training end condition;
a validation unit for validating the classifier using a test sample set; if the accuracy of the classifier is greater than the preset accuracy, the classifier is saved; and if the accuracy of the classifier is not greater than the preset accuracy, entering a training unit.
Optionally, the training unit includes:
a construction unit for constructing a noise generator based on the set of real training samples and an activation function;
the generating unit is used for inputting the real training sample set to the noise generator, and the noise generator adds random noise to a plurality of pixel points of a plurality of real training samples to generate a fake training sample set;
and the training classifier unit is used for training a classifier by using the real training sample set and the fake training sample set until the classifier reaches a training end condition.
Optionally, if the accuracy of the classifier is not greater than the preset accuracy, the apparatus further includes, before re-entering the step of iteratively training the classifier and the noise generator until the classifier reaches the training end condition:
and the adjusting unit is used for adjusting the parameters of the activation function to obtain the updated activation function.
Optionally, the apparatus further includes, operating before the classifier-training unit:
the screening unit is used for carrying out screening operation on the fake training set to obtain fake training samples with the difference larger than a threshold value from the real training samples; and recombining the fake training samples which are different from the real training samples by more than a threshold value into the fake training sample set.
Through the technical means, the following beneficial effects can be realized:
the invention provides a classifier application method, which can train and obtain a classifier based on a real training sample set and a fake training sample set containing multiple image destruction types and output by a noise generator.
Because the classifier is trained by adopting other training samples except the real training sample set, the classifier obtained after training can identify the damaged images except the image damage contained in the real training sample set, thereby improving the robustness of the classifier.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present application, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
Fig. 1 is a schematic structural diagram of a classifier application system disclosed in an embodiment of the present application;
FIG. 2 is a flowchart of a first embodiment of a classifier training method disclosed in the embodiments of the present application;
FIG. 3 is a flowchart of a second embodiment of a classifier training method disclosed in the embodiments of the present application;
FIG. 4 is a flowchart of a classifier application method disclosed in an embodiment of the present application;
fig. 5 is a schematic structural diagram of a classifier application device disclosed in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The invention provides a classifier application system, referring to fig. 1, which may include:
and the terminal 100 is configured to send the image to be identified to the server, and receive and display the classification result sent by the server.
The server 200 is configured to: receive the image to be identified; input the image to be identified into a classifier to obtain a classification result, wherein the classifier is trained on a real training sample set together with a fake training sample set output by a noise generator, and the noise generator randomly adds noise to the real training samples to obtain fake training samples containing image damage; and send the classification result to the terminal.
The invention provides a classifier application system, which can train and obtain a classifier based on a real training sample set and a fake training sample set containing various image destruction types and output by a noise generator.
Because the classifier is trained by adopting other training samples except the real training sample set, the classifier obtained after training can identify the damaged images containing other types of damages except the image damages contained in the real training sample set, thereby improving the robustness of the classifier.
The invention provides a first embodiment of a classifier training method, which comprises the following steps with reference to fig. 2:
step S200: and acquiring a real training sample set and a test sample set.
Step S201: iteratively training the classifier and the noise generator until the classifier reaches an end-of-training condition.
This step may include the steps of:
step S2011: a noise generator is constructed based on the set of real training samples and an activation function.
A real training sample set is input into a neural network model, which captures the damage types present in the real training samples. The damage-type distribution of the real training sample set is learned adversarially; that is, the activation function in the neural network model is trained through back-propagation, and the trained neural network model is taken as the noise generator.
The noise generator can output independent, identically distributed noise (in any of various distribution forms) for multiple pixels of a training sample, so as to generate damaged images containing multiple image damage types. Optionally, an independent, identically distributed noise value can be output for every pixel.
Step S2012: and inputting the real training sample set to the noise generator, wherein the noise generator adds random noise to a plurality of pixel points of a plurality of real training samples to generate an image-damaged forged training sample set.
The real training samples can be respectively input into the noise generator, and random noise can be added to a plurality of pixel points in the real training samples by the noise generator.
Optionally, random noise may be added to each pixel point in the real training sample, so as to generate a fake training sample containing image damage. The plurality of fake training samples form a fake training sample set.
Because the noise generator adds noise to pixels at random, the distribution of the fake training samples differs from that of the real training samples; that is, the real training samples and the fake training samples contain different damage types.
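Step S2012 can be sketched as follows, assuming grayscale images stored as nested lists of floats in [0, 1]. The uniform noise distribution, the `noise_scale` parameter, and the clipping to [0, 1] are illustrative choices; the disclosure leaves the concrete distribution to the trained noise generator.

```python
import random

def make_fake_sample(real_sample, noise_scale=0.3, seed=None):
    """Add independent random noise to every pixel of one real sample."""
    rng = random.Random(seed)
    fake = []
    for row in real_sample:
        fake_row = []
        for pixel in row:
            noise = rng.uniform(-noise_scale, noise_scale)  # i.i.d. per pixel
            fake_row.append(min(1.0, max(0.0, pixel + noise)))  # stay in range
        fake.append(fake_row)
    return fake

real = [[0.5, 0.5], [0.5, 0.5]]
fake = make_fake_sample(real, seed=0)
```

Applying the function to each sample in the real training sample set yields the fake training sample set.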
Step S2013: training a classifier by using the real training sample set and the fake training sample set until the classifier reaches a training end condition.
The classifier is trained by utilizing the real training sample set and the fake training sample set except the real training sample set, so that the classifier can learn the image damage type in the real training sample set and can also learn the image damage type in the fake training sample set, and the robustness of the classifier is improved.
Step S202: the classifier is validated using a set of test samples.
The classifier is verified against the test sample set in a supervised manner: each time the classifier classifies a test sample correctly, the verification success count is incremented by one; each time it classifies a test sample incorrectly, the verification failure count is incremented by one.
The accuracy of the classifier is then calculated from the verification success count and the verification failure count.
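The accuracy calculation above reduces to the ratio of successes to total verification attempts; a minimal helper (the function name is illustrative):

```python
def accuracy_from_counts(successes, failures):
    """Accuracy of the classifier over the test sample set."""
    total = successes + failures
    if total == 0:
        raise ValueError("no test samples were verified")
    return successes / total

# e.g. 90 correct classifications and 10 failures give an accuracy of 0.9
```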
Step S203: and judging whether the accuracy of the classifier is greater than a preset accuracy. If so, the process proceeds to step S204, otherwise, the process proceeds to step S201.
Step S204: and if the accuracy of the classifier is greater than the preset accuracy, saving the classifier.
If the accuracy of the classifier is greater than the preset accuracy, the classifier has passed verification against the test sample set, and training is complete.
If the accuracy of the classifier is not greater than the preset accuracy, the classifier has failed verification and must be trained further, so the process returns to step S201 to iteratively train the classifier and the noise generator until the classifier reaches the training end condition. The classifier is then verified again; if it still fails verification, the process continues to return to step S201 until it passes.
The invention provides a second embodiment of a classifier training method, which is additionally provided with a step S205 and a step S2013 on the basis of the first embodiment. Referring to fig. 3, the method comprises the following steps:
step S200: and acquiring a real training sample set and a test sample set.
Step S201: iteratively training the classifier and the noise generator until the classifier reaches an end-of-training condition.
This step may include the steps of:
step S2011: a noise generator is constructed based on the set of real training samples and an activation function.
A real training sample set is input into a neural network model, which captures the damage types present in the real training samples. The damage-type distribution of the real training sample set is learned adversarially; that is, the activation function in the neural network model is trained through back-propagation, and the trained neural network model is taken as the noise generator.
The noise generator can output independent, identically distributed noise (in any of various distribution forms) for multiple pixels of a training sample. Optionally, noise can be added to every pixel.
Step S2012: and inputting the real training sample set to the noise generator, wherein the noise generator adds random noise to a plurality of pixel points of a plurality of real training samples to generate a fake training sample set.
The real training samples can be respectively input into the noise generator, and random noise can be added to a plurality of pixel points in the real training samples by the noise generator. Optionally, random noise may be added to each pixel point in the real training sample, so as to generate a fake training sample. The plurality of fake training samples form a fake training sample set.
Because the noise generator adds noise to pixels at random, the distribution of the fake training samples differs from that of the real training samples; that is, the real training samples and the fake training samples contain different damage types.
Step S2013: performing screening operation on the fake training set to obtain fake training samples different from the real training samples by more than a threshold value; and recombining the fake training samples which are different from the real training samples by more than a threshold value into the fake training sample set.
To ensure that the fake training sample set used to train the classifier differs sufficiently from the real training sample set, the fake training sample set can be screened: fake training samples whose Euclidean distance from the corresponding real training samples exceeds a threshold are selected, and those samples are recombined into the fake training sample set. This further improves the robustness of the subsequently trained classifier.
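The screening step can be sketched as follows, treating each sample as a flattened pixel vector. The pairing of each fake sample with its source real sample and the threshold value are illustrative assumptions.

```python
import math

def euclidean_distance(a, b):
    """Euclidean distance between two flattened pixel vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def screen_fake_samples(pairs, threshold):
    """Keep only fake samples whose distance from their real source
    exceeds the threshold; the kept samples form the new fake set."""
    return [fake for fake, real in pairs
            if euclidean_distance(fake, real) > threshold]

pairs = [([1.0, 1.0], [0.0, 0.0]),   # distance ~1.41, exceeds threshold
         ([0.1, 0.1], [0.0, 0.0])]   # distance ~0.14, discarded
kept = screen_fake_samples(pairs, threshold=0.5)
```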
Step S2014: training a classifier by using the real training sample set and the fake training sample set until the classifier reaches a training end condition.
The classifier is trained by utilizing the real training sample set and the fake training sample set except the real training sample set, so that the classifier can learn the image damage type in the real training sample set and can also learn the image damage type in the fake training sample set, and the robustness of the classifier is improved.
Step S202: the classifier is validated using a set of test samples.
The classifier is verified against the test sample set in a supervised manner: each time the classifier classifies a test sample correctly, the verification success count is incremented by one; each time it classifies a test sample incorrectly, the verification failure count is incremented by one.
The accuracy of the classifier is then calculated from the verification success count and the verification failure count.
Step S203: and judging whether the accuracy of the classifier is greater than a preset accuracy. If so, the process proceeds to step S204, otherwise, the process proceeds to step S205.
Step S204: and if the accuracy of the classifier is greater than the preset accuracy, saving the classifier.
If the accuracy of the classifier is greater than the preset accuracy, the classifier has passed verification against the test sample set, and training is complete.
Step S205: if the accuracy of the classifier is not greater than the preset accuracy, adjusting the parameters of the activation function to obtain an updated activation function, and entering step S201.
If the accuracy of the classifier is not greater than the preset accuracy, the noise generated by the noise generator has not fully covered the image damage types present in the test sample set. The parameters of the activation function in the noise generator can therefore be adjusted so that the generator produces a new noise distribution.
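The disclosure does not specify which activation-function parameters are adjusted or how. As one hypothetical illustration only, a scale parameter controlling the spread of the generated noise could be enlarged when verification fails, so that the generator covers damage types it previously missed:

```python
def widen_noise_scale(scale, factor=1.5, max_scale=1.0):
    # Hypothetical update rule: enlarge the noise spread, capped so the
    # noise stays within the valid pixel range. Both the factor and the
    # cap are assumptions, not taken from the disclosure.
    return min(max_scale, scale * factor)

scale = 0.5
scale = widen_noise_scale(scale)   # wider noise distribution for the retry
```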
Because the classifier has failed verification, it must be trained further, so the process returns to step S201 to iteratively train the classifier and the noise generator until the classifier reaches the training end condition.
The classifier is then verified again; if it still fails verification, the process continues to return to step S201 until it passes.
And saving the classifier after obtaining the classifier so as to apply the classifier to perform classification operation on the image.
The invention provides a classifier application method which is used for describing a specific implementation process of a server in detail. Referring to fig. 4, the method comprises the following steps:
step S401: an image to be recognized is received.
The server can receive the image to be recognized so as to perform classification operation on the image to be recognized. The embodiment can be applied to a plurality of application scenes for classifying the images.
Step S402: inputting the image to be identified to a classifier to obtain a classification result of the classifier; the classifier is obtained after a real training sample set and a fake training sample set which is output by a noise generator and contains image destruction are trained, and the noise generator randomly adds noise to the real training sample to obtain a fake training sample.
Step S403: and outputting the classification result.
The server can output the classification result and send the classification result to the terminal so that the terminal can display the classification result.
Through the technical means, the following beneficial effects can be realized:
the invention provides a classifier application method, which can train and obtain a classifier based on a real training sample set and a fake training sample set output by a noise generator.
Because the classifier is trained by adopting other training samples except the real training sample set, the classifier obtained after training can identify images except the real training sample set, thereby improving the robustness of the classifier.
Referring to fig. 5, the present invention also provides a classifier application apparatus, including:
a receiving unit 51 for receiving an image to be recognized.
An input unit 52, configured to input the image to be recognized to a classifier to obtain a classification result of the classifier; the classifier is obtained after a real training sample set and a fake training sample set which is output by a noise generator and contains image destruction are trained, and the noise generator randomly adds noise to the real training sample to obtain a fake training sample.
And an output unit 53, configured to output the classification result.
The apparatus further includes, operating before the receiving unit 51:
a training unit 54 for iteratively training the classifier and the noise generator until the classifier reaches a training end condition;
a validation unit 55 for validating the classifier with a test sample set; if the accuracy of the classifier is greater than the preset accuracy, the classifier is saved; and if the accuracy of the classifier is not greater than the preset accuracy, entering a training unit.
Wherein the training unit 54 comprises:
a construction unit 541, configured to construct a noise generator based on the set of real training samples and an activation function;
a generating unit 542, configured to input the real training sample set to the noise generator, where the noise generator adds random noise to multiple pixel points of multiple real training samples to generate a fake training sample set;
a training classifier unit 543, configured to train a classifier using the real training sample set and the fake training sample set until the classifier reaches a training end condition.
An adjusting unit 544, configured to, if the accuracy of the classifier is not greater than a preset accuracy, adjust a parameter of the activation function to obtain an updated activation function before entering the step of iteratively training the classifier and the noise generator until the classifier reaches a training end condition.
The apparatus further includes, operating before the classifier-training unit 543:
a screening unit 545, configured to perform a screening operation on the fake training set to obtain a fake training sample that differs from the real training sample by more than a threshold; and recombining the fake training samples which are different from the real training samples by more than a threshold value into the fake training sample set.
For specific implementation of the classifier application device, reference may be made to the embodiments shown in fig. 2 to 4, which are not described herein again.
Through the technical means, the following beneficial effects can be realized:
the invention provides a classifier application method, which can train and obtain a classifier based on a real training sample set and a fake training sample set output by a noise generator.
Because the classifier is trained by adopting other training samples except the real training sample set, the classifier obtained after training can identify images except the real training sample set, thereby improving the robustness of the classifier.
The functions described in the method of the present embodiment, if implemented in the form of software functional units and sold or used as independent products, may be stored in a storage medium readable by a computing device. Based on such understanding, part of the contribution to the prior art of the embodiments of the present application or part of the technical solution may be embodied in the form of a software product stored in a storage medium and including several instructions for causing a computing device (which may be a personal computer, a server, a mobile computing device or a network device) to execute all or part of the steps of the method described in the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The embodiments are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same or similar parts among the embodiments are referred to each other.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A classifier application method, comprising:
receiving an image to be identified;
inputting the image to be identified into a classifier to obtain a classification result of the classifier, wherein the classifier is obtained by training on a real training sample set and a fake training sample set that is output by a noise generator and contains image corruption, the noise generator randomly adding noise to the real training samples to obtain the fake training samples;
and outputting the classification result.
2. The method of claim 1, further comprising, prior to receiving the image to be identified:
iteratively training the classifier and the noise generator until the classifier reaches an end-of-training condition;
validating the classifier using a test sample set;
if the accuracy of the classifier is greater than a preset accuracy, saving the classifier;
and if the accuracy of the classifier is not greater than the preset accuracy, returning to the step of iteratively training the classifier and the noise generator until the classifier reaches the training end condition.
3. The method of claim 2, wherein the iteratively training the classifier and the noise generator until the classifier reaches an end-of-training condition comprises:
constructing a noise generator based on the set of real training samples and an activation function;
inputting the real training sample set to the noise generator, wherein the noise generator adds random noise to a plurality of pixel points of a plurality of real training samples to generate a fake training sample set containing image corruption;
training a classifier by using the real training sample set and the fake training sample set until the classifier reaches a training end condition.
4. The method of claim 3,
if the accuracy of the classifier is not greater than the preset accuracy, before entering the step of iteratively training the classifier and the noise generator until the classifier reaches the training end condition, the method further comprises: adjusting the parameters of the activation function to obtain an updated activation function.
5. The method of claim 4, wherein training a classifier using the real training sample set and the fake training sample set until the classifier reaches a training end condition further comprises:
performing a screening operation on the fake training sample set to obtain the fake training samples that differ from the real training samples by more than a threshold;
and recombining the fake training samples that differ from the real training samples by more than the threshold into the fake training sample set.
6. A classifier application apparatus, comprising:
the receiving unit is used for receiving the image to be identified;
the input unit is used for inputting the image to be recognized to a classifier to obtain a classification result of the classifier; the classifier is obtained by training on a real training sample set and a fake training sample set output by a noise generator, the noise generator randomly adding noise to the real training samples to obtain fake training samples containing image corruption;
and the output unit is used for outputting the classification result.
7. The apparatus of claim 6, further comprising, prior to the receiving unit:
a training unit for iteratively training the classifier and the noise generator until the classifier reaches a training end condition;
a validation unit for validating the classifier using a test sample set; if the accuracy of the classifier is greater than a preset accuracy, the classifier is saved; and if the accuracy of the classifier is not greater than the preset accuracy, returning to the training unit.
8. The apparatus of claim 7, wherein the training unit comprises:
a construction unit for constructing a noise generator based on the set of real training samples and an activation function;
the generating unit is used for inputting the real training sample set to the noise generator, and the noise generator adds random noise to a plurality of pixel points of a plurality of real training samples to generate a fake training sample set;
and the training classifier unit is used for training a classifier by using the real training sample set and the fake training sample set until the classifier reaches a training end condition.
9. The apparatus of claim 8,
wherein, if the accuracy of the classifier is not greater than the preset accuracy, before the classifier and the noise generator are iteratively trained again until the classifier reaches the training end condition, the apparatus further comprises:
an adjusting unit, configured to adjust the parameters of the activation function to obtain an updated activation function.
10. The apparatus of claim 9, prior to training the classifier unit, further comprising:
the screening unit is used for performing a screening operation on the fake training sample set to obtain the fake training samples that differ from the real training samples by more than a threshold, and for recombining those fake training samples into the fake training sample set.
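The iterative train-and-validate flow of claims 2 to 4 can be sketched as follows. The models here are deliberate stand-ins: `train_classifier`, `evaluate`, and the activation parameter `alpha` are all hypothetical names, and only the control flow is meaningful — train, validate against a preset accuracy, and adjust the activation-function parameter before retraining.

```python
import numpy as np

rng = np.random.default_rng(1)

def train_classifier(real_set, alpha):
    """Stand-in training step: the noise generator corrupts the real set with
    scale `alpha` and a dummy 'classifier' is built on the combined data."""
    fake_set = real_set + rng.normal(0, alpha, size=real_set.shape)
    return {"alpha": alpha, "data": np.concatenate([real_set, fake_set])}

def evaluate(classifier, test_set):
    """Stand-in accuracy measure: here, higher when the noise scale is smaller."""
    return 1.0 / (1.0 + classifier["alpha"])

real_set = rng.random((4, 8, 8))
test_set = rng.random((2, 8, 8))
alpha, preset_accuracy = 1.0, 0.9

while True:
    clf = train_classifier(real_set, alpha)            # iterative training
    if evaluate(clf, test_set) > preset_accuracy:
        break                                          # accurate enough: save classifier
    alpha *= 0.5                                       # adjust activation parameter, retrain
```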
CN202010505890.7A 2020-06-05 2020-06-05 Classifier application method and device Active CN111639718B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010505890.7A CN111639718B (en) 2020-06-05 2020-06-05 Classifier application method and device

Publications (2)

Publication Number Publication Date
CN111639718A true CN111639718A (en) 2020-09-08
CN111639718B CN111639718B (en) 2023-06-23

Family

ID=72333032

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010505890.7A Active CN111639718B (en) 2020-06-05 2020-06-05 Classifier application method and device

Country Status (1)

Country Link
CN (1) CN111639718B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100310158A1 (en) * 2008-09-26 2010-12-09 Tencent Technology (Shenzhen) Company Limited Method And Apparatus For Training Classifier, Method And Apparatus For Image Recognition
CN103632134A (en) * 2013-10-17 2014-03-12 浙江师范大学 Human face identification method based on fisher low-rank matrix restoration
CN107862785A (en) * 2017-10-16 2018-03-30 深圳市中钞信达金融科技有限公司 Bill authentication method and device
CN108960278A (en) * 2017-05-18 2018-12-07 英特尔公司 Novelty detection using a discriminator of a generative adversarial network
CN109190665A (en) * 2018-07-30 2019-01-11 国网上海市电力公司 A general image classification method and device based on a semi-supervised generative adversarial network
CN111027439A (en) * 2019-12-03 2020-04-17 西北工业大学 SAR target recognition method based on an auxiliary-classification generative adversarial network

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
JOHN WRIGHT et al.: "Robust Face Recognition via Sparse Representation" *
董西伟 (DONG Xiwei): "Research on Supervised and Semi-supervised Multi-view Feature Learning Methods" *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113807281A (en) * 2021-09-23 2021-12-17 深圳信息职业技术学院 Image detection model generation method, detection method, terminal and storage medium
CN113807281B (en) * 2021-09-23 2024-03-29 深圳信息职业技术学院 Image detection model generation method, detection method, terminal and storage medium

Similar Documents

Publication Publication Date Title
CN111444952B (en) Sample recognition model generation method, device, computer equipment and storage medium
WO2021189364A1 (en) Method and device for generating adversarial image, equipment, and readable storage medium
CN111475797A (en) Method, device and equipment for generating confrontation image and readable storage medium
CN111967609B (en) Model parameter verification method, device and readable storage medium
US20200125836A1 (en) Training Method for Descreening System, Descreening Method, Device, Apparatus and Medium
CN116250020A (en) Detecting an antagonism example using a potential neighborhood graph
Mazumdar et al. Universal image manipulation detection using deep siamese convolutional neural network
CN112884075A (en) Traffic data enhancement method, traffic data classification method and related device
CN111639718A (en) Classifier application method and device
CN111488950B (en) Classification model information output method and device
CN111813593B (en) Data processing method, device, server and storage medium
Dahanayaka et al. Robust open-set classification for encrypted traffic fingerprinting
Bui et al. A clustering-based shrink autoencoder for detecting anomalies in intrusion detection systems
CN112149121A (en) Malicious file identification method, device, equipment and storage medium
Hirofumi et al. Did You Use My GAN to Generate Fake? Post-hoc Attribution of GAN Generated Images via Latent Recovery
CN111490945A (en) VPN tunnel flow identification method based on deep learning method and DFI
CN115292701A (en) Malicious code detection method and system based on combination of initiative and passivity
CN116416486A (en) Image recognition method and system
CN113657808A (en) Personnel evaluation method, device, equipment and storage medium
CN111352827A (en) Automatic testing method and device
CN111784182A (en) Asset information processing method and device
CN115250199B (en) Data stream detection method and device, terminal equipment and storage medium
CN114065867B (en) Data classification method and system and electronic equipment
CN115913769B (en) Data security storage method and system based on artificial intelligence
EP4184398A1 (en) Identifying, or checking integrity of, a machine-learning classification model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant