AU2020103207A4 - A novel method of introducing basic elementary disturbances for testing machine learning models - Google Patents

A novel method of introducing basic elementary disturbances for testing machine learning models Download PDF

Info

Publication number
AU2020103207A4
Authority
AU
Australia
Prior art keywords
machine learning
disturbances
testing
elementary
learning models
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
AU2020103207A
Inventor
Ravindra Daga Badgujar
Mahesh Bhimsham Dembrani
Navin G. Haswani
Tushar Hrishikesh Jaware
Jitendra Prakash Patil
Prashant Gorakh Patil
Sheetal Nana Patil
Vinod Ramesh Patil
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dembrani Mahesh Bhimsham Dr
Patil Prashant Gorakh Dr
Patil Sheetal Nana Miss
Original Assignee
Dembrani Mahesh Bhimsham Dr
Patil Prashant Gorakh Dr
Patil Sheetal Nana Miss
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dembrani Mahesh Bhimsham Dr, Patil Prashant Gorakh Dr, Patil Sheetal Nana Miss
Priority to AU2020103207A4 (en)
Application granted
Publication of AU2020103207A4 (en)
Ceased (legal status)
Anticipated expiration (legal status)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 Machine learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods

Abstract

A NOVEL METHOD OF INTRODUCING BASIC ELEMENTARY DISTURBANCES FOR TESTING MACHINE LEARNING MODELS

ABSTRACT

Machine Learning Models are used in a wide range of fields and have become an important part of computer science. Machine Learning Models involve systems that improve through experience, and they can be created and used in many different ways. One of the major problems with Machine Learning, compared to conventional non-Machine-Learning solutions, is that Machine Learning solutions are generally difficult to test. Evaluation is usually performed by measuring how well a Machine Learning Model classifies unseen data, and testing of a Machine Learning Model is in most cases case-specific. The invention disclosed herein is a Novel Method of Introducing Basic Elementary Disturbances for Testing Machine Learning Models comprising: Dataset-Train (103), Automatic Elementary Disturbances (104), Machine Learning Model (105), Training (106), Dataset-Test (107), and Testing (108). The invention uses a novel method to introduce basic elementary interference into the input data. The method works with many types of data and many types of Machine Learning Models. Simple disruptions can be used to forecast how a Machine Learning Model will handle unexposed disturbances, and the overall test method can serve as a clear predictor of the Machine Learning Model's tolerance to unseen disturbances.

Description

A NOVEL METHOD OF INTRODUCING BASIC ELEMENTARY DISTURBANCES FOR TESTING MACHINE LEARNING MODELS
DRAWINGS
[Figure 1 (101, 102): Generalized Data Structure Visual Representation of a 3x3x1 Matrix being Converted into a 9x1 Vector.]
[Figure 2 (100): A Novel Method of Introducing Basic Elementary Disturbances for Testing Machine Learning Models, showing Dataset-Train (103), Automatic Elementary Disturbances (104), Machine Learning Model (105), Training (106), Dataset-Test (107), and Testing (108).]
AUSTRALIA Patents Act 1990
COMPLETE SPECIFICATION
INNOVATION PATENT
A NOVEL METHOD OF INTRODUCING BASIC ELEMENTARY DISTURBANCES FOR TESTING MACHINE LEARNING MODELS
The following statement is a full description of this invention, including the best method of performing it known to me:
A NOVEL METHOD OF INTRODUCING BASIC ELEMENTARY DISTURBANCES FOR TESTING MACHINE LEARNING MODELS
FIELD OF INVENTION
[0001] The present invention relates to the technical field of Artificial Intelligence of Computer Science Engineering.
[0002] Particularly, the present invention is related to a Novel Method of Introducing Basic Elementary Disturbances for Testing Machine Learning Models of the broader field of Machine Learning in Artificial Intelligence of Computer Science Engineering.
[0003] More particularly, the present invention relates to a Novel Method of Introducing Basic Elementary Disturbances for Testing Machine Learning Models, in which basic elementary disturbances are introduced to test the Machine Learning Model.
BACKGROUND OF INVENTION
[0004] The Novel Method of Introducing Basic Elementary Disturbances for Testing Machine Learning Models disclosed here is most useful where testing of the Machine Learning Model is highly case-specific and requires considerable extra work. The performance evaluation of Machine Learning Models depends on how the model is trained and how testing is carried out on the Machine Learning Model.
[0005] Unknown disturbances are added to a dataset on which the model was not trained earlier. The Machine Learning Models are then evaluated for how well their classification accuracy adapts to these disturbances.
[0006] If the data that the models are tested on is incomplete or is not representative of all the cases encountered in the target system, there is no way of telling how accurate a model will be. This fact is the fundamental problem with metrics used for evaluating machine learning models. If the data is not verified, and if it lacks all the test-cases, the metrics themselves become insufficient and arbitrary. Because of this, a testing methodology and its results have to be as informative as possible. Instead of having one quantitative test, like testing the accuracy, it would be better to know a set of weaknesses for the tested machine learning model.
[0007] The related work in the field of testing can, for the most part, be classified into two groups. One group describes the problem of understanding and overcoming weaknesses in machine learning models; this group covers methods to explore, exploit, or solve the weaknesses in machine learning models. Another group describes different testing methodologies and how to tackle the difficulties of understanding machine learning models. Essentially, one group explores the weaknesses themselves, while the other explores how to test for them.
[0008] The typical basic evaluation method when training a Supervised Machine Learning Model is to look at the results from holdout validation. This means that the data is split into two or more parts, generally one part for training and one or more parts for evaluating performance. During training, the Machine Learning Model uses the training part of the data to improve, increasing the correctness of the model. To make sure the Machine Learning Model works as intended, it is tested on the evaluation part of the data, which is data that the machine learning model has not encountered during training. The reason for this is to test how well the model performs on unseen data.
[0009] Often, it is useful to have some additional methods of evaluation when it comes to Machine Learning Models. The currently available methods vary in methodology and are often case-specific. They are created for a specific task, a specific type of data, or a specific type of Machine Learning Model.
[0009a] The invention disclosed herein is a Method of Introducing Basic Elementary Disturbances for Testing Machine Learning Models that can be used for any task, with any type of data, and with any type of Machine Learning Model.
[0011] The types of disturbances chosen for testing have a few common features. They are chosen and designed in such a way that they should be usable for any type of data. Since the data is generalized before disturbances are added, disturbances have to work with this type of flattened data. This means that there is no existing knowledge of the structure of the data, which means that there are no created disturbances that are specific to any type of data. The disturbances are created in such a way that they work with the flattened, generalized data structure as shown in Figure 1.
[0011a] Referring to Figure 1, the Generalized Data Structure Visual Representation shows a 3x3x1 Matrix being Converted into a 9x1 Vector, comprising data with a disturbance in a single 3x3 matrix (101) and a 9x1 vector (102) obtained by flattening the data into a form that is well suited to a Machine Learning Model.
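As an illustration only (not part of the patent specification), the following Python sketch shows how a 3x3x1 matrix can be flattened into a 9x1 vector with NumPy, matching the generalized data structure of Figure 1; the variable names are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch: generalize a 3x3x1 data structure (e.g. a tiny
# single-channel image) into a flat 9x1 vector, as depicted in Figure 1.
data = np.arange(9, dtype=np.float32).reshape(3, 3, 1)

flat = data.reshape(-1, 1)      # 9x1 column vector
print(flat.shape)               # (9, 1)

# The original structure can be restored after the disturbances are applied.
restored = flat.reshape(3, 3, 1)
assert np.array_equal(data, restored)
```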
[0011b] The Disturbances are simple modifications. The modifications are Elementary, modifying the smallest building blocks of the data. These smallest building blocks are the individual elements of data, for example, the red, green, and blue colors of a pixel.
[0011c] The automatic elementary disturbances tested are Setting Random Element to a Random Value, Setting Random Elements to 1, Setting Random Elements to 0, Fading between two Data Points, Exponential Mapping Decrease of Elements (black shift), and Exponential Mapping increase of Elements (white shift).
[0012] Setting a random element to a random value sets a random input element to a random value between 0 and 1: x_r = d, where 0 ≤ d < 1 and d is drawn from a uniform random distribution, r is a random number identifying the element in the input vector (r ∈ ℕ, 0 ≤ r < N), N is the number of elements in the input vector, and x is the output vector. A binary search is performed to find at what level of disturbance the machine learning model misclassifies the original data. Conceptually this could, for example, mean that a pixel in an image becomes a different color or something similar. The effect of this random disturbance being applied to an image can be seen in Figure 4, in which random pixels are altered to different random grey values until the machine learning algorithm misclassifies the data.
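A minimal Python sketch of this disturbance and the accompanying binary search is given below. It assumes the flattened input lies in [0, 1], that `model_predict` is a hypothetical callable returning a class label, and that the "level of disturbance" is the number of disturbed elements; these are assumptions not fixed by the specification.

```python
import numpy as np

def set_random_elements_to_random(x, k, rng):
    # x_r = d with d drawn uniformly from [0, 1): disturb k random elements.
    x = x.copy()
    idx = rng.choice(x.size, size=k, replace=False)
    x[idx] = rng.random(k)
    return x

def disturbance_limit(model_predict, x, true_label, rng):
    # Binary search for the smallest number of disturbed elements at which
    # the model stops predicting the true label.
    lo, hi = 0, x.size
    limit = x.size + 1           # sentinel: never misclassified
    while lo <= hi:
        mid = (lo + hi) // 2
        if model_predict(set_random_elements_to_random(x, mid, rng)) != true_label:
            limit, hi = mid, mid - 1   # misclassified: try fewer elements
        else:
            lo = mid + 1               # still correct: disturb more elements
    return limit
```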
[0013] Setting random elements to 1 can be achieved with x_r = 1, where r is a random number identifying the element in the input vector (r ∈ ℕ, 0 ≤ r < N) and N is the number of elements in the input vector. This disturbance sets a random input element to 1. A binary search is performed to find at what level of disturbance the machine learning model misclassifies the original data. Setting random elements of an input vector to 1 can represent several disturbances. One example is audio, where a phenomenon known as white noise can occur; white noise is where all frequencies are at similar intensity. The effect of this can be seen in Figure 5, where random pixels are set to white until the machine learning algorithm misclassifies the input.
[0014] Setting random elements to 0 can be achieved with x_r = 0, where r is a random number identifying the element in the input vector (r ∈ ℕ, 0 ≤ r < N). This disturbance sets a random input element to 0. A binary search is performed to find at what level of disturbance the machine learning model misclassifies the original data. Conceptually this could, for example, mean the removal of data, such as missing pixels in an image, gaps in an audio file, or stutter in a video. The effect of this can be seen in Figure 6, where random pixels are set to black until the machine learning algorithm misclassifies the input.
[0014a] Setting random elements in the input vector to 0 represents the removal of random data. Depending on the structure of the data, setting elements to 0 could have the same effect as setting them to 1, if the representation is reversed. This means that setting to 0 and setting to 1 could be interchangeable, depending on the circumstance.
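The two constant-value disturbances from paragraphs [0013] and [0014] can be sketched together in Python as follows; this is only an illustration, assuming data normalized to [0, 1] and hypothetical names.

```python
import numpy as np

def set_random_elements_to_constant(x, k, value, rng):
    # x_r = value for k randomly chosen elements: value = 1.0 gives the
    # "white noise" style disturbance, value = 0.0 the "removal" one.
    x = x.copy()
    idx = rng.choice(x.size, size=k, replace=False)
    x[idx] = value
    return x

rng = np.random.default_rng(0)
x = rng.random(9)                                            # a flattened input vector
whitened = set_random_elements_to_constant(x, 3, 1.0, rng)   # setting elements to 1
blanked = set_random_elements_to_constant(x, 3, 0.0, rng)    # setting elements to 0
```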
[0015] Fading between two data points can be achieved with, for all n (n ∈ ℕ, 0 ≤ n < N): x_n = i1_n(1 − f) + i2_n·f, where x is the output vector, i1 and i2 are different entries from the dataset, f is the fade (f = 0 means that x = i1 and f = 1 means that x = i2), n is the index of each vector element (for example a pixel), and N is the number of elements in the input vector. A binary search is performed to find at what level of fade (f) the machine learning model misclassifies the original data, or the search is cancelled when x = i2. Conceptually this could, for example, correspond to two sounds being picked up simultaneously, or a transparent object being placed in front of something. The effect of this can be seen in Figure 7.
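A short Python sketch of the fade disturbance and its binary search over the fade level f is shown below; `model_predict` is again a hypothetical classifier callable, and the fixed number of bisection steps is an assumption.

```python
import numpy as np

def fade(i1, i2, f):
    # x_n = i1_n * (1 - f) + i2_n * f for every element n.
    return i1 * (1.0 - f) + i2 * f

def fade_limit(model_predict, i1, i2, label1, steps=20):
    # Binary search for the fade level at which the model stops predicting
    # label1 (the class of i1); returns 1.0 if the prediction never changes.
    lo, hi = 0.0, 1.0
    limit = 1.0
    for _ in range(steps):
        mid = (lo + hi) / 2.0
        if model_predict(fade(i1, i2, mid)) != label1:
            limit, hi = mid, mid
        else:
            lo = mid
    return limit
```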
[0016] The Novel Method of Introducing Basic Elementary Disturbances for Testing Machine Learning Models disclosed here is implemented in the programming language Python. Implementations of machine learning models should be in Python to fit with the current invention; the same method could, however, be implemented in other languages as well.
SUMMARY OF INVENTION
[0017] A Novel Method of Introducing Basic Elementary Disturbances for Testing Machine Learning Models is invented, in which Basic Elementary Disturbances are introduced to test the Machine Learning Model.
[0018] The Basic Elementary Disturbances introduced to test the Machine Learning Model are: Setting a Random Element to a Random Value, Setting Random Elements to 1, Setting Random Elements to 0, Fading between two Data Points, Exponential Mapping Decrease of Elements (black shift), and Exponential Mapping Increase of Elements (white shift).
[0019] To evaluate whether the novel method disclosed here can be useful for predicting the performance on unseen disturbances, the testing methodology shown in Figure 2 is used. The method uses two datasets. One dataset contains data without any disturbances; to this dataset, automatic elementary disturbances are added. The second dataset is filled with data that have complex, data-specific disturbances.
[0020] The invention disclosed here comprises two datasets, namely Dataset-Train (103) and Dataset-Test (107). These two datasets are different. The Automatic Elementary Disturbances (104) are introduced in four ways into the Dataset-Train (103) supplied to the Machine Learning Model in order to Train (106) the Machine Learning Model (105).
[0021] The Dataset-Test (107) is an unknown dataset used to Test (108) the Machine Learning Model. The performance of the Machine Learning Model on unknown elementary disturbances is evaluated in the invention disclosed here.
[0021a] The testing is performed on many differently trained two-layer convolutional neural networks. This means that they are trained on the same data but with different parameters. To see the exact parameters used for each test, see the raw results.
[0021b] The parameters used are Batch size: 128-512, Convolutional layer 1 size: 32-256, Convolutional layer 2 size: 64, Fully connected layer size: 512-4096, and Training steps: 200-8000.
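As a non-binding illustration, a two-layer convolutional network in these parameter ranges could be built in Python with TensorFlow/Keras as sketched below; the kernel sizes, pooling, activations, optimizer, and 10-class output are assumptions, since the specification only lists layer sizes, batch size, and training steps.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_model(conv1=32, conv2=64, fc=512, num_classes=10):
    # Two convolutional layers followed by one fully connected layer,
    # with sizes chosen inside the ranges listed in paragraph [0021b].
    return tf.keras.Sequential([
        layers.Conv2D(conv1, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(conv2, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(fc, activation="relu"),
        layers.Dense(num_classes),
    ])

model = build_model(conv1=64, fc=1024)
model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)
# model.fit(x_train, y_train, batch_size=256, epochs=...)  # 200-8000 training steps
```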
[0021c] For each Machine Learning Model tested, the simple disturbances are applied individually to 1000 data points to test the limit of each classification. The 1000 data points are picked from the 8732 sound excerpts in the UrbanSound8K dataset when testing audio, and from the 60000 images in the CIFAR-10 dataset when testing images; the first 1000 data points are used for each test. For the advanced disturbances, the average accuracy is saved; here the average is calculated after testing all the data points of the dataset.
[0021d] After data collection, statistical analysis can be performed on the results. This analysis clarifies the viability of the method for that machine learning model. Ideally, there should be a correlation between the simple disturbances and the advanced disturbances. Having this correlation would indicate that the simple disturbances can be used to predict performance on unseen advanced disturbances.
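One way to sketch this analysis in Python is shown below; the per-model scores are placeholder values, not results from the patent, and serve only to illustrate how such a correlation could be computed.

```python
import numpy as np

# Placeholder per-model results (illustrative only, not from the patent):
# the measured limits for a simple disturbance and the accuracy on the
# data set with real/complex (advanced) disturbances, one entry per model.
simple_limits = np.array([0.42, 0.35, 0.51, 0.29, 0.47])
advanced_accuracy = np.array([0.61, 0.55, 0.66, 0.49, 0.63])

r = np.corrcoef(simple_limits, advanced_accuracy)[0, 1]
print(f"Pearson correlation between simple and advanced results: {r:.2f}")
```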
BRIEF DESCRIPTION OF DRAWINGS
[0022] The accompanying drawings are included to provide further understanding of the invention disclosed here, and are incorporated in and constitute a part of this specification. The drawings illustrate exemplary embodiments of the present disclosure and, together with the description, serve to explain the principles of the present disclosure. The drawings are for illustration only and are thus not a limitation of the present disclosure.
[0023] Figure 1 illustrates a Generalized Data Structure Visual Representation of a 3x3x1 Matrix being Converted into a 9x1 Vector, in accordance with an exemplary embodiment of the present disclosure.
[0024] Figure 2 illustrates A Novel Method of Introducing Basic Elementary Disturbances for Testing Machine Learning Models comprising Dataset-Train (103), Automatic Elementary Disturbances (104), Machine Learning Model (105), Training (106), Dataset-Test (107), and Testing (108), in accordance with another exemplary embodiment of the present disclosure.
[0025] Figure 3 illustrates the Evaluation of the Invention Disclosed in Four Phases, in accordance with another exemplary embodiment of the present disclosure.
[0026] Figure 4 illustrates a Random Disturbance being applied to an Image, in accordance with another exemplary embodiment of the present disclosure.
[0027] Figure 5 illustrates an Element Setting Disturbance being applied to an Image, in accordance with another exemplary embodiment of the present disclosure.
[0028] Figure 6 illustrates an Element Unsetting Disturbance being applied to an Image, in accordance with another exemplary embodiment of the present disclosure.
[0029] Figure 7 illustrates a Fade Disturbance being applied to an Image, in accordance with another exemplary embodiment of the present disclosure.
[0030] Figure 8 illustrates Images originally classified as 0 that are now classified as 2, in accordance with another exemplary embodiment of the present disclosure.
DETAILED DESCRIPTION OF INVENTION
[0031] Referring to Figure 2, A Novel Method of Introducing Basic Elementary Disturbances for Testing Machine Learning Models comprises a Dataset-Train (103) that is used by the Machine Learning Model (105); Machine Learning Models are used for both the CIFAR-10 dataset and the UrbanSound8K dataset. The Automatic Elementary Disturbances (104) are introduced in four ways into the Dataset-Train (103) supplied to the Machine Learning Model in order to Train (106) the Machine Learning Model (105). The Dataset-Test (107) is an unknown dataset used to Test (108) the Machine Learning Model. The performance of the Machine Learning Model on unknown elementary disturbances is evaluated in the invention disclosed here.
[0032] The Elementary Disturbances (104) are added to test whether the Machine Learning Model classifies the disturbed data correctly or incorrectly.
[0032a] The following Table 1 is an aggregate of the tables, showing the distribution of classifications after disturbances are added. Each row is a simple disturbance; the columns 0-9 are the new classifications after the data is modified.
TABLE 1
Classification Distribution after Simple Disturbances are added
(rows: disturbance type; columns: new classification 0-9; values in %)

Disturbance Type                                0    1    2    3    4    5    6    7    8    9
Setting random elements to 1                    0    0   53   33    4    3    0    1    3    1
Setting random elements to 0                    0    4   15   24    3   31    0   14    5    3
Fade between two data points                    9    8    9   10    9    7    7    8   15   18
Exponential mapping increase of elements       10   11   10   11    9    9    9   10   10   10
Exponential mapping decrease of elements        9   11   10   11   10   11    8   11    7    9
[0032b] Each row is the distribution of new classifications after the limit for wrong classification is reached. For example, after performing the "setting random element to a random value" disturbance, 47% of the new, wrong classifications were classified as "12".
[0033] Referring to Figure 3, the Evaluation of the Invention Disclosed in Four Phases comprises: Training Machine Learning Model (201), in which multiple machine learning models are trained on the dataset without disturbances; Test the Performance of the Model (202), which uses two types of disturbances, namely Automatic Elementary Disturbances (203), in which the smallest building blocks of the data, the individual elements of data, are modified, and Data Sets with Real/Complex Disturbance (204); analysis of how well the machine learning models handle the automatic elementary disturbances, followed by testing of the machine learning models on the data sets with real/complex disturbances; and Compare the Results and Create Linear Regression Model (205), in which the results are compared to see if there is any correlation, and simple linear regression models are created to see the improvement.
[0033a] As a baseline, a linear regression model using only the holdout validation accuracy is created. If a better linear regression model can be created with the added information from the proposed method, this would indicate that the simple disturbances can be used to improve the prediction of the machine learning model's resilience to unseen disturbances. Getting a more accurate linear regression model would mean that the added information from the proposed method cannot be extrapolated from the holdout validation accuracy alone. This would, in turn, indicate that the simple disturbances, in addition to the accuracy from holdout validation, can be used in a general testing methodology for predicting how well a machine learning model will handle unseen disturbances.
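A minimal sketch of this comparison with scikit-learn is given below; the feature matrices are randomly generated placeholders standing in for the per-model measurements, and the R^2 comparison on the training data is only illustrative.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n_models = 20

# Placeholder data, one row per trained machine learning model:
holdout_acc = rng.random((n_models, 1))     # baseline predictor: holdout accuracy
simple_scores = rng.random((n_models, 5))   # measured limits for simple disturbances
target = rng.random(n_models)               # accuracy on real/complex disturbances

baseline = LinearRegression().fit(holdout_acc, target)
augmented_features = np.hstack([holdout_acc, simple_scores])
augmented = LinearRegression().fit(augmented_features, target)

print("baseline  R^2:", r2_score(target, baseline.predict(holdout_acc)))
print("augmented R^2:", r2_score(target, augmented.predict(augmented_features)))
```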
[0034] Referring to Figure 4, a Random Disturbance being applied to an Image comprises an input image (301) to which the random disturbance is applied, producing an output image (302) that is misclassified by the Machine Learning Model. The misclassification results are also saved with the Machine Learning Model for testing purposes.
[0035] Referring to Figure 5, an Element Setting Disturbance being applied to an Image comprises an input image (303) to which the element setting disturbance is applied, producing an output image (304) that is misclassified by the Machine Learning Model. The misclassification results are also saved with the Machine Learning Model for testing purposes.
[0036] Referring to Figure 6, an Element Unsetting Disturbance being applied to an Image comprises an input image (305) to which the element unsetting disturbance is applied, producing an output image (306) that is misclassified by the Machine Learning Model. The misclassification results are also saved with the Machine Learning Model for testing purposes.
[0037] Referring to Figure 7, a Fade Disturbance being applied to an Image comprises an input image (307) multiplied by a scaling factor of 0.57 and an input image (308) multiplied by a scaling factor of 0.47, to which the fade disturbance is applied, producing an output image (309) that is misclassified by the Machine Learning Model. The misclassification results are also saved with the Machine Learning Model for testing purposes.
[0038] Referring to Figure 8, images originally classified as 0 are now classified as 2 because of the elementary disturbances added to the Machine Learning Model's training data; these misclassification results are also stored with the Machine Learning Model to help it handle the unknown dataset.
[0039] During the evaluation phase, some alterations were performed on the different machine learning models to get more diverse results, but the structures of the different models were very similar. Essentially, the goal was to create machine learning models with varying results. This was done by changing the parameters that determine how the neural networks were trained. This is problematic because the machine learning models learned in very similar ways and had very similar performance, despite the attempted variability. Having a much wider array of networks and different machine learning models would most likely have yielded more diverse results. This would have given more data to work with, hence increasing the possibility of obtaining a more interesting result. For example, all networks performed similarly on both the automatic elementary disturbances and the advanced disturbances.
[0040] The invention described is broad and is meant to work as a novel method to help while doing a general analysis, independent of what type of machine learning model is used and no matter what type of data is used. A problem with focusing on the broader picture is that less focus is given to the smaller details. There are a few improvements that could be made to make this type of testing easier to use and more adaptable.

Claims (5)

A NOVEL METHOD OF INTRODUCING BASIC ELEMENTARY DISTURBANCES FOR TESTING MACHINE LEARNING MODELS
CLAIMS
We claim:
1. A Novel Method of Introducing Basic Elementary Disturbances for Testing Machine Learning Models comprising: Dataset-Train (103), Automatic Elementary Disturbances (104), Machine Learning Model (105), Training (106), Dataset-Test (107), and Testing (108), which introduce Basic Elementary Interference into the input data for testing the Machine Learning Model.
2. A Novel Method of Introducing Basic Elementary Disturbances for Testing Machine Learning Models as claimed in claim 1, wherein it introduces basic elementary disturbances namely Setting Random Element to a Random Value, Setting Random Elements to 1, Setting Random Elements to 0, Fading between two Data Points, Exponential Mapping Decrease of Elements (black shift), and Exponential Mapping increase of Elements (white shift) to test the Machine Learning Model.
3. A Novel Method of Introducing Basic Elementary Disturbances for Testing Machine Learning Models as claimed in claim 1, wherein it is trained with one dataset and tested with a different dataset, and the classification results of testing are stored for future testing.
4. A Novel Method of Introducing Basic Elementary Disturbances for Testing Machine Learning Models as claimed in claim 1, wherein it uses four phases evaluation comprising of Training Machine Learning Model (201), Test the performance of the Model with (202), Automatic Elementary Disturbances (203), Data sets with Real/Complex Disturbance (204), and Compare the results and Create Linear Regression Model (205).
5. A Novel Method of Introducing Basic Elementary Disturbances for Testing Machine Learning Models as claimed in claim 1, wherein it is helpful as a clear predictor of the tolerance of the Machine Learning Model to intangible disorders.
AU2020103207A 2020-11-03 2020-11-03 A novel method of introducing basic elementary disturbances for testing machine learning models Ceased AU2020103207A4 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2020103207A AU2020103207A4 (en) 2020-11-03 2020-11-03 A novel method of introducing basic elementary disturbances for testing machine learning models

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
AU2020103207A AU2020103207A4 (en) 2020-11-03 2020-11-03 A novel method of introducing basic elementary disturbances for testing machine learning models

Publications (1)

Publication Number Publication Date
AU2020103207A4 true AU2020103207A4 (en) 2021-01-14

Family

ID=74103566

Family Applications (1)

Application Number Title Priority Date Filing Date
AU2020103207A Ceased AU2020103207A4 (en) 2020-11-03 2020-11-03 A novel method of introducing basic elementary disturbances for testing machine learning models

Country Status (1)

Country Link
AU (1) AU2020103207A4 (en)


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113656813A (en) * 2021-07-30 2021-11-16 深圳清华大学研究院 Image processing method, system, equipment and storage medium based on anti-attack
CN113656813B (en) * 2021-07-30 2023-05-23 深圳清华大学研究院 Image processing method, system, equipment and storage medium based on attack resistance
CN113780575A (en) * 2021-08-30 2021-12-10 征图智能科技(江苏)有限公司 Super-parameter optimization method of progressive deep learning model
CN113780575B (en) * 2021-08-30 2024-02-20 征图智能科技(江苏)有限公司 Visual classification method based on progressive deep learning model


Legal Events

Date Code Title Description
FGI Letters patent sealed or granted (innovation patent)
MK22 Patent ceased section 143a(d), or expired - non payment of renewal fee or expiry