CN110647992A - Training method of convolutional neural network, image recognition method and corresponding devices thereof - Google Patents


Info

Publication number
CN110647992A
CN110647992A (application CN201910889110.0A)
Authority
CN
China
Prior art keywords
training
neural network
convolutional neural
loss function
preset
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910889110.0A
Other languages
Chinese (zh)
Inventor
陈锡显
苏玉鑫
赵胜林
沈小勇
戴宇荣
贾佳亚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Cloud Computing Beijing Co Ltd
Original Assignee
Tencent Cloud Computing Beijing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Cloud Computing Beijing Co Ltd filed Critical Tencent Cloud Computing Beijing Co Ltd
Priority to CN201910889110.0A priority Critical patent/CN110647992A/en
Publication of CN110647992A publication Critical patent/CN110647992A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00: Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10: Complex mathematical operations
    • G06F17/16: Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization

Abstract

The application provides a training method for a convolutional neural network, an image recognition method, and corresponding devices. The method includes: determining a training loss function for training the convolutional neural network, where the training loss function includes a preset loss function and a regularization term corresponding to each convolutional layer in the network; the regularization term represents the sum of the differences between each eigenvalue of the parameter matrix of the corresponding convolutional layer and a preset value, and the preset loss function represents the difference between the output of the convolutional neural network and the actual result during training; acquiring a training data set, where the training data set includes sample data labeled with the corresponding ground-truth result, and the sample data includes adversarial sample data; and training the convolutional neural network based on the training loss function and the training data set until the value of the training loss function meets a preset condition, to obtain the trained convolutional neural network. The convolutional neural network trained by this scheme has better robustness and a stronger defense capability.

Description

Training method of convolutional neural network, image recognition method and corresponding devices thereof
Technical Field
The present application relates to the field of computer technologies, and in particular, to a training method for a convolutional neural network, an image recognition method, and a device corresponding to the method.
Background
With the development of artificial intelligence, systems based on convolutional neural networks perform excellently in fields such as computer vision and natural language processing, but convolutional neural networks are at risk of adversarial attacks (Adversarial Attacks). In an adversarial attack, adversarial samples (Adversarial Samples) are obtained by adding an adversarial perturbation (Adversarial Perturbation) to normal samples; when such adversarial samples are input to the convolutional neural network, it outputs an erroneous result. If the convolutional neural network is attacked in this way, an artificial intelligence system built on it may be unable to work normally after receiving the adversarial samples, and may even cause catastrophic results.
The existing defense against adversarial attacks on convolutional neural networks is to improve the network's resistance through adversarial training. However, a convolutional neural network obtained by the existing training method still suffers from low accuracy, so the efficient and safe operation of an artificial intelligence system based on it cannot be guaranteed.
Disclosure of Invention
The purpose of this application is to solve at least one of the above technical defects. The technical solutions provided by the embodiments of this application are as follows:
in a first aspect, an embodiment of the present application provides a training method for a convolutional neural network, including:
determining a training loss function for training the convolutional neural network, where the training loss function includes a preset loss function and a regularization term corresponding to each convolutional layer in the convolutional neural network; the regularization term represents the sum of the differences between each eigenvalue of the parameter matrix of the corresponding convolutional layer and a preset value, and the preset loss function represents the difference between the output of the convolutional neural network and the actual result during training;
acquiring a training data set, where the training data set includes sample data labeled with the corresponding ground-truth result, and the sample data includes adversarial sample data;
and training the convolutional neural network based on the training loss function and the training data set until the value of the training loss function meets the preset condition, so as to obtain the trained convolutional neural network.
In an optional implementation, training the convolutional neural network based on the training loss function and the training data set specifically includes:
inputting sample data into a convolutional neural network, and determining a value of a training loss function based on the output of the convolutional neural network and a corresponding real result label;
and updating the network parameters of the convolutional neural network by using a gradient descent method based on the value of the training loss function.
In an optional implementation manner, based on the value of the training loss function, updating the network parameters of the convolutional neural network by using a gradient descent method specifically includes:
acquiring a first gradient corresponding to a preset loss function and a second gradient corresponding to each regular term;
updating network parameters of the convolutional neural network based on the value of the training loss function, the first gradient, and the second gradient.
In an optional implementation manner, obtaining the second gradient corresponding to each regularization term specifically includes:
and performing first-order gradient derivation on each regularization term in an explicit (closed-form) manner to obtain the second gradient.
In an optional embodiment, performing first-order gradient derivation on each regularization term by using an explicit computation manner, further includes:
and if singular value decomposition is required in the process of deriving the first-order gradient of each regularization term, performing it using randomized singular value decomposition.
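As a sketch of the randomized singular value decomposition mentioned above (a Halko-style range-finding scheme; the function name, oversampling parameter, and seed are illustrative, not from the patent):

```python
import numpy as np

def randomized_svd(A, k, oversample=10, seed=0):
    """Approximate rank-k SVD via a random range sketch.

    Project A onto a random subspace, orthonormalize the result,
    then take the exact SVD of the much smaller projected matrix.
    """
    rng = np.random.default_rng(seed)
    n = A.shape[1]
    Omega = rng.standard_normal((n, k + oversample))  # random test matrix
    Q, _ = np.linalg.qr(A @ Omega)                    # orthonormal basis for range(A @ Omega)
    B = Q.T @ A                                       # small matrix sharing the top-k spectrum
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Ub)[:, :k], s[:k], Vt[:k]
```

For the large parameter matrices of convolutional layers this avoids a full SVD; libraries such as scikit-learn (`sklearn.utils.extmath.randomized_svd`) and PyTorch (`torch.svd_lowrank`) provide production implementations.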
In an alternative embodiment, the method further comprises:
the network parameters of each convolutional layer of the convolutional neural network are initialized so that the network parameters of each convolutional layer obey a normal distribution N (0, 10).
In an optional embodiment, the expression of each regular term is:
λ||W^T W - I_n||_*

where λ is the regularization coefficient, W is the parameter matrix of the convolutional layer, W^T is the transpose of W, I_n is the identity matrix of order n, n is the number of columns of the parameter matrix W, and ||A||_* denotes the nuclear norm of matrix A, i.e. the sum of all singular values of A.
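A minimal sketch of this regularization term (in NumPy; the function name and default λ are illustrative):

```python
import numpy as np

def nuclear_norm_reg(W: np.ndarray, lam: float = 1.0) -> float:
    """lam * ||W^T W - I_n||_*: sum of singular values of W^T W - I_n."""
    n = W.shape[1]
    M = W.T @ W - np.eye(n)
    return lam * np.linalg.svd(M, compute_uv=False).sum()
```

When the columns of W are orthonormal, W^T W = I_n and the term vanishes, so the penalty drives each singular value of W toward 1.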
In a second aspect, an embodiment of the present application provides an image recognition method based on a convolutional neural network, including:
acquiring an image to be identified;
performing image recognition on the image to be recognized based on the convolutional neural network trained as in the first aspect or any optional embodiment of the first aspect, to obtain a recognition result;
The convolutional neural network is trained based on a training loss function; the training loss function includes a preset loss function and a regularization term, the regularization term represents the sum of the differences between each eigenvalue of the parameter matrix of the corresponding convolutional layer and a preset value, and the preset loss function represents the difference between the output of the convolutional neural network and the actual result during training.
In a third aspect, an embodiment of the present application provides a training apparatus for a convolutional neural network, including:
a loss function determination module, configured to determine a training loss function for training the convolutional neural network, where the training loss function includes a preset loss function and a regularization term corresponding to each convolutional layer in the convolutional neural network; the regularization term represents the sum of the differences between each eigenvalue of the parameter matrix of the corresponding convolutional layer and a preset value, and the preset loss function represents the difference between the output of the convolutional neural network and the actual result during training;
a training data set acquisition module, configured to acquire a training data set, where the training data set includes sample data labeled with the corresponding ground-truth result, and the sample data includes adversarial sample data;
and the training module is used for training the convolutional neural network based on the training loss function and the training data set until the value of the training loss function meets the preset condition, so as to obtain the trained convolutional neural network.
In a fourth aspect, an embodiment of the present application provides an image recognition apparatus based on a convolutional neural network, including:
the image acquisition module is used for acquiring an image to be identified;
the image recognition module is used for carrying out image recognition on the image to be recognized based on the convolutional neural network obtained by training in the first aspect or any optional embodiment of the first aspect to obtain a recognition result;
The convolutional neural network is trained based on a training loss function; the training loss function includes a preset loss function and a regularization term, the regularization term represents the sum of the differences between each eigenvalue of the parameter matrix of the corresponding convolutional layer and 1, and the preset loss function represents the difference between the output of the convolutional neural network and the actual result during training.
In a fifth aspect, an embodiment of the present application provides an electronic device, including a memory and a processor;
the memory has a computer program stored therein;
a processor configured to execute a computer program to implement the method provided in the embodiment of the first aspect, any optional embodiment of the first aspect, the embodiment of the second aspect, or any optional embodiment of the second aspect.
In a sixth aspect, embodiments of the present application provide a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the method provided in the embodiments of the first aspect, any of the alternative embodiments of the first aspect, the embodiments of the second aspect, or any of the alternative embodiments of the second aspect.
The scheme provided by the embodiment of the application has the beneficial effects that:
the method is characterized in that an additional regular term constraint is added to each convolutional layer of the convolutional neural network in the training process, the regular term represents the sum of the difference value between each characteristic value of a parameter matrix of the corresponding convolutional layer and a preset value, the scattered characteristic space corresponding to a smaller characteristic value is removed in the training process, the antagonistic disturbance in the antagonistic sample is generally distributed in the scattered characteristic space corresponding to the smaller characteristic value, the interference of the antagonistic disturbance in the antagonistic sample can be removed through the addition of the regular term, namely, the antagonistic attack of the antagonistic sample can be prevented through the addition of the regular term, and the trained convolutional neural network obtained through the training of the scheme has better robustness, higher capacity of preventing the antagonistic attack and higher accuracy rate, so that the artificial intelligent system based on the convolutional neural network can be ensured to be efficient, And (4) safe operation.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings used in the description of the embodiments of the present application will be briefly described below.
FIG. 1 is a schematic diagram of generation of a countermeasure sample image according to an embodiment of the present disclosure;
fig. 2 is a schematic flowchart of a training method of a convolutional neural network according to an embodiment of the present disclosure;
FIG. 3 is a schematic flow chart of an example provided by an embodiment of the present application;
fig. 4 is a schematic flowchart of an image identification method based on a convolutional neural network according to an embodiment of the present disclosure;
fig. 5 is a block diagram illustrating the structure of a training apparatus for a convolutional neural network according to an embodiment of the present disclosure;
fig. 6 is a block diagram illustrating an image recognition apparatus based on a convolutional neural network according to an embodiment of the present disclosure;
fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
Reference will now be made in detail to embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are exemplary, serve only to explain the present application, and are not to be construed as limiting the present application.
As used herein, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, or intervening elements may also be present. Further, "connected" or "coupled" as used herein may include wirelessly connected or wirelessly coupled. As used herein, the term "and/or" includes all or any element and all combinations of one or more of the associated listed items.
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
With the research and progress of artificial intelligence technology, the artificial intelligence technology is developed and applied in a plurality of fields, such as common smart homes, smart wearable devices, virtual assistants, smart speakers, smart marketing, unmanned driving, automatic driving, unmanned aerial vehicles, robots, smart medical care, smart customer service, and the like.
The scheme provided by the embodiments of the present application relates to artificial intelligence technologies such as computer vision and machine learning, and is explained in detail through the following embodiments. First, several terms are explained:
artificial Intelligence (AI) is a theory, method, technique and application system that uses a digital computer or a machine controlled by a digital computer to simulate, extend and expand human Intelligence, perceive the environment, acquire knowledge and use the knowledge to obtain the best results. In other words, artificial intelligence is a comprehensive technique of computer science that attempts to understand the essence of intelligence and produce a new intelligent machine that can react in a manner similar to human intelligence. Artificial intelligence is the research of the design principle and the realization method of various intelligent machines, so that the machines have the functions of perception, reasoning and decision making.
Artificial intelligence technology is a comprehensive discipline covering a wide range of fields, involving both hardware-level and software-level technologies. Basic artificial intelligence technologies generally include sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing, operation/interaction systems, and mechatronics. Artificial intelligence software technologies mainly include computer vision, speech processing, natural language processing, and machine learning/deep learning.
Computer Vision (CV) is a science that studies how to make machines "see": using cameras and computers instead of human eyes to identify, track, and measure targets, and further performing image processing so that the processed image is more suitable for human observation or for transmission to an instrument for detection. As a scientific discipline, computer vision studies related theories and techniques in an attempt to build artificial intelligence systems that can capture information from images or multidimensional data. Computer vision technologies generally include image processing, image recognition, image semantic understanding, image retrieval, OCR, video processing, video semantic understanding, video content/behavior recognition, three-dimensional object reconstruction, 3D technologies, virtual reality, augmented reality, and simultaneous localization and mapping, and also include common biometric technologies such as face recognition and fingerprint recognition.
Machine Learning (ML) is a multi-domain interdisciplinary subject involving probability theory, statistics, approximation theory, convex analysis, algorithm complexity theory, and other disciplines. It specializes in studying how a computer simulates or realizes human learning behavior so as to acquire new knowledge or skills and reorganize existing knowledge structures to continuously improve its performance. Machine learning is the core of artificial intelligence and the fundamental way to make computers intelligent, and it is applied in all fields of artificial intelligence. Machine learning and deep learning generally include techniques such as artificial neural networks, belief networks, reinforcement learning, transfer learning, and inductive learning.
Artificial intelligence systems based on convolutional neural networks are at risk of adversarial attacks. In an adversarial attack, adversarial samples are obtained by adding an adversarial perturbation to normal samples, and the adversarial samples are then input to the convolutional neural network so that it outputs wrong results. For example, as shown in fig. 1, combining the left normal sample image with the middle image (containing the adversarial perturbation) yields the right adversarial sample image, and the human eye cannot tell the two apart. When the left normal sample image is input to an image recognition system based on a convolutional neural network, the system outputs "vehicle" as the recognition result; when the right adversarial sample image is input, the system outputs some result other than "vehicle", i.e. recognition fails. If such an image recognition system is used in the field of automatic driving, an adversarial sample image can cause the automatic driving system to misjudge, with potentially catastrophic results.
The existing defense against adversarial attacks on convolutional neural networks is to improve the network's resistance through adversarial training. Generally, regularization-term constraints on each convolutional layer are added to the conventional loss function, and the convolutional neural network is then trained based on the loss function containing the regularization terms, so that the trained network is more robust and its defense against adversarial attacks is improved. Common regularization terms are:
(1) FO: λ||W^T W - I_n||_F
(2) DFO: λ||W^T W - I_n||_F + ||W W^T - I_m||_F
(3) MC: λ||W^T W - I_n||_∞
(4) SRIP: λ||W^T W - I_n||_2
where λ is the regularization coefficient, W is the parameter matrix of the convolutional layer, W^T is the transpose of W, I_n is the identity matrix of order n, n is the number of columns of W, I_m is the identity matrix of order m, m is the number of rows of W, ||A||_F denotes the Frobenius norm of matrix A, ||A||_∞ denotes the infinity norm of A, and ||A||_2 denotes the 2-norm of A. However, a convolutional neural network trained based on a loss function containing the above regularization terms still suffers from low accuracy, and the efficient and safe operation of an artificial intelligence system based on it cannot be guaranteed.
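For comparison, the four common terms listed above can be sketched as follows (NumPy; reading the infinity norm as the matrix infinity norm, i.e. the maximum absolute row sum, is an assumption, and the function name and λ are illustrative):

```python
import numpy as np

def common_reg_terms(W: np.ndarray, lam: float = 1.0) -> dict:
    """FO, DFO, MC and SRIP penalties for one parameter matrix W (m x n)."""
    m, n = W.shape
    M = W.T @ W - np.eye(n)                       # n x n orthogonality residual
    D = W @ W.T - np.eye(m)                       # m x m residual (used by DFO)
    return {
        "FO":   lam * np.linalg.norm(M, "fro"),
        "DFO":  lam * np.linalg.norm(M, "fro") + np.linalg.norm(D, "fro"),
        "MC":   lam * np.linalg.norm(M, np.inf),  # max absolute row sum
        "SRIP": lam * np.linalg.norm(M, 2),       # largest singular value
    }
```

All four vanish on M when the columns of W are orthonormal; they differ only in which norm of the residual W^T W - I_n they penalize.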
In view of the foregoing problems, an embodiment of the present application provides a training method for a convolutional neural network, so that the trained convolutional neural network has a stronger defense capability against attacks, as shown in fig. 2, the method may include:
step S201, determining a training loss function for training the convolutional neural network, where the training loss function includes a regular term and a preset loss function corresponding to each convolutional layer in the convolutional neural network, the regular term represents a sum of differences between each characteristic value of a parameter matrix of the corresponding convolutional layer and a preset value, and the preset loss function represents a difference between an output of the convolutional neural network and an actual result in a training process.
The convolutional neural network may comprise a plurality of convolutional layers. The parameters of each convolutional layer comprise the width and height of the convolution kernel and the numbers of input and output channels, and can generally be expressed as a tensor C of shape w × h × I_c × O_c, where w is the width of the convolution kernel, h is the height of the convolution kernel, I_c is the number of input channels, and O_c is the number of output channels. To facilitate calculation during training, the parameter matrix W of the convolutional layer, of shape m × n, can be obtained by matrixizing the parameters of the convolutional layer, where m = w × h × I_c and n = O_c.
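The matrixization described above amounts to a reshape. A sketch (the kernel sizes are illustrative; the shapes follow the w × h × I_c × O_c layout in the text, whereas frameworks like PyTorch store kernels as O_c × I_c × h × w and would need an axis permutation first):

```python
import numpy as np

# Illustrative kernel sizes (not from the patent)
w, h, Ic, Oc = 3, 3, 16, 32

# Convolutional-layer parameter tensor C of shape w x h x I_c x O_c
C = np.zeros((w, h, Ic, Oc))

# Matrixize to W of shape m x n, with m = w*h*I_c and n = O_c
W = C.reshape(w * h * Ic, Oc)
```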
The training loss function for training the convolutional neural network includes a preset loss function and a regularization term corresponding to each convolutional layer. The preset loss function represents the difference between the output of the convolutional neural network and the actual result during training; for example, it may be a cross-entropy loss function. Each regularization term represents the sum of the differences between each eigenvalue of the parameter matrix of the corresponding convolutional layer and a preset value. In practical applications the preset value may be set to 1, and for convenience of description the scheme is described below with the preset value set to 1, although the preset value is not limited to this. Each regularization term then represents the sum of the singular values of a specific matrix, where the specific matrix is obtained by multiplying the transpose of the corresponding parameter matrix by the parameter matrix and then subtracting a specified identity matrix, whose order equals the number of columns of the parameter matrix.
It should be noted that the training loss function may be obtained by adding the sum of the regularization terms to the preset loss function; the sum of the regularization terms may also be multiplied by a weight, whose value is referred to as the regularization coefficient. In practical applications, the regularization coefficient may be set within a preset range according to the magnitude of the effect of the regularization terms.
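Putting the pieces together, the total training loss can be sketched as follows (NumPy; cross-entropy is used only because the text names it as an example preset loss, and the function name and λ are illustrative):

```python
import numpy as np

def training_loss(probs: np.ndarray, label: int,
                  conv_matrices: list, lam: float = 1e-4) -> float:
    """Preset loss (cross-entropy) + lam * sum of per-layer nuclear-norm terms."""
    preset = -np.log(probs[label])               # cross-entropy for one sample
    reg = 0.0
    for W in conv_matrices:                      # one term per convolutional layer
        n = W.shape[1]
        reg += np.linalg.svd(W.T @ W - np.eye(n), compute_uv=False).sum()
    return preset + lam * reg
```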
Step S202: acquire a training data set, where the training data set includes sample data labeled with the corresponding ground-truth result, and the sample data includes adversarial sample data.
The training data set includes both normal sample data and adversarial sample data. Training the convolutional neural network on the normal sample data with the preset loss function enables the trained network to output correct results for normal samples; training it on the adversarial sample data in combination with the regularization terms enables the network to also output correct results for adversarial samples, thereby achieving defense against adversarial attacks.
And step S203, training the convolutional neural network based on the training loss function and the training data set until the value of the training loss function meets the preset condition, and obtaining the trained convolutional neural network.
Here, meeting the preset condition may mean that the training loss function converges.
Specifically, during training, in addition to the preset loss function, a regularization-term constraint is additionally imposed on the parameter matrix of each convolutional layer. Because the regularization term for each convolutional layer sums the differences between the eigenvalues of the parameter matrix and 1, which is equivalent to a 1-norm optimization of the parameters in the parameter matrix, the principal feature space corresponding to larger eigenvalues (greater than 1) is retained, while the non-principal (scattered) feature space corresponding to smaller eigenvalues (less than 1) is removed during training. Since the adversarial perturbations in adversarial samples are typically distributed in the non-principal feature space corresponding to small eigenvalues (less than 1), the interference of adversarial perturbations can be removed by adding the above regularization term; in other words, adding the regularization term defends against adversarial attacks.
According to the training method of the convolutional neural network provided above, an additional regularization-term constraint is added to each convolutional layer during training. The regularization term represents the sum of the differences between each eigenvalue of the parameter matrix of the corresponding convolutional layer and a preset value, so the scattered feature space corresponding to smaller eigenvalues is removed during training. Since adversarial perturbations in adversarial samples are generally distributed in this scattered feature space, adding the regularization term removes their interference, i.e., defends against adversarial attacks. The trained convolutional neural network obtained by this scheme has better robustness, a stronger defense against adversarial attacks, and higher accuracy, ensuring the efficient and safe operation of an artificial intelligence system based on the convolutional neural network.
In an optional embodiment of the present application, training the convolutional neural network based on a training loss function and a training data set specifically includes:
inputting sample data into a convolutional neural network, and determining the value of a training loss function based on the output of the convolutional neural network and a corresponding real result label;
and updating the network parameters of the convolutional neural network by using a gradient descent method based on the value of the training loss function.
The value of the training loss function can generally be expressed as a distance: the output of the convolutional neural network is in vector form, the corresponding result label is also in vector form, and the distance between the two is taken as the value of the training loss function.
Specifically, in the training process, the network parameters of the convolutional neural network are updated in each pass (or batch) of training. The updating process can be understood as solving for the variables of the training loss function (the network parameters of the convolutional neural network) by the gradient descent method, according to the value of the training loss function.
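A toy version of this update loop — with a single linear layer standing in for the network, squared Euclidean distance as the preset loss, and an illustrative learning rate, all simplifying assumptions on my part — might look like:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: sample data x, vector-form "real result labels" y.
x = rng.normal(size=(16, 4))                  # one batch of sample data
W_true = rng.normal(size=(4, 3))
y = x @ W_true                                 # labels produced by a target map

W = rng.normal(size=(4, 3))                    # network parameters to be trained
lr = 1e-3                                      # illustrative learning rate
losses = []
for step in range(200):                        # each pass = one (batch) update
    out = x @ W                                # network output in vector form
    diff = out - y
    losses.append(float(np.sum(diff ** 2)))    # distance between output and labels
    grad = 2.0 * x.T @ diff                    # gradient of the training loss
    W -= lr * grad                             # gradient-descent parameter update
```

The loss value shrinks as the parameters are updated; the method described here additionally adds the regular-term gradient to `grad`.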
In an optional embodiment of the present application, updating a network parameter of a convolutional neural network by using a gradient descent method based on a value of a training loss function specifically includes:
acquiring a first gradient corresponding to a preset loss function and a second gradient corresponding to each regular term;
updating network parameters of the convolutional neural network based on the value of the training loss function, the first gradient, and the second gradient.
Specifically, when updating the network parameters by the gradient descent method, in addition to the value of the training loss function, the corresponding gradients must be obtained in each pass (or batch) of training: a first gradient corresponding to the preset loss function, and a second gradient corresponding to the sum of the regular terms.
In an optional embodiment of the present application, obtaining the second gradient corresponding to each regularization term specifically includes:
and performing first-order gradient derivation on each regular term by using an explicit calculation mode to obtain a second gradient.
In explicit calculation, the derivative is written out in closed form, so there is no iteration or convergence problem. Using explicit computation for the first-order gradient derivation, compared with automatic first-order differentiation in deep learning frameworks such as PyTorch and TensorFlow, can greatly accelerate the solution of the second gradient while also improving the accuracy of the first-order gradient.
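One concrete reading of the explicit derivation — an assumption on my part, since the patent does not print the formula: for A = WᵀW − Iₙ with SVD A = UΣVᵀ and A nonsingular, the derivative of ‖A‖* with respect to A is UVᵀ, and the chain rule through A = WᵀW − Iₙ gives the gradient with respect to W. A sketch with a finite-difference check:

```python
import numpy as np

def reg_value(W, lam=0.01):
    """lam * ||W^T W - I_n||_* (value of one regular term)."""
    A = W.T @ W - np.eye(W.shape[1])
    return lam * float(np.linalg.svd(A, compute_uv=False).sum())

def reg_grad_explicit(W, lam=0.01):
    """Explicit (closed-form) first-order gradient of reg_value.
    With A = U S V^T nonsingular, d||A||_*/dA = U V^T; the chain rule
    through A = W^T W - I then yields W (G + G^T) with G = U V^T.
    This reading of the derivation is an assumption, not the patent's
    printed formula."""
    A = W.T @ W - np.eye(W.shape[1])
    U, _, Vt = np.linalg.svd(A)
    G = U @ Vt
    return lam * W @ (G + G.T)

rng = np.random.default_rng(1)
W = 2.0 * rng.normal(size=(6, 4))   # scaled so W^T W - I is safely nonsingular
g = reg_grad_explicit(W)

# Verify one entry against a central finite difference.
eps = 1e-6
Wp, Wm = W.copy(), W.copy()
Wp[0, 0] += eps
Wm[0, 0] -= eps
fd = (reg_value(Wp) - reg_value(Wm)) / (2 * eps)
```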
In an optional embodiment of the present application, performing a first-order gradient derivation on each regularization term by using an explicit computation manner, further includes:
and if the Singular Value Decomposition (SVD) exists in the process of carrying out first-order gradient derivation on each regular term, carrying out SVD by adopting the random SVD.
Singular value decomposition (SVD) is a step that may occur when performing the first-order gradient derivation on each regular term by explicit computation. It can be carried out with standard SVD, but the solution is slow. To further accelerate the first-order gradient derivation, when an SVD occurs in the calculation, randomized SVD can be used in place of standard SVD to obtain the result. Further, to balance solution speed and solution accuracy, when solving with randomized SVD the rank k of the approximate matrix can be chosen as min(10, m/3, n/3), and the hyperparameter q can be set to 1, so that the accuracy approaches that of standard SVD.
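A sketch of randomized SVD in the spirit described here — the standard random-range-finding construction, since the patent does not specify its exact variant — using the rank choice k = min(10, m/3, n/3) and q = 1 power iteration from the text:

```python
import numpy as np

def randomized_svd(A, k=None, q=1, seed=0):
    """Approximate top-k SVD of A by random range finding plus q power
    iterations.  Defaults follow the text: k = min(10, m//3, n//3), q = 1.
    This is a generic sketch, not the patent's exact algorithm."""
    m, n = A.shape
    if k is None:
        k = max(1, min(10, m // 3, n // 3))
    rng = np.random.default_rng(seed)
    Omega = rng.normal(size=(n, k))             # random test matrix
    Y = A @ Omega                               # sample the range of A
    for _ in range(q):                          # power iteration sharpens spectrum
        Y = A @ (A.T @ Y)
    Q, _ = np.linalg.qr(Y)                      # orthonormal range basis (m x k)
    B = Q.T @ A                                 # small k x n problem
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return Q @ Ub, s, Vt

# Exact for a matrix whose rank is at most k:
rng = np.random.default_rng(0)
A = rng.normal(size=(30, 3)) @ rng.normal(size=(3, 20))   # rank-3 matrix
U, s, Vt = randomized_svd(A)                               # k = min(10,10,6) = 6
```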
In an optional embodiment of the present application, the method may further comprise:
the network parameters of each convolutional layer of the convolutional neural network are initialized so that the network parameters of each convolutional layer obey a normal distribution N (0, 10).
Specifically, in the training process, before sample data is input for the first time, the network parameters of the convolutional neural network need to be initialized to obtain initial network parameters; these initial values have a large influence on the speed and accuracy of subsequent training. The most critical parameters at initialization are those of the convolutional layers: since each regular term added in subsequent training characterizes the sum of the differences between the eigenvalues of the parameter matrix and 1, that is, the sum of the singular values of a specific matrix, the initialization can make all singular values of each parameter matrix greater than 1. Further, a normal distribution N(0, 10) may be used to initialize the network parameters of each convolutional layer to meet this initialization requirement.
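A sketch of this initialization, reading "N(0, 10)" as mean 0 and variance 10 — the text does not say whether 10 is the variance or the standard deviation, so that reading is an assumption:

```python
import numpy as np

def init_conv_kernel(shape, seed=None):
    """Draw a conv layer's parameters from N(0, 10), read here as mean 0
    and variance 10 (standard deviation sqrt(10)); the convention is an
    assumption, since the text writes only "N(0, 10)"."""
    rng = np.random.default_rng(seed)
    return rng.normal(loc=0.0, scale=np.sqrt(10.0), size=shape)

kernel = init_conv_kernel((8, 3, 3, 3), seed=0)
```

With entries of this magnitude, the singular values of the flattened parameter matrix start well above 1, which is the stated goal of the initialization.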
In an optional embodiment of the present application, the expression of each regular term is:
λ‖WᵀW − Iₙ‖*
where λ is the regular coefficient, W is the parameter matrix of the convolutional layer, Wᵀ is the transpose of W, Iₙ is the n-order identity matrix, n is the number of columns of the parameter matrix W, and ‖A‖* denotes the sum of all singular values of the matrix A.
Further, the expression of the training loss function may be:
loss₂ = loss₁ + Σᵢ₌₁ᵏ λ‖Wᵢᵀ Wᵢ − Iₙ‖*
where loss₂ is the training loss function, loss₁ is the preset loss function, k is the number of convolutional layers, loss₁ = f(W₁ + W₂ + W₃ + … + Wₖ), Wᵢ is the parameter matrix of the i-th convolutional layer, and Wᵢᵀ is the transpose of Wᵢ.
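Under the same notation, the full training loss can be sketched as follows (the regular coefficient λ = 0.1 and the toy layer shapes are illustrative assumptions):

```python
import numpy as np

def training_loss(preset_loss, layer_matrices, lam=0.1):
    """loss2 = loss1 + sum over the k convolutional layers of
    lam * ||W_i^T W_i - I_n||_*, matching the expression above.
    The regular coefficient lam = 0.1 is illustrative."""
    total = float(preset_loss)                       # loss1
    for W in layer_matrices:                         # one regular term per layer
        A = W.T @ W - np.eye(W.shape[1])
        total += lam * float(np.linalg.svd(A, compute_uv=False).sum())
    return total

rng = np.random.default_rng(0)
Ws = [rng.normal(size=(12, 4)) for _ in range(3)]    # k = 3 toy parameter matrices
loss2 = training_loss(1.5, Ws)
```

For a layer whose parameter matrix has orthonormal columns, its regular term is zero and it contributes nothing beyond the preset loss.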
In this example, different regular terms are added to the preset loss function to obtain different training loss functions; these are used to train convolutional neural networks for image recognition, the error rate of each trained network during image recognition is counted, and the accuracies of the networks trained with the training loss function corresponding to each regular term are then compared. The implementation of the training based on each training loss function in this example is shown in fig. 3 and may include: (a) constructing an artificial intelligence system based on a convolutional neural network; (b) matrixing the parameters of each convolutional layer of the convolutional neural network to obtain the corresponding parameter matrices; (c) adding the regular term constraint to the parameter matrix corresponding to each convolutional layer, that is, regularizing the matrix parameters; (d) initializing the network parameters of the convolutional neural network before sample data is input for the first time; (e) updating the network parameters by the gradient descent method (first-order gradients) during training; (f) ending the training when the training loss function converges, to obtain the trained convolutional neural network. Two convolutional neural networks for image recognition are trained with the different training loss functions, and the error rate of each trained network during image recognition is counted, with the following statistical results:
(1) When the network wideResNet28-10 is trained, the CIFAR10 data set is used as the training data set, and the regular term contained in the training loss function is, respectively, the regular term of the present scheme, the regular term FO, the regular term MC, and the regular term SRIP. The FGSM (Fast Gradient Sign Method) attack algorithm is used to generate adversarial sample images, which are input into each trained wideResNet28-10. The error rate corresponding to each training mode is shown in Table 1:
TABLE 1

| Training mode | The scheme of the application | Regular term FO | Regular term MC | Regular term SRIP |
| Image recognition error rate | 4.14% | 17.12% | 16.21% | 16.61% |
As can be seen from table 1, the training mode provided by the present application has a lower error rate, i.e., a higher accuracy, and is better than the training modes corresponding to the other three regular terms.
(2) When the network wideResNet29-8-64 is trained, the CIFAR10 data set is used as the training data set, and the training loss function contains, respectively, the regular term of the present scheme, the regular term FO, the regular term MC, and the regular term SRIP. The FGSM attack algorithm is used to generate adversarial sample images, which are input into each trained wideResNet29-8-64. The error rate corresponding to each training mode is shown in Table 2:
TABLE 2

| Training mode | The scheme of the application | Regular term FO | Regular term MC | Regular term SRIP |
| Image recognition error rate | 4.28% | 14.68% | 15.90% | 11.42% |
As can be seen from table 2, the training mode provided by the present application has a lower error rate, i.e., a higher accuracy, and is better than the training modes corresponding to the other three regular terms.
Fig. 4 is a schematic flowchart of an image identification method based on a convolutional neural network according to an embodiment of the present disclosure, and as shown in fig. 4, the method may include:
step S401, acquiring an image to be identified.
The image to be identified can be a normal image or a confrontation sample image.
Step S402, performing image recognition on an image to be recognized based on the convolutional neural network obtained by training in the mode of the embodiment corresponding to FIG. 2 to obtain a recognition result;
The convolutional neural network is obtained by training with a training loss function that includes a regular term and a preset loss function. The regular term characterizes, for each convolutional layer, the sum of the differences between each eigenvalue of the corresponding convolutional layer's parameter matrix and a preset value, and the preset loss function characterizes the difference between the output of the convolutional neural network and the actual result during training.
Wherein, the sample data comprises both normal images and confrontation sample images in the training process.
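Steps S401–S402 amount to a forward pass of the trained network plus a label lookup. A minimal sketch, in which `toy_forward` and the label set are hypothetical stand-ins for the trained convolutional neural network and its classes:

```python
import numpy as np

def recognize(image, forward, class_names):
    """Image recognition with a trained network: run the forward pass on
    the image to be identified and return the label whose score is
    highest.  `forward` and `class_names` are illustrative stand-ins,
    not names from the patent."""
    scores = forward(image)                      # network output in vector form
    return class_names[int(np.argmax(scores))]  # recognition result

# Toy "trained network": a fixed linear map over a flattened image.
rng = np.random.default_rng(0)
W = rng.normal(size=(12, 3))
toy_forward = lambda img: img.reshape(-1) @ W
labels = ["cat", "dog", "plane"]                 # hypothetical label set
result = recognize(rng.normal(size=(2, 2, 3)), toy_forward, labels)
```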
According to the image recognition method based on the convolutional neural network, an additional regular term constraint is imposed on each convolutional layer of the convolutional neural network during training. The regular term characterizes the sum of the differences between each eigenvalue of the corresponding convolutional layer's parameter matrix and a preset value, so the scattered feature space corresponding to smaller eigenvalues is removed during training. Because the adversarial perturbations in adversarial samples are generally distributed in that scattered feature space, adding the regular term removes their interference; that is, adding the regular term defends against the adversarial attack of the adversarial sample. The trained convolutional neural network obtained by this scheme therefore has better robustness, a stronger ability to defend against adversarial attacks, and higher accuracy, which ensures the efficient and safe operation of an image recognition system based on this image recognition method.
Fig. 5 is a block diagram of a training apparatus for a convolutional network according to an embodiment of the present disclosure, and as shown in fig. 5, the apparatus 500 may include: a loss function determination module 501, a training data set acquisition module 502, and a training module 503. Wherein:
the loss function determining module 501 is configured to determine a training loss function for training the convolutional neural network, where the training loss function includes a regular term and a preset loss function corresponding to each convolutional layer in the convolutional neural network, the regular term represents a sum of differences between each eigenvalue of a parameter matrix of the corresponding convolutional layer and a preset value, and the preset loss function represents a difference between an output of the convolutional neural network and an actual result in a training process. The training data set obtaining module 502 is configured to obtain a training data set, where the training data set includes sample data, and the sample data is labeled with a corresponding real result label, and includes challenge sample data. The training module 503 is configured to train the convolutional neural network based on a training loss function and a training data set, and obtain the trained convolutional neural network until a value of the training loss function satisfies a preset condition.
According to the training apparatus for a convolutional neural network, an additional regular term constraint is imposed on each convolutional layer of the convolutional neural network during training. The regular term characterizes the sum of the differences between each eigenvalue of the corresponding convolutional layer's parameter matrix and a preset value, so the scattered feature space corresponding to smaller eigenvalues is removed during training. Because the adversarial perturbations in adversarial samples are generally distributed in that scattered feature space, adding the regular term removes their interference; that is, adding the regular term defends against the adversarial attack of the adversarial sample. The trained convolutional neural network obtained by this scheme therefore has better robustness, a stronger ability to defend against adversarial attacks, and higher accuracy, which ensures the efficient, fast, and safe operation of an artificial intelligence system based on the convolutional neural network.
In an optional embodiment of the present application, the training module is specifically configured to:
inputting sample data into a convolutional neural network, and determining a value of a training loss function based on the output of the convolutional neural network and a corresponding real result label;
and updating the network parameters of the convolutional neural network by using a gradient descent method based on the value of the training loss function.
In an optional embodiment of the present application, the training module is specifically configured to:
acquiring a first gradient corresponding to a preset loss function and a second gradient corresponding to each regular term;
updating network parameters of the convolutional neural network based on the value of the training loss function, the first gradient, and the second gradient.
In an optional embodiment of the present application, the training module is specifically configured to:
and performing first-order gradient derivation on each regular term by using an explicit calculation mode to obtain a second gradient.
In an optional embodiment of the present application, the training module is further configured to:
if singular value decomposition (SVD) occurs in the process of performing the first-order gradient derivation on each regular term, perform the SVD using randomized SVD.
In an optional embodiment of the present application, the apparatus may further include a parameter initialization module, specifically configured to:
the network parameters of each convolutional layer of the convolutional neural network are initialized so that the network parameters of each convolutional layer obey a normal distribution N (0, 10).
In an optional embodiment of the present application, the expression of each regular term is:
λ‖WᵀW − Iₙ‖*
where λ is the regular coefficient, W is the parameter matrix of the convolutional layer, Wᵀ is the transpose of W, Iₙ is the n-order identity matrix, n is the number of columns of the parameter matrix W, and ‖A‖* denotes the sum of all singular values of the matrix A.
Fig. 6 is a block diagram illustrating an image recognition apparatus based on a convolutional neural network according to an embodiment of the present disclosure, where the apparatus 600 may include: an image acquisition module 601 and an image recognition module 602, wherein:
The image obtaining module 601 is configured to obtain an image to be recognized. The image recognition module 602 is configured to perform image recognition on the image to be recognized based on the convolutional neural network trained in the manner of the embodiment corresponding to fig. 2, to obtain a recognition result. The convolutional neural network is obtained by training with a training loss function that includes a regular term and a preset loss function; the regular term characterizes, for each convolutional layer, the sum of the differences between each eigenvalue of that layer's parameter matrix and a preset value, and the preset loss function characterizes the difference between the output of the convolutional neural network and the actual result during training.
According to the image recognition apparatus based on the convolutional neural network, an additional regular term constraint is imposed on each convolutional layer of the convolutional neural network during training. The regular term characterizes the sum of the differences between each eigenvalue of the corresponding convolutional layer's parameter matrix and a preset value, so the scattered feature space corresponding to smaller eigenvalues is removed during training. Because the adversarial perturbations in adversarial samples are generally distributed in that scattered feature space, adding the regular term removes their interference; that is, adding the regular term defends against the adversarial attack of the adversarial sample. The trained convolutional neural network obtained by this scheme therefore has better robustness, a stronger ability to defend against adversarial attacks, and higher accuracy, which ensures the efficient and safe operation of an image recognition system based on this image recognition method.
Based on the same principle, an embodiment of the present application further provides an electronic device, where the electronic device includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and when the processor executes the computer program, the method provided in any optional embodiment of the present application is implemented, and the following specific cases may be implemented:
the first condition is as follows: determining a training loss function for training the convolutional neural network, wherein the training loss function comprises a regular term and a preset loss function which correspond to each convolutional layer in the convolutional neural network, the regular term represents the sum of difference values of each characteristic value and a preset numerical value of a parameter matrix of the corresponding convolutional layer, and the preset loss function represents the difference between the output of the convolutional neural network and an actual result in the training process; acquiring a training data set, wherein the training data set comprises sample data, the sample data is marked with a corresponding real result label, and the sample data comprises challenge sample data; and training the convolutional neural network based on the training loss function and the training data set until the value of the training loss function meets the preset condition, so as to obtain the trained convolutional neural network.
Case two: acquiring an image to be identified; performing image recognition on the image to be recognized based on the convolutional neural network trained in the manner of the embodiment corresponding to fig. 2, to obtain a recognition result. The convolutional neural network is obtained by training with a training loss function that includes a regular term and a preset loss function; the regular term characterizes, for each convolutional layer, the sum of the differences between each eigenvalue of that layer's parameter matrix and a preset value, and the preset loss function characterizes the difference between the output of the convolutional neural network and the actual result during training.
The embodiment of the present application provides a computer-readable storage medium, on which a computer program is stored, and the computer program, when executed by a processor, implements the method shown in any embodiment of the present application.
It is to be understood that the medium may store a computer program corresponding to a training method of a convolutional neural network, or a computer program corresponding to an image recognition method based on a convolutional neural network.
Fig. 7 is a schematic structural diagram of an electronic device to which the embodiment of the present application is applied, and as shown in fig. 7, an electronic device 700 shown in fig. 7 includes: a processor 701 and a memory 703. The processor 701 is coupled to a memory 703, such as via a bus 702. Further, the electronic device 700 may also include a transceiver 704, and the electronic device 700 may interact with other electronic devices through the transceiver 704. It should be noted that the transceiver 704 is not limited to one in practical applications, and the structure of the electronic device 700 is not limited to the embodiment of the present application.
The processor 701 applied in this embodiment of the present application may be configured to implement the functions of the loss function determining module, the training data set obtaining module, and the training module shown in fig. 5, and may also be configured to implement the functions of the image obtaining module and the image recognition module shown in fig. 6.
The processor 701 may be a CPU, general purpose processor, DSP, ASIC, FPGA or other programmable logic device, transistor logic device, hardware component, or any combination thereof. Which may implement or perform the various illustrative logical blocks, modules, and circuits described in connection with the disclosure. The processor 701 may also be a combination of computing functions, e.g., comprising one or more microprocessors, DSPs, and microprocessors, among others.
Bus 702 may include a path that transfers information between the above components. The bus 702 may be a PCI bus or an EISA bus, etc. The bus 702 may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one thick line is shown in FIG. 7, but this is not intended to represent only one bus or type of bus.
The memory 703 may be, but is not limited to, ROM or another type of static storage device that can store static information and instructions, RAM or another type of dynamic storage device that can store information and instructions, EEPROM, optical disk storage (including compact disk, laser disk, digital versatile disk, Blu-ray disk, etc.), magnetic disk storage media or other magnetic storage devices, or any other medium that can carry or store desired program code in the form of instructions or data structures and can be accessed by a computer.
The memory 703 is used for storing application program codes for executing the present invention, and is controlled by the processor 701. The processor 701 is configured to execute application program codes stored in the memory 703 to implement the actions of the training apparatus for convolutional neural network provided in the embodiment shown in fig. 5 or the actions of the image recognition apparatus based on convolutional neural network provided in the embodiment shown in fig. 6.
It should be understood that, although the steps in the flowcharts of the figures are shown in order as indicated by the arrows, the steps are not necessarily performed in order as indicated by the arrows. The steps are not performed in the exact order shown and may be performed in other orders unless explicitly stated herein. Moreover, at least a portion of the steps in the flow chart of the figure may include multiple sub-steps or multiple stages, which are not necessarily performed at the same time, but may be performed at different times, which are not necessarily performed in sequence, but may be performed alternately or alternately with other steps or at least a portion of the sub-steps or stages of other steps.
The foregoing describes only some embodiments of the present application. It should be noted that those skilled in the art can make various modifications and improvements without departing from the principle of the present application, and these modifications and improvements should also be regarded as falling within the protection scope of the present application.

Claims (12)

1. A method of training a convolutional neural network, comprising:
determining a training loss function for training the convolutional neural network, wherein the training loss function comprises a regular term and a preset loss function corresponding to each convolutional layer in the convolutional neural network, the regular term represents the sum of difference values of each characteristic value and a preset numerical value of a parameter matrix of the corresponding convolutional layer, and the preset loss function represents the difference between the output and an actual result of the convolutional neural network in the training process;
acquiring a training data set, wherein the training data set comprises sample data, the sample data is marked with a corresponding real result label, and the sample data comprises confrontation sample data;
and training the convolutional neural network based on the training loss function and the training data set until the value of the training loss function meets a preset condition, so as to obtain the trained convolutional neural network.
2. The method of claim 1, wherein training the convolutional neural network based on the training loss function and the training data set comprises:
inputting the sample data into the convolutional neural network, and determining the value of the training loss function based on the output of the convolutional neural network and a corresponding real result label;
and updating the network parameters of the convolutional neural network by using a gradient descent method based on the value of the training loss function.
3. The method according to claim 2, wherein the updating the network parameters of the convolutional neural network using a gradient descent method based on the value of the training loss function specifically comprises:
acquiring a first gradient corresponding to the preset loss function and a second gradient corresponding to each regular term;
updating network parameters of the convolutional neural network based on the values of the training loss function, the first gradient, and the second gradient.
4. The method according to claim 3, wherein obtaining the second gradient corresponding to each regularization term specifically includes:
and performing first-order gradient derivation on each regular term by using an explicit calculation mode to obtain the second gradient.
5. The method of claim 4, wherein performing a first order gradient derivation on each regularization term using an explicit computation, further comprises:
and if singular value decomposition exists in the process of carrying out first-order gradient derivation on each regular item, carrying out singular value decomposition by adopting random singular value decomposition.
6. The method of claim 1, further comprising:
initializing network parameters of each convolutional layer of the convolutional neural network so that the network parameters of each convolutional layer obey normal distribution N (0, 10).
7. The method of any of claims 1-6, wherein the expression for each regular term is:
λ‖WᵀW − Iₙ‖*
wherein λ is the regular coefficient, W is the parameter matrix of the convolutional layer, Wᵀ is the transpose of W, Iₙ is the n-order identity matrix, n is the number of columns of the parameter matrix W, and ‖A‖* denotes the sum of all singular values of the matrix A.
8. An image recognition method based on a convolutional neural network is characterized by comprising the following steps:
acquiring an image to be identified;
performing image recognition on the image to be recognized based on the convolutional neural network obtained by training in the mode of any one of claims 1 to 7 to obtain a recognition result;
the convolutional neural network is obtained based on training of a training loss function, the training loss function comprises a regular term and a preset loss function, the regular term represents the sum of difference values of all characteristic values and preset numerical values of parameter matrixes of corresponding convolutional layers, and the preset loss function represents the difference between the output and an actual result of the convolutional neural network in the training process.
9. An apparatus for training a convolutional neural network, comprising:
a loss function determining module, configured to determine a training loss function used for training the convolutional neural network, where the training loss function includes a regular term and a preset loss function corresponding to each convolutional layer in the convolutional neural network, the regular term represents a sum of differences between each eigenvalue of a parameter matrix of the corresponding convolutional layer and a preset value, and the preset loss function represents a difference between an output of the convolutional neural network and an actual result in a training process;
the training data set acquisition module is used for acquiring a training data set, wherein the training data set comprises sample data, the sample data is marked with a corresponding real result label, and the sample data comprises confrontation sample data;
and the training module is used for training the convolutional neural network based on the training loss function and the training data set until the value of the training loss function meets a preset condition, so as to obtain the trained convolutional neural network.
10. An image recognition apparatus based on a convolutional neural network, comprising:
the image acquisition module is used for acquiring an image to be identified;
the image recognition module is used for carrying out image recognition on the image to be recognized based on the convolutional neural network obtained by training in the mode of any one of claims 1 to 7 to obtain a recognition result;
the convolutional neural network is obtained based on training of a training loss function, the training loss function comprises a regular term and a preset loss function, the regular term represents the sum of difference values of all characteristic values and preset numerical values of parameter matrixes of corresponding convolutional layers, and the preset loss function represents the difference between the output and an actual result of the convolutional neural network in the training process.
11. An electronic device comprising a memory and a processor;
the memory has stored therein a computer program;
the processor is configured to execute the computer program to implement the method of any one of claims 1 to 8.
12. A computer-readable storage medium, characterized in that a computer program is stored on the computer-readable storage medium, which computer program, when being executed by a processor, carries out the method of any one of claims 1 to 8.
CN201910889110.0A 2019-09-19 2019-09-19 Training method of convolutional neural network, image recognition method and corresponding devices thereof Pending CN110647992A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910889110.0A CN110647992A (en) 2019-09-19 2019-09-19 Training method of convolutional neural network, image recognition method and corresponding devices thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910889110.0A CN110647992A (en) 2019-09-19 2019-09-19 Training method of convolutional neural network, image recognition method and corresponding devices thereof

Publications (1)

Publication Number Publication Date
CN110647992A true CN110647992A (en) 2020-01-03

Family

ID=69010816

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910889110.0A Pending CN110647992A (en) 2019-09-19 2019-09-19 Training method of convolutional neural network, image recognition method and corresponding devices thereof

Country Status (1)

Country Link
CN (1) CN110647992A (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111798414A (en) * 2020-06-12 2020-10-20 北京阅视智能技术有限责任公司 Method, device and equipment for determining definition of microscopic image and storage medium
CN111783085A (en) * 2020-06-29 2020-10-16 浙大城市学院 Defense method and device for resisting sample attack and electronic equipment
CN111783085B (en) * 2020-06-29 2023-08-22 浙大城市学院 Defense method and device for resisting sample attack and electronic equipment
WO2022052601A1 (en) * 2020-09-10 2022-03-17 华为技术有限公司 Neural network model training method, and image processing method and device
CN112580732A (en) * 2020-12-25 2021-03-30 北京百度网讯科技有限公司 Model training method, device, equipment, storage medium and program product
CN112580732B (en) * 2020-12-25 2024-02-23 北京百度网讯科技有限公司 Model training method, device, apparatus, storage medium and program product
CN113239223A (en) * 2021-04-14 2021-08-10 浙江大学 Image retrieval method based on input gradient regularization
CN113537492A (en) * 2021-07-19 2021-10-22 第六镜科技(成都)有限公司 Model training and data processing method, device, equipment, medium and product
CN113537492B (en) * 2021-07-19 2024-04-26 第六镜科技(成都)有限公司 Model training and data processing method, device, equipment, medium and product

Similar Documents

Publication Publication Date Title
CN110647992A (en) Training method of convolutional neural network, image recognition method and corresponding devices thereof
Sax et al. Mid-level visual representations improve generalization and sample efficiency for learning visuomotor policies
Sax et al. Learning to navigate using mid-level visual priors
US20170083754A1 (en) Methods and Systems for Verifying Face Images Based on Canonical Images
EP2724297B1 (en) Method and apparatus for a local competitive learning rule that leads to sparse connectivity
CN110659723B (en) Data processing method and device based on artificial intelligence, medium and electronic equipment
CN110969250A (en) Neural network training method and device
Tao et al. Nonlocal neural networks, nonlocal diffusion and nonlocal modeling
CN111476806B (en) Image processing method, image processing device, computer equipment and storage medium
CN111782826A (en) Knowledge graph information processing method, device, equipment and storage medium
CN112215332A (en) Searching method of neural network structure, image processing method and device
CN112907552A (en) Robustness detection method, device and program product for image processing model
CN113435520A (en) Neural network training method, device, equipment and computer readable storage medium
CN111310821A (en) Multi-view feature fusion method, system, computer device and storage medium
CN108229536A (en) Optimization method, device and the terminal device of classification prediction model
CN111008631A (en) Image association method and device, storage medium and electronic device
CN114511042A (en) Model training method and device, storage medium and electronic device
CN115496144A (en) Power distribution network operation scene determining method and device, computer equipment and storage medium
CN113808044B (en) Encryption mask determining method, device, equipment and storage medium
CN113569611A (en) Image processing method, image processing device, computer equipment and storage medium
CN111950635A (en) Robust feature learning method based on hierarchical feature alignment
CN115982645A (en) Method, device, processor and computer-readable storage medium for realizing data annotation based on machine learning in trusted environment
CN110502975A (en) A kind of batch processing system that pedestrian identifies again
CN112036446B (en) Method, system, medium and device for fusing target identification features
Zheng et al. Minimal support vector machine

Legal Events

Date Code Title Description
PB01 Publication
REG Reference to a national code
Ref country code: HK
Ref legal event code: DE
Ref document number: 40018637
Country of ref document: HK

SE01 Entry into force of request for substantive examination