CN111768792B - Audio steganalysis method based on convolutional neural network and domain countermeasure learning - Google Patents

Audio steganalysis method based on convolutional neural network and domain countermeasure learning

Info

Publication number
CN111768792B
Authority
CN
China
Prior art keywords
network
layer
audio
steganalysis
domain
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010415018.3A
Other languages
Chinese (zh)
Other versions
CN111768792A (en
Inventor
王让定
林昱臻
严迪群
董理
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Hongyue Information Technology Co ltd
Tianyi Safety Technology Co Ltd
Original Assignee
Tianyi Safety Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianyi Safety Technology Co Ltd filed Critical Tianyi Safety Technology Co Ltd
Priority to CN202010415018.3A priority Critical patent/CN111768792B/en
Publication of CN111768792A publication Critical patent/CN111768792A/en
Application granted granted Critical
Publication of CN111768792B publication Critical patent/CN111768792B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L19/00 Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • G10L19/018 Audio watermarking, i.e. embedding inaudible data in the audio signal
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L19/00 Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • G10L19/04 Speech or audio signals analysis-synthesis techniques for redundancy reduction using predictive techniques
    • G10L19/06 Determination or coding of the spectral characteristics, e.g. of the short-term prediction coefficients
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/27 Speech or voice analysis techniques characterised by the analysis technique
    • G10L25/30 Speech or voice analysis techniques characterised by the analysis technique using neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Signal Processing (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Computational Linguistics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • Complex Calculations (AREA)

Abstract

The invention relates to an audio steganalysis method based on a convolutional neural network and domain adversarial learning. The network framework corresponding to the method comprises a feature extraction sub-network G_f(·; θ_f), a steganalysis sub-network G_y(·; θ_y) and a carrier source discrimination sub-network G_d(·; θ_d), where θ_f, θ_y and θ_d denote the network parameters of the respective sub-networks. The proposed method can effectively alleviate the performance degradation of audio steganalysis models caused by the carrier source mismatch problem, and thus provides a feasible approach for applying audio steganalysis technology in complex Internet big-data forensics scenarios.

Description

Audio steganalysis method based on convolutional neural network and domain adversarial learning
Technical Field
The invention relates to the technical field of audio steganalysis, and in particular to an audio steganalysis method based on convolutional neural networks and domain adversarial learning.
Background
Current audio steganalysis models based on deep learning achieve high detection performance under laboratory conditions. However, in a real network big-data forensics environment, audio carrier data are diverse and heterogeneous; if such data are detected directly with a steganalysis model trained in the laboratory, the accuracy drops sharply.
The carrier source mismatch (Cover Source Mismatch, CSM) problem in audio steganalysis arises when the training-set audio data and the test-set audio data come from different sources (for example, different recording equipment, speaker gender or language). CSM is essentially a domain adaptation (Domain Adaptation) problem in transfer learning, which can be defined as follows: given a labeled source domain D_s = {(x_i^s, y_i^s)}_{i=1}^{n} and an unlabeled target domain D_t = {x_j^t}_{j=1}^{m}, assume that the two domains share the same feature space, the same label space and the same conditional probability distribution but have different marginal distributions. The goal of domain-adaptive learning is then to use the labeled data of D_s to learn a classifier f: x^t → y^t that predicts the labels of the target domain D_t with minimal prediction error.
However, there is currently no solution directed specifically at the CSM problem in audio steganalysis.
Disclosure of Invention
In view of the above problems, the invention aims to provide an audio steganalysis method based on a convolutional neural network and domain adversarial learning, which can effectively mitigate the performance degradation of an audio steganalysis model caused by the CSM phenomenon and improve the feasibility of applying audio steganalysis technology in complex Internet big-data forensics scenarios.
In order to achieve the above purpose, the technical scheme of the invention is as follows. An audio steganalysis method based on a convolutional neural network and domain adversarial learning is characterized in that: the network framework corresponding to the method comprises a feature extraction sub-network G_f(·; θ_f), a steganalysis sub-network G_y(·; θ_y) and a carrier source discrimination sub-network G_d(·; θ_d), wherein θ_f, θ_y and θ_d denote the network parameters of the respective sub-networks, and the method comprises the following steps.
S1, inputting the source domain data D_s = {(x_i^s, y_i^s)}_{i=1}^{n}, the target domain data D_t = {x_j^t}_{j=1}^{m}, an adversarial training factor λ, and a learning rate η;
S2, outputting the steganalysis feature vector F through the feature extraction sub-network;
S3, passing the steganalysis feature vector F through the steganalysis sub-network to obtain the binary steganalysis prediction probability ŷ, computing the cross-entropy loss L_y between ŷ and the original steganographic label y, and updating the network parameters θ_y accordingly through back-propagation of the error with a gradient descent algorithm, wherein y ∈ {0,1}: the value y = 0 denotes an original cover and y = 1 denotes a stego cover;
S4, passing the steganalysis feature vector F through the carrier source discrimination sub-network to obtain the carrier source prediction probability d̂, computing the cross-entropy loss L_d between d̂ and the original domain label d, and updating the network parameters θ_d accordingly through back-propagation of the error, wherein d ∈ {0,1}: the value d = 0 denotes the source domain and d = 1 denotes the target domain.
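The binary cross-entropy losses of steps S3 and S4 can be illustrated with a minimal numpy sketch; the function name `binary_cross_entropy` and the sample probabilities below are ours, not part of the patented method:

```python
import numpy as np

def binary_cross_entropy(p, label, eps=1e-12):
    """Cross-entropy between a predicted probability p and a {0,1} label,
    as used for both the steganographic label y (S3) and the domain label d (S4)."""
    p = np.clip(p, eps, 1.0 - eps)  # guard against log(0)
    return -(label * np.log(p) + (1.0 - label) * np.log(1.0 - p))

# A confident, correct prediction yields a small loss;
# a confident, wrong prediction yields a large one.
loss_good = binary_cross_entropy(0.9, 1)  # stego sample predicted stego
loss_bad = binary_cross_entropy(0.9, 0)   # cover sample predicted stego
print(round(float(loss_good), 4), round(float(loss_bad), 4))  # prints: 0.1054 2.3026
```

Averaging this quantity over a batch gives the losses L_y and L_d that drive the parameter updates of S3 and S4.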
Further, the feature extraction sub-network in S2 comprises an audio preprocessing layer followed by 4 cascaded convolution groups, namely the 1st convolution group, the 2nd convolution group, the 3rd convolution group and the 4th convolution group.
Further, the audio preprocessing layer consists of 4 convolution kernels of size 1×5, D1 to D4, whose initial weights are respectively:
D1=[1,-1,0,0,0], D2=[1,-2,1,0,0], D3=[1,-3,3,1,0], D4=[1,-4,6,-4,1];
the 1 st convolution group includes a 1×1 first convolution layer, a 1×5 second convolution layer, and a 1×1 third convolution layer;
the 2nd convolution group, the 3rd convolution group and the 4th convolution group each comprise a 1×5 convolution layer, a 1×1 convolution layer and an average pooling layer, wherein the average pooling layer of the 4th convolution group is a global average pooling layer;
the steganalysis feature vector is a 256-dimensional vector.
Furthermore, the audio preprocessing layer adopts a differential filtering design.
Further, the steganalysis sub-network comprises a fully connected block and a steganographic label prediction layer, wherein the fully connected block is formed by cascading two fully connected layers containing 128 and 64 neurons respectively.
Further, the carrier source discrimination sub-network comprises a gradient reversal layer, a domain discrimination layer and a domain label prediction layer, wherein the gradient reversal layer keeps an identity mapping from input to output in the forward propagation stage and reverses the gradient of the error in the back-propagation stage, expressed respectively as
Forward: F(x) = x
Backward: dF(x)/dx = -λI
wherein F(x) denotes the equivalent function of the gradient reversal layer and I is the identity matrix.
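The behaviour of the gradient reversal layer can be sketched numerically (a minimal numpy illustration with our own function names; λ = 0.5 is chosen only for the example): the forward pass is the identity, while the backward pass multiplies the incoming gradient by -λ.

```python
import numpy as np

LAMBDA = 0.5  # adversarial training factor λ (example value, not from the patent)

def grl_forward(x):
    # Forward propagation stage: identity mapping, F(x) = x
    return x

def grl_backward(grad_output, lam=LAMBDA):
    # Back-propagation stage: gradient reversed and scaled, dF(x)/dx = -λI
    return -lam * grad_output

x = np.array([0.2, -1.0, 3.0])
assert np.allclose(grl_forward(x), x)  # features pass through unchanged
g = grl_backward(np.array([1.0, 1.0, 1.0]))
print(g)  # the domain-loss gradient reaches θ_f with its sign flipped and scaled by λ
```

In an autograd framework this pair of functions would be registered as one custom layer, so that the domain loss pushes θ_f away from domain-discriminative features while leaving the forward computation untouched.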
Further, the update of the network parameters θ_y in S3 and of the network parameters θ_d in S4 is obtained by optimizing the following objective,

E(θ_f, θ_y, θ_d) = (1/n) Σ_{i=1..n} L_y^i(θ_f, θ_y) - λ[ (1/n) Σ_{i=1..n} L_d^i(θ_f, θ_d) + (1/m) Σ_{j=1..m} L_d^j(θ_f, θ_d) ]

(θ̂_f, θ̂_y) = argmin_{θ_f, θ_y} E(θ_f, θ_y, θ̂_d),  θ̂_d = argmax_{θ_d} E(θ̂_f, θ̂_y, θ_d)

wherein θ̂_f, θ̂_y and θ̂_d denote the network parameters determined for each sub-network, n is the number of training samples of the source domain data, and m is the number of training samples of the target domain data.
Compared with the prior art, the invention has the advantages that:
By combining a convolutional neural network with domain adversarial learning and applying the combination to a universal audio steganalysis model, domain-independent steganalysis features can be obtained. This effectively alleviates the performance degradation of the audio steganalysis model caused by the carrier source mismatch problem, and provides a feasible approach for applying audio steganalysis technology in complex Internet big-data forensics scenarios.
Detailed Description
The following detailed description of embodiments of the invention is merely exemplary in nature and is not intended to limit the invention to the precise forms disclosed.
The invention provides an audio steganalysis method based on a convolutional neural network and domain adversarial learning, characterized in that: the network framework corresponding to the method comprises a feature extraction sub-network G_f(·; θ_f), a steganalysis sub-network G_y(·; θ_y) and a carrier source discrimination sub-network G_d(·; θ_d), wherein θ_f, θ_y and θ_d denote the network parameters of the respective sub-networks, and the method comprises the following steps.
S1, inputting the source domain data D_s = {(x_i^s, y_i^s)}_{i=1}^{n}, the target domain data D_t = {x_j^t}_{j=1}^{m}, an adversarial training factor λ, and a learning rate η;
S2, outputting the steganalysis feature vector F through the feature extraction sub-network;
S3, passing the steganalysis feature vector F through the steganalysis sub-network to obtain the binary steganalysis prediction probability ŷ, computing the cross-entropy loss L_y between ŷ and the original steganographic label y, and updating the network parameters θ_y accordingly through back-propagation of the error with a gradient descent algorithm, wherein y ∈ {0,1}: the value y = 0 denotes an original cover and y = 1 denotes a stego cover;
S4, passing the steganalysis feature vector F through the carrier source discrimination sub-network to obtain the carrier source prediction probability d̂, computing the cross-entropy loss L_d between d̂ and the original domain label d, and updating the network parameters θ_d accordingly through back-propagation of the error, wherein d ∈ {0,1}: the value d = 0 denotes the source domain and d = 1 denotes the target domain.
The feature extraction sub-network adaptively extracts features. To alleviate the degradation of steganalysis performance caused by the CSM problem, the output feature vector F must be steganographically discriminative (i.e., yield a correct steganalysis result when fed to the steganalysis sub-network) and also domain-independent to a certain degree (i.e., the feature-space distributions of audio carrier data from different sources remain consistent). By continuously learning the differences in data distribution between original audio samples and steganographic audio samples, the feature extraction network G_f improves the ability of the learned feature F to detect steganographic audio correctly. At the same time, the gradient reversal layer reverses the error gradient in the back-propagation stage to update the network parameters θ_f of G_f, thereby reducing the correlation between the extracted feature F and the source domain of the audio carrier data.
For the network architecture, the detailed architecture parameters of the individual sub-network modules are shown in the following table. As examples of the notation used there: 64×(1×5), ReLU denotes a convolution layer whose parameters are set to a 1×5 kernel with 64 output channels, with the output activated by ReLU; FC-256 denotes a fully connected layer with 256 neurons.
For the feature extraction sub-network G_f, its function is to adaptively extract steganalysis features from the input audio data. In a CNN steganalysis model, setting a reasonable preprocessing layer can improve the steganalysis performance of the network. Therefore, at the beginning of the feature extraction sub-network an audio preprocessing layer based on a differential filtering design is used, consisting of 4 convolution kernels of size 1×5, D1 to D4, with initial weights:
D1 = [1, -1, 0, 0, 0]
D2 = [1, -2, 1, 0, 0]
D3 = [1, -3, 3, 1, 0]
D4 = [1, -4, 6, -4, 1]
The audio preprocessing layer is followed by 4 cascaded convolution group modules. The convolution layers in the 1st convolution group module undergo no nonlinear activation, and the pooling operation is omitted, so as to capture more effectively the weak signal introduced by steganography. The 2nd to 4th convolution group modules each comprise a 1×5 convolution layer, a 1×1 convolution layer and an average pooling layer, and the final average pooling layer of the 4th convolution group module is replaced by a global average pooling (Global Average Pooling) layer to fuse global features. The sub-network finally outputs the 256-dimensional steganalysis feature vector F.
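The effect of the fixed difference kernels can be sketched with an ordinary 1-D correlation in numpy (an illustration using the initial weights quoted above; the helper name `preprocess` and the toy ramp signal are ours):

```python
import numpy as np

# Initial weights of the four 1x5 preprocessing kernels, as given in the specification
D = np.array([
    [1, -1,  0,  0, 0],  # D1: first-order difference
    [1, -2,  1,  0, 0],  # D2: second-order difference
    [1, -3,  3,  1, 0],  # D3 (weights as quoted in the specification)
    [1, -4,  6, -4, 1],  # D4: fourth-order difference
], dtype=float)

def preprocess(audio):
    """Slide each difference kernel over the 1-D audio signal (valid windows only),
    producing one filtered channel per kernel."""
    return np.stack([np.correlate(audio, k, mode="valid") for k in D])

audio = np.arange(16, dtype=float)  # a linear ramp as a toy signal
out = preprocess(audio)             # shape: (4 channels, 16 - 5 + 1 positions)
# On a ramp, the first-order difference is constant and the second-order one vanishes.
print(out[0][:3], out[1][:3])
```

In the actual network these weights are only the initialization of trainable convolution kernels; the sketch shows why they suppress slowly varying audio content and emphasize the high-frequency residual where steganographic changes live.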
For the steganographic classification sub-network G_y, it follows the feature output layer and is structured as a two-layer cascade of fully connected layers (containing 128 and 64 neurons respectively).
For the carrier source discrimination sub-network G_d, its structure is similar to that of the steganographic classification network, the main body likewise consisting of fully connected layers. The difference is that the output feature F of the feature extraction sub-network G_f and the domain discrimination layer of G_d are connected through a gradient reversal layer (Gradient Reversal Layer, GRL).
For the formulas Forward: F(x) = x and Backward: dF(x)/dx = -λI, the smaller λ is, the less important the domain label becomes, and the more domain information the feature vector F extracted by G_f is allowed to contain. When λ is 0, the influence of the domain label is not considered at all, i.e., no transfer is considered; the dependence of the classifier on the source domain data is then strongest. Setting a reasonable λ is therefore also important: when the two domains differ significantly, λ may suitably be made larger.
In the training process of the method, the source domain audio data D_s carry complete steganographic label information, while the target domain audio data D_t carry no steganographic labels. The training process of the whole network can be divided into two parts: 1) a supervised steganalysis network formed by cascading the sub-networks G_f and G_y; 2) a carrier source discrimination process formed by cascading the sub-networks G_f and G_d. The training objectives of the whole network are: training G_y promotes the separability of the feature F in the steganographic space; training G_d discriminates audio data from different sources and extracts domain information; and, at the same time, the adversarial interaction between G_f and G_d eliminates the domain-related information in the feature F extracted by G_f. The training of the whole network is equivalent to solving the following optimization problem:
E(θ_f, θ_y, θ_d) = (1/n) Σ_{i=1..n} L_y^i(θ_f, θ_y) - λ[ (1/n) Σ_{i=1..n} L_d^i(θ_f, θ_d) + (1/m) Σ_{j=1..m} L_d^j(θ_f, θ_d) ]

(θ̂_f, θ̂_y) = argmin_{θ_f, θ_y} E(θ_f, θ_y, θ̂_d),  θ̂_d = argmax_{θ_d} E(θ̂_f, θ̂_y, θ_d)

wherein θ̂_f, θ̂_y and θ̂_d denote the network parameters determined for each sub-network, n is the number of training samples of the source domain data, and m is the number of training samples of the target domain data.
To achieve the above objective, the whole network is trained iteratively: in each iteration, the labeled source data are passed through G_f and G_y to update θ_y and θ_f with the steganalysis loss, while data from both domains are passed through G_f, the gradient reversal layer and G_d to update θ_d with the domain loss and, with the gradient reversed and scaled by λ, to update θ_f, until convergence.
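One iteration of this alternating training can be sketched with scalar stand-ins for the three sub-networks (a toy numpy illustration; all variable names and the toy gradient values are ours and do not come from the specification):

```python
import numpy as np

rng = np.random.default_rng(0)
eta, lam = 0.1, 0.5  # learning rate η and adversarial training factor λ (toy values)

# Scalar parameters standing in for the three sub-networks' weights
theta_f, theta_y, theta_d = rng.normal(size=3)

def train_step(theta_f, theta_y, theta_d, g_y_f, g_y_y, g_d_f, g_d_d):
    """One update with externally supplied toy gradients:
    g_y_* are gradients of the steganalysis loss L_y,
    g_d_* are gradients of the domain loss L_d.
    θ_y and θ_d each descend their own loss; θ_f descends L_y but,
    because of the gradient reversal layer, ascends L_d (factor -λ)."""
    theta_y = theta_y - eta * g_y_y
    theta_d = theta_d - eta * g_d_d
    theta_f = theta_f - eta * (g_y_f - lam * g_d_f)
    return theta_f, theta_y, theta_d

new_f, new_y, new_d = train_step(theta_f, theta_y, theta_d,
                                 g_y_f=0.3, g_y_y=0.2, g_d_f=0.4, g_d_d=0.1)
# θ_f moves against the steganalysis gradient but WITH the domain gradient:
# Δθ_f = -η·(0.3 - λ·0.4) = -0.01
print(new_f - theta_f)
```

The single line updating θ_f is where the saddle-point objective above is realized in practice: the same backward pass that minimizes L_y simultaneously maximizes L_d with weight λ.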
The method effectively alleviates the performance degradation of the audio steganalysis model caused by the carrier source mismatch problem, and provides a feasible approach for applying audio steganalysis technology in complex Internet big-data forensics scenarios.
While embodiments of the invention have been shown and described, it will be understood by those skilled in the art that: many changes, modifications, substitutions and variations may be made to the embodiments without departing from the spirit and principles of the invention, the scope of which is defined by the claims and their equivalents.

Claims (6)

1. An audio steganalysis method based on a convolutional neural network and domain adversarial learning, characterized in that: the network framework corresponding to the method comprises a feature extraction sub-network G_f(·; θ_f), a steganalysis sub-network G_y(·; θ_y) and a carrier source discrimination sub-network G_d(·; θ_d), wherein θ_f, θ_y and θ_d denote the network parameters of the respective sub-networks, and the method comprises:
S1, inputting the source domain data D_s = {(x_i^s, y_i^s)}_{i=1}^{n}, the target domain data D_t = {x_j^t}_{j=1}^{m}, an adversarial training factor λ, and a learning rate η;
S2, outputting the steganalysis feature vector F through the feature extraction sub-network;
S3, passing the steganalysis feature vector F through the steganalysis sub-network to obtain the binary steganalysis prediction probability ŷ, computing the cross-entropy loss L_y between ŷ and the original steganographic label y, and updating the network parameters θ_y accordingly through back-propagation of the error with a gradient descent algorithm, wherein y ∈ {0,1}: the value y = 0 denotes an original cover and y = 1 denotes a stego cover;
S4, passing the steganalysis feature vector F through the carrier source discrimination sub-network to obtain the carrier source prediction probability d̂, computing the cross-entropy loss L_d between d̂ and the original domain label d, and updating the network parameters θ_d accordingly through back-propagation of the error, wherein d ∈ {0,1}: the value d = 0 denotes the source domain and d = 1 denotes the target domain;
training G_y to promote the separability of the feature F in the steganographic space, and training G_d to discriminate audio data from different sources and extract domain information, while eliminating, through the adversarial interaction between G_f and G_d, the domain-related information in the feature F extracted by G_f;
the update of the network parameters θ_y in S3 and of the network parameters θ_d in S4 being obtained by optimizing the following objective,

E(θ_f, θ_y, θ_d) = (1/n) Σ_{i=1..n} L_y^i(θ_f, θ_y) - λ[ (1/n) Σ_{i=1..n} L_d^i(θ_f, θ_d) + (1/m) Σ_{j=1..m} L_d^j(θ_f, θ_d) ]

(θ̂_f, θ̂_y) = argmin_{θ_f, θ_y} E(θ_f, θ_y, θ̂_d),  θ̂_d = argmax_{θ_d} E(θ̂_f, θ̂_y, θ_d)

wherein θ̂_f, θ̂_y and θ̂_d denote the network parameters determined for each sub-network, n is the number of training samples of the source domain data, and m is the number of training samples of the target domain data.
2. The audio steganalysis method based on a convolutional neural network and domain adversarial learning according to claim 1, characterized in that:
the characteristic extraction sub-network in the S2 comprises an audio preprocessing layer and 4 cascaded convolution groups after the audio preprocessing layer, namely a 1 st convolution group, a 2 nd convolution group, a 3 rd convolution group and a 4 th convolution group.
3. The audio steganalysis method based on a convolutional neural network and domain adversarial learning according to claim 2, characterized in that:
the audio preprocessing layer consists of 4 1X5 convolution kernels D1-D4, and initial weights are respectively as follows:
D1=[1,-1,0,0,0],D2=[1,-2,1,0,0],D3=[1,-3,3,1,0],D4=[1,-4,6,-4,1];
the 1 st convolution group includes a 1×1 first convolution layer, a 1×5 second convolution layer, and a 1×1 third convolution layer;
the 2nd convolution group, the 3rd convolution group and the 4th convolution group each comprise a 1×5 convolution layer, a 1×1 convolution layer and an average pooling layer, wherein the average pooling layer of the 4th convolution group is a global average pooling layer;
the steganalysis feature vector is a 256-dimensional vector.
4. The audio steganalysis method based on a convolutional neural network and domain adversarial learning according to claim 2, characterized in that:
the audio preprocessing layer adopts a differential filtering design.
5. The audio steganalysis method based on a convolutional neural network and domain adversarial learning according to claim 1, characterized in that:
the steganalysis sub-network comprises a full-connection layer and a steganographic label prediction layer, wherein the full-connection layer is formed by two layers of cascade connection and comprises 128 neurons and 64 neurons respectively.
6. The audio steganalysis method based on a convolutional neural network and domain adversarial learning according to claim 1, characterized in that:
the carrier source discrimination sub-network comprises a gradient inversion layer, a domain discrimination layer and a domain label prediction layer, wherein the gradient inversion layer keeps constant mapping of input and output data in a forward propagation stage, and gradient values of inversion errors in an error counter propagation stage are respectively expressed as,
Forward:F(x)=x
Backprogation:
wherein F (x) represents an equivalent function formula of the gradient inversion layer, and I is an identity matrix.
CN202010415018.3A 2020-05-15 2020-05-15 Audio steganalysis method based on convolutional neural network and domain countermeasure learning Active CN111768792B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010415018.3A CN111768792B (en) 2020-05-15 2020-05-15 Audio steganalysis method based on convolutional neural network and domain countermeasure learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010415018.3A CN111768792B (en) 2020-05-15 2020-05-15 Audio steganalysis method based on convolutional neural network and domain countermeasure learning

Publications (2)

Publication Number Publication Date
CN111768792A CN111768792A (en) 2020-10-13
CN111768792B true CN111768792B (en) 2024-02-09

Family

ID=72719407

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010415018.3A Active CN111768792B (en) 2020-05-15 2020-05-15 Audio steganalysis method based on convolutional neural network and domain countermeasure learning

Country Status (1)

Country Link
CN (1) CN111768792B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112581346B (en) * 2020-12-24 2023-11-17 深圳大学 Binary image steganography method based on countermeasure network
CN114169462A (en) * 2021-12-14 2022-03-11 四川大学 Feature-guided depth sub-field self-adaptive steganography detection method
CN115457985B (en) * 2022-09-15 2023-04-07 北京邮电大学 Visual audio steganography method based on convolutional neural network

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105469799A (en) * 2015-12-01 2016-04-06 北京科技大学 Audible sound positioning method and system based on hidden channel
CN108923922A (en) * 2018-07-26 2018-11-30 北京工商大学 A kind of text steganography method based on generation confrontation network
WO2019138329A1 (en) * 2018-01-09 2019-07-18 Farm4Trade S.R.L. Method and system, based on the use of deep learning techniques, for the univocal biometric identification of an animal
CN110390941A (en) * 2019-07-01 2019-10-29 清华大学 MP3 audio hidden information analysis method and device based on coefficient correlation model

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190325861A1 (en) * 2018-04-18 2019-10-24 Maneesh Kumar Singh Systems and Methods for Automatic Speech Recognition Using Domain Adaptation Techniques
US11520923B2 (en) * 2018-11-07 2022-12-06 Nec Corporation Privacy-preserving visual recognition via adversarial learning

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105469799A (en) * 2015-12-01 2016-04-06 北京科技大学 Audible sound positioning method and system based on hidden channel
WO2019138329A1 (en) * 2018-01-09 2019-07-18 Farm4Trade S.R.L. Method and system, based on the use of deep learning techniques, for the univocal biometric identification of an animal
CN108923922A (en) * 2018-07-26 2018-11-30 北京工商大学 A kind of text steganography method based on generation confrontation network
CN110390941A (en) * 2019-07-01 2019-10-29 清华大学 MP3 audio hidden information analysis method and device based on coefficient correlation model

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
A novel detection scheme for MP3Stego with low payload; Chao Jin et al.; 2014 IEEE China Summit & International Conference on Signal and Information Processing (ChinaSIP); 2014-07-13; entire document *
CNN-based steganalysis of low-embedding-rate MP3Stego; Zhang Jian et al.; Wireless Communication Technology; 2018-09-15 (No. 03); entire document *

Also Published As

Publication number Publication date
CN111768792A (en) 2020-10-13

Similar Documents

Publication Publication Date Title
CN111768792B (en) Audio steganalysis method based on convolutional neural network and domain countermeasure learning
Chen et al. A deep learning framework for time series classification using Relative Position Matrix and Convolutional Neural Network
CN110048827B (en) Class template attack method based on deep learning convolutional neural network
Vasumathi et al. An effective pomegranate fruit classification based on CNN-LSTM deep learning models
CN111783841A (en) Garbage classification method, system and medium based on transfer learning and model fusion
Pan et al. Intelligent diagnosis of northern corn leaf blight with deep learning model
Peng et al. CNN and transformer framework for insect pest classification
Finjan et al. Arabic handwritten digits recognition based on convolutional neural networks with resnet-34 model
CN113051983B (en) Method for training field crop disease recognition model and field crop disease recognition
CN111144500A (en) Differential privacy deep learning classification method based on analytic Gaussian mechanism
CN115604025B (en) PLI4 DA-based network intrusion detection method
Su et al. Comparative study of ensemble models of deep convolutional neural networks for crop pests classification
Tarek et al. Leveraging three-tier deep learning model for environmental cleaner plants production
Jakhar et al. Classification and Measuring Accuracy of Lenses Using Inception Model V3
Baranidharan et al. An improved inception layer-based convolutional neural network for identifying rice leaf diseases
Lecun et al. Convolutional Neural Networks
Pérez-Bravo et al. Encoding generative adversarial networks for defense against image classification attacks
Pradipkumar et al. Performance analysis of deep learning models for tree species identification from UAV images
Ramadan et al. Wheat Leaf Disease Synthetic Image Generation from Limited Dataset Using GAN
Bedi et al. Artificial Intelligence in Agriculture
Chandra Towards prediction of rapid intensification in tropical cyclones with recurrent neural networks
Sultana et al. Identification of Potato Leaf Diseases Using Hybrid Convolution Neural Network with Support Vector Machine
CN113655341B (en) Fault positioning method and system for power distribution network
Kapoor et al. Bell-Pepper Leaf Bacterial Spot Detection Using AlexNet and VGG-16
Kumari et al. A Survey on Plant Leaf Disease Detection

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20240112

Address after: Chinatelecom tower, No. 19, Chaoyangmen North Street, Dongcheng District, Beijing 100010

Applicant after: Tianyi Safety Technology Co.,Ltd.

Address before: Room 1104, Building A, Zhiyun Industrial Park, No. 13 Huaxing Road, Henglang Community, Dalang Street, Longhua District, Shenzhen City, Guangdong Province, 518109

Applicant before: Shenzhen Hongyue Information Technology Co.,Ltd.

Effective date of registration: 20240112

Address after: Room 1104, Building A, Zhiyun Industrial Park, No. 13 Huaxing Road, Henglang Community, Dalang Street, Longhua District, Shenzhen City, Guangdong Province, 518109

Applicant after: Shenzhen Hongyue Information Technology Co.,Ltd.

Address before: 315211, Fenghua Road, Jiangbei District, Zhejiang, Ningbo 818

Applicant before: Ningbo University

TA01 Transfer of patent application right
GR01 Patent grant
GR01 Patent grant