CN116633809A - Detection method and system based on artificial intelligence - Google Patents

Detection method and system based on artificial intelligence

Info

Publication number
CN116633809A
CN116633809A (application number CN202310762753.5A)
Authority
CN
China
Prior art keywords
deep learning
feature
sample set
sampling
characteristic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310762753.5A
Other languages
Chinese (zh)
Other versions
CN116633809B (en)
Inventor
魏亮
谢玮
魏薇
彭志艺
凌霞
海涵
郑晓玲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Academy of Information and Communications Technology CAICT
Original Assignee
China Academy of Information and Communications Technology CAICT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Academy of Information and Communications Technology CAICT
Priority to CN202310762753.5A
Publication of CN116633809A
Application granted
Publication of CN116633809B
Legal status: Active
Anticipated expiration

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 43/00: Arrangements for monitoring or testing data switching networks
    • H04L 43/04: Processing captured monitoring data, e.g. for logfile generation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/22: Matching criteria, e.g. proximity measures
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/23: Clustering techniques
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/25: Fusion techniques
    • G06F 18/253: Fusion techniques of extracted features
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/0464: Convolutional networks [CNN, ConvNet]
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/12: Fingerprints or palmprints
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/00: Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/16: Arrangements for maintenance, administration or management of data switching networks using machine learning or artificial intelligence
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 63/00: Network architectures or network communication protocols for network security
    • H04L 63/14: Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic
    • H04L 63/1408: Detecting or protecting against malicious traffic by monitoring network traffic

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Computer Security & Cryptography (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Biomedical Technology (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Hardware Design (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The application provides a detection method and system based on artificial intelligence. By extracting features in three different dimensions (deep learning, biological, and temporal) from data packets of different types, all types of network data packets can be covered. Dimension-reducing sampling and sliding-window subsampling are applied to the deep learning features; biological fingerprint matching and secondary sliding-frame sampling are applied to the biological features; and for the temporal features, the sampling window is redefined according to the differentiation result, and the deep learning and biological features are resampled and recombined. Through these steps, clustering via two different routes is used, and the model performs classification more effectively.

Description

Detection method and system based on artificial intelligence
Technical Field
The application relates to the technical field of network security, in particular to a detection method and system based on artificial intelligence.
Background
Existing content detection methods are insufficient to cope with application scenarios involving low-quality mixed data, and a detection method and system that can automatically adjust the extracted features are needed.
There is therefore an urgent need for a targeted detection method and system based on artificial intelligence.
Disclosure of Invention
The application aims to provide a detection method and system based on artificial intelligence that address the problem that existing application scenarios must handle low-quality mixed data.
In a first aspect, the present application provides an artificial intelligence based detection method, the method comprising:
collecting data packets of different types in a network, and extracting the deep learning features, biological features, and temporal features carried in the data packets;
performing primary discretization on the collected deep learning features to obtain a dimension-reduced second deep learning feature data set, vectorizing the second deep learning feature data set, inputting it into an N-layer convolution unit, and outputting a first intermediate result after convolution;
determining the width of a sliding window from the feature-value distribution of the first intermediate result, and performing secondary discretization of the collected deep learning features with the sliding window to obtain a first feature sample set;
performing primary discretized sampling of the collected biological features, forming the sampled values into a sequence, and matching the sequence against a plurality of biological fingerprints stored in advance on a server; for the biological fingerprints that satisfy the matching rule, generating a corresponding plurality of sliding frames, performing secondary in-frame sampling of the biological features with the sliding frames, and recombining the secondarily sampled values to obtain a second feature sample set;
performing difference comparison on the collected temporal features, defining different sampling windows according to the degree of difference, resampling the deep learning features and the biological features with the sampling windows, and recombining the sampled values to obtain a third feature sample set;
clustering the first feature sample set and the second feature sample set with the third feature sample set respectively, feeding the two clustered results into the convolution layer of the recognition model in two passes, selecting local feature components with sliding windows of different sizes, and splicing the local feature components to obtain, in sequence, a first feature matrix and a second feature matrix;
feeding the first feature matrix and the second feature matrix into the pooling layer of the recognition model in chronological order, selecting effective feature values with a chosen pooling function, and splicing again to obtain, in sequence, a third feature matrix and a fourth feature matrix;
comparing the degree of difference between the third feature matrix and the fourth feature matrix; when the degree of difference is less than or equal to a threshold, the result is deemed valid, and the third feature matrix is selected as the object and matched against a reference matrix stored on the server to obtain a classification result;
when the degree of difference is greater than the threshold, the result is deemed invalid and the classification and recognition operation is stopped;
and performing control according to the classification result.
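As a minimal, non-authoritative sketch of the sliding-window secondary discretization step above: the patent does not specify how the window width follows from the feature-value distribution, so the rule below (wider windows for more spread-out values) is purely an assumption, as are all parameter values.

```python
import numpy as np

def secondary_discretize(features, intermediate, min_w=2, max_w=16):
    """Derive a sliding-window width from the value distribution of the
    first intermediate result, then subsample `features` with that window.
    The width rule is a hypothetical stand-in, not the patent's method."""
    spread = np.std(np.asarray(intermediate, dtype=float))
    # Assumed rule: wider window when intermediate values are spread out.
    width = int(np.clip(round(spread * 4), min_w, max_w))
    # Non-overlapping windows; keep each window's mean as one sample.
    n = len(features) // width
    windows = np.asarray(features[: n * width], dtype=float).reshape(n, width)
    return width, windows.mean(axis=1)  # (window width, first feature sample set)
```

For example, `secondary_discretize(raw_values, intermediate_result)` yields the window width actually used together with the subsampled first feature sample set.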
In a second aspect, the present application provides an artificial intelligence based detection system, the system comprising:
an acquisition unit, configured to collect data packets of different types in a network and extract the deep learning features, biological features, and temporal features carried in the data packets;
a first feature extraction unit, configured to perform primary discretization on the collected deep learning features to obtain a dimension-reduced second deep learning feature data set, vectorize the second deep learning feature data set, input it into an N-layer convolution unit, and output a first intermediate result after convolution;
and further configured to determine the width of a sliding window from the feature-value distribution of the first intermediate result, and to perform secondary discretization of the collected deep learning features with the sliding window to obtain a first feature sample set;
a second feature extraction unit, configured to perform primary discretized sampling of the collected biological features, form the sampled values into a sequence, and match the sequence against a plurality of biological fingerprints stored in advance on a server; for the biological fingerprints that satisfy the matching rule, it generates a corresponding plurality of sliding frames, performs secondary in-frame sampling of the biological features with the sliding frames, and recombines the secondarily sampled values to obtain a second feature sample set;
a third feature extraction unit, configured to perform difference comparison on the collected temporal features, define different sampling windows according to the degree of difference, resample the deep learning features and the biological features with the sampling windows, and recombine the sampled values to obtain a third feature sample set;
a fusion unit, configured to cluster the first feature sample set and the second feature sample set with the third feature sample set respectively, feed the two clustered results into the convolution layer of the recognition model in two passes, select local feature components with sliding windows of different sizes, and splice the local feature components to obtain, in sequence, a first feature matrix and a second feature matrix;
and further configured to feed the first feature matrix and the second feature matrix into the pooling layer of the recognition model in chronological order, select effective feature values with a chosen pooling function, and splice again to obtain, in sequence, a third feature matrix and a fourth feature matrix;
a classification unit, configured to compare the degree of difference between the third feature matrix and the fourth feature matrix; when the degree of difference is less than or equal to a threshold, the result is deemed valid, and the third feature matrix is selected as the object and matched against a reference matrix stored on the server to obtain a classification result;
when the degree of difference is greater than the threshold, the result is deemed invalid and the classification and recognition operation is stopped;
and an execution unit, configured to perform control according to the classification result.
In a third aspect, the present application provides an artificial intelligence based detection system comprising a processor and a memory:
the memory is used for storing program codes and transmitting the program codes to the processor;
the processor is configured to perform the method according to any one of the embodiments of the first aspect according to instructions in the program code.
In a fourth aspect, the present application provides a computer readable storage medium for storing program code for performing the method of any one of the embodiments of the first aspect.
Advantageous effects
The application provides a detection method and system based on artificial intelligence. By extracting features in three different dimensions (deep learning, biological, and temporal) from data packets of different types, all types of network data packets can be covered. Dimension-reducing sampling and sliding-window subsampling are applied to the deep learning features; biological fingerprint matching and secondary sliding-frame sampling are applied to the biological features; and for the temporal features, the sampling window is redefined according to the differentiation result, and the deep learning and biological features are resampled and recombined. Through these steps, clustering via two different routes is used and the model performs classification more effectively, overcoming the shortcomings of the prior art when facing low-quality mixed-data scenarios in a network and realizing automatic adjustment of detection.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the embodiments will be briefly described below, and it will be obvious to those skilled in the art that other drawings can be obtained from these drawings without inventive effort.
FIG. 1 is a schematic flow chart of an artificial intelligence based detection method of the present application;
FIG. 2 is a block diagram of an artificial intelligence based detection system of the present application.
Detailed Description
The preferred embodiments of the present application are described in detail below with reference to the accompanying drawings, so that the advantages and features of the present application can be more readily understood by those skilled in the art and the scope of protection of the present application is thereby clearly defined.
FIG. 1 is a schematic flow chart of an artificial intelligence based detection method according to the present application, the method comprising:
collecting data packets of different types in a network, and extracting the deep learning features, biological features, and temporal features carried in the data packets;
performing primary discretization on the collected deep learning features to obtain a dimension-reduced second deep learning feature data set, vectorizing the second deep learning feature data set, inputting it into an N-layer convolution unit, and outputting a first intermediate result after convolution;
determining the width of a sliding window from the feature-value distribution of the first intermediate result, and performing secondary discretization of the collected deep learning features with the sliding window to obtain a first feature sample set;
performing primary discretized sampling of the collected biological features, forming the sampled values into a sequence, and matching the sequence against a plurality of biological fingerprints stored in advance on a server; for the biological fingerprints that satisfy the matching rule, generating a corresponding plurality of sliding frames, performing secondary in-frame sampling of the biological features with the sliding frames, and recombining the secondarily sampled values to obtain a second feature sample set;
performing difference comparison on the collected temporal features, defining different sampling windows according to the degree of difference, resampling the deep learning features and the biological features with the sampling windows, and recombining the sampled values to obtain a third feature sample set;
clustering the first feature sample set and the second feature sample set with the third feature sample set respectively, feeding the two clustered results into the convolution layer of the recognition model in two passes, selecting local feature components with sliding windows of different sizes, and splicing the local feature components to obtain, in sequence, a first feature matrix and a second feature matrix;
feeding the first feature matrix and the second feature matrix into the pooling layer of the recognition model in chronological order, selecting effective feature values with a chosen pooling function, and splicing again to obtain, in sequence, a third feature matrix and a fourth feature matrix;
comparing the degree of difference between the third feature matrix and the fourth feature matrix; when the degree of difference is less than or equal to a threshold, the result is deemed valid, and the third feature matrix is selected as the object and matched against a reference matrix stored on the server to obtain a classification result;
when the degree of difference is greater than the threshold, the result is deemed invalid and the classification and recognition operation is stopped;
and performing control according to the classification result.
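The final validity check and matching steps can be sketched as follows. The difference metric (a normalized Frobenius distance), the threshold value, and nearest-reference matching are illustrative assumptions; the patent does not define these computations.

```python
import numpy as np

def classify(m3, m4, references, threshold=0.2):
    """Compare the third and fourth feature matrices; if the degree of
    difference is within the threshold, match the third matrix against the
    stored reference matrices. Metric and threshold are assumptions."""
    m3, m4 = np.asarray(m3, float), np.asarray(m4, float)
    diff = np.linalg.norm(m3 - m4) / (np.linalg.norm(m3) + np.linalg.norm(m4) + 1e-12)
    if diff > threshold:
        return None  # deemed invalid: classification stops
    # Nearest stored reference matrix determines the class.
    return min(references, key=lambda lbl: np.linalg.norm(m3 - references[lbl]))
```

A `None` result corresponds to the "invalid, stop" branch; any returned label corresponds to a classification result that the execution unit can act on.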
Wherein, when a certain feature is found to be absent (or its extraction fails) while processing the network data packets, for example when the biological feature is missing from among the deep learning, biological, and temporal features, the subsequent algorithm automatically applies the computation method of the biological feature to the deep learning features to obtain the corresponding feature sample set, thereby compensating for the shortfall of low-quality, mixed data.
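A minimal sketch of this fallback; `bio_method` is a hypothetical callable standing in for the biological-feature computation described above.

```python
def second_sample_set(bio_features, deep_features, bio_method):
    """When biological features are absent (or their extraction failed),
    apply the biological-feature computation to the deep learning features
    instead, so a second feature sample set is still produced."""
    source = bio_features if bio_features is not None else deep_features
    return bio_method(source)
```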
In some preferred embodiments, when the recognition model is trained, the entropy loss function is minimized by back-propagation while avoiding oversaturation; when the accuracy of the recognition model meets the threshold requirement, training of the recognition model is complete, and the model can then be used for data verification.
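As a toy illustration of this training criterion, the sketch below minimizes a cross-entropy loss by gradient descent on a single logistic unit (a stand-in for back-propagation through the full recognition model) and stops once accuracy meets the threshold. The model form, learning rate, and threshold are assumptions.

```python
import numpy as np

def train(w0, X, y, lr=0.1, acc_threshold=0.95, max_epochs=500):
    """Gradient descent on cross-entropy loss; stop early when accuracy
    reaches the threshold, signalling that training is complete."""
    w = np.asarray(w0, dtype=float).copy()
    acc = 0.0
    for _ in range(max_epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))      # sigmoid prediction
        acc = float(np.mean((p > 0.5) == y))  # current accuracy
        if acc >= acc_threshold:              # threshold met: done
            break
        w -= lr * X.T @ (p - y) / len(y)      # cross-entropy gradient step
    return w, acc
```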
In some preferred embodiments, the deep learning features correspond to the type of the data packet, and the deep learning features to be extracted are determined according to the type of the data packet.
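This correspondence can be sketched as a simple lookup; all packet types and feature names below are illustrative assumptions, not taken from the patent.

```python
# Hypothetical mapping from packet type to the deep learning features to extract.
FEATURES_BY_TYPE = {
    "http":  ["payload_ngrams", "header_lengths"],
    "video": ["frame_sizes", "bitrate_profile"],
    "voip":  ["packet_gaps", "codec_hint"],
}

def features_to_extract(packet_type):
    """Determine which deep learning features to extract from the packet
    type; unknown types fall back to a generic set (an assumption)."""
    return FEATURES_BY_TYPE.get(packet_type, ["payload_ngrams"])
```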
In some preferred embodiments, the biological features include multimedia information in the data packet relating to a person's facial activity and physiological characteristics.
FIG. 2 is a block diagram of an artificial intelligence based detection system according to the present application, the system comprising:
an acquisition unit, configured to collect data packets of different types in a network and extract the deep learning features, biological features, and temporal features carried in the data packets;
a first feature extraction unit, configured to perform primary discretization on the collected deep learning features to obtain a dimension-reduced second deep learning feature data set, vectorize the second deep learning feature data set, input it into an N-layer convolution unit, and output a first intermediate result after convolution;
and further configured to determine the width of a sliding window from the feature-value distribution of the first intermediate result, and to perform secondary discretization of the collected deep learning features with the sliding window to obtain a first feature sample set;
a second feature extraction unit, configured to perform primary discretized sampling of the collected biological features, form the sampled values into a sequence, and match the sequence against a plurality of biological fingerprints stored in advance on a server; for the biological fingerprints that satisfy the matching rule, it generates a corresponding plurality of sliding frames, performs secondary in-frame sampling of the biological features with the sliding frames, and recombines the secondarily sampled values to obtain a second feature sample set;
a third feature extraction unit, configured to perform difference comparison on the collected temporal features, define different sampling windows according to the degree of difference, resample the deep learning features and the biological features with the sampling windows, and recombine the sampled values to obtain a third feature sample set;
a fusion unit, configured to cluster the first feature sample set and the second feature sample set with the third feature sample set respectively, feed the two clustered results into the convolution layer of the recognition model in two passes, select local feature components with sliding windows of different sizes, and splice the local feature components to obtain, in sequence, a first feature matrix and a second feature matrix;
and further configured to feed the first feature matrix and the second feature matrix into the pooling layer of the recognition model in chronological order, select effective feature values with a chosen pooling function, and splice again to obtain, in sequence, a third feature matrix and a fourth feature matrix;
a classification unit, configured to compare the degree of difference between the third feature matrix and the fourth feature matrix; when the degree of difference is less than or equal to a threshold, the result is deemed valid, and the third feature matrix is selected as the object and matched against a reference matrix stored on the server to obtain a classification result;
when the degree of difference is greater than the threshold, the result is deemed invalid and the classification and recognition operation is stopped;
and an execution unit, configured to perform control according to the classification result.
The application further provides an artificial intelligence based detection system, which includes a processor and a memory:
the memory is used for storing program codes and transmitting the program codes to the processor;
the processor is configured to perform the method according to any of the embodiments of the first aspect according to instructions in the program code.
The present application provides a computer readable storage medium for storing program code for performing the method of any one of the embodiments of the first aspect.
In a specific implementation, the present application also provides a computer storage medium, where the computer storage medium may store a program that, when executed, may include some or all of the steps of the various embodiments of the present application. The storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), or a random access memory (RAM).
It will be apparent to those skilled in the art that the techniques of the embodiments of the present application may be implemented by software plus a necessary general-purpose hardware platform. Based on this understanding, the technical solutions of the embodiments of the present application, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disk) that includes several instructions causing a computer device (which may be a personal computer, a server, a network device, etc.) to execute the methods described in the embodiments, or in some parts of the embodiments, of the present application.
For identical or similar parts among the various embodiments of this description, reference may be made from one embodiment to another. In particular, the system embodiments are described relatively briefly since they are substantially similar to the method embodiments; for relevant details, refer to the description of the method embodiments.
The embodiments of the present application described above do not limit the scope of the present application.

Claims (7)

1. An artificial intelligence based detection method, the method comprising:
collecting data packets of different types in a network, and extracting the deep learning features, biological features, and temporal features carried in the data packets;
performing primary discretization on the collected deep learning features to obtain a dimension-reduced second deep learning feature data set, vectorizing the second deep learning feature data set, inputting it into an N-layer convolution unit, and outputting a first intermediate result after convolution;
determining the width of a sliding window from the feature-value distribution of the first intermediate result, and performing secondary discretization of the collected deep learning features with the sliding window to obtain a first feature sample set;
performing primary discretized sampling of the collected biological features, forming the sampled values into a sequence, and matching the sequence against a plurality of biological fingerprints stored in advance on a server; for the biological fingerprints that satisfy the matching rule, generating a corresponding plurality of sliding frames, performing secondary in-frame sampling of the biological features with the sliding frames, and recombining the secondarily sampled values to obtain a second feature sample set;
performing difference comparison on the collected temporal features, defining different sampling windows according to the degree of difference, resampling the deep learning features and the biological features with the sampling windows, and recombining the sampled values to obtain a third feature sample set;
clustering the first feature sample set and the second feature sample set with the third feature sample set respectively, feeding the two clustered results into the convolution layer of the recognition model in two passes, selecting local feature components with sliding windows of different sizes, and splicing the local feature components to obtain, in sequence, a first feature matrix and a second feature matrix;
feeding the first feature matrix and the second feature matrix into the pooling layer of the recognition model in chronological order, selecting effective feature values with a chosen pooling function, and splicing again to obtain, in sequence, a third feature matrix and a fourth feature matrix;
comparing the degree of difference between the third feature matrix and the fourth feature matrix; when the degree of difference is less than or equal to a threshold, the result is deemed valid, and the third feature matrix is selected as the object and matched against a reference matrix stored on the server to obtain a classification result;
when the degree of difference is greater than the threshold, the result is deemed invalid and the classification and recognition operation is stopped;
and performing control according to the classification result.
2. The method according to claim 1, characterized in that: when the recognition model is trained, the entropy loss function is minimized by back-propagation while avoiding oversaturation, and when the accuracy of the recognition model meets the threshold requirement, training of the recognition model is complete.
3. The method according to claim 1, characterized in that: the deep learning features correspond to the type of the data packet, and the deep learning features to be extracted are determined according to the type of the data packet.
4. The method according to claim 2 or 3, characterized in that: the biological features include multimedia information in the data packet relating to a person's facial activity and physiological characteristics.
5. An artificial intelligence based detection system, the system comprising:
the acquisition unit is used for acquiring data packets of different types in the network and extracting deep learning characteristics, biological characteristics and time characteristics carried in the data packets;
the first feature extraction unit is used for carrying out primary discretization on the acquired deep learning features to obtain a second deep learning feature data set after dimension reduction, vectorizing the second deep learning feature data set, inputting the second deep learning feature data set into the N-layer convolution unit, and outputting a first intermediate result after convolution;
determining the width of a sliding window according to the characteristic value distribution of the first intermediate result, and performing secondary discretization on the acquired deep learning characteristics by using the sliding window to obtain a first characteristic sample set;
the second feature extraction unit is used for performing primary discretized sampling on the acquired biometric features and forming the sampled values into a sequence; the sequence is matched against a plurality of biometric fingerprints pre-stored on the server, each biometric fingerprint that satisfies the matching rule generates a corresponding sliding frame, the sliding frames are used to perform secondary in-frame sampling of the biometric features, and the values obtained by the secondary in-frame sampling are recombined to obtain a second feature sample set;
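The fingerprint-gated sampling described for the second feature extraction unit might look as follows; the element-wise tolerance rule and the `(start, width)` frame encoding are hypothetical, introduced only for illustration:

```python
def first_pass_sample(bio, step):
    """Primary discretized sampling: take every `step`-th value."""
    return bio[::step]

def match_fingerprints(sequence, fingerprints, tol=0.1):
    """Keep fingerprints whose stored sequence is element-wise
    within `tol` of the sampled sequence (an assumed matching rule)."""
    return [fp for fp in fingerprints
            if len(fp["seq"]) == len(sequence)
            and all(abs(a - b) <= tol for a, b in zip(fp["seq"], sequence))]

def second_pass_sample(bio, matched):
    """Secondary in-frame sampling: each matched fingerprint contributes
    one sliding frame (start, width); recombine the framed values."""
    sample_set = []
    for fp in matched:
        start, width = fp["frame"]
        sample_set.extend(bio[start:start + width])
    return sample_set

bio = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6]
seq = first_pass_sample(bio, 3)                       # [0.1, 0.4]
fingerprints = [{"seq": [0.1, 0.4], "frame": (1, 2)},
                {"seq": [0.9, 0.9], "frame": (0, 3)}]
matched = match_fingerprints(seq, fingerprints)       # only the first matches
second_set = second_pass_sample(bio, matched)         # [0.2, 0.3]
```

Only fingerprints that pass the match contribute frames, so the second feature sample set is empty when nothing matches.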
the third feature extraction unit is used for performing difference comparison on the acquired time features, defining different sampling windows according to the degree of difference, re-sampling the deep learning features and the biometric features with these sampling windows, and recombining the sampled values to obtain a third feature sample set;
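The time-feature step can be illustrated as below; the mapping from the spread of inter-arrival time differences to a window size, and keeping the first value of each window, are illustrative assumptions:

```python
def sampling_window(time_feats):
    """Map the spread of successive time differences to a window size:
    a near-uniform series -> a small window (an assumed rule)."""
    diffs = [abs(b - a) for a, b in zip(time_feats, time_feats[1:])]
    spread = max(diffs) - min(diffs) if diffs else 0
    return 2 if spread < 1.0 else 4

def resample(features, window):
    """Re-sample a feature list: keep the first value of each window."""
    return features[::window]

times = [0.0, 0.1, 0.2, 0.35]
w = sampling_window(times)    # -> 2 for this near-uniform series
# re-sample both the deep learning and the biometric features, then recombine
third_sample_set = resample([1, 2, 3, 4, 5, 6], w) + resample([7, 8, 9, 10], w)
```

The recombined list mixes samples from both feature families under one time-derived window, which is the essence of the claim's third sample set.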
the fusion unit is used for clustering the first feature sample set and the second feature sample set with the third feature sample set respectively, feeding the two clustered sets into the convolution layer of the recognition model in turn, selecting local feature components with sliding windows of different sizes, and splicing them to obtain a first feature matrix and a second feature matrix in sequence;
the first feature matrix and the second feature matrix are fed into the pooling layer of the recognition model in chronological order, effective feature values are selected by the chosen pooling function, and after splicing again a third feature matrix and a fourth feature matrix are obtained in sequence;
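The multi-width window selection and the pooling stage might be sketched as below; using `max` both to pick local components and as the pooling function is an assumption, as the claim only says a pooling function "selects effective feature values":

```python
def local_components(features, widths):
    """Select local feature components with sliding windows of
    different sizes and splice them into one vector."""
    spliced = []
    for w in widths:
        spliced.extend(max(features[i:i + w])
                       for i in range(len(features) - w + 1))
    return spliced

def max_pool(features, width):
    """Pooling layer: keep the strongest value in each stride-`width`
    window ('selecting effective feature values')."""
    return [max(features[i:i + width])
            for i in range(0, len(features), width)]

fused = [0.2, 0.9, 0.1, 0.4, 0.7, 0.3]
first_matrix = local_components(fused, widths=[2, 3])  # spliced components
third_matrix = max_pool(first_matrix, 2)               # pooled, re-spliced
```

The second and fourth feature matrices would be produced the same way from the other clustered sample set, which is why the claim processes the two sets "in turn".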
the classification unit is used for comparing the degree of difference between the third feature matrix and the fourth feature matrix; when the degree of difference is less than or equal to a threshold, the comparison is deemed valid, the third feature matrix is selected as the object, and it is matched against a reference matrix stored on a server to obtain a classification result;
when the degree of difference is greater than the threshold, the comparison is deemed invalid and the classification and recognition operation is stopped;
and the execution unit is used for performing control according to the classification result.
6. An artificial intelligence based detection system, the system comprising a processor and a memory:
the memory is used for storing program code and transmitting the program code to the processor;
the processor is configured to perform the method of any one of claims 1 to 4 according to the instructions in the program code.
7. A computer-readable storage medium, characterized in that the computer-readable storage medium is used to store program code for performing the method of any one of claims 1 to 4.
CN202310762753.5A 2023-06-26 2023-06-26 Detection method and system based on artificial intelligence Active CN116633809B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310762753.5A CN116633809B (en) 2023-06-26 2023-06-26 Detection method and system based on artificial intelligence


Publications (2)

Publication Number Publication Date
CN116633809A true CN116633809A (en) 2023-08-22
CN116633809B CN116633809B (en) 2024-01-23

Family

ID=87617221

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310762753.5A Active CN116633809B (en) 2023-06-26 2023-06-26 Detection method and system based on artificial intelligence

Country Status (1)

Country Link
CN (1) CN116633809B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020159439A1 (en) * 2019-01-29 2020-08-06 Singapore Telecommunications Limited System and method for network anomaly detection and analysis
CN112464713A (en) * 2020-10-21 2021-03-09 安徽农业大学 Communication radiation source radio frequency fingerprint identification method based on deep learning
CN112566174A (en) * 2020-12-02 2021-03-26 中国电子科技集团公司第五十二研究所 Abnormal I/Q signal identification method and system based on deep learning
CN114915575A (en) * 2022-06-02 2022-08-16 电子科技大学 Network flow detection device based on artificial intelligence
CN116150651A (en) * 2022-12-13 2023-05-23 天津市国瑞数码安全系统股份有限公司 AI-based depth synthesis detection method and system
US20230171266A1 (en) * 2021-11-26 2023-06-01 At&T Intellectual Property Ii, L.P. Method and system for predicting cyber threats using deep artificial intelligence (ai)-driven analytics




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant