CN117011766B - Artificial intelligence detection method and system based on intra-frame differentiation - Google Patents
Artificial intelligence detection method and system based on intra-frame differentiation
- Publication number
- CN117011766B (application CN202310926811.3A)
- Authority
- CN
- China
- Prior art keywords
- frame
- intra
- difference value
- convolution
- video data
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/41—Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
- G06V10/443—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
- G06V10/449—Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters
- G06V10/451—Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters with interaction between the filter responses, e.g. cortical complex cells
- G06V10/454—Integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/774—Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/46—Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/07—Target detection
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Abstract
The invention provides an artificial intelligence detection method and system based on intra-frame differentiation. Accurate detection of AI behavior is achieved by training a deep learning model on differentiated data within video frames. The intra-frame differentiated data are obtained through an intra-frame network, which extracts intra-frame features by iterating difference values across multi-level convolution layers; the Euclidean distance and a contrastive loss function between the intra-frame features are then calculated to obtain an image difference value. This overcomes the limited accuracy and the inconsistent performance on dynamic and static targets of the prior art, and realizes fast and accurate detection.
Description
Technical Field
The application relates to the technical field of network security, in particular to an artificial intelligence detection method and system based on intra-frame differentiation.
Background
With the rapid development of artificial intelligence technology, AI is increasingly widely used in fields such as video analysis, security monitoring, and intelligent transportation. However, because AI may be misused when performing tasks, accurately detecting AI behavior is particularly important.
In the prior art, the detection of AI is mainly realized by image processing and computer vision techniques; however, these methods have limited accuracy and perform inconsistently on dynamic and static targets.
Disclosure of Invention
The invention aims to provide an artificial intelligence detection method and system based on intra-frame differentiation, which realize accurate detection of AI by training a deep learning model. The system comprises a video acquisition module, a differential data processing module, a deep learning module and a detection output module.
In a first aspect, the present application provides an artificial intelligence detection method based on intra-frame differentiation, the method comprising:
step one, acquiring video data through video acquisition equipment, and respectively inputting the video data into the subsequent step two and step four;
step two, inputting the received video data into an intra-frame network, extracting the intra-frame features of each frame of image, and calculating image differentiation data between adjacent frames according to the intra-frame features;
the intra-frame network in step two comprises multiple convolution layers: after the first-stage convolution layer receives the video data, it performs a convolution operation to obtain a first convolution value, calculates the difference between the first convolution value and the video data, outputs a first difference value, and transmits the first difference value to the next-stage convolution layer; the next-stage convolution layer receives the first difference value, performs a convolution operation to obtain a second convolution value, calculates the difference between the second convolution value and the first difference value, outputs a second difference value, and transmits it to the following convolution layer; this operation is repeated until the data reach the final-stage convolution layer, which performs a convolution operation to obtain a final-stage convolution value; the sum of the final-stage convolution value and the original video data is then calculated and output as the intra-frame features of the video data;
the calculating of the image differentiation data between adjacent frames according to the intra-frame features comprises: calculating the Euclidean distance between the intra-frame features of the current frame and those of the adjacent frames, and then calculating a contrastive loss function between the current frame and the adjacent frames from the Euclidean distance to obtain the image difference value between the adjacent frames;
step three, training a deep learning model by utilizing the image differentiation data between the adjacent frames;
and step four, inputting the received video data into the trained deep learning model, detecting in real time, and outputting alarm information if AI behaviors are detected.
In a second aspect, the present application provides an artificial intelligence detection system based on intra-frame differentiation, the system comprising: a video acquisition module, a differential data processing module, a deep learning module and a detection output module;
the video acquisition module is used for acquiring video data through video acquisition equipment and respectively inputting the video data into the subsequent differential data processing module and the detection output module;
the differential data processing module is used for inputting the received video data into an intra-frame network, extracting the intra-frame characteristic of each frame of image and calculating image differential data between adjacent frames according to the intra-frame characteristic;
the intra-frame network comprises multiple convolution layers: after the first-stage convolution layer receives the video data, it performs a convolution operation to obtain a first convolution value, calculates the difference between the first convolution value and the video data, outputs a first difference value, and transmits the first difference value to the next-stage convolution layer; the next-stage convolution layer receives the first difference value, performs a convolution operation to obtain a second convolution value, calculates the difference between the second convolution value and the first difference value, outputs a second difference value, and transmits it to the following convolution layer; this operation is repeated until the data reach the final-stage convolution layer, which performs a convolution operation to obtain a final-stage convolution value; the sum of the final-stage convolution value and the original video data is then calculated and output as the intra-frame features of the video data;
the calculating of the image differentiation data between adjacent frames according to the intra-frame features comprises: calculating the Euclidean distance between the intra-frame features of the current frame and those of the adjacent frames, and then calculating a contrastive loss function between the current frame and the adjacent frames from the Euclidean distance to obtain the image difference value between the adjacent frames;
the deep learning module is used for training a deep learning model by utilizing image differentiation data between the adjacent frames;
and the detection output module is used for inputting the received video data into the trained deep learning model to perform real-time detection, and outputting alarm information if AI behaviors are detected.
In a third aspect, the present application provides an intra-frame differentiation-based artificial intelligence detection system comprising a processor and a memory:
the memory is used for storing program codes and transmitting the program codes to the processor;
the processor is configured to perform the method according to any one of the possible implementations of the first aspect according to instructions in the program code.
In a fourth aspect, the present application provides a computer readable storage medium for storing program code for performing the method of any one of the possible implementations of the first aspect.
Advantageous effects
The invention provides an artificial intelligence detection method and system based on intra-frame differentiation. Accurate detection of AI behavior is achieved by training a deep learning model on differentiated data within video frames. The intra-frame differentiated data are obtained through an intra-frame network, which extracts intra-frame features by iterating difference values across multi-level convolution layers; the Euclidean distance and a contrastive loss function between the intra-frame features are then calculated to obtain an image difference value. This overcomes the limited accuracy and the inconsistent performance on dynamic and static targets of the prior art, and realizes fast and accurate detection.
The invention has the advantages that:
high accuracy: through training and application of the deep learning model, the AI can be accurately detected, and the accuracy of the traditional image processing and computer vision methods is improved.
Effective for both dynamic and static targets: the system can effectively detect both dynamic and static targets by processing intra-frame differentiated data.
Real-time performance: the system detects video data in real time, can discover and output AI behaviors in time, and has practical application value.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the embodiments will be briefly described below, and it will be obvious to those skilled in the art that other drawings can be obtained from these drawings without inventive effort.
FIG. 1 is a schematic flow chart of an artificial intelligence detection method based on intra-frame differentiation according to the present invention;
FIG. 2 is a block diagram of an artificial intelligence detection system based on intra-frame differentiation according to the present invention.
Detailed Description
The preferred embodiments of the present invention will be described in detail below with reference to the accompanying drawings, so that the advantages and features of the present invention can be more easily understood by those skilled in the art and the scope of the present invention is thereby clearly defined.
FIG. 1 is a general flow chart of an intra-frame differentiation-based artificial intelligence detection method provided herein, the method comprising:
step one, collecting video data through video acquisition equipment such as cameras and monitoring devices, and inputting the video data into the subsequent step two and step four respectively;
step two, inputting the received video data into an intra-frame network, extracting the intra-frame features of each frame of image, and calculating image differentiation data between adjacent frames according to the intra-frame features so as to highlight the behavior characteristics of AI;
the intra-frame network in step two comprises multiple convolution layers: after the first-stage convolution layer receives the video data, it performs a convolution operation to obtain a first convolution value, calculates the difference between the first convolution value and the video data, outputs a first difference value, and transmits the first difference value to the next-stage convolution layer; the next-stage convolution layer receives the first difference value, performs a convolution operation to obtain a second convolution value, calculates the difference between the second convolution value and the first difference value, outputs a second difference value, and transmits it to the following convolution layer; this operation is repeated until the data reach the final-stage convolution layer, which performs a convolution operation to obtain a final-stage convolution value; the sum of the final-stage convolution value and the original video data is then calculated and output as the intra-frame features of the video data;
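By way of illustration, a minimal sketch of such an intra-frame network is given below in PyTorch. The channel count, the number of stages, and the kernel size are assumptions for the example, not values fixed by this disclosure.

```python
import torch
import torch.nn as nn

class IntraFrameNetwork(nn.Module):
    """Multi-level convolution with iterated difference values.

    Every stage except the last outputs the difference between its
    convolution value and its input; the final stage adds its
    convolution value back to the original frame, yielding the
    intra-frame feature. Depth and channel count are illustrative.
    """

    def __init__(self, channels: int = 3, num_stages: int = 4):
        super().__init__()
        self.stages = nn.ModuleList(
            [nn.Conv2d(channels, channels, kernel_size=3, padding=1)
             for _ in range(num_stages)]
        )

    def forward(self, frame: torch.Tensor) -> torch.Tensor:
        x = frame
        for conv in self.stages[:-1]:
            x = conv(x) - x                 # difference value of this stage
        return self.stages[-1](x) + frame   # final convolution value + video data

frames = torch.randn(2, 3, 224, 224)    # a batch of two RGB frames
features = IntraFrameNetwork()(frames)  # intra-frame features, same shape
```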
the calculating of the image differentiation data between adjacent frames according to the intra-frame features comprises: calculating the Euclidean distance between the intra-frame features of the current frame and those of the adjacent frames, and then calculating a contrastive loss function between the current frame and the adjacent frames from the Euclidean distance to obtain the image difference value between the adjacent frames;
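The following sketch shows one way to read this computation, using the standard contrastive loss; the margin value and the pair label (whether two adjacent frames are expected to be similar) are assumptions supplied by the application, not values given in this disclosure.

```python
import torch
import torch.nn.functional as F

def image_difference_value(feat_a: torch.Tensor,
                           feat_b: torch.Tensor,
                           label: torch.Tensor,
                           margin: float = 1.0) -> torch.Tensor:
    """Contrastive loss between the intra-frame features of two
    adjacent frames, driven by their Euclidean distance.

    label = 1 marks pairs expected to match, label = 0 pairs
    expected to differ; the margin of 1.0 is an assumption.
    """
    # Euclidean distance between the flattened intra-frame features.
    d = F.pairwise_distance(feat_a.flatten(1), feat_b.flatten(1))
    # Classic contrastive loss: similar pairs are pulled together,
    # dissimilar pairs are pushed apart up to the margin.
    loss = label * d.pow(2) + (1 - label) * F.relu(margin - d).pow(2)
    return loss.mean()
```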
step three, training a deep learning model by utilizing the image differentiation data between the adjacent frames; model training employs convolutional neural networks (CNNs) or recurrent neural networks (RNNs) to achieve accurate identification of AI behavior;
And step four, inputting the received video data into the trained deep learning model, detecting in real time, and outputting alarm information if AI behaviors are detected.
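Steps three and four leave the detector architecture open (a CNN or an RNN); the sketch below assumes a small CNN classifier over per-frame difference maps, and the 0.5 threshold and the alarm message are invented for illustration only.

```python
import torch
import torch.nn as nn

# Illustrative detector over image-differentiation data; the
# architecture, threshold, and alarm message are assumptions.
detector = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 1), nn.Sigmoid(),          # P(AI behavior)
)
optimizer = torch.optim.Adam(detector.parameters(), lr=1e-3)
criterion = nn.BCELoss()

def train_step(diff_batch: torch.Tensor, labels: torch.Tensor) -> float:
    """One back-propagation step on a batch of difference data (step three)."""
    optimizer.zero_grad()
    loss = criterion(detector(diff_batch).squeeze(1), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

def detect(diff_frame: torch.Tensor, threshold: float = 0.5) -> None:
    """Real-time detection on a single difference frame (step four)."""
    with torch.no_grad():
        score = detector(diff_frame.unsqueeze(0)).item()
    if score > threshold:
        print(f"ALARM: AI behavior detected (score={score:.2f})")
```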
On the basis of the above embodiments, the training process of the deep learning model can be further optimized. Techniques such as transfer learning can be adopted: a pre-trained model is used as the base network and adapted to the application scenario of this method by fine-tuning.
In addition, advanced techniques such as an attention mechanism can be introduced to improve the model's accuracy in identifying AI behavior. The attention mechanism performs fine-grained recognition of the key regions of the target object, improving the capture of texture detail and the suppression of noise in the image.
The attention mechanism mainly comprises the following steps, sketched in code after the list:
Calculating an input representation vector: first, the input information is represented as a vector or matrix, which may be computed by a neural network layer or obtained through preprocessing.
Calculating attention weights: next, an attention mechanism computes weights for the different parts of the input, typically by measuring the correlation between each part of the input and the information of current interest.
Calculating a weighted sum: according to the computed weights, the different parts of the input are combined into a weighted sum to obtain the final output, usually a vector or matrix that summarizes the weighted input information.
Differentiable attention mechanism: finally, the attention weights are combined with the input vectors or matrices through a differentiable attention mechanism, such as dot-product, additive, or multi-layer-perceptron attention, so that the whole computation can be optimized by the back-propagation algorithm.
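These four steps describe ordinary scaled dot-product attention; a minimal sketch under that reading follows, with purely illustrative dimensions.

```python
import math
import torch

def dot_product_attention(query: torch.Tensor,
                          keys: torch.Tensor,
                          values: torch.Tensor) -> torch.Tensor:
    """Scaled dot-product attention following the steps above:
    input representations, attention weights from query-key
    correlation, a weighted sum, and a fully differentiable
    computation trainable by back-propagation."""
    scores = query @ keys.transpose(-2, -1) / math.sqrt(keys.size(-1))
    weights = torch.softmax(scores, dim=-1)   # attention weights
    return weights @ values                   # weighted sum

q = torch.randn(1, 64)    # the information of current interest
k = torch.randn(10, 64)   # representations of ten input parts
v = torch.randn(10, 64)
out = dot_product_attention(q, k, v)          # shape (1, 64)
```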
The real-time performance of the system can also be further optimized. With distributed computing, the system can be deployed on multiple computing nodes to realize parallel processing and real-time detection. In addition, high-performance computing hardware such as GPUs can accelerate both the training and the application of the deep learning model, further improving real-time performance.
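As a simple illustration of the GPU acceleration mentioned above (multi-node distributed deployment would additionally require a framework such as torch.distributed, which is beyond this sketch):

```python
import torch
import torch.nn as nn

# Move the model (e.g. the detector sketched earlier) and its input
# batches to a GPU when one is available.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = nn.Linear(16, 1).to(device)   # placeholder for the detector
batch = torch.randn(8, 16).to(device)
scores = model(batch)                 # runs on the GPU if present
```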
In some preferred embodiments, the calculating of the Euclidean distance between the intra-frame features of the current frame and the intra-frame features of the adjacent frames comprises calculating the Euclidean distance between the intra-frame features of the current frame and those of the previous frame and of the next frame, respectively.
In some preferred embodiments, the training of the deep learning model minimizes the entropy loss function through back-propagation while avoiding saturation; when the accuracy of the deep learning model meets the threshold requirement, training of the deep learning model is complete.
In some preferred embodiments, the image differentiation data comprises: pixel difference, texture change, shape change, motion change, color change, or a combination thereof.
The intra-frame differentiated data includes the following:
pixel difference: the difference between adjacent frames can be obtained by comparing pixel values; for example, the absolute or relative difference of pixel values may be used to measure the image difference between adjacent frames.
Texture change: in video, different regions may differ in texture, which can be used to distinguish different objects or scenes. Such differentiated information can be obtained by calculating the change of texture between adjacent frames.
Shape change: in video, the shape of an object may change, for example its size, contour, and position. Such differentiated information can be obtained by calculating the shape change between adjacent frames.
Motion change: in video, the motion of objects is one of the common differentiating features. Such differentiated information can be obtained by calculating changes in motion between adjacent frames, such as the velocity, direction, and acceleration of an object.
Color change: in video, color variation is also a common differentiating feature. Such differentiated information can be obtained by calculating the change in color between adjacent frames, such as its saturation and brightness.
These are some common kinds of intra-frame differentiated data; through them, the differentiated information in the video can be extracted so that artificial intelligence behavior is accurately identified and detected. The simplest case, the pixel difference, is sketched below.
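The following sketch computes the absolute and relative pixel differences named above; the uint8 frame format is an assumption (e.g. frames decoded by OpenCV).

```python
import numpy as np

def pixel_difference(prev: np.ndarray, curr: np.ndarray):
    """Absolute and relative pixel differences between two frames
    of identical shape and uint8 dtype."""
    a = prev.astype(np.int16)
    b = curr.astype(np.int16)
    absolute = np.abs(b - a).astype(np.uint8)
    # Relative difference, guarding against division by zero.
    relative = np.abs(b - a) / np.maximum(a, 1)
    return absolute, relative
```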
FIG. 2 is a block diagram of the intra-frame differentiation-based artificial intelligence detection system provided herein, the system comprising: a video acquisition module, a differential data processing module, a deep learning module and a detection output module;
the video acquisition module is used for acquiring video data through video acquisition equipment and respectively inputting the video data into the subsequent differential data processing module and the detection output module;
the differential data processing module is used for inputting the received video data into an intra-frame network, extracting the intra-frame characteristic of each frame of image and calculating image differential data between adjacent frames according to the intra-frame characteristic;
the intra-frame network comprises multiple convolution layers: after the first-stage convolution layer receives the video data, it performs a convolution operation to obtain a first convolution value, calculates the difference between the first convolution value and the video data, outputs a first difference value, and transmits the first difference value to the next-stage convolution layer; the next-stage convolution layer receives the first difference value, performs a convolution operation to obtain a second convolution value, calculates the difference between the second convolution value and the first difference value, outputs a second difference value, and transmits it to the following convolution layer; this operation is repeated until the data reach the final-stage convolution layer, which performs a convolution operation to obtain a final-stage convolution value; the sum of the final-stage convolution value and the original video data is then calculated and output as the intra-frame features of the video data;
the calculating of the image differentiation data between adjacent frames according to the intra-frame features comprises: calculating the Euclidean distance between the intra-frame features of the current frame and those of the adjacent frames, and then calculating a contrastive loss function between the current frame and the adjacent frames from the Euclidean distance to obtain the image difference value between the adjacent frames;
the deep learning module is used for training a deep learning model by utilizing image differentiation data between the adjacent frames;
and the detection output module is used for inputting the received video data into the trained deep learning model to perform real-time detection, and outputting alarm information if AI behaviors are detected.
The application provides an artificial intelligence detection system based on intra-frame differentiation, the system comprising a processor and a memory:
the memory is used for storing program codes and transmitting the program codes to the processor;
the processor is configured to perform the method according to any of the embodiments of the first aspect according to instructions in the program code.
The present application provides a computer readable storage medium for storing program code for performing the method of any one of the embodiments of the first aspect.
In a specific implementation, the present invention also provides a computer storage medium, where the computer storage medium may store a program that, when executed, may include some or all of the steps in the various embodiments of the present invention. The storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), or a random access memory (RAM).
It will be apparent to those skilled in the art that the techniques of the embodiments of the present invention may be implemented by software plus a necessary general-purpose hardware platform. Based on this understanding, the essence of the technical solutions in the embodiments of the present invention, or the part contributing to the prior art, may be embodied in the form of a software product. The software product may be stored in a storage medium such as a ROM/RAM, a magnetic disk, or an optical disk, and includes several instructions that cause a computer device (which may be a personal computer, a server, or a network device) to execute the methods described in the embodiments, or in some parts of the embodiments, of the present invention.
The same or similar parts of the various embodiments in this description may be referred to one another. In particular, the system embodiments are described relatively briefly because they are substantially similar to the method embodiments; for relevant details, refer to the description of the method embodiments.
The embodiments of the present invention described above do not limit the scope of the present invention.
Claims (7)
1. An artificial intelligence detection method based on intra-frame differentiation, which is characterized by comprising the following steps:
step one, acquiring video data through video acquisition equipment, and respectively inputting the video data into the subsequent step two and step four;
step two, inputting the received video data into an intra-frame network, extracting the intra-frame features of each frame of image, and calculating image differentiation data between adjacent frames according to the intra-frame features;
the intra-frame network in step two comprises multiple convolution layers: after the first-stage convolution layer receives the video data, it performs a convolution operation to obtain a first convolution value, calculates the difference between the first convolution value and the video data, outputs a first difference value, and transmits the first difference value to the next-stage convolution layer; the next-stage convolution layer receives the first difference value, performs a convolution operation to obtain a second convolution value, calculates the difference between the second convolution value and the first difference value, outputs a second difference value, and transmits it to the following convolution layer; this operation is repeated until the data reach the final-stage convolution layer, which performs a convolution operation to obtain a final-stage convolution value; the sum of the final-stage convolution value and the original video data is then calculated and output as the intra-frame features of the video data;
the calculating of the image differentiation data between adjacent frames according to the intra-frame features comprises: calculating the Euclidean distance between the intra-frame features of the current frame and those of the adjacent frames, and then calculating a contrastive loss function between the current frame and the adjacent frames from the Euclidean distance to obtain the image difference value between the adjacent frames;
step three, training a deep learning model by utilizing the image differentiation data between the adjacent frames;
and step four, inputting the received video data into the trained deep learning model, detecting in real time, and outputting alarm information if AI behaviors are detected.
2. The method according to claim 1, characterized in that: the calculating the euclidean distance between the intra-frame features of the current frame and the intra-frame features of the adjacent frames comprises respectively calculating the euclidean distance between the intra-frame features of the current frame and the intra-frame features of the previous frame and the next frame.
3. The method according to claim 1, characterized in that: when the deep learning model is trained, the entropy loss function is minimized through back-propagation while avoiding saturation, and when the accuracy of the deep learning model meets the threshold requirement, training of the deep learning model is complete.
4. A method according to claim 3, characterized in that: the image differentiation data includes: pixel difference, texture change, shape change, motion change, color change, or a combination thereof.
5. An artificial intelligence detection system based on intra-frame differentiation, the system comprising: a video acquisition module, a differential data processing module, a deep learning module and a detection output module;
the video acquisition module is used for acquiring video data through video acquisition equipment and respectively inputting the video data into the subsequent differential data processing module and the detection output module;
the differential data processing module is used for inputting the received video data into an intra-frame network, extracting the intra-frame characteristic of each frame of image and calculating image differential data between adjacent frames according to the intra-frame characteristic;
the intra-frame network comprises multiple convolution layers: after the first-stage convolution layer receives the video data, it performs a convolution operation to obtain a first convolution value, calculates the difference between the first convolution value and the video data, outputs a first difference value, and transmits the first difference value to the next-stage convolution layer; the next-stage convolution layer receives the first difference value, performs a convolution operation to obtain a second convolution value, calculates the difference between the second convolution value and the first difference value, outputs a second difference value, and transmits it to the following convolution layer; this operation is repeated until the data reach the final-stage convolution layer, which performs a convolution operation to obtain a final-stage convolution value; the sum of the final-stage convolution value and the original video data is then calculated and output as the intra-frame features of the video data;
the calculating of the image differentiation data between adjacent frames according to the intra-frame features comprises: calculating the Euclidean distance between the intra-frame features of the current frame and those of the adjacent frames, and then calculating a contrastive loss function between the current frame and the adjacent frames from the Euclidean distance to obtain the image difference value between the adjacent frames;
the deep learning module is used for training a deep learning model by utilizing image differentiation data between the adjacent frames;
and the detection output module is used for inputting the received video data into the trained deep learning model to perform real-time detection, and outputting alarm information if AI behaviors are detected.
6. An artificial intelligence detection system based on intra-frame differentiation, the system comprising a processor and a memory:
the memory is used for storing program codes and transmitting the program codes to the processor;
the processor is configured to perform the method according to any of the claims 1-4 according to instructions in the program code.
7. A computer readable storage medium, characterized in that the computer readable storage medium is for storing program code for performing the method of any one of claims 1-4.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310926811.3A | 2023-07-26 | 2023-07-26 | Artificial intelligence detection method and system based on intra-frame differentiation
Publications (2)
Publication Number | Publication Date |
---|---|
CN117011766A CN117011766A (en) | 2023-11-07 |
CN117011766B (en) | 2024-02-13
Family
ID=88564859
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310926811.3A Active CN117011766B (en) | 2023-07-26 | 2023-07-26 | Artificial intelligence detection method and system based on intra-frame differentiation |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117011766B (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108280233A (en) * | 2018-02-26 | 2018-07-13 | 南京邮电大学 | A kind of VideoGIS data retrieval method based on deep learning |
JP6668514B1 (en) * | 2018-11-15 | 2020-03-18 | 株式会社 ジーワイネットワークス | Violence detection frameworking method and device using spatio-temporal characteristics analysis of deep planning-based shadow images |
CN115147758A (en) * | 2022-06-23 | 2022-10-04 | 山东大学 | Depth forged video detection method and system based on intra-frame inter-frame feature differentiation |
Also Published As
Publication number | Publication date |
---|---|
CN117011766A (en) | 2023-11-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108133188B (en) | Behavior identification method based on motion history image and convolutional neural network | |
CN114972418B (en) | Maneuvering multi-target tracking method based on combination of kernel adaptive filtering and YOLOX detection | |
CN107423702B (en) | Video target tracking method based on TLD tracking system | |
CN108921877B (en) | Long-term target tracking method based on width learning | |
CN109919032B (en) | Video abnormal behavior detection method based on motion prediction | |
CN109101897A (en) | Object detection method, system and the relevant device of underwater robot | |
CN110120064B (en) | Depth-related target tracking algorithm based on mutual reinforcement and multi-attention mechanism learning | |
CN112464807A (en) | Video motion recognition method and device, electronic equipment and storage medium | |
CN110853074B (en) | Video target detection network system for enhancing targets by utilizing optical flow | |
CN111832228B (en) | Vibration transmission system based on CNN-LSTM | |
CN111832484A (en) | Loop detection method based on convolution perception hash algorithm | |
CN112036381B (en) | Visual tracking method, video monitoring method and terminal equipment | |
CN106650617A (en) | Pedestrian abnormity identification method based on probabilistic latent semantic analysis | |
CN113313123B (en) | Glance path prediction method based on semantic inference | |
CN112288778A (en) | Infrared small target detection method based on multi-frame regression depth network | |
CN117011766B (en) | Artificial intelligence detection method and system based on intra-frame differentiation | |
Kizrak et al. | Crowd density estimation by using attention based capsule network and multi-column CNN | |
CN116468980A (en) | Infrared small target detection method and device for deep fusion of edge details and deep features | |
CN114820723B (en) | Online multi-target tracking method based on joint detection and association | |
CN115375966A (en) | Image countermeasure sample generation method and system based on joint loss function | |
CN114092844A (en) | Multi-band image target detection method based on generation countermeasure network | |
CN113554685A (en) | Method and device for detecting moving target of remote sensing satellite, electronic equipment and storage medium | |
CN116883907A (en) | Artificial intelligence detection method and system based on inter-frame correlation | |
CN113658218B (en) | Dual-template intensive twin network tracking method, device and storage medium | |
CN118549923B (en) | Video radar monitoring method and related equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |