CN113326400A - Model evaluation method and system based on deepfake video detection - Google Patents
Model evaluation method and system based on deepfake video detection Download PDF Info
- Publication number
- CN113326400A (application CN202110728869.8A)
- Authority
- CN
- China
- Prior art keywords
- model
- detection
- video
- score
- depth
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
- 238000001514 detection method Methods 0.000 title claims abstract description 94
- 238000011156 evaluation Methods 0.000 title claims abstract description 23
- 238000000034 method Methods 0.000 claims abstract description 13
- 230000005284 excitation Effects 0.000 claims description 11
- 238000005259 measurement Methods 0.000 claims description 4
- 238000004806 packaging method and process Methods 0.000 claims description 2
- 230000007123 defense Effects 0.000 abstract description 16
- 238000005516 engineering process Methods 0.000 abstract description 8
- 238000011160 research Methods 0.000 abstract description 8
- 238000011161 development Methods 0.000 abstract description 6
- 238000010276 construction Methods 0.000 abstract description 5
- 238000007596 consolidation process Methods 0.000 description 4
- 230000000694 effects Effects 0.000 description 4
- 238000010586 diagram Methods 0.000 description 3
- 238000012360 testing method Methods 0.000 description 2
- 239000002131 composite material Substances 0.000 description 1
- 238000012937 correction Methods 0.000 description 1
- 238000009795 derivation Methods 0.000 description 1
- 230000002349 favourable effect Effects 0.000 description 1
- 238000007667 floating Methods 0.000 description 1
- 238000005242 forging Methods 0.000 description 1
- 238000004519 manufacturing process Methods 0.000 description 1
- 239000000203 mixture Substances 0.000 description 1
- 238000012827 research and development Methods 0.000 description 1
- 238000005070 sampling Methods 0.000 description 1
- 238000004088 simulation Methods 0.000 description 1
- 238000003860 storage Methods 0.000 description 1
- 238000012795 verification Methods 0.000 description 1
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/70—Information retrieval; Database structures therefor; File system structures therefor of video data
- G06F16/78—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/783—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/06—Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
- G06Q10/063—Operations research, analysis or management
- G06Q10/0631—Resource planning, allocation, distributing or scheduling for enterprises or organisations
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/06—Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
- G06Q10/063—Operations research, analysis or management
- G06Q10/0639—Performance analysis of employees; Performance analysis of enterprise or organisation operations
- G06Q10/06393—Score-carding, benchmarking or key performance indicator [KPI] analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
- G06Q30/018—Certifying business or products
- G06Q30/0185—Product, service or business identity fraud
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
Abstract
The invention relates to a model evaluation method based on deepfake video detection, comprising the following steps: S100, importing a model to be evaluated; S200, randomly selecting a specified number of samples from a video sample library to obtain a sample set, wherein the video sample library stores videos together with an authenticity mark for each video; S300, inputting the sample set into the model to be evaluated for authenticity identification; S400, comparing the identification results with the actual authenticity of the samples to obtain a measured value P1 for a specified scoring index of the model; and S500, automatically scoring according to the measured value P1 to obtain and output the score of the model. The method can screen out high-value detection models for inclusion in a professional deepfake detection defense system, supports the construction of such a defense system, and helps build and consolidate a network security defense line. It can also promote the continued development of deepfake detection technology and attract deepfake detection researchers to contribute to the ongoing improvement of the video sample library, creating a virtuous circle.
Description
Technical Field
The invention relates to the technical field of deepfake video detection, and in particular to a model evaluation method and system based on deepfake video detection.
Background
Deepfake (deep forgery) technology can not only swap faces but also synthesize entirely fictitious people. In theory, the technology can fabricate audio and video of any designated person saying or doing anything. Deepfake techniques have positive applications, such as creating virtual characters in film production, voice simulation, and "reviving" historical figures or deceased relatives and friends. However, malicious deepfake content can be used to mislead public opinion, disturb social order, and potentially even interfere with government elections or subvert state power. It has already become a weapon in a new round of information warfare, posing a serious threat to national network security, data security, information security, and national defense security.
The development of deepfake detection technology is therefore a matter of national information security, and many enterprises, research institutions, and universities are currently studying it intensively. However, there is as yet no effective means on the market to objectively evaluate the real-world detection capability of the deepfake detection models developed by different teams, or to screen out high-value models and apply them in practice to the defense of national security. The common practice is for each team to train and test its own model on some public or private local sample set and then compare accuracy figures to gauge model capability, as in the patents "Deep-forged image detection method and apparatus" (application No. 202011352811.X) and "Deep-forged video detection method and system based on time-series inconsistency" (application No. 202011417127.5).
The models needed in practice are those with high generalization, able to effectively detect and intercept the majority of deepfake videos. At present, however, the performance of a deepfake video detection model is generally evaluated by the model's own researchers, using metrics such as log loss on a limited, specific sample set or in a specific scenario. Such evaluations cannot objectively reflect a model's effectiveness in real application scenarios, and the results can be neither compared with nor effectively combined with other algorithms, which greatly reduces the practical value of the final model.
Disclosure of Invention
The main object of the invention is to provide a model evaluation method based on deepfake video detection that can conveniently score a model under test.
To achieve this object, the invention adopts the following technical scheme: a model evaluation method based on deepfake video detection comprises the following steps: S100, importing a model to be evaluated; S200, randomly selecting a specified number of samples from a video sample library to obtain a sample set, wherein the video sample library stores videos together with an authenticity mark for each video; S300, inputting the sample set into the model to be evaluated for authenticity identification; S400, comparing the identification results with the actual authenticity of the samples to obtain a measured value P1 for a specified scoring index of the model; and S500, automatically scoring according to the measured value P1 to obtain and output the score of the model.
Compared with the prior art, the invention has the following technical effects: by establishing a public video sample library that provides a rich set of videos simulating real-world scenarios for evaluating the capability of deepfake detection models, high-value detection models can be screened out and incorporated into a professional deepfake detection defense system, supporting the construction of such a system and helping build and consolidate a network security defense line; the invention can also promote the continued development of deepfake detection technology and attract deepfake detection researchers to contribute to the ongoing improvement of the video sample library, creating a virtuous circle.
Another object of the invention is to provide a model evaluation system based on deepfake video detection that can conveniently score a model under test.
To achieve this object, the invention adopts the following technical scheme: a model evaluation system based on deepfake video detection comprises a video sample library, an intelligent detection scoring module, and an intelligent task scheduling module. The video sample library stores videos together with an authenticity mark for each video; the intelligent task scheduling module configures automatic scoring tasks, samples a specified number of samples from the video sample library to form sample sets, and schedules the scoring tasks; the intelligent detection scoring module holds preset evaluation indexes and automatically calculates the model score from the results of executing steps S100-S500.
Compared with the prior art, the invention has the following technical effects: by establishing a public video sample library that provides a rich set of videos simulating real-world scenarios for evaluating the capability of deepfake detection models, high-value detection models can be screened out and incorporated into a professional deepfake detection defense system, supporting the construction of such a system and helping build and consolidate a network security defense line; the invention can also promote the continued development of deepfake detection technology and attract deepfake detection researchers to contribute to the ongoing improvement of the video sample library, creating a virtuous circle.
Drawings
FIG. 1 is a system architecture diagram of the present invention;
FIG. 2 is a flow chart of the operation of the present invention;
FIG. 3 is a sampling flow diagram of a video sample library;
fig. 4 is a scoring flow chart of the present invention.
Detailed Description
The present invention will be described in further detail with reference to fig. 1 to 4.
The invention discloses a model evaluation method based on deepfake video detection, comprising the following steps: S100, importing a model to be evaluated; S200, randomly selecting a specified number of samples from a video sample library to obtain a sample set, wherein the video sample library stores videos together with an authenticity mark for each video; S300, inputting the sample set into the model to be evaluated for authenticity identification; S400, comparing the identification results with the actual authenticity of the samples to obtain a measured value P1 for a specified scoring index of the model; and S500, automatically scoring according to the measured value P1 to obtain and output the score of the model. By establishing a public video sample library that provides a rich set of videos simulating real-world scenarios for evaluating the capability of deepfake detection models, high-value detection models can be screened out and incorporated into a professional deepfake detection defense system, supporting the construction of such a system and helping build and consolidate a network security defense line; the invention can also promote the continued development of deepfake detection technology and attract deepfake detection researchers to contribute to the ongoing improvement of the video sample library, creating a virtuous circle.
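Steps S100-S500 can be sketched in code. The following minimal Python illustration uses names invented for the example (`VideoSample`, `evaluate_model`), takes detection accuracy as the measured value P1, and uses built-in random sampling for S200 for brevity; none of these choices is fixed by the patent.

```python
# Illustrative sketch of steps S200-S400; all names are assumptions made
# for this example, and accuracy stands in for the measured value P1.
from dataclasses import dataclass
from typing import Callable, List
import random

@dataclass
class VideoSample:
    video_id: int
    is_real: bool  # authenticity mark stored alongside the video

def evaluate_model(model: Callable[[VideoSample], bool],
                   library: List[VideoSample],
                   n: int, seed: int = 0) -> float:
    """S200-S400: draw n random samples, run the model, return accuracy (P1)."""
    rng = random.Random(seed)
    sample_set = rng.sample(library, n)                      # S200
    predictions = [model(s) for s in sample_set]             # S300
    correct = sum(p == s.is_real                             # S400
                  for p, s in zip(predictions, sample_set))
    return correct / n
```

The measured value returned here then feeds the automatic scoring of step S500, described below.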
Referring to fig. 4: because the video sample library contains many samples, a specified number of them must be selected at random, and owing to this randomness, different selections may yield different scores. To make the scoring result more stable and reliable, the present application preferably introduces a score correction procedure, as follows. A reference model is built in; in steps S300 and S400 the samples are also input into the reference model to obtain its measured value P0, and in step S500 the score of the model under evaluation is computed as a + k·(b − a)·(P1 − P0)/P0, wherein k is an incentive coefficient indicating whether the evaluation index is a positive or a negative incentive, represented by 1 and −1 respectively. For a positive incentive, a larger measured value yields a higher score; for a negative incentive, a smaller measured value yields a higher score. Taking detection speed as an example: the shorter the detection time, the higher the model's score, and P1 − P0 is then negative, so k is taken as −1 and the index is a negative incentive.
Here a is the preset score of the reference model and b is the full-mark value. The reference model can be tested against a large number of samples, so its actual ability can be judged directly, and a preset score is assigned to it according to its performance. The score a serves as a reference benchmark: every model to be scored is rated against it. If the model under evaluation measures better than the reference model, its score is higher than a; if it measures worse, its score is lower than a. The final scores of all models therefore have real reference value, and, most importantly, the scoring result does not vary greatly with the choice of sample set, ensuring that scores fluctuate little and remain meaningful.
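The correction formula above can be written out directly; a small sketch, with function and parameter names chosen for illustration:

```python
def score(p1: float, p0: float, a: float, b: float, k: int) -> float:
    """Score of the model under evaluation: a + k*(b - a)*(P1 - P0)/P0.

    a: preset score of the reference model; b: full-mark value;
    k: incentive coefficient, +1 (positive incentive) or -1 (negative).
    """
    return a + k * (b - a) * (p1 - p0) / p0

# Detection time is a negative incentive (k = -1): a model that finishes
# in 45 s against a reference of 50 s scores above the reference score.
print(score(45.0, 50.0, 80.0, 100.0, -1))   # 82.0
```

A model whose measured value equals the reference's (P1 = P0) scores exactly a, as the text describes.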
Of course, there may be more than one reference model: a plurality of reference models can be set, each with its own preset score; the model under evaluation is then scored against each reference model, and the average of all the resulting scores gives a more reasonable final result.
A model can be evaluated along several dimensions, so in this case the measured values preferably include one or more of detection speed, detection accuracy, false positive rate, and false negative rate, and the intelligent detection scoring module outputs each individual score and/or the weighted average of the multiple scores as the final score. One or more of these measured values can be selected according to actual needs, and further measured values can be added for other requirements. For the final score, each measured value can be scored separately, producing several individual scores, and all the scores can then be weighted and averaged into a composite score that reflects them all; the choice is generally made according to actual needs.
Further, the score a of the reference model is taken in the range [50, 90], the full-mark value b equals 100, the measured values comprise detection speed, detection accuracy, false positive rate, and false negative rate, and the corresponding values of k are −1, 1, −1, and −1 respectively. The score a may vary with the chosen reference model and with subjective judgment; for example, in this embodiment a model with a detection time of 50 s, detection accuracy of 90%, false positive rate of 5%, and false negative rate of 5% is considered to perform well and may be assigned a score of 80. The value of a is fixed before the scoring step is performed, and the range [50, 90] represents the acceptable interval when subjectively scoring a candidate reference model. For detection speed, the shorter the time, the higher the final score, so it is a negative incentive; the higher the detection accuracy, the higher the final score, so it is a positive incentive; the false positive rate and the false negative rate behave like speed — the lower, the better — so both are negative incentives.
In the above embodiment, each parameter of the model under evaluation could be scored individually, but that would make it hard to compare two models overall. It is therefore preferable that the intelligent detection scoring module outputs the weighted average of the multiple scores as the final score, with the weights for detection speed, detection accuracy, false positive rate, and false negative rate set to 10%, 60%, 10%, and 20% respectively. These weights can be adjusted to the actual situation: if detection speed matters more, its weight can be increased; if the false positive rate matters more, its weight can be increased. The values given here are for reference only.
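Combining the per-index formula with these weights gives the composite score. The sketch below uses the k values and weights of this embodiment; the metric names and dictionary layout are choices made for the example:

```python
# k values and weights from this embodiment: speed -1 @ 10%, accuracy +1 @ 60%,
# false positive rate -1 @ 10%, false negative rate -1 @ 20%.
METRICS = {
    "speed":    (-1, 0.10),
    "accuracy": ( 1, 0.60),
    "fpr":      (-1, 0.10),
    "fnr":      (-1, 0.20),
}

def composite_score(measured: dict, reference: dict,
                    a: float = 80.0, b: float = 100.0) -> float:
    """Weighted average of the per-index scores a + k*(b-a)*(P1-P0)/P0."""
    total = 0.0
    for name, (k, weight) in METRICS.items():
        p1, p0 = measured[name], reference[name]
        total += weight * (a + k * (b - a) * (p1 - p0) / p0)
    return total
```

A model that matches the reference on every index receives the reference score a exactly; beating the reference on any index pushes the composite above it.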
The scoring described above is illustrated by a specific example, shown in the following table:
In the table, each item score and the composite score of the reference model are 80; each item score of the model under evaluation is calculated by the formula above, and its weighted composite score is 79.868.
Referring to fig. 3, N samples are selected from the video sample library to form a sample set, and there are many ways to randomize the selection. In the present invention, step S200 preferably comprises the following steps: S201, randomly drawing a number x_1 from 1 to M, wherein M is the total number of videos in the video sample library; S202, using x_n as a seed, computing a new random number according to the formula x_{n+1} = (x_n · c + d) mod M + 1, wherein c and d are preset integers and mod is the remainder operation; S203, if a computed x_{n+1} has occurred before, discarding it and randomly drawing a new number, finally generating a set of a specified number of non-repeating numbers {x_1, x_2, …, x_N}, wherein N is the size of the sample set; S204, taking the videos corresponding to x_1, x_2, …, x_N as the sample set. More preferably, c equals 1103515245 and d equals 12345. A sample set obtained by these steps has strong randomness, so the resulting score has greater reference value.
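Steps S201-S203 amount to a linear congruential generator with re-draws on collision. A sketch using the preferred constants; the patent does not specify how the "fresh" number is drawn after a collision, so `random.randint` is assumed here:

```python
# Sketch of S201-S203: the LCG x_{n+1} = (x_n * c + d) mod M + 1 with the
# preferred constants c = 1103515245, d = 12345. The collision re-draw via
# random.randint is an assumption, not specified in the text.
import random

def sample_indices(m: int, n: int, seed: int = 0,
                   c: int = 1103515245, d: int = 12345) -> list:
    """Return n distinct video indices in 1..m (S201-S203)."""
    assert n <= m
    rng = random.Random(seed)
    x = rng.randint(1, m)            # S201: initial random number x_1
    chosen, seen = [x], {x}
    while len(chosen) < n:
        x = (x * c + d) % m + 1      # S202: next number from the LCG
        while x in seen:             # S203: repeat -> discard, re-draw
            x = rng.randint(1, m)
        seen.add(x)
        chosen.append(x)
    return chosen
```

The re-draw step is what guarantees termination: a pure LCG would cycle once any value repeated, whereas drawing a fresh seed lets the sequence escape the cycle.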
Referring to figs. 1 and 2, a model evaluation system based on deepfake video detection comprises a video sample library, an intelligent detection scoring module, and an intelligent task scheduling module. The video sample library stores videos together with an authenticity mark for each video; the intelligent task scheduling module configures automatic scoring tasks, samples a specified number of samples from the video sample library to form sample sets, and schedules the scoring tasks; the intelligent detection scoring module holds preset evaluation indexes and automatically calculates the model score from the results of executing steps S100-S500. The system is described in modular terms, meaning any software or hardware capable of realizing the corresponding functions can constitute it; in actual use the system consists of both software and hardware, and its architecture is shown in fig. 1.
The hardware part mainly comprises a server cluster with GPU computing capability, offering a degree of expansibility in storage and computing power. The software part mainly comprises the video sample library, the intelligent detection scoring module, and the intelligent task scheduling module; each model to be scored is packaged, according to a given packaging standard, into a form callable through an interface and is connected to the system.
The integration and operating principle of the whole system are described in the flow chart of fig. 2; the basic flow is as follows:
1. The research team wraps the model invocation according to the standard interface document provided by the system, then uploads the package to the system to complete the model onboarding process.
2. After the model is successfully onboarded, the system tests its functions, performing functional verification in preparation for subsequent automatic scoring; updating and delisting of models are also supported.
3. Automatic scoring tasks are configured through the intelligent task scheduling module, chiefly by setting parameters such as the task start time, the number of sample sets, and the number of samples in each sample set; the tasks are then submitted to the intelligent detection scoring module for execution.
4. The intelligent task scheduling module automatically samples sample sets containing the specified number of samples from the video sample library, then determines how many tasks to run concurrently according to the GPU load.
5. The video sample library can be supplemented and updated by research teams.
6. After all sample-set detection and scoring tasks have finished, the intelligent detection scoring module completes automatic scoring of the model by weighted averaging.
7. Display and export of the scoring report are supported, so that the research team can understand the model's performance in a sample environment that simulates real conditions.
Through customized autonomous recording and broad open collection, the invention builds a large-scale video sample library mixing forged videos generated by a variety of deepfake algorithms with real videos, simulating to a certain extent the forged-video environment encountered in practice. A matching evaluation index system, organized by model category, is provided, and a model's detection effect is comprehensively scored across many samples and multiple dimensions, thereby evaluating the capability of a deepfake detection model under realistic conditions. The system can serve as a third-party deepfake video detection platform: by providing rich video data sets simulating real-world scenarios to evaluate the capability of deepfake detection models, high-value detection models can be screened out and incorporated into a professional deepfake detection defense system, supporting its construction and consolidation; the continued development of deepfake detection technology can also be promoted, attracting deepfake detection researchers to contribute to the ongoing improvement of the sample library, creating a virtuous circle.
Claims (10)
1. A model evaluation method based on deepfake video detection, characterized in that the method comprises the following steps:
s100, importing a model to be evaluated;
s200, randomly selecting a specified number of samples from a video sample library to obtain a sample set, wherein the video sample library stores videos and authenticity marks of the videos;
s300, inputting the sample set into a model to be evaluated for authenticity identification;
s400, comparing the identification result with the actual authenticity of the sample to obtain a measured value P1 of the specified grading index of the model to be graded;
and S500, automatically scoring according to the measured value P1 to obtain and output the score of the model to be scored.
2. The model evaluation method based on deepfake video detection as claimed in claim 1, wherein: a reference model is built in; in steps S300 and S400 the samples are simultaneously input into the reference model to obtain its measured value P0, and in step S500 the score of the model to be scored is calculated as: a + k·(b − a)·(P1 − P0)/P0, wherein k is an incentive coefficient indicating whether the evaluation index is a positive incentive or a negative incentive, represented by 1 and −1 respectively, a is the preset score of the reference model, and b is the full-mark value.
3. The method for evaluating a model based on depth-counterfeit video detection as claimed in claim 2, wherein: the measured value comprises one or more of detection speed, detection accuracy, false positive rate and false negative rate, and the intelligent detection scoring module outputs a final score after each score and/or weighted average of a plurality of scores.
4. The model evaluation method based on deepfake video detection as claimed in claim 3, wherein: the score a of the reference model is taken in the range [50, 90], the full-mark value b equals 100, the measured values comprise detection speed, detection accuracy, false positive rate, and false negative rate, and the values of k corresponding to the detection speed, the detection accuracy, the false positive rate, and the false negative rate are −1, 1, −1, and −1 respectively.
5. The method for evaluating a model based on depth-counterfeit video detection as claimed in claim 4, wherein: the intelligent detection scoring module outputs the final score after weighted average of the multiple scores, and the weights corresponding to the detection speed, the detection accuracy, the false positive rate and the false negative rate are respectively 10%, 60%, 10% and 20%.
6. A method for evaluating a model based on depth-counterfeit video detection according to any of claims 1-5, wherein: the step S200 includes the following steps:
s201, randomly drawing a number x_1 from 1 to M, wherein M is the total number of videos in the video sample library;
s202, using x_n as a seed, computing a new random number according to the formula x_{n+1} = (x_n · c + d) mod M + 1, wherein c and d are preset integers and mod is the remainder operation;
s203, if a computed x_{n+1} has occurred before, discarding it and randomly drawing a new number, finally generating a set of a specified number of non-repeating numbers {x_1, x_2, …, x_N}, wherein N is the size of the sample set;
s204, taking the videos corresponding to x_1, x_2, …, x_N as the sample set.
7. The method for evaluating a model based on depth-counterfeit video detection as claimed in claim 6, wherein: c is equal to 1103515245 and d is equal to 12345.
8. A model evaluation system based on deepfake video detection, characterized in that: the system comprises a video sample library, an intelligent detection scoring module, and an intelligent task scheduling module; the video sample library stores videos together with an authenticity mark for each video; the intelligent task scheduling module configures automatic scoring tasks, samples a specified number of samples from the video sample library to form sample sets, and schedules the scoring tasks; the intelligent detection scoring module holds preset evaluation indexes and automatically calculates the model score from the results of executing steps S100-S500.
9. A model-based evaluation system for depth-spoofing video detection as in claim 8 wherein: the task intelligent scheduling module is used for automatically setting and scheduling the execution of the detection tasks, and comprises the steps of setting the starting time of the tasks, the number of sample sets, the number of samples in each sample set, and the number of tasks which are arranged to be executed in parallel according to the number of models to be detected and the system load; and the intelligent detection scoring module is used for adjusting the evaluation excitation coefficient and setting the weight.
10. A model-based evaluation system for depth-spoofing video detection as in claim 8 wherein: the model to be detected is packaged into a form capable of being called by an interface through a given packaging standard and is accessed into the system; the videos in the video sample library may be supplemented and updated.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110728869.8A CN113326400B (en) | 2021-06-29 | 2021-06-29 | Evaluation method and system of model based on depth fake video detection |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110728869.8A CN113326400B (en) | 2021-06-29 | 2021-06-29 | Evaluation method and system of model based on depth fake video detection |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113326400A true CN113326400A (en) | 2021-08-31 |
CN113326400B CN113326400B (en) | 2024-01-12 |
Family
ID=77425171
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110728869.8A Active CN113326400B (en) | 2021-06-29 | 2021-06-29 | Evaluation method and system of model based on depth fake video detection |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113326400B (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115171198A (en) * | 2022-09-02 | 2022-10-11 | 腾讯科技(深圳)有限公司 | Model quality evaluation method, device, equipment and storage medium |
CN115457374A (en) * | 2022-11-09 | 2022-12-09 | 之江实验室 | Deep pseudo-image detection model generalization evaluation method and device based on reasoning mode |
Citations (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101482923A (en) * | 2009-01-19 | 2009-07-15 | 刘云 | Human body target detection and gender recognition method in video monitoring |
CN102855681A (en) * | 2011-06-30 | 2013-01-02 | 富士施乐株式会社 | Authenticity determination support device and support method, authenticity determination device and authenticity determination support method |
CN108646722A (en) * | 2018-07-18 | 2018-10-12 | 杭州安恒信息技术股份有限公司 | A kind of industrial control system information security simulation model and terminal |
US20190073603A1 (en) * | 2017-09-06 | 2019-03-07 | InfoVista Sweden AB | SYSTEM AND METHOD FOR MACHINE LEARNING BASED QoE PREDICTION OF VOICE/VIDEO SERVICES IN WIRELESS NETWORKS |
CN109816625A (en) * | 2018-11-27 | 2019-05-28 | 广东电网有限责任公司 | A kind of video quality score implementation method |
CN109886244A (en) * | 2019-03-01 | 2019-06-14 | 北京视甄智能科技有限公司 | A kind of recognition of face biopsy method and device |
CN109902823A (en) * | 2018-12-29 | 2019-06-18 | 华为技术有限公司 | A kind of model training method and equipment based on generation confrontation network |
CN110428399A (en) * | 2019-07-05 | 2019-11-08 | 百度在线网络技术(北京)有限公司 | Method, apparatus, equipment and storage medium for detection image |
CN110427839A (en) * | 2018-12-26 | 2019-11-08 | 西安电子科技大学 | Video object detection method based on multilayer feature fusion |
CN110929098A (en) * | 2019-11-14 | 2020-03-27 | 腾讯科技(深圳)有限公司 | Video data processing method and device, electronic equipment and storage medium |
CN111341383A (en) * | 2020-03-17 | 2020-06-26 | 安吉康尔(深圳)科技有限公司 | Method, device and storage medium for detecting copy number variation |
CN111401558A (en) * | 2020-06-05 | 2020-07-10 | 腾讯科技(深圳)有限公司 | Data processing model training method, data processing device and electronic equipment |
CN111767877A (en) * | 2020-07-03 | 2020-10-13 | 北京视甄智能科技有限公司 | Living body detection method based on infrared features |
CN112001453A (en) * | 2020-08-31 | 2020-11-27 | 北京易华录信息技术股份有限公司 | Method and device for calculating accuracy of video event detection algorithm |
CN112036902A (en) * | 2020-07-14 | 2020-12-04 | 深圳大学 | Product authentication method and device based on deep learning, server and storage medium |
CN112200001A (en) * | 2020-09-11 | 2021-01-08 | 南京星耀智能科技有限公司 | Depth-forged video identification method in specified scene |
CN112464245A (en) * | 2020-11-26 | 2021-03-09 | 重庆邮电大学 | Generalized security evaluation method for deep learning image classification model |
CN112488013A (en) * | 2020-12-04 | 2021-03-12 | 重庆邮电大学 | Depth-forged video detection method and system based on time sequence inconsistency |
CN112749894A (en) * | 2021-01-12 | 2021-05-04 | 云南电网有限责任公司电力科学研究院 | Defect detection model evaluation method and device |
CN112749746A (en) * | 2021-01-12 | 2021-05-04 | 云南电网有限责任公司电力科学研究院 | Method, system and device for iteratively updating defect sample |
CN112766189A (en) * | 2021-01-25 | 2021-05-07 | 北京有竹居网络技术有限公司 | Depth forgery detection method, device, storage medium, and electronic apparatus |
Non-Patent Citations (3)
Title |
---|
ZHANG Yixuan; LI Gen; CAO Yun; ZHAO Xianfeng: "Detection of face-tampered videos based on inter-frame differences", Journal of Cyber Security, no. 02, pages 54-77 * |
ZHAO Zengshun; GAO Hanxu; SUN Qian; TENG Shenghua; CHANG Faliang; DAPENG OLIVER WU: "Latest advances in the theoretical framework, derived models, and applications of generative adversarial networks", Journal of Chinese Computer Systems, no. 12, pages 44-48 * |
TAO Jianhua; FU Ruibo; YI Jiangyan; WANG Chenglong; WANG Tao: "Development and challenges of speech forgery and forgery detection", Journal of Cyber Security, no. 02, pages 33-43 * |
Also Published As
Publication number | Publication date |
---|---|
CN113326400B (en) | 2024-01-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Liu et al. | The effect of sample size on the accuracy of species distribution models: considering both presences and pseudo‐absences or background sites | |
WO2023093397A1 (en) | Efficiency evaluation method based on mass adversarial simulation deduction data modeling and analysis | |
CN108322317B (en) | Account identification association method and server | |
CN113326400A (en) | Model evaluation method and system based on depth counterfeit video detection | |
CN108737439A (en) | A kind of large-scale malicious domain name detecting system and method based on self feed back study | |
CN112183643B (en) | Hard rock tension-shear fracture identification method and device based on acoustic emission | |
CN109045702A (en) | A kind of plug-in detection method, device, calculates equipment and medium at system | |
CN109902018A (en) | A kind of acquisition methods of intelligent driving system test cases | |
Yin et al. | Towards accurate intrusion detection based on improved clonal selection algorithm | |
CN108009058A (en) | Warping apparatus recognition methods and device and electronic equipment | |
CN110474878A (en) | Ddos attack situation method for early warning and server based on dynamic threshold | |
CN110807509A (en) | Depth knowledge tracking method based on Bayesian neural network | |
CN109951554A (en) | Information security technology contest anti-cheat method in real time | |
CN114021188A (en) | Method and device for interactive security verification of federated learning protocol and electronic equipment | |
CN110941933B (en) | Complex electromagnetic environment fidelity assessment model system and method based on similarity theory | |
CN110084291B (en) | Student behavior analysis method and device based on big data extreme learning | |
CN116776208B (en) | Training method of seismic wave classification model, seismic wave selecting method, equipment and medium | |
CN116680633B (en) | Abnormal user detection method, system and storage medium based on multitask learning | |
CN110765668B (en) | Concrete penetration depth test data abnormal point detection method based on deviation index | |
CN108573292A (en) | Manufacturing management method and manufacturing management system | |
CN110213094B (en) | Method and device for establishing threat activity topological graph and storage equipment | |
TWI636276B (en) | Method of determining earthquake with artificial intelligence and earthquake detecting system | |
CN110342363A (en) | Test method, apparatus, terminal device and the storage medium of elevator safety performance | |
CN113673811B (en) | On-line learning performance evaluation method and device based on session | |
Ahmed Khan et al. | Generating realistic IoT‐based IDS dataset centred on fuzzy qualitative modelling for cyber‐physical systems |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
CB02 | Change of applicant information |
Address after: 21/F, Building A1, Phase I, Zhong'an Chuanggu Science Park, No. 900 Wangjiang West Road, High-tech Zone, Hefei, Anhui, 230088
Applicant after: HEFEI HIGH DIMENSIONAL DATA TECHNOLOGY Co.,Ltd.
Address before: Block C, Building J2, Innovation Industrial Park, 2800 Innovation Avenue, High-tech Zone, Hefei City, Anhui Province, 230088
Applicant before: HEFEI HIGH DIMENSIONAL DATA TECHNOLOGY Co.,Ltd.
CB02 | Change of applicant information | ||
GR01 | Patent grant | ||
GR01 | Patent grant |