CN110427824B - Automatic security testing method and system for artificial intelligent virtual scene


Info

Publication number
CN110427824B
CN110427824B
Authority
CN
China
Prior art keywords: picture, monitoring, specific, real shooting, monitoring picture
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910580168.7A
Other languages
Chinese (zh)
Other versions
CN110427824A (en)
Inventor
鲍敏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Terminus Beijing Technology Co Ltd
Original Assignee
Terminus Beijing Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Terminus Beijing Technology Co Ltd
Priority to CN201910580168.7A
Publication of CN110427824A
Application granted
Publication of CN110427824B
Legal status: Active
Anticipated expiration


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411 Classification techniques relating to the classification model, based on the proximity to a decision surface, e.g. support vector machines
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/41 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items

Abstract

The invention discloses an automatic security testing method and system for an artificial intelligent virtual scene. Through a GAN framework, the invention can generate a sufficient number of virtual test pictures containing abnormal scenes of preset types, input these virtual test pictures into an artificial intelligent video security monitoring system, and thereby test whether the video security monitoring system correctly and automatically identifies the abnormal scenes and their types and raises an automatic alarm.

Description

Automatic security testing method and system for artificial intelligent virtual scene
Technical Field
The invention belongs to the technical field of video security monitoring, and particularly relates to an automatic security testing method and system for an artificial intelligent virtual scene.
Background
A smart city security monitoring system comprises a number of video security cameras distributed throughout the urban space and a background server connected to them through a network, which together form the infrastructure of smart city security monitoring. It can be applied to many aspects of security defense, traffic monitoring, people-flow analysis, target tracking and the like, and provides an important guarantee for maintaining public order and public safety.
In traditional video security monitoring, the monitoring picture can only be shot by a video security camera, transmitted back to the background server and forwarded by the background server to a television wall or similar display for viewing. The various abnormal scenes reflected in the monitoring picture, such as congestion of people or vehicles, retrograde motion of people or vehicles, or detention of people or vehicles, can only be identified and handled manually by staff, so the false-identification and missed-identification rates for abnormal scenes are high, working efficiency is low, and handling is not timely.
At present, replacing traditional video security monitoring with artificial intelligent video security monitoring has become a clear trend. One important advantage of artificial intelligent video security monitoring is that abnormal scenes reflected in the monitoring picture can be automatically identified and alarmed. For example, when abnormal scenes such as congestion, retrograde motion or detention of people or vehicles appear in the monitoring picture, the background server of the system automatically extracts, analyzes and identifies them without manual identification, and after judging that an abnormal scene exists, it automatically pops up an alarm prompt for the staff.
To enhance reliability, it is clearly necessary to test the automatic identification and alarm functions of the artificial intelligent video security monitoring system for abnormal scenes, so as to avoid missed or erroneous identification of abnormal scenes.
To carry out such a test, a certain number of monitoring pictures containing abnormal scenes must first be obtained and mixed with monitoring pictures without abnormal scenes; the pictures are then input into the artificial intelligent video security monitoring system, and it is observed whether the system correctly identifies the abnormal scenes and their types and raises an automatic alarm.
However, monitoring pictures of abnormal scenes are difficult to obtain. If the monitoring pictures actually shot by video security cameras are screened and collected, it is hard to avoid the problems that the number of monitoring picture samples available for testing is too small and the abnormal scene types covered are not comprehensive. If personnel and/or vehicles are arranged to stage scenes for shooting, considerable manpower and material resources are consumed, the organization is difficult, the monitoring picture samples obtained for testing are often highly repetitive, and the fidelity of the pictures is hard to guarantee.
Disclosure of Invention
In view of this, the invention provides an automatic security testing method and system for an artificial intelligent virtual scene. Through a GAN framework, the invention can generate a sufficient number of virtual test pictures containing abnormal scenes of preset types, input these virtual test pictures into an artificial intelligent video security monitoring system, and thereby test whether the video security monitoring system correctly and automatically identifies the abnormal scenes and their types and raises an automatic alarm.
The invention provides an automatic security test method for an artificial intelligent virtual scene, which comprises the following steps:
s1: determining an abnormal scene type for which automatic testing is aimed;
s2: acquiring a monitoring picture sample corresponding to the abnormal scene type from a monitoring picture sample library according to the abnormal scene type;
s3: using a GAN framework composed of a picture generator and a discriminator, wherein the picture generator generates a virtual test picture based on an initial random value; after the discriminator has been trained on the monitoring picture samples, it judges whether the virtual test picture generated by the picture generator is true or false; when the judgment result is false, the picture generator adjusts its generation parameters and regenerates the virtual test picture, until the virtual test picture generated by the picture generator is judged to be true by the discriminator and the generated virtual test picture is output, the virtual test picture having the same abnormal scene type as the monitoring picture sample;
s4: and inputting at least one generated virtual test picture into a tested artificial intelligent video security monitoring system, obtaining the identification result of the artificial intelligent video security monitoring system on whether an abnormal scene exists and the type of the abnormal scene, judging whether the intelligent video security monitoring system can identify the abnormal scene, and judging whether the identification result is matched with the type of the abnormal scene of the virtual test picture.
Preferably, the monitoring picture sample is a multi-frame continuous monitoring picture which is selected from monitoring pictures actually shot by a camera of the tested artificial intelligent video security monitoring system and has a specific abnormal scene type.
Preferably, a monitoring picture sample with a specific abnormal scene type is selected from the real shot monitoring pictures according to the following steps: extracting each specific target from multiple continuous real shooting monitoring pictures; judging the action state of each specific target according to the interframe position change of each specific target; classifying scenes of the real shooting monitoring pictures according to a preset standard according to action states of all specific targets in the multi-frame continuous real shooting monitoring pictures to obtain scene types; and when the scene type of the real shooting monitoring picture belongs to a specific abnormal scene type, selecting the real shooting monitoring picture as the monitoring picture sample.
Further preferably, each specific target is extracted from the multi-frame continuous live-shooting monitoring picture according to the following steps: extracting an image area where each target is located from each monitoring picture of a plurality of frames of continuous real shooting monitoring pictures, and extracting the image characteristics of the target through color histogram distribution; and when the image areas with consistent image characteristics exist in the multi-frame continuous real shooting monitoring pictures, judging that the targets corresponding to the image areas with consistent image characteristics in the real shooting monitoring pictures are the same specific target.
More preferably, the action state of each specific target is judged according to the following steps: for the same specific target across multiple frames of continuous real shooting monitoring pictures, the coordinate of the specific target in the i-th frame is denoted (X_i, Y_i), the inter-frame position change from the (i-1)-th frame to the i-th frame is denoted (ΔX_i, ΔY_i), the inter-frame position change from the i-th frame to the (i+1)-th frame is denoted (ΔX_{i+1}, ΔY_{i+1}), and so on, giving a series of inter-frame position change parameters:
…, (ΔX_i, ΔY_i), (ΔX_{i+1}, ΔY_{i+1}), …, (ΔX_{i+n}, ΔY_{i+n}), …;
The obtained inter-frame position change parameters of the specific target are input into at least one trained SVM classifier (support vector machine), and the action state type of the specific target is determined from the classifier output. Each SVM classifier is trained with the inter-frame position change parameters of known targets sharing one specific action state type, until its output is consistent with that action state type and converges stably, so that from the input inter-frame position change parameters of a specific target it can output whether the target has that specific action state type. Four SVM classifiers are therefore generally trained, each corresponding to one specific action state type; these four specific action state types basically cover the needs of artificial intelligent video security monitoring of people or vehicles in urban public spaces. The inter-frame position change parameters of each specific target are input, in parallel or in cascade, to the four SVM classifiers, and the action state type of the specific target is determined from their respective outputs.
Further preferably, the scenes of the real shooting monitoring pictures are classified according to the following steps to obtain the scene type: classification criteria for scene types are preset, and when the statistics of the action states of all specific targets in the multi-frame continuous real shooting monitoring pictures meet the classification criterion of a certain scene type, the real shooting monitoring pictures are considered to have that scene type; the classification criterion is the proportion of specific targets having a specific action state type. For example, when the proportion of specific targets in the real shooting monitoring picture whose action state is "staying" is greater than or equal to a specific ratio, the scene type of the picture is "detention"; when the proportion whose action state is "congestion" is greater than or equal to a specific ratio, the scene type is "congestion"; and when the proportion whose action state is "retrograde" is greater than or equal to a specific ratio, the scene type is "retrograde motion". When the scene type of a real shooting monitoring picture belongs to a specific abnormal scene type such as "detention", "congestion" or "retrograde motion", the picture is selected as a monitoring picture sample and added to the monitoring picture sample library, and each real shooting monitoring picture is associated with its abnormal scene type.
Preferably, the discriminator uses a BP neural network that judges, from an input virtual test picture, whether the picture has the same abnormal scene type as the monitoring picture sample. The BP neural network consists of several layers of interconnected neurons with specific weights; its input layer receives the picture pixel information of the virtual test picture, and its output layer outputs the judgment of whether the picture has the same abnormal scene type as the monitoring picture sample, i.e. the authenticity judgment. The BP neural network has self-adaptive, self-organizing and self-learning capabilities, and its training involves forward propagation of information and backward propagation of error: when the picture pixel information of a monitoring picture sample is propagated forward during training and the actual output judgment does not match the expected output, the error back-propagation process corrects the weights of each layer so as to minimize the error. The trained BP neural network can therefore accurately judge, from input picture pixel information, whether a picture has the same abnormal scene type as the monitoring picture sample; after training is completed, virtual test pictures generated by the picture generator are input and the authenticity judgment is performed.
Preferably, the picture generator uses a BP neural network that generates a virtual test picture from an initial random value. This BP neural network likewise consists of several layers of interconnected neurons with specific weights; its input layer receives the initial random value as picture pixel information, and its output layer outputs the picture pixel information of the generated virtual test picture. The discriminator judges whether the generated picture is true or false; when the judgment result is false, the picture generator adjusts the weight parameters of each layer and regenerates the virtual test picture, until the virtual test picture generated by the picture generator is judged to be true by the discriminator and is output, the virtual test picture having the same abnormal scene type as the monitoring picture sample.
Preferably, in S4, at least one generated virtual test picture is concatenated with real shooting monitoring pictures that contain no abnormal scene, and the resulting sequence is input into the tested artificial intelligent video security monitoring system for testing.
The invention provides an automatic security test system for an artificial intelligent virtual scene, which comprises:
the test scene type setting module is used for determining the abnormal scene type aimed at by the automatic test;
a monitoring picture sample library for storing monitoring picture samples corresponding to various abnormal scene types;
the monitoring picture sample acquisition module is used for acquiring a monitoring picture sample corresponding to the abnormal scene type from a monitoring picture sample library according to the abnormal scene type;
the virtual test picture GAN network, which is a GAN framework composed of a picture generator and a discriminator, wherein the picture generator generates a virtual test picture based on an initial random value; after the discriminator has been trained on the monitoring picture samples, it judges whether the virtual test picture generated by the picture generator is true or false; when the judgment result is false, the picture generator adjusts its generation parameters and regenerates the virtual test picture, until the virtual test picture generated by the picture generator is judged to be true by the discriminator and the generated virtual test picture is output, the virtual test picture having the same abnormal scene type as the monitoring picture sample;
and the test execution module is used for inputting at least one generated virtual test picture into the tested artificial intelligent video security monitoring system, obtaining the identification result of the artificial intelligent video security monitoring system on whether an abnormal scene exists and the type of the abnormal scene, judging whether the intelligent video security monitoring system can identify the abnormal scene, and judging whether the identification result is matched with the type of the abnormal scene of the virtual test picture.
Preferably, the automatic security protection test system further comprises a monitoring picture sample selection module, which is used for selecting a multi-frame continuous monitoring picture with a specific abnormal scene type from the monitoring pictures actually shot by the camera of the tested artificial intelligent video security protection monitoring system, and using the multi-frame continuous monitoring picture as the monitoring picture sample.
Preferably, the monitoring picture sample selection module extracts each specific target from a plurality of continuous real-shot monitoring pictures; judging the action state of each specific target according to the interframe position change of each specific target; classifying scenes of the real shooting monitoring pictures according to a preset standard according to action states of all specific targets in the multi-frame continuous real shooting monitoring pictures to obtain scene types; and when the scene type of the real shooting monitoring picture belongs to a specific abnormal scene type, selecting the real shooting monitoring picture as the monitoring picture sample.
Further preferably, the monitoring picture sample selection module extracts each specific target from multiple frames of continuous real-time monitoring pictures according to the following steps: extracting an image area where each target is located from each monitoring picture of a plurality of frames of continuous real shooting monitoring pictures, and extracting the image characteristics of the target through color histogram distribution; and when the image areas with consistent image characteristics exist in the multi-frame continuous real shooting monitoring pictures, judging that the targets corresponding to the image areas with consistent image characteristics in the real shooting monitoring pictures are the same specific target.
Further preferably, the monitoring picture sample selection module judges the action state of each specific target according to the following steps: for the same specific target across multiple frames of continuous real shooting monitoring pictures, the coordinate of the specific target in the i-th frame is denoted (X_i, Y_i), the inter-frame position change from the (i-1)-th frame to the i-th frame is denoted (ΔX_i, ΔY_i), the inter-frame position change from the i-th frame to the (i+1)-th frame is denoted (ΔX_{i+1}, ΔY_{i+1}), and so on, giving a series of inter-frame position change parameters:
…, (ΔX_i, ΔY_i), (ΔX_{i+1}, ΔY_{i+1}), …, (ΔX_{i+n}, ΔY_{i+n}), …;
The obtained inter-frame position change parameters of the specific target are input into at least one trained SVM classifier (support vector machine), and the action state type of the specific target is determined from the classifier output. Each SVM classifier is trained with the inter-frame position change parameters of known targets sharing one specific action state type, until its output is consistent with that action state type and converges stably, so that from the input inter-frame position change parameters of a specific target it can output whether the target has that specific action state type. Four SVM classifiers are therefore generally trained, each corresponding to one specific action state type; these four specific action state types basically cover the needs of artificial intelligent video security monitoring of people or vehicles in urban public spaces. The inter-frame position change parameters of each specific target are input, in parallel or in cascade, to the four SVM classifiers, and the action state type of the specific target is determined from their respective outputs.
Further preferably, the monitoring picture sample selection module classifies the scenes of the real shooting monitoring pictures according to the following steps to obtain the scene type: classification criteria for scene types are preset, and when the statistics of the action states of all specific targets in the multi-frame continuous real shooting monitoring pictures meet the classification criterion of a certain scene type, the real shooting monitoring pictures are considered to have that scene type; the classification criterion is the proportion of specific targets having a specific action state type. For example, when the proportion of specific targets in the real shooting monitoring picture whose action state is "staying" is greater than or equal to a specific ratio, the scene type of the picture is "detention"; when the proportion whose action state is "congestion" is greater than or equal to a specific ratio, the scene type is "congestion"; and when the proportion whose action state is "retrograde" is greater than or equal to a specific ratio, the scene type is "retrograde motion". When the scene type of a real shooting monitoring picture belongs to a specific abnormal scene type such as "detention", "congestion" or "retrograde motion", the picture is selected as a monitoring picture sample and added to the monitoring picture sample library, and each real shooting monitoring picture is associated with its abnormal scene type.
Preferably, the discriminator uses a BP neural network that judges, from an input virtual test picture, whether the picture has the same abnormal scene type as the monitoring picture sample. The BP neural network consists of several layers of interconnected neurons with specific weights; its input layer receives the picture pixel information of the virtual test picture, and its output layer outputs the judgment of whether the picture has the same abnormal scene type as the monitoring picture sample, i.e. the authenticity judgment. The BP neural network has self-adaptive, self-organizing and self-learning capabilities, and its training involves forward propagation of information and backward propagation of error: when the picture pixel information of a monitoring picture sample is propagated forward during training and the actual output judgment does not match the expected output, the error back-propagation process corrects the weights of each layer so as to minimize the error. The trained BP neural network can therefore accurately judge, from input picture pixel information, whether a picture has the same abnormal scene type as the monitoring picture sample; after training is completed, virtual test pictures generated by the picture generator are input and the authenticity judgment is performed.
Preferably, the picture generator uses a BP neural network that generates a virtual test picture from an initial random value. This BP neural network likewise consists of several layers of interconnected neurons with specific weights; its input layer receives the initial random value as picture pixel information, and its output layer outputs the picture pixel information of the generated virtual test picture. The discriminator judges whether the generated picture is true or false; when the judgment result is false, the picture generator adjusts the weight parameters of each layer and regenerates the virtual test picture, until the virtual test picture generated by the picture generator is judged to be true by the discriminator and is output, the virtual test picture having the same abnormal scene type as the monitoring picture sample.
Preferably, the test execution module connects at least one generated virtual test picture with the real shooting monitoring picture without abnormal scene in series, and inputs the virtual test picture and the real shooting monitoring picture into the tested artificial intelligent video security monitoring system for testing.
In summary, the invention provides an automatic security testing method and system for an artificial intelligent virtual scene with a GAN network architecture at its core. Based on a small number of monitoring picture samples, the invention can generate large quantities of virtual test pictures covering various abnormal scene types and apply them to testing of an artificial intelligent video security monitoring system. The invention effectively solves the problems in existing video security monitoring tests of too few test pictures, insufficiently rich abnormal scenes, high difficulty in organizing staged shooting, and poor picture variety and fidelity, improving the feasibility and reliability of testing while reducing its cost.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the provided drawings without creative efforts.
Fig. 1 is a schematic flow chart of an automatic security test method for an artificial intelligence virtual scene according to an embodiment of the present invention;
FIG. 2 is a schematic view of a process for selecting a sample of a monitoring screen according to an embodiment of the present invention;
FIG. 3 is a schematic diagram illustrating the extraction of specific targets and inter-frame position changes from a real-shot monitoring picture according to an embodiment of the present invention;
fig. 4 is a framework diagram of an automatic security test system for an artificial intelligence virtual scene according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
FIG. 1 is a flowchart of an automatic security testing method for an artificial intelligence virtual scene provided by the invention. The method specifically comprises the following steps:
s1: and determining the abnormal scene type aimed at by the automatic test. The abnormal scene types which can generate the virtual test picture and expand the test in the invention comprise 'congestion', 'detention' and 'retrograde motion' aiming at people or vehicles; the test executor can determine the type of the abnormal scene to be tested in advance, so that whether the video security monitoring system can correctly and automatically identify the abnormal scene of the type from the monitoring picture is tested, and automatic alarm is realized.
S2: and obtaining a monitoring picture sample corresponding to the abnormal scene type from a monitoring picture sample library according to the abnormal scene type. The monitoring picture sample library stores a certain number of monitoring picture samples, the monitoring picture samples respectively correspond to abnormal scene types such as 'congestion', 'detention', 'retrograde motion', and the like, the monitoring picture samples are multi-frame continuous monitoring pictures which are selected from monitoring pictures actually shot by a camera of an artificial intelligent video security monitoring system to be tested and have specific abnormal scene types, and a specific selection process is specifically described below with reference to fig. 2 and 3. Thus, according to the abnormal scene type determined at S1, a certain number of samples of the monitoring picture corresponding to the type can be extracted from the monitoring picture sample library, and these samples will be used for training the discriminator of GAN at S3.
S3: using a GAN framework composed of a picture generator and a discriminator, wherein the picture generator generates a virtual test picture based on an initial random value; after the discriminator has been trained on the monitoring picture samples, it judges whether the virtual test picture generated by the picture generator is true or false; when the judgment result is false, the picture generator adjusts its generation parameters and regenerates the virtual test picture, until the virtual test picture generated by the picture generator is judged to be true by the discriminator and the generated virtual test picture is output, the virtual test picture having the same abnormal scene type as the monitoring picture sample.
As shown in fig. 4, the invention uses a picture generator and a discriminator to form the GAN architecture. The discriminator uses a BP neural network that judges, from an input virtual test picture, whether the picture has the same abnormal scene type as the monitoring picture sample. The BP neural network consists of several layers of interconnected neurons with specific weights; its input layer receives the picture pixel information of the virtual test picture, and its output layer outputs the judgment of whether the picture has the same abnormal scene type as the monitoring picture sample, i.e. the authenticity judgment. The BP neural network has self-adaptive, self-organizing and self-learning capabilities, and its training involves forward propagation of information and backward propagation of error: when the picture pixel information of a monitoring picture sample is propagated forward during training and the actual output judgment does not match the expected output, the error back-propagation process corrects the weights of each layer so as to minimize the error, so that the trained BP neural network can accurately judge, from input picture pixel information, whether a picture has the same abnormal scene type as the monitoring picture sample. After training is completed, virtual test pictures generated by the picture generator are input and the authenticity judgment is performed. The picture generator uses a BP neural network that generates a virtual test picture from an initial random value. This BP neural network likewise consists of several layers of interconnected neurons with specific weights; its input layer receives the initial random value as picture pixel information, and its output layer outputs the picture pixel information of the generated virtual test picture. The discriminator judges whether the generated picture is true or false; when the judgment result is false, the picture generator adjusts the weight parameters of each layer and regenerates the virtual test picture, until the virtual test picture generated by the picture generator is judged to be true by the discriminator and is output, the virtual test picture having the same abnormal scene type as the monitoring picture sample.
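To make the adversarial training loop above concrete, the following is a minimal sketch of a GAN whose generator and discriminator are simple fully-connected (BP-style) networks, written in PyTorch. The picture resolution, layer sizes, learning rates and the use of PyTorch are illustrative assumptions and not part of the patent, which only specifies BP neural networks trained by error back-propagation within a GAN framework.

# Minimal sketch (not the patent's implementation): a GAN whose generator and
# discriminator are fully-connected (BP-style) networks, trained so the generator
# produces virtual test pictures resembling monitoring picture samples of one
# abnormal scene type. Resolution, sizes and hyperparameters are assumed.
import torch
import torch.nn as nn

IMG_PIXELS = 64 * 64  # assumed picture resolution, flattened
NOISE_DIM = 100       # assumed size of the initial random value

generator = nn.Sequential(
    nn.Linear(NOISE_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_PIXELS), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(IMG_PIXELS, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(real_samples: torch.Tensor) -> torch.Tensor:
    """One adversarial update; real_samples are flattened monitoring picture samples."""
    batch = real_samples.size(0)
    real_label = torch.ones(batch, 1)
    fake_label = torch.zeros(batch, 1)

    # Train the discriminator: real samples should score 1, generated pictures 0.
    noise = torch.randn(batch, NOISE_DIM)
    fake = generator(noise).detach()
    loss_d = bce(discriminator(real_samples), real_label) + bce(discriminator(fake), fake_label)
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # Train the generator: adjust its weights until its pictures are judged "true".
    noise = torch.randn(batch, NOISE_DIM)
    generated = generator(noise)
    loss_g = bce(discriminator(generated), real_label)
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()
    return generated  # candidate virtual test pictures

In practice the loop would be repeated until the discriminator accepts the generated pictures as true, which corresponds to the stopping condition described above; the generated pictures are then exported as virtual test pictures.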
S4: inputting at least one generated virtual test picture into the tested artificial intelligent video security monitoring system, obtaining the system's identification result as to whether an abnormal scene exists and what its type is, and judging whether the system can identify the abnormal scene and whether the identification result matches the abnormal scene type of the virtual test picture. More specifically, in S4, at least one generated virtual test picture is concatenated with real shooting monitoring pictures that contain no abnormal scene, and the resulting sequence is input into the tested artificial intelligent video security monitoring system for testing. If the tested artificial intelligent video security monitoring system functions normally, it will raise an alarm for the virtual test picture containing the abnormal scene, prompt that an abnormal scene exists, and correctly identify and output the abnormal scene type. Conversely, if the tested artificial intelligent video security monitoring system gives no alarm prompt, or the identified abnormal scene type does not match the abnormal scene type of the virtual test picture, the system fails the test and needs to be repaired and debugged.
As mentioned above, the multi-frame continuous monitoring picture with a specific abnormal scene type is selected from the monitoring pictures actually taken by the camera of the artificial intelligent video security monitoring system to be tested, and is used as the monitoring picture sample, and the process of selecting the monitoring picture sample is specifically described below with reference to fig. 2 and 3.
As shown in fig. 2, first, each specific target is extracted from a plurality of continuous live-shooting monitoring pictures; judging the action state of each specific target according to the interframe position change of each specific target; classifying scenes of the real shooting monitoring pictures according to a preset standard according to action states of all specific targets in the multi-frame continuous real shooting monitoring pictures to obtain scene types; and when the scene type of the real shooting monitoring picture belongs to a specific abnormal scene type, selecting the real shooting monitoring picture as the monitoring picture sample.
Specifically, each specific target is extracted from the multi-frame continuous real shooting monitoring pictures according to the following steps: the image area where each target is located is extracted from each of the multi-frame continuous real shooting monitoring pictures, and the image characteristics of the target are extracted through its color histogram distribution; when image areas with consistent image characteristics exist across the multi-frame continuous real shooting monitoring pictures, the targets corresponding to those image areas are judged to be the same specific target. As shown in fig. 3, a number of specific targets and the image areas where they are located can be seen in the multi-frame continuous real shooting monitoring pictures. The color histogram distribution of the pixel colors of each image area can be computed; it largely corresponds to the clothing color (for a person) or appearance color (for a vehicle) of the specific target, so when image areas with consistent color histogram distributions exist across the multi-frame continuous real shooting monitoring pictures, those image areas can be considered to correspond to the same person or vehicle, i.e. one specific target.
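The following sketch illustrates this color-histogram matching idea using OpenCV; the HSV histogram bins and the correlation threshold are assumed values, and the patent does not mandate OpenCV or any particular histogram comparison metric.

# Sketch (assumptions: OpenCV is available and the target image regions have already
# been cropped from each frame): describe each region by its color histogram and treat
# regions with sufficiently similar histograms across frames as the same specific target.
import cv2
import numpy as np

def color_histogram(region_bgr: np.ndarray) -> np.ndarray:
    """HSV color histogram of one target region, normalized for comparison."""
    hsv = cv2.cvtColor(region_bgr, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1], None, [30, 32], [0, 180, 0, 256])
    return cv2.normalize(hist, hist).flatten()

def same_target(region_a: np.ndarray, region_b: np.ndarray, threshold: float = 0.8) -> bool:
    """Regions in different frames are judged the same target if their histograms correlate."""
    score = cv2.compareHist(color_histogram(region_a), color_histogram(region_b),
                            cv2.HISTCMP_CORREL)
    return score >= threshold  # threshold is an assumed tuning parameter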
Further, the action state of each specific target is judged according to the following steps: for the same specific target across multiple frames of continuous real shooting monitoring pictures, the coordinate of the specific target in the i-th frame is denoted (X_i, Y_i), the inter-frame position change from the (i-1)-th frame to the i-th frame is denoted (ΔX_i, ΔY_i), the inter-frame position change from the i-th frame to the (i+1)-th frame is denoted (ΔX_{i+1}, ΔY_{i+1}), and so on, giving a series of inter-frame position change parameters:
…, (ΔX_i, ΔY_i), (ΔX_{i+1}, ΔY_{i+1}), …, (ΔX_{i+n}, ΔY_{i+n}), …;
The obtained inter-frame position change parameters of the specific target are input into at least one trained SVM classifier (support vector machine), and the action state type of the specific target is determined from the classifier output. Each SVM classifier is trained with the inter-frame position change parameters of known targets sharing one specific action state type, until its output is consistent with that action state type and converges stably, so that from the input inter-frame position change parameters of a specific target it can output whether the target has that specific action state type. Four SVM classifiers are therefore generally trained, each corresponding to one specific action state type; these four specific action state types basically cover the needs of artificial intelligent video security monitoring of people or vehicles in urban public spaces. The inter-frame position change parameters of each specific target are input, in parallel or in cascade, to the four SVM classifiers, and the action state type of the specific target is determined from their respective outputs.
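A minimal sketch of this per-state SVM classification is given below using scikit-learn. The four action state labels, the fixed-length windowing of the displacement sequence, and the RBF kernel are assumptions made for illustration; the patent enumerates four classifiers but does not fix these details here.

# Sketch (scikit-learn assumed): one binary SVM per action state type, trained on
# inter-frame displacement sequences of known targets; an unknown target's sequence
# is run through all four classifiers to decide its action state type.
import numpy as np
from sklearn.svm import SVC

ACTION_STATES = ["advance", "retrograde", "staying", "congestion"]  # assumed labels

def make_feature(displacements: np.ndarray, n: int = 20) -> np.ndarray:
    """Flatten a fixed-length window of (dx, dy) inter-frame changes into one feature vector."""
    window = displacements[:n]
    padded = np.zeros((n, 2))
    padded[:len(window)] = window
    return padded.flatten()

# One SVM per action state type, each trained as a yes/no classifier.
classifiers = {state: SVC(kernel="rbf", probability=True) for state in ACTION_STATES}

def train(state: str, sequences: list, labels: list) -> None:
    """labels: 1 = known target has this action state, 0 = it does not."""
    X = np.stack([make_feature(s) for s in sequences])
    classifiers[state].fit(X, labels)

def classify(displacements: np.ndarray) -> str:
    """Run all four trained classifiers and keep the most confident action state."""
    x = make_feature(displacements).reshape(1, -1)
    scores = {s: clf.predict_proba(x)[0, 1] for s, clf in classifiers.items()}
    return max(scores, key=scores.get)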
Next, the scenes of the real shooting monitoring pictures are classified according to the following steps to obtain the scene type: classification criteria for scene types are preset, and when the statistics of the action states of all specific targets in the multi-frame continuous real shooting monitoring pictures meet the classification criterion of a certain scene type, the real shooting monitoring pictures are considered to have that scene type; the classification criterion is the proportion of specific targets having a specific action state type. For example, when the proportion of specific targets in the real shooting monitoring picture whose action state is "staying" is greater than or equal to a specific ratio, the scene type of the picture is "detention"; when the proportion whose action state is "congestion" is greater than or equal to a specific ratio, the scene type is "congestion"; and when the proportion whose action state is "retrograde" is greater than or equal to a specific ratio, the scene type is "retrograde motion".
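The ratio-based scene classification can be sketched as follows; the threshold values and the mapping from action states to scene types are assumed tuning choices, not figures from the patent.

# Sketch of the ratio-based scene classification described above; thresholds are assumed.
from collections import Counter

THRESHOLDS = {"staying": 0.5, "congestion": 0.6, "retrograde": 0.3}  # assumed ratios
SCENE_OF_STATE = {"staying": "detention", "congestion": "congestion", "retrograde": "retrograde motion"}

def classify_scene(action_states: list) -> str:
    """Return the abnormal scene type of a picture group, or an empty string if it looks normal."""
    counts = Counter(action_states)
    total = len(action_states) or 1
    for state, min_ratio in THRESHOLDS.items():
        if counts[state] / total >= min_ratio:
            return SCENE_OF_STATE[state]
    return ""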
Finally, when the scene type of a real shooting monitoring picture belongs to a specific abnormal scene type such as "detention", "congestion" or "retrograde motion", the picture is selected as a monitoring picture sample and added to the monitoring picture sample library, and each real shooting monitoring picture is associated with its abnormal scene type.
As shown in fig. 4, the present invention further provides an automatic security test system for an artificial intelligence virtual scene, comprising:
a monitoring picture sample selection module used for selecting a multi-frame continuous monitoring picture with a specific abnormal scene type from the monitoring pictures actually shot by the camera of the tested artificial intelligent video security monitoring system as the monitoring picture sample
A monitoring picture sample library for storing monitoring picture samples corresponding to various abnormal scene types;
the test scene type setting module is used for determining the abnormal scene type aimed at by the automatic test;
the monitoring picture sample acquisition module is used for acquiring a monitoring picture sample corresponding to the abnormal scene type from a monitoring picture sample library according to the abnormal scene type;
the virtual test picture GAN network, which is a GAN framework composed of a picture generator and a discriminator, wherein the picture generator generates a virtual test picture based on an initial random value; after the discriminator has been trained on the monitoring picture samples, it judges whether the virtual test picture generated by the picture generator is true or false; when the judgment result is false, the picture generator adjusts its generation parameters and regenerates the virtual test picture, until the virtual test picture generated by the picture generator is judged to be true by the discriminator and the generated virtual test picture is output, the virtual test picture having the same abnormal scene type as the monitoring picture sample;
and the test execution module is used for inputting at least one generated virtual test picture into the tested artificial intelligent video security monitoring system, obtaining the identification result of the artificial intelligent video security monitoring system on whether an abnormal scene exists and the type of the abnormal scene, judging whether the intelligent video security monitoring system can identify the abnormal scene, and judging whether the identification result is matched with the type of the abnormal scene of the virtual test picture.
The monitoring picture sample selection module extracts each specific target from a plurality of continuous real-shot monitoring pictures; judging the action state of each specific target according to the interframe position change of each specific target; classifying scenes of the real shooting monitoring pictures according to a preset standard according to action states of all specific targets in the multi-frame continuous real shooting monitoring pictures to obtain scene types; and when the scene type of the real shooting monitoring picture belongs to a specific abnormal scene type, selecting the real shooting monitoring picture as the monitoring picture sample.
Specifically, the monitoring picture sample selection module extracts each specific target from multiple continuous real-shooting monitoring pictures according to the following steps: extracting an image area where each target is located from each monitoring picture of a plurality of frames of continuous real shooting monitoring pictures, and extracting the image characteristics of the target through color histogram distribution; and when the image areas with consistent image characteristics exist in the multi-frame continuous real shooting monitoring pictures, judging that the targets corresponding to the image areas with consistent image characteristics in the real shooting monitoring pictures are the same specific target.
The monitoring picture sample selection module judges the action state of each specific target according to the following steps: for the same specific target across multiple frames of continuous real shooting monitoring pictures, the coordinate of the specific target in the i-th frame is denoted (X_i, Y_i), the inter-frame position change from the (i-1)-th frame to the i-th frame is denoted (ΔX_i, ΔY_i), the inter-frame position change from the i-th frame to the (i+1)-th frame is denoted (ΔX_{i+1}, ΔY_{i+1}), and so on, giving a series of inter-frame position change parameters:
…, (ΔX_i, ΔY_i), (ΔX_{i+1}, ΔY_{i+1}), …, (ΔX_{i+n}, ΔY_{i+n}), …;
The obtained inter-frame position change parameters of the specific target are input into at least one trained SVM classifier (support vector machine), and the action state type of the specific target is determined from the classifier output. Each SVM classifier is trained with the inter-frame position change parameters of known targets sharing one specific action state type, until its output is consistent with that action state type and converges stably, so that from the input inter-frame position change parameters of a specific target it can output whether the target has that specific action state type. Four SVM classifiers are therefore generally trained, each corresponding to one specific action state type; these four specific action state types basically cover the needs of artificial intelligent video security monitoring of people or vehicles in urban public spaces. The inter-frame position change parameters of each specific target are input, in parallel or in cascade, to the four SVM classifiers, and the action state type of the specific target is determined from their respective outputs.
The monitoring picture sample selection module classifies the scenes of the real shooting monitoring pictures according to the following steps to obtain the scene type: classification criteria for scene types are preset, and when the statistics of the action states of all specific targets in the multi-frame continuous real shooting monitoring pictures meet the classification criterion of a certain scene type, the real shooting monitoring pictures are considered to have that scene type; the classification criterion is the proportion of specific targets having a specific action state type. For example, when the proportion of specific targets in the real shooting monitoring picture whose action state is "staying" is greater than or equal to a specific ratio, the scene type of the picture is "detention"; when the proportion whose action state is "congestion" is greater than or equal to a specific ratio, the scene type is "congestion"; and when the proportion whose action state is "retrograde" is greater than or equal to a specific ratio, the scene type is "retrograde motion". When the scene type of a real shooting monitoring picture belongs to a specific abnormal scene type such as "detention", "congestion" or "retrograde motion", the picture is selected as a monitoring picture sample and added to the monitoring picture sample library, and each real shooting monitoring picture is associated with its abnormal scene type.
For the GAN network, the discriminator uses a BP neural network that judges, from an input virtual test picture, whether the picture has the same abnormal scene type as the monitoring picture sample. The BP neural network consists of several layers of interconnected neurons with specific weights; its input layer receives the picture pixel information of the virtual test picture, and its output layer outputs the judgment of whether the picture has the same abnormal scene type as the monitoring picture sample, i.e. the authenticity judgment. The BP neural network has self-adaptive, self-organizing and self-learning capabilities, and its training involves forward propagation of information and backward propagation of error: when the picture pixel information of a monitoring picture sample is propagated forward during training and the actual output judgment does not match the expected output, the error back-propagation process corrects the weights of each layer so as to minimize the error, so that the trained BP neural network can accurately judge, from input picture pixel information, whether a picture has the same abnormal scene type as the monitoring picture sample. After training is completed, virtual test pictures generated by the picture generator are input and the authenticity judgment is performed. The picture generator uses a BP neural network that generates a virtual test picture from an initial random value. This BP neural network likewise consists of several layers of interconnected neurons with specific weights; its input layer receives the initial random value as picture pixel information, and its output layer outputs the picture pixel information of the generated virtual test picture. The discriminator judges whether the generated picture is true or false; when the judgment result is false, the picture generator adjusts the weight parameters of each layer and regenerates the virtual test picture, until the virtual test picture generated by the picture generator is judged to be true by the discriminator and is output, the virtual test picture having the same abnormal scene type as the monitoring picture sample.
The test execution module concatenates at least one generated virtual test picture with real shooting monitoring pictures that contain no abnormal scene, and inputs the resulting sequence into the tested artificial intelligent video security monitoring system for testing. If the tested artificial intelligent video security monitoring system functions normally, it will raise an alarm for the virtual test picture containing the abnormal scene, prompt that an abnormal scene exists, and correctly identify and output the abnormal scene type. Conversely, if the tested artificial intelligent video security monitoring system gives no alarm prompt, or the identified abnormal scene type does not match the abnormal scene type of the virtual test picture, the system fails the test and needs to be repaired and debugged.
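As a purely illustrative sketch of how such a test run could be driven, the snippet below interleaves generated virtual test pictures with normal real shooting pictures and checks the reported result. The analyze() interface of the system under test is a hypothetical placeholder, since the patent does not prescribe how the tested system is invoked.

# Illustrative sketch (assumed interface): concatenate normal real shooting pictures
# with generated virtual test pictures, feed them to the system under test, and check
# that the expected abnormal scene type is reported and alarmed.
def run_security_test(system_under_test, virtual_test_frames, normal_frames, expected_type):
    sequence = list(normal_frames) + list(virtual_test_frames)
    result = system_under_test.analyze(sequence)  # hypothetical API of the tested system
    if result is None:
        return "FAIL: no abnormal scene was reported (missed identification)"
    if result != expected_type:
        return f"FAIL: reported '{result}', expected '{expected_type}'"
    return "PASS: abnormal scene correctly identified and alarmed"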
In summary, the invention provides an automatic security testing method and system for an artificial intelligent virtual scene with a GAN network architecture at its core. Based on a small number of monitoring picture samples, the invention can generate large quantities of virtual test pictures covering various abnormal scene types and apply them to testing of an artificial intelligent video security monitoring system. The invention effectively solves the problems in existing video security monitoring tests of too few test pictures, insufficiently rich abnormal scenes, high difficulty in organizing staged shooting, and poor picture variety and fidelity, improving the feasibility and reliability of testing while reducing its cost.
The above description is only exemplary of the present application and is not intended to limit it; any modification, equivalent replacement or improvement made within the spirit and principle of the present application shall be included in its protection scope. It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third and so on does not indicate any ordering; these words may be interpreted as names.

Claims (7)

1. An automatic security test method for an artificial intelligent virtual scene, comprising the following steps:
S1: determining the abnormal scene type for which the automatic test is aimed;
S2: acquiring a monitoring picture sample corresponding to the abnormal scene type from a monitoring picture sample library according to the abnormal scene type; the monitoring picture sample is a multi-frame continuous monitoring picture which is selected from the monitoring pictures actually shot by a camera of the tested artificial intelligent video security monitoring system and which has a specific abnormal scene type;
S3: utilizing a GAN framework composed of a picture generator and a discriminator, wherein the picture generator generates a virtual test picture based on an initial random value; after being trained with the monitoring picture sample, the discriminator judges the authenticity of the virtual test picture generated by the picture generator; when the judgment result is false, the picture generator adjusts its generation parameters and regenerates the virtual test picture, until the virtual test picture generated by the picture generator is judged to be true by the discriminator, and the generated virtual test picture is output, wherein the virtual test picture has the same abnormal scene type as the monitoring picture sample;
S4: inputting at least one generated virtual test picture into the tested artificial intelligent video security monitoring system, obtaining the identification result of the artificial intelligent video security monitoring system as to whether an abnormal scene exists and its abnormal scene type, judging whether the intelligent video security monitoring system can identify the abnormal scene, and judging whether the identification result matches the abnormal scene type of the virtual test picture;
wherein the monitoring picture sample with the specific abnormal scene type is selected from the real shooting monitoring pictures according to the following steps: extracting each specific target from multiple frames of continuous real shooting monitoring pictures; judging the action state of each specific target according to the inter-frame position change of each specific target; classifying the scene of the real shooting monitoring pictures according to a preset standard, based on the action states of all specific targets in the multiple frames of continuous real shooting monitoring pictures, to obtain the scene type; and, when the scene type of the real shooting monitoring pictures belongs to the specific abnormal scene type, selecting the real shooting monitoring pictures as the monitoring picture sample.
2. The method for automatically testing the security of the artificial intelligent virtual scene according to claim 1, wherein each specific target is extracted from the multiple frames of continuous real shooting monitoring pictures according to the following steps: extracting, from each monitoring picture of the multiple frames of continuous real shooting monitoring pictures, the image area where each target is located, and extracting the image features of the target through color histogram distribution; and, when image areas with consistent image features exist across the multiple frames of continuous real shooting monitoring pictures, judging that the targets corresponding to those image areas in the real shooting monitoring pictures are the same specific target (an illustrative sketch of this histogram matching appears after the claims).
3. The method for automatically testing the security of the artificial intelligent virtual scene according to claim 2, wherein the action state of each specific target is judged according to the following steps: for the same specific target in the multiple frames of continuous real shooting monitoring pictures, the coordinate of the specific target in the i-th frame of the monitoring picture is expressed as (X_i, Y_i), the inter-frame position change from the (i-1)-th frame to the i-th frame is expressed as (ΔX_i, ΔY_i), the inter-frame position change from the i-th frame to the (i+1)-th frame is expressed as (ΔX_{i+1}, ΔY_{i+1}), and so on, thereby obtaining a series of inter-frame position change parameters:
…, (ΔX_i, ΔY_i), (ΔX_{i+1}, ΔY_{i+1}), …, (ΔX_{i+n}, ΔY_{i+n}), …;
inputting the obtained inter-frame position change parameters of the specific target into at least one trained SVM classifier, and determining the action state type of the specific target according to the output of the SVM classifier; each SVM classifier is trained with the inter-frame position change parameters of known targets having the same specific action state type, until its output is consistent with the specific action state type of the known targets and converges stably, so that the SVM classifier can output, according to the input inter-frame position change parameters of a specific target, whether the target has the specific action state type; accordingly, four SVM classifiers need to be trained, each corresponding to one specific action state type, and these four types of specific action states basically meet the requirements of artificial intelligent video security monitoring of people or vehicles in urban public spaces; and the inter-frame position change parameters of each specific target are input in parallel or in cascade to the four SVM classifiers, and the action state type of the specific target is determined according to their respective outputs (an illustrative sketch of this classification appears after the claims).
4. The method for automatically testing the security of the artificial intelligent virtual scene according to claim 3, wherein the scene of the real shooting monitoring pictures is classified according to the following steps to obtain the scene type: classification standards for scene types are preset, and when the statistical result of the action states of all specific targets in the multiple frames of continuous real shooting monitoring pictures meets the classification standard of a certain scene type, the real shooting monitoring pictures are determined to have that scene type; the classification standard is the proportion of specific targets having a specific action state type; for example, when the proportion of the action state "staying" among all the specific targets of the real shooting monitoring pictures is greater than or equal to a specific proportion value, the scene type of the real shooting monitoring pictures is "staying"; when the proportion of the action state "congestion" among all the specific targets of the real shooting monitoring pictures is greater than or equal to a specific proportion value, the scene type of the real shooting monitoring pictures is "congestion"; when the proportion of the action state "retrograde motion" among all the specific targets of the real shooting monitoring pictures is greater than or equal to a specific proportion value, the scene type of the real shooting monitoring pictures is "retrograde motion"; and when the scene type of the real shooting monitoring pictures belongs to a specific abnormal scene type such as "staying", "congestion" or "retrograde motion", the real shooting monitoring pictures are selected as the monitoring picture sample and added into the monitoring picture sample library, with each real shooting monitoring picture associated with its abnormal scene type (an illustrative sketch of this proportion rule appears after the claims).
5. An automatic security test system for an artificial intelligent virtual scene, characterized by comprising:
a test scene type setting module for determining the abnormal scene type for which the automatic test is aimed;
a monitoring picture sample library for storing monitoring picture samples corresponding to various abnormal scene types;
a monitoring picture sample acquisition module for acquiring, from the monitoring picture sample library, a monitoring picture sample corresponding to the abnormal scene type;
a virtual test picture GAN network, which is a GAN framework composed of a picture generator and a discriminator, wherein the picture generator generates a virtual test picture based on an initial random value; after being trained with the monitoring picture sample, the discriminator judges the authenticity of the virtual test picture generated by the picture generator; when the judgment result is false, the picture generator adjusts its generation parameters and regenerates the virtual test picture, until the virtual test picture generated by the picture generator is judged to be true by the discriminator, and the generated virtual test picture is output, wherein the virtual test picture has the same abnormal scene type as the monitoring picture sample; and
a test execution module for inputting at least one generated virtual test picture into the tested artificial intelligent video security monitoring system, obtaining the identification result of the artificial intelligent video security monitoring system as to whether an abnormal scene exists and its abnormal scene type, judging whether the intelligent video security monitoring system can identify the abnormal scene, and judging whether the identification result matches the abnormal scene type of the virtual test picture;
wherein the automatic security test system further comprises a monitoring picture sample selection module for selecting, from the monitoring pictures actually shot by the camera of the tested artificial intelligent video security monitoring system, multiple frames of continuous monitoring pictures having a specific abnormal scene type as the monitoring picture sample.
6. The system according to claim 5, wherein the monitoring picture sample selection module extracts each specific target from multiple frames of continuous real shooting monitoring pictures; judges the action state of each specific target according to the inter-frame position change of each specific target; classifies the scene of the real shooting monitoring pictures according to a preset standard, based on the action states of all specific targets in the multiple frames of continuous real shooting monitoring pictures, to obtain the scene type; and, when the scene type of the real shooting monitoring pictures belongs to a specific abnormal scene type, selects the real shooting monitoring pictures as the monitoring picture sample.
7. The system according to claim 6, wherein the monitoring picture sample selection module extracts each specific target from the multiple frames of continuous real shooting monitoring pictures according to the following steps: extracting, from each monitoring picture of the multiple frames of continuous real shooting monitoring pictures, the image area where each target is located, and extracting the image features of the target through color histogram distribution; and, when image areas with consistent image features exist across the multiple frames of continuous real shooting monitoring pictures, judging that the targets corresponding to those image areas in the real shooting monitoring pictures are the same specific target.
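By way of a non-limiting illustration of the target matching described in claim 2, the following sketch compares color histograms of target regions across consecutive frames. It assumes OpenCV, that per-frame target bounding boxes have already been detected by some other means, and an illustrative correlation threshold of 0.9.

# Sketch of claim 2: treat regions in consecutive frames as the same specific target
# when their color histogram distributions are consistent. Per-frame detection of
# target regions is assumed elsewhere; the 0.9 threshold is an assumption.
import cv2
import numpy as np

def color_histogram(region: np.ndarray) -> np.ndarray:
    """3-D BGR histogram of an image region, normalised for comparison."""
    hist = cv2.calcHist([region], [0, 1, 2], None, [8, 8, 8],
                        [0, 256, 0, 256, 0, 256])
    return cv2.normalize(hist, hist).flatten()

def same_target(region_a: np.ndarray, region_b: np.ndarray,
                threshold: float = 0.9) -> bool:
    """Judge whether two regions from different frames show the same specific target."""
    similarity = cv2.compareHist(color_histogram(region_a),
                                 color_histogram(region_b),
                                 cv2.HISTCMP_CORREL)
    return similarity >= threshold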
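The per-state SVM classification of claim 3 could be sketched as below. The use of scikit-learn, the fixed-length window of inter-frame displacements, the one-binary-classifier-per-state arrangement and the fallback state name are assumptions made only for this sketch.

# Sketch of claim 3: one binary SVM per action state, each trained on sequences of
# inter-frame position changes (dX_i, dY_i) of known targets.
import numpy as np
from sklearn.svm import SVC

WINDOW = 10  # number of consecutive (dX, dY) pairs fed to each classifier (assumed)

def displacement_features(positions: np.ndarray) -> np.ndarray:
    """positions: (n, 2) array of (X_i, Y_i); returns a flattened window of
    inter-frame position changes, padded or truncated to WINDOW pairs."""
    deltas = np.diff(positions, axis=0)[:WINDOW]
    padded = np.zeros((WINDOW, 2))
    padded[:len(deltas)] = deltas
    return padded.flatten()

def train_state_classifier(tracks: list, labels: list) -> SVC:
    """tracks: list of (n, 2) position arrays of known targets; labels: 1 if a track
    shows the specific action state this classifier is responsible for, else 0."""
    X = np.stack([displacement_features(t) for t in tracks])
    return SVC(kernel="rbf").fit(X, np.asarray(labels))

def action_state(track: np.ndarray, classifiers: dict) -> str:
    """Run the per-state SVMs in parallel and return the first state that fires."""
    feats = displacement_features(track).reshape(1, -1)
    for state, clf in classifiers.items():
        if clf.predict(feats)[0] == 1:
            return state
    return "normal"  # assumed fallback when no abnormal action state is detected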
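Finally, the proportion-based scene classification of claim 4 reduces to a threshold check over the targets' action states. The sketch below lists the scene types named in the claim; the 0.5 proportion values are examples rather than values specified by the invention.

# Sketch of claim 4: classify the scene of a frame sequence from the proportion of
# specific targets showing a given action state. The 0.5 thresholds are assumptions.
from collections import Counter
from typing import Dict, List, Optional

SCENE_THRESHOLDS: Dict[str, float] = {   # abnormal scene type -> required proportion
    "staying": 0.5,
    "congestion": 0.5,
    "retrograde motion": 0.5,
}

def classify_scene(target_states: List[str]) -> Optional[str]:
    """target_states: action state of every specific target in the frame sequence.
    Returns the abnormal scene type whose proportion criterion is met, else None."""
    if not target_states:
        return None
    counts = Counter(target_states)
    total = len(target_states)
    for scene_type, threshold in SCENE_THRESHOLDS.items():
        if counts.get(scene_type, 0) / total >= threshold:
            return scene_type
    return None

A frame sequence whose scene type is returned as abnormal would then be added to the monitoring picture sample library and associated with that abnormal scene type.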
CN201910580168.7A 2019-06-28 2019-06-28 Automatic security testing method and system for artificial intelligent virtual scene Active CN110427824B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910580168.7A CN110427824B (en) 2019-06-28 2019-06-28 Automatic security testing method and system for artificial intelligent virtual scene

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910580168.7A CN110427824B (en) 2019-06-28 2019-06-28 Automatic security testing method and system for artificial intelligent virtual scene

Publications (2)

Publication Number Publication Date
CN110427824A CN110427824A (en) 2019-11-08
CN110427824B true CN110427824B (en) 2020-07-21

Family

ID=68408905

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910580168.7A Active CN110427824B (en) 2019-06-28 2019-06-28 Automatic security testing method and system for artificial intelligent virtual scene

Country Status (1)

Country Link
CN (1) CN110427824B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111046832B (en) * 2019-12-24 2023-06-02 广州地铁设计研究院股份有限公司 Retrograde judgment method, device, equipment and storage medium based on image recognition
CN113052036A (en) * 2021-03-16 2021-06-29 三一智造(深圳)有限公司 Intelligent people stream management system method based on big data
CN113438469B (en) * 2021-05-31 2022-03-15 深圳市大工创新技术有限公司 Automatic testing method and system for security camera
CN114549942A (en) * 2022-04-27 2022-05-27 网思科技股份有限公司 Artificial intelligent security system test method and device, storage medium and test equipment
CN116226726B (en) * 2023-05-04 2023-07-25 济南东方结晶器有限公司 Application performance evaluation method, system, equipment and medium for crystallizer copper pipe

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103198467A (en) * 2011-07-29 2013-07-10 奥林巴斯株式会社 Image processing apparatus and image processing method
CN105608446A (en) * 2016-02-02 2016-05-25 北京大学深圳研究生院 Video stream abnormal event detection method and apparatus
CN109685097A (en) * 2018-11-08 2019-04-26 银河水滴科技(北京)有限公司 A kind of image detecting method and device based on GAN

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9887139B2 (en) * 2011-12-28 2018-02-06 Infineon Technologies Austria Ag Integrated heterojunction semiconductor device and method for producing an integrated heterojunction semiconductor device
CN104994334A (en) * 2015-06-09 2015-10-21 海南电网有限责任公司 Automatic substation monitoring method based on real-time video
CN106407984B (en) * 2015-07-31 2020-09-11 腾讯科技(深圳)有限公司 Target object identification method and device
CN108898079A (en) * 2018-06-15 2018-11-27 上海小蚁科技有限公司 A kind of monitoring method and device, storage medium, camera terminal

Also Published As

Publication number Publication date
CN110427824A (en) 2019-11-08

Similar Documents

Publication Publication Date Title
CN110427824B (en) Automatic security testing method and system for artificial intelligent virtual scene
US9251425B2 (en) Object retrieval in video data using complementary detectors
CN110135269B (en) Fire image detection method based on mixed color model and neural network
CN109218619A (en) Image acquiring method, device and system
TW202013252A (en) License plate recognition system and license plate recognition method
CN110516529A (en) It is a kind of that detection method and system are fed based on deep learning image procossing
CN111598132B (en) Portrait recognition algorithm performance evaluation method and device
CN111222478A (en) Construction site safety protection detection method and system
CN111476191B (en) Artificial intelligent image processing method based on intelligent traffic and big data cloud server
CN110222604A (en) Target identification method and device based on shared convolutional neural networks
CN111126293A (en) Flame and smoke abnormal condition detection method and system
CN107330414A (en) Act of violence monitoring method
CN111626199A (en) Abnormal behavior analysis method for large-scale multi-person carriage scene
CN112309068B (en) Forest fire early warning method based on deep learning
US20090310823A1 (en) Object tracking method using spatial-color statistical model
Szczodrak et al. Behavior analysis and dynamic crowd management in video surveillance system
CN111860457A (en) Fighting behavior recognition early warning method and recognition early warning system thereof
CN111126411B (en) Abnormal behavior identification method and device
CN115410134A (en) Video fire smoke detection method based on improved YOLOv5s
Khan et al. Comparative study of various crowd detection and classification methods for safety control system
Dupre et al. A human and group behavior simulation evaluation framework utilizing composition and video analysis
CN113052055A (en) Smoke detection method based on optical flow improvement and Yolov3
Duque et al. The OBSERVER: An intelligent and automated video surveillance system
CN115439933A (en) Garbage classification release site detection method based on multiple model recognition strategies
CN111325185B (en) Face fraud prevention method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant