CN115953740A - Security control method and system based on cloud - Google Patents
- Publication number: CN115953740A
- Application number: CN202310238920.6A
- Authority
- CN
- China
- Prior art keywords
- abnormal
- factor
- optical flow
- candidate
- frame image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02P—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
- Y02P90/00—Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
- Y02P90/02—Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]
Abstract
The invention relates to the field of intelligent decision making and discloses a cloud-based security control method, which comprises the following steps: identifying the security area, the service scene and the corresponding control factor categories of a security system to be controlled, analyzing each control factor category and its corresponding factor state, and configuring a control strategy for the case where the factor state is abnormal; acquiring a surveillance video of the security area according to the service scene, extracting frame images from the surveillance video, extracting candidate spatial frame images and candidate optical flow frame images from the frame images, and extracting spatial features of the candidate spatial frame images and optical flow features of the candidate optical flow frame images with a two-stream network; fusing the spatial features and the optical flow features to obtain fusion features of the surveillance video; calculating the factor probability of the fusion features in each control factor category; and judging the factor state of the security system from the factor probabilities and, when the factor state is abnormal, taking the configured control strategy as the final control strategy of the security system. The invention can improve security control precision.
Description
Technical Field
The invention relates to the field of intelligent decision making, in particular to a cloud-based security control method and system.
Background
Security, which can be understood as shorthand for "security precautions", refers to the preparation and protection undertaken to cope with attacks or to avoid harm, so that the protected object is in a safe state, free of danger, infringement and accident.
Intelligent security means that raw data which would otherwise be analyzed and checked by people is automatically identified and analyzed by AI algorithms, converting massive data into effective information with analysis results. Taking video analysis as an example, an intelligent security system can identify different people, objects and environmental states through automatic analysis and processing of video images, find abnormal conditions in the monitoring picture, and give alarms and feedback in real time. With the rapid development of the cloud, intelligent security technology deployed at the cloud gains powerful computing capacity for the intelligent security system.
At present, traditional intelligent security control methods mainly take hand-crafted features as training samples and either use probability-density-estimation statistics to judge whether events follow a normal or abnormal distribution, or infer abnormal features with Gaussian mixture models and Markov models. These methods depend on feature selection and are only suitable for specific scenes, so low security control precision easily occurs.
Disclosure of Invention
The invention provides a cloud-based security control method and system, and mainly aims to improve security control precision.
In order to achieve the above object, the invention provides a security control method based on a cloud, comprising:
identifying a security area, a service scene and a corresponding control factor category of a security system to be controlled, analyzing the control factor category and a corresponding factor state thereof, and configuring a control strategy when the factor state is abnormal;
acquiring a monitoring video of the security area according to the service scene, extracting a frame image of the monitoring video, extracting a candidate space frame image in the frame image, and extracting a space feature of the candidate space frame image by using a space flow network;
extracting candidate optical flow frame images in the frame images, and extracting optical flow characteristics of the candidate optical flow frame images by using a time flow network;
fusing the spatial features and the optical flow features to obtain fused features of the monitoring video;
calculating factor probabilities corresponding to the fusion features in the control factor categories;
and judging the factor state of the security system according to the factor probability, and taking the control strategy as a final control strategy of the security system when the factor state is abnormal.
Optionally, the configuring a control strategy when the factor state is abnormal includes:
when the factor state is abnormal, analyzing the abnormal reason and the abnormal type of the historical control factor corresponding to the factor state;
and constructing an abnormal tree of a historical security system by using an abnormal tree analysis method based on the abnormal reason and the abnormal type, and configuring a control strategy of each abnormal node of the abnormal tree.
Optionally, the constructing an abnormal tree of the historical security system by using an abnormal tree analysis method based on the abnormal reason and the abnormal type includes:
based on the abnormal type, inquiring abnormal influence corresponding to the abnormal type from a pre-constructed historical security system abnormal database, and calculating the abnormal probability of the abnormal event of the abnormal type;
analyzing the abnormal grade of the abnormal type by using an abnormal mode, an influence and hazard analysis method according to the abnormal influence;
constructing a top event node of the historical security system according to the abnormal grade and the abnormal probability;
according to the top event node and the abnormal reason corresponding to the top event node, constructing intermediate event nodes of the historical security system, and configuring intermediate logic gates among the intermediate event nodes;
according to the intermediate event node and the abnormal reason corresponding to the intermediate event node, constructing a bottom event node of the historical security system, and configuring a bottom logic gate between the bottom event nodes;
and generating an abnormal tree of the historical security system according to the top event node, the middle logic gate, the bottom event node and the bottom logic gate.
Optionally, the extracting a frame image of the surveillance video includes:
decoding the monitoring video by using a video decoder to obtain a decoded video;
converting the decoded video into an original frame image by using a video frame conversion algorithm;
converting the original frame image into a target frame image using the following formula:
R = Y + 1.140·V
G = Y − 0.395·U − 0.581·V
B = Y + 2.032·U
wherein R, G and B represent the red, green and blue channels of the target frame image in the RGB color space; Y represents the luminance of the original frame image in the YUV color space; and U and V represent its two chrominance components;
and carrying out format conversion on the target frame image to obtain a frame image of the monitoring video.
Optionally, the extracting, by using a spatial stream network, the spatial feature of the candidate spatial frame image includes:
performing convolution on the candidate spatial frame image by utilizing a convolution layer in a spatial stream residual error network to obtain convolution spatial features;
performing linear conversion and activation conversion on the convolution space characteristic by using a residual unit in a space flow residual network to obtain a residual space characteristic;
and determining the spatial features of the candidate spatial frame images according to the residual spatial features.
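A minimal numerical sketch of the residual unit just described (linear conversion, activation conversion, skip connection) might look as follows; the toy weights and the choice of ReLU as the activation are assumptions for illustration, not the patent's trained spatial stream network:

```python
# Sketch of a residual unit: y = ReLU(W·x + b) + x, i.e. a linear
# transformation followed by an activation, with the input added back
# through the skip connection. Weights below are hypothetical.

def relu(v):
    return [max(0.0, x) for x in v]

def linear(x, W, b):
    # y_i = sum_j W[i][j] * x[j] + b[i]
    return [sum(W[i][j] * x[j] for j in range(len(x))) + b[i]
            for i in range(len(W))]

def residual_unit(x, W, b):
    """Residual space feature: activation(linear(x)) plus skip connection."""
    return [h + xi for h, xi in zip(relu(linear(x, W, b)), x)]

# Toy 2-dimensional "convolution space feature" and scaled-identity weights.
x = [1.0, -2.0]
W = [[0.5, 0.0],
     [0.0, 0.5]]
b = [0.0, 0.0]
print(residual_unit(x, W, b))
```

In a real two-stream model the linear step would be a convolution over feature maps; the scalar version above only shows the skip-connection arithmetic.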
Optionally, the extracting, by using a time flow network, optical flow features of the candidate optical flow frame images includes:
constructing an optical flow equation of the candidate optical flow frame image by using an optical flow detection algorithm in the time flow network;
calculating dense optical flows of the candidate optical flow frame images according to the optical flow equation;
and extracting the optical flow characteristics of the candidate optical flow frame images according to the dense optical flow.
Optionally, the calculating a dense optical flow of the candidate optical flow frame image according to the optical flow equation includes:
calculating a dense optical flow of the candidate optical flow frame image using the following formula:
(u, v) = argmin over (u, v) of Σ_{i=1}^{n} ( I_x(i)·u + I_y(i)·v + I_t(i) )²
wherein (u, v) represents the dense optical flow, u the horizontal optical flow and v the vertical optical flow of the pixel points of the candidate optical flow frame image in the preset neighborhood window; I_x(i) and I_y(i) represent the gray-level gradients of the i-th pixel point in the window along the x and y directions; I_t(i) represents the gray-level gradient of the i-th pixel point with respect to time; and n represents the number of pixel points of the candidate optical flow frame image in the preset neighborhood window.
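The least-squares solve over one neighborhood window can be sketched as follows (a Lucas-Kanade-style closed-form estimate; the gradient values are toy numbers and the patent does not fix this concrete algorithm):

```python
# Sketch of the per-window least-squares flow estimate: given per-pixel
# spatial gradients Ix, Iy and temporal gradients It, solve the 2x2 normal
# equations for the flow (u, v) minimizing sum((Ix*u + Iy*v + It)^2).

def window_flow(Ix, Iy, It):
    """Solve (A^T A) [u v]^T = -A^T It for one neighborhood window."""
    a11 = sum(ix * ix for ix in Ix)
    a12 = sum(ix * iy for ix, iy in zip(Ix, Iy))
    a22 = sum(iy * iy for iy in Iy)
    b1 = -sum(ix * it for ix, it in zip(Ix, It))
    b2 = -sum(iy * it for iy, it in zip(Iy, It))
    det = a11 * a22 - a12 * a12
    if det == 0:                      # aperture problem: flow not recoverable
        raise ValueError("singular window")
    u = (a22 * b1 - a12 * b2) / det
    v = (a11 * b2 - a12 * b1) / det
    return u, v

# Toy window whose constraints reduce to u = 2 and v = 3 exactly.
Ix = [1.0, 0.0]
Iy = [0.0, 1.0]
It = [-2.0, -3.0]
print(window_flow(Ix, Iy, It))
```

Repeating this solve at every pixel's window yields the dense flow field; production systems would typically use an optimized library routine instead.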
Optionally, the fusing the spatial feature and the optical flow feature to obtain a fused feature of the surveillance video includes:
fusing the spatial features and the optical flow features using the following formula:
F = w ∗ [S ; O] + b
wherein F represents the fusion feature of the surveillance video, S the spatial feature of the surveillance video, O the optical flow feature of the surveillance video, [S ; O] the channel-wise stacking of the two features, ∗ the convolution operation, w the convolution kernel, and b the bias parameter.
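A minimal sketch of this fusion step follows; the 1×1 kernel weights, bias and feature values are hypothetical toy numbers, not the patent's trained parameters:

```python
# Sketch of convolutional fusion: the spatial feature map S and the
# optical-flow feature map O are stacked channel-wise and combined by a
# 1x1 convolution kernel plus bias, F[i] = w_s*S[i] + w_o*O[i] + b.

def fuse(S, O, w_s, w_o, b):
    """1x1 conv over two stacked channels at every spatial position."""
    return [w_s * s + w_o * o + b for s, o in zip(S, O)]

S = [1.0, 2.0, 3.0]   # spatial (appearance) feature map, flattened
O = [0.5, 0.0, -1.0]  # optical-flow (motion) feature map, flattened
F = fuse(S, O, w_s=0.6, w_o=0.4, b=0.1)
print(F)
```

A larger kernel would additionally mix neighboring positions, but the channel-mixing arithmetic is the same.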
Optionally, the calculating a factor probability corresponding to the fusion feature in the control factor category includes:
pooling the fusion characteristics to obtain pooled fusion vectors;
calculating the category score of the pooling fusion vector in the control factor category by using a full connection layer in a trained security system anomaly analysis model;
and calculating the regression factor probability of the category score in the control factor category by using a logistic regression function of a factor classifier in a trained security system anomaly analysis model according to the category score, and taking the regression factor probability as the factor probability corresponding to the fusion feature in the control factor category.
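The pooling, fully connected and logistic-regression steps can be sketched together as follows; the weights, biases and the softmax-style normalization are illustrative assumptions, not the patent's trained anomaly analysis model:

```python
# Sketch of the probability step: average-pool the fusion feature into a
# value, apply a fully connected layer to get one score per control factor
# category, then convert scores to factor probabilities.

import math

def avg_pool(feature_map):
    return sum(feature_map) / len(feature_map)

def fully_connected(x, weights, biases):
    return [w * x + b for w, b in zip(weights, biases)]

def softmax(scores):
    exps = [math.exp(s - max(scores)) for s in scores]  # stabilized
    total = sum(exps)
    return [e / total for e in exps]

fusion_feature = [0.2, 0.8, 0.5, 0.5]   # toy fusion feature
pooled = avg_pool(fusion_feature)
scores = fully_connected(pooled, weights=[2.0, -1.0, 0.5],
                         biases=[0.0, 0.1, -0.2])
probs = softmax(scores)                  # one probability per category
print(pooled, [round(p, 3) for p in probs])
```

The category with the highest factor probability would then be checked against a threshold to decide whether the factor state is abnormal.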
In order to solve the above problems, the present invention further provides a cloud-based security control system, which is configured to perform the following:
identifying a security area, a service scene and a corresponding control factor type of a security system to be controlled, analyzing the control factor type and a corresponding factor state, and configuring a control strategy when the factor state is abnormal;
acquiring a monitoring video of the security area according to the service scene, extracting a frame image of the monitoring video, extracting a candidate space frame image in the frame image, and extracting a space feature of the candidate space frame image by using a space flow network;
extracting candidate optical flow frame images in the frame images, and extracting optical flow characteristics of the candidate optical flow frame images by using a time flow network;
fusing the spatial features and the optical flow features to obtain fused features of the monitoring video;
calculating factor probabilities corresponding to the fusion features in the control factor categories;
and judging the factor state of the security system according to the factor probability, and taking the control strategy as a final control strategy of the security system when the factor state is abnormal.
It can be seen that the embodiment of the invention identifies the security area, the service scene and the corresponding control factor categories of the security system to be controlled in order to determine the system's area of action, its actual application scene and its control factor categories, which provides the operating premise for the subsequent control of the security system. By analyzing the control factor categories and their corresponding factor states, a corresponding control strategy can be applied according to each factor's subsequent state, realizing intelligent control of the security system, and configuring a control strategy for each abnormal node of the abnormal tree provides a coping strategy for later abnormalities of the security system. The surveillance video of the security area is acquired according to the service scene as the initial object of subsequent anomaly detection, and extracting the frame images of the surveillance video supports the subsequent extraction of candidate spatial frame images and candidate optical flow frame images.
Secondly, by extracting candidate spatial frame images from the frame images, the embodiment of the invention can remove a large amount of redundant information between consecutive frames, improving the computational efficiency of the security system; extracting the spatial features of the candidate spatial frame images with the spatial stream network yields the static spatial features of the surveillance video as a premise for the later fusion features; extracting candidate optical flow frame images from the frame images is the premise for extracting temporal dynamic features, and extracting their optical flow features with the temporal stream network yields the temporal motion features of the surveillance video; fusing the spatial features and the optical flow features into the fusion features of the surveillance video fully mines the spatio-temporal relations in the video, expresses its spatio-temporal characteristics more comprehensively, and improves the detection accuracy of abnormal events in the surveillance video.
Further, in the embodiment of the present invention, calculating the factor probability of the fusion features in each control factor category yields the occurrence probability of an abnormal event in each category, so that the category in which an abnormal event occurs can subsequently be determined; judging the factor state of the security system from these factor probabilities identifies the control factor category in which the abnormal event occurs so that a corresponding control strategy can be adopted, and taking the control strategy configured for the abnormal factor state as the final control strategy of the security system realizes accurate control of abnormal events. Therefore, the cloud-based security control method and system provided by the embodiment of the invention can improve security control precision.
Drawings
Fig. 1 is a schematic flow chart of a cloud-based security control method according to an embodiment of the present invention;
fig. 2 is a schematic module diagram of a cloud-based security control system according to an embodiment of the present invention;
the implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The embodiment of the invention provides a security control method based on a cloud. The execution subject of the security control method based on the cloud includes, but is not limited to, at least one of electronic devices such as a server and a terminal, which can be configured to execute the method provided by the embodiment of the present invention. In other words, the cloud-based security control method may be executed by software or hardware installed in the terminal device or the server device, and the software may be a blockchain platform. The server includes but is not limited to: a single server, a server cluster, a cloud server or a cloud server cluster, and the like. The server may be an independent server, or may be a cloud server that provides basic cloud computing services such as a cloud service, a cloud database, cloud computing, a cloud function, cloud storage, a Network service, cloud communication, a middleware service, a domain name service, a security service, a Content Delivery Network (CDN), a big data and artificial intelligence platform, and the like.
Fig. 1 is a schematic flow chart of a cloud-based security control method according to an embodiment of the present invention. In an embodiment of the present invention, the cloud-based security control method includes:
s1, identifying a security area, a service scene and a corresponding control factor category of a security system to be controlled, analyzing the control factor category and a corresponding factor state thereof, and configuring a control strategy when the factor state is abnormal.
According to the embodiment of the invention, identifying the security area, the service scene and the corresponding control factor categories of the security system to be controlled determines the system's area of action, its actual application scene and its control factor categories, providing the operating premise for subsequent control of the security system. The security system is an electronic system or network for maintaining public safety and preventing loss and crime, such as an intrusion alarm system, a video surveillance system, an entrance and exit control system, a BSV liquid crystal video-wall system, an access control and fire-fighting system, or an explosion-proof security inspection system. Security protection refers to comprehensively protecting personnel, equipment, buildings or areas within buildings or building complexes (including their surroundings) or in specific places and areas by means of human protection, technical protection and physical protection; in general it refers mainly to technical protection, realized through security technology products and protective facilities. The service scene describes the application environment in which associated products or services are provided to users as needed, such as anti-theft, smart home, illegal intrusion, crowd monitoring in large public places, or explosion-proof inspection at stations and airports. The control factor categories are the element categories that influence the control of the security system, such as pedestrian flow, carbon monoxide or smoke concentration, vehicle driving direction, system temperature, and forbidden areas.
Further, in an optional embodiment of the present invention, the identifying of the security area, the service scene, and the corresponding control factor category of the security system to be controlled may be implemented by analyzing service requirements and user targets of the security system.
Further, by analyzing the control factor categories and their corresponding factor states, where the factor states include abnormal and normal, the embodiment of the invention can apply the corresponding control strategy according to the subsequent state of each control factor, so as to realize intelligent control of the security system.
Further, in an optional embodiment of the present invention, the analyzing the control factor categories and their corresponding factor states, where the factor states include abnormal and normal, includes: analyzing the causal logic of the control factor category; extracting the associated features of the control factor category according to the causal logic; and analyzing the factor state of the control factor category, which is either abnormal or normal, according to the associated features.
Further, the embodiment of the invention configures the control strategy to be used when the factor state is abnormal, so that the security system can subsequently adopt the corresponding control strategy.
Further, in an optional embodiment of the present invention, the configuring a control strategy when the factor state is abnormal includes: when the factor state is abnormal, analyzing the abnormal reason and the abnormal type of the historical control factor corresponding to the factor state; and constructing an abnormal tree of a historical security system by using an abnormal tree analysis method based on the abnormal reason and the abnormal type, and configuring a control strategy for each abnormal node of the abnormal tree.
The abnormal tree analysis method analyzes the hardware, software, environmental and human factors that may cause a product abnormality and draws an abnormal tree, thereby determining the possible combinations of causes of the product abnormality and their occurrence probabilities. An abnormal tree is a special inverted tree-shaped logical causal diagram that uses event symbols, logic gate symbols and transfer symbols to describe the causal relationships between events in the system, indicating which unit abnormalities, external events or combinations thereof will cause the system to produce a given abnormality.
Further, in an optional embodiment of the present invention, when the factor state is abnormal, analyzing the abnormal reason and the abnormal type of the historical control factor corresponding to the factor state includes: querying the abnormal influence of the historical security system from a pre-constructed historical security system abnormality database, and analyzing all possible abnormal reasons causing that abnormal influence, together with their abnormal types. The abnormal influence refers to the effect of an abnormal event on the system, a subsystem, a unit operation, a function or a state, such as road congestion, property loss, or disorder in public places.
Further, in an optional embodiment of the present invention, the constructing an abnormal tree of a historical security system by using an abnormal tree analysis method based on the abnormal reason and the abnormal type includes: based on the abnormal type, inquiring abnormal influence corresponding to the abnormal type from a pre-constructed historical security system abnormal database, and calculating the abnormal probability of the abnormal event of the abnormal type; analyzing the abnormal grade of the abnormal type by using an abnormal mode, influence and hazard analysis method according to the abnormal influence; constructing a top event node of the historical security system according to the abnormal grade and the abnormal probability; according to the top event node and the abnormal reason corresponding to the top event node, constructing an intermediate event node of the historical security system, and configuring an intermediate logic gate between the intermediate event nodes; constructing bottom event nodes of the historical security system according to the intermediate event nodes and the abnormal reasons corresponding to the intermediate event nodes, and configuring bottom logic gates among the bottom event nodes; and generating an abnormal tree of the historical security system according to the top event node, the middle logic gate, the bottom event node and the bottom logic gate.
The historical security system abnormality database is an organized, structured collection of information or data on historical security system abnormalities (generally stored electronically in a computer system), comprising abnormal types, abnormal reasons, abnormal influences, abnormal grades, occurrence probabilities of the abnormal types, historical monitoring data and the like. The abnormal influence refers to the effect of a given abnormal type on a system, subsystem, unit operation, function or state. The abnormal mode, influence and hazard analysis method examines all possible abnormalities of a product, determines from the analysis of abnormal modes the influence of each abnormal mode on the product's operation, finds single-point abnormalities, and determines the hazard of each abnormal mode from its severity and occurrence probability; a single-point abnormality is a local abnormality that causes a product abnormality and has no redundant or alternative operating procedure as a remedy. The abnormal grade rates abnormalities by the severity of their impact on property or the system, and comprises minor, marginal, critical and catastrophic grades.
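The abnormal-tree construction described above can be sketched as a small data structure with AND/OR logic gates, evaluated bottom-up from bottom events to the top event. Everything below — node names, probabilities and the independence assumption — is a hypothetical illustration, not the patent's concrete implementation:

```python
# Sketch of an abnormal tree: bottom events carry known probabilities;
# intermediate and top event nodes combine their children through an
# AND or OR logic gate, assuming independent child events.

class AbnormalNode:
    def __init__(self, name, probability=None, gate=None, children=()):
        self.name = name                  # event description
        self.probability = probability    # known probability (bottom events)
        self.gate = gate                  # "AND" or "OR" (non-leaf nodes)
        self.children = list(children)

    def occurrence_probability(self):
        if not self.children:
            return self.probability
        probs = [c.occurrence_probability() for c in self.children]
        if self.gate == "AND":            # all child abnormalities occur
            p = 1.0
            for q in probs:
                p *= q
            return p
        p_none = 1.0                      # OR: at least one child occurs
        for q in probs:
            p_none *= 1.0 - q
        return 1.0 - p_none

# Hypothetical tree: the alarm fails if the sensor fails OR
# (mains power fails AND the backup battery fails).
battery = AbnormalNode("backup battery failure", probability=0.2)
mains = AbnormalNode("mains power failure", probability=0.1)
sensor = AbnormalNode("sensor failure", probability=0.05)
power = AbnormalNode("power loss", gate="AND", children=[mains, battery])
top = AbnormalNode("alarm system failure", gate="OR", children=[sensor, power])

print(round(top.occurrence_probability(), 4))
```

A control strategy could then be attached to each node, so that when a bottom event is detected the strategies along the path to the top event are applied.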
Further, the embodiment of the invention can provide a precondition for coping strategies when the subsequent security system is abnormal by configuring the control strategy of each abnormal node of the abnormal tree.
Further, in an optional embodiment of the present invention, the control strategy of each abnormal node of the abnormal tree may be configured according to the abnormal type and the abnormal reason of that abnormal node.
S2, acquiring the monitoring video of the security area according to the service scene, extracting frame images of the monitoring video, extracting candidate spatial frame images from the frame images, and extracting the spatial features of the candidate spatial frame images by using a spatial stream network.
According to the embodiment of the invention, the monitoring video of the security area is obtained according to the service scene and can be used as an initial operation object for the subsequent abnormal detection of the security system.
Further, in an optional embodiment of the present invention, the monitoring video of the security area may be obtained according to the service scene through a data script, and the data script may be written in the JavaScript scripting language.
Further, the embodiment of the invention provides guarantee for the subsequent extraction of candidate spatial frame images and candidate optical flow frame images by extracting the frame images of the monitoring video.
Further, in an optional embodiment of the present invention, the extracting a frame image of the surveillance video includes: decoding the monitoring video by using a video decoder to obtain a decoded video; converting the decoded video into an original frame image by using a video frame conversion algorithm;
converting the original frame image into a target frame image using the following formula:
R = Y + 1.140·V
G = Y − 0.394·U − 0.581·V
B = Y + 2.032·U
wherein the target frame image F is composed of the channels (R, G, B); R represents the red channel in the RGB color space; G represents the green channel in the RGB color space; B represents the blue channel in the RGB color space; Y represents the brightness in the YUV color space; U represents the chrominance in the YUV color space; and V represents the concentration in the YUV color space;
And carrying out format conversion on the target frame image to obtain a frame image of the monitoring video.
The video decoder refers to a program or device capable of decompressing compressed digital video, such as decoders for RM/RMVB (RealMedia), MOV (QuickTime), 3GP/MP4, DVD/VOB, DivX, Xvid, and WMV. The video frame conversion algorithm refers to an algorithm capable of converting a video into a sequence of frames, such as the FFmpeg algorithm. The original frame image is an image in YUV format obtained by frame extraction from the original surveillance video. The target frame image is the image in RGB format after format conversion.
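As an illustrative sketch (not taken from the patent), the YUV-to-RGB step can be written as a small NumPy routine; the function name and the conversion coefficients below are assumptions — the coefficients are the common BT.601-style full-range values, since the patent's formula image is not reproduced here:

```python
import numpy as np

def yuv_to_rgb(yuv):
    """Convert a YUV frame (H x W x 3, float) to RGB.

    Assumes BT.601-style full-range coefficients with U and V
    centered at 128, which is an illustrative assumption.
    """
    y = yuv[..., 0]
    u = yuv[..., 1] - 128.0
    v = yuv[..., 2] - 128.0
    r = y + 1.140 * v
    g = y - 0.394 * u - 0.581 * v
    b = y + 2.032 * u
    rgb = np.stack([r, g, b], axis=-1)
    return np.clip(rgb, 0.0, 255.0)  # keep channels in the valid 8-bit range
```

A neutral gray pixel (Y = 128, U = V = 128) maps to gray in RGB, which is a quick sanity check on the centering of the chroma channels.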
Furthermore, according to the embodiment of the invention, a large amount of redundant information between continuous frame images can be removed by extracting the candidate spatial frame images in the frame images, so that the computing efficiency of the security system for processing data is improved.
Further, in an optional embodiment of the present invention, the extracting a candidate spatial frame image from the frame images includes: segmenting the frame image according to a preset frame threshold value to obtain a segmented frame sequence; and carrying out random frame sampling on the segmented frame sequence by utilizing a preset sampling threshold value to obtain the candidate space frame image. The frame threshold refers to the number of frames extracted from consecutive frame images, and represents that every several frame images are used as a frame image sequence, and may be set to 15 frames, or may be set according to an actual service scene and an occurrence probability of an abnormal event. The sampling threshold is the number of frames for sampling the frame image sequence, and can be set to 1 or 2 frames, or can be set according to the actual service scene and the occurrence probability of the abnormal event.
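The segmentation and random-sampling procedure above can be sketched as follows; the function name is hypothetical, and the defaults of 15 frames per segment and 1 sampled frame per segment follow the thresholds suggested in the text:

```python
import random

def sample_candidate_frames(frames, frame_threshold=15, sample_threshold=1, seed=None):
    """Split `frames` into segments of `frame_threshold` consecutive
    frames, then randomly sample `sample_threshold` frames from each
    segment to form the candidate spatial frame images."""
    rng = random.Random(seed)
    candidates = []
    for start in range(0, len(frames), frame_threshold):
        segment = frames[start:start + frame_threshold]
        k = min(sample_threshold, len(segment))  # last segment may be short
        candidates.extend(rng.sample(segment, k))
    return candidates
```

With 30 frames and the default thresholds, this yields one candidate frame from each of the two 15-frame segments.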
Further, in the embodiment of the present invention, by using the spatial stream network algorithm, the static spatial features of the surveillance video can be obtained by extracting the spatial features of the candidate spatial frame images, so as to provide the premise for subsequent feature fusion.
Further, in an optional embodiment of the present invention, the extracting spatial features of the candidate spatial frame images by using a spatial stream network includes: performing convolution on the candidate space frame image by utilizing a convolution layer in a space flow residual error network to obtain convolution space characteristics; performing linear conversion and activation conversion on the convolution space characteristics by using a residual error unit in a space flow residual error network to obtain residual error space characteristics; and determining the spatial features of the candidate spatial frame images according to the residual spatial features.
Further, in an optional embodiment of the present invention, the performing linear transformation and activation transformation on the convolution space feature by using a residual unit in a space flow residual network to obtain a residual space feature may be calculated by the following formula, including:
F = σ(max(k_l·G + b_l, k_(l+1)·G + b_(l+1)))
wherein F represents the residual spatial feature of the candidate spatial frame image; k_l and k_(l+1) respectively represent the slope coefficients of the l-th layer and the (l+1)-th layer of the residual unit in the spatial stream residual network; b_l and b_(l+1) respectively represent the intercept coefficients of the l-th layer and the (l+1)-th layer in the spatial stream residual network; G represents the convolution spatial feature of the candidate spatial frame image; σ represents the activation function; and max represents the maximum function sign.
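A minimal numeric sketch of a residual unit of this kind is given below. Since the patent's exact formula is not reproduced, the sketch assumes the common form — two linear transforms (slope k, intercept b) with a ReLU activation and an identity shortcut — and is illustrative only:

```python
import numpy as np

def residual_unit(x, k1, b1, k2, b2):
    """Illustrative residual unit: two linear transforms (slope k,
    intercept b) with ReLU activations and an identity shortcut."""
    h = np.maximum(k1 * x + b1, 0.0)  # first linear transform + activation
    h = k2 * h + b2                   # second linear transform
    return np.maximum(h + x, 0.0)     # residual (shortcut) add, final activation
```

With identity slopes and zero intercepts, a positive input x returns 2x (the transformed path plus the shortcut), while a negative input is suppressed by the final activation.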
And S3, extracting candidate optical flow frame images in the frame images, and extracting optical flow characteristics of the candidate optical flow frame images by using a time flow network.
The embodiment of the invention can be used as the premise for subsequently extracting the time sequence dynamic characteristics by extracting the candidate optical flow frame images in the frame images.
Further, in an optional embodiment of the present invention, the implementation principle of extracting the candidate optical flow frame images from the frame images is the same as that of extracting the candidate spatial frame images from the frame images in S2; for the specific implementation process, reference may be made to S2, which is not described herein again.
Further, in the embodiment of the present invention, by using the time flow network, the time-series motion feature of the monitoring video may be obtained by extracting the optical flow feature of the candidate optical flow frame image, so as to be a precondition for subsequent fusion features.
Further, in an optional embodiment of the present invention, the extracting optical flow features of the candidate optical flow frame images by using a time flow network includes: constructing an optical flow equation of the candidate optical flow frame image by using an optical flow detection algorithm in the time flow network; calculating dense optical flows of the candidate optical flow frame images according to the optical flow equation; and extracting the optical flow characteristics of the candidate optical flow frame images according to the dense optical flow.
The optical flow detection algorithm aims at estimating the moving speed and direction of an object according to the gray value intensity change of pixel points in an image, such as a Lucas-Kanade optical flow algorithm, a Lucas-Kanade optical flow algorithm based on pyramid layering, an HS optical flow algorithm, a PyrLk pyramid optical flow algorithm, a reverse optical flow algorithm and the like.
Further, in an optional embodiment of the present invention, the optical flow equation of the candidate optical flow frame image is constructed for the motion feature points according to the three assumptions of the optical flow algorithm — constant brightness, temporal continuity (or "small motion"), and spatial consistency — using the following formula:
Ix_i·u + Iy_i·v = −It_i, i = 1, 2, …, n
wherein Ix_i represents the horizontal gray-scale gradient of the i-th pixel point within a preset neighborhood window of the candidate optical flow frame image; Iy_i represents the vertical gray-scale gradient of the i-th pixel point within the preset neighborhood window; It_i represents the gradient of the gray scale of the i-th pixel point with respect to time; u represents the horizontal optical flow of the pixel points in the preset neighborhood window; v represents the vertical optical flow of the pixel points in the preset neighborhood window; and n represents the number of pixel points in the preset neighborhood window.
Further, in an optional embodiment of the present invention, the calculating a dense optical flow of the candidate optical flow frame image according to the optical flow equation includes:
calculating a dense optical flow of the candidate optical flow frame image using the following formula:
d = (u, v)^T = (A^T A)^(−1) A^T (−b)
wherein d represents the dense optical flow; u represents the horizontal optical flow of the pixel points of the candidate optical flow frame image in the preset neighborhood window; v represents the vertical optical flow of the pixel points of the candidate optical flow frame image in the preset neighborhood window; A is the n×2 matrix whose i-th row (Ix_i, Iy_i) contains the optical flow equation gradients of the gray scale of the i-th pixel point along the x and y directions; b = (It_1, …, It_n)^T contains the optical flow equation gradients of the pixel gray scale with respect to time; and n represents the number of pixel points of the candidate optical flow frame image in the preset neighborhood window.
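The least-squares solution of the optical flow equations over one neighborhood window can be sketched with NumPy as follows; this is the standard Lucas-Kanade formulation, offered as an illustration rather than the patent's exact computation:

```python
import numpy as np

def lucas_kanade_flow(Ix, Iy, It):
    """Solve the over-determined per-window optical flow equations
    Ix*u + Iy*v = -It by least squares, returning the flow (u, v)."""
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)  # n x 2 spatial gradient matrix
    b = -It.ravel()                                  # negated temporal gradients
    flow, *_ = np.linalg.lstsq(A, b, rcond=None)     # minimizes ||A @ flow - b||
    return flow
```

As a check, temporal gradients synthesized from a known flow (u, v) = (1, 2) are recovered exactly when the gradient matrix has full rank.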
Further, in an optional embodiment of the present invention, the optical flow features of the candidate optical flow frame images may be computed from the dense optical flow using the following formula:
T_j = √(u_j² + v_j²), j = 1, 2, …, m
wherein T_j represents the optical flow feature of the j-th pixel point of the candidate optical flow frame image; u_j represents the horizontal optical flow of the j-th pixel point; v_j represents the vertical optical flow of the j-th pixel point; and m represents the number of pixel points of the candidate optical flow frame image.
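Assuming the optical flow feature is the per-pixel flow magnitude √(u² + v²) — an interpretation of the legend above, not a confirmed reading of the patent formula — a one-line NumPy sketch is:

```python
import numpy as np

def optical_flow_features(u, v):
    """Per-pixel optical flow feature as the magnitude of the flow
    vector (u, v); works elementwise on arrays of any shape."""
    return np.sqrt(u**2 + v**2)
```

For example, a flow of (3, 4) pixels per frame yields a feature value of 5.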
And S4, fusing the spatial features and the optical flow features to obtain fusion features of the monitoring video.
According to the embodiment of the invention, the spatial features and the optical flow features are fused to obtain the fusion features of the surveillance video, so that the spatial-temporal association relation in the surveillance video can be fully mined, the spatial-temporal features of the surveillance video can be more comprehensively expressed, and the accuracy rate of detecting abnormal events in the surveillance video can be improved.
Further, in an embodiment of the present invention, the fusing the spatial feature and the optical flow feature to obtain a fused feature of the surveillance video may be implemented by using the following formula:
Z = W ∗ [S; O] + b
wherein Z represents the fused feature of the surveillance video; S represents the spatial feature of the surveillance video; O represents the optical flow feature of the surveillance video; [S; O] denotes the channel-wise concatenation of S and O; W represents the convolution kernel, with ∗ denoting the convolution operation; and b represents the bias parameter.
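A minimal sketch of such a fusion is given below. It assumes channel-wise concatenation followed by a 1x1 convolution, which reduces to a per-pixel linear map; the shapes and the 1x1 kernel choice are illustrative assumptions, not the patent's confirmed architecture:

```python
import numpy as np

def fuse_features(spatial, optical, w, b):
    """Fuse spatial and optical flow feature maps (H x W x C each):
    concatenate along the channel axis, then apply a 1x1 convolution
    with kernel w (2C x C_out) and bias b."""
    stacked = np.concatenate([spatial, optical], axis=-1)  # H x W x 2C
    return stacked @ w + b                                 # H x W x C_out
```

With all-ones inputs, a single all-ones 2x1 kernel, and zero bias, every output position sums the two input channels.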
And S5, calculating factor probability corresponding to the fusion features in the control factor category.
According to the embodiment of the invention, the occurrence probability of the abnormal event in each control factor category can be obtained by calculating the factor probability corresponding to the control factor category of the fusion feature, so as to determine the control factor category of the abnormal event in the following.
Further, in an optional embodiment of the present invention, the calculating a factor probability corresponding to the fusion feature in the control factor category includes: pooling the fusion characteristics to obtain pooled fusion vectors; calculating the category score of the pooling fusion vector in the control factor category by using a full-connection layer in a trained security system anomaly analysis model; and calculating the regression factor probability of the category score in the control factor category by using a logistic regression function of a factor classifier in a trained security system anomaly analysis model according to the category score, and taking the regression factor probability as the factor probability corresponding to the fusion feature in the control factor category.
Pooling is an important concept in convolutional neural networks; it is essentially a form of downsampling used to reduce computation, and includes maximum pooling, average pooling, pyramid pooling, bilinear pooling, and the like. The fully connected layer is a network structure in which each node is connected to all nodes in the previous layer and is used for integrating the extracted features. The logistic regression function is a machine learning algorithm that maps the real-number domain of the linear model output to the effective real-number interval [0, 1] representing a probability distribution, and is used for model classification.
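For illustration, non-overlapping 2x2 maximum pooling — one of the pooling variants named above — can be sketched in NumPy as follows (the function name and window size are illustrative choices):

```python
import numpy as np

def max_pool(x, size=2):
    """Downsample a 2-D feature map by taking the maximum over
    non-overlapping size x size windows (edges are cropped)."""
    h = x.shape[0] // size * size
    w = x.shape[1] // size * size
    blocks = x[:h, :w].reshape(h // size, size, w // size, size)
    return blocks.max(axis=(1, 3))  # max within each window
```

Applied to a 4x4 map of the values 0..15, it keeps the largest value from each 2x2 block.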
Further, in an optional embodiment of the present invention, the calculating the category score of the pooling fusion vector in the control factor category by using a fully connected layer in the trained security system anomaly analysis model may be implemented by the following formula:
s_i = w_i^T · p + b_i
wherein s_i represents the category score of the i-th control factor category in the fully connected layer; w_i represents the weight vector of the i-th control factor category in the fully connected layer; b_i represents the bias vector of the i-th control factor category in the fully connected layer; p represents the pooled fusion vector; and i represents the sequence number of the control factor category.
Further, in an optional embodiment of the present invention, the calculating, according to the category score, a regression factor probability of the category score in the control factor category by using a logistic regression function of a factor classifier in a trained security system anomaly analysis model may be implemented by the following formula:
P_i = softmax(s_i) = e^(s_i) / Σ_(j=1)^(C) e^(s_j)
wherein P_i represents the regression factor probability of the i-th control factor category; softmax represents the logistic regression function; s_i represents the category score of the i-th control factor category in the fully connected layer; C represents the number of control factor categories; j represents the sequence number of the control factor category; and Σ represents the summation sign.
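Combining the two steps described above — fully connected category scores followed by softmax normalization — a hedged NumPy sketch might read as follows (the function name and shapes are assumptions for illustration):

```python
import numpy as np

def factor_probabilities(pooled, weights, biases):
    """Compute per-category factor probabilities: a fully connected
    layer (scores s_i = w_i . p + b_i) followed by softmax."""
    scores = weights @ pooled + biases      # one score per control factor category
    exp = np.exp(scores - scores.max())     # shift for numerical stability
    return exp / exp.sum()                  # probabilities summing to 1
```

With zero weights and equal biases, all categories receive the same probability 1/C, which is a convenient sanity check.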
And S6, judging the factor state of the security system according to the factor probability, and taking the control strategy as a final control strategy of the security system when the factor state is abnormal.
According to the embodiment of the invention, the control factor category of the abnormal event of the security system can be determined by judging the factor state of the security system according to the factor probability, so that a corresponding control strategy is adopted subsequently.
Further, in an optional embodiment of the present invention, the determining the factor state of the security system according to the factor probability may be according to a preset factor threshold, where the factor state is determined to be abnormal when the factor probability is not less than the factor threshold, and the factor state is determined to be normal when the factor probability is less than the factor threshold.
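The threshold comparison described above can be sketched as a small helper; the function name and the example threshold of 0.5 are illustrative assumptions (the text leaves the factor threshold to be preset per deployment):

```python
def judge_factor_state(probabilities, factor_threshold=0.5):
    """Judge each control factor category: abnormal when its factor
    probability is not less than the preset factor threshold."""
    return ["abnormal" if p >= factor_threshold else "normal"
            for p in probabilities]
```

For example, with probabilities [0.7, 0.2] and threshold 0.5, the first category is judged abnormal and the second normal.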
It should be understood that, when the factor state is abnormal, an abnormal event has occurred in the control factor category corresponding to that factor state. Therefore, in the embodiment of the present invention, the control strategy configured for the abnormal factor state is used as the final control strategy of the security system, so as to implement accurate control of abnormal events in the security system.
In summary, the embodiment of the invention determines the security action area of the security system by identifying the security area, the service scene, and the corresponding control factor categories of the security system to be controlled, which provides the operation premise — the actual application scene and the control factor categories — for subsequent control of the security system. By analyzing the control factor categories and their corresponding factor states, a corresponding control strategy can subsequently be implemented according to the factor state, so as to realize intelligent control of the security system. Configuring a control strategy for each abnormal node of the abnormal tree prepares coping strategies for subsequent anomalies of the security system; acquiring the surveillance video of the security area according to the service scene provides the initial operation object for subsequent anomaly detection; and extracting the frame images of the surveillance video provides the guarantee for subsequently extracting the candidate spatial frame images and candidate optical flow frame images. Secondly, extracting the candidate spatial frame images removes a large amount of redundant information between consecutive frame images, improving the computing efficiency of the security system in processing data; extracting the spatial features of the candidate spatial frame images with the spatial stream network algorithm yields the static spatial features of the surveillance video, and extracting the optical flow features of the candidate optical flow frame images with the time flow network yields its time-series motion features, both of which are preconditions for the subsequent feature fusion. Fusing the spatial features and the optical flow features into the fused features of the surveillance video fully mines the spatial-temporal association in the surveillance video, expresses the spatial-temporal characteristics of the surveillance video more comprehensively, and improves the accuracy of detecting abnormal events in the surveillance video. Further, calculating the factor probability of the fused features in each control factor category yields the occurrence probability of an abnormal event in each control factor category; judging the factor state of the security system according to the factor probability then determines the control factor category in which an abnormal event occurs, so that a corresponding control strategy can subsequently be adopted, and the control strategy configured for the abnormal factor state is used as the final control strategy of the security system, thereby implementing accurate control of abnormal events in the security system. Therefore, the cloud-based security control method and system provided by the embodiments of the invention can improve the precision of security control.
Fig. 2 is a functional block diagram of the security control system based on the cloud end of the present invention.
The cloud-based security control system 100 of the present invention may be installed in an electronic device. According to the implemented functions, the cloud-based security control system may include a control policy configuration module 101, a spatial feature extraction module 102, an optical flow feature extraction module 103, a fusion feature generation module 104, a factor probability calculation module 105, and a factor state discrimination module 106. A module according to the present invention, which may also be referred to as a unit, refers to a series of computer program segments that can be executed by a processor of an electronic device and that can perform a fixed function, and which are stored in a memory of the electronic device.
In the present embodiment, the functions regarding the respective modules/units are as follows:
the control strategy configuration module 101 is configured to identify a security area, a service scene and a control factor category corresponding to the security area and the service scene of the security system to be controlled, analyze the control factor category and a factor state corresponding to the control factor category, and configure a control strategy when the factor state is abnormal;
the spatial feature extraction module 102 is configured to obtain a surveillance video of the security area according to the service scene, extract a frame image of the surveillance video, extract a candidate spatial frame image in the frame image, and extract spatial features of the candidate spatial frame image by using a spatial stream network;
the optical flow feature extraction module 103 is configured to extract candidate optical flow frame images in the frame images, and extract optical flow features of the candidate optical flow frame images by using a time flow network;
the fusion feature generation module 104 is configured to fuse the spatial feature and the optical flow feature to obtain a fusion feature of the surveillance video;
the factor probability calculating module 105 is configured to calculate factor probabilities corresponding to the fusion features in the control factor categories;
and the factor state judging module 106 is configured to judge a factor state of the security system according to the factor probability, and when the factor state is abnormal, use the control strategy as a final control strategy of the security system.
In detail, in the embodiment of the present invention, the modules in the cloud-based security control system 100 employ the same technical means as the cloud-based security control method described with reference to FIG. 1 and can produce the same technical effects, which are not described herein again.
It is to be understood that the embodiments described are for illustrative purposes only and that the scope of the claimed invention is not limited to this configuration.
In the several embodiments provided in the present invention, it should be understood that the disclosed apparatus, system, and method may be implemented in other ways. For example, the system embodiments described above are merely illustrative, and for example, the division of the modules is only one logical functional division, and other divisions may be realized in practice.
The modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, functional modules in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional module.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof.
The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference signs in the claims shall not be construed as limiting the claim concerned.
The embodiment of the invention can acquire and process related data based on artificial intelligence technology. Artificial Intelligence (AI) is a theory, method, technique, and application system that uses a digital computer or a machine controlled by a digital computer to simulate, extend, and expand human intelligence, sense the environment, acquire knowledge, and use the knowledge to obtain the best result.
Furthermore, it is obvious that the word "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. A plurality of units or means recited in the system claims may also be implemented by one unit or means in software or hardware. The terms first, second, and the like are used to denote names and do not denote any particular order.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present invention and not to limit the same, and although the present invention is described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications or equivalent substitutions can be made to the technical solutions of the present invention without departing from the spirit and scope of the technical solutions of the present invention.
Claims (10)
1. A security control method based on a cloud end is characterized by comprising the following steps:
identifying a security area, a service scene and a corresponding control factor category of a security system to be controlled, analyzing the control factor category and a corresponding factor state thereof, and configuring a control strategy when the factor state is abnormal;
acquiring a monitoring video of the security area according to the service scene, extracting a frame image of the monitoring video, extracting a candidate space frame image in the frame image, and extracting a space feature of the candidate space frame image by using a space flow network;
extracting candidate optical flow frame images in the frame images, and extracting optical flow characteristics of the candidate optical flow frame images by using a time flow network;
fusing the spatial features and the optical flow features to obtain fused features of the monitoring video;
calculating factor probabilities corresponding to the fusion features in the control factor categories;
and judging the factor state of the security system according to the factor probability, and taking the control strategy as a final control strategy of the security system when the factor state is abnormal.
2. The cloud-based security control method of claim 1, wherein the configuring the control policy when the factor status is abnormal includes:
when the factor state is abnormal, analyzing the abnormal reason and the abnormal type of the historical control factor corresponding to the factor state;
and constructing an abnormal tree of a historical security system by using an abnormal tree analysis method based on the abnormal reason and the abnormal type, and configuring a control strategy of each abnormal node of the abnormal tree.
3. The cloud-based security control method of claim 2, wherein the constructing an abnormal tree of a historical security system by using an abnormal tree analysis method based on the abnormal reason and the abnormal type comprises:
based on the abnormal type, inquiring abnormal influence corresponding to the abnormal type from a pre-constructed historical security system abnormal database, and calculating abnormal probability of abnormal events of the abnormal type;
analyzing the abnormal grade of the abnormal type by using an abnormal mode, influence and hazard analysis method according to the abnormal influence;
constructing a top event node of the historical security system according to the abnormal grade and the abnormal probability;
according to the top event node and the abnormal reason corresponding to the top event node, constructing an intermediate event node of the historical security system, and configuring an intermediate logic gate between the intermediate event nodes;
constructing bottom event nodes of the historical security system according to the intermediate event nodes and the abnormal reasons corresponding to the intermediate event nodes, and configuring bottom logic gates among the bottom event nodes;
and generating an abnormal tree of the historical security system according to the top event node, the middle logic gate, the bottom event node and the bottom logic gate.
4. The cloud-based security control method of claim 1, wherein the extracting the frame image of the surveillance video comprises:
decoding the monitoring video by using a video decoder to obtain a decoded video;
converting the decoded video into an original frame image by using a video frame conversion algorithm;
converting the original frame image into a target frame image using the following formula:
R = Y + 1.140·V
G = Y − 0.394·U − 0.581·V
B = Y + 2.032·U
wherein the target frame image F is composed of the channels (R, G, B); R represents the red channel in the RGB color space; G represents the green channel in the RGB color space; B represents the blue channel in the RGB color space; Y represents the brightness in the YUV color space; U represents the chrominance in the YUV color space; and V represents the density in the YUV color space;
and carrying out format conversion on the target frame image to obtain a frame image of the monitoring video.
5. The cloud-based security control method of claim 1, wherein the extracting spatial features of the candidate spatial frame images using a spatial stream network comprises:
performing convolution on the candidate space frame image by utilizing a convolution layer in a space flow residual error network to obtain convolution space characteristics;
performing linear conversion and activation conversion on the convolution space characteristics by using a residual error unit in a space flow residual error network to obtain residual error space characteristics;
and determining the spatial features of the candidate spatial frame images according to the residual spatial features.
6. The cloud-based security control method of claim 1, wherein the extracting optical flow features of the candidate optical flow frame images using a time flow network comprises:
constructing an optical flow equation of the candidate optical flow frame image by using an optical flow detection algorithm in the time flow network;
calculating a dense optical flow of the candidate optical flow frame image according to the optical flow equation;
and extracting the optical flow characteristics of the candidate optical flow frame images according to the dense optical flow.
7. The cloud-based security control method of claim 6, wherein said calculating a dense optical flow of the candidate optical flow frame images according to the optical flow equation comprises:
calculating a dense optical flow of the candidate optical flow frame image using the following formula:
d = (u, v)^T = (A^T A)^(−1) A^T (−b)
wherein d represents the dense optical flow; u represents the horizontal optical flow of the pixel points of the candidate optical flow frame image in the preset neighborhood window; v represents the vertical optical flow of the pixel points of the candidate optical flow frame image in the preset neighborhood window; A is the n×2 matrix whose i-th row (Ix_i, Iy_i) contains the optical flow equation gradients of the gray scale of the i-th pixel point along the x and y directions; b = (It_1, …, It_n)^T contains the optical flow equation gradients of the pixel gray scale with respect to time; and n represents the number of pixel points of the candidate optical flow frame image in the preset neighborhood window.
8. The cloud-based security control method of claim 1, wherein the fusing the spatial features and the optical flow features to obtain fused features of the surveillance video comprises:
fusing the spatial features and the optical flow features using the following formula:
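The fusion formula referenced in claim 8 appears as an image in the original and is not reproduced in this text. As a hedged stand-in, a common two-stream fusion is an element-wise weighted sum of the spatial and temporal feature maps; the mixing weight `alpha` below is a hypothetical parameter, not the patent's formula.

```python
import numpy as np

def fuse_features(spatial, temporal, alpha=0.5):
    """Element-wise weighted-sum fusion of spatial and optical-flow
    feature maps of identical shape.  alpha is a hypothetical mixing
    weight; the patent's actual fusion formula is not given here."""
    assert spatial.shape == temporal.shape, "feature maps must align"
    return alpha * spatial + (1.0 - alpha) * temporal
```

Concatenation along the channel axis is an equally common alternative when the downstream classifier can accept the doubled width.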
9. The cloud-based security control method of claim 3, wherein the calculating the factor probability corresponding to the fused feature in the control factor category comprises:
pooling the fusion characteristics to obtain pooled fusion vectors;
calculating the category score of the pooling fusion vector in the control factor category by using a full-connection layer in a trained security system anomaly analysis model;
and calculating, from the category score, the regression factor probability in the control factor category by using the logistic regression function of the factor classifier in the trained security system anomaly analysis model, and taking the regression factor probability as the factor probability corresponding to the fusion feature in the control factor category.
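Claim 9's chain — pooling the fused feature, scoring it with a fully connected layer, then converting scores to per-category probabilities via logistic regression — can be sketched with NumPy. `W` and `b` stand in for the trained model's parameters (hypothetical); a softmax plays the role of the multi-class logistic regression function.

```python
import numpy as np

def factor_probabilities(fused, W, b):
    """Global-average-pool a fused (H, W, C) feature map, apply a fully
    connected layer (W: K x C, b: K), then a numerically stable softmax
    to obtain one probability per control-factor category."""
    pooled = fused.mean(axis=(0, 1))    # global average pooling -> (C,)
    scores = W @ pooled + b             # category scores -> (K,)
    e = np.exp(scores - scores.max())   # stable softmax
    return e / e.sum()
```

With zero weights the output is uniform over the categories, which is a convenient sanity check.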
10. A cloud-based security control system, characterized in that the system comprises:
the control strategy configuration module is used for identifying a security area, a service scene and a corresponding control factor category of a security system to be controlled, analyzing the control factor category and a corresponding factor state thereof, and configuring a control strategy when the factor state is abnormal;
the spatial feature extraction module is used for acquiring the monitoring video of the security area according to the service scene, extracting frame images of the monitoring video, extracting candidate spatial frame images in the frame images, and extracting spatial features of the candidate spatial frame images by using a spatial flow network;
the optical flow characteristic extraction module is used for extracting candidate optical flow frame images in the frame images and extracting the optical flow characteristics of the candidate optical flow frame images by using a time flow network;
the fusion feature generation module is used for fusing the spatial features and the optical flow features to obtain fusion features of the monitoring video;
the factor probability calculation module is used for calculating the factor probability corresponding to the fusion characteristics in the control factor category;
and the factor state judging module is used for judging the factor state of the security system according to the factor probability, and taking the control strategy as the final control strategy of the security system when the factor state is abnormal.
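The module chain of claim 10 ends in a simple decision rule: apply the configured control strategy only when the factor state is judged abnormal. A minimal skeleton of that decision, with a hypothetical sigmoid classifier and threshold standing in for the patented anomaly analysis model:

```python
import math

class SecuritySystem:
    """Skeleton of the claimed factor-state decision.  The classifier
    and the 0.5 threshold are hypothetical stand-ins, not the patented
    networks or trained model."""

    def __init__(self, control_strategy, threshold=0.5):
        self.control_strategy = control_strategy  # configured for abnormal states
        self.threshold = threshold                # hypothetical abnormality cutoff

    def factor_probability(self, fused_feature):
        # stand-in for pooling + fully connected layer + logistic regression
        return 1.0 / (1.0 + math.exp(-fused_feature))

    def decide(self, fused_feature):
        """Return the configured control strategy when the factor state
        is abnormal, otherwise None (no intervention)."""
        p = self.factor_probability(fused_feature)
        return self.control_strategy if p >= self.threshold else None
```

A strongly positive fused feature triggers the strategy; a strongly negative one leaves the system untouched.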
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310238920.6A CN115953740B (en) | 2023-03-14 | 2023-03-14 | Cloud-based security control method and system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310238920.6A CN115953740B (en) | 2023-03-14 | 2023-03-14 | Cloud-based security control method and system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115953740A true CN115953740A (en) | 2023-04-11 |
CN115953740B CN115953740B (en) | 2023-06-02 |
Family
ID=85892341
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310238920.6A Active CN115953740B (en) | 2023-03-14 | 2023-03-14 | Cloud-based security control method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115953740B (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106357423A (en) * | 2016-08-19 | 2017-01-25 | 南京国电南自电网自动化有限公司 | Abnormal diagnosis method of secondary equipment of intelligent substation based on fault tree |
CN109543513A (en) * | 2018-10-11 | 2019-03-29 | 平安科技(深圳)有限公司 | Method, apparatus, equipment and the storage medium that intelligent monitoring is handled in real time |
CN111027472A (en) * | 2019-12-09 | 2020-04-17 | 北京邮电大学 | Video identification method based on fusion of video optical flow and image space feature weight |
WO2021035807A1 (en) * | 2019-08-23 | 2021-03-04 | 深圳大学 | Target tracking method and device fusing optical flow information and siamese framework |
WO2021167394A1 (en) * | 2020-02-20 | 2021-08-26 | Samsung Electronics Co., Ltd. | Video processing method, apparatus, electronic device, and readable storage medium |
CN114202711A (en) * | 2020-08-28 | 2022-03-18 | 中车株洲电力机车研究所有限公司 | Intelligent monitoring method, device and system for abnormal behaviors in train compartment |
Also Published As
Publication number | Publication date |
---|---|
CN115953740B (en) | 2023-06-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Pustokhina et al. | An automated deep learning based anomaly detection in pedestrian walkways for vulnerable road users safety | |
US9652863B2 (en) | Multi-mode video event indexing | |
CN110348312A (en) | A kind of area video human action behavior real-time identification method | |
Li et al. | Decoupled appearance and motion learning for efficient anomaly detection in surveillance video | |
US20080298636A1 (en) | Method for detecting water regions in video | |
CN110827505A (en) | Smoke segmentation method based on deep learning | |
KR20150100141A (en) | Apparatus and method for analyzing behavior pattern | |
CN114743157B (en) | Pedestrian monitoring method, device, equipment and medium based on video | |
Ullah et al. | Gaussian mixtures for anomaly detection in crowded scenes | |
CN114373162B (en) | Dangerous area personnel intrusion detection method and system for transformer substation video monitoring | |
CN117197713A (en) | Extraction method based on digital video monitoring system | |
CN113095160B (en) | Power system personnel safety behavior identification method and system based on artificial intelligence and 5G | |
CN116310922A (en) | Petrochemical plant area monitoring video risk identification method, system, electronic equipment and storage medium | |
CN118038153A (en) | Method, device, equipment and medium for identifying external damage prevention of distribution overhead line | |
CN117523668A (en) | Abnormal behavior detection method of space-time action network | |
CN115953740B (en) | Cloud-based security control method and system | |
Dominguez et al. | A GPU-accelerated LPR algorithm on broad vision survillance cameras | |
Xie et al. | On‐line physical security monitoring of power substations | |
KR20230120410A (en) | Atypical object recognition method and apparatus applied with image turning point | |
Makris et al. | Learning scene semantics | |
CN113343757B (en) | Space-time anomaly detection method based on convolution sparse coding and optical flow | |
CN118365475B (en) | Intelligent monitoring method and device for photovoltaic power plant | |
Bagane et al. | Unsupervised Machine Learning for Unusual Crowd Activity Detection | |
Mallick et al. | Artificial Intelligence based Video Monitoring System for Security Applications. | |
Ilić | The Integration of Artificial Intelligence and Computer Vision in Large-Scale Video Surveillance of Railway Stations |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
GR01 | Patent grant ||