CN113989321A - Intrusion detection method, device and system and computer storage medium - Google Patents

Intrusion detection method, device and system and computer storage medium

Info

Publication number
CN113989321A
Authority
CN
China
Prior art keywords
intrusion
target
tracking
identification information
tracking frame
Prior art date
Legal status
Pending
Application number
CN202111062777.7A
Other languages
Chinese (zh)
Inventor
杜学丹
Current Assignee
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd
Priority to CN202111062777.7A
Publication of CN113989321A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G06T 7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2413 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F 18/24133 Distances to prototypes
    • G06F 18/24137 Distances to cluster centroïds
    • G06F 18/2414 Smoothing the distance, e.g. radial basis function networks [RBFN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30204 Marker
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30232 Surveillance

Abstract

The application discloses an intrusion detection method, an intrusion detection device, an intrusion detection system and a computer storage medium. The intrusion detection method comprises the following steps: acquiring a video stream of a monitored area, and performing target tracking processing on the video stream; acquiring a first tracking frame and first identification information of a first intrusion target in a current frame image of the video stream, and a second tracking frame and second identification information of a second intrusion target in a previous frame image; and in response to the first identification information being different from the second identification information, determining whether to trigger an intrusion alarm based on the similarity between the first tracking frame and the second tracking frame. By this method, the accuracy of intrusion detection and the accuracy of intrusion alarms can be improved.

Description

Intrusion detection method, device and system and computer storage medium
Technical Field
The application relates to the technical field of civil intelligent security, in particular to an intrusion detection method, an intrusion detection device, an intrusion detection system and a computer storage medium.
Background
Personal and property safety is valued at all times and in all places. With the continuous improvement of technology, people's requirements for security facilities keep changing. More and more people hope to monitor the condition of their private premises in real time through intelligent security facilities and to receive intrusion alarm information promptly when outsiders intrude, so as to protect their personal and property safety.
However, existing intrusion detection methods may trigger alarms repeatedly when the same intrusion target appears in the monitored area again and again.
Disclosure of Invention
The present application mainly solves the technical problem of how to improve the accuracy of intrusion detection, so as to avoid repeatedly triggering an alarm for the same intrusion target.
In order to solve the above technical problem, one technical solution adopted by the present application is to provide an intrusion detection method. The intrusion detection method comprises the following steps: acquiring a video stream of a monitored area, and performing target tracking processing on the video stream; acquiring a first tracking frame and first identification information of a first intrusion target in a current frame image of the video stream, and a second tracking frame and second identification information of a second intrusion target in a previous frame image; and in response to the first identification information being different from the second identification information, determining whether to trigger an intrusion alarm based on the similarity between the first tracking frame and the second tracking frame.
In order to solve the above technical problem, another technical solution adopted by the present application is to provide an intrusion detection device. The intrusion detection device comprises a memory and a processor, wherein the memory is coupled with the processor; the memory is used for storing program data, and the processor is used for executing the program data to implement: acquiring a video stream of a monitored area, and performing target tracking processing on the video stream; acquiring a first tracking frame and first identification information of a first intrusion target in a current frame image of the video stream, and a second tracking frame and second identification information of a second intrusion target in a previous frame image; and in response to the first identification information being different from the second identification information, determining whether to trigger an intrusion alarm based on the similarity between the first tracking frame and the second tracking frame.
In order to solve the above technical problem, another technical solution adopted by the present application is to provide an intrusion detection system. The intrusion detection system includes an input/output device and an intrusion detection device connected with the input/output device; the input/output device is used for acquiring a video stream of a monitored area; the intrusion detection device comprises a memory and a processor, wherein the memory is coupled with the processor; the memory is used for storing program data, and the processor is used for executing the program data to implement: acquiring the video stream from the input/output device, and performing target tracking processing on the video stream; acquiring a first tracking frame and first identification information of a first intrusion target in a current frame image of the video stream, and a second tracking frame and second identification information of a second intrusion target in a previous frame image; and in response to the first identification information being different from the second identification information, determining whether to trigger an intrusion alarm based on the similarity between the first tracking frame and the second tracking frame.
In order to solve the above technical problem, another technical solution adopted by the present application is to provide a computer storage medium. The computer storage medium has program instructions stored thereon, and the program instructions, when executed, implement: acquiring a video stream of a monitored area, and performing target tracking processing on the video stream; acquiring a first tracking frame and first identification information of a first intrusion target in a current frame image of the video stream, and a second tracking frame and second identification information of a second intrusion target in a previous frame image; and in response to the first identification information being different from the second identification information, determining whether to trigger an intrusion alarm based on the similarity between the first tracking frame and the second tracking frame.
The beneficial effect of this application is: different from the prior art, the method first acquires a video stream of a monitored area and performs target tracking processing on the video stream; then acquires a first tracking frame and first identification information of a first intrusion target in a current frame image of the video stream, and a second tracking frame and second identification information of a second intrusion target in a previous frame image; and then, in response to the first identification information being different from the second identification information, determines whether to trigger an intrusion alarm based on the similarity between the first tracking frame and the second tracking frame. In this way, when new identification information appears in the current frame image of the video stream (relative to the previous frame image), that is, when the first identification information differs from the second identification information in the previous frame image, whether an intrusion alarm is triggered is determined based on the similarity between the first tracking frame and the second tracking frame. Therefore, the present application can improve the accuracy of intrusion detection and of intrusion alarms, and can further alleviate the problem of repeatedly triggering an alarm for the same intrusion event.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts. Wherein:
FIG. 1 is a schematic flow chart diagram illustrating an embodiment of an intrusion detection method according to the present application;
FIG. 2 is a schematic diagram of a neural network structure of an algorithm for target tracking processing in the intrusion detection method of FIG. 1;
fig. 3 is a detailed flowchart of step S13 in the intrusion detection method according to the embodiment of fig. 1;
FIG. 4 is a schematic flow chart of a similarity calculation method for a first tracking frame and a second tracking frame according to the present application;
FIG. 5 is a schematic flowchart of an embodiment of an intrusion detection method according to the present application;
FIG. 6 is a flowchart illustrating the intrusion detection method of the embodiment of FIG. 5 in step S54;
FIG. 7 is a schematic structural diagram of an embodiment of an intrusion detection device according to the present application;
FIG. 8 is a schematic view of the operation of the intrusion detection device of the embodiment of FIG. 7;
FIG. 9 is a schematic block diagram of an embodiment of an intrusion detection system according to the present application;
FIG. 10 is a schematic structural diagram of an embodiment of a computer storage medium according to the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first" and "second" in this application are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. In the description of the present application, "plurality" means at least two, e.g., two, three, etc., unless explicitly specifically limited otherwise. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
The intrusion detection method of the present application can detect not only intruding persons in the monitored area, but also other intrusion targets defined according to requirements, such as animals, vehicles or flying objects.
The present application first provides an intrusion detection method, as shown in fig. 1, where fig. 1 is a schematic flow chart of an embodiment of the intrusion detection method of the present application. The intrusion detection method of the embodiment specifically comprises the following steps:
step S11: and acquiring a video stream of the monitored area, and performing target tracking processing on the video stream.
One or more cameras or other video acquisition devices may be arranged in the monitored area, and the video stream is acquired through the cameras; the combined acquisition range of the one or more cameras should cover the entire monitored area.
The camera of this embodiment may be an infrared camera, which emits infrared light to illuminate the object; the reflected infrared light is received by the infrared camera to form the video stream. Alternatively, the camera may be an ordinary visible-light camera, and the like, which is not specifically limited here.
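As an illustrative sketch only (OpenCV, the RTSP URL and the frame-skip interval below are assumptions and are not part of the claimed method), the video stream of the monitored area could be read as follows:
```python
import cv2  # OpenCV, assumed available for illustration

def read_monitoring_frames(stream_url, frame_interval=1):
    """Yield frames from a camera covering the monitored area.

    stream_url and frame_interval are illustrative parameters; any video
    source whose acquisition range covers the monitored area can be used.
    """
    cap = cv2.VideoCapture(stream_url)
    if not cap.isOpened():
        raise RuntimeError("failed to open video stream: %s" % stream_url)
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % frame_interval == 0:
            yield frame
        index += 1
    cap.release()

# Hypothetical usage: process every 5th frame of an RTSP camera.
# for frame in read_monitoring_frames("rtsp://camera.local/stream", 5):
#     pass  # feed the frame to the target tracking step below
```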
This embodiment may adopt the deep-learning-based FairMOT algorithm to perform target tracking processing on the video stream containing the intrusion target. The neural network of the FairMOT algorithm is trained on a corresponding data set to obtain an optimal model (an optimal target tracking neural network model) capable of accurately tracking the intrusion target.
In other embodiments, other multi-target tracking algorithms, such as DeepSORT, MOTDT, etc., may also be employed.
As shown in fig. 2, in the neural network model of the FairMOT algorithm, the stride-4 high-resolution feature map extracted from the image by the encoder-decoder network is fed to four branches: three of the four branches are used to detect the target (Detection), and the remaining branch outputs the re-identification information (Re-ID) of the target, abbreviated herein as ID information. Each branch is referred to as a head. The heads are identical except for the number of channels of their final output; that is, each head is implemented as a 3x3 convolutional layer followed by a 1x1 convolutional layer. The final outputs of the network are: 1) a heat map, of shape (1, H, W), with a single channel; 2) a center point offset, of shape (2, H, W), which compensates for the slight offset introduced by downsampling; 3) a detection frame size, of shape (2, H, W): the heat map only gives the position of the center point, and the width and height of the detection frame corresponding to each center point are regressed from the feature map; 4) a Re-ID embedding, of shape (128, H, W), i.e. each target is represented by a 128-dimensional vector.
Therefore, the information of the four branches can be obtained by the target tracking processing.
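A minimal PyTorch-style sketch of the four heads described above is given below (PyTorch is assumed to be available; the 64-channel input feature map and the 256 intermediate channels are illustrative assumptions, not values fixed by this embodiment):
```python
import torch
import torch.nn as nn

def make_head(in_channels, out_channels, mid_channels=256):
    # Each head is a 3x3 convolution followed by a 1x1 convolution;
    # the heads differ only in the number of output channels.
    return nn.Sequential(
        nn.Conv2d(in_channels, mid_channels, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv2d(mid_channels, out_channels, kernel_size=1),
    )

class TrackingHeads(nn.Module):
    """Four branches applied to the stride-4 feature map of shape (C, H, W)."""

    def __init__(self, in_channels=64):
        super().__init__()
        self.heatmap = make_head(in_channels, 1)    # (1, H, W)   target center heat map
        self.offset = make_head(in_channels, 2)     # (2, H, W)   center point offset
        self.box_size = make_head(in_channels, 2)   # (2, H, W)   detection frame width/height
        self.reid = make_head(in_channels, 128)     # (128, H, W) Re-ID embedding

    def forward(self, feature_map):
        return {
            "heatmap": torch.sigmoid(self.heatmap(feature_map)),
            "offset": self.offset(feature_map),
            "box_size": self.box_size(feature_map),
            "reid": self.reid(feature_map),
        }
```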
Step S12: and acquiring a first tracking frame and first identification information of a first intrusion target in a current frame image of the video stream, and a second tracking frame and second identification information of a second intrusion target in a previous frame image.
The identification information of this embodiment may be the ID information, and the tracking frame is obtained from the corresponding heat map, center point offset and detection frame size information.
Step S13: and determining whether to trigger intrusion alarm or not based on the similarity of the first tracking frame and the second tracking frame in response to the first identification information being different from the second identification information.
The first ID information of the first intrusion target in the current frame image is compared with the second ID information of the second intrusion target in the previous frame image. If the first ID information is different from the second ID information, the first ID information can be regarded as newly added ID information, and the first intrusion target may be a target newly appearing in the current frame image; whether the first ID information and the second ID information correspond to the same intrusion target then needs to be further determined based on the similarity between the first tracking frame and the second tracking frame, so as to decide whether to trigger an intrusion alarm.
Alternatively, the present embodiment may implement step S13 by the method shown in fig. 3. The method of the present embodiment includes steps S31 to S34.
Step S31: and responding to the difference between the first identification information and the second identification information, and if the similarity of the first tracking frame and the second tracking frame is smaller than or equal to a first threshold value, determining that the first intrusion target is different from the second intrusion target.
Alternatively, the similarity of the first tracking frame and the second tracking frame may be calculated by a method as shown in fig. 4:
step S41: and acquiring a first feature vector from the first tracking frame and acquiring a second feature vector from the second tracking frame.
This embodiment may adopt a deep-learning-based target classification algorithm to obtain the feature vector of the intrusion target. The underlying network of the target classification algorithm includes, but is not limited to, any convolutional network such as VGGNet, DenseNet, ResNet, etc. The convolutional network corresponding to the target classification algorithm is trained on a corresponding data set to obtain an optimal classification model capable of accurate recognition.
In this embodiment, a specified number of trailing network layers are removed from the optimal classification model, and the truncated network serves as the final model of the target classification algorithm. The moving target areas (tracking frame areas) of the two frame images before and after the ID information changes are respectively input into this model to obtain the first feature vector of the first intrusion target and the second feature vector of the second intrusion target.
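As an illustrative sketch of this feature-extraction step (torchvision's ResNet-18 backbone and the removal of only the final classification layer are assumptions; neither the backbone nor the number of removed layers is limited here):
```python
import torch
import torch.nn as nn
from torchvision import models  # assumed available for illustration

def build_feature_extractor():
    # Start from a trained classification network (ResNet-18 chosen here only
    # for illustration) and drop its final classification layer so the output
    # is a feature vector rather than class scores.
    backbone = models.resnet18(weights=None)  # load your own trained weights here
    extractor = nn.Sequential(*list(backbone.children())[:-1])
    extractor.eval()
    return extractor

def extract_feature(extractor, tracked_region):
    # tracked_region: image patch cropped by the tracking frame, already
    # resized and normalized to the network input, tensor of shape (3, 224, 224).
    with torch.no_grad():
        feature = extractor(tracked_region.unsqueeze(0))
    return feature.flatten(1).squeeze(0)  # 512-dimensional vector for ResNet-18
```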
Step S42: and calculating the similarity between the first feature vector and the second feature vector.
Optionally, the similarity in this embodiment may be a cosine similarity: the cosine distance between the first feature vector and the second feature vector is calculated, and the resulting cosine similarity is taken as the similarity between the first feature vector and the second feature vector.
Of course, in other embodiments, the similarity between the first feature vector and the second feature vector may also be obtained by calculating the euclidean distance between the first feature vector and the second feature vector, and the like.
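The cosine-similarity computation described above can be sketched as follows (NumPy is assumed; the 0.8 threshold in the usage comment is an illustrative assumption, not the claimed first threshold):
```python
import numpy as np

def cosine_similarity(vec_a, vec_b, eps=1e-12):
    # Cosine similarity; values close to 1 mean the two feature vectors
    # (and hence the two tracked targets) look alike.
    vec_a = np.asarray(vec_a, dtype=np.float64)
    vec_b = np.asarray(vec_b, dtype=np.float64)
    return float(np.dot(vec_a, vec_b) /
                 (np.linalg.norm(vec_a) * np.linalg.norm(vec_b) + eps))

# Illustrative use with an assumed first threshold of 0.8:
# same_target = cosine_similarity(first_feature, second_feature) > 0.8
```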
If the similarity between the first feature vector of the first intrusion target in the first tracking frame of the current frame image and the second feature vector of the second intrusion target in the second tracking frame of the previous frame image is less than or equal to the first threshold, the first intrusion target and the second intrusion target are considered to be different; that is, the first ID information in the current frame image and the second ID information in the previous frame image correspond to different intrusion targets.
Step S32: and triggering an intrusion alarm.
If the first intrusion target is different from the second intrusion target, the first intrusion target is regarded as a newly appearing intrusion target, a new intrusion event needs to be added, and an intrusion alarm is triggered.
Step S33: and if the similarity of the first tracking frame and the second tracking frame is greater than a first threshold value, determining that the first intrusion target and the second intrusion target are the same.
If the similarity between the first feature vector of the first intrusion target in the first tracking frame of the current frame image and the second feature vector of the second intrusion target in the second tracking frame of the previous frame image is greater than the first threshold, the first intrusion target and the second intrusion target are considered to be the same; that is, the first ID information in the current frame image and the second ID information in the previous frame image correspond to the same intrusion target.
Step S34: no intrusion alarm is triggered.
If the first intrusion target and the second intrusion target are the same, the target is considered to have merely changed its ID information rather than being a newly intruding target; no new intrusion event needs to be added, no intrusion alarm is triggered, and repeated triggering of the intrusion alarm is avoided.
Optionally, the intrusion detection method of this embodiment further includes: and if the first identification information is determined to be the same as the second identification information, determining that the first intrusion target is the same as the second intrusion target, and not triggering intrusion alarm.
If the first ID information is the same as the second ID information, the current frame image is considered to contain no newly added intrusion target relative to the previous frame image, and no intrusion alarm needs to be triggered.
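Combining steps S31 to S34 with the same-ID case just described, the alarm decision of this embodiment can be sketched as follows (the function names, the similarity routine and the threshold value 0.8 are illustrative assumptions rather than the claimed implementation):
```python
FIRST_THRESHOLD = 0.8  # assumed value of the first threshold

def decide_alarm(first_id, second_id, first_feature, second_feature, similarity_fn):
    """Return True if an intrusion alarm should be triggered.

    first_id / second_id: identification information of the targets in the
    current and previous frame images; *_feature: feature vectors taken from
    the tracking frames; similarity_fn: e.g. the cosine similarity above.
    """
    if first_id == second_id:
        # Same ID information: no newly added intrusion target, no alarm.
        return False
    similarity = similarity_fn(first_feature, second_feature)
    if similarity <= FIRST_THRESHOLD:
        # Different targets: a new intrusion event, trigger the alarm (S31/S32).
        return True
    # Same target whose ID information merely changed: do not alarm again (S33/S34).
    return False
```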
Different from the prior art, in this embodiment, when new identification information appears in the current frame image of the video stream (relative to the previous frame image), that is, when the first identification information differs from the second identification information in the previous frame image, whether an intrusion alarm is triggered is determined based on the similarity between the first tracking frame and the second tracking frame. Therefore, the accuracy of intrusion detection and of the intrusion alarm can be improved, and the problem of repeatedly triggering an alarm for the same intrusion event can be alleviated.
The current frame image and the previous frame image in this embodiment may be adjacent or non-adjacent frame images, and the above processing is performed on every frame image, or on frame images taken at a specific interval, in the video stream, so as to ensure the accuracy of intrusion detection and the accuracy of the intrusion alarm.
Existing intrusion target detection methods are easily affected by the application scene and the external environment; other non-intrusion objects in the scene or environment, such as human figures shown in photos or magazines and humanoid toys, are easily misidentified as intrusion targets, so that the accuracy of intrusion target detection and of the intrusion alarm is low.
To this end, the present application further proposes another embodiment of an intrusion detection method, as shown in fig. 5, fig. 5 is a schematic flowchart of an embodiment of the intrusion detection method of the present application. The intrusion detection method of the embodiment comprises the following steps:
step S51: and acquiring a video stream of the monitored area, and performing target tracking processing on the video stream.
Step S51 is similar to step S11 described above and is not repeated here.
Step S52: and acquiring a first tracking frame and first identification information of a first intrusion target in a current frame image and a second tracking frame and second identification information of a second intrusion target in a previous frame image of the video stream.
Step S52 is similar to step S12 described above and is not repeated here.
Step S53: and if the first identification information is the same as the second identification information, determining that the first intrusion target is the same as the second intrusion target.
The first and second intrusion targets are the same, and step S59 is performed.
Step S54: and if the first and second intrusion targets are determined to be the same, acquiring the motion states of the first and second intrusion targets based on the first and second tracking frames.
The motion state includes stationary and moving.
Alternatively, the present embodiment may implement step S54 by the method shown in fig. 6. The method of the present embodiment includes steps S61 to S63.
Step S61: and calculating the intersection ratio of the areas of the first tracking frame and the second tracking frame.
The area of the first tracking frame and the area of the second tracking frame are calculated first; then the intersection area and the union area of the two regions are calculated; finally, the ratio of the intersection area to the union area is taken as the intersection ratio of the areas of the first tracking frame and the second tracking frame.
Step S62: and if the intersection ratio is larger than a second threshold value, acquiring that the motion states of the first intrusion target and the second intrusion target are static.
If the intersection ratio is greater than the second threshold, the movement displacement of the intrusion target is considered to be small, and the state of the intrusion target (the first intrusion target is the same as the second intrusion target) is considered to be static.
Step S63: and if the intersection ratio is smaller than or equal to the second threshold, acquiring the motion states of the first intrusion target and the second intrusion target as motion.
If the intersection ratio is smaller than or equal to the second threshold, the movement displacement of the intrusion target is considered to be large, and the motion state of the intrusion target is motion.
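Steps S61 to S63 can be sketched as follows (the (x1, y1, x2, y2) box format and the 0.9 value of the second threshold are illustrative assumptions):
```python
def intersection_over_union(box_a, box_b):
    # Boxes are given as (x1, y1, x2, y2) in pixel coordinates (assumed format).
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

SECOND_THRESHOLD = 0.9  # assumed value of the second threshold

def motion_state(first_box, second_box):
    # Large overlap between the tracking frames of the two frame images means
    # the target barely moved and is treated as stationary (steps S62/S63).
    iou = intersection_over_union(first_box, second_box)
    return "stationary" if iou > SECOND_THRESHOLD else "moving"
```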
In other embodiments, the motion state of the intrusion target may also be judged according to the motion vector of the intrusion target across multiple consecutive frame images and the magnitude of that motion vector.
Step S55: and filtering out first tracking frames corresponding to the first intrusion targets with static motion states from all first tracking frames in the current frame image, and filtering out second tracking frames corresponding to the second intrusion targets with static motion states from all second tracking frames in the previous frame image.
Generally, a real intrusion target, such as a person who has entered a room for theft, moves considerably; therefore, a target with a small movement displacement can be regarded as not being an intrusion target.
The tracking frame contains the image information of the intrusion target, and deleting the tracking frame is equivalent to deleting the image information of that target, so the feature vector of the stationary target will not be extracted in the subsequent feature extraction step.
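A minimal sketch of this filtering step (the dictionary keyed by identification information and the motion_state helper above are illustrative assumptions):
```python
def filter_stationary(tracks_current, tracks_previous, state_fn):
    """Drop tracking frames whose targets are stationary (step S55).

    tracks_*: dict mapping identification information -> tracking frame box;
    state_fn: e.g. motion_state above. Only targets present in both frames
    have a motion state, so unmatched IDs are kept for later comparison.
    """
    kept_current, kept_previous = {}, {}
    for track_id, box in tracks_current.items():
        prev_box = tracks_previous.get(track_id)
        if prev_box is not None and state_fn(box, prev_box) == "stationary":
            continue  # stationary target: not treated as an intruder
        kept_current[track_id] = box
    for track_id, box in tracks_previous.items():
        cur_box = tracks_current.get(track_id)
        if cur_box is not None and state_fn(cur_box, box) == "stationary":
            continue
        kept_previous[track_id] = box
    return kept_current, kept_previous
```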
Step S56: and responding to the difference between the first identification information and the second identification information, and if the similarity of the first tracking frame and the second tracking frame is smaller than or equal to a first threshold value, determining that the first intrusion target is different from the second intrusion target.
The first intrusion target and the second intrusion target are different, and step S58 is performed.
Step S57: and if the similarity of the first tracking frame and the second tracking frame is greater than a first threshold value, determining that the first intrusion target and the second intrusion target are the same.
The first and second intrusion targets are the same, and step S59 is performed.
Step S58: and triggering an intrusion alarm.
Step S58 is similar to step S32 described above and is not repeated here.
Step S59: no intrusion alarm is triggered.
Step S59 is similar to step S34 described above and is not repeated here.
Further, on the basis of the above embodiments, in this embodiment the intrusion targets in the video stream and their motion states can be obtained through multi-target tracking; stationary targets are filtered out, and intrusion alarms are triggered based only on moving targets and their motion states, so that other non-intrusion targets, such as human figures presented in photos or magazines and humanoid toys, are prevented from being misidentified as intrusion targets. In this way, non-intrusion targets in the image frames are filtered out, which improves the accuracy and reduces the computation cost of the subsequent intrusion detection steps.
Fig. 7 is a schematic structural diagram of an embodiment of the intrusion detection device according to the present application. The intrusion detection apparatus 80 of the present embodiment includes a processor 81, a memory 82, an input-output device 83, and a bus 84.
The processor 81, the memory 82 and the input/output device 83 are respectively connected to the bus 84, the memory 82 stores program data, and the processor 81 is configured to execute the program data to implement: acquiring a first tracking frame and first identification information of a first intrusion target in a current frame image of a video stream, and acquiring a second tracking frame and second identification information of a second intrusion target in a previous frame image; and determining whether to trigger intrusion alarm or not based on the similarity of the first tracking frame and the second tracking frame in response to the first identification information being different from the second identification information.
The processor 81 also implements the intrusion detection method of the above-described embodiment when executing the program data.
In the present embodiment, the processor 81 may also be referred to as a CPU (Central Processing Unit). The processor 81 may be an integrated circuit chip having signal processing capabilities. Processor 81 may also be a general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components. A general purpose processor may be a microprocessor or the processor 81 may be any conventional processor or the like.
In an application scenario, as shown in fig. 8, the processor 81 may be divided into a plurality of functional modules (not shown). A target tracking module transmits the target tracking results, such as tracking frames and ID information, to a target filtering module, which filters out stationary targets. A determining module then checks whether the ID information of the intrusion target has changed: if not, no intrusion alarm is triggered; if so, a target re-matching module calculates the similarity between the feature vectors of the intrusion targets, and the determining module judges, based on the similarity, whether a newly added intrusion target (an intruder) is present in the current image frame. If so, an intrusion alarm is triggered; otherwise no intrusion alarm is triggered, and the result is output.
Referring to fig. 7 and fig. 9, the present application further provides an intrusion detection system; fig. 9 is a schematic structural diagram of an embodiment of the intrusion detection system of the present application. The intrusion detection system of this embodiment comprises: an input/output device 88 and an intrusion detection device 80 connected to the input/output device 88; the input/output device 88 is used for collecting the video stream of the monitored area; the intrusion detection device 80 comprises a memory 82 and a processor 81, the memory 82 being coupled to the processor 81; wherein the memory 82 is used for storing program data, and the processor 81 is used for executing the program data to implement: acquiring a first tracking frame and first identification information of a first intrusion target in a current frame image of the video stream, and acquiring a second tracking frame and second identification information of a second intrusion target in a previous frame image; and in response to the first identification information being different from the second identification information, determining whether to trigger an intrusion alarm based on the similarity between the first tracking frame and the second tracking frame.
The structure and principle of the intrusion detection device 80 can refer to the embodiment shown in fig. 7 and 8, and are not described herein.
The input/output device 88 of this embodiment may include a video capture device 810 and a mobile terminal 820, where the video capture device 810 is configured to obtain a video stream of a monitored area, and the mobile terminal 820 is configured to receive an intrusion alarm, display the video stream, and remotely control and adjust the video capture device 810.
The intrusion detection system of the present embodiment further includes a transmission device 89 for data transmission.
The present application further provides a computer storage medium. As shown in fig. 10, fig. 10 is a schematic structural diagram of an embodiment of the computer storage medium of the present application. The computer storage medium 90 has stored thereon program instructions 91, and the program instructions 91, when executed by a processor (not shown), implement: acquiring a first tracking frame and first identification information of a first intrusion target in a current frame image of a video stream, and acquiring a second tracking frame and second identification information of a second intrusion target in a previous frame image; and in response to the first identification information being different from the second identification information, determining whether to trigger an intrusion alarm based on the similarity between the first tracking frame and the second tracking frame.
The program instructions 91 when executed by the processor also implement the intrusion detection method of the above-described embodiment.
The computer storage medium 90 of the present embodiment may be, but is not limited to, a usb disk, an SD card, a PD optical drive, a removable hard disk, a high-capacity floppy drive, a flash memory, a multimedia memory card, a server, etc.
Different from the prior art, the present application first acquires a video stream of a monitored area and performs target tracking processing on the video stream; then acquires a first tracking frame and first identification information of a first intrusion target in a current frame image of the video stream, and a second tracking frame and second identification information of a second intrusion target in a previous frame image; and then, in response to the first identification information being different from the second identification information, determines whether to trigger an intrusion alarm based on the similarity between the first tracking frame and the second tracking frame. In this way, when new identification information appears in the current frame image of the video stream (relative to the previous frame image), that is, when the first identification information differs from the second identification information in the previous frame image, whether an intrusion alarm is triggered is determined based on the similarity between the first tracking frame and the second tracking frame. Therefore, the present application can improve the accuracy of intrusion detection and of intrusion alarms, and can further alleviate the problem of repeatedly triggering an alarm for the same intrusion event.
In addition, if the above functions are implemented in the form of software functions and sold or used as a standalone product, the functions may be stored in a storage medium readable by a mobile terminal, that is, the present application also provides a storage device storing program data, which can be executed to implement the method of the above embodiments, the storage device may be, for example, a usb disk, an optical disk, a server, etc. That is, the present application may be embodied as a software product, which includes several instructions for causing an intelligent terminal to perform all or part of the steps of the methods described in the embodiments.
In the description of the present application, reference to the description of the terms "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Furthermore, the terms "first", "second" and "first" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present application, "plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and the scope of the preferred embodiments of the present application includes other implementations in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present application.
The logic and/or steps represented in the flowcharts or otherwise described herein, such as an ordered listing of executable instructions that can be viewed as implementing logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device (e.g., a personal computer, server, network device, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions). For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Additionally, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
The above description is only for the purpose of illustrating embodiments of the present application and is not intended to limit the scope of the present application, and all modifications of equivalent structures and equivalent processes, which are made by the contents of the specification and the drawings of the present application or are directly or indirectly applied to other related technical fields, are also included in the scope of the present application.

Claims (10)

1. An intrusion detection method, comprising:
acquiring a video stream of a monitored area, and carrying out target tracking processing on the video stream;
acquiring a first tracking frame and first identification information of a first intrusion target in a current frame image of the video stream, and acquiring a second tracking frame and second identification information of a second intrusion target in a previous frame image;
and in response to the first identification information being different from the second identification information, determining whether to trigger an intrusion alarm based on the similarity between the first tracking frame and the second tracking frame.
2. The intrusion detection method according to claim 1, wherein the determining whether to trigger an intrusion alarm based on the similarity between the first tracking frame and the second tracking frame comprises:
if the similarity between the first tracking frame and the second tracking frame is less than or equal to a first threshold, determining that the first intrusion target is different from the second intrusion target; and
triggering an intrusion alarm.
3. The intrusion detection method according to claim 1, further comprising:
and if the similarity of the first tracking frame and the second tracking frame is greater than a first threshold value, determining that the first intrusion target is the same as the second intrusion target.
4. The intrusion detection method according to claim 1, wherein before determining whether to trigger an intrusion alert based on the similarity between the first tracking frame and the second tracking frame, the method further comprises:
acquiring a first feature vector from the first tracking frame and acquiring a second feature vector from the second tracking frame;
determining the similarity between the first tracking frame and the second tracking frame based on the degree of similarity between the first feature vector and the second feature vector.
5. The intrusion detection method according to claim 1, further comprising:
and if the first identification information is determined to be the same as the second identification information, determining that the first intrusion target is the same as the second intrusion target.
6. The intrusion detection method according to claim 5, further comprising:
if the first intrusion target and the second intrusion target are determined to be the same, acquiring the motion states of the first intrusion target and the second intrusion target based on the first tracking frame and the second tracking frame;
and filtering the first tracking frames corresponding to the first intrusion targets with the static motion states from all the first tracking frames in the current frame image, and filtering the second tracking frames corresponding to the second intrusion targets with the static motion states from all the second tracking frames in the previous frame image.
7. The intrusion detection method according to claim 6, wherein the obtaining the motion states of the first and second intrusion targets based on the first and second tracking frames comprises:
calculating the intersection ratio of the areas of the first tracking frame and the second tracking frame;
if the intersection ratio is larger than a second threshold value, the motion states of the first intrusion target and the second intrusion target are obtained to be static;
and if the intersection ratio is smaller than or equal to the second threshold value, acquiring the motion states of the first intrusion target and the second intrusion target as motion.
8. An intrusion detection device comprising a memory and a processor, the memory coupled to the processor; wherein the memory is configured to store program data and the processor is configured to execute the program data to implement:
acquiring a video stream of a monitored area, and carrying out target tracking processing on the video stream;
acquiring a first tracking frame and first identification information of a first intrusion target in a current frame image of the video stream, and acquiring a second tracking frame and second identification information of a second intrusion target in a previous frame image;
and in response to the first identification information being different from the second identification information, determining whether to trigger an intrusion alarm based on the similarity between the first tracking frame and the second tracking frame.
9. An intrusion detection system, comprising: the system comprises an input/output device and an intrusion detection device connected with the input/output device; the input and output device is used for acquiring a video stream of a monitored area; the intrusion detection device includes a memory and a processor, the memory coupled to the processor; wherein the memory is configured to store program data and the processor is configured to execute the program data to implement:
acquiring the video stream from the input and output device, and performing target tracking processing on the video stream;
acquiring a first tracking frame and first identification information of a first intrusion target in a current frame image of the video stream, and acquiring a second tracking frame and second identification information of a second intrusion target in a previous frame image;
and in response to the first identification information being different from the second identification information, determining whether to trigger an intrusion alarm based on the similarity between the first tracking frame and the second tracking frame.
10. A computer storage medium having stored thereon program instructions that, when executed, implement:
acquiring a video stream of a monitored area, and carrying out target tracking processing on the video stream;
acquiring a first tracking frame and first identification information of a first intrusion target in a current frame image of the video stream, and acquiring a second tracking frame and second identification information of a second intrusion target in a previous frame image;
and in response to the first identification information being different from the second identification information, determining whether to trigger an intrusion alarm based on the similarity between the first tracking frame and the second tracking frame.
CN202111062777.7A 2021-09-10 2021-09-10 Intrusion detection method, device and system and computer storage medium Pending CN113989321A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111062777.7A CN113989321A (en) 2021-09-10 2021-09-10 Intrusion detection method, device and system and computer storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111062777.7A CN113989321A (en) 2021-09-10 2021-09-10 Intrusion detection method, device and system and computer storage medium

Publications (1)

Publication Number Publication Date
CN113989321A true CN113989321A (en) 2022-01-28

Family

ID=79735622

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111062777.7A Pending CN113989321A (en) 2021-09-10 2021-09-10 Intrusion detection method, device and system and computer storage medium

Country Status (1)

Country Link
CN (1) CN113989321A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114758485A (en) * 2022-04-21 2022-07-15 成都商汤科技有限公司 Alarm information processing method and device, computer equipment and storage medium
CN115273368A (en) * 2022-07-20 2022-11-01 云南电网有限责任公司电力科学研究院 Method, medium, equipment and system for warning invasion of vehicles in power transmission line corridor construction


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination