CN109872483B - Intrusion alert photoelectric monitoring system and method - Google Patents

Intrusion alert photoelectric monitoring system and method

Info

Publication number
CN109872483B
Authority
CN
China
Prior art keywords
target
suspicious
camera
image
tilt
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910133621.XA
Other languages
Chinese (zh)
Other versions
CN109872483A (en)
Inventor
范强
雷波
张智杰
徐寅
邹尔博
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
717th Research Institute of CSIC
Original Assignee
717th Research Institute of CSIC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 717th Research Institute of CSIC
Priority to CN201910133621.XA
Publication of CN109872483A
Application granted
Publication of CN109872483B

Landscapes

  • Closed-Circuit Television Systems (AREA)

Abstract

The invention discloses an intrusion alert photoelectric monitoring system comprising five visible light digital cameras, a continuously variable-focus large-zoom-ratio telephoto camera, an electric pan-tilt, a warning platform base, a central control processing unit and a display terminal. The five visible light digital cameras collect panoramic images and send them back to the central control processing unit for processing. Once a suspicious target is monitored, it is identified and classified; according to the set alert level, the central control processing unit transmits the target direction to the electric pan-tilt so that the telephoto camera is aimed at the suspicious target for continuous tracking, while the identification result is sent to the display terminal for display. A method for identifying and tracking intrusion suspicious targets with this system is also disclosed. The method performs horizontal panoramic imaging with target identification and tracking, achieves high identification accuracy, and, by determining the target monitoring area from the motion region, is fast, easy to implement and meets real-time requirements.

Description

Intrusion alert photoelectric monitoring system and method
Technical Field
The invention belongs to the field of visible light image target identification, relates to ground panoramic target monitoring and tracking and regional warning, and particularly relates to an intrusion warning photoelectric monitoring system and an intrusion suspicious target identification and tracking method.
Background
The security of key areas is the foundation for the survival and development of society and enterprises; with technological progress, criminal means have become more covert. A system that continuously monitors the surrounding environment and accurately and quickly judges intrusion suspicious targets helps prevent theft, reduces the working intensity of patrol operators on duty and improves efficiency. Generally, vehicles and people posing security threats keep moving in the scene and tend to approach the surveillance zone.
Security warning technology mainly comprises three types: systems that monitor a fixed area for a long time, taking a snapshot and raising an alarm when a suspicious object intrudes; systems that constantly scan fixed locations; and systems that monitor the panoramic surroundings in real time.
Compared with the former two warning systems, the panoramic real-time monitoring system has obvious advantages, can perform panoramic 360-degree imaging on the environment to be monitored by utilizing the photoelectric imaging equipment, can perform uninterrupted observation on surrounding 360-degree scenes in real time, can alarm the invasion suspicious target, and is an effective mode for protecting key areas.
Disclosure of Invention
One purpose of the invention is to provide an alert system capable of accurately and rapidly monitoring, identifying and tracking intrusion suspicious targets, addressing the problems that fixed-azimuth monitoring and patrol monitoring in the prior art can hardly achieve real-time monitoring and rapid capture of intrusion suspicious targets over a panoramic area and suffer a high recognition error rate.
The technical scheme adopted by the invention to solve the technical problem is as follows: an intrusion alert photoelectric monitoring system comprises five visible light digital cameras for panoramic image acquisition; a continuously variable-focus large-zoom-ratio telephoto camera for tracking and capturing targets of interest; an electric pan-tilt for controlling the horizontal rotation and pitching motion of the telephoto camera; a warning platform base for carrying the electric pan-tilt, the telephoto camera and the visible light digital cameras; a central control processing unit for image acquisition and processing, target identification and tracking, and control of the electric pan-tilt; and a display terminal for man-machine interaction, image display and threat warning. The five visible light digital cameras collect panoramic images and send them back to the central control processing unit for processing; after an intrusion suspicious target is monitored, it is identified and classified, and according to the set alert level the central control processing unit transmits the target direction to the electric pan-tilt so that the telephoto camera is aimed at the intrusion suspicious target for continuous tracking, while the identification result is sent to the display terminal for display.
Another object of the present invention is to provide a method for identifying and tracking an intrusion suspicious target based on the above-mentioned intrusion alert photoelectric monitoring system, which comprises the following steps:
s1, arranging a panoramic camera consisting of five visible light digital cameras on a warning platform base to obtain a scene image over the horizontal 360-degree panoramic range; the telephoto camera is mounted on the electric pan-tilt, which is arranged on the warning platform base, for observing and tracking targets of interest;
s2, monitoring intrusion suspicious targets in the monitored panoramic area: an adaptive threshold is set according to the current external shooting environment, and when a moving target is monitored, the difference result of adjacent frames is compared with the adaptive threshold to determine the target area;
s3, identifying and classifying the intrusion suspicious targets in the target area determined in step S2;
s4, manually or automatically tracking the invasion suspicious target according to the preset alert level;
and S5, controlling the electric pan-tilt to rotate based on the coordinates of the invaded suspicious target, enabling the tele-camera to continuously aim at the invaded suspicious target, and displaying the image and the alarm result on the display terminal.
In the method for identifying and tracking an intrusion suspicious object, the method for determining a moving object in step S2 includes the following steps:
s21, continuously shooting the surrounding environment through the panoramic camera, establishing a background model, and taking the background model as a standard for difference values of other frames;
s22, setting a threshold value, carrying out differential operation on the current frame shot by the panoramic camera and the background model, and judging the area larger than the threshold value as a motion area;
s23, owing to the influence of illumination, humidity and wind-induced shaking, the continuously shot background changes slowly; a dynamic adaptive threshold is therefore set to adapt to the different background templates caused by various weather changes;
s24, setting the adaptive threshold as:
AdpBG(x,y,t)=αL(x,y,t)+βH(x,y,t)+γW(x,y,t)
AdpBG(x, y, t) in the above formula represents the adaptive threshold, which depends on the pixel position and time; L, H and W represent the illumination, humidity and wind factors, with weight factors α, β and γ respectively. The three factors influencing the threshold take initial values from the current environmental conditions, subject to the range constraints:
L+H+W=255
α+β+γ=1;
adjusting the illumination factor according to the morning, midday and evening periods of each day and weather conditions such as sunny, cloudy and rainy days, and adjusting the humidity and wind factors according to changes in the weather humidity;
s25, based on the set adaptive threshold, carrying out differential operation on the pixel points in the current frame image and the adaptive threshold, searching the regions met in the current frame, and determining the regions with significant difference from the background as target regions:
D(x,y,t)=|f(x,y,t)-f(x,y,t-1)|
ROI(x, y, t) = 1, if D(x, y, t) > AdpBG(x, y, t); ROI(x, y, t) = 0, otherwise
in the above equation, D (x, y, t) represents the difference result, and ROI (x, y, t) represents a moving image in which non-zero pixels are the region where the moving object is located.
In the method for identifying and tracking an intrusion suspicious target, the method for identifying the intrusion suspicious target in step S3 includes the following steps:
s31, performing further morphological processing on the selected ROI and filling the region; considering that the resolution of the panoramic camera is generally not lower than 1920 x 1080, the target region should not be selected too small, so that targets can be classified quickly;
s32, selecting a convolutional neural network as the network model, adding an additional convolutional layer at the end of the network, and reducing the feature scale through the operation of this convolutional layer;
and S33, adding other intrusion suspicious targets to be monitored based on the image data in the ImageNet image library, constructing a self-built database as a positive sample for convolutional neural network training, and training by a back propagation algorithm to obtain an intrusion suspicious target recognition network model.
In the method for identifying and tracking an intrusion suspicious target, the method for tracking the intrusion suspicious target in step S4 includes the following steps:
s41, for a certain target which is expected to be continuously tracked and shot, a rectangular area can be manually selected, and a target which is preferentially tracked can be selected according to the preset threat degree of the suspicious target of invasion;
s42, selecting a rectangular area and taking two adjacent frames as the input of the full convolution asymmetric twin network, where the first frame x' is the template picture and the next frame z' is the test picture; the search image corresponding to the target object is scanned with a sliding window within the rectangular area so as to determine the real position of the target object;
s43, processing the two images with convolutional neural networks CNN having learning parameters ρ, the two CNN network structures each consisting of a convolutional layer, a normalization layer, a nonlinear activation layer, a pooling layer, a convolutional layer, a nonlinear activation layer, a convolutional layer and a nonlinear activation layer;
s44, recalculating a new template in each frame, then combining the new template with the former moving average template, adding relevant filtering after a network in the full convolution asymmetrical twin network in order to realize real-time tracking, and performing cross correlation on two branches of the network:
h_ρ,s,b(x', z') = s·ω(f_ρ(x')) ∗ f_ρ(z') + b
in the above formula, ω is the correlation filter module: from the training feature map f_ρ(x'), a standard correlation filter template ω is computed by solving a ridge regression problem in the frequency domain; meanwhile, a scalar scale parameter s and a bias b are introduced so that the score range is suitable for regression, and offline training is then carried out;
s45, each cross-correlation result of the two images is expressed by a spatial map of labels c_i taking values +1 or -1: the real object location belongs to the positive class, denoted by 1, and all other positions belong to the negative class, denoted by -1;
s46, defining a loss function, and minimizing the loss function l on the training set:
arg min_{ρ,s,b} (1/m) Σ_{i=1..m} ℓ(c_i, h_ρ,s,b(x'_i, z'_i))
in the training process a correlation filter over a large area is required; to reduce the influence of the circular boundary on the result, the feature map of x' is pre-multiplied by a cosine window, and the template is finally cropped. The regression cost function is as follows:
J(θ) = (1/(2m))·[ Σ_{i=1..m} (h_θ(x^(i)) - y^(i))² + λ·Σ_{j=1..n} θ_j² ]
in the above formula, ω is a vector of length n excluding the intercept coefficient θ_0; θ is a vector of length n + 1 including the intercept coefficient θ_0; m is the number of samples; n is the number of features;
and S47, training a network, and calculating based on a forward model only when target tracking is performed, wherein the maximum value in the feature mapping is the target position.
In the method for identifying and tracking the intrusion suspicious target, the method for controlling the motion of the electric pan-tilt in the step S5 includes the following steps:
s51, taking the image center of a certain visible light digital camera as 0 degree of the azimuth axis, taking the image center as the No. 1 camera, taking the image center of the second visible light digital camera clockwise as 72 degrees of the azimuth axis, and taking the image centers of other visible light digital cameras respectively corresponding to 144 degrees, 216 degrees and 288 degrees of the azimuth axis;
s52, since the horizontal field of view of each visible light digital camera is larger than 72 degrees, the fields of view of adjacent cameras overlap. Suppose the horizontal fields of view of two adjacent cameras overlap by N pixels, the camera resolution is M x N, and the coordinates of a point p on the image are (x, y). Then the correspondence between the x coordinate on visible light digital camera No. k (k = 1, 2, …, 5), the azimuth angle θ and the pitch angle φ can be determined as follows:
θ = 72°·(k - 1) + (x - M/2)·72°/(M - N)
the pitch angle φ being obtained analogously from the y coordinate and the vertical field of view of the camera;
s53, when the target area is smaller than 100 x 100 pixels, the azimuth angle of the moving target is calculated from the centre coordinates of the target area, and the central control processing unit sends a motion command to the electric pan-tilt so that the continuously variable-focus telephoto camera on it is aimed at the target area; the focal length of the telephoto camera is adjusted until the target area lies between 100 x 100 and 400 x 400 pixels, and the target is then identified and marked according to step S2.
The invention has the following beneficial effects: the invention adopts a method combining panoramic imaging and detail observation, quickly finds moving objects around by using a moving object monitoring method, identifies moving objects by using a convolutional neural network, selects interested objects to track in real time based on preset warning conditions, and can perform amplification observation and identification on long-distance objects.
Compared with current engineering methods, the invention performs horizontal panoramic imaging together with target identification and tracking; the engineering algorithm used achieves higher recognition accuracy than artificial feature extraction methods, and, compared with target detection methods such as SSD, determining the target monitoring area from the motion region is faster, easier to implement and meets the real-time requirement.
Drawings
FIG. 1 is a structural example diagram of an embodiment of the present invention;
FIG. 2 is a schematic diagram of the structure of the central control processing unit of the present invention;
fig. 3 is a flow chart of monitoring and tracking intrusion into a suspicious target according to an embodiment of the present invention.
The figures are numbered: 1, telephoto camera; 2, electric pan-tilt; 3, visible light digital camera; 4, warning platform base; 5, single-board computer.
Detailed Description
The invention will be further explained with reference to the drawings.
Referring to fig. 1, the intrusion alert photoelectric monitoring system disclosed by the invention comprises five visible light digital cameras 3 for panoramic image acquisition; a continuously variable-focus large-zoom-ratio telephoto camera 1 for tracking and capturing targets of interest; an electric pan-tilt 2 for controlling the horizontal rotation and pitching motion of the telephoto camera 1; a warning platform base 4 for carrying the electric pan-tilt 2, the telephoto camera 1 and the visible light digital cameras 3 (panoramic camera), which can be replaced by a processing-unit case housing a single-board computer 5; a central control processing unit for image acquisition and processing, target identification and tracking, and control of the electric pan-tilt 2; and a display terminal for man-machine interaction, image display and threat warning. The five visible light digital cameras 3 collect panoramic images and send them back to the central control processing unit for processing; after an intrusion suspicious target is monitored, it is identified and classified, and according to the set alert level the central control processing unit transmits the target direction to the electric pan-tilt 2 so that the telephoto camera 1 is aimed at the intrusion suspicious target for continuous tracking, while the identification result is sent to the display terminal for display.
Referring to fig. 1 and 2, the central control processing unit of the present invention is used for loading dual-mode alert target recognition software, and includes an image sensor, an image reading module, a motion sensing module, a target tracking module and an equipment control module which are connected in sequence, wherein the target tracking module is connected with the image reading module after being sequentially connected with a target recognition module and a human-computer interaction interface, the target tracking module, the target recognition module and the human-computer interaction interface are all connected with the equipment control module, the equipment control module is connected with the image sensor, and the motion sensing module is connected with the human-computer interaction interface.
Referring to fig. 3, the method for identifying and tracking an intrusion suspicious target of an intrusion alert photoelectric monitoring system disclosed by the invention comprises the following steps:
Firstly, a panoramic camera consisting of five visible light digital cameras 3 is arranged on the warning platform base 4 to obtain a scene image over the horizontal 360-degree panoramic range; the telephoto camera 1 is mounted on the electric pan-tilt 2, which is arranged on the warning platform base 4, for observing and tracking targets of interest.
And secondly, monitoring the suspicious invading object in the monitored panoramic area, setting a self-adaptive threshold according to the current shooting external environment, comparing the difference result of adjacent frames with the self-adaptive threshold when the moving object is monitored, and judging the object area. The specific method is as follows.
And S21, continuously shooting the surrounding environment through the panoramic camera, establishing a background model, and taking the background model as a standard for difference values of other frames.
And S22, setting a threshold value, carrying out difference operation on the current frame shot by the panoramic camera and the background model, and judging the area larger than the threshold value as a motion area.
S23, owing to the influence of illumination, humidity and wind-induced shaking, the continuously shot background changes slowly; a dynamic adaptive threshold is therefore set to adapt to the different background templates caused by various weather changes.
S24, setting the adaptive threshold as:
AdpBG(x,y,t)=αL(x,y,t)+βH(x,y,t)+γW(x,y,t)
AdpBG(x, y, t) in the above formula represents the adaptive threshold, which depends on the pixel position and time; L, H and W represent the illumination, humidity and wind factors, with weight factors α, β and γ respectively. The three factors influencing the threshold take initial values from the current environmental conditions, subject to the range constraints:
L+H+W=255
α+β+γ=1;
The illumination factor is adjusted according to the morning, midday and evening periods of each day and weather conditions such as sunny, cloudy and rainy days; the humidity and wind factors are adjusted according to changes in the weather humidity.
S25, based on the set adaptive threshold, carrying out differential operation on the pixel points in the current frame image and the adaptive threshold, searching the regions met in the current frame, and determining the regions with significant difference from the background as target regions:
D(x,y,t)=|f(x,y,t)-f(x,y,t-1)|
ROI(x, y, t) = 1, if D(x, y, t) > AdpBG(x, y, t); ROI(x, y, t) = 0, otherwise
in the above equation, D (x, y, t) represents the difference result, and ROI (x, y, t) represents a moving image in which non-zero pixels are the region where the moving object is located.
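The motion-detection steps above (frame differencing against a weather-adaptive threshold) can be sketched as follows; the function names, array shapes and sample factor values are illustrative assumptions, not part of the patent:

```python
import numpy as np

def adaptive_threshold(L, H, W, alpha, beta, gamma):
    """AdpBG = alpha*L + beta*H + gamma*W, with L + H + W = 255
    and alpha + beta + gamma = 1 as the range constraints."""
    assert abs((alpha + beta + gamma) - 1.0) < 1e-9
    return alpha * L + beta * H + gamma * W

def motion_roi(prev_frame, cur_frame, adp_bg):
    """Difference the current frame f(x,y,t) against the previous frame
    f(x,y,t-1); pixels whose absolute difference exceeds the adaptive
    threshold form the region where the moving object is located."""
    d = np.abs(cur_frame.astype(np.int32) - prev_frame.astype(np.int32))
    return (d > adp_bg).astype(np.uint8)
```

Because L + H + W = 255 and the weights sum to 1, the threshold stays within the 8-bit grey range and can be compared directly against the absolute frame difference.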
Thirdly, identifying and classifying the suspicious targets of the invasion in the target area in the step S2. The specific method is as follows.
And S31, performing further morphological processing on the selected ROI and filling the region; considering that the resolution of the panoramic camera is generally not lower than 1920 x 1080, the target region should not be selected too small, so that targets can be classified quickly.
S32, a convolutional neural network is selected as the network model; a general classification network such as GoogLeNet, VGG16 or ResNet may be chosen, with an additional convolutional layer added at the end of the network to reduce the feature scale through its operation. For the network to learn and predict quickly, the target area of the input image should not be too large; combined with step S31, the input image is set to 500 x 500 pixels.
S33, based on image data in the ImageNet image library, adding other intrusion suspicious targets to be monitored in a targeted manner, constructing a self-built database as a positive sample of convolutional neural network training, and training through a back propagation algorithm to obtain an intrusion suspicious target recognition network model.
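As a minimal illustration of how the extra convolutional layer in step S32 reduces the feature scale, a valid convolution with stride greater than 1 shrinks the spatial size of a feature map. This is a plain-NumPy sketch of that one operation, not the trained network of the patent:

```python
import numpy as np

def conv2d_stride(feat, kernel, stride=2):
    """Valid 2-D convolution with stride: an additional convolutional
    layer of this form reduces the spatial scale of the feature map."""
    kh, kw = kernel.shape
    H, W = feat.shape
    out_h = (H - kh) // stride + 1
    out_w = (W - kw) // stride + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            patch = feat[i * stride:i * stride + kh, j * stride:j * stride + kw]
            out[i, j] = np.sum(patch * kernel)  # dot product of patch and kernel
    return out
```

An 8 x 8 feature map convolved with a 3 x 3 kernel at stride 2 comes out 3 x 3, which is the kind of scale reduction the text describes.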
And fourthly, manually or automatically tracking the invasion suspicious target according to a preset warning level. The specific method is as follows.
S41, for a certain target object which is expected to be continuously tracked and shot, a rectangular area can be manually selected, and a target which is preferentially tracked can be selected according to the preset threat degree of the suspicious target of invasion.
And S42, a rectangular area is selected, and two adjacent frames are taken as the input of the full convolution asymmetric twin network, where the first frame x' is the template picture and the next frame z' is the test picture; the search image corresponding to the target object is scanned with a sliding window within the rectangular area so as to determine the real position of the target object. The full convolution asymmetric twin network is formed by adding a differentiable layer to one branch of the symmetric twin network; the differentiable layer implements correlation filtering and cropping.
And S43, processing the two images with convolutional neural networks CNN having learning parameters ρ, the two CNN network structures each consisting of a convolutional layer, a normalization layer, a nonlinear activation layer, a pooling layer, a convolutional layer, a nonlinear activation layer, a convolutional layer and a nonlinear activation layer.
S44, recalculating a new template in each frame, then combining the new template with the former moving average template, adding relevant filtering after a network in the full convolution asymmetrical twin network in order to realize real-time tracking, and performing cross correlation on two branches of the network:
h_ρ,s,b(x', z') = s·ω(f_ρ(x')) ∗ f_ρ(z') + b
In the above formula, ω is the correlation filter module: from the training feature map f_ρ(x'), a standard correlation filter template ω is computed by solving a ridge regression problem in the frequency domain; meanwhile, a scalar scale parameter s and a bias b are introduced so that the score range is suitable for regression, and offline training is then carried out.
S45, each cross-correlation result of the two images is expressed by a spatial map of labels c_i taking values +1 or -1: the real object position belongs to the positive class, denoted by 1, and all other positions belong to the negative class, denoted by -1.
S46, defining a loss function, and minimizing the loss function l on the training set:
arg min_{ρ,s,b} (1/m) Σ_{i=1..m} ℓ(c_i, h_ρ,s,b(x'_i, z'_i))
In the training process a correlation filter over a large area is required; to reduce the influence of the circular boundary on the result, the feature map of x' is pre-multiplied by a cosine window, and the template is finally cropped. The regression cost function is as follows:
J(θ) = (1/(2m))·[ Σ_{i=1..m} (h_θ(x^(i)) - y^(i))² + λ·Σ_{j=1..n} θ_j² ]
In the above formula, ω is a vector of length n excluding the intercept coefficient θ_0; θ is a vector of length n + 1 including the intercept coefficient θ_0; m is the number of samples; n is the number of features.
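The regression cost above can be written out directly. This sketch assumes a design matrix X whose first column is ones for the intercept θ_0, which, per the text, is excluded from the regularization term:

```python
import numpy as np

def ridge_cost(theta, X, y, lam):
    """J(theta) = (1/(2m)) * [sum((h - y)^2) + lam * sum(theta_j^2, j >= 1)].
    theta has length n + 1 including the intercept theta_0;
    X is (m, n + 1) with a leading column of ones."""
    m = X.shape[0]
    h = X @ theta                       # linear hypothesis h_theta(x)
    residual = h - y
    reg = lam * np.sum(theta[1:] ** 2)  # theta_0 is not regularized
    return (np.sum(residual ** 2) + reg) / (2 * m)
```

With λ = 0 the cost reduces to plain least squares; the regularization term shrinks only the non-intercept coefficients, matching the distinction between ω and θ in the text.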
And S47, training a network, and calculating based on a forward model only when target tracking is performed, wherein the maximum value in the feature mapping is the target position.
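For the forward-only computation of step S47, the essential operation is a cross-correlation of the template over the search region followed by taking the maximum of the response map as the target position. A brute-force NumPy sketch (the real system correlates the learned CNN features f_ρ rather than raw pixels, and the helper names here are assumptions):

```python
import numpy as np

def correlate_search(template, search):
    """Slide the template over the search image and return the
    cross-correlation response map."""
    th, tw = template.shape
    sh, sw = search.shape
    resp = np.zeros((sh - th + 1, sw - tw + 1))
    for i in range(resp.shape[0]):
        for j in range(resp.shape[1]):
            resp[i, j] = np.sum(search[i:i + th, j:j + tw] * template)
    return resp

def locate(template, search):
    """The maximum of the response map gives the target position."""
    resp = correlate_search(template, search)
    return np.unravel_index(np.argmax(resp), resp.shape)
```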
And fifthly, controlling the electric pan-tilt to rotate based on the coordinates of the invaded suspicious target, enabling the tele-camera to continuously aim at the suspicious target, and displaying the image and the alarm result on the display terminal. The specific method is as follows.
S51, using the image center of one visible light digital camera as 0 ° of the azimuth axis, and using it as the 1 st camera, the image center of the second visible light digital camera clockwise is 72 ° of the azimuth axis, and the image centers of the other visible light digital cameras correspond to 144 °, 216 °, and 288 ° of the azimuth axis, respectively.
S52, since the horizontal field of view of each visible light digital camera is larger than 72 degrees, the fields of view of adjacent cameras overlap. Suppose the horizontal fields of view of two adjacent cameras overlap by N pixels, the camera resolution is M x N, and the coordinates of a point p on the image are (x, y). Then the correspondence between the x coordinate on visible light digital camera No. k (k = 1, 2, …, 5), the azimuth angle θ and the pitch angle φ can be determined as follows:
θ = 72°·(k - 1) + (x - M/2)·72°/(M - N)
the pitch angle φ being obtained analogously from the y coordinate and the vertical field of view of the camera.
S53, when the moving target area is smaller than 100 x 100 pixels, the azimuth angle of the moving target is calculated from the centre coordinates of the target area, and the central control processing unit sends a motion command to the electric pan-tilt so that the continuously variable-focus telephoto camera on it is aimed at the target area; the focal length of the telephoto camera is adjusted until the target area lies between 100 x 100 and 400 x 400 pixels, and the target is then identified and marked according to the steps in S2.
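Assuming the geometry implied by step S52 (each image centre at 72°·(k - 1), with M - N unique pixels spanning 72°), the pixel-to-azimuth mapping could be sketched as follows; the formula is an assumption reconstructed from that overlap geometry, not taken verbatim from the patent:

```python
def pixel_to_azimuth(k, x, M, overlap):
    """Map pixel column x on camera k (1..5) to an azimuth in degrees.
    Camera k's image centre points at 72*(k-1) degrees; adjacent cameras
    overlap by `overlap` pixels, so M - overlap pixels span 72 degrees."""
    deg_per_px = 72.0 / (M - overlap)
    theta = 72.0 * (k - 1) + (x - M / 2.0) * deg_per_px
    return theta % 360.0  # wrap into [0, 360)
```

For example, with M = 1920 and a 120-pixel overlap, the centre column of camera 3 maps to 144 degrees, matching the axis assignments of step S51.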
The present invention is not limited to the above-mentioned preferred embodiments, and any person skilled in the art can derive other variants and modifications within the scope of the present invention, however, any variation in shape or structure is within the scope of protection of the present invention, and any technical solution similar or equivalent to the present application is within the scope of protection of the present invention.

Claims (3)

1. An intrusion alert photoelectric monitoring method is used for identifying and tracking an intrusion suspicious target and is characterized in that: the method comprises the following steps:
s1, arranging a panoramic camera consisting of five visible light digital cameras (3) on a warning platform base (4) and obtaining a scene image within a horizontal full-panoramic 360-degree range; the method comprises the following steps that a tele camera (1) is installed on an electric pan-tilt (2) and then is arranged on a warning platform base (4) and used for observing and tracking an interested target, the electric pan-tilt (2) is used for controlling the tele camera (1) to horizontally rotate and pitch motion, the warning platform base (4) is used for bearing the electric pan-tilt (2), the tele camera (1) and a visible light digital camera (3), a central control processing unit is used for image acquisition and processing, target identification and tracking, the electric pan-tilt (2) is used for controlling, and a display terminal is used for man-machine interaction, image display and threat warning; the method comprises the steps that panoramic images are collected by five visible light digital cameras (3) and sent back to a central control processing unit for processing, after a suspicious invading target is monitored, the suspicious invading target is identified and classified, the target position is transmitted to an electric pan-tilt (2) through the central control processing unit according to a set warning level, so that a long-focus camera (1) is aligned to the suspicious invading target for continuous tracking, and meanwhile, the identification result is transmitted to a display terminal for displaying;
s2, monitoring the intruding suspicious target in the monitored panoramic area; an adaptive threshold is set according to the external environment currently being photographed, and when a moving target is detected, the difference result of adjacent frames is compared with the adaptive threshold to determine the target area; the method for determining the moving target comprises the following steps:
s21, continuously photographing the surrounding environment with the panoramic camera and establishing a background model, which serves as the reference against which other frames are differenced;
s22, setting a threshold value, carrying out differential operation on the current frame shot by the panoramic camera and the background model, and judging the area larger than the threshold value as a motion area;
s23, setting a dynamic adaptive threshold to adapt to different background templates caused by various weather changes;
s24, setting the adaptive threshold as:
AdpBG(x,y,t)=αL(x,y,t)+βH(x,y,t)+γW(x,y,t)
AdpBG(x, y, t) in the above formula represents the adaptive threshold, which depends on the pixel position and on time; L represents the illumination factor, H the humidity factor and W the wind factor, with α, β and γ their respective weight factors; the three factors influencing the threshold take initial values according to the current environmental conditions, subject to the range constraints:
L+H+W=255
α+β+γ=1;
the illumination factor is adjusted according to the morning, noon and evening periods of each day and to weather conditions such as sunny, cloudy and rainy days, and the humidity and wind factors are adjusted according to changes in the weather;
s25, based on the set adaptive threshold, the difference at each pixel of the current frame is computed and compared with the adaptive threshold, the regions satisfying the condition in the current frame are searched for, and the regions differing significantly from the background are determined as target regions:
D(x,y,t)=|f(x,y,t)-f(x,y,t-1)|
ROI(x,y,t)=f(x,y,t), if D(x,y,t)>AdpBG(x,y,t); ROI(x,y,t)=0, otherwise
D(x, y, t) in the above formulas represents the difference result and ROI(x, y, t) the motion image; the non-zero pixels in the image are the areas where the moving objects are located;
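The motion-detection scheme of steps S21–S25 can be sketched as follows; this is an illustrative reconstruction, not the patent's implementation — the factor magnitudes and weight values are assumptions, and simple adjacent-frame differencing stands in for the full background model:

```python
import numpy as np

def adaptive_threshold(L, H, W, alpha=0.5, beta=0.3, gamma=0.2):
    """AdpBG = alpha*L + beta*H + gamma*W, with L + H + W = 255 and
    alpha + beta + gamma = 1, so the threshold stays within [0, 255]."""
    assert abs(alpha + beta + gamma - 1.0) < 1e-9
    return alpha * L + beta * H + gamma * W

def detect_motion(frame_t, frame_prev, adp_bg):
    """D(x, y, t) = |f(x, y, t) - f(x, y, t-1)|; pixels where D exceeds
    the adaptive threshold form the moving-target region (ROI)."""
    diff = np.abs(frame_t.astype(np.int16) - frame_prev.astype(np.int16))
    return (diff > adp_bg).astype(np.uint8)  # non-zero pixels = motion area
```

For example, with assumed factors L = 150, H = 60, W = 45 the threshold is 102, so only pixels whose inter-frame difference exceeds 102 are marked as moving.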
s3, identifying and classifying the invaded suspicious target in the target area in the step S2;
s4, according to the preset alert level, the intrusion suspicious target is tracked:
s41, for a target that is to be continuously tracked and photographed, a rectangular area may be selected manually, or the target to be tracked preferentially may be selected according to the preset threat degree of the intruding suspicious target;
s42, after the rectangular area is selected, two adjacent frames are taken as the input of a fully convolutional asymmetric twin network, the first frame x' being the template image and the next frame z' the test image; the test image is searched for the target object with a sliding window within the rectangular area, so that the true position of the target object is determined;
s43, the two images are processed by convolutional neural networks CNN with learnable parameters ρ; the two CNN network structures each consist of a convolutional layer, a normalization layer, a nonlinear activation layer, a pooling layer, a convolutional layer, a nonlinear activation layer, a convolutional layer and a nonlinear activation layer;
s44, a new template is computed in each frame and combined with the previous moving-average template; a correlation filter is added after one branch of the fully convolutional asymmetric twin network, and the two branches of the network are cross-correlated:
hρ,s,b(x',z')=sω(fρ(x'))*fρ(z')+b
in the above formula, ω(·) is a correlation filtering module that computes a standard correlation filter template ω from the training feature map fρ(x') by solving a ridge regression problem in the frequency domain; at the same time, a scalar scale s and a bias b must be introduced so that the score range is suitable for regression, after which offline training is performed;
s45, each cross-correlation result of the two images is represented by a spatial map of labels ci taking values in {+1, −1}: the true target location belongs to the positive class, denoted by +1, and all other positions belong to the negative class, denoted by −1;
s46, defining a loss function, and minimizing the loss function l on the training set:
argmin over ρ, s, b of E L(c, hρ,s,b(x',z')), where L(c,h) = (1/|D|) Σi log(1 + exp(−ci·hi))
in the training process, a large-area correlation filter must be provided; to reduce the influence of the circular boundary on the result, the feature map of x' is pre-multiplied by a cosine window, and the template is finally cropped; the regression cost function is:
J(θ) = (1/(2m)) [ Σ(i=1..m) (hθ(x(i)) − y(i))² + λ Σ(j=1..n) θj² ]
in the above formula, ω is a vector of length n that excludes the intercept coefficient θ0; θ is a vector of length n+1 that includes the intercept coefficient θ0; m is the number of samples and n is the number of features;
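The regularized regression cost just described can be sketched numerically; the function name and the convention of a leading ones-column in the design matrix are illustrative assumptions, not from the patent:

```python
import numpy as np

def ridge_cost(theta, X, y, lam):
    """Regularized regression cost
        J(theta) = (1/2m) * (sum of squared residuals + lam * ||omega||^2)
    where omega = theta[1:] excludes the intercept coefficient theta_0.
    X is an m x (n+1) design matrix whose first column is all ones."""
    m = X.shape[0]
    residual = X @ theta - y
    reg = lam * np.sum(theta[1:] ** 2)  # intercept theta[0] is not penalized
    return (np.sum(residual ** 2) + reg) / (2.0 * m)
```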
s47, the network is trained; when tracking a target, only the forward model is computed, and the maximum of the feature map gives the target position;
s5, the electric pan-tilt (2) is controlled to rotate based on the coordinates of the intruding suspicious target, so that the tele camera (1) remains aimed at the intruding suspicious target, and the image and the alarm result are displayed on the display terminal.
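The localization step of the tracker (steps S44 and S47) can be illustrated with a minimal sketch: a template feature map is slid over a search feature map, and the position of the maximum of the score map h = s·(template ⋆ search) + b is taken as the target location. All names here are illustrative, and raw arrays stand in for the CNN feature maps fρ(·) and the learned correlation filter of the patent:

```python
import numpy as np

def score_map(template, search):
    """Cross-correlate a template feature map with a search feature map:
    the template is slid over every valid offset of the search map."""
    th, tw = template.shape
    out = np.empty((search.shape[0] - th + 1, search.shape[1] - tw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(template * search[i:i + th, j:j + tw])
    return out

def locate(template, search, s=1.0, b=0.0):
    """h = s * (template cross-correlated with search) + b; the position
    of the maximum response is taken as the target location."""
    h = s * score_map(template, search) + b
    i, j = np.unravel_index(np.argmax(h), h.shape)
    return int(i), int(j)
```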
2. The intrusion alert photoelectric monitoring method according to claim 1, wherein the method for identifying the intruding suspicious target in step S3 comprises the following steps:
s31, performing morphological processing on the selected ROI to fill the region;
s32, a convolutional neural network is selected as the network model, an additional convolutional layer is appended at the end of the network, and the feature scale is reduced by the operation of this convolutional layer;
and S33, on the basis of the image data in the ImageNet image library, other intruding suspicious targets to be monitored are added to construct a self-built database as positive samples for convolutional neural network training, and the intruding-suspicious-target recognition network model is obtained by training with the back-propagation algorithm.
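The region filling of step S31 can be sketched as a border flood-fill: any background pixel not reachable from the image border lies inside the target and is filled. This is a minimal stand-in for the patent's morphological processing; the function name is illustrative:

```python
from collections import deque
import numpy as np

def fill_roi(mask):
    """Fill interior holes in a binary ROI mask (cf. step S31): flood-fill
    the background from the image border; any zero pixel not reachable
    from the border lies inside the target and is set to 1."""
    h, w = mask.shape
    outside = np.zeros((h, w), dtype=bool)
    queue = deque()
    for i in range(h):
        for j in range(w):
            if (i in (0, h - 1) or j in (0, w - 1)) and mask[i, j] == 0:
                outside[i, j] = True
                queue.append((i, j))
    while queue:  # breadth-first search over connected background pixels
        i, j = queue.popleft()
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < h and 0 <= nj < w and mask[ni, nj] == 0 and not outside[ni, nj]:
                outside[ni, nj] = True
                queue.append((ni, nj))
    return np.where(outside, 0, 1).astype(np.uint8)
```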
3. The intrusion alert photoelectric monitoring method according to claim 1 or 2, wherein the method for controlling the movement of the electric pan-tilt (2) in step S5 comprises the following steps:
s51, the image center of one visible light digital camera (3), taken as camera No. 1, is placed at 0 degrees of the azimuth axis; the image center of the second visible light digital camera (3) in the clockwise direction is at 72 degrees of the azimuth axis, and the image centers of the other visible light digital cameras (3) are at 144, 216 and 288 degrees of the azimuth axis respectively;
s52, because the field of view of each visible light digital camera (3) in the horizontal direction is larger than 72 degrees, the fields of view of adjacent cameras overlap; if the fields of view of two adjacent cameras overlap by N pixels in the horizontal direction, the camera resolution is M × N, and the coordinate of a point p on the image of visible light digital camera (3) No. k (k = 1, 2, …, 5) is (x, y), then the correspondence between (x, y) and the azimuth angle θ and pitch angle φ can be determined as follows:
θ = 72(k − 1) + (x − M/2) × 72/(M − N)
φ = (y − N/2) × Φv/N
where Φv is the vertical field of view of the camera;
and S53, when the target area is smaller than 100 × 100 pixels, the azimuth angle of the moving target is calculated from the center coordinate of the target area, and the central control processing unit sends a motion instruction to the electric pan-tilt (2), so that the tele camera (1) on the electric pan-tilt (2) is aimed at the target area; the focal length of the tele camera is adjusted so that the target area lies between 100 × 100 and 400 × 400 pixels, after which the target is identified and marked according to step S2.
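A hedged sketch of the pixel-to-angle correspondence in step S52. The original publication gives the mapping only as equation images, so the formulas below are assumptions: the image center of camera k is taken to sit at azimuth 72·(k − 1) degrees, M − overlap_px effective columns are taken to span 72 degrees, and the vertical field of view `vfov_deg` is an assumed parameter:

```python
def pixel_to_angles(k, x, y, M, N, overlap_px, vfov_deg):
    """Map pixel (x, y) of camera k (1..5) to (azimuth, pitch) in degrees.

    Assumed reconstruction: camera k's image center is at azimuth
    72*(k-1); with `overlap_px` overlapping pixels between adjacent
    cameras, M - overlap_px columns span 72 degrees; `vfov_deg` is the
    camera's vertical field of view (not specified in the source text)."""
    deg_per_px = 72.0 / (M - overlap_px)
    azimuth = (72.0 * (k - 1) + (x - M / 2.0) * deg_per_px) % 360.0
    pitch = (y - N / 2.0) * vfov_deg / N
    return azimuth, pitch
```

For instance, the image center of camera No. 2 maps to azimuth 72 degrees and pitch 0, matching the layout of step S51.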
CN201910133621.XA 2019-02-22 2019-02-22 Intrusion alert photoelectric monitoring system and method Active CN109872483B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910133621.XA CN109872483B (en) 2019-02-22 2019-02-22 Intrusion alert photoelectric monitoring system and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910133621.XA CN109872483B (en) 2019-02-22 2019-02-22 Intrusion alert photoelectric monitoring system and method

Publications (2)

Publication Number Publication Date
CN109872483A CN109872483A (en) 2019-06-11
CN109872483B true CN109872483B (en) 2020-09-29

Family

ID=66919125

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910133621.XA Active CN109872483B (en) 2019-02-22 2019-02-22 Intrusion alert photoelectric monitoring system and method

Country Status (1)

Country Link
CN (1) CN109872483B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110443247A (en) * 2019-08-22 2019-11-12 中国科学院国家空间科学中心 A kind of unmanned aerial vehicle moving small target real-time detecting system and method
CN111105429B (en) * 2019-12-03 2022-07-12 华中科技大学 Integrated unmanned aerial vehicle detection method
CN111639765A (en) * 2020-05-15 2020-09-08 视若飞信息科技(上海)有限公司 Interaction method for using point track and detection domain
CN112614163B (en) * 2020-12-31 2023-05-09 华中光电技术研究所(中国船舶重工集团公司第七一七研究所) Target tracking method and system integrating Bayesian track reasoning
CN112896879B (en) * 2021-02-24 2022-11-18 同济大学 Environment sensing system for intelligent sanitation vehicle
CN114285974A (en) * 2021-12-24 2022-04-05 中国科学院长春光学精密机械与物理研究所 Panoramic monitoring system for global observation and active laser tracking
CN115052109B (en) * 2022-08-16 2022-11-29 北京理工大学 Target positioning method and system based on multiple types of cameras
CN115359609A (en) * 2022-08-17 2022-11-18 杭州国巡机器人科技有限公司 Method and system for autonomously driving living body invasion
CN115376257A (en) * 2022-08-17 2022-11-22 杭州国巡机器人科技有限公司 Method and system for autonomously identifying living body invasion
CN116052081A (en) * 2023-01-10 2023-05-02 山东高速建设管理集团有限公司 Site safety real-time monitoring method, system, electronic equipment and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104704816A (en) * 2012-09-25 2015-06-10 Sk电信有限公司 Apparatus and method for detecting event from plurality of photographed images
JP6336693B1 (en) * 2016-12-09 2018-06-06 株式会社日立国際電気 Water intrusion detection system and method
CN108737793A (en) * 2010-12-30 2018-11-02 派尔高公司 Use camera network moving object tracking
CN108810464A (en) * 2018-06-06 2018-11-13 合肥思博特软件开发有限公司 A kind of intelligence is personal to identify image optimization tracking transmission method and system

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9159210B2 (en) * 2012-11-21 2015-10-13 Nettalon Security Systems, Inc. Method and system for monitoring of friend and foe in a security incident
CN104301669A (en) * 2014-09-12 2015-01-21 重庆大学 Suspicious target detection tracking and recognition method based on dual-camera cooperation
US10402658B2 (en) * 2016-11-03 2019-09-03 Nec Corporation Video retrieval system using adaptive spatiotemporal convolution feature representation with dynamic abstraction for video to language translation
CN106707296B (en) * 2017-01-09 2019-03-05 华中科技大学 It is a kind of based on the unmanned machine testing of Based on Dual-Aperture photo electric imaging system and recognition methods
CN107483889A (en) * 2017-08-24 2017-12-15 北京融通智慧科技有限公司 The tunnel monitoring system of wisdom building site control platform
CN108174165A (en) * 2018-01-17 2018-06-15 重庆览辉信息技术有限公司 Electric power safety operation and O&M intelligent monitoring system and method
CN108449545B (en) * 2018-03-29 2020-10-30 北京环境特性研究所 Monitoring system and application method thereof
CN108647577B (en) * 2018-04-10 2021-04-20 华中科技大学 Self-adaptive pedestrian re-identification method and system for difficult excavation
CN108600701B (en) * 2018-05-02 2020-11-24 广州飞宇智能科技有限公司 Monitoring system and method for judging video behaviors based on deep learning
CN108829848A (en) * 2018-06-20 2018-11-16 华中科技大学 A kind of image search method and system
CN108509946A (en) * 2018-07-05 2018-09-07 常州纺织服装职业技术学院 A kind of intelligent Building System based on recognition of face
CN109785562B (en) * 2018-12-29 2023-08-15 华中光电技术研究所(中国船舶重工集团有限公司第七一七研究所) Vertical photoelectric ground threat alert system and suspicious target identification method


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Target tracking fusing hierarchical convolutional features and a scale-adaptive kernel correlation filter; Bai Bing et al.; Journal of Chinese Computer Systems (小型微型计算机系统); 2017-09-15; Vol. 38, No. 9; pp. 2062-2066 *

Also Published As

Publication number Publication date
CN109872483A (en) 2019-06-11

Similar Documents

Publication Publication Date Title
CN109872483B (en) Intrusion alert photoelectric monitoring system and method
CN105894702B (en) Intrusion detection alarm system based on multi-camera data fusion and detection method thereof
CN110674746B (en) Method and device for realizing high-precision cross-mirror tracking by using video spatial relationship assistance, computer equipment and storage medium
CN108109385B (en) System and method for identifying and judging dangerous behaviors of power transmission line anti-external damage vehicle
CN107483889A (en) The tunnel monitoring system of wisdom building site control platform
CN111242025B (en) Real-time action monitoring method based on YOLO
CN103929592A (en) All-dimensional intelligent monitoring equipment and method
KR101982751B1 (en) Video surveillance device with motion path tracking technology using multi camera
CN113159466B (en) Short-time photovoltaic power generation prediction system and method
CN112785628B (en) Track prediction method and system based on panoramic view angle detection tracking
CN108897342B (en) Positioning and tracking method and system for fast-moving civil multi-rotor unmanned aerial vehicle
KR100777199B1 (en) Apparatus and method for tracking of moving target
CN113452912B (en) Pan-tilt camera control method, device, equipment and medium for inspection robot
CN110619276A (en) Anomaly and violence detection system and method based on unmanned aerial vehicle mobile monitoring
CN109917364A (en) A kind of warning system of integrated radar guiding and photoelectric tracking function
CN113507577A (en) Target object detection method, device, equipment and storage medium
CN109785562B (en) Vertical photoelectric ground threat alert system and suspicious target identification method
CN109841022B (en) Target moving track detecting and alarming method, system and storage medium
CN113194249A (en) Moving object real-time tracking system and method based on camera
KR102171384B1 (en) Object recognition system and method using image correction filter
CN112954188B (en) Human eye perception imitating active target snapshot method and device
Chan A robust target tracking algorithm for FLIR imagery
CN115984768A (en) Multi-target pedestrian real-time detection positioning method based on fixed monocular camera
CN114912536A (en) Target identification method based on radar and double photoelectricity
US20100296743A1 (en) Image processing apparatus, image processing method and program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant