CN115361499B - Dual-machine cooperative border defense target recognition and tracking system and method - Google Patents

Dual-machine cooperative border defense target recognition and tracking system and method

Info

Publication number
CN115361499B
Authority
CN
China
Prior art keywords
image
target
fusion
visible light
infrared
Prior art date
Legal status
Active
Application number
CN202210982619.1A
Other languages
Chinese (zh)
Other versions
CN115361499A
Inventor
杨帆 (Yang Fan)
程政 (Cheng Zheng)
王忠林 (Wang Zhonglin)
Current Assignee
Xiaoshi Technology Jiangsu Co ltd
Original Assignee
Xiaoshi Technology Jiangsu Co ltd
Priority date
Filing date
Publication date
Application filed by Xiaoshi Technology Jiangsu Co ltd filed Critical Xiaoshi Technology Jiangsu Co ltd
Priority to CN202210982619.1A priority Critical patent/CN115361499B/en
Publication of CN115361499A publication Critical patent/CN115361499A/en
Application granted granted Critical
Publication of CN115361499B publication Critical patent/CN115361499B/en


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • GPHYSICS
    • G08SIGNALLING
    • G08BSIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B13/00Burglar, theft or intruder alarms
    • G08B13/18Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
    • G08B13/189Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
    • G08B13/194Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
    • G08B13/196Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
    • G08B13/19602Image analysis to detect motion of the intruder, e.g. by frame subtraction
    • G08B13/19608Tracking movement of a target, e.g. by detecting an object predefined as a target, using target direction and or velocity to predict its new position
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/30Transforming light or analogous information into electric information
    • H04N5/33Transforming infrared radiation

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a dual-machine cooperative border defense target recognition and tracking system comprising beyond-visual-range intelligent camera terminals and an edge computing device deployed along the border line. The first beyond-visual-range intelligent camera terminal serves as a lookout for discovering and identifying targets intruding across the border line; the second serves as a tracker that receives instructions, locks onto targets and tracks them in real time. The edge computing device analyzes the video data locally in real time, detects target attributes, commands the tracker to follow the target, receives the real-time data the tracker sends back, derives the target track and coordinates, and sends alarm information to the border defense monitoring center. Based on visible-infrared fusion, the edge computing device performs complementarity analysis of visible-light and infrared imaging on the multi-source images to achieve target detection and recognition. This addresses the wide span of border lines and the difficulty of covering them with dedicated private-network lines, while dual-machine cooperation improves tracking efficiency.

Description

Dual-machine cooperative border defense target recognition and tracking system and method
Technical Field
The invention relates to the technical field of target recognition, and in particular to a dual-machine cooperative border defense target recognition and tracking system and method.
Background
Existing long-range laser night-vision integrated pan-tilt cameras used for border-line inspection support both visible-light imaging and laser-illuminated imaging. They use high-power, highly homogenized infrared laser illumination with a megapixel ultra-low-illumination telephoto camera; laser homogenization keeps the night-vision picture even and clear, enabling continuous long-distance day-and-night monitoring, and they suit routine long-range security surveillance.
However, these cameras have weak target recognition and analysis capability and struggle to distinguish target attributes (person, animal, unmanned aerial vehicle, etc.). Real-time imagery must be sent to a back end, whose computing resources handle data processing and cooperative scheduling; in the complex environments along a border line this often introduces large delays, so the tracking effect is unsatisfactory. In particular, in single-camera operation it is difficult to reconcile routine target monitoring with dynamic target tracking: dynamic tracking is mostly unachievable, and target position data cannot be computed in real time.
Disclosure of Invention
The invention aims to provide a dual-machine cooperative border defense target recognition and tracking system and method that combine beyond-visual-range small-target detection with night infrared-imaging target recognition to accurately detect and identify border-intrusion targets such as people, animals and unmanned aerial vehicles in beyond-visual-range scenes, and that use dual-machine cooperative operation to support normalized patrol, thereby achieving continuous long-range day-and-night monitoring, early warning and tracking of the border line.
According to a first aspect of the present invention, a dual-machine cooperative border defense target recognition and tracking system is provided, comprising:
a first beyond-visual-range intelligent camera terminal and a second beyond-visual-range intelligent camera terminal deployed at a border line, the first serving as a lookout for discovering and identifying targets intruding across the border line, and the second serving as a tracker for receiving instructions, locking onto targets and tracking them in real time;
an edge computing device for receiving the target discovery and identification data sent by the lookout, analyzing beyond-visual-range small targets in real time, detecting target attributes, commanding the tracker to track the target, receiving the real-time data sent back by the tracker, obtaining the target track and coordinates, and sending alarm information to a computer system deployed at a remote border defense monitoring center;
wherein the target discovery and identification data sent by the lookout comprise an infrared image from an infrared imaging channel and a visible-light image from a visible-light imaging channel;
and the edge computing device performs complementarity analysis of visible-light and infrared imaging on the multi-source images based on visible-infrared fusion, achieving target detection and recognition.
According to a second aspect of the present invention, there is also provided a border defense target recognition and tracking method, comprising the steps of:
deploying beyond-visual-range intelligent camera terminals, and an edge computing device adjacent to them, at a border line, the first beyond-visual-range intelligent camera terminal serving as a lookout for discovering and identifying targets intruding across the border line, and the second serving as a tracker for receiving instructions, locking onto targets and tracking them in real time;
the edge computing device receiving the target discovery and identification data sent by the lookout, analyzing beyond-visual-range small targets in real time, detecting target attributes, commanding the tracker to track the target, receiving the real-time data sent back by the tracker, obtaining the target track and coordinates, and sending alarm information to a computer system deployed at a remote border defense monitoring center;
the edge computing device performing complementarity analysis of visible-light and infrared imaging on the multi-source images based on visible-infrared fusion to realize target detection and recognition.
According to the method, target detection is integrated with night infrared-imaging target recognition for continuous day-and-night long-range monitoring of the border defense line. The edge computing device identifies and analyzes distant small targets online in real time; through dual-machine cooperation it schedules the tracker to follow the target's route in real time, locks the target's longitude-latitude position, and reports early warnings and real-time positioning data to the monitoring center. The border line is thus continuously monitored, warned about and tracked over long distances day and night, and border-intrusion targets such as personnel, animals and unmanned aerial vehicles are detected in beyond-visual-range scenes. Meanwhile, dual-machine cooperative operation keeps normal patrol and dynamic target tracking from interfering with each other, defeating interference from complex environments and the "lure the tiger from the mountain" diversion tactics of illegal border crossers.
It should be understood that all combinations of the foregoing concepts, as well as additional concepts described in more detail below, may be considered a part of the inventive subject matter of the present disclosure as long as such concepts are not mutually inconsistent. In addition, all combinations of claimed subject matter are considered part of the disclosed inventive subject matter.
The foregoing and other aspects, embodiments, and features of the present teachings will be more fully understood from the following description, taken together with the accompanying drawings. Other additional aspects of the invention, such as features and/or advantages of the exemplary embodiments, will be apparent from the description which follows, or may be learned by practice of the embodiments according to the teachings of the invention.
Drawings
The drawings are not intended to be drawn to scale. In the drawings, each identical or nearly identical component that is illustrated in various figures may be represented by a like numeral. For purposes of clarity, not every component may be labeled in every drawing. Embodiments of various aspects of the invention will now be described, by way of example, with reference to the accompanying drawings, in which:
Fig. 1 is a schematic diagram of a beyond-visual-range intelligent camera terminal in a dual-machine cooperative border defense target recognition and tracking system according to an embodiment of the invention.
Fig. 2 is a schematic deployment diagram of the lookout, the tracker and the edge computing device according to an embodiment of the invention.
Fig. 3 is a schematic workflow diagram of the lookout, the tracker and the edge computing device according to an embodiment of the invention.
Detailed Description
For a better understanding of the technical content of the present invention, specific examples are set forth below, along with the accompanying drawings.
Aspects of the invention are described in this disclosure with reference to the drawings, in which are shown a number of illustrative embodiments. The embodiments of the present disclosure are not necessarily intended to include all aspects of the invention. It should be understood that the various concepts and embodiments described above, as well as those described in more detail below, may be implemented in any of a number of ways, as the disclosed concepts and embodiments are not limited to any implementation. Additionally, some aspects of the disclosure may be used alone or in any suitable combination with other aspects of the disclosure.
Referring to figs. 1-2, a dual-machine cooperative border defense target recognition and tracking system according to an exemplary embodiment of the invention comprises beyond-visual-range intelligent camera terminals deployed at the border line and an edge computing device.
The beyond-visual-range intelligent camera terminals are deployed along the border line, particularly at high observation points, to monitor border-intrusion targets in real time and carry out track-tracking tasks.
In the embodiment of the invention, two beyond-visual-range intelligent camera terminals are deployed at the border line: the first serves as the lookout 10, i.e. the host, for discovering and identifying targets intruding across the border line, and the second serves as the tracker 20, i.e. the auxiliary, for receiving instructions, locking onto targets and tracking them in real time.
Each beyond-visual-range intelligent camera terminal is preferably two beyond-visual-range infrared cameras sharing one pan-tilt head.
The edge computing device 30, deployed at the border line adjacent to the beyond-visual-range intelligent camera terminals, may be installed in a cabinet below the pan-tilt head, for example in the pole-mounted cabinet that accompanies the camera terminal.
The edge computing device 30 receives the target discovery and identification data sent by the lookout, analyzes beyond-visual-range small targets in real time, detects target attributes, commands the tracker to track the target, receives the real-time data the tracker sends back, obtains the target track and coordinates, and sends alarm information to a computer system deployed at the remote border defense monitoring center.
The target discovery and identification data sent by the lookout 10 comprise an infrared image from the infrared imaging channel and a visible-light image from the visible-light imaging channel.
Thus, the edge computing device 30 performs complementarity analysis of visible-light and infrared imaging on the multi-source images based on visible-infrared fusion, realizing target detection and recognition.
In an alternative embodiment, the edge computing device 30 may be a multi-core edge computing device with video analysis and processing capability, for example an IVP02C artificial-intelligence device: it supports running deep-learning neural-network algorithms, integrates a high-performance AI processing module, can encode and decode 8K high-definition video, and integrates multiple video-signal processing units. Light weight, high performance, low power consumption and convenient access make such devices widely used in face recognition, intelligent security, behavior analysis and other fields.
In an alternative embodiment, the tracker 20 tracks the identified target in real time according to instructions from the edge computing device, measures the distance between the target and the tracker from the disparity between the two images using binocular vision ranging, computes the coordinate position of the tracked target, and returns it to the edge computing device. The edge computing device 30 derives the target track and coordinates from the returned coordinate data and sends alarm information to a computer system deployed at the remote border defense monitoring center.
The remote border defense monitoring center communicates with the edge computing device over a wireless network or a dedicated line to receive data, and provides alarm management, inspection-station ledger management, data analysis and mining, visual display of services, and similar functions.
In the workflow shown in fig. 3, the edge computing device 30 extracts effective image frames from the original video data sent by the lookout 10 and analyzes them, invoking the target detection model to identify intruding objects such as people, animals and unmanned aerial vehicles, completing beyond-visual-range target recognition and analysis on site. Once an intruding object is confirmed as a tracking target, the device reports an intrusion alarm to the border defense monitoring center and simultaneously commands the tracker to capture and lock the object. The tracker, with its dynamic zoom capability, tracks the target in real time, completes the computation of the intruding target's position by ranging and positioning, and reports the computed position to the monitoring center in real time.
The target detection model may wrap a commercial target detection algorithm, in particular a deep-learning neural-network recognition algorithm, pre-deployed on the edge computing device.
After the border defense monitoring center receives the alarm, the management computer forwards it directly to patrol officers and connects to the tracker's video feed so the tracking imagery is displayed intuitively; at the same time, from the position data reported by the edge computing device, it draws the tracked object's trajectory and position in real time on a GIS map, completing the overlay display of the target track.
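For illustration only, the following minimal Python sketch mirrors this edge-side workflow; every interface here (video_source, detector, tracker_link, monitoring_center and their methods) is a hypothetical placeholder, not an API disclosed by the patent.

    import time

    TRACKED_CLASSES = {"person", "animal", "uav"}  # intrusion classes of interest

    def edge_loop(video_source, detector, tracker_link, monitoring_center):
        """Analyze lookout-camera frames locally; dispatch the tracker on a hit."""
        for frame in video_source:                       # effective image frames
            detections = detector(frame)                 # pre-deployed detection model
            intruders = [d for d in detections if d.label in TRACKED_CLASSES]
            if not intruders:
                continue
            target = max(intruders, key=lambda d: d.score)   # highest-confidence target
            monitoring_center.report_alarm(target.label, time.time())  # intrusion alarm
            tracker_link.lock_and_track(target.bbox)     # command the tracker camera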
In the embodiment of the invention, the edge computing device 30 uses visible-infrared fusion analysis to exploit the complementarity of visible-light and infrared imaging across the multi-source images: visible-light imaging is information-rich, while infrared imaging still captures the target under low illumination. Image fusion therefore yields better imaging and raises the success rate of target detection and recognition.
In the specific processing, the edge computing device detects intruding objects from the differential operation between the real-time image and the background image, combined with visible-infrared image fusion, to filter out background image signals; it then performs target recognition on the intruding object with a preset target recognition model.
As an alternative embodiment, the edge computing device comprises a multi-source image fusion module and a fused-image differential processing module:
the multi-source image fusion module performs image fusion of the visible-light image and the infrared image;
the fused-image differential processing module performs visible-infrared fused-image differential processing at target pixels.
Wherein, as an alternative embodiment, the multi-source image fusion module is configured to perform image fusion in the following manner:
IF(m,n) = k_RGB·RGB(m,n) + k_IR·IR(m,n)
wherein IF(m,n) represents the visible-infrared fused image, RGB(m,n) represents the visible-light image with pixel size m × n, IR(m,n) represents the infrared image with pixel size m × n, k_RGB represents the fusion coefficient of the visible-light channel, and k_IR represents the fusion coefficient of the infrared channel;
the fusion coefficients of the visible and infrared channels are then solved by traversal: according to the fused-image image-quality standard deviation evaluation method, candidate coefficient pairs are scored by the image-quality standard deviation of the resulting fused image, and the pair giving the highest relative image quality is selected as the infrared-channel fusion coefficient and visible-light-channel fusion coefficient.
The standard deviation can be regarded as a measure of image contrast: it reflects how widely pixel values in the image are dispersed about the mean. The greater the standard deviation of the fused image, the better its quality.
In the embodiment of the invention, the fused-image standard deviation evaluation method computes the image standard deviation for each candidate pair of fusion coefficients, i.e. it traverses candidates and solves for the relatively optimal visible-infrared fusion coefficients according to the image-quality standard deviation.
In the specific processing, solving the visible-light-channel fusion coefficient k_RGB and the infrared-channel fusion coefficient k_IR corresponding to the highest relative image quality comprises calculating them as follows:
IF(m,n) = k_RGB·RGB(m,n) + k_IR·IR(m,n);
k_RGB + k_IR = 1;
k_RGB = rand(0,1);
k_IR = rand(0,1);
Gray = (1/(x·y)) · Σ_{i=1..x} Σ_{j=1..y} IF(i,j);
SD = sqrt( (1/(x·y)) · Σ_{i=1..x} Σ_{j=1..y} (IF(i,j) − Gray)² );
wherein x × y is the image size of the visible-infrared fused image, Gray represents its mean gray value, and SD represents its image-quality standard deviation. The larger the SD value, the higher the contrast of the fused image; the smaller the SD value, the lower the contrast, with pixel values close to the mean gray value.
Therefore, the fusion coefficients of the visible and infrared channels at the highest relative image quality can be found by this optimization, yielding a better fused image.
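As a non-authoritative illustration, the following Python sketch performs this standard-deviation-driven coefficient search, assuming rgb_gray and ir are co-registered single-channel NumPy arrays of equal size. Where the patent samples k_RGB and k_IR via rand(0,1), the sketch traverses a uniform grid instead, which serves the same purpose.

    import numpy as np

    def fuse(rgb_gray, ir, k_rgb):
        # IF = k_RGB * RGB + k_IR * IR, with the constraint k_RGB + k_IR = 1
        return k_rgb * rgb_gray + (1.0 - k_rgb) * ir

    def best_fusion_coefficients(rgb_gray, ir, steps=100):
        # Traverse candidate coefficients; keep the pair whose fused image
        # has the largest standard deviation SD (larger SD = higher contrast).
        best_k, best_sd = 0.0, -1.0
        for k in np.linspace(0.0, 1.0, steps + 1):
            sd = float(fuse(rgb_gray, ir, k).std())  # SD about the mean gray value
            if sd > best_sd:
                best_k, best_sd = k, sd
        return best_k, 1.0 - best_k                  # (k_RGB, k_IR)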
Imaging quality directly affects the target detection and recognition result. To improve target imaging quality, background images and real-time images are acquired at different times as the basis of computation: on the visible-infrared fused images, a differential filtering rule is applied between the real-time image and the background image to suppress the effect of background-signal disturbance on recognition accuracy, producing a target-enhancement effect and raising the success rate of target detection and recognition.
As an alternative embodiment, the fused-image differential processing module is configured to perform image differential processing in the following manner:
according to the differential filtering rule, the differential calculation formula of the visible-infrared fused image at a target pixel is:
ΔIF_t(m,n) = IF_t(m,n) − IFBG_t(m,n)
wherein IF_t(m,n) is the real-time visible-infrared fused image at time t, IFBG_t(m,n) is the visible-infrared background fused image corresponding to time t, ΔIF_t(m,n) is the visible-infrared image differential information, and t is time;
wherein the visible-infrared background fused image IFBG_t(m,n) corresponding to time t is arranged to use the same background fused image within a predetermined period.
The predetermined period divides the 24 hours of a day into segments. In one embodiment the period is set to 2 h, i.e. a background image is acquired every 2 hours and used as the basis for differencing, reducing the computation delay and power consumption that would result from recomputing the background at every moment.
In other embodiments, the predetermined period is set to be related to real-time weather.
For example, under clear weather, according to the forecast or real-time observation, background images are collected with a period T1, e.g. 2 h or 3 h; under cloudy weather with a period T2, e.g. 1 h or less; and under rain, snow, fog and other conditions with a period T3, e.g. 0.5 h or less.
It should be appreciated that, overall, the better the weather, the longer the acquisition period, while under cloudy or other severe conditions the period is relatively shorter: T1 > T2 > T3.
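A minimal Python sketch of such a weather-keyed schedule follows; the values simply mirror the T1/T2/T3 examples above and are illustrative, not prescribed by the patent.

    def background_period_hours(weather: str) -> float:
        # Background-image refresh period keyed to real-time weather,
        # preserving the ordering T1 > T2 > T3 from the embodiment.
        if weather == "clear":
            return 2.0        # T1: e.g. 2 h (or 3 h) in clear weather
        if weather == "cloudy":
            return 1.0        # T2: e.g. 1 h or less under cloud cover
        return 0.5            # T3: e.g. 0.5 h or less in rain, snow, fog, etc.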
Combining real-time/background differencing with visible-infrared fusion effectively filters out most disturbances of the background image signal that would otherwise degrade precision or cause false recognition; an intrusion target is detected by computing the significant difference between two or more consecutive frames, which raises the success rate of target detection and recognition.
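A compact sketch of this fused-image differencing step, assuming fused_t (IF_t) and fused_bg (IFBG_t) are co-registered NumPy arrays; the threshold and minimum-pixel count are illustrative tuning values, not taken from the patent.

    import numpy as np

    def detect_intrusion(fused_t, fused_bg, threshold=25.0, min_pixels=50):
        # Delta IF_t(m,n) = IF_t(m,n) - IFBG_t(m,n); threshold the magnitude
        # to suppress background-signal disturbance and flag a candidate.
        delta = np.abs(fused_t.astype(np.float32) - fused_bg.astype(np.float32))
        mask = delta > threshold                  # pixels with significant change
        return mask, int(mask.sum()) >= min_pixels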
In conjunction with fig. 3, the tracker 20 tracks the identified target in real time according to the instructions of the edge computing device, measures the distance between the target and the tracker from the disparity between the two images using binocular vision ranging (which may be implemented with a standard binocular ranging algorithm), completes the computation of the tracked target's coordinate position, and returns it to the edge computing device 30.
It should be appreciated that in a camera imaging system, four coordinate systems are included in total: world coordinate system, camera coordinate system, image coordinate system, and pixel coordinate system.
The world coordinate system is the three-dimensional geographic coordinate system of the target object in the real world. The camera coordinate system describes the object position from the camera angle, and is used for converting the world coordinate system and the image coordinate system. The image coordinate system is a coordinate system describing the imaging of the target object at the focal length of the camera, and is used for converting the camera coordinate system and the pixel coordinate system. The pixel coordinate system is a coordinate system established by using pixels to represent the position of a point in an image.
Camera imaging converts a target object in the three-dimensional geographic coordinate system into a two-dimensional image: the object's world coordinates are transformed into the camera coordinate system centered on the optical center, then into the image and pixel coordinate systems. With a binocular camera this conversion can be inverted, from camera pixel and image coordinates back to the target's three-dimensional geographic coordinates, so the world coordinates of the locked target are computed by coordinate transformation.
The conversion between the world coordinate system and the two-dimensional pixel coordinate system is calculated as:
Z·[u, v, 1]^T = K·[R | T]·[Xw, Yw, Zw, 1]^T, with K = [[f, 0, u0], [0, f, v0], [0, 0, 1]]
where (u, v) are the coordinates in the pixel coordinate system, (u0, v0) are the pixel coordinates of the center of the camera sensor plane, f is the camera focal length, Z is a scale factor, R is the rotation matrix, T is the translation vector, and (Xw, Yw, Zw) are the physical coordinates of the point in the world coordinate system.
By such coordinate conversion, the coordinate position calculation of the tracking target can be realized.
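For illustration, a minimal sketch of this chain under an idealized rectified binocular model (square pixels; f, baseline, R and T supplied by calibration). It is an assumption-laden simplification, not the patent's exact computation.

    import numpy as np

    def pixel_to_world(u, v, disparity, f, baseline, u0, v0, R, T):
        # Depth from disparity in a rectified stereo pair: Zc = f * B / d.
        Zc = f * baseline / disparity
        # Back-project the pixel through the pinhole model into camera coords.
        Xc = (u - u0) * Zc / f
        Yc = (v - v0) * Zc / f
        Pc = np.array([Xc, Yc, Zc], dtype=float)
        # Camera model Pc = R @ Pw + T  =>  Pw = R.T @ (Pc - T)  (R orthonormal).
        return R.T @ (Pc - T)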
The edge computing device 30 obtains the target track and coordinates according to the coordinate data of the tracked target returned by the tracker 20, and sends alarm information to a computer system deployed in a remote border protection monitoring center.
In summary, the dual-machine cooperative border defense target recognition and tracking system provided by the invention fuses visible-light and infrared-image target recognition: the standard deviation evaluation method yields a visible-infrared fused image of higher relative quality, overcoming the poor low-illumination performance of visible light and the limited information of infrared imaging alone. This improves image quality, enables all-weather detection and recognition of people, animals, unmanned aerial vehicles and other objects, and supports analysis and counting of intruding objects.
The edge computing device applies the differential filtering rule between real-time and background images, effectively filtering out most of the precision loss and false recognition caused by background-signal disturbance, raising the image signal-to-noise ratio, capturing target information quickly, suppressing most false alarms, and greatly improving all-weather detection and recognition accuracy.
Image recognition and analysis are completed at the edge, removing the need to upload high-volume data such as video over a private network to the monitoring center for analysis; this avoids the delay of large-scale image upload and back-end analysis in complex environments and greatly improves the real-time performance of image analysis. Meanwhile, the alarm and position data are small enough to be uploaded in real time over a wireless network, solving the difficulty of covering a wide border line with dedicated private-network lines; and edge computing schedules the lookout and tracker to cooperate directly on site, improving tracking efficiency.
The dual-machine cooperative border defense target recognition and tracking system uses the cooperation of lookout and tracker to reconcile normalized border-line monitoring with dynamic tracking of intruding objects, so that patrol and tracking do not conflict and the "lure the tiger from the mountain" diversion tactic of illegal border crossers is defeated; combined with the edge-side computing and scheduling mechanism, it also saves the time and cost of back-end computing and scheduling. The tracker dynamically tracks the intruding target while the edge device computes its world coordinates in real time; the back-end management computer draws the target's trajectory and position in real time on a GIS map and can mine the data, so intruder information is locked in real time, nearby duty police can be dispatched, and interception measures formulated. Combined with big-data techniques, analyses of peak intrusion times, intrusion locations and routes can be developed, assisting patrol-route planning, monitoring-point planning and intrusion emergency-response planning, comprehensively improving the dynamic patrol capability of the border line.
While the invention has been described with reference to preferred embodiments, it is not intended to be limiting. Those skilled in the art will appreciate that various modifications and adaptations can be made without departing from the spirit and scope of the present invention. Accordingly, the scope of the invention is defined by the appended claims.

Claims (7)

1. A dual-machine cooperative border defense target recognition and tracking system, comprising:
a first beyond-visual-range intelligent camera terminal and a second beyond-visual-range intelligent camera terminal deployed at a border line, the first serving as a lookout for discovering and identifying targets intruding across the border line, and the second serving as a tracker for receiving instructions, locking onto targets and tracking them in real time;
an edge computing device for receiving the target discovery and identification data sent by the lookout, analyzing beyond-visual-range small targets in real time, detecting target attributes, commanding the tracker to track the target, receiving the real-time data sent back by the tracker, obtaining the target track and coordinates, and sending alarm information to a computer system deployed at a remote border defense monitoring center;
wherein the target discovery and identification data sent by the lookout comprise an infrared image from an infrared imaging channel and a visible-light image from a visible-light imaging channel;
the edge computing device performs complementarity analysis of visible-light and infrared imaging on the multi-source images based on visible-infrared fusion, thereby realizing target detection and recognition;
the edge computing device comprises a multi-source image fusion module and a fused-image differential processing module;
the multi-source image fusion module is arranged to perform image fusion of the visible-light image and the infrared image;
the fused-image differential processing module is arranged to perform visible-infrared fused-image differential processing at target pixels;
the multi-source image fusion module is configured to perform image fusion in the following manner:
IF(m,n) = k_RGB·RGB(m,n) + k_IR·IR(m,n)
wherein IF(m,n) represents the visible-infrared fused image, RGB(m,n) represents the visible-light image with pixel size m × n, IR(m,n) represents the infrared image with pixel size m × n, k_RGB represents the fusion coefficient of the visible-light channel, and k_IR represents the fusion coefficient of the infrared channel;
according to the fused-image image-quality standard deviation evaluation method, the relatively optimal fusion coefficients of the corresponding visible and infrared channels are solved by traversal over the image-quality standard deviation, yielding the fusion coefficients of the visible-light and infrared channels at which the relative image quality is highest, namely the infrared-channel fusion coefficient and the visible-light-channel fusion coefficient;
the solving of the fusion coefficients of the corresponding visible light and infrared channels when the relative image quality is highest comprises:
the infrared channel fusion coefficient and the visible light channel fusion coefficient are calculated based on the following modes:
IF(m,n)=k RGB RGB(m,n)+k IR IR(m,n);
k RGB +k IR =1;
k RGB =rand(0,1);
k IR =rand(0,1);
the image size of the visible light-infrared light fusion image is x y, gray represents the image average Gray value of the visible light-infrared light fusion image, SD represents the image quality standard deviation of the visible light-infrared light fusion image, the larger the SD value is, the higher the contrast of the fusion image is, the smaller the SD value is, the lower the contrast of the fusion image is, and the Gray value is close.
2. The dual-machine cooperative border defense target recognition and tracking system according to claim 1, wherein the edge computing device is configured to detect intruding objects based on a differential operation between the real-time image and the background image, combined with visible-infrared image fusion, to filter out background image signals; and to perform target recognition on the intruding object based on a preset target recognition model.
3. The dual-machine cooperative border defense target recognition and tracking system according to claim 1, wherein the fused-image differential processing module is configured to perform image differential processing in the following manner:
according to the differential filtering rule, the differential calculation formula of the visible-infrared fused image at a target pixel is:
ΔIF_t(m,n) = IF_t(m,n) − IFBG_t(m,n)
wherein IF_t(m,n) is the real-time visible-infrared fused image at time t, IFBG_t(m,n) is the visible-infrared background fused image corresponding to time t, ΔIF_t(m,n) is the visible-infrared image differential information, and t is time;
wherein the visible-infrared background fused image IFBG_t(m,n) corresponding to time t is arranged to use the same background fused image within a predetermined period.
4. The dual-machine cooperative border defense target recognition and tracking system according to claim 3, wherein the predetermined period is configured by dividing the 24 hours of a day into time segments.
5. The dual-machine cooperative border defense target recognition and tracking system according to claim 3 or 4, wherein the predetermined period is set in relation to real-time weather.
6. The dual-machine cooperative border defense target recognition and tracking system according to claim 1, wherein the tracker tracks the identified target in real time according to instructions from the edge computing device, measures the distance between the target and the tracker from the disparity between the two images using binocular vision ranging, computes the coordinate position of the tracked target, and returns it to the edge computing device;
and the edge computing device obtains the target track and coordinates from the returned coordinate data of the tracked target and sends alarm information to a computer system deployed at a remote border defense monitoring center.
7. A border defense target recognition and tracking method using the dual-machine cooperative border defense target recognition and tracking system according to any one of claims 1-6, comprising the steps of:
deploying beyond-visual-range intelligent camera terminals, and an edge computing device adjacent to them, at a border line, the first beyond-visual-range intelligent camera terminal serving as a lookout for discovering and identifying targets intruding across the border line, and the second serving as a tracker for receiving instructions, locking onto targets and tracking them in real time;
the edge computing device receiving the target discovery and identification data sent by the lookout, analyzing beyond-visual-range small targets in real time, detecting target attributes, commanding the tracker to track the target, receiving the real-time data sent back by the tracker, obtaining the target track and coordinates, and sending alarm information to a computer system deployed at a remote border defense monitoring center;
the edge computing device performing complementarity analysis of visible-light and infrared imaging on the multi-source images based on visible-infrared fusion to realize target detection and recognition.
CN202210982619.1A 2022-08-16 2022-08-16 Dual-machine cooperative border defense target recognition and tracking system and method Active CN115361499B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210982619.1A CN115361499B (en) 2022-08-16 2022-08-16 Dual-machine cooperative border defense target recognition and tracking system and method


Publications (2)

Publication Number Publication Date
CN115361499A CN115361499A (en) 2022-11-18
CN115361499B true CN115361499B (en) 2024-03-12

Family

ID=84032998

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210982619.1A Active CN115361499B (en) 2022-08-16 2022-08-16 Dual-machine cooperative border defense target recognition and tracking system and method

Country Status (1)

Country Link
CN (1) CN115361499B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105338248A (en) * 2015-11-20 2016-02-17 成都因纳伟盛科技股份有限公司 Intelligent multi-target active tracking monitoring method and system
CN110428008A (en) * 2019-08-02 2019-11-08 深圳市唯特视科技有限公司 A kind of target detection and identification device and method based on more merge sensors
CN110958376A (en) * 2019-12-27 2020-04-03 利卓创新(北京)科技有限公司 Dual-waveband holder linkage intelligent camera and working method
CN111259748A (en) * 2020-01-10 2020-06-09 利卓创新(北京)科技有限公司 Edge calculation and communication system for video monitoring
CN112907624A (en) * 2021-01-27 2021-06-04 湖北航天技术研究院总体设计所 Target positioning and tracking method and system based on multi-band information fusion
CN114115296A (en) * 2022-01-21 2022-03-01 南京理工大学 Intelligent inspection and early warning system and method for key area
CN216916282U (en) * 2021-12-16 2022-07-08 西安邮电大学 Infrared and visible light cross-modal data fusion search robot
CN114723781A (en) * 2022-03-07 2022-07-08 北京拙河科技有限公司 Target tracking method and system based on camera array


Also Published As

Publication number Publication date
CN115361499A (en) 2022-11-18


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information
Address after: No.568 longmian Avenue, gaoxinyuan, Jiangning District, Nanjing City, Jiangsu Province, 211000 (China)
Applicant after: Xiaoshi Technology (Jiangsu) Co.,Ltd.
Address before: No.568 longmian Avenue, gaoxinyuan, Jiangning District, Nanjing City, Jiangsu Province, 211000 (China)
Applicant before: NANJING ZHENSHI INTELLIGENT TECHNOLOGY Co.,Ltd.
GR01 Patent grant