CN113762161A - Intelligent obstacle monitoring method and system - Google Patents

Intelligent obstacle monitoring method and system

Info

Publication number
CN113762161A
CN113762161A (application CN202111050775.6A)
Authority
CN
China
Prior art keywords
image
obstacle
original image
enhancement processing
value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111050775.6A
Other languages
Chinese (zh)
Other versions
CN113762161B (en)
Inventor
申岩
祁吉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Cloud Electric Pen Intelligent Technology Co ltd
Original Assignee
Zhejiang Cloud Electric Pen Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Cloud Electric Pen Intelligent Technology Co ltd
Priority to CN202111050775.6A
Priority claimed from CN202111050775.6A
Publication of CN113762161A
Application granted
Publication of CN113762161B
Legal status: Active (granted)

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01JMEASUREMENT OF INTENSITY, VELOCITY, SPECTRAL CONTENT, POLARISATION, PHASE OR PULSE CHARACTERISTICS OF INFRARED, VISIBLE OR ULTRAVIOLET LIGHT; COLORIMETRY; RADIATION PYROMETRY
    • G01J5/00Radiation pyrometry, e.g. infrared or optical thermometry
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01VGEOPHYSICS; GRAVITATIONAL MEASUREMENTS; DETECTING MASSES OR OBJECTS; TAGS
    • G01V8/00Prospecting or detecting by optical means
    • G01V8/10Detecting, e.g. by using light barriers
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/20Image enhancement or restoration by the use of local operators
    • GPHYSICS
    • G08SIGNALLING
    • G08BSIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B21/00Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
    • G08B21/18Status alarms
    • G08B21/24Reminder alarms, e.g. anti-loss alarms
    • GPHYSICS
    • G08SIGNALLING
    • G08BSIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B25/00Alarm systems in which the location of the alarm condition is signalled to a central station, e.g. fire or police telegraphic systems
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01JMEASUREMENT OF INTENSITY, VELOCITY, SPECTRAL CONTENT, POLARISATION, PHASE OR PULSE CHARACTERISTICS OF INFRARED, VISIBLE OR ULTRAVIOLET LIGHT; COLORIMETRY; RADIATION PYROMETRY
    • G01J5/00Radiation pyrometry, e.g. infrared or optical thermometry
    • G01J2005/0077Imaging
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10048Infrared image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20024Filtering details
    • G06T2207/20032Median filtering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging

Abstract

The invention relates to an intelligent obstacle monitoring method and system, in particular one based on infrared thermal imaging, and aims to solve two problems with existing practice: visible-light equipment serves a single function, and obstacle monitoring depends on manual patrol inspection and cannot run in real time. The method comprises: arranging an infrared thermal imaging device at a part or target area requiring temperature monitoring; acquiring an image or image sequence of the corresponding area from the infrared thermal imaging device as an original image; performing non-uniformity correction on the original image; performing image detail enhancement on the corrected image; extracting the target area from the detail-enhanced image; applying pseudo-color enhancement to the target area; detecting whether an obstacle exists in the pseudo-color-enhanced target area; and monitoring or issuing early warnings for the obstacle. The invention is used for monitoring whether an obstacle exists in a target area, and belongs to the technical field of obstacle monitoring and identification.

Description

Intelligent obstacle monitoring method and system
Technical Field
The invention relates to an intelligent obstacle monitoring method and system, in particular to one based on infrared thermal imaging, and belongs to the technical field of obstacle monitoring and identification.
Background
Existing obstacle monitoring and identification mainly relies on visible-light imaging. Visible-light equipment offers good definition: the gray levels of a visible-light image are more distinct than those of an infrared thermal image, and it carries more texture features, whereas infrared thermal images poorly reflect the surface texture of objects. In practice, however, visible-light equipment monitoring obstacles over a large area serves only a single purpose, while most monitoring installations mainly need two functions, temperature measurement and obstacle troubleshooting, so covering a large area with visible light alone is impractical.
Existing obstacle early warning mainly depends on manual inspection and unmanned aerial vehicle (UAV) inspection, with the following main drawbacks. Manual visual inspection is inefficient, cannot comprehensively check whether obstacles have accumulated on or hung over a target area, is prone to missing faults, cannot provide timely monitoring and early warning, and is costly. UAV inspection also requires dedicated personnel to operate over the inspected target area; to achieve all-weather inspection, the captured video volume is large, the information cannot be processed intelligently afterwards, and the demands on the data platform are high.
In summary, existing visible-light equipment serves a single function, and current obstacle monitoring methods not only rely on manual patrol inspection but also cannot achieve real-time monitoring.
Disclosure of Invention
To solve the problems that existing visible-light equipment serves a single function and that obstacle monitoring relies on manual patrol inspection and cannot run in real time, the invention provides an intelligent obstacle monitoring method and system.
The technical scheme adopted by the invention is as follows:
an intelligent obstacle monitoring method comprises the following steps:
s1, arranging an infrared thermal imaging device at the position needing obstacle monitoring;
s2, acquiring an image or an image sequence of a corresponding area of the infrared thermal imaging device as an original image;
s3, carrying out non-uniformity correction on the original image;
s4, performing image detail enhancement processing on the original image after the nonuniformity correction;
s5, extracting a target area in the original image after the detail enhancement processing;
s6, carrying out pseudo color enhancement processing on the target area;
s7, detecting whether an obstacle exists in the target area after the pseudo color enhancement processing;
and S8, carrying out obstacle monitoring or early warning based on the obstacle identified in the S7.
Further, in S3, the non-uniformity of the original image is corrected by a two-point calibration algorithm.
Further, the method for performing non-uniformity correction on the original image by adopting a two-point calibration correction algorithm comprises the following steps:
S31, select two radiometric calibration points φ_L and φ_H for the infrared focal plane array, and record the response output values of all N×M detector units;
s32, acquiring correction parameters of each detector unit;
G_{i,j} = (S̄_H − S̄_L) / (S_{i,j}(φ_H) − S_{i,j}(φ_L)),   O_{i,j} = S̄_L − G_{i,j}·S_{i,j}(φ_L)
where S̄_L = (1/(N·M)) Σ_{i,j} S_{i,j}(φ_L) and S̄_H = (1/(N·M)) Σ_{i,j} S_{i,j}(φ_H);
S_{i,j}(φ_L) denotes the response output value of detector unit (i, j) at φ_L;
S_{i,j}(φ_H) denotes the response output value of detector unit (i, j) at φ_H;
S̄_L denotes the average of all response output values S_{i,j}(φ_L);
S̄_H denotes the average of all response output values S_{i,j}(φ_H);
i denotes the row index of the detector unit in the infrared focal plane array;
j denotes the column index of the detector unit in the infrared focal plane array;
S33, perform non-uniformity correction on the original image:
S'_{i,j}(φ) = G_{i,j}·S_{i,j}(φ) + O_{i,j}
φ denotes the irradiance incident on the detector unit;
S'_{i,j}(φ) denotes the corrected response output value of the (i, j)-th detector unit;
S_{i,j}(φ) denotes the raw response output value of the (i, j)-th detector unit.
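The steps S31–S33 above can be sketched in code. This is a minimal illustration of the two-point correction, not the patent's implementation; the function and variable names (phi_L/phi_H responses, gain) are illustrative, and plain nested lists stand in for real frames.

```python
# Sketch of two-point non-uniformity correction (S31-S33).
# Illustrative names; raw frames are lists of lists of floats.

def two_point_nuc(raw, resp_low, resp_high):
    """Correct a raw frame using per-detector responses to two uniform sources.

    raw       -- N x M frame to correct
    resp_low  -- per-detector response S_ij(phi_L) to the low-radiance source
    resp_high -- per-detector response S_ij(phi_H) to the high-radiance source
    """
    n, m = len(raw), len(raw[0])
    # Array-wide averages S_L and S_H over all N x M detector units.
    s_l = sum(map(sum, resp_low)) / (n * m)
    s_h = sum(map(sum, resp_high)) / (n * m)
    out = []
    for i in range(n):
        row = []
        for j in range(m):
            # Per-detector gain G_ij from the correction line.
            gain = (s_h - s_l) / (resp_high[i][j] - resp_low[i][j])
            # Corrected value per the two-point formula of S33.
            row.append(gain * (raw[i][j] - resp_low[i][j]) + s_l)
        out.append(row)
    return out
```

A quick sanity check: if every detector sees exactly one of the calibration radiances, the corrected frame is uniform at the corresponding array average.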
Further, in S4, the method for performing the image detail enhancement processing on the original image after the non-uniformity correction includes:
S41, filter the original image after non-uniformity correction, obtaining the filtering result h(x):
h(x) = k(x)⁻¹ ∫ f(ε)·c(ε, x)·s(f(ε), f(x)) dε
k(x) denotes a normalization factor, k(x) = ∫ c(ε, x)·s(f(ε), f(x)) dε;
c(ε, x) denotes the weight generated by the spatial distance between the current pixel x and the neighborhood pixel ε;
s(f(ε), f(x)) denotes the weight generated by the difference between the gray value of the neighborhood pixel and that of the current pixel;
f(ε) denotes the gray value of the neighborhood pixel ε;
f(x) denotes the gray value of the current pixel in the original image after non-uniformity correction;
ε ranges over the neighborhood of x;
S42, subtract the filtering result from the original image to obtain a detail image;
S43, perform min-max normalization on the detail image:
f_out(x, y) = (f_in(x, y) − min) / (max − min) × (L − 1)
f_out(x, y) denotes the normalized result image;
f_in(x, y) denotes the detail image;
min denotes the minimum pixel value and max the maximum pixel value, 0 < min < max < L, where L denotes the number of gray levels of the image.
Further, in S5, the method for extracting the target area in the original image after the detail enhancement processing includes:
S51, extract feature images from the original image after detail enhancement, comprising a contrast feature, an entropy feature and a gradient feature;
S52, generate a multi-modal fusion feature image from the feature images;
S53, sequentially perform pre-flooding (immersion) and region filling on the multi-modal fusion feature image;
S54, extract the target region from the multi-modal fusion feature image after flooding and region filling.
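The feature computation and fusion of S51–S52 can be illustrated with a rough sketch. The equal fusion weights and 3×3 windows are assumptions for illustration; the patent derives the contrast and entropy features via the GLCM and weights them with median filtering and morphological erosion.

```python
# Rough sketch of S51-S52: compute simple per-pixel feature maps and
# fuse them into one multi-modal feature image. Illustrative only.

def window3(img, i, j):
    """3x3 neighborhood of pixel (i, j), clipped at the image border."""
    return [img[a][b] for a in range(max(0, i - 1), min(len(img), i + 2))
                      for b in range(max(0, j - 1), min(len(img[0]), j + 2))]

def fuse_features(img, w=(1 / 3, 1 / 3, 1 / 3)):
    n, m = len(img), len(img[0])
    fused = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            win = sorted(window3(img, i, j))
            median = win[len(win) // 2]   # stands in for the weighted contrast feature
            eroded = win[0]               # morphological erosion = local minimum
            # Gradient by horizontal/vertical differencing.
            gx = abs(img[i][j] - img[min(n - 1, i + 1)][j])
            gy = abs(img[i][j] - img[i][min(m - 1, j + 1)])
            fused[i][j] = w[0] * median + w[1] * eroded + w[2] * (gx + gy)
    return fused
```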
Further, in S6, the pseudo-color enhancement processing is performed on the target region using a spatial-domain gray-to-color transformation.
Further, in S7, an ORB feature matching algorithm is used to detect whether an obstacle exists in the target area after the pseudo color enhancement processing.
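The ORB-based check can be thought of as extracting binary descriptors from a reference (obstacle-free) frame and the current frame, then counting close matches under Hamming distance; too few re-matches suggests the scene changed, i.e. a possible obstacle. The sketch below implements only the brute-force Hamming matching stage over illustrative 8-bit "descriptors" (real ORB descriptors are 256-bit and come from a detector such as OpenCV's cv2.ORB_create); the match threshold and ratio are assumptions.

```python
# Sketch of the matching stage behind the ORB-based obstacle check in S7.
# Descriptors here are small ints standing in for ORB bit-strings.

def hamming(a, b):
    return bin(a ^ b).count("1")

def match_count(ref_desc, cur_desc, max_dist=2):
    """Count reference descriptors that have a close match in the frame."""
    return sum(1 for r in ref_desc
               if cur_desc and min(hamming(r, c) for c in cur_desc) <= max_dist)

def obstacle_suspected(ref_desc, cur_desc, ratio=0.5):
    # Assumption: if fewer than `ratio` of the reference features re-match,
    # flag a change in the monitored area.
    return match_count(ref_desc, cur_desc) < ratio * len(ref_desc)
```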
Further, in S7, detecting whether the target area after pseudo-color enhancement contains an obstacle yields one of the following cases:
(a) no obstacle is detected;
(b) an obstacle is detected:
(b1) an obstacle is detected, but it does not cause the temperature of the monitored part to rise excessively;
(b2) an obstacle is detected and it causes the monitored part to overheat.
Further, in S8, when an obstacle is detected in S7 but does not cause excessive temperature at the monitored part, the obstacle is monitored in real time by the infrared thermal imaging device; when an obstacle is detected in S7 and causes the monitored part to overheat, an early warning is issued for the monitored part immediately.
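The branching rule of S8 reduces to a small decision function. The threshold value and the action names below are illustrative; the patent specifies only the qualitative rule (real-time monitoring vs. immediate warning).

```python
# Minimal sketch of the S8 decision rule for one processed frame.
# overheat_threshold_c is an assumed, illustrative value.

def handle_detection(obstacle_found, max_temp_c, overheat_threshold_c=80.0):
    """Return the action implied by S8."""
    if not obstacle_found:
        return "normal"                  # case (a): nothing to do
    if max_temp_c > overheat_threshold_c:
        return "warn_immediately"        # case (b2): obstacle causes overheating
    return "monitor_in_real_time"        # case (b1): obstacle, but no overheating
```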
An intelligent obstacle monitoring system comprising a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing any of the steps of a method of intelligent obstacle monitoring when executing the computer program.
Advantageous effects:
1. By adopting an infrared thermal imaging system, the invention integrates temperature monitoring and obstacle monitoring: it collects real-time images of a monitored part or target area at fixed points and can monitor both temperature changes and the presence of obstacles, solving the single-purpose limitation of visible-light equipment. Combined with the state of the imaging equipment, information such as the GPS coordinates, camera attitude angle, camera field of view and shooting parameters at the moment of warning is recorded and uploaded to a cloud server via Internet-of-Things communication. Early-warning information is forwarded through a memory to a management and dispatch center, enabling human-machine interaction: obstacles at the monitored part in the infrared image are rendered on a display interface for monitoring and tracking, and the data are stored so that technicians can analyze online whether the actual nature of the obstacle, interfering temperature changes or other real factors are present, and remove the obstacle in time. When an obstacle causes high temperature at a detection point, information is pushed to the client in real time, achieving an early-warning effect with strong practicability.
2. The invention is applicable to obstacle monitoring in many fields. Real-time video streams and images are uploaded to the cloud server, which sends the signal-source address and image information to a designated operation-and-maintenance contact, warning maintenance personnel that an obstacle exists at that location and helping them judge from the infrared image whether the obstacle needs attention or the warning can be ignored. Whether an obstacle target causes high temperature or no obstacle is present, the system can issue early warnings on the various temperature data of the monitoring point, improving safety and reducing maintenance cost.
3. By combining infrared thermal imaging with a cloud server, the proposed method replaces traditional obstacle patrol inspection, makes obstacle monitoring intelligent, reduces labor input, ensures real-time obstacle monitoring and simultaneously provides an obstacle early-warning function.
Drawings
FIG. 1 is a flow chart for generating a visual infrared thermal map;
fig. 2 is a flow chart of S6;
Detailed Description
The first embodiment is as follows: the present embodiment is described with reference to fig. 1 and fig. 2, and the method for intelligently monitoring an obstacle on a distribution line according to the present embodiment includes the following steps:
s1, arranging an infrared thermal imaging device at the position needing obstacle monitoring;
the infrared thermal imaging system is installed at a position or a target area needing obstacle monitoring, the infrared thermal imaging system comprises power supply equipment, an infrared thermal imaging device integrated with an internet of things communication module and a cloud server, the infrared thermal imaging device comprises an infrared thermal imager, the power supply equipment can be used in each environment area, such as streets, places such as the open air, and the like, and in order to improve the practicability of the infrared thermal imaging system, a solar cell panel and a storage battery are used as a power supply of the system. The data information transmission needs data communication, and the Internet of things communication module and the Internet of things network card are used for providing data flow for the system, so that the problem that no private network exists outdoors can be solved. Resolution ratio is adjustable to infrared thermal imager, and the suitable distance can be adjusted according to operating condition to the point location of monitoring, and the point location of real-time supervision gets into the field of vision of camera. The cloud server can store a large amount of uploaded data, data loss is prevented, the edge computing capacity of cloud data is realized, expired data can be deleted regularly, online processing is rapid, information is sent to the client side, and a user of the client side can check the information. The parts needing to be monitored include, but are not limited to, overhead line (including rail transit contact systems) wire clamps, metal wiring terminals, busbar connection points, transformer outgoing lines and other conductor connection points, transformer surface and other instrument parts which can generate high temperature in the fields of electric power, petrifaction, new energy and the like.
S2, acquiring an image or an image sequence of a corresponding area of the infrared thermal imaging device as an original image;
the device and the equipment can be handheld terminal holder camera equipment, a fixed gunlock or dome camera, a ground mobile robot platform, a flying unmanned aerial vehicle platform and other equipment and memories which are hung with a movable holder camera, and infrared light images or image sequence videos of scenes which need temperature early warning and identification and are acquired by a camera lens.
S3, carrying out non-uniformity correction on the original image;
the infrared imaging equipment is wide, and due to the reasons of materials and processes, the response rates of the focal plane detection units are difficult to be consistent, so that different response voltages are given by the final detectors facing uniform radiation targets, and the direct result is the IRFPA imaging effect. The pixel response of the focal plane must be corrected.
In the infrared focal plane array, the response function of each unit detector is a nonlinear function, but in a smaller range, the response curve of the detector can be approximately regarded as a straight line, and assuming that the response of the detector has stability in time, the response output value of a single detector in the infrared focal plane array can be expressed by a linear equation:
S_{i,j}(φ) = g_{i,j}(φ)·φ + o_{i,j}(φ)   (1)
where g_{i,j}(φ) denotes the gain coefficient (responsivity) at irradiance φ;
o_{i,j}(φ) denotes the offset (dark current) at irradiance φ;
φ denotes the irradiance incident on the detector unit;
i denotes the row index of the detector unit in the infrared focal plane array;
j denotes the column index of the detector unit in the infrared focal plane array.
The non-uniformity of the infrared focal plane array manifests as the differences in g_{i,j}(φ) and o_{i,j}(φ) between the detector units.
The basic idea of the correction is as follows:
(1) measuring the response output value of each detector unit by using a reference radiation source to provide uniform irradiance to the infrared focal plane array;
(2) calculating correction parameters of each detector unit;
(3) and when the infrared focal plane array receives the actual scene irradiance, the corresponding correction parameters of each detector unit are used for actually correcting the output of each detector unit.
According to the number of reference-source calibration points, a two-point calibration algorithm is adopted: the response of each detector unit in the array to two uniform blackbody sources of different radiance is measured and correction values are computed, thereby achieving non-uniformity correction.
The algorithm is realized as follows:
Select two radiometric calibration points φ_L and φ_H for the infrared focal plane array, and record the response output values of all N×M detector units;
Average all of these response output values S_{i,j}(φ_L) and S_{i,j}(φ_H) respectively to obtain
S̄_L = (1/(N·M)) Σ_{i=1}^{N} Σ_{j=1}^{M} S_{i,j}(φ_L),   S̄_H = (1/(N·M)) Σ_{i=1}^{N} Σ_{j=1}^{M} S_{i,j}(φ_H)   (2)
where S_{i,j}(φ_L) denotes the response output value of detector unit (i, j) at φ_L;
S_{i,j}(φ_H) denotes the response output value of detector unit (i, j) at φ_H;
S̄_L denotes the average of all response output values S_{i,j}(φ_L);
S̄_H denotes the average of all response output values S_{i,j}(φ_H);
The straight line determined by the points (S_{i,j}(φ_L), S̄_L) and (S_{i,j}(φ_H), S̄_H) is taken as the correction line, where S̄_L denotes the average of the output signals of all detector units of the infrared focal plane array at φ_L, and S̄_H denotes the average of the output signals of all detector units at φ_H.
Under a given irradiance φ, the response output value S_{i,j}(φ) of the (i, j)-th detector unit and the corrected response output value S'_{i,j}(φ) satisfy the proportional relationship:
(S'_{i,j}(φ) − S̄_L) / (S̄_H − S̄_L) = (S_{i,j}(φ) − S_{i,j}(φ_L)) / (S_{i,j}(φ_H) − S_{i,j}(φ_L))   (3)
Solving for S'_{i,j}(φ) gives:
S'_{i,j}(φ) = (S̄_H − S̄_L) / (S_{i,j}(φ_H) − S_{i,j}(φ_L)) · (S_{i,j}(φ) − S_{i,j}(φ_L)) + S̄_L   (4)
Let
G_{i,j} = (S̄_H − S̄_L) / (S_{i,j}(φ_H) − S_{i,j}(φ_L))
and
O_{i,j} = S̄_L − G_{i,j}·S_{i,j}(φ_L)
Then the formula simplifies to
S'_{i,j}(φ) = G_{i,j}·S_{i,j}(φ) + O_{i,j}   (5)
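The equivalence of the full form (4) and the simplified gain/offset form (5) is easy to check numerically. The values below are purely illustrative.

```python
# Consistency check for equations (4) and (5): with G_ij and O_ij defined
# from the two calibration responses, G_ij*S + O_ij reproduces the full
# two-point correction. All values are illustrative.

s_l_bar, s_h_bar = 2.0, 6.0     # array averages at phi_L and phi_H
s_ij_l, s_ij_h = 1.5, 5.0       # this detector's responses at phi_L, phi_H

g_ij = (s_h_bar - s_l_bar) / (s_ij_h - s_ij_l)   # gain G_ij
o_ij = s_l_bar - g_ij * s_ij_l                   # offset O_ij

def corrected_full(s):                            # equation (4)
    return g_ij * (s - s_ij_l) + s_l_bar

def corrected_simple(s):                          # equation (5)
    return g_ij * s + o_ij
```

At the calibration points the corrected output lands on the array averages, as the correction line requires.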
S4, performing image detail enhancement processing on the original image after the nonuniformity correction;
Because the dynamic range of an infrared image is large, image details are easily lost when converting it into a display image suitable for human observation, degrading the viewing effect. Obtaining images with good contrast and rich information is thus an important part of infrared image processing. At present, raw infrared data is generally 14-bit while the display image for human observation is 8-bit, so infrared image processing maps the raw data to the display image with an image transformation algorithm.
To enhance the detail component, the detail information must be extracted from the original image; the large overall background is then compressed while the details are preserved or enhanced. Image details correspond to the high-frequency part of the image, while the overall contour corresponds to the low-frequency part. A detail map can therefore be obtained by subtracting the image's low-pass-filtered version from the original image.
The scheme adopts bilateral (dual-domain) filtering, a combination of spatial-domain and gray-domain filtering that is essentially weighted-average filtering. Unlike ordinary low-pass filtering, its weights depend not only on the spatial distance between the current pixel and each pixel in its neighborhood but also on the gray-value distance between them.
If the image to be filtered (the original image after non-uniformity correction) is f, the bilateral filtering result h(x) can be expressed as:
h(x) = k(x)⁻¹ ∫ f(ε)·c(ε, x)·s(f(ε), f(x)) dε   (6)
where k(x) denotes a normalization factor, k(x) = ∫ c(ε, x)·s(f(ε), f(x)) dε;
c(ε, x) denotes the weight generated by the spatial distance between the current pixel x and the neighborhood pixel ε;
s(f(ε), f(x)) denotes the weight generated by the difference between the gray value of the neighborhood pixel and that of the current pixel;
f(ε) denotes the gray value of the neighborhood pixel ε;
f(x) denotes the gray value of the current pixel in the image to be filtered, i.e. the original image after non-uniformity correction;
ε ranges over the neighborhood of x.
Bilateral filtering is thus a special low-pass filter: the result h(x) is the base component of the image, and the detail component (detail map) is obtained by subtracting the filtering result (the low-pass image) from the original image.
The min-max normalization of the detail image proceeds as follows:
Compute the histogram H(k), k = 0, …, L−1, of the image to be enhanced (the original image after non-uniformity correction), where L denotes the number of gray levels of the image.
Accumulate pixel counts from the left and right ends of the histogram toward the middle:
S1 = H(1) + H(2) + … + H(min)   (7)
S2 = H(L−1) + H(L−2) + … + H(max)   (8)
where S1 denotes the total number of pixels counted from 1 up to min;
S2 denotes the total number of pixels counted from (L−1) down to max;
min denotes the minimum pixel value and max the maximum pixel value, 0 < min < max < L.
S1 and S2 are checked against a preset threshold T: when S1 > T, accumulation stops and the current min is stored; when S2 > T, accumulation stops and the current max is stored.
Min-max normalization is then performed with min as the minimum and max as the maximum:
f_out(x, y) = (f_in(x, y) − min) / (max − min) × (L − 1)   (9)
where f_in(x, y) denotes the input image;
f_out(x, y) denotes the normalized result image.
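Equations (7)–(9) amount to a histogram-clipped contrast stretch, which can be sketched as follows. The gray-level count and threshold T below are illustrative, and outliers beyond [min, max] are clamped before stretching, an assumption the patent does not spell out.

```python
# Sketch of equations (7)-(9): accumulate pixel counts from both ends of
# the histogram until a threshold T is exceeded, then stretch [lo, hi]
# to the full gray range. levels (L) and t (T) are illustrative.

def clipped_normalize(pixels, levels=16, t=1):
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    lo, s1 = 0, 0
    while lo < levels - 1 and s1 + hist[lo] <= t:   # eq. (7): stop once S1 > T
        s1 += hist[lo]
        lo += 1
    hi, s2 = levels - 1, 0
    while hi > lo and s2 + hist[hi] <= t:           # eq. (8): stop once S2 > T
        s2 += hist[hi]
        hi -= 1
    # eq. (9): stretch, clamping outliers into [lo, hi] first.
    return [round((min(max(p, lo), hi) - lo) / (hi - lo) * (levels - 1))
            for p in pixels]
```

Isolated extreme pixels (one dark, one bright below the threshold) are clipped, and the remaining range fills the full gray scale.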
S5, extracting a target area in the original image after the detail enhancement processing;
when the infrared thermal imager is collecting images, the interference mainly comprises:
(1) Interference from heat-conducting components: different components produce different types of interference, and the similarity between the target region and the interference regions makes accurate extraction of the target region difficult.
(2) On overcast days, temperature and lighting leave only a small temperature difference between the target region and the environment, so the target region is located inaccurately. An extraction algorithm is therefore needed to segment the target region effectively from a complex infrared thermal image.
The infrared thermal image target area extraction algorithm mainly comprises three steps:
(1) Feature-image extraction: contrast, entropy and gradient features. The contrast feature image of the channel image and the entropy feature image of the original image are computed via the gray-level co-occurrence matrix (GLCM). Image brightness is reduced by linear blending with an overlay operator, lessening the influence of brightness in the data image on gradient construction, and the gradient feature image is obtained by horizontal and vertical component differencing.
(2) Multi-modal fusion feature map generation: median filtering yields a weighted contrast feature, morphological erosion yields a weighted entropy feature, the overlay-operator method of damping the brightness influence extracts the gradient feature, and the contrast, entropy and gradient features are fused into a multi-modal feature image.
(3) Pre-flooding (immersion) and region filling are applied to the multi-modal feature image to extract the target.
The contrast feature map of the original image is obtained with the gray-level co-occurrence matrix method. The infrared thermal image's channel images mainly express contour features on one hand and high- and low-temperature features on the other; the combined component images express both the contour and the regional temperature. The components are linearly weighted, merged into a new component and converted to a gray image, yielding the contour position of the target region.
The entropy feature map of the original image is likewise obtained with the gray-level co-occurrence matrix method. Entropy reflects the degree of non-uniformity or complexity of the texture in an image and measures its disorder. The contrast feature image is median-filtered with a matrix kernel, giving the median-filtered contrast feature image. The entropy feature image uses morphological erosion with an elliptical operator and a 3 × 3 kernel; erosion can connect regions and edges, and the entropy feature image is obtained by this morphological method.
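The GLCM-based entropy feature described here can be sketched compactly. The single (dx, dy) offset and the small quantization are illustrative choices; libraries such as scikit-image provide full GLCM implementations.

```python
import math

# Sketch of a GLCM entropy feature: build a gray-level co-occurrence
# matrix for one pixel offset and compute the entropy of its normalized
# probabilities. Uniform texture -> low entropy; disorder -> high entropy.

def glcm_entropy(img, levels=4, dx=1, dy=0):
    counts = [[0] * levels for _ in range(levels)]
    total = 0
    for i in range(len(img) - dy):
        for j in range(len(img[0]) - dx):
            counts[img[i][j]][img[i + dy][j + dx]] += 1
            total += 1
    return -sum((c / total) * math.log2(c / total)
                for row in counts for c in row if c)
```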
The method for weakening the influence of image brightness by using a coverage operator is adopted to solve the gradient characteristic of the original image, and the coverage operator is as follows:
f2(x,y)=(1-μ)f0(x,y)+μf1(x,y) (10)
f2(x, y) represents an image with reduced brightness;
(1-. mu.) represents an attenuation factor;
f0(x, y) represents the original image after the detail enhancement processing;
f1(x, y) represents an input image;
when μ changes from 0 to 1, the attenuation factor attenuates the original image f after the detail enhancement processing0Influence of (x, y), f1(x, y) is an input image subjected to cross fusion, f1(x, y) is a completely black image, the effect of the cross-frame fusion is used to reduce the influence of the brightness factor, the brightness of the image is reduced, and the image with reduced brightness is f2(x, y), converting the image from RGB mode to HSI mode, and finding f2I component image I of (x, y)2(x, y) as shown in formula:
I2=0.299Rf2+0.587Gf2+0.114Bf2 (11)
g(x,y)=|I2(x,y)-I2(x+1,y)|+|I2(x,y)-I2(x,y+1)| (12)
I2(x, y) denotes one pixel of f2(x, y); Rf2, Gf2 and Bf2 denote the R, G and B components of f2(x, y). The gradient is found by differencing the horizontal and vertical components: applying the horizontal-vertical difference method of formula (12) to I2(x, y) yields the gradient image g(x, y).
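Equations (10)-(12) can be strung together as one short sketch; the value μ = 0.5 below is an arbitrary illustrative choice, since the method only requires μ in [0, 1]:

```python
import numpy as np

def cover_and_gradient(f0, mu=0.5):
    """Eq. (10): blend the detail-enhanced image f0 with an all-black frame
    f1 to reduce brightness; Eq. (11): take the HSI intensity component I2;
    Eq. (12): horizontal + vertical absolute differences give the gradient."""
    f0 = np.asarray(f0, dtype=np.float64)
    f1 = np.zeros_like(f0)                       # completely black input image
    f2 = (1.0 - mu) * f0 + mu * f1               # brightness-reduced image
    I2 = 0.299 * f2[..., 0] + 0.587 * f2[..., 1] + 0.114 * f2[..., 2]
    gx = np.abs(I2[:-1, :] - I2[1:, :])          # |I2(x, y) - I2(x+1, y)|
    gy = np.abs(I2[:, :-1] - I2[:, 1:])          # |I2(x, y) - I2(x, y+1)|
    return gx[:, :-1] + gy[:-1, :]               # g(x, y) on the common grid
```

A constant image yields an all-zero gradient, while any intensity step produces a non-zero response at its boundary, which is exactly what the contour extraction step needs.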
Seed points are then selected for filling; they can be chosen manually, or the centre of the maximum inscribed polygon of the contrast feature map, found by a morphological method, can serve as the seed point. After region filling is completed, small holes may still appear inside the region because the tolerance range of the difference value is limited; hole filling makes the extracted image of the whole region more complete.
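The seed filling step can be sketched as plain 4-connected region growing from the chosen seed (a breadth-first flood fill); the automatic seed choice via the largest inscribed polygon is omitted here for brevity:

```python
from collections import deque

def flood_fill(mask, seed):
    """4-connected region growing: starting from `seed`, mark every
    connected pixel whose value equals the seed pixel's value."""
    h, w = len(mask), len(mask[0])
    target = mask[seed[0]][seed[1]]
    filled = [[False] * w for _ in range(h)]
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        if 0 <= y < h and 0 <= x < w and not filled[y][x] and mask[y][x] == target:
            filled[y][x] = True
            queue.extend([(y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)])
    return filled

# Example: a 3 x 3 mask with an L-shaped foreground region at the top-left
region = flood_fill([[1, 1, 0], [1, 0, 0], [0, 0, 1]], (0, 0))
```

Only the three pixels connected to the seed are filled; the isolated bottom-right pixel is not, which is why a separate hole-filling pass is still needed afterwards.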
S6, carrying out pseudo color enhancement processing on the target area;
The imaging principles of infrared thermal images and visible light images differ, which determines the essential difference between them: an infrared thermal image reflects the difference in infrared radiation energy emitted by the target and the background, while a visible light image reflects the intensity of sunlight and other light reflected by the target and the background, so in the same scene the gray difference between target and background is larger in visible light. The gray levels of a visible light image are more distinct than those of an infrared thermal image, and the visible light image also carries more texture features; the infrared thermal image is therefore weak at reflecting the surface texture of an object. When an obstacle stops at a monitoring point, a gray image that merely satisfies the monitoring temperature cannot directly reflect the texture information of the obstacle, and its resolution is too low. Further data enhancement is thus needed: pseudo-colour processing increases the visual sensitivity of the observer and improves the ability to resolve colour differences.
Pseudo-colour processing converts a gray image into a colour image, or a monochrome image into an image with a given colour distribution, making the layering of the image more obvious and thus the target area more prominent. The principle of gray-level-to-colour conversion for pseudo-colour data enhancement of infrared thermal images is as follows:
(1) segment the gray-scale range of the original image f(x, y) and apply 3 independent transformations to the gray level of each input pixel;
(2) after the three different transformations for red, green and blue, TR(·), TG(·) and TB(·), the results are sent to the three channels of a colour monitor as the three primary colour components r(x, y), g(x, y), b(x, y);
(3) a colour image is then synthesized.
Pseudo-colour data enhancement techniques include but are not limited to intensity slicing, gray-level-to-colour conversion and frequency-domain conversion. Applying data enhancement to the gray values of the image in the spatial domain yields a variety of infrared pseudo-colour renderings such as white-hot, black-hot, iron red, high-contrast rainbow, rainbow and iron gray, making the details of the image easier to recognise and the source image or video clearer.
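A minimal gray-level-to-colour conversion in the spirit of the TR/TG/TB transforms above might look as follows; the particular piecewise ramp is an illustrative choice, not one of the palettes named in the text:

```python
import numpy as np

def gray_to_pseudocolor(gray):
    """Gray-level-to-colour conversion: three independent transfer
    functions T_R, T_G, T_B map each gray level to the R, G, B channels
    (a simple hot-iron-style ramp: reds saturate first, greens enter in
    the mid-range, blues only near the top)."""
    g = gray.astype(np.float64) / 255.0
    r = np.clip(3.0 * g, 0.0, 1.0)          # T_R
    gr = np.clip(3.0 * g - 1.0, 0.0, 1.0)   # T_G
    b = np.clip(3.0 * g - 2.0, 0.0, 1.0)    # T_B
    return np.rint(np.stack([r, gr, b], axis=-1) * 255.0).astype(np.uint8)
```

Dark pixels stay black, mid grays become saturated red, and the brightest pixels become white, so nearby gray levels that are hard to tell apart map to clearly distinct colours.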
S7, detecting whether an obstacle exists in the target area after the pseudo color enhancement processing;
an ORB feature matching algorithm is adopted to detect whether obstacles exist in a video or image sequence, and the main steps are as follows:
(1) detecting the characteristic points;
(2) calculating a feature point descriptor;
(3) matching image feature points.
In the feature point detection stage, ORB searches for key points in the image with the FAST algorithm. Given a pixel point P, FAST compares the 16 pixels on a circle around P; each pixel is classified into one of three categories: brighter than P, darker than P, or similar to P. The comparison uses a threshold h: for a given h, brighter pixels are those with luminance above IP + h, darker pixels are those with luminance below IP − h, and similar pixels are those with luminance between these two values. After the pixels are classified, if more than 8 contiguous pixels on the circle are darker or brighter than P, the pixel P is selected as a key point.
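The segment test described above can be sketched directly; the 16-point radius-3 Bresenham circle is the standard FAST layout, and n = 9 realises "more than 8 contiguous pixels":

```python
import numpy as np

# The 16 offsets of the radius-3 Bresenham circle used by FAST.
CIRCLE = [(0, 3), (1, 3), (2, 2), (3, 1), (3, 0), (3, -1), (2, -2), (1, -3),
          (0, -3), (-1, -3), (-2, -2), (-3, -1), (-3, 0), (-3, 1), (-2, 2), (-1, 3)]

def _longest_run(states, s):
    """Longest contiguous run of label s on the circle (with wrap-around)."""
    run = best = 0
    for v in states + states:          # doubling the list handles wrap-around
        run = run + 1 if v == s else 0
        best = max(best, run)
    return min(best, len(states))

def is_fast_corner(img, y, x, h=20, n=9):
    """FAST segment test: classify the 16 circle pixels as brighter (+1),
    darker (-1) or similar (0) relative to I_P and threshold h, then accept
    P as a key point if at least n contiguous pixels share a non-zero label."""
    Ip = int(img[y, x])
    states = []
    for dy, dx in CIRCLE:
        v = int(img[y + dy, x + dx])
        states.append(1 if v > Ip + h else (-1 if v < Ip - h else 0))
    return _longest_run(states, 1) >= n or _longest_run(states, -1) >= n
```

A bright dot on a dark background passes the test (all 16 circle pixels are darker than P), while a flat patch does not.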
The feature point descriptor extracted by the BRIEF algorithm is a binary string. A patch of the neighbourhood around the current feature point is established and denoted p; a binary test is then defined on the patch p:
τ(p; x, y) = 1 if p(x) < p(y), and 0 otherwise    (13)
wherein p (x) represents the image gray scale value at point x;
p (y) represents the image gray value at the y point;
p represents the neighbourhood patch of the feature point (its gray values);
τ represents the binary comparison operation;
this results in a binary string of n bits:
fn(p) = Σ(1 ≤ i ≤ n) 2^(i−1) τ(p; xi, yi)    (14)
In this scheme, the coordinates of x and y follow a Gaussian distribution centred on the feature point. To make the BRIEF algorithm rotation-invariant, the neighbourhood of the feature point needs to be rotated by an angle, namely the direction angle θ found above for the feature point. However, rotating the whole neighbourhood is relatively costly; a more efficient way is to rotate the previously obtained point pairs xi, yi in the neighbourhood.
Let the n test point pairs generating the feature point descriptor be (xi, yi), and define a 2 × n matrix:
S = ( x1 x2 … xn )
    ( y1 y2 … yn )    (15)
Let Rθ be the rotation matrix formed by the angle θ; the coordinates of the matching points after rotation are then
Sθ=RθS (16)
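Equation (16) is a plain 2-D rotation of the test-point matrix; a minimal sketch:

```python
import numpy as np

def steer_pairs(S, theta):
    """Steered BRIEF: rotate the 2 x n matrix S of test points by the
    key point orientation theta, i.e. S_theta = R_theta @ S (Eq. 16)."""
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return R @ S
```

Rotating the columns (1, 0) and (0, 1) by 90 degrees maps them to (0, 1) and (−1, 0), so the same binary tests are evaluated in the key point's own orientation frame.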
The rBRIEF algorithm further weakens the correlation between descriptors of feature points in the same image by changing the way the descriptor is computed. Specifically:
For each corner point, a 31 × 31 neighbourhood is considered; the average of the pixel values in the 5 × 5 patch around each point of the neighbourhood is used as that point's pixel value, and point pairs are compared by size. This yields (31 − 5 + 1) × (31 − 5 + 1) = 729 sub-windows and therefore 729 × 728 / 2 = 265356 ways of extracting a point pair; 256 of these 265356 tests are selected to form the descriptor. The specific selection steps are as follows:
(1) Over each 31 × 31 neighbourhood of about 300k feature points, all M = 265356 point-pair tests are run, and the size comparisons form a matrix Q; each column of Q is the binary vector produced by one extraction method.
(2) Compute the average value of each column of the Q matrix, and rearrange the column vectors of Q according to the distance of their average from 0.5 to form a matrix T.
(3) Put the first column vector of T into R.
(4) Compute the correlation between the next column vector of T and all column vectors in R; if every correlation coefficient is smaller than a set threshold, move that column vector from T into R.
(5) Repeat step (4) until the number of column vectors in R is 256.
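Steps (1)-(5) amount to a greedy de-correlation. A toy version on a tiny candidate matrix Q (rows = training patches, columns = candidate tests; the threshold value is illustrative) could read:

```python
import numpy as np

def greedy_select(Q, n_keep, max_corr=0.2):
    """rBRIEF-style selection sketch: order candidate tests by |mean - 0.5|
    (closest to 0.5, i.e. highest variance, first), then keep a test only
    if its correlation with every already-kept test stays below max_corr."""
    means = Q.mean(axis=0)
    order = np.argsort(np.abs(means - 0.5), kind="stable")
    kept = [int(order[0])]
    for idx in order[1:]:
        if len(kept) == n_keep:
            break
        # correlation of the candidate column against all kept columns
        corr = np.corrcoef(Q[:, kept + [int(idx)]].T)[-1, :-1]
        if np.all(np.abs(corr) < max_corr):
            kept.append(int(idx))
    return kept
```

In the test below, columns 1 and 2 are perfectly (anti-)correlated with column 0 and are rejected, while the uncorrelated column 3 is kept.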
For the binary descriptors used by ORB, the Hamming distance is typically used: the quality of the match between two key points is determined by counting the number of differing bits between their binary descriptors. When comparing the key points of the training image and the query image, the key point pair with the fewest differing bits is considered the best match. After the matching function has compared all key points in the training image and the query image, it returns the best-matching key point pairs.
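The Hamming matching step reduces to counting differing bits and taking the minimum; a bare-bones brute-force matcher, with descriptors represented as Python integers, would be:

```python
def hamming(a, b):
    """Number of differing bits between two binary descriptors."""
    return bin(a ^ b).count("1")

def best_matches(train, query):
    """Brute-force matcher: for each query descriptor, return the index of
    the training descriptor at the smallest Hamming distance."""
    return [min(range(len(train)), key=lambda i: hamming(train[i], q))
            for q in query]
```

Real implementations behave the same way but operate on 256-bit descriptors and usually add a ratio or cross-check test to reject ambiguous matches.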
After template matching is completed, the matching quality between the source image and the image collected in real time can be judged. If an obstacle appears in the image, the infrared thermal imaging system issues a warning and uploads information including but not limited to the early-warning image, GPS coordinates, camera attitude, field of view, internal shooting parameters and shooting time.
And S8, carrying out obstacle monitoring or early warning based on the identified obstacles.
While a video or image sequence is captured at the monitored location, temperature changes are tracked in real time and the line is monitored for obstacles in real time. If there is no obstacle, an information record including but not limited to the early-warning image, GPS coordinates, camera attitude, field of view, internal shooting parameters and shooting time is uploaded, no early-warning prompt is sent to the client, and operation and maintenance personnel can view the stored images, videos and other records online within their validity period. If an obstacle exists, the same information record is uploaded, the obstacle is tracked in real time, and the infrared thermal imaging device monitors in real time whether the obstacle causes the temperature to rise. If an obstacle exists in the images or image sequence captured in real time and the monitored part is at high temperature, the image is stored in memory, a temperature warning and an obstacle warning are issued, and information including but not limited to the early-warning image, GPS coordinates, camera attitude, field of view, internal shooting parameters and shooting time is uploaded to the management centre.
The second embodiment is as follows: the present embodiment is described with reference to fig. 1 and fig. 2, and the intelligent obstacle monitoring system according to the present embodiment includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and when the processor executes the computer program, the processor implements any step of the intelligent obstacle monitoring method.

Claims (10)

1. An intelligent obstacle monitoring method is characterized in that: the method comprises the following steps:
s1, arranging an infrared thermal imaging device at the position needing obstacle monitoring;
s2, acquiring an image or an image sequence of a corresponding area of the infrared thermal imaging device as an original image;
s3, carrying out non-uniformity correction on the original image;
s4, performing image detail enhancement processing on the original image after the nonuniformity correction;
s5, extracting a target area in the original image after the detail enhancement processing;
s6, carrying out pseudo color enhancement processing on the target area;
s7, detecting whether an obstacle exists in the target area after the pseudo color enhancement processing;
and S8, carrying out obstacle monitoring or early warning based on the obstacle identified in the S7.
2. The intelligent obstacle monitoring method according to claim 1, wherein: in S3, the two-point calibration correction algorithm is used to perform the non-uniformity correction on the original image.
3. The intelligent obstacle monitoring method according to claim 2, wherein: the method for carrying out non-uniformity correction on the original image by adopting a two-point calibration correction algorithm comprises the following steps:
s31, selecting radiometric calibration points φL and φH in the infrared focal plane array, and recording the response output values of all N × M detector units;
s32, acquiring correction parameters of each detector unit;
Gi,j = (SH − SL) / (Si,j(φH) − Si,j(φL));  Oi,j = (SL · Si,j(φH) − SH · Si,j(φL)) / (Si,j(φH) − Si,j(φL))
Si,j(φL) represents the response output value of each detector unit under irradiance φL;
Si,j(φH) represents the response output value of each detector unit under irradiance φH;
SL represents the average of all response output values Si,j(φL);
SH represents the average of all response output values Si,j(φH);
i represents the number of rows of detector units in the infrared focal plane array;
j represents the number of columns of detector units in the infrared focal plane array;
s33, carrying out non-uniformity correction on the original image;
S′i,j(φ) = Gi,j · Si,j(φ) + Oi,j
phi denotes the irradiance incident on the detector unit;
S′i,j(φ) represents the corrected response output value of the (i, j)th detector unit;
Si,j(phi) represents the response output value of the (i, j) th detector cell.
4. The intelligent obstacle monitoring method according to claim 1, wherein: in S4, the method of performing the image detail enhancement processing on the original image after the nonuniformity correction includes:
s41, filtering the original image after the nonuniformity correction, and obtaining a filtering result h (x):
h(x) = (1 / k(x)) ∫ f(ε) c(ε, x) s(f(ε), f(x)) dε
k(x) denotes a normalization factor, k(x) = ∫ c(ε, x) s(f(ε), f(x)) dε;
c(ε, x) represents the weight generated by the spatial distance between the current pixel x and the neighbourhood pixel ε;
s(f(ε), f(x)) represents the weight generated by the difference between the gray value of the current pixel and that of the neighbourhood pixel;
f(ε) represents the gray value of the neighbourhood pixel ε;
f(x) represents the original image after the non-uniformity correction;
ε represents a pixel in the neighbourhood of x;
s42, subtracting the original image and the filtering result to obtain a detail image;
s43, carrying out the most value normalization processing on the detail image:
fout(x, y) = (L − 1) · (fin(x, y) − min) / (max − min)
fout(x, y) represents the result image of the most-value normalization;
fin(x, y) represents a detail image;
min represents the minimum value of the pixel, max represents the maximum value of the pixel, 0 < min < max < L, L represents the number of gray levels of the image.
5. The intelligent obstacle monitoring method according to claim 1, wherein: in S5, the method for extracting the target area in the original image after the detail enhancement processing includes:
s51, extracting a feature image in the original image after the detail enhancement processing, wherein the feature image comprises a contrast feature, an entropy feature and a gradient feature;
s52, generating a multi-modal fusion characteristic image by utilizing the characteristic image;
s53, sequentially performing pre-immersion and region filling on the multi-mode fusion characteristic image;
and S54, extracting the target region in the multi-modal fusion characteristic image after immersion and region filling.
6. The intelligent obstacle monitoring method according to claim 1, wherein: in S6, the target region is subjected to pseudo color enhancement processing by spatial domain gray scale-color conversion.
7. The intelligent obstacle monitoring method according to claim 1, wherein: in S7, an ORB feature matching algorithm is used to detect whether an obstacle exists in the target region after the pseudo-color enhancement processing.
8. The intelligent obstacle monitoring method according to claim 7, wherein: in S7, the detecting whether the target region after the pseudo color enhancement processing has an obstacle includes:
(a) no obstacle is detected;
(b) an obstacle is detected:
(b1) an obstacle is detected, but it does not cause the temperature of the monitored part to be too high;
(b2) an obstruction is detected and causes the monitored site to become too hot.
9. The intelligent obstacle monitoring method according to claim 8, wherein: in S8, when an obstacle is detected in S7 but it does not cause the temperature of the monitored part to be too high, the obstacle is monitored in real time by the infrared thermal imaging device; when an obstacle is detected in S7 and it causes the temperature of the monitored part to be too high, an early warning is immediately issued for the monitored part.
10. An intelligent obstacle monitoring system comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein: the processor, when executing the computer program, realizes the steps of the method according to any of claims 1-9.
CN202111050775.6A 2021-09-08 Intelligent obstacle monitoring method and system Active CN113762161B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111050775.6A CN113762161B (en) 2021-09-08 Intelligent obstacle monitoring method and system


Publications (2)

Publication Number Publication Date
CN113762161A true CN113762161A (en) 2021-12-07
CN113762161B CN113762161B (en) 2024-04-19



Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008022454A (en) * 2006-07-14 2008-01-31 Sumitomo Electric Ind Ltd Obstacle detection system and obstacle detection method
CN102567983A (en) * 2010-12-26 2012-07-11 浙江大立科技股份有限公司 Determining method for positions of monitored targets in instant infrared chart and application
CN105318972A (en) * 2014-06-24 2016-02-10 南京理工大学 Anti-blinding uncooled infrared thermal imager based on liquid crystal light valve
CN110996317A (en) * 2019-12-16 2020-04-10 杭州天铂云科光电科技有限公司 Infrared thermal imaging device with equipment identification encryption networking function and use method


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
WANG Yifeng: "China's Infrared Technology in 2013", Infrared Technology, pages 1 - 13 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115188091A (en) * 2022-07-13 2022-10-14 国网江苏省电力有限公司泰州供电分公司 Unmanned aerial vehicle grid inspection system and method integrating power transmission and transformation equipment
CN115188091B (en) * 2022-07-13 2023-10-13 国网江苏省电力有限公司泰州供电分公司 Unmanned aerial vehicle gridding inspection system and method integrating power transmission and transformation equipment
CN116503767A (en) * 2023-06-02 2023-07-28 合肥众安睿博智能科技有限公司 River course floater recognition system based on semantic image processing
CN116503767B (en) * 2023-06-02 2023-09-22 合肥众安睿博智能科技有限公司 River course floater recognition system based on semantic image processing


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant