CN114758249B - Target object monitoring method, device, equipment and medium based on field night environment - Google Patents

Target object monitoring method, device, equipment and medium based on field night environment

Info

Publication number
CN114758249B
CN114758249B (application CN202210669439.8A; earlier publication CN114758249A)
Authority
CN
China
Prior art keywords
image
light intensity
contour
heat source
feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210669439.8A
Other languages
Chinese (zh)
Other versions
CN114758249A (en)
Inventor
陈先取
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Uovision Technology Shenzhen Co ltd
Original Assignee
Uovision Technology Shenzhen Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Uovision Technology Shenzhen Co ltd
Priority to CN202210669439.8A
Publication of CN114758249A
Application granted
Publication of CN114758249B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213 Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G06F18/253 Fusion techniques of extracted features

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)
  • Studio Devices (AREA)

Abstract

The invention relates to artificial intelligence technology and discloses a target object monitoring method based on a field night environment, which comprises the following steps: calculating the ambient light intensity according to a visible light image; acquiring a normal light intensity image, and identifying the fixed shelter contour and the image light intensity of the normal light intensity image; acquiring an infrared image shot by a camera, and repairing the color information of the infrared image according to the ambient light intensity and the image light intensity to obtain a corrected image; identifying the heat source region contour of the corrected image, and dividing the corrected image into a plurality of local regions according to the heat source region contour and the fixed shelter contour; identifying the local features of each local region; and performing contour completion on the heat source region contour according to the local features, and identifying the type of target object corresponding to the heat source region contour according to the completed heat source contour. The invention also provides a target object monitoring device, equipment and medium based on the field night environment. The invention can improve the accuracy of monitoring a target object in a field night environment.

Description

Target object monitoring method, device, equipment and medium based on field night environment
Technical Field
The invention relates to the technical field of artificial intelligence, in particular to a target object monitoring method and device based on a field night environment, electronic equipment and a computer readable storage medium.
Background
As people explore nature ever more deeply, research on species diversity has become a focus of attention. In existing observation of the life habits of wild animals and of species evolution, target species are mostly monitored by camera shooting.
Because the field monitoring environment is uncertain, especially at night when field light is insufficient and obstructions are numerous, the monitoring environment is harsh. Infrared image analysis is commonly used for night monitoring, but infrared images have poor resolution, low contrast and a blurred visual effect, which can cause target species to be mis-detected or missed under field night conditions. Relying only on infrared images therefore makes it difficult to monitor target species accurately under field night conditions, and the accuracy of monitoring target species at night in the field is low.
Disclosure of Invention
The invention provides a target object monitoring method and device based on a field night environment and a computer readable storage medium, and mainly aims to solve the problem of low accuracy of target object monitoring in the field night environment.
In order to achieve the above object, the present invention provides a target object monitoring method based on field night environment, which comprises:
acquiring a visible light image shot by a camera, and calculating the ambient light intensity of the camera according to the multi-color order difference weight average value of all pixel points in the visible light image;
acquiring a normal light intensity image of the geographic position of the camera, identifying the outline of a fixed shelter in the normal light intensity image, and identifying the image light intensity of the normal light intensity image;
acquiring an infrared image shot by the camera, and performing color information repairing on the infrared image according to the environmental light intensity and the image light intensity to obtain a corrected image;
recognizing a heat source area contour in the corrected image, and performing area division on the corrected image according to the heat source area contour and the fixed shelter contour to obtain a plurality of local areas;
respectively carrying out double-channel fusion feature identification on each local area to obtain the local feature of each local area;
and performing contour completion on the heat source region contour according to the local characteristics to obtain a complete heat source contour, and identifying the type of the target object corresponding to the heat source region contour according to the complete heat source contour.
Optionally, the calculating the ambient light intensity of the camera according to the weighted average of the multi-color orders of all the pixels in the visible light image includes:
reading the color gradation value of each pixel point in the visible light image;
selecting one pixel point from the visible light image one by one as a target pixel point, and calculating the difference value between the color gradation value corresponding to the target pixel point and a preset color gradation median value;
calculating the ambient light intensity of the camera from the difference values using a weighted average algorithm (the equation images of the original publication are not reproduced here), where G is the ambient light intensity of the camera, x_i is the color level value of the i-th target pixel point in the visible light image, n is the number of pixel points in the visible light image, C is a preset constant, and m is the preset color level median value.
Optionally, the identifying a fixed shelter contour within the normal light intensity image comprises:
performing convolution and pooling treatment on each pixel point in the normal light intensity image for preset times to obtain low-dimensional pixel characteristics of the normal light intensity image;
mapping the low-dimensional pixel features into a pre-constructed high-dimensional space coordinate system to obtain a space coordinate corresponding to each low-dimensional pixel feature;
selecting one space coordinate from the space coordinates one by one as a target coordinate, calculating a distance value between the target coordinate and each preset fixed shelter label, and selecting the fixed shelter label with the minimum distance value as the fixed shelter label of the low-dimensional pixel feature corresponding to the target coordinate;
and determining a connected domain formed by the pixel points with the same fixed shelter label in the normal light intensity image as a fixed shelter outline in the normal light intensity image.
Optionally, the color information inpainting the infrared image according to the environmental light intensity and the image light intensity to obtain a corrected image includes:
mapping pixels in the infrared image to a first plane coordinate system which is constructed in advance by taking a central pixel point of the infrared image as an original point to obtain a first plane coordinate;
calculating a first pixel gradient of each pixel point in the infrared image in preset N directions by using a non-downsampling contourlet transform mode;
mapping pixels in the normal light intensity image to a pre-constructed second plane coordinate system by taking a central pixel point of the normal light intensity image as an origin to obtain a second plane coordinate;
calculating second pixel gradients of each pixel point in the normal light intensity image in preset N directions by using a non-subsampled shear wave transformation mode;
calculating the difference value between the image light intensity and the environment light intensity, and taking the difference value as a transformation coefficient;
and adjusting the first pixel gradient to the direction of the second pixel gradient by using the transformation coefficient, and taking the adjusted infrared image as the corrected image.
Optionally, the identifying a heat source region contour within the corrected image comprises:
identifying the heat value of each pixel point in the corrected image;
selecting a connected domain formed by the pixel points with the thermal value larger than a preset thermal threshold value as a heat source region;
and connecting the most external pixel points of the heat source area to obtain the outline of the heat source area.
Optionally, the performing contour completion on the heat source region contour according to the local feature to obtain a complete heat source contour includes:
determining a plurality of contour breakpoints of the contour of the heat source region according to the position relation between the fixed shielding object and the heat source region;
and constructing a tangent of each contour breakpoint, determining an intersection point of each tangent as a predicted connection point, and connecting each contour breakpoint and each predicted connection point by using a smooth curve to obtain a complete heat source contour.
Optionally, the performing dual-channel fusion feature recognition on each local region respectively to obtain the local feature of each local region includes:
selecting one local area from the plurality of local areas one by one as a target area;
identifying a first region feature of the target region by using a preset first feature channel, and identifying a second region feature of the target region by using a preset second feature channel, wherein the first region feature is different from the second region feature, and the first region feature and the second region feature belong to two different features of an angular point feature, a spot feature, an edge feature, a linear feature and a texture feature;
and fusing the first area characteristic and the second area characteristic into a local characteristic of the target area by utilizing a preset characteristic pyramid network.
In order to solve the above problems, the present invention also provides a target object monitoring device based on a field night environment, the device comprising:
the image analysis module is used for acquiring a visible light image shot by a camera and calculating the ambient light intensity of the camera according to the multi-color order difference weight average value of all pixel points in the visible light image; acquiring a normal light intensity image of the geographical position of the camera, identifying the outline of a fixed shelter in the normal light intensity image, and identifying the image light intensity of the normal light intensity image;
the information repairing module is used for acquiring an infrared image shot by the camera and repairing color information of the infrared image according to the environmental light intensity and the image light intensity to obtain a corrected image;
the area dividing module is used for identifying a heat source area contour in the corrected image and dividing the corrected image into a plurality of local areas according to the heat source area contour and the fixed shelter contour;
the characteristic extraction module is used for respectively carrying out double-channel fusion characteristic identification on each local area to obtain the local characteristic of each local area;
and the target object monitoring module is used for performing contour completion on the heat source region contour according to the local features to obtain a complete heat source contour and identifying the type of the target object corresponding to the heat source region contour according to the complete heat source contour.
In order to solve the above problem, the present invention also provides an electronic device, including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the method for monitoring an object based on a field nighttime environment as described above.
In order to solve the above problem, the present invention further provides a computer-readable storage medium, in which at least one computer program is stored, and the at least one computer program is executed by a processor in an electronic device to implement the target object monitoring method based on the field night environment.
According to the embodiment of the invention, the color information of the infrared image is repaired by utilizing the ambient light intensity when the camera shoots the infrared image and the image light intensity of the normal light intensity image, and partial color information can be added into the original infrared image only containing heat information, so that the quality of the infrared image is improved, and the subsequent detection accuracy of the target object by utilizing the infrared image is favorably improved; meanwhile, the corrected infrared image is divided into a plurality of local areas according to the outline of the heat source area and the outline of the fixed shielding object, so that the whole division of the target object and the fixed shielding object can be realized, the objects of different types are prevented from appearing in the same local area, and the accuracy of subsequent target object identification is improved; furthermore, the dual-channel fusion feature recognition is respectively carried out on each local area, so that the accuracy of the recognized local features can be improved, and the feature information content in the local features is increased, so that the accuracy of the final target object recognition is improved; and finally, performing contour completion on the contour of the heat source area, avoiding incomplete contour form under the condition that the target object is partially shielded, and being beneficial to improving monitoring accuracy. Therefore, the target object monitoring method, the target object monitoring device, the electronic equipment and the computer readable storage medium based on the field night environment, which are provided by the invention, can solve the problem of low accuracy of target object monitoring in the field night environment.
Drawings
Fig. 1 is a schematic flow chart of a target object monitoring method based on a field night environment according to an embodiment of the present invention;
FIG. 2 is a schematic view illustrating a process of performing color information inpainting on an infrared image according to an ambient light intensity and an image light intensity according to an embodiment of the present invention;
FIG. 3 is a schematic flow chart illustrating a process for identifying a heat source region contour in a corrected image according to an embodiment of the present invention;
FIG. 4 is a functional block diagram of a target object monitoring device based on a field night environment according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of an electronic device for implementing the target object monitoring method based on the field night environment according to an embodiment of the present invention.
The implementation, functional features and advantages of the present invention will be further described with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and do not limit the invention.
The embodiment of the application provides a target object monitoring method based on a field night environment. The subject of execution of the target object monitoring method based on the field night environment includes, but is not limited to, at least one of the electronic devices of a server, a terminal, and the like, which can be configured to execute the method provided by the embodiment of the present application. In other words, the target object monitoring method based on the field night environment may be performed by software or hardware installed in a terminal device or a server device, and the software may be a block chain platform. The server includes but is not limited to: a single server, a server cluster, a cloud server or a cloud server cluster, and the like. The server may be an independent server, or may be a cloud server that provides basic cloud computing services such as a cloud service, a cloud database, cloud computing, a cloud function, cloud storage, a Network service, cloud communication, a middleware service, a domain name service, a security service, a Content Delivery Network (CDN), a big data and artificial intelligence platform, and the like.
Fig. 1 is a schematic flow chart of a target object monitoring method based on a field night environment according to an embodiment of the present invention. In this embodiment, the target object monitoring method based on the field night environment includes:
and S1, acquiring a visible light image shot by the camera, and calculating the ambient light intensity of the camera according to the multi-color order difference weight average value of all pixel points in the visible light image.
In the embodiment of the present invention, the camera may be any device having an image capturing function such as image capturing, video recording, and the like, wherein the visible light image is an image captured by the camera in a visible light range of human eyes.
In one practical application scenario of the invention, the color level of each pixel in the visible light image ranges from 0 to 255; the color level refers to the brightness of the color, where 0 represents the darkest black and 255 represents the brightest white. The larger the value, the brighter the represented color level (that is, as the color level value increases, the represented color is closer to white). Therefore, the pixels of different color levels among all pixel points in the visible light image can be counted, and the ambient light intensity of the environment where the camera is currently located can be determined from the multi-color-level weighted average of all pixel points in the visible light image.
In an embodiment of the present invention, the calculating the ambient light intensity of the camera according to the weighted average of the multi-color orders of all the pixel points in the visible light image includes:
reading the color gradation value of each pixel point in the visible light image;
selecting one pixel point from the visible light image one by one as a target pixel point, and calculating the difference value between the color gradation numerical value corresponding to the target pixel point and a preset color gradation median value;
calculating the ambient light intensity of the camera from the difference values using a weighted average algorithm (the equation images of the original publication are not reproduced here), where G is the ambient light intensity of the camera, x_i is the color level value of the i-th target pixel point in the visible light image, n is the number of pixel points in the visible light image, C is a preset constant, m is the preset color level median value, and Y is a direction coefficient with Y = 1 or Y = -1.
In detail, C is preferably 255 and m is preferably 125.
In the embodiment of the invention, the difference value between the color level value of the target pixel point and the preset color level median value is calculated to judge whether the target pixel point is biased toward a black color level or a white color level, and the target pixel point is accordingly given a different value direction when the ambient light intensity of the camera is calculated: when the difference x_i - m is negative, the target pixel point is biased toward a black color level and its contribution to the overall brightness of the visible light image is negative, so Y = -1; when the difference x_i - m is positive, the target pixel point is biased toward a white color level and its contribution to the overall brightness of the visible light image is positive, so Y = 1.
In detail, the quotient of the color level value of the target pixel point and the preset constant C (preferably 255) gives different difference weights to different target pixel points, so that the overall light intensity in the visible light image is evaluated accurately; the mean of these weighted values over all pixel points then yields the ambient light intensity of the camera, realizing an accurate analysis of the ambient light intensity of the camera.
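As the equation images are unavailable, the following Python sketch shows one plausible reading of this step, assuming the constant C = 255, the color level median m = 125, and a per-pixel weight equal to the quotient of the color level and C; the function name and the exact combination of sign, weight and difference are assumptions, not the patent's exact formula.

```python
import numpy as np

def estimate_ambient_light(visible_gray: np.ndarray,
                           c: float = 255.0,    # preset constant C (assumed)
                           m: float = 125.0) -> float:
    """Hypothetical reading of the multi-color-level difference weighted average.

    visible_gray: 2-D array of color level values (0-255) of the visible light image.
    Each pixel contributes the absolute difference from the median m, signed by the
    direction coefficient Y (-1 if darker than m, +1 if brighter) and weighted by
    the quotient of its color level and the constant c; the mean over all pixels
    is taken as the ambient light intensity.
    """
    x = visible_gray.astype(np.float64).ravel()
    y = np.where(x < m, -1.0, 1.0)      # direction coefficient Y
    weights = x / c                     # per-pixel difference weight
    return float(np.mean(y * weights * np.abs(x - m)))
```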
S2, acquiring a normal light intensity image of the geographical position of the camera, identifying the outline of a fixed obstruction in the normal light intensity image, and identifying the image light intensity of the normal light intensity image.
In the embodiment of the present invention, the normal light intensity image refers to an image captured by the camera under normal light intensity, such as an image captured at the geographical position where the camera shot the visible light image, but in a daytime environment.
In one practical application scenario of the present invention, since the normal light intensity image is captured by the camera under normal light intensity, the normal light intensity image can be analyzed to identify the fixed shelter contour in it, where, in the field environment, fixed shelters include but are not limited to highly fixed objects such as rocks and trees.
In an embodiment of the present invention, the identifying a contour of a fixed obstruction in the normal light intensity image includes:
performing convolution and pooling processing on each pixel point in the normal light intensity image for preset times to obtain low-dimensional pixel characteristics of the normal light intensity image;
mapping the low-dimensional pixel features into a pre-constructed high-dimensional space coordinate system to obtain a space coordinate corresponding to each low-dimensional pixel feature;
selecting one space coordinate from the space coordinates one by one as a target coordinate, calculating a distance value between the target coordinate and each preset fixed shelter label, and selecting the fixed shelter label with the minimum distance value as a fixed shelter label of a low-dimensional image characteristic corresponding to the target coordinate;
and determining a connected domain formed by the pixel points with the same fixed shelter label in the normal light intensity image as a fixed shelter outline in the normal light intensity image.
In detail, in the embodiment of the present invention, a preset image processing model (such as VGGNet, LeNet, AlexNet, or the like) may be used to perform convolution and pooling processing a preset number of times on each pixel point in the normal light intensity image, so as to reduce the dimensionality of the data of the normal light intensity image and extract the low-dimensional pixel features contained in the normal light intensity image.
Specifically, the convolution and pooling operations reduce the dimensionality of the normal light intensity image data, which facilitates efficient extraction of image features, but the features extracted after dimension reduction lack dimensions and are not conducive to feature classification. Therefore, in the embodiment of the present invention, a preset mapping function may be used to map the low-dimensional pixel features into a pre-constructed high-dimensional spatial coordinate system to obtain the spatial coordinate of each low-dimensional pixel feature in a higher dimension, so as to facilitate subsequent classification of the features, where the mapping function includes, but is not limited to, a map function and a Gaussian function.
In the embodiment of the invention, the distance value between the target coordinate and each preset fixed shelter label can be calculated by utilizing algorithms with distance value calculation functions, such as an Euclidean distance algorithm, a cosine distance algorithm and the like, wherein the fixed shelter labels are labels which are generated in advance and used for marking the characteristics of different fixed shelters in a field environment, and the fixed shelter labels include but are not limited to rock shelter labels, tree shelter labels and the like.
Further, the step of recognizing the image light intensity of the normal light intensity image is the same as the step of calculating the ambient light intensity of the camera according to the multi-color order difference weight average value of all pixel points in the visible light image in S1, which is not repeated herein.
In the embodiment of the invention, by analyzing the normal light intensity image, the fixed shelters contained in the picture captured by the camera at that geographical position can be identified from the normal light intensity image, which facilitates subsequent accurate image analysis, since fixed shelters can obscure the picture and make the analysis result inaccurate.
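The following sketch illustrates the label-assignment and connected-domain step described above, assuming the convolution/pooling and high-dimensional mapping have already produced a per-pixel feature map; the label coordinates for 'rock' and 'tree' and all function names are hypothetical placeholders rather than the patent's trained model.

```python
import numpy as np
from scipy import ndimage

def label_fixed_obstructions(pixel_features, label_coords):
    """pixel_features: (H, W, D) per-pixel features already mapped into the
    high-dimensional coordinate system; label_coords: hypothetical reference
    coordinates per fixed shelter label, e.g. {'rock': ..., 'tree': ...}.
    Each pixel gets the label with the smallest Euclidean distance; connected
    domains of same-labelled pixels are returned, and the boundary of each
    domain corresponds to a fixed shelter contour."""
    h, w, d = pixel_features.shape
    names = list(label_coords)
    refs = np.stack([np.asarray(label_coords[n], dtype=np.float64) for n in names])
    flat = pixel_features.reshape(-1, d).astype(np.float64)
    dists = np.linalg.norm(flat[:, None, :] - refs[None, :, :], axis=2)  # (H*W, K)
    nearest = dists.argmin(axis=1).reshape(h, w)
    regions = {}
    for k, name in enumerate(names):
        components, _ = ndimage.label(nearest == k)   # connected domains per label
        regions[name] = components
    return regions
```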
And S3, acquiring the infrared image shot by the camera, and performing color information patching on the infrared image according to the environmental light intensity and the image light intensity to obtain a corrected image.
In the embodiment of the invention, the infrared image can be captured by an infrared imaging module contained in the camera. Because light is severely lacking in the field night environment, the camera captures an infrared image and the infrared image is analyzed, which helps improve the accuracy of finally detecting the target object in the field night environment.
In one practical application scenario, an infrared image has the characteristics of poor resolution, low contrast, low signal-to-noise ratio, blurred visual effect, and a gray distribution that bears no relation to the reflection characteristics of the target; directly analyzing the infrared image therefore easily leads to mis-identification or missed identification of the target object in the image. Therefore, the embodiment of the invention can repair the color information of the infrared image according to the current ambient light intensity of the environment where the camera is located and the image light intensity of the normal light intensity image captured by the camera, so as to add partial color information to the original infrared image containing only heat information, thereby improving the quality of the infrared image and helping to improve the accuracy of subsequently detecting the target object by using the infrared image.
In the embodiment of the present invention, referring to fig. 2, the performing color information patch on the infrared image according to the ambient light intensity and the image light intensity to obtain a corrected image includes:
s21, mapping pixels in the infrared image to a first plane coordinate system which is constructed in advance by taking the central pixel point of the infrared image as an origin to obtain a first plane coordinate;
s22, calculating first pixel gradients of each pixel point in the infrared image in preset N directions in a non-downsampling contourlet transform mode;
s23, mapping pixels in the normal light intensity image to a pre-constructed second plane coordinate system by taking the central pixel point of the normal light intensity image as an origin to obtain a second plane coordinate;
s24, calculating second pixel gradients of each pixel point in the normal light intensity image in preset N directions by using a non-subsampled shear wave transformation mode;
s25, calculating the difference value between the image light intensity and the environment light intensity, and taking the difference value as a transformation coefficient;
s26, adjusting the first pixel gradient toward the second pixel gradient by using the transformation coefficient, and using the adjusted infrared image as the corrected image.
In detail, the non-downsampling contourlet transformation means that a non-downsampling pyramid filter bank for scale decomposition is used to decompose the low-frequency component of each pixel point in the infrared image into a low-pass sub-band and a high-frequency sub-band; meanwhile, a non-downsampling filter bank for directional decomposition is used to decompose the high-frequency component of each pixel point in the infrared image together with the high-frequency sub-band obtained from the low-frequency decomposition, yielding band-pass directional sub-bands, and the first pixel gradient of each pixel point in the infrared image in the preset N directions is calculated from the decomposed low-pass sub-band and the band-pass directional sub-bands.
Specifically, the non-downsampling shear wave transformation refers to that when the normal light intensity image is processed by applying the shear wave transformation, the multi-scale decomposition of the image is obtained by using a non-downsampling laplacian tower filter bank in a discrete form, and then the direction decomposition is performed on the obtained sub-band images of all scales by using the shear wave filter combination, so that sub-band characteristics of different scales and different directions are obtained, and second pixel gradients of each pixel point in the normal light intensity image in the preset N directions are calculated.
In the embodiment of the invention, because the normal light intensity image contains more color information, the pixel gradient of each pixel point in the infrared image in the preset N directions can be adjusted to the pixel gradient direction of each pixel point in the normal light intensity image in the preset N directions according to the difference value between the image light intensity of the normal light intensity image and the environmental light intensity, so that the color information of the infrared image is added, a corrected image is obtained, and the accuracy of monitoring a target object according to the infrared image in the follow-up process is improved.
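Standard Python packages for non-downsampling contourlet and shear wave transforms are not assumed here, so the sketch below substitutes plain finite-difference directional gradients to illustrate how a transformation coefficient (image light intensity minus ambient light intensity) could pull the infrared gradients toward those of the normal light intensity image; the normalisation of the coefficient and all names are illustrative assumptions, not the patent's transform pipeline.

```python
import numpy as np

def directional_gradients(img: np.ndarray, n_dirs: int = 4) -> np.ndarray:
    """Simplified stand-in for the N-direction sub-band gradients: finite
    differences projected onto n_dirs equally spaced directions."""
    gy, gx = np.gradient(img.astype(np.float64))
    angles = np.linspace(0.0, np.pi, n_dirs, endpoint=False)
    return np.stack([gx * np.cos(a) + gy * np.sin(a) for a in angles])

def repair_infrared(infrared: np.ndarray, normal: np.ndarray,
                    image_light: float, ambient_light: float,
                    n_dirs: int = 4) -> np.ndarray:
    """Pull the infrared directional gradients toward those of the normal light
    intensity image (both images assumed registered and the same size) by a
    fraction derived from the transformation coefficient."""
    coeff = image_light - ambient_light                    # transformation coefficient
    alpha = float(np.clip(abs(coeff) / 255.0, 0.0, 1.0))   # assumed normalisation
    g_ir = directional_gradients(infrared, n_dirs)
    g_nl = directional_gradients(normal, n_dirs)
    adjusted = (1.0 - alpha) * g_ir + alpha * g_nl
    # Fold the adjusted gradients back onto the infrared image as a correction term.
    return infrared.astype(np.float64) + adjusted.mean(axis=0)
```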
And S4, recognizing the heat source area contour in the corrected image, and performing area division on the corrected image according to the heat source area contour and the fixed shielding object contour to obtain a plurality of local areas.
In the embodiment of the present invention, since the corrected image is obtained by performing color correction on the infrared image, the corrected image also includes heat source information originally included in the infrared image, and further, the corrected image may be analyzed to identify a heat source area contour in the corrected image.
In detail, in the field night environment, the heat source generally represents a living body, and therefore, the target object can be monitored by further analyzing the heat source region contour in the identified correction image.
In an embodiment of the present invention, referring to fig. 3, the identifying a heat source region contour in the corrected image includes:
s31, identifying the heat value of each pixel point in the corrected image;
s32, selecting a connected domain formed by the pixel points with the thermal value larger than a preset thermal threshold value as a heat source region;
and S33, connecting the most external pixel points of the heat source area to obtain the outline of the heat source area.
In detail, in the embodiment of the present invention, a preset image recognition software (such as Photoshop, Lighting, etc.) may be used to recognize the heat value of each pixel point in the corrected image.
Specifically, since the heat emitted by an organism dissipates within a certain range through the medium it contacts, the embodiment of the invention selects the connected domain formed by the pixel points whose thermal value is greater than the preset thermal threshold as the heat source region (the region is used to mark the actual body range of the target object), which effectively avoids a large deviation between the identified heat source region and the actual body shape of the target object caused by heat dissipation, and improves the accuracy of finally monitoring the target object.
In the embodiment of the invention, the outermost pixel points of the heat source area can be connected to form the heat source area outline of the heat source area.
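A minimal sketch of the heat source contour extraction, assuming the thermal values are available as a per-pixel array and using binary erosion to obtain the outermost pixel ring; the threshold value and helper names are assumptions.

```python
import numpy as np
from scipy import ndimage

def heat_source_contours(heat: np.ndarray, thermal_threshold: float):
    """heat: per-pixel thermal values of the corrected image. Returns one boolean
    contour mask per connected heat source region: the outermost pixels of each
    region, obtained as the region minus its binary erosion."""
    hot = heat > thermal_threshold
    regions, count = ndimage.label(hot)          # connected domains of hot pixels
    contours = []
    for r in range(1, count + 1):
        mask = regions == r
        contours.append(mask & ~ndimage.binary_erosion(mask))  # outermost ring
    return contours
```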
Furthermore, in order to recognize the target object from the corrected image, the content of the corrected image needs to be analyzed, and most of the existing recognition methods extract global invariant features in the image and further recognize the target object in the image by using the global invariant features. However, due to the complexity of the field environment, the identification method of the global invariant features is difficult to eliminate the imaging distortion of the image, and particularly under the conditions that the field image is complex in structure and local shielding exists, the matching identification of the target object by using the global invariant features is very difficult.
Therefore, the correction image can be divided into a plurality of local areas, each local area is analyzed independently, and the target object in the correction image is identified according to the results of all local analyses in the follow-up process, so that the accuracy of detecting the target object is improved.
In the embodiment of the invention, the corrected image can be divided into a plurality of local areas according to the heat source region contour and the fixed shielding object contour. This division differs from the prior art of dividing an image directly according to a preset size ratio: direct division by a preset size ratio can place parts that do not belong to the same whole into the same area and split an originally complete whole into several fragments, which increases the amount of calculation and confuses the features for subsequent feature extraction and target object detection. Therefore, by dividing the corrected image into a plurality of local areas according to the heat source region contour and the fixed shielding object contour, the embodiment of the invention divides the target object and the fixed shielding object as wholes, prevents objects of different types from appearing in the same local area, and improves the accuracy of subsequent target object identification.
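A minimal sketch of this mask-based division, assuming the heat source regions and fixed shielding object regions are available as boolean masks; the handling of the remaining background area is an illustrative choice, not specified by the patent.

```python
import numpy as np
from scipy import ndimage

def divide_local_regions(shape, heat_masks, obstruction_masks):
    """Assign every pixel of the corrected image to a local area: each heat source
    region and each fixed shielding object region becomes its own area, and what is
    left is split into its connected background components (an illustrative choice)."""
    region_map = np.zeros(shape, dtype=np.int32)
    next_label = 1
    for mask in list(heat_masks) + list(obstruction_masks):
        region_map[mask & (region_map == 0)] = next_label
        next_label += 1
    background, _ = ndimage.label(region_map == 0)   # remaining background areas
    unassigned = region_map == 0
    region_map[unassigned] = background[unassigned] + next_label - 1
    return region_map
```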
And S5, performing double-channel fusion feature recognition on each local area respectively to obtain the local feature of each local area.
In the embodiment of the invention, in order to realize accurate detection of the target object, the characteristic identification can be respectively carried out on each local area to obtain the local characteristic of each local area.
In detail, the embodiment of the invention can identify the local features of each local area by adopting a dual-channel fusion feature identification mode. The dual-channel fusion feature identification mode is to identify different types of local features of each local region by using two channels respectively and perform feature fusion on the identified different types of local features to finally generate the local features of each local region, which are fused with dual features, so that the accuracy of the identified local features is improved, the content of feature information in the local features is increased, and the accuracy of target object identification is finally improved.
In particular, the local features include, but are not limited to, corner features, blob features, edge features, line features, texture features, and the like.
In the embodiment of the present invention, the performing dual-channel fusion feature identification on each local region to obtain the local feature of each local region includes:
selecting one local area from the plurality of local areas one by one as a target area;
identifying a first region feature of the target region by using a preset first feature channel, and identifying a second region feature of the target region by using a preset second feature channel, wherein the first region feature is different from the second region feature, and the first region feature and the second region feature belong to two different features of an angular point feature, a spot feature, an edge feature, a linear feature and a texture feature;
and fusing the first area characteristic and the second area characteristic into a local characteristic of the target area by utilizing a preset characteristic pyramid network.
In detail, the first feature channel and the second feature channel are feature channels constructed by using a feature detection operator in advance, and extraction of different types of features in a target region can be realized.
For example, when the feature to be extracted from the target region is a corner feature, a feature channel may be constructed using a corner detection operator (e.g., the Harris operator, SUSAN operator, CSS operator, or FAST (Features from Accelerated Segment Test) operator); when the feature to be extracted from the target region is a blob feature, a feature channel may be constructed using a blob detection operator (e.g., the DoG operator, Multi-Scale Harris operator, SIFT operator, or SURF operator).
Further, in the embodiment of the present invention, the features of different feature channels may be respectively mapped to different levels of a preset feature pyramid network, and then the first region feature and the second region feature mapped to the feature pyramid network are fused from top to bottom layer by layer to obtain the local feature of the target region.
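As one concrete example of the two feature channels, the sketch below uses a Harris corner channel and a Canny edge channel (via OpenCV) and simply stacks them; the patent's feature pyramid network fusion is a trained model and is not reproduced here, so the stacking stands in for that fusion.

```python
import cv2
import numpy as np

def dual_channel_local_feature(region: np.ndarray) -> np.ndarray:
    """Two feature channels for one local region (grayscale image).
    Channel 1: Harris corner response; channel 2: Canny edge map.
    Stacking the two maps stands in for the feature pyramid fusion."""
    gray = region.astype(np.float32)
    corners = cv2.cornerHarris(gray, blockSize=2, ksize=3, k=0.04)
    corners = (corners - corners.min()) / (corners.max() - corners.min() + 1e-8)
    edges = cv2.Canny(region.astype(np.uint8), 100, 200).astype(np.float32) / 255.0
    return np.stack([corners, edges], axis=0)        # (2, H, W) fused local feature
```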
And S6, performing contour completion on the heat source region contour according to the local features to obtain a complete heat source contour, and identifying the type of the target object corresponding to the heat source region contour according to the complete heat source contour.
In one practical application scenario of the invention, due to the complexity of the field environment, the target object contained in the corrected image may be partially shielded by fixed shielding objects such as rocks and trees, which may cause incomplete target object shape, and if the corrected image is directly monitored and identified, the accuracy of the monitoring result is low.
Therefore, the embodiment of the invention can complement the contour of the heat source area according to the local characteristics to obtain a complete heat source contour, and then monitor the target object according to the complete heat source contour so as to improve the monitoring accuracy.
In an embodiment of the present invention, the performing contour completion on the contour of the heat source region according to the local feature to obtain a complete contour of the heat source includes:
determining a plurality of contour breakpoints of the contour of the heat source region according to the position relation between the fixed shielding object and the heat source region;
and constructing a tangent of each contour breakpoint, determining an intersection point of each tangent as a predicted connection point, and connecting each contour breakpoint and each predicted connection point by using a smooth curve to obtain a complete heat source contour.
In detail, the contour break point is a pixel point where the heat source contour and the fixed shielding object are intersected, when the fixed shielding object shields a part of the heat source region, at least two contour break points are intersected with the fixed shielding object on the heat source contour of the heat source region, therefore, a tangent of each contour break point can be constructed, an intersection point of the tangents is used as a prediction connection point, and the contour break points and the prediction connection points are connected by a smooth curve to obtain a complete heat source contour.
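A geometric sketch of the completion step: the intersection of the tangent lines at two contour breakpoints is taken as the predicted connection point, and a quadratic Bezier curve through the breakpoints is used as one simple choice of smooth curve; parallel tangents are not handled, and all names are illustrative.

```python
import numpy as np

def predicted_connection_point(p1, d1, p2, d2):
    """Intersection of the tangent lines p1 + t*d1 and p2 + s*d2.
    p1, p2: contour breakpoints (x, y); d1, d2: local tangent directions."""
    a = np.array([[d1[0], -d2[0]], [d1[1], -d2[1]]], dtype=np.float64)
    b = np.array([p2[0] - p1[0], p2[1] - p1[1]], dtype=np.float64)
    t = np.linalg.solve(a, b)[0]                     # fails if tangents are parallel
    return np.asarray(p1, dtype=np.float64) + t * np.asarray(d1, dtype=np.float64)

def bezier_bridge(p1, ctrl, p2, n: int = 20) -> np.ndarray:
    """Quadratic Bezier through the two breakpoints, with the predicted
    connection point as control point, as one simple smooth curve."""
    t = np.linspace(0.0, 1.0, n)[:, None]
    p1, ctrl, p2 = (np.asarray(p, dtype=np.float64) for p in (p1, ctrl, p2))
    return (1 - t) ** 2 * p1 + 2 * (1 - t) * t * ctrl + t ** 2 * p2
```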
Further, in the embodiment of the present invention, a pixel coordinate system may be constructed with the central pixel point of the complete heat source contour as the origin, and, according to the coordinate value of each pixel point of the complete heat source contour, a matching degree algorithm is used to identify the type of the target object corresponding to the heat source region contour (the equation images of the original publication are not reproduced here), where a_i is the coordinate value of the i-th pixel point of the complete heat source contour, b_j is the coordinate label corresponding to the j-th preset target object type, N is the total number of pixel points of the complete heat source contour, and P_j is the matching value between the complete heat source contour and the j-th preset target object type.
The embodiment of the invention can determine that the target object type with the maximum matching value is the target object type corresponding to the complete heat source contour, namely, the target object contained in the infrared image is the target object type with the maximum matching value with the complete heat source contour.
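Since the matching degree equation image is unavailable, the sketch below assumes one plausible form: a matching value that grows as the mean distance between the contour coordinates and a type's coordinate label shrinks, with the type of largest matching value selected; the exact formula in the patent may differ.

```python
import numpy as np

def match_value(contour_xy: np.ndarray, template_xy: np.ndarray) -> float:
    """contour_xy: (N, 2) coordinates of the complete heat source contour in a
    coordinate system centred on its central pixel; template_xy: (N, 2) coordinate
    label of one preset target object type (assumed resampled to the same N).
    Assumed form: larger when the mean point-wise distance is smaller."""
    d = np.linalg.norm(contour_xy - template_xy, axis=1)
    return float(1.0 / (1.0 + d.mean()))

def identify_type(contour_xy: np.ndarray, templates: dict) -> str:
    """Return the preset target object type with the largest matching value."""
    return max(templates, key=lambda name: match_value(contour_xy, templates[name]))
```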
According to the embodiment of the invention, the color information of the infrared image is repaired by utilizing the ambient light intensity when the camera shoots the infrared image and the image light intensity of the normal light intensity image, and partial color information can be added into the original infrared image only containing heat information, so that the quality of the infrared image is improved, and the subsequent detection accuracy of the target object by utilizing the infrared image is favorably improved; meanwhile, the corrected infrared image is divided into a plurality of local areas according to the outline of the heat source area and the outline of the fixed shielding object, so that the whole division of the target object and the fixed shielding object can be realized, the objects of different types are prevented from appearing in the same local area, and the accuracy of subsequent target object identification is improved; furthermore, the dual-channel fusion feature recognition is respectively carried out on each local area, so that the accuracy of the recognized local features can be improved, and the feature information content in the local features is increased, so that the accuracy of the final target object recognition is improved; and finally, performing contour completion on the contour of the heat source area, avoiding incomplete contour form under the condition that the target object is partially shielded, and being beneficial to improving monitoring accuracy. Therefore, the target object monitoring method based on the field night environment can solve the problem of low accuracy of target object monitoring in the field night environment.
Fig. 4 is a functional block diagram of a target object monitoring device based on a field night environment according to an embodiment of the present invention.
The target object monitoring device 100 based on the field night environment of the invention can be installed in electronic equipment. According to the realized functions, the target object monitoring device 100 based on the field night environment may include an image analysis module 101, an information patch module 102, a region division module 103, a feature extraction module 104, and a target object monitoring module 105. The module of the present invention, which may also be referred to as a unit, refers to a series of computer program segments that can be executed by a processor of an electronic device and that can perform a fixed function, and that are stored in a memory of the electronic device.
In the present embodiment, the functions regarding the respective modules/units are as follows:
the image analysis module 101 is configured to obtain a visible light image captured by a camera, and calculate an ambient light intensity of the camera according to a multi-color order difference weight average value of all pixel points in the visible light image; acquiring a normal light intensity image of the geographical position of the camera, identifying the outline of a fixed shelter in the normal light intensity image, and identifying the image light intensity of the normal light intensity image;
the information patching module 102 is configured to acquire an infrared image captured by the camera, and perform color information patching on the infrared image according to the environmental light intensity and the image light intensity to obtain a corrected image;
the region dividing module 103 is configured to identify a heat source region contour in the corrected image, and perform region division on the corrected image according to the heat source region contour and the fixed blocking object contour to obtain a plurality of local regions;
the feature extraction module 104 is configured to perform dual-channel fusion feature identification on each local region respectively to obtain a local feature of each local region;
the target object monitoring module 105 is configured to perform contour completion on the heat source region contour according to the local features to obtain a complete heat source contour, and identify a target object type corresponding to the heat source region contour according to the complete heat source contour.
In detail, when the modules in the target object monitoring device 100 based on the field night environment according to the embodiment of the present invention are used, the same technical means as the target object monitoring method based on the field night environment described in fig. 1 to fig. 3 are adopted, and the same technical effects can be produced, which is not described herein again.
Fig. 5 is a schematic structural diagram of an electronic device for implementing a target object monitoring method based on a field night environment according to an embodiment of the present invention.
The electronic device 1 may include a processor 10, a memory 11, a communication bus 12, and a communication interface 13, and may further include a computer program stored in the memory 11 and operable on the processor 10, such as an object monitoring program based on a field night environment.
In some embodiments, the processor 10 may be composed of an integrated circuit, for example, a single packaged integrated circuit, or may be composed of a plurality of integrated circuits packaged with the same function or different functions, and includes one or more Central Processing Units (CPUs), a microprocessor, a digital Processing chip, a graphics processor, a combination of various control chips, and the like. The processor 10 is a Control Unit (Control Unit) of the electronic device, connects various components of the whole electronic device by using various interfaces and lines, and executes various functions and processes data of the electronic device by running or executing programs or modules stored in the memory 11 (for example, executing a target object monitoring program based on a field night environment, etc.), and calling data stored in the memory 11.
The memory 11 includes at least one type of readable storage medium including flash memory, removable hard disks, multimedia cards, card-type memory (e.g., SD or DX memory, etc.), magnetic memory, magnetic disks, optical disks, and the like. The memory 11 may in some embodiments be an internal storage unit of the electronic device, for example a removable hard disk of the electronic device. The memory 11 may also be an external storage device of the electronic device in other embodiments, such as a plug-in mobile hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, which are provided on the electronic device. Further, the memory 11 may also include both an internal storage unit and an external storage device of the electronic device. The memory 11 may be used not only to store application software installed in the electronic device and various types of data, such as codes of an object monitoring program based on a field night environment, etc., but also to temporarily store data that has been output or will be output.
The communication bus 12 may be a Peripheral Component Interconnect (PCI) bus or an Extended Industry Standard Architecture (EISA) bus. The bus may be divided into an address bus, a data bus, a control bus, etc. The bus is arranged to enable connection communication between the memory 11 and at least one processor 10 or the like.
The communication interface 13 is used for communication between the electronic device and other devices, and includes a network interface and a user interface. Optionally, the network interface may include a wired interface and/or a wireless interface (e.g., WI-FI interface, bluetooth interface, etc.), which are typically used to establish a communication connection between the electronic device and other electronic devices. The user interface may be a Display (Display), an input unit such as a Keyboard (Keyboard), and optionally a standard wired interface, a wireless interface. Alternatively, in some embodiments, the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch device, or the like. The display, which may also be referred to as a display screen or display unit, is suitable, among other things, for displaying information processed in the electronic device and for displaying a visualized user interface.
Only electronic devices having components are shown, and those skilled in the art will appreciate that the structures shown in the figures do not constitute limitations on the electronic devices, and may include fewer or more components than shown, or some components in combination, or a different arrangement of components.
For example, although not shown, the electronic device may further include a power supply (such as a battery) for supplying power to each component, and preferably, the power supply may be logically connected to the at least one processor 10 through a power management device, so that functions of charge management, discharge management, power consumption management and the like are realized through the power management device. The power supply may also include any component of one or more dc or ac power sources, recharging devices, power failure detection circuitry, power converters or inverters, power status indicators, and the like. The electronic device may further include various sensors, a bluetooth module, a Wi-Fi module, and the like, which are not described herein again.
It is to be understood that the embodiments described are illustrative only and are not to be construed as limiting the scope of the claims.
The object monitoring program stored in the memory 11 of the electronic device 1 and based on the field night environment is a combination of instructions, which when executed in the processor 10, can realize:
acquiring a visible light image shot by a camera, and calculating the ambient light intensity of the camera according to the multi-color-order difference weight average value of all pixel points in the visible light image;
acquiring a normal light intensity image of the geographic position of the camera, identifying the outline of a fixed shelter in the normal light intensity image, and identifying the image light intensity of the normal light intensity image;
acquiring an infrared image shot by the camera, and performing color information repairing on the infrared image according to the ambient light intensity and the image light intensity to obtain a corrected image;
recognizing a heat source area contour in the corrected image, and performing area division on the corrected image according to the heat source area contour and the fixed shelter contour to obtain a plurality of local areas;
respectively carrying out double-channel fusion feature identification on each local area to obtain the local feature of each local area;
and performing contour completion on the heat source region contour according to the local features to obtain a complete heat source contour, and identifying the type of the target object corresponding to the heat source region contour according to the complete heat source contour.
For the specific implementation of the above instructions by the processor 10, reference may be made to the description of the relevant steps in the embodiments corresponding to the drawings, which is not repeated here.
Further, the integrated modules/units of the electronic device 1, if implemented in the form of software functional units and sold or used as independent products, may be stored in a computer-readable storage medium. The computer-readable storage medium may be volatile or non-volatile. For example, the computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive (U-disk), a removable hard disk, a magnetic disk, an optical disk, a computer memory, or a Read-Only Memory (ROM).
The present invention also provides a computer-readable storage medium, storing a computer program which, when executed by a processor of an electronic device, may implement:
acquiring a visible light image shot by a camera, and calculating the ambient light intensity of the camera according to the multi-color-order difference weight average value of all pixel points in the visible light image;
acquiring a normal light intensity image of the geographical position of the camera, identifying the outline of a fixed shelter in the normal light intensity image, and identifying the image light intensity of the normal light intensity image;
acquiring an infrared image shot by the camera, and performing color information repairing on the infrared image according to the ambient light intensity and the image light intensity to obtain a corrected image;
recognizing a heat source area contour in the corrected image, and performing area division on the corrected image according to the heat source area contour and the fixed shelter contour to obtain a plurality of local areas;
respectively carrying out double-channel fusion feature identification on each local area to obtain the local feature of each local area;
and performing contour completion on the heat source region contour according to the local features to obtain a complete heat source contour, and identifying the type of the target object corresponding to the heat source region contour according to the complete heat source contour.
In the embodiments provided by the present invention, it should be understood that the disclosed apparatus, device and method can be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; for instance, the division of the modules is only one kind of logical functional division, and other division manners may be adopted in actual implementation.
The modules described as separate components may or may not be physically separate, and the components shown as modules may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional modules in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit can be implemented in the form of hardware, or in the form of hardware plus a software functional module.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof.
The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference signs in the claims shall not be construed as limiting the claim concerned.
The blockchain is a novel application mode of computer technologies such as distributed data storage, peer-to-peer transmission, consensus mechanisms and encryption algorithms. A blockchain (Blockchain) is essentially a decentralized database: a series of data blocks linked by cryptographic methods, each containing the information of a batch of network transactions, used to verify the validity (anti-counterfeiting) of that information and to generate the next block. The blockchain may include a blockchain underlying platform, a platform product service layer, an application service layer, and the like.
The embodiments of the present application can acquire and process the related data based on artificial intelligence technology. Artificial Intelligence (AI) is the theory, method, technology and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend and expand human intelligence, perceive the environment, acquire knowledge, and use that knowledge to obtain the best results.
Furthermore, it is obvious that the word "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. A plurality of units or devices recited in the system claims may also be implemented by one unit or device through software or hardware. The terms first, second and the like are used to denote names and do not denote any particular order.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present invention and not to limit them. Although the present invention has been described in detail with reference to the preferred embodiments, those skilled in the art should understand that modifications or equivalent substitutions may be made to the technical solutions of the present invention without departing from the spirit and scope of the technical solutions of the present invention.

Claims (8)

1. A method for monitoring a target object based on a field night environment, the method comprising:
acquiring a visible light image shot by a camera, reading the color gradation value of each pixel point in the visible light image, selecting one pixel point from the visible light image one by one as a target pixel point, and calculating the difference value between the color gradation value corresponding to the target pixel point and a preset color gradation median value;
calculating the ambient light intensity of the camera according to the difference value by using a weighted average algorithm as follows:
wherein the formula itself appears in the source only as embedded formula images and is not reproduced here; its symbols, as described in the original, denote the ambient light intensity of the camera, the color gradation value of each target pixel point in the visible light image, the number of pixel points within the visible light image, a preset constant, the preset color gradation median value, and a direction coefficient Y;
acquiring a normal light intensity image of the geographical position of the camera, identifying the outline of a fixed shelter in the normal light intensity image, and identifying the image light intensity of the normal light intensity image;
acquiring an infrared image shot by the camera, and mapping the pixels in the infrared image to a pre-constructed first plane coordinate system with the central pixel point of the infrared image as the origin to obtain first plane coordinates;
calculating a first pixel gradient of each pixel point in the infrared image in N preset directions by using a non-subsampled contourlet transform, mapping the pixels in the normal light intensity image to a pre-constructed second plane coordinate system with the central pixel point of the normal light intensity image as the origin to obtain second plane coordinates, and calculating a second pixel gradient of each pixel point in the normal light intensity image in the N preset directions by using a non-subsampled shearlet transform;
calculating the difference value between the image light intensity and the ambient light intensity, taking the difference value as a transformation coefficient, adjusting the first pixel gradient toward the direction of the second pixel gradient by using the transformation coefficient, and taking the adjusted infrared image as a corrected image;
recognizing a heat source area contour in the corrected image, and performing area division on the corrected image according to the heat source area contour and the fixed shelter contour to obtain a plurality of local areas;
respectively carrying out double-channel fusion feature identification on each local area to obtain the local feature of each local area;
and performing contour completion on the heat source region contour according to the local features to obtain a complete heat source contour, and identifying the type of the target object corresponding to the heat source region contour according to the complete heat source contour.
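For illustration only, the color information repair in claim 1 relies on per-pixel gradients taken along N preset directions and a transformation coefficient obtained from the light-intensity difference. The Python sketch below conveys that idea only in outline: the Sobel-based directional projections and the sigmoid-squashed blending weight are assumptions standing in for the claimed non-subsampled contourlet/shearlet transforms and for the exact use of the transformation coefficient, and the reconstruction of the corrected image from the adjusted gradients is omitted.

import numpy as np
from scipy import ndimage

def directional_gradients(img, n_dirs=4):
    # Per-pixel gradients along N preset directions; plain Sobel projections
    # are used here as a stand-in for the non-subsampled contourlet/shearlet
    # transforms named in the claim.
    gy = ndimage.sobel(img, axis=0)
    gx = ndimage.sobel(img, axis=1)
    angles = np.pi * np.arange(n_dirs) / n_dirs
    return np.stack([gx * np.cos(a) + gy * np.sin(a) for a in angles], axis=-1)

def adjust_gradients(infrared, normal, image_intensity, ambient_intensity, n_dirs=4):
    # Pull the infrared directional gradients toward the normal-light ones.
    # The transformation coefficient is the difference between image light
    # intensity and ambient light intensity; squashing it to (0, 1) with a
    # sigmoid so it can act as a blending weight is an assumption.
    g_ir = directional_gradients(infrared, n_dirs)
    g_nm = directional_gradients(normal, n_dirs)
    w = 1.0 / (1.0 + np.exp(-(image_intensity - ambient_intensity)))
    return (1.0 - w) * g_ir + w * g_nm

# toy usage on random 64x64 images
ir, nm = np.random.rand(64, 64), np.random.rand(64, 64)
print(adjust_gradients(ir, nm, image_intensity=0.8, ambient_intensity=0.1).shape)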
2. The method for monitoring the target object based on the field night environment as claimed in claim 1, wherein the identifying the outline of the fixed shelter in the normal light intensity image comprises:
performing convolution and pooling processing on each pixel point in the normal light intensity image a preset number of times to obtain low-dimensional pixel features of the normal light intensity image;
mapping the low-dimensional pixel features into a pre-constructed high-dimensional space coordinate system to obtain a space coordinate corresponding to each low-dimensional pixel feature;
selecting one space coordinate from the space coordinates one by one as a target coordinate, calculating a distance value between the target coordinate and each preset fixed shelter label, and selecting the fixed shelter label with the minimum distance value as the fixed shelter label of the low-dimensional pixel feature corresponding to the target coordinate;
and determining a connected domain formed by the pixel points with the same fixed shelter label in the normal light intensity image as a fixed shelter outline in the normal light intensity image.
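For illustration only, the following minimal Python sketch shows the nearest-label assignment of claim 2, assuming the low-dimensional pixel features have already been mapped into a common coordinate space; the label names and reference vectors in the toy usage are hypothetical, and the convolution/pooling stage that produces the features is not shown.

import numpy as np

def assign_shelter_labels(pixel_features, label_vectors):
    # Assign each low-dimensional pixel feature to the nearest fixed-shelter label.
    # pixel_features: (H, W, D) array of per-pixel feature coordinates.
    # label_vectors:  {label_name: (D,) reference coordinate} per shelter class.
    names = list(label_vectors)
    refs = np.stack([label_vectors[n] for n in names])                    # (K, D)
    dists = np.linalg.norm(pixel_features[..., None, :] - refs, axis=-1)  # (H, W, K)
    nearest = dists.argmin(axis=-1)                                       # (H, W)
    return np.array(names, dtype=object)[nearest]

# toy usage: 2-D features and two hypothetical shelter labels
feats = np.random.rand(4, 4, 2)
labels = assign_shelter_labels(feats, {"rock": np.array([0.1, 0.1]),
                                       "tree": np.array([0.9, 0.9])})
print(labels)

Pixels that receive the same label can then be grouped into connected domains to form the fixed shelter contours, as recited in the claim.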
3. The method for monitoring the target object based on the field night environment as claimed in claim 1, wherein the identifying the heat source area contour in the corrected image comprises:
identifying the heat value of each pixel point in the corrected image;
selecting a connected domain formed by the pixel points whose heat value is larger than a preset heat threshold value as a heat source region;
and connecting the most external pixel points of the heat source area to obtain the outline of the heat source area.
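A minimal sketch of the heat-source contour extraction in claim 3, assuming a per-pixel heat map is already available: pixels above a preset heat threshold are grouped into connected domains and the outermost pixels of each domain are kept as the contour. SciPy is used here only as an implementation convenience, not as the patent's prescribed toolset.

import numpy as np
from scipy import ndimage

def heat_source_contours(heat_map, heat_threshold):
    # heat_map: 2-D array of per-pixel heat values (e.g. from a thermal image).
    # Returns one boolean contour mask per heat-source connected domain.
    hot = heat_map > heat_threshold                    # candidate heat-source pixels
    labels, n = ndimage.label(hot)                     # connected domains
    contours = []
    for k in range(1, n + 1):
        region = labels == k
        # outermost pixels: region pixels removed by a one-step erosion
        contours.append(region & ~ndimage.binary_erosion(region))
    return contours

# toy usage: a synthetic 100x100 heat map with one warm blob
heat = np.zeros((100, 100))
heat[40:60, 30:55] = 37.0
print(sum(c.sum() for c in heat_source_contours(heat, heat_threshold=30.0)))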
4. The field night environment-based target object monitoring method as claimed in claim 3, wherein the performing contour completion on the heat source region contour according to the local features to obtain a complete heat source contour comprises:
determining a plurality of contour breakpoints of the contour of the heat source region according to the position relation between the fixed shelter and the heat source region;
and constructing a tangent of each contour breakpoint, determining an intersection point of each tangent as a predicted connection point, and connecting each contour breakpoint and each predicted connection point by using a smooth curve to obtain a complete heat source contour.
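Claim 4 builds a tangent at each contour breakpoint, takes the tangents' intersection as a predicted connection point, and joins the breakpoints with a smooth curve. The sketch below is one simple reading of that construction for a pair of breakpoints: realising the "smooth curve" as a quadratic Bezier whose control point is the tangent intersection is an illustrative choice rather than the patent's prescribed curve, and the degenerate case of parallel tangents is not handled.

import numpy as np

def tangent_intersection(p1, d1, p2, d2):
    # Intersection of the tangent lines p1 + t*d1 and p2 + s*d2 in the plane.
    A = np.column_stack([d1, -d2])
    t, _ = np.linalg.solve(A, p2 - p1)
    return p1 + t * d1

def bridge_breakpoints(p1, d1, p2, d2, n=20):
    # Connect two contour breakpoints with a quadratic Bezier whose control
    # point is the predicted connection point (the tangent intersection).
    c = tangent_intersection(p1, d1, p2, d2)
    t = np.linspace(0.0, 1.0, n)[:, None]
    return (1 - t) ** 2 * p1 + 2 * (1 - t) * t * c + t ** 2 * p2

# toy usage: two breakpoints on either side of an occluded arc
curve = bridge_breakpoints(np.array([0.0, 0.0]), np.array([1.0, 1.0]),
                           np.array([4.0, 0.0]), np.array([-1.0, 1.0]))
print(curve[:3])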
5. The method for monitoring the target object based on the field night environment according to any one of claims 1 to 4, wherein the performing two-channel fusion feature recognition on each local area to obtain the local feature of each local area comprises:
selecting one local area from the plurality of local areas one by one as a target area;
identifying a first region feature of the target region by using a preset first feature channel, and identifying a second region feature of the target region by using a preset second feature channel, wherein the first region feature is different from the second region feature, and the first region feature and the second region feature are two different ones of a corner point feature, a blob feature, an edge feature, a line feature and a texture feature;
and fusing the first area characteristic and the second area characteristic into a local characteristic of the target area by utilizing a preset characteristic pyramid network.
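As a rough illustration of the two-channel idea in claim 5, the sketch below computes an edge channel and a corner channel for a local area and stacks them into a single local feature tensor. The Sobel/Harris-style operators and the plain stacking are assumptions standing in for the preset feature channels and for the feature pyramid network fusion named in the claim.

import numpy as np
from scipy import ndimage

def edge_feature(region):
    # Edge channel: Sobel gradient magnitude.
    gx = ndimage.sobel(region, axis=1)
    gy = ndimage.sobel(region, axis=0)
    return np.hypot(gx, gy)

def corner_feature(region, k=0.04):
    # Corner channel: a minimal Harris-style response.
    gx = ndimage.sobel(region, axis=1)
    gy = ndimage.sobel(region, axis=0)
    sxx = ndimage.gaussian_filter(gx * gx, 1.0)
    syy = ndimage.gaussian_filter(gy * gy, 1.0)
    sxy = ndimage.gaussian_filter(gx * gy, 1.0)
    return sxx * syy - sxy ** 2 - k * (sxx + syy) ** 2

def fuse_local_features(region):
    # Stack the two channels into one (H, W, 2) local feature tensor.
    return np.dstack([edge_feature(region), corner_feature(region)])

# toy usage on a synthetic 32x32 local area
area = np.zeros((32, 32))
area[8:24, 8:24] = 1.0
print(fuse_local_features(area).shape)   # (32, 32, 2)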
6. An object monitoring device based on a field night environment, the device comprising:
the image analysis module is used for acquiring a visible light image shot by a camera, reading the color gradation value of each pixel point in the visible light image, selecting one pixel point from the visible light image one by one as a target pixel point, calculating the difference value between the color gradation value corresponding to the target pixel point and a preset color gradation median value, and calculating the ambient light intensity of the camera according to the difference value by using the following different-weight mean value algorithm:
wherein the formula itself appears in the source only as embedded formula images and is not reproduced here; its symbols, as described in the original, denote the ambient light intensity of the camera, the color gradation value of each target pixel point in the visible light image, the number of pixel points within the visible light image, a preset constant, the preset color gradation median value, and a direction coefficient Y;
acquiring a normal light intensity image of the geographical position of the camera, identifying the outline of a fixed shelter in the normal light intensity image, and identifying the image light intensity of the normal light intensity image;
the information patching module is used for acquiring an infrared image shot by the camera, and mapping the pixels in the infrared image to a pre-constructed first plane coordinate system with the central pixel point of the infrared image as the origin to obtain first plane coordinates; calculating a first pixel gradient of each pixel point in the infrared image in N preset directions by using a non-subsampled contourlet transform, mapping the pixels in the normal light intensity image to a pre-constructed second plane coordinate system with the central pixel point of the normal light intensity image as the origin to obtain second plane coordinates, and calculating a second pixel gradient of each pixel point in the normal light intensity image in the N preset directions by using a non-subsampled shearlet transform; calculating the difference value between the image light intensity and the ambient light intensity, taking the difference value as a transformation coefficient, adjusting the first pixel gradient toward the direction of the second pixel gradient by using the transformation coefficient, and taking the adjusted infrared image as a corrected image;
the area dividing module is used for identifying a heat source area contour in the corrected image and dividing the corrected image into a plurality of local areas according to the heat source area contour and the fixed shelter contour;
the characteristic extraction module is used for respectively carrying out double-channel fusion characteristic identification on each local area to obtain the local characteristic of each local area;
and the target object monitoring module is used for performing contour completion on the heat source region contour according to the local features to obtain a complete heat source contour and identifying the type of the target object corresponding to the heat source region contour according to the complete heat source contour.
7. An electronic device, characterized in that the electronic device comprises:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform a method for monitoring an object based on a field nighttime environment as claimed in any one of claims 1 to 5.
8. A computer-readable storage medium, storing a computer program, wherein the computer program, when executed by a processor, implements a method for monitoring an object based on a field nighttime environment according to any one of claims 1 to 5.
CN202210669439.8A 2022-06-14 2022-06-14 Target object monitoring method, device, equipment and medium based on field night environment Active CN114758249B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210669439.8A CN114758249B (en) 2022-06-14 2022-06-14 Target object monitoring method, device, equipment and medium based on field night environment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210669439.8A CN114758249B (en) 2022-06-14 2022-06-14 Target object monitoring method, device, equipment and medium based on field night environment

Publications (2)

Publication Number Publication Date
CN114758249A CN114758249A (en) 2022-07-15
CN114758249B true CN114758249B (en) 2022-09-02

Family

ID=82336094

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210669439.8A Active CN114758249B (en) 2022-06-14 2022-06-14 Target object monitoring method, device, equipment and medium based on field night environment

Country Status (1)

Country Link
CN (1) CN114758249B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115830431B (en) * 2023-02-08 2023-05-02 湖北工业大学 Neural network image preprocessing method based on light intensity analysis
CN116129157B (en) * 2023-04-13 2023-06-16 深圳市夜行人科技有限公司 Intelligent image processing method and system for warning camera based on extreme low light level
CN116740653A (en) * 2023-08-14 2023-09-12 山东创亿智慧信息科技发展有限责任公司 Distribution box running state monitoring method and system
CN117593341B (en) * 2024-01-19 2024-05-07 深圳市超诺科技有限公司 System and method for processing target object monitoring data based on hunting camera


Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102011010334B4 (en) * 2011-02-04 2014-08-28 Eads Deutschland Gmbh Camera system and method for observing objects at a great distance, in particular for monitoring target objects at night, mist, dust or rain
CN105447838A (en) * 2014-08-27 2016-03-30 北京计算机技术及应用研究所 Method and system for infrared and low-level-light/visible-light fusion imaging
CN111323767B (en) * 2020-03-12 2023-08-08 武汉理工大学 System and method for detecting obstacle of unmanned vehicle at night
CN114049568A (en) * 2021-11-29 2022-02-15 中国平安财产保险股份有限公司 Object shape change detection method, device, equipment and medium based on image comparison

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102306295A (en) * 2011-05-31 2012-01-04 东华大学 Natural color night vision realization method based on single band infrared image
KR20130126418A (en) * 2012-05-11 2013-11-20 삼성탈레스 주식회사 Night sighting device for vulcan using thermal camera and method thereof
CN105044899A (en) * 2015-08-29 2015-11-11 四川七彩光电科技有限公司 Night vision device based on low light level imaging and implementation method thereof
CN106407948A (en) * 2016-09-30 2017-02-15 防城港市港口区高创信息技术有限公司 Pedestrian detection and recognition method based on infrared night vision device
CN109116924A (en) * 2018-08-14 2019-01-01 潍坊歌尔电子有限公司 The adjusting method of indicating light brightness, intelligent wearable device and storage medium
CN109472256A (en) * 2019-01-02 2019-03-15 绍兴佳林商务信息咨询有限公司 A kind of northeastern tiger dynamic trace monitoring system and method
WO2021189912A1 (en) * 2020-09-25 2021-09-30 平安科技(深圳)有限公司 Method and apparatus for detecting target object in image, and electronic device and storage medium
CN113609893A (en) * 2021-06-18 2021-11-05 大连民族大学 Low-illuminance indoor human body target visible light feature reconstruction method and network based on infrared camera
CN114565866A (en) * 2021-11-05 2022-05-31 南京大学 All-time target tracking system based on dual-mode multi-band fusion
CN114445733A (en) * 2021-12-22 2022-05-06 武汉理工大学 Night road information sensing system based on machine vision, radar and WiFi and information fusion method

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
A simple self-adjusting model for correcting the blooming effects in DMSP-OLS nighttime light images; Xin Cao et al.; Remote Sensing of Environment; 2019-04-30; Vol. 224; pp. 401-411 *
Separation experiment of nighttime fog in GMS-5 infrared images; Chen Wei et al.; Journal of Meteorological Sciences (气象科学); 2004-12-31 (No. 2); pp. 193-198 *
Optimizing observing strategies for monitoring animals using drone-mounted thermal infrared cameras; Claire Burke et al.; International Journal of Remote Sensing; 2019-01-17; Vol. 40 (No. 2); pp. 439-467 *
Identifying nighttime forest fires with a thermal infrared imager; Liu Kezhen; Journal of Fujian Agriculture and Forestry University (Natural Science Edition); 2019-12-31; Vol. 48 (No. 1); pp. 69-74 *
Nighttime wild hare monitoring method based on infrared thermal imaging and improved YOLOV3; Yi Shi et al.; Transactions of the Chinese Society of Agricultural Engineering (农业工程学报); 2019-10-31; Vol. 35 (No. 19); pp. 223-229 *
Application of infrared camera technology in species monitoring and data mining; Liu Xuehua et al.; Biodiversity Science (生物多样性); 2018-12-31; pp. 1-12 *

Also Published As

Publication number Publication date
CN114758249A (en) 2022-07-15

Similar Documents

Publication Publication Date Title
CN114758249B (en) Target object monitoring method, device, equipment and medium based on field night environment
CN110060237B (en) Fault detection method, device, equipment and system
CN113283446B (en) Method and device for identifying object in image, electronic equipment and storage medium
CN109916415B (en) Road type determination method, device, equipment and storage medium
CN110288612B (en) Nameplate positioning and correcting method and device
CN111639704A (en) Target identification method, device and computer readable storage medium
CN114049568A (en) Object shape change detection method, device, equipment and medium based on image comparison
CN115457451B (en) Constant temperature and humidity test box monitoring method and device based on Internet of things
CN114241338A (en) Building measuring method, device, equipment and storage medium based on image recognition
CN114842240A (en) Method for classifying images of leaves of MobileNet V2 crops by fusing ghost module and attention mechanism
CN112686872B (en) Wood counting method based on deep learning
CN116563040A (en) Farm risk exploration method, device, equipment and storage medium based on livestock identification
CN115294426B (en) Method, device and equipment for tracking interventional medical equipment and storage medium
CN117036947A (en) Image recognition-based agricultural risk early warning method, device, equipment and medium
CN114627435B (en) Intelligent light adjusting method, device, equipment and medium based on image recognition
CN113792801B (en) Method, device, equipment and storage medium for detecting face dazzling degree
CN113610934B (en) Image brightness adjustment method, device, equipment and storage medium
CN115526883A (en) LED lamp bead detection method and device, computer equipment and storage medium
CN114882059A (en) Dimension measuring method, device and equipment based on image analysis and storage medium
CN112329596B (en) Target damage assessment method and device, electronic equipment and computer-readable storage medium
CN114463685A (en) Behavior recognition method and device, electronic equipment and storage medium
CN114187476A (en) Vehicle insurance information checking method, device, equipment and medium based on image analysis
CN113792671A (en) Method and device for detecting face synthetic image, electronic equipment and medium
CN111695609B (en) Target damage degree judging method and device, electronic equipment and storage medium
CN114708230B (en) Vehicle frame quality detection method, device, equipment and medium based on image analysis

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant