CN112904778B - Wild animal intelligent monitoring method based on multi-dimensional information fusion - Google Patents


Info

Publication number
CN112904778B
Authority
CN
China
Prior art keywords
image
wild animal
area
wild
monitoring
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110145630.8A
Other languages
Chinese (zh)
Other versions
CN112904778A (en)
Inventor
谢永华
徐其森
姜广顺
Current Assignee
Northeast Forestry University
Original Assignee
Northeast Forestry University
Priority date
Filing date
Publication date
Application filed by Northeast Forestry University
Priority to CN202110145630.8A
Publication of CN112904778A
Application granted
Publication of CN112904778B
Legal status: Active

Classifications

    • G - PHYSICS
    • G05 - CONTROLLING; REGULATING
    • G05B - CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B19/00 - Programme-control systems
    • G05B19/02 - Programme-control systems, electric
    • G05B19/04 - Programme control other than numerical control, i.e. in sequence controllers or logic controllers
    • G05B19/042 - using digital processors
    • G05B19/0423 - Input/output
    • G05B2219/00 - Program-control systems
    • G05B2219/20 - Pc systems
    • G05B2219/24 - Pc safety
    • G05B2219/24215 - SCADA (supervisory control and data acquisition)

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an intelligent wild animal monitoring method based on multi-dimensional information fusion, which comprises the following steps: carrying a double-light coaxial camera on an unmanned aerial vehicle and acquiring, along a preset route, fusion images of visible light images and infrared thermal images of wild animals; tracking the trend of the wild animals' footprints with an unmanned aerial vehicle automatic navigation algorithm to obtain a footprint chain image; splicing the wild animal fusion images and the footprint chain images respectively to obtain spliced images; pre-embedding a vibration sensor network in the monitoring area, identifying and classifying the vibration signals acquired by the network online and in real time, and uploading the classification results to a monitoring terminal; and analyzing the spliced images with a pre-trained dual-channel network recognition model to obtain the wild animal species classification result for the monitoring area, which is likewise uploaded to the monitoring terminal. The invention can monitor and identify the wild animals in their habitat in a multi-dimensional, all-weather manner, greatly improving identification accuracy and working efficiency.

Description

Wild animal intelligent monitoring method based on multi-dimensional information fusion
Technical Field
The invention relates to the technical field of wild animal monitoring, in particular to a wild animal intelligent monitoring method based on multi-dimensional information fusion.
Background
Wild animals are important members of the ecosystem: they are closely tied to the maintenance of ecological balance and inseparably linked to human life and social development. Rising productivity has brought increasingly serious ecological and environmental problems, and many wild animals are now endangered. Monitoring work to protect endangered wild animals is therefore imperative, and its quality directly determines whether wild animals can be protected in real time.
At present, wild animals are mainly monitored either with fixed infrared cameras or with a visible light camera carried by an unmanned aerial vehicle. Infrared-camera monitoring acquires wild animal image data through an automatic camera system (such as a passive/active infrared-triggered camera or a time-lapse camera). It requires substantial manpower for maintenance, and the monitoring data must be retrieved by sending personnel to each camera site and then processed and analyzed manually, which consumes a great deal of labor and time, delays the analysis results, and keeps monitoring efficiency low. An unmanned aerial vehicle carrying a visible light camera, by contrast, is not restricted by terrain, has a wide operating range, can keep working even during natural disasters, and offers high safety and reliability, giving it a huge advantage over manual data collection. However, the visible light sensor is sensitive to illumination changes, poor night visibility, and similar conditions, so it cannot monitor wild animals around the clock.
Moreover, neither monitoring mode, the fixed infrared camera or the unmanned aerial vehicle carrying a visible light camera, can comprehensively record and intelligently analyze the weight, approximate position, and footprint chain of a wild animal.
In recent years, deep learning has made major breakthroughs in artificial intelligence, with great success in speech recognition, image recognition, video analysis, and related fields. When wild animals are photographed in the field, however, the image data are easily affected by animal behavior, the surrounding environment, and the climate, and imaging conditions are poor, so recognition accuracy is low. In natural scenes in particular, nonlinear factors such as illumination introduce occlusion and viewing-angle problems into this uncooperative research target, and many traditional algorithms are difficult to apply directly, again yielding low recognition accuracy.
Therefore, an urgent problem for those skilled in the art is how to provide an intelligent wild animal monitoring method based on multi-dimensional information fusion that can accurately identify wild animals in a multi-dimensional, all-weather manner and analyze the monitoring data intelligently and efficiently.
Disclosure of Invention
In view of the above, the invention provides an intelligent wild animal monitoring method based on multi-dimensional information fusion, which can perform multi-dimensional and all-weather monitoring and identification on wild animals in a wild animal habitat, and greatly improves identification precision and work efficiency.
In order to achieve the purpose, the invention adopts the following technical scheme:
a wild animal intelligent monitoring method based on multi-dimensional information fusion comprises the following steps:
carrying a double-light coaxial camera by using an unmanned aerial vehicle, and acquiring a visible light image and an infrared thermal image of a monitoring area according to a preset air route;
carrying out fusion processing on the visible light image and the infrared thermal image to obtain a wild animal fusion image;
acquiring the footprints of wild animals in a monitored area, tracking the trend of the footprints of the wild animals by using an unmanned aerial vehicle automatic navigation algorithm, and acquiring a footprint chain image;
respectively carrying out split-flight-zone splicing and integral splicing on the wild animal fusion image and the footprint chain image by using a graphic workstation to respectively obtain a split-flight-zone spliced image and an integral spliced image;
pre-burying a vibration sensor network in a monitoring area, carrying out real-time online identification and classification on vibration signals acquired by the vibration sensor network, and uploading classification results to a monitoring terminal;
and integrally analyzing the spliced images of the branch navigation belts and the whole spliced image by using a pre-trained dual-channel network identification model to obtain a wild animal species classification result in the monitoring area, and uploading the wild animal species classification result to the monitoring terminal.
According to the above technical scheme, compared with the prior art, the invention discloses an intelligent wild animal monitoring method based on multi-dimensional information fusion. An unmanned aerial vehicle carries a double-light coaxial camera and, combined with a route-planning strategy, acquires wild animal fusion images along a preset sampling route covering the animals' habitat. By recognizing the walking direction of the snow footprint chains of large wild animals such as the northeast tiger and leopard and combining it with the unmanned aerial vehicle's automatic navigation algorithm, the method automatically tracks the footprint chain to obtain footprint chain images. A high-performance graphic workstation performs flight-band splicing and integral splicing of the wild animal images and footprint chain images; a regression algorithm performs target detection and segmentation to generate ROI (region of interest) images; a pre-trained global-local dual-channel network recognition model then identifies the spliced images, a classifier outputs the species recognition result, and a statistical report of the habitat's wild animal population numbers and individual information is generated. In addition, by pre-embedding a vibration optical fiber sensor network in the habitat, the invention can extract individual ecological information of the animals, realizing multi-aspect monitoring of wild animals and improving monitoring precision.
Preferably, in the above method for intelligently monitoring wild animals based on multi-dimensional information fusion, the fusion processing is performed on the visible light image and the infrared thermal image to obtain a wild animal fusion image, and the method includes:
carrying out wild animal detection on the visible light image to obtain a first detection area of the wild animal;
carrying out wild animal detection on the infrared thermal image to obtain a second detection area of the wild animal;
comparing the first detection area of the wild animal with the second detection area of the wild animal, judging whether the coincidence area of the first detection area of the wild animal and the second detection area of the wild animal exceeds a preset coincidence area threshold value, and if so, indicating that a target of the wild animal is detected;
image registering the visible light image and the infrared thermal image comprising a wildlife target;
and inputting the registered visible light image and infrared thermal image into a pre-trained fusion network, and outputting the wild animal fusion image.
Preferably, in the above method for intelligently monitoring wild animals based on multi-dimensional information fusion, the wild animal detection is performed on the infrared thermal image to obtain a second detection area of wild animals, and the method includes:
carrying out temperature marking on the infrared thermal image to obtain a temperature marking area;
calculating, for the temperature marking areas, whether the temperature difference value between adjacent areas is smaller than a preset threshold value; if it is smaller, the adjacent areas belong to the same target and are connected, and the detection is repeated in a loop until the temperature marking areas are completely connected, so as to obtain the areas segmented according to the temperature marking;
segmenting the infrared thermal image by utilizing a maximum inter-class variance method;
and comparing the image segmented by the maximum inter-class variance method with the segmented region labeled according to the temperature, and keeping the coincidence and removing the non-coincidence to obtain a second detection region of the wild animal.
Preferably, in the wild animal intelligent monitoring method based on multi-dimensional information fusion, the double-optical coaxial camera includes an optical camera and an infrared imaging camera.
Preferably, in the method for intelligently monitoring wild animals based on multi-dimensional information fusion, the method includes the steps of obtaining the track of the wild animals in a monitoring area, tracking the trend of the track of the wild animals by using an unmanned aerial vehicle automatic navigation algorithm, and obtaining a track chain image, and includes:
acquiring the footprints of wild animals in a monitoring area by using a double-optical coaxial camera, and identifying the types of the footprints by using a pre-constructed footprint image library;
controlling the unmanned aerial vehicle to track the trend of the type of footprints according to the preset height and speed by using an unmanned aerial vehicle automatic navigation algorithm, and automatically aerial-shooting to obtain a plurality of footprint images;
and splicing the footprint images to obtain the footprint chain image.
Preferably, in the above intelligent wild animal monitoring method based on multidimensional information fusion, the pre-constructed footprint image library includes a sample joint representation dictionary composed of a sole dictionary and a heel dictionary; the sample joint representation dictionary is provided with a sole label and a heel label which respectively correspond to the sole dictionary and the heel dictionary;
and sequentially comparing the currently acquired wild animal footprints with the sole tags and the heel tags in the sample joint representation dictionary by the pre-constructed footprint image library, and determining the types of the wild animal footprints if the overall similarity is more than 90%.
Preferably, in the above method for intelligently monitoring wild animals based on multidimensional information fusion, the embedding of the vibration sensor network in the monitoring area to perform real-time online identification and classification on the vibration signals acquired by the vibration sensor network includes:
processing the vibration signal of the vibration sensor network by using short-time passband energy conversion to realize disturbance positioning;
extracting disturbance continuous time and signal distribution of a region near a disturbance point, and acquiring a frequency-space-time image by using short-time Fourier transform to acquire a disturbance signal;
and performing online identification and classification on the disturbance signals by using a pre-trained entity information-vibration signal relation model, and outputting a classification result.
Preferably, in the above method for intelligently monitoring wild animals based on multidimensional information fusion, the processing the vibration signal of the vibration sensor network by using short-time passband energy transformation to realize disturbance positioning further includes:
uploading the position of the disturbance point to the monitoring terminal;
the monitoring terminal plans a new cruising route according to the position of the disturbance point and the current position of the unmanned aerial vehicle carrying the double-light coaxial camera;
the unmanned aerial vehicle carries the double-light coaxial camera to fly to the disturbance position according to the new cruising route for aerial photography, and an image of an area near a disturbance point is obtained;
and synchronously uploading the images of the area near the disturbance point and the classification result to the monitoring terminal.
Preferably, in the method for intelligently monitoring wild animals based on multi-dimensional information fusion, the integrally analyzing the spliced image of the separate navigation band and the whole spliced image by using a pre-trained two-channel network recognition model to obtain a wild animal species classification result in the monitoring area, and uploading the wild animal species classification result to the monitoring terminal includes:
respectively acquiring the spliced images of the sub-navigation belts and the whole spliced image;
detecting the area coordinates of the wild animals in the original images of the split navigation band spliced images and the whole spliced image by using a regression algorithm, and segmenting the original images to obtain target area images;
analyzing the original image and the target area image respectively by utilizing a pre-trained dual-channel network recognition model to obtain two classification probability results;
and fusing the probabilities of the two classification results, feeding the fused result into a classifier, outputting statistical reports of wild animal population information and individual information for the monitoring area, and uploading the statistical reports to the monitoring terminal.
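The probability-fusion step in this claim can be sketched as follows. This is a minimal illustration under assumptions, not the patented implementation: each channel is taken to output a softmax probability vector over the same species list, fusion is a simple weighted average, and the species names and the weight are hypothetical.

```python
import numpy as np

def fuse_two_channel(p_global, p_local, w=0.5):
    """Fuse the global-channel and local (ROI) channel class probabilities.

    p_global, p_local: softmax probability vectors over the same species list.
    w: weight given to the global channel (a hypothetical choice; the patent
       does not specify fusion weights).
    """
    p_global = np.asarray(p_global, dtype=float)
    p_local = np.asarray(p_local, dtype=float)
    fused = w * p_global + (1.0 - w) * p_local
    return fused / fused.sum()  # renormalize so it stays a distribution

species = ["northeast tiger", "leopard", "roe deer", "wild boar"]
p_g = [0.60, 0.25, 0.10, 0.05]   # whole spliced-image channel
p_l = [0.70, 0.20, 0.05, 0.05]   # segmented target-region channel
fused = fuse_two_channel(p_g, p_l)
print(species[int(np.argmax(fused))])  # prints "northeast tiger"
```

The fused vector would then be fed into the classifier that produces the final species report.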
Preferably, in the method for intelligently monitoring wild animals based on multi-dimensional information fusion, the split flight zone mosaic image comprises a split flight zone mosaic image of the wild animal fusion image and a split flight zone mosaic image of the footprint chain image; the whole mosaic image comprises a whole mosaic image of the wild animal fusion image and a whole mosaic image of the footprint chain image.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the provided drawings without creative efforts.
FIG. 1 is a schematic structural diagram of a wild animal intelligent monitoring method based on multi-dimensional information fusion provided by the invention;
FIG. 2 is a flow chart of S2 provided by the present invention;
FIG. 3 is a flow chart illustrating S22 provided by the present invention;
FIG. 4 is a flow chart of S3 provided by the present invention;
FIG. 5 is a flow chart of S5 provided by the present invention;
FIG. 6 is a flowchart illustrating another embodiment of S5 according to the present invention;
FIG. 7 is a flow chart of S6 provided by the present invention;
fig. 8 is a schematic structural diagram of a two-channel network recognition model provided by the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
As shown in FIG. 1, the embodiment of the invention discloses a wild animal intelligent monitoring method based on multi-dimensional information fusion, which comprises the following steps:
s1, carrying the double-light coaxial camera by the unmanned aerial vehicle, and acquiring the visible light image and the infrared thermal image of the monitoring area according to a preset air route.
The double-light coaxial camera comprises an optical camera and an infrared thermal-imaging camera. The camera in this embodiment can perform single-light and double-light coaxial imaging in low-light environments, a breakthrough for accurate identification technology: the detectable distance of double-light fusion imaging exceeds 5 km, realizing long-range monitoring by day and by night, with 8 million visible-light photograph pixels, 2 million effective video pixels, a real-time data transmission range of 5 km, and an image latency below 200 ms. The double-light coaxial camera guarantees high-quality image acquisition in the field: animal tracks are monitored with visible light in the daytime and with infrared at night, and infrared fill-light double-light fusion imaging is switched on once an animal track is found.
In this embodiment, the process of planning the route is as follows:
for a specific experimental area, such as the Wanqing national-level natural protection area in Jilin province, 4 flight routes are sampled and selected, and the situation that the flight routes are uniformly distributed in the whole protection area is ensured.
On the premise of improving the resolution and increasing the overlapping degree between the navigation zones as much as possible, the navigation zones are investigated to cover different protected area subareas such as a core area, a buffer area and an experimental area. And the positions of the routes are adjusted according to the animal habit distribution, so that all 4 routes cover the monitoring area as much as possible.
And adjusting the route too close to the artificial ground object and the town to avoid the region with large influence on human activities. Meanwhile, by combining with DEM (digital Elevation model) data, the flight height is properly adjusted to be high in areas with dense mountains and mountains, and accidents of the unmanned aerial vehicle are avoided.
According to the terrain condition, the position of the flight band is adjusted, a suitable flying field is selected, and the overlapping degree and the suitable flying height between the sensors are set. And adjusting the optimal flying height according to the sensitivity of the animal species in the natural protection area to the noise of the unmanned aerial vehicle.
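The DEM-based height adjustment described above can be sketched as follows. This is a minimal illustration only: the 120 m terrain clearance, the toy DEM grid, and the waypoint format are all assumptions, since the text only says the flight height is raised over mountainous areas.

```python
import numpy as np

def adjust_route_altitude(waypoints, dem, cell, min_clearance=120.0):
    """Raise each waypoint so it keeps at least min_clearance metres above
    the terrain sampled from a DEM grid (hypothetical clearance value).

    waypoints: list of (x, y, alt_m) in the DEM's coordinate frame.
    dem: 2-D array of terrain elevations in metres.
    cell: DEM cell size in the same units as x and y.
    """
    adjusted = []
    for x, y, alt in waypoints:
        i = min(int(y / cell), dem.shape[0] - 1)   # row index in the DEM
        j = min(int(x / cell), dem.shape[1] - 1)   # column index
        terrain = dem[i, j]
        adjusted.append((x, y, max(alt, terrain + min_clearance)))
    return adjusted

dem = np.array([[200.0, 450.0],
                [300.0, 900.0]])           # toy 2x2 elevation grid, 1 km cells
route = [(100.0, 100.0, 400.0),           # flat area: altitude kept at 400 m
         (1500.0, 1500.0, 400.0)]         # mountain: altitude raised to 1020 m
print(adjust_route_altitude(route, dem, cell=1000.0))
```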
And S2, carrying out fusion processing on the visible light image and the infrared thermal image to obtain a wild animal fusion image.
S3, acquiring the wild animal footprints of the monitored areas, tracking the trends of the wild animal footprints by utilizing an unmanned aerial vehicle automatic navigation algorithm, and acquiring a footprint chain image.
And S4, respectively carrying out split flight band splicing and integral splicing on the wild animal fusion image and the footprint chain image by using the graphic workstation to respectively obtain a split flight band spliced image and an integral spliced image.
S5, pre-burying a vibration sensor network in the monitoring area, carrying out real-time online identification and classification on vibration signals acquired by the vibration sensor network, and uploading classification results to a monitoring terminal.
And S6, integrally analyzing the spliced image of the navigation belt and the whole spliced image by using a pre-trained two-channel network recognition model to obtain a wild animal species classification result in the monitoring area, and uploading the wild animal species classification result to the monitoring terminal.
Specifically, as shown in fig. 2, S2 includes:
and S21, carrying out wild animal detection on the visible light image to obtain a first detection area of the wild animal.
And S22, carrying out wild animal detection on the infrared thermal image to obtain a second detection area of the wild animal.
From 2010 to 2019, more than 2000 automatic cameras were repeatedly deployed in the Wangqing, Huangnihe, and Hunchun national nature reserves of Jilin Province, recording tiger and leopard occurrences more than 4000 times and animals such as roe deer, sika deer, red deer, and wild boar more than ten thousand times. In the last two years, tens of hours of wild animal footage across different seasons have also been acquired by unmanned aerial vehicle monitoring. The ecological behavior of the wild northeast tiger, leopard, snow leopard, lynx, and their major prey (red deer, roe deer, wild boar, sika deer) was monitored with emphasis, building a wild animal monitoring image database. Wild animal detection based on this database can accurately identify wild animal species.
Specifically, as shown in fig. 3, S22 includes:
s221, carrying out temperature annotation on the infrared thermal image to obtain a temperature annotation area;
s222, calculating, according to the temperature marking areas, whether the temperature difference value between adjacent areas is smaller than a preset threshold value; if it is smaller, the adjacent areas belong to the same target and are connected, and the detection is repeated in a loop until the temperature marking areas are completely connected, obtaining the areas segmented according to the temperature marking;
s223, segmenting the infrared thermal image by utilizing a maximum inter-class variance method;
s224, comparing the image segmented by the maximum inter-class variance method with the region segmented according to the temperature mark, and keeping the superposition and removing the non-superposition to obtain a second detection region of the wild animal.
S23, comparing the first detection area of the wild animal with the second detection area of the wild animal, judging whether the coincidence area of the first detection area and the second detection area exceeds a preset coincidence area threshold value, and if so, indicating that the target of the wild animal is detected.
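The coincidence-area test of S23 can be illustrated with axis-aligned boxes; the box format and the threshold value are assumptions, since the patent does not specify either.

```python
def overlap_area(a, b):
    """Intersection area of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    w = min(a[2], b[2]) - max(a[0], b[0])
    h = min(a[3], b[3]) - max(a[1], b[1])
    return max(w, 0) * max(h, 0)

def target_detected(visible_box, thermal_box, min_overlap=400):
    """S23: declare a wild animal target when the coincidence area of the
    visible-light and infrared detection areas exceeds a preset threshold.
    The threshold value here is an illustrative assumption."""
    return overlap_area(visible_box, thermal_box) > min_overlap

print(target_detected((10, 10, 60, 60), (30, 30, 90, 90)))  # overlap 900 -> True
```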
And S24, carrying out image registration on the visible light image and the infrared thermal image containing the wild animal target.
And S25, inputting the registered visible light image and infrared thermal image into a pre-trained fusion network, and outputting a wild animal fusion image.
This embodiment adopts a deep learning architecture for the problem of fusing infrared thermal images and visible light images. Compared with a conventional convolutional network, the encoding network combines convolutional layers, a fusion layer, and dense blocks, in which the output of each layer is connected to every subsequent layer.
Before fusion, the depth features of the visible light image and the infrared thermal image are extracted: the first convolutional layer extracts coarse features, and three further convolutional layers (the output of each layer cascaded to the inputs of the subsequent layers) form a dense block. This architecture has two advantages. First, the filter size and the stride of the convolution operation are 3 × 3 and 1, respectively, so the input image can be of any size. Second, the dense block preserves depth features as fully as possible in the encoding network, ensuring that all salient features are available to the fusion strategy.
L1-norm and softmax operations are applied in the fusion layer. The output of the fusion layer is fed into several convolutional layers (3 × 3 convolutions) that form a decoder, which reconstructs the fused feature map into the fused image. This simple and efficient architecture is used to reconstruct the final fused image.
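One reading of the L1-norm/softmax fusion strategy is sketched below: the activity map of each source is the channel-wise L1 norm of its encoder features, and a softmax over the two sources gives per-pixel fusion weights. The feature-map sizes are arbitrary, and this is an interpretation of the text rather than the patented network.

```python
import numpy as np

def l1_softmax_fuse(feat_vis, feat_ir):
    """Fuse two encoder feature maps of shape (C, H, W):
    per-pixel activity = L1 norm over channels,
    per-pixel weights  = softmax over the two sources,
    fused map          = weighted sum of the sources."""
    act_vis = np.abs(feat_vis).sum(axis=0)        # (H, W) activity maps
    act_ir = np.abs(feat_ir).sum(axis=0)
    stacked = np.stack([act_vis, act_ir])          # (2, H, W)
    stacked = stacked - stacked.max(axis=0)        # numerical stability
    w = np.exp(stacked)
    w = w / w.sum(axis=0)                          # softmax over the sources
    return w[0] * feat_vis + w[1] * feat_ir        # broadcasts over channels

rng = np.random.default_rng(0)
feat_vis = rng.standard_normal((16, 8, 8))         # toy encoder outputs
feat_ir = rng.standard_normal((16, 8, 8))
fused = l1_softmax_fuse(feat_vis, feat_ir)
print(fused.shape)                                 # prints (16, 8, 8)
```

The decoder's convolutional layers would then map this fused feature map back to an image.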
As shown in fig. 4, S3 includes:
s31, acquiring the wild animal footprints in the monitoring area by using the double-optical coaxial camera, and identifying the types of the footprints by using a pre-constructed footprint image library.
The pre-constructed footprint image library comprises a sample joint representation dictionary consisting of a sole dictionary and a heel dictionary; the sample joint representation dictionary carries a sole label and a heel label corresponding to the sole dictionary and the heel dictionary, respectively.
And sequentially comparing the currently acquired wild animal footprints with the sole tags and the heel tags in the sample joint representation dictionary by a pre-constructed footprint image library, and determining the type of the wild animal footprints if the overall similarity is more than 90%.
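The sole/heel dictionary comparison can be sketched as follows. This is an assumed reading: footprints are represented by feature vectors, similarity is cosine similarity, and "overall similarity" is taken as the mean of the sole and heel similarities; the descriptors and species entries are invented for illustration.

```python
import numpy as np

def cosine(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_footprint(sole_feat, heel_feat, dictionary, threshold=0.9):
    """Compare a query footprint against the sample joint representation
    dictionary: each entry holds a sole-label feature and a heel-label
    feature, and a match requires overall similarity above 90%."""
    best_species, best_sim = None, -1.0
    for species, (sole_ref, heel_ref) in dictionary.items():
        sim = 0.5 * (cosine(sole_feat, sole_ref) + cosine(heel_feat, heel_ref))
        if sim > best_sim:
            best_species, best_sim = species, sim
    return (best_species, best_sim) if best_sim > threshold else (None, best_sim)

# Hypothetical 4-D footprint descriptors.
dictionary = {
    "northeast tiger": ([1.0, 0.8, 0.2, 0.1], [0.9, 0.7, 0.3, 0.1]),
    "roe deer":        ([0.1, 0.2, 0.9, 1.0], [0.1, 0.3, 0.8, 0.9]),
}
species, sim = match_footprint([0.98, 0.82, 0.18, 0.12],
                               [0.88, 0.72, 0.28, 0.12], dictionary)
print(species)  # prints "northeast tiger"
```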
And S32, controlling the unmanned aerial vehicle to track the type of footprint trend according to the preset height and speed by using an unmanned aerial vehicle automatic navigation algorithm, and automatically aerial-shooting to obtain a plurality of footprint images.
And S33, splicing the footprint images to obtain a footprint chain image.
For critically endangered large mammals such as the northeast tiger and leopard, collar-based tracking is essentially out of the question. However, snow cover, which lasts about six months, completely records their winter walking footprint chains, and these chains carry information about the animal's recent activity, such as timing, number of individuals, and walking direction. The embodiment of the invention therefore collects footprint chain images of northeast tigers, leopards, and other animals walking in the field during the snow season; combined with the principles of unmanned aerial vehicle navigation, it realizes automatic photography, automatic information recognition, and navigation, and the animals' walking direction, number of individuals, and similar information can be analyzed from the footprint chain images.
As shown in fig. 5, S5 includes:
s51, processing the vibration signal of the vibration sensor network by using short-time passband energy transformation to realize disturbance positioning;
s52, extracting disturbance continuous time and signal distribution of a region near a disturbance point, and acquiring a frequency-space-time image by using short-time Fourier transform to acquire a disturbance signal;
and S53, performing online identification and classification on the disturbance signals by using a pre-trained entity information-vibration signal relation model, and outputting a classification result.
In another embodiment, after S53, the method further includes:
S54, uploading the position of the disturbance point to a monitoring terminal.
S55, planning, by the monitoring terminal, a new cruising route according to the position of the disturbance point and the current position of the unmanned aerial vehicle carrying the double-light coaxial camera.
S56, flying the unmanned aerial vehicle carrying the double-light coaxial camera to the disturbance point along the new cruising route for aerial photography, obtaining an image of the area near the disturbance point.
S57, synchronously uploading the image of the area near the disturbance point and the classification result to the monitoring terminal.
The vibration sensor of the present embodiment is an optical fiber sensor, usually buried underground around the protected area, which gives it a degree of concealment. Once a wild animal enters the monitoring area, the sensitive sensing fiber generates a vibration signal in response to various external-force intrusion behaviors, whether caused by direct touch or conducted without direct touch. A pre-trained relation model between the entity information of different monitoring targets and their vibration signals is used to estimate individual animal parameters such as weight and gait frequency, and to locate the disturbance position. Meanwhile, the monitoring terminal dispatches the unmanned aerial vehicle to photograph the disturbance point according to a dispatch rule, for example according to distance, or according to whether a wild animal target already appears in the current picture. The captured pictures and the classification results of the relation model are synchronously uploaded to the monitoring terminal, so that the images, weights and other information of the wild animals in the current monitoring area are obtained synchronously, improving the monitoring precision.
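The dispatch ("transfer") rule above is only characterized as depending on distance or on whether a wild animal target already appears in the current picture. A hypothetical sketch of such a rule follows; the class label, range limit, and function signature are assumptions, not taken from the patent:

```python
import math
from dataclasses import dataclass

@dataclass
class Disturbance:
    x: float      # disturbance-point coordinates from fiber-sensor localization
    y: float
    label: str    # classifier output of the entity information-vibration model

def should_dispatch(uav_xy, disturbance, target_in_frame, max_range_m=2000.0):
    """Retask the UAV when the disturbance is classified as an animal, lies
    within range, and the current camera frame shows no wildlife target yet."""
    dist = math.hypot(disturbance.x - uav_xy[0], disturbance.y - uav_xy[1])
    return disturbance.label == "animal" and dist <= max_range_m and not target_in_frame

# disturbance 500 m away, classified as an animal, UAV camera currently empty
print(should_dispatch((0.0, 0.0), Disturbance(300.0, 400.0, "animal"), False))
```

A production rule would likely also weigh battery margin and no-fly constraints before replanning the cruising route.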
As shown in fig. 7, S6 includes:
S61, respectively acquiring the sub-navigation-belt spliced images and the whole spliced image; the sub-navigation-belt spliced images comprise a sub-navigation-belt spliced image of the wild animal fusion image and a sub-navigation-belt spliced image of the footprint chain image; the whole spliced image comprises a whole spliced image of the wild animal fusion image and a whole spliced image of the footprint chain image.
S62, detecting the area coordinates of the wild animals in the original images of the sub-navigation-belt spliced images and the whole spliced image by using a regression algorithm, and segmenting the original images to obtain target area images.
S63, analyzing the original image and the target area image respectively by using a pre-trained dual-channel network recognition model to obtain two classification-probability results. The structure of the dual-channel network model is shown in fig. 8. The model is trained with the original images in the wild animal monitoring image database and the extracted target-detection-area ROI images; after training, the currently acquired wild animal fusion image and footprint chain image are segmented and detected, and the original image and the segmented image are identified separately, which greatly improves detection precision.
S64, fusing the two classification-probability results, feeding the fused result into a classifier, outputting a statistical report of the wild animal population information and individual information of the monitoring area, and uploading the statistical report to the monitoring terminal.
The statistical report includes the original images and ROI images of the wild animal fusion image and the footprint chain image; the relevant wild animal population information and individual information are annotated on the original images and ROI images, which are stored to generate an original-image database and an ROI-image database.
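S64's probability fusion is not specified beyond "fusing the probabilities of the two classification results". One common choice, shown here purely as an assumption, is a weighted average of the two channels' softmax outputs followed by an argmax:

```python
import numpy as np

def fuse_two_channel(p_original, p_roi, weight=0.5):
    """Fuse the class-probability vectors of the two channels (full-image
    branch and ROI branch) by weighted averaging, then pick the top class."""
    fused = weight * np.asarray(p_original) + (1 - weight) * np.asarray(p_roi)
    return fused, int(np.argmax(fused))

# hypothetical species set and softmax outputs for one detection
classes = ["amur_tiger", "leopard", "sika_deer"]
p_orig = [0.50, 0.30, 0.20]   # softmax on the full original image
p_roi  = [0.35, 0.60, 0.05]   # softmax on the segmented ROI

fused, idx = fuse_two_channel(p_orig, p_roi)
print(classes[idx], fused.round(3).tolist())
```

The weight could itself be learned, or the fused vector passed to a further classifier as the claim describes; equal weighting is the simplest baseline.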
The embodiments in this description are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and for the parts that are the same or similar among the embodiments, reference may be made to one another. As the device disclosed in an embodiment corresponds to the method disclosed in an embodiment, its description is kept brief; for relevant details, refer to the description of the method.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (8)

1. A wild animal intelligent monitoring method based on multi-dimensional information fusion is characterized by comprising the following steps:
carrying a double-light coaxial camera by using an unmanned aerial vehicle, and acquiring a visible light image and an infrared thermal image of a monitoring area according to a preset air route;
carrying out fusion processing on the visible light image and the infrared thermal image to obtain a wild animal fusion image;
acquiring the footprints of wild animals in a monitored area, tracking the trend of the footprints of the wild animals by using an unmanned aerial vehicle automatic navigation algorithm, and acquiring a footprint chain image;
respectively carrying out split-flight-zone splicing and integral splicing on the wild animal fusion image and the footprint chain image by using a graphic workstation to respectively obtain a split-flight-zone spliced image and an integral spliced image;
pre-burying a vibration sensor network in a monitoring area, carrying out real-time online identification and classification on vibration signals acquired by the vibration sensor network, and uploading classification results to a monitoring terminal;
integrally analyzing the spliced images of the sub-navigation belts and the whole spliced image by using a pre-trained dual-channel network identification model to obtain a wild animal species classification result in a monitoring area, and uploading the wild animal species classification result to the monitoring terminal;
carrying out fusion processing on the visible light image and the infrared thermal image to obtain a wild animal fusion image, wherein the fusion processing comprises the following steps:
carrying out wild animal detection on the visible light image to obtain a first detection area of the wild animal;
carrying out wild animal detection on the infrared thermal image to obtain a second detection area of the wild animal;
comparing the first detection area of the wild animal with the second detection area of the wild animal, and judging whether the coincident area of the two exceeds a preset coincidence area threshold value; if so, determining that a wild animal target is detected;
image registering the visible light image and the infrared thermal image comprising a wildlife target;
inputting the registered visible light image and infrared thermal image into a pre-trained fusion network, and outputting the wild animal fusion image;
wherein integrally analyzing the sub-navigation-belt spliced images and the whole spliced image by using the pre-trained dual-channel network identification model to obtain the wild animal species classification result in the monitoring area, and uploading the wild animal species classification result to the monitoring terminal, comprises the following steps:
respectively acquiring the spliced images of the sub-navigation belts and the whole spliced image;
detecting the area coordinates of the wild animals in the original images of the split navigation band spliced images and the whole spliced image by using a regression algorithm, and segmenting the original images to obtain target area images;
analyzing the original image and the target area image respectively by utilizing a pre-trained dual-channel network recognition model to obtain two classification-probability results;
and fusing the two classification-probability results, feeding the fused result into a classifier, outputting a statistical report of the wild animal population information and individual information of the monitoring area, and uploading the statistical report to the monitoring terminal.
2. The method for intelligently monitoring the wild animals based on the multi-dimensional information fusion of claim 1, wherein the wild animal detection is performed on the infrared thermal image to obtain a second detection area of the wild animals, and the method comprises the following steps:
carrying out temperature marking on the infrared thermal image to obtain a temperature marking area;
calculating, for the temperature marking areas, whether the temperature difference between adjacent areas is smaller than a preset threshold value; if it is smaller, the adjacent areas belong to the same target and are connected, and this detection is repeated cyclically until the temperature marking areas are completely connected, obtaining areas segmented according to the temperature marks;
segmenting the infrared thermal image by utilizing the maximum inter-class variance method;
and comparing the image segmented by the maximum inter-class variance method with the areas segmented according to the temperature marks, retaining the coincident parts and removing the non-coincident parts, to obtain the second detection area of the wild animal.
3. The intelligent wild animal monitoring method based on multi-dimensional information fusion of claim 1, wherein the double-optical coaxial camera comprises an optical camera and an infrared imaging camera.
4. The method for intelligently monitoring the wild animals based on the multi-dimensional information fusion as claimed in claim 1, wherein the steps of obtaining the footprints of the wild animals in the monitored area, tracking the trend of the footprints of the wild animals by using an unmanned aerial vehicle automatic navigation algorithm, and obtaining a footprint chain image comprise:
acquiring the footprints of wild animals in a monitoring area by using a double-optical coaxial camera, and identifying the types of the footprints by using a pre-constructed footprint image library;
controlling the unmanned aerial vehicle to track the trend of the type of footprints according to the preset height and speed by using an unmanned aerial vehicle automatic navigation algorithm, and automatically aerial-shooting to obtain a plurality of footprint images;
and splicing the footprint images to obtain the footprint chain image.
5. The wild animal intelligent monitoring method based on multi-dimensional information fusion of claim 4, wherein the pre-constructed footprint image library comprises a sample joint representation dictionary consisting of a sole dictionary and a heel dictionary; the sample joint representation dictionary is provided with a sole label and a heel label which respectively correspond to the sole dictionary and the heel dictionary;
and the pre-constructed footprint image library sequentially compares the currently acquired wild animal footprint against the sole tags and the heel tags in the sample joint representation dictionary, and the type of the wild animal footprint is determined when the overall similarity exceeds 90%.
6. The intelligent wild animal monitoring method based on multi-dimensional information fusion as claimed in claim 1, wherein a vibration sensor network is pre-embedded in a monitoring area, and real-time online identification and classification of vibration signals acquired by the vibration sensor network are performed, and the method comprises the following steps:
processing the vibration signal of the vibration sensor network by using short-time passband energy conversion to realize disturbance positioning;
extracting disturbance continuous time and signal distribution of a region near a disturbance point, and acquiring a frequency-space-time image by using short-time Fourier transform to acquire a disturbance signal;
and performing online identification and classification on the disturbance signals by using a pre-trained entity information-vibration signal relation model, and outputting a classification result.
7. The intelligent wild animal monitoring method based on multi-dimensional information fusion as claimed in claim 6, wherein the vibration signal of the vibration sensor network is processed by using short-time passband energy transformation to realize disturbance localization, further comprising:
uploading the position of the disturbance point to the monitoring terminal;
the monitoring terminal plans a new cruising route according to the position of the disturbance point and the current position of the unmanned aerial vehicle carrying the double-light coaxial camera;
the unmanned aerial vehicle carries the double-light coaxial camera to fly to the position of the disturbance point according to the new cruising route for aerial photography, and an image of an area near the disturbance point is obtained;
and synchronously uploading the images of the area near the disturbance point and the classification result to the monitoring terminal.
8. The wild animal intelligent monitoring method based on multi-dimensional information fusion according to claim 1, wherein the split flight band spliced image comprises a split flight band spliced image of the wild animal fusion image and a split flight band spliced image of the footprint chain image; the whole spliced image comprises a whole spliced image of the wild animal fusion image and a whole spliced image of the footprint chain image.
CN202110145630.8A 2021-02-02 2021-02-02 Wild animal intelligent monitoring method based on multi-dimensional information fusion Active CN112904778B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110145630.8A CN112904778B (en) 2021-02-02 2021-02-02 Wild animal intelligent monitoring method based on multi-dimensional information fusion


Publications (2)

Publication Number Publication Date
CN112904778A CN112904778A (en) 2021-06-04
CN112904778B true CN112904778B (en) 2022-04-15

Family

ID=76122552

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110145630.8A Active CN112904778B (en) 2021-02-02 2021-02-02 Wild animal intelligent monitoring method based on multi-dimensional information fusion

Country Status (1)

Country Link
CN (1) CN112904778B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102685391A (en) * 2012-04-25 2012-09-19 安徽师范大学 Shooting device for outdoor large wild animals
CN103546728A (en) * 2013-11-14 2014-01-29 北京林业大学 Wild animal field monitoring device
CN108469762A (en) * 2018-03-20 2018-08-31 中南林业科技大学 A kind of intelligent pet ring, pet monitoring system and monitoring method
CN108709633A (en) * 2018-08-29 2018-10-26 中国科学院上海光学精密机械研究所 Distributed optical fiber vibration sensing intelligent and safe monitoring method based on deep learning
CN110297450A (en) * 2019-07-05 2019-10-01 智飞智能装备科技东台有限公司 A kind of UAV Intelligent monitor supervision platform
CN110751675A (en) * 2019-09-03 2020-02-04 平安科技(深圳)有限公司 Urban pet activity track monitoring method based on image recognition and related equipment
CN110769194A (en) * 2019-10-10 2020-02-07 四川瑞霆电力科技有限公司 Heat source monitoring and identifying method and system based on double-light fusion
CN111765974A (en) * 2020-07-07 2020-10-13 中国环境科学研究院 Wild animal observation system and method based on miniature refrigeration thermal infrared imager


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
《无人机遥感调查黄河源玛多县岩羊数量及分布》 ("UAV remote sensing survey of the number and distribution of blue sheep in Maduo County at the Yellow River source"); Guo Xingjian et al.; Journal of Natural Resources (《自然资源学报》); 2019-05-12; pp. 1054-1065 *
《有蹄类等哺乳动物的大样方监测技术》 ("Large-quadrat monitoring techniques for ungulates and other mammals"); Sun Haiyi et al.; Forestry Science & Technology (《林业科技》); 2011-12-31; pp. 33-35 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant