CN114743108B - Grassland mouse condition identification and quantification method and mouse condition recorder - Google Patents


Publication number
CN114743108B
CN114743108B (granted publication of application CN202210454046.5A)
Authority
CN
China
Prior art keywords: mouse; module; deep learning; grassland; learning model
Prior art date
Legal status (the status listed is an assumption, not a legal conclusion)
Active
Application number
CN202210454046.5A
Other languages
Chinese (zh)
Other versions
CN114743108A (en)
Inventor
王大伟
刘升平
林克剑
刘晓辉
张�杰
郭秀明
王宁
杜波波
张福顺
李宁
宋英
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Institute of Plant Protection of Chinese Academy of Agricultural Sciences
Agricultural Information Institute of CAAS
Original Assignee
Institute of Plant Protection of Chinese Academy of Agricultural Sciences
Agricultural Information Institute of CAAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Plant Protection of Chinese Academy of Agricultural Sciences and Agricultural Information Institute of CAAS
Priority claimed from application CN202210454046.5A
Publication of CN114743108A
Application granted; publication of CN114743108B
Legal status: Active

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01D: MEASURING NOT SPECIALLY ADAPTED FOR A SPECIFIC VARIABLE; ARRANGEMENTS FOR MEASURING TWO OR MORE VARIABLES NOT COVERED IN A SINGLE OTHER SUBCLASS; TARIFF METERING APPARATUS; MEASURING OR TESTING NOT OTHERWISE PROVIDED FOR
    • G01D 21/00: Measuring or testing not otherwise provided for
    • G01D 21/02: Measuring two or more variables by means not covered by a single other subclass
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/24: Classification techniques
    • G06F 18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2415: Classification techniques based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G06F 18/25: Fusion techniques
    • G06F 18/253: Fusion techniques of extracted features
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G06N 3/08: Learning methods

Abstract

The invention relates to the technical field of image recognition and discloses a method for identifying and quantifying the grassland mouse situation based on a deep learning model, together with a mouse situation recorder. The method comprises the following steps: (1) constructing a grassland mouse situation feature data set; (2) training the deep learning model to extract rat hole and rat mound features and to classify rodent species; (3) extracting features from the mouse situation image with the models produced by the training in step (2) to obtain the mouse situation features.

Description

Grassland mouse condition identification and quantification method and mouse condition recorder
Technical Field
The invention relates to the technical field of image recognition, and in particular to a method for identifying and quantifying the grassland mouse situation based on a deep learning model, and to a mouse situation recorder.
Background
Grassland rats are among the most serious biological disasters. Pest rats gnaw the pasture and their burrowing destroys grassland vegetation, causing desertification and degradation of the grassland, severely reducing pasture yield, harming the animal-husbandry economy, damaging the grassland ecology and aggravating desertification. Grassland rats also spread more than 60 human and livestock diseases, such as plague, hemorrhagic fever and forest encephalitis, seriously threatening the life and health of farmers and herdsmen. Grassland rats usually breed seasonally: breeding starts every spring and the population density peaks in autumn. To control grassland rats effectively, the relevant departments need to know the dominant species and the dynamics of their density in time.
At present, the grassland mouse situation is monitored mainly by traditional manual means such as the burrow-plugging census, trapping and visual counting. These methods are time-consuming and labour-intensive, suffer large errors over wide grassland areas, are inefficient, and depend heavily on the experience of technicians. The accuracy, real-time availability and timeliness of the collected data are therefore hard to guarantee.
Although the prior art includes machine-vision image acquisition devices, the collected mouse situation images still require manual interpretation and identification, which is inefficient and cannot deliver mouse situation information quickly. In addition, systems based on intelligent recognition of live images of pest rats depend on the equipment's ability to attract the rats, so a consistent survey quality across different regions is difficult to guarantee.
Disclosure of Invention
Therefore, a grassland mouse situation identification and quantification method based on a deep learning model, and a mouse situation recorder, are needed to solve the low efficiency and poor identification accuracy of conventional grassland mouse situation surveys and their heavy reliance on the professional knowledge of the surveyors.
To this end, the invention provides a grassland mouse situation identification and quantification method based on a deep learning model, comprising the following steps:
(1) Constructing a grassland mouse situation feature data set and manually labeling its targets; the data set is used for rat hole and rat mound feature extraction and rodent classification training;
(2) Training the deep learning model on the grassland mouse situation feature data set for feature extraction and rodent classification, yielding two kinds of models: a rat hole and rat mound target detection model, which addresses accurate counting of rat holes and rat mounds, and a rodent classification detection model, which infers rodent classification information from the external features of the rat holes and rat mounds and the regional rodent distribution;
(3) Inputting the mouse situation image to be identified into the rat hole and rat mound target detection model and the rodent classification detection model, extracting the features of the image, and finally obtaining the mouse situation features, which comprise the number and positions of rat holes and rat mounds and the rodent species classification within a survey transect or sample plot.
A mouse situation recorder comprising: an acquisition module, a determination module, a calculation module and a display module. The acquisition module comprises a high-definition camera; the determination module comprises a GPS positioning module, a ranging module, a light intensity sensor and a gyroscope; the calculation module comprises a core processor and an algorithm processor; the display module comprises a storage module, a wireless communication module, a high-brightness screen, a capacitive touch screen and a battery module. All components of the acquisition, determination and display modules are electrically connected to the core processor.
The core processor and the algorithm processor operate together to implement the deep-learning-based grassland mouse situation identification and quantification method.
The acquisition module collects and inputs mouse situation image information and environment information; the determination module acquires spatial distribution information of the mouse situation image, such as angle and distance; the calculation module automatically processes the mouse situation image in real time and obtains the rat hole distribution information; and the display module displays the original picture, the analysis picture and the distribution picture of the mouse situation image acquired in real time.
the technical scheme has the following beneficial effects:
the grassland mouse situation recognition and quantification method provided by the invention improves the characteristic expression capacity of the rat hole by adopting a multi-layer characteristic fusion mode, and improves the accuracy of rat hole detection. The low-level features are more concerned with detailed information, lack high-level semantic information, and the opposite is true for high-level semantics. According to the scheme, the characteristics of different depth layers are fused, firstly, the characteristics of a high layer are fused downwards layer by layer in an up-sampling mode, semantic information of a characteristic layer of a lower layer is enriched, then the characteristics of the lower layer are fused layer by layer from bottom to top in a down-sampling mode, the detailed characteristic expression capacity of the high layer is improved, the mouse situation characteristic identification precision is improved, and accurate statistics of the mouse situation is guaranteed.
The mouse situation recorder of the invention takes an advanced CPU (central processing unit) + an edge GPU (arithmetic processing unit) as a core, and is integrated into a novel intelligent paper-free rat hole identification system by a large-scale integrated circuit, a large-capacity FLASH memory, an intelligent picture identification and a high-resolution liquid crystal display. The intelligent high-definition photographing gyroscope has the functions of high-definition photographing, intelligent recognition, GPS positioning, gyroscope leveling, ultrasonic ranging and the like, and has the characteristics of stable operation, low power consumption, high precision, portability, convenience, high reliability and the like.
Drawings
Fig. 1 is a structural diagram of a convolutional neural network of the deep learning model described in embodiment 1.
Fig. 2 is a structural diagram of a plurality of feature extraction modules described in embodiment 1.
Fig. 3 is a structural diagram of the attention module S according to embodiment 1.
Fig. 4 is a connection diagram of the electronic components of the mouse situation recorder in embodiment 2.
Detailed Description
To explain technical contents, structural features, and objects and effects of the technical solutions in detail, the following detailed description is given with reference to the accompanying drawings in conjunction with the embodiments.
Embodiment 1 provides a grassland mouse situation identification and quantification method based on a deep learning model, comprising the following steps:
1. Constructing a grassland mouse situation feature data set. The mouse situation covers rat holes, rat mounds and rodent species. The data set is built for rat hole and rat mound feature extraction and for species detection model training: a handheld mouse situation recorder photographs rat hole and rat mound image information to acquire rat hole target detection data and rodent species detection data, and the rat hole, rat mound and species data are annotated through target detection and key point detection.
2. Training of the deep learning model
The deep learning model is trained on the grassland mouse situation feature data set for rat hole and rat mound feature extraction and for rodent classification detection, yielding respectively a rat hole and rat mound target detection model and a rodent classification detection model.
3. Mouse situation recognition
Mouse situation image information is collected with the handheld mouse situation recorder, and the rat hole and rat mound target detection model and the rodent classification detection model extract the mouse situation features in the image, including the numbers of rat holes and rat mounds and the rodent species classification. The mouse situation features identified in the image are then counted; they comprise the number and positions of the rat holes and rat mounds and the rodent species within the survey transect or sample plot.
4. Mouse situation statistics
The mouse situation features also include the survey transect length or sample plot area; the monitored area is computed from these, and the densities of rat holes and rat mounds per unit of monitored area are then calculated.
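The density statistics of step 4 can be sketched as follows. This is a minimal illustration: the function name, the strip-transect geometry and all numeric values are assumptions, not taken from the patent; only the idea (counts divided by the computed monitored area) comes from the text.

```python
def rodent_density(n_holes, n_mounds, transect_length_m, transect_width_m):
    """Convert rat hole / rat mound counts on a strip transect into
    densities per hectare (1 ha = 10,000 m^2)."""
    area_ha = transect_length_m * transect_width_m / 10_000.0
    return n_holes / area_ha, n_mounds / area_ha

# e.g. 37 holes and 12 mounds on a 500 m x 10 m transect (0.5 ha)
hole_density, mound_density = rodent_density(37, 12, 500.0, 10.0)  # -> 74.0, 24.0
```

For a sample plot rather than a transect, the area would simply be measured directly instead of derived from length and width.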
When the grassland mouse situation feature data set is constructed, the pictures are collected across multiple seasons, different weather conditions and multiple illumination conditions, so that the model can learn rat holes and rat mounds in different environments, improving its generalization. The rat holes and rat mounds are then annotated manually; the annotation comprises position information (the coordinates of the upper-left and lower-right corners of the bounding rectangle) and type information (a natural-number class index).
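One annotation record under this scheme might look as follows. The key names and the file name are invented for illustration; only the corner-coordinate and natural-number class-index conventions come from the text.

```python
# One manually labeled image: each target is a rectangle given by its
# upper-left (x1, y1) and lower-right (x2, y2) corners plus a class index.
annotation = {
    "image": "grassland_0001.jpg",  # hypothetical file name
    "targets": [
        {"x1": 120, "y1": 88, "x2": 184, "y2": 150, "class_id": 0},  # 0 = rat hole
        {"x1": 300, "y1": 40, "x2": 420, "y2": 160, "class_id": 1},  # 1 = rat mound
    ],
}
```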
During training of the deep learning model, the image data are divided into a training set, a validation set and a test set according to a ratio of 4.
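A split of this kind can be sketched as follows. The 4:1:1 proportion used below is purely an illustrative assumption, since the exact ratio is truncated in the text.

```python
def split_dataset(items, ratios):
    """Partition a list of samples into train/validation/test subsets
    proportionally to the given ratio triple."""
    items = list(items)
    total = sum(ratios)
    n_train = len(items) * ratios[0] // total
    n_val = len(items) * ratios[1] // total
    return (items[:n_train],
            items[n_train:n_train + n_val],
            items[n_train + n_val:])

train, val, test = split_dataset(range(60), (4, 1, 1))  # 40 / 10 / 10 samples
```

In practice the items would be shuffled before splitting so that each subset covers all seasons and illumination conditions.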
After the rat hole and rat mound target detection model and the rodent classification detection model are trained, they are adapted and converted before formal use: terminal devices require various model formats and the models are trained in various frameworks, so a network model trained on a desktop computer must be converted into a format that runs normally on the terminal device. Specifically, a relatively mature deep learning model framework serves as an intermediary, and the trained model is converted into the format required by the terminal device.
After the neural network training was finished, the terminal device was tested in different types of grassland environment. The results show that the device runs stably and reliably and that the timeliness and accuracy of the algorithm meet the requirements; in practical application the identification, quantification and classification of rat holes and rat mounds are greatly improved, and so is the level of mouse situation surveying.
The process of mouse situation recognition and statistics comprises the following steps:
(1) Acquiring the mouse situation image to be identified with the high-definition camera;
(2) Preprocessing: image stitching and image normalization of the acquired images;
(3) Determining the spatial distribution information of the mouse situation image to be identified;
(4) Inputting the mouse situation image into the deep learning model, whose convolutional neural network extracts the features of the image and finally obtains the mouse situation features;
(5) Counting the mouse situation features identified in the image and, from them and the monitored area, calculating the rodent density per unit area of the monitored region.
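The normalization part of step (2) can be sketched with NumPy as follows. The per-channel mean and standard deviation values are illustrative assumptions; the patent does not specify them.

```python
import numpy as np

def normalize_image(img_uint8, mean, std):
    """Step (2) image normalization: scale pixel values to [0, 1],
    then standardize each channel with the given mean and std."""
    x = img_uint8.astype(np.float32) / 255.0
    return (x - mean) / std

img = np.full((4, 4, 3), 128, dtype=np.uint8)  # dummy 4x4 RGB image
out = normalize_image(img, mean=np.array([0.5, 0.5, 0.5]),
                      std=np.array([0.25, 0.25, 0.25]))
```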
As shown in fig. 1, the convolutional neural network of the deep learning model comprises a plurality of feature extraction modules and a feature analysis fusion network. The feature extraction modules are arranged layer by layer: high-level features are fused downward layer by layer through upsampling, and the lower-level features are then fused from bottom to top through downsampling. The feature analysis fusion network splices the features of different scales fused by the feature extraction modules to finally obtain the mouse situation features.
Specifically, the feature extraction modules are, from the shallowest to the deepest layer, feature extraction module A, feature extraction module B, feature extraction module C and feature extraction module D.
As shown in fig. 2, feature extraction module A comprises, in order: a 3 × 3, 64/1 convolution; a 1 × 1, 128/1 convolution; the spatial attention module S; and a 3 × 3, 64/1 convolution.
Feature extraction module B comprises, in order: 3 × 3, 128/1; 1 × 1, 256/1; the spatial attention module S; 3 × 3, 128/1.
Feature extraction module C comprises, in order: 3 × 3, 256/1; 1 × 1, 512/1; the spatial attention module S; 3 × 3, 256/1.
Feature extraction module D comprises, in order: 3 × 3, 512/1; 1 × 1, 1024/1; the spatial attention module S; 3 × 3, 512/1.
In a convolution denoted 3 × 3, 512/1, "3 × 3" is the length and width of the convolution kernel, and "512/1" gives the number of kernels (output channels) and the stride.
In the figure, feature extraction module D produces the representation feature D1; D1 is upsampled and added to the feature C extracted by module C to give C1; C1 is upsampled and added to feature B to give B1; and B1 is upsampled and added to feature A to give A1. A2 equals A1; A2 is downsampled and added to B1 to give B2; B2 is downsampled and added to C1 to give C2; and C2 is downsampled and added to D1 to give D2. A2, B2, C2 and D2 serve as features of different scales, from which the feature analysis fusion network obtains the number and positions of the rat holes and the rodent species classification.
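The top-down then bottom-up fusion just described can be sketched in NumPy on single-channel toy maps. Nearest-neighbour upsampling and stride-2 subsampling stand in for the network's learned resampling; all values are toy data, not the patent's.

```python
import numpy as np

def upsample2x(x):
    """Nearest-neighbour 2x upsampling."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

def downsample2x(x):
    """Stride-2 subsampling."""
    return x[::2, ::2]

# Toy single-channel features: A is the shallowest (highest resolution),
# D the deepest (lowest resolution).
A, B, C, D = (np.ones((16, 16)), np.ones((8, 8)),
              np.ones((4, 4)), np.ones((2, 2)))

D1 = D                          # top-down pass: push semantics downward
C1 = C + upsample2x(D1)
B1 = B + upsample2x(C1)
A1 = A + upsample2x(B1)

A2 = A1                         # bottom-up pass: push detail upward
B2 = B1 + downsample2x(A2)
C2 = C1 + downsample2x(B2)
D2 = D1 + downsample2x(C2)
# A2, B2, C2, D2 are the multi-scale features handed to the fusion network
```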
Because a handheld terminal is held at varying distances from the rat holes, the holes are captured at different scales: the closer to the top of the picture, the farther from the device and the smaller the hole appears; the closer to the bottom, the nearer to the device and the larger the hole appears. Traditional image processing cannot effectively recover such spatial distribution and distance information.
As shown in fig. 3, the spatial attention module S autonomously learns the spatial distances in the picture, reducing the target-scale imbalance caused by inconsistent distances from the camera and improving rat hole detection. It operates as follows: the input feature, which has c channels, is average-pooled and max-pooled along the channel dimension and the results are concatenated into a two-channel feature; a 7 × 7 convolution reduces this to a single channel; and finally a sigmoid activation performs the nonlinear transformation that yields the weight of each spatial position.
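That pooling, convolution and sigmoid pipeline can be sketched in NumPy. This is a naive reference implementation under assumptions: the learned 7 × 7 kernel is supplied by the caller, and a zero kernel is used below only to exercise the code path (it yields a uniform weight of 0.5 everywhere).

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def spatial_attention(x, kernel):
    """Spatial attention S. x: (c, h, w) input feature; kernel: (2, 7, 7)
    weights of the 7x7 convolution mapping the pooled pair to one channel."""
    avg = x.mean(axis=0)                  # channel-wise average pooling
    mx = x.max(axis=0)                    # channel-wise max pooling
    pooled = np.stack([avg, mx])          # concatenated two-channel feature
    pad = np.pad(pooled, ((0, 0), (3, 3), (3, 3)))
    h, w = avg.shape
    att = np.empty((h, w))
    for i in range(h):                    # naive 7x7 convolution, stride 1
        for j in range(w):
            att[i, j] = np.sum(pad[:, i:i + 7, j:j + 7] * kernel)
    weights = sigmoid(att)                # per-position spatial weight
    return x * weights                    # reweight every channel

x = np.random.default_rng(0).standard_normal((8, 10, 10))
out = spatial_attention(x, np.zeros((2, 7, 7)))   # zero kernel -> weight 0.5
```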
The backbone of the convolutional neural network is a ResNet residual network, and the four structures in fig. 2 are all residual structures. A residual module connects low-level features directly to high-level features, reducing gradient loss, which helps deepen the network and extract higher-level abstract features.
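The residual shortcut this paragraph describes is simply output = F(x) + x, which can be shown in one line (the transform used below is an arbitrary stand-in for the module's convolutions):

```python
import numpy as np

def residual_block(x, transform):
    """Identity shortcut: the input is added to the transformed features,
    so gradients can bypass the transformation entirely."""
    return transform(x) + x

y = residual_block(np.array([1.0, 2.0]), lambda v: 0.1 * v)  # -> [1.1, 2.2]
```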
The results of the feature analysis fusion network are evaluated by a loss function L.
The loss function L of the deep learning model is:

L = L_conf + α·L_loc    (formula 1)

where L_conf is the confidence loss, L_loc is the predicted localization loss of the rat hole, and the balancing coefficient α compensates for the difference in scale between the two.
The localization loss uses the Smooth L1 function, as in formula 2:

smooth_L1(x) = 0.5·x², if |x| < 1; |x| - 0.5, otherwise    (formula 2)

For each prediction box predicted as a positive example, the difference between its predicted offset and the actual offset is taken as the localization loss of that search box, and the losses of all positive examples together give the overall localization loss, as in formula 3:

L_loc = Σ_{i∈pos} Σ_{m∈{cx,cy,w,h}} smooth_L1( l_i^m - ĝ_i^m )    (formula 3)

where pos denotes the search boxes predicted as positive examples, l_i^m is the predicted offset between the actual target and search box i, ĝ_i^m is the actual offset between the target and search box i, and cx, cy, h, w are the position offsets and size offsets of the actual target box relative to the search box.
The confidence loss uses the softmax loss, as in formula 4:

L_conf = - Σ_{i∈pos} log(ĉ_i^q) - Σ_{i∈neg} log(ĉ_i^0)    (formula 4)

where ĉ_i^j is the confidence of search box i for class j, and q is the class of the target box that search box i is to predict.
The feature analysis fusion network comprises an output activation function, specifically a ReLU function.
The final output of the deep learning model is a three-dimensional tensor [w′, h′, m]: w′ and h′ give the grid of search-box positions (w′ × h′ values), and m is the output length for each search box, with m = p + 4, where p is the number of target classes and the four additional values are the box offsets.
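So with, say, p = 2 classes (rat hole and rat mound, an illustrative choice) on a 20 × 20 search-box grid (also illustrative), the output tensor has shape [20, 20, 6]:

```python
import numpy as np

p = 2                      # number of target classes (illustrative)
w_, h_ = 20, 20            # search-box grid size (illustrative)
m = p + 4                  # p class scores + 4 box offsets per search box
output = np.zeros((w_, h_, m))
```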
Embodiment 2
As shown in fig. 4, a mouse situation recorder comprises an acquisition module, a determination module, a calculation module and a display module. The acquisition module comprises a high-definition camera; the determination module comprises a GPS positioning module, a light intensity sensor, a ranging module and a gyroscope; the calculation module comprises a core processor and an algorithm processor; the display module comprises a storage module, a wireless communication module, a high-brightness screen, a capacitive touch screen and a battery module; and all components of the acquisition, determination and display modules are electrically connected to the core processor.
The recorder may further comprise a data interface, which may include an audio interface, an Ethernet interface and a USB interface.
The core processor and the algorithm processor operate together to implement the deep-learning-based grassland mouse situation identification and quantification method of embodiment 1.
The acquisition module collects and inputs the mouse situation image information and environment information; the determination module acquires spatial distribution information of the mouse situation image, such as angle and distance; the calculation module automatically processes the mouse situation image in real time and obtains the rat hole distribution information; and the display module displays the original picture, the analysis picture and the distribution picture of the mouse situation image acquired in real time.
The finished mouse situation recorder is handheld. The deep learning model matches the mouse situation features in the image against the standards and counts and outputs them; with the accumulated data the system learns autonomously, raising the identification precision above 95%, and the analyzed results can be uploaded through the wireless communication module to a server for further processing. The handheld mouse situation recorder provides rat-density measurement data for rodent monitoring; its results are comparable with those of traditional survey methods while not being limited by the use environment, thus overcoming the time- and labour-consuming, inefficient and inaccurate manual counting of traditional methods.
The invention detects rat holes and rat mounds with a deep neural network and computes the results in real time. The neural network learns the visual features of large numbers of rat holes and rat mounds in different scenes, including their colour, shape, texture, size and surroundings, and judges rat holes and rat mounds by combining these features, achieving high accuracy. The intelligent rodent-damage detection equipment is also highly integrated, convenient to use, low-cost and easy to popularize.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or terminal. Without further limitation, an element defined by the phrases "comprising … …" or "comprising … …" does not exclude the presence of additional elements in a process, method, article, or terminal that comprises the element. Further, herein, "greater than," "less than," "more than," and the like are understood to exclude the present numbers; the terms "above", "below", "within" and the like are to be understood as including the number.
Although embodiments have been described, those skilled in the art can, once the basic inventive concept is known, make other variations and modifications to them. The above embodiments are therefore only examples of the invention and do not limit its scope; all equivalent structures or equivalent processes based on the contents of this specification and drawings, applied directly or indirectly in any related technical field, fall within the scope of the invention.

Claims (6)

1. A grassland mouse situation recognition and quantification method based on a deep learning model is characterized by comprising the following steps:
(1) Constructing a grassland mouse condition characteristic data set, and manually labeling targets of the grassland mouse condition characteristic data set, wherein the grassland mouse condition characteristic data set is used for characteristic extraction of mouse holes and mouse hills and mouse classification training;
(2) Carrying out characteristic extraction training and mouse classification training on the rat hole and the rat dune of the deep learning model through a grassland rat situation characteristic data set to respectively obtain a rat hole target detection model, a rat dune target detection model and a mouse classification detection model;
(3) Inputting the mouse situation image to be identified into the model generated by training in the step (2), extracting the characteristics of the mouse situation image, and finally quantifying to obtain the mouse situation characteristics, wherein the mouse situation characteristics comprise the number, the positions and the mouse classification of mouse holes and mouse hills in a survey sample line or a sample plot,
the convolutional neural network of the deep learning model comprises a plurality of feature extraction modules and a feature analysis fusion network, wherein the feature extraction modules are arranged layer by layer, high-level features are fused layer by layer in an up-sampling mode and downwards, then lower-level features are fused layer by layer from bottom to top through down-sampling, the feature analysis fusion network is used for splicing a plurality of features with different scales fused by the feature extraction modules and finally obtaining the mouse situation features through quantification,
the plurality of feature extraction modules comprise, from the high layer to the low layer, a feature extraction module A, a feature extraction module B, a feature extraction module C and a feature extraction module D,
each of the feature extraction modules A, B, C and D is internally provided with a spatial attention module S, arranged between the second convolution kernel and the third convolution kernel, for independently learning the spatial information of the image,
the spatial attention module S operates as follows: the input feature, which contains c channels, is average-pooled and max-pooled along the channel dimension, and the two results are concatenated to obtain a two-channel feature; a 7x7 convolution then reduces this to a single channel; finally, a sigmoid activation function applies a nonlinear transformation to obtain the weight information of each spatial position;
the results of the feature analysis fusion network are subjected to error analysis by a loss function L,
the loss function L of the deep learning model is as follows:

L = L_conf + α·L_loc    (formula 1)

wherein L_conf denotes the confidence loss and L_loc denotes the predicted localization loss of the mouse holes; because the two terms differ in scale, an adjustment coefficient α is used to balance them,
the localization loss adopts a Smooth L1 loss function, as shown in formula 2:

smooth_L1(x) = 0.5·x²,  if |x| < 1;  smooth_L1(x) = |x| − 0.5, otherwise    (formula 2)

the difference between the predicted offset and the actual offset of each prediction box predicted as a positive example is calculated as the localization loss of that prediction box, and the localization losses of all positive examples are summed as the overall localization loss, as shown in formula 3:

L_loc(l, g) = Σ_{i∈pos} Σ_{m∈{cx,cy,h,w}} smooth_L1(l_i^m − ĝ_i^m)    (formula 3)

where pos denotes the set of search boxes predicted as positive examples, l_i^m denotes the predicted offset information between the actual target and search box i, ĝ_i^m denotes the actual offset information between the target and search box i, and cx, cy, h, w denote respectively the position offsets and size offsets of the actual target box relative to the search box,
the confidence loss adopts a softmax loss function, as shown in formula 4:

L_conf = −Σ_{i∈pos} log ĉ_i^q,  with  ĉ_i^j = exp(c_i^j) / Σ_p exp(c_i^p)    (formula 4)

wherein ĉ_i^j denotes the confidence of search box i for category j, and q denotes the category of the target box that search box i is to predict,
the mouse condition characteristics further comprise the length of the survey sample line or the area of the sample plot, and step (3) is followed by step (4), mouse condition statistics: performing statistics on the mouse condition characteristics identified in the mouse condition image, and calculating the monitored area from the survey sample line length or the sample plot area to obtain the unit mouse density of the monitored area.
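The spatial attention module S described in claim 1 (channel-wise average and max pooling, a small convolution down to one channel, then a sigmoid) can be sketched in plain Python. This is a minimal illustration, not the patented implementation: the claim specifies a 7x7 convolution, while the example below accepts any odd kernel size (the usage note uses a hypothetical 1x1 kernel for brevity), and `conv_weight`/`conv_bias` are illustrative parameter names.

```python
import math

def spatial_attention(x, conv_weight, conv_bias=0.0):
    """CBAM-style spatial attention on a feature map.

    x           : nested list of shape [c][h][w] (input feature, c channels)
    conv_weight : nested list of shape [2][k][k], a kernel reducing the
                  two pooled channels to a single attention channel
    Returns a reweighted feature map of the same shape as x.
    """
    c, h, w = len(x), len(x[0]), len(x[0][0])
    # Average-pool and max-pool across the channel dimension -> 2 maps of h x w
    avg = [[sum(x[ch][i][j] for ch in range(c)) / c for j in range(w)] for i in range(h)]
    mx = [[max(x[ch][i][j] for ch in range(c)) for j in range(w)] for i in range(h)]
    pooled = [avg, mx]
    # k x k convolution with zero padding, reducing 2 channels to 1
    k = len(conv_weight[0])
    pad = k // 2
    att = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            s = conv_bias
            for ch in range(2):
                for di in range(k):
                    for dj in range(k):
                        ii, jj = i + di - pad, j + dj - pad
                        if 0 <= ii < h and 0 <= jj < w:
                            s += conv_weight[ch][di][dj] * pooled[ch][ii][jj]
            # Sigmoid maps the response to a spatial weight in (0, 1)
            att[i][j] = 1.0 / (1.0 + math.exp(-s))
    # Reweight every input channel by the spatial attention map
    return [[[x[ch][i][j] * att[i][j] for j in range(w)]
             for i in range(h)] for ch in range(c)]
```

For a 1x1 input with value 2.0 and a kernel that passes the average channel through unchanged, the output is 2.0 scaled by sigmoid(2.0), i.e. the attention weight stays strictly between 0 and 1.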
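Formulas 1 to 4 of claim 1 (Smooth L1 localization loss over positive-example boxes plus a softmax confidence loss, combined with a balance coefficient α) can be written down directly. A minimal sketch, assuming one offset dictionary per positive box and a default α of 1.0 (the claims do not specify a value):

```python
import math

def smooth_l1(x):
    # Formula 2: 0.5*x^2 if |x| < 1, otherwise |x| - 0.5
    return 0.5 * x * x if abs(x) < 1 else abs(x) - 0.5

def localization_loss(pred_offsets, true_offsets):
    # Formula 3: sum of Smooth L1 over the (cx, cy, h, w) offsets
    # of every search box predicted as a positive example.
    loss = 0.0
    for p, g in zip(pred_offsets, true_offsets):
        for m in ("cx", "cy", "h", "w"):
            loss += smooth_l1(p[m] - g[m])
    return loss

def softmax_confidence_loss(logits, target_class):
    # Formula 4, for one positive box: negative log of the softmax
    # probability assigned to the true category q.
    exps = [math.exp(v) for v in logits]
    return -math.log(exps[target_class] / sum(exps))

def total_loss(conf, loc, alpha=1.0):
    # Formula 1: L = L_conf + alpha * L_loc; alpha balances the two scales.
    return conf + alpha * loc
```

With a single positive box whose only offset error is 0.5 in cx, the localization loss is 0.125; a two-class box with equal logits contributes log 2 of confidence loss.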
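The mouse condition statistics of step (4) reduce to simple arithmetic: counts divided by the monitored area derived from the sample line length or sample plot area. A sketch, in which `strip_width_m` is a hypothetical effective survey width for the sample-line case (the claims do not state one):

```python
def unit_mouse_density(n_holes, line_length_m, strip_width_m=2.0):
    """Mouse holes per hectare along a survey sample line.

    strip_width_m is an assumed effective width, not given in the claims;
    monitored area = line length x strip width.
    """
    area_ha = line_length_m * strip_width_m / 10000.0
    return n_holes / area_ha

def plot_mouse_density(n_holes, plot_area_m2):
    """Mouse holes per hectare for a sample plot of known area."""
    return n_holes / (plot_area_m2 / 10000.0)
```

For example, 40 detected holes along a 1000 m line with a 2 m assumed width gives 200 holes per hectare.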
2. The grassland mouse condition identification and quantification method based on a deep learning model as claimed in claim 1, wherein the backbone network of the convolutional neural network is a ResNet residual network.
3. The grassland mouse condition identification and quantification method based on a deep learning model as claimed in claim 1, wherein the feature analysis fusion network comprises an output activation function, and the output activation function is a ReLU function.
4. The grassland mouse condition identification and quantification method based on a deep learning model as claimed in claim 1, wherein the mouse condition image is preprocessed before being input into the deep learning model, the preprocessing comprising image stitching and image normalization.
5. The grassland mouse condition identification and quantification method based on a deep learning model as claimed in claim 1, wherein the deep learning model is trained by the following steps:
acquiring image data, and performing labeling and preprocessing operations on the image data, wherein the image data are divided into a training set, a verification set and a test set according to a ratio of 4;
and inputting the preprocessed image data into the deep learning model, and adjusting parameters in the deep learning model to obtain the trained deep learning model.
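The data preparation of claim 5 can be sketched as a deterministic shuffle-and-split. The claim's ratio is truncated to "4" in this text, so the (4, 3, 3) default below is purely an illustrative assumption, as are the function and parameter names:

```python
import random

def split_dataset(items, ratios=(4, 3, 3), seed=42):
    """Shuffle labelled images and split them into train/val/test subsets.

    ratios: assumed split proportions; the claim only preserves the
    leading "4" of its ratio, so (4, 3, 3) here is hypothetical.
    """
    rng = random.Random(seed)  # fixed seed for a reproducible split
    items = list(items)
    rng.shuffle(items)
    total = sum(ratios)
    n = len(items)
    n_train = n * ratios[0] // total
    n_val = n * ratios[1] // total
    return (items[:n_train],
            items[n_train:n_train + n_val],
            items[n_train + n_val:])
```

Splitting 100 labelled images under these assumed proportions yields subsets of 40, 30 and 30 images with no overlap.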
6. A mouse condition recorder, characterized by comprising: an acquisition module, a determination module, a calculation module and a display module, wherein the acquisition module comprises a high-definition camera; the determination module comprises a GPS positioning module, a ranging module, a light intensity sensor and a gyroscope; the calculation module comprises a core processor and an algorithm processor; the display module comprises a storage module, a wireless communication module, a high-brightness screen, a capacitive touch screen and a battery module; and all components in the acquisition module, the determination module and the display module are electrically connected with the core processor,
the core processor and the algorithm processor operate in coupled fashion to implement the grassland mouse condition identification and quantification method based on a deep learning model as claimed in any one of claims 1 to 5.
CN202210454046.5A 2022-04-24 2022-04-24 Grassland mouse condition identification and quantification method and mouse condition recorder Active CN114743108B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210454046.5A CN114743108B (en) 2022-04-24 2022-04-24 Grassland mouse condition identification and quantification method and mouse condition recorder

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210454046.5A CN114743108B (en) 2022-04-24 2022-04-24 Grassland mouse condition identification and quantification method and mouse condition recorder

Publications (2)

Publication Number Publication Date
CN114743108A CN114743108A (en) 2022-07-12
CN114743108B true CN114743108B (en) 2023-04-18

Family

ID=82284081

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210454046.5A Active CN114743108B (en) 2022-04-24 2022-04-24 Grassland mouse condition identification and quantification method and mouse condition recorder

Country Status (1)

Country Link
CN (1) CN114743108B (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109284735A (en) * 2018-10-17 2019-01-29 思百达物联网科技(北京)有限公司 Mouse condition monitoring method, device, processor and storage medium

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180007862A1 (en) * 2016-07-05 2018-01-11 The Governing Council Of The University Of Toronto Systems, methods and apparatus for rodent behavioural monitoring
CN108537130A (en) * 2018-03-15 2018-09-14 甘肃农业大学 A kind of Myospalax baileyi and Ochotona curzoniae based on miniature drone technology endanger monitoring method
CN110516535A (en) * 2019-07-12 2019-11-29 杭州电子科技大学 A kind of mouse liveness detection method and system and hygienic appraisal procedure based on deep learning
CN113313070A (en) * 2021-06-24 2021-08-27 华雁智能科技(集团)股份有限公司 Overhead transmission line defect detection method and device and electronic equipment
CN114202743A (en) * 2021-09-10 2022-03-18 湘潭大学 Improved fast-RCNN-based small target detection method in automatic driving scene

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109284735A (en) * 2018-10-17 2019-01-29 思百达物联网科技(北京)有限公司 Mouse condition monitoring method, device, processor and storage medium

Also Published As

Publication number Publication date
CN114743108A (en) 2022-07-12

Similar Documents

Publication Publication Date Title
CN110147771B (en) Sow lateral-lying posture real-time detection system based on sow key part and environment combined partition
CN112464971A (en) Method for constructing pest detection model
CN109325431B (en) Method and device for detecting vegetation coverage in feeding path of grassland grazing sheep
CN106844614A (en) A kind of floor plan functional area system for rapidly identifying
CN107679183A (en) Grader training data acquisition methods and device, server and storage medium
CN109886928A (en) A kind of target cell labeling method, device, storage medium and terminal device
CN114565826A (en) Agricultural pest and disease identification and diagnosis method, system and device
CN107862687A (en) A kind of early warning system for being used to monitor agricultural pest
Sun et al. Remote estimation of grafted apple tree trunk diameter in modern orchard with RGB and point cloud based on SOLOv2
CN113822185A (en) Method for detecting daily behavior of group health pigs
CN112861666A (en) Chicken flock counting method based on deep learning and application
CN114898405B (en) Portable broiler chicken anomaly monitoring system based on edge calculation
Xuesong et al. Aphid identification and counting based on smartphone and machine vision
CN115527130A (en) Grassland pest mouse density investigation method and intelligent evaluation system
Li et al. An intelligent monitoring system of diseases and pests on rice canopy
CN114743108B (en) Grassland mouse condition identification and quantification method and mouse condition recorder
CN109993071B (en) Method and system for automatically identifying and investigating color-changing forest based on remote sensing image
CN116229001A (en) Urban three-dimensional digital map generation method and system based on spatial entropy
CN109523509A (en) Detection method, device and the electronic equipment of wheat heading stage
CN114492657A (en) Plant disease classification method and device, electronic equipment and storage medium
Bai et al. Video target detection of East Asian migratory locust based on the MOG2-YOLOv4 network
CN114612898A (en) YOLOv5 network-based litchi fruit borer emergence rate detection method
Amemiya et al. Appropriate grape color estimation based on metric learning for judging harvest timing
CN112465821A (en) Multi-scale pest image detection method based on boundary key point perception
CN113947780A (en) Sika deer face recognition method based on improved convolutional neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant