CN112217979B - Self-adaptive low-power-consumption wild animal snapshot device and method based on Internet of things - Google Patents

Info

Publication number
CN112217979B
Authority
CN
China
Prior art keywords
main controller
data
infrared
controller mcu
convolution
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011090138.7A
Other languages
Chinese (zh)
Other versions
CN112217979A (en)
Inventor
江朝元
曹晓莉
彭鹏
陈露
范超
李靖
杨强
封强
喻贵柯
Current Assignee
Chongqing Intercontrol Electronics Co ltd
Original Assignee
Chongqing Intercontrol Electronics Co ltd
Priority date
Filing date
Publication date
Application filed by Chongqing Intercontrol Electronics Co., Ltd.
Priority to CN202011090138.7A
Publication of CN112217979A
Application granted
Publication of CN112217979B

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/56Cameras or camera modules comprising electronic image sensors; Control thereof provided with illuminating means
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01DMEASURING NOT SPECIALLY ADAPTED FOR A SPECIFIC VARIABLE; ARRANGEMENTS FOR MEASURING TWO OR MORE VARIABLES NOT COVERED IN A SINGLE OTHER SUBCLASS; TARIFF METERING APPARATUS; MEASURING OR TESTING NOT OTHERWISE PROVIDED FOR
    • G01D21/00Measuring or testing not otherwise provided for
    • G01D21/02Measuring two or more variables by means not covered by a single other subclass
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05BCONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B19/00Programme-control systems
    • G05B19/02Programme-control systems electric
    • G05B19/04Programme control other than numerical control, i.e. in sequence controllers or logic controllers
    • G05B19/042Programme control other than numerical control, i.e. in sequence controllers or logic controllers using digital processors
    • G05B19/0423Input/output
    • G05B19/0425Safety, monitoring
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/94Hardware or software architectures specially adapted for image or video understanding
    • G06V10/95Hardware or software architectures specially adapted for image or video understanding structured as a network, e.g. client-server architectures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • GPHYSICS
    • G08SIGNALLING
    • G08BSIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B13/00Burglar, theft or intruder alarms
    • G08B13/02Mechanical actuation
    • GPHYSICS
    • G08SIGNALLING
    • G08BSIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B13/00Burglar, theft or intruder alarms
    • G08B13/02Mechanical actuation
    • G08B13/14Mechanical actuation by lifting or attempted removal of hand-portable articles
    • GPHYSICS
    • G08SIGNALLING
    • G08BSIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B25/00Alarm systems in which the location of the alarm condition is signalled to a central station, e.g. fire or police telegraphic systems
    • G08B25/01Alarm systems in which the location of the alarm condition is signalled to a central station, e.g. fire or police telegraphic systems characterised by the transmission medium
    • G08B25/08Alarm systems in which the location of the alarm condition is signalled to a central station, e.g. fire or police telegraphic systems characterised by the transmission medium using communication transmission lines
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16YINFORMATION AND COMMUNICATION TECHNOLOGY SPECIALLY ADAPTED FOR THE INTERNET OF THINGS [IoT]
    • G16Y20/00Information sensed or collected by the things
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16YINFORMATION AND COMMUNICATION TECHNOLOGY SPECIALLY ADAPTED FOR THE INTERNET OF THINGS [IoT]
    • G16Y40/00IoT characterised by the purpose of the information processing
    • G16Y40/10Detection; Monitoring
    • HELECTRICITY
    • H02GENERATION; CONVERSION OR DISTRIBUTION OF ELECTRIC POWER
    • H02JCIRCUIT ARRANGEMENTS OR SYSTEMS FOR SUPPLYING OR DISTRIBUTING ELECTRIC POWER; SYSTEMS FOR STORING ELECTRIC ENERGY
    • H02J7/00Circuit arrangements for charging or depolarising batteries or for supplying loads from batteries
    • H02J7/34Parallel operation in networks using both storage and other dc sources, e.g. providing buffering
    • H02J7/35Parallel operation in networks using both storage and other dc sources, e.g. providing buffering with light sensitive cells
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/66Remote control of cameras or camera parts, e.g. by remote control devices
    • H04N23/661Transmitting camera control signals through networks, e.g. control via the Internet

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computing Systems (AREA)
  • Signal Processing (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Automation & Control Theory (AREA)
  • Power Engineering (AREA)
  • Business, Economics & Management (AREA)
  • Emergency Management (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

The invention discloses an adaptive low-power-consumption wild animal snapshot device based on the Internet of Things. The device comprises an aluminum alloy shell housing a main controller MCU. The remote infrared acquisition end of the main controller MCU is connected with a first identification sensor; the MCU is further connected with a second identification sensor, an illuminance sensor, a light-supplement camera device, a memory and a transmission communication module, and the transmission communication module is connected with an image processing module. The main controller MCU is also connected with a positioning module, a tip-over sensor, a vibration sensor and an internal environment sensor, and its power supply end is connected with a self-powered module. Advantages: micro-power-consumption operation of the whole machine and accurate energy-consumption state control are achieved with a micro-power-consumption technique and a multi-mode energy-consumption state machine model, and the infrared radiation of the warm animal body is detected by a dual-detection scheme consisting of a point-type uncooled pyroelectric infrared sensor and an array-type uncooled pyroelectric infrared sensor.

Description

Self-adaptive low-power-consumption wild animal snapshot device and method based on Internet of things
Technical Field
The invention relates to the technical field of wild animal photography, and in particular to an adaptive low-power-consumption wild animal snapshot device and method based on the Internet of Things.
Background
Accurate knowledge of wildlife population distribution, numbers, living environment, quality of life and habitat conditions is the basis of wildlife ecological research, and provides a scientific basis both for that research and for the effective protection of wild animals. Gridded monitoring of wildlife populations and their distribution is therefore extremely important.
Besides manual field investigation, automatic 24-hour Internet-of-Things monitoring is an extremely important technical means. At present, infrared wildlife cameras are widely used domestically: when a wild animal appears within the camera's monitoring field of view, infrared pyroelectric sensing perceives the animal and triggers the camera to shoot, and the photos are stored on a data storage card. The photos are then obtained either by periodically retrieving the card manually or by transmission over a mobile communication network.
The main defects of the prior art are as follows: (1) Frequent battery replacement. Such cameras are commonly powered by disposable or rechargeable batteries, but because the circuit power consumption is high, the longest power supply period is about three months, creating a huge workload of frequent battery replacement. (2) A high false-alarm rate. A single pyroelectric sensing and identification technology is usually adopted, so external interference or human activity easily triggers false-alarm photography; the large number of false alarms both drains the battery and creates a heavy photo-screening workload. (3) A low shell protection level. The casing is usually made of engineering materials with a low protection grade, so it is easily damaged by the field climate, shortening the continuous working life of the whole machine and lowering its reliability. (4) No monitoring, and easy theft. Such systems usually lack internal working-state monitoring and anti-theft functions, so the equipment's running state cannot be effectively monitored and the device is easily stolen.
Disclosure of Invention
Aiming at the problems, the invention provides an adaptive low-power consumption wild animal snapshot device and method based on the Internet of things.
In order to achieve the purpose, the invention adopts the following specific technical scheme:
the utility model provides a self-adaptation low-power consumption wild animal snapshot device based on thing networking, its key technology lies in: the device comprises an aluminum alloy shell, a main controller MCU is arranged in the aluminum alloy shell, a remote infrared acquisition end of the main controller MCU is connected with a first identification sensor, a focusing infrared acquisition end of the main controller MCU is connected with a second identification sensor, a light acquisition end of the main controller MCU is connected with a light intensity sensor, a light supplement shooting end of the main controller MCU is connected with a light supplement camera device, an image processing end of the main controller MCU is connected with an image processing module, a storage end of the main controller MCU is connected with a memory, a communication end of the main controller MCU is connected with a transmission communication module, the transmission communication module is connected with the image processing module, a positioning module is connected with a positioning end of the main controller MCU, an inclination sensor is connected with an inclination detection end of the main controller MCU, and a vibration sensor is connected with a vibration detection end of the main controller MCU, the internal environment detection end of the main controller MCU is connected with an internal environment sensor, and the power supply end of the main controller MCU is connected with a self-power supply module.
Through this design, a shell made of aluminum alloy is adopted, with protection grade IP67 and an inner cavity filled with inert gas. The self-powered module is designed for outdoor charging and power supply, prolonging the service life of the device. A dual identification sensor is formed by combining the first identification sensor and the second identification sensor. When no animal is being photographed, the whole device is in a low-power-consumption state and only the first identification sensor monitors, using the remote infrared acquisition equipment to increase the monitoring distance. Once an infrared signal is captured, the whole system exits the low-power-consumption state, the second identification sensor is started to acquire an animal thermal-imaging temperature picture, and illuminance sensing combined with light supplementing is used to shoot a color picture with a better effect. Micro-power-consumption operation of the whole machine and accurate energy-consumption state control are realized with a micro-power-consumption technique and a multi-mode energy-consumption state machine model. The main controller is connected with the tip-over sensor, the vibration sensor and the positioning module, which effectively prevents the device from being moved or stolen, improves its safety, and realizes self-alarming. Through the integration of multiple sensors, wild animal snapshots based on the Internet of Things are realized and the snapshot effect is improved.
Furthermore, the first identification sensor comprises an infrared sensor formed by two infrared pyroelectric sensing elements connected in reverse series. A first identification hole is formed in the shell and fitted with a Fresnel lens; the infrared acquisition end of the infrared sensor faces the Fresnel lens. The infrared sensor is connected with the input end of an infrared sensor driving circuit, and the infrared waveform signal from the output end of the driving circuit is connected with the remote infrared acquisition end of the main controller MCU.
The second identification sensor comprises an infrared array sensor. A second identification hole is formed in the aluminum alloy shell and fitted with an infrared lens; the infrared acquisition end of the infrared array sensor is aligned with the infrared lens, and the temperature signal of the infrared array sensor is connected with the focusing infrared acquisition end of the main controller MCU.
With this scheme, the center of the Fresnel lens of the first identification sensor and the infrared acquisition end of the infrared sensor lie on the same horizontal line, forming a long-range infrared pickup and further increasing the identification distance. The wavelength of the infrared sensor is set to 9-12 um; this specific band makes the sensor sensitive to the infrared wavelengths radiated by animal bodies, and the reverse-series dual-element package further improves sensitivity. The first identification sensor is the first trigger source of the whole device and is used to start the second identification sensor. The infrared lens of the second identification sensor is made of a special rare-earth material that specifically passes infrared-band light; at the same time, optical focusing converges more infrared rays onto the rear-end sensor, increasing the identification distance. The infrared array sensor outputs an infrared thermal-imaging image.
The further technical scheme is as follows: the transmission communication module comprises a mobile communication unit and a short-distance wireless transmission unit, wherein the short-distance wireless transmission unit is a Bluetooth transmission unit or a WiFi transmission unit;
the self-powered module comprises a solar cell panel, the solar cell panel is connected with an energy collecting and converting circuit, and the energy collecting and converting circuit supplies power to the main controller MCU through a three-stage energy storage device.
The mobile communication unit transmits long-distance data to the designated cloud platform, while the Bluetooth or WiFi transmission unit handles short-distance data transmission. When not in use, the transmission communication module is in a non-working state; an operating period is set in the MCU to realize periodic operation. The device is installed in the field and charges itself through the solar cell panel, making its energy supply self-sufficient. This realizes long-life-cycle autonomous energy supply in a field environment, with no battery replacement needed during the device's life cycle.
Further, the internal environment sensor comprises a current and voltage detection module, a temperature detection module, a humidity detection module and an air pressure detection module, and the current and voltage detection module, the temperature detection module, the humidity detection module and the air pressure detection module are all connected with the main controller MCU.
To detect, monitor and self-diagnose the equipment, the current/voltage, temperature, humidity and air pressure detection modules are provided, realizing unattended operation and rapid alarming.
An adaptive low-power-consumption wild animal snapshot method based on the Internet of Things is characterized by comprising the following steps:
a step for performing identification, judgment and snapshot;
a step for wirelessly transmitting pictures;
a step for periodically uploading the working state of the snapshot system;
and a step for tip-over and anti-theft monitoring.
Further, the steps for performing identification, judgment and snapshot are as follows:
Preprocessing: set the infrared waveform discrete-coefficient threshold U and the preset ambient illuminance;
S11: the main controller MCU is in a low-power-consumption state and controls the first identification sensor to acquire an infrared waveform signal in real time; after the main controller MCU acquires the infrared waveform signal u(t), go to step S12;
S12: the main controller MCU exits the low-power-consumption state and performs data processing, inter-frame difference processing and feature extraction on the acquired infrared waveform signal u(t) to obtain an infrared waveform discrete coefficient u1(t);
S13: the main controller MCU compares the infrared waveform discrete coefficient u1(t) of two successive samplings with the infrared waveform discrete-coefficient threshold U; if the discrete coefficient u1(t) is greater than or equal to the threshold U both times, go to step S14; otherwise return to step S11 and enter the low-power-consumption state;
S14: the main controller MCU starts the second identification sensor and continuously acquires at least three of its temperature signals, forms a thermal-imaging image, establishes an infrared array temperature picture and performs inter-frame difference analysis;
S15: the main controller MCU, in combination with the image processing module, performs artificial-intelligence animal pattern recognition to obtain an animal judgment result; if an animal is present, go to step S16, otherwise return to step S11;
S16: the main controller MCU starts the illuminance sensor and acquires the current ambient illuminance;
S17: if the current ambient illuminance is lower than the preset ambient illuminance, the main controller MCU starts the light-supplement camera device to supplement light and shoot, obtaining a color picture RGB which is stored in the memory; otherwise, the main controller MCU starts the light-supplement camera device to shoot directly and stores the shot data in the memory.
A dual-detection scheme consisting of a point-type uncooled pyroelectric infrared sensor and an array-type uncooled pyroelectric infrared sensor detects the infrared radiation of the warm animal body, realizing low-power-consumption detection and prolonging the service cycle.
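The S11-S17 flow above can be sketched as a simple wake-on-trigger routine. The sketch below is illustrative only, not the patent's firmware: the four callables (`read_coeff`, `is_animal`, `ambient_lux`, `shoot`) are hypothetical stand-ins for the sensor drivers, and the default threshold values are arbitrary.

```python
# Illustrative sketch of steps S11-S17: the MCU stays in the low-power
# state until the pyroelectric stage fires twice, then runs the
# thermal-array / recognition / camera stages. All callables are
# hypothetical stand-ins for device drivers.
def snapshot_cycle(read_coeff, is_animal, ambient_lux, shoot,
                   threshold_u=1.0, lux_preset=50.0):
    """Run one pass of steps S11-S17; return the photo or None."""
    # S11-S13: two successive discrete coefficients must reach U
    if not (read_coeff() >= threshold_u and read_coeff() >= threshold_u):
        return None                  # stay in the low-power state
    # S14-S15: thermal-imaging frames + AI animal recognition
    if not is_animal():
        return None                  # false trigger; back to sleep
    # S16-S17: supplement light only when the scene is too dark
    fill_light = ambient_lux() < lux_preset
    return shoot(fill_light)
```

The two-stage gate is what keeps the average power low: the only sensor polled in the idle loop is the point pyroelectric element, and everything downstream runs solely after a double-confirmed trigger.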
Still further, the spectrum information carried by the infrared waveform discrete coefficient u1(t) obtained in step S12 lies in a specific spectral range.
Apply the fast Fourier transform to u1(t),
F(f) = |FFT{u1(t)}|,
to obtain the frequency spectrum and amplitude spectrum of u1(t).
According to an empirical formula and experimental data, take the amplitudes at the discrete frequency points f in (1, 10) and compare them differentially:
suppose the first sampling result attains its maximum m1 at f1 = a in the frequency range (0, 10), i.e. the amplitude spectrum is largest at f1 = a:
m1 = max{ F(f1) : f1 in (0, 10) };
suppose the second sampling result attains its maximum m2 at f2 = b in the frequency range (a, 10), i.e. the amplitude spectrum is largest at f2 = b:
m2 = max{ F(f2) : f2 in (a, 10) }.
If m1 is greater than or equal to the infrared waveform discrete-coefficient threshold U, and m2 is greater than or equal to the threshold U, go to step S14.
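The spectral check above can be sketched with NumPy. This is a minimal illustration of the idea, not the patent's implementation: the sampling rate, test signals and threshold are assumed values, and the band limits follow the (0, 10) and (a, 10) ranges stated in the text.

```python
# Illustrative sketch of the two-sampling spectral peak test.
import numpy as np

def peak_in_band(signal, fs, f_lo, f_hi):
    """Return (peak_freq, peak_amplitude) of the FFT amplitude
    spectrum restricted to the open band (f_lo, f_hi) Hz."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    band = (freqs > f_lo) & (freqs < f_hi)
    idx = np.flatnonzero(band)[np.argmax(spectrum[band])]
    return freqs[idx], spectrum[idx]

def passes_spectral_check(u1_first, u1_second, fs, threshold_u):
    # First sampling: maximum m1 at frequency a within (0, 10)
    a, m1 = peak_in_band(u1_first, fs, 0.0, 10.0)
    # Second sampling: maximum m2 within (a, 10)
    _, m2 = peak_in_band(u1_second, fs, a, 10.0)
    # Trigger S14 only when both peaks reach the threshold U
    return bool(m1 >= threshold_u and m2 >= threshold_u)
```

Requiring both samplings to show a strong low-frequency peak is what filters out one-off disturbances before the power-hungry thermal array is woken.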
Still further, the infrared array temperature picture size is 128 x 128, and the color picture RGB is X x Y.
The specific steps in step S15 are:
S151: compress and combine the infrared array temperature picture and the color picture RGB to form merged picture data Input. Specifically: map the color picture RGB to 3-channel 128 x 128 adjusted picture data according to the size of the infrared array temperature picture; obtain 1-channel 128 x 128 infrared temperature data from the infrared array temperature picture; and compress and combine the adjusted picture data and the infrared temperature data into 4-channel 128 x 128 merged picture data Input;
S152: send the merged picture data Input obtained in step S151 into the data convolution network structure for processing to obtain Output picture data Output.
The specific steps of sending the merged picture data Input into the data convolution network structure for processing are as follows:
S1521: set a first convolution kernel group comprising 16 convolution kernels Filter1X of size 1 x 1, X = 1, 2, 3 ... 16; the number of channels of each convolution kernel Filter1X is 4, and the stride is set to 4.
Perform a group convolution operation on the merged picture data Input and the 16 convolution kernels Filter1X to obtain first convolution output data M1, specifically:
the merged picture data Input and every convolution kernel Filter1X in the first kernel group each have 4 channels; the four channels of the merged picture data Input are multiplied with each convolution kernel Filter1X in turn and summed to obtain first intermediate data, which is substituted into the h-swish function to yield first convolution output data M1 of 16 channels, each 32 x 32.
In S1521 the number of channels is increased, so the hidden features in the original merged picture data Input are lifted to a higher-dimensional space, and the h-swish function then adds nonlinear expression capability.
S1522: set a second convolution kernel group comprising one convolution kernel Filter2 of size 4 x 4; the number of channels of Filter2 is 4, and the stride is set to 4.
Perform a conventional convolution operation on the first convolution output data M1 and the convolution kernel Filter2 to obtain second convolution output data M2, specifically:
the first convolution output data M1 has 16 channels, now divided into 4 groups of 4 channels each; the 4 groups are each convolved with the convolution kernel Filter2 and the results merged to obtain second intermediate data of 4 channels 32 x 32, which is substituted into the h-swish function to yield second convolution output data M2.
In step S1522 the first convolution output data M1 is divided into 4 groups, each convolved in turn with the convolution kernel Filter2. Unlike Filter1, Filter2 is a single convolution kernel, so the 4 groups of M1 share one kernel (i.e. share the convolution kernel weights). This operation greatly reduces the time and memory consumed in the calculation.
S1523: set a third convolution kernel group comprising one convolution kernel Filter3 of size 4 x 4; the number of channels of Filter3 is 4, and the stride is set to 4.
Perform a third convolution operation on the second convolution output data M2 and the convolution kernel Filter3 to obtain third convolution output data M3, specifically:
the second convolution output data M2 is 4-channel 32 x 32 data and the convolution kernel Filter3 has 4 channels; each channel is convolved separately to obtain 4 third intermediate data of 8 x 8, the 4 intermediate results are summed into 1-channel 8 x 8 data, and this is substituted into the h-swish function to yield third convolution output data M3;
S1524: set a fourth convolution kernel group comprising one convolution kernel Filter4 of size 8 x 8 with channel number 1;
perform a convolution operation on the third convolution output data M3 and the convolution kernel Filter4 to obtain fourth intermediate data, which is substituted into the tanh function to obtain the Output picture data Output;
where the h-swish function is:
h-swish[x] = x * ReLU6[x + 3] / 6;
ReLU6[x] = Min(Max(x, 0), 6);
x is the first, second or third intermediate data;
and the tanh function is:
tanh(y) = (e^y - e^(-y)) / (e^y + e^(-y));
y is the fourth intermediate data.
S153: set a judgment threshold and judge whether an animal is present according to the value of the Output picture data Output.
The judgment range is (-1, 1): the closer the value of the Output picture data Output is to 1, the more certain it is that the picture contains an animal; the closer the value is to -1, the more certain it is that the picture contains no animal.
Because the data convolution network structure adopts group convolution and weight sharing, the number of parameters is small; data can also be exchanged with the cloud platform over a remote connection when performing picture recognition. Power consumption is low and the speed is high.
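The S1521-S1524 pipeline can be sketched in NumPy. This is an illustrative reconstruction, not the patent's exact network: all weights are random stand-ins, and where the text leaves a detail ambiguous (e.g. the padding needed for the stated 32 x 32 size after the shared 4 x 4 stage) an assumption is made and noted in the comments.

```python
# Illustrative sketch of the four-stage network: channel-expanding
# 1x1 group convolution with stride 4, a shared-weight grouped 4x4
# stage, per-channel reduction to 8x8, and a final tanh score.
import numpy as np

def relu6(x):
    return np.minimum(np.maximum(x, 0.0), 6.0)

def h_swish(x):
    return x * relu6(x + 3.0) / 6.0

def stage1(inp, w1):
    """S1521: 16 1x1 kernels over 4 channels, stride 4 -> (16, 32, 32)."""
    xs = inp[:, ::4, ::4]                      # stride-4 subsampling
    return h_swish(np.tensordot(w1, xs, axes=([1], [0])))

def stage2(m1, w2):
    """S1522: one shared 4x4 kernel over 4 groups of 4 channels.
    'Same' padding keeps the stated 32x32 size (an assumption)."""
    pad = np.pad(m1, ((0, 0), (2, 1), (2, 1)))
    out = np.zeros((4,) + m1.shape[1:])
    for g in range(4):                         # all groups share w2
        grp = pad[4 * g:4 * g + 4]
        for i in range(m1.shape[1]):
            for j in range(m1.shape[2]):
                out[g, i, j] = np.sum(grp[:, i:i + 4, j:j + 4] * w2)
    return h_swish(out)

def stage3_4(m2, w3, w4):
    """S1523: per-channel 4x4 stride-4 conv summed to one 8x8 map;
    S1524: one 8x8 kernel reduces it to a scalar tanh score."""
    m3 = np.zeros((8, 8))
    for c in range(4):
        for i in range(8):
            for j in range(8):
                m3[i, j] += np.sum(m2[c, 4 * i:4 * i + 4,
                                      4 * j:4 * j + 4] * w3[c])
    m3 = h_swish(m3)
    return np.tanh(np.sum(m3 * w4))            # score in (-1, 1)

rng = np.random.default_rng(0)
inp = rng.standard_normal((4, 128, 128))       # merged Input: RGB + IR
w1 = rng.standard_normal((16, 4)) * 0.1        # Filter1X, X = 1..16
w2 = rng.standard_normal((4, 4, 4)) * 0.1      # single shared Filter2
w3 = rng.standard_normal((4, 4, 4)) * 0.1      # Filter3 (4 channels)
w4 = rng.standard_normal((8, 8)) * 0.1         # Filter4
score = stage3_4(stage2(stage1(inp, w1), w2), w3, w4)
```

The parameter count of this sketch is tiny (16*4 + 3*4*4 + 2*4*4*4 + 8*8 weights), which mirrors the patent's point that group convolution and weight sharing keep the model small enough for an embedded MCU-class device.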
The further technical scheme is as follows: the step for wirelessly transmitting pictures comprises a step for short-range wireless picture transmission and a step for remote wireless picture transmission.
The steps for short-range wireless picture transmission are as follows:
S21: the main controller MCU is in a low-power-consumption state and monitors in real time for a short-range wireless pairing request at the transmission communication module;
S22: when the main controller MCU receives a short-range wireless pairing request from any intelligent terminal, it exits the low-power-consumption state and establishes a short-range wireless pairing connection with that terminal;
S23: the main controller MCU receives the data transmission request of the intelligent terminal and predicts the data transmission time;
S24: the main controller MCU obtains the data-transmission endurance time of the current self-powered module;
S25: if the data-transmission endurance time is greater than the predicted data transmission time, the data are transmitted; otherwise the data transmission request is rejected and the low-power-consumption state is entered.
The steps for remote wireless picture transmission are as follows:
Preprocessing: set the period for remote wireless picture transmission;
S31: the main controller MCU is in a low-power-consumption state and periodically checks the current mobile-signal coverage;
S32: if mobile signal coverage is available, go to step S33; otherwise return to step S31;
S33: the main controller MCU exits the low-power-consumption state and transmits the shot data to the cloud platform through the transmission communication module; after the transmission is completed, return to step S31.
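The S23-S25 energy-budget check can be sketched as follows. The helper names and units (payload size, link throughput, battery energy, radio power) are assumptions for illustration; the patent only specifies the comparison itself.

```python
# Minimal sketch of the S23-S25 check: a transfer is accepted only if
# the self-powered module can keep the radio alive for longer than the
# transfer is predicted to take. All parameters are assumed units.
def predict_transfer_seconds(payload_bytes, throughput_bps):
    """S23: predicted transmission time for the requested payload."""
    return payload_bytes * 8 / throughput_bps

def endurance_seconds(battery_joules, radio_watts):
    """S24: how long the current charge can sustain the radio."""
    return battery_joules / radio_watts

def accept_transfer(payload_bytes, throughput_bps,
                    battery_joules, radio_watts):
    """S25: transmit only when endurance exceeds the predicted time."""
    return (endurance_seconds(battery_joules, radio_watts) >
            predict_transfer_seconds(payload_bytes, throughput_bps))
```

Rejecting a transfer that the battery cannot complete avoids the worst case for a field device: dying mid-transmission with neither the photo delivered nor enough charge left to re-enter monitoring.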
The further technical scheme is as follows: for monitoring and theft prevention, the steps for tip-over and anti-theft monitoring are as follows:
S41: the main controller MCU is in a low-power-consumption state and acquires the tip-over detection signal of the tip-over sensor in real time; after acquiring a tip-over detection signal it exits the low-power-consumption state and goes to step S42;
S42: the main controller MCU starts the mobile communication unit in the transmission communication module and the vibration sensor;
S43: the main controller MCU sends a tip-over alarm signal to the cloud platform through the mobile communication unit, while acquiring the vibration signal detected by the vibration sensor;
S44: the main controller MCU compares the preset theft vibration signal with the detected vibration signal; if the similarity is greater than the preset theft-similarity threshold, go to step S45; otherwise enter the low-power-consumption state and return to step S41;
S45: the main controller MCU starts the positioning module and sends the current position to the cloud platform.
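The S44 similarity test can be sketched as below. The patent does not specify the similarity metric, so normalized cross-correlation is chosen here purely as an illustrative assumption, and the threshold value is likewise a stand-in.

```python
# Illustrative sketch of the S44 comparison between the preset theft
# vibration signature and the measured vibration signal. The metric
# (peak normalized cross-correlation) is an assumed choice.
import numpy as np

def similarity(reference, measured):
    """Peak normalized cross-correlation in [0, 1]."""
    a = reference - np.mean(reference)
    b = measured - np.mean(measured)
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    if denom == 0.0:
        return 0.0                       # flat signal: no match
    corr = np.correlate(a, b, mode="full") / denom
    return float(np.max(np.abs(corr)))

def is_theft(reference, measured, threshold=0.8):
    """S44: flag theft when similarity exceeds the preset threshold."""
    return similarity(reference, measured) > threshold
```

Because the correlation scans all lags, the check is insensitive to when within the capture window the characteristic shaking occurs, which suits an event triggered asynchronously by the tip-over sensor.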
The invention has the following beneficial effects. Solar power supply, multi-stage energy storage and backup power technology realize long-life-cycle autonomous energy supply in the field, with no battery replacement needed during the device's life cycle. A micro-power-consumption technique and a multi-mode energy-consumption state machine model achieve micro-power-consumption operation of the whole machine and accurate energy-consumption state control. A dual-detection scheme consisting of a point-type uncooled pyroelectric infrared sensor and an array-type uncooled pyroelectric infrared sensor detects the infrared radiation of the warm animal body. The captured pictures are identified and judged with a recognition algorithm, so the pictures are screened, memory usage is reduced, more useful data and pictures can be stored, staff screening time is shortened, and the efficiency of field-animal supervision is improved. The tip-over and vibration sensors monitor movement of the device, and the positioning module provides real-time position acquisition, which aids theft prevention and recovery. With the aluminum alloy shell design, the protection grade reaches IP67. Sensors for current, voltage, temperature, humidity, air pressure and the like monitor and diagnose the working state of each unit circuit of the device, and remote parameter measurement and control are supported.
Drawings
FIG. 1 is a control block diagram of the present invention;
FIG. 2 is a flow chart of identification, judgment and snapshot;
FIG. 3 is a flow chart of artificial intelligence animal pattern recognition;
FIG. 4 is a flow chart for transferring pictures wirelessly at close range;
FIG. 5 is a flow chart for wirelessly transmitting pictures over long distances;
FIG. 6 is a flow chart of tip-over and anti-theft monitoring.
Detailed Description
The following provides a more detailed description of the embodiments and the operation of the present invention with reference to the accompanying drawings.
An adaptive low-power-consumption wild animal snapshot device based on the Internet of Things comprises an aluminum alloy shell in which a main controller MCU1 is arranged. As can be seen from figure 1, a first identification sensor 2 is connected to the remote infrared acquisition end of the main controller MCU1, a second identification sensor 3 is connected to its focusing infrared acquisition end, a light intensity sensor 4 is connected to its light acquisition end, a light supplement camera device 5 is connected to its light supplement shooting end, an image processing module 6 is connected to its image processing end, a memory 7 is connected to its storage end, and a transmission communication module 12 is connected to its communication end, the transmission communication module 12 also being connected to the image processing module 6. A positioning module 9 is connected to the positioning end of the main controller MCU1, a tip-over sensor 10 is connected to its tip-over detection end, a vibration sensor 11 is connected to its vibration detection end, an internal environment sensor K is connected to its internal environment detection end, and a self-powered module 13 is connected to its power supply end.
In this embodiment, the main controller MCU1 adopts an ultra-low-power microprocessor with an ARM Cortex-M4 core and an FPU floating-point unit, achieving ultra-low standby power consumption below 500 nA; control of each function of the device is realized through algorithms and data interaction.
In the present embodiment, the light supplement camera device 5 uses a 12-megapixel visible light sensor and supports at least 1080P image and video capture.
As can be seen from fig. 1, the first identification sensor 2 includes an infrared sensor 22 formed by two pyroelectric infrared sensitive elements connected in reverse series. The shell is provided with a first identification hole on which a Fresnel lens 21 is arranged, and the infrared collection end of the infrared sensor 22 faces the Fresnel lens 21. The infrared sensor 22 is connected to the input end of an infrared sensor driving circuit, and the infrared waveform signal output by the output end of the driving circuit is connected to the remote infrared acquisition end of the main controller MCU1;
In the present embodiment, the response wavelength of the infrared sensor 22 is 9-12 µm, the sensitivity is >4000 V/W, and the field of view is >100°.
The second identification sensor 3 comprises an infrared array sensor 32, a second identification hole is formed in the aluminum alloy shell, an infrared lens 31 is arranged in the second identification hole, the infrared acquisition end of the infrared array sensor 32 is opposite to the infrared lens 31, and the temperature signal of the infrared array sensor 32 is connected with the focusing infrared acquisition end of the main controller MCU 1.
In the present embodiment, in the second discrimination sensor 3, the pixel array of the infrared array sensor 32 is equal to or greater than 128 × 128.
In this embodiment, the transmission communication module 12 includes a mobile communication unit and a short-distance wireless transmission unit, where the short-distance wireless transmission unit is a WiFi transmission unit; in this embodiment, the WiFi transmission unit may provide a maximum transmission rate of 150 Mbit/s.
In this embodiment, the mobile communication unit may be 4G/5G/6G, etc.
As can also be seen in fig. 1, the self-powered module 13 includes a solar panel 13b, the solar panel 13b is connected to an energy collecting and converting circuit 13a, and the energy collecting and converting circuit 13a supplies power to the main controller MCU1 through a three-stage energy storage device.
The solar cell panel 13b adopts a Grade-AA high-efficiency monocrystalline silicon panel and, combined with a low-illuminance charge pump and MPPT tracking, achieves stable energy storage under illumination above 500 lux.
As can be seen from fig. 1, the internal environment sensor K includes a current and voltage detection module K1, a temperature detection module K2, a humidity detection module K3, and an air pressure detection module K4, and the current and voltage detection module K1, the temperature detection module K2, the humidity detection module K3, and the air pressure detection module K4 are all connected to the main controller MCU 1.
A method for the adaptive low-power-consumption wild animal snapshot device based on the Internet of Things comprises the following steps:
a step for identification, judgment and snapshot;
a step for wirelessly transmitting a picture;
a step for uploading the working state of the snapshot system at regular time;
a step for tip-over and anti-theft monitoring.
As can be seen from fig. 2, the steps for identification, judgment and snapshot are as follows:
Preprocessing: setting the infrared waveform discrete coefficient threshold U and the preset ambient illuminance;
S11: the main controller MCU1 is in a low power consumption state and controls the first identification sensor 2 to acquire an infrared waveform signal in real time; when the main controller MCU1 acquires an infrared waveform signal u(t), the process goes to step S12;
s12: the main controller MCU1 exits the low power consumption state, and performs data processing, interframe difference processing and characteristic quantity extraction on the acquired infrared waveform signal u (t) to obtain an infrared waveform discrete coefficient u1 (t);
S13: the main controller MCU1 compares the infrared waveform discrete coefficients u1(t) obtained from two successive acquisitions with the infrared waveform discrete coefficient threshold U; if u1(t) is greater than or equal to the threshold U both times, step S14 is entered; otherwise the process returns to step S11 and enters the low power consumption state;
S14: the main controller MCU1 starts the second identification sensor 3, continuously acquires at least three frames of the temperature signal collected by the second identification sensor 3 to form thermal imaging images, establishes infrared array temperature pictures, and performs interframe difference analysis;
s15: the main controller MCU1 combines with the image processing module 6 to perform artificial intelligence animal mode recognition to obtain an animal judgment result; if the animal is the animal, the step S16 is entered, otherwise, the step S11 is returned;
s16: the main controller MCU1 starts the illuminance sensor 4 and obtains the current ambient illuminance;
s17: if the current ambient illuminance is lower than the preset ambient illuminance, the main controller MCU1 starts the light supplement camera device 5 to supplement light and shoot to obtain color images RGB, and stores the color images RGB in the memory 7; otherwise, the main controller MCU1 starts the fill-in camera device 5 to shoot and stores the shot data in the memory 7.
In this embodiment, the spectrum information carried by the infrared waveform discrete coefficient u1(t) obtained in step S12 lies in a specific spectrum range.
u1(t) is subjected to a Fourier transform (FFT):
U1(f) = Σ_{t=0}^{N-1} u1(t)·e^(-j2πft/N);
from this transform the frequency spectrum and the amplitude spectrum of u1(t) are obtained.
According to an empirical formula and experimental data, the amplitudes at the discrete frequency points f ∈ (1, 10) are taken and compared differentially:
assume the first sampling result attains its maximum value m1 at f1 = a in the frequency range 1-10, namely the maximum of the amplitude spectrum
m1 = max{|U1(f1)|}, f1 = a, a ∈ (0, 10);
assume the second sampling result attains its maximum value m2 at f2 = b in the frequency range a-10, namely the maximum of the amplitude spectrum
m2 = max{|U2(f2)|}, f2 = b, b ∈ (a, 10);
if m1 is greater than or equal to the infrared waveform discrete coefficient threshold U and m2 is greater than or equal to the infrared waveform discrete coefficient threshold U, step S14 is entered.
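The two-sampling trigger of steps S12-S13 can be sketched as below. The window length, test signals and threshold are illustrative placeholders, and `peak_amplitude`/`trigger` are hypothetical names, not the patent's implementation:

```python
import numpy as np

def peak_amplitude(u1, f_lo, f_hi):
    """Maximum of the FFT amplitude spectrum of u1(t) over discrete bins in (f_lo, f_hi)."""
    spectrum = np.abs(np.fft.rfft(u1))
    bins = np.arange(f_lo + 1, f_hi)      # open interval of discrete frequency points
    if bins.size == 0:
        return 0.0, f_hi                  # degenerate interval: nothing to inspect
    a = int(bins[np.argmax(spectrum[bins])])
    return float(spectrum[a]), a

def trigger(u1_first, u1_second, threshold):
    """S13 sketch: proceed to S14 only if both samplings reach the threshold."""
    m1, a = peak_amplitude(u1_first, 0, 10)    # max over f1 in (0, 10), attained at a
    m2, _ = peak_amplitude(u1_second, a, 10)   # max over f2 in (a, 10)
    return m1 >= threshold and m2 >= threshold
```

A warm body moving past the Fresnel lens concentrates energy in the low-frequency bins, so both checks pass; a static scene leaves those bins near zero and the MCU stays asleep.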
In this embodiment, the infrared array temperature picture size is 128 × 128 and the color picture RGB size is X × Y;
as can be seen from fig. 3, the specific steps in step S15 are:
S151: compressing and combining the infrared array temperature picture and the color picture RGB to form merged picture data Input, specifically: resizing the color picture RGB from X × Y to 3-channel 128 × 128 adjusted picture data according to the size of the infrared array temperature picture; obtaining 1-channel 128 × 128 infrared temperature data from the infrared array temperature picture; and compressing and combining the adjusted picture data with the infrared temperature data to form 4-channel 128 × 128 merged picture data Input;
S152: sending the merged picture data Input obtained in step S151 to the data convolution network structure for processing to obtain the Output picture data Output;
the specific steps of sending the merged picture data Input into the data convolution network structure for processing are as follows:
S1521: setting a first convolution kernel group comprising 16 convolution kernels Filter1X of size 1 × 1, X = 1, 2, 3 … 16; each convolution kernel Filter1X has 4 channels, and the step length is set to 4;
performing group convolution operation on the merged picture data Input and 16 convolution kernels Filter1X to obtain group convolution output data M1, which specifically comprises:
The merged picture data Input and every convolution kernel Filter1X in the first convolution kernel group each have 4 channels; the four channels of the merged picture data Input are multiplied channel by channel with each convolution kernel Filter1X and summed to obtain the first intermediate data, which is substituted into the h-swish function to compute the first convolution output data M1 of 16 channels, each N × N;
S1522: setting a second convolution kernel group comprising one convolution kernel Filter2 of size 4 × 4; the convolution kernel Filter2 has 4 channels, and the step length is set to 4;
performing conventional convolution operation on the first convolution output data M1 and a convolution kernel Filter2 to obtain conventional convolution output data M2; the method specifically comprises the following steps:
The first convolution output data M1 has 16 channels, which are divided into 4 groups of 4 channels each; the 4 groups are respectively convolved with the convolution kernel Filter2 and merged to obtain the second intermediate data of 4 channels, each 32 × 32, which is substituted into the h-swish function to obtain the second convolution output data M2;
S1523: setting a third convolution kernel group comprising one convolution kernel Filter3 of size 4 × 4; the convolution kernel Filter3 has 4 channels, and the step length is set to 4;
performing a third convolution operation on the second convolution output data M2 and the convolution kernel Filter3 to obtain third convolution output data M3; the method specifically comprises the following steps:
The second convolution output data M2 is 4-channel 32 × 32 data and the convolution kernel Filter3 has 4 channels; each channel is convolved separately to obtain four 8 × 8 third intermediate maps, which are added to give 1-channel 8 × 8 data; substituting this into the h-swish function yields the third convolution output data M3;
S1524: setting a fourth convolution kernel group comprising one convolution kernel Filter4 of size 8 × 8 with 1 channel;
Performing a convolution operation on the third convolution output data M3 and the convolution kernel Filter4 to obtain the fourth intermediate data, and substituting the fourth intermediate data into the tanh function to obtain the Output picture data Output;
wherein the h-swish function is:
h-swish[x] = x · ReLU6(x + 3)/6;
ReLU6[x]=Min(Max(x,0),6);
x is first time intermediate data, second time intermediate data or third time intermediate data;
the tanh function is:
tanh[y] = (e^y - e^(-y))/(e^y + e^(-y));
y is the fourth intermediate data.
S153: setting a judgment threshold, and judging whether an animal exists or not according to the value of the Output picture data Output;
Setting the judgment range as (-1, 1): the closer the value of the Output picture data Output is to 1, the more certain it is that the picture contains an animal; the closer the value of Output is to -1, the more certain it is that the picture contains no animal.
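A numpy sketch of the S1521-S1524 network follows. Note that the sizes as literally stated do not all chain (a 4 × 4 kernel at step length 4 on a 32 × 32 map yields 8 × 8, not 32 × 32), so the sketch assumes the 1 × 1 layer runs at step 1, which makes the stated intermediate sizes (16-channel 128 × 128 → 4-channel 32 × 32 → 8 × 8 → scalar) chain correctly; random weights stand in for the undisclosed trained parameters:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def conv2d(x, w, stride):
    """Valid convolution of x (C, H, W) with one kernel w (C, k, k) -> 2-D map."""
    k = w.shape[-1]
    win = sliding_window_view(x, (k, k), axis=(1, 2))[:, ::stride, ::stride]
    return np.einsum('chwij,cij->hw', win, w)

def h_swish(x):
    # h-swish[x] = x * ReLU6(x + 3) / 6
    return x * np.clip(x + 3, 0, 6) / 6

def animal_score(inp, f1, f2, f3, f4):
    """S1521-S1524 on 4-channel 128 x 128 merged picture data; returns a value in (-1, 1)."""
    m1 = h_swish(np.stack([conv2d(inp, w, 1) for w in f1]))        # S1521: 16 x 128 x 128
    m2 = h_swish(np.stack([conv2d(m1[4 * g:4 * g + 4], f2, 4)      # S1522: 4 groups of
                           for g in range(4)]))                    # 4 channels -> 4 x 32 x 32
    m3 = h_swish(conv2d(m2, f3, 4))                                # S1523: -> 8 x 8
    return np.tanh(conv2d(m3[None], f4, 1)).item()                 # S1524: 8 x 8 kernel -> scalar

rng = np.random.default_rng(0)  # random stand-ins for the trained weights
f1 = rng.normal(0, 0.1, (16, 4, 1, 1))   # first group: 16 kernels, 1 x 1, 4 channels
f2 = rng.normal(0, 0.1, (4, 4, 4))       # second group: one 4 x 4 kernel, 4 channels
f3 = rng.normal(0, 0.1, (4, 4, 4))       # third group: one 4 x 4 kernel, 4 channels
f4 = rng.normal(0, 0.1, (1, 8, 8))       # fourth group: one 8 x 8 kernel, 1 channel
score = animal_score(rng.normal(0, 1, (4, 128, 128)), f1, f2, f3, f4)
```

With trained weights, step S153 would compare the returned score against the (-1, 1) judgment range to decide whether an animal is present.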
The step for wirelessly transmitting the picture comprises a step for wirelessly transmitting the picture in a short distance and a step for wirelessly transmitting the picture in a long distance;
as can be seen from fig. 4, the steps for transmitting the pictures in the short-distance wireless mode are as follows:
S21: the main controller MCU1 is in a low power consumption state and monitors the transmission communication module 12 in real time for a short-range wireless transmission pairing request;
s22: when the main controller MCU1 acquires a close-range wireless transmission pairing request of any intelligent terminal, the main controller MCU1 exits from a low power consumption state and is in close-range wireless transmission pairing connection with the corresponding intelligent terminal;
s23: the main controller MCU1 acquires the intelligent terminal data transmission request and predicts the data transmission time;
s24: the main controller MCU1 obtains the current data transmission endurance time of the self-powered module 13;
s25: if the data transmission endurance time is greater than the predicted data transmission time, performing data transmission; otherwise, rejecting the data transmission request and entering a low power consumption state;
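The admission decision of steps S23-S25 amounts to comparing a predicted transfer time with the remaining endurance. A sketch using the 150 Mbit/s WiFi rate of this embodiment, with an assumed 50% effective-throughput derating (the derating factor and function names are assumptions, not from the patent):

```python
def predict_transfer_seconds(payload_bytes, link_rate_bps=150_000_000, efficiency=0.5):
    """S23 sketch: predicted data transmission time; efficiency derates the nominal rate."""
    return payload_bytes * 8 / (link_rate_bps * efficiency)

def admit_transfer(payload_bytes, endurance_seconds):
    """S25: transmit only if the self-powered module outlasts the predicted transfer."""
    return endurance_seconds > predict_transfer_seconds(payload_bytes)
```

A rejected request simply returns the device to the low power consumption state until the next pairing.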
as can be seen in fig. 5, the steps for transmitting the picture over long distance wirelessly are as follows:
Preprocessing: setting the long-distance wireless picture transmission period;
s31: the main controller MCU1 is in a low power consumption state and periodically acquires the current mobile signal coverage state;
s32: if the mobile signal is in the coverage state, go to step S33; otherwise, returning to the step S31;
s33: the main controller MCU1 exits the low power consumption state, and transmits the shooting data to the cloud platform through the transmission communication module 12; after the transmission is completed, the process returns to step S31.
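One wake-up period of the S31-S33 loop can be sketched as a single cycle; the callback names are illustrative placeholders:

```python
def remote_upload_cycle(has_coverage, upload_shots):
    """S31-S33 sketch: on each periodic wake-up, upload stored shots only under coverage.

    Returns True if an upload was performed before re-entering low power.
    """
    if has_coverage():
        upload_shots()  # S33: push stored pictures to the cloud platform
        return True
    return False        # S32: no coverage, stay in low power until the next period
```

Running this once per preset period keeps the mobile radio off except in the brief windows where it can actually succeed.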
As can be seen in fig. 6, the steps for tip-over and anti-theft monitoring are:
S41: the main controller MCU1 is in a low power consumption state and acquires the tip-over detection signal of the tip-over sensor 10 in real time; after acquiring the tip-over detection signal it exits the low power consumption state and enters step S42;
S42: the main controller MCU1 starts the mobile communication unit in the transmission communication module 12 and the vibration sensor 11;
S43: the main controller MCU1 sends a tip-over alarm signal to the cloud platform through the mobile communication unit while acquiring the vibration signal detected by the vibration sensor 11;
S44: the main controller MCU1 compares the preset theft vibration signal with the detected vibration signal; if the similarity is greater than the preset theft similarity threshold, step S45 is entered; otherwise the low power consumption state is entered and the process returns to step S41;
S45: the main controller MCU1 controls the positioning module 9 to start and sends the current position to the cloud platform.
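The patent does not specify how the similarity of step S44 is computed; peak normalized cross-correlation between the preset theft vibration signature and the measured trace is one plausible choice. In the sketch below, both the metric and the threshold value are assumptions:

```python
import numpy as np

THEFT_SIMILARITY_THRESHOLD = 0.6  # hypothetical preset theft similarity threshold

def vibration_similarity(reference, measured):
    """Peak normalized cross-correlation in [0, 1] between two vibration traces."""
    a = (reference - reference.mean()) / (reference.std() + 1e-12)
    b = (measured - measured.mean()) / (measured.std() + 1e-12)
    corr = np.correlate(a, b, mode='full') / len(a)
    return float(min(np.abs(corr).max(), 1.0))

def is_theft(reference, measured):
    """S44 sketch: start the positioning module (S45) only above the threshold."""
    return vibration_similarity(reference, measured) > THEFT_SIMILARITY_THRESHOLD
```

Correlating over all lags makes the check insensitive to when, within the capture window, the characteristic theft motion occurs.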
It should be noted that the above description is not intended to limit the present invention, and the present invention is not limited to the above examples, and those skilled in the art may make variations, modifications, additions or substitutions within the spirit and scope of the present invention.

Claims (6)

1. A method for an adaptive low-power-consumption wild animal snapshot device based on the Internet of Things, characterized by comprising the following steps:
firstly, an adaptive low-power-consumption wild animal snapshot device based on the Internet of Things is built, comprising an aluminum alloy shell in which a main controller MCU (1) is arranged; a first identification sensor (2) is connected to the remote infrared acquisition end of the main controller MCU (1), a second identification sensor (3) is connected to the focusing infrared acquisition end of the main controller MCU (1), a light intensity sensor (4) is connected to the light acquisition end of the main controller MCU (1), a light supplement camera device (5) is connected to the light supplement shooting end of the main controller MCU (1), an image processing module (6) is connected to the image processing end of the main controller MCU (1), a memory (7) is connected to the storage end of the main controller MCU (1), a transmission communication module (12) is connected to the communication end of the main controller MCU (1), the transmission communication module (12) is connected with the image processing module (6), a positioning module (9) is connected to the positioning end of the main controller MCU (1), a tip-over sensor (10) is connected to the tip-over detection end of the main controller MCU (1), a vibration sensor (11) is connected to the vibration detection end of the main controller MCU (1), an internal environment sensor (K) is connected to the internal environment detection end of the main controller MCU (1), and a self-powered module (13) is connected to the power supply end of the main controller MCU (1);
the first identification sensor (2) comprises an infrared sensor (22) formed by reversely connecting double infrared pyroelectric sensitive elements in series, a first identification hole is formed in the shell, a Fresnel lens (21) is arranged on the first identification hole, an infrared acquisition end of the infrared sensor (22) is opposite to the Fresnel lens (21), the infrared sensor (22) is connected with an input end of an infrared sensor driving circuit, and an infrared waveform signal output by an output end of the infrared sensor driving circuit is connected with a focusing infrared acquisition end of the main controller MCU (1);
the second identification sensor (3) comprises an infrared array sensor (32), a second identification hole is formed in the aluminum alloy shell, an infrared lens (31) is arranged in the second identification hole, an infrared acquisition end of the infrared array sensor (32) is opposite to the infrared lens (31), and a temperature signal of the infrared array sensor (32) is connected with a focusing infrared acquisition end of the main controller MCU (1);
secondly, the method comprises the following steps:
a step for performing authentication, judgment and snapshot;
a step for wirelessly transmitting a picture;
a step for uploading the working state of the snapshot system at regular time;
a step for tip-over and anti-theft monitoring;
the steps for identifying, judging and snapshotting are as follows:
Preprocessing: setting the infrared waveform discrete coefficient threshold U and the preset ambient illuminance;
S11: the method comprises the following steps that a main controller MCU (1) is in a low power consumption state and controls a first identification sensor (2) to acquire an infrared waveform signal in real time, and when the main controller MCU (1) acquires an infrared waveform signal u (t), the step S12 is carried out;
s12: the main controller MCU (1) exits from a low power consumption state, and performs data processing, interframe difference processing and characteristic quantity extraction on the acquired infrared waveform signal u (t) to obtain an infrared waveform discrete coefficient u1 (t);
S13: the main controller MCU (1) compares the infrared waveform discrete coefficients u1(t) obtained from two successive acquisitions with the infrared waveform discrete coefficient threshold U; if u1(t) is greater than or equal to the threshold U both times, step S14 is entered; otherwise the process returns to step S11 and enters the low power consumption state;
S14: the main controller MCU (1) starts the second identification sensor (3), continuously acquires at least three frames of the temperature signal collected by the second identification sensor (3) to form thermal imaging images, establishes infrared array temperature pictures, and performs interframe difference analysis;
s15: the main controller MCU (1) is combined with the image processing module (6) to carry out artificial intelligent animal mode identification to obtain an animal judgment result; if the animal is the animal, the step S16 is entered, otherwise, the step S11 is returned;
s16: the main controller MCU (1) starts the illuminance sensor (4) and acquires the current ambient illuminance;
s17: if the current ambient illuminance is lower than the preset ambient illuminance, the main controller MCU (1) starts the light supplement camera device (5) to supplement light and shoot to obtain color pictures RGB, and the color pictures RGB are stored in the memory (7); otherwise, the main controller MCU (1) starts the light supplement camera device (5) to shoot and stores the shot data in the memory (7);
the spectrum information carried by the infrared waveform discrete coefficient u1(t) obtained in step S12 lies in a specific spectrum range;
u1(t) is subjected to a Fourier transform (FFT):
U1(f) = Σ_{t=0}^{N-1} u1(t)·e^(-j2πft/N);
from this transform the frequency spectrum and the amplitude spectrum of u1(t) are obtained;
according to an empirical formula and experimental data, the amplitudes at the discrete frequency points f ∈ (1, 10) are taken and compared differentially:
assuming the first sampling result attains its maximum value m1 at f1 = a in the frequency range 1-10, namely the maximum of the amplitude spectrum
m1 = max{|U1(f1)|}, f1 = a, a ∈ (0, 10);
assuming the second sampling result attains its maximum value m2 at f2 = b in the frequency range a-10, namely the maximum of the amplitude spectrum
m2 = max{|U2(f2)|}, f2 = b, b ∈ (a, 10);
if m1 is greater than or equal to the infrared waveform discrete coefficient threshold U and m2 is greater than or equal to the infrared waveform discrete coefficient threshold U, step S14 is entered.
2. The method of the adaptive low-power consumption wild animal snapshot device based on the internet of things as claimed in claim 1, wherein:
the specific steps in step S15 are:
S151: compressing and combining the infrared array temperature picture and the color picture RGB to form merged picture data Input, specifically: resizing the color picture RGB from X × Y to 3-channel 128 × 128 adjusted picture data according to the size of the infrared array temperature picture; obtaining 1-channel 128 × 128 infrared temperature data from the infrared array temperature picture; and compressing and combining the adjusted picture data with the infrared temperature data to form 4-channel 128 × 128 merged picture data Input;
the infrared array temperature picture size is 128 × 128 and the color picture RGB size is X × Y;
S152: sending the merged picture data Input obtained in step S151 to the data convolution network structure for processing to obtain the Output picture data Output;
the specific steps of sending the merged picture data Input into the data convolution network structure for processing are as follows:
S1521: setting a first convolution kernel group comprising 16 convolution kernels Filter1X of size 1 × 1, X = 1, 2, 3 … 16; each convolution kernel Filter1X has 4 channels, and the step length is set to 4;
performing group convolution operation on the merged picture data Input and 16 convolution kernels Filter1X to obtain group convolution output data M1, which specifically comprises:
The merged picture data Input and every convolution kernel Filter1X in the first convolution kernel group each have 4 channels; the four channels of the merged picture data Input are multiplied channel by channel with each convolution kernel Filter1X and summed to obtain the first intermediate data, which is substituted into the h-swish function to compute the first convolution output data M1 of 16 channels, each N × N;
S1522: setting a second convolution kernel group comprising one convolution kernel Filter2 of size 4 × 4; the convolution kernel Filter2 has 4 channels, and the step length is set to 4;
performing conventional convolution operation on the first convolution output data M1 and a convolution kernel Filter2 to obtain conventional convolution output data M2; the method specifically comprises the following steps:
The first convolution output data M1 has 16 channels, which are divided into 4 groups of 4 channels each; the 4 groups are respectively convolved with the convolution kernel Filter2 and merged to obtain the second intermediate data of 4 channels, each 32 × 32, which is substituted into the h-swish function to obtain the second convolution output data M2;
S1523: setting a third convolution kernel group comprising one convolution kernel Filter3 of size 4 × 4; the convolution kernel Filter3 has 4 channels, and the step length is set to 4;
performing a third convolution operation on the second convolution output data M2 and the convolution kernel Filter3 to obtain third convolution output data M3; the method specifically comprises the following steps:
The second convolution output data M2 is 4-channel 32 × 32 data and the convolution kernel Filter3 has 4 channels; each channel is convolved separately to obtain four 8 × 8 third intermediate maps, which are added to give 1-channel 8 × 8 data; substituting this into the h-swish function yields the third convolution output data M3;
S1524: setting a fourth convolution kernel group comprising one convolution kernel Filter4 of size 8 × 8 with 1 channel;
Performing a convolution operation on the third convolution output data M3 and the convolution kernel Filter4 to obtain the fourth intermediate data, and substituting the fourth intermediate data into the tanh function to obtain the Output picture data Output;
wherein the h-swish function is:
h-swish[x] = x · ReLU6(x + 3)/6;
ReLU6[x] = Min(Max(x, 0), 6);
x is first time intermediate data, second time intermediate data or third time intermediate data;
the tanh function is:
tanh[y] = (e^y - e^(-y))/(e^y + e^(-y));
y is fourth intermediate data;
s153: setting a judgment threshold, and judging whether an animal exists or not according to the value of the Output picture data Output;
setting the judgment range as (-1, 1): the closer the value of the Output picture data Output is to 1, the more certain it is that the picture contains an animal; the closer the value of Output is to -1, the more certain it is that the picture contains no animal.
3. The method for the adaptive low-power consumption wild animal snapshot device based on the Internet of things as claimed in claim 1, wherein the step for wirelessly transmitting the pictures comprises a step for wirelessly transmitting the pictures in a short distance and a step for wirelessly transmitting the pictures in a long distance;
the steps for transmitting the pictures in the close range wirelessly are as follows:
S21: the main controller MCU (1) is in a low power consumption state and monitors the transmission communication module (12) in real time for a short-range wireless transmission pairing request;
s22: when the main controller MCU (1) acquires a close-range wireless transmission pairing request of any intelligent terminal, the main controller MCU exits from a low-power consumption state and is in close-range wireless transmission pairing connection with the corresponding intelligent terminal;
S23: the main controller MCU (1) obtains the data transmission request of the intelligent terminal and predicts the data transmission time;
S24: the main controller MCU (1) obtains the current data transmission endurance time of the self-powered module (13);
s25: if the data transmission endurance time is greater than the predicted data transmission time, performing data transmission; otherwise, rejecting the data transmission request and entering a low power consumption state;
the steps for transmitting the picture remotely and wirelessly are as follows:
Preprocessing: setting the long-distance wireless picture transmission period;
S31: the main controller MCU (1) is in a low power consumption state and periodically obtains the current mobile signal coverage state;
s32: if the mobile signal is in the coverage state, go to step S33; otherwise, returning to the step S31;
s33: the main controller MCU (1) exits from a low power consumption state and transmits shooting data to the cloud platform through the transmission communication module (12); after the transmission is completed, the process returns to step S31.
4. The method of the adaptive low-power-consumption wild animal snapshot device based on the Internet of things as claimed in claim 1, wherein the steps for tip-over and theft monitoring are as follows:
S41: the main controller MCU (1) is in the low-power-consumption state and acquires the tip-over detection signal of the tip-over sensor (10) in real time; after acquiring a tip-over detection signal, it exits the low-power-consumption state and proceeds to step S42;
S42: the main controller MCU (1) starts the mobile communication unit in the transmission communication module (12) and the vibration sensor (11);
S43: the main controller MCU (1) sends a tip-over alarm signal to the cloud platform through the mobile communication unit, while acquiring the vibration signal detected by the vibration sensor (11);
S44: the main controller MCU (1) compares a preset theft vibration signal with the detected vibration signal; if the similarity is greater than a preset theft similarity threshold, go to step S45; otherwise, enter the low-power-consumption state and return to step S41;
S45: the main controller MCU (1) starts the positioning module (9) and sends the current position to the cloud platform.
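Step S44 compares the detected vibration signal against a preset theft signature, but the claim does not specify the similarity measure. The sketch below assumes normalized cross-correlation over two equal-length traces; the function names and the default threshold are illustrative only:

```python
import math

def similarity(sig_a, sig_b):
    """Normalized cross-correlation of two equal-length vibration
    traces: 1.0 for identical shapes, 0.0 for orthogonal ones."""
    dot = sum(a * b for a, b in zip(sig_a, sig_b))
    norm_a = math.sqrt(sum(a * a for a in sig_a))
    norm_b = math.sqrt(sum(b * b for b in sig_b))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def theft_detected(preset_signature, detected, threshold=0.8):
    """Step S44: flag theft when the similarity between the preset
    theft vibration signature and the detected signal exceeds the
    preset theft similarity threshold (0.8 here is an assumption)."""
    return similarity(preset_signature, detected) > threshold
```

In the claimed flow, a `True` result would trigger step S45 (start the positioning module and report the position); `False` returns the MCU to the low-power state.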
5. The method of the adaptive low-power-consumption wild animal snapshot device based on the Internet of things as claimed in claim 1, wherein the transmission communication module (12) comprises a mobile communication unit and a short-distance wireless transmission unit, the short-distance wireless transmission unit being a Bluetooth transmission unit or a WiFi transmission unit;
the self-powered module (13) comprises a solar panel (13b); the solar panel (13b) is connected with an energy collection and conversion circuit (13a), and the energy collection and conversion circuit (13a) supplies power to the main controller MCU (1) through a three-stage energy storage device.
6. The method of the adaptive low-power consumption wild animal snapshot device based on the internet of things as claimed in claim 1, wherein: interior environmental sensor (K) includes current-voltage detection module (K1), temperature detection module (K2), humidity detection module (K3), atmospheric pressure detection module (K4), current-voltage detection module (K1), temperature detection module (K2), humidity detection module (K3), atmospheric pressure detection module (K4) all with main control unit MCU (1) is connected.
CN202011090138.7A 2020-10-13 2020-10-13 Self-adaptive low-power-consumption wild animal snapshot device and method based on Internet of things Active CN112217979B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011090138.7A CN112217979B (en) 2020-10-13 2020-10-13 Self-adaptive low-power-consumption wild animal snapshot device and method based on Internet of things


Publications (2)

Publication Number Publication Date
CN112217979A CN112217979A (en) 2021-01-12
CN112217979B true CN112217979B (en) 2022-01-25

Family

ID=74054573

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011090138.7A Active CN112217979B (en) 2020-10-13 2020-10-13 Self-adaptive low-power-consumption wild animal snapshot device and method based on Internet of things

Country Status (1)

Country Link
CN (1) CN112217979B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112907868A (en) * 2021-01-29 2021-06-04 烟台艾睿光电科技有限公司 Mobile terminal device, outdoor early warning method, system, device and storage medium
CN116366930A (en) * 2023-02-28 2023-06-30 生态环境部南京环境科学研究所 Wild animal dynamic monitoring feedback system based on infrared camera

Citations (1)

Publication number Priority date Publication date Assignee Title
CN102360420A (en) * 2011-10-10 2012-02-22 星越实业(香港)有限公司 Method and system for identifying characteristic face in dual-dynamic detection manner

Family Cites Families (9)

Publication number Priority date Publication date Assignee Title
CN103546728B (en) * 2013-11-14 2017-03-15 北京林业大学 A kind of wild animal field monitoring device
CN204102272U (en) * 2014-09-19 2015-01-14 山东康威通信技术股份有限公司 A kind of well cover antitheft device adopting three dimension acceleration sensor
CN106534786A (en) * 2016-11-22 2017-03-22 西南林业大学 Wild animal data transmission system based on image identification
CN108986379B (en) * 2018-08-15 2020-09-08 重庆英卡电子有限公司 Flame detector with infrared photographing function and control method thereof
CN108961647B (en) * 2018-08-15 2020-09-08 重庆英卡电子有限公司 Photographing type flame detector and control method thereof
CN110121028A (en) * 2019-04-29 2019-08-13 武汉理工大学 A kind of energy-saving field camera system
CN110225300A (en) * 2019-05-06 2019-09-10 南京理工大学 For the low power image Transmission system and method in wireless sensor network
CN110415267A (en) * 2019-08-15 2019-11-05 利卓创新(北京)科技有限公司 A kind of online thermal infrared target identification device of low-power consumption and working method
CN211127924U (en) * 2019-12-11 2020-07-28 武汉迈威通信股份有限公司 Infrared camera device with animal recognition function



Similar Documents

Publication Publication Date Title
CN110321853B (en) Distributed cable external-damage-prevention system based on video intelligent detection
CN112422783B (en) Unmanned aerial vehicle intelligent patrol system based on parking apron cluster
CN112217979B (en) Self-adaptive low-power-consumption wild animal snapshot device and method based on Internet of things
CN110097787A (en) A kind of ship collision warning monitoring system and method based on monitoring navigation light
CN206573982U (en) Electric power multifunctional intellectual PDA loggings
CN106488184A (en) Networkmonitor
CN110031904B (en) Indoor personnel presence detection system based on low-resolution infrared thermal imaging
WO2019076951A1 (en) Intrusion detection methods and devices
US20100296703A1 (en) Method and device for detecting and classifying moving targets
CN213152184U (en) Animal identification type field monitoring system based on convolutional neural network
CN103826063A (en) Panoramic 360-degree intelligent monitoring device
CN217335718U (en) Hunting camera capable of identifying animals
Khedkar Wireless Intruder Detection System for Remote Locations
CN110361050A (en) A kind of hydrographic information forecasting system based on wireless sensor network
CN113747064B (en) Automatic underwater organism detection and shooting system and method thereof
CN202887397U (en) Geological environment disaster video monitor
US11956554B2 (en) Image and video analysis with a low power, low bandwidth camera
CN205490956U (en) Wireless super long range positioning alarm system that shoots
CN112529973B (en) Method for identifying field self-powered animal snap-shot pictures
CN114445988A (en) Unattended electronic sentinel capable of realizing intelligent control and working method thereof
KR102375380B1 (en) Apparatus for monitoring animals with low power and method thereof
CN202127477U (en) Wireless mechanic-operating all-day automatic video cassette recorder
CN106408831A (en) Intelligent household anti-theft system based on infrared detection
Magno et al. Multi-modal video surveillance aided by pyroelectric infrared sensors
CN111478441A (en) Power transmission line image monitoring equipment with front end analysis and analysis method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant