WO2018081610A1 - System and methods for detecting bats and avian carcasses - Google Patents


Info

Publication number
WO2018081610A1
Authority
WO
WIPO (PCT)
Prior art keywords
carcass
animal
image
arrival
animal carcass
Prior art date
Application number
PCT/US2017/058828
Other languages
French (fr)
Inventor
Daniel J. CAVANAGH
Brogan P. MORTON
Benjamin J. CHOMYN
Thomas Joseph Nostrand
Original Assignee
Nrg Systems, Inc.
Priority date
Filing date
Publication date
Application filed by Nrg Systems, Inc. filed Critical Nrg Systems, Inc.
Publication of WO2018081610A1 publication Critical patent/WO2018081610A1/en

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 — Scenes; scene-specific elements
    • G06V 20/10 — Terrestrial scenes
    • G06V 20/13 — Satellite images
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 — Pattern recognition
    • G06F 18/20 — Analysing
    • G06F 18/22 — Matching criteria, e.g. proximity measures

Definitions

  • the present disclosure generally relates to a machine-vision approach to identifying and classifying animal carcasses amongst other objects in a field of view, and in particular, to a system and method for detecting arrival of animal carcasses on the ground around target structures such as wind turbines.
  • FIG. 1 shows an example animal carcass detection system consistent with embodiments of the present disclosure
  • FIG. 2 shows the animal carcass detection system of FIG. 1 mounted to a wind turbine tower, in accordance with an embodiment of the present disclosure
  • FIG. 3 shows a plurality of animal carcass detection systems consistent with the present disclosure mounted to a wind turbine
  • FIG. 4 shows an example process flow for an animal carcass detection system consistent with the present disclosure
  • FIG. 5 shows an example image frame having a field of view (FOV) of an area around a target structure, in accordance with an embodiment of the present disclosure
  • FIG. 6 shows an example image frame having a field of view (FOV) of an area around a target structure, in accordance with an embodiment of the present disclosure
  • FIG. 7 shows another example image frame having a field of view (FOV) of an area around a target structure with potential animal carcasses, in accordance with an embodiment of the present disclosure
  • FIG. 8 shows an example user interface (UI) in accordance with an embodiment of the present disclosure.
  • FIG. 9 shows an example classification process in accordance with an embodiment of the present disclosure.
  • the present disclosure is generally directed to an animal carcass detection system that utilizes two-dimensional image data to detect the arrival of animal carcasses on the ground adjacent a target structure, e.g., a wind turbine.
  • the animal carcass detection system may also be referred to as a carcass detection system for brevity.
  • the carcass detection system includes one or more camera devices with a field of view (FOV) aligned with a predetermined section of ground.
  • the carcass detection system sends one or more alert messages, e.g. via a network such as the Internet, to a remote computing system.
  • the alert message, which may also be referred to herein as a carcass event message, can include a geographical location for each detected animal carcass, a time of arrival, and other metadata associated with the event, e.g., a confidence score, a wind turbine identifier (ID), an animal species identifier, and so on.
  • the alert messages can be received directly at one or more designated mobile computing devices, e.g., smart phones, tablets, laptop computers, or may arrive indirectly by way of a cloud server or other server that communicates with N number of carcass detection systems and acts as an intermediary. Personnel such as biologists, researchers, etc., may receive the alerts and travel to the location of the detected animal carcass to collect the same.
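As a concrete illustration of the alert described above, a carcass event message could be serialized as JSON before transmission to the remote computing system. The field names and values below are hypothetical; the disclosure does not fix a wire format:

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class CarcassEvent:
    """Illustrative carcass event payload; all field names are assumptions."""
    turbine_id: str    # wind turbine identifier (ID)
    latitude: float    # geographical location of the detected carcass
    longitude: float
    arrival_time: str  # time of arrival, ISO-8601
    species: str       # animal species identifier
    confidence: int    # confidence score, 0-100

# Build and serialize an example event for transmission
event = CarcassEvent("WT-07", 44.4759, -73.2121,
                     "2017-10-27T03:14:00Z", "bat", 91)
payload = json.dumps(asdict(event))
```

A cloud server acting as intermediary could then relay such a payload to N designated mobile devices without re-parsing image data.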
  • an animal carcass detection system may detect a scavenger event that results in a previously detected animal carcass being removed. Scavengers such as foxes, hawks, weasels, and bears may find and eat an animal carcass prior to personnel arriving to perform collection.
  • the carcass detection system may therefore detect an intervening scavenger event and send an alert message based on the same.
  • the alert message may indicate the type of scavenger (e.g., based on object recognition), time of scavenging, and may also provide an image showing the location of the previously detected animal carcass before and/or after the intervening scavenger event.
  • the alert message may further provide one or more image frames with the scavenger animal in view. Note that in some cases humans may also trigger such events, or the animal carcass detection system may be configured to filter and ignore events related to human activity and other false-positives.
  • the system and methods for animal carcass detection variously disclosed herein may provide an automated approach for detecting the arrival of animal carcasses that saves labor costs, costs associated with specialized search training, and can further eliminate inaccuracies due to human error and intervening scavengers who operate 24x7 in search of easy meals.
  • FIG. 1 shows an example animal carcass detection system 1 consistent with embodiments of the present disclosure.
  • the carcass detection system 1 is shown in a highly simplified form and other embodiments are within the scope of this disclosure.
  • the carcass detection system 1 includes a controller 2, a memory 3, a camera device (or devices) 4, a transmit (TX) circuit 5, an antenna 6, and a housing 8.
  • although the carcass detection system 1 is depicted as a single system disposed within a single housing, e.g., housing 8, this disclosure is not necessarily limited in this regard.
  • a camera may capture image frames and deliver the same via a network, e.g., the Internet, to a remote computer system, such as a computer server, workstation, or mobile computing device, which may then perform carcass detection processes as variously disclosed herein.
  • the controller 2 comprises at least one processing device/circuit such as, for example, a digital signal processor (DSP), a field-programmable gate array (FPGA), a Reduced Instruction Set Computer (RISC) processor, an x86 instruction set processor, a microcontroller, or an application-specific integrated circuit (ASIC).
  • the controller 2 may comprise a single chip, or multiple separate chips/circuitry.
  • the controller 2 may implement an animal carcass identification stage/process using software (e.g., C or C++ executing on the controller/processor 2), hardware (e.g., hardcoded gate level logic or purpose-built silicon) or firmware (e.g., embedded routines executing on a microcontroller), or any combination thereof.
  • the controller 2 may be configured to carry out the processes 90 of FIG. 9.
  • the memory 3 may comprise volatile and/or non-volatile memory devices.
  • the memory 3 may include a relational database, flat file, or other data storage area for maintaining a carcass identification model that may be used when attempting to perform carcass recognition on image data.
  • the memory 3 may further include a scavenger detection model to perform scavenger recognition on image data.
  • the models disclosed variously herein may use known object detection models and may be trained or otherwise updated over time to improve detection accuracy.
  • the camera device 4 may comprise one or more image sensors.
  • the one or more image sensors may output color image data (RGB), color and depth image data (RGBD camera), depth sensor information, stereo camera information (L/R RGB), YUV, short wave infrared (SWIR), and/or midwave infrared (MWIR).
  • the one or more image sensors may also output monochrome or grey-scale image data, e.g., image data corresponding to visible or ultraviolet (UV), SWIR, and/or MWIR bands.
  • the camera device 4 may include a first sensor being an infrared detector (e.g., to achieve so-called "night-vision") and a second sensor being a color-image sensor (e.g., RGB, YUV). The camera device 4 may also use active illumination (e.g., through a separate or integrated lighting device) to enhance detection of carcasses at night or during other low-light conditions.
  • the camera device 4 may be configured to capture image information 7 from a variety of known image sensor types.
  • the camera device 4 may include associated conversion circuitry to convert, for instance, analog image data 7 to digital image data.
  • the TX circuit 5 may comprise a network interface circuit (NIC) for communication via a network, e.g., the Internet.
  • for wireless communication, the antenna device 6 may be utilized.
  • the carcass detection system 1 may be configured for close range or long range communication between the carcass detection system 1 and remote computing devices.
  • Close range communication is used herein to refer to systems and methods for sending/receiving data signals between devices that are relatively close to one another.
  • Close range communication includes, for example, communication between wireless devices using a BLUETOOTH™ network, a personal area network (PAN), near field communication, ZigBee networks, millimeter wave communication, ultra-high frequency (UHF) communication, combinations thereof, and the like.
  • Close range communication may also include so-called “wired” connections such as a USB or cross-over Ethernet cable. Close range communication may therefore be understood as enabling direct communication between devices, without the need for intervening hardware/systems such as routers, cell towers, internet service providers, and the like.
  • Long range communication refers to systems and methods for sending/receiving data signals between devices that are a significant distance away from one another.
  • Long range communication includes, for example, communication between devices using WiFi, a wide area network (WAN) (including but not limited to a cell phone network, the Internet, a global positioning system (GPS), a whitespace network such as an IEEE 802.22 WRAN), combinations thereof, and the like.
  • Long range communication may also include utilizing networks with "wired" connections such as optical fibers, copper lines, and data over power lines, e.g., Broadband over Power Line (BPL) technologies.
  • Long range communication may therefore be understood as enabling communication between devices through the use of intervening hardware/systems such as routers, cell towers, whitespace towers, internet service providers, one or more optical fibers, combinations thereof, and the like.
  • the housing 8 may be ruggedized and sealed to prevent ingress of contaminants such as dust and moisture.
  • the housing 8 may comport with standards for ingress protection (IP) and have an IP67 rating for the housing 8 and associated cables and connectors (not shown) as defined within ANSI/IEC 60529 Ed. 2.1b, although other IPXY ratings are within the scope of this disclosure with the X denoting protection from solids and Y denoting protection from liquids.
  • the housing 8 comprises a plastic, polycarbonate, or any other suitably rigid material.
  • FIG. 2 shows an example embodiment of the carcass detection system 1 of FIG. 1 configured to detect animal carcasses about an area adjacent a wind turbine.
  • the carcass detection system 1 may be mounted to a side of a wind turbine tower 9.
  • the housing 8 of the carcass detection system 1 may be ruggedized, as discussed above, to allow the carcass detection system 1 to operate for extended periods in outdoor environments.
  • Target structures may include, but are not limited to, wind turbines, office buildings, towers, or any other structure that may cause animal fatalities.
  • the carcass detection system 1 may not necessarily be mounted to a target structure, e.g., the turbine tower 9 or other structure, although being mounted to a wind turbine tower may be preferable as it provides a clear line of sight to an area where animal carcasses may land after strikes with a turbine blade, for instance.
  • the carcass detection system 1 may be mounted to a structure, e.g., a tower, building, a temporary structure (e.g., a tripod), or a vehicle adjacent a target structure, and the associated camera device 4 may then be pointed towards the target structure to be monitored, and more particularly the area in which flying animals are most likely to fall after striking the target structure.
  • the carcass detection system 1 may also be implemented within or otherwise utilize autonomous robots such as drones.
  • a camera drone having one or more on-board cameras may be used to monitor a given area.
  • the camera drone may monitor multiple areas through yaw/pitch adjustments, which may advantageously allow a single camera drone to monitor multiple areas without necessarily having to have multiple fixed-position camera devices.
  • the camera device 4 may be aligned/angled relative to the ground 11 to ensure that the field of view 10 for the camera device 4 covers a predetermined area about the turbine tower 9.
  • the camera device 4 may capture image data 7 at a predetermined rate.
  • the predetermined rate may be between 1 frame per second (fps) and 60 fps, although other frame rates are within the scope of this disclosure.
  • the predetermined rate may be one frame every 3, 5, 15 or N seconds, depending on a desired configuration. This reduced framerate may limit the amount of power necessary to perform carcass detection.
  • the captured image frames may then be processed to generate a background image, or reference image.
  • a portion of the captured images may be masked, for instance, to remove portions of an area where monitoring is not desired.
  • Subsequent captured image frames may be compared to the generated reference image to detect if a "new" object has entered within the FOV.
  • Filters/models may be utilized to eliminate changes due to environmental noise, e.g., moving vegetation 14, objects exceeding a predefined threshold size (e.g., too large to be a flying animal carcass), changes in lighting, and so on.
  • Those objects that appear in subsequent frames relative to the background and remain for a predefined period of time, e.g., appearing in X number of frames in the same location between frames, will be flagged as potential carcass detections.
  • objects that arrive and remain stationary for at least one minute may be flagged as potential carcasses. If the frame rate is set to two frames per minute, this would result in two frames being sufficient to flag an object. Note the particular predefined period of time/number of frames prior to flagging a potential object as a carcass may vary depending on a desired application.
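The persistence rule above (flag an object only after it has held the same location for a minimum number of consecutive frames) can be sketched as follows. The representation of a detection as a grid-cell coordinate is an assumption for illustration:

```python
def flag_persistent(detections, min_frames=2):
    """Flag positions that appear in at least `min_frames` consecutive frames.

    detections: list of per-frame sets of object positions (e.g., grid cells).
    A position's streak resets to zero whenever it is absent from a frame,
    so transient changes (moving vegetation, passing animals) are filtered out.
    """
    flagged = set()
    streak = {}
    for frame in detections:
        # Keep streaks only for positions present in the current frame
        streak = {pos: streak.get(pos, 0) + 1 for pos in frame}
        flagged.update(pos for pos, n in streak.items() if n >= min_frames)
    return flagged

# Example: (3, 4) persists for three frames; (7, 7) appears once and is ignored
frames = [{(3, 4), (7, 7)}, {(3, 4)}, {(3, 4)}]
result = flag_persistent(frames, min_frames=2)
```

With the two-frames-per-minute example from the disclosure, `min_frames=2` corresponds to the one-minute stationarity requirement.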
  • Pixels associated with the potential carcass detections may then be analyzed via a classification process.
  • the classification process may use a carcass recognition model to determine an object type.
  • an alert may be sent to the mobile computing device of a user, e.g., a smart phone, laptop computer, tablet.
  • the alert may include metadata comprising at least one of an animal carcass location, time of arrival, classification information (e.g., category of animal, species of animal), confidence score, and/or other associated metadata such as a structure ID.
  • both positive and negative classifications may be provided as alerts to users.
  • Carcass detection systems disclosed herein may receive feedback/update messages from user devices to train a machine learning algorithm and/or various detection models in order to recognize new objects, e.g., to allow for recognition of previously unknown species of flying animals, and scavenger animals, and/or to improve the recognition accuracy of known object types.
  • a plurality of carcass detection systems 15-1 to 15-4 may be mounted to a wind turbine 12, with the plurality of carcass detection systems monitoring 360 degrees around the wind turbine 12.
  • each of the plurality of carcass detection systems 15-1 to 15-4 may have respective fields of view 10-1 to 10-4 and monitor about 90 degrees of non-overlapping region/area about the wind turbine 12.
  • each of the fields of view 10-1 to 10-4 may overlap by a predetermined amount, which may advantageously reduce dead zones.
  • although each of the carcass detection systems 15-1 to 15-4 is illustrated as an independent system, this disclosure is not limited in this regard.
  • the carcass detection systems 15-1 to 15-4 may communicate with each other and collectively form a distributed detection system for a given target structure.
  • two or more carcass identification systems may be aligned with and overlap the same, or at least partially the same, FOV.
  • the two or more carcass identification systems may be mounted to the same or different structures.
  • frames from two or more camera devices may be utilized in tandem to increase detection accuracy.
  • two or more carcass classification systems may work in tandem to capture different perspectives of a potential animal carcass and may synchronize to provide a single alert based on the carcass classification system with the highest confidence classification.
  • FIG. 4 shows an example process flow for an animal carcass detection system 1A consistent with an embodiment of the present disclosure.
  • the camera device 4 captures a plurality of time-synchronized image frames from a current field of view (FOV).
  • the plurality of time-synchronized image frames may comprise two-dimensional color image data, black and white image data, thermal image data, and/or stereo image data.
  • the captured time-synchronized image frames may be output by the camera device 4 as encoded image frames, e.g., JPEG, GIF, PNG, BMP, or any other suitable compressed or uncompressed format.
  • the carcass identifier circuit 17 receives the encoded image frames from the camera device 4 as an input.
  • the carcass identifier circuit 17 may be implemented within the controller 2 of FIG. 1 or in any other combination of hardware and/or software.
  • the carcass identifier circuit 17 may then perform image preprocessing to filter and/or mask out regions from image data.
  • the carcass identifier circuit 17 may then generate a background (or reference) image based on one or more captured image frames.
  • the generated background image and image frames captured subsequent to the generation of the background image may then be utilized by a carcass detection process, such as the carcass detection process 90 of FIG. 9, to perform object detection.
  • the carcass identifier circuit 17 may utilize any known object recognition technique for identifying objects within the encoded images.
  • FIG. 5 shows an example image frame 50 that may be captured by the camera device 4 and analyzed by the carcass identifier circuit 17.
  • the example image frame 50 may also represent the generated background image discussed above. Note the image frame 50 has been filtered based on a plurality of previously captured/historical images in order to remove a majority of features which remain consistent between image frames and that would otherwise represent noise, e.g., such as vegetation, fences, structures, and so on.
  • the FOV of the example image frame 50 may correspond with the field of view 10 shown in the embodiment of FIG. 2.
  • the FOV includes markers 41-1 to 41-3.
  • the markers 41-1 to 41-3 may be physical markers disposed uniformly at predefined intervals from a target structure, e.g., the wind turbine 12 (see FIG. 3). Note, when viewed in a top-down fashion, the markers 41-1 to 41-3 may extend from a target structure along, for instance, each cardinal direction in a uniform manner from the target structure (see FIG. 3). However, this disclosure is not necessarily limited in this regard and distance markers may be disposed in other configurations. Also, markers need not be permanently placed and may be removed after initial calibration.
  • each of the markers 41-1 to 41-3 may be spaced at a predefined interval of 5 meters (m) from each other.
  • Each of the markers 41-1 to 41-3 may be color coded or otherwise marked (e.g., with a machine-readable fiducial) to enable the carcass identifier circuit 17 to determine a relative distance.
  • the carcass identifier circuit 17 may then extrapolate the local position of objects/carcasses identified within the FOV relative to the target structure based on the markers 41-1 to 41-3.
  • the result of the extrapolation may be a local XY coordinate wherein 0,0 represents a center of the target structure, for instance, although this disclosure is not necessarily limited to a Cartesian coordinate system.
  • Changes in position of the markers 41-1 to 41-3 may be detected including removal of the markers 41-1 to 41-3, and an alert may be sent in response to the removal/movement of markers.
  • the carcass identifier circuit 17 may utilize the local XY position to calculate a geolocation identifier for potential animal carcasses. For instance, the known GPS location of a target structure may be offset/augmented based on the XY local coordinates of an object/carcass identified within the FOV to provide a GPS location for the potential animal carcass. The derived GPS of each object/carcass may then be provided as metadata in one or more carcass event messages, as discussed below.
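The offset/augment step above — deriving a carcass GPS fix from the target structure's known GPS location plus the local XY coordinate — can be sketched with a small-offset approximation. The conversion constants are standard geodesy, not values from the disclosure, and the approximation is only adequate over the short distances involved here:

```python
import math

def offset_gps(base_lat, base_lon, dx_m, dy_m):
    """Offset a known GPS fix (degrees) by local east (dx) / north (dy) metres.

    Uses the flat-earth approximation: one degree of latitude spans roughly
    111,320 m, and a degree of longitude shrinks by cos(latitude). Adequate
    over the ~tens-of-metres search radius around a turbine tower.
    """
    metres_per_deg_lat = 111_320.0
    dlat = dy_m / metres_per_deg_lat
    dlon = dx_m / (metres_per_deg_lat * math.cos(math.radians(base_lat)))
    return base_lat + dlat, base_lon + dlon

# Example: a carcass 12 m east and 30 m north of a turbine at (44.476, -73.212)
lat, lon = offset_gps(44.476, -73.212, 12.0, 30.0)
```

The resulting coordinate pair would then be carried as metadata in the carcass event message.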
  • FIG. 6 shows an example image frame 60 that may be captured by the camera device 4 and analyzed by the carcass identifier circuit 17. As shown in the image frame 60, a plurality of potential animal carcasses were identified by the carcass identifier circuit 17, namely potential carcasses 61-1 and 61-2.
  • potential carcasses 61-1 and 61-2 may be detected based on comparing pixels from the background/reference image frame, e.g., image frame 50 of FIG. 5, with pixels from the image frame 60.
  • each pixel value from a first image may be subtracted from a corresponding pixel from a second image to derive a delta D.
  • the absolute value of the delta D may then be utilized to detect a change based on the same exceeding a predefined threshold (e.g., a 10% change, or other value depending on a desired configuration). Pixels detected as being changed may then be set/flagged as pixels of interest.
  • the carcass identifier circuit 17 may then group the pixels of interest with other adjacent pixels, including other pixels of interest, to form MxN arrays of pixels or other suitable groupings. For instance, the pixels associated with potential carcasses 61-1 and 61-2, respectively, may then be passed into a classification process to determine the presence of animal carcasses.
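The pixel-subtraction and grouping steps above can be sketched as follows. Images are shown as nested lists of 8-bit greyscale values for simplicity; a real implementation would operate on the camera's native encoded frames:

```python
def changed_pixels(reference, frame, threshold=0.10):
    """Flag pixels of interest: |frame - reference| exceeds `threshold`
    as a fraction of full scale (255 for 8-bit greyscale)."""
    full_scale = 255.0
    return {
        (r, c)
        for r, row in enumerate(frame)
        for c, value in enumerate(row)
        if abs(value - reference[r][c]) / full_scale > threshold
    }

def bounding_box(pixels):
    """Group a set of (row, col) pixels of interest into one rectangular
    region (min_row, min_col, max_row, max_col) for the classifier."""
    rows = [r for r, _ in pixels]
    cols = [c for _, c in pixels]
    return (min(rows), min(cols), max(rows), max(cols))

# Example: one pixel changes strongly (candidate), one only slightly (noise)
reference = [[0, 0], [0, 0]]
frame = [[0, 200], [0, 10]]
interest = changed_pixels(reference, frame)
```

The bounding region would then delimit the MxN pixel array passed into the classification process.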
  • the classification process may utilize a machine learning algorithm in combination with various object identification models to perform carcass recognition.
  • Carcass recognition may include determining whether the object is an animal carcass, e.g., a simple binary yes/no, and/or may include recognizing the specific species of the animal carcass.
  • Environmental characteristics such as time of year (e.g., the current season), time of day, geographic location, weather, and so on, may be used as "hints" to determine the likelihood of a particular species of animal. For instance, if the time of day is presently night, the likelihood of a bat strike is higher than during daylight hours. Likewise, if the season is spring or fall, certain species of migratory birds may be anticipated.
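Such environmental "hints" amount to a prior over animal categories that the classifier can combine with its image-based score. A toy sketch follows; the weights are purely illustrative and not taken from the disclosure:

```python
def species_prior(hour, season):
    """Toy prior over animal categories from time of day and season.

    Night hours raise the bat weight (bat strikes are more likely at night);
    spring/fall migration raises the bird weight. Weights are illustrative.
    """
    prior = {"bat": 0.5, "bird": 0.5}
    if hour < 6 or hour >= 20:          # night-time capture
        prior = {"bat": 0.8, "bird": 0.2}
    if season in ("spring", "fall"):    # migratory season
        prior["bird"] = min(1.0, prior["bird"] + 0.15)
        total = prior["bat"] + prior["bird"]
        prior = {k: v / total for k, v in prior.items()}  # renormalize
    return prior

night_summer = species_prior(23, "summer")
noon_spring = species_prior(12, "spring")
```

In practice such a prior would be multiplied into the classifier's per-category confidence before the final classification is reported.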
  • the classification process may be trained to distinguish objects that may trigger false-positives, such as humans, moving vegetation, and so on.
  • training may occur in the factory, e.g., be pre-loaded, and/or may be provided over time via user feedback/update messages.
  • the carcass identifier circuit 17 outputs a carcass event (or alert) message in response to the carcass identifier circuit 17 positively identifying/classifying one or more animal carcasses about a target structure.
  • the carcass identifier circuit may also output carcass event messages even when failing to classify an object. This may advantageously allow for additional training data to be provided by a user to recognize new object types and/or improve recognition of known objects.
  • each carcass event message may simply indicate the geographical position of an animal carcass relative to the target structure.
  • the carcass event message may comprise a Global Positioning System (GPS) location of a potential animal carcass.
  • each carcass event message may include image data from the encoded image frames to aid in identifying the position of the animal carcass and/or to allow for visual verification/validation by a user.
  • false positives may be easily ignored by end users who are trained to distinguish carcasses of flying animals from other objects.
  • the carcass identification system 1 may receive update messages in such cases of false positives, with the update messages being used to train the classification process in order to minimize or otherwise reduce future false positives.
  • carcass identifier circuit 17 may augment image data to include a visual indicator to allow a user to easily identify an animal carcass detected by the carcass identifier circuit 17.
  • the visual indicator may comprise a regular shape (e.g., a rectangle, square, circle, and so on) or irregular shape to draw attention to the position of an animal carcass within an image (see FIG. 6).
  • the visual indicator may comprise an arrow or other symbol. Therefore, the augmented image data may be included in a carcass event message. If multiple images of an animal carcass are available, a plurality of images may be provided within the carcass event message with or without augmentation.
  • the carcass event message may further include metadata associated with each animal carcass detected by the carcass identifier circuit 17.
  • the metadata may comprise, for example, an animal carcass type, a confidence score, an arrival timestamp, and/or a structure identifier (e.g., a wind turbine ID).
  • the animal carcass type may comprise a value corresponding to a category of animal, e.g., avian, bat, etc.
  • the animal carcass type may further include a value corresponding to the specific species of animal detected by the carcass identifier circuit 17.
  • the confidence score may range from an integer value of 0 to 100, with a score above 80 suggesting high confidence in the classification process's findings. Other scoring methodologies/values may be utilized, and the provided example is not intended to be limiting.
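The 0-to-100 scoring scheme above can be made concrete with a simple gating function. The intermediate threshold of 50 is an assumption for illustration; only the above-80 "high confidence" boundary comes from the text:

```python
def confidence_label(score):
    """Map a 0-100 classification confidence score to a coarse label.

    Scores above 80 indicate high confidence per the disclosure; the
    50 boundary for 'medium' is an illustrative assumption.
    """
    if not 0 <= score <= 100:
        raise ValueError("score must be an integer in [0, 100]")
    if score > 80:
        return "high"
    if score > 50:
        return "medium"
    return "low"
```

A receiving UI could use such a label to decide whether an alert warrants immediate dispatch or manual review first.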
  • the carcass event message may further indicate an intervening scavenger event.
  • an animal carcass may be identified/classified by the carcass identifier circuit 17, which may cause a first carcass event message to be output by the carcass identifier circuit 17.
  • a scavenger such as a fox, bear, weasel, cat, bird of prey, or other animal may consume or otherwise remove the previously detected animal carcass.
  • the carcass identifier circuit 17 may output a carcass event message indicating that an intervening scavenger event occurred and that an animal carcass has been removed.
  • the carcass event message may include metadata, as discussed above.
  • a scavenger type value may be included along with a confidence score.
  • the scavenger type value may correspond with the specific type of scavenger (e.g., bird, fox, bear, cat, weasel, and so on) and/or an identifier of a specific species of scavenger.
  • the carcass event message may further include one or more captured image frames, or at least a portion thereof, with the potential scavenger depicted therein.
  • the image(s) provided within the carcass event message may also include a visual identifier, as discussed, to indicate position of the scavenger within the image(s).
  • FIG. 7 shows an example image frame 70 that may be captured by the camera device 4 and analyzed by the carcass identifier circuit 17.
  • the carcass identifier circuit 17 may determine that one or more previously identified animal carcasses, e.g., animal carcasses 61-1 and 61-2, are no longer disposed at their previously known locations, e.g., based on a detected change indicating the carcasses 61-1 and 61-2 were removed.
  • the carcass identifier circuit 17 may output a carcass event message that indicates the occurrence of a scavenger event, as discussed above.
  • a scavenger may be within one or more captured image frames.
  • Detection of scavenger 71 may be performed by pixel analysis using thresholding, as variously described herein, and/or may be performed via known object recognition approaches.
  • the classification process may utilize a scavenger classification model to determine the category of the scavenger and/or a specific species of the scavenger.
  • the classification process may also utilize user update/feedback messages to train or otherwise adjust the scavenger classification model to increase detection accuracy.
  • the carcass identifier circuit 17 may include image frame 70, or at least a portion thereof, within the carcass event message that indicates the occurrence of a scavenger event.
  • the TX circuit 5 receives the carcass event message from the carcass identifier circuit 17 as an input.
  • the TX circuit 5 then outputs the carcass event message to a remote computing system via a network, e.g., the Internet.
  • the remote computing system may comprise, for example, a cloud computing server, or any other suitable computer device configured to log data events and to provide event notifications to users.
  • the remote computing system may send push alerts to one or more mobile computing devices such as smart phones, tablets, etc., with the push alerts including information associated with the carcass event message.
  • the mobile computing devices may be used by researchers, scientists, technicians, or other users who may have an interest in animal carcasses detected around a target structure.
  • FIG. 8 shows an example user interface (UI) that may be utilized to visualize carcass event messages received from the carcass detection system 1.
  • the example UI may be implemented on a smartphone as a so-called "app", or as an application on a desktop computer, laptop, or other mobile computing device.
  • the example UI shows a top-down view of an area surrounding a target structure 12-1.
  • a plurality of adjacent structures 12-2 to 12-5 may be represented in instances where, for example, multiple target structures are being monitored for animal carcasses, or simply to provide additional points of reference/landmarks.
  • the current position of the mobile computing device relative to the target structure 12-1 may be visualized by a symbol, such as a person icon 81.
  • the location of an animal carcass may be visualized by a push-pin or other symbol.
  • the location of the animal carcass may also be provided by a geolocation, such as GPS coordinates 86.
  • the example UI may function similarly to a GPS application that seeks to guide a user to a desired destination.
  • the destination is the approximate location of one or more potential animal carcasses detected by the animal carcass detection system 1.
  • a user may then dispose of the animal carcass and log any necessary data pertaining to the animal carcass for research/accounting purposes. This logging may be performed through features/functions (not shown) of the UI 80.
  • the example UI 80 may visualize a first preview image 82, with preview image 82 being generated based on a received carcass alert message.
  • the preview image 82 may then be utilized by a user as a reference to aid in recovery of the animal carcass.
  • the example UI 80 may visualize a second preview image 83.
  • the second preview image may also be generated based on a received carcass alert message.
  • the second preview image 83 may provide a broader view of the FOV in which an animal carcass is detected relative to the first preview image 82.
  • FIG. 9 is a flow chart illustrating one exemplary embodiment 90 of a classification process that may be performed by a carcass detection system consistent with the present disclosure.
  • one or more image frames are received and may be used to generate a reference (or background) image. Note generation of the reference image may further include filtering and other image pre-processing routines as previously discussed.
  • image frames are then received at a predetermined interval (or framerate), with the received image frames being captured subsequent to those used in act 91 to generate the reference image.
  • in act 93, the image frames received in act 92 are compared to the reference image generated in act 91.
  • the process 90 then classifies or fails to classify objects associated with the detected change in act 94, as the case may be.
  • the detected change may include, for instance, the arrival of an animal carcass. Change detection may further include detecting removal of a carcass and/or the arrival of a scavenger animal.
  • in act 95, one or more carcass event messages may be output.
  • Each carcass event message may include a geolocation, e.g., GPS coordinates, of one or more potential animal carcasses and associated metadata such as an arrival timestamp, target structure identifier, animal carcass type and/or associated confidence score.
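A carcass event message carrying the fields listed above might be serialized as a small structured payload. The following is a minimal sketch; the field names and the use of JSON are illustrative assumptions for this example and are not specified by the disclosure:

```python
import json
import time

def build_carcass_event_message(gps, structure_id, carcass_type, confidence):
    """Assemble an illustrative carcass event message.

    All field names here are hypothetical; the disclosure only requires a
    geolocation plus metadata such as an arrival timestamp, target structure
    identifier, animal carcass type, and an associated confidence score.
    """
    return json.dumps({
        "event": "carcass_arrival",
        "location": {"lat": gps[0], "lon": gps[1]},  # GPS coordinates of the potential carcass
        "arrival_timestamp": int(time.time()),       # arrival timestamp (Unix seconds)
        "target_structure_id": structure_id,         # e.g., a wind turbine ID
        "carcass_type": carcass_type,                # classification result, if available
        "confidence": confidence,                    # classifier confidence score
    })

msg = build_carcass_event_message((44.47, -73.15), "T-12", "bat", 0.87)
```

A TX circuit could then hand such a payload to a network interface for delivery to the remote computing system.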
  • a system for detecting animal carcasses of flying animals comprising a first camera device for capturing image frames, the camera device having a field of view aligned with a first area adjacent a target structure, and a carcass detection circuit to generate a reference image based on at least a first image frame captured by the first camera device, detect arrival of an animal carcass based on comparing the reference image with one or more image frames captured subsequent to the first image frame, and in response to detecting the arrival of the animal carcass, sending a first carcass alert message to a remote computing device.
  • the system comprising a plurality of carcass detection systems, each of the plurality of carcass detection systems being mounted on a wind turbine tower and comprising a first camera device for capturing image frames, the first camera device having a field of view aligned with a region about the wind turbine tower, and a carcass detection circuit to generate a reference image based on at least a first image frame captured by the first camera device, detect arrival of an animal carcass based on comparing the reference image with one or more image frames captured subsequent to the first image frame, and in response to detecting the arrival of the animal carcass, sending a first carcass alert message to a remote computing device.
  • a computer- implemented method for detecting arrival of animal carcasses comprising receiving, by a controller, a first image frame from a first camera device, generating, by the controller, a reference image based in part on the received first image frame, receiving, by the controller, a second image frame from the first camera device subsequent to the first image frame, detecting, by the controller, an arrival of a potential animal carcass based at least in part on comparing the reference image with the second image frame, and in response to detecting the arrival of the potential animal carcass, sending by the controller a first carcass alert message to a remote computing device.
  • Embodiments of the methods described herein may be implemented using a processor and/or other programmable device. To that end, the methods described herein may be implemented on a tangible, computer readable storage medium having instructions stored thereon that when executed by one or more processors perform the methods.
  • the transmitter and/or receiver may include a storage medium (not shown) to store instructions (in, for example, firmware or software) to perform the operations described herein.
  • the storage medium may include any type of non-transitory tangible medium, for example, any type of disk including floppy disks, optical disks, compact disk read-only memories (CD- ROMs), compact disk re-writables (CD-RWs), and magneto-optical disks, semiconductor devices such as read-only memories (ROMs), random access memories (RAMs) such as dynamic and static RAMs, erasable programmable read-only memories (EPROMs), electrically erasable programmable read-only memories (EEPROMs), flash memories, magnetic or optical cards, or any type of media suitable for storing electronic instructions.
  • Block diagrams herein represent conceptual views of illustrative circuitry embodying the principles of the disclosure.
  • any flow charts, flow diagrams, state transition diagrams, pseudocode, and the like represent various processes which may be substantially represented in computer readable medium and so executed by a computer or processor, whether or not such computer or processor is explicitly shown.
  • Software modules, or simply modules, which are implied to be software, may be represented herein as any combination of flowchart elements or other elements indicating performance of process steps and/or textual description. Such modules may be executed by hardware that is expressly or implicitly shown.
  • the terms "processor" or "controller" should not be construed to refer exclusively to hardware capable of executing software, and may implicitly include, without limitation, digital signal processor (DSP) hardware, network processor, application specific integrated circuit (ASIC), field programmable gate array (FPGA), read-only memory (ROM) for storing software, random access memory (RAM), and non-volatile storage.
  • Other hardware, conventional and/or custom, may also be included.
  • the terms "circuit" or "circuitry" may comprise, for example, singly or in any combination, hardwired circuitry, programmable circuitry, state machine circuitry, and/or firmware that stores instructions executed by programmable circuitry.
  • the transmitter and receiver may comprise one or more integrated circuits.
  • An "integrated circuit” may be a digital, analog or mixed-signal semiconductor device and/or microelectronic device, such as, for example, but not limited to, a semiconductor integrated circuit chip.
  • the term “coupled” as used herein refers to any connection, coupling, link or the like by which signals carried by one system element are imparted to the "coupled” element.
  • Such “coupled” devices, or signals and devices, are not necessarily directly connected to one another and may be separated by intermediate components or devices that may manipulate or modify such signals.
  • use of the term “nominal” or “nominally” when referring to an amount means a designated or theoretical amount that may vary from the actual amount.
  • use of the articles “a” and/or “an” and/or “the” to modify a noun may be understood to be used for convenience and to include one, or more than one, of the modified noun, unless otherwise specifically stated.
  • the terms “comprising”, “including” and “having” are intended to be inclusive and mean that there may be additional elements other than the listed elements.


Abstract

The present disclosure is generally directed to a detection system that utilizes two-dimensional image data to detect arrival of animal carcasses after striking wind turbines or other target structures. The system includes one or more camera devices with a field of view aligned with a predetermined section of ground adjacent a target structure. In response to detecting the arrival of an animal carcass, the carcass detection system sends one or more alert messages, e.g. via a network such as the Internet, to a remote computing device. The alert message can include a geographical location for each detected animal carcass, an arrival timestamp, and other metadata associated with the event, e.g., a confidence score, a wind turbine identifier (ID), and/or an animal species identifier. The alert message may be utilized by a user, e.g., a biologist, to easily locate and account for detected animal carcasses and avoid manual searches.

Description

SYSTEM AND METHODS FOR DETECTING BATS AND AVIAN
CARCASSES
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] The present non-provisional application claims the benefit of U.S. Provisional Patent Application Serial No. 62/413,502 filed on October 27, 2016, the entire content of which is hereby incorporated by reference.
TECHNICAL FIELD
[0002] The present disclosure generally relates to a machine-vision approach to identifying and classifying animal carcasses amongst other objects in a field of view, and in particular, to a system and method for detecting arrival of animal carcasses on the ground around target structures such as wind turbines.
BACKGROUND INFORMATION
[0003] Many forms of renewable energy, such as wind turbines, endanger wildlife such as bats and other flying animals that have habitats in close proximity. Concern about declining bat populations and endangered avian species in the U.S. has created regulatory pressure to reduce the number of wildlife kills on wind plants/farms. The cost of compliance with the United States Fish & Wildlife Service (USFWS) Wind Energy Guidelines (WEG) and Endangered Species Act (ESA) is significant as manual labor is required to search out and account for animal carcasses of flying animals resulting from wind plant operations. Trained personnel are often dispatched to perform searches and record found carcasses. However, such manual searches raise numerous non-trivial challenges as human searchers often fail to search each turbine in a given wind farm, or all areas around each wind turbine in which animal carcasses of flying animals may land. Often, the carcasses are relatively small and may blend in with their surroundings. In addition, scavengers often locate animal carcasses and may remove/consume animal carcasses long before they can be accounted for by manual searches.
BRIEF DESCRIPTION OF THE DRAWINGS
[0004] These and other features and advantages will be better understood by reading the following detailed description, taken together with the drawings wherein:
[0005] FIG. 1 shows an example animal carcass detection system consistent with embodiments of the present disclosure;
[0006] FIG. 2 shows the animal carcass detection system of FIG. 1 mounted to a wind turbine tower, in accordance with an embodiment of the present disclosure;
[0007] FIG. 3 shows a plurality of animal carcass detection systems consistent with the present disclosure mounted to a wind turbine;
[0008] FIG. 4 shows an example process flow for an animal carcass detection system consistent with the present disclosure;
[0009] FIG. 5 shows an example image frame having a field of view (FOV) of an area around a target structure, in accordance with an embodiment of the present disclosure;
[0010] FIG. 6 shows an example image frame having a field of view (FOV) of an area around a target structure, in accordance with an embodiment of the present disclosure;
[0011] FIG. 7 shows another example image frame having a field of view (FOV) of an area around a target structure with potential animal carcasses, in accordance with an embodiment of the present disclosure;
[0012] FIG. 8 shows an example user interface (UI) in accordance with an embodiment of the present disclosure; and
[0013] FIG. 9 shows an example classification process in accordance with an embodiment of the present disclosure.
DETAILED DESCRIPTION
[0014] The present disclosure is generally directed to an animal carcass detection system that utilizes two-dimensional image data to detect the arrival of animal carcasses on the ground adjacent a target structure, e.g., a wind turbine. The animal carcass detection system may also be referred to as a carcass detection system for brevity. The carcass detection system includes one or more camera devices with a field of view (FOV) aligned with a predetermined section of ground. In response to detecting the arrival of an animal carcass, e.g., the carcass of an avian, bat, etc., the carcass detection system sends one or more alert messages, e.g., via a network such as the Internet, to a remote computing system. The alert message, which may also be referred to herein as a carcass event message, can include a geographical location for each detected animal carcass, a time of arrival, and other metadata associated with the event, e.g., a confidence score, a wind turbine identifier (ID), an animal species identifier, and so on. The alert messages can be received directly at one or more designated mobile computing devices, e.g., smart phones, tablets, laptop computers, or may arrive indirectly by way of a cloud server or other server that communicates with N number of carcass detection systems and acts as an intermediary. Personnel such as biologists, researchers, etc., may receive the alerts and travel to the location of the detected animal carcass to collect the same.
[0015] In an embodiment, an animal carcass detection system consistent with the present disclosure may detect a scavenger event that results in a previously detected animal carcass being removed. Scavengers such as foxes, hawks, weasels, and bears may find and eat an animal carcass prior to personnel arriving to perform collection. The carcass detection system may therefore detect an intervening scavenger event and send an alert message based on the same. The alert message may indicate the type of scavenger (e.g., based on object recognition), time of scavenging, and may also provide an image showing the location of the previously detected animal carcass before and/or after the intervening scavenger event. The alert message may further provide one or more image frames with the scavenger animal in view. Note that in some cases humans may also trigger such events, or the animal carcass detection system may be configured to filter and ignore events related to human activity and other false-positives.
[0016] Thus, the system and methods for animal carcass detection variously disclosed herein may provide an automated approach for detecting the arrival of animal carcasses that saves labor costs, costs associated with specialized search training, and can further eliminate inaccuracies due to human error and intervening scavengers who operate 24x7 in search of easy meals.
[0017] Now turning to the Figures, FIG. 1 shows an example animal carcass detection system 1 consistent with embodiments of the present disclosure. The carcass detection system 1 is shown in a highly simplified form and other embodiments are within the scope of this disclosure.
[0018] As shown, the carcass detection system 1 includes a controller 2, a memory 3, a camera device (or devices) 4, a transmit (TX) circuit 5, an antenna 6, and a housing 8. Note while the carcass detection system 1 is depicted as a single system disposed within a single housing, e.g., housing 8, this disclosure is not necessarily limited in this regard. For instance, in some embodiments a camera may capture image frames and deliver the same via a network, e.g., the Internet, to a remote computer system, such as a computer server, workstation, or mobile computing device, which may then perform carcass detection processes as variously disclosed herein.
[0019] Continuing on, the controller 2 comprises at least one processing device/circuit such as, for example, a digital signal processor (DSP), a field-programmable gate array (FPGA), Reduced Instruction Set Computer (RISC) processor, x86 instruction set processor, microcontroller, or an application-specific integrated circuit (ASIC). The controller 2 may comprise a single chip, or multiple separate chips/circuitry. As discussed further below, the controller 2 may implement an animal carcass identification stage/process using software (e.g., C or C++ executing on the controller/processor 2), hardware (e.g., hardcoded gate level logic or purpose-built silicon) or firmware (e.g., embedded routines executing on a microcontroller), or any combination thereof. In one embodiment, the controller 2 may be configured to carry out the process 90 of FIG. 9.
[0020] The memory 3 may comprise volatile and/or non-volatile memory devices. In an embodiment, the memory 3 may include a relational database, flat file, or other data storage area for maintaining a carcass identification model that may be used when attempting to perform carcass recognition on image data. Likewise, the memory 3 may further include a scavenger detection model to perform scavenger recognition on image data. Note, the models disclosed variously herein may use known object detection models and may be trained or otherwise updated over time to improve detection accuracy.
[0021] The camera device 4 may comprise one or more image sensors. The one or more image sensors may output color image data (RGB), color and depth image data (RGBD camera), depth sensor information, stereo camera information (L/R RGB), YUV, short wave infrared (SWIR), and/or midwave infrared (MWIR). Note, the one or more image sensors may also output monochrome or grey-scale image data, e.g., image data corresponding to visible or ultraviolet (UV), SWIR and/or MWIR bands. In some cases, such image sensors may achieve higher resolution (e.g., for size thresholding) and/or higher sensitivity (e.g., for pixel/contrast thresholding) in the visible/UV/IR bands relative to other sensor types, e.g., color image sensors. In an embodiment, the camera device 4 may include a first sensor being an infrared detector (e.g., to achieve so-called "night-vision") and a second sensor being a color-image sensor (e.g., RGB, YUV). Also, the camera device 4 may use active illumination (e.g., through a separate or integrated lighting device) to enhance detection of carcasses at night or during other low-light conditions. Thus, the camera device 4 may be configured to capture image information 7 from a variety of known image sensor types. The camera device 4 may include associated conversion circuitry to convert, for instance, analog image data 7 to digital image data.
[0022] The TX circuit 5 may comprise a network interface circuit (NIC) for communication via a network, e.g., the Internet. In cases where the TX circuit 5 communicates wirelessly, the antenna device 6 may be utilized. The carcass detection system 1 may be configured for close range or long range communication between the carcass detection system 1 and remote computing devices.
[0023] The term, "close range communication" is used herein to refer to systems and methods for sending/receiving data signals between devices that are relatively close to one another. Close range communication includes, for example, communication between wireless devices using a BLUETOOTH™ network, a personal area network (PAN), near field communication, ZigBee networks, millimeter wave communication, ultra-high frequency (UHF) communication, combinations thereof, and the like. Close range communication may also include so-called "wired" connections such as a USB or cross-over Ethernet cable. Close range communication may therefore be understood as enabling direct communication between devices, without the need for intervening hardware/systems such as routers, cell towers, internet service providers, and the like.
[0024] In contrast, the term, "long range communication" is used herein to refer to systems and methods for sending/receiving data signals between devices that are a significant distance away from one another. Long range communication includes, for example, communication between devices using WiFi, a wide area network (WAN) (including but not limited to a cell phone network, the Internet, a global positioning system (GPS), or a whitespace network such as an IEEE 802.22 WRAN), combinations thereof, and the like. Long range communication may also include utilizing networks with "wired" connections such as optical fibers, copper lines, and data over power lines, e.g., Broadband over Power Line (BPL) technologies. Long range communication may therefore be understood as enabling communication between devices through the use of intervening hardware/systems such as routers, cell towers, whitespace towers, internet service providers, one or more optical fibers, combinations thereof, and the like.
[0025] The housing 8 may be ruggedized and sealed to prevent ingress of contaminants such as dust and moisture. In some specific example cases, the housing 8 may comport with standards for ingress protection (IP) and have an IP67 rating for the housing 8 and associated cables and connectors (not shown) as defined within ANSI/IEC 60529 Ed. 2.1b, although other IPXY ratings are within the scope of this disclosure with the X denoting protection from solids and Y denoting protection from liquids. In some cases, the housing 8 comprises a plastic, polycarbonate, or any other suitably rigid material.
[0026] FIG. 2 shows an example embodiment of the carcass detection system 1 of FIG. 1 configured to detect animal carcasses about an area adjacent of a wind turbine. As shown, the carcass detection system 1 may be mounted to a side of a wind turbine tower 9. The housing 8 of the carcass detection system 1 may be ruggedized, as discussed above, to allow the carcass detection system 1 to operate for extended periods in outdoor environments.
[0027] Although the scenarios and examples discussed herein specifically reference monitoring an area about/around a wind turbine, this disclosure is not limited in this regard. For instance, any area about a target structure may be monitored for animal carcasses depending on a desired application. Target structures may include, but are not limited to, wind turbines, office buildings, towers, or any other structure that may cause animal fatalities.
[0028] Likewise, the carcass detection system 1 may not necessarily be mounted to a target structure, e.g., the turbine tower 9 or other structure, although being mounted to a wind turbine tower may be preferable as it provides a clear line of sight to an area where animal carcasses may land after strikes with a turbine blade, for instance. In some cases, the carcass detection system 1 may be mounted to a structure, e.g., a tower, building, a temporary structure (e.g., a tripod), or a vehicle adjacent a target structure, and the associated camera device 4 may then be pointed towards the target structure to be monitored, and more particularly the area in which flying animals are most likely to fall after striking the target structure.
[0029] In an embodiment, the carcass detection system 1 may also be implemented within or otherwise utilize autonomous robots such as drones. For example, a camera drone having one or more on-board cameras may be used to monitor a given area. The camera drone may monitor multiple areas through yaw/pitch adjustments, which may advantageously allow a single camera drone to monitor multiple areas without necessarily having to have multiple fixed-position camera devices.
[0030] In any such cases, the camera device 4 may be aligned/angled relative to the ground 11 to ensure that the field of view 10 for the camera device 4 covers a predetermined area about the turbine tower 9. In operation, the camera device 4 may capture image data 7 at a predetermined rate. The predetermined rate may be between 1 frame per second (fps) and 60 fps, although other frame rates are within the scope of this disclosure. For instance, the predetermined rate may be one frame every 3, 5, 15 or N seconds, depending on a desired configuration. This reduced framerate may limit the amount of power necessary to perform carcass detection.
[0031] The captured image frames may then be processed to generate a background image, or reference image. A portion of the captured images may be masked, for instance, to remove portions of an area where monitoring is not desired. Subsequent captured image frames may be compared to the generated reference image to detect if a "new" object has entered within the FOV. Filters/models may be utilized to eliminate changes due to environmental noise, e.g., moving vegetation 14, objects exceeding a predefined threshold size (e.g., too large to be a flying animal carcass), changes in lighting, and so on. Those objects that appear in subsequent frames relative to the background and remain for a predefined period of time, e.g., appearing in X number of frames in the same location between frames, will be flagged as potential carcass detections. In one specific example embodiment, objects that arrive and remain stationary for at least one minute may be flagged as potential carcasses. If the frame rate is set to two frames per minute, this would result in two frames being sufficient to flag an object. Note the particular predefined period of time/number of frames prior to flagging a potential object as a carcass may vary depending on a desired application.
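The reference-image comparison and persistence rule described in paragraph [0031] can be sketched as follows. The sketch represents grayscale frames as plain Python 2D lists and uses a per-pixel median background; the threshold value, frame counts, and function names are illustrative assumptions rather than the disclosed implementation:

```python
from statistics import median

def reference_image(frames):
    """Per-pixel median over a list of grayscale frames (2D lists of ints)."""
    rows, cols = len(frames[0]), len(frames[0][0])
    return [[median(f[r][c] for f in frames) for c in range(cols)]
            for r in range(rows)]

def change_mask(frame, reference, threshold=25):
    """Mark pixels that differ from the reference by more than `threshold`."""
    return [[abs(p - q) > threshold for p, q in zip(frame_row, ref_row)]
            for frame_row, ref_row in zip(frame, reference)]

def flag_persistent(masks, min_frames=2):
    """Flag pixel locations whose change persists across the last `min_frames`
    consecutive change masks (a stand-in for the 'stationary for X frames' rule)."""
    flagged = set()
    for r in range(len(masks[0])):
        for c in range(len(masks[0][0])):
            if all(m[r][c] for m in masks[-min_frames:]):
                flagged.add((r, c))
    return flagged
```

At two frames per minute, `min_frames=2` corresponds to the one-minute persistence example given above; size filtering and environmental-noise models would be layered on top of this.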
[0032] Pixels associated with the potential carcass detections, e.g., potential carcass 13, may then be analyzed via a classification process. The classification process may use a carcass recognition model to determine an object type. In response to a positive classification by the classification stage, an alert may be sent to the mobile computing device of a user, e.g., a smart phone, laptop computer, tablet. The alert may include metadata comprising at least one of an animal carcass location, time of arrival, classification information (e.g., category of animal, species of animal), confidence score, and/or other associated metadata such as a structure ID.
[0033] In an embodiment, both positive and negative classifications may be provided as alerts to users. Carcass detection systems disclosed herein may receive feedback/update messages from user devices to train a machine learning algorithm and/or various detection models in order to recognize new objects, e.g., to allow for recognition of previously unknown species of flying animals, and scavenger animals, and/or to improve the recognition accuracy of known object types.
[0034] As shown in FIG. 3, a plurality of carcass detection systems 15-1 to 15-4 may be mounted to a wind turbine 12, with the plurality of carcass detection systems monitoring 360 degrees around the wind turbine 12. In this embodiment, each of the plurality of carcass detection systems 15-1 to 15-4 may have respective fields of view 10-1 to 10-4 and monitor about 90 degrees of non-overlapping region/area about the wind turbine 12. Alternatively, each of the fields of view 10-1 to 10-4 may overlap by a predetermined amount which may advantageously reduce dead zones. Note that although each of the carcass detection systems 15-1 to 15-4 is illustrated as an independent system, this disclosure is not limited in this regard. The carcass detection systems 15-1 to 15-4 may communicate with each other and collectively form a distributed detection system for a given target structure.
[0035] In an embodiment, two or more carcass identification systems, or multiple camera devices for a single carcass identification system, may be aligned with the same, or at least partially overlapping, FOV. Note, the two or more carcass identification systems may be mounted to the same or different structures. In this embodiment, frames from two or more camera devices may be utilized in tandem to increase detection accuracy. For instance, two or more carcass classification systems may work in tandem to capture different perspectives of a potential animal carcass and may synchronize to provide a single alert based on the carcass classification system with the highest confidence classification.
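One simple way to synchronize overlapping systems as described in paragraph [0035] is to pool their classifications and emit only the highest-confidence result. A minimal sketch (the tuple layout and function name are assumptions for illustration):

```python
def select_tandem_alert(classifications):
    """Pick the single alert to send from (system_id, label, confidence)
    tuples reported by two or more overlapping detection systems."""
    system_id, label, confidence = max(classifications, key=lambda c: c[2])
    return {"system": system_id, "label": label, "confidence": confidence}
```

For example, given classifications from systems 15-1 and 15-2 viewing the same carcass, the alert from the system with the higher confidence score would be forwarded.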
[0036] FIG. 4 shows an example process flow for an animal carcass detection system 1A consistent with an embodiment of the present disclosure. The camera device 4 captures a plurality of time-synchronized image frames from a current field of view (FOV). The plurality of time-synchronized image frames may comprise two-dimensional color image data, black and white image data, thermal image data, and/or stereo image data. The captured time-synchronized image frames may be output by the camera device 4 as encoded image frames, e.g., JPEG, GIF, PNG, BMP or any other suitable compressed or uncompressed format.
[0037] The carcass identifier circuit 17 receives the encoded image frames from the camera device 4 as an input. The carcass identifier circuit 17 may be implemented within the controller 2 of FIG. 1 or in any other combination of hardware and/or software. The carcass identifier circuit 17 may then perform image preprocessing to filter and/or mask out regions from image data. The carcass identifier circuit 17 may then generate a background (or reference) image based on one or more captured image frames. The generated background image and image frames captured subsequent to the generation of the background image may then be utilized by a carcass detection process, such as the carcass detection process 90 of FIG. 9, to perform object detection. However, the carcass identifier circuit 17 may utilize any known object recognition technique for identifying objects within the encoded images.
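The masking step mentioned in paragraph [0037], which removes regions where monitoring is not desired, can be illustrated with a simple per-pixel mask. This is a sketch only; the disclosure does not specify how masking is implemented:

```python
def apply_mask(frame, mask):
    """Zero out pixels falling outside the monitored region.

    `frame` is a 2D list of grayscale values; `mask` is a same-shaped 2D
    list of booleans where True marks pixels to keep.
    """
    return [[pixel if keep else 0 for pixel, keep in zip(f_row, m_row)]
            for f_row, m_row in zip(frame, mask)]
```

Masked-out pixels are simply excluded from subsequent change detection, so activity outside the monitored area cannot trigger a detection.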
[0038] FIG. 5 shows an example image frame 50 that may be captured by the camera device 4 and analyzed by the carcass identifier circuit 17. The example image frame 50 may also represent the generated background image discussed above. Note the image frame 50 has been filtered based on a plurality of previously captured/historical images in order to remove a majority of features which remain consistent between image frames and that would otherwise represent noise, e.g., vegetation, fences, structures, and so on. The FOV of the example image frame 50 may correspond with the field of view 10 shown in the embodiment of FIG. 2.
[0039] As shown in FIG. 5, the FOV includes markers 41-1 to 41-3. The markers 41-1 to 41-3 may be physical markers disposed uniformly at predefined intervals from a target structure, e.g., the wind turbine 12 (see FIG. 3). Note, when viewed in a top-down fashion, the markers 41-1 to 41-3 may extend from a target structure along, for instance, each cardinal direction in a uniform manner from the target structure (see FIG. 3). However, this disclosure is not necessarily limited in this regard and distance markers may be disposed in other configurations. Also, markers need not be permanently placed and may be removed after initial calibration.
[0040] In an embodiment, each of the markers 41-1 to 41-3 may be spaced at a predefined interval of 5 meters (m) from each other. Each of the markers 41-1 to 41-3 may be color coded or otherwise marked (e.g., with a machine-readable fiducial) to enable the carcass identifier circuit 17 to determine a relative distance. The carcass identifier circuit 17 may then extrapolate the local position of objects/carcasses identified within the FOV relative to the target structure based on the markers 41-1 to 41-3. The result of the extrapolation may be a local XY coordinate wherein 0,0 represents a center of the target structure, for instance, although this disclosure is not necessarily limited to a Cartesian coordinate system. Changes in position of the markers 41-1 to 41-3 may be detected, including removal of the markers 41-1 to 41-3, and an alert may be sent in response to the removal/movement of markers.
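One way to sketch the marker-based extrapolation is linear interpolation between the pixel rows at which the uniformly spaced markers appear. The marker pixel rows, the object pixel row, and the function name below are illustrative assumptions; the disclosure only requires that the markers' known 5 m spacing be used to recover distance.

```python
MARKER_SPACING_M = 5.0  # predefined interval between markers in the embodiment

def distance_from_markers(marker_pixel_rows, object_pixel_row):
    """Interpolate ground distance from the tower using marker pixel rows.

    marker_pixel_rows: pixel rows of markers at 5 m, 10 m, 15 m, ... from the
    target structure, ordered nearest-first (larger row = nearer the camera
    in this hypothetical geometry).
    """
    for i in range(len(marker_pixel_rows) - 1):
        near, far = marker_pixel_rows[i], marker_pixel_rows[i + 1]
        if min(near, far) <= object_pixel_row <= max(near, far):
            frac = (object_pixel_row - near) / (far - near)
            # distance at marker i+1 plus the interpolated fraction of a span
            return MARKER_SPACING_M * (i + 1) + frac * MARKER_SPACING_M
    raise ValueError("object lies outside the calibrated marker range")
```

A per-column (bearing) calibration of the same kind would yield the second coordinate of the local XY position.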
[0041] Thus, and in accordance with an embodiment, the carcass identifier circuit 17 may utilize the local XY position to calculate a geolocation identifier for potential animal carcasses. For instance, the known GPS location of a target structure may be offset/augmented based on the XY local coordinates of an object/carcass identified within the FOV to provide a GPS location for the potential animal carcass. The derived GPS location of each object/carcass may then be provided as metadata in one or more carcass event messages, as discussed below.
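The offset step can be sketched with a flat-earth approximation, which is adequate at the ~100 m scale around a turbine; this approximation and the function name are assumptions, not the disclosure's method.

```python
import math

EARTH_RADIUS_M = 6_371_000.0  # mean Earth radius

def offset_gps(lat_deg, lon_deg, x_east_m, y_north_m):
    """Offset a known GPS fix by a local XY displacement in meters.

    Flat-earth approximation: meters north map directly to latitude degrees;
    meters east are scaled by cos(latitude) before mapping to longitude.
    """
    dlat = math.degrees(y_north_m / EARTH_RADIUS_M)
    dlon = math.degrees(
        x_east_m / (EARTH_RADIUS_M * math.cos(math.radians(lat_deg)))
    )
    return lat_deg + dlat, lon_deg + dlon
```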
[0042] FIG. 6 shows an example image frame 60 that may be captured by the camera device 4 and analyzed by the carcass identifier circuit 17. As shown in the image frame 60, a plurality of potential animal carcasses were identified by the carcass identifier circuit 17, namely potential carcasses 61-1 and 61-2.
[0043] In an embodiment, potential carcasses 61-1 and 61-2 may be detected based on comparing pixels from the background/reference image frame, e.g., image frame 50 of FIG. 5, with pixels from the image frame 60. In some cases, each pixel value from a first image (the background/reference image) may be subtracted from a corresponding pixel of a second image to derive a delta D. The absolute value of the delta D may then be utilized to detect a change based on the same exceeding a predefined threshold (e.g., a 10% change, or other value depending on a desired configuration). Pixels detected as being changed may then be set/flagged as pixels of interest. The carcass identifier circuit 17 may then group the pixels of interest with other adjacent pixels, including other pixels of interest, to form MxN arrays of pixels or other suitable groupings. For instance, the pixels associated with potential carcasses 61-1 and 61-2, respectively, may then be passed into a classification process to determine the presence of animal carcasses.
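The per-pixel differencing and thresholding just described can be sketched as follows. The 10% threshold corresponds to roughly 26 on an 8-bit intensity scale; that constant and the function name are illustrative assumptions, and the value is configurable as the passage notes.

```python
def pixels_of_interest(reference, current, threshold=26):
    """Flag pixels whose absolute delta from the reference exceeds threshold.

    reference, current: equally sized 2-D lists of grayscale intensities.
    Returns a list of (row, col) coordinates of changed pixels, which the
    circuit may then group into MxN arrays for classification.
    """
    flagged = []
    for r, (ref_row, cur_row) in enumerate(zip(reference, current)):
        for c, (ref_px, cur_px) in enumerate(zip(ref_row, cur_row)):
            if abs(cur_px - ref_px) > threshold:  # |delta D| over threshold
                flagged.append((r, c))
    return flagged
```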
[0044] The classification process may utilize a machine learning algorithm in combination with various object identification models to perform carcass recognition. Carcass recognition may include determining whether the object is an animal carcass, e.g., a simple binary yes/no, and/or may include recognizing the specific species of the animal carcass. Environmental characteristics such as time of year (e.g., the current season), time of day, geographic location, weather, and so on, may be used as "hints" to determine the likelihood of a particular species of animal. For instance, if the time of day is presently night, the likelihood of a bat strike is higher than during daylight hours. Likewise, if the season is spring or fall, certain species of migratory birds may be anticipated. The classification process may be trained to distinguish objects that may trigger false-positives, such as humans, moving vegetation, and so on. In any event, training may occur in the factory, e.g., be pre-loaded, and/or may be provided over time via user feedback/update messages.
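The environmental "hints" described above might be applied as a re-weighting of raw classifier scores before the final decision. The sketch below is purely illustrative: the class names, multipliers, and function signature are assumptions, not part of the disclosure, which leaves the machine learning algorithm unspecified.

```python
def apply_hints(scores, is_night=False, season=None):
    """Re-weight classifier scores using environmental hints, then normalize.

    scores: dict mapping class name -> raw classifier score. A bat strike is
    more likely at night; migratory birds are more likely in spring/fall.
    """
    adjusted = dict(scores)
    if is_night:
        adjusted["bat"] = adjusted.get("bat", 0.0) * 1.5
    if season in ("spring", "fall"):
        adjusted["migratory_bird"] = adjusted.get("migratory_bird", 0.0) * 1.3
    total = sum(adjusted.values()) or 1.0
    return {k: v / total for k, v in adjusted.items()}  # renormalize to sum 1
```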
[0045] Returning to FIG. 3, the carcass identifier circuit 17 outputs a carcass event (or alert) message in response to the carcass identifier circuit 17 positively identifying/classifying one or more animal carcasses about a target structure. As previously mentioned, the carcass identifier circuit 17 may also output carcass event messages even when failing to classify an object. This may advantageously allow additional training data to be provided by a user to recognize new object types and/or improve recognition of known objects.
[0046] Continuing on, each carcass event message may simply indicate the geographical position of an animal carcass relative to the target structure. For instance, the carcass event message may comprise a global positioning satellite (GPS) location of a potential animal carcass. Alternatively, or in addition to the geographical position, each carcass event message may include image data from the encoded image frames to aid in identifying the position of the animal carcass and/or to allow for visual verification/validation by a user. For example, false positives may be easily ignored by end users who are trained to distinguish carcasses of flying animals from other objects. The carcass identification system 1 may receive update messages in such cases of false positives, with the update messages being used to train the classification process in order to minimize or otherwise reduce future false positives.
[0047] Continuing on, the carcass identifier circuit 17 may augment image data to include a visual indicator to allow a user to easily identify an animal carcass detected by the carcass identifier circuit 17. The visual indicator may comprise a regular shape (e.g., a rectangle, square, circle, and so on) or an irregular shape to draw attention to the position of an animal carcass within an image (see FIG. 6). Alternatively, or in addition, the visual indicator may comprise an arrow or other symbol. The augmented image data may then be included in a carcass event message. If multiple images of an animal carcass are available, a plurality of images may be provided within the carcass event message, with or without augmentation.
[0048] The carcass event message may further include metadata associated with each animal carcass detected by the carcass identifier circuit 17. The metadata may comprise, for example, an animal carcass type, a confidence score, an arrival timestamp, and/or a structure identifier (e.g., a wind turbine ID). The animal carcass type may comprise a value corresponding to a category of animal, e.g., avian, bat, etc. In addition, the animal carcass type may further include a value corresponding to the specific species of animal detected by the carcass identifier circuit 17. The confidence score may range from an integer value of 0 to 100, with a score above 80 suggesting high confidence in the classification process's findings. Other scoring methodologies/values may be utilized, and the provided example is not intended to be limiting.
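A carcass event message carrying the metadata fields enumerated above might be modeled as follows. The field names and types are assumptions for illustration; the disclosure does not prescribe a wire format.

```python
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass
class CarcassEventMessage:
    """Illustrative carcass event message with the metadata described above."""
    structure_id: str                # target structure, e.g. a wind turbine ID
    arrival_timestamp: str           # ISO-8601 string, for illustration
    carcass_type: str                # category of animal, e.g. "avian", "bat"
    species: Optional[str] = None    # finer-grained species, if classified
    confidence: int = 0              # integer 0-100
    gps: Optional[tuple] = None      # (lat, lon) of the potential carcass

    def is_high_confidence(self):
        # per the passage, a score above 80 suggests high confidence
        return self.confidence > 80
```

`asdict(message)` yields a plain dictionary suitable for serialization to, e.g., JSON before transmission by the TX circuit.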
[0049] In an embodiment, the carcass event message may further indicate an intervening scavenger event. By way of example, consider that initially an animal carcass may be identified/classified by the carcass identifier circuit 17, which may cause a first carcass event message to be output by the carcass identifier circuit 17. Subsequently, a scavenger such as a fox, bear, weasel, cat, bird of prey, or other animal may consume or otherwise remove the previously detected animal carcass. To this end, the carcass identifier circuit 17 may output a carcass event message indicating that an intervening scavenger event occurred and that an animal carcass has been removed. In this scenario, the carcass event message may include metadata, as discussed above. However, instead of an animal carcass type, a scavenger type value may be included along with a confidence score. The scavenger type value may correspond with the specific type of scavenger (e.g., bird, fox, bear, cat, weasel, and so on) and/or an identifier of a specific species of scavenger. The carcass event message may further include one or more captured image frames, or at least a portion thereof, with the potential scavenger depicted therein. The image(s) provided within the carcass event message may also include a visual identifier, as discussed, to indicate position of the scavenger within the image(s).
[0050] FIG. 7 shows an example image frame 70 that may be captured by the camera device 4 and analyzed by the carcass identifier circuit 17. As shown, the carcass identifier circuit 17 may determine that one or more previously identified animal carcasses, e.g., animal carcasses 61-1 and 61-2, are no longer disposed at their previously known locations, e.g., based on a detected change indicating the carcasses 61-1 and 61-2 were removed. In response, the carcass identifier circuit 17 may output a carcass event message that indicates the occurrence of a scavenger event, as discussed above. Depending on the frame rate of the camera device 4, a scavenger may be within one or more captured image frames. Detection of scavenger 71 may be performed by pixel analysis using thresholding, as variously described herein, and/or may be performed via known object recognition approaches. Note the classification process may utilize a scavenger classification model to determine the category of the scavenger and/or a specific species of the scavenger. The classification process may also utilize user update/feedback messages to train or otherwise adjust the scavenger classification model to increase detection accuracy. In an embodiment, the carcass identifier circuit 17 may include image frame 70, or at least a portion thereof, within the carcass event message that indicates the occurrence of a scavenger event.
[0051] Continuing on with FIG. 3, the TX circuit 5 receives the carcass event message from the carcass identifier circuit 17 as an input. The TX circuit 5 then outputs the carcass event message to a remote computing system via a network, e.g., the Internet. The remote computing system may comprise, for example, a cloud computing server or any other suitable computer device configured to log data events and to provide event notifications to users. For example, the remote computing system may send push alerts to one or more mobile computing devices such as smart phones, tablets, etc., with the push alerts including information associated with the carcass event message. The mobile computing devices may be used by researchers, scientists, technicians, or other users who may have an interest in animal carcasses detected around a target structure.
[0052] FIG. 8 shows an example user interface (UI) that may be utilized to visualize carcass event messages received from the carcass detection system 1. The example UI may be implemented on a smartphone as a so-called "app", or as an application on a desktop computer, laptop, or other mobile computing device. As shown, the example UI shows a top-down view of an area surrounding a target structure 12-1. A plurality of adjacent structures 12-2 to 12-5 may be represented in instances where, for example, multiple target structures are being monitored for animal carcasses, or simply to provide additional points of reference/landmarks. The current position of the mobile computing device relative to the target structure 12-1 may be visualized by a symbol, such as a person icon 81. The location of an animal carcass may be visualized by a push-pin or other symbol. The location of the animal carcass may also be provided by a geolocation, such as GPS coordinates 86.
[0053] The example UI may function similar to a GPS application that seeks to guide a user to a desired destination. In the context of the present disclosure, the destination is the approximate location of one or more potential animal carcasses detected by the animal carcass detection system 1. Once located, a user may then dispose of the animal carcass and log any necessary data pertaining to the animal carcass for research/accounting purposes. This logging may be performed through features/functions (not shown) of the UI 80.
[0054] Optionally, and as shown in FIG. 8, the example UI 80 may visualize a first preview image 82, with the preview image 82 being generated based on a received carcass alert message. The preview image 82 may then be utilized by a user as a reference to aid in recovery of the animal carcass. Alternatively, or in addition, the example UI 80 may visualize a second preview image 83. The second preview image may also be generated based on a received carcass alert message. The second preview image 83 may provide a broader view of the FOV in which an animal carcass is detected relative to the first preview image 82.

[0055] FIG. 9 is a flow chart illustrating one exemplary embodiment 90 of a classification process that may be performed by a carcass detection system consistent with the present disclosure. Exemplary details of the operations shown in FIG. 9 are discussed above. In act 91, one or more image frames are received and may be used to generate a reference (or background) image. Note generation of the reference image may further include filtering and other image pre-processing routines, as previously discussed. In act 92, image frames are then received at a predetermined interval (or framerate), with the received image frames being captured subsequent to those used in act 91 to generate the reference image.
[0056] In act 93, the image frames received in act 92 are then compared to the reference image generated in act 91. In response to detecting a change, the process 90 then classifies, or fails to classify, objects associated with the detected change in act 94, as the case may be. The detected change may include, for instance, the arrival of an animal carcass. Change detection may further include detecting removal of a carcass and/or the arrival of a scavenger animal. In act 95, one or more carcass event messages may be output. Each carcass event message may include a geolocation, e.g., GPS coordinates, of one or more potential animal carcasses and associated metadata such as an arrival timestamp, target structure identifier, animal carcass type, and/or associated confidence score.
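Acts 91-95 of process 90 can be sketched end-to-end as one detection cycle. All names here are illustrative, and the `classify` and `notify` callables stand in for the classification process and the TX circuit, respectively; the 26-count threshold is an assumed 8-bit analogue of the 10% example above.

```python
def detection_cycle(reference, frame, classify, notify):
    """One pass of process 90: compare (act 93), classify (act 94), emit (act 95).

    reference, frame: equally sized 2-D lists of grayscale intensities.
    classify: callable(changes, frame) -> (label, confidence).
    notify: callable(event) that forwards the carcass event message.
    """
    changes = [  # act 93: per-pixel comparison against the reference image
        (r, c)
        for r, ref_row in enumerate(reference)
        for c, ref_px in enumerate(ref_row)
        if abs(frame[r][c] - ref_px) > 26
    ]
    events = []
    if changes:
        label, confidence = classify(changes, frame)  # act 94
        events.append({"pixels": changes, "type": label, "confidence": confidence})
        notify(events[-1])  # act 95: output the carcass event message
    return events
```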
[0057] In accordance with an aspect of the present disclosure a system for detecting animal carcasses of flying animals is disclosed. The system comprising a first camera device for capturing image frames, the camera device having a field of view aligned with a first area adjacent a target structure, and a carcass detection circuit to generate a reference image based on at least a first image frame captured by the first camera device, detect arrival of an animal carcass based on comparing the reference image with one or more image frames captured subsequent to the first image frame, and in response to detecting the arrival of the animal carcass, sending a first carcass alert message to a remote computing device.
[0058] In accordance with another aspect of the present disclosure a system is disclosed. The system comprising a plurality of carcass detection systems, each of the plurality of carcass detection systems being mounted on a wind turbine tower and comprising a first camera device for capturing image frames, the first camera device having a field of view aligned with a region about the wind turbine tower, and a carcass detection circuit to generate a reference image based on at least a first image frame captured by the first camera device, detect arrival of an animal carcass based on comparing the reference image with one or more image frames captured subsequent to the first image frame, and in response to detecting the arrival of the animal carcass, sending a first carcass alert message to a remote computing device.
[0059] In accordance with another aspect of the present disclosure a computer- implemented method for detecting arrival of animal carcasses is disclosed. The method comprising receiving, by a controller, a first image frame from a first camera device, generating, by the controller, a reference image based in part on the received first image frame, receiving, by the controller, a second image frame from the first camera device subsequent to the first image frame, detecting, by the controller, an arrival of a potential animal carcass based at least in part on comparing the reference image with the second image frame, and in response to detecting the arrival of the potential animal carcass, sending by the controller a first carcass alert message to a remote computing device.
[0060] Embodiments of the methods described herein may be implemented using a processor and/or other programmable device. To that end, the methods described herein may be implemented on a tangible, computer readable storage medium having instructions stored thereon that when executed by one or more processors perform the methods. Thus, for example, the transmitter and/or receiver may include a storage medium (not shown) to store instructions (in, for example, firmware or software) to perform the operations described herein. The storage medium may include any type of non-transitory tangible medium, for example, any type of disk including floppy disks, optical disks, compact disk read-only memories (CD-ROMs), compact disk re-writables (CD-RWs), and magneto-optical disks, semiconductor devices such as read-only memories (ROMs), random access memories (RAMs) such as dynamic and static RAMs, erasable programmable read-only memories (EPROMs), electrically erasable programmable read-only memories (EEPROMs), flash memories, magnetic or optical cards, or any type of media suitable for storing electronic instructions.
[0061] Block diagrams herein represent conceptual views of illustrative circuitry embodying the principles of the disclosure. Similarly, it will be appreciated that any flow charts, flow diagrams, state transition diagrams, pseudocode, and the like represent various processes which may be substantially represented in computer readable medium and so executed by a computer or processor, whether or not such computer or processor is explicitly shown. Software modules, or simply modules which are implied to be software, may be represented herein as any combination of flowchart elements or other elements indicating performance of process steps and/or textual description. Such modules may be executed by hardware that is expressly or implicitly shown.
[0062] The functions of the various elements shown in the figures, including any functional blocks, may be provided through the use of dedicated hardware as well as hardware capable of executing software in association with appropriate software. When provided by a processor, the functions may be provided by a single dedicated processor, by a single shared processor, or by a plurality of individual processors, some of which may be shared. Moreover, explicit use of the term "processor" or "controller" should not be construed to refer exclusively to hardware capable of executing software, and may implicitly include, without limitation, digital signal processor (DSP) hardware, network processor, application specific integrated circuit (ASIC), field programmable gate array (FPGA), read-only memory (ROM) for storing software, random access memory (RAM), and non-volatile storage. Other hardware, conventional and/or custom, may also be included.
[0063] As used in any embodiment herein, "circuit" or "circuitry" may comprise, for example, singly or in any combination, hardwired circuitry, programmable circuitry, state machine circuitry, and/or firmware that stores instructions executed by programmable circuitry. In at least one embodiment, the transmitter and receiver may comprise one or more integrated circuits. An "integrated circuit" may be a digital, analog or mixed-signal semiconductor device and/or microelectronic device, such as, for example, but not limited to, a semiconductor integrated circuit chip. The term "coupled" as used herein refers to any connection, coupling, link or the like by which signals carried by one system element are imparted to the "coupled" element. Such "coupled" devices, or signals and devices, are not necessarily directly connected to one another and may be separated by intermediate components or devices that may manipulate or modify such signals. As used herein, use of the term "nominal" or "nominally" when referring to an amount means a designated or theoretical amount that may vary from the actual amount.

[0064] Throughout the entirety of the present disclosure, use of the articles "a" and/or "an" and/or "the" to modify a noun may be understood to be used for convenience and to include one, or more than one, of the modified noun, unless otherwise specifically stated. The terms "comprising", "including" and "having" are intended to be inclusive and mean that there may be additional elements other than the listed elements.
[0065] The terms and expressions which have been employed herein are used as terms of description and not of limitation, and there is no intention, in the use of such terms and expressions, of excluding any equivalents of the features shown and described (or portions thereof), and it is recognized that various modifications are possible within the scope of the claims. Also features of any embodiment described herein may be combined or substituted for features of any other embodiment described herein.
[0066] While the principles of the disclosure have been described herein, it is to be understood by those skilled in the art that this description is made only by way of example and not as a limitation as to the scope of the disclosure. Other embodiments are contemplated within the scope of the present disclosure in addition to the embodiments shown and described herein. Modifications and substitutions by one of ordinary skill in the art are considered to be within the scope of the present disclosure, which is not to be limited except by the following claims.

Claims

CLAIMS What is claimed is:
1. A system for detecting animal carcasses of flying animals, the system comprising: a first camera device for capturing image frames, the camera device having a field of view aligned with a first area adjacent a target structure; and
a carcass detection circuit to:
generate a reference image based on at least a first image frame captured by the first camera device;
detect arrival of an animal carcass based on comparing the reference image with one or more image frames captured subsequent to the first image frame; and
in response to detecting the arrival of the animal carcass, sending a first carcass alert message to a remote computing device.
2. The system of claim 1, wherein detecting arrival of the animal carcass includes comparing the reference image with a plurality of image frames captured subsequent to the first image frame, the plurality of image frames depicting the animal carcass in a same position within each image frame.
3. The system of claim 1, wherein detecting arrival of the animal carcass includes determining a change between one or more pixels of the generated reference image and corresponding pixels of the one or more image frames captured subsequent to the first image frame.
4. The system of claim 3, wherein the change is determined based on a difference between a value of the one or more pixels of the generated reference image and corresponding pixel values of the one or more image frames exceeding a predefined threshold.
5. The system of claim 1, wherein detecting arrival of the animal carcass includes performing object recognition on regions of the one or more image frames that include changes relative to the reference image.
6. The system of claim 1, wherein the first carcass alert message comprises at least one of an animal carcass type, a confidence score associated with the animal carcass type, a geographical identifier, and/or an arrival timestamp.
7. The system of claim 6, wherein the geographical identifier comprises a global positioning satellite (GPS) location, the GPS location corresponding to a position of the animal carcass.
8. The system of claim 7, wherein the GPS location corresponding to the animal carcass is determined based at least in part on one or more reference markers disposed within the field of view at uniform intervals.
9. The system of claim 1, wherein the target structure is a wind turbine.
10. The system of claim 1, wherein the carcass detection circuit is further configured to detect removal of the animal carcass, and in response to detecting the removal of the animal carcass, send a second carcass alert message to a remote computer system, the second carcass alert message indicating the detected removal of the animal carcass.
11. The system of claim 10, wherein the second carcass alert message includes at least one of a scavenger type value, an image depicting a scavenger animal within the field of view, and/or a carcass removal timestamp.
12. The system of claim 10, wherein the camera device comprises at least one of a color image data (RGB) sensor, a color and depth image data (RGBD camera) sensor, a stereo camera information (L/R RGB) sensor, a short wave infrared (SWIR) sensor, and/or a mid wave infrared (MWIR) sensor.
13. The system of claim 1, wherein a carcass detection circuit is further configured to receive an update message and adjust one or more classification models in a memory to increase object recognition accuracy.
14. The system of claim 1, wherein the camera device comprises an on-board camera of a camera drone.
15. A system comprising:
a plurality of carcass detection systems, each of the plurality of carcass detection systems being mounted on a wind turbine tower and comprising:
a first camera device for capturing image frames, the first camera device having a field of view aligned with a region about the wind turbine tower; and a carcass detection circuit to:
generate a reference image based on at least a first image frame captured by the first camera device;
detect arrival of an animal carcass based on comparing the reference image with one or more image frames captured subsequent to the first image frame; and
in response to detecting the arrival of the animal carcass, sending a first carcass alert message to a remote computing device.
16. The system of claim 15, wherein detecting arrival of the animal carcass includes comparing the reference image with a plurality of image frames captured subsequent to the first image frame, the plurality of image frames depicting the animal carcass in a same position within each image frame.
17. The system of claim 15, wherein each of the plurality of carcass detection systems have an associated field of view different from each other.
18. The system of claim 15, wherein the plurality of carcass detection systems have at least one overlapping field of view.
19. A computer-implemented method for detecting arrival of animal carcasses, the method comprising:
receiving, by a controller, a first image frame from a first camera device;
generating, by the controller, a reference image based in part on the received first image frame;
receiving, by the controller, a second image frame from the first camera device subsequent to the first image frame;
detecting, by the controller, an arrival of a potential animal carcass based at least in part on comparing the reference image with the second image frame; and
in response to detecting the arrival of the potential animal carcass, sending by the controller a first carcass alert message to a remote computing device.
20. The method of claim 19, wherein detecting the arrival of the potential animal carcass further includes comparing the reference image with a plurality of image frames including the second image frame, the plurality of image frames depicting the potential animal carcass in a same position within each image frame.
21. The method of claim 19, wherein detecting the arrival of the animal carcass further includes failing to classify the potential animal carcass as a known carcass type, and wherein the first carcass alert message includes a value indicating the failure to classify the potential animal carcass.
22. The method of claim 19, further comprising receiving an update message from a remote computing system, the update message configured to update an object detection model stored in a memory to allow for classification of a previously unknown animal carcass type or to update data associated with a known animal carcass type to improve recognition accuracy.
23. The method of claim 19, further comprising:
detecting, by the controller, removal of the potential animal carcass; and
in response to detecting removal of the potential animal carcass, sending by the controller a second carcass alert message to a remote computing device, the second carcass alert message indicating a potential scavenger event.
24. The method of claim 23, wherein the second carcass alert message comprises at least one of a scavenger event timestamp, a potential scavenger type, a confidence score associated with the potential scavenger type, and/or image data depicting a scavenger animal within a field of view of the first camera device.
PCT/US2017/058828 2016-10-27 2017-10-27 System and methods for detecting bats and avian carcasses WO2018081610A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201662413502P 2016-10-27 2016-10-27
US62/413,502 2016-10-27

Publications (1)

Publication Number Publication Date
WO2018081610A1 true WO2018081610A1 (en) 2018-05-03

Family

ID=62024143

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2017/058828 WO2018081610A1 (en) 2016-10-27 2017-10-27 System and methods for detecting bats and avian carcasses

Country Status (1)

Country Link
WO (1) WO2018081610A1 (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100020170A1 (en) * 2008-07-24 2010-01-28 Higgins-Luthman Michael J Vehicle Imaging System
US20110260907A1 (en) * 2008-12-16 2011-10-27 Henri-Pierre Roche Method for detecting a bird or a flying object
US20130050400A1 (en) * 2011-08-31 2013-02-28 Henrik Stiesdal Arrangement and Method to Prevent a Collision of a Flying Animal with a Wind Turbine
US20150204973A1 (en) * 2011-09-09 2015-07-23 Accipiter Radar Technologies, Inc. Device and method for 3d sampling with avian radar
US20160055400A1 (en) * 2014-08-21 2016-02-25 Boulder Imaging, Inc. Avian detection systems and methods


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
KORNER-NIEVERGELT ET AL.: "A new method to determine bird and bat fatality at wind energy turbines from carcass searches", WILDLIFE BIOLOGY, vol. 17, no. 4, 2011, pages 350 - 363, XP055501146, Retrieved from the Internet <URL:http://www.bioone.org/doi/pdf/10.2981/10-121> [retrieved on 20171203] *
XU ET AL.: "Internet of Things Applications: Animal Monitoring with Unmanned Aerial Vehicle", 20 October 2016 (2016-10-20), pages 1 - 12, XP055501151, Retrieved from the Internet <URL:https://arxiv.org/pdf/1610.05287.pdf> [retrieved on 20171203] *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11336868B2 (en) * 2016-02-13 2022-05-17 Michal PRZYBYCIN Device recording the collisions of flying animals with wind turbines and indicating where they fell on the ground
US10776695B1 (en) 2019-03-08 2020-09-15 Ai Concepts, Llc Intelligent recognition and alert methods and systems
US11250324B2 (en) 2019-03-08 2022-02-15 Ai Concepts, Llc Intelligent recognition and alert methods and systems
US11537891B2 (en) 2019-03-08 2022-12-27 Ai Concepts, Llc Intelligent recognition and alert methods and systems
US11699078B2 (en) 2019-03-08 2023-07-11 Ai Concepts, Llc Intelligent recognition and alert methods and systems
WO2021067783A1 (en) * 2019-10-03 2021-04-08 Hubbell Incorporated Image based process for detecting moving objects

Similar Documents

Publication Publication Date Title
US20210073692A1 (en) Method and system for utility infrastructure condition monitoring, detection and response
AU2021202277B2 (en) Avian detection systems and methods
WO2018081610A1 (en) System and methods for detecting bats and avian carcasses
US10498955B2 (en) Commercial drone detection
Kays et al. Monitoring wild animal communities with arrays of motion sensitive camera traps
KR102400452B1 (en) Context-aware object detection in aerial photographs/videos using travel path metadata
Sugumar et al. An improved real time image detection system for elephant intrusion along the forest border areas
US9922049B2 (en) Information processing device, method of processing information, and program for processing information
US20180107182A1 (en) Detection of drones
Ahmad et al. A novel method for vegetation encroachment monitoring of transmission lines using a single 2D camera
US9558564B1 (en) Method for finding important changes in 3D point clouds
US11120676B2 (en) Intrusion detection methods and devices
CN115631421A (en) Intelligent protection method and system for cultivated land
US8934020B2 (en) Integrated video quantization
AU2023278096A1 (en) Method and system for utility power lines vegetation proximity monitoring and controlling
KR102351398B1 (en) Method and system for detecting marine life using imaging equipment
KR20200139616A (en) Apparatus and method for detecting abnormal objects
WO2019076954A1 (en) Intrusion detection methods and devices
US20240193946A1 (en) Bird detection and species determination
CN107885231A (en) A kind of unmanned plane capturing method and system based on visible images identification
Liu et al. Towards continuous surveillance of fruit flies using sensor networks and machine vision
Teixeira et al. A survey on applications of unmanned aerial vehicles using machine learning
JP7300958B2 (en) IMAGING DEVICE, CONTROL METHOD, AND COMPUTER PROGRAM
Kim et al. Human monitoring system using drones for riverside area
Ogura et al. Ground object recognition and segmentation from aerial image‐based 3D point cloud

Legal Events

Date Code Title Description
121 EP: The EPO has been informed by WIPO that EP was designated in this application

Ref document number: 17865928

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 EP: PCT application non-entry in European phase

Ref document number: 17865928

Country of ref document: EP

Kind code of ref document: A1