CN114581851A - Bird trajectory prediction method, bird repelling device, bird trajectory prediction system and storage medium - Google Patents


Info

Publication number
CN114581851A
Authority
CN
China
Prior art keywords
bird
flying
image frame
frame sequence
monitoring image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210236163.4A
Other languages
Chinese (zh)
Inventor
唐红强 (Tang Hongqiang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sungrow Power Supply Co Ltd
Original Assignee
Sungrow Power Supply Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sungrow Power Supply Co Ltd filed Critical Sungrow Power Supply Co Ltd
Priority to CN202210236163.4A
Publication of CN114581851A

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/044: Recurrent networks, e.g. Hopfield networks
    • G06N 3/045: Combinations of networks
    • G06N 3/08: Learning methods
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02E: REDUCTION OF GREENHOUSE GAS [GHG] EMISSIONS, RELATED TO ENERGY GENERATION, TRANSMISSION OR DISTRIBUTION
    • Y02E 10/00: Energy generation through renewable energy sources
    • Y02E 10/50: Photovoltaic [PV] energy

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a bird trajectory prediction method, a bird repelling method, a device, a system and a storage medium, belonging to the technical field of bird repelling at power stations. The bird trajectory prediction method comprises the following steps: acquiring a first monitoring image frame sequence captured within a first preset duration before the moment a bird appears; inputting the first monitoring image frame sequence into a bird trajectory prediction model, which predicts a second monitoring image frame sequence covering a second preset duration in the future; and detecting the coordinate position of the bird in the second monitoring image frame sequence with a bird recognition model, to obtain the bird's motion track over that future duration. The bird repelling method comprises: adjusting a bird repelling device according to the motion track, so that the bird is repelled while still in flight. The invention can efficiently disperse birds in a photovoltaic power station, driving them away while they are still flying so as to prevent them from landing on the photovoltaic modules.

Description

Bird trajectory prediction method, bird repelling device, bird trajectory prediction system and storage medium
Technical Field
The invention relates to the field of bird repelling at power stations, and in particular to a bird trajectory prediction method, and a bird repelling method, device, system and storage medium.
Background
At present, during the operation of a solar photovoltaic power station, birds often land and rest on the edges of the photovoltaic panels after flight. When too many birds gather around a panel they block the sunlight falling on it and reduce its working efficiency, and when they fly away they leave large amounts of droppings behind. Droppings that accumulate on a panel for a long time corrode it and shade it from sunlight, further impairing its normal operation. During the operation of a photovoltaic power station, the most important task is cleaning and maintaining the photovoltaic modules, because the cleanliness of the modules has a great influence on the station's power generation. As a result, photovoltaic power generation enterprises often spend significant sums on module maintenance. Station maintenance experience shows that perching birds and their droppings strongly affect normal power generation: a large photovoltaic array attracts many birds to stay on the surface of the array modules for long periods, leaving a large amount of excrement. Accumulated droppings cast local shadows on a module; the current and voltage of the shaded cells change, producing a local temperature rise on those cells. This phenomenon is known as the "hot spot effect".
Bird droppings are not only corrosive but also very sticky; ordinary cleaning agents and cleaning robots cannot wash them off, and unlike dust they are difficult to blow away or rinse off with rainwater. They therefore greatly increase the frequency and difficulty of module cleaning and the workload of operation and maintenance personnel.
How to effectively drive birds out of a photovoltaic power station is therefore an important problem. Only by minimizing the time birds spend on the photovoltaic modules and preventing them from landing there can the accumulation of droppings be reduced, allowing operation and maintenance personnel to maintain the modules on site more efficiently.
Disclosure of Invention
The main object of the invention is to provide a bird trajectory prediction method, aiming to solve the technical problem of how to efficiently disperse birds in a photovoltaic power station, driving them away while they are still in flight so that they do not land on the photovoltaic modules.
To achieve the above object, the present invention provides a bird trajectory prediction method, comprising:
acquiring a first monitoring image frame sequence within a first preset duration before the moment a bird appears;
inputting the first monitoring image frame sequence into a bird trajectory prediction model, which predicts a second monitoring image frame sequence covering a second preset duration in the future; and
detecting the coordinate position of the bird in the second monitoring image frame sequence with a bird recognition model, to obtain the bird's motion track over the second preset duration in the future.
Optionally, before the step of acquiring the first monitoring image frame sequence within a first preset time period before the bird appearance time, the method further includes:
acquiring a real-time monitoring image frame sequence, and detecting whether a flying bird appears in the real-time monitoring image frame sequence through the bird recognition model;
and if the flying bird appears, executing the step of acquiring the first monitoring image frame sequence within a first preset time before the flying bird appearance moment.
Optionally, after the step of detecting whether a bird appears in the real-time monitoring image frame sequence through the bird recognition model, the method further includes:
and if no flying bird appears, executing the step of acquiring the real-time monitoring image frame sequence.
Optionally, before the step of detecting whether a bird appears in the real-time monitoring image frame sequence through the bird recognition model, the method further includes:
training based on the yolov5 network model to obtain the bird recognition model.
Optionally, before the step of inputting the first monitoring image frame sequence into a bird trajectory prediction model, the method further includes:
and constructing an ST-LSTM network model, and training based on the ST-LSTM network model to obtain the flying bird trajectory prediction model.
Optionally, the network structure of the ST-LSTM network model comprises an encoding structure and a decoding structure, and the step of constructing the ST-LSTM network model comprises:
extracting state values from the spatially stacked sequence structure and transmitting them in the vertical direction;
adding a pass-through operation for the state values between the encoding structure and the decoding structure; and
transmitting the state value and output value of the last layer of the encoding structure to the first layer of the decoding structure through the pass-through operation, and connecting them to obtain the ST-LSTM network model.
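The encoder-to-decoder hand-off described above can be illustrated with a deliberately simplified sketch. The cell below is a scalar stand-in for an ST-LSTM cell (real cells apply convolutional gates to image tensors); only the data flow, in which the state and output of the last encoding layer seed the first decoding layer, mirrors the construction step. All names and update rules here are illustrative, not the patent's implementation.

```python
def toy_cell(x, h, c):
    # Stand-in for an ST-LSTM cell: a real cell applies convolutional
    # gates; this scalar update only illustrates how h and c flow.
    c = 0.5 * c + 0.5 * x
    h = 0.5 * h + 0.5 * c
    return h, c

def encode_decode(seq, n_layers=2, horizon=3):
    h = [0.0] * n_layers   # hidden states, one per stacked layer
    c = [0.0] * n_layers   # cell states, transmitted vertically
    # Encoding: feed each input frame up through the layer stack.
    for x in seq:
        inp = x
        for layer in range(n_layers):
            h[layer], c[layer] = toy_cell(inp, h[layer], c[layer])
            inp = h[layer]
    # Pass-through: the state value and output value of the LAST
    # encoding layer are handed to the FIRST decoding layer.
    dh, dc, out = h[-1], c[-1], h[-1]
    preds = []
    for _ in range(horizon):   # decode future steps autoregressively
        dh, dc = toy_cell(out, dh, dc)
        out = dh
        preds.append(out)
    return preds

future = encode_decode([0.2, 0.8, 0.5])  # three "predicted frames"
```

In the real model each scalar would be a feature map and each prediction a full monitoring image frame; the pass-through spares the decoder from starting with empty states.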
Optionally, the step of training the ST-LSTM network model to obtain the bird trajectory prediction model comprises:
arranging the first monitoring image frame sequence in chronological order and training the ST-LSTM network model with it as training input data;
comparing, in chronological order, the predicted second monitoring image frame sequence with a third monitoring image frame sequence actually recorded over the second preset duration; and
adjusting the parameters of the ST-LSTM network model based on the error computed from that comparison, to obtain the bird trajectory prediction model.
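The training error above compares the predicted sequence frame by frame with the actually recorded sequence. The patent only says "error"; a mean squared error over pixel values, sketched below with frames flattened to lists, is one common choice and is an assumption here.

```python
def sequence_mse(predicted, actual):
    # Frame-by-frame mean squared error between two equally long
    # sequences; each frame is a flat list of pixel values.
    assert len(predicted) == len(actual)
    total, n = 0.0, 0
    for pred_frame, act_frame in zip(predicted, actual):
        for p, a in zip(pred_frame, act_frame):
            total += (p - a) ** 2
            n += 1
    return total / n

# Tiny 2-frame, 2-pixel example: predicted sequence vs recorded one.
pred = [[0.0, 0.5], [1.0, 1.0]]
real = [[0.0, 1.0], [1.0, 0.0]]
err = sequence_mse(pred, real)  # (0 + 0.25 + 0 + 1) / 4 = 0.3125
```

During training this error would be backpropagated to adjust the network parameters; the sketch only shows the comparison itself.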
In addition, to achieve the above object, the present invention further provides a bird repelling method based on the motion track obtained above, comprising the following step:
and adjusting the bird repelling device to perform bird repelling according to the motion track, so that the flying birds are repelled in the flying process of the flying birds.
Optionally, the bird repelling method further comprises:
when the bird repelling device is a video laser bird repelling device, the angle of a laser transmitter in the video laser bird repelling device is adjusted, so that the laser transmitter emits laser and repeatedly sweeps the last track point in the motion track, and the flying bird is repelled in the flying process.
In order to achieve the above object, the present invention also provides a bird trajectory prediction device, including: the system comprises a memory, a processor and a bird trajectory prediction program stored on the memory and executable on the processor, wherein the bird trajectory prediction program is configured to implement the steps of the bird trajectory prediction method.
In addition, in order to achieve the above object, the present invention further provides a bird repelling system, which includes the bird trajectory prediction device and the bird repelling apparatus, wherein the bird repelling apparatus includes a camera for collecting video images in a monitoring area in real time and a structure capable of executing bird repelling action.
Optionally, the bird repelling device is a video laser bird repelling device.
Furthermore, to achieve the above object, the present invention also provides a computer readable storage medium having stored thereon a computer program, which when executed by a processor, implements the steps of the bird trajectory prediction method and/or the bird repelling method as described above.
The embodiment of the invention provides a flying bird trajectory prediction method, a bird repelling method, a flying bird trajectory prediction device, a bird repelling system and a computer readable storage medium, and the method comprises the steps of obtaining a first monitoring image frame sequence within a first preset time before the flying bird appearance moment; inputting the first monitoring image frame sequence into a bird trajectory prediction model, and predicting to obtain a second monitoring image frame sequence with a second preset time length in the future; and detecting the coordinate position of the flying bird in the second monitoring image frame sequence through a bird recognition model to obtain the motion track of the flying bird in the future for the second preset time.
When a bird is detected in the video images of the monitoring area collected by the camera in real time, a first monitoring image frame sequence covering a continuous first preset duration is stored. This sequence is input to the bird trajectory prediction model, which predicts and outputs a second monitoring image frame sequence covering a second preset duration in the future. The bird recognition model then detects the coordinate position of the bird in the second sequence, yielding the bird's motion track over that future duration.
The bird repelling device is then adjusted to repel the bird according to the predicted motion track, for example by adjusting the angle of the laser transmitter in a video laser bird repelling device so that the emitted laser repeatedly sweeps the last track point of the motion track, repelling the bird while it is in flight. Because the device acts while the bird is still flying, it effectively prevents birds from settling on the photovoltaic modules, so birds can be dispersed efficiently across the photovoltaic power station.
Drawings
FIG. 1 is a schematic structural diagram of an operating device in a hardware operating environment according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart illustrating a method for predicting bird trajectories according to an embodiment of the present invention;
FIG. 3 is a schematic flow chart illustrating a method for predicting bird trajectories according to another embodiment of the present invention;
fig. 4 is a schematic diagram of an overall network structure of a yolov5 network model according to an embodiment of the bird trajectory prediction method of the present invention;
FIG. 5 is a schematic diagram of a yolov5 network model training process according to an embodiment of the bird trajectory prediction method of the present invention;
FIG. 6 is a schematic diagram of a convLSTM model according to an embodiment of the bird trajectory prediction method of the present invention;
FIG. 7 is a schematic diagram of model timing of convLSTM according to an embodiment of the bird trajectory prediction method of the present invention;
FIG. 8 is a schematic diagram of a partial improved ConvLSTM model network according to yet another embodiment of the bird trajectory prediction method of the present invention;
FIG. 9 is a schematic diagram of a partially improved ConvLSTM model according to an embodiment of the bird trajectory prediction method of the present invention;
FIG. 10 is a schematic diagram of an ST-LSTM model according to an embodiment of the bird trajectory prediction method of the present invention;
FIG. 11 is a schematic diagram illustrating a comparison of model structures of an embodiment of a method for predicting bird trajectories according to the present invention;
FIG. 12 is a schematic diagram of an ST-LSTM network structure according to an embodiment of the bird trajectory prediction method of the present invention;
FIG. 13 is a schematic diagram of an improved ST-LSTM network structure according to an embodiment of the bird trajectory prediction method of the present invention;
fig. 14 is a schematic diagram of a prediction implementation of an embodiment of a bird trajectory prediction method according to the present invention.
The implementation, functional features and advantages of the present invention will be further described with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Referring to fig. 1, fig. 1 is a schematic structural diagram of an operating device in a hardware operating environment according to an embodiment of the present invention.
As shown in fig. 1, the operating device may include: a processor 1001, such as a Central Processing Unit (CPU), a communication bus 1002, a user interface 1003, a network interface 1004, and a memory 1005. The communication bus 1002 implements connection and communication between these components. The user interface 1003 may include a display screen (Display) and an input unit such as a keyboard (Keyboard), and may optionally also include a standard wired interface and a wireless interface. The network interface 1004 may optionally include a standard wired interface and a wireless interface (e.g., a Wireless-Fidelity (Wi-Fi) interface). The memory 1005 may be a Random Access Memory (RAM) or a Non-Volatile Memory (NVM) such as a disk memory, and may optionally be a storage device separate from the processor 1001.
Those skilled in the art will appreciate that the configuration shown in FIG. 1 does not constitute a limitation of the operating device and may include more or fewer components than shown, or some components may be combined, or a different arrangement of components.
As shown in fig. 1, a memory 1005, which is a storage medium, may include therein an operating system, a data storage module, a network communication module, a user interface module, and a bird trajectory prediction program and a bird repelling program.
In the operating device shown in fig. 1, the network interface 1004 is mainly used for data communication with other devices, and the user interface 1003 is mainly used for data interaction with a user. In the operating device of the present invention, the processor 1001 calls the bird trajectory prediction program stored in the memory 1005 and performs the following operations:
acquiring a first monitoring image frame sequence within a first preset time before the flying bird occurrence time;
inputting the first monitoring image frame sequence into a bird trajectory prediction model, and predicting to obtain a second monitoring image frame sequence with a second preset time length in the future;
and detecting the coordinate position of the flying bird in the second monitoring image frame sequence through a bird recognition model to obtain the motion track of the flying bird in the future for the second preset time.
Further, the processor 1001 may call the bird trajectory prediction program stored in the memory 1005, and further perform the following operations:
before the step of acquiring the first monitoring image frame sequence within a first preset time period before the bird appearance time, the method further includes:
acquiring a real-time monitoring image frame sequence, and detecting whether a flying bird appears in the real-time monitoring image frame sequence through the bird recognition model;
and if the flying bird appears, executing the step of acquiring the first monitoring image frame sequence within a first preset time before the flying bird appearance moment.
Further, the processor 1001 may call the bird trajectory prediction program stored in the memory 1005, and further perform the following operations:
after the step of detecting whether the bird appears in the real-time monitoring image frame sequence through the bird recognition model, the method further comprises the following steps:
and if no flying bird appears, executing the step of acquiring the real-time monitoring image frame sequence.
Further, the processor 1001 may call the bird trajectory prediction program stored in the memory 1005, and further perform the following operations:
before the step of detecting whether the flying bird appears in the real-time monitoring image frame sequence through the bird identification model, the method further comprises the following steps:
training based on the yolov5 network model to obtain the bird recognition model.
Further, the processor 1001 may call the bird trajectory prediction program stored in the memory 1005, and further perform the following operations:
before the step of inputting the first monitoring image frame sequence into the bird trajectory prediction model, the method further comprises:
and constructing an ST-LSTM network model, and training based on the ST-LSTM network model to obtain the flying bird trajectory prediction model.
Further, the processor 1001 may call the bird trajectory prediction program stored in the memory 1005, and further perform the following operations:
the network structure of the ST-LSTM network model comprises an encoding structure and a decoding structure, and the step of constructing the ST-LSTM network model comprises the following steps:
extracting a state value in a vertical direction from the spatial stacking sequence structure, and transmitting the state value in the vertical direction;
increasing a pass through operation of the state values between the encoding structure and the decoding structure;
and transmitting the state value and the output value of the last layer of the coding structure to the first layer of the decoding structure through the transmission operation, and connecting to obtain the ST-LSTM network model.
Further, the processor 1001 may call the bird trajectory prediction program stored in the memory 1005, and further perform the following operations:
the step of training to obtain the bird trajectory prediction model based on the ST-LSTM network model comprises the following steps:
arranging the first monitoring image frame sequence according to the time sequence, and training the ST-LSTM network model by using the first monitoring image frame sequence as training input data;
comparing the second monitoring image frame sequence with a third monitoring image frame sequence with a second preset time length in the actual future according to the time sequence;
and adjusting parameters of the ST-LSTM network model based on the error obtained by the comparison calculation to obtain the flying bird trajectory prediction model.
The operating device calls the bird repelling program stored in the memory 1005 through the processor 1001; the bird repelling method uses the motion track obtained above and performs the following operation:
and adjusting the bird repelling device to perform bird repelling according to the motion track so as to repel the flying birds in the flying process of the flying birds.
Further, the processor 1001 may call the bird repelling program stored in the memory 1005, and also perform the following operations:
the bird repelling method further comprises the following steps:
when the bird repelling device is a video laser bird repelling device, the angle of a laser transmitter in the video laser bird repelling device is adjusted, so that the laser transmitter emits laser and repeatedly sweeps the last track point in the motion track, and the flying bird is repelled in the flying process.
An embodiment of the present invention provides a method for predicting a bird trajectory, and referring to fig. 2, fig. 2 is a schematic flow diagram of a first embodiment of a method for predicting a bird trajectory according to the present invention.
In this embodiment, the bird trajectory prediction method includes:
step S10: the method comprises the steps of obtaining a first monitoring image frame sequence within a first preset time before the bird appearance time.
Taking a video laser bird repelling device as an example: when, during the device's rotation, a bird is detected in the video images of the monitoring area collected by the camera in real time, a first monitoring image frame sequence covering a continuous first preset duration is stored, for example a sequence A of 30 frames covering 3 continuous seconds. The choice of 30 frames over 3 seconds matches the hardware limits of the device's camera and laser transmitter and can be tuned according to the actual effect. Moreover, the camera's monitoring area should be at least as large as the area of the photovoltaic panels to be protected, so that the laser transmitter can drive a bird away before it lands on a panel.
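Keeping sequence A available "within a first preset duration before the bird appears" implies the frames must already be buffered when detection fires. A minimal sketch, assuming 30 frames approximate 3 s of video (i.e. roughly 10 fps, an illustrative figure):

```python
from collections import deque

class FrameBuffer:
    """Rolling buffer that always holds the most recent N frames,
    so sequence A exists the moment a bird is first detected."""

    def __init__(self, max_frames=30):  # 30 frames ~ 3 s at ~10 fps
        self.frames = deque(maxlen=max_frames)

    def push(self, frame):
        # Oldest frame is dropped automatically once the buffer is full.
        self.frames.append(frame)

    def snapshot(self):
        # Copy of the buffered sequence, oldest frame first.
        return list(self.frames)

buf = FrameBuffer(max_frames=30)
for i in range(45):              # simulate 4.5 s of capture
    buf.push(f"frame-{i}")
sequence_a = buf.snapshot()      # keeps only frame-15 .. frame-44
```

Real frames would be image arrays from the camera; strings stand in for them here.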
Step S20: and inputting the first monitoring image frame sequence into a bird trajectory prediction model, and predicting to obtain a second monitoring image frame sequence with a second preset time length in the future.
The first monitoring image frame sequence A is input to the bird trajectory prediction model, which makes a prediction and outputs a second monitoring image frame sequence covering a second preset duration in the future, for example a sequence B of 30 frames covering the next 3 seconds.
Step S30: and detecting the coordinate position of the flying bird in the second monitoring image frame sequence through a bird recognition model to obtain the motion track of the flying bird in the future for the second preset time.
The bird recognition model then detects the coordinate position of the bird in each frame of the second monitoring image frame sequence B, yielding the bird's motion track over the second preset duration in the future, e.g. the next 3 seconds.
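Turning the per-frame detections into a motion track can be as simple as collecting the centre of each detected bounding box in frame order. This is a sketch under the assumption that the recognition model returns one (x1, y1, x2, y2) box per frame, or nothing when no bird is found; the patent does not specify the box format.

```python
def bbox_center(box):
    # Centre point of an (x1, y1, x2, y2) bounding box in pixels.
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2, (y1 + y2) / 2)

def motion_track(detections):
    """detections: one bounding box per predicted frame, or None
    for frames where the recognition model found no bird."""
    return [bbox_center(b) for b in detections if b is not None]

# Boxes from three predicted frames; the middle frame has no bird.
boxes = [(10, 10, 20, 20), None, (30, 14, 40, 24)]
track = motion_track(boxes)  # [(15.0, 15.0), (35.0, 19.0)]
```

The last point of this track is the one the bird repelling device aims at.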
In this embodiment, a first monitoring image frame sequence within a first preset duration before the moment the bird appears is acquired; the sequence is input to the bird trajectory prediction model, which predicts a second monitoring image frame sequence covering a second preset duration in the future; and the bird recognition model detects the coordinate position of the bird in the second sequence to obtain the bird's motion track over that future duration.
When a bird is detected in the video images of the monitoring area collected by the camera in real time, a first monitoring image frame sequence covering a continuous first preset duration is stored. This sequence is input to the bird trajectory prediction model, which predicts and outputs a second monitoring image frame sequence covering a second preset duration in the future. The bird recognition model then detects the coordinate position of the bird in the second sequence, yielding the bird's motion track over that future duration.
The bird repelling device is then adjusted to repel the bird according to this motion track, for example by adjusting the angle of the laser transmitter in a video laser bird repelling device so that the emitted laser repeatedly sweeps the last track point of the motion track, repelling the bird while it is in flight. Because the device acts while the bird is still flying, it effectively prevents birds from settling on the photovoltaic modules, so birds can be dispersed efficiently across the photovoltaic power station.
An embodiment of the present invention provides a method for predicting a bird trajectory, and referring to fig. 3, fig. 3 is a schematic flow chart of another embodiment of the method for predicting a bird trajectory according to the present invention.
In this embodiment the bird repelling device is a video laser bird repelling device, and both the first and the second monitoring image frame sequences cover 3 seconds at 30 frames. These durations and frame counts are chosen to match the hardware limits and can be tuned according to the actual effect; they are not limiting. The bird trajectory prediction method comprises the following steps:
the bird recognition model was trained using the yolov5 network model.
And training a flying bird track prediction model by using the ST-LSTM network model.
And detecting whether the flying birds appear in the real-time monitoring image frame sequence through a bird recognition model.
If a bird appears, acquire and store the first monitoring image frame sequence A, 30 frames covering the 3 continuous seconds immediately before the moment the bird appeared.
The first monitoring image frame sequence A is input to the bird trajectory prediction model, which predicts and outputs a second monitoring image frame sequence B of 30 frames covering the next 3 seconds.
The bird recognition model detects the coordinate position of the bird in the second monitoring image frame sequence B, yielding the bird's motion track over the next 3 seconds.
And adjusting the angle of a laser transmitter in the video laser bird repelling device according to the movement track, so that the laser transmitter transmits laser to repeatedly scan the last track point in the movement track, thereby repelling the birds in the flying process.
For example, in this embodiment the first monitored image frame sequence A within 3 s is obtained, and the predicted second monitored image frame sequence B may cover the future 2 s, 4 s or 3 s. The values of the first and second preset durations should be as close to each other as possible to ensure prediction accuracy, and the first monitored image frame sequence is preferably acquired at equal time intervals.
Optionally, before the step of acquiring the first monitoring image frame sequence within a first preset time period before the bird appearance time, the method further includes:
acquiring a real-time monitoring image frame sequence, and detecting whether a flying bird appears in the real-time monitoring image frame sequence through the bird recognition model;
and if the flying bird appears, executing the step of acquiring the first monitoring image frame sequence within a first preset time before the flying bird appearance moment.
Referring to fig. 3, before the step of acquiring the first monitoring image frame sequence within the first preset time length before the flying bird appearance time, the flying bird appearance time needs to be obtained. A real-time monitoring image frame sequence collected by the camera of the video laser bird repelling device is acquired and detected by the pre-trained bird recognition model to judge whether a flying bird appears; if a flying bird appears, the first monitoring image frame sequence within the first preset time length before the flying bird appearance moment is obtained and stored, for example a first monitoring image frame sequence A of 30 frames in total within the continuous 3 seconds.
Optionally, after the step of detecting whether a bird appears in the real-time monitoring image frame sequence through the bird recognition model, the method further includes:
and if no flying bird appears, executing the step of acquiring the real-time monitoring image frame sequence.
Referring to fig. 3, a real-time monitoring image frame sequence acquired by a camera of the video laser bird repelling device is acquired, the real-time monitoring image frame sequence is detected through a pre-trained bird recognition model, whether a bird appears is judged, if no bird appears, the real-time monitoring image frame sequence acquired by the camera is continuously acquired through the video laser bird repelling device, and whether a bird appears in the real-time monitoring image frame sequence is continuously detected through the bird recognition model.
Optionally, before the step of detecting whether a bird appears in the real-time monitoring image frame sequence through the bird recognition model, the method further includes:
training based on the yolov5 network model to obtain the bird recognition model.
Referring to fig. 4, the overall network structure of the yolov5 network model is mainly divided into four parts: input data, the Backbone network structure, the Neck network structure, and the Prediction structure. Wherein,
firstly, input data mainly comprises Mosaic data enhancement and self-adaptive anchor frame calculation.
The optimal anchor frame values for different training sets are calculated adaptively during each training run. In the Yolo algorithm, anchor boxes with initially set length and width exist for different data sets. During network training, the network outputs prediction boxes on the basis of the initial anchor boxes, compares them with the ground-truth boxes, calculates the difference between the two, and then reversely updates and iterates the network parameters.
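The comparison between a prediction box and a ground-truth box is typically measured by Intersection-over-Union (IoU). The following is an illustrative Python sketch, not code from the patent; the function name and the (x1, y1, x2, y2) box format are assumptions:

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

A perfectly matching prediction gives IoU 1.0, disjoint boxes give 0.0, and the difference 1 - IoU is the simplest form of the box regression loss discussed below.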
And secondly, the Backbone network structure consists of a Focus structure and a CSP structure.
Focus structure: the original 608 × 608 × 3 image is input into the Focus structure and turned into a 304 × 304 × 12 feature map by a slicing operation; it then undergoes a convolution with 32 convolution kernels, finally becoming a 304 × 304 × 32 feature map.
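The slicing operation can be illustrated with a short NumPy sketch (illustrative, not the patent's implementation): every second pixel is sampled into four sub-images that are stacked along the channel axis, which is why 608 × 608 × 3 becomes 304 × 304 × 12.

```python
import numpy as np

def focus_slice(img):
    """Focus slicing: sample every other pixel into 4 sub-images and
    stack them on the channel axis, so (H, W, C) -> (H/2, W/2, 4C)."""
    return np.concatenate(
        [img[0::2, 0::2, :],   # even rows, even cols
         img[1::2, 0::2, :],   # odd rows, even cols
         img[0::2, 1::2, :],   # even rows, odd cols
         img[1::2, 1::2, :]],  # odd rows, odd cols
        axis=-1)

x = np.zeros((608, 608, 3))
print(focus_slice(x).shape)  # (304, 304, 12)
```

No pixel information is lost in this step; the spatial resolution is traded for channel depth before the first convolution.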
CSP structure: by taking the CSPNet network structure as a reference, a CSP structure is designed in the backbone network and consists of a convolutional layer and X Res units Concat. Where Res unit refers to a residual structure in the Resnet network for reference, the network can be constructed deeper, Concat refers to tensor splicing, and the dimension of two tensors, for example, 26 × 256 and 26 × 512, is expanded, and as a result, 26 × 768 is obtained, and Concat has the same function as route in cfg files.
And thirdly, the Neck network structure consists of an FPN structure and a PAN structure.
The FPN + PAN structure draws on PANet, which was mainly applied in the field of image segmentation, and further improves the feature extraction capability. The FPN works top-down, transmitting and fusing high-level feature information by up-sampling to obtain feature maps for prediction. In the field of target detection, in order to better extract fused features, layers are usually inserted between the Backbone and the output layer; this part is called the Neck and corresponds to the neck of the target detection network. The Neck structure of yolov5 adopts the CSP2 structure designed with reference to CSPnet, which strengthens the network's feature-fusion capability.
Fourthly, Prediction structure:
The Loss function of the target detection task generally consists of two parts: a classification loss function and a Bounding Box regression loss function. Yolov5 adopts CIOU_Loss as the Bounding Box loss function, which takes the aspect-ratio information of the bounding box into account. CIOU_Loss is equivalent to adding, to the loss function, a penalty term based on the minimum enclosing box formed by the ground-truth box and the prediction box: the proportion of the enclosing-box area not covered by the union of the two boxes, where smaller is better. Adopting the CIOU_Loss regression makes the regression of the prediction box faster and more accurate.
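A minimal Python sketch of a CIoU-style loss (illustrative, not the patent's implementation; box format (x1, y1, x2, y2), epsilon terms and helper names are assumptions). It combines 1 - IoU with a center-distance term normalised by the enclosing-box diagonal and an aspect-ratio consistency term:

```python
import math

def ciou_loss(pred, gt):
    """CIoU-style loss for boxes (x1, y1, x2, y2)."""
    # intersection / union
    iw = max(0.0, min(pred[2], gt[2]) - max(pred[0], gt[0]))
    ih = max(0.0, min(pred[3], gt[3]) - max(pred[1], gt[1]))
    inter = iw * ih
    area_p = (pred[2] - pred[0]) * (pred[3] - pred[1])
    area_g = (gt[2] - gt[0]) * (gt[3] - gt[1])
    iou = inter / (area_p + area_g - inter + 1e-9)
    # squared distance between box centres
    rho2 = ((pred[0] + pred[2]) / 2 - (gt[0] + gt[2]) / 2) ** 2 \
         + ((pred[1] + pred[3]) / 2 - (gt[1] + gt[3]) / 2) ** 2
    # squared diagonal of the smallest enclosing box
    cw = max(pred[2], gt[2]) - min(pred[0], gt[0])
    ch = max(pred[3], gt[3]) - min(pred[1], gt[1])
    c2 = cw ** 2 + ch ** 2 + 1e-9
    # aspect-ratio consistency term
    v = (4 / math.pi ** 2) * (math.atan((gt[2] - gt[0]) / (gt[3] - gt[1]))
                              - math.atan((pred[2] - pred[0]) / (pred[3] - pred[1]))) ** 2
    alpha = v / (1 - iou + v + 1e-9)
    return 1 - iou + rho2 / c2 + alpha * v
```

For identical boxes all three terms vanish; for disjoint boxes the center-distance term keeps a useful gradient even though the IoU term alone would be flat at 1.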
Optionally, the step of training the bird recognition model based on the yolov5 network model includes:
acquiring a monitoring area image, and marking the flying birds in the monitoring area image to obtain a flying bird data set;
constructing the yolov5 network model, and training the yolov5 network model through the bird data set;
and adjusting parameters of the yolov5 network model based on the training result to obtain the bird identification model.
Referring to fig. 5, a bird image of a monitored area is acquired, birds in the bird image are marked to produce a bird data set, and the bird data set is divided into a training set, a test set and a verification set according to a certain proportion. The method comprises the steps of constructing a yolov5 network model, training the yolov5 network model through a flying bird data set obtained by acquiring flying bird images, adjusting parameters of the yolov5 network model based on a training result to obtain a bird recognition model, and recognizing flying birds by using the trained bird recognition model.
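The split of the bird data set into training, test and verification sets can be sketched as follows (an illustrative sketch; the 8:1:1 ratio, function name and seed are assumptions, since the patent only says "a certain proportion"):

```python
import random

def split_dataset(samples, ratios=(0.8, 0.1, 0.1), seed=0):
    """Shuffle the labelled bird images and split them into
    training / test / verification subsets by the given ratios."""
    items = list(samples)
    random.Random(seed).shuffle(items)  # deterministic shuffle for reproducibility
    n = len(items)
    n_train = int(n * ratios[0])
    n_test = int(n * ratios[1])
    return (items[:n_train],
            items[n_train:n_train + n_test],
            items[n_train + n_test:])
```

Fixing the seed makes the split reproducible across training runs, which matters when comparing parameter adjustments of the yolov5 model.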
The yolov5 network model uses the FPN enhanced feature extraction network to replace PAN, so that the model is simpler and faster. In the detection of the fast-moving flying bird, based on the light weight and the real-time performance of the yolov5 network model, the bird recognition model can more quickly recognize the flying bird from the real-time monitoring image frame sequence.
Optionally, before the step of inputting the first monitored image frame sequence into the bird trajectory prediction model, the method further includes:
and constructing an ST-LSTM network model, and training based on the ST-LSTM network model to obtain the flying bird trajectory prediction model.
LSTM (Long Short Term Memory) is a special recurrent neural network. Unlike a general feedforward neural network, the LSTM can analyze its input as a time series. Originally designed to solve the long-term dependence problem prevalent in general RNNs, the key to LSTM is its use of the neurons' hidden states (cell states), which can be understood as the recurrent network's "memory" of the input data; they allow information to be transferred and expressed effectively across long time sequences without useful information being ignored (forgotten) over long spans. At the same time, LSTM also mitigates the gradient vanishing/exploding problem in RNNs.
In order to construct a spatio-temporal sequence prediction model that captures temporal and spatial information simultaneously, the fully-connected weights in the LSTM are replaced by convolutions, yielding convLSTM. The convLSTM unit is computed as follows, where Ct-1: the state value at time t-1; Ht-1: the output at time t-1; Xt: the input at time t; Ht: the output at time t; Ct: the state value at time t; it: the input gate; ft: the forget gate; ot: the output gate.
$$i_t = \sigma(W_{xi} * X_t + W_{hi} * H_{t-1} + W_{ci} \odot C_{t-1} + b_i)$$
$$f_t = \sigma(W_{xf} * X_t + W_{hf} * H_{t-1} + W_{cf} \odot C_{t-1} + b_f)$$
$$C_t = f_t \odot C_{t-1} + i_t \odot \tanh(W_{xc} * X_t + W_{hc} * H_{t-1} + b_c)$$
$$o_t = \sigma(W_{xo} * X_t + W_{ho} * H_{t-1} + W_{co} \odot C_t + b_o)$$
$$H_t = o_t \odot \tanh(C_t)$$
The resulting model of convLSTM is shown in FIG. 6.
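The five equations above can be sketched as a single update function. The following is an illustrative NumPy sketch, not the patent's code: `conv` stands for the convolution operator `*`, and for the demonstration it is reduced to a scalar multiplication (a 1 × 1 convolution with a single weight); all weight names are assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def convlstm_step(x_t, h_prev, c_prev, W, b, conv):
    """One convLSTM step: gates i/f/o and the cell update, with
    conv(.) standing for '*' and elementwise products for the dot product."""
    i = sigmoid(conv(W['xi'], x_t) + conv(W['hi'], h_prev) + W['ci'] * c_prev + b['i'])
    f = sigmoid(conv(W['xf'], x_t) + conv(W['hf'], h_prev) + W['cf'] * c_prev + b['f'])
    c = f * c_prev + i * np.tanh(conv(W['xc'], x_t) + conv(W['hc'], h_prev) + b['c'])
    o = sigmoid(conv(W['xo'], x_t) + conv(W['ho'], h_prev) + W['co'] * c + b['o'])
    h = o * np.tanh(c)
    return h, c

# demo with a scalar stand-in for the convolution
conv = lambda w, a: w * a
W = {k: 0.5 for k in ['xi', 'hi', 'ci', 'xf', 'hf', 'cf', 'xc', 'hc', 'xo', 'ho', 'co']}
b = {k: 0.0 for k in ['i', 'f', 'c', 'o']}
h1, c1 = convlstm_step(np.ones((4, 4)), np.zeros((4, 4)), np.zeros((4, 4)), W, b, conv)
```

Because the output is gated through tanh, every entry of H stays within (-1, 1) regardless of the input magnitude.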
The conventional convLSTM model described above has the following problem: as shown in fig. 7, consider an encoding-decoding structure of four stacked ConvLSTM layers, where an input frame enters the first layer and the future video sequence is generated at the fourth layer. In this process, the spatial dimension is gradually encoded by the CNN structure of each layer, while the memory cells along the temporal dimension are independent of each other and updated at each time step. As a result, the bottom layer ignores the temporal information of the top layer at the previous time step; this is the disadvantage of ConvLSTM's layer-independent memory mechanism. In fact, in a simple parallel stacked structure the layers remain independent after stacking, and the bottommost cell at time t ignores the temporal information of the topmost cell at time t-1; there is no temporal information connection between the corresponding cells.
If the information of the input sequence is to be preserved, the information extracted by the CNNs at different levels is needed. That is, for each input, information is abstracted layer by layer across the network structure, and the final abstract information extracted should be retained for the next input at the first layer. Therefore, the network structure shown in fig. 8 is proposed, where M is the cell output (labeled M in the drawing for the sake of distinction), and the formulas are changed according to the structure. The formulas of the partially modified ConvLSTM model are:
$$H_t^0 = H_{t-1}^L,\qquad C_t^0 = C_{t-1}^L$$
$$i_t = \sigma(W_{xi} * X_t \mathbf{1}_{\{l=1\}} + W_{hi} * H_t^{l-1} + W_{ci} \odot C_t^{l-1} + b_i)$$
$$f_t = \sigma(W_{xf} * X_t \mathbf{1}_{\{l=1\}} + W_{hf} * H_t^{l-1} + W_{cf} \odot C_t^{l-1} + b_f)$$
$$C_t^l = f_t \odot C_t^{l-1} + i_t \odot \tanh(W_{xc} * X_t \mathbf{1}_{\{l=1\}} + W_{hc} * H_t^{l-1} + b_c)$$
$$o_t = \sigma(W_{xo} * X_t \mathbf{1}_{\{l=1\}} + W_{ho} * H_t^{l-1} + W_{co} \odot C_t^l + b_o)$$
$$H_t^l = o_t \odot \tanh(C_t^l)$$
Comparison with ConvLSTM shows: in the original ConvLSTM, the input hidden state and cell state both come from the previous time step, while in the modified structure, l-1 in the formula indicates that, when a cell is not in the bottommost layer, its hidden state and cell state both come from the previous layer (l-1); {l=1} in the formula indicates the special case when l=1, namely that the partially improved ConvLSTM model has a top-to-bottom propagation path (the polyline in the middle of the network structure). The partially improved ConvLSTM model is as shown in FIG. 9.
However, the structure of the partially improved ConvLSTM model has the following disadvantages: 1. Removing the time flow in the horizontal direction sacrifices temporal consistency, because there is no time flow between different times within the same layer. 2. Memory requires a longer path to flow between distant states, making the gradient more likely to vanish. Therefore, a new building block, ST-LSTM, is introduced, namely the ST-LSTM model in the present application, as shown in FIG. 10.
As shown in FIG. 11, compared with the original LSTM model structure, the two identical structures in the ST-LSTM model are both LSTMs, except that the cell output and hidden state of the original LSTM model are replaced by M; the remaining output part of the ST-LSTM model is in effect equivalent to integrating the outputs of the two LSTM structures together and computing the output from both. The upper half of the ST-LSTM model is called the standard temporal memory and the lower half the spatio-temporal memory; the upper half is no different from an ordinary LSTM, and the lower half is equivalent to merging c and h into M, i.e., the spatio-temporal memory state.
Therefore, the network structure of the ST-LSTM model is equivalent to adding one more M state to the original infrastructure, as shown in fig. 12; through the M-state polyline, the information of the top layer at the previous time step is connected to the bottom layer at the current time step, and the M state is also introduced in the vertical direction. Such a structure is in effect equivalent to integrating the two states in the left part of the partially improved ConvLSTM model's network structure (FIG. 8) into a single state M, and then integrating that structure with the right part to obtain the ST-LSTM model's network structure; in this way the present application resolves the two disadvantages above by using the ST-LSTM model.
The process of calculating the state values transferred in the horizontal direction of the ST-LSTM network model constructed in the present embodiment is as follows, wherein,
(1)*: representing a convolution operation; ☉: representing a dot product operation; l: represents a network layer number; t: represents a time of day; ct-1: the state value transmitted in the horizontal direction at the moment t-1; ht-1: output at time t-1; xt: inputting at the time t; ht: output at time t; ct: the state value transmitted in the horizontal direction at the time t; gt: the new state value at time t.
(2) W: representing a weight parameter; b: representing the bias parameter.
(3) it: input gate; ft: forget gate.
(4) tan h: representing an activation function; σ: representing an activation function.
$$g_t = \tanh(W_{xg} * X_t + W_{hg} * H_{t-1}^l + b_g)$$
$$i_t = \sigma(W_{xi} * X_t + W_{hi} * H_{t-1}^l + b_i)$$
$$f_t = \sigma(W_{xf} * X_t + W_{hf} * H_{t-1}^l + b_f)$$
$$C_t^l = f_t \odot C_{t-1}^l + i_t \odot g_t$$
The process of calculating the state value transferred in the vertical direction of the ST-LSTM network model constructed in the present embodiment is as follows, wherein,
(1)*: representing a convolution operation; ☉: representing a dot product operation; l-1: represents a network layer number; l: represents a network layer number; t: represents a time of day; mt: a state value transmitted in the vertical direction at time t; xt: inputting at the time t; gt': the new state value at time t.
(2) W': representing a weight parameter; b': representing the bias parameters.
(3) it': input gate; ft': forget gate.
(4) tan h: representing an activation function; σ: representing an activation function.
$$g_t' = \tanh(W_{xg}' * X_t + W_{mg} * M_t^{l-1} + b_g')$$
$$i_t' = \sigma(W_{xi}' * X_t + W_{mi} * M_t^{l-1} + b_i')$$
$$f_t' = \sigma(W_{xf}' * X_t + W_{mf} * M_t^{l-1} + b_f')$$
$$M_t^l = f_t' \odot M_t^{l-1} + i_t' \odot g_t'$$
The output calculation process of the ST-LSTM network model constructed in the present embodiment is as follows, in which,
(1)*: representing a convolution operation; ☉: representing a dot product operation; l: represents a network layer number; t: represents a time of day; mt: a state value transmitted in the vertical direction at time t; ct: the state value transmitted in the horizontal direction at the time t; xt: inputting at the time t; ht-1: output at time t-1; ht: and (4) output at the time t.
(2) W: representing a weight parameter; b: representing the bias parameter.
(3) ot: and an output gate.
(4) tan h: representing an activation function; σ: representing an activation function.
$$o_t = \sigma(W_{xo} * X_t + W_{ho} * H_{t-1}^l + W_{co} \odot C_t^l + W_{mo} \odot M_t^l + b_o)$$
$$H_t^l = o_t \odot \tanh(W_{1\times 1} * [C_t^l, M_t^l])$$
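The three groups of equations (horizontal state, vertical state, and output) can be combined into one illustrative NumPy sketch. This is not the patent's code: `conv` stands for the convolution `*` and is reduced to a scalar multiplication for the demo, biases are omitted for brevity, and the final term is a stand-in for the 1 × 1 convolution over the concatenated [C, M]; all weight names are assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def st_lstm_step(x_t, h_prev, c_prev, m_below, W, conv):
    """One ST-LSTM step: a temporal stream updating C from (X_t, H_{t-1}^l)
    and a spatio-temporal stream updating M from (X_t, M_t^{l-1}),
    fused by the output gate."""
    # horizontal (standard temporal) memory
    g = np.tanh(conv(W['xg'], x_t) + conv(W['hg'], h_prev))
    i = sigmoid(conv(W['xi'], x_t) + conv(W['hi'], h_prev))
    f = sigmoid(conv(W['xf'], x_t) + conv(W['hf'], h_prev))
    c = f * c_prev + i * g
    # vertical (spatio-temporal) memory
    g2 = np.tanh(conv(W['xg2'], x_t) + conv(W['mg'], m_below))
    i2 = sigmoid(conv(W['xi2'], x_t) + conv(W['mi'], m_below))
    f2 = sigmoid(conv(W['xf2'], x_t) + conv(W['mf'], m_below))
    m = f2 * m_below + i2 * g2
    # output gate fuses both memories
    o = sigmoid(conv(W['xo'], x_t) + conv(W['ho'], h_prev) + W['co'] * c + W['mo'] * m)
    h = o * np.tanh(conv(W['11'], c + m))  # stand-in for the 1x1 conv over [C, M]
    return h, c, m

# demo with a scalar stand-in for the convolution
conv = lambda w, a: w * a
keys = ['xg', 'hg', 'xi', 'hi', 'xf', 'hf', 'xg2', 'mg',
        'xi2', 'mi', 'xf2', 'mf', 'xo', 'ho', 'co', 'mo', '11']
W = {k: 0.5 for k in keys}
h, c, m = st_lstm_step(np.ones((4, 4)), np.zeros((4, 4)),
                       np.zeros((4, 4)), np.zeros((4, 4)), W, conv)
```

The two memory streams are updated independently and only meet in the output gate, which is exactly the structural point the fig. 11 discussion makes.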
Optionally, the network structure of the ST-LSTM network model includes an encoding structure and a decoding structure, and the step of constructing the ST-LSTM network model includes:
extracting a state value in a vertical direction from the spatial stacking sequence structure, and transmitting the state value in the vertical direction;
increasing a pass through operation of the state values between the encoding structure and the decoding structure;
and transmitting the state value and the output value of the last layer of the coding structure to the first layer of the decoding structure through the transmission operation, and connecting to obtain the ST-LSTM network model.
Referring to fig. 13, the network structure of the ST-LSTM network model is composed of two parts, an encoding structure and a decoding structure, respectively. Wherein, the modified units 1-3 represent modified convLSTM unit (ST-LSTM) structures. In the stacked network structure, a line connecting from the lowermost layer to the uppermost layer and from the uppermost layer to the lowermost layer represents that a state value in the vertical direction is extracted from the spatial stacked sequence structure, and is transferred in the vertical direction. The horizontal/vertical hidden state represents [ Ht, Mt ], where Ht is the state value of the horizontal output at time t, and Mt is the state value of the vertical output at time t.
convLSTM only extracts the state value in the horizontal direction on a time series, passes it in the horizontal direction, but does not consider the state values present in the spatially stacked sequence, so the ST-LSTM network model considers extracting the state value in the vertical direction from the spatially stacked sequence structure, passing it in the vertical direction. The state value in the vertical direction is beneficial to learning image characteristic information on each scale space by an ST-LSTM network model, better supports long-time sequence prediction of a second monitoring image frame sequence with a second preset time length in the future, and can relieve the phenomenon that the second monitoring image frame sequence with the second preset time length in the future predicted by the model is fuzzy as the sequence becomes longer.
In the network structure of the ST-LSTM network model, an information transfer operation, i.e., the "horizontal/vertical hidden state" in fig. 13, is added between the last layer of the coding structure and the first layer of the decoding structure: [Ht, Mt] at time t of the last layer of the coding structure is transferred to the input [Ht+1, Mt+1] at time t+1 of the first layer of the decoding structure, so as to avoid the uncertainty caused by random initialization.
Optionally, the step of obtaining the bird trajectory prediction model based on the ST-LSTM network model training includes:
arranging the first monitoring image frame sequence according to the time sequence, and training the ST-LSTM network model by using the first monitoring image frame sequence as training input data;
comparing the second monitoring image frame sequence with a third monitoring image frame sequence with a second preset time length in the actual future according to the time sequence;
and adjusting parameters of the ST-LSTM network model based on the error obtained by the comparison calculation to obtain the flying bird trajectory prediction model.
Referring to fig. 14, data preprocessing is first performed: for example, a first monitoring image frame sequence A of 30 frames in total within the 3 continuous historical seconds before the flying bird appears is arranged in time order and used as training input data to train the ST-LSTM network model.
Then the actual third monitoring image frame sequence of 30 frames in total within the future continuous 3 seconds is arranged in time order and compared with the predicted second monitoring image frame sequence B for the future 3 seconds output by the model, and parameters of the ST-LSTM network model are adjusted based on the error obtained by the comparison calculation, so as to obtain the flying bird trajectory prediction model.
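The error used to adjust the parameters can be, for example, a frame-wise mean-squared error between the predicted sequence and the actually observed future sequence (an illustrative sketch; the patent does not specify the loss function):

```python
import numpy as np

def sequence_mse(pred_frames, true_frames):
    """Frame-wise mean-squared error between the predicted sequence B and
    the actually observed future sequence, averaged over all frames."""
    assert len(pred_frames) == len(true_frames)
    return float(np.mean([np.mean((p - t) ** 2)
                          for p, t in zip(pred_frames, true_frames)]))
```

In training, the two sequences would each hold 30 frames; a perfect prediction gives a loss of 0, and the gradient of this loss with respect to the model parameters drives the adjustment step.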
The embodiment of the invention provides a bird repelling method, which is applied to the motion track obtained above and comprises the following steps:
and adjusting the bird repelling device to perform bird repelling according to the motion track, so that the flying birds are repelled in the flying process of the flying birds.
Optionally, the bird repelling method further comprises:
when the bird repelling device is a video laser bird repelling device, the angle of a laser transmitter in the video laser bird repelling device is adjusted, so that the laser transmitter emits laser and repeatedly sweeps the last track point in the motion track, and the flying bird is repelled in the flying process.
For example, the bird recognition model is used to detect the coordinate position of the bird in each of the 30 frames of the second monitoring image frame sequence B covering the future 3 seconds, and the center-point coordinates of the red rectangular frame containing the bird in each frame (the bird coordinate positions) are connected into a line, yielding the bird's motion trajectory for the future 3 seconds. By adjusting the angle of the laser transmitter in the video laser bird repelling device so that the laser transmitter repeatedly sweeps the last track point in the movement track, the bird is driven away while it is still in flight; this prevents the flying bird from staying on the photovoltaic modules and efficiently dispels flying birds in the photovoltaic power station.
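The step of connecting the per-frame box centers into a trajectory and taking the last track point can be sketched as follows (illustrative; the (x1, y1, x2, y2) box format and function names are assumptions):

```python
def box_center(box):
    """Centre of a detection box (x1, y1, x2, y2)."""
    return ((box[0] + box[2]) / 2.0, (box[1] + box[3]) / 2.0)

def motion_trajectory(detections_per_frame):
    """Connect the per-frame bird box centres into an ordered trajectory
    and return (trajectory, last_point); the laser is aimed at last_point."""
    trajectory = [box_center(b) for b in detections_per_frame]
    return trajectory, trajectory[-1]
```

The returned last point is the anchor for the laser transmitter's angle adjustment; repeatedly scanning it intercepts the bird at the end of its predicted path.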
In addition, where hardware resources are abundant, the neural network models used for recognition and prediction can be upgraded to improve the bird repelling effect. For example, multiple flying birds can be identified at the same time, their movement tracks predicted, and several video laser bird repelling devices installed so as to drive multiple birds simultaneously. For example, the recognition area of the camera can be enlarged so that birds are identified, predicted and driven at a greater distance from the photovoltaic panels. For example, the active periods of nearby flying birds can be judged from historical repelling data, such as repelling times: all video laser bird repelling devices are started while the birds are active, and only the most frequently triggered devices are started while they are inactive. For example, during the birds' active period, i.e., the period when birds appear with high probability and frequency, the prediction duration can be shortened so that the birds are driven away as soon as possible.
In addition, an embodiment of the present invention further provides a bird trajectory prediction apparatus, where the bird trajectory prediction apparatus includes: the system comprises a memory, a processor and a bird trajectory prediction program stored on the memory and executable on the processor, wherein the bird trajectory prediction program is configured to implement the steps of the bird trajectory prediction method.
In addition, in order to achieve the above object, the present invention further provides a bird repelling system, which includes the bird trajectory prediction device and the bird repelling apparatus, wherein the bird repelling apparatus includes a camera for collecting video images in a monitoring area in real time and a structure capable of executing bird repelling action.
Optionally, the bird repelling device is a video laser bird repelling device.
Furthermore, an embodiment of the present invention also provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of the bird trajectory prediction method and/or the bird repelling method as described above.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or system that comprises the element.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) as described above and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present invention.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (13)

1. A bird trajectory prediction method is characterized by comprising the following steps:
acquiring a first monitoring image frame sequence within a first preset time before the flying bird occurrence time;
inputting the first monitoring image frame sequence into a bird trajectory prediction model, and predicting to obtain a second monitoring image frame sequence with a second preset time length in the future;
and detecting the coordinate position of the flying bird in the second monitoring image frame sequence through a bird recognition model to obtain the motion track of the flying bird in the future for the second preset time.
2. The method for predicting bird trajectory according to claim 1, further comprising, before the step of obtaining the first sequence of monitored image frames within a first preset time period before the bird appearance time, the steps of:
acquiring a real-time monitoring image frame sequence, and detecting whether a flying bird appears in the real-time monitoring image frame sequence through the bird recognition model;
and if the flying birds appear, executing the step of acquiring the first monitoring image frame sequence within a first preset time before the flying bird appearing moment.
3. The method of predicting bird trajectories of claim 2, further comprising, after the step of detecting whether a bird is present in the real-time monitoring image frame sequence by the bird recognition model:
and if no flying bird appears, executing the step of acquiring the real-time monitoring image frame sequence.
4. The method of predicting bird trajectories of claim 2, wherein prior to the step of detecting whether a bird is present in the sequence of real-time surveillance image frames via the bird recognition model, further comprising:
training based on the yolov5 network model to obtain the bird recognition model.
5. The method of predicting a bird trajectory of claim 1, prior to the step of inputting the first sequence of monitored image frames into a bird trajectory prediction model, further comprising:
and constructing an ST-LSTM network model, and training based on the ST-LSTM network model to obtain the flying bird trajectory prediction model.
6. The bird trajectory prediction method of claim 5, wherein the network structure of the ST-LSTM network model comprises an encoding structure and a decoding structure,
the step of constructing the ST-LSTM network model comprises the following steps:
extracting a state value in a vertical direction from the spatial stacking sequence structure, and transmitting the state value in the vertical direction;
increasing a pass through operation of the state values between the encoding structure and the decoding structure;
and transmitting the state value and the output value of the last layer of the coding structure to the first layer of the decoding structure through the transmission operation, and connecting to obtain the ST-LSTM network model.
7. The method of predicting a bird trajectory according to claim 5, wherein the step of training the bird trajectory prediction model based on the ST-LSTM network model comprises:
arranging the first monitoring image frame sequence according to a time sequence, and taking the first monitoring image frame sequence as training input data to train the ST-LSTM network model;
comparing the second monitoring image frame sequence with a third monitoring image frame sequence with a second preset time length in the actual future according to the time sequence;
and adjusting parameters of the ST-LSTM network model based on the error obtained by the comparison calculation to obtain the flying bird trajectory prediction model.
8. A method of repelling birds, applied to the trajectory of motion as claimed in claim 1, comprising the steps of:
and adjusting the bird repelling device to perform bird repelling according to the motion track, so that the flying birds are repelled in the flying process of the flying birds.
9. A method of repelling a bird as claimed in claim 8 further comprising:
when the bird repelling device is a video laser bird repelling device, the angle of a laser transmitter in the video laser bird repelling device is adjusted, so that the laser transmitter emits laser and repeatedly sweeps the last track point in the motion track, and the flying bird is repelled in the flying process.
10. A bird trajectory prediction apparatus characterized by comprising: a memory, a processor, a bird trajectory prediction program stored on the memory and executable on the processor, the bird trajectory prediction program configured to implement the steps of the bird trajectory prediction method of any one of claims 1-7.
11. A bird repelling system, comprising the bird trajectory prediction device of claim 10 and a bird repelling apparatus, wherein the bird repelling apparatus comprises a camera for acquiring video images in a monitored area in real time and a structure capable of performing bird repelling action.
12. A bird repellent system according to claim 11, wherein the bird repellent device is a video laser bird repellent device.
13. A computer-readable storage medium, characterized in that the computer-readable storage medium has stored thereon a computer program which, when being executed by a processor, carries out the steps of the bird trajectory prediction method according to any one of claims 1 to 7 and/or the bird repelling method according to any one of claims 8 to 9.
CN202210236163.4A 2022-03-10 2022-03-10 Bird trajectory prediction method, bird repelling device, bird trajectory prediction system and storage medium Pending CN114581851A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210236163.4A CN114581851A (en) 2022-03-10 2022-03-10 Bird trajectory prediction method, bird repelling device, bird trajectory prediction system and storage medium


Publications (1)

Publication Number Publication Date
CN114581851A true CN114581851A (en) 2022-06-03

Family

ID=81775201

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210236163.4A Pending CN114581851A (en) 2022-03-10 2022-03-10 Bird trajectory prediction method, bird repelling device, bird trajectory prediction system and storage medium

Country Status (1)

Country Link
CN (1) CN114581851A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116644862A (en) * 2023-07-24 2023-08-25 志成信科(北京)科技有限公司 Bird flight trajectory prediction method and device
CN116644862B (en) * 2023-07-24 2023-09-22 志成信科(北京)科技有限公司 Bird flight trajectory prediction method and device

Similar Documents

Publication Publication Date Title
CN108710126A (en) Automation detection expulsion goal approach and its system
US7596241B2 (en) System and method for automatic person counting and detection of specific events
CN110414400B (en) Automatic detection method and system for wearing of safety helmet on construction site
CN112148028B (en) Environment monitoring method and system based on unmanned aerial vehicle shooting image
CN110555420B (en) Fusion model network and method based on pedestrian regional feature extraction and re-identification
CN114912612A (en) Bird identification method and device, computer equipment and storage medium
CN114581851A (en) Bird trajectory prediction method, bird repelling device, bird trajectory prediction system and storage medium
Bowley et al. Detecting wildlife in uncontrolled outdoor video using convolutional neural networks
Zhuang et al. Image processing with the artificial swarm intelligence
CN113239877A (en) Farmland monitoring method based on computer vision and related equipment thereof
Ranjith et al. An IoT based Monitoring System to Detect Animal in the Railway Track using Deep Learning Neural Network
Leonid et al. Human wildlife conflict mitigation using YOLO algorithm
CN114342910A (en) Laser bird repelling method and related device
CN116758539B (en) Embryo image blastomere identification method based on data enhancement
Mehta et al. Exploring the efficacy of CNN and SVM models for automated damage severity classification in heritage buildings
CN117726853A (en) Bird protection method, device, equipment and storage medium based on artificial intelligence
CN116740635A (en) Embedded system for realizing social distance detection
Jurj et al. Real-time identification of animals found in domestic areas of Europe
KR102563346B1 (en) System for monitoring of structural and method ithereof
KR102321130B1 (en) Marine debris monitoring system based on image analysis and marine debris monitoring method using thereof
CN114359614A (en) System and method with robust classifier for protection against patch attacks
Fung et al. A neural network based intelligent intruders detection and tracking system using cctv images
Asad et al. Learning attention models for resource-constrained, self-adaptive visual sensing applications
Zhuang Image feature extraction with the perceptual graph based on the ant colony system
CN115410136B (en) Laser explosive disposal system emergency safety control method based on convolutional neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination