CN117475358B - Collision prediction method and device based on unmanned aerial vehicle vision

Info

Publication number
CN117475358B
CN117475358B (application CN202311812843.7A)
Authority
CN
China
Prior art keywords
unmanned aerial
key frame
aerial vehicle
collision prediction
dynamic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311812843.7A
Other languages
Chinese (zh)
Other versions
CN117475358A (en)
Inventor
陈麒
孙一卓
吴炎瑾
郑博
张欢
Current Assignee
Guangdong Southern Planning & Designing Institute Of Telecom Consultation Co ltd
Original Assignee
Guangdong Southern Planning & Designing Institute Of Telecom Consultation Co ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Southern Planning & Designing Institute Of Telecom Consultation Co ltd filed Critical Guangdong Southern Planning & Designing Institute Of Telecom Consultation Co ltd
Priority to CN202311812843.7A
Publication of CN117475358A
Application granted
Publication of CN117475358B
Legal status: Active

Classifications

    • G06V20/41 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G06T7/207 Analysis of motion for motion estimation over a hierarchy of resolutions
    • G06T7/254 Analysis of motion involving subtraction of images
    • G06V10/765 Image or video recognition or understanding using pattern recognition or machine learning, using rules for classification or partitioning the feature space
    • G06V10/82 Image or video recognition or understanding using pattern recognition or machine learning, using neural networks
    • G06V20/17 Terrestrial scenes taken from planes or by drones
    • G06V20/46 Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • G06V20/48 Matching video sequences

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Remote Sensing (AREA)
  • Computational Linguistics (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of unmanned aerial vehicles, and discloses a collision prediction method and device based on unmanned aerial vehicle vision. The method comprises the following steps: collecting video data for a target area by an unmanned aerial vehicle, wherein a dynamic target is present in the air of the target area; determining a key frame sequence corresponding to the video data, the key frame sequence comprising a plurality of key frames ordered by time; inputting the key frame sequence into the feature extraction layer of a pre-trained collision prediction model to obtain dynamic feature information of the key frame sequence for the dynamic target; analyzing the dynamic feature information with the linear integration layer of the collision prediction model to obtain a dynamic change analysis result for the dynamic target; and generating a collision prediction result corresponding to the unmanned aerial vehicle according to the dynamic change analysis result. The method and device can therefore predict the collision risk of an unmanned aerial vehicle efficiently, which helps to improve its flight safety.

Description

Collision prediction method and device based on unmanned aerial vehicle vision
Technical Field
The invention relates to the technical field of unmanned aerial vehicles, in particular to a collision prediction method and device based on unmanned aerial vehicle vision.
Background
An unmanned aerial vehicle, commonly called a drone, is an aircraft flown without an on-board pilot and controlled by radio remote control equipment and its own program control device. In modern society, the range of applications of unmanned aerial vehicles keeps expanding, covering fields from aerial photography to express delivery.
However, as the number of unmanned aerial vehicles grows, collision accidents occur more and more frequently and can easily cause loss of life and property. It is therefore important to provide a technical scheme that can efficiently predict the collision risk of an unmanned aerial vehicle so as to improve its flight safety.
Disclosure of Invention
The invention aims to solve the technical problem of providing a collision prediction method and device based on unmanned aerial vehicle vision, which can efficiently predict the collision risk of an unmanned aerial vehicle and is beneficial to improving the flight safety of the unmanned aerial vehicle.
In order to solve the technical problems, the first aspect of the invention discloses a collision prediction method based on unmanned aerial vehicle vision, which comprises the following steps:
acquiring video data for a target area based on an unmanned aerial vehicle; wherein a dynamic target exists in the air of the target area;
Determining a key frame sequence corresponding to the video data; the key frame sequence includes a plurality of key frames and all of the key frames in the key frame sequence are ordered based on timing;
inputting the key frame sequence to a feature extraction layer of a pre-trained collision prediction model to obtain dynamic feature information of the key frame sequence aiming at the dynamic target;
analyzing the dynamic characteristic information based on a linear integration layer of the collision prediction model to obtain a dynamic change analysis result aiming at the dynamic target;
and generating a collision prediction result corresponding to the unmanned aerial vehicle according to the dynamic change analysis result.
The second aspect of the invention discloses a collision prediction device based on unmanned aerial vehicle vision, which comprises:
The acquisition module is used for acquiring video data aiming at a target area based on the unmanned aerial vehicle; wherein a dynamic target exists in the air of the target area;
A determining module, configured to determine a key frame sequence corresponding to the video data; the key frame sequence includes a plurality of key frames and all of the key frames in the key frame sequence are ordered based on timing;
The feature extraction module is used for inputting the key frame sequence into a feature extraction layer of a pre-trained collision prediction model to obtain dynamic feature information of the key frame sequence aiming at the dynamic target;
The analysis module is used for analyzing the dynamic characteristic information based on the linear integration layer of the collision prediction model to obtain a dynamic change analysis result aiming at the dynamic target;
and the generation module is used for generating a collision prediction result corresponding to the unmanned aerial vehicle according to the dynamic change analysis result.
In a second aspect of the present invention, as an optional implementation manner, the specific manner of inputting the key frame sequence into the feature extraction layer of the pre-trained collision prediction model to obtain the dynamic feature information of the key frame sequence for the dynamic target includes:
inputting the key frame sequence to a feature extraction layer of a pre-trained collision prediction model;
Based on the feature extraction layer, performing feature extraction operation on all the key frames in the key frame sequence in parallel to obtain feature information corresponding to each key frame;
determining target feature information corresponding to each key frame from feature information corresponding to each key frame based on the feature extraction layer, wherein the target feature information is feature information aiming at the dynamic target;
Wherein the dynamic feature information of the key frame sequence for the dynamic target comprises the target feature information corresponding to each of the key frames.
In a second aspect of the present invention, the analyzing module analyzes the dynamic feature information based on a linear integration layer of the collision prediction model, and the specific manner of obtaining the dynamic change analysis result for the dynamic target includes:
For each key frame, comparing target feature information corresponding to the key frame with target feature information corresponding to an adjacent key frame based on a linear integration layer of the collision prediction model to obtain a feature comparison result corresponding to the key frame; the adjacent keyframes include keyframes adjacent to and time-ordered before the keyframe and/or keyframes adjacent to and time-ordered after the keyframe;
based on the linear integration layer, determining the result weight corresponding to each feature comparison result according to the feature comparison result corresponding to each key frame;
Calculating an output value corresponding to the linear integration layer according to the characteristic comparison result corresponding to each key frame and the result weight corresponding to each characteristic comparison result based on the linear integration formula corresponding to the linear integration layer;
And determining the output value corresponding to the linear integration layer as a dynamic change analysis result of the key frame sequence aiming at the dynamic target.
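The linear integration step described above can be sketched as follows. The comparison metric (Euclidean distance between adjacent key frames' target features) and the softmax weighting of the comparison results are illustrative assumptions, since the patent does not disclose the concrete linear integration formula or weighting rule.

```python
import numpy as np

def linear_integration(features, weights=None):
    """Sketch of the linear integration layer: compare each key frame's
    target-feature vector with its temporal neighbour, weight the
    comparison results, and combine them into one output value.

    features: (T, D) array-like of per-key-frame target features.
    The Euclidean distance between adjacent frames plays the role of the
    "feature comparison result"; by default the result weights are a
    softmax over those distances (an assumption, not the patent's rule).
    """
    features = np.asarray(features, dtype=float)
    # Feature comparison result per frame: distance to the previous key frame.
    diffs = np.linalg.norm(np.diff(features, axis=0), axis=1)  # shape (T-1,)
    if weights is None:
        exp = np.exp(diffs - diffs.max())
        weights = exp / exp.sum()  # assumed softmax result weights
    # Linear integration formula: weighted sum of the comparison results.
    return float(np.dot(weights, diffs))
```

Larger inter-frame feature changes thus contribute more to the output value, which is then passed on as the dynamic change analysis result.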
In a second aspect of the present invention, as an optional implementation manner, the generating module generates, according to the dynamic change analysis result, a collision prediction result corresponding to the unmanned aerial vehicle in a specific manner includes:
Calculating a collision prediction score corresponding to the dynamic change analysis result based on a score evaluation function of the collision prediction model;
And generating a collision prediction result corresponding to the unmanned aerial vehicle according to the collision prediction score corresponding to the dynamic change analysis result.
In a second aspect of the present invention, as an optional implementation manner, the generating module generates the collision prediction result corresponding to the unmanned aerial vehicle according to the collision prediction score corresponding to the dynamic change analysis result, where a specific manner of generating the collision prediction result corresponding to the unmanned aerial vehicle includes:
judging whether the collision prediction score corresponding to the dynamic change analysis result is larger than or equal to a preset collision prediction score;
When the collision prediction score is larger than or equal to the preset collision prediction score, determining that a collision prediction result corresponding to the unmanned aerial vehicle is that a collision risk exists for the unmanned aerial vehicle by the dynamic target, wherein the collision risk is used for indicating that the distance between the dynamic target and the unmanned aerial vehicle is smaller than or equal to a preset collision distance interval;
And when judging that the collision prediction score is smaller than the preset collision prediction score, determining that a collision prediction result corresponding to the unmanned aerial vehicle is that the collision risk of the dynamic target to the unmanned aerial vehicle does not exist.
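The threshold decision above amounts to a simple comparison, sketched below; the function name and the default preset score of 0.5 are placeholders, not values from the patent.

```python
def collision_prediction_result(score, preset_score=0.5):
    """Map a collision prediction score to a collision prediction result.

    preset_score stands in for the "preset collision prediction score";
    0.5 is an arbitrary placeholder, not a value from the patent.
    """
    if score >= preset_score:
        # Distance to the dynamic target is expected to fall within the
        # preset collision distance interval.
        return "collision risk"
    return "no collision risk"
```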
As an optional implementation manner, in the second aspect of the present invention, the determining module is further configured to determine, when the collision prediction result is used to indicate that the dynamic target has the collision risk for the unmanned aerial vehicle, a collision risk level corresponding to the unmanned aerial vehicle according to the collision prediction score;
wherein the apparatus further comprises:
The acquisition module is used for acquiring the flight planning route of the unmanned aerial vehicle and the environment information corresponding to the target area;
the determining module is further configured to determine a target flight route corresponding to the unmanned aerial vehicle according to the collision risk level, the flight planning route and the environmental information;
The determining module is further configured to determine flight control parameters corresponding to the unmanned aerial vehicle according to the target flight route;
and the control module is used for controlling the unmanned aerial vehicle to execute the flight operation related to the target flight route according to the flight control parameters.
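A minimal sketch of selecting the target flight route by collision risk level follows. The grading scheme, function name and route representation are all hypothetical: the patent does not specify how the risk level, flight planning route and environment information are combined.

```python
def choose_target_route(risk_level, planned_route, detour_routes):
    """Hypothetical selection of the target flight route by collision
    risk level: level 0 keeps the planned route; higher levels pick an
    increasingly conservative detour from detour_routes (assumed to be
    ordered from mildest to most conservative).
    """
    if risk_level <= 0 or not detour_routes:
        return planned_route
    # Clamp to the most conservative available detour.
    idx = min(risk_level - 1, len(detour_routes) - 1)
    return detour_routes[idx]
```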
As an alternative embodiment, in the second aspect of the present invention, the video data includes a plurality of video frames and all of the video frames in the video data are ordered based on timing;
the specific mode of determining the key frame sequence corresponding to the video data by the determining module comprises the following steps:
screening a plurality of candidate key frames corresponding to the time interval from the video data based on a preset time interval to obtain a candidate key frame sequence;
Acquiring window parameters of a sliding window corresponding to the candidate key frame sequence; the window parameters comprise the window length of the sliding window, the sliding length of the sliding window and the starting position of the sliding window in the candidate key frame sequence; the initial position is used for representing the arrangement order of the candidate key frames with the earliest time sequence contained in the sliding window in the candidate key frame sequence;
Selecting a number of candidate key frames corresponding to the window length as key frames based on the starting position to obtain a key frame sequence corresponding to the video data;
And, the apparatus further comprises:
And the updating module is used for updating the initial position of the sliding window based on the sliding length and the initial position after the generating module generates the collision prediction result corresponding to the unmanned aerial vehicle according to the dynamic change analysis result.
The third aspect of the invention discloses another collision prediction device based on unmanned aerial vehicle vision, which comprises:
A memory storing executable program code;
a processor coupled to the memory;
The processor invokes the executable program codes stored in the memory to execute the collision prediction method based on unmanned aerial vehicle vision disclosed in the first aspect of the invention.
A fourth aspect of the present invention discloses a computer storage medium storing computer instructions for performing the unmanned aerial vehicle vision-based collision prediction method disclosed in the first aspect of the present invention when the computer instructions are invoked.
Compared with the prior art, the embodiment of the invention has the following beneficial effects:
In the embodiment of the invention, video data aiming at a target area is collected based on an unmanned aerial vehicle; wherein a dynamic target exists in the air of the target area; determining a key frame sequence corresponding to the video data; the key frame sequence comprises a plurality of key frames and all key frames in the key frame sequence are ordered based on time sequence; inputting the key frame sequence into a feature extraction layer of a pre-trained collision prediction model to obtain dynamic feature information of the key frame sequence aiming at a dynamic target; based on a linear integration layer of the collision prediction model, analyzing dynamic characteristic information to obtain a dynamic change analysis result aiming at a dynamic target; and generating a collision prediction result corresponding to the unmanned aerial vehicle according to the dynamic change analysis result. Therefore, the method and the device can be used for acquiring the video data aiming at the target area based on the unmanned aerial vehicle, determining the key frame sequence corresponding to the video data, extracting the characteristics of the key frame sequence based on the characteristic extraction layer of the pre-trained collision prediction model, obtaining the dynamic characteristic information of the key frame sequence aiming at the dynamic target, analyzing the dynamic characteristic information based on the linear integration layer to obtain the corresponding dynamic change analysis result, generating the collision prediction result corresponding to the unmanned aerial vehicle according to the dynamic change analysis result, realizing the intelligent analysis of the collision risk of the unmanned aerial vehicle based on the vision of the unmanned aerial vehicle, capturing the corresponding characteristic data of the dynamic target in the air when the unmanned aerial vehicle is about to collide with the unmanned aerial vehicle without high-precision 
images, namely realizing the collision prediction analysis of the unmanned aerial vehicle by using fewer calculation resources, and simultaneously improving the detection accuracy of the dynamic target, thereby improving the analysis accuracy of the characteristic information change of the dynamic target, further improving the generation accuracy of the collision prediction result, realizing the efficient prediction of the collision risk of the unmanned aerial vehicle, and being favorable for improving the flight safety of the unmanned aerial vehicle to avoid the collision planned flight path of the dynamic target.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic flow chart of a collision prediction method based on unmanned aerial vehicle vision, which is disclosed in the embodiment of the invention;
Fig. 2 is a schematic flow chart of another collision prediction method based on unmanned aerial vehicle vision according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a collision prediction model according to an embodiment of the present invention;
Fig. 4 is a schematic diagram of a collision prediction method based on unmanned aerial vehicle vision according to an embodiment of the present invention;
fig. 5 is a schematic diagram of another collision prediction method based on unmanned aerial vehicle vision according to an embodiment of the present invention;
FIG. 6 is a flow chart of yet another collision prediction method based on unmanned aerial vehicle vision according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of a collision prediction device based on unmanned aerial vehicle vision according to an embodiment of the present invention;
Fig. 8 is a schematic structural diagram of another collision prediction apparatus based on unmanned aerial vehicle vision according to an embodiment of the present invention;
fig. 9 is a schematic structural diagram of a collision prediction apparatus based on unmanned aerial vehicle vision according to another embodiment of the present invention.
Detailed Description
In order that those skilled in the art will better understand the present invention, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the invention without inventive effort fall within the scope of the invention.
The terms "first", "second" and the like in the description, the claims and the above-described figures are used to distinguish between different objects and not necessarily to describe a sequential or chronological order. Furthermore, the terms "comprise" and "have", as well as any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, apparatus or product that comprises a list of steps or elements is not limited to those listed, but may optionally include other steps or elements not listed or inherent to such a process, method, apparatus or product.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the invention. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments.
The invention discloses a collision prediction method and device based on unmanned aerial vehicle vision. Video data for a target area can be collected by an unmanned aerial vehicle, and the key frame sequence corresponding to the video data can be determined. Features are then extracted from the key frame sequence by the feature extraction layer of a pre-trained collision prediction model to obtain dynamic feature information of the key frame sequence for a dynamic target; the dynamic feature information is analyzed by the linear integration layer to obtain a corresponding dynamic change analysis result; and a collision prediction result corresponding to the unmanned aerial vehicle is generated according to the dynamic change analysis result. The following describes this in detail.
Example 1
Referring to fig. 1, fig. 1 is a schematic flow chart of a collision prediction method based on unmanned aerial vehicle vision according to an embodiment of the present invention. The collision prediction method based on the unmanned aerial vehicle vision described in fig. 1 may be applied to a collision prediction device based on the unmanned aerial vehicle vision, where the device may include one of a prediction device, a prediction terminal, a prediction system and a server, and the server may include a local server or a cloud server, where the embodiment of the invention is not limited; the method can also be applied to an unmanned aerial vehicle or a control system corresponding to the unmanned aerial vehicle; the method may also be applied to a collision prediction model, and an exemplary architecture diagram of the collision prediction model may be shown in fig. 3, where fig. 3 is an architecture diagram of a collision prediction model disclosed in an embodiment of the present invention, and the embodiment of the present invention is not limited. As shown in fig. 1, the collision prediction method based on unmanned aerial vehicle vision may include the following operations:
101. video data for a target area is collected based on the drone.
In the embodiment of the invention, the target area is one of the areas allowing the unmanned aerial vehicle to fly; wherein, there are dynamic targets in the air of the target area, optionally, there may be one or more dynamic targets, the embodiment of the invention is not limited; by way of example, the dynamic target may be other objects capable of moving in the air, such as aircrafts, birds, balloons, etc., and the embodiments of the present invention are not limited; optionally, there may also be background static targets in the target area; by way of example, the background static target may be a static object such as a building, a tree, etc., and embodiments of the present invention are not limited; wherein the video data may comprise a plurality of video frames and all video frames are ordered based on timing, wherein the video frames are images.
In the embodiment of the invention, the video data can be acquired based on the camera equipment (such as a camera) corresponding to the unmanned aerial vehicle in the flight process of the unmanned aerial vehicle, and the embodiment of the invention is not limited.
It should be noted that when the dynamic target is about to collide with the unmanned aerial vehicle, its apparent size (visual angle) in the unmanned aerial vehicle's camera expands rapidly. This salient characteristic allows the collision-related feature information to be captured directly without requiring extremely high image precision.
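This visual-angle expansion cue can be quantified, for example, as the growth ratio of the detected target's bounding box area between two key frames; the box representation and the use of an area ratio are illustrative assumptions, not a formula from the patent.

```python
def expansion_ratio(prev_box, curr_box):
    """Area growth of a detected target's bounding box between two key
    frames, as one way to quantify the "visual angle expansion" cue.
    Boxes are (x1, y1, x2, y2) in pixels. What ratio counts as
    "about to collide" is not given in the patent.
    """
    def area(box):
        return max(0.0, box[2] - box[0]) * max(0.0, box[3] - box[1])
    prev = area(prev_box)
    return area(curr_box) / prev if prev > 0 else float("inf")
```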
102. And determining a key frame sequence corresponding to the video data.
In the embodiment of the invention, the key frame sequence comprises a plurality of key frames, and all key frames in the key frame sequence are ordered based on time sequence; wherein each key frame is one of the video frames in the video data; alternatively, all key frames in the key frame sequence may be consecutive frames; alternatively, the key frame may be a video frame after being preprocessed, which is not limited by the embodiment of the present invention.
103. And inputting the key frame sequence into a feature extraction layer of a pre-trained collision prediction model to obtain dynamic feature information of the key frame sequence aiming at a dynamic target.
In the embodiment of the invention, the collision prediction model may include a feature extraction layer, a linear integration layer and a score evaluation layer, which is not limited by the embodiment of the invention; optionally, the feature extraction layer of the collision prediction model may adopt a PeleeNet architecture. PeleeNet is a lightweight convolutional neural network (a DenseNet variant designed for real-time object detection on resource-constrained devices) that combines a small model size with high accuracy and real-time performance in the object detection field; it can improve the feature extraction accuracy for a dynamic target and thereby improve the detection accuracy of the dynamic target.
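The batched, parallel feature extraction performed by this layer can be sketched as follows; `toy_feature_fn` is a trivial stand-in for the PeleeNet backbone, which is not reproduced here.

```python
import numpy as np

def extract_dynamic_features(key_frames, feature_fn):
    """Stack the key frames of a sequence into one batch and run a
    single vectorised feature-extractor call over all frames at once,
    mirroring the parallel feature extraction of the feature extraction
    layer. feature_fn stands in for the PeleeNet backbone.
    """
    batch = np.stack([np.asarray(f, dtype=float) for f in key_frames])  # (T, H, W)
    return feature_fn(batch)  # (T, D) per-frame feature vectors

def toy_feature_fn(batch):
    """Trivial stand-in extractor: per-frame mean and std intensity."""
    flat = batch.reshape(batch.shape[0], -1)
    return np.stack([flat.mean(axis=1), flat.std(axis=1)], axis=1)
```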
104. And analyzing the dynamic characteristic information based on a linear integration layer of the collision prediction model to obtain a dynamic change analysis result aiming at the dynamic target.
In the embodiment of the invention, the linear integration layer of the collision prediction model is used for integrating and analyzing the dynamic characteristic information output by the characteristic extraction layer, and classifying the dynamic characteristic information after the integrated analysis, so that the output value corresponding to the linear integration layer is obtained by calculation.
105. And generating a collision prediction result corresponding to the unmanned aerial vehicle according to the dynamic change analysis result.
In the embodiment of the invention, the collision prediction result corresponding to the unmanned aerial vehicle is used for indicating whether collision risk exists between the unmanned aerial vehicle and the dynamic target; the collision risk is used for indicating that the distance between the dynamic target and the unmanned aerial vehicle is smaller than or equal to a preset collision distance interval; the collision distance interval is used for indicating a distance interval in which the possibility of collision between the unmanned aerial vehicle and the dynamic target is greater than the preset possibility.
Therefore, implementing the method described in the embodiment of the invention makes it possible to collect video data for a target area by an unmanned aerial vehicle, determine the corresponding key frame sequence, extract features from the key frame sequence with the feature extraction layer of a pre-trained collision prediction model to obtain dynamic feature information for the dynamic target, analyze that information with the linear integration layer to obtain a dynamic change analysis result, and generate the collision prediction result corresponding to the unmanned aerial vehicle from that result. Collision risk can thus be analyzed intelligently from the unmanned aerial vehicle's own vision. The characteristic appearance change of an airborne dynamic target about to collide can be captured without high-precision images, so collision prediction analysis requires fewer computing resources, which improves the feature extraction efficiency for the dynamic target and reduces the cost of collision prediction. Detection accuracy for the dynamic target is also improved, which improves the accuracy of analyzing changes in the target's feature information and hence the accuracy of the generated collision prediction result. In this way the collision risk of the unmanned aerial vehicle is predicted efficiently, helping to improve flight safety by planning a flight path that avoids collision with the dynamic target.
In an alternative embodiment, the video data may comprise a plurality of video frames and all video frames in the video data are ordered based on timing;
the determining the key frame sequence corresponding to the video data may include the following operations:
Based on a preset time interval, a plurality of candidate key frames corresponding to the time interval are screened out from video data, and a candidate key frame sequence is obtained;
Acquiring window parameters of the sliding window corresponding to the candidate key frame sequence; the window parameters comprise the window length of the sliding window, the sliding length of the sliding window, and the starting position of the sliding window in the candidate key frame sequence; the starting position represents the arrangement order, within the candidate key frame sequence, of the candidate key frame that is first in time sequence among those contained in the sliding window;
Selecting a number of candidate key frames corresponding to the window length as key frames based on the starting position to obtain a key frame sequence corresponding to the video data;
And after generating a collision prediction result corresponding to the unmanned aerial vehicle according to the dynamic change analysis result, the method may further include the following operations:
Based on the sliding length and the starting position, the starting position of the sliding window is updated.
In the embodiment of the invention, a time sequence data processing strategy is adopted, and the sliding window is the time sequence data sliding window in the strategy. The keyframe sequences selected by the sliding window may be integrated into a Batch (Batch), and then the Batch (i.e., keyframe sequences) may be input to the feature extraction layer. For example, the sliding length of the sliding window may be 1, that is, when the sliding window slides to select the key frame sequence each time, a frame with the first time sequence in the candidate key frames included in the sliding window is removed, and a frame is added at the rearmost end of the sliding window, which is not limited in the embodiment of the present invention.
Therefore, the optional embodiment can select the corresponding candidate key frame from the video data based on the preset time interval to obtain the candidate key frame sequence, so that the determined key frame can uniformly cover the whole video data, and the feature analysis accuracy of the dynamic target is improved; and sliding and selecting a corresponding number of candidate key frames with the window length in the candidate key frame sequence as the key frame sequence through sliding windows, so that the collision prediction model can more sensitively sense the characteristic information change trend of the continuous key frames, the dynamic monitoring capability of the dynamic target is improved, the analysis accuracy of the characteristic information change of the dynamic target is improved, and the collision risk of the unmanned aerial vehicle is predicted accurately.
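The candidate screening and sliding-window selection described above can be sketched as follows; the frame interval, window length, and sliding length used here are illustrative values, not ones fixed by the embodiment:

```python
def select_candidates(num_frames, interval):
    """Screen candidate key frames from the video frames at a fixed index interval."""
    return list(range(0, num_frames, interval))

def slide_windows(candidates, window_len, slide_len=1):
    """Yield successive key-frame sequences (batches) from the candidate list.

    After each window is emitted, the starting position is updated by the
    sliding length, dropping the earliest frame and appending a new one.
    """
    start = 0
    while start + window_len <= len(candidates):
        yield candidates[start:start + window_len]
        start += slide_len

# e.g. 100 video frames, every 5th frame is a candidate, window of 4, slide of 1
candidates = select_candidates(100, 5)        # [0, 5, 10, ..., 95]
batches = list(slide_windows(candidates, 4))  # first batch: [0, 5, 10, 15]
```

With a sliding length of 1, consecutive batches overlap in all but one frame, which is what lets the model track the frame-to-frame feature trend.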
In this optional embodiment, optionally, based on a preset time interval, a plurality of candidate key frames corresponding to the time interval are selected from the video data, so as to obtain a candidate key frame sequence, which may include the following operations:
And screening a plurality of candidate key frames corresponding to the time interval from the video data based on the preset time interval, and performing preprocessing operation on all the candidate key frames to obtain a candidate key frame sequence.
Wherein the preprocessing operation may include one or a combination of denoising processing, correction processing, scaling processing, data normalization processing, and data enhancement processing. The denoising processing may adjust the parameters of a filter according to the noise level of the image and the desired denoising effect, so as to smooth the image while preserving important image details and features as far as possible; the correction processing may correct distortion in the camera image through lens image correction, ensuring that the shape and position of objects in the image are captured accurately; the scaling processing may scale the image to a size appropriate for the model input; data normalization may map the range of pixel values of an image to a particular interval.
Wherein, optionally, in a training stage of the collision prediction model, the preprocessing operation performed on the training data set adopted for model training may further include data enhancement processing, where the training data set includes a plurality of training images; the data enhancement processing may include performing operations such as randomly rotating a training image in the training data set, randomly selecting an area, clipping, image flipping, adding noise, changing brightness and contrast, and transforming a color space, which is not limited in the embodiment of the present invention. Therefore, the diversity of model training data can be improved, the model training accuracy is improved, and the collision risk of the unmanned aerial vehicle can be accurately predicted based on the trained model.
Therefore, the optional embodiment can also preprocess the candidate key frames, and can improve the image quality of the key frames input to the feature extraction layer, so that the feature extraction accuracy is improved, and further the analysis accuracy of the feature change of the dynamic target is improved.
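A minimal sketch of the data normalization step described above (the other preprocessing operations are summarized in comments); the output interval is an illustrative assumption:

```python
import numpy as np

def normalize(frame, out_range=(0.0, 1.0)):
    """Map the pixel values of a frame into a specific interval.

    In a full pipeline, denoising, lens-distortion correction, and scaling to
    the model's input size would precede this step; they are omitted here.
    """
    frame = frame.astype(np.float32)
    lo, hi = out_range
    span = float(frame.max() - frame.min())
    if span == 0.0:
        return np.full_like(frame, lo)  # constant image: map everything to lo
    return lo + (frame - frame.min()) / span * (hi - lo)

img = np.array([[0, 128], [255, 64]], dtype=np.uint8)
norm = normalize(img)  # values now lie in [0, 1]
```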
In this alternative embodiment, optionally, the method may further comprise the operations of:
Judging whether the number of candidate key frames included in the candidate key frame sequence is greater than or equal to the window length of the sliding window;
When the number of the candidate key frames is judged to be greater than or equal to the window length of the sliding window, triggering and executing the operation of selecting the candidate key frames with the corresponding number of the window length as the key frames based on the initial position to obtain a key frame sequence corresponding to the video data;
When it is determined that the number of candidate key frames is smaller than the window length of the sliding window, the execution of step 101 is triggered.
It can be seen that, in this alternative embodiment, when the number of candidate key frames is greater than or equal to the size of the sliding window, a key frame sequence can be selected, and when the number of candidate key frames is less than the size of the sliding window, video data continues to be collected, so that the selection efficiency of the key frames can be improved, and the feature extraction efficiency of the key frames in the feature extraction layer can be improved.
In another alternative embodiment, inputting the key frame sequence into the feature extraction layer of the pre-trained collision prediction model to obtain the dynamic feature information of the key frame sequence for the dynamic target may include the following operations:
inputting the key frame sequence into a feature extraction layer of a pre-trained collision prediction model;
based on the feature extraction layer, performing feature extraction operation on all the key frames in the key frame sequence in parallel to obtain feature information corresponding to each key frame;
Based on the feature extraction layer, determining target feature information corresponding to each key frame from feature information corresponding to each key frame, wherein the target feature information is feature information aiming at a dynamic target;
the dynamic characteristics of the key frame sequence aiming at the dynamic target comprise target characteristic information corresponding to each key frame.
Optionally, the feature information corresponding to each key frame may further include static feature information corresponding to a background static target, which is not limited in the embodiment of the present invention.
Therefore, the optional embodiment can process the key frames in the key frame sequence in parallel based on the feature extraction layer of the pre-trained collision prediction model, perform feature extraction to obtain the corresponding feature information of each key frame, determine the target feature information aiming at the dynamic target from the feature information corresponding to each key frame, realize parallel and continuous processing of a plurality of key frames, and improve the feature extraction efficiency of the key frames, thereby improving the feature extraction efficiency of the dynamic target, further improving the detection efficiency of the dynamic target, and being beneficial to improving the feature change analysis efficiency of the dynamic target.
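The batched (parallel) per-frame feature extraction can be sketched as follows; the toy mean/variance features merely stand in for a real backbone and show only the per-key-frame output structure:

```python
import numpy as np

def extract_features(batch):
    """Toy stand-in for the feature extraction layer: one feature vector per key frame.

    `batch` has shape (N, H, W): all N key frames of the window are processed
    together rather than one by one. A real model would compute learned
    features here; mean and variance are purely illustrative.
    """
    flat = batch.reshape(batch.shape[0], -1)
    return np.stack([flat.mean(axis=1), flat.var(axis=1)], axis=1)  # (N, 2)

frames = np.random.rand(4, 8, 8)     # a key-frame sequence of window length N = 4
features = extract_features(frames)  # shape (4, 2): feature info per key frame
```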
In this optional embodiment, optionally, based on the linear integration layer of the collision prediction model, analyzing the dynamic feature information to obtain a dynamic change analysis result for the dynamic target may include the following operations:
For each key frame, based on the linear integration layer of the collision prediction model, comparing the target feature information corresponding to the key frame with the target feature information corresponding to its adjacent key frames, to obtain a feature comparison result corresponding to the key frame; an adjacent key frame is a key frame that is adjacent to the key frame and precedes it in time sequence, and/or a key frame that is adjacent to the key frame and follows it in time sequence;
Based on the linear integration layer, determining the result weight corresponding to each feature comparison result according to the feature comparison result corresponding to each key frame;
based on a linear integration formula corresponding to the linear integration layer, calculating an output value corresponding to the linear integration layer according to the feature comparison result corresponding to each key frame and the result weight corresponding to each feature comparison result;
and determining an output value corresponding to the linear integration layer as a dynamic change analysis result of the key frame sequence aiming at the dynamic target.
The dynamic change analysis result aiming at the dynamic target can be used for representing the characteristic change of the dynamic target in the key frame sequence, and the dynamic change analysis result can reflect the distance change between the dynamic target and the unmanned aerial vehicle. It should be noted that, when the unmanned aerial vehicle collides, the characteristic change of the dynamic target colliding with the unmanned aerial vehicle in the image acquired by the unmanned aerial vehicle is more obvious compared with the characteristic change of the background static target in the image.
Therefore, the optional embodiment can compare the target feature information between the continuous key frames, so as to determine corresponding weight according to the feature comparison result, calculate the output value corresponding to the linear integration layer, determine the output value as the dynamic change analysis result aiming at the dynamic target, and realize the dynamic feature information comparison between the continuous key frames, thereby more sensitively sensing the feature information change trend of the continuous key frames, further improving the dynamic monitoring capability of the dynamic target, further improving the analysis accuracy of the feature information change of the dynamic target, and being beneficial to improving the generation accuracy of the collision prediction result.
In this optional embodiment, optionally, according to the feature comparison result corresponding to each key frame, determining the result weight corresponding to each feature comparison result may include the following operations:
For each key frame, determining a classification result corresponding to the feature comparison result based on a predetermined mathematical model according to the feature comparison result corresponding to the key frame;
And determining the result weight corresponding to each feature comparison result according to the classification result corresponding to each feature comparison result.
Therefore, the alternative embodiment can also determine the classification result corresponding to the feature comparison result corresponding to each key frame based on the mathematical model, and then determine the corresponding result weight according to the classification result, so that the classification accuracy of the feature comparison result can be improved, the determination accuracy of the result weight is improved, and further the feature change analysis accuracy of the dynamic target is improved.
Further optionally, for each key frame, based on a predetermined mathematical model, determining a classification result corresponding to the feature comparison result according to the feature comparison result corresponding to the key frame may include the following operations:
For each key frame, determining the characteristic difference degree between the key frame and the adjacent key frame based on a predetermined mathematical model according to the characteristic comparison result corresponding to the key frame;
for each key frame, when the characteristic difference degree between the key frame and the adjacent key frame is larger than or equal to the preset difference degree, determining a classification result corresponding to the characteristic comparison result as a first classification result;
For each key frame, determining a classification result corresponding to the feature comparison result as a second classification result when the feature difference degree between the key frame and the adjacent key frame is smaller than a preset difference degree;
When the classification result corresponding to the feature comparison result is a first classification result, the result weight corresponding to the feature comparison result is a first weight; when the classification result corresponding to the feature comparison result is a second classification result, the result weight corresponding to the feature comparison result is a second weight; wherein the first weight is higher than the second weight. The specific value of the result weight is determined based on the degree of difference, which is not limited in the embodiment of the invention.
It can be seen that this optional embodiment can further determine, based on the mathematical model and according to the feature comparison results, the feature difference degree between consecutive key frames: if the feature difference degree regarding the dynamic target between consecutive key frames is large, the determined result weight is high; if the feature difference degree is small, the determined result weight is low. The mathematical model, which models the collision situation as seen from the unmanned aerial vehicle's vision, can thus be used to further classify and analyze the feature comparison results, further improving the classification accuracy of the feature comparison results and therefore the determination accuracy of the result weights, which is conducive to improving the analysis accuracy of feature changes of the dynamic target.
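The two-way classification and weight assignment above can be sketched as follows; the difference-degree threshold and the two weight values are illustrative assumptions (the embodiment only requires that the first weight exceed the second):

```python
def result_weight(diff_degree, threshold=0.3, w_first=1.0, w_second=0.1):
    """Classify a feature comparison result by its difference degree, then map
    the classification result to a result weight.

    threshold, w_first, and w_second are hypothetical values for illustration.
    """
    if diff_degree >= threshold:
        return w_first   # first classification result: large inter-frame change
    return w_second      # second classification result: small inter-frame change

# difference degrees for three consecutive key frames (illustrative)
weights = [result_weight(d) for d in (0.05, 0.4, 0.31)]
```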
In the embodiment of the present invention, the mathematical model may be mathematical modeling for the collision situation of the unmanned aerial vehicle, and an exemplary schematic diagram of the mathematical model may be shown in fig. 4, and fig. 4 is a schematic diagram of a collision prediction method based on unmanned aerial vehicle vision, which is disclosed in the embodiment of the present invention, where:
The unmanned aerial vehicle (unmanned aerial vehicle a shown in fig. 4) serves as the observation source; a dynamic target (unmanned aerial vehicle b shown in fig. 4) is at the same horizontal height and moves toward unmanned aerial vehicle a. The speed of unmanned aerial vehicle a is v_a, the speed of unmanned aerial vehicle b is v_b, the distance between unmanned aerial vehicle a and unmanned aerial vehicle b is L_1, the height of unmanned aerial vehicle b is h, and the viewing angle observed by unmanned aerial vehicle a is θ_1; the background static target (the rear building shown in fig. 4) has height H, is at distance L_2 from unmanned aerial vehicle a, and its observed viewing angle is θ_2. It can be obtained through calculation that the relationship between the observation viewing angle of unmanned aerial vehicle a and the distance and height of the observed target can be approximately expressed as the following formulas:

θ_1 ≈ 2·arctan(h / (2·L_1)),    θ_2 ≈ 2·arctan(H / (2·L_2))
drawing the two formulas into a function curve as shown in a relation graph on the left side of fig. 5 (assuming h=0.5 m, h=10m), wherein fig. 5 is a schematic diagram of another collision prediction method based on unmanned aerial vehicle vision, which is disclosed in the embodiment of the invention, and is used for representing a relation curve between a viewing angle of an unmanned aerial vehicle observation target and a distance of the unmanned aerial vehicle observation target, a curve H in fig. 5 is used for referring to a relation curve corresponding to a background static target, and a curve H in fig. 5 is used for referring to a relation curve corresponding to a dynamic target; as can be seen from the graph of the relationship on the left side of fig. 5, when the distance between the unmanned aerial vehicle and the observation target (dynamic target or background static target) is reduced to a certain range, the observation viewing angle θ starts to increase in acceleration; once the distance between the two enters the collision zone (i.e., collision distance zone), the viewing angle θ rapidly rises to 180 °. It can also be noted that when the distances are the same, the overall change in the viewing angle θ of the dynamic object is greater than that of the background static object. This is because the initial viewing angle of the background static object H is already much larger than that of the dynamic object H, so that the overall change range of the viewing angle of the unmanned plane for observing the background static object H is small.
The relation graph on the left side of fig. 5 can also be described as the change in viewing angle when the observation-source unmanned aerial vehicle actively approaches while the dynamic target remains stationary and is located in the same vertical direction (x=0) as the background static target. The relation graph on the right side of fig. 5 is closer to the real situation: it depicts the change in viewing angle when unmanned aerial vehicle b, as the dynamic target, approaches the observation-source unmanned aerial vehicle from both sides, while the observation-source unmanned aerial vehicle is in the vertical direction x=0 and remains stationary and the background static target is in the vertical direction x=-5. In the collision zone of the right graph, since a certain distance is maintained between the background static target and the observation-source unmanned aerial vehicle, the change in the viewing angle at which the unmanned aerial vehicle observes the background static target is very small. However, the viewing angle at which the observation-source unmanned aerial vehicle observes the dynamic target increases dramatically, so that the features corresponding to the dynamic target form obvious visual features in the video collected by the unmanned aerial vehicle.
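As a numerical illustration of the geometry described above, the following sketch assumes the standard angular-size relation θ ≈ 2·arctan(size / (2·distance)); the sampled distances are illustrative:

```python
import math

def viewing_angle_deg(size, distance):
    """Angular size (degrees) of an object of height `size` seen at range `distance`."""
    return math.degrees(2 * math.atan(size / (2 * distance)))

# h = 0.5 m (dynamic target), H = 10 m (background static target), as in Fig. 5
# relative growth of the viewing angle as the range shrinks from 10 m to 0.1 m
ratio_dynamic = viewing_angle_deg(0.5, 0.1) / viewing_angle_deg(0.5, 10.0)
ratio_static = viewing_angle_deg(10.0, 0.1) / viewing_angle_deg(10.0, 10.0)
# the dynamic target's viewing angle grows far more strongly in relative terms,
# and approaches 180 degrees as the distance shrinks toward zero
```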
In this alternative embodiment, further optionally, the method may further comprise the operations of:
And inputting the dynamic characteristic information of the key frame sequence aiming at the dynamic target, which is output by the characteristic extraction layer, into the linear integration layer, triggering and executing the linear integration layer based on the collision prediction model, and analyzing the dynamic characteristic information to obtain the operation of the dynamic change analysis result aiming at the dynamic target.
Further optionally, the feature extraction layer includes a plurality of first neurons and each of the first neurons may output dynamic feature information, and the linear integration layer includes a plurality of second neurons; and inputting the dynamic feature information of the key frame sequence output by the feature extraction layer for the dynamic target to the linear integration layer may be specifically:
The dynamic characteristic information output by each first neuron of the characteristic extraction layer is respectively input to each second neuron of the linear integration layer.
The window length of the sliding window may be taken as the input vector dimension of the linear integration layer, and, for example, assuming that the window length of the sliding window is N, the input vector dimension of the linear integration layer is N, that is, the feature extraction layer includes N first neurons.
Further optionally, the linear integration formula corresponding to the linear integration layer may be the integration formula of each second neuron of the linear integration layer, and the output value corresponding to the linear integration layer may include the output value of each second neuron; when the linear integration layer includes M second neurons, the specific expression of the integration formula of each second neuron is as follows:

y_i = Σ_{j=1}^{N} w_ij · x_ij + b
Wherein i is the number of the second neuron in the linear integration layer, j is the number of the first neuron in the feature extraction layer, x_ij is the input value from the j-th first neuron of the feature extraction layer to the i-th second neuron of the linear integration layer, w_ij is the weight from the j-th first neuron of the feature extraction layer to the i-th second neuron of the linear integration layer (i.e., the result weight corresponding to each feature comparison result), y_i is the output value of the i-th second neuron of the linear integration layer, and b is the bias weight.
Therefore, according to the alternative embodiment, the characteristic information extracted by the characteristic extraction layer can be calculated and analyzed based on the specific structure of the linear integration layer and the specific integration formula, so that the output value of the more accurate linear integration layer is obtained to serve as a more accurate dynamic change analysis result, and further a more accurate collision prediction result is generated, so that the collision risk of the unmanned aerial vehicle is accurately predicted.
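The per-neuron integration described above, a weighted sum of the N inputs plus a bias, can be sketched as follows; the dimensions, weights, and per-neuron bias values are illustrative assumptions:

```python
def neuron_output(x, w, b):
    """Integration of one second neuron: y_i = sum_j(w_ij * x_ij) + b."""
    return sum(w_j * x_j for w_j, x_j in zip(w, x)) + b

def linear_integration_layer(inputs, weights, biases):
    """Output value of each of the M second neurons.

    `inputs` has the window length N as its dimension; `weights[i]` holds the
    result weights w_ij feeding the i-th second neuron.
    """
    return [neuron_output(inputs, w_i, b_i) for w_i, b_i in zip(weights, biases)]

# N = 3 feature-comparison inputs, M = 2 second neurons (illustrative values)
y = linear_integration_layer([1.0, 2.0, 3.0],
                             [[0.1, 0.2, 0.3], [0.5, 0.0, 0.5]],
                             [0.0, 1.0])
```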
Example two
Referring to fig. 2, fig. 2 is a flow chart of a collision prediction method based on unmanned aerial vehicle vision according to an embodiment of the present invention. The collision prediction method based on the unmanned aerial vehicle vision described in fig. 2 may be applied to a collision prediction device based on the unmanned aerial vehicle vision, where the device may include one of a prediction device, a prediction terminal, a prediction system and a server, and the server may include a local server or a cloud server, where the embodiment of the invention is not limited; the method can also be applied to an unmanned aerial vehicle or a control system corresponding to the unmanned aerial vehicle; the method may also be applied to a collision prediction model, and an exemplary architecture diagram of the collision prediction model may be shown in fig. 3, where fig. 3 is an architecture diagram of a collision prediction model disclosed in an embodiment of the present invention, and the embodiment of the present invention is not limited. As shown in fig. 2, the collision prediction method based on the unmanned aerial vehicle vision may include the following operations:
201. video data for a target area is collected based on the drone.
202. And determining a key frame sequence corresponding to the video data.
203. And inputting the key frame sequence into a feature extraction layer of a pre-trained collision prediction model to obtain dynamic feature information of the key frame sequence aiming at a dynamic target.
204. And analyzing the dynamic characteristic information based on a linear integration layer of the collision prediction model to obtain a dynamic change analysis result aiming at the dynamic target.
205. And calculating a collision prediction score corresponding to the dynamic change analysis result based on the score evaluation function of the collision prediction model.
In the embodiment of the present invention, optionally, the score evaluation function of the collision prediction model may be a loss function, or may be a Log function, which is not limited in the embodiment of the present invention.
206. And generating a collision prediction result corresponding to the unmanned aerial vehicle according to the collision prediction score corresponding to the dynamic change analysis result.
In the embodiment of the present invention, for other detailed descriptions of step 201 to step 204, please refer to the detailed descriptions of step 101 to step 104 in the first embodiment, and the detailed description of the embodiment of the present invention is omitted.
Therefore, by implementing the method described in the embodiment of the invention, video data for a target area can be collected based on the unmanned aerial vehicle and a key frame sequence corresponding to the video data can be determined; feature extraction is performed on the key frame sequence by the feature extraction layer of a pre-trained collision prediction model to obtain dynamic feature information of the key frame sequence for a dynamic target; the dynamic feature information is then analyzed by the linear integration layer to obtain a corresponding dynamic change analysis result, and a collision prediction result corresponding to the unmanned aerial vehicle is generated according to the dynamic change analysis result. In this way, intelligent analysis of the collision risk of the unmanned aerial vehicle can be realized based on the vision of the unmanned aerial vehicle, and the feature data of an aerial dynamic target at the moment the unmanned aerial vehicle is about to collide can be captured without high-precision images; that is, collision prediction analysis for the unmanned aerial vehicle can be realized with fewer computing resources, which improves the feature extraction efficiency for the dynamic target and reduces the cost of unmanned aerial vehicle collision prediction. The detection accuracy of the dynamic target is also improved, so the analysis accuracy of changes in the feature information of the dynamic target is improved, and in turn the generation accuracy of the collision prediction result, realizing efficient prediction of the collision risk of the unmanned aerial vehicle and facilitating advance planning of a flight route that avoids the collision, thereby improving the flight safety of the unmanned aerial vehicle.
In addition, the method can evaluate and calculate the collision prediction score corresponding to the dynamic change analysis result based on the score evaluation function of the collision prediction model, generate the collision prediction result corresponding to the unmanned aerial vehicle according to the collision prediction score, realize intelligent evaluation of the collision risk of the unmanned aerial vehicle based on the score evaluation function, and realize accurate quantification of the collision prediction result, thereby improving the accuracy of the collision prediction result and efficiently predicting the collision risk of the unmanned aerial vehicle.
In an alternative embodiment, when the linear integration layer includes M second neurons (for a detailed description of the linear integration layer, please refer to the detailed description in the first embodiment), the score evaluation function of the collision prediction model may be as follows:

output = Σ_{i=1}^{M} w_i · y_i + b
Wherein output is the collision prediction score output by the score evaluation function, i is the number of the second neuron of the linear integration layer, w_i is the output weight of the i-th second neuron of the linear integration layer, y_i is the output value of the i-th second neuron of the linear integration layer, and b is the bias weight.
Optionally, in the training stage of the collision prediction model, when the dynamic target observed by the unmanned aerial vehicle enters the collision distance zone, the collision prediction score output by the score evaluation function is set to 1, and in other cases, is set to 0, which is not limited by the embodiment of the present invention.
Therefore, the optional embodiment can calculate the collision prediction score corresponding to the dynamic change analysis result based on the specific score evaluation function, and improves the determination accuracy of the collision prediction score, thereby being beneficial to further accurate quantification of the collision prediction result and improving the accuracy of the collision prediction result.
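Assuming the score evaluation is a weighted sum of the M second-neuron outputs plus a bias, followed by comparison against a preset collision prediction score, a minimal sketch (the weights, bias, and threshold are illustrative values):

```python
def collision_score(y, w, b):
    """Score evaluation: output = sum_i(w_i * y_i) + b over the M second-neuron outputs."""
    return sum(w_i * y_i for w_i, y_i in zip(w, y)) + b

def predict(score, preset_score=0.5):
    """Compare the score with a preset collision prediction score (threshold assumed)."""
    return "collision risk" if score >= preset_score else "no collision risk"

# M = 2 second-neuron outputs (illustrative values)
score = collision_score([0.2, 0.9], [0.5, 0.5], 0.0)
result = predict(score)
```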
In another optional embodiment, generating the collision prediction result corresponding to the unmanned aerial vehicle according to the collision prediction score corresponding to the dynamic change analysis result may include the following operations:
Judging whether the collision prediction score corresponding to the dynamic change analysis result is larger than or equal to a preset collision prediction score;
When judging that the collision prediction score is larger than or equal to the preset collision prediction score, determining that a collision prediction result corresponding to the unmanned aerial vehicle is that a collision risk exists for the unmanned aerial vehicle by the dynamic target, wherein the collision risk is used for indicating that the distance between the dynamic target and the unmanned aerial vehicle is smaller than or equal to a preset collision distance interval;
when the collision prediction score is smaller than the preset collision prediction score, determining that a collision prediction result corresponding to the unmanned aerial vehicle is that a dynamic target does not have collision risk for the unmanned aerial vehicle.
The collision distance interval is a super parameter of the collision prediction model, and an empirical value can be adopted for the collision distance interval.
Therefore, in this optional embodiment, when the collision prediction score is judged to be greater than or equal to the preset collision prediction score, the collision prediction result corresponding to the unmanned aerial vehicle is determined to be that the dynamic target poses a collision risk to the unmanned aerial vehicle; when the collision prediction score is judged to be smaller than the preset collision prediction score, the collision prediction result is determined to be that the dynamic target poses no collision risk to the unmanned aerial vehicle. Basing this judgment on the collision prediction score can improve the judgment accuracy of the collision risk of the unmanned aerial vehicle, and thus the generation accuracy of the collision prediction result, which is further conducive to planning, in advance, a flight route for the unmanned aerial vehicle that avoids collision with the dynamic target, thereby improving the flight safety of the unmanned aerial vehicle.
In yet another alternative embodiment, when the collision prediction result is used to indicate that the dynamic target is at risk of collision for the unmanned aerial vehicle, the method may further include the operations of:
According to the collision prediction score, determining a collision risk level corresponding to the unmanned aerial vehicle;
acquiring the flight planning route of the unmanned aerial vehicle and the environment information corresponding to the target area;
determining a target flight route corresponding to the unmanned aerial vehicle according to the collision risk level, the flight planning route and the environmental information;
according to the target flight route, determining flight control parameters corresponding to the unmanned aerial vehicle;
and controlling the unmanned aerial vehicle to execute the flight operation related to the target flight route according to the flight control parameters.
A higher collision risk level indicates a greater probability that the unmanned aerial vehicle will collide with the dynamic target, or that the collision will occur within a shorter time. The unmanned aerial vehicle executes flight operations along the target flight route so as to avoid colliding with the dynamic target. The flight control parameters corresponding to the unmanned aerial vehicle may include one or more of a flight angle, a flight speed, a flight acceleration and a flight duration, which are not limited in the embodiment of the present invention. Optionally, when a coordinate system (for example, a three-dimensional rectangular, spherical or polar coordinate system) is established with the unmanned aerial vehicle as the origin, the flight angle, flight speed and flight acceleration for each direction (for example, the x, y and z directions) may be determined based on the axes of that coordinate system, which is not limited in the embodiment of the present invention.
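The mapping from score to risk level to evasive parameters can be sketched as below. This is an illustrative assumption throughout: the patent does not specify level thresholds or parameter values, so every number here is hypothetical.

```python
# Illustrative sketch (not the patent's implementation): derive a collision
# risk level from the collision prediction score, then build per-axis flight
# control parameters in a UAV-centred x/y/z frame. All thresholds and
# parameter values below are assumed for illustration only.

def risk_level(score: float) -> int:
    """Higher score -> higher risk level (1 = low, 3 = high)."""
    if score >= 0.9:
        return 3
    if score >= 0.8:
        return 2
    return 1

def flight_control_parameters(level: int) -> dict:
    """Return assumed evasive-manoeuvre parameters for a given risk level."""
    # A higher risk level triggers a sharper turn and a faster manoeuvre.
    return {
        "flight_angle_deg": {"x": 0.0, "y": 0.0, "z": 15.0 * level},
        "flight_speed_mps": 2.0 + 1.5 * level,
        "flight_acceleration_mps2": 0.5 * level,
        "flight_duration_s": 4.0 - level,  # shorter reaction window at high risk
    }

params = flight_control_parameters(risk_level(0.92))
print(params["flight_speed_mps"])  # 6.5
```

In a real system these parameters would additionally be constrained by the flight planning route and the environment information of the target area, as the text describes.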
Therefore, the method and device can determine the collision risk level corresponding to the unmanned aerial vehicle from the collision prediction score, combine it with the acquired flight planning route and the environmental information of the target area to determine a target flight route, and then derive the flight control parameters from that route to control the unmanned aerial vehicle's flight operation along it. This improves the accuracy of the avoidance route determined when a collision risk exists, increases the likelihood that the unmanned aerial vehicle avoids the collision, and further improves the flight safety of the unmanned aerial vehicle.
In the embodiment of the present invention, when the collision prediction method based on unmanned aerial vehicle vision is applied to a collision prediction model, the architecture of the model may be as shown in fig. 3, which is an architecture diagram of a collision prediction model disclosed in the embodiment of the present invention. The model includes a time-series data sliding window (i.e., a sliding window), a key frame acquisition & preprocessing stage, a feature extraction layer, a linear integration layer and a loss function (i.e., a score evaluation function). The series of pictures arranged in time order in fig. 3 is the candidate key frame sequence determined from the video data and preprocessed. The key frame sequence is selected from the candidate key frame sequence by the time-series data sliding window and input to the feature extraction layer, which uses a CNN (convolutional neural network) to extract the dynamic feature information corresponding to the key frame sequence. This information is input to the linear integration layer to obtain a dynamic change analysis result for the dynamic target, which is in turn input to the score evaluation function to generate the collision prediction result corresponding to the unmanned aerial vehicle;
and when the collision prediction method based on unmanned aerial vehicle vision is applied to a collision prediction model, a flow diagram of the method may be shown in fig. 6, and fig. 6 is a flow diagram of another collision prediction method based on unmanned aerial vehicle vision, which is disclosed in the embodiment of the present invention, and the method flow may specifically be:
After key frame acquisition and preprocessing are completed on the video data acquired by the unmanned aerial vehicle, the method judges whether the number of key frames selected by the sliding window reaches the window length of the sliding window (batch = n as shown in fig. 6). If not, key frame acquisition continues; if so, features are extracted from the key frame sequence, the extracted feature information is linearly integrated to obtain an integrated output value (i.e., the dynamic change analysis result), and the loss of that output value is then calculated to obtain the collision prediction score, generate the collision prediction result and slide the position of the sliding window.
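The control flow of fig. 6 can be sketched as a loop over a sliding window. This is a minimal sketch under stated assumptions: `extract_features`, `linear_integrate` and `score_loss` are placeholder stand-ins for the model's feature extraction layer, linear integration layer and score evaluation function, which the patent does not specify in code.

```python
# Sketch of the fig. 6 flow: wait until the window is full (batch = n), then
# extract features, integrate, score, and slide the window. The stub
# functions below are placeholders, not the patent's actual model layers.

WINDOW_LENGTH = 5   # batch = n in fig. 6 (assumed value)
SLIDE_LENGTH = 1    # assumed slide length

def extract_features(frame):          # placeholder feature extractor
    return float(frame)

def linear_integrate(features):       # placeholder linear integration layer
    return sum(features) / len(features)

def score_loss(output_value):         # placeholder score evaluation function
    return output_value

def predict_over_stream(candidate_key_frames):
    """Slide a window over candidate key frames, scoring each full window."""
    scores = []
    start = 0
    while start + WINDOW_LENGTH <= len(candidate_key_frames):
        window = candidate_key_frames[start:start + WINDOW_LENGTH]
        features = [extract_features(f) for f in window]
        output_value = linear_integrate(features)   # dynamic change analysis
        scores.append(score_loss(output_value))     # collision prediction score
        start += SLIDE_LENGTH                       # slide the window position
    return scores

print(predict_over_stream(list(range(7))))  # [2.0, 3.0, 4.0]
```

If fewer than `WINDOW_LENGTH` key frames have been acquired, the loop body never runs — matching the "continue acquiring key frames" branch in fig. 6.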
Example III
Referring to fig. 7, fig. 7 is a schematic structural diagram of a collision prediction apparatus based on unmanned aerial vehicle vision according to an embodiment of the present invention. The collision prediction device based on unmanned aerial vehicle vision shown in fig. 7 may be one of a prediction device, a prediction terminal, a prediction system and a server, where the server may include a local server or a cloud server, which is not limited in the embodiment of the present invention. The device may be applied to an unmanned aerial vehicle or to a control system corresponding to the unmanned aerial vehicle. The device may also be applied to a collision prediction model, an exemplary architecture of which may be as shown in fig. 3, where fig. 3 is an architecture diagram of a collision prediction model disclosed in an embodiment of the present invention; the embodiment of the present invention is not limited thereto. As shown in fig. 7, the collision prediction apparatus based on unmanned aerial vehicle vision may include:
an acquisition module 301, configured to acquire video data for a target area based on an unmanned aerial vehicle; wherein a dynamic target exists in the air of the target area;
A determining module 302, configured to determine a key frame sequence corresponding to video data; the key frame sequence comprises a plurality of key frames and all key frames in the key frame sequence are ordered based on time sequence;
The feature extraction module 303 is configured to input a key frame sequence to a feature extraction layer of a pre-trained collision prediction model, so as to obtain dynamic feature information of the key frame sequence for a dynamic target;
The analysis module 304 is configured to analyze the dynamic feature information based on the linear integration layer of the collision prediction model, and obtain a dynamic change analysis result for the dynamic target;
and the generating module 305 is configured to generate a collision prediction result corresponding to the unmanned aerial vehicle according to the dynamic change analysis result.
Therefore, the device described in the embodiment of the invention can acquire video data for the target area based on the unmanned aerial vehicle, determine the key frame sequence corresponding to the video data, extract features from the key frame sequence via the feature extraction layer of the pre-trained collision prediction model to obtain the dynamic feature information for the dynamic target, analyze this information via the linear integration layer to obtain the corresponding dynamic change analysis result, and generate the collision prediction result corresponding to the unmanned aerial vehicle from that result. Intelligent analysis of the unmanned aerial vehicle's collision risk can thus be realized from its vision alone: the feature data of an airborne dynamic target about to collide with the unmanned aerial vehicle can be captured without high-precision images, so collision prediction analysis requires fewer computing resources, the feature extraction efficiency for the dynamic target is improved, and the cost of collision prediction is reduced. The detection accuracy of the dynamic target is also improved, which improves the analysis accuracy of its feature information changes and thus the generation accuracy of the collision prediction result, realizing efficient prediction of the unmanned aerial vehicle's collision risk and facilitating advance planning of a flight route that avoids collision with the dynamic target, thereby improving the flight safety of the unmanned aerial vehicle.
In an alternative embodiment, the feature extraction module 303 inputs the key frame sequence to the feature extraction layer of the pre-trained collision prediction model, and the specific manner of obtaining the dynamic feature information of the key frame sequence for the dynamic target may include:
inputting the key frame sequence into a feature extraction layer of a pre-trained collision prediction model;
based on the feature extraction layer, performing feature extraction operation on all the key frames in the key frame sequence in parallel to obtain feature information corresponding to each key frame;
Based on the feature extraction layer, determining target feature information corresponding to each key frame from feature information corresponding to each key frame, wherein the target feature information is feature information aiming at a dynamic target;
the dynamic characteristics of the key frame sequence aiming at the dynamic target comprise target characteristic information corresponding to each key frame.
Therefore, the device described in this alternative embodiment can process the key frames of the key frame sequence in parallel through the feature extraction layer of the pre-trained collision prediction model, obtaining the feature information corresponding to each key frame, and then determine from it the target feature information for the dynamic target. Parallel, continuous processing of multiple key frames improves the feature extraction efficiency of the key frames and hence of the dynamic target, which further improves the detection efficiency of the dynamic target and helps improve the efficiency of analyzing its feature changes.
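The parallel per-key-frame extraction can be sketched with a thread pool. The CNN itself is replaced here by a stub: in the patent, the trained feature extraction layer fills this role, so the feature values below are purely illustrative.

```python
# Hedged sketch of parallel feature extraction over all key frames, followed
# by filtering down to the dynamic target's features. The stub extractor is
# an assumption; the patent's feature extraction layer is a trained CNN.

from concurrent.futures import ThreadPoolExecutor

def extract_frame_features(frame):
    """Stub CNN feature extractor: returns all per-frame feature information."""
    return {"background": frame * 2, "dynamic_target": frame + 1}

def target_feature_info(all_features):
    """Keep only the feature information belonging to the dynamic target."""
    return all_features["dynamic_target"]

def dynamic_feature_information(key_frames):
    """Extract features for all key frames in parallel, preserving order."""
    with ThreadPoolExecutor() as pool:
        per_frame = list(pool.map(extract_frame_features, key_frames))
    return [target_feature_info(f) for f in per_frame]

print(dynamic_feature_information([1, 2, 3]))  # [2, 3, 4]
```

`ThreadPoolExecutor.map` preserves input order, so the resulting dynamic feature information stays time-ordered even though the frames are processed concurrently.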
In this optional embodiment, optionally, the specific manner of analyzing the dynamic feature information to obtain the dynamic change analysis result for the dynamic target by using the analysis module 304 based on the linear integration layer of the collision prediction model may include:
For each key frame, based on a linear integration layer of the collision prediction model, comparing target feature information corresponding to the key frame with target feature information corresponding to an adjacent key frame to obtain a feature comparison result corresponding to the key frame; adjacent key frames include the key frame immediately preceding the key frame in time sequence and/or the key frame immediately following it;
Based on the linear integration layer, determining the result weight corresponding to each feature comparison result according to the feature comparison result corresponding to each key frame;
based on a linear integration formula corresponding to the linear integration layer, calculating an output value corresponding to the linear integration layer according to the feature comparison result corresponding to each key frame and the result weight corresponding to each feature comparison result;
and determining an output value corresponding to the linear integration layer as a dynamic change analysis result of the key frame sequence aiming at the dynamic target.
Therefore, the device described in this alternative embodiment can also compare the target feature information between consecutive key frames, determine the corresponding weights from the feature comparison results, calculate the output value of the linear integration layer, and take that output value as the dynamic change analysis result for the dynamic target. Comparing dynamic feature information between consecutive key frames allows the feature-change trend of those frames to be perceived more sensitively, improving the dynamic monitoring capability for the dynamic target and the analysis accuracy of its feature information changes, which helps improve the generation accuracy of the collision prediction result.
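A hedged sketch of this linear integration step is given below. The patent specifies only the structure (neighbour comparisons, per-comparison weights, a linear combination); the comparison metric and the weighting scheme used here are assumptions for illustration.

```python
# Sketch of the linear integration layer: compare each key frame's (scalar)
# target feature with its temporal neighbours, weight the comparison results,
# and combine them linearly into one output value. The differencing metric
# and the "later frames weigh more" scheme are assumptions, not the patent's.

def feature_comparison(features, i):
    """Average difference between frame i and its adjacent key frames."""
    diffs = []
    if i > 0:
        diffs.append(features[i] - features[i - 1])
    if i < len(features) - 1:
        diffs.append(features[i + 1] - features[i])
    return sum(diffs) / len(diffs)

def linear_integration(features):
    """Weighted linear combination of per-frame feature comparison results."""
    comparisons = [feature_comparison(features, i) for i in range(len(features))]
    # Assumed weighting: later frames matter more for an imminent collision.
    weights = [i + 1 for i in range(len(comparisons))]
    total = sum(w * c for w, c in zip(weights, comparisons))
    return total / sum(weights)  # output value = dynamic change analysis result

print(linear_integration([1.0, 2.0, 4.0, 7.0]))  # growing gaps -> positive trend
```

A growing frame-to-frame difference (for example, a target whose apparent size increases ever faster) yields a larger output value, which the score evaluation function can then turn into a higher collision prediction score.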
In another alternative embodiment, the specific manner of generating the collision prediction result corresponding to the unmanned aerial vehicle by the generating module 305 according to the dynamic change analysis result may include:
Calculating a collision prediction score corresponding to the dynamic change analysis result based on a score evaluation function of the collision prediction model;
and generating a collision prediction result corresponding to the unmanned aerial vehicle according to the collision prediction score corresponding to the dynamic change analysis result.
Therefore, the device described in this alternative embodiment can calculate the collision prediction score corresponding to the dynamic change analysis result using the score evaluation function of the collision prediction model, and generate the collision prediction result corresponding to the unmanned aerial vehicle from that score. The score evaluation function enables intelligent assessment and accurate quantification of the unmanned aerial vehicle's collision risk, improving the accuracy of the collision prediction result and allowing the collision risk to be predicted efficiently.
In this optional embodiment, optionally, the specific manner of generating, by the generating module 305, the collision prediction result corresponding to the unmanned aerial vehicle according to the collision prediction score corresponding to the dynamic change analysis result may include:
Judging whether the collision prediction score corresponding to the dynamic change analysis result is larger than or equal to a preset collision prediction score;
When judging that the collision prediction score is larger than or equal to the preset collision prediction score, determining that a collision prediction result corresponding to the unmanned aerial vehicle is that a collision risk exists for the unmanned aerial vehicle by the dynamic target, wherein the collision risk is used for indicating that the distance between the dynamic target and the unmanned aerial vehicle is smaller than or equal to a preset collision distance interval;
when the collision prediction score is smaller than the preset collision prediction score, determining that a collision prediction result corresponding to the unmanned aerial vehicle is that a dynamic target does not have collision risk for the unmanned aerial vehicle.
It can be seen that the device described in this optional embodiment may further determine that the collision prediction result corresponding to the unmanned aerial vehicle is that the dynamic target poses a collision risk to the unmanned aerial vehicle when the collision prediction score is greater than or equal to the preset collision prediction score, and that no collision risk exists when the score is smaller than the preset score. Basing the judgment on the collision prediction score improves the accuracy of the collision risk determination and hence of the generated collision prediction result, which further facilitates planning, in advance, a flight route for the unmanned aerial vehicle that avoids collision with the dynamic target, improving the flight safety of the unmanned aerial vehicle.
In this optional embodiment, further optionally, the determining module 302 is further configured to determine, when the collision prediction result is used to indicate that the dynamic target has a collision risk for the unmanned aerial vehicle, a collision risk level corresponding to the unmanned aerial vehicle according to the collision prediction score;
Wherein, as shown in fig. 8, the device may further include:
The acquiring module 306 is configured to acquire a flight planning route of the unmanned aerial vehicle and environmental information corresponding to the target area;
the determining module 302 is further configured to determine a target flight route corresponding to the unmanned aerial vehicle according to the collision risk level, the flight planning route and the environmental information;
the determining module 302 is further configured to determine flight control parameters corresponding to the unmanned aerial vehicle according to the target flight route;
A control module 307 for controlling the unmanned aerial vehicle to execute a flight operation with respect to the target flight path according to the flight control parameters.
It can be seen that the device described in this optional embodiment can also determine the collision risk level corresponding to the unmanned aerial vehicle from the collision prediction score, combine it with the acquired flight planning route and the environmental information of the target area to determine the target flight route, and then derive the flight control parameters from that route to control the unmanned aerial vehicle's flight operation along it. This improves the accuracy of the avoidance route determined when a collision risk exists, increases the likelihood that the unmanned aerial vehicle avoids the collision, and further improves the flight safety of the unmanned aerial vehicle.
In yet another alternative embodiment, the video data may include a plurality of video frames and all video frames in the video data are ordered based on timing;
the specific manner of determining the key frame sequence corresponding to the video data by the determining module 302 may include:
Based on a preset time interval, a plurality of candidate key frames corresponding to the time interval are screened out from video data, and a candidate key frame sequence is obtained;
Acquiring window parameters of a sliding window corresponding to the candidate key frame sequence; the window parameters comprise the window length of the sliding window, the sliding length of the sliding window and the starting position of the sliding window in the candidate key frame sequence; the initial position is used for representing the arrangement order of the candidate key frames with the first time sequence contained in the sliding window in the candidate key frame sequence;
Selecting a number of candidate key frames corresponding to the window length as key frames based on the starting position to obtain a key frame sequence corresponding to the video data;
And, as shown in fig. 8, the apparatus may further include:
The updating module 308 is configured to update the starting position of the sliding window based on the sliding length and the starting position after the generating module 305 generates the collision prediction result corresponding to the unmanned aerial vehicle according to the dynamic change analysis result.
Therefore, the device described in this alternative embodiment can select candidate key frames from the video data at a preset time interval to obtain a candidate key frame sequence, ensuring that the determined key frames evenly cover the entire video and thereby improving the accuracy of the dynamic target's feature analysis. By sliding a window over the candidate key frame sequence and selecting a window-length number of candidate key frames as the key frame sequence, the collision prediction model can perceive the feature-change trend of consecutive key frames more sensitively, improving the dynamic monitoring capability for the dynamic target and the analysis accuracy of its feature information changes, so that the collision risk of the unmanned aerial vehicle is predicted accurately.
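The key-frame selection pipeline above (interval-based screening, windowed selection, start-position update) can be sketched as follows. The interval, window length and slide length are hyperparameters; the values used here are assumptions for illustration.

```python
# Illustrative sketch of key-frame selection: sample candidate key frames at
# a fixed interval, then select a sliding window of them; the start position
# is updated after each collision prediction result is generated. All
# parameter values below are assumed, not taken from the patent.

def candidate_key_frames(video_frames, interval):
    """Screen out every `interval`-th frame as a candidate key frame."""
    return video_frames[::interval]

class SlidingWindow:
    def __init__(self, window_length, slide_length, start_position=0):
        self.window_length = window_length
        self.slide_length = slide_length
        # Index of the earliest-in-time candidate contained in the window.
        self.start_position = start_position

    def select(self, candidates):
        """Select window_length candidates starting from start_position."""
        end = self.start_position + self.window_length
        if end > len(candidates):
            return None  # not enough key frames acquired yet
        return candidates[self.start_position:end]

    def slide(self):
        """Update the start position after a prediction result is generated."""
        self.start_position += self.slide_length

frames = list(range(20))
candidates = candidate_key_frames(frames, interval=2)  # [0, 2, 4, ..., 18]
window = SlidingWindow(window_length=4, slide_length=2)
print(window.select(candidates))  # [0, 2, 4, 6]
window.slide()
print(window.select(candidates))  # [4, 6, 8, 10]
```

With a slide length smaller than the window length, consecutive key frame sequences overlap, which is what lets the model track the feature-change trend continuously across windows.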
Example IV
Referring to fig. 9, fig. 9 is a schematic structural diagram of a collision prediction apparatus based on unmanned aerial vehicle vision according to an embodiment of the present invention. As shown in fig. 9, the collision predicting apparatus based on the unmanned aerial vehicle vision may include:
A memory 401 storing executable program codes;
A processor 402 coupled with the memory 401;
The processor 402 invokes executable program codes stored in the memory 401 to perform the steps in the collision prediction method based on unmanned aerial vehicle vision described in the first or second embodiment of the present invention.
Example five
The embodiment of the invention discloses a computer storage medium which stores computer instructions for executing the steps in the collision prediction method based on unmanned aerial vehicle vision described in the first or second embodiment of the invention when the computer instructions are called.
Example six
An embodiment of the present invention discloses a computer program product comprising a non-transitory computer-readable storage medium storing a computer program, and the computer program is operable to cause a computer to perform the steps in the collision prediction method based on unmanned aerial vehicle vision described in embodiment one or embodiment two.
The apparatus embodiments described above are merely illustrative, wherein the modules illustrated as separate components may or may not be physically separate, and the components shown as modules may or may not be physical, i.e., may be located in one place, or may be distributed over a plurality of network modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art will understand and implement the present invention without undue burden.
From the above detailed description of the embodiments, it will be apparent to those skilled in the art that the embodiments may be implemented by means of software plus necessary general hardware platforms, or of course by means of hardware. Based on such understanding, the foregoing technical solutions may be embodied essentially or in part in the form of a software product that may be stored in a computer-readable storage medium including Read-Only Memory (ROM), random access Memory (Random Access Memory, RAM), programmable Read-Only Memory (Programmable Read-Only Memory, PROM), erasable programmable Read-Only Memory (Erasable Programmable Read Only Memory, EPROM), one-time programmable Read-Only Memory (OTPROM), electrically erasable programmable Read-Only Memory (EEPROM), compact disc Read-Only Memory (Compact Disc Read-Only Memory, CD-ROM) or other optical disc Memory, magnetic disc Memory, tape Memory, or any other medium that can be used for computer-readable carrying or storing data.
Finally, it should be noted that: the embodiment of the invention discloses a collision prediction method and device based on unmanned aerial vehicle vision, which are disclosed by the embodiment of the invention and are only used for illustrating the technical scheme of the invention, but not limiting the technical scheme; although the invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that; the technical scheme recorded in the various embodiments can be modified or part of technical features in the technical scheme can be replaced equivalently; such modifications and substitutions do not depart from the spirit and scope of the corresponding technical solutions.

Claims (8)

1. A collision prediction method based on unmanned aerial vehicle vision, the method comprising:
acquiring video data for a target area based on an unmanned aerial vehicle; wherein a dynamic target exists in the air of the target area;
Determining a key frame sequence corresponding to the video data; the key frame sequence includes a plurality of key frames and all of the key frames in the key frame sequence are ordered based on timing;
inputting the key frame sequence to a feature extraction layer of a pre-trained collision prediction model to obtain dynamic feature information of the key frame sequence aiming at the dynamic target;
analyzing the dynamic characteristic information based on a linear integration layer of the collision prediction model to obtain a dynamic change analysis result aiming at the dynamic target;
Generating a collision prediction result corresponding to the unmanned aerial vehicle according to the dynamic change analysis result;
The linear integration layer based on the collision prediction model analyzes the dynamic characteristic information to obtain a dynamic change analysis result aiming at the dynamic target, and the method comprises the following steps:
For each key frame, comparing target feature information corresponding to the key frame with target feature information corresponding to an adjacent key frame based on a linear integration layer of the collision prediction model to obtain a feature comparison result corresponding to the key frame; the adjacent keyframes include keyframes adjacent to and time-ordered before the keyframe and/or keyframes adjacent to and time-ordered after the keyframe;
based on the linear integration layer, determining the result weight corresponding to each feature comparison result according to the feature comparison result corresponding to each key frame;
Calculating an output value corresponding to the linear integration layer according to the characteristic comparison result corresponding to each key frame and the result weight corresponding to each characteristic comparison result based on the linear integration formula corresponding to the linear integration layer;
determining an output value corresponding to the linear integration layer as a dynamic change analysis result of the key frame sequence aiming at the dynamic target;
wherein the video data comprises a plurality of video frames and all of the video frames in the video data are ordered based on timing;
Wherein the determining the key frame sequence corresponding to the video data includes:
screening a plurality of candidate key frames corresponding to the time interval from the video data based on a preset time interval to obtain a candidate key frame sequence;
Acquiring window parameters of a sliding window corresponding to the candidate key frame sequence; the window parameters comprise the window length of the sliding window, the sliding length of the sliding window and the starting position of the sliding window in the candidate key frame sequence; the initial position is used for representing the arrangement order of the candidate key frames with the earliest time sequence contained in the sliding window in the candidate key frame sequence;
Selecting a number of candidate key frames corresponding to the window length as key frames based on the starting position to obtain a key frame sequence corresponding to the video data;
and after the collision prediction result corresponding to the unmanned aerial vehicle is generated according to the dynamic change analysis result, the method further comprises:
and updating the starting position of the sliding window based on the sliding length and the starting position.
2. The collision prediction method based on unmanned aerial vehicle vision according to claim 1, wherein the inputting the key frame sequence to a feature extraction layer of a pre-trained collision prediction model, to obtain dynamic feature information of the key frame sequence for the dynamic target, includes:
inputting the key frame sequence to a feature extraction layer of a pre-trained collision prediction model;
Based on the feature extraction layer, performing feature extraction operation on all the key frames in the key frame sequence in parallel to obtain feature information corresponding to each key frame;
determining target feature information corresponding to each key frame from feature information corresponding to each key frame based on the feature extraction layer, wherein the target feature information is feature information aiming at the dynamic target;
The key frame sequence comprises target feature information corresponding to each key frame aiming at the dynamic features of the dynamic target.
3. The collision prediction method based on unmanned aerial vehicle vision according to claim 1, wherein the generating the collision prediction result corresponding to the unmanned aerial vehicle according to the dynamic change analysis result comprises:
Calculating a collision prediction score corresponding to the dynamic change analysis result based on a score evaluation function of the collision prediction model;
And generating a collision prediction result corresponding to the unmanned aerial vehicle according to the collision prediction score corresponding to the dynamic change analysis result.
4. The collision prediction method based on unmanned aerial vehicle vision according to claim 3, wherein the generating the collision prediction result corresponding to the unmanned aerial vehicle according to the collision prediction score corresponding to the dynamic change analysis result comprises:
judging whether the collision prediction score corresponding to the dynamic change analysis result is larger than or equal to a preset collision prediction score;
When the collision prediction score is larger than or equal to the preset collision prediction score, determining that a collision prediction result corresponding to the unmanned aerial vehicle is that a collision risk exists for the unmanned aerial vehicle by the dynamic target, wherein the collision risk is used for indicating that the distance between the dynamic target and the unmanned aerial vehicle is smaller than or equal to a preset collision distance interval;
And when judging that the collision prediction score is smaller than the preset collision prediction score, determining that a collision prediction result corresponding to the unmanned aerial vehicle is that the collision risk of the dynamic target to the unmanned aerial vehicle does not exist.
5. The unmanned aerial vehicle vision-based collision prediction method of claim 3 or 4, wherein when the collision prediction result indicates that the dynamic target poses a collision risk to the unmanned aerial vehicle, the method further comprises:
determining a collision risk level corresponding to the unmanned aerial vehicle according to the collision prediction score;
acquiring a planned flight route of the unmanned aerial vehicle and environmental information corresponding to the target area;
determining a target flight route corresponding to the unmanned aerial vehicle according to the collision risk level, the planned flight route and the environmental information;
determining flight control parameters corresponding to the unmanned aerial vehicle according to the target flight route;
and controlling the unmanned aerial vehicle to execute a flight operation along the target flight route according to the flight control parameters.
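Claim 5 maps the collision prediction score to a risk level and then re-plans the route from that level, the planned route, and the environment information. A sketch under assumed banding (the patent fixes neither the number of levels nor their boundaries; every name and threshold below is hypothetical):

```python
def collision_risk_level(score: float) -> str:
    # Illustrative three-level banding; the thresholds are assumptions.
    if score >= 0.9:
        return "high"
    if score >= 0.8:
        return "medium"
    return "low"

def target_flight_route(level: str, planned_route: list, detour_route: list) -> list:
    # Sketch of the re-planning step: an elevated risk level abandons the
    # planned route in favour of a detour derived from the environment info.
    return detour_route if level in ("high", "medium") else planned_route

planned = [(0, 0), (10, 0)]
detour = [(0, 0), (5, 5), (10, 0)]
print(target_flight_route(collision_risk_level(0.95), planned, detour))
```

In practice the detour would come from a path planner over the environment information; the stub here only shows where the risk level enters the decision.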
6. A collision prediction apparatus based on unmanned aerial vehicle vision, the apparatus comprising:
an acquisition module, configured to acquire video data of a target area based on the unmanned aerial vehicle, wherein a dynamic target exists in the air of the target area;
a determining module, configured to determine a key frame sequence corresponding to the video data, wherein the key frame sequence comprises a plurality of key frames and all of the key frames in the key frame sequence are ordered based on timing;
a feature extraction module, configured to input the key frame sequence into a feature extraction layer of a pre-trained collision prediction model to obtain dynamic feature information of the key frame sequence for the dynamic target;
an analysis module, configured to analyze the dynamic feature information based on a linear integration layer of the collision prediction model to obtain a dynamic change analysis result for the dynamic target;
a generation module, configured to generate a collision prediction result corresponding to the unmanned aerial vehicle according to the dynamic change analysis result;
wherein the specific manner in which the analysis module obtains the dynamic change analysis result for the dynamic target comprises:
for each key frame, comparing the target feature information corresponding to the key frame with the target feature information corresponding to an adjacent key frame based on the linear integration layer of the collision prediction model to obtain a feature comparison result corresponding to the key frame, wherein the adjacent key frame comprises a key frame adjacent to and time-ordered before the key frame and/or a key frame adjacent to and time-ordered after the key frame;
determining, based on the linear integration layer, a result weight corresponding to each feature comparison result according to the feature comparison result corresponding to each key frame;
calculating an output value of the linear integration layer according to the feature comparison result corresponding to each key frame and the result weight corresponding to each feature comparison result, based on a linear integration formula corresponding to the linear integration layer;
and determining the output value of the linear integration layer as the dynamic change analysis result of the key frame sequence for the dynamic target;
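The linear integration described above reduces to a weighted sum of per-key-frame comparison results. A sketch under assumed representations, with features as plain vectors and the comparison result taken as the distance to the time-ordered predecessor; all names are hypothetical:

```python
def feature_comparison_results(features):
    """features: list of per-key-frame target feature vectors.
    Each key frame is compared with its time-ordered predecessor;
    here the comparison result is the Euclidean distance between them."""
    results = []
    for prev, curr in zip(features, features[1:]):
        dist = sum((a - b) ** 2 for a, b in zip(prev, curr)) ** 0.5
        results.append(dist)
    return results

def linear_integration(results, weights):
    """Linear integration formula: output = sum_i weight_i * result_i."""
    return sum(w * r for w, r in zip(weights, results))

feats = [[0.0, 0.0], [3.0, 4.0], [6.0, 8.0]]   # target moving steadily
comps = feature_comparison_results(feats)       # [5.0, 5.0]
print(linear_integration(comps, [0.5, 0.5]))    # 5.0
```

How the result weights are derived from the comparison results is left open by the claim; uniform weights are used here purely for illustration.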
wherein the video data comprises a plurality of video frames and all of the video frames in the video data are ordered based on timing;
the specific manner in which the determining module determines the key frame sequence corresponding to the video data comprises:
screening, from the video data based on a preset time interval, a plurality of candidate key frames corresponding to the time interval to obtain a candidate key frame sequence;
acquiring window parameters of a sliding window corresponding to the candidate key frame sequence, wherein the window parameters comprise a window length of the sliding window, a sliding length of the sliding window, and a starting position of the sliding window in the candidate key frame sequence, the starting position representing the arrangement order, in the candidate key frame sequence, of the earliest candidate key frame contained in the sliding window;
selecting, based on the starting position, a number of candidate key frames corresponding to the window length as key frames to obtain the key frame sequence corresponding to the video data;
and the apparatus further comprises:
an updating module, configured to update the starting position of the sliding window based on the sliding length and the starting position after the generation module generates the collision prediction result corresponding to the unmanned aerial vehicle according to the dynamic change analysis result.
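The key-frame selection in claim 6 is a fixed-interval subsample of the video followed by a sliding window whose starting position advances by the sliding length after each prediction round. A self-contained sketch; the frame rate, interval, and window parameters below are illustrative assumptions:

```python
def candidate_keyframes(num_frames: int, fps: int, interval_s: float) -> list:
    # Screen candidate key frames at the preset time interval.
    step = max(1, int(fps * interval_s))
    return list(range(0, num_frames, step))

class SlidingWindow:
    def __init__(self, window_length: int, slide_length: int, start: int = 0):
        self.window_length = window_length
        self.slide_length = slide_length
        self.start = start  # order of the earliest candidate in the window

    def select(self, candidates: list) -> list:
        # Take window_length candidates beginning at the starting position.
        return candidates[self.start:self.start + self.window_length]

    def advance(self) -> None:
        # Updating module: new starting position = old position + slide length.
        self.start += self.slide_length

cands = candidate_keyframes(300, fps=30, interval_s=0.5)  # every 15th frame
win = SlidingWindow(window_length=4, slide_length=2)
print(win.select(cands))  # first key frame sequence: [0, 15, 30, 45]
win.advance()
print(win.select(cands))  # next round overlaps by two frames: [30, 45, 60, 75]
```

A slide length smaller than the window length, as above, makes consecutive key frame sequences overlap, so the model sees each key frame in more than one prediction round.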
7. A collision prediction apparatus based on unmanned aerial vehicle vision, the apparatus comprising:
a memory storing executable program code; and
a processor coupled to the memory;
wherein the processor invokes the executable program code stored in the memory to perform the unmanned aerial vehicle vision-based collision prediction method of any one of claims 1 to 5.
8. A computer storage medium storing computer instructions which, when invoked, perform the unmanned aerial vehicle vision-based collision prediction method of any one of claims 1 to 5.
CN202311812843.7A 2023-12-27 2023-12-27 Collision prediction method and device based on unmanned aerial vehicle vision Active CN117475358B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311812843.7A CN117475358B (en) 2023-12-27 2023-12-27 Collision prediction method and device based on unmanned aerial vehicle vision

Publications (2)

Publication Number Publication Date
CN117475358A CN117475358A (en) 2024-01-30
CN117475358B true CN117475358B (en) 2024-04-23

Family

ID=89624102

Country Status (1)

Country Link
CN (1) CN117475358B (en)

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020113423A1 (en) * 2018-12-04 2020-06-11 深圳市大疆创新科技有限公司 Target scene three-dimensional reconstruction method and system, and unmanned aerial vehicle
CN111433818A (en) * 2018-12-04 2020-07-17 深圳市大疆创新科技有限公司 Target scene three-dimensional reconstruction method and system and unmanned aerial vehicle
CN110908399A (en) * 2019-12-02 2020-03-24 广东工业大学 Unmanned aerial vehicle autonomous obstacle avoidance method and system based on light weight type neural network
CN112711271A (en) * 2020-12-16 2021-04-27 中山大学 Autonomous navigation unmanned aerial vehicle power optimization method based on deep reinforcement learning
CN113625762A (en) * 2021-08-30 2021-11-09 吉林大学 Unmanned aerial vehicle obstacle avoidance method and system, and unmanned aerial vehicle cluster obstacle avoidance method and system
CN114200956A (en) * 2021-11-04 2022-03-18 西安理工大学 Anti-collision method for wireless ultraviolet light early warning broadcast in unmanned aerial vehicle formation
CN114972840A (en) * 2022-04-12 2022-08-30 北京工商大学 Momentum video target detection method based on time domain relation
CN114693754A (en) * 2022-05-30 2022-07-01 湖南大学 Unmanned aerial vehicle autonomous positioning method and system based on monocular vision inertial navigation fusion
CN116721378A (en) * 2023-04-17 2023-09-08 唐山惠唐物联科技有限公司 Anti-collision method based on image recognition
CN116382346A (en) * 2023-04-27 2023-07-04 中国人民解放军国防科技大学 Unmanned aerial vehicle perception avoidance method and system based on event camera and deep reinforcement learning
CN117237867A (en) * 2023-09-15 2023-12-15 首都机场集团有限公司北京大兴国际机场 Self-adaptive field monitoring video target detection method and system based on feature fusion
CN117150272A (en) * 2023-09-26 2023-12-01 飞客工场科技(北京)有限公司 Unmanned aerial vehicle real-time track prediction method and system based on artificial intelligence

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Pelee: A Real-Time Object Detection System on Mobile Devices; Robert J. Wang et al.; arXiv; 2019-01-18; pp. 1-10 *
Research on a UAV Obstacle Avoidance Method Based on a Transformer Module and CNN; Liang Yongxun et al.; Machinery & Electronics (《机械与电子》); May 2023; Vol. 41, No. 5; pp. 56-61 *
Lightweight Video Anomaly Detection via Frame Prediction Based on Multi-Branch Aggregation; Huang Shaonian et al.; Journal of Graphics (《图学学报》); 2023-10-11; pp. 1-12 *
A Visual Inspection Model for UAV Swarms Based on an Improved Pigeon-Flock Hierarchy; Chen Qi et al.; Journal of System Simulation (《系统仿真学报》); June 2022; Vol. 34, No. 6; pp. 1275-1285 *


Similar Documents

Publication Publication Date Title
US11823429B2 (en) Method, system and device for difference automatic calibration in cross modal target detection
Aker et al. Using deep networks for drone detection
EP3338248B1 (en) Systems and methods for object tracking
CN110909651B (en) Method, device and equipment for identifying video main body characters and readable storage medium
US9767570B2 (en) Systems and methods for computer vision background estimation using foreground-aware statistical models
JP6032921B2 (en) Object detection apparatus and method, and program
CN110781836A (en) Human body recognition method and device, computer equipment and storage medium
CN108648211B (en) Small target detection method, device, equipment and medium based on deep learning
CN112926410A (en) Target tracking method and device, storage medium and intelligent video system
CN111784737B (en) Automatic target tracking method and system based on unmanned aerial vehicle platform
EP2860661A1 (en) Mean shift tracking method
CN116453109A (en) 3D target detection method, device, equipment and storage medium
CN112861755A (en) Method and system for real-time segmentation of multiple classes of targets
CN115565146A (en) Perception model training method and system for acquiring aerial view characteristics based on self-encoder
CN118311955A (en) Unmanned aerial vehicle control method, terminal, unmanned aerial vehicle and storage medium
CN111553474A (en) Ship detection model training method and ship tracking method based on unmanned aerial vehicle video
CN114169425A (en) Training target tracking model and target tracking method and device
CN111915653B (en) Dual-station visual target tracking method
CN111950507B (en) Data processing and model training method, device, equipment and medium
CN111428567B (en) Pedestrian tracking system and method based on affine multitask regression
CN114596515A (en) Target object detection method and device, electronic equipment and storage medium
CN117274740A (en) Infrared target detection method and device
CN110287957B (en) Low-slow small target positioning method and positioning device
CN117475358B (en) Collision prediction method and device based on unmanned aerial vehicle vision
CN116823884A (en) Multi-target tracking method, system, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant