CN112019757A - Unmanned aerial vehicle collaborative shooting method and device, computer equipment and storage medium


Info

Publication number: CN112019757A
Authority: CN (China)
Prior art keywords: scene, sub-scene, unmanned aerial vehicle, target
Legal status: Granted; currently Active
Application number: CN202011100252.3A
Other languages: Chinese (zh)
Other versions: CN112019757B (en)
Inventor: 黄惠 (Huang Hui)
Current Assignee: Shenzhen Moutong Technology Co ltd
Original Assignee: Shenzhen Moutong Technology Co ltd
Application filed by Shenzhen Moutong Technology Co ltd
Priority to CN202011100252.3A
Publication of CN112019757A
Application granted
Publication of CN112019757B

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80Camera processing pipelines; Components thereof
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/23Clustering techniques
    • G06F18/232Non-hierarchical techniques
    • G06F18/2321Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F18/23213Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Signal Processing (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The application relates to an unmanned aerial vehicle collaborative shooting method and device, computer equipment and a storage medium. The method comprises the following steps: segmenting the whole dynamic scene into a plurality of sub-scenes in real time; determining a target shooting position corresponding to each sub-scene according to the central position of the sub-scene, the range of the sub-scene, and the positional relationship between the central position of the sub-scene and the central position of the whole dynamic scene; performing real-time task assignment based on the current position of each unmanned aerial vehicle and the principle that the total flight distance of the unmanned aerial vehicle cluster is minimized, so as to determine the unmanned aerial vehicle assigned to each target shooting position; and, for each assigned unmanned aerial vehicle, controlling it to fly from its current position to the assigned target shooting position and to shoot toward the central position of the corresponding sub-scene at that position, so as to obtain a group of associated shot pictures that cover the whole dynamic scene and focus on the target moving objects. The scheme is suitable for aerial photography of complex dynamic scenes.

Description

Unmanned aerial vehicle collaborative shooting method and device, computer equipment and storage medium
Technical Field
The application relates to the technical field of computers and unmanned aerial vehicles, and in particular to an unmanned aerial vehicle collaborative shooting method and device, computer equipment and a storage medium.
Background
In recent years, the unmanned aerial vehicle technology has been widely applied and developed, and people increasingly use unmanned aerial vehicles to shoot various scenes, especially fixed scenes such as city street scenes, landscape scenes and the like. However, the use requirements of people for the unmanned aerial vehicle are not limited to shooting of fixed static scenes, and the unmanned aerial vehicle is often required to be used for aerial shooting of some complex dynamic scenes. For example, a multi-player ball game or a venue for holding events belongs to a relatively complex dynamic scene related to moving objects.
However, the conventional automatic aerial photography methods for unmanned aerial vehicles mainly follow a fixed, pre-generated flight route to photograph an overall static scene; they assume that the objects in the scene do not move, or that the moving objects are not the focus of shooting, and therefore cannot shoot moving objects in a targeted manner. For complex dynamic scenes, the shooting route often cannot be determined in advance, and the moving objects need to be shot in real time. Therefore, the traditional methods are not suitable for aerial photography of complex dynamic scenes.
Disclosure of Invention
In view of the above, it is necessary to provide a coordinated unmanned aerial vehicle photography method, device, computer device and storage medium suitable for complex dynamic scene aerial photography.
An unmanned aerial vehicle collaborative photography method comprises the following steps:
dividing the whole dynamic scene into a plurality of sub-scenes in real time; an overall dynamic scene, which is a scene including a plurality of target moving objects in a moving state; each sub scene comprises at least one target moving object, and the range of each sub scene is in the shooting range of the unmanned aerial vehicle;
determining a target shooting position corresponding to each sub-scene according to the central position of each sub-scene, the range of the sub-scenes and the position relationship between the central position of the sub-scenes and the central position of the whole dynamic scene; the target shooting position is used for shooting the motion details of a target motion object in the sub-scene and the position of the unmanned aerial vehicle when the part of the whole dynamic scene related to the sub-scene is shot;
performing real-time task assignment based on the current position of each unmanned aerial vehicle and the principle that the total flying distance of the unmanned aerial vehicle cluster is minimum, and determining the unmanned aerial vehicle assigned to each target shooting position;
and controlling the unmanned aerial vehicle to fly to the assigned target shooting position from the current position aiming at each assigned unmanned aerial vehicle, and shooting towards the central position of the corresponding sub-scene at the target shooting position to obtain a group of relevance shooting pictures covering the whole dynamic scene and aiming at the target moving object.
An unmanned aerial vehicle collaborative photography device, comprising:
the scene segmentation module is used for segmenting the whole dynamic scene into a plurality of sub-scenes in real time; an overall dynamic scene, which is a scene including a plurality of target moving objects in a moving state; each sub scene comprises at least one target moving object, and the range of each sub scene is in the shooting range of the unmanned aerial vehicle;
the target position determining module is used for determining a corresponding target shooting position of each sub-scene according to the central position of each sub-scene, the range of the sub-scenes and the position relation between the central position of the sub-scenes and the central position of the whole dynamic scene; the target shooting position is used for shooting the motion details of a target motion object in the sub-scene and the position of the unmanned aerial vehicle when the part of the whole dynamic scene related to the sub-scene is shot;
the task assignment module is used for carrying out real-time task assignment based on the current position of each unmanned aerial vehicle and the principle that the total flying distance of the unmanned aerial vehicle cluster is minimum, and determining the unmanned aerial vehicle assigned to each target shooting position;
and the collaborative shooting module is used for controlling the unmanned aerial vehicle to fly to the assigned target shooting position from the current position aiming at each assigned unmanned aerial vehicle, and shooting towards the central position of the corresponding sub-scene at the target shooting position to obtain a group of relevance shooting pictures which cover the whole dynamic scene and aim at the target moving object.
A computer device comprises a memory and a processor, wherein the memory stores a computer program, and the processor, when executing the computer program, implements the steps of the unmanned aerial vehicle collaborative shooting method in the embodiments of the present application.
A computer-readable storage medium, on which a computer program is stored, which, when executed by a processor, implements the steps in the unmanned aerial vehicle collaborative photography method in the embodiments of the present application.
According to the unmanned aerial vehicle collaborative shooting method, the unmanned aerial vehicle collaborative shooting device, the computer equipment and the storage medium, the whole dynamic scene is divided into a plurality of sub-scenes in real time; and determining the corresponding target shooting position of each sub-scene according to the position relation between the central position of the sub-scene and the central position of the whole dynamic scene, assigning the unmanned aerial vehicle to the assigned target shooting position in real time based on the principle that the total flying distance of the unmanned aerial vehicle cluster is minimum, and shooting towards the central position of the corresponding sub-scene. In this way, each drone can shoot the motion details of the target moving object in the sub-scene and part of the overall dynamic scene associated with the sub-scene at the target shooting position, so that a group of associated shot pictures covering the overall dynamic scene and aiming at the target moving object can be obtained by combining the shot pictures of a plurality of drones. The method realizes effective shooting of complex dynamic scenes.
Drawings
Fig. 1 is a schematic flowchart of a collaborative photographing method for an unmanned aerial vehicle in an embodiment.
Fig. 2 is a schematic diagram of a collaborative unmanned aerial vehicle photography method in one embodiment.
Fig. 3 is a schematic diagram of determining the horizontal position of a drone in one embodiment.
FIG. 4 is a diagram illustrating a fixed clustering scenario in an embodiment.
Fig. 5 is a flight trajectory diagram of unmanned aerial vehicle collaborative photography in a fixed clustering scene in an embodiment.
FIG. 6 is a schematic diagram of a converge-disperse scene in one embodiment.
FIG. 7 is a diagram illustrating a random motion scenario in one embodiment.
FIG. 8 is a graph showing the comparison effect of the control test in one embodiment.
Fig. 9 is a block diagram of the structure of the cooperative photographing apparatus for unmanned aerial vehicles in one embodiment.
FIG. 10 is a diagram showing an internal structure of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The unmanned aerial vehicle collaborative shooting method can be applied to the following application environments. Wherein the computer device communicates with a plurality of drones through a network. The computer device may be a terminal or a server, among others. The terminal can be, but is not limited to, various personal computers, notebook computers, smart phones, tablet computers and portable wearable devices, and the server can be implemented by an independent server or a server cluster formed by a plurality of servers.
In one embodiment, as shown in fig. 1, a collaborative photography method for unmanned aerial vehicles is provided, which is described by taking the method as an example of being applied to a computer device, it is understood that the computer device may be a server or a terminal, and the method may also be applied to a system including a terminal and a server, and is implemented through interaction between the terminal and the server. In this embodiment, the method includes the steps of:
step 102, the whole dynamic scene is divided into a plurality of sub-scenes in real time.
The overall dynamic scene is the complete scene to be shot, which includes a plurality of target moving objects in a moving state. A target moving object is an object that is in motion and needs to be followed or shot; being in a moving state, it is not stationary but continuously moving. A sub-scene is a smaller scene obtained by segmenting the overall dynamic scene, and all sub-scenes combined together form the overall dynamic scene. Each sub-scene includes at least one target moving object, and the range of each sub-scene is within the shooting range of the unmanned aerial vehicle. The range of a sub-scene refers to its size. That is, the range of a sub-scene obtained by segmentation cannot be too large: the sub-scene needs to fit within the shooting range of the unmanned aerial vehicle, so that all moving objects in the sub-scene can be captured.
In one embodiment, the target moving object includes a human, an animal, a vehicle, or the like that generates motion.
In one embodiment, the computer device may segment the overall dynamic scene into sub-scenes by clustering the target moving objects. The clustering algorithm is used for sub-scene segmentation, so that each sub-scene is ensured to have at least one target moving object, and sub-scenes without target moving objects do not exist.
The computer device may also obtain a plurality of continuous sub-scenes by a non-clustering method, for example, surrounding a target moving object in the entire dynamic scene with a convex hull, and then segmenting the convex hull.
In one embodiment, when the computer device performs scene segmentation by clustering the target moving objects, a K-means clustering algorithm or an adaptive K-means clustering algorithm with a distance threshold may be used. In other embodiments, other clustering algorithms may be used; this is not limited here.
In an embodiment, the computer device may perform scene segmentation using a K-means clustering algorithm, and specifically, the computer device may divide target moving objects in the entire dynamic scene into K preset clusters, randomly select an initial clustering center from each cluster, calculate a distance between each target moving object and each clustering center, reassign each target moving object to a cluster where a closest clustering center is located, and re-determine a position of the clustering center of each cluster according to a position mean of the target moving object in the cluster, so as to repeat iteration until the clustering is stable.
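For illustration, a minimal sketch of this K-means-style segmentation is given below, assuming the target moving objects are described by 2-D ground positions in a NumPy array; the function name kmeans_segment and its parameters are illustrative and not part of the patent.

```python
import numpy as np

def kmeans_segment(positions, k, init_centers=None, max_iter=100, seed=0):
    """Cluster target-moving-object positions (an N x 2 array of ground
    coordinates) into k sub-scenes; returns (centers, labels)."""
    rng = np.random.default_rng(seed)
    if init_centers is None:
        # Randomly pick k of the objects themselves as initial cluster centers.
        centers = positions[rng.choice(len(positions), size=k, replace=False)]
    else:
        centers = np.asarray(init_centers, dtype=float)
    for _ in range(max_iter):
        # Assign every object to its nearest cluster center.
        dists = np.linalg.norm(positions[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Recompute each center as the mean position of its members.
        new_centers = np.array([
            positions[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
            for j in range(k)
        ])
        if np.allclose(new_centers, centers):
            break  # clustering is stable
        centers = new_centers
    return centers, labels
```

The preset cluster count k could come either from a fixed setting or from the adaptive, threshold-based center selection described in a later embodiment, which would pass its centers in through init_centers.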
In the embodiments of the present application, the entire dynamic scene is segmented dynamically and in real time, rather than only once, so that the segmentation follows the real-time motion of the target moving objects in the overall dynamic scene. This keeps the scene segmentation accurate and thereby improves the accuracy of the dynamic aerial photography.
And 104, determining the corresponding target shooting position of each sub-scene according to the central position of each sub-scene, the range of the sub-scenes and the position relationship between the central position of the sub-scenes and the central position of the whole dynamic scene.
It can be understood that, for an overall dynamic scene (i.e. a large scene), after the initial segmentation is completed, the sub-scenes cannot simply be photographed according to the segmentation result alone; otherwise the shooting results become dispersed, independent sub-scene pictures. On this basis, a target shooting position is needed for the unmanned aerial vehicle, from which the motion details in the sub-scene can be captured while ensuring that the shot pictures are not isolated from one another.
The target shooting position is a position where the unmanned aerial vehicle is located when the target shooting position is used for shooting the motion details of the target moving object in the sub-scene and a part of the whole dynamic scene related to the sub-scene. The target shooting position is the best shooting position of the sub-scenes shot to which the unmanned aerial vehicle flies, the unmanned aerial vehicle can shoot the motion details of the target moving object in the sub-scenes (namely meeting the shooting requirement of the motion details of each sub-scene) after flying to the target shooting position, and can shoot a part of the whole dynamic scene associated with the sub-scenes, so that certain relevance is embodied, and therefore the situation that the shot pictures are too isolated due to shooting of only the sub-scenes is avoided.
It can be understood that the range of the sub-scene is determined according to the shooting requirement of the unmanned aerial vehicle and the camera parameters of the unmanned aerial vehicle. The central position of the sub-scene is the position where the central point of the sub-scene is located. The central position of the overall dynamic scene refers to a position where a central point of the overall dynamic scene is located.
In an embodiment, the central position of the entire dynamic scene may be obtained by averaging the central positions of the respective sub-scenes, or by calculating a central point of the entire dynamic scene.
In one embodiment, when the overall dynamic scene is segmented by using the method for clustering the target moving objects, the center position of the sub-scene is the cluster center, and the range of the sub-scene is the cluster size.
In one embodiment, the range of the sub-scene may be a circular area range determined according to the cluster radius with the cluster center as a center.
Specifically, the computer device may determine, in combination with a preset photography composition rule, a target shooting position corresponding to each sub-scene according to the center position of each sub-scene, the range of the sub-scenes, and a positional relationship between the center position of the sub-scene and the center position of the entire dynamic scene.
And 106, assigning tasks in real time based on the current position of each unmanned aerial vehicle and the principle that the total flying distance of the unmanned aerial vehicle cluster is minimum, and determining the unmanned aerial vehicle assigned to each target shooting position.
The principle that the total flying distance of the unmanned aerial vehicle cluster is the minimum is that the total flying distance of the unmanned aerial vehicle cluster used for shooting a plurality of sub-scenes is the minimum when the unmanned aerial vehicle cluster flies to the target shooting position corresponding to the sub-scenes. I.e. an optimal assignment scheme.
Specifically, the computer device can acquire the real-time current position of each unmanned aerial vehicle through real-time communication with the unmanned aerial vehicle, and then, based on the principle that the total flying distance of the unmanned aerial vehicle cluster is minimum, task assignment is performed by using an assignment algorithm, so as to determine the unmanned aerial vehicle assigned to the target shooting position of each sub-scene.
It is understood that each target shooting position may be assigned a drone to shoot, i.e. the sub-scene and the assigned drone are in a one-to-one shooting relationship. In other embodiments, when the number of drones is less than the number of sub-scenes, one drone may also be assigned to shoot multiple sub-scenes simultaneously.
And step 108, controlling the unmanned aerial vehicle to fly to the assigned target shooting position from the current position for each assigned unmanned aerial vehicle, and shooting towards the central position of the corresponding sub-scene at the target shooting position to obtain a group of relevance shooting pictures covering the whole dynamic scene and aiming at the target moving object.
It can be understood that the shooting pictures shot by the multiple unmanned aerial vehicles are combined together, so that a group of shooting pictures covering the whole dynamic scene and aiming at the relevance of the target moving object can be shot, namely, a group of shooting pictures which can be combined together to completely cover the whole dynamic scene, are rich in the motion details of each target moving object and have certain relevance with each other are obtained.
Referring to fig. 2, as shown in (a), the whole dynamic scene is first divided into 3 sub-scenes by clustering (the sub-scenes are shown as circular dotted-line regions in the figure). The computer device may determine a corresponding target capture position 206 for each sub-scene based on the center position 202 of the sub-scene and the center position 204 of the overall dynamic scene. As shown in (c), the computer device may assign a drone to each of the 3 sub-scenes, and each assigned drone flies to the target shooting position pointed to by the dotted arrow to shoot the corresponding sub-scene.
In the unmanned aerial vehicle collaborative shooting method, each unmanned aerial vehicle can shoot the motion details of the target moving object in the sub-scene and part of the whole dynamic scene associated with the sub-scene at the target shooting position, so that a group of shooting pictures covering the whole dynamic scene and aiming at the relevance of the target moving object can be obtained by combining the shooting pictures of a plurality of unmanned aerial vehicles. The real-time shooting of complex dynamic scenes with high requirements on manpower and technology is realized. In addition, according to the current position of the unmanned aerial vehicle, each unmanned aerial vehicle is reasonably scheduled in real time, and the total flight distance of the unmanned aerial vehicle cluster in the real-time shooting process is reduced.
In one embodiment, step 102 (referred to as a scene segmentation step) includes: acquiring a preset distance threshold, and selecting an initial cluster center from the whole dynamic scene; respectively determining the distance between each target moving object and the center of each cluster aiming at each target moving object in the whole dynamic scene; determining a target moving object farthest from all cluster centers according to the distances, taking the determined target moving object as a newly added cluster center when the distance between the determined target moving object and the nearest cluster center is larger than a preset distance threshold, returning to iteration to respectively determine the distance between the target moving object and each cluster center and subsequent steps, and stopping the iteration until the cluster centers are not generated any more; performing K-means clustering on a target moving object in the overall dynamic scene based on a cluster center obtained when iteration is stopped to obtain a final clustering cluster; and taking the cluster center of each cluster as the center position of the sub-scene, and taking the cluster size of each cluster as the range of the sub-scene, so as to segment the whole dynamic scene to obtain a plurality of sub-scenes.
In one embodiment, the preset distance threshold is determined according to the effective shooting range of the camera of the unmanned aerial vehicle.
It can be understood that, in this embodiment, an adaptive K-means clustering algorithm with a threshold is equivalently used. That is, the computer device may obtain a preset distance threshold, select an initial cluster center from the overall dynamic scene, determine the remaining cluster centers one by one according to the distance and the threshold, thus obtain a group of cluster centers, and then perform the K-means clustering algorithm for clustering.
Specifically, the computer device may randomly select a position from the overall dynamic scene as an initial cluster center, may also use an average value of the positions of the target moving objects in the overall dynamic scene as the initial cluster center, and may also randomly use the position of one target moving object as the initial cluster center.
Further, the computer device may determine, for each target moving object in the overall dynamic scene, a distance between each target moving object and a center of each cluster, respectively. The computer device may determine the target moving object farthest from the centers of all clusters according to the distance. Further, the computer apparatus may compare the distance between the target moving object (i.e., the target moving object farthest from all cluster centers) and the cluster center closest to the target moving object with a preset distance threshold, and when the distance is greater than the preset distance threshold, it means that the target moving object may serve as an independent cluster center, and thus the target moving object farthest from all cluster centers may serve as a newly added cluster center. The computer device may return to iteratively perform the determining the distance between the target moving object and each cluster center and the subsequent steps, respectively, until the cluster center is no longer generated, and stop the iteration.
It can be understood that when the distance is smaller than the preset distance threshold, it indicates that the distance from any target moving object to a certain cluster center is smaller than the preset distance threshold, and the target moving object can belong to a certain existing cluster without generating a new cluster center.
Further, the computer device may perform K-means clustering on the cluster centers obtained when the iteration is stopped, to obtain a final clustered cluster. The computer device may use the cluster center of each cluster as the center position of the sub-scene, and use the cluster size of each cluster as the range of the sub-scene, so as to segment the entire dynamic scene to obtain a plurality of sub-scenes.
In one embodiment, determining a target moving object farthest from all cluster centers according to the distance, and taking the determined target moving object as a newly added cluster center when the distance between the determined target moving object and the nearest cluster center is greater than a preset distance threshold, includes: for each target moving object, selecting the minimum distance between the target moving object and the center of each cluster as a mark value of the target moving object; selecting the maximum mark value from the mark values of all target moving objects; and when the maximum mark value is larger than the preset distance threshold value, taking the target moving object corresponding to the maximum mark value as the newly added cluster center.
The mark value of the target moving object is the distance between the target moving object and the nearest cluster center. The target moving object corresponding to the maximum mark value is the target moving object farthest from the centers of all clusters.
For ease of understanding, the adaptive K-means clustering algorithm with thresholds is now described in its entirety with reference to a detailed embodiment. The computer device may perform clustering according to the following steps:
(1): and acquiring a preset distance threshold T. If there are N target moving objects in the whole dynamic scene, the average value of the positions of the N target moving objects can be selected as the initial cluster center C1
(2): selecting a distance C1The farthest point being the second cluster center C2The second cluster center thus selected is generally the caseThe lower cluster will not be the first cluster. If C is present1And C2The distance is less than T, i.e. | | C1-C2||<T, then means that the overall dynamic scene is small and can be contained with one cluster, then the clustering process ends.
(3): and sequentially calculating the distance between each target moving object and all known cluster centers, and selecting the minimum distance as the marking value of the target moving object, namely di.
(4): selecting the marking value with the largest marking value from all the marking values of the target moving objects, i.e. selecting the marking value with the largest marking valueMAXdi. If it is notMAXdiIf the target moving object is the farthest point from the centers of all the existing clusters, the target moving object is taken as a newly added cluster center CjAnd if the iteration returns to the step 3, the new cluster center C can be obtained through iteration in sequence3,C4… … are provided. If it is notMAXdiAnd (5) if the target motion object is less than or equal to T, indicating that all target motion objects are in the range of the cluster corresponding to the cluster center, stopping iteration and finishing the selection of the cluster center.
(5): and outputting the obtained K initial cluster centers. And performing K-mean clustering on N target moving objects in the whole dynamic scene on the basis of K initial cluster centers.
In the embodiment, the cluster center can be selected in a self-adaptive manner according to the distance threshold, so that the selected cluster center is farthest from the existing cluster center, and the distance between the cluster centers is greater than the preset distance threshold, thereby improving the stability of the clustering result and improving the accuracy and effectiveness of scene segmentation.
In one embodiment, determining the corresponding target shooting position of each sub-scene according to the central position of each sub-scene, the range of the sub-scenes, and the position relationship between the central position of the sub-scene and the central position of the whole dynamic scene includes: for each sub-scene, determining a theoretical distance between the unmanned aerial vehicle and the center position of the sub-scene according to the camera parameters of the unmanned aerial vehicle; determining the target flight height of the unmanned aerial vehicle according to the theoretical distance and a preset shooting angle of the unmanned aerial vehicle; determining the horizontal position of the unmanned aerial vehicle according to the target flight height and the central position of the sub-scene; and determining the corresponding target shooting position of the sub-scene according to the flying height and the horizontal position of the target.
The theoretical distance between the central positions of the unmanned aerial vehicle and the sub-scene is the most suitable distance between the central positions of the unmanned aerial vehicle and the sub-scene, namely, the theoretically optimal shooting distance between the central positions of the unmanned aerial vehicle and the sub-scene. The target flight height is the flight height at which the unmanned aerial vehicle should be positioned when shooting a sub-scene. The horizontal position of the unmanned aerial vehicle is the position where the unmanned aerial vehicle should be in the horizontal direction when shooting a sub-scene.
In one embodiment, the camera parameters of the drone, including the camera field angle of view of the drone, may also include other camera parameters that enable determination of the theoretical distance between the drone and the center location of the sub-scene.
Because the corresponding target shooting position of the sub-scene is determined by the flying height and the horizontal position, the computer equipment can determine the horizontal position of the unmanned aerial vehicle according to the flying height of the target and the central position of the sub-scene, and determine the corresponding target shooting position of the sub-scene according to the flying height of the target and the horizontal position of the target.
In one embodiment, the computer device may determine a theoretical distance between the drone and a center location of the sub-scene from a camera field angle of the drone, a range of the sub-scene, and a proportion of a center area of a shot of the drone.
In one embodiment, determining the target flight height of the drone according to the theoretical distance and the preset shooting angle of the drone includes: determining the theoretical flying height of the unmanned aerial vehicle according to the theoretical distance and a preset shooting angle of the unmanned aerial vehicle; determining the target flight height of the unmanned aerial vehicle according to the theoretical flight height; the target flight height is a flight height within the safe flight height range of the unmanned aerial vehicle and closest to the theoretical flight height.
The theoretical flying height of the unmanned aerial vehicle is the theoretical most adaptive flying height of the unmanned aerial vehicle (namely, the optimal shooting height of the unmanned aerial vehicle) when the sub-scene is shot. The safe flying height range is a range determined according to the lowest height and the highest height when the unmanned aerial vehicle flies safely, namely, the flying height of the unmanned aerial vehicle is between the lowest height (including) and the highest height (including), and belongs to the safe height.
It will be appreciated that when the theoretical flying height is within the safe flying height range, then the flying height closest to the theoretical flying height, i.e. the theoretical flying height itself, then the target flying height is the theoretical flying height. And when the theoretical flying height is lower than the lowest height in the safe flying height range, the flying height which is closest to the theoretical flying height in the safe flying height range is the lowest height, and the target flying height is the lowest height. And when the theoretical flying height is higher than the highest height in the safe flying height range, the flying height which is closest to the theoretical flying height in the safe flying height range is the highest height, and the target flying height is the highest height.
In one embodiment, the theoretical distance between the drone and the center location of the sub-scene may be determined by the following formula:
D″ = CluRad / (CenRat · tan(FOV / 2)),    (1);
in one embodiment, determining the target flight height of the drone may include:
H″ = D″ · sin θ,    (2);
min_height ≤ H ≤ max_height,    (3);
H = min(max(H″, min_height), max_height),    (4);
wherein CluRad is the cluster radius, CenRat is the proportion of the shot occupied by the picture's central area, FOV is the camera field angle, θ is the set shooting angle of the unmanned aerial vehicle, D″ is the theoretical distance between the unmanned aerial vehicle and the center position of the sub-scene, H″ is the theoretical flying height, min_height and max_height are respectively the minimum and maximum heights of the safe flying height range of the unmanned aerial vehicle, and H is the solved target flight height of the unmanned aerial vehicle.
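The formula images for (1)-(4) are not reproduced in this text, so the short sketch below only illustrates one plausible reading under stated assumptions: the sub-scene of radius CluRad is assumed to fill a CenRat fraction of the field of view at the theoretical distance, the theoretical height is assumed to be D″·sin(θ), and the clamping to the safe flight range follows the description above. The function and its exact trigonometric forms are assumptions, not the patent's formulas.

```python
import math

def target_flight_height(clu_rad, cen_rat, fov_deg, theta_deg, min_height, max_height):
    """Assumed reading of equations (1)-(4): theoretical shooting distance,
    theoretical flying height, and the closest height inside the safe range."""
    fov = math.radians(fov_deg)
    theta = math.radians(theta_deg)
    d_theory = clu_rad / (cen_rat * math.tan(fov / 2.0))     # assumed form of eq. (1)
    h_theory = d_theory * math.sin(theta)                    # assumed form of eq. (2)
    h_target = min(max(h_theory, min_height), max_height)    # eqs. (3)-(4): clamp to safe range
    return d_theory, h_theory, h_target
```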
In this embodiment, the target shooting position of the unmanned aerial vehicle is determined according to the position and range of each sub-scene, combined with a photographic composition design for the complex dynamic scene, which provides a good overall shooting effect for each sub-scene.
In one embodiment, determining the horizontal position of the drone according to the target flight altitude and the center position of the sub-scene includes: determining the horizontal distance between the unmanned aerial vehicle and the center position of the sub-scene according to the target flight height and the unmanned aerial vehicle shooting angle; averaging the central positions of the sub-scenes to obtain the central position of the whole dynamic scene; and determining a ray pointing to the central position of the sub-scene from the central position of the whole dynamic scene, and taking a position on the ray and at a position which meets the horizontal distance from the central position of the sub-scene as the horizontal position of the unmanned aerial vehicle.
It can be understood that, after the target flying height of the unmanned aerial vehicle is determined, the horizontal position of the unmanned aerial vehicle can be calculated from the target flight height and the known center position of the sub-scene. Since it is desirable to cover the entire scene while shooting the target moving objects and to give the whole scene a full, three-dimensional look and feel, the unmanned aerial vehicle shoots from outside the sub-scene toward the center of the sub-scene.
Specifically, the computer device may draw a ray from the center position of the overall dynamic scene to the center position of each sub-scene; the horizontal position of the unmanned aerial vehicle lies on this ray. The computer device can determine the horizontal distance between the unmanned aerial vehicle and the center position of the sub-scene according to the target flight height and the shooting angle of the unmanned aerial vehicle, and the point intercepted on the ray at that horizontal distance from the center position of the sub-scene is the horizontal position of the unmanned aerial vehicle. In this way, the horizontal position of the unmanned aerial vehicle is obtained accurately.
Fig. 3 is a schematic diagram of determining the horizontal position of a drone in one embodiment. Referring to fig. 3, A, B, and C are the central positions of the three sub-scenes, O is the central position of the entire dynamic scene, and A', B', and C' are the horizontal positions of the corresponding drones. As can be seen from fig. 3, the horizontal position of each drone lies on the ray pointing from the center position O of the overall dynamic scene to the center position of its sub-scene, at the horizontal distances AA', BB', and CC' from the respective sub-scene centers.
In one embodiment, the horizontal position of the drone may be determined by the following formula:
O = (1/K) · Σ_{i=1}^{K} X_i,    (5)
P_i = X_i + (H_i / tan θ) · (X_i − O) / ‖X_i − O‖,    (6)
wherein X_i is the center position of the i-th sub-scene to be shot, K is the number of sub-scene centers after segmentation, P_i is the horizontal position of the drone shooting the i-th sub-scene, O is the central position of the overall dynamic scene, H_i is the target flight height of the drone shooting the i-th sub-scene, and θ is the shooting angle of the unmanned aerial vehicle.
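Continuing the illustrative sketch, equations (5) and (6) as reconstructed above could be evaluated as follows; the horizontal offset H_i / tan(θ) is an assumption inferred from the shooting-angle description, and the helper name is hypothetical.

```python
import numpy as np

def drone_horizontal_position(sub_centers, i, h_target, theta_deg):
    """Place drone i on the ray from the overall scene center O (mean of the
    sub-scene centers, eq. (5)) through sub-scene center X_i, at the assumed
    horizontal distance H_i / tan(theta) from X_i (eq. (6))."""
    theta = np.radians(theta_deg)
    o = sub_centers.mean(axis=0)                     # eq. (5): overall scene center
    x_i = sub_centers[i]
    direction = (x_i - o) / np.linalg.norm(x_i - o)  # unit vector from O toward X_i
    horiz_dist = h_target / np.tan(theta)            # assumed horizontal offset
    return x_i + horiz_dist * direction              # eq. (6): horizontal drone position
```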
In one embodiment, the real-time task assignment is performed based on the current position of each unmanned aerial vehicle and the principle that the total flying distance of the unmanned aerial vehicle cluster is minimum, and the unmanned aerial vehicle assigned to each target shooting position is determined, including: generating a flight cost matrix according to the distance from the current position of each unmanned aerial vehicle to each target shooting position; defining a binary function aiming at the combination pair of the unmanned aerial vehicle and the target shooting position; when the unmanned aerial vehicle in the combined pair flies to the target shooting position, the value of the binary function is 1, and when the unmanned aerial vehicle in the combined pair does not fly to the target shooting position, the value of the binary function is 0; constructing a mathematical model of the minimized flight cost according to the binary function and the flight cost matrix, and determining an assignment scheme of the minimized cost according to the mathematical model of the minimized flight cost; in the assignment scheme, the target shooting position and the assigned unmanned aerial vehicle satisfy a one-to-one relationship.
It can be understood that the many-to-many flight-dispatch problem of the unmanned aerial vehicles can be converted into an assignment problem. The distance from each drone to each target point can be approximated as the flight cost of that flight task. Complete shooting of the scene is achieved by ensuring that the called drones and the target sub-scenes are in one-to-one correspondence. In the actual matching process, drones can be called dynamically according to the number of segmented sub-scenes, realizing one-to-one shooting of the sub-scenes by the drones.
It will be appreciated that the distance each drone flies to each target shooting position also represents the cost of that flight, so the computer device may generate a flight cost matrix based on the distance each drone would fly from its current position to each target shooting position. In addition, the precondition for the target shooting positions and the assigned drones to satisfy a one-to-one relationship is that the number of drones is greater than or equal to the number of target shooting positions.
The task assignment problem for the unmanned aerial vehicles can be described as follows: for m target shooting positions, n drones can be called for shooting, and the distance from the i-th drone to the j-th target point is denoted D_ij, which also represents the flight cost of that flight. We want the total flight distance of the drone swarm to be minimal (i.e., the total flight cost of the drone swarm to be minimal), so the task assignment problem can be discussed in two cases, depending on the relation between m and n (i.e., between the number of segmented sub-scenes and the number of drones):
when m = n, each drone is assigned a target shooting position, also called a perfect match.
When m < n, that is, when the number of drones is greater than the number of target shooting positions, (n − m) virtual target shooting positions should be assumed. A virtual target shooting position has no specific location, but the distance cost from any drone to a virtual target shooting position is 0; that is, the drone assigned to it simply does not move.
Thus, with the dummy target capture positions, the task assignment problem for the drone can translate to a perfectly matched balanced assignment. A binary function is defined for each combined pair of drone and target shooting position (i.e., drone and target shooting position are a pair combination):
X(i, j) = 1 if drone i flies to target shooting position j; X(i, j) = 0 otherwise.    (7)
The mathematical model of the assignment problem, which requires the flight cost to be minimized, is:
min z = Σ_{i=1}^{n} Σ_{j=1}^{n} D_ij · X_ij,    (8)
wherein:
Σ_{j=1}^{n} X_ij = 1, for i = 1, 2, ..., n,    (9)
Σ_{i=1}^{n} X_ij = 1, for j = 1, 2, ..., n,    (10)
Equation (8) is the flight cost function z of the drone swarm, where X_ij is another way of writing the binary function X(i, j), D_ij denotes the distance from the i-th drone to the j-th target point, m is the number of target shooting positions, and n is the number of drones. Formula (9) expresses that any drone can fly to only one target shooting position, and formula (10) expresses that any target shooting position can be assigned only one drone; together they represent the one-to-one constraint between drones and target shooting positions. The problem is thus transformed into a 0-1 programming problem, and the KM algorithm (an algorithm developed from the Hungarian algorithm that computes a maximum-weight matching under perfect matching) can be used to solve it and obtain the assignment scheme that minimizes the flight cost.
The flight cost matrix formed by the D_ij is called the efficiency matrix in the assignment algorithm, and after balancing it is an n × n matrix. The KM algorithm relies on the principle that adding the same constant to, or subtracting it from, any row or column of the efficiency matrix does not change the optimal assignment scheme. Its basic idea is to transform the efficiency matrix appropriately, on the premise that no element becomes negative, until the matrix contains n zero elements lying in different rows and different columns; these elements are then selected to form a result matrix R that represents the assignment result, in which the positions of the selected zero elements are set to 1 and all other positions are set to 0. Zero elements in different rows and different columns are chosen because each assignment is one-to-one.
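A minimal sketch of solving this 0-1 assignment is shown below. It uses SciPy's Hungarian-algorithm implementation (scipy.optimize.linear_sum_assignment) instead of the hand-worked KM steps described next, and pads the efficiency matrix with zero-cost virtual target shooting positions when there are more drones than sub-scenes, as discussed above; the function name and array shapes are illustrative.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def assign_drones(drone_positions, target_positions):
    """Return {target_index: drone_index} minimizing the total flight distance.

    drone_positions: (n, 3) current drone positions; target_positions: (m, 3)
    target shooting positions, with n >= m. Columns beyond m are zero-cost
    virtual targets, so drones assigned to them simply stay where they are.
    """
    n, m = len(drone_positions), len(target_positions)
    cost = np.zeros((n, n))                               # balanced n x n efficiency matrix
    cost[:, :m] = np.linalg.norm(
        drone_positions[:, None, :] - target_positions[None, :, :], axis=2
    )                                                     # D_ij: distance from drone i to target j
    rows, cols = linear_sum_assignment(cost)              # Hungarian / KM solution
    return {j: i for i, j in zip(rows, cols) if j < m}    # drop the virtual targets

# Example: 3 drones, 2 sub-scene targets -> one drone is left unassigned.
drones = np.array([[0.0, 0.0, 10.0], [50.0, 0.0, 10.0], [0.0, 50.0, 10.0]])
targets = np.array([[5.0, 0.0, 20.0], [0.0, 45.0, 20.0]])
print(assign_drones(drones, targets))   # {0: 0, 1: 2} for this data
```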
In a specific implementation, the efficiency matrix is first transformed into a matrix that has zero elements in every row and every column, and a tentative assignment is then made. If n zero elements in distinct rows and distinct columns cannot be selected, the number of zero elements in the matrix is increased by further addition-and-subtraction transformations until such a selection becomes possible.
Here, a simple n = 3 efficiency matrix is taken as an example (the matrix itself is given as an image in the original). The specific execution steps are as follows:
Step 1: perform row and column transformations on the efficiency matrix so that every row and every column contains a 0 element (the transformed matrix is given as an image in the original).
Step 2: a tentative assignment is made. After the 1 st transformation, each row and each column in the coefficient matrix have 0 elements, but n independent 0 elements need to be found out. If so, an optimal solution is obtained. The method comprises the following steps:
starting from a row (column) with only one 0 element, this 0 element is marked with a selection mark, denoted X. Indicating that the drone represented by the row, only one target location may fly. Then, the other 0 element in the column (row) is marked as an exclusion mark and is marked as H. This indicates that the target location represented by this column has been assigned and no further drones are considered.
Making a selection mark for 0 element of only one 0 element column (row) and marking as X; then the 0 element of the row (column) is excluded and is marked as H. And repeating the first step and the second step until all 0 elements are marked with X or H.
If there are still unmarked 0 elements and there are at least two 0 elements in the same row (column) (which means that the unmanned aerial vehicle flies to a plurality of target positions in equal distance), starting from the row (column) with the least 0 elements left, comparing the number of 0 elements in the column where each 0 element in the row is located, selecting the 0 element mark X in the column (row) with the least 0 element (which means that the selectivity is less prior to the selectivity is more), and then excluding the other 0 elements in the same row and column, mark H. And c, exchanging the rows and columns in the fourth step until all 0 elements are marked with X or H. To obtain
Figure 653181DEST_PATH_IMAGE014
If the number a of X elements is equal to the order number n of the matrix, the optimal solution of the assignment problem is obtained. If a < n, the next step 3 is entered. In the above example a < n, so step 3 will continue.
Step 3: cover all 0 elements with the fewest possible straight lines, to determine the maximum number of independent 0 elements that can be found in the coefficient matrix.
The method is as follows: first, tick every row that contains no X element; second, tick every column that contains a 0 element (whether marked X or H) in a ticked row; third, tick every row that contains an X element in a ticked column; fourth, repeat the second and third steps until no new rows or columns are ticked; fifth, draw a straight line through every unticked row and every ticked column. If the number of lines k is less than n, the current matrix must be transformed before n independent 0 elements can be found, so proceed to step 4; if k = n but a < n, return to the operation of step 2, starting from other 0 elements.
Step 4: matrix transformation.
The purpose of transforming the matrix at the end of step 3 is to add 0 elements. Therefore, the minimum element is found among the elements not covered by any straight line; it is subtracted from every element of each uncovered row and added to every element of each covered column. This ensures that the original 0 elements are preserved while new 0 elements are generated. The result is a new coefficient matrix whose optimal solution is the same as that of the original problem. Return to step 2 and make a tentative assignment on the new matrix; if n independent 0 elements can be obtained, the optimal solution is reached; otherwise, repeat steps 2, 3 and 4 until n independent 0 elements are obtained. The resulting matrix and the example efficiency matrix are shown as images in the original; the optimal assignment for this example is given in the following table.
Table 1. Example optimal assignment scheme (i.e., the minimum-cost assignment)

Drone    Target shooting position    Distance cost
2        1                           14
3        2                           15
1        3                           13
The total cost is 14+15+13= 42. Accordingly, the unmanned aerial vehicle 2 can be controlled to fly to the target shooting position 1, the unmanned aerial vehicle 3 flies to the target shooting position 2, and the unmanned aerial vehicle 1 flies to the target shooting position 3.
It should be noted that, when the number of the drones is less than the number of the target shooting positions (i.e., less than the number of the sub-scenes obtained by segmentation), the whole shooting can be completed by increasing the number of the drones or changing the field of view of the drones so as to change the preset distance threshold of the cluster.
In the above embodiment, a mathematical model for minimizing the flight cost is constructed according to the binary function and the flight cost matrix, and an assignment scheme for minimizing the cost is determined according to the mathematical model for minimizing the flight cost, so that the unmanned aerial vehicles are assigned one-to-one for the target shooting positions, and the flight cost is saved.
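Putting the embodiments together, the per-frame control loop implied by steps 102 to 108 could be organized roughly as follows. Every helper referenced here (segment_scene, target_flight_height, drone_horizontal_position, assign_drones) is one of the illustrative sketches above, and the composition itself is an assumption about how they might fit together rather than the patent's implementation.

```python
import numpy as np

def collaborative_shooting_step(object_positions, drone_positions, params):
    """One iteration of the real-time loop: segment the dynamic scene, compute
    a target shooting position per sub-scene, assign drones with minimum total
    flight distance, and return per-drone flight and aiming commands."""
    # Step 102: segment the overall dynamic scene into sub-scenes.
    centers, radii, _ = segment_scene(object_positions, params["distance_threshold"])

    # Step 104: one target shooting position per sub-scene.
    targets = []
    for i, radius in enumerate(radii):
        _, _, h = target_flight_height(radius, params["cen_rat"], params["fov_deg"],
                                       params["theta_deg"], params["min_height"],
                                       params["max_height"])
        xy = drone_horizontal_position(centers, i, h, params["theta_deg"])
        targets.append([xy[0], xy[1], h])
    targets = np.asarray(targets)

    # Step 106: minimum-total-distance assignment of drones to target positions.
    assignment = assign_drones(drone_positions, targets)

    # Step 108: command each assigned drone to fly to its target shooting
    # position and aim its camera at the corresponding sub-scene center.
    return {drone: {"goto": targets[j], "look_at": centers[j]}
            for j, drone in assignment.items()}
```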
It should be understood that although the steps in the flowcharts of the present application are shown in sequence as indicated by the arrows, the steps are not necessarily performed in sequence as indicated by the arrows, and the steps may be performed in other sequences unless explicitly stated herein. Moreover, at least a portion of the steps in each flowchart may include multiple steps or multiple stages, which are not necessarily performed at the same time, and which are not necessarily performed in sequence, but may be performed alternately or in turns with at least a portion of the other steps or stages.
It can be understood that the method in each embodiment of the present application may be applied to a communication program for controlling an unmanned aerial vehicle in a computer device. The design target of the communication program is that the program can be used for controlling the unmanned aerial vehicle which is used randomly, manual control and algorithm calling are included to enable the unmanned aerial vehicle to automatically execute movement, and the program can obtain feedback of the state of the unmanned aerial vehicle in a scene in real time and timely reflect the feedback to a user. The programming process of the communication program for controlling the unmanned aerial vehicle comprises the following steps: firstly, determining safe flight rules of the unmanned aerial vehicle, and establishing a basic control loop of a program by a control mechanism of the unmanned aerial vehicle. The loop may be responsive to operating instructions and can be moved by manual control, the speed of which can be set. Meanwhile, the unmanned aerial vehicle is provided with a safety limiting system, and in the process of flying to a target shooting position, an alarm can be given when the position of the unmanned aerial vehicle is too high, too low from the ground position or too close to an obstacle in a certain direction, the unmanned aerial vehicle is stopped from further executing an instruction, and a flight assignment command is waited to be recalculated; when the unmanned aerial vehicle moving speed is too fast, the unmanned aerial vehicle can only move according to the maximum speed limit. In the program, a tracking function of the unmanned aerial vehicle on a single target is added. According to the flows in the embodiments, the unmanned aerial vehicle collaborative shooting algorithm is added to realize the unmanned aerial vehicle collaborative shooting method in the embodiments of the application.
The method is verified by simulation. The simulation system deploys the basic motion rules of the unmanned aerial vehicle, a single-drone tracking algorithm and the multi-drone collaborative photography algorithm for complex dynamic scenes, and the drones can be controlled freely by hand or made to execute specified automatic flight tasks. We designed 3 scenarios for testing multi-drone collaborative photography, each with multiple instances. In the examples shown below, the window at the top center always displays a comprehensive overhead view of the whole area, which may be referred to as the global view and is used to observe the positions of all drones in the whole scene (i.e. the middle picture of the first row in each of fig. 4, 6, and 7, for example 404 in fig. 4). Each of the other windows displays the shooting picture of the corresponding drone, and the window corresponding to a drone that has not been called remains still. For ease of observation, the objects of interest in all scenes in the examples are moving human models. In addition, drone trajectory diagrams are drawn from the raw data recorded in real time, such as the drone positions and the clustering results. In a drone trajectory diagram, the solid lines represent the flight trajectories of the drones, and the dotted lines connect each drone to its corresponding cluster center, which is also the direction the drone's camera points in the ideal case.
Test scenario 1 — fixed clustering scenario: the target moving objects are divided into several groups, and each group moves along its own route. The characteristic of this scene is that its clusters are relatively fixed, but the relative positions between clusters vary randomly. FIG. 4 is a diagram illustrating a fixed clustering scenario in one embodiment. The motion trajectories of the target moving objects in the scene used in this example are shown in fig. 4 (a). In the whole dynamic scene, 4 sub-scenes can be stably clustered. In this instance there are 5 drones in total, of which only 4 are called, as required by the actual shooting, to shoot the 4 groups respectively.
The pictures at the same position in fig. 4 (b)-(c) are pictures shot by the same drone at different times. Taking fig. 4 (b) as an example, the shooting pictures of the different drones in the fixed clustering scene are explained. 402, 406, 408, and 410 in fig. 4 (b) are the shooting pictures of the 4 called drones, each shooting its assigned group of people. Picture 412 belongs to the one drone among the 5 that does not execute a flight mission, so this picture remains unchanged throughout (b)-(c). 404 is the whole-scene picture shot by the fixed camera, which is not bound to any drone, so the background of this picture also remains unchanged throughout (b)-(c); for example, the middle picture of the first row in fig. 4 (c) is shot by the fixed camera and is consistent with the background of 404 in (b). Fig. 5 is a flight trajectory diagram of collaborative photography by the different drones in the fixed clustering scene. From the shooting pictures and the analysis of the trajectory diagram in fig. 5, even after long intervals the drones keep focusing their shots on the cluster centers (i.e., the centers of the sub-scenes), and the pictures shot by the drones all point toward the scene center rather than being independent pictures, which gives the set of pictures a sense of unity. Therefore, for fixed clustering scenes, this collaborative drone photography algorithm achieves a good shooting effect.
Test scenario 2 — gathering-and-dispersing scenario: the target moving objects move toward or away from a central point, periodically gathering at the center and then moving apart to different positions, back and forth, so their clustering alternates between gathered and dispersed. FIG. 6 is a schematic diagram of a gathering-and-dispersing scene in one embodiment.
The motion trajectories of the scene used in this example are shown in fig. 6 (a). The scene consists of four human targets performing two cycles of gathering and dispersing, so the number of cluster centers changes as 1-2-4-2-1. In this instance, 4 of the 5 candidate drones are started in total, and drones are called dynamically during the shooting process. Figs. 6 (b)-(c) show the pictures taken at different times in the gathering-and-dispersing scene. Since scenario 2 also calls at most 4 drones, the last picture in each of fig. 6 (b)-(c) is likewise the picture of the drone that does not execute a flight mission, and the middle picture of the first row in each of (b)-(c) is the whole-scene picture shot by the fixed camera that is not bound to any drone.
From the analysis of the shooting pictures, for a scene in which the number of cluster centers changes drastically, drones are called dynamically, drones that are not called hover and wait in place, and the algorithm adapts dynamically to the scene. In the above series of shooting pictures, the shooting effect is good most of the time. In the group of pictures shown in fig. 6 (b), the targets are grouped into pairs and two drones are called to shoot, so only the first and third pictures in the first row, which are the pictures of the 2 called drones, contain persons. In the group of pictures shown in fig. 6 (c), the 4 persons move separately to the 4 corners of the scene, and 4 drones are called to shoot them respectively. Therefore, for gathering-and-dispersing scenes, the algorithm achieves a good shooting effect.
Test scenario 3 — random motion scenario: multiple objects move randomly without any motion rule. The number of clusters in such a scene is hard to predict, the shooting difficulty is high, and a long shooting time is needed. FIG. 7 is a diagram illustrating a random motion scene in one embodiment. The motion trajectories of the scene used in this example are shown in fig. 7 (a). In this scene, the number of computed clusters is typically 4 or 5. Figs. 7 (b)-(c) show the pictures taken at different times in the random motion scene. As before, the middle picture of the first row in each of fig. 7 (b)-(c) is the whole-scene picture shot by the fixed camera that is not bound to any drone.
Test scenario 3 starts all 5 drones. In the above series of pictures, the shooting effect is good most of the time; at some moments, a drastic change in the clustering forces a drone to re-determine its shooting target within a short time, causing severe picture shake, but the picture stabilizes again as the cluster centers stabilize. For example, the last picture in fig. 7 (b) shakes noticeably, but it gradually stabilizes afterwards, and in the last picture of fig. 7 (c) it is already stable. For random motion scenes, the algorithm meets the shooting requirements in most cases and adapts dynamically by calling drones on demand.
It should be noted that, for (b) - (c) in fig. 4, 6, and 7, the pictures at the same position are pictures taken by the same unmanned aerial vehicle at different times.
In addition, the drone collaborative photography algorithm is compared against an area-uniform division method and an angle-uniform division method. FIG. 8 shows the comparison results in one embodiment. The scene used in the test is the fixed clustering scene, with the trajectory of the example scene taken as in fig. 4 (a); the comparison results are shown in fig. 8 (a)-(c). As shown in fig. 8, the area-uniform division method calls 4 drones to shoot from above looking down, and the angle-uniform division method calls 5 drones to shoot from the periphery toward the center of the scene. In contrast, the drone collaborative photography algorithm in the embodiments of the present application has the following advantages: 1. it shoots the clustered targets in a targeted and concentrated manner, keeps reasonable shooting positions as far as possible, and allows the motion details of the objects to be viewed clearly; 2. sparse or empty regions are not the focus of shooting, so fewer drones are allocated there and the pictures have clear priorities; 3. important objects are prevented from being split across shooting pictures; 4. all drones move according to the clustering of the objects. In conclusion, the drone collaborative shooting algorithm is better suited to shooting complex dynamic scenes than the uniform-division shooting methods.
In one embodiment, as shown in fig. 9, there is provided a cooperative photographing apparatus for unmanned aerial vehicles, including: a scene segmentation module 902, a target location determination module 904, a task assignment module 906, and a collaborative shooting module 908, wherein:
a scene segmentation module 902, configured to segment an overall dynamic scene into multiple sub-scenes in real time; an overall dynamic scene, which is a scene including a plurality of target moving objects in a moving state; each sub-scene comprises at least one target moving object, and the range of each sub-scene is in the shooting range of the unmanned aerial vehicle.
A target position determining module 904, configured to determine a target shooting position corresponding to each sub-scene according to the center position of each sub-scene, the range of the sub-scene, and the position relationship between the center position of the sub-scene and the center position of the entire dynamic scene; the target shooting position is the position at which the drone is located when shooting the motion details of the target moving object in the sub-scene and the part of the entire dynamic scene related to the sub-scene.
And a task assignment module 906, configured to perform task assignment in real time based on the current position of each drone and the principle that the total flying distance of the drone swarm is minimum, and determine the drone assigned to each target shooting position.
And a collaborative shooting module 908, configured to control, for each assigned drone, the drone to fly from the current position to the assigned target shooting position, and shoot at the target shooting position toward the center position of the corresponding sub-scene, so as to obtain a group of relevance shooting pictures covering the entire dynamic scene and targeting the target moving object.
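As an illustration of the shooting step (not part of the disclosure; the coordinate frame and function name are assumptions), the following sketch shows how a drone at its target shooting position could derive the camera yaw and pitch needed to point at the center of its sub-scene, assuming an East-North-Up frame.

```python
import math

def camera_yaw_pitch(drone_pos, scene_center):
    """Yaw/pitch (radians) that point a camera at drone_pos toward scene_center.

    Assumes an East-North-Up frame: yaw is measured from the +x axis in the
    horizontal plane, and pitch is negative when looking downward.
    """
    dx = scene_center[0] - drone_pos[0]
    dy = scene_center[1] - drone_pos[1]
    dz = scene_center[2] - drone_pos[2]
    yaw = math.atan2(dy, dx)
    horizontal = math.hypot(dx, dy)
    pitch = math.atan2(dz, horizontal)   # dz < 0 when the target is below the drone
    return yaw, pitch

# Drone hovering 12 m above ground, sub-scene center 9 m away horizontally.
yaw, pitch = camera_yaw_pitch((0.0, 0.0, 12.0), (9.0, 0.0, 0.0))
print(math.degrees(yaw), math.degrees(pitch))  # 0.0, about -53.1 degrees
```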
In one embodiment, the scene segmentation module 902 is further configured to obtain a preset distance threshold and select an initial cluster center from the entire dynamic scene; for each target moving object in the entire dynamic scene, respectively determine the distance between the target moving object and each cluster center; determine, according to the distances, the target moving object farthest from all cluster centers, and when the distance between this target moving object and its nearest cluster center is larger than the preset distance threshold, take it as a newly added cluster center and return to iteratively execute the step of determining the distance between each target moving object and each cluster center and the subsequent steps, stopping the iteration once no new cluster center is generated; perform K-means clustering on the target moving objects in the overall dynamic scene based on the cluster centers obtained when the iteration stops, to obtain the final clusters; and take the cluster center of each cluster as the center position of a sub-scene and the cluster size of each cluster as the range of the sub-scene, thereby segmenting the whole dynamic scene into a plurality of sub-scenes.
In one embodiment, the scene segmentation module 902 is further configured to select, for each target moving object, the minimum distance between the target moving object and the cluster centers as the mark value of the target moving object; select the maximum mark value from the mark values of all target moving objects, the target moving object corresponding to the maximum mark value being the target moving object farthest from all cluster centers; and when the maximum mark value is larger than the preset distance threshold, take the target moving object corresponding to the maximum mark value as the newly added cluster center.
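A minimal sketch of this adaptive seeding followed by K-means is given below, using NumPy and scikit-learn; the threshold value and the choice of the first object as the initial center are assumptions for illustration, not fixed by this disclosure.

```python
import numpy as np
from sklearn.cluster import KMeans

def segment_dynamic_scene(positions, dist_threshold):
    """positions: (N, 2) array of target-object ground positions.

    Adds cluster centers farthest-point style until the farthest object is
    within dist_threshold of some center, then refines with K-means.
    Returns (centers, labels, ranges) where ranges[k] is the radius of cluster k.
    """
    centers = [positions[0]]                       # assumed initial center: first object
    while True:
        dists = np.linalg.norm(positions[:, None, :] - np.array(centers)[None, :, :], axis=2)
        mark_values = dists.min(axis=1)            # distance of each object to its nearest center
        idx = int(mark_values.argmax())            # object farthest from all centers
        if mark_values[idx] <= dist_threshold:
            break                                  # no new center generated, so stop iterating
        centers.append(positions[idx])

    km = KMeans(n_clusters=len(centers), init=np.array(centers), n_init=1).fit(positions)
    labels = km.labels_
    ranges = np.array([
        np.linalg.norm(positions[labels == k] - km.cluster_centers_[k], axis=1).max()
        for k in range(len(centers))
    ])
    return km.cluster_centers_, labels, ranges

# Example: 6 people forming two groups roughly 20 m apart.
pts = np.array([[0, 0], [1, 0], [0, 1], [20, 20], [21, 20], [20, 21]], dtype=float)
centers, labels, ranges = segment_dynamic_scene(pts, dist_threshold=5.0)
print(centers, labels, ranges)
```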
In one embodiment, the target position determination module 904 is further configured to determine, for each sub-scene, the theoretical distance between the drone and the center position of the sub-scene according to the camera parameters of the drone; determine the target flight height of the drone according to the theoretical distance and a preset drone shooting angle; determine the horizontal position of the drone according to the target flight height and the center position of the sub-scene; and determine the target shooting position corresponding to the sub-scene according to the target flight height and the horizontal position.
In one embodiment, the target position determining module 904 is further configured to determine the horizontal distance between the drone and the center position of the sub-scene according to the target flight height and the drone shooting angle; average the center positions of the sub-scenes to obtain the center position of the whole dynamic scene; and determine a ray pointing from the center position of the whole dynamic scene to the center position of the sub-scene, and take the position on this ray at the horizontal distance from the center position of the sub-scene as the horizontal position of the drone.
In one embodiment, the target position determining module 904 is further configured to determine the theoretical distance between the drone and the center position of the sub-scene according to the camera field angle of the drone, the range of the sub-scene, and the proportion of the central area of the drone's shooting picture; determine the theoretical flight height of the drone according to the theoretical distance and the preset drone shooting angle; and determine the target flight height of the drone according to the theoretical flight height, the target flight height being the flight height that lies within the safe flight height range of the drone and is closest to the theoretical flight height.
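The following sketch illustrates one plausible reading of this geometry; the field angle, the 0.6 central-area ratio, the 45-degree shooting angle, and the safe height range are assumed values for illustration only and are not fixed by the disclosure.

```python
import math

def target_shooting_position(sub_center, sub_range, scene_center,
                             fov_deg=84.0, center_ratio=0.6,
                             shoot_angle_deg=45.0, safe_height=(3.0, 60.0)):
    """Compute a shooting position for one sub-scene (all parameter defaults are assumptions).

    The sub-scene (radius sub_range) should occupy the central portion of the
    picture, which gives a theoretical camera-to-center distance; the preset
    shooting angle then fixes the height and horizontal stand-off, and the drone
    is placed on the ray from the overall scene center through the sub-scene center.
    """
    # Distance at which a disc of radius sub_range fills center_ratio of the field of view.
    half_fov = math.radians(fov_deg) / 2.0
    theoretical_dist = sub_range / (center_ratio * math.tan(half_fov))

    angle = math.radians(shoot_angle_deg)          # angle between view direction and ground
    theoretical_height = theoretical_dist * math.sin(angle)
    height = min(max(theoretical_height, safe_height[0]), safe_height[1])  # clamp to safe range
    horizontal_dist = height / math.tan(angle)

    # Unit direction from the overall scene center toward the sub-scene center.
    dx, dy = sub_center[0] - scene_center[0], sub_center[1] - scene_center[1]
    norm = math.hypot(dx, dy) or 1.0               # degenerate case: sub-scene at scene center
    ux, uy = dx / norm, dy / norm
    return (sub_center[0] + ux * horizontal_dist,
            sub_center[1] + uy * horizontal_dist,
            height)

print(target_shooting_position(sub_center=(30.0, 10.0), sub_range=6.0, scene_center=(15.0, 10.0)))
```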
In one embodiment, the task assigning module 906 is further configured to generate a flight cost matrix according to a distance that each drone flies from the current location to each target shooting location; defining a binary function aiming at the combination pair of the unmanned aerial vehicle and the target shooting position; when the unmanned aerial vehicle in the combined pair flies to the target shooting position, the value of the binary function is 1, and when the unmanned aerial vehicle in the combined pair does not fly to the target shooting position, the value of the binary function is 0; constructing a mathematical model of the minimized flight cost according to the binary function and the flight cost matrix, and determining an assignment scheme of the minimized cost according to the mathematical model of the minimized flight cost; in the assignment scheme, the target shooting position and the assigned unmanned aerial vehicle satisfy a one-to-one relationship.
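One way such a minimum-cost assignment could be computed is with the Hungarian algorithm, sketched below via scipy.optimize.linear_sum_assignment; this is an illustrative sketch, and the 0/1 decision variables of the model described above are recovered implicitly from the returned pairing.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def assign_drones(drone_positions, target_positions):
    """Assign drones to target shooting positions minimizing total flight distance.

    drone_positions: (M, 3) current drone positions; target_positions: (K, 3), K <= M.
    Returns a dict {target_index: drone_index}; unassigned drones stay idle.
    """
    drones = np.asarray(drone_positions, dtype=float)
    targets = np.asarray(target_positions, dtype=float)
    # Flight cost matrix: straight-line distance from every drone to every target position.
    cost = np.linalg.norm(drones[:, None, :] - targets[None, :, :], axis=2)
    drone_idx, target_idx = linear_sum_assignment(cost)   # minimizes the summed cost
    return {int(t): int(d) for d, t in zip(drone_idx, target_idx)}

# 5 available drones, 4 target shooting positions.
drones = [[0, 0, 5], [10, 0, 5], [0, 10, 5], [10, 10, 5], [5, 5, 5]]
targets = [[1, 1, 8], [9, 1, 8], [1, 9, 8], [9, 9, 8]]
print(assign_drones(drones, targets))
```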
For specific limitations of the drone collaborative photography device, reference may be made to the above limitations on the drone collaborative photography method, which are not repeated here. All or part of the modules in the above drone collaborative photography device may be implemented by software, hardware, or a combination thereof. The modules may be embedded in or independent from a processor of the computer device in hardware form, or stored in a memory of the computer device in software form, so that the processor can call and execute the operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a server, and its internal structure may be as shown in fig. 10. The computer device includes a processor, a memory, and a network interface connected by a system bus, wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program, and the internal memory provides an environment for running the operating system and the computer program in the non-volatile storage medium. The network interface of the computer device is used to communicate with an external terminal through a network connection. The computer program, when executed by the processor, implements an unmanned aerial vehicle collaborative photographing method. Those skilled in the art will appreciate that the structure shown in fig. 10 is merely a block diagram of part of the structure related to the solution of the present application and does not limit the computer devices to which the solution of the present application is applied; a particular computer device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided, which includes a memory and a processor, wherein the memory stores a computer program, and the processor implements the steps of the above embodiments when executing the computer program.
In an embodiment, a computer-readable storage medium is provided, on which a computer program is stored which, when being executed by a processor, carries out the steps of the above-mentioned method embodiments.
It will be understood by those skilled in the art that all or part of the processes of the methods in the above embodiments can be implemented by a computer program instructing relevant hardware; the computer program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the above method embodiments. Any reference to memory, storage, database, or other medium used in the embodiments provided herein can include at least one of non-volatile and volatile memory. Non-volatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, optical storage, or the like. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM can take many forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM), among others.
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction in a combination of these technical features, the combination should be considered to be within the scope of this specification.
The above embodiments only express several implementations of the present application, and their descriptions are specific and detailed, but they should not therefore be construed as limiting the scope of the invention patent. It should be noted that, for a person skilled in the art, several variations and improvements can be made without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. An unmanned aerial vehicle collaborative photography method is characterized by comprising the following steps:
dividing the whole dynamic scene into a plurality of sub-scenes in real time; the overall dynamic scene is a scene comprising a plurality of target moving objects in a moving state; each sub scene comprises at least one target moving object, and the range of each sub scene is in the shooting range of the unmanned aerial vehicle;
determining a target shooting position corresponding to each sub-scene according to the central position of each sub-scene, the range of the sub-scene and the position relation between the central position of the sub-scene and the central position of the whole dynamic scene; the target shooting position is the position of the unmanned aerial vehicle when shooting the motion details of the target moving object in the sub-scene and the part of the whole dynamic scene related to the sub-scene;
performing real-time task assignment based on the current position of each unmanned aerial vehicle and the principle that the total flying distance of the unmanned aerial vehicle cluster is minimum, and determining the unmanned aerial vehicle assigned to each target shooting position;
and aiming at each assigned unmanned aerial vehicle, controlling the unmanned aerial vehicle to fly to the assigned target shooting position from the current position, and shooting towards the central position of the corresponding sub-scene at the target shooting position to obtain a group of relevance shooting pictures covering the whole dynamic scene and aiming at the target moving object.
2. The method of claim 1, wherein the real-time partitioning of the overall dynamic scene into a plurality of sub-scenes comprises:
acquiring a preset distance threshold, and selecting an initial cluster center from the whole dynamic scene;
respectively determining the distance between each target moving object and the center of each cluster aiming at each target moving object in the whole dynamic scene;
determining a target moving object farthest from all cluster centers according to the distances, taking the determined target moving object as a newly added cluster center when the distance between the determined target moving object and the nearest cluster center is larger than the preset distance threshold, returning to iteratively execute the step of respectively determining the distance between each target moving object and each cluster center and the subsequent steps, and stopping the iteration once no new cluster center is generated;
performing K-means clustering on a target moving object in the overall dynamic scene based on a cluster center obtained when iteration is stopped to obtain a final clustering cluster;
and taking the cluster center of each cluster as the center position of the sub-scene, and taking the cluster size of each cluster as the range of the sub-scene, so as to segment the whole dynamic scene to obtain a plurality of sub-scenes.
3. The method according to claim 2, wherein the determining a target moving object farthest from all cluster centers according to the distances, and when the determined distance between the target moving object and the nearest cluster center is greater than the preset distance threshold, determining the target moving object as a newly added cluster center comprises:
for each target moving object, selecting the minimum distance between the target moving object and the center of each cluster as the mark value of the target moving object;
selecting the maximum mark value from the mark values of all target moving objects; the target moving object corresponding to the maximum mark value is the target moving object farthest from the centers of all clusters;
and when the maximum mark value is larger than the preset distance threshold, taking the target moving object corresponding to the maximum mark value as a newly added cluster center.
4. The method according to claim 1, wherein the determining the corresponding target shooting position of each sub-scene according to the central position of each sub-scene, the range of the sub-scene, and the position relationship between the central position of the sub-scene and the central position of the overall dynamic scene comprises:
for each sub-scene, determining a theoretical distance between the unmanned aerial vehicle and the center position of the sub-scene according to the camera parameters of the unmanned aerial vehicle;
determining the target flight height of the unmanned aerial vehicle according to the theoretical distance and a preset shooting angle of the unmanned aerial vehicle;
determining the horizontal position of the unmanned aerial vehicle according to the target flight height and the central position of the sub-scene;
and determining a target shooting position corresponding to the sub-scene according to the target flight height and the horizontal position.
5. The method of claim 4, wherein determining the horizontal position of the drone according to the target flight height and the center position of the sub-scene comprises:
determining the horizontal distance between the unmanned aerial vehicle and the center position of the sub-scene according to the target flight height and the unmanned aerial vehicle shooting angle;
averaging the central positions of the sub-scenes to obtain the central position of the whole dynamic scene;
and determining a ray pointing from the central position of the whole dynamic scene to the central position of the sub-scene, and taking the position on the ray whose distance from the central position of the sub-scene is equal to the horizontal distance as the horizontal position of the unmanned aerial vehicle.
6. The method of claim 4, wherein determining the theoretical distance between the drone and the center position of the sub-scene according to the camera parameters of the drone comprises:
determining a theoretical distance between the unmanned aerial vehicle and the central position of the sub-scene according to the camera field angle of the unmanned aerial vehicle, the range of the sub-scene and the ratio of the central area of the shooting picture of the unmanned aerial vehicle;
the determining the target flight height of the unmanned aerial vehicle according to the theoretical distance and a preset shooting angle of the unmanned aerial vehicle comprises:
determining the theoretical flying height of the unmanned aerial vehicle according to the theoretical distance and a preset shooting angle of the unmanned aerial vehicle;
determining the target flight height of the unmanned aerial vehicle according to the theoretical flight height; the target flight altitude is a flight altitude within a safe flight altitude range of the unmanned aerial vehicle and closest to the theoretical flight altitude.
7. The method according to any one of claims 1 to 6, wherein the performing real-time task assignment based on the current position of each unmanned aerial vehicle and the principle that the total flying distance of the unmanned aerial vehicle group is minimum, and determining the unmanned aerial vehicle assigned to each target shooting position, comprises:
generating a flight cost matrix according to the distance from the current position of each unmanned aerial vehicle to each target shooting position;
defining a binary function aiming at the combination pair of the unmanned aerial vehicle and the target shooting position; wherein the value of the binary function is 1 when the drone in the combined pair flies to the target shooting position, and the value of the binary function is 0 when the drone in the combined pair does not fly to the target shooting position;
constructing a mathematical model of the minimized flight cost according to the binary function and the flight cost matrix, and determining an assignment scheme of the minimized cost according to the mathematical model of the minimized flight cost; in the assignment scheme, the target shooting position and the assigned unmanned aerial vehicle meet a one-to-one relationship.
8. An unmanned aerial vehicle collaborative photography device, the device comprising:
the scene segmentation module is used for segmenting the whole dynamic scene into a plurality of sub-scenes in real time; the overall dynamic scene is a scene comprising a plurality of target moving objects in a moving state; each sub scene comprises at least one target moving object, and the range of each sub scene is in the shooting range of the unmanned aerial vehicle;
the target position determining module is used for determining a target shooting position corresponding to each sub-scene according to the central position of each sub-scene, the range of the sub-scene and the position relation between the central position of the sub-scene and the central position of the whole dynamic scene; the target shooting position is the position of the unmanned aerial vehicle when shooting the motion details of the target moving object in the sub-scene and the part of the whole dynamic scene related to the sub-scene;
the task assignment module is used for carrying out real-time task assignment based on the current position of each unmanned aerial vehicle and the principle that the total flying distance of the unmanned aerial vehicle cluster is minimum, and determining the unmanned aerial vehicle assigned to each target shooting position;
and the collaborative shooting module is used for controlling each assigned unmanned aerial vehicle to fly to the assigned target shooting position from the current position and shoot at the target shooting position towards the central position of the corresponding sub-scene to obtain a group of relevance shooting pictures covering the whole dynamic scene and aiming at the target moving object.
9. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method of any of claims 1 to 7.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 7.
CN202011100252.3A 2020-10-15 2020-10-15 Unmanned aerial vehicle collaborative shooting method and device, computer equipment and storage medium Active CN112019757B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011100252.3A CN112019757B (en) 2020-10-15 2020-10-15 Unmanned aerial vehicle collaborative shooting method and device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN112019757A true CN112019757A (en) 2020-12-01
CN112019757B CN112019757B (en) 2021-03-02

Family

ID=73527977

Country Status (1)

Country Link
CN (1) CN112019757B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113810625A (en) * 2021-10-15 2021-12-17 江苏泰扬金属制品有限公司 Cloud service system for resource allocation
CN114020029A (en) * 2021-11-09 2022-02-08 深圳大漠大智控技术有限公司 Automatic generation method and device of aerial route for cluster and related components
CN114115361A (en) * 2021-11-08 2022-03-01 苏州热工研究院有限公司 Unmanned aerial vehicle inspection system based on photovoltaic power station and inspection method thereof
CN115378488A (en) * 2022-07-05 2022-11-22 江苏大势航空科技有限公司 Dynamic relay method and control system for data transmission of unmanned aerial vehicle group oblique photography

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101814063A (en) * 2010-05-24 2010-08-25 天津大学 Global K-means clustering algorithm based on distance weighting
CN103744290A (en) * 2013-12-30 2014-04-23 合肥工业大学 Hierarchical target allocation method for multiple unmanned aerial vehicle formations
CN104168455A (en) * 2014-08-08 2014-11-26 北京航天控制仪器研究所 Air-based large-scene photographing system and method
KR20150014646A (en) * 2013-07-30 2015-02-09 국방과학연구소 Method for segmenting aerial images based region and Computer readable storage medium for storing program code executing the same
CN106227237A (en) * 2016-09-29 2016-12-14 广州极飞科技有限公司 The distribution method of the aerial mission of unmanned plane and device
CN111722639A (en) * 2019-03-18 2020-09-29 北京京东尚科信息技术有限公司 Takeoff control method, device and system of unmanned aerial vehicle cluster and readable medium

Also Published As

Publication number Publication date
CN112019757B (en) 2021-03-02

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant